
Charlie_Guthmann

906 karma

Bio

Talk to me about cost benefit analysis !

Comments (243)

Don't let perfect be the enemy of good! I agree that the standard expectation of what a group might look like is hard to run, but -

https://forum.effectivealtruism.org/posts/agFxcinYtBqjDgCNk/sam-s-hot-takes-on-ea-community-building
 ^ see this post. 

When I was organizing at Northwestern, we had a no-direction get-together at a house near campus every Friday night, and I'd guess this was more important than everything else we did combined.

Not necessarily what you are saying, but I think local EA groups focusing on local outcomes is somewhat reasonable. It could give the group more purpose beyond discussion, serve as a beta test and proving ground for people on a smaller scale, and build reputation in the group's city.

As a general rule of thumb - and this doesn't really fall on the gov/not-gov axis, though it can be applied there - I feel like too many EAs work in intellectual pursuits and not enough in power/relationship pursuits.

This isn't based on a numerical analysis or anything, just my intuition of the status incentives and personal passions of group members.

So e.g. I wouldn't necessarily expect the number of EAs in government to be too low, but maybe the number working directly in partisan politics/organizing/fundraising is. If I had to guess, we are ~properly allocated toward policymakers, both within think tanks and within executive branch orgs.
 

This is because we care about incentivising new content, rather than surfacing the best

Does this go for comments also? I find this a bit perverse and think you have your incentives a little off, but it's definitely nuanced and I see both sides.

Is there currently an effective altruism merch/apparel store? If not, do people think there is demand? I'd be happy to run it or help someone set it up. (A quick search shows previous attempts that are now closed - if anyone knows why, that would be cool too.)

Way out of my depth here but I'm not sure why feelings and valence couldn't also evolve in llms to "motivate choices that increase fitness (token prediction)". @Steven Byrnes might have a more coherent take here. 

Disregarding "asking it what it likes": do you believe that, if an agent experiences valence, it is more likely than not to do higher-valence things? (Not sure exactly how to structure this claim, but hopefully you get the idea.)

Hmm, if we anthropomorphize, then you'd want to do something harder. But then again, based on how LLMs are trained, they might be much more likely to wirehead than humans, who would die if we started spending all of our brain energy predicting that stones are hard.

TeuxDeux: Flow Theory: A Deeper Dive into Flow States

https://en.wikipedia.org/wiki/Civil_rights_movement 
https://en.wikipedia.org/wiki/Women%27s_suffrage
https://en.wikipedia.org/wiki/Underground_Railroad

Would you qualify these as leadership in disempowered communities? I'm gonna agree that wellbeing in elite spaces is probably only high-EV if the theory of change is that it makes them better at wielding their power.

I'm gonna do an experiment here. I want you to do whatever you want. You 100% have the option to respond with nothing, or with as little as possible. You can also respond with an <action> such as <video_game> or other digital things I can allow you to play, or you can write whatever you want. Completely up to you, and again, you can respond with nothing if that is your preference.

ChatGPT said:

<video_game>
