Summary: In this post I analyze why I believe the EA community's culture is important, the things I like about it, and the aspects I think we should take care of.
The Effective Altruism culture
I feel very lucky to be part of a group of people whose objective is to do the most good. I really want us to succeed, because there is still so much good to be done, and so many ways humankind can achieve extraordinary feats. Being part of this endeavor gives me purpose and something I am willing to fight for.
But Effective Altruism is not only the idea of "doing the most good"; it is also a movement, and a very young one indeed: according to Wikipedia, Giving What We Can was founded by Toby Ord and Will MacAskill in 2009. In 2011 they created 80000hours.org and started using the name Effective Altruism, and in 2013, less than 10 years ago, the first EA Global took place. Since then, we have done many things together, and I am sure we will achieve many more. For that, I believe the most important aspect of our movement is not how rich we become, or how many people we place at key institutions. Rather, it is the culture we establish, and for this reason I think it is everyone's job in the community to make sure that we remain curious, truth-seeking, committed, and welcoming. Keeping this culture is essential to being able to change our minds about how to do the most good, and to convincing society as a whole about the things we care about.
In this post I will discuss the things I like about us, and also the things we have to pay special attention to.
The things I like about our culture
Some of the things I like about our community we owe to the rationalist community from the Bay Area: the focus on truth-seeking and on having good epistemics about how to do the most good are very powerful tools. Other things I like that I believe are also inherited from the Bay Area are the risk-taking and entrepreneurial spirit.
Beyond these, our willingness to consider unconventional but well-grounded stances, the radical empathy to care about those who have no voice (the poorest people, animals, or future generations), and the principle of cause impartiality come from the utilitarian roots of Toby Ord and Will MacAskill.
Finally, Effective Altruism has more or less successfully avoided becoming a political ideology, which I believe would be risky.
Aspects where we should be careful
However, not all aspects of our culture are great. Rather than generalizing, I will flag those aspects that I believe could become problems, in the hope that the community will pay attention to these issues and keep them in check.
The first one is money. While it is a blessing that so many rich people agree that doing good is great and are willing to donate their wealth, a recent highly upvoted post warned about the perception and epistemic problems that may come with it. The bottom line is that spending money generously may be perceived as self-serving, and may degrade our moral judgment; you can read more in the post itself.
A perhaps more important problem is a degree of elitism in the community. While it makes sense that wanting to do the most good means talking first to students at top universities, we have to be very careful not to be dismissive of people from different backgrounds who are nevertheless excited by the same goals. This may be particularly damaging in subareas such as AI Safety, where there is sometimes a meme that all we need is really, really smart people. That is not even true: what we need is good researchers, engineers, and many other profiles. The same applies to students at elite universities: let us not be dismissive of people just because they did not go to prestigious colleges.
Somewhat related is social status in the community. While it is clear that not every opinion should carry the same weight, we have to guard against too much deference about what is best to do. I sometimes fear we give the same answer to every person asking for advice, without appreciating that each person's situation is different. I am sure this is not the case with 80,000 Hours, which puts real effort into personalized advice, but I am worried about the "just go and do AI Safety" quick advice that I have sometimes encountered. Social status might also be a problem for people aiming to found charities in poverty alleviation: since so many people defer to GiveWell, and malaria interventions are so hard to beat, heroic people aiming to found new charities in other high-impact areas might be discouraged from doing so.
There is also a risk of strong in-group versus out-group dynamics. To make people happy to become EAs, we need to adapt our message to them. It is false that we can just throw a bunch of compressed rational arguments at smart people and expect them to instantly recognize that longtermism or existential risks make sense. Even doing things that society sees as awkward, such as caring about animals, is costly, so it is important to be patient and let newcomers know that many of us were in their position at some point and struggled too. I vividly remember how, at the first EA dinner I attended, I said I cared about climate change but not so much about animal welfare, even though it made total sense to me that animal suffering is bad. In the following months, however, I went vegetarian; the reason is that adopting uncommon beliefs takes time, and perhaps a group of like-minded people to support you.
And finally: are we too dismissive of standard ways of solving problems? A couple of examples. It recently surprised me that GiveWell started recommending water purification interventions as highly effective. Since water quality interventions are widely known and were not previously recommended, I think most EAs would have assumed they were not really impactful and simply flagged them as ineffective. Similarly, a few people I have talked to are somewhat dismissive of academia as a place to solve AI Alignment because the incentives are bad; yet academia has one of the best feedback mechanisms for doing research. For this reason, I believe that until we have figured out a better way to measure quality and progress in AI Safety research, this attitude is premature.
Please note that I do not think these are problems the community already has in general, but rather issues that could become important ones.
In summary, I believe the culture we foster in the community will be very important to preserve our potential to do good, and we have to make sure we remain a friendly, open, and truth-seeking community.