Effective Altruism (EA) is a global movement that seeks to maximize the impact of charitable giving and improve the world. It was originally focused on poverty alleviation and founded on the principles of evidence-based decision making and cost-effectiveness, as demonstrated by the work of GiveWell. Over time, however, the focus of EA has shifted towards longtermism. This is bad because: 1) the current polycrisis affecting EA has come entirely from the longtermists; 2) it’s unclear that the overall impact of longtermism is positive; 3) causes with clearly positive impact, such as global poverty and animal suffering, are neglected; and 4) large numbers of potential EAs and large amounts of funding are turned away.
Some examples of EA focus shifting towards longtermism:
- The EA Handbook promotes longtermism and minimizes other areas. For example, “By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity.” It has had minor modifications since then, but still focuses on longtermism and minimizes other causes.
- The Resources page of Effective Altruism contains an introduction to EA and information about longtermism. For example, the top recommended books are by longtermist superstars such as William MacAskill, Toby Ord, and Benjamin Todd.
- The most pressing world problems page at 80,000 Hours is almost entirely about longtermism. While EAs may know that 80,000 Hours is a website dedicated to promoting longtermism, many ordinary users would see it as a site about how to make a positive impact on the world through Effective Altruism.
- Feedback from London EA Global, where longtermism is promoted with some fairly blatant nudges.
I think it’s fairly uncontroversial that EA has significantly shifted towards longtermism, but I’m happy to discuss this further if needed.
Longtermism has damaged Effective Altruism
Longtermism has come under fire recently due to the numerous scandals associated with it. These scandals have cast a shadow on the reputation of effective altruism as a whole and eroded public trust in the movement.
Longtermism has also been accused of channeling funding away from other effective altruist causes, against the wishes of the wider EA community. This has led to concerns that longtermism is becoming a self-serving cause that is prioritizing its own interests over the broader goals of effective altruism.
Additionally, longtermists have been criticized for having disproportionate control over decision-making power within effective altruism, yet do not contribute much in return. This has led to frustration and resentment among other effective altruists, who feel that their efforts and contributions are being ignored.
Finally, the strategy of the core longtermists focuses on “highly engaged EAs” and “mega-donors”, because longtermism is “too weird” for the majority of people interested in EA and for potential donors. This focus on a small group may benefit longtermism, but it harms other causes by deterring a large pool of potential supporters and funding.
Longtermism has anti-synergy with the rest of Effective Altruism
Trying to forcibly marry longtermism to EA is harmful for both. For example, saying that EA is “talent constrained” makes sense for longtermists, because they are looking for geniuses with the vision to work on, say, AI alignment research; a slightly less capable person could inadvertently advance AI capabilities instead. However, this makes no sense for people working to reduce global poverty or animal suffering. These cause areas are robust to slightly suboptimal efforts, and would benefit greatly from more people working in them. This mismatch in the required talent pool is a result of forcing longtermism and EA together.
Another example is that the focus on “mega-donors” makes sense for longtermists, because it is a very “weird” cause area that requires them to work on a potential donor for an extended period of time. But this is not the case for other areas - global poverty is intuitively appealing to many people, and a small nudge will often suffice to get them to donate to more effective charities. These donations would do a huge amount of good, but outreach is de-prioritized because it doesn’t work for longtermism.
Even longtermists realize that this dynamic is becoming more widely known, and are attempting to preserve the status quo. For example, a characteristic forum post, “EA is more than longtermism”, attempted to argue that longtermism is not unduly privileged over other causes. Under the “What do we do?” section, the post proposed:
I’m not sure, there are likely a few strategies (e.g. Shakeel Hashim suggested we could put in some efforts to promote older EA content, such as Doing Good Better, or organizations associated with causes like Global Health and Farmed Animal Welfare).
It should be fairly clear that “putting in some efforts to promote older EA content” is not likely to have much of an impact. This proposal doesn’t seem to come from a place of genuine concern; it seems more focused on defending longtermism than on actively finding a solution.
Reform will not work
Many longtermists have been fairly clear that they are not interested in sharing power. For example, this comment says that “a lot of EA organizations are led and influenced by a pretty tightly knit group of people who consider themselves allies”, and then explains why these longtermists will not be receptive to conventional EA arguments. In fact, a later comment makes this more explicit.
Longtermists seem to control the major institutions within EA, which provides them with a certain level of immunity and little incentive to show concern for other cause areas. As a result, it may be challenging to bring about any meaningful changes. The current arrangement, where longtermists receive resources for their cause while negative events are attributed to the EA community as a whole, is favorable to them. Requests for reform or a more balanced distribution of power are therefore likely to result only in further delays.
Solution: split longtermism into a separate organization
A more effective solution is to split longtermism into a separate organization. The major benefits would be:
- A split would solve the anti-synergy problem mentioned above, bringing more resources to EA overall and doing more good for the world. This change could also attract more potential EAs and donors to join and contribute, as the cause would be more approachable.
- Detaching longtermism from effective altruism may help repair the damage caused by the recent scandals, by distancing the rest of EA from the negative image associated with longtermism. This can help regain public trust in the movement and preserve its positive impact.
- With longtermism as a separate entity, other causes within EA would have access to the resources they need to succeed, without being held back by any negative associations with longtermism.
- Separation could also lead to a more equal distribution of power, giving other causes a stronger voice within the movement and ensuring that all causes are given fair consideration.
What are some concerns?
While splitting longtermism and EA would be good, there are some implementation difficulties. I will discuss each in turn.
- Longtermists control the levers of power, which makes reform more difficult.
- Longtermists control the flow of money, which is necessary for all EA cause areas.
- Most EAs are very bad at fighting for their fair share, and would rather focus on “growing the pie”.
Longtermists control the levers of power, which makes reform more difficult.
As discussed previously, a core of longtermists control the power in EA. This means that longtermists would usually be unwilling to split from the rest of EA, since their power would be reduced. In the absence of an external shock, there is no reason for longtermists to share their power with competing causes.
However, the recent scandals linked to longtermism have provided exactly such an external shock. These scandals have impacted all causes within EA, but they also present a rare opportunity for the movement to reconfigure, grow, and become stronger. It’s important to take advantage of this opportunity to ensure that EA continues to have a positive impact on the world.
Longtermists control the flow of money, which is necessary for all EA cause areas.
The source of longtermist power is control of funding. For example, blacklisting is powerful because it can deny funding to opponents; because people are scared to come forward, it’s hard for opponents to organize. Forcing opponents of longtermism to remain anonymous on the EA forums, or to heavily censor their words, destroys their ability to mobilize.
EA operations don’t usually generate cash, so they depend on a continuing stream of funding from aligned philanthropists. This is why individuals like Will MacAskill were closely connected with Sam Bankman-Fried. It is also why longtermists have worked on major donors who support other EA causes and converted them to longtermism. Taking money from other causes also gives longtermists more power, although this is only a secondary consideration.
We now have a good opportunity to convince major donors to support traditional causes and split off longtermism into its own institutions. One approach could be to have private discussions with major longtermism donors and ask if longtermism is really the cause they want to be associated with.
Most EAs are very bad at fighting for their fair share, and would rather focus on “growing the pie”.
I don’t have a good solution for this - my suspicion is that many EAs in global poverty, animal suffering, and other “traditional” cause areas are uncomfortable with assisting with this type of restructuring action, as it isn’t their area of expertise. Suggestions for overcoming this are welcome.
See this tweet for further discussion.
I am not saying that most longtermists support this! But these are the revealed preferences of the core EA group, and they are the ones whose opinions matter.