Acknowledgements
A big thank you to Bruce Tsai, Shakeel Hashim, Ines, and Nathan Young for their insightful notes and additions (though they do not necessarily agree with/endorse anything in this post).
Important Note
This post is a quick exploration of the question, 'is EA just longtermism?' (I come to the conclusion that EA is not). This post is not a comprehensive overview of EA priorities nor does it dive into the question from every angle - it is mostly just my brief thoughts. As such, there are quite a few things missing from this post (many of the comments do a great job of filling in some gaps). In the future, maybe I'll have the chance to write a better post on this topic (or perhaps someone else will; please let me know if you do so I can link to it here).
Also, I've changed the title from 'Is EA just longtermism now?' so my main point is clear right off the bat.
Preface
In this post, I address the question: is Effective Altruism (EA) just longtermism? I then ask, among other questions: what factors contribute to this perception, and what are the implications?
1. Introduction
Recently, I’ve heard a few criticisms of Effective Altruism (EA) that hinge on the following: “EA only cares about longtermism.” I’d like to explore this perspective a bit more and the questions that naturally follow, namely: How true is it? Where does it come from? Is it bad? Should it be true?
2. Is EA just longtermism?
In 2021, around 60% of funds deployed by the Effective Altruism movement came from Open Philanthropy (1). Thus, we can use their grant data to try and explore EA funding priorities. The following graph (from Effective Altruism Data) shows Open Philanthropy’s total spending, by cause area, since 2012:
Overall, Global Health & Development accounts for the majority of funds deployed. How has that changed in recent years, as AI Safety concerns grow? We can look at this uglier graph (bear with me) showing Open Philanthropy grants deployed from January, 2021 to present (data from the Open Philanthropy Grants Database):
We see that Global Health & Development is still the largest funding recipient; however, Risks from Advanced AI is now a closer second. We can also note that the third and fourth most funded areas, Criminal Justice Reform and Farm Animal Welfare, are not primarily driven by a goal to influence the long-term future.
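(For anyone who wants to reproduce this kind of breakdown, here is a minimal sketch, assuming a CSV export from the Open Philanthropy Grants Database; the filename and the "Focus Area", "Amount", and "Date" column names are assumptions and may not match the actual export.)

```python
# Minimal sketch: total Open Philanthropy grants by focus area since Jan 2021.
# Assumes a file "openphil_grants.csv" with "Focus Area", "Amount", and "Date"
# columns; these names are assumptions and may differ from the real export.
import pandas as pd

grants = pd.read_csv("openphil_grants.csv", parse_dates=["Date"])

# If the Amount column is formatted like "$1,000,000", strip symbols first.
if grants["Amount"].dtype == object:
    grants["Amount"] = (
        grants["Amount"].str.replace(r"[$,]", "", regex=True).astype(float)
    )

# Keep grants from January 2021 onward, mirroring the second graph.
recent = grants[grants["Date"] >= "2021-01-01"]

# Total amount granted per focus area, largest first.
totals = recent.groupby("Focus Area")["Amount"].sum().sort_values(ascending=False)
print(totals.head(10))
```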
With this data, I feel pretty confident that EA is not just longtermism. However, it is also true (and well-known) that funding for longtermist issues, particularly AI Safety, has increased. Additionally, the above data doesn't provide a full picture of the EA funding landscape nor community priorities. This raises a few more questions:
2.1 Funding has indeed increased, but what exactly is contributing to the view that EA essentially is longtermism/AI Safety?
(Note: this list is just an exploration and not meant to claim whether the below things are good or bad, or true)
- William MacAskill's upcoming book, What We Owe the Future, has received considerable promotion and generated considerable discussion. Following Toby Ord's The Precipice, published in March 2020, I imagine this has contributed to the outside perception that EA is becoming synonymous with longtermism.
- The longtermist approach to philanthropy is different from mainstream, traditional philanthropy. When trying to describe a concept like Effective Altruism, sometimes the thing that most differentiates it is what stands out, consequently becoming its defining feature.
- Of the longtermist causes, AI Safety receives the most funding, and furthermore, has a unique ‘weirdness’ factor that generates interest and discussion. For example, some of the popular thought experiments used to explain Alignment concerns can feel unrealistic, or something out of a sci-fi movie. I think this can serve to both: 1. draw in onlookers whose intuition is to scoff, 2. give AI-related discussions the advantage of being particularly interesting/compelling, leading to more attention.
- AI Alignment is an ill-defined problem with no clear solution and tons of uncertainties: What counts as AGI? What does it mean for an AI system to be fair or aligned? What are the best approaches to Alignment research? With so many fundamental questions unanswered, it’s easy to generate ample AI Safety discussion in highly visible places (e.g. forums, social media, etc.) to the point that it can appear to dominate EA discourse.
- AI Alignment is a growing concern within the EA movement, so it's been highlighted recently by EA-aligned orgs (for example, AI Safety technical research is listed as the top recommended career path by 80,000 Hours).
- Within the AI Safety space, there is crossover between EA and other groups, namely tech and rationalism. Those who learn about EA through these groups may only interact with EA spaces focused on AI Safety, or ones that cross over into those other groups; I imagine this shapes their understanding of EA as a whole.
- For some, the recent announcement of the FTX Future Fund seemed to solidify the idea that EA is now essentially billionaires distributing money to protect the long-term future.
- [Edit: There are many more factors to consider that others have outlined in the comments below :)]
2.2 Is this view a bad thing? If so, what can we do?
Is it actually a problem that some people feel EA is "just longtermism"? I would say yes, insofar as it is better to have an accurate picture of an idea or movement than an inaccurate one. Beyond that, such a perception may turn away people who could be convinced to work on cause areas unrelated to longtermism, like farmed animal welfare, but who would disagree with longtermist arguments. If this group is large enough, then it seems important to try to promote a clearer outside understanding of EA, allowing the movement to grow in various directions and find its different target audiences, rather than having its pieces eclipsed by one cause area or worldview.
What can we do?
I'm not sure; there are likely a few strategies (e.g. Shakeel Hashim suggested we could put in some effort to promote older EA content, such as Doing Good Better, or organizations associated with causes like Global Health and Farmed Animal Welfare).
2.3 So EA isn’t “just longtermism,” but maybe it’s “a lot of longtermism”? And maybe it’s moving towards becoming “just longtermism”?
I have no clue if this is true, but if so, then the relevant questions are:
2.4 What if EA was just longtermism? Would that be bad? Should EA just be longtermism?
I’m not sure. I think it’s true that EA being “just longtermism” leads to worse optics (though this is just a notable downside, not an argument against shifting towards longtermism). We see particularly charged critiques like,
Longtermism is an excuse to ignore the global poor and minority groups suffering today. It allows the privileged to justify mistreating others in the name of countless future lives, when in actuality, they’re obsessed with pursuing profitable technologies that result in their version of ‘utopia’–AGI, colonizing mars, emulated minds–things only other privileged people would be able to access, anyway.
I personally disagree with this. As a counter-argument:
Longtermism, as a worldview, does not want present-day people to suffer; instead, it wants to work towards a future with as much flourishing as possible, for everyone. This idea is not as unusual as it is sometimes framed - we hear something very similar in climate change advocacy (i.e. "We need climate interventions to protect the future of our planet. Future generations could suffer under immensely poor environmental conditions due to our choices"). An individual, or an elite few, could twist longtermist arguments to justify poor behavior, but this is true of all philanthropy.
Finally, there are many conclusions one can draw from longtermist arguments–but the ones worth pursuing will be well thought-out. Critiques can often highlight niche tech rather than the prominent concerns held by the longtermist community at large: risks from advanced Artificial Intelligence, pandemic preparedness, and global catastrophic risks. Notably, working on these issues can often improve the lives of people living today (e.g. working towards safe advanced AI includes addressing already present issues, like racial or gender bias in today’s systems).
But back to the optics–so longtermism can be less intuitively digestible, it can be framed in a highly negative way–does that matter? If there is a strong case for longtermism, should we not shift our priorities towards it? In which case, the real question is, does the case for longtermism hold?
This leads me to the conclusion: if EA were to become "just longtermism," that’s fine, conditional on the arguments being incredibly strong. And if there are strong arguments against longtermism, the EA community (in my experience) is very keen to hear them.
Conclusion
Overall, I hope this post generates some useful discussion around EA and longtermism. I posed quite a few questions, and offered some of my personal thoughts; however, I hold all these ideas loosely and would be very happy to hear other perspectives.
Citations
Comments
For me, it's been stuff like:
- People (generally those who prioritize AI) describing global poverty as “rounding error”.
- From late 2017 to early 2021, effectivealtruism.org (the de facto landing page for EA) had at least 3 articles on longtermist/AI causes (all listed above the single animal welfare article), but none on global poverty.
- The EA Grants program granted ~16x more money to longtermist projects than to global poverty and animal welfare projects combined. [Edit: this statistic only refers to the first round of EA Grants, the only round for which grant data has been published.]
- The EA Handbook 2.0 heavily emphasized AI relative to global poverty and animal welfare. As one EA commented: "By page count, AI is 45.7% of the entire causes sections. And as Catherine Low pointed out, in both the animal and the global poverty articles (which I didn't count toward the page count), more than half the article was dedicated to why we might not choose this cause area, with much of that space also focused on far-future of humanity. I'd find it hard for anyone to read this and not take away that..."
Some things from EA Global London 2022 that stood out for me (I think someone else might have mentioned one of them):
These things might feel small, but considering this is one of the main EA conferences, having the actual conference organisers associate so strongly with the promotion of a longtermist book (albeit, yes, one by one of the main founders of EA) made me think "Wow, CEA is really trying to push longtermism to attendees". This seems quite reasonable given the potential significance of the book; I just wonder if CEA have done this for any other worldview-focused books recently (last 1-3 years), or would do so in the future, e.g. a new book on animal farming.
Curious to get someone else's take on this or if it just felt important in my head.
Other small things:
As the ma...
Yeah, this is an excellent list. To me, the OP seems to miss the obvious point, which is that if you look at what the central EA individuals, organisations, and materials are promoting, you very quickly get the impression that, to misquote Henry Ford, "you can have any view you want, so long as it's longtermism". One's mileage may vary, of course, as to whether one thinks this is a good result.
To add to the list, the 8-week EA Introductory Fellowship curriculum, the main entry point for students, i.e. the EAs of the future, has 5 sections on cause areas, of which 3 are on longtermism. As far as I can tell, there are no critiques of longtermism anywhere, even in the "what might we be missing?" week, which I found puzzling.
[Disclosure: when I saw the Fellowship curriculum about a year ago, I raised this issue with Aaron Gertler, who said it had been created without much/any input from non-longtermists, this was perhaps an oversight, and I would be welcome to make some suggestions. I meant to make some, but never prioritised it, in large part because it was unclear to me if any suggestions I made would get incorporated.]
(Not a response to your whole comment, hope that's OK.)
I agree that there should be some critiques of longtermism or working on X risk in the curriculum. We're working on an update at the moment. Does anyone have thoughts on what the best critiques are?
Some of my current thoughts:
- Why I am probably not a longtermist
- This post arguing that it's not clear if X risk reduction is positive
- On infinite ethics (and Ajeya's crazy train metaphor)
IMO good-faith, strong, fully written-up, readable, explicit critiques of longtermism are in short supply; indeed, I can't think of any. The three you raise are good, but they are somewhat tentative and limited in scope. I think that stronger objections could be made.
FWIW, on the EA Facebook page, I raised three critiques of longtermism in response to Fin Moorhouse's excellent recent article on the subject, but all my comments were very brief.
The first critique involves defending person-affecting views in population ethics and arguing that, when you look at the details, the assumptions underlying them are surprisingly hard to reject. My own thinking here is very influenced by Bader (2022), which I think is a philosophical masterclass, but is also very dense and doesn't address longtermism directly. There are other papers arguing for person-affecting views, e.g. Narveson (1967) and Heyd (2012) but both are now a bit dated - particularly Narveson - in the sense they don't respond to the more sophisticated challenges to their views that have since been raised in the literature. For the latest survey of the literature and those challenges - albeit not one sympathetic to person-affecti...
The longtermist critique is a critique of arguments for a particular (perhaps the main) priority in the longtermism community, extinction risk reduction. I don't think it's necessary to endorse longtermism to be sympathetic to the critique. That extinction risk reduction might not be robustly positive is a separate point from the claim that s-risk reduction and trajectory changes are more promising.
Someone could think extinction risk reduction, s-risk reduction and trajectory changes are all not robustly positive, or that no intervention aimed at any of them is robustly positive. The post can be one piece of this, arguing against extinction risk reduction. I'm personally sympathetic to the claim that no longtermist intervention will look robustly positive or extremely cost-effective when you try to deal with the details and indirect effects.
The case for stable very long-lasting trajectory changes other than those related to extinction hasn't been argued persuasively, as far as I know, in cost-effectiveness terms over, say, animal welfare, and there are lots of large indirect effects to worry about. S-risk work often has potential for backfire, too. Still, I'm personally sympathetic to both enough to want to investigate further, at least over extinction risk reduction.
I obviously expected this comment would get a mix of upvotes and downvotes, but I'd be pleased if any of the downvoters would be kind enough to explain on what grounds they are downvoting.
Do you disagree with the empirical claim that central EA entities promote longtermism (the claim that we should give priority to improving the long-term future)?
Do you disagree with the empirical claim that there is pressure within EA to agree with longtermism, e.g. if you don't, it carries a perceived or real social or other penalty (such, as, er, getting random downvotes)?
Are my claims about the structure of the EA Introductory Fellowship false?
Is it something about what I put in the disclaimer?
The top comment:
Your comment:
But I think the biggest issue is that, for a moment, there was this thing where people could have listened.
You sort of just walked...
Hmm. This is very helpful, thank you very much. I don't think we're on the same page, but it's useful for indicating where those differences may lie.
I'm not sure what you mean by 'supporters'. Supporters of what? Supporters of 'non-longtermism'? Supporters of the view that "EA is just longtermism"? FWIW, I have a lot of respect for (very many) longtermists: I see them as seriously and sincerely engaged in a credible altruistic project, just not one I (currently?) consider the priority; I hope they would view me in the same way about my efforts to make lives happier, and that we would be able to cooperate and engage in moral trade where possible.
What I am less happy about is the (growing) sense that EA is only longtermism - that it's the only credible game in town - which is the subject of this post. One can be a longtermist - indeed of any moral persuasion - and object to that if one wants the effective altruism community to be a pluralistic and inclusive place.
On the other hand, one could take a different, rather sneering, arrogant, and unplea...
Thank you for sharing these thoughts.
I can see how the work of several EA projects, especially CEA, contributed to this. I think that some of these were mistakes (and we think some of them were significant enough to list on our website). I personally caused several of the mistakes that you list, and I'm sorry for that.
Often my take on these cases is more like "it's bad that we called this thing "EA"", rather than "it's bad that we did this thing". E.g. I think that the first round of EA Grants made some good grants (e.g. to LessWrong 2.0), but that it would have been better to have used a non-EA brand for it. I think that calling things "EA" means that there's a higher standard of representativeness, which we sometimes failed to meet.
I do want to note that all of the things you list took place around 2017-2018[1], and our work and plans have changed since then. For instance, CBG evaluation criteria are no longer as you state, EA Grants changed a lot after the first round and was closed down around 2019, the EA Handbook is different, and effectivealtruism.org has a new design.
If you have comments about our current work, then please give us (anonymous) feedback!
As I noted in an...
Thank you for wanting to be principled about such an important issue. However (speaking as someone who is both very strongly longtermist and a believer in the importance of cause prioritization), a core problem with the "neutrality"/expert-views framing of this comment is selection bias. We would naively expect people who spend a lot of time on cause prioritization to systematically overrate (relative to the broader community) both the non-obviousness of the most important causes, and their esotericism...
To be more explicit, there's also a selection bias towards esotericism. Like how much you think most of the work is "done for you" by the rest of the world (e.g. in developmental economics or moral philosophy), versus needing to come up with the frameworks yourself.
My observations about 80k, GPI, and CFAR are all ongoing (though they originated earlier). I also think there are plenty of post-2018 examples related to CEA's work, such as the Introductory Fellowship content Michael noted (not to mention the unexplained downvoting he got for doing so), Domassoglia's observations about the most recent EAG and EAGx (James hits on similar themes), and the late 2019 event that was framed as a "Leader's Forum" but was actually "some people who CEA staff think it would be useful to get together for a few days" (your words) with those people skewing heavily longtermist. I think all of these things "contribute to the view that EA essentially is longtermism/AI Safety?" (though of course longtermism could be "right", in which case these would all be positive developments.)
I also strongly share this worry about selection effects. There are additional challenges to those mentioned already: the more EA looks like an answer, rather than a question, the more inclined anyone who doesn't share that answer is simply to 'exit', rather than 'voice', leading to an increasing skew over time of what putative experts believe. A related issue is that, if you want to work on animal welfare or global development you can do that without participating in EA, which is much harder if you want to work on longtermism.
Further, it's a sort of double counting if you consider people as experts because they work in a particular organisation when they would only realistically be hired if they had a certain worldview. If FHI hired 100 more staff, and they were polled, I'm not sure we should update our view on what the expert consensus is any more than I should become more certain of the day's events by reading different copies of the same newspaper. (I mean no offence to FHI or its staff, by the way, it's just a salient example).
The EA Handbook is different, but as far as I can tell the mistakes made with the Handbook 2.0 were repeated for the 3rd edition.
CEA describing those "mistakes" around the Handbook 2.0: ...
Hey, I've just messaged the people directly involved to double check, but my memory is that we did check in with some non-longtermists, including previous critics (as well as asking more broadly for input, as you note). (I'm not sure exactly what causes the disconnect between this and what Aaron is saying, but Aaron was not the person leading this project.) In any case, we're working on another update, and I'll make sure to run that version by some critics/non-longtermists.
Also, per other bits of my reply, we're aiming to be ~70-80% longtermist, and I think that the intro curriculum is consistent with that. (We are not aiming to give equal weight to all cause areas, or to represent the views of everyone who fills out the EA survey.)
Since the content is aiming to represent the range of expert opinion in EA, since we encourage people to reflect on the readings and form their own views, and since we asked the community for input into it, I think that it's more appropriate to call it the "EA Handbook" than the previous edition.
I don’t recall seeing the ~70-80% number mentioned before in previous posts but I may have missed it.
I’m curious to know what the numbers are for the other cause areas and to see the reasoning for each laid out transparently in a separate post.
I think that CEA’s cause prioritisation is the closest thing the community has to an EA ‘parliament’ and for that process to have legitimacy it should be presented openly and be subject to critique.
Agree! This decision has huge implications for the entire community, and should be made explicitly and transparently.
Fwiw, my model of CEA is approximately that it doesn't want to look like it's ignoring differing opinions but that, nevertheless, it isn't super fussed about integrating them or changing what it does.
This is my view of CEA as an organisation; basically every CEA staff member I've ever met (including Max D), by contrast, has been a really lovely, thoughtful individual.
I agree with your takes on CEA as an organization and as individuals (including Max).
Personally, I’d have a more positive view of CEA the organization if it were more transparent about its strategy around cause prioritization and representativeness (even if I disagree with the strategy) vs. trying to make it look like they are more representative than they are. E.g. Max has made it pretty clear in these comments that poverty and animal welfare aren’t high priorities, but you wouldn’t know that from reading CEA’s strategy page where the very first sentence states: “CEA's overall aim is to do the most we can to solve pressing global problems — like global poverty, factory farming, and existential risk — and prepare to face the challenges of tomorrow.”
It's possibly worth flagging that these are (sadly) quite long-running issues. I wrote an EA forum post now 5 years ago on the 'marketing gap', the tension between what EA organisations present EA as being about and what those organisations believe it should be about, and arguing they should be more 'morally inclusive'. By 'morally inclusive', I mean welcoming and representing the various different ways of doing the most good that thoughtful, dedicated individuals have proposed.
This gap has since closed a bit, although not always in the way I hoped for, i.e. greater transparency and inclusiveness. As two examples, GWWC has been spun off from CEA, rebooted, and now does seem to be cause neutral. 80k is much more openly longtermist.
I recognise this is a challenging issue, but I still think the right solution to this is for the more central EA organisations to actually try hard to be morally inclusive. I've been really impressed at how well GWWC seem to be doing this. I think it's worth doing this for the same reasons I gave in that (now ancient) blogpost: it reduces groupthink, increases movement size, and reduces infighting. If people truly felt like EA was morally inclusive, I don't think this post, or any of these comments (including this one) would have been written.
Did subsequent rounds of EA Grants give non-trivial amounts to animal welfare and/or global poverty? What percentage of funding did these cause areas receive, and how much went to longtermist causes? Only the first round of grants was made public.
This account has some of the densest and most informative writing on the Forum; here's another comment:
(The comment describes CEA in a previous era. It seems the current CEA has different leadership and should be empowered and supported).
Thank you! I really appreciate this comment, and I’m glad you find my writing helpful.
I think EA Grants is different from EA Funds. EA Grants was discontinued a while back - https://www.effectivealtruism.org/grants
Oh, I get it now. That seems like a misleading summary, given that that program was primarily aimed at EA community infrastructure (which received 66% of the funding), the statistic cited here is only for a single grants round, and one of the five concrete examples listed seems to be a relatively big global poverty grant.
I still expect there to be some skew here, but I would take bets that the actual numbers for EA Grants look substantially less skewed than 1:16.
I think it's important to frame longtermism as particular subset of EA. We should be EAs first and longtermists second. EA says to follow the importance-tractability-crowdedness framework, and allocate funding to the most effective causes. This can mean funding longtermist interventions, if they are the most cost-effective. If longtermist interventions get a lot of funding and hit diminishing returns, then they won't be the most cost-effective anymore. The ITC framework is more general than the longtermist framing of "focus on the long-term future", and allows us to pivot as funding and tractability changes.
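(For reference, one common way of spelling this framework out - following the factoring popularised by 80,000 Hours, with labels that vary a bit between write-ups - is roughly:)

$$
\frac{\text{good done}}{\text{extra resources}}
= \underbrace{\frac{\text{good done}}{\text{\% of problem solved}}}_{\text{importance}}
\times \underbrace{\frac{\text{\% of problem solved}}{\text{\% increase in resources}}}_{\text{tractability}}
\times \underbrace{\frac{\text{\% increase in resources}}{\text{extra resources}}}_{\text{crowdedness / neglectedness}}
$$

The intermediate terms cancel; as a cause area receives more funding, the final (crowdedness) factor shrinks, which is one way the framework captures the diminishing returns mentioned above.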
I'm really hoping we can get some better data on resource allocation and estimated effectiveness to make it clearer when funders or individuals should return to focusing on global poverty etc.
There are a few projects in the works for "EA epistemic infrastructure".
I basically agree with your comment, but wanted to emphasize the part I disagree with:
EA is about prioritising in order to (try to) do the most good. The ITN framework is just a heuristic for that, which may very well be wrong in many places; and funding is just one of the resources we can use.
On one hand it's clear that global poverty does get the most overall EA funding right now, but it's also clear that it's more easy for me to personally get my 20th best longtermism idea funded than to get my 3rd best animal idea or 3rd best global poverty idea funded and this asymmetry seems important.
I think it's a function of global health funding already being allocated to much more scalable opportunities than exist in longtermism, whereas the longtermists have a much smaller set of funding opportunities to compete for. EA individuals are the main source of longtermist opportunities, and thus we get showered in longtermist money but not other kinds of money.
Animals is a bit more of a mix of the two.
Thanks for posting! My current belief is that EA has not become purely about longtermism. In fact, recently it has been argued in the community that longtermism is not necessary to pursue the kind of things we currently do, as pandemics or AI Safety can also be justified in terms of preventing global catastrophes.
That being said, I'd very much prefer the EA community's bottom line to be about doing "the most good" rather than subscribing to longtermism or any other cool idea we might come up with. These are all subject to change and debate, whereas doing the most good really shouldn't be.
Additionally, it might be worth highlighting, especially when talking with people unfamiliar with EA, that we deeply care about the suffering of people alive today. Quoting Nate Soares:
In general it doesn't seem logical to me to bucket cause areas as either "longtermist" or "neartermist".
I think this bucketing can paint an overly simplistic image of EA cause prioritization that is something like:
Are you longtermist?
But really the situation is way more complicated than this, and I don't think the simplification is accurate enough to be worth spreading.
When thinking through cause prioritization, I think most EAs (including me) over-emphasize the importance of philosophical considerations like longtermism or speciesism, and under-emphasize the importance of empirical considerations like AI timelines, how much effort it would take to make bio-weapons obsolete or what diseases cause the most intense suffering.
Agreed! And we should hardly be surprised to see such a founder effect, being that EA was started by philosophers and philosophy fans.
Open Philanthropy is not the only grantmaker in the EA space! If you add the FTX Community, FTX Future Fund, EA Funds, etc., my guess would be that overall funding has recently made a large shift towards longtermism, primarily due to the Future Fund being so massive.
I also want to emphasize that many central EA organisations are increasingly focused on longtermist concerns, and not as transparent about it as I would like them to be. People and organisations should not pretend to care about things they do not for the sake of optics. One of EA's most central tenets is to hold truth in very high regard, and being transparent about what we believe is necessary to do so.
I think there are a few pitfalls EA can fall into by its increasing focus on longtermism, but by and large people are noticing these and actively discussing them. This is a good sign! I'm a bit worried if many longtermists entirely stop caring about global poverty and animal suffering. Having a deep intuition that losing your parent to malaria is a horrible thing for a child to go through and that you can prevent it, is a very healthy sanity check.
I think the best strategy for 'hardcore' longtermists may very well be to do a bit of both, not because of optics but because regularly working on things with tight feedback loops reminds you just how difficult even well-defined objectives can be to achieve. That said there is enormous value in specialisation, so I'm not sure what the exact optimal trade-off is.
I think starting in 2022 this will be true in aggregate – as you say largely because of the FTX Future Fund.
However, for EA Funds specifically, it might be worth keeping in mind that the Global Health and Development Fund has been the largest of the four funds by payout amount, and by received donations even is about as big as all other funds combined.
Two points to add re: the cultural capture of longtermism in EA:
I'm not sure where to find agendas for past EAGx events I didn't attend. But looking at EAG London, I get a 4:3 ratio for LT/non-LT (not counting topics that fit neither "category", like founding startups):
LT
- "Countering weapons of mass destruction"
- "Acquiring and applying information security skills for long-term impact"
- "How to contribute to the UN's 'Our Common Agenda' report" (maybe goes in neither category? Contributions from EA people so far have been LT-focused, but I assume the process is the same for anything someone wants to add)
- "Exploring AI Futures with Role Play"
- "Speed meeting + discussion: biosecurity and engineering interventions"
- "Ambitious thinking in longtermist community building"
- "What's new in biosecurity? Concepts and priorities for the coming decade"
- "Transformer interpretability tool walk-through"
- "Longtermist talent search"
- "Workshop: New research topics in global priorities research" (maybe goes in neither category, most speakers from LT-focused orgs but topics were broad/varied)
- "So, how freaked out should we be about AI?"
- "Workshop: Possible research projects
... (read more)I think that longtermism has grown very dramatically, but that it is wrong to equate it with EA (both as a matter of accurate description and for strategic reasons, as are nicely laid out in the above post).
I think the confusion here exists in part because the "EA vanguard" has been quite taken up with longtermism and this has led to people seeing it as more prominent in EA than it actually is. If you look to organizations like The Life You Can Save or Giving What We Can, they either lead with "global health and wellbeing"-type cause areas or focus on that exclusively. I don't mean to say that this is good or bad, just that EA is less focused on longtermism than people might think based on elite messaging. IIRC this is affirmed by past community surveys.
Personally, I think OpenPhil's worldview diversification is as good an intellectual frame for holding all this together as I've seen. We all get off the "crazy train" at some point, and those who think they'll be hardcore and bite all bullets eventually hit something like this.
This is somewhat less true when one looks at the results across engagement levels. Among the less engaged ~50% of EAs (levels 1-3), neartermist causes are much more popular than longtermism. For level 4/5 engagement EAs, the average ratings of neartermist, longtermist and meta causes are roughly similar, though with neartermism a bit lower. And among the most highly engaged EAs, longtermist and meta causes are dramatically more popular than neartermist causes.
Descriptively, this adds something to the picture described here (based on analyses we provided), which is that although the most engaged level 5 EAs are strongly longtermist on average, the still highly engaged level 4s are more mixed. (Level 5 corresponds roughly to EA org staff and group leaders, while level 4 is people who've "engaged extensively with effective altruism content (e.g. attending an EA Global conference, applying for career coaching, or organizing an EA meetup)".)
One thing that does bear emphasising is that even among the most highly engaged EAs, neartermist causes do n...
This is a really helpful chart and has updated my model of the community more than any of the written comments.
For a community of data nerds, it’s surprising that we don’t use data visualisations in our Forum comments more regularly.
Do you have data on the trends over time? I’m interested to know if the three attributes are getting closer together or further apart at both ends of the engagement spectrum.
My hypothesis is that the attributes will be getting closer together at low levels of engagement and getting further apart at the higher levels.
I learned a lot from reading this post and some of the top comments, thanks for the useful analysis.
Throughout the post and comments people are tending to classify AI safety as a "longtermist" cause. This isn't wrong, but for anyone less familiar with the topic, I just want to point out that there are many of us who work in the field and consider AI to be a near-to-medium term existential risk.
Just in case "longtermism" gave anyone the wrong impression that AI x-risk is something we definitely won't be confronted with for 100+ years. Many of us think it will be much sooner than that (though there is still considerable uncertainty and disagreement about timelines).
See the related post "Long-Termism" vs. "Existential Risk" by Scott Alexander.
My view is that more traditional philanthropic targets make for a much easier sell, so GiveWell style messaging is going to reach/convince way more people than longtermist or x-risk messaging.
So you'll probably have way, way more people who are interested in EA on the global poverty and health side. I still only donate my pledge money to AMF, plus $100 a month extra to animal welfare, despite being somewhat involved in longtermist/x-risk stuff professionally (and pretty warm on these projects beyond my own involvement).
That being said, for some people EA is their primary social peer group. These people also tend to be highly ambitious. That's a recipe for people trying really hard to figure out what's the most prized, and orienting toward that. So there's lots of buzz around longtermism, despite the absolute numbers in the longtermist direction (people, clicks, eyeballs, money) being lower than those for more traditional, popular interventions.
This is a bit misleading. Some longtermists, myself included, prioritize minimizing suffering in the future. But this is definitely not a consensus among longtermists, and many popular longtermist interventions will probably increase future suffering (by increasing future sentient life, including mostly-happy lives, in general).
Reading this post after going through funding options, I notice that:
There are many more avenues for funding longtermist projects than neartermist ones. GiveWell holds almost a monopoly and is not set up to fund the full spectrum of opportunities. For example:
As a result, I think many junior EAs really drift towards long-termism because that's where the funding is.
I don't know how much of OpenPhil's neartermism funding is informed by GiveWell, or how OpenPhil decides on neartermism funding outside of GiveWell.
Writing this all up makes me tentatively believe it's a mistake to delegate the Global Health & Well-being fund to GiveWell, and that the neartermism funding space needs development.
It's also possible that I'm wrong about the above. In that case, I still expect many people to share my perception of the neartermism space. This perception probably contributes to the view that 'EA is currently primarily about long-termism'.
Are there any amateur EA historians who can help explain how longtermism grew in importance/status? I’d say 80k for instance is much more likely now to encourage folks to start a longtermist org than a global health org. There is lots of funding still moving towards the traditional neartermist causes like malaria and deworming, but not too much funding encouraging people to innovate there (or start another AMF).
Ultimately, I’m curious which person or orgs got convinced about longtermism first! It feels much more driven by top-down propagation than a natural evolution of an EA idea.
In short: OpenPhil gave half a billion towards global health and development in 2021. So it isn't just long-termism, but long-termism is increasing in influence within EA.
I intend to run a Polis poll to try and help us process this and see if there are interesting community trades that can be made. Anyone got suggestions for seed questions?
What's a 'trade'?
Animal welfare isn't my top priority but I try and eat vegan to signal its importance to non-EAs and signal my solidarity with animal welfare EAs. Likewise when I talk about how good GiveWell is, because it is. Others can trade by understanding my cause areas and finding cheap ways to tell others about them.
I would imagine that if you look at the number of jobs in EA that focus on longtermism vs Global Health & Development, the picture would be more skewed towards longtermism relative to funding.
I expected you to be right, but when I looked at the 80k job board just now, of the 962 roles, 161 were in AI, 105 were in pandemics, and 308 were in global health and development. Hard to say exactly how that relates to funding, but regardless, I think it shows development is also a major area of focus when measured by jobs instead of dollars.
The 80K board is an understandable proxy for "jobs in EA". But that description can be limiting.
Many non-student EA Global attendees had jobs at organizations that most wouldn't label "EA orgs", and that nevertheless fit the topics of the conference.
Examples:
Some of these might have some of their jobs advertised by 80K, but there are also tons of jobs at those places that wouldn't make the 80K job board* but that nevertheless put people in an excellent position to make an impact across any number of areas. And because global development is bigger than all the LT areas put together**, I expect there to be many more jobs on the non-LT side in this category.
*Not necessarily because 80K examined them and found them wanting, but as (I'd expect) a practical matter — there are 157 open jobs at the World Bank right now, and I wouldn't expect 80K to evaluate all of them (or turn the World Bank into 15% of the whole job board).
**Other than biosecurity, maybe? As a quick sanity-check, USAID's budget is ~4x the CDC budget, results may vary across countries and international institutions.
Frances, your posts are always so well laid out with just the right amount of ease-of-reading colloquialism and depth of detail. You must teach me this dark art at some point!
As for the content of the post itself, it's funny that recently the two big longtermism-related criticisms of EA have been that EA is too longtermist and that it isn't longtermist enough! I've always thought that means it's about right, haha. You can't keep everyone happy all of the time.
I'm one of those people you mention who only really interacts with the longtermist side of E...
EA has definitely been moving towards "a lot of longtermism".
The OP has already provided some evidence of this with funding data. Another thing that signals to me that this is happening is the way 80,000 Hours has been changing their career guide. Their earlier career guide started by talking about Seligman's factors/Positive Psychology and made the very simple claim that if you want a satisfying career, positive psychology says...
The title of this post (and a link to it) was quoted here as supporting the claim that EA is mostly just longtermism.
https://reboothq.substack.com/p/ineffective-altruism?s=r
Sorry I'm a bit late to the party on this, but thanks for the well-researched and well thought-out post.
My two cents, as this line caught my eye:
Notably, working on these issues can often improve the lives of people living today (e.g. working towards safe advanced AI includes addressing already present issues, like racial or gender bias in today’s systems).
I think the line of reasoning concerns me. If working on racial/gender bias from AI is one of the most cost-effective ways to make people happier or save lives, then I would advocate...
I am trying not to be snarky and dismissive, but among a large number of things I think this post gets wrong, this sticks out as a ridiculous and obviously wrong claim.
First, non-effective altruists have been giving to global threat reduction for most of a century, starting with nuclear n...
I agree with your specific claims, but FWIW I thought the post, albeit having some gaps, was good overall, and unusually well written in terms of being engaging and accessible.
The reason why I overall still like this post is that I think at its core it's based on (i) a correct diagnosis that there is an increased perception that 'EA is just longtermism' both within and outside the EA community, as reflected in prominent public criticisms of EA that mostly focus on opposition to longtermism, and (ii) it describes some mostly correct facts that explain and/or debunk the 'EA is just longtermism' claim (even though it omits some important facts and arguably undersells the influence of longtermism in EA overall).
E.g., on the claim you quote, a more charitable interpretation would be that longtermism is one of potentially several things that differentiates EA's approach to philanthropy from traditional ones, and that this contributes to longtermism being a feature that outside observers tend to particularly focus on.
Now, while true in principle, my guess is that even this effect is fairly small compared to some other reasons behind the attention that longtermism gets...