
In the last 2 years:

  • What ideas that were considered wrong[1]/low status have been championed here?
  • What has the movement acknowledged it was wrong about previously?
  • What new, effective organisations have been started?

This isn't to claim that this is the only work that matters, but it feels like a chunk of what matters. Someone asked me and I realised I didn't have good answers.

  1. ^

    Changed in response to comment from @JWS 🔸 


4 Answers

It's very difficult to overrate how much EA has changed over the past two years.

For context, two years ago was 2022 July 30. That was 17 days prior to the "What We Owe the Future" book launch, and about three months before the FTX fraud was discovered (though at the time it was already massively underway in secret) and the ensuing bankruptcy. We were still at the height of the Big Money Big Longtermism era.

It was also about eight months before the FLI Pause Letter, which I think roughly coincided with the US and UK governments taking very serious and intense interest in AI risk.

I think these two events were really key changes for the EA movement and led to a huge vibe shift. "Longtermism" feels very antiquated now, abandoned in the name of "holy crap, we have to deal with AI risk occurring within the next ten years". Big Money is out, but we still have a lot of money, and it feels more responsible and somewhat more sustainable now. There are no longer regrantors running around everywhere, for better and for worse.

Many of the people previously working on longtermism have pivoted to "pandemics and AI" and many of the people previously working on pandemic risk have pivoted to "AI x bio intersections". WWOTF captures the current mid-2024 vibe of EA much less than Leopold's "Situational Awareness".

There also has been a massive pivot towards mainstream engagement. Many EAs have edited their LinkedIns to purge that two-word phrase and now barely and begrudgingly admit to being "EA-adjacent". These people now take meetings in DC and engage in the mainstream policy process (whereas previously "politics was the mindkiller"). Many AI policy orgs have popped up or become more prominent as a result. Even MIRI, which had just announced "Death with Dignity" only about three months prior to that date of 2022 July 30, has now given up on giving up and pivoted to policy work. DC is a much bigger EA hub than it was two years ago, but the people working in DC certainly wouldn't refer to it as that.

The vibe shift towards AI has also continued to cannibalize the rest of EA, for better and for worse. This trend was already in full swing in 2022 but became much more prominent over 2023-2024. There's a lot less money available for global health and animal welfare work than before, especially if you worked on weirder stuff like shrimp. Shrimp welfare kinda peaked in 2022, and the past two years have unfortunately not been kind to shrimp.

This looks pretty much right, as a description of how EA has responded tactically to important events and vibe shifts. Nevertheless it doesn't answer OP's questions, which I'll repeat:

  • What ideas that were considered wrong/low status have been championed here?
  • What has the movement acknowledged it was wrong about previously?
  • What new, effective organisations have been started?

Your reply is not about new ideas, or the movement acknowledging it was wrong (except about Bankman-Fried personally, which doesn't seem like what OP is asking about), or new organizations.

It seems important, to me, that EA's history over the last two years is instead mainly the story of changes in funding, in popular discourse, and in the social strategy of preexisting institutions. e.g. the FLI pause letter was the start of a significant PR campaign, but all the *ideas* in it would've been perfectly familiar to an EA in 2014 (except for "Should we let machines flood our information channels with propaganda and untruth?", which is a consequence of then-unexpected developments in AI technology rather than of intellectual work by EAs).

I'm not sure I understand well enough what these questions are looking for to answer them.

Firstly, I don't think "the movement" is centralized enough to explicitly acknowledge things as a whole - that may be a bad expectation. I think some individual people and organizations have done some reflection (see here and here for prominent examples), though I would agree that there likely should be more.

Secondly, it definitely seems very wrong to me, though, to say that EA has had no new ideas in the past two years. Back in 2022 the main answer to "how do we reduce AI risk?" was "I don't know, I guess we should urgently figure that out", and now there's been an explosion of analysis, threat modeling, and policy ideas - for example, Luke's 12 tentative ideas were basically all created within the past two years. On top of that, a lot of EAs were involved in the development of Responsible Scaling Policies, which are now the predominant risk management framework for AI. And there's way more too.

Unfortunately I can mainly only speak to AI, as it is my current area of expertise, but there have been updates in other areas as well. For example, at Rethink Priorities alone: welfare ranges, CRAFT...


So you'd say the major shifts are:

  • Towards AI policy work
  • Towards AI x bio policy work

Also this seems notable:

Many EAs have edited their LinkedIns to purge that two-word phrase and now barely and begrudgingly admit to being "EA-adjacent".

Going to take a stab at this (from my own biased perspective). I think Peter did a very good job, but Sarah was right that it didn't quite answer your question. It's difficult to decide what counts as 'generating ideas' vs rediscovering old ones; many new philosophies/movements can generate ideas, but they can often be bad ones. And again, EA is a decentral-ish movement, and it's hard to get centralised/consensus statements on it.

With enough caveats out of the way, and very much from my biased PoV:

"Longtermism" is dead - I'm not sure if someone has gone 'on record' for this, but I think longtermism, especially strong longtermism, as a driving idea for effective altruism is dead. Indeed, to the extent that AI x-risk and Longtermism went hand-in-hand is gone because AI x-risk proponents increasingly view it as a risk that will be played out in years and decades, not centuries and millenia. I don't expect future EA work to be justified under longtermist framing, and I think this reasonably counts as the movement 'acknowledging it was wrong' in some collective-intelligence sort of way.

The case for Animal Welfare is growing - In the last 2 years, I think the intellectual case for Animal Welfare as a leading, and perhaps the, EA cause has actually strengthened quite a bit. Rethink published their Moral Weight Sequence, which has influenced much subsequent work; see Ariel's excellent pitch for Animal Welfare to dominate neartermist spending.[1] On radical new ideas to implement, Matthias' pitch for screwworm eradication sounded great to me - let's get it happening! Overall, Animal Welfare is good, and EA continues to be directionally ahead on it, and the source of both interesting ideas and funding in this space, in my non-expert opinion.

Thorstad's Criticism of Astronomical Value - I'm specifically referring to David's 'Existential Risk Pessimism' sequence, which I think is broadly part of the EA-idea ecosystem, even if written from a critical perspective. The first few pieces, which argue that longtermists should actually have low x-risk probabilities, and vice versa, were really novel and interesting to me (and I wish more people had responded to them). Being able to openly criticise x-risk arguments and defer less is hopefully becoming more accepted, though it may still be a minority view amongst leadership.

Effective Giving is Back - My sense is that, over the last two years, probably spurred by the FTX collapse and fallout, Effective Giving is back on the menu. I'm not particularly sure why it left, or to what extent it did,[2] but there are a number of posts (e.g. see here, here, and here) that indicate it's becoming a lot more of a thing. This is sort of a corollary of 'longtermism is dead': people realised that perhaps earning-to-give, or even just giving, is something that is still valuable and can be a unifying thing in the EA movement.

There are other things that I could mention but I ran out of time to do so fully. I think there is a sense that there are not as many new, radical ideas as there were in the opening days of EA - but in some sense that's an inevitable part of how social movements and ideas grow and change.
 

  1. ^

I don't think longtermist spending can avoid the force of his arguments either!

  2. ^

I'm not sure if effective giving actually was deprioritised or, if it was, whether that was deliberate strategy or just incentives playing out. So this is just my vibe-take.

"Longtermism is dead": I feel quite confused about what the idea is here.

Is it that (1) people no longer find the key claims underlying longtermism compelling? (2) it seems irrelevant to influencing decisions? (3) it seems less likely to be the best messaging strategy for motivating people to take specific actions? (4) something else?

I'm also guessing that this is just a general summary of vibe and attitudes from people you've spoken to, but if there's some evidence you could point to that demonstrates this overall point or any of the subpoints I'd be pretty interested in that.

(Responding to you, but Peter made a similar point.)

Thanks!

JWS 🔸

On the platonic/philosophical side I'm not sure; I think many EAs weren't really bought into it to begin with, and the shift to longtermism was in various ways the effect of deference and/or cohort effects. In my case I feel that the epistemic/cluelessness challenge to longtermism/far-future effects is pretty dispositive, but I'm just one person. On the vibes side, I think the evidence is pretty damning:

  • The launch of WWOTF came at almost the worst time possible, and the idea seems indelibly linked with SBF's risky/naïve ethics and immoral actions.
  • Do a Google News or Twitter search for 'longtermism' in its EA context and it's ~broadly to universally negative. The Google Trends data also points toward the term fading away.
  • No big EA org or "EA leader", however defined, is going to bat for longtermism any more in the public sphere. The only people talking about it are the critics. When you get that kind of dynamic, it's difficult to see how an idea can survive.
  • Even on the Forum, very little discussion seems to be based on 'longtermism' these days. People either seem to have left the Forum/EA, or longtermist concerns have been subsumed into AI/bio risk. Longtermism just seems superfluous to these discussions.

That's just my personal read on things though. But yeah, it seems very much like the SBF / community drama / OpenAI board triple whammy from Nov 2022 to Nov 2023 marked the death knell for longtermism, at least as the public-facing justification of EA.
Jamie_Harris

Thanks! That's helpful.

  • Seems to me that at least 80,000 Hours still "bats for longtermism" (e.g. it's very central in their resources about cause prioritisation).
  • Not sure why you think that no "'EA leader' however defined is going to bat for longtermism any more in the public sphere".
  • Longtermism (or at least, x-risk / GCRs as proxies for long-term impact) seems pretty crucial to various prioritisation decisions within AI and bio.
  • And longtermism unequivocally seems pretty crucial to s-risk work and justification, although that's a far smaller component of EA than x-risk work.

(No need to reply to these, just registering some things that seem surprising to me.)

In terms of changes in status and what people are doing:

  • pivot from AI safety technical research to AI governance policy work
  • pivot from broader biosecurity to intersection of AI and bio
  • adoption of progress studies ideas / adoption of metascience and innovation policy as a priority cause area
  • taking broad-based economic growth seriously rather than a sole focus on randomista development
  • greater general engagement with politics
  • further reduction in focus on effective giving, increased focus on career impact

I don’t think the Global Health and Animal Welfare cause areas have changed too much, but probably get a smaller proportion of attention.

I think focusing on AI explosive growth has grown in status over the last two years. I don't think many people were focusing on it two years ago except Tom Davidson. Since then, Utility Bill has decided to focus on it full-time, Vox has written about it, it's a core part of the Situational Awareness model, and Carl Shulman talked about it for hours in influential episodes on the 80K and Dwarkesh podcasts.

3 Comments

It would be helpful to understand the context in which these questions arose. 

For instance, one possible origin story is that the questions arose in a discussion of the value of new-cause development / openness to weird and controversial ideas with limited current support / etc. I could see that conversation going on in light of the recent controversies related to Manifest / scientific racism. A helpful response in the context of that conversation would look very different from a helpful response to ~"how has EA changed in the last two years, generally?"

Love this question, and think it's important for us all to consider.

Some considerations for clarification:

  • why say 'considered low status' instead of 'considered wrong' or 'considered wrong by EA Leadership or whatever'?
  • I guess, given EA is somewhat decentralised in terms of claimed ownership, it's hard to say what 'the movement' has acknowledged, but maybe substantial or significant minorities of the movement beginning to champion a new cause/idea would meet the criteria?

How is this edit?
