This is a frame that I have found useful and I'm sharing in case others find it useful.
EA has arguably gone through several waves:
Waves of EA (highly simplified model — see caveats below)

| | First wave | Second wave | Third wave |
|---|---|---|---|
| Time period | 2010[1]-2017[2] | 2017-2023 | 2023-?? |
| Primary constraint | Money | Talent | ??? |
| Primary call to action | Donations to effective charities | Career change | |
| Primary target audience | Middle-upper-class people | University students and early career professionals | |
| Flagship cause area | Global health and development | Longtermism | |
| Major hubs | Oxford > SF Bay > Berlin (?) | SF Bay > Oxford > London > DC > Boston | |
The boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic – for example, I first got involved in EA through animal welfare, which doesn't appear on the table at all. But I think this is a decent first approximation.
It’s not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But there are two main things that make me think we are in a “wave” distinct from, say, mid-2022:
- Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]
- AI safety becoming (relatively) mainstream
If I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.
It remains to be seen whether public concern about AI is sustained – *Superintelligence* was endorsed by a bunch of fancy people when it first came out, but those endorsements mostly faded away. If concern is sustained, though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety gets a lot of coverage, people with expertise in AI safety might get into important rooms, and the field might be less neglected.
Third wave EA: what are some possibilities?
Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.
| Example future scenario | Politics and Civil Society[4] | Forefront of weirdness | Return to non-AI causes |
|---|---|---|---|
| Description of the possible “third wave” — chosen to illustrate the breadth of possibilities | There is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI. | AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness, and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window. | AI safety becomes mainstream and "spins out" of EA. AI safety advocates leave EA, and vibes shift back to “first wave” EA. |
| Primary constraint | Political will | Research | Money |
| Primary call to action | Voting/advocacy | Research | Donations |
| Primary target audience | Voters in US/EU | Future researchers (university students) | Middle-upper-class people |
| Flagship cause area | AI regulation | Digital sentience | Animal welfare |
Where do we go from here?
- I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.
- I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we’re still in wave 2, argue we might be moving towards wave 3 but shouldn’t be, etc.).
- I’m also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual value of EA, e.g. will digital sentience become “a thing” without EA involvement?
This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.
Thanks to a bunch of people for comments on earlier drafts, including ~half of my coworkers at CEA, particularly Lizka. “Waves” terminology stolen from feminism; the idea that EA has been through waves and is entering a third wave is adapted from Will MacAskill, who deserves a lot of the credit here, though I think he has a slightly different framing.
[1] Starting date is somewhat arbitrarily chosen from the history listed here.

[2] Arbitrarily choosing the coining of the word “longtermism” as the starting event of the second wave.

[3] Although Meta stock is back up since I first wrote this; I would appreciate it if someone could do an update on EA funding.

[4] Analogy from Will MacAskill: Quakers:EA::Abolition:AI Safety
I'm glad Ben shared this post!
I'm not sure how much I agree with the framework (or at least with the idea that we're entering a third wave), but this seems like a useful tool/exercise.
Here's one consideration that comes to mind as I think about whether we're entering a third wave (written quickly, sorry in advance!).
I've got competing intuitions:
My best guess (not resilient and pretty vague) is that we're generally too slow to update on in-the-world changes (that aren't about other people's views or the like), and too quick to update on ~memes in our immediate surroundings or our information sources/networks. I tentatively think that (public) opinion does in fact change a lot, but those changes are generally slower, and that we should be cautious about thinking that opinion-like changes are big, since small/local changes can feel huge/permanent/global.
So: to the extent that the idea that we're entering a third wave is based on the sense that AI safety concerns are going mainstream, I feel very unsure that we're interpreting things correctly. We have decent (and not vibes-based) signals that AI safety is in fact going mainstream, but I'm still pretty unsure whether things will go back to ~normal. Of course, other things have also changed; specific influential people seem to have gotten worried, it seems like governments are taking AI (existential) risk seriously, etc. — these seem less(?) likely to revert to normal (although I'm just guessing, again). I imagine that we can look at past case studies of this and get very rough ~base rates, potentially — I'd be very interested.
(I have some other concerns about using/believing this model, but just wanted to outline one for now.)
I'll also share some notes/comments I added on a slightly earlier draft. I haven't read the comments carefully, so at least some of this is probably redundant.
Some other possible "third waves" (very quick brainstorm)
I like the distinction between overreacting and underreacting as being about "in the world" changes vs. "memes" - another way of saying this is something like "object-level reality" vs. "social reality".
If the longtermism wave is real, then that was pretty much about social reality, at least within EA, and changed how money was spent and things people said (as I understand it; I wasn't really socially involved at the time).
So to the extent that this is about "what's happening to EA", I think there's clearly a third wave here, where people are running and getting funded ...