This is a frame that I've found useful, and I'm sharing it in case others find it useful too.
EA has arguably gone through several waves:
Waves of EA (highly simplified model — see caveats below)

| | First wave | Second wave | Third wave |
|---|---|---|---|
| Time period | 2010[1]-2017[2] | 2017-2023 | 2023-?? |
| Primary constraint | Money | Talent | ??? |
| Primary call to action | Donations to effective charities | Career change | ??? |
| Primary target audience | Middle-upper-class people | University students and early career professionals | ??? |
| Flagship cause area | Global health and development | Longtermism | ??? |
| Major hubs | Oxford > SF Bay > Berlin (?) | SF Bay > Oxford > London > DC > Boston | ??? |
The boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic – I first got involved in EA through animal welfare, which is not listed at all on this table, for example. But I think this is a decent first approximation.
It’s not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But there are two main things which make me think that we have a “wave” which is distinct from, say, mid 2022:
- Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]
- AI safety becoming (relatively) mainstream
If I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.
It remains to be seen if public concern about AI is sustained – Superintelligence was endorsed by a bunch of fancy people when it first came out, but they mostly faded away. If it is sustained though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety is getting a lot of coverage, people with expertise in AI safety might get into important rooms, and where the field might be less neglected.
Third wave EA: what are some possibilities?
Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.
| Example future scenario | Politics and Civil Society[4] | Forefront of weirdness | Return to non-AI causes |
|---|---|---|---|
| Description of the possible "third wave" — chosen to illustrate the breadth of possibilities | There is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI. | AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness, and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window. | AI safety becomes mainstream and "spins out" of EA. AI safety advocates leave EA, and vibes shift back to "first wave" EA. |
| Primary constraint | Political will | Research | Money |
| Primary call to action | Voting/advocacy | Research | Donations |
| Primary target audience | Voters in US/EU | Future researchers (university students) | Middle-upper-class people |
| Flagship cause area | AI regulation | Digital sentience | Animal welfare |
Where do we go from here?
- I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.
- I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we’re still in wave 2, argue we might be moving towards wave 3 but shouldn’t be, etc.).
- I’m also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual value of EA, e.g. will digital sentience become “a thing” without EA involvement?
This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.
Thanks to a bunch of people for comments on earlier drafts, including ~half of my coworkers at CEA, particularly Lizka. The "waves" terminology is stolen from feminism, and the idea that EA has been through waves and is entering a third wave is adapted from Will MacAskill; I think he has a slightly different framing, but he still deserves a lot of the credit here.
[1] Starting date is somewhat arbitrarily chosen from the history listed here.

[2] Arbitrarily choosing the coining of the word "longtermism" as the starting event of the second wave.

[3] Although Meta stock is back up since I first wrote this; I would be appreciative if someone could do an update on EA funding.

[4] Analogy from Will MacAskill: Quakers:EA::Abolition:AI Safety.
I like this framing, and here are some thoughts on movements from a movement veteran. First, it's obvious EA moved from a focus on raising money for effective charities to longtermism/X-risk, and it's interesting to see all the cultural flows of that in EA. It also seems fairly obvious that the series of scandals, from FTX to sexual harassment, has had a reverberating shockwave effect in EA, and to me that is the clearest sign EA is primed for a new wave, the third. I would date it not only by AI but also by some averaging of the dates of these big, difficult stories from Dec. 2022 to early 2023, when FTX was still everywhere, ChatGPT debuted globally, and the sexual harassment stories broke. The fact that EA is years ahead of any other organized movement in organizing on AI safety means it can be a hub, and that has value moving forward.
I think the really big question is this: what effect will all the trauma and embarrassment, and people reassessing themselves and EA as a whole, end up having on the steering rudder of EA? Where will it turn, and in what ways will it change? Comments on that would be very interesting.
Here's my comment, and if you were to read all my posts and comments you could see the trend: I don't know at all what direction the rudder will steer us toward, but I hope it includes a huge cultural reformation surrounding Utilitarianism. One of the iconic quotes that gives me this idea is from an interview with SBF where he says he's a Benthamite Utilitarian, very soon before he's revealed to be a historically awful fraudster who was spawned and enabled by a bunch of Benthamite Utilitarians calling themselves Effective Altruists.
Now I know some leading lights have spoken out recently to say, "Aw, we haven't really been hardcore Benthamites... we've always been more balanced." I would say that's classic blind-spot talk, because EA, you have no idea how strongly Utilitarian you come across to anyone from the outside. You are not at all balanced; you are hardcore Utilitarian. If you think you're balanced, you're just too inside to know how things look from the outside. I think what's happening mentally is that you are smart enough to imagine the freakishly crazy side of extreme Utilitarianism, and you know you aren't that... but that's because nobody is that freakish except literal psychopath outliers who don't count. Instead, you are still firmly situated in a kind of Utilitarianism which, though balanced with some common sense that is both socially and literally unavoidable, is still very far over from most of your peers in non-EA culture worlds.
I can imagine all the defensive comments saying it's not that bad, we're more balanced, but as I said above, that's just being too inside to see from the outside. If there were any one major cultural thing that typified EA and EA people, it would be utilitarianism... eating Huel alone at your desk so you can grind on to be more effective is, to me, the iconic image of that.
I know this is tough love, but I do dearly love EA... and I just want everyone to be happy, to stop eating Huel alone at your desk, and to discover the joy of being with others and having an ice cream cone now and then. You'll be far better optimized by that to do your good work. Utilitarianism is optimizing for robots, not for humans. Effective Altruists are humans helping humanity... optimize for humanness.
Well, okay. I've argued that other decision procedures and moralities do have value, but are properly considered subordinate to CU. Not sure if these ideas swayed you at all, but if you're Christian you may be thinking "I have my Rock" so you feel no need for another.
You could do this, but you'd be arguing axiomatically.