This is a frame I've found useful, and I'm sharing it in case others do too.
EA has arguably gone through several waves:
**Waves of EA (highly simplified model — see caveats below)**

| | First wave | Second wave | Third wave |
| --- | --- | --- | --- |
| Time period | 2010[1]-2017[2] | 2017-2023 | 2023-?? |
| Primary constraint | Money | Talent | ??? |
| Primary call to action | Donations to effective charities | Career change | ??? |
| Primary target audience | Middle-upper-class people | University students and early career professionals | ??? |
| Flagship cause area | Global health and development | Longtermism | ??? |
| Major hubs | Oxford > SF Bay > Berlin (?) | SF Bay > Oxford > London > DC > Boston | ??? |
The boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic – animal welfare, the cause through which I first got involved in EA, doesn't appear in it at all, for example. But I think this is a decent first approximation.
It’s not entirely clear to me whether we are actually in a third wave; people often overestimate the extent to which their local circumstances are unique. But two main things make me think we have a “wave” distinct from, say, mid-2022:
- Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]
- AI safety becoming (relatively) mainstream
If I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.
It remains to be seen whether public concern about AI is sustained – *Superintelligence* was endorsed by a bunch of fancy people when it first came out, but that attention mostly faded away. If the concern is sustained, though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety gets a lot of coverage, people with expertise in AI safety might get into important rooms, and the field might be less neglected.
Third wave EA: what are some possibilities?
Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.
| Example future scenario | Politics and Civil Society[4] | Forefront of weirdness | Return to non-AI causes |
| --- | --- | --- | --- |
| Description of the possible “third wave” — chosen to illustrate the breadth of possibilities | There is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI. | AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness, and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window. | AI safety becomes mainstream and "spins out" of EA. AI safety advocates leave EA, and vibes shift back to “first wave” EA. |
| Primary constraint | Political will | Research | Money |
| Primary call to action | Voting/advocacy | Research | Donations |
| Primary target audience | Voters in US/EU | Future researchers (university students) | Middle-upper-class people |
| Flagship cause area | AI regulation | Digital sentience | Animal welfare |
Where do we go from here?
- I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.
- I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we’re still in wave 2, argue we might be moving towards wave 3 but shouldn’t be, etc.).
- I’m also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual value of EA, e.g. will digital sentience become “a thing” without EA involvement?
This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.
Thanks to a bunch of people for comments on earlier drafts, including ~half of my coworkers at CEA, particularly Lizka. “Waves” terminology stolen from feminism; the idea that EA has been through waves and is entering a third is adapted from Will MacAskill, who I think frames it slightly differently but still deserves a lot of the credit here.
[1] Starting date is somewhat arbitrarily chosen from the history listed here.

[2] Arbitrarily choosing the coining of the word “longtermism” as the starting event of the second wave.

[3] Although Meta stock is back up since I first wrote this; I would appreciate it if someone could do an update on EA funding.

[4] Analogy from Will MacAskill: Quakers:EA::Abolition:AI Safety
Well, okay. I've argued that other decision procedures and moralities do have value, but are properly considered subordinate to CU. Not sure if these ideas swayed you at all, but if you're Christian you may be thinking "I have my Rock" so you feel no need for another.
You could do this, but you'd be arguing axiomatically. A claim like "my axioms are above those of utilitarians!" would just be a bare assertion with nothing to back it up. As I mentioned, I have axioms too, but only the bare minimum necessary, because axioms are unprovable, and years of reflection led me to reject all unnecessary axioms.
The most important thing to realize is that the choice of which things have intrinsic value lies outside consequentialism. A consequentialist could indeed adopt the axiom that "art is intrinsically valuable". Calling that "utilitarian" feels like nonstandard terminology, but such a value assignment seems utilitarian-adjacent, unless you treat it merely as a virtue or rule rather than as a goal you seek after.
Note, however, that beauty doesn't exist apart from an observer to view it, which is part of the reason I think this choice would be a mistake. Imagine a completely dead universe―no people, no life, no souls/God/heaven/hell, and no chance life will ever arise. Just an endless void pockmarked by black holes. Suppose there is art drifting through the void (perhaps Boltzmann art, by analogy to a Boltzmann brain). Does it have value? I say it does not. But if, in the endless void, a billion light years beyond the light cone of this art that can never be seen, there should be one solitary civilization left alive, I argue that this civilization's art is infinitely more valuable.

More pointedly, I would say that it is the experience of art that is valuable, not the art itself―art is instrumentally valuable, not intrinsically. Thus a great work of art viewed by a million people has delivered 100 times as much value as the same art seen by only 10,000 people―though one should take into account countervailing factors, such as the fact that the first 10,000 who see it are more likely to be connoisseurs who appreciate it a lot, and the fact that any art you experience takes away time you might have spent on other art. For me, I wish I could listen to more EA and geeky songs (as long as they have beautiful melodies), but lacking that, I still enjoy hearing nice music that isn't tailored as much to my tastes. Thus EA art is less valuable in a world that is already full of art. But EA art could be instrumentally valuable both in the experiences it creates (experiences with intrinsic value) and in its tendency to make the EA movement healthy and growing.
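To spell out the "100 times" arithmetic as a minimal sketch, assuming (a simplification I'm introducing here) that each viewing delivers the same value $v$ and total value scales linearly with the number of viewers:

$$\frac{1{,}000{,}000 \times v}{10{,}000 \times v} = 100$$

The countervailing factors above would break the linearity assumption: if early viewers are connoisseurs, $v$ is higher for them, and the true ratio is somewhat less than 100.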
So, to be clear, I don't see a bug in utilitarianism; I see the other views as the ones with bugs. This is simply because I see no flaws in my moral system, but I do see flaws in other systems. There are, of course, flaws in myself, as I illustrate below.
I think it's important to understand and accept that humans cannot be maximally moral; we are all flawed. And this is not a controversial statement to a Christian, right? We can be flawed even in our own efforts to act morally!

I'll give an example from last year, when Russia invaded Ukraine. I was suddenly and deeply interested in helping, seeing that a rapid response was needed. But other EAs weren't nearly as interested as I was. I would've argued that although Ukrainian lives are less cost-effective to save than African lives, there was a meaningful longtermist angle: Russia was tipping the global balance of power from democracy to dictatorship, and if we didn't respond strongly against Putin, Xi Jinping could be emboldened to invade Taiwan; in the long term, this could lead to tyranny taking over the world (EA relies on freedom to work, so this is a threat to us). Yet I didn't make this argument to EAs; I kept it to myself (perhaps for the same reason I waited ~6 months to publish this: I was afraid EAs wouldn't care).

I ended up thinking deeply about the war―about what it might be like as a civilian on the front lines, about what kinds of things would help Ukraine most on a small-ish budget, about how the best Russians weren't getting the support they deserved, and about how events might play out (which turned into a hobby of learning and forecasting, and the horrors of war I've seen are like―holy shit, did I burn out my empathy unit?). But not a single EA organization offered any way to help Ukraine, and I was left looking for other ways to help. So I ended up giving $2000 CAD to Ukraine-related causes, half of which went to Ripley's Heroes, which turned out to be a (probably, mostly) fraudulent organization.

Not my best EA moment! From a CU perspective, I performed badly. I f**ked up. I should've been able to look at the situation and accept that there was no way I could give military aid effectively with the information I had. I certainly knew that military aid was far from an ideal intervention and that there were probably better interventions; I just didn't have access to them, AFAIK. The correct course of action was not to donate to Ukraine (understanding that some people could help effectively, just not me). But emotionally I couldn't accept that. You know, though, I have no doubt that it was myself that was flawed and not my moral system; CU's name be praised! 😉 Also, I don't really feel guilty about it; I just think "well, I'm human, I'll make some mistakes and no one's judging me anyway, hopefully I'll do better next time."
In sum: humans can't meet the ideals of (M)CU, but that doesn't mean (M)CU isn't the correct standard by which to make and evaluate choices. There is no better standard. And again, the Christian view is similar, just with a different axiomatic foundation.
Edit: P.S. a relevant bit of the Consequentialism FAQ: