The phrase "long-termism" is occupying an increasing share of EA community "branding". For example, the Long-Term Future Fund, the FTX Future Fund ("we support ambitious projects to improve humanity's long-term prospects"), and the impending launch of What We Owe The Future ("making the case for long-termism").
Will MacAskill describes long-termism as:
I think this is an interesting philosophy, but I worry that in practical and branding situations it rarely adds value, and might subtract it.
In The Very Short Run, We're All Dead
AI alignment is a central example of a supposedly long-termist cause.
But Ajeya Cotra's Biological Anchors report estimates a 10% chance of transformative AI by 2031, and a 50% chance by 2052. Others (eg Eliezer Yudkowsky) think it might happen even sooner.
Let me rephrase this in a deliberately inflammatory way: if you're under ~50, unaligned AI might kill you and everyone you know. Not your great-great-(...)-great-grandchildren in the year 30,000 AD. Not even your children. You and everyone you know. As a pitch to get people to care about something, this is a pretty strong one.
But right now, a lot of EA discussion about this goes through an argument that starts with "did you know you might want to assign your descendants in the year 30,000 AD exactly equal moral value to yourself? Did you know that maybe you should care about their problems exactly as much as you care about global warming and other problems happening today?"
Regardless of whether these statements are true, or whether you could eventually convince someone of them, they're not the most efficient way to make people concerned about something which will also, in the short term, kill them and everyone they know.
The same argument applies to other long-termist priorities, like biosecurity and nuclear weapons. Well-known ideas like "the hinge of history", "the most important century" and "the precipice" all point to the idea that existential risk is concentrated in the relatively near future - probably before 2100.
The average biosecurity project being funded by the Long-Term Future Fund or the FTX Future Fund is aimed at preventing pandemics in the next 10 to 30 years. The average nuclear containment project is aimed at preventing nuclear wars in the next 10 to 30 years. One reason all of these projects are good is that they will prevent humanity from being wiped out, leading to a flourishing long-term future. But another reason they're good is that if there's a pandemic or nuclear war 10 or 30 years from now, it might kill you and everyone you know.
Does Long-Termism Ever Come Up With Different Conclusions Than Thoughtful Short-Termism?
I think yes, but only rarely, and in ways that seldom affect real practice.
Long-termism might be more willing to fund Progress Studies type projects that increase the rate of GDP growth by 0.01% per year in a way that compounds over many centuries. "Value change" type work - gradually shifting civilizational values to those more in line with human flourishing - might fall into this category too.
In practice I rarely see long-termists working on these except when they have shorter-term effects. I think there's a sense that in the next 100 years, we'll either get a negative technological singularity which will end civilization, or a positive technological singularity which will solve all of our problems - or at least profoundly change the way we think about things like "GDP growth". Most long-termists I see are trying to shape the progress and values landscape up until that singularity, in the hopes of affecting which way the singularity goes - which puts them on the same page as thoughtful short-termists planning for the next 100 years.
Long-termists might also rate x-risks differently from suffering alleviation. For example, suppose you could choose between saving 1 billion people from poverty (with certainty), or preventing a nuclear war that killed all 10 billion people (with probability 1%), and we assume that poverty is 10% as bad as death. A short-termist might be indifferent between these two causes, but a long-termist would consider the war prevention much more important, since they're thinking of all the future generations who would never be born if humanity was wiped out.
In practice, I think there's almost never an option to save 1 billion people from poverty with certainty. When I said that there was, that was a hack I had to put in there to make the math work out so that the short-termist would come to a different conclusion from the long-termist. A 1/1 million chance of preventing apocalypse is worth 7,000 lives, which takes $30 million with GiveWell style charities. But I don't think long-termists are actually asking for $30 million to make the apocalypse 0.0001% less likely - both because we can't reliably calculate numbers that low, and because if you had $30 million you could probably do much better than 0.0001%. So I'm skeptical that problems like this are likely to come up in real life.
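The expected-value math behind this thought experiment can be sketched in a few lines of Python. All figures here are the post's own stipulations (10 billion war deaths, poverty as 10% as bad as death, ~$30 million for 7,000 GiveWell-style lives), not real-world data:

```python
# A minimal sketch of the expected-value comparison in the thought
# experiment above. All numbers are the post's stipulations.

def death_equivalents(people, probability, badness=1.0):
    """Expected harm in 'death-equivalents': people * probability * badness."""
    return people * probability * badness

# Option A: save 1 billion people from poverty with certainty,
# where poverty is stipulated to be 10% as bad as death.
poverty = death_equivalents(1_000_000_000, 1.0, badness=0.10)

# Option B: a 1% chance of preventing a nuclear war killing all 10 billion.
war = death_equivalents(10_000_000_000, 0.01)

# Both come out to 100 million death-equivalents, so the short-termist
# is indifferent; the long-termist breaks the tie with future generations.
print(poverty, war)

# The sanity check from the paragraph above: a 1-in-a-million chance of
# preventing apocalypse for ~7 billion people is worth ~7,000 lives,
# which the post prices at ~$30 million via GiveWell-style charities.
lives = death_equivalents(7_000_000_000, 1e-6)
print(lives)  # 7000.0
```

The symmetry is the point: only because the certain option and the probabilistic option tie in short-term expected value does the long-term future become the deciding factor.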
When people allocate money to causes other than existential risk, I think it's more often as a sort of moral parliament maneuver, rather than because they calculated it out and the other cause is better in a way that would change if we considered the long-term future.
"Long-termism" vs. "existential risk"
Philosophers shouldn't be constrained by PR considerations. If they're actually long-termist, and that's what's motivating them, they should say so.
But when I'm talking to non-philosophers, I prefer an "existential risk" framework to a "long-termism" framework. The existential risk framework immediately identifies a compelling problem (you and everyone you know might die) without asking your listener to accept controversial philosophical assumptions. It forestalls attacks about how it's non-empathetic or politically incorrect not to prioritize various classes of people who are suffering now. And it focuses objections on the areas that are most important to clear up (is there really a high chance we're all going to die soon?) and not on tangential premises (are we sure that we know how our actions will affect the year 30,000 AD?).
I'm interested in hearing whether other people have different reasons for preferring the "long-termism" framework that I'm missing.
Speaking about AI Risk particularly, I haven't bought into the idea that there's a "cognitively substantial" chance AI could kill us all by 2050. And even if I had, many of my interlocutors haven't. There are two key points to get across to bring the average interlocutor on the street or at a party to an Eliezer Yudkowsky level of worrying:
It's not trivial to convince people of either of these points without sounding a little nuts. So I understand why some people prefer to take the longtermist framing. Then it doesn't matter whether transformative AI will happen in 10 years or 30 or 100, and you only have to make the argument about why you should care about the magnitude of this problem.
If I think AI has maybe a 1% chance of being a catastrophic disaster, rather than, say, the 1/10 that Toby Ord gives it over the next 100 years or the higher risk that Yudkowsky gives it (>50%? I haven't seen him put a number to it), then I have to go through the additional step of explaining to someone why they should care about a 1% risk of something. During the pandemic, when the statistically average person had a ~1% chance of dying from covid, it was difficult to convince something like 1/3 of the population to give a shit about it. The problem with small numbers like 1%, or even 10%, is that a lot of people just shrug and dismiss them. Cognitively they round to zero. But the conversation "convince me 1% matters" can look a lot like just explaining longtermism to someone.
The way I like to describe it to my Intro to EA cohorts in the Existential Risk week is to ask "How many people, probabilistically, would die each year from this?"
So, if I think there's a 10% chance AI kills us in the next 100 years, that's 1 in 1,000 people "killed" by AI each year, or 7 million per year - roughly 17x more than malaria.
If I think there's a 1% chance, AI risk kills 700,000 per year - still about as important as malaria prevention, and much more neglected.
If I think there's a 0.1% chance, AI kills 70,000 per year - a non-trivial problem, but not w...
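The commenter's back-of-the-envelope conversion can be sketched as follows. The world population and malaria figures are rough assumptions, and dividing the century-level risk by 100 is the same linear simplification the comment uses (rather than the compounding annual probability):

```python
# A sketch of the "probabilistic deaths per year" framing above: convert
# an extinction-risk probability over a century into expected deaths/year.
# Population and malaria figures are rough order-of-magnitude assumptions.

WORLD_POPULATION = 7_000_000_000
MALARIA_DEATHS_PER_YEAR = 400_000    # rough annual figure

def expected_deaths_per_year(risk_over_century, years=100):
    # Linear simplification: spread the century-level risk evenly per year.
    annual_probability = risk_over_century / years
    return WORLD_POPULATION * annual_probability

for risk in (0.10, 0.01, 0.001):
    deaths = expected_deaths_per_year(risk)
    print(f"{risk:.1%} risk over 100 years -> {deaths:,.0f} expected "
          f"deaths/year ({deaths / MALARIA_DEATHS_PER_YEAR:.1f}x malaria)")
```

A 10% century-level risk works out to 7 million expected deaths per year (roughly 17x malaria), 1% to 700,000, and 0.1% to 70,000, matching the figures in the comment.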