A key (and relatively new) proposition in EA discussions is "Strong Longtermism": the claim that the vast majority of the value in the universe lies in the far future, and that we therefore need to focus on it. This far future is often understood to be so valuable that almost any amount of preference for the long term is justifiable.
In this brief post, I want to argue that this strong claim is unnecessary, that it creates new problems a weaker claim easily avoids, and that it should be replaced with that weaker claim. (I am far from the first to propose this.)
The 'regular longtermism' claim, as I present it, is that we should assign approximately as much value to the long-term future as we do to the short term. This is a philosophically difficult position which, I argue, is nonetheless superior to either the status quo or strong longtermism.
Philosophical grounding
The typical presentation of longtermism is that if we do not discount future lives exponentially, then almost any weight placed on the future, whose potential value is almost certainly massively larger than the present's, will overwhelm the value of the present. This is hard to justify intuitively: it implies that we should ignore near-term costs, and (taken to the extreme) could justify almost any atrocity in pursuit of a minuscule reduction in long-term risk.
The typical alternative, naïve economic discounting, assumes that we should exponentially discount the far future at some fixed positive rate. This leads to claims that a candy bar today is worth more than the entire future of humanity starting in, say, 10,000 years. This is also hard to justify intuitively.
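To see why exponential discounting has this implication, here is a minimal worked example; the 3% rate and the figure of 10^100 future lives are illustrative assumptions of mine, not numbers from any particular source. Discounting at a constant rate $r$ scales value at time $t$ by $e^{-rt}$, so over 10,000 years:

$$
e^{-0.03 \times 10{,}000} = e^{-300} \approx 5 \times 10^{-131}.
$$

Even an assumed $10^{100}$ future lives would then be worth only about $5 \times 10^{-31}$ present-life-equivalents, far less than a candy bar's worth of present welfare.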
A third perspective roughly justifies the current position: we should discount the future at the rate current humans think is appropriate, but also separately place significant value on having a positive long-term future. This preserves both the value of the long-term future of humanity, if positive, and the preference for the present. Lacking any strong justification for setting the balance, I will very tentatively claim the two should be weighted approximately equally, but this is not critical; almost any non-trivial weight on the far future would be a large shift from the status quo towards longer-term thinking. This may be non-rigorous, but it has many attractive features.
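One way to formalize this split (the symbols and the exact equal weighting are my illustrative choices, not a formulation from the original argument) is:

$$
V_{\text{total}} = \alpha \, V_{\text{near}} + (1 - \alpha) \, V_{\text{far}}, \qquad \alpha \approx 0.5,
$$

where $V_{\text{near}}$ is conventionally discounted near-term value and $V_{\text{far}}$ is the separately assessed value of a positive long-term future. As noted above, the argument only needs $1 - \alpha$ to be non-trivial, not exactly one half.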
The key question, it seems, is whether this new view actually differs from the alternatives, and whether the exact weights for the near and long term will matter in practice.
Does 'regular longtermism' say anything?
Do the different positions lead to different conclusions in the short term? If they do not, there is clearly no reason to prefer strong longtermism. If they do, it seems that almost all of these differences are intuitively worrying. Strong longtermism implies we should engage in much larger near-term sacrifices, and justifies ignoring near-term problems like global poverty unless they have large impacts on the far future. Strong neartermism, AKA strict exponential discounting, implies that we should do approximately nothing about the long-term future.
So, does regular longtermism suggest less focus on reducing existential risks than the status quo does? Clearly not. In fact, it suggests that overwhelmingly more effort should be spent on avoiding existential risk than is currently devoted to the task. It may suggest less effort than strong longtermism does, but only to the extent that we have very strong epistemic reasons for thinking that very large short-term sacrifices would be effective.
What now?
I am unsure that there is anything new in this post. At the same time, the debate seems to have crystallized into two camps, both of which I strongly disagree with: the "anti-longtermist" camp, typified by Phil Torres, who is horrified by the potentially abusive implications of longtermism, and Vaden Masrani, who wrote a criticism of the idea, versus the "strong longtermism" camp, typified by Toby Ord (Edit: see Toby's comment) and Will MacAskill (Edit: see Will's comment), which seems to imply that Effective Altruism should focus entirely on longtermism. (Edit: I should now say that it turns out that this is a weak-man argument, but also note that several commenters explicitly say they embrace this viewpoint.)
Given the putative dispute, I would be very grateful if we could start to figure out, as a community, whether the strong form of longtermism is a tentative attempt to work out a coherent position that avoids potentially worrying implications, or whether it is intended as a philosophical shibboleth. I will note that my typical-mind-fallacy view is that both sides actually endorse, or at least only slightly disagree with, my mid-point view, but I may be completely wrong.
- Note that Will has called this "very strong longtermism", but it is unclear how a line is drawn between the very strong and strong forms, especially because the definition-based version he proposes, that human lives in the far future are equally valuable and should not be discounted, seems to lead directly to the very strong longtermist conclusion.
- (Edited to add:) In contrast, any split between near-term and long-term value completely changes the burden of proof for longtermist interventions. As noted here, given strong longtermism, we would have a clear case for any positive-expectation risk reduction measure, and the only way to refute it would be to claim that the expected reduction in risk is actually negative. With a weaker form, we can instead perform a cost-benefit analysis to decide whether the near-term loss is worthwhile, as sketched below.
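A hypothetical way to make this concrete, reusing the illustrative symbols from the earlier sketch (again my own formalization, not a claim from the original): under strong longtermism ($\alpha \to 0$), any intervention with positive expected long-term benefit $B$ (for instance $\Delta p \cdot V_{\text{far}}$, the expected risk reduction times the value of the long-term future) is justified whatever its near-term cost $C$. Under the split view, it is justified only if

$$
(1 - \alpha)\, B > \alpha\, C,
$$

which is an ordinary cost-benefit test once $\alpha$ is non-trivial.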
I tentatively believe (ii), depending on some definitions. I'm somewhat surprised to see Ben and Darius implying it's a really weird view, which makes me wonder what I'm missing.
I don't want the EA community to stop working on all non-longtermist things. But that is because I think many of those things have positive indirect effects on the EA community. (I just mean indirect effects on the EA community, and maybe on the broader philanthropic community; I don't mean indirect effects more broadly, in the sense of 'better health in poor countries' --> 'more economic growth' --> 'more innovation'.)
For example, non-longtermist interventions are often a good way to demonstrate EA ideas and successes (e.g. pointing to GiveWell is really helpful as an intro to EA); non-longtermist causes are a way for people to get involved with EA and end up working on longtermist causes (e.g. [name removed], incoming at GPI, comes to mind as a great success story along those lines); work on non-longtermist causes has better feedback loops, so it might improve the community's skills (e.g. Charity Entrepreneurship incubatees are probably highly skilled 2-5 years after the program, though I'm not sure that actually translates to more skill-hours going towards longtermist causes).
But none of these reasons amount to thinking that the actual intended impact of non-longtermist interventions is competitive with longtermist interventions. E.g. I think Charity Entrepreneurship is good because it's creating a community and culture of founding impact-oriented nonprofits, not because [it's better for shrimp/there's less lead in paint/fewer children smoke tobacco products]. Basically, I think the only reason near-term interventions might be good is that they might make the long-term future go better.
I'm not sure what counts as 'astronomically' more cost-effective, but if it means ~1000x more important/cost-effective, I might agree with (ii). It's hard to come up with a good thought experiment here to test this intuition.
One hypothetical is 'would you rather $10,000 gets donated to the Long-Term Future Fund, or $10 million gets donated to GiveWell's Maximum Impact Fund' (the amounts differ by exactly the hypothesized factor of 1,000). This is confusing, though, because I'm not sure how important extra funding is in these areas. Another hypothetical is 'would you rather 10 fairly smart people devote their careers to longtermist causes (e.g. following 80k advice), or 10,000 fairly smart people devote their careers to neartermist causes (e.g. following AAC advice)'. This is confusing because I expect 10,000 people working on effective animal advocacy to have some effect on the long-term future. Some of them might end up working on nearby longtermist topics like digital sentience. They might slightly shift the culture of veganism to be more evidence-based and welfarist, which could lead to a faster flow of people from veganism to EA over time. They would also do projects which EA could point to as successes, which could be helpful for getting more people into EA and eventually into longtermist causes.
If I try to imagine a version of this hypothetical without those externalities, I think I prefer the longtermist option, indicating that the 1000x difference seems plausible to me.
I wonder if part of the reason people don't hold the view I do is some combination of (1) 'this feels weird so maybe it's wrong' and (2) 'I don't want to be unkind to people working on neartermist causes'.
I think (1) does carry some weight and we should be cautious when acting on new, weird ideas that imply strange actions. However, I'm not sure how much longtermism actually falls into this category.
I also feel the weight of (2). It makes me squirm to reconcile my tentative belief in strong longtermism with my admiration of many people who do really impressive work on non-longtermist causes, and my desire to get along with those people. I really think longtermists shouldn't make people who work on other causes feel bad. However, I think it's possible to commit to strong longtermism without making other people feel attacked or unappreciated. And I don't think these kinds of social considerations have any bearing on which cause to prioritise working on.
I feel like a big part of the edge of the EA and rationality community is that we follow arguments to their conclusions even when it's weird, or it feels difficult, or we're not completely sure. We make tradeoffs even when it feels really hard - like working on reducing existential risk instead of helping people in extreme poverty or animals in factory farms today.
I feel like I also need to clarify some things: