I hear two conflicting voices in my head, and in EA:
- Voice: it's highly uncertain whether deworming is effective, based on 20 years of research, randomized controlled trials, and lots of feedback. In fact, many development interventions have a small or negative impact.
- Same voice: we are confident that work for improving the far future is effective, based on <insert argument involving the number of stars in the universe>.
I believe that I could become convinced to work on AI safety or extinction risk reduction. My main crux is that these problems seem intractable: I am worried that my work would have a negligible or even a negative impact.
These questions have not been sufficiently addressed yet, in my opinion. So far, I've seen mainly vague recommendations (e.g., "community building work does not increase risks" or "look at the success of nuclear disarmament"). Examples of existing work for improving the far future often feel very indirect (e.g., "build a tool to better estimate probabilities ⇒ make better decisions ⇒ facilitate better coordination ⇒ reduce the likelihood of conflict ⇒ prevent a global war ⇒ avoid extinction") and thus disconnected from actual benefits for humanity.
One could argue that uncertainty is not a problem, that it is negligible when considering the huge potential benefit of work for the far future. Moreover, impact is fat-tailed, so the expected value is dominated by a few extremely impactful projects, which makes projects worth trying even if their probability of success is low.[1] This makes sense, but only if we can protect against large negative impacts. I doubt we really can: for example, a case can be made that even safety-focused AI researchers accelerate AI and thus increase its risks.[2]
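To make this tension concrete, here is a minimal Monte Carlo sketch. Every number in it (the 1% success rate, the Pareto tail, the −1000 harm, the harm probabilities) is invented purely for illustration, not an estimate of anything real:

```python
import random

random.seed(0)
N = 1_000_000  # Monte Carlo samples per scenario

def project_impact(p_downside: float) -> float:
    """Draw one project's impact from a toy fat-tailed distribution.

    Assumptions (all invented for illustration):
    - with probability p_downside, the project backfires badly (-1000);
    - otherwise, 1% of projects succeed with a Pareto-distributed
      (fat-tailed) positive payoff;
    - the remaining projects achieve nothing.
    """
    if random.random() < p_downside:
        return -1000.0
    if random.random() < 0.01:
        return random.paretovariate(1.5)  # heavy right tail, mean 3
    return 0.0

for p in (0.0, 0.001, 0.01):
    ev = sum(project_impact(p) for _ in range(N)) / N
    print(f"P(large harm) = {p:.3f}  ->  expected impact = {ev:+.3f}")
```

In this toy model the expected value is positive when large harms are impossible, but even a 0.1% chance of the large-harm outcome flips it negative: the tail does all the work, in both directions.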
One could argue that community building or writing *What We Owe the Future* are concrete ways to do good for the future. Yet this seems to shift the problem rather than solve it. Consider a community builder who convinces 100 people to work on improving the far future. There are now 100 people doing work with uncertain, possibly negative impact. The community builder's impact is some function f(x₁, …, x₁₀₀) of those 100 individual impacts, and it is similarly uncertain and possibly negative. This is especially true if the xᵢ are fat-tailed, as the total will be dominated by the most successful (or most destructive) people.
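Extending the same toy model (again, every number is an illustrative assumption): the community builder's outcome is just the sum of 100 draws from the recruits' distribution, so the uncertainty and the tail risk carry straight through.

```python
import random

random.seed(1)

def project_impact(p_downside: float) -> float:
    """Same toy fat-tailed distribution as in the sketch above."""
    if random.random() < p_downside:
        return -1000.0
    if random.random() < 0.01:
        return random.paretovariate(1.5)
    return 0.0

def community_builder_impact(n_recruits: int = 100,
                             p_downside: float = 0.001) -> float:
    """The community builder's total impact: the summed impacts
    of the n_recruits people they brought into the field."""
    return sum(project_impact(p_downside) for _ in range(n_recruits))

# Sample 10,000 possible "careers" and look at the spread of outcomes.
outcomes = sorted(community_builder_impact() for _ in range(10_000))
print("5th percentile: ", round(outcomes[500], 1))
print("median:         ", round(outcomes[5_000], 1))
print("95th percentile:", round(outcomes[9_500], 1))
```

With these assumed numbers, the median career does a small amount of good, but roughly one in ten (1 − 0.999¹⁰⁰ ≈ 9.5%) recruits someone who causes the large-harm outcome, so the low percentiles are strongly negative: the uncertainty has been multiplied, not resolved.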
To summarize: How can we reliably improve the far future, given that even near-termist work like deworming, with plenty of available data, research, rapid feedback loops, and simple theories, so often fails? As someone who is eager to spend my working time well, who thinks that our moral circle should include the future, but who does not know of ways to reliably improve it... what should I do?
Will MacAskill on fat-tailed impact distribution: https://youtu.be/olX_5WSnBwk?t=695 ↩︎
For examples on this forum, see *When is AI safety research harmful?* or *What harm could AI safety do?* ↩︎
Thank you for this detailed reply. I really appreciate it.
I overall like the point about preventing harm. There seem to be two kinds: (1) small harms, like breaking a glass bottle. I absolutely agree that preventing these is good, but I think that typical longtermist arguments don't apply here, because such actions have no lasting effect on the future. (2) large, irreversible harms, like ocean pollution. Here, I think we are back to the tractability issues I write about in the post. It is extremely difficult to reliably improve ocean health; much of the work is indirect (e.g., write a book to promote veganism ⇒ fewer people eat fish ⇒ reduced demand causes less fishing ⇒ fish populations improve).
Projects that preserve knowledge for the future (like the Lunar Library) are probably net positive; I agree with you on this. However, the scenarios in which these projects have a large impact are very exotic: many improbable conditions would need to coincide. So again, this is very indirect work, and it is quite likely to have zero benefit.
Improving human genes and physical experiences is intriguing; I haven't thought much about it before, so thank you for the idea. I'll think about it more, but I would note that past efforts in this area have sometimes gone horribly wrong, for example the eugenics movement of the Nazi era. There is also positive precedent, though: I believe GMO crops are probably a net win for agriculture.
In the last part of your answer, you mention coordination problems, misaligned incentives, errors, and so on. I think we agree 100% here: these problems are a big part of why work for improving the far future seems so intractable to me. Even work to improve today's world is difficult, but at least that work has data, experiments, and fast feedback (as in the deworming case).