
Suppose we're sometime in the (near-ish) future. The longtermist project hasn't fulfilled 2020's expectations. Where did we go wrong? What scenarios (and with what probabilities) may have led to this?

I hope this question isn't strictly isomorphic to asking about objections to long-termism.

Neither of the possibilities below seems like something that would be easy to recognise even once we're in some (near-ish) future. I hope this isn't begging the question; it isn't intended to be. I've put credences on them (I'm glad you asked for them), but they are very uncertain.

One possibility is that we were just wrong about the whole long-termism thing. Given how much disagreement in philosophy there seems to be about basically everything, it seems prudent to give this idea non-trivial credence, even if you find arguments for long-termism very convincing. I'd maybe give a 10% probability to long-termism just being wrong.

More significant seems to be the chance that long-termism was right, but that trying to intervene directly in the long-term future, by taking actions expected to have consequences only in the long term, was a bad strategy. Instead we should have been (approximate credences):

  • Investing money to be spent in the future (10%)
  • Investing in the future by growing the EA community (25%)
  • Doing the most good possible in the short term for the developing world/animals, as this turns out to shape the future more positively than directly trying to does (20%)

What could you observe that would cause you to think that longtermism is wrong? (I ask out of interest; I think it's a subtle question.)

alex lawsen
A really convincing argument from a philosopher or group of philosophers I respected would probably do it, especially if it caused prominent longtermists to change their minds. I've no idea what this argument would be; if I could think of it myself, it would already have changed my mind.
Eli Rose
Makes sense!

What about a scenario where long-termism turns out to be right, but there is some sort of community-level value drift which results in long-term cause areas becoming neglected, perhaps as a result of the community growing too quickly or some intra-community interest groups becoming too powerful? I wouldn't say this is very likely (maybe 5%), but we should consider the base rate of this type of thing happening.


I realise that this outcome might be subsumed in the points raised above. Specifically, it might be that instead of directly trying to intervene in the long-term future, EA should have invested in sustainably growing the community with the intention of avoiding value drift (option 2). I am just wondering how granular we can get with this pre-mortem before it becomes unhelpfully complex.


From a strategic point of view this pre-mortem is a great idea.

Great comment. I count only 65 percentage points - is the other third "something else happened"?

Or were you not conditioning on long-termist failure? (That would be scary.)

alex lawsen
I was not conditioning on long-termist failure, but I also don't think my last three points are mutually exclusive, so they shouldn't be naively summed.
Azure
Additionally, is it not likely that those scenarios are correlated?
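
As a small illustration of the last two comments (a sketch only; the independence and nesting assumptions below are hypothetical, not anything claimed in the thread), overlapping or correlated scenarios make the probability that at least one of them holds lower than the naive sum of their credences:

```python
# A minimal, hypothetical illustration of why the credences above shouldn't be
# naively summed when the scenarios aren't mutually exclusive.
# Marginal credences taken from the answer: investing money (10%),
# growing the EA community (25%), short-term good shaping the future (20%).
p_invest, p_grow, p_short_term = 0.10, 0.25, 0.20

# The naive sum is only an upper bound on P(at least one scenario holds).
naive_sum = p_invest + p_grow + p_short_term  # 0.55

# If the three scenarios were independent (an assumption, not a claim made in
# the thread), the probability that at least one holds would be lower:
p_any_if_independent = 1 - (1 - p_invest) * (1 - p_grow) * (1 - p_short_term)  # ~0.46

# Positive correlation lowers it further. In the extreme hypothetical where the
# smaller scenarios only occur when the largest one does:
p_any_if_nested = max(p_invest, p_grow, p_short_term)  # 0.25

print(f"naive sum:       {naive_sum:.2f}")
print(f"if independent:  {p_any_if_independent:.2f}")
print(f"if fully nested: {p_any_if_nested:.2f}")
```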