
If you ask modern young people about having children, a lot of them say something about climate change. This is bad. Along with overpopulation/overconsumption worries, there's a refrain of "well, the world will probably end anyway." But in that case - climate change - the world very likely will not end. Rather, an apocalyptic mood can seep in, and that mood is hazardous to long-term planning and mental health.

I see something similar with x-risk, and specifically AI risk. So allow me to make the case that you, personally, shouldn't totally change your life plan in light of the possibility that AI destroys humanity.

Outside View One: Bad Vibes

Let's start with the basics. People are really receptive to arguments that the end is nigh, and conventional wisdom is generally that "modern times" are corrupt and dangerous relative to the good old days.

On top of this, the last several years have seen an increased incidence of people feeling like right now in particular really sucks, and of that feeling being socially contagious. I stopped using most social media in part because every single year people posted memes about how, sure, [LAST_YEAR] and [TWO_YEARS_AGO] were bad, but [CURRENT_YEAR] really takes the cake! (This was before COVID-19, though of course that did accelerate the trend.)

So to a first approximation we can assume there's something about modern (or even timeless?) conditions that encourages doomy thoughts and feelings. These ideas are just pretty contagious, maybe even invasive, and we have at least one clear example (climate change) where the severity of an actual, serious problem is systematically overstated by the people most invested in it.

In other words, if you find yourself thinking "We're all doomed" as a sort of cognitive shorthand for a more complex/layered thing underneath, you may want to find a different shorthand, because it's a way to trick yourself into buying the uncritical, simplistic version of the claim, even if you can't defend it.

Outside View Two: Forecasting is Really Hard

(If you don't like the arguments in this section or find yourself rolling your eyes, it's fine to just skip it. The rest doesn't depend on it heavily.)

If you think back to early EA, a lot of it was nerds lamenting non-rigorous evidentiary practices for charity. Or, well, lots of people lamented that as an excuse not to be charitable. But EAs did something about it. They hunted for the best evidence and publicized it. And along the way, they made some notable discoveries. Specifically:

  • Lots and lots and lots of stuff that sounds airtight actually fails spectacularly, and without a strong incentive (like a profit incentive), it can often just fail indefinitely on someone else's dime
  • Lots of other stuff has strong evidence in favor in one context, but fails badly in different contexts, so it gets really confusing
  • Still other stuff - those rare interventions with overall solid evidence - is nonetheless very complicated and can inspire tons and tons of debate, usually boiling down to judgment calls (AMF vs. SCI vs. GiveWell is not an objective or simple comparison, and those are all in the same broad cause area)

When you try to make reality your benchmark and check your work, you discover that it's just extremely hard to figure out what the best opportunities are. Even preregistered RCTs mysteriously fail to replicate all the time, and those are the gold standard for scientific studies.

So if "can we really trust this meta-analysis" is a fraught question, speculating about technological progress (famously unpredictable with a few exceptions like Moore's Law - but even that has fallen off) is just off the map. We should discount anything in this space quite heavily, maybe even so heavily it's just in a totally different qualitative category. Cluster thinking has real merit here!

Inside View One: What are the Odds, Anyway?

If you're like me, by this point outside view considerations already have your salt shaker primed. But let's slide it just out of reach for now, and think about the odds of human extinction given we trust prevalent x-risk models as a whole.

So, what are those odds?

First and foremost, nobody knows. Furthermore, nobody will ever know, by definition. They could be 99% and we could luck into the 1%. They could be 0.01% and we could all die anyway. This is not an experiment we can run 100 times.

But with that out of the way, the guy who wrote the book on x-risk, Toby Ord, says AI x-risk this century is about 10%. There are lots of caveats here, and you can see some by clicking through to that post.

Notably, he also says the estimate could easily be off by a factor of three in either direction. So, could be 3%, could be 30%.

Toby argues - and I think he's right - that if his guesses/arguments are anywhere near accurate it makes sense for loads of smart people and smart money to go after defending us from advanced AI risks. Just seems like a good idea. In fact, indirectly, I am one such person! But 10% (or 3%, or even 30%) is a far cry from "we're all probably doomed".

I don't think you have to stop here, though.

Inside View Two: Should we Discount?

Toby Ord has, in my view, a great track record. For one, he helped invent effective altruism. That's a pretty solid move. I find him to be a rigorous thinker and when I've seen and transcribed his talks I've been impressed by his careful analyses.

So there's some argument for just... importing his view wholesale? He says 10% with wide error bars; it wouldn't be that crazy to just believe that as a first approximation. But personally, despite my great respect for him and his research, I do think some discounting is appropriate.

Why? Because broad forecasts of the future are extremely hard. Lots of people thought the internet wouldn't matter, basically nobody saw the industrial revolution coming, computers were a boondoggle until they very much weren't, communists have been anticipating a glorious revolution for well over a century, and many religions and cults are eagerly awaiting an imminent apocalypse.

The very act of writing a book on the future, of going out and seeking arguments, is itself accepting certain base assumptions/frames. And the world can be very cruel to such frames! Here's a list of things that, in retrospect, I don't think would be unprecedentedly shocking to have happened by 2100:

  • Video games/social media get dramatically better than they are now, such that most human beings on the planet spend most waking hours on diversions. Productivity is almost entirely automated, and innovation slows.
  • Medicine gets so much better that people's moods are dramatically better than they are today, and we all operate at what we'd now consider a superhuman level of cognition just due to having far fewer parasites/subtle post-viral syndromes.
  • There's a really, really bad pandemic that doesn't pose extinction risk but reshapes society significantly in many unpredictable ways.
  • There's serious political unrest in current major economic powers, or a world war between those powers, significantly disrupting the financial conditions underlying cutting-edge AI research.
  • There's a great power war including nuclear exchange, killing most people directly or indirectly.
  • There's some incredible gigantic breakthrough in some field other than AI, like nanotechnology, that captures everyone's attention and causes shifts in the balance of labor and power comparable to the industrial revolution or rise of the internet.

What happens to x-risk forecasts in each of these cases? I don't know! And I just made them up. Given time, I could make up hundreds of thousands more. Maybe not a single one would happen quite as I'd envisioned it, but some other equally surprising/impactful thing almost certainly would.

I think, when making models of the world, you sort of have to just ignore the possibility of huge paradigm shifts upending the model wholesale. If you're always thinking "but what if some thing I'll never think of makes all this invalid", then you don't get anywhere. But when we're talking about the long or medium-run future, some thing you'll never think of probably will complicate your argument. Because history is surprising, especially before it's history at all.

And now I am speaking really personally, since by nature this sort of thing is qualitative and fuzzy and hard to pin down. But instinctively, I'm prone to scale down probabilities that come from other people's intricate world models by, say, a factor of (at least) 5. So if I see some argument that looks pretty good on the merits, but it's making broad medium-to-long term predictions about the world, and the argument implies a 10% chance of some event, I'm more likely to assume a 2% chance for that event.

Another example of this is here. This guy, who I believe knows a lot more than me on the subject, put the odds of nuclear war with North Korea at about 10%. But I'd have put them around 2%, because the world is really weird and complicated and specific events often just don't happen.

(Obviously there are caveats here, since I can't divide every probability in a mutually exclusive, exhaustive set by 5 - they'd no longer sum to 1. But I think this can work for specific positive events, as opposed to catch-all outcomes like "or none of that happens".)
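
To make that arithmetic concrete, here's a minimal sketch of the heuristic in Python. The factor of 5 and the 10% figure are just the illustrative numbers from above, not anything rigorous:

    # A rough sketch of the "discount other people's detailed forecasts" heuristic.
    # The factor of 5 and the 10% input are illustrative assumptions, not derived.

    def discount_forecast(p_event: float, factor: float = 5.0) -> float:
        """Scale down the claimed probability of a specific forecasted event."""
        return p_event / factor

    p_claimed = 0.10                             # e.g. an argument implying a 10% chance
    p_discounted = discount_forecast(p_claimed)  # 0.02, i.e. 2%

    # The caveat above: you can't discount every outcome in a mutually exclusive,
    # exhaustive set, or the probabilities stop summing to 1. The freed-up mass
    # lands in the catch-all "none of these specific things happens" outcome.
    p_catch_all_before = 1 - p_claimed           # 0.90
    p_catch_all_after = 1 - p_discounted         # 0.98

    print(f"{p_discounted:.0%} for the event, {p_catch_all_after:.0%} for the catch-all")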

What happens if you think it's 2%?

Do I really think there are 2% odds we're all killed by AI in the next century? I don't know. Mostly depends on if I've been scrolling on LessWrong for 2 hours and it's currently 3am. If so, I feel like it's higher. If I've just had my coffee and it's 11am on a nice day, I feel like it's lower. But 2% seems like a naively ok anchor to me. What does that actually feel like?

  • The odds of dying in a car crash over your lifetime are about 1%.
  • The odds of dying of an opioid overdose, across the US population in general, are about 1.5%.
  • The odds of dying of cancer are about 14%.

So say you're considering having a kid. It's reasonable to worry a little that they'll be killed by AI, perhaps even when they're still young. Just like it's reasonable to make sure they understand that it's important to wear a seatbelt, and to get screened if they find any weird lumps when they're older.

But let's think about those opioid numbers for a second. I actually know a couple of people who died of opioid overdoses. Their parents seemed perfectly normal. I don't think there was much those parents could realistically have done to prevent their children from dying in this way. But I also don't think it would have been reasonable, when they were debating having a child, to worry that that child might get addicted to heroin and die.

It would have been correct, sure. And that's really chilling. For me, too, as someone who'd like to have a child. And it may be correct that AI kills us all. But... risk is just part of making life plans. We deal with low risks of horrifying outcomes all the time.

So, you know. Wear your seatbelt. Be nice to your local AI Safety researcher. But keep your salt shaker ready, and maybe put a little money in your 401k, too.

Comments

Thanks for writing the post :-)

I think I'm a bit confused: going by the title, I expected the post to say something like "even if you think AI risk by year Y is X% or greater, you maybe shouldn't change your life plans too much", but instead you're saying "AI risk might be lower than you think, and at a low level it doesn't affect your plans much" and then giving some good considerations for potentially lower AI x-risk.

I really want to ask the question: Why do people think that doom is inevitable or that things usually get worse, not better? Is it actually founded on reality, or is a lot of the doomerism psychological and social?

I also understand outside view 1 entirely, given that voters have very bad moods about both the economy and inflation when both are actually doing okayish.

I'd guess a lot of it is an evolved defense mechanism to use learned helplessness to avoid confronting difficult situations more directly. Basically the same just-so explanation as the one behind the "rank theory of depression", where people who lose a lot of status fights become depressed so they don't keep losing them.

I think it varies with the merits of the underlying argument! But at the very least we should suppose there's an irrational presumption toward doom: for whatever reason(s), maybe evopsych or maybe purely memetic, doomy ideas have some kind of selection advantage that's worth offsetting.
