
I previously asked this question in an AMA, but I'd like to pose it to the broader EA Forum community:

Do you believe that AGI poses a greater existential risk than other proposed x-risk hazards, such as engineered pandemics? Why or why not?

For background, it seems to me like the longtermist movement spends lots of resources (including money and talent) on AI safety and biosecurity as opposed to working to either discover or mitigate other potential x-risks such as disinformation and great power war. It also seems to be a widely held belief that transformative AI poses a greater threat to humanity's future than all other existential hazards, and I am skeptical of this. At most, we have arguments that TAI poses a big x-risk, but not arguments that it is bigger than every other x-risk (although I appreciate Nate's argument that it does outweigh engineered pandemics).

2 Answers

Sorry, I totally meant to answer this on the AMA, but ended up missing it.

I don't have particularly strong views on which x-risk is largest, mostly because I haven't thought very much about x-risks other than AI.

(That being said, for many x-risks such as nuclear war you can get some rough bounds by noticing that they haven't occurred yet, whereas we don't have AGI yet so we can't make a similar argument. Anyone who thinks that AI poses, say, >50% x-risk could use these arguments to justify that AI x-risk is larger than most other risks.)
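A back-of-the-envelope version of that bound (a sketch with made-up numbers, not a claim about the actual figures): if the annual probability of an existential catastrophe from nuclear war were a constant $p$, the chance of seeing roughly $n \approx 77$ catastrophe-free years since 1945 would be

$$(1-p)^{n}, \qquad \text{e.g.} \quad (1 - 0.05)^{77} \approx 0.02,$$

so annual probabilities of 5% or more fit poorly with the historical record. No analogous observation constrains AGI risk, since AGI has not yet had a chance to occur.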

I do think that "all else equal", AI alignment is the most impactful object-level x-risk to work on, because:

  1. The absolute level of risk is not small (and is at least comparable to other risks)
  2. It's a "single problem", i.e. there is a specific, technical-ish problem description, and it is plausible that there is a "single solution" we have to find that entirely handles it. (This cashes out as higher tractability in the ITN framework, though there are also reasons to expect lower tractability.)
  3. It's extremely neglected relative to most other risks.

I am not sure whether "AI alignment is the most impactful object-level x-risk to work on" applies, even "all else equal" (by which I think you mean that we don't have good likelihood estimates), to people without the relevant technical skills.

If there is some sense in which "all risks are equal", then I would direct people with policy skills to focus their attention right now on pandemics (or on general risk management), where the work is much more politically tractable and it is much clearer what kinds of policy changes are needed.

By "all else equal" I meant to ignore questions of personal fit (including e.g. whether or not people have the relevant technical skills). I was not imagining that the likelihoods were similar.

I agree that in practice personal fit will be a huge factor in determining what any individual should do.

Ah, sorry, I misunderstood. Thank you for the explanation :-)

I think the answer depends on the timeframe you are asking over. Below are some example timeframes you might ask the question over, with a plausible answer for the biggest x-risk in each.

  • 1-3 years: nuclear war
    Reasoning: we are not close enough to building TAI for it to happen in the next few years. Nuclear war this year seems possible.
  • 4-20 years: TAI
    Reasoning: Firstly, you could say we are a bit closer to TAI than to building x-risk-level viruses (I am very unsure about that). Secondly, the TAI threat is most worrying in scenarios where it happens very quickly and we lose control (a fast risk), whereas the pandemic threat is most worrying in scenarios where we gradually get more and more ability to produce homebrew viruses (a slow risk).
  • 21-50 years: TAI or manmade pandemics (unclear)
    Reasoning: As above, but TAI is less worrying if we have lots of time to work on alignment.
  • 51-100 years: unknown unknown risks
    Reasoning: Imagine trying to predict the biggest x-risks today from 50 years ago. The world is changing too fast. There are so many technologies that could be transformative and potentially pose x-risk level threats. To think that the risks we think are biggest today will still be biggest in 50+ years is hubris.

I think as a community we could do more to map out the likelihood of different risks on different timeframes, and to consider strategies for addressing unknown unknown risks.

Comments (5)

To clarify, is the main source of your skepticism that you don't think TAI has a particularly high chance of leading to an existential catastrophe, or are you also not sure we get TAI soon enough to matter?

Also, I think your post is asking for arguments which directly compare risks. Is there a reason you'd find this particularly compelling? If I tell you a coin has 2 sides, and Alice tells you a die has 6, it feels like you have enough information to work out that getting tails is more likely than rolling a 1, even if I've never met Alice.
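Spelled out (just making the comparison explicit): the two absolute estimates already imply a ranking without anyone comparing the coin and the die directly, since

$$P(\text{tails}) = \tfrac{1}{2} > \tfrac{1}{6} = P(\text{rolling a 1}).$$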

The Precipice is probably the best place to start if you do want direct comparisons, though.

My main source of skepticism is that I am not sure whether we'll get to TAI this century. While there are currently some organizations dedicated to building AGI (OpenAI, DeepMind), it could be that comprehensive AI services obviate the economic incentive to develop AGI rather than a collection of narrow AIs (especially given that AGI poses known risks that narrow AIs don't).

If I tell you a coin has 2 sides, and Alice tells you a die has 6, it feels like you have enough information to work out that getting tails is more likely than rolling a 1

Yes, that is an argument that directly compares probabilities. Fair coins and fair 6-sided dice are straightforward mathematical objects, whereas x-risk and expected value estimates for x-risk reduction depend on sketchy numerical assumptions.

Sorry for my extremely delayed response, btw!

Great power conflict is generally considered an existential risk factor, rather than an existential risk per se – it increases the chance of existential risks like bioengineered pandemics, nuclear war, and transformative AI, or lock-in of bad values (Modelling great power conflict as an existential risk factor, The Precipice chapter 7).

I can define a new existential risk factor that could be as great as all existential risks combined – the fact that our society and the general populace do not sufficiently prioritize existential risks, for example. So no, I don't think TAI is greater than all possible existential risk factors. But I think addressing this "risk" would involve thinking a lot about its impact mediated through more direct existential risks like TAI, and if TAI is the main one, then that would be a primary focus.

This passage from The Precipice may be helpful:

the threat of great-power war may (indirectly) pose a significant amount of existential risk. For example, it seems that the bulk of the existential risk last century was driven by the threat of great-power war. Consider your own estimate of how much existential risk there is over the next hundred years. How much of this would disappear if you knew that the great powers would not go to war with each other over that time? It is impossible to be precise, but I’d estimate an appreciable fraction would disappear—something like a tenth of the existential risk over that time. Since I think the existential risk over the next hundred years is about one in six, I am estimating that great power war effectively poses more than a percentage point of existential risk over the next century. This makes it a larger contributor to total existential risk than most of the specific risks we have examined.

While you should feel free to disagree with my particular estimates, I think a safe case can be made that the contribution of great-power war to existential risk is larger than the contribution of all natural risks combined. So a young person choosing their career, a philanthropist choosing their cause or a government looking to make a safer world may do better to focus on great-power war than on detecting asteroids or comets.
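(Unpacking the arithmetic that the quote leaves implicit: a tenth of a one-in-six total risk is $\tfrac{1}{6} \times \tfrac{1}{10} = \tfrac{1}{60} \approx 1.7$ percentage points, hence "more than a percentage point".)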

This is an interesting point, thanks! I tend not to distinguish between "hazards" and "risk factors" because the distinction between them is whether they directly or indirectly cause an existential catastrophe, and many hazards are both. For example:

  1. An engineered pandemic could wipe out humanity either directly or indirectly by causing famine, war, etc.
  2. Misaligned AI is usually thought of as a direct x-risk, but it can also be thought of as a risk factor, because it would use its knowledge of other hazards in order to drive humanity extinct as efficiently as possible (e.g. by infecting all humans with botox-producing nanoparticles).

Mathematically, you can speak of the probability of an existential catastrophe given a risk factor by summing up the probabilities of that risk factor indirectly causing a catastrophe by elevating the probability of a "direct" hazard:
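A minimal way to write that out (my own notation, assuming the events "catastrophe via hazard $H_i$" are mutually exclusive):

$$P(\text{catastrophe} \mid F) = \sum_i P(\text{catastrophe via } H_i \mid F),$$

and the overall contribution of the risk factor $F$ is then $P(\text{catastrophe} \mid F) - P(\text{catastrophe} \mid \lnot F)$.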

You can do the same thing with direct risks. All that matters for prioritization is the overall probability of catastrophe given some combination of risk factors.

It's important to distinguish existential risk (x-risk) from global catastrophic risk (GCR). Nuclear war and extreme climate change, for example, are much more likely to have survivors, so they are mostly GCRs rather than x-risks. Similarly with engineered pandemics: they seem more likely to be survivable by some fraction of humanity, owing to the relatively slow speed of spread and the possibility of countermeasures (you are only up against human-level intelligence), compared to an unaligned AGI (where you are up against a superintelligence that could wipe out the human race in minutes).
