
I've been citing AGI Ruin: A List of Lethalities to explain why the situation with AI looks lethally dangerous to me. But that post is relatively long, and emphasizes specific open technical problems over "the basics".

Here are 10 things I'd focus on if I were giving "the basics" on why I'm so worried:[1]


1. General intelligence is very powerful, and once we can build it at all, STEM-capable artificial general intelligence (AGI) is likely to vastly outperform human intelligence immediately (or very quickly).

When I say "general intelligence", I'm usually thinking about "whatever it is that lets human brains do astrophysics, category theory, etc. even though our brains evolved under literally zero selection pressure to solve astrophysics or category theory problems".

It's possible that we should already be thinking of GPT-4 as "AGI" on some definitions, so to be clear about the threshold of generality I have in mind, I'll specifically talk about "STEM-level AGI", though I expect such systems to be good at non-STEM tasks too.

Human brains aren't perfectly general, and not all narrow AI systems or animals are equally narrow. (E.g., AlphaZero is more general than AlphaGo.) But it sure is interesting that humans evolved cognitive abilities that unlock all of these sciences at once, with zero evolutionary fine-tuning of the brain aimed at equipping us for any of those sciences. Evolution just stumbled into a solution to other problems, one that happened to generalize to millions of wildly novel tasks.

More concretely:

  • AlphaGo is a very impressive reasoner, but its hypothesis space is limited to sequences of Go board states rather than sequences of states of the physical universe. Efficiently reasoning about the physical universe requires solving at least some problems that are different in kind from what AlphaGo solves.
    • These problems might be solved by the STEM AGI's programmer, and/or solved by the algorithm that finds the AGI in program-space; and some such problems may be solved by the AGI itself in the course of refining its thinking.[2]
  • Some examples of abilities I expect humans to only automate once we've built STEM-level AGI (if ever):
    • The ability to perform open-heart surgery with a high success rate, in a messy non-standardized ordinary surgical environment.
    • The ability to match smart human performance in a specific hard science field, across all the scientific work humans do in that field.
  • In principle, I suspect you could build a narrow system that is good at those tasks while lacking the basic mental machinery required to do par-human reasoning about all the hard sciences. In practice, I very strongly expect humans to find ways to build general reasoners to perform those tasks, before we figure out how to build narrow reasoners that can do them. (For the same basic reason evolution stumbled on general intelligence so early in the history of human tech development.)[3]

When I say "general intelligence is very powerful", a lot of what I mean is that science is very powerful, and that having all of the sciences at once is a lot more powerful than the sum of each science's impact.[4]

Another large piece of what I mean is that (STEM-level) general intelligence is a very high-impact sort of thing to automate because STEM-level AGI is likely to blow human intelligence out of the water immediately, or very soon after its invention.

80,000 Hours gives the (non-representative) example of how AlphaGo and its successors compared to humanity:

In the span of a year, AI had advanced from being too weak to win a single [Go] match against the worst human professionals, to being impossible for even the best players in the world to defeat.

I expect general-purpose science AI to blow human science ability out of the water in a similar fashion.

Reasons for this include:

  • Empirically, humans aren't near a cognitive ceiling, and even narrow AI often suddenly blows past the human reasoning ability range on the task it's designed for. It would be weird if scientific reasoning were an exception.
  • Empirically, human brains are full of cognitive biases and inefficiencies. It would be doubly weird if scientific reasoning were an exception, given that human scientific practice is visibly a mess with tons of blind spots, inefficiencies, and motivated cognitive processes, and given the innumerable historical examples of scientists and mathematicians taking decades to make technically simple advances.
  • Empirically, human brains are extremely bad at some of the most basic cognitive processes underlying STEM.
    • E.g., consider the stark limits on human working memory and ability to do basic mental math. We can barely multiply smallish multi-digit numbers together in our head, when in principle a reasoner could hold thousands of complex mathematical structures in its working memory simultaneously and perform complex operations on them. Consider the sorts of technologies and scientific insights that might only ever occur to a reasoner if it can directly see (within its own head, in real time) the connections between hundreds or thousands of different formal structures. (See the small illustration after this list.)
  • Human brains underwent no direct optimization for STEM ability in our ancestral environment, beyond traits like "I can distinguish four objects in my visual field from five objects".[5]
  • In contrast, human engineers can deliberately optimize AGI systems' brains for math, engineering, etc. capabilities; and human engineers have an enormous variety of tools available to build general intelligence that evolution lacked.[6]
  • Software (unlike human intelligence) scales with more compute.
  • Current ML uses far more compute to find reasoners than to run reasoners. This is very likely to hold true for AGI as well.
  • We probably have more than enough compute already, if we knew how to train AGI systems in a remotely efficient way.
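
To make the working-memory point above concrete, here is a minimal, purely illustrative sketch (in Python; the digit counts are arbitrary and not drawn from this post) of rote symbol manipulation that is trivial for software but far outside unaided human working memory:

```python
# Purely illustrative: multiply two ~500-digit integers exactly, a task far
# outside human working memory but effectively instantaneous for software.
# The sizes are arbitrary.
import random

a = random.getrandbits(1660)  # roughly a 500-digit decimal number
b = random.getrandbits(1660)

product = a * b  # exact result, computed essentially instantly
print(f"{len(str(a))}-digit x {len(str(b))}-digit -> {len(str(product))}-digit product")
```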

And on a meta level: the hypothesis that STEM AGI can quickly outperform humans has a disjunctive character. There are many different advantages that individually suffice for this, even if STEM AGI doesn't start off with any other advantages. (E.g., speed, math ability, scalability with hardware, skill at optimizing hardware...)

In contrast, the claim that STEM AGI will hit the narrow target of "par-human scientific ability", and stay at around that level for long enough to let humanity adapt and adjust, has a conjunctive character.[7]
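
To make the disjunctive/conjunctive contrast concrete, here is a toy calculation (the individual probabilities are invented purely for illustration and are not estimates from this post): a disjunctive claim holds unless every route independently fails, while a conjunctive claim requires every condition to hold at once.

```python
# Toy contrast between a disjunctive claim (any one of several independent
# advantages suffices) and a conjunctive claim (every condition must hold).
# The probabilities below are made up; only the structural contrast matters.

advantage_probs = [0.5, 0.5, 0.5, 0.5]  # hypothetical chance of each separate
                                        # advantage (speed, math ability,
                                        # hardware scaling, ...) materializing
p_no_advantage = 1.0
for p in advantage_probs:
    p_no_advantage *= (1 - p)
p_disjunctive = 1 - p_no_advantage      # 1 - 0.5**4 = 0.9375

condition_probs = [0.5, 0.5, 0.5, 0.5]  # hypothetical chance of each condition
                                        # (lands at par-human level, stays there
                                        # long enough, humanity adapts, ...)
p_conjunctive = 1.0
for p in condition_probs:
    p_conjunctive *= p                  # 0.5**4 = 0.0625

print(f"disjunctive: {p_disjunctive:.4f}  conjunctive: {p_conjunctive:.4f}")
```

With identical per-item odds, the disjunctive claim comes out very likely and the conjunctive claim very unlikely; the point is only about the shape of the argument, not these particular numbers.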

 

2. A common misconception is that STEM-level AGI is dangerous because of something murky about "agents" or about self-awareness. Instead, I'd say that the danger is inherent to the nature of action sequences that push the world toward some sufficiently-hard-to-reach state.[8]

Call such sequences "plans".

If you sampled a random plan from the space of all writable plans (weighted by length, in any extant formal language), and all we knew about the plan is that executing it would successfully achieve some superhumanly ambitious technological goal like "invent fast-running whole-brain emulation", then hitting a button to execute the plan would kill all humans, with very high probability. This is because:

  • "Invent fast WBE" is a hard enough task that succeeding in it usually requires gaining a lot of knowledge and cognitive and technological capabilities, enough to do lots of other dangerous things.
  • "Invent fast WBE" is likelier to succeed if the plan also includes steps that gather and control as many resources as possible, eliminate potential threats, etc. These are "convergent instrumental strategies"—strategies that are useful for pushing the world in a particular direction, almost regardless of which direction you're pushing.
  • Human bodies and the food, water, air, sunlight, etc. we need to live are resources ("you are made of atoms the AI can use for something else"); and we're also potential threats (e.g., we could build a rival superintelligent AI that executes a totally different plan).

The danger is in the cognitive work, not in some complicated or emergent feature of the "agent"; it's in the task itself.

It isn't that the abstract space of plans was built by evil human-hating minds; it's that the instrumental convergence thesis holds for the plans themselves. In full generality, plans that succeed in goals like "build WBE" tend to be dangerous.

This isn't true of all plans that successfully push our world into a specific (sufficiently-hard-to-reach) physical state, but it's true of the vast majority of them.

This is counter-intuitive because most of the impressive "plans" we encounter today are generated by humans, and it’s tempting to view strong plans through a human lens. But humans have hugely overlapping values, thinking styles, and capabilities; AI is drawn from new distributions.

 

3. Current ML work is on track to produce things that are, in the ways that matter, more like "randomly sampled plans" than like "the sorts of plans a civilization of human von Neumanns would produce". (Before we're anywhere near being able to produce the latter sorts of things.)[9]

We're building "AI" in the sense of building powerful general search processes (and search processes for search processes), not building "AI" in the sense of building friendly ~humans but in silicon.

(Note that "we're going to build systems that are more like A Randomly Sampled Plan than like A Civilization of Human Von Neumanns" doesn't imply that the plan we'll get is the one we wanted! There are two separate problems: that current ML finds things-that-act-like-they're-optimizing-the-task-you-wanted rather than things-that-actually-internally-optimize-the-task-you-wanted, and also that internally ~maximizing most superficially desirable ends will kill humanity.)

Note that the same problem holds for systems trained to imitate humans, if those systems scale to being able to do things like "build whole-brain emulation". "We're training on something related to humans" doesn't give us "we're training things that are best thought of as humans plus noise".

It's not obvious to me that GPT-like systems can scale to capabilities like "build WBE". But if they do, we face the problem that most ways of successfully imitating humans don't look like "build a human (that's somehow superhumanly good at imitating the Internet)". They look like "build a relatively complex and alien optimization process that is good at imitation tasks (and potentially at many other tasks)".

You don't need to be a human in order to model humans, any more than you need to be a cloud in order to model clouds well. The only reason this is more confusing in the case of "predict humans" than in the case of "predict weather patterns" is that humans and AI systems are both intelligences, so it's easier to slide between "the AI models humans" and "the AI is basically a human".

 

4. The key differences between humans and "things that are more easily approximated as random search processes than as humans-plus-a-bit-of-noise" lie in lots of complicated machinery in the human brain.

(Cf. Detached Lever Fallacy, Niceness Is Unnatural, and Superintelligent AI Is Necessary For An Amazing Future, But Far From Sufficient.)

Humans are not blank slates in the relevant ways, such that just raising an AI like a human solves the problem.

This doesn't mean the problem is unsolvable; but it means that you either need to reproduce that internal machinery, in a lot of detail, in AI, or you need to build some new kind of machinery that’s safe for reasons other than the specific reasons humans are safe.

(You need cognitive machinery that somehow samples from a much narrower space of plans that are still powerful enough to succeed in at least one task that saves the world, but are constrained in ways that make them far less dangerous than the larger space of plans. And you need a thing that actually implements internal machinery like that, as opposed to just being optimized to superficially behave as though it does in the narrow and unrepresentative environments it was in before starting to work on WBE. "Novel science work" means that pretty much everything you want from the AI is out-of-distribution.)

 

5. STEM-level AGI timelines don't look that long (e.g., probably not 50 or 150 years; could well be 5 years or 15).

I won't try to argue for this proposition, beyond pointing at the field's recent progress and echoing Nate Soares' comments from early 2021:

[...] I observe that, 15 years ago, everyone was saying AGI is far off because of what it couldn't do -- basic image recognition, go, starcraft, winograd schemas, simple programming tasks. But basically all that has fallen. The gap between us and AGI is made mostly of intangibles. (Computer programming that is Actually Good? Theorem proving? Sure, but on my model, "good" versions of those are a hair's breadth away from full AGI already. And the fact that I need to clarify that "bad" versions don't count, speaks to my point that the only barriers people can name right now are intangibles.) That's a very uncomfortable place to be!

[...] I suspect that I'm in more-or-less the "penultimate epistemic state" on AGI timelines: I don't know of a project that seems like they're right on the brink; that would put me in the "final epistemic state" of thinking AGI is imminent. But I'm in the second-to-last epistemic state, where I wouldn't feel all that shocked to learn that some group has reached the brink. Maybe I won't get that call for 10 years! Or 20! But it could also be 2, and I wouldn't get to be indignant with reality. I wouldn't get to say "but all the following things should have happened first, before I made that observation!". Those things have happened. I have made those observations. [...]

I think timing tech is very difficult (and plausibly ~impossible when the tech isn't pretty imminent), and I think reasonable people can disagree a lot about timelines.

I also think converging on timelines is not very crucial, since if AGI is 50 years away I would say it's still the largest single risk we face, and the bare minimum alignment work required for surviving that transition could easily take longer than that.

Also, "STEM AGI when?" is the kind of argument that requires hashing out people's predictions about how we get to STEM AGI, which is a bad thing to debate publicly insofar as improving people's models of pathways can further shorten timelines.

I mention timelines anyway because they are in fact a major reason I'm pessimistic about our prospects; if I learned tomorrow that AGI were 200 years away, I'd be outright optimistic about things going well.

 

6. We don't currently know how to do alignment, we don't seem to have a much better idea now than we did 10 years ago, and there are many large novel visible difficulties. (See AGI Ruin and the Capabilities Generalization, and the Sharp Left Turn.)

On a more basic level, quoting Nate Soares: "Why do I think that AI alignment looks fairly difficult? The main reason is just that this has been my experience from actually working on these problems."

 

7. We should be starting with a pessimistic prior about achieving reliably good behavior in any complex safety-critical software, particularly if the software is novel. Even more so if the thing we need to make robust is structured like undocumented spaghetti code, and more so still if the field is highly competitive and you need to achieve some robustness property while moving faster than a large pool of less-safety-conscious people who are racing toward the precipice.

The default assumption is that complex software goes wrong in dozens of different ways you didn't expect. Reality ends up being thorny and inconvenient in many of the places where your models were absent or fuzzy. Surprises are abundant, and while some surprises are good, they are empirically a lot rarer than the unpleasant surprises of software development hell.

The future is hard to predict, but plans systematically take longer and run into more snags than humans naively expect, as opposed to plans systematically going surprisingly smoothly and deadlines being systematically hit ahead of schedule.

The history of computer security and of safety-critical software systems is almost invariably one of robust software lagging far, far behind non-robust versions of the same software. Achieving any robustness property in complex software that will be deployed in the real world, with all its messiness and adversarial optimization, is very difficult and usually fails.

In many ways I think the foundational discussion of AGI risk is Security Mindset and Ordinary Paranoia and Security Mindset and the Logistic Success Curve, and the main body of the text doesn't even mention AGI. Adding in the specifics of AGI and smarter-than-human AI takes the risk from "dire" to "seemingly overwhelming", but adding in those specifics is not required to be massively concerned if you think getting this software right matters for our future.

 

8. Neither ML nor the larger world is currently taking this seriously, as of April 2023.

This is obviously something we can change. But until it's changed, things will continue to look very bad.

Additionally, most of the people who are taking AI risk somewhat seriously are, to an important extent, not willing to worry about things until after they've been experimentally proven to be dangerous. Which is a lethal sort of methodology to adopt when you're working with smarter-than-human AI.

My basic picture of why the world currently isn't responding appropriately is the one in Four mindset disagreements behind existential risk disagreements in ML, The inordinately slow spread of good AGI conversations in ML, and Inadequate Equilibria.[10]

 

9. As noted above, current ML is very opaque, and it mostly lets us intervene on behavioral proxies for what we want, rather than letting us directly design desirable features.

ML as it exists today also requires that training data be readily available and safe to provide. E.g., we can't robustly train an AGI on "don't kill people", because we can't provide real examples of it killing people to train against; we can only give flawed proxies and work via indirection.

 

10. There are lots of specific abilities that seem like they ought to be possible for the kind of civilization that can safely deploy smarter-than-human optimization, but that are far out of reach, with no obvious path forward for achieving them with opaque deep nets even if we had unlimited time to work on some relatively concrete set of research directions.

(Unlimited time suffices if we can set a more abstract/indirect research direction, like "just think about the problem for a long time until you find some solution". There are presumably paths forward; we just don’t know what they are today, which puts us in a worse situation.)

E.g., we don’t know how to go about inspecting a nanotech-developing AI system’s brain to verify that it’s only thinking about a specific room, that it’s internally representing the intended goal, that it’s directing its optimization at that representation, that it internally has a particular planning horizon and a variety of capability bounds, that it’s unable to think about optimizers (or specifically about humans), or that it otherwise has the right topics internally whitelisted or blacklisted.

 

Individually, it seems to me that each of these difficulties can be addressed. In combination, they seem to me to put us in a very dark situation.

 


 

One common response I hear to points like the above is:

The future is generically hard to predict, so it's just not possible to be rationally confident that things will go well or poorly. Even if you look at dozens of different arguments and framings and the ones that hold up to scrutiny nearly all seem to point in the same direction, it's always possible that you're making some invisible error of reasoning that causes correlated failures in many places at once.

I'm sympathetic to this because I agree that the future is hard to predict.

I'm not totally confident things will go poorly; if I were, I wouldn't be trying to solve the problem! I think things are looking extremely dire, but not hopeless.

That said, some people think that even "extremely dire" is an impossible belief state to be in, in advance of an AI apocalypse actually occurring. I disagree here, for two basic reasons:

 

a. There are many details we can get into, but on a core level I don't think the risk is particularly complicated or hard to reason about. The core concern fits into a tweet:

STEM AI is likely to vastly exceed human STEM abilities, conferring a decisive advantage. We aren't on track to knowing how to aim STEM AI at intended goals, and STEM AIs pursuing unintended goals tend to have instrumental subgoals like "control all resources".

Zvi Mowshowitz puts the core concern in even more basic terms:

I also notice a kind of presumption that things in most scenarios will work out and that doom is dependent on particular ‘distant possibilities,’ that often have many logical dependencies or require a lot of things to individually go as predicted. Whereas I would say that those possibilities are not so distant or unlikely, but more importantly that the result is robust, that once the intelligence and optimization pressure that matters is no longer human that most of the outcomes are existentially bad by my values and that one can reject or ignore many or most of the detail assumptions and still see this.

The details do matter for evaluating the exact risk level, but this isn't the sort of topic where it seems fundamentally impossible for any human to reach a good understanding of the core difficulties and whether we're handling them.

 

b. Relatedly, as Nate Soares has argued, AI disaster scenarios are disjunctive. There are many bad outcomes for every good outcome, and many paths leading to disaster for every path leading to utopia.

Quoting Eliezer Yudkowsky:

You don't get to adopt a prior where you have a 50-50 chance of winning the lottery "because either you win or you don't"; the question is not whether we're uncertain, but whether someone's allowed to milk their uncertainty to expect good outcomes.

Quoting Jack Rabuck:

I listened to the whole 4 hour Lunar Society interview with @ESYudkowsky (hosted by @dwarkesh_sp) that was mostly about AI alignment and I think I identified a point of confusion/disagreement that is pretty common in the area and is rarely fleshed out:

Dwarkesh repeatedly referred to the conclusion that AI is likely to kill humanity as "wild."

Wild seems to me to pack two concepts together, 'bad' and 'complex.' And when I say complex, I mean in the sense of the Fermi equation where you have an end point (dead humanity) that relies on a series of links in a chain and if you break any of those links, the end state doesn't occur.

It seems to me that Eliezer believes this end state is not wild (at least not in the complex sense), but very simple. He thinks many (most) paths converge to this end state.

That leads to a misunderstanding of sorts. Dwarkesh pushes Eliezer to give some predictions based on the line of reasoning that he uses to predict that end point, but since the end point is very simple and is a convergence, Eliezer correctly says that being able to reason to that end point does not give any predictive power about the particular path that will be taken in this universe to reach that end point.

Dwarkesh is thinking about the end of humanity as a causal chain with many links and if any of them are broken it means humans will continue on, while Eliezer thinks of the continuity of humanity (in the face of AGI) as a causal chain with many links and if any of them are broken it means humanity ends. Or perhaps more discretely, Eliezer thinks there are a few very hard things which humanity could do to continue in the face of AI, and absent one of those occurring, the end is a matter of when, not if, and the when is much closer than most other people think.

Anyway, I think each of Dwarkesh and Eliezer believe the other one falls on the side of extraordinary claims require extraordinary evidence - Dwarkesh thinking the end of humanity is "wild" and Eliezer believing humanity's viability in the face of AGI is "wild" (though not in the negative sense). 

I don't consider "AGI ruin is disjunctive" a knock-down argument for high p(doom) on its own. NASA has a high success rate for rocket launches even though success requires many things to go right simultaneously. Humanity is capable of achieving conjunctive outcomes, to some degree; but I think this framing makes it clearer why it's possible to rationally arrive at a high p(doom), at all, when enough evidence points in that direction.[11]

 

  1. ^

    Eliezer Yudkowsky's So Far: Unfriendly AI Edition and Nate Soares' Ensuring Smarter-Than-Human Intelligence Has a Positive Outcome are two other good (though old) introductions to what I'd consider "the basics".

    To state the obvious: this post consists of various claims that increase my probability on AI causing an existential catastrophe, but not all the claims have to be true in order for AI to have a high probability of causing such a catastrophe.

    Also, I wrote this post to summarize my own top reasons for being worried, not to try to make a maximally compelling or digestible case for others. I don't expect others to be similarly confident based on such a quick overview, unless perhaps you've read other sources on AI risk in the past. (Including more optimistic ones, since it's harder to be confident when you've only heard from one side of a disagreement. I've written in the past about some of the things that give me small glimmers of hope, but people who are overall far more hopeful will have very different reasons for hope, based on very different heuristics and background models.)

  2. ^

    E.g., the physical world is too complex to simulate in full detail, unlike a Go board state. An effective general intelligence needs to be able to model the world at many different levels of granularity, and strategically choose which levels are relevant to think about, as well as which specific pieces/aspects/properties of the world at those levels are relevant to think about.

    More generally, being a general intelligence requires an enormous amount of laserlike focus and strategicness when it comes to which thoughts you do or don't think. A large portion of your compute needs to be relentlessly funneled into exactly the tiny subset of questions about the physical world that bear on the question you're trying to answer or the problem you're trying to solve. If you fail to be relentlessly targeted and efficient in "aiming" your cognition at the most useful-to-you things, you can easily spend a lifetime getting sidetracked by minutiae, directing your attention at the wrong considerations, etc.

    And given the variety of kinds of problems you need to solve in order to navigate the physical world well, do science, etc., the heuristics you use to funnel your compute to the exact right things need to themselves be very general, rather than all being case-specific.

    (Whereas we can more readily imagine that many of the heuristics AlphaGo uses to avoid thinking about the wrong aspects of the game state (or getting otherwise sidetracked) are Go-specific heuristics.)

  3. ^

    Of course, if your brain has all the basic mental machinery required to do other sciences, that doesn't mean that you have the knowledge required to actually do well in those sciences. A STEM-level artificial general intelligence could lack physics ability for the same reason many smart humans can't solve physics problems.

  4. ^

    E.g., because different sciences can synergize, and because you can invent new scientific fields and subfields, and more generally chain one novel insight into dozens of other new insights that critically depended on the first insight.

  5. ^

    More generally, the sciences (and many other aspects of human life, like written language) are a very recent development on evolutionary timescales. So evolution has had very little time to refine and improve on our reasoning ability in many of the ways that matter.

  6. ^

    "Human engineers have an enormous variety of tools available that evolution lacked" is often noted as a reason to think that we may be able to align AGI to our goals, even though evolution failed to align humans to its "goal". It's additionally a reason to expect AGI to have greater cognitive ability, if engineers try to achieve great cognitive ability.

  7. ^

    And my understanding is that, e.g., Paul Christiano's soft-takeoff scenarios don't involve there being much time between par-human scientific ability and superintelligence. Rather, he's betting that we have a bunch of decades between GPT-4 and par-human STEM AGI.

  8. ^

    I'll classify thoughts and text outputs as "actions" too, not just physical movements.

  9. ^

    Obviously, neither is a particularly good approximation for ML systems. The point is that our optimism about plans in real life generally comes from the fact that they're weak, and/or it comes from the fact that the plan generators are human brains with the full suite of human psychological universals. ML systems don't possess those human universals, and won't stay weak indefinitely.

  10. ^

    Quoting Four mindset disagreements behind existential risk disagreements in ML:

    • People are taking the risks unseriously because they feel weird and abstract.
    • When they do think about the risks, they anchor to what's familiar and known, dismissing other considerations because they feel "unconservative" from a forecasting perspective.
    • Meanwhile, social mimesis and the bystander effect make the field sluggish at pivoting in response to new arguments and smoke under the door.

    Quoting The inordinately slow spread of good AGI conversations in ML:

    Info about AGI propagates too slowly through the field, because when one ML person updates, they usually don't loudly share their update with all their peers. This is because:

    1. AGI sounds weird, and they don't want to sound like a weird outsider.

    2. Their peers and the community as a whole might perceive this information as an attack on the field, an attempt to lower its status, etc.

    3. Tech forecasting, differential technological development, long-term steering, exploratory engineering, 'not doing certain research because of its long-term social impact', prosocial research closure, etc. are very novel and foreign to most scientists.

    EAs exert effort to try to dig up precedents like Asilomar partly because Asilomar is so unusual compared to the norms and practices of the vast majority of science. Scientists generally don't think in these terms at all, especially in advance of any major disasters their field causes.

    And the scientists who do find any of this intuitive often feel vaguely nervous, alone, and adrift when they talk about it. On a gut level, they see that they have no institutional home and no super-widely-shared 'this is a virtuous and respectable way to do science' narrative.

    Normal science is not Bayesian, is not agentic, is not 'a place where you're supposed to do arbitrary things just because you heard an argument that makes sense'. Normal science is a specific collection of scripts, customs, and established protocols.

    In trying to move the field toward 'doing the thing that just makes sense', even though it's about a weird topic (AGI), and even though the prescribed response is also weird (closure, differential tech development, etc.), and even though the arguments in support are weird (where's the experimental data??), we're inherently fighting our way upstream, against the current.

    Success is possible, but way, way more dakka is needed, and IMO it's easy to understand why we haven't succeeded more.

    This is also part of why I've increasingly updated toward a strategy of "let's all be way too blunt and candid about our AGI-related thoughts".

    The core problem we face isn't 'people informedly disagree', 'there's a values conflict', 'we haven't written up the arguments', 'nobody has seen the arguments', or even 'self-deception' or 'self-serving bias'.

    The core problem we face is 'not enough information is transmitting fast enough, because people feel nervous about whether their private thoughts are in the Overton window'.

    On the more basic level, Inadequate Equilibria paints a picture of the world's baseline civilizational competence that I think makes it less mysterious why we could screw up this badly on a novel problem that our scientific and political institutions weren't designed to address. Inadequate Equilibria also talks about the nuts and bolts of Modest Epistemology, which I think is a key part of the failure story.

  11. ^

    Quoting a recent conversation between Aryeh Englander and Eliezer Yudkowsky:

    Aryeh: [...] Yet I still have a very hard time understanding the arguments that would lead to such a high-confidence prediction. Like, I think I understand the main arguments for AI existential risk, but I just don't understand why some people seem so sure of the risks. [...]

    Eliezer: I think the core thing is the sense that you cannot in this case milk uncertainty for a chance of good outcomes; to get to a good outcome you'd have to actually know where you're steering, like trying to buy a winning lottery ticket or launching a Moon rocket. Once you realize that uncertainty doesn't move estimates back toward "50-50, either we live happily ever after or not", you realize that "people in the EA forums cannot tell whether Eliezer or Paul is right" is not a factor that moves us toward 1:1 good:bad but rather another sign of doom; surviving worlds don't look confused like that and are able to make faster progress.

    Not as a fully valid argument from which one cannot update further, but as an intuition pump: the more all arguments about the future seem fallible, the more you should expect the future Solar System to have a randomized configuration from your own perspective. Almost zero of those have humans in them. It takes confidence about some argument constraining the future to get to more than that.

    Aryeh: when you talk about uncertainty here do you mean uncertain factors within your basic world model, or are you also counting model uncertainty? I can see how within your world model extra sources of uncertainty don't point to lower risk estimates. But my general question I think is more about model uncertainty: how sure can you really be that your world model and reference classes and framework for thinking about this is the right one vs e.g., Robin or Paul or Rohin or lots of others? And in terms of model uncertainty it looks like most of these other approaches imply much lower risk estimates, so adding in that kind of model uncertainty should presumably (I think) point to overall lower risk estimates.

    Eliezer: Aryeh, if you've got a specific theory that says your rocket design is going to explode, and then you're also very unsure of how rockets work really, what probability should you assess of your rocket landing safely on target?

    Aryeh: how about if you have a specific theory that says you should be comparing what you're doing to a rocket aiming for the moon but it'll explode, and then a bunch of other theories saying it won't explode, plus a bunch of theories saying you shouldn't be comparing what you're doing to a rocket in the first place? My understanding of many alignment proposals is that they think we do understand "rockets" sufficiently so that we can aim them, but they disagree on various specifics that lead you to have such high confidence in an explosion. And then there are others like Robin Hanson who use mostly outside-type arguments to argue that you're framing the issues incorrectly, and we shouldn't be comparing this to "rockets" at all because that's the wrong reference class to use. So yes, accounting for some types of model uncertainty won't reduce our risk assessments and may even raise them further, but other types of model uncertainty - including many of the actual alternative models / framings at least as I understand them - should presumably decrease our risk assessment.

    Eliezer: What if people are trying to build a flying machine for the first time, and there's a whole host of them with wildly different theories about why it ought to fly easily, and you think there's basic obstacles to stable flight that they're not getting? Could you force the machine to fly despite all obstacles by recruiting more and more optimists to have different theories, each of whom would have some chance of being right?

    Aryeh: right, my point is that in order to have near certainty of not flying you need to be very very sure that your model is right and theirs isn't. Or in other words, you need to have very low model uncertainty. But once you add in model uncertainty where you consider that maybe those other optimists' models could be right, then your risk estimates will go down. Of course you can't arbitrarily add in random optimistic models from random people - it needs to be weighted in some way. My confusion here is that you seem to be very, very certain that your model is the right one, complete with all its pieces and sub-arguments and the particular reference classes you use, and I just don't quite understand why.

    Eliezer: There's a big difference between "sure your model is the right one" and the whole thing with people wandering over with their own models and somebody else going, "I can't tell the difference between you and them, how can you possibly be so sure they're not right?"

    The intuition I'm trying to gesture at here is that you can't milk success out of uncertainty, even by having a bunch of other people wander over with optimistic models. It shouldn't be able to work in real life. If your epistemology says that you can generate free success probability that way, you must be doing something wrong.

    Or maybe another way to put it: When you run into a very difficult problem that you can see is very difficult, but inevitably a bunch of people with less clear sight wander over and are optimistic about it because they don't see the problems, for you to update on the optimists would be to update on something that happens inevitably. So to adopt this policy is just to make it impossible for yourself to ever perceive when things have gotten really bad.

    Aryeh: not sure I fully understand what you're saying. It looks to me like to some degree what you're saying boils down to your views on modest epistemology - i.e., basically just go with your own views and don't defer to anybody else. It sounds like you're saying not only don't defer, but don't even really incorporate any significant model uncertainty based on other people's views. Am I understanding this at all correctly or am I totally off here?

    Eliezer: My epistemology is such that it's possible in principle for me to notice that I'm doomed, in worlds which look very doomed, despite the fact that all such possible worlds no matter how doomed they actually are, always contain a chorus of people claiming we're not doomed.

    (See Inadequate Equilibria for a detailed discussion of Modest Epistemology, deference, and "outside views", and Strong Evidence Is Common for the basic first-order case that people can often reach confident conclusions about things.)

Comments

Might be a naive question:

For a STEM-capable AGI (or any intelligence for that matter) to do new science, it would have to interact with the physical environment to conduct experiments. Otherwise, how can the intelligent agent discover and validate new theories? For example, an AGI that understands physics and material science may theorize and propose thousands of possible high-temperature superconductors, but actually discovering a working material can happen only after actually synthesizing those materials and performing the experiments, which is time-consuming and difficult to do.

If that's true, then the speed at which the STEM-capable AGI discovers new knowledge, and correspondingly its "knowledge advantage" (not intelligence advantage) over humanity, is bottlenecked by the speed at which the AGI can interact with and perform experiments in the physical world, which as of now depends almost entirely on human-operated equipment and is constrained by various real-world physical limitations (wear and tear, speed of chemical reactions, speed of biological systems, energy consumption, etc.). Doesn't this significantly throttle the speed at which the AGI gains an advantage over humanity, giving us more time for alignment?

For a STEM-capable AGI (or any intelligence for that matter) to do new science, it would have to interact with the physical environment to conduct experiments.

Or read arXiv papers and draw inferences that humans failed to draw, etc.

Doesn't this significantly throttles the speed of AGI gaining advantage over humanity, giving us more time for alignment?

I expect there's a ton of useful stuff you can learn (that humanity is currently ignorant about) just from looking at existing data on the Internet. But I agree that AGI will destroy the world a little slower in expectation because it may get bottlenecked on running experiments, and it's at least conceivable that at least one project will decide not to let it run tons of physical experiments.

(Though I think the most promising ways to save the world involve AGIs running large numbers of physical experiments, so in addition to merely delaying AGI doom by some number of months, 'major labs don't let AGIs run physical experiments' plausibly rules out the small number of scenarios where humanity has a chance of surviving.)

I expect there's a ton of useful stuff you can learn (that humanity is currently ignorant about) just from looking at existing data on the Internet. 

Thank you for the reply, I agree with this point. Now that I think about it, protein folding is a good example of how the data was already available but before AlphaFold, nobody could predict sequence to structure with high accuracy. Maybe a sufficiently smart AGI can get more knowledge out of existing data on the internet without performing too many new experiments.

How much more it can squeeze out of existing data (which was not generated specifically with the AGI's new hypotheses in mind), and whether that can give it a decisive advantage over humanity in a short span of time, could be important. I.e., whether the existing data out there contains enough information to figure out new science that is completely beyond our current understanding and can totally screw us.

I would argue that an important component of your first argument still stands. Even though AlphaFold can predict structures to some level of accuracy based on training data sets that already exist, an AI would STILL need to check whether what it learned is usable in practice for the purposes it is intended for. This logically requires experimentation. Also keep in mind that most data which already exists was not deliberately prepared to help a machine "do X". Any intelligence, no matter how strong, will still need to check its hypotheses and, thus, prepare data sets that can actually deliver the evidence necessary for drawing warranted conclusions.

I am not really sure what the consequences of this are, though. 

I think a sufficiently intelligent mind can generate accurate beliefs from evidence, not just 'experiments', and not just its own experiments. I imagine AIs will be suggesting experiments too (if they're not already).

It is still plausible that not being able to run its own experiments will greatly hamper an AI's scientific agenda, but it's harder to know exactly how much it will for intelligences likely to be much more intelligent than ourselves.

Afaik it is pretty well established that you cannot really learn anything new without actually testing your new belief in practice, i.e., experiments. I mean, how else would this work? Evidence does not grow on trees; it has to be created (i.e., data has to be carefully generated, selected, and interpreted to become useful evidence).

While it might be true that this experimenting can sometimes be done using existing data, the point is that if you want to learn something new about the universe like “what is dark matter and can it be used for something?” existing data is unlikely to be enough to test any idea you come up with. 

Even if you take data from published academic papers and synthesize some new theories from that, it is still not always (or even usually) the case that the theory you come up with can be tested with already existing data, because any theory has unique requirements for what counts as evidence against it. I mean, that's the whole point of why we continue to do experiments rather than just meta-analyze the sh*t out of all the papers out there.

Of course, advanced AI could trick us into doing certain experiments, or (looking at ChatGPT plugins) we may just give it access to anything on the internet wholesale in due time, so all of this may just be a short bump in the road. If we are lucky, though, we might avoid a FOOM-style takeover as long as advanced AI remains dependent on us to carry out its experiments for it, simply because of the time those experiments will take. So even if it could bootstrap to nanotech quickly due to a good understanding of physics based on our formulas and existing data, the first manufacturing machine / factory would still need to be built somehow, and that may take some time.

I feel the weakest part of this argument, and the weakest part of the AI Safety space generally, is the part where AI kills everyone (part 2, in this case).

You argue that most paths to some ambitious goal like whole-brain emulation end terribly for humans, because how else could the AI do whole-brain emulation without subjugating, eliminating or atomising everyone?

I don't think that follows. This seems like what the average hunter-gatherer would have thought when made to imagine our modern commercial airlines or microprocessor industries: how could you achieve something requiring so much research, so many resources and so much coordination without enslaving huge swathes of society and killing anyone that gets in the way? And wouldn't the knowledge to do these things cause terrible new dangers?

Luckily the peasant is wrong: the path here has led up a slope of gradually increasing quality of life (some disagree).

I think the point is not that it's inconceivable that progress can continue with humans still being alive, but rather the game-theoretic dilemma that whatever we humans want to do is unlikely to be exactly what some superpowerful advanced AI would want to do. And because the advanced AI does not need us or depend on us, we simply lose and become ingredients for whatever that advanced AI is up to.

Your example with humanity fails because humans have always been, and continue to be, a social species whose members depend on each other. An unaligned advanced AI would not be so. A more appropriate example would be to look at the relationship between humans and insects. I don't know if you noticed, but a lot of those are dying out right now because we simply don't care about or depend on them. The point with advanced AI is that it is potentially even more removed from us than we are from insects, and also much more capable of achieving its goals, so this whole competitive process we all engage in is going to become much more competitive and faster when advanced AIs start playing the game.

I don't want to be the bearer of bad news but I think it is not that easy to reject this analysis... it seems pretty simple and solid. I would love to know if there is some flaw in the reasoning. Would help me sleep better at night! 

Your example with humanity fails because humans have always been, and continue to be, a social species whose members depend on each other.

I would much more say that it fails because humans have human values.

Maybe a hunter-gatherer would have worried that building airplanes would somehow cause a catastrophe? I don't exactly see why; the obvious hunter-gatherer rejoinder could be 'we built fire and spears and our lives only improved; why would building wings to fly make anything bad happen?'.

Regardless, it doesn't seem like you can get much mileage via an analogy that sticks entirely to humans. Humans are indeed safe, because "safety" is indexed to human values; when we try to reason about non-human optimizers, we tend to anthropomorphize them and implicitly assume that they'll be safe for many of the same reasons. Cf. The Tragedy of Group Selectionism and Anthropomorphic Optimism.

You argue that most paths to some ambitious goal like whole-brain emulation end terribly for humans, because how else could the AI do whole-brain emulation without subjugating, eliminating or atomising everyone?

'Wow, I can't imagine a way to do something so ambitious without causing lots of carnage in the process' is definitely not the argument! On the contrary, I think it's pretty trivial to get good outcomes from humans via a wide variety of different ways we could build WBE ourselves.

The instrumental convergence argument isn't 'I can't imagine a way to do this without killing everyone'; it's that sufficiently powerful optimization behaves like maximizing optimization for practical purposes, and maximizing-ish optimization is dangerous if your terminal values aren't included in the objective being maximized.

If it helps, we could maybe break the disagreement about instrumental convergence into three parts, like:

  • Would a sufficiently powerful paperclip maximizer kill all humans, given the opportunity?
  • Would sufficiently powerful inhuman optimization of most goals kill all humans, or are paperclips an exception?
  • Is 'build fast-running human whole-brain emulation' an ambitious enough task to fall under the 'sufficiently powerful' criterion above? Or if so, is there some other reason random policies might be safe if directed at this task, even if they wouldn't be safe for other similarly-hard tasks?

The step that's missing for me is the one where the paperclip maximiser gets the opportunity to kill everyone.

Your talk of "plans" and the dangers of executing them seems to assume that the AI has all the power it needs to execute the plans. I don't think the AI crowd has done enough to demonstrate how this could happen.

If you drop a naked human in amongst some wolves I don't think the human will do very well despite its different goals and enormous intellectual advantage. Similarly, I don't see how a fledgling sentient AGI on OpenAI servers can take over enough infrastructure that it poses a serious threat. I've not seen a convincing theory for how this would happen. Mail-order nanobots seem unrealistic (too hard to simulate the quantum effects in protein chemistry), the AI talking itself out of its box is another suggestion that seems far-fetched (the main evidence seems to be some chat games that Yudkowsky played a few times?), and a gradual takeover via its voluntary uptake into more and more of our lives seems slow enough to stop.

Is your question basically how an AGI would gain power in the beginning in order to get to a point where it could execute on a plan to annihilate humans?

I would argue that:

  • Capitalists would quite readily give the AGI all the power it wants, in order to stay competitive and drive profits.
  • Some number of people would deliberately help the AGI gain power just to "see what happens" or specifically to hurt humanity. Think ChaosGPT, or consider the story of David Charles Hahn.
  • Some number of lonely, depressed, or desperate people could be persuaded over social media to carry out actions in the real world.

Considering these channels, I'd say that a sufficiently intelligent AGI with as much access to the real world as ChatGPT has now would have all the power needed to increase its power to the point of being able to annihilate humans.

Thank you for taking the time to write this - I think it is a clear and concise entry point into the AGI ruin arguments.

I want to voice an objection / point out an omission in point 2: I agree that any plan toward a sufficiently complicated goal will include "acquire resources" as a sub-goal, and that "getting rid of all humans" might be a by-product of some ways of achieving this sub-goal. I'm also willing to grant that if all we know about the plan is that it achieves the end (sufficiently complicated) goal, it is likely that the plan would lead to the destruction of all humans.

However, I don't see why we can't infer more about the plans. Specifically, I think an ASI plan for a sufficiently complicated goal should be 1) feasible and 2) efficient (at least in some sense). If the ASI doesn't believe that it can overpower humanity, then its plans will not include overpowering humanity. Even more, if the ASI ascribes a high enough cost to overpowering humanity, it would instead opt to acquire resources in another way.

It seems that for point 2 to hold, you must think that an ASI can overpower humanity 1) with close to 100% certainty and 2) at negligible cost to the ASI. However, I don't think this is (explicitly) argued for in the article. Or maybe I'm missing something?

Thanks, I thought this to be informative!
