
Disclaimer: all beliefs here are mine alone, and do not necessarily reflect the beliefs of my employer or any organization I work with.

The other day, I published a piece explaining why my P(doom) — my estimate of the probability that AI will doom humanity in the next 15 years — is 2.76%.

By the standards of the AI safety community, this actually makes me an optimist! Many people in the AI space have P(doom) values far higher than mine — it’s not rare to hear an expert claim that humanity has a 10%, 20%, or even 50% chance of being killed by AI within the next few decades. While the term “AI doomer” doesn’t have an exact definition, I think it’s fair to say that there are many doomers among the AI safety crowd.

But you don’t have to be a doomer to support AI safety efforts. Here’s a list of five major reasons why, even if you have a low P(doom) like mine, you should still support AI safety:

1. Low risk ≠ no risk

Almost everybody who has seriously thought about AI risks agrees that there is at least some chance that AI could kill us all. You might think that chance is very low — maybe one-in-a-thousand — but even low risks are worth considering given the stakes involved.

Here’s one way to think of things: What value do you place on the survival of humanity, in dollars? In other words, how much would you be willing to pay to make sure that you, everyone you love, and every other person on the planet won’t die tomorrow? Current global GDP is around $100 trillion, and that’s just the amount we can produce now in a single year. It seems likely that if humanity survives for another century, that figure will rise by at least an order of magnitude. So, I’ll conservatively estimate that the continued survival of humanity is worth at least $1 quadrillion. If we can reduce the odds of human extinction this century by 0.1 percentage points, then that reduction would be worth at least $1 quadrillion × 0.1% = $1 trillion. Therefore, even if you think the odds of AI-induced human extinction are only one-in-a-thousand, you should still be willing to pay up to $1 trillion to eliminate that risk.
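To make the arithmetic explicit, here is a minimal back-of-the-envelope sketch of that expected-value calculation (the dollar value and probability are the rough assumptions from above, not precise figures):

```python
# Back-of-the-envelope expected-value sketch using the rough figures above.
# These numbers are illustrative assumptions, not precise estimates.

value_of_survival = 1e15   # ~$1 quadrillion: conservative value placed on humanity's survival
risk_reduction = 0.001     # eliminating a one-in-a-thousand (0.1%) chance of extinction

worth_paying = value_of_survival * risk_reduction
print(f"Worth paying up to ~${worth_paying:,.0f}")
# prints: Worth paying up to ~$1,000,000,000,000
```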

I believe that humanity has only about a 1-in-30 chance of dying from misaligned AI. But I would rather that chance be 0-in-30, and so I will still fight to lower the odds.

2. Epistemic humility — you could be wrong!

I’ll quote from myself here:

Trying to predict the future of technology is inherently an epistemically fraught thing to do. This is especially true in the case of AI, for several reasons: AI is a very new and speculative technology; it's not clear who, if anybody, should be considered an expert on forecasting AI; and AI forecasting relies on insights gleaned from many disparate disciplines (computer science, mathematics, materials sciences / semiconductor physics, economics, history, and even philosophy). It's hard to be truly knowledgeable about even one of these areas, and being knowledgeable enough to make accurate predictions across each of those disciplines seems practically impossible. It's important to be epistemically humble in moments like this.

Even if you believe that AI stands no chance of wiping us out, can you really be confident in that belief? Can you really be sure that you haven’t made an error somewhere in your reasoning?

If you’ve underestimated the chance of AI doom (and it’s quite plausible that you have), then you’ve also underestimated the value of AI safety. Be careful not to make that mistake.

3. When it comes to existential risks, it’s better to be too cautious than not cautious enough.

Many people — especially in the business and tech worlds — have grown wary of safetyism, and understandably so. People often overcorrect against real or perceived risks, and they frequently underestimate the opportunity costs of not taking enough risks. Safety measures aren’t costless; they usually come at the price of less convenience, less innovation, and lower economic growth. It’s reasonable to be skeptical whenever somebody says we need regulation to stop some dangerous new technology or corporate practice.

Yet there’s a difference between existential risks and mundane corporate risks. Mundane risks affect only a small slice of the population; existential risks threaten the life of every human being on the planet. Mundane risks usually carry bounded costs; existential risks put the very existence of mankind, a priceless good, on the line. Mundane problems can usually be fixed or reversed; human extinction is permanent and irreversible.

If we are too cautious when it comes to AI, the result will be slower-than-optimal innovation and deployment — not an insignificant cost, to be sure, but a finite cost. If we are not cautious enough when it comes to AI, the result will be total human extinction — a near-infinite loss. Thus, I would rather err on the side of caution.

4. AI safety and AI progress are not necessarily contradictory.

I’m a defensive accelerationist. I believe that we can and should promote AI innovation, while at the same time promoting AI safety measures. It is totally possible to implement policies that harness the near-term benefits of AI adoption while guarding against medium- to long-term harms.

What might these policies look like? Well, I wrote an entire post about the subject, which I recommend checking out if you’re curious:

5. Somebody’s gonna want regulation, and it’s better if we’re the ones doing it.

In one public opinion poll after another, most people report being distrustful of AI and of those creating it. Support for AI regulation, while widespread, has yet to become salient enough to mobilize a mass movement or prompt most lawmakers into action. But that can — and probably will — change. Once people really start “feeling the AGI”, they will demand legislative action, and that will be especially true once AI starts seriously impacting the job market. People might not like AI right now, but once their jobs get automated, that dislike will turn into politically potent rage.

The question is not, “Should AI be regulated?” but rather, “How should AI be regulated?” If you support AI innovation, then you should prefer having a pro-market, pro-technology wonk like me at the regulating table, instead of some technophobic demagogue. It’s in your best interest to support the people conducting serious research into reasonable and minimally disruptive AI governance, because otherwise the people in charge of governance will have no clue what they’re doing. The alternative to good regulations isn’t no regulations; it’s bad regulations.


In short, there are very good reasons to support AI safety, even if, like me, you have a low P(doom).
