If you enjoy this, please consider subscribing to my Substack.

Sam Altman has said he thinks that developing artificial general intelligence (AGI) could lead to human extinction, but OpenAI is trying to build it ASAP. Why?

The common story for how AI could overpower humanity involves an “intelligence explosion,” where an AI system becomes smart enough to further improve its capabilities, bootstrapping its way to superintelligence. Even without any kind of recursive self-improvement, some AI safety advocates argue that a large enough number of copies of a genuinely human-level AI system could pose serious problems for humanity. (I discuss this idea in more detail in my recent Jacobin cover story.)

Some people think the transition from human-level AI to superintelligence could happen in a matter of months, weeks, days, or even hours. The faster the takeoff, the more dangerous, the thinking goes. 

Sam Altman, circa February 2023, agrees that a slower takeoff would be better. In an OpenAI blog post called “Planning for AGI and beyond,” he argues that “a slower takeoff gives us more time to figure out empirically how to solve the safety problem and how to adapt.” 

So why does rushing to AGI help? Altman writes that “shorter timelines seem more amenable to coordination and more likely to lead to a slower takeoff due to less of a compute overhang.” 

Let’s set aside the first claim, which is far from obvious to me.

Computational resources, or “compute,” are one of the key inputs to training AI models. Altman is basically arguing that the longer it takes to reach AGI, the cheaper and more abundant compute will be, and that surplus can then be plowed back into improving or scaling up the model. 

The amount of compute used to train AI models has increased roughly one-hundred-millionfold since 2010. Compute supply has not kept pace with demand, driving up prices and rewarding the companies that have near-monopolies on chip design and manufacturing. 
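As a rough check, the hundred-million-fold figure implies a doubling time close to the roughly six-month pace often cited for the deep learning era. A quick sketch (the 2023 endpoint year is my assumption, not the article's):

```python
import math

# Implied doubling time of training compute, assuming a ~1e8x increase
# between 2010 and 2023 (the endpoint year is an assumption).
growth = 1e8
years = 2023 - 2010

doublings = math.log2(growth)                 # about 26.6 doublings
months_per_doubling = years * 12 / doublings

print(f"{doublings:.1f} doublings, one every {months_per_doubling:.1f} months")
```

That works out to a doubling roughly every six months, orders of magnitude faster than Moore's law.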

Last May, Elon Musk said that “GPUs at this point are considerably harder to get than drugs” (and he would know). One startup CEO said “It’s like toilet paper during the pandemic.”

Perhaps no one has benefited more from the deep learning revolution than the 31-year-old GPU designer Nvidia. GPUs, chips originally designed to render 3D video game graphics, turned out to be the best hardware for training deep learning models. Nvidia, once little-known outside of PC gaming circles, reportedly accounts for 88 percent of the GPU market and has ridden the wave of AI investment. Since OpenAI’s founding in December 2015, Nvidia’s valuation has risen more than 9,940 percent, breaking $1 trillion last summer. CEO and cofounder Jensen Huang’s net worth was $5 billion in 2020; now it’s $64 billion.

If training a human-level AI system requires an unprecedented amount of computing power, close to economic and technological limits (as seems likely), and if additional compute is needed to scale the system up or improve its capabilities, then takeoff speed may be rate-limited by the availability of this key input. Reasoning like this is probably why Altman thinks a smaller compute overhang would result in a slower takeoff. 
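A toy version of the overhang argument, with all numbers assumed purely for illustration: if compute keeps getting cheaper while AGI hasn't arrived yet, then a fixed budget at arrival buys correspondingly more compute, so a later arrival means a faster initial scale-up.

```python
# Illustrative sketch of the compute-overhang argument. The 2.5-year
# price-halving time is an assumed parameter, not a figure from the article.
def overhang_multiplier(delay_years: float, halving_years: float = 2.5) -> float:
    """Extra compute a fixed budget buys if AGI arrives `delay_years` later,
    assuming the price of compute halves every `halving_years` years."""
    return 2 ** (delay_years / halving_years)

print(overhang_multiplier(5.0))  # -> 4.0: five years later, 4x the compute
```

On these assumptions, reaching AGI five years later means whoever gets there can immediately run four times as many copies on the same budget, which is the sense in which longer timelines could mean a faster takeoff.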

Given all this, many in the AI safety community think that increasing the supply of compute will increase existential risk from AI, by both shortening timelines AND increasing takeoff speed — reducing the time we have to work on technical safety and AI governance and making loss of control more likely. 

So why is Sam Altman reportedly trying to raise trillions of dollars to massively increase the supply of compute? 

Last night, the Wall Street Journal reported that Altman was in talks with the UAE and other investors to raise up to $7 trillion to build more AI chips. 

I’m going to boldly predict that Sam Altman will not raise $7 trillion to build more AI chips. But even one percent of that total would nearly double the amount of money spent on semiconductor manufacturing equipment last year. 
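The arithmetic behind that comparison is simple but worth making explicit (the $7 trillion comes from the WSJ report above; the comparison baseline for equipment spending is the article's claim, not mine):

```python
# Scale check on the reported fundraising target.
total_raised = 7e12                   # reported $7 trillion target
one_percent = 0.01 * total_raised     # $70 billion

print(f"1% of $7T = ${one_percent / 1e9:.0f} billion")
```

Even the one-percent slice is in the same ballpark as an entire year of global semiconductor equipment purchases.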

Perhaps most importantly, Altman’s plan seems to fly in the face of the arguments he made not even one year ago. Increasing the supply of compute is probably the purest form of boosting AI capabilities and would increase the compute overhang that he claimed to worry about. 

The AI safety community sometimes divides AI research into capabilities and safety, but some researchers push back on this dichotomy. A friend of mine who works as a machine learning academic once wrote to me that “in some sense, almost all [AI] researchers are safety researchers because the goal is to try to understand how things work.” 

Altman makes a similar point in the blog post: 

Importantly, we think we often have to make progress on AI safety and capabilities together. It’s a false dichotomy to talk about them separately; they are correlated in many ways. Our best safety work has come from working with our most capable models. That said, it’s important that the ratio of safety progress to capability progress increases.

There are good reasons to doubt the numbers reported above (mostly because they’re absurdly, unprecedentedly big). But regardless of its feasibility, this effort to massively expand the supply of compute is hard to square with the above argument. Making compute cheaper speeds things up without any necessary increase in understanding. 

Early reporting about Altman’s Middle East chip plans emerged in the wake of November’s board drama. It’s worth noting that Helen Toner and Tasha McCauley, two of the (now ex-) board members who voted to fire Altman, reviewed drafts of the February 2023 blog post. While I don’t think there was any single smoking gun that prompted the board to fire him, I’d be surprised if these plans didn’t increase tensions. 

OpenAI deserves credit for publishing blog posts like “Planning for AGI and beyond.” Given the stakes of what they’re trying to do, it’s important to look at how OpenAI publicly reasons about these issues (of course, corporate blogs should be taken with a grain of salt and supplemented with independent reporting). And when the actions of company leaders seem to contradict these documents, it’s worth calling that out. 

If Sam Altman has changed his mind about compute overhangs, it’d be great to hear about it from him.

Comments

Some other relevant responses:

Scott Alexander writes

My current impression of OpenAI’s multiple contradictory perspectives here is that they are genuinely interested in safety - but only insofar as that’s compatible with scaling up AI as fast as possible. This is far from the worst way that an AI company could be. But it’s not reassuring either.

Zvi Mowshowitz writes

Even scaling back the misunderstandings, this is what ambition looks like.

It is not what safety looks like. It is not what OpenAI’s non-profit mission looks like. It is not what it looks like to have concerns about a hardware overhang, and use that as a reason why one must build AGI soon before someone else does. The entire justification for OpenAI’s strategy is invalidated by this move.

[...]

The chip plan seems entirely inconsistent with both OpenAI’s claimed safety plans and theories, and with OpenAI’s non-profit mission. It looks like a very good way to make things riskier faster. You cannot both try to increase investment on hardware by orders of magnitude, and then say you need to push forward because of the risks of allowing there to be an overhang.

Or, well, you can, but we won’t believe you.

This is doubly true given where he plans to build the chips. The United States would be utterly insane to allow these new chip factories to get located in the UAE. At a minimum, we need to require ‘friend shoring’ here, and place any new capacity in safely friendly countries.

Also, frankly, this is not The Way in any sense and he has to know it:

Sam Altman: You can grind to help secure our collective future or you can write substacks about why we are going fail.

Thanks, these are good

What would be the proper response of the EA/AI safety community, given that Altman is increasingly diverging from good governance/showing his true colors? Should there be any strategic changes?

So, what do we think Altman's mental state/true belief is? (Wish this could be a poll)

  1. Values safety, but values personal status & power more
  2. Values safety, but believes he needs to be in control of everything & has a messiah complex
  3. Doesn't really care about safety, it was all empty talk
  4. Something else

I'm also very curious what the internal debate on this is - if I were working on safety inside OpenAI, I'd be very upset.

  1. and 2. seem very similar to me. I think it's something like that.

The way I envision him (obviously I don't know and might be wrong):

  • Genuinely cares about safety and doing good.
  • Also really likes the thought of having power and doing earth-shaking stuff with powerful AI.
  • Looks at AI risk arguments with a lens of motivated cognition influenced by the bullet point above.
  • Mostly thinks things will go well, but this comes primarily from the instinctive optimism of high-energy CEOs, who are predominantly personality-selected for optimistic attitudes. If he were to really sit down and introspect on his views on the question (and stare into the abyss), as a very smart person he might find that he thinks things might well go poorly, but then thoughts come up like "ehh, if I can't make AI go well, others probably can't either, and it's worth the risk, especially because things could be really cool for a while before it all ends."
  • If he ever has thoughts like "Am I one of the bad guys here?," he'll shrug them off with "nah" rather than having the occasional existential crises and self-doubts around that sort of thing.
  • He maybe has no stable circle of people to whom he defers on knowledge questions; that is, no one outside himself he trusts as much as himself. He might say he updates to person x or y and considers them smarter than himself/better forecasters, but in reality, he "respects" whoever is good news for him as long as they are good news for him. If he learns that smart people around him are suddenly confident that what he's doing is bad, he'll feel system-1 annoyed at them, which prompts him to find reasons to now disagree with them and no longer consider them included in his circle of epistemic deference. (Maybe this trait isn't black and white; there's at least some chance that he'd change course if 100% of people he at one point in time respects spoke up against his plan all at once.)
  • Maybe doesn't have a lot of mental machinery built around treating it as a sacred mission to have true beliefs, so he might say things about avoiding hardware overhang as an argument for OpenAI's strategy and then later do something that seemingly contradicts his previous stance, because he was using arguments that felt like they'd fit but without really thinking hard about them and building a detailed model for forecasting that he operates from for every such decision.

Altman, like most people with power, doesn’t have a totally coherent vision for why him gaining power is beneficial for humanity, but can come up with some vague values or poorly-thought-out logic when pressed. He values safety, to some extent, but is skeptical of attempts to cut back on progress in the name of safety.

Hard to say, but his behavior (and the accounts from other people) seems most consistent with 1.

I imagine Sam's mental model is the bigger lead OpenAI has over others, the more control they can have at pivotal moments, and (in his mind) the safer things will be. Everyone else is quickly catching up in terms of capability, but if OpenAI has special chips their competitors don't have access to, then they have an edge. Obviously, this can't really be distinguished from Sam just trying to maximize his own ambitions, but it doesn't necessarily undercut safety goals either.

Sam is not pitching special chips for OpenAI here, right?

I do not read safety goals into this project, which sounds more like it's "make there be many more fabs distributed around the world for more chips and decreased centralization". (Which, fwiw, erodes options for containing specialized chips.)

Wouldn't Sam selling large amounts of chips to OAI's direct competitors constitute a conflict of interest? It also doesn't seem like something he would want to do, since he seems very devoted to OAI's success, for better or worse. Why would he want to increase decentralization?

Executive summary: Sam Altman has argued slower AI progress reduces existential risk, yet he now reportedly aims to rapidly expand compute supply, contradicting his stance and potentially accelerating capabilities growth without safety improvements.

Key points:

  1. Altman said slower AI progress allows more time to ensure safety, but his massive chip manufacturing plan would quicken capabilities growth.
  2. Increasing compute supply likely speeds up AI progress, shortening timelines and potentially enabling faster, more dangerous takeoff speeds.
  3. The plan seems at odds with Altman's view that safety and capabilities progress should increase together in a balanced way.
  4. Altman claimed smaller compute overhangs may slow takeoff speeds, but expanding supply boosts overhangs and acceleration.
  5. Board members who voted to remove Altman had reviewed the blog post stating his slower-is-safer perspective.
  6. If Altman changed his mind on risks of compute overhangs, it would help to publicly explain his updated views.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.