We (80,000 Hours) have just released our longest and most in-depth problem profile — on reducing existential risks from AI.
You can read the profile here.
The rest of this post gives some background on the profile, a summary and the table of contents.
Some background
Like much of our content, this profile is aimed at an audience that has probably spent some time on the 80,000 Hours website, but is otherwise unfamiliar with EA, so it's pretty introductory. That said, we hope the profile will also be useful and clarifying for members of the EA community.
The profile primarily represents my (Benjamin Hilton's) views, though it was edited by Arden Koehler (our website director) and reviewed by Howie Lempel (our CEO), who both broadly agree with the takeaways.
I've tried to do a few things with this profile to make it as useful as possible for people new to the issue:
- I focus on what I see as the biggest issue: risks of power-seeking AI from strategically aware planning systems with advanced capabilities, as set out by Joe Carlsmith.
- I try to make things feel more concrete, and have released a whole separate article on what an AI-caused catastrophe could actually look like. (This owes a lot to Carlsmith's report, as well as Christiano's What failure looks like and Bostrom's Superintelligence.)
- I give (again, what I see as) important background information, such as the results of surveys of ML experts on AI risk, and an overview of recent advances in AI and scaling laws.
- I try to honestly explain the strongest reasons why the argument I present might be wrong.
- I include a long FAQ of common objections to working on AI risk to which I think there are strong responses.
There's also a feedback form if you'd prefer to give feedback that way rather than posting publicly.
This post includes the summary from the article and a table of contents.
Summary
We expect that there will be substantial progress in AI in the next few decades, potentially even to the point where machines come to outperform humans in many, if not all, tasks. This could have enormous benefits, helping to solve currently intractable global problems, but could also pose severe risks. These risks could arise accidentally (for example, if we don’t find technical solutions to concerns about the safety of AI systems), or deliberately (for example, if AI systems worsen geopolitical conflict). We think more work needs to be done to reduce these risks.
Some of these risks from advanced AI could be existential — meaning they could cause human extinction, or an equally permanent and severe disempowerment of humanity. [1] There have not yet been any satisfying answers to concerns — discussed below — about how this rapidly approaching, transformative technology can be safely developed and integrated into our society. Finding answers to these concerns is very neglected, and may well be tractable. We estimate that there are around 300 people worldwide working directly on this.[2] As a result, the possibility of AI-related catastrophe may be the world’s most pressing problem — and the best thing to work on for those who are well-placed to contribute.
Promising options for working on this problem include technical research on how to create safe AI systems, strategy research into the particular risks AI might pose, and policy research into ways in which companies and governments could mitigate these risks. If worthwhile policies are developed, we’ll need people to put them in place and implement them. There are also many opportunities to have a big impact in a variety of complementary roles, such as operations management, journalism, earning to give, and more — some of which we list below.
Our overall view
Recommended - highest priority
This is among the most pressing problems to work on.
Scale
AI will have a variety of impacts and has the potential to do a huge amount of good. But we’re particularly concerned with the possibility of extremely bad outcomes, especially an existential catastrophe. We’re very uncertain, but based on estimates from others using a variety of methods, our overall guess is that the risk of an existential catastrophe caused by artificial intelligence within the next 100 years is around 10%. This figure could significantly change with more research — some experts think it’s as low as 0.5% or much higher than 50%, and we’re open to either being right. Overall, our current take is that AI development poses a bigger threat to humanity’s long-term flourishing than any other issue we know of.
Neglectedness
Around $50 million was spent on reducing the worst risks from AI in 2020, compared to billions spent advancing AI capabilities.[3] [4] While we are seeing increasing concern from AI experts, there are still only around 300 people working directly on reducing the chances of an AI-related existential catastrophe.[2] Of these, it seems like about two-thirds are working on technical AI safety research, with the rest split between strategy (and policy) research and advocacy.
Solvability
Making progress on preventing an AI-related catastrophe seems hard, but there are a lot of avenues for more research and the field is very young. So we think it’s moderately tractable, though we’re highly uncertain — again, assessments of the tractability of making AI safe vary enormously.
Full table of contents
- Introduction
- 1. Many AI experts think there’s a non-negligible chance AI will lead to outcomes as bad as extinction
- 2. We’re making advances in AI extremely quickly
- 3. Power-seeking AI could pose an existential threat to humanity
- This all sounds very abstract. What could an existential catastrophe caused by AI actually look like?
- 4. Even if we find a way to avoid power-seeking, there are still risks
- So, how likely is an AI-related catastrophe?
- 5. We can tackle these risks
- 6. This work is extremely neglected
- What do we think are the best arguments we’re wrong?
- Arguments against working on AI risk to which we think there are strong responses
- What you can do concretely to help
- Top resources to learn more
- Acknowledgements
Acknowledgements
Huge thanks to Joel Becker, Tamay Besiroglu, Jungwon Byun, Joseph Carlsmith, Jesse Clifton, Emery Cooper, Ajeya Cotra, Andrew Critch, Anthony DiGiovanni, Noemi Dreksler, Ben Edelman, Lukas Finnveden, Emily Frizell, Ben Garfinkel, Katja Grace, Lewis Hammond, Jacob Hilton, Samuel Hilton, Michelle Hutchinson, Caroline Jeanmaire, Kuhan Jeyapragasan, Arden Koehler, Daniel Kokotajlo, Victoria Krakovna, Alex Lawsen, Howie Lempel, Eli Lifland, Katy Moore, Luke Muehlhauser, Neel Nanda, Linh Chi Nguyen, Luisa Rodriguez, Caspar Oesterheld, Ethan Perez, Charlie Rogers-Smith, Jack Ryan, Rohin Shah, Buck Shlegeris, Marlene Staib, Andreas Stuhlmüller, Luke Stebbing, Nate Thomas, Benjamin Todd, Stefan Torges, Michael Townsend, Chris van Merwijk, Hjalmar Wijk, and Mark Xu for either reviewing the article or providing extremely thoughtful and helpful comments and conversations. (This isn’t to say that they would all agree with everything I said – in fact we’ve had many spirited disagreements in the comments on the article!)
This work is licensed under a Creative Commons Attribution 4.0 International License.
- ^
We’re also concerned about the possibility that AI systems could deserve moral consideration for their own sake — for example, because they are sentient. We’re not going to discuss this possibility in this article; we instead cover artificial sentience in a separate article here.
- ^
I estimated this using the AI Watch database. For each organisation, I estimated the proportion of listed employees working directly on reducing existential risks from AI. There’s a lot of subjective judgement in the estimate (e.g. “does it seem like this research agenda is about AI safety in particular?”), and it could be too low if AI Watch is missing data on some organisations, or too high if the data counts people more than once or includes people who no longer work in the area. My 90% confidence interval would range from around 100 people to around 1,500 people.
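For readers who want to see the shape of this calculation, here is a minimal sketch in Python of the approach described above: summing, over organisations, the listed headcount multiplied by an estimated fraction working directly on the problem. The organisations, headcounts, and fractions below are made-up placeholders for illustration, not figures from the AI Watch database.

```python
# Minimal sketch of the footnote's headcount estimate (hypothetical data only).
# Each entry: (organisation, listed employees, estimated fraction working
# directly on reducing existential risks from AI).
hypothetical_orgs = [
    ("Org A", 60, 0.5),   # e.g. a lab where roughly half the listed staff do relevant work
    ("Org B", 25, 1.0),   # e.g. a dedicated AI safety research group
    ("Org C", 200, 0.1),  # e.g. a large lab with a small safety team
]

# Sum headcount x fraction across organisations to get a rough total.
total = sum(employees * fraction for _, employees, fraction in hypothetical_orgs)
print(f"Rough estimate of people working directly on the problem: ~{total:.0f}")
```

The real estimate applies the same weighting to every organisation in the database, which is where the subjective judgement (and the wide 100–1,500 confidence interval) comes in.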
- ^
It’s difficult to say exactly how much is being spent to advance AI capabilities. This is partly because of a lack of available data, and partly because of questions like:
* What research in AI is actually advancing the sorts of dangerous capabilities that might be increasing potential existential risk?
* Do advances in AI hardware or advances in data collection count?
* How about broader improvements to research processes in general, or things that might increase investment in the future through producing economic growth?

The most relevant figure we could find was the expenses of DeepMind from 2020, which were around £1 billion, according to their annual report. We’d expect most of that to be contributing to “advancing AI capabilities” in some sense, since their main goal is building powerful, general AI systems. (Although it’s important to note that DeepMind is also contributing to work in AI safety, which may be reducing existential risk.)
If DeepMind accounts for around 10% of the spending on advancing AI capabilities, this gives us a figure of around £10 billion. (Given that there are many AI companies in the US, and a large effort to produce advanced AI in China, we think 10% could be a good overall guess.)
As an upper bound, the total revenues of the AI sector in 2021 were around $340 billion.
So overall, we think the amount being spent to advance AI capabilities is between $1 billion and $340 billion per year. Even assuming a figure as low as $1 billion, this would still be around 20 times the roughly $50 million spent on reducing risks from AI.
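To make this Fermi estimate easy to check, here is a small sketch in Python using the round numbers quoted above. The 10% share is the guess described in this footnote, and the lower bound treats £1 billion as roughly $1 billion.

```python
# Rough sketch of the spending comparison above, using the footnote's round numbers.

deepmind_expenses_gbp = 1e9           # DeepMind's 2020 expenses: ~£1 billion
deepmind_share = 0.10                 # guess: DeepMind is ~10% of capabilities spending
capabilities_central_gbp = deepmind_expenses_gbp / deepmind_share  # ~£10 billion

capabilities_lower_usd = 1e9          # conservative lower bound (treating £1B as ~$1B)
capabilities_upper_usd = 340e9        # upper bound: 2021 AI-sector revenues
safety_spending_usd = 50e6            # ~$50 million spent on reducing the worst risks (2020)

print(f"Central estimate: ~£{capabilities_central_gbp / 1e9:.0f} billion/year on capabilities")
print(f"Lower-bound ratio: ~{capabilities_lower_usd / safety_spending_usd:.0f}x safety spending")
print(f"Upper-bound ratio: ~{capabilities_upper_usd / safety_spending_usd:.0f}x safety spending")
```

Even at the conservative lower bound, capabilities spending comes out around 20 times safety spending; at the upper bound, the ratio is in the thousands.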
Thank you for this great overview! I might have missed it, but is there a link to work being done (or needing to be done) on helping people adapt or reskill in response to upcoming AI developments? Something similar to the reskilling needed for the shift to greener jobs. I can imagine that a real focus and opportunity lies here: ensuring that people who will see their current jobs or fields widely impacted by AI have the guidance and support to move towards a career that increases their sense of purpose and contribution, rather than leaving them with a sense of loss of meaning and/or exclusion.
Thank you! Alix.