Concisely: I've just released the book Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World.

  • It's an engaging introduction to the main issues and arguments in AI safety and risk, with clarity and accessibility prioritized. There are blurbs of support from Max Tegmark, Will MacAskill, Roman Yampolskiy, and others.
  • The main argument is that AI capabilities are increasing rapidly and that we may not be able to fully align or control advanced AI systems, which creates risk. Given the great uncertainty, we should be prudent and act now to ensure AI is developed safely. The book tries to be hopeful.
  • Why does it exist?
    There are many useful posts, blogs, podcasts, and articles on AI safety, but there was no up-to-date book dedicated entirely to the AI safety issue and written for those without any prior exposure to it (including those with no science background).
  • This book is meant to fill that gap and could be useful as outreach or introductory material.
  • If you have already been following the AI safety issue, there likely isn't a lot here that is new for you. So, this might be best seen as something useful for friends, relatives, some policymakers, or others just learning about the issue (although you may still like the framing).
  • It's available on numerous Amazon marketplaces. The audiobook is now available (edit), with a hardcover option to follow.
  • It was a hard journey. I hope it is of value to the community. 
Comments (9)



Darren - congratulations on publication! Impressive endorsements. 

It's great to see one up-to-date book on this issue that we can recommend widely, for the AI-Safety-curious. (Here's my tweet about it, FWIW).

Looks interesting! Do you plan to release an audiobook version of it? I think you can reach a larger audience and more kinds of people that way.

Sure do! As I said in the second-last bullet, it's in progress :)
(hopefully within the next two weeks) 

Great! I don’t know how I missed that 😅

Audiobook is out :)

[anonymous]

Hi Darren - I do AI Safety community / movement building in Australia.

Firstly, congrats on the book!
Second, I think book giveaways are a great way for community builders to create engagement. I wonder if there is a way to buy in bulk and get your book at a discount?

Thanks!
There might be. If you're interested in pursuing that in Australia, send me a DM and we'll explore what's possible. 

[This comment is no longer endorsed by its author]

It's a tricky balance and I don't think there is a perfect solution. The issue is that both the title and the cover have to be intriguing and compelling (and, ideally, short and immediately understandable). What will intrigue some will be less appealing to others.
So, I could have had a question mark, or some other less dramatic image... but when not only safety researchers but also the CEOs of the leading AI companies believe the product they are developing could lead to extinction, that is an alarming fact about the world. That drove the cover.
The inside is more nuanced and cautious.
