
Note: I work at MIRI, but not on the comms team. I don’t work on the book directly and am writing this post in my personal capacity, but do think it would be great if a lot of people read the book.

Eliezer Yudkowsky and Nate Soares have written a book about why artificial superintelligence might cause human extinction. The book is called If Anyone Builds It, Everyone Dies. It is out September 16th and is available for preorder now.

The book is good

Having read it, I can say it’s genuinely good. While I personally really enjoy Yudkowsky's other writing, I understand why some people find it off-putting or arrogant. I think If Anyone Builds It is much more accessible: it's measured, and it doesn’t condescend to readers. It’s also just an enjoyable (although also alarming) read.

This is by far the best explainer of existential risk from AI that I have read. It’s more up-to-date and describes the actual problem better (in my opinion) than other work like Life 3.0, The Alignment Problem, or Human Compatible. I also like some parts of those previous books, but I think If Anyone Builds It does a much better job at conveying the realness and urgency of the problem.

There have been impressive endorsements

You might have been worried that a book with this title and topic wouldn’t be taken seriously. But the book has received many endorsements, from scientists, academics, people in the National Security community, and public figures. This includes many people from outside the AI safety sphere. Some endorsements that I was particularly surprised/impressed by:

  • Ben Bernanke, Nobel-winning economist; former Chairman of the U.S. Federal Reserve
  • Jon Wolfsthal, former Special Assistant to the President for National Security Affairs; former Senior Director for Arms Control and Nonproliferation, White House, National Security Council
  • Lieutenant General John (Jack) N.T. Shanahan (USAF, Ret.), Inaugural Director, Department of Defense Joint AI Center

There are many more endorsements; you can see Malo’s post on them here.

Preorders matter

I think it is important that the public, policymakers, and other decision-makers read the book, and MIRI is working hard on this. Preorders help a lot by getting the book onto bestseller lists.

A few notes on preordering:

  • American preorders count most for bestseller lists
  • Bulk orders (e.g., over 50 books) can be penalized for the purpose of bestseller lists. I think orders of around 10 are fine. If you want to order multiple (e.g., 50) copies for your family, office, or school, you can get a discount here.
  • Please only buy copies that will actually be read; we aren’t trying to sell computer monitor stands!

You can see options for preordering here (including audio versions).

Comments

I pre-ordered the book recently. I took this opportunity to talk to my parents about AI x-risk—I'd been hesitant to talk about it, but they were more receptive than I expected. They've pre-ordered the book as well.

Note: I'm being a bit adversarial with these questions, probably because the book launch advertisements are annoying me a bit. Still, an answer to my questions could tip me over to pre-ordering/not pre-ordering the book.

Would you say that this book meaningfully moves the frontier of the public discussion on AI x-risk forward?

As in, if someone's read much to ~all of the publicly available MIRI material (including all of the arbital alignment domain, the 2021 dialogues, the LW sequences, and even some of the older papers), plus a bunch of writing from detractors (e.g. Pope, Belrose, Turner, Barnett, Thornley, 1a3orn), will they find updated defenses/elaborations on the evolution analogy, why automated alignment isn't possible, why to expect expected utility maximizers, why optimization will be "infectious", and some more on things linked here?

Additionally, would any of the long-time MIRI-debaters (as mentioned above, also including Christiano and the OpenPhil/Constellation cluster of people) plausibly endorse the book not just as a good distillation, but as moving the frontier of the public discussion forward?

Thanks for your comment :) Sorry you're finding all the book posts annoying; I decided to post here after seeing that there hadn’t been a post on the EA Forum.

I’m not actually sure what book content I’m allowed to talk about publicly before the launch. Overall the book is written much more for an audience who are new to the AI x-risk arguments (e.g., policymakers and the general public), and it is less focused on providing new arguments to people who have been thinking/reading about this for years (although I do think they’ll find it an enjoyable and clarifying read). I don’t think it's trying to go 15 arguments deep in a LessWrong argument chain. That said, I think there is new stuff in there: the arguments are presented more clearly than before, there are novel framings, and I would guess there are at least some things in there that you would find new. I don’t know if I would expect people from the “Pope, Belrose, Turner, Barnett, Thornley, 1a3orn” crowd to be convinced, but they might appreciate the new framings. There will also be related online resources, which I think will cover more of the argument tree, although again, I don’t know how convincing this will be to people who are already in deep.

Here’s what Nate said in the LW announcement post:

If you're a LessWrong regular, you might wonder whether the book contains anything new for you personally. The content won’t come as a shock to folks who have read or listened to a bunch of what Eliezer and I have to say, but it nevertheless contains some new articulations of our arguments, that I think are better articulations than we’ve ever managed before.

I would guess many people from the OpenPhil/Constellation cluster would endorse the book as a good distillation. But insofar as it moves the frontier of online arguments about AI x-risk forward, it will mainly be by stating the arguments more clearly (which imo is still progress).

Thanks! That's the kind of answer I was looking for. I'll sleep on whether to pre-order, and in the meantime I'm looking forward more to the online appendices. (I also should've specified it's the other Barnett ;-)
