
I have just published my new book on s-risks, titled Avoiding the Worst: How to Prevent a Moral Catastrophe. You can find it on Amazon, read the PDF version, or listen to the audio version.

The book is primarily aimed at longtermist effective altruists. I wrote it because I feel that s-risk prevention is a somewhat neglected priority area in the community, and because a single, comprehensive introduction to s-risks did not yet exist. My hope is that a coherent introduction will help to strengthen interest in the topic and spark further work.

Here’s a short description of the book:

From Nineteen Eighty-Four to Black Mirror, we are all familiar with the tropes of dystopian science fiction. But what if worst-case scenarios could actually become reality? And what if we could do something now to put the world on a better path?

In Avoiding the Worst, Tobias Baumann lays out the concept of risks of future suffering (s-risks). With a focus on s-risks that are both realistic and avoidable, he argues that we have strong reasons to consider their reduction a top priority. Finally, he turns to the question of what we can do to help steer the world away from s-risks and towards a brighter future.

For a rough overview, here’s the book's table of contents:

Part I: What are s-risks?

Chapter 1: Technology and astronomical stakes

Chapter 2: Types of s-risks

Part II: Should we focus on s-risks?

Chapter 3: Should we focus on the long-term future?

Chapter 4: Should we focus on reducing suffering?

Chapter 5: Should we focus on worst-case outcomes?

Chapter 6: Cognitive biases

Part III: How can we best reduce s-risks?

Chapter 7: Risk factors for s-risks

Chapter 8: Moral advocacy

Chapter 9: Better politics

Chapter 10: Emerging technologies

Chapter 11: Long-term impact

And finally, some blurbs for the book:

“One of the most important, original, and disturbing books I have read. Tobias Baumann provides a comprehensive introduction to the field of s-risk reduction. Most importantly, he outlines sensible steps towards preventing future atrocities. Highly recommended.”

— David Pearce, author of The Hedonistic Imperative and Can Biotechnology Abolish Suffering?

“This book is a groundbreaking contribution on a topic that has been severely neglected to date. Tobias Baumann presents a powerful case for averting worst-case scenarios that could involve vast amounts of suffering. A much needed read for our time.”

— Oscar Horta, co-founder of Animal Ethics and author of Making a Stand for Animals


 

Comments

Congratulations on the book!

Apart from the utterly horrifying & well-known Black Mirror episode 'White Christmas', another viscerally compelling depiction of s-risks was the Iain M. Banks 'Culture' novel 'Surface Detail' (2010), in which a futuristic society with a strong religious fundamentalist streak uploads recently-dead minds into digital virtual hells just to torment them for subjective millennia -- mostly in order to intimidate the living into righteousness.

Maybe you mention it, but if you're not familiar with it, it's one of the more plausible depictions of Things Going Very Wrong Indeed, in terms of net sentient utility.

My go-to is this (warning: horrifying) 1-minute comic. I credit it with making me viscerally grasp just how important s-risks are.

There's always factory farm footage too. Dominion and Earthlings are the best for this.

I think it's no surprise that people who were previously in animal welfare end up going into s-risks. It makes you realize how very plausible massive scale suffering is, even if there are no malevolent actors. 

Oh no. Very horrifying! 

Not entirely clear why the sadistic robots would do such a thing. 

One thing I liked about the novel 'Surface Detail' was that the sadists imposing the suffering had at least some kind of semi-plausible religious rationale for what they were doing -- which makes the whole scenario more psychologically plausible and therefore all the more terrifying.

Yeah, I agree it's not clear why they'd do it. I give the comic writer some slack though, since it's hard to fit that much into a comic. 

A couple of reasons, off the top of my head, why that could happen:

  • Sign flip. Accidentally flip the sign and instead of trying to maximize human flourishing, it's trying to minimize it. 
  • Punishment. Imagine a dictator created TAI and was using it to punish people who fit a certain demographic (e.g. Uyghurs). Imagine that the human there is a Uyghur, or that the dictator failed to specify the demographic precisely and the system started punishing everybody, or large swathes of the world.

Honestly though, I think the most probable s-risks are the incidental ones (covered in Tobias's book and also this blog post here). Basically, something where suffering is a by-product, like factory farming or slavery. I also put the highest odds on it involving digital minds, since I think the future will be predominantly digital minds.

But it'd be very hard to make a comic about digital minds that would be emotionally compelling, which is why I like the comic (although "like" is a bit of a strong word. More, "found incredibly psychologically scarring but in a way that helps me remember what I'm fighting for")

This is amazing! Any recommendations for which are the most important parts of the book for people who are decently familiar with EA and LW, according to you? Especially looking for moral and practical arguments I might have overlooked, and I don't need to be persuaded to care about animal/insect/machine suffering in the first place.

I am (clearly) not Tobias, but I'd expect many people familiar with EA and LW would get something new out of Ch 2, 4, 5, and 7-11. Of these, seems like the latter half of 5, 9, and 11 would be especially novel if you're already familiar with the basics of s-risks along the lines of the intro resources that CRS and CLR have published. I think the content of 7 and 10 is sufficiently crucial that it's probably worth reading even if you've checked out those older resources, despite some overlap.

I agree with this answer.

I don't need to be persuaded to care about animal/insect/machine suffering in the first place.

That's great, because that is also the starting point of my book. From the introduction:

Before I dive deeper, I should clarify the values that underlie this book. A key principle is impartiality: suffering matters equally irrespective of who experiences it. In particular, I believe we should care about all sentient beings, including nonhuman animals. Similarly, I believe suffering matters equally regardless of when it is experienced. A future individual is no less (and no more) deserving of moral consideration than someone alive now. So the fact that a moral catastrophe takes place in the distant future does not reduce the urgency of preventing it, if we have the means to do so. I will assume that you broadly agree with these fundamental values, which form the starting point of the book.

That is, I'm not dwelling on an argument for these fundamental values, as that can be found elsewhere.

Congratulations! : ) Any plans for an audiobook?

Thanks! 

An audiobook is a good idea and I'll look into it, though I don't expect it to be done any time soon (i.e. it would at least take several months, I think).

Audiobook version: [new] Aaron made an awesome audiobook version here. 

[Original] It's easy to turn it into an audiobook version with Evie or Natural Reader for anybody who likes to read with their ears instead of their eyes. A full guide I wrote up on how to turn everything into audio is here

Also, Tobias, if you want to make a super simple audiobook version of the book, I recommend using Amazon Polly. It'll probably cost under $100, take less than 10 hours, and increase the number of people who read your book by a lot. I know a ton of people who only read with their ears, or who are more than 10x more likely to read something if there's an audio version. Even I only found out about this book because I listened to the article on the Nonlinear Library (sorry for the shameless but relevant plug 😛)

Finally, congrats on the book! So far I'm loving it. Thank you for writing it. I think s-risks deserve more attention in the EA movement and think this book will help move the needle. 

I’m working on getting this to be read in a high quality voice, but for anyone else who wants to try/do it better, here’s a txt with what I think the voice should actually read (like no reading out page numbers and stuff like that): https://drive.google.com/file/d/155pb6tMkE-rSmrizi1jbzPKTMSmYJRDX/view?usp=drivesdk

Not 100% certain it’s correct though, likely a missing word or two

Update: a provisional text-to-speech audio version (Siri reading that text file, haha) is now linked at the top of the post!

Should be up on all the main podcast apps soon, but until then, you can listen on Spreaker at https://bit.ly/3UmQS85

Youtube (w/ accurate subtitles, embedded below)

Apple Podcasts

Spotify

RSS feed URL: https://www.spreaker.com/show/5706170/episodes/feed

Drive folder with some scripts and other resources; may add more later!

(Will try to clean the thumbnail of my name and Spreaker logo, but don't want to wait to share! )

...

Not certain, and it might take a couple of days, but it should be pretty easy (though not instantaneous) to basically make readouts in any English accent the new iOS can handle ('British full version.m4a' in the Drive folder is a sample, but has pronunciation issues due to weird text encoding)

We've now put together a new and improved audio version, which can be found here.

Just listened to it! The pleasant and thoughtful narration by Adrian Nelson felt perfect for the book. I might even recommend the audiobook version over the text version to people who might otherwise find it distressing to think about s-risks. :)

You might consider creating a text-to-speech version by using e.g. Amazon Polly. Whilst imperfect, it is listenable and might be useful to people. Here is a sample generated with the British English Arthur Male voice.
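For anyone who wants to script this themselves, here's a minimal sketch. The `chunk_text` helper is my own (not from any official Polly tooling): it splits a long manuscript into pieces that fit Polly's per-request text limit (roughly 3,000 characters for a plain-text `synthesize_speech` call), breaking at sentence boundaries. The commented-out Polly section assumes AWS credentials are already configured for boto3.

```python
# Sketch: split a long manuscript into chunks that fit Amazon Polly's
# per-request text limit (~3000 characters for plain text), breaking
# at sentence boundaries so the narration doesn't cut mid-sentence.
import re

def chunk_text(text, limit=3000):
    """Split text into chunks of at most `limit` characters, at sentence
    ends. (A single sentence longer than `limit` is kept whole in this
    simple sketch.)"""
    sentences = re.split(r'(?<=[.!?])\s+', text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > limit:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks

# Each chunk can then be synthesized and the MP3 streams concatenated,
# e.g. (requires boto3 and configured AWS credentials):
#
#   import boto3
#   polly = boto3.client("polly")
#   with open("audiobook.mp3", "wb") as out:
#       for chunk in chunk_text(manuscript):
#           resp = polly.synthesize_speech(
#               Text=chunk, OutputFormat="mp3",
#               VoiceId="Arthur", Engine="neural")
#           out.write(resp["AudioStream"].read())
```

Naive MP3 concatenation like this is listenable but not seamless; for a polished result you'd stitch the segments with an audio tool instead.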

Yes, Amazon Polly is great! 

Small thing: British voices sound more credible, which is good, but with the trade-off of being harder to listen to at high speeds, which is my strong preference.

There are probably not many people who listen at high enough speeds for this to matter, but it's a trade-off to consider.

Also, my research for the Nonlinear Library found that on average people prefer listening to male voices, for what it's worth.  I didn't research it hard or for long and don't think it matters a ton either way, but just to share what I found. 

I'll just throw out the possibility of copying and pasting the whole thing (with anywhere from zero to a lot of formatting/editing) into an EA Forum post, which (I assume?) would trigger the Nonlinear Library system to turn it into audio. This would also get it into the feeds of people who only consume the Forum via podcast app.

Since this has generated so much interest, it is worth noting that the Center for Reducing Suffering is hiring for a Communications Director, so if you are knowledgeable about s-risks, do check out the job description.

Thank you, Tobias! I've wanted to learn more about the practical implications of s-risks for a while but never quite knew where to start. I'm really keen to read Part III.

I'm glad risks from Scotland are finally being explored! (Can't unsee the Scottish flag on the cover)

I hate to be this person, but is there an epub version available? 

The easiest way to download it as an epub is here.

Congratulations! It's on my reading list now.

Anyone interested in an in-person-in-London reading group on this?

+2 private expressions of interest
