
I have just published my new book on s-risks, titled Avoiding the Worst: How to Prevent a Moral Catastrophe. You can find it on Amazon, read the PDF version, or listen to the audio version.

The book is primarily aimed at longtermist effective altruists. I wrote it because I feel that s-risk prevention is a somewhat neglected priority area in the community, and because a single, comprehensive introduction to s-risks did not yet exist. My hope is that a coherent introduction will help to strengthen interest in the topic and spark further work.

Here’s a short description of the book:

From Nineteen Eighty-Four to Black Mirror, we are all familiar with the tropes of dystopian science fiction. But what if worst-case scenarios could actually become reality? And what if we could do something now to put the world on a better path?

In Avoiding the Worst, Tobias Baumann lays out the concept of risks of future suffering (s-risks). With a focus on s-risks that are both realistic and avoidable, he argues that we have strong reasons to consider their reduction a top priority. Finally, he turns to the question of what we can do to help steer the world away from s-risks and towards a brighter future.

For a rough overview, here’s the book's table of contents:

Part I: What are s-risks?

Chapter 1: Technology and astronomical stakes

Chapter 2: Types of s-risks

Part II: Should we focus on s-risks?

Chapter 3: Should we focus on the long-term future?

Chapter 4: Should we focus on reducing suffering?

Chapter 5: Should we focus on worst-case outcomes?

Chapter 6: Cognitive biases

Part III: How can we best reduce s-risks?

Chapter 7: Risk factors for s-risks

Chapter 8: Moral advocacy

Chapter 9: Better politics

Chapter 10: Emerging technologies

Chapter 11: Long-term impact

And finally, some blurbs for the book:

“One of the most important, original, and disturbing books I have read. Tobias Baumann provides a comprehensive introduction to the field of s-risk reduction. Most importantly, he outlines sensible steps towards preventing future atrocities. Highly recommended.”

— David Pearce, author of The Hedonistic Imperative and Can Biotechnology Abolish Suffering?

“This book is a groundbreaking contribution on a topic that has been severely neglected to date. Tobias Baumann presents a powerful case for averting worst-case scenarios that could involve vast amounts of suffering. A much needed read for our time.”

— Oscar Horta, co-founder of Animal Ethics and author of Making a Stand for Animals


 

Comments (28)



Congratulations on the book!

Apart from the utterly horrifying & well-known Black Mirror episode 'White Christmas', another viscerally compelling depiction of s-risks was the Iain M. Banks 'Culture' novel 'Surface Detail' (2010), in which a futuristic society with a strong religious fundamentalist streak uploads recently-dead minds into digital virtual hells just to torment them for subjective millennia -- mostly in order to intimidate the living into righteousness.

Maybe you mention it, but if you're not familiar with it, it's one of the more plausible depictions of Things Going Very Wrong Indeed, in terms of net sentient utility.

My go-to is this (warning: horrifying) 1-minute comic. I credit it with making me viscerally grasp just how important s-risks are.

There's always factory farm footage too. Dominion and Earthlings are the best for this.

I think it's no surprise that people who were previously in animal welfare end up going into s-risks. It makes you realize how very plausible massive scale suffering is, even if there are no malevolent actors. 

Oh no. Very horrifying! 

Not entirely clear why the sadistic robots would do such a thing. 

One thing I liked about the novel 'Surface Detail' was that the sadists imposing the suffering had at least some kind of semi-plausible religious rationale for what they were doing -- which makes the whole scenario more psychologically plausible and therefore all the more terrifying.

Yeah, I agree it's not clear why they'd do it. I cut the comic writer some slack though, since it's hard to fit that much into a comic.

A couple of reasons off the top of my head why that could happen:

  • Sign flip. Accidentally flip the sign of the objective and, instead of trying to maximize human flourishing, the system tries to minimize it (see the toy sketch after this list).
  • Punishment. Imagine a dictator created TAI and was using it to punish people who fit a certain demographic (e.g. Uyghurs). Imagine that the human there is a Uyghur, or that they failed to sufficiently specify the demographic and it started targeting everybody or large swathes of the world.
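
To make the first failure mode concrete, here's a toy sketch (the objective and all names are made up for illustration, not from the book or any real system): flipping one constant turns an optimizer that should climb toward the best states into one that races toward the worst.

```python
# Toy sketch of a sign-flip bug: one wrong constant turns gradient *ascent*
# on a welfare objective into gradient *descent* toward the worst states.
import numpy as np

def welfare_score(x: np.ndarray) -> float:
    # Stand-in objective: higher is better, with the best state at x = 0.
    return -float(np.sum(x ** 2))

def numerical_gradient(f, x: np.ndarray, eps: float = 1e-5) -> np.ndarray:
    # Central-difference estimate of the gradient of f at x.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

x = np.array([3.0, -2.0])
SIGN = -1.0  # BUG: should be +1.0; this single flipped sign reverses the objective
for _ in range(50):
    x = x + 0.1 * SIGN * numerical_gradient(welfare_score, x)

# With SIGN = +1.0 this converges near the best state; with the bug it
# drives the welfare score toward minus infinity instead.
print(welfare_score(x))
```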

Honestly though, I think the most probable s-risks are the incidental ones (covered in Tobias's book and also this blog post here). Basically, cases where suffering is a by-product, like factory farming or slavery. I'd also put the highest odds on it involving digital minds, since I think the future will be predominantly digital minds.

But it'd be very hard to make a comic about digital minds that would be emotionally compelling, which is why I like the comic (although "like" is a bit of a strong word. More, "found incredibly psychologically scarring but in a way that helps me remember what I'm fighting for")

This is amazing! Any recommendations on which parts of the book are most important for people who are decently familiar with EA and LW, according to you? I'm especially looking for moral and practical arguments I might have overlooked, and I don't need to be persuaded to care about animal/insect/machine suffering in the first place.

I am (clearly) not Tobias, but I'd expect many people familiar with EA and LW would get something new out of Ch 2, 4, 5, and 7-11. Of these, seems like the latter half of 5, 9, and 11 would be especially novel if you're already familiar with the basics of s-risks along the lines of the intro resources that CRS and CLR have published. I think the content of 7 and 10 is sufficiently crucial that it's probably worth reading even if you've checked out those older resources, despite some overlap.

I agree with this answer.

I don't need to be persuaded to care about animal/insect/machine suffering in the first place.

That's great, because that is also the starting point of my book. From the introduction:

Before I dive deeper, I should clarify the values that underlie this book. A key principle is impartiality: suffering matters equally irrespective of who experiences it. In particular, I believe we should care about all sentient beings, including nonhuman animals. Similarly, I believe suffering matters equally regardless of when it is experienced. A future individual is no less (and no more) deserving of moral consideration than someone alive now. So the fact that a moral catastrophe takes place in the distant future does not reduce the urgency of preventing it, if we have the means to do so. I will assume that you broadly agree with these fundamental values, which form the starting point of the book.

That is, I'm not dwelling on an argument for these fundamental values, as that can be found elsewhere.

Congratulations! :) Any plans for an audiobook?

Thanks! 

An audiobook is a good idea and I'll look into it, though I don't expect it to be done any time soon (i.e. it would at least take several months, I think).

Audiobook version: [new] Aaron made an awesome audiobook version here. 

[Original] It's easy to turn it into an audiobook version with Evie or Natural Reader for anybody who likes to read with their ears instead of their eyes. A full guide I wrote up on how to turn everything into audio is here.

Also, Tobias, if you want to make a super simple audiobook version of the book, I recommend using Amazon Polly. It'll probably cost under $100 and take less than 10 hours and increase the number of people who read your book by a lot. I know a ton of people who only read with their ears or who are more than 10x likely to read something if there's an audio version. Even I only found out about this book because I listened to the article on the Nonlinear Library (sorry for the shameless but relevant plug 😛)
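
For anyone who wants to try this route, here's a rough sketch of what it could look like with the AWS SDK for Python (boto3); the voice, file names, and helper are just illustrative assumptions, and a real run needs AWS credentials configured:

```python
# Rough sketch: turn one chunk of book text into an MP3 with Amazon Polly.
# Assumes AWS credentials are already configured. "Arthur" is a British
# English neural voice; any VoiceId available in your region works.
import boto3

polly = boto3.client("polly")

def chunk_to_mp3(text: str, out_path: str, voice: str = "Arthur") -> None:
    response = polly.synthesize_speech(
        Text=text,
        VoiceId=voice,
        Engine="neural",      # neural voices sound much less robotic
        OutputFormat="mp3",
    )
    # AudioStream is a streaming body; write it straight to disk.
    with open(out_path, "wb") as f:
        f.write(response["AudioStream"].read())

# Hypothetical usage; file name is a placeholder.
chunk_to_mp3("This is a short test of the audiobook pipeline.",
             "chapter_01_part_01.mp3")
```

Note that each synchronous synthesize_speech request is capped at a few thousand characters, so a full book would have to be split into chunks; Polly's asynchronous start_speech_synthesis_task, which writes its output to S3, is the usual way around that limit.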

Finally, congrats on the book! So far I'm loving it. Thank you for writing it. I think s-risks deserve more attention in the EA movement and think this book will help move the needle. 

I’m working on getting this read in a high-quality voice, but for anyone else who wants to try/do it better, here’s a txt with what I think the voice should actually read (like no reading out page numbers and stuff like that): https://drive.google.com/file/d/155pb6tMkE-rSmrizi1jbzPKTMSmYJRDX/view?usp=drivesdk

Not 100% certain it’s correct though; there’s likely a missing word or two.

Update: provisional text-to-speech audio (Siri reading that text file, haha) is now linked at the top of the post!

Should be up on all the main podcast apps soon, but until then, you can listen on Spreaker at https://bit.ly/3UmQS85

  • YouTube (w/ accurate subtitles)
  • Apple Podcasts
  • Spotify
  • RSS feed URL: https://www.spreaker.com/show/5706170/episodes/feed
  • Drive folder with some scripts and other resources; may add more later!

(Will try to clean the thumbnail of my name and Spreaker logo, but don't want to wait to share!)

...

Not certain, and it might take a couple days, but it should be pretty easy (though not instantaneous) to basically make readouts in any English accent the new iOS can handle ('British full version.m4a' in the Drive folder is a sample, but has pronunciation issues due to weird text encoding).

We've now put together a new and improved audio version, which can be found here.

Just listened to it! The pleasant and thoughtful narration by Adrian Nelson felt perfect for the book. I might even recommend the audiobook version over the text version to people who might otherwise find it distressing to think about s-risks. :)

You might consider creating a text-to-speech version by using e.g. Amazon Polly. Whilst imperfect, it is listenable and might be useful to people. Here is a sample generated with the British English Arthur Male voice.

Yes, Amazon Polly is great! 

Small thing: British voices sound more credible, which is good, but the trade-off is that they're harder to listen to at high speeds, which is my strong preference.

Probably not many people listen at high enough speeds for that to matter, but it's a trade-off to consider.

Also, my research for the Nonlinear Library found that on average people prefer listening to male voices, for what it's worth.  I didn't research it hard or for long and don't think it matters a ton either way, but just to share what I found. 

I'll just throw out the possibility of copying and pasting the whole thing (with anywhere from zero to a lot of formatting/editing) into an EA Forum post, which (I assume?) would trigger the Nonlinear Library system to turn it into audio. This would also get it into the feeds of people who only consume the forum via podcast app.

Since this has generated so much interest, it is worth noting that the Center for Reducing Suffering is hiring a Communications Director, so if you are knowledgeable about s-risks, do check out the job description.

Thank you, Tobias! I've wanted to learn more about the practical implications of s-risks for a while but never quite knew where to start, so I'm really keen to read Part III.

I'm glad risks from Scotland are finally being explored! (Can't unsee the Scottish flag on the cover)

I hate to be this person, but is there an epub version available? 

The easiest way to download it as an epub is here.

Thank you! :)

Congratulations! It's on my reading list now.

Anyone interested in an in-person-in-London reading group on this?

+2 private expressions of interest
