
Howie and I have just recorded a snap episode of the 80,000 Hours Podcast about the possible pandemic emerging from China:

What we do and don’t know about the 2019-nCoV coronavirus

I think it's a reasonable summary of the situation as it stood on Sunday, and of the many uncertainties that remain.

This virus has recently been attracting discussion on the forum here.

We've almost never covered 'topical' issues like this. Doing so poses obvious risks: it can dilute our organisational focus, draw us into writing about secondary issues, or give us a reputation for amateurish commentary.

At the same time, producing good content on current issues is one way to help people learn about 80,000 Hours. Howie and I felt we knew enough to comment sensibly. And an actual pandemic is clearly adjacent to our interest in pandemic preparedness; if you'd like to learn more, we encourage you to listen to our episodes on pandemic-related careers.

Let us know what you think of the episode, and please notify us about any errors so that we can correct them.

Comments



Here's a list of public forecasting platforms where participants are tracking the situation:

Foretold is tracking ~20 questions and is open to anyone adding their own, but doesn't have very many predictions.

Metaculus is tracking a handful of questions and has a substantial number of predictions.

The Johns Hopkins disease prediction project lists 3 questions. You have to sign up to view them. (I also think you can't see the crowd average until you've made your own prediction.)

Could you please provide the JHU questions and predictions for those of us who don't want to sign up?

Robert (or anyone else), do you know anyone who actually works in pandemic preparedness? I'm wondering how to get ideas to such people. For example:

  1. artificial summer (optimize indoor temperature and humidity to reduce viral survival time on surfaces)
  2. study mask reuse, given likely shortages (for example, baking used masks in home ovens at a temperature low enough not to damage the fibers but high enough to kill the virus)
  3. scale up manufacturing of all drugs showing effectiveness against 2019-nCoV in vitro, ahead of clinical trial results

longer term:

  1. subsidize or mandate antimicrobial touch surfaces in public spaces (door handles, etc.)
  2. stockpile masks and other supplies in sufficient quantities, and publicize the stockpiles to prevent panic, hoarding, and shortages

I know 2 people working in conventional pandemic preparedness and 2-3 in EA GCBR work.

I can offer introductions, though they're probably run off their feet just now. DM me somewhere?

Thanks Rob, I emailed you.

Thanks a lot for this podcast! I liked the summary you provided, and I think it's great to see people working to make sense of a lot of complex information almost in real time. Given that you say several times that neither of you is an expert on the subject, I think this podcast is a net positive: it conveys information while encouraging listeners to look into things for themselves.

Another great point: the critique of the meme about overreacting. Early on, when you said there was no reason to panic, I wanted to object that preparing for possible catastrophes is at least as important before they are obviously upon us. But your discussion of the meme clarified exactly this point, and I thought it was great.

Thanks for the detailed feedback, Adam. :)

Thanks - this is helpful.
