
Crossposted to LessWrong.

Introduction

We (the Cambridge Existential Risks Initiative) ran an Existential Risks Introductory Course (ERIC) in the first quarter of 2022, aiming to introduce the field of existential risks without explicitly associating it with any particular philosophy. We expect the programme to be most useful to people who are new to the field, and we hypothesised that we might be able to reach a different target audience by not explicitly branding it as EA.

The full curriculum we used for the programme, along with exercises and organisation spotlights, can be found here. It was primarily designed by Callum McDougall, with input from the rest of the CERI team.

If you are interested in joining the next iteration of the course in Winter 2022 (either as a participant or as a facilitator), please fill out this interest form.

This post contains an overview of the course, followed by an abbreviated version of the syllabus to make it easier to gather feedback. The weekly summaries may also be helpful for community builders looking for summaries of any of the core readings from our syllabus.

We welcome any feedback on the content, exercises, or anything else pertaining to the course, either publicly here on the Forum or privately if you prefer.

 

Course overview

The course consists of 8 weeks of reading (split into core and applied). Some weeks also include exercises, which participants are encouraged to complete and discuss in the session. Each week, participants meet for a 1.5-hour session in which they discuss the material and exercises with a facilitator.

The topics for each week are as follows:

  • Week 1: Introduction to Existential Risks
    Provides an introduction to x-risks, why they might be both highly important and neglected, and introduces some important terminology.
     
  • Week 2: Natural & Anthropogenic Risks
    Discusses natural risks, and risks from nuclear war and climate change.
     
  • Week 3: Biosecurity, And How To Think About Future Risks 
    Discusses risks from engineered pandemics, as well as a broader look at future risks in general and how we can reason about them and prepare for them.
     
  • Week 4: Unaligned Artificial Intelligence
    Discusses risks from unaligned AI, and provides a brief overview of the different approaches being taken to try to solve the problem.
     
  • Week 5: Dystopias, Lock-in & Unknown Unknowns
    Concludes the discussion of specific risks by discussing some more neglected risks. Also includes a discussion of the “unknown unknowns” problem, and how we can categorise and assess probabilities of risks.
     
  • Week 6: Forecasting & Decision-making
    Moves away from specific risks, and discusses broad strategies that can help mitigate a variety of risks, with a focus on improving forecasting and decision-making (both at the institutional and individual level).
     
  • Week 7: Different Frameworks for Existential Risk
    Explores some alternative frameworks for x-risks beyond those found in The Precipice, e.g. FHI’s origin/scaling/endgame model and the “Democratising Risk” paper.
     
  • Week 8: Next Steps
    Concludes the fellowship with a look back at the key themes in the material, and a discussion of how the fellows plan to put what they’ve learned into action (e.g. in their future careers).

 

Abbreviated curriculum (Core readings)

Week 1: Introduction to Existential Risks 

The first group of core materials outlines the key idea of Toby Ord’s book The Precipice: that we may be living in a uniquely important and dangerous time because of the threat of existential risks.

However, it is important to know that not everyone in the existential risks field shares these views, and alternative framings have been proposed. The paper below discusses the drawbacks of the “Techno-Utopian Approach” to x-risks, as exemplified by books like The Precipice. We will read more of it in later sessions, but for now it is important to be aware that there are other ways of thinking about these issues.

Finally, please check out the following readings, which relate to how we’d like you to approach our discussion sessions:

(Week 1 Summaries)

 

Week 2: Natural & Anthropogenic Risks

In Week 2 we start to investigate specific existential risks. 

We will focus on anthropogenic risks, which arise from unique features of human society or current technology. Unlike natural risks, we can’t point to the historical record as evidence that their probability is small, meaning we could plausibly be facing more risk from them than we have at any previous time in history.
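To give a rough sense of this argument, here is a back-of-the-envelope illustration of our own (not one of the course readings). Homo sapiens has survived for roughly 2,000 centuries; if a natural risk carried a per-century extinction probability of r, the chance of compiling that track record would be

$$P(\text{survive 2,000 centuries}) = (1 - r)^{2000},$$

which is vanishingly small unless r is well below about 1 in 2,000 (for r = 1% per century it comes to roughly 2 in a billion). Anthropogenic risks have no comparable track record, so no analogous bound applies to them.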

(Week 2 Summaries)

 

Week 3: Biosecurity, And How To Think About Future Risks

During Week 3, we will focus on the first of two particularly significant risks from future technology: engineered pathogens. We discuss some past examples of bioweapon misuse or bioresearch accidents, as well as the kind of organisations and protocols that exist to make these events less likely. 

We will also discuss the different ways we can think about risks from future technology, for instance through the lens of the unilateralist’s curse: the idea that when each member of a group can unilaterally take an action that affects everyone, the action will be taken more often than it should be, and the problem grows as the group grows.
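As a minimal sketch of why the problem grows with group size (our own illustration, not part of the syllabus): suppose each of n actors independently misjudges a harmful unilateral action as beneficial with probability p. Then the chance that at least one of them takes it is 1 - (1 - p)^n, which rises quickly with n.

```python
# Illustrative sketch (not from the course materials): if each of n actors
# independently misjudges a harmful unilateral action as beneficial with
# probability p, the chance that *someone* takes it grows quickly with n.

def prob_someone_acts(n: int, p: float) -> float:
    """Probability that at least one of n independent actors takes the action."""
    return 1 - (1 - p) ** n

for n in (1, 5, 20, 100):
    print(f"n = {n:3d}: P(at least one acts) = {prob_someone_acts(n, 0.05):.2f}")
# n =   1: 0.05,  n =   5: 0.23,  n =  20: 0.64,  n = 100: 0.99
```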

(Week 3 Summaries)

 

Week 4: Unaligned Artificial Intelligence

In Week 4, we turn our attention to the second major risk from future technology: unaligned artificial intelligence.

AI safety is a large field (it has two separate 8-week fellowships at Cambridge alone!). In this session, we hope to give you an overview of the key ideas: why AI could be transformative for society, why it poses risks which humanity may currently be unsuited to handle, and some of the different approaches being taken to try to make it go well.

(Week 4 Summaries)

 

Week 5: Dystopias, Lock-in & Unknown Unknowns

This week, we’ll conclude our discussion of specific risks. We will cover dystopias (defined as worlds with a functioning civilisation that is locked into some terrible form), and extreme suffering risks (or s-risks).

Additionally, many of the risks we have discussed so far wouldn’t have occurred to researchers 50 years ago, which should make us suspect that other, currently unknown risks will make themselves known as technology advances. We will discuss the possibility of such “unknown unknowns”, and what they imply for work on existential risks.

Finally, we will also provide an overview of the risk landscape, and discuss the probabilities that Ord puts on each of the risks he discusses. 

(Week 5 Summaries)

 

Week 6: Forecasting & Decision-making

Week 6 marks a move away from discussing specific risks and towards the kinds of broad strategies that can help mitigate a variety of risks. We will focus on how to improve forecasting and decision-making, both within organisations and at the individual level.
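For a concrete flavour of what improving forecasting can look like in practice, here is a minimal sketch of our own (not taken from the course readings) of the Brier score, a standard way of scoring probabilistic forecasts against binary outcomes:

```python
# Minimal example (our own, not from the readings): the Brier score is the mean
# squared error between probability forecasts and 0/1 outcomes; lower is better.

def brier_score(forecasts, outcomes):
    """Mean squared difference between forecast probabilities and binary outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

outcomes = [1, 0, 1, 1, 0]
print(brier_score([0.9, 0.2, 0.8, 0.7, 0.1], outcomes))  # ~0.04: sharp and mostly right
print(brier_score([0.5, 0.5, 0.5, 0.5, 0.5], outcomes))  # 0.25: always hedging at 50%
```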

(Week 6 Summaries)

 

Week 7: Different Frameworks for Existential Risk

This week, we will discuss some different responses to the problem of existential risks. One influential school of thought is longtermism: the idea that the very long-run future impact of our actions should be our dominant moral consideration. However, not all frameworks for thinking about x-risks need to go hand-in-hand with longtermism; in fact, an over-focus on longtermism may pose some serious problems. The recently published “Democratising Risk” paper offers an alternative path for the field.

The readings this week are grouped under three headings: Other Frameworks, Longtermism, and Our Potential.

(Week 7 Summaries)

 

Week 8: Next Steps

One of the main ways in which we can affect the world for the better is through our careers. For this final week we hope to help you think about potential next steps for applying the ideas of existential risk reduction or longtermism to your own life and career.

 

Click here for the full version of the curriculum, which contains additional readings, exercises, and organisation spotlights. If you are planning to run a similar programme at your local group, please do reach out to us, as we may also be able to share our facilitators’ guide and other resources. 

 

Ways in which you can help

Call for facilitators

If you would be interested in facilitating this course when it is run again in Winter 2022, we would love to hear from you. 

The course will be virtual by default. Familiarity with the core concepts of x-risks is desirable, but you don’t have to be an expert in order to facilitate. This is designed as an introductory fellowship, so good communication skills and an ability to stimulate productive, interesting conversations among the fellows matter far more. Overall, being a facilitator is a great way to help with outreach and field-building, as well as to sharpen your own communication skills. I can say that I’ve personally thoroughly enjoyed the experience!

If this sounds like it might be a good fit for you or someone you know, the expression of interest form can be found here.

Call for fellows

If you or anyone you know might be interested in participating in this course, please fill out the expression of interest form here. We welcome people from all backgrounds and career stages!

Call for feedback

We would be particularly interested in hearing if you have suggestions for:

  • How to improve the balance of the curriculum by reducing the focus on Toby Ord and Nick Bostrom’s writings and introducing some different perspectives;
  • Material discussing climate change as an existential risk;
  • Material focusing on solutions as well as problems (i.e. which research directions in x-risk reduction seem promising, and what work in the field tends to look like);
  • How we could improve the marketing of this course (e.g. places where it could be advertised).

Comments

Added to the list of courses here.

Thank you for making this! This looks great. I've added it to the list of AI safety courses.

It's not just on technical AI safety, but I feel it's related enough that anybody looking at the list will also be interested in this resource.

Suggestion: If the goal is to attract non-EAs, I might change the title to be more legible to people who don't know what an existential risk is. 

That's something we've definitely considered, but the idea is for this course to be marketed mainly via CERI, and since they already have "existential risks" in their name and define the term in a lot of their promo material, we felt it would probably be more appropriate to stick with that terminology.

Thank you for taking the time to make this publicly available. 

Hi, thanks for sharing this!

One clarification: given that the course is almost 100% EA/longtermist in content and structure (with the exception of just under half of week 7), does the mention of introducing existential risk without being explicitly associated with any particular philosophy refer to 1) intending to provide an even-handed introduction to the field, or 2) using the concept of existential risk as an EA/longtermism recruitment approach?

I see trade-offs with using either approach. 2) may lead to further impact down the line through career-alignment, but will necessarily reduce the quality of the course by narrowing the range of acceptable topics, readings, and approaches.

In practice, "create something which is ideologically independent from EA" wasn't really what we went for; it was more like "really hone in on this one area that lots of EAs care about". We could have phrased this better in the post.

Yeah, +1 to Nandini's point; I think we should have made this clearer in the post. I think people have a lot of misconceptions about EA (e.g. lots of people just think EA is about effective charitable giving), and we wanted to emphasise this particular part rather than trying to construct the whole tower of assumptions.

That being said, I do think that the abundance of writing from Ord/Bostrom is something that we could have done a better job of toning down, and different perspectives could have been included. If you have any specific recommendations for reading material you think would positively contribute in any week (or reading material already in the course that you think could be removed), we'd be really grateful!

[anonymous]

Greetings

This course seems interesting.

Thank you for making this.

And lastly: I filled out the interest form, but I still haven't received any updates about the course or my acceptance?

 

Regards 
