This is a 6–9 session syllabus on the fundamentals of global priorities research (GPR) in economics.
The purpose is to help economics students and researchers interested in GPR get a big-picture view of the field and come up with research ideas.
Because of this focus on fundamentals, the readings are rather light on economics and heavy on philosophy and on the empirics of different cause areas.
Previous versions of this list were used internally at GPI and during GPI’s Oxford Global Priorities Fellowship in 2023, where the prompts guided individual reflection and group discussion.
Many thanks to the following for their help creating and improving this syllabus: Gustav Alexandrie, Loren Fryxell, Arden Koehler, and Luis Mota. The readings below don't necessarily represent their views, GPI's, or mine.
1. Philosophical Foundations
Topic: Global priorities research is a normative enquiry. It is primarily interested in understanding what we should do in the face of global problems, and only derivatively interested in how those problems work or in the facts about the world that surround them.
In this session, we will focus on understanding what ethical theory is, what some of the most important moral theories are, how these theories relate to normative thinking in economics, and what these theories imply about what the most important causes are.
Literature:
- MacAskill, William. 2019. “The Definition of Effective Altruism” (Section 4 is optional)
- Prompt 1: How aligned with your aims as a researcher is the definition of Effective Altruism proposed in this article (p. 14)?
- Trammell, Philip. 2022. Philosophical foundations (Slides 1-2, 5-9, 12-16, 20-24)
- Prompt 2: What is your best-guess theory of welfare? How much do you think it matters to get this right?
- Prompt 3: What is your best-guess view in axiology? What are your key uncertainties about it? Do you think axiology is all that matters in determining what one ought to do (setting aside empirical uncertainty)?
- Trammell, Philip. 2022. Three sins of economics (Slides 1-24, 27)
- Prompt 4: What are your “normative defaults”? Which views here would you like to explore more?
- Prompt 5: Do you agree that economics has the normative defaults identified in the reading? Can you give examples of economics work that avoids these?
- Prompt 6: Insofar as economists tend to commit the three 'sins', what do you think of the strategy of seeking out research problems that are underprovided as a result of those views?
Extra readings:
- Wilkinson, Hayden. 2022. “Key Lessons From Global Priorities Research” (watch video here — slides are not quite self-contained)
- Which key results are most interesting or surprising to you and why? Do you think any of them are wrong?
- Greaves, Hilary. 2017. “Population axiology”
- Broome, John. 1996. “The Welfare Economics of Population”
2. Effective Altruism: Differences in Impact and Cost-Effectiveness Estimates
Topic: In this session we tackle two key issues in cause prioritization: first, how impact is distributed across interventions (or importance across problems); second, how to compare the cost-effectiveness of interventions whose estimates rest on evidence of differing quality.
Literature:
- Kokotajlo, Daniel and Oprea, Alexandra. 2020. “Counterproductive Altruism: The Other Heavy Tail” (Skip Sections I and II)
- Prompt 1: Do you think there is a heavy right tail of opportunities to do good? What about a heavy left tail?
- Prompt 2: How do the distributions of impact of interventions aimed at the near-term and long-term compare (specifically, in terms of the heaviness of their tails)?
- Karnofsky, Holden. 2016. “Why we can't take expected value estimates literally (even when they're unbiased)”
- Prompt 3: What, in your view, is the biggest problem with the “explicit expected value formula” approach to giving?
- Prompt 4: What is the most difficult part of implementing the proposed Bayesian approach to decisions about giving (e.g. coming up with a prior, selecting a reference class, etc.)?
- Prompt 5: In a Bayesian adjustment, how do you feel about the accuracy vs. transparency trade-off involved in relying on one’s intuitions vs. formal analysis?
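Since Prompts 4–5 turn on how a Bayesian adjustment behaves in practice, here is a minimal sketch of the normal–normal adjustment in the spirit of Karnofsky's post. All numbers below are invented for illustration.

```python
# A minimal sketch (invented numbers) of the normal-normal Bayesian
# adjustment discussed in Karnofsky (2016): shrink a noisy
# cost-effectiveness estimate toward a prior over charity quality.

def posterior(prior_mean, prior_sd, estimate, estimate_sd):
    """Precision-weighted combination of a normal prior and a normally
    distributed, unbiased estimate; returns the posterior mean and sd."""
    prior_prec = 1 / prior_sd ** 2      # precision = 1 / variance
    est_prec = 1 / estimate_sd ** 2
    mean = (prior_prec * prior_mean + est_prec * estimate) / (prior_prec + est_prec)
    sd = (prior_prec + est_prec) ** -0.5
    return mean, sd

# Prior: a typical charity averts ~10 DALYs per $10k (sd 20). A rough
# back-of-the-envelope estimate claims 1,000 DALYs per $10k, but its
# error sd is 2,000.
mean, sd = posterior(prior_mean=10, prior_sd=20, estimate=1000, estimate_sd=2000)
print(f"posterior: {mean:.1f} DALYs per $10k (sd {sd:.1f})")  # ~10.1 (sd ~20.0)
```

The point of the exercise: the shakier the estimate, the more the posterior collapses back to the prior, which is why unbiased but high-variance "explicit expected value" estimates should move a Bayesian very little.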
Extra readings:
- Haber, Noah. 2022. “GiveWell’s Uncertainty Problem”
- Relates to Karnofsky 2016 (above). It would be worth reading this with a critical eye; in particular, a good exercise would be to try to translate the post’s claims into an economics-of-information framework.
- Ord, Toby. 2014. “The Moral Imperative toward Cost-Effectiveness in Global Health” (short version)
- Tomasik, Brian. “Why Charities Usually Don’t Differ Astronomically in Expected Cost-Effectiveness”
- Todd, Benjamin. “How much do solutions to social problems differ in their effectiveness? A collection of all the studies we could find”
3. Animal Welfare
Topic: Non-human animals constitute the majority of beings alive today, and farmed animals alone far outnumber humans. If animals have moral status anywhere near that of humans, then preventing harmful practices towards them could be the biggest moral issue of the present. In this session, we will discuss a philosophical argument for giving animals significant moral consideration, as well as an empirically grounded assessment of the range of welfare levels that animals can experience.
Literature:
- Singer, Peter. 1993. Practical Ethics. Chapter 3: "Equality For Animals?"
- Prompt 1: Do you agree with Singer's Principle of Equal Consideration of Interests?
- Prompt 2: What does the Principle of Equal Consideration of Interests imply about how we ought to treat animals?
- Fischer, Bob. 2023. “Rethink Priorities’ Welfare Range Estimates.”
- Prompt 3: Are these welfare range estimates smaller or larger than what you would have expected?
- Prompt 4: How sound is the methodology for calculating these welfare ranges? Are there any particularly serious weaknesses?
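To make Prompts 3–4 concrete, here is a stylized sketch of how welfare range estimates feed into a cross-species comparison. The welfare ranges and intervention numbers are illustrative placeholders, not Rethink Priorities' actual estimates.

```python
# A stylized sketch of how welfare range estimates enter a cross-species
# comparison. The welfare ranges and intervention numbers below are
# illustrative placeholders, not Rethink Priorities' actual estimates.

WELFARE_RANGE = {"human": 1.0, "chicken": 0.3, "shrimp": 0.03}

def human_equivalent_welfare(species, animal_years, gain_frac):
    """Welfare gain in human-equivalent welfare-years.

    gain_frac: the improvement as a fraction of the species' welfare
    range (e.g. 0.1 = closing 10% of the gap between the species' worst
    and best possible welfare states).
    """
    return animal_years * gain_frac * WELFARE_RANGE[species]

# Hypothetical campaign improving 1M chicken-years by 10% of the range:
print(human_equivalent_welfare("chicken", 1_000_000, 0.10))  # 30,000.0

# Everything scales linearly in the welfare range, so cross-species
# conclusions inherit whatever uncertainty the range estimates carry.
```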
Extra readings:
- Clare, Stephen, and Goth, Aidan. 2020. “How Good Is The Humane League Compared to the Against Malaria Foundation?”
- Browning, Heather, and Veit, Walter. 2022. “Longtermism and Animals.”
4. Longtermism and Existential Risk Reduction
Topic:[1] Longtermism is, roughly, the idea that the long-term future morally matters 1) in principle and 2) in practice. In particular, longtermism says that the best interventions available to us today are best because of their impact on the far future.
In this session, we will discuss whether (and if so, why) the long-term future matters, investigate whether this implies that existential risk reduction is plausibly one of the highest-impact cause areas, and consider whether other types of interventions remain competitive once we account for the long-term future.
Literature:
- Ord, Toby. 2020. The Precipice. Chapter 2: Existential Risks.
- Prompt 1: In assessing the impacts of different interventions, ought we to discount future well-being directly, i.e. treat well-being as less important simply because it occurs later in time? (A numerical illustration follows this list.)
- Prompt 2: Why might we discount future well-being indirectly?
- Prompt 3: What do you think is the most compelling case for taking existential risks to be of particular moral importance?
- Prompt 4: Assuming that we should not discount future well-being directly, can interventions which deliver their impacts in the “near term” compete with existential risk reduction?
- Prompt 5: Are there promising interventions that improve the long-term future but not through the channel of reducing extinction risks?
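As a numerical illustration of what is at stake in Prompt 1, the sketch below shows how exponential pure time discounting behaves at long horizons; the rates are invented for illustration.

```python
# A minimal illustration for Prompt 1 (invented numbers): under
# exponential pure time discounting, even a tiny rate makes far-future
# well-being count for almost nothing.

import math

def present_value(welfare, years, rate):
    """Discounted value today of `welfare` occurring `years` from now."""
    return welfare * math.exp(-rate * years)

for rate in (0.0, 0.001, 0.01):
    pv = present_value(1.0, 10_000, rate)
    print(f"rate {rate:.3f}: 1 welfare unit in 10,000 years is worth {pv:.2e} today")
# rate 0.000 -> 1.00e+00
# rate 0.001 -> 4.54e-05
# rate 0.010 -> 3.72e-44
```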
Extra readings:
- MacAskill, William. 2022. What We Owe The Future
- Tarsney, Christian. 2022. “The Epistemic Challenge to Longtermism”
- Greaves, Hilary. 2020. Talk: “Evidence, cluelessness and the long term” (Watch from 7:41)
- Eden, Maya and Alexandrie, Gustav. 2023. “Is Existential Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models”
5. AI Risk
Topic: Because extinction threatens the existence of a potentially vast future, existential risk reduction might be the most important cause area, especially under longtermism. There are multiple important existential threats, such as pandemics and nuclear weapons. In this session, we focus on a (possibly less obvious) candidate for the most important existential risk: AI risk.
Literature:
- Piper, Kelsey. 2020. “The case for taking AI seriously as a threat to humanity”
- Prompt 1: Are you compelled by the case that AI could wipe us out? What kind of evidence would push you towards one side or the other on this issue?
- Cotra, Ajeya. 2021. “Why AI alignment could be hard with modern deep learning”
- Prompt 2: Which threat seems more likely: schemers or sycophants? Which seems more dangerous?
- Prompt 3: Do you think the misalignment concerns with deep learning generalize to any kind of advanced AI?
- Ngo, Richard. 2020. “AGI safety from first principles: Control”
- Prompt 4: Are you compelled by the threat from an AGI that overpowers humanity? Is intelligence sufficient for a system to obtain such immense power?
- Prompt 5: How might we prevent an AGI from obtaining dangerous levels of power?
Extra readings:
- Barak, Boaz and Edelman, Ben. 2022. ‘AI will change the world, but won’t take it over by playing “3-dimensional chess”’
- On the issue of “returns of power to intelligence”
- Christian, Brian. 2020. The Alignment Problem.
6. Economics and AI Risk
Topic: We consider how economists can contribute to mitigating risks from AI. We will focus on two main research areas within AI risk mitigation: AI governance and AI controllability.
Literature:
- Siegmann, Charlotte. 2023. "Economics and AI Risk: Background"
- Prompt 1: Do you think there's a possibility of transformative AI (TAI) or explosive growth in the coming decades or this century? How would you reduce uncertainty about this?
- Prompt 2: Which reason do you find most compelling for taking the controllability challenge seriously?
- Prompt 3: Which of the concerns raised in this article do you find most compelling?
- Siegmann, Charlotte. 2023. "Economics and AI Risk: Research Agenda and Overview"
- Prompt 4: Choose one of the three areas of AI governance research (i.e. development, deployment, forecasting) and discuss what you take to be the most promising ways to contribute to it.
- Prompt 5: How can economists contribute to AI controllability and alignment research?
Optional additional sessions
Existential risk and economic growth
Goal and topic: Increasing economic growth could be one way to positively affect the future. Whether this is the case depends in part on the relationship between growth and existential risk. We will explore some of the various ways in which existential risks and economic growth are related:
- Roughly, you can improve the long-term future by (1) making good trajectories better (e.g. via economic growth) or (2) avoiding bad trajectories such as extinction or suffering-filled scenarios.
- Whether economic growth mitigates or increases existential risks and what factors influence this relationship.
- Whether the lack of economic growth (stagnation) poses existential risk.
Literature:
- Greaves, Hilary. 2021. “Longtermism and Economic Growth” (video)
- Prompt 1: Setting aside existential risks, is economic growth good from a perspective that is sensitive to the long term? In what ways might it not be?
- Prompt 2: Is faster or slower economic growth better? How does this depend on the nature of risks (state vs. transition) and on the relationship between growth and risk? (A stylized sketch of the state vs. transition distinction follows this list.)
- Aschenbrenner, Leopold. 2020. “Securing Posterity”
- Prompt 3: How could faster economic growth turn out to decrease overall risk? What could be missing from this picture?
- MacAskill, William. 2022. What We Owe The Future (Chapter 7: Stagnation)
- Prompt 4: Do you think stagnation is an important problem in its own right? What is the most contentious premise behind the case for stagnation as a hugely important problem, and what is the strongest version of that case?
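The sketch below illustrates the state-risk vs. transition-risk distinction from Prompt 2. It is a toy model with made-up parameters, not Aschenbrenner's actual framework.

```python
# A stylized sketch of the state-risk vs. transition-risk distinction
# from Prompt 2 (see Aschenbrenner 2020 for a real model). All
# parameters are made up.

import math

def survival_state_risk(hazard_per_year, years_in_risky_state):
    """State risk accrues for every year spent in the risky state, so
    passing through the state faster raises survival probability."""
    return math.exp(-hazard_per_year * years_in_risky_state)

def survival_transition_risk(transition_risk):
    """Transition risk is a one-off risk of making the transition at
    all, roughly independent of how quickly it is crossed."""
    return 1 - transition_risk

# If the "time of perils" carries a state risk of 0.2%/year, halving
# its duration (e.g. via faster growth) helps substantially:
print(f"{survival_state_risk(0.002, 200):.2f}")  # ~0.67
print(f"{survival_state_risk(0.002, 100):.2f}")  # ~0.82
# A fixed 10% transition risk, by contrast, is unaffected by speed:
print(f"{survival_transition_risk(0.10):.2f}")   # 0.90
```

This is the basic reason faster growth can reduce total risk if most risk is state risk (less time spent in the risky state), while it does little about transition risks.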
Extra readings:
- Aschenbrenner, Leopold. 2020. “Existential Risk and Growth”
Other issues in this space:
- How advanced AI might affect economic growth
- Can we use economic growth models to predict the long-term future and the arrival of AGI?
Relevant readings:
- Davidson, Tom. 2021. “Report on Whether AI Could Drive Explosive Economic Growth”
- Trammell, Philip and Korinek, Anton. 2020. “Economic growth under transformative AI”
Patient philanthropy
Topic: The optimal timing of funding altruistic projects and how this depends on one’s level of 'patience' in relation to that of other funders.
Literature:
- Trammell, Philip. 2021. “Patient Philanthropy in an Impatient World” (sections 1 to 3)
- Prompt 1: In the current philanthropic landscape, is it important to consider the kinds of strategic interactions Trammell points to? Why?
- Prompt 2: Sections 2.3 and 2.4 give reasons to believe that philanthropists can act more patiently than governments. Do you think that this is a good argument in support of philanthropic spending from a social standpoint?
- Prompt 3: To what extent do the funding areas of interest to patient and impatient philanthropy overlap?
- Prompt 4: Evaluate Open Philanthropy's decision to spend its entire budget in 20 years.
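As background for these prompts, here is a minimal sketch of the give-now vs. invest-and-give-later comparison. Parameters are illustrative; Trammell's paper adds expropriation risk, value drift, and strategic interaction with other funders, all of which this toy model omits.

```python
# A minimal sketch of the give-now vs. invest-and-give-later comparison
# behind patient philanthropy. Parameters are illustrative; Trammell's
# paper adds expropriation risk, value drift, and strategic interaction
# with other funders, all of which this toy model omits.

def impact_of_waiting(budget, years, market_return, opportunity_decay):
    """Good done (in today's units) by investing `budget` for `years`,
    if funds compound at `market_return` while the cost-effectiveness
    of the best opportunities decays at `opportunity_decay` per year."""
    return budget * (1 + market_return) ** years * (1 - opportunity_decay) ** years

# Returns outpace the decay of opportunities: waiting 50 years ~4x's impact.
print(f"{impact_of_waiting(1.0, 50, market_return=0.05, opportunity_decay=0.02):.2f}")  # ~4.17
# Flip the inequality (opportunities decay at 8%/yr) and giving now wins:
print(f"{impact_of_waiting(1.0, 50, market_return=0.05, opportunity_decay=0.08):.2f}")  # ~0.18
```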
Population
Goal and topic: The size of the global population influences the well-being of present and future generations in various ways. For instance, prima facie, more people means more greenhouse gas emissions and thus worse climate change; but more people also means more innovation, and more potential for human well-being. In this session, we will evaluate the ways in which population size relates to and affects long-term global well-being.
Literature:
- Greaves, Hilary. 2017. “Population axiology”
- Prompt 1: Which population axiology do you find most intuitive?
- Prompt 2: Which view do you find most plausible upon reflection? (A toy numerical example follows this list.)
- Siegmann, Charlotte and Mota, Luis. 2022. “Assessing the case for population growth as a priority”
- Prompt 3: How would the implications of this write-up change when considering the consequences of population growth over the next two centuries?
- Prompt 4: Do you believe that larger populations lead to more benefits or harms on net?
- Prompt 5: All things considered, is the case for intervening on population growth strong enough to make it one of the most important areas for further global priorities research?
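To ground Prompts 1–2, the toy example below shows how total and average utilitarianism can rank the same pair of populations in opposite ways, the structure behind the Repugnant Conclusion. The welfare numbers are arbitrary.

```python
# A toy example for Prompts 1-2: total and average utilitarianism can
# rank the same pair of populations in opposite ways (the structure
# behind the Repugnant Conclusion). Welfare numbers are arbitrary.

def total_view(welfares):
    return sum(welfares)

def average_view(welfares):
    return sum(welfares) / len(welfares)

world_A = [10.0] * 1_000      # 1,000 people at very high welfare
world_Z = [0.1] * 200_000     # 200,000 people with lives barely worth living

print(total_view(world_A), total_view(world_Z))      # ~10,000 vs ~20,000: Z wins
print(average_view(world_A), average_view(world_Z))  # 10.0 vs ~0.1: A wins
```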
Alternate Longtermism Session
This session goes into more depth on the philosophical issues surrounding longtermism.
Longtermism, existential risk, and cluelessness
Topic: This session is about longtermism, roughly the idea that we should focus on interventions that deliver their benefits in the long-term future. We will discuss the case for longtermism and whether uncertainty about the future undermines or supports it.
Literature:
- MacAskill, William. 2022. What We Owe The Future (Chapter 1: The Case for Longtermism)
- Prompt 1: Do you agree with the case for longtermism? Which of the premises do you find most contentious?
- Tarsney, Christian. 2022. “The Epistemic Challenge to Longtermism”
- Prompt 2: What does Tarsney’s model reveal about the presuppositions required for longtermism? Are these presuppositions true? (A stylized sketch follows this list.)
- Prompt 3: In light of the arguments of this article, to what extent do you think longtermism supports interventions other than existential risk reduction (e.g. economic growth)?
- Greaves, Hilary. 2020. Talk: “Evidence, cluelessness and the long term” (Watch from 7:41)
- Prompt 4: Which of the responses to the worry of cluelessness do you find most compelling?
- Prompt 5: What is the case for how cluelessness might support (as opposed to undermine) longtermism?
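For Prompt 2, here is a stylized calculation in the spirit of Tarsney's epistemic challenge. It is not his actual model, and all numbers are invented; it only illustrates how sensitive the longtermist verdict is to the rate at which exogenous events wash out an intervention's effects.

```python
# A stylized calculation in the spirit of Tarsney's epistemic challenge
# (not his actual model; all numbers invented). Suppose averting
# extinction adds value v per year into the future, but an exogenous
# "nullifying" event, which would have doomed or saved civilization
# regardless of our intervention, arrives at Poisson rate r per year.
# The intervention's effect persists to year t with probability
# exp(-r*t), so its expected value is p * integral of v*exp(-r*t) dt
# = p * v / r.

def expected_value(p_success, value_per_year, nullify_rate):
    """Expected value of an intervention averting extinction with
    probability p_success, discounted only by epistemic persistence."""
    return p_success * value_per_year / nullify_rate

for r in (1e-2, 1e-4, 1e-6):
    ev = expected_value(p_success=1e-6, value_per_year=1.0, nullify_rate=r)
    print(f"nullifying rate {r:.0e}/yr -> expected value {ev:.0e}")
# The verdict is extremely sensitive to r: each 100x drop in the
# nullifying rate raises the intervention's expected value 100x.
```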
Extra readings:
- Ord, Toby. 2020. The Precipice. Chapter 2: Existential Risks.
- Greaves, Hilary. 2016. “Cluelessness”
- Eden, Maya and Alexandrie, Gustav. 2023. “Is Existential Risk Mitigation Uniquely Cost-Effective? Not in Standard Population Models”
[1] See the alternate session on longtermism for a more in-depth discussion of the key philosophical issues.