For Existential Choices Debate Week, we’re trying out a new type of event: the Existential Choices Symposium. It'll be a written discussion between invited guests and any Forum user who'd like to join in.
How it works:
- Any forum user can write a top-level comment that asks a question or raises a consideration whose answer might affect people’s view on the debate statement[1]. For example: “Are there any interventions aimed at increasing the value of the future that are as widely morally supported as extinction-risk reduction?” You can start writing these comments now.
- The symposium’s signed-up participants, Will MacAskill, Tyler John, Michael St Jules, Andreas Mogensen and Greg Colbourn, will respond to questions, and discuss them with each other and other forum users, in the comments.
- To be 100% clear - you, the reader, are very welcome to join in any conversation on this post. You don't have to be a listed participant to take part.
This is an experiment. We’ll see how it goes and maybe run something similar next time. Feedback is welcome (message me here).
The symposium participants will be online between 3 and 5 pm GMT on Monday the 17th.
Brief bios for participants (mistakes mine):
- Will MacAskill is an Associate Professor of moral philosophy at the University of Oxford and a Senior Research Fellow at Forethought. He wrote the books Doing Good Better, Moral Uncertainty, and What We Owe The Future. He is a co-founder of Giving What We Can, 80,000 Hours, the Centre for Effective Altruism, and the Global Priorities Institute.
- Tyler John is an AI researcher, grantmaker, and philanthropic advisor. He is an incoming Visiting Scholar at the Cambridge Leverhulme Centre for the Future of Intelligence and an advisor to multiple philanthropists. He was previously the Programme Officer for emerging technology governance and Head of Research at Longview Philanthropy. Tyler holds a PhD in philosophy from Rutgers University—New Brunswick, where his dissertation focused on longtermist political philosophy and mechanism design, and the case for moral trajectory change.
- Michael St Jules is an independent researcher, who has written on “philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals”.
- Andreas Mogensen is a Senior Research Fellow in Philosophy at the Global Priorities Institute, part of the University of Oxford’s Faculty of Philosophy. His current research interests are primarily in normative and applied ethics. His previous publications have addressed topics in meta-ethics and moral epistemology, especially those associated with evolutionary debunking arguments.
- Greg Colbourn is the founder of CEEALAR and is currently a donor and advocate for Pause AI, which promotes a global AI moratorium. He has also supported various other projects in the space over the last two years.
Thanks for reading! If you'd like to contribute to this discussion, write some questions below which could be discussed in the symposium.
NB: To help conversations happen smoothly, I'd recommend sticking to one idea per top-level comment (even if that means posting multiple comments at once).
I agree with the framing.
Quantitatively, the willingness to pay to avoid extinction, even just from the United States, is truly enormous. The value of a statistical life in the US (used by the US government to estimate how much US citizens are willing to pay to reduce their risk of death) is around $10 million. The willingness to pay from the US as a whole to avoid a 0.1 percentage point chance of a catastrophe that would kill everyone in the US is therefore over $1 trillion. I don’t expect these amounts to be spent on global catastrophic risk reduction, but they show how much latent desire there is to reduce global catastrophic risk, which I’d expect to become progressively mobilised with increasing indications that various global catastrophic risks, such as biorisks, are real. [I think my predictions here are pretty different from those of some others, who expect the world to be almost totally blindsided. Timelines and the gradualness of AI takeoff are of course relevant here.]
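As a rough back-of-the-envelope reconstruction of that figure (the US population of roughly 330 million is my assumption, not stated above):

$$\underbrace{\$10\ \text{million}}_{\text{VSL}} \times \underbrace{330\ \text{million people}}_{\text{US population}} \times \underbrace{0.001}_{\text{0.1 pp risk reduction}} \approx \$3.3\ \text{trillion},$$

which is comfortably over the $1 trillion quoted.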
In contrast, many areas of better futures work are likely to remain extraordinarily neglected. The amount of even latent interest in, for example, ensuring that resources outside of our solar system are put to their best use, or that misaligned AI produces a somewhat-better future than it would otherwise have done even if it kills us all, is tiny, and I don’t expect society to mobilise massive resources towards these issues even if there were indications that those issues were pressing.
In some cases, what people want will be actively opposed to what is in fact best, if what's best involves self-sacrifice on the part of those alive today, or with power today.
And then I think the neglectedness consideration beats the tractability consideration. Here are some pretty general reasons for optimism about expected tractability:
Of these considerations, it’s the last that personally moves me the most. It doesn't feel long ago that work on AI takeover risk felt extraordinarily speculative and low-tractability, when there was almost no organisation one could work for or donate to outside of the Future of Humanity Institute or the Machine Intelligence Research Institute. In the early days, I was personally very sceptical about the tractability of the area. But I’ve been proved wrong. Via years of foundational work, both research figuring out what the most promising paths forward are and the founding of new organisations squarely focused on the goal of reducing takeover risk or biorisk rather than on a similar but tangential goal, the area has become tractable, and now there are dozens of great organisations that one can work for or donate to.