For Existential Choices Debate Week, we’re trying out a new type of event: the Existential Choices Symposium. It'll be a written discussion between invited guests and any Forum user who'd like to join in.
How it works:
- Any forum user can write a top-level comment that asks a question or raises a consideration that might affect people’s answers to the debate statement[1]. For example: “Are there any interventions aimed at increasing the value of the future that are as widely morally supported as extinction-risk reduction?” You can start writing these comments now.
- The symposium’s signed-up participants (Will MacAskill, Tyler John, Michael St Jules, Andreas Mogensen, and Greg Colbourn) will respond to questions and discuss them with each other and with other forum users in the comments.
- To be 100% clear: you, the reader, are very welcome to join in any conversation on this post. You don't have to be a listed participant to take part.
This is an experiment. We’ll see how it goes and maybe run something similar next time. Feedback is welcome (message me here).
The symposium participants will be online from 3 to 5 pm GMT on Monday the 17th.
Brief bios for participants (mistakes mine):
- Will MacAskill is an Associate Professor of moral philosophy at the University of Oxford and a Senior Research Fellow at Forethought. He wrote the books Doing Good Better, Moral Uncertainty, and What We Owe the Future. He is a cofounder of Giving What We Can, 80,000 Hours, the Centre for Effective Altruism, and the Global Priorities Institute.
- Tyler John is an AI researcher, grantmaker, and philanthropic advisor. He is an incoming Visiting Scholar at the Cambridge Leverhulme Centre for the Future of Intelligence and an advisor to multiple philanthropists. He was previously the Programme Officer for emerging technology governance and Head of Research at Longview Philanthropy. Tyler holds a PhD in philosophy from Rutgers University—New Brunswick, where his dissertation focused on longtermist political philosophy and mechanism design, and the case for moral trajectory change.
- Michael St Jules is an independent researcher, who has written on “philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals”.
- Andreas Mogensen is a Senior Research Fellow in Philosophy at the Global Priorities Institute, part of the University of Oxford’s Faculty of Philosophy. His current research interests are primarily in normative and applied ethics. His previous publications have addressed topics in meta-ethics and moral epistemology, especially those associated with evolutionary debunking arguments.
- Greg Colbourn is the founder of CEEALAR and is currently a donor and advocate for Pause AI, which promotes a global AI moratorium. He has also supported various other projects in the space over the last two years.
Thanks for reading! If you'd like to contribute to this discussion, write some questions below that could be discussed in the symposium.
NB: To help conversations happen smoothly, I'd recommend sticking to one idea per top-level comment (even if that means posting multiple comments at once).
Position statement: I chose 36% disagreement. AMA!
My view is that Earth-originating civilisation, if we become spacefaring, will attain around 0.0001% of all value. This still makes extinction risk reduction astronomically valuable (it's equivalent to optimising a millionth of the whole cosmos!), but if we could increase the chance of optimising 1% of the universe by 1%, this would be 100x more valuable than avoiding extinction. (You're not going to get an extremely well-grounded explanation of these numbers from me, but I hope they make my position clearer.)
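For anyone who wants the arithmetic behind that "100x" spelled out, here is a quick sketch (my framing, normalising "all value" to 1):

```latex
% Default: civilisation attains 0.0001% of all value.
% Intervention: a 1% chance of optimising 1% of the universe.
\[
\begin{aligned}
\text{default expected value} &= 0.0001\% = 10^{-6},\\
\text{intervention's expected value} &= 0.01 \times 0.01 = 10^{-4},\\
\text{ratio} &= 10^{-4} / 10^{-6} = 100.
\end{aligned}
\]
```

So, on these admittedly rough numbers, the trajectory-change intervention is worth about 100 times as much as avoiding extinction.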
My view is that if Earth-originating civilisation becomes spacefaring, over the long term we will settle the entire universe and use nearly all of its energy. However, we could use this energy for many different things. I expect that by default we will mostly use this energy to create copies of ourselves (human brains) or digital slaves that allow us to learn and achieve new things, since humans love nothing more than humans and Earthly things. But these are inefficient media for realising moral value. We could instead use this energy to realise vastly more value, as I argue in Power Laws of Value. But our descendants basically won't care at all about doing this. So I expect that we will miss out on nearly all value, despite using most of the universe's energy.
I think that there will be significant path dependency in what Earth-originating civilisation chooses to make with our cosmic endowment. In particular, I expect that artificial intelligence will drive most growth and most choices after the 21st century. Several factors, such as the number of distinct agents, their values, and the processes by which their values evolve over time, will therefore make a decisive difference to what our descendants choose to do with the cosmos.
So our choices about AI governance and architecture today are likely to make a significant and, at least on some paths we can choose, predictable difference to what our descendants do with all of the energy in the universe. If we do this well, it could make the difference between attaining almost no value and nearly all value.
Given that I expect us to miss out on almost all value by default, I view the value of avoiding extinction as smaller than do those who think we will achieve almost all value by default.
No. I've said this before elsewhere, and it's not directly relevant to most of this discussion, but I think it's very worth reinforcing: EA is not utilitarianism, and a commitment to EA does not imply that you have any obligatory trade-off between your own or your family's welfare and your EA commitment. If, as is the generally accepted standard, a "normal" EA commitment is 10% of your income and/or resources, it seems bad ...