
Abstract

When our choice affects some other person and the outcome is unknown, it has been argued that we should defer to their risk attitude, if known, or else default to use of a risk avoidant risk function. This, in turn, has been claimed to require the use of a risk avoidant risk function when making decisions that primarily affect future people, and to decrease the desirability of efforts to prevent human extinction, owing to the significant risks associated with continued human survival. I raise objections to the claim that respect for others’ risk attitudes requires risk avoidance when choosing for future generations. In particular, I argue that there is no known principle of interpersonal aggregation that yields acceptable results in variable population contexts and is consistent with a plausible ideal of respect for others’ risk attitudes in fixed population cases.

Introduction

The long-run future is highly uncertain, as are the effects of present actions on posterity. In order to be able to make reasonable decisions that take account of the potential long-term impact of our choices today, we therefore need to know how to rationally manage uncertainty in decision-making.

According to orthodox decision theory, a rational agent in conditions of uncertainty prefers those acts that maximize expected utility (Arnauld and Nicole 1662; Bernoulli 1738; Ramsey 1926; von Neumann and Morgenstern 1947; Savage 1972). The utility function is assumed to be a cardinal measure of the agent’s strength of preferences over outcomes, and its expectation is taken relative to a probability function representing known chances and/or the strength of the agent’s beliefs about the state of the world.
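In its standard form (the notation here is a reconstruction for the reader's convenience, not drawn from the paper itself), an act $f$ that yields outcome $f(s)$ in state $s$ is evaluated by

$$\mathrm{EU}(f) = \sum_{s \in S} p(s)\, u\big(f(s)\big),$$

and the rational agent prefers acts with higher expected utility.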

In the recent philosophical literature, an influential alternative to expected utility theory is defended by Buchak (2013), building on earlier work by Quiggin (1982). Buchak argues for the rationality of maximizing risk-weighted expected utility (REU). On this view, a rational agent’s preferences over uncertain prospects depend not only on the probabilities she assigns to the different possible states of the world and the desirability of the different possible outcomes, but also independently on her attitude toward risk, as captured by a risk function on probabilities.
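In the rank-dependent form associated with Quiggin and Buchak (again reconstructed here rather than quoted from the paper), a prospect that yields outcome $x_i$ with probability $p_i$, with outcomes ordered from worst to best, is evaluated by

$$\mathrm{REU}(f) = u(x_1) + \sum_{i=2}^{n} r\Big(\sum_{j=i}^{n} p_j\Big)\,\big(u(x_i) - u(x_{i-1})\big),$$

where $r$ is the agent's risk function on probabilities. A risk-avoidant agent has a convex risk function, for instance $r(p) = p^2$, so improbable improvements over the worst case count for less than their probability alone would suggest; setting $r(p) = p$ recovers ordinary expected utility.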

More recently, Buchak (2016, 2017, 2019) has argued that moral contexts require that we adopt a particular attitude toward risk. We are required, she claims, to exhibit a high degree of risk avoidance as a default. This default, she claims, should especially guide those of our decisions whose largest impacts are on future individuals, such as decisions about climate change. This approach has also been claimed to have important consequences for evaluating the prospect of continued human survival and actions aimed at ensuring that our species endures. In particular, Pettigrew (2022) argues that consequentialists are pushed in the direction of favouring premature human extinction over continued human survival in light of the significant risks associated with the persistence of human beings as a dominant species.

The core idea that motivates this line of argument is that of respect for others’ risk attitudes. When our actions affect others, we ought not simply impose our own idiosyncratic attitude toward risk on them, and should instead choose in a way that takes account of the risk attitudes of the people we potentially affect. So goes the thought. I will offer reasons to think that a plausible ideal of respect for others’ risk attitudes does not support the kind of conclusions outlined above, and may even be irrelevant when thinking about the impact of present actions on the long-run future. The problem is that there is no known principle of interpersonal aggregation suitable to variable population contexts that is consistent with a plausible ideal of respect for others’ risk attitudes in fixed population cases.

I begin in section 2 by outlining the theory of risk-weighted expected utility. In section 3, I then set out Buchak’s view that we should default to a risk avoidant risk attitude when making choices on behalf of others, and I outline claims made by Buchak and Pettigrew about the implications of this principle for actions whose most important effects concern future people. In section 4, I emphasize an important choice that we face when we aim to respect others’ risk attitudes when choosing on behalf of a group of persons: namely, whether to aggregate first across persons and then across outcomes, or vice versa. In section 5, I note that there is good reason to think that respect for others’ risk attitudes requires that we adopt the latter approach, but that this approach is incompatible with consequentialism. In section 6, I argue that this approach also threatens to break down in variable population cases of the kind we inevitably confront in making decisions about the long-run future. The same is not true of a procedure that aggregates across persons within outcomes and then across outcomes, but this procedure cannot be justified by appeal to a plausible ideal of respect for others’ risk attitudes. Section 7 provides a summary and conclusion.
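To make the aggregation-order question concrete, here is a minimal illustrative sketch (my own hypothetical example, not taken from the paper) with two persons, two equally likely states, and a convex risk function:

```python
# A minimal illustrative sketch (hypothetical example, not from the paper) of the
# two orders of aggregation, using Buchak-style risk-weighted expected utility.

def reu(prospect, r):
    """Risk-weighted expected utility of a prospect given as (utility, probability)
    pairs, using risk function r on decumulative probabilities."""
    ranked = sorted(prospect)                      # order utilities from worst to best
    utils = [u for u, _ in ranked]
    probs = [p for _, p in ranked]
    value = utils[0]
    for i in range(1, len(utils)):
        prob_of_doing_at_least_this_well = sum(probs[i:])
        value += r(prob_of_doing_at_least_this_well) * (utils[i] - utils[i - 1])
    return value

def risk_avoidant(p):
    # A convex risk function: chances of improvement count for less than their probability.
    return p ** 2

# Hypothetical two-person, two-state choice: each state is equally likely, and in
# each state one person gets utility 10 and the other gets 0.
states = [((10, 0), 0.5), ((0, 10), 0.5)]          # ((utility_A, utility_B), probability)

# Aggregate across persons first, then across outcomes: sum utilities within each
# state, then take the REU of the resulting group prospect.
persons_then_outcomes = reu([(sum(us), p) for us, p in states], risk_avoidant)

# Aggregate across outcomes first, then across persons: compute each person's REU
# over states, then sum across persons.
outcomes_then_persons = sum(
    reu([(us[i], p) for us, p in states], risk_avoidant) for i in range(2)
)

print(persons_then_outcomes)   # 10.0 -> the group total is certain, so no risk penalty
print(outcomes_then_persons)   # 5.0  -> each person faces a risky prospect, discounted by r
```

In this toy case the group total is the same in every state, so aggregating across persons within outcomes first finds no risk to avoid, whereas aggregating each person's risk-weighted prospect first registers the risk that each individual faces. The two orders can therefore rank the same prospects differently.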

Read the rest of the paper


Comments



FWIW, Richard Pettigrew has written a condensed version of their paper on the EA Forum.
