
Ambiguity aversion is a preference for known over unknown risks and influences decision-making. Ambiguity (also called uncertainty) is a diverse phenomenon and is particularly prevalent in the study of existential risk. The following post analyses the influence of ambiguity aversion by means of the α-maxmin model in a specific choice situation: should an individual put their efforts into proximate altruistic projects (PAP) or into the longer-term reduction of X-risks (RXR)?

Theoretical framework

Decision modelling

A choice situation is described by four basic structures (Etner et al. 2012): the state space S, endowed with a σ-algebra Σ of events (Stoye 2011), the outcome space X, a set F of acts, i.e. mappings f: S → X, and a preference relation ≽ over these acts. The expected utility (Chandler 2017) of an act is a weighted average of the utilities of each of its possible outcomes.
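For concreteness, here is the expected-utility formula in this notation; a minimal sketch assuming a finite state space S with a probability p and a utility function u on outcomes (these symbols are generic labels, not taken from the cited sources):

```latex
% Expected utility of an act f : S -> X under a probability p and a utility function u,
% assuming a finite state space S.
\[
  \mathrm{EU}(f) \;=\; \sum_{s \in S} p(s)\, u\bigl(f(s)\bigr)
\]
```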
 

Different types of uncertainty

One can distinguish two fundamental origins of uncertainty (Bradley 2017):

  • uncertainty from features of the world like indeterminacy or randomness (objective uncertainty)
  • uncertainty from lack of information about the world (subjective or epistemic uncertainty)

Epistemic uncertainty can be further split into several subtypes:

 

| Type of uncertainty | Associated question |
|---|---|
| empirical/factual | What is the case? |
| evaluative | What should be the case? |
| modal | What could be the case? |
| option | What would be the case if we were to make an intervention of some kind? |

Types of uncertainty

 

Ambiguity Aversion

If it is possible to assign probabilities to the uncertainties in question, the situation is called risky. If no objective probability can be assigned, the situation is called ambiguous. Ambiguity aversion, first described by Ellsberg (1961), is an aversion to ambiguous situations. In the Ellsberg two-colour problem (Fig. 1), ambiguity-averse agents prefer betting on the urn with a known 50/50 composition of red and black balls over the urn whose composition is unknown.

Fig. 1: Ellsberg two-color problem (Eichberger and Pirner 2018)

 

Modelling ambiguity aversion

In the α-maxmin expected utility model (Ghirardato et al. 2004), an act f is evaluated in terms of expected utility:

V(f) = α · min_{p ∈ C} E_p[u(f)] + (1 − α) · max_{p ∈ C} E_p[u(f)]

α ∈ [0, 1] is a uniquely defined coefficient and an index of ambiguity aversion. C is a set of priors (credal set).
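To make the rule concrete, here is a minimal Python sketch of the α-maxmin evaluation over a finite credal set, applied to the Ellsberg two-colour urns from Fig. 1; the payoffs, the grid of priors and the value of α are assumptions made for this illustration, not values from the post or from Ghirardato et al. (2004).

```python
# Minimal sketch of alpha-maxmin expected utility over a finite credal set.
# All concrete numbers below are illustrative assumptions.

def expected_utility(payoffs, prior):
    """Expected utility of an act, given as a list of payoffs per state, under one prior."""
    return sum(p * u for p, u in zip(prior, payoffs))

def alpha_maxmin(payoffs, credal_set, alpha):
    """alpha * worst-case expected utility + (1 - alpha) * best-case expected utility."""
    eus = [expected_utility(payoffs, prior) for prior in credal_set]
    return alpha * min(eus) + (1 - alpha) * max(eus)

# Ellsberg two-colour problem: bet 100 on drawing a red ball, states = (red, black).
bet_on_red = [100, 0]
known_urn = [(0.5, 0.5)]                                  # composition known: one prior
unknown_urn = [(k / 10, 1 - k / 10) for k in range(11)]   # composition unknown: many priors

alpha = 0.75  # ambiguity-averse agent (more weight on the worst case)
print(alpha_maxmin(bet_on_red, known_urn, alpha))    # 50.0
print(alpha_maxmin(bet_on_red, unknown_urn, alpha))  # 25.0 = 0.75 * 0 + 0.25 * 100
```

Because α > 0.5 puts more weight on the worst prior, the urn with known composition is valued more highly, which reproduces the preference described above.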

 

Modelling situation

In order to analyse the influence of ambiguity aversion, following Mogensen (2018), consider an altruistic person (the decision maker) facing the decision whether to put their efforts into the long-term reduction of X-risks (RXR) or into proximate altruistic projects (PAP). Hence we consider the two acts RXR and PAP. We want to look at the impact of the person under different circumstances. There are two ways the person could fail to have any impact at all: their effort may not be necessary, or it may not be sufficient, for the achievement of the targeted effects.

Two extreme cases

Based on these two modes of failure, lack of necessity and lack of sufficiency, we will consider two different cases:

  • Case 1: RXR is assumed to be sufficient to prevent EC (existential catastrophe), but perhaps unnecessary. If the decision maker chooses RXR, the future will be good for sure.
  • Case 2: RXR is assumed to be necessary to prevent EC, but perhaps insufficient. If the decision maker chooses PAP instead of RXR, EC will happen for sure.

For both cases, the impact is specified in terms of agent-neutral value. In addition, we distinguish two states of the world: whether or not RXR prevents EC, i.e. whether choosing RXR is what makes the difference between EC and a splendid future. This allows us to set up two decision matrices:

 

| | RXR prevents EC | RXR does not prevent EC |
|---|---|---|
| PAP | 1 | 101 |
| RXR | 100 | 100 |

Case 1

 

An ambiguity-averse decision maker goes for RXR because there a value of 100 is guaranteed, whereas PAP yields anywhere between 1 and 101 depending on the unknown state.

 

| | RXR prevents EC | RXR does not prevent EC |
|---|---|---|
| PAP | 1 | 1 |
| RXR | 100 | 0 |

Case 2

 

In Case 2, an ambiguity-averse agent will opt for PAP because, in contrast to RXR, it involves no ambiguity: PAP yields an agent-neutral value of 1 for sure, whereas RXR yields either 100 or 0.
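As a quick numerical check (a sketch, not part of the original analysis), the following Python snippet evaluates both matrices for a maximally ambiguity-averse agent, i.e. α = 1, in which case the α-maxmin value of an act under the full set of priors over the two states is simply its worst payoff:

```python
# Worst-case (alpha = 1, pure maxmin) comparison of the two stylised decision matrices.
# The payoffs are the illustrative agent-neutral values from the tables above.

case1 = {"PAP": [1, 101], "RXR": [100, 100]}  # RXR sufficient, but perhaps unnecessary
case2 = {"PAP": [1, 1],   "RXR": [100, 0]}    # RXR necessary, but perhaps insufficient

for name, matrix in [("Case 1", case1), ("Case 2", case2)]:
    worst = {act: min(payoffs) for act, payoffs in matrix.items()}
    choice = max(worst, key=worst.get)
    print(name, worst, "-> choose", choice)

# Output:
# Case 1 {'PAP': 1, 'RXR': 100} -> choose RXR
# Case 2 {'PAP': 1, 'RXR': 0} -> choose PAP
```

With less extreme ambiguity aversion or a narrower credal set the comparison in Case 2 can come out differently; α = 1 is used here only as the clearest benchmark.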

Generalization

In order to generalize beyond the two extreme cases, we will still follow the structure of necessity and sufficiency. Consider the act RXR for the prevention of EC. It can be unnecessary (in which case we do not care about its sufficiency), necessary but insufficient, or both necessary and sufficient. With these three possibilities we can set up a generalized decision matrix in which the agent-neutral values are denoted by variables:

 

| | EC independent of choice | Splendid future if RXR is chosen, EC otherwise | Splendid future independent of choice |
|---|---|---|---|
| PAP | y | y | w |
| RXR | z | x | x |

Generalized decision matrix

 

The worst case is represented by z, which arises if the agent chooses RXR and EC happens anyway: the agent neither enjoys the benefits of PAP nor those of RXR. y represents the value yielded by the choice of PAP (when EC occurs), whereas x corresponds to the value yielded by the choice of RXR (a splendid future). The best case arises if the agent chooses PAP while RXR is not necessary, since then the agent-neutral value incorporates the benefits of both PAP and RXR, amounting to w. Therefore, clearly the following relation holds: w > x > y > z.

From the definition of the four variables, the following equivalence can be deduced:

w − y = x − z, or equivalently, w − x = y − z.

This seems reasonable, since the value achieved through PAP is independent of whether humanity goes extinct later or not: w − x is the value added by PAP given a splendid future, and y − z is the value added by PAP given EC (in the numbers used above, 101 − 100 = 1 − 0). Since we are interested in the relation between the benefits of PAP and RXR, we define the quotient r as

r = (y − z) / (x − z).

r thus represents the ratio of the respective values achieved by PAP and RXR. Since y > z and x > y, it holds that 0 < r < 1.

The analysis yielded the following conditions: an ambiguity-averse agent favours RXR over PAP if the α-maxmin value of RXR exceeds that of PAP, which amounts to r lying below a threshold determined by the ambiguity-aversion index α and the extremal probabilities in the credal set. The same agent favours PAP over RXR if the converse inequality holds.
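To illustrate how such a threshold behaves, here is a Python sketch for the generalized matrix. Everything concrete in it is an assumption for illustration only: the values are normalised to z = 0 and x = 1 (so that y = r and w = 1 + r via the equivalence above), the credal set is taken to be all priors over the three states whose second and third coordinates lie in assumed intervals, and α = 0.8.

```python
# Sketch: alpha-maxmin comparison of RXR and PAP in the generalised decision matrix.
# Normalisation (z = 0, x = 1, y = r, w = 1 + r) and the probability bounds are
# illustrative assumptions, not values from the post.

def alpha_maxmin(payoffs, credal_set, alpha):
    eus = [sum(p * v for p, v in zip(prior, payoffs)) for prior in credal_set]
    return alpha * min(eus) + (1 - alpha) * max(eus)

# States: (EC regardless of choice, splendid future iff RXR, splendid future regardless).
# Assumed bounds: P(splendid iff RXR) in [0.10, 0.60], P(splendid regardless) in [0.20, 0.50].
credal_set = [
    ((100 - k2 - k3) / 100, k2 / 100, k3 / 100)
    for k2 in range(10, 61)
    for k3 in range(20, 51)
    if k2 + k3 <= 100
]

alpha = 0.8
for r in [0.01, 0.1, 0.3, 0.6]:
    v_pap = alpha_maxmin([r, r, 1 + r], credal_set, alpha)  # PAP row: (y, y, w)
    v_rxr = alpha_maxmin([0, 1, 1], credal_set, alpha)      # RXR row: (z, x, x)
    print(f"r = {r}: choose {'RXR' if v_rxr > v_pap else 'PAP'}")

# With these assumed bounds the preference flips from RXR to PAP once r exceeds about 0.18.
```

With these assumed bounds, raising α lowers the threshold, i.e. it makes the ambiguity-averse agent more reluctant to choose RXR.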

 

Discussion

We found that the choice of an ambiguity-averse agent differs between the two cases considered: in Case 1, where RXR is known to be sufficient, ambiguity aversion seems to favour RXR. In contrast, in Case 2, where RXR is known to be necessary, ambiguity aversion favours PAP. The decision thus depends on the setting. In reality, we are of course never sure that RXR will be sufficient; at most we can be sure that certain interventions decrease particular risks. It seems more plausible that we can be sure about the necessity of certain interventions. In that case, ambiguity aversion pushes the decision maker towards PAP and would therefore not be helpful for the mitigation of X-risks.

 

Conclusion

Through a decision-theoretic analysis of our modelling situation, we specified conditions under which ambiguity aversion works in favour of RXR and conditions under which it pushes the decision maker towards PAP. In the study of X-risks, different types of uncertainty are always involved, and we are almost never able to assign precise probabilities. Nevertheless, the decision-theoretic approach presented here builds heavily on probabilities. This is because such an approach is helpful as a structure for our thinking: it can guide us through different aspects of the situation at hand. Besides, a formal approach allows a certain degree of standardization, which is important because it is a precondition for arriving at general statements. We made several assumptions in order to simplify the modelling situation. Even though these simplifications weaken the expressiveness of the results, they are justified by the complexity of the issues under investigation. A theoretically more sophisticated analysis, which was beyond the scope of this project, might integrate further aspects and analytical tools.

 

Acknowledgements

This post is based on my research project in the Swiss Existential Risk Initiative. Views and mistakes are my own.

 

 

References

Bradley, Richard (2017). Decision Theory with a Human Face. Cambridge: Cambridge University Press. https://doi.org/10.1017/9780511760105

Chandler, Jake (2017). "Descriptive Decision Theory". In: The Stanford Encyclopedia of Philosophy. Ed. by Edward N. Zalta. Winter 2017 edition. Metaphysics Research Lab, Stanford University.

Eichberger, Jürgen and Hans Jürgen Pirner (2018). "Decision theory with a state of mind represented by an element of a Hilbert space: The Ellsberg paradox". In: Journal of Mathematical Economics 78, pp. 131–141. https://linkinghub.elsevier.com/retrieve/pii/S0304406818300193

Ellsberg, Daniel (1961). "Risk, Ambiguity, and the Savage Axioms". In: The Quarterly Journal of Economics 75.4, p. 643. https://academic.oup.com/qje/article-lookup/doi/10.2307/1884324

Etner, Johanna, Meglena Jeleva, and Jean-Marc Tallon (2012). "Decision theory under ambiguity". In: Journal of Economic Surveys 26.2, pp. 234–270. http://doi.wiley.com/10.1111/j.1467-6419.2010.00641.x

Ghirardato, Paolo, Fabio Maccheroni, and Massimo Marinacci (2004). "Differentiating ambiguity and ambiguity attitude". In: Journal of Economic Theory 118.2, pp. 133–173. https://linkinghub.elsevier.com/retrieve/pii/S0022053104000262

Mogensen, Andreas L. (2018). "Long-termism for risk averse altruists". Unpublished manuscript.

Stoye, Jörg (2011). "Statistical decisions under ambiguity". In: Theory and Decision 70.2, pp. 129–148. http://link.springer.com/10.1007/s11238-010-9227-2

Comments



I didn't quite follow. What's the reasoning for claiming this?

From the definition of the four variables, the following equivalence can be deduced: w − y = x − z (equivalently, w − x = y − z)

The reasoning is the following: 

The agent-neutral values are now denoted by variables instead of numbers. The worst case is represented by z, where the agent neither enjoys the benefits of PAP nor those of RXR. y represents the value yielded by the choice of PAP, whereas x corresponds to the value yielded by the choice of RXR. The best case arises if the agent chooses PAP while RXR is not necessary, since then the agent-neutral value incorporates the benefits of PAP and RXR, amounting to w. Therefore, clearly the following relation holds: w > x > y > z.

From there, the equivalence in question follows.

Do you agree?

I guess I don't understand why w > x > y > z implies w − y = x − z iff w − x = y − z. Sorry if this is a standard result I've forgotten, but at first glance it's not totally obvious to me.

Maybe it gets clearer if you compare the relative values of the four variables: w − y corresponds to the benefits of RXR, and x − z also corresponds to the benefits of RXR. But maybe I was not precise enough: the equivalence does not follow only from w > x > y > z; we also need to take into account the definitions of the four variables.
Do you see what I mean?


 

I didn't get the intuition behind the initial formulation:

V(f) = α · min_{p ∈ C} E_p[u(f)] + (1 − α) · max_{p ∈ C} E_p[u(f)]

What exactly is that supposed to represent? And what was the basis for assigning numbers to the contingency matrix in the two example cases you've considered?

Thanks for your question!
This is how the α-maxmin model is defined. You can consider the coefficient α as a sort of pessimism index. For details, see the source Ghirardato et al. (2004).

The min and max terms are supposed to represent the extreme cases, i.e. the worst and the best prior in the credal set.
The numbers in the examples are purely illustrative; the purpose is simply to have two contrasting cases to study.

 
