
This post is part of Rethink Priorities’ Worldview Investigations Team’s CURVE Sequence: “Causes and Uncertainty: Rethinking Value in Expectation.” The aim of this sequence is twofold: first, to consider alternatives to expected value maximization for cause prioritization; second, to evaluate the claim that a commitment to expected value maximization robustly supports the conclusion that we ought to prioritize existential risk mitigation over all else.

Introduction

RP has committed itself to doing good. Given the limits of our knowledge and abilities, we won’t do this perfectly but we can do this in a principled manner. There are better and worse ways to work toward our goal. In this post, we discuss some of the practical steps that we’re taking to navigate uncertainty, improve our reasoning transparency, and make better decisions. In particular, we want to flag the value of three changes we intend to make:

  • Incorporating multiple decision theories into Rethink Priorities’ modeling
  • More rigorously quantifying the value of different courses of action
  • Adopting transparent decision-making processes

Using Multiple Decision Theories

Decision theories are frameworks that help us evaluate and make choices under uncertainty about how to act.[1] Should you work on something that has a 20% chance of success and a pretty good outcome if success is achieved, or work on something that has a 90% chance of success but only a weakly positive outcome if achieved? Expected value theory is the typical choice to answer that type of question. It calculates the expected value (EV) of each action by multiplying the value of each possible outcome by its probability and summing the results, recommending the action with the highest expected value. But because low probabilities can always be offset by corresponding increases in the value of outcomes, traditional expected value theory is vulnerable to the charge of fanaticism, “risking arbitrarily great gains at arbitrarily long odds for the sake of enormous potential” (Beckstead and Thomas, 2021). Put differently, it seems to recommend spending all of our efforts on actions that, predictably, won’t achieve our ends.
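To make the arithmetic concrete, here is a minimal sketch of the expected value calculation for the two hypothetical options above (the payoff numbers are invented purely for illustration):

```python
# Expected value of two hypothetical options:
#   Option 1: 20% chance of success, fairly good outcome if it succeeds.
#   Option 2: 90% chance of success, only weakly positive outcome if it succeeds.
# The value units and payoffs are made up for illustration only.

def expected_value(outcomes):
    """Sum of probability * value over all possible outcomes."""
    return sum(p * v for p, v in outcomes)

option_1 = [(0.20, 100), (0.80, 0)]   # long shot, bigger payoff
option_2 = [(0.90, 10), (0.10, 0)]    # safer bet, smaller payoff

print(expected_value(option_1))  # ~20
print(expected_value(option_2))  # ~9 -> EV maximization picks option 1
```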

Alternative decision theories have significant drawbacks of their own, each giving up one plausible axiom or another. The simplest alternative is expected value maximization with very small probabilities rounded down to zero. This gives up the axiom of continuity, which says that for any prospects A ≥ B ≥ C, there exists some probability that would make you indifferent between B and a probabilistic mixture of A and C. This violation leads to some odd outcomes: believing the chance of something is 1 in 100,000,000,000 can mean an action gets no weight, while believing it’s 1.0000001 in 100,000,000,000 means the option dominates your considerations if the expected value upon success is high enough, which is a kind of attenuated fanaticism. There are also other problems, like setting the threshold below which you should round down.[2]
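Here is a rough sketch of that attenuated fanaticism under rounding down (the threshold and payoff below are arbitrary placeholders chosen to mirror the numbers above):

```python
def ev_with_rounding_down(outcomes, threshold=1 / 100_000_000_000):
    """Expected value, but probabilities at or below the threshold count as zero."""
    return sum((0 if p <= threshold else p) * v for p, v in outcomes)

huge_payoff = 1e15  # made-up astronomically large payoff
at_threshold = [(1 / 100_000_000_000, huge_payoff)]          # gets no weight at all
just_above = [(1.0000001 / 100_000_000_000, huge_payoff)]    # counts in full

print(ev_with_rounding_down(at_threshold))  # 0
print(ev_with_rounding_down(just_above))    # ~10000 -- suddenly dominates
```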

Alternatively, you could go with a procedure like weighted-linear utility theory (WLU) (Bottomley and Williamson, 2023), but that gives up the principle of homotheticity, which involves indifference to mixing a given set of options with the worst possible outcome. Or you could go with a version of risk-weighted expected utility (REU) (Buchak, 2013) and give up the axiom of betweenness, which suggests that the order in which you are presented with information shouldn’t alter your conclusions.[3]
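For readers who want a more concrete sense of these alternatives, here is a rough sketch of how the two evaluation rules are commonly written up, as we understand them; the particular risk and weighting functions below are arbitrary illustrative choices, not ones endorsed by Buchak or by Bottomley and Williamson:

```python
def reu(outcomes, r=lambda p: p ** 2):
    """Risk-weighted expected utility (Buchak-style), sketched.
    outcomes: list of (probability, utility) pairs.
    r: a risk function applied to the probability of doing at least that well;
       r(p) = p**2 is a risk-averse placeholder."""
    outs = sorted(outcomes, key=lambda o: o[1])   # order outcomes from worst to best
    total = outs[0][1]                            # you get at least the worst utility
    for i in range(1, len(outs)):
        p_at_least = sum(p for p, _ in outs[i:])  # probability of doing at least this well
        total += r(p_at_least) * (outs[i][1] - outs[i - 1][1])
    return total

def wlu(outcomes, w=lambda u: 1 / (1 + abs(u))):
    """Weighted-linear utility (Bottomley & Williamson-style), sketched.
    w down-weights extreme outcomes; this particular w is just a placeholder."""
    num = sum(p * w(u) * u for p, u in outcomes)
    den = sum(p * w(u) for p, u in outcomes)
    return num / den

long_shot = [(0.001, 1_000_000), (0.999, -10)]  # made-up long-shot gamble
print(reu(long_shot), wlu(long_shot))  # both come out negative, far below the plain EV of ~990
```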

It’s very unclear to us, for example, that giving up continuity is preferable to giving up homotheticity, and neither REU nor WLU really eliminates issues with fanaticism as a matter of logic (even if, in practice, WLU seems to produce negative values for long-shot possibilities in general)[4]. It seems that once you switch from pure EV to other theories, whether it be REU, WLU, expected utility with rounding down, or some other future option, there isn’t an option that’s clearly best. Instead, many arguments rely on competing, but ultimately not easily resolvable, intuitions about which set of principles is best. Still, at worst, the weaknesses of these alternative options seem similar in scope to the weakness of pure EV, which logically suggests spending (and predictably wasting) all of our resources not on activities like x-risk prevention or insect welfare, but on actions like interacting with the multiverse or improving the welfare of protons.[5]

Broadly, we don’t think decision theories, with their various strengths and weaknesses, axiomatic and applied, are the type of claim you can be highly confident about. For this reason, we ultimately think you would need to be unreasonably confident that a given procedure, or set of procedures that agree on the types of actions they suggest, is correct (possibly >90%) in order for the uncertainty across theories, and what they imply, not to impact your actions.[6] While there are arguments and counterarguments for many of these theories, we’re more confident in the broad claim that no argument for one of these theories over all the others is decisive than we are in any particular argument or reply for any given theory.

So, we still plan to calculate the EV of the actions available to us, since we think in most cases this is identical to EV with rounding down. However, we won’t only calculate the EV of those actions anymore.[7] Now, we plan to use other decision theories as well, like REU and WLU, to get a better understanding of the riskiness of our options. This allows us, among other things, to identify options that are robustly good under decision-theoretic uncertainty. (As Laura Duffy notes in her general discussion of risk aversion and cause prioritization, and considering only the next few generations, work on corporate campaigns for chickens fits this description: it’s never the worst option and rarely produces negative value across these procedures.) Using a range of decision theories also helps us represent internal disagreements more clearly: sometimes people agree on the probabilities and values of various outcomes, but disagree about how to weigh low probabilities, negative outcomes, or outcomes where our gamble doesn’t pay off. By formalizing these disagreements, we can sometimes resolve them.

Quantify, Quantify, Quantify

We’ve long built models to inform our decision-making.[8] However, probabilities can be unintuitive, and the results of more rigorous calculations are often surprising. We’ve discovered during the CURVE sequence, for instance, that small changes to different kinds and levels of risk aversion can alter what you ought to do; and, even if you assume that you ought to maximize expected utility, making small adjustments to future risk structures and value trajectories has significant impacts on the expected value of existential risk mitigation work. And, of course, before the present sequence, RP had built many models, for example, to try to estimate moral weights for animals, finding significant variance across them.[9]

What’s more, there are key areas where we know our models are inadequate. For example, it’s plausible that returns on different kinds of spending diminish at different rates, but estimating these rates remains difficult. We need to do more work to make thoughtful tradeoffs between, say, AI governance efforts and attempts to improve global health. Likewise, it’s less complex to assess the counterfactual credit due to some animal welfare interventions but extremely difficult to estimate the counterfactual credit due to efforts to reduce the risk of nuclear war. Since these kinds of factors could swing overall cost-effectiveness analyses, it’s crucial to keep improving our understanding of them. So, we’ll keep investigating these issues as systematically as we can.

None of this is to say we take the outputs of these types of quantitative models literally. We don’t. Nor is it to claim there is no place at all for qualitative inputs or reasoning in our decision-making. It is to say that quantifying our uncertainties whenever possible generally helps us make better decisions. The difficulties of accounting for all of the above issues are typically made worse, not better, when precise quantitative statements of beliefs or inputs are replaced by softer qualitative judgments. We think the work in the CURVE sequence has further bolstered the case that even when you can’t be precise in your estimates, quantifying your uncertainty can still significantly improve your ability to reason carefully.
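As a toy illustration of what we mean (all inputs below are made-up placeholders, not estimates from any actual RP analysis), one can represent uncertain inputs as distributions and propagate them through a simple cost-effectiveness model rather than relying on point estimates:

```python
import random

random.seed(0)

def simulate_cost_effectiveness(n=10_000):
    """Toy Monte Carlo cost-effectiveness model with made-up inputs."""
    draws = []
    for _ in range(n):
        cost_per_unit = random.lognormvariate(4.0, 0.5)    # $ per unit of program delivered
        effect_per_unit = random.normalvariate(2.0, 1.5)   # value per unit; can be negative
        draws.append(effect_per_unit / cost_per_unit)
    return sorted(draws)

draws = simulate_cost_effectiveness()
mean = sum(draws) / len(draws)
print(f"mean: {mean:.4f}, "
      f"5th pct: {draws[len(draws) // 20]:.4f}, "
      f"95th pct: {draws[19 * len(draws) // 20]:.4f}, "
      f"P(negative): {sum(d < 0 for d in draws) / len(draws):.0%}")
```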

Transparent Decision-Making

Knowing how to do good was hard enough before we introduced alternative decision theories. Still, RP has to make choices about how to distribute its resources, navigating deep uncertainty and, sometimes, differing perspectives among our leadership and staff. Since we want to make our choices sensitive to our evidential situation and transparent within the organization, we’re committed to finding a decision-making strategy that allows us to navigate this uncertainty in a principled manner. Thankfully, there are a wide range of deliberative decision-making processes, such as Delphi panels and citizen juries, that are available for just such purposes.[10] Moreover, there are a number of formal and informal methods of judgment aggregation that can be used at the end of the deliberative efforts.

We aren’t yet sure which of these particular decision procedures we’ll use and we expect creating such a process and executing it to take time.[11] All of these procedures have drawbacks in particular contexts and we don’t expect any such procedure to be able to handle all the specific decisions that RP faces. However, we’re confident that a clearly defined decision procedure that forces us to be explicit about the tradeoffs we’re making and why is superior to unilateral and intuition-based decision-making. We want to incorporate the best judgment of the leaders in our organization and own the intra- and inter-cause comparisons on which our decisions are based. So, we’re in the process of setting up such decision procedures and will report back what we can about how they’re operating.

Conclusion

We want to do good. The uncertainties involved in doing good are daunting, particularly given that we are trying to take an impartial, scope-sensitive approach that is open to revision. However, RP aims to be a model of how to handle uncertainty well. In part, of course, this requires trying to reduce our uncertainty. But lately, we’ve been struck by how much it requires recognizing the depth of our uncertainty—all the way to the very frameworks we use for decision-making under uncertainty. We are trying to take this depth seriously without becoming paralyzed, which explains why we’re doubling down on modeling and collective decision-making procedures.

In practice, we suspect that a good rule of thumb is to spread our bets across our options. Essentially, we think we’ve entered a dizzying casino where the house won’t even tell us the rules of the game. And even if we knew the rules, we’d face a host of other uncertainties: the long-term payouts of various options, the risk of being penalized if we choose incorrectly among various courses of action, and a host of completely inscrutable possibilities about which we have no idea what to think. In a situation of this type, it seems like a mistake to assume that one ruleset is correct and proceed accordingly. Instead, we want to find robustly good options among different plausible rulesets whenever we can. And when we can’t, we may want to distribute our resources in proportion to different reasonable approaches to prioritization.

This isn’t perfect or unobjectionable. But nothing is. RP will continue to do its best to make these decisions as transparently as we can, learning from our mistakes and continuing to try to advance the cause of improving the world.

Acknowledgements

The piece was written by Marcus A. Davis and Peter Wildeford. Thanks to David Moss, Abraham Rowe, Janique Behman, Carolyn Footitt, Hayley Clatterbuck, David Rhys Bernard, Cristina Schmidt Ibáñez, Jacob Peacock, Aisling Leow, Renan Araujo, Daniela R. Waldhorn, Onni Aarne, Melissa Guzikowski, and Kieran Greig for feedback. A special thanks to Bob Fischer for writing a draft of this post. The post is a project of Rethink Priorities, a global priority think-and-do tank, aiming to do good at scale. We research and implement pressing opportunities to make the world better. We act upon these opportunities by developing and implementing strategies, projects, and solutions to key issues. We do this work in close partnership with foundations and impact-focused non-profits or other entities. If you're interested in Rethink Priorities' work, please consider subscribing to our newsletter. You can explore our completed public work here.



  1. As discussed in this post, when we refer to “decision theories” we are referring to normative theories of rational choice, without regard to the distinction between evidential decision theory and causal decision theory. That distinction concerns whether one should choose actions based on their expected causal effects (causal decision theory) or based on their news value, taking the action you would most want to learn that you will take, whether or not that value is driven by causal effects (evidential decision theory). ↩︎

  2. This is something we would like to see explored further in research. Presently, the choice of where to set the threshold can seem somewhat arbitrary, with no solid arguments for any particular threshold that don’t refer to hypothetical or real cases and ask whether the outcomes of those cases are acceptable. ↩︎

  3. Violations of homotheticity and betweenness are both violations of the principle of independence, which decomposes into these two principles. As such, both REU and WLU violate independence. ↩︎

  4. We are aware that discussion of these principles can sound rather abstract. We think it's fine to be unfamiliar with these axioms and what they imply (we also weren't familiar with them before the past few years). What seems less ideal is having an unshakable belief that a particular rank ordering of these abstract principles is simple or obvious such that you can easily select a particular decision theory as superior to others, particularly once you decide to avoid fanaticism. ↩︎

  5. Some may doubt that EV would require this, but if you preemptively rule out really implausible actions like extending the existence of the universe that could have a really high value if done, even if the probability is really small, then in practice you are likely calculating expected value maximization with rounding down. This is what we think most actors in the EA space have been doing in practice rather than pure expected value maximization. For more on why, and what axioms different decision theory options including expected value maximization with rounding down are giving up, see the WIT sequence supplement from Hayley Clatterbuck on Fanaticism, Risk Aversion, and Decision Theory. For more on why fanaticism doesn’t endorse x-risk prevention or work on insects see Fanatical EAs should support very weird projects by Derek Shiller. For more on how one might maintain most of the value of expectational reasoning while not requiring actions like this, see Tarsney 2020, Exceeding Expectations: Stochastic Dominance as a General Decision Theory. ↩︎

  6. Suppose, as an example, you are ~50% confident in pure EV, and 50% confident that, conditional on pure EV being incorrect, EV with rounding down is best. That would imply an absolute credence of 25% in EV with rounding down and a 25% chance you think some other non-EV option is correct. If you were 70% confident in EV and 70% confident, conditional on it being false, that EV with rounding down is right, that would leave your split as 70% EV, 21% EV with rounding down, 9% something else. If you were instead equally uncertain about these strengths and weaknesses across the theories discussed above, it would imply a 25% credence in each of WLU, REU, pure EV, and EV with rounding down (assuming you assigned no weight to other known theories and to the possibility that there may, say, be future theories distinct from these known options). Overall, because these theories often directionally disagree on the best actions, you need to line up confidence across theories just right to avoid uncertainty in what actions are recommended. ↩︎

  7. A counterargument here would be to say that expected utility, or expected utility with rounding down, is clearly superior to these other options and as such we should do whatever it says. In addition to our broader concerns about the type of evidence that can be brought to bear not being definitive, one problem with this type of response is that it assumes either that the correct aggregation method across decision procedures heavily favors EV outputs (in practice or for a theoretical reason) or that we can be confident now that all these alternatives are incorrect (i.e., the weight we should put on them is below ~1%). Neither move seems justifiable given our present knowledge. It’s worth noting that in their 2021 paper The Evidentialist's Wager, MacAskill et al. discuss the aggregation of evidential and causal decision theories but, for a variety of reasons, we don’t think the solutions posed for that related but separate dilemma apply here. ↩︎

  8. For example, we’ve built models to estimate the cost-effectiveness of particular interventions and to retrospectively assess the value of our research itself, both at the org level and at the level of individual projects. These models have often been inputs into our decision-making or into what we advise others to do. ↩︎

  9. Another example of the fragility of models is visible in Jamie Elsey and David Moss's post Incorporating and visualizing uncertainty in cost effectiveness analyses: A walkthrough using GiveWell’s estimates for StrongMinds, examining how modeling choices about handling uncertainty can significantly alter your conclusions. ↩︎

  10. In this context, citizen juries, Delphi panels, and other deliberative decision-making procedures would be designed to help us assign credences across different theories, or make specific decisions in the face of uncertainty and disagreement across participants. ↩︎

  11. We also aren’t sure when we’ll do these things as they all take time and money. For example, analyzing different decision-making frameworks and thinking through the cost-curves across interventions could involve ~3-6 months of work from multiple people. ↩︎

Comments



I have increasingly become open to incorporating alternative decision theories as I recognize that I cannot be entirely certain in expected value approaches, which means that (per expected value!) I probably should not solely rely on one approach. At the same time, I am still not convinced that there is a clear, good alternative, and I also repeatedly find that the arguments against using EV are not compelling (e.g., due to ignoring more sophisticated ways of applying EV).

Having grappled with the problem of EV-fanaticism for a long time in part due to the wild norms of competitive policy debate (e.g., here, here, and here), I've thought a lot about this, and I've written many comments on the forum about this. My expectation is that this comment won't gain sufficient attention/interest to warrant me going through and collecting all of those instances, but my short summary is something like:

  • Fight EV fire with EV fire: Countervailing outcomes—e.g., the risk that doing X has a negative 999999999... effect—are extremely important when dealing with highly speculative estimates. Sure, someone could argue that if you don't give $20 to the random guy wearing a tinfoil hat and holding a remote which he will use to destroy 3^3^3 galaxies there's at least a 0.000000...00001% chance he's telling the truth, but there's also a decent chance that doing this could have the opposite effect due to some (perhaps hard-to-identify) alternative effect.
  • One should probably distinguish between extremely low (e.g., 0.00001%) estimates which are the result of well-understood or "objective"[1] analyses which you expect cannot be improved by further analysis or information collection (e.g., you can directly see/show the probability written in a computer program, a series of coin flips with a fair coin) vs. estimates that are the result of very subjective probability estimates that you expect you will likely adjust downwards due to further analysis, but where you just can't immediately rule out some sliver of uncertainty.[2]
    • Often you should recognize that when you get into small probability spaces for "subjective" questions, you are at a very high risk of being swayed by random noise or deliberate bias in argument/information selection—for example, if you've never thought about how nano-tech could cause extinction and listen to someone who gives you a sample of arguments/information in favor of the risks, you likely will not immediately know the counterarguments and you should update downwards based on the expectation that the sample you are exposed to is probably an exaggeration of the underlying evidence.
    • The cognitive/time costs of doing "subjective" analyses likely impose high opportunity costs (going back to the first point);
    • When your analysis is not legible to other people, you risk high reputational costs (again, which goes back to the first point).
  • Based on the above, I agree that in some cases it may be far more efficient, for decision-making under analytical constraints, to use heuristics like trimming off highly "subjective" risk estimates. However, I make this claim based on EV, with the recognition that it is still a better general-purpose decision-making algorithm, just not one optimized for application under realistic constraints (e.g., other people not being familiar with your method of thinking, a short amount of time for discussion or research, error-prone brains which do not reliably handle lots of considerations and small numbers).[3]
  1. ^

    I dislike using "objective" and "subjective" to make these distinctions, but for simplicity's sake / for lack of a better alternative at the moment, I will use them.

  2. ^
  3. ^

    I advocate for something like this in competitive policy debate, since "fighting EV fire with EV fire" risks "burning the discussion"—including the educational value, reputation of participants, etc. But most deliberations do not have to be made within the artificial constraints of competitive policy debate.

I have a bunch of comments on the specific decision theories, some adding to what you have here, and some corrections and recommendations, ~all of them minor and not detracting from the larger point of this post, but I think worth making anyway. Some of these are comments I've made elsewhere on the sequence.

  1. Expectational total utilitarianism (and expected utility maximization with unbounded utility generally) also violates Continuity, because of St Petersburg prospects and/or infinities. If we’re counting it against discounting, we have some reason to do so for expectational total utilitarianism. I could imagine expectational total utilitarianism's violations of Continuity being more intuitively acceptable, though.
  2. Violating Continuity per se doesn't seem instrumentally irrational or a big deal to me. The problem of attenuated fanaticism at the threshold you point out seems intuitively worse, but normative uncertainty over the threshold should smooth things out somewhat.
  3. Discounting small probabilities or small probability differences also violates Independence, and is vulnerable to money pumps in theory. (I think it also doesn't respect statewise dominance or stochastic dominance, which would be worse, but maybe fixable, by just ordering by dominance first and then filling in the rest of the missing comparisons with discounted expected utility?)
  4. Expectational total utilitarianism (and expected utility maximization with an unbounded utility function generally) violates countable extensions of Independence and is vulnerable to similar money pumps and Dutch books in theory, which should also count against the theory. Maybe not as bad as how WLU and REU violate Independence or are vulnerable to money pumps, but still worth pointing out.
  5. Expected utility maximization can be guaranteed to avoid fanaticism while satisfying the standard EUT axioms (and countable extensions), with a bounded utility function and the bounds small enough or marginal returns decreasing fast enough, in relative terms. Or, at least, say, a bounded function applied to differences in a difference-making version, but at the cost of stochastic dominance (specifically stochastic equivalence, not statewise dominance) and EUT. In my view, expected utility maximization with a bounded utility function (not difference-making) is the most instrumentally rational of the options, and it and boundedness with respect to differences seem the most promising, but have barely been discussed in the sequence (if at all?). I would recommend exploring these options more.
  6. WLU can be guaranteed to avoid fanaticism, with the right choice of weighting function (or, in the difference-making version, at least). If your utility function is u, I think just picking your weighting function w so that w(y)*y is bounded and increasing in y(=u(x)) works, e.g. w(y) = f(y) / y, where f is bounded and increasing, with the bounds small enough. Then WLU looks a lot like maximizing the expected value of f(u(x)), just with some renormalization. But then you could just maximize the expected value of f(u(x)) instead (or some other bounded u instead directly), like in the previous point, to avoid violating Independence.
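A quick numerical check of the renormalization point in item 6 (purely illustrative; the bounded, increasing f below is an arbitrary placeholder, and utilities are kept positive so that w(y) = f(y)/y is well defined):

```python
import math

def f(y):
    # an arbitrary bounded, increasing function of (positive) utility, as a placeholder
    return 1 - math.exp(-y / 100.0)

def wlu_value(outcomes, w):
    # weighted-linear utility with weighting function w
    num = sum(p * w(u) * u for p, u in outcomes)
    den = sum(p * w(u) for p, u in outcomes)
    return num / den

w = lambda y: f(y) / y                     # the choice of weighting function from item 6
lottery = [(0.9, 10.0), (0.1, 10_000.0)]   # made-up lottery with positive utilities

expected_f = sum(p * f(u) for p, u in lottery)           # E[f(u)]
renormalizer = sum(p * f(u) / u for p, u in lottery)     # the renormalizing term E[f(u)/u]
print(wlu_value(lottery, w), expected_f / renormalizer)  # agree up to floating-point rounding
```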

Thanks for the engagement, Michael.

I largely agree with your notes and caveats.

However, on this:

Expected utility maximization can be guaranteed to avoid fanaticism while satisfying the standard EUT axioms (and countable extensions), with a bounded utility function and the bounds small enough or marginal returns decreasing fast enough, in relative terms… In my view, expected utility with a bounded utility function (not difference-making) is the most instrumentally rational of the options, and it and boundedness with respect to differences seem the most promising, but have barely been discussed in the sequence (if at all?). I would recommend exploring these options more.

I’m definitely in for exploring a variety of more options. We didn’t explore all possible options in this series, and I think we could, in theory, spend a lot more time investigating possible options, including some of the combinations of theories and more edge-case versions of particular views, like the WLU variants you lay out.

However, while I think it is plausible EV could avoid some versions of fanaticism that way, it still seems vulnerable to a closely related issue, like the following.

It seems there are actually two places for EV where rounding down or bound-setting needs to happen to avoid issues with particularly risky gambles: (1) for really low probabilities (e.g., 1 in 100 trillion) with really high payoffs, and (2) around the 50% line distinguishing actions that lean net positive from those that are neutral or negative in expectation. Conceptually, these are very similar, but practically there may be different implications for handling them.

While it seems a bounded EV function with steeply declining marginal returns could avoid the fanaticism of (1) (though this itself creates counterintuitive results), it doesn’t seem like this type of solution alone would resolve (2), where the decision point is whether something leans net positive, but possibly only barely. That is, there are many choices about actions where the sign of the action is uncertain, and this applies, among other things, to x-risk interventions that have the possibility of a very large expected utility if the action succeeds. Practically, it seems these types of choices are likely very common for charitable actors.

If, despite a really large expected utility in your bounded function, you don’t think we should always take an action that is only, say, 50.0001% positive in expectation, then you think something has gone awry in EV, and you wind up in a very similar place with regard to being “mugged” by high-value outcomes that are not just unlikely to pay out but almost equally likely to cause harm. And it doesn’t seem that reasonable bounds designed for avoiding really-low-probability but high-EV outcomes will help you avoid this.

To be clear, I haven’t reasoned this out entirely, and I will just preemptively grant it’s possible you could create a different “bound” that would act not just on small probabilities, but also on these edge cases where EU suggests taking these types of gambles. But if you do that, it looks a lot like what you are doing is introducing a difference-making criterion into your decision theory. To the extent you may think this type of modified EU is viable, it is because it mimics the aversion of these other theories to certain types of uncertainty.

Basically, I’m actually not confident that this type of modification should matter much for us. The axiom choices matter here for which theory to put the most weight in but I’m unsure this type of distinction is buying you much practically if, say, after you make them you still end up with a set of theoretical options that look in practice like pure EV vs EV with rounding down vs something like WLU vs something like REU.

EDIT: grammar fix.

I agree it would be hard to avoid something like (2) with views that respect stochastic dominance with respect to the total welfare of outcomes, including background value (not difference-making). That includes maximizing the EV of a bounded increasing function of total welfare, as well as REU and WLU for total welfare, all with respect to outcomes including background value and not difference-making. Tarsney, 2020 makes it hard, and following it, x-risk reduction might be best across those views (Tarsney, 2023, footnote 43, although he says it could depend on the probabilities). See the following footnote for another possible exception with outcome risk aversion, relevant for extinction risk reduction.[1]

If you change the underlying order on outcomes from total welfare, you can also avoid nearly 50-50 actions from dominating things that are more likely to make a positive difference. A steep enough geometric discounting of future welfare[2] or a low enough future cutoff for consideration (a kind of view RP considered here) + excluding invertebrates might work.

I also think difference-making views, as you suggest, would avoid (2).

Basically, I’m actually not confident that this type of modification should matter much for us. The axiom choices matter here for which theory to put the most weight in but I’m unsure this type of distinction is buying you much practically if, say, after you make them you still end up with a set of theoretical options that look in practice like pure EV vs EV with rounding down vs something like WLU vs something like REU.

Fair. This seems right to me.

  1. ^

    Tarsney, 2020 requires a lot of very uncertain background value that's statistically independent from the effects of the intervention. Too little background value could be statistically independent, because a lot of things are jointly determined or correlated across the universe, e.g. sentience, moral weights, and, perhaps most importantly, (the sign of) the average welfare across the universe.

    Conditional on generally horrible welfare across aliens (non-Earth-originating moral patients, generally), we should worry more that our descendants (or Earth-originating moral patients) will have horrible welfare if we don't go extinct.

    Then you just need to be sufficiently risk-averse, and something slightly better than 50-50 that could make things far worse could look bad overall.

    I don't know if this actually works in practice, though. It'll depend on the particulars, and I've ignored our descendants' possible effects on aliens.

  2. ^

    And far away moral patients, if you accept acausal influence.

Difference-making risk aversion (the accounts RP has considered, other than rounding/discounting) doesn't necessarily avoid generalizations of (2), the 50-50 problem. It can

  1. just shift the 50-50 problem to a different place, e.g. 70% good vs 30% bad being neutral in expectation but 70.0001% being extremely good in expectation, or
  2. still have the 50-50 problem, but with unequal payoffs for good and bad, so be neutral at 50-50, but 50.0001% being extremely good in expectation.

To avoid these more general problems within standard difference-making accounts, I think you'd need to bound the differences you make from above. For example, apply a function that's bounded above to the difference, or assume differences in value are bounded above.

On the other hand, maybe having the problem at 50-50 with equal magnitude but opposite sign payoffs is much worse, because our uninformed prior for the value of a random action is generally going to be symmetric around 0 net value.

Proofs below.


Assume you have an action with positive payoff x (compared to doing nothing) with probability p=50.0001%, and negative payoff y=-x otherwise, with x very large. Then

  1. Holding the conditional payoffs x and -x constant but changing the probabilities: at 100% x and 0% y=-x, the act would be good overall. OTOH, it's bad at 0% x and 100% y=-x. By Continuity (or the Intermediate Value Theorem), there has to be some p so that the act that's x with probability p and y=-x with probability 1-p is neutral in expectation. Then we get the same problem at p, and a small probability like 0.0001% over p instead of p can make the action extremely good in expectation, if x was chosen to be large enough.
  2. Holding the probability p=50% constant, if the negative payoff y were actually 0, and the positive payoff still x and large, the act would be good overall. It's bad for y<0 low enough.[1] Then, by the Intermediate Value Theorem, there's some y so that the act that's x with probability 50% and y with probability 50% is neutral in expectation. And again, 50.0001% x and otherwise y can be extremely good in expectation, if x was chosen to be large enough.

Each can be avoided if the adjusted value of x is bounded and the bound is low enough, or x itself is bounded above with a low enough bound.

I think the same would apply to difference-making ambiguity aversion, too.

  1. ^

    y=-x if difference-making risk averse, any y< -x if difference-making risk neutral, and generally for some y<0 if the disvalue of net harm isn't bounded and the function is continuous.

In your conclusion with the casino analogy, I thought you were going to make an explore-exploit argument, eg:

Once we have played all the different games on offer for a while, we will both get better at each game, and work out which games are most profitable to play. Therefore, we should play lots of games to start with to maximise the information gained.

I think without this argument, the conclusion doesn't follow. If we are not planning to later narrow down on the best choices once we have learnt more, the case for spreading our resources now seems a lot less strong to me.

Thanks for the post, Marcus and Peter!

low probabilities can always be offset by corresponding increases in the value of outcomes, traditional expected value theory is vulnerable to the charge of fanaticism

I agree with this in theory, but do you think it happens in practice? For a value distribution to be plausible, it should have a finite expected value, which requires expected value density (probability density times value) to decrease with value for large values. David Thorstad calls this rapid diminution (see section 4 of The scope of longtermism).

I would also be curious to know whether fanaticism is mainly responsible for your moving away from pure expected utility maximisation. I personally do not consider fanaticism problematic even in theory, but I do not think it comes up in practice either. All the other problems attributed to expected utility maximisation only show up if one postulates the possibility of unbounded or infinite value, which I do not think makes sense. For sufficiently large values, we have perfect evidential symmetry / absolute simple cluelessness with respect to the outcomes of any pair of actions we compare, so the counterfactual value stemming from such large values is 0.

Quantify, Quantify, Quantify

I am glad you want to continue investigating cause prioritisation in a quantitative way!

Transparent Decision-Making

[...] we’re in the process of setting up such decision procedures and will report back what we can about how they’re operating

I think it would be great if you did report back. The ratio between decision-making transparency and funding of EA-aligned organisations working across multiple causes looks low to me, especially that of Open Phil. It would be nice to have RP as a positive example.

In practice, we suspect that a good rule of thumb is to spread our bets across our options.

This generally makes sense to me too. However, given RP's views on welfare ranges, which imply the best animal welfare interventions are orders of magnitude more cost-effective at increasing welfare than GiveWell's top charities (which may even decrease welfare accounting for the meat-eater problem), I am confused about why RP is still planning to invest significant resources in global health and development. RP's funding needs for this area are 2.6 M$, 74.3 % (= 2.6/3.5) of the funding needs of 3.5 M$ for animal welfare.

Maybe a significant fraction of RP's team believes non-hedonic benefits to be a major factor? This would be surprising to me, as it would go against the conclusions of a post from RP's moral weight project sequence. "We argue that even if hedonic goods and bads (i.e., pleasures and pains) aren't all of welfare, they’re a lot of it. So, probably, the choice of a theory of welfare will only have a modest (less than 10x) impact on the differences we estimate between humans' and nonhumans' welfare ranges". Moreover, for GiveWell's top charities (or similarly cost-effective interventions) to be more cost-effective than the best animal welfare interventions, non-hedonic benefits would not only have to account for a major fraction of the benefits, but also be negatively correlated with the hedonic benefits? Presumably, improving animal welfare also has many non-hedonic benefits (e.g. fewer violations of basic needs).

This isn’t perfect or unobjectionable. But nothing is. RP will continue to do its best to make these decisions as transparently as we can, learning from our mistakes and continuing to try to advance the cause of improving the world.

That is the spirit. Thanks for all your work!

Hey Vasco, thanks for the thoughtful reply.

I do find fanaticism problematic at a theoretical level since it suggests spending all your time and resources on quixotic quests. I would go one further and say I think if you have a series of axioms and it proposes something like fanaticism, this should at least potentially count against that combination of axioms. That said, I definitely think, as Hayden Wilkinson pointed out in his In Defence of Fanaticism paper, there are many weaknesses with alternatives to EV.

Also, the idea that fanaticism doesn’t come up in practice doesn’t seem quite right to me. On one level, yeah, I’ve not been approached by a wizard asking for my wallet and do not expect to be. But I'm also not actually likely going to be approached by anyone threatening to money-pump me (and even if I were I could reject the series of bets), and this is often held as a weakness of EV alternatives or certain sets of beliefs. On another level, in some sense, to the extent I think we can say fanatical claims don’t come up in practice, it is because we’ve already decided it’s not worth pursuing them and discount the possibility, including the possibility of going looking for actions that would be fanatical.* Within the logic of EV, even if you thought there weren’t any ways to get the fanatical result with ~99% certainty, it would seem you’d need to be ~100% certain to fully shut the door on at least expending resources seeing if it’s possible you could get the fanatical option. To the extent we don’t go around doing that, I think it’s largely because we are practically rounding down those fanatical possibilities to 0 without consideration (to be clear, I think this is the right approach).

All the other problems attributed to expected utility maximisation only show up if one postulates the possibility of unbounded or infinite value, which I do not think makes sense

I don’t think this is true. As I said in response to Michael St. Jules in the comments, EV maximization (and EV with rounding down unless it’s modified here too) also argues for a kind of edge-case fanaticism, where provided a high enough EV if successful you are obligated to take an action that’s 50.000001% positive in expectation even if the downside is similarly massive.

It’s really not clear to me the rational thing to do is to consistently bet on actions that would impact a lot of possible lives but have, say, a ~0.0001% chance of making a difference, are net positive in expectation, but have a ~49.999999% chance of causing lots of harm. This seems like a problem even within a finite and bounded utility function for pure EV.

I am confused about why RP is still planning to invest significant resources in global health and development… Maybe a significant fraction of RP's team believes non-hedonic benefits to be a major factor?

I’ve not polled internally but I don’t think the non-hedonic benefits issue is a driving force inside RP. Speaking for myself, I do think hedonism makes up more than half of what makes things valuable, at least in part for the reasons outlined in that post.

The reasons we work across areas in general are because of differences in the amount of money in the areas, the number of influenceable actors, the non-fungibility of the resources in the spaces (both money and talent), and moral and decision-theoretic uncertainty.

In this particular comparison case of GHD and AW, there’s hundreds of millions more of plausibly influenceable dollars in the GHD space than in the AW space. For example, GiveWell obviously isn’t going to shift their resources to animal welfare, but they still move a lot of money and could do so more effectively in certain cases. GiveWell alone is likely larger than all of the farm animal welfare spending in the world by non-governmental actors combined, and that includes a large number of animal welfare actors that I think it’s not plausible to affect with research. Further, I think most people who work in most spaces aren’t “cause neutral” and, for example, the counterfactual for our GHD researchers isn’t being paid by RP to do AW research that influences even a fraction of the money they could influence in GHD.

Additionally, you highlight that AW looks more cost-effective than GHD, but you did not note that AMF looked pretty robustly positive across different decision theories, and this was not true of, say, any of the x-risk interventions we considered in the series or some of the animal interventions. So, one additional reason to do GHD work is the robustness of the value proposition.

Ultimately, though, I’m still unsure about what the right overall approach is to these types of trade-offs and I hope further work from WIT can help clarify how best to make these tradeoffs between areas.

*A different approach to resisting this conclusion is to assert a kind of claim that you must drop your probability in claims of astronomical value, and that this always balances out increases in claims of value such that it's never rational within EV to act on these claims. I'm not certain this is wrong but, like with other approaches to this issue, within the logic of EV it seems you need to be at ~100% certainty that this is correct to not pursue fanatical claims anyway. You could say in reply that the rules of EV reasoning don't apply to claims about how you should reason about EV itself, and maybe that's right and true. But these sure seem like patches on a theory with weaknesses, not clear truths anyone is compelled to accept on pain of being irrational. Kludges and patches on theories are fine enough. It's just not clear to me this possible move is superior to, say, just biting the bullet that you need to do rounding down to avoid this type of outcome.

Thanks for the reply, Marcus!

I do find fanaticism problematic at a theoretical level since it suggests spending all your time and resources on quixotic quests.

To clarify, fanaticism would only suggest pursuing quixotic quests if they had the highest EV, and I think this is very unlikely.

Also, the idea that fanaticism doesn’t come up in practice doesn’t seem quite right to me. On one level, yeah, I’ve not been approached by a wizard asking for my wallet and do not expect to be. But I'm also not actually likely going to be approached by anyone threatening to money-pump me (and even if I were I could reject the series of bets), and this is often held as a weakness of EV alternatives or certain sets of beliefs.

Money-pumping is not so intuitively repelling, but rejecting EV maximisation in principle (I am fine with rejecting it in practice for instrumental reasons) really leads to bad actions. If you reject EV maximisation, you could be forced to counterfactually create arbitrarily large amounts of torture. Consider these actions:

  • Action A. Prevent N days of torture with probability 100 %, i.e. prevent N days of torture in expectation.
  • Action B. Prevent 2*N/p days of torture with probability p, i.e. prevent 2*N days of torture in expectation.

Fanatic EV maximisation would always support B, thus preventing N (= 2*N - N) days of torture relative to A. I think rejecting fanaticism would imply picking A over B for a sufficiently small p, in which case one could be forced to counterfactually create arbitrarily many days of torture (for an arbitrarily large N).
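To make the arithmetic explicit (a toy sketch; the numbers and the rounding-down threshold are arbitrary placeholders):

```python
N = 1_000          # days of torture at stake (arbitrary)
p = 1e-12          # success probability of action B (arbitrarily small)
threshold = 1e-9   # placeholder rounding-down threshold

ev_A = 1.0 * N                         # action A: prevents N days for sure
ev_B = p * (2 * N / p)                 # action B: 2*N days prevented in expectation
ev_B_rounded = (0 if p < threshold else p) * (2 * N / p)

print(ev_A, ev_B, ev_B_rounded)  # 1000.0, ~2000, 0 -> rounding down picks A, forgoing ~N expected days
```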

A different approach to resisting this conclusion is to assert a kind of claim that you must drop your probability in claims of astronomical value

I believe this is a very sensible approach. I recently commented that:

[...] I think it is often the case that people in EA circles are sensitive to the possibility of astronomical upside (e.g. 10^70 lives), but not to astronomically low chance of achieving that upside (e.g. 10^-60 chance of achieving 0 longterm existential risk). I explain this by a natural human tendency not to attribute super low probabilities for events whose mechanics we do not understand well (e.g. surviving the time of perils), such that e.g. people would attribute similar probabilities to a cosmic endowment of 10^50 and 10^70 lives. However, these may have super different probabilities for some distributions. For example, for a Pareto distribution (a power-law), the probability density of a given value is proportional to "value"^-(alpha + 1). So, for a tail index of alpha = 1, a value of 10^70 is 10^-40 (= 10^(-2*(70 - 50))) as likely as a value of 10^50. So intuitions that the probability of 10^50 value is similar to that of 10^70 value would be completely off.

One can counter my particular example above by arguing that a power law is a priori implausible, and that we should use a more uninformative prior like a loguniform distribution. However, I feel like the choice of the prior would be somewhat arbitrary. For example, the upper bound of the prior loguniform distribution would be hard to define, and would be the major driver of the overall expected value. I think we should proceed with caution if prioritisation is hinging on decently arbitrary choices informed by almost no empirical evidence.

So I agree fanaticism can be troubling. However, just in practice (e.g. due to overly high probabilities of large upside), not in principle.

I don’t think this ["All the other problems attributed to expected utility maximisation only show up if one postulates the possibility of unbounded or infinite value, which I do not think makes sense"] is true. As I said in response to Michael St. Jules in the comments, EV maximization (and EV with rounding down unless it’s modified here too) also argues for a kind of edge-case fanaticism, where provided a high enough EV if successful you are obligated to take an action that’s 50.000001% positive in expectation even if the downside is similarly massive.

It’s really not clear to me the rational thing to do is to consistently bet on actions that would impact a lot of possible lives but have, say, a ~0.0001% chance of making a difference, are net positive in expectation, but have a ~49.999999% chance of causing lots of harm. This seems like a problem even within a finite and bounded utility function for pure EV.

I think these cases are much less problematic than the alternative. In the situations above, one would still be counterfactually producing arbitrarily large amounts of welfare by pursuing EV maximisation. By rejecting it, one could be forced to counterfactually produce arbitrarily large amounts of torture. In any case, I do not think situations like the above are found in practice?

I’ve not polled internally but I don’t think the non-hedonic benefits issue is a driving force inside RP. Speaking for myself, I do think hedonism makes up more than half of what makes things valuable, at least in part for the reasons outlined in that post.

Thanks for clarifying!

The reasons we work across areas in general are because of differences in the amount of money in the areas, the number of influenceable actors, the non-fungibility of the resources in the spaces (both money and talent), and moral and decision-theoretic uncertainty.

Nice context! I would be curious to see a quantitative investigation of how much RP should be investing in each area accounting for the factors above, and the fact that the marginal cost-effectiveness of the best animal welfare interventions is arguably much higher than that of the best GHD interventions. Investing in animal welfare work could also lead to more outside investment (of both money and talent) in the area down the line, but I assume you are already trying to account for this in your allocation.

In this particular comparison case of GHD and AW, there’s hundreds of millions more of plausibly influenceable dollars in the GHD space than in the AW space. For example, GiveWell obviously isn’t going to shift their resources to animal welfare, but they still move a lot of money and could do so more effectively in certain cases.

I wonder how much of GiveWell's funding is plausibly influenceable. Open Phil has been one of its major funders, is arguably cause neutral, and is open to being influenced by Rethink, having at least partly funded (or was it ~fully funded?) Rethink's moral weight project. From my point of view, if people at Rethink generally believe the best AW interventions increase welfare much more cost-effectively than GiveWell's top charities, I would guess influencing Open Phil to spend less on GHD and more on AW would be a quite cost-effective endeavour.

One important reason I am less enthusiatic about GHD is that I am confused about whether saving/extending lives is beneficial/harmful. I recently commented that:

I think this ["[Rethink's] cost-effectiveness models include only first-order effects of spending on each cause. It’s likely that there are interactions between causes and/or positive and negative externalities to spending on each intervention"] is an important point. The meat-eater problem may well imply that life-saving interventions are harmful. I estimated it reduces the cost-effectiveness of GiveWell's top charities by 8.72 % based on the suffering linked to the current consumption of poultry in the countries targeted by GiveWell, adjusted upwards to include the suffering caused by other farmed animals. On the one hand, the cost-effectiveness reduction may be lower due to animals in low income countries generally having better lives than broilers in a reformed scenario. On the other, the cost-effectiveness reduction may be higher due to future increases in the consumption of farmed animals in the countries targeted by GiveWell. I estimated the suffering of farmed animals globally is 4.64 times the happiness of humans globally, which suggests saving a random human life leads to a nearterm increase in suffering.

Has the WIT considered analysing under which conditions saving lives is robustly good after accounting for effects on farmed animals? This would involve forecasting the consumption and conditions of farmed animals (e.g. in the countries targeted by GiveWell). Saving lives would tend to be better in countries whose peak and subsequent decline of the consumption of factory-farmed crayfish, crabs, lobsters, fish, chicken and shrimp happened sooner, or in countries which are predicted to have good conditions for these animals (which I guess account for most of the suffering of farmed animals).

Ideally, one would also account for effects on wild animals. I think these may well be the major driver of the changes in welfare caused by GiveWell's top charities, but they are harder to analyse due to the huge uncertainty involved in assessing the welfare of wild animals.

You said:

Additionally, you highlight that AW looks more cost-effective than GHD, but you did not note that AMF looked pretty robustly positive across different decision theories, and this was not true of, say, any of the x-risk interventions we considered in the series or some of the animal interventions. So, one additional reason to do GHD work is the robustness of the value proposition.

For reasons like the ones I described in my comment just above (section 4 of Maximal cluelessness has more), I actually think AW interventions, at least ones which mostly focus on improving the conditions of animals (as opposed to reducing consumption), are more robustly positive than x-risk or GHD interventions.

Ultimately, though, I’m still unsure about what the right overall approach is to these types of trade-offs and I hope further work from WIT can help clarify how best to make these tradeoffs between areas.

Likewise. Looking forward to further work! By the way, is it possible to donate specifically to a single area of Rethink? If so, would the money flow across areas be negligible, such that one would not be donating in practice to Rethink's overall budget?

Executive summary: The post outlines practical steps Rethink Priorities is taking to address risk and uncertainty in cause prioritization, including using multiple decision theories, quantifying more, and adopting transparent decision procedures.

Key points:

  1. Rethink Priorities will incorporate multiple decision theories like expected value, weighted linear utility, and risk-weighted expected utility into modeling to identify robust options.
  2. More rigorous quantification of probabilities, outcomes, and factors like returns and counterfactual impact will improve reasoning and tradeoffs.
  3. Formal deliberative decision procedures will allow transparent navigation of uncertainty and aggregation of judgment.
  4. Spreading bets and resources across plausible approaches may be advisable given deep rule uncertainty.
  5. Practical improvements in modeling, quantification, and collective decision-making procedures aim to advance impartial cause prioritization.

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
