
[Crossposted at https://vaccha.com/beneficiary-preferences/]

Introduction

When assessing interventions, GiveWell assigns "moral weights" to outcomes such as averting the death of a child or doubling the consumption of someone in extreme poverty. Moral weights represent the relative goodness of outcomes. So if you assign 1 to averting the death of a child and 0.1 to doubling the consumption of someone in extreme poverty, then you are treating averting the death as ten times as good as the consumption doubling.

These moral weights can make a big difference. If you assign a relatively high value to doubling someone's consumption, then you might conclude that GiveDirectly is more cost effective than the Against Malaria Foundation. If you assign a relatively high value to averting a death, then you might conclude the opposite.
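
To make the arithmetic concrete, here is a minimal sketch of how the choice of weights can flip a comparison. All of the cost figures and weights below are made up for illustration; they are not GiveWell's actual estimates.

```python
# Hypothetical cost figures, chosen only to illustrate how moral weights
# change a cost-effectiveness comparison; not GiveWell's actual numbers.
COST_PER_DEATH_AVERTED = 5000         # dollars per death averted (made up)
COST_PER_CONSUMPTION_DOUBLING = 1000  # dollars per consumption doubling (made up)

def value_per_dollar(weight_death, weight_doubling):
    """Value per dollar for a deaths-averting program vs. a cash program."""
    return (weight_death / COST_PER_DEATH_AVERTED,
            weight_doubling / COST_PER_CONSUMPTION_DOUBLING)

# A high weight on averting a death favours the deaths-averting program...
print(value_per_dollar(weight_death=100, weight_doubling=1))  # (0.02, 0.001)
# ...while a low enough weight favours the cash program instead.
print(value_per_dollar(weight_death=3, weight_doubling=1))    # (0.0006, 0.001)
```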

Coming up with moral weights is hard, so GiveWell funded a study into the preferences of those who could be affected by some of their top-ranked interventions.[1] The idea seems to be that if those affected by an intervention would assign a particular weight to, say, averting the death of a child, then we should assign that same weight to averting the death of a child when assessing the intervention. Or maybe the moral weights we use should at least be influenced in part by the moral weights that would be chosen by the potential beneficiaries.

But it's unclear what exactly this amounts to or what its justification is. So in this post I'll consider three ways in which you might let your moral weights be guided by beneficiary preferences, what if anything could justify each approach to moral weights, and what some challenges to each approach are. At the end, I'll briefly consider how you could take beneficiary preferences into account even if they don't affect the moral weights you use.

Approaches to moral weights

Person-relative

On the person-relative approach, you use different moral weights when assessing an intervention's impact on each person depending on that person's preferences. So if there is an intervention that affects both me and you, someone taking this approach would use the moral weights that match my preferences when assessing its impact on me, and use the (possibly different) moral weights that match your preferences when assessing its impact on you.
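
In bookkeeping terms, this amounts to something like the following sketch, in which each person's outcomes are scored with weights derived from that person's own preferences before anything is summed. The people, weights, and impacts are all hypothetical.

```python
# Person-relative sketch: each affected person's outcomes are valued using
# moral weights derived from that person's own preferences (all numbers made up).
person_weights = {
    "me":  {"death_averted": 100, "consumption_doubling": 1},
    "you": {"death_averted": 40,  "consumption_doubling": 1},
}

# What the intervention does for each person (hypothetical impacts).
impacts = {
    "me":  {"consumption_doubling": 1},  # my consumption is doubled
    "you": {"death_averted": 0.001},     # your mortality risk falls slightly
}

total_value = sum(
    person_weights[person][outcome] * amount
    for person, outcomes in impacts.items()
    for outcome, amount in outcomes.items()
)
print(total_value)  # 1*1 + 40*0.001 = 1.04
```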

I do think there is something intuitive about this approach. The core idea seems to be that your preferences determine the impact that things have on your well-being, so if you want to maximize someone's well-being then you should try to maximize their preference-satisfaction. But I think there are a few issues with this approach.

The first issue is just a limitation: many of those affected by interventions lack any relevant preferences. For example, one question GiveWell wanted to answer was how much to value averting the death of a child under the age of five compared to averting the death of an adult. But the researchers, understandably, never interviewed children under the age of five to figure out what their preferences are, since they aren't sophisticated enough to have any relevant preferences.

The researchers did look at the preferences of adults in the children's community about saving children compared to saving adults. But if you adjust the moral weights of outcomes that affect one person as a result of someone else's preferences, you aren't taking the person-relative approach. Instead, you're probably taking the community-relative approach or the same-for-all approach, which I consider in the next two subsections.

The other two issues with this approach have to do with the motivation for it that I presented above. I think the core claim is not true generally, and even if it were true it wouldn't fully support this approach.

First, it's not true in general that someone's well-being will be maximized by satisfying their current preferences. For example, suppose that I discount my future at a high rate and would prefer a small benefit now to a massive benefit later. Despite that preference, I'd be better off with the massive benefit later. Or suppose that I have an irrational fear of death and would prefer even a second of extended life to any increase in happiness. Despite that preference, I'd be better off having my happiness significantly increased.[2]

Second, even if your preferences at a time completely determine the impact that things at that time have on your well-being, it doesn't follow that your preferences determine how important your well-being is compared to other things, such as the length of your life. But this is one of the questions that GiveWell wanted to answer.[3] Of course, you might think that your preferences determine both, but it's worth flagging that to defend this claim it's not enough to defend the more familiar claim that your preferences at a time determine the impact that things at that time have on your well-being.

Community-relative

On the community-relative approach, you use different moral weights when assessing an intervention's impact on each community depending on that community's preferences. Those in the community might have conflicting preferences, so you'll need to have some way of averaging them.
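
As a sketch of the aggregation step (the responses and the choice of a simple mean are both just for illustration):

```python
from statistics import mean

# Hypothetical stated weights for "averting one death", each expressed in
# units of "doubling one person's consumption".
community_responses = [50, 80, 120, 200]

# One simple aggregation rule: use the mean response as the community's weight.
community_weight = mean(community_responses)
print(community_weight)  # 112.5
```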

The community-relative approach avoids one limitation of the person-relative approach. Even if some people in the community, such as young children, lack any relevant preferences, this approach still offers guidance on what moral weights to assign to outcomes that affect them. But I think there's not much to be said for it beyond that. I see two main issues with it.

First, unlike with the person-relative approach, it just seems counterintuitive on its face. Why would the preferences of people in a child's community, of whom the child might be completely unaware, determine how bad it is for the child to die? That doesn't make a lot of sense to me. My preferences can affect the value of things for me, but I don't see how they could affect the value of things for you, especially if you have no clue that I exist and don't care about what I think.

Second, it is not obvious how to average people's preferences when some people have fanatical views. For example, many of the people studied claimed that averting even a single death was preferable to any number of cash transfers. So if you naively average these people's preferences, then you'll end up assigning an infinite value to averting a death and a finite value to increasing someone's consumption. This is true even if only a single person in the community has a fanatical view.[4]

There are a few ways you could try to deal with the problem of fanaticism. First, you could take the median preference instead of the mean preference. This might help in the above case, but it won't work in every case. There is no median of infinity and negative infinity, so if people are fanatical at both extremes then the median will be undefined. Also, if a majority are fanatical, then even the median preference might still be infinite. But if only 51% have fanatical views, we might not want our moral weights to be determined entirely by the fanatics.
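
A small numerical sketch of the problem (hypothetical responses, expressed as "consumption doublings per death averted"):

```python
import math
from statistics import median

# One fanatical respondent says no amount of cash outweighs averting a death.
responses = [50, 80, 120, math.inf]

print(sum(responses) / len(responses))  # inf -- one fanatical response dominates the mean
print(median(responses))                # 100.0 -- the median is robust here...

# ...but not always: fanatics at both extremes leave the median undefined,
# and a fanatical majority makes the median itself infinite.
print(median([-math.inf, -math.inf, math.inf, math.inf]))  # nan
print(median([80, math.inf, math.inf, math.inf]))          # inf
```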

Second, you could put a cap on extreme views. For example, in the case of averting a death versus cash transfers, you could pretend that no one prefers averting a death to, say, $10,000,000 in cash transfers. But it's not obvious how to justify drawing the line at one point as opposed to another. If you thought that there was an objective fact about the correct moral weights and that you knew it, then you could use that knowledge to help you draw the line. But if you already have this knowledge, why bother asking people what their preferences are in the first place? It's also not obvious that there always should be a cap on extreme views. Maybe the fanatics are right about some things.
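
Continuing the hypothetical sketch above, a cap is easy to implement; the hard part, as just noted, is justifying where it goes.

```python
import math
from statistics import mean

responses = [50, 80, 120, math.inf]  # same hypothetical responses as above

CAP = 1000  # an arbitrary ceiling -- justifying this number is the hard part
capped = [min(r, CAP) for r in responses]

print(mean(capped))  # 312.5 -- finite, but sensitive to where the cap is drawn
```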

Third, you could look at people's revealed preferences instead of their stated preferences, and hope that those who state fanatical views don't actually act fanatically. But although there may be few people who act fanatically, it's not obvious that there are none. And even if the move to revealed preferences deals with the problem of fanaticism, it may introduce other problems. For example, if you were to ask me, I would claim that my far future matters just as much as my near future, but I probably don't act entirely as if it does. But I wouldn't want someone to discount my far future as a result of this revealed preference.

Same-for-all

On the same-for-all approach, you use the same moral weights when assessing every intervention, regardless of the preferences of those affected by it. Nonetheless, when coming up with the universal set of moral weights that you will use, you might take into account people's preferences. This seems to be the approach that GiveWell plans to take:

In the future, we expect to have a single set of moral weights and that choosing this set will involve determining what moral weights would be implied by a variety of approaches or worldviews and then taking a weighted average of those views.
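
Purely as an illustration of the arithmetic this could involve (the worldviews, implied weights, and weights on the worldviews below are all invented, not GiveWell's):

```python
# Hypothetical "weighted average of worldviews": each worldview implies a moral
# weight for averting a death (in consumption doublings), and each worldview
# is itself given a weight.
worldviews = {
    # worldview: (implied weight for averting a death, weight on the worldview)
    "beneficiary preferences":   (120, 0.4),
    "donor preferences":         (60,  0.3),
    "a philosophical framework": (200, 0.3),
}

combined = (sum(w * p for w, p in worldviews.values())
            / sum(p for _, p in worldviews.values()))
print(combined)  # 0.4*120 + 0.3*60 + 0.3*200 = 126.0
```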

To the extent that this approach involves averaging people's preferences, it faces the problem of fanaticism that I discussed in the previous subsection.

If you take this approach, it's unclear what justifies taking into account people's preferences. One possibility is that you think that there is an objective fact about the correct moral weights to use and that people's preferences are evidence of what the correct moral weights are.

Another possibility is that you think there are no objectively correct moral weights, but that we should (maybe because of this?) use the moral weights that best fit the preferences of everyone in the world. But if you are going to be a relativist about moral weights, it's unclear why you would say that they are relative to humanity as a whole as opposed to relative to each person or each community, which on the face of it seem like more plausible forms of relativism.

For its part, GiveWell's motivation for supporting the beneficiary preferences study seems in part to be that they were already taking into account people's preferences when setting their moral weights---it was just the preferences of those in higher-income countries, where most of this sort of research had been done before. Given this, I see how it makes sense for them to make sure that they aren't ignoring the preferences of the very people who are affected by the interventions that they assess. But there's still the deeper question of whether we should be looking at anyone's preferences in the first place to determine moral weights.

Weighting preferences beyond impact

So far I've been considering how beneficiary preferences could affect your moral weights, which represent the relative value of various outcomes. But even if you don't think that beneficiary preferences should affect your moral weights, you might think that we should take beneficiary preferences into account when prioritizing interventions for some other reason. Maybe outcomes aren't the only thing that matters. For example, you might think that we should respect people's autonomy, even if doing so will lead to somewhat worse outcomes.

If you want to do this, one obvious challenge is how to balance respect for people's autonomy (or whatever) with the expected value of outcomes.
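
One naive way to make the balancing problem explicit is to score interventions as a weighted sum of expected outcome value and some measure of how well they respect autonomy; all of the scores and the weight below are invented, and choosing them is precisely the challenge.

```python
AUTONOMY_WEIGHT = 0.3  # how much respecting autonomy counts relative to outcomes

def score(expected_value, autonomy_score):
    """Weighted sum of expected outcome value and a 0-1 autonomy score."""
    return (1 - AUTONOMY_WEIGHT) * expected_value + AUTONOMY_WEIGHT * autonomy_score

print(score(expected_value=1.0, autonomy_score=0.2))  # e.g. bed nets (made-up scores)
print(score(expected_value=0.7, autonomy_score=1.0))  # e.g. cash (made-up scores)
```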

A different challenge, specifically for trying to favour interventions that respect autonomy, is that it's not always obvious which intervention best respects a community's autonomy. For example, suppose that most people in a community strongly prefer interventions that avert deaths to interventions that distribute cash. Would their autonomy be best respected by distributing bed nets (which will avert many deaths, which they want) or by distributing cash (which they can do whatever they want with)?

Conclusion

I've considered three approaches to moral weights and how each might be informed by beneficiary preferences. I've focused on challenges that each approach faces. But to be clear, I have taken no stand on whether we should or should not take beneficiary preferences into account when setting our moral weights, or whether beneficiary preferences should be taken into account in some other way. Maybe the challenges can be overcome.

What if the challenges can't be overcome and we shouldn't take beneficiary preferences into account when setting our moral weights? How should we come up with our moral weights, then? Unfortunately, I don't have anything more helpful to say than "do philosophy". But as GiveWell points out, "philosophers have not done much work to consider how best to assign quantitative value to different kinds of outcomes". So I think this is an important and neglected topic, and I hope that more people work on it in the future.


  1. I will follow the study's authors in talking in terms of preferences, but I think 'values' or 'moral beliefs' would probably be more accurate terms. ↩︎

  2. GiveWell also flags this as a possible concern: "Preferences may not maximize well-being: Even if people perfectly understood the probability and information components of trading off income and mortality risk, they might not be able to reliably anticipate what would maximize their well-being, all things considered." ↩︎

  3. To be precise, here's how GiveWell frames an example of the sort of question that they wanted to answer: "how much should we value averting the death of a one-year-old relative to doubling the income of an extremely poor household?" But the general question, as I see it, is how much to value extending a life (for example, the life of a one-year-old) compared to increasing someone's well-being (for example, by increasing their income). ↩︎

  4. This problem is similar in some ways to the fanaticism problem for some approaches to dealing with moral uncertainty. See MacAskill, Bykvist, and Ord's Moral Uncertainty, chapter 6. ↩︎

Comments

Thank you for your post! I am an IDinsight researcher who was heavily involved in this project and I will share some of my perspectives (if I'm misrepresenting GiveWell,  feel free to let me know!):

  • My understanding is GiveWell wanted multiple perspectives to inform their moral weights, including a utilitarian perspective of respecting beneficiaries'/recipients' preferences, as well as others (examples here). Even though beneficiary preferences may not be the only factor, it is an important one and one where empirical evidence was lacking before the study, which was why GiveWell and IDinsight decided to do it.
    • Also, the overall approach is that, because it's unrealistic to understand every beneficiary's preferences and target aid at the personal level, we and GiveWell had to come up with aggregate numbers to be used across all GiveWell top charities. (In the future, there may be the possibility of breaking it down further, e.g. by geography, as new evidence emerges. Also, note that we focus on preferences over outcomes -- saving lives vs. increasing income -- rather than interventions, and I explain here why we and GiveWell think that's a better approach given our purposes.)
  • My understanding is that ideally GiveWell would like to know children's preferences (e.g. value of statistical life) if that was valid (e.g. rational) and could be measured, but in practice it could not be done, so we tried to use other things as proxies for it, e.g.
    • Measuring "child VSL" as their parents'/caretakers' willingness-to-pay (WTP) to reduce the children's mortality (rather than their own, which is the definition of standard VSL)
    • Taking adults' VSL and adjusting it by the relative values adults place on individuals of different ages (there were other ).
    • (Something else that one could do here is to estimate own VSL (WTP) to reduce own mortality as a function of age. We did not have enough sample to do this. If I remember correctly, studies that have looked at it had conflicting evidence on the relationship between VSL and age.)
    • Obviously none of these is perfect -- we have little idea how close our proxy is to the true object of interest, children's WTP to reduce their own mortality -- if that is a valid object at all, and what to do if not (which gets into tricky philosophical issues). But both approaches we tried gave a higher value for children's lives than for adult lives, so we concluded it would be reasonable to place a higher value on children's lives if donors'/GiveWell's moral weights are largely/solely influenced by beneficiaries. But you are right that the philosophical foundation isn't solid. (Within the scope of the project we had to optimize for informing practical decisions, and we are not professional philosophers, but I agree overall that more discussion of this by philosophers would be helpful.)
  • Finally, another tricky issue that came up was -- as you mentioned as well -- what to do with "extreme" preferences (e.g. always choosing to save lives). Two related questions that are more fundamental are
    • If we want to put some weight on beneficiaries' views, should we use "preferences" (in the sense of what they prefer to happen to themselves, e.g. VSL for self) OR "moral views" (what they think should happen to their community)? For instance, people seem to value lives a lot more highly in the latter case (although one nontrivial driver of the difference is that questions on moral views were framed without uncertainty -- which was a practicality we couldn't get around, as including it in an already complex hypothetical scenario trading off lives and cash transfers seemed extremely confusing to respondents).
    • In the case where you want to put some weight on their moral views (and I don't think that would be consistent with utilitarianism -- not sure what philosophical view that is, but I think it's certainly not unreasonable to put some weight on it), what do you do if you disagree with their view? E.g. I probably wouldn't put weight on views that were sexist or racist; what about views that say you should value saving lives above increasing income no matter the tradeoff?
    • I don't have a good answer, and I'm really curious to see philosophical arguments here. My guess is that respecting recipient communities' moral views would be appealing to some in the development sector, and I'm wondering what should be done when that comes into conflict with other goals, e.g. maximizing their utility / satisfying their preferences.