The summary and introduction can be read below. The full paper is available here.
This working paper was produced as part of the Happier Lives Institute's 2022 Summer Research Fellowship.
Summary
Given the current state of our moral knowledge, it is entirely reasonable to be uncertain about a wide range of moral issues. Hence, it is surprising how little attention contemporary philosophers paid to moral uncertainty until the past decade. In this paper, I have considered the prima facie plausible suggestion that appropriateness under moral uncertainty is a matter of dividing one’s resources between the moral theories in which one has credence, allowing each theory to use its resources as it sees fit. I have gone on to develop this approach into a fully-fledged Property Rights Theory, sensitive to many of the complications that we face in making moral decisions over time. This Property Rights Theory deserves to take its place as a leading theory of appropriateness under conditions of moral uncertainty.
Introduction
Distribution: Imagine that some agent J is devoting her life to ‘earning to give’: J is pursuing a lucrative career in investment banking and plans to donate most of her lifetime earnings to charity. According to the moral theory Thealth in which J has 60% credence, by far and away the best thing for her to do with her earnings is to donate them to global health charities, and the next best thing is to donate them to charities that benefit future generations by fighting climate change. On the other hand, according to the moral theory Tfuture in which J has 40% credence, by far and away the best thing for her to do with her earnings is to donate them to benefitting future generations, and the next best thing is to donate them to global health charities.2 On all other issues, Thealth and Tfuture are in total agreement: for instance, they agree on where J should work, what she should eat, and what kind of friend she should be. They disagree only about which charity J should donate to. Finally, neither Thealth nor Tfuture is risk loving: each theory implies that an $x donation to a charity that the theory approves of is no worse than a risky lottery over donations to that charity whose expected donation is $x. In light of her moral uncertainty, what is it appropriate for J to do with her earnings?3
According to one prima facie plausible proposal, it is appropriate for J to donate 60% of her earnings to global health charities and 40% of them to benefitting future generations – call this response Proportionality. Despite Proportionality’s considerable intuitive appeal, none of the theories of appropriateness under moral uncertainty thus far proposed in the literature support this simple response to Distribution.
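Proportionality's recommendation in Distribution is simple arithmetic: each theory receives a share of the budget equal to the agent's credence in it. As an illustration only (the function name and the $100,000 budget are my own, not from the paper), the rule can be sketched as:

```python
def proportional_allocation(credences, budget):
    """Split a budget across moral theories in proportion to one's credence in each.

    `credences` maps a theory's label to the agent's credence in it;
    the credences are assumed to sum to 1.
    """
    return {theory: credence * budget for theory, credence in credences.items()}

# J's case: 60% credence in T_health, 40% in T_future, with a hypothetical
# $100,000 in lifetime earnings to donate.
allocation = proportional_allocation({"T_health": 0.6, "T_future": 0.4}, 100_000)
# allocation == {"T_health": 60000.0, "T_future": 40000.0}
```

Each theory is then free to direct its share as it sees fit, which is the intuition the Property Rights Theory develops.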
In this paper, I propose and defend a Property Rights Theory (henceforth: PRT) of appropriateness under moral uncertainty, which supports Proportionality in Distribution.4 In §2.1, I introduce the notion of appropriateness. In §2.2, I introduce several of the theories of appropriateness that have been proposed thus far in the literature. In §2.3, I show that these theories fail to support Proportionality. In §§3.1-3.3, I introduce PRT and I demonstrate that it supports Proportionality in Distribution. In §§3.4-3.9, I discuss the details. In §3.10, I extend my characterisation of PRT to cover cases where an agent faces a choice between discrete options, as opposed to resource distribution cases like Distribution.5 In §4, I argue that PRT compares favourably to the alternatives introduced in §2.2. In §5, I conclude.
Acknowledgements: For helpful comments and conversations, I wish to thank Conor Downey, Paul Forrester, Hilary Greaves, Daniel Greco, Shelly Kagan, Marcus Pivato, Michael Plant, Stefan Riedener, John Roemer, Christian Tarsney, and Martin Vaeth. I also wish to thank the Forethought Foundation and the Happier Lives Institute for their financial support.
The arbitrariness ("not really any principled reason") comes from your choice of how to define the community you consider yourself to belong to when setting the reference allocation. In your first comment, you said the world, while in your reply, you said "the rest of my community", which I assume to be narrower (maybe just the EA community?). How do you choose between them? And then why not the whole universe/multiverse, the past and the future? Where do you draw the lines and why? I think some allocations in the world, like those by poor people living in very remote regions, are extremely unlikely for you to affect, except through your impact on things with global scope, like global catastrophe (of course, they don't have many resources, so in practice, it probably doesn't matter whether or not you include them). Allocations in inaccessible parts of the universe are far less likely still for you to affect (except acausally), but not certainly impossible to affect, if you allow the possibility that we're wrong about physical limits. I don't see how you could draw lines non-arbitrarily here.
By risk-neutral total symmetric views, I mean risk-neutral expected value maximizing total utilitarianism and other views with such an axiology (but possibly with other non-axiological considerations), where lives of neutral welfare are neutral to add, better lives are good to add and worse lives are bad to add. Risk neutrality just means you apply the expected value directly to the sum and maximize that, so it allows fanaticism, St. Petersburg problems and the like, in principle.
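To make the risk-neutrality point concrete, here is a hedged sketch (the numbers are invented for illustration): a risk-neutral view ranks options purely by expected total value, so it prefers any long-shot lottery whose expected value exceeds a sure thing's, however small the probability of payoff.

```python
def expected_value(lottery):
    """Expected value of a lottery given as (probability, total-welfare) pairs."""
    return sum(p * v for p, v in lottery)

sure_thing = [(1.0, 10.0)]                      # total welfare of 10 for certain
long_shot = [(0.001, 20_000.0), (0.999, 0.0)]   # tiny chance of a huge total

# Risk neutrality means ranking by expected value alone, so the long shot
# wins here (EV 20 vs 10) -- the structure that opens the door to fanaticism
# and St. Petersburg-style problems.
assert expected_value(long_shot) > expected_value(sure_thing)
```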
Rejecting separability requires rejecting total utilitarianism.
FWIW, I don't think it's unreasonable to reject separability or total utilitarianism, and I'm pretty sympathetic to rejecting both. Why can't I just care about the global distribution, not only about what I can affect? But rejecting separability is kind of weird: one common objection (often aimed at average utilitarianism) is that it makes what you should do depend non-instrumentally on how well off the ancient Egyptians were.