Classical utilitarianism has many advantages as an ethical theory, but it also has many problems. A few of the most important:
- The idea of reducing all human values to a single metric is counterintuitive. Most people care about a range of things, including both their conscious experiences and outcomes in the world. I haven’t yet seen a utilitarian conception of welfare which describes what I’d like my own life to be like.
- Concepts derived from our limited human experiences will lead to strange results when taken to the extremes that utilitarianism takes them to. Even for things which seem robustly good, trying to maximise them will likely give rise to divergence at the tails between our intuitions and our theories, as in the repugnant conclusion.
- Utilitarianism doesn’t pay any attention to personal identity (except by taking a person-affecting view, which leads to worse problems). At an extreme, it endorses the world destruction argument: that, if given the opportunity to kill everyone who currently exists and replace them with beings with greater welfare, we should do so.
- Utilitarianism is post-hoc on small scales; that is, although you can technically argue that standard moral norms are justified on a utilitarian basis, it’s very hard to explain why those particular norms are better than others. In particular, it seems hard to make utilitarianism consistent with caring much more about people close to us than about strangers.
I (and probably many others) think that these objections are compelling, but none of them defeat the core intuition which makes utilitarianism appealing: that some things are good, and some things are bad, and we should continue to want more good things and fewer bad things even beyond the parochial scales of our own everyday lives. Instead, the problems seem like side effects of trying to pin down a version of utilitarianism which provides a precise, complete guide for how to act. Yet I’m not convinced that this is useful, or even possible. So I’d prefer that people defend the core intuition directly, at the cost of being a bit vaguer, rather than defending more specific utilitarian formalisations which have all sorts of unintended problems. Until now I’ve been pointing to this concept by saying things like “utilitarian-ish” or “90 percent utilitarian”. But it seems useful for coordination purposes to put a label on the property which I consider to be the most important part of utilitarianism; I’ll call it “scope-sensitivity”.
My tentative definition is that scope-sensitive ethics consists of:
- Endorsing actions which, in expectation, bring about more intuitively valuable aspects of individual lives (e.g. happiness, preference-satisfaction, etc), or bring about fewer intuitively disvaluable aspects of individual lives (e.g. suffering, betrayal).
- A tendency to endorse actions much more strongly when those actions increase (or decrease, respectively) those things much more.
I hope that describing myself as caring about scope-sensitivity conveys the most important part of my ethical worldview, without implying that I have a precise definition of welfare, or that I want to convert the universe into hedonium, or that I’m fine with replacing humans with happy aliens. You could then ask me which specific scope-sensitive moral theory I subscribe to. But I think that this misses the point: as soon as we start trying to be very precise and complete, we’ll likely run into many of the same problems as utilitarianism. Instead, I hope that this term can be used in a way which conveys a significant level of uncertainty or vagueness, while also being a strong enough position that, if you accept scope-sensitivity, you don’t need to resolve much of that uncertainty or vagueness in order to figure out what to do.
(I say "uncertainty or vagueness" because moral realists are often particularly uncomfortable with the idea of morality being intrinsically vague, and so this phrasing allows them to focus on the uncertainty part: the idea that some precise scope-sensitive theory is true, but we don't yet know which one. Whereas my own position is that it's fine and indeed necessary for morality to be intrinsically imprecise, and so it's hard to draw the line between questions we're temporarily uncertain about, and questions which don't have well-defined answers. From this perspective, we can also think about scope-sensitive ethics as a single vague theory in its own right.)
How does the definition I’ve given address the problems I described above? Firstly, it’s pluralist (within the restrictions of common sense) about what contributes to the welfare of individuals. The three most common utilitarian conceptions of welfare are hedonic theories, desire theories and objective-list theories. Each of these captures something which I care about, and I don’t think we know nearly enough about human minds (let alone non-human minds) to justify taking a strong position on which combination of them constitutes a good life. Scope-sensitivity also allows room for even wider conceptions of welfare: for example, people who think that achieving virtue is the most valuable aspect of life can be scope-sensitive if they try to promote virtue widely.
Secondly, it’s also consistent with pluralism about value more generally. Scope-sensitivity doesn’t require you to care only about welfare; you can value other things, as long as they don’t override the overall tendency to prioritise actions with bigger effects. In particular, unlike utilitarianism, scope-sensitivity is consistent with using non-consequentialist or non-impartial reasoning about most small-scale actions we take (even when we can’t justify why that reasoning leads to the best consequences by impartial standards). Furthermore, it doesn’t require that you endorse welfare-increasing actions because they increase welfare. In addition to my moral preferences about sentient lives, I also have moral preferences about the trajectory of humanity as a whole: as long as humanity flourishing is correlated closely enough with humans flourishing, those motivations are consistent with scope-sensitivity.
Thirdly, scope-sensitivity isn’t rigid. It doesn’t require welfare-maximisation in all cases; instead, specifying a “tendency” rather than a “rule” of increasing welfare allows us to abide by other constraints as well. I think this reflects the fact that a lot of people do have qualms about extreme cases (for which there may not be any correct answers) even when their general ethical framework aims towards increasing good things and decreasing bad things.
I should make two further points about evaluating the scope-sensitivity of existing moral theories. Firstly, I think it’s best interpreted as a matter of degree, rather than a binary classification. Secondly, we can distinguish between “principled” scope-sensitivity (scope-sensitivity across a wide range of scenarios, including implausible thought experiments) and “practical” scope-sensitivity (scope-sensitivity given realistic scenarios and constraints).
I expect that almost all of the people who are most scope-sensitive in principle will be consequentialists. But in practice, non-consequentialists can also be highly scope-sensitive. For example, it may be the case that a deontologist who follows the rule "try to save the world, if it's in danger" is in practice nearly as scope-sensitive as a classical utilitarian, even if they also obey other rules which infrequently conflict with it (e.g. not lying). Meanwhile, some variants of utilitarianism (such as average utilitarianism) also aren’t scope-sensitive in principle, although they may be in practice.
One problem with the concept of scope-sensitivity is that it might induce motte-and-bailey fallacies - that is, we might defend our actions on the basis of scope-sensitivity when challenged, but then in practice act according to a particular version of utilitarianism which we haven't justified. But I actually think the opposite happens now: people are motivated by the intuition towards scope-sensitivity, and then defend their actions by appealing to utilitarianism. So I hope that introducing this concept improves our moral discourse, by pushing people to explicitly make the argument that scope-sensitivity is sufficient to motivate views like longtermism.
Another possibility is that scope-sensitivity is too weak a concept to motivate action - for example, if people claim to be scope-sensitive, but add a few constraints which mean they don’t ever need to act accordingly. But even if scope-sensitivity in principle is broad enough to include such views, hopefully the concept of practical scope-sensitivity identifies a natural cluster of moral views which, if people follow them, will actually make the world a much better place.
A small comment: I really like the term 'scope sensitive', but I worry that it's not easily understandable to people who aren't familiar with the 'scope neglect' bias, which isn't one of the more commonly known biases (e.g. when I search on Google, the first result is a very short Wikipedia article and the third is a LessWrong article). I wonder if 'scale sensitive' might be more immediately understood by the typical person.
On Google's Ngram Viewer, 'scale sensitive' is about 10x more common.
I'm not sure which is better (e.g. 'scope sensitive ethics' sounds nicer to me), but it's worth thinking about more if you want to turn it into a term.
Thanks for this post! This helped clarify a fuzzy intuition I had around utilitarianism, roughly: that some moral positions are obvious (e.g. saving many more people >> saving few), and that utilitarianism is the only reasonable system that gets these important parts right. And that I'm uncertain about all of the messy details, but they don't seem clear or important, so I don't care what the system says about them, and I should follow utilitarianism for everything important.
I much prefer this way of framing it.
Hey Richard, I agree with this, and I like the framing.
I want to add, though, that these are basically the reasons why we created EA in the first place, rather than promoting 'utilitarian charity'. The idea was that people with many ethical views can agree that the scale of effects on people's lives matters, and so it's a point of convergence that many can get behind, while also getting at a key empirical fact that's not widely appreciated (differences in scope are larger than people think).
So, I'd say scope sensitive ethics is a reinvention of EA. It's a regret of mine that we've not done a great job of communicating that so far. It's possible we need to try introducing the core idea in lots of ways to get it across, and this seems like a good one.
This doesn't seem quite right, because ethical theories and movements/ideologies are two different types of things. If you mean to say that scope sensitive ethics is a reinvention of the ethical intuitions which inspired EA, then I'm happy to agree; but the whole point of coining the term is to separate the ethical position from other empirical/methodological/community connotations that EA currently possesses, and which to me also seem like "core ideas" of EA.
Hi Richard,
That makes sense - it could be useful to define an ethical position that's separate from effective altruism (which I've been pushing to be defined as a practical and intellectual project rather than an ethical theory).
I'd be excited to see someone try to develop it, and would be happy to try to help if you do more in this area.
In the early days of EA, we actually toyed with a similar idea, called Positive Ethics - an analogy with positive psychology - which aimed to be the ethics of how to best benefit others, rather than more discussion of prohibitions.
I think my main concern is that I'm not sure there's enough space in public awareness between EA, global priorities research and consequentialism for another field (e.g. I also think it would be better if EA were framed more in terms of 'let's be scope sensitive' than in terms of the other connotations you mention). But it could be interesting to write more about the idea and see where you end up.
PS If you push ahead with this, you might want to frame it as a core ethical intuition in non-utilitarian moral theories as well, rather than presenting it mainly as a more acceptable, watered-down utilitarianism. I think one of the exciting things about scope sensitivity is that it's a moral principle that everyone should agree with, but one which also has potentially radical consequences for how we should act.
Why exactly is this a problem? To me it seems more sensible to recognize our disproportionate partiality toward people close to us as an evolutionary bug rather than a feature. Even though we do care much more about people close to us, this doesn't mean that, on critical reflection, we should regard their interests as overwhelmingly more important than those of strangers (whom we can probably help more cheaply).
The problem is that one man's modus ponens is another man's modus tollens. Lots of people take the fact that utilitarianism says that you shouldn't care about your family more than a stranger as a rebuttal to utilitarianism.
Now, we could try to persuade them otherwise, but what's the point? Even amongst utilitarians, almost nobody gets anywhere near placing as much moral value on a stranger as on a spouse. If there's a part of a theory that is of very little practical use, but is still seen as a strong point against the theory, we should try to find a version without it. That's what I intend scope-sensitive ethics to be.
In other words, we go from "my moral theory says you should do X and Y, but everyone agrees that it's okay to ignore X, and Y is much more important" to "my moral theory says you should do Y", which seems better. Here X is "don't give your family special treatment" and Y is "spend your career helping the world".
My moral intuitions say that there isn’t really an objective way that I should act; however, I do think there are states of the world that are objectively better than others, and that this betterness ordering is determined by whatever the best version of utilitarianism is.
So it is indeed better if I don’t give my family special treatment, but I’m not actually obligated to. There’s no rule in my opinion which says “you must make the world as good as possible”.
This is how I have always interpreted utilitarianism. Not having studied philosophy formally, I’m not sure if this is a common view or if it is seen as stupid, but I feel it allows me to give my family some special treatment whilst also thinking utilitarianism is in some way “right”.
Fair :) I admit I'm apparently unusually inclined to the modus ponens end of these dilemmas.
I think this depends on whether the version without it is internally consistent. But more to the point, the question about the value of strangers does seem practically relevant. It influences, for example, how much you're willing to donate effectively rather than spend on fancy gifts, given the (far?) greater marginal returns of well-being to strangers than to loved ones. Ironically, if we're not impartial, it seems our loved ones are "utility monsters" in a sense. (Of course, you could still have some nonzero partiality while agreeing that the average person doesn't donate nearly enough.)
I find this as troubling as anyone else who cares deeply about their family and friends, certainly. But I'm inclined to think it's even more troubling that other sentient beings suffer needlessly because of my personal attachments... Ethics need not be easy.
There's also the argument that optimal altruism is facilitated by having some baseline of self-indulgence, to avoid burnout, but 1) I think this argument can be taken too far into the realm of convenient rationalization, and 2) this doesn't require any actual partiality baked into the moral system. It's just that partial attachments are instrumentally useful.
On more modest person-affecting views you might not be familiar with, I'd point you to
I also wrote this post defending the asymmetry, and when I tried to generalize the approach to choosing among more than two options, with multiple individuals involved*, I ended up with a soft asymmetry: considering only the interests of possible future people, it would never be worse if they aren't born, but it wouldn't be better either, unless their aggregate welfare were negative.
*using something like the beatpath method discussed in Thomas's paper to get a transitive but incomplete order on the option set
I also looked into something like modelling ethics as a graph traversal problem: you go from option A to option B if the individuals who would exist in A have more interest in B than in A (or if the moral reasons, from the point of view of A, in favour of B outweigh those in favour of A). You then either pick the option you visit the most asymptotically, or accumulate scores on the options as you traverse (depending on the difference in interest between them) and pick the option which dominates asymptotically (also checking multiple starting points).
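To make the first variant a bit more concrete, here's a rough sketch of the random-walk reading of that idea - the options, the interest numbers and the tie-breaking rule below are all invented purely for illustration, and this isn't meant as a description of the beatpath method itself:

```python
import numpy as np

# interest[a][b]: how much interest the individuals who would exist under option a
# have in option b. All numbers here are invented purely for illustration.
interest = np.array([
    [0.5, 0.8, 0.4],   # from option 0's point of view, option 1 looks better than 0
    [0.3, 0.6, 0.7],   # from option 1's point of view, option 2 looks better than 1
    [0.2, 0.5, 0.9],   # from option 2's point of view, no option beats 2
])

def asymptotic_visit_shares(interest, steps=10_000, seed=0):
    """Random walk over options: from the current option, move to a randomly chosen
    option that the current option's population prefers to it (if any exists),
    and record how often each option is visited."""
    rng = np.random.default_rng(seed)
    n = len(interest)
    visits = np.zeros(n)
    current = int(rng.integers(n))
    for _ in range(steps):
        visits[current] += 1
        better = [b for b in range(n) if interest[current, b] > interest[current, current]]
        if better:
            current = int(rng.choice(better))
    return visits / steps

# Option 2 ends up being visited almost all of the time, so it "wins" on this reading.
print(asymptotic_visit_shares(interest))
```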
I'm pretty suspicious about approaches which rely on personal identity across counterfactual worlds; it seems pretty clear that either there's no fact of the matter here, or else almost everything you can do leads to different people being born (e.g. by changing which sperm leads to their conception).
And secondly, this leads us to the conclusion that unless we quickly reach a utopia where everyone has positive lives forever, the best thing to do is to end the world as soon as possible - which I don't see a good reason to accept.
These approaches don't need to rely on personal identity across worlds; either they already "work" even without this (i.e. solve the nonidentity problem) or (I think) you can modify them into wide person-affecting views, using partial injections like the counterpart relations in this paper/EA Forum summary (but dropping the personal identity preservation condition, and using pairwise mappings between all pairs of options instead of for all available options at once).
I don't see how this follows for the particular views I've mentioned, and I think it contradicts what I said about soft asymmetry, which does not rely on personal identity, and which is satisfied by some of the views described in Thomas's paper and by my attempt to generalize the view in my post (I'm not sure about Dasgupta's approach). These views don't satisfy the independence of irrelevant alternatives (most person-affecting views don't), and the option of ensuring everyone has positive lives forever is not practically available to us (except as an unlikely fluke, which an approach that deals with uncertainty appropriately should handle, as in Thomas's paper), so we can't use it to rule out other options.
Even if they did imply this (I don't think they do), the plausibility of the views would be at least a reason to accept the conclusion, right? Even if you have stronger reasons to reject it.
If this is the technical meaning of "in expectation", this brings in a lot of baggage. I think it implicitly means that you value those things ~linearly in their amount (which makes the second statement superfluous?), and it opens you up to Pascal's mugging.
I think it means that there is something which we value linearly, but that thing might be a complicated function of happiness, preference satisfaction, etc.
As a toy example, say that S(x) is some bounded sigmoid function, and my utility function is to maximize E[S(x)]; it's always going to be the case that E[S(x1)] ≥ E[S(x2)] ⇔ x1 ≥ x2, so I am in some sense scope sensitive, but I don't think I'm open to Pascal's mugging. (Correct me if this is wrong though.)
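To put some (entirely made-up) numbers on that toy example - the particular sigmoid, its scale, and the payoffs below are arbitrary choices, not anything implied by the post:

```python
import numpy as np

def S(x):
    """A bounded, strictly increasing sigmoid 'utility of welfare' function.
    The 1/1000 scale factor is an arbitrary choice for this illustration."""
    return 1 / (1 + np.exp(-x / 1000))

# Scope-sensitivity over sure amounts: more welfare always gets a higher utility.
print([round(float(S(x)), 4) for x in [1, 10, 100, 1000, 10_000]])  # strictly increasing

# Pascal's mugging: a tiny probability of an astronomically large payoff.
p = 1e-20
mugging = p * S(1e30) + (1 - p) * S(0)   # expected utility of taking the mugger's deal
modest = S(10)                           # expected utility of a small sure improvement
print(mugging < modest)                  # True: bounded utility caps the mugger's leverage
```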
This seems right to me.
Yeah, I have no quibbles with this. FWIW, I personally didn't interpret the passage as saying this, so if that's what's meant, I'd recommend reformulating.
(To gesture at where I'm coming from: "in expectation bring about more paperclips" seems much more specific than "in expectation increase some function defined over the number of paperclips"; and I assumed that this statement was similar, except pointing towards the physical structure of "intuitively valuable aspects of individual lives" rather than the physical structure of "paperclips". In particular, "intuitively valuable aspects of individual lives" seems like a local phenomenon rather than something defined over world-histories, and you kind of need to define your utility function over world-histories to represent risk-aversion.)
That makes sense; your interpretation does seem reasonable, so perhaps a rephrase would be helpful.
I learned a lot from this post!
My extremely basic intuition still has trouble distinguishing between utilitarianism and scope-sensitivity in the context of moral justification. Most examples of scope-sensitivity highlight how we should be aware of better actions to take when both options are good (such as the birds and oil spill example), but don't explain the concept in terms of a "greater good" approach (such as the trolley problem).
Does scope-sensitivity apply to situations where the inverse (harming the fewest people) is in play? I'd love some guidance here.
Thanks for this, an interesting proposal.
Do you have a view on how this approach might compare with having a strong credence in utilitarianism and smaller but non-zero credences in other moral theories, and then acting in a way that factors in moral uncertainty, perhaps by maximising expected choiceworthiness (MEC)?
I might be off the mark, but it seems there are some similarities in that MEC can avoid extreme situations and be pluralist, although it might be a bit more prescriptive than you would like.
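In case a concrete illustration of MEC is useful, here's a toy calculation - the theories, credences, action names and choiceworthiness numbers are all invented, and it assumes the theories' choiceworthiness scales are comparable (itself a contested assumption):

```python
# Credences in three moral theories (invented numbers).
credences = {"utilitarianism": 0.7, "deontology": 0.2, "virtue_ethics": 0.1}

# Choiceworthiness of two hypothetical actions under each theory, on a shared scale
# (assuming intertheoretic comparability, which is a substantive assumption).
choiceworthiness = {
    "work_on_large_scale_problem": {"utilitarianism": 100, "deontology": 10, "virtue_ethics": 20},
    "business_as_usual":           {"utilitarianism": 5,   "deontology": 15, "virtue_ethics": 10},
}

def expected_choiceworthiness(action: str) -> float:
    """Weight each theory's verdict by your credence in that theory and sum."""
    return sum(credences[t] * choiceworthiness[action][t] for t in credences)

# MEC picks the action with the highest expected choiceworthiness.
best = max(choiceworthiness, key=expected_choiceworthiness)
print(best, {a: expected_choiceworthiness(a) for a in choiceworthiness})
```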