EDIT: Following the lively discussion with David Moss, I have clarified my thinking on the below. I still endorse the conclusions drawn below, but for somewhat different reasons than those outlined in this post -- see the back-and-forth with David below. I am in the process of writing up my revised thinking at the moment.
EA Forum Preface
I've long had the impression that moral language may be on a similar footing to talk of personal identity, in that neither can be made fully precise, and that the identity/non-identity (rightness/wrongness) verdicts in certain thought experiments are simply under-determined. In this blog post, I look at how our use of moral language attains reference and then examine what implications this has for our more speculative uses of moral language -- e.g. in population ethics.
Besides the topics touched on in the post, another point of interest to EA may be its implications for animal welfare. When we take as primitive our reactions and interpret all else as derivative evidence of future reactions, it seems that personal experience with animals (and their suffering) becomes particularly significant. Other evidence from neuron counts, self-awareness experiments and the like only has force as mediated through our meta-reactions, such as the 'call to universality' discussed in the post.
The Roots and Shoots of Moral Language
Background
I’m writing this post in order to clarify and share my variant of hybrid expressivism and to explore what implications this meta-ethical position has for issues of interest to effective altruism (EA). I call my position empirical expressivism. In short, empirical expressivism agrees with emotivist and expressivist theories that our use of moral language has its roots in emotional reactions and expressions of (dis)approval. Empirical expressivism then goes further to stress the importance of generalizing previous such reactions to anticipate future reactions. It also stresses the importance of meta-reactions, e.g. frustration at one's own inability to empathize. Together, these last two points -- generalizing reactions and meta-reactions -- make it possible to reconcile the expressivist position with the truth-aptness characteristic of many uses of moral language. Empirical expressivism seeks to characterize moral language as used today, but it does not rule out a future use of moral language with some more naturalist or otherwise different basis.
If much of this post is correct, it could have a number of implications which I will briefly enumerate from least to most speculative:
- We should put more weight on intuitions which are closer to our lived experiences.
- We should put more weight on our meta-intuitions.
- We should take disagreement over everyday ethical judgements more seriously in a certain sense.
- We should disregard much of population ethics.
Let me briefly describe the appeal of the expressivist and emotivist positions in meta-ethics. These positions approach the problem of understanding moral language as a descriptive one, beginning by examining how we use and learn to use moral language. The upside of this approach becomes clear when we compare it to a claim unmoored from actual usage. Imagine someone claims, “When you’re talking about ethics, better and worse, good and bad, etc., you are (or should be) doing a utilitarian calculus.” She is making a steep claim: even if utilitarianism is a valuable formalization of some uses of ethical language, she still needs to make the case that no other formalization of our ethical language is plausible. In contrast, in this essay we will begin by examining what is entailed by correct uses of moral language, independent of any formalization, and see where that takes us.
When are ethical intuitions true?
Let’s start by discussing intuitions. In the hands of different authors, ‘intuition’ has been used to refer to different things. I will use ‘intuition’ to refer to cases in which we say that X seems worse than Y without having experienced X or Y. This may arise when discussing legal questions, personal experiences or thought experiments. Note that the apparent reason for such intuitions may vary. Take for example, “Is hearing an excruciatingly loud siren for a while worse than the worst toothache most people have in their lives?” When answering this, what goes through your mind? What goes through the typical non-EAer’s mind? There are many possibilities:
- An immediate answer springs to mind for no apparent reason
- You think of times you’ve heard loud noises and compare them to toothaches you’ve had
- You think about reactions you’ve seen from other people having a toothache or hearing a loud sound.
- You think about some abstract argument for summing pain experience over time and try to do some calculation.
All of the above considerations are similar in that they provide evidence as to what you and others would say had you experienced both X and Y. Or, seen from another angle, consider under what conditions you would agree that you were mistaken about the relation between X and Y. Surely if you one day experience both X and Y and feel that the opposite relationship holds from what you expected, you would then agree that you had been mistaken. The truth condition for this kind of moral intuition thus entails the claim that you would make the same judgement after experiencing X and Y. Now let’s focus on beliefs as used to describe generalizations of intuitions, in other words, claims of the form, “For all experiences of kind X, Y: X is worse than Y.” Once again, the apparent reason why you hold a belief of this kind may vary, but the truth condition is a simple quantification of the truth condition for an intuition. Since the truth/falsity of such claims depends on later observations of yourself, I call my meta-ethical position ‘empirical’. For the simplest instances of this pattern, e.g. “Tripping and falling is worse than not tripping”, there can be no disagreement. Indeed, we are only able to learn the meaning of ‘pain’, ‘bad’, ‘ouch’, etc. because we share a species-wide aversion to such common childhood experiences. For this reason, I call my position ‘expressivist’. In summary, we first learn the use of moral language as shared forms of expressing the unpleasant nature of certain experiences, and then, in generalizing these experiences to novel experiences, we arrive at moral intuitions and beliefs.
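To make these truth conditions concrete, here is a minimal formalization; the symbols are my own shorthand, not standard notation. Write $J_{X,Y}(X \prec Y)$ for "after experiencing both $X$ and $Y$, you would judge $X$ worse than $Y$". Then for an intuition about particular experiences:

$$
\text{``}X \text{ is worse than } Y\text{''} \text{ is true} \iff J_{X,Y}(X \prec Y)
$$

and a belief quantifies this over kinds of experiences $\mathcal{X}$ and $\mathcal{Y}$:

$$
\text{``all experiences of kind } \mathcal{X} \text{ are worse than those of kind } \mathcal{Y}\text{''} \text{ is true} \iff \forall x \in \mathcal{X},\ \forall y \in \mathcal{Y}:\ J_{x,y}(x \prec y)
$$

On this reading, an intuition is an empirical prediction about a counterfactual future judgement, which is what licenses calling it true or false.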
From intuitions to principles
So far I’ve only addressed a small fraction of our many uses of moral language. Much of everyday moral language involves talk of principles, e.g. “Lying is bad”, “One must not kill”, or “An action's moral value is determined by its consequences”. To see how these uses fit into the previous picture, let’s first talk about water. Before chemists identified H2O as a molecule, there was talk of water, and people learned the meaning of the word exclusively ostensively. Although there was no chemical theory of water, one could still sensibly talk about laws or principles, e.g. “Is all ice water?” Perhaps most ice seen by our imagined speaker was whitish, but now she comes across some transparent ice, and after applying some heat it turns out that this ice was indeed also water. I argue that we are in the same position with respect to morality as this pre-scientific individual was with respect to water. Perhaps it will turn out that morality has some underlying naturalist character, but until then any discussion of laws and principles of morality must be made using more mundane means. We can ‘melt’ but we cannot probe the ‘chemical structure’ of morality.
Melting, in our previous story, was a sort of reductive test: the unknown ice was reduced to a more familiar form, water. In the same vein, we may reduce claims about moral principles to a more familiar form, the comparative intuitions and beliefs discussed above. From this perspective, a purported moral principle is just an explanatory claim. Take lying for example: we may read the claim “Lying is always bad” as the claim “In any situation in which I am lied to, I would have preferred not to be lied to.”<sup>1</sup> Another way of putting this is that our use of moral language is defined by the expression of some shared preferences, and so any downstream use of moral language must naturally have some preference-utilitarian structure.
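This reductive reading can be given a sketch of a formal shape; again the notation is of my own choosing, with $s$ ranging over situations and $s[\neg\mathrm{lie}]$ denoting the counterpart of $s$ in which the lie is not told:

$$
\text{``Lying is always bad''} \;\approx\; \forall s:\ \mathrm{LiedTo}(\mathrm{me}, s) \rightarrow \mathrm{Prefer}_{\mathrm{me}}\big(s[\neg\mathrm{lie}] \succ s\big)
$$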
Aggregating across persons
The cases treated so far involved comparisons affecting only a single person. Let’s look at how we learn the use of moral language for cases involving multiple people. We see the reactions of groups to different events, e.g. two people experiencing emotional distress (e.g. a breakup), and a larger group experiencing similar distress (e.g. a death in a family). When confronted with such suffering we react sympathetically, experiencing sadness within ourselves. This sadness may be attributable either to a conscious process of building empathy by imagining the others’ experience, or to an involuntary immediate reaction resulting from our neural wiring. As children we learn to associate these processes with words like 'sad' or 'terrible', and eventually we come to apply the word 'immoral' to any action which leads to such consequences. From this perspective, we may probe our usage of these words to check whether they correspond to an (additive) utilitarian calculus: Are our reactions to tragic events linearly stronger as the number of affected people rises? No. If we try to imagine the plight of the affected, are we able to hold in our minds the plight of many? Again, no. It seems, then, that our use of moral language is distinctly non-utilitarian. Hence, if we are to justify utilitarianism, it will not be by formalizing our use of moral language as applied to actions.
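To make the mismatch explicit, here is a rough sketch; the functional forms are illustrative assumptions of mine, not measured quantities. An additive calculus would have the badness of a tragedy scale linearly with the number $n$ of people affected, whereas our actual reactions grow far more slowly:

$$
W_{\mathrm{additive}}(n) = \sum_{i=1}^{n} u_i \approx n \cdot \bar{u}, \qquad R_{\mathrm{actual}}(n) \not\propto n
$$

where $\bar{u}$ is the average harm per person and $R_{\mathrm{actual}}(n)$, on this scope-insensitivity picture, is closer to logarithmic or nearly flat in $n$.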
Up to now, we’ve focused on prototypical examples of actions through which we learn the use of moral language, but we have neglected uses of moral language as applied to dispositions, thoughts and emotions. I argue that these latter cases explain our interest in utilitarianism. At some point in our development as speakers and thinkers, we come to have meta-emotions: guilt over jealousy, regret over anger, helplessness over implicit bias, etc. Insofar as we feel and say that an inability to empathize with large-scale suffering (e.g. war, oppression, global poverty) is wrong, we also see impartial, closer-to-utilitarian reactions as something to strive for. Note that this still does not justify interpreting moral language in terms of a utilitarian calculus; rather, closeness to utilitarianism is an end in itself. Another common and important emotional reaction is what I’ll term the ‘call to universality’. This class of reaction encompasses our praise for self-sacrifice and the desire of many to align their actions and beliefs with some coherent narrative -- usually religion, but more recently utilitarianism or perhaps virtue ethics. Taken together, these impartiality and coherency meta-reactions lend some normative force to utilitarianism, not as a system for making judgements, but rather as a system to which we ought to align our reactions. Notice also that considerable disagreement exists: for those who do not feel guilt or regret as a result of an inability to empathize with large-scale tragedies, utilitarianism does not have the same force.
Notice that the connection between these meta-reactions and utilitarianism is somewhat distant and likely not precise enough to distinguish between average and total utilitarianism. If so, we should see deciding between average and total utilitarianism as independent of our use of moral language. Moreover, if we understand our moral intuitions as guesses about what we would believe having lived the relevant experiences, it follows that the further the subject of an intuition is from our lived experiences, the less likely that intuition is to be true. Hence, any argument which begins by appealing to an intuition about an alien world, e.g. the repugnant conclusion, should be discounted as unsubstantiated. Returning to our water analogy: in trying to do population ethics, we're in the same position the pre-scientific person would find themselves in if they asked, "Will water take on a new form when heated to 10,000 degrees?" In both the population-ethics and water cases the question appears meaningful, but the answer is out of reach. We are limited both by our engineering inability -- to simulate worlds/heat water -- and by definitional vagueness -- of morality/water.
Thoughts on EA
From the above, it follows that there is considerable individual variance in what force EA principles carry. I personally react more strongly to reading about distant causes, existential risks, etc. than others I know, so much of EA carries an emotional force for me in a way that it would not for them. It is perhaps correct to say that many of those who do not see the appeal of EA would see it if they were exposed to a broader set of experiences, but insofar as they do not feel themselves in the wrong for having a limited set of experiences, EA carries no force for them. In the end, these reflections have led me to a more calibrated understanding of the role of EA: EA is important not because it is the only right thing to do, but rather because our experiences have endowed us with a broader and richer sense of right, one which has the potential to play an invaluable role in guiding mankind to a brighter future.
Footnotes
<sup>1</sup>: Of course, someone saying “Lying is always bad” may intend to make any of a number of other claims, e.g. "If we could achieve the same end without lying, that would be better", "A law against lying would be desirable", and so on. My claim is merely that if we want to give motivating force to the statement “Lying is always bad” as an extension of our defining uses of moral language, then we must interpret it as I do.
Discussion with David Moss
Apologies in advance for the long reply.
Thanks for clarifying. This doesn't change my response, though, since I don't think there's a particularly notable convergence in emotional reactions to observing others in pain which would serve to make valenced emotional reactions a particularly central part of the meaning of moral terms. For example, it seems to me that children (and adults) often think that seeing others in pain is funny (cf. Punch and Judy shows or lots of other comedy), fun to inflict, and often well-deserved. And that's just among modern WEIRD children, who tend to be more Harm-focused than non-WEIRD people.
Plenty of other things seem equally if not more central to morality (though I am not arguing that these are central, or part of the meaning of moral terms). For example, I think there's a good case that people (and primates, for that matter) have innate moral reactions to (un)fairness: if a child is given some ice cream and is happy, but then their sibling is given slightly more ice cream and is happy, they will react with moral outrage and will often demand either levelling down their sibling (at a cost to their pleasure) or even just directly inflicting suffering on their sibling. Indeed, children and primates (as well as adults) often prefer that no-one get anything rather than that an unjust allocation be made, which seems to count somewhat against any simple account based on pleasant experience. I think innate reactions to do with obedience/disobedience and deference to authority, loyalty/betrayal, honesty/dishonesty, etc. are equally central to morality and equally if not more prominent in the cases through which we actually learn morality. So it seems a bunch of other innate reactions may be central to morality and often morally mandate others' suffering, so it doesn't seem likely to me that the very meaning of moral terms can be distinctively tied to the goodness/badness of valenced experience. Notably, it seems that a very common feature of children's initial training in morality (until very recently in advanced industrial societies, anyway) was that parents or others directly inflicted pain on children when they did something wrong, and often the thing they did wrong had little or nothing to do with valenced experience, nor was it explained in these terms. This seems hard to square with the meaning of moral terms being rooted in the goodness/badness of valenced experience.
Just to clarify one thing: when I said that "It is morally right that you give me $10" might communicate (among other things) that you are apt for disapproval if you don't give me $10 (which is not implied by saying "I desire that you give me $10"), I had in mind something like the following: when I say "It is morally right that you give me $10", this communicates inter alia that I will disapprove of you if you don't give me $10, that I think it's appropriate for me to so disapprove, that I think others should disapprove of you and that I would disapprove of them if they don't, etc. Maybe it involves a bunch of other attitudes and practical implications as well. That's in contrast to me just saying "I desire that you give me $10", which needn't imply any of the above. That's what I had in mind by saying that moral terms may communicate that I think you are apt for disapproval if you do something. I'm not sure how you interpreted "apt[ness] for disapproval", but it sounds from your subsequent comments like you think it means something other than what I mean.
I think the fundamental disagreement here is that I don't think we need to learn what specific kinds of cases are (considered) morally wrong in order to learn what "morally wrong" means. We could learn, for example, that "That's wrong!" expresses disapproval without knowing what specific things people disapprove of, and even if literally everyone entirely disagrees about what things are to be disapproved of. I guess I don't really understand why you think that there needs to be any degree of consensus about these first-order moral issues (or about what makes things morally wrong) in order for people to learn the meaning of moral terms, or to distinguish moral terms from terms merely expressing desires.
I agree that learning what things my parents think are morally wrong (or which things they think are morally wrong vs which things they merely dislike) requires generalizing from specific things they say are morally wrong to other things. It doesn't seem to me that learning what it means for them to say that such-and-such is morally wrong, vs what it means for them to say that they dislike something, requires that we learn what specific things people (specifically or in general) think are morally wrong / dislike.
To approach this from another angle: perhaps the reason why you think that it is essential to learning the meaning of moral terms (vs the meaning of liking/desiring terms) that we learn what concrete things people think are morally wrong and generalize from that is that you think we learn the meaning of moral terms primarily from simple ostension, i.e. we learn that "wrong" refers to kicking people, stealing things, not putting our toys away, etc. (whereas we learn that "I like this" refers to flowers, candy, television, etc.), we infer what the terms mean primarily just by working out what general category unites the "wrong" things and what unites the "liked" things, and reference to these concrete categories plays a central role in fixing the meaning of the terms.
But I don't think we need to assume that language learning operates in this way (which sounds reminiscent of the Augustinian picture of language described at the beginning of Wittgenstein's PI). I think we can learn the meaning of terms by learning their practical role: e.g. that "that's morally wrong" implies various practical things about disapproval (including that you will be punished if you do a morally bad thing, and that you yourself will be considered morally bad and so face general disapproving attitudes and social censure from others), whereas "I don't like that" doesn't carry those implications. I think we find the same thing for various terms, where their meaning consists in different practical implications rather than fixed referents or fixed views about what kinds of things warrant their application (hence people can agree about the meaning of the terms but disagree about which cases they should be applied to, which seems particularly common in morality).
Also, I recognise that you might say "I don't think that the meaning is necessarily set by specific things being agreed to be wrong -- but it is set by a specific attitude which people take/reaction which people have, namely a negative attitude towards people experiencing negatively valenced emotions" (or some such). But I don't think this changes my response, since I don't think a shared reaction that is specifically about suffering need be involved to set the meaning of moral terms. I think the meaning of moral terms could consist in distinctive practical implications (e.g. you'll be punished and I would disapprove of others who don't disapprove of you -- although of course I think the meaning of moral terms is more complex than this) which aren't implied by mere expressions of desire or distaste etc.
I agree that it might seem impossible to have a reasoned moral argument with someone who shares none of our moral presuppositions. But I don't think this tells us anything about the meaning of moral language. Even if we took for granted that the meaning of "That's wrong!" was to simply to express disapproval, I think it would still likely be impossible to reason with someone who didn’t share any moral beliefs with us. I think it may simply be impossible in general to conduct reasoned argumentation with someone who we share no agreement about reasons at all.
What seems to matter to me, as a test of the meaning of moral terms, is whether we can understand someone who says "Hurting people is good" as uttering a coherent moral sentence and, as I mentioned before, in this purely linguistic sense I think we can. There’s an important difference between a madman and someone who’s not competent in the use of language.
I don’t think there’s necessarily any difference between these cases in terms of how they are using moral language. The only difference consists in how many of our moral beliefs we share (or don’t share). The question is whether, when we are faced with someone who asserts that it’s good for someone to suffer, or that it is morally irrelevant whether some other person is having valenced experience and that what matters is whether one is acting nobly, we should diagnose these people as misspeaking or as evincing normal moral disagreement. Fwiw, I think plenty of people, from early childhood training to advanced philosophy, use moral language in a way which is inconsistent with the analysis that “good”/“bad” centrally refer to valenced experience (in fact, I think the vast majority of people, outside of EAs and utilitarians, don’t use moral language in this way).
I actually agree that if no-one shared (and could not be persuaded to share) any moral values, then the use of moral language could not function in quite the same way it does in practice and likely would not have arisen in the way it did, because a large part of the purpose of moral talk (co-ordinating action) would be vitiated. Still, I think that moral utterances (with their current meaning) would still make perfect sense linguistically, just as moral utterances made in discourse between parties who fundamentally disagree (e.g. people who think we should do what God X says we should do vs people who think we should do what God Y says we should do) still make perfect sense.
Crucially, I don't think that, absent moral consensus, moral utterances would reduce to "function[ing] in conversation just as all other preferences do." Saying "I think it is morally required for you to give me $10" would still perform a different function than saying "I prefer that you give me $10", for the same reasons I outlined above. The moral statement is still communicating things other than just that I have an individual preference (e.g. that I'll disapprove of you for not doing so, endorse this disapproval, think that others should disapprove, etc.). The fact that, in this hypothetical world, no-one shares any consensus about moral views nor could be persuaded to agree on any, and that this would severely undermine the point of expressing moral views, doesn't imply that the meaning of moral terms depends on reference to the objects of concrete agreement. (Note that it wouldn't entirely undermine the point of expressing moral views either: it seems like there would still be some practical purpose to communicating that I disapprove and endorse this disapproval, vs merely that I have a preference, etc.)
I also agree that moral language is often used to persuade people who share some of our moral views, or to persuade people to share our moral views, but I don't think this requires that the meaning of moral terms depends on or involves consensus about the rightness or wrongness of specific moral things. For moral talk to be capable of serving this practical purpose, we just need some degree of people being inclined to respond to the same kinds of things or to be persuaded to share the same attitudes. But this doesn’t require any particularly strong, near-universal consensus, or consensus on a particular single thing being morally good/bad. It also need not require that there are some specific things that people are inclined to agree on -- it could, rather, be that people are inclined to defer to the moral views of authorities/their group, and this ensures some degree of consensus regardless. This seems compatible with very, very widespread disagreement in fact: it might be that people are disposed to think that some varying combinations of “fraternity, blood revenge, family pride, filial piety, gavelkind, primogeniture, friendship, patriotism, tribute, diplomacy, common ownership, honour, confession, turn taking, restitution, modesty, mercy, munificence, arbitration, mendicancy, and queuing” (list ripped off from Oliver Scott Curry) are good, and yet disagree with each other to a large extent about which of these are valuable, to what extent, and how they should be applied in particular cases. Moral language could still serve a function as people use it simply to express which of these things they approve or disapprove of and expect others to likewise promote or punish, without there being general consensus about what things are wrong and without the meaning of moral terms definitionally being fixed with reference to people’s concrete (and contested and changing) moral views.