On Helping vs Expressing
As social creatures, we care a lot about others’ attitudes towards us. And since we lack direct access to others’ minds, we care a lot about what their outward behavior expresses or reveals about their underlying attitudes. Since others are likewise apt to be hypervigilant about the expressive significance of our behavior, it can be to our social advantage to act in ways that signal our value as loyal and reliable allies. Commonsense morality is the result: enlightened egoism for social animals. This is not at all the same thing as genuine virtue (which I take to centrally involve unrestricted, scale-sensitive benevolence).
It’s a sad fact about the world that social reward and moral goodness are only loosely correlated. On almost any realistic margins, greater impartial altruism is clearly morally better, yet it is apt to inspire outright hostility from many (especially if it leads one to support unconventional causes like shrimp welfare or longtermism). Do-gooder derogation is rampant. Most people would prefer (for both themselves and others) to remain silent about their philanthropic efforts, thereby treating “seeming a braggart” as a greater moral risk than “failing to promote a culture of giving that could lead others to give more to life-saving charities.” Yet, at the same time, if your allies are sufficiently socially/politically dominant in your locality, you may have free rein to righteously bully isolated others who fail to keep up with the in-group’s latest moral fashions and shibboleths. Eek.
There’s too little moral concern where it matters, and too much where it doesn’t. That’s the inevitable conclusion that any clear-eyed beneficentrist must reach. But I now want to offer a more controversial suggestion. Putting aside opportunistic moral (dis)engagement of the sort described above, it seems to me that even the most principled forms of non-consequentialism are susceptible to the objection that they ultimately bottom out in a kind of egoistic self-indulgence, prioritizing expressive over concrete concerns.
Consider: How much should you care about treating someone as a means versus several extra people losing their entire lives? Deontology directs us to care more about the former than the latter. But surely lives matter more than moral abstractions? More generally: deontological verdicts can seem tempting insofar as they better align with our emotional responses. But are there any supporting arguments for the view that don’t ultimately come down to “I find the verdicts more appealing”? (Some rest their hopes on sacred invocations of the “separateness of persons”, but my Value Receptacles paper shows that there’s no barrier to consequentialists valuing distinct individuals separately.)[1] A theory’s appeal to the agent seems an awfully self-centered reason for preferring an ethic to an alternative that would be better for moral patients.
As I wrote in Sacrificing Individuals for Symbolism:
If you’re proposing a policy that involves obvious utilitarian harms, you need to say something about why the other properties you’re interested in—vaguely egalitarian symbolism, or whatever—should take priority over others’ lives and vital interests. (How many “egalitarian” opponents of kidney markets are willing to openly admit: “40,000 people should die prematurely each year because I find markets aesthetically distasteful and worry that a few people might end up selling a life-saving kidney against their narrow best interests”?)
If they had to explicitly quantify how many QALYs were worth sacrificing on the altar of symbolism and other non-utilitarian ends, I doubt these sorts of views would end up with so many defenders.
At the same time, some non-consequentialists selectively apply charges of “complicity” to condemn specific activities they dislike (e.g. using AI, or “collaborating” with the federal government) without shouldering the burden of establishing that any harm was actually done, or that the underlying reasoning wouldn’t terribly overgeneralize. Without the discipline of consequentialist principles, it’s way too easy for people to rationalize self-indulgent moralism whenever their tastes, politics, or personal priorities differ from others’ (and they anticipate social support for aggressively pressing their preferences).
Now, it’s mostly harmless to be a non-consequentialist in your personal life, where low-stakes everyday decisions are concerned. You’re likely to follow good norms, even if you have mistaken beliefs about their ultimate justification and sometimes indulge in annoyingly misguided moralism. And any moral theory can be improved by incorporating beneficentrism. Notice, then, that even non-consequentialist beneficentrists should generally agree with consequentialist verdicts in the realm of politics and policy. The stakes are simply too high for non-consequentialist reasons to carry the day.[2]
Distinctively non-consequentialist approaches to politics and policy then seem to depend upon disregarding the consequences for people’s lives and well-being, which strikes me as deeply appalling. (I wish this were more widely appreciated.) When lives are on the line (or good government norms, or the future of the planet, etc.), what matters most is securing better outcomes. (That’s compatible, of course, with thinking that following principled procedures is the most reliable way to secure this vital end.) Yet most people seemingly prefer to express their values rather than to promote them.[3] I understand that expressing oneself is more emotionally satisfying (and, in many circles, socially rewarding), but again, when the stakes are this high, decency surely requires us to acknowledge that improving the actual outcome matters more.[4]
Changing norms
My post on moral gadflies argued that well-meaning people should probably do more to shame do-gooder derogation and other forms of anti-beneficent collusion. (“Genuinely OK people needn’t positively do good in the world, when it’s effortful or costly. But those who outright undermine efforts to make the world better, without adequate reason or excuse, are falling short of such basic neutrality.”)
More awareness of, and critical attention to, the problem of self-indulgent moralizing—prioritizing expressive concerns and emotional comfort over genuine concern for how people’s lives are concretely affected—seems like it could be similarly helpful. If we want ethics to be a force for good in the world, as even deontologists should want, we plausibly need a public ethic that directs more attention to what really matters.
- ^
I’ve heard some respond that this is not what they meant by the objection. They just mean to object to aggregation as such: the extensional feature that consequentialism may allow many small interests to outweigh one big one. But now we’re just back to relying on emotional appeal. There’s no rational basis for insisting that many small interests can never outweigh one big one. Psychologically, though, it’s natural to feel more moved by a big salient interest than by many smaller ones. People want a moral theory that lets them indulge their natural emotional responses.
- ^
We should want policies that actually help, not ones that merely express support for helping. We should want the better party to make strategic compromises sufficient to win elections and do what good they can, rather than lose with purity. As Ezra Klein and Matt Yglesias each emphasize (in subtly different ways), better policy depends upon winning elections, and winning elections requires a broad tent emphasizing popular policies and (mostly) validating popular views, rather than the purity ratchets that win social points in a local monoculture.
Of course, I think many popular views (across the political spectrum) are awful and worth criticizing, but sometimes that role falls to public intellectuals rather than politicians and associated political bridge-builders. I also think intellectuals have a responsibility to keep things in perspective, a responsibility that most ideological purity-policing flouts. Purity just isn’t that important; more generally, popular ideologies seem to track coalitional interests rather than anything even approximating objective importance. So it seems a really bad sign when supposed intellectuals start to sound and behave like ideological hacks.
It seems much better to take your ideology to be answerable to (and guided by) consequentialist considerations. Guidance like, “Try to do the most impartial good, guided by norms and principles that we have good reason to trust as reliable means to that end,” seems less prone to serve as a fig-leaf for gross self-indulgence. (One more commonly hears the “fig-leaf”/abusability objection rolled out against utilitarianism, but in principle it should apply most strongly to whichever views grant agents the most moral latitude—not something that utilitarianism is so famous for!)
- ^
- ^
I previously expressed my bafflement at some of Kamm’s arguments to the contrary. For another example, she addresses the traditional “paradox of deontology”—why not minimize the number of serious constraint-violations?—by writing: “each person who dies as a victim of a violation of a constraint because we do not minimize violations, dies as an inviolable person, harm to whom we have not endorsed” (1991, p. 904). How comforting!

Nice post, Richard! Do you have any takes on potential ways people in the effective altruism community might be morally self-indulgent, even if to a lesser extent than in the examples you mention?
Funnily enough, the main example that springs to mind is the excessive self-flagellation post-FTX. Many distanced themselves from the community and its optimizing norms/mindset—for understandable reasons, but ones more closely tied to "expressing" (and personal reputation management) than to actually "helping", IMO.
I'd be curious to hear if others think of further candidate examples.
The problem arises when utilitarianism isn’t utilitarian enough. An ethics based on principles and a cultural conception of “virtue”, one whose effects need not contradict utilitarianism, can promote cultural change by increasing altruistic action and transforming the behavior of large numbers of people. (Obviously, I’m not referring to a conventional ethics of principles and virtue, but to a conception of human relations based on benevolence, which is impossible only if we do nothing to try to achieve it.)
Promoting the well-being of shrimp, or long-term projects for a humanity ten thousand years in the future, may have its logic from a utilitarian point of view... but such projects seem unlikely to promote cultural change. And without cultural change, in the sense of laying the foundations for a humanity that makes altruistic action its main economic activity, very little good will be done in consequentialist terms. The consequentialist may then face a case of self-indulgence of their own.