... and other good causes
It’s International Shrimpact Week! My contribution offers a moderate’s case for shrimp welfare, as one cause among many that shouldn’t be neglected within your moral portfolio. Alas, since it is so extremely neglected by the population at large, you have an especially striking opportunity to promote balance and moderation by sparing a few dollars to save zillions of shrimp from suffering during slaughter. Donate here to support my campaign for sensible shrimp centrism against the extremists to either side (then help some people too, via my GiveDirectly fundraiser). If you’re more inclined to support hegemonic shrimp-first radicalism, go use Bentham’s fundraiser instead!
Introduction
A common theme of my blogging is that moral motivation is limited. No-one wants to be a totally self-sacrificing utilitarian agent. We are not so impartial as that. Some conclude from this that impartial utilitarianism must be wrong, but that seems mere wishful thinking—the judgment that others’ lives and basic needs properly take priority over luxuries for ourselves is surely among utilitarianism’s most clearly correct verdicts. The more reasonable conclusion is rather that we are all deeply morally imperfect. I add: that’s OK! (Not ideal, but OK.) We shouldn’t get too hung up on questions of virtue or deontic status. (You don’t want to be status-obsessed, do you?) Instead ask: what low-hanging fruit can we reach to easily do more good?[1]
Something I like a lot about Effective Altruism is its relentless focus on this question. There is no more important question for you to consider than how you can do the most good (at whatever non-trivial cost you’re willing to bear). Yet the ask is so modest! Do whatever you want with 90% of your resources; just set aside 10% (or whatever) for the impartial good, and you’ll do immense good for others at minimal cost to your other interests! Not many people save dozens of lives (even doctors are mostly just filling a role that would be filled almost as well by someone else if they weren’t there). But most well-educated citizens in wealthy nations have the opportunity to do at least this much good with their lives, relatively easily, through modest but well-targeted donations.
I find it helpful to model motivation as being guided by “sub-agents” with varying priorities and worldviews.[2] We can reserve the vast majority of our resources to be governed by severely partial sub-agents—concerned to prioritize our personal projects or the well-being of family and friends—and still set aside an EA/beneficentric sub-agent with enough resources to do more good than the vast majority of people who have ever lived. It’s a pretty incredible moral opportunity, when you think about it.
Or maybe it shouldn’t be just one. Perhaps we should further subdivide our altruistic concern across different types of causes (human vs non-human, nearterm vs longterm, safe bets vs high-impact longshots, etc.). That’s the idea I want to explore in this post.
Worldview Diversification Blocks Fanaticism
Many people intuitively recoil from “hegemonic” value systems that direct us to put all our eggs in one basket. Especially if the basket is weird and scaly.

So don’t! Remember that people, not theories, should be uncertain. Some hegemonic theory may well be true, but you’re probably not in a position to believe it with absolute confidence. (Even if you were, you may yet be unwilling to act accordingly, which amounts to much the same thing in practice.) We can avoid fanaticism by compartmentalizing: limiting the “reach” or power that we allow various ideas to exert over our lives, and empowering rival ideas to at least a modest extent. This naturally leads to a sensible moderate pluralism, as no single idea or worldview has dictatorial control over your life as a whole. By incorporating diverse sub-agents, each empowered to pursue their own conception of the good (with some portion of your resources), individual decision-makers can reproduce the advantages that liberal democracies have over authoritarian dictatorships. In neither society nor the individual mind should we wish to wholly banish hegemonic theories of the good. Instead, we assign them non-hegemonic representation. (Many good things work best by degrees.)
Consider “strong longtermism”. It’s hard to refute the argument that the interests of future generations decisively swamp those of present-day strangers. But few people are willing to fully endorse the practical implications. So don’t do either of these things! Instead, create a sub-agent to represent longtermism, give them some resources, and let them do their thing.
Similarly, if there’s a strong case that shrimp welfare swamps (present-day) human welfare—and there is!—you don’t have to respond by never helping another human being again. Just create a sub-agent to speak for the shrimp within your mental economy and give them a share of your altruistically-designated resources, proportionate to your confidence in the shrimp-friendly worldview: it surely shouldn’t be zero!
If you want to explicitly reserve space for a normie “global health & development” perspective, ensuring that the global poor aren’t entirely left out of your decisions no matter how many zillions of future digital shrimp you find yourself in a position to help: go right ahead! Create a representative sub-agent; you know the drill by now.
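To make the bookkeeping concrete, here’s a minimal sketch of what this kind of credence-proportional division of an altruistic budget could look like. The worldview labels, credences, and budget figure are purely illustrative placeholders, not recommendations:

```python
# A toy model of splitting an altruistic budget among "sub-agent"
# worldviews in proportion to one's credence in each worldview.
# All numbers below are illustrative placeholders.

def allocate(budget: float, credences: dict[str, float]) -> dict[str, float]:
    """Divide `budget` across worldviews in proportion to credence."""
    total = sum(credences.values())
    return {worldview: budget * c / total for worldview, c in credences.items()}

shares = allocate(1000, {
    "global health & development": 0.5,
    "animal welfare (incl. shrimp)": 0.3,
    "longtermism": 0.2,
})
print(shares)
# {'global health & development': 500.0, 'animal welfare (incl. shrimp)': 300.0, 'longtermism': 200.0}
```

The precise numbers matter less than the structure: every worldview you find credible gets a guaranteed non-zero slice, so none is either dictator or wholly disenfranchised.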

Note that you don’t have to fully endorse an idea for it to appropriately influence your actions. “Full” endorsement would require convincing every one of your sub-agents. But don’t you contain multitudes? Shouldn’t you include at least some skeptical voices, when faced with almost any significant (and hence disputable) idea?
Beware Fanatical Neglect
Missing crucial sub-agents can lead to moral disaster (as when people do nothing about the suffering of billions of factory-farmed animals). Expanding our moral circles does not require us to give overriding power to new beneficiaries; just adequate protection against abject moral neglect. I worry that most people lack the sub-agents needed to represent neglected high-impact cause areas (like existential risk and animal welfare).
In ‘Refusing to Quantify is Refusing to Think’,[3] I highlighted the implicit fanaticism in conventional dogmatism:
> It’s very conventional to think, “Prioritizing global health is epistemically safe; you really have to go out on a limb, and adopt some extreme views, in order to prioritize the other EA stuff.” This conventional thought is false. The truth is the opposite. You need to have some really extreme (near-zero) credence levels in order to prevent ultra-high-impact prospects from swamping more ordinary forms of do-gooding.
You should have at least some moral sub-agents who are anti-speciesist, and who value suffering-relief in a species-neutral way. If we can relieve the dying agony of 1000+ beings per dollar, then something has gone very wrong with the world’s priorities and we should contribute non-trivially to remedying this. The Shrimp Welfare Project’s humane slaughter initiative plausibly achieves this remarkable feat (by providing free electrical stunners to shrimp slaughterhouses that commit to stunning 1800+ metric tons of shrimp annually): some of your anti-speciesist sub-agents should be extremely enthusiastic about funding this. Not with all your money—you have other sub-agents, with other priorities—but with the non-trivial amount that you reasonably allot to represent this credible anti-suffering worldview.
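If you’d like to sanity-check that “1000+ beings per dollar” figure, here’s a rough back-of-envelope version. The 1800-ton commitment comes from the paragraph above; the average shrimp weight and per-stunner cost are my own assumptions for illustration, not the Shrimp Welfare Project’s official numbers:

```python
# Back-of-envelope check of the "1000+ shrimp per dollar" claim.
# Only the tonnage figure comes from the post; the rest are assumptions.

tons_per_stunner_per_year = 1800   # slaughterhouse commitment cited above
grams_per_shrimp = 25              # assumed average harvest weight
stunner_cost_usd = 60_000          # assumed all-in cost of one stunner

shrimp_per_year = tons_per_stunner_per_year * 1_000_000 / grams_per_shrimp
shrimp_per_dollar = shrimp_per_year / stunner_cost_usd

print(f"{shrimp_per_year:,.0f} shrimp/year; {shrimp_per_dollar:,.0f} per dollar")
# -> 72,000,000 shrimp/year; 1,200 per dollar
```

On these assumptions, a single year of one stunner’s operation already clears the 1000-per-dollar bar; multi-year use would only improve the ratio.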
Donation Links
If you’re convinced—and sufficiently principled in your pluralism to allow your shrimp-friendly sub-agent to fund their favorite charity even if it isn’t your all-things-considered favorite—then please use this link to donate to my Shrimp Welfare Project fundraiser (featuring a 50% match from a generous donor).[4]
Alas, notorious shrimp fanatic and friend of the blog Bentham’s Bulldog is currently #1 on the Shrimpact Leaderboard. It will take a critical mass of modestly-contributing moderates for my fundraiser to overtake his, so don’t miss your chance to chip in:
Save the Shrimp (in moderation)!

Alternatively: Animal Charity Evaluators’ Recommended Charity Fund is also running a “matching challenge” (without the competitive element of Substack-specific fundraisers). It’s a worthy option for effectively helping a variety of animals if you’re not sold on shrimp in particular.
To round out your moral portfolio, I’d suggest also finding a promising longtermist charity or grantmaking fund to support. One option is the Long-Term Future Fund.
Finally, if you’d find it reassuring to also empower a “normie” altruistic sub-agent who wants a safe bet that very reliably helps the global poor—and who wouldn’t?—I know of no safer bet than GiveDirectly (for which I also have a Substack fundraiser):
GiveDirectly to the global poor

Donating my Substack subscription revenue
I’ve kicked off my shrimp fundraiser by donating $2000 — 50% of my revenue-to-date from paid subscriptions this year. To balance it out, at year’s end I’ll send GiveDirectly 100% of all subscription revenue I receive this December (including full annual subscriptions that begin this month):
Subscribe this December

Paid subscriptions unlock the full versions of paywalled posts like:
- There’s No Moral Objection to AI Art
- Creepy Philosophy
- Vibe Bias
- Meta-Metaethical Realism
- The Best of All Possible Multiverses
Enjoy!
- ^
Once done: if you’re willing, ask it again.
- ^
See, e.g., the section on Mixed Motivations in ‘The Moral Gadfly’s Double-Bind’, and the Better Way I propose in ‘Limiting Reason’—inspired in part by Harry Lloyd’s work on bargaining approaches to moral uncertainty.
- ^
And, more recently, in ‘Rule High Stakes In, Not Out’.
