A Principled Conservative Impulse
A general challenge in life is how to avoid being duped or exploited by clever-sounding but ultimately facile reasoning. One thing’s for sure: you don’t want to internalize the following norm:
(Easy Dupe): Whenever you hear an argument for doing X, and you can’t immediately refute it, you are thereby rationally committed to doing X.
Why not? Well, it’s just very common for bad reasoning to sound superficially compelling. So it’s unlikely that you could immediately refute every possible chain of bad reasoning. And other people have reasons to try to manipulate you, e.g. by exposing you to superficially-compelling reasoning that supports their interests or values over yours, whether or not their arguments are ultimately sound. Knowing all this, you can anticipate that accepting the above norm would make you vulnerable to exploitation, in a way that’s predictably detrimental to your true interests and values.
Perhaps the most common solution to this problem is:
(Dogmatism): Whenever you hear an argument for doing X, and doing X strikes you as intuitively outrageous, dismiss and ridicule the argument.
The dogmatist will probably have greater success in life than the easy dupe. But I hope it’s clear that they’re also vulnerable to a kind of moral risk, namely, the risk of having been duped by enculturation into blindly accepting bad starting points that they really ought to reconsider. We all think that generations before us had their blind spots (homophobia, sexism, racism, etc.), so it would be an extraordinary coincidence if our current conventional wisdom were, for the first time in human history, morally perfect (or even anywhere in that remote vicinity). Where our minds are currently bad or biased, we should want them to be changed by good arguments, to better approximate what our idealized selves would think. But the dogmatist is deaf to the calls of their better self.
Against Dogmatism
How appalled are you by the prospect that you would have become an enthusiastic Nazi (if steeped in the culture of 1930s Germany) or a slavery apologist (in the US antebellum South)? If you hope to avoid falling into whatever moral mistakes everyone around you happens to be making, you must reject the dogmatism of blind conformism, and be open to questioning assumptions that most people around you treat as unquestionable.
(You can’t literally “question everything”, at least not all at once. That would leave you with no ground from which to judge the things you are trying to question. And given that your time is limited, you may reasonably prioritize, dismissing some challenges in order to spend your time on those you regard as more pressing. That’s fine. But I do want to caution against dismissing views that seem to have strong principled support, when one’s only grounds for dismissal are that the view “sounds weird”. We know that many important moral truths sounded “ridiculous” to past generations. So you should fully expect that many important moral truths will initially sound ridiculous to you!)
A Better Way?
Is there a reasonable middle ground between (Easy Dupe) and (Dogmatism)? Probably several. But let me set out my preferred approach. (The comments are always open if others have even better ideas!)
I suggest compartmentalizing: Let your mind roam free, and try its best to assess things on their merits; but then simply be cautious of acting on fragile beliefs. You now don’t need to be scared of where arguments might lead you, because you can always just refuse to act on your beliefs if it seems too risky. (If anyone accuses you of hypocrisy, ask them to stop socially incentivizing dogmatism, and send them this post by way of explanation.)
But be sure to consider the risks from both angles. How bad is it to donate to shrimp welfare (or longtermism) if the supporting arguments turn out to be bogus? How bad is it not to donate if the arguments turn out to be sound? Pay particular attention to downside risk: it’s one thing to waste your resources doing something that turns out to be pointless. It’s something else entirely to actually do harm of a sort that could severely undermine trust in the larger moral projects you support.[1] “Everyone will think I’m a bit odd for caring about this,” is fine. But “Everyone will hate me and think I’m evil for doing this,” should probably set off alarm bells!
Finally, ask yourself how robust the rival verdicts are to potential updates in evidence: could you more easily see revisions being warranted in one direction rather than the other? That may be a sign that you should be updating more in that direction already, and are just dragging your feet.
After considering these questions, you’ll hopefully feel more secure about your ability to act wisely in the face of uncertainty (including uncertainty about the cogency of various arguments). It may further help to actively reject any pressure to commit either 100% or 0% of your altruistic resources to a candidate high-impact longshot.[2] You may prefer to imagine yourself with various “sub-agents” representing different philosophical perspectives, divide your resources between them (in whatever way strikes you as most reasonable for starters), and allow your various sub-agents to bargain from there.[3] That seems a simple recipe for reasonable pluralism.[4]
Conclusion
Everyone makes mistakes. We may hedge against the worst risks from all philosophical directions by deliberately allocating some resources (time, money, whatever) to each of a number of sub-agents representing moral/philosophical perspectives that seem sufficiently reasonable that they oughtn’t to be entirely ignored. Internal efficiency can be advanced by encouraging your sub-agents to bargain with each other to better advance their respective moral goals. One major goal of your “common sense” agent may be to ensure that you are not universally reviled. If you have some fanatical sub-agents, you can probably bargain with them to find a moderate solution that better advances your common goals without risking severe fallout. If you can’t be bothered to actually model such a “bargaining” process, a decent heuristic may be to combine (i) moderate support for high-impact longshots with limited downside risk, together with (ii) strictly vetoing anything that strikes you as outright evil or disreputable (no matter how favorable the expected value calculation may at first appear).
This pluralistic process seems apt to be a reliable improvement over both (Easy Dupe) and (Dogmatism), for reasonably competent agents. I expect it could be refined further, though. Feel free to share your suggestions.
[1] “Stealing to give”, for example.

[2] There are interesting arguments for why maximizing utility may require you to put all your eggs in one basket. But I view those as similar to the arguments for why you should have 100% confidence in necessary (e.g. logical or philosophical) truths. Such arguments involve such extreme idealization that they offer poor guidance for fallible human beings.

[3] Here I’m indebted to Harry Lloyd’s work on bargaining approaches to moral uncertainty.

[4] Riffing on a theme from my previous post: you can think that people ought to be pluralistic in their practical orientation, without necessarily thinking that first-order theories (of what ultimately matters) ought to be so.