After Nakul Krishna posted the best critique of Effective Altruism so far, I did what anyone would do. I tried to steelman his opinions into their best version, and read his sources. For the third time I was being pointed to Bernard Williams, so I conceded and read his book Ethics and the Limits of Philosophy. It's a great book, and I'd be curious to hear what Will, Toby, Nick, Amanda, Daniel, Geoff, Jeff and other philosophers in our group have to say about it at some point. But what I want to talk about is what it made me realize: that my reasons for Effective Altruism are not moral reasons.
When we act there can be several reasons for our actions, and some of those reasons may be moral in kind. When a utilitarian reasons about a trolley problem, they usually save the 5 people mostly for moral reasons. They consider the situation not from the perspective of physics, or of biology, or of the entropy of the system. They consider which moral agents are participants in the scenario, they reason about how they would like those moral agents (or, in the case of animals, moral recipients) to fare in the situation, and once done, they decide whether or not they would pull the lever.
This is not what got me here, and I suspect not what got many of you here either.
My reasoning process goes:
Well, I could analyse this from the perspective of physics, but that seems irrelevant.
I could analyse it from the perspective of biology, but that also doesn't seem like the most important aspect of a trolley problem.
I could find out what my selfish preferences are in this situation. Huh, that's interesting: I guess my preferences, given that I don't know any of the minds involved, are a ranking of states of affairs from best to worst, where if all 6 survive, I prefer that, then 5, and so on.
I could analyse what morality would have me do. This has two parts: 1) Does morality require of me that I do something in particular? 2) Does morality permit that I do a thing from a specific (unique) set of actions?
It seems to me that morality certainly permits that I pull the lever, and possibly permits that I don't. Does it require that I pull it? I'm not so sure. Let us assume for the time being that it does not.
After doing all this thinking, I pull the lever, save 5 people, kill one, and go home with the feeling of a job well done.
However, there are two confounding factors here. So far I have been assuming that I save them for moral reasons, so I trace those reasons back to the moral theory that would make that action permissible, and sometimes even demanded. I find aggregative consequentialism (usually utilitarianism), and thus I conclude: "I am probably an aggregative consequentialist utilitarian."
The other factor, though, is what I prefer in that situation, and that is the ranking of states of affairs I mentioned previously. Maybe I'm not a utilitarian; maybe I just want the most minds to be happy.
I never tried to tell those apart, until Bernard Williams came knocking. He makes several distinctions that are far more fine-grained and deeper than my understanding of ethics, or than I could explain here; he writes well and knows how to play the philosopher game. Somehow, he made me realize those confounds in my reasoning. So I proceeded to reason about situations in which there is a conflict between the part of my reasoning that says "This is what is moral" and the part that says "I want there to be the most minds having the time of their lives."
After doing a bit of this tinkering, tweaking knobs here and there in thought experiments, I concluded that my preference for there being the most minds having the time of their lives supersedes my morals. When my mind is in conflict between those things, I will happily sacrifice the moral action and instead do the thing that leaves the most minds best off.
So let me add one more strange label to my already elating, if not accurate, "positive utilitarian" badge:
I am an amoral Effective Altruist.
I do not help people (computers, animals and aliens) because I think this is what should be done. I do not do it because it is morally permissible or morally demanded. Like anyone, I have moral uncertainty; maybe some 5% of me is a virtue ethicist or a Kantian, or holds some other perspective. But the point is that even if those parts were winning, I would still go there and pull that lever. Toby or Nick suggested that we use a moral parliament to think about moral uncertainty. Well, if I do, then my conclusion is that I am basically not in a parliamentary system but in some other form of government, one in which the parliament is not that powerful. I take Effective Altruist actions not because they are what is morally right for me to do, but in spite of what is morally right to do.
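To make the parliament picture concrete, here is a toy sketch. The credences and scores are made up, and the parliament Toby or Nick describe has delegates bargain rather than just take a credence-weighted vote, so treat this as the crudest possible version of the idea: credences allocate influence, and the parliament backs whichever action comes out ahead.

```python
# Toy sketch of a moral parliament: each moral theory gets influence in
# proportion to one's credence in it, and the parliament backs the action
# with the highest credence-weighted score. All numbers are made up.

# Hypothetical credences in moral theories (should sum to 1).
credences = {
    "utilitarianism": 0.80,
    "virtue_ethics": 0.10,
    "kantianism": 0.10,
}

# Hypothetical scores each theory's delegates assign to each action (0-1 scale).
scores = {
    "utilitarianism": {"pull_lever": 1.0, "do_nothing": 0.0},
    "virtue_ethics":  {"pull_lever": 0.6, "do_nothing": 0.5},
    "kantianism":     {"pull_lever": 0.3, "do_nothing": 0.7},
}

def parliament_choice(credences, scores):
    """Return the action with the highest credence-weighted score, plus all scores."""
    actions = next(iter(scores.values())).keys()
    weighted = {
        action: sum(credences[t] * scores[t][action] for t in credences)
        for action in actions
    }
    return max(weighted, key=weighted.get), weighted

choice, weighted = parliament_choice(credences, scores)
print({a: round(w, 2) for a, w in weighted.items()})  # {'pull_lever': 0.89, 'do_nothing': 0.12}
print(choice)                                         # pull_lever
```

My claim is just that whatever such a procedure returns, it does not settle what I do: even if the weights shifted and the vote went the other way, I would still pull the lever.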
So Nakul Krishna and Bernard Williams could well have, and in fact might have, reasoned me out of the claim that "utilitarianism is the right way to reason morally." That deepened my understanding of morality a fair bit.
But I'd still pull that goddamn lever.
So much the worse for Morality.
Thanks for bridging the gap!
Yeah, that is my current perspective, and I’ve found no meaningful criterion that would allow me to distinguish moral from amoral preferences. What you call intersubjective is something that I consider a strategic concern that follows from wanting to realize my moral preferences. I’ve wondered whether I should count the implications of these strategic concerns as part of my moral category, but that seemed less parsimonious to me. I’m wary of subjective things and want to keep them contained the same way I want to keep some ugly copy-pasted code contained: black-boxed in a separate module, so it has no effects on the rest of the code base.
I like to use two different words here to make the distinction clearer: moral preferences and moral goals. In both cases you can talk about instrumental and terminal moral preferences/goals. This is how I prefer to distinguish goals from preferences (copy-pasted from my thesis):
To aid comprehension, however, I will make an artificial distinction of moral preferences and moral goals that becomes meaningful in the case of agent-relative preferences: two people with a personal profit motive share the same preference for profit but their goals are different ones since they are different agents. If they also share the agent-neutral preference for minimizing global suffering, then they also share the same goal of reducing it.
I’ll assume that in this case we’re talking about agent-neutral preferences, so I’ll just use “goal” here for clarity. If someone has the personal goal of wanting to get good at playing the theremin, then on Tuesday morning, when they’re still groggy from a night of coding and all out of coffee and Modafinil, they’ll want to stay in bed and very much not want to practice the theremin on one level, but still want to practice it on another level, a system 2 level, because they know that to become good at it, they’ll need to practice regularly. Here, having practiced is an instrumental goal toward the (perhaps) terminal goal of becoming good at playing the theremin. You could say that their terminal goal requires or demands that they practice even though they don’t want to. When I had to file and send out donation certificates to donors, I felt the same way.
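To restate the agent-relative/agent-neutral distinction from the thesis excerpt as compactly as I can, here is a minimal sketch; the names and the data structure are mine and purely illustrative. An agent-relative preference yields a different goal for each agent who holds it, while an agent-neutral preference yields the same goal for every holder.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Preference:
    description: str
    agent_relative: bool

def goal(agent: str, preference: Preference) -> str:
    """The goal an agent derives from holding a preference."""
    if preference.agent_relative:
        return f"{preference.description} for {agent}"
    return preference.description  # the same goal for every holder

profit = Preference("maximize personal profit", agent_relative=True)
less_suffering = Preference("minimize global suffering", agent_relative=False)

# Same preference, different goals:
assert goal("Alice", profit) != goal("Bob", profit)
# Same preference, same shared goal:
assert goal("Alice", less_suffering) == goal("Bob", less_suffering)
```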
Aw, hugs!
Oops, yes. I should’ve specified that.
If I understand you correctly, then that is what I tried to capture by “optimally.”
This seems to me like a combination of the two limitations above. A person can decide, for strategic purposes, not to act on moral preferences that they continue to entertain, e.g., to more effectively cooperate with others on realizing another moral goal. When a person rejects, i.e., no longer entertains, a moral preference (assuming such a thing can be willed), and optimally furthers other moral goals of theirs, then I’d say they are doing what is moral (to them).
Cuddlepiles? Count me in! But these preferences also include “the most minds having the time of their lives.” I would put all these preferences on the same qualitative footing, but let’s say you care comparatively little about the whiteboards and a lot about the happy minds and the ecstatic dance. Let’s further assume that a lot of people out there are fairly neutral about the dance (at least so long as they don’t have to dance themselves) but excited about the happy minds. When you decide to put the realization of the dance goal on the back burner and concentrate on maximizing those happy minds, you’ll have an easy time finding a lot of cooperation partners, and together you actually have a bit of a shot at nudging the world in that direction. If you concentrated on the dance goal, however, you’d find far fewer partners and make much less progress, incurring a large opportunity cost in goal realization. Hence pursuing this goal would be less moral by dint of its lacking intersubjective tractability.
So yes, to recap my understanding: from your perspective, everyone has the moral obligation to satisfy your various goals. However, other people disagree, particularly on agent-relative goals but also at times on agent-neutral ones. Just as you require resources to realize your goals, you often also require cooperation from others, and costs and differing tractability make some goals more and others less costly to attain. Hence the moral thing to do is to minimize one’s opportunity cost in goal realization.
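Here is a toy version of that comparison; the numbers and the simple value-times-tractability model are made up and only meant to illustrate the opportunity-cost argument, not to measure anything.

```python
# Hypothetical goals: how much you care about each, and how tractable each is
# given how many cooperation partners you can realistically find for it.
goals = {
    "ecstatic dance everywhere":      {"value_to_you": 0.9, "tractability": 0.1},
    "most minds having a great time": {"value_to_you": 1.0, "tractability": 0.8},
}

def expected_realization(g):
    # Crude proxy for how much of the goal you can expect to realize.
    return g["value_to_you"] * g["tractability"]

realizations = {name: round(expected_realization(g), 2) for name, g in goals.items()}
best = max(realizations, key=realizations.get)
worst = min(realizations, key=realizations.get)

print(realizations)   # dance: 0.09 vs. happy minds: 0.8
print("pursue:", best)
print("opportunity cost of pursuing the other goal instead:",
      round(realizations[best] - realizations[worst], 2))  # 0.71
```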
Please tell me if I’m going wrong somewhere. Thanks!
I really appreciate your point about intersubjective tractability. It raises the question of how much we should let empirical and practical considerations spill into our moral preferences (ought implies can, for example; does it also imply "can, in a way that is not extremely hard to coordinate"?).
By and large, I'd say that you are talking about how to be an agenty moral agent. I'm not sure morality requires being agenty, but it certainly benefits from it.
Bias dedication intensity: I meant something orthogonal to optimality. Dedicating only to moral preferences, bu...