After Nakul Krishna posted the best critique of Effective Altruism so far, I did what anyone would do: I tried to steelman his opinions into their best version and read his sources. For the third time I was being pointed to Bernard Williams, so I conceded and read his book Ethics and the Limits of Philosophy. It's a great book, and I'd be curious to hear what Will, Toby, Nick, Amanda, Daniel, Geoff, Jeff and other philosophers in our group have to say about it at some point. But what I want to talk about is what it made me realize: that my reasons for Effective Altruism are not moral reasons.
When we act there can be several reasons for our actions, and some of those reasons may be moral in kind. When a utilitarian reasons about a trolley problem, they usually save the 5 people mostly for moral reasons. They consider the situation not from the perspective of physics, or of biology, or of the entropy of the system. They consider which moral agents are participants in the scenario, they reason about how they would like those moral agents (or, in the case of animals, moral patients) to fare in the situation, and once done, they issue a verdict on whether they would pull the lever or not.
This is not what got me here, and I suspect not what got many of you here either.
My reasoning process goes:
Well, I could analyse this from the perspective of physics. But that seems irrelevant.
I could analyse it from the perspective of biology. That also doesn't seem like the most important aspect of a trolley problem.
I could find out what my selfish preferences are in this situation. Huh, that's interesting: given that I don't know any of the minds involved, my preferences amount to a ranking of states of affairs from best to worst, where if 6 survive I prefer that, then 5, and so on.
I could analyse what morality would direct me to do. This has two parts: 1) Does morality require of me that I do something in particular? 2) Does morality permit that I do something from a specific (unique) set of actions?
It seems to me that morality certainly permits that I pull the lever, and possibly permits that I don't, too. Does it require that I pull it? I'm not so sure. Let us assume for the time being that it does not.
After doing all this thinking, I pull the lever, save 5 people, kill one, and go home with the feeling of a job well done.
However, there are two confounding factors here. So far, I have been assuming that I save them for moral reasons, so I backtrack from those reasons to the moral theory that would make that action permissible, and sometimes even demanded; I find aggregative consequentialism (usually utilitarianism), and thus I conclude: "I am probably an aggregative consequentialist utilitarian."
The other factor, though, is what I prefer in that situation, and that is the ranking of states of affairs I mentioned previously. Maybe I'm not a utilitarian, and I just want the most minds to be happy.
I never tried to tell those apart until Bernard Williams came knocking. He makes several distinctions far more fine-grained and deeper than my understanding of ethics, or than I could explain here; he writes well and knows how to play the philosopher game. Somehow, he made me notice those confounds in my reasoning. So I proceeded to reason about situations in which there is a conflict between the part of my reasoning that says "This is what is moral" and the part that says "I want there to be the most minds having the time of their lives."
After doing a bit of this tinkering, tweaking knobs here and there in thought experiments, I concluded that my preference for there being the most minds having the time of their lives supersedes my morals. When my mind is in conflict between the two, I will happily sacrifice the moral action and instead do the thing that makes the most minds better off the most.
So let me add one more strange label to my already elating, if not accurate, "positive utilitarian" badge:
I am an amoral Effective Altruist.
I do not help people (computers, animals and aliens) because I think this is what should be done. I do not do it because it is morally permissible or morally demanded. Like anyone, I have moral uncertainty: maybe some 5% of me is a virtue ethicist or a Kantian, or holds some other perspective. But the point is that even if those parts were winning, I would still go there and pull that lever. Toby or Nick suggested that we use a moral parliament to think about moral uncertainty. Well, if I do, then my conclusion is that I am basically not in a parliamentary system but in some other form of government, and the parliament is not that powerful. I take Effective Altruist actions not because they are what is morally right for me to do, but in spite of what is morally right to do.
So Nakul Krishna and Bernard Williams may well have reasoned me out of the claim that "utilitarianism is the right way to reason morally"; in fact, I think they have. That deepened my understanding of morality a fair bit.
But I'd still pull that goddamn lever.
So much the worse for Morality.
One thing which I think you should consider is the idea that one's preferences become "tuned" to one's moral beliefs. I would challenge the sentence in which you claim that "even if [virtue ethics/Kant] were winning, I would still go there and pull that lever": wouldn't the idea that virtue ethics is winning be contradicted by your choosing to pull the lever? How do we know when we are fully convinced by an ethical theory? We measure our conviction to follow it. If you are fully convinced of utilitarianism, for example, your preferences will reflect that---for how could you possibly prefer not to follow an ethical theory which you completely believe in? It is not possible to say something like "I know for certain that this is right, but I prefer not to do it". What is really happening in a situation like this is that you actually give some ethical priority to your own preferences---hence you are partially an ethical egoist. To map this onto your situation, I would interpret your writing above as meaning that you are not fully convinced of the ethical theories you listed---you find that reason guides you to utilitarianism, Kantianism, whatever it may be, but you are overestimating your own certainty. You say that you take EA actions in spite of what is morally right to do. If you were truly convinced that something else were morally right, you would do it. Why wouldn't you?
If I observe that you do something which qualifies as an EA action, and then ask you why you did it, you might say something like "Because it is my preference to do it, even though I know that X is morally right", X being some alternative action. What I'm trying to say---apologies because this idea is difficult to communicate clearly---is that when you say "Because it is my preference", you are offering your preference as valid justification for your actions. This form of justification is a principle of ethical egoism, so some non-zero percentage of your ethical commitments must be toward yourself. Even though you claimed to be certain that X is right, I have reason to challenge your own certainty, because of the justification you gave for the action. This is certainly a semantics issue in some sense, turning on what we consider to qualify as "belief" in an ethical system.
It seems from your comment that you feel the moral obligation strongly. Like the Oxford student cited by Krishna, you don't want to do what you want to do; you want to do what you ought to do.
I don't experience that feeling, so let me reply to your questions:
Not really. Pulling the lever is what I would do, and what I would think I have reason to do, but it is not what I think I would have moral reason to do. I would reason that a virtuous person (ex hypothesi) wouldn't ...