After Nakul Krishna posted the best critique of Effective Altruism so far, I did what anyone would do: I tried to steelman his opinions into their best version, and I read his sources. For the third time I was being pointed to Bernard Williams, so I conceded and read his book Ethics and the Limits of Philosophy. It's a great book, and I'd be curious to hear what Will, Toby, Nick, Amanda, Daniel, Geoff, Jeff and the other philosophers in our group have to say about it at some point. But what I want to talk about is what it made me realize: that my reasons for Effective Altruism are not moral reasons.
When we act there can be several reasons for our action, and some of those reasons may be moral in kind. When a utilitarian reasons about a trolley problem, they usually save the five people mostly for moral reasons. They consider the situation not from the perspective of physics, or of biology, or of the entropy of the system. They consider which moral agents are participants in the scenario, they reason about how they would like those moral agents (or, in the case of animals, moral patients) to fare in the situation, and only then do they issue a verdict on whether they would pull the lever.
This is not what got me here, and I suspect not what got many of you here either.
My reasoning process goes:
Well, I could analyse this from the perspective of physics. But that seems irrelevant.
I could analyse it from the perspective of biology. That also doesn't seem like the most important aspect of a trolley problem.
I could find out what my selfish preferences are in this situation. Huh, that's interesting: given that I don't know any of the minds involved, my preferences are a ranking of states of affairs from best to worst; if six survive, I prefer that, then five, and so on.
I could analyse what morality would have me do. This has two parts: 1) Does morality require of me that I do something in particular? 2) Does morality permit that I do a thing from a specific (unique) set of actions?
It seems to me that morality certainly permits that I pull the lever, and possibly permits that I don't. Does it require that I pull it? I'm not so sure. Let us assume, for the time being, that it does not.
After doing all this thinking, I pull the lever, save 5 people, kill one, and go home with the feeling of a job well done.
However, there are two confounding factors here. So far I have been assuming that I save them for moral reasons, so I backtrack from those reasons to the moral theory that would make that action permissible, and sometimes even demanded. I find aggregative consequentialism (usually utilitarianism), and thus I conclude: "I am probably an aggregative consequentialist utilitarian."
There is another factor, though, which is what I prefer in that situation, and that is the ranking of states of affairs I mentioned previously. Maybe I'm not a utilitarian; maybe I just want the most minds to be happy.
I never tried to tell those apart until Bernard Williams came knocking. He makes several distinctions that are much more fine-grained and deeper than my understanding of ethics, or than I could explain here; he writes well and knows how to play the philosopher's game. Somehow, he made me notice those confounds in my reasoning. So I proceeded to reason about situations in which there is a conflict between the part of my reasoning that says "This is what is moral" and the part that says "I want there to be the most minds having the time of their lives."
After a bit of this tinkering, tweaking knobs here and there in thought experiments, I concluded that my preference for there being the most minds having the time of their lives supersedes my morals. When my mind is in conflict between the two, I will happily sacrifice the moral action to instead do the thing that most benefits the most minds.
So let me add one more strange label to my already elating, if not accurate, "positive utilitarian" badge:
I am an amoral Effective Altruist.
I do not help people (or computers, animals and aliens) because I think that is what should be done. I do not do it because it is morally permissible or morally demanded. Like anyone, I have moral uncertainty; maybe some 5% of me is virtue ethicist, or Kantian, or some other perspective. But the point is that even if those parts were winning, I would still go there and pull that lever. Toby or Nick suggested that we use a moral parliament to think about moral uncertainty. Well, if I do, then my conclusion is that I am basically not in a parliamentary system but in some other form of government, one where the parliament is not that powerful. I take Effective Altruist actions not because they are what is morally right for me to do, but in spite of what is morally right to do.
So Nakul Krishna and Bernard Williams may well have, and in fact probably did, reason me out of the claim "utilitarianism is the right way to reason morally." That deepened my understanding of morality a fair bit.
But I'd still pull that goddamn lever.
So much the worse for Morality.
Why not conclude so much the worse for ought, hedonism, or impersonal morality? There are many other moral theories built away from these notions which would not lead you to these conclusions; of course, this does not mean they ignore these notions. If this simplistic moral theory makes you want to abandon morality, please abandon the theory instead.
I find the idea that there are valid reasons to act that are not moral reasons weird; I think some folks call them prudential reasons. It seems that your reason for being an EA is a moral reason if utilitarianism (plus a bunch of other specific assumptions) is right, and "just a reason" if it isn't. But if increasing others' welfare is not producing value, or is not right, or whatever, what is your reason for doing it? Is it some sort of moral akrasia? You know it is not the right thing to do, but you do it nevertheless? It seems there could only be bad reasons for acting this way.
If you are not acting like you think you should after having complete information and moral knowledge, perfect motivation and reasoning capacity, then it does not seem like you are acting on prudential reasons, it seems you are being unreasonable. If you are acting on the best of your limited knowledge and capacities, it seems you are acting for moral reasons. These limitations might explain why you acted in a certain sub-optimal way, but they do not seem to constitute your reason to act.
Suppose a scenario in which you are stuck on a desert island with another starving person who has a slightly higher chance of survival (say, he is slightly healthier than you). There is absolutely no food, and you know that the best shot at at least one of you surviving is for one to eat the other. He comes to attack you. Some forms of utilitarianism would say you ought to let him kill you; any resistance would be immoral. If people later found out that you fought for your life, killed the other person and survived, the right thing for them to say would be "He did the wrong thing and had no right to defend his life." The intuition that you have a right to self-defence would be simply mistaken; there would be no moral basis for it.
But we need not abandon this intuition, and the fact that some forms of utilitarianism require us to do so will always be a point against them, in the same way that the intuition that sentient pleasure is good is a point in their favour. It would be morally right to defend yourself in many other moral systems, including more elaborate forms of utilitarianism. You may believe people ought to have the right of self-defence as a deontological principle in its own right, or even for utilitarian reasons (e.g., society works better that way). There might be impersonal reasons to have the right to put your personal interest in your survival above the interest that another person with slightly higher life expectancy survives. Hence, even if impersonal reasons are all the moral reasons there are, insofar as there are impersonal reasons for people to have personal reasons, those personal reasons are moral reasons.
If someone is consistently not acting as he thinks he should, and upon reflection there is no change in behaviour and no cognitive dissonance, then that person is either a hypocrite (he does not really think he should act that way) or a psychopath (he is incapable of moral reasoning). Claiming you do not have the right to self-defence even though you feel you have strong reasons not to let the other person kill you seems like an instance of hypocrisy. Being an EA while fully knowing that maximizing welfare is not the right thing to do seems like an instance of psychopathy (in the odd case that EA is only about maximizing welfare). Of course, besides these two pathologies, you might have some form of cognitive dissonance or other accidental failures. Perhaps you are not really that sure maximizing welfare is not the right thing to do. You might not have the will to commit to the things you should do in case right action consists in something more complicated than maximizing welfare. You might be overwhelmed by a strong sense that you have the right to life. It might not be practical at the time to consider these other complicated things. You might not know which moral theory is right. These are all accidental things clouding or limiting your capacity for moral reasoning, things you should prefer to overcome. This would be a way of saving the system of morality by attributing any failure to act rightly to accidents, uncertainties or pathologies. I prefer this solution of sophisticating the way moral reasons behave to claiming that there are valid reasons to act that are not moral reasons; the latter looks, even more than the former, like shielding the system of morality from the real world. If there are objective moral truths, they had better have something to do with what people want to want to do upon reflection.
But perhaps there is no system to be had. Some other philosophers believe the limitations above are inherent to moral reason, and that it is a mistake to think moral reasoning should function the same way pure reasoning does. The right thing to do will always be an open question, and all moral reasoning can do is recommend certain actions over others, never require them. If there is more than one fundamental value, or if the one fundamental value is epistemically inaccessible, I see no way out besides this solution. Incommensurable fundamental values are incompatible with pure rationality in its classical form. Moreover, if the fundamental value is simply hard to access, this solution is at least the most practical one, and the one we should use in most of applied ethics until we come up with Theory X. (In fact, it is the solution the US Supreme Court adopts.)
I personally think there is a danger in going around professing belief in some simple moral theory while ignoring it whenever that feels right. Pretending to be able to abandon morality altogether would be another danger. How actually believing and following these simplistic theories fares against those two options is uncertain. If, as in Williams's joke, one way of acting inhumanely is to act on certain kinds of principles, it does not fare very well.
It seems to me Williams made his point, or at least the point I wished him to make to you. You are saying "if this is morality, I reject it." Good. Let's look for one you can accept.
My understanding of prudential reasons is that they are reasons of the same class as those I have to want to live when someone points a gun at me. They are reasons that relate me to my own preferences and survival, not as a recipient of the utilitarian good, but...