After Nakul Krishna posted the best critique of Effective Altruism so far, I did what anyone would do. I tried to steelman his opinions into their best version, and read his sources. For the third time, I was being pointed to Bernard Williams, so I conceded and read Bernard Williams's book Ethics and the Limits of Philosophy. It's a great book, and I'd be curious to hear what Will, Toby, Nick, Amanda, Daniel, Geoff, Jeff and other philosophers in our group have to say about it at some point. But what I want to talk about is what it made me realize: that my reasons for Effective Altruism are not moral reasons.
When we act there can be several reasons for our actions, and some of those reasons may be moral in kind. When a utilitarian reasons about a trolley problem, they usually save the 5 people mostly for moral reasons. They consider the situation not from the perspective of physics, or of biology, or of the entropy of the system. They consider which moral agents are participants in the scenario, they reason about how they would like those moral agents (or, in the case of animals, moral recipients) to fare in the situation, and once done, they issue a verdict on whether they would pull the lever or not.
This is not what got me here, and I suspect not what got many of you here either.
My reasoning process goes:
Well, I could analyse this from the perspective of physics. - But that seems irrelevant.
I could analyse it from the perspective of biology. - That also doesn't seem like the most important aspect of a trolley problem.
I could find out what my selfish preferences are in this situation. - Huh, that's interesting. Given that I don't know any of the minds involved, my preferences here amount to a ranking of states of affairs from best to worst: if 6 survive, I prefer that, then 5, and so on.
I could analyse what morality would direct me to do. - This has two parts: 1) Does morality require of me that I do something in particular? and 2) Does morality permit that I do a thing from a specific (unique) set of actions?
It seems to me that morality certainly permits that I pull the lever, and possibly permits that I don't too. Does it require that I pull it? I'm not so sure. Let us assume for the time being that it does not.
After doing all this thinking, I pull the lever, save 5 people, kill one, and go home with the feeling of a job well done.
However, there are two confounding factors there. So far I have been assuming that I save them for moral reasons, so I trace those reasons back to the moral theory that would make that action permissible and even sometimes demanded. I find aggregative consequentialism (usually utilitarianism), and thus I conclude: "I am probably an aggregative consequentialist utilitarian."
There is another factor though, which is what I prefer in that situation, and that is the ranking of states of affairs I mentioned previously. Maybe I'm not a utilitarian; maybe I just want the most minds to be happy.
I never tried to tell those apart, until Bernard Williams came knocking. He makes several distinctions that are far more fine-grained than my understanding of ethics, and deeper than I could explain here; he writes well and knows how to play the philosopher game. Somehow, he made me notice those confounds in my reasoning. So I proceeded to reason about situations in which there is a conflict between the part of my reasoning that says "This is what is moral" and the part that says "I want there to be the most minds having the time of their lives."
After doing a bit of this tinkering, tweaking knobs here and there in thought experiments, I concluded that my preference for there being the most minds having the time of their lives supersedes my morals. When my mind is in conflict between those things, I will happily sacrifice the moral action to instead do the thing that makes the most minds better off the most.
So let me add one more strange label to my already elating, if not accurate, "positive utilitarian" badge:
I am an amoral Effective Altruist.
I do not help people (computers, animals and aliens) because I think this is what should be done. I do not do it because this is morally permissible or morally demanded. Like anyone, I have moral uncertainty: maybe some 5% of me is a virtue ethicist or a Kantian, or holds some other perspective. But the point is that even if those parts were winning, I would still go there and pull that lever. Toby or Nick suggested that we use a moral parliament to think about moral uncertainty. Well, if I do, then my conclusion is that I am basically not in a parliamentary system but in some other form of government, and the parliament is not that powerful. I take Effective Altruist actions not because they are what is morally right for me to do, but in spite of what is morally right to do.
So Nakul Krishna and Bernard Williams may well have, and in fact might already have, reasoned me out of the claim "utilitarianism is the right way to reason morally." That deepened my understanding of morality a fair bit.
But I'd still pull that goddamn lever.
So much the worse for Morality.
It seems from your comment that you feel the moral obligation strongly. Like the Oxford student cited by Krishna, you don't want to do what you want to do; you want to do what you ought to do.
I don't experience that feeling, so let me reply to your questions:
Not really. Pulling the lever is what I would do, and it is what I would think I have reason to do, but it is not what I think I would have moral reason to do. I would reason that a virtuous person (ex hypothesi) wouldn't kill someone, and that the moral thing to do is to let the lever be. Then I would act on my preference, which is stronger than my preference that the moral thing be done. A contradiction would only arise if you hold that all reasons for action are moral reasons, or that moral reasons have the ultimate call in all choices of action. I don't.
In the same spirit, you suggest I'm an ethical egoist. This is because when you simulated me in this lever conflict, you thought "morality comes first", so you dropped the altruism requirement to make my beliefs compatible with my action. When I reason, however, I think "morality is one of the things I should consider here", and it doesn't win over my preference for the most minds having an exulting time. So I go with my preference even when it is against morality. This is orthogonal to Ethical Egoism, a position that I consider both despicable and naïve, to be frank. (Naïve because I know the subagents with whom I share personal identity care, for themselves, about more than just happiness or their preference satisfaction. Despicable because it is one thing to be a selfish prick - understandable in an unfair universe into which we are thrown for a finite life with no given meaning or sensible narrative - but it is another thing to advocate a moral position in which you want everyone to be a selfish prick, and to believe that being a selfish prick is the right thing to do; that I find preposterous at a non-philosophical level.)
Because I don't always do what I should do. In fact, I nearly never do what is morally best. I try hard not to stray too far from the target, but I flinch from staring into the void almost as much as the average EA Joe. I really prefer knowing what the moral thing to do is in a situation; it is very informative and helpful for assessing what I will in fact do, but it is not compelling above and beyond the other contextual considerations at hand. A practical necessity, a failure of reasoning, a little momentary selfishness, and an appreciation for aesthetic values have all been known to cause me to act for non-moral reasons at times. And of course, I have often done what I should do too. I have often acted the moral way.
To reaffirm, we disagree on what Ethical Egoism means. I take it to be the position that individuals in general ought to be egoists (say, some of the time). You seem to take it differently, and furthermore to hold that if I use any egoistic reason to justify my action, then merely in virtue of my using it as a justification I mean that everyone should be (permitted to be) doing the same. That makes sense if your conception of just-ice is contractualist and you were assuming just-ification has a strong connection to just-ice. From me to me, I take it to be a justification (between my selves, perhaps), but from me to you, you could take it as an explanation of my behavior, to avoid the implications you assign to the concept of justification as demanding the choice of ethical egoism.
I'm not sure what my ethical (meta-ethical) position is, but I am pretty certain it isn't, even in part, ethical egoism.