Update on Mar 21: I have completely reworked my response to Objection 1 to make it more convincing to some and hopefully more clear. I would also like to thank everyone who has responded thus far, in particular brianwang712, Michael_S, kbog and Telofy for sustained and helpful discussions.
Update on Apr 10: I have added a new objection (Objection 1.1) that captures an objection that kbog and Michael_S have raised to my response to Objection 1. I'd also like to thank Alex_Barry for a sustained and helpful discussion.
Update on Apr 24: I have removed Objection 1.1 temporarily. It is undergoing revision to be more clear.
Hey everyone,
This post is perhaps unlike most on this forum in that it questions the validity of effective altruism rather than assumes it.
A. Some background:
I first heard about effective altruism when Professor Singer gave a talk on it at my university a few years ago while I was an undergrad. I was intrigued by the idea. At the time, I had already decided that I would donate the vast majority of my future income to charity because I thought that preventing and/or alleviating the intense suffering of others is a much better use of my money than spending it on personal luxuries. However, the idea of donating my money to effective charities was a new one to me. So, I considered effective altruism for some time, but soon I came to see a problem with it that to this day I cannot resolve. And so I am not an effective altruist (yet).
Right now, my stance is that the problem I've identified is a very real problem. However, given that so many intelligent people endorse effective altruism, there is a good chance I have gone wrong somewhere. I just can’t see where. I'm currently working on a donation plan and completing the plan requires assessing the merits of effective altruism. Thus, I would greatly appreciate your feedback.
Below, I state the problem I see with effective altruism, some likely objections and my responses to those objections.
Thanks in advance for reading!
B. The problem I see with effective altruism:
Suppose we find ourselves in the following choice situation: With our last $10, we can either help Bob avoid an extremely painful disease by donating our $10 to a charity working in his area, or we can help Amy and Susie each avoid an equally painful disease by donating our $10 to a more effective charity working in their area, but we cannot help all three. Who should we help?
Effective altruism would say that we should help the group consisting of Amy and Susie since that is the more effective use of our $10. Insofar as effective altruism says this, it effectively denies Bob (and anyone else in his place) any chance of being helped. But that seems counter to what reason and empathy would lead me to do.
Yes, Susie and Amy are two people, and two is more than one, but were they to suffer (as would happen if we chose to help Bob), it is not like any one of them would suffer more than what Bob would otherwise suffer. Indeed, were Bob to suffer, he would suffer no less than either Amy or Susie. Susie’s suffering would be felt by Susie alone. Amy’s suffering would be felt by Amy alone. And neither of their suffering would be greater than Bob’s suffering. So why simply help them over Bob rather than give all of them an equal chance of being helped by, say, tossing a coin? (footnote 1)
Footnote 1: A philosopher named John Taurek first discussed this problem and proposed this solution in his paper "Should the Numbers Count?" (1977)
C. Some likely objections and my responses:
Objection 1:
One might reply that two instances of suffering are morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (i.e., the two instances of suffering), so we should help Amy and Susie.
My response:
I don’t think two instances of suffering, spread across two people (e.g. Amy and Susie), is a morally worse case than one instance of the same kind of suffering had by one other person (e.g. Bob). I think these two cases are just as bad, morally speaking. Why’s that? Well, first of all, what makes one case morally worse than another? Answer: Morally relevant factors (i.e. things of moral significance, things that matter). Ok, and what morally relevant factors are present here? Well, experience is certainly one - in particular the severe pain that either Bob would feel or Susie and Amy would each feel, if not helped (footnote 2). Ok. So we can say that a case in which Amy and Susie would each suffer said pain is morally worse than a case in which only Bob would suffer said pain just in case there would be more pain or greater pain in the former case than in the latter case (i.e. iff Amy’s pain and Susie’s pain would together be experientially worse than Bob’s pain.)
Footnote 2: In my response to Objection 2, it will become clear that I think something else matters too: the identity of the sufferer. In other words, I don't just think suffering matters, I also think who suffers it matters. However, unlike the morally relevant factor of suffering, I don't think it's helpful for our understanding to understand this second morally relevant factor as having an effect on the moral worseness of a case, although one could understand it this way. Rather, I think it's better for our understanding to accommodate its force via the denial that we should always prevent the morally worst case (i.e. the case involving the most suffering). If you find this result deeply unintuitive, then maybe it's better for your understanding to understand this second morally relevant factor as having an effect on the moral worseness of a case, which allows you to say that what we should always do is prevent the morally worse case. In any case, ignore the morally relevant factor of identity for now as I haven't even argued for why it is morally relevant.
Here, it's helpful to keep in mind that more/greater instances of pain do not necessarily mean more/greater pain. For example, 2 very minor headaches are more instances of pain than 1 major headache, but they need not involve more pain than a major headache (i.e., they need not be experientially worse than a major headache). Thus, while there would clearly be more instances of pain in the former case than in the latter case (i.e. 2 vs 1; Amy's and Susie's vs Bob's), that does not necessarily mean that there would be more pain.
So the key question for us then is this: Are 2 instances of a given pain, spread across two people (e.g. Amy and Susie), experientially worse (i.e. do they involve more/greater pain) than one instance of the same pain had by one person (e.g. Bob)? If they are (call this thesis “Y”), then a case in which Amy and Susie would each suffer a given pain is morally worse than a case in which only Bob would suffer the given pain. If they aren’t (call this thesis “N”), then the two cases are morally just as bad, in which case Objection 1 would fail, even if we agreed that we should prevent the morally worse case.
Here’s my argument against Y:
Suppose that 5 instances of a certain minor headache, all experienced by one person, are experientially worse than a certain major headache experienced by one person. That is, suppose that any person in the world who has an accurate idea/appreciation of what 5 instances of this certain minor headache feels like and of what this certain major headache feels like would prefer to endure the major headache over the 5 minor headaches if put to the choice. Under this supposition, someone who holds Y must also hold that 5 minor headaches, spread across 5 people, are experientially worse than a major headache had by one person. Why? Because, at bottom, someone who holds Y must also hold that 5 minor headaches spread across 5 people are experientially just as bad as 5 minor headaches all had by one person.
So let's assess whether 5 minor headaches, spread across 5 people, really are experientially worse than a major headache had by one person. Given the supposition above, consider first what makes a single person who suffers 5 minor headaches experientially worse off than a person who suffers just 1 major headache, other things being equal.
Well, imagine that we were this person who suffers 5 minor headaches. We suffer one minor headache one day, suffer another minor headache sometime after that, then another after that, etc. By the end of our 5th minor headache, we will have experienced what it’s like to go through 5 minor headaches. After all, we went through 5 minor headaches! Note that the what-it’s-like-of-going-through-5-headaches consists simply in the what-it’s-like-of-going-through-the-first-minor-headache then the what-it’s-like-of-going-through-the-second-minor-headache then the what-it’s-like-of-going-through-the-third-minor-headache, etc. Importantly, the what-it’s-like-of-going-through-5-headaches is not whatever we experience right after having our 5th headache (e.g. exhaustion that might set in after going through many headaches or some super painful headache that is the "synthesis" of the intensity of the past 5 minor headaches). It is not a singular/continuous feeling like the feeling we have when we're experiencing a normal pain episode. It is simply this: the what-it’s-like of going through one minor headache, then another (some time later), then another, then another, then another. Nothing more. Nothing less.
Now, by the end of our 5th minor headache, we might have long forgotten about the first minor headache because, say, it happened so long ago. So, by the end of our 5th minor headache, we might not have an accurate appreciation of what it’s like to go through 5 minor headaches even though we in fact have experienced what it’s like to go through 5 minor headaches. As a result, if someone asked us whether we’ve been through more pain due to our minor headaches or more pain through a major headache that, say, we recently experienced, we would likely incorrectly answer the latter.
But, if we did have an accurate appreciation of what it's like to go through 5 minor headaches, say, because we experienced all 5 minor headaches rather recently, then there would be a clear sense to us that going through them was experientially worse than the major headache. The 5 minor headaches would each be "fresh in our mind", and thus the what-it's-like-of-going-through-5-minor-headaches would be "fresh in our mind". And with that what-it's-like fresh in mind, it would seem clear to us that it caused us more pain than the major headache did.
Now, a headache being “fresh in our mind” does not mean that the headache needs to be so fresh that it is qualitatively the same as experiencing a real headache. Being fresh in our mind just means we have an accurate appreciation/idea of what it feels like, just as we have some accurate idea of what our favorite dish tastes like.
Because we have appreciations of our past pains (to varying degrees of accuracy), we sometimes compare them and have a clear sense that one set of pains is worse than another. But it is not the comparison and the clear sense we have of one set of pains being worse than another that ultimately makes one set of pains worse than another. Rather, it is the other way around: it is the what-it’s-like-of-having-5-minor-headaches that is worse than the what-it’s-like-of-having-a-major-headache. And if we have an accurate appreciation of both what-it’s-likes, then we will conclude the same. But, when we don’t, then our own conclusions could be wrong, like in the example provided earlier of a forgotten minor headache.
So, at the end of the day, what makes a person who has 5 minor headaches worse off than a person who has 1 major headache is the fact that he experienced the what-it’s-like-of-going-through-5-minor-headaches.
But, in the case where the 5 minor headaches are spread across 5 people, there is no longer the what-it’s-like-of-going-through-5-minor-headaches because each of the 5 headaches is experienced by a different person. As a result, the only what-it’s-like that is present is the what-it’s-like-of-experiencing-one-minor-headache. Five different people each experience this what-it’s-like, but no one experiences what-it’s-like-of-going-through-5-minor-headaches. Moreover, the what-it’s-like of each of the 5 people cannot be linked to form the what-it’s-like-of-experiencing-5-minor-headaches because the 5 people are experientially independent beings.
Now, it's clearly the case that the what-it's-like-of-going-through-1-minor-headache is not experientially worse than the what-it's-like-of-going-through-a-major-headache. Given what I said in the previous paragraph, therefore, there is nothing present that could be experientially worse than the what-it's-like-to-go-through-a-major-headache in the case where the 5 minor headaches are spread across 5 people. Therefore, 5 minor headaches, spread across 5 people, cannot be (and thus are not) worse, experientially speaking, than one major headache.
Indeed, five independent what-it's-likes-of-going-through-1-minor-headache is very different from a single what-it's-like-of-going-through-5-minor-headaches. And given a moment's reflection, one thing should be clear: only the latter what-it's-like can plausibly be experientially worse than a major headache.
Thus, one should not treat 5 minor headaches spread across 5 people as being experientially just as bad as 5 minor headaches all had by 1 person. The latter is experientially worse than the former. The latter involves more/greater pain.
We can thus make the following argument against Y:
P1) If Y is true, then 5 minor headaches spread across 5 people are experientially just as bad as 5 minor headaches all had by 1 person.
P2) But that is not the case (since 5 minor headaches all had by 1 person are experientially worse than 5 minor headaches spread across 5 people).
C) Therefore Y is false. And therefore Objection 1 fails, even if it's granted that we should prevent the morally worse case.
Objection 1.1: (temporarily removed for revision; see the Apr 24 update above)
Objection 1.2:
One might reply that experience is a morally relevant factor, but when the amount of pain in each case is the same (i.e. when the cases are experientially just as bad), the number of people in each case also becomes a morally relevant factor. Since the case in which Amy and Susie would each suffer involves more people, therefore, it is still the morally worse case.
My response:
I will respond to this objection in my response to Objection 2.
Objection 1.3:
One might reply that the number of people involved in each case is a morally relevant factor in and of itself (i.e. completely independent of the amount of pain in each case). That is, one might say that the inherent moral relevance of the number of people involved in each case must be reconciled with the inherent moral relevance of the amount of pain in each case, and that therefore, in principle, a case in which many people would each suffer a relatively lesser pain can be morally worse than a case in which one other person would suffer a relatively greater pain, so long as there are enough people on the side of the many. For example, between helping a million people avoid depression or one other person avoid very severe depression, one might have the intuition that we should help the million, i.e. that a case in which a million people would suffer depression is morally worse.
My response:
I don’t deny that many people have this intuition, but I think this intuition is based on a failure to recognize and/or appreciate some important facts. In particular, I think that if you really kept in the forefront of your mind the fact that not one of the million would suffer worse than the one, and the fact that the million of them together would not suffer worse than the one (assuming my response to Objection 1 succeeds), then your intuition would not be as it is (footnote 3).
Nevertheless, you might still feel that the million people should still have a chance of being helped. I agree, but this is not because of the sheer number of them involved. Rather, it is because which individual suffers matters. (Please see my response to Objection 2.)
Footnote 3: For those familiar with Derk Pereboom’s position in the free will debate, he makes an analogous point. He doesn’t think we have free will, but admits that many have the intuition that we do. But he points out that this is because we are generally not aware of the deterministic psychological/neurological/physical causes of our actions. But once we become aware of them – once we have them in the forefront of our minds – our intuition would not be that we are free. See pg 95 of “Free Will, Agency, and Meaning in Life” (Pereboom, 2014)
Objection 2:
One might reply that we should help Amy and Susie because either of their suffering neutralizes/cancels out Bob’s suffering, leaving the other’s suffering to carry the day in favor of helping them over Bob.
My response:
I don’t think one person’s suffering can neutralize/cancel out another person’s suffering because who suffers matters. Which individual it is that suffers matters because it is the sufferer who bears the complete burden of the suffering. It is the particular person who ends up suffering that feels all the suffering. This is an obvious fact, but it is also a very significant fact when properly appreciated, and I don’t think it is properly appreciated.
Think about it. The particular person(s) who suffers has to bear everything. If we save Amy and Susie, it is Bob – that particular vantage point on the world - who has to feel all of the suffering (which it bears remembering is suffering that would be no less painful than the suffering Amy and Susie would each otherwise endure). The same, of course, is true of each of Amy and Susie were we to save Bob.
I fear that saying any more might make the significance of the fact I'm pointing to less clear. For those who appreciate the significance of what I'm getting at, it should be clear that neither Amy's nor Susie's suffering can be used to neutralize/cancel out Bob's suffering and vice versa. Yes, it's the same kind of suffering, but it's importantly different whether Amy and Susie each experience it or Bob experiences it, because again, whoever experiences it is the one who has to bear all of it.
Notice that this response to objection 2 is importantly compatible with empathizing with every individual involved (e.g., Amy, Susie and Bob). Indeed, to empathize with only select individuals is biased. Yet, it seems to me that many people are in fact likely to forget to empathize with the group containing the fewer number. Note that as I understand it, to empathize with someone is to imagine oneself in their shoes and to care about that imagined perspective.
Also, notice that this response to objection 2 also deals with Objection 1.2 since this response argues against (what seems to me) the only plausible way in which the number of people involved might be thought to be relevant when the amount of pain involved in each case is the same: when the amount of pain involved in each case is the same, it might be thought that one person's pain can neutralize or cancel out another person's pain, e.g. that the suffering Amy would feel can neutralize or cancel out the suffering Bob would feel, leaving only the suffering that Susie would feel left in play, and that therefore the case in which Amy and Susie would suffer is morally worse than the case in which Bob would suffer. But if my response to Objection 2 is right, then this thought is wrong.
Just to be clear, this is not to say that I think one person's suffering cannot balance (or, in the case of greater suffering, outweigh) another person's equal (or lesser) suffering such that the reasonable and empathetic thing to do is to give the person who would face the greater suffering a higher chance of being helped. In fact, I think it can. But balancing is not the same as neutralizing/canceling out. Bob's suffering balances out Amy's suffering and it also independently balances out Susie's suffering precisely because Bob's suffering does not get neutralized/cancelled out by either of their suffering.
My own view is that we should give the person who would face the greater suffering a higher chance of being saved in proportion to how much greater his suffering would be relative to the suffering that the other person(s) would each otherwise face. We shouldn't automatically help him just because he would face a greater suffering if not helped. After all, who suffers matters, and this includes those who would be faced with the lesser suffering if not helped (footnote 4).
Footnote 4: My own view is slightly more complicated than this, but those details aren't important given the simple sorts of choice situations discussed in this essay.
Going back to Objection 1.3, this then explains why I agree that we should still give those who would each suffer a less serious depression a chance of being helped, even though the one other person would suffer more if not saved. Importantly, the number of people who would each suffer the less serious depression is irrelevant. I would give them a chance of being saved whether they are 2 persons or a million or a billion. How high of a chance would I give them? In proportion to how their depression compares in suffering to the single person's severe depression. So, if it involves slightly less suffering, I would give them around a 48% chance of being helped. If it involves a lot less suffering, then I would give them a much lower chance (footnote 5).
Footnote 5: Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than endure the torture episode. This would explain why in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords well with our intuition.
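To make the proposal concrete, here is a small sketch of the weighted-lottery procedure I have in mind. This is only an illustration: the function name and the 52/48 split are assumptions made up for the example, not part of the argument.

```python
import random

def weighted_rescue_lottery(options, weights):
    """Pick one party to help, with chances proportional to `weights`.

    Illustrative sketch only: `options` are the parties we could help,
    and each weight reflects how severe that party's prospective
    suffering is relative to the others' (not how many people are in it).
    """
    return random.choices(options, weights=weights, k=1)[0]

# E.g. one person facing very severe depression vs. a group each facing
# slightly less severe depression: roughly a 52/48 split in the one
# person's favor, regardless of how many people are in the group.
helped = weighted_rescue_lottery(["the one", "the many"], [52, 48])
```

Note that the number of options and their weights track only the severity of the suffering at stake for each party, which is the point of the proposal: adding more people to a group never changes its weight.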
Objection 3:
One might reply that from “the perspective of the universe” or “moral perspective” or “objective perspective”, either of their suffering neutralizes/cancels out Bob’s suffering, leaving the other’s suffering to carry the day in favor of helping them over Bob.
My response:
As I understand it, the perspective of the universe is the impartial or unbiased perspective where personal biases are excluded from consideration. As a result, such a perspective entails that we should give equal weight to equal suffering. For example, whereas I would give more weight to my own suffering than to the equal suffering of others (due to the personal bias involved in my everyday personal perspective), if I took on the perspective of the universe, I would have to at least intellectually admit that their equal suffering matters the same amount as mine. Of course, it doesn’t matter the same amount as mine from my perspective. It matters the same amount as mine from the perspective of the universe that I have taken on. We might say it matters the same amount as mine period. However, none of this entails that, from the perspective of the universe, which individual suffers doesn’t matter – that whether it is I who suffers X or someone else who suffers X doesn’t matter. Clearly it does matter for the reason I gave earlier. Giving equal weight to equal suffering does not entail that who suffers said suffering doesn’t matter. It is precisely because it matters that in a choice situation in which we can either save person A from suffering X or person B from suffering X we think we should flip a coin to give each an equal chance of being saved, rather than, say, choosing one of them to save on a whim. This is our way of acknowledging that A suffering is importantly different from B suffering - that who suffers matters.
Even if I'm technically wrong about what the perspective of the universe - as understood by utilitarians - amounts to, all that shows is that the perspective of the universe, so understood, is not the moral perspective. For who suffers matters (assuming my response to Objection 2 is correct), and so the moral perspective must be one from which this fact is acknowledged. Any perspective from which it isn't therefore cannot be the moral perspective.
D. Conclusion:
I therefore think that according to reason and empathy, Bob should be accorded an equal chance of being helped (say via flipping a coin) as Amy and Susie. This conclusion holds regardless of the number of people that are added to Amy and Susie's group as long as the kind of suffering remains the same. So for example, if with a $X donation we can either help Bob avoid an extremely painful disease or help a million other people avoid the same painful disease, but not all of them, reason and empathy would say to flip a coin – a conclusion that is surely against effective altruism.
E. One final objection:
One might say that this conclusion is too counter-intuitive to be correct, and that therefore something must have gone wrong in my reasoning, even though it may not be clear what that something is.
My response:
But is it really all that counter-intuitive when we bear in mind all that I have said? Importantly, let us bear in mind three facts:
1) Were we to save the million people instead of Bob, Bob would suffer in a way that is no less painful than any one of the million others otherwise would. Indeed, he would suffer in a way that is just as painful as any one among the million. Conversely, were we to save Bob, no one among the million suffering would suffer in a way that is more painful than Bob would otherwise suffer. Indeed, the most any one of them would suffer is the same as what Bob would otherwise suffer.
2) The suffering of the million would involve no more pain than the pain Bob would feel (assuming my response to Objection 1 is correct). That is, a million instances of the given painful disease, spread across a million people, would not be experientially worse - would not involve more pain or greater pain - than one instance of the same painful disease had by Bob. (Again, keep in mind that more/greater instances of a pain does not necessarily mean more/greater pain.)
3) Were we to save the million and let Bob suffer, it is he – not you, not me, and certainly not the million of others – who has to bear that pain. It is that particular person, that unique sentient perspective on the world who has to bear it all.
In such a choice situation, reason and empathy tell me to give him an equal chance to be saved. To just save the million seems to me to completely neglect what Bob has to suffer, whereas my approach seems to neglect no one.
One additional objection that one might have is that if Bob, Susie, and Amy all knew beforehand that you would end up in a situation where you could donate $10 to alleviate the suffering of either two of them or of one of them, but they didn't know beforehand which two people would be pitted against which one person (e.g., it could just as easily be alleviating Bob + Susie's suffering vs. alleviating Amy's suffering, or Bob + Amy's suffering vs. Susie's suffering, etc.), then they would all sign an agreement directing you to send a donation such that you would alleviate two people's suffering rather than one, since this would give each of them the best chance of having their suffering alleviated. This is related to Rawls' veil of ignorance argument.
And if Bob, Susie, Amy, and a million others were to sign an agreement directing your choice to donate $X to alleviate one person's suffering or a million people's suffering, again all of them behind a veil of ignorance, none of them would hesitate for a second to sign an agreement that said, "Please donate such that you would alleviate a million people's suffering, and please oh please don't just flip a coin."
More broadly speaking…
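The expected-chance arithmetic behind this veil-of-ignorance argument can be sketched as follows (a hypothetical illustration, assuming each of the three people is equally likely to end up as the "one" pitted against the other two):

```python
from fractions import Fraction

# Behind the veil, each of the three is equally likely to be the lone
# person pitted against the other two.
p_alone = Fraction(1, 3)
p_in_pair = Fraction(2, 3)

# Policy A: the donor always helps the pair of two.
# You are helped exactly when you land in the pair.
p_helped_always_pair = p_in_pair  # 2/3

# Policy B: the donor flips a fair coin between the pair and the lone person.
p_helped_coin = p_in_pair * Fraction(1, 2) + p_alone * Fraction(1, 2)  # 1/2

# Everyone's ex-ante chance is higher under Policy A, which is why all
# three would sign the agreement.
assert p_helped_always_pair > p_helped_coin
```

The same comparison generalizes: with one person versus a million, the "always help the larger group" policy gives each signer an ex-ante chance of roughly 1,000,000/1,000,001 of being helped, versus 1/2 under a coin flip.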
I think you're making a mistaken assumption here about your readers. Conditional on agreeing 5 minor headaches in one person is worse than 1 major headache in one person, I would feel exactly the same if it were spread out over 5 people. I expect the majority of EAs would as well.
You think aggregating welfare between individuals is a flawed approach, such that you are indifferent between alleviating an equal amount of suffering for 1 person or for each of a million people.
You conclude that these values recommend giving to charities that directly address the sources of the most intense individual suffering, and that between them, one should not choose by cost-effectiveness, but randomly. One should not give to, say, GiveDirectly, which does not directly tackle the most intense suffering.
This conclusion seems correct only for clear-cut textbook examples…
I think you are conflating EA with utilitarianism/consequentialism. To be fair this is totally understandable since many EAs are consequentialists and consequentialist EAs may not be careful to make or even see such a distinction, but as someone who is closest to being a virtue ethicist (although my actual metaethics are way more complicated) I see EA as being mainly about intentionally focusing on effectiveness rather than just doing what feels good in our altruistic endeavors.
If you think PETA is the best bet for reducing suffering, you might want to check out other farm animal advocacy organizations at Animal Charity Evaluators' website. The Organization to Prevent Intense Suffering (OPIS) is an EA-aligned organization which has a more explicit focus on advancing projects which directly mitigate abject and concrete suffering. You might also be interested in their work.
I think Brian Tomasik has addressed this briefly and Nick Bostrom at greater length.
What I’ve found most convincing (quoting myself in response to a case that hinged on the similarity of the two or many experiences):
…
Hey Jeffhe - the position you put forward looks structurally really similar to elements of Scanlon's, and you discuss a dilemma that is often discussed in the context of his work (the lifeboat/the rocks example). It also seems like, given your reply to Objection 3, you might really like its approach (if you are not familiar with it already). Subsection 7 of this SEP article (https://plato.stanford.edu/entries/contractualism/) gives a good overview of the case that is tied to the one you discuss. The idea of the separateness of persons, and the idea that o…
The following is roughly how I think about it:
If I am in a situation where I need help, then for purely selfish reasons, I would prefer people-who-are-capable-of-helping-me to act in such a way that has the highest probability of helping me. Because I obviously want my probability of getting help, to be as high as possible.
Let's suppose that, as in your original example, I am one of three people who need help, and someone is thinking about whether to act in a way that helps one person, or to act in a way that helps two people. Well, if they act in a way th…
I used to think that a large benefit to a single person was always more important than a smaller benefit to multiple people (no matter how many people experienced the smaller benefit). That's why I wrote this post asking others for counterarguments. After reading the comments on that post (one of which linked to this article), I became persuaded that I was wrong.
Here's an additional counterargument. Let's say that I have two choices:
A. I can save 1 person from a disease that decreases her quality of life by 95%; or
B. I can save 5 people from a disease tha…
I agree that aggregating suffering of different people is problematic. By necessity, it happens on a rather abstract level, divorced from the experiential. I would say that can lead to a certain impersonal approach which ignores the immediate reality of the human condition. Certainly we should be aware of how we truly experience the world.
However I think here we transcend ethics. We can't hope to resolve deep issues of suffering within ethics, because we are somewhat egocentric beings by nature. We see only through our eyes and feel our body. I don't se…
(Posted as top-level comment as I had some general things to say, was originally a response here)
I just wanted to say I thought this comment did a good job explaining the basis behind your moral intuitions, which I had not really felt a strong motivation for before now. I still don't find it particularly compelling myself, but I can understand why others could find it important.
Overall I find this post confusing though, since the framing seems to be 'Effective Altruism is making an intellectual mistake' whereas you just actually seem to have a different se…
What? It seems to be exactly what reason and empathy would lead one to do. Reason and empathy don't tell you to arbitrarily save fewer people. At best, you could argue that empathy pulls you in neither direction, while conceding that it's still more reasonable to save more rather than fewer. You've not written an argument, just a bald assertion. You're dressing it up to look like a philosophical argument, but there is none.
…