
Because you told me that it's the same amount of pain as five minor toothaches and you also told me that each minor toothache is 1 base unit of pain.

Where in the supposition or the line of reasoning that I laid out earlier (i.e. P1) through P5)) did I say that 1 major headache involves the same amount of pain as 5 minor toothaches?

I attributed that line of reasoning to you because I thought that was how you would get to C) from the supposition that 5 minor toothaches had by one person is experientially just as bad as 1 major toothache had by one person.

But you then denied that that line of reasoning represents your line of reasoning. Specifically, you denied that P1) is the basis for asserting P2). When I asked you what your basis for P2) was, you asserted that I told you that 1 major headache involves the same amount of pain as five minor toothaches. But where did I say this?

In any case, it would certainly help if you described your actual step by step reasoning from the supposition to C), since, apparently, I got it wrong.

If you mean that it feels worse to any given person involved, yes it ignores the difference, but that's clearly the point, so I don't know what you're doing here other than merely restating it and saying "I don't agree."

I'm not merely restating the fact that Reason S ignores this difference. I am restating it as part of a further argument against your sense of "involves more pain than" or "involves the same amount of pain as". The argument in essence goes:

P1) Your sense relies on Reason S.

P2) Reason S does not care about pain-qua-how-it-feels (because it ignores the above stated difference).

P3) We take pain to matter because of how it feels.

C) Therefore, your sense is not in harmony with why pain matters (or at least why we take pain to matter).

I had to restate that Reason S ignores this difference as my support for P2), so I was not merely restating it.

On the other hand, you do not care how many people are in pain, and you do not care how much pain someone experiences so long as there is someone else who is in more pain, so if anyone's got to figure out whether or not they "care" enough it's you.

Both accusations are problematic.

The first accusation is not entirely true. It is only in situations where I have to choose between helping, say, Amy and Susie or just Bob (i.e. situations where no one in the minority party overlaps with anyone in the majority party) that I don't care about how many people are in pain. However, I would care about how many people are in pain in situations where I have to choose between helping, say, Amy and Susie or just Amy (i.e. situations where the minority party is a mere subset of the majority party). This is due to the strict Pareto principle, which would make Amy and Susie each suffering morally worse than just Amy suffering, but would not make Amy and Susie suffering morally worse than Bob suffering. I don't want to get into this at this point because it's not very relevant to our discussion. Suffice it to say that it's not entirely true that I don't care about how many people are in pain.

The second accusation is plain false. As I made clear in my response to Objection 2 in my post, I think who suffers matters. As a result, if I could either save one person from suffering some pain or another person from suffering a slightly lesser pain, I would give each person a chance of being saved in proportion to how much each has to suffer. This is what I think I should do. Ironically, your second accusation against me is precisely true of what you stand for.

You've pretty much been repeating yourself for the past several weeks, so, sure.

In my past few replies, I have:

1) Outlined in explicit terms a line of reasoning that got from the supposition to C), which I attributed to you.

2) Highlighted that that line of reasoning appealed to Reason S.

3) On that basis, argued that your sense of "involves the same amount of pain as" goes against the spirit of why pain matters.

If that comes across to you as "just repeating myself for the past several weeks", then I can only think that you aren't putting enough effort into trying to understand what I'm saying.

the reason why 5 minor toothaches spread among 5 people is equivalent to 5 minor toothaches had by one person is DIFFERENT from the reason why 5 minor toothaches had by one person is equivalent to 1 major toothache had by one person.

No, both equivalencies are justified by the fact that they involve the same amount of base units of pain.

So you're saying that just as 5 MiTs/5 people is equivalent to 5 MiTs/1 person because both sides involve the same amount of base units of pain, 5 MiTs/1 person is equivalent to 1 MaT/1 person because both sides involve the same amount of base units of pain (and not because both sides give rise to what-it's-likes that are experientially just as bad).

My question to you then is this: On what basis are you able to say that 1 MaT/1 person involves 5 base units of pain?

But Reason S doesn't give a crap about how bad the pains on the two sides of the equation FEEL

Sure it does. The presence of pain is equivalent to feeling bad. Feeling bad is precisely what is at stake here, and all that I care about.

Reason S cares about the amount of base units of pain there are because pain feels bad, but in my opinion, that doesn't sufficiently show that it cares about pain-qua-how-it-feels. It doesn't sufficiently show that it cares about pain-qua-how-it-feels because 5 base units of pain all experienced by one person feels a whole heck of a lot worse than anything felt when 5 base units of pain are spread among 5 people, yet Reason S completely ignores this difference. If Reason S truly cared about pain-qua-how-it-feels, it cannot ignore this difference.

I understand where you're coming from though. You hold that Reason S cares about the quantity of base units of pain precisely because pain feels bad, and that this fact alone sufficiently shows that Reason S is in harmony with the fact that we take pain to matter because of how it feels (i.e. that Reason S cares about pain-qua-how-it-feels).

However, given what I just said, I think this fact alone is too weak to show that Reason S is in harmony with the fact that we take pain to matter because of how it feels. So I believe my objection stands.

Have we hit bedrock?

I see the problem. I will fix this. Thanks.

I was trying to keep the discussions of 'which kind of pain is morally relevant' and of your proposed system of giving people a chance to be helped in proportion to their suffering separate. It might be that they are so intertwined as for this to be unproductive, but I think I would like you to respond to my comment about the latter before we discuss it further.

I think I see the original argument you were going for. The argument against my approach-minus-the-who-suffers-matters-bit is that it renders all resulting states of affairs equally bad, morally speaking, because all resulting states of affairs would involve the same total pain. Given that we should prevent the morally worst case, this means that my approach would have it that we shouldn't take any action, and that's just absurd. Therefore, my way of determining total pain is problematic. Here "a resulting state of affairs" is broadly understood as the indefinite span of time following a possible action, as opposed to any particular point in time following a possible action. On this broad understanding, it seems undeniable that each possible action will result in a state of affairs with the same total maximal pain, since there will surely be someone who suffers maximally at some point in time in each indefinite span of time.

Well, if who suffered didn't matter, then I think leximin should be used to determine which resulting state of affairs is morally worse. According to leximin, we determine which state of affairs is morally better as follows:

Step 1: From each state of affairs, select a person among the worst off in that state of affairs. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move on to Step 2.

Step 2: From each state of affairs, select a person among the worst off in that state of affairs, except for the person who has already been selected. Compare these people. If there is a person who is better off than the rest, then that state of affairs is morally better than all the others. If these people are all just as badly off, then move on to Step 3.

And so forth...

According to this method, even though all resulting states of affairs will involve the same total pain, certain resulting states of affairs will be morally better than others, and we should act so as to realize them.
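Here is a minimal sketch of the leximin comparison just described (the list-of-badness-scores encoding and the example numbers are purely illustrative, not anything from the discussion itself):

```python
def leximin_better(state_a, state_b):
    """Compare two states of affairs by leximin.

    Each state is a list of 'how badly off' scores, one per person
    (higher = worse off). Returns the morally better state, or None
    if leximin ranks them as equally good.
    """
    # Step 1, Step 2, ...: repeatedly compare the worst-off person
    # not yet considered in each state.
    a = sorted(state_a, reverse=True)  # worst off first
    b = sorted(state_b, reverse=True)
    for badness_a, badness_b in zip(a, b):
        if badness_a < badness_b:
            return state_a  # a's worst-off person is better off
        if badness_b < badness_a:
            return state_b
    return None  # tied at every step

# Both states involve the same maximal individual pain (someone suffering
# at level 9), yet leximin still prefers the first.
print(leximin_better([9, 5], [9, 9]))  # -> [9, 5]
```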

My appeal to leximin is not ad hoc because it takes an individual's suffering seriously, which is in line with my approach. Notice that leximin can be used to justify saving Susie and Amy over Bob. I don't actually endorse leximin, however, because leximin does not take an individual's identity seriously (i.e. it doesn't treat who suffers as morally relevant, whereas I do: I think who suffers matters).

So that is one response I have to your argument: it grants you that the total pain in each resulting state of affairs would be the same and then argues that this does not mean that all resulting states of affairs would be morally just as bad.

Another response I have is that, most probably, different states of affairs will involve different amounts of pain, and so some states of affairs will be morally worse than others just based on the total pain involved. This becomes more plausible when we keep in mind what the maximum amount of pain is on my approach. It is not the most intense pain, e.g. a torture session. It is not the longest pain, e.g. a minor headache that lasts one's entire life. Rather, it is the most intense pain over the longest period of time. The person who suffers maximum pain is the person who suffers the most intense pain for the longest period of time. Realizing this, it is unlikely that each possible action will lead to a state of affairs involving such maximal pain. (Note that this is to deny A1.)

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

But this seems extremely far removed from any day-to-day intuitions we would have about morality, no? If you flipped a coin to decide whether you should murder each person you met (a very implementable approximation of this result), I doubt many would find this justified on the basis that someone in the future is going to be suffering much more than them.

To give each possible action an equal chance is certainly not to flip a coin between murdering someone and not murdering them. At any given moment, I have thousands (or perhaps an infinite number) of possible actions I could take. Murdering the person in front of me is but one. (There are many complexities here that make the discussion hard, like what counts as a distinct action.)

However, I understand that the point of your objection is that my approach can allow the murder of an innocent. In this way, your objection is like the classical argument against utilitarianism. Anyways, I guess, like effective altruism, I can recognize rules that forbid murder, etc. I should clarify that my goal is not to come up with a complete moral theory as such. Rather, it is to show that we shouldn't use the utilitarian way of determining "total pain", which underlies effective altruism.

I have argued for this by

1) arguing that the utilitarian way of determining "total pain" goes against the spirit of why we take pain to matter in the first place. In response, you have suggested a different framing of utilitarianism on which utilitarians determine a "total moral value" based on people's pains, which is different from determining a total pain. I still need to address this point.

2) responding to your objection against my way of determining "total pain" (first half of this reply)

Hey Alex! Sorry for the super late response! I have a self-control problem and my life got derailed a bit in the past week >< Anyways, I'm back :P

How much would you be willing to trade off helping people versus the help being distributed fairly? e.g. if you could either have a 95% chance of helping people in proportion to their suffering, but a 5% chance of helping no one, versus a 100% chance of only helping the person suffering the most.

This is an interesting question, adding another layer of chance to the original scenario. As you know, if (there was a 100% chance) I could give each person a chance of being saved in proportion to his/her suffering, I would do that instead of outright saving the person who has the worst to suffer. After all, this is what I think we should do, given that suffering matters, but who suffers also matters. Here, there seems to me a nice harmony between these two morally relevant factors – the suffering and the identity of who suffers – where both have a sufficient impact on what we ought to do: we ought to give each person a chance of being saved because who suffers matters, but each person's chance ought to be in proportion to what he/she has to suffer because suffering also matters.
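Here is a minimal sketch of the "chance proportional to suffering" rule, with purely hypothetical names and numbers:

```python
import random

def pick_person_to_help(suffering):
    """Choose one person to help, with probability proportional to
    how much each person stands to suffer if not helped.

    `suffering` maps a person's name to a non-negative number
    representing their prospective suffering.
    """
    people = list(suffering)
    weights = [suffering[p] for p in people]
    # random.choices draws with probability proportional to the weights,
    # so the worst-off person gets the largest, but not a guaranteed,
    # chance of being picked.
    return random.choices(people, weights=weights, k=1)[0]

# Hypothetical example: Bob has the most to suffer, so he gets an
# 8/20 = 40% chance; Amy and Susie each get a 30% chance.
print(pick_person_to_help({"Amy": 6, "Susie": 6, "Bob": 8}))
```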

Now you’re asking me what I would do if there was only a 95% chance that I could give each person a chance of being saved in proportion to his/her suffering with a 5% chance of not helping anyone at all: would I accept the 95% chance or outright save the person who has the worst to suffer?

Well, what should I do? I must admit it’s not clear. I think it comes down to how much weight we should place on the morally relevant factor of identity. The more weight it has, the more likely the answer is that we should accept the 95% chance. I think it’s plausible that it has enough weight such that we should accept a 95% chance, but not a 40% chance. If one is a moral realist, one can accept that there is a correct objective answer yet not know what it is.

One complication is that you mention the notion of fairness. On my account of what matters, the fair thing to do – as you suggest – seems to be to give each person a chance in proportion to his/her suffering. Fairness is often thought of as a morally relevant factor in and of itself, but if the fair thing to do in any given situation is grounded in other morally relevant factors (e.g. experience and identity), then its moral relevance might be derived. If so, I think we can ignore the notion of fairness.

For example:

• Suppose Alice is experiencing 10 units of suffering (by some common metric)

• 10n people (call them group B) are experiencing 1 unit of suffering each

• We can help exactly one person, and reduce their suffering to 0

In this case your principle says we should give Alice a 10/(10+10n) = 1/(n+1) chance of being helped, and each person in group B a 1/(10+10n) chance of being helped. But in the case where we help someone from group B, the level of 'total pain' remains at 10, as Alice is not helped.

This means that a proportion n/(n+1) of the time the 'total pain' remains unchanged, i.e. we can make the chance of actually affecting the thing you say is morally important arbitrarily small. It seems strange to say your morality is motivated by x if your actions are so distanced from it that your chance of actually affecting x can go to zero.

This is a fantastic objection. This objection is very much in the spirit of the objection I was raising against utilitarianism: both objections show that the respective approaches can trivialize suffering given enough people (i.e. given that n is large enough). I think this objection shows a serious problem with giving each person a chance of being saved proportional to his/her suffering insofar as it shows that doing so can lead us to give a very, very small chance to someone who has a lot to suffer, when it intuitively seems to me that we should give him/her a much higher chance of being saved given how much more he/she has to suffer relative to any other person.
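To make the arithmetic of your example explicit (this only restates the figures already given above, nothing new is added):

$$
P(\text{Alice is helped}) = \frac{10}{10+10n} = \frac{1}{n+1}, \qquad
P(\text{someone in B is helped}) = \frac{10n}{10+10n} = \frac{n}{n+1} \longrightarrow 1 \text{ as } n \to \infty,
$$

so by making n large enough, the chance that the worst-off person is actually helped can be made as small as one likes.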

So perhaps we ought to outright save the person who has the most to suffer. But this conclusion doesn’t seem right either in a trade-off situation involving him and one other person who has just a little less to suffer, but still a whole lot. In such a situation, it intuitively seems that we should give one a slightly higher chance of being saved than the other, just as it intuitively seems that we should give each an equal chance of being saved in a trade-off situation where they each have the same amount to suffer.

I also have an intuition against utilitarianism. So if we use intuitions as our guide, it seems to leave us nowhere. Maybe one or more of these intuitions can be “evolutionarily debunked”, sparing one of the three approaches, but I don’t really have an idea of how that would go.

Indeed, for another example:

• Say a child (child A) is about to be tortured for the rest of their life, which you can prevent for £2.

• However another child (child B) has just dropped their ice cream, which has slightly upset them (although not much, they are just a little sad). You could buy them another ice cream for £2, which would cheer them up.

You only have £2, so you can only help one of the children. Under your system you would give some (admittedly (hopefully!) very small) chance that you would help child B. However, in the case that you rolled your 3^^^3-sided die and it came up in favour of B, as you started walking over to the ice cream van it seems like it would be hard to say you were acting in accordance with "reason and empathy".

I had anticipated this objection when I wrote my post. In footnote 4, I wrote:

“Notice that with certain types of pain episodes, such as a torture episode vs a minor headache, there is such a big gap in amount of suffering between them that any clear-headed person in the world would rather endure an infinite number of minor headaches (i.e. live with very frequent minor headaches in an immortal life) than to endure the torture episode. This would explain why in a choice situation in which we can either save a person from torture or x number of persons from a minor headache (or 1 person from x minor headaches), we would just save the person who would be tortured rather than give the other(s) even the slightest chance of being helped. And I think this accords with our intuition well.”

Admittedly, there are two potential problems with what I say in my footnote.

1) It’s not clear that any clear-headed person would do as I say, since it seems possible that the what-it’s-like-of-going-through-infinite-minor-headaches can be experientially worse than the what-it’s-like-of-going-through-a-torture-session.

2) Even if any clear-headed person would do as I say, it’s not clear that this can yield the result that we should outright save the one person from torture. It depends on how the math works out, and I’m terrible at math lol. Does 1/infinity = 0? If so, then it seems we ought to give the person who would suffer the minor headache a 0% chance (i.e. we ought to outright save the other person from torture).
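One way to make the "1/infinity" question precise, assuming the chance really is set in proportion to suffering and using a hypothetical weight M (the M is just illustrative notation, not anything from my post), is as a limit:

$$
P(\text{help the headache sufferer}) = \frac{1}{1+M} \longrightarrow 0 \text{ as } M \to \infty,
$$

where M is how many minor headaches the torture episode is treated as equivalent to. On this reading, the minor-headache sufferer's chance does go to 0 in the limit, which would mean outright saving the person facing torture.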

But the biggest problem is that even if what I say in my footnote can adequately address this objection, it cannot adequately address your previous objection. This is because in your previous example concerning Alice, I think she should have a high chance of being saved (e.g. around 90%) no matter how big n is, and what I say in footnote 4 cannot help me get that result.

All in all, your previous objection shows that my own approach leads to a result that I cannot accept. Thanks for that (haha). However, I should note that it doesn’t make the utilitarian view more plausible to me because, as I said, your previous objection is very much in the spirit of my own objection against utilitarianism.

I wonder if dropping the idea that we should give each person a chance of being saved proportional to his/her suffering requires dropping the idea that who suffers matters... I used the latter idea to justify the former idea, but maybe the latter idea can also be used to justify something weaker - something more acceptable to me... (although I feel doubtful about this).

I certainly did not mean to cause confusion, and I apologize for wasting any of your time that you spent trying to make sense of things.

By "you switched", do you mean that in my response to Objection 1, I gave the impression that only experience matters to me, such that when I mentioned in my response to Objection 2 that who suffers matters to me too, it seems like I've switched?

And thanks, I have fixed the broken quote. Btw, do you know how to italicize words?

Thanks for the exposition. I see the argument now.

You're saying that, if we determined "total pain" by my preferred approach, then all possible actions will certainly result in states of affairs in which the total pains are uniformly high, with the only difference between the states of affairs being the identity of those who suffer it.

I've since made clear to you that who suffers matters to me too, so if the above is right, then according to my moral theory, what we ought to do is assign an equal chance to any possible action we could take, since each possible action gives rise to the same total pain, just suffered by different individuals.

Your argument would continue: Any moral theory that gave this absurd recommendation cannot be correct. Since the root of the absurdity is my preferred approach to determining total pain, that approach to determining total pain must be problematic too.

My response:

JanBrauner, if I remember correctly, was talking about extreme unpredictability, but your argument doesn't seem to be based on unpredictability. If A1 and A2 are true, then it seems that each possible action will more or less inevitably result in a different person suffering maximal pain.

Anyways, if literally each possible action I could take would inevitably result in a different person suffering maximal pain (i.e. if A1 and A2 are true), I think I ought to assign an equal chance to each possible action (even though physically speaking I cannot).

I think there is no more absurdity to assigning each possible action an equal chance (assuming A1 and A2 are true) than there is in, say, flipping a coin between saving a million people on one island from being burned alive and saving one other person on another island from being burned alive. Since I don't find the latter absurd at all (keeping in mind that none of the million will suffer anything worse than the one, i.e. that the one would suffer no less than any one of the million), I would not find the former absurd either. Indeed, giving each person an equal chance of being saved from being burned alive seems to me like the right thing to do given that each person has the same amount to suffer. So I would feel similarly about assigning each possible action an equal chance (assuming A1 and A2 are true).

So you're suggesting that most people aggregate different people's experiences as follows:

FYI, I have since reworded this as "So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:"

I think it is a more precise formulation. In any case, we're on the same page.

Basically I think sentences like:

"I don't think what we ought to do is to OUTRIGHT prevent the morally worse case"

are sufficiently far from standard usage (at least in EA circles) that you should flag up that you are using 'morally worse' in a nonstandard way (and possibly use a different term). I have the intuition that if you say "X is the morally relevant factor" then which actions you say are right will depend solely on how they affect X.

The way I phrased Objection 1 was as follows: "One might reply that two instances of suffering is morally worse than one instance of the same kind of suffering and that we should prevent the morally worse case (e.g., the two instances of suffering), so we should help Amy and Susie."

Notice that this objection in argument form is as follows:

P1) Two people suffering a given pain is morally worse than one other person suffering the given pain.

P2) We ought to prevent the morally worst case.

C) Therefore, we should help Amy and Susie over Bob.

My argument with kbog concerns P1). As I mentioned, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.

Given this premise, I've been arguing that two people suffering a given pain does not involve more pain than one person suffering the given pain, and thus P1) is false. And kbog has been arguing that two people suffering a given pain does involve more pain than one person suffering the given pain, and thus P1) is true. Of course, both of us are right on our respective preferred sense of "involves more pain than". So I recently started arguing that my sense is the sense that really matters.

Anyways, notice that P2) has not been debated. I understand that consequentialists would accept P2). But other moral theorists would not, because not all things that they take to matter (i.e. to be morally relevant, to have moral value, etc.) can be baked into/captured by the moral worseness/goodness of a state of affairs. Thus, it seems natural for them to talk of side constraints, etc. For me, two things matter: experience matters, and who suffers it matters. I think the latter morally relevant thing is best captured as a side constraint.

However, you are right that I should make this aspect of my work more clear.

Yes. I bring up that most people would accept this different framing of P3 (even when the people involved are different) as a fundamental piece of their morality. To most of the people here this is the natural, obvious and intuitively correct way of aggregating experience. (Hence why I started my very first comment by saying you are unlikely to get many people to change their minds!)

I think thinking in terms of 'total pain' is not normally how this is approached; instead one thinks about converting each person's experience into 'utility' (or 'moral badness' etc.) on a personal level, but then aggregates all the different personal utilities into a global figure. I don't know if you find this formulation more intuitively acceptable (it in some sense feels like it respects your reason for caring about pain more).

So you're suggesting that most people determine which of two cases/states-of-affairs is morally worse via experience this way:

  1. Assign a moral value to each person's experiences based on its overall what-it's-like. For example, if someone is to experience 5 headaches, we are to assign a single moral value to his 5 headaches based on how experientially bad the what-it's-like-of-going-through-5-headaches is. If going through 5 such headaches is about experientially as bad as going through 1 major headache, then we would assign the same moral value to someone's 5 minor headaches as we would to someone else's 1 major headache.

  2. We then add up the moral value assigned to each person's experiences to get a global moral value, and compare this moral value to the other global values corresponding to the other states of affairs we could bring about.
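Here is a minimal sketch of that two-step procedure as I understand it (the simple sum in step 1 is just a placeholder for however the overall what-it's-like gets valued, not something you or kbog have committed to):

```python
def personal_disvalue(experiences):
    """Step 1: assign a single moral (dis)value to one person's
    experiences based on their overall badness. A plain sum is used
    here purely as a placeholder for 'how experientially bad the
    overall what-it's-like is'."""
    return sum(experiences)

def global_disvalue(people):
    """Step 2: add up the value assigned to each person's experiences
    to get a single global figure for the state of affairs."""
    return sum(personal_disvalue(exps) for exps in people)

# One person's 5 minor headaches (rated 1 each) and another person's
# single major headache (rated 5) come out the same on this aggregation.
print(global_disvalue([[1, 1, 1, 1, 1]]))  # -> 5
print(global_disvalue([[5]]))              # -> 5
```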

This approach reminds me of trade-off situations that involve saving lives instead of saving people from suffering. For example, suppose we can either save Amy's and Susie's lives or Bob's life, but we cannot save all. Who do we save? Most people would reason that we should save Amy's and Susie's lives because each life is assigned a certain positive moral value, so 2 lives have twice the moral value of 1 life. I purposely avoided talking about trade-off situations involving saving lives because I don't think a life has moral value in itself, yet I anticipated that people would appeal to life having some sort of positive moral value in itself, and I didn't want to spend time arguing about that. In any case, if life does have positive moral value in itself, then I think it makes sense to add those values just as it makes sense to add the dollar values of different merchandise. This would result in Amy's and Susie's deaths being a morally worse thing than Bob's death, and so I would at least agree that what we ought to do in this case wouldn't be to give everyone a 50% chance.

In any case, if we assign a moral value to each person's experience in the same way that we might assign a moral value to each person's life, then I can see how people reach the conclusion that more people suffering a given pain is morally worse than fewer people suffering the given pain (even if the fewer are other people). Moreover, given step 1, I agree that this approach, at least prima facie, respects [the fact that pain matters solely because of how it FEELS] more than the approach that I've attributed to kbog. (I added the "[...]" to make the sentence structure clearer.) As such, this is an interesting approach that I would need to think more about, so thanks for bringing it up. But, even granting this approach, I don't think what we ought to do is to OUTRIGHT prevent the morally worse case; rather, we ought to give a higher chance to preventing the morally worse case proportional to how much morally worse it is than the other case. I will say more about this below.

Then I am really not sure at all what you are meaning by 'morally worse' (or 'right'!). In light of this, I am now completely unsure of what you have been arguing the entire time.

Please don't be alarmed (haha). I assume you're aware that there are other moral theories that recognize the moral value of experience (just as utilitarianism does), but also recognize other side constraints such that, on these moral theories, the right thing to do is not always to OUTRIGHT prevent the morally worst consequence. For example, if a side constraint is true of some situation, then the right thing to do would not be to prevent the morally worst consequence if doing so violates the side constraint. That is why these moral theories are not consequentialist.

You can think of my moral position as like one of these non-consequentialist theories. The one and only side constraint that I recognize is captured by the fact that who suffers matters. Interestingly, this side constraint arises from the fact that experience matters, so it is closer to utilitarianism than other moral theories in this respect. Here's an example of the side constraint in action: Suppose I can either save 100 people from a minor headache or 1 other person from a major headache. Going by my sense of "more pain" (i.e. my way of quantifying and comparing pains), the single person suffering the major headache is morally worse than the 100 people each suffering a minor headache because his major headache is experientially worse than any of the other people's minor headache. But in this case, I would not think the right thing to do is to OUTRIGHT save the person with the major headache (even though his suffering is the morally worse case). I would think that the right thing to do is to give him a higher chance of being saved proportional to how much worse his suffering is experientially speaking than any one of the others (i.e. how much morally worse his suffering is relative to the 100's suffering).

Similarly, if we adopted the approach you outlined above, maybe the 100 people each suffering a minor headache would be the morally worse case. If so, given the side constraint, I would still similarly think that it would not be right to OUTRIGHT save the 100 from their minor headaches. I would again think that the right thing to do would be to give the 100 people a higher chance of being saved proportional to how much morally worse their suffering is relative to the single person's suffering.

I hope that helps.

Hey Alex,

Thanks again for taking the time to read my conversation with kbog and replying. I have a few thoughts in response:

(Indeed I think many people here would explicitly embrace the assumption that is your P3 in your second reply to kbog, typically framed as 'two people experiencing the same pain is twice as bad as one person experiencing that pain' (there is some change from discussing 'total pain' to 'badness' here, but I think it still fits with our usage).)

When you say that many people here would embrace the assumption that "two people experiencing the same pain is twice as bad as one person experiencing that pain", are you using "bad" to mean "morally bad"?

I ask because I would agree if you meant morally bad IF the single person was a subset of the two people. For example, I would agree that Amy and Susie each suffering is twice as morally bad as just Amy suffering. However, I would not agree IF the single person was not a subset of the two (e.g., if the single person was Bob). If the single person was Bob, I would think the two cases are morally just as bad.

Now, one basic premise that kbog and I have been working with is this: If two people suffering involves more pain than one person suffering, then two people suffering is morally worse than (i.e. twice as morally bad as) one person suffering.

However, based on my preferred sense of "more pain", two people suffering involves the same amount of pain as one person suffering, irrespective of whether the single person is a subset or not.

Therefore, you might wonder how I am able to arrive at the different opinions above. More specifically, if I think Amy and Susie each suffering involves the same amount of pain as just Amy suffering, shouldn't I be committed to saying that the former is morally just as bad as the latter, rather than twice as morally bad (which is what I want to say)?

I don't think so. I think the Pareto principle provides an adequate reason for taking Amy and Susie each suffering to be morally worse than just Amy suffering. As Otsuka (a philosopher at Harvard) puts it, the Pareto principle states that "One distribution of benefits over a population is strictly Pareto superior to another distribution of benefits over that same population just in case (i) at least one person is better off under the former distribution than she would be under the latter and (ii) nobody is worse off under the former than she would be under the latter." Since just Amy suffering (i.e. Susie not suffering) is Pareto superior to Amy and Susie each suffering, just Amy suffering is morally better than Amy and Susie each suffering. In other words, Amy and Susie each suffering is morally worse than just Amy suffering. Notice, however, that if the single person were Bob, condition (ii) would not be satisfied because Bob would be made worse off. The Pareto principle is based on the appealing idea that we shouldn't begrudge another person an improvement that costs us nothing. Amy shouldn't begrudge Susie an improvement that costs her nothing.
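Here is a small sketch of that Pareto test (representing a "distribution of benefits" as a mapping from names to wellbeing levels is just an illustrative choice of mine):

```python
def strictly_pareto_superior(dist_a, dist_b):
    """True iff distribution A is strictly Pareto superior to B over the
    same population: (i) at least one person is better off under A, and
    (ii) nobody is worse off under A than under B. Each distribution maps
    a person's name to their level of wellbeing (higher = better off)."""
    someone_better = any(dist_a[p] > dist_b[p] for p in dist_a)
    nobody_worse = all(dist_a[p] >= dist_b[p] for p in dist_a)
    return someone_better and nobody_worse

# Just Amy suffering is Pareto superior to Amy and Susie each suffering:
print(strictly_pareto_superior({"Amy": -1, "Susie": 0},
                               {"Amy": -1, "Susie": -1}))  # True
# ...but Bob suffering is not Pareto superior to Amy and Susie suffering,
# because condition (ii) fails for Bob:
print(strictly_pareto_superior({"Amy": 0, "Susie": 0, "Bob": -1},
                               {"Amy": -1, "Susie": -1, "Bob": 0}))  # False
```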

Anyways, I just wanted to make that aspect of my thinking clear. So I would agree with you that more people suffering is morally worse than fewer people suffering as long as the smaller group of people is a subset of the larger group, due to the Pareto principle. But I would not agree with you that more people suffering is morally worse than fewer people suffering if those fewer people are not a subset of the larger group, since the Pareto principle is not a basis for it, nor is there more pain in the former case than the latter case on my preferred sense of "more pain". And since I think my preferred sense of "more pain" is the one that ultimately matters because it respects the fact that pain matters solely because of how it feels, I think others should agree with me.

A couple of brief points in favour of the classical approach: It in some sense 'embeds naturally' in the universe, in that if our actions can only affect some small section of the universe, we need only consider that section when making decisions. However, if one only cares about those experiencing the most suffering, no matter where they are in the universe, then it might turn out that an alien experiencing extreme suffering should make us indifferent to all suffering on Earth. (Average utilitarianism faces a similar problem).

I'm not sure I see the advantage here, or what the alleged advantage is. I don't see why my view commits me to pay any attention towards people who I cannot possibly affect via my actions (even though I may care about them). My view simply commits me to giving those who I can possibly affect a chance of being helped proportional to their suffering.

As discussed in other comments, it also has other pleasing properties, such as the veil of ignorance.

The veil of ignorance approach at minimum supports a policy of helping the greater number (given the stipulation that each person has an equal chance of occupying anyone's position). However, as I argued, this stipulation is not true OF the real world because each of us didn't actually have an equal chance of being in any of our positions, and what we should do should be based on the facts, and not on a stipulation. In kbog's latest reply to me regarding the veil of ignorance, he seems to argue that the stipulation should determine what we ought to do (irrespective of whether it is true in the actual world) because "The reason we look at what they would agree to from behind the veil of ignorance as opposed to outside is that it ensures that they give equal consideration to everyone, which is a basic principle that appeals to us as a cornerstone of any decent moral system." I have yet to respond to this latest reply because I have been too busy arguing about our senses of "more pain", but if I were to respond, I would say this: "I agree that we should give equal consideration to everyone, which is why I believe we should give each person a chance of being helped proportional to the suffering they face. The only difference is that this is giving equal consideration to everyone in a way that respects the facts of the world." Anyways, I don't want to say too much here, because kbog might not see it and it wouldn't be fair if you only heard my side. I'll respond to kbog's reply eventually (haha) and you can follow the discussion there if you wish.

Let me just add one thing: Based on Singer's intro to Utilitarianism, Harsanyi argued that the veil of ignorance also entails a form of utilitarianism on which we ought to maximize average utility, as opposed to Rawls' claim that it entails giving priority to the worst off. If this is right, then the veil of ignorance approach doesn't support classical utilitarianism, which says we ought to maximize total utility, not average utility.

One could imagine others who also disagreed with Comparability, but thought the appropriate solution was to always help the person suffering the most, and not care at all about anyone else.

Yes, they could, but I also argued that who suffers matters in my response to Objection 2, and to simply help the person suffering the most is to ignore this fact. Thus, even if one person suffering a lot is experientially worse (and thus morally worse) than many others each suffering something less, I believe we should give the others some chance of being helped. That is to say, in light of the fact that who suffers matters, I believe it is not always right to prevent the morally worse case.

To take things to the opposite extreme, someone could also deny Comparability but think that the most important thing was minimizing the number of people suffering at all and not take into account intensity whatsoever (although they would likely justify rejecting Comparability on different grounds to you).

While this is a possible position to hold, it is not a plausible one, because it effectively entails that the numbers matter in themselves. That is, such a person thinks he should save the many over one other person not because he thinks the many suffering involves more pain than the one suffering (for he denies that a non-purely experientially determined amount of pain can be compared with a purely experientially determined amount of pain). Rather, he thinks he should save the many solely because they are many. But it is hard to see how numbers could matter in themselves.
