As stated in another comment, your argument supports any ethical theory that coincides with total utilitarianism at fixed population sizes (e.g. average utilitarianism), not just total utilitarianism itself.

But you can use separability to rule out non-total versions of utilitarianism.

Separability is roughly the principle that, in comparing the value of two outcomes, one can ignore any people whose existence and welfare are unaffected.

Non-total versions of utilitarianism violate separability because they imply that the value of creating someone depends on the population or wellbeing of unaffected beings.
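To make that concrete, here is a minimal sketch in Python (the welfare numbers are illustrative, not from the post) of how average utilitarianism's verdict on creating the very same life flips with the welfare of unaffected people:

```python
# Illustrative welfare numbers: average utilitarianism's verdict on
# creating one new person at welfare +5 flips with the (unaffected)
# background population.

def average(welfares):
    return sum(welfares) / len(welfares)

new_person = 5
low_background = [3, 3, 3]      # unaffected people at welfare +3
high_background = [10, 10, 10]  # unaffected people at welfare +10

# Creating the +5 person raises the average here (counted as good)...
print(average(low_background + [new_person]) > average(low_background))    # True
# ...but lowers it here (counted as bad), though the created life is the same.
print(average(high_background + [new_person]) > average(high_background))  # False
```

Total utilitarianism, by contrast, values the new life at +5 regardless of who else exists, so it satisfies separability.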

A lot of people (myself included) lean towards empty individualism.

From an empty individualist perspective, there is no difference between creating 1 billion people who each experience 100 years of bliss and creating 100 billion people who each experience 1 year of bliss; both amount to 100 billion person-years of bliss.

So that version of the repugnant conclusion (RC) is an easy bullet to bite.

I am a negative utilitarian, so in my view both outcomes would be neutral.

Your objection to non-anti-egalitarianism (NAE) is also raised by MichaelStJules in this post, and I have responded to it there. I completely accept NAE.

Rejecting transitivity leaves you vulnerable to money-pump arguments, so I completely accept transitivity.
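To illustrate (with a toy agent and a trading fee I have made up): an agent with cyclic preferences A ≻ B ≻ C ≻ A pays a small fee for every trade it regards as an upgrade and ends up back where it started, strictly poorer:

```python
# A toy agent with cyclic (intransitive) preferences A > B > C > A.
# It pays a small fee for every trade it sees as an upgrade, and after
# looping around the cycle it holds its original option with less money.

prefers = {("A", "B"): True, ("B", "C"): True, ("C", "A"): True}
fee = 1

holding, money = "C", 100
for offered in ["B", "A", "C", "B", "A", "C"]:  # two loops of the cycle
    if prefers.get((offered, holding), False):  # agent thinks offered > holding
        holding, money = offered, money - fee

print(holding, money)  # "C" 94: back where it started, 6 units poorer
```

Transitive preferences block this, since no sequence of "upgrades" can return the agent to its starting option.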

I would reject mere addition. Mere addition implies that it would be moral to create someone who would experience a year of extreme torture followed by just enough happiness to 'outweigh' the torture.

Re 2: Your objection to non-anti-egalitarianism can easily be chalked up to scope neglect.

World A - One person with an excellent life plus 999,999 people with neutral lives.

World B - 1,000,000 people with just above neutral lives.

Let's use the veil of ignorance.

Would you prefer a 100% chance of a just-above-neutral life, or a 1-in-a-million chance of an excellent life with a 99.9999% chance of a neutral life? I would definitely prefer the former.
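To see the scale involved, here is the arithmetic with made-up welfare numbers (+1000 for an excellent life, 0 for neutral, +0.01 for just above neutral; none of these figures are from the post):

```python
# Made-up welfare scale (not from the post):
excellent, neutral, just_above = 1000.0, 0.0, 0.01

n = 1_000_000
ev_world_A = (1 * excellent + (n - 1) * neutral) / n  # random position in World A
ev_world_B = just_above                               # guaranteed in World B

print(ev_world_A, ev_world_B)  # 0.001 vs 0.01: the sure thing wins here
# With neutral fixed at 0, World A wins in expectation only if
# excellent > n * just_above, i.e. only if one excellent life is worth
# more than a million just-above-neutral lives.
```

On these numbers, even pure expected value favours World B behind the veil of ignorance, before any risk aversion is factored in.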

Here is an alternative argument. 

Surely it would be moral to decrease the wellbeing of a happy person from +1000 to +999 in order to make 1,000 neutral people 1 unit better off each; rejecting this is outrageously implausible.

If the process were repeated 1,000 times, lifting a fresh block of 1,000 neutral people each time, then it would be moral to bring a happy person down to neutrality to make a million neutral people 1 unit better off.
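The bookkeeping, as a quick sketch (assuming, as the totals require, that each repetition lifts a different block of 1,000 neutral people):

```python
# Bookkeeping for the iterated-transfer argument: each step costs the
# happy person 1 unit and lifts a fresh block of 1,000 neutral people
# by 1 unit each.

happy = 1000   # the happy person's starting welfare
lifted = 0     # running count of neutral people made 1 unit better off

for _ in range(1000):
    happy -= 1
    lifted += 1000

print(happy, lifted)  # 0 1000000: the happy person ends at neutrality,
                      # and a million neutral people are 1 unit better off
```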