Hi,
(first post, hope I'm doing everything more or less right).
You’re probably familiar with the phrase (I don’t know who framed it this way) that “we care about making people happy, but we’re indifferent to making happy people.” It nicely summarizes the idea that while it is important to provide currently living people with as much wellbeing as possible (because they are here), creating more humans doesn’t really matter morally, even if they would be very happy, because the unborn can’t care about being born (I hope I'm doing an okay job of paraphrasing).
I share this view (I'm pretty indifferent to making happy people - except insofar as adding people has an impact on people who already exist). In fact, I can’t intuitively understand why someone would hold the opposite opinion. But clearly I must be missing something, because it seems that in the EA community many or most people do care about creating as many (happy) people as possible.
I have wrestled with this topic for a long time, and watching a new Kurzgesagt video on longtermism made me want to write this post. In that (wonderfully made) video, the makers are clearly of the opinion that making happy people is a good thing. The video contains lines like:
“If we screw up the present so many people may not come to exist. Quadrillions of unborn humans are at our mercy. The unborn are the largest group of people and the most disenfranchised. Someone who might be born in a thousand or even a million years, deeply depends on us today for their existence.”
This doesn’t make much sense to me (except where more people means more happiness for everyone already existing, not just additional happiness because there are more people), and I don't understand how the makers of this video present the “making happy people” option as if it is not up for debate. Unless... it is not up for debate?
My questions, if you want:
1. What do you estimate the division within the EA community is? How many people are indifferent to making happy people, and how many care about making happy people?
2. If you are of the opposite opinion: what am I not seeing if I'm indifferent to making happy people? Is this stance still a respectable opinion, or not at all?
Thank you!
It's possible you are. There are some strains of person-affecting view that are genuinely indifferent to future people, but most person-affecting theorists accept that merely being in the future isn't what makes the difference. What I (and, I think, some others in this thread) am pointing to is that, even though person-affecting views in theory care about the welfare of future generations, in practice they imply near-total indifference between impacts on the future - unless you make some very difficult modifications to how the theory thinks of identity, which arguably no one has fully pulled off.
The reason is basically that personal identity is very fragile. If you had been conceived a moment earlier or later, it would have been with a different sperm. Even if it was the same sperm, what if the zygote splits and you have an identical twin in one timeline, but it doesn't split in the other - which of you is made happier by benefits in the timeline where it doesn't split? Given this, and the messy ripple effects that basically all attempts to impact the future even a couple of generations out will have, you are choosing between two different sets of future people whenever you are choosing between policies that affect the future. That is, you are not making future people better off rather than worse off; you are choosing whether a happy group of people gets born, or a different, less happy group.
This sounds academic and easy to escape from in same-number-of-people cases, but the tricky thing about choosing between distributions of happy people, rather than just making people happy, is the future scenarios in which not only the identities but also the numbers of these people differ. If you try to construct a view that cares about whatever people exist being well-off, and is indifferent both to these identity considerations and to numbers, the most obvious formalization is averagism, for instance. Unfortunately, averagism conflicts pretty strongly with person-affecting views, including in ways people often find very absurd (which is why most people aren't averagists).
Consider for instance the "sadistic conclusion". If you have a group of very happy people, and you can choose to either bring into existence one person whose life isn't worth living, or many people whose lives are worth living but are much less happy than the existing people, then averagism can tell you to bring into existence the life not worth living. The basic problem is that, between two options, it can favor the world in which no one is better off and some are worse off, if this happens in such a way that the distribution of welfare shifts to put more of the population in the best-off group.
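To make the arithmetic concrete, here's a minimal sketch in Python. The welfare numbers are invented purely for illustration (ten existing people at welfare 100; one added life at -1, i.e. not worth living; versus ten added lives at +5, worth living but far below the existing average):

```python
# Toy illustration of the "sadistic conclusion" under averagism.
# All welfare numbers are made up for the example.

def average(welfares):
    """Average welfare of a population, which is what averagism maximizes."""
    return sum(welfares) / len(welfares)

existing = [100] * 10           # ten very happy people

# Option A: add one person whose life is not worth living (negative welfare)
option_a = existing + [-1]

# Option B: add many people whose lives are worth living (positive welfare),
# but who are much less happy than the existing people
option_b = existing + [5] * 10

print(round(average(option_a), 2))   # ~90.82
print(round(average(option_b), 2))   # 52.5
```

Since 90.82 > 52.5, averagism prefers Option A - creating the one life not worth living - even though Option B harms no one and every added life in it is positively worth living. That is the sense in which shifting the distribution toward the best-off group can trump making no one worse off.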