Hi,
(first post, hope I'm doing everything more or less right).
You’re probably familiar with the phrase (I don’t know who framed it this way) that “we care about making people happy, but we’re indifferent to making happy people.” It nicely summarizes the idea that while it is important to provide currently living people with as much wellbeing as possible (because they are here), creating more humans doesn’t really matter morally, even if they would be very happy, because the unborn can’t care about being born (I hope I'm doing an okay job of paraphrasing).
I share this view (I'm pretty indifferent to making happy people, except insofar as adding more people has an impact on people who already exist). In fact, I can’t intuitively understand why someone would hold the opposite opinion. But clearly I must be missing something, because it seems that in the EA community many or most people do care about creating as many (happy) people as possible.
I have wrestled with this topic for a long time, and watching a new Kurzgesagt video on longtermism made me want to write this post. In that (wonderfully made) video, the makers are clearly of the opinion that making happy people is a good thing. The video contains passages like:
“If we screw up the present so many people may not come to exist. Quadrillions of unborn humans are at our mercy. The unborn are the largest group of people and the most disenfranchised. Someone who might be born in a thousand or even a million years, deeply depends on us today for their existence.”
This doesn’t make much sense to me (except where more people means more happiness for everyone, not just additional happiness because there are more people), and I don't understand how the makers of this video can present the “making happy people” option as if it is not up for debate. Unless... it is not up for debate?
My questions, if you'd like to answer them:
1. How do you estimate the division within the EA community? How many people are indifferent to making happy people, and how many care about making happy people?
2. If you hold the opposite view: what am I not seeing if I'm indifferent to making happy people? Is this stance still a respectable opinion, or not at all?
Thank you!
I think this is actually a central question that is relatively unresolved among philosophers, but my impression is that philosophers in general, and EAs in particular, lean in the "making happy people" direction. I think of there as being roughly three types of reason for this.

One is that views of the "making people happy" variety basically always wind up facing structural weirdness when you formalize them. Until recently it was my impression that all of these views imply intransitive preferences (i.e. something like A > B > C > A), but in a discussion Michael St. Jules pointed me to more recent work that instead denies the independence of irrelevant alternatives. This avoids some problems, but leaves you with something very structurally weird, or even absurd to some. I think Larry Temkin has a good quote about it, something like "I will have the chocolate ice cream, unless you have vanilla, in which case I will have strawberry".
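To make the intransitivity worry concrete, here is a toy sketch (my own construction, not something from the literature cited above): a crude "narrow person-affecting" rule that compares two worlds only by the welfare of the people who exist in both can rank three worlds in a cycle. The person names and welfare numbers are invented for illustration.

```python
# Toy model: each "world" maps person -> welfare level.
# Narrow person-affecting rule: world x is better than world y
# iff the people who exist in BOTH worlds are on net better off in x.

def better(x, y):
    shared = x.keys() & y.keys()
    return sum(x[p] - y[p] for p in shared) > 0

# Each pair of worlds shares exactly one person, and that shared
# person is better off in one of the two worlds.
A = {"ann": 2, "bob": 1}
B = {"bob": 2, "cat": 1}
C = {"cat": 2, "ann": 1}

print(better(B, A))  # True: bob (the shared person) is better off in B
print(better(C, B))  # True: cat is better off in C
print(better(A, C))  # True: ann is better off in A, so B > A, C > B, A > C: a cycle
```

Denying the independence of irrelevant alternatives avoids such cycles by letting the ranking of two options depend on which other options are available, which is exactly the structural weirdness Temkin's ice-cream quote pokes at.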
The second reason is the non-identity problem, formulated by Derek Parfit. Basically, the issue it raises is that almost all of our decisions that impact the longer-term future also change who gets born, so a standard person-affecting view seems to allow us to do almost anything to future generations: use up all their resources, bury radioactive waste, you name it.
The third maybe connects more directly to why EAs in particular often reject these views. Most EAs subscribe to a sort of universalist, beneficent ethics that seems to imply that if something is genuinely good for someone, then it is good in a more impersonal sense that tugs on ethics for all. For those of us who live lives worth living, are glad we were born, and don't want to die, it seems clear that existence is good for us. If so, this presents a reason for action to anyone who can affect it, assuming we accept this sort of universal ethics. We therefore seem to be left with three choices: we can say that our existence actually is good for us, and so it is also good for others to bring it about; we can say that it is not good for others to bring it about, and therefore not actually good for us after all; or we can deny that ethics has this omnibenevolent quality. To many EAs, the first choice is clearly best.
I think this is where a standard person-affecting view might counter that it cares about all reasons that actually exist, and if you aren't born, you don't actually exist, so a universal ethics on this timeline cannot care about you either. The issue is that, without some better narrowing, this argument seems to prove too much. All ethics is about choosing between possible worlds, so simply saying that a good only exists in one possible world doesn't seem to help us in making decisions between those worlds. Arguably the most complete spelling-out of a view like this looks something like "we should achieve a world in which no reasons for this world not to exist are present, and nothing beyond this equilibrium matters in the same way". I actually think some variation of this argument is sometimes used by negative utilitarians and people with similar views: a frustrated interest exists in the timeline it is frustrated in, and so any ethics needs to care about it; a positive interest (i.e. having something even better than an already good or neutral state) does not exist in a world in which it isn't brought about, so it doesn't provide reasons to that world in the same way. Equilibrium is already adequately reached when no one is badly off.
This is coherent, but again it proves much more than most people want it to about what ethics should actually look like, so going down that route seems to require some extra work.