TL;DR: I think it is sensible and important to distinguish between propositions about the well-being of existing people and propositions about hypothetical incremental people. Furthermore, I think utilitarian-ish discussion suffers from a lack of clarity about whether a hypothetical person's existence is being conditioned on, and clarity here provides what I think is a very reasonable and satisfying resolution to the repugnant conclusion, as well as flexibility on what I view as the "too-strong" logic of longtermism.
I first wrote about these ideas on my Substack in "Morality and Marginal Existence," and just followed that up with "Unreal Persons." Some of the text in this post comes directly from those Substack posts. My Substack is a bit idiosyncratic and not explicitly about EA or philosophy, so those posts are written for a more general audience; I've rewritten the argument here with you all as the intended audience. This post is a little more formal and has a bigger emphasis on longtermism, while the Substack versions might be a little more... fun to read? Feel free to check them out in addition to or instead of this post; the main ideas are the same in both venues, but I do more exploration and meandering on Substack.
Motivation
Utilitarianism is the view that what matters is maximizing well-being, which, taken on its own, leads to the conclusion that introducing new people capable of experiencing well-being is comparable and perhaps even preferable to improving the well-being of existing people. But there is a meaningful difference between propositions about improving the well-being of existing persons and propositions about introducing new persons. Improving the well-being of existing persons is obviously good, because existing persons have the capacity for well-being and suffering. Introducing new persons, even if we can assume that their lives will have more well-being than suffering, is not obviously good (at least to me), because the persons in question have no consciousness, and therefore no capacity for well-being or suffering. These hypothetical, incremental persons currently experience nothing; they do not long for existence; they do not benefit from anything, really. There is a qualitative difference between increasing well-being by benefiting an existing person and increasing well-being by introducing a new person capable of well-being.
The question is whether bringing a person into existence is a benefit to that person. I think there can and should be divergence on this particular question, but in any case it is important that we are aware of it when we discuss propositions such as pro-natalism and longtermism. I suspect many people will be thinking, "Of course it is a benefit to a person to bring them into existence. Existing is better than having never existed." I am not sure this is true. I think your intuitions fail you on this proposition because, of course, you exist. There is an implicit condition of your existence. The preference you actually have is that you would prefer to continue to live rather than die. Try to actually consider the counterfactual of your existence -- that is, your nonexistence. Can you actually say that you prefer existing to not existing? It's not really a comparison you can make, because nonexistence is a state you cannot experience; it is a state of non-experience. Insofar as experience (consciousness) is the basis of the value of well-being, a state of non-experience does not have any value.
Regardless, whether or not you think that existence is a benefit in and of itself, that is not the main point of this post. The main point is to push for that question to be made explicit when it enters arguments -- more concretely, for the condition of existence (or an existence being left unconditional) to be stated explicitly. The situation above, where you use yourself as a proxy to answer questions about the value of existence, is an example of how the condition of existence sneaks into arguments. Another example is the repugnant conclusion / mere addition paradox.
I am sure that I do not have to describe the mere addition paradox to anyone here, but I would like to state it in my own terms for the sake of this discussion. The mere addition paradox arises when you compare differing amounts of well-being for unconditional persons. The paradox: under utilitarianism, a world of N happy people is worse than a world of M miserable people, for sufficiently large M >> N. It's a weird comparison to make, though: we're comparing the well-being of persons that aren't conditioned to exist -- and they have to be unconditional, otherwise they would have to appear in both scenarios.
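To make the arithmetic explicit (the notation and specific numbers here are mine, purely for illustration): suppose each of the $N$ people in the happy world has well-being $u_H$ and each of the $M$ people in the other world has well-being $u_L$, with $0 < u_L \ll u_H$. Total utilitarianism compares $N \cdot u_H$ against $M \cdot u_L$, and for any positive $u_L$, any $M > N \cdot u_H / u_L$ makes the bigger, more miserable world come out ahead. With, say, $N = 5000$, $u_H = 100$, and $u_L = 1$, any $M$ above 500,000 suffices. The paradox is driven entirely by the fact that those $M$ people are allowed into the comparison unconditionally.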
You can sort of avoid repugnancy by demanding clear conditions. Imagine a discussion like this:
Person A: The problem with utilitarianism is it leads to the repugnant conclusion. Consider two hypothetical worlds, X and Y. In world X, there are 5000 extremely happy and satisfied people. In world Y, there is some huge number of people whose lives are just barely worth living. Under utilitarianism, there must be some number of miserable people, M, you can put in world Y to make it preferable to world X, because you can add up all of their utility to get a number bigger than whatever the fixed utility of world X is. But that is saying that a world consisting of nothing but miserable people is better than a world of completely happy people!
Person B: Well, what are we really comparing here? Conditional on the existence of M people, it is definitely better that those people are happy. But without conditioning on the existence of those M people that you would have to put in world Y, I can’t really make a comparison.
…To be clear, I’m not trying to say that leaving it unconditional is a logical error or an abject philosophical failure or anything. A person committed to utilitarianism would respond:
Person C: No, even unconditional on the existence of the M-N miserable people, world Y is better.
I just think that is wrong, and unnecessary. I think it would be a perfectly reasonable position to abstain from making moral judgements about unconditional existences, both because it seems dangerous to make evaluations in unconditional terms, and because there is a qualitative difference between propositions regarding existing people (or people conditioned to exist) and propositions regarding nonexistent people.
Marginal existence
I've come up with a term for this "unconditional existence": "marginal existence." Although you could just say "unconditional existence," I like the way "marginal" communicates both the property of being unconditional (as in statistics) and the property of being incremental (as in economics).
If you claim that it is morally right or wrong to have kids, you are making a claim about marginal existences. The hypothetical kids in question are marginal. When I say that most existing arguments don’t properly handle marginal existence, I mean that it is a problem to fail to recognize that there is a difference between claims about improving the well-being of existing people and claims about introducing new people. I think basically everyone agrees that improving the well-being of existing people is good, but not everyone accepts the utilitarian logic that says: “Well-being is good, therefore the more people there are to experience well-being, the more good.” The discrepancy, I think, comes from different positions on marginal existence.

It is perfectly reasonable to hesitate, and say: “Yes, well-being is good. But is it necessarily good to create more people to have a greater total quantity of well-being? If we introduce a new person who experiences some amount of well-being, we haven’t just moved from a smaller quantity of well-being to a larger quantity of well-being—we’ve also moved from a smaller capacity for well-being to a larger capacity for well-being. I’m not saying it’s meaningless, but maybe it should be treated differently—maybe there’s no ‘therefore’ there. Maybe the ‘goodness’ of well-being has more to do with a transition in the state of affairs. It is good to do things which bring about positive consequences, and ‘positive’ basically comes down to an improvement in well-being somewhere, somehow. If we understand good consequences to be more about good transitions in the state of affairs, then we can require that the persons subject to the state of affairs are conditioned to exist. In which case, the question of the utility of introducing new people breaks down: if they are conditioned to exist, their existence cannot be in question.”
Treating marginal existences the same as actual existences (which is to say, treating people unconditioned to exist the same as people conditioned to exist, or treating hypothetical people the same as real people) gives you pure & strong Benthamite utilitarianism. If you would describe yourself as a utilitarian, but not as a “pure” utilitarian, it is likely because you have a nuanced stance about marginal existence—and this is good.
Pro-natalism
I am not a pro-natalist. I am not an anti-natalist, either. Both of these stances seem off to me. I have a hunch that there are others like me who do not feel that pro-natalism is a moral good, but have a hard time squaring that feeling with utilitarianism.
I suspect that many people who consider themselves pro-natalists arrived there because they were convinced by utilitarian logic, even though they do not intuitively feel that pro-natalism is good. They are choosing logic over feeling and intuition. And this is good, I think. I also think that it is perfectly logical to not believe that pro-natalism is a moral good--it just comes down to your position on marginal existence. I am not a pro-natalist because I do not think there is value in marginal existence -- or at least not enough value to be comparable to other issues like poverty -- or because I think propositions regarding marginal existence cannot be evaluated at all.
Another illustrative dialogue:
Person A: Utilitarianism seems obviously true, so if you’re reasonably sure that if you have a kid, his or her life will be worth living, then it is a moral good to have kids. In fact, we should all have as many kids as possible because that’s probably the easiest way to increase utility and earn our morality bucks.
Person B: Utilitarianism’s assertion that increasing the well-being of people is the basis of morality seems obviously true, but I’m not sure it applies here. Having a kid doesn’t necessarily mean you’re increasing the well-being of people; it means you’re introducing a new person capable of experiencing well-being. If you’ve decided that you’re going to have a kid, then we can condition on that kid’s existence, and I would say it’s morally important to make sure that kid has lots of well-being. But unconditional on the existence of that kid, I don’t think I can make a moral evaluation of the proposition.
Longtermism
The logic behind longtermism is too strong. It is irrefutable. If there are 100 nonillion potential people, there is nothing that could happen in your lifetime that could possibly matter compared to ensuring the continued survival of humanity. All resources should be diverted to preventing existential risk, even if we really don't know whether these risks are real or these efforts are effective, because the value at stake is simply too large.
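To spell out why this logic feels irrefutable (the numbers below are mine, purely for illustration): 100 nonillion is $10^{32}$. If some intervention reduces the probability of extinction by even one in ten billion ($10^{-10}$), its expected value is $10^{-10} \times 10^{32} = 10^{22}$ future lives -- vastly more than anything you could achieve for the roughly $10^{10}$ people alive today. As long as those potential people are counted at full weight, no present-day consideration can ever win the comparison.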
But if we introduce the concept of marginal existence, we can make the question of potential future people and longtermism more workable. In other words, we need to be more specific about our conditions. For example, there will almost certainly be another generation after me. Therefore, it is sensible (necessary, even) to condition on the existence of that generation, at which point they are no longer marginal. Conditional on the existence of that generation, it is a moral good to improve their well-being. There will almost certainly be another generation after that, and many more after that. As long as we agree to condition on their existence, we probably agree that it is morally important to improve their well-being, by working on climate change, preventing AI risk, etc.
Eventually, though, there will be a point when it is no longer necessary to condition on the existence of a future generation, and there will be a point when it is no longer sensible. I have no idea where those points are, and I don't think it's possible to identify them, but in any case you don't have to invoke the existence of all 100 nonillion potential people. You can calibrate a mixed near-term/long-term position based on what conditions seem sensible to you. I think this is the direction that discussions around and arguments for longtermism should move.
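One very rough way to make this concrete (my construction, not anything from the longtermist literature): let $c_g \in \{0, 1\}$ indicate whether you are willing to condition on the existence of generation $g$, and let $\Delta W_g$ be the improvement in that generation's well-being that some action brings about. You then evaluate the action by $\sum_g c_g \, \Delta W_g$. A pure longtermist sets $c_g = 1$ for every future generation; a strict person-affecting view sets $c_g = 1$ only for people who already exist; the position sketched above sets $c_g = 1$ for the generations whose existence it seems sensible (or necessary) to condition on, and simply declines to evaluate the rest.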
Positions on marginal existence
As I said above, treating marginal existence the same as real or conditional existence gives you "pure" utilitarianism. Asserting that marginal existence comes with a value of zero gives you the person-affecting view. I would tend to hold that propositions regarding marginal existences cannot be evaluated at all. This makes sense to me, because marginal existences do not exist. They have no capacity for well-being or suffering, and therefore it is not important that we are able to make judgements about them.
I see this as a form of consequentialism that is not utilitarianism. Rather than the basis of morality being the maximization of well-being, the basis of morality is transitions in the state of affairs. Something is good if it produces a positive change in the state of affairs--positive consequences. The way I see it, understanding the basis of morality as transitions in the state of affairs for people is equivalent to requiring the condition of existence in order to make moral evaluations.
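As a toy sketch of how these three positions come apart (the representation, function names, and numbers are all mine, purely to make the taxonomy concrete -- this is not meant as a serious formalization of population ethics):

```python
# A "world" is just a list of welfare levels, one per person.
# World X: 5,000 very happy people. World Y: 1,000,000 people whose lives
# are barely worth living, of whom the first 5,000 are taken to be the
# people conditioned to exist in both scenarios.
world_x = [100] * 5_000
world_y = [1] * 1_000_000

def total_view(world):
    """'Pure' utilitarianism: marginal people count the same as actual
    people, so just sum welfare over everyone in the world."""
    return sum(world)

def person_affecting_view(world, conditioned_count):
    """Person-affecting-style view: marginal existences get zero weight,
    so only the people conditioned to exist are counted."""
    return sum(world[:conditioned_count])

def transition_view(world_before, world_after):
    """Roughly the position described above: evaluate transitions in the
    state of affairs for a fixed, conditioned-on population, and abstain
    when the populations differ."""
    if len(world_before) != len(world_after):
        return None  # a proposition about marginal existence: no evaluation
    return sum(after - before for before, after in zip(world_before, world_after))

print(total_view(world_y) > total_view(world_x))     # True: 1,000,000 > 500,000
print(person_affecting_view(world_y, 5_000)
      > person_affecting_view(world_x, 5_000))       # False: 5,000 < 500,000
print(transition_view(world_x, world_y))             # None: declines to compare
```

The point of the `None` is just to encode "abstain": on this view, the world X vs. world Y comparison isn't won or lost; it simply isn't the kind of proposition that gets evaluated.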
Further reading
(if you know of any relevant writing, let me know and I'll edit it in here)
Comments

If true, so much the worse for other causes. But I don't think it's true: longtermism can also imply that we should focus on expanding civilization more quickly or putting humanity on a better trajectory of political development. And if directly working on x-risks is ineffective, then indirect steps like creating a more thoughtful and tolerant culture could be the most effective way to reduce x-risk.
Well, sure? It sounds like you're stating the obvious. Like, of course long-term impacts won't happen if future generations don't exist; I thought that wouldn't need to be stated.
I think I'm already pretty familiar with the thinking around this. What I don't know is whether there is any way to get people who have different intuitions around these questions to converge or to switch intuitions.
So I'm pro-natalist in part because I see potential people who do not exist, but who might someday exist, as being the sort of people whom I can either help (by increasing their odds of someday existing and having a good life, or decreasing their odds of existing and having a bad life) or harm (by doing the opposite).
At a deep level this describes my feelings when I imagine the nearly infinite number of potential humans, when I imagine what my state was before I was conceived, and when I think about how happy I am to be alive, and how grateful I am that I got the chance to exist, when it easily could have been someone else, or when humanity easily could have failed to evolve at all.
So I very, very much intuitively feel like if I bring someone into existence who will have a good life, I just did something very nice for them. If I make it so that they don't come into existence, I did something extremely unkind to them.
And this intuition connects to all sorts of other identities and feelings I have, decisions I make, things I wish I had or could do, etc. As closely as I can tell it is deeply embedded in me.
It possibly has to do with the fact that I was homeschooled, so I never got bullied in school, and that I am thirty-eight, and a couple of weeks ago I had some nasty mouth ulcers, and I realized that this was the most physically unpleasant thing I've ever gone through. What I'm saying is, I haven't ever actually suffered, and this feeds into my intuitions about the goodness of life.
But ultimately: I am pronatalist because I care about people who do not exist, and who therefore cannot either suffer or feel happiness. I am pronatalist because I think that it is possible to do something beneficial to individuals who do not currently exist, and who might never exist. It is not because I don't understand that they don't exist.
I could be wrong, but I'm pretty sure that most people who adopt a sort of pure longtermist utilitarianism already understand your argument here, but have different intuitions about it.