Now, if you accept utilitarianism for a fixed population, you should think that D is better than C.
If we imagine that world C already exists, then yeah, we should try to change C into D. (Similarly, if world D already exists, we'd want to prevent changes from D to C.)
So, if either of the two worlds already exists, D>C.
Where your setup of this argument turns controversial, though, is when you suggest that "D>C" holds in some absolute sense, as opposed to holding only (in virtue of how it better fulfills the preferences of existing people) under the stipulation that we start out in one of the worlds (one that already contains all the relevant people).
Let's think about the case where no one exists so far, where we're the population planners for a new planet that we can shape into either C or D. (In that scenario, there's no relevant difference between B and C, btw.) I'd argue that both options are now equally defensible, because the interests of possible people are under-defined* and there are defensible personal stances in population ethics that justify either.**
*The interests of possible people are under-defined not just because it's open how many people we might create. In addition, it's also open who we might create: Some human psychological profiles are such that when someone's born into a happy/privileged life, they adopt a Buddhist stance towards existence and think of themselves as not having benefitted from being born. Other psychological profiles are such that people do think of themselves as grateful and lucky for having been born. (In fact, yet others even claim that they'd consider themselves lucky/grateful even if their lives consisted of nothing but torture.) These varying intuitions towards existence can inspire people's population-ethical leanings. But there's no fact of the matter as to which intuitions are "more true." They are just different interpretations of the same sets of facts. There's no uniquely correct way to approach population ethics.
**Namely, C is better on anti-natalist harm reduction grounds (at least depending on how we interpret the scale/negative numbers on the scale), whereas D is better on totalist grounds.
All of that was assuming that C and D are the only options. If we add a third alternative, say, "create no one," the ranking between C and D (previously they were equally defensible) can change.
At this point, the moral realist proponents of an objective "theory of the good" might shriek in agony and think I have gone mad. But hear me out. It's not crazy at all to think that choices depend on the alternatives we have available. If we also get the option "create no one," then I'd say C becomes worse than the two other options, because there's no approach to population ethics according to which C is optimal among the three. My person-affecting stance on population ethics says that we're free to do a bunch of things, but the one thing we cannot do is act with negligent disregard for the interests of potential people/beings.
Why? Essentially for similar reasons why common-sense morality says that struggling lower-class families are permitted to have children whom they raise under hardship with little means (assuming their lives are still worth living in expectation), but if a millionaire were to do the same to their child, he'd be an asshole. The fact that the millionaire has the option "give my child enough resources to have a high chance at happiness" makes it worse if he then proceeds to give his child hardly any resources at all. Bringing people into existence makes you responsible for them. If you have the option to make your children really well off but decide not to, you're not taking your child's interests into consideration, which is bad. (Of course, if the millionaire donates all their money to effective causes and then raises a child in relative poverty, that's acceptable again.)
I think where the proponents of an objective theory of the good go wrong is in the idea that you keep score on the same objective scoreboard regardless of whether it concerns existing people or potential people. But those are not commensurate perspectives. This whole idea of an "objective axiology/theory of the good" is dubious to me. Trying to squeeze these perspectives under one umbrella also has pretty counterintuitive implications. As I wrote elsewhere:
There’s a tension between the beliefs “there’s an objective axiology” and “people are free to choose their life goals.”
Many effective altruists hesitate to say, “One of you must be wrong!” when one person cares greatly about living forever while the other doesn't. By contrast, when two people disagree on population ethics, “One of you must be wrong!” seems to be the standard (implicit) opinion. I think these two attitudes are in tension. To the degree people are confident that life goals are up to the individual to decide/pursue, I suggest they lean in on this belief. I expect that resolving the tension in that way – leaning in on the belief “people are free to choose their life goals;” giving up on “there's an axiology that applies to everyone” – makes my framework more intuitive and gives a better sense of what the framework is for, what it's trying to accomplish.
Here's a framework for doing population ethics without an objective axiology. In this framework, person-affecting views seem quite intuitive because we can motivate them as follows:
When I said earlier that some people form non-hedonistic life goals, I didn't mean that they commit to the claim that there are things that everyone else should value. I meant that there are non-hedonistic things that the person in question values personally/subjectively.
You might say that subjective (dis)value is trumped by objective (dis)value -- then we'd get into the discussion of whether objective (dis)value is a meaningful concept. I argue against that in my above-linked post on hedonist axiology. Here's a shorter attempt at making some of the key points from that post:
Earlier, I agreed with you that we can, in a sense, view "suffering is bad" as a moral fact. Still, I'd maintain that this way of speaking makes sense only as a shorthand pointing to the universality and uncontroversialness of "suffering is bad," rather than to some kind of objectivity-that-through-its-nature-trumps-everything-else that suffering is supposed to have (again, I don't believe in that sort of objectivity). By definition, when there's suffering, there's a felt sense (by the sufferer) of wanting the experience to end or change, so there's dissatisfaction and a will towards change. The definition of suffering means it's a motivational force. But whether it is the only impetus/motivational force that matters to someone, or whether there are other pulls and pushes that they deem equally worthy (or even more worthy, in many cases), depends on the person. So, that's where your question about the non-hedonistic life goals comes in.
But why do they say so? Because they have a feeling that something or other has value?
People choosing life goals is a personal thing, more existentialism than morality. I wouldn't even use the word "value" here. People adopt life goals that motivate them to get up in the morning and go beyond the path of least resistance (avoiding short-term suffering). If I had to sum it up in one word, I'd say it's about meaning rather than value. See my post on life goals, which also discusses my theory of why/how people adopt them.
If you feel that we're talking past each other, it's likely because we're thinking in different conceptual frameworks.
Let's take a step back. I see morality as having two separate parts:
Separately, there are non-moral life goals (and it's possible for people to have no life goals, if there's nothing that makes them go beyond the path of least resistance). Personally, I have a non-moral life goal (being a good husband to my wife) and a moral one (reducing suffering subject to low-effort cooperation with other people's life goals).
That's pretty much it. As I say in my post on life goals, I subscribe to the Wittgensteinian view of philosophy (summarized in the Stanford Encyclopedia of Philosophy):
[...] that philosophers do not—or should not—supply a theory, neither do they provide explanations. “Philosophy just puts everything before us, and neither explains nor deduces anything. Since everything lies open to view there is nothing to explain (PI 126).”
Per this perspective, I see the aim of moral philosophy as accurately and usefully describing our option space – the different questions worth asking and how we can reason about them.
I feel like my framework lays out the option space and lets us reason about (the different parts of) morality in a satisfying way, so that we don't also need the elusive concept of "objective value". I don't understand how that concept would work, and I don't see where it would fit in. On the contrary, I think thinking in terms of that concept costs us clarity.
Some people might claim that they can't imagine doing without it or would consider everything meaningless if they had to do without it (see "Why realists and anti-realists disagree"). I argued against that here, here and here. (In those posts, I directly discuss the concept of "irreducible normativity" instead of "objective value," but those are very closely linked, such that objections against one also apply against the other, mostly.)
Depends what you mean by "moral realism."
I consider myself a moral anti-realist, but I would flag that my anti-realism is not the same as saying "anything goes." Maybe the best way to describe my anti-realism to a person who thinks about morality in a realist way is something like this:
"Okay, if you want to talk that way, we can say there is a moral reality, in a sense. But it's not a very far-reaching one, at least as far as the widely-compelling features of the reality are concerned. Aside from a small number of uncontroversial moral statements like 'all else equal, more suffering is worse than less suffering,' much of morality is under-defined. That means that several positions on morality are equally defensible. That's why I personally call it anti-realism: because there's not one single correct answer."
See section 2 of my post here for more thoughts on that way of defining moral realism. And here's Luke Muehlhauser saying a similar thing.
I agree that hedonically "neutral" experiences often seem perfectly fine.
I suspect that there's a sleight of hand going on where moral realist proponents of hedonist axiology try to imply that "pleasure has intrinsic value" is the same claim as "pleasure is good." But the only sense in which "pleasure is good" is obviously uncontroversial is the sense of "pleasure is unobjectionable." Admittedly, pleasure is also often something we desire, or something we come to desire if we keep experiencing it -- but this clearly isn't always the case for all people, as any personal hedonist would notice if they stopped falling into the typical mind fallacy and took seriously that many other people sincerely and philosophically-unconfusedly adopt non-hedonistic life goals.
See also this short form or this longer post.
I haven't read your other recent comments on this, but here's a question on the topic of pausing AI progress. (The point I'm making is similar to what Brad West already commented.)
Let's say we grant your assumptions (that AIs will have values that matter the same as or more than human values and that an AI-filled future would be just as or more morally important than one with humans in control). Wouldn't it still make sense to pause AI progress at this important juncture, so we can study what we're doing and set up future AIs to do as well as (reasonably) possible?
You say that we shouldn't be confident that AI values will be worse than human values. We can put a pin in that. But values are just one feature here. We should also think about agent psychologies, character traits, and the infrastructure that's beneficial for forming peaceful coalitions. On those dimensions, some traits or setups seem (somewhat robustly?) worse than others?
We're growing an alien species that might take over from humans. Even if you think that's possibly okay or good, wouldn't you agree that we can envision factors about how AIs are built/trained and about what sort of world they are placed in that affect whether the future will likely be a lot better or a lot worse?
I'm thinking about things like:
If (some of) these things are really important, wouldn't it make sense to pause and study this stuff until we know whether some of these traits are tractable to influence?
(And, if we do that, we might as well try to make AIs have the inclination to be nice to humans, because humans already exist, so anything that kills humans who don't want to die frustrates already-existing life goals, which seems worse than frustrating the goals of merely possible beings.)
I know you don't talk about pausing in your above comment -- but I think I vaguely remember you being skeptical of it in other comments. Maybe that was for different reasons, or maybe you just wanted to voice disagreement with the types of arguments people typically give in favor of pausing?
FWIW, I totally agree with the position that we should respect the goals of AIs (assuming they're not just roleplayed stated goals but deeply held ones -- of course, this distinction shouldn't be uncharitably weaponized against AIs ever being considered to have meaningful goals). I'm just concerned because whether the AIs respect ours in turn, especially when they find themselves in a position where they could easily crush us, will probably depend on how we build them.
Cool post!
From the structure of your writing (mostly the high number of subtitles), I often wasn't sure whether you were endorsing a specific approach or just laying out what the options are and what people could do. (That's probably fine, because I see the point of good philosophy as "clearly laying out the option space" anyway.)
In any case, I think you hit on the things I also find relevant. E.g., even as a self-identifying moral anti-realist, I place a great deal of importance on "aim for simplicity (if possible/sensible)" in practice.
Some thoughts where I either disagree or have something important to add:
Thanks for the reply, and sorry for the wall of text I'm posting now (no need to reply further, this is probably too much text for this sort of discussion)...
I agree that uncertainty is in someone's mind rather than out there in the world. Still, granting the accuracy of probability estimates feels no different from granting the accuracy of factual assumptions. Say I was interested in eliciting people's welfare tradeoffs between chicken sentience and cow sentience in the context of eating meat (how that translates into suffering caused per calorie of meat). Even if we lived in a world where false labelling of meat was super common (such that, say, when you buy things labelled as 'cow', you might half the time get tuna, and when you buy chicken, you might half the time get ostrich), if I'm asking specifically for people's estimates on the moral disvalue from chicken calories vs cow calories, it would be strange if survey respondents factored in information about tunas and ostriches. Surely, if I were also interested in how people thought about calories from tunas and ostriches, I'd be asking about those animals too!
Also, circumstances about the labelling of meat products can change over time, so that previously elicited estimates on "chicken/cow-labelled things" would now be off. Survey results will be more timeless if we don't contaminate straightforward thought experiments with confounding empirical considerations that weren't part of the question.
A respondent might mention Kant, how all our knowledge about the world is indirect, and how there's trust involved in taking assumptions for granted. That's accurate, but let's just take them for granted anyway and move on?
On whether "1.5%" is too precise of an estimate for contexts where we don't have extensive data: If we grant that thought experiments can be arbitrarily outlandish, then it doesn't really matter.
Still, I could imagine that you'd change your mind about never using these estimates if you thought more about situations where they might become relevant. For instance, I used estimates in that area (roughly around 1.5% chance of something happening) several times within the last two years:
My wife developed lupus a few years ago. It's the illness that often makes it onto the whiteboard in the show Dr. House because it can throw up symptoms that mimic tons of other diseases, sometimes serious ones. We had a bunch of health scares where we were thinking, "This is most likely just some weird lupus-related symptom that isn't actually dangerous, but it also resembles that other thing (which is also a common secondary complication from lupus or its medications), which would be a true emergency." In these situations, should we go to the ER for a check-up or not? With a 4-5h average A&E waiting time and the chance of catching viral illnesses while there (which are extra bad when you already have lupus), it probably doesn't make sense to go in if we think the chance of a true emergency is only <0.5%. However, at 2% or higher, we'd for sure want to go in. (In between those two, we'd probably continue to feel stressed and undecided, and maybe go in primarily for peace of mind, lol.) Narrowing things down from "most likely it's nothing, but there's some small chance that it's bad!" to either "I'm confident this is <0.5%" or "I'm confident this is at least 2%" is not easy, but it worked in some instances. This suggests that making decisions based on a fairly narrowed-down low-probability estimate has some usefulness (as a matter of the practical necessity of making medical decisions in a context of long A&E waiting times). Sure, the process I described is still a bit more fuzzy than just pulling a 1.5% point estimate from somewhere, but I feel like it approaches a similar level of precision, and I think many other people would have similar decision thresholds in a situation like ours.
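To make the threshold logic explicit, here's a minimal expected-cost sketch (the cost figures are illustrative assumptions, not numbers we actually wrote down at the time): go in whenever the expected cost of staying home exceeds the cost of the visit,

$$ p \cdot C_{\text{missed emergency}} > C_{\text{visit}} \quad\Longleftrightarrow\quad p > \frac{C_{\text{visit}}}{C_{\text{missed emergency}}}. $$

If, say, a wasted visit costs about 1 unit (hours of waiting plus infection risk) and missing a true emergency costs about 100 units, the threshold lands near p ≈ 1%, which is roughly why <0.5% read as a clear "stay home" and ≥2% as a clear "go in."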
Admittedly, medical contexts are better studied than charity contexts, and especially than influencing-the-distant-future charity contexts. So, it makes sense if you're especially skeptical of that level of precision in charitable contexts. (And I indeed agree with this; I'm not defending that level of precision in practice for EA charities!) Still, like habryka pointed out in another comment, I don't think there's a red line where fundamental changes happen as probabilities get lower and lower. The world isn't inherently frequentist, but we can often find plausibly-relevant base rates. Admittedly, there's always some subjectivity, some art, in choosing relevant base rates, assessing additional risk factors, and making judgment calls about "how much of a match is this symptom?" But if you find the right context for it (meaning: a context where you're justifiably anchoring to some very low-probability base rate), you can get well below the 0.5% level for practically relevant decisions (and maybe make proportional upwards or downwards adjustments from there). For these reasons, it doesn't strike me as totally outlandish that some group will at some point come up with a ranged very-low-probability estimate of averting some risk (like asteroid risk or whatever) while being well-calibrated. I'm not saying I have a concrete example in mind, but I wouldn't rule it out.
That makes sense; I understand that concern.
I wonder if, next time, the survey makers could write something to reassure us that they're not going to use any results out of context or with an unwarranted spin (esp. in cases like the one here, where the question relates to a big 'divide' within EA but is worded as an abstract thought experiment).
As you say, you can block the obligation to gamble and risk Common-sense Eutopia for something better in different ways/for different reasons.
For me, Common-sense Eutopia sounds pretty appealing because it ensures continuity for existing people. Considering that many people don't have particularly resource-hungry life goals, Common-sense Eutopia would score pretty high from a perspective where what existing people want for their own future and that of their loved ones is what matters.
Even if we say that other considerations besides existing people also matter morally, we may not want those other considerations to just totally swamp/outweigh how good Common-sense Eutopia is from the perspective of existing people.