The following is an excerpt from some comments I wrote to Will MacAskill about a pre-publication draft of What We Owe the Future. It is in response to the chapter on population ethics.
Chapter 8 presented some interesting ideas and did so clearly; I learned a lot from it.
That said, I couldn’t shake the feeling that there was something bizarre about the entire enterprise of trying to rate and rank different worlds and populations. I wonder if the attempt is misguided, and if that’s where some of the paradoxes come from.
When I encounter questions like “is a world where we add X many people with Y level of happiness better or worse?” or “if we flatten the happiness of a population to its average, is that better or worse?”—my reaction is to reject the question.
First, I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds. Second, if I consider realistic, analogous scenarios, there are always major considerations that guide my choices other than an abstract, top-down decision about overall world-values.
For instance, if I choose to bring one more person into the world, by having a child (which, incidentally, we just did!), that decision is primarily about what kind of life I want to have, and what commitments I am willing to make, rather than about whether I think the world, in the abstract, is better or not with one more person in it.
Similarly, if I were to consider whether I should make the lives of some people worse, in order to make the lives of some less-well-off people better, my first thought is: by what means, and what right do I have to do so? If it were by force or conquest, I would reject the idea, not necessarily because of the end, but because I don’t believe that the ends justify the means.
There seems to be an implicit framework to a lot of this along the lines of: “in order to figure out what to do, we need to first decide which worlds are better than which other worlds, and then we can work towards better worlds or avoiding worse worlds.”
This is fairly abstract, centralized, and top-down. World-states are assigned value without asking: valuable to whom, and for what? These values are presumed to be universal, the same for everyone. And the framework provides no guidance about what means are acceptable in working towards those world-states.
An approach that makes more sense to me is something like: “The goal of ethics is to guide action. But actions are taken by individuals, who are ultimately sovereign entities. Further, they have differing goals and even unique perspectives and preferences. Ethics should help individuals decide what goals they want to pursue, and should give guidance for how they do so, including principles for how they interact with others in society. This can ultimately include concepts of what kind of society and world we want to live in, but these world-level values must be built bottom-up, grounded in the values and preferences of individuals. Ultimately, world-states must be understood as an emergent property of individuals pursuing their own life-courses, rather than something that we can always evaluate top-down.”
I wonder if, in that framework, a lot of the paradoxes in the book would dissolve. (Although perhaps, of course, new ones would be created!) Rather than asking whether a world-state is desirable or not, we would consider the path by which it came about. Was it the result of a population of individuals pursuing good (if not convergent) goals, according to good principles (like honesty and integrity), in the context of good laws and institutions that respect rights and prohibit oppression? If so, then how can anyone say that a different world-state would have been better, especially without explaining how it might have come about?
I’m not sure that this alternate framework is compatible with EA—indeed, it seems perhaps not even compatible with altruism as such. It’s more of an individualist / enlightened-egoism framework, and I admit that it represents my personal biases and background. It also may be full of holes and problems itself—but I hope it’s useful for you to consider it, if only to throw light on some implicit assumptions.
Incidentally, aside from all this, my intuition about the Repugnant Conclusion is that Non-Anti-Egalitarianism is wrong. The very reason that the Conclusion is repugnant is the idea that there’s some nonlinearity to happiness: a single thriving, flourishing life is better than the same amount of happiness spread thin over many lives. But if that’s the case, then it’s wrong to average out a more-happy population with a less-happy population. I suppose this makes me an anti-egalitarian, which is OK with me. (But again, I prefer to analyze this in terms of the path to the outcome and how it relates to the choices and preferences of the individuals involved.)
I think my crux with this argument is "actions are taken by individuals". This is true, strictly speaking; but when, e.g., a member of the U.S. Congress votes on a bill, they're taking an action on behalf of their constituents and affecting the whole U.S. (and often world) population. I like to ground morality in questions with a political-philosophy flavor, such as: "What is the algorithm that we would like legislators to use to decide which legislation to support?" And as I see it, there's no way around answering questions like this one when decisions involve significant trade-offs in terms of which people benefit.
And often these trade-offs need to deal with population ethics. Imagine, as a simplified example, that China is about to deploy an AI that has a 50% chance of killing everyone and a 50% chance of creating a flourishing future of many lives like the one many longtermists like to imagine. The U.S. is considering deploying its own "conservative" AI, which we're pretty confident is safe, and which will prevent any other AGIs from being built but won't do much else (so humans might be destined for a future that looks like a moderately improved version of the present). Should the U.S. deploy this AI? It seems like we need to grapple with population ethics to answer this question.
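To make concrete why the answer turns on population ethics, here is a toy expected-value comparison under a total view; the symbols and any thresholds are illustrative assumptions of mine, not part of the scenario as stated:

$$
\begin{aligned}
\mathrm{EV}_{\text{total}}(\text{risky AI}) &= 0.5 \times 0 \;+\; 0.5 \times N_{\text{future}}\, w_{\text{high}} \\
\mathrm{EV}_{\text{total}}(\text{safe AI}) &= N_{\text{present}}\, w_{\text{moderate}}
\end{aligned}
$$

On a total view, deploying the risky AI wins as soon as $N_{\text{future}}\, w_{\text{high}} > 2\, N_{\text{present}}\, w_{\text{moderate}}$, which a sufficiently large flourishing future easily satisfies; on an average or person-affecting view, the 50% chance of killing everyone who currently exists dominates instead. The ranking of the two options flips with the choice of population ethics, which is the point of the example.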
(And so I also disagree with "I can’t imagine a reasonable scenario in which I would ever have the power to choose between such worlds", insofar as you'll have an effect on what we choose, either by voting or more directly than that.)
Maybe you'd dispute that this is a plausible scenario? I think that's a reasonable position, though my example is meant to point at a cluster of scenarios involving AI development. (Abortion policy is a less fanciful example: I think any opinion on the question built on consequentialist grounds needs to either make an empirical claim about counterfactual worlds with different abortion laws, or else wrestle with difficult questions of population ethics.)
I would like them to use an algorithm that is not based on some sort of global calculation about future world-states. That leads to paternalism in government and social engineering. Instead, I would like the algorithm to be based on something like protecting rights and preventing people from directly harming each other. Then, within that framework, people have the freedom to improve their own lives and their own world.