Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I've also done economic modelling for some animal welfare issues.
(Edited to elaborate and for clarity.)
Thomas (2019) calls these sorts of person-affecting views "wide". I think "narrow" person-affecting views can be more liberal (due to incommensurability) about what kinds of beings are brought about.
And narrow asymmetric person-affecting views, as in Thomas (2019) and Pummer (2024), can still tell you to prevent "bad" lives or bads in lives, but, contrary to antinatalist views, "good" lives and goods in lives can still offset the bad. Pummer (2024) solves a special case of the Nonidentity problem this way, by looking at goods and bads in lives.
But these asymmetric views may be less liberal than strict/symmetric narrow person-affecting views, because they could be inclined to prevent the creation of kinds of lives that often turn out bad, in favour of lives that are better on average. Or they may be more liberal, depending on how you think of liberalism: if someone would have a horrible life to which they would object, it seems illiberal to force them to have it.
I think these papers have made some pretty important progress in further developing person-affecting views.[1]
I think they need to be better adapted to choices between more than 2 options, in order to avoid the Repugnant Conclusion and replacement (St. Jules, 2024). I've been working on this and have a tentative solution, but I'm struggling to find anyone interested in reading my draft.
For a given individual, do they have a higher probability of making the difference for averting extinction, or of making the difference for some other long-term trajectory change? If you discount small enough probabilities of making a difference, or are otherwise difference-making risk averse (as an individual), would one come out ahead as a result?
Some thoughts: extinction is a binary event. But there's a continuum of possible values that future agents could have, including under value lock-in. A small tweak in locked-in values seems more achievable counterfactually than being the difference for whether we go extinct. And a small tweak in locked-in values would still have astronomical impact if those values persist into the far future. It seems like value change might depend less on very small probabilities of making a difference.
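As a toy illustration of that structure (all of the numbers here are made up for the sake of the example, not estimates): suppose being the difference for averting extinction has probability $10^{-9}$ and payoff $V$ (the value of the whole future), while being the difference for a small tweak in locked-in values has probability $10^{-4}$ and payoff $10^{-4}V$. Then the expected impacts are

$$\underbrace{10^{-9}\cdot V}_{\text{extinction}} \quad \text{vs.} \quad \underbrace{10^{-4}\cdot 10^{-4}V}_{\text{value tweak}} \;=\; 10^{-8}V,$$

which are of the same order, but if you discount probabilities of making a difference below some threshold (say $10^{-6}$), only the value tweak survives the discounting.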
Since others have discussed the implications, I want to push a bit on the assumptions.
I worry that non-linear axiologies[1] end up endorsing egoism, helping only those whose moral patienthood you are most confident in, or otherwise prioritizing them far too much over those whose moral patienthood is less certain. See Oesterheld (2017) and Tarsney (2023).
(I also think average utilitarianism in particular is pretty bad, because it would imply that if the average welfare is negative (even torturous), adding bad lives can be good, as long as they're even slightly less bad than average.)
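To spell out that arithmetic with made-up numbers: take a population of 1,000 people with average welfare $-10$, and add one more person with welfare $-9$. The new average is

$$\frac{1000\cdot(-10) + (-9)}{1001} \approx -9.999 > -10,$$

so average utilitarianism counts creating that (still very bad) life as an improvement.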
Maybe you can get around this with non-aggregative or partially aggregative views. EDIT: Or, if you're worried about fanaticism, difference-making views.
Assuming completeness, transitivity and the independence of irrelevant alternatives, and with each marginal moral patient mattering less.
I don't think it's valuable to ensure future moral patients exist for their own sake, and extinction risk reduction only really seems to expectably benefit humans who would otherwise die in an extinction event, who would be in the billions. An astronomical number of future moral patients could have welfare at stake if we don't go extinct, so I'd prioritize them on the basis of their numbers.
I might go back and forth on whether "the good" exists, understood as my subjective ordering over each set of outcomes (or each set of outcome distributions). This example seems pretty compelling against it.
However, I'm first concerned with "good/bad/better/worse to someone" or "good/bad/better/worse from a particular perspective". Then, ethics is about doing better by and managing tradeoffs between these perspectives, including as they change (e.g. with additional perspectives created through additional moral patients). This is what my sequence is about. Whether "the good" exists doesn't seem very important.
Hmm, interesting.
I think this bit from the footnote helped clarify, since I wasn't sure what you meant in your comment:
Note, however, that there is no assumption that d - f are outcomes for anyone to choose, as opposed to outcomes that might arise naturally. Thus, it is not clear how the appeal to choice set dependent betterness can be used to block the argument that f is not worse than d, since there are no choice sets in play here.
I might be inclined to compare outcome distributions using the same person-affecting rules as I would for option sets, whether or not they're being chosen by anyone. I think this can make sense on actualist person-affecting views, illustrated with my "Best in the outcome argument"s here, which are framed in terms of betterness (between two outcome distributions) and not choice. (The "Deliberation path argument" is framed in terms of choice.)
Then, I'd disagree with this:
And if, say, it makes the outcome better if an additional happy person happens to exist without anyone making it so
Also, for those interested in animal welfare specifically: https://www.animaladvocacycareers.org/
Seems fine to direct people to 80,000 Hours for AI/x-risk, Animal Advocacy Careers for animal welfare, and Probably Good more generally.