Stanford student (math/economics). Former intern at Rethink Priorities (animal welfare) and J-PAL South Asia (IDEA Initiative).
In my opinion, this is a neutral-to-positive update in favor of broiler welfare reforms (even though it increases the variance of possible outcomes as far as net harm goes). With high uncertainty, my best guess is that the average arthropod lives a net negative life (assuming sentience) — I’m aware you are more undecided about this than I am. Additionally, again with high uncertainty, my best guess is that additional land use from feed reduces arthropod populations, which is also your conclusion. So for me, this is an increase in the expected value of broiler welfare reforms.
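(To make the variance-vs.-expected-value point concrete, here’s a minimal sketch with purely hypothetical numbers, not estimates from the post or my own views: a change that widens the spread of possible welfare outcomes can still raise the probability-weighted average.)

```python
# Illustrative only: the probabilities and welfare values below are made up,
# not estimates from the post. The point is just that a wider spread of
# outcomes (higher variance) is compatible with a higher expected value.

def expected_value(lottery):
    """Probability-weighted average of (probability, value) pairs."""
    return sum(p * v for p, v in lottery)

def variance(lottery):
    ev = expected_value(lottery)
    return sum(p * (v - ev) ** 2 for p, v in lottery)

# Hypothetical welfare outcomes (arbitrary units), before and after
# accounting for the land-use effect on arthropod populations.
before = [(0.5, 1.0), (0.5, -1.0)]     # narrow spread
after = [(0.75, 2.0), (0.25, -2.0)]    # wider spread, higher mean

print(expected_value(before), variance(before))  # 0.0 1.0
print(expected_value(after), variance(after))    # 1.0 3.0
```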
I’m not completely sure I would call your view constructivist, because of this comment by Sebo under the same piece.
Also, here’s a random thought, which I don’t necessarily think holds for your view, but I’m curious what you think. I think “objective” tends to mean, as Huemer puts it in Ethical Intuitionism, constitutively independent of the attitudes of observers specifically, rather than independent of anyone’s attitudes or stances. For example, a preference utilitarian can think there is an objective moral fact that it is bad, all else equal, to do something to someone that they disprefer, even though the “badness” comes from their dispreference. It seems like that fact would be subjective only if the claim were that it was bad according to some observer. But I don’t know whether that means your view accepts objective or stance-independent moral facts.
I think the expected value of the long-term future, in the “business as usual” scenario, is positive. In particular, I anticipate that advanced/transformative artificial intelligence will drive technological innovation that solves a lot of the world’s problems (e.g., eventually helping create cell-based meat), and I also think a decent amount of this EV is contained in futures with digital minds and/or space colonization (even though I’d guess it’s unlikely we get to that sort of world). However, I’m very uncertain about these futures—they could just as easily contain a large amount of suffering. And if we don’t reach those futures, I’m worried that wild animal suffering will be high in the meantime. Separately, I’m not sure addressing a lot of s-risk scenarios right now is particularly tractable (nor, more imminently, does wild animal suffering seem awfully tractable to me).
Probably the biggest reason I’m so close to the center is that I think a significant amount of existential risk from AI looks like disempowering humanity without killing literally every human; hence, I view AI alignment work as at least partially serving the goal of “increasing the value of futures where we survive.”
The fourth objection, on who the victim is, has always seemed like the strongest explanation of the deontological moral difference to me. When you offset your CO2 emissions, you haven’t actually harmed anyone. (I’m personally inclined to place higher credence on utilitarianism than on most other moral theories, so I’m not too bothered by this, and I also think offsetting is certainly better than the most plausible alternative – people eating meat but not offsetting it – but regardless, it’s an interesting philosophical question.)
The Carlsmith article you linked – post 1 of his two-post series – seems mostly to argue against the standard reasons people might give for ethical anti-realists reasoning about ethics (i.e., he argues that neither a brute preference for consistency nor money-pumping arguments seem like the whole picture). You might be talking about the second piece in the series instead?
Brian Tomasik considers more selection toward animals with faster life histories in his piece on the effects of climate change on wild animals. He seems to think it’s not decisive (and ends up concluding that he’s basically 50–50 on the sign of the effects of climate change on overall animal suffering), for roughly three reasons (paraphrasing Tomasik):
I’d be curious how you think the arguments in the above post should change Tomasik’s view, in light of these considerations.
I didn’t say they fell under the ethics of killing; I was using killing as an example of a generic rights violation under a plausible patient-centered deontological theory, to illustrate the difference between “a rights violation happening to one person and help coming for a separate person as an offset” and “one’s harm being directly offset.”
(I agree that it seems a bit less clear whether potential people can have rights, in particular a right not to be brought into existence, even if they can have moral consideration, but I think it’s very plausible that they can.)
Note, however, that I think the question of whether there can be deontic side-constraints regarding our treatment of animals is unclear even conditional on deontology. Many deontological philosophers – like Huemer – are uncertain whether animals have “rights” (as a patient-centered deontologist would put it), even though they think (1) humans have rights and (2) animals are still deserving of moral consideration. Deontologists sometimes resort to something like “deontology for people, consequentialism for animals” (although other deontologists, like Nozick, thought this was insufficient for animals).
I lean in favor of (some kind of) normative realism. My grounds for this are the relatively basic ones: it certainly seems, for example, that some choices are plainly irrational, or that some states of affairs are bad in a stance-independent way. And of course, robust realists will always point to the partners in crime of moral facts in other a priori domains.
My main source of uncertainty — indeed, the reason I flip back and forth between realism and anti-realism — is (various presentations of) the epistemological objection to moral realism. In particular, (i) I’m not sure if we have the right kind of epistemic access to any abstract facts (and find views like mathematical/logical conventionalism plausible for this reason) and (ii) even if we did, I often struggle to find an explanation for why we have moral knowledge specifically (it doesn’t seem obviously evolutionarily advantageous to know the true moral facts, and the idea that it’s a mere byproduct of other a priori knowledge feels a bit unsatisfying, in that I don’t really know if there’s a connection between moral facts and facts about other platonic universals). See this essay by Carlsmith articulating this kind of objection. I like some of the responses to arguments of this general sort in this paper by David Enoch.