Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I've also done economic modelling for some animal welfare issues.
If using precise credences, then I'd be a strong longtermist (probably focusing on existential risks of some kind) or chase infinities. I haven't thought much about practical donation recommendations from this perspective, assuming I'm not suffering-focused. If suffering-focused (as I actually am), then probably CLR.
A Pascal's mugging by nematodes? Nematodes as utility monsters?
Pascal's bugging and the Rebugnant Conclusion (Sebo, 2024). :P
Interested to hear from Insect Welfare and Wild Animal Welfare advocates why they disagree that nematodes are the primary moral concern of planet Earth.
I'm sympathetic to difference-making risk aversion and difference-making ambiguity aversion (although see here) and assign nematodes a quite low probability of mattering much at all to me, low enough for now that I'm inclined to ignore them altogether (and what would have gone to nematodes instead goes to mitigating s-risks). Mites, springtails, copepods and insect larvae seem substantially more likely to matter to me, based on my beliefs about their capacities.
Still, I'd rather not go 100% on invertebrates either, also due to my difference-making sympathies. I'd deal with this like normative uncertainty and use a kind of bucket approach, like the Property Rights approach and hedging. The normative uncertainty here is about difference-making and approaches to dealing with uncertainty, about the nature of consciousness and moral patienthood and how to deal with it (although also see this), and about aggregation. So, roughly in practice, based on the probabilities of making a difference, the probabilities of moral patienthood, and attitudes towards risk and aggregation, I have a humans bucket, a mammals and birds bucket, a fish bucket, a shrimp and insects bucket, a mites, springtails and copepods bucket, and an s-risks bucket.
Another potentially useful takeaway is that the interventions Vasco considered, or at least diet change interventions like Veganuary and School Plates, are not robustly positive in expectation when all the near-term effects on animals are considered. So why would we support them?
These interventions don't seem justified by their direct cost-effectiveness, unless we have adequate reason to single out those effects and ignore or discount the effects on wild terrestrial invertebrates. We'd need a good reason to single out the direct effects, or to appeal to even more indirect or longer-term reasons (e.g. moral circle expansion, space colonization and s-risks).
For the comparison to Shrimp Welfare Project's Humane Slaughter Initiative, how long are you assuming the stunners are (counterfactually) used for? If I recall correctly, some prior estimates only assumed 1 year, which seems very conservative, and would probably make the comparisons to other opportunities here unfair.
It's not so much that there's a specific threshold away from 50%; rather, if you're wildly uncertain and the matter is highly speculative, then instead of assigning a single precise probability like 55%, you should use a range of probabilities, say 40% to 70%. This range has values on either side of 50%. Then:
(I'm assuming we're ruling out an average welfare of exactly 0 or assigning that negligible probability, EDIT: conditional on sentience/having any welfare at all.)
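The point about a range straddling 50% can be sketched numerically. This is just a toy model with illustrative welfare values of my own choosing (+1 if average welfare is positive, -1 if negative):

```python
def expected_welfare(p, w_pos=1.0, w_neg=-1.0):
    """Expected average welfare, given probability p that it's positive.

    For the default values, EV(p) = p * w_pos + (1 - p) * w_neg = 2p - 1.
    """
    return p * w_pos + (1 - p) * w_neg

# A single precise credence gives a definite sign:
print(expected_welfare(0.55))  # positive

# An imprecise credence of 40% to 70% gives expected values of both signs,
# so the overall sign is ambiguous:
print(expected_welfare(0.40))  # negative
print(expected_welfare(0.70))  # positive
```

With a precise 55%, you'd act as if the expected welfare is positive; with the 40%–70% range, no sign is forced on you, which is what motivates the approaches to decision-making with imprecise credences above.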
On standard accounts of difference-making ambiguity aversion, which I think are problematic. I'm less sure about the implications of other accounts. See my 2024 post.
I'll add that even if I would make different methodological choices, I think it's still useful to highlight the scale of indirect effects on wild animals. The default in the community seems to be to ignore these effects, and there doesn't seem to be good justification for that. I think it's great that Vasco is taking these effects seriously and seeing where they might lead.
(And, as in my other comment, the conclusions and analysis could hold approximately anyway for those sufficiently pessimistic about the lives of wild invertebrates, or who give enough weight to sufficiently suffering-focused views.)
I agree both with the specific point about using LLMs and the more general point about sensitivity to highly speculative and ambiguous values. I would endorse imprecise credences, and the use of approaches to decision-making with imprecise credences. See also Anthony's piece against precise Bayesianism.[1]
That being said, if you're sufficiently suffering-focused or confident that their lives are negative on average (or confident that their lives are positive on average), then you don't have to worry about this too much.
On difference-making ambiguity aversion as one natural group of approaches, see my 2024 post, my 2020 post and Greaves et al., 2022. I'm not confident these are the best approaches for dealing with imprecise credences (if averse to fanaticism).
Hi Nicolas, thanks for commenting!
Ah, good point. (You're assuming the separate components can be added directly (or with fixed weights, say).)
I guess the cases where you can't add directly (or with fixed weights) involve genuine normative uncertainty or incommensurability. Or, maybe some cases of two envelopes problems where it's too difficult or unjustifiable to set a unique common scale and use the Bayesian solution.
In practice, I may have normative uncertainty about moral weights between species.
If you're risk neutral, probably. Maybe not if you're difference-making risk averse. Perhaps helping insects is robustly positive in expectation, but highly likely to have no impact at all. Then you might like a better chance of positive impact, while maintaining 0 (or low) probability of negative impact.
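To illustrate with one simple formalization of difference-making risk aversion (a concave utility applied to the difference you make; all numbers here are hypothetical):

```python
import math

def expected_value(outcomes):
    """Risk-neutral value: plain expected difference made."""
    return sum(p * d for p, d in outcomes)

def risk_averse_value(outcomes, u=math.sqrt):
    """Difference-making risk-averse value: concave u over (non-negative) differences."""
    return sum(p * u(d) for p, d in outcomes)

# A: big difference with low probability; B: modest difference with high probability.
# Neither has any probability of negative impact.
A = [(0.1, 10.0), (0.9, 0.0)]  # expected difference = 1.0
B = [(0.9, 1.0), (0.1, 0.0)]   # expected difference = 0.9

print(expected_value(A), expected_value(B))        # A wins on expected value
print(risk_averse_value(A), risk_averse_value(B))  # B wins when risk averse
```

A risk-neutral evaluator prefers A, but the concave transform rewards B's much better chance of making *some* positive difference, matching the point above.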
For my illustration, that's right.
However, my illustration treats the components as independent, so that you can get the worst case on each of them together. But this need not be the case in practice. You could in principle have interventions A and B, both with ranges of (expected) cost-effectiveness [-1, 2], but whose sum is exactly 1: just let the cost-effectiveness of B be 1 minus the cost-effectiveness of A. Having things cancel out so exactly and ending up with a range that's a single value is unrealistic, but I wonder if we could at least get a positive range this way.
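A quick sketch of this construction, with hypothetical numbers (A uniform on [-1, 2], and B defined as 1 minus A):

```python
import random

def sample_cost_effectiveness(n=10_000, seed=0):
    """Sample hypothetical cost-effectiveness values for interventions A and B.

    A is uniform on [-1, 2]; B is defined as 1 - A, so B also ranges over
    [-1, 2], but the two are perfectly anticorrelated.
    """
    rng = random.Random(seed)
    a_vals = [rng.uniform(-1, 2) for _ in range(n)]
    b_vals = [1 - a for a in a_vals]
    return a_vals, b_vals

a_vals, b_vals = sample_cost_effectiveness()

# Each component's range is (approximately) [-1, 2], so each looks
# individually sign-ambiguous:
print(min(a_vals), max(a_vals))
print(min(b_vals), max(b_vals))

# ...but the portfolio A + B is essentially 1 in every state, so the
# combination is robustly positive:
sums = [a + b for a, b in zip(a_vals, b_vals)]
print(min(sums), max(sums))
```

The exact cancellation here is unrealistic, as noted, but weaker negative correlation between components could still narrow the portfolio's range toward something entirely positive.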
Ya, the default doesn't seem privileged if you're a consequentialist. See this post.