Author’s note: This is an adapted version of my recent talk at EA Global NYC (I’ll add a link when it’s available). The content has been adjusted to reflect things I learned from talking to people after my talk. If you saw the talk, you might still be interested in the “some objections” section at the end.
23 November: edited to make a slight terminological change to one of the approaches (precise probabilities --> precise probabilities to as many things as you can)
Summary
Wild animal welfare faces frequent tractability concerns, amounting to the idea that ecosystems are too complex to intervene in without causing harm. However, I suspect these concerns reflect inconsistent justification standards rather than unique intractability. To explore this idea:
- I provide some context about why people sometimes have tractability concerns about wild animal welfare, using bird-window collisions as a concrete example.
- I then describe four approaches to handling uncertainty about indirect effects: spotlighting (focusing on target beneficiaries while ignoring broader impacts), ignoring cluelessness (acting on knowable effects only), assigning precise probabilities to as many things as you can, and seeking ecologically inert interventions.
- I argue that, when applied consistently across cause areas, none of these approaches suggests wild animal welfare is distinctively intractable compared to global health or AI safety. Rather, the apparent difference most commonly stems from arbitrarily wide "spotlights" applied to wild animal welfare (requiring consideration of millions of species) versus narrow ones for other causes (typically just humans).
While I remain unsure about the right approach to handling indirect effects, I think that this is a problem for all cause areas as soon as you realize wild animals belong in your moral circle, and especially if you take a consequentialist approach to moral analysis. Overall, while I’m sympathetic to worries about unanticipated ecological consequences, they aren’t unique to wild animal welfare, and so either wild animal welfare is not uniquely intractable, or everything is.
Consequentialism + impartial altruism → hard to do good
Let’s assume (as I will for the rest of this post) that you think animals matter morally, you’re pretty consequentialist, and you want to analyze ways to help the world through a ~ scale/neglectedness/tractability lens. How would you analyze wild animal welfare?
It’s pretty obvious that you’d give it points for scale and neglectedness — in the six years I’ve worked on wild animal welfare, few people have argued otherwise. But I and others working in the space frequently hear concerns about the cause area’s tractability. Most commonly, these take the form of concerns that nature is too complex to work in, and that it therefore won’t be feasible to improve the lives of even a portion of the trillions–quintillions of wild animals without harming others or messing up the system in some way.
I don’t think wild animal welfare is uniquely intractable, even though I have sympathy with these ecologically motivated concerns. I suspect that if you agree with the broad moral views I described, and think it’s reasonable to work on global health or AI safety, you should think wild animal welfare is reasonable to work on as well.
I think the issue with wild animal welfare is how we talk about it, and our standards for justifying working on it. In this post, I’ll explain this opinion, and argue that if justification standards are applied consistently, wild animal welfare looks about as tractable as anything else.
The challenge: Deep uncertainty and backfire risk
First, let’s return to why people think wild animal welfare is intractable in the first place. Broadly, the idea is that there are tons of different species in the category “wild animals,” many of which we know little about, and whose members live in an unconstrained ecological system that seems highly sensitive to perturbations. As a result, people often feel “clueless” about the effects of their actions, which makes it hard to figure out if there is a reasonable intervention to pursue that would be cost effective. Worse, you could even end up doing more harm than good.
I think most people who have been exposed to the idea of wild animal welfare at least have some intuitive understanding of the idea that “messing around in ecosystems” is risky, but I think it’s useful to provide a concrete example. So I’ll examine this idea by means of the topic I’m currently studying: bird-window collisions.
Example: Bird-window collisions
Bird-window collisions may kill over a billion birds annually in North America alone (Loss et al., 2014, 2015). On the face of it, legislation or remediation campaigns to require bird-safe glass — using visible patterns, dots, or modified glass to prevent collisions — seem like a positive, pro-welfare way to address these collisions. Uncertainties that might arise include whether these campaigns are cost effective to work on, and whether bird-safe glass has unintended ecological effects. Several layers of uncertainty complicate this assessment:
We don’t actually understand the welfare consequences of bird-window collisions on birds
While some birds die quickly from skull hemorrhages, others may suffer from crop rupture, fractures, or other injuries for hours to weeks before dying (Fornazari et al., 2021; Klem, 1990). We lack good data on rates of different outcomes. Only two empirical studies have examined sublethal strikes — one using window panes in the woods (Klem Jr. et al., 2024) and another studying only one building as a pilot of the methods (Samuels et al., 2022). So it’s hard to say how bad a death by window collision is on average.
This matters because:
We don’t know how birds would die otherwise
Birds saved from window collisions don't become immortal — they die later from other causes, most commonly predation, as far as we can tell (Hill et al., 2019). Based on age-structured mortality models for affected species like song sparrows, collision victims who survive gain approximately 1–2 additional years of life[1]. Whether this is net positive depends on comparing the suffering of window collision deaths versus alternative deaths (predominantly predation), plus the value of those additional life-years. Critically, if the difference in the amount of suffering caused by the new death outweighs the joy gained from an additional 1–2 years of life, the intervention could be net negative for birds themselves. Whether you think this is possible or likely depends both on empirical facts we don’t currently have access to, as well as philosophical beliefs about what makes a life worth living.
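As a rough illustration of where a figure like this can come from, here is a minimal sketch that assumes a constant annual adult survival probability (a simplification of the age-structured models referenced above; the survival values are illustrative assumptions, not estimates from the footnoted analysis):

```python
# Minimal sketch: expected additional life-years for a bird that survives a
# collision, assuming a constant annual adult survival probability.
# The survival values used here are illustrative assumptions, not outputs of
# the age-structured mortality models referenced in the footnote.

def expected_remaining_years(annual_survival: float) -> float:
    """Expected further years lived under a constant annual survival
    probability (geometric survival model): s + s^2 + ... = s / (1 - s)."""
    return annual_survival / (1.0 - annual_survival)

for s in (0.5, 0.6):
    print(f"annual survival {s:.1f} -> ~{expected_remaining_years(s):.1f} extra years")
# annual survival 0.5 -> ~1.0 extra years
# annual survival 0.6 -> ~1.5 extra years
```

Under these assumptions, annual survival rates in the 0.5–0.6 range imply roughly 1–2 additional expected years, consistent with the figure above.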
Making things worse:
The effects on other animals are even more uncertain
We don’t know if bird-window collisions affect bird population sizes. If populations are resource limited, preventing collision deaths might not increase population size — it might just shift which individuals die and how they die. If populations do increase, though, this creates cascading effects on prey species (primarily insects); scavengers who feed on collision victims; other animals who compete with birds for space, food, or other resources; and broader ecosystem dynamics. It’s unfortunately quite difficult to assess how city-scale interventions affect population dynamics over time. So far, only one study has explicitly addressed this question, and it found no detectable population effects (Arnold & Zink, 2011). Other experts have contested this finding (see here), but the only available data has significant biases in the amount of effort spent surveying populations and causes of death, among other things. Observational studies of this type, even when carefully done, can be extremely hard to interpret.
If there are population size changes, the resulting changes in ecological dynamics create cascading uncertainties. The new system will be different from the old one in all kinds of ways, and because we have no history of studying wild animal welfare, it’s incredibly difficult to make predictions about the quality of life of the animals in that new system, and whether they’d be better off or worse off overall.
This idea — that sometimes things with local or intended benefits can shift systems in such a way that they create more negative outcomes than positive ones — is commonly referred to as backfire risk. Backfire risk is not unique to the wild animal welfare community: AI policy advocates, for example, might worry about a legal initiative causing some kind of social response that creates more AI risk, even if the initiative initially looked like a way to decrease it. What’s unique is *how many moral patients* wild animal welfare advocates are asked to account for, as I’ll discuss below.
Four approaches to handling uncertainty
I spoke with a number of individuals across several of the main EA cause areas about how they think about and handle backfire risk in their work. Drawing on these conversations, my own reading on the Forum, and this extremely useful summary from Jim Buhler, I’ve defined four approaches: spotlighting, setting aside things you're clueless about, assigning precise probabilities to as many things as you can, and seeking ecologically inert interventions.
I am intentionally not endorsing any one of these approaches in this piece; my personal position is that they all have worrying flaws and I hope someone will come up with something better. Rather, I am seeking merely to describe the approaches I've observed, and show that wild animal welfare doesn’t seem particularly intractable regardless of which approach you use (provided you use the approach consistently).
Spotlighting
I think spotlighting is the most common approach to handling uncertainty, and is used in both global health and AI governance and safety. Spotlighting means focusing on specific groups of moral patients when assessing cost-effectiveness, while setting aside effects on others as “out of scope.” The justification for setting aside these other moral patients varies, but from my conversations with grantmakers and researchers, it seems to me like people are generally assuming (quite tenuously, as far as I can tell) that off-target effects are negligible compared to direct effects.
For example, if I’m using spotlighting to assess a global health intervention in Malawi, I look at the expected effects on the people of Malawi. Unless the intervention has extremely clear and obvious off-target effects that seem to be of the same scale as the target effects, I don’t ask difficult questions about theoretical possibilities, like the risk that improving the health of those in Malawi could somehow trigger cascading economic changes in the region that put some farmers out of business in a neighboring country. I certainly don’t ask tough questions about how a general improvement in the health of Malawians could be expected to affect the animals they eat, or the wild animals of the region. People in neighboring states, farmed animals, and wild animals are all “left in the dark” in my analysis — out of the spotlight.
To offer an AI example: If I’m using spotlighting to evaluate a bill intended to slow down the development of AI, I might worry about how that bill could backfire on its intended effect. For example, I might worry that slowing down development in the US could just lead to AI development taking off in China, somehow. But, with a few exceptions, it doesn’t seem to me that people attempting to estimate the value of passing the bill account for the risk that slowing down AI will affect global health or wild animals. In the case of global health, there is at least some discussion of this topic. Those who don’t want to slow down AI, for example, seem to consider the benefits they expect AI to bring the developing world as a reason to keep developing an ostensibly dangerous technology. But almost no thought is paid to wild animals, despite the huge number of individuals likely to be affected.
It’s possible that many AI-focused people simply don’t care about wild animals. But as stated at the beginning, I’m assuming a moral framework under which wild animals matter. Over the years, I’ve spoken to a lot of people who work in non-animal spaces, and generally, if I ask them about animal-affecting backfire, I get one of two responses: either that (a) we’re assuming these interventions don't affect wild animals, or (b) the effects are small enough to ignore. Unfortunately, both claims seem obviously incorrect. Just to illustrate one aspect of this issue: Any AI policy that influences the timelines to AGI will affect land use, resource consumption, and mining behavior — all of which have substantial effects on wild animal populations and welfare. The net effect is unlikely to be small and we don't know whether it is positive or negative.
But regardless, the common approach is to set aside these sorts of effects through spotlighting. It’s not always clear how the scope of the spotlight is defined, but it seems like many people are focusing on “the target beneficiaries” or “target outcomes.” If the target beneficiaries in AI and global health are basically “humans,” maybe you think it’s fairly obvious why wild animal welfare seems intractable: There are just a ton more beings in the spotlight. Someone working on farmed animal welfare just needs to think about maybe a few hundred species (if we include fish); global health and AI-focused actors only need to consider one (humans). In contrast, wild animal welfare has to “spotlight” multiple millions of species with competing needs.
But if spotlighting is permissible in this way, the fairly obvious question is: Why did we make wild animal welfare’s spotlight so wide? If we proceed intervention by intervention, I can’t see why I’m not allowed to “spotlight” just birds (or perhaps even a single species of birds), for example, when considering wild bird welfare and how it would be affected by bird-safe glass. If you think that’s an acceptable move, wild animal welfare (when analyzing one set of target beneficiaries at a time) doesn’t seem particularly intractable at all. I still have some uncertainties around how birds will die instead, but these kinds of uncertainties present a much smaller and simpler research problem, similar to the kinds of things regularly handled in global health research, where we act as if ecosystem-scale questions can be set aside.
A part of me is quite sympathetic, when trying to help humans in Malawi, to the idea of just focusing on uncertainties around the effects for people in Malawi. But it looks like such a strange and thoughtless thing to do in wild animal welfare — to think about whether bird-safe glass helps birds without worrying about whether it might harm scavengers or insects.
I haven’t seen anyone produce a coherent story of how spotlight sizes should be chosen, and few people are even working on these sorts of meta questions. But regardless of whether you like the approach or not, it seems like any reasonable and non-arbitrary standard for defining how to spotlight will put wild animal welfare and other cause areas on a similar playing field in tractability terms.
Set aside that which you are clueless about
So let’s say you don’t like spotlighting, because the distinction between moral patients in and out of your spotlight seems arbitrary. But, if you try to consider all moral patients at once, you tend to become “clueless” (in the sense popularized by Greaves, 2016) quite quickly. So another set of options seems to be: include all the effects you aren’t clueless about, act based on what you learn from looking at those things, and ignore the stuff you’re clueless about. So, for example, if I’m at least confident that bird-safe glass helps birds and causes more insects to get eaten, but I’m clueless about its effects on other animals and any ecological consequences, I can proceed just by analyzing the bird and insect effects I understand.
While in some cases this might look the same as spotlighting (e.g., if I spotlight just birds, and birds also happen to be the only moral patients I’m not clueless about), I think the reasoning is different and the outcomes can come apart. In spotlighting, it’s an allowed move to narrow your focus to some specific set of moral patients, because (for some reason or another) you think it’s okay to assume that effects on this group determine the overall result without actually checking in each instance that this is true. In “ignoring cluelessness” approaches, I’m not using that heuristic off the bat. The precise procedure might vary depending on how someone is operationalizing “ignore cluelessness,” but broadly, I include all moral patients in theory, and only exclude things if I’m clueless about them in the specific scenario under consideration.
There’s a rich discussion of cluelessness with a reasonable number of pending disagreements I won’t get into here (but you can dive into the discussion here). Suffice to say that I’m skeptical that most articulated approaches are non-arbitrary. As a result, I think that cluelessness and unawareness (the idea that we can’t account for possibilities we don’t even think of) are pretty important issues that deserve more research attention. That said, I’m reasonably optimistic that a newer proposal called bracketing (post 1, post 2) will offer a principled guide to “making decisions based on what you aren’t clueless about.”
So what does this mean for wild animal welfare? Again, if this approach works at all (e.g., because bracketing turns out to be a great proposal), it would still need to be applied equally to wild animal welfare and AI risk and global health. If we can ignore things we’re pretty clueless about, wild animal welfare becomes a lot easier. We still need to do research (to address the tractable uncertainties, like how birds commonly die in nature), but we won’t need to tackle the cluelessness-inducing issues at the ecological scale. But if this doesn’t work, and we can’t ignore cluelessness, then AI and global health actors are in the same problematic boat: If we apply equivalent epistemic standards across these cause areas, we should all be clueless about the effects of any intervention on (1) wild animals and (2) the long-term future.
Assign precise probabilities to as many things as you can
Another group of people attempting to include all moral patients in their analyses seem to basically reject cluelessness by trying to calculate (at least partially based on intuitions) the effects of interventions on as many questionably sentient moral patients as possible (for example, see this post). The idea is to come up with all the effects you can think of and assign precise probabilities to every possible outcome, even in the face of deep uncertainty. You can even assign some kind of modifier to capture all the “unknown unknowns.”
Others on the Forum have already made the case for why we should generally be suspicious of precise probability assignments when we can’t straightforwardly quantify our uncertainty, so I won’t make those arguments here. However, I’d also like to point out a social consequence of this approach that seems extremely problematic. Basically, when your uncertainties are this broad, a tiny amount of new data can flip your overall assessment (the shift in your probability assignments might not even be very big, but your EV estimate was so close to 0 anyway that a small change flips the sign). As a result, your views become volatile: You might determine an AI policy is net positive today, then completely reverse that judgment months later, after minor updates. Although some may think that this outcome is an unfortunate but necessary aspect of the “right” decision theory, it is extremely hard to see how one might run a movement this way. Switching from endorsing bird-safe glass to not endorsing it on a monthly basis would lead to little impact and few supporters.
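To illustrate the sign-flip problem with purely hypothetical numbers (a toy sketch, not a real cost-effectiveness estimate):

```python
# Toy illustration with made-up numbers: when an EV estimate sits near zero,
# a small probability update can flip its sign, and with it the recommendation.
value_if_good = 100   # arbitrary welfare units if the intervention helps
value_if_bad = -100   # arbitrary welfare units if it backfires

p_good_before = 0.51  # credence before a small amount of new data
p_good_after = 0.49   # credence after a minor update

ev_before = p_good_before * value_if_good + (1 - p_good_before) * value_if_bad
ev_after = p_good_after * value_if_good + (1 - p_good_after) * value_if_bad

print(round(ev_before, 2), round(ev_after, 2))  # +2.0 vs -2.0: the sign flips
```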
Seek ecologically inert interventions
Finally, the last contingent of EAs has basically accepted profound cluelessness about anything that influences wild animal populations — which is basically everything happening in global health and AI safety, and most things that you could think of in wild animal welfare and farmed animal welfare. As a result, these individuals hope to identify "ecologically inert" interventions that don't affect population dynamics or have cascading effects. Corporate welfare campaigns might be one sort of intervention that clears this bar. If we assume that going from caged → cage free improves welfare, but only changes conditions in the “closed” farming system, and doesn’t put factory farming on a pathway to elimination or change the amount of land it uses, we can be reasonably confident that all the effects of the intervention fall on the chickens themselves, and that there won’t be any ecological consequences.
While this is quite difficult, it’s not clear to me that it’s totally intractable. I (and several others) think we could reasonably view a handful of interventions as worth pursuing under this mindset. Mostly, these sorts of interventions change how humans kill animals or control populations, such that suffering is decreased without changing the net population outcome. Examples might include stunning wild-caught fish before slaughter or replacing rodenticides with fertility control on islands.
But very importantly, if your view is that interventions must be ecologically inert to be worth doing, this isn’t a unique problem for wild animal welfare — anyone with this mindset about wild animal welfare should apply it to AI safety and global health (as long as you endorse the moral positions I assumed at the outset). And since AI safety and global health interventions are almost certainly not ecologically inert, we see that we have the same status across all cause areas, rather than a unique type of intractability for wild animal welfare specifically.
Some objections & questions
After I gave a talk summarizing the above, I was asked a few questions that pushed in favor of one or another of the above approaches. My responses to these objections are below:
The global health comparison: Spotlighting hasn't backfired (for humans)
One objection to my rejection of spotlighting is that, basically, spotlighting is how we’ve been doing things for generations (in global health but also in all kinds of other decision making). It hasn’t backfired, so why should we be so suspicious of the approach?[2]
While some might quibble with the idea that spotlighting-endorsed interventions have never backfired for humans, even if the position is broadly correct for that context, it is deeply unobvious that it’s true for animals. It may be that interventions remain robustly positive for human welfare even under deep uncertainty about broader effects. But it seems wildly unclear that progress in human health over the last decades has not harmed animal welfare.
If impartial altruism is foundational to EA (and it’s at least foundational to my interest in EA), this asymmetry should trouble us. We can't simultaneously claim to care about all sentient beings while accepting spotlighting only when it's convenient for humans.
Action-inaction distinctions
Some people have asked me why people taking cluelessness seriously aren’t allowed to do whatever they want. If we’re clueless about “doing bird-safe glass” and “not doing bird-safe glass,” aren’t both just fine then? They would argue that we can decide to promote bird-safe glass based on some kind of deontological consideration or mere preference, since when we look at consequences we’re clueless either way.
People who feel clueless about consequences often do acknowledge that either action may be permissible. However, if you have a certain amount of money or time to spend on making the world a better place, and you have at least a couple of options that seem robustly good (and a ton that you’re clueless about), it seems better to spend it on the things that look robustly good. They search for ecologically inert interventions not because these are the only permissible options, but because "certain positive" seems better than "no idea."
I have some concerns about this view. For example, it seems like you are doomed to be clueless about truly transformative change, like trying to shift the world to have massively different attitudes toward animals. Thus, I worry that the only things that appear robustly good under this view would just be tiny shifts at the margin. Maybe you think that decision theory comes first, and that a rejection of transformative change stemming from that is just an unfortunate truth. But since I think a ton of our feelings about decision theory are based on intuitive judgements anyway, I want to also take seriously my own intuition that working toward the best world, and not just the nearest better world, is something I want my justification standards to at least sometimes license.
Why should justification standards be the same?
To me, this question gets at the core of whether EA is a community or not. As someone I spoke with suggested, why should I expect people in the arts and humanities charity space to have similar justification practices for why they give to the Museum of Modern Art instead of the Philadelphia Art Museum, as I have for why I work on the wild animal welfare effects of bird-safe glass? Of course, I don’t expect similar justification standards from actors outside my community.
Maybe the answer here is that, really, people working on AI safety, animal welfare, and global health are all in separate communities, with different donors and decision makers, and so obviously they have different justification standards. But I don’t think this is what I’d like to see in my ideal version of EA.
First, I think it would be sad! I like being in a community of people who, due to our uncertainty about various empirical points, might disagree on what cause to work on — but we still try to figure out interesting things about cost-effectiveness in an impartially altruistic way. Second, I can still critique the standards of both my own communities and others I see, and hope to identify the best justification standards I can — which, in practice, might involve looking at what people are doing in other spaces. So even if, sadly-to-me, EA is not a community anymore, I still want wild animal welfare to develop with good justification standards, both applied within the cause area (as we try to figure out which ways of helping wild animals are most worth doing) and across cause areas. I’d hope for non-arbitrary standards that neither freeze us in inaction, nor license clearly irresponsible actions. Even though I’m still trying to figure out what that means, it feels like an issue for everyone, and not just for wild animal welfare.
Conclusion
I’ve reviewed the four broad categories of approaches I’m aware of for handling uncertainty about indirect effects, and hopefully shown that, when applied consistently, it’s not obvious that wild animal welfare is less tractable than anything else.
I don’t know what the consequences of taking indirect effects seriously should be for other cause areas. In the case of wild animal welfare, I look forward to further developments from researchers thinking hard about cluelessness, unawareness, and decision theory to help guide research and action. Even though we mostly can’t figure out the exact consequences of our actions on all wild animals today, I expect that the sort of research and field-building efforts we’re pursuing in the wild animal welfare space will help us (a) understand how to help wild animals in at least some cases, and (b) dispel learned helplessness with regards to the suffering of wild animals, so that we’re in a better position to build toward understanding and responding to more and more indirect effects over time.
Many thanks to Jason Schukraft, Anthony DiGiovanni, Michael St. Jules, Simon Eckerström Liedholm, Jesse Clifton, Bob Fischer, and Abraham Rowe for conversations that improved the talk this document is based on. Thank you to Shannon Ray for copy edits (on short notice!).
AI usage: I used Claude to create an initial draft from the transcript of my talk, which I then heavily edited and added to (the final is about twice as long as Claude’s draft). Claude also wrote the first draft of the summary once I had finished the draft, which I then edited.

I think this post is excellent, but I disagree with your fundamental statement - that if wild animal welfare is intractable, then everything is. I think you've made a good argument, but it only covers half of the tractability question.
I agree with your well-made arguments that, in theory, animal welfare is tractable. I think, though, that your argument is a bit of a strawman, because you ignore what is probably the most important part of the tractability of wild animal welfare interventions - will most people actually support and allow interventions that might improve wild animal welfare? As in, once we have followed your great thought process and decided on a helpful intervention, is it something that most people will agree with, is it "tractable" in real life? Tractability considerations include both what is physically possible and what is societally/politically likely to happen (not addressed here). And in the case of wild animal welfare, I think the biggest tractability problems are about what society/regular people will allow.
Unfortunately, much of "tractability" in any field is about what is socially possible and likely to happen, as much as what is mechanically and physically possible. In medicine, when we consider the tractability of challenge trials, yes, they are physically possible - the question is whether regulatory bodies and society will allow them. That's the tractability problem, as the trials themselves are obviously theoretically tractable. We could also pretty easily knock half a degree to a degree off where rising temperatures would end up, for example by taxing carbon emissions consistently or even just subsidising green energy more than fossil fuels. But tractability is low on many aspects of climate change work because these options are close to politically implausible on a global scale.
My biggest questions about tractability of wild animal welfare interventions (unfortunately) are around what's going to be societally/politically possible, which isn't addressed at all in this article. On a mechanical, theoretical level yes I agree wild animal welfare is as tractable as everything else, but on a societal/political level, it may be less tractable. At least these aspects need to be considered in order to be accurate about how tractable animal welfare interventions might be.
I agree it's worth pointing out that Mal addresses only one of the two very distinct kinds of the tractability objection (i.e., cluelessness about our overall influence rather than "can we influence things at all").
However, I don't find your tractability concerns compelling. Surely there are consensual WAW interventions people already support, and we might not even need their support. Developing humane insecticides, for example, doesn't seem obviously less tractable to me than pushing for humane slaughter reforms for farmed animals. (I know very little about WAW and others might give more/better examples, though, or tell me I'm actually wrong and your concerns are fair.)
First, I'm not at all anti WAW interventions, and I think there will be some that thread the tractability needle. I just don't think the OP's arguments cover most of the tractability problem here.
I'm not a WAW person, but I would ask you to try a thought experiment. Think of 5 WAW interventions you think could have a big impact. Then ask yourself whether most people would actually support and go for them. I disagree that "surely there are consensual WAW interventions people already support". I think most interventions will have poor public tractability.
Your insecticide example is a great one. Sure, you could develop a humane insecticide, but how are you going to get a decent amount of market share and get millions of people to actually use it? THAT's the tractability problem. Most people wouldn't care enough to buy an insecticide that was more humane; they will focus on efficacy, cost, and even brand name. I would guess "heavy use of humane insecticide" would be very difficult to make happen, and quite intractable.
Hi Nick! Thanks for engaging. I'm not reading you as being anti WAW interventions, and I think you're bringing up something that many people will wonder about, so I appreciate you giving me the opportunity to comment on it.
Basically, let's say the type of intractability worry I was mainly addressing in the post is "intractability due to indirect ecological effects." And the type you're talking about is "intractability due to palatability" or something like that.
I think readers who broadly buy the arguments in my post, but don't think WAW interventions are palatable, are not correct, but for understandable reasons. I think the reason is either (1) underexposure to the most palatable WAW ideas, because WAW EAs tend not to focus on/enjoy talking about those, or (2) using the "ecologically inert" framework when talking about WAW and one of the other frameworks when talking about other types of interventions.
Let's first assume you're okay with spotlighting, at least to a certain degree. Then, "preventing bird-window collisions with bird-safe glass legislation" and "banning second generation anti-coagulant rodenticides" are actually very obviously good things to do, and also seem quite cost-effective based on the limited evidence available. I think people don't really realize how many animals are affected by these issues - my current best-guess CEA for bird-safe glass suggests it's competitive with corporate chicken campaigns, although I want to do a little more research to pin down some high-uncertainty parameters before sharing it more widely.
Anti-coagulant bans and bird-safe glass are also palatable, and the proof is in the pudding: California, for example, has already passed a state-wide ban on these specific rodenticides, and 22 cities (including NYC and Washington DC) have already passed bird-safe glass regulations. I could probably provide at least 5 other examples of things that fit into this bucket (low backfire under spotlighting, cost effective, palatable), and I don't really spend most of my time trying to think of them (because WAI is focused on field-building, not immediate intervention development, and because I'm uncertain if spotlighting is okay or if I should only be seeking ecologically inert interventions).
The important thing to note is that WAW is actually more tractable, in some cases, than FAW interventions, because it doesn't require anyone to change their diet, and people in many cultures have been conditioned to care about wild animals in a way they've been conditioned to reject caring about farmed animals. There's also a lot of "I love wild animals" sentiment being channelled into conservation, but my experience is that when you talk to folks with that sentiment, they also get excited about bird window collision legislation and things like that.
But perhaps you're actually hoping for ecologically inert interventions. Then, I'm not sure which interventions you'd think would be acceptable instead? Sure, humane insecticides could end up being hard (although I think much less hard than you think, for reasons I won't go into here). But literally nothing else - in FAW, in GHD, in AI - seems reasonably likely to be ecologically inert while still plausibly causing a reduction in suffering (maybe keel bone fracture issues in FAW?). But the folks who say "WAW interventions aren't palatable" have not generally, in my experience, said "and I also don't do GHD because it's not ecologically inert" -- so I suspect in at least some instances they are asking for ecologically inert interventions from WAW, and something else from their cause area of preference.
Thanks @mal_graham🔸 this is super helpful and makes more sense now. I think it would make your argument far more complete if you put something like your third and fourth paragraphs here in your main article.
And no I'm personally not worried about interventions being ecologically inert.
As a side note, it's interesting that you aren't putting much effort into making interventions happen yet - my loose advice would be to get started trying some things. I get that you're trying to build a field, but to have real-world proof of this tractability it might be better to try something sooner rather than later? Otherwise it will remain theory. I'm not too fussed about arguing whether an intervention will be difficult or not - in general I think we are likely to underestimate how difficult an intervention might be.
Show me a couple of relatively easy wins (even small-ish ones) and I'll be right on board :).
Thanks! I think I might end up writing a separate post on palatability issues, to be honest :)
On the intervention front, the WAW movement is now turning to interventions in at least some cases (in WAI's case, rodenticide fertility control is something they're trying to fundraise for, and at NYU/Arthropoda I'm working on or fundraising for work on humane insecticides and bird window collisions). I just meant that perhaps one reason we don't have more of them is that there's been a big focus on field-building for the last five years.
For field-building purposes, there's still been some focus on interventions for the reasons you mention, but with additional constraints --- not just cost-effective to pursue, but also attractive for scientists to work on, serving to clarify what WAW is, etc., to maximize the field-building outcomes if we can.
I'm not familiar with the examples you listed @mal_graham🔸 (anticoagulant bans and bird-safe glass). Are these really robust examples of palatability? I'm betting that they are more motivated by safety for dogs, children and predatory birds, not the rats? And I'm guessing that even the glass succeeded more on conservation grounds?
Certainly, even if so, it's good to see that there are some palatability workarounds. But given the small-body problem, this doesn't encourage great confidence that there could be more latent palatability for important interventions. Especially once the palatable low-hanging fruit are plucked.
Enjoyed this post.
Maybe I'll speak from an AI safety perspective. The usual argument among EAs working on AI safety is:
This is also the main argument motivating me — though I retain meaningful meta-uncertainty and am also interested in more commonsense motivations for AI safety work.
A lot of the potential goodness in 1. seems to come from digital minds that humans create, since it seems that at some point these will be much quicker to replicate than humans or animals. But lots of the interventions in 2. seem to also be helpful for getting things to go better for current farmed and wild animals, e.g. because they are aimed at avoiding a takeover of society by forces which don't care at all about morals. Personally I hope we use technology to lift wild animals out of their current predicament, although I have little idea what it would look like with any concreteness.
This relies on what you call the "assigning precise probabilities" approach, and indeed I rarely encounter AI safety x EA people who aren't happy assigning precise probabilities, even in the face of deep uncertainty. I really like how your post points out that this is a difference from the discourse around wild animal welfare but that it's not clear what the high-level reason for this is. I don't see a clear high-level reason either from my vantage point. Some thoughts:
Coda: to your "why should justification standards be the same" question, I'd just want to say I'm very interested in maintaining the ideal that EAs compare and debate these things; thanks for writing this!
Presumably misaligned AIs are much less likely than humans to want to keep factory farming around, no? (I'd agree the case of wild animals is more complicated, if you're very uncertain or clueless whether their lives are good or bad.)
That does seem right, thanks. I intended to include dictator-ish human takeover there (which seems to me to be at least as likely as misaligned AI takeover) as well, but didn't say that clearly.
Edited to "relatively amoral forces" which still isn't great but maybe a little clearer.
Thanks Eli!
I sort of wonder if some people in the AI community -- and maybe you, from what you've said here? -- are using precise probabilities to get to the conclusion that you want to work primarily on AI stuff, and then spotlighting within that cause area when you're analyzing at the level of interventions.
I think someone using precise probabilities all the way down is building a lot more explicit models every time they consider a specific intervention. Like if you're contemplating running a fellowship program for AI interested people, and you have animals in your moral circle, you're going to have to build this botec that includes the probability that X% of the people you bring into the fellowship are not going to care about animals and are likely, if they get a policy role, to pass policies that are really bad for them. And all sorts of things like that. So your output would be a bunch of hypotheses about exactly how these fellows are going to benefit AI policy, and some precise probabilities about how those policy benefits are going to help people, and possibly animals, and to what degree, etc.
I sort of suspect that only a handful of people are trying to do this, and I get why! I made a reasonably straightforward botec for calculating the benefits to birds of bird-safe glass, that accounted for backfire to birds, and it took a lot of research effort. If you asked me how bird-safe glass policy is going to affect AI risk after all that, I might throw my computer at you. But I think the precise probabilities approach would imply that I should.
Re:
I'm definitely interested in robustness comparisons but not always sure how they would work, especially given uncertainty about what robustness means. I suspect some of these things will hinge on how optimistic you are about the value of life. I think the animal community attracts a lot more folks who are skeptical about humans being good stewards of the world, and so are less convinced that a rogue AI would be worse in expectation (and even folks who are skeptical that extinction would be bad). So I worry AI folks would view "preserving the value of the future" as extremely obviously positive by default, and that (at least some) animal folks wouldn't, and that would end up being the crux about whether these interventions are in fact robust. But perhaps you could still have interesting discussions among folks who are aligned on certain premises.
Re:
Yeah, I think this is a feeling that the folks working on bracketing are trying to capture: that in quotidian decision-making contexts, we generally use the factors we aren't clueless about (@Anthony DiGiovanni -- I think I recall a bracketing piece explicitly making a comparison to day-to-day decision making, but now can't find it... so correct me if I'm wrong!). So I'm interested to see how that progresses.
I suspect though, that people generally just don't think about justification that much. In the case of WAW-tractability-skeptics, I'd guess some large percentage are likely more driven by the (not unreasonable at first glance) intuition that messing around in nature is risky. The problem of course is that all of life is just messing around in nature, so there's no avoiding it.
I think the vast majority of people making decisions about public policy or who to vote for either aren't ethically impartial, or they're "spotlighting", as you put it. I expect the kind of bracketing I'd endorse upon reflection to look pretty different from such decision-making.
That said, maybe you're thinking of this point I mentioned to you on a call: I think even if someone is purely self-interested (say), they plausibly should be clueless about their actions' impact on their expected lifetime welfare, because of strange post-AGI scenarios (or possible afterlives, simulation hypotheses, etc.).[1] See this paper. So it seems like the justification for basic prudential decision-making might have to rely on something like bracketing, as far as I can tell. Even if it's not the formal theory of bracketing given here. (I have a draft about this on the backburner, happy to share if interested.)
I used to be skeptical of this claim, for the reasons argued in this comment. I like the "impartial goodness is freaking weird" intuition pump for cluelessness given in the comment. But I've come around to thinking "time-impartial goodness, even for a single moral patient who might live into the singularity, is freaking weird".
But suppose I want to know who of two candidates to vote for, and I'd like to incorporate impartial ethics into that decision. What do I do then?
Hmm, I don't recall this; another Eli perhaps? : )
@Eli Rose🔸 I think Anthony is referring to a call he and I had :)
@Anthony DiGiovanni
I think I meant more like there was a justification of the basic intuition bracketing is trying to capture as being similar to how someone might make decisions in their life, where we may also be clueless about many of the effects of moving home or taking a new job, but still move forward. But I could be misremembering! Just read your comment more carefully and I think you're right that this conversation is what I was thinking of. Oh whoops, didn't look at the parent comment, haha.
Just purely on the descriptive level and not the normative one —
I agree but even more strongly: in AI safety I've basically never seen a BOTEC this detailed. I think Eric Neyman's BOTEC of the cost-effectiveness of donating to congressional candidate Alex Bores is a good public example of the type of analysis common in EA-driven AI safety work: it bottoms out in pretty general goods like "government action on AI safety" and does not try to model second-order effects to the degree described here. It doesn't model even considerations like "what if AI safety legislation is passed, but that legislation backfires by increasing polarization on the issue?" let alone anything about animals.
Instead, this kind of strategic discussion tends to be qualitative, and is hashed out in huge blocks of prose and comment threads e.g. on LessWrong, or verbally.
I see why you describe it this way, and directionally this seems right. But what we do doesn't really sound like "spotlighting" as you describe it in the post: focusing on specific moral patient groups and explicitly setting aside others.
Essentially I think the epistemic framework we use is just more anarchic and freeform than that! In AIS discourse, it feels like "but this intervention could slow down the US relative to China" or "but this intervention could backfire by increasing polarization" or "but this intervention could be bad for animals" exist at the same epistemic level, and all are considered valid points to raise.
(I do think that there is a significant body of orthodox AI safety thought which takes particular stances on each of these issues and other issues, which in a lot of contexts likely makes various points feel like they're not "valid" to raise. I think this is unfortunate.)
Maybe it's similar to the difference between philosophy and experimental science, where in philosophy a lot of discourse is fundamentally unstructured and qualitative, and in the experimental sciences there is much more structure because any contribution needs to be an empirical experiment, and there are specific norms and formats for those, which have certain implications for how second-order effects are or aren't considered. AI safety discourse also feels similar at times to wonk-ish policy discourse.
(Within certain well-scoped sub-areas of AI safety things are less epistemically anarchic; e.g. research into AI interpretability usually needs empirical results if it's to be taken seriously.)
Hmm, I wouldn't agree that someone using precise probabilities "all the way down" is necessarily building these kind of explicit models. I wonder if the term "precise probabilities" is being understood differently in our two areas.
In the Bayesian epistemic style that EA x AI safety has, it's felt that anyone can attach precise probabilities to their beliefs with ~no additional thought, and that these probabilities are subjective things which may not be backed by any kind of explicit or even externally legible model. There's a huge focus on probabilities as betting odds, and betting odds don't require such things (diverging notably from how probabilities are used in science).
I mean, I think typically people have something to say to justify their beliefs, but this can be & often is something as high-level as "it seems good if AGI companies are required to be more transparent about their safety practices," with little in the way of explicit models about downstream effects thereof.[1]
Apologies for not responding to some of the other threads in your post, ran out of time; looking forward to discussing in person sometime.
While it's common for AI safety people to agree with my statement about transparency here, some may flatly disagree (i.e. disagree about sign), and others (more commonly) may disagree massively about the magnitude of the effect. There are many verbal arguments but relatively few explicit models to adjudicate these disputes.
All very interesting, and yes let's talk more later!
One quick thing: Sorry my comment was unclear -- when I said "precise probabilities" I meant the overall approach, which amounts to trying to quantify everything about an intervention when deciding its cost effectiveness (perhaps the post was also unclear).
I think most people in EA/AW spaces use the general term "precise probabilities" the same way you're describing, but perhaps there is on average a tendency toward the more scientific style of needing more specific evidence for those numbers. That wasn't necessarily true of early actors in the WAW space and I think it had some mildly unfortunate consequences.
But this makes me realize I should not have named the approach that way in the original post, and should have called it something like the "quantify as much as possible" approach. I think that approach requires using precise probabilities -- since if you allow imprecise ones you end up with a lot of things being indeterminate -- but there's more to it than just endorsing precise probabilities over imprecise ones (at least as I've seen it appear in WAW).
+1 to maintaining justification standards across cause areas, thanks for writing this post!
Fwiw I feel notably less clueless about WAW than about AI safety, and would have assumed the same is true of most people who work in AI safety, though I admittedly haven't talked to very many of them about this. (And also haven't thought about it that deeply myself.)
This does not undermine the central claim of your Spotlighting section or that of your overall post, which are about tractability and not magnitude/importance, but a quick comment on:
Longtermist AI policy folks could say "sure, but these effects are still small in terms of moral importance" because they think the value of our actions is dominated by things that have nothing to do with wild animal welfare---or WAW before AGI---[1] e.g., the welfare of digital minds or biologically enhanced humans which they expect to be more numerous in the far future.[2] In fact, I think that's what most longtermists who seriously thought about impartial cause prio believe.[3]
And especially wild animal welfare before AGI. Maybe they think long-term WAW is what matters most (like, e.g., Bentham's Bulldog in this post) and that reducing x-risks or making AI safe is more promising than current WAW work to increase long-term WAW. And maybe they're clueless about how reducing x-risks or making AI safe affects near-term WAW, but they think this is dwarfed by long-term WAW anyway. Surely some people in the Sentient Futures and Longtermism and Animals communities think this (or would think this under more reflection).
From my experience, people who would hold this view endorse assigning precise probabilities (no matter what). And they kind of have to. It's hard to defend this view without endorsing that, at least implicitly.
I'm not saying they're right. Just trying to clarify what the crux is here (magnitude, not tractability) and highlighting that there may not be a consensus at all that (b) is incorrect.
Yes, totally agree that some longtermist or AI safety oriented types have actually thought about these things, and endorse precise probabilities, and have precise probability assignments to things I find quite strange, like thinking it's 80% likely that the universe will be dominated by sentient machines instead of wild animals. Although I expect I'd find any precise probability assignment about outcomes like this quite surprising, perhaps I'm just a very skeptical person.
But I think a lot of EAs I talk to have not reflected on this much and don't realize how much the view hinges on these sorts of beliefs.
Agreed. I think we should probably have very indeterminate/imprecise beliefs about what moral patients will dominate in the far future, and this imprecision arguably breaks the Pascalian wager (that many longtermists take) in favor of assuming enhanced human-ish minds outnumber wild animals.
However, many of the longtermists who would be convinced by this might fall back on the opinion I describe in footnote 1 of my above comment in the (they don't know how likely) scenario where wild animals dominate (and then the crux becomes what we can reasonably think is good/best for long-term WAW).
Thanks Mal. I really liked your EAG talk and I'm very pleased this post can share the ideas more widely. I agree with ~everything here.
The "ecologically inert" perspective makes a good deal of sense to me, but I can also find it ~frustrating that a worldview with such a vast and ambitious moral convas (wide moral circle, serious consideration of cluelessness and backfire risks) tends to recommend such "tiny shifts at the margin". So I really appreciated your paragraph about finding a decision theory that permits the possibility of radically transformative changes.
Awesome post! I strongly agree with the central claim on tractability.
I think this is great food for thought for the farmed animal advocates who may think "I agree wild animal welfare matters more in theory, but I'm too uncertain about the overall consequences of WAW work on wild animals". The consequences of their farmed-animal work on wild animals are just as uncertain, if not more. And, unless they intentionally seek ecologically inert interventions,[1] it's gonna be hard to convincingly argue that these effects are obviously too negligible for them not to dwarf the farmed-animal effects they focus on (your Spotlighting section is especially relevant, here).[2] And if they endorse ignoring (some) indirect effects in order to justify focusing on farmed animals, then they have to explain how their original concern regarding WAW work still applies! (as you suggest in your first two sub-sections on the approaches to handling uncertainty.)
I think there is only a very specific handful of people extremely sympathetic to cluelessness concerns in animal welfare who actually do that.
Maybe one defensible position would be holding both that i) wild vertebrates are only trivially affected by their farmed-animal work (see this paper for some evidence in favor of this), and ii) wild invertebrates are immensely affected but their welfare matters so much less morally (compared to that of whatever farmed animals they're helping) that this compensates. But then they're going against what experts on tradeoffs between species believe and they're gonna need arguments.
Thanks so much!
I actually have a lot of sympathy with farmed animal advocates who feel the way you describe, despite disagreeing that WAW should be seen as intractable by their lights. I think in the scheme of things, if I had to choose, I'd prefer global health and AI folks updated to care more about animals, rather than farmed animal advocates updating to care more about indirect effects. But I'm not sure that's a well-calibrated view as opposed to frustration with how little people care about animals in general.
I think the latter group will/should find your arguments much more convincing, though, yeah... I doubt the potential intractability of WAW is a crux for GHD people---otherwise, they'd be working on farmed animals?[1] And same for many AI safetists, I think. If they work on AI safety for neartermist reasons, then what I say about the crux of GHD people applies to them too. If they're longtermists, they can just say they happen to think that AI safety is more pressing than current WAW work for magnitude reasons (as I suggest in our other comment thread), even if they also think long-term WAW is what matters most!
But yeah, I don't doubt that many GHD and AI safety folks gave you the tractability-of-WAW concern as a reason to favor their work over yours. And you're right to argue this is a bad argument. I just don't think this is their real crux, or would be their real crux on further reflection. It'd instead most likely be either the longtermist magnitude argument above or reasons not to morally care about non-human animals nearly as much as you do.
I get the frustration, though. Focusing on convincing farmed animal advocates, specifically, because of the above feels like infighting. (Your response made me slightly edit my phrasing in my first comment to make it less adversarial-looking towards farmed animal advocates who feel the way I describe, thanks). :)
EDIT November 22nd: Oh maybe a few people who don't want to be associated with veganism or something!
Just stepping back a bit, I don't think the biggest issues here are about infighting within animal welfare or whether GHD and AI people care enough about animals. I think zero-sum games aren't a great framing.
For a start, I think close to zero GHD people are avoiding WAW because they "think it's intractable". Most of them are likely just really into their current work, have a better skillset/experience for GHD, or just don't think WAW is their jam to work on in general. I would be surprised if there were even 10 people working in GHD thinking "oh, if WAW was a bit more tractable I would change careers". There might be one or two, though...?
I think if you keep making good arguments for WAW and then start to get a few practical real-world wins attributable to your work, then more people will gradually fund you and work with you.
This is going in my personal best-of for Forum posts of 2025! You explore crucial considerations and possible responses in a clear and transparent way, with pleasant sequencing. I find it very helpful for becoming less confused about my reactions in the face of backfire effects.
Thanks for the great post, Mal! I strongly upvoted it.
Agreed. In addition, I do not think wild animal welfare is distinctively intractable compared to interventions focusing on non-wild animals. I am uncertain to the point that I do not know whether electrically stunning shrimp increases or decreases welfare in expectation, and I see it as one of the interventions where effects on non-target beneficiaries are the least important.
In cases where there is large uncertainty about whether an intervention increases or decreases welfare (in expectation), I believe it is very often better to support interventions decreasing that uncertainty. In the post of mine linked above, my top recommendation is decreasing the uncertainty about whether soil nematodes have positive or negative lives. I tried to be clearer about decreasing uncertainty being my priority here.
At the same time, I would not say constantly switching between 2 options which can easily increase or decrease welfare in expectation is robustly worse than just pursuing one of them. The constant switching would achieve no impact, but it is unclear whether this is better or worse than pursuing a single option if there is large uncertainty about whether it increases or decreases welfare.
Hi Vasco! Thanks for the comment. I agree with you that switching is not necessarily worse (depending on your goals and principles) than just pursuing one uncertain intervention. I also agree with you that research is important when you find yourself in such a position -- it's why I've dedicated my career to research :) And critically, I appreciate the clarification that "decreasing uncertainty" is your priority -- I didn't realize that from past posts, but I think your most recent one is clear on that.
One thing I'll just mention as a matter of personal inclination -- I feel unenthusiastic about precise probabilities for more reasons than just the switching issue (I pointed it out just to add to the discourse about things someone with that view should reflect on). Personally, it just doesn't feel accurate to my own epistemic state. When I look at my own uncertainties of this kind, it feels almost like lying to put a precise number on them (I'm not saying others should feel this way, just that it is how I feel). So that's the most basic reason (alongside the more theoretical reasons out there) that I feel attached to imprecise probabilities.
Yes, I think I could have been clearer about it in the past. Now I am also more uncertain. I previously thought increasing agricultural land was a pretty good heuristic for decreasing soil-animal-years, but it looks like it may easily increase these due to increasing soil-nematode-years.
Makes sense. However, I would simply assign roughly the same probability to values (of a variable of interest) I feel very similarly about. The distribution representing the different possible values will be wider if one is indifferent between more of them. Yet, I do not understand how one could accept imprecise probabilities. In my mind, a given value is always less, more, or as likely as another. I would not be able to distinguish between the masses of 2 objects of 1 and 1.001 kg just by holding them in my hands, but this does not mean their masses are incomparable.
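As a rough illustration of what I mean (a minimal sketch in Python, with made-up numbers rather than anything from this thread): being indifferent between more candidate values still leaves me with a single precise distribution, it is simply a wider one.

```python
# Minimal sketch with made-up numbers: indifference between more candidate
# values yields one precise distribution which is simply wider.
import numpy as np

def precise_distribution(candidate_values):
    """Assign roughly the same probability to values one feels very similarly about."""
    values = np.array(candidate_values, dtype=float)
    probabilities = np.full(len(values), 1 / len(values))  # uniform over the candidates
    mean = float(np.sum(probabilities * values))
    low, high = np.quantile(values, [0.05, 0.95])  # 90 % interval as a rough measure of width
    return mean, (float(low), float(high))

# Indifference between a few values of some net welfare effect (arbitrary units).
print(precise_distribution([-1, 0, 1]))
# Indifference between many values, including large ones of either sign.
print(precise_distribution(np.linspace(-100, 100, 201)))
```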
Interesting post! Re: "how spotlight sizes should be chosen", I think a natural approach is to think about the relative priorities of representatives in a moral parliament. Take the meat eater problem, for example. Suppose you have some mental representatives of human interests, and some representatives of factory farmed animal interests. Then we can ask each representative: "How high a priority is it for you to get your way on whether or not to prevent this child from dying of malaria?" The human representatives will naturally see this as a very high priority—we don't have many better options for saving human lives. But the animal representatives, even if they aren't thrilled by retaining another omnivore, have more pressing priorities than trying to help animals by eliminating meat-eaters one by one. Given how incredibly cost-effective animal-focused charities can be, it will make sense for them to make the moral trade: "OK, save this life, but then let's donate more to the Animal Welfare Fund."
Of course, for spotlighting to work out well for all representatives, it's going to be important to actually follow through on supporting the (otherwise unopposed) top priorities of neglected representatives (like those for wild animal welfare). But I think the basic approach here does a decent job of capturing why it isn't intuitively appropriate to take animal interests into account when deciding whether to save a person's life. In short: insofar as we want to take animal interests into account, there are better ways to do it, that don't require creating conflict with another representative's top priorities. Avoiding such suboptimal conflict, and instead being open to moral trade, seems an important part of being a "good moral colleague".
Interesting! This resembles Michael St. Jules' hedging proposal.
Is it that obvious? I find it hard to come up with interventions that reduce farmed animal suffering (and farmed animal suffering only, ignoring all the rest) more robustly than "minimize the number of entities that incidentally perpetuate this farmed animal suffering". I feel like this should actually be pretty high up the list for someone who strictly spotlights farmed animals (like the farmed animal representatives are supposed to in your spotlighting parliament, if I understand correctly).
Why don't the managers of the Animal Welfare Fund support preventing the existence of more potential meat-eaters, then (via, e.g., increasing x-risks)? I presume it's because they are not, in fact, strictly spotlighting farmed animals. Curious if you agree.
For the moral trade between the representatives of human victims of malaria and the farmed animal representatives to be fair, in your setup, the preferences of the latter would have to actually spotlight farmed animals the same way the former spotlight human victims of malaria. I.e., the preferences of farmed animal representatives in your spotlighting parliament should not be those of real farmed animal advocates who are not spotlighting farmed animals (otherwise, they would obviously be pro-x-risk and so on despite the downsides for other beings, the same way the representatives of human malaria victims are anti-poverty despite the meat-eater problem).
I think this article makes its case compellingly, and I appreciate that you spell out the sometimes subtle ways uncertainty gets handled.
Did the question "Why should justification standards be the same?" arise in a sociological / EA movement context? My interpretation (from the question wording alone) would be more epistemic, along the lines of the unity of science. In my view, standards for justification have to be standardized, otherwise they wouldn't be standards; one could just offer an arbitrary justification to any given question.
Yeah, I could have made that more clear -- I am more focused on the sociology of justification. I suppose if you're talking pure epistemics, it depends whether you're a constructivist about epistemological truth. If you are, then you'd probably have a similar position -- that different communities can reasonably end up with different justification standards, and no one community has more claim to truth than the other.
I suspect, though, that most EAs are not constructivists about epistemology, and so vaguely think that some communities have better justification standards than others. If that's right, then the point is more sociological: that some communities are more rigorous about this stuff than others, or even that they might use the same justification standards but differ in some other way (like not caring about animals) that means the process looks a little different. So the critic I'm modeling in the post is saying something like: "sure, some people do justification better than others, but these are different communities so it makes sense that some communities care more about getting this right than others do."
I guess another angle could be from meta-epistemic uncertainty. Like if we think there is a truth about what kinds of justification practices are better than others, but we're deeply uncertain about what it is, it may then still seem quite reasonable that different groups are trying different things, especially if they aren't trying to participate in the same justificatory community.
Not entirely sure I've gotten all the philosophical terms technically right here, but hopefully the point I'm trying to make is clear enough!
Great post! Thanks for highlighting these concerns.
If the impact on animals, wild and farmed, weren't so uncertain and likely so important, I'd probably be working on AI safety and would still be donating a bit to GiveWell charities.
But right now, it seems less risky for me to donate to farmed animal causes, at least to welfare reforms with much less impact on wild animals, like cage-free campaigns.
More money for research in the wild animal field is also super important. Wild Animal Initiative seems to do very relevant work to remove some of the uncertainties.
Hi CB.
For individual welfare per animal-year proportional to "number of neurons"^0.5, I estimate that cage-free and broiler welfare corporate campaigns change the welfare of soil ants, termites, springtails, mites, and nematodes 1.15 k and 18.0 k times as much, respectively, as they increase the welfare of chickens. I have little idea about whether the effects on soil animals are positive or negative. I am very uncertain about what increases or decreases soil-animal-years, and whether soil animals have positive or negative lives. So I am also very uncertain about whether such campaigns increase or decrease welfare (in expectation). I do not even know whether electrically stunning shrimp increases or decreases welfare, and I see it as one of the interventions where effects on non-target beneficiaries are the least important.
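To show just the structure of that kind of comparison (a minimal sketch; the inputs below are made-up placeholders, not the figures behind my estimates):

```python
# Rough sketch of the structure of the comparison only; all inputs are made-up
# placeholders, not the figures behind the estimates above.
def welfare_change_ratio(target_animal_years, target_neurons,
                         nontarget_animal_years, nontarget_neurons):
    """Ratio of the welfare change in non-target animals to the welfare change in
    target animals, with welfare per animal-year proportional to neurons**0.5."""
    target_effect = target_animal_years * target_neurons**0.5
    nontarget_effect = nontarget_animal_years * nontarget_neurons**0.5
    return nontarget_effect / target_effect

# Hypothetical example: a campaign that changes 1 chicken-year also changes
# 1e7 soil-animal-years, with soil animals having far fewer neurons per individual.
print(welfare_change_ratio(
    target_animal_years=1, target_neurons=2e8,           # chicken (placeholder)
    nontarget_animal_years=1e7, nontarget_neurons=1e4))  # soil animal (placeholder)
```

The magnitude of the ratio is driven by the non-target animal-years affected, whereas the sign of the overall effect depends on whether those changes are good or bad for the animals involved, which is the part I am most uncertain about.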
I really enjoyed your EAG talk, and am glad you're sharing it here! This is the first I've heard of the "ecologically inert" strategy, and I found it inspiring to understand how some WAW researchers tackle cluelessness head on.
I find your section on justification standards provocative in a valuable way. Thank you!
One possible solution is to consider that the human potential to develop both technology (the capacity to intervene in any material environment) and altruistic motivation is practically unlimited, meaning it would only be a matter of time before no area of action remains untouched by human intervention aimed at reducing suffering.
If we act from this premise, our priority must always be the development of altruistic motivation, something that requires cultural changes that can begin now.
I think animals matter morally, but their moral worth is derived from the value they offer to humans (e.g. as pets or food). I wouldn’t particularly care about birds crashing into windows unless the owners of the windows care, in which case they will voluntarily invest in measures to prevent such crashes and no regulation is necessary. Or maybe if birds crash into windows so much that it has a negative effect on the ecosystem, with knock-on effects in turn hurting people.
I always get a bit queasy when somebody tries to present some moral calculus and then wants it turned into legislation. It’s just a way of saying the government should force others, on your behalf, to do what you think is right. That’s not the government’s job.