Author’s note: This is an adapted version of my recent talk at EA Global NYC (I’ll add a link when it’s available). The content has been adjusted to reflect things I learned from talking to people after my talk. If you saw the talk, you might still be interested in the “some objections” section at the end.
Summary
Wild animal welfare faces frequent tractability concerns, amounting to the idea that ecosystems are too complex to intervene in without causing harm. However, I suspect these concerns reflect inconsistent justification standards rather than unique intractability. To explore this idea:
- I provide some context about why people sometimes have tractability concerns about wild animal welfare, using bird-window collisions as a concrete example.
- I then describe four approaches to handling uncertainty about indirect effects: spotlighting (focusing on target beneficiaries while ignoring broader impacts), ignoring cluelessness (acting on knowable effects only), assigning precise probabilities to all outcomes, and seeking ecologically inert interventions.
- I argue that, when applied consistently across cause areas, none of these approaches suggests wild animal welfare is distinctively intractable compared to global health or AI safety. Rather, the apparent difference most commonly stems from arbitrarily wide “spotlights” applied to wild animal welfare (requiring consideration of millions of species) versus narrow ones for other causes (typically just humans).
While I remain unsure about the right approach to handling indirect effects, I think that this is a problem for all cause areas as soon as you realize wild animals belong in your moral circle, and especially if you take a consequentialist approach to moral analysis. Overall, while I’m sympathetic to worries about unanticipated ecological consequences, they aren’t unique to wild animal welfare, and so either wild animal welfare is not uniquely intractable, or everything is.
Consequentialism + impartial altruism → hard to do good
Let’s assume (as I will for the rest of this post) that you think animals matter morally, you’re pretty consequentialist, and you want to analyze ways to help the world through a ~ scale/neglectedness/tractability lens. How would you analyze wild animal welfare?
It’s pretty obvious that you’d give it points for scale and neglectedness — in the six years I’ve worked on wild animal welfare, few people have argued otherwise. But I and others working in the space frequently hear concerns about the cause area’s tractability. Most commonly, these take the form of concerns that nature is too complex to work in, and that it therefore won’t be feasible to improve the lives of even a portion of the trillions–quintillions of wild animals without harming others or messing up the system in some way.
I don’t think wild animal welfare is uniquely intractable, even though I have sympathy with these ecologically motivated concerns. I suspect that if you agree with the broad moral views I described, and think it’s reasonable to work on global health or AI safety, you should think wild animal welfare is reasonable to work on as well.
I think the issue with wild animal welfare is how we talk about it, and our standards for justifying working on it. In this post, I’ll explain this opinion, and argue that if justification standards were applied consistently, wild animal welfare would look about as tractable as anything else.
The challenge: Deep uncertainty and backfire risk
First, let’s return to why people think wild animal welfare is intractable in the first place. Broadly, the idea is that there are tons of different species in the category “wild animals,” many of which we know little about, and whose members live in an unconstrained ecological system that seems highly sensitive to perturbations. As a result, people often feel “clueless” about the effects of their actions, which makes it hard to figure out if there is a reasonable intervention to pursue that would be cost effective. Worse, you could even end up doing more harm than good.
I think most people who have been exposed to the idea of wild animal welfare at least have some intuitive understanding of the idea that “messing around in ecosystems” is risky, but I think it’s useful to provide a concrete example. So I’ll examine this idea by means of the topic I’m currently studying: bird-window collisions.
Example: Bird-window collisions
Bird-window collisions may kill over a billion birds annually in North America alone (Loss et al., 2014, 2015). On their face, legislation or remediation campaigns to require bird-safe glass — using visible patterns, dots, or modified glass to prevent collisions — seem like a positive, pro-welfare way to address these collisions. Uncertainties that might arise include whether these campaigns are cost effective to work on, and whether bird-safe glass has unintended ecological effects. Several layers of uncertainty complicate this assessment:
We don’t actually understand the welfare consequences of bird-window collisions on birds
While some birds die quickly from skull hemorrhages, others may suffer from crop rupture, fractures, or other injuries for hours to weeks before dying (Fornazari et al., 2021; Klem, 1990). We lack good data on rates of different outcomes. Only two empirical studies have examined sublethal strikes — one using window panes in the woods (Klem Jr. et al., 2024) and another studying only one building as a pilot of the methods (Samuels et al., 2022). So it’s hard to say how bad a death by window collision is on average.
This matters because:
We don’t know how birds would die otherwise
Birds saved from window collisions don't become immortal — they die later from other causes, most commonly predation, as far as we can tell (Hill et al., 2019). Based on age-structured mortality models for affected species like song sparrows, collision victims who survive gain approximately 1–2 additional years of life[1]. Whether this is net positive depends on comparing the suffering of window collision deaths versus alternative deaths (predominantly predation), plus the value of those additional life-years. Critically, if the difference in the amount of suffering caused by the new death outweighs the joy gained from an additional 1–2 years of life, the intervention could be net negative for birds themselves. Whether you think this is possible or likely depends both on empirical facts we don’t currently have access to, as well as philosophical beliefs about what makes a life worth living.
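As a rough illustration of where an estimate like this comes from, here is a minimal sketch assuming a simple declining survival schedule; the survival probabilities are hypothetical placeholders, not the actual age-structured models behind the footnote.

```python
# Minimal sketch: expected additional life-years for a bird that survives
# a window collision, given per-year survival probabilities (one entry per
# future year; survival beyond the list is treated as zero). The survival
# probabilities below are hypothetical placeholders, not real estimates.

def expected_additional_years(annual_survival):
    expectation = 0.0
    alive = 1.0
    for s in annual_survival:
        alive *= s            # probability of still being alive after this year
        expectation += alive  # each year survived adds ~one life-year
    return expectation

# Hypothetical schedule: ~60% survival in the first year, declining with age.
print(expected_additional_years([0.6, 0.55, 0.5, 0.4, 0.2]))  # ≈ 1.2 years
```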
Making things worse:
The effects on other animals are even more uncertain
We don’t know if bird-window collisions affect bird population sizes. If populations are resource limited, preventing collision deaths might not increase population size — it might just shift which individuals die and how they die. If populations do increase, though, this creates cascading effects on prey species (primarily insects); scavengers who feed on collision victims; other animals who compete with birds for space, food, or other resources; and broader ecosystem dynamics. It’s unfortunately quite difficult to assess how city-scale interventions affect population dynamics over time. So far, only one study has explicitly addressed this question, and it found no detectable population effects (Arnold & Zink, 2011). Other experts have contested this finding (see here), but the only available data suffer from significant biases, including uneven effort in surveying populations and in attributing causes of death. Observational studies of this type, even when carefully done, can be extremely hard to interpret.
If there are population size changes, the resulting changes in ecological dynamics create cascading uncertainties. The new system will be different from the old one in all kinds of ways, and because we have no history of studying wild animal welfare, it’s incredibly difficult to make predictions about the quality of life of the animals in that new system, and whether they’d be better off or worse off overall.
This idea — that sometimes things with local or intended benefits can shift systems in such a way that they create more negative outcomes than positive ones — is commonly referred to as backfire risk. Backfire risk is not unique to the wild animal welfare community: AI policy advocates, for example, might worry about a legal initiative causing some kind of social response that creates more AI risk, even if the initiative initially looked like a way to decrease it. What’s unique is *how many moral patients* wild animal welfare advocates are asked to account for, as I’ll discuss below.
Four approaches to handling uncertainty
I spoke with a number of individuals from across several of the main EA cause areas about how they think about and consider backfire risk in their work. Drawing on these conversations, my own reading on the Forum, and this extremely useful summary from Jim Buhler, I’ve defined four approaches: spotlighting, setting aside things you’re clueless about, assigning precise probabilities, and seeking ecologically inert interventions.
I am intentionally not endorsing any one of these approaches in this piece; my personal position is that they all have worrying flaws and I hope someone will come up with something better. Rather, I am seeking merely to describe the approaches I've observed, and show that wild animal welfare doesn’t seem particularly intractable regardless of which approach you use (provided you use the approach consistently).
Spotlighting
I think spotlighting is the most common approach to handling uncertainty, and is used in both global health and AI governance and safety. Spotlighting is focusing on specific moral patient groups when assessing cost-effectiveness, while setting aside effects on others as “out of scope.” The justification for setting aside these other moral patients varies, but from my conversations with grantmakers and researchers, it seems to me like people are generally assuming (quite tenuously, as far as I can tell) that off-target effects are negligible compared to direct effects.
For example, if I’m using spotlighting to assess a global health intervention in Malawi, I look at the expected effects on the people of Malawi. Unless the intervention has extremely clear and obvious off-target effects that seem to be of the same scale as the target effects, I don’t ask difficult questions about theoretical possibilities, like the risk that improving the health of those in Malawi could somehow trigger cascading economic changes in the region that put some farmers out of business in a neighboring country. I certainly don’t ask tough questions about how a general improvement in the health of Malawians could be expected to affect the animals they eat, or the wild animals of the region. People in neighboring states, farmed animals, and wild animals are all “left in the dark” in my analysis — out of the spotlight.
To offer an AI example: When evaluating a bill using spotlighting, I might worry about how that bill could backfire on my intended effect of slowing down the development of AI. For example, I might worry that slowing down development in the US could just lead to AI development taking off in China, somehow. But, with a few exceptions, it doesn’t seem to me that people attempting to estimate the value of passing the bill account for the risk that slowing down AI will affect global health or wild animals. In the case of global health, there is at least some discussion of this topic. Those who don’t want to slow down AI, for example, seem to consider the benefits they expect AI to bring to the developing world as a reason to keep developing an ostensibly dangerous technology. But minimal thought is paid to wild animals, despite the huge scale of individuals likely to be affected.
It’s possible that many AI-focused people simply don’t care about wild animals. But as stated at the beginning, I’m assuming a moral framework under which wild animals matter. Over the years, I’ve spoken to a lot of people who work in non-animal spaces, and generally, if I ask them about animal-affecting backfire, I get one of two responses: either that (a) we’re assuming these interventions don't affect wild animals, or (b) the effects are small enough to ignore. Unfortunately, both claims seem obviously incorrect. Just to illustrate one aspect of this issue: Any AI policy that influences the timelines to AGI will affect land use, resource consumption, and mining behavior — all of which have substantial effects on wild animal populations and welfare. The net effect is unlikely to be small and we don't know whether it is positive or negative.
But regardless, the common approach is to set aside these sorts of effects through spotlighting. It’s not always clear how the scope of the spotlight is defined, but it seems like many people are focusing on “the target beneficiaries” or “target outcomes.” If the target beneficiaries in AI and global health are basically “humans,” maybe you think it’s fairly obvious why wild animal welfare seems intractable: There are just a ton more beings in the spotlight. Someone working on farmed animal welfare just needs to think about maybe a few hundred species (if we include fish); global health and AI-focused actors only need to consider one (humans). In contrast, wild animal welfare has to “spotlight” multiple millions of species with competing needs.
But if spotlighting is permissible in this way, the fairly obvious question is: Why did we make wild animal welfare’s spotlight so wide? If we proceed intervention by intervention, I can’t see why I’m not allowed to “spotlight” just birds (or perhaps even a single species of birds), for example, when considering wild bird welfare and how it would be affected by bird-safe glass. If you think that’s an acceptable move, wild animal welfare (when analyzing one set of target beneficiaries at a time) doesn’t seem particularly intractable at all. I still have some uncertainties around how birds will die instead, but these kinds of uncertainties present a much smaller and simpler research problem, similar to the kinds of things regularly handled in global health research, where we act as if ecosystem-scale questions can be set aside.
A part of me is quite sympathetic, when trying to help humans in Malawi, to the idea of just focusing on uncertainties around the effects for people in Malawi. But it looks like such a strange and thoughtless thing to do in wild animal welfare — to think about whether bird-safe glass helps birds without worrying about whether it might harm scavengers or insects.
I haven’t seen anyone produce a coherent story of how spotlight sizes should be chosen, and few people are even working on these sorts of meta questions. But regardless of whether you like the approach or not, it seems like any reasonable and non-arbitrary standard for defining how to spotlight will put wild animal welfare and other cause areas on a similar playing field in tractability terms.
Set aside that which you are clueless about
So let’s say you don’t like spotlighting, because the distinction between moral patients in and out of your spotlight seems arbitrary. But, if you try to consider all moral patients at once, you tend to become “clueless” (in the sense popularized by Greaves, 2016) quite quickly. So another set of options seems to be: include all the effects you aren’t clueless about, act based on what you learn from looking at those things, and ignore the stuff you’re clueless about. So, for example, if I’m at least confident that bird-safe glass helps birds and causes more insects to get eaten, but I’m clueless about its effects on other animals and any ecological consequences, I can proceed just by analyzing the bird and insect effects I understand.
While in some cases this might look the same as spotlighting (e.g., if I spotlight just birds, and birds also happen to be the only moral patients I’m not clueless about), I think the reasoning is different and the outcomes can come apart. In spotlighting, it’s an allowed move to narrow your focus to some specific set of moral patients, because (for some reason or another) you think it’s okay to assume that effects on this group determine the overall result without actually checking in each instance that this is true. In “ignoring cluelessness” approaches, I’m not using that heuristic off the bat. The precise procedure might vary depending on how someone is operationalizing “ignore cluelessness,” but broadly, I include all moral patients in theory, and only exclude things if I’m clueless about them in the specific scenario under consideration.
There’s a rich discussion of cluelessness with a reasonable number of pending disagreements I won’t get into here (but you can dive into the discussion here). Suffice to say that I’m skeptical that most articulated approaches are non-arbitrary. As a result, I think that cluelessness and unawareness (the idea that we can’t account for possibilities we don’t even think of) are pretty important issues that deserve more research attention. That said, I’m reasonably optimistic that a newer proposal called bracketing (post 1, post 2) will offer a principled guide to “making decisions based on what you aren’t clueless about.”
So what does this mean for wild animal welfare? Again, if this approach works at all (e.g., because bracketing turns out to be a great proposal), it would still need to be applied equally to wild animal welfare and AI risk and global health. If we can ignore things we’re pretty clueless about, wild animal welfare becomes a lot easier. We still need to do research (to address the tractable uncertainties, like how birds commonly die in nature), but we won’t need to tackle the cluelessness-inducing issues at the ecological scale. But if this doesn’t work, and we can’t ignore cluelessness, then AI and global health actors are in the same problematic boat: If we apply equivalent epistemic standards across these cause areas, we should all be clueless about the effects of any intervention on (1) wild animals and (2) the long-term future.
Assign precise probabilities
Another group of people attempting to include all moral patients in their analyses seem to basically reject cluelessness by trying to calculate (at least partially based on intuitions) the effects of interventions on as many questionably sentient moral patients as possible (for example, see this post). The idea is to come up with all the effects you can think of and assign precise probabilities to every possible outcome, even in the face of deep uncertainty. You can even assign some kind of modifier to capture all the “unknown unknowns.”
Others on the Forum have already made the case for why we should generally be suspicious of precise probability assignments when we can’t straightforwardly quantify our uncertainty, so I won’t make those arguments here. However, I’d also like to point out a social consequence of this approach that seems extremely problematic. Basically, when your uncertainties are so broad, a tiny amount of new data can flip the sign of your estimate (the swing in your probability assignments might not even be very big, but your EV estimate was so close to 0 anyway that this small change flips the sign). As a result, your views become volatile: You might determine an AI policy is net positive today, then completely reverse that judgment months later, after minor updates. Although some may think that this outcome is an unfortunate but necessary aspect of the “right” decision theory, it is extremely hard to see how one might run a movement this way. Switching from endorsing bird-safe glass to not endorsing it on a monthly basis would lead to little impact and few supporters.
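To make the sign-flip worry concrete, here is a toy calculation (all numbers invented purely for illustration):

```python
# Toy illustration of sign volatility: when an expected-value estimate sits
# near zero, a one-point update to a probability flips the overall verdict.
# All numbers are invented for illustration.

def expected_value(p_good, value_if_good, value_if_bad):
    return p_good * value_if_good + (1 - p_good) * value_if_bad

# Before a minor new study: 50.5% chance the intervention helps.
print(expected_value(0.505, +100, -100))  # +1.0 -> endorse

# After the study nudges that probability down by one point: sign flips.
print(expected_value(0.495, +100, -100))  # -1.0 -> reverse course
```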
Seek ecologically inert interventions
Finally, the last contingent within EA has basically accepted profound cluelessness about anything that influences wild animal populations — which is basically everything happening in global health and AI safety, and most things that you could think of in wild animal welfare and farmed animal welfare. As a result, these individuals hope to identify “ecologically inert” interventions that don’t affect population dynamics or have cascading effects. Corporate welfare campaigns might be one sort of intervention that clears this bar. If we assume that going from caged → cage-free improves welfare, but only changes conditions in the “closed” farming system, and doesn’t put factory farming on a pathway to elimination or change the amount of land it uses, we can be reasonably confident that all the effects of the intervention are just on chickens themselves, and that there won’t be any ecological consequences.
While this is quite difficult, it’s not clear to me that it’s totally intractable. I (and several others) think we could reasonably view a handful of interventions as worth pursuing under this mindset. Mostly, these sorts of interventions change how humans kill animals or control populations, such that suffering is decreased without changing the net population outcome. Examples might include stunning wild-caught fish before slaughter or replacing rodenticides with fertility control on islands.
But very importantly, if your view is that interventions must be ecologically inert to be worth doing, this isn’t a unique problem for wild animal welfare — anyone with this mindset about wild animal welfare should apply it to AI safety and global health (as long as you endorse the moral positions I assumed at the outset). And since AI safety and global health interventions are almost certainly not ecologically inert, we end up in the same position across all cause areas, rather than with a unique type of intractability for wild animal welfare specifically.
Some objections & questions
After I gave a talk summarizing the above, I was asked a few questions that pushed in favor of one or another of the above approaches. My responses to these objections are below:
The global health comparison: Spotlighting hasn't backfired (for humans)
One objection to my rejection of spotlighting is that, basically, spotlighting is how we’ve been doing things for generations (in global health but also in all kinds of other decision making). It hasn’t backfired, so why should we be so suspicious of the approach?[2]
While some might quibble with the idea that spotlighting-endorsed interventions have never backfired for humans, even if the position is broadly correct for that context, it is deeply unobvious that it’s true for animals. It may be that interventions remain robustly positive for human welfare even under deep uncertainty about broader effects. But it seems wildly unclear that progress in human health over the last decades has not harmed animal welfare.
If impartial altruism is foundational to EA (and it’s at least foundational to my interest in EA), this asymmetry should trouble us. We can’t simultaneously claim to care about all sentient beings while accepting spotlighting only when it’s convenient for humans.
Action-inaction distinctions
Some people have asked me why people taking cluelessness seriously aren’t allowed to do whatever they want. If we’re clueless about “doing bird-safe glass” and “not doing bird-safe glass,” aren’t both just fine then? They would argue that we can decide to promote bird-safe glass based on some kind of deontological consideration or mere preference, since when we look at consequences we’re clueless either way.
People who feel clueless about consequences often acknowledge that either action may be permissible. However, if you have a certain amount of money or time to spend on making the world a better place, and you have at least a couple of options that seem robustly good (and a ton that you’re clueless about), it seems better to spend it on things that look robustly good. They search for ecologically inert interventions not because these are the only permissible options, but because “certainly positive” seems better than “no idea.”
I have some concerns about this view. For example, it seems like you are doomed to be clueless about truly transformative change, like trying to shift the world to have massively different attitudes toward animals. Thus, I worry that the only things that appear robustly good under this view would just be tiny shifts at the margin. Maybe you think that decision theory comes first, and that a rejection of transformative change stemming from that is just an unfortunate truth. But since I think a ton of our feelings about decision theory are based on intuitive judgements anyway, I want to also take seriously my own intuition that working toward the best world, and not just the nearest better world, is something I want my justification standards to at least sometimes license.
Why should justification standards be the same?
To me, this question gets at the core of whether EA is a community or not. As someone I spoke with suggested, why should I expect people in the arts and humanities charity space to have justification practices for giving to the Museum of Modern Art instead of the Philadelphia Art Museum that are similar to mine for working on the wild animal welfare effects of bird-safe glass? Of course, I don’t expect similar justification standards from actors outside my community.
Maybe the answer here is that, really, people working on AI safety, animal welfare, and global health are all in separate communities, with different donors and decision makers, and so obviously they have different justification standards. But I don’t think this is what I’d like to see in my ideal version of EA.
First, I think it would be sad! I like being in a community of people who, due to our uncertainty about various empirical points, might disagree on what cause to work on — but we still try to figure out interesting things about cost-effectiveness in an impartially altruistic way. Second, I can still critique the standards of both my own communities and others I see, and hope to identify the best justification standards I can — which, in practice, might involve looking at what people are doing in other spaces. So even if, sadly-to-me, EA is not a community anymore, I still want wild animal welfare to develop with good justification standards, both applied within the cause area (as we try to figure out which ways of helping wild animals are most worth doing) and across cause areas. I’d hope for non-arbitrary standards that neither freeze us in inaction, nor license clearly irresponsible actions. Even though I’m still trying to figure out what that means, it feels like an issue for everyone, and not just for wild animal welfare.
Conclusion
I’ve reviewed the four broad categories of approaches I’m aware of for handling uncertainty about indirect effects, and hopefully shown that, when applied consistently, it’s not obvious that wild animal welfare is less tractable than anything else.
I don’t know what the consequences of taking indirect effects seriously should be for other cause areas. In the case of wild animal welfare, I look forward to further developments from researchers thinking hard about cluelessness, unawareness, and decision theory to help guide research and action. Even though we mostly can’t figure out the exact consequences of our actions on all wild animals today, I expect that the sort of research and field-building efforts we’re pursuing in the wild animal welfare space will help us (a) understand how to help wild animals in at least some cases, and (b) dispel learned helplessness with regards to the suffering of wild animals, so that we’re in a better position to build toward understanding and responding to more and more indirect effects over time.
Many thanks to Jason Schukraft, Anthony DiGiovanni, Michael St. Jules, Simon Eckerström Liedholm, Jesse Clifton, Bob Fischer, and Abraham Rowe for conversations that improved the talk this document is based on. Thank you to Shannon Ray for copy edits (on short notice!).
AI usage: I used Claude to create an initial draft from the transcript of my talk, which I then heavily edited and added to (the final is about twice as long as Claude’s draft). Claude also wrote the first draft of the summary once I had finished the draft, which I then edited.

This going in my personal best-of for Forum posts of 2025! You explore crucial considerations and possible responses in a clear and transparent way, with pleasant sequencing. I find it very helpful in order to be less confused about my reactions in the face of backfire effects.
Thanks for the great post, Mal! I strongly upvoted it.
Agreed. In addition, I do not think wild animal welfare is distinctively intractable compared to interventions focusing on non-wild animals. I am uncertain to the point that I do not know whether electrically stunning shrimp increases or decreases welfare in expectation, and I see it as one of the interventions where effects on non-target beneficiaries are the least important.
In cases where there is large uncertainty about whether an intervention increases or decreases welfare (in expectation), I believe it is very often better to support interventions decreasing that uncertainty. In the post of mine linked above, my top recommendation is decreasing the uncertainty about whether soil nematodes have positive or negative lives. I tried to be clearer about decreasing uncertainty being my priority here.
At the same time, I would not say constantly switching between two options, each of which could easily increase or decrease welfare in expectation, is robustly worse than just pursuing one of them. The constant switching would achieve no impact, but it is unclear whether this is better or worse than pursuing a single option if there is large uncertainty about whether it increases or decreases welfare.
Great post! Thanks for highlighting these concerns.
If the impact on animals, wild and farmed, wasn't so uncertain and likely important, I'd probably be working on AI safety and would still be donating a bit to GiveWell charities.
But right now, it seems less risky for me to donate to farmed animal work, at least to welfare reforms with much less impact on wild animals, like cage-free campaigns.
More money for research in the wild animal field is also super important. Wild Animal Initiative seems to do very relevant work to remove some of the uncertainties.
Hi CB.
For individual welfare per animal-year proportional to "number of neurons"^0.5, I estimate that cage-free and broiler welfare corporate campaigns change the welfare of soil ants, termites, springtails, mites, and nematodes 1.15 k and 18.0 k times as much as they increase the welfare of chickens. I have little idea about whether the effects on soil animals are positive or negative. I am very uncertain about what increases or decreases soil-animal-years, and whether soil animals have positive or negative lives. So I am also very uncertain about whether such campaigns increase or decrease welfare (in expectation). I do not even know whether electrically stunning shrimp increases or decreases welfare, and I see it as one of the interventions where effects on non-target beneficiaries are the least important.
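For readers curious how a ratio like this is computed, here is a minimal sketch of the neuron-count weighting described above; the neuron counts are rough published figures, and the animal-years affected are hypothetical placeholders rather than the actual inputs behind the 1.15 k and 18.0 k figures.

```python
# Minimal sketch of the welfare weighting described above: weight per
# animal-year proportional to (number of neurons)**0.5. Neuron counts are
# rough published figures; the animal-years affected per campaign are
# hypothetical placeholders, not the inputs behind the figures above.

neurons = {"chicken": 2.2e8, "nematode": 3.0e2}

def welfare_weight(species):
    return neurons[species] ** 0.5

# Hypothetical: 1 chicken-year improved per 1e6 nematode-years perturbed.
chicken_term = 1.0 * welfare_weight("chicken")
nematode_term = 1e6 * welfare_weight("nematode")
print(nematode_term / chicken_term)  # ≈ 1.2e3: the soil-animal term can
# dominate even though each individual animal gets a tiny weight
```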
Awesome post! I strongly agree with the central claim on tractability.
I think this is great food for thought for the farmed animal advocates who may think "I agree wild animal welfare matters more in theory, but I'm too uncertain about the overall consequences of WAW work on wild animals". The consequences of their farmed-animal work on wild animals are just as uncertain, if not more. And, unless they intentionally seek ecologically inert interventions,[1] it's gonna be hard to convincingly argue that these effects are obviously too negligible for them not to dwarf the farmed-animal effects they focus on (your Spotlighting section is especially relevant, here).[2] And if they endorse ignoring (some) indirect effects in order to justify focusing on farmed animals, then they have to explain how their original concern regarding WAW work still applies! (as you suggest in your first two sub-sections on the approaches to handling uncertainty.)
I think there is only a very specific handful of people extremely sympathetic to cluelessness concerns in animal welfare who actually do that.
Maybe one defensible position would be holding both that i) wild vertebrates are only trivially affected by their farmed-animal work (see this paper for some evidence in favor of this), and ii) wild invertebrates are immensely affected but their welfare matters so much less morally (compared to that of whatever farmed animals they're helping) that this compensates. But then they're going against what experts on tradeoffs between species believe and they're gonna need arguments.
Thanks so much!
I actually have a lot of sympathy with farmed animal advocates who feel the way you describe, despite disagreeing that WAW should be seen as intractable by their lights. I think in the scheme of things, if I had to choose, I'd prefer global health and AI folks updated to care more about animals, rather than farmed animal advocates updated more to care about indirect effects. But I'm not sure that's a well-calibrated view as opposed to frustration with how little people care about animals in general.
I think the latter group will/should find your arguments much more convincing, though, yeah... I doubt the potential intractability of WAW is a crux for GDH people---otherwise, they'd be working on farmed animals? And same for many AI safetists, I think. If they work on AI safety for neartermist reasons, then what I say about the crux of GDH people applies to them too. If they're longtermists, they can just say they happen to think that AI safety is more pressing than current WAW work for magnitude reasons (as I suggest in our other comment thread), even if they think long-term WAW is what matters most!
But yeah, I don't doubt that many GDH and AI safety folks gave you the tractability of WAW concern as a reason to favor their work over yours. And you're right to argue this is a bad argument. I just don't think this is their real crux, or would be their real crux under more reflection. It'd instead be either the above magnitude longtermist argument or reasons not to morally care about non-human animals nearly as much as you do.
I get the frustration, though. Focusing on convincing farmed animal advocates because of the above feels like infighting. (Your response made me slightly edit my phrasing in my first comment to make it less adversarial-looking towards farmed animal advocates who feel the way I describe, thanks). :)
This does not undermine the central claim of your Spotlighting section or that of your overall post, which are about tractability and not magnitude/importance, but a quick comment on:
Longtermist AI policy folks could say "sure, but these effects are still small in terms of moral importance" because they think the value of our actions is dominated by things that have nothing to do with wild animal welfare---or WAW before AGI---[1]e.g., the welfare of digital minds or biologically enhanced humans which they expect to be more numerous in the far future.[2] In fact, I think that's what most longtermists who seriously thought about impartial cause prio believe.[3]
And especially wild animal welfare before AGI. Maybe they think long-term WAW is what matters most (like, e.g., Bentham's Bulldog in this post) and that reducing x-risks or making AI safe is more promising than current WAW work to increase long-term WAW. And maybe they're clueless about how reducing x-risks or making AI safe affects near-term WAW, but they think this is dwarfed by long-term WAW anyway. Surely some people in the Sentient Futures and Longtermism and Animals communities think this (or would think this under more reflection).
From my experience, people who would hold this view endorse assigning precise probabilities (no matter what). And they kind of have to. It's hard to defend this view without endorsing that, at least implicitly.
I'm not saying they're right. Just trying to clarify what the crux is here (magnitude, not tractability) and highlighting that there may be no consensus at all that (b) is incorrect.
Yes, totally agree that some longtermist or AI safety oriented types have actually thought about these things, and endorse precise probabilities, and have precise probability assignments to things I find quite strange, like thinking it's 80% likely that the universe will be dominated by sentient machines instead of wild animals. Although I expect I'd find any precise probability assignment about outcomes like this quite surprising, perhaps I'm just a very skeptical person.
But I think a lot of EAs I talk to have not reflected on this much and don't realize how much the view hinges on these sorts of beliefs.
Agreed. I think we should probably have very indeterminate/imprecise beliefs about what moral patients will dominate in the far future, and this imprecision arguably breaks the Pascalian wager (that many longtermists take) in favor of assuming enhanced human-ish minds outnumber wild animals.
However, many of the longtermists who would be convinced by this might fall back on the opinion I describe in footnote 1 of my above comment in the (they don't know how likely) scenario where wild animals dominate (and then the crux becomes what we can reasonably think is good/best for long-term WAW).
One possible solution is to consider that the human potential to develop both technology (the capacity to intervene in any material environment) and altruistic motivation is practically unlimited, meaning it would only be a matter of time before no area of action remains untouched by human intervention aimed at reducing suffering.
If we act from this premise, our priority must always be the development of altruistic motivation, something that requires cultural changes that can begin now.