This is not an unreasonable take, but just in the interest of having an accurate public record, I'm actually the strategy director for WAI (although I was the executive director previously). Also, none of us at Arthropoda are technically animal welfare scientists. Our training is all in different things (for example, my PhD is in engineering mechanics and Bob's a philosopher who published a lot of skeptical pieces on insects).
Basically, I think we came to Arthropoda because the work we did before that changed our minds. More importantly, I don't think the majority of Arthropoda's work will be about checking for sentience? Rather, we're taking a precautionary framework about insects being sentient and asking how to improve their welfare if they are. In this context our views on sentience seem less likely to cause a COI -- although I also expect all our research to be publicly available for people to red-team as needed :)
Finally, fully agree on the extreme personnel overlap. I would love to not be co-running a bug granting charity as a volunteer in addition to my two other jobs! But the resource constraints and unusualness of this space are unfortunately not particularly conducive to finding a ton of people willing to take on leadership roles.
All very interesting, and yes let's talk more later!
One quick thing: Sorry my comment was unclear -- when I said "precise probabilities" I meant the overall approach, which amounts to trying to quantify everything about an intervention when deciding its cost-effectiveness (perhaps the post was also unclear).
I think most people in EA/AW spaces use the general term "precise probabilities" the same way you're describing, but perhaps there is on average a tendency toward the more scientific style of needing more specific evidence for those numbers. That wasn't necessarily true of early actors in the WAW space and I think it had some mildly unfortunate consequences.
But this makes me realize I should not have named the approach that way in the original post, and should have called it something like the "quantify as much as possible" approach. I think that approach requires using precise probabilities -- since if you allow imprecise ones you end up with a lot of things being indeterminate -- but there's more to it than just endorsing precise probabilities over imprecise ones (at least as I've seen it appear in WAW).
Thanks Eli!
I sort of wonder if some people in the AI community -- and maybe you, from what you've said here? -- are using precise probabilities to get to the conclusion that you want to work primarily on AI stuff, and then spotlighting to that cause area when you're analyzing at the level of interventions.
I think someone using precise probabilities all the way down is building a lot more explicit models every time they consider a specific intervention. Like if you're contemplating running a fellowship program for AI-interested people, and you have animals in your moral circle, you're going to have to build a botec that includes the probability that X% of the people you bring into the fellowship won't care about animals and are likely, if they get a policy role, to pass policies that are really bad for them. And all sorts of things like that. So your output would be a bunch of hypotheses about exactly how these fellows are going to benefit AI policy, and some precise probabilities about how those policy benefits are going to help people -- and possibly animals, and to what degree, etc.
I sort of suspect that only a handful of people are trying to do this, and I get why! I made a reasonably straightforward botec for calculating the benefits to birds of bird-safe glass, one that accounted for backfire to birds, and it took a lot of research effort. If you asked me how bird-safe glass policy is going to affect AI risk after all that, I might throw my computer at you. But I think the precise-probabilities approach would imply that I should.
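To make the shape of that kind of botec concrete, here's a minimal sketch of an expected-value calculation with an explicit backfire term, like the bird-safe glass one described above. Every number and parameter name here is hypothetical -- this is not the actual CEA mentioned in the comment, just an illustration of the structure.

```python
# Hypothetical BOTEC sketch: expected birds helped per dollar by a
# bird-safe glass policy, with an explicit backfire term.
# All numbers are invented for illustration only.

def botec_birds_per_dollar(
    collisions_averted_per_building=500,  # birds/year spared per retrofitted building
    buildings_affected=200,               # buildings covered if the policy passes
    p_policy_passes=0.3,                  # precise probability the policy passes
    p_backfire=0.05,                      # chance the intervention harms birds instead
    backfire_cost=100,                    # birds harmed per building in the backfire case
    campaign_cost=2_000_000,              # dollars spent on the campaign
):
    # Expected benefit and expected harm, both conditioned on passage.
    expected_benefit = (
        p_policy_passes * buildings_affected * collisions_averted_per_building
    )
    expected_harm = (
        p_policy_passes * p_backfire * buildings_affected * backfire_cost
    )
    return (expected_benefit - expected_harm) / campaign_cost

print(botec_birds_per_dollar())
```

The point of the sketch is how quickly the model grows: adding cross-cause terms (say, effects on AI risk) means attaching a precise probability and effect size to every extra pathway.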
Re:
It might be interesting to move out of high-level reason zone entirely and just look at the interventions, e.g. directly compare the robustness of installing bird-safe glass in a building vs. something like developing new technical techniques to help us avoid losing control of AIs.
I'm definitely interested in robustness comparisons but not always sure how they would work, especially given uncertainty about what robustness means. I suspect some of these things will hinge on how optimistic you are about the value of life. I think the animal community attracts a lot more folks who are skeptical about humans being good stewards of the world, and so are less convinced that a rogue AI would be worse in expectation (and even folks who are skeptical that extinction would be bad). So I worry AI folks would view "preserving the value of the future" as extremely obviously positive by default, and that (at least some) animal folks wouldn't, and that would end up being the crux about whether these interventions are in fact robust. But perhaps you could still have interesting discussions among folks who are aligned on certain premises.
Re:
What would the justification standards in wild animal welfare say about uncertainty-laden decisions that involve neither AI nor animals: e.g. as a government, deciding which policies to enact, or as a US citizen, deciding who to vote for President?
Yeah, I think this is a feeling that the folks working on bracketing are trying to capture: that in quotidian decision-making contexts, we generally use the factors we aren't clueless about (@Anthony DiGiovanni -- I think I recall a bracketing piece explicitly making a comparison to day-to-day decision making, but now can't find it... so correct me if I'm wrong!). So I'm interested to see how that progresses.
I suspect, though, that people generally just don't think about justification that much. In the case of WAW-tractability skeptics, I'd guess some large percentage are likely more driven by the (not unreasonable at first glance) intuition that messing around in nature is risky. The problem, of course, is that all of life is just messing around in nature, so there's no avoiding it.
Yeah, I could have made that more clear -- I am more focused on the sociology of justification. I suppose if you're talking pure epistemics, it depends whether you're constructivist about epistemological truth. If you are, then you'd probably have a similar position -- that different communities can reasonably end up with different justification standards, and no one community has more claim to truth than another.
I suspect, though, that most EAs are not constructivists about epistemology, and so vaguely think that some communities have better justification standards than others. If that's right, then the point is more sociological: that some communities are more rigorous about this stuff than others, or even that they might use the same justification standards but differ in some other way (like not caring about animals) that means the process looks a little different. So the critic I'm modeling in the post is saying something like: "sure, some people do justification better than others, but these are different communities so it makes sense that some communities care more about getting this right than others do."
I guess another angle could be from meta-epistemic uncertainty. Like if we think there is a truth about what kinds of justification practices are better than others, but we're deeply uncertain about what it is, it may then still seem quite reasonable that different groups are trying different things, especially if they aren't trying to participate in the same justificatory community.
Not entirely sure I've gotten all the philosophical terms technically right here, but hopefully the point I'm trying to make is clear enough!
Hi Vasco! As we’ve discussed in other threads/emails/etc, we have different meta-ethical views and different views about consciousness. So I’m not surprised we’ve landed in somewhat different places on this issue :)
Bob and I make most of the strategic and granting decisions for Arthropoda, and we have slightly different views, so I don't know exactly where we will land (he'll reply in a second with his thoughts). But broadly, we both agree that we don't think soil nematodes and some other soil invertebrates have enough likelihood of being sentient to be a high priority, nor do we think that (for those that are sentient) we have a good enough understanding of what would help them to make action-oriented grants (which is Arthropoda's focus) -- in part because we don't endorse precise-probabilities approaches to handling uncertainty, and so want to make grants that are aimed towards actions that appear robustly positive under a range of possible probability assignments/ways of handling uncertainty.
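A minimal sketch of what "robustly positive under a range of probability assignments" can mean in practice: instead of picking one precise probability of sentience, sweep an interval and check whether the grant's expected value stays positive everywhere. All names and numbers below are hypothetical illustrations, not Arthropoda's actual granting model.

```python
# Hypothetical robustness check: a grant counts as "robustly positive"
# only if its expected net value is positive for EVERY probability of
# sentience in the range we consider plausible. Numbers are invented.

def robustly_positive(benefit_if_sentient, cost, p_sentient_range):
    """True iff expected net value is positive across the whole range."""
    return all(p * benefit_if_sentient - cost > 0 for p in p_sentient_range)

# Plausible range: 5% to 50% chance the target animals are sentient.
p_range = [i / 100 for i in range(5, 51)]

print(robustly_positive(benefit_if_sentient=1000, cost=40, p_sentient_range=p_range))
# prints True: positive even at the pessimistic 5% end
```

Note the contrast with a precise-probabilities approach, which would collapse the range to a single number and fund anything with positive expected value at that point estimate.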
That said, our confidence in our own position is not high. So, we'd be willing to fund things to challenge our own views: If we had sufficient funding from folks interested in the question, Arthropoda would fund a grant round specifically on soil invertebrate sentience and relevant natural history studies (especially in ways that attempt to capture the likely enormous range of differences between species in this group). Currently, much of our grant-making funds are restricted (at least informally) to farmed insects and shrimp, so it's not an option.
As a result, I expect that Arthropoda is probably still one of the better bets for donors interested in soil invertebrates. As a correction to your comment, Arthropoda is not restricted in focus as a matter of principle, but just has happened for contingent reasons to focus on farmed animals in its first rounds. We collaborate with Wild Animal Initiative (I'm the strategy director at WAI) to reduce duplication of effort, and have a slightly better public profile for running soil invertebrate studies, so we expect Arthropoda, rather than WAI, would generally be the one to run this kind of program. I don't want to speak for CWAW, so I'll let them reply if they have interests in this area; but from my own conversations I doubt they would be in a good position to make soil invertebrates a priority in the next couple of years. Finally, you haven't mentioned them, but Rethink Priorities may also be open to some work in this area (I'm not sure though).
Arthropoda treasurer here - pretty much option 2. We are hoping to increase our expenditure next year to run an extra grants round, add a contractor to help manage some things (currently we're almost entirely volunteer), add a bit to our strategic reserve (to carry us through donation fluctuations without needing to pause grant-making), and a few other small bits and pieces. A good chunk of this expansion can be covered by our reserves + some existing donor commitments, and 55k is about what's left.
We actually have much more room for funding in theory -- up to several million to run a couple of targeted programs we have in mind. These activities would require hiring a program manager to run them, as well as a lot more in grants. But we're not really expecting EA Forum readers to fill that gap unless they happen to run a large foundation :)
haha I can confirm I did not karma-knock you, and I was kind of surprised you had gotten so downvoted! I actually upvoted when I saw that, to counteract.
One random thought I'll add: since you are most experienced (afaict?) in GHD, I'd expect your arguments to be at their best in that context, so you getting upvoted on GHD and downvoted on AW is at least consistent with having more expertise in one than the other -- so not necessarily evidence that AW folks are more sensitive. Although I'm not ruling that out!
The other thing I'm not sure I understand is how much weight a single individual's downvote can have - is there any chance that a few AW people have a ton of karma here, so that just a few people downvoting can take you negative in a way that wouldn't happen as much in GHD?
Thanks! I think I might end up writing a separate post on palatability issues, to be honest :)
On the intervention front, WAW folks are now turning to interventions in at least some cases (in WAI's case, rodenticide fertility control is something they're trying to fundraise for, and at NYU/Arthropoda I'm working on or fundraising for work on humane insecticides and bird-window collisions). I just meant that perhaps one reason we don't have more of them is that there's been a big focus on field-building for the last five years.
For field-building purposes, there's still been some focus on interventions for the reasons you mention, but with additional constraints -- not just cost-effective to pursue, but also attractive for scientists to work on, serving to clarify what WAW is, etc., to maximize the field-building outcomes if we can.
Hi Nick! Thanks for engaging. I'm not reading you as being anti WAW interventions, and I think you're bringing up something that many people will wonder about, so I appreciate you giving me the opportunity to comment on it.
Basically, let's say the type of intractability worry I was mainly addressing in the post is "intractability due to indirect ecological effects." And the type you're talking about is "intractability due to palatability" or something like that.
I think readers who broadly buy the arguments in my post but don't think WAW interventions are palatable are mistaken, though for understandable reasons. I think the reason is either (1) underexposure to the most palatable WAW ideas, because WAW EAs tend not to focus on/enjoy talking about those, or (2) using the "ecologically inert" framework when talking about WAW and one of the other frameworks when talking about other types of interventions.
Let's first assume you're okay with spotlighting, at least to a certain degree. Then, "preventing bird-window collisions with bird-safe glass legislation" and "banning second generation anti-coagulant rodenticides" are actually very obviously good things to do, and also seem quite cost-effective based on the limited evidence available. I think people don't really realize how many animals are affected by these issues - my current best-guess CEA for bird-safe glass suggests it's competitive with corporate chicken campaigns, although I want to do a little more research to pin down some high-uncertainty parameters before sharing it more widely.
Anti-coagulant bans and bird-safe glass are also palatable, and the proof is in the pudding: California, for example, has already passed a state-wide ban on these specific rodenticides, and 22 cities (including NYC and Washington DC) have already passed bird-safe glass regulations. I could probably provide at least 5 other examples of things that fit into this bucket (low backfire under spotlighting, cost-effective, palatable), and I don't really spend most of my time trying to think of them (because WAI is focused on field-building, not immediate intervention development, and because I'm uncertain if spotlighting is okay or if I should only be seeking ecologically inert interventions).
The important thing to note is that WAW is actually more tractable, in some cases, than FAW interventions, because it doesn't require anyone to change their diet, and people in many cultures have been conditioned to care about wild animals in a way they've been conditioned to reject caring about farmed animals. There's also a lot of "I love wild animals" sentiment being channelled into conservation, but my experience is that when you talk to folks with that sentiment, they also get excited about bird-window collision legislation and things like that.
But perhaps you're actually hoping for ecologically inert interventions. Then, I'm not sure which interventions you'd think would be acceptable instead? Sure, humane insecticides could end up being hard (although I think much less hard than you think, for reasons I won't go into here). But literally nothing else - in FAW, in GHD, in AI - seems reasonably likely to be ecologically inert while still plausibly causing a reduction in suffering (maybe keel bone fracture issues in FAW?). But the folks who say "WAW interventions aren't palatable" have not generally, in my experience, said "and I also don't do GHD because it's not ecologically inert" -- so I suspect in at least some instances they are asking for ecologically inert interventions from WAW, and something else from their cause area of preference.
@Eli Rose🔸  I think Anthony is referring to a call he and I had :)
@Anthony DiGiovanni I think I meant more like there was a justification of the basic intuition bracketing is trying to capture as being similar to how someone might make decisions in their life, where we may also be clueless about many of the effects of moving home or taking a new job, but still move forward. But I could be misremembering!