
mal_graham🔸

Strategy Director @ Wild Animal Initiative
205 karma · Joined · Working (6-15 years) · Philadelphia, PA, USA

Comments (9)

Hi Max, thanks for the positive feedback and for the question. 

I will ask our research team if they are aware of any specific papers I could point to; several of them are more familiar with this landscape than I am. My general sense that AI-enabled modeling would be beneficial comes more from the very basic guess that, since AI is already pretty good at coding, work that relies on coding might get a lot better if we had TAI. If that's right, then even if we don't currently see great examples of modeling work being useful, it could nevertheless get a lot better sooner than we think. 

Thanks for bringing up the usefulness sentence; I think I could have been a lot clearer there and will revise it in future versions. I mainly meant that I was less confident about what TAI would mean for infrastructure and academic influence, and so any possible implications for WAW strategy would be more tentative. However, thinking about it a bit more now, I think the two cases are a bit different.

For infrastructure: In part, I down-weighted this issue because I find the idea that the manufacturing explosion will allow every scientist to have a lab in their house less probable, at least on short timelines, than software-based takeoffs. But also, and perhaps more importantly, I generally think that on my list of reasons to do science within academia, 1 and 3 are stronger reasons than 2: infrastructure can be solved with more money, while the others can't. So even if thinking about TAI caused me to throw out the infrastructure consideration, I might still choose to focus on growing WAWS inside academia, and that makes figuring out exactly what TAI means for infrastructure less useful for strategy. 

For "academic stamp of approval": I think I probably just shouldn't have mentioned this here, because I do end up talking about legitimacy in the piece quite a bit. But here's an attempt at articulating more clearly what I was getting at: 

  • Assume TAI makes academic legitimacy less important after TAI arrives.
  • You still want decision-makers to care about wild animal welfare before TAI arrives, so that they use it well etc.
  • Most decision-makers don't know much about WAW now, and one of the main pathways by which wildlife decision-makers currently become familiar with a new issue is academia.
  • So, academic legitimacy is still useful in the interim.
  • And, if academic legitimacy is still important after TAI arrives, you also want to work on academic legitimacy now.
  • So, it isn't worth spending too much time thinking about how TAI will influence academic legitimacy, because you'd do the same thing either way. 

That said, I find this argument suspiciously convenient, given that as an academic, of course I'm inclined to think academic legitimacy is important. This is definitely an area where I'm interested in getting more perspectives. At minimum, taking TAI seriously suggests to me that you should diversify the types of legitimacy you try to build, to better prepare for uncertainty. 

Hi Henry, thanks for your question. I should be clear that I am speaking about my own opinions in this comment, not any institutional position of Wild Animal Initiative.

I do not assume that wild animal life is net negative. I feel pretty clueless about the typical quality of life in the wild. The reason I work on wild animal welfare science is in part because I think people have been way too quick to jump from hypothesis to conclusion on the quality of life in the wild, and empirical studies are important to fill that knowledge gap.  

Given the above, the main reason for my comment about space propagation is that I feel risk-averse about spreading life we don't understand well onto other planets (although I suspect there are a number of philosophical positions besides my own that could make one skeptical of bringing wild animal life to space in a thoughtless way). It seems very likely that even if life on Earth for wild animals were knowably great, it could still be quite bad on other planets or in space, depending on which animals are brought to space, how they are treated, what kinds of experiments are tried on the way to successful propagation, etc. 

People are very thoughtless about wild animal welfare when reintroducing animals to habitats on Earth already (there are a number of conservation failures that come to mind), so I suspect that humans might be equally thoughtless about animal welfare when bringing animals to space. I might think the average pet dog has a great life and still be hesitant to suggest that really inexperienced owners buy pet dogs they don't know how to take care of. 

Maybe I'm misunderstanding you, but your last statement seems to imply that anyone who is concerned about wild animals having potentially net-negative lives should be a button-pusher? I'm not sure that follows except under very pure-EV-chasing utilitarianism, which is not my moral position nor a position I recommend. Personally, I would not push the button. 

Thank you for writing this, I found it very useful.

You mention that all the studies you looked at involved national protests. So is it fair to say that the takeaway is that we have pretty strong evidence for the efficacy of very large protests in the US, but very little evidence about smaller protest activities? 

Another commonality is that all the protests were on issues affecting humans. I wonder whether protests about animals can be expected to have similar results, given that baseline consideration for animals as relevant stakeholders seems to be quite a bit lower. 

Finally, just musing, but I wonder if any studies have looked at patterns of backlash? E.g., BLM protests succeed in the short term, but then DEI is cancelled by the Trump administration. I suppose there could be backlash to any policy success regardless of how it was accomplished, but one hypothesis could be that protest is a particularly public way of moving your movement forward, and so perhaps particularly likely to draw opposition -- although why you would see that years later instead of immediately is not clear, so maybe this isn't a very good hypothesis... 

And I guess also more generally, again from a relatively outside perspective, it's always seemed like AI folks in EA have been concerned with both gaining the benefits of AI and avoiding X risk. That kind of tension was at issue when this article blew up here a few years back and seems to be a key part of why the OpenAI thing backfired so badly. It just seems really hard to combine building the tool and making it safe into the same movement; if you do, I don't think stuff like Mechanize coming out of it should be that surprising, because your party will have guests who only care about one thing or the other.

Oh whoops, I was looking for a tweet they wrote a while back and confused it with the one I linked. I was thinking of this one, where he states that "slowing down AI development" is a mistake. But I'm realizing that this was also only in January, when the OpenAI funding thing came out, so doesn't necessarily tell us much about historical values.  

I suppose you could interpret some tweets like this or this in a variety of ways, but they now read as consistent with "don't let AI fear get in the way of progress" type views. I don't say this to suggest that EA funders should have been able to tell ages ago, btw; I'm just trying to see if there's any way to get additional past data.

Another fairly relevant thing to me is that their work is on benchmarking and forecasting potential outcomes, something that doesn't seem directly tied to safety and which is also clearly useful to accelerationists. As a relative outsider to this space, it surprises me much less that Epoch would be mostly made up of folks interested in AI acceleration or at least neutral towards it, than if I found out that some group researching something more explicitly safety-focused had those values. Maybe the takeaway there is that if someone is doing something that is useful both to acceleration-y people and safety people, check the details? But perhaps that's being overly suspicious. 

Responding here for greater visibility -- this is about the idea in your short-form that the lesson from this is to hire for greater value alignment. 

Epoch's founder has openly stated that their company culture is not particularly fussed about most AI risk topics [edit: they only stated this today, making the rest of my comment here less accurate; see thread]. Key quotes from that post: 

  • "on net I support faster development of AI, so we can benefit earlier from it."
  • "I am not very concerned about violent AI takeover. I am concerned about concentration of power and gradual disempowerment."

So I'm not sure this is that much of a surprise? It's at least not totally obvious that Mechanize's existence is contrary to those values.

As a result, I'm not sure the lesson is "EA orgs should hire for value alignment." I think most EAs just didn't understand what Epoch's values were. If that's right, the lesson is that the EA community shouldn't assume that an organization that happens to work adjacent to AI safety actually cares about it. In part, that's a lesson for funders to not just look at the content of the proposal in front of you, but also what the org as a whole is doing. 

My vibe is that you aren't genuinely interested in exploring the right messaging strategy for animal advocacy; if I'm wrong, feel free to message me. 

A separate nitpick of your post: it doesn't seem fair to say that "Shrimp Welfare Project focuses on" ablation, if by that you meant "primarily works on." Perhaps that's not what you meant, but since other people might interpret it the same way I did, I'll just share a few points in the interest of spreading an accurate impression of what the shrimp welfare movement is up to: 

  • SWP primarily works on changing how shrimp are killed, not ablation. Their Humane Slaughter Initiative is listed first on their list of interventions.
  • In fact, they don't list anything related to eyestalk ablation on their interventions list at all; it appears they just write up a profile when a company reports phasing out eyestalk ablation, but it doesn't seem like they are actively campaigning on it.  
  • In support of that theory, SWP's Guesstimate model of their impact doesn't include eyestalk ablation reforms; it only counts their shrimp stunning work.
  • Recent campaign wins in the UK were for eyestalk ablation and stunning (e.g., item 4 on the Tesco welfare policy), not just ablation, and the Mercy For Animals announcement on it is clear that ablation only happens to breeding females. As far as I am aware, all shrimp welfare campaigning that includes eyestalk ablation also includes other higher-impact reforms in its ask. 

I'm currently reviewing Wild Animal Initiative's strategy in light of the US political situation. The rough idea is that things aren't great here for wild animal welfare or for science, we're at a critical time in the discipline when things could grow a lot faster relatively soon, and the UK and the EU might generally look quite a bit better for this work in light of those changes. We already support a lot of scientists in Europe, so this wouldn't be a huge shift in strategy. It's more about how much weight to put toward which locations for community and science building, and also whether we need to make any operational changes (at this early stage, we're trying to be very open-minded about options -- anything from offering various kinds of support to staff to opening a UK branch). 

However, in trying to get a sense of whether that rough approach is right, it's extremely hard to get accurate takes (or, at least, to tell whether someone is thinking about the relevant risks rationally). And it's hard to tell whether "how people feel now" will have a lasting impact. For example, a lot of the reporting on scientist sentiment sounds extremely grim (examples 1, 2, 3), but it's hard to know how large the effect will be over the next few years -- a reduction in scientific talent, certainly, but so much so that the UK is a better place to work given our historical reasons for existing in the US? Less clear. 

It doesn't help that I personally feel extremely angry about the political situation, which is probably biasing my research. 

Curious if any US-based EA orgs have considered leaving the US or taking some other operational/strategic step, given the political situation/staff concerns/etc? Why or why not? 

I think it's quite important to remember the difference between a charity focusing on something because of gut-level vibes and a charity using gut-level vibes to inspire action. Most people are not EAs. If only EAs were inspired by my careful analytical report on which things cause the most suffering in farmed shrimp, my report would not achieve anything. But if I know that X is the most important thing, and Y gets people to care, I can use Y to get people in the door in order to solve X. 

Also, because most people are not EAs, I actually think you're wrong that most people will feel duped if they find out it's not many shrimp. My parents, for example, are not vegan but were horrified by the eyestalk ablation thing. I told them honestly that it didn't involve many shrimp, but they aren't utilitarians: the number of individuals affected doesn't have as much of a visceral impact on them as the fact that it is happening at all. Despite knowing full well how many chickens die in horrible conditions, my father still eats chicken, and yet the eyestalk ablation thing got him to stop eating shrimp. Remembering that people are broadly motivated by different things, and being able to speak to different kinds of motivation, seems to me to be a critical aspect of effective advocacy.