Director of Operations at GovAI. I have a blog about nonprofit ops and strategy.
I previously co-founded and served as Executive Director at Wild Animal Initiative, was the COO of Rethink Priorities from 2020 to 2024, and ran an operations consultancy, Good Structures, from 2024 to 2025.
Thanks James! I liked the old piece. I have no idea how to handle the pay questions: my default answer is something like "pay people reasonably well, such that they can save for retirement, have families, etc.," but that view just collapses in many ways once you're competing with the market. And I think the AI space feels this especially hard — orgs there have to compete directly with labs for talent.
But yeah, I don't really know how to sit with all of this. Maybe it's just a set of feelings I don't want to leave unsaid. But I also worry that the things that pushed the community to find really interesting, unusual opportunities came from it being narrow, high-trust, and highly truth-seeking, and that this might change with growth.
I definitely agree to some extent about FTX, though money did flow into some other spaces as well. But agree that I'm painting with too broad a brush there.
And definitely — I think I meant this post partially as a lamentation of what feels inevitable with funding. Our ability to make the world better might massively grow, but at the same time, it feels like something essential to EA's past success (or at least, the culture and community that pushed it to do weird, fringe-y things that I think hold much of EA's greatest promise) might be lost.
Nice, that was useful. I agree that the downside here is some risk of interventions not being robust. I'm not really sure how to think about that trade-off; on the other hand, increasing our certainty could make it really hard to do any interventions at all (e.g. a world where we think nematodes matter, but don't know whether they have good or bad lives, seems really hard to operate in).
On motivational trade-offs — I definitely agree that there is some evidence threshold that would change my mind, and I'm not totally ruling that possibility out. But to answer your question directly: no, I don't think motivational trade-offs alone would change it. That said, I haven't thought much about this, and I'm not sure the position will hold up to scrutiny.
Math nitpicks are helpful, thanks! Both corrections were right — I was just doing the math too quickly :).
RE welfare comparisons: I could imagine the difference between us being our relative confidence that empirical research will improve our understanding. I might be less bullish on this sort of work because I don't feel confident we'll meaningfully reduce our uncertainty about welfare ranges. But I'm not confident in this. Would you expect the most useful work for reducing your own uncertainty to be philosophical or empirical?
RE nematodes: I agree that this isn't clear cut in some sense, but I feel fairly confident that they should be bracketed out unless we significantly advance our understanding of animal consciousness (and see above — maybe my own lack of confidence in our ability to make empirical progress on this is part of why I'm more comfortable setting them aside).
RE cage-free: yes — I think the meaningful counterfactual is money spent on cage-free campaigns otherwise not being spent on animal welfare at all, or being spent in mostly useless ways. I'd most likely endorse cage-free campaigns over that, despite agreeing with you that non-target uncertainty is high, though I haven't thought about it much.
There is a (currently) non-public organization working on making this happen. If anyone is interested in learning more, feel free to reach out (we're hiring)!
Yeah, I agree those are all challenges here — I was mainly responding to what I perceive to be a push (maybe explicit in this case) to reduce the options presented to new funders to a few funds with fairly similar views, which I think is possibly a strategic mistake, even if the alternative isn't ideal either.
I think I'm less convinced than you that scaling most of these will end up resulting in meaningful positive impact for more animals. The exception is welfare technology, which I'm quite excited about, but my impression is that the good opportunities there are pretty fundable right now.
To be clear, I also don't think loads of money should go into neglected animals either (though on the margin I'm more excited about things here than FAW) — I think there is a lot more potential for helping animals in wild animal and invertebrate welfare, but there aren't ways to absorb tons (e.g. tens of millions of dollars) of funding there either (at least not yet).
In both cases, I'm generally more excited by a smaller, more highly coordinated and strategic movement (or set of movements) than by a larger one, but I think more funding right now would be used primarily to try to build a bigger one. I'm guessing this is a lot of the crux between us. But I also know that I'm a bit on my own island with these views at times, and I'm genuinely pro-pluralism in the space. So I appreciate you pushing on it so hard!
I think AI safety mostly can't absorb new funding that effectively, except for political work (which is maybe complicated by various backfire risks). But it also has a better track record so far than FAW, which suggests it can use the money it has more effectively. I'm not really a partisan here, though — at heart I'm an animal welfare person who mainly feels sad that it might be pretty hard to help more animals than we already are effectively helping.
Thanks Michael!
I definitely agree - I have this sense of "am I complaining about something real, or is this just nostalgia for an inevitable change that comes with more funding?"
I also don't really have good answers here on what to do about it. A slightly hesitant/low-confidence thought (because I'm recommending Continental philosophy on the EA Forum) is that Distinction is the best book for thinking about EA-as-a-social-scene, and that Distinction-y/Spheres of Justice-style interventions are probably the best ones: reduce the number of spheres of power individuals hold — i.e. try not to give people with lots of financial resources greater cultural/social power, try not to give people with cultural/social power lots of financial resources, don't reinforce "cultural elite" norms within EA in the pipelines for new people, etc. But I think this is quite hard to do, especially from the bottom up.