Managing Director at Hive. Effective Altruism and Animal Advocacy Community Builder with experience in national, local, and cause-area-specific community building. Amateur Philosopher, particularly keen on moral philosophy.
I'm super happy to chat with anyone and learn from you, so don't hesitate to reach out even if you don't have any expertise in the following. However, some specific areas I am hoping to learn more about are:
- I work at Hive, a global community-building organization for farmed animal advocates. I would love to hear your thoughts, (project) ideas and feedback!
- The implications, opportunities, and risks of AI development for farmed animal advocacy.
- Farmed animal advocacy careers outside of NGOs and alt-protein (e.g., food-industry and adjacent-sector jobs, and policy roles in governmental institutions).
I have a fairly good overview of the farmed animal advocacy space, so I'm happy to chat about all things there. I find that I am most helpful in brainstorming, red-teaming, effective giving, and career advice. And, of course, I'm happy to talk about Hive or meta-level work in animal advocacy more generally! I have some experience in community building on a city, national, and cause-area-specific level, so I'm happy to nerd out about that. I also have a background in philosophy, focusing on moral philosophy, so I'm happy to bounce ideas or chat cause prioritization.
Great question, thank you for working on this. An inter-cause-prio-crux that I have been wondering about is something along the lines of:
"How likely is it that a world where AI goes well for humans also goes well for other sentient beings?"
It could probably be made much more precise and nuanced, but specifically, I would want to assess whether "trying to make AI go well for all sentient beings" is marginally better supported through directly related work (e.g., AIxAnimals work) or through conventional AI safety measures. The latter would be supported if, for example, making AI go well for humans inevitably makes, or is necessary for, AI going well for all sentient beings. (If it is merely necessary, the answer would further depend on how likely it is that AI goes well for humans in the first place.) Either way, a general assessment of AI futures that go well for humans would be a great and useful starting point for me.
I also think various explicit estimates of exactly how neglected a (sub-)cause area is (e.g., in FTEs or total funding) would greatly inform some of the inter-cause-prio questions I have been wondering about. Assuming explicit marginal cost-effectiveness estimates aren't really possible, neglectedness seems like the most common proxy I refer to, and one I am missing solid numbers on.
Super interesting read, thanks for writing this! I have been thinking a bit about the US and China in an AI race and was wondering whether I could get your thoughts on two things I have been unsure about:
1) Can we expect the US to remain a liberal democracy once it develops AGI, especially given recent concerns around democratic backsliding? (I think I first saw this point brought up in a comment here.) And if we can't, would AGI under the US still be better?
2) On animal welfare specifically, I'm wondering whether China's very pragmatic, techno-optimistic, efficiency-focused stance could make a pivot to alternative proteins (assuming they are ultimately a more efficient product) more likely than in the US, where alt-proteins might be more of a politically charged topic.
I don't have strong opinions on either, but these two points nudged me to be significantly less confident in my prior preference for the US in this discussion.
Interestingly, Claude's numbers would actually suggest that BOAS is a higher-EV decision (for some reason, it appears to double-count the risk; i.e., it took the EV that already accounts for the 60% failure probability and multiplied it again by 0.4).
Not that anyone here should (or would) make these decisions based on unchecked Claude BOTECs anyway; I just found it to be an interesting flaw.
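For illustration, here is a minimal sketch of that double-counting, with a made-up payoff (only the 40%/60% split comes from the comment above):

```python
# Hypothetical payoff; not the actual BOTEC figures.
p_success = 0.4            # 40% chance of success, i.e., 60% chance of failure
payoff_if_success = 100.0  # made-up value of the upside

correct_ev = p_success * payoff_if_success  # risk already accounted for: 40.0
# The flaw described above: the risk-adjusted EV gets multiplied by p_success again
double_counted_ev = correct_ev * p_success  # 16.0

print(correct_ev, double_counted_ev)
```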
I would like to add to this and applaud Vasco for being such a good sport about this: sharing the draft with me in advance and engaging in an unusually civil and productive back-and-forth with me to clear up misunderstandings, including nitpicky nuances and issues that arose from my own miscommunication. To anyone who would like to share feedback or ways to improve our community guidelines but prefers not to do so publicly: you can also reach me/us via DM here on the Forum, by e-mail, or on Slack, and we have an anonymous form! That said, we do generally think that a public discussion here could be valuable for other community spaces as well. I would also like to, despite this, thank you, Vasco, for being a valued community member and for your exceptional moral seriousness, commitment to taking ideas seriously, and care.
Strong agree! I also often get asked "why push careers, if the movement is primarily funding-constrained?" It's almost as though there is a bit of a misconception that only non-profit work counts as a "career that helps animals", and I think part of this is that there is no good guide to making an impact in adjacent areas (outside of E2G, perhaps). I'm very excited to see the research you are producing!
Great post, thanks for looking into this! I previously noted four different types of interventions one might want to prioritize given AIxAnimals; I'd love to hear your thoughts on the implications for this intersection from a broader, zoomed-out perspective!
Really enjoyed reading this post!
This example reminded me of something similar I have been meaning to write about, but @AppliedDivinityStudies got there before me (and did so much better than I could have!): it is not just that influencing Big Normie Foundations could produce the same marginal impact because of the lower counterfactual value of their spending, but also that there is way more money in them.
I think one can conceptualize impact as a function of how much influence we are affecting, where that influence is moving from (e.g., the counterfactual badness or lack of goodness), and where it is moving to. It seems to me like we are overly focused on where the influence is moving to. Perhaps justifiably so, given the objections you mention in the post, but it seems far from obvious that our focus is optimally balanced.
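To make that framing a bit more concrete, here is a toy formalization of my own (the function and numbers are hypothetical, not from the post): impact as the amount of influence moved times the gap between the value of where it would otherwise have gone and the value of where it ends up.

```python
def impact(amount_moved, value_from, value_to):
    """Toy model: impact of redirecting influence or money.

    amount_moved: how much influence (e.g., dollars) we redirect
    value_from:   value per unit at the counterfactual destination
    value_to:     value per unit at the new destination
    """
    return amount_moved * (value_to - value_from)

# Made-up numbers: a large pot with a poor counterfactual can rival
# a smaller, better-targeted pot of money.
print(impact(amount_moved=100, value_from=0.1, value_to=0.4))  # 30.0
print(impact(amount_moved=10, value_from=0.0, value_to=1.0))   # 10.0
```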