Campaign coordinator for the World Day for the End of Fishing and Fish Farming, organizer of Sentience Paris 8 (animal ethics student group), FutureKind AI Fellow, freelance translator, enthusiastic donor.
Fairly knowledgeable about the history of animal advocacy and possible strategies in the movement. Very interested in how AI developments and future risks could affect non-human animals (both wild and farmed). Reasonably clueless about this.
"We have enormous opportunity to reduce suffering on behalf of sentient creatures [...], but even if we try our hardest, the future will still look very bleak." - Brian Tomasik
In the spirit of thanking whoever helped you: this post was what finally convinced me to donate substantially ($1,500+ since then) to charities working on limiting the growth of insect farming when I read it in February 2025. And yet, I had already substantially engaged with work on insect suffering, especially from Tomasik. I'm not sure what pushed me over the edge, but this post really managed to make me take in the current evidence as worthy of influencing my priorities.
I'm not well-read enough on this, but it seems you would appreciate this very cool recent post, which (badly summarized) explains how one can still have forms of moral action-guidance even if some sound moral theories (in your post, a form of impartial consequentialism?) imply that "none of our actions matter": Resolving radical cluelessness with metanormative bracketing.
Really cool! Love these breakdowns that go into the weeds of measurable impact.
"Programming: Some more focus on EA’s past achievements would have been beneficial in making a more compelling case for the movement as a whole."
I intuitively agree ("track record" seems like one of the strongest arguments for EA (obligatory Scott Alexander reference), especially in global health and farm animal welfare), but I wonder what makes you say that. Was it something you heard in the feedback form?
Welcome to the Forum, Zoe! I guess my knee-jerk response to this would be that while I agree these are significant problems with EA branding, I don't think most of them have easy, tractable answers (sadly a common occurrence with EA branding problems imo, e.g. "longtermism" being perceived as callous toward present-day issues).
"Hive mind" seems hard to avoid when building a community of people working toward common goals that are somewhat strange. "Holier-than-thou" is almost inevitable in "doing the most good one can" (and EA seems in fact quite relaxed by this standard, though your specific criticisms of the 10% pledge were interesting to read). "Sounds like AI", however, is probably fixable, and individuals could make some efforts, in the age where "AI-like writing" is increasingly criticized, to have a slightly warmer style, and maybe to de-emphasize bulletpoints somewhat? (less sure about this, I like bulletpoints)
But above all, I want to say, congratulations on your yearly donations! Even if it's not the holy grail of 10%, $10K a year is absolutely no joke, and giving 10% is far from being an established EA norm anyway. This level of donations, and the plan to keep going, is rare and precious. Thank you for doing so much for others!
Interesting question! Might the Kurzgesagt video on factory farming count as an example of this for animal welfare? If someone wants to do it again, they could try to assess what they think the video did right (and wrong) and improve upon it. Maybe some cues on messaging could be taken from Lewis Bollard's fairly successful appearance on the Dwarkesh podcast?
Also, a potential reason why AI Safety has focused on this (compared to other cause areas) might be that it has pipelines which can absorb a fair number of people, so broad outreach that gets a few dozen counterfactual people applying to fellowships and the like seems more worthwhile? This may be less the case for other causes when it comes to talent: I assume that for animal welfare and global health, the informal theory of change behind funding a high-quality video would be rather donation-focused. However, I could be wrong about the talent pipeline reason, and maybe some content creation funders mostly want to raise broad awareness of AI risk issues (that seemed to be the case for the Future of Life Institute).
I think this is a very compelling (and enjoyable) essay. I particularly appreciate the first point of 2.1 as an intuitive reminder of the complicated empirical issues at hand. The main argument here is strengthened by this intuitive way of highlighting that doing (impartial) good is actually complicated.
I appreciate the effort made here to highlight alternatives to long-term EV maximization with precise credences, since the lack of "other options" can be a big mental blocker. Part 3 (and the conclusion, to an extent) seems to constitute the first solid high-level overview of this on the Forum, so this is quite helpful. Not to mention, these sections act as serious reminders of how important it is to "get it right", whatever that ends up meaning.
When discussing considerations around backfire risks and near-term uncertainty, it is common to hear that this is all excessive nitpicking, and that such discussion lacks action guidance, making it self-defeating. And it's true that raising the salience of these issues isn't always productive, because it doesn't offer clear alternatives to going with our best guess, deferring to current evaluators who take backfire risks less seriously, or simply not seeking out interventions to make the world a bit better.
Thus, because this article centers the discussion on the search for positive interventions through a reasonably actionable list of criteria, it has been one of my most valuable reads of the year.
I think the more time we spend exploring the consequences of our interventions, the more we realize that doing good is hard. But it's plausibly not insurmountable, and there may be tentative, helpful answers to the big question of effective altruism down the line. I hope that this document will inspire stronger consideration for uncertainty. Because the individuals impacted by near-term second-order effects of an action are not rhetorical points or numbers on a spreadsheet: they're as real and sentient as the target beneficiaries, and we shouldn't give up on the challenge of limiting negative outcomes for them.
As someone who's interested in the practical implications of cluelessness for decision-making but would not be able to read that paper, I'm grateful that you went beyond a linkpost and took the time to make your theory accessible to more Forum readers. I'm excited to see what comes next in terms of practical action guidance beyond reliance on EV estimates. Thank you so much for a great read!
(10% disagree) I do not think there are any robust interventions for current humans who wish to improve "impartial welfare" in the future, but I'd probably find such interventions dominant if I believed there were any.
I don't want to say I'm "not a longtermist" since I'm never sure whether action-guidance has to be contained within one's theory of morality, but given the framing of the question is about what to do, I have to put myself in disagree, as I'm quite gung-ho on extreme neartermism (seeing a short path to impact as a sort of multiplier effect, though I may be wrong).
This thesis is one of the most insightful community-related things I've read on the Forum. I'd love to read more about it, and to hear whether you think there's anything actionable on the margin (heavily de-emphasize careers within EA orgs in outreach material, especially now that top impact may have moved elsewhere, e.g. to high-impact non-EA roles?). Thanks!