Campaign coordinator for the World Day for the End of Fishing and Fish Farming, organizer of Sentience Paris 8 (animal ethics student group), FutureKind AI Fellow, freelance translator, enthusiastic donor.
Fairly knowledgeable about the history of animal advocacy and possible strategies in the movement. Very interested in how AI developments and future risks could affect non-human animals (both wild and farmed). Reasonably clueless about this.
"We have enormous opportunity to reduce suffering on behalf of sentient creatures [...], but even if we try our hardest, the future will still look very bleak." - Brian Tomasik
As someone who's interested in the practical implications of cluelessness for decision-making but wouldn't have been able to read the paper itself, I'm grateful that you went beyond a linkpost and took the time to make your theory accessible to more Forum readers. I'm excited to see what comes next in terms of practical action guidance beyond the reliance on EV estimates. Thank you so much for a great read!
(10% disagree) I do not think there are any robust interventions available to current humans who wish to improve "impartial welfare" in the future, but if I believed such interventions existed, I would probably consider them dominant.
I don't want to say I'm "not a longtermist", since I'm never sure whether action-guidance has to be contained within one's theory of morality, but given that the framing of the question is about what to do, I have to put myself under "disagree", as I'm quite gung-ho on extreme neartermism (seeing a short path to impact as a sort of multiplier effect, though I may be wrong).
Compelling and moving linkpost. However, the first footnote is broken for some reason: when I hover over it, it says "Here the best AI system is shown as Claude 3.7 Sonnet, though note that a more recent evaluation finds that OpenAI’s o3 may be above trend, also broadly at a 1-2h time horizon." Yet at the bottom of the post, the footnote appears correctly. I wonder what causes this.
There is a strong chance that the sum total of what I do due to EA will end up having no impact (due to short AGI timelines) or being net-negative (due to flow-through effects). However, EA has also convinced me that at least a few altruistic endeavors are strongly likely to be beneficial for the world. My donations of a few K a year (and occasional volunteering) towards these endeavors would have been extremely unlikely had I not engaged deeply with EA.
The counterfactual seems pretty bleak. Before becoming convinced overnight of EA's importance by stumbling onto the pdf of Suffering-focused Ethics, I was convinced that it was impossible to have a net-positive effect on the world, and I felt consumed by guilt (the latter turned out to be useful fuel for getting into doing good, so I don't regret it).
Thank you for this post! It's quite clear and illustrates all the different "reflexes" in the face of potential TAI development that I've observed in the movement. Since we can often jump to a mode of action and assume it's the correct path, I find it useful to get the opportunity to carefully read the assumptions and see all the possible responses laid out.
Right now, my decision parliament tries to accommodate "Optimise harder for immediate results" and "Focus on building capacity to prepare for TAI". Though it is frustrating to know that one of the ways of responding to AI developments you list here will turn out to be the "best" path for sentient beings... and that we can't actually be sure which one it is.
While the formatting might deter some, now that the author has become a successful and controversial blogger, quite literally EA's bulldog, it is interesting to see the earlier theoretical considerations that seem to have led them to where they are now (making fun of anti-shrimp-welfare advocates online).
Some of the densest and most action-focused conversations I've ever had took place during the two days of the Unconference; I was quite impressed with how well-organized and successful it was. And I admire how the budget was handled! I definitely encourage joining the Slack community if you are eager to understand the issues better and take action.
When discussing considerations around backfire risks and near-term uncertainty, it is common to hear that this is all excessive nitpicking, and that such discussion lacks action guidance, making it self-defeating. And it's true that raising the salience of these issues isn't always productive, because it doesn't offer clear alternatives to going with our best guess, deferring to current evaluators who take backfire risks less seriously, or simply not seeking out interventions to make the world a bit better.
Because this article centers the discussion on the search for positive interventions through a reasonable, actionable list of criteria, it has been one of my most valuable reads of the year.
I think the more time we spend exploring the consequences of our interventions, the more we realize that doing good is hard. But it's plausibly not insurmountable, and there may be tentative, helpful answers to the big question of effective altruism down the line. I hope that this document will inspire stronger consideration of uncertainty, because the individuals impacted by the near-term second-order effects of an action are not rhetorical points or numbers on a spreadsheet: they're as real and sentient as the target beneficiaries, and we shouldn't give up on the challenge of limiting negative outcomes for them.