Volunteer across multiple animal welfare orgs, freelance translator, and enthusiastic donor. Reasonably clueless about which interventions are impartially good. Past experience includes launching an animal ethics university group, coordinating small campaigns in animal advocacy, and designing automated workflows in that context.
"We have enormous opportunity to reduce suffering on behalf of sentient creatures [...], but even if we try our hardest, the future will still look very bleak." - Brian Tomasik
Happy to give feedback on projects, or get on a call about anything to give advice and share contacts.
Clara said she would appreciate "Any negative utilitarian or person knowledgeable about negative utilitarianism commenting on why NU doesn't necessarily recommend extinction."
Simon Knutsson's paper "The World Destruction Argument" makes a case that I'd summarize as "the World Destruction Problem is a problem for consequentialism, not for NU specifically". One of the arguments (simplifying) is that classical utilitarians would also recommend the extinction of all sentient life if they concluded it was resiliently net negative, or if it could be replaced by marginally happier sentient life by painlessly killing off every current sentient being.
For other thoughts on why negative utilitarianism doesn't recommend extinction, see this excerpt from a Center for Reducing Suffering article.
I think this point is potentially significant, but the post is clearly LLM-generated, and as a result most of the paragraphs don't add much beyond the initial point that "there's no Script of Truth and it depends on the person's context". In practice, I have no clear examples of people making wrong choices based on overconfident EA advice. In fact, my experience has been the opposite: people are reluctant to give high-level advice because they think it depends too much on the options available to me, and they couldn't choose on my behalf. Sure, counterexamples could exist, but this post hasn't convinced me of that.
I'd have found the post much more valuable if it had a few anonymized examples, rather than LLM-generated text to complete the main post.
Strongly agree with this post! EA Connect has been more useful to me than the two in-person EA conferences I've attended, and I estimate I've been far more useful (as a mentor, but not only) at EA Connect than at these other conferences.
No awesome afterparty
I wonder if, say, links to post-conference games of Gartic Phone and the like could be a low-cost way to recreate that moment where everyone winds down (especially since the online conference feels more serious and formal), for those who would be interested. (Not confident it's a good idea, just throwing it out there.)
(20% Wild Animal Welfare)
Nice poll, but tough call! With the little we know, the effects of interventions on wild animals seem likely to outweigh those on farmed animals. However, we do not have a clear notion of how current wild animal interventions (even field-building and research) will affect wild animals in the long run (though this is also true of interventions that don't aim to help wild animals).
I do not think a "robust" and "safe" pick in animal welfare exists yet (that we're aware of): under the current state of my uncertainties, I'm voting with my dollars on invertebrate welfare interventions (though those are still probably outweighed by effects on wild invertebrates). Though I'm gradually seeing the appeal of funding more research (especially on small wild animals).
Slightly in favor of wild animal welfare here, because it seems likely that if we gain enough knowledge to find a robust intervention in animal welfare, it will target wild animals directly or indirectly (since they're probably the dominant group of moral patients).
Thank you so much for pushing back on my simplistic comment! I agree that my framing was misleading (I commented without even re-reading what I had said). Thanks for highlighting crucial considerations about counterintuitive conclusions in NU and CU.
Your comment makes me realize that an objection based on utopian situations makes sense (and I've found it reasonable in the past as a crux against NU). I guess my frustration with the World Destruction Argument against NU, as EAs often bring it up, is that it criticizes NU for recommending extinction in our world (which contains suffering), even though CU has a decent chance of recommending extinction in our world too (depending on whether wild invertebrates are living net-negative lives!).[1]
Though again, if astronomically good futures are more likely than astronomically bad ones, animal suffering is easily outweighed under CU but not under NU (though CUs could change their minds on the empirical question and recommend extinction). But my impression is that this isn't what people (among non-philosophers, which includes me) are objecting to. They mostly seem to find deliberate extinction repugnant (which is something I think many views can agree upon).