[...] whether it would be good or bad for everyone to die
I'm sorry for not engaging with the rest of your comment (I'm not very knowledgeable on questions of cluelessness), but this is something I sometimes hear in X-risk discussions and find a bit confusing. Depending on which animals are sentient, it's likely that every few weeks, the vast majority of the world's individuals die prematurely, often in painful ways (being eaten alive or starving). To my understanding, the case EA makes against X-risk does not rest on the badness of death for the individuals whose lives would be somewhat shortened - it would not seem compelling on that basis, especially when aiming to take into consideration the welfare / interests of most individuals on earth. I don't think this is a complex philosophical point or some extreme skepticism: I'm just superficially observing that the situation of "everyone dies prematurely"[1] seems very close to what we already have, so it doesn't seem that obvious that this is what makes X-risks intuitively bad.
(To be clear, I'm not saying "animals die so X-risk is good"; my point is simply that I don't agree that the fact that X-risks cause death is what makes them exceptionally bad, and (though I'm much less sure about this) to my understanding, that is not what initially motivated EAs to care about X-risks (as opposed to the possibility of creating a flourishing future, or other considerations I know less well)).
Note that I assumed "prematurely" was implied when you said "good or bad for everyone to die". Of course, if we think that it's bad in general that individuals will die, no matter whether they die at a very young age or not, the case for X-risks being exceptionally bad seems weaker.
My bad, I wasn't very clear when I used the term "counterargument", and "nuance" or something else might have fit better. It doesn't argue against the fact that without humans, there won't be any species concerned with moral issues; rather, it makes the case that humans are potentially so immoral that their presence might make the future worse than one with no humans. That is indeed not really a "counterargument" to the idea that we'd need humans to fix moral issues, but it does push back against the claim that this makes the future more likely to be positive than not (since he argues that humans may have very bad moral values, and thus bring about a bad future).
If you are interested, Magnus Vinding outlines a few counterarguments to this idea in his article about Pause AI (though of course, he's far from alone in having argued this - it's just the first post that comes to mind).
I have a question, and then a consideration that motivates it, which is also framed as a question that you can answer if you like.
If an existential catastrophe occurs, how likely is it to wipe out all animal sentience on earth?
I've already asked that question here (and also to some acquaintances working in AI Safety), but the answers have differed quite a lot - it seems we're quite far from a consensus on this, so it would be interesting to see perspectives from the varied voices taking part in this symposium.
Less important question, but one that may clarify what motivates my main question: if you believe that a substantial part of X-risk scenarios entail animal sentience being left behind, do you then think that estimating the current and possible future welfare of wild animals is an important factor in evaluating the value of both existential risk reduction and interventions aimed at influencing the future? A few days ago, I was planning to write a post on invertebrate sentience as a possible crucial consideration when evaluating the value and disvalue of X-risk scenarios, but then thought that, if this factor is rarely brought up, it could be that I am personally uninformed about the reasons why the experiences of invertebrates (if they are sentient) might not actually matter that much in future trajectories (aside from the possibility that they will all go extinct soon, which is why this question hinges on the prior belief that sentient animals are likely to continue existing on earth for a long time). There are probably different reasons to agree (or disagree) with this, and I'd be happy to hear yours in short, though it's not as important to me as my first question. Thank you for doing this!
This is a difficult one, and both my thoughts and my justifications (especially the few sources I cite) are very incomplete.
It seems to me for now that existential risk reduction is likely to be negative, as both human and AI-controlled futures could contain orders of magnitude more suffering than the current world (and technological developments could also enable more intense suffering, whether in humans or in digital minds). The most salient ethical problems with the extinction of earth-originating intelligent life seem to be the likelihood of biological suffering continuing on earth for millions of years (though it's not clear to me whether it would be more or less intense without intelligent life on earth), and the possibility of space (and eventually earth) being colonized by aliens (though whether their values would be better or worse remains an open question in my view).
Another point (which I'm not certain about how to weigh in my considerations) is that certain extinction events could massively reduce suffering on earth, by preventing digital sentience or even by causing the end of biological sentient life (this seems unlikely, and I've asked here how likely or unlikely EAs thought this was).
However, I am very uncertain of the tractability of improving future outcomes, especially considering recent posts by researchers at the Center on Long-Term Risk, or this one by a former researcher there, highlighting how uncertain it is that we are well-placed to improve the future. Nonetheless, I think that efforts made to improve the future, like the work of the Center for Reducing Suffering, the Center on Long-Term Risk, or the Sentience Institute, advocate for important values and could have some positive flow-through effects in the medium term (though I don't necessarily think that this robustly improves the longer-term future). I will note, however, that I am biased, since work related to the Center for Reducing Suffering was the primary reason I got into EA.
I am very open to changing my mind on this, but for now I'm under 50% agree, in short because of the considerations above.
Lots of uncertainties. I expect to have moved my cursor before the end of the week!
Very interesting post, as is often the case with you - insightful and pragmatic. However, I feel that a closer investigation of charities that effectively help large herbivores would be worthwhile. It's plausible that broader conservationist initiatives, which have only part of their focus on wild herbivores, could still have a larger effect than smaller charities that seem to work mostly at the individual level. In any case, I think it's likely that you're right, and if you are, it would be very interesting to see where donations are most likely to effectively increase the population of large herbivores. Do you currently have any idea of the potential effectiveness of those organizations?
I appreciated this post - I find it good to see arguments related to wild animal ethics developed in a framework that isn't strictly consequentialist. I was a bit surprised by the references to creating new ecosystems on other planets, as that seems to be a quite different matter and hadn't really been introduced in the post - but maybe your original writeup contained earlier references to this, which would make the reference less surprising?
Yes, I agree with that! This is what I consider to be the core concern regarding X-risk. Therefore, instead of framing it as "whether it would be good or bad for everyone to die," the statement "whether it would be good or bad for no future people to come into existence" seems more accurate, as it addresses what is likely the crux of the issue. This latter framing makes it much more reasonable to hold some degree of agnosticism on the question. Moreover, I think everyone maintains some minor uncertainties about this - even those most convinced of the importance of reducing extinction risk often remind us of the possibility of "futures worse than extinction." This clarification isn't intended to draw any definitive conclusion, just to highlight that being agnostic on this specific question isn't as counter-intuitive as the initial statement in your top comment might have suggested (though, as Jim noted, the post wasn't specifically arguing that we should be agnostic on that point either).
I hope I didn't come across as excessively nitpicky. I was motivated to write by the impression that in X-risk discourse, there is sometimes (accidental) equivocation between the badness of our deaths and the badness of the non-existence of future beings. I sympathize with this: given the short timelines, I think many of us are concerned about X-risks for both reasons, and so it's understandable that both get discussed (and this isn't unique to X-risks, of course). I hope you have a nice day of existence, Richard Y. Chappell, I really appreciate your blog!