Great. Another crucial consideration I missed. I was convinced that working on reducing the existential risk for humanity should be a global priority.
Upholding our potential and ensuring that we can create a truly just future seems so wonderful.
Well, recently I was introduced to the idea that this might actually not be the case.
The argument is rooted in suffering-focused ethics and the concept of complex cluelessness. If we step back and think critically, what predicts suffering more than the mere existence of sentient beings—humans in particular? Our history is littered with pain and exploitation: factory farming, systemic injustices, and wars, to name just a few examples. Even with our best intentions, humanity has perpetuated vast amounts of suffering.
So here’s the kicker: what if reducing existential risks isn’t inherently good? What if keeping humanity alive and flourishing actually risks spreading suffering further and faster—through advanced technologies, colonization of space, or systems we can’t yet foresee? And what if our very efforts to safeguard the future have unintended consequences that exacerbate suffering in ways we can't predict?
I was also struck by the critique of the “time of perils” assumption. The idea that now is a uniquely critical juncture in history, where we can reduce existential risks significantly and set humanity on a stable trajectory, sounds compelling. But the evidence supporting this claim is shaky at best. Why should we believe that reducing risks now will have lasting, positive effects over millennia—or even that we can reduce these risks at all, given the vast uncertainties?
This isn’t to say existential risk reduction is definitively bad—just that our confidence in it being good might be misplaced. A truly suffering-focused view might lean toward seeing existential risk reduction as neutral at best, and possibly harmful at worst.
It’s humbling, honestly. And frustrating. Because I want to believe that by focusing on existential risks, we’re steering humanity toward a better future. But the more I dig, the more I realize how little we truly understand about the long-term consequences of our actions.
So, what now? I’m not sure.
I am sick of missing crucial considerations. All I want to do is to make a positive impact. But no. Radical uncertainty it is.
I know that this will potentially cost me hundreds of hours to fully think through. It is going to cost a lot of energy if I pursue this.
Right now I am just considering pursuing earning to give instead and donating a large chunk of my money across different worldviews and cause areas.
Would love to get your thoughts.
Hi there :) I very much sense that a conversation with me last weekend at EAGxVirtual is causally connected to this post, so I thought I'd share some quick thoughts!
First, I apologize if our conversation led you to feel more uncertain about your career in a way that negatively affected your well-being. I know how subjectively "annoying" it can be to question your priorities.
Then, I think your post raises three different potential problems with reducing x-risks (all three of which I know we've talked about) worth disentangling:
1. You mention suffering-focused ethics and reasons to believe these advise against x-risk reduction.
2. You also mention the problem of cluelessness, which I think is worth dissociating. I think motivations for cluelessness vis-a-vis the sign of x-risk reduction are very much orthogonal to suffering-focused ethics. I don't think someone who rejects suffering-focused ethics should be less clueless. In fact, one can argue that they should be more agnostic about this while those endorsing suffering-focused ethics might have good reasons to at least weakly believe x-risk reduction hurts their values, for the "more beings -> more suffering" reason you mention. (I'm however quite uncertain about this and sympathetic to the idea that those endorsing suffering-focused ethics should maybe be just as clueless.)
3. Finally, objections to the 'time of perils' hypothesis can also be reasons to doubt the value of x-risk reduction (Thorstad 2023), but for very different reasons. It's purely a question of which is more "impactable" between x-risks (and maybe other longtermist causes) and shorter-term causes, rather than a question of whether x-risk reduction does more good than harm to begin with (as with 1 and 2).
Discussions regarding the questions raised by these three points seem healthy, indeed.
Hey Jim,
Thanks for chiming in, and you're spot on: our chat at EAGxVirtual definitely got the gears turning! No worries at all about the existential crisis; I see it as part of the journey (and I actively requested it) :) I actually think these moments of doubt are important to progress in my mission in EA (similarly laid out by JWS in his post). I usually don't do this, but the post was a good way for me to vent and helped me process some of the ideas + get feedback.
You've broken down my jumbled thoughts really well. It is helpful to see the three points la...