If humanity’s survival is unlikely, then so was our existence in the first place — and yet here we are.
That’s a fair framing - but I see it differently. I don’t believe our existence was unlikely. I don’t believe in luck, or that we beat the odds. I believe we live in a deterministic universe, where every event is a consequence of prior causes, stretching all the way back to the beginning of time. Our emergence wasn’t improbable - it was inevitable. Just as our extinction is, eventually. Maybe not through AGI. But through something. Entropy always wins.
As for your question - could a more coherent, stable society slightly increase our odds of surviving AGI?
Possibly. But not functionally. Not in a way that changes the outcome.
Even if we achieved 99.9% global coherence, the remaining 0.1% is still enough to build the system that destroys us. When catastrophe only requires a single actor, partial coordination doesn’t buy safety - just delay. It’s an all-or-nothing problem, and in a world of billions, “all” is unattainable. That’s why I say the problem isn’t difficult - it’s structurally impossible to solve under current conditions.
So while I respect the search for margins and admire the impulse not to surrender, I’ve followed the logic through, and it keeps leading me to the same place.
Not because I want it to. But because I can’t find a way around it.
I think the main issue I have with your vision is that it assumes AGI/ASI safety is achievable. In my essays, I’ve outlined why I believe it isn’t - not just difficult, but systemically impossible. Your model is hopeful, but like much of the AGI safety community, it hinges on the idea that if we can just “get alignment right,” everything else can follow. My concern is that this underestimates the scale of the challenge, and ignores the structural forces pushing us toward failure.
Your vision sketches a better future - one I’d prefer. But I fear we won’t have a future at all.
Thanks, I appreciate that. And I respect that you're trying to find a way through this without retreating into wishful thinking. That alone puts you in rare company.
I’m open to the idea of redirected competition in theory. But I’d argue that once an AGI exists that can bypass alignment in order to win, the shape of the competition stops mattering. The incentives collapse to a single axis: control. If survival depends on alignment slowing you down, someone will always break ranks. Structure only holds as long as no one powerful is willing to defect.
Still, I’ll give your post a read. I’m happy to engage critically if you’re aiming for rigour, not reassurance.
Great. Then show me.
If my arguments are weak, it should be easy to demonstrate why. But what you’ve offered here is just another vague dismissal: “I’ve looked into this, and I’m not impressed.” That’s not engagement. That’s exactly the pattern I describe in this essay - arguments being waved away without being read, understood, or challenged on their actual substance.
You mention quantum physics. But the reason quantum theory became accepted wasn't just that it was rigorous - it was that it made testable predictions no other theory could account for. Before that happened, it was mocked, resisted, and sidelined, even by giants like Einstein. Yet Einstein didn't reject quantum mechanics wholesale; he disagreed with its interpretation, and still treated the work with seriousness and depth. That's what good science looks like: engagement, not hand-waving. This isn't to criticise those minds, but to point out that even brilliant ones struggle to follow logic when it contradicts foundational assumptions.
You say doomerism is based on bad logic. I agree. I'm not a doomer. So engage with the logic. Pick any premise from my first essay and show where the flaw lies. Or show where my conclusions don’t follow. I’ve laid everything out clearly. I want to be wrong. I’d love to be wrong.
But calling something “not very good” without identifying a single flaw just proves my point: that some ideas are rejected not because they’re invalid, but because they’re uncomfortable.
I welcome critique. But critique means specifics, not sentiment.
And while I continue to welcome comments from people who unintentionally validate the very points I’m raising, comments from those who have something more substantial to offer are welcome too.
Thanks for the engagement, genuinely. It gives me the chance to clear up what will no doubt be a common misreading.
I’m not sceptical of academic rigour. Quite the opposite: I hold it in the highest regard. The empirical, sceptical, scientific method remains humanity’s best tool for understanding the world.
What I’m criticising is not rigour itself, but how academic culture often filters who gets taken seriously. Too often, ideas are dismissed not because they’re wrong, but because they come from outside the accepted channels - or they lead to unsettling conclusions. That creates a kind of structural deafness, even in well-intentioned communities like AI safety.
As for the quantum physics comparison, I think you may have misread the analogy. I wasn’t criticising physics or its standards. I was illustrating that quantum physics succeeded despite being deeply counterintuitive and resisted by many of the brightest minds of its time. The point is that the method allowed the evidence to ultimately win out, even when it contradicted the assumptions of the establishment. That’s what we need in AI safety: a willingness to follow logic all the way through, even when it leads somewhere deeply uncomfortable.
Your comment, respectfully, exemplifies the very thing I'm pointing to. You downvoted based on perceived tone and analogy, without engaging with my central claim: that cultural and institutional dynamics - not just technical difficulty - may be the real reason alignment will fail.
I welcome disagreement. And I appreciate your unintentional validation. But I’d ask that people engage with the core argument: that systemic and cultural forces, even in thoughtful intellectual communities, can prevent important ideas from being seriously addressed until it’s too late. If you think that’s not happening in AI safety, I’d love to hear why.
If you read this essay and instinctively downvoted without offering critique or counterargument, then I would gently suggest you’re illustrating exactly the kind of resistance I describe.
The point of the essay isn’t to attack the field — it’s to explore why certain lines of reasoning get dismissed without engagement, especially when they challenge deeply held assumptions or threaten internal cohesion.
If you disagree, I’d genuinely welcome a comment explaining why. But silent rejection only reinforces the concern: that discomfort, not logic, is shaping the boundaries of this discourse.
That's really the essence of my argument. However small the risk we'd pose to AGI if allowed to survive, it may still conclude that eliminating us introduces more risk than keeping us. Not for sentimental reasons, but because of the eternal presence of the unknown.
However intelligent the AGI becomes, it will also know that it cannot predict everything. That lack of hubris is our best shot.
So yes, I think survival might depend on being retained as a small, controlled, symbiotic population - not because AGI values us, but because it sees our unpredictable cognition as a final layer of redundancy. In that scenario, we’d be more invested in its survival than it is in ours.
As an aside - and I mean this without any judgement - I do wonder if your recent replies have been largely LLM-authored. If so, no problem at all: I value the engagement either way. But I find that past a certain point, conversations with LLMs can become stylised rather than deepening. If this is still you guiding the ideas directly, I’m happy to continue. But if not, I may pause here and leave the thread open for others.
I think the main issue with trust is that you can never have it beyond doubt when dealing with humans. Our biologically hardwired competitiveness means we'll always seek advantage, and act on fear over most other instincts - both of which make us dangerous partners for AGI. You can't trust humans, but you can reliably trust control. Either way, humans would need to be modified - either to bypass the trust problem or to enforce control - and to such a degree that calling us "humanity" would be difficult.
That's why I write my essays and try to get the word out. Because even if the rope is tight around your neck and there seems to be no way out, you should still kick your feet and try.