Ihor Ivliev

Interests: AI safety

Bio

Thinker

Comments (7)

To the architects/instigators/enablers of the AI acceleration race, a direct question:

Are you prepared to own the extremely probable, catastrophic, systemic collapse of civilization you are now actively engineering/hastening/orchestrating?

Here is the United States, making its zero-sum bid for total dominance brutally clear with a formal doctrine of unconditional acceleration: https://www.whitehouse.gov/articles/2025/07/white-house-unveils-americas-ai-action-plan/

And here is China, pursuing the identical material objective under the strategic camouflage of multilateral cooperation: https://www.gov.cn/yaowen/liebiao/202507/content_7033929.htm

For those who now wish to understand the unforgiving logic of this suicidal contest, my meta-analysis provides a detailed dissection, written before the official Action Plans from the US and China confirmed its conclusions: https://doi.org/10.6084/m9.figshare.29183669.v16

Hello :) Thank you for the thoughtful comment on my old post. I really appreciate you taking the time to engage with it, and you're spot on - it was a high-level, abstract vision.

It’s funny you ask for the "gears-level" design, because I did spend a long time trying to build it out. That effort resulted in a massive (and honestly, monstrously complex and still naive/amateur) paper on the G-CCACS architecture (https://doi.org/10.6084/m9.figshare.28673576.v5).

However, my own perspective has shifted significantly since then. Really.

My current diagnosis, detailed in my latest work "Warped Wetware" (https://doi.org/10.6084/m9.figshare.29183669.v13) and in the recent article "The Engine of Foreclosure" (https://forum.effectivealtruism.org/posts/6be7xQHFREPYJKmyE/the-engine-of-foreclosure), is that the AI control problem is formally intractable. Not because we can't design clever technical architectures, but because the global human system (I call it the "Distributed Human Optimizer") is structurally wired to reject them. The evidence, from the 100:1 (or even 400+ to 1) capability-vs-safety funding gap to the failure of every governance paradigm we've tried, seems conclusive.

This has led me to a stark conclusion: focusing on purely technical solutions like G-CCACS, while intellectually interesting, feels dangerously naive until we confront these underlying systemic failures. The best blueprint in the world is useless if the builders are locked in a race to the bottom.

That's why my work has pivoted entirely to the Doctrine of Material Friction - pragmatic, physical interventions designed to slow the system down rather than "solve" alignment. Your point about "memetic stickiness" was incredibly sharp, and it's even more of a challenge for this grimmer diagnosis.

Anyway, thanks again for the great feedback. It's exactly the kind of clear-eyed engagement this field needs.

Many thanks. I agree that's a critical point - these "social instinct" failure modes are a subtle and potent threat. The VSPE framework sounds like a fascinating and important line of research.

To be fully transparent, I've just wrapped the intensive project I recently published and am now in a period focused entirely on rest and recovery.

I truly appreciate your generous offer to compare notes. It's the kind of collaboration the field needs.

Thanks again for adding such a valuable perspective to the discussion. I wish you all the best in this noble and critically important direction!

Thank you, I really appreciate it. You're absolutely right - some of the most concerning behaviors emerge not through visible defection but through socially shaped reward optimization. Subtle patterns like sycophancy or goal obfuscation often surface before more obvious misalignment. Grateful you raised this! It's a very important, even critical, angle - especially now, as system capabilities are advancing faster than oversight mechanisms can realistically keep up.