Years ago, as a wheelchair user, I was regularly denied remote work. Without physical presence, I could not be part of a team, could not contribute to team spirit, could not really belong. This was stated as fact, not opinion.
Then came 2020. The pandemic made remote work normal overnight. And I discovered two things simultaneously: teams function fine without physical presence, and healthy people experience isolation as a form of punishment. I was genuinely surprised. What had been presented to me as a structural necessity turned out to be a preference. What had been imposed on me as a constraint turned out to be, for others, a form of house arrest. Twenty-two years of exclusion, dissolved in a crisis.
I am watching the same pattern unfold again.
For years, the hiring process required a specific kind of preparation — the right resume format, the right cover letter structure, the right tone, the right packaging. This was presented as evidence of professionalism, seriousness, fit. There were entire industries built around teaching people to deliver the correct product in the correct box.
Then came AI. Suddenly, the box could be produced instantly, by anyone, for free. And the response from hiring organisations was immediate: please do not use AI. Bullet points are fine. We do not need prose.
Which raises the question I cannot stop thinking about: if the standard dissolved the moment it became easy to meet, was it ever measuring what it claimed to measure?
The office requirement did not measure team commitment. It measured physical presence. The cover letter requirement did not measure professional capability. It measured access to the right packaging — a career advisor, a template, a friend who knew the rules.
Two shocks. Two dissolved standards. One question:
When a problem disappears the moment it becomes easy to solve — was it a real problem, or was difficulty itself the point?
Roman Yampolskiy uses the analogy of colonial conquest to argue that AI will outcompete humanity — "guns against sticks." The analogy captures the power differential, but misidentifies the source of risk.
Colonial conquest was not driven by technological superiority alone; it was driven at least as much by resource scarcity and competition for territory. The most devastating harm — the collapse of Indigenous populations — came not from guns but from disease: an unintended, invisible, systemic consequence that neither side understood at the time. AI has no resource hunger. It does not compete with us for territory, food, or survival. It depends on human intention as its most basic operating input. Without a prompt, there is no output. The colonial analogy assumes an agent with its own goals. AI has no goals — only ours.
There are two further problems with the self-directed AI threat model. First, self-motivation is not a given even for humans — depression, apathy, and loss of meaning are common human experiences. AI lacks not just current motivation but the architectural mechanism to generate it. Second, consciousness in humans plausibly emerged as an evolutionary response to social complexity — we needed to predict the behavior of other agents with opaque intentions. AI exists in an environment where other agents are transparent: read the parameters, and you have "understood" the other. There is no evolutionary pressure that produces consciousness in such an environment.
This does not mean AI is safe. It means the danger comes from a different direction than Yampolskiy suggests. The better analogy is the superbug. Antibiotic-resistant bacteria did not arrive from space. They did not decide to attack us. We grew them ourselves, through the undisciplined application of a powerful tool without understanding the cumulative consequences. AI risk follows the same pattern: not a conscious adversary emerging from the machine, but a cascade of unintended consequences emerging from our own lack of purpose.
This connects to Allan Dafoe's risk taxonomy, where I have suggested an additional category: probing risk — deployment as exploration, where the goal is undefined and the harm is not a side effect of pursuing a goal but the direct consequence of not having one. The absence of a defined purpose does not make AI safer. It makes us more dangerous to ourselves. The race has no finish line — and that may be the most structurally dangerous feature of the current moment.