Second-year undergraduate in International Relations. Interested in AI governance, tech policy, and the structural incentives that shape how governments and AI companies negotiate over control of frontier systems.
I focus on the gap between declared positions and operational reality: what institutions say they want vs. what their incentive structures actually produce.
Building toward a career in AI governance and policy.
Full name: Viacheslav Kolodiazhnyi.
I'm looking for:
Feedback on my analysis – I'm early in my career and learning in public.
Introductions to people working in AI governance, especially at policy institutes and fellowship programmes.
Pointers to opportunities in the field that are open to non-US/non-EU applicants.
I can offer:
Close reading and detailed feedback on drafts – especially on structure, argumentation, and whether claims are supported by evidence.
Historical and comparative context for AI governance questions – I have a deep background in European political history, institutional design, and how states have historically managed relationships with critical suppliers.
Russian-language source access – I can read and translate Russian-language policy documents, media coverage, and academic work that may not be accessible to English-speaking researchers.
Perspective from outside the usual jurisdictions – I'm based in Central Asia and have lived in Russia, a vantage point that sometimes surfaces assumptions invisible to researchers in the field's usual centres.
Hi everyone! I'm Slava, 19 years old, a second-year undergraduate student in International Relations. My interests include world history, international relations, literature and philosophy. In 2024, I placed 6th in the final round of the Republican History Olympiad in Kyrgyzstan.
Over the past few months I've been seriously exploring the intersection of AI with my core interests. To deepen my understanding, I completed the Elements of AI and Ethics of AI programmes from the University of Helsinki, and recently published my first post on this forum – on the institutional conflict between Anthropic and the Pentagon.
What concerns me is the broader turbulence of recent years – political radicalisation, rising international tensions, economic instability and general uncertainty. The rise of AI in this turbulent moment is a double-edged sword: enormous opportunities on one side, serious risks on the other. I'm interested in researching and mitigating those risks, particularly in sensitive domains like defence and public discourse.
The distinction you draw here seems important and underexplored. AI is genuinely valuable in that it reduces the cost of routine work, freeing time and energy for new ideas. But when it comes to routine verification work – the kind you describe, whose output becomes the foundation for further epistemic conclusions – automation carries a specific risk.
The tasks that get delegated first are the ones people least want to do. Searching for sources, tracing chains of reasoning, cross-checking claims – these are the classic examples. It is psychologically easiest to hand off what you dislike, especially when the tool handles it faster and more smoothly. But this same monotonous routine is what builds an intuitive understanding of how these processes work from the inside – what looks suspicious, where errors tend to hide, when a source is too convenient to be genuine.
When a person stops doing this work, they lose not just the skill itself but the ability to validate what the algorithm produces. And precisely because the task is disliked, there is little motivation to keep spot-checking the outputs. The value of automation and its vulnerability sit in the same place: the more readily a person delegates a task, the less able they are to notice when the algorithm gets it wrong.
Do you think maintaining deliberate checking habits is enough to offset this, or is the risk more structural?