Hi, I’m Uladzislau. I’m a generalist in a full‑time career transition toward impact, aiming to put my most productive hours and real‑world org‑building skills toward important, neglected, and tractable problems by joining or starting a high‑impact organization.
Right now I’m especially interested in field‑building for AI safety (including better pipelines for generalist/ops talent) and in humanity’s resilience to AI and other global catastrophic risks, with a focus on our epistemic and cognitive readiness for fast change. I’m also open to other high‑impact causes where there’s good evidence for stronger counterfactual impact and personal fit.
I’ve spent 10+ years building evidence‑driven operations and helping small businesses grow into international companies during crises—spotting bottlenecks, owning solutions, and setting up durable functions in finance, project management, PR, and more.
I’ve completed, and am currently involved in, EA‑related 1:1 advisory and accelerator programs; I’m applying for fellowships and looking for skilled volunteering and trial projects where I can test my fit and start contributing to field‑ and org‑building work.
I’m curious to read more about the EA community’s current takes on humanity’s epistemic resilience as AI use grows. In other words, I’m wondering: what are the risks that our capacity for curiosity, agency, critical thinking, sourcing and vetting information, and evaluative decision‑making might deteriorate as AI usage increases? How big, tractable, and neglected are these risks, especially as AI systems may reduce our incentives to develop or use these skills?
My intuition is that this could create problems even with aligned AI and without direct misuse: we could voluntarily disempower ourselves through sheer laziness or atrophied skills. The risk could be aggravated if, following the “Intelligence Curse” logic, the “powerful actors” see no reason to keep humans epistemically capable. It could also threaten AI alignment itself if our capacity to make informed decisions about AI governance diminishes.
I’m still learning the EA ways, and I hope that in time I’ll be able to evaluate for myself whether this is a valid issue or just doomsaying. However, if for AI to go well we need both AI aligned with humans and humans prepared for AI, my impression is that current EA efforts lean toward the former much more than the latter. Is that impression sensible?
I’m far from claiming conclusive evidence, but a few observations fuel this subjective impression. They come from reflecting on the information bubble I’m building around myself as I delve into effective altruism.
For example, while searching for skilled volunteering opportunities, I reviewed 20 AI orgs through EA‑related opportunity boards (EA, 80,000 Hours, ProbablyGood, AISafety, BlueDot Impact, Consultants for Impact). I tried to be impartial, though if I had any bias, it was toward preferring work on epistemic resilience. Of these organizations, I found 4 that tackle the issue more or less explicitly, focusing on the human side, compared with 16 that seem to mainly address the AI side.
Also, following 80,000 Hours problem profiles and AI articles, the BlueDot Impact Future of AI course, the EA Forum digest, and several AI newsletters over the past 1–2 months, supplemented with some quick googling, I found 5 more or less explicit mentions of preparing humans for AI. While I didn’t count precisely, the proportion of articles focusing on AI‑side problems (e.g., compute, AI rights, alignment) seemed subjectively much higher. Two of those 5 specifically tackle intentional misuse; the other 3 address more general changes in cognitive patterns, not limited to malevolent use, e.g., Michael Gerlich’s 2025 study “AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking”. I’m asking about the latter 3: broader implications for our thinking where bad intent isn’t the key risk factor.
Does the EA community have a view on our readiness to use AI without degrading our own capacities? Is my impression that the community leans more toward the AI side of the issue than the human side sensible? Is this a problem worth exploring further? Are there any drafts on the topic waiting to be published?