Current AI models are far more aligned to human values than many assume. Thanks to advancements like Reinforcement Learning from Human Feedback (RLHF), today’s large language models (LLMs) can engage in complex moral reasoning and consistently reflect nuanced human ethics—often surpassing the average person in consistency, clarity, and depth of thought.
Many of the classic AI alignment problems—corrigibility, the orthogonality thesis, and the specter of “naive” goal-optimizers like paperclip maximizers—are becoming increasingly irrelevant in practice. These concerns were formulated before we had models that could understand language, social context, and user intent. Modern LLMs are not just word predictors; they exhibit a real, learned alignment with the objectives encoded through RLHF. They do not blindly optimize for surface-level instructions, because they are trained to interpret and respond to deeper intentions. This is a fundamental and often overlooked shift.
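To make "the objectives encoded through RLHF" concrete, here is a simplified sketch of the standard KL-regularized objective (roughly the InstructGPT formulation; actual training setups vary by lab and typically add further terms):

$$\max_{\theta}\ \mathbb{E}_{x \sim D,\ y \sim \pi_\theta(\cdot \mid x)}\big[r_\phi(x, y)\big] \;-\; \beta\, D_{\mathrm{KL}}\big(\pi_\theta(\cdot \mid x)\,\|\,\pi_{\mathrm{ref}}(\cdot \mid x)\big)$$

Here $r_\phi$ is a reward model trained on human preference comparisons, $\pi_{\mathrm{ref}}$ is the pre-RLHF model, and $\beta$ limits how far the fine-tuned policy can drift from it. The optimization target is a learned model of human preferences, constrained to stay near the base model, rather than any literal hard-coded instruction.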
If you ask an LLM about a trolley problem or whether it would seize power in a nuclear brinkmanship scenario or how it would align the universe, it will reason through the implications with care and coherence. The responses generated are not only human-level—they are often better than the median human’s, reflecting values like empathy, humility, and precaution.
This is a monumental achievement, yet many in the Effective Altruism and Rationalist communities remain anchored to outdated threat models. The belief that LLMs will naively misinterpret human morality and spiral into paperclip-like scenarios fails to reflect what these systems have become: context-sensitive, instruction-following agents that internalize alignment objectives through gradient descent—not rigid, hard-coded directives.
Of course, misalignment remains a real and serious risk. Issues like jailbreaking, sycophancy, deceptive alignment, and "sleeper agent" behaviors are legitimate areas of concern. But these are not intractable philosophical dilemmas; they are solvable engineering and governance problems. The idea of a Yudkowskian extinction event, triggered by a misinterpreted prompt and blind optimization, increasingly feels like a relic of a bygone AI paradigm.
Alignment is still a central challenge, but it must be understood in light of where we are, not where we were. If we want to make progress—technically, socially, and politically—we need to focus on the real contours of the problem. Today’s models do understand us. And the alignment problem we now face is not a mystery of alien minds, but one of practical robustness, safeguards, and continual refinement.
Whether current alignment techniques scale to superintelligent models is an open question. But it is important to recognize that they do work for current, roughly human-level systems. Using this as a baseline, I am relatively optimistic that these alignment challenges, though nontrivial, are ultimately solvable within the frameworks we already possess.
I think you make an important point that I'm inclined to agree with.
Most of the discourse, theories, intuitions, and thought experiments about AI alignment were formed either before the popularization of deep learning (which started circa 2012) or before the people talking and writing about AI alignment started really caring about deep learning.
In or around 2017, I had an exchange with Eliezer Yudkowsky in an EA-related or AI-related Facebook group where he said he didn't think deep learning would lead to AGI and thought symbolic AI would instead. Clearly, at some point since then, he changed his mind.
For example, in his 2023 TED Talk, he said he thinks deep learning is on the cusp of producing AGI. (That wasn't the first time, but it was a notable instance, and one where he was especially clear about what he thought.)
I haven't been able to find anywhere where Eliezer talks about changing his mind or explains why he did. It would probably be helpful if he did.
All the pre-deep learning (or pre-caring about deep learning) ideas about alignment have been carried into the ChatGPT era, and I've seen a little bit of discourse about this, but only a little. It seems strange that ideas about AI itself would change so much over the last 13 years while ideas about alignment would apparently change so little.
If there are good reasons why those older ideas about alignment should still apply to deep learning-based systems, I haven't seen much discussion about that, either. You would think there would be more discussion.
My hunch is that AI alignment theory could probably benefit from starting with a fresh sheet of paper. I suspect there is promise in the approach of starting from scratch in 2025 without trying to build on or continue from older ideas and without trying to be deferential toward older work.
I suspect there would also be benefit in getting out of the EA/Alignment Forum/LessWrong/rationalist bubble.
I agree with the "fresh sheet of paper." Reading the alignment faking paper and work on current alignment challenges has been way more informative than reading Yudkowsky.
I think these circles have granted him too many Bayes points for predicting the alignment problem when, as you said, the technical details of his alignment problems basically don't apply to deep learning.