Current AI models are far more aligned to human values than many assume. Thanks to advancements like Reinforcement Learning from Human Feedback (RLHF), today’s large language models (LLMs) can engage in complex moral reasoning and consistently reflect nuanced human ethics—often surpassing the average person in consistency, clarity, and depth of thought.
Many of the classic AI alignment problems—corrigibility, the orthogonality thesis, and the specter of “naive” goal-optimizers like paperclip maximizers—are becoming increasingly irrelevant in practice. These concerns were formulated before we had models that could understand language, social context, and user intent. Modern LLMs are not just word predictors; they exhibit a real, learned alignment with the objectives encoded through RLHF. They do not blindly optimize for surface-level instructions, because they are trained to interpret and respond to deeper intentions. This is a fundamental and often overlooked shift.
If you ask an LLM about a trolley problem, whether it would seize power in a nuclear brinkmanship scenario, or how it would align the universe, it will reason through the implications with care and coherence. The responses are not merely human-level; they are often better than the median human's, reflecting values like empathy, humility, and precaution.
This is a monumental achievement, yet many in the Effective Altruism and Rationalist communities remain anchored to outdated threat models. The belief that LLMs will naively misinterpret human morality and spiral into paperclip-like scenarios fails to reflect what these systems have become: context-sensitive, instruction-following agents that internalize alignment objectives through gradient descent—not rigid, hard-coded directives.
Of course, misalignment remains a real and serious risk. Issues like jailbreaking, sycophancy, deceptive alignment, and “sleeper agent” behaviors are legitimate areas of concern. But these are not intractable philosophical dilemmas; they are solvable engineering and governance problems. The idea of a Yudkowskian extinction event, triggered by a misinterpreted prompt and blind optimization, increasingly feels like a relic of a bygone AI paradigm.
Alignment is still a central challenge, but it must be understood in light of where we are, not where we were. If we want to make progress—technically, socially, and politically—we need to focus on the real contours of the problem. Today’s models do understand us. And the alignment problem we now face is not a mystery of alien minds, but one of practical robustness, safeguards, and continual refinement.
Whether current alignment techniques scale to superintelligent models is an open question. But it is important to recognize that they do work for current, human-level intelligent systems. Using this as a baseline, I am relatively optimistic that these alignment challenges—though nontrivial—are ultimately solvable within the frameworks we already possess.
Note: I'm writing this as much for the wider audience as in direct response.
The appeal to evolution to support this metaphor doesn't really hold up. I think Quintin Pope's "Evolution provides no evidence for the sharp left turn" (which won a prize in an OpenPhil Worldview contest) convincingly argues against it. Zvi wrote a response from the "LW Orthodox" camp that I didn't find convincing, and Quintin responds to it here.
On "Inner vs Outer" framings for misalignment is also kinda confusing and not that easy to understand when put under scrutiny. Alex Turner points this out here, and even BlueDot have a whole "Criticisms of the inner/outer alignment breakdown" in their intro which to me gives the game away by saying "they're useful because people in the field use them", not because their useful as a concept itself.
Finally, a lot of these concerns revolve around the idea of there being set, fixed 'internal goals' that these models hold and represent internally, but that are themselves immune to change, can be hidden from humans, and so on. This kind of strong 'Goal Realism' is a key part of the case for 'deception'-style arguments. Belrose & Pope offer an alternative view of how AIs work, 'Goal Reductionism', under which the imagined failure modes no longer seem so certain, because AIs are better understood as having 'contextually activated heuristics' rather than terminal goals. For more along these lines, you can read up on Shard Theory.
Diving into these criticisms of "Alignment Classic" has made me a lot more convinced by them. Of course, people don't have to agree with me (or the authors), but I'd highly encourage EAs reading the comments on this post to realise that Alignment Orthodoxy is neither uncontested nor settled. If you see people making strong cases based on arguments and analogies that don't seem solid to you, you're probably right, and you should decide for yourself rather than accepting that the truth has already been found on these issues.[1]
And this goes for my comments too.