Naive question: I see many EAs talking about non-extinction X-risks such as the alleged dangers of 'value lock-in' or the imposition of a 'global permanent totalitarian state'. Most recently I came across Will MacAskill mentioning these as plausible risks in the new book 'What We Owe the Future'.
As an evolutionary psychologist, I'm deeply puzzled by the idea that any biologically reproducing species could ever be subject to a 'permanent' socio-cultural condition of the sort that's posited. On an evolutionary time scale, 'permanent' doesn't just mean 'a few centuries of oppression'. It would mean 'zero change in the biological foundations of the species being oppressed -- including no increased ability to resist or subvert oppression -- across tens of thousands of generations'.
As long as humans or post-humans are reproducing in any way that involves mutation, recombination, and selection (either with standard DNA or post-DNA genome-analogs such as digital recipes for AGIs), Darwinian evolution will churn along. Any traits that yield reproductive advantages in the 'global totalitarian state' will spread, changing the gene pool, and changing the psychology that the 'global totalitarians' would need to manage.
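To make the 'will spread' claim concrete, here is a minimal back-of-the-envelope sketch (my own illustration, not from the original post) of deterministic haploid selection. The starting frequency p0 and the selection coefficient s are hypothetical choices for illustration, not empirical estimates.

```python
# A minimal sketch of deterministic haploid selection: a rare allele conferring
# a small reproductive advantage s under the totalitarian regime spreads through
# the population within a few hundred generations. Parameters are hypothetical.

def allele_frequency_trajectory(p0=0.001, s=0.05, generations=300):
    """Return the allele frequency in each generation under haploid selection.

    p0: starting frequency of the advantageous 'resistance' allele (hypothetical)
    s:  selection coefficient, i.e. relative reproductive advantage (hypothetical)
    """
    freqs = [p0]
    p = p0
    for _ in range(generations):
        # Standard recurrence: carriers have fitness 1 + s, non-carriers fitness 1.
        p = p * (1 + s) / (1 + p * s)
        freqs.append(p)
    return freqs

if __name__ == "__main__":
    traj = allele_frequency_trajectory()
    for gen in (0, 50, 100, 150, 200, 250, 300):
        print(f"generation {gen:3d}: frequency = {traj[gen]:.3f}")
```

With these illustrative numbers, the allele passes 50% frequency within roughly 150 generations and exceeds 99% well before generation 300, a timescale far shorter than the 'tens of thousands of generations' the permanence claim requires.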
Unless the global totalitarians are artificial entities such as AIs that are somehow immune to any significant evolution or learning in their own right, the elites running the totalitarian state would also be subject to biological evolution. Their heritable values, preferences, and priorities would gradually drift and shift over thousands of generations. Any given dictator might want their family dynasty to retain power forever. But Mendelian segregation (the random reshuffling of alleles each generation), bad mate choices, regression to the mean, and genetic drift almost always disrupt those grand plans within a few generations.
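As a rough illustration of how quickly regression to the mean erodes a founder's trait advantage, here is a toy sketch under a simple additive model (the breeder's equation applied to a single lineage). The heritability h2 = 0.5 and the assumption that each heir marries a roughly average partner are my own illustrative assumptions, not claims from the post.

```python
# A minimal sketch of regression to the mean under a simple additive model:
# E[offspring deviation] = h^2 * midparent deviation.
# Assumes each heir's partner sits near the population average (hypothetical).

def dynasty_expected_deviation(founder_sd=3.0, h2=0.5, generations=5):
    """Expected trait deviation (in SD units) of each generation's heir."""
    d = founder_sd
    trajectory = [d]
    for _ in range(generations):
        midparent = d / 2          # heir mates with an average partner
        d = h2 * midparent         # breeder's equation applied to one lineage
        trajectory.append(d)
    return trajectory

if __name__ == "__main__":
    for gen, dev in enumerate(dynasty_expected_deviation()):
        print(f"generation {gen}: expected deviation ~ {dev:.2f} SD")
```

On those assumptions, a founder three standard deviations above the population mean has heirs expected to sit within about a twentieth of a standard deviation of average by the third generation.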
So, can someone please point me to any readings that outline a plausible way whereby humans could be subject to any kind of 'global totalitarian oppressive system' across a time scale of more than a hundred generations?
My rough sense of the argument is "AI is immune to all evolutionary mechanisms, so it can stay the same forever, and therefore an AI-governed totalitarian state can be permanent."
AI domination is not the only scenario the lock-in argument describes, though: it also covers human domination aided by AI. In that scenario, your argument about drift in the elite class makes sense.
Maybe. But it seems like we have to pick one: either
(1) Powerful AI tries to impose global permanent totalitarian oppression based on its own stable, locked-in values, preferences, and priorities... which would make it static and brittle, and a sitting duck for coevolution by any beings it's exploiting,
or
(2) Powerful AI tries to impose oppression based on its own nimble, adaptive, changeable values, preferences, and priorities... which could coevolve faster than any beings it's exploiting, but which would mean it's no longer 'permanent' in terms of the goals and nature of its totalitarian oppression.