Naive question: I see many EAs talking about non-extinction X-risks such as the alleged dangers of 'value lock-in' or the imposition of a 'global permanent totalitarian state'. Most recently I came across Will MacAskill mentioning these as plausible risks in his new book 'What We Owe the Future'.
As an evolutionary psychologist, I'm deeply puzzled by the idea that any biologically reproducing species could ever be subject to a 'permanent' socio-cultural condition of the sort that's posited. On an evolutionary time scale, 'permanent' doesn't just mean 'a few centuries of oppression'. It would mean 'zero change in the biological foundations of the species being oppressed -- including no increased ability to resist or subvert oppression -- across tens of thousands of generations'.
As long as humans or post-humans are reproducing in any way that involves mutation, recombination, and selection (either with standard DNA or post-DNA genome-analogs such as digital recipes for AGIs), Darwinian evolution will churn along. Any traits that yield reproductive advantages in the 'global totalitarian state' will spread, changing the gene pool, and changing the psychology that the 'global totalitarians' would need to manage.
Unless the global totalitarians are artificial entities such as AIs that are somehow immune to any significant evolution or learning in their own right, the elites running the totalitarian state would also be subject to biological evolution. Their heritable values, preferences, and priorities would gradually drift and shift over thousands of generations. Any given dictator might want their family dynasty to retain power forever. But Mendelian segregation, bad mate choices, regression to the mean, and genetic drift almost always disrupt those grand plans within a few generations.
So, can someone please point me to any readings that outline a plausible way whereby humans could be subject to any kind of 'global totalitarian oppressive system' across a time scale of more than a hundred generations?
Jackson -- thanks for your comment.
I agree that historically, new technologies often allow new forms of political control (but also new forms of political resistance and rebellion). We're seeing this with social media and algorithmic 'bubble formation' that increases polarization.
Your last paragraph identifies what I think is the latent fear among many EAs: when they talk about a 'permanent global totalitarian state', I think they're often implicitly extrapolating from the current Chinese state, and imagining it augmented by much stronger AI. Trouble is, I think these fears are often (but not always) based on some pretty serious misunderstandings of China, and its history, government, economy, culture, and ethos.
By most objective standards, I think the CCP over the last 100 years has actually been more adaptable, dynamic, and flexible in its approach to policy changes than most 'liberal democracies' have been -- with diverse approaches ranging from Mao's centralized economic planning to Mao's Cultural Revolution to Deng's economic liberalization to Hu's humble meritocracy to Xi's re-assertive nationalism. Decade by decade, China's policies change quite dramatically, even as the CCP remains in power. By contrast, Western 'liberal democracies' tend to be run by the same deep-state bureaucrats and legislatively gridlocked duopolies that rarely deviate from a post-WWII centrist status quo. Anyway, I think EAs interested in whether 'China + AI' provides a credible model for a 'permanent totalitarian state' could often benefit from learning a bit more about Chinese history over the last century. (Recommended podcasts: 'China Talk' and 'China History Podcast').