Naive question: I see many EAs talking about non-extinction X-risks such as the alleged dangers of 'value lock-in' or the imposition of a 'global permanent totalitarian state'. Most recently I came across Will MacAskill mentioning these as plausible risks in the new book 'What We Owe the Future'.
As an evolutionary psychologist, I'm deeply puzzled by the idea that any biologically reproducing species could ever be subject to a 'permanent' socio-cultural condition of the sort that's posited. On an evolutionary time scale, 'permanent' doesn't just mean 'a few centuries of oppression'. It would mean 'zero change in the biological foundations of the species being oppressed -- including no increased ability to resist or subvert oppression -- across tens of thousands of generations'.
As long as humans or post-humans are reproducing in any way that involves mutation, recombination, and selection (whether with standard DNA or with post-DNA genome analogs such as digital recipes for AGIs), Darwinian evolution will churn along. Any traits that yield reproductive advantages in the 'global totalitarian state' will spread, changing the gene pool and changing the psychology that the 'global totalitarians' would need to manage.
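To put a rough number on 'churning along', here's a minimal sketch using the textbook deterministic haploid selection model (the selection coefficient and starting frequency below are illustrative assumptions, not empirical estimates). Even a tiny reproductive edge for, say, an 'oppression-resistant' trait reaches majority frequency well inside the time scales that 'permanent' would have to cover:

```python
# Minimal sketch: deterministic haploid selection model.
# Recursion: p' = p(1+s) / (1 + p*s), where s is the relative
# reproductive advantage of the trait. All parameter values
# below are illustrative assumptions, not estimates.

def generations_to_majority(p0, s, target=0.5):
    """Generations until the trait's frequency reaches `target`."""
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

# A 0.1% reproductive edge, starting from 1 carrier in 100,000:
print(generations_to_majority(p0=1e-5, s=0.001))  # ~11,500 generations
```

So under these made-up numbers, a selective advantage far too small for any regime to detect or police would still remake the population's psychology on exactly the horizon 'permanent' is supposed to span.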
Unless the global totalitarians are artificial entities such as AIs that are somehow immune to any significant evolution or learning in their own right, the elites running the totalitarian state would also be subject to biological evolution. Their heritable values, preferences, and priorities would gradually drift and shift over thousands of generations. Any given dictator might want their family dynasty to retain power forever. But Mendelian segregation, bad mate choices, regression to the mean, and genetic drift almost always disrupt those grand plans within a few generations.
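To illustrate how fast regression to the mean works against dynasties, here's a back-of-the-envelope sketch built on the standard parent-offspring regression from the breeder's equation (the heritability and the founder's trait advantage are illustrative assumptions):

```python
# Minimal sketch: regression to the mean via the breeder's equation.
# If each generation mates with population-average partners, the
# expected midparent deviation halves, and offspring regress toward
# the mean by the narrow-sense heritability h2.
# All parameter values are illustrative assumptions.

def dynasty_deviation(founder_sd, h2, generations):
    """Expected trait deviation (in SD units) after n generations."""
    d = founder_sd
    for _ in range(generations):
        d = h2 * (d / 2)  # E[offspring] = h2 * midparent deviation
    return d

# A founder 3 SD above average on some ruler-relevant trait (h2 = 0.5)
# is expected to be statistically ordinary within ~4 generations:
for g in range(5):
    print(g, round(dynasty_deviation(founder_sd=3.0, h2=0.5, generations=g), 3))
    # 0 3.0 / 1 0.75 / 2 0.188 / 3 0.047 / 4 0.012
```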
So, can someone please point me to any readings that outline a plausible way whereby humans could be subject to any kind of 'global totalitarian oppressive system' across a time scale of more than a hundred generations?
Jackson -- thanks for the interesting examples. Have you written anything more detailed about any of these, or know anyone who has?
Some of these sound technically feasible within a few decades or centuries, but most raise the issue -- what motivation would the powerful people/AIs/whatever running society have for doing any of these things? Some of them sound pointlessly sadistic, costly, and unaligned with the powerful beings' interests. (For example, why perpetuate a species of docile post-human submissives, instead of just automating whatever one wants to do? Why keep copies of everyone's uploaded consciousness if they're not smart and empowered enough to actually do anything useful?)
I'd love to see some serious game-theoretic analysis of these kinds of scenarios -- e.g. which kinds of powerful-elite behavior (in perpetuating a 'global totalitarian state') would actually make rational sense across millennia, versus which are more like Black Mirror dystopian fantasies that don't actually make sense in terms of anyone's long-term interests?
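To show the shape of the analysis I have in mind, here's a toy expected-value comparison (decision-theoretic rather than full game theory, and every payoff, cost, hazard rate, and discount factor below is a made-up placeholder): an elite that pays ongoing control costs and faces a per-generation collapse hazard, versus one that pays a one-time cost to automate what it wants and then steps back.

```python
# Toy sketch of the long-horizon calculation, not a serious model.
# All parameters are made-up placeholders.
# 'oppress': pay a control cost every generation; the regime collapses
#            with probability `hazard` per generation (e.g. as
#            resistance traits spread through the gene pool).
# 'automate': pay a one-time setup cost, then collect benefits with
#             negligible ongoing control costs and hazard.

def discounted_value(benefit, cost_per_gen, hazard, discount, horizon):
    """Expected discounted payoff over `horizon` generations."""
    total, survival = 0.0, 1.0
    for t in range(horizon):
        total += survival * (discount ** t) * (benefit - cost_per_gen)
        survival *= 1 - hazard  # chance the regime survives this generation
    return total

HORIZON = 1000  # generations
oppress = discounted_value(benefit=1.0, cost_per_gen=0.4, hazard=0.02,
                           discount=0.99, horizon=HORIZON)
automate = -5.0 + discounted_value(benefit=1.0, cost_per_gen=0.0, hazard=0.001,
                                   discount=0.99, horizon=HORIZON)
print(round(oppress, 1), round(automate, 1))  # ~20.1 vs ~86.0
# With these placeholders, even a 2%-per-generation collapse hazard
# makes perpetual oppression strictly worse for the elite than automating.
```

Obviously the interesting work would be in justifying the parameters and making the population a strategic player rather than a fixed hazard rate -- which is exactly the kind of published analysis I'm asking whether anyone has done.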