Naive question: I see many EAs talking about non-extinction X-risks such as the alleged dangers of 'value lock-in' or the imposition of a 'global permanent totalitarian state'. Most recently, I came across Will MacAskill mentioning these as plausible risks in the new book 'What We Owe the Future'.
As an evolutionary psychologist, I'm deeply puzzled by the idea that any biologically reproducing species could ever be subject to a 'permanent' socio-cultural condition of the sort that's posited. On an evolutionary time scale, 'permanent' doesn't just mean 'a few centuries of oppression'. It would mean 'zero change in the biological foundations of the species being oppressed -- including no increased ability to resist or subvert oppression -- across tens of thousands of generations'.
As long as humans or post-humans are reproducing in any way that involves mutation, recombination, and selection (either with standard DNA or post-DNA genome-analogs such as digital recipes for AGIs), Darwinian evolution will churn along. Any traits that yield reproductive advantages in the 'global totalitarian state' will spread, changing the gene pool, and changing the psychology that the 'global totalitarians' would need to manage.
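To get a rough sense of the timescale, here's a back-of-the-envelope sketch using the textbook single-locus selection recursion; the selection coefficients and starting frequencies below are purely illustrative assumptions on my part:

```python
# Rough illustration: how fast a trait with a modest reproductive advantage
# spreads under the standard haploid selection recursion p' = p(1+s) / (1 + s*p).
# All parameter values are illustrative assumptions.

def generations_to_reach(p0: float, target: float, s: float) -> int:
    """Generations for a trait at initial frequency p0, with selective
    advantage s, to reach the target frequency."""
    p, gens = p0, 0
    while p < target:
        p = p * (1 + s) / (1 + s * p)  # one generation of selection
        gens += 1
    return gens

# A trait starting at 1% frequency with a 1% reproductive advantage
# reaches 50% frequency in roughly 460 generations...
print(generations_to_reach(0.01, 0.5, s=0.01))
# ...and with a 5% advantage, in under 100 generations.
print(generations_to_reach(0.01, 0.5, s=0.05))
```

The point is just that even weak selection moves trait frequencies a long way on the timescales we're talking about.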
Unless the global totalitarians are artificial entities such as AIs that are somehow immune to any significant evolution or learning in their own right, the elites running the totalitarian state would also be subject to biological evolution. Their heritable values, preferences, and priorities would gradually drift and shift over thousands of generations. Any given dictator might want their family dynasty to retain power forever. But Mendelian segregation, bad mate choices, regression to the mean, and genetic drift almost always disrupt those grand plans within a few generations.
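To see why 'a few generations' is plausible, here's a toy regression-to-the-mean calculation using the standard expectation that offspring deviate from the population mean by roughly h² times the midparent deviation; the heritability and trait values are illustrative assumptions:

```python
# Toy calculation (illustrative numbers): how quickly a dynasty's exceptional
# heritable trait is expected to regress toward the population mean.
# Standard expectation: offspring deviation ~ h^2 * (midparent deviation).

h2 = 0.5   # assumed narrow-sense heritability of the trait
dev = 3.0  # founder is 3 standard deviations above the population mean

for generation in range(1, 6):
    # assume each heir marries a roughly average partner (deviation ~ 0),
    # so the midparent deviation is half the current dynast's deviation
    dev = h2 * (dev / 2.0)
    print(f"generation {generation}: expected deviation ~ {dev:.2f} SD")

# generation 1: ~0.75 SD, generation 2: ~0.19 SD, generation 3: ~0.05 SD --
# within a few generations the dynasty is expected to look pretty average.
```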
So, can someone please point me to any readings that outline a plausible way whereby humans could be subject to any kind of 'global totalitarian oppressive system' across a time scale of more than a hundred generations?
As far as I know, there really isn't much EA thought about this idea of "stable totalitarianism", which is odd considering that it is often brought up right when people are introducing the fundamental logic of "longtermist" EA, as you mentioned. The EA Forum just has a handful of oddball articles: this one brainstorming how we might screen out mean-spirited people to prevent them from rising to power; this section of a post on Brain-Computer Interfaces, on the obvious totalitarian potential if you can read your subjects' minds or wire reward/punishment directly into their brains; this essay by Bryan Caplan; and a couple of articles about protecting democracy (although these are more near-term-oriented)... Compared to the usual thoroughness that EA brings to the table, it's pretty lame!
Maybe there are other related subcultures beyond EA where the idea of stable totalitarianism has been given more thought? Crypto people are pretty libertarian/paranoid, so maybe they have good takes on this stuff? Dunno...
One related area where people (including myself) have written a bit more is the "vulnerable world hypothesis" -- situations where you might actually need global totalitarianism in order for humanity to control an incredibly dangerous technology.