[I wrote this blog post as part of the Asterisk Blogging Fellowship. It's substantially an experiment in writing more breezily and concisely than usual, and on a broader topic. Let me know how you feel about the style.]
Literally since the adoption of writing, people haven't liked that culture changes and that their children end up with different values and beliefs than their own.
Historically, for some mix of better and worse, people have been fundamentally limited in their ability to prevent cultural change. People who are particularly motivated to prevent cultural drift can homeschool their kids, carefully curate their media diet, and surround them with like-minded families, but eventually the kids grow up, leave home, and encounter the wider world. And death ensures that even the most stubborn traditionalists eventually get replaced by a new generation.
But the development of AI might change the dynamics here substantially. I think that AI will substantially increase both the rate and scariness of cultural change—TikTok algorithms that are genuinely superintelligent at capturing attention, or social movements iterating through memetic variations at the speed of AI media production rather than human media production. And AI will simultaneously make it much easier to prevent cultural change, offering unprecedented tools for filtering information, monitoring behavior, and constructing insular bubbles that are genuinely impervious to outside influence.
This will put us in a rough position: people may have to quickly make historically unprecedented choices about how much to isolate themselves from the cultural change they'd undergo if they interacted freely with the outside world, at a time when that change correctly looks particularly frightening to them. I think this is scary.
I'm not confident it'll go the way I describe here, but I think that this possibility is worth exploring, and I think LessWrongers are way too optimistic about the class of issues I point to here.
Analysis by swerving around obstacles
There are lots of ways AI could go wrong in terrifying ways. In this post, I want to talk about a scenario where the first few obvious concerns don't happen. Let's call this the method of analysis by swerving around obstacles. Every time we spot a terrifying obstacle to a good future, we'll name it, and then just assume that it doesn't happen.
First obstacle: AI takeover. The AIs remain aligned enough that they don't grab power for themselves. They do what humans want them to do, at least in broad strokes.
Second obstacle: massive swings in human power. We don't end up in a world where Sam Altman personally controls the lightcone, or where the US president manages to become god-emperor of Earth. Power remains distributed among humans in something vaguely resembling current patterns.
Third obstacle: economic displacement without redistribution. Let's say America implements something like UBI funded by taxes on AI companies. People don't need to work to survive. The benefits of AI are shared widely enough that most people can afford AI assistants, AI-generated products, and whatever else they need.
Fourth obstacle: centralized decisions about the extent to which AIs should help people preserve aspects of their beliefs and values that would change under exposure to broader society or further reflection. The government doesn't mandate that all AIs must expose users to diverse perspectives, nor does it allow complete freedom to construct arbitrarily closed epistemic bubbles. Instead, we muddle through with something like current norms. I think there are pretty broad value disagreements about this—America is much more friendly to homeschooling than many other developed countries, for instance. Let's just assume it goes roughly the way I'm hypothesizing.
Exposure to the outside world might get really scary
In that world, AI leads to terrifying social change and unprecedented social pressures.
This is substantially just an acceleration of trends that I think already exist. When I was a teenager, I spent a bunch of time unsupervised online, and it was basically great for me. But I'm scared of teenagers doing that now. The internet has been optimized for engagement in ways that make it less fulfilling and more addictive. Empirical evidence suggests smartphones have been bad for teenage mental health. And this is before AI really gets going.
I also expect that once AIs can produce media themselves, cultural shifts will happen more frequently than at present (because the AIs can more rapidly experiment with provocative new positions that might be really popular). And cultural positions will be more balkanized. Right now, an important force against social fracturing is that there are only so many talented writers and media producers, so it's hard for niche communities to fully hold the attention of their members: they're bad enough at producing media that they can't monopolize anyone's attention, even if their values and beliefs are actually a better memetic fit for a particular individual than those of the larger cluster that person belongs to. AI media production removes that constraint.
Another cause for concern: we'll all become targets for sophisticated manipulation. Ultra-wealthy people already deal with this—it's worth talented people's time to study them and figure out how to extract money from them. But when the ratio between human wealth and AI labor costs shifts, everyone becomes worth targeting. AIs will compile detailed dossiers on each of us, crafting perfect manipulation strategies. Everyone who flirts with you might be following a game plan from an AI system that spent the equivalent of years of effort devising it.
The point is that the outside world won't just be different from what individuals might want, as it already is—it will be genuinely dangerous in unprecedented ways. Every piece of media might be optimized to hijack your values. Every interaction might be predatory. Parents who want to shield their children will be correct that unfiltered exposure to the outside world is scary and potentially extremely dangerous.
Isolation will get easier and cheaper
And at the same time as it gets scarier to be exposed directly to the outside world, people will have an unprecedented ability to isolate themselves and their children from these pressures.
AI will make isolation dramatically easier. Right now, if you want to shield your kids from mainstream culture, you have to constantly fight an uphill battle. You need to review books, movies, and websites. You need to find alternative curricula for every subject. You need enough like-minded families nearby to form a community. It's exhausting work that requires constant vigilance and often means accepting lower-quality substitutes for mainstream options. But AI changes all of this. Want a library of ten thousand novels that share your values but are actually as engaging as secular bestsellers? Your AI can write them. Want a tutor who can teach calculus at MIT level while never mentioning evolution? Done. Want to monitor everything your kid sees online and get alerts about concerning patterns? No problem. The technical barriers to creating a totalizing information environment will disappear.
Churches already do their best with current tools. They organize youth groups, summer camps, and mission trips. They provide Christian rock bands (occasionally extremely good), Christian romance novels, Christian homeschool curricula. Parents install content filters, read their kids' texts, and carefully vet friend groups. One of the core lessons they teach is that maintaining faith requires active effort—you need accountability partners, you need to pray when you have doubts, you need to avoid situations that might lead you astray. But all of this is incredibly labor-intensive and only partially effective. The outside world seeps in through cracks. With AI, there won't be cracks. The AI can read every message, watch every interaction, and provide perfectly calibrated interventions at exactly the right moments.
At the same time, the costs of isolation will plummet. The biggest current cost of living in an enclave isn't the effort required to maintain it—it's what you give up. Kids from insular communities often struggle economically because they lack mainstream credentials and cultural knowledge. They can't network effectively, don't know how to navigate secular institutions, and miss out on economic opportunities. Parents face the difficult choice between exposing their kids to spiritual danger or accepting lower material prospects. But if no one needs to work anyway, and AI can handle any necessary interface with the outside world, these tradeoffs disappear.
Add biological immortality to the mix, and these enclaves don't even face generational turnover. Today, cultural change often happens funeral by funeral—the old guard dies off, the young people who've been exposed to new ideas take over. But if the same church elders who set the rules in 2025 are still around in 3025, still convinced that their way is right, and still controlling the institutions they built, then the natural engine of cultural evolution breaks down entirely.
I don’t think people will handle this well
If I had to handle all this myself (and I sure hope I eventually end up having to deal with a problem like this, rather than having some simpler and worse fate), I expect I would find it pretty challenging. I’d contemplate the benefits and costs of different types of exposure to society, and think hard about which mechanisms by which my current values might evolve I’m happy with. But I feel confident in my ability, with the help of my friends, to eventually figure out how to handle this. I am much less confident that members of broader society, who haven't been thinking about these issues for more than a decade, will make good decisions.
One extreme example of how this might play out: Within a year of when the best AI systems are able to automate AI research, some pastor talks to the AI about what increased contact with AI means for his congregation. The AI says it depends—if the congregation purposefully uses AIs to defend their faith, they'll be increasingly faithful over time. If they don't, they'll lose their religion. The pastor asks for advice. The AI provides detailed recommendations: specific content filters, youth curricula designed to inoculate against doubt, monitoring systems. Some of the congregation follows this advice and keeps their faith through the Singularity, and maintains it afterward, for a long time.
A lot of people I know seem to be much more optimistic than me. Their basic argument is that this kind of insular enclave is not what people would choose under reflective equilibrium. Surely, they say, if people really thought about it carefully—if they considered all the arguments, examined the evidence, traced out the implications—they would want their beliefs to evolve based on reason rather than tradition. The homeschoolers would eventually realize that young earth creationism doesn't match the geological evidence. The fundamentalists would work through their doubts and emerge with a more nuanced faith, or no faith at all. Given enough time and reflection, everyone would converge on something like secular liberalism, or at least be open to changing their minds.
But I don't think that's how most people actually work. For most people, if they have a choice between an object-level belief that is core to their identity and a meta-level principle like "believe what you'd believe if you were smarter and more informed," they will choose the object-level belief. Tell a devout Christian that superintelligent AI analysis suggests their faith is unfounded, and they won't abandon their faith; they'll abandon the AI (perhaps for a competing AI that's just as capable, and happy to tell them otherwise). Tell them that their children will be more successful at reaching reflective equilibrium if exposed to diverse viewpoints, and they'll question your definition of success, not their approach to parenting.
I think a lot of people I know are committing a typical mind fallacy here. My friends tend to be the kind of people who value truth-seeking above most other things. They've organized their lives around following arguments wherever they lead. They assume, on some level, that everyone would be like this if given the chance—that people would want their beliefs to be more internally consistent, more responsive to evidence, more carefully reasoned through. But I really don't think that's an accurate understanding of most people's psychology. Many people explicitly reject that way of thinking, especially once they understand that it will change their beliefs and values in ways they find horrifying. And when given the tools to protect themselves from it, they will use them.
This is a bummer
This might be a large-scale problem for the future. I think it's plausible that we'll end up with voting rights extended only to people who existed at the time of the Singularity—without this, all power goes to whoever makes the most descendants. If that happens, these people might be an important voting bloc forever.
But even aside from that, even if these people are just an irrelevant minority, it's just a bummer. I used to imagine a glorious transhuman future. I used to think that, even though the world was a brutal place and there was a solid chance that we'd all get killed by the advent of AGI, we had a chance, after that, of getting an enlightened utopia where people were basically reasonable. I now think it's pretty unlikely that we will get that.
That's the future I’m worried about: not a boot stamping on a human face forever, but a Christian homeschool co-op meeting every Wednesday in the year 3000, still teaching that the Earth is less than ten thousand years old, now with irrefutable AI-powered shields against any information that might suggest otherwise.