Note (added after original post): This essay was ghostwritten by AI and as such has a few significant, sometimes subtle, mistakes. An updated final version can be found here.
Note: after I posted a temporary version for the "Essays on Longtermism" competition deadline, this series of posts was temporarily removed from the front page; it will be reposted in the following days after updates have been made.
Epistemic status: In the spirit of draft amnesty, I am posting this series slightly before it is fully ready or in ideal form.
This represents many years' worth of my thinking, and I think the core material here is quite important. However, I really wanted to submit these essays for the "Essays on Longtermism" competition, due today, which ended up competing with some other important fellowships and applications; as a result, the series received far less priority and attention than it deserved.
Nonetheless, I believe these ideas are fundamentally important; the execution may just be closer to a middle draft than a final draft.
That said, I will likely be updating this series significantly in the following days, especially the last post, "Shortlist of Longtermist Interventions," and the final two pieces, which are not yet published. I will maintain a running list of updates on the series explainer page here, so that readers can easily see when each essay has been updated and finalized:
(Brief and Comprehensive Series Explainers Here)
This essay, easily readable as a stand-alone piece, is the first in the series and part of my submission to the "Essays on Longtermism" competition, based on my unpublished work on Deep Reflection (a comprehensive examination of crucial considerations to determine the best achievable future). (Deep Reflection Summary Available Here)
TL;DR
This essay provides the theoretical foundation for why viatopia matters. Viatopia, a concept introduced by Will MacAskill, refers to “a state of the world where society can guide itself towards near-best outcomes.”
A key advantage of viatopia lies in creating tractability through buy-in. It’s politically feasible, whereas implementing comprehensive reflection directly may not be.
The essay establishes the multiplicative crucial considerations framework: dozens to over a hundred factors (normative, epistemic, strategic, empirical) interact multiplicatively to determine future value, meaning comprehensive reflection has orders of magnitude more expected value than addressing considerations individually.
It introduces the commoditization thesis: as AI capabilities grow, implementation becomes trivial, while determining good directions becomes everything, making values and institutional design work the highest-leverage pre-AGI Better Futures interventions.
The bootstrapping mechanism shows how having multiple well-developed viatopia paths creates natural incentives to pause and debate rather than rushing to implement the first available option, helping prevent premature lock-in.
The essay explores parallels and differences between MacAskill's Better Futures framework and my Deep Reflection work, explains why early strategy research is uniquely high-leverage, and discusses the fundamental design challenge of maximizing both human agency and future value simultaneously. This theoretical foundation explains why the concrete mechanisms in subsequent essays matter.
Part 1: Context Within the Essays on Longtermism Series
This essay is part of my submission to the Essays on Longtermism competition (link). In my first essay, "Introduction to Building Cooperative Viatopia" (link), I explored how the longtermist community can move from theoretical frameworks to practical institutional reality. I examined two key essays from the collection: Owen Cotton-Barratt and Rose Hadshar's analysis of what longtermist societies might look like, and Hilary Greaves and Christian Tarsney's comparison of minimal versus expansive longtermism.
The central insight from that analysis was that while EA has developed strong theoretical foundations, we face a critical infrastructure gap. We need concrete mechanisms that make longtermism practically achievable without requiring coercion or universal adoption of explicitly longtermist values in daily life. As AI capabilities advance rapidly, we face a growing need to "skate to where the puck is going": AI will soon commoditize everything except ideas and values, making the determination of good directions perhaps the highest-leverage work possible before transformative AI arrives.
The present essay focuses on explaining why viatopia is important, including a few of its core features. This leads into the next essay in the series, which will focus specifically on the challenge of stakeholder buy-in for viatopia. While Building Cooperative Viatopia established the importance of community infrastructure and systematic value reflection, the next essay performs concrete stakeholder analysis, examining what AI labs, governments, and the public can do to move us toward viatopia. The final three essays in this series will detail specific viatopian mechanisms: a shortlist of high-leverage longtermist interventions, the Hybrid Market (a mechanism for systematically pricing positive and negative externalities), and the Children's Movement (an approach to raising children that builds strong epistemics and collaborative values from the earliest stages).
Part 2: Why Viatopia Matters - Connecting to Deep Reflection
MacAskill's Better Futures and Viatopia as the Unifying Framework
When Will MacAskill shared his Better Futures essay series with me after I requested feedback on my own work, one concept stood out as perhaps the single most important contribution to longtermist strategy: viatopia. While MacAskill discusses numerous interventions across his series, viatopia serves as the unifying framework that makes all other Better Futures work coherent. Without viatopia, we're simply listing isolated interventions; with it, we have a strategic vision for how to navigate from our current world to one capable of achieving the best possible future.
MacAskill states (link): "Plausibly, viatopia is a state of society where existential risk is very low, where many different moral points of view can flourish, where many possible futures are still open to us, and where major decisions are made via thoughtful, reflective processes."
This concept is not merely theoretical for MacAskill. He has indicated that viatopia is one of the most important ideas he wants to promote from his Better Futures series and that Forethought, the strategy research organization, should possibly consider working on it. Interestingly, while he emphasized its importance in conversations, he devoted relatively little space to it within the series itself, noting his intention to explore it further at a later date. His only published piece specifically on viatopia focuses on "bootstrapping to viatopia" (link) - the mechanism by which we might progressively delay major civilizational decisions as we identify more crucial considerations worth reflecting on, such as by repeatedly extending a global constitutional convention as we realize there are more factors we should carefully consider.
The importance of viatopia lies specifically in the tractability it creates through buy-in. While implementing comprehensive reflection directly might seem infeasible or politically unpalatable, viatopia represents an intermediate state that is both politically feasible and actionable. It doesn't require everyone to become explicit longtermists or to make all decisions based on far-future considerations. Instead, it creates structures and trajectories that naturally lead toward better reflection and decision-making while remaining appealing to diverse stakeholders with different values and priorities.
The Commoditization Thesis: Why Direction Becomes Everything
Perhaps the most important consideration for understanding why viatopia work is so urgent is what I call the commoditization thesis: as AI capabilities grow exponentially, implementation becomes increasingly trivial while direction becomes overwhelmingly important. Very soon, advanced AI will enable us to implement nearly any institutional design, technological development, or societal intervention we can specify clearly enough. The binding constraint shifts from "can we do this?" to "should we do this? What exactly should we do?"
This creates a critical dynamic: whatever institutional designs and value frameworks are well-developed and readily available when this transformative capability arrives are likely to be what we actually implement. When faced with unprecedented power and pressure to make decisions about humanity's trajectory, we will use the tools lying around rather than conducting an exhaustive search through all possibilities. This is simply because having vetted, concrete proposals available makes them vastly more likely to be adopted than proposals that exist only as vague ideas requiring years of development.
This means that focusing attention on determining the best possible goals, developing multiple viable paths to good futures, and creating the deliberative infrastructure to choose wisely among them may represent the highest-leverage Better Futures work in the pre-AGI period. In other words: we should be skating to where the puck is going. Knowing what the best possible shape of the universe could be, or at least knowing the best process for determining this, becomes the final challenge once AI can handle everything else. Values and strategic direction represent perhaps the last things that will be fully automated, making them the optimal fulcrum point for high-leverage human effort before transformative AI arrives.
Context from Deep Reflection: The Broader Framework
Much of this essay is excerpted from my much longer unpublished work "Deep Reflection," a 35,000-word essay exploring interventions for ensuring humanity achieves not merely existential security but the best achievable future. I originally wrote it for the Existential Choices Debate Week (link), which I requested in 2024 (link) and which the EA Forum ran in 2025. An intermediate summary version is available here (link) for those interested in the full theoretical framework.
Because I cannot finish the entire work by the competition deadline, I am summarizing its most important points here to explain why viatopia is important and how it connects to Deep Reflection.
The central concept of Deep Reflection is straightforward: a comprehensive examination of all crucial considerations (which I'll define shortly) to determine how to achieve the best possible future, followed by implementing the results of that reflection. This is the broader class of interventions that includes proposals like Toby Ord and Will MacAskill's "Long Reflection" (link), but encompasses any process of sufficiently thorough and comprehensive reflection, whether implemented as a single extended period or as an ongoing process of careful deliberation at each step.
The full Deep Reflection essay establishes several key points that provide essential context for understanding why viatopia matters:
Path dependence and lock-in risks: As technology advances rapidly, humanity may unknowingly take paths that create path-dependent lock-in. That is, we take paths which are naturally extremely unlikely to be reversed (link to Beckstead on path dependence). Path-dependence may also come in milder forms where lock-in of certain outcomes becomes significantly more likely, yet such path-dependence could still have large impacts on expected value. MacAskill's Better Futures series explores multiple mechanisms by which persistent path dependence becomes more probable after transformative AI (link to relevant sections). Given the vast time scales of potential futures, it seems quite possible that humanity will eventually fall into some stable, locked-in attractor state. The question is whether we deliberately choose that state through careful reflection or blindly stumble into it.
The urgency of early strategy work: We face a potentially fleeting window of opportunity. Currently, relatively few people are trying to influence how advanced technology is used or what we do with the deep future, but this will soon change dramatically. Once transformative AI arrives, many actors will push for their preferred trajectories, and coordination becomes vastly more complex. The institutions and ideas we have ready at that moment may have outsized influence simply through availability.
What's crucial to understand is that viatopia is not merely one among many possible approaches to improving the future. It serves as the mechanism that makes Deep Reflection achievable in practice, not just desirable in theory.
Multiplicative Crucial Considerations: Why Comprehensive Reflection Matters
To understand why viatopia is essential, we must first understand the multiplicative crucial considerations framework, which forms the theoretical foundation for why comprehensive reflection has such enormous expected value.
A crucial consideration (Nick Bostrom's term (link)) is any factor that could materially influence the value of the future and that we need to understand in order to act correctly. Importantly, this extends far beyond normative questions about what "the good" ultimately consists of. Crucial considerations include:
- Normative considerations: What kinds of experiences or states matter morally? How should we weigh different types of value? What is the moral status of digital minds or artificial sentiences?
- Empirical considerations: What are the actual consequences of different technological trajectories? How do social systems evolve over long time periods? What are the physical limits and possibilities of the universe?
- Strategic considerations: How do we maintain option value while making progress? What paths create beneficial path dependencies versus dangerous ones? How do we coordinate across diverse actors?
- Epistemic considerations: How do we build institutions that help us discover truth? How do we avoid collective epistemic failures? What processes best aggregate distributed knowledge?
The crucial insight is that these considerations interact multiplicatively rather than additively. If we get one major consideration seriously wrong, it doesn't just reduce the value of the future by some additive amount - it can reduce it by a multiplicative factor. Get several wrong, and the multiplied reductions compound. With potentially dozens to over a hundred such considerations, each potentially affecting value by factors of 2X, 10X, or even more, the difference between getting most of them right versus getting many wrong could represent many orders of magnitude in expected value.
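As a minimal sketch of this arithmetic (a toy model with illustrative numbers of my own choosing, not figures from MacAskill or the Deep Reflection essay), consider what happens if each missed consideration cuts the remaining value of the future in half, versus merely subtracting a fixed slice:

```python
# Toy model: how missed crucial considerations affect future value.
# The numbers (1% additive penalty, 2x multiplicative factor) are
# purely illustrative assumptions, not estimates from the essay.

def additive_value(n_missed: int, penalty: float = 0.01) -> float:
    """Each missed consideration subtracts a fixed 1% slice of value."""
    return max(0.0, 1.0 - n_missed * penalty)

def multiplicative_value(n_missed: int, factor: float = 2.0) -> float:
    """Each missed consideration halves the remaining value."""
    return 1.0 / (factor ** n_missed)

for n in [0, 1, 5, 10, 20]:
    print(f"missed={n:>2}  additive={additive_value(n):.2f}  "
          f"multiplicative={multiplicative_value(n):.2e}")
```

Under the additive model, missing 10 considerations costs 10% of the future's value; under the multiplicative model it costs a factor of 2^10, roughly 1000x, and missing 20 costs roughly a millionfold. This is the sense in which comprehensive reflection, which reduces the number of missed considerations across the board, can dominate piecemeal fixes in expected value.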
Moreover, we face what we could call strategic cluelessness (I'm not sure about this term, but the concept is closely related to, or perhaps identical with, Hilary Greaves' "complex cluelessness"): we don't know in advance which crucial considerations will turn out to be most important, or how they interact with each other. This creates a fundamental challenge for narrow trajectory change interventions. If we focus on changing one aspect of the future (say, ensuring certain governance structures or advancing certain technologies), we might succeed at that narrow goal while inadvertently creating negative effects on other crucial considerations we haven't thought carefully about. The interactions are complex enough that we can't simply work on considerations one by one and expect to achieve good results.
(footnote) Will MacAskill has articulated a related but subtly different concept he calls multiplicative factors. His framework focuses specifically on qualities that the future itself must have to achieve high value. He argues these factors interact multiplicatively such that getting them all right is essential. This is an important insight, and my crucial considerations framework includes such factors as a central component. However, my framework extends more broadly to include epistemic, strategic, and empirical considerations that affect our ability to determine and achieve good futures, not just the normative qualities those futures must possess. This distinction matters because it suggests we face an even more complex challenge than identifying the right end-state properties - we must also navigate the strategic and epistemic challenges of reliably getting there. I believe both frameworks point toward similar conclusions about the importance of comprehensive reflection, though they emphasize different aspects of why such reflection is necessary.
This framework explains why comprehensive reflection processes like Deep Reflection have such enormous potential value. Rather than trying to address crucial considerations one by one while remaining clueless about their interactions, comprehensive reflection systematically examines all considerations together, understanding how they relate and ensuring we don't inadvertently undermine progress on one front while advancing another. The expected value difference is not marginal - it represents orders of magnitude in expectation, despite the admittedly significant challenges in tractability.
The fragility of value: This insight, articulated by Carl Shulman (link), is that value is fragile in the sense that many different factors must go right for the future to achieve high value. Missing crucial considerations or making certain early mistakes could lead to dramatically suboptimal outcomes that persist indefinitely.
(footnote) Carl Shulman raised an important critique (link to Shulman comment) of my crucial considerations framework, arguing that if there truly are dozens to hundreds of factors we need to get right, and they interact multiplicatively, then we're doomed anyway - we couldn't possibly address them all, so we'd lose most value regardless. He suggested we should instead focus on a small number of key considerations, like preventing autocracy, which could be enough to achieve most value in non-fragile worlds. MacAskill responded to Shulman, suggesting there may be only a small number of factors affecting future value, which would make targeted interventions more promising. I believe this critique, while important to address, misses a key point: automated macrostrategy research can dramatically increase our capacity to analyze these considerations systematically. This is precisely why early work on automating strategy research is so valuable - it scales our ability to handle complexity. (MacAskill has made public statements indicating he is a big advocate of automated macrostrategy, suggesting he may agree with this, though he doesn't state it here explicitly.) Furthermore, comprehensive reflection processes can identify which considerations are actually most critical and which interventions are most robust across multiple considerations, helping us prioritize effectively even within a complex landscape. I believe we can find good proxies for progress toward better futures, just as we can find high-leverage AI safety interventions that create progress toward preventing existential risk. The strategic cluelessness problem is real, but it's solvable through sufficiently sophisticated analytical processes, especially those leveraging advanced AI.
Early Strategy Research: A Uniquely High-Leverage Window
The multiplicative crucial considerations framework helps explain why early strategy research is uniquely high-leverage right now. We can think of the pre-transformative-AI period as having two distinct phases with different priorities:
The current early phase: Right now, we need intensive work on strategy, ideas, and values to navigate the intelligence explosion successfully. We must identify crucial considerations, develop institutional designs for viatopia, and create the intellectual and organizational infrastructure for comprehensive reflection. This work must happen before transformative AI arrives because path dependencies will begin solidifying rapidly once such capabilities exist.
The later phase: Once sufficiently advanced AI exists, it will generate strategy better than humans can. At that point, values become the primary bottleneck. We'll need to ensure AI is pursuing the right goals and that we have good processes for determining what those goals should be, but the generation of strategic insights itself will be increasingly automated.
This creates a critical imperative: we must do the foundational strategy work now, during this window when human strategic thinking still adds significant value, to ensure we'll be well-positioned when AI capabilities advance. This includes not just developing good strategies, but creating AI-augmented strategy and intervention research tools, as I focused on in my previous work (link) and plan to focus on more in the future (link). These automation research tools must be designed such that they continue improving as models become more capable, allowing the effectiveness of our strategy work to scale with AI progress rather than being left behind by it.
Moreover, multiple groups will soon try to influence values and strategic direction. Once transformative AI approaches, numerous actors with different value systems will push their preferred trajectories. The actors who have done the most thorough preparatory work on institutional designs and value reflection processes will be disproportionately influential. This makes comprehensive processes for broad societal research, reflection, debate, and experimentation around values increasingly urgent. We need them ready before path dependencies solidify.
Why Values Matter Most: Beckstead's Insight
An essential insight, articulated most clearly by Nick Beckstead (link to Beckstead on trajectory change), is that if we have the right values, we can always figure out how to achieve them, but if we have the wrong values, we might go farther and farther down the wrong path until any impetus to change our values back has been permanently extinguished.
This suggests a crucial prioritization: among all the things that could influence the far future, the trajectory of moral and epistemic progress may be paramount. This is one of the main reasons (another being x-risk more broadly, as articulated by Bostrom (link)) some technologies need to be slowed down or avoided while others should be differentially accelerated. It's also why upgrading social institutions to more effectively promote moral reflection and values learning deserves such focus. Systems like political institutions, economic structures, education systems, parenting and childcare approaches, and mental health support all influence how humans think about values and make moral progress over time.
The centrality of values also explains why systematic value reflection infrastructure - institutional and technological infrastructure for helping humans reflect on values, experiment with different frameworks, engage in substantive debate, and systematically evolve toward better values - may be among the highest-leverage interventions possible. This will become even more true as AI capabilities grow. When implementation becomes easy, having the right goals becomes everything.
Convergence and Divergence with MacAskill's Framework
Given the overlap between MacAskill's Better Futures work and my own Deep Reflection essay, it's worth explicitly noting where our thinking converges and diverges. We arrived at surprisingly similar conclusions through somewhat different paths, and understanding both the similarities and differences helps clarify the overall framework.
(footnote) Convergences between our frameworks:
Both MacAskill and I independently developed viatopia-like concepts as central to our thinking about the far future. His term "viatopia" appears in his Better Futures series; I used "Seed Reflection" as my term in 2025 when revising my Deep Reflection essay, and earlier used "Paths to Utopia" as the working title for a book I began in 2021 (before discovering the EA community) exploring mechanisms for navigating from the current world to optimal futures.
We both emphasize the concept of existential compromise (myself) or grand bargain (MacAskill) - the idea that with sufficient resources (likely enabled by advanced AI), we can structure arrangements that are massively positive-sum, satisfying diverse stakeholders while still making progress toward better futures. I call this "existential compromise," inspired in my case by Nick Bostrom's suggestion of a possible compromise between superbeneficiaries and other beings. (link)
Both of us heavily emphasize path dependence and the mechanisms by which early decisions can have persistent effects on trajectories. MacAskill's Better Futures series explores multiple ways persistent path dependence becomes more likely with transformative AI. My work similarly stresses how transformative AI creates both opportunities and risks for path dependence leading to eventual lock-in.
Key differences and my evolution:
One of the most important concepts I learned from MacAskill's work is "doing the good de dicto" - being motivated to pursue comprehensive good rather than specific goals. This was a major takeaway from his Better Futures series that I had not adequately emphasized in my earlier work. The idea is that some actors are motivated to promote whatever turns out to be good comprehensively (de dicto motivation), rather than being attached to specific outcomes they currently believe are good (de re motivation). For such actors, if they learned that something different was actually better, they would change their behavior accordingly. This attitude of pursuing the good comprehensively, whatever it consists in, is essential for reliably achieving great futures. I created the shorthand term "basing" to refer to this concept because information theory suggests that frequently used important concepts should have short words, and "pursuing the good de dicto" felt too unwieldy for such a central idea. This term is tentative and could be changed based on feedback.
On cooperation versus strategic advantage: my initial approach emphasized realpolitik considerations and the importance of influencing whichever party achieves a decisive strategic advantage. MacAskill's work places greater emphasis on cooperation and positive-sum coordination. Through his influence and that of Forethought Foundation's research, I've evolved toward placing cooperation more centrally in my framework, while still recognizing that considerations of strategic advantage remain relevant in some scenarios.
On crucial considerations versus multiplicative factors: As explained earlier, these frameworks are related but distinct. Both recognize multiplicative interactions between factors affecting future value, but my crucial considerations framework encompasses a broader range of epistemic and strategic factors, while his multiplicative factors focus more specifically on properties the future must have to achieve high value.
The Diversity Mechanism: Bootstrapping to Viatopia
One of the most important but underappreciated mechanisms for achieving viatopia lies in developing multiple well-specified paths toward it. This isn't merely about having backup options or hedging uncertainty - it serves a crucial strategic function.
The primary mechanism works as follows: if only one or two viatopia proposals are well-developed when the ability to reliably shape the future becomes available (likely through transformative AI), decision-makers might simply choose one and implement it without extensive deliberation. After all, if there's only one concrete plan developed, using it may seem like the obvious option. But if dozens of compelling viatopia designs exist, each well-developed and attractive in different ways, this creates a natural incentive for serious debate about which path is best. The existence of multiple good options makes it obvious that we should pause and carefully consider which to pursue.
This is the core of what MacAskill calls "bootstrapping to viatopia" (link) in his only published piece specifically on the concept. As we develop more viatopia proposals and identify more crucial considerations worth reflecting on carefully, we create increasingly strong reasons to delay major civilizational decisions. As MacAskill explains, this could look something like repeatedly extending a global constitutional convention as we realize there are more important factors we should think through before committing. Each pause allows for further research, deliberation, and refinement, helping us converge toward better outcomes.
This bootstrapping mechanism has a self-reinforcing quality: the more we research viatopia, the more considerations we discover that merit careful thought, which justifies further delays and more research, which reveals even more important factors. This positive feedback loop can help us avoid premature lock-in to suboptimal futures by creating compelling reasons to maintain option value and continue reflecting.
There is also a secondary benefit to having multiple viatopia paths: different stakeholders will find different paths more appealing based on their values and priorities. This enables existential compromise by allowing various actors to support different approaches that nonetheless share the common feature of moving toward comprehensive reflection. AI labs might prefer certain viatopia designs that emphasize rapid technological progress within careful governance, while governments might prefer designs emphasizing stability and broad legitimacy, and the public might prefer designs most similar to familiar social structures. Having multiple paths increases the probability that all major stakeholders find at least one proposal highly appealing, facilitating buy-in.
Maximizing Agency AND Value: A Core Design Constraint
One of the most fundamental design challenges for viatopia is that we want to maximize both human agency and the value of the future - and these often trade off against each other.
The tension arises because humans sometimes go down paths (attractor states) that make exploring other paths increasingly unlikely, even if those alternative paths would be better from their all-things-considered perspective if they had the chance to carefully evaluate them. Once committed to certain trajectories, whether through technology, institutions, or value evolution, reversing course becomes increasingly difficult. Yet if we restrict people's freedom to choose certain paths in order to keep options open, we've thereby reduced their agency.
This means viatopia must be designed to score as high as possible on both dimensions simultaneously. We need structures that allow maximal human freedom and choice while also systematically enabling effective exploration of the values and possibilities space. This is a central design problem.
Here are a few examples of other factors that affect this tradeoff and should be systematically explored when designing viatopia mechanisms:
- How much do individual agents influence each other? High mutual influence could lead to concerning conformity pressures, but some level of value and idea exchange seems necessary for collective progress.
- How much should AI be used? AI can augment human capability for exploring values space, but over-reliance risks outsourcing crucial determinations.
- How theoretical versus empirical should processes be? Systematic theoretical approaches provide structure and rigor, but purely theoretical approaches might miss important insights from experimentation and lived experience.
Understanding these tradeoffs and design factors helps flesh out the whole space of viatopia intervention possibilities - essentially creating a "tech tree" for viatopia mechanism design where we map out different approaches and their implications. This systematic exploration of design space is itself important work that can help identify which mechanisms might best balance agency and value.
One reason for developing human-centric versions of viatopia, such as the Children's Movement I explore in a subsequent essay, is that we may want to keep humans as the main factor determining values into the future, allowing value evolution over long timescales relative to the speed of AI progress, at least until we're confident about allowing AI significant influence over value evolution. Different viatopia designs make different choices about this crucial question.
Mechanisms for Aggregating and Improving Values
A related design challenge is that people hold different values, and we need systems for effectively aggregating this diverse information while helping ensure the best values emerge through debate, research, concrete testing, and informed decision-making - all while guarding against premature path dependence or lock-in.
Importantly, the fact that people have different values is not merely a problem to be solved but a feature that can be leveraged. Value diversity allows market-like mechanisms to function effectively. It provides a wide space to draw from, with different people holding different perspectives and having different incentives to work on various aspects of the overall value question. This distribution of attention and effort across the values landscape helps us explore more thoroughly than any single perspective could achieve.
Different viatopia designs take different approaches to value aggregation and improvement:
- Democratic processes that give everyone input while building in guardrails against poor collective decisions
- Market-based mechanisms (like the Hybrid Market I explore later) that price values and incentivize value-creating behavior
- Deliberative processes that bring together diverse perspectives in structured ways designed to reach better conclusions than individuals could alone
- AI-assisted value exploration that helps individuals and groups understand the implications of different value frameworks and experiment with them in simulated or limited-scale contexts
- Educational, developmental, and psychological approaches (like the Children's Movement) that systematically build strong epistemics and moral reflection capacity from childhood, continuing throughout life
One possible key to this may be to create a global metatopia, echoing Scott Alexander's Atomic Communitarianism (link) or Holden Karnofsky's "meta" option (link): we could have multiple viatopian approaches running in parallel, whether geographically or in simulation, learning from each other, allowing maximal agency in designing and choosing living conditions for individuals and communities, and allowing for evolution over time as we discover which mechanisms work best in practice.
"Keeping the Future Human": An Important Consideration
One framework worth considering explicitly is what Anthony Aguirre calls "keeping the future human" (link) - the idea that we may not want AI to hyper-optimize everything for us immediately, but rather prefer allowing human values to evolve carefully over time in societies that aren't radically different from our current experience.
This perspective provides part of the justification for slower, more human-centric viatopia paths. Rather than rushing to implement whatever current AI systems suggest might be optimal, we might prefer trajectories where humans maintain primary agency over value determination, at least for an extended period. This allows values to evolve in ways humans find meaningful and can understand, rather than accepting optimization according to values we haven't fully reflected on. Bostrom explores similar ideas (among many, many other ideas) in his book “Deep Utopia.” (link)
This doesn't mean rejecting AI assistance entirely. Rather, it suggests a paradigm of human-AI symbiosis where “tool AI” or “oracle AI” augments human capacity for reflection and exploration without displacing humans as the primary agents. As discussed in Building Cooperative Viatopia (link), this bidirectional feedback loop where humans give AI context while AI helps humans improve themselves may be essential from a strategic perspective for maintaining meaningful human relevance and agency as AI capabilities grow.
Different viatopia designs reflect different positions on this question. Some might emphasize rapid AI-enabled optimization once we've achieved sufficient clarity on values. Others might prioritize maintaining human-like conditions and slower evolution even if it means taking longer to reach optimal states. (Yudkowsky's since-renounced "Coherent Extrapolated Volition" (link), in sharp contrast to his "Fun Theory" (link), demonstrates the two extremes.) The diversity of approaches allows different philosophical positions to coexist and compete in the marketplace of ideas - though it is essential that there be overall coordination within the metatopia, between the different viatopia options, so that none ends up destroying another; again, Atomic Communitarianism (link) is relevant here.
Conclusion
The preceding context establishes a few of viatopia's core features and why viatopia is essential: it serves as the practical mechanism that makes Deep Reflection achievable by creating political feasibility and stakeholder buy-in. But understanding that viatopia is important doesn't tell us how to actually create it. That requires concrete analysis of the motives and available actions of key stakeholders.
In the next essay, "Viatopia and Buy-In," I perform concrete stakeholder mapping to identify practical pathways for achieving viatopia. While this essay established the theoretical case for why viatopia matters, the next essay addresses the challenge of actually creating it by analyzing what AI labs, governments, and the general public can each do given their different incentives, capabilities, and constraints. It demonstrates how viatopia can be framed to appeal to diverse stakeholders' interests and shows that viatopia is not just theoretically desirable but practically achievable through existential compromise. This stakeholder analysis bridges from theoretical arguments to practical implementation pathways.
