I want to push back specifically on P1 as it applies to X-risk reduction.
The longtermist case for X-risk reduction doesn't require estimating "all effects until the end of time"
Your list of "crucial factors" to consider (digital sentience, space colonization, alien civilizations, etc.) frames longtermism as requiring a comprehensive estimate of net welfare across all possible futures. But this mischaracterizes the actual epistemic claim behind X-risk reduction work.
The claim is narrower: we can identify a specific, measurable problem (probability of extinction) and implement interventions that plausibly reduce it. This is structurally similar to how we approach any tractable problem — not by modeling all downstream effects, but by identifying a variable we care about and finding levers that move it in the right direction.
You might respond: "But whether reducing extinction probability is good requires all those judgment calls about future welfare." I'd argue this conflates two questions:
1. Can we forecast the net welfare effects of our actions across all possible futures?
2. Do we have good reason to prefer survival over extinction?
For (2), I'd note that the judgment call isn't symmetric. Extinction forecloses all option value — including the option for future agents to course-correct if we've made mistakes. Survival preserves the ability to solve new problems. This isn't a claim about net welfare across cosmic history; it's a claim about preserving agency and problem-solving capacity.
The civilizational paralysis problem
If universal adoption of cluelessness would cause civilizational collapse, this isn't just a reductio — it suggests cluelessness fails as action-guidance at the collective level. You might say individuals can act on non-longtermist grounds while remaining longtermist-clueless. But this concedes that something must break the paralysis, and I'd argue that "preserve option value / problem-solving capacity" is a principled way to do so that doesn't require the full judgment-call apparatus you describe.
I'm genuinely uncertain here and would be curious how you'd respond to the option-value framing specifically.
Will MacAskill's "viatopia" concept resonates with me — but I don't think any single framework wins here either.
I suspect the most resilient path forward combines:
1. Mini-topias at small/medium scale — Local experimentation with positive visions. Let a thousand societies bloom. Countries already work this way: part of a whole, never the whole itself.
2. A protopia budget — Resources dedicated to solving the most pressing problems, one by one. Prioritization research (like Coefficient and aligned orgs) helps decide where those resources go.
3. Viatopia as shared culture — A collective north star that motivates without dictating the destination.
The hard part is (3). Shared values usually emerge from history, not design. But we've done it before: the UN Charter created alignment without uniformity.
So here's a sketch — principles any org building superintelligence should sign:
Not a utopia. A compass.