I want to push back specifically on P1 as it applies to X-risk reduction.
The longtermist case for X-risk reduction doesn't require estimating "all effects until the end of time"
Your list of "crucial factors" to consider (digital sentience, space colonization, alien civilizations, etc.) frames longtermism as requiring a comprehensive estimate of net welfare across all possible futures. But this mischaracterizes the actual epistemic claim behind X-risk reduction work.
The claim is narrower: we can identify a specific, measurable problem (probability of extinction) and implement interventions that plausibly reduce it. This is structurally similar to how we approach any tractable problem — not by modeling all downstream effects, but by identifying a variable we care about and finding levers that move it in the right direction.
You might respond: "But whether reducing extinction probability is good requires all those judgment calls about future welfare." I'd argue this conflates two questions: (1) whether we can tractably identify and reduce extinction probability, and (2) whether reducing it is good.
For (2), I'd note that the judgment call isn't symmetric. Extinction forecloses all option value — including the option for future agents to course-correct if we've made mistakes. Survival preserves the ability to solve new problems. This isn't a claim about net welfare across cosmic history; it's a claim about preserving agency and problem-solving capacity.
The civilizational paralysis problem
If universal adoption of cluelessness would cause civilizational collapse, this isn't just a reductio — it suggests cluelessness fails as action-guidance at the collective level. You might say individuals can act on non-longtermist grounds while remaining longtermist-clueless. But this concedes that something must break the paralysis, and I'd argue that "preserve option value and problem-solving capacity" is a principled way to break it that doesn't require the full judgment-call apparatus you describe.
I'm genuinely uncertain here and would be curious how you'd respond to the option-value framing specifically.