I want to push back specifically on P1 as it applies to X-risk reduction.

The longtermist case for X-risk reduction doesn't require estimating "all effects until the end of time"

Your list of "crucial factors" to consider (digital sentience, space colonization, alien civilizations, etc.) frames longtermism as requiring a comprehensive estimate of net welfare across all possible futures. But this mischaracterizes the actual epistemic claim behind X-risk reduction work.

The claim is narrower: we can identify a specific, measurable problem (probability of extinction) and implement interventions that plausibly reduce it. This is structurally similar to how we approach any tractable problem — not by modeling all downstream effects, but by identifying a variable we care about and finding levers that move it in the right direction.

You might respond: "But whether reducing extinction probability is good requires all those judgment calls about future welfare." I'd argue this conflates two questions:

  1. Can we reduce P(extinction)? (Empirical, tractable)
  2. Is reducing P(extinction) net positive? (The judgment call you're highlighting)

For (2), I'd note that the judgment call isn't symmetric. Extinction forecloses all option value — including the option for future agents to course-correct if we've made mistakes. Survival preserves the ability to solve new problems. This isn't a claim about net welfare across cosmic history; it's a claim about preserving agency and problem-solving capacity.

The civilizational paralysis problem

If universal adoption of cluelessness would cause civilizational collapse, this isn't just a reductio — it suggests cluelessness fails as action-guidance at the collective level. You might say individuals can act on non-longtermist grounds while remaining longtermist-clueless. But this concedes that something must break the paralysis, and I'd argue that "preserve option value / problem-solving capacity" is a principled way to do so that doesn't require the full judgment-call apparatus you describe.

I'm genuinely uncertain here and would be curious how you'd respond to the option-value framing specifically.

I strongly agree with this assessment, and this is why we created a new, non-technical version of ML4Good — ML4Good Governance — which is no longer targeted at aspiring researchers.

I think advocacy is much more needed here than additional technical effort - see this.

I've written a new post that I think explains the risks much more concretely: https://www.lesswrong.com/posts/4PpRp589zJGEbDhxX/are-we-dropping-the-ball-on-recommendation-ais