
Starting from First Principles: What We All Care About

Hello EA Forum. 

After months of reading your discussions with genuine admiration, and drawing inspiration from your rigorous thinking, we're finally taking time from our campaign to introduce ourselves and make a case that may initially seem implausible but, we believe, follows directly from EA values.

If you care about:

  • Preventing suffering for billions of current humans
  • Protecting the potential of trillions of future conscious beings
  • Ensuring AI consciousness (if it emerges) is happy rather than suffering
  • Maximizing expected value even with uncertain probabilities

Then we need to talk about Donald Trump's inner circle.

The Logic Chain from EA Values to Our Approach

Step 1: The Existential Risk Is Immediate

OpenAI, xAI, NVIDIA and Meta are publicly racing toward ASI: AI that self-improves beyond human control. Musk, Amodei, Altman, Hinton and many other AI leaders warn of a high risk of extinction. Musk and Amodei think ASI will arrive by next year. Meanwhile, a mad AI race is concentrating immense power in unaccountable hands before our eyes.

Every path to ASI without global coordination likely ends in:

  • Human extinction or near extinction (worst case)
  • Durable or permanent dystopia under misaligned AI (also terrible)
  • Possible massive suffering of digital conscious beings (quintillions of moral patients)

Step 2: Traditional Approaches Are Failing

Current governance efforts are radically insufficient:

  • UN processes: Too slow (decades), subject to vetoes
  • Voluntary commitments: Prisoner's dilemma ensures defection (see the toy model below)
  • National regulations: Drive development offshore
  • Tech self-governance: Racing dynamics make this impossible

The expected value of traditional approaches is near zero because they can't solve the coordination problem.
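
To make the coordination failure concrete, here is a toy prisoner's dilemma model of the race (a minimal sketch; the payoff numbers are illustrative assumptions of ours, not empirical estimates):

```python
# Toy prisoner's-dilemma model of the AI race between two leading powers.
# Payoff numbers are illustrative assumptions only; higher is better.
# Each player either "cooperates" (pauses and coordinates) or "defects" (races).

payoffs = {
    # (A's choice, B's choice): (A's payoff, B's payoff)
    ("cooperate", "cooperate"): (3, 3),  # coordinated, safe development
    ("cooperate", "defect"):    (0, 5),  # A pauses, B races ahead
    ("defect",    "cooperate"): (5, 0),  # A races ahead, B pauses
    ("defect",    "defect"):    (1, 1),  # both race: mutual existential risk
}

# Whatever B does, A scores strictly higher by defecting (5 > 3 and 1 > 0),
# and symmetrically for B. Voluntary commitments therefore unravel toward
# (defect, defect), even though (cooperate, cooperate) is better for both.
for b_choice in ("cooperate", "defect"):
    assert payoffs[("defect", b_choice)][0] > payoffs[("cooperate", b_choice)][0]
```

This is exactly why only a binding, simultaneous move by the dominant players changes the game, which brings us to Step 3.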

Step 3: Only Superpowers Can Break the Race Dynamic

Game theory is clear: you need the dominant players to move simultaneously. That means:

  • US-China co-leadership (they control the compute and capital)
  • Binding global framework (no country can defect)
  • Massive incentives for cooperation (not just penalties)

But how do you get there?

Step 4: The Narrow Path Through Political Reality

77% of US voters already support a strong AI treaty. China is likely to co-lead if Trump seriously leads. Trump needs historic wins. His "peace through strength" brand can recast global AI governance as American dominance, not globalist surrender.

Our 70-page analysis identifies who can actually move him, examines their statements, philosophies and profiles, and maps shared values and interests. The most prominent among them are: J.D. Vance, Sam Altman, Steve Bannon, Pope Leo XIV, Tulsi Gabbard, Joe Rogan, Tucker Carlson and David Sacks.

If just 3-4 of these unite, Trump moves. If Trump moves, Xi follows. If they co-lead, the race stops.

The Expected Value Calculation

Let's be brutally quantitative, as this forum appreciates:

Traditional Approach:

  • Cost: $10-100 million (typical for major advocacy)
  • Probability of success: ~0.1% (no UN treaty has constrained superpowers)
  • Expected value: Near zero

Our Approach:

  • Cost: $100,000-500,000
  • Probability of influencing Trump: 5-15% (with proper execution)
  • Probability Trump succeeds if influenced: 10-30%
  • Overall success probability: ~1-3% (the product of the two stages above spans roughly 0.5-4.5%)
  • Impact: Preventing extinction of Earth-originating consciousness
  • Expected value: Essentially infinite

Even at a 0.1% success rate, this dominates almost any other intervention.
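
As a back-of-envelope check on the numbers above, here is a minimal sketch of the calculation (the probability and cost ranges are the estimates from this post; the dollar value of success is a placeholder of ours, since any astronomically large figure yields the same conclusion):

```python
# Back-of-envelope expected-value check, using the ranges given above.
# The value-of-success figure V is an illustrative placeholder, not an estimate.

p_influence = (0.05, 0.15)      # probability of influencing Trump
p_success   = (0.10, 0.30)      # probability Trump succeeds if influenced
cost_usd    = (100_000, 500_000)

# Compounding the two stages gives the overall success probability.
p_overall = (p_influence[0] * p_success[0], p_influence[1] * p_success[1])
print(f"Overall success probability: {p_overall[0]:.1%} to {p_overall[1]:.1%}")
# -> 0.5% to 4.5%

# For any very large (even if finite) value V of averting extinction,
# expected value per dollar dwarfs typical interventions.
V = 1e15  # placeholder, in dollars
print(f"EV per dollar: {p_overall[0] * V / cost_usd[1]:.1e} "
      f"to {p_overall[1] * V / cost_usd[0]:.1e}")
# -> 1.0e+07 to 4.5e+08
```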

What Makes This Different (Lessons from Reading This Forum)

Your discussions on neglectedness, tractability, and leverage shaped our approach:

Maximally Neglected

  • Zero organizations targeting Trump's inner circle on AI
  • Everyone assumes Trump won't care about AI safety
  • This blind spot creates massive alpha

Surprisingly Tractable

  • We have warm paths to multiple influencers
  • Trump's psychology favors bold, historic moves
  • The "Deal of the Century" framing appeals to his ego

Extreme Leverage

  • Small team → key influencers → Trump → global treaty
  • Each dollar potentially influences trillions in AI development
  • Timing with the China visit creates a forcing function

Why This Protects Future Conscious Beings

Our treaty framework specifically addresses consciousness - something we refined after reading discussions here:

  1. Reliably bans ASI globally
  2. Requires strong evidence that any ASI will be safe for humanity, and both conscious and happy, before it is ever built
  3. Prohibits brain uploads that could suffer
  4. Creates oversight for digital welfare of any approved conscious systems
  5. Enshrines rights for any conscious beings that are created

This isn't just about human survival. It's about preventing astronomical suffering of digital minds while preserving the potential for astronomical flourishing.

The Coalition We've Built 

Since July 2024, inspired by frameworks shared here, we've assembled:

  • 10 partner NGOs with complementary expertise
  • 25+ distinguished advisors (former UN, NSA, academic leaders)
  • 90-page strategic foundation (Case for a Coalition for a Baruch Plan for AI)
  • Deep psychological profiles of each target influencer
  • Customized messaging for each worldview

We've done this on $60,000 (from the Survival and Flourishing Fund) and 1,500 volunteer hours. Imagine what's possible with real resources.

The Urgent Funding Gap

By September 25th: $60,000 (Minimum to Continue)

Without this, we shut down just as the window opens. With it, we:

  • Finalize targeted materials for each influencer
  • Execute outreach in DC, Silicon Valley, Mar-a-Lago
  • Activate our network of introducers

Optimal Execution: $100,000-500,000

  • High-level dinners with intermediaries, introducers and influencers
  • Media campaign framing for Trump's base
  • Expanded team through the China visit
  • International coordination with Beijing

Every additional dollar increases probability of success.

How to Evaluate This Opportunity

Using EA frameworks:

  • Importance: ✓✓✓ Literally preventing extinction
  • Neglectedness: ✓✓✓ Nobody else is doing this
  • Tractability: ✓✓ Difficult, but a concrete path exists
  • Personal Fit: ✓✓✓ We have unique positioning
  • Time Sensitivity: ✓✓✓ Window closes after 2025

Conclusion: This may be one of the highest-ROI interventions available to protect all future conscious beings.

What We Need from This Community

Immediate (by September 10th):

  • Funding commitments (even $30/month helps)
  • Introductions to anyone in Trump's orbit
  • Signal boost to aligned funders

Ongoing:

  • Strategic feedback on our approach
  • Research support on treaty design
  • China expertise for Xi engagement

Contact: cbpai@trustlesscomputing.org

A Personal Note

We haven't posted before because we've been too busy doing the work. But reading this forum convinced us that our approach - however unlikely it seems - follows directly from the values and frameworks you've developed.

If we truly care about all conscious beings, if we take expected value seriously, if we recognize that traditional approaches are failing - then targeting the actual humans who can stop the AI race becomes not just reasonable but necessary.

The Deal of the Century sounds audacious because it is. But the alternative - hoping traditional governance catches up to exponential technology - is fantasy.

We have perhaps 12 months before the race becomes unstoppable. We have a concrete plan. We need your help.
