
Remmelt

Research Coordinator @ Stop/Pause AI area at AI Safety Camp
1160 karma · Joined · Working (6-15 years)

Bio

See explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety. 

Sequences (4)

Preparing for an AI Market Crash
Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments (265)

Topic contributions (5)

Just because they didn't invest money in the Stargate expansion doesn't mean they aren't reserving the option to do so later if necessary... If you believe that the US government will prop up AI companies to virtually any level they might realistically need by 2029, then I don't see a crash happening.


Thanks for reading, and for your thoughts.

I disagree, but I want to be open to changing my mind if we see e.g. the US military ramping up contracts, or the US government propping up AI companies with funding at the level of say the $280 billion CHIPS Act.

This is clarifying context, thanks. It's a common strategy for tech start-ups to run in the red for years while building a moat around themselves (particularly through network effects). Amazon built a moat by drawing vendors and buyers into its platform while reducing logistics costs, and Uber by drawing taxi drivers and riders onto its platform. Tesla started out with a technological edge.

Currently, I don't see a strong case that OpenAI and Anthropic are building up a moat.
–> Do you have any moats in mind that I missed? Curious.

Network effects aren't much of a moat here, since their users mostly use the tools by themselves (though their prompts are used to improve the tools; I'm not sure how much). It doesn't seem like a big deal for most users to switch to, say, a competing chat or image-generation tool. Potentially, current ChatGPT or Claude users can later move to using new model-based tools that are profitable for those AI companies. But as it stands, OpenAI and Anthropic are losing money on existing users on one end, while being under threat of losing users to cheap model alternatives on the other. It's not clear that the head start they got on releasing increasingly extractive, general-use models means they are going to be the 'winners'. Maybe their researchers will be the ones to come up with new capability breakthroughs that will somehow be used to maintain an industry edge (incl. in e.g. military applications). But over the last two years, the gap in user functionality between newer versions of Claude and ChatGPT and cheaper competing models (like Meta's and DeepSeek's) has mostly been closing. OpenAI sank hundreds of millions of dollars over 18 months into a model that was not worth calling GPT-5, and meanwhile other players caught up on the model functionality of GPT-4.

OpenAI seems reflective of an industry where investment far outstrips user demand, as happened during the dotcom bubble.

This is not to say that there could not be large-model developers with at least tens of billions of dollars in yearly profit within the next decade. That is what current investments and continued R&D are aimed towards. It seems the default scenario. Personally, I'll work hard to prevent that scenario, since at that point restricting the development of increasingly unscoped (and harmful) models will basically be intractable.

Ah, it's meant to be the footnotes. Let me edit that to be less confusing.

Update: back up to 70% chance.

Just spent two hours compiling different contributing factors. Now that I've weighed those factors up more comprehensively, I don't expect to change my prediction by more than ten percentage points over the coming months. Though I'll write here if I do.

My prediction: 70% chance that by August 2029 there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will continue for at least three months.

 

For:

  • Large model labs losing money
    • OpenAI made a loss of ~$5 billion last year.
      • It takes in most of the industry's consumer and enterprise revenue, but that still came to only ~$3.7 billion.
      • The GPT-4.5 model is the result of 18 months of R&D, but offers only a marginal improvement in output quality while being even more compute-intensive.
      • If OpenAI, as the supposed industry leader, publicly fails, this could undermine the investment narrative of AI as a rapidly improving and profitable technology, and trigger a market meltdown.
    • Commoditisation
      • Other models, by Meta and others, are around as useful for consumers.
      • DeepSeek undercuts US-designed models with compute-efficient open-weights alternative.
    • Data center overinvestment
      • Microsoft cut at least 14% of planned data center expansion.
  • Subdued commercial investment interest.
    • Some investment-firm analysts are skeptical, and the second-largest VC firm, Sequoia Capital, has also made the case that returns are lacking relative to the scale of investment ($600+ billion).
    • SoftBank is the main other backer of the Stargate data center expansion project, and needs to raise debt to cover ~$18 billion. OpenAI also needs to raise more investment funds next round to cover ~$18 billion, and it is an open question whether there is enough investor interest.
  • Uncertainty about US government funding.
    • Mismatch between US Defense interest and what large model labs are currently developing.
      • Model 'hallucinations' get in the way of deployment of LLMs on the battlefield, given reliability requirements.
        • On the other hand, this hasn't prevented partnerships and attempts to deploy models.
      • Interest in data analysis of integrated data streams (e.g. by Palantir) and in self-navigating drone systems (e.g. by Anduril).
        • The Russo-Ukrainian war and the Gaza invasion have been testbeds, but relatively rudimentary and straightforward AI models are being used there (Ukraine's drones are still mostly remotely operated by humans, and Israel used an LLM for shoddy target identification).
    • No clear sign that US administration is planning to subsidise large model development.
      • The Stargate deal announced by Trump did not involve the government chipping in money.
  • Likelihood of a (largish) US economic recession by 2029.
    • Debt/misinvestment overload after a long period of low interest rates.
    • Early signs, but nothing definitive:
      • Inflation
      • Reduced consumer demand
      • Business uncertainty amidst changing tariffs.
    • Generative AI subscriptions seem to be a luxury expense for most people rather than essential for completing work (particularly because ~free alternatives exist to switch to, and for most users those aren't significantly different in use). Enterprises and consumers could cut back heavily on their subscriptions once facing a recession.
  • Early signs of a large progressive organising front, hindering tech-conservative alliances.
    • #TeslaTakedown.
    • Various conversations by organisers with a renewed motivation to be strategic.
      • Last few years' resurgence of 'organising for power' union efforts, overturning top-down mobilising and advocacy approaches.
    • Increasing awareness of fuck-ups in the efficiency drives by the Trump-Musk administration coalition.

Against:

  • Current US administration's strong public stance on maintaining America's edge around AI.
    • Public announcements.
      • JD Vance's speech at the renamed AI Action Summit.
    • Clearing out regulation
      • Scrapped Biden AI executive order.
      • Copyright
        • Talks, as in the UK and EU, about effectively scrapping copyright for AI training materials (via opt-out laws, or by scrapping opt-outs too).
    • Stopping enforcement of regulation
      • Removing Lina Khan as head of the FTC, which was investigating AI companies.
      • Musk's internal dismantling of departments engaged in oversight.
    • Internal deployment of AI models for (questionable) uses.
      • US IRS announcement.
      • DOGE's attempts to use AI to automate the evaluation of, and the work done by, bureaucrats.
  • The accelerationist lobby's influence has been increasing.
    • Musk, Zuckerberg, Andreessen, other network-state folks, etc, have been very strategic in
      • funding and advising politicians,
      • establishing coalitions with people on the right (incl. Christian conservatives, and channeling populist backlashes against globalism and militant wokeness),
      • establishing social media platforms for amplifying their views (X, and a network of popular independent podcasts like the Joe Rogan show).
    • Simultaneous gutting of traditional media.
  • Faltering anti-AI lawsuits
    • Signs of corruption among plaintiff lawyers,
      • e.g. in the case against Meta, where crucial arguments were not made and the judge considered not allowing class representation.
  • Defense contracts
    • The US military has a budget in the trillions of dollars, and could in principle keep the US AI corporations propped up.
      • Possibility that something changes geopolitically (war threat?), resulting in a large injection of funds.
      • My guess is that the Pentagon is already treating AGI labs such as OpenAI and Anthropic as strategic assets (to control, and possibly prop up if their existence is threatened).
    • Currently seeing cross-company partnerships.
      • OpenAI with Anduril, Anthropic with Palantir.
  • National agenda pushes to compete in various countries.
    • Incl. China, UK, EU.
    • Recent increased promotion/justification in and around US political circles of the need to compete with China.
  • New capability development
    • Given the scale of AI research happening now, it is quite possible that some teams will develop a new cross-domain-optimising model architecture that's data- and compute-efficient.
    • As researchers come to acknowledge the failure of the 'scaling laws'-focussed approach using existing transformer architectures (given limited online-available data and reduced marginal returns on compute), they will naturally look for alternative architecture designs to work on.

Update: back up to 60% chance. 

In my opinion I overreacted before when updating down to 40% (and undercompensated when updating down to 80%, which I soon afterwards thought should have been 70%).

The leader in terms of large-model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with them. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that's a blow to the perception of the industry.

A recession might also be coming this year, or at least in the next four years, which I made a prediction about before.

Update: back up to 50% chance. 

Noting Microsoft's cancelling of data center deals, and the fact that the 'AGI' labs are still losing cash and, with DeepSeek, are increasingly competing on a commodity product.

Update: 40% chance. 

I very much underestimated/missed the speed at which tech leaders would come to influence the US government through the Trump election/presidency. Got caught flat-footed by this.

I still think it's not unlikely for there to be an AI crash as described above within the next 4 years and 8 months, but it could be from levels of investment much higher than where we are now. A "large reduction in investment" at that level looks a lot different than a large reduction in investment from the level that markets were at 4 months ago.

We ended up having a private exchange about it. 

Basically, organisers spend more than half of their time on general communications and logistics to help participants get to work.

And earmarking stipends to particular areas of work seems rather burdensome administratively, though I wouldn’t be entirely against it if it means we can cover more people’s stipends.

Overall, I think we tended not to allow differentiated fundraising before because it can promote internal conflicts, rather than having people come together to make the camp great.

Answer by Remmelt

Here's how I specify terms in the claim:

  • AGI is a set of artificial components, connected physically and/or by information signals over time, that in aggregate sense and act autonomously over many domains.
    • 'artificial' as configured out of a (hard) substrate that can be standardised to process inputs into outputs consistently (vs. what our organic parts can do).
    • 'autonomously' as continuing to operate without needing humans (or any other species that share a common ancestor with humans).
  • Alignment is, at a minimum, the control of the AGI's components (as modified over time) such that they do not (with probability above some guaranteeable high floor) propagate effects that cause the extinction of humans.
  • Control is the implementation of (a) feedback loop(s) through which the AGI's effects are detected, modelled, simulated, compared to a reference, and corrected (see the illustrative sketch after this list).
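
To make the structure of that last definition concrete, here is a minimal illustrative sketch of such a feedback loop in Python. All of the names (ControlLoop, detect, model, simulate, compare, correct) are hypothetical placeholders that mirror the wording of the definition; this is not an existing or proposed implementation, only a way of showing what each step in the loop would have to do.

```python
# Illustrative sketch only: each field stands for one step in the
# 'control' definition above (detect -> model -> simulate -> compare -> correct).
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ControlLoop:
    detect: Callable[[], Any]            # detect the AGI's effects in the world
    model: Callable[[Any], Any]          # build a model of those detected effects
    simulate: Callable[[Any], Any]       # simulate how the effects would propagate
    compare: Callable[[Any, Any], Any]   # compare simulated outcomes to a reference
    correct: Callable[[Any], None]       # correct the components based on the deviation

    def step(self, reference: Any) -> None:
        """Run one pass of the feedback loop."""
        effects = self.detect()
        modelled = self.model(effects)
        predicted = self.simulate(modelled)
        deviation = self.compare(predicted, reference)
        self.correct(deviation)
```

The only point of the sketch is that every named step has to be implementable, and fast and comprehensive enough to correct effects before they propagate; the explainer linked in my bio argues why that cannot be done sufficiently for AGI.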

Update: reverting my forecast back to an 80% chance for these reasons.
