Remmelt

Research Coordinator @ Stop/Pause AI area at AI Safety Camp
1302 karma · Joined · Working (6-15 years)

Bio

See explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable

I post here about preventing unsafe AI. 

Note that I'm no longer part of EA, because of overreaches I saw during my time in the community: core people leading technocratic projects with ruinous downside risks, a philosophy based around influencing consequences rather than enabling collective choice-making, and a culture bent on proselytising both of these while not listening deeply enough to integrate other perspectives.

Sequences (4)

Preparing for an AI Market Crash
Bias in Evaluating AGI X-Risks
Developments toward Uncontrollable AI
Why Not Try Build Safe AGI?

Comments (278)

Topic contributions (6)

I just expanded the text:

On the one hand, it was a major contribution for a leading AI company to speak out against the moratorium as stipulated. On the other hand, Dario himself started advocating for minimal regulation. He recommended mandating a transparency standard along the lines of RSPs, adding that state laws "should also be narrowly focused on transparency and not overly prescriptive or burdensome".[11] Given that Anthropic had originally described SB 1047's requirements as 'prescriptive' and 'burdensome', Dario was effectively arguing for the federal government to prevent any state from passing any law as demanding as SB 1047.

You’re right. I totally skipped over this.

Let me try to integrate that quote into this post. 

I adjusted my guesstimate of winning down to a quarter.

I now guess it's more like 1/8 chance (meaning that from my perspective Marcus will win this bet on expectation). It is pretty hard to imagine so many paying customers going away, particularly as revenues have been growing in the last year.
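To make the 'on expectation' point concrete, here's a minimal sketch of the arithmetic; the stakes and odds below are made-up placeholders, since the bet's actual terms aren't restated in this comment:

```python
# Hypothetical numbers for illustration only -- not the bet's actual terms.
p_win = 1 / 8            # my updated probability that the crash criteria trigger
my_stake = 1_000         # what I lose if there's no crash (placeholder)
marcus_stake = 4_000     # what I win if the crash criteria trigger (placeholder)

expected_value = p_win * marcus_stake - (1 - p_win) * my_stake
print(f"My expected value: {expected_value:+.0f}")  # -375: negative for me

# At a 1/8 chance of winning, I'd need better than 7:1 odds to break even,
# which is why, from my own estimate, Marcus wins this bet on expectation.
break_even_odds = (1 - p_win) / p_win  # 7.0
```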

Marcus has thought this one through carefully, and I'm naturally sticking to the commitment. If we end up seeing a crash down the line, I invite all of you to consider with me how to make maximum use of that opportunity!

I still think a crash is fairly likely, but also that even if there is a large slump in investment across the industry, most customers could end up continuing to pay for their subscriptions.

The main problem I see is that OpenAI and Anthropic are losing money on the products they sell, which are facing commodification (i.e. downward pressure on prices). But unless investments run dry soon, they can continue for some years and eventually find ways to lock in customers (e.g. through personalisation) and to monetise them (e.g. through personalised ads).
 

like AI & ML VC deal activity being <30% and Anthropic valuation <$30B 

My preference was for the former metric (based on the PitchBook-NVCA Venture Monitor), and for another metric based on some threshold for the absolute amount Anthropic or OpenAI would raise in their next investment round (which Marcus reasonably pointed out could be triggered if the company just decided to do an extra top-up round).

I was okay with using Marcus’ Anthropic valuation metric with the threshold set higher, and combined with another possible metric. My worry was that Anthropic execs would not allow their valuation to be lowered unless they were absolutely forced to offer shares at a lower price, a bit like homeowners holding on to their house during a downturn unless their mortgage forces them to sell.

I kinda liked the YCombinator option in principle, but I guessed that applicants for the summer 2025 program would already start to get selected around now, so it would not pick up on a later crash. Also, YC feels like the center of the AI hype to me, so I worried that they’d be the last to give way (Marcus thought YC staff have their finger on the pulse and could change decisions fast, which made YC more of a leading indicator for him).
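As an aside, here's a minimal sketch of how threshold-style resolution criteria like the ones quoted above could be checked. The exact thresholds, the baseline for deal activity, and the 'or' combination are my placeholder assumptions here, not the terms Marcus and I actually settled on:

```python
# Placeholder thresholds for illustration -- not the bet's actual resolution terms.
DEAL_ACTIVITY_THRESHOLD = 0.30   # AI & ML VC deal activity below 30% of some baseline
VALUATION_THRESHOLD = 30e9       # Anthropic priced below $30B in a funding round

def crash_triggered(deal_activity_ratio: float, anthropic_valuation_usd: float) -> bool:
    """Resolve 'crash' if either metric falls below its threshold."""
    return (deal_activity_ratio < DEAL_ACTIVITY_THRESHOLD
            or anthropic_valuation_usd < VALUATION_THRESHOLD)

# Example: deal activity at 45% of baseline and Anthropic at $60B -> no crash yet.
print(crash_triggered(0.45, 60e9))  # False
```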

Right, I don’t have a high income, and also have things in my personal life to take care of. 

Good question. 

Marcus and I did a lot of back and forth on potential criteria. I started by suggesting metrics that capture a decline in investments into AI companies. Marcus, though, was reasonably trying to avoid metrics that could be driven by interest rates, tariffs, or broad market swings.

So the criteria we have here are a result of compromise.

The revenue criteria are rather indirect for capturing my view on things. I think that if OpenAI and Anthropic each continue to make $5+ billion yearly losses (along with losses by other model developers), investors will decline to invest, which in turn would lead to a reduction in investment in AI data centers (and a corresponding reduction in Nvidia’s revenues). I also think that OpenAI and Anthropic are facing competition from cheaper/free models that often function ‘good enough’ for users, and that particularly during a larger US recession, people will be motivated to cancel their ‘luxury’ subscriptions. Though people can get locked into using personalised chatbots.

I did ask for high betting odds, so there’s a trade-off here!

It’s bad to support a race here, given that no-one has a way to safely constrain open-endedly learning autonomous machinery, and there are actual limits to control.
