See this explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
I post here about preventing unsafe AI.
Note that I'm no longer part of EA, because of overreaches I saw during my time in the community (core people leading technocratic projects with ruinous downside risks, a philosophy based around influencing consequences over enabling collective choice-making, and a culture bent on proselytising both, while not listening deeply enough to integrate other perspectives).
like AI & ML VC deal activity being <30% and Anthropic valuation <$30B
My preference was for the former metric (based on the PitchBook-NVCA Venture Monitor), plus another metric based on some threshold for the absolute amount Anthropic or OpenAI raised in a next round (which Marcus reasonably pointed out could be triggered if the company just decided to do some extra top-up round).
I was okay with using Marcus’ Anthropic valuation metric with the threshold set higher and combined with another possible metric. My worry was that Anthropic execs would not let their valuation be marked down unless they were absolutely forced to offer shares at a lower price; a bit like homeowners holding on to their house during a downturn unless their mortgage forces them to sell.
I kinda liked the Y Combinator option in principle, but I guessed that applicants for the summer 2025 program would already start to get selected around now, so it would not pick up on a later crash. Also, YC feels like the center of the AI hype to me, so I worried they’d be the last to give way (Marcus thought YC staff have their finger on the pulse and could change decisions fast, which would make YC more of a leading indicator).
Good question.
Marcus and I did a lot of back and forth on potential criteria. I started by suggesting metrics that capture a decline in investments into AI companies. Marcus, though, was reasonably trying to avoid metrics that could be driven by interest rates, tariffs, or broad market swings.
So the criteria we have here are a result of compromise.
The revenue criteria are rather indirect for capturing my view on things. I think that if OpenAI and Anthropic each continue to make $5+ billion in yearly losses (along with losses by other model developers), investors would decline to invest, which in turn would lead to a reduction in investment into AI data centers (and a corresponding reduction in Nvidia’s revenue). I also think that OpenAI and Anthropic are facing competition from cheaper or free models that often function ‘good enough’ for users, and that particularly during a larger US recession, people will be motivated to cancel their ‘luxury’ subscriptions. Though people can get locked into using personalised chatbots.
I did ask for high betting odds, so there’s a trade-off here!
Apr: Californian civil society nonprofits
This petition has the most rigorous legal arguments in my opinion.
Others I know also back a block (#JusticeForSuchir, Ed Zitron, Stop AI, creatives for copyright). What’s cool is how diverse the backers are, from skeptics to doomers, and from tech whistleblowers to creatives.
Frankly, because I'd want to profit from it.
The odds of 1:7 imply a 12.5% chance of a crash, and I think the chance is much higher (elsewhere I posted a guess of 40% for this year, though I did not have precise crash criteria in mind there, and I would lower that percentage once a crash is judged by a few specific measures rather than by my sense of "that looks like a crash").
That 12.5% figure is far outside the consensus on this Metaculus page. Though I notice that their criteria for a "bust or winter" are much stricter than where I'd set the threshold for a crash. Still, that makes me wonder whether I should have selected an even lower odds ratio. Regardless, this month I'm prepared to take this bet.
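To spell out the conversion (a minimal sketch, assuming I stake 1 unit against Marcus' 7, which is how I read the 1:7 odds):

$$p_{\text{implied}} = \frac{1}{1 + 7} = \frac{1}{8} = 12.5\%$$

So any credence above 12.5% makes the bet positive in expectation for my side.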
I adjusted my guesstimate of winning down to a quarter.
I now guess it's more like a 1/8 chance (meaning that, from my perspective, Marcus will win this bet on expectation). It is pretty hard to imagine so many paying customers going away, particularly as revenues have been growing over the last year.
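Under the same assumed stakes (my 1 unit against Marcus' 7), 1/8 is exactly the break-even credence:

$$\mathbb{E}[\text{my payoff}] = \tfrac{1}{8} \cdot 7 - \tfrac{7}{8} \cdot 1 = 0$$

So at 1/8 the bet is at best break-even for me, and any credence below that means Marcus wins on expectation.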
Marcus has thought this one through carefully, and I'm naturally sticking to the commitment. If we end up seeing a crash down the line, I invite all of you to consider with me how to make maximum use of that opportunity!
I still think a crash is fairly likely, but also that, even if there is a large slump in investment across the industry, most customers could end up continuing to pay for their subscriptions.
The main problem I see is that OpenAI and Anthropic are losing money on the products they sell, which are facing commodification (i.e. downward pressure on prices). But unless investments run dry soon, they can continue for some years and eventually find ways to lock in customers (e.g. through personalisation) and to monetise them (e.g. through personalised ads).