See explainer on why AGI could not be controlled enough to stay safe:
https://www.lesswrong.com/posts/xp6n2MG5vQkPpFEBH/the-control-problem-unsolved-or-unsolvable
Note: I am no longer part of EA because of the community’s/philosophy’s overreaches. I still post here about AI safety.
This is clarifying context, thanks. It's a common strategy for tech start-ups to go red for years while they build a moat around themselves (particularly through network effects). Amazon built a moat by drawing vendors and buyers onto its platform while reducing logistics costs, and Uber by drawing taxi drivers and riders onto its platform. Tesla started out with a technological edge.
Currently, I don't see a strong case that OpenAI and Anthropic are building up a moat.
–> Do you have any moats in mind that I missed? Curious.
Network effects aren't much of a moat here, since users mostly use the tools on their own (though their prompts are used to improve the tools; I'm not sure how much). It doesn't seem like a big deal for most users to switch to a competing chat or image-generation tool, say. Potentially, current ChatGPT or Claude users could later move to new model-based tools that are profitable for those AI companies. But as it stands, OpenAI and Anthropic are losing money on existing users on one end, while being under threat of losing users to cheap model alternatives on the other. It's not clear that the head start they got on releasing increasingly extractive, general-use models will make them the 'winners'. Maybe their researchers will be the ones to come up with new capability breakthroughs that somehow get used to maintain an industry edge (incl. in e.g. military applications).
But over the last two years, the gap in user functionality between newer versions of Claude and ChatGPT and cheaper competing models (like Meta's and DeepSeek's) has been closing. OpenAI sank hundreds of millions of dollars over 18 months into a model that was not worth calling GPT-5, and meanwhile other players caught up on the model functionality of GPT-4.
OpenAI seems reflective of an industry where investment far outstrips user demand, as happened during the dotcom bubble.
This is not to say that there could not be large-model developers with at least tens of billions of dollars in yearly profit within the next decade. That is what current investments and continued R&D are aimed towards. It seems the default scenario. Personally, I'll work hard to prevent that scenario, since at that point restricting the development of increasingly unscoped (and harmful) models will basically be intractable.
Update: back up to 70% chance.
Just spent two hours compiling different contributing factors. Now that I've weighed those factors up more comprehensively, I don't expect to change my prediction by more than ten percentage points over the coming months. Though I'll write here if I do.
My prediction: 70% chance that by August 2029 there will be a large reduction in investment in AI and a corresponding market crash in AI company stocks, etc., and that both will persist for at least three months.
For:
Against:
Update: back up to 60% chance.
I overreacted before, IMO, in updating down to 40% (and undercompensated when updating down to 80%, which I soon after thought should have been 70%).
The leader in terms of large-model revenue, OpenAI, has basically failed to build something worth calling GPT-5, and Microsoft is now developing more models in-house to compete with it. If OpenAI fails in its effort to combine its existing models into something new and special (likely), that's a blow to the perception of the industry.
A recession might also be coming this year, or at least in the next four years, which I made a prediction about before.
Update: 40% chance.
I very much underestimated/missed the speed at which tech leaders would influence the US government through the Trump election/presidency. I got caught flat-footed by this.
I still think it's not unlikely that there will be an AI crash as described above within the next 4 years and 8 months, but it could be from levels of investment much higher than where we are now. A “large reduction in investment” at that level looks a lot different from a large reduction in investment from the level that markets were at 4 months ago.
We ended up having a private exchange about it.
Basically, organisers spend more than half of their time on general communications and logistics to help participants get to work.
And earmarking stipends to particular areas of work seems rather burdensome administratively, though I wouldn’t be entirely against it if it means we can cover more people’s stipends.
Overall, I think we tended not to allow differentiated fundraising before because it can promote internal conflict, rather than bringing people together to make the camp great.
Here's how I specify terms in the claim:
Thanks for reading and your thoughts.
I disagree, but I want to be open to changing my mind if we see e.g. the US military ramping up contracts, or the US government propping up AI companies with funding at the level of, say, the $280 billion CHIPS Act.