People often appeal to Intelligence Explosion / Recursive Self-Improvement as a win condition for current model developers; e.g., Dario argues that Recursive Self-Improvement could enshrine the US's lead over China.
This seems non-obvious to me. For example, suppose OpenAI trains GPT 6, which trains GPT 7, which trains GPT 8. A fast follower could then take GPT 8 and use it to train GPT 9. In this case, the fast follower ends up with the lead while having spent far less on R&D (since they didn't have to develop GPT 6, 7, or 8 themselves).
I guess people are assuming that OpenAI will be able to prevent GPT 8 from helping competitors? But has anyone argued why they would be able to do that, either legally or technically?
The lead could also break down if someone steals the model weights, which seems likely to happen.
They could deploy their best models exclusively internally, or limit the volume of inference that external users can run, especially if running automated AI researchers to do R&D is compute-intensive.
There are already present-day versions of this dilemma: OpenAI claims that DeepSeek used OpenAI model outputs to train its own models, and OpenAI does not reveal its reasoning models' full chains of thought, partly to prevent competitors from using them as training data.