Weirdly aggressive reply.
First of all, the AI 2027 authors disagree among themselves about the numbers; Lifland's median is closer to 2031. I have a good amount of uncertainty myself, so I wouldn't be shocked if, say, we don't get the intelligence explosion for another decade or so.
"you've predicted a 95-trillion-fold increase in AI research capacity under a 'conservative scenario.'" is false. I was just giving that as an example of the rapid exponential growth.
So the answer, in short, is that I'm not very confident in extremely rapid growth within the next few years. I'd probably put the probability of 10%+ GDP growth by 2029 below 50%.
I mean, that might help with a few problems, but it doesn't help with a lot of them. Also, it just seems so crazy. Giving up a whole axiology to hold on to an intuition that isn't even very widely shared? Giving up the idea that the world would be better if it had lots of extra happy people and every existing person was a million times better off?
Awesome post!