This is indeed a rigorously argued article.
After reading it, I believe the growth potential of artificial intelligence (AI) is real, that AI has already begun to change our productivity, and that this impact will continue to expand.
However, extrapolating AI's future scalability and productivity impact from historical capability-growth data may be somewhat simplistic.
AI progress is uneven across fields: it may already have achieved significant results in areas such as programming, yet still require a long period of research in embodied AI.
For example, for AI to eventually achieve full automation of industrial production, and thereby substantially free up human labor, it needs online learning capabilities. Production scenarios require continuous iteration of behavior strategies, whether that means updating a behavioral pattern within a complex production process (common in modern pipelines) or producing highly customized products. Research on online learning of this kind remains unclear at present.
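To make concrete what I mean by online learning here, a minimal sketch (the drifting linear process and all names are my illustration, not anything from the article): a controller that updates its parameters incrementally from each new observation, instead of being retrained offline in batches.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(3)    # controller parameters, learned on the fly
lr = 0.1           # step size for each incremental update

w_true = np.array([1.0, -2.0, 0.5])        # hidden process parameters
for t in range(5000):
    w_true += 0.001 * rng.normal(size=3)   # the process slowly drifts
    x = rng.normal(size=3)                 # new sensor reading arrives
    y = w_true @ x                         # observed outcome
    w += lr * (y - w @ x) * x              # one online SGD (LMS) step per sample

print("final parameter gap:", np.linalg.norm(w - w_true))
```

The point is that the update happens inside the production loop itself; whether current methods can do this safely and stably for rich policies, rather than a toy linear model, is exactly the open question.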
Of course, this is just an intuitive conjecture, not a genuine prediction.
This is truly an excellent article. I largely agree that the field of artificial intelligence needs more funding, and I also feel we need to broaden the areas we focus on.
Specifically, aside from RLHF alignment for LLMs and constitutional AI research (which is certainly important), I think alignment for embodied AI may be receiving insufficient attention.
Models like VLA follow a very different action-generation logic from single-step autoregressive LLMs, and I believe the alignment methods they require are different as well.
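As a rough illustration of that interface difference (all shapes and names here are made up for the sketch, not any real model's API): an LLM exposes one discrete token per step, which is where RLHF-style preference signals attach, while a VLA-style policy emits chunks of continuous joint commands with no obvious token-level point at which to attach a reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# LLM interface: one discrete action (token) per step, sampled from a
# categorical distribution over the vocabulary; a preference model can
# score whole completions built from these steps.
logits = rng.normal(size=50_000)
probs = np.exp(logits - logits.max())
probs /= probs.sum()
next_token = rng.choice(len(probs), p=probs)

# VLA-style interface: a chunk of continuous control commands conditioned
# on fused vision/language features; errors compound physically across the
# chunk before any feedback arrives.
obs = rng.normal(size=512)                   # stand-in for fused perception features
head = rng.normal(size=(16, 7, 512)) * 0.01  # illustrative linear action head
action_chunk = head @ obs                    # 16 timesteps x 7 joint commands
print(next_token, action_chunk.shape)        # one discrete token vs. a (16, 7) block
```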
Furthermore, embodied VLA models will be deployed in environments more prone to physical safety incidents, such as factories, so large-scale deployment carries a higher incident risk than it does for LLMs.
This is truly an excellent article. I strongly agree that we need to maintain epistemic humility when predicting when AGI will arrive, but I also understand that people always crave a precise prediction, even on insufficient evidence; we are uneasy in the face of the unknown.

However, I believe that even when the distribution over future states is unknown, we are not without reasonable decision-making strategies. I was inspired by Alexander Turner's "Optimal Policies Tend to Seek Power," which shows that when the reward function is drawn at random, optimal policies tend to move toward states that keep more options open. This has been very helpful in my own decision-making: even when the future environment is unknown, choosing the node with more reachable branches is generally a good move.
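To make the option-keeping intuition concrete, a toy calculation (my own example in the spirit of Turner's result, not taken from the paper): draw terminal rewards at random and compare the expected best achievable reward from a node that keeps three terminals reachable versus one committed to a single terminal.

```python
import numpy as np

rng = np.random.default_rng(0)

# From node A three terminal states remain reachable; from node B only one.
reachable = {"A": [0, 1, 2], "B": [0]}

n_draws = 100_000
value = {s: 0.0 for s in reachable}
for _ in range(n_draws):
    r = rng.uniform(0.0, 1.0, size=3)             # reward of each terminal, drawn at random
    for s, terminals in reachable.items():
        value[s] += max(r[t] for t in terminals)  # act optimally once rewards are revealed

for s, v in value.items():
    print(s, round(v / n_draws, 3))
# A -> ~0.75 (expected max of three uniforms), B -> ~0.50:
# the node that keeps more branches open is worth more in expectation.
```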
I strongly agree with the author. Long-horizon predictions in chaotic systems (say, about events three years out) are, in most cases, a form of self-comfort, a way of resisting the uncertainty of the future; essentially, they are psychological reassurance through manufactured certainty rather than rigorous, systematic argument.
Specifically, nonlinear dynamics has the concept of the Lyapunov exponent, with meteorology as the classic application: beyond roughly two weeks, a weather forecast is almost indistinguishable from a random walk or from simply looking up historical daily average temperatures.
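For reference, the standard back-of-the-envelope bound (a textbook relation, not something from the article): if nearby trajectories separate as $\delta(t) \approx \delta_0 e^{\lambda t}$ with largest Lyapunov exponent $\lambda$, a forecast stays within tolerance $\Delta$ only for about

$$
t_{\text{pred}} \approx \frac{1}{\lambda} \ln \frac{\Delta}{\delta_0}.
$$

The logarithm is the cruel part: shrinking the initial uncertainty $\delta_0$ by a factor of a thousand extends the horizon only by an additive $\ln(1000)/\lambda$, a few characteristic times.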
However, in the high-dimensional, complex dynamics of real human society and technological iteration, there is no theoretical reason to expect this prediction window to be much longer. And we cannot rigorously verify the effectiveness of our predictions beforehand, that is, we cannot prove that our predictive power exceeds random sampling. This has nothing to do with the quality of the argument; it is determined by the nature of chaotic systems.
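A toy demonstration of the same effect (the logistic map here is my stand-in for any chaotic system): two initial conditions differing by $10^{-12}$ track each other for a while, then decorrelate completely, no matter how exactly each step is computed.

```python
# Sensitive dependence in the logistic map x_{n+1} = r x (1 - x), r = 4 (chaotic).
r = 4.0
x, y = 0.3, 0.3 + 1e-12        # two nearly identical initial conditions

for n in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 0:
        print(f"step {n:2d}: |x - y| = {abs(x - y):.3e}")

# The gap grows roughly like e^{lambda n} (lambda = ln 2 for r = 4) until it
# saturates at order 1: past that horizon, y is useless as a forecast of x.
```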