The roboticist Rodney Brooks is perhaps best known for founding iRobot, which makes the Roomba robot vacuum. Brooks' academic paper "Elephants Don't Play Chess" — and its slogan "the world is its own best model" — is well-known in philosophy of mind and cognitive science.
On May 5, 2025, Brooks published a blog post titled "Parallels between Generative AI and Humanoid Robots". Out of respect for Brooks' copyright, I won't share the full text of his blog post here, but I will pull some choice quotes.
On the wow factor of LLMs:
People interact with a Large Language Model (LLM), generating text on just about any subject they choose. And it generates facile language way better and more human-like than any of the previous generations of chatbots that have been developed over the last sixty years. It is the classic con. A fast talker convinces people that there is more to them than there really is. So people think that the LLMs must be able to reason, like a person, must be as knowledgeable as any and all people, and therefore must be able to do any white-collar job, as those are the jobs that require a person to be facile with language.
... It is the apparent human-ness of these two technologies [generative AI and humanoid robots] that both lures people in and then promises human-level performance everywhere, even when that level has not yet been demonstrated. People think that surely it is just a matter of time.
On the "sin of extrapolation":
In my analysis above I pointed to Generative AI hype being overestimated because it shows very strong performance in using language. This is the AI sin of extrapolating from a narrow performance to believing there must be much more general competence. The problem is that any person who has facile language performance is usually quite competent in being able to reason, to know the truth and falsity of many propositions, etc. But LLMs do not have any of these; rather, they have only the ability to predict likely next words that ought to follow a string of existing words. Academics, VCs, startup founders, and many others, though, have a strong belief that there must be an emergent system within the learned weights that is able to reason, judge, estimate, etc. Many of them are betting, with papers they write, cash they invest, or sweat equity, that this really must be true. Perhaps we have a bit too much of Narcissus in us.
The second sin that leads to overhype is the “indistinguishable from magic” sin. Arthur C. Clarke said that “any sufficiently advanced technology is indistinguishable from magic”. He meant that if a technology is very much advanced beyond what you are used to, you no longer have a mental model of what that technology can and cannot do, and so can’t know its limitations. Again, this is what happens with generative AI, as it can perform amazingly well, and so people do not understand its limitations, partly because they keep forgetting how it works, enthralled instead by the spectacular results in generating great language.
The full post is here.
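
Brooks' characterization of LLMs as next-word predictors is literal. To make it concrete, here is a minimal sketch of a greedy decoding loop, using the Hugging Face transformers library and GPT-2 as a stand-in model (my choices for illustration, not anything from Brooks' post): at each step the model scores every token in its vocabulary, and the single likeliest one is appended to the running text.

```python
# A minimal sketch of "predict the likely next word", via greedy decoding.
# Model and prompt are illustrative choices, not from Brooks' post.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

ids = tok("Elephants don't play", return_tensors="pt").input_ids
with torch.no_grad():
    for _ in range(10):
        logits = model(ids).logits[0, -1]  # scores for every vocabulary token
        next_id = torch.argmax(logits)     # greedy: take the single likeliest token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tok.decode(ids[0]))
```

Generation is just this loop repeated (production systems sample from the distribution rather than always taking the argmax). Whether reasoning emerges from the learned weights that drive it is exactly the bet Brooks describes.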
