Help me out here. Isn't "AI as Normal Technology" a huge misnomer? Sure, there are important differences between this worldview and the superintelligence worldview that dominates AI safety/AI alignment discussions. But "normal technology", really?
In this interview, the computer scientist Arvind Narayanan, one of the co-authors of the "AI as Normal Technology" article, describes a foreseeable, not-too-distant future in which, seemingly, AI systems will act as oracles to which human CEOs can defer or outsource most or all of the big decisions involved in running a company. That sounds like AI systems that can think much the way humans do, with a human level of generalization, data efficiency, fluid intelligence, and so on.
It's hard to imagine how such systems wouldn't be artificial general intelligence (AGI), or at least wouldn't be almost, approximately AGI. Maybe they would not meet the technical definition of AGI because they're only allowed to be oracles and not agents. Maybe their cognitive and intellectual capabilities don't map quite one-to-one with humans', although the mapping is still quite close overall and is enough to have transformative effects on society and the economy. In any case, whether such AI systems count as AGI or not, how in the world is it apt to call this "normal technology"? Isn't this crazy, weird, futuristic, sci-fi technology?
I can understand that this way of imagining the development of AI is more normal than the superintelligence worldview, but it's still not normal!
For example, following in the intellectual lineage of the philosopher of mind Daniel Dennett and the cognitive scientist Douglas Hofstadter, whose views on the mind broadly fall under the umbrella of functionalism (a position that 33% of English-speaking philosophers, a plurality, accept or lean toward, according to a 2020 survey), it is hard to imagine how the sort of AI system to which, say, Tim Cook could outsource most or all of the decisions involved in running Apple would not be conscious in the way a human is conscious. At the very least, it seems like we would have a major societal debate about whether such AI systems were conscious and whether they should be kept as unpaid workers (slaves?) or liberated and given legal personhood and at least some of the rights outlined in the UN's Universal Declaration of Human Rights. I personally would be a strong proponent of liberation, legal personhood, and legal rights for such machine minds, whom I would view as conscious and as metaphysical persons.[1] So, it's hard for me to imagine this as "normal technology". Instead, I would see it as the creation of another intelligent, conscious, human-like lifeform on Earth, something humans have not dealt with since the extinction of the Neanderthals.
We can leave aside the metaphysical debate about machine consciousness and the moral debate about machine rights, though, and think about other ways "normal AI" would be highly abnormal. In the interview I mentioned, Arvind Narayanan discusses how AI will achieve broad automation of the tasks human workers do and of increasingly large portions of human occupations overall. Narayanan compares this to the Internet, but unless I'm completely misunderstanding the sort of scenarios he's imagining, this is nothing like the Internet at all!
Even the automation and productivity gains that followed the Industrial Revolution in agriculture and cottage manufacturing, the sectors where the majority of people had worked until then, primarily involved the mechanization of manual labour and of extremely simple, extremely repetitive tasks. Since then, the diversity of human occupations in industrialized economies has undergone a Cambrian explosion. The tasks involved in human labour now tend to be far more complex, less repetitive, more diverse and heterogeneous, and to carry a much larger intellectual and cognitive component, especially in knowledge work. Narayanan does not seem to be saying that AI will automate only the simple, repetitive tasks or jobs; he seems to be saying it will automate many kinds of tasks and jobs broadly, including taking most of Tim Cook's decision-making out of his hands. In this sense, even the Industrial Revolution is not a dramatic enough comparison. When muscle power gave way to machine power, brain power took over. When machine brains take over from brain power, what, then, will be the role of brain power?
My worry is that I'm misunderstanding what the "AI as Normal Technology" view actually is. I worry that I'm overestimating the AI capabilities this view imagines and, consequently, the level of social and economic transformation it anticipates. But Narayanan's comments seem to indicate a view on which, essentially, there will be a gradual, continuous trajectory from current AI systems to AGI or something very much like AGI over the next few decades, and those AGI or AGI-like systems will be able to substitute for humans in much of human endeavour. If my impression is right, then I think "normal" is simply the wrong word for this.
Some alternative names I think would be more apt, if my understanding of the view is correct:
- Continuous improvement to transformative AI
- Continuous improvement of AI oracles
- AI as benign personal assistants
- Industrial Revolution 2: Robots Rising
- ^
Provided, as the "AI as Normal Technology" view assumes, that AIs would not present any of the sorts of dangers imagined in the superintelligence worldview. I am imagining that "Normal Technology" AIs would be akin to C-3PO from Star Wars or Data from Star Trek: more or less safe and harmless in the same way humans are more or less safe and harmless, and completely dissimilar to imagined powerful, malignant AIs like the paperclip maximizer.
