This is a linkpost for https://youtu.be/NqmUBZQhOYw

This video is based on this article. @jai wrote both the original article and the script for the video.

Script:

The ACM Turing Award is the highest distinction in computer science, comparable to the Nobel Prize. In 2018 it was awarded to three pioneers of the deep learning revolution: Geoffrey Hinton, Yoshua Bengio, and Yann LeCun.

In May 2023, Geoffrey Hinton left Google so that he could speak openly about the dangers of advanced AI, agreeing that “it could figure out how to kill humans” and saying “it's not clear to me that we can solve this problem.”

Later that month, Yoshua Bengio wrote a blog post titled "How Rogue AIs may Arise", in which he defined a "rogue AI" as "an autonomous AI system that could behave in ways that would be catastrophically harmful to a large fraction of humans, potentially endangering our societies and even our species or the biosphere."

Yann LeCun continues to refer to anyone suggesting that we're facing severe and imminent risk as “professional scaremongers” and says it's a “simple fact” that “the people who are terrified of AGI are rarely the people who actually build AI models.”

LeCun is a highly accomplished researcher, but in light of Bengio and Hinton's recent comments it's clear that he is misrepresenting the field, whether he realizes it or not. There is no consensus among professional researchers that AI research is safe. Rather, there is considerable and growing concern that advanced AI could pose extreme risks, and this concern is shared not only by both of LeCun's award co-recipients but also by the leaders of all three leading AI labs (OpenAI, Anthropic, and Google DeepMind):

Demis Hassabis, CEO of DeepMind, said in an interview with Time Magazine: "When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful. Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material."

Anthropic, in their public statement "Core Views on AI Safety", says: “One particularly important dimension of uncertainty is how difficult it will be to develop advanced AI systems that are broadly safe and pose little risk to humans. Developing such systems could lie anywhere on the spectrum from very easy to impossible.”

And OpenAI, in their blog post "Planning for AGI and Beyond", says "Some people in the AI field think the risks of AGI (and successor systems) are fictitious; we would be delighted if they turn out to be right, but we are going to operate as if these risks are existential." Sam Altman, the current CEO of OpenAI, once said "Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity."

There are objections one could raise to the idea that advanced AI poses significant risk to humanity, but "it's a fringe idea that actual AI experts do not take seriously" is no longer among them. Instead, a growing share of experts are echoing the conclusion reached by Alan Turing, considered by many to be the father of computer science and artificial intelligence, back in 1951: "[I]t seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. [...] At some stage therefore we should have to expect the machines to take control." 


 
