

Recently (2 Nov), The Guardian posted what I thought was an extremely well-made video with Ilya's thoughts. I didn't think to repost it at the time, but given the OpenAI developments over the last couple of days, and the complete Twitter and media meltdown surrounding them, I thought this video gives a strong vibey insight into Ilya's thoughts on AGI and safety. It's also a useful reference point for how the general public may perceive Ilya (it has had 221k views thus far).


* * *


Transcript (bold highlights mine):

Now AI is a great thing, because AI will solve all the problems that we have today.
It will solve employment, it will solve disease, it will solve poverty, but it will also create new problems.

The problem of fake news is going to be a million times worse, cyber attacks will become much more extreme, we will have totally automated AI weapons. I think AI has the potential to create infinitely stable dictatorships. This morning a warning about the power of artificial intelligence, more than 1,300 tech industry leaders, researchers and others are now asking for a pause in the development of artificial intelligence to consider the risks.

Playing God, scientists have been accused of playing God for a while, but there is a real sense in which we are creating something very different from anything we've created so far. Yeah, I mean, we definitely will be able to create completely autonomous beings with their own goals.

And it will be very important, especially as these beings become much smarter than humans, it's going to be important to have these beings, the goals of these beings be aligned with our goals.

What inspires me? I like thinking about the very fundamentals, the basics. What can our systems not do, that humans definitely do? I almost approach it philosophically.
Questions like, what is learning?
What is experience?
What is thinking?
How does the brain work?

I feel that technology is a force of nature. I feel like there is a lot of similarity between technology and biological evolution. It is very easy to understand how biological evolution works: you have mutations, you have natural selection.

You keep the good ones, the ones that survive, and just through this process you are going to have huge complexity in your organisms. It's not that we can understand how the human body works because we understand evolution, but we understand the process more or less. And I think machine learning is in a similar state right now, especially deep learning: we have a very simple rule that takes the information from the data and puts it into the model, and we just keep repeating this process. As a result, the complexity of the data gets transferred into the complexity of the model. So the resulting model is really complex, and we don't really know exactly how it works; you need to investigate. But the algorithm that produced it is very simple.

ChatGPT, maybe you've heard of it, if you haven't then get ready. You describe it as the first spots of rain before a downpour. It's something we just need to be very conscious of, because I agree it is a watershed moment. Well ChatGPT is being heralded as a gamechanger and in many ways it is, its latest triumph outscoring people.

A recent study by Microsoft Research concludes that GPT-4 is an early, yet still incomplete, artificial general intelligence system.

Artificial General Intelligence.
AGI, a computer system that can do any job or any task that a human does, but only better.

There is some probability that AGI is going to happen pretty soon; there's also some probability it's going to take much longer.
But my position is that the probability that AGI could happen soon is high enough that we should take it seriously.

And it's going to be very important to make these very smart, capable systems be aligned and act in our best interests. The very first AGIs will basically be very, very large data centres, packed with specialised neural network processors working in parallel. A compact, hot, power-hungry package, consuming something like 10 million homes' worth of energy.

You're going to see dramatically more intelligent systems, and I think it's highly likely that those systems will have a completely astronomical impact on society.

Will humans actually benefit? And who will benefit and who will not?

The beliefs and desires of the first AGIs will be extremely important, and so it's important to program them correctly.

I think that if this is not done, then the nature of evolution, of natural selection, will favour those systems that prioritise their own survival above all else.

It's not that it's going to actively hate humans and want to harm them, but it is going to be too powerful, and I think a good analogy would be the way humans treat animals. It's not that we hate animals; I think humans love animals and have a lot of affection for them. But when the time comes to build a highway between two cities, we are not asking the animals for permission, we just do it, because it's important for us. And I think by default that's the kind of relationship that's going to be between us and AGIs which are truly autonomous and operating on their own behalf.

Many machine learning experts, people who are very knowledgeable and very experienced, have a lot of scepticism about AGI: about when it could happen and about whether it could happen at all. Right now this is something that just not that many people have realised yet: that the speed of computers for neural networks, for AI, is going to become maybe 100,000 times faster in a small number of years.

If you have arms race dynamics between multiple teams trying to build the AGI first, they will have less time to make sure that the AGI they build will care deeply for humans.

Because the way I imagine it is that there is an avalanche, like there is an avalanche of AGI development.

Imagine it: this huge, unstoppable force. And I think it's pretty likely the entire surface of the earth will be covered with solar panels and data centres.

Given these kinds of concerns, it will be important that AGI is somehow built as a cooperation between multiple countries.

The future is going to be good for the AI regardless. It would be nice if it were good for humans as well.






Executive summary: Ilya Sutskever, a leading AI researcher, believes powerful artificial general intelligence is coming soon and could reshape human society, with potential risks as well as benefits.

Key points:

  1. Sutskever thinks AGI could happen in the near future and have huge impacts on society, solving problems but also creating new ones like fake news, cyberattacks, and automated AI weapons.
  2. He believes it's important to align AGI goals with human values and interests to avoid misalignment, comparing the relationship to how humans treat animals.
  3. Sutskever notes AGI development may accelerate rapidly, making safety and cooperation between countries crucial to steer outcomes beneficially.
  4. Many experts are skeptical about imminent AGI, but Sutskever argues compute advances could enable powerful neural network systems exceeding human abilities.
  5. An "AGI avalanche" driven by competition seems likely to him, with global infrastructure devoted to AGI, so preparedness and cooperation are vital.
  6. Overall his view is AGI's emergence seems highly probable soon, with huge transformative potential, so risks and benefits both deserve urgent attention.


This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
