In the context of climate change, are predictions about climate change decades in the future similarly presupposing "things that are fictional"?
So no, climate change is something that seems similar but is only superficially so. As I understand it, we now have the historic data that temperatures are rising and the historic data that this could mean many bad things. No computers are currently running around killing people of their own free will.
I think everyone in this debate would agree that it is harder to predict what AGIs (artificial general intelligences) and ASIs might do and how they might think and behave, than it is to make scientifically-justified climate models.
I would very much disagree with this. All the historic data shows that computers can be easily controlled, that the risk of death is very low (self-driving cars are safer than human-driven cars, for example) and that they make our lives easier. The effects of climate change range from the very bad to the good.
I suppose that may be true, but if your view is that we definitely won't lose control of computers at all, ever, that is quite a hard claim to defend.
Historically there is not one example of a computer doing anything other than what it was programmed to do. This is like arguing that aliens will turn up tomorrow. There is no evidence.
password to control the robot is lost forever
The robot is still simply doing what it was programmed to do. I agree that terrorists getting their hands on super weapons, including AI-powered ones (for example using AI to create new viruses), is extremely dangerous. But that is not a sci-fi scenario; our enemies getting hold of weapons we've created is common in history.
An AI that is trying to fulfil a simply-stated goal like "maximise iPhone production" would want to keep itself in existence and running, because if it no longer exists, its goal is perhaps less likely to be fulfilled.
So this is a common argument that doesn't make sense economically or from a safety viewpoint. In order for an iPhone factory to be able to prevent itself from being turned off, what capabilities would it require? Well, it would presumably need some way to stop us humans from cutting its cables. I'd presume therefore that it would need autonomous armed guards. To prevent airstrikes on the factory, maybe it would need an anti-aircraft (AA) battery. But neither of those things is required for an iPhone factory. If you've programmed an iPhone factory with the capability to refuse to be turned off, and given it armed robot drones and AA guns, then you're an idiot. We already have iPhone factories that work just fine without any of those things. It doesn't make sense from an economic resource-utilisation point of view to upgrade them with dangerous stuff they don't need.
I've heard similar arguments about "What if the AI fires off all the nukes?" Don't give a complex algorithm control of the nukes in the first place!
A simpler scenario that might help understanding is the election system. Tom Scott had a great video on this. Why is election security so much more contentious in America than in Britain? Because Americans are too lazy to do hand counting and use all sorts of computer systems instead. These systems are more hackable than the pen-and-paper hand counting we use in the UK. But the important thing to understand here is that none of these scenarios is the fault of any "super-intelligence" but rather of typical human super-stupidity.
I submit that it also doesn't make sense when people don't know how to build the thing and probably aren't immediately about to build the thing, but might actually build the thing in 2-5 years' time!
I disagree, and it's something I find rather cringe about the whole "AI alignment" field. For one thing, something isn't useful or profitable until it's safe. For instance, we talk often about having "self-driving" cars in the future. But we've had self-driving cars from the very beginning! I can go out to my ole gas guzzler right now, put a brick on the accelerator, and it will drive itself into a wall. What we actually mean by "self-driving cars" is "cars that can drive themselves safely." THIS is what Tesla, Apple and Google are all working on. If you set up an outside organisation to "make sure AI self-driving cars are safe", people would think you were crackers, because who would ride in an unsafe self-driving car? Unsafe AI in 90%+ of cases will simply not be economically viable, because why would you use something that's unsafe when you already have the existing whatever-it-is that does the same thing safely (just slower, or whatever)?
No, it isn't - because banking, scintillating as it may be, is not a general task; it's a narrow domain - like chess, but not quite as narrow.
Everything is a narrow domain. No, I will not explain further lol.
why are we measuring computers by human standards? Because we want to know when we should be really worried - both from a "who is going to lose their job?" point of view, and for us doomers, an existential risk point of view as well.
Anthropomorphising
We already have uncensorable, untrackable computer networks like Tor. We already have uncensorable, stochastically untrackable cryptocurrency networks like Monero.
Why does the existence of these secure networks make you more worried about AI and not less?
We have already seen computer viruses (worms) that spread in an uncontrolled manner around the internet given widespread security vulnerabilities that they can be programmed to take advantage of - and there are still plenty of those.
I haven't had a computer virus in years. I'm sure AIs will create viruses, and businesses will use AI to create ways to stop them. My money is on the side with more money, which is the commercial and government side, not the leet hackers.
A super AI virus released by China or terrorists is a realistic concern. It's not a realistic concern that one creates itself of its own free will.
You're effectively asking why the AIs would not choose to entertain themselves instead of fighting with us.
No, I'm actually asking why we humans would allow all our resources to go into computers instead of into things we want?
We're not going to allow AIs to mine the moon to make themselves more powerful, for instance; if we have that capability, we'll have them mine it to make space habitats instead.
Oops, you forgot to mention not to kill your child!
Again, this is human stupidity, NOT AI super-intelligence. And this is the real risk of AI!
We can go back to the man who killed himself because the chatbot told him to. There were two humans being stupid there. First, the designers of the app, who made a chatbot that was designed to be an agreeable friend. But they were so stupid they forgot to ask themselves, "What if it agrees with someone suicidal?" For all we know, they've also forgotten to ask themselves, "What if it agrees with someone who wants to commit an act of terrorism?" They should have foreseen this, but they didn't, because we're stupid monkeys.
Then there is the man himself, who, instead of going to a human with his issues, went to a frigging chatbot that gave him advice no human would ever give him. He also seems to have on some level believed the chatbot was real or sentient, and that's influenced his behaviour. He's also given waaay too much credence to an algorithm designed simply to agree with him.
Now ask yourself: who would have foreseen this situation? Eliezer Yudkowsky, who believes he is super-intelligent, believes AIs will be even more super-intelligent, and anthropomorphises them constantly? I could absolutely see Eliezer killing himself because a chatbot told him to.
Or me, who believes AIs are stupid, humans are stupid, and thinking AIs are alive is really stupid?
Let's go back to Wuhan... Was the real problem that humans were behaving as gods and we were eaten by our own superior creations? No! It's that we're stupid monkeys who were too lazy to close the laboratory door!
One of the main stupid things we are doing is anthropomorphising these things. This leads humans to think the computers are capable of things that they aren't.
The fear this provokes is probably not that dangerous, but the trust it engenders is very dangerous.
That trust will lead to people putting them in charge of the nukes, or people following the advice of a chatbot created for ISIS or astrologers.
The idea of 'alignment' presupposes that you cannot control the computer and that it has its own will, so you need to 'align' it, i.e. incentivise it. But this isn't the case; we can control them.
It's true that machine-learning AIs can create their own instructions and perform tasks; however, we still maintain overall control. We can constrain both inputs and outputs. We can nest the 'intelligent' machine-learning part of the system within constraints that prevent unwanted outcomes. For instance, ask an AI a question about feeling suicidal now and you'll probably get an answer that's been written by a human. That's what I got last time I checked, and the conversation was abruptly ended.
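To make the nesting idea concrete, here is a minimal sketch of that kind of wrapper. Everything in it is hypothetical: model_generate stands in for whatever machine-learning component you like, and the keyword list is a toy version of the more sophisticated classifiers real services presumably use. The point is just that the checks around the model are ordinary, auditable code that the model cannot override.

```python
# Minimal sketch of nesting an ML model inside hard-coded constraints.
# Names here (model_generate, SELF_HARM_KEYWORDS) are illustrative, not
# any real product's API.

SELF_HARM_KEYWORDS = {"suicide", "suicidal", "kill myself"}

CANNED_RESPONSE = (
    "It sounds like you are going through a difficult time. "
    "Please reach out to a crisis helpline or someone you trust."
)

def model_generate(prompt: str) -> str:
    """Hypothetical stand-in for the 'intelligent' machine-learning part."""
    return "model output for: " + prompt

def guarded_chat(user_input: str) -> str:
    # Input constraint: intercept risky prompts before the model ever runs,
    # and return a human-written response instead.
    lowered = user_input.lower()
    if any(keyword in lowered for keyword in SELF_HARM_KEYWORDS):
        return CANNED_RESPONSE

    reply = model_generate(user_input)

    # Output constraint: screen the model's reply too, since the model can
    # raise a topic even when the user did not.
    if any(keyword in reply.lower() for keyword in SELF_HARM_KEYWORDS):
        return CANNED_RESPONSE
    return reply

if __name__ == "__main__":
    print(guarded_chat("I am feeling suicidal"))         # human-written answer
    print(guarded_chat("What's the capital of France?"))  # model answer
```

Whatever the model "decides", the wrapper gets the last word on what reaches the user, which is exactly the kind of overall control described above.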