Elon Musk is wrong. The AI singularity won't kill us all - WIRED
- by Wired
- Sep 20, 2017
It seems you can’t open a newspaper without Elon Musk predicting that artificial intelligence (AI) needs regulating – before it starts World War III. And if it’s not Elon, it’s Vladimir telling us AI will rule the world.
I’m starting to feel like I’m a very dangerous guy. That’s because I’m a professor of artificial intelligence.
There was a time, 20 years back, when people just smiled at me when I told them I was working on building intelligent machines. And I knew that smile was one of sympathy. Back then, AI was simply so hopeless.
But now, as AI begins to make some progress, people seem to live in fear of the next thing that will emerge from AI labs across the world.
"AI needs regulating because the big tech companies have got too big for their own good"
Toby Walsh, University of New South Wales
Elon is, in fact, right. AI does need regulating. But he’s also almost surely wrong – AI isn’t going to start World War III anytime soon. Or rule the world. Or end humanity.
AI needs regulating because the big tech companies have got too big for their own good. And like every other industry sector that has got too big before it – the banks, the oil companies, the telecom firms – regulation is needed to ensure the public good. To ensure that we all benefit, and not just the tech elite.
Most people working in AI like myself have a healthy skepticism for the idea of the singularity. We know how hard it is to get even a little intelligence into a machine, let alone enough to achieve recursive self-improvement.
There are many technical reasons why the singularity might never happen. We might simply run into some fundamental limits. Every other field of science has fundamental limits. You can’t, for example, accelerate past the speed of light. Perhaps there are some fundamental limits to how smart you can be?
Or perhaps we run into some engineering limits. Did you know that Moore’s Law is officially dead? Intel is no longer looking to double transistor count every 18 months.
But even if we do get to the singularity, machines don’t have any consciousness, any sentience. They have no desires or goals other than the ones that we give them.
AlphaGo isn’t going to wake up tomorrow and decide humans are useless at Go, and instead opt to win some money at online poker. And it is certainly not going to wake up and decide to take over the planet. It’s not in its code.
All AlphaGo will ever do is maximise one number: its estimate for the probability it will win the current game of Go. Indeed, it doesn’t even know that it is playing Go.
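The point can be made concrete with a toy sketch. This is illustrative code, not AlphaGo's actual architecture (the names `estimate_win_probability` and `choose_move` are invented for the illustration): an agent whose entire "motivation" is one number has no goals beyond that number.

```python
# Toy illustration of a single-objective agent: its whole "motivation"
# is one number -- an estimated probability of winning.

def estimate_win_probability(state, move):
    """Stand-in value function scoring a move from the current state.
    Here it is a trivial deterministic heuristic; in a real system this
    would be a trained model."""
    return (hash((state, move)) % 100) / 100.0

def choose_move(state, legal_moves):
    """The agent's entire decision procedure: pick the move with the
    highest estimated win probability. There is no other goal, and no
    representation of 'Go', 'poker', or the wider world in here."""
    return max(legal_moves, key=lambda m: estimate_win_probability(state, m))

moves = ["a1", "b2", "c3"]
best = choose_move("opening", moves)
print(best in moves)  # always one of its legal moves -- nothing else is possible
```

Whatever the value function returns, the agent can only ever emit one of the moves it was given. "Deciding to take over the planet" is simply not an output the code can produce.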
So, we don’t have to fear that the machines are going to take over anytime soon. But we do have to worry about the impact even stupid AI is starting to have on our lives. It will widen inequality. It will put some people out of work. It will corrode political debate. Even stupid AI can be used by the military to transform warfare for the worse.
So, Elon, stop worrying about World War III and start worrying about what Tesla’s autonomous cars will do to the livelihood of taxi drivers.
And don’t just take my word for it. A recent survey of 50 Nobel Laureates ranked climate change, population rise, nuclear war, disease, selfishness, ignorance, terrorism, fundamentalism, and Trump as bigger threats to humanity than AI.
Toby Walsh is professor of artificial intelligence at the University of New South Wales and the author of "Android Dreams: The Past, Present and Future of AI" (Hurst, £16.99)
This article was originally published by WIRED UK