Artificial intelligence: Should we be as terrified as Elon Musk and Bill Gates?
- by ZDNet
- Oct 19, 2015
In a September 2015 CNN interview, Musk went even further. He said, "AI is much more advanced than people realize. It would be fairly obvious if you saw a robot walking around talking and behaving like a person... What's not obvious is a huge server bank in a vault somewhere with an intelligence that's potentially vastly greater than what a human mind can do. And its eyes and ears will be everywhere, every camera, every device that's network accessible... Humanity's position on this planet depends on its intelligence, so if our intelligence is exceeded, it's unlikely that we will remain in charge of the planet."
Gates and Musk are two of the world's most credible thinkers. They have not only put forward powerful new ideas about how technology can benefit humanity, but have also put those ideas into practice with products that make things better.
And still, their comments about AI tend to sound a bit fanciful and paranoid.
Are they ahead of the curve, able to understand things the rest of us haven't caught up with yet? Or are they simply getting older and unable to fit new innovations into the old tech paradigms they grew up with?
To be fair, others such as Stephen Hawking and Steve Wozniak have expressed similar fears, which lends credibility to the position that Gates and Musk have staked out.
What this really boils down to is that it's time for the tech industry to put guidelines in place to govern the development of AI. The reason it's needed is that the technology could be developed with altruistic intentions, but could eventually be co-opted for destructive purposes--in the same way that nuclear technology became weaponized and spread rapidly before it could be properly checked.
In fact, Musk has drawn that comparison directly. In 2014, he tweeted, "We need to be super careful with AI. [It's] potentially more dangerous than nukes."
AI is already creeping into military use with the rise of armed drone aircraft. With no pilot on board, these machines carry out attacks against enemy targets; for now, they are remotely controlled by soldiers. But the question has been raised: how long until the machines are assigned specific humans or groups of humans--enemies in uniform--as targets and given the autonomy to shoot to kill once they acquire them? Should it ever be ethical for a machine to make the judgment call to take a human life?
These are the kinds of conversations that need to happen more broadly before AI technology continues its rapid development. Governments will certainly want to get involved with laws and regulations, but the tech industry itself can pre-empt and shape that process by putting together its own standards of conduct and ethical guidelines before nations and regulatory bodies harden the lines.