Elon Musk says AI could doom human civilization. Zuckerberg disagrees. Who's right?
- by USA Today
- Jan 02, 2018
Apple, Facebook and Amazon declined to provide an executive to speak on the record on AI’s pros and cons. Each company employs staff responsible for AI oversight.
Eric Horvitz, who heads Microsoft Research, says the company last summer created an internal review board called Aether — AI and Ethics in Engineering and Research — that is tasked with closely monitoring progress not just in machine learning but also fields such as object recognition and emotion detection.
“There are certainly high stakes in terms of how AI impacts transportation, health care and other significant sectors, and there need to be channels to check for failures,” says Horvitz.
One area of concern is image capture, particularly when it comes to facial recognition, he says. “Biases can live in this data collection that represent the worst in society,” Horvitz says.
Another organization vowing to tackle AI’s dark side is the recently formed DeepMind Ethics & Society research group, which aims to publish papers focused on some of the most vexing issues posed by AI. London-based DeepMind was bought by Google in 2014 to expand its own AI work.
One of the group’s key members is Nick Bostrom, the Swedish-born Oxford University professor whose 2014 book, Superintelligence: Paths, Dangers, Strategies, first caused Musk to caution against AI’s dangers.
“My view of AI developments is, if it’s useful, use it, but maybe also be sure to participate in conversations about where this is all going,” says Bostrom, who adds that the world “doesn’t need more alarm sounding” but more dialog.
“We’re all full of hopes and fears when it comes to long term potential of AI,” he says. “We need to channel that in a constructive way.”
Woz: From AI skeptic to fan
Apple cofounder Steve Wozniak initially found himself in the AI-wary camp. Like Musk and Hawking, he was concerned that machines with human-like consciousness could eventually pose a risk to Homo sapiens.
But he later changed his thinking, largely on the grounds that humans still don't understand how the brain works its magic, which means scientists would be hard-pressed to create machines that can think like us.
“We may have machines now that simulate intelligence, but that’s different from truly replicating how the brain works,” says Wozniak. “If we don’t understand things like where memories are stored, what’s the point of worrying about when the Singularity is going to take over and run everything?”
Ah, yes, the Singularity. Glad he brought that up. That very sci-fi-sounding term refers to the moment when machines become so intelligent that they can run and upgrade themselves, leading to a runaway technological horse that humans will never be able to catch.
Some techies are eager for that machine-led moment. Last May, former Google self-driving car engineer Anthony Levandowski filed papers with the Internal Revenue Service to start a new religion called Way of the Future. Its mission is to promote the “realization, acceptance and worship of a Godhead based on Artificial Intelligence developed through computer hardware and software.”
Far from joking, Levandowski, who is at the center of a contentious lawsuit between Google and Uber (he sold his self-driving truck company Otto to Uber before Google accused him of stealing proprietary tech), told Wired magazine last fall that his new church was merely a logical response to an inevitability.
“It’s not a god in the sense that it makes lightning or causes hurricanes,” he said. “But if there is something a billion times smarter than the smartest human, what else are you going to call it?”
How about, maybe, unnerving?