"This is quite possibly the most important and most daunting challenge humanity has ever faced," wrote Nick Bostrom, the director of the Future of Humanity Institute, in his recent book "Superintelligence." Anxieties swirl around the idea, from machines replacing humans in the work force, which they're already doing, to taking the wheel in our cars, which they're preparing to; and the more fearsome possibility of sentient machines growing beyond and above us.
"Whether we succeed or fail," Bostrom writes, "it is probably the last challenge we will ever face."
An intelligence explosion
Alan Turing, a founding figure of modern computing, wondered in 1950 if it might be easier to create intelligent machines by programming them as children who can learn, rather than trying to skip ahead and program fully functioning "adults." The idea became reality in 1959, when computer-science pioneer Arthur Samuel programmed a machine that could teach itself to play checkers; it did, very well, beating the Connecticut state champion.
That savvy computer marked the birth of machines that can learn to do something very specific very well, a process known as "machine learning," or, in its neural-network-based form, "deep learning."
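The core idea behind Samuel's program can be sketched in a few lines of code. This is not his checkers evaluator; it is a toy perceptron (a much simpler learning rule than anything Samuel used) that starts knowing nothing and learns the logical OR function purely from labeled examples, adjusting its weights each time it guesses wrong:

```python
# Toy sketch of "machine learning": a program that improves from
# experience instead of following hand-written rules. Assumed example,
# not Samuel's actual method -- a minimal perceptron learning OR.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a linear threshold unit from (inputs, label) pairs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when right; +/-1 when wrong
            w[0] += lr * err * x1       # nudge weights toward the answer
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# The OR truth table serves as the "experience" to learn from.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

Nothing in the code states what OR means; the behavior emerges from repeated correction, which is the same principle, scaled up enormously, behind the specialized AIs described below.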
Since then, AIs have learned a lot of specialized talents, from Google's search engine and self-driving cars to Facebook recognizing your friends' faces in photos; from Amazon and Netflix recommendations to Pandora choosing what song to play next; from Siri telling you it's raining outside to Watson playing a mean game of "Jeopardy!"
But while they're impressive, all those skills don't add up to the sort of encompassing, higher level of intelligence that humans possess.
"[Artificial intelligence] is still waiting for its Isaac Newton," said Thomas Henderson, a computer-science professor at the University of Utah. "Someone who says: 'Here's some principles, and it works like this.' "
Bostrom found that a lot of experts predict full, sentient AI — Chappie's birth, so to speak — will occur in the next 20 to 90 years. Henderson is reluctant to set any date, since "the Isaac Newton" comes along so rarely and unpredictably. But if deep learning algorithms continue to improve, data scientist Jeremy Howard foresees technology reaching an explosive point sooner than we might think.
"The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually never experienced before," Howard said during a TED talk last December. Bostrom seems to agree and predicts that such an "intelligence explosion," in which AI jumps not only to human-level intelligence but to a level beyond us, would happen rather quickly.
And if that happens, what then?
Our Frankenstein monster
"[AI] is our new Frankenstein myth," Joss Whedon, writer and director of "Age of Ultron," told Entertainment Weekly.
The 19th-century horror story warns against mankind's scientific ambitions coming back to bite us, and that's just what scientists like Stephen Hawking fear of AI. In an open letter published earlier this year, Hawking encouraged the continuing development of useful AI, but cited a warning that the rise of a superintelligent version — without proper safeguards — could threaten the human race. Other scientists and entrepreneurs co-signed Hawking's letter, including higher-ups within Google and IBM, MIT professors, Elon Musk and Bostrom.
"We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans," Bostrom writes, including benevolent concern, humility and selflessness. It may not be out to destroy us, but it may not have our interests in mind, either.