
Is a real ‘Ex Machina’ or ‘Age of Ultron’ in humanity’s future?

Last Updated Jul 07 2015 02:09 pm


From Ultron to Ava to Chappie, anxieties over AI are front and center on the silver screen.

What happens when machines are finally, truly intelligent?

Fiction has long wrestled with the implications, and it started long before the homicidal HAL 9000 of the epic 1968 film "2001: A Space Odyssey." Even as far back as Mary Shelley's "Frankenstein," which in turn drew inspiration from the Greek myth of Prometheus, people have wondered anxiously about the consequences of creation.

In quick succession, two movies have debuted, and a third is coming soon, all hitting on that very theme: "Chappie," about a robot reprogrammed to think for itself; "Ex Machina," an intimate study of a robot's humanity (or lack thereof); and "Avengers: Age of Ultron," about the explosive consequences of an evil AI. The movies form a spectrum: the innocent Chappie debuted in March, followed by the enigmatic Ava of "Ex Machina" earlier this month (released in Utah today), with the evil Ultron arriving May 1. Each robot represents a potential future for AI, and which future comes to pass depends on whom you ask.




"This is quite possibly the most important and most daunting challenge humanity has ever faced," wrote Nick Bostrom, the director of the Future of Humanity Institute, in his recent book "Superintelligence." Anxieties swirl around the idea, from machines replacing humans in the work force, which they're already doing, to taking the wheel in our cars, which they're preparing to; and the more fearsome possibility of sentient machines growing beyond and above us.

"Whether we succeed or fail," Bostrom writes, "it is probably the last challenge we will ever face."

An intelligence explosion

Alan Turing, a father of modern computing, wondered in 1950 whether it might be easier to create intelligent machines by programming them as children who can learn, rather than trying to skip ahead and program fully functioning "adults." The idea became reality in 1959, when computer-science pioneer Arthur Samuel programmed a machine that could teach itself to play checkers; it did, very well, beating the Connecticut state champion.

That savvy computer marked the birth of machines that can learn to do something very specific very well, a process known as "machine learning." Its modern, many-layered offshoot is known as "deep learning."

Since then, AIs have learned a lot of specialized talents, from Google's search engine and self-driving cars to Facebook recognizing your friends' faces in photos; from Amazon and Netflix recommendations to Pandora choosing what song to play next; from Siri telling you it's raining outside to Watson playing a mean game of "Jeopardy!"

But impressive as they are, all those skills don't add up to the sort of encompassing, general intelligence that humans possess.

"[Artificial intelligence] is still waiting for its Isaac Newton," said Thomas Henderson, a computer-science professor at the University of Utah. "Someone who says: 'Here's some principles, and it works like this.' "

Bostrom found that many experts predict full, sentient AI — Chappie's birth, so to speak — will occur in the next 20 to 90 years. Henderson is reluctant to set any date, since "the Isaac Newton" comes along so rarely and unpredictably. But if deep-learning algorithms continue to improve, data scientist Jeremy Howard foresees technology reaching an explosive point sooner than we might think.

"The better computers get at intellectual activities, the more they can build better computers to be better at intellectual capabilities, so this is going to be a kind of change that the world has actually never experienced before," Howard said during a TED talk last December. Bostrom seems to agree and predicts that such an "intelligence explosion," in which AI jumps not only to human-level intelligence but to a level beyond us, would happen rather quickly.

And if that happens, what then?

Our Frankenstein monster

"[AI] is our new Frankenstein myth," Joss Whedon, writer and director of "Age of Ultron," told Entertainment Weekly.

The 19th-century horror story warns against mankind's scientific ambitions coming back to bite us, and that's just what scientists like Stephen Hawking fear of AI. In an open letter published earlier this year, Hawking encouraged the continued development of beneficial AI but warned that the rise of a superintelligent version, without proper safeguards, could threaten the human race. Other scientists and entrepreneurs co-signed the letter, including higher-ups at Google and IBM, MIT professors, Elon Musk and Bostrom.

"We cannot blithely assume that a superintelligence will necessarily share any of the final values stereotypically associated with wisdom and intellectual development in humans," Bostrom writes, including benevolent concern, humility and selflessness. It may not be out to destroy us, but it may not have our interests in mind, either.
