
Melanie Mitchell: We shouldn’t be scared by ‘Superintelligent AI’

(Sophia Foster-Dimino | The New York Times)

Intelligent machines catastrophically misinterpreting human desires is a frequent trope in science fiction, perhaps used most memorably in Isaac Asimov’s stories of robots that misconstrue the famous “three laws of robotics.” The idea of artificial intelligence going awry resonates with human fears about technology. But current discussions of superhuman A.I. are plagued by flawed intuitions about the nature of intelligence.

We don’t need to go back all the way to Isaac Asimov — there are plenty of recent examples of this kind of fear. Take an Op-Ed in The New York Times and a new book, “Human Compatible,” by the computer scientist Stuart Russell, who believes that if we’re not careful in how we design artificial intelligence, we risk creating “superintelligent” machines whose objectives are not adequately aligned with our own.

As one example of a misaligned objective, Russell asks, “What if a superintelligent climate control system, given the job of restoring carbon dioxide concentrations to preindustrial levels, believes the solution is to reduce the human population to zero?” He claims that “if we insert the wrong objective into the machine and it is more intelligent than us, we lose.”

Russell’s view expands on arguments of the philosopher Nick Bostrom, who defined A.I. superintelligence as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Unlike today’s best machines, which remain far below the level of humans in all but relatively narrow domains (such as playing chess or Go), Bostrom and Russell envision a superintelligence with vast general abilities.

Bostrom, Russell and other writers argue that even if there is just a small probability that such superintelligent machines will emerge in the foreseeable future, it would be an event of such magnitude and potential danger that we should start preparing for it now. In Bostrom’s view, “a plausible default outcome of the creation of machine superintelligence is existential catastrophe.” That is, humans would be toast.

These thinkers — let’s call them the “superintelligentsia” — speculate that if machines were to attain general human intelligence, the machines would quickly become superintelligent. They speculate that a computer with general intelligence would be able to speedily read all existing books and documents, absorbing the totality of human knowledge. Likewise, the machine would be able to use its logical abilities to make discoveries that increase its cognitive power.

Such a machine, the speculation goes, would not be bounded by bothersome human limitations, such as slowness of thought, emotions, irrational biases and need for sleep. Instead, the machine would possess something like a “pure” intelligence without any of the cognitive shortcomings that limit humans.

The assumption seems to be that this A.I. could surpass the generality and flexibility of human intelligence while seamlessly retaining the speed, precision and programmability of a computer. This imagined machine would be far smarter than any human, far better at “general wisdom and social skills,” but at the same time it would preserve unfettered access to all of its mechanical capabilities. And as Russell’s example shows, it would lack humanlike common sense.

The problem with such forecasts is that they underestimate the complexity of general, human-level intelligence. Human intelligence is a strongly integrated system, one whose many attributes — including emotions, desires, and a strong sense of selfhood and autonomy — can’t easily be separated.

Similarly, if generally intelligent A.I. is ever created (something that will take many decades, if not centuries), its objectives, like ours, will not be easily “inserted” or “aligned.” They will rather develop along with the other qualities that form its intelligence, as a result of being embedded in human society and culture. The machine’s push to achieve its objectives will be tempered by the common sense, values and social judgment without which general intelligence cannot exist.

What’s more, the notion of superintelligence without humanlike limitations may be a myth. It seems likely to me that many of the supposed deficiencies of human cognition are inseparable aspects of our general intelligence, which evolved in large part to allow us to function as a social group. It’s possible that the emotions, “irrational” biases and other qualities sometimes considered cognitive shortcomings are what enable us to be generally intelligent social beings rather than narrow savants. I can’t prove it, but I believe that general intelligence can’t be isolated from all these apparent shortcomings, either in humans or in machines that operate in our human world.

In his 1979 Pulitzer Prize-winning book, “Gödel, Escher, Bach: An Eternal Golden Braid,” the cognitive scientist Douglas Hofstadter beautifully captured the counterintuitive complexity of intelligence by posing a deceptively simple question: “Will a thinking computer be able to add fast?” Hofstadter’s surprising but insightful answer was, “perhaps not.”

As Hofstadter explained: “We ourselves are composed of hardware which does fancy calculations but that doesn’t mean that our symbol level, where ‘we’ are, knows how to carry out the same fancy calculations. Let me put it this way: there’s no way that you can load numbers into your own neurons to add up your grocery bill. Luckily for you, your symbol level (i.e., you) can’t gain access to the neurons which are doing your thinking — otherwise you’d get addle-brained … Why should it not be the same for an intelligent program?”

In other words, the intelligent part of your mind can’t harness the fast-adding skills of your own neurons, and for good reason. This barrier — between the “self” that you are aware of and the detailed activity of your brain — permits the kind of thinking that matters for survival without getting overwhelmed (“addle-brained”) by your own thought processes. Similarly, a thinking computer’s hardware, like ours, would presumably include circuits for fast arithmetic, but at the level of its cognitive awareness, the machine wouldn’t be able to access these circuits any more than we humans can.

It’s fine to speculate about aligning an imagined superintelligent — yet strangely mechanical — A.I. with human objectives. But without more insight into the complex nature of intelligence, such speculations will remain in the realm of science fiction and cannot serve as a basis for A.I. policy in the real world.

Understanding our own thinking is a hard problem for our plain old intelligent minds. But I’m hopeful that we, and our future thinking computers, will eventually achieve such understanding in spite of — or perhaps thanks to — our shared lack of superintelligence.

Melanie Mitchell is a professor of computer science at Portland State University, external professor at the Santa Fe Institute and the author of “Artificial Intelligence: A Guide for Thinking Humans,” from which this essay is partly adapted.