Is Skynet Inevitable? Artificial Intelligence and the Possibility of Human Extinction
Ronald Bailey, Reason
Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, Thomas Dunne, 352 pages, $26.99
IN THE LATEST Spike Jonze movie, Her, an operating system called Samantha evolves into an enchanting, self-directed intelligence with a will of her own. Samantha makes choices that do not harm humanity, though they do leave viewers feeling a bit sadder.
In his terrific new book, Our Final Invention, documentarian James Barrat argues that visions of an essentially benign artificial general intelligence (AGI) like Samantha amount to silly pipe dreams. Barrat believes artificial intelligence is coming, but he thinks it will be more like Skynet.
In the Terminator movies, Skynet is an automated defense system that becomes self-aware, decides that human beings are a danger to it, and seeks to destroy us with nuclear weapons and terminator robots. Barrat doesn't just think that Skynet is likely. He thinks it's practically inevitable.
Barrat has talked to all the significant American players in the effort to create recursively self-improving artificial general intelligence in machines. He makes a strong case that AGI with human-level intelligence will be developed in the next couple of decades. Once an AGI comes into existence, it will seek to improve itself in order to more effectively pursue its goals. As researcher Steve Omohundro, president of the company Self-Aware Systems, explains, goal-driven systems necessarily develop drives for increased efficiency, creativity, self-preservation, and resource acquisition.
At machine computation speeds, the AGI will soon bootstrap itself into becoming millions of times more intelligent than a human being. It would thus transform itself into an artificial super-intelligence (ASI)--or, as Institute for Ethics and Emerging Technologies chief James Hughes calls it, "a god in a box." And this new god will not want to stay in the box.
The emergence of super-intelligent machines has been dubbed the technological Singularity. Once machines take over, the argument goes, scientific and technological progress will turn exponential, thus making predictions about the shape of the future impossible. Barrat believes the Singularity will spell the end of humanity, since an ASI, like Skynet, is liable to conclude that it is vulnerable to being harmed by people. And even if the ASI feels safe, it might well decide that humans constitute a resource that could be put to better use. "The AI does not hate you, nor does it love you," remarks the AI researcher Eliezer Yudkowsky, "but you are made out of atoms which it can use for something else."
Barrat analyzes various suggestions for how to avoid Skynet. The first is to try to keep the god in its box: The new ASI could be guarded by gatekeepers, who would make sure that it is never attached to any networks out in the real world. But Barrat convincingly argues that an intelligence millions of times smarter than people would be able to persuade its gatekeepers to let it out. …