The advent of a human-level artificial intelligence--a machine capable of the richness of expression and nuance of thought that we associate with humanity--promises to generate tremendous wealth for the inventors and companies that develop it.
According to the Business Communications Company, the market for AI software and products reached $21 billion in 2007, an impressive figure that doesn't touch on the wealth that a human-level artificial intelligence could generate across industries. At present, the world's programmers have succeeded in automating the delivery of electricity to our homes, the trading of stocks on exchanges, and much of the flow of goods and services to stores and offices across the globe, but, after more than half a century of research, they have yet to reach the holy grail of computer science--an artificial general intelligence (AGI).
Is the tide turning? At the second annual Singularity Summit in San Francisco last September, I discovered that the thinkers and researchers at the forefront of the field are locked in a pitched intellectual battle over how soon AGI might arrive and what it might mean for the rest of us.
The Not-So-Rapid Progress Of AI Research
The scientific study of artificial intelligence has many roots, from IBM's development of the first number-crunching computers in the 1940s to the U.S. military's work in war-game theory in the 1950s. The proud papas of computer science--Marvin Minsky, John McCarthy, Alan Turing, and John von Neumann--were also the founding fathers of the study of artificial intelligence.
During the late 1960s and early 1970s, money for AI work was as abundant as expectations were unrealistic, fueled by Hollywood images of cocktail-serving robots and a HAL 9000 (a non-homicidal one, presumably) for every home. In an ebullient moment in 1967, Marvin Minsky proclaimed, "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved," by which he meant a human-level AI. Public interest dried up when the robot army failed to materialize by the early 1980s, a period that researchers refer to as the "AI winter." But research, though seemingly dormant, continued.
The field has experienced a revival of late. Primitive-level AI is no longer just a Hollywood staple. It's directing traffic in Seattle through a program called SmartPhlow, guiding the actions of hedge-fund managers in New York, executing Internet searches in Stockholm, and routing factory orders in Beijing over integrated networks like Cisco's. More and more, the world's banks, governments, militaries, and businesses rely on a variety of extremely sophisticated computer programs--what are sometimes called "narrow AIs"--to run our ever-mechanized civilization. We also look to AI to perform tasks we could easily do ourselves but no longer have the patience for: some 1.5 million robot vacuum cleaners are already in use across the globe.

Engineers from Stanford University have developed a fully autonomous self-driving car named Stanley, which they first showcased in 2005 at the Defense Advanced Research Projects Agency's (DARPA) Grand Challenge, a desert race for driverless vehicles. Stanley represents an extraordinary improvement over the self-driving machine the Stanford team was showing off in 1979, which needed six hours to travel a single meter. Stanley covered more than 200 kilometers in roughly the same time.
"The next big leap will be an autonomous vehicle that can navigate and operate in traffic, a far more complex challenge for a 'robotic' driver," according to DARPA director Tony Tether.
In other words, robot taxis are coming to a city near you.
The decreasing price and increasing power of computer processing suggest that, in the decades ahead, narrow AIs like these will become more effective, numerous, and cheap. But these trends don't necessarily herald the sort of radical intellectual breakthrough necessary to construct an artificial general intelligence. …