The Search for Artificial Intelligence
By Tom Bethell, The American Spectator
For 50 years the finest minds have been telling computers what to do. What they haven't been able to instill in them is common sense.
IN A SEMI-OFFICIAL WAY, the search for artificial intelligence began 50 years ago. In the summer of 1956, a two-month conference at Dartmouth College set out to explore "the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it."
Computers could do what the mind does, in other words.
An attempt would be made "to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves."
The four authors of the grant proposal added (optimistically, as it turned out): "We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer." The Rockefeller Foundation put up the money.
The conjecture that machines could be built with the ability to think had been made by the British mathematician Alan Turing in the 1930s. By the end of the 20th century, he believed, "one will be able to speak of machines thinking without expecting to be contradicted." In 1950 he devised what became known as the Turing test. If a human behind a screen cannot distinguish human from machine responses, then the machine must be considered intelligent.
Fifty years after the Dartmouth conference, the computer science people are still working on these problems. Computers have not yet passed the Turing test. A "significant advance" has been made in solving some problems "now reserved for humans." But the advance belongs in the realm of what is called "applied" artificial intelligence. Computers can do useful things like multiplication and division, and they are also very good at chess. An IBM program beat the world chess champion. As to machines forming abstractions on their own, there has been no progress.
The principal organizer of the 1956 conference was an assistant professor of mathematics at Dartmouth, John McCarthy, who was still in his twenties. It is generally accepted that he was also the first to use the term artificial intelligence, which today goes by the acronym AI. Another leading participant and coauthor of the grant proposal was Marvin Minsky, who was a junior fellow in mathematics at Harvard. He is the same age as McCarthy-both are now 78.
The field of artificial intelligence has been largely created and colonized by mathematicians, and it's worth noting in passing that the world of mathematics is itself an ideal world. It corresponds to the real world most of the time, but not all of the time.
Within a few years McCarthy had moved on to Stanford University and Minsky to the Massachusetts Institute of Technology. Both institutions have remained dominant in the AI field, both men have remained actively involved, and both will speak at "AI@50," a golden jubilee conference to be held at Dartmouth in mid-July. The conference director, Dartmouth philosophy professor James Moor, sounds more cautious than his predecessors 50 years ago, modestly saying that this summer's event will "undertake a full exploration into the many emerging directions for future AI research, just as the College took the first steps to establish AI as a research discipline 50 years ago."
Rodney Brooks, the Panasonic Professor of Robotics and Director of the MIT Computer Science and Artificial Intelligence Lab, added a more effusive comment. (Marvin Minsky is still on the MIT faculty-he is the Toshiba Professor of Media Arts and Sciences and a professor of electrical engineering and computer science there-but Brooks now runs the AI Lab at MIT.) Brooks praised the 1956 conference, at which "an audacious, outrageous even, intellectual Zeitgeist emerged: that the core of humanity, our ability to think and reason, was subject to our own technological understanding."
And, he added: "The participants were right."
But were they?
THERE'S ANOTHER 50TH ANNIVERSARY event this summer, this one in Switzerland, organized by an otherwise unidentified group called ASAI50. Here, the younger generation is in charge, and Rodney Brooks, 51, is on the program committee. The unsigned prospectus for the conference tells a story that seems to cast doubt on the claim that the founders really were "right" in 1956:
Despite its advances in the last 50 years, it is clear that the original goals set by the first generation of AI visionaries have not been reached. Not only is natural intelligence far from being understood and artificial forms of intelligence still so much more primitive than natural ones, but seemingly simple tasks like object manipulation and recognition-which a 3-year-old can do-have not yet been realized artificially.
A look at the current landscape of research reveals how little we know about how biological brains achieve their remarkable functionalities, how these functionalities develop in the child, or how they have arisen in the course of evolution.
Also, we do not understand the cultural and social processes that have helped to shape human intelligence. Because basic theories of natural intelligence are lacking and-despite impressive advances-the required technologies for building sophisticated artificial systems are still not available, the capabilities of current robots fall far short of the intelligence of even very simple animals.
A rapid survey of the field over the last 50 years shows seemingly contradictory results. When IBM's Deep Blue beat Garry Kasparov, the world chess champion, he said at the time that he could feel "a new kind of intelligence across the table." Then again, programmers had great difficulty in getting robots to do seemingly simple things such as pile up children's bricks. Researchers would watch children playing with bricks, and it looked so easy. But robots would try to put the top brick in place first, not "understanding" that other bricks must "stand under" it.
Maybe that's because computers really don't understand anything at all but just do what they're told?
That was the opinion of Ada, the Countess of Lovelace, the only legitimate child of Lord Byron. In recent decades she has become the focus of both admiration and frustration among AI researchers. As a debutante in London in the 1830s, Ada was introduced to Charles Babbage, the Lucasian professor of mathematics at Cambridge, who was absorbed by the idea of building a computer, or analytical engine, as he called it. He is often referred to as the father of the computer.
Ada Lovelace, a precocious student of mathematics herself, understood his machine, even though she saw only a partial model of it. (It was never completed in Babbage's lifetime.) She wrote up detailed notes describing his thoughts and ideas, and it is through her efforts that we know most of what we do know about Babbage's intentions. She is sometimes called the "first computer programmer." Understandably, she has been placed on a pedestal by feminists. But the seekers after artificial intelligence have been more ambivalent, because she wrote the following:
The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.
Ada was probably in her twenties when she wrote that plain statement, relegating computers to the status of slaves. Over 160 years later, despite much effort by Ph.D.s from Stanford, MIT, Carnegie-Mellon, and an ever increasing number of universities, no one has ever shown her comment to be out of date-inapplicable to today's computers. (Ada died at the age of 36, the same age that her father died. Byron had left England right after her birth and they never met again.)
Over the years, artificial intelligence has had more cheerleaders than critics, but one prominent naysayer has been Hubert Dreyfus of U.C. Berkeley (philosophy). He was also at MIT, and as a RAND consultant made many enemies with a paper under the think tank's imprint comparing AI to alchemy. In 1972 he published a book, What Computers Can't Do, and updated it as What Computers Still Can't Do (1992). His colleagues have not looked upon him kindly, and according to MIT's Joseph Weizenbaum, the AI community there gave Dreyfus "the silent treatment." He became a "non-person." But they had the last laugh when Dreyfus, not a good chess player, made the mistake of taking on an early chess program, MacHack, and lost.
Another and more widely respected critic of AI is John Searle, also of Berkeley's philosophy department. Using his "Chinese room" argument, Searle opposed the claim of John McCarthy and others that because computers can "respond" as though they have thoughts or beliefs, they can be regarded as really having them. (McCarthy once told Searle that a thermostat has three "beliefs": "It's too hot in here, it's too cold in here, and it's just right in here.") Searle responded that if he were put in a room equipped with Chinese symbols and a set of rules for manipulating them, and could use those rules to respond appropriately to questions with those symbols, he still would not understand a word of Chinese. Nor would a computer that performed manipulations and answered in the same way.
IN THE PAST HALF CENTURY, an important distinction has emerged: between "strong" and "weak" AI. It divides what is often called "cognitive" and "applied" artificial intelligence. It distinguishes between computers on the one hand actually knowing and thinking-the still unattained goal of "strong" AI-and on the other hand performing a programmed sequence of tasks in order to achieve a well-defined goal-"applied" AI. Robots can perform tirelessly on assembly lines; computers can decipher our voices over the telephone and respond with pre-recorded replies; sometimes they can diagnose diseases. There are more and more of these applications, and more will come. Everyone accepts that applied AI has been a success.
Some also say that computers can compose music and poems, but we need not take that seriously. It is more a reflection of declining artistic standards than the emerging creativity of machinery. Listen to the atonal and non-metric exercises in pretension that we hear on public radio, and it's all too easy to believe that machines can imitate them.
In the 2005 DARPA Grand Challenge, with driverless vehicles navigating a course in the Mojave Desert, Stanford's entry, a modified VW Touareg, came in first. "Stanley," as it was called, was one of five vehicles to finish. Decked out with radar, stereo, and monocular cameras, an array of five lasers, GPS, various sensors, and the latest in hi-tech from Silicon Valley, the vehicle completed the 130-mile course at an average speed of 19 mph and won the $2 million prize.
About six weeks before the desert race, I was walking across the Stanford campus at about 10 P.M., and I had a brief vision of a very different kind of navigation. The campus is deserted at that time of year (and night) and quite dark except for one or two dim lights. There was no moon. A woman happened to be walking nearby and suddenly a swift blur crossed our path. "What was that!" she said, in some alarm. As it happened the creature crossed exactly between me and one of the lights. It was a young fox going at what looked to be about 20 miles an hour, with utter self-assurance and with hardly any light to steer by.
No GPS, lasers or computers on board. How did that software get inside the animal's head? I was reminded of George Gilder's comments on the fly, as studied by an assistant to Carver Mead at Caltech a few years ago. The fly, Gilder wrote in The Silicon Eye,
can do flawless flip landings on the edge of a glass or on a glass ceiling while scarcely slowing down. All the Caltech computer firepower in Carver's lab... could not even scratch the surface of the evident superiority and continuing inscrutability of the eye, brain and nervous system of the fly. No obvious quantitative measure explained it.
JOHN MCCARTHY LIVES on the Stanford campus and in January I went to see him. I told him I wanted to discuss AI, 50 years on.
Right off, he asked me if I was interested in "sustainability." I had seen something on his website about that. Evidently he had become interested in the same natural-resource and energy issues that came to preoccupy the economist Julian Simon, who famously bet the professional alarmist Paul Ehrlich that the price of commodities would fall in the 1980s. In fact, McCarthy's position seemed to be similar to Simon's. Among other things, McCarthy advocates the expansion of nuclear power. Encouraged by tax subsidies, McCarthy told me, he put solar panels on the roof of his house some years ago, but they leaked and the company that installed them went out of business. So that didn't look too promising.
Once a Marxist, McCarthy has moved considerably to the right over the years. His web page "What Is Marxism?" attracted four times as many hits as his "What Is AI?" page, he told me. I formed the impression that after 50 years on the case, AI may no longer absorb him as it once did. Now emeritus, he has been a professor of Computer Science at Stanford since 1962. He originated the LISP computer programming language and has won numerous awards in the field of computer science, including the Kyoto Prize.
"I have to say that I was over-optimistic," he said, thinking back to the early days. "In my first proposals in the 1960s, I expected to accomplish things within three years that haven't been done to this day." Another Dartmouth participant, Herbert Simon, who (with Alien Newell) had already written a computer program that seemed to show that computers could "think," predicted (in 1965) that within 20 years machines would be capable of doing anything "that a man can do." Ry the time 1985 rolled around, however. Simon had won the Nobel Prize in Economics, but thinking computers still hadn't appeared on the scene.
This pattern of unwarranted optimism has persisted. When Stanley Kubrick's movie 2001 was released, in 1968, Marvin Minsky (a consultant to Kubrick) predicted that "in 30 years we should have machines whose intelligence is comparable to man's." Arthur Clarke, whose science fiction was the basis of Kubrick's movie, still believes that AI will reach human levels, but in an interview published in 1999, he postponed that development until "after 2020." By then, he believes, there will be "two intelligent species on Planet Earth-one evolving far more rapidly than biology would ever permit."
The reigning "strong" AJ enthusiast is Ray Kurzweil, the CEO of Kurzweil Technologies and a wellknown futurist. A student of Marvin Minsky's at MIT, he developed voice-recognition software and other useful gadgets. He also made the good prediction that a computer would beat the world chess champion in 1998 (it did so a year earlier).
This may have gone to his head, for in The Age of Spiritual Machines he declared that "the emergence of machines that exceed human intelligence in all of its broad diversity is inevitable." This will happen by 2029, he thinks. Having evolved minds of their own, computers then will make far more rapid progress. Human minds and computer minds will somehow merge, thoughts will be downloadable, and an age of super-intelligence will commence. With heavy reliance on Moore's Law-according to which computing power doubles every 18 months-Kurzweil promoted the same idea in his more recent book The Singularity Is Near. The hi-tech magazine Wired has offered us more of this hype, and journalists have mostly played along without a murmur.
In 2000, Bill Joy, the former chief scientist of Sun Microsystems, adopted Kurzweil's message but with a pessimistic spin. His anti-technology diatribe, "Why the Future Doesn't Need Us," was a seven-day marvel, praised by techies who should have known better. With a credulity that might have impressed African witch doctors, Joy declared that robotics, nanotechnology, and genetic engineering would gang up on us. ("It is no exaggeration to say that we are on the cusp of the further perfection of extreme evil," he wrote.)
I asked John McCarthy what he thought of this-Kurzweil's vision in particular.
"I don't think Kurzweil has any basis for what he says about that," he replied. "He imagines that there is a Moore's Law of artificial intelligence. Everything doubles in 18 months. I don't see that-either as a characteristic of the past or of the future."
The successes of AI? "Well, some of the expert systems have been a great success," McCarthy said. "They have been used commercially and that is why various companies support AI departments." He gave the American Express "authorizer's assistant" as an example. It uses various criteria to help credit card companies decide whether a charge should be allowed, thereby reducing fraud. In the health-care department, expert systems can diagnose diseases from a list of symptoms (as indeed a medical encyclopedia can).
Still, the success has been "very limited," McCarthy allowed. He repeated what a colleague had once told him, that "critical change will occur when we reach the level at which computer programs can learn from the Internet. Or you could say, when a computer program can learn physics by reading a physics textbook. But we are not at that level yet!"
The successful completion of the course in the Mojave Desert had given McCarthy a boost. Even so, he was disappointed that the rules had simplified the task. "If one car had to be stopped for some reason, the car behind it would also be stopped. They didn't want the rear car to see another one unexpectedly in front of it." GPS had been allowed on board not to steer the cars, he said, only to make sure that they didn't go grossly off the road. Five out of 20 vehicles completed the course (the year before none had).
I couldn't resist asking McCarthy about the fox I had seen darting across the Stanford campus by night. How did its computer get programmed?
"Well, it has had many quadrillions of trials, over many millions of years," he said. "Of course, it would be very interesting to know how good the first mammals were." That might be ascertainable, too, he thought, because the past is turning out to be less impenetrable than we imagined. He had been keeping up with the news, mentioning Richard Dawkins, Intelligent Design, and the Discovery Institute. But the hitter's website had been a disappointment. "All the links were to polemics against evolution, rather than taking their own ideas seriously." But he was open to the possibility that the actions of a designer, or "intervenor" as he put it, could be investigated using the methods of science.
Most people who pursue artificial intelligence are materialists, he agreed, meaning that everything is assumed to be physical, the mind included. It is assumed that a physical description of brain states can also yield a complete account of mental states. McCarthy said he knew one or two computer scientists who were not materialists-he mentioned one who was a bishop in the Mormon Church-but they are the exception. McCarthy himself is a materialist and an atheist, but he has started an organization called Atheists for School Prayer. It has seven members.
I told him I had read that computer programs had had difficulties stacking children's bricks. He said robots these days can easily do that. Nonetheless, the method employed "doesn't correspond to human common sense knowledge about objects," he said. Picking up bricks-yes. "Picking up cats-no." Robots can't do that. "Not even dead cats."
Before we can reach "human level," McCarthy said by way of summary, the field of AI needs new ideas and new thinking. "How long will it take? Well, maybe some smart graduate student has thought of the new concepts but hasn't told us yet, and it will be five years. On the other hand, it may be 500 years." Understanding intelligence is "a hard scientific problem," he said.
IN THE GULF THAT HAS OPENED UP between human level and applied AI, there is also a paradox. The truth is that computers can do quickly and easily those things that humans find it difficult to do, or can do only slowly-multiply two large numbers, for example. At the same time, computers cannot do at all those things that humans (and sometimes animals as well) can do easily, often without even having to think about them.
McCarthy had touched on one of the great unsolved problems when he mentioned human "common sense knowledge." Feeding common sense into computers has turned out to be extraordinarily difficult. Even small children know millions of things, and they learn them in the first two or three years of life without having to be taught or realizing that they are learning anything.
Since 1984, Douglas Lenat and his team have been working on a project called Cyc (short for encyclopedia and pronounced "psyche") to encode in computers common-sense knowledge-the literally millions of things that children know before they go to grade school. They are not included in any book because they are so obvious. Examples: people who die stay dead, nothing can be in two places at once, animals don't like pain, and so on. In his excellent book AI: The Tumultuous History of the Search for Artificial Intelligence, Daniel Crevier wrote that a consortium funded by a number of corporations began its task in the following fashion:
Researchers started by lifting pairs of sentences at random from newspapers, encyclopedias and magazine articles. Then they programmed into Cyc the basic concepts inherent in each sentence, so that the program could "understand" their meanings.
The first two sentences took Lenat's team three months to code. They were: "Napoleon died on St. Helena. Wellington was saddened." Through a complex hierarchy of interlocking frames, the researchers were able to impart to Cyc the knowledge that Napoleon was a person.... Death, in turn, is a subset of the frame "Event," which has as one of its properties TemporalExtent (indefinite in the case of death)...
"Sadness," as you can imagine, was quite a headache. And so it went.
To illustrate the problem of building common sense into computers, one program decided that everyone who ever lived in the past was famous. Why so? Because every query ("Was Moses famous? Was Isaac Newton famous? Was Napoleon famous?") received the answer yes.
A critic on the web wrote that, as of 1994: "Lenat has revised his estimate of the total number of 'rules' required upward by a factor of ten (to 20-40 million), and extended the time needed by another ten years. It bothers me a lot that the sort of thing being added apparently includes rules like 'A creature with two arms probably has two legs.' This seems out of control to me."
In 2002, Lenat told Computerworld that the Cyc project had put in "600 person-years of effort, and we've assembled a knowledge base containing 3 million rules of thumb that the average person knows about the world, plus about 300,000 terms or concepts."
Q: Can you give an example?
A: Terms like "first date" and rules of thumb like "People are more polite on their first date than they are on their nth date." A lot of these things were true 50,000 years ago, like "If you are carrying a container that's open on one side, you should carry it with the open end up." The idea is to represent these in formal logic as opposed to English sentences. You want the machine to be able to crank through the logical deductions-the consequences of these assertions-the same way you or I would.
The New Scientist reported earlier this year that Cyc now contains around 300,000 concepts, "such as 'sky' and 'blue,' and around 3 million different assertions, such as 'the sky is blue,' in a format that can be used by computers to make deductions."
There's still a long way to go, though. "Despite more than 20 years' work, the Cyc project contains only about 2 percent of the information its designers think it needs to operate with something like human intelligence."
(By the way, is the sky blue? Yes-except when it isn't.)
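The deductive side Lenat describes, "cranking through the logical deductions," can likewise be suggested with a toy forward-chaining sketch. The triple format, the rule, and the facts below are illustrative assumptions, not Cyc's real machinery.

```python
# A minimal forward-chaining sketch of rule-of-thumb deduction.
# The (subject, relation, object) triples and the single rule are
# made up for illustration; real Cyc uses a far richer logic (CycL).

facts = {
    ("Sky", "hasColor", "Blue"),
    ("CardboardBox", "isa", "OpenContainer"),
    ("OpenContainer", "carryWith", "OpenEndUp"),
}

# Rule: if ?x is an OpenContainer-like thing and that class should be
# carried open end up, then ?x should be carried open end up.
rules = [
    ([("?x", "isa", "?class"), ("?class", "carryWith", "?how")],
     ("?x", "carryWith", "?how")),
]

def match(pattern, fact, bindings):
    """Try to unify one pattern with one fact, extending the bindings."""
    new = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if new.setdefault(p, f) != f:
                return None
        elif p != f:
            return None
    return new

def match_all(premises, facts, bindings):
    """Yield every consistent set of bindings satisfying all premises."""
    if not premises:
        yield bindings
        return
    first, rest = premises[0], premises[1:]
    for fact in list(facts):
        b = match(first, fact, bindings)
        if b is not None:
            yield from match_all(rest, facts, b)

def substitute(pattern, bindings):
    return tuple(bindings.get(term, term) for term in pattern)

def forward_chain(facts, rules):
    """Keep applying rules until no new facts appear."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            for bindings in list(match_all(premises, facts, {})):
                new_fact = substitute(conclusion, bindings)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

forward_chain(facts, rules)
print(("CardboardBox", "carryWith", "OpenEndUp") in facts)   # True
```

The point is only that once the container rule of thumb is stated formally, the conclusion about the cardboard box falls out mechanically; stating millions of such rules, and stating them consistently, is the hard part.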
Douglas Lenat, who has also been funded by the U.S. government and by Microsoft billionaire Paul Allen, said in Crevier's book that Cyc had a good chance of serving "as the foundation of the first true AI agent.... No one in 2015 would dream of buying a machine without common sense." So let's give him another nine years. By then Lenat will be 65 and perhaps ready for retirement. There are a few optimists, among them Marvin Minsky, who said in 2001:
As far as I know, no computer knows that you can use a string to pull an object, but not to push it. You probably shouldn't eat string and if you tie a box with it, you should have put the stuff in it before. If you steal a string, its owner might be annoyed. There are a whole lot of things you know about string. There's been only one large project to do something about that [the common sense problem], and that's the famous Cyc project of Douglas Lenat in Austin. He's coming along, but if you look at Cyc, it still can't do any extensive amount of 5-year-old-type common sense reasoning.
Crevier reported that most AI researchers "don't believe it's going to work but can't help being fascinated by it."
There's a related difficulty called the "frame problem." It arises because if an intelligent program is to work, it must be able to infer that a certain sequence of actions will achieve its goal. But such inferences tend to be extremely literal minded, so that all consequences of a specified action have to be spelled out. In 1969, McCarthy and an assistant from the University of Edinburgh (a leading center of AI in Britain) found that in proving that one person could talk to another after looking up his phone number, "we were obliged to add the hypothesis that if a person has a telephone, he still has it after looking up a number in the telephone book."
Computer programs must know how to deduce the consequences of various actions. When we do one thing in the real world, most other things in our environment do not change. But sometimes they do change, and deciding when they do, when they don't, and which changes are relevant turned out to be difficult, and the encoding becomes unwieldy if every relevant change has to be spelled out. In fact, the "frame problem" raises basic questions about the applicability of formal logic to the analysis of everyday life.
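McCarthy's telephone example can be made concrete with a small sketch. In a naive encoding, each action states only what it changes, so nothing else can be proved to persist unless an explicit "frame axiom" says so; the predicate and function names below are invented for illustration, not McCarthy's own formalism.

```python
# A small illustration of the frame problem, with invented predicate names.
# An action lists only its effects; without explicit frame axioms
# (assertions that everything else persists), prior facts are simply lost.

def look_up_number(state):
    """Naive encoding: the action's only stated effect is 'knows number'."""
    return {"knows_number(pat)"}          # everything else is silently lost!

def look_up_number_with_frame_axioms(state):
    """Effect plus explicit frame axioms: whatever held before still holds."""
    unchanged = set(state)                # 'has_telephone(pat)' persists, etc.
    return unchanged | {"knows_number(pat)"}

before = {"has_telephone(pat)", "at(me, office)"}

print(look_up_number(before))
# {'knows_number(pat)'} -- we can no longer prove Pat still has a telephone

print(look_up_number_with_frame_axioms(before))
# {'has_telephone(pat)', 'at(me, office)', 'knows_number(pat)'}
```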
THE DIFFICULTIES ENCOUNTERED by Cyc, and the related "frame problem," are of interest even if computer programmers can't solve them, because they have indirectly taught us a lot about what humans know and how soon we know it. In fact, AI research has tended to validate the claims that MIT emeritus linguistics professor Noam Chomsky made about language decades ago. He argued that children have an innate knowledge of a basic grammatical structure common to all human languages. This allows them to produce an infinite number of sentences, including ones that no one had previously uttered. Children learn language so quickly that only an innate capacity can explain it, Chomsky said. His argument was opposed to that of the behaviorist B.F. Skinner, who argued that language developed as a series of progressively elaborated grunts, each one rewarded in turn.
Chomsky's claim made him unpopular in AI circles. One MIT student recalled that "people were very rude to Chomsky when he came over to the AI lab. They booed when he talked, and he was very miffed."
A problem analogous to the frame problem arises with robotics, but this time it is in the three-dimensional world, as opposed to a matter of literal-minded logic. A robot can "compute" its new location when it moves, but if there is any slippage or wheel spinning, then a discrepancy will open up between its real and its calculated position. After a few such displacements an early robot called Shakey soon lost track of its location and would bump into walls. Yet, in an exaggeration that has long characterized reporting on the subject, Life magazine in 1970 called Shakey "the first electronic person," capable of traveling about the moon "for months at a time, without a single beep of direction from the earth."
In fact, it could "barely negotiate straight corridors," as Daniel Crevier noted.
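The drift that undid Shakey is easy to reproduce in a few lines: if each commanded move suffers a small unmodelled slip, the robot's dead-reckoned position and its true position part company step by step. The numbers below are made up purely for illustration.

```python
import random

# Dead-reckoning drift with made-up numbers: the robot believes it moved
# exactly 1.0 m per step, but wheel slip shortens each real move a little.
random.seed(0)

believed, actual = 0.0, 0.0
for step in range(100):
    commanded = 1.0                       # what the program tells the wheels
    slip = random.uniform(0.0, 0.05)      # unmodelled slippage, 0-5 cm
    believed += commanded                 # the robot's internal estimate
    actual += commanded - slip            # where it really ends up

print(f"believed position: {believed:.1f} m")
print(f"actual position:   {actual:.1f} m")
print(f"accumulated error: {believed - actual:.1f} m")   # grows every step
```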
Rodney Brooks, in charge of the AI Lab at MIT today, has been enamored of robots and has differed from the approach of McCarthy and Minsky, who wanted to establish a proper computational foundation before expecting too much from robots. In the view of some of the younger researchers, however, this theoretical preparation has gone on for long enough, with little success. The best course is to go ahead and put the robots out there in the real world and hope for the best. Maybe they will learn from their environment-just as children do. Brooks told an interviewer in 1997:
What we've been able to do is build robots that operate in the world, in unstructured environments, and do pretty well, because they use whatever structure there is in the world to get the tasks done.
In a talk Minsky gave a few years ago, "It's 2001: Where's HAL?" his frustration with robots was plain. (HAL, for those who never saw the movie, was the malign computer in 2001 that decided to kill off the crew of the spaceship as it headed toward Jupiter. Science fiction may eventually come to be seen as the true inspiration for much of AI, and the source both of its optimism and its frustration. Getting a fictional computer to think is a problem that has been solved in a thousand novels, and all AI researchers seem to have read sci-fi in their spare time.) Here's what Minsky said:
Tell me something that you've learned from building a physical robot, and I'll tell you someone in the 1970s who wrote a big paper on that. So the student is wasting a whole year or three soldering connections and working with bad components. Every now and then the robot will go down the hall and actually find a door and go through it, if that's what you're programming it to do. But you don't know why because next time it won't. That's why you'll find that these robotics people treasure their videos-because it won't work tomorrow.
Minsky warned those going into AI: "If you see a student who says I'm building another robot, tell him 40 thousand people are doing that. In 15 years they've discovered 5 things (giving them the benefit of the doubt)." He added this about the science:
What happens in physical robotics is you never get to do the same thing twice. There's no science. There's no replicable experiment. It's just like ESP, meaning usually it doesn't work, but it works if you're happy enough to have a video.
In his books, notably The Society of Mind (1985) and The Emotion Machine (forthcoming, available at his website), Minsky seems mostly to be absorbed by experimental psychology. Often his ideas seem only tangentially related to AI. Yet he is surely correct that if computers are to know how to perceive three-dimensional objects, recognize them, decide whether to ignore them or to pay close attention, and so on, we must first form theories as to how human beings solve such problems.
Minsky's contributions to the field seem to be quite original, perhaps because in thinking about what perceptual and mental problems we do in fact solve, he spelled out what experimental psychologists had often taken for granted (just as encyclopedia writers take common sense for granted). He is no doubt also correct that sending robots forth to blunder about in the world and learn by their mistakes is a counsel of despair. It is unlikely that anything will come of it.
I DECIDED TO FIND OUT what David Gelernter thought about these matters. A professor of Computer Science at Yale, he was wounded when he opened a package mailed by the Unabomber in 1993. The following year Gelernter published The Muse in the Machine, in which he has some original things to say about the mind and creativity. Central to human thought is what he calls the "cognitive spectrum," ranging from highly focused "analytic" thought down to "low focus," oblivious-of-the-environment thought, verging on hallucination and dreams.
"I think we'll be able to make a computer that looks as if it were understanding, that fakes it very well," he told the New York Times in 1994. "But when we get down to the question, 'Is there real understanding present?' we will have to concede that there isn't."
Gelernter's e-mailed responses seemed to reinforce the idea of a growing gulf between cognitive and applied AI. "Applied AI has been a blow-out success," he said. His father, Herbert Gelernter, who was a cofounder of the field and present at the Dartmouth conference, had worked on the "applied" side and developed a "geometry-theorem proving machine" in the 1950s. But he "never claimed to be researching the human mind."
Cognitive AI, on the other hand, "was bound to fail," David Gelernter went on. "Alan Turing and his followers were hugely naïve about the mind. At least at the start, they didn't grasp the existence of mental states-of an inner mental world, intentional states, of consciousness." They didn't realize that "building a mind simulator in software and expecting it to think made no more sense than the idea of building a thunderstorm simulator and expecting it to get everyone wet (Searle's example). It was and is absurd. No reason has ever been adduced for believing that anyone will ever be able to build a conscious mind out of electronics."
Although the emergence of consciousness from exactly the right combination of electronics and software is not impossible, he added, the same could be said of its emergence from "exactly the right combination of mozzarella and tomato sauce, or bricks and mortar, or cardboard and rubber cement. None of these things is impossible, but none of them is terribly likely either."
In The Society of Mind, Marvin Minsky said that "minds are simply what brains do." This reflects the materialist worldview of almost all of the founding generation of AI researchers, including McCarthy, Simon, and Newell. Mind is what matter does, and can be reduced to molecules in motion. It is crucial to understand that this is the foundation on which the whole notion of thinking machines was built. Moreover, if that premise is true, then it surely does follow that mental activity is nothing more than some combination of neurons firing in the brain. And if that combination can be replicated, then (it is assumed) the mind will have been re-created in the computer.
That is the basis for the faith in cognitive AI. In an article about the computerized defeat of the world chess champion, for example, the columnist Charles Krauthammer wrote:
It seems to me obvious that machines will achieve consciousness. After all, we did, and with very humble beginnings. In biology, neurons started firing millions of years ago, allowing tiny mindless organisms to move about, avoid noxious stimuli, etc. But when enough of those neurons were put together with enough complexity, all of a sudden you got... us. A cartoon balloon pops up above that mass of individually unconscious neurons and says, "I exist." In principle, why should that not eventually occur with silicon? The number of chips and complexity of their interaction will no doubt be staggering and may require centuries to construct. But I do not see why silicon cannot make the same transition from unconsciousness to consciousness that carbon did.
But as Gelernter said, why should silicon be any more plausible as a medium than cardboard? Perhaps we should consider the possibility that the materialist premise itself is just plain wrong. The gulf that 50 years of research has unmistakably revealed suggests that older ideas, long discarded-notably dualism, or the separation of mind and matter into different kinds of substances-should be restored.
I was curious to know whether Gelernter was a materialist, too, so I asked him. His answer, given "from the standpoint of a practicing Jew," tried to marry materialism and religious faith (an awkward union):
There's no reason God can't manipulate the plain physical stuff of the brain to produce an awareness of His presence. His justice, His sanctity. Judaism traditionally locates God at opposite ends of the cosmos, outside the universe and deep inside the mind. But "deep inside the mind" doesn't necessarily imply dualism or epiphenomenalism. Nor does it imply that consciousness per se is anything other than biological. It's consciousness of right and wrong that can't be accounted for biologically-or more precisely, the sort of consciousness that allows an honest man to compel other people to adhere to a moral code. A professor who deduces "don't commit murder" from abstract principles understands that someone else might deduce some other rule. A man who knows that murder is wrong because God says so is in a different position.
He still believes what he wrote in Commentary five years ago: "Chances are that, fifty years from now, we will be grateful to computer technology for showing us what marvelously powerful machines we can build-and how little they mean after all."
No doubt we were all unduly impressed by those old room-sized computers depicted in 1960s cartoons. Now that they sit docilely under our desks we no longer expect that they possess, or will develop, minds of their own. They will remain our obedient servants, surely becoming more and more reliable and with an ever widening range of applications. As to their becoming "intelligent," I guess that will never happen. The search for "strong AI" has been worthwhile, nonetheless, if only because the many unavailing attempts to replicate the human mind have made us more conscious of its marvels.
Tom Bethell is a senior editor of The American Spectator and author of the new book The Politically Incorrect Guide to Science (Regnery).
The American Spectator, Vol. 39, No. 6, July/August 2006, p. 26+.