Science & Tech: It's a No-Brainer; Artificial Intelligence Is 50 Years Old. So Why Are We Yet to Create Anything Worthy of the Name? Danny Bradbury Investigates


For the most promising, radical and exciting branch of computer science, 2006 should have gone down as a landmark year. Robot butlers should have popped the corks of fine champagne, having first made their own informed decision as to which vintage was appropriate. It is 50 years since the Dartmouth Summer Research Project on Artificial Intelligence (AI), where researchers laid down the foundations for their hopes, dreams and plans to develop machines that would simulate every aspect of human intelligence. So why don't we have those robots?

At the recent anniversary conference, held once more at Dartmouth College in New Hampshire and this time titled AI@50, the verdict was that, clearly, developers still have far to go.

"I had hoped that the original Dartmouth meeting would make a substantial dent in the problem of human-level AI, and it didn't," says John McCarthy, the organiser of the 1956 meeting, and a speaker at this year's event. "The main reason is that AI was a lot harder to develop than was anticipated."

By human-level AI, McCarthy means machines that really have a mind. Instead, most AI developments to date have focused on reproducing very narrow aspects of human intelligence: voice-recognition software uses AI to turn speech into text, but it cannot discuss what you're telling it. Some digital cameras use AI to steady an image in the viewfinder, but they can't tell you what the image shows.

Chess has always been the holy grail for AI researchers, says Brock Brower, an AI author who was on the steering committee of the AI@50 conference. Chess, it is thought, has the right combination of human flair alongside serious number-crunching. IBM cracked that problem in May 1997 when Deep Blue beat the chess champion Garry Kasparov in a controversial match. "But the only thing it can do is play chess. It's an idiot savant," says Brower. "It was a stupendous computational victory over the clever chess player, but it doesn't have a mental state."

So can we recreate true understanding in a computer? Some hope to do it by simulating aspects of the brain, which is made up of a hugely complex network of nerve cells called neurons. Scientists have created simplistic neural networks that can "learn" responses to basic conditions, for applications such as visual recognition.
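To make that concrete, here is a minimal sketch of this kind of learning: a single artificial neuron, a perceptron, nudging its connection weights each time it answers wrongly until it reliably detects a basic condition. The code is Python and the task is purely illustrative, not drawn from any of the research described here.

    # A perceptron: the simplest artificial neuron. It "learns" a response
    # to a basic condition by nudging its weights whenever it answers wrongly.
    def train_perceptron(samples, epochs=20, lr=0.1):
        """samples: list of (inputs, expected_output) pairs."""
        n_inputs = len(samples[0][0])
        weights = [0.0] * n_inputs
        bias = 0.0
        for _ in range(epochs):
            for inputs, expected in samples:
                # Fire (output 1) only if the weighted sum crosses zero.
                summed = sum(w * x for w, x in zip(weights, inputs)) + bias
                output = 1 if summed > 0 else 0
                error = expected - output
                # Nudge each weight toward the correct answer.
                weights = [w + lr * error * x for w, x in zip(weights, inputs)]
                bias += lr * error
        return weights, bias

    # Toy "visual recognition": learn to signal when both pixels are lit.
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    weights, bias = train_perceptron(data)
    for inputs, _ in data:
        fired = sum(w * x for w, x in zip(weights, inputs)) + bias > 0
        print(inputs, "->", int(fired))

After a few passes over the data the weights settle and the neuron fires only for the (1, 1) pattern. Nothing in it "knows" what a pixel is, which is exactly the gap between such networks and real understanding.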

Today, computing power can produce more effective neural nets, argues Michele Bezzi, a researcher at Accenture Technology Labs. His research concentrates on making each neuron work more like a complex human cell and less like a binary switch.
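The article does not detail Bezzi's models, but the flavour of the idea can be shown with a standard "leaky integrate-and-fire" neuron: instead of switching on or off, it keeps an internal potential that accumulates input and leaks away over time, firing only when a threshold is crossed. The parameters here are arbitrary, chosen just to make the behaviour visible.

    # A leaky integrate-and-fire neuron: unlike a binary switch, it has
    # internal state (a membrane potential) that integrates input, leaks
    # away over time, and emits a spike only on crossing a threshold.
    def simulate_lif(input_current, dt=1.0, tau=10.0, threshold=1.0):
        """Return the time steps at which the neuron spikes."""
        potential = 0.0
        spikes = []
        for t, current in enumerate(input_current):
            # Decay toward rest, then integrate the incoming current.
            potential += dt * (-potential / tau + current)
            if potential >= threshold:
                spikes.append(t)
                potential = 0.0  # reset after firing
        return spikes

    # A steady drip of weak input: a plain on/off switch would never
    # respond, but the integrating neuron eventually accumulates enough
    # charge to fire, and keeps firing at regular intervals.
    print(simulate_lif([0.15] * 50))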

Steven Furber, a professor at Manchester University's School of Computer Science, is going for volume with his "brainbox" project. The idea is that large numbers of simple microprocessors can be linked like networks of neurons. He is building a prototype network of one million neurons, which he says could scale to a much larger size if enough chips were available. By comparison, a bumblebee's brain contains roughly 850,000 neurons.
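Purely as an illustration of the principle (this is not Furber's design), a few such simple units can be wired in a chain, with a spike from one becoming the input to the next. Each unit does only trivial arithmetic; whatever interesting behaviour emerges lives in the connections.

    # A toy chain of three spiking units. Each unit is deliberately dumb;
    # the signal only travels because of how the units are wired together.
    class Unit:
        def __init__(self, threshold=1.0, leak=0.9):
            self.potential = 0.0
            self.threshold = threshold
            self.leak = leak

        def step(self, incoming):
            # Leak a little charge, add the new input, fire on threshold.
            self.potential = self.potential * self.leak + incoming
            if self.potential >= self.threshold:
                self.potential = 0.0
                return 1  # spike
            return 0

    chain = [Unit(), Unit(), Unit()]
    for t in range(30):
        signal = 0.4  # constant external drive into the first unit
        for unit in chain:
            signal = unit.step(signal)  # each spike feeds the next unit
        if signal:
            print("spike reached the end of the chain at step", t)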

But this is just a small step. There are around 100 billion neurons in the human brain, warns Furber. "A large part of the problem is knowing what networks map on to it, and that is where there are still huge gaps in our knowledge," he says.

We simply don't know enough about how the brain is wired together to emulate it, even if we had the computing power. And it isn't just the mechanics of the brain that we fail to understand, says McCarthy. The process of thinking is also hard to analyse. "Humans are not very good right now at understanding intelligence," he says.

It is becoming clearer to some researchers that there are different aspects to intelligence, and that they do not all involve reasoning. "If you get as old as I am, at 75, you begin to think that your emotions are part of your intelligence," Brower muses. How do you build that into software?

Yesterday's software algorithms simply won't cut it, argues Sebastian Thrun, director of Stanford University's AI lab. …