By Stephen Humphries, writer of The Christian Science Monitor
You needn't have taken a philosophy course to see "A.I.," the new Steven Spielberg movie, but you may wish you'd enrolled in Philosophy 101 by the time you exit the cinema.
"A.I." (Artificial Intelligence), is a futuristic story in which a robot resembling an 11-year-old boy embarks on a Pinocchio-like quest to become human. Mr. Spielberg's movie posits the idea that machines can develop self-awareness, and even understand love.
Is Spielberg's premise as far-fetched as "E.T." flying a bicycle past the moon? Not according to Ray Kurzweil, who is something of a superstar in the AI community, a field that now spans hundreds of corporations and universities across the world. In his book "The Age of Spiritual Machines: When Computers Exceed Human Intelligence," Dr. Kurzweil predicts that computers will come to replicate the full range of human intelligence.
It's the astonishing growth in real-world artificial-intelligence technology that is forcing thinkers, theologians, philosophers, and the public to reexamine some age-old philosophical questions with new vigor and urgency. Is it possible to replicate human consciousness in machines? If so, what does that tell us about consciousness? What does it mean to be human?
"What's really at issue in the debate are fundamental metaphysical theological convictions about the fundamental reality of things - is mind reducible to mechanism or to computation?" says Jay Richards, co-editor of the forthcoming book "Are We Spiritual Machines?" for the Seattle-based Discovery Institute. "The great thing about [artificial intelligence] is that there aren't a lot of subjects that can bring high-level philosophical disputes into the public sphere."
And for some, solving these questions while AI is still in its incipient stages is critical.
"If your grandmother was cuddling a [robot toy] Furby and feeling this incredible attachment to it, are you cool with that?" asks Sherry Turkel, a professor of the sociology of science at the Massachusetts Institute of Technology in Cambridge. "We're at a moment in history where ... you can have some people thinking it's totally unproblematic to have these kinds of relationships with robotic objects, and some people like me who still feel, 'can we talk about this?' "
It's important to define who we are now, and to determine whether a machine can possess true consciousness, says Dr. Turkle, before the difference between human and machine becomes too blurred.
"If history is correct, we will continue to define who we are in relationship to what we see as close to us," says Turkle. "We will define what's special about ourselves in relation to these robots."
The relationship between man and machine is already changing. "[AI] is creeping into our lives in ways that we're starting to become aware of," says Tom Mitchell, the incoming president of the American Association for Artificial Intelligence. "I had a phone conversation with a computer the other day. It wasn't a very interesting conversation, but I called the information number and it asked me which city and listing."
Shifting from practical to intelligent machines
The vast majority of AI research is focused on practical applications, but developments like this sort of voice-recognition software have shifted the threshold of what we now take for granted, coloring the way everyday people view philosophical debates about sentient computers, according to Mr. Mitchell.
Leslie Pack Kaelbling, associate director of the MIT Artificial Intelligence Laboratory, notes that we already talk as if the simplest of machines were intelligent: "You talk about your thermostat thinking it's too hot in here, and it needs to be colder."
Dr. Kaelbling says that she doesn't see any reason why we won't be able to make a machine that's indistinguishable from a human in the future. …