Cuddling Up to Cyborg Babies
Turkle, Sherry, UNESCO Courier
A "cybershrink" traces relations between children and their electronic pets and computer toys over three generations
Children have always used their toys and playthings to create models for understanding their world. Fifty years ago, the genius of Swiss psychologist Jean Piaget showed it is the business of childhood to take objects and use how they "work" to construct theories of space, time, number, causality, life and mind. At that time, a child's world was full of things that could be understood in simple, mechanical ways. A bicycle could be understood in terms of its pedals and gears, a windup car in terms of its clockwork springs. Children were able to take electronic devices such as basic radios and (with some difficulty) bring them into this "mechanical" system of understanding.
But in the early 1980s, a first generation of computer toys changed the traditional story. When children removed the back of their computer toys to "see" how they worked, they found a chip, a battery, and some wires. Sensing that trying to understand these objects "physically" would lead to a dead end, children tried to use a "psychological" kind of understanding. They asked themselves if the games were conscious, if they had feelings and even if they knew how to "cheat." Earlier objects encouraged children to think in terms of a distinction between the world of psychology and the world of machines, but the computer did not. Its "opacity" encouraged children to see computational objects as psychological machines.
Among the first generation of computational objects was Merlin, which challenged children to games of tic-tac-toe. For children who had only played games with human opponents, reaction to this object was intense. For example, while Merlin followed an optimal strategy for winning tic-tac-toe most of the time, it was programmed to make a slip every once in a while. So when children discovered strategies that allowed them to win and then tried these strategies a second time, they usually would not work. The machine gave the impression of not being "dumb enough" to let down its defences twice. Robert, seven, playing with his friends on the beach, watched his friend Craig perform the "winning trick," but when he tried it, Merlin did not slip up and the game ended in a draw.
Robert, confused and frustrated, threw Merlin into the sand and said, "Cheater. I hope your brains break." He was overheard by Craig and Greg, aged six and eight, who salvaged the by-now very sandy toy and took it upon themselves to set Robert straight. "Merlin doesn't know if it cheats," says Craig. "It doesn't know if you break it, Robert. It's not alive." Greg adds, "It's smart enough to make the right kinds of noises. But it doesn't really know if it loses. And when it cheats it don't even know it's cheating." Jenny, six, interrupts with disdain: "Greg, to cheat you have to know you are cheating. Knowing is part of cheating."
In the early 1980s such scenes were not unusual. Confronted with objects that spoke, strategized and "won," children were led to argue the moral and metaphysical status of machines on the basis of their psychologies: did the machines know what they were doing? Despite Jenny's objections that "knowing is part of cheating," children did come to see computational objects as exhibiting a kind of knowing. By doing so, they recast the Piagetian framework in which a definition of life centred around "moving of one's own accord."
Observing children in the world of "traditional"--that is, non-computational--objects, Piaget found that at first they considered everything that moved to be alive. Later, only things that moved without an outside push or pull counted. Gradually, children refined the notion to mean "life motions," so that only those things that breathed and grew were taken to be alive.
Motion gives way to emotion
Children broke with this orderly categorization by making distinctions about "machines that think. …