The Brain in the Machine: Biologically Inspired Computer Models Renew Debates over the Nature of Thought
Bower, Bruce, Science News
Neural networks -- a group of computer models of how the brain might work -- have generated much interest, not to mention hype, in the past few years. Yet while their ability to illuminate the dark recesses of the mind may have been exaggerated by ardent proponents, there remains a strong belief in some quarters that neural networks will link up with emerging studies of brain cells in action to produce new insights into how the human brain makes sense of the world and generates complex thoughts.
In fact, according to a report in the Sept. 9 SCIENCE, this field of "computational neuroscience" has already arrived.
Its ultimate aim is to explain how the brain uses electrical and chemical signals to represent and process information, say three researchers involved in neural network modeling: biophysicist Terrence J. Sejnowski of Johns Hopkins University in Baltimore, computer scientist Christof Koch of the California Institute of Technology in Pasadena and philosopher Patricia S. Churchland of the University of California, San Diego. Although this goal is not new, they contend science is now in a better position to serve as a matchmaker between the computer hardware of neural networks, or "connectionist" models, and the three pounds of "wetware" encased in the human skull.
At the philosophical heart of network modeling lies the notion that the mind emerges from the brain's behavior. Thus, it makes sense to imitate in computer setups the structure and biological wiring of the brain to reproduce mental abilities.
The appeal of this approach, says Yale University psychologist Denise Dellarosa, "has its roots in an idea that will not die" -- associationism. Put simply, associationism posits that humans learn through repetition to recognize people, things and events as more or less related to each other and as familiar or novel. Generalizing from examples, recognizing familiar faces in a crowd and driving a car are a few of the many tasks that characterize the effortless nature of associative learning.
Eighteenth-century philosophers David Hume and George Berkeley and psychologists in later centuries -- including William James and B.F. Skinner -- have championed, in their own ways, the cause of cognition as a building of associations through experience, Dellarosa says.
Neural networks attempt to simulate the associative learning involved in vision, language processing, problem solving and motor control. Mathematical calculations adjust the strength of the connections linking "neuron-like" processing units. A given stimulus fed into the network activates all the units at the same time, including feedback mechanisms that stimulate or suppress designated connections. If the statistical assumptions guiding the connections are on target, the network gradually produces a correct response after hundreds or thousands of trials.
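The scheme described above -- neuron-like units joined by weighted connections whose strengths are nudged over many trials until the right response emerges -- can be sketched in a few lines of code. This is a minimal illustration using a simple delta rule, not any particular model from the article; all function names are invented for the example.

```python
# Associative learning sketch: "neuron-like" input units feed one output
# unit through weighted connections; training strengthens or weakens each
# connection in proportion to the error, over many trials.
import random

def train_associator(patterns, trials=1000, rate=0.1, seed=0):
    """Learn input -> output associations with a delta rule.
    patterns: list of (input_vector, target) pairs, target in {0, 1}."""
    rng = random.Random(seed)
    n = len(patterns[0][0])
    weights = [rng.uniform(-0.5, 0.5) for _ in range(n)]
    bias = 0.0
    for _ in range(trials):
        inputs, target = rng.choice(patterns)
        # All input units contribute to the response at the same time.
        activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        response = 1 if activation > 0 else 0
        # Adjust connection strengths in proportion to the error.
        error = target - response
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error
    return weights, bias

def respond(weights, bias, inputs):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0
```

After a few hundred such trials on a learnable association (here, a simple logical OR of two stimuli), the network's connection strengths settle into values that produce the correct response.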
One example of plugging neurobiology into a connectionist model was recently reported by Sejnowski and Hopkins colleague Sidney Lehky (SN: 3/5/88, p.149). Their neural network calculates curvature from shading in an image and behaves much as two types of neurons in the cat's visual cortex do. It relies on a procedure called back propagation. The system contains a layer of input units, a layer of output units and a layer of intermediate or "hidden" units that gradually acquire the right electrical responses -- after several thousand trials -- to accomplish the computational task. Error signals are sent back through the network as training proceeds to adjust connections between units and guide the system toward a correct response.
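The back-propagation procedure the article describes -- a layer of input units, a layer of "hidden" units, a layer of output units, with error signals sent backward to adjust the connections after each trial -- can be sketched as follows. This is a generic textbook-style illustration, not a reconstruction of Sejnowski and Lehky's curvature-from-shading network; the network sizes, learning rate and function names are assumptions for the example.

```python
# Back propagation sketch: forward pass through hidden units, then error
# signals propagated backward to adjust both layers of connections.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def make_net(n_in, n_hid, n_out, seed=1):
    rng = random.Random(seed)
    w_ih = [[rng.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
    w_ho = [[rng.uniform(-1, 1) for _ in range(n_hid)] for _ in range(n_out)]
    return w_ih, w_ho

def forward(net, inputs):
    w_ih, w_ho = net
    hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs))) for row in w_ih]
    output = [sigmoid(sum(w * h for w, h in zip(row, hidden))) for row in w_ho]
    return hidden, output

def train(net, examples, trials=5000, rate=0.5):
    w_ih, w_ho = net
    for t in range(trials):
        inputs, targets = examples[t % len(examples)]
        hidden, output = forward(net, inputs)
        # Error signals at the output layer.
        d_out = [(tgt - o) * o * (1 - o) for o, tgt in zip(output, targets)]
        # Send the error back through the network to the hidden layer.
        d_hid = [h * (1 - h) * sum(d * w_ho[k][j] for k, d in enumerate(d_out))
                 for j, h in enumerate(hidden)]
        # Adjust connections between units, guided by the error signals.
        for k, d in enumerate(d_out):
            for j, h in enumerate(hidden):
                w_ho[k][j] += rate * d * h
        for j, d in enumerate(d_hid):
            for i, x in enumerate(inputs):
                w_ih[j][i] += rate * d * x
    return net
```

As in the model the article describes, the hidden units start with random connection strengths and only gradually -- over several thousand trials -- acquire responses that reduce the network's error on the task.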
Turning this approach around, other researchers test computational approaches with data from brain studies. At last summer's International Conference on Neural Networks, held in San Diego by the Institute of Electrical and Electronics Engineers (IEEE), Bill Betts of the University of Southern California in Los Angeles reported that cells in the toad's visual center appear to operate in a manner modeled by the neural networks of Boston University's Stephen Grossberg. …