Ian Parberry University of North Texas
Computation consumes resources, including time, memory, hardware, and power. A theory of computation, called computational complexity theory (not to be confused with the more recent science of complexity studied by physicists), has grown from this simple observation, starting with the seminal paper of Hartmanis and Stearns (1965). The prime tenet of this field is that some computational problems intrinsically consume more resources than others. The resource usage of a computation is measured as a function of the size of the problem being solved, that is, the number of bits needed to encode the input. The idea is that as science and technology progress, the amount of data we must deal with will grow rapidly with time. We need not only to solve today's technological problems, but also to scale up to larger problems as our needs grow and larger and faster computers become available. The goal of computational complexity theory is to develop algorithms that are scalable, in the sense that the rate of growth in their resource requirements does not outstrip the ability of technology to supply them.
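To make the notion of scalability concrete, the following sketch (an illustration, not taken from the text) compares how hypothetical step counts grow with the input size n for a few representative growth rates. The particular rates shown (linear, quasilinear, quadratic, exponential) are chosen only as examples.

```python
# Illustrative sketch: how resource requirements might scale with input size n
# for algorithms of different asymptotic complexity. The growth rates below are
# hypothetical examples, not ones singled out by the author.
import math

def steps(n):
    """Hypothetical step counts for four algorithms on an input of n bits."""
    return {
        "linear      (n)":       n,
        "quasilinear (n log n)": n * math.log2(n),
        "quadratic   (n^2)":     n ** 2,
        "exponential (2^n)":     2 ** n,
    }

for n in (10, 20, 40, 80):
    print(f"n = {n}")
    for name, count in steps(n).items():
        print(f"  {name:24s} {count:,.0f}")
```

Doubling the input size multiplies the quadratic cost by four, but it squares the exponential cost; an exponential-time algorithm is therefore not scalable in the sense above, no matter how quickly hardware improves.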
An important contribution of neural networks is their capacity for efficient computation. The first computers were created in rough analogy with the brain or, more correctly, in rough analogy with what a certain group of people believed about the brain at the time. Although technology has advanced greatly in recent decades, modern computers are little different from their older counterparts. Some scientists feel that, in order to produce better computers, we must return to the brain for further inspiration.