By Ruggiero, Murray A., Jr.
Futures (Cedar Falls, IA) , Vol. 41, No. 12
Neural networks, if used properly, can provide the framework for a plethora of market analysis tools that can supplement an existing trading program or suggest new directions for future research. While the history of these tools dates back much further, their modern application took root in the late 1980s and came of age in 1993 when patent no. 5241620 was awarded to this author for the concept of embedding a neural network into a common spreadsheet. Suddenly, neural networks were no longer confined to the professional mainstream; the average trader could access them.
The analytical foundation for this leap is built on an algorithm called back propagation. In layman's terms, this is a method that allows a network to learn to discriminate between classes that can't be distinguished based on linear properties. Rumelhart, Hinton and Williams popularized the approach in their widely cited paper "Learning Representations by Back-Propagating Errors." Others who researched this approach include David Parker and Paul Werbos. Werbos arguably invented these techniques and presented them in "Beyond Regression: New Tools for Prediction and Analysis in the Behavioral Sciences," his 1974 Ph.D. dissertation at Harvard.
The back propagation algorithm consists of a multi-layer perceptron that uses non-linear activation functions (see "Simple net," right). The most commonly used functions are the sigmoid, which ranges from 0 to 1, and the hyperbolic tangent function, which ranges from -1 to 1. All inputs and target outputs must be mapped into these ranges when used in these types of networks.
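As a minimal sketch (the function and variable names are illustrative, not from this article), the two activation functions and the input scaling they require look like this in Python:

```python
import math

def sigmoid(x):
    """Sigmoid activation: squashes any real input into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    """Hyperbolic tangent activation: squashes any real input into (-1, 1)."""
    return math.tanh(x)

def scale_to_range(value, lo, hi, new_lo=0.0, new_hi=1.0):
    """Map a raw input (e.g. a price known to lie in [lo, hi]) into the
    activation range, as required before feeding data to such a network."""
    return new_lo + (value - lo) * (new_hi - new_lo) / (hi - lo)
```

For example, a price of 50 on a 0-to-100 scale maps to 0.5 for a sigmoid network, or to 0.0 when rescaled to the -1-to-1 range of a tanh network.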
The "magic" of back propagation, or backprop, is that mathematical calculations (the type typically found in first-year calculus) adjust the weights of the connections to minimize the error across the training set. An important attribute of these methods is that they generate a reasonably low error across the training set of inputs. However, they do not find the absolute minimum error, but a local minimum. This means that training a neural network is not exact; the result depends on the precise data set and on the randomly initialized starting weights. Repeating the same experiment does not always give the same answer.
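To make the weight-adjustment step concrete, here is a minimal, self-contained backprop sketch on the classic XOR problem, a class boundary no straight line can draw. The network size, names and learning rate are illustrative assumptions, not the author's code:

```python
import math, random

random.seed(0)  # fixed seed: without it, results vary run to run

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# XOR: output 1 only when exactly one input is 1 -- not linearly separable.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# 2 inputs -> 2 hidden units -> 1 output; third weight in each row is a bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

lr = 0.5
before = total_error()
for _ in range(5000):
    for x, t in data:
        h, o = forward(x)
        # Output delta: error times the slope of the sigmoid at o.
        d_o = (o - t) * o * (1 - o)
        # Hidden deltas: error propagated backward through the output weights.
        d_h = [d_o * w_o[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Gradient-descent updates, stepping each weight downhill.
        for j in range(2):
            w_o[j] -= lr * d_o * h[j]
        w_o[2] -= lr * d_o
        for j in range(2):
            w_h[j][0] -= lr * d_h[j] * x[0]
            w_h[j][1] -= lr * d_h[j] * x[1]
            w_h[j][2] -= lr * d_h[j]
after = total_error()
```

Training drives the total squared error down from its starting value, but only toward some local minimum; a different seed can land the same network in a different minimum, which is exactly why repeated experiments disagree.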
Backprop, in its original form, had a lot of issues. Many variations of this algorithm attempt to resolve those weaknesses. Early ideas used momentum and variable learning rate adjustment techniques, such as simulated annealing. When newer tactics are combined with older ones, the combination can optimize learning. For example, we perform batch learning in parallel so that we can run it on multiple cores, saving a tremendous amount of time. All of these variations are supervised learning algorithms: We give them input patterns and train them to output a certain target set of results. In doing so, we map the patterns, which in turn allows us to generalize for new patterns that were not used in training.
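One of those early fixes, momentum, is simple to show: a fraction of the previous weight change is blended into the current one, so the descent coasts through flat regions and shallow local minima. The function below is an illustrative sketch, not code from the article:

```python
def momentum_update(weight, gradient, prev_delta, lr=0.1, mu=0.9):
    """One gradient-descent step with momentum: the new change is the
    usual step down the gradient plus a fraction mu of the last change."""
    delta = -lr * gradient + mu * prev_delta
    return weight + delta, delta

# First step with no prior momentum: a plain gradient-descent move.
w, d = momentum_update(1.0, 0.5, 0.0)   # w = 0.95, d = -0.05
# Second step with the same gradient: momentum enlarges the move.
w, d = momentum_update(w, 0.5, d)       # d = -0.05 + 0.9 * (-0.05) = -0.095
```

A variable learning rate works on the other factor in the same product, shrinking lr over time so early steps explore and later steps settle.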
There are other algorithms, such as radial basis function networks and kernel methods (including kernel regression and support vector machines). All of these algorithms can be used to create approximations of non-linear functions, which is also how neural networks map a given input to an output. Put simply, we create a universal function "approximator" that, given a set of inputs, can provide a good idea of what the optimal solution to a problem would be.
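As an illustration of one such approximator, here is a minimal Nadaraya-Watson kernel regression sketch: the prediction for a new input is a weighted average of the known outputs, with nearby training points weighted most heavily. The names and the Gaussian width are assumptions for the example:

```python
import math

def gaussian_kernel(x, c, width=1.0):
    """Weight that falls off smoothly as x moves away from the center c."""
    return math.exp(-((x - c) ** 2) / (2.0 * width ** 2))

def kernel_regression(x, samples, width=1.0):
    """Estimate the output at x as a kernel-weighted average of the
    (input, output) pairs in samples."""
    weights = [gaussian_kernel(x, xi, width) for xi, _ in samples]
    total = sum(weights)
    return sum(w * yi for w, (_, yi) in zip(weights, samples)) / total
```

For samples [(0, 0), (1, 1), (2, 4)], the estimate at x = 1 blends all three outputs but leans most on the middle point; no weights are "trained" at all, yet the result still approximates the underlying non-linear function.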
As with most things, interest in neural networks took off when customers started demanding them. Traders, hungry for the next big thing, were clamoring for the technology in the early 1990s. However, the vast majority of these traders had no background in the underlying processes -- and those who had the background knew nothing about the markets or how they work.
But neural networks were not the perfect solution, and after many years of trial and error, it became clear why: Standard neural-network-based signal processing techniques simply do not work in the markets as signal generators. In other words, the process of implementing neural networks correctly must begin far earlier in trading system development. …