Artificial Intelligence: A Comeback Story

For many traders whose careers stretch through the 1990s, "artificial intelligence" is a tainted phrase. However, technology and best practices are finally starting to catch up to the original promise, and while artificial intelligence is not a magic bullet, it is now a reasonable addition to any trader's toolbox.


Many segments of the field of artificial intelligence have found their way into the trading industry today. These include various types of neural networks and related methods such as kernel regression, as well as several different types of genetic algorithms and machine induction.

The influx of these tools began in force in the early 1990s. Neural networks were the new buzzword in trading, and nearly all large traders and institutions invested heavily in the technology. During the mid-1990s, genetic algorithms and machine learning bloomed. With only a few exceptions, this first boom period was not a success. Most of the trading strategies developed did not hold up going forward. The primary reasons were:

1) Analysts tried to make neural networks and other artificial intelligence methods do too much. They made them the heart of the system; if the network failed, so did the system.

2) Technical quirks caused neural networks to fall out of favor. Chief among them: networks start from random weights and therefore produce different results each time they are trained, as the sketch following this list illustrates.
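To make that reproducibility problem concrete, here is a minimal sketch in Python. It trains the same simple one-neuron network twice on the same data, changing only the random seed that sets the starting weights; the data, seeds and parameters are all illustrative.

import random

def train(seed, data, epochs=100, lr=0.1):
    """Train a single threshold neuron with the perceptron rule."""
    rng = random.Random(seed)
    w1, w2, b = (rng.uniform(-1, 1) for _ in range(3))  # random starting weights
    for _ in range(epochs):
        for (x1, x2), target in data:
            output = 1 if w1 * x1 + w2 * x2 + b >= 0 else 0
            error = target - output        # correct answer minus network's answer
            w1 += lr * error * x1
            w2 += lr * error * x2
            b += lr * error
    return round(w1, 2), round(w2, 2), round(b, 2)

# Logical OR: a linearly separable toy problem
OR_DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
print(train(seed=1, data=OR_DATA))  # one valid set of weights...
print(train(seed=2, data=OR_DATA))  # ...and a different, equally valid set

Both runs learn the data, but the final weights differ, so a developer re-running the same training session would see different numbers each time. That made strategies built on these networks hard to validate and audit.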

It is now 20 years since the first neural network boom started, and neural networks are making a comeback. They are not alone: other advanced technologies such as genetic algorithms, machine rule induction, fuzzy logic and chaos theory are returning with them.

During the first two boom periods, these technologies were used in isolation. For example, one developer would focus on neural networks; another would apply only genetic programming. The present trend is to integrate multiple technologies. A review of the professional journals in this field shows that for most of the period 2001-06, scholars published few articles applying A.I. methods to trading. Now, academic research in this area is booming once again.

In this, the first of a multi-part series on artificial intelligence's comeback, we'll review past attempts and examine why this comeback may be different.

A BRIEF HISTORY

Neural network technology takes its cue from the human brain by emulating its structure. Work on neural networks began in the 1940s and led, in 1957, to Frank Rosenblatt's "Perceptron," a linear classifier and the simplest kind of feed-forward neural network (see "Perceptron simplified," above).

The neuron is the basic structural unit of a neural network. A neuron receives any number of inputs, each carrying a weight that reflects its importance. The weighted inputs are summed and, if the total reaches a threshold, the neuron fires and sends a signal to every neuron downstream. These impulses are passed along until the output layer is reached, where the output signals are translated into real-world information.
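As a concrete illustration, here is a minimal Python sketch of a single neuron; the weights and threshold are illustrative values, not taken from the article.

def neuron_fires(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of inputs reaches the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Two inputs; the first carries twice the weight of the second
print(neuron_fires([1, 0], weights=[0.6, 0.3], threshold=0.5))  # fires: 1
print(neuron_fires([0, 1], weights=[0.6, 0.3], threshold=0.5))  # stays quiet: 0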

Although this system worked well for simple problems, it was demonstrated in 1969 that a single-layer network could not solve non-linear classifications such as the "exclusive/or" problem. The exclusive/or problem describes a simple real-world situation. For example, it is possible to go shopping or to go to the movies, but it is not possible to do both at the same time (see "Pick and choose," right).
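One way to see the limitation is to search for a single neuron that computes exclusive/or. The sketch below, a brute-force search over an illustrative grid of weights and thresholds, finds none:

import itertools

# The four exclusive/or cases: fire for exactly one active input
XOR_CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def classify(x, w1, w2, threshold):
    return 1 if w1 * x[0] + w2 * x[1] >= threshold else 0

grid = [i / 10 for i in range(-20, 21)]  # weights and thresholds from -2.0 to 2.0
solutions = [
    (w1, w2, t)
    for w1, w2, t in itertools.product(grid, repeat=3)
    if all(classify(x, w1, w2, t) == label for x, label in XOR_CASES)
]
print(len(solutions))  # 0: no single neuron solves exclusive/or

A finer grid would not help. Geometrically, no single straight line can split the two exclusive/or classes, which is why single-layer networks fail here.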

Neural networks represent a branch of computing science called machine learning, which includes two major branches: supervised and unsupervised learning. In supervised learning, the network learns from a teacher: the answer produced by the current weights is compared with the correct answer, and the weights are adjusted to minimize the error across the complete training set. This is how the original Perceptron worked, and this type of neural network is the one most often used in financial analysis. The goal is for the network to learn the training set well and still produce good answers on new cases it has never seen before.
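Here is a minimal sketch of that supervised loop, assuming the classic perceptron learning rule and the linearly separable logical AND function as the training set; both choices are illustrative.

# Teacher's answers for logical AND: fire only when both inputs are on
training_set = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w1, w2, bias, lr = 0.0, 0.0, 0.0, 0.1
for epoch in range(10):
    mistakes = 0
    for (x1, x2), target in training_set:
        output = 1 if w1 * x1 + w2 * x2 + bias >= 0 else 0
        error = target - output          # teacher's answer minus network's
        if error:
            mistakes += 1
            w1 += lr * error * x1        # adjust the weights to shrink
            w2 += lr * error * x2        # the error on this case
            bias += lr * error
    print(epoch, mistakes)               # mistakes fall to zero as it learns

Once the mistakes reach zero, the network has learned the training set; whether it also answers new, unseen cases well is the separate question of generalization.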

In 1986, a paper was presented on an algorithm called "back propagation," announcing the discovery of a method that allows a network to learn to discriminate between classes that are not linearly separable. …
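To illustrate the idea, here is a minimal sketch of back propagation on the exclusive/or problem that stumped the single-layer Perceptron. The one-hidden-layer architecture, seed and learning rate are illustrative choices, not the 1986 paper's exact setup.

import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])              # exclusive/or targets

W1, b1 = rng.normal(size=(2, 4)), np.zeros((1, 4))  # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros((1, 1))  # hidden -> output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0

for _ in range(5000):
    hidden = sigmoid(X @ W1 + b1)                   # forward pass
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: send the output error back through the layers
    d_out = (output - y) * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out                     # gradient descent steps
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0, keepdims=True)

print(output.round(2).ravel())  # approaches [0, 1, 1, 0]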