Are Neural Networks a Better Forecaster?

In some cases, neural nets prove to be superior to their regression counterparts if they're applied correctly. Here's a representative comparison between a neural net and a well-designed regression model for forecasting electricity demand.

Despite the influx of technical analysis into futures trading over the past decade, statistical models of the markets' fundamental underpinnings continue to enjoy a strong following. Thus, deciding how to go about building a model remains a key decision for many traders and analysts. Complicating this is the continuous evolution of modeling techniques. Early this century, the question was whether a statistical approach to economic theory was an appropriate analytical method at all. Acceptance of that procedure then led to increasingly sophisticated forms of regression and econometric application.

The increased availability of economic data in today's information age, along with ongoing advances in computer technology, has enabled a constant "pushing of the envelope" in forecasting techniques. The stages of this progression range from single-equation regression and univariate (single) time series applications, to sophisticated systems of equations, and finally to state-of-the-art "artificial intelligence," or neural network, technology.

Here, we analyze this modeling technology by comparing the forecast accuracy of a sophisticated SEMTSA (structural econometric model time-series analysis) approach with that of a neural net-based computer algorithm. The experiment offers some insight into where model builders should focus their efforts.

A closer look

We often refer to neural nets as artificial intelligence because they are computer algorithms that attempt to reproduce human thought and reasoning. A simple neural net comprises a number of processing elements, called neurons, that are connected to one another and organized into layers.

The connectors, or "synapses," carry weights that are assigned during the optimization process. This structure mirrors the neural network of the human brain, where neurons, in conjunction with their synapses, process input impulses during learning. The optimization process of a neural net resembles the minimization of the error term in standard least-squares regression. The difference is that neural nets consider linear, non-linear and pattern-recognition relationships in the input data and carry out the optimization process automatically.
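To make this structure concrete, here is a minimal sketch of a one-hidden-layer net whose synapse weights are adjusted by gradient descent on a squared-error criterion, the same criterion least-squares regression minimizes. It illustrates the ideas above, not the model compared in this article; the Python/numpy setting, layer size, learning rate and toy data are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one input (e.g., temperature), one output (e.g., demand).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X) + 0.1 * rng.normal(size=(200, 1))  # non-linear target

n_hidden = 8
W1 = rng.normal(scale=0.5, size=(1, n_hidden))   # input -> hidden "synapses"
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1))   # hidden -> output "synapses"
b2 = np.zeros(1)
lr = 0.1

for step in range(5000):
    # Forward pass: each neuron applies a squashing function to a
    # weighted sum of its inputs.
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2

    # Squared error: the same loss least-squares regression minimizes.
    err = y_hat - y

    # Backward pass: adjust the synapse weights to reduce the error.
    grad_W2 = h.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1 - h ** 2)   # derivative of tanh
    grad_W1 = X.T @ dh / len(X)
    grad_b1 = dh.mean(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final mean squared error:", float((err ** 2).mean()))
```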

Standard regression models require the analyst to apply different functional forms to the data, such as deciding on exponential powers of the explanatory variables. By making the functional form part of the optimization process itself, neural nets are claimed to better predict trending relationships among model variables and identify crucial turning points. Also, the neural net's algorithm is structured so that it is not troubled by collinearity (a strong relationship between two of the predictor variables), which can cause major errors in standard regression models.
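The functional-form point can be shown with a small sketch, again an illustration rather than either of the article's models. The data, the cubic feature choice and the network hyperparameters below are assumptions made for the example; the scikit-learn estimators are generic stand-ins for the two approaches.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
x = rng.uniform(0, 4, size=(300, 1))
y = np.exp(-x[:, 0]) * np.sin(4 * x[:, 0])   # unknown non-linear relationship

# Standard regression: the analyst must decide on the functional form,
# here a cubic polynomial in x. A poor choice limits the fit.
X_poly = np.hstack([x, x ** 2, x ** 3])
lin = LinearRegression().fit(X_poly, y)

# Neural net: the functional form is learned as part of the
# optimization; only the raw input is supplied.
net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000,
                   random_state=0).fit(x, y)

print("regression R^2:", round(lin.score(X_poly, y), 3))
print("neural net R^2:", round(net.score(x, y), 3))
```

On data like these the net typically achieves the better in-sample fit, though the exact scores vary with the random seed and hyperparameters.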

But neural nets do have limitations. One potential shortcoming is overfitting, or overoptimization (see, for example, "The Amateur Scientist," Scientific American, September 1992). This common problem occurs during training. The neural net's performance in linking dependent to independent variables generally improves the longer the weight-adjustment process continues. However, when out-of-sample data are introduced to the system, the net's performance often breaks down. The reason is that the net has learned too many anomalies of the training data and attempts to apply them to the new data. Such overfitting results from improper design of the network structure.
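The pattern is easy to reproduce in a sketch: hold out some data, train an oversized network, and watch in-sample error keep falling while out-of-sample error eventually turns back up. The small sample, network size and learning rate below are assumptions chosen to provoke overfitting; this is not the article's model.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=(60, 1))          # deliberately small sample
y = np.sin(3 * x[:, 0]) + 0.3 * rng.normal(size=60)

train, test = (x[:40], y[:40]), (x[40:], y[40:])

# An oversized network invites memorization of the training noise.
net = MLPRegressor(hidden_layer_sizes=(100, 100), solver="adam",
                   learning_rate_init=0.01, random_state=0)

for epoch in range(2000):
    net.partial_fit(*train)                    # one pass of weight adjustment
    if epoch % 400 == 0:
        in_mse = np.mean((net.predict(train[0]) - train[1]) ** 2)
        out_mse = np.mean((net.predict(test[0]) - test[1]) ** 2)
        print(f"epoch {epoch:5d}  in-sample {in_mse:.3f}  out-of-sample {out_mse:.3f}")
```

One common remedy is to monitor held-out data during training and stop once out-of-sample performance stops improving (scikit-learn's early_stopping option does exactly this).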

The representative neural net in this article was built with a program that includes algorithms designed to limit overfitting. …