Financial Market Prediction System with Evolino Neural Network and Delphi Method

Article excerpt

1. Introduction

Artificial intelligence methods have become increasingly important in financial market prediction. Three elements are of major importance: the selection of the input data, the choice of the forecasting tool, and the correct use of the output data. Investors seeking profitable growth therefore require a stable and reliable forecasting model.

Kimoto et al. (1990) proposed a stock market prediction system with modular neural networks; Wang and Leu (1996) used ARIMA-based neural networks. The application of neural networks to stock market prediction was presented by Kulkarni (1996). The accuracy of these predictions depended on the neural network architecture and on the selection of the inputs.

Stock prices have also been forecasted using evolutionary systems. Kim and Han (2000) proposed a new hybrid of a genetic algorithm with artificial neural networks. The genetic algorithm not only searches for the optimal or near-optimal connection weights in the learning algorithm, but also looks for the optimal or near-optimal thresholds of the feature discretization. Hassan et al. (2007) proposed and implemented a fusion model combining the hidden Markov model, artificial neural networks (ANN), and genetic algorithms to forecast the behaviour of financial markets. The weighted average of the predictions was used to forecast stock prices and increase the accuracy of the model. Choudhry and Garg (2008) proposed a hybrid machine learning system based on genetic algorithms and support vector machines for stock market prediction, using the correlation between the stock prices of different companies.

Prediction systems based on neuro-fuzzy sets have also been used to predict financial markets. Ang and Quek (2006) proposed a model that synergizes the price difference forecast method with a forecast bottleneck-free trading decision model. Chiang and Liu (2008) developed a fuzzy rule-based system in which the clustering technique and the simplified and wavelet (Chang, Fan 2008) fuzzy rule-based systems were integrated for forecasting. The system proposed by Agrawal et al. (2010) used an adaptive neuro-fuzzy inference system for making decisions based on the values of selected technical indicators; among the various indicators available, the system used weighted moving averages, divergence, and the RSI (relative strength index). Quek et al. (2011) proposed a novel stock trading framework based on a neuro-fuzzy associative memory architecture. The architecture incorporated the approximate analogical reasoning schema to resolve the problems of discontinuous responses and inefficient memory utilization caused by uniform quantization in the associative memory structure.

Suppose it is known that p is an element of some set of distributions P. Choose a fixed weight ω_q for each q in P such that the ω_q add up to 1 (for simplicity, suppose P is countable). Then construct the Bayes mixture M(x) = Σ_{q∈P} ω_q q(x), and predict using M instead of the optimal but unknown p. How wrong can this be? The recent work of Hutter (2001) provides general and sharp loss bounds: let L_M(n) and L_p(n) be the total expected unit losses of the M-predictor and the p-predictor, respectively, on the first n events. Then L_M(n) − L_p(n) is at most of the order of √L_p(n). That is, M is not much worse than p, and in general no other predictor can do much better. In particular, if p is deterministic, then the M-predictor soon stops making errors. If P contains all recursively computable distributions, then M becomes the celebrated enumerable universal prior. The aim of this paper is to construct a model that makes predictions with a small enough difference M(t) − p(t) for some fixed time t.
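The Bayes-mixture idea above can be illustrated with a minimal sketch. Here P is a toy class of three Bernoulli distributions with equal prior weights ω_q; the data are generated by one unknown member p of P, and M predicts each event by posterior-weighted averaging. All names (thetas, priors, bayes_mix_prob) and the choice of log-loss are illustrative assumptions, not taken from the paper; for log-loss the regret L_M(n) − L_p(n) is bounded by ln(1/ω_p), a special case of the kind of bound cited above.

```python
import math

def bayes_mix_prob(bit, history, thetas, priors):
    """Conditional probability M(bit | history): the posterior-weighted
    average of q(bit) over the candidate distributions q in P."""
    # Likelihood of the observed history under each candidate q
    likes = []
    for th in thetas:
        like = 1.0
        for b in history:
            like *= th if b == 1 else (1.0 - th)
        likes.append(like)
    evidence = sum(w * l for w, l in zip(priors, likes))
    # One-step prediction, mixing each q's forecast by its posterior weight
    num = sum(w * l * (th if bit == 1 else 1.0 - th)
              for w, l, th in zip(priors, likes, thetas))
    return num / evidence

# Data generated by the (unknown to M) distribution p with theta = 0.8
data = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1]
thetas = [0.2, 0.5, 0.8]      # the countable class P (three members here)
priors = [1 / 3, 1 / 3, 1 / 3]  # fixed weights omega_q summing to 1

loss_M = loss_p = 0.0          # cumulative log-losses L_M(n) and L_p(n)
for n, bit in enumerate(data):
    m = bayes_mix_prob(bit, data[:n], thetas, priors)
    p = 0.8 if bit == 1 else 0.2
    loss_M += -math.log(m)
    loss_p += -math.log(p)

regret = loss_M - loss_p       # stays below ln(3), the log inverse prior of p
print(regret)
```

By the chain rule, the accumulated conditional log-losses equal −ln M(x_1…x_n) and −ln p(x_1…x_n), and since M(x_1…x_n) ≥ ω_p p(x_1…x_n), the regret can never exceed ln(1/ω_p) = ln 3 here, regardless of the data.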

Schmidhuber et al. (2005) introduced a general framework of sequence learning algorithms, EVOlution of recurrent systems with LINear outputs (Evolino). …