Algorithm Bias: A Statistical Review

The biggest challenge faced by technical analysts is how best to use domain knowledge to infer an appropriate bias in their algorithms. Here, we dive into the subject of model selection using a system for the pound/dollar.


The most general (and toughest) challenge faced by technical analysts is neither optimization (optimizing parameters is straightforward) nor overfitting (overfitting avoidance is itself an assumption), but how best to use domain knowledge to infer an appropriate bias in their algorithms. At the risk of oversimplifying, statistics generally concerns testing a given hypothesis, while machine learning concerns formulating the process of generalization as a search through possible hypotheses in an attempt to find the best one. Classical statistics involves calculating the probability of the data if the null hypothesis is true, while Bayesian inference involves calculating the probability of a hypothesis, given the data.

As traders in particular, and scientists in general, our aims are better aligned with the paradigms of Bayesian inference and machine learning than with classical statistics.

Consider this: Let B be background information, H a hypothesis and D data. Then P(H|B) is known as the prior, P(D|B) the probability of the data, P(D|HB) the likelihood, and P(H|DB) the posterior. The probabilities are famously related via Bayes' theorem,

P(H|DB) = P(H|B)P(D|HB)/P(D|B)

As there is no such thing as an absolute probability, for notational convenience we often omit B. As the denominator in Bayes' theorem, P(D|B), is independent of H, when comparing hypotheses we can omit it and use:

P(H|D) ∝ P(H)P(D|H)
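To make the comparison of hypotheses concrete, here is a minimal sketch in Python of the proportional form of Bayes' theorem. The two candidate probabilities for an "up" day, the equal priors and the observed counts are purely illustrative and are not taken from the pound/dollar system described below.

    # Compare two hypotheses using P(H|D) proportional to P(H)P(D|H).
    # The hypotheses, priors and data below are illustrative only.
    from math import comb

    def likelihood(p_up, ups, n):
        # P(D|H): binomial probability of observing `ups` up days in n days.
        return comb(n, ups) * p_up**ups * (1 - p_up)**(n - ups)

    n, ups = 100, 58                                    # illustrative data D
    p_under_h = {"H1: p=0.50": 0.50, "H2: p=0.55": 0.55}
    prior = {h: 0.5 for h in p_under_h}                 # P(H): equal priors

    # Unnormalized posteriors P(H)P(D|H); their sum recovers P(D).
    unnorm = {h: prior[h] * likelihood(p, ups, n) for h, p in p_under_h.items()}
    evidence = sum(unnorm.values())
    posterior = {h: u / evidence for h, u in unnorm.items()}
    print(posterior)

Note that the omitted denominator P(D) is recovered simply by summing the unnormalized posteriors, which is why it can be dropped when ranking hypotheses.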

In the 18th century, Hume (1740) pointed out that "even after the observation of the frequent or constant conjunction of objects, we have no reason to draw any inference concerning any object beyond those of which we have had experience." More recently, and with increasing rigor, Mitchell (1980), Schaffer (1994) and Wolpert (1996) showed that bias-free learning is futile.

The important point is that one can never generalize beyond known data without making at least some assumptions. The no-free-lunch theorem for supervised machine learning (Wolpert 1996) states that, in terms of off-training-set error, there are no a priori distinctions between learning algorithms. In particular, this implies that there is no free lunch for overfitting avoidance: we should only constrain our algorithm if the constraint reflects our prior beliefs.

A model is a family of functions, or equivalently, a function is a particular parameter choice of a model. Model selection is the task of choosing a model with the correct inductive bias, which in practice means selecting a model of optimal complexity for the given data. A more complex model will always fit the training data better, but may not represent the true underlying model and thus perform poorly on new data. Note that model selection (which is difficult) logically precedes parameter selection (which is well understood).
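As a concrete (and deliberately simple) illustration of this point, not the article's method, the Python sketch below fits polynomial models of increasing degree to synthetic data. Because the models are nested, the training error can only fall as the degree grows; it is the out-of-sample error that model selection has to control.

    # Fit polynomial models of increasing complexity to synthetic data and
    # compare training error with error on held-out data. Illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    true_f = lambda x: 1.0 + 0.5 * x - 2.0 * x**2       # "true" underlying model
    x_train, x_test = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
    y_train = true_f(x_train) + rng.normal(0, 0.2, 30)
    y_test = true_f(x_test) + rng.normal(0, 0.2, 30)

    for degree in (1, 2, 5, 9):                         # candidate models
        coefs = np.polyfit(x_train, y_train, degree)    # parameter selection
        mse = lambda x, y: np.mean((np.polyval(coefs, x) - y) ** 2)
        print(f"degree {degree}: train MSE {mse(x_train, y_train):.4f}, "
              f"test MSE {mse(x_test, y_test):.4f}")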

OVERFITTING SOLUTION

Below, I present a pedagogical example of Bayesian model selection, a method that in principle solves the overfitting problem and originates in the work of Sir Harold Jeffreys some 70 years ago (Jeffreys 1939). The aim is to predict the daily British pound/U.S. dollar interbank rate. The data set spans Jan. 1, 1993, to Feb. 3, 2008, and consists of the average ask price for each day. Days where the target (defined below) was zero return (weekends) were excluded. The training set consisted of 3,402 data points and the out-of-sample set of 1,701 data points.
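Before any modeling, the data handling just described might look something like the following Python sketch. The file name, column name and exact target definition are my assumptions (the target is only defined below); only the zero-return exclusion and the 3,402/1,701 chronological split follow the text.

    # Hedged sketch of the data preparation: file and column names are
    # hypothetical, and the target shown here (next-day log return) is an
    # assumption rather than the article's definition.
    import numpy as np
    import pandas as pd

    prices = pd.read_csv("gbpusd_daily_ask.csv", parse_dates=["date"])
    prices = prices.set_index("date").sort_index()

    # Candidate target: next-day log return of the average daily ask price.
    log_ret = np.log(prices["ask"]).diff().shift(-1)

    # Exclude days where the target is zero return (weekends), then split
    # chronologically into 3,402 training and 1,701 out-of-sample points.
    data = log_ret[log_ret != 0].dropna()
    train, oos = data.iloc[:3402], data.iloc[3402:3402 + 1701]
    print(len(train), len(oos))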

For reasons of market efficiency, it is safest to take the view that there are no privileged features in financial time series. Beyond keeping the inputs potentially relevant and orthogonal, the only guiding principle used here is Tobler's first law of geography: "everything is related to everything else, but near things are more related than distant things" (Tobler 1970).

Let p_-n be the exchange rate n days ago. Consider five potential inputs:

x_1 = log(p_ …