A Comparative Evaluation of Methods for Combining Forecasts*

Traditionally, the forecast choice problem has assumed the existence of a "best" forecast or method among those available and has focused on identifying the most appropriate forecast for a specific situation, as expressed in Chambers, Mullick and Smith [4] and Makridakis, Wheelwright and McGee [9]. Beginning with the Bates and Granger [1] article, however, there has been considerable interest in combining two or more forecasts to form a composite forecast. The rationale behind this approach is that such a composite is based on more information than any one of its component forecasts, that it should yield a lower "error," and that no single forecast is likely to be a consistently better performer over time than all the others, even in a given situation. Since then, and especially over the last ten years, renewed interest in this area has produced the development and advocacy of further combining methods by Bordley [2], Granger and Ramanathan [6], and Winkler and Makridakis [12], as well as the burgeoning empirical work of Makridakis et al. [7], Makridakis and Winkler [8], and Newbold and Granger [11].

All the empirical studies cited above are based either on a large collection of series, each of considerable length, as in [7, 9, 12], or on a few series, again each of considerable length, as in [6, 11]. Many decision makers and researchers may not have access to such extensive data bases, or may be unable to concur with judgments that may have been overly affected, one way or another, by performance over the distant past. This article presents comparative results for those Newbold and Granger [11] combining methods identified as superior by Winkler and Makridakis [12], and for the unconstrained linear combination method of Granger and Ramanathan [6]. The data are taken from quarterly U.S. forecasts for two series, viz., growth in current and real $ GNE. The accuracy of these combinations is compared with the accuracy of the individual forecasts themselves and of their simple average. The comparison is made through the Mean Absolute Percentage Error (MAPE) and the Mean Squared Error (MSE), the two error measures or evaluative criteria that have found wide acceptance, as reported by Carbone and Armstrong [3].

* Supported by a grant from the President's Fund, University of Regina. An earlier version of this paper was presented at the TIMS XXVI International Meeting, Copenhagen, June 1984.
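Since MAPE and MSE serve as the evaluative criteria throughout the comparison, the short Python sketch below spells out how each is computed for a single forecast series; the function names and the toy numbers are illustrative assumptions, not taken from the article.

```python
import numpy as np

def mape(actuals, forecasts):
    """Mean Absolute Percentage Error, expressed in percent."""
    return 100.0 * np.mean(np.abs((actuals - forecasts) / actuals))

def mse(actuals, forecasts):
    """Mean Squared Error."""
    return np.mean((actuals - forecasts) ** 2)

# Toy example: four quarters of actual growth and one set of forecasts.
actuals = np.array([3.1, 2.4, 1.8, 2.9])
forecasts = np.array([2.9, 2.6, 1.7, 3.0])
print(mape(actuals, forecasts), mse(actuals, forecasts))
```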

Description of the Combination Methods Studied

In this section, the following notation is used.

C_t: the combined forecast for period t,

    C_t = \sum_{i=1}^{n} w_{i,t} F_{i,t}

F_{i,t}: forecast for period t from forecaster i; i = 1, 2, ..., n; n = 6 in this study.

w_{i,t}: weight attached to F_{i,t} in forming C_t.

1. Simple Average: w_{i,t} = 1/n for each i.

2. WINKMAK K: refers to the Kth procedure of Winkler and Makridakis [12]. Only procedures 1, 3, and 4, identified by them as superior, are considered here.

(a) WINKMAK 1: parameterized by ν, the number of past periods used in forming the weights, with ν = 3, 6, 9, and 12.

    w_{i,t} = \Big( \sum_{s=t-\nu}^{t-1} e_{i,s}^{2} \Big)^{-1} \Big/ \sum_{j=1}^{n} \Big( \sum_{s=t-\nu}^{t-1} e_{j,s}^{2} \Big)^{-1}

where e_{i,s} = (A_s - F_{i,s}) / A_s and A_s is the preliminary estimate of the actual for the variable being forecast for period s. Thus, there are four different variations of this procedure, one for each value of ν.
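To make the mechanics concrete, the following Python sketch forms a combined forecast from the simple-average weights and from WINKMAK 1 weights computed as above; the synthetic data, the n = 6 setup, and the helper names are illustrative assumptions rather than the authors' code or data.

```python
import numpy as np

def winkmak1_weights(actuals, forecasts, t, v):
    """WINKMAK 1 weights for period t: inverse sums of squared percentage
    errors over periods t-v, ..., t-1, normalized across the n forecasters
    (v plays the role of the window length nu)."""
    e = (actuals[t - v:t] - forecasts[:, t - v:t]) / actuals[t - v:t]  # e_{i,s}
    inv_sse = 1.0 / (e ** 2).sum(axis=1)   # (sum_s e_{i,s}^2)^(-1) for each i
    return inv_sse / inv_sse.sum()         # divide by the sum over all forecasters

def combine(forecasts_t, weights):
    """C_t = sum_i w_{i,t} * F_{i,t}."""
    return float(np.dot(weights, forecasts_t))

# Illustrative data: n = 6 forecasters, 12 past quarters of preliminary actuals
# and forecasts, plus the six forecasts to be combined for the next period.
rng = np.random.default_rng(0)
actuals = 2.0 + rng.normal(0.0, 0.5, size=12)
forecasts = actuals + rng.normal(0.0, 0.6, size=(6, 12))
next_forecasts = 2.0 + rng.normal(0.0, 0.6, size=6)

w_simple = np.full(6, 1.0 / 6)                           # 1. Simple Average
w_wm1 = winkmak1_weights(actuals, forecasts, t=12, v=6)  # 2(a). WINKMAK 1, nu = 6
print(combine(next_forecasts, w_simple), combine(next_forecasts, w_wm1))
```

Weights computed this way sum to one by construction, and forecasters with smaller recent percentage errors receive larger weights.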

(b) WINKMAK 3: parameterized by ν, ν = 3, 6, 9, and 12, and by β, β = 0.5, 0.7, and 0.9. Thus, there are twelve different variations of this procedure, one for each possible combination of the four ν values and the three β values.

    w_{i,t} = \beta w_{i,t-1} + (1 - \beta) \Big[ \Big( \sum_{s=t-\nu}^{t-1} e_{i,s}^{2} \Big)^{-1} \Big/ \sum_{j=1}^{n} \Big( \sum_{s=t-\nu}^{t-1} e_{j,s}^{2} \Big)^{-1} \Big]
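The WINKMAK 3 update amounts to exponential smoothing of last period's weights toward the current inverse-squared-error weights. A self-contained Python sketch under the same kind of illustrative assumptions (synthetic data, hypothetical helper names) follows.

```python
import numpy as np

def inverse_sse_weights(actuals, forecasts, t, v):
    """Inverse sums of squared percentage errors over periods t-v, ..., t-1,
    normalized to sum to one (the bracketed term in the WINKMAK 3 formula)."""
    e = (actuals[t - v:t] - forecasts[:, t - v:t]) / actuals[t - v:t]
    inv_sse = 1.0 / (e ** 2).sum(axis=1)
    return inv_sse / inv_sse.sum()

def winkmak3_update(prev_weights, actuals, forecasts, t, v, beta):
    """w_{i,t} = beta * w_{i,t-1} + (1 - beta) * [inverse-SSE weight for period t]."""
    return beta * prev_weights + (1.0 - beta) * inverse_sse_weights(actuals, forecasts, t, v)

# Illustrative data: n = 6 forecasters, 12 past quarters.
rng = np.random.default_rng(1)
actuals = 2.0 + rng.normal(0.0, 0.5, size=12)
forecasts = actuals + rng.normal(0.0, 0.6, size=(6, 12))

beta, v = 0.7, 6
w = np.full(6, 1.0 / 6)            # start from equal weights
for t in range(v, 12):             # carry the weights forward one period at a time
    w = winkmak3_update(w, actuals, forecasts, t, v, beta)
# w still sums to one: each update is a convex combination of normalized weights
```

Because both terms in the update are weight vectors that sum to one, the smoothed weights remain a proper set of weights; β controls how quickly they respond to recent performance.
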
(c) WINKMAK 4: parameterized by γ, γ = 1.0, 1.5, and 2.0. Thus, there are three possible variations of this procedure, one for each different value of γ. …