
# Monte Carlo Smoothing for Nonlinear Time Series

By: Godsill, Simon J.; Doucet, Arnaud; West, Mike | Journal of the American Statistical Association, March 2004


## 1. INTRODUCTION

In this article we develop Monte Carlo methods for smoothing in general state-space models. To fix notation, consider the standard Markovian state-space model (West and Harrison 1997)

$$x_{t+1} \sim f(x_{t+1} \mid x_t) \quad \text{(state evolution density)},$$

$$y_{t+1} \sim g(y_{t+1} \mid x_{t+1}) \quad \text{(observation density)},$$

where $\{x_t\}$ are unobserved states of the system, $\{y_t\}$ are observations made over some time interval $t \in \{1, 2, \dots, T\}$, and $f(\cdot \mid \cdot)$ and $g(\cdot \mid \cdot)$ are prespecified state evolution and observation densities, which may be non-Gaussian and involve nonlinearity. It is assumed throughout that the required distributions can be represented by density functions, and that both $f(\cdot \mid \cdot)$ and $g(\cdot \mid \cdot)$ can be evaluated for any valid states and observations $x_t$ and $y_t$; $x_t$ and $y_t$ may both be vectors. We assume that the process $\{x_t\}$ is Markov, generated according to the foregoing state evolution, and that the observations $\{y_t\}$ are conditionally independent given the state process $\{x_t\}$. Hence an expression for the joint distribution of states and observations can be obtained directly by the probability chain rule,
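To make the notation concrete, the following sketch simulates one nonlinear, non-Gaussian-friendly state-space model of this form. The specific densities (a transition with a nonlinear mean and a squared observation function) are our illustrative choice and are not specified in this excerpt:

```python
import numpy as np

def simulate(T, rng):
    """Simulate an illustrative nonlinear state-space model (our choice,
    not taken from the article):
        x_{t+1} = 0.5 x_t + 25 x_t/(1 + x_t^2) + 8 cos(1.2 t) + v_t,  v_t ~ N(0, 10)
        y_t     = x_t^2 / 20 + w_t,                                   w_t ~ N(0, 1)
    """
    x = np.empty(T)
    y = np.empty(T)
    x[0] = rng.normal(0.0, np.sqrt(5.0))          # draw from the initial density f(x_1)
    for t in range(T):
        if t > 0:
            # state evolution: x_t ~ f(x_t | x_{t-1})
            x[t] = (0.5 * x[t - 1] + 25 * x[t - 1] / (1 + x[t - 1] ** 2)
                    + 8 * np.cos(1.2 * t) + rng.normal(0.0, np.sqrt(10.0)))
        # observation: y_t ~ g(y_t | x_t)
        y[t] = x[t] ** 2 / 20 + rng.normal(0.0, 1.0)
    return x, y

x, y = simulate(100, np.random.default_rng(0))
```

Because the observation depends on $x_t^2$, the sign of the state is not identified from a single observation, which is one reason this kind of model defeats linear-Gaussian methods.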

$$p(x_{1:t}, y_{1:t}) = f(x_1)\left(\prod_{i=2}^{t} f(x_i \mid x_{i-1})\right)\left(\prod_{i=1}^{t} g(y_i \mid x_i)\right),$$

where $f(x_1)$ is the distribution of the initial state. Here $x_{1:t} = (x_1, \dots, x_t)$ and $y_{1:t} = (y_1, \dots, y_t)$ denote the collections of states and observations from time 1 through $t$. In proving the validity of our proposed smoothing algorithm, a more formal definition of the state-space model is needed; we present this in Appendix A.

A primary concern in many state-space inference problems is sequential estimation of the filtering distribution $p(x_t \mid y_{1:t})$. In principle, the filtering density can be updated using the standard filtering recursions

$$p(x_{t+1} \mid y_{1:t}) = \int p(x_t \mid y_{1:t})\, f(x_{t+1} \mid x_t)\, dx_t$$

and

$$p(x_{t+1} \mid y_{1:t+1}) = \frac{g(y_{t+1} \mid x_{t+1})\, p(x_{t+1} \mid y_{1:t})}{p(y_{t+1} \mid y_{1:t})}.$$
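These two recursions are what a bootstrap particle filter approximates by Monte Carlo: sampling from $f(x_{t+1} \mid x_t)$ implements the prediction integral, and weighting by $g(y_{t+1} \mid x_{t+1})$ followed by resampling implements the Bayes update. A minimal sketch, assuming an illustrative model with transition mean $0.5 x_t + 25 x_t/(1 + x_t^2) + 8\cos(1.2 t)$, transition variance 10, and observation density $N(y_t;\, x_t^2/20,\, 1)$ (these specifics are our assumptions, not from the article):

```python
import numpy as np

def bootstrap_filter(y, N, rng):
    """Bootstrap particle filter: returns a (T, N) array whose row t holds
    N equally weighted particles approximating p(x_t | y_{1:t})."""
    T = len(y)
    particles = np.empty((T, N))
    x = rng.normal(0.0, np.sqrt(5.0), size=N)     # draws from the initial density f(x_1)
    for t in range(T):
        if t > 0:
            # prediction: sample x_t ~ f(x_t | x_{t-1}) for each particle
            x = (0.5 * x + 25 * x / (1 + x ** 2)
                 + 8 * np.cos(1.2 * t) + rng.normal(0.0, np.sqrt(10.0), size=N))
        # update: weight by g(y_t | x_t) = N(y_t; x_t^2/20, 1), in log space for stability
        logw = -0.5 * (y[t] - x ** 2 / 20) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # resample to return to an equally weighted particle set
        x = x[rng.choice(N, size=N, p=w)]
        particles[t] = x
    return particles

rng = np.random.default_rng(1)
y_demo = rng.normal(size=30)          # placeholder observation sequence for a smoke run
filt = bootstrap_filter(y_demo, 200, rng)
```

Resampling at every step is the simplest scheme; it keeps the particle set equally weighted at the cost of some added Monte Carlo variance.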

Similarly, smoothing can be performed recursively backward in time using the smoothing formula

$$p(x_t \mid y_{1:T}) = \int p(x_{t+1} \mid y_{1:T})\, \frac{p(x_t \mid y_{1:t})\, f(x_{t+1} \mid x_t)}{p(x_{t+1} \mid y_{1:t})}\, dx_{t+1}.$$
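This backward recursion suggests a particle implementation: given stored, equally weighted filter particles, a smoothed trajectory can be drawn backward in time by reweighting the time-$t$ particles proportionally to the transition density $f(x_{t+1} \mid x_t)$ evaluated at the already-sampled $x_{t+1}$, then sampling one of them. A minimal sketch, assuming a Gaussian transition with mean $0.5 x_t + 25 x_t/(1 + x_t^2) + 8\cos(1.2 t)$ and variance 10 (an illustrative model choice, not from this excerpt):

```python
import numpy as np

def backward_simulate(particles, rng):
    """Draw one trajectory approximately from p(x_{1:T} | y_{1:T}), given a
    (T, N) array of equally weighted filter particles, by sampling backward
    and reweighting with the transition density f(x_{t+1} | x_t)."""
    T, N = particles.shape
    traj = np.empty(T)
    traj[-1] = particles[-1, rng.integers(N)]     # uniform draw from p(x_T | y_{1:T})
    for t in range(T - 2, -1, -1):
        xt = particles[t]
        mean = 0.5 * xt + 25 * xt / (1 + xt ** 2) + 8 * np.cos(1.2 * (t + 1))
        # log f(x_{t+1} | x_t) up to an additive constant (variance 10)
        logw = -0.5 * (traj[t + 1] - mean) ** 2 / 10.0
        w = np.exp(logw - logw.max())
        w /= w.sum()
        traj[t] = particles[t, rng.choice(N, p=w)]
    return traj

rng = np.random.default_rng(2)
demo_particles = rng.normal(size=(30, 200))       # stand-in for stored filter particles
path = backward_simulate(demo_particles, rng)
```

Repeating the backward pass yields independent smoothed trajectories, whose empirical distribution approximates the joint smoothing distribution rather than only its marginals.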

Inference in general state-space models has been revolutionized over the past decade by the introduction of cheap and massive computational resources and the consequent development and widespread application of Monte Carlo methods. In batch-based scenarios, Markov chain Monte Carlo (MCMC) methods have been widely used, and various powerful tools have been developed and proven in application (see, e.g., Carlin, Polson, and Stoffer 1992; Carter and Kohn 1994; Shephard 1994; Shephard and Pitt 1997; De Jong 1997; Aguilar, Huerta, Prado, and West 1999; Aguilar and West 1998, 2000; Pitt and Shephard 1999b). However, constructing an effective MCMC sampler in models with significant degrees of nonlinearity and non-Gaussianity is not always straightforward. Specifically, in these cases it can be hard to construct effective proposal distributions, either over collections of states simultaneously or even for single states conditional on all others. The danger then is that the MCMC sampler will mix slowly and may never converge to the target distribution within realistic time scales.

Alternative Monte Carlo strategies based on sequential importance sampling, known generically as particle filters, have been rapidly emerging in areas such as target tracking for radar, communications, econometrics, and computer vision (West 1993; Gordon, Salmond, and Smith 1993; Kitagawa 1996; Liu and Chen 1998; Doucet, …
