
Fredrik Lindsten and Thomas B. Schön (2013), "Backward Simulation Methods for Monte Carlo Statistical Inference", Foundations and Trends® in Machine Learning: Vol. 6: No. 1, pp 1-143. http://dx.doi.org/10.1561/2200000045

© 2013 F. Lindsten and T. B. Schön

Keywords: Particle smoothing, Sequential Monte Carlo

**In this article:**

Monte Carlo methods, in particular those based on Markov chains and on interacting particle systems, are by now tools that are routinely used in machine learning. These methods have had a profound impact on statistical inference in a wide range of application areas where probabilistic models are used. Moreover, many algorithms in machine learning are based on the idea of processing the data sequentially, first in the forward direction and then in the backward direction. In this tutorial, we review a branch of Monte Carlo methods based on this forward–backward idea, referred to as backward simulators. These methods are useful for learning and inference in probabilistic models containing latent stochastic processes. The theory and practice of backward simulation algorithms have undergone significant development in recent years, and the algorithms keep finding new applications. The foundation for these methods is sequential Monte Carlo (SMC). SMC-based backward simulators are capable of addressing smoothing problems in sequential latent variable models, such as general, nonlinear/non-Gaussian state-space models (SSMs). However, we also show that the underlying backward simulation idea is by no means restricted to SSMs. Furthermore, backward simulation plays an important role in recent developments of Markov chain Monte Carlo (MCMC) methods. Particle MCMC is a systematic way of using SMC within MCMC, and in this framework backward simulation gives us a way to significantly improve the performance of the samplers. We review and discuss several related backward-simulation-based methods for state inference as well as learning of static parameters, using both frequentist and Bayesian approaches.
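To make the forward–backward idea concrete, the sketch below (not taken from the monograph; the model, parameter values, and variable names are illustrative assumptions) runs a bootstrap particle filter forward over a toy linear-Gaussian SSM and then draws backward trajectories with a forward-filter backward-simulator (FFBSi), reweighting the stored filter particles at each step by the transition density to the state sampled at the next step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian SSM, for illustration only:
#   x_t = a * x_{t-1} + v_t,  v_t ~ N(0, q);   y_t = x_t + e_t,  e_t ~ N(0, r)
a, q, r = 0.9, 0.5, 1.0
T, N = 50, 200  # time steps, number of particles

# Simulate a state trajectory and observations.
x = np.zeros(T)
for t in range(1, T):
    x[t] = a * x[t - 1] + rng.normal(0.0, np.sqrt(q))
y = x + rng.normal(0.0, np.sqrt(r), T)

# Forward pass: bootstrap particle filter, storing particles and log-weights.
particles = np.zeros((T, N))
logw = np.zeros((T, N))
particles[0] = rng.normal(0.0, 1.0, N)
logw[0] = -0.5 * (y[0] - particles[0]) ** 2 / r
for t in range(1, T):
    w = np.exp(logw[t - 1] - logw[t - 1].max())
    w /= w.sum()
    idx = rng.choice(N, N, p=w)                      # multinomial resampling
    particles[t] = a * particles[t - 1, idx] + rng.normal(0.0, np.sqrt(q), N)
    logw[t] = -0.5 * (y[t] - particles[t]) ** 2 / r  # bootstrap weights

def backward_trajectory():
    """Sample one trajectory from the (approximate) joint smoothing density."""
    traj = np.zeros(T)
    w = np.exp(logw[-1] - logw[-1].max())
    w /= w.sum()
    traj[-1] = particles[-1, rng.choice(N, p=w)]
    for t in range(T - 2, -1, -1):
        # Backward weights: filter weights times transition density f(x_{t+1} | x_t^i).
        lw = logw[t] - 0.5 * (traj[t + 1] - a * particles[t]) ** 2 / q
        w = np.exp(lw - lw.max())
        w /= w.sum()
        traj[t] = particles[t, rng.choice(N, p=w)]
    return traj

# Average several backward trajectories to estimate E[x_t | y_{1:T}].
trajs = np.array([backward_trajectory() for _ in range(100)])
smoothed = trajs.mean(axis=0)
```

The forward pass alone gives filtered estimates; each backward pass reuses the stored particle system to draw a full state trajectory conditioned on all observations, which is the smoothing operation the text describes.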

1. Introduction

2. Monte Carlo Preliminaries

3. Backward Simulation for State-Space Models

4. Backward Simulation for General Sequential Models

5. Backward Simulation in Particle MCMC

6. Discussion

Acknowledgments

Notations and Acronyms

References


*Backward Simulation Methods for Monte Carlo Statistical Inference* is an excellent primer for anyone interested in this active research area.