Towards SMC: Importance Sampling Explained
Simple Explanation of Importance Sampling for Sequential Monte Carlo (SMC)
Classic Importance Sampling
Suppose that our aim is to compute the posterior expectation

$$\mathbb{E}_{p}[f(x)] = \int f(x)\, p(x)\, dx.$$

Suppose also that it is impractical to sample directly from $p(x)$, but that we can draw samples from a proposal distribution $q(x)$ whose support includes that of $p$. Multiplying and dividing the integrand by $q$ gives

$$\mathbb{E}_{p}[f(x)] = \int f(x)\, \frac{p(x)}{q(x)}\, q(x)\, dx \approx \frac{1}{N} \sum_{i=1}^{N} w(x^{i})\, f(x^{i}), \qquad x^{i} \sim q,$$

where we have defined the (unnormalized) importance weights

$$w(x) = \frac{p(x)}{q(x)}.$$

When $p$ is known only up to a normalizing constant, as is typical for posteriors, we instead use the self-normalized estimator $\sum_{i=1}^{N} W^{i} f(x^{i})$ with normalized weights $W^{i} = w(x^{i}) / \sum_{j=1}^{N} w(x^{j})$.
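As a concrete sketch of the self-normalized estimator above (the target, proposal, and parameter values here are illustrative choices, not from the text), we can estimate $\mathbb{E}_{p}[x]$ for a Gaussian target known only up to a constant, using a wider Gaussian proposal:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target p: N(1, 0.5^2), deliberately known only up to a normalizing constant
def log_p_unnorm(x):
    return -0.5 * ((x - 1.0) / 0.5) ** 2

# Proposal q: N(0, 2^2) -- wider than p, so the weights w = p/q stay well behaved
mu_q, sigma_q = 0.0, 2.0
def log_q(x):
    return -0.5 * ((x - mu_q) / sigma_q) ** 2 - np.log(sigma_q * np.sqrt(2.0 * np.pi))

N = 100_000
x = rng.normal(mu_q, sigma_q, size=N)   # x^i ~ q
log_w = log_p_unnorm(x) - log_q(x)      # unnormalized log-weights log w(x^i)
w = np.exp(log_w - log_w.max())         # subtract max before exp for numerical stability
W = w / w.sum()                         # self-normalized weights W^i

est = np.sum(W * x)                     # estimate of E_p[x]; the true value is 1
print(est)
```

Working with log-weights and subtracting the maximum before exponentiating avoids overflow, which matters once the weights span many orders of magnitude.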
Properties of Classic Importance Sampling
- For the variance of the estimator of $\mathbb{E}_{p}[f(x)]$ to be finite we require $\operatorname{Var}_{q}\!\left[ f(x)\, w(x) \right] < \infty$. In practice this means $q$ should have heavier tails than $p$, so that the ratio $w(x) = p(x)/q(x)$ stays bounded.
- Notice that the vector of normalized weights $(W^{1}, \dots, W^{N})$ tells us how well $q$ fits $p$. In particular, when $q = p$ we have $W^{i} = 1/N$, i.e. a vector of uniform weights. So the more uniform the weights, the better the fit. In practice, there will be only a few dominant weights: if $q$ badly fits $p$ then the vast majority of weights will be negligible, and one or two weights will dominate.
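The two regimes above can be seen directly by computing normalized weights for a good and a poor proposal (the specific distributions below are illustrative choices, not from the text):

```python
import numpy as np

rng = np.random.default_rng(1)

def normalized_weights(x, log_p, log_q):
    """Normalized importance weights W^i from log-densities, computed stably."""
    log_w = log_p(x) - log_q(x)
    w = np.exp(log_w - log_w.max())
    return w / w.sum()

# Target p = N(0, 1) (log-density up to a constant)
log_p = lambda x: -0.5 * x**2
N = 1000

# Good proposal: q = p, so every weight is exactly 1/N (perfectly uniform)
x_good = rng.normal(0.0, 1.0, size=N)
W_good = normalized_weights(x_good, log_p, lambda x: -0.5 * x**2)

# Poor proposal: q = N(4, 0.5^2), far from p, so a handful of samples
# in q's left tail carry almost all of the normalized weight
x_bad = rng.normal(4.0, 0.5, size=N)
W_bad = normalized_weights(x_bad, log_p, lambda x: -0.5 * ((x - 4.0) / 0.5) ** 2)

print(W_good.max())   # exactly 1/N: uniform weights
print(W_bad.max())    # far larger than 1/N: weight degeneracy
```

The largest weight under the mismatched proposal dwarfs the uniform value $1/N$, which is exactly the degeneracy that motivates resampling in SMC.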