Developments in Scalable Posterior Sampling through Random Weighting
Format: IPS Abstract
Keywords: Bayesian, computational, mixture model, uncertainty quantification
Session: Invited Session 9B - Modern Approaches for Scientific Inference and Uncertainty Quantification
Thursday 5 December 9:30 a.m. - 11 a.m. (Australia/Adelaide)
Abstract
This presentation will focus mainly on the problem of uncertainty quantification in Gaussian Mixture Models (GMMs). A natural way to quantify uncertainty in GMMs is through Bayesian methods. However, sampling from the joint posterior distribution of GMMs via standard Markov chain Monte Carlo (MCMC) poses several computational challenges, which have hindered broader adoption of fully Bayesian implementations of these models. A growing body of literature has introduced the Weighted Likelihood Bootstrap and the Weighted Bayesian Bootstrap as alternatives to MCMC sampling. The core idea of these methods is to repeatedly compute maximum a posteriori (MAP) estimates on many randomly weighted posterior densities. These MAP estimates can then be treated as approximate posterior draws. Nonetheless, a central question remains open: how should the random weights be selected under arbitrary sample sizes? We therefore introduce the Bayesian Optimized Bootstrap (BOB), a computational method that automatically selects these random weights by minimizing, through Bayesian Optimization, a black-box and noisy version of the reverse Kullback–Leibler (KL) divergence between the Bayesian posterior and an approximate posterior obtained via random weighting. Our proposed method outperforms competing approaches in recovering the Bayesian posterior, provides better uncertainty quantification, and retains the key asymptotic properties of existing methods. We demonstrate BOB's performance through extensive simulations and real-world data analyses.
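To make the random-weighting recipe concrete, the following is a minimal sketch of the Weighted Likelihood Bootstrap idea described above, not the authors' BOB method. It uses an assumed toy model (a normal mean with known unit variance and a N(0, 100) prior), draws i.i.d. Exp(1) weights, and treats each weighted MAP estimate as an approximate posterior draw; the model, the weight distribution, and all names below are illustrative choices, not details taken from the abstract.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=1.5, scale=1.0, size=50)   # toy observed data
n = y.size

def weighted_neg_log_posterior(theta, w):
    """Negative randomly weighted log-posterior for a normal-mean model
    with known unit variance and a N(0, 100) prior on the mean."""
    th = theta[0]
    loglik = -0.5 * (y - th) ** 2             # per-observation log-likelihood
    logprior = -0.5 * th ** 2 / 100.0         # log N(0, 100) prior (up to a constant)
    return -(w @ loglik + logprior)           # weights multiply each likelihood term

draws = []
for _ in range(1000):
    w = rng.exponential(scale=1.0, size=n)    # i.i.d. Exp(1) random weights
    fit = minimize(weighted_neg_log_posterior, x0=np.zeros(1), args=(w,))
    draws.append(fit.x[0])                    # weighted MAP ~ approximate posterior draw
draws = np.array(draws)
print(f"approx. posterior mean = {draws.mean():.3f}, sd = {draws.std():.3f}")
```

In this framing, BOB would treat the weight distribution (for instance, the scale of the exponential weights above) as a tuning input and select it by Bayesian Optimization against a noisy estimate of the reverse KL divergence to the true posterior; that outer optimization loop is not shown here.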