Approximation of the Formal Bayesian Model Comparison using the Extended Conditional Predictive Ordinate Criterion
Conference
64th ISI World Statistics Congress
Format: CPS Abstract
Keywords: "bayesian, monte carlo simulation
Session: CPS 43 - Statistical modelling VII
Tuesday 18 July 8:30 a.m. - 9:40 a.m. (Canada/Eastern)
Abstract
According to decision theory, the optimal method for Bayesian model comparison is the formal Bayes factor (BF). However, the formal BF is computationally troublesome for more complex models. When the predictive distributions under the competing models have no closed form, a cross-validation idea, the conditional predictive ordinate (CPO) criterion, can be used instead. In the cross-validation sense, this is a “leave-out one” approach. The CPO can be calculated directly from the Monte Carlo (MC) output, and the resulting Bayesian model comparison is called the pseudo Bayes factor (PBF). By increasing the “leave-out size” we move closer to the formal Bayesian model comparison, and at “leave-out all” we recover the formal BF; however, the MC error also grows with the “leave-out size”. In this study, we examine this trade-off for linear and logistic regression models.
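For concreteness, the following is a minimal sketch of how the CPO and PBF can be estimated from MC output. It assumes (an assumption of this sketch, not stated in the abstract) that posterior draws are summarised as a matrix of per-observation log-likelihoods. The leave-out-one case is the classical harmonic-mean CPO estimator; the extension to larger leave-out sizes shown here is one plausible reading of the extended criterion and may differ from the authors’ exact formulation.

```python
import numpy as np
from itertools import combinations
from scipy.special import logsumexp

def log_cpo_subset(loglik, idx):
    """MC (harmonic-mean) estimate of log p(y_S | y_{-S}) for a left-out set S.

    loglik: (n_draws, n) array with loglik[s, i] = log p(y_i | theta_s),
    where theta_s are draws from the full posterior p(theta | y).
    """
    n_draws = loglik.shape[0]
    joint = loglik[:, list(idx)].sum(axis=1)   # log p(y_S | theta_s)
    # CPO_S ~ [ n_draws^{-1} * sum_s 1 / p(y_S | theta_s) ]^{-1}
    return np.log(n_draws) - logsumexp(-joint)

def log_pseudo_marginal(loglik, leave_out=1):
    """Sum of log-CPO over all leave-out sets of the given size.

    leave_out=1 gives the classical log pseudo-marginal likelihood,
    sum_i log CPO_i; larger sizes are one plausible form of the
    extended criterion (the exact aggregation may differ).
    """
    n = loglik.shape[1]
    return sum(log_cpo_subset(loglik, s)
               for s in combinations(range(n), leave_out))

# Log pseudo Bayes factor of model 1 versus model 2:
# log_pbf = log_pseudo_marginal(loglik_m1) - log_pseudo_marginal(loglik_m2)
```

As the leave-out size grows toward the full sample, p(y_S | theta) covers more of the data and the harmonic-mean estimator becomes noisier, which matches the MC-error behaviour described above.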
Our study reveals that, when comparing two close linear models, the PBF can favour a different model than the BF. Larger “leave-out sizes” are therefore preferred, as they give results closer to the optimal BF. On the other hand, MC-sample-based formal Bayesian model comparisons incur more MC error as the “leave-out size” increases; we observe this by comparing with the available closed-form results. Still, within a reasonable error, a “leave-out size” greater than one can be used instead of fixing it at one. These findings extend to logistic regression models, where no closed-form solution is available.