
Compatibility of Prior Specifications across Linear Models

Consonni, G.; Veronese, Piero
2006

Abstract

Model comparison within a collection of candidate models is an important and active area of statistical methodology and practice, especially from the Bayesian perspective. An essential requirement for applying the Bayesian paradigm is the specification of a prior distribution on the parameter space of each candidate model: clearly this task becomes prohibitive as soon as the number of models is even moderately large. However, when models are nested within an encompassing (or full) model, one can try to relate priors across models. In this way, only one prior, on the parameter space of the full model, need be assigned, while the prior under each submodel is derived from it. This would not only solve the elicitation problem, but also achieve some sort of prior compatibility across models. In this paper we provide a unified framework for the assignment of priors under a collection of submodels, given a prior on the full model. We introduce two interpretations of nested models, carefully describing the allied notation, and describe some procedures to derive priors under each submodel, namely marginalization, conditioning, and Kullback-Leibler (KL) projection. We motivate and develop the general methodology through the variable selection problem in linear regression, and illustrate the methods with three examples. In the light of our findings, we conclude that the procedure based on conditioning is not particularly advisable, while KL projection priors, together with a default improper prior, may jointly help identify a collection of plausible models for Bayesian variable selection.
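To make concrete why marginalization and conditioning can yield different submodel priors, the following is a minimal sketch, not taken from the paper, assuming a bivariate normal full-model prior on beta = (beta1, beta2) and a submodel that drops beta2. Both derived priors follow from standard Gaussian identities; the paper's KL projection construction is not reproduced here.

```python
import numpy as np

# Hypothetical full-model prior on beta = (beta1, beta2): N(mu, Sigma).
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.6],
                  [0.6, 2.0]])

# Submodel M0 drops beta2 (i.e., sets beta2 = 0).

# (1) Marginalization: integrate beta2 out of the full prior,
#     giving beta1 ~ N(mu1, Sigma11).
mu_marg = mu[0]
var_marg = Sigma[0, 0]

# (2) Conditioning: condition the full prior on beta2 = 0, giving
#     beta1 | beta2 = 0 ~ N(mu1 + S12/S22 * (0 - mu2), S11 - S12^2/S22).
mu_cond = mu[0] + Sigma[0, 1] / Sigma[1, 1] * (0.0 - mu[1])
var_cond = Sigma[0, 0] - Sigma[0, 1] ** 2 / Sigma[1, 1]

print(f"marginal prior:    beta1 ~ N({mu_marg:.3f}, {var_marg:.3f})")
print(f"conditional prior: beta1 ~ N({mu_cond:.3f}, {var_cond:.3f})")
# Prints N(0.000, 1.000) vs N(0.000, 0.820): whenever the coordinates
# are correlated the two submodel priors disagree, which is exactly the
# compatibility question the paper addresses.
```

The two constructions coincide only when Sigma is diagonal, so prior correlation in the full model forces a choice between them.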
Files for this item:
No files are associated with this item.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11565/55000
