A Bayesian method for combining results from several binomial experiments
VERONESE, PIERO
1995
Abstract
The problem of combining information related to I binomial experiments, each having a distinct probability of success θi, is considered. Instead of using a standard exchangeable prior for θ = (θ1, …, θI), we propose a more flexible distribution that takes into account various degrees of similarity among the θi’s. Using ideas developed by Malec and Sedransk, we consider a partition g of the experiments and take the θi’s belonging to the same partition subset to be exchangeable and the θi’s belonging to distinct subsets to be independent. Next we perform Bayesian inference on θ conditional on g. Of course, one is typically uncertain about which partition to use, and so a prior distribution is assigned over a set of plausible partitions g. The final inference on θ is obtained by combining the conditional inferences according to the posterior distribution of g. The methodology adopted in this article offers wide flexibility in structuring the dependence among the θi’s. This allows the estimate of θi to borrow strength from all other experiments through an adaptive process governed by the data themselves. The method may be usefully applied to the analysis of binary response variables in the presence of categorical covariates. The latter are used to identify a collection of suitable partitions g, representing factor main effects and interactions, whose relevance is summarized in the posterior distribution of g. Besides providing novel interpretations of the role played by the various factors, the procedure also produces parameter estimates that may differ, sometimes appreciably, from those obtained using more traditional techniques. Finally, three real data sets are used to illustrate the methodology and compare it with other approaches, such as empirical Bayes (both parametric and nonparametric), logistic regression, and multiple shrinkage estimators. © 1995 Taylor & Francis Group, LLC.
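To make the partition-averaging scheme concrete, the following is a minimal Python sketch, not the paper's implementation. It makes two simplifying assumptions beyond what the abstract states: a uniform prior over a hand-picked list of candidate partitions g, and complete pooling within each subset (a single success probability shared by the subset's experiments, with a Beta(a, b) prior) in place of the more general within-subset exchangeability of the Malec-Sedransk model. All function names and the numbers in the example are illustrative.

import math

def log_beta(a, b):
    # log B(a, b) via log-gamma, for numerical stability
    return math.lgamma(a) + math.lgamma(b) - math.lgamma(a + b)

def log_marginal(subset, x, n, a=1.0, b=1.0):
    # Log marginal likelihood of one partition subset under the
    # complete-pooling simplification: one theta ~ Beta(a, b) shared by
    # all experiments in the subset. Binomial coefficients are identical
    # across partitions and therefore cancel in the posterior over g.
    sx = sum(x[i] for i in subset)
    sn = sum(n[i] for i in subset)
    return log_beta(a + sx, b + sn - sx) - log_beta(a, b)

def posterior_over_partitions(partitions, x, n, a=1.0, b=1.0):
    # p(g | data) under a uniform prior on the supplied partitions
    logs = [sum(log_marginal(S, x, n, a, b) for S in g) for g in partitions]
    m = max(logs)
    w = [math.exp(l - m) for l in logs]
    z = sum(w)
    return [wi / z for wi in w]

def estimate_theta(partitions, x, n, a=1.0, b=1.0):
    # Posterior mean of each theta_i: average the conditional (pooled
    # Beta-binomial) estimates over the posterior distribution of g
    pg = posterior_over_partitions(partitions, x, n, a, b)
    est = [0.0] * len(x)
    for g, w in zip(partitions, pg):
        for S in g:
            sx = sum(x[i] for i in S)
            sn = sum(n[i] for i in S)
            for i in S:
                est[i] += w * (a + sx) / (a + b + sn)
    return est, pg

# Illustrative data: four experiments, three candidate partitions
x = [3, 4, 9, 10]        # successes
n = [10, 10, 12, 12]     # trials
partitions = [
    [(0, 1, 2, 3)],               # all experiments in one subset
    [(0, 1), (2, 3)],             # two subsets
    [(0,), (1,), (2,), (3,)],     # all experiments independent
]
est, pg = estimate_theta(partitions, x, n)
print(pg)   # posterior probabilities of the candidate partitions
print(est)  # partition-averaged estimates of the theta_i

In this sketch the estimate of each θi borrows strength from the other experiments exactly to the extent that partitions grouping them together receive posterior weight, which is the adaptive, data-driven behaviour the abstract describes.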