A Monte-Carlo study to evaluate value-added models for institutions' rankings
ARPINO, BRUNO;
2010
Abstract
The aim of this paper is to assess the quality of rankings of institutions obtained with several widely used random- and fixed-effects value-added models. Through a Monte Carlo simulation study we assess the robustness of the estimated rankings under different model misspecifications and data structures. Consistent with the established literature, we find that it is quite hard to obtain a reliable ranking of the whole effectiveness distribution, whereas institutions with extreme performances can be identified under various experimental conditions. Multilevel models that distinguish the between- and within-cluster components of first-level covariates perform significantly better than both multilevel models that constrain the two effects to be equal and fixed-effects models. We also find that the estimated rankings are of poor quality when the effectiveness distribution is not symmetric and unimodal. For these situations we plan to explore simple data transformations, such as the Box-Cox, and non-parametric methods for the estimation of random effects that may help to obtain a better ranking.
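The between/within decomposition of first-level covariates mentioned above can be sketched in a few lines. The following is a minimal, self-contained numpy illustration (not the authors' simulation code): it simulates clustered data with a true institution effect, splits a level-1 covariate into its cluster-mean (between) and cluster-centered (within) components, and compares the ranking of a crude residual-based value-added estimate against the true effects. All variable names and the simple residual estimator are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inst, n_per = 50, 30                       # 50 institutions, 30 students each
inst = np.repeat(np.arange(n_inst), n_per)   # institution index per student

# True institution effects (the "value added" a model tries to rank)
u = rng.normal(0.0, 1.0, n_inst)

# Level-1 covariate with both a cluster-level and a student-level component
x = rng.normal(0.0, 1.0, n_inst)[inst] + rng.normal(0.0, 1.0, n_inst * n_per)
y = 2.0 * x + u[inst] + rng.normal(0.0, 1.0, n_inst * n_per)

# Between/within decomposition of the level-1 covariate
x_bar = np.array([x[inst == j].mean() for j in range(n_inst)])
x_between = x_bar[inst]        # cluster-mean (between) component
x_within = x - x_between       # cluster-centered (within) component

# Crude value-added estimate: mean residual per institution after
# removing the covariate effect (a stand-in for a fitted model)
beta = np.polyfit(x, y, 1)[0]
resid = y - beta * x
vhat = np.array([resid[inst == j].mean() for j in range(n_inst)])

# Rank agreement between true and estimated institution effects
rank_true = np.argsort(np.argsort(u))
rank_est = np.argsort(np.argsort(vhat))
rank_corr = np.corrcoef(rank_true, rank_est)[0, 1]
```

With these settings the rank correlation is high, but, as the abstract notes, reliability degrades under misspecification or a skewed effectiveness distribution; in a full multilevel model `x_between` and `x_within` would enter as separate regressors rather than being absorbed into a single residual step.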