The fundamental ideas of AIC-based model selection and multimodel inference
Ken Burnham, Professor of Biological Statistics, USGS and CSU Fish and Wildlife Biology, Colorado State University.
Monday, March 23, 2009
4:00 pm, Weber 223
Data-based model selection is often a key aspect of data analysis. However, just as often, outdated and poor methods (e.g., stepwise selection) are used in ways inconsistent with fundamental inference philosophy and good science practice. I claim model selection should be criterion-based and should produce not just an estimated “best” model, but rather a full set of model probabilities, hence facilitating multimodel inference. The choice of criterion is partly a matter of philosophy as well as a matter of available technical methods (e.g., AIC or BIC). AIC is based on the idea of finding a best predictive model; BIC is based on finding the “true” model. The model selection literature has generally been poor at reflecting the deep foundations of AIC. An overview of the key technical and philosophical ideas underlying AIC will be presented. There is a clear philosophy, a sound criterion (i.e., Kullback-Leibler information) grounded in information theory, and a rigorous statistical foundation for AIC and its use. The philosophical context of what is assumed about reality, approximating models, and the intent of model-based inference should determine whether AIC or BIC is used. Moreover, model selection must be more than just a search for, and then inference from, a single best model in a set of models: inference should reflect all models considered and the associated selection uncertainty. Various facets of such multimodel inference will be noted here, along with some comments on properties of AIC vs. BIC model selection.
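The model probabilities mentioned above are commonly computed as Akaike weights: each model's AIC is compared to the smallest AIC in the set, and the resulting differences are converted to normalized weights. A minimal sketch (the function name and the example AIC values are illustrative, not from the talk):

```python
import math

def akaike_weights(aic_values):
    """Convert AIC values for a set of candidate models into Akaike weights.

    delta_i = AIC_i - min_j AIC_j
    w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)
    """
    best = min(aic_values)
    # Relative likelihood of each model given the data
    rel = [math.exp(-(a - best) / 2.0) for a in aic_values]
    total = sum(rel)
    # Normalize so the weights sum to 1 and act as model probabilities
    return [r / total for r in rel]

# Hypothetical AIC values for three candidate models
weights = akaike_weights([100.0, 102.0, 110.0])
print([round(w, 3) for w in weights])  # → [0.727, 0.268, 0.005]
```

These weights can then support multimodel inference, e.g., model-averaged parameter estimates weighted by the probability of each model rather than inference from the single best model alone.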