Human-Guided, Ultra-Adaptive Learning
William S. Cleveland
Departments of Statistics and Computer Science, Purdue
Benjamin Tyner
Department of Statistics, Purdue
Today, models need to be complex to cope with the complexity of data sets. One form of complexity is tuning parameters that determine which model from a class of models is applied to the data. For example, in local polynomial regression, the bandwidth and the polynomial degree are tuning parameters.

The choice of tuning parameters is a form of model selection, and basing the choice on the data makes the selection adaptive. Adaptive selection is typically carried out by choosing a model criterion, such as the cross-validation sum of squares, and using unconstrained optimization to find the parameters that minimize it. Selection tends to be treated as a minimization problem, with an algorithm running on the machine that finds the solution --- pure machine learning.
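As a minimal sketch of this pure-machine-learning baseline (our illustration, not the authors' code), one can choose the bandwidth of a simple Gaussian-kernel smoother by minimizing the leave-one-out cross-validation sum of squares over a grid:

```python
# Sketch: adaptive bandwidth selection by minimizing the leave-one-out
# cross-validation sum of squares for a local-constant (Nadaraya-Watson)
# smoother. All data and grid choices here are illustrative assumptions.
import numpy as np

def loocv_score(x, y, h):
    """Leave-one-out CV sum of squares for a Gaussian-kernel smoother."""
    n = len(x)
    sse = 0.0
    for i in range(n):
        w = np.exp(-0.5 * ((x - x[i]) / h) ** 2)
        w[i] = 0.0                      # leave observation i out
        yhat = np.dot(w, y) / w.sum()   # local-constant fit at x[i]
        sse += (y[i] - yhat) ** 2
    return sse

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 80))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 80)

grid = np.linspace(0.02, 0.5, 25)
scores = [loocv_score(x, y, h) for h in grid]
h_best = grid[int(np.argmin(scores))]
```

With one tuning parameter this unconstrained search is straightforward; the difficulties the abstract addresses arise as the number of tuning parameters grows.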

We are taking a new approach. We begin with the same framework, tuning parameters and a model criterion, and add a measure of model complexity. Then we treat model selection as we would an experiment with a multi-response surface as a function of explanatory variables. There are two responses, the selection criterion and the complexity measure, and the explanatory variables are the tuning parameters.

In approaching the problem as an experiment, we use many of the techniques of experimental design --- transforming the variables to simplify the surfaces, visualization to understand the structure of the surfaces, and for cases where each run is computationally costly, optimal design. We can bring this framework even further into standard experimental methods by introducing randomization, for example by bootstrapping or by breaking the data into subsets.
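The experimental framing can be sketched as follows (a hedged illustration under our own assumptions, not the authors' implementation): each run sets the tuning parameters of a local polynomial fit and records the two responses, a cross-validation criterion and a complexity measure, here taken to be the effective degrees of freedom, the trace of the smoother matrix.

```python
# Sketch: model selection as an experiment. Explanatory variables are the
# tuning parameters (bandwidth h, polynomial degree d); the two responses
# per run are a leave-one-out CV criterion and the trace of the smoother
# matrix (a standard complexity measure for linear smoothers).
import numpy as np

def smoother_matrix(x, h, d):
    """Hat matrix of a local polynomial fit with Gaussian kernel weights."""
    n = len(x)
    L = np.empty((n, n))
    for i in range(n):
        w = np.exp(-0.5 * ((x - x[i]) / h) ** 2)
        X = np.vander(x - x[i], d + 1, increasing=True)  # 1, u, u^2, ...
        XtW = X.T * w
        coef_rows = np.linalg.solve(XtW @ X, XtW)        # maps y -> coefficients
        L[i] = coef_rows[0]                              # fit at x[i] is beta_0
    return L

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 60))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 60)

runs = []  # one run per tuning-parameter combination
for d in (0, 1, 2):
    for h in np.linspace(0.05, 0.4, 8):
        L = smoother_matrix(x, h, d)
        resid = y - L @ y
        # leave-one-out shortcut for linear smoothers
        loocv = np.mean((resid / (1.0 - np.diag(L))) ** 2)
        runs.append((h, d, loocv, np.trace(L)))          # the two responses
```

The resulting table of runs defines the multi-response surface: it can be transformed, visualized, and, when runs are costly, sampled by optimal design rather than exhaustively.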

We are replacing the machine learning method of unconstrained optimization with a human-guided experimental approach. We believe this will result in an ability to optimize over much larger numbers of tuning parameters, making the model selection ultra-adaptive and thereby enabling the fitting of much more complex models.

We have begun our investigation of this approach in the context of local regression. Initial results have been very encouraging.  

Short Course: Information Theory & Statistics
Bin Yu & Mark Hansen
June 1, 2005
Colorado State University Campus
Fort Collins, CO 80523

Graybill Conference
June 2-3, 2005
Hilton Fort Collins

(Formerly: University Park Holiday Inn)
Fort Collins, CO 80526

www.stat.colostate.edu/graybillconference
Graybill Conference Poster

Last Updated: Friday, May 24, 2005