Today, models need to be complex to cope with the complexity of data
sets. One form of complexity is tuning parameters that determine which
model from a class of models is applied to the data. For example,
in local polynomial regression, the bandwidth and the polynomial degree
are tuning parameters.
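To make the example concrete, here is a minimal sketch of a local polynomial fit in Python (our own illustration, not a reference implementation), with the bandwidth h and the degree p appearing explicitly as tuning parameters; the Gaussian kernel and the function name `local_poly_fit` are assumed choices:

```python
import numpy as np

def local_poly_fit(x, y, x0, h, p):
    """Fit a degree-p polynomial by kernel-weighted least squares
    around x0 with bandwidth h; return the fitted value at x0."""
    # Gaussian kernel weights centered at x0; h controls locality
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    # Design matrix of centered powers (x - x0)^j, j = 0..p
    X = np.vander(x - x0, N=p + 1, increasing=True)
    # Weighted least squares via the normal equations
    W = np.diag(w)
    beta, *_ = np.linalg.lstsq(X.T @ W @ X, X.T @ W @ y, rcond=None)
    # The intercept term estimates the regression function at x0
    return beta[0]
```

With p = 2 the fit reproduces a noiseless quadratic exactly, since the true coefficients solve the weighted least-squares problem for any choice of weights.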
The choice of tuning parameters is a form of model selection; choosing
them based on the data makes the model selection
adaptive. Adaptive selection is typically carried out by choosing
a model criterion such as the cross-validation sum of squares, and
using unconstrained optimization to find the parameters that minimize
the criterion. The whole task tends to be treated as a minimization
problem, with an algorithm running on the machine to find the
solution --- pure machine learning.
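The standard treatment might look like the following sketch (again our own illustration, using a degree-0 local fit, i.e. a kernel smoother, and a bandwidth grid in place of a continuous optimizer):

```python
import numpy as np

def kernel_smooth(x_train, y_train, x0, h):
    # Nadaraya-Watson estimate (degree-0 local fit) at x0
    w = np.exp(-0.5 * ((x_train - x0) / h) ** 2)
    return np.sum(w * y_train) / np.sum(w)

def loocv_ss(x, y, h):
    # Leave-one-out cross-validation sum of squares for bandwidth h
    n = len(x)
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        pred = kernel_smooth(x[mask], y[mask], x[i], h)
        errs.append(float((y[i] - pred) ** 2))
    return sum(errs)

# Simulated data and a purely mechanical grid minimization:
# the machine picks the bandwidth, with no human in the loop
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 60))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 60)
grid = np.array([0.01, 0.03, 0.05, 0.1, 0.2, 0.5])
best_h = grid[np.argmin([loocv_ss(x, y, h) for h in grid])]
```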

We are taking a new approach. We begin with the same framework,
tuning parameters and a model criterion, and add a measure of model
complexity. Then we treat model selection as we would an experiment
with a multi-response surface as a function of explanatory variables.
There are two responses, the selection criterion and the complexity
measure, and the explanatory variables are the tuning parameters.
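A sketch of this two-response view, taking the residual sum of squares as the selection criterion and the trace of the smoother matrix (the effective degrees of freedom) as the complexity measure; the kernel smoother and these particular response choices are our own assumptions, used only to show the shape of the "experiment":

```python
import numpy as np

def smoother_matrix(x, h):
    # Linear smoother: yhat = S @ y, with kernel-weighted rows
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    return W / W.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 50))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 50)

# Each grid point is one "run"; the two responses are the
# criterion (residual sum of squares) and the complexity
# (trace of S, the effective degrees of freedom of the fit)
rows = []
for h in [0.02, 0.05, 0.1, 0.2]:
    S = smoother_matrix(x, h)
    resid = y - S @ y
    rows.append((h, float(resid @ resid), float(np.trace(S))))
```

Plotting both responses against the tuning parameter, rather than feeding one of them to an optimizer, is what turns the selection problem into an experiment.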

In approaching the problem as an experiment, we use many of the techniques
of experimental design --- transforming the variables to simplify the
surfaces, visualization to understand the structure of the surfaces,
and for cases where each run is computationally costly, optimal design. We
can bring this framework even further into standard experimental methods
by introducing randomization, for example by bootstrapping or by breaking
the data into subsets.
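For example, bootstrapping can turn a single run at a fixed tuning-parameter setting into randomized replicates of that run; a minimal sketch (once more with a kernel smoother, our own choice):

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 1, 50))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, 50)

def rss(xs, ys, h):
    # Residual sum of squares of a kernel smoother with bandwidth h
    W = np.exp(-0.5 * ((xs[:, None] - xs[None, :]) / h) ** 2)
    fit = (W / W.sum(axis=1, keepdims=True)) @ ys
    return float(np.sum((ys - fit) ** 2))

# Each bootstrap resample is a randomized replicate of one "run"
# of the experiment at the same tuning-parameter value
reps = []
for _ in range(20):
    idx = rng.integers(0, len(x), len(x))
    reps.append(rss(x[idx], y[idx], h=0.1))
```

The spread of the replicates gives a direct estimate of run-to-run variability in the criterion, information an unconstrained optimizer never surfaces.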

We are replacing the machine learning method of unconstrained optimization
with a human-guided experimental approach. We believe this will result in
an ability to optimize over much larger numbers of tuning parameters,
making the model selection ultra-adaptive and thereby enabling the
fitting of much more complex models.