Bregman Loss Functions: Optimality and Applications
Monday, 25 October 2004
Given a probability space, a random variable X, and a sub-sigma-algebra G, it is a well-known and simple result that the corresponding conditional expectation is the optimal predictor of X in terms of mean squared error. It is natural to ask whether the L2 loss is the only loss function for which conditional expectation is the unique optimizer. In joint work with Banerjee (U. of Texas at Austin) and Wang (Brown U.), we show that conditional expectation is the optimal predictor for all Bregman loss functions (BLFs), of which the L2 loss and the KL divergence are special cases. Moreover, under mild conditions, we show that BLFs are exhaustive; in other words, we provide necessary and sufficient conditions on a loss function for conditional expectation to be its unique optimizer.
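For readers unfamiliar with the family: a Bregman loss is generated by a convex function phi via D_phi(x, y) = phi(x) - phi(y) - phi'(y)(x - y); taking phi(t) = t^2 recovers squared error, and phi(t) = t log t gives the generalized KL divergence. A minimal numerical sketch (not from the talk; variable names are illustrative) checking that the sample mean minimizes the average Bregman loss in both cases:

```python
import numpy as np

# D_phi(x, y) = phi(x) - phi(y) - phi'(y) * (x - y), for convex phi.
def bregman(phi, dphi, x, y):
    return phi(x) - phi(y) - dphi(y) * (x - y)

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=10_000)  # positive samples (KL needs positivity)

# phi(t) = t^2 gives squared error; phi(t) = t*log(t) gives generalized KL.
losses = {
    "squared": (lambda t: t**2, lambda t: 2 * t),
    "kl": (lambda t: t * np.log(t), lambda t: np.log(t) + 1),
}

# For each loss, minimize the average Bregman loss over a grid of predictors.
minimizers = {}
for name, (phi, dphi) in losses.items():
    grid = np.linspace(0.1, 1.0, 400)
    avg = [bregman(phi, dphi, x, g).mean() for g in grid]
    minimizers[name] = grid[int(np.argmin(avg))]

# In both cases the grid minimizer sits at (approximately) the sample mean,
# illustrating the abstract's claim for these two special cases.
print(minimizers, x.mean())
```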
We will discuss some applications of this result, including its connection to k-means clustering. Time permitting, we will also discuss related work in mathematical finance, including financial time series analysis (with Chan of CSIRO) and credit risk evaluation (with Jarrow and Zeng of Cornell U.).
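The k-means connection can be sketched as follows (a toy illustration, not the authors' code; the quantile initialization is an assumption of this sketch): in a Lloyd-style clustering loop under any Bregman divergence, the optimality result means the centroid update is still the plain cluster mean, exactly as in ordinary k-means.

```python
import numpy as np

def gen_kl(x, y):
    """Generalized KL divergence D(x, y) for positive inputs."""
    return x * np.log(x / y) - x + y

def bregman_kmeans(x, k, iters=20):
    # Deterministic quantile initialization (a choice made for this sketch).
    centers = np.quantile(x, np.linspace(0.1, 0.9, k))
    for _ in range(iters):
        # Assignment step: nearest center under the Bregman divergence.
        labels = np.argmin(gen_kl(x[:, None], centers[None, :]), axis=1)
        # Update step: the cluster mean, since the mean minimizes
        # expected Bregman loss within each cluster.
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return centers, labels
```

Swapping `gen_kl` for squared distance recovers standard k-means; only the assignment step changes between Bregman variants.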
Refreshments will be served at 3:45 p.m. in Room 008 of the Statistics Building.