Essays on Expected Prediction Error
Abstract:
Finite-Sample Bias in Cross-Validation and Pseudo-Out-of-Sample Testing: This paper analyses finite-sample bias in cross-validation estimates of expected prediction error. I identify a significant risk of positive bias against flexible models, with practical implications for assessing curve-fitting models in finance and economics, for example when comparing regime-change models with less flexible model-averaging methods. The bias against flexible models also increases with the number of data points left out for cross-validation, which has implications for pseudo-out-of-sample Diebold-Mariano tests, since these tests rely on a special case of cross-validation that leaves out large blocks of data. I examine real models that predict equity market returns, for which all forms of cross-validation appear to be severely biased against flexible models, and I present a simulation study that supports my analytic conclusions and is consistent with the equity return results.
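To make the mechanism concrete, the following is a minimal simulation sketch, not the paper's actual experiment: a rigid linear model and a flexible polynomial model are compared under leave-k-out cross-validation, and the CV estimate is contrasted with a Monte Carlo estimate of the true expected prediction error at the full sample size. The data-generating process, model choices, and sample sizes are all illustrative assumptions.

```python
# Illustrative sketch (assumptions throughout): leave-k-out CV can overstate
# the expected prediction error (EPE) of a flexible model, and the overstatement
# grows with the size k of the left-out block, because CV evaluates models
# trained on only n - k points.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n):
    # Weak linear signal buried in noise, loosely mimicking return prediction.
    x = rng.uniform(-1, 1, n)
    y = 0.5 * x + rng.normal(0, 1, n)
    return x, y

def fit_predict(x_tr, y_tr, x_te, degree):
    coefs = np.polyfit(x_tr, y_tr, degree)
    return np.polyval(coefs, x_te)

def cv_error(x, y, k, degree):
    # Leave-k-out CV over contiguous blocks; mean squared error.
    n = len(x)
    errs = []
    for start in range(0, n - k + 1, k):
        test = slice(start, start + k)
        train = np.r_[0:start, start + k:n]
        pred = fit_predict(x[train], y[train], x[test], degree)
        errs.append(np.mean((y[test] - pred) ** 2))
    return np.mean(errs)

def true_epe(n, degree, reps=200, n_test=2000):
    # Monte Carlo estimate of EPE for a model trained on the full n points.
    out = []
    for _ in range(reps):
        x_tr, y_tr = simulate(n)
        x_te, y_te = simulate(n_test)
        pred = fit_predict(x_tr, y_tr, x_te, degree)
        out.append(np.mean((y_te - pred) ** 2))
    return np.mean(out)

n, reps = 60, 200
for degree, label in [(1, "rigid (linear)"), (6, "flexible (degree-6)")]:
    epe = true_epe(n, degree)
    for k in (1, 10, 20):
        cv = np.mean([cv_error(*simulate(n), k, degree) for _ in range(reps)])
        print(f"{label:22s} k={k:2d}  CV={cv:.3f}  true EPE={epe:.3f}  bias={cv - epe:+.3f}")
```

Under this setup the bias term is positive for both models but larger for the flexible one, and it widens as k grows, which is the pattern the abstract attributes to pseudo-out-of-sample tests that leave out large blocks.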
Optimistic Equity Premium Predictions: I compare equity premium prediction models in terms of expected prediction error, using a new method that is less biased against flexible models than conventional out-of-sample analysis. I add to the literature by finding that the best models can predict premiums in economic good times, not just bad times; that successful premium predictions are driven by growth, rather than value, assets; that the best models overall are also the best within all meaningful subsets of periods; and that while the models perform well under a squared error loss function, they do not under absolute error or sign error loss functions.
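For concreteness, the sketch below evaluates a forecast under the three loss functions named above (squared error, absolute error, and sign error) against a historical-mean benchmark. All series here are simulated placeholders; none of the numbers correspond to the paper's results.

```python
# Illustrative loss-function comparison on simulated data (not the paper's
# data or forecasts). The "model" forecast is given artificial skill purely
# so the loss differentials are non-trivial.
import numpy as np

rng = np.random.default_rng(1)
realized = rng.normal(0.004, 0.04, 240)               # placeholder monthly premiums
model_fc = 0.5 * realized + rng.normal(0, 0.03, 240)  # placeholder "skilled" forecast
bench_fc = np.full(240, realized.mean())              # historical-mean benchmark (full-sample, for simplicity)

def squared_loss(y, f):
    return (y - f) ** 2

def absolute_loss(y, f):
    return np.abs(y - f)

def sign_loss(y, f):
    # 0/1 loss on the direction (sign) of the premium.
    return (np.sign(y) != np.sign(f)).astype(float)

for name, loss in [("squared", squared_loss),
                   ("absolute", absolute_loss),
                   ("sign", sign_loss)]:
    d = loss(realized, bench_fc) - loss(realized, model_fc)
    print(f"{name:8s} mean loss differential (benchmark - model): {d.mean():+.5f}")
```

A positive mean loss differential favours the model over the benchmark under that loss function; the abstract's finding is that this comparison can come out positive for squared error yet not for absolute or sign error.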
Supervisor: Abraham Lioui, EDHEC Business School
External reviewer: Francis X. Diebold, University of Pennsylvania
Other committee member: Nikolaos Tessaromatis, EDHEC Business School