Out-of-sample comparisons of overfit models

Date
2014-03-30
Authors
Calhoun, Gray
Organizational Unit
Economics

The Department of Economic Science was founded in 1898 to teach economic theory as a truth of industrial life, and was very much concerned with applying economics to business and industry, particularly agriculture. Between 1910 and 1967 its names reflected the growing influence of other social sciences, such as sociology, history, and political science. Today it encompasses the majors of Agricultural Business (preparing students for agricultural finance and management), Business Economics, and Economics (for advanced studies in business or economics, or for careers in finance, management, insurance, etc.).

History
The Department of Economic Science was founded in 1898 under the Division of Industrial Science (later the College of Liberal Arts and Sciences). In 1910 it became the Department of Economics and Political Science, and in 1913 the Department of Applied Economics and Social Science; in 1919 it came under the joint direction of the Division of Agriculture. In 1924 it became the Department of Economics, History, and Sociology, and in 1931 the Department of Economics and Sociology. In 1967 it became the Department of Economics, and in 2007 it came under the joint direction of the Colleges of Agriculture and Life Sciences, Liberal Arts and Sciences, and Business.

Dates of Existence
1898–present

Historical Names

  • Department of Economic Science (1898–1910)
  • Department of Economics and Political Science (1910–1913)
  • Department of Applied Economics and Social Science (1913–1924)
  • Department of Economics, History and Sociology (1924–1931)
  • Department of Economics and Sociology (1931–1967)

Abstract

This paper uses dimension asymptotics to study why overfit linear regression models should be compared out-of-sample; we let the number of predictors used by the larger model increase with the number of observations so that their ratio remains uniformly positive. Our analysis gives a theoretical motivation for using out-of-sample (OOS) comparisons: the Diebold-Mariano-West (DMW) OOS test allows a forecaster to conduct inference about the expected future accuracy of his or her models when one or both is overfit. We show analytically and through Monte Carlo simulations that standard full-sample test statistics cannot test hypotheses about this performance. Our paper also shows that popular test and training sample sizes may give misleading results if researchers are concerned about overfitting. We show that P²/T must converge to zero for the DMW test to give valid inference about expected forecast accuracy, where P is the size of the test sample and T the total number of observations; otherwise the test measures the accuracy of the estimates constructed using only the training sample. In empirical research, P is typically much larger than this condition allows. Our simulations indicate that using large values of P with the DMW test gives undersized tests with low power, so this practice may favor simple benchmark models too much.
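
To make the comparison concrete, the following minimal Python sketch computes a DMW statistic for one-step-ahead forecasts from two nested linear models. It is not from the paper; the function name, the recursive estimation scheme, and squared-error loss are illustrative assumptions. The statistic is the studentized mean of the loss differentials over the P test-sample periods and is compared to standard normal critical values.

    import numpy as np

    def dmw_oos_test(y, X_small, X_large, R):
        """DMW out-of-sample comparison of two linear forecasting models
        under squared-error loss (hypothetical helper, for illustration).

        y       : (T,) target series
        X_small : (T, k1) predictors for the benchmark model
        X_large : (T, k2) predictors for the larger, possibly overfit model
        R       : training-sample size; the remaining P = T - R
                  observations are forecast out-of-sample
        """
        T = len(y)
        P = T - R
        d = np.empty(P)  # loss differentials: benchmark loss minus large-model loss
        for t in range(R, T):
            # Recursive scheme: re-estimate both models on observations 0..t-1.
            b1 = np.linalg.lstsq(X_small[:t], y[:t], rcond=None)[0]
            b2 = np.linalg.lstsq(X_large[:t], y[:t], rcond=None)[0]
            e1 = y[t] - X_small[t] @ b1
            e2 = y[t] - X_large[t] @ b2
            d[t - R] = e1**2 - e2**2
        # DMW t-statistic: asymptotically N(0,1) under the null of equal
        # expected accuracy (one-step forecasts; multi-step horizons would
        # call for a HAC variance estimator instead of the sample variance).
        return np.sqrt(P) * d.mean() / d.std(ddof=1)

A toy call in the spirit of the paper's setting keeps the large model's predictor-to-observation ratio uniformly positive while choosing P small enough that P²/T stays moderate:

    rng = np.random.default_rng(0)
    T, k_extra = 500, 100                  # many irrelevant regressors: overfit
    X1 = np.ones((T, 1))                   # benchmark: intercept only
    X2 = np.column_stack([X1, rng.standard_normal((T, k_extra))])
    y = rng.standard_normal(T)
    print(dmw_oos_test(y, X1, X2, R=480))  # P = 20, so P²/T = 0.8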
