Minimax Nonparametric Classification—Part I: Rates of Convergence
dc.contributor.author Yang, Yuhong
dc.contributor.department Statistics
dc.date.accessioned 2018-02-16T21:35:30.000
dc.date.available 2020-07-02T06:56:31Z
dc.date.issued 1998
dc.description.abstract <p>This paper studies minimax aspects of nonparametric classification. We first study minimax estimation of the conditional probability of a class label, given the feature variable. This function, say f, is assumed to be in a general nonparametric class. We show that the minimax rate of convergence under squared L2 loss is determined by the massiveness of the class as measured by metric entropy. The second part of the paper studies minimax classification. The loss of interest is the difference between the probability of misclassification of a classifier and that of the Bayes decision. As is well known, an upper bound on the risk for estimating f gives an upper bound on the risk for classification, but the rate is known to be suboptimal for the class of monotone functions. This suggests that one does not have to estimate f well in order to classify well. However, we show that the two problems are in fact of the same difficulty in terms of rates of convergence under a sufficient condition, which is satisfied by many function classes, including Besov (Sobolev), Lipschitz, and bounded variation. This is somewhat surprising in view of a result of Devroye, Györfi, and Lugosi (1996).</p>
dc.description.comments <p>This preprint was published as Yuhong Yang, "Minimax Nonparametric Classification—Part I: Rates of Convergence", <em>IEEE Transactions on Information Theory</em> (1999): 2271-2284, doi: <a href="https://doi.org/10.1109/18.796368" target="_blank">10.1109/18.796368</a>.</p>
dc.identifier archive/
dc.identifier.articleid 1106
dc.identifier.contextkey 7444504
dc.identifier.s3bucket isulib-bepress-aws-west
dc.identifier.submissionpath stat_las_preprints/99
dc.language.iso en
dc.source.bitstream archive/|||Sat Jan 15 02:39:08 UTC 2022
dc.subject.disciplines Statistics and Probability
dc.subject.keywords conditional probability estimation
dc.subject.keywords mean error probability regret
dc.subject.keywords metric entropy
dc.subject.keywords minimax rates of convergence
dc.subject.keywords nonparametric classification
dc.subject.keywords neural network classes
dc.subject.keywords sparse approximation
dc.title Minimax Nonparametric Classification—Part I: Rates of Convergence
dc.type article
dc.type.genre article
dspace.entity.type Publication
relation.isOrgUnitOfPublication 264904d9-9e66-4169-8e11-034e537ddbca