TR-2001-30

The interaction of stability and weakness in AdaBoost

Samuel Kutin; Partha Niyogi. 19 October, 2001.
Communicated by Partha Niyogi.

Abstract

We provide an analysis of AdaBoost within the framework of algorithmic stability. In particular, we show that AdaBoost is a stability-preserving operation: if the "input" (the weak learner) to AdaBoost is stable, then the "output" (the strong learner) is almost-everywhere stable. Because classifier combination schemes such as AdaBoost have the greatest effect when the weak learner is weak, we discuss weakness and its implications. We also show that the notion of almost-everywhere stability is sufficient for good bounds on generalization error. These bounds hold even when the weak learner has infinite VC dimension.
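For reference, the following is a minimal sketch of the standard AdaBoost loop for binary labels in {-1, +1}, illustrating the abstract's framing: the weak learner is the "input", called each round on a reweighted sample, and the weighted-majority vote it produces is the "output" strong learner. The decision-stump weak learner, function names, and parameters here are illustrative assumptions, not material from the report.

```python
import numpy as np

def stump_learner(X, y, w):
    """Hypothetical weak learner: best single-feature threshold stump
    under the weighted 0/1 loss. Returns a callable classifier."""
    best_err, best_params = np.inf, None
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for sign in (+1, -1):
                pred = sign * np.where(X[:, j] > t, 1, -1)
                err = np.sum(w[pred != y])
                if err < best_err:
                    best_err, best_params = err, (j, t, sign)
    j, t, sign = best_params
    return lambda Z: sign * np.where(Z[:, j] > t, 1, -1)

def adaboost(X, y, weak_learner, rounds=50):
    """AdaBoost: repeatedly call the weak learner on reweighted data
    and combine the hypotheses by a weighted majority vote."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # distribution over examples
    hypotheses, alphas = [], []
    for _ in range(rounds):
        h = weak_learner(X, y, w)
        pred = h(X)
        eps = np.sum(w[pred != y])       # weighted training error
        if eps == 0:                     # perfect fit: keep h, stop
            hypotheses.append(h)
            alphas.append(1.0)
            break
        if eps >= 0.5:                   # no edge over random guessing
            break
        alpha = 0.5 * np.log((1 - eps) / eps)
        hypotheses.append(h)
        alphas.append(alpha)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified points
        w /= w.sum()
    def strong(Z):
        votes = sum(a * h(Z) for a, h in zip(alphas, hypotheses))
        return np.where(votes >= 0, 1, -1)
    return strong

if __name__ == "__main__":
    # Small synthetic usage example on a linearly separable problem.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    H = adaboost(X, y, stump_learner, rounds=20)
    print("training accuracy:", np.mean(H(X) == y))
```

The exponential reweighting step concentrates the distribution on examples the current hypotheses misclassify; any weak learner exposing the same (X, y, w) -> classifier interface can be substituted for stump_learner.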

Original Document

The original document is available in PostScript (uploaded 23 October, 2001 by Nicholas Russo).

Additional Document Formats

The document is also available in PDF (uploaded 23 October, 2001 by Nicholas Russo).

NOTE: The author warrants that these additional documents are identical to the original to the extent permitted by the translation between the various formats. However, the webmaster has made no effort to verify this claim. If the authenticity of the document is an issue, please always refer to the "Original Document." If you find significant alterations, please report them to webmaster@cs.uchicago.edu.