TR-2002-03
Almost-everywhere algorithmic stability and generalization error
Samuel Kutin; Partha Niyogi. 25 March, 2002.
Communicated by Partha Niyogi.
Abstract
We introduce a new notion of algorithmic stability, which we call training stability. We show that training stability is sufficient for good bounds on generalization error. These bounds hold even when the learner has infinite VC dimension. In the PAC setting, training stability gives necessary and sufficient conditions for exponential convergence, and thus serves as a distribution-dependent analog to VC dimension.

Our proof generalizes an argument of Bousquet and Elisseeff (2001), who show that the more rigid assumption of uniform hypothesis stability implies good bounds on generalization error. We argue that weaker forms of hypothesis stability also give good bounds. We explore the relationships among VC dimension, generalization error, and different notions of stability.
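For orientation, a rough sketch of the prior result the abstract refers to: in the formulation of Bousquet and Elisseeff, an algorithm A is uniformly stable with parameter beta if removing any single training point changes its loss on any test point by at most beta, and such stability yields an exponential generalization bound. (The symbols below, and the exact constants, follow their standard statement rather than anything given in this report.)

\[
\sup_{z}\;\bigl|\,\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)\,\bigr| \;\le\; \beta
\qquad \text{for every training set } S \text{ of size } m \text{ and every index } i,
\]

and then, for a loss bounded by \(M\), with probability at least \(1-\delta\) over the draw of \(S\),

\[
R(A_S) \;\le\; \hat{R}_{\mathrm{emp}}(A_S) \;+\; 2\beta \;+\; \bigl(4m\beta + M\bigr)\sqrt{\frac{\ln(1/\delta)}{2m}} .
\]

The report's contribution is to show that much weaker, almost-everywhere versions of this stability condition (training stability) still suffice for bounds of this exponential form.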
Original Document
The original document is available in Postscript (uploaded 27 March, 2002 by Nicholas Russo).
Additional Document Formats
The document is also available in PDF (uploaded 27 March, 2002 by Nicholas Russo).
NOTE: The author warrants that these additional documents are identical to the original to the extent permitted by the translation between the various formats. However, the webmaster has made no effort to verify this claim. If the authenticity of the document is an issue, please always refer to the "Original document." If you find significant alterations, please report them to webmaster@cs.uchicago.edu.