Extensions to McDiarmid's inequality when differences are bounded with high probability

Samuel Kutin. 12 April, 2002.
Communicated by Partha Niyogi.


The method of independent bounded differences (McDiarmid, 1989) gives large-deviation concentration bounds for multivariate functions in terms of the maximum effect that changing one coordinate of the input can have on the output. This method has been widely used in combinatorial applications, and in learning theory. In some recent applications to the theory of algorithmic stability (Kutin and Niyogi, 2002), we need to consider the case where changing one coordinate of the input usually leads to a small change in the output, but not always.
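For reference, McDiarmid's inequality in its standard form can be stated as follows. The notation here is the conventional one and is not taken from this paper: if the independent random variables are X_1, ..., X_n and changing the i-th coordinate changes f by at most c_i, then

```latex
% Standard statement of McDiarmid's (bounded differences) inequality.
% Assumption: X_1, ..., X_n independent, and for each i and all inputs,
%   |f(x_1,...,x_i,...,x_n) - f(x_1,...,x_i',...,x_n)| <= c_i.
% Then, for every t > 0,
\[
  \Pr\bigl[\,\lvert f(X_1,\dots,X_n) - \mathbb{E}\, f(X_1,\dots,X_n) \rvert \ge t \,\bigr]
  \;\le\; 2 \exp\!\left( \frac{-2 t^2}{\sum_{i=1}^n c_i^2} \right).
\]
```

The extensions discussed in the abstract relax the requirement that the bound c_i hold for every input, allowing it to fail on a small-probability set.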

We prove two extensions to McDiarmid's inequality. The first applies when, for most inputs, any small change leads to a small change in the output. The second applies when, for a randomly selected input and a random one-coordinate change, the change in the output is usually small.
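As an illustrative sketch (not taken from the paper), the classical bound can be checked empirically on a simple function with bounded differences: the sample mean of n fair coin flips, where changing one flip moves the output by at most 1/n. All names below are hypothetical.

```python
import math
import random

def mcdiarmid_bound(t, c):
    """Right-hand side of McDiarmid's inequality:
    P(|f - E f| >= t) <= 2 exp(-2 t^2 / sum(c_i^2))."""
    return 2 * math.exp(-2 * t * t / sum(ci * ci for ci in c))

random.seed(0)
n = 100              # number of independent coordinates (fair coin flips)
t = 0.1              # deviation threshold
trials = 20000
c = [1.0 / n] * n    # changing one flip moves the mean by at most 1/n

# Empirical probability that the sample mean deviates from 1/2 by >= t.
deviations = sum(
    abs(sum(random.random() < 0.5 for _ in range(n)) / n - 0.5) >= t
    for _ in range(trials)
)
empirical = deviations / trials
bound = mcdiarmid_bound(t, c)  # = 2 exp(-2 n t^2), about 0.27 here
assert empirical <= bound
```

The paper's extensions address the setting where the per-coordinate bound c_i holds only with high probability over the input, which this simple simulation does not capture.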

Original Document

The original document is available in Postscript (uploaded 12 April, 2002 by Partha Niyogi).

Additional Document Formats

The document is also available in PDF (uploaded 12 April, 2002 by Partha Niyogi).

NOTE: The author warrants that these additional documents are identical to the original to the extent permitted by the translation between the various formats. However, the webmaster has made no effort to verify this claim. If the authenticity of the document is an issue, please always refer to the "Original document." If you find significant alterations, please report to