TR-2004-06

Manifold Regularization: A Geometric Framework for Learning from Examples

Mikhail Belkin; Partha Niyogi; Vikas Sindhwani. 25 August, 2004.
Communicated by Partha Niyogi.

Abstract

We propose a family of learning algorithms based on a new form of regularization that allows us to exploit the geometry of the marginal distribution. We focus on a semi-supervised framework that incorporates labeled and unlabeled data in a general-purpose learner. Some transductive graph learning algorithms and standard methods, including Support Vector Machines and Regularized Least Squares, can be obtained as special cases. We utilize properties of Reproducing Kernel Hilbert Spaces to prove new Representer theorems that provide a theoretical basis for the algorithms. As a result (in contrast to purely graph-based approaches) we obtain a natural out-of-sample extension to novel examples and are thus able to handle both transductive and truly semi-supervised settings. We present experimental evidence suggesting that our semi-supervised algorithms are able to use unlabeled data effectively. Finally, we give a brief discussion of unsupervised and fully supervised learning within our general framework.
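
For concreteness, below is a minimal NumPy sketch of one instance of the framework described in the abstract: Laplacian Regularized Least Squares, which combines a squared loss on the labeled points with an ambient RKHS penalty and an intrinsic graph-Laplacian penalty built from both labeled and unlabeled points. This is only an illustrative sketch, not the authors' reference implementation; the Gaussian kernel, the k-nearest-neighbor graph, and the hyperparameter values (gamma_A, gamma_I, sigma, k) are assumptions made here for demonstration.

import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix between the rows of A and the rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def knn_laplacian(X, k=5):
    # Unnormalized graph Laplacian L = D - W of a symmetrized k-NN graph.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    n = X.shape[0]
    W = np.zeros((n, n))
    for i in range(n):
        neighbors = np.argsort(d2[i])[1:k + 1]   # skip the point itself
        W[i, neighbors] = 1.0
    W = np.maximum(W, W.T)                       # symmetrize the adjacency
    return np.diag(W.sum(axis=1)) - W

def laprls_fit(X_lab, y_lab, X_unlab, gamma_A=0.01, gamma_I=0.01, sigma=1.0, k=5):
    # Solve for expansion coefficients alpha over all l + u points; the
    # Representer theorem guarantees the minimizer has this finite form.
    X = np.vstack([X_lab, X_unlab])
    l, n = X_lab.shape[0], X.shape[0]
    K = rbf_kernel(X, X, sigma)
    L = knn_laplacian(X, k)
    J = np.zeros((n, n))
    J[:l, :l] = np.eye(l)                        # picks out the labeled points
    Y = np.concatenate([y_lab, np.zeros(n - l)])
    M = J @ K + gamma_A * l * np.eye(n) + (gamma_I * l / n ** 2) * (L @ K)
    alpha = np.linalg.solve(M, Y)
    return X, alpha, sigma

def laprls_predict(model, X_new):
    # Out-of-sample extension: f(x) = sum_i alpha_i K(x, x_i).
    X, alpha, sigma = model
    return rbf_kernel(X_new, X, sigma) @ alpha

Note that laprls_predict evaluates the learned function at points never seen during training; this is the natural out-of-sample extension that the abstract contrasts with purely graph-based (transductive) approaches.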

Original Document

The original document is available in Postscript (uploaded 25 August, 2004 by Partha Niyogi).

Additional Document Formats

The document is also available in PDF (uploaded 16 November, 2004 by Partha Niyogi) and Postscript (uploaded 25 August, 2004 by Partha Niyogi).

NOTE: The author warrants that these additional documents are identical to the original to the extent permitted by the translation between the various formats. However, the webmaster has made no effort to verify this claim. If the authenticity of the document is an issue, please always refer to the "Original document." If you find significant alterations, please report them to webmaster@cs.uchicago.edu.