IEEE Trans Neural Netw. 1999;10(5):988-99. doi: 10.1109/72.788640.

An overview of statistical learning theory.

Vapnik VN1

Author information

  • 1AT&T Labs-Research, Red Bank, NJ 07701, USA.

Abstract

Statistical learning theory was introduced in the late 1960s. Until the 1990s it was a purely theoretical analysis of the problem of function estimation from a given collection of data. In the mid-1990s, new types of learning algorithms (called support vector machines) based on the developed theory were proposed. This made statistical learning theory not only a tool for theoretical analysis but also a tool for creating practical algorithms for estimating multidimensional functions. This article presents a very general overview of statistical learning theory, covering both theoretical and algorithmic aspects. The goal of this overview is to demonstrate how the abstract learning theory established conditions for generalization that are more general than those discussed in classical statistical paradigms, and how the understanding of these conditions inspired new algorithmic approaches to function estimation problems. A more detailed overview of the theory (without proofs) can be found in Vapnik (1995). A detailed description of the theory (including proofs) can be found in Vapnik (1998).
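As a point of reference for the "conditions for generalization" mentioned in the abstract, a representative result of VC theory (sketched here from the general theory, not quoted from this article) is the bound that holds with probability at least 1 - \eta, uniformly over a class of indicator functions of VC dimension h trained on l examples:

R(\alpha) \le R_{\mathrm{emp}}(\alpha) + \sqrt{\frac{h\left(\ln\frac{2l}{h} + 1\right) - \ln\frac{\eta}{4}}{l}}

Here R(\alpha) is the expected risk, R_{\mathrm{emp}}(\alpha) the empirical risk, and \alpha indexes functions in the class. The bound depends on the capacity measure h rather than on the dimensionality of the input space, which is what makes these generalization conditions broader than those of the classical parametric setting.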

PMID: 18252602 [PubMed]