Variational learning in nonlinear Gaussian belief networks

Neural Comput. 1999 Jan 1;11(1):193-213. doi: 10.1162/089976699300016872.

Abstract

We view perceptual tasks such as vision and speech recognition as inference problems where the goal is to estimate the posterior distribution over latent variables (e.g., depth in stereo vision) given the sensory input. The recent flurry of research in independent component analysis exemplifies the importance of inferring the continuous-valued latent variables underlying input data. The latent variables found by this method are linearly related to the input, but perception requires nonlinear inferences such as classification and depth estimation. In this article, we present a unifying framework for stochastic neural networks with nonlinear latent variables. Nonlinear units are obtained by passing the outputs of linear Gaussian units through various nonlinearities. We present a general variational method that maximizes a lower bound on the likelihood of a training set and give results on two visual feature extraction problems. We also show how the variational method can be used for pattern classification and compare the performance of these nonlinear networks with other methods on the problem of handwritten digit recognition.
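
The abstract names two ingredients: nonlinear units built by passing linear Gaussian pre-activations through a nonlinearity, and a variational method that maximizes a lower bound on the training-set likelihood. The sketch below is a minimal illustration of that combination, not the paper's actual algorithm: the layer sizes, weights W, the choice of tanh as the nonlinearity, and the factorized Gaussian approximating distribution q are all illustrative assumptions, and the bound is estimated here by simple Monte Carlo sampling rather than the deterministic variational updates developed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(a):
    """Unit nonlinearity; the paper considers several, tanh is one plausible choice."""
    return np.tanh(a)

# Toy one-hidden-layer generative model (hypothetical sizes).
# Latent pre-activations a ~ N(mu_prior, sigma_prior^2), latent states z = f(a),
# visibles x ~ N(W z + b, sigma_x^2): a linear Gaussian unit followed by a nonlinearity.
n_latent, n_visible = 3, 5
mu_prior, sigma_prior = np.zeros(n_latent), 1.0
W = rng.normal(size=(n_visible, n_latent))
b = np.zeros(n_visible)
sigma_x = 0.5

def log_gauss(x, mu, sigma):
    """log N(x; mu, sigma^2), summed over components."""
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2) - (x - mu)**2 / (2 * sigma**2))

def elbo_estimate(x, mu_q, sigma_q, n_samples=1000):
    """Monte Carlo estimate of the variational lower bound
    E_q[log p(x, a) - log q(a)] <= log p(x),
    with a factorized Gaussian q(a) over the latent pre-activations."""
    total = 0.0
    for _ in range(n_samples):
        a = mu_q + sigma_q * rng.normal(size=n_latent)   # sample a ~ q
        z = f(a)                                         # nonlinear latent state
        log_p = log_gauss(a, mu_prior, sigma_prior) + log_gauss(x, W @ z + b, sigma_x)
        log_q = log_gauss(a, mu_q, sigma_q)
        total += log_p - log_q
    return total / n_samples

# Generate one observation from the model, then score a candidate q against it.
a_true = mu_prior + sigma_prior * rng.normal(size=n_latent)
x = W @ f(a_true) + b + sigma_x * rng.normal(size=n_visible)
print("ELBO estimate:", elbo_estimate(x, mu_q=np.zeros(n_latent), sigma_q=np.ones(n_latent)))
```

Learning in this style would adjust the model parameters (and the per-example q parameters) to push the bound upward; the article derives the corresponding variational updates rather than relying on sampling as above.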

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Depth Perception / physiology
  • Handwriting
  • Learning / physiology*
  • Neural Networks, Computer*
  • Nonlinear Dynamics*
  • Pattern Recognition, Automated
  • Pattern Recognition, Visual / physiology
  • Stochastic Processes