Local estimation of posterior class probabilities to minimize classification errors

IEEE Trans Neural Netw. 2004 Mar;15(2):309-17. doi: 10.1109/TNN.2004.824421.

Abstract

Decision theory shows that the optimal decision is a function of the posterior class probabilities. More specifically, in binary classification the optimal decision reduces to comparing the posterior probability with a threshold, so the posterior estimates need to be most accurate near that decision threshold. This paper discusses the design of objective functions that provide more accurate estimates of the probability values in this region, taking the characteristics of each decision problem into account. We propose learning algorithms based on the stochastic gradient minimization of these loss functions and show that classifier performance improves when the algorithms behave like sample selectors: samples near the decision boundary are the most relevant ones during learning.
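
As an illustration of the decision rule and the training scheme sketched in the abstract, the Python snippet below derives the cost-sensitive Bayes threshold for a binary problem and trains a logistic posterior estimator by stochastic gradient descent, with a per-sample weight that concentrates learning on samples whose current posterior estimate lies near that threshold. The Gaussian weighting kernel, the bandwidth parameter, and the logistic model are illustrative assumptions, not the specific objective functions proposed in the paper.

import numpy as np

def bayes_threshold(c_fp, c_fn):
    """Bayes-optimal threshold on P(y=1|x) for asymmetric error costs:
    decide class 1 whenever P(y=1|x) exceeds c_fp / (c_fp + c_fn)."""
    return c_fp / (c_fp + c_fn)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_locally_weighted_logistic(X, y, eta, lr=0.1, epochs=50, bandwidth=0.1, seed=0):
    """Stochastic-gradient training of a logistic posterior estimator with a
    hypothetical per-sample weight (Gaussian kernel around eta) that emphasizes
    samples whose current posterior estimate lies near the decision threshold."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = sigmoid(X[i] @ w + b)                             # current posterior estimate
            weight = np.exp(-0.5 * ((p - eta) / bandwidth) ** 2)  # ~1 near eta, ~0 far away
            grad = weight * (p - y[i])                            # weighted cross-entropy gradient
            w -= lr * grad * X[i]
            b -= lr * grad
    return w, b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(-1.0, 1.0, (200, 2)), rng.normal(1.0, 1.0, (200, 2))])
    y = np.concatenate([np.zeros(200), np.ones(200)])
    eta = bayes_threshold(c_fp=1.0, c_fn=4.0)   # false negatives cost four times more
    w, b = train_locally_weighted_logistic(X, y, eta)
    decisions = (sigmoid(X @ w + b) > eta).astype(int)
    print(f"threshold = {eta:.2f}, training error rate = {np.mean(decisions != y):.3f}")

In this sketch the weighting term plays the role of the "sample selector" described in the abstract: examples whose estimated posterior is far from the threshold contribute little to the gradient, while examples near the decision boundary drive most of the learning.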

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Probability*
  • Research Design / standards*