IEEE Trans Neural Netw Learn Syst. 2018 May;29(5):1503-1513. doi: 10.1109/TNNLS.2017.2671845. Epub 2017 Mar 10.

Discriminative Sparse Neighbor Approximation for Imbalanced Learning.

Abstract

Data imbalance is common in many vision tasks where one or more classes are rare. Without addressing this issue, conventional methods tend to be biased toward the majority class, with poor predictive accuracy for the minority class. These methods deteriorate further on small, imbalanced data sets with a large degree of class overlap. In this paper, we propose a novel discriminative sparse neighbor approximation (DSNA) method to ameliorate the effect of class imbalance during prediction. Specifically, given a test sample, we first pass it through a cost-sensitive decision forest to collect a good subset of training examples in its local neighborhood. From this subset we then generate several class-discriminating but overlapping clusters and model each as an affine subspace. From these subspaces, the proposed DSNA iteratively seeks an optimal approximation of the test sample and outputs an unbiased prediction. We show that our method not only effectively mitigates the imbalance issue but also allows the prediction to extrapolate to unseen data; the latter capability is crucial for accurate prediction on small data sets with limited samples. The proposed imbalanced learning method applies to both classification and regression tasks over a wide range of imbalance levels. It significantly outperforms state-of-the-art methods that lack an imbalance-handling mechanism, and performs comparably to or even better than recent deep learning methods while using only hand-crafted features.
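The core idea the abstract describes can be sketched in code: fit an affine subspace to each local cluster and predict from the subspace that best approximates the test sample. The following minimal NumPy sketch is illustrative only, assuming each cluster is modeled by its mean plus its top principal directions and that the predicted label is the one with the smallest approximation residual; the function names and the closed-form projection step are assumptions, not the paper's iterative DSNA procedure or its cost-sensitive forest.

```python
import numpy as np

def affine_subspace_residual(X_cluster, x_test, n_dims=2):
    """Residual of approximating x_test by the affine subspace of a cluster."""
    mu = X_cluster.mean(axis=0)
    # SVD of the centered cluster yields an orthonormal basis
    # for its top principal directions.
    _, _, Vt = np.linalg.svd(X_cluster - mu, full_matrices=False)
    B = Vt[:n_dims].T                  # (d, n_dims) subspace basis
    d = x_test - mu
    proj = B @ (B.T @ d)               # orthogonal projection onto the subspace
    return np.linalg.norm(d - proj)    # distance from the subspace

def predict(clusters, x_test, n_dims=2):
    """Predict the label whose cluster subspace best approximates x_test."""
    residuals = {label: affine_subspace_residual(X, x_test, n_dims)
                 for label, X in clusters.items()}
    return min(residuals, key=residuals.get)
```

For example, with two clusters of points lying on parallel planes, a test sample near one plane is assigned that plane's label because its approximation residual is smaller.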

PMID: 28362590
DOI: 10.1109/TNNLS.2017.2671845
