Neural Netw. 2010 Apr;23(3):435-51. doi: 10.1016/j.neunet.2009.07.025. Epub 2009 Jul 23.

Biased ART: a neural architecture that shifts attention toward previously disregarded features following an incorrect prediction.

Author information

1. Department of Cognitive and Neural Systems, Boston University, Boston, MA 02215, USA. gail@bu.edu

Abstract

Memories in Adaptive Resonance Theory (ART) networks are based on matched patterns that focus attention on those portions of bottom-up inputs that match active top-down expectations. While this learning strategy has proved successful for both brain models and applications, computational examples show that attention to early critical features may later distort memory representations during online fast learning. For supervised learning, biased ARTMAP (bARTMAP) solves the problem of over-emphasis on early critical features by directing attention away from previously attended features after the system makes a predictive error. Small-scale, hand-computed analog and binary examples illustrate key model dynamics. Two-dimensional simulation examples demonstrate the evolution of bARTMAP memories as they are learned online. Benchmark simulations show that featural biasing also improves performance on large-scale examples. One example, which predicts movie genres and is based, in part, on the Netflix Prize database, was developed for this project. Both first principles and consistent performance improvements on all simulation studies suggest that featural biasing should be incorporated by default in all ARTMAP systems. Benchmark datasets and bARTMAP code are available from the CNS Technology Lab Website: http://techlab.bu.edu/bART/.
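The abstract describes fuzzy ART matching (attention focuses on the portion of a bottom-up input that matches a learned top-down expectation) and the bARTMAP idea of directing attention away from previously attended features after a predictive error. The following is a minimal sketch of that idea, not the published bARTMAP equations: the `bias` vector and the depletion weight `lam` are hypothetical stand-ins for the paper's biasing mechanism, and the choice/match functions are the standard fuzzy ART forms.

```python
def fuzzy_and(a, b):
    """Component-wise minimum: the fuzzy AND that selects the matched
    portion of input a against category weights b."""
    return [min(ai, bi) for ai, bi in zip(a, b)]

def choice(x, w, alpha=0.001):
    """Standard fuzzy ART choice function T = |x ^ w| / (alpha + |w|)."""
    return sum(fuzzy_and(x, w)) / (alpha + sum(w))

def match(x, w, bias=None, lam=0.0):
    """Match value |x ^ w| / |x|, compared against vigilance.

    `bias` (hypothetical) marks features attended during an earlier,
    incorrect prediction; subtracting lam * bias from the matched signal
    shifts attention toward previously disregarded features, so the
    offending category is more likely to fail the vigilance test and
    search continues.
    """
    signal = fuzzy_and(x, w)
    if bias is not None:
        signal = [max(s - lam * b, 0.0) for s, b in zip(signal, bias)]
    return sum(signal) / sum(x)
```

With `lam = 0`, this reduces to ordinary fuzzy ART matching; after an error, setting `bias` to the previously matched signal lowers the match for the category that made the wrong prediction without altering its stored weights.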

PMID: 19811892
DOI: 10.1016/j.neunet.2009.07.025
[Indexed for MEDLINE]
