Front Neuroinform. 2011 Jul 22;5:9. doi: 10.3389/fninf.2011.00009. eCollection 2011.

Brian hears: online auditory processing using vectorization over channels.

Author information

1. Laboratoire Psychologie de la Perception, CNRS and Université Paris Descartes, Paris, France.

Abstract

The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
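The core idea of the abstract, vectorizing a filterbank over frequency channels so that the interpreted Python loop runs only over time samples, can be illustrated with a minimal NumPy sketch. This is a hypothetical example using a simple one-pole filter per channel, not Brian Hears' actual API or filter design; the channel count and coefficient formula are assumptions chosen for illustration.

```python
import numpy as np

n_channels = 3000          # roughly the number of inner hair cells cited above
fs = 44100.0               # assumed sample rate in Hz
# Log-spaced cutoff frequencies spanning the audible range (20 Hz to 20 kHz)
cutoffs = np.logspace(np.log10(20.0), np.log10(20000.0), n_channels)
# One-pole smoothing coefficient per channel: y[n] = a*x[n] + (1 - a)*y[n-1]
a = 1.0 - np.exp(-2.0 * np.pi * cutoffs / fs)

def filterbank(x):
    """Filter a mono signal x through all channels, looping over time
    samples in Python but computing each step vectorized over channels."""
    y = np.zeros(n_channels)
    out = np.empty((len(x), n_channels))
    for n, sample in enumerate(x):       # interpreted loop over time only
        y = a * sample + (1.0 - a) * y   # vectorized across 3000 channels
        out[n] = y
    return out

signal = np.random.randn(512)
response = filterbank(signal)
print(response.shape)  # (512, 3000)
```

Because each time step updates all 3000 channels in a single NumPy operation, the cost of Python interpretation is amortized over the channels, which is the effect the abstract describes; the same channel-wise data layout also maps naturally onto GPU threads.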

KEYWORDS:

Brian; GPU; Python; auditory filter; vectorization
