J Vis. 2012 Aug 1;12(8). pii: 1. doi: 10.1167/12.8.1.

A neural-based code for computing image velocity from small sets of middle temporal (MT/V5) neuron inputs.

Author information

School of Psychology, The University of Waikato, Hamilton, New Zealand.


Abstract

It is still not known how the primate visual system measures the velocity of moving stimuli such as edges and dots. Neurons have been found in the Medial Superior Temporal (MST) area of the primate brain that respond at a rate proportional to the speed of the stimulus, but it is not clear how this property is derived from the speed-tuned Middle Temporal (MT) neurons that precede area MST along the visual motion pathway. I show that a population code based on the outputs of a number of MT neurons is susceptible to errors if the MT neurons are tuned to a broad range of spatial frequencies and have receptive fields spanning a wide range of sizes. I present a solution that uses the activity of just three MT units within a velocity channel to estimate the velocity using a weighted vector average (centroid) technique. I use a range of velocity channels (1, 2, 4, and 8°/s) with inhibition between them so that only a single channel passes its velocity estimate on to the next stage of processing (MST). I also include a contrast-dependent redundancy-removal stage that provides tighter spatial resolution for the velocity estimates under conditions of high contrast but trades off spatial compactness for greater sensitivity at low contrast. The new model produces an output signal proportional to the stimulus input velocity (consistent with MST neurons), and its input stages have properties closely tied to those of neurons in areas V1 and MT.
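The channel scheme described above can be sketched in a few lines of code. This is a minimal illustration, not the paper's implementation: the log-Gaussian speed tuning, the spacing of the three units within a channel, and the winner-take-all stand-in for inter-channel inhibition are all assumptions introduced here for concreteness.

```python
import numpy as np

def mt_response(stimulus_speed, preferred_speed, sigma=0.5):
    # Hypothetical log-Gaussian speed tuning for a single MT unit
    # (the actual tuning curves in the model are not given in the abstract).
    return np.exp(-(np.log2(stimulus_speed / preferred_speed)) ** 2
                  / (2 * sigma ** 2))

def channel_estimate(stimulus_speed, preferred_speeds):
    # Weighted vector average (centroid) over the three units of one channel.
    r = np.array([mt_response(stimulus_speed, p) for p in preferred_speeds])
    p = np.array(preferred_speeds)
    return float(np.sum(r * p) / np.sum(r)), float(np.sum(r))

def estimate_velocity(stimulus_speed, channels=(1.0, 2.0, 4.0, 8.0)):
    # Only the most active channel passes its estimate on to the next
    # stage -- a simple winner-take-all stand-in for the inhibition
    # between velocity channels described in the abstract.
    best_estimate, best_activity = None, -np.inf
    for c in channels:
        # Three units per channel, spaced an octave apart around the
        # channel's nominal speed (assumed spacing).
        prefs = (c / 2.0, c, c * 2.0)
        est, activity = channel_estimate(stimulus_speed, prefs)
        if activity > best_activity:
            best_estimate, best_activity = est, activity
    return best_estimate
```

With this assumed tuning, the centroid output grows roughly in proportion to the stimulus speed across the channels' working range, which is the property the model attributes to MST responses.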

[Indexed for MEDLINE]