Proc Natl Acad Sci U S A. Oct 24, 2000; 97(22): 11829–11835.
PMCID: PMC34356
Colloquium Paper

Spatial processing in the auditory cortex of the macaque monkey

Abstract

The patterns of cortico-cortical and cortico-thalamic connections of auditory cortical areas in the rhesus monkey have led to the hypothesis that acoustic information is processed in series and in parallel in the primate auditory cortex. Recent physiological experiments in the behaving monkey indicate that the response properties of neurons in different cortical areas are both functionally distinct from each other, which is indicative of parallel processing, and functionally similar to each other, which is indicative of serial processing. Thus, auditory cortical processing may be similar to the serial and parallel “what” and “where” processing by the primate visual cortex. If “where” information is serially processed in the primate auditory cortex, neurons in cortical areas along this pathway should have progressively better spatial tuning properties. This prediction is supported by recent experiments that have shown that neurons in the caudomedial field have better spatial tuning properties than neurons in the primary auditory cortex. Neurons in the caudomedial field are also better than primary auditory cortex neurons at predicting the sound localization ability across different stimulus frequencies and bandwidths in both azimuth and elevation. These data support the hypothesis that the primate auditory cortex processes acoustic information in a serial and parallel manner and suggest that this may be a general cortical mechanism for sensory perception.

One of the fundamental tasks of the auditory system is to determine the spatial location of acoustic stimuli. In contrast to the visual and somatosensory systems, the auditory periphery cannot encode stimulus location, but can only encode the presence of particular stimulus frequencies in the input. The central nervous system therefore must compute the spatial location of a stimulus by integrating the responses of many individual sensory receptors.

There are three main cues that can be used to compute the spatial location of an acoustic stimulus: interaural intensity differences, interaural time or phase differences, and differences in the stimulus spectrum at the tympanic membrane (1). The binaural cues are critical for localization in azimuth, but are much less effective for localization in elevation because the ears of most mammals are located symmetrically on the head. However, reflections of the acoustic signal by the torso, head, pinna, and ear canal create spectral peaks and notches that vary with stimulus elevation (2, 3). Although the physical cues that could provide the necessary information to localize sounds are well defined, how the nervous system uses these cues to calculate the spatial location of acoustic stimuli is far from resolved. There are several stations along the ascending auditory pathway in mammals that integrate the spatial cues necessary for the localization of sounds, including the superior olivary complex (4), the inferior colliculus (5–7), and the thalamus (8–10). The spatial tuning of the majority of auditory cortical neurons is very broad, commonly over 90° for a half-maximal response (11–16). In contrast, primates can detect changes in sound location as small as a few degrees or less (17–22). This finding might appear to indicate that the auditory cortex is not necessary for this perception, yet auditory cortical lesions produce clear deficits in sound localization performance in cats (23), ferrets (24), New World monkeys (25), Old World monkeys (26), and humans (27). Thus, a key question is how these broadly spatially tuned neurons in the auditory cortex process acoustic information to ultimately result in the perception of acoustic space.
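The interaural timing cue mentioned above can be made concrete with a small sketch. The spherical-head (Woodworth) approximation below is a standard textbook model, not something derived in this paper; the head radius and speed-of-sound defaults are illustrative assumptions.

```python
import math

def itd_seconds(azimuth_deg, head_radius_m=0.0875, speed_of_sound=343.0):
    """Interaural time difference (ITD) for a source at the given azimuth,
    using the spherical-head Woodworth approximation (valid for 0-90 deg).
    Defaults are illustrative, not values from the paper."""
    theta = math.radians(azimuth_deg)
    # Path-length difference around a sphere: r*(theta + sin(theta))
    return head_radius_m / speed_of_sound * (theta + math.sin(theta))

# The ITD grows monotonically from 0 at the midline toward ~650 us at 90 deg
# for these defaults, which is why azimuth thresholds of a few degrees imply
# sensitivity to timing differences of only tens of microseconds.
```

This makes explicit why the binaural cues degrade for elevation: a source moving purely in elevation on the midsagittal plane leaves the azimuth, and hence the ITD, unchanged.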

The auditory cortex of the primate can be anatomically subdivided into several "core," "belt," and "parabelt" cortical areas based on cytoarchitecture, cortico-cortical connections, and thalamo-cortical connections (see refs. 28 and 29). It has been speculated that these multiple auditory cortical areas process acoustic information in both a serial and parallel manner (28), similar to visual cortical processing of "what" and "where" information (30, 31). While the available anatomical data are consistent with this hypothesis, there are relatively few electrophysiological studies in the monkey to either support or refute this idea. Merzenich and Brugge (32) were the first to describe the physiological properties of the macaque primary auditory cortex (AI) and the rostral field (R) in the core area, and the caudomedial field (CM) and lateral field (L) of the belt area. They found that AI and R neurons had sharper frequency tuning than those in CM, based on multiple-unit responses in the anesthetized animal. Subsequent studies (33, 34) support these initial observations. More recent studies indicate that neurons in L of the belt area respond better to spectrally complex stimuli, including vocalizations (35), which suggests that L is processing "what" information. In contrast, caudal and medial fields have been proposed to process "where" information. Neurons in CM have broad frequency tuning, and the responses of CM neurons to tone stimuli depend on an intact AI (36). These limited physiological data are consistent with serial processing of acoustic information from the core to the belt auditory cortical areas, and this relatively new hypothesis is currently being rigorously tested in several laboratories.

Neuronal Activity as a Function of Stimulus Frequency and Intensity

Previous electrophysiological studies in the primate auditory cortex have largely been done in anesthetized animals. However, the activity of neurons in the primate auditory cortex can either increase or decrease depending on whether the monkey is attending to the stimulus, is not attending to the stimulus, or is anesthetized (11, 37, 38). To define the frequency and intensity responses of primate cortical neurons in the attended state, single-neuron responses were recorded in monkeys while they performed a sound localization task (39). In this experiment, tone stimuli at 31 different frequencies (2- to 5-octave range) and 16 different intensities (90-dB range) were presented from a speaker located directly opposite the contralateral ear. Fig. 1 shows representative frequency response areas (FRAs) measured across three different auditory cortical areas in a representative monkey. The normalized firing rate for each stimulus is indicated by the color, with red regions corresponding to the stimuli that elicited the greatest activity and blue showing stimuli that elicited activity significantly greater than the spontaneous rate but less than 25% of the peak activity. The frequency range tested was adjusted for each neuron, as the frequency tuning could be quite different between neurons in different cortical areas (Fig. 1 Upper Right). These experiments demonstrated that AI neurons in the behaving monkey had relatively sharp frequency tuning (e.g., Fig. 1 B, C, E, and F). In contrast, neurons in CM generally had broader frequency tuning (Fig. 1 D, G, and H), even for neurons with similar characteristic frequencies (CFs), defined as the frequency that elicited a response at the lowest intensity (Fig. 1 C vs. G). There was also a shift in CF when crossing the border between different cortical areas, for example from L to AI (Fig. 1 A vs. B) or between AI and CM (Fig. 1 C vs. D and Fig. 1 F vs. G).

Figure 1
Frequency response areas of single auditory cortical neurons. Responses were recorded to 50-ms tone stimuli (3-ms rise/fall) presented at 16 different intensity levels [10- to 90-dB sound pressure level (SPL)] at 31 different ...

Fig. 2 shows representative FRAs recorded from neurons in a second monkey. Neurons in R showed tuning functions similar to those of AI neurons (compare Fig. 2 A–G to H–K, M, and O). Again, the AI and CM border was easily identified in this monkey by the change in the frequency tuning and the CF (Fig. 2 K–P).

Figure 2
FRAs recorded in the three different auditory cortical areas in a second monkey. (A–G) FRAs from single neurons recorded in R. (H–J) FRAs recorded at the rostral border of AI. (K, M, and O) Neurons recorded at the medial border of ...

These observations are consistent with those described in the anesthetized monkey and indicate that different auditory cortical areas have distinct functional properties using simple tone stimuli. Statistical analysis confirmed that CM neurons had the broadest frequency tuning of all fields examined, and neurons in R had the narrowest frequency tuning (39). The ability to integrate information across a broad frequency range would likely improve spatial processing, as binaural and spectral cues across different frequencies could be used, and broadband stimuli are more easily localized than narrow band stimuli (see below). The broad frequency tuning of neurons in CM would make them ideally suited to integrate information across frequencies, consistent with the hypothesis that AI and CM form part of a serial “where” processing stream of auditory information (28).

Neuronal Activity as a Function of Stimulus Location

The hypothesis that AI and CM neurons process auditory spatial information in series predicts that the spatial response properties of these neurons should improve between AI and CM. To address this issue, the responses of neurons in these areas were measured while the monkey performed a sound localization task, and the neuronal activity was compared with the monkey's sound localization performance (16). To determine sound localization thresholds, the monkey depressed a lever to initiate a trial, and several stimuli were presented from directly in front of the monkey. At some random time the stimulus changed location in either azimuth or elevation. When the monkey detected this change it released the lever and received a reward. The sound localization threshold was defined as the distance between locations necessary for the monkey to detect a difference on half of the trials. These thresholds are shown for two monkeys in Fig. 3. The filled bars show the thresholds measured in azimuth and the open bars show thresholds measured in elevation for tone stimuli of different frequencies (Left) or noise stimuli with different spectral content (Right). Across these different stimuli, the thresholds for localization in azimuth were lower than those for localization in elevation. This difference was greatest for tone stimuli, where in most cases the elevation thresholds could not be measured because 30° was the maximum change in location tested. For noise stimuli, there was a progressive improvement in elevation thresholds as the stimulus contained higher-frequency components. The worst thresholds were noted for stimuli containing 750–1,500 Hz, improving for 3,000–6,000 Hz and for 5,000–10,000 Hz, and the lowest thresholds were noted when the stimulus was a broadband noise containing all of those frequencies. There was no such obvious trend as a function of the tone stimulus frequency.
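The behavioral threshold definition above (the separation the monkey detects on half of the trials) can be sketched as interpolation of a psychometric function. This is an assumed implementation for illustration only: the function name, the linear interpolation, and the toy data are not from the paper, and the detection proportions are assumed to increase with separation.

```python
import numpy as np

def localization_threshold(separations_deg, prop_detected, criterion=0.5):
    """Estimate the angular separation detected on the criterion fraction of
    trials by linear interpolation of a psychometric function.
    Assumes prop_detected increases with separation. Returns None when
    performance never reaches the criterion (cf. elevation thresholds for
    tones that exceeded the 30-degree maximum change tested)."""
    seps = np.asarray(separations_deg, dtype=float)
    p = np.asarray(prop_detected, dtype=float)
    if p.max() < criterion:
        return None  # undefined threshold, as with most tone-elevation cases
    return float(np.interp(criterion, p, seps))

# Hypothetical session: detection improves with separation, so the 50%
# point falls between the 10- and 15-degree separations tested.
# localization_threshold([5, 10, 15, 30], [0.1, 0.4, 0.7, 0.95])
```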

Figure 3
Sound localization thresholds across stimulus frequencies and bandwidths. Thresholds are shown for localization in azimuth (solid bars) and elevation (open bars). Thresholds could not be defined if they were greater than 30° (broken lines). ...

The activity of single neurons was also recorded in these monkeys while they performed a similar task. Each neuron was tested with two stimuli on randomly interleaved trials. One stimulus was a tone near the characteristic frequency of the neuron and the other was a noise stimulus that included the CF. Both of these stimuli usually elicited a robust response from the neuron under study. A typical example from an AI neuron is shown in Fig. 4. To the left are poststimulus time histograms (PSTHs) taken over 10 trials in which either a tone (Fig. 4A) or band-passed noise (Fig. 4B) was presented from one of 17 different locations in front of the monkey. Stimuli were positioned straight ahead and at 15° and 30° eccentricity along the horizontal, vertical, and both oblique axes. Fig. 4 shows the PSTHs at their relative locations in this region of frontal space. This neuron had a more robust response when the stimuli were presented to the right of the midline (in contralateral space), compared with when the stimuli were presented to the left of the midline. However, there was little difference in activity as a function of the elevation of the stimulus. This can be most readily appreciated by comparing the middle row of PSTHs (azimuth tuning) to the middle column of PSTHs (elevation tuning). The three-dimensional reconstruction of these responses is shown to the right of each plot in Fig. 4. These plots were normalized to the peak activity of that neuron measured across all locations for both stimuli, with the response shown in the z axis as a function of the stimulus azimuth and elevation. The response contour for noise stimuli had a greater slope than the response contour for the tone stimuli, indicating that this neuron was more sensitive to the location of noise stimuli compared with tones.

Figure 4
Spatial response profiles of an AI neuron. PSTHs are shown in their relative position from the monkey's perspective (rightward PSTHs correspond to stimuli presented to the right of midline). Numbers above the most eccentric PSTHs correspond to the ...

An example from a CM neuron is shown in Fig. 5. In this case, the neuron responded better to noise than to tones. Further, the response to noise was more strongly modulated by the stimulus location, illustrated by the greater slope of the surface contour. Finally, there was a difference in the spatial preferences of this neuron depending on the stimulus. When tone stimuli were presented (Fig. 5A), there was essentially no modulation of the response as a function of the stimulus elevation, shown as the iso-intensity contours (heavy black lines) being roughly parallel to the elevation axis. This can best be seen for stimuli at the midline (0° azimuth), where the neuronal response varied very little over 60° differences in elevation when tones were presented. In contrast, the response along the midline to noise stimuli (Fig. 5B) was greatest for upward elevations and smallest at the lowest stimulus elevation.

Figure 5
Three-dimensional reconstructions of spatial responses from a representative CM neuron. Conventions are as in Fig. 4. This neuron had a lower response to tone stimuli, and a shallower response as a function of stimulus location (A ...

The results from both monkeys indicated that although most neurons responded to all stimulus locations, i.e., they were very broadly tuned, the main features of the neuronal responses were consistent with the behavioral ability to localize sounds. First, localization in elevation was very poor for tone stimuli, and few neurons (<10%) were encountered that had changes in their response as a function of the elevation of tone stimuli. In contrast, localization in elevation of noise stimuli containing high-frequency components was much better than for tone stimuli, and more neurons were encountered that were sensitive to the elevation of these noise stimuli (≈40%). Second, there was a greater rate of change in the response as a function of the stimulus azimuth for noise stimuli compared with tone stimuli. Finally, the highest percentage of neurons were sensitive to the location of broadband noise (≈55% in azimuth and ≈30% in elevation for AI neurons, and ≈80% in azimuth and ≈30% in elevation for CM neurons), which showed the lowest behavioral thresholds of all stimuli tested. These general observations suggest that the firing rate of single neurons could contain sufficient information for the monkey to localize these different types of stimuli.

Correlations Between Neural Activity and Sound Localization

These qualitative impressions were verified by directly comparing the neuronal and behavioral data. Fig. 6 shows the firing rate as a function of stimulus azimuth for a single AI neuron (A) and a single CM neuron (B). The task that was used to define thresholds (Fig. 3) required the monkey to detect a change in the location of the stimulus from directly ahead. If the monkey had access to the information provided by only one neuron, then a significant difference in activity from when the stimulus was presented directly in front of the monkey would be a reliable signal that the stimulus had changed location. The predicted threshold for each neuron was defined as the distance that the stimulus would have to move for the activity to change by one standard deviation from the straight-ahead response (dashed lines of Fig. 6). This predicted threshold would be large if the spatial tuning of the neuron was relatively poor (slopes of the line near zero), and the predicted threshold would be small if the response of the neuron was strongly modulated by stimulus position. The prediction was compared with the behavior by taking the ratio of the predicted threshold divided by the measured threshold. This ratio was less than one if the neuron predicted a smaller threshold than was observed, one if the neuron and the behavior were the same, and greater than one if the behavioral threshold was smaller than the neuronal prediction. If the neuronal responses reflect the sound localization ability, stimuli that the monkey had difficulty in localizing should elicit poor spatial resolution in most neurons (and therefore predict high thresholds, for a ratio near 1.0), while stimuli that the monkey could easily localize should elicit sharp spatial resolution in most neurons (and therefore predict low thresholds, again for a ratio near 1.0). The distribution of this ratio for 353 AI neurons and 118 CM neurons is shown in the middle panel of Fig. 6. For both AI and CM, while most neurons predicted thresholds greater than those observed behaviorally, many neurons did predict thresholds consistent with the behavior. Further, CM neurons were better able to predict the behavior than AI neurons (P < 0.05), as indicated by more neurons having ratios close to 1.0 (compare the middle and right panels of Fig. 6 A and B).
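The predicted-threshold criterion described above (a one-standard-deviation change in firing rate relative to the straight-ahead response) can be sketched as follows. This is a simplified reading of the analysis with assumed function names and toy tuning data; the original study's exact procedure may differ in detail.

```python
import numpy as np

def predicted_threshold(azimuths_deg, mean_rate, sd_ref, ref_az=0.0):
    """Predicted neuronal threshold: the smallest displacement from the
    reference (straight-ahead) location at which the mean firing rate
    differs from the reference response by at least one standard
    deviation of that reference response. Returns None if no tested
    location meets the criterion. Assumed implementation."""
    az = np.asarray(azimuths_deg, dtype=float)
    rate = np.asarray(mean_rate, dtype=float)
    i_ref = int(np.argmin(np.abs(az - ref_az)))
    delta = np.abs(rate - rate[i_ref])          # change from reference rate
    displacement = np.abs(az - ref_az)[delta >= sd_ref]
    return float(displacement.min()) if displacement.size else None

def threshold_ratio(predicted_deg, measured_deg):
    """Predicted/measured ratio: near 1.0 when the neuron matches the
    behavior, > 1.0 when the behavior outperforms the neuron."""
    return predicted_deg / measured_deg

# Toy azimuth tuning: the rate first moves by >= 1 SD (here, 5 spikes/s)
# at the 15-degree locations, so the predicted threshold is 15 degrees.
```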

Figure 6
Predictions of behavioral performance by single neurons. The mean and standard deviation of the response of a single neuron as a function of the stimulus azimuth (0° elevation) are shown for an AI neuron (A) and a CM neuron (B). (Left) ■ ...

The ability of some neurons to predict behavior indicates that neurons in these areas could provide valuable information to the monkey about the spatial location of the stimulus. However, there was a wide variation in threshold ratios, meaning that many cells performed better or worse than the monkey. Because all neurons responded to these stimuli, they were presumably conveying some information to the monkey. One possibility is that pooling the responses of all neurons would enhance the ability to predict the behavior. Alternatively, pooling the responses of all neurons might degrade the population's ability to predict the behavior, owing to the neurons that showed poor spatial sensitivity.

The results of this pooling analysis are shown in Fig. 7, where the mean and standard deviation across all comparisons (tone and noise stimuli for azimuth and elevation, 21 comparisons total) are shown for two populations of pooled neurons in each cortical area. Open bars show the results when all neurons tested in each cortical area were pooled. The neuronal predictions of the behavior for both AI and CM neurons were significantly worse than the measured behavior. A second level of analysis pooled the responses based on their spatial tuning. Significant spatial tuning was defined as a statistically significant correlation of the response as a function of stimulus location in at least one direction (azimuth or elevation) for at least one tested stimulus (tone or noise). The closed bars of Fig. 7 show that there was an improvement in the ability to predict the behavior by pooling the responses of only these spatially sensitive neurons. For AI neurons, the improvement still resulted in predictions that were significantly worse than the measured behavior. For CM neurons, however, the predictions based on the pooled spatially sensitive neurons were not different from the behavioral thresholds measured in the monkey. This result indicates that relatively small populations of neurons in CM contained sufficient information for the monkey to perform the task.
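The selection step above (restricting the pooled population to neurons whose responses correlate with stimulus location) can be sketched as follows. The paper used a statistical significance test for the correlation; the fixed correlation-magnitude cutoff here is a stand-in for that test, and all names and data are illustrative.

```python
import numpy as np

def spatially_sensitive(mean_responses, locations_deg, r_crit=0.75):
    """Flag a neuron as spatially sensitive when its mean response is
    strongly correlated with stimulus location along one axis.
    A fixed |r| cutoff stands in for the significance test used in the
    original analysis (assumed simplification)."""
    r = np.corrcoef(locations_deg, mean_responses)[0, 1]
    return bool(abs(r) >= r_crit)

def pooled_ratio(predicted_measured_ratios):
    """Pool single-neuron predicted/measured threshold ratios by their
    mean; a pooled value near 1.0 means the population as a whole
    matches the measured behavioral thresholds."""
    return float(np.mean(predicted_measured_ratios))

# A monotonically increasing tuning curve qualifies; a flat alternating
# one does not, so only the first neuron would enter the closed-bar pool.
```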

Figure 7
Mean and standard deviation for the predicted/measured ratio pooled across either all neurons measured in that cortical area (open bars) or restricted to only the neurons in that cortical area that had significant correlation between the neuronal ...

These results are consistent with serial processing of spatial information from AI to CM in the primate auditory cortex. Neurons in CM showed better spatial tuning than AI neurons, and the ability to predict the behavior by all of the measured CM neurons was not significantly different from the predictions by only the spatially sensitive AI neurons. This is expected if the CM neurons were selectively activated by the spatially tuned AI neurons, ultimately leading to an enhanced representation of acoustic space during this serial processing. In support of this idea is the finding that CM neurons receive inputs from broad regions of AI that span much of the frequency representation (33, 36).

These results raise several obvious questions. The first is which other cortical areas also process spatial information. The experiments to date have concentrated on AI and CM, but it remains to be seen how neurons in other cortical areas also participate in this perception. It is likely that other cortical areas also will have spatially tuned neurons, as the “parallel” nature of information processing is almost certainly not strictly maintained. It is more likely that neurons across cortical areas process both “what” and “where” information to differing degrees to aid in ultimately “binding” these features to give rise to the percept of a real-world object.

A second question is how this information is used by the monkey. Although both AI and CM are likely to be necessary for sound location perception, it is also unlikely that either is sufficient for this percept in the primate. The most likely scenario is that these neurons form one link in the serial processing of spatial information that will be further processed in other auditory cortical fields, as well as parietal (e.g., ref. 40) and/or frontal cortical areas (41). The inputs from CM are likely candidates to contribute to the creation of multimodal spatial perception in the parietal lobe (40).

In summary, the available physiological evidence is supportive of the hypothesis that spatial location is processed in series between AI and CM. It remains to be seen how the outputs of CM are further processed, and how this processing results in the perception of acoustic space. Similarly, other features of the acoustic stimulus may be preferentially processed in other cortical areas, for example in the L fields in the belt and parabelt areas. Finally, the role of the cortical areas in the core region, particularly AI and R, is still unclear. It may be that both areas process all types of information in parallel, or there may be a subdivision of feature processing at this initial cortical level. Nonetheless, these experiments on the cortical mechanisms of sound localization indicate that broadly tuned neurons can in fact provide information necessary to perform perceptual discriminations at a much finer resolution than the bandwidths of the neuronal tuning functions would suggest. This type of information processing may be a general mechanism by which the activity of neurons in the cerebral cortex leads to perception across sensory modalities (4244).

Acknowledgments

I thank E. A. Disbrow, L. A. Krubitzer, J. A. Langston, M. L. Phan, T. K. Su, and T. M. Woods for helpful comments on earlier versions of this manuscript, and D. C. Guard, M. L. Phan and T. K. Su for their participation in the described experiments. Funding was provided by National Institutes of Health Grant DC-02371, the Klingenstein Foundation, and the Sloan Foundation.

Abbreviations

AI, primary auditory cortex; CM, caudomedial field; FRA, frequency response area; CF, characteristic frequency; R, rostral field; L, lateral field; PSTH, poststimulus time histogram.

Footnotes

This paper was presented at the National Academy of Sciences colloquium “Auditory Neuroscience: Development, Transduction, and Integration,” held May 19–21, 2000, at the Arnold and Mabel Beckman Center in Irvine, CA.

References

1. Middlebrooks J C, Green D M. Annu Rev Psychol. 1991;42:135–159.
2. Wightman F L, Kistler D J. J Acoust Soc Am. 1989;85:868–878.
3. Pralong D, Carlile S. J Acoust Soc Am. 1994;95:3435–3444.
4. Joris P X, Yin T C. J Neurophysiol. 1995;73:1043–1062.
5. Yin T C, Chan J C. J Neurophysiol. 1990;64:465–488.
6. Kuwada S, Batra R, Yin T C, Oliver D L, Haberly L B, Stanford T R. J Neurosci. 1997;17:7565–7581.
7. Litovsky R Y, Yin T C. J Neurophysiol. 1998;80:1285–1301.
8. Clarey J C, Barone P, Imig T J. J Neurophysiol. 1994;72:2384–2405.
9. Barone P, Clarey J C, Irons W A, Imig T J. J Neurophysiol. 1996;75:1206–1220.
10. Imig T J, Poirier P, Irons W A, Samson F K. J Neurophysiol. 1997;78:2754–2771.
11. Benson D A, Hienz R D, Goldstein M H, Jr. Brain Res. 1981;219:249–267.
12. Middlebrooks J C, Pettigrew J D. J Neurosci. 1981;1:107–120.
13. Imig T J, Irons W A, Samson F R. J Neurophysiol. 1990;63:1448–1466.
14. Rajan R, Aitkin L M, Irvine D R F, McKay J. J Neurophysiol. 1990;64:872–887.
15. Middlebrooks J C, Xu L, Eddins A C, Green D M. J Neurophysiol. 1998;80:863–881.
16. Recanzone G H, Guard D C, Phan M L, Su T K. J Neurophysiol. 2000;83:2723–2739.
17. Altshuler M W, Comalli P E. J Aud Res. 1975;15:262–265.
18. Brown C H, Beecher M D, Moody D B, Stebbins W C. Science. 1978;201:753–754.
19. Brown C H, Beecher M D, Moody D B, Stebbins W C. J Acoust Soc Am. 1980;68:127–132.
20. Brown C H, Schessler T, Moody D, Stebbins W. J Acoust Soc Am. 1982;72:1804–1811.
21. Perrott D R, Saberi K. J Acoust Soc Am. 1990;87:1728–1731.
22. Recanzone G H, Makhamra S D D R, Guard D C. J Acoust Soc Am. 1998;103:1085–1097.
23. Jenkins W M, Merzenich M M. J Neurophysiol. 1984;52:819–847.
24. Kavanagh G L, Kelly J B. J Neurophysiol. 1987;57:1746–1766.
25. Thompson G C, Cortez A M. Behav Brain Res. 1983;8:211–216.
26. Heffner H E, Heffner R S. J Neurophysiol. 1990;64:915–931.
27. Sanchez-Longo L P, Forster F M. Neurology. 1958;8:119–125.
28. Rauschecker J P. Audiol Neuro-Otol. 1998;3:86–103.
29. Kaas J H, Hackett T A, Tramo M J. Curr Opin Neurobiol. 1999;9:164–170.
30. Ungerleider L G, Mishkin M. In: Analysis of Visual Behavior. Ingle D J, Goodale M A, Mansfield J W, editors. Cambridge, MA: MIT Press; 1982. pp. 549–556.
31. Ungerleider L G, Haxby J V. Curr Opin Neurobiol. 1994;4:157–165.
32. Merzenich M M, Brugge J F. Brain Res. 1973;50:275–296.
33. Morel A, Garraghty P E, Kaas J H. J Comp Neurol. 1993;335:437–459.
34. Kosaki H, Hashikawa T, He J, Jones E G. J Comp Neurol. 1997;386:304–316.
35. Rauschecker J P, Tian B, Hauser M. Science. 1995;268:111–114.
36. Rauschecker J P, Tian B, Pons T, Mishkin M. J Comp Neurol. 1997;382:89–103.
37. Miller J M, Sutton D, Pfingst B, Ryan A, Beaton R, Gourevitch G. Science. 1972;177:449–451.
38. Miller J M, Dobie R A, Pfingst B E, Hienz R D. Am J Otolaryngol. 1980;1:119–130.
39. Recanzone G H, Guard D C, Phan M L. J Neurophysiol. 2000;83:2315–2331.
40. Mazzoni P, Bracewell R M, Barash S, Andersen R A. J Neurophysiol. 1996;75:1233–1241.
41. Romanski L M, Tian B, Fritz J, Mishkin M, Goldman-Rakic P S, Rauschecker J P. Nat Neurosci. 1999;2:1131–1136.
42. Bradley A, Skottun B C, Ohzawa I, Sclar G, Freeman R D. J Neurophysiol. 1987;57:755–772.
43. Vogels R, Orban G A. J Neurosci. 1990;10:3543–3558.
44. Prince S J, Pointon A D, Cumming B G, Parker A J. J Neurosci. 2000;20:3387–3400.
