Soc Cogn Affect Neurosci. Jun 2007; 2(2): 93–103.
PMCID: PMC2555450

The BOLD signal in the amygdala does not differentiate between dynamic facial expressions

Abstract

The amygdala has been considered to be essential for recognizing fear in other people's facial expressions. Recent studies cast doubt on this interpretation. Here we used movies of facial expressions instead of static photographs to investigate the putative fear selectivity of the amygdala using fMRI under more ecological conditions. The amygdala was found to respond more to movies of facial expressions than to pattern motion, but no differences were found between the responses to neutral, happy, disgusted and fearful facial expressions. This lack of emotional selectivity was replicated in three experiments using three different tasks (passive observation, delayed match to sample and viewing for imitation) and two different analysis methods (voxel-by-voxel and anatomical region of interest). Our data therefore provide strong support for the idea that under more ecologically valid conditions, the contribution of the amygdala towards the detection of fearful facial expressions must be more indirect than previously assumed.

Keywords: fMRI, amygdala, movies, emotions, facial expressions, non-selectivity

INTRODUCTION

The amygdaloid complex (hereafter called amygdala) is considered to be important in many aspects of emotional processing [see Phelps and LeDoux (2005) for review]. Amongst other functions, its role in the processing of the emotional facial expressions of other individuals has been intensely investigated.

In the 1990s the amygdala was regarded as a threat module. In terms of facial expressions, it was considered to respond mainly to the sight of fear and to a lesser extent to sadness, anger and surprise (Breiter et al., 1996; Morris et al., 1996; Blair et al., 1999; Whalen et al., 2001; Kim et al., 2003; Wang et al., 2005) [for review, see Phan et al. (2002)]. Some neuroimaging studies, however, found amygdala activation also to positive emotions (Breiter et al., 1996; Pessoa et al., 2002; Yang et al., 2002; Williams et al., 2004), or failed to find differences between emotions (Winston et al., 2003; Fitzgerald et al., 2006), but because these findings were less consistent (Phan et al., 2002), they failed to falsify the fear-module view.

Currently, this view is changing. Half of the patients with bilateral amygdalar lesions turn out not to have measurable deficits in recognizing fear [see Keysers and Gazzola (2006) for review]. Moreover, the selective fear deficit in patient SM, the most thoroughly investigated patient with bilateral amygdalar damage, disappears if she looks at the eye region of other individuals' facial expressions (Adolphs et al., 2005). These findings motivated us to critically re-examine the neuroimaging evidence for amygdalar selectivity for fear.

Under ecological circumstances, facial expressions are dynamic displays. To study the contribution of the amygdala to the visual processing of facial expressions, switching from static displays to movies that capture the dynamic nature of these stimuli dramatically increases the ecological validity of neuroimaging experiments. An experiment testing the amygdala with a range of movies capturing the natural emotional facial dynamics of fear, contrasted against another negative emotion (e.g. disgust) and a positive one (e.g. happiness), would allow assessment of the amygdala's putative selectivity for fear. Unfortunately, this critical test has so far not been performed. LaBar et al. (2003) and Sato et al. (2004) used morphs of facial expressions, which assume linear motion and thus neglect the complex dynamics of natural facial expressions. Kilts et al. (2003) and Wicker et al. (2003) used movies, but did not test the emotion of fear.

Here we therefore tested subjects with facial expressions of fear, disgust, happiness and an emotionally neutral stimulus with a similar amount of facial motion. As the experimental task has been shown to influence amygdalar activity (Critchley et al., 2000; Hariri et al., 2000; Ochsner et al., 2002; Phan et al., 2002; Keightley et al., 2003; Lange et al., 2003), we repeated this study with the same volunteers using three different tasks, enabling us to investigate the consistency of amygdalar activations in the same subjects.

METHODS

Subjects

Seventeen healthy young adults (age range 19–27 years; mean age 23.3 years; nine women, eight men) were recruited from the University of Groningen community. All subjects were right-handed [assessed with the Edinburgh Handedness Inventory (Oldfield, 1971)] and screened for neurological and psychiatric diseases. Informed consent was obtained from each subject in accordance with the human subjects research protocol approved by the Medical Ethical Committee of the University Medical Center Groningen.

Stimuli

Visual stimuli consisted of 3 s movie clips depicting happiness, disgust, fear or a neutral expression. Unlike previous studies using dynamic displays (Kilts et al., 2003; LaBar et al., 2003; Wicker et al., 2003; Sato et al., 2004), we used a range of facial expressions and more naturalistic exemplars (no morphing technique, with its reduced ecological validity, was used). The happy condition consisted of spontaneous laughs of our actors, triggered by jokes. Fearful and disgusted expressions resulted from instructions to the actors to display prototypical, strong expressions of these emotions. The neutral condition consisted of movies of an actor blowing up his or her cheeks. Actors, all Caucasian, were filmed from the shoulders up and were asked to express the different emotions clearly while limiting rigid head movements as much as possible. The movies were recorded digitally with a Sony DSR-PDX10P digital camcorder. Adobe Premiere Pro software was used to cut 3 s movie clips starting with a neutral expression (slightly friendly) lasting 0.5 s, followed by the unfolding of the facial expression, and ending on a strong facial expression (see supporting online information, Videos S1–S4 for examples). An extra control condition displayed abstract pattern motion, composed of 0.5 s of a static oval shape patterned with vertical or horizontal stripes, part of which then started to swirl for 2.5 s (see supporting online information, Video S5). Twenty different actors (10 males, 10 females) were filmed for all facial conditions. Each film was then rated on the content of basic emotions on a 7-point intensity scale (range 1–7) by an independent group of 15 subjects [according to methods used in Adolphs et al. (1994)]. On a separate 3-point scale (range 1–3), subjects rated how genuine the emotions looked. Some of the actors were rated as displaying some emotions more intensely than others, while other actors displayed their emotions more homogeneously. Out of the 20 actors, we therefore selected the five males and five females that showed the least difference between the intensity ratings of their target emotions. For this subset of actors, there were no significant differences between the intensity of the displayed target emotions (see Figure 1). On average, the chosen movie sets for the different conditions scored >2 on the 3-point genuineness scale.
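To make the selection step concrete, the following is a minimal sketch (not the authors' code) of how one could pick, per gender, the five actors whose target-emotion intensity ratings differ least. The ratings array is simulated placeholder data, and the range-based homogeneity criterion is an illustrative assumption.

```python
import numpy as np

# Hypothetical sketch of the actor-selection step: 20 actors, each rated on a
# 7-point scale for the intensity of their three emotional target expressions.
rng = np.random.default_rng(0)
emotions = ["happy", "disgust", "fear"]
ratings = rng.uniform(4.0, 7.0, size=(20, len(emotions)))  # placeholder data
genders = np.array(["m"] * 10 + ["f"] * 10)

# Homogeneity criterion (an assumption): the range of an actor's target-emotion
# intensities; smaller means the emotions were displayed more evenly.
spread = ratings.max(axis=1) - ratings.min(axis=1)

selected = []
for g in ("m", "f"):
    idx = np.where(genders == g)[0]
    selected.extend(idx[np.argsort(spread[idx])[:5]])  # 5 most homogeneous
print(sorted(selected))
```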

Fig. 1
Ratings of emotional facial expressions on emotion words. Data are from an independent sample of 15 normal subjects. The photos are snapshots from one of the stimulus movies. The words in capitals above each graph refer to the stimulus category, those ...

Procedure

The experiment was composed of three smaller experiments, each with a different instruction to the subjects. The Observation and the Discrimination tasks were conducted on the first scanning day, the Imitation task on a second scanning day. Stimuli were presented using Presentation software (Neurobehavioral Systems, Inc).

Observation task

Subjects were instructed to pay careful attention to the different movie clips, without further explicit tasks. Stimuli were shown in a randomized event-related design, with movies lasting 3 s and separated by a random interval (average 6 s, range 4–8 s, hereafter written 6 ± 2 s) during which a white fixation cross was shown on a black background. The experiment was split into two functional runs lasting ~7.5 min each. In each run, 5 out of the 10 movies of each condition were presented twice. After the functional runs, during the acquisition of an anatomical image, subjects had to perform a surprise memory task: they were shown 40 movies of single facial expressions, half of which had been shown to them during the functional runs. The 20 new movies were either movies of the same actors performing emotions other than those used in this experiment, or of different actors showing the same or different emotions. Subjects had to indicate by means of a button press, in a two-alternative forced-choice task, whether they had seen the movie before or not.
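As an illustration of this timing, here is a small sketch (an assumed reconstruction, not the actual presentation script) that generates one run: 5 conditions with 10 presentations each, 3 s movies, and 6 ± 2 s jittered fixation, which indeed yields runs of roughly 7.5 min.

```python
import numpy as np

rng = np.random.default_rng(1)
conditions = ["happy", "disgust", "fear", "neutral", "pattern"]
trials = np.repeat(conditions, 10)       # 5 movies shown twice per condition
rng.shuffle(trials)                      # randomized event-related order

t, onsets = 0.0, []
for cond in trials:
    onsets.append((round(t, 1), cond))   # stimulus onset in seconds
    t += 3.0                             # movie duration
    t += rng.uniform(4.0, 8.0)           # fixation interval, 6 +/- 2 s
print(f"run length ~{t / 60:.1f} min")   # ~7.5 min, matching the text
```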

Discrimination task

After the Observation task, subjects received instructions for this second task. Subjects performed four functional runs (each lasting ~10 min) of a delayed-match-to-sample task on emotions: movies were shown in pairs, and subjects had to report by means of a button press whether the emotion of the first and second movie was the same or different. Only brain activity during presentation of the first stimulus in the pair is analyzed here, as it represents the brain activity during the deliberate extraction of the emotion from a facial expression, without the motor planning involved during the presentation of the second stimulus. Movie pairs were shown pseudo-randomly; 50% of the trials displayed the same emotion. Every movie was shown twice in the first position of movie pairs to enable comparisons between all tasks. Random intervals separated the two movies of a trial (4 ± 2 s) and two consecutive trials (6 ± 2 s). A red fixation cross was shown within movie pairs; between movie pairs it was white. After this task, subjects were informed about the Imitation task to follow a week later. Subjects were instructed to generate three personal scenarios that could help them induce the tested emotions (e.g. the sudden encounter of a spider, a dirty toilet, a good joke).

Imitation task (second scanning day)

Participants had to watch a facial expression. After a random pause (4 ± 2 s) subjects had to (i) imitate the movements of the first movie during a 3 s period and (ii) generate the corresponding emotion using the scenarios they had prepared ahead of time. In particular, subjects were instructed not simply to generate facial movements generic to the demonstrator's emotion, but to imitate the particularities of that specific exemplar of the displayed emotion. Only the brain activity during the presentation of the movie was analyzed, to avoid contamination by motor execution. Data from this session thus represent brain activity while subjects explicitly pay attention to the motor aspects of a facial expression. Subjects were cued to start imitating by a change in color of the fixation cross from red to green; the same cross turning white was the signal to stop the imitation. Every movie clip was shown twice.

For the purpose of analyses, the same number of trials of each condition was analyzed in all three tasks, namely 20 trials of each emotion. The discrimination task, owing to its delayed-match-to-sample paradigm, contained twice as many movies, but only the sample movies were analyzed. This allowed direct comparisons between the three sub-experiments.

Importantly, to avoid biasing mental processes during free viewing, subjects were kept naive about the discrimination and imitation tasks that lay ahead. Similarly, during the discrimination task, subjects did not yet know that they would have to perform an imitation task later. To this end, the tasks had to be conducted in a fixed order (Observation, Discrimination, Imitation) for all subjects.

MRI data acquisition

Imaging data were acquired with a 3T Philips Intera MRI scanner (Philips, Best, The Netherlands). The standard 6-channel SENSE head coil was used to acquire whole-brain echo-planar functional images (EPIs). Thirty-nine axial slices were acquired with the following parameters: TR 2000 ms; TE 30 ms; flip angle 90°; SENSE factor 2; field of view 224 mm; matrix 64 × 64; slice thickness 3.5 mm with no slice gap, yielding isotropic voxels of 3.5 × 3.5 × 3.5 mm. In addition, two anatomical images were acquired: one 3D-FFE with which to co-register and normalize the functional data (TR = 25 ms, TE = 4.6 ms, flip angle = 30°, FOV = 256 mm, matrix 256 × 256, slice thickness 1.0 mm) and one high-contrast 3D-T1TFE with which to trace the amygdalae (TR = 8.2 ms, TE = 3.7 ms, flip angle = 8.0°, FOV = 256 mm, matrix 256 × 256, slice thickness 1.0 mm).
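A quick back-of-the-envelope check of these parameters (simple arithmetic based on the values above, not taken from the paper):

```python
# Consistency check on the reported acquisition parameters.
TR = 2.0                                # seconds (TR 2000 ms)
run_duration = 7.5 * 60                 # one ~7.5 min observation run
volumes_per_run = run_duration / TR     # ~225 EPI volumes per run
voxel_volume = 3.5 ** 3                 # ~42.9 mm^3 per isotropic voxel
print(int(volumes_per_run), round(voxel_volume, 1))
```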

General data processing

A voxel-based analysis implemented in SPM2 (www.fil.ion.ucl.ac.uk/spm) was used first to analyze the fMRI data. Functional images were temporally adjusted for the interleaved slice acquisition and then realigned to the first functional image of the first run. High-quality T1 images were co-registered to the mean EPI image and segmented. Low-frequency signal drift was corrected by applying a high-pass temporal filter with a cut-off of 250 s. Data pre-processing ended here for the analysis of signals in anatomically defined regions of interest (ROIs). For the whole-brain analyses, on the other hand, the co-registered grey matter segment was normalized onto the MNI grey matter template and the resulting normalization parameters were applied to all EPI images. The functional data were spatially smoothed with a 6 mm isotropic Gaussian kernel before the statistical analysis.
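The smoothing step can be illustrated with generic tools. Below is a minimal sketch, assuming scipy, that applies the standard FWHM-to-sigma conversion (SPM specifies the kernel width as FWHM, whereas a Gaussian filter takes a standard deviation); the EPI array is a placeholder, not real data.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# 6 mm FWHM isotropic kernel on 3.5 mm isotropic voxels:
# sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.355, then convert mm -> voxels.
fwhm_mm, voxel_mm = 6.0, 3.5
sigma_vox = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / voxel_mm

epi_volume = np.random.rand(64, 64, 39)   # placeholder volume (64 x 64, 39 slices)
smoothed = gaussian_filter(epi_volume, sigma=sigma_vox)
```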

General linear model (GLM)

In all analyses, a similar general linear model random-effects analysis was performed. For each subject, a general linear model considering the time course of each condition, convolved with the hemodynamic response function, was used to estimate the contribution of each condition. A single contrast value was then extracted for each contrast of interest and subject. The contrast values from the 17 subjects were then compared against zero using a one-sample t-test to implement a random-effects analysis. Resulting P-values are reported uncorrected for multiple comparisons. Input for this analysis could be the mean activity extracted from a region of interest (Brett et al., 2002) (http://marsbar.sourceforge.net) or the voxel-by-voxel activity of the entire brain using SPM2 (see below).
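To make the analysis logic explicit, here is a self-contained sketch on simulated data (not the SPM2 implementation): each condition's onsets are convolved with a canonical double-gamma HRF, betas are estimated by least squares, one contrast value is taken per subject, and the 17 values are tested against zero. The onsets and effect sizes are hypothetical.

```python
import numpy as np
from scipy.stats import gamma, ttest_1samp

TR, n_scans = 2.0, 225

def hrf(t):
    # canonical double-gamma HRF (peak ~5 s, undershoot ~15 s); an assumption,
    # standing in for SPM2's built-in hemodynamic response function
    return gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0

h = hrf(np.arange(0, 32, TR))

def regressor(onsets, duration=3.0):
    box = np.zeros(n_scans)                      # boxcar for one condition
    for onset in onsets:
        box[int(onset / TR):int((onset + duration) / TR)] = 1.0
    return np.convolve(box, h)[:n_scans]

# hypothetical onsets for two conditions, plus a constant term
X = np.column_stack([regressor([10, 80, 150]),
                     regressor([45, 115, 185]),
                     np.ones(n_scans)])

contrast_values = []
for _ in range(17):                              # one value per subject
    y = X @ np.array([1.0, 1.0, 100.0]) + np.random.randn(n_scans)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    contrast_values.append(np.array([1.0, -1.0, 0.0]) @ beta)  # cond1 - cond2

t_stat, p = ttest_1samp(contrast_values, 0.0)    # random-effects test vs zero
print(t_stat, p)                                 # P reported uncorrected
```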

Analysis of anatomically defined amygdala

We wanted to find out whether the anatomically defined amygdala as a whole is differentially activated by our stimulus conditions. We traced the amygdala individually on our T1TFE images using the protocol of Kates et al. (1997), as it uses a slice orientation similar to ours. The protocol of Schumann et al. (2004) used slices oriented parallel to the main axis of the temporal cortex, and was thus harder to adapt to our data set. From the resulting amygdala ROIs, data were analyzed using the GLM (see above), yielding a single value per amygdala and subject for each contrast of interest. The values from these contrasts were then examined over tasks and stimuli using repeated-measures ANOVAs followed by Newman–Keuls post hoc comparisons. All P-values are two-tailed. For statistical analyses, both SPSS (SPSS, Inc.) and Statistica (Statsoft, Inc.) were used.
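The ROI side of this pipeline reduces to masking and averaging. A sketch with placeholder arrays follows (the real masks were hand-traced, and the MarsBaR toolbox cited above performed this step in the actual analysis):

```python
import numpy as np

n_scans = 225
epi = np.random.rand(n_scans, 16, 16, 10)   # placeholder 4D data (time first)
mask = np.zeros((16, 16, 10), dtype=bool)   # stands in for a traced amygdala
mask[6:9, 6:9, 4:7] = True

# one mean ROI value per scan; this time course is then fed into the GLM above
roi_timecourse = epi[:, mask].mean(axis=1)
print(roi_timecourse.shape)                 # (225,)
```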

Voxel-by-voxel analysis of the amygdala

A voxel-wise analysis in the amygdala was implemented using the GLM described before to investigate the spatial pattern of activity in the amygdala. An uncorrected threshold of P < 0.001 was used to compare activity in different conditions within a particular task. This threshold was then lowered to P < 0.1 to look for weak but consistent preferences common to all three tasks. See ‘Results’ for further details.

RESULTS

Behavioral data

Behavioral results of three subjects could not be analyzed due to malfunctioning of the computer that recorded key-presses during scanning sessions. Behavioral data presented are therefore based on the analysis of the remaining 14 subjects. All 17 subjects were kept for the fMRI analysis.

Surprise memory task

Subjects showed good performance on this task: on average, 93% of the new stimuli were correctly identified, while 84% of the familiar movie clips were identified as such. All subjects scored >70% [calculated as (correct rejections + hits)/number of trials] on the memory task, suggesting that subjects paid attention to our stimuli during the Observation task.
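A worked example of that score, assuming the 40-trial task split evenly into 20 new and 20 familiar movies and plugging in the group-average rates reported above:

```python
correct_rejections = 0.93 * 20          # new movies correctly rejected
hits = 0.84 * 20                        # familiar movies recognized as seen
score = (correct_rejections + hits) / 40
print(round(score, 3))                  # ~0.885, comfortably above the 0.70 bar
```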

Discrimination task

On average 92% of the ‘same emotion’ trials were correctly identified, and 97% of the ‘different emotion’ trials. All subjects scored >80% on this task, justifying the conclusion that all our subjects paid attention during the delayed-match-to-sample-task.

No performance data were acquired during the imitation task, although gross motor movements of the subjects were monitored by an independent observer with an infrared camera installed in the MRI environment. Facial movements during the appropriate phase of the imitation task were confirmed for all subjects.

Amygdala activations during the different conditions

Figure 2 illustrates the findings of the voxel-by-voxel analysis of brain activity. During the Observation task (Figure 2A), comparing the emotional, neutral and pattern movies against baseline (i.e. fixation between conditions) showed that all stimuli activated a region that overlapped with the amygdala in all tasks (leftmost three columns). Comparing conditions against each other, all facial conditions differed from the pattern condition in the dorsal aspects of the amygdalar complex (rightmost three columns), but none of the emotional facial expressions differed from the neutral condition (middle three columns) or from each other (Figure 2D, leftmost three columns) at the rigorous threshold normally used in fMRI studies without an a priori hypothesis (P < 0.001, uncorrected). The same was true during the presentation of the first stimulus of each delayed-match-to-sample trial in the Discrimination task (Figure 2B; Figure 2D, middle three columns). Data in the retention interval and during the presentation of the second stimulus were modeled in the GLM, but not analyzed, as they reflect functions such as memory and motor control that were irrelevant to our study.

Fig. 2
Results of the voxel-wise analysis of amygdalar activity during the different tasks. In the right top corner: three T1 images averaged over all 17 subjects showing the regions of the brain shown in a–d (at y = −2, −6 and −10). ...

During the viewing phase of the Imitation task, no pattern condition existed; again, all facial expressions differed from baseline, but none differed from each other or from the neutral condition (Figure 2C, Figure 2D rightmost three columns).

Together, these analyses show robust activation of the amygdala to all facial stimuli and, to a lesser extent, to moving patterns. Nevertheless, there is no evidence for the threat-module hypothesis in any of the three tasks. Instead, the amygdala appears to respond similarly to neutral and emotional facial expressions.

Anatomically defined amygdalae

Amygdalae were traced on individual T1-weighted images (see supporting information, Figure S1). The regions of interest for the left and right amygdala had volumes of 1656 ± 47 mm3 and 1669 ± 28 mm3, respectively. These volumes are very close to the average amygdala sizes reported by Brierley et al. (2002) in a meta-analysis of 51 volumetric studies of the amygdalae: 1727 ± 35 mm3 (left) and 1692 ± 37 mm3 (right) (mean ± 95% confidence interval of the mean). We then extracted the mean activity within the two amygdalae of each subject and analyzed this activity with a GLM to extract parameter estimates for amygdala activation to all stimuli. These subject-specific parameter estimates were then compared over all 17 subjects using repeated-measures ANOVAs (Figure 3).

Fig. 3
Results from the anatomically defined amygdala analysis. Contrast values for the different visual stimuli compared against baseline for the two amygdalae during observation (top row), during the first stimulus of the discrimination task (middle row) and ...

Comparing faces and patterns during the observation and discrimination tasks (2 tasks × 2 hemispheres × 5 stimuli) revealed main effects of task (P < 0.0002), hemisphere (P < 0.01) and stimulus (P < 10^−8), but no significant interactions (all P > 0.05). Newman–Keuls post hoc exploration of the effects revealed that the effect of task was caused by smaller amygdala activations during the observation task compared with the discrimination task. The right amygdala was activated to a larger extent than the left amygdala in our tasks, causing the effect of hemisphere [in agreement with Morris et al. (1998), Noesselt et al. (2005) and Pegna et al. (2005); however, see Baas et al. (2004)]. The effect of stimulus was due to the patterns always causing smaller activations than the faces (all P < 0.0005); there were no significant differences between the faces (all P > 0.2).
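For readers who want to reproduce this kind of analysis, a minimal sketch of such a repeated-measures ANOVA using statsmodels on simulated, balanced data (the original analyses used SPSS and Statistica, and the Newman–Keuls post hoc step is not shown):

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(2)
rows = [(s, task, hemi, stim, rng.normal())
        for s in range(17)                                  # 17 subjects
        for task in ("observation", "discrimination")       # 2 tasks
        for hemi in ("left", "right")                       # 2 hemispheres
        for stim in ("happy", "disgust", "fear", "neutral", "pattern")]
df = pd.DataFrame(rows, columns=["subject", "task", "hemi", "stim", "beta"])

res = AnovaRM(df, depvar="beta", subject="subject",
              within=["task", "hemi", "stim"]).fit()
print(res.anova_table)      # F and P values for main effects and interactions
```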

When the viewing phase of the Imitation task was also included, comparisons could no longer include the pattern condition, as this condition is absent from imitation (subjects would not know how to imitate a pattern motion with their face). Comparing the amygdalar activations to the facial expressions (3 tasks × 2 hemispheres × 4 facial expressions) revealed main effects of task and hemisphere (both P < 0.005), but none of facial expression (P > 0.5), and no significant interactions. Although the lack of main effects and interactions involving facial expression did not call for post hoc comparisons, we used Newman–Keuls tests to verify that the facial expression of fear never produced activations significantly exceeding those of the other facial expressions (all P > 0.25), nor did any emotional facial expression cause activity significantly exceeding that of the neutral facial expressions (all P > 0.40, except happy > neutral in the right hemisphere during the viewing phase of imitation, P = 0.06).

To examine whether the amygdalae might differentiate between facial expressions only during a particular time window of the response, we also extracted the time course of the BOLD response during the observation task and found no significant differences between the responses to the different faces at any time point (all P > 0.1). We also examined the time courses during both the first and second stimulus of the discrimination task, and again found no differences at any time point (all P > 0.1). Using the anatomically defined amygdalae, we therefore found no evidence whatsoever for the hypothesis that the amygdalae respond more to fearful facial expressions than to other facial expressions, nor that emotional facial expressions exceed neutral facial expressions in activation. The only robust effect was a larger activation to faces compared with patterns.

Interestingly, average brain activity increased from the Observation to the Discrimination task, despite the fact that the same stimuli were used in both tasks. The gain in attention that is probably linked to the active performance of a discrimination task thus outweighed any potential habituation effect in our experiment. The basic pattern of responses to the emotional stimuli was nevertheless similar in all three tasks, as shown by the lack of a stimulus × task interaction.

Voxel-by-voxel approach with lower thresholds

A ROI analysis of the amygdala as a whole thus failed to provide evidence for fear selectivity, and a voxel-wise analysis at P < 0.001 showed that not even a subregion of the amygdala shows such selectivity. This finding is in apparent contrast to the majority of studies that used static facial expressions. Negative findings are difficult to interpret, as they may result from a lack of statistical power: Figure 3B in Winston et al. (2003), for instance, suggests a trend towards fear selectivity in the right amygdala that might have been too small to reach significance within that experiment. To strengthen the validity of our negative finding, we lowered our threshold to P < 0.01, uncorrected (k > 6 voxels), and examined separately for the three experiments whether the contrasts fear-neutral, fear-happy or fear-disgust showed significant differences in the amygdalae. Not a single significant difference was found.

DISCUSSION

The present study examined amygdala activation during the observation of dynamic facial expressions and pattern movements under different viewing tasks.

Observation of all film clips displaying facial movements robustly activated the amygdala, in line with previous single cell (Fried et al., 1997; Kreiman et al., 2000; Fried et al., 2002) and neuroimaging studies (Breiter et al., 1996; Morris et al., 1998; Whalen et al., 1998; Kilts et al., 2003; Winston et al., 2003; Fitzgerald et al., 2006). The subjects showed the same pattern of activations in three experiments with different instructions.

Pattern movements, when tested, activated the amygdala significantly, but the activations were always significantly smaller than those to our facial stimuli, in accordance with previous findings (Kilts et al., 2003; Pasley et al., 2004; Reinders et al., 2005; Fitzgerald et al., 2006).

Comparing movies of different facial expressions, be they perceived as emotional or not, we found no measurable difference in the BOLD signal in the amygdala. This lack of difference cannot be due to a lack of sensitivity in our measurements, as all facial conditions produced reliable, reproducible and strong activations that could be differentiated from pattern motion, nor can it be due to our stimuli not containing strong enough emotional content, as the ratings of our stimuli are well in line with those published previously for standardized facial expression sets (Figure 1). The lack of difference between the facial stimuli persisted even when lowering the threshold to P < 0.01.

The task performed while watching the facial stimuli affected amygdalar activation. Compared with the observation task, the more demanding discrimination and imitation tasks augmented both the magnitude and variance of the amygdalar activations (Figure 3), resulting in an overall reduction of the t-values in Figure 2. As task order was fixed, the increase in magnitude in the later tasks occurred despite the habituating effect of having seen the facial stimuli in previous tasks. Our results using movies thus differ from those observed using static facial expressions where many studies have shown a decrease in amygdala signal during the performance of a cognitive task (Critchley et al., 2000; Hariri et al., 2000; Ochsner et al., 2002; Phan et al., 2002; Hariri and Weinberger, 2003; Hariri et al., 2003; Keightley et al., 2003; Lange et al., 2003) and others found a significant habituation effect in the amygdala (Breiter et al., 1996; Wright et al., 2001; Fischer et al., 2003; Wedig et al., 2005).

By capturing the natural motion of facial expressions, we increased the ecological validity of our movies, but their validity could be further increased by increasing the relevance of the facial expressions to the subjects. Exploring amygdalar responses in a setup where the facial expressions could indicate the presence of a true danger in the scanner room would be an important future step in research.

A priori, our neutral condition posed a problem: it contained facial movements mainly in the lower regions of the face, while the emotional expressions also had salient motion in their upper parts. A posteriori, however, the observed lack of difference in amygdalar response between the neutral and the emotional expressions suggests that this difference was of no relevance for the amygdala.

The tasks used in experiments can influence results. A passive observation task (e.g. Breiter et al. 1996; Whalen et al., 1998; Whalen et al., 2001; Kim et al., 2003; Lange et al., 2003) does not force subjects to attend to the emotions. Discrimination tasks may add cognitive (e.g. Hariri et al., 2000; Canli et al., 2002; Yang et al., 2002; Kilts et al., 2003; LaBar et al., 2003; Lange et al., 2003; Winston et al., 2003; Fitzgerald et al., 2006; Wright and Liu, 2006) and memory components that may affect the discriminative nature of the amygdala. Imitation tasks may emphasize motor components (e.g. Carr et al., 2003; Dapretto et al., 2006). Using multiple paradigms on the same subjects is thus necessary for differentiating response patterns that are specific to particular tasks from those common to all paradigms. Here we observed a similar lack of amygdalar emotion selectivity in all three tasks, suggesting that this phenomenon is not task specific.

This finding is in contrast with many results published earlier. It would be tempting to search for methodological differences: we used movies, while most studies used static photographs. Indeed, the only study directly comparing amygdalar responses to movies and static images found that the amygdala did not discriminate between dynamic emotional and neutral facial expressions (Kilts et al., 2003). However, that study also failed to find significant differences in the amygdala when comparing static photographs of neutral and emotional facial expressions, in line with a growing number of other studies using static facial expressions (Winston et al., 2003; Fitzgerald et al., 2006). For each individual study, methodological differences can be found (event-related vs block design, movies vs photos, etc.), and subtle differences in the timing parameters of an experiment (e.g. presentation rate, duration, number of repetitions) can dramatically affect the results (Price et al., 1996). The effect of these factors will need to be tested thoroughly to identify the conditions under which the amygdala does show emotion selectivity and those in which it does not. In the face of the accumulating negative findings, though, it is important to start acknowledging that the amygdala appears not to be always selective for particular emotions. Instead, in the context of a belief in amygdala selectivity for negative emotions, the scientific peer-review process may have been more favorable towards positive findings and theory-congruent studies, while searching for technical weaknesses in studies that reported theory-incongruent negative findings. This may have resulted in overestimating the selectivity of the amygdala.

Independent evidence for this view comes from the most recent exploration of the deficits encountered after amygdalar lesions. Adolphs et al. (2005) demonstrated that without amygdalae, patient SM is not incapable of recognizing fear; she simply fails to bias her visual processing of faces towards the eyes, and she does so during the exploration of all facial expressions, not fear in particular. Only the consequences of not paying attention to the eyes differed between emotions, as the eye region is particularly informative for recognizing fear.

If the amygdala is not fear selective, what role might it have during facial processing? Amongst many other connections, the amygdala receives converging input from temporal visual areas processing faces as well as from areas processing the hedonic valence of events, including the orbitofrontal, cingulate and insular cortex (McDonald, 1998; Stefanacci and Amaral, 2000; Stefanacci and Amaral, 2002). This pattern of connectivity makes the amygdala particularly suitable for processing correlations between stimuli and their hedonic consequences, and for biasing visual processing towards stimuli that are of particular importance for the organism. Which stimuli are important can be learned by the system, but for a social animal such as humans, faces [and eyes in particular (Whalen et al., 2004)] are a stimulus type that deserves particular attention in many circumstances and would thus a priori receive privileged processing in this circuitry, explaining the higher activity to all facial expressions compared with pattern motion in our experiment. Facial movement will be particularly important in this context, as it suggests some change in the state of other individuals, and these changes need to be processed. Static faces, on the other hand, suggest that things stay as they were, thus requiring no further processing. In this view, the amygdala contributes to the processing of facial expressions not directly, by detecting the expressed emotion, but indirectly, by directing processing towards particularly informative parts of our environment, in particular faces and eyes. The actual recognition of the emotion occurs in other neural structures. This view of the amygdala is similar to that put forward by others (Whalen et al., 1998; Holland and Gallagher, 1999; Sander et al., 2003; Adolphs et al., 2005).

Our fMRI experiment measured the activity of voxels of 3.5 × 3.5 × 3.5 mm. Finding that a voxel as a whole does not differentiate between different emotions does not preclude the existence of neurons in that voxel that sharply discriminate between different emotions, but it shows that, within that voxel, the processing of different facial expressions must have comparable metabolic demands. This suggests that most neurons in the amygdala are unlikely to be selective for one particular emotion. Indeed, the amygdala has been shown to contain neurons selective for the positive as well as the negative valence of a visual stimulus, with the two types of neurons overlapping substantially within the scale of an fMRI voxel (Nishijo et al., 1988; Fried et al., 1997; Paton et al., 2006).

Using movies to probe the selectivity of the amygdala for particular facial expressions, we found that the amygdala appears not to be selective, and we replicated this finding in three experiments. Notwithstanding the open debate on amygdalar selectivity during the viewing of static photographs of emotions, at least for the ecologically more relevant case of movies we can thus state that the contribution of the amygdala towards detecting fearful facial expressions must be more indirect than previously assumed. This finding should significantly guide our changing understanding of the role of the amygdala.

Acknowledgments

We thank Remco Renken for his valuable help in data analysis and Anita Kuiper for her help in data acquisition and performing the independent movement observation during the imitation task. Mbemba Jabbi is thanked for his help in creating the film stimuli, Valeria Gazzola for her assistance in programming the Presentation software and data analyses. We thank Andre Aleman for helpful comments on earlier versions of this manuscript. The research was supported by an NWO VIDI grant and a Marie Curie Excellence Grant to CK.

Footnotes

Conflict of Interest

None declared.

REFERENCES

  • Adolphs R, Tranel D, Damasio H, Damasio A. Impaired recognition of emotion in facial expressions following bilateral damage to the human amygdala. Nature. 1994;372:669–72. [PubMed]
  • Adolphs R, Gosselin F, Buchanan TW, Tranel D, Schyns P, Damasio AR. A mechanism for impaired fear recognition after amygdala damage. Nature. 2005;433:68–72. [PubMed]
  • Baas D, Aleman A, Kahn RS. Lateralization of amygdala activation: a systematic review of functional neuroimaging studies. Brain Research Reviews. 2004;45:96–103. [PubMed]
  • Blair RJ, Morris JS, Frith CD, Perrett DI, Dolan RJ. Dissociable neural responses to facial expressions of sadness and anger. Brain. 1999;122:883–93. [PubMed]
  • Breiter HC, Etcoff NL, Whalen PJ, et al. Response and habituation of the human amygdala during visual processing of facial expression. Neuron. 1996;17:875–87. [PubMed]
  • Brett M, Anton JL, Valabregue R, Poline JB. Region of interest analysis using an SPM toolbox. Neuroimage. 2002;16:Abstract 497.
  • Brierley B, Shaw P, David AS. The human amygdala: a systematic review and meta-analysis of volumetric magnetic resonance imaging. Brain Research Reviews. 2002;39:84–105. [PubMed]
  • Canli T, Sivers H, Whitfield SL, Gotlib IH, Gabrieli JD. Amygdala response to happy faces as a function of extraversion. Science. 2002;296:2191. [PubMed]
  • Carr L, Iacoboni M, Dubeau MC, Mazziotta JC, Lenzi GL. Neural mechanisms of empathy in humans: a relay from neural systems for imitation to limbic areas. Proceedings of the National Academy of Sciences USA. 2003;100:5497–502. [PMC free article] [PubMed]
  • Critchley H, Daly E, Phillips M, et al. Explicit and implicit neural mechanisms for processing of social information from facial expressions: a functional magnetic resonance imaging study. Human Brain Mapping. 2000;9:93–105. [PubMed]
  • Dapretto M, Davies MS, Pfeifer JH, et al. Understanding emotions in others: mirror neuron dysfunction in children with autism spectrum disorders. Nature Neuroscience. 2006;9:28–30. [PMC free article] [PubMed]
  • Fischer H, Wright CI, Whalen PJ, McInerney SC, Shin LM, Rauch SL. Brain habituation during repeated exposure to fearful and neutral faces: a functional MRI study. Brain Research Bulletin. 2003;59:387–392. [PubMed]
  • Fitzgerald DA, Angstadt M, Jelsone LM, Nathan PJ, Phan KL. Beyond threat: amygdala reactivity across multiple expressions of facial affect. Neuroimage. 2006;30:1441–8. [PubMed]
  • Fried I, MacDonald KA, Wilson CL. Single neuron activity in human hippocampus and amygdala during recognition of faces and objects. Neuron. 1997;18:753–765. [PubMed]
  • Fried I, Cameron KA, Yashar S, Fong R, Morrow JW. Inhibitory and excitatory responses of single neurons in the human medial temporal lobe during recognition of faces and objects. Cerebral Cortex. 2002;12:575–84. [PubMed]
  • Hariri AR, Weinberger DR. Functional neuroimaging of genetic variation in serotonergic neurotransmission. Genes Brain and Behavior. 2003;2:341–9. [PubMed]
  • Hariri AR, Bookheimer SY, Mazziotta JC. Modulating emotional responses: effects of a neocortical network on the limbic system. Neuroreport. 2000;11:43–8. [PubMed]
  • Hariri AR, Mattay VS, Tessitore A, Fera F, Weinberger DR. Neocortical modulation of the amygdala response to fearful stimuli. Biological Psychiatry. 2003;53:494–501. [PubMed]
  • Holland PC, Gallagher M. Amygdala circuitry in attentional and representational processes. Trends in Cognitive Sciences. 1999;3:65–73. [PubMed]
  • Kates WR, Abrams MT, Kaufmann WE, Breiter SN, Reiss AL. Reliability and validity of MRI measurement of the amygdala and hippocampus in children with fragile X syndrome. Psychiatry Research. 1997;75:31–48. [PubMed]
  • Keightley ML, Winocur G, Graham SJ, Mayberg HS, Hevenor SJ, Grady CL. An fMRI study investigating cognitive modulation of brain regions associated with emotional processing of visual stimuli. Neuropsychologia. 2003;41:585–96. [PubMed]
  • Keysers C, Gazzola V. Towards a unifying neural theory of social cognition. Progress in Brain Research. 2006;156:379–401. [PubMed]
  • Kilts CD, Egan G, Gideon DA, Ely TD, Hoffman JM. Dissociable neural pathways are involved in the recognition of emotion in static and dynamic facial expressions. Neuroimage. 2003;18:156–68. [PubMed]
  • Kim H, Somerville LH, Johnstone T, Alexander AL, Whalen PJ. Inverse amygdala and medial prefrontal cortex responses to surprised faces. Neuroreport. 2003;14:2317–22. [PubMed]
  • Kreiman G, Koch C, Fried I. Category-specific visual responses of single neurons in the human medial temporal lobe. Nature Neuroscience. 2000;3:946–53. [PubMed]
  • LaBar KS, Crupain MJ, Voyvodic JT, McCarthy G. Dynamic perception of facial affect and identity in the human brain. Cerebral Cortex. 2003;13:1023–33. [PubMed]
  • Lange K, Williams LM, Young AW, et al. Task instructions modulate neural responses to fearful facial expressions. Biological Psychiatry. 2003;53:226–232. [PubMed]
  • McDonald AJ. Cortical pathways to the mammalian amygdala. Progress in Neurobiology. 1998;55:257–332. [PubMed]
  • Morris JS, Frith CD, Perrett DI, et al. A differential neural response in the human amygdala to fearful and happy facial expressions. Nature. 1996;383:812–5. [PubMed]
  • Morris JS, Friston KJ, Buchel C. A neuromodulatory role for the human amygdala in processing emotional facial expressions. Brain. 1998;121:47–57. [PubMed]
  • Nishijo H, Ono T, Nishino H. Single neuron responses in amygdala of alert monkey during complex sensory stimulation with affective significance. Journal of Neuroscience. 1988;8:3570–83. [PubMed]
  • Noesselt T, Driver J, Heinze HJ, Dolan R. Asymmetrical activation in the human brain during processing of fearful faces. Current Biology. 2005;15:424–9. [PubMed]
  • Ochsner KN, Bunge SA, Gross JJ, Gabrieli JD. Rethinking feelings: an fMRI study of the cognitive regulation of emotion. Journal of Cognitive Neuroscience. 2002;14:1215–29. [PubMed]
  • Oldfield RC. The assessment and analysis of handedness: the Edinburgh inventory. Neuropsychologia. 1971;9:97–113. [PubMed]
  • Pasley BN, Mayes LC, Schultz RT. Subcortical discrimination of unperceived objects during binocular rivalry. Neuron. 2004;42:163–72. [PubMed]
  • Paton JJ, Belova MA, Morrison SE, Salzman CD. The primate amygdala represents the positive and negative value of visual stimuli during learning. Nature. 2006;439:865–70. [PMC free article] [PubMed]
  • Pegna AJ, Khateb A, Lazeyras F, Seghier ML. Discriminating emotional faces without primary visual cortices involves the right amygdala. Nature Neuroscience. 2005;8:24–5. [PubMed]
  • Pessoa L, McKenna M, Gutierrez E, Ungerleider LG. Neural processing of emotional faces requires attention. Proceedings of the National Academy of Sciences USA. 2002;99:11458–63. [PMC free article] [PubMed]
  • Phan KL, Wager T, Taylor SF, Liberzon I. Functional neuroanatomy of emotion: a meta-analysis of emotion activation studies in PET and fMRI. Neuroimage. 2002;16:331–48. [PubMed]
  • Phelps EA, LeDoux JE. Contributions of the amygdala to emotion processing: from animal models to human behavior. Neuron. 2005;48:175–87. [PubMed]
  • Price CJ, Moore CJ, Frackowiak RS. The effect of varying stimulus rate and duration on brain activity during reading. Neuroimage. 1996;3:40–52. [PubMed]
  • Reinders AA, den Boer JA, Buchel C. The robustness of perception. European Journal of Neuroscience. 2005;22:524–30. [PubMed]
  • Sander D, Grafman J, Zalla T. The human amygdala: an evolved system for relevance detection. Reviews in the Neurosciences. 2003;14:303–16. [PubMed]
  • Sato W, Kochiyama T, Yoshikawa S, Naito E, Matsumura M. Enhanced neural activity in response to dynamic facial expressions of emotion: an fMRI study. Cognitive Brain Research. 2004;20:81–91. [PubMed]
  • Schumann CM, Hamstra J, Goodlin-Jones BL, et al. The amygdala is enlarged in children but not adolescents with autism; the hippocampus is enlarged at all ages. Journal of Neuroscience. 2004;24:6392–401. [PubMed]
  • Stefanacci L, Amaral DG. Topographic organization of cortical inputs to the lateral nucleus of the macaque monkey amygdala: a retrograde tracing study. Journal of Comparative Neurology. 2000;421:52–79. [PubMed]
  • Stefanacci L, Amaral DG. Some observations on cortical inputs to the macaque monkey amygdala: an anterograde tracing study. Journal of Comparative Neurology. 2002;451:301–23. [PubMed]
  • Wang L, McCarthy G, Song AW, LaBar KS. Amygdala activation to sad pictures during high-field (4 tesla) functional magnetic resonance imaging. Emotion. 2005;5:12–22. [PubMed]
  • Wedig MM, Rauch SL, Albert MS, Wright CI. Differential amygdala habituation to neutral faces in young and elderly adults. Neuroscience Letters. 2005;385:114–9. [PubMed]
  • Whalen PJ. Fear, vigilance, and ambiguity: initial neuroimaging studies of the human amygdala. Current Directions in Psychological Sciences. 1998;7:177–88.
  • Whalen PJ, Rauch SL, Etcoff NL, McInerney SC, Lee MB, Jenike MA. Masked presentations of emotional facial expressions modulate amygdala activity without explicit knowledge. Journal of Neuroscience. 1998;18:411–8. [PubMed]
  • Whalen PJ, Shin LM, McInerney SC, Fischer H, Wright CI, Rauch SL. A functional MRI study of human amygdala responses to facial expressions of fear versus anger. Emotion. 2001;1:70–83. [PubMed]
  • Whalen PJ, Kagan J, Cook RG, et al. Human amygdala responsivity to masked fearful eye whites. Science. 2004;306:2061. [PubMed]
  • Wicker B, Keysers C, Plailly J, Royet JP, Gallese V, Rizzolatti G. Both of us disgusted in My insula: the common neural basis of seeing and feeling disgust. Neuron. 2003;40:655–64. [PubMed]
  • Williams MA, Morris AP, McGlone F, Abbott DF, Mattingley JB. Amygdala responses to fearful and happy facial expressions under conditions of binocular suppression. Journal of Neuroscience. 2004;24:2898–904. [PubMed]
  • Winston JS, O'Doherty J, Dolan RJ. Common and distinct neural responses during direct and incidental processing of multiple facial emotions. Neuroimage. 2003;20:84–97. [PubMed]
  • Wright CI, Fischer H, Whalen PJ, McInerney SC, Shin LM, Rauch SL. Differential prefrontal cortex and amygdala habituation to repeatedly presented emotional stimuli. Neuroreport. 2001;12:379–83. [PubMed]
  • Wright P, Liu Y. Neutral faces activate the amygdala during identity matching. Neuroimage. 2006;29:628–36. [PubMed]
  • Yang TT, Menon V, Eliez S, et al. Amygdalar activation associated with positive and negative facial expressions. Neuroreport. 2002;13:1737–41. [PubMed]
