Dev Neuropsychol. Author manuscript; available in PMC Dec 23, 2009.
Published in final edited form as:
PMCID: PMC2797312
NIHMSID: NIHMS160804

Individual Differences in Children’s Facial Expression Recognition Ability: The Role of Nature and Nurture

Abstract

We examined genetic and environmental influences on recognition of facial expressions in 250 pairs of 10-year-old monozygotic (83 pairs) and dizygotic (167 pairs) twins. Angry, fearful, sad, disgusted, and happy faces varying in intensity (15%–100%), head orientation, and eye gaze were presented in random order across 160 trials. Total correct recognition responses to each facial expression comprised the dependent variables. Twin model-fitting examined genetic and environmental contributions to these variables and their correlations. Results support a common psychometric factor across expressions, influenced primarily by additive genetic effects, with discrimination of specific expressions due largely to non-shared environmental influences.

The study of normal and abnormal development of facial expression recognition (FER) has attracted widespread interest from social cognition and developmental psychopathology researchers. Accurate interpretation of facial expressions is crucial for normal social interactions (Philippot & Feldman, 1990), while perturbations in this ability have been linked to psychopathology, as both a precursor and epiphenomenon of disorders (Herba & Phillips, 2004). Given links with psychological well-being, identifying factors that contribute to individual variability in face-emotion processing presents a key conceptual question that may potentially inform preventative interventions.

Most studies in pursuit of this question focus on specific proximal factors, such as age and development, sex, verbal ability, socioeconomic status (SES), and the presence of psychopathology (for reviews see Herba & Phillips, 2004; McClure, 2000). It has been rare for studies to assess the role of more general distally based factors, such as genetic and environmental influences. As such the present study aimed to delineate the relative contributions of nature and nurture on emerging differences in FER in 10-year-old children. Middle childhood is a salient period for assessing individual differences in recognition ability of facial emotions for three reasons. First, children of this age have an increased repertoire of emotional vocabulary, permitting researchers a wider range of techniques to evaluate this ability. Second, children of this age possess a greater understanding of their own and others’ emotional states than do younger children (Izard & Harris, 1995), and can typically label five or six primary emotions (happy, sad, anger, fear, disgust, and surprise) (Camras, 1985). Third, qualitative changes appear to characterize the strategies employed by children of this age relative to their younger peers when processing faces, such as attending to inner (eyes, nose, mouth) rather than external features (chin, hair, ears) (Campbell, Walker, & Baron-Cohen, 1995).

Although no studies have directly assessed genetic and environmental influences on FER to date, theoretical accounts on how these abilities manifest across development set a precedent for expecting the contributions of both. These theories can broadly be divided into social constructivist (Meadows, 1996) and neurobehavioral maturation (Nelson & de Haan, 1997) models. According to the first of these, FER abilities in childhood are largely nurtured through social experiential factors such as learning. Referencing and modeling adult displays of facial expressions in particular are hypothesized to influence children’s abilities to differentiate and label emotions. In contrast, neurobehavioral models focus on the emergence of FER abilities through maturational changes in brain systems dedicated to processing complex patterns. Emotional face processing in particular has been linked to amygdala engagement, a region known to be responsive to affective stimuli (McClure et al., 2007; Thomas et al., 2001). Recent adult data indicating associations between amygdala activity during processing of emotional faces and genetic polymorphisms (Munafo, Brown, & Hariri, 2007) provide a platform for expecting genetic influence on children’s FER abilities.

As environmental stressors may also impinge on neural function involved in emotion-processing through alteration of key neurotransmitter systems (Heim & Nemeroff, 2001), a broader interpretation of the neurobehavioral model also recognizes environmental contributions to FER. This suggestion concurs with a wider consensus within developmental psychopathology that complex emotional behaviors are likely to be affected by the interplay of both genetic and environmental influences (Rutter, 2003). Consistent with this view, it has been suggested that biological and experiential factors co-act in an integrated fashion to influence FER (McClure, 2000). This interplay could manifest through several pathways. Individuals with biological dispositions for better FER abilities may seek out beneficial social exchanges (person–environment correlation) that amplify existing biological differences (person–environment interaction). Although these specific pathways are at present based on speculation rather than experimentation, this integrative approach implies joint roles of nature and nurture on FER.

The aim of the present study was to assess directly genetic and environmental contributions to variability in FER among 10-year-old children. We examined recognition of five facial expressions (angry, sad, fear, disgust, happy), each displayed at four intensities. As surprise is sometimes poorly understood by children (Camras, 1985), it was not included in the task. To minimize ceiling effects and to enhance variance in task performance, we increased task difficulty by manipulating head and eye gaze direction cues in the target faces. Both are important for the interpretation of social information, including emotional expressions (Emery, 2000; Adams, 2005). Thus, presenting head and eye gaze direction away from the observer was expected to yield lower recognition accuracy.

Our main analyses explored the role of genetic and environmental influences on FER. Integrated neurobehavioral and social constructivist accounts warranted predictions that included contributions from both sources. However, as these theories do not explicitly specify etiological differences across emotional expressions, we adopted a two-stage exploratory approach. We first examined genetic and environmental effects on each emotional expression separately (univariate analysis) and second, we assessed whether genetic and environmental influences on different emotional expressions co-varied (multivariate analysis). More specifically, multivariate analyses allow careful elucidation of whether commonalities across recognition of emotional expressions can be attributed to the effects of common genetic and environmental influences and/or whether differences between them can be due to specific genetic and environmental influences. Because the current project represents the first study of its kind on this issue, no specific hypotheses differentiating additive from dominant genetic influences and shared (family-general) from non-shared (individual-specific) environmental experiences were made. Given known sex differences in mean levels of FER ability (girls perform better than boys; see McClure, 2000), we also explored gender differences in the size of genetic and environmental contributions to each emotion separately (univariate) and the extent to which these covaried across emotions (multivariate).

METHOD

Sample

Subjects were 250 pairs of 10-year-old twins from Wave 2 of the Emotions, Cognitions, Heredity, and Outcomes (ECHO) study. Participants of this study were initially selected using an extremes design from a large ongoing longitudinal sample of twins born in England and Wales (TEDS: Twins Early Development Study, Trouton, Spinath, & Plomin, 2002) to target children with high levels of emotional symptoms. Thus the Wave 1 sample comprised 247 8-year-old twin pairs selected for high scores on parent-reported anxiety. We also randomly selected 53 “control pairs” who did not score in the top 5% of the anxiety score distribution in TEDS, to ensure coverage of the full range of scores on test measures (see Gregory, Eley, & Rijsdijk, 2006). A total of 250 families returned for Wave 2 (Lau, Gregory, Goldwin, Pine, & Eley, 2007), forming the subject pool for the current analyses. Twins were aged between 9 years 7 months and 10 years 10 months (mean: 10 years 1 month). A total of 56.4% of the sample was female, with 83 monozygotic (MZ) and 167 dizygotic (DZ) twin pairs. Informed consent was obtained from parents of twins. Ethical approval for this study was given by the Research Ethics Committee of the Institute of Psychiatry and South London and Maudsley NHS Trust.

Because of concerns that the extremes design used at Wave 1 to select symptomatic subjects would affect the generalizability of results, manifesting as increased means, decreased variances, and decreased covariance of correlated variables, we took two precautions: examination of symptom-FER associations and implementation of a weighting system in all analyses to allow population inference of results. All associations between facial expression variables and self- and parent-reported anxiety symptoms were low and non-significant (rs < 0.10). Furthermore, of the 500 subjects (250 pairs) at Wave 2, only 18 (3.6%) received anxiety diagnoses. Those receiving anxiety diagnoses did not differ in the number of correct responses to any of the facial expression variables (ts < 1.69). Thus in the present sample, FER abilities were not associated with risk for anxiety.

Nevertheless, we constructed a weighting system. Effectively, this assigned lower weights to individuals from categories over-represented in the sample and higher weights to individuals from categories under-represented in the sample, relative to the population distribution. First, to control for biases associated with ascertainment, that is, over-sampling of symptomatic children, we used the ratio of the selection probability of high-symptom families to that of non-symptom control families. Second, to control for biases associated with attrition (individuals with mothers reporting higher levels of emotional symptoms and who experienced greater negative life events were significantly less likely to participate), a second weight was generated using the inverse of the predicted probability of families remaining at Wave 2. These weights were then multiplied to create a final weighting variable that was incorporated in all analyses.
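The two-part weighting scheme described above can be sketched as follows. This is an illustrative sketch only: the probabilities used are hypothetical placeholders, not the study's actual selection or retention probabilities.

```python
# Illustrative sketch of the two-part weighting scheme: an
# ascertainment weight (down-weighting the over-sampled high-symptom
# group) multiplied by an attrition weight (the inverse of the
# predicted probability of remaining in the study at Wave 2).
# All probability values here are hypothetical.

def sampling_weight(is_high_symptom: bool,
                    p_select_high: float,
                    p_select_control: float,
                    p_retained: float) -> float:
    """Combine an ascertainment weight with an attrition weight."""
    # Ascertainment: high-symptom families were over-sampled, so they
    # receive the ratio of control to high-symptom selection probability.
    if is_high_symptom:
        ascertainment = p_select_control / p_select_high
    else:
        ascertainment = 1.0
    # Attrition: families unlikely to return are up-weighted.
    attrition = 1.0 / p_retained
    return ascertainment * attrition

# A high-symptom family sampled 10x as often as controls, with an
# 80% predicted retention probability:
w = sampling_weight(True, p_select_high=0.5, p_select_control=0.05,
                    p_retained=0.8)
```

Weights of this form allow sample statistics to approximate population values despite the extremes design.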

Facial Expression Recognition Task

This task comprised 160 trials. On each trial, a facial image morphed from a neutral expression into one of five expressions (angry, fear, sad, disgust, and happy) with equal frequency (32 trials of each emotion) (Figure 1). Half the images were of a female face and half of a male face. All facial expressions were taken from a standard set of pictures of facial affect. Subjects were instructed to name each facial expression using one of five labels corresponding to the different emotions. Facial expression morphs were displayed as animations changing from the neutral expression (0%) to one of four levels of intensity (25%, 50%, 75%, or 100%). As happy expressions are generally easier to identify, their intensity levels were adjusted accordingly to 10%, 25%, 50%, or 75%. Head orientation (facing the camera, side on to the camera) and eye gaze direction (toward the camera, away from the camera) of faces were also manipulated to create 4 different trial types: head facing–eyes toward; head facing–eyes away; head side–eyes toward; and head side–eyes away. Thus, all combinations of each individual posing 5 facial expressions at 4 levels of intensity and as 4 trial types resulted in 160 randomly ordered trials. As expected, trials in which the head faced the subject yielded greater accuracy relative to when the head was presented from a side angle (F(1,247) = 8.28, p < .01). Similarly, greater accuracy characterized trials in which the eyes appeared toward rather than away from the subject (F(1,247) = 22.93, p < .001). However, in both comparisons, effect sizes were small: Cohen’s d = 0.11 and 0.16, respectively.
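The fully crossed trial structure described above (2 posers × 5 expressions × 4 intensities × 4 head/eye conditions = 160 trials) can be sketched as follows. Condition labels are illustrative stand-ins, not the task's actual stimulus codes.

```python
# Sketch of the trial structure: 2 posers x 5 expressions x
# 4 intensities x 4 head/eye trial types = 160 trials, presented in
# random order. Intensity endpoints are shifted downward for happy
# faces, as in the task. Labels are illustrative only.
from itertools import product

POSERS = ["female", "male"]
EMOTIONS = ["angry", "fear", "sad", "disgust", "happy"]
TRIAL_TYPES = [("facing", "toward"), ("facing", "away"),
               ("side", "toward"), ("side", "away")]

def intensities(emotion):
    # Happy expressions are easier, so their morph endpoints are lower.
    return (10, 25, 50, 75) if emotion == "happy" else (25, 50, 75, 100)

trials = [
    {"poser": poser, "emotion": emo, "intensity": level,
     "head": head, "eyes": eyes}
    for poser, emo in product(POSERS, EMOTIONS)
    for level in intensities(emo)
    for head, eyes in TRIAL_TYPES
]
# random.shuffle(trials) would then randomize presentation order.
```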

FIGURE 1
Facial expression recognition task showing the morphing sequence from a neutral to angry face across the 4 intensity levels (25%, 50%, 75%, and 100%). Each sequence is presented as one of 4 trial types that manipulate head orientation (facing versus side) ...

Prior to commencing the task, subjects were first read standardized instructions. To check that subjects understood the meaning of the five emotions, they were asked to provide a definition of each emotion. This check revealed that all subjects were familiar with the emotion labels. Next, subjects received five practice trials in which they were presented with the faces of two individuals (one male and one female) not used elsewhere in the experiment, demonstrating the five expressions animated from neutral to full intensity (100% for negative expressions and 75% for happy expressions). As with test trials, subjects indicated the label corresponding with the expression.

Accuracy scores summed across all test trials for each expression (angry, fear, sad, disgust, happy) comprised the dependent variables.

Statistical Analyses

We performed three sets of analyses: descriptive, univariate, and multivariate analyses, using a model-fitting approach implemented in the structural equation modeling package Mx. This package was used because of its capacity to incorporate sampling weights to account for selection and attrition biases as well as to control for the non-independence of data incurred by twin designs. The goal of model-fitting is to estimate parameters specified by a model (e.g., means or variance components) that best fit the observed data. Discrepancy between estimated parameters and observed data is indexed by the chi-square statistic (χ2), Akaike’s Information Criterion (AIC), and the Root Mean Square Error of Approximation (RMSEA). Lower χ2 values, more negative AIC values, and RMSEA values below 0.10 generally indicate good fit and parsimony (models with fewest estimated parameters). Comparing fit statistics across models with different assumptions and selecting the model of best fit permits testing competing explanations of dependent variables. All dependent variables showed normal distributions. Mean age and sex effects were regressed from dependent variables for univariate and multivariate analyses.
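The fit indices above can be made concrete with their conventional definitions. This is a sketch using the standard formulas (AIC as χ2 minus twice the degrees of freedom, as reported in Mx output, and RMSEA computed from χ2, degrees of freedom, and sample size), not a reproduction of the study's actual computations.

```python
# Conventional definitions of the fit indices described above:
# AIC = chi2 - 2*df (more negative = better balance of fit and
# parsimony); RMSEA < 0.10 suggests adequate approximate fit.
import math

def aic(chi2: float, df: int) -> float:
    """Akaike's Information Criterion as reported by Mx."""
    return chi2 - 2 * df

def rmsea(chi2: float, df: int, n: int) -> float:
    """Root Mean Square Error of Approximation.

    Equals sqrt(max(chi2 - df, 0) / (df * (n - 1))); zero when the
    model fits at least as well as expected by chance.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
```

For example, a model with χ2 = 10 on 6 degrees of freedom in a sample of 101 yields a negative AIC and an RMSEA well below 0.10, indicating acceptable fit.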

Descriptive analyses used saturated models that estimate the variance, covariance, and means of all variables to examine within-group differences in recognition of each emotion; sex and zygosity differences in recognition of each emotion; equality of variance across sex; and inter-correlations across facial expressions. In brief, differences in means across variables (e.g., recognition of angry faces versus sad faces) or across groups (recognition of angry faces in males versus females) can be tested by comparing the fit of models that allow means to differ across variables or across groups and models that equate means across variables or across groups. Significant differences in fit between models imply significant differences between variables or between groups, respectively. Similar principles extend to testing equality of variance across groups. Inter-correlations among facial expressions are derived from covariance matrices of saturated models.

Univariate models were used to estimate genetic and environmental effects on each facial expression variable by comparing within-pair similarity (twin correlations) among monozygotic (MZ) twins, who share 100% of their genetic makeup, and dizygotic (DZ) twins, who share on average 50% of segregating genes. Higher MZ compared to DZ resemblance is attributed to increased genetic similarity among MZ twins and is used to estimate additive (a2) and dominant genetic (d2) influences. Additive genetic influences show simple additive effects, such that two copies of a risk allele confer twice as much risk as possessing only one copy. Dominant genetic effects refer to interactions between alleles at the same or different genetic loci, such that the effect of one allele depends on that of another. Within-pair similarity not due to genetic factors is assigned as shared environmental variance (c2), contributing toward resemblance among individuals in the same family. Finally, non-shared environmental influences (e2) create differences among individuals from the same family and are estimated as one minus the MZ twin correlation. This term also includes measurement error. As there are only three observed statistics (variance in the phenotype, and correlations for MZ and DZ twins), only three of these four parameters can be estimated at any one time. Thus either dominant genetic (d2) or shared environmental (c2) influences, but not both, are estimated. Because dominant genetic effects tend to increase the similarity of MZ twins relative to DZ twins, an ADE model is typically estimated if DZ twin correlations on a measure are less than half those of MZ twins; otherwise, an ACE model is estimated. Sex differences in the size of genetic and environmental parameters were specified in the current univariate models.
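The logic above can be illustrated with the classical closed-form (Falconer-style) estimates that motivate the ACE versus ADE choice. The actual analyses used full maximum-likelihood model-fitting in Mx; these formulas are a simplified sketch of the same decomposition.

```python
# Closed-form twin-correlation estimates illustrating the ACE/ADE
# decomposition described above. Expected covariances: MZ = a2 + c2
# (or a2 + d2), DZ = a2/2 + c2 (or a2/2 + d2/4). Solving these gives
# the formulas below. Illustrative only; real estimates come from
# maximum-likelihood model-fitting.

def ace_estimates(r_mz: float, r_dz: float):
    """ACE decomposition, appropriate when r_dz >= r_mz / 2."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance
    c2 = 2 * r_dz - r_mz     # shared environmental variance
    e2 = 1 - r_mz            # non-shared environment + measurement error
    return a2, c2, e2

def ade_estimates(r_mz: float, r_dz: float):
    """ADE decomposition, used when r_dz < r_mz / 2."""
    a2 = 4 * r_dz - r_mz     # additive genetic variance
    d2 = 2 * r_mz - 4 * r_dz # dominant genetic variance
    e2 = 1 - r_mz            # non-shared environment + measurement error
    return a2, d2, e2

def choose_model(r_mz: float, r_dz: float) -> str:
    """DZ correlation below half the MZ correlation suggests dominance."""
    return "ADE" if r_dz < r_mz / 2 else "ACE"
```

For example, r_MZ = 0.5 with r_DZ = 0.3 implies an ACE model (a2 = 0.4, c2 = 0.1, e2 = 0.5), whereas r_MZ = 0.5 with r_DZ = 0.15 implies an ADE model.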

Competing explanations of genetic and environmental contributions to the covariance across FER were evaluated by testing and comparing three multivariate models in terms of fit statistics (Neale & Maes, 2001; Figures 2a–c). Parameters in multivariate models are estimated from cross-twin cross-measure covariance matrices. The first model was the correlated factors solution of a Cholesky decomposition (Figure 2a), which assumes distinct sets of genetic and environmental factors (A1 to A5, C1 to C5, D1 to D5, E1 to E5) influencing each variable (through the paths a1 to a5, c1 to c2, d3 to d5, e1 to e5). The correlations between these latent factors, represented by the double-headed arrows between them (ra, rc, rd, and re), are assumed to explain shared variance across the different measures of facial expression recognition. Twin correlations from univariate analyses determined whether shared environmental or dominant genetic paths were estimated for each facial expression. As such, shared variance between variables that differed in their patterns of shared environment versus dominant genetic effects was explained by additive genetic and non-shared environmental effects only.

FIGURE 2FIGURE 2FIGURE 2
FIGURE 2a Correlated factors solution of a Cholesky decomposition assumes distinct genetic and environmental influences on each facial expression recognition variable. However, correlations between genetic and environmental influences explain the shared ...

The independent pathway model (Figure 2b) assumes two sets of genetic and environmental influences. Common additive and dominant genetic and shared and non-shared environmental factors (Ac, Cc, Dc, Ec) contribute to all facial expression variables (through paths ac1 to ac5, cc1 to cc2, dc3 to dc5, ec1 to ec5), explaining their shared variance. Specific genetic and environmental factors (As1 to As5, Cs1 to Cs2, Ds3 to Ds5, Es1 to Es5), distinct to each facial expression variable, explain their unique variance (through paths as1 to as5, cs1 to cs2, ds3 to ds5, es1 to es5). Univariate analyses determined whether shared environmental or dominant genetic paths were modeled for each facial expression.

The common pathway model (Figure 2c) also assumes two sets of genetic and environmental influences. However, a psychometric factor (L) is proposed to mediate common genetic and environmental influences (Ac and Ec) on the different facial expression variables through factor loadings (f1 to f5). As we could not simultaneously estimate common dominant genetic or shared environmental effects on the latent factor due to under-identification, only additive genetic and non-shared environmental effects were specified. The psychometric factor is similar to factors derived through phenotypic factor analyses, representing a higher-order construct such as general emotion recognition ability. Similar to the independent pathway model, specific genetic and environmental influences (As1 to As5, Cs1 to Cs2, Ds3 to Ds5, Es1 to Es5) account for unique variance on each variable (through the paths as1 to as5, cs1 to cs2, ds3 to ds5, es1 to es5).

Sex differences established in descriptive or univariate analyses were incorporated into all multivariate models.

RESULTS

Descriptive Analyses

Table 1 presents the means and standard deviations, across the whole sample and separately for males and females, of correct identification of each facial emotion. Full details on the comparison of sub-models used for these analyses are available on request. There was a significant main effect of emotion (Δχ2 (4) = 82.46, p < .001). Highest accuracy was observed for happy faces, followed by fear faces. There was no significant difference between recognition of angry and disgust faces, but both showed higher accuracy scores than sad faces. Females were significantly better at recognizing each individual emotion: angry (Δχ2 (1) = 10.97, p < .001), fear (Δχ2 (1) = 9.819, p < .005), sad (Δχ2 (1) = 13.24, p < .001), disgust (Δχ2 (1) = 27.52, p < .001), and happy (Δχ2 (1) = 9.103, p < .01), although effect sizes for these differences were small to medium (Cohen’s d = .19 to .38). No differences in accuracy emerged between MZ and DZ twins for any of the facial emotions. Differences in variance between males and females characterized fear and disgust faces, with males showing greater variability. Moderate correlations in the number of correct responses across facial expressions were found (Table 2).

TABLE 1
Means and Standard Deviations of Correct Recognition Responses to Each Facial Expression Across 32 Trials are Presented for the Whole Sample, and Separately for Males and Females, With Effect Sizes of Sex Differences
TABLE 2
Inter-Correlations Across Recognition Responses to Different Facial Expressions

Univariate Analyses

MZ and DZ twin correlations are presented in Table 3. An ACE model was fit to data on angry and sad faces, whereas the pattern of twin correlations for fear, disgust, and happy faces justified fitting an ADE model. Fit statistics and parameter estimates from univariate genetic models are also presented in Table 3. Of note, parameter estimates with 95% confidence intervals that overlap with 0 are considered non-significant. As there were no differences between males and females in the size of genetic and environmental influences on any of the facial expressions, parameters are reported for the whole sample. For fear and disgust faces, parameter estimates are derived from a model that included variance differences between males and females. Given the small sample size and restricted power, only fear faces showed significant (dominant) genetic effects. Nevertheless, the effect size of this estimated genetic parameter was largely similar to those for disgust and happy faces, as indexed by partially overlapping confidence intervals. Non-shared environmental effects were substantial on all facial expression recognition variables.

TABLE 3
Fit Statistics and Parameter Estimates with 95% Confidence Intervals of Univariate Models for Each Facial Expression

Multivariate Analyses

Fit statistics derived from testing the three multivariate models showed the common pathway model to present the best fit to the data. Parameter estimates for this model are presented in Table 4. In this model (Figure 2c), variance in the recognition of each facial expression is explained by two sets of factors: (i) a psychometric factor (L) that mediates common genetic and non-shared environmental influences (Ac and Ec) through factor loadings (f1 to f5) and (ii) specific genetic (additive and/or dominant) and environmental (shared and/or non-shared) effects (as1 to as5, cs1 to cs2, ds3 to ds5, es1 to es5). As such, the squared factor loadings and squared specific-factor paths sum to the total variance of each variable, which is 1 because each variable was standardized prior to model-fitting. On the basis of the estimated parameters, Figure 3 illustrates all significant paths, with the width of each path proportional to the strength of its parameter estimate. Two sets of findings can be drawn from these estimates.

FIGURE 3
The common pathway model with only significant paths included. The width of the path is proportional to the strength of the path parameter as recorded in Table 3.
TABLE 4
Fit statistics of multivariate models and parameter estimates of the model with the lowest fit statistics, the common pathway model, are presented with 95% confidence intervals. Confidence intervals overlapping with zero indicate non-significant parameters ...

First, the variance of the common psychometric factor is driven primarily by large and significant genetic influences (0.75), with a more modest but significant role for non-shared environmental influences (0.25). The contribution of common genetically mediated effects to each facial expression can be calculated by multiplying 0.75 by the squared factor loading. Accordingly, these accounted for 18%, 19%, 17%, 17%, and 20% of the total variance of angry, sad, fear, disgust, and happy faces, respectively. Similarly, multiplying 0.25 by the squared factor loadings produces the effects of non-shared environmentally mediated factors common to all emotions, which explain 6%–7% of the variance on these variables.
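The arithmetic above can be checked with a short sketch. The factor loading of 0.49 used here is an assumed illustrative value chosen to reproduce the ~18% genetic and ~6% environmental contributions quoted in the text; the actual loadings are reported in Table 4.

```python
# Back-of-envelope check of the common pathway arithmetic: 75% of the
# latent factor's variance is genetic and 25% non-shared environmental,
# so each expression's common contributions are these shares times its
# squared factor loading. The loading 0.49 is an assumed value.

A_COMMON = 0.75   # genetic share of the latent factor's variance
E_COMMON = 0.25   # non-shared environmental share

def common_contributions(factor_loading: float):
    """Variance in one expression explained via the latent factor."""
    f2 = factor_loading ** 2
    return A_COMMON * f2, E_COMMON * f2

genetic, environmental = common_contributions(0.49)
# genetic is ~0.18 and environmental ~0.06, matching the ranges
# reported for the five expressions.
```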

Second, there were no significant effects of the specific genetic factors, indicating that the discrimination of different facial expressions was mostly due to individual-specific environmental experiences. Squaring the parameter estimates in Table 4 indicates that these contributed 64%, 69%, 53%, 53%, and 58% to the variance of angry, sad, fear, disgust, and happy faces, respectively. It should be noted that although substantial, these contributions also include any measurement error present.

DISCUSSION

The present study assessed genetic and environmental influences on children’s recognition of different facial expressions. Two main sets of findings emerged, pertaining to the nature of genetic and environmental influences on FER, respectively. Univariate analyses revealed significant genetic effects only for fear faces. In comparison, multivariate analyses indicated significant common genetic effects across all facial expressions. These genetic effects were mediated through a higher-order psychometric factor explaining most of the covariance across recognition of all facial expressions. The remainder of the covariance was explained by smaller but significant non-shared environmental effects. In contrast, identifying specific facial expressions was due largely to non-shared environmental effects with some modest but non-significant genetic effects. We also replicated previous findings of group differences, finding better recognition of all facial expressions among females, with effect sizes consistent with those of prior studies (see McClure, 2000 for a meta-analytic review of these studies). Notably, sex differences in mean accuracy scores did not emerge in the context of sex differences in the size of genetic and environmental influences on FER variables. Greatest accuracy for happy faces across the whole sample was also consistent with prior findings.

Although these results offer preliminary insights into the role of nature and nurture on general recognition of and discrimination between facial expressions, conclusions are restricted by two sets of caveats. The first set pertains to the sample used. A relatively small sample size and reduced power meant wide confidence intervals, non-significant parameter estimates, and possibly undetected sex differences. Moreover, despite our use of statistical controls, the over-sampling of anxious subjects raises questions about the generalization of results to non-anxious populations.

An additional set of caveats relates to the modeling procedures applied to the current data. First, psychometric problems could influence the size and patterns of twin correlations, which in turn affect the decision to estimate ACE or ADE models (given that only shared environment, C, or dominant genetic influences, D, can be estimated at any one time). This calls into question how far differences in the sources of resemblance (i.e., shared environment versus dominant genetic effects) across FER variables should be interpreted. Second, as we chose to estimate C on some FER variables and D on others, this may under-estimate common genetic contributions to the shared variance of facial expressions that differ in their sources of resemblance (e.g., angry and fear faces). In the present multivariate models, because of these differences, shared variance between angry and fear faces can only be explained by factors that both have in common (i.e., additive genetic and non-shared environmental influences), whereas factors that affect one emotion but not the other (i.e., dominant genetic and shared environmental influences) contribute to their specific variance only. As only additive genetic factors account for shared variance, overall common genetic variance between facial expressions is reduced. In the common pathway model, this reduction will be manifested as smaller additive genetic effects on the higher-order psychometric factor. Third, there were only small differences in fit between the common and independent pathway multivariate models. Conventional model-fitting procedures warranted choosing the common pathway model, but in the absence of statistical tests that definitively compare model fits, one could argue that the independent pathway model fit the data equally well. Finally, the degree to which measurement error explains “non-shared environmental” contributions is not known. Caution in interpreting individual-specific environmental effects is thus needed.

Notwithstanding these caveats, our data provide a novel angle for understanding individual differences in recognizing facial expressions. As deficits in this skill characterize a range of medical conditions (e.g., Williams syndrome), psychopathologies (e.g., internalizing and externalizing problems), and individuals exposed to social adversity (e.g., maltreatment), several exciting implications can be drawn. Perhaps most intriguing is that the general ability to recognize facial expressions is influenced by genetic factors. Consistent with this, individuals with Williams syndrome (WS), a genetic disorder that arises from the deletion of a section of chromosome 7, show poorer FER (Gagliardi et al., 2003), anomalies that have been attributed to difficulties in configural processing associated with low IQ. Presumably, genetic factors associated with WS alter neural function underlying these higher-order abilities, compromising FER. Whether different genetic pathways characterize FER abilities in typically developing children is unknown. Drawing on the neurobehavioral maturation model, inherited differences in the functioning of “emotional” neural circuitry, such as the amygdala, could explain variation in processing valenced facial expressions. Indeed, several sources implicate the amygdala in FER-linked abilities during development (McClure et al., 2007) and report associations between genetic polymorphisms and amygdala function (Munafo et al., 2007).

Our finding of non-shared environmental contributions to recognition of different facial expressions is also interesting given data on the effects of adverse environments (e.g., maltreatment) on recognition of specific facial expressions. Not only are physically abused children biased toward over-identification of angry faces (Pollak, Cicchetti, Hornung, & Reed, 2000; Pollak & Kistler, 2002), but they also show greater accuracy in recognizing this facial expression (Pollak & Sinha, 2002). Although abuse is undoubtedly an extreme source of environmental risk, the normal range of child-specific daily events may also enhance (or attenuate) learning of specific emotions. These events may allow children to learn associations between the affective salience and meaning of events and specific facial emotions, providing the foundation for later recognition. For instance, a child who has experienced the death of a pet may learn to associate their own and others' reactions to this event (through introspection and observation, respectively) with the expression of sadness. This acquired association then shapes their ability to recognize sadness. Such examples could conceivably apply to other emotions.

Finally, we also extended findings on sex differences in FER ability. Although we replicated previous findings of a female advantage in recognizing individual facial emotions (McClure, 2000), this advantage was not accompanied by sex differences in the magnitude of the genetic and environmental influences contributing to variance in recognition scores. One account of these findings is that sex-specific factors do not cause individual differences in FER ability within either males or females, but instead have a constant effect on all females relative to males. This constant effect can be visualized as shifting the population mean of female test scores on FER measures "upwards" relative to the distribution of male test scores. Speculatively, such factors could include sex-linked biological maturation, such as hormonal surges that influence amygdala function, or socialization experiences, such as gender-specific rearing patterns that reinforce and maintain non-verbal skills in females only (McClure, 2000).

In summary, these findings highlight the role of both biological and social factors in the development of FER in children and adolescents. If replicated, these findings imply a pattern of significant common genetic effects on general recognition of facial expressions, with child-specific environmental variables contributing to recognition of different expressions. Thus, these results concur with integrative approaches to FER (e.g., McClure, 2000) that posit an interplay between developmentally driven brain maturation processes (under the influence of genetic variables) and external experiences of social referencing, learning, and modeling in shaping FER abilities. How genetic influences are expressed and which environmental factors are involved will be the next questions to consider.

Contributor Information

Jennifer Y. F. Lau, Department of Experimental Psychology, University of Oxford, U.K., Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, King’s College London, U.K.

Michael Burt, Department of Psychology, Durham University, U.K.

Ellen Leibenluft, Mood and Anxiety Program, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland.

Daniel S. Pine, Mood and Anxiety Program, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland.

Fruhling Rijsdijk, Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, King’s College London, U.K.

Nina Shiffrin, Mood and Anxiety Program, National Institute of Mental Health, National Institutes of Health, Bethesda, Maryland.

Thalia C. Eley, Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, King’s College London, U.K.

References

  • Campbell R, Walker J, Baron-Cohen S. The development of differential use of inner and outer face features in familiar face identification. Journal of Experimental Child Psychology. 1995;59:196–210.
  • Camras L, Allison K. Children's understanding of emotional facial expressions and verbal labels. Journal of Nonverbal Behavior. 1985;9(2):84–94.
  • Emery NJ. The eyes have it: The neuroethology, function and evolution of social gaze. Neuroscience and Biobehavioral Reviews. 2000;24:581–604.
  • Felsenfeld S, Kirk KM, Zhu G, Statham D, Neale M, Martin NG. A study of the genetic and environmental etiology of stuttering in a selected twin sample. Behavior Genetics. 2000;30:359–366.
  • Frigerio E, Burt M, Montagne B, Murray L, Perrett D. Facial affect perception in alcoholics. Psychiatry Research. 2002;113:161–171.
  • Gagliardi C, Frigerio E, Burt M, Cazzaniga I, Perrett D, Borgatti R. Facial expression recognition in Williams syndrome. Neuropsychologia. 2003;41:733–738.
  • Gregory AM, Eley TC, Rijsdijk FV. A twin-study of sleep difficulties in school-aged children. Child Development. 2006;77:1668–1679.
  • Heim C, Nemeroff C. The role of childhood trauma in the neurobiology of mood and anxiety disorders: Pre-clinical and clinical studies. Biological Psychiatry. 2001;49:1023–1039.
  • Herba C, Phillips M. Annotation: Development of facial expression recognition from childhood to adolescence: Behavioral and neurological perspectives. Journal of Child Psychology and Psychiatry. 2004;45:1185–1198.
  • Izard C, Harris P. Emotional development and developmental psychopathology. In: Cichetti D, Cohen D, editors. Developmental psychopathology. Oxford, England: John Wiley & Sons; 1995. pp. 467–503.
  • Lau J, Gregory A, Goldwin M, Pine D, Eley T. Assessing gene–environment interactions on anxiety symptom subtypes across childhood and adolescence. Development and Psychopathology. 2007;19:1129–1146.
  • Meadows S. Parenting behaviour and children's cognitive development. Hove, England: Psychology Press; 1996.
  • McClure EB. A meta-analytic review of sex differences in facial expression processing and their development in infants, children, and adolescents. Psychological Bulletin. 2000;126:424–453.
  • McClure EB, Monk CS, Nelson EE, Parrish JM, Adler A, Blair RJR, et al. Abnormal attention modulation of fear circuit function in pediatric generalized anxiety disorder. Archives of General Psychiatry. 2007;64:97–106.
  • Munafò MR, Brown SM, Hariri AR. Serotonin transporter (5-HTTLPR) genotype and amygdala activation: A meta-analysis. Biological Psychiatry. 2007;63:852–857.
  • Neale MC. Mx: Statistical modeling (Version 4.0) [Computer software]. Richmond, VA: University of Virginia, Department of Psychiatry; 1997.
  • Neale MC, Maes HH. Methodology for genetic studies of twins and families. Dordrecht, The Netherlands: Kluwer Academic Publishers B.V.; 2001.
  • Nelson CA, de Haan M. Neural correlates of infants' visual responsiveness to facial expressions of emotion. Developmental Psychobiology. 1996;29:577–595.
  • Philippot P, Feldman RS. Age and social competence in preschoolers' decoding of facial expression. British Journal of Social Psychology. 1990;29:43–54.
  • Pollak SD, Kistler DJ. Early experience is associated with the development of categorical representations for facial expressions of emotion. Proceedings of the National Academy of Sciences, USA. 2002;99:9072–9076.
  • Pollak SD, Sinha P. Effects of early experience on children's recognition of facial displays of emotion. Developmental Psychology. 2002;38:784–791.
  • Pollak SD, Cicchetti D, Hornung K, Reed A. Recognizing emotion in faces: Developmental effects of child abuse and neglect. Developmental Psychology. 2000;36:679–688.
  • Rutter M. Genetic influences on risk and protection: Implications for understanding resilience. In: Luthar S, editor. Resilience and vulnerability: Adaptation in the context of childhood adversities. New York: Cambridge University Press; 2003. pp. 489–509.
  • Thomas KM, Drevets WC, Whalen PJ, Eccard CH, Dahl RE, Ryan ND, Casey BJ. Amygdala response to facial expressions in children and adults. Biological Psychiatry. 2001;49:309–316.
  • Trouton A, Spinath FM, Plomin R. Twins Early Development Study (TEDS): A multivariate, longitudinal genetic investigation of language, cognition and behavior problems in childhood. Twin Research. 2002;5:444–448.