J Res Pers. Author manuscript; available in PMC 2012 Jun 1.
Published in final edited form as:
J Res Pers. 2011 Jun 1; 45(3): 259–268.
doi: 10.1016/j.jrp.2011.02.004
PMCID: PMC3105910

A Meta-Analysis of the Convergent Validity of Self-Control Measures


There is extraordinary diversity in how the construct of self-control is operationalized in research studies. We meta-analytically examined evidence of convergent validity among executive function, delay of gratification, and self- and informant-report questionnaire measures of self-control. Overall, measures demonstrated moderate convergence (rrandom = .27 [95% CI = .24, .30]; rfixed = .34 [.33, .35], k = 282 samples, N = 33,564 participants), although there was substantial heterogeneity in the observed correlations. Correlations within and across types of self-control measures were strongest for informant-report questionnaires and weakest for executive function tasks. Questionnaires assessing sensation seeking impulses could be distinguished from questionnaires assessing processes of impulse regulation. We conclude that self-control is a coherent but multidimensional construct best assessed using multiple methods.

Keywords: Self-Control, Self-Regulation, Impulsivity, Multi-Method Measurement, Convergent Validity, Meta-analysis

The construct of self-control has attracted substantial attention from psychologists working within a variety of theoretical and methodological frameworks. At present, more than 3% of all publications are indexed in the PsycInfo database by the keywords self-control, impulsivity, or related terms.1 However, operational definitions of self-control vary widely, raising the question: do these varied measures tap the same underlying construct? For instance, does the Eysenck Impulsiveness Questionnaire (Eysenck, Easting, & Pearson, 1984) assess the same trait as the preschool delay of gratification task (Mischel, Shoda, & Rodriguez, 1989)? Do these measures tap the same underlying construct as the Stroop (Wallace & Baumeister, 2002) or go/no-go (Eigsti et al., 2006) executive function tasks?

Evidence of convergent validity (i.e., substantial and significant correlations between different instruments designed to assess a common construct) is a “minimal and basic requirement” for the validity of any psychological test (Fiske, 1971, p. 164). Unfortunately, the rather “modest requirement” of convergent validity in psychological measurement is often assumed rather than tested directly (Fiske, 1971, p. 164). In the current investigation, we meta-analytically synthesized evidence from 282 multi-method samples to examine the convergent validity of self-control measures.

Defining Self-Control

Several authors have noted the challenge of defining and measuring self-control (also referred to as self-regulation, self-discipline, willpower, effortful control, ego strength, and inhibitory control, among other terms) and its converse, impulsivity or impulsiveness (e.g., Depue & Collins, 1999; Evenden, 1999; White et al., 1994; Whiteside & Lynam, 2001). As an illustration of the diversity of measures that have been used to assess self-control, consider the following: refraining from pushing a button when a non-target stimulus appears on a computer screen, matching two geometric patterns from a selection of highly similar patterns, choosing between $1 today and $2 one week later, refraining from immediately eating a single marshmallow in order to obtain two marshmallows later, and responding to questionnaire items such as “Do you often long for excitement?” or “I make my mind up quickly.” Given the rather extraordinary range of measures used, one might expect a lively interdisciplinary debate in the self-control literature as to whether these measures are, in fact, tapping the same underlying construct. Instead, self-control researchers tend to read and cite the work of others working in their same methodological tradition: “Unfortunately, with a few exceptions, researchers interested in the personality trait of impulsivity, in the experimental analysis of impulsive behavior, in psychiatric studies of impulsivity or in the neurobiology of impulsivity form largely independent schools, who rarely cite one another’s work, and consequently rarely gain any insight into their own work from the progress made by others” (Evenden, 1999, pp. 348-349).

What do diverse measures of self-control have in common? We suggest that the common conceptual thread running through varied operationalizations of self-control is the idea of voluntary self-governance in the service of personally valued goals and standards. This idea is captured concisely by Baumeister, Vohs, and Tice (2007): “Self-control is the capacity for altering one’s own responses, especially to bring them into line with standards such as ideals, values, morals, and social expectations, and to support the pursuit of long-term goals” (p. 351). Tasks and questionnaire items that attempt to measure self-control implicitly or explicitly posit a plurality of mutually exclusive responses (e.g., if I eat my cake now, I cannot save it, too). One response is recognized by the individual as superior insofar as it is aligned with their long-term goals and standards (saving the cake for after dinner will make me happier in the long run), but an alternative response (eating the cake now) is more gratifying or automatic in the short-term. In such situations, self-controlled individuals tend to choose the superior response, whereas impulsive individuals tend to choose the immediately gratifying or automatic response.

Our conceptualization of self-control emphasizes “top-down” processes that inhibit or obviate impulses, and thus implicitly assumes “bottom-up” psychological processes that generate these impulses. While individuals surely vary in what they find tempting (Tsukayama, Duckworth, & Kim, submitted for publication), given that adults and children across cultures reliably rate themselves lower in self-control than in any other character strength (Peterson, 2006), it seems reasonable to assume that almost everyone is tempted by something. That is, while attraction to various vices may vary across individuals, we agree with Oscar Wilde (1912/2009) that for each of us “there are terrible temptations which it requires strength, strength and courage, to yield to” (Second Act, Line 42).

Measurement Traditions in the Study of Self-Control

Our review of the self-control literature revealed four distinct approaches to the measurement of self-control: executive function tasks, delay of gratification tasks, self-report questionnaires, and informant-report questionnaires. Arguably, each of these approaches assesses voluntary self-governance in the service of goals or standards. Still, diversity both within and across these types of measures is striking. Because of their distinct histories, we review the four measurement traditions separately below.

Executive function tasks

Executive function refers to goal-directed, higher-level cognitive processing in which top-down control is exerted over lower-level cognitive processes (Williams & Thayer, 2009). Emerging first in the neuropsychology literature, executive function is a relatively new construct (Burgess, 1997) that, like the construct of self-control, continues to be inconsistently defined and measured (Banfield, Wyland, Macrae, Munte, & Heatherton, 2004; Miller, 2000). Behavioral tasks designed to assess executive function have been used to assess individual differences in self-control (e.g., Eigsti et al., 2006; White et al., 1994), the presence of clinical levels of impulsivity (Baker, Taylor, & Leyva, 1995), the effect of self-control interventions (Diamond, Barnett, Thomas, & Munro, 2007), and experimental manipulations aimed at taxing self-control (Hagger, Wood, Stiff, & Chatzisarantis, 2010).

There is growing evidence that executive function is not unitary in nature, but rather a collection of distinct processes associated with the frontal lobes, including working memory, attention, response inhibition, and task switching (Kramer, Humphrey, Larish, & Logan, 1994; Miller, 2000; Miyake, Friedman, Emerson, Witzki, & Howerter, 2000). Because any single executive function task tends to assess a plurality of these cognitive processes (Burgess, 1997; Zaparniuk & Taylor, 1997), it was not feasible to organize executive function tasks according to any of the proposed taxonomies of executive function. As an alternative, we noted that authors often explicitly referred to measures as belonging to one of 12 subtypes of executive function task (e.g., sun-moon Stroop, color-word Stroop, counting Stroop) and categorized measures accordingly (see Table 1).

Table 1
Executive function task subtypes, with frequency and convergence with other types of self-control measures

Delay of gratification tasks

Whereas executive function tasks have their roots in the neuropsychology literature and the study of neurological impairment, delay of gratification tasks were first developed to understand normative, age-related changes in child development. The ability to delay the discharge of impulses figured prominently in Freud’s (1922) psychoanalytic theory of ego development. Early attempts to operationalize the capacity to delay gratification for the sake of long-term gain included coding images of humans in action from responses to Rorschach ink blots (Singer, 1955). Such projective measures of delay of gratification generally demonstrated poor reliability and validity and were supplanted by more direct measures developed by Walter Mischel in the 1960s. Performance in delay tasks has been shown to predict academic achievement (Mischel et al., 1989; Evans & Rosenbaum, 2008), drug use (Kirby, Petry, & Bickel, 1999), and aggressive and delinquent behavior (Krueger, Caspi, Moffitt, White, & Stouthamer-Loeber, 1996).

Mischel’s research included three subtypes of delay tasks (see Table 2). In a hypothetical choice delay task, subjects make a series of choices between smaller, immediate rewards and larger, delayed rewards, most or none of which they expect to actually receive. For instance, children answer questionnaire items such as, “I would rather get ten dollars right now than have to wait a whole month and get thirty dollars then” (Mischel, 1961, p. 3). More recently, similar questionnaires have been used to calculate a discount rate for each individual that relates the subjective value of a delayed reward to the delay required to receive it (e.g., Green, Fry, & Myerson, 1994; Kirby et al., 1999). In a real choice delay task, subjects make an actual (i.e., not hypothetical) choice between a small, immediate reward and a larger, delayed reward (e.g., Mischel, 1958; Duckworth & Seligman, 2005). This decision happens at a single point in time, after which the decision cannot be revoked. The third subtype, the sustained delay task, differs from hypothetical and real choice tasks in that subjects first choose the preferred delayed reward, which is clearly “worth the wait”. Subsequently, the ability to delay gratification is measured as the time subjects can resist the smaller, immediate reward in order to obtain the larger, deferred reward (e.g., Mischel, Shoda, & Rodriguez, 1989; Solnick, Kannenberg, Eckerman, & Waller, 1980).
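The discount rate mentioned above is conventionally estimated under a hyperbolic model in which a reward’s subjective value declines with the delay to its receipt. The sketch below illustrates the underlying arithmetic under that assumption; the function names and the indifference-point example are illustrative, not taken from the reviewed studies.

```python
def present_value(amount, delay_days, k):
    """Hyperbolic discounting: subjective value of a reward of `amount`
    received after `delay_days`, given discount rate k."""
    return amount / (1 + k * delay_days)

def indifference_k(smaller_now, larger_later, delay_days):
    """Discount rate at which the immediate and delayed options are equally
    attractive: smaller_now = larger_later / (1 + k * delay_days)."""
    return (larger_later / smaller_now - 1) / delay_days

# A respondent indifferent between $10 now and $30 in 30 days implies
# k = (30/10 - 1) / 30, roughly 0.067 per day; a larger (steeper) k
# indicates a stronger preference for immediate rewards.
```

Fitting k across many such choices yields a single discount-rate score per individual, which is how questionnaires of this kind relate subjective value to delay.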

Table 2
Delay task subtypes, with frequency and convergence with other types of self-control measures

A fourth subtype of delay task not used by Mischel and colleagues is the repeated trials delay task, in which subjects complete a series of brief trials, on each of which they choose between a smaller, more immediate reward and a larger, delayed reward (e.g., Newman, Kosson, & Patterson, 1992). As in choice delay procedures, subjects receive actual rewards (e.g., nickels or candy) and cannot revoke their decision.

Self- and informant-report personality questionnaires

In individual difference and clinical psychology research, self-control is most often measured by pen-and-paper personality questionnaires completed by the participant or a close informant (e.g., parent). Questionnaire measures of self-control have been shown to predict academic achievement (Duckworth, Tsukayama, & May, 2010), physical health (Tsukayama, Toomey, Faith, & Duckworth, 2010; Moffitt et al., 2011), wealth (Moffitt et al., 2011), juvenile delinquency (Benda, 2005), criminal activity in adulthood (Moffitt et al., 2011), and even longevity (Kern & Friedman, 2008).

Our literature search revealed over 100 unique self- and informant-report questionnaires, most designed as stand-alone measures and a few as subscales of omnibus personality, temperament, or psychopathology inventories. Items on these questionnaires suggested considerable heterogeneity in the underlying constructs assessed. For instance, the Eysenck I7 Impulsiveness Scale includes items about doing and saying things without thinking (Eysenck et al., 1984). The Self-Control Scale (Tangney, Baumeister, & Boone, 2004) casts a wider net, including items about acting “without thinking through all the alternatives,” as well as “resisting temptation,” and “concentrating.” Likewise, the Barratt Impulsiveness Scale Version 11 (BIS-11) includes separate scales for motor impulsiveness, non-planning impulsiveness, and cognitive impulsiveness (Barratt, 1985).

Control of Impulses vs. Generation of Impulses

In an attempt “to bring order to the myriad of measures and conceptions of impulsivity” (p. 684) in the individual difference and clinical psychology literatures, Whiteside and Lynam (2001) administered several previously published self-control questionnaires to a large sample of undergraduates. Item-level factor analyses produced four distinct factors (UPPS) interpreted as “discrete psychological processes that lead to impulsive-like behaviors” (Whiteside & Lynam, 2001; p. 685): Urgency is the inability to override strong impulses (e.g., “I have trouble controlling my impulses”). (Lack of) premeditation is similar to Eysenck’s conception of acting before thinking (e.g., “My thinking is usually careful and purposeful” (reverse-scored)). (Lack of) perseverance refers to the inability to focus on boring or difficult tasks (e.g., “I tend to give up easily”). Finally, sensation seeking refers to an attraction to exciting and risky activities (e.g., “I’ll try anything once”).

Whiteside and Lynam’s (2001) UPPS model has been validated in subsequent studies (e.g., Miller, Flory, Lynam, & Leukefeld, 2003; Whiteside, Lynam, Miller & Reynolds, 2005) but is not the only multidimensional model for self-control. Indeed, at least a dozen different factor structures for self-control (e.g., Barratt, 1985; Buss & Plomin, 1975; Miller, Joseph, & Tudway, 2004; White et al., 1994) have been suggested. One attractive feature of the UPPS is that it situates facets of self-control within the five-factor model of personality, relating urgency to neuroticism, perseverance and planning to conscientiousness, and sensation seeking to extraversion. Also notable is the similarity between the UPPS and Buss and Plomin’s (1975) four-factor model, and at least partial overlap with other proposed factor structures for self-control. Finally, the distinction between sensation seeking impulses and a variety of psychological processes that direct behavior away from those impulses is consistent with dual-system models of self-control (Carver, Johnson, & Joormann, 2009; Eisenberg et al., 2004; Hofmann, Friese, & Strack, 2009; Metcalfe & Mischel, 1999; Steinberg, 2008). While dual-system models vary somewhat in their particulars, they all posit two opponent systems underlying the generation of quick, involuntary, and often consummatory impulses on the one hand, and the control of these impulses by deliberate, volitional processes on the other.

Expectations about Convergent Validity

Our initial, qualitative survey of the self-control literature indicated considerable heterogeneity in the targeted psychological processes and, in addition, in the level of intended description. Some tasks and questionnaire items, it seemed, were designed to assess aggregate self-controlled behavior (i.e., ultimately behaving in accordance with long-term goals and standards at the expense of short-term gratification). For instance, the Self-Control Scale (Tangney et al., 2004) includes the item, “People would say that I have iron self-discipline.” Other tasks and questionnaire items, in contrast, seemed to target the component psychological processes that precede and contribute to self-controlled behavior. In addition to the four processes specified by Whiteside and Lynam’s (2001) UPPS model, we suggest that self-control may be facilitated by accurately weighing long-term and short-term consequences (delay discounting questionnaire; Kirby et al., 1999), following through on a decision to resist immediate gratification (preschool delay of gratification task, Mischel et al., 1989), suppressing habitual or automatic responses that conflict with one’s goals (go/no-go task; Eigsti et al., 2006), and effectively regulating attention in the face of distractors (Attentional Network Task; Rueda et al., 2004).

Heterogeneity in the targeted dimensions of self-control and in the level of description suggests that correlations among diverse self-control measures may be relatively modest. In addition, measurement error, whether from task-specific or random error variance, should further attenuate estimates of convergent validity. A meta-analysis of published studies reporting multi-method, multi-trait matrices of correlations found that more than 60% of the variance in personality measures was accounted for by task-specific and random error variance (Cote & Buckley, 1987).

The Current Study

In the current investigation, we meta-analytically examined evidence for convergent validity among self-control measures from published and unpublished studies that used at least two different measures of self-control. We had several specific goals in our analyses: First, we sought an overall estimate of the convergent validity among executive function, delay of gratification, and questionnaire self-control measures. Second, we examined sources of heterogeneity in convergent validity estimates, including type and subtype of self-control measure. Finally, we examined our meta-analytically derived correlation matrix for support of the UPPS model (Whiteside & Lynam, 2001).


Literature Search

We used two strategies to search the PsycINFO database for published articles and dissertations available by February 2008 that used more than one measure of self-control. First, keyword searches were conducted for self-control related terms and popular self-control measures.2 Second, we identified relevant studies cited by articles identified in this search and from the library of the first author.

Over 7,000 study abstracts were screened, resulting in 1,280 potentially relevant manuscripts. For studies that did not report all correlations among the self-control measures used, we emailed authors to request this information. Of the 542 authors we contacted, 101 authors (18.6%) provided correlation matrices.

Inclusion and Exclusion Criteria

Studies selected for this meta-analysis were written in English and reported at least one bivariate correlation coefficient for two different measures of self-control. We excluded studies that reported correlations of r = 1 or reported only Spearman rank or partial correlations. We also excluded measures that did not meet our broad definitional criteria for self-control (i.e., the self-governance of responses in order to achieve long-term benefit at the expense of short-term gratification). Finally, we excluded correlations between subscores of a common self-control measure (e.g., correlations between error and latency scores from a single executive function task; subscale scores from a single questionnaire) or between two different versions of a common questionnaire.

Moderator Coding Procedure

A total of five trained coders recorded sample characteristics and correlations. Each correlation was coded independently by at least two coders to ensure reliability. Conflicts were resolved by discussion and re-examination of studies in question. In addition to sample sizes and correlation coefficients, the coders recorded the following variables:

Name, type, and subtype of measure

We recorded the name of each measure and classified each according to one of four types: executive function task, delay of gratification task, self-report questionnaire, or informant-report questionnaire. Executive function and delay of gratification subtypes were classified according to the categories in Tables 1 and 2. Questionnaire measures were classified by the name of the scale.


Source of effect size

Each correlation was classified as originating from one of three sources: published articles or book chapters (k = 131), email correspondence with study authors (k = 86), or dissertations (k = 65).


Age

Mean ages for k = 281 samples were divided into the following ordinal categories: 0-5, 6-12, 13-17, 18-21, 22-29, 30-39, 40-49, 50-59, 60-69, and 70+ years. For samples that reported age ranges but not means, we categorized each sample into the age bracket in which most of the sample fell (e.g., age range of 17 to 22 was coded as the 18 to 21 category).


Gender

We recorded the number of male and female participants for every study that reported the relevant descriptive statistics. From these numbers we calculated the percent of females included in the study (k = 247).

Sample type

We recorded whether the study sample represented either non-clinical or clinical/mixed populations. Non-clinical samples (k = 127) were typically convenience samples of non-clinical individuals. Clinical/mixed samples (k = 155) included at least some participants with a psychological disorder or other impairment (e.g., ADHD, learning disorder, behavioral problems, delinquency, anxiety, substance abuse, neurological impairment, incarceration). Samples including participants who had been administered psychoactive medication were excluded.

Data Analyses

We used the Pearson correlation coefficient (r) as the effect size (ES) measure. For the vast majority of included studies, multiple correlation coefficients based on a single sample were reported. We computed synthetic effect sizes by aggregating homogeneous dependent effect sizes within samples. This approach assumes that correlations within a sample are based on measures of a common latent variable and produces accurate, if somewhat conservative, meta-analytic estimates of the effect size (Hedges, 2007).
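One common way to carry out this within-sample aggregation is to average the dependent correlations on the Fisher-z scale and back-transform the mean. The paper cites Hedges (2007) but does not spell out its exact formula, so treat the simple unweighted averaging below as an assumption rather than the authors’ procedure.

```python
import math

def aggregate_within_sample(rs):
    """Collapse dependent correlations from a single sample into one
    synthetic effect size: Fisher-z transform each r, average the z
    values, then back-transform to the r metric."""
    zs = [math.atanh(r) for r in rs]  # atanh is the Fisher z transform
    return math.tanh(sum(zs) / len(zs))
```

Averaging on the z scale rather than the r scale avoids the bias introduced by the bounded, skewed sampling distribution of r, and it is why the resulting sample-level estimates are, as the text notes, slightly conservative.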

An important conceptual question for meta-analyses is whether to assume a fixed or random effects model (cf. Field, 2001; Hunter & Schmidt, 2004; Schulze, 2007). The fixed effects model provides more precise and reliable estimates but cannot be generalized to broader populations (Cooper, 1998). The random effects model allows the amount of variance between and within studies to be considered, but has statistical disadvantages when the number of observations for any particular analysis is small (Hunter & Schmidt, 2004). We report both fixed and random effects analyses for the overall analysis and, because of reduced sample size, fixed effects estimates only for moderator analyses. Since the weights used in the aggregation of correlation coefficients can influence the results, we followed the recommendation of Hedges and Olkin (1985) and used the inverse variance as weights in the analyses.
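The inverse-variance weighting described above can be sketched concretely. On the Fisher-z scale a correlation based on n participants has sampling variance 1/(n − 3), so the fixed-effects weight is simply n − 3; the same weights yield the Q statistic reported later as a gauge of heterogeneity. This is a sketch of the standard Hedges-and-Olkin formulas, not the authors’ exact code.

```python
import math

def fixed_effects_meta(rs, ns):
    """Fixed-effects pooling of correlations with inverse-variance weights.
    z_i = atanh(r_i) has Var(z_i) = 1/(n_i - 3), so w_i = n_i - 3.
    Returns (pooled_r, Q), where Q = sum of w_i * (z_i - z_bar)^2 is the
    heterogeneity statistic with k - 1 degrees of freedom."""
    zs = [math.atanh(r) for r in rs]
    ws = [n - 3 for n in ns]
    z_bar = sum(w * z for w, z in zip(ws, zs)) / sum(ws)
    q = sum(w * (z - z_bar) ** 2 for w, z in zip(ws, zs))
    return math.tanh(z_bar), q
```

Because the weights are proportional to sample size, large samples dominate the pooled estimate, which is one reason fixed-effects estimates are more precise but generalize less readily than random-effects estimates.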

When possible, it is helpful to correct for artifacts, such as range restriction and lack of reliability, when estimating population effect sizes (Hunter & Schmidt, 2004). However, such corrections require information that was not available in most of the included studies. We therefore did not correct for any artifacts in our analyses, and results should be considered with this limitation in mind.
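For context, the reliability correction the authors could not apply is Spearman’s classic correction for attenuation, which requires each measure’s reliability coefficient. The numbers in the example are illustrative, not estimates from this meta-analysis.

```python
import math

def disattenuate(r_observed, reliability_x, reliability_y):
    """Spearman's correction for attenuation: estimated correlation
    between true scores, given the observed correlation and the
    reliability coefficients of the two measures."""
    return r_observed / math.sqrt(reliability_x * reliability_y)

# An observed r of .27 between measures with reliabilities .80 and .70
# corresponds to a disattenuated correlation of about .36.
```

Because reliabilities are at most 1, the correction can only increase the estimate, so the convergent validity coefficients reported here are best read as lower bounds on the true-score correlations.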


In total, 236 studies met our inclusion criteria, comprising k = 282 independent samples and N = 33,564 participants. Altogether, j = 3,654 effect sizes were culled from these study reports; these were aggregated to 282 effect sizes for the overall and moderator analyses (at the sample level) and to 907 effect sizes for intertype comparisons. Study characteristics are summarized in Appendix A, with corresponding references in Appendix B.

Based on the total sample of 282 independent effect sizes, the mean effect size across self-control measures was medium in size (rrandom = .27 [95% CI = .24, .30]; rfixed = .34 [.33, .35]). As expected, there was substantial heterogeneity, Q(281) = 2152.20, p < .001.

Moderation by Sample Characteristics

In order to account for heterogeneity in effect sizes, we examined available sample characteristics as potential moderators. The number of samples k varied slightly among moderator analyses because information for moderators was missing in a very small proportion of samples.

Overall, sample characteristics explained minimal differences in effect sizes. Year of publication, the source of the effect size (dissertation, published study, email correspondence), and sample type (normative vs. clinical/mixed) each reduced heterogeneity variance by about one percent, Q(1) = 22.60, Q(1) = 27.51, and Q(1) = 31.79, respectively (all ps < .001). Gender composition accounted for less than one percent of the variance in observed effect sizes, Q(1) = 16.45, p < .001. Although statistically significant, these study characteristic effects were minuscule in magnitude, suggesting that the convergent validity of self-control measures has not strengthened over the past 45 years, that publication bias has not favored larger effect sizes, and that effects were fairly constant across clinical and non-clinical populations and across male and female participants.

Age was a statistically significant moderator, Q(9) = 264.73, p < .001, accounting for 12% of the variance. However, we noted that the type of self-control measure employed varied by age, with younger samples including disproportionately more informant-report questionnaires (completed by teachers and parents) and older samples including disproportionately more executive function measures. When we examined age as a moderator of effect sizes separately for each of the four types of self-control measures, there were no consistent or interpretable trends.

Convergent Validity by Type of Self-Control Measure

In contrast to sample characteristics, measure type explained 53% of the overall variance in effect sizes, Qtotal(906) = 8049.98, p < .001; Qtype(9) = 4261.09, p < .001. As shown in Table 3, informant-report questionnaires demonstrated the strongest evidence of convergent validity. Correlations among informant-report questionnaires (r = .54 [.53, .55]) were slightly higher than correlations among self-report questionnaires (r = .50 [.48, .51], Z = 3.45, p = .001), and much higher than correlations among executive function tasks (r = .15 [.14, .17], Z = 32.69, p < .0001) and among delay tasks (r = .21 [.09, .32], Z = 6.19, p < .0001). Interestingly, delay of gratification tasks, which appeared less frequently in the reviewed literature than other types of measures, demonstrated homogeneity across effect sizes, Q(3) = .57, p = .90. Delay tasks were more strongly associated with informant-report questionnaires than were executive function tasks (rdelay = .21 [.17, .25], rexec = .14 [.12, .15], Z = 2.33, p = .02), with a similar trend for self-report questionnaires (rdelay = .15 [.11, .18], rexec = .10 [.08, .12], Z = 1.87, p = .06).
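The Z statistics in this paragraph compare pairs of meta-analytic correlations. For two correlations estimated on independent samples, the standard test Fisher-z transforms each and divides the difference by its standard error; the sketch below assumes that test (the authors’ exact procedure for aggregated effect sizes may differ in detail).

```python
import math

def compare_independent_rs(r1, n1, r2, n2):
    """Z test for the difference between two correlations from
    independent samples: z_i = atanh(r_i), and the SE of the
    difference is sqrt(1/(n1 - 3) + 1/(n2 - 3))."""
    diff = math.atanh(r1) - math.atanh(r2)
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return diff / se
```

With the very large pooled Ns in this meta-analysis, even small differences between correlations (e.g., .54 vs. .50) produce large Z values, which is why statistical significance here should be read alongside the absolute size of the difference.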

Table 3
Correlation matrix for four types of self-control measures

Convergent Validity by Subtype of Self-Control Measure

We next examined subtypes of self-control measures, considering convergence with other measures of the same subtype (e.g., Stroop tasks with other executive function tasks) and with other types of measures (e.g., Stroop tasks with self-report questionnaires). These analyses are summarized in Tables 1 and 2. Marked unevenness in the availability of convergent validity estimates (i.e., higher numbers of effect sizes for some measures than for others) was notable and should be kept in mind when considering these results.

Among executive function tasks, go/no-go tasks, Stroop tasks, and set switching tasks were used most frequently; attention tasks, gambling tasks, and risk tasks were used much less frequently. Despite the paucity of multi-method studies using attention tasks, higher than average correlations for attention tasks with informant-report questionnaires (r = .33 [.22, .42], Z = 2.33, p = .02) were notable. Otherwise, there were no particularly striking differences in convergent validity for executive function tasks.

Similarly, there was no salient evidence for the superior convergent validity of one subtype of delay task over another. Within subtype of delay task, correlations (e.g., between two different delay tasks) ranged from r = .20 to .23. Convergence with other types of self-control measures was difficult to evaluate because of limited data. None of the four delay task subtypes demonstrated consistently stronger convergent validity with non-delay tasks.

Unlike executive function and delay tasks, the 104 differently named questionnaire measures included in this meta-analysis did not lend themselves to a defensible a priori taxonomy of subtypes. A correlation matrix in which questionnaires were organized simply by name of measure produced 98.4% blank cells (i.e., missing values). Thus, although questionnaire measures demonstrated significant heterogeneity (Qself(56) = 686.48 and Qinformant(141) = 1628.17), we were unable to test whether certain subtypes of questionnaires demonstrated stronger convergent validity than others.

Evaluating the UPPS Structure With Self-Report Questionnaires

Finally, we examined correlations among self-report questionnaires for evidence of the four-factor UPPS structure proposed by Whiteside and Lynam (2001). In consultation with a co-author of the UPPS model (Lynam, personal correspondence November 2009), we reviewed the 80 most popular questionnaires and subscales and recorded the UPPS facet(s) with which they seemed most aligned.3 We then created a synthesized correlation matrix by aggregating effect sizes for all measures tapping each facet. (Necessarily, the aggregated values were not independent; scales that were rated as assessing multiple facets were included in multiple places in the synthesized matrix.) We expected average correlations within a facet (i.e., values on the diagonal) to be stronger than correlations across facets (i.e., off-diagonal values).

As shown in Table 4, sensation seeking demonstrated stronger within-facet associations (r = .48 between different sensation seeking questionnaires) than associations with other facets (rs with other facets ranging from .36 to .40, all comparison ps < .01). The other three proposed facets in the UPPS model were not consistently different from one another. In further support that sensation seeking differs from other aspects of self-control, the correlation between delay tasks and sensation seeking questionnaires (r = .18, [.12, .25], k = 4, j = 12, N = 271) was higher than the correlation between delay tasks and other questionnaires (r = .13, [.11, .15], k = 17, j = 99, N = 2,200), whereas the correlation between executive function tasks and sensation seeking questionnaires (r = .07, [.04, .11], k = 9, j = 56, N = 602) was lower than the correlation between executive function tasks and other self-control questionnaires (r = .11, [.10, .12], k = 50, j = 492, N = 3,846). However, neither of these comparisons reached statistical significance, suggesting that more studies are needed to confirm these trends.

Table 4
Testing the UPPS factor structure with self-reported self-control measures


Discussion

Across 282 multi-method studies and over 33,000 participants, we found moderate convergence across self-control measures. Correlations did not vary systematically by sample characteristics, including gender, sample type, publication year, or whether the correlations were extracted from published articles, dissertations, or email correspondence with authors. In contrast, over half of the heterogeneity in correlations was explained by the type of self-control measure used. Both within and across types, informant-report questionnaires demonstrated the strongest evidence of convergent validity, followed closely by self-report questionnaires, then delay of gratification tasks and, finally, executive function tasks. Notably, all estimates of convergent validity were statistically significant and at least small in magnitude.

Despite the large number of studies and participants included in the meta-analysis, there were insufficient data to draw strong conclusions about the relative convergence of specific subtypes. There was substantial heterogeneity in the convergent validity of executive function tasks, both with other executive function measures and with other types of measures. With this caveat in mind, we note that none of the three most commonly used executive function tasks (go/no-go, Stroop, and set switching) demonstrated uniformly higher correlations with other executive function tasks or with delay or questionnaire measures of self-control. Other executive function tasks, most notably attention tasks, demonstrated higher convergence, but were too rarely used in multimethod studies to draw firm conclusions. Thus, while there was substantial heterogeneity in the convergent validity of executive function tasks, this heterogeneity was not well-explained by the subtype of executive function task used.

In contrast, we found no evidence for differences in convergent validity among hypothetical, repeated trials, sustained, or real choice delay of gratification tasks. Homogeneity of effect sizes among the four subtypes of delay tasks suggests that these tasks differ less from each other than do executive function tasks. One possible explanation for this pattern of findings is that while different delay tasks may to some extent tap different processes related to delay of gratification (e.g., making the choice to wait for a larger reward vs. sustaining that choice in the face of temptation), as a group they may tap more similar processes than do executive function tasks. Alternatively, since estimates of heterogeneity are dependent upon the number of included effect sizes (Hedges & Olkin, 1985), we cannot rule out the possibility that heterogeneity estimates would have been larger and statistically significant had more data been available from multi-method studies employing delay tasks.
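The dependence of heterogeneity estimates on the number of available effect sizes can be made concrete with Cochran's Q and the derived I² index, again computed on the Fisher-z scale (the effect sizes below are hypothetical, not drawn from the meta-analysis):

```python
import math

def heterogeneity(effects):
    """Cochran's Q (with k - 1 degrees of freedom) and I^2 for a set of
    (r, n) effect sizes, using inverse-variance weights w = n - 3."""
    pairs = [(math.atanh(r), n - 3) for r, n in effects]
    w_sum = sum(w for _, w in pairs)
    z_bar = sum(w * z for z, w in pairs) / w_sum
    q = sum(w * (z - z_bar) ** 2 for z, w in pairs)
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Two markedly different correlations yield substantial heterogeneity;
# with fewer or smaller studies, the same spread in r can leave Q below
# its significance threshold -- the caveat raised above for delay tasks.
q, i2 = heterogeneity([(0.05, 200), (0.45, 200)])
```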

Distinguishing Sensation Seeking from Other Processes Relevant to Self-Control

Our attempt to test the four-factor UPPS structure (Whiteside & Lynam, 2001) using correlations among self-report measures of self-control was constrained by the available data. Nevertheless, a priori categorization of questionnaires according to the UPPS factor structure and subsequent analysis of their intercorrelations suggested that sensation seeking could be distinguished from urgency, lack of perseverance, and lack of premeditation. In contrast, we failed to find compelling evidence for separation among the remaining three factors. These analyses support a distinction between sensation seeking tendencies and, broadly, the psychological processes that oppose these tendencies.

Our findings are consistent with Miller et al. (2003), who found that of the four UPPS factors, sensation seeking was the least correlated with the remaining UPPS factors. Many authors consider self-control to be coextensive with Big Five conscientiousness (Moffitt et al., 2011), and the UPPS urgency, lack of planning, and lack of persistence facets have been identified (inversely) with Big Five conscientiousness (MacCann, Duckworth, & Roberts, 2009). Likewise, Romer, Duckworth, Sznitman, and Park (2010) found that sensation seeking peaks sharply during late adolescence and then falls in early adulthood, whereas the developmental trajectories for future time perspective and delay of gratification over the same period are monotonically positive.

Recent neuroscience research suggests that sensation seeking impulses may be generated by dopaminergic subcortical structures whose activity normatively spikes during adolescence, whereas the psychological processes associated with inhibitory control, premeditation, and perseverance correspond to slowly maturing frontal areas (Steinberg, 2008). Collectively, this evidence is consistent with dual-system models of self-control positing impulse-generating and impulse-controlling systems (Carver et al., 2009; Eisenberg et al., 2004; Hofmann et al., 2009; Metcalfe & Mischel, 1999; Steinberg, 2008).

Implications for Research and Theory

How do these cross-method correlations compare to those observed for traits other than self-control? Meyer (2001) compiled meta-analytic estimates of cross-method convergent associations for a wide range of psychological constructs, providing a benchmark by which to judge the current estimates. Generally, correlations between self- and informant-report questionnaire measures in Meyer’s review were medium in size. For instance, Meyer reported that correlations between parent reports of children’s behavioral and emotional problems and either self-reports or teacher reports were r = .29, that correlations between self-reports and spouse/partner reports of personality and mood were r = .29, and that correlations between supervisor and peer ratings of job performance were r = .34. In contrast, Meyer found correlations between task and questionnaire measures to be small in size. For instance, self-report and cognitive test measures of memory problems correlated r = .13; self-report and cognitive test measures of attentional problems correlated r = .06. Overall, it seems that the evidence for convergent validity among self-control measures in the present meta-analysis compares favorably to Meyer’s meta-analytic estimates for other psychological constructs.

The dramatically stronger evidence for convergent validity among questionnaire measures, in both Meyer’s review and the current investigation, has practical implications for self-control researchers. In particular, researchers facing time and budget constraints may be advised to choose a single informant- or self-report questionnaire over any single executive function or delay of gratification task measure. Task measures, of course, have important advantages over questionnaires (e.g., objective performance outcomes that are difficult if not impossible to fake). However, the comparatively weaker evidence of convergent validity for task measures points to substantial random and task-specific error variance, notoriously problematic for executive function tasks in particular (Rabbitt, 1997) but also well-known for performance task measures in general (Epstein, 1979). In the time required to administer a single executive function or delay task, many questionnaire items can be administered. Multiple measures reduce error variance (Brown, 1910; Spearman, 1910), and furthermore, the response to any particular questionnaire item (e.g., “I have trouble resisting temptation”) implicitly asks the respondent for an aggregate judgment of behavior across multiple situations and observations.
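The gain from aggregation cited above (Brown, 1910; Spearman, 1910) is captured by the Spearman-Brown prophecy formula. The single-measure reliability of .30 below is an assumed value for illustration, not an estimate from the meta-analysis.

```python
def spearman_brown(r, k):
    """Spearman-Brown prophecy: reliability of the mean of k parallel
    measures, given reliability r of a single measure."""
    return k * r / (1 + (k - 1) * r)

# Assuming a single task or item with reliability .30, aggregating
# ten parallel measures raises reliability to about .81.
aggregated = spearman_brown(0.30, 10)  # ≈ .81
```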

When using task measures of self-control, therefore, we recommend aggregating across measures in order to reduce error variance. For instance, Beck, Carlson, and Rothbart (2011) recently demonstrated that average performance across three vs. six executive function tasks correlated r = .22 and r = .30, respectively, with informant-report questionnaire measures of self-control. Notably, both of these observed convergent validities exceeded our meta-analytic average for single executive function tasks, r = .14.
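This pattern is what classical test theory predicts for a composite. Under the simplifying assumption that the tasks are parallel, the validity of the mean of k tasks follows from the single-task validity and the average inter-task correlation; the inter-task value of .20 below is an assumption chosen for illustration, not a value estimated here.

```python
import math

def composite_validity(r_xy, r_xx, k):
    """Expected correlation between the mean of k parallel tasks and an
    external criterion, given single-task validity r_xy and average
    inter-task correlation r_xx (classical test theory)."""
    return r_xy * math.sqrt(k / (1 + (k - 1) * r_xx))

# Single-task validity r = .14 (the meta-analytic average above) and an
# assumed average inter-task correlation of .20:
three_tasks = composite_validity(0.14, 0.20, 3)  # ≈ .20
six_tasks = composite_validity(0.14, 0.20, 6)    # ≈ .24
```

Under these assumed inputs, the predicted composite validities are in the neighborhood of the r = .22 and r = .30 that Beck et al. observed, consistent with aggregation itself, rather than task content, driving much of the improvement.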

Perhaps the optimal measurement strategy is to include both task and questionnaire measures. For instance, Duckworth and Seligman (2005, Study 2) measured self-control using a battery of self- and informant-report questionnaires, as well as two delay of gratification measures. Estimates of convergent validity were consistent with those in the current meta-analysis, and the composite measure of self-control predicted objectively measured academic performance better than did any single measure alone.

While we did not find compelling evidence in the current investigation for the distinction among other facets proposed by Whiteside and Lynam’s (2001) UPPS model, absence of evidence is not evidence of absence. Our opportunistic analyses using the available meta-analytic correlation matrix were far from a definitive test of the UPPS factor structure. One promising direction for future research would be a more systematic, multimethod investigation of the components of self-control. Ideally, separate measures assessing sensation seeking and other “impulsive” processes would be administered, along with measures designed to assess the psychological processes posited to modulate those impulses. Including research participants of diverse ages would allow researchers to trace the developmental trajectories of these distinct psychological processes over the life course; divergent developmental trajectories would provide evidence, in addition to conventional factor analyses, for the separation of self-control processes. Finally, processes might be distinguished from each other by differential predictive validity for theoretically-relevant outcomes.


The promise of psychology as a cumulative science depends not only upon field-unifying theories and well-designed studies, but also upon valid, consensually understood measures (Mischel, 2009). On the basis of the current meta-analysis, we suggest that evidence for the convergent validity of self-control measures is adequate – and as strong as the evidence of convergent validity for other psychological measures. Looking to the future, we hope the current investigation encourages collaboration among researchers of diverse methodological traditions. Such interdisciplinary partnerships should dramatically accelerate our understanding of the coherent, yet complex, construct of self-control.

Supplementary Material




This study was funded by the National Institute on Aging K01 mentored research scientist award (Duckworth PI) and the John Templeton Foundation (Seligman PI; Duckworth Co-PI). The authors acknowledge equal contribution to the manuscript.



1A PsycInfo search was conducted for all publications from 2009 and 2010 using the following keywords: self-control, self control, self-discipline, self discipline, self-regulation, self regulation, delay of gratification, delayed gratification, gratification delay, impulsive, impulsivity, impulsiveness, impulse control, emotional regulation, emotion regulation, ADHD, attention deficit, attention deficit disorder, attention deficit hyperactivity disorder, hyperactivity, cognitive control, executive function, and executive functioning.

2Keyword searches included at least one self-control related keyword (see Footnote 1) and either: multi-method, multi-source, measure, assess, or validity. Measure searches used all possible pairwise combinations of the following popular self-control measures and terms: Matching Familiar Figures, circle tracing, draw-a-line, walk-a-line, draw-a-star, Stroop, Gordon diagnostic, go/no-go, Wisconsin card sort, trail making, Eysenck impulsiveness, Dickman impulsivity inventory, Barratt impulsiveness, EASI-III impulsivity, child behavior-checklist, Conners rating scale, self-control rating scale, California Q-set, delay of gratification, discount delay, time preference.

3Available from the first author by request.


  • Baker DB, Taylor CJ, Leyva C. Continuous performance tests: A comparison of modalities. Journal of Clinical Psychology. 1995;51:548–551. [PubMed]
  • Banfield J, Wyland CL, Macrae CN, Munte TF, Heatherton TF. The cognitive neuroscience of self-regulation. In: Baumeister RF, Vohs KD, editors. The handbook of self-regulation. Guilford Press; New York: 2004. pp. 62–83.
  • Barratt ES. Impulsive subtraits: Arousal and information processing. In: Spence JT, Izard CE, editors. Motivation, emotion, and personality. Elsevier Science Publishers; North Holland: 1985. pp. 137–146.
  • Baumeister RF, Vohs KD, Tice DM. The strength model of self-control. Current Directions in Psychological Science. 2007;16:351–355.
  • Bechara A, Damasio AR, Damasio H, Anderson SW. Insensitivity to future consequences following damage to human prefrontal cortex. Cognition. 1994;50:7–15. [PubMed]
  • Beck DM, Carlson SM, Rothbart MK. Executive function, effortful control and parent report of children’s temperament. University of Washington; 2011.
  • Benda BB. The robustness of self-control in relation to form of delinquency. Youth & Society. 2005;36:418–444.
  • Brown W. Some experimental results in the correlation of mental abilities. British Journal of Psychology. 1910;3:296–322.
  • Burgess PW. Theory and methodology in executive function research. In: Rabbitt P, editor. Methodology of frontal and executive function. Psychology Press; East Sussex: 1997. pp. 91–116.
  • Buss AH, Plomin R. A temperament theory of personality development. Wiley Interscience; Oxford: 1975.
  • Carver CS, Johnson SL, Joormann J. Two-mode models of self-regulation as a tool for conceptualizing effects of the serotonin system in normal behavior and diverse disorders. Current Directions in Psychological Science. 2009;18:195–199. [PMC free article] [PubMed]
  • Cooper H. Synthesizing research: A guide for literature reviews. 3rd ed. Sage; Beverly Hills, CA: 1998.
  • Cote JA, Buckley MR. Estimating trait, method, and error variance: Generalizing across 70 construct validation studies. Journal of Marketing Research. 1987;24:315–318.
  • Depue RA, Collins PF. Neurobiology of the structure of personality: Dopamine, facilitation of incentive motivation, and extraversion. Behavioral and Brain Sciences. 1999;22:491–569. [PubMed]
  • Diamond A, Barnett S, Thomas J, Munro S. Preschool program improves cognitive control. Science. 2007;318:1387–1388. [PMC free article] [PubMed]
  • Dougherty DM, Mathias CW, Marsh DM, Jagar AA. Laboratory behavioral measures of impulsivity. Behavior Research Methods. 2005;37:82–90. [PubMed]
  • Duckworth AL, Seligman MEP. Self-discipline outdoes IQ in predicting academic performance of adolescents. Psychological Science. 2005;16:939–944. [PubMed]
  • Duckworth AL, Tsukayama E, May H. Establishing causality using longitudinal hierarchical linear modeling: An illustration predicting achievement from self-control. Social Psychology and Personality Science. 2010;1:311–317. [PMC free article] [PubMed]
  • Eigsti I-M, Zayas V, Mischel W, Shoda Y, Ayduk O, Dadlani MB, Casey BJ. Predicting cognitive control from preschool to late adolescence and young adulthood. Psychological Science. 2006;17:478–484. [PubMed]
  • Eisenberg N, Spinrad TL, Fabes RA, Reiser M, Cumberland A, Shepard SA, Thompson M. The relations of effortful control and impulsivity to children’s resiliency and adjustment. Child Development. 2004;75:25–46. [PMC free article] [PubMed]
  • Epstein S. The stability of behavior: I. On predicting most of the people much of the time. Journal of Personality and Social Psychology. 1979;37:1097–1126.
  • Eriksen BA, Eriksen CW. Effects of noise letters upon the identification of a target letter in a nonsearch task. Perception & Psychophysics. 1974;16:143–149.
  • Evans GW, Rosenbaum J. Self-regulation and the income-achievement gap. Early Childhood Research Quarterly. 2008;23:504–514.
  • Evenden JL. Varieties of impulsivity. Psychopharmacology. Special Issue: Impulsivity. 1999;146:348–361. [PubMed]
  • Eysenck SB, Easting G, Pearson PR. Age norms for impulsiveness, venturesomeness and empathy in children. Personality and Individual Differences. 1984;5:315–321.
  • Field AP. Meta-analysis of correlation coefficients: A monte carlo comparison of fixed- and random-effects methods. Psychological Methods. 2001;6:161–180. [PubMed]
  • Fiske DW. Measuring the concepts of personality. Aldine Publishing Co.; Chicago, Illinois: 1971.
  • Freud S. Beyond the Pleasure Principle. Liveright; New York: 1922.
  • Green L, Fry AF, Myerson J. Discounting of delayed rewards: A life-span comparison. Psychological Science. 1994;5:33–36.
  • Hagger MS, Wood C, Stiff C, Chatzisarantis NLD. Ego depletion and the strength model of self-control: A meta-analysis. Psychological Bulletin. 2010;136:495–525. [PubMed]
  • Heaton RK, Pendleton MG. Use of neuropsychological tests to predict adult patients' everyday functioning. Journal of Consulting and Clinical Psychology. 1981;49:807–821. [PubMed]
  • Hedges LV. Meta-analysis. In: Rao CR, Sinharay S, editors. Handbook of statistics. Vol. 26. Elsevier; Amsterdam: 2007. pp. 919–953.
  • Hedges LV, Olkin I. Statistical methods for meta-analysis. Academic Press; San Diego, CA: 1985.
  • Hofmann W, Friese M, Strack F. Impulse and self-control from dual-systems perspective. Perspectives on Psychological Science. 2009;4:162–176.
  • Hunter JE, Schmidt FL. Methods of meta-analysis: Correcting error and bias in research findings. 2nd ed. Sage; Thousand Oaks, CA: 2004.
  • Kagan J. Matching familiar figures test. Harvard University; Cambridge, MA: 1964.
  • Kendall PC, Wilcox LE. Self-control in children: Development of a rating scale. Journal of Consulting and Clinical Psychology. 1979;47:1020–1029. [PubMed]
  • Kern ML, Friedman HS. Do conscientious individuals live longer? A quantitative review. Health Psychology. 2008;27:505–512. [PubMed]
  • Kirby KN, Petry NM, Bickel WK. Heroin addicts have higher discount rates for delayed rewards than non-drug-using controls. Journal of Experimental Psychology: General. 1999;128:78–87. [PubMed]
  • Kochanska G, Murray K, Jacques TY, Koenig AL, Vandegeest KA. Inhibitory control in young children and its role in emerging internalization. Child Development. 1996;67:490–507. [PubMed]
  • Kramer AF, Humphrey DG, Larish JF, Logan GD. Aging and inhibition: Beyond a unitary view of inhibitory processing in attention. Psychology and Aging. 1994;9:491–512. [PubMed]
  • Krueger RF, Caspi A, Moffitt TE, White J, Stouthamer-Loeber M. Delay of gratification, psychopathology, and personality: Is low self-control specific to externalizing problems? Journal of Personality. 1996;64:107–129. [PubMed]
  • Lejuez CW, Read JP, Kahler CW, Richards JB, Ramsey SE, Stuart GL, Brown RA. Evaluation of a behavioral measure of risk taking: The Balloon Analogue Risk Task (BART). Journal of Experimental Psychology: Applied. 2002;8:75–84. [PubMed]
  • Logan GD. On the ability to inhibit thought and action: A user's guide to the stop signal paradigm. In: Dagenbach D, Carr TH, editors. Inhibitory processes in attention, memory, and language. 1 ed. Academic Press; 1994. pp. 189–239.
  • MacCann C, Duckworth AL, Roberts RD. Empirical identification of the major facets of conscientiousness. Learning and Individual Differences. 2009;19:451–458.
  • Maccoby EE, Dowley EM, Hagen JW, Degerman R. Activity level and intellectual functioning in normal preschool children. Child Development. 1965;36:761–770.
  • Metcalfe J, Mischel W. A hot/cool-system analysis of delay of gratification: Dynamics of willpower. Psychological Review. 1999;106:3–19. [PubMed]
  • Meyer GJ, Finn SE, Eyde LD, Kay GG, Moreland KL, Dies RR, Reed GM. Psychological testing and psychological assessment: A review of evidence and issues. American Psychologist. 2001;56:128–165. [PubMed]
  • Miller EK. The prefrontal cortex: No simple matter. Neuroimage. 2000;2:447–450. [PubMed]
  • Miller E, Joseph S, Tudway J. Assessing the component structure of four self-report measures of impulsivity. Personality and Individual Differences. 2004;37:349–358.
  • Miller J, Flory K, Lynam D, Leukefeld C. A test of the four-factor model of impulsivity-related traits. Personality and Individual Differences. 2003;34:1403–1418.
  • Mischel W. Preference for delayed reinforcement: An experimental study of a cultural observation. The Journal of Abnormal and Social Psychology. 1958;56:57–61. [PubMed]
  • Mischel W. Preference for delayed reinforcement and social responsibility. Journal of Abnormal & Social Psychology. 1961;62:1–7. [PubMed]
  • Mischel W. Becoming a cumulative science. APS Observer. 2009;22:18.
  • Mischel W, Shoda Y, Rodriguez ML. Delay of gratification in children. Science. 1989;244:933–938. [PubMed]
  • Miyake A, Friedman NP, Emerson MJ, Witzki AH, Howerter A. The unity and diversity of executive functions and their contributions to complex “frontal lobe” tasks: A latent variable analysis. Cognitive Psychology. 2000;41:49–100. [PubMed]
  • Moffitt T, Arseneault L, Belsky D, Dickson N, Hancox R, Harrington HL, Caspi A. A gradient of childhood self-control predicts health, wealth, and public safety. Proceedings of the National Academy of Sciences. 2011 Advance online publication. doi: 10.1073/pnas.1010076108. [PMC free article] [PubMed]
  • Newman JP, Kosson DS, Patterson CM. Delay of gratification in psychopathic and nonpsychopathic offenders. Journal of Abnormal Psychology. 1992;101:630–636. [PubMed]
  • Porteus SD. Qualitative performance in the maze test. The Smith Printing House; Vineland, NJ: 1942.
  • Peterson C. A primer in positive psychology. Oxford University Press; New York: 2006.
  • Rabbitt P. Introduction: Methodologies and models in the study of executive function. In: Rabbitt P, editor. Methodology of frontal and executive function. Psychology Press; East Sussex: 1997. pp. 1–38.
  • Reitan RM, Wolfson D. The Halstead–Reitan neuropsychological test battery: Therapy and clinical interpretation. Neuropsychology Press; Tucson, AZ: 1985.
  • Romer D, Duckworth AL, Sznitman S, Park S. Can Adolescents Learn Self-Control?: Delay of Gratification in the Development of Control over Risk Taking. Prevention Science. 2010;11:319–330. [PMC free article] [PubMed]
  • Rosvold HE, Mirsky AF, Sarason I, Bransome ED, Jr., Beck LH. A continuous performance test of brain damage. Journal of Consulting Psychology. 1956;20:343–350. [PubMed]
  • Rueda MR, Fan J, McCandliss BD, Halparin JD, Gruber DB, Lercari LP, Posner MI. Development of attentional networks in childhood. Neuropsychologia. 2004;42:1029–1040. [PubMed]
  • Schulze R. Current methods for meta-analysis: Approaches, issues, and developments. Zeitschrift für Psychologie / Journal of Psychology. 2007;215:90–103.
  • Shallice T. Specific impairments of planning. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences. 1982;298:199–209. [PubMed]
  • Shallice T. From neuropsychology to mental structure. Cambridge University Press; New York: 1988.
  • Singer JL. Delayed gratification and ego development: implications for clinical and experimental research. Journal of Consulting Psychology. 1955;19:259–266. [PubMed]
  • Solnick JV, Kannenberg CH, Eckerman DA, Waller MB. An experimental analysis of impulsivity and impulse control in humans. Learning and Motivation. 1980;11:61–77.
  • Stroop JR. Studies of interference in serial verbal reactions. Journal of Experimental Psychology. 1935;18:643–662.
  • Spearman C. Correlation calculated from faulty data. British Journal of Psychology. 1910;3:271–295.
  • Steinberg L. A social neuroscience perspective on adolescent risk-taking. Developmental Review. 2008;28:78–106. [PMC free article] [PubMed]
  • Tangney JP, Baumeister RF, Boone AL. High self-control predicts good adjustment, less pathology, better grades, and interpersonal success. Journal of Personality. 2004;72:271–322. [PubMed]
  • Tsukayama E, Duckworth AL, Kim BE. Resisting everything except temptation: An explanation for domain-specific impulsivity. European Journal of Psychology. submitted for publication.
  • Tsukayama E, Toomey SL, Faith MS, Duckworth AL. Self-control protects against overweight status in the transition from childhood to adolescence. Archives of Pediatrics and Adolescent Medicine. 2010;164:631–635. [PMC free article] [PubMed]
  • Wallace HM, Baumeister RF. The effects of success versus failure feedback on further self-control. Self and Identity. 2002;1:35–41.
  • White JL, Moffitt TE, Caspi A, Bartusch DJ, Needles DJ, Stouthamer-Loeber M. Measuring impulsivity and examining its relationship to delinquency. Journal of Abnormal Psychology. 1994;103:192–205. [PubMed]
  • Whiteside SP, Lynam DR. The five factor model and impulsivity: Using a structural model of personality to understand impulsivity. Personality and Individual Differences. 2001;30:669–689.
  • Whiteside SP, Lynam DR, Miller JD, Reynolds SK. Validation of the UPPS impulsive behaviour scale: A four-factor model of impulsivity. European Journal of Personality. 2005;19:559–574.
  • Wilde O. An ideal husband. The Project Gutenberg eBook #885. 2009. Retrieved from www.gutenberg.org/files/885/885-h/885-h.htm (Original work published 1912)
  • Williams PJ, Thayer JF. Executive functioning and health: Introduction to the special series. Annals of Behavioral Medicine. 2009;37:101–105. [PubMed]
  • Zaparniuk J, Taylor S. Impulsivity in children and adolescents. In: Webster CD, Jackson MA, editors. Impulsivity: Theory, assessment and treatment. Guilford Press; New York: 1997. pp. 158–179.