Am J Public Health. 2011 April; 101(4): 751–758.
PMCID: PMC3052337
NIHMSID: NIHMS314213

Interventions to Increase Physical Activity Among Healthy Adults: Meta-Analysis of Outcomes

Vicki S. Conn, PhD, RN,corresponding author Adam R. Hafdahl, PhD, and David R. Mehr, MD, MS

Abstract

Objectives. We conducted a meta-analysis summarizing the effects of interventions designed to increase physical activity among healthy adults.

Methods. Our comprehensive searches located 358 reports eligible for inclusion. We used random-effects analyses to synthesize data, and we used meta-analytic analogues of regression and analysis of variance to examine potential moderator variables. We also explored moderator variable robustness and publication bias.

Results. We computed meta-analytic results from studies comprising 99 011 participants. The overall mean effect size for comparisons of treatment groups versus control groups was 0.19 (higher mean for treatment participants than for control participants). This effect size is consistent with a mean difference of 496 ambulatory steps per day between treatment and control participants. Exploratory moderator analyses suggested that the characteristics of the most effective interventions were behavioral interventions instead of cognitive interventions, face-to-face delivery versus mediated interventions (e.g., via telephone or mail), and targeting individuals instead of communities. Participant characteristics were unrelated to physical activity effect sizes. Substantial between-studies heterogeneity remained beyond individual moderators.

Conclusions. Interventions designed to increase physical activity were modestly effective. Interventions to increase activity should emphasize behavioral strategies over cognitive strategies.

Adequate physical activity is linked with important health outcomes, including reductions in cardiovascular disease,1 type 2 diabetes,2,3 some cancers,4,5 falls,6 osteoporotic fractures,7 and depression,8 and improvements in physical function,9–11 weight management,12–15 cognitive function,16,17 and quality of life.18 Despite this compelling evidence for the benefits of physical activity, healthy adults commonly get an inadequate amount of physical activity.19

Extensive primary research has tested interventions to increase physical activity. Although many meta-analyses have addressed health outcomes of physical activity, few have examined physical activity behavior outcomes. The seminal 1996 meta-analysis of interventions to increase physical activity behavior reported a moderate effect size across 127 studies of healthy and chronically ill adults and children.20 Their moderator analyses documented larger effect sizes when interventions used behavior modification, had face-to-face delivery versus mediated delivery (e.g., telephone), focused on healthy people, measured active leisure versus structured exercise, measured low-intensity activity, encouraged unsupervised physical activity versus supervised physical activity, targeted participants of diverse ages, and targeted groups versus individuals.

A recent comprehensive meta-analysis of work site programs for healthy adults documented a d effect size (standardized mean difference) of 0.21 but did not conduct moderator analyses to determine the intervention characteristics linked with the largest physical activity increases.21 Other meta-analyses have integrated across chronically ill adults22 or focused on small, specific interventions or populations, such as primary care–based referrals to physical activity programs,23 older adults,24 computer-based interventions,25 or environmental interventions.26 Many meta-analyses have been plagued by small samples that hinder moderator analyses.27,28 For example, only 19 studies were included in the most recent Cochrane review that aggregated randomized, controlled trials with follow-up data gathered at least 6 months after interventions to increase physical activity among sedentary adults.27

Because of the importance of physical activity and the proliferation of studies testing interventions to increase physical activity, we sought to move this area of science forward by conducting a comprehensive meta-analysis to estimate the overall effect of interventions and, more importantly, to conduct moderator analyses to identify intervention characteristics associated with the best outcomes. We addressed 2 questions: (1) What overall effects do interventions designed to increase physical activity have on physical activity behavior after completion of interventions? (2) Do interventions’ effects on physical activity behavior vary depending on intervention, methodology, or sample characteristics?

METHODS

We used multiple comprehensive search strategies to avoid the bias resulting from narrow searches.29,30 An expert reference librarian conducted searches in 13 databases (e.g., MEDLINE, Dissertation Abstracts, SCOPUS). We examined the National Institutes of Health Computer Retrieval of Information on Scientific Projects for potential studies, and we searched 36 research registers,31 using broad search terms to ensure comprehensive searches. We did ancestry searches for review articles and eligible studies. We also conducted computerized database searches for senior authors and principal investigators of all eligible studies. Our staff hand-searched 82 journals from 1960 through 2007.31 This extensive searching yielded 54 642 papers to consider for inclusion.

We included English-language reports of interventions to increase physical activity among healthy adults. Physical activity was defined as any bodily movement that increased energy expenditure beyond basal levels. Diverse physical activity behavior change interventions were eligible (e.g., education sessions, supervised exercise practice sessions) if physical activity was measured separately from the intervention. To reduce biases, we included both published and unpublished studies.32,33 We also included small-sample and pre-experimental studies.

Data Extraction

We developed a coding frame on the basis of related meta-analyses, review articles, and extensive examination of primary studies.34 The coding frame captured studies’ results as well as characteristics of the source, participants, method, and intervention. Participant characteristics coded included age, gender, overweight status, previous exercise, and minority status. Methodological features coded were method of assigning participants, attrition, physical activity measure, and interval between intervention and physical activity assessment.

We coded a total of 74 intervention characteristics in the following categories: intervention social context (individual, group), social structure target (individual, community), theoretical framework for intervention (social cognitive theory, transtheoretical model [other theories were reported too infrequently for analyses]), behavioral target (physical activity only vs physical activity plus other health behaviors), recommended physical activity (form, intensity, duration/session, frequency/week), and exercise session characteristics among studies with supervised physical activity (form, intensity, duration/session, frequency/week, total number of sessions). We also recorded specific intervention content of the following types: access enhancement, barriers management, competition, contracting, consequences or rewards, cues or stimulus control, decision-making, education about the health benefits of physical activity, exercise prescription, feedback, goal setting, modeling, monitoring physical activity behavior by research staff, motivational interviewing, problem solving, relapse prevention education, and self-monitoring.

The presence of individually tailored interventions—those with specific content matched to individual participants’ attributes, such as items identified as personal barriers to physical activity—was coded. Interventions that used a train-the-trainer approach (i.e., teaching physical activity behavior change interventions to local community members or health care providers so they could deliver the interventions to individuals) were noted. We coded special intervention targets including entire communities, worksites, and ambulatory health care settings, and we coded the mode of delivery (e.g., face-to-face, mass media, mediated by telephone, mail, e-mail). Thirty studies were pilot-coded.

We coded data at a microlevel to enhance validity.35 To enhance reliability, 2 extensively trained coders independently extracted all data from each report. All data were compared between coders to achieve 100% agreement. A third coder verified effect-size data. Discrepancies were resolved by consulting the lead author or another member of the research team. We extracted data on 564 pairwise comparisons from 358 reports.

Data Analysis

We calculated 4 types of effect-size comparisons.36 Treatment versus control postintervention effect sizes refer to treatment group results compared with control group results after interventions. Treatment versus control pre–post effect sizes were calculated as a comparison between treatment group pre–post effect size and control group pre–post effect size. Treatment pre–post effect sizes are within-group effect sizes calculated for studies that provided treatment group baseline and outcome data. Control pre–post comparisons are the same but for control participants. A standardized mean difference (d) effect size was calculated for each primary study comparison. Positive d reflects more favorable scores for the treatment group or at posttest. The 4 types of comparisons were analyzed separately.

Main analyses of treatment versus control data.

Treatment versus control postintervention effect sizes were calculated as the treatment posttest mean minus the control group posttest mean, divided by the pooled posttest standard deviation. A second 2-group effect size—treatment versus control pre–post effect size—was generated to address possible treatment changes from time-related effects in addition to the intervention effect (e.g., maturation, testing, regression). These effect sizes were calculated as treatment group pre–post effect size minus control group pre–post effect size, where each pre–post effect size was computed as posttest mean minus pretest mean, divided by pretest standard deviation. Effect sizes were adjusted for bias.37 Larger samples were given more influence in the analysis by weighting each effect size by the inverse of its sampling variance (i.e., precision). Homogeneity was assessed using a conventional heterogeneity statistic (Q) and I2, an index of between-studies heterogeneity relative to within-study sampling error used to assess the impact of consistency (or inconsistency) among trials on meta-analytic results.
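The bias-adjusted standardized mean difference, its inverse-variance weight, and the Q and I2 heterogeneity statistics described above follow standard meta-analytic formulas. The following is an illustrative sketch of those formulas (the function names and example numbers are ours, not the authors'):

```python
def hedges_g(m_t, m_c, sd_pooled, n_t, n_c):
    """Standardized mean difference with the small-sample bias correction
    (Hedges' g): d = (treatment mean - control mean) / pooled SD."""
    d = (m_t - m_c) / sd_pooled
    j = 1.0 - 3.0 / (4.0 * (n_t + n_c - 2) - 1.0)  # bias-correction factor
    return j * d

def sampling_variance(g, n_t, n_c):
    """Approximate sampling variance of g; its inverse is the study weight."""
    return (n_t + n_c) / (n_t * n_c) + g ** 2 / (2.0 * (n_t + n_c))

def q_and_i2(effects, variances):
    """Cochran's Q and Higgins' I^2 for a set of independent effect sizes."""
    w = [1.0 / v for v in variances]
    mean = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    q = sum(wi * (ei - mean) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    return q, i2
```

For example, a treatment group averaging 5200 steps per day against a control group averaging 4700, with a pooled SD of 2600 and 40 participants per arm, yields g of roughly 0.19, close to the overall mean effect size reported here.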

Studies with 2 or more treatment groups compared with a single group were included in the meta-analysis by accounting for the dependence caused by the shared control group. To accomplish this, we used a 2-stage approach wherein each study's dependent effect sizes were combined into a single independent effect size38 and then submitted to standard univariate random-effects analysis. Estimates of mean physical activity effect sizes were converted to the original metrics of ambulatory steps per day and minutes per week. To detect possible publication bias, we used multiple statistical procedures, because all strategies have limitations.3943 Outliers were detected statistically, omitting each effect size 1 at a time and checking for large externally standardized residuals or substantially reduced measures of heterogeneity.

The estimated effect sizes we report here were based on the random-effects model, with the between-studies variance component, τ², estimated by weighted method of moments unless otherwise designated. The random-effects model assumes that individual effect sizes vary as a result of both participant-level sampling error and sources of study-level error.44 The random-effects model is appropriate when study implementation is heterogeneous. Inclusion criteria variations, intervention differences, dose variations, and study execution differences contribute to heterogeneity.4547 We expect heterogeneity in behavior change research, and we used 4 strategies to manage it. First, we present findings of the random-effects model, which assumes heterogeneity. Second, we report both a location parameter and a variability parameter. Third, we explored potential study-level moderators to understand sources of heterogeneity. Fourth, we interpreted findings in light of heterogeneity. These strategies were important because they helped us interpret the extent to which heterogeneity affects meta-analysis conclusions.45
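The weighted method-of-moments estimator of τ² referenced above is commonly known as the DerSimonian–Laird estimator; a minimal sketch of it and of the resulting random-effects pooled mean (our own illustrative code, under the usual formulas) is:

```python
def dersimonian_laird(effects, variances):
    """Weighted method-of-moments (DerSimonian-Laird) estimate of the
    between-studies variance tau^2, plus the random-effects pooled mean."""
    w = [1.0 / v for v in variances]          # fixed-effect weights
    sw = sum(w)
    fixed_mean = sum(wi * ei for wi, ei in zip(w, effects)) / sw
    q = sum(wi * (ei - fixed_mean) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)             # truncated at zero
    # Random-effects weights add tau^2 to each within-study variance
    w_star = [1.0 / (v + tau2) for v in variances]
    re_mean = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    return tau2, re_mean
```

Adding τ² to each study's sampling variance downweights large studies relative to a fixed-effect analysis, which is why the random-effects model is the appropriate choice when study implementation is heterogeneous.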

Analyses of alternative designs.

Treatment group pre–post comparisons included studies designed as single-group projects, those with multiple treatment groups and no control group, and studies designed as treatment versus control comparisons that also provided preintervention data, making treatment pre–post comparisons possible. We calculated control pre–post comparisons from baseline and outcome control group scores. Each single-group effect size was calculated as the postintervention minus preintervention mean divided by the baseline standard deviation. We solicited correlations between preintervention and postintervention scores from primary study authors to calculate sampling variances.
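The pre–post correlations solicited from primary authors enter the sampling variance of each single-group effect size. A sketch of that computation, using a common approximation for the variance of a standardized mean change (the exact formula the authors used is not stated, so this is an assumption), is:

```python
def prepost_effect(m_pre, m_post, sd_pre, n, r):
    """Single-group standardized mean change, standardized by the baseline SD,
    and its approximate sampling variance; r is the pre-post correlation.
    The variance formula is a common approximation, assumed here."""
    d = (m_post - m_pre) / sd_pre
    var = 2.0 * (1.0 - r) / n + d ** 2 / (2.0 * n)
    return d, var
```

Note that a higher pre–post correlation shrinks the sampling variance, so studies with stable individual rankings over time contribute more precise single-group effect sizes.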

Adjustment for bias, sample size weighting, detection of outliers, random-effects models, estimation of publication bias, and assessment of heterogeneity as described earlier in the Methods section were applied to these single-group effect sizes. Findings from single-group comparisons are presented as ancillary evidence to the more internally valid 2-group comparisons. Within-group control participant findings are presented as empirical evidence to address the common concern that control participants may experience some benefit from participation in study procedures.

Moderator analyses.

We conducted exploratory moderator analyses with treatment versus control postintervention comparisons. We used a mixed-effects meta-analytic analog of regression for moderator analyses. This comparison incorporates between-studies heterogeneity into the estimate and test of the moderator's relationship with mean effect size. For continuous moderators this comparison estimated and tested the (unstandardized) regression slope, b. For dichotomous moderators, a special case of regression estimated and tested the difference between 2 mean effect sizes. The moderator's effect was tested with a heterogeneity statistic, either for the model (Qmodel) or between groups (Qbetween).

These moderator analyses should be interpreted as hypothesis generating, because of the lack of consistent previous findings to form a firm basis for hypothesis testing. Moderator pairs were analyzed to address a given moderator's robustness and generalizability in the presence of other moderators by determining how much its effect changed when each other moderator was controlled and how much it interacted with each other moderator. Robustness was summarized as excellent, good, mixed, mediocre, or poor, on the basis of the extent of agreement in rankings of interaction significance, interaction size, and size of change in main effect.

RESULTS

We calculated effect sizes from data on 99 011 participants. Treatment versus control postintervention analyses comprised 74 852 participants (206 comparisons). Treatment pre–post comparisons used data from 43 701 participants (498 comparisons). Study characteristics are described in Table 1. The median of the mean age was 44 years. Median sample size was 72 participants (range = 5–17 579). Women were well represented, with a median of 74% women, but the median for minority participants was only 14% among studies that reported such data. Interventions ranged from a single motivational education session to extensive supervised exercise sessions occurring over many weeks. The median duration of supervised exercise was 45 minutes. The median number of sessions was 27 supervised exercise encounters. Motivational interventions’ median duration was 60 minutes, delivered in a median of 5 sessions.

TABLE 1
Characteristics of Primary Studies Included in Meta-Analyses: Interventions to Increase Physical Activity, 358 Reports (N = 99 011)

Effect of Interventions

Table 2 shows the effects of interventions on physical activity behavior outcomes. We found a mean effect size (d) estimate of 0.19 for treatment versus control postintervention comparisons and for treatment versus control pre–post comparisons. A mean effect size (d) of 0.33 was documented for treatment pre–post comparisons. These effect sizes indicate that, on average, interventions did increase overall physical activity after completion of the intervention. In contrast, control participants did not experience increased physical activity by participating in studies, as evidenced by a mean effect size of 0.00 (d).

TABLE 2
Random-Effects Behavior Outcome Estimates and Tests: Meta-Analysis of Interventions to Increase Physical Activity, 358 Reports (N = 99 011)

Findings from heterogeneity analyses (Q and I2) suggest substantial variation in true effect size among studies. The 2-group comparison mean effect size of 0.19 is consistent with a mean difference of 14.7 minutes per week of physical activity or 496 steps per day between the treatment and control groups. If we assume true effect sizes are normally distributed with a mean of 0.19 and a standard deviation of 0.17 (Table 2), then the middle 95% of true effect sizes falls between −0.14 and 0.53. Expressing this interval in an original metric gives (–11.0–40.3) minutes per week or (–371–1363) steps per day. Thus, for instance, a randomly selected study's true mean difference for treatment participants could plausibly range from 11 minutes per week less to about 40 minutes per week more. Evidence suggested possible publication bias among studies reporting effect sizes for treatment versus control postintervention, treatment versus control pre–post, and treatment pre–post. No publication bias was apparent for studies reporting effect sizes for control pre–post.

Moderator Analyses

Tables 3 and 4 present the results of dichotomous and continuous moderator analyses of treatment versus control postintervention effect sizes. Tabled results from the analyses of multiple–degrees of freedom categorical moderators and moderator pairs are available from V. S. C. Analyses with fewer studies providing information on a characteristic (smaller k) should be interpreted more cautiously than analyses with more comparisons (larger k). Moderator analyses should be considered exploratory.

TABLE 3
Dichotomous Mixed-Effects Moderator Analyses of Treatment Versus Control Postintervention Comparisons: Meta-Analysis of Interventions to Increase Physical Activity (n = 74 852)

Report, sample, and methodological moderators.

Neither publication nor funding status was related to physical activity effect sizes (QB in Table 3). Studies published more recently had larger mean effect sizes (Qmodel in Table 4). The dichotomous moderator analyses suggested that studies of participants who exercised prior to the intervention reported lower effect size (0.14) than did studies of sedentary participants (0.27), but these findings were not robust in joint moderator analyses. Effect sizes were unrelated to sample characteristics (e.g., age), random versus nonrandom assignment, or method of measuring physical activity. The attrition difference between treatment and control participants was related to effect size in both the individual moderator analyses and the joint moderator analyses. Specifically, studies that had smaller treatment-group attrition rates than control-group attrition rates reported larger effect sizes. The number of days between intervention and outcome measurement was unrelated to effect size in the continuous moderator analyses (Table 4) and in the joint moderator analyses.

TABLE 4
Linear and Cubic Mixed-Effects Moderator Analyses of Treatment Versus Control Postintervention Comparisons: Meta-Analysis of Interventions to Increase Physical Activity (n = 74 852)

Intervention moderators.

When considered individually, 13 of the dichotomous moderators tested were associated with differences in physical activity outcomes (Table 3). Studies that did not use social cognitive theory reported significantly larger effect size (0.20) than did studies that used social cognitive theory (0.12). Studies without the transtheoretical model reported larger effect size (0.21) than did studies with the model (0.15). These findings that showed better outcomes among studies without social cognitive theory or the transtheoretical model were robust in the joint moderator analyses. A comparison of effect size between studies using the 2 models did not reveal a statistically significant difference. Multiple–degree of freedom analyses documented the largest effect size for studies using neither model (0.23). This pattern of findings suggested that social cognitive theory was more detrimental to effect-size values than was the transtheoretical model.

Although the dichotomous moderator analyses suggested that studies including exercise prescription reported larger effect sizes for physical activity (0.30) than did studies without prescription (0.17), these findings were not robust. The presence of supervised exercise in the intervention was associated with better physical activity outcomes (0.29 vs 0.17) in dichotomous moderator analyses, but this finding was not supported in joint analyses. The joint moderator analyses revealed mixed support for the finding that studies in which research staff modeled exercise behavior were associated with larger effect sizes (0.38) than were studies without modeling (0.17).

The joint analyses supported the finding that interventions that included a train-the-trainer approach were less effective (0.09) than were interventions with research staff providing interventions directly to participants (0.21). The finding that standardized interventions (0.20) were more effective than individually tailored interventions (0.04) received mixed support in the joint moderator analyses. The dichotomous moderator finding that interventions that included relapse-prevention strategies (0.34) were more effective than were interventions that did not (0.17) was not confirmed in the joint analyses.

The dichotomous moderator analyses and the joint moderator analyses confirmed that interventions that targeted entire communities (0.09) were less effective than were interventions aimed at individuals (0.19). The finding that studies with mass-media approaches (0.08) were less effective than were studies using other strategies to increase physical activity (0.19) was confirmed in the joint analyses. Interventions with mediated delivery of interventions (e.g., mail, phone) had smaller effect sizes (0.15) than did interventions that were delivered face-to-face (0.29) in the single-variable analysis. The joint moderator analyses did not confirm the better effect size for face-to-face interventions. Worksite–based and primary care–based interventions (0.21, 0.16) did not report different effect sizes compared with interventions without these characteristics (0.18, 0.19).

We grouped interventions into approaches that were either behavioral (e.g., goal setting, contracting, self-monitoring, cues, rewards) or cognitive (e.g., decision making, health education, providing information). Interventions that exclusively used behavioral strategies (0.25) were more effective than were other interventions (0.17). Multiple–degree of freedom analyses confirmed that the largest effect sizes were for interventions that focused entirely on behavioral interventions. The joint moderator analyses confirmed that the superiority of behavioral approaches was a robust finding.

Interventions often recommended the form, intensity, or duration that physical activity was to take following the interventions. None of these recommendations were significantly linked with effect sizes for physical activity. Neither the number of intervention strategies nor the total minutes of intervention content (including total minutes of supervised physical activity) was associated with physical activity outcomes.

DISCUSSION

This comprehensive meta-analysis found a moderate mean effect size (d = 0.19) across diverse studies designed to increase physical activity among healthy adults. Moderator analyses identified several robust and moderately robust effect-size moderators associated with larger physical activity effect size: behavioral interventions (vs cognitive interventions that targeted knowledge, attitudes, or beliefs), interventions delivered directly to individuals (vs mass-media interventions and interventions targeting entire communities), interventions delivered by project staff (vs train-the-trainer models), physical activity behavior being modeled by research staff, standardized interventions (vs individually tailored interventions), and absence of interventions based on social cognitive theory or the transtheoretical model.

The effect size from these studies of healthy adults is smaller than the effect sizes reported for chronically ill adults22 (d = 0.45) and for chronically ill and healthy adults and children (d = 0.72).20 These results are similar to the effect size reported for older adults (d = 0.26),24 a meta-analysis in which interventions targeting specific disease-patient populations elicited larger physical activity behavior changes than did interventions not targeting such groups. The presence of chronic illness may cause patients to be more responsive to interventions. Our effect size is smaller than that reported by Dishman and Buckworth,20 which could have resulted from our greatly expanded search strategies locating more obscure studies with smaller effect sizes.

The magnitude of physical activity behavior change was modest. The achieved steps per day did not meet public health goals of 10 000 steps per day.48,49 It is unclear whether entirely sedentary people gain incremental health benefits when they add even small amounts of physical activity. Future intervention research should report outcomes in terms of understandable amounts of physical activity increases such as steps per day or minutes per week.

The moderator analysis finding that behavioral strategies were superior to cognitive strategies is consistent with meta-analytic findings for chronically ill adults22 and older adults.24 Behavioral strategies include goal setting, self-monitoring, physical activity behavior feedback, consequences, exercise prescription, and cues. Health care providers and public health programs often emphasize physical activity's health benefits, but we found that health education did not increase effect size. Perhaps the public already is convinced of physical activity's health benefits, such that programs using behavioral strategies to change physical activity behavior may be most effective. Future research comparing behavioral interventions to cognitive interventions in large randomized controlled trials would help confirm these findings. Further primary research comparing specific types of behavioral interventions (e.g., contracting, self-monitoring, cues, rewards) could identify the most effective behavioral intervention components. Public health workers designing interventions should emphasize behavioral strategies over cognitive approaches.

The pattern of findings across the approaches of mediated delivery (e.g., delivered via e-mail or telephone), mass media (e.g., delivered via television or newspaper), social structure target (individual vs community), and train the trainer (i.e., teaching physical activity behavior change interventions to local community members or health care providers so they can deliver the interventions to others) suggests that delivering interventions to individuals face to face is the most effective approach. This finding extends trends observed in previous meta-analyses that did not reach statistical significance.22,24 Audience attention to the message may be higher in individually delivered face-to-face interventions, making the message seem more important to recipients. Moreover, behavioral interventions may be easier to deliver in individual face-to-face encounters.

This meta-analysis was limited by the studies retrieved and the information available in study reports. Primary study quality varied widely. Many quality aspects, such as treatment fidelity and allocation concealment, were poorly reported and could not be examined in moderator analyses. Our finding of publication bias suggests that studies with negative or low effect sizes remain inaccessible. Meta-analyses are unable to assess publication bias favoring studies with particular characteristics. The comprehensive nature of this meta-analysis and the resulting heterogeneity among studies are, therefore, both a strength and a limitation. The overall effect size should be interpreted in light of the discovered heterogeneity; that is, not all interventions are equally effective. The results of the moderator analyses should be used to interpret findings, and these effect-size comparisons may be more important than the overall effect size. The findings of the moderator analyses should be interpreted in the context of associations among moderators, substantial residual between-studies heterogeneity, and the lack of hypotheses from the literature.

Our comprehensive meta-analysis found that physical activity interventions produced moderate, statistically significant increases in physical activity behavior and that behavioral interventions appeared to be more effective than were cognitive interventions. These findings suggest that interventions to increase physical activity should emphasize behavioral components such as self-monitoring, stimuli to increase physical activity, rewards, behavioral goal setting, and modeling physical activity behavior in standardized interventions delivered to individuals. Future research should explore which components of behavioral interventions are most effective.

Acknowledgments

Financial support was provided by the National Institutes of Health (grant R01NR009656).

Note. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Human Participant Protection

The institutional review board for the protection of human subjects of the University of Missouri categorized this study as exempt from requiring individual participant consent because data were obtained from secondary sources.


Articles from American Journal of Public Health are provided here courtesy of American Public Health Association