NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.

Williams JW, Plassman BL, Burke J, et al. Preventing Alzheimer's Disease and Cognitive Decline. Rockville (MD): Agency for Healthcare Research and Quality (US); 2010 Apr. (Evidence Reports/Technology Assessments, No. 193.)


3. Results

Literature Search Results

Figure 2 summarizes the results of our literature search and screening process. We identified a total of 6713 citations from the electronic search and an additional 194 citations from other sources. After applying inclusion/exclusion criteria at the title-and-abstract level, 1626 full-text articles were retrieved and screened. Of these, we excluded 1035 that did not meet our inclusion criteria. Appendix G provides a complete listing of articles excluded at the full-text stage, with reasons for exclusion.

Figure 2. Literature flow diagram.

After applying quality assessment criteria to the systematic reviews captured in our search, we identified 25 good quality reviews (Table 3), which are summarized in the relevant sections below. Publications that were included in one of these 25 reviews are not generally counted in our tally of original research studies. However, some original research publications may have addressed more than one factor, and may be included in both an existing systematic review for one factor, and as an original research study for another factor. In the end, we included 250 original research studies, along with the 25 systematic reviews.

Table 3. Included systematic reviews.

Measurement of Cognitive Outcomes

Alzheimer’s Disease and Dementia

The assessment for dementia was similar across most of the major cohort studies we identified. Typically the cognitive batteries used included measures of global cognitive function; language (naming and verbal fluency); verbal memory (word list and/or paragraph immediate and delayed recall); visual memory; executive function and processing speed; attention; and an estimate of baseline intelligence or reading ability. The specific tests used differed across the studies, but the cognitive domains assessed were generally similar. The studies differed in their use of information from a proxy informant in the diagnostic process; that is, some studies used information from informants, while others did not.

Cognitive Decline

Cognitive decline was measured in a number of different ways in the included studies. Some studies used a diagnosis of incident mild cognitive impairment (MCI) or cognitive impairment, no dementia (CIND) as the definition for cognitive decline. The criteria used for these diagnostic categories varied across studies, but typically included a psychometrically determined mild impairment on memory tests and/or other cognitive domains, with at most mild functional impairment in daily activities.

Other studies defined cognitive decline based on longitudinal change on one or more cognitive measures. Some studies determined decline on the test(s) based on continuous change in the test score over time, while other studies defined decline in categorical (often dichotomous) terms based on a predetermined threshold of change in performance over two or more time points.

Review of the studies included in this systematic review found that about 40 percent of those reporting on cognitive decline based their findings on performance on a single cognitive measure, typically a general measure of cognitive function. The most common measures used were the Mini-Mental State Examination (MMSE) or an abbreviated form of the MMSE, the Modified Mini-Mental State Examination (3MS), and some form of the Telephone Interview for Cognitive Status (TICS). Approximately half of the studies using these measures defined cognitive change as a continuous outcome, while the other half defined cognitive change in categorical terms. Some studies reported results for only a single measure, such as a verbal memory task, from the battery of tests that were administered.

Another 40 percent of the included studies assessed cognitive decline using multiple neuropsychological measures. The majority of these studies measured decline as a continuous outcome. Some of these studies reported results for both the individual cognitive measures and a global composite measure combining all tests. The specific cognitive tests used varied across studies but typically included tests in a number of the following domains: global cognitive function (MMSE, 3MS, TICS); verbal memory (word list or paragraph immediate and delayed recall); visual memory; verbal fluency; naming; speed of processing; attention; executive function; working memory; and reasoning.

Finally, about 10 percent of included studies defined cognitive decline based on a composite global index of performance on all tests combined. Some of these studies presented the results as a continuous outcome, while others reported them as a categorical outcome.

Key Question 1 – Factors Associated with Reduction of Risk of Alzheimer’s Disease

Key Question 1 is: What factors are associated with the reduction of risk of Alzheimer’s disease?

Nutritional and Dietary Factors

B vitamins and folate. We identified one good quality systematic review that examined the association between B vitamins or berries and development of Alzheimer’s disease (AD).32 However, we decided not to provide a detailed summary of this review here because the majority of the studies identified in the review did not meet our eligibility criteria, and the review authors did not conduct any meta-analysis combining the studies, thus providing limited benefit for our purpose. Instead, here we review the studies identified by the review that met our eligibility criteria, along with additional studies identified in our literature search. We identified a total of five eligible cohort studies in this way.54–58 These studies are summarized in Table 4; detailed evidence tables are provided in Appendix B. Three studies used samples from U.S. communities,54–56 and two used samples from communities in Europe.57,58 Length of followup across the studies ranged from an average of approximately 3.0 to 6.1 years. In all five studies, participants were non-demented at baseline. Two of the studies used blood serum levels of various B vitamins and folate.57,58 The other three54–56 estimated levels of B vitamins and folate based on self-reported responses on nutrition questionnaires. The studies that used blood serum levels to characterize exposure used standard methods to determine the levels of folate and B vitamins. One research group that reported on two different samples55,56 has published the results of analyses assessing the reliability and validity of the food frequency questionnaires used in their studies.59 They reported that the Spearman correlations for 1-year reproducibility of responses to the questionnaire were 0.70 for total folate, 0.50 for total vitamin B-12, and 0.58 for vitamin B-6. The Pearson correlations for validity were 0.50 for total folate, 0.38 for total vitamin B-12, and 0.51 for total vitamin B-6. The authors also reported that the performance characteristics did not differ significantly by cognitive ability.

Table 4. B vitamins and folate and risk of developing AD.

All of the studies used sample selection methods to minimize selection bias. Four studies compared some baseline characteristics by exposure level.55–58 All studies used standard criteria for the diagnosis of AD, but only two used an informant report as part of the diagnostic process.57,58 Not all of the studies reported results for AD cases only, but those that reported results for dementia as a whole included enough AD cases to meet our eligibility criteria. Two studies explicitly stated that the dementia diagnosis was assigned blind to the exposure level of B vitamins and folate;55,56 however, it is unlikely that details of B vitamin and folate exposure were discussed during the diagnostic process in any of the studies. Analyses were appropriate and generally controlled for relevant potential confounders.

Results from the two studies57,58 that measured folate serum levels showed that low baseline folate levels were consistently associated with increased risk of AD (or dementia). In comparison, B12 levels were typically not associated with risk of AD. The three studies that used estimated dietary intake of folate and B vitamins based on self-reported information reported conflicting results. One reported an association between higher intake of folate and reduced risk of AD,54 while another did not find a significant reduction in AD risk associated with folate intake.55 Neither study found an association between vitamins B6 or B12 and risk of AD. Direct comparisons of the two studies to identify reasons for these inconsistent results are difficult, but based on the information provided in the studies, the average rate of folate intake may differ between the two studies, with the study by Morris and colleagues55 reporting a lower rate of folate intake. Only one study examined niacin (B3) intake and found a lower risk for AD associated with higher intake of niacin.56

In conclusion, based on folate levels measured in serum, there is preliminary evidence from two studies that low folate levels are associated with increased risk of AD. The two studies estimating folate level from self-report dietary information did not find a consistent association with risk of AD. The evidence does not suggest an association between B12 and risk of AD. The one study assessing estimated niacin intake showed an association between higher niacin intake and lower risk of AD; confirmation of this is required prior to drawing conclusions. Further confirmation is also needed of the putative association between folate serum levels and risk of AD.

Other vitamins. We identified 12 eligible cohort studies that examined risk of AD in association with use of antioxidants and multivitamins.60–71 These studies are summarized in Table 5; detailed evidence tables are provided in Appendix B. Eight studies used samples from U.S. communities;62,64–66,68–71 one used a sample from a medical cooperative organization in the United States;63 two used samples from communities in Europe;60,61 and one used a sample from Canada.67 Length of followup across the studies ranged from approximately 2 to 33 years. In all studies, participants were non-demented at baseline. Studies using dietary intake of vitamins E and C, beta carotene, or flavonoids as the predictor variables estimated intake from self-reported responses on nutrition questionnaires. Studies using intake of vitamin E and C supplements as the predictor variables estimated intake from self-reported use; some studies confirmed use of supplements by examination of medication containers. One study used medical records to obtain information on use of supplements for institutionalized participants.67 Two studies reported on the same cohort, the Honolulu-Asia Aging Study,64,66 but appeared to use different sources for their exposure data. Another two studies reported on the same sample but defined the predictor variable somewhat differently.69,70 Yet another study used two distinctly different food frequency questionnaires for different subgroups of the sample and then developed a method to combine the information from both questionnaires into a single dataset.61 In general, little validation has been done on the accuracy of the nutrition questionnaires in these studies, but one research group that reported on two different samples68–70 has published some analyses showing that the food frequency questionnaires used in their studies were reasonably reliable and valid,59 and that the performance characteristics did not vary significantly by cognitive ability.
Correlations between responses on a food frequency questionnaire and a 24-hour dietary recall typically ranged from 0.39 to 0.67 for vitamins C and E, and were higher when vitamin supplements were considered.59 Eleven of the studies used sample selection methods to minimize selection bias. One study67 used a subsample from a larger cohort study, a disproportionate segment of which was at relatively high risk of cognitive impairment; part of this study sample was drawn from institutionalized participants and part from community participants. The sources of exposure information differed for these two subgroups, introducing additional potential sources of bias. Eight studies compared some baseline characteristics by exposure level.60,63,65,67–71 All studies used standard criteria for the diagnosis of AD, but only five used an informant report as part of the diagnostic process.60,62,64,66,71 All but one study61 reported results for AD cases only, but the study that reported results for dementia as a whole included enough AD cases to meet our eligibility criteria. Few of the studies explicitly stated that the dementia diagnosis was assigned blind to the exposure level of the nutrients of interest; however, it is unlikely that details of these types of exposure were discussed during the diagnostic process in any of the studies. Analyses were appropriate and generally controlled for relevant potential confounders.

Table 5. Antioxidant and multivitamin use and risk of developing AD.

Taken together, results from the studies are inconclusive. The preponderance of evidence suggests that there is no association between intake of vitamins E or C, flavonoids, or beta carotene and risk of AD. However, selected studies have reported associations between AD and vitamin C,60,68 vitamin E,64,69 or the combination of the two vitamins.71 When significant associations were reported, higher intake of the vitamin was associated with lower risk of AD (see Table 5). However, within studies these findings were often not consistent. For example, sometimes the significant association was limited to food intake only and not supplemental vitamins,69 and in other studies the association between vitamin intake and AD was limited to alternating quintiles of vitamin level.64 This raises some questions about the robustness of the findings and leads to the conclusion that there is little evidence supporting a beneficial effect of antioxidant vitamins on reducing risk of AD.

Ginkgo biloba. We did not identify any eligible studies examining risk of AD in relation to use of Ginkgo biloba supplements.

Omega-3 fatty acids. We identified two good quality systematic reviews evaluating the association between omega-3 fatty acids and risk of Alzheimer’s disease.29,30 We focus the discussion on the more recent (2009) review by Fotuhi et al.29 The review included seven prospective cohort studies described in nine publications dating from 1997 to 2008.60,72–79 The seven studies included a total of 18,922 subjects; three were conducted in the United States, three in European countries, and one in Canada (Table 6). Prospective observational studies or trials were selected that addressed the specific association between any form of omega-3 fatty acids and dementia in participants age 65 or older, and that used standard diagnosis of dementia. The number of individuals with AD versus other dementias was available, and all studies met our eligibility threshold of at least 60 percent with AD. The systematic review did not report a structured quality assessment of the included studies; however, study characteristics for key design variables were reported, and study selection criteria focused the review on higher quality studies. Length of followup ranged from 3.9 to 7 years. No information was given on followup rates. Covariate adjustment included age, sex, and education, and many studies included additional covariates such as the apolipoprotein E gene (APOE), other nutritional factors, and income. Covariate adjustment for education and income may be particularly important as several studies reported an association between fish consumption and higher incomes and education. Both unadjusted and adjusted results were reported. Omega-3 fatty acid intake was estimated by dietary histories in six studies; one74 measured serum polyunsaturated fatty acids (PUFAs) and one reported plasma PUFAs in a subsample.79 Most studies focused on fish consumption to estimate omega-3 fatty acids without considering other dietary sources or fish oil supplements.
Exposure classification varied substantially, ranging from a simple count of the frequency of fish servings per week to estimates of the number of grams consumed per day. Because of significant heterogeneity in study design, results were synthesized qualitatively.

Table 6. Omega-3 fatty acids and risk of developing AD – study characteristics and results from studies reviewed by Fotuhi et al., 2009.

Study characteristics and results are summarized in Table 6. There was no consistent association between omega-3 fatty acid intake and incident AD. Three of the seven cohort studies showed that fish consumption was associated with a statistically significant reduced risk of AD;73,75,78 three did not show a statistically significant association,72,76,77 although the point estimate favored a lower risk in two studies; and one small study74 showed higher serum PUFAs in subjects who developed dementia. Two studies examined the interaction between fish intake and APOE, one showing no interaction,78 and one showing an interaction where increased fish consumption decreased risk of AD only in those who were non-carriers of the epsilon 4 allele of the apolipoprotein E gene (APOE-e4).76 There was substantial heterogeneity in how omega-3 consumption was assessed, including differences in the types of fish (e.g., fatty versus non-fatty), dosage, and duration of use. Most studies focused only on long-chain omega-3 fatty acids and not on specific fatty acids (e.g., docosahexaenoic acid [DHA]) or the ratio of omega-3 to omega-6 fatty acids, a ratio that has been linked to some cardiovascular outcomes. The variability in exposure intake may be an important contributor to inconsistent study findings. The authors concluded that the existing data do not favor a role for long-chain omega-3 fatty acids in preventing dementia, including AD.

We identified two additional eligible studies published after the above-described review appeared (Table 7). Devore and colleagues80 prospectively followed 5396 subjects from Rotterdam for a mean of 9.6 years. Fish and other dietary sources of omega-3 were assessed at baseline using a dietary history. Kroger and colleagues81 used a nested case-control design to evaluate the association between blood and erythrocyte membrane PUFAs and AD in a community sample from Canada. This represents an updated analysis of the study by Laurin et al.,74 which was included in the 2009 systematic review described above.29 Only 15 percent of the overall sample provided blood samples, potentially introducing selection bias. Both studies established AD using the National Institute of Neurological and Communicative Diseases and Stroke-Alzheimer’s Disease and Related Disorders Association (NINCDS-ADRDA) criteria and controlled for multiple potential confounders including age, educational level, and vascular risk factors. Neither study found an association between AD and fish intake, total PUFAs, DHA, or eicosapentaenoic acid (EPA).

Table 7. Omega-3 fatty acids and risk of developing AD – recent cohort studies.

In summary, a previous systematic review of seven prospective studies concluded that there was no consistent association between PUFAs, usually estimated by dietary histories of fish consumption, and incident AD. Results from a relatively large observational study with longer-term followup published since the 2009 systematic review, and from a reanalysis of a previously published study, are consistent with this conclusion.

Other fats. We identified two eligible cohort studies examining risk of AD in relation to intake of various types of fat.82,83 These studies are summarized in Table 8; detailed evidence tables are provided in Appendix B. One study used a community sample in the United States,82 and the other used a community sample in Europe.83 Length of followup ranged from 3.9 to 21 years. In one study,82 exposure was determined based on self-reported information from a semi-quantitative food frequency questionnaire in late life. Based on a validation substudy, the authors reported that the Pearson correlations for comparative validity with 24-hour dietary recalls were 0.40 for monounsaturated fat, 0.47 for saturated fat, 0.36 for polyunsaturated fat, and 0.39 for cholesterol. In the other study,83 exposure was determined based on a self-reported, 20-question, multiple-choice questionnaire completed in mid-life. This study estimated the total fat intake from milk products and dairy product spreads based on questionnaire responses. Both studies used sample selection methods to minimize selection bias, but only one compared baseline characteristics by exposure level.82 Investigators used standard criteria for the diagnosis of AD, but they did not use an informant report as part of the diagnostic process. One study stated that dementia diagnosis was assigned blind to the exposure level;82 it is assumed here that this was also the case in the other study.83 Analyses were appropriate and controlled for relevant potential confounders in one study,82 and partially controlled for potential confounders in the other study.83

Table 8. Intake of various types of fat and risk of developing AD.

The study assessing mid-life dietary fat intake83 did not find a significant association between risk of AD and intake of total fat, polyunsaturated fats, or monounsaturated fats. Investigators did report significantly increased risk of AD associated with the 2nd quartile of saturated fat intake compared with the 1st quartile; however, this increased risk did not hold up across the top two quartiles, raising questions about the robustness of the result. The study assessing later life dietary fat intake82 reported increased risk of AD associated with increased intake of saturated fats and trans-unsaturated fats, and a decreased risk of incident AD associated with higher intake of omega-6 polyunsaturated fats. Differences between the studies in how the level of exposure was determined and the time when the exposure occurred may explain the discrepant results, but such fundamental differences also make it difficult to draw conclusions from these two studies. The study by Morris and colleagues82 used the most detailed dietary intake data. Giving weight to this study on the basis of its data quality, there is preliminary evidence that saturated fats and trans-unsaturated fats may contribute to an increased risk of AD. Confirmation of these findings is needed.

Trace metals. We identified no systematic reviews or studies evaluating a potential association between trace metals and reduction of risk of Alzheimer’s disease.

Mediterranean diet. We identified four eligible cohort studies examining risk of AD and the Mediterranean diet.84–87 The Mediterranean diet is characterized by high intake of vegetables, legumes, fruits, and cereals; high intake of unsaturated fatty acids (mostly in the form of olive oil), but low intake of saturated fatty acids; a moderately high intake of fish; a low-to-moderate intake of dairy products (mostly cheese or yogurt); a low intake of meat and poultry; and a regular but moderate amount of alcohol, primarily in the form of wine and generally during meals. The included studies are summarized in Table 9; detailed evidence tables are provided in Appendix B. One study87 used a community sample in Europe. The three other studies84–86 were based on the same community sample in the United States, but they address slightly different outcomes or exposures. One study assessed the association between AD and the Mediterranean diet,84 one assessed the association between progression from MCI to AD and the Mediterranean diet,86 and one assessed the association between AD and the Mediterranean diet and physical activity combined.85 For all of the studies, participants were non-demented at baseline, but for one86 some of the participants were retrospectively assigned a diagnosis of MCI at baseline. Length of followup ranged from an average of 4 to 7 years. Exposure was determined based on self-reported information from a semi-quantitative food frequency questionnaire. All of the studies used similar methods to calculate a Mediterranean diet score based on the responses on this questionnaire. Investigators in all studies noted that they had previously reported that this questionnaire has adequate validity and reliability based on substudies of segments of the questionnaire. All of the studies used sample selection methods to minimize selection bias and compared baseline characteristics by exposure level.
Investigators used standard criteria for the diagnosis of AD, but did not use an informant report as part of the diagnostic process. One study applied diagnostic criteria for MCI retrospectively.86 It is assumed here that the dementia and MCI diagnoses were assigned blind to the exposure level, but this information was not provided in some of the publications. Analyses were appropriate and controlled for relevant potential confounders.

Table 9. Mediterranean diet and risk of developing AD.

The three publications based on the single cohort84–86 reported fairly consistent results regarding the association between higher compliance with a Mediterranean diet and a significantly lower risk of incident AD. The studies reported significant trend effects, suggesting a dose-response pattern. The study on this cohort examining the progression of MCI to dementia found that higher adherence to a Mediterranean diet was associated with a lower risk of progressing from MCI to AD.86 Another study on this cohort examined the combination of physical activity and diet; the authors reported a lower risk of AD among participants who both adhered highly to a Mediterranean diet and engaged in high levels of physical activity.85 Low adherence to a Mediterranean diet combined with high levels of physical activity, or vice versa, was not associated with protection against AD. The one study on a separate sample87 reported a hazard ratio in the direction indicating that high adherence to a Mediterranean diet was associated with lower risk of AD, but the hazard ratio did not meet standard significance levels. The authors noted that the sample had limited power to detect an association, and this may explain their null finding.

In conclusion, multiple studies on one cohort reported that high adherence to a Mediterranean diet is associated with lower risk of AD; one study on a separate sample did not replicate this finding, but this may be due to lack of statistical power. Confirmation of the reported protective association of a Mediterranean diet is needed using an independent sample.

Fruit and vegetable intake. We identified two eligible cohort studies examining risk of AD in relation to intake of fruit and vegetables88 or intake of fruit and vegetable juices containing a high concentration of polyphenols.89 These studies are summarized in Table 10; detailed evidence tables are provided in Appendix B. One study89 used a community sample in the United States, and the other used a twin registry in Europe.88 Length of followup ranged from an average of 6.3 to 31.5 years. In one study,88 exposure was determined based on self-reported responses to one question on fruit and vegetable consumption; in the other study,89 exposure was determined by self-reported information from a semi-quantitative food frequency questionnaire. The investigators conducted a validation study of the questionnaire and found low to moderate correlations (0.42 to 0.77) between food records and responses on the food frequency questionnaire for major nutrient groups for the ethnic groups included in the study. Both studies used sample selection methods to minimize selection bias, and one89 compared baseline characteristics by exposure level. Investigators used standard criteria for the diagnosis of AD, but only one of the studies89 used an informant report as part of the diagnostic process. Only one study88 reported that the dementia diagnosis was assigned blind to the exposure level, but since this type of exposure is not typically discussed as part of the dementia assessment and diagnosis process, it is assumed here that the diagnosis was assigned blind to exposure in both studies. Analyses were appropriate and controlled for relevant potential confounders.

Table 10. Intake of fruit and vegetables and risk of developing AD.

One of the studies reported that medium to great fruit and vegetable intake in mid-life was associated with lower risk of AD in late life.88 This association was present in women, but not in men. It was also present in individuals with angina, but not in those without angina. This study used a crude measure (one self-report question) to determine exposure. The other study showed that intake of fruit or vegetable juice at least three times per week in later life, compared to less than once per week, was associated with reduced risk of incident AD.89 A significant trend was noted, suggesting a dose-response pattern.

In conclusion, these two studies offer preliminary evidence that higher intake of fruits and vegetables, or of fruit and vegetable juices, throughout adult life may provide benefits for preventing AD, but the findings need to be confirmed.

Total intake of calories, carbohydrates, fats, and proteins. We identified one eligible cohort study examining risk of AD and total intake of calories, carbohydrates, fats, and protein.90 This study is summarized in Table 11; a detailed evidence table is provided in Appendix B. The study used a community sample in the United States. Length of followup averaged 4 years. Participants were non-demented at baseline. Exposure was determined based on self-reported information from a semi-quantitative food frequency questionnaire. The validity of the questionnaire used in this study was assessed previously in a subsample of individuals using two 7-day food records as the criterion. The intra-class correlations for energy-adjusted nutrient intakes were 0.30 for total calories, 0.28 for carbohydrates, 0.41 for fats, and 0.33 for protein. The study used sample selection methods to minimize selection bias and it compared baseline characteristics by exposure level. Investigators used standard criteria for the diagnosis of AD, but did not use an informant report as part of the diagnostic process. It was not reported whether the dementia diagnosis was assigned blind to the exposure level, but it is unlikely that this type of information would have been discussed during the diagnostic process. Analyses were appropriate and controlled for relevant potential confounders.

Table 11. Total intake of calories, carbohydrates, fats, and protein and risk of developing AD.

This study reported that higher caloric intake was associated with higher risk of incident AD. There was no association between AD risk and intake amounts of carbohydrates, fats, or protein. In analyses stratified by APOE e4 allele status, both total calorie intake and fat intake were associated with higher risk of AD.

In conclusion, the findings from this single study are somewhat difficult to interpret, given the hazard ratio (HR) < 2, which leaves open the possibility that residual confounding explains the association, and given the relatively low correlations reported in the study’s validation of the instrument used to measure exposure. In addition, these findings may be inconsistent with other studies reporting that weight loss may be an antecedent of AD. However, the findings do suggest that high caloric intake may be an aspect of diet that should be investigated further with regard to its association with risk of AD.

Medical Factors

Vascular factors. Factors considered under this heading include diabetes mellitus, metabolic syndrome, hypertension, hyperlipidemia, and homocysteine.

Diabetes mellitus. We identified two good quality systematic reviews that examined the association between diabetes mellitus and the development of AD.41,42 The review by Biessels and colleagues41 included 11 cohort studies (101,972 subjects); five were from the United States, four from Western Europe, one from Canada, and one from Japan. Publication dates ranged between 1989 and 2005. Studies were selected that were longitudinal, had subjects recruited at the population level, and where the incidence of dementia could be compared between subjects with and without diabetes mellitus. Studies that included people with cognitive impairments but not dementia were excluded, as were studies of the prevalence of diabetes in patients with dementia. Data were presented for the effect of diabetes on any dementia, vascular dementia, and Alzheimer’s disease, but the focus here is on Alzheimer’s disease.

The review authors reported that the quality for cohort designs was fair to good, with 9 of 11 studies receiving a score of at least 6 points out of 10 using a scale that judged population selection and recruitment, participation at followup, dementia assessment and diagnosis, and data analysis. Length of study followup ranged from 2.1 to 35 years, with the age of recruited subjects ranging from 45 to 84 years. Diagnosis of diabetes varied. Six studies relied on medical history or medication use and did not assess blood glucose concentration in all participants. The prevalence of diabetes ranged from 8.8 percent to 35 percent of the study population. Six studies also assessed diabetes only at baseline, making it likely that a number of subjects who developed incident diabetes were assigned to the non-diabetic group. Studies did not distinguish between type 1 and 2 diabetes, but since all participants were middle-aged or older adults and type 2 diabetes predominates in this age group, almost all were likely to have type 2 diabetes. Data for diabetes duration, hemoglobin A1c, and microvascular complications were not regularly reported. Most studies used the Diagnostic and Statistical Manual of Mental Disorders (DSM) III or IV criteria for the diagnosis of dementia and NINCDS-ADRDA criteria for AD. Six studies relied on a consensus committee to establish a diagnosis of dementia.

Biessels et al. did not combine data because of variability of study design, and assessment of heterogeneity was not reported.41 The possibility of publication bias was considered, but a funnel plot was not performed. Covariates commonly considered included age, sex, education, and, in some studies, baseline cognitive performance and cardiovascular risk factors. Nine of 10 studies reported that participants with diabetes had an increased risk of developing Alzheimer’s disease, with relative risk, odds ratios, or hazard ratios greater than 1 (range 1.2 to 2.4), and with 95 percent confidence intervals > 1.0 in five studies. Adjustment for vascular risk factors was examined in five studies; four of the five reported a relative risk or hazard ratio greater than 1 (range in all studies 0.8 to 2.0), but for only two of these did the adjusted HR exclude no effect. Two studies examined the risk of developing Alzheimer’s disease in individuals who had midlife assessment of diabetes status. Yamada et al.91 reported an odds ratio (OR) of 4.4 (p < 0.01), and Curb et al.92 reported a relative risk (RR) of 1.0 (95 percent confidence interval [CI] 0.5 to 2.0) for individuals with diabetes mellitus developing Alzheimer’s disease.

Longitudinal studies in which diabetes and dementia were assessed in late life demonstrated fairly consistent results. Seven of 11 studies reported a 50 to 100 percent increase in the incidence of AD. Two studies examined the effect of APOE genotype and found that the presence of an e4 allele doubled the relative risk of dementia in diabetics compared to participants with either of these risk factors alone. The authors concluded the literature suggests that the risk of AD is increased in patients with diabetes mellitus.41

A subsequent systematic review and meta-analysis by Lu and colleagues identified reports from two additional cohort studies examining the association between diabetes mellitus and the incidence of AD.42 Akomolafe et al. reported results from 2210 participants in the Framingham study and found that diabetics had a non-statistically significant increase in risk compared to non-diabetics (RR 1.15; 95 percent CI 0.65 to 2.05).93 Similar results were found in the Cache County Study of Memory, Health and Aging, where the RR for AD in diabetics was reported to be 1.33 (95 percent CI 0.66 to 2.46).94 In their systematic review, Lu and colleagues,42 in contrast to Biessels and colleagues,41 judged that studies examining the effect of diabetes on dementia risk were sufficiently homogeneous, based on similar criteria for the diagnosis of diabetes and dementia, that meta-analysis was appropriate. They performed a meta-analysis on the adjusted relative risk of diabetics developing Alzheimer’s disease using data from eight longitudinal, prospective cohort studies. The combined RR, using a fixed-effect model, was 1.39 (95 percent CI 1.17 to 1.66). A test for heterogeneity did not reveal significant heterogeneity between studies (χ-squared Q-test statistic 3.269, df = 5; p = 0.659), and visual inspection of funnel plots and Egger’s test did not suggest publication bias. Lu and colleagues concluded, therefore, that diabetes mellitus was associated with an increased incidence of AD.42
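Fixed-effect pooling of this kind is typically an inverse-variance combination of log relative risks, with each study's standard error recovered from its 95 percent confidence interval. The sketch below illustrates the arithmetic on hypothetical (rr, ci_low, ci_high) inputs; it is not the actual data or analysis code from Lu and colleagues.

```python
import math

def pooled_rr_fixed(study_results):
    """Inverse-variance fixed-effect pooling on the log relative risk.

    `study_results` is a list of (rr, ci_low, ci_high) tuples; the
    standard error of each log RR is recovered from the 95 percent
    CI width. Inputs are illustrative, not data from the review.
    """
    z95 = 1.96
    weights, weighted_logs = [], []
    for rr, lo, hi in study_results:
        se = (math.log(hi) - math.log(lo)) / (2 * z95)
        w = 1.0 / se ** 2          # weight = inverse variance
        weights.append(w)
        weighted_logs.append(w * math.log(rr))
    log_pooled = sum(weighted_logs) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return (math.exp(log_pooled),
            math.exp(log_pooled - z95 * se_pooled),
            math.exp(log_pooled + z95 * se_pooled))
```

A single study passed through this function returns its own RR and CI unchanged; adding studies narrows the pooled interval, which is why the combined estimate (RR 1.39; 95 percent CI 1.17 to 1.66) can exclude 1.0 even when several contributing studies do not.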

We identified two additional studies on diabetes mellitus and the risk of developing AD that were published after the above-described systematic reviews (Table 12). Irie et al. examined the role of diabetes mellitus and APOE genotype on incidence of dementia in over 2000 participants in the Cardiovascular Health Study.95 They found that diabetes or inheritance of APOE e4 alone each increased the risk of developing AD (OR 1.62; 95 percent CI 0.98 to 2.67; and OR 2.50; 95 percent CI 1.84 to 3.40, respectively) compared to individuals without diabetes or an APOE e4 allele. The OR for subjects with both diabetes mellitus and an APOE e4 allele was 4.99 (95 percent CI 2.70 to 9.20), which the authors suggested was evidence for an interaction between the risk factors. Xu et al. prospectively followed 1248 subjects from the Kungsholmen Project for an average of 5.1 years.96 The average age of participants was approximately 82 years, and 75 percent were women. In a fully adjusted model the hazard ratio of incident AD for individuals with borderline diabetes was 1.87 (95 percent CI 1.11 to 3.14), and for undiagnosed diabetes the HR was 3.29 (95 percent CI 1.20 to 9.01), indicating an increased risk of AD. The risk for subjects with diagnosed diabetes was not statistically different from the risk for non-diabetics. No analysis of dementia risk as a function of hemoglobin A1c level was reported.
Xu and colleagues96 suggested several possible explanations for their findings that borderline and undiagnosed diabetes place subjects at greater risk for AD than those with diagnosed diabetes: The number of subjects with diagnosed diabetes and dementia was relatively small, limiting the statistical power to identify a significant association; diabetics – because they were aware of their condition – may have altered their lifestyle, while subjects with borderline or undiagnosed diabetes would not be aware of their condition and, therefore, would not have modified their lifestyle; and the degree of hyperinsulinemia and insulin resistance might be different in borderline and undiagnosed diabetics than in known diabetics. Hyperinsulinemia and insulin resistance have been implicated in AD pathogenesis.

Table 12. Diabetes mellitus and risk of developing AD.

Table 12

Diabetes mellitus and risk of developing AD.

In summary, individual prospective, longitudinal cohort studies, two systematic reviews, and a meta-analysis all report an association between diabetes mellitus and incident Alzheimer’s disease, but results in individual studies vary. Studies also suggest that inheriting an APOE e4 allele further increases the risk of AD in diabetics. Limitations of the included studies include variable criteria for the diagnosis of diabetes and failure to consider duration of diabetes and degree of glycemic control. Additional research examining the age of diabetes onset (mid-life versus late-life onset), comorbid conditions (such as vascular risk factors), type of treatment (diet versus oral agents versus insulin), and the role of hyperinsulinemia on dementia risk is needed.

Metabolic syndrome. We identified two longitudinal, prospective studies that examined the association between metabolic syndrome and incident AD (Table 13).97,98 Both studies were conducted in the United States and together they involved a total of 5603 subjects. Both studies recruited older adults from the community and used NINCDS-ADRDA criteria to establish a diagnosis of AD. Time between screening and followup ranged from 4.4 to 28 years, and the age range for the studies was comparable (mean 76 versus 78 years). Muller et al.,98 in the Northern Manhattan study, defined the metabolic syndrome according to National Cholesterol Education Program 3rd Adult Treatment Panel Guideline (NCEP-ATPIII), and Kalmijn et al.,97 in the Honolulu-Asia Aging Study (HAAS), used a novel definition (described below). The NCEP-ATPIII criteria require at least three of the following for a diagnosis of metabolic syndrome:

Table 13. Metabolic syndrome and risk of developing AD.

Table 13

Metabolic syndrome and risk of developing AD.

  1. Waist measurement > 88 cm for women or > 102 cm for men.
  2. Hypertriglyceridemia (≥ 150 mg/dL [≥ 1.69 mmol/L]).
  3. Low high density lipoprotein (HDL; men < 40 mg/dL [< 1.03 mmol/L]); women < 50 mg/dL [< 1.29 mmol/L]).
  4. High blood pressure (systolic blood pressure [SBP] ≥ 130 mmHg; diastolic blood pressure [DBP] ≥ 85 mmHg) or currently using an antihypertensive medication.
  5. High fasting glucose (≥ 110 mg/dL [≥ 6.10 mmol/L]) or currently using anti-diabetic medication (insulin or oral agents).
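The NCEP-ATPIII rule amounts to an "at least three of five" checklist. A minimal sketch of that rule in Python follows; the field names and example values are hypothetical, not drawn from either study's data collection.

```python
def has_metabolic_syndrome_atp3(p):
    """NCEP-ATPIII rule: metabolic syndrome if >= 3 of 5 criteria are met.

    `p` is a dict of one subject's measurements (units: cm, mg/dL, mmHg).
    Key names are illustrative, not from the source studies.
    """
    waist_cut = 88 if p["sex"] == "F" else 102
    criteria = [
        p["waist_cm"] > waist_cut,                                   # 1. waist
        p["triglycerides_mg_dl"] >= 150,                             # 2. triglycerides
        p["hdl_mg_dl"] < (50 if p["sex"] == "F" else 40),            # 3. low HDL
        p["sbp_mmhg"] >= 130 or p["dbp_mmhg"] >= 85
            or p["on_bp_meds"],                                      # 4. blood pressure
        p["fasting_glucose_mg_dl"] >= 110 or p["on_diabetes_meds"],  # 5. glucose
    ]
    return sum(criteria) >= 3
```

For example, a woman with an elevated waist measurement and hypertriglyceridemia but normal HDL, blood pressure, and glucose meets only two criteria and would not be classified as having the syndrome under this rule.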

In contrast, HAAS defined metabolic syndrome as the sum of seven factors – increased body mass index (BMI), elevated total cholesterol, elevated triglycerides, elevated DBP and SBP, elevated random post-load glucose, and increased subscapular skinfold thickness – expressed as the individual’s z score for that risk factor (calculated as the value compared to the total population, assuming a normal distribution [−4 SD to +4 SD]; scores ranged between −12.8 to 13.4, with higher scores indicative of the presence of more risk factors).97
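The HAAS composite can be read as a sum of per-factor z-scores, each truncated to the −4 to +4 SD range before summing. A minimal sketch of that calculation, using illustrative factor names and data rather than the HAAS variables:

```python
import statistics

def z_score_sum(subject, population):
    """Sum of a subject's z-scores across risk factors, HAAS-style.

    `subject` maps factor name -> subject's value; `population` maps
    factor name -> list of values for the whole cohort. Factor names
    and the truncation bounds follow the description in the text;
    all data passed in are hypothetical.
    """
    total = 0.0
    for factor, value in subject.items():
        values = population[factor]
        mean = statistics.mean(values)
        sd = statistics.stdev(values)
        z = (value - mean) / sd
        # truncate extreme values to the reported -4 to +4 SD range
        total += max(-4.0, min(4.0, z))
    return total
```

A subject at the cohort mean on every factor scores 0; a subject one SD above the mean on each of seven factors scores 7, which is how the reported range of roughly −12.8 to 13.4 arises.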

The variation in definition of metabolic syndrome makes it difficult to compare results between studies. Comparisons are further limited because of sex and ethnic differences between the two studies. The Northern Manhattan population98 was predominantly female (67 percent), 39 percent Caribbean Hispanic, 31 percent African-American, and 30 percent white, while HAAS97 was restricted to Japanese-American men. Fifty-five percent of the participants in the Northern Manhattan study had metabolic syndrome, and 29 percent of the HAAS participants had more than two elevated risk factors of the seven examined. Both studies adjusted for important confounders, such as age, sex, education, and baseline cognitive performance. Muller et al.98 reported baseline differences between participants with and without metabolic syndrome; subjects with metabolic syndrome were more likely to be female, Hispanic, smokers, and less educated. There was, however, no difference in the risk of developing AD (RR 0.9; 95 percent CI 0.6 to 1.3). Analysis of the components of metabolic syndrome revealed that only diabetes was associated with a statistically significant increase in total dementia (HR 1.6; 95 percent CI 1.2 to 2.2), but the risk for AD did not reach statistical significance (HR 1.4; 95 percent CI 1.0 to 2.1). The RR for AD per 1 unit increase in z-score sum of metabolic risk factors in HAAS was 1.0 (95 percent CI 0.94 to 1.06), and for AD with cerebrovascular disease 1.04 (0.95 to 1.15).97 Investigators also divided z-scores into quartiles and found that there was a trend toward increased risk of dementia (all subtypes) in subjects assigned to quartiles 2, 3 and 4, but data were not presented for AD.

In summary, in these two studies, metabolic syndrome, defined by two different diagnostic criteria, was not associated with a higher risk of AD. These conclusions are limited by the small number of published studies, differences in the study populations, and the lack of uniform criteria for diagnosing metabolic syndrome. Additional analysis of subsets of the risk factors included in metabolic syndrome may provide better insight into whether metabolic syndrome is a clinically valid construct for predicting dementia risk.

Hypertension. We did not identify any good quality systematic reviews evaluating hypertension and risk of developing AD. Our independent search identified 11 eligible publications,99–109 describing 10 different cohort studies that examined the association between hypertension and incident AD. These studies are summarized in Table 14; detailed evidence tables are provided in Appendix B. Seven studies were derived from community cohorts in the United States, of which two dealt specifically with subjects of Japanese descent.99,102 Subjects from the remaining three cohorts were from Finland, Sweden, and Canada. More than 18,700 subjects were included, with 1511 incident cases of AD. Followup ranged from 5 to 27 years. All studies reported good procedures for determination of AD outcome. Only four studies102,106–108 reported results adjusted for antihypertensive use. Two other studies103,109 did check for interactions between antihypertensive medications and hypertension and stated that the change to reported results was minimal.

Table 14. Hypertension and risk of developing AD.

Table 14

Hypertension and risk of developing AD.

Definitions of hypertension varied. Two of the studies used self-reported hypertension to establish exposure.104,105 Bias could be introduced by subjects who were unaware of their hypertension, decreasing the likelihood of detecting an association. Neither study found an association between reported hypertension and incident AD.

When SBP > 140 mmHg was used as the definition,99,101,106,107 the results were not statistically significant except for a single study101 that reported an adjusted OR of 1.97 (95 percent CI 1.03 to 3.77) from a Scandinavian cohort (FINMONICA). An analysis of the HAAS cohort (discussed below) did find an association between never-treated hypertension, defined as SBP > 140 mmHg (compared to SBP < 120 mmHg), and non-specific dementia. When hypertension was defined as SBP > 160 mmHg,100,102,103,107 only one of four studies100 found a significant result, again in an analysis of the FINMONICA cohort. The FINMONICA cohort measured blood pressure in mid-life, 15 years prior to cognitive testing.

The Religious Orders Study108 followed a cohort of retired Catholic clergy and treated blood pressure as a continuous variable. There was no relationship between SBP or DBP and incident AD. Results from this highly educated cohort may not be generalizable to others, as the mean SBP was 134 mmHg (64 percent had SBP < 140 mmHg) and the mean DBP was 75 mmHg (93 percent had DBP < 90 mmHg).

It is possible that all the cohorts formed later in life99,103–108 had a selection bias in that if hypertension predisposes to AD and to death, those subjects with hypertension would have selectively died prior to cohort formation. By contrast, the FINMONICA cohort100,101 was followed for 21 years, and the HAAS cohort102 for 27 years.

Measures of DBP also did not show robust associations with incident AD. Low DBP (< 70 mmHg) was examined in the Kungsholmen cohort and was significantly associated with incident AD (RR 1.9; 95 percent CI 1.2 to 3.0).107 High DBP was examined as a risk factor in six studies (seven papers)100–103,107–109 and was not found to have a significant association with incident AD, with the exception of subgroups of the HAAS cohort. The HAAS cohort102 was formed from the Honolulu Heart Program (1965 to 1971), when many hypertensive patients were not treated. Investigators found significantly elevated odds ratios for AD in those with untreated high DBP (OR 4.47; 95 percent CI 1.53 to 13.09), but not untreated high SBP (OR 1.22; 95 percent CI 0.37 to 4.04). The HAAS cohort is distinguished by having the longest followup of these studies, with a mean of 27 years. In other analyses of the HAAS cohort,110 non-specific dementia was associated with never-treated hypertension, defined as SBP > 140 mmHg (compared to SBP < 120 mmHg), with an HR of 2.66 (95 percent CI 1.51 to 4.68).

In summary, in the cohorts described here, the association between blood pressure and incident AD was significant in only one cohort (the FINMONICA cohort100,101), with untreated diastolic hypertension significantly associated with incident AD in one other population (the HAAS cohort102). These two populations, however, were followed for a considerably longer period of time than the other cohorts.

Hyperlipidemia. We identified one good quality systematic review examining total cholesterol as a possible risk factor for AD.40 Eight included cohort studies, involving 14,331 subjects, examined the association between incident AD and total cholesterol. Three of the studies used cholesterol measured in mid-life, one used the average of multiple cholesterol measurements over 30 years, and four used cholesterol measured in later life; for this reason, the studies were considered too heterogeneous to combine in a single analysis. Followup ranged from 4.8 to 29 years, with a mean of approximately 13 years.

Four studies examined mid-life cholesterol in relation to incident AD. One looked at the Framingham cohort.111 Cholesterol levels were averaged over the time of the study. No association was found between cholesterol measured in this way and incident AD. Another study112 found that a decreasing cholesterol level from mid- to late life was associated with increased risk of AD (β = −0.33; p = 0.03). Two studies found that high cholesterol in mid-life was associated with an increased risk of AD. Kivipelto et al.113 found that, in the FINMONICA (Finnish Multinational Monitoring of Trends and Determinants in Cardiovascular Disease) and North Karelia Project cohort, followed for a mean of 21 years (range 11 to 26 years), cholesterol ≥ 6.5 mmol/L (251 mg/dL) in mid-life was associated with an OR of 2.8 (95 percent CI 1.2 to 6.7) for incident AD in late life. Notkola et al.114 followed 444 survivors from a cohort formed in 1959, reassessed after 5 to 30 years, and found an OR of 3.1 (95 percent CI 1.2 to 8.5) for an average cholesterol ≥ 6.5 mmol/L, based on measurements taken in 1959, 1964, 1969, and 1974, when the cohort was in mid-life.

Four studies looked at late-life cholesterol and AD. Three studies were considered similar enough for fixed-effect meta-analysis.115–117 Combined sample size for these three was 10,195 controls and 599 cases of incident AD. No difference was found between the lowest quartile of total cholesterol and any of the other quartiles in the incidence of AD. The relative risk (RR) between first and fourth quartile was 0.85 (95 percent CI 0.65 to 1.12; z = 1.17; n = 5526; p = 0.24). Yoshitake et al.118 included both prevalent and incident cases of AD in their analysis and found that for each increase of one standard deviation in total cholesterol the relative risk for AD was 1.1 (0.80 to 1.51).

In summary, based on this systematic review, there is evidence to suggest that hypercholesterolemia in mid-life is associated with increased risk of AD later. There is no evidence in these studies to suggest that late-life cholesterol levels are related to incident AD. If mid-life but not late-life cholesterol is related to increased risk, then averaging cholesterol over decades of life, as was done with the Framingham cohort, would not be expected to show a relationship.

Our search did not reveal any other prospective cohort studies meeting our inclusion criteria and addressing the relationship between hyperlipidemia and incident AD.

Homocysteine. Our search identified four cohort studies, involving 2662 subjects, evaluating the association between homocysteine and incident AD (Table 15). Two cohorts were from U.S. communities,119,120 and two were from Western Europe;57,121 all studies recruited community samples. Three of the four cohorts analyzed frozen plasma from fasting subjects, which may give a better estimate of bioavailable folate than non-fasting samples. The fourth study121 did not specify whether subjects were fasting. The studies defined increased plasma homocysteine levels differently. Two studies57,120 compared the highest quartiles of homocysteine in their samples to the lowest quartile, one examined log-transformed homocysteine,119 and one compared those subjects whose homocysteine doubled over 2.5 years to all others.121 The duration of followup ranged from 1 to 13 years. Alzheimer’s disease was diagnosed using NINCDS-ADRDA criteria; however, one study57 relied on telephone or informant interviews, medical records, or death certificates for 15 percent of the sample. All studies adjusted for potential confounders, but two57,119 had a large number of model variables compared to the number of incident cases of AD, which may decrease the replicability of their results.

Table 15. Homocysteine and risk of developing AD.

Table 15

Homocysteine and risk of developing AD.

Three studies reported adjusted results for baseline homocysteine using approximately the same threshold (> 14 or > 15 μmol/L).57,119,120 We combined these studies using a random-effects model (Figure 3). A test for heterogeneity suggested significant variability among studies (Q statistic = 6.378, p = 0.04, I2 = 68.6 percent). We examined design features qualitatively and could not explain the variability. The pooled estimate did not show a statistically significant association between elevated homocysteine and incident AD, and the confidence interval was wide (RR 1.53; 95 percent CI 0.94 to 2.49).

Figure 3. Meta-analysis of three cohort studies on homocysteine and risk of developing AD.

Figure 3

Meta-analysis of three cohort studies on homocysteine and risk of developing AD. Combined estimate is given in bottom row.
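Random-effects pooling of this kind is commonly done with the DerSimonian-Laird estimator, with Cochran's Q and I² quantifying between-study heterogeneity. The sketch below illustrates the computation on hypothetical (rr, ci_low, ci_high) inputs; it is not the analysis code behind Figure 3.

```python
import math

def random_effects_rr(study_results):
    """DerSimonian-Laird random-effects pooling of log relative risks.

    Returns (pooled RR, Cochran's Q, I-squared in percent). Each input
    is a (rr, ci_low, ci_high) tuple; all values are illustrative.
    """
    z95 = 1.96
    logs = [math.log(rr) for rr, _, _ in study_results]
    ses = [(math.log(hi) - math.log(lo)) / (2 * z95)
           for _, lo, hi in study_results]
    w = [1.0 / se ** 2 for se in ses]
    fixed = sum(wi * li for wi, li in zip(w, logs)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect mean
    q = sum(wi * (li - fixed) ** 2 for wi, li in zip(w, logs))
    df = len(study_results) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    # between-study variance (tau^2), floored at zero
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    # re-weight each study by 1 / (within-study variance + tau^2)
    w_star = [1.0 / (se ** 2 + tau2) for se in ses]
    pooled = sum(wi * li for wi, li in zip(w_star, logs)) / sum(w_star)
    return math.exp(pooled), q, i2
```

When studies agree, tau² is zero and the result matches a fixed-effect pooling; when they disagree, as with the I² of 68.6 percent reported above, the added between-study variance widens the pooled confidence interval, which is consistent with the wide interval reported for the combined homocysteine estimate.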

Other classifications of elevated homocysteine found variable associations. For the highest homocysteine quartile (mean homocysteine 27.44 μmol/L), Luchsinger et al.120 found an increased risk for AD (HR 2.0; 95 percent CI 1.2 to 3.5). However, this association was no longer statistically significant after adjusting for age, sex, education, APOE e4 status, and history of stroke (HR 1.3; 95 percent CI 0.8 to 2.3). Blasko et al.121 evaluated the association between change in homocysteine and incident AD. For subjects whose homocysteine level doubled over 2.5 years, the risk of AD was increased (OR 4.2; 95 percent CI 1.6 to 11). It is not clear how frequently homocysteine levels doubled over the 2.5-year duration of the study, but the wide confidence interval may indicate that a relatively low number were in this group.

In summary, adjusted models in three of the four included cohort studies found an association between increased baseline homocysteine and the development of incident AD. Point estimates for the relative risk varied substantially across studies, from modest (1.3) to large (4.2). However, a pooled estimate using a common classification of elevated homocysteine did not reach statistical significance. Homocysteine levels rise with age, renal insufficiency, coffee and tobacco use, and the sequelae of heavy alcohol use. Differences among the cohorts with regard to these factors may have contributed to the variable findings, but there were not enough studies to evaluate this possibility formally.

Other medical factors. Factors considered under this heading include sleep apnea, obesity, and traumatic brain injury (TBI).

Sleep apnea. We did not identify any good quality systematic reviews or primary studies that evaluated the association between sleep apnea and risk of developing AD.

Obesity. We identified one good quality systematic review that examined the association between various measures of obesity and the development of AD.47 The review included 10 prospective cohort studies published between January 1995 and June 2007, of which four were conducted in the United States, two among Japanese men (one of which was in Honolulu), two in Sweden, and one each in Finland and France. Prospective cohort studies were selected if the sample size was > 100; followup was ≥ 2 years; exposure was recorded as BMI, obesity/overweight, a measure of central obesity, or a combination; outcomes reported were AD or vascular dementia (VAD), or a combination; and if the outcomes were reported as odds ratio, relative risk, or hazard ratio, or as data from which these measures could be calculated.

Of the 10 eligible studies included in the review, 4 studies involving 15,688 subjects examined the relationship between AD and obesity and were combined in a meta-analysis.94,101,122,123 The results from the other studies included dementia as a whole or vascular dementia separately, but did not report associations with AD. All participants were ≥ 65 years of age at the time of cognitive testing; however, age at baseline ranged from 40 to 45 years old to more than 77 in some studies. Length of followup ranged from 3.2 to 36 years. All studies measured BMI, and one study also measured change in BMI.122 Level of covariate adjustment was reported at the individual study level. Although the review included both AD and VAD, both combined and individual analyses were done for the two outcomes, and the RR for AD was reported separately. Study quality for the primary studies was not assessed in this review.

Studies were combined for meta-analysis using a fixed-effect model, as the heterogeneity among the four studies was not statistically significant. A funnel plot that included all studies did not reveal significant publication bias. This was confirmed by two other numerical tests (Egger’s regression asymmetry test: −0.14 ± 0.54, p = 0.791; and Begg-adjusted rank correlation test: z = 0.62; p = 0.533). For the four cohort studies, compared with normal-weight subjects, those who were obese had a higher risk of developing AD (RR 1.80; 95 percent CI 1.00 to 3.29). An analysis was also done to estimate the RR of dementia based on weight status; however, this analysis combined AD and VAD as outcomes.

We identified three additional prospective cohort studies published after the beginning of 2008 which examined the association between obesity and Alzheimer’s disease.124–126 These studies are summarized in Table 16; detailed evidence tables are provided in Appendix B. The first study was conducted in a community in Sweden where 1255 participants who were enrolled in the Kungsholmen Project were followed for 9 years.124 The two other studies were conducted in the United States, but the population in one of these studies was restricted to those of Japanese ethnicity.125 The average followup periods in the U.S. studies were 5.9 years126 and 7 to 9 years.125 In all three studies, selection bias was minimized by recruiting participants from the community and by excluding those who had dementia at baseline. Two of the three studies compared baseline characteristics by weight,125,126 while one examined baseline characteristics only by sex.124 All three studies directly measured weight for the calculation of BMI; one study also considered midlife weight by self-report as an additional risk factor.126 BMI was categorized into four groups in each study; however, the cut-offs used were slightly different, as follows: in Fitzpatrick et al., underweight (BMI < 20), normal weight (20–25), overweight (25–30), and obese (> 30);126 in Hughes et al., obese (BMI ≥ 25.0), overweight (23.0–24.9), normal (18.5–22.9), and underweight (< 18.5);125 and in Atti et al., obese (BMI ≥ 30), overweight (25–29.9), normal weight (20–24.9, reference category), and underweight (< 20).124 Although all studies assessed both AD and other types of dementia, the investigators conducted a separate analysis with AD only as the outcome.

Table 16. Obesity and risk of developing AD.

Table 16

Obesity and risk of developing AD.

Atti et al.124 concluded that higher BMI was associated with a lower risk of developing AD; that is, overweight subjects had a lower risk of developing dementia over 9 years (HR 0.66; 95 percent CI 0.50 to 0.88). The authors also concluded that weight loss may be a marker of incipient dementia. Hughes et al. concluded that after controlling for covariates except APOE, higher baseline BMI was associated with a decreased risk of AD (HR 0.56; 95 percent CI 0.33 to 0.97); however, this association was no longer significant after controlling for APOE (HR 0.68; 95 percent CI 0.31 to 1.51). Also, a smaller decline in BMI was associated with a lower risk of incident AD (HR 0.21; 95 percent CI 0.06 to 0.80).125 Fitzpatrick et al. concluded that underweight persons (BMI < 20) had an increased risk of dementia (HR 1.62; 95 percent CI 1.02 to 2.64), whereas being overweight (BMI 25–30) was not associated with risk (HR 0.92; 95 percent CI 0.72 to 1.18), and being obese reduced the risk of dementia (HR 0.63; 95 percent CI 0.44 to 0.91) compared with those with normal BMI. In the same study, when the association between midlife BMI and dementia was examined, there was a reversal in the direction of risk: an increased risk of dementia was found for the obese (BMI > 30) versus those of normal weight (BMI 20–25), adjusted for demographics (HR 1.39; 95 percent CI 1.03 to 1.87) and for cardiovascular risk factors (HR 1.36; 95 percent CI 0.94 to 1.95).126

In conclusion, the meta-analysis published as part of a systematic review found that obesity was associated with an increased risk of AD, while all three prospective cohort studies published after the meta-analysis found that a higher BMI was associated with a lower risk of developing AD. These conflicting results could be explained by age differences across the study populations. The reversal of the direction of risk found by Fitzpatrick et al.126 is interesting, as it implies that BMI does not consistently predict dementia risk across the lifespan, and that this risk might change based on the age of exposure to obesity. Also, decreasing BMI might be a sign of early dementia rather than a cause; a causal relationship between weight loss and dementia cannot yet be established.

Traumatic brain injury (TBI). We identified one good quality systematic review that examined the association between traumatic brain injury (TBI) and the development of AD in case-control studies.46 We did not generally consider case-control studies for this report due to the numerous limitations of such studies compared to cohort studies. However, in the case of TBI, there were few prospective cohort studies that met our eligibility criteria, but meta-analyses have been done using case-control studies. In the general population, TBI is a relatively low prevalence event, meaning that large sample sizes are necessary to have sufficient power to detect an association in general community samples. In addition, TBI is not an exposure that lends itself to RCTs. For these reasons, we decided to include the meta-analyses described here in our review. The review included 15 case-control studies.127–141 There were a total of 2653 subjects in the combined sample, of which 164 had exposure to TBI and 2489 did not have a reported history of TBI. As expected in case-control studies, the cases were demented at baseline and the controls were not demented. Six of the studies were conducted in the United States, six in European countries, and one each in Canada, Australia, and China. Studies were included if their definition of TBI required loss of consciousness; they used either individual or group matching of cases and controls; they used NINCDS-ADRDA or DSM diagnostic criteria; they used predefined inclusion criteria for controls to rule out the possibility of dementia; data on TBI were collected from informants for both cases and controls (symmetrical data collection); and the TBI occurred prior to onset of AD. The authors did not conduct a structured quality assessment of the studies reported in this systematic review; however the inclusion/exclusion criteria provided a limited indirect assessment of quality. 
The review did not provide information on the length of followup, followup rates, or the analytical covariates used in the studies. Exposure to TBI with loss of consciousness was determined by proxy report for both cases and controls. All studies used the DSM and/or NINCDS-ADRDA diagnostic criteria. Standard χ2 tests at a 5 percent significance level showed no significant heterogeneity (all p ≥ 0.58). Because there was no evidence of heterogeneity, studies were combined using fixed-effect meta-analyses. The results from these analyses are shown in Table 17.

Table 17. Traumatic brain injury and risk of developing AD – results from case-control studies reviewed by Fleminger et al., 2003.

Table 17

Traumatic brain injury and risk of developing AD – results from case-control studies reviewed by Fleminger et al., 2003.
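The fixed-effect pooling used in the review above is standard inverse-variance weighting of study odds ratios on the log scale, with Cochran's Q as the heterogeneity test. A minimal sketch follows; the study values are hypothetical placeholders for illustration only, not the actual data from the Fleminger et al. review:

```python
import math

# Hypothetical (OR, 95% CI lower, 95% CI upper) triples for illustration only --
# NOT the actual study values from the Fleminger et al. review.
studies = [(2.5, 1.1, 5.7), (1.3, 0.6, 2.8), (0.9, 0.4, 2.0)]

def fixed_effect_pool(studies):
    """Inverse-variance fixed-effect pooling of odds ratios on the log scale."""
    weights, effects = [], []
    for odds_ratio, lo, hi in studies:
        # Recover the standard error of log(OR) from the 95% CI width
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)
        weights.append(1.0 / se**2)
        effects.append(math.log(odds_ratio))
    pooled_log = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    # Cochran's Q statistic for heterogeneity (compare to chi-square, df = k - 1)
    q = sum(w * (e - pooled_log) ** 2 for w, e in zip(weights, effects))
    ci = (math.exp(pooled_log - 1.96 * se_pooled),
          math.exp(pooled_log + 1.96 * se_pooled))
    return math.exp(pooled_log), ci, q

pooled_or, ci, q = fixed_effect_pool(studies)
```

Because each study's weight is the inverse of its variance, larger studies dominate the pooled estimate; the Q statistic is referred to a chi-square distribution with k − 1 degrees of freedom to test for heterogeneity, as described in the review.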

The authors had planned to assess the association between TBI and APOE genotype as risk factors for AD, but they were unable to do so because only two of the included studies reported APOE genotype. Publication bias was not assessed formally, but the authors did attempt to assess for recall bias, a potential major weakness of case-control studies on individuals with dementia. When limiting the analyses to those cases and controls for whom the informant type was the same (e.g., informants were spouses in both groups), the association weakened slightly and became statistically nonsignificant (OR 1.42; 95 percent CI 0.75 to 2.67). This finding suggests that differential quality of informants for cases and controls in some studies may have resulted in a slight overestimate of the association between TBI and AD in the analyses combining all studies. Quality ratings of the studies were not provided, but the selection criteria may have increased the likelihood that higher quality studies were included in the review. However, some studies had small sample sizes, and one study limited the cases of AD to those with onset prior to age 65,134 potentially limiting the generalizability of the results. The authors concluded that TBI may confer an increased risk of AD in males only. They also advised that future studies should use medical records to document head injury and should use population-based cohort designs to avoid the limitations associated with case-control studies.

Due to the limitations inherent in case-control studies, we supplemented the above-described systematic review46 with a search for cohort studies. This search identified two eligible prospective cohort studies104,142 and one retrospective cohort study.143 These studies are summarized in Table 18; detailed evidence tables are provided in Appendix B. Two of the studies drew samples from the community,104,142 and one drew its sample from military hospitalization records in the early 1940s;143 this latter study included both community residents and institutionalized individuals. One study was conducted in the United States,143 one in Canada,104 and the third in Europe.142 Length of followup ranged from 2 to approximately 55 years. Two studies used self-report history of TBI, and one study used military medical records at baseline to characterize exposure. For two of the studies,142,143 the definition of TBI required loss of consciousness or post-traumatic amnesia associated with the injury, but the third study104 did not include this requirement. For all three studies, individuals were non-demented at baseline. Two of the studies used sample selection methods to minimize selection bias;104,142 due to the retrospective nature of the third study,143 it only partially met criteria for sample selection methods that minimize selection bias. Only one of the studies143 compared baseline characteristics to assess differences between exposed and unexposed. All three studies used standard criteria for the diagnosis of AD. Only one study143 reported that the cognitive diagnoses were assigned blind to exposure status; the other two did not report this information. Analyses were appropriate and controlled for relevant potential confounders, but none of the studies reported a priori sample size calculations.

Table 18. Traumatic brain injury and risk of developing AD – cohort studies.

Table 18

Traumatic brain injury and risk of developing AD – cohort studies.

Two of the studies found that risk of AD did not increase in relation to a history of TBI.104,142 The third study reported that TBI was associated with increased risk of AD, and that there was a dose-response effect, with the risk being due to those with moderate and severe injuries.143 This latter study used an all-male sample. One of the other studies investigated potential differences by sex and found no differences in the association between TBI and AD for males and females.142 The inconsistency in results across studies may be due to the differences in the method of exposure ascertainment (i.e., self-report of lifetime history of exposure versus abstracted information from medical records) and to differences in the severity of traumatic brain injuries based on the sample characteristics (i.e., sample made up entirely of WWII veterans versus samples with limited number of war veterans). Two of the studies investigated the interaction between TBI and the APOE e4 allele on risk of AD. One study found no interaction effect.142 The other reported progressively larger hazard ratios with increasing numbers of e4 alleles, but the results did not reach statistical significance, possibly due to the relatively small sample size.143

As noted above, the methodological differences in the studies provide plausible reasons for the differing results. The systematic review of case-control studies found an association between TBI and AD in males only, with the OR for males exceeding 2.0, providing some support for the robustness of the result. The one cohort study with an all-male sample also reported that TBI increased risk of AD.143 The latter study used medical records from the 1940s to document exposure, thus avoiding reliance on self-report of lifetime history of injury. This study also reported a concordance rate of about 65 to 69 percent between documented TBI in military medical records and subsequent self- or proxy report of a history of TBI, suggesting that reliance on self- or proxy report may result in marked exposure misclassification. None of the studies could adequately assess whether there is a synergistic effect between the APOE e4 allele and TBI in altering risk of AD.

In summary, there is some evidence that TBI, even in early adulthood, may increase risk of AD years later. For those studies that reported an association between TBI and increased risk of AD, one study had an all-male sample, and the other found the association only in males. This potential gender-specific effect may be attributed to males being exposed to more severe TBIs given the dose-response association reported by one study.143 Further confirmation of this finding is needed using sources such as medical records to document exposure to TBI.

Psychological and emotional health. Factors considered under this heading include depression, anxiety, and resiliency.

Depression. We identified one good quality systematic review that examined the association between depression and the development of AD.45 The review included 11 cohort studies (95,104 subjects) and 9 case-control studies; 6 were from the United States, 5 from European countries, and 1 from Canada. Studies were selected that had sufficient data to calculate an odds ratio (OR) for risk of AD or AD-like dementia, had a control group for comparison, and made a clinical diagnosis of depression and AD. Study quality for cohort designs was fair; 4 of 11 had limitations in assessment of exposure,104,144–146 and 5 of 11 had important limitations in assessment of AD.144–148 Length of followup and level of covariate adjustment were not reported at the individual study level. Studies were classified as those using specific depression criteria (e.g., ICD or DSM) to ascertain exposure and studies using symptoms consistent with major depressive disorder but without specific criteria. Several studies used hospitalization for depression as an indicator of clinical depression, and these studies may not be applicable to individuals with milder depression. Studies were further classified into those assessing AD and AD-like dementia outcomes with structured criteria such as NINCDS-ADRDA and those using a description of diagnostic criteria for AD or AD-like dementia but without structured criteria.

Included studies were combined using a random-effects model. A test for heterogeneity suggested significant variability between studies that persisted when the analysis was limited to studies using a cohort design (p = 0.02). An I2 was not reported. A funnel plot suggested possible publication bias. For the 11 cohort studies, depression was associated with a statistically significant increased risk of AD (OR 1.90; 95 percent CI 1.55 to 2.33). An influence analysis that recalculated the summary OR iteratively, removing one study with each iteration, yielded ORs ranging from 1.81 (95 percent CI 1.45 to 2.24) to 2.03 (1.71 to 2.41), suggesting that no single study had a large effect on the estimate of association. A meta-regression analysis showed a positive association between the interval separating the diagnoses of depression and AD and the risk of AD, suggesting that depression is a risk factor for AD rather than a prodrome of the disease.
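The random-effects pooling and leave-one-out influence analysis described above can be sketched as follows. The (log OR, standard error) pairs are hypothetical placeholders, not the actual study-level data from the Ownby et al. review, and the between-study variance is estimated with the standard DerSimonian-Laird approach:

```python
# Hypothetical (log OR, standard error) pairs for illustration only --
# NOT the actual study-level data from the Ownby et al. review.
data = [(0.64, 0.20), (0.55, 0.30), (0.80, 0.25), (0.40, 0.35), (0.70, 0.15)]

def dersimonian_laird(data):
    """Random-effects pooled log OR using the DerSimonian-Laird estimator."""
    w = [1.0 / se**2 for _, se in data]
    y = [est for est, _ in data]
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))
    df = len(data) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)            # between-study variance estimate
    # Re-weight each study by the inverse of (within- + between-study variance)
    w_star = [1.0 / (se**2 + tau2) for _, se in data]
    return sum(wi * yi for wi, yi in zip(w_star, y)) / sum(w_star)

# Influence analysis: recalculate the pooled estimate iteratively,
# removing one study with each iteration
overall = dersimonian_laird(data)
leave_one_out = [dersimonian_laird(data[:i] + data[i + 1:])
                 for i in range(len(data))]
```

If every leave-one-out estimate stays close to the overall estimate, as in the review, no single study is driving the pooled association.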

The authors conducted stratified analyses for prospective versus retrospective study designs and specific or non-specific exposure and outcome assessments (Table 19). These analyses showed a statistically significant association for all subgroups. In the four studies using the most rigorous criteria for depression and AD diagnosis, the pooled OR was 2.23 (95 percent CI 1.71 to 3.09).

Table 19. Depression and risk of AD – results from stratified analyses by Ownby et al., 2006.

Table 19

Depression and risk of AD – results from stratified analyses by Ownby et al., 2006.

The authors concluded that depression may confer an increased risk for developing AD later in life.

We identified five additional eligible studies involving 4961 subjects published since the beginning of 2005 (Table 20). Two studies were conducted in the United States, one in Canada, one in the United States and Canada, and one in Europe. Four prospective cohort studies recruited older adults without dementia from the community and used NINCDS-ADRDA criteria to establish AD over 5 to 6 years of followup. One study evaluated the association between depressive symptoms and incident AD in subjects with amnestic MCI recruited for a 3-year trial of vitamin E or donepezil.149 All studies assessed current depressive symptoms at baseline using a validated instrument; the prognostic significance of a single such assessment is uncertain. Two studies150,151 went further and established a clinical history of depression requiring medical attention. All studies adjusted for some important confounders, but other potentially important confounders, such as comorbid psychiatric conditions, were not evaluated. Geerlings et al.151 found an association between depression requiring medical attention and incident AD when depression onset was before age 60 (HR 3.7; 95 percent CI 1.43 to 9.58), but not for late-onset depression (HR 1.71; 95 percent CI 0.62 to 4.74). The study reporting a “history of depression” did not find an association with AD at 5 years (OR 1.5; 95 percent CI 0.49 to 4.63), but the precision of the estimate was poor due to few incident cases.150 One study22 found an interaction between APOE e4 and depressive symptoms, but the only other study evaluating this interaction found no significant effect.149 All studies found an association between significant depressive symptoms at baseline and incident AD.

Table 20. Depression and risk of developing AD – recent cohort studies.

Table 20

Depression and risk of developing AD – recent cohort studies.

In summary, a previous systematic review found an association between clinical depression and incident AD that was robust to subgroup analyses by study design features. Despite variability in depression assessment, ranging from a self-reported history to hospitalization, the association with incident AD was reasonably consistent. However, publication bias may have inflated the summary estimate of effect. Since publication of the systematic review, five additional studies found an association between current depressive symptoms and incident AD; one of these studies found an association only for early-onset clinical depression requiring medical attention. Collectively, these observational studies suggest an association between a history of depression and incident AD.

Anxiety. We did not identify any systematic reviews or primary studies that evaluated the association between anxiety disorders and incident AD.

Resiliency. We did not identify any systematic reviews or primary studies that evaluated the association between psychological resiliency and incident AD.

Medications. Prescription and non-prescription drugs considered under this heading include statins, antihypertensives, anti-inflammatories, gonadal steroids, cholinesterase inhibitors, and memantine.

Statins. Our search identified six eligible studies examining the association between 3-hydroxy-3-methylglutaryl-coenzyme A (HMG-CoA) reductase inhibitors (statins) and incidence of AD. Five were cohort studies (17,840 subjects; described in six publications),153–158 and one was a secondary analysis of data from an RCT (2223 subjects).159 No recent good quality systematic reviews were identified. All studies drew samples from the community, five in the United States and one in The Netherlands, and then followed patients from 3 to 17 years. All but one study153 selected samples using methods to minimize selection bias and baseline differences between exposed and unexposed groups. Statin use was determined only at baseline – a crude measure of exposure – in two studies.153,158 AD outcomes were assessed using structured criteria, but only two studies reported assessments that were blind to exposure status.154,158 Analyses were appropriate and controlled for confounding, but only one study conducted an a priori sample size calculation.155 Study characteristics are summarized in Table 21.

Table 21. Statins and risk of developing AD.

Table 21

Statins and risk of developing AD.

Two studies154,159 showed a statistically significant association between statin use and a reduced risk of AD (Figure 4). Some studies reported stratified analyses. There was no significant difference in the strength of association for lipophilic and hydrophilic statins,154,157 duration of statin exposure,154–157 or presence of the APOE e4 allele.154 We used a random-effects model to compute a summary estimate of effect, which showed a significant association between statin use and decreased incidence of AD (HR 0.73; 95 percent CI 0.569 to 0.944). The forest plot, chi-square test (Q = 5.132, df = 5, p = 0.40), and I2 of 2.58 percent did not suggest significant statistical heterogeneity.
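The I2 statistic used here quantifies the percentage of total variation across studies attributable to heterogeneity rather than chance, and is computed directly from Cochran's Q and its degrees of freedom:

```python
def i_squared(q, df):
    """Higgins I^2: percent of total variation across studies due to
    between-study heterogeneity rather than chance."""
    return max(0.0, (q - df) / q) * 100.0

# Using the values reported above for the statin analysis (Q = 5.132, df = 5)
i2 = i_squared(5.132, 5)   # roughly 2.6 percent, i.e., negligible heterogeneity
```

By convention, I2 is truncated at zero when Q falls below its degrees of freedom; values near zero, as here, support combining the studies with minimal concern about heterogeneity.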

Figure 4. Meta-analysis of six cohort studies on statins and risk of developing AD.

Figure 4

Meta-analysis of six cohort studies on statins and risk of developing AD. Combined estimate is given in bottom row.

Subgroup analyses grouping studies by baseline versus ongoing assessment of exposure showed no association for the two studies assessing exposure at baseline only (HR 0.958; 95 percent CI 0.602 to 1.526) and a significantly reduced risk of AD for those with a more robust assessment of exposure (HR 0.655; 95 percent CI 0.485 to 0.986). Subgroup analysis by length of followup (< 5 years versus ≥ 5 years) did not show important differences in summary effect.

A funnel plot (Figure 5) did not suggest significant publication bias.

Figure 5. Funnel plot of standard error by log hazard ratio for statins and risk of developing AD.

Figure 5

Funnel plot of standard error by log hazard ratio for statins and risk of developing AD.

In summary, six observational studies with 4 to 17 years followup showed a moderate reduction in risk of AD with statin use.

Antihypertensives. We identified eight eligible cohort studies, described in ten publications,99,104,107,109,160–165 that examined the use of antihypertensives and risk of incident AD (Table 22). More than 20,000 subjects and 1300 cases of incident AD were included. Five studies recruited community samples from the United States;99,109,162–164 two of these focused on Americans of Japanese descent.99,163 The other studies were from Canada (Canadian Study of Health and Aging, or CSHA),104 the Netherlands (Rotterdam),161,165 and Sweden (Kungsholmen).107,160 Outcomes were measured well, although one study107 provided few details on dementia diagnoses. Exposure information came from self-report and/or inspection of pill bottles, except in the study by Haag et al.,165 which used pharmacy data. It is not known whether the same level of detail was captured when exposure was assessed with a questionnaire covering multiple risk factors, as in Lindsay et al.104 Followup rates were between 72 and 94 percent, with the exceptions of Yasar et al.,164 which did not report followup rates, and Peila et al.,163 in which it appears that 73 percent of normotensive subjects had missing or abnormal blood pressures and were excluded from the analysis.

Table 22. Antihypertensives and risk of developing AD.

Table 22

Antihypertensives and risk of developing AD.

Four of the eight cohort studies found a decreased risk for AD with antihypertensive medications. A significant effect of antihypertensive use on the risk of incident AD was found in the Kungsholmen cohort after 3-year160 and 6-year followup.107 In subjects with SBP ≥ 140 mmHg, the RR for AD with antihypertensives was decreased (0.6; 95 percent CI 0.4 to 0.8). When both the APOE e4 allele and high SBP were present, the RR for AD was 2.4 (1.4 to 4.2). This elevated risk was mitigated when antihypertensives were used (RR 1.0; 95 percent CI 0.6 to 1.6). The Kungsholmen cohort had a mean age of 81 years at baseline, so it would be expected to have a high incidence of dementia and prevalence of hypertension (HTN). It is possible that many APOE e4 subjects would have had prevalent AD and thus been ineligible for the cohort, selecting for individuals less susceptible to AD.

The Cache County cohort study also found an association between antihypertensive use and the development of AD.162 When patients taking antihypertensives at baseline were followed for 3 years, the hazard ratio (HR) for incident AD was 0.64 (95 percent CI 0.41 to 0.98). This cohort was younger at inception (mean age 74.1) and was followed for a shorter time. When different classes of antihypertensives were analyzed, the result was significant only for diuretics, with an adjusted HR of 0.61 (0.37 to 0.98). When controlled for current blood pressure, statistical significance was lost, although a significant result remained when the analysis was restricted to a cohort who self-reported hypertension.

The HAAS cohort163 was formed in mid-life and followed into later life. Compared to never-treated hypertensive subjects, antihypertensive use for > 12 years was associated with a significantly lower risk for incident AD (HR 0.35; 95 percent CI 0.16 to 0.78). Antihypertensive use for 0 to 5 years and 5 to 12 years was associated with a non-statistically significant reduced risk. There was no statistically significant difference in incident non-specific dementia between normotensive subjects (not on antihypertensives) and any hypertensive subjects treated with antihypertensives for > 12 years (HR 0.82; 95 percent CI 0.28 to 2.38). However, the confidence interval was wide and does not exclude a clinically significant difference.

Haag et al.165 report data from the Rotterdam cohort after a mean followup of 8 years (up to 13.3 years); pharmacy records were used to determine exposure. The HR for AD per year of antihypertensive use (compared to no use) was 0.94 (95 percent CI 0.90 to 0.99). Subjects 75 years of age and younger had a statistically significant lower risk of AD when antihypertensives had been used, but subjects over 75 years did not. Use of antihypertensives for ranges of duration was significant only for use between 1.6 to 5.3 years.

No association between antihypertensive use and incident AD was found in the other cohorts. The Kame99 and CSHA cohorts104 were followed for a mean of 6 and 5 years, respectively, at the time of these analyses. The Kame cohort99 had a mean age at baseline of 72.6. In the CSHA cohort, the mean age at baseline was 81 for those who developed incident AD at wave 2, and 72.9 for controls. Morris et al.109 followed a subset of the Boston EPESE study for up to 13 years. There was no clear association between HTN or use of antihypertensives and incident AD. Yasar et al.164 used the Baltimore Longitudinal Study on Aging (BLSA) to specifically examine the impact of calcium channel blockers (CCB), both dihydropyridine (DHP), and non-DHP; neither non-specific nor specific classes of CCBs were significantly associated with incident AD over the average 13 years of followup.

In summary, data from eight cohort studies do not show a consistent association between antihypertensive use and risk of developing AD. However, most studies found a decreased risk – albeit often a statistically non-significant one – with use of antihypertensive medication, suggesting a possible reduction in risk. The age of the cohort studied, length of followup, and prevalence of HTN do not consistently explain the variability in outcomes across studies.

Anti-inflammatories. Our search identified one good quality systematic review examining the impact of NSAIDs on risk of developing AD.35 This review included studies only if AD was diagnosed by validated criteria. Studies examining prevalent and incident AD and case-control studies were all included. Data on non-aspirin NSAIDs were summarized quantitatively. For our purposes, only the four cohort studies that evaluated incident AD were useful.104,166–168

Two studies analyzed the use of NSAIDs in community populations in the United States (Baltimore Longitudinal Study on Aging168 and Cache County study166). Lindsay and colleagues104 examined NSAID use and incident AD in the Canadian Study of Health and Aging, and In’t Veld and colleagues167 used the Rotterdam cohort. Two studies104,168 relied on self-report of subjects regarding use of NSAIDs. Zandi and colleagues166 used direct examination of pill bottles in addition to self-report, and In’t Veld and colleagues167 used automated pharmacy data. The latter study ran through the end of 1998 (NSAIDs were available by prescription only until 1995 in the Netherlands).

Stewart and colleagues168 found an RR of 0.40 (95 percent CI 0.19 to 0.84) when NSAIDs were used for more than 2 years. The Dutch study167 reported that NSAID use at any time had an RR for incident AD of 0.86 (0.66 to 1.09), but when NSAIDs were used for more than 2 years, the RR was statistically significant at 0.20 (0.05 to 0.83). Lindsay and colleagues104 found a more modest risk reduction with any NSAID use, reporting an OR of 0.65 (0.44 to 0.95). Finally, Zandi and colleagues166 reported a HR of 0.45 (0.17 to 0.97), but followup was short, only 3 years as compared to 5 to 15 years for the other studies.

The cohorts in the In’t Veld and Stewart papers167,168 were relatively young: in both cases, 78 percent of subjects were under 75 years of age at baseline. In the cohort studied by Zandi et al.,166 the mean age was approximately 74 years, while in Lindsay et al.104 controls had a mean age of 73 years, and subjects with incident dementia had a mean age of 81 years. The authors of this latter study also noted that 18.2 percent of their subjects died between waves of the study. In a sensitivity analysis that included decedents and estimated the probability of incident dementia in this group, there was no association between NSAID use and AD (OR 0.97; 95 percent CI 0.77 to 1.20).

These four prospective studies examining incident AD104,166–168 included 15,990 subjects with 672 cases of incident AD. A meta-analysis of the four prospective cohort studies using a fixed-effect model showed a RR of 0.74 (95 percent CI 0.62 to 0.89). A chi-square test suggested no significant heterogeneity (Q = 1.16, df = 3, p = 0.56). Three of the four studies166–168 evaluated NSAID exposure of more than 2 years. The combined RR for those three studies was 0.42 (95 percent CI 0.26 to 0.66; Q = 1.16, df = 2). An I2 test for heterogeneity was not reported.

Our own search of the literature identified four eligible cohort studies published after the systematic review described above.169–172 These studies are summarized in Table 23; detailed evidence tables are provided in Appendix B. All four studies had community-based populations, with a total of more than 8200 subjects. Three studies used U.S. populations and one a Swedish population. Subjects were followed for 1 to 12 years. Exposure to NSAIDs was determined by self-report and inspection of pill bottles, except in the study by Breitner et al.,172 which used pharmacy records and self-report. Two of the papers reported that examining duration of use or applying a lag time (to account both for difficulty by cognitively impaired subjects in accurately reporting exposure and for possible lagged effects of exposure on risk) did not change the results, but the actual hazard ratios for these analyses were not reported. Breitner and colleagues172 ignored the year preceding dementia onset to avoid some of the influence of cognitive impairment on reported NSAID use. All papers used populations with mean baseline ages in the mid 70s.

Table 23. NSAIDs and risk of developing AD.

Table 23

NSAIDs and risk of developing AD.

One study171 found a reduction in risk for AD with NSAID use that was statistically significant. The same study found an association between NSAID use and reduced risk of AD in the presence of the APOE e4 allele. In their analysis, benefit was apparent only in those with at least one e4 allele. For subjects 75 or younger, the HR for AD with a history of NSAID use was 0.22 (95 percent CI 0.06 to 0.73), and for those older than 75 years, the HR was 0.45 (0.20 to 0.97). There were only three incident cases of AD in the younger group and eight in the older group. Two other studies169,170 found no associations between NSAID use and the risk of developing AD. Breitner and colleagues172 found an increased risk of AD in heavy users of NSAIDs, but no statistically significant effect for moderate users, using both pharmacy data and pharmacy data integrated with self-report. Analyses were adjusted for APOE status.

Explanations for the disparate findings in the studies are unclear. It has been suggested that longer duration of NSAID use might be necessary to convey benefit, and that there may be a window of opportunity prior to onset of disease when NSAIDs are helpful. As noted above, it has also been suggested that the benefit from NSAIDs may be more pronounced or only present when there is an APOE e4 allele.

It is possible that studies that failed to find an association did so for different reasons. For example, the duration of followup may have been too brief in Cornelius et al.,170 and the populations in Arvanitakis et al.169 and Breitner et al.172 may have been too old. Breitner’s report on the ACT cohort differs in that it uses pharmacy exposure information as well as self-report; it is not clear how much NSAID use is over-the-counter and therefore not captured by pharmacy data. The secondary analysis by Breitner et al. combined pharmacy and self-report data, but the proportions of the user groups (low, moderate, heavy) from self-report data were not reported. If APOE e4 causes an earlier onset of illness, and benefit from NSAIDs is most apparent in those with APOE e4, a younger cohort would be needed to capture this effect. Additionally, subjects with earlier onset of AD (e.g., those with APOE e4) would not have been eligible for dementia-free inception cohorts formed later in life. The most strikingly positive findings involved longer duration of use in relatively younger cohorts.167,168

A random-effects meta-analysis combining the cohort studies in the systematic review by Szekely et al.35 and the more recent studies summarized above is shown in Figure 6. Studies were significantly heterogeneous (Q = 40.84, df = 7, p < 0.001, I2 = 83 percent). Any use of NSAIDs was not associated with the risk of AD (RR 0.83; 95 percent CI 0.63 to 1.09). Analyses examining the effect of duration and level of exposure did not explain the heterogeneity. A sensitivity analysis removing one study at a time found that when the study by Breitner et al. was removed, the summary estimate showed an association between NSAID use and lowered risk for incident AD (HR 0.79; 95 percent CI 0.69 to 0.91). It is unclear why this study is an outlier.

Figure 6. Meta-analysis of eight cohort studies on NSAIDs and risk of developing AD.

Figure 6

Meta-analysis of eight cohort studies on NSAIDs and risk of developing AD. Combined estimate is given in bottom row.

In summary, these prospective cohort studies do not provide a clear consensus on the impact of NSAIDs on the risk of AD. The pooled estimate for any NSAID use suggested a moderately decreased risk, but this was statistically significant only after excluding one outlier study, and the studies were significantly heterogeneous. The variability may be explained by interactions with genetic predisposition, duration of NSAID use, and possibly a therapeutic window of benefit.

Gonadal steroids. We identified one good quality systematic review that examined the association between gonadal steroids and the development of AD.38 Cohort studies were reviewed for the effects of hormone replacement therapy (HRT) on cognitive decline and dementia risk. Selected studies had sufficient data to calculate an odds ratio for risk of AD or AD-like dementia, had a control group for comparison, and made a clinical diagnosis of AD. The review included 2 cohort studies (1596 subjects) and 10 case-control studies; 10 were from the United States, 1 from Europe, and 1 from Australia. Subjects in the two cohort studies had an average age of 61.5 years in one study and 74.2 years in the other. Study quality for cohort designs was fair; one study had limitations because it did not maintain comparable groups and did not report loss to followup, and the other did not assemble comparable groups at baseline. Length of followup ranged from 1 to 16 years, and results were adjusted for education, age, and ethnicity. The formulation of estrogen varied, with most participants using oral conjugated equine estrogen (CEE), though some subjects used other oral estrogens or estrogen delivered transdermally. The duration of estrogen use was not stated in one cohort study and ranged from 2 months to 49 years (average 6.8 years) in the other. The cohort studies diagnosed AD using structured criteria such as NINCDS-ADRDA.

Studies were combined using a random-effects model. A test for heterogeneity suggested that the studies were not heterogeneous (p > 0.1). No I2 was reported. A funnel plot that included all 12 studies suggested possible publication bias. In the two cohort studies, estrogen use was associated with a statistically significant decreased risk of AD (RR 0.50; 95 percent CI 0.30 to 0.80).

The review authors conducted stratified analyses for cohort versus case-control study designs; results are summarized in Table 24.

Table 24. Gonadal steroids and risk of developing AD – results from stratified analyses by LeBlanc et al., 2001.

Table 24

Gonadal steroids and risk of developing AD – results from stratified analyses by LeBlanc et al., 2001.

Summary RRs were similar for studies with NINCDS-ADRDA-diagnosed AD and those using other, less strict AD diagnostic criteria. Excluding a study with uncertain confidence intervals and a study with a low standard error (and thus high weight) did not significantly change the risk estimate.

The authors of the meta-analysis concluded that hormone replacement therapy decreased the risk of dementia but noted that most studies had important methodological limitations. The effect of hormones on dementia may be overestimated if participants with memory problems, or proxy respondents for women with dementia, did not remember or were not aware of HRT exposure. Further limitations included the wide range of estrogen formulations, the presence or absence of progestins, the timing of estrogen treatment (perimenopausal versus early or late post-menopausal), and the duration of use.

Our search did not identify any new observational studies published since 2001. See the section on “Gonadal Steroids” under Question 3 for results of RCTs examining the therapeutic and adverse effects of gonadal steroids used to delay the onset of AD.

Cholinesterase inhibitors. We did not identify any systematic reviews or primary studies in cognitively normal samples that evaluated the association between cholinesterase inhibitors and incident AD.

Memantine. We did not identify any systematic reviews or primary studies that evaluated the association between memantine and incident AD.

Social, Economic, and Behavioral Factors

Early childhood factors. We did not identify any systematic reviews that examined the association between childhood exposures and development of AD. We identified one eligible cohort study.173 The study is summarized in Table 25; a detailed evidence table is provided in Appendix B. The study was drawn from U.S. communities, but some of the participants lived in religious order facilities. The length of followup averaged 5.6 years. At baseline, participants were not demented. Self-reported information on variables related to childhood socioeconomic status was collected when the participants were an average of 75 years old. This information was then used to derive indices of socioeconomic status. There was no objective validation of the derived indices. The sample selection method only partially minimized selection bias: as part of the enrollment criteria, participants were required to agree to post-mortem autopsy, which may have introduced some selectivity into the sample. The study did not report comparison of baseline characteristics between those exposed and unexposed. Standard criteria were used for the diagnosis of AD, but the study did not use an informant report as part of the diagnostic process. The authors did not report whether the diagnosis was blind to exposure status; however, it is unlikely that details like those used as part of this childhood socioeconomic index would be discussed during the diagnostic process. Analyses were appropriate and controlled for relevant potential confounders. This study showed that neither early-life household nor community socioeconomic factors influenced risk of incident AD in later life. In conclusion, there is no evidence supporting an association between these early childhood factors and AD, but there is also not sufficient evidence to rule out a possible association.

Table 25. Childhood socioeconomic status and risk of developing AD.

Table 25

Childhood socioeconomic status and risk of developing AD.

Education/occupation. Because of their close association, education and occupation are considered together here.

Education. Our review of studies examining the association between years of education and AD focused on those studies in which assessing this association was the main aim of the study. Reviewing all studies, such as those primarily focused on estimating incidence rates of AD and assessing numerous factors that predict AD, was beyond the scope of this review.

We identified one good quality systematic review that examined the association between years of education and development of AD.49 The review included nine prospective cohort studies.118,174–181 It also included six case-control studies; however, given the numerous weaknesses of case-control studies in assessing risk factors, the current summary describes only the cohort studies. The nine cohorts included 22,726 subjects; four studies were from the United States, four from European countries, and one from Japan. Studies were selected that used clear diagnostic criteria for dementia and AD, provided information about years or level of education of participants, controlled for potential confounders, and provided odds ratios or relative risks or sufficient data to calculate these figures.

There was not a structured quality assessment of the studies reported in this systematic review; however, the strict inclusion/exclusion criteria provided an indirect assessment of quality, and the study characteristics for key design variables were reported. Length of followup was not reported. However, all studies included in the review reported on incident AD, and education is typically completed in early adulthood, meaning that exposure most likely occurred years prior to participation in the study. There was no information provided on the followup rates in the studies. The covariate adjustment for most studies included at least age and sex; some studies included additional covariates such as occupation, APOE, ethnic group, leisure activities, and health conditions. Exposure was determined by self-report and categorized as high, medium, or low levels of education. The definition of these three levels of education appeared to differ across studies. All studies used DSM and NINCDS-ADRDA criteria for diagnosis of AD and dementia.

Studies were combined using both fixed-effect and random-effects models to calculate pooled relative risks. When results from the two approaches differed, only the random-effects models were reported, as they represent more conservative estimates. Heterogeneity was tested using the Cochran Q statistic and the Ri statistic. Publication bias was assessed graphically using a funnel plot. To assess the potential effect of publication bias on the pooled relative risk, the authors conducted sensitivity analyses using three assumptions: (1) published studies included only half of all studies conducted; (2) the unpublished studies found null associations; and (3) the unpublished studies included as many cases and controls as the average of the published studies.
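The publication-bias sensitivity analysis described above can be sketched as follows: for each published study, impute one hypothetical unpublished study with a null result (RR = 1) carrying the average published weight, then re-pool. The relative risks and weights below are hypothetical, for illustration only, not the review's data.

```python
import math

def pooled_log_rr(log_rrs, weights):
    """Inverse-variance weighted average of log relative risks."""
    return sum(w * y for w, y in zip(weights, log_rrs)) / sum(weights)

def null_imputation_sensitivity(log_rrs, weights):
    """Re-pool after imputing one null (RR = 1) unpublished study per
    published study, each given the average published weight - mirroring
    the assumptions that published studies are half of all studies and
    the unpublished half found null associations of average size."""
    k = len(log_rrs)
    avg_w = sum(weights) / k
    aug_log_rrs = log_rrs + [0.0] * k          # log(1) = 0 for each imputed null study
    aug_weights = weights + [avg_w] * k
    return pooled_log_rr(aug_log_rrs, aug_weights)

# Hypothetical published studies (illustration only, not the review's data)
logs = [math.log(0.6), math.log(0.7), math.log(0.8)]
ws = [20.0, 15.0, 10.0]
observed = math.exp(pooled_log_rr(logs, ws))
adjusted = math.exp(null_imputation_sensitivity(logs, ws))
```

If the adjusted estimate stays clearly below 1.0, as it does here, the association would be judged robust to this extreme publication-bias scenario, which is the logic behind the review's conclusion that its findings were robust.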

To briefly summarize the main findings, individuals with fewer years of education had a greater risk of AD than those with the highest level of education (Table 26). Analyses were not conducted to allow for assessment of a dose-response association. However, when the lowest- and medium-level education groups were combined, the relative risk decreased compared to the findings from the analysis using just the lowest-level education group. This might be interpreted as an indirect indication of a dose-response effect. Both the funnel plot and the sensitivity analyses assessing extreme assumptions concerning unpublished studies showed that the findings were robust and that no publication bias was evident. The authors concluded that having fewer years of education is associated with greater risk of AD.

Table 26. Education and risk of developing AD – results from studies reviewed by Caamano-Isorna et al., 2006.

Table 26

Education and risk of developing AD – results from studies reviewed by Caamano-Isorna et al., 2006.

We identified two additional eligible cohort studies published since 2004 examining risk of AD in relation to years of education completed.182,183 These two studies are summarized in Table 27; detailed evidence tables are provided in Appendix B. One study used a community sample,182 and one used a religious order sample in the United States.183 In the study by Ngandu and colleagues,182 participants were non-demented at baseline. In the study by Tyas and colleagues,183 participants were either cognitively normal or had MCI at baseline. Length of followup ranged from 1 to 21 years. Exposure was determined based on self-reported information about years of education completed; this is a standard and well-accepted method of data collection for this information. The studies used sample selection methods to minimize selection bias; however, one study required that participants agree to brain donation at the time of death, and this may have introduced some selectivity into the sample.183 One study specifically stated that they used standard criteria for the diagnosis of AD;182 the other study did not.183 Neither study used an informant report as part of the diagnostic process. The study that did not state specific standard criteria did describe criteria that appeared to be in keeping with DSM criteria.183 However, while this approach may provide a diagnosis of dementia, it is not clear that it would provide a reliable differential diagnosis of AD. It was not reported whether the dementia diagnosis was assigned blind to the exposure level, but it seems likely that those assigning the diagnosis were aware of the participant’s level of education. However, because the association between education and AD was not the primary outcome for these studies, this knowledge may not have biased the results. Analyses were appropriate and controlled for relevant potential confounders.

Table 27. Years of education and risk of developing AD.

Table 27

Years of education and risk of developing AD.

Both studies reported an inverse association between years of education and risk of AD (Table 27). In conclusion, the preponderance of evidence indicates that more years of education may provide protection from AD. It is not clear whether education is a surrogate for other factors such as occupation, baseline intelligence, or socioeconomic status. It is also not clear whether more years of education actually prevent AD, delay onset of the disease, or simply delay detection of the cognitive decline. Two of the cohort studies included in the systematic review discussed above address some of these points. Karp and colleagues174 reported that both low education and low socioeconomic status increased risk of AD, but only low education remained a significant predictor when both factors were simultaneously included in the model. Stern and colleagues176 found that either low education or a low-level occupation increased risk of AD, but those with both low education and low occupation had the greatest risk. Combined, these findings suggest that education contributes to risk of AD independently of occupation and other socioeconomic factors.

Occupation. We identified five eligible cohort studies examining risk of AD in relation to occupation.174,176,184–186 The studies are summarized in Table 28; detailed evidence tables are provided in Appendix B. Two of the studies used community samples in the United States,176,184 and three used community samples in Europe.174,185,186 Two of these studies were based on the same sample,174,186 but they used different lengths of followup and different, but related, predictor variables. Length of followup ranged from 1 to 6.4 years. Exposure was determined based on self-reported information about occupation. The studies used sample selection methods to minimize selection bias. All studies stated that they used standard criteria for the diagnosis of AD, but only one of the studies used an informant report as part of the diagnostic process.185 It was not reported whether the dementia diagnosis was assigned blind to the exposure level, but it seems likely that those assigning the diagnosis were aware of the participant’s occupation. However, because the association between occupation and AD was not the primary outcome for the parent studies of these substudies, this knowledge may not have biased the results. Analyses were appropriate and controlled for relevant potential confounders. The studies used different scales to categorize occupational characteristics, making it difficult to make direct comparisons.

Table 28. Occupation and risk of developing AD.

Table 28

Occupation and risk of developing AD.

Overall, the findings suggest that typically the relatively modest associations between occupation and incident AD become statistically non-significant once years of education are included in the model. However, one study did report that low occupation level combined with low education further increased the risk of AD, in addition to the effect noted for low education alone.176 These results point to the complex inter-relationships among education, occupation, and other markers of socioeconomic status.

In conclusion, the studies to date do not support an association between occupational level and risk of AD that is independent of the influence of education level.

Social engagement. We did not identify any good quality systematic reviews that examined the association between social engagement and development of AD. We identified five relevant and eligible cohort studies.187–191 These studies are summarized in Table 29; detailed evidence tables are provided in Appendix B.

Table 29. Social engagement and risk of developing AD.

Table 29

Social engagement and risk of developing AD.

Social engagement as a risk factor was defined by different exposures in the studies, including objective measures such as marital status, living situation, number of people in social network, as well as subjective measures such as feelings of loneliness and perceptions of social support. Although five social engagement studies were identified, the measurement of exposure and reporting of outcomes varied among the studies. Hence, results were not combined to provide a single summary statistic; rather, qualitative descriptions of the studies are provided in what follows.

Two of the studies drew their sample from the United States;188,191 the other three were from Europe.187,189,190 All studies chose community samples and enrolled non-demented participants. The length of followup ranged from 3.3 to 21 years. Wilson et al.188 considered loneliness as a risk factor and measured this using a modified version of the de Jong Gierveld Scale with a 5-point scoring system. It included statements such as “I miss having people around” and “I often feel abandoned.” In the same study, social isolation was inferred from two indicators of social functioning, social network size and frequency of participating in social activity. Fratiglioni et al. measured social network, taking into account marital status, whether subjects lived alone, and contact with friends and family, including satisfaction with social contact.189 In the study by Saczynski et al., social engagement consisted of marital status; living arrangement; participation in social, political, or community groups; participation in social events with coworkers; and the existence of a confidant relationship.191 The other two studies187,190 defined social engagement based on marital status.

None of the studies had objective validation of exposure. Two studies used informant interviews, one to confirm exposure data,191 and the other only when the participant was unable to answer questions.189 Only two studies examined baseline characteristics by exposure.190,191 No information on blinding of the diagnostic assessment to exposure status was provided in any of the studies. Although information about social activities is not a routine part of dementia assessment, marital status is commonly asked about in many assessments. Of the studies that examined marital status and AD, one study found that those who were never married had a higher risk of AD (RR 2.31; 95 percent CI 1.14 to 4.68).187 In this study, being widowed or divorced was not associated with AD. Fratiglioni et al. found that being single and living alone was associated with an increased risk of AD (RR 1.9; 95 percent CI 1.2 to 3.1), but being widowed, divorced, or married but living alone did not significantly increase risk of AD.189 Hakansson et al. also found that those who were without a partner after midlife had an increased risk of AD (OR 5.0; 95 percent CI 1.4 to 17.5); the association between being without a partner at midlife and risk of AD was not statistically significant (OR 2.06; 95 percent CI 0.9 to 4.7).190 Wilson and colleagues found that a person with a high degree of loneliness (score 3.2, 90th percentile) was about 2.1 times more likely to develop dementia during followup than a person with a low degree of loneliness (score 1.4, 10th percentile).188 In the same study, the risk of AD associated with loneliness decreased (RR 1.41; 95 percent CI 0.97 to 2.06) after adjusting for a 9-item CES-D score (after removing one item about loneliness), whereas the risk of AD associated with the CES-D score was decreased by half after controlling for loneliness.
In the studies that looked at social network and engagement, poor or limited social networks were associated with a higher risk of incident dementia (RR 1.87; 95 percent CI 1.12 to 2.1), and participants who were not satisfied with social contact with children were also at a higher risk (RR 2.0; 95 percent CI 1.2 to 3.4).189 Though social engagement at midlife was not significantly associated with AD, a decline in social engagement from mid- to late life was associated with an increased risk of AD (HR 1.87; 95 percent CI 1.12 to 3.13).191

In conclusion, across three studies187,189,190 there was a consistent association between being single and not cohabiting with a partner in later life and an increased risk of AD. Generally, this association was not present for individuals who were divorced or widowed. One exception is a reported increased risk of AD among individuals who were widowed both at midlife and in later life (OR 7.67; 95 percent CI 1.67 to 40.0) compared to those who were cohabiting at both time points.190 Further analyses of subgroups based on APOE genotype showed that this association was driven primarily by those with at least one APOE e4 allele (OR 25.55; 95 percent CI 5.7 to 114.5; P < 0.001) compared to APOE e4 non-carriers who were cohabiting at both time points.190 Some caution in drawing conclusions from these results is warranted given the wide confidence intervals. Further studies are needed to confirm these findings regarding cohabitation, marriage, and AD.

There is also preliminary evidence that a higher degree of loneliness, dissatisfaction with social contacts, and decreased social networks might also be risk factors for AD. As a change from high to low social engagement from mid- to late life was associated with a higher risk of AD compared to consistently low or consistently high social engagement, it is possible that the decrease in social engagement may be associated with changes due to early AD. Further studies are needed to clarify the direction of the relationship between social engagement and AD.

Cognitive engagement. For the purposes of this review, we have categorized leisure activities into three categories: (1) cognitively engaging activities (e.g., puzzles, reading, and board or card games); (2) physical activities; and (3) other leisure activities that do not fall into the first two categories (e.g., membership in organizations such as clubs). The first group (cognitively engaging activities) also includes cognitive training RCTs. We have attempted to group the results from studies into these three categories; however, in cases in which studies grouped activities from more than one category, we assigned the study to the category reflecting the majority of items in the grouping. We begin with cognitively engaging activities, then proceed to physical activities and other leisure activities.

We did not identify any good quality systematic reviews that examined the association between cognitive engagement and development of AD. We identified four eligible cohort studies.192–195 These are summarized in Table 30; detailed evidence tables are provided in Appendix B. Three studies drew samples from U.S. communities,192–194 and one study used a community sample in Europe.195 In all four studies, participants were non-demented at baseline. Length of followup across the studies was approximately 3.0 to 5.0 years (mean, median, or time span for mean number of annual assessments). All four studies used self-report of the frequency of involvement in specific activities, but three of the studies asked exclusively about current involvement in activities,192,194,195 while the fourth inquired about involvement in activities across the lifespan.193 There was no objective validation of this method, but one of the studies194 did ask an informant to confirm the participant’s report of involvement in activities. One study used sample selection methods that minimized selection bias,195 while the other three studies used methods that partially minimized selection bias. One study compared baseline characteristics by exposure status.195 All four studies used standard criteria for the diagnosis of AD, but only one used an informant report as part of the diagnostic process.194 None of the studies noted whether the diagnosis was blind to exposure status; however, it is unlikely that details of involvement in these types of activities would be discussed during the diagnostic process. Analyses were generally appropriate and controlled for relevant potential confounders.

Table 30. Cognitive activities and risk of developing AD.

Table 30

Cognitive activities and risk of developing AD.

All four studies showed a decreased risk of AD associated with more frequent involvement in activities considered to be cognitively engaging. One study assessed the influence of APOE on the association between current cognitive activity and AD and reported that APOE e4 status did not change the risk estimate, and that there was no interaction between cognitive activity and the APOE e4 allele.192 Another study193 reported that the frequency of past cognitive activity was also associated with risk of AD (RR 0.56; 95 percent CI 0.36 to 0.88). However, when current and past activity were assessed in the same model, the effect of past activity was eliminated (HR 0.80; 95 percent CI 0.49 to 1.30), but the effect of current activity remained substantially unchanged (HR 0.47; 95 percent CI 0.34 to 0.66). Cognitive, physical, and social activity levels are often correlated. One study conducted analyses using physical and social activity levels as covariates to assess their influence on the association between cognitive activity and incident AD and found that the results remained unchanged.193 A reduction in cognitive activities may occur in the context of mild cognitive impairment as one of the early symptoms of prodromal AD. One study conducted sensitivity analyses, successively excluding individuals with low baseline MMSE, incident dementia within the first 2 years of followup, and then prevalent MCI, to assess whether the association between increased cognitive activity and incident AD may be attributed solely to individuals with potential prodromal AD.195 The hazard ratios for these analyses remained similar to those for the full sample, but some of the confidence intervals included 1.0, which may reflect the reduced sample size.

In conclusion, the available evidence supports an association between increased involvement in cognitive activities and decreased risk of AD. The one study that assessed past and current participation in cognitive activities found that current activities explained the protective association. Given the long sub-clinical prodromal period for AD, further work is needed to confirm the finding195 that the reduced involvement in cognitively stimulating activities among those who develop AD does not reflect early symptoms of the disease. Validation of both the type and level of exposure is needed.

Physical activities. We did not identify any good quality systematic reviews that examined the association between physical activity and development of AD. We identified 12 eligible cohort studies.85,118,194–203 These studies are summarized in Table 31; detailed evidence tables are provided in Appendix B. Five studies used samples from U.S. communities,85,194,197,198,202 five from communities in Europe,195,196,200,201,203 one from Canada,199 and one from Japan.118 Of these, one study used a sample from a health maintenance organization,198 one used a sample of twins,196 one used a community sample but also included institutionalized individuals,203 and the remainder of the studies used community samples. Two of the studies used a sample from the same parent study, where one report focused on leisure-time physical activities and the other on work-related physical activities.200,201 At baseline, participants were cognitively normal in three studies,196,199,203 non-demented in seven studies,85,118,194,195,197,198,202 and are assumed here to have been non-demented in the other two studies based on the mean baseline age of the sample.200,201 Length of followup across the studies was approximately 3.9 to 31 years. All studies used self-reported information on involvement in physical activities; some asked about specific activities, and others asked more general questions about any physical activities (information collected in each study is detailed in Table 31). There was a fair degree of overlap among the activities across those studies that asked about specific activities and provided this detailed information in the article.
Most studies asked about current physical activities (at the time of the interview), but one study asked about activities during the previous 25 years (from age 25–50).196 The span of years for followup reflects that some studies collected information about mid-adult life physical activities, while others collected information about later life physical activities. One study averaged the reported physical activity level in mid- and late-life.202 In general, there was no objective validation of the accuracy of this self-reported exposure to physical activity. However, one study examined construct validity by comparing the combined physical activity score with reported markers of health hypothesized to be related to exercise and self-rated health.197 Ten of the studies used sample selection methods to minimize selection bias, one partially used such methods,194 and for one study it was not possible to determine whether the sample selection methods minimized selection bias.118 Eight studies compared some baseline characteristics by level of physical activity.85,195–198,200–202 The other studies did not compare baseline characteristics between those exposed and unexposed. All studies used standard criteria for the diagnosis of AD, but only three of the studies used an informant report as part of the diagnostic process.196,197,203 None of the studies noted whether the diagnosis was blind to exposure status; however, it is unlikely that details of involvement in these types of activities would be discussed during the diagnostic process. Analyses were appropriate and generally controlled for relevant potential confounders.

Table 31. Physical activity and risk of developing AD.

Table 31

Physical activity and risk of developing AD.

Quantitative risk estimates from the studies are reported in Table 31. Eight of the 11 studies (excluding the study that used the duplicate sample and focused only on work-related physical activity201) reported risk estimates for moderate or high levels of physical activity suggesting a protective effect against AD. The risk estimates did not always reach statistical significance once appropriate covariates were added, or across both moderate and high levels of activity. This may point to insufficient sample size to detect a significant difference or to confounding due to unidentified factors. Two studies reported results from analyses examining the interaction between physical activity and APOE genotype. These studies reported inconsistent results, with one reporting that physical activity was most protective among carriers of the APOE e4 allele,200 and the other reporting the opposite.202 Similarly, results on whether there was a differential effect of physical activity on AD for males and females were not consistent.199,200 One study assessed the association between risk of AD and the combination of physical exercise and the Mediterranean diet.85 Compared with individuals neither adhering to the diet nor participating in physical activity (low diet score and no physical activity; absolute AD risk of 19 percent), those both adhering to the diet and participating in physical activity (high diet score and high physical activity) had a lower risk of AD (absolute risk, 12 percent; HR 0.65 [95 percent CI 0.44 to 0.96]; P = 0.03 for trend).

A random-effects meta-analysis combining nine cohort studies is shown in Figure 7. Studies were significantly heterogeneous (Q = 23.25, df = 8, p = 0.003, I2 = 66 percent). Higher levels of physical activity were associated with lower relative risk for incident AD (HR 0.72; 95 percent CI 0.53 to 0.98). An influence analysis that recalculated the summary HR iteratively, removing one study with each iteration, yielded summary HRs from 0.66 (95 percent CI 0.48 to 0.91) to 0.75 (0.54 to 1.05), suggesting that no single study had a large effect on the estimate of association.
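The leave-one-out influence analysis described above can be sketched as follows: the random-effects pool is recomputed once per study, each time with that study removed, and the spread of the resulting summary hazard ratios indicates whether any single study dominates. The study-level log hazard ratios and standard errors below are hypothetical, for illustration only, not the estimates from the nine cohort studies.

```python
import math

def dl_pool(log_hrs, ses):
    """DerSimonian-Laird random-effects pooled log hazard ratio."""
    w = [1.0 / s**2 for s in ses]                                 # inverse-variance weights
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, log_hrs))     # Cochran's Q
    df = len(log_hrs) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                                 # between-study variance
    ws = [1.0 / (s**2 + tau2) for s in ses]
    return sum(wi * y for wi, y in zip(ws, log_hrs)) / sum(ws)

def leave_one_out(log_hrs, ses):
    """Recompute the pooled HR k times, dropping one study each iteration."""
    out = []
    for i in range(len(log_hrs)):
        sub_y = log_hrs[:i] + log_hrs[i + 1:]
        sub_s = ses[:i] + ses[i + 1:]
        out.append(math.exp(dl_pool(sub_y, sub_s)))
    return out

# Hypothetical study-level inputs (illustration only, not the review's data)
y = [math.log(h) for h in (0.55, 0.70, 0.85, 0.60, 1.05)]
s = [0.15, 0.20, 0.25, 0.18, 0.30]
hrs = leave_one_out(y, s)
spread = max(hrs) - min(hrs)
```

A narrow spread of leave-one-out summary HRs, as the review reports (0.66 to 0.75), is the basis for concluding that no single study drives the pooled association.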

Figure 7. Meta-analysis of nine cohort studies on physical activity and risk of developing AD. Combined estimate is given in bottom row.

Figure 7

Meta-analysis of nine cohort studies on physical activity and risk of developing AD. Combined estimate is given in bottom row.

In conclusion, the results from the meta-analysis and the majority of studies reviewed here suggest that physical activity, particularly at high levels, is associated with lower risk of incident AD. However, there was substantial heterogeneity among the studies. The risk estimates from individual studies were not all statistically significant, and in some studies the risk estimate was in the direction indicating increased risk of AD. Differences among the studies in samples, methodologies, and measures of exposure do not provide an obvious explanation for the inconsistent results. One point to consider when interpreting these results is that physical activity may be a marker for a generally healthier lifestyle and that these other healthy lifestyle factors may contribute to preserving cognition in later life. One of the studies described here addressed this point by examining the combination of physical activity and a Mediterranean diet on risk of AD.85 Future work should consider this multi-factorial approach.

Other leisure activities. We did not identify any systematic reviews that examined the association between non-physical leisure activities and development of AD. We identified one eligible cohort study that assessed participation in a range of leisure activities, including those considered to be cognitive, social, or physical,179 and one eligible cohort study that assessed activities that the authors of the study categorized as either social leisure or passive leisure activities.195 The leisure activities assessed for each study are listed in Table 32. Some of the leisure activities in these studies overlapped with activities considered to be “cognitively engaging” in other studies,192–194 so the results described here should be interpreted in conjunction with the findings for Question 1 for the “Cognitive Engagement” factor. The current two studies are summarized in Table 32; detailed evidence tables are provided in Appendix B. One of the studies used a sample drawn from a U.S. community,179 and the other used a community sample from Europe.195 Mean length of followup ranged from 2.9 to 4 years. Exposure status was based on self-report of current involvement in specific activities on either a daily or monthly basis. There was no objective validation of this method. Both studies used sample selection methods to minimize selection bias, but only one reported comparisons of baseline characteristics between those exposed and unexposed.195 Investigators used standard criteria for the diagnosis of dementia and AD. They did not report whether the diagnosis was blind to exposure status; however, it is unlikely that details of involvement in these types of activities would be discussed during the diagnostic process. Analyses were generally appropriate and controlled for relevant potential confounders. One study reported results for overall dementia (not for AD),179 while the other reported results specifically for AD.195

Table 32. Leisure activities and risk of developing AD.

Akbaraly and colleagues195 reported that more frequent participation in social leisure or passive leisure activities was not associated with reduced incidence of AD; the hazard ratios for the highest level of exposure for both social and passive leisure activities were in the direction of a lower risk of AD, but they were not statistically significant. In contrast, Scarmeas and colleagues179 found that participation in more leisure activities was associated with a decreased risk of incident dementia. Grouping the leisure activities into categories showed that intellectual activities (RR 0.76; 95 percent CI 0.61 to 0.94), physical activities (RR 0.80; 95 percent CI 0.66 to 0.97), and social activities (RR 0.85; 95 percent CI 0.77 to 0.94) were all associated with reduced risk of incident dementia.179 These results suggest that any leisure activity, regardless of whether it is cognitive, physical, or social in nature, may provide some protection against dementia. In addition, participation in a greater number of these activities may be key to their protective benefits.

These two studies differed in the types of leisure activities assessed, the number of AD or dementia cases, and also in how they characterized the exposure. One study used the frequency or time involved in each activity, which also indirectly reflected involvement in multiple activities,195 while the other used just the number of activities in which the participant was involved.179 Any of these differences may explain the discrepant findings between the two studies.

In conclusion, there is no consistent evidence indicating that involvement in leisure activities that are not solely cognitive or physical in nature is associated with lower risk of incident AD or dementia.

Tobacco use. We identified two good quality systematic reviews, published in 2007 and 2008, that examined the association between tobacco use and the development of AD.50,204 We decided to use only the systematic review by Anstey and colleagues50 because, compared to the other review,204 it used broader search terms and stricter inclusion criteria more consistent with those used by the present authors, reported detailed results for multiple exposure levels, and clearly identified the studies used in each analysis. The review included 10 prospective cohort studies published between 1995 and 2005.64,104,105,118,205–210 The 10 studies included a total of 13,786 subjects; four were conducted in the United States, two in European countries, and one each in Canada, Australia, Japan, and China. Studies were selected that had at least two occasions of measurement, had AD as an outcome, had at least a 12-month followup period, and measured exposure to smoking at baseline. The number of individuals with AD versus other dementias was available for the majority of the studies. There was not a structured quality assessment of the studies reported in this systematic review; however, the strict inclusion/exclusion criteria provided an indirect assessment of quality, and the study characteristics for key design variables were reported. Length of followup ranged from 2 to 30 years. No information was provided on the followup rates in the studies. The covariate adjustment for most studies included at least age and education; many studies included additional covariates such as sex, APOE, biological measures, and health conditions. Selection of models to report from individual studies was determined first by the model with the smallest standard error and then by the model with the largest number of covariates. Exposure was determined by self-report, and smoking was classified as ever, current, former, or never smoking.
All studies with AD as an outcome used DSM and/or NINCDS-ADRDA diagnostic criteria.

Studies were combined using fixed-effect meta-analyses if there was no evidence of heterogeneity; if heterogeneity was present, random-effects models were used. Standard χ2 tests with a 10 percent significance level were used to examine heterogeneity. The small number of studies within each group with compatible measures precluded investigation of heterogeneity through meta-regression or subgroup analyses, as well as assessment of publication bias.
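The pooling strategy described above can be sketched in a few lines of code. This is an illustrative implementation, not the review authors' code: it assumes hypothetical log relative risks and standard errors as inputs, computes the inverse-variance fixed-effect estimate, Cochran's Q statistic for heterogeneity (compared against a χ2 distribution with k − 1 degrees of freedom), and a DerSimonian-Laird random-effects estimate whose weights incorporate the between-study variance τ2.

```python
import math

def pool_fixed(log_rrs, ses):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    w = [1 / se**2 for se in ses]
    pooled = sum(wi * y for wi, y in zip(w, log_rrs)) / sum(w)
    return pooled, math.sqrt(1 / sum(w))

def cochran_q(log_rrs, ses):
    """Cochran's Q heterogeneity statistic; compare to chi-square with k-1 df."""
    pooled, _ = pool_fixed(log_rrs, ses)
    return sum((y - pooled)**2 / se**2 for y, se in zip(log_rrs, ses))

def pool_random(log_rrs, ses):
    """DerSimonian-Laird random-effects estimate.

    The between-study variance tau^2 is added to each study's within-study
    variance, so heterogeneous studies receive more nearly equal weights."""
    k = len(log_rrs)
    q = cochran_q(log_rrs, ses)
    w = [1 / se**2 for se in ses]
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    w_star = [1 / (se**2 + tau2) for se in ses]
    pooled = sum(wi * y for wi, y in zip(w_star, log_rrs)) / sum(w_star)
    return pooled, math.sqrt(1 / sum(w_star))
```

With homogeneous inputs Q is zero and the two estimates coincide; as heterogeneity grows, τ2 inflates the random-effects standard error, widening the pooled confidence interval.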

The results for the various exposure definitions and the outcome of AD are reported in Table 33. The studies that provided data for three smoking statuses (current, former, and never) provided data only for current-versus-never and former-versus-never comparisons. The authors of the review mathematically derived conservative estimates of the current-versus-former comparison from the current-versus-never and former-versus-never estimates. To briefly summarize the main findings, current smokers were at greater risk of AD compared to either never smokers or former smokers. The results in Table 33 suggest a crude dose-response association, with the risk of AD progressively increasing from never smokers to former smokers to current smokers. Some caution is urged in interpreting the pattern of results in this way because different studies contribute to the relative risks for each of the comparisons. In addition, the mathematically derived relative risk for current versus former smokers is almost equal to that of the comparison of current versus never smokers, a result that would not be expected if there were a dose-response effect.

Table 33. Smoking and risk of developing AD – results from studies reviewed by Anstey et al., 2007.

The authors noted that one limitation of the study was that the former smokers group included a broad range of exposure periods. Unfortunately, there were not enough studies with data on the number of smoking pack-years to use this as the exposure variable. Although publication bias was not assessed formally, the authors noted that 2 of the 10 studies did not focus on smoking; rather, the results on smoking were incidental to the aim of the study, and additional data were obtained from the authors of the source studies. For this reason, one might conclude that the potential for publication bias was reduced. Quality ratings of the studies were not provided, but strict selection criteria may have increased the likelihood that only high quality studies were included in the review. The authors of the systematic review concluded that current smokers are at increased risk of AD.

We identified two additional eligible cohort studies published since the beginning of 2005.211,212 These studies are summarized in Table 34; detailed evidence tables are provided in Appendix B. Both studies drew samples from the community, and one of them also included institutionalized individuals; one study was based on a U.S. sample and the other was conducted in Europe. At baseline, participants in both studies were non-demented. Mean length of followup ranged from 4 to 7 years. Both studies used self-reported history of smoking obtained at baseline to characterize exposure. The studies used sample selection methods to minimize selection bias; however, neither study compared baseline characteristics to assess differences between exposed and unexposed. Both studies reportedly used standard criteria for the diagnosis of AD, but one of the studies did not use an informant report as part of the diagnostic process. Neither study noted whether the cognitive diagnoses were assigned blind to exposure status; however, the smoking analyses were not the primary outcome, so knowledge of exposure status may have had little effect on the outcome. Analyses were appropriate and controlled for relevant potential confounders, but neither study conducted a priori sample size calculations.

Table 34. Tobacco use and risk of developing AD – recent cohort studies.

Both studies showed an increased risk of AD associated with current smoking compared to individuals who never smoked. They also showed that current smokers without an APOE e4 allele had an increased risk of AD, but there was no increased risk of AD associated with smoking for individuals with one or more e4 alleles.

In conclusion, the studies examining current smokers versus never smokers and/or former smokers consistently show an increased risk of AD associated with current smoking (although not all studies show a statistically significant increase in risk). However, former smokers do not appear to be at increased risk of AD. The authors of the review50 noted that there were insufficient data to evaluate the duration of smoking among the current and former smokers or the duration of abstinence from smoking among former smokers. Thus, questions about the amount of time it takes a former smoker to return to the level of risk of a never smoker could not be addressed.

The evidence provided above refers only to smoking; the effects of nicotine itself may be different, as nicotine may aid specific cognitive functions such as attention, reaction time, and learning and memory tasks. We searched for systematic reviews and studies examining the association between nicotine and cognition, but (as described under Questions 3 and 4) we did not identify any eligible publications.

Alcohol use. We identified a single, good quality systematic review, published in 2009, that examined the association between alcohol use and the development of AD.51 The review included nine prospective community cohort studies published between 2002 and 2006.104,118,213–219 The nine studies included a total of 17,835 subjects; two were conducted in the United States, three in European countries, one in Canada, one in Japan, one in Korea, and one in China. Studies were selected that screened for dementia at baseline or adjusted for cognitive function at baseline, had at least a 12-month followup period, had AD as an outcome, and measured exposure to alcohol at baseline or during the followup period prior to the final followup examination. Study participants were non-demented at baseline. The number of individuals with AD versus other dementias was available for the majority of the studies. The meta-analysis reported was based on current use of alcohol, although some of the included studies also collected data on those who formerly used alcohol versus those who never used alcohol. There was not a structured quality assessment of the studies reported in this systematic review; however, the strict inclusion/exclusion criteria provided an indirect assessment of quality, and the study characteristics for key design variables were reported. Length of followup ranged from 2 to 7 years. No information was provided on the followup rates in the studies. The covariate adjustment for most studies included at least age, sex, and education; many studies included additional covariates such as APOE, health behaviors, biological measures, and health conditions. The authors of the review noted that models with the largest number of covariates were given priority when selecting results for inclusion in the report. Exposure was determined by self-report in all studies, but the categorization of extent of current and past alcohol use differed across the studies.
For the meta-analyses, comparisons were made between drinkers versus non-drinkers, light to moderate drinkers versus non-drinkers, and heavy drinkers versus non-drinkers. All studies with AD as an outcome used DSM and/or NINCDS-ADRDA diagnostic criteria.

Studies were combined using fixed-effect meta-analyses if there was no evidence of heterogeneity; if heterogeneity was present, random-effects models were used. Standard χ2 tests with a 10 percent significance level were used to examine heterogeneity. The test for heterogeneity was significant for AD for light to moderate drinkers versus nondrinkers (χ2[5] = 11.43, P = 0.04). The test for heterogeneity for heavy drinkers versus nondrinkers was not significant. Publication bias was not formally assessed.

The results for the various exposure definitions and the outcome of AD are reported in Table 35. The definition for light to moderate drinker varied across the studies and ranged from 1 to 2 drinks per week as a minimum to 13 to 28 drinks per week as a maximum. To summarize the main findings, all drinkers combined had a lower risk of AD compared to non-drinkers. Light to moderate drinkers also had a lower risk of AD compared to non-drinkers. Three studies provided results by sex and reported that light to moderate alcohol use was protective for AD in both males and females. However, heavy/excessive drinkers showed no difference in risk compared to non-drinkers.

Table 35. Alcohol use and risk of developing AD – results from studies reviewed by Anstey et al., 2009.

The authors noted that the study of alcohol use as a risk factor for late-life health outcomes is complicated by variation in the type of beverage used and the criteria for measuring and categorizing quantity. In addition, the present meta-analyses (and many other studies) are limited to current use of alcohol only, but alcohol patterns may change over a lifetime, and former drinkers may differ from lifetime abstainers. Five of the studies included in the systematic review either for AD or for cognitive decline collected data on former drinkers compared to lifetime abstainers.217–221 The cognitive outcomes for these studies were varied and included AD, dementia, and cognitive decline. Three of the studies218,219,221 showed no difference in the associations between cognitive outcome and former drinkers compared to lifetime abstainers. But the results from two other studies217,220 indicated that former drinkers account for much of the risk of cognitive impairment among non-drinkers; this suggests many former drinkers may have stopped drinking for reasons that also predispose to cognitive impairment, such as health problems. Although publication bias was not assessed formally, the systematic review authors noted that they included studies from article reference lists and articles that did not focus on alcohol use, but in which alcohol use was a covariate. For this reason, one might conclude that the potential for publication bias was reduced. Quality ratings of the studies were not provided, but strict selection criteria may have increased the likelihood that only high-quality studies were included in the review.

The systematic review authors concluded that light to moderate alcohol use in late life was associated with an attenuated risk of AD. They further concluded that it was not clear whether these results reflect selection effects in cohort studies that begin in later life, a protective effect of alcohol consumption throughout adulthood, or a specific benefit of alcohol in late life. The review did not find an increased risk of AD among individuals who drank heavily, but the authors speculated that this may be due to selection bias given the age of the samples.

We did not identify any additional eligible cohort studies published since June 2006.

In conclusion, individuals who drink light to moderate amounts of alcohol in late life appear to be at reduced risk of AD; however, further research is needed to determine whether this association is due to confounding factors. For example, there is some evidence to suggest that those who continue to use alcohol in late life are generally healthier, which may itself lead to a lower risk of dementia.

Toxic Environmental Exposures

We identified one good quality systematic review of occupational risk factors for Alzheimer’s disease, focusing on the associations between AD and pesticides, solvents, electromagnetic fields, lead, and aluminum in the workplace.52 The review included 21 case-control studies and three cohort studies published between 1984 and 2003. Although case-control studies are a weaker design than cohort studies for establishing causality, we included case-control studies for this factor because of the paucity of data from cohort studies. Further, exposures to specific toxic substances are relatively uncommon and would require very large sample sizes to have sufficient power to detect an effect in general community samples. The number of studies and subjects for each risk factor is summarized in Table 36. Three of the publications reported on different exposures from the same study population. Studies were included if it was possible to calculate a relative risk for AD; if the exposure occurred in the workplace; and if the clinical diagnosis of AD was based on NINCDS-ADRDA, DSM, or ICD criteria. Two epidemiologists completed data abstraction and quality assessments independently; disagreements were resolved by consensus. Study quality was assessed using a 39-item assessment tool for case-control studies and a 30-item measure for cohort studies. A global quality index was calculated for each study and scored as the percentage of the maximum possible value achieved. Results were described qualitatively.

Table 36. Toxic environmental factors and risk of developing AD – characteristics of studies reviewed by Santibanez et al., 2007.

For the 24 studies, the median global quality index was 36.6 percent. The most common quality problems were misclassification of the exposure (18/24), use of surrogate informants (12/17), misclassification of the disease (11/24), and selection bias (10 studies). Study quality was judged to be higher for pesticide exposures. Two cohort studies reported higher adjusted relative risks for AD with exposure to defoliants and fumigants (RR 4.35; 95 percent CI 1.05 to 17.90)222 and pesticides in men (RR 2.39; 95 percent CI 1.02 to 5.63).223 Two higher quality case-control studies137,224 found small, non-statistically significant associations between pesticides and AD.

Other exposures were reported to show less consistent associations. Of the 11 studies evaluating solvents, two found a statistically significant association with AD. However, these two studies were from the same population base. The single cohort study evaluating solvent exposure222 did not find an association with AD (RR 0.88; 95 percent CI 0.31 to 2.50). Studies of lead exposure were all of case-control design and assessed as low quality; none showed a statistically significant association with AD. One study of aluminum exposure was a higher quality case-control study and found no association with AD (RR 0.95; 95 percent CI 0.5 to 1.9).225 Similarly, the two lower quality case-control studies226,227 did not find any association between aluminum and AD.

Our search identified two additional studies on the exposures of interest, one evaluating aluminum exposure and one evaluating blood mercury levels. Rondeau et al.228 followed a community sample of 1925 non-demented adults, age 65 and older, for a mean of 11.3 years. Aluminum exposure was estimated using a food frequency questionnaire that assessed tap water consumption, coupled with chemical analysis of aluminum levels in drinking water. The estimate of aluminum intake is not well validated, and followup rates were not given. The risk of AD was increased for aluminum intake ≥ 0.1 mg/day (RR 1.34; 95 percent CI 1.09 to 1.65); no dose-response relationship was observed. This observation of elevated risk differs from the finding of no association in three previous case-control studies. Using a subgroup from the Canadian Study of Health and Aging (15 percent of the cohort), Kroger et al.81 used a nested case-control design to evaluate the association between blood mercury levels and AD. After a median of 4.9 years, individuals in the 3rd (OR 0.41; 95 percent CI 0.23 to 0.74) and 4th quartiles of exposure (OR 0.56; 95 percent CI 0.32 to 0.99) were at lower risk for AD. However, the relatively low participation rate may have introduced significant selection bias. We did not identify any studies on Agent Orange or Gulf War syndrome.

In summary, few cohort studies have examined the association between toxic-environmental exposures and risk of AD. Most case-control studies have important methodological limitations that may bias the results. Among the exposures considered, only pesticides showed a consistent association with AD.

Genetic Factors

After age, family history is the strongest risk factor for the development of Alzheimer’s disease. As early as the 1920s, there were reports of a few families with many individuals with AD across more than one generation, suggesting a genetic contribution to the disease (for a review, see Kennedy et al., 1994229). Twin studies provided further support for the role of genes in the etiology of AD. Heritability is defined as the proportion of disease liability attributable to genes, and it can be estimated from the difference in disease concordance rates for monozygotic twin pairs compared to dizygotic pairs. Estimates of the heritability of AD from twin studies have ranged from 0.33 to 0.74,230–232 indicating a moderate genetic contribution. The genetics of AD, however, are complex, with fully penetrant autosomal dominant mutations responsible for early-onset disease (onset prior to 60 years of age) and other genetic susceptibility factors responsible for the much more common late-onset disease (> 95 percent of cases). Mutations in three different genes have been identified that cause early-onset AD, but these account for a small minority of individuals with AD. Disease-causing mutations in these genes (the amyloid precursor protein [APP] gene and the presenilin 1 and 2 genes) are completely penetrant, and most individuals who inherit these mutations become symptomatic in their thirties or forties. These three genes also play a role in amyloid formation, strengthening the argument that amyloid deposition is a key factor in disease pathogenesis. Individuals who inherit a disease-causing mutation in one of these genes will develop AD unless they die prematurely from other causes. Genetic testing is commercially available for each of these early-onset disease genes. In contrast to early-onset AD, no classically Mendelian genetic influences have been found for the much more common late-onset AD.
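The twin-based estimation logic mentioned above can be illustrated with Falconer's classic approximation, in which heritability is roughly twice the difference between the monozygotic and dizygotic twin correlations for disease liability (for a binary trait such as AD, these correlations are usually derived from concordance rates via the tetrachoric correlation, a step omitted here). The numbers below are hypothetical, not the published AD estimates:

```python
def falconer_heritability(r_mz, r_dz):
    """Falconer's approximation: h^2 = 2 * (r_MZ - r_DZ).

    r_mz, r_dz: twin-pair correlations in disease liability for
    monozygotic and dizygotic pairs. Assumes DZ twins share half of
    their segregating genes and that shared environment is equal for
    both twin types."""
    return 2.0 * (r_mz - r_dz)

# Hypothetical liability correlations (for illustration only):
h2 = falconer_heritability(0.60, 0.35)  # a moderate genetic contribution
```

If MZ and DZ correlations are equal, the estimate is zero, consistent with no additive genetic contribution under the model's assumptions.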

The literature on genetic influences on late-onset AD is extensive. AlzGene, a regularly updated genetic database that compiles association studies on AD (http://www.alzgene.org/), reports data from 1355 studies examining 660 genes (Website updated January 29, 2010; accessed January 31, 2010).53 AlzGene includes only studies published in peer-reviewed journals and performs meta-analyses on genetic polymorphisms that have been examined in at least four case-control samples. Meta-analyses are updated as more data are published, but family-based studies are not included in meta-analyses. Odds ratios and 95 percent confidence intervals are calculated for all polymorphisms with minor allele frequencies greater than one percent in healthy controls using a random-effects model with weights that incorporate within- and between-study variance. Data are presented for all studies and then separately after excluding studies in which the Hardy-Weinberg equilibrium criteria have not been met. To avoid including overlapping data, usually only the largest sample is included for analysis.
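The Hardy-Weinberg screen described above rests on a simple goodness-of-fit test: allele frequencies are estimated from the observed genotype counts, expected genotype counts are derived from them, and a χ2 statistic with one degree of freedom is computed. A minimal sketch (counts are hypothetical; real genetics pipelines may prefer an exact test when expected counts are small):

```python
def hwe_chi_square(n_aa, n_ab, n_bb):
    """Chi-square goodness-of-fit statistic (1 df) for Hardy-Weinberg
    equilibrium, given genotype counts for a biallelic marker."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)  # estimated frequency of allele A
    q = 1.0 - p
    expected = [p * p * n, 2 * p * q * n, q * q * n]
    observed = [n_aa, n_ab, n_bb]
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Counts in perfect equilibrium (p = 0.5) give a statistic near 0;
# a heterozygote deficit pushes it well past the 1-df critical value (3.84).
```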

The meta-analyses performed in AlzGene are graded based on the amount of data available for a polymorphism, consistency of replication, and an assessment of bias. The data are graded from “A” to “C” based on the number of minor alleles in the case and control population. A grade of A requires > 1000 minor alleles, B between 100 and 1000, and C < 100. Consistency of replication is determined by I2 point estimates: A = < 25 percent; B = 25 to 50 percent; and C = > 50 percent. Sources of bias that are considered include errors in phenotyping, genotyping, and population sources, as well as publication bias. Publication bias is assessed with a Begg-modified funnel plot depicting allele-specific ORs for each study versus its standard error on a semi-logarithmic scale. Summary ORs from meta-analyses are also graded based on their deviation from 1.0. Studies with summary ORs < 1.15, or ORs > 1.15 with evidence of publication bias, receive a grade of C, acknowledging that occult biases and selective reporting may invalidate the proposed association. Studies that lose statistical significance after exclusion of the original publication, or studies violating Hardy-Weinberg equilibrium, are also given a grade of C. An overall assessment of association credibility is based on these grades: credibility is “strong” if a gene receives three A grades; “moderate” if it receives at least one B grade and no C grades; and “weak” if it receives any C grades.
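The grading rules just described are mechanical enough to express directly. The sketch below encodes them as stated in the text; boundary cases (exactly 100 or 1000 alleles, exactly 25 or 50 percent I2) are resolved by assumption, since the text does not specify them:

```python
def grade_allele_count(n_minor_alleles):
    """Amount-of-data grade: A > 1000 minor alleles, B 100-1000, C < 100."""
    if n_minor_alleles > 1000:
        return "A"
    if n_minor_alleles >= 100:  # boundary handling assumed
        return "B"
    return "C"

def grade_replication(i2_percent):
    """Consistency-of-replication grade from the I2 point estimate."""
    if i2_percent < 25:
        return "A"
    if i2_percent <= 50:  # boundary handling assumed
        return "B"
    return "C"

def credibility(amount, replication, bias):
    """Overall credibility from the three letter grades."""
    grades = (amount, replication, bias)
    if "C" in grades:
        return "weak"
    if grades == ("A", "A", "A"):
        return "strong"
    return "moderate"  # at least one B, no C
```

By these rules a polymorphism such as APOE e4, with thousands of minor alleles, low I2, and no bias concern, would receive three A grades and “strong” credibility.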

Apolipoprotein E (APOE = gene; ApoE = protein) is the single most-validated genetic susceptibility factor in AD (overall AlzGene grade A). APOE has three common polymorphisms (e2, e3, and e4), each encoding a different amino acid sequence in the ApoE protein. Inheriting one or two copies of APOE e4 increases the risk of developing AD in a dose-dependent fashion, while inheriting an e2 allele reduces risk. Case-control studies examining the association of APOE and AD are reported for Caucasian (28 studies), Asian (5 studies), African-descent (2 studies), Hispanic (1 study), and other or mixed origin populations (1 study). All studies reported increased risk of AD in subjects with APOE e4. A meta-analysis of 38 studies by AlzGene produced a summary OR of 3.68 (95 percent CI 3.30 to 4.11) for the e4 versus e3 alleles. The lower limit of the confidence interval excluded 1.0 in 36 of the 38 studies included in the analysis. A meta-analysis of 37 studies examining the association of APOE e2 with the development of AD demonstrated a protective role for this allele (summary OR 0.621; 95 percent CI 0.456 to 0.85). The studies used for the AlzGene meta-analysis were originally compiled and analyzed by Farrer et al.,233 with some data removed because they were derived from family studies, not published in English, or concerned unpublished genotype data. The results reported by Farrer and colleagues were qualitatively identical to the subsequent AlzGene analysis. They also provided summary ORs for the risk of developing AD for each genotype compared to the reference genotype APOE e3/e3; these were (OR; 95 percent CI): e2/e4, 2.6 (1.6 to 4.0); e3/e4, 3.2 (2.8 to 3.8); e4/e4, 14.9 (10.8 to 20.6); e2/e2, 0.6 (0.2 to 2.0); and e2/e3, 0.6 (0.5 to 0.8). No other susceptibility gene in AD approaches the statistical level of APOE. The literature suggests that there are racial and ethnic differences in the strength of the association between APOE genotype and AD.
The APOE e4 association with AD is stronger in people of Japanese ancestry and weaker among African-Americans and Hispanics than among Caucasians, but there was significant heterogeneity in the ORs in studies of African-Americans (p < 0.03). APOE e2 also appears to be associated with protection from AD in Asians (OR for e2 vs. e3 0.548; 95 percent CI 0.277 to 1.08). Additional studies in African-Americans, Asians, and Hispanics are needed to establish a definitive risk estimate. The APOE e4 effect is age-dependent, with a major contribution to risk in people between the ages of 40 and 90 years, but the effect diminishes after the age of 70.
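The allele-level odds ratios quoted throughout this section come from 2 × 2 tables of allele (or genotype) counts in cases and controls. A sketch of the standard calculation, using the Woolf (log-based) 95 percent confidence interval; the counts in the usage example are hypothetical, not data from any study cited here:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Woolf 95% CI from a 2x2 table:
    a = exposed cases, b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: 40 risk alleles among 100 case alleles,
# 20 risk alleles among 100 control alleles.
or_, lo, hi = odds_ratio_ci(40, 60, 20, 80)
```

Because the interval is symmetric on the log scale, an OR whose log-scale interval excludes zero corresponds to a CI that excludes 1.0, the criterion used repeatedly above.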

Nine other gene polymorphisms received a grade of A using AlzGene criteria. Clusterin (CLU), also called apolipoprotein J, located on chromosome 8, has been linked to AD in nine case-control samples. The OR for all samples was less than 1, with a summary OR for all studies of 0.86 (95 percent CI 0.82 to 0.89). Two groups, in separate GWAS studies involving approximately 30,000 subjects, identified CLU rs11136000 as the single nucleotide polymorphism (SNP) with the strongest association other than APOE (P = 1.4 × 10−9 and P = 7.5 × 10−9 in the two studies).234,235 By comparison, the P value for the most significant SNP in the APOE locus in one of the studies was 1.8 × 10−157.234 CLU expression is increased in a number of pathologic conditions involving brain injury or inflammation, and CLU binds soluble amyloid beta peptide, which may be relevant to AD pathogenesis. Phosphatidylinositol binding clathrin assembly protein (PICALM; also known as clathrin assembly lymphoid-myeloid leukemia gene [CALM]), located on chromosome 11, was also identified in a GWAS analysis of more than 16,000 individuals and received an A grade in AlzGene.234 The OR for PICALM was less than 1.0 in 6 of 6 case-control samples examined, and in 5 of 6 samples the 95 percent CI excluded 1.0. The AlzGene summary OR for association of PICALM with AD was 0.87 (95 percent CI 0.83 to 0.91). PICALM is involved in clathrin-mediated endocytosis and may play a role in synaptic vesicle fusion; synaptic density is correlated with cognitive decline in AD, suggesting that this function may be relevant to pathogenesis. Sortilin-related receptor (SORL1) is a low-density lipoprotein receptor relative located on chromosome 11 that has been associated with risk of AD. An AlzGene meta-analysis of 21 studies reported an OR of 1.10 (95 percent CI 1.03 to 1.71), with I2 = 33. A locus on chromosome 14 called GWA 14q32.13 was identified as associated with AD in five studies.
The meta-analysis OR for GWA 14q32.13 was 0.84 (95 percent CI 0.77 to 0.93). No associated gene has been identified for this polymorphism. Tyrosine kinase, non-receptor 1 (TNK1) has been examined in five case-control populations involving 10,920 people. Three samples showed a statistically significant association of TNK1 with AD, with 95 percent CIs excluding unity; two showed a similar trend; and one had an OR in the opposite direction (> 1). A meta-analysis of five datasets of TNK1 by AlzGene produced a summary OR of 0.84 (95 percent CI 0.76 to 0.93). Angiotensin 1 converting enzyme (ACE) has been examined in 57 case-control studies, with 20 studies showing positive results, 6 trending positive, and 22 showing negative results. Separate analysis was not available for six studies. A meta-analysis of ACE polymorphism rs1800764 in five datasets found a summary OR of 0.83 (95 percent CI 0.72 to 0.95), suggesting a protective effect for genetic variants in this gene. Two other single-nucleotide polymorphisms (SNPs) within this gene also produced significant results in meta-analysis.

Inflammation has been implicated in the pathogenesis of AD, and interleukin 8 (IL8) has been examined as a candidate gene in four case-control studies. A meta-analysis of these four studies reported an OR of 1.27 (95 percent CI 1.08 to 1.50; I2 = 0), suggesting that different genotypes of IL8 modify an individual’s risk of developing AD. The low-density lipoprotein receptor (LDLR), located on chromosome 19, has also been identified as a genetic modifier of AD. An AlzGene meta-analysis of four studies found an OR of 0.85 (95 percent CI 0.72 to 0.99). The final gene with an AlzGene grade of A is Cystatin C, a member of the cystatin family of cysteine protease inhibitors; it has been examined in 19 case-control studies. Six studies reported an association, and 10 studies were negative. A meta-analysis of four datasets found a summary OR of 1.28 (95 percent CI 1.04 to 1.56) for the association of cystatin C with AD. It should be noted that meta-analysis ORs can be skewed by publication bias, producing inflated estimates if negative studies are less likely to be published.

A number of genome-wide association studies (GWAS) involving thousands of patients and hundreds of thousands of polymorphisms have been performed in AD, but apart from the studies identifying CLU and PICALM noted above, most have identified genetic variants of marginal statistical significance. Almost all AD GWAS have confirmed the association between the region containing APOE and AD, but detection of other polymorphisms has varied between studies despite regular use of validation sets to confirm results. Variability in results between studies could be the result of differences in populations or could be caused by spurious associations as the result of multiple hypothesis testing. At present, most GWAS results are best viewed as suggestive, with need for independent confirmation and demonstration of biological relevance to disease pathogenesis.

In summary, autosomal-dominant, early-onset AD is associated with mutations in three genes. APOE is the only well-validated susceptibility gene for late-onset AD, but a number of promising candidates have been identified, including those listed above. Additional data are necessary to confirm the relationship of these genes with AD and to demonstrate their biologic relevance to pathogenesis.

Key Question 2 – Factors Associated with Reduction of Risk of Cognitive Decline

Key Question 2 is: What factors are associated with the reduction of risk of cognitive decline in older adults?

Nutritional and Dietary Factors

B vitamins and folate. We identified five eligible cohort studies that examined the association between folate and B vitamins and cognitive decline.56,236–239 These studies are summarized in Table 37; detailed evidence tables are in Appendix B. All five studies reported either continuous variable outcomes or multiple levels of categorical outcome (e.g., quintiles). Three of the studies used community samples in the U.S.,56,236,238 one used a community sample in Europe,239 and the fifth used a clinical sample in Europe.237 Two studies were based on the same sample, but one used niacin56 and the other used folate and B12 levels236 to predict cognitive decline. For all studies, participants were non-demented at baseline. The length of followup ranged from 5.5 to 10 years. All studies used sample selection methods to minimize selection bias. Three of the studies used plasma/serum levels of B vitamins and folate.237–239 The other two studies estimated dietary and supplement intake of B vitamins and folate based on self-reported information collected using a modified Harvard food frequency questionnaire; this group had previously reported results from a study validating the food frequency questionnaire. Three studies compared baseline characteristics between those exposed and unexposed.56,236,239 The case definitions and measures of cognitive change for the studies are described in Table 37. The analyses appear generally appropriate, and most controlled for relevant potential confounders, including homocysteine and the other predictor variables (i.e., folate, B vitamins). None of the studies conducted a priori sample size calculations.

Table 37. B vitamins and folate and risk of cognitive decline.

Table 37

B vitamins and folate and risk of cognitive decline.

One study examined the association between niacin (B3) and cognitive change over time.56 Investigators reported that higher dietary intake of niacin was generally associated with a modest protective effect on cognition; however, the results were significant only in subgroups of individuals without stroke or myocardial infarction or individuals with baseline cognitive scores in the upper 85 percent of the sample. When total niacin intake, including supplements, was evaluated, there was no significant association between niacin intake and cognitive decline. The findings on folate were in opposite directions, with one study that used self-reported intake of nutrients reporting that higher levels of folate were associated with higher rates of cognitive decline,236 and another study that used plasma levels of folate reporting that low levels of folate were associated with greater cognitive decline.238 A third study that used serum levels of folate did not show any association between folate levels and cognitive change.237 In general, the studies did not show any association between either B12 or B6 and cognitive decline, except in select subsamples.236 In addition, there were inconsistent findings for the association between cognitive decline and holotranscobalamin and methylmalonic acid, markers related to B12 levels and function. One study showed no association between these markers and cognitive change,239 while another237 reported that a doubling of holotranscobalamin levels was associated with a slower rate of decline on the MMSE, and a doubling of methylmalonic acid levels with a more rapid rate of decline. An explanation for the discrepancy in findings using plasma markers for B12 is not obvious, as the measures of central tendency for the baseline levels of the markers were not markedly different between the studies. The studies did differ in the source of the sample, with one being a community sample and the other a clinical registry. In addition, the cognitive measures used to assess change differed between the studies; inconsistent results across different cognitive measures have frequently been reported both within and between studies for many exposures.

In conclusion, there is no consistent evidence to support an association between cognitive decline and exposure to niacin, folate, B12, or markers for B12 based on estimated intake or plasma levels of these factors. The preponderance of the limited studies on these exposures reports no association between these factors and cognitive change over time. Inconsistencies in the findings reported here may be due to a number of factors, including differences in the type and quality of the exposure measures, the outcome measures, and the sample characteristics.

Other vitamins. We identified eight eligible cohort studies that examined the association between cognitive decline and vitamins C or E, beta carotene, or flavonoids.67,70,240–245 This summary will focus on the two studies with categorical outcomes.67,244 A brief overview of the other studies using continuous outcomes of decline will be given, but a detailed review of these studies will not be provided because they do not change the conclusion from the two studies with categorical outcomes. The two studies reporting categorical variable outcomes are summarized in Table 38; detailed evidence tables for all of the studies are in Appendix B. One of the studies used a community sample in Europe,244 and the other used a Canadian sample composed of both community and institutionalized individuals.67 One study stated that participants were non-demented at baseline,67 and it is assumed here (although not explicitly stated by study investigators) that the participants were non-demented at baseline in the other study.244 The length of followup ranged from 3 to 5 years. One study used sample selection methods to minimize selection bias.244 The other study67 used selection methods that only partially addressed selection bias: it drew on a subsample from a larger cohort study in which a disproportionate segment of the sample was at relatively high risk of cognitive impairment; part of the study sample was drawn from institutionalized participants (20 percent) and part from community participants. One study244 used self-reported information to estimate food and supplement intake of vitamins C and E, beta carotene, and flavonoids. The other study used self-reported vitamin supplement use, confirmed by inspection of medication containers for the community residents and medical records for the institutionalized participants.67

Table 38. Other vitamins and risk of cognitive decline.

Table 38

Other vitamins and risk of cognitive decline.

One study compared baseline characteristics between those exposed and unexposed.67 The case definitions for the studies are described in Table 38. The analyses appear generally appropriate and controlled for relevant potential confounders. Neither study conducted a priori sample size calculations. One study reported no association between cognitive decline and vitamins E or C, beta carotene, or flavonoids.244 The other study reported no association between cognitive decline and vitamins E or C separately, but did show a protective association for any vitamin use and for combinations of multivitamins and vitamins C and E.67

To briefly summarize the studies reporting continuous outcomes of cognitive decline, we note that two studies were based on the same sample, with one reporting the association between cognitive decline and vitamins C and E,242 and the other adding NSAIDs to the list of predictors.240 Another two studies were based on the same sample with some differences in how vitamin E intake was estimated.70,241 Two studies defined exposure levels using blood samples,243,245 and the others based exposure levels on self-reported information. Four of the studies stated that participants were non-demented at baseline,70,240–242 and it is assumed here that the vast majority of subjects in the other two studies243,245 were non-demented at baseline based on the relatively young mean baseline age. Of the six studies defining cognitive decline as a continuous measure, only two studies that were based on the same sample reported a protective effect of vitamin E (but not vitamin C) on cognition.70,241 However, for one of these studies,241 the risk estimates for vitamin E and cognitive decline were not consistently in the same direction for all quintile levels of vitamin E.

In conclusion, the findings on vitamins E and C, beta carotene, and flavonoids provide no consistent support for a protective association between these nutrients and cognitive decline.

Ginkgo biloba. We identified no systematic reviews or studies evaluating the use of ginkgo biloba and risk of cognitive decline.

Omega-3 fatty acids. We identified two good quality systematic reviews evaluating the association between omega-3 fatty acids and risk of cognitive decline.29,30 We discuss the more recent (2009) review by Fotuhi et al.29 The review included three cohort studies published between 2003 and 2007. The three studies included a total of 4174 subjects; one each was conducted in the United States, France, and The Netherlands. Prospective observational studies were selected that addressed the specific association between any form of omega-3 fatty acids and cognitive change in participants age 65 or older. The review did not report a structured quality assessment of the included studies; however, study characteristics for key design variables were reported, and the study selection criteria focused the review on higher quality studies. In two studies, cognitive testing at baseline excluded participants with dementia, and the third study recruited normal volunteers. Length of followup ranged from 4 to 6 years. No information was given on followup rates. Covariate adjustment included age, sex, education, and baseline cognitive function. Omega-3 fatty acid intake was estimated by dietary histories in two studies;246,247 the third248 measured erythrocyte membrane fatty acid content. Dietary histories were used in one study to classify exposure as the number of fish-containing meals per week and in the other study to estimate the levels of DHA and EPA from fish and other sources. Because of significant heterogeneity in study design, results were synthesized qualitatively. Adjusted results were reported and are summarized in Table 39. Each of the three studies showed an association between greater exposure and less cognitive decline.

Table 39. Omega-3 fatty acids and risk of cognitive decline – study characteristics and results from studies reviewed by Fotuhi et al., 2009.

Table 39

Omega-3 fatty acids and risk of cognitive decline – study characteristics and results from studies reviewed by Fotuhi et al., 2009.

The authors of the systematic review concluded that the existing evidence favors a role for long chain omega-3 fatty acids in slowing cognitive decline in older adults.

Our search identified two additional studies (described in three publications) examining the relationship between omega-3 fatty acids and cognitive decline.249–251 The study characteristics are summarized in Table 40. Both studies enrolled participants with a mean age < 65 and thus were not included in the review by Fotuhi et al.29 Beydoun et al. reported results from dietary assessment250 and plasma omega-3 fatty acids249 separately. Participants drawn from four U.S. communities (n = 7814) were followed for 6 years; outcomes were reported as reliable change index for three tests. Adjusted analyses showed that higher long chain omega-3 fatty acid intake was associated with less risk of decline as assessed by the delayed word recall and verbal fluency (controlled oral word association) tests, but not as assessed by the digit symbol substitution test or global cognition. An analysis of the 2251 participants with plasma omega-3 fatty acid measurements showed that higher total n-6 PUFAs decreased the risk of global cognitive decline. Total omega-3 PUFAs (OR 0.84; 95 percent CI 0.66 to 1.05), EPA, and DHA were not associated with risk of global cognitive decline. Consistent with the analysis by dietary history, higher levels of DHA and EPA were associated with less decline in verbal fluency.
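Beydoun et al. expressed outcomes as a reliable change index, which scales an observed score change by the measurement error expected from imperfect test-retest reliability, so that only changes larger than measurement noise count as decline. A minimal sketch of the classic Jacobson-Truax formulation, with hypothetical score, SD, and reliability values rather than the study's actual parameters:

```python
import math

def reliable_change_index(baseline, followup, sd_baseline, test_retest_r):
    """Jacobson-Truax reliable change index: observed change divided by the
    standard error of the difference implied by test-retest reliability."""
    sem = sd_baseline * math.sqrt(1 - test_retest_r)  # standard error of measurement
    se_diff = math.sqrt(2) * sem                      # SE of a difference score
    return (followup - baseline) / se_diff

# Hypothetical values: a 4-point drop on a test with SD 6 and reliability 0.80
rci = reliable_change_index(baseline=30, followup=26, sd_baseline=6.0, test_retest_r=0.80)
declined = rci < -1.96  # |RCI| > 1.96 is the conventional cutoff for reliable change
print(f"RCI = {rci:.2f}, reliable decline: {declined}")
```

Under these hypothetical values the 4-point drop falls within measurement error, illustrating why the index is less prone than raw change scores to flagging spurious decline.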

Table 40. Omega-3 fatty acids and risk of cognitive decline – recent cohort studies.

Table 40

Omega-3 fatty acids and risk of cognitive decline – recent cohort studies.

The second study251 was a secondary analysis from a 3-year RCT of folic acid in 404 subjects with elevated homocysteine levels. Total omega-3 fatty acid was measured at baseline. Cognitive change was measured with five tests evaluating memory, processing speed, word fluency, sensorimotor speed, and complex speed. Higher plasma omega-3 PUFAs were associated with less decline in sensorimotor speed (p = 0.02) and complex speed (p < 0.01), but not memory, information processing speed, or word fluency.

In summary, the effects of n-3 fatty acids on cognitive decline have been evaluated in five prospective longitudinal studies. The studies vary substantially in their measurement of n-3 fatty acids, participant characteristics, and outcome measures. Each study reports a positive association between at least one measure of PUFAs and a measure of cognition. However, some results are conflicting. Heude et al.248 found that higher total omega-3 PUFA and higher omega-3:omega-6 fatty acid ratios were associated with less risk of cognitive decline, while Beydoun et al.249 found that higher n-6 PUFAs, but not total plasma n-3 PUFAs, reduced cognitive decline. Another analysis by Beydoun et al.250 found that higher plasma DHA and EPA were associated with less decline in verbal fluency but had no effect on global cognition, while Dullemeijer et al.251 found no association between plasma n-3 PUFAs and verbal fluency but positive effects on sensorimotor and complex speed. Some studies compared multiple measures of exposure with multiple measures of cognition, increasing the risk of detecting spurious associations. The positive results could also be explained by residual confounding: fish consumption may be a marker of a generally healthier lifestyle, and the observed effects on cognitive decline might have little to do with fish itself. Despite these cautions, these studies support a possible association between higher consumption of PUFAs and less cognitive decline.

Other fats. We identified one eligible cohort study that examined the association between cognitive decline and fat intake.252 The study is summarized in Table 41; a detailed evidence table is provided in Appendix B. The study used a community sample in the United States. Participants were non-demented at baseline. The median length of followup was 5.6 years. The study used sample selection methods to minimize selection bias. It used self-reported information to estimate fat intake. Based on a validation substudy, the authors reported the Pearson correlations for comparative validity with 24-hour dietary recalls were 0.40 for monounsaturated fat, 0.47 for saturated fat, 0.36 for polyunsaturated fat, and 0.39 for cholesterol. The study compared baseline characteristics between those exposed and unexposed. The case definitions for the studies are described in Table 41. The analyses appear appropriate and were controlled for relevant potential confounders. The study did not conduct a priori sample size calculations.

Table 41. Intake of various types of fat and risk of cognitive decline.

Table 41

Intake of various types of fat and risk of cognitive decline.

The study showed that higher intakes of saturated and trans-unsaturated fats were linearly associated with greater cognitive decline, but total fat, vegetable and animal fat, and cholesterol were not associated with cognitive change over time. In another study of this same sample,253 the authors noted an interaction between copper intake and fat intake: higher copper intake was associated with greater cognitive decline in subjects with high saturated and trans fat intake.

In conclusion, there is a single study each addressing risk of AD and risk of cognitive decline associated with dietary fat intake. Together they provide preliminary evidence for a deleterious association between increased saturated and trans fat intake and risk of AD or cognitive decline. Further research is needed both to validate self-reported measures of dietary intake and to confirm these findings.

Trace metals. We did not identify any systematic reviews evaluating the association between trace metals and cognitive decline. Our search identified three primary research publications245,253,254 from two cohort studies.

Selenium. Selenium is an antioxidant and a constituent of brain selenoproteins that may be important for the maintenance of brain function. The association between plasma selenium and cognitive change was described in two publications from the same community-based cohort of older adults with normal cognition from the Nantes district of France.245,254 These publications are summarized in Table 42; detailed evidence tables are provided in Appendix B. Subjects were recruited in part through advertising campaigns, which may have introduced selection bias. It is assumed here that the majority of participants were non-demented at baseline, based on the relatively young age at baseline for both studies and the lengthy followup period in one of the studies.254 Followup rates were 84 percent at 4 years and 54 percent at 9 years. Analyses were adjusted for multiple potential confounding variables. Berr et al.245 described the relationship between baseline selenium levels and at least a 3-point decline on the MMSE at 4 years. Selenium levels below the 25th percentile increased the risk for cognitive decline (OR 1.58; 95 percent CI 1.08 to 2.31). Akbaraly et al.254 reported the association between change in plasma selenium levels and declines on four cognitive measures. Analyses examined the 2- and 9-year change in selenium against the four cognitive measures, evaluating change in cognition both as a continuous measure and as a dichotomous variable using two separate thresholds. Analyses were not adjusted for multiple comparisons. Two-year change in plasma selenium was not associated with change in cognition. When change in cognition was analyzed as a continuous variable, the 9-year change in selenium was associated only with the MMSE. When cognition was analyzed as a dichotomous variable (10th or 25th percentile of decline), change in plasma selenium was associated with declines on the finger tapping test. Associations with other cognitive measures were inconsistent depending on the threshold for cognitive decline.

Table 42. Plasma selenium levels and risk of cognitive decline.

Table 42

Plasma selenium levels and risk of cognitive decline.

In summary, the results of one of these studies provide limited support for a possible association between baseline selenium and cognitive decline. However, a number of issues raise concerns about the robustness of this finding: the potential selectivity of the sample, the lack of an association between change in selenium level and cognitive change, and the modest effect size, which may reflect confounding by other factors.

Copper, zinc, and iron. A single cohort study involving 3718 older adults from a community sample in Chicago, Illinois, evaluated the relationship between dietary copper, zinc, and iron and cognitive decline.253 This study is summarized in Table 43; a detailed evidence table is provided in Appendix B. Subjects were non-demented at baseline and were followed for a median of 5.5 years; 88 percent of survivors completed followup. Copper, zinc, and iron intake was estimated based on the modified Harvard Food Frequency Questionnaire. Based on a validation analysis in a subsample, the authors reported that Pearson correlations between total intake levels on the questionnaire and multiple 24-hour dietary recalls were 0.46 for copper, 0.50 for zinc, and 0.43 for iron. Cognitive decline was measured as the standardized composite of four tests. Analysis adjusted for multiple potential confounding variables, including other nutritional factors (vitamins C and E, niacin, and folate), showed no association between copper, zinc, or iron and cognitive decline. A power calculation was not reported. However, higher copper intake was associated with greater cognitive decline in subjects with high saturated and trans fat intake (difference in cognitive decline for highest versus lowest copper quintile was −0.61 standardized units/year, p < 0.01). This interaction between copper intake and saturated fat intake was specified a priori, supported by an animal study showing that neurodegenerative changes may be exacerbated by consumption of trace amounts of copper in drinking water. These results provide preliminary evidence that high copper intake may be associated with more rapid cognitive decline in individuals who consume a diet high in saturated and trans fats.

Table 43. Intake of copper, zinc, and iron and risk of cognitive decline.

Table 43

Intake of copper, zinc, and iron and risk of cognitive decline.

Mediterranean diet. We identified two eligible cohort studies that examined the association between cognitive decline and the Mediterranean diet.86,87 The Mediterranean diet is characterized by high intake of vegetables, legumes, fruits, and cereals; high intake of unsaturated fatty acids (mostly in the form of olive oil), but low intake of saturated fatty acids; a moderately high intake of fish; a low-to-moderate intake of dairy products (mostly cheese or yogurt); a low intake of meat and poultry; and a regular but moderate amount of alcohol, primarily in the form of wine and generally during meals. The included studies are summarized in Table 44; detailed evidence tables are provided in Appendix B. One study87 used a community sample in Europe, and the other study86 used a community sample in the United States. Participants were cognitively normal at baseline in one study86 and non-demented in the other study.87 The outcome for one study was progression to MCI; the diagnosis for MCI was retrospectively assigned.86 In the other study, the outcome was longitudinal change on multiple cognitive tests. Length of followup ranged from an average of 4.5 to 7 years. Exposure was determined based on self-reported information from a semi-quantitative food frequency questionnaire. Both studies used similar methods to calculate a Mediterranean diet score based on the responses on this questionnaire. The investigators of both studies noted that they had previously reported that this questionnaire has adequate validity and reliability based on substudies of segments of the questionnaire. Both studies used sample selection methods to minimize selection bias and compared baseline characteristics by exposure level. 
One study stated that the outcome diagnosis was assigned blind to the exposure level;86 it is assumed here that in the other study the cognitive measures were administered blind to exposure level.87 Analyses were appropriate and controlled for relevant potential confounders. Neither study conducted a priori sample size calculations.
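Both studies condensed the food frequency responses into a single Mediterranean diet score. A minimal sketch of the conventional 0-to-9 scoring approach (one point per beneficial component at or above the cohort median, one point per detrimental component below it, one point for moderate alcohol); the food groups, cutoffs, and field names here are illustrative assumptions, not the studies' actual scoring rules:

```python
# Hypothetical food-group lists; real scores also handle sex-specific alcohol windows
BENEFICIAL = ["vegetables", "legumes", "fruits", "cereals", "fish"]
DETRIMENTAL = ["meat", "dairy"]

def med_diet_score(intake, medians, alcohol_range=(5, 25)):
    """Trichopoulou-style 0-9 Mediterranean diet adherence score.
    intake/medians: dicts of daily intake and cohort medians (same keys)."""
    score = 0
    for group in BENEFICIAL:               # beneficial: point for >= cohort median
        score += intake[group] >= medians[group]
    for group in DETRIMENTAL:              # detrimental: point for < cohort median
        score += intake[group] < medians[group]
    # ratio of monounsaturated to saturated fat counts as beneficial
    score += intake["mufa_sfa_ratio"] >= medians["mufa_sfa_ratio"]
    lo, hi = alcohol_range                 # moderate alcohol (g/day) earns a point
    score += lo <= intake["alcohol_g_per_day"] <= hi
    return score                           # 0 (low adherence) to 9 (high adherence)

# Hypothetical participant and cohort medians (grams/day, except the fat ratio)
intake = {"vegetables": 300, "legumes": 20, "fruits": 250, "cereals": 180,
          "fish": 40, "meat": 60, "dairy": 150, "mufa_sfa_ratio": 1.8,
          "alcohol_g_per_day": 10}
medians = {"vegetables": 250, "legumes": 15, "fruits": 200, "cereals": 150,
           "fish": 30, "meat": 80, "dairy": 200, "mufa_sfa_ratio": 1.5}
print(med_diet_score(intake, medians))
```

Because every component is dichotomized at the cohort median, the score measures relative adherence within the study population rather than absolute intake.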

Table 44. Mediterranean diet and risk of cognitive decline.

Table 44

Mediterranean diet and risk of cognitive decline.

Scarmeas and colleagues86 reported that compared to those in the lowest tertile of Mediterranean diet score, those in the middle tertile were not at significantly lower risk of developing MCI (HR 0.83; 95 percent CI 0.62 to 1.12), but being in the highest tertile was associated with lower risk of MCI (HR 0.72; 95 percent CI 0.52 to 1.00). The hazard ratio for the trend was also significant (HR 0.85; 95 percent CI 0.72 to 1.00; P for trend = 0.05). Feart and colleagues87 found a significant association between a higher Mediterranean diet score and fewer MMSE errors (β = −0.006; 95 percent CI −0.01 to −0.0003; P = 0.04 per 1 point of the Mediterranean diet score), indicating less decline on the MMSE over 5 years. Longitudinal performance on other cognitive tests did not show this association, except when participants with incident dementia were excluded. In this latter analysis, the memory test showed slightly less decline associated with a higher Mediterranean diet score (β = 0.05; 95 percent CI 0.005 to 0.010; P = 0.03 per 1 point of the Mediterranean diet score).

In summary, there is preliminary evidence that greater adherence to a Mediterranean diet may be associated with less cognitive decline in later life. Some caution is warranted in drawing conclusions from these findings given the small effect sizes, the minimally significant results, and the fact that the association is limited to a few of the multiple cognitive measures. Confirmation of the findings is needed.

Intake of fruit and vegetables. We identified two eligible cohort studies that examined the association between cognitive decline and intake of fruits and vegetables.255,256 The outcomes of both studies were continuous variables. The two studies are summarized in Table 45; detailed evidence tables for the studies are in Appendix B. Both studies used community samples in the United States. One study stated the participants were non-demented at baseline;256 the other study255 did not provide information about baseline cognitive level, but it is assumed here that most of the participants were non-demented at baseline. The length of followup ranged from 2 to 5.5 years. Both studies used sample selection methods to minimize selection bias. Both studies used self-reported information to estimate fruit and vegetable intake, and both reported additional substudies aimed at validating the food frequency questionnaires. For the foods of interest in these analyses, the correlations tended to be in the moderate range for responses on the food frequency questionnaires and more detailed nutrition data. Both studies compared baseline characteristics between those exposed and unexposed. The case definitions for the studies are described in Table 45. The analyses appear generally appropriate and were controlled for relevant potential confounders. Neither study conducted a priori sample size calculations.

Table 45. Intake of fruit and vegetables and risk of cognitive decline.

Table 45

Intake of fruit and vegetables and risk of cognitive decline.

Both studies reported a significant protective association between higher vegetable intake and lower rates of cognitive decline, with the association being strongest for green leafy vegetables. There were no significant associations between fruit intake and cognitive decline. The results from these two studies are consistent in suggesting a protective effect of vegetable consumption on cognition, but the actual difference in mean scores between the groups is quite small. Additional studies confirming these findings would be useful to rule out the possibility that residual confounding explains the results and to determine whether these small differences are clinically significant.

Total intake of calories, carbohydrates, fats, and proteins. We identified no systematic reviews or studies evaluating total intake of calories, carbohydrates, fats, and protein and risk of cognitive decline.

Medical Factors

Vascular factors. Factors considered under this heading include diabetes mellitus, metabolic syndrome, hypertension, hyperlipidemia, and homocysteine.

Diabetes mellitus. We identified two systematic reviews that examined the relationship between diabetes mellitus and risk of cognitive decline.42,43 Lu et al.42 identified seven prospective, longitudinal cohort studies published between 1999 and November 2007 involving 38,573 subjects. Five studies were conducted in the United States and two in Western Europe. Ascertainment of diabetes varied among studies, with the diagnosis of diabetes based on history, medical records, fasting glucose, or oral glucose tolerance test. Heterogeneity between studies was assessed using the chi-square test, and publication bias was examined using visual inspection of funnel plots and Egger’s test. No heterogeneity or publication bias was found. Global measures of cognitive function included the MMSE, the 3MS, and the TICS. Several studies also examined executive function using Trail Making Test Part B (Trails B), verbal fluency, and Digit Symbol Substitution Test (DSST). Individuals with diabetes mellitus were found to have faster decline in global cognitive function as measured by change in MMSE, 3MS, or TICS score after 2 to 7 years of followup. Similarly, baseline diabetes mellitus was also associated with a faster decline in measures of executive function.

An earlier systematic review by Cukierman et al.43 reported that the annual rate of change in MMSE or 3MS scores in three studies was 1.2 to 1.5 times faster in diabetic subjects than in non-diabetics. This review also examined studies that assessed global cognitive change as a categorical variable. Cognitive decline was defined as either a percent reduction from baseline scores or a reduction below a particular threshold (for example, a score less than 80 percent of the population). Diabetics were more likely than non-diabetic subjects to experience a decline in MMSE or 3MS score of ≥ 6.6 to 11.5 percent from their baseline score (ORs ranged from 0.7 to 1.7 in four studies), or to score in the bottom 15th or 20th percentile of the population (ORs ranged from 1.0 to 1.2 in two studies), but only one of these six studies had a lower 95 percent CI that exceeded 1.0. A meta-analysis of the results from six studies that performed the MMSE at baseline and followup found that the OR for cognitive decline for diabetics as measured by the MMSE was 1.2 (95 percent CI 1.05 to 1.4). There was no statistical evidence of heterogeneity (chi squared = 6.73, df = 5 (p = 0.24); I2 = 25.7 percent). A similar analysis for the DSST (two studies) found that diabetics were more likely than non-diabetics to decline by at least 7.3 percent from their baseline score (OR 1.6; 95 percent CI 1.2 to 2.2), or to score in the bottom 15th percentile of the population (OR 2.3; 95 percent CI 1.2 to 4.3). A meta-analysis of the two studies that performed the DSST found that diabetic subjects were at increased risk of a decline in performance compared to non-diabetics (OR 1.7; 95 percent CI 1.3 to 2.3). Six studies used a composite of other measures of cognitive performance to detect cognitive decline. Five of these found that subjects with diabetes had a higher risk of cognitive decline than non-diabetics.
In three of the studies that demonstrated decline, the lower limit of the 95 percent CI exceeded 1.0. Cukierman et al.43 concluded that people with diabetes have a greater risk and rate of cognitive decline than people without diabetes.

We identified five additional studies on the association between diabetes mellitus and cognitive decline.257–261 These studies are summarized in Table 46; detailed evidence tables are provided in Appendix B. One study was from Australia,257 one from Europe,259 and three from the United States.258,260,261 The total number of subjects enrolled was 9056; 58.2 percent were women. Only two studies had a significant number of African-American subjects. The mean age of subjects ranged from 59 to 74 years, and the duration of studies was 4 to 14 years. One study examined conversion to a diagnosis of MCI or any form of MCI (Age Associated Memory Impairment, Age Associated Cognitive Decline; Mild Neurocognitive Disorder, CDR = 0.5 and Other Cogntive Disorder) and found no association between diabetes and cognitive impairment.257 Yaffe et al. did not use a categorical diagnosis, but divided subjects ranging in age from 70 to 79 years into three groups based on performance on the 100-point Modified Mini-Mental State Examination (3MS).258 Subjects whose 3MS scores were stable at baseline, 3, 5, and 8 years (slope of scores ≥ 0) were called cognitive maintainers, those with slopes between 0 and > 1 SD below mean were called minor decliners, and those with slope that declined by more than 1 SD were major decliners. Thirty percent of the population maintained cognitive function over 8 years, 53 percent demonstrated minor decline, and 16 percent had major cognitive decline. The investigators reported that in multivariate analysis, diabetes mellitus was not significantly associated with being either a minor or major decliner in cognitive function.258 Comijs et al. 
used the 30-point MMSE as a measure of general cognitive function in a 6-year study of 1358 subjects ranging in age from 62 to 85 years.259 Using a Generalized Estimating Equation (GEE) model, investigators found that subjects with diabetes had a significantly lower baseline MMSE score, but their cognitive change over time was not significantly different from that of non-diabetics. Diabetes was also associated with lower baseline scores on Raven’s Colored Progressive Matrices, the Alphabet Coding Task-15, and the Dutch version of the Auditory Verbal Learning Test (AVLT), but cognitive decline over time was significant only for delayed recall on the AVLT (p < 0.01). Knopman et al. examined the association of diabetes mellitus with cognitive decline over 14 years using three neurocognitive tests (Digit Symbol Substitution [DSST], Delayed Word Recall [DWR], and Word Fluency [WF]).260 Although there was a decline in all three measures in subjects with diabetes, it was significant in multivariate analysis only for WF (P = 0.021). In univariate analysis adjusted for race, age, sex, and education, subjects with diabetes had a greater average annual decline on the DSST (P = 0.002) and WF (P = 0.003), but not DWR. Investigators found no interaction between risk factors. Carmelli et al. examined the effect of APOE e4 genotype on 10-year cognitive decline in 410 subjects with midlife hyperglycemia.262 Investigators found that APOE e4 carriers with midlife hyperglycemia experienced greater decline than either APOE e4 carriers without hyperglycemia or hyperglycemic subjects who did not carry an e4 allele.
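The slope-based grouping used by Yaffe et al. can be sketched as a simple two-step procedure: fit a per-subject slope to the repeated 3MS scores, then classify the slope against the distribution of slopes. The function names and the mean/SD parameters below are illustrative, not taken from the published analysis:

```python
def ols_slope(years, scores):
    """Ordinary least-squares slope of test score versus time (points/year)."""
    n = len(years)
    mx, my = sum(years) / n, sum(scores) / n
    num = sum((x - mx) * (y - my) for x, y in zip(years, scores))
    den = sum((x - mx) ** 2 for x in years)
    return num / den

def classify_decliner(slope, mean_slope, sd_slope):
    """Maintainer: non-negative slope; major decliner: slope more than
    1 SD below the mean slope; minor decliner: in between (illustrative cut points)."""
    if slope >= 0:
        return "maintainer"
    if slope < mean_slope - sd_slope:
        return "major decliner"
    return "minor decliner"

# A hypothetical subject losing about 1 point per year on the 3MS,
# assessed at baseline and years 3, 5, and 8 as in the study design:
slope = ols_slope([0, 3, 5, 8], [90, 87, 85, 82])
print(classify_decliner(slope, mean_slope=-0.5, sd_slope=0.3))  # → major decliner
```

The design choice worth noting is that classification depends on the cohort's slope distribution, so the same absolute decline can land in different categories in different populations.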

Table 46. Diabetes mellitus and risk of cognitive decline.

Table 46

Diabetes mellitus and risk of cognitive decline.

In summary, the data linking diabetes mellitus with a rapid rate of cognitive decline are mixed, with most studies showing a modest association. A number of studies have identified declines in selective cognitive function (e.g., DSST, WF and delayed recall on AVLT) in diabetics, but the specific domain affected has varied across studies. Possible explanations for variation in results include use of different criteria to diagnose diabetes mellitus, and failure to consider the effect of duration and severity of diabetes on cognitive outcome. More data are needed on the effect of various forms of diabetes treatment (insulin versus oral agents versus diet) and the role of comorbid vascular factors and hyperinsulinemia on cognitive decline.

Metabolic syndrome. We identified four prospective, longitudinal cohort studies, involving 5713 subjects, that evaluated the association between metabolic syndrome and cognitive impairment.260,263–265 These studies are summarized in Table 47; detailed evidence tables are provided in Appendix B. One study was conducted in the Netherlands, one in Singapore, and two in the United States; all were community samples. Metabolic syndrome was identified in three studies using National Cholesterol Education Program 3rd Adult Treatment Panel Guidelines (NCEP-ATPIII),260,263,264 and in the other using International Diabetic Federation criteria.265 The NCEP-ATPIII criteria require at least three of the following for a diagnosis of metabolic syndrome:

Table 47. Metabolic syndrome and risk of cognitive decline.

Table 47

Metabolic syndrome and risk of cognitive decline.

  1. Waist measurement > 88 cm for women or > 102 cm for men.
  2. Hypertriglyceridemia (≥ 150 mg/dL [≥ 1.69 mmol/L]).
  3. Low HDL (men < 40 mg/dL [< 1.03 mmol/L]; women < 50 mg/dL [< 1.29 mmol/L]).
  4. High blood pressure (systolic ≥ 130 mmHg; diastolic ≥ 85 mmHg) or currently using an antihypertensive medication.
  5. High fasting glucose (≥ 110 mg/dL [≥ 6.10 mmol/L]) or currently using anti-diabetic medication (insulin or oral agents).

The International Diabetes Federation criteria are similar, but require a waist circumference greater than 90 cm for men (> 80 cm for women) plus at least two of the following: elevated blood pressure or use of antihypertensive medication; elevated fasting glucose (≥ 5.6 mmol/L) or use of antidiabetic drugs; elevated triglycerides (≥ 1.7 mmol/L) or use of lipid-lowering agents; or low HDL cholesterol (< 0.9 mmol/L in men and < 1.1 mmol/L in women) or use of lipid-lowering agents.
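The NCEP-ATPIII definition above is a count-based rule (any three of five criteria) and can be expressed directly. In this sketch the dictionary keys are hypothetical field names, but the thresholds are those listed in the criteria:

```python
def meets_ncep_atpiii(p):
    """NCEP-ATPIII: metabolic syndrome if at least 3 of the 5 criteria are met.

    `p` is a dict with hypothetical keys: waist in cm, lipids and glucose in
    mg/dL, blood pressure in mmHg, and booleans for medication use.
    """
    waist_limit = 88 if p["sex"] == "F" else 102
    criteria = [
        p["waist_cm"] > waist_limit,
        p["triglycerides_mg_dl"] >= 150,
        p["hdl_mg_dl"] < (50 if p["sex"] == "F" else 40),
        p["sbp"] >= 130 or p["dbp"] >= 85 or p["on_antihypertensive"],
        p["fasting_glucose_mg_dl"] >= 110 or p["on_antidiabetic"],
    ]
    return sum(criteria) >= 3  # count-based rule: any 3 of 5

# Hypothetical subject meeting the waist, triglyceride, and glucose criteria:
subject = {"sex": "F", "waist_cm": 95, "triglycerides_mg_dl": 160,
           "hdl_mg_dl": 55, "sbp": 120, "dbp": 80, "on_antihypertensive": False,
           "fasting_glucose_mg_dl": 120, "on_antidiabetic": False}
print(meets_ncep_atpiii(subject))  # → True
```

A rule check of this form makes explicit why prevalence estimates are sensitive to which criteria set is applied: the IDF definition would instead require the waist criterion plus two others.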

The definition of cognitive change also differed among the studies. Yaffe et al.263 described cognitive change as a decline of 5 or more points on the 100-point Modified Mini-Mental State (3MS) examination at either the 3- or 5-year evaluation, while Ho et al.265 defined change as a 2-point decline on the 30-point MMSE over 1 to 2 years. Van den Berg et al.264 and Knopman et al.260 examined the effect of metabolic syndrome on the rate of change in a battery of tests over 5 years of followup. Van den Berg et al. used the MMSE, Stroop, Letter Digit Coding, and word list immediate learning or delayed recall, and Knopman et al. used Digit Symbol Substitution, Delayed Word Recall, and Word Fluency. There were also several differences among the study populations. Van den Berg et al.264 examined residents of Leiden, the Netherlands, who were 85 or older; 17 percent of the participants had MMSE scores ≤ 18 at baseline. Ho et al.265 studied people from Singapore with a mean age of approximately 65, all of whom had MMSE scores ≥ 24. Yaffe et al.263 studied black and white elders from Memphis, Tennessee, and Pittsburgh, Pennsylvania, who were between 70 and 79 years of age, with a mean baseline 3MS score of 90. This latter study did not report the number of subjects with low 3MS scores, but required participants to self-report normal functioning in activities of daily living and to have no diagnosis of dementia. The subjects in the Knopman et al. study260 were a subset of participants in the Atherosclerosis Risk in Communities (ARIC) study. In all studies, participants with metabolic syndrome were more likely to be women, and in two studies they had lower levels of education; because both female sex and low education are associated with an increased risk of AD, these baseline differences may confound interpretation of results. All studies adjusted for important confounders, such as sex and educational level.
Yaffe,263 Ho,265 and Knopman and colleagues260 adjusted for age, but van den Berg and colleagues264 did not, possibly because of the narrow age range of their participants. Van den Berg et al. and Knopman et al. also did not stratify results based on baseline cognitive testing. There were also differences in followup: in the study by van den Berg and colleagues, 51 percent of subjects died before repeat cognitive assessment,264 while followup rates for Ho, Yaffe, and Knopman and colleagues were 64 percent, 89 percent, and 59 percent, respectively.260,263,265 Mortality in the study by van den Berg et al. was not associated with metabolic syndrome (HR 0.9; 95 percent CI 0.7 to 1.2).

Two studies found that metabolic syndrome was modestly associated with cognitive decline. Yaffe and colleagues reported that 26 percent of participants with metabolic syndrome had a decline of at least 5 points on the 3MS, compared to 21 percent of subjects without metabolic syndrome (adjusted RR 1.20; 95 percent CI 1.02 to 1.41).263 They further analyzed their data based on whether participants had evidence of increased inflammation as measured by the serum markers C-reactive protein (CRP) and interleukin 6. Subjects with metabolic syndrome and increased inflammatory markers were at greater risk of cognitive decline (adjusted RR 1.66; 95 percent CI 1.19 to 2.32) compared to those with metabolic syndrome and normal levels of inflammatory markers (adjusted RR 1.08; 95 percent CI 0.89 to 1.30), suggesting that inflammation is a mechanistically important mediator of cognitive change in metabolic syndrome. Ho and colleagues reported that subjects with metabolic syndrome were more likely to experience a 2-point decline on the MMSE than subjects without metabolic syndrome: among subjects with metabolic syndrome, 19.9 percent had a 2-point decline, compared to 14 percent of subjects without the syndrome (OR 1.42; 95 percent CI 1.10 to 1.98).265 Knopman et al. prospectively followed 1130 individuals with a mean age at baseline of 59 years for 14 years.260 Forty-six percent of the cohort met criteria for metabolic syndrome, and 52 percent were African-American. Investigators reported that subjects with metabolic syndrome had a statistically significant greater annual decline on word fluency than other subjects in univariate testing that controlled for race, age, sex, and education (P < 0.001). Metabolic syndrome was not associated with a significant decline on Digit Symbol Substitution or Delayed Word Recall Tests. The authors also reported that there was no evidence of differential effects of risk factors on cognitive decline by race or sex. Van den Berg et al. 
found that metabolic syndrome was not associated with lower cognitive performance in their study of people over the age of 85 years.264 Indeed, data from the same cohort (the Leiden 85-Plus Study) showed that subjects with metabolic syndrome had a slower rate of cognitive decline on the MMSE, Stroop, and Letter Digit Coding tests than subjects without metabolic syndrome. The authors suggested that the difference in age between Leiden 85-Plus participants and participants in the other studies may explain the disparate findings. Some literature suggests that weight loss, low blood pressure, and low cholesterol values are associated with an increased risk of dementia and higher mortality in the old-old, which could explain the apparently protective effect of metabolic syndrome in this older group. There may also be a survivor effect, such that individuals who reach 85 despite having metabolic syndrome are less susceptible to the adverse effects of these risk factors.

In summary, metabolic syndrome is associated with a modestly increased risk of cognitive decline in studies involving subjects under the age of 80. This relationship may not hold in persons over the age of 85 years. There is limited evidence that inflammation may mediate some of the risk of metabolic disease in elders under the age of 80, so measures of inflammation should be included in future studies. Future studies should also include various definitions of metabolic syndrome and subjects with onset of metabolic syndrome in midlife, as syndrome duration may be relevant to late-life cognitive decline and dementia.

Hypertension. We did not identify any good quality systematic reviews evaluating hypertension and risk of cognitive decline. Our own search identified 16 unique cohorts, described in 19 publications, that examined the association between hypertension and cognitive decline.108,163,258,260,262,266–280 These studies are summarized in Table 48; detailed evidence tables are provided in Appendix B. The included studies involved more than 43,000 subjects and were highly heterogeneous. Three studies266–268 assigned diagnoses of incident MCI using different modifications of Petersen’s criteria. The other studies examined changes in cognitive test scores over time. Twelve studies108,163,258,260,262,271,272,274–276,279,280 used community cohorts from the United States, and one used a community cohort from France.278 More narrowly defined community-based cohorts included WWII twin pairs,262,272 retired Catholic clergy,108 and a cohort drawn from the participants in the Study of Osteoporotic Fractures.271 Two publications described observational analyses based on RCTs. One cohort273 comprised subjects (aged ≥ 60 years) from the Systolic Hypertension in the Elderly Program (SHEP) trial, a study of antihypertensive treatment in which all subjects had an SBP > 160 and a DBP < 90 and lacked “clinically obvious dementia” at baseline. Another cohort277 used subjects from the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) trial, a test of cognitive training. Subjects in ACTIVE (mean age ~ 74 at baseline) had MMSE scores > 22 and were not functionally impaired; subjects with controlled hypertension would have been classified as normotensive, and analyses were not adjusted for antihypertensive medication use.

Table 48. Hypertension and risk of cognitive decline.

Table 48

Hypertension and risk of cognitive decline.

The studies also used varying definitions of hypertension, as noted in Table 48. Some studies treated blood pressure as a continuous variable.108,275,279 Others used self-reported hypertension, either as one component of designating a subject as hypertensive or as the sole means of determining its presence.163,258,271,276 In the cohort from the SHEP trial,273 all participants had isolated systolic hypertension. Definitions of hypertension were set a priori in some cases258,260,266,273,276,277 and clearly derived from the data in another.274 In the remaining studies, hypertension was self-reported, treated as a continuous variable, or not clearly defined a priori.

There were 2990 subjects in the studies that diagnosed MCI. Rates of incident MCI varied. Solfrizzi and colleagues266 described 113 cases of incident MCI in a population of 1524 over 3.5 years. Tervo et al.267 found 65 cases of incident MCI in a study sample of 548 over 3.26 years. Reitz and colleagues268 found 334 cases of incident MCI in a study sample of 918 over 4.7 years. None of these studies found a significant association between hypertension and MCI.

Studies from community samples that evaluated the effects of hypertension on various cognitive tests108,163,258,260,262,271,272,274–276,278–280 had mixed results. For various cognitive domains, associations between hypertension and cognitive decline were inconsistent: processing speed (two of three studies positive), executive functioning (one of two studies positive) and global cognition (five of nine studies found a statistically significant association between hypertension and greater cognitive decline). The detailed results are discussed in the following paragraphs.

Two studies275,276 found no significant association between either SBP or DBP and cognitive decline over 6 to 7 years. Cognitive outcomes were measured with the MMSE275,276 and other memory tests.275

In a substudy of the ARIC cohort, Knopman et al. found that hypertension was associated with worse performance on verbal fluency but not with delayed recall or processing speed.260 Alves de Moraes and colleagues, in an earlier analysis of the full ARIC cohort,269 compared five categories of hypertension against each of three cognitive tests and found a significant association only between uncontrolled hypertension (high SBP or high DBP at both followup visits) and DSST scores. Peila and colleagues163 found that never-treated hypertensives declined faster on a measure of general cognition, the Cognitive Abilities Screening Instrument (CASI; range 0 to 100), at 1.46 points/year, than did hypertensives treated for 5 to 12 years (1.14 points/year). However, differences between untreated hypertensives and those treated for 0 to 5 years or for more than 12 years were not statistically significant, making the importance of the finding unclear. Barnes and colleagues271 gave subjects a shortened version of the MMSE several times over a study duration of 6 to 15 years (median 10 years). The study population was divided into tertiles defined by the slopes of the lines representing this change over time, with optimal cognitive functioning defined by a slope of 0. Absence of hypertension was predictive of optimal cognitive functioning, with an OR of 1.22 (95 percent CI 1.03 to 1.44). Tzourio et al.278 found the strongest association between hypertension and cognitive decline, defined as a 4-point decline in MMSE scores over the 4-year followup of the study. When hypertension was defined as SBP ≥ 160 mmHg or DBP ≥ 95 mmHg, the OR for cognitive decline was 2.8 (95 percent CI 1.6 to 5.0); when hypertension was defined as SBP ≥ 140 mmHg or DBP ≥ 90 mmHg, the OR decreased to 1.8 (95 percent CI 1.2 to 2.9). The study population was younger than in other cohorts (mean age 65 years), and an MMSE decline of 4 points would represent a fairly severe decline.
Waldstein and colleagues279 reported a nonlinear relation of SBP with cognitive change as measured by tests of non-verbal memory and confrontational naming. Younger subjects with higher SBP made more baseline errors on the Benton Visual Retention Test (BVRT) but improved over time, while older individuals with high SBP made more errors and worsened over time. Haan and colleagues274 found that an SBP one standard deviation above the mean was associated with a faster decline on the 3MS and the DSST. Glynn et al.280 examined the Boston EPESE cohort, a subset of which had blood pressure readings from 9 years before an initial brief cognitive assessment, with further cognitive tests 3 and 6 years later. No association was identified between various blood pressure levels and errors on a brief memory test or the Short Portable Mental Status Questionnaire (SPMSQ), except between a single category of hypertension (SBP > 160) and a greater increase in SPMSQ errors over time.

Other study populations were less generalizable. Two studies262,272 reported on the association in white male twin pairs from WWII. Hypertensives had a greater decline on the DSST, which was statistically significant using a one-sided test,262 but not on the MMSE or BVRT.272 In the same cohort, subjects with high blood pressure in midlife showed greater decline in MMSE scores over 10 years compared to those with low SBP (< 120 mmHg).

Kuo and colleagues277 looked at subjects in the ACTIVE trial, which tested different cognitive training techniques. Subjects were followed over 2 years. Hypertensive subjects (defined by direct measurement but not use of antihypertensives) had faster decline in reasoning tests, while memory, speed of processing, and global cognition composite scores were not significantly affected. The authors found no meaningful (or statistically significant) interactions between the cognitive training intervention and the effect of hypertension on cognitive decline.

If there is a pattern to these isolated positive results, it is that they tended to involve tests of frontal lobe functioning (reasoning, working memory, etc.). This region of the brain is thought to be vulnerable to vascular insults, which would be expected to be more common in hypertensive subjects.

One study was formed from the subjects in the SHEP trial (volunteers for blood pressure screening with SBP 160 to 239 and DBP < 100, randomized to antihypertensive treatment or placebo).273 The SHEP trial compared the effects of antihypertensive drugs versus placebo on cardiovascular outcomes; cognitive change was a tertiary outcome measured by Trails A and the DSST. The duration of this substudy was only 1 year, and all subjects were hypertensive at trial initiation. Neither the antihypertensive intervention nor the change in SBP was associated with changes in cognition over this brief period.

The Religious Orders cohort of retired clergy108 had a higher mean age (75 years) and was highly educated (mean of 18 years of education), and baseline blood pressures were low. No relationship was noted between blood pressure, analyzed as a continuous variable, and cognitive decline, defined by a global score combining multiple tests, over 6 to 15 years. It is possible that individuals with hypertension selectively died before inclusion in this cohort, or that the limited variability of blood pressure levels prevented detection of any association.

In summary, while multiple cohorts have been examined for an association between hypertension and cognitive decline using various tests, the samples are as heterogeneous as are the outcomes, definitions of hypertension, and results. The strongest results were associated with subjects whose hypertension was untreated and whose cognitive decline was relatively severe. Some studies found results when multiple tests were compared individually with hypertension at baseline, raising the possibility that a positive result could arise by chance.

Hyperlipidemia. We identified one good quality systematic review that examined the relationship between total cholesterol and cognitive impairment and cognitive decline.40 No formal quality assessment of included papers was done. Two studies examined the relationship between a mild cognitive impairment diagnosis and total cholesterol. One266 examined cholesterol in late life and found a lower risk with higher cholesterol, but the confidence interval included 1 (RR 0.67; 95 percent CI 0.45 to 1.00). Another study281 found total cholesterol in midlife of ≥ 6.5 mmol/L (251 mg/dL) to be related to incident MCI, with an OR of 1.9 (95 percent CI 1.2 to 3.0). It is possible that the method of screening for MCI in this latter study was insensitive to some cases, and this may have skewed the results.

Of the studies examining cognitive decline included in the systematic review, only two would have met our inclusion criteria. Kalmijn and colleagues282 found no association between late-life cholesterol and cognitive decline (OR 1.38; 95 percent CI 0.75 to 2.55). Reitz et al.283 also found no relationship (after Bonferroni correction) between cholesterol and cognitive decline.

Our own independent search identified two additional papers (Table 49; detailed evidence tables are provided in Appendix B). Knopman et al.270 found no association between hyperlipidemia and decline on any test (Word Fluency, DSST, and DWR) in a population initially aged 45 to 64 years and followed for a mean of 6 years. Packard et al.284 identified no significant associations between LDL or HDL levels and performance on any test in a cohort formed from subjects in a statin treatment trial (aged 70 to 82 years at baseline) who were followed for a mean of 3.2 years; analyses were adjusted for treatment allocation. The tests given were the MMSE, a picture-word recall test (immediate recall and recall after 20 minutes), the Stroop Color and Word Test, and a letter digit coding test.

Table 49. Total cholesterol and risk of cognitive decline.

Table 49

Total cholesterol and risk of cognitive decline.

As with the data concerning incident AD and total cholesterol, the available data do not convincingly link lipid levels to cognitive decline. There was a trend toward a lower risk of cognitive decline with higher late-life cholesterol in one study,266 but no association was found in four others.270,282–284

Homocysteine. We identified four cohort studies120,238,285,286 and a nested case-control study237 involving 3409 subjects that examined the association between homocysteine and risk of cognitive decline (Table 50). Among the five studies, three were conducted in European communities,237,285,286 and two in U.S. communities.120,238 One study selected a sample of highly functioning elderly;238 the other studies selected samples from general community populations. Three studies237,238,286 used non-fasting homocysteine samples that may not measure bioavailable folate as well as fasting samples. Rather than specifying abnormal homocysteine levels a priori, all studies set thresholds based on population levels. Thresholds varied across studies. In the cohort studies, followup rates exceeded 80 percent. In the nested case-control study, 51 percent of survivors agreed to participate when approached at 10-year followup, and of these, only 68 percent provided blood for analysis. The duration of followup was relatively short in two studies (2 to 2.7 years),285,286 and 4.7 to 10 years in the other studies. Three studies237,285,286 used declines in the Mini-Mental State Examination (MMSE) to determine cognitive decline. Decline was evaluated as a continuous measure or using different thresholds for decline. The other two studies used multiple cognitive tests to compute a single summary score238 or summary scores for several domains of cognitive function.120 All studies adjusted results for multiple potential confounding variables.

Table 50. Homocysteine and risk of cognitive decline.

Table 50

Homocysteine and risk of cognitive decline.

Using a MMSE decline of greater than 1 point per year, Kalmijn et al.286 found no significant association with tertiles of homocysteine. Dufouil et al.285 evaluated the association using quartiles of homocysteine. The highest quartile (≥ 15 μmol/L) was associated with an increased risk for a ≥ 3-point decline in the MMSE over 2 years (OR 2.8; 95 percent CI 1.2 to 6.2). This 1- to 1.5-point average annual decrease in MMSE would represent a fairly rapid decline in cognition. Clarke et al.237 evaluated the association between the 8-year change in homocysteine values and 10-year change in MMSE. Doubling of homocysteine was associated with more rapid decline on the MMSE, but when analyses were adjusted for other vitamin markers (B12, folate, methylmalonic acid), the association was no longer statistically significant.

Analyses of other cognitive outcomes showed inconsistent associations with baseline homocysteine values. Kado et al.238 used a summary cognitive score and found no significant association between homocysteine, categorized by quartiles, and cognitive decline over 7 years in a population selected to be in the top third of cognitive and physical functioning (RR for highest versus lowest quartile 1.11; 95 percent CI 0.65 to 1.76). Luchsinger et al.120 created summary scores for memory, language, and visuospatial domains. Comparing homocysteine values greater than the median (15.6 μmol/L) to values below the median, homocysteine was not associated with a greater decline in any domain. Subjects were followed for a mean of 4.7 years. Dufouil et al.285 found that homocysteine values greater than 15 μmol/L were associated with greater decline on the DSST, finger tapping, and Trails B tests.

The variability in subjects studied, classification of exposure, outcomes measured, and duration of followup may explain the variability in observed associations. However, given the small number of studies and the variability across multiple dimensions, no clear pattern can be determined.

In summary, we identified five studies that examined the relationship between baseline homocysteine and cognitive decline. Three of the five studies found no association, and the two that reported associations used differing definitions of exposure. Overall, there is no consistent association between homocysteine levels and cognitive decline.

Other medical factors. Factors considered under this heading include sleep apnea, obesity, and traumatic brain injury (TBI).

Sleep apnea. We did not identify any good quality systematic reviews or primary studies that evaluated the association between sleep apnea and risk of cognitive decline.

Obesity. We did not identify any good quality systematic reviews that examined the relationship between weight and cognitive decline. We identified three prospective cohort studies that examined the effects of obesity on cognitive decline.257,258,287 These studies are summarized in Table 51; detailed evidence tables are provided in Appendix B. Two were conducted in the United States,258,287 and one in Australia.257 All three studies recruited participants from the community, thereby decreasing selection bias in assembling the cohort; however, only two studies excluded participants with impaired cognition.257,258 Most participants were over age 70 at the time of cognitive testing, except in one study in which the mean age was 62.5 years.257 Two of the studies ascertained BMI by direct measurement of height and weight;258,287 the remaining study calculated BMI from self-reported height and weight.257 Cognitive decline was treated as a continuous variable in one study.287 The other two studies used categorical outcomes: incident MCI257 or classification as major and minor decliners, defined respectively by a slope of cognitive decline ≥ 1 SD below the mean and by a negative slope within 1 SD of the mean.258 The period of followup ranged from 4 to 8 years. Only one study compared the sample at baseline by exposure, finding that persons with the highest BMI tended to be younger, female, and black;287 the others compared samples at baseline based on their outcome. No study reported an a priori sample size calculation, but all controlled for potential confounders in the analysis.

Table 51. Obesity and risk of cognitive decline.

Table 51

Obesity and risk of cognitive decline.

Sturman et al.287 included people with all levels of baseline cognitive function and showed that a higher BMI was associated with less cognitive decline over 6 years in both black (β = 0.0013, p = 0.009) and non-black subjects (β = 0.0021, p = 0.006). In a separate analysis limited to the 1010 participants with MMSE scores greater than 24, there was no relationship between obesity and cognitive decline over time in black (β = 0.0003, p = 0.415) or non-black subjects (β = 0.0008, p = 0.086). This latter analysis is of greatest relevance to our study question; however, it is important to bear in mind that it is a secondary analysis. Yaffe et al.258 found that cognitive decline was associated with a higher BMI, but this association was not statistically significant when investigators compared maintainers and minor decliners. They did find a statistically significant association with BMI when comparing minor and major decliners (OR 0.97; 95 percent CI 0.94 to 1.00). Cherbuin et al.257 found no statistically significant association between BMI and MCI (OR 1.01; 95 percent CI 0.93 to 1.10), but did find a significant association between BMI and any major cognitive decline (OR 1.05; 95 percent CI 1.05 to 1.09).

In conclusion, the findings of all three prospective cohort studies examining the association between weight and cognition are inconclusive. A possible explanation is that the effect of weight on cognitive decline is small. It is also possible that extremes of weight have adverse outcomes that are masked when weight is treated as a single continuous variable. It is notable that the studies did not measure lifetime or midlife BMI, even though studies of BMI and AD show that the timing of exposure to obesity is important. In addition, change in weight, which has been shown to predict AD in some studies, was not considered. Future studies are needed to clarify the relationship between weight and cognitive decline, and they should consider age at exposure as well as change in weight.

Traumatic brain injury (TBI). We did not identify any good quality systematic reviews or primary studies that evaluated the association between TBI and risk of cognitive decline.

Psychological and emotional health. Factors considered under this heading include depression, anxiety, and resiliency.

Depression. We identified 13 cohort studies, involving 32,969 subjects, that evaluated the association between depression and categorical outcomes for cognitive impairment.257,271,288–298 All studies excluded subjects with dementia, but most did not specifically exclude individuals with cognitive impairment that did not meet the threshold for dementia. An additional nine studies evaluated the association between depressive symptoms and changes on 26 different cognitive measures analyzed as continuous variables. Because these outcome measures were heterogeneous and the results were similar to those of studies using categorical outcomes, these studies are not discussed in detail.299–307

Among the 13 cohort studies of interest, six studies each were conducted in Western Europe and the United States, all but one291 in community samples. Almost all studies evaluated adults older than age 65. All studies assessed current depressive symptoms using a validated severity measure; two257,296 also assessed antidepressant use at baseline. Seven studies reported incidence of MCI using similar definitions that required abnormal neuropsychological testing, change from prior cognitive status, absence of functional impairment, and not meeting criteria for dementia. The other six studies defined cognitive decline as a change in MMSE meeting or exceeding a specified threshold (1, 3, or 5 points) or a decline in the lowest tertile. The average followup ranged from 1.5 to 6 years; the mean duration was at least 3 years for all but one study.293 Most studies used methods to minimize selection bias and used generally appropriate analysis methods, including adjustments for confounding. However, only one study reported an a priori sample size calculation,294 few controlled for psychotropic medication use, and followup rates were low or not reported in over half the studies.

Because of the variability in how studies categorized significant depressive symptoms, we did not compute a summary estimate of effect. Instead we summarized results qualitatively (Table 52). For the seven studies evaluating incident MCI, five showed an elevated risk for subjects with significant depressive symptoms. One study that found no association with depressive symptoms296 found that antidepressant use increased risk. The study by Christensen et al.289 had few incident cases of MCI and was likely underpowered. In contrast, the studies examining risk for decline on the MMSE were mixed. Three of the six studies showed an elevated risk for cognitive decline among those with depressive symptoms at baseline, one showed an elevated risk only for those with persistent depressive symptoms, and two showed no association. The variability in findings is not explained by differences in study population, exposure measurement, or study design. When all the studies using MCI and decline in MMSE are considered, the evidence suggests an association between depressive symptoms and cognitive decline.

Table 52. Depression and risk of cognitive decline.

Table 52

Depression and risk of cognitive decline.

Anxiety. We identified four prospective cohort studies, involving 6297 mid- to late-life adults, examining the association between anxiety and cognitive decline.257,309–311 All studies were conducted in community-based samples in Western Europe or Australia and followed subjects for up to 17 years.

The study by Wetherell et al.311 followed 704 same-sex twins; the overall followup rate was 75.5 percent. Subjects with dementia at baseline or either of the two followup assessments were excluded from analyses. A baseline measure of neuroticism was used as a proxy for anxiety, and cognitive outcomes were assessed using 11 different measures. Analyses were adjusted for age, sex, and education level, but not for other psychiatric symptoms. There was no association between the 9-item neuroticism measure and change in cognition for any of the 11 different measures.

The study by Bierman et al.309 followed 2351 adults aged 55 to 85. The followup rate at year nine was 62.5 percent, and dropout was associated with higher anxiety scores and lower cognitive performance, which may have biased the estimate of association. Anxiety was measured at multiple time points using the Hospital Anxiety and Depression Scale (HADS), and cognitive outcomes included a general measure of cognitive functioning (MMSE) and measures of fluid intelligence, processing speed, and episodic memory. Analyses were adjusted for age, sex, education, chronic disease count, depressive symptoms, alcohol consumption, and benzodiazepine use. There was no association between anxiety symptoms and cognitive decline for any of the cognitive measures.

The study by Gallacher et al.310 followed 2358 non-demented men aged 48 to 67 for a mean of 17.3 years. Only those with baseline anxiety scores and followup (n = 1160, 48 percent) were included in the analyses. There were multiple baseline demographic and clinical differences between those with and without followup, potentially biasing the estimate of association. Anxiety was measured with the State Trait Anxiety Inventory (STAI) and dichotomized as high using the 31st percentile. Analyses were adjusted for age, education, marital status, cognitive function, and vascular risk factors. During followup, there were 174 cases of incident CIND and 69 cases of incident dementia. Elevated anxiety scores were associated with increased risk of CIND (OR 2.31; 95 percent CI 1.20 to 4.44) and risk of the combined outcome of CIND or dementia (OR 2.19; 95 percent CI 1.24 to 3.88). A sensitivity analysis excluding those with cognitive impairment at baseline showed a stronger association.

Cherbuin and colleagues257 followed 2082 cognitively normal adults for 4 years; followup exceeded 80 percent. Anxiety was measured using the Goldberg Anxiety/Depression Scale, but a threshold for an abnormal result was not specified. Anxiety medication was assessed at baseline. During followup there were 18 incident cases of MCI and, using broader criteria, 64 cases of mild cognitive disorder (MCD). Anxiety symptoms and anxiety medications were not associated with MCI or MCD. A sample size or power calculation was not reported, but the study was likely underpowered to exclude a clinically significant association.

In summary, four prospective cohort studies failed to find a consistent association between anxiety symptoms and cognitive decline. One study309 was strengthened by a validated scale for anxiety, measured at multiple time points, but no study used a clinical or criterion-based diagnosis of anxiety disorders. Questionnaires, such as the HADS, correlate only moderately with clinical diagnosis; a criterion-based diagnosis may be a more clinically relevant measure of exposure.

Resiliency. We did not identify any good quality systematic reviews or primary studies that evaluated the association between psychological resiliency and risk of cognitive decline.

Medications. Prescription and non-prescription drugs considered under this heading include statins, antihypertensives, anti-inflammatories, gonadal steroids, cholinesterase inhibitors, and memantine.

Statins. Our search identified four cohort studies examining the relationship between statin use and cognitive decline.153,312–314 A total of 6827 older adults (mean age > 70 years in all studies) were involved. All studies were conducted in the United States; three drew samples from the community. Followup ranged from 1 to 12 years. Three of the four studies reported measures of global cognition, while the fourth312 reported executive function. Three studies screened for and excluded subjects with dementia. The fourth study312 recruited a consecutive sample of veterans from primary care settings, with a mean age of 75 years, but did not screen for dementia. Only one study selected subjects in a manner that minimized selection bias and baseline inequalities between exposed and unexposed groups.313 Two of the four studies classified statin use only at baseline.312,314 Data were analyzed appropriately and controlled for confounders.

Due to incomplete reporting and heterogeneity in study designs, a summary estimate of effect was not calculated. As summarized in Table 53, results were mixed. The study by Bernick et al.313 was the largest, assessed statin use annually, used multiple control groups, and evaluated annual change for a mean of 5.1 years using a well-validated global measure of cognitive change. In an analysis adjusted for age, sex, education, race, APOE e4, and baseline cholesterol, the difference in mean rate of change in the Modified Mini-Mental State Examination (3MS) between continuous statin users and subjects for whom treatment was recommended but not taken was not statistically significant (0.40 annually; 95 percent CI −0.03 to 0.87). For continuous statin users compared with subjects in whom lipid lowering treatment was not recommended, the difference in mean rate of 3MS change favored statin use (0.49; 95 percent CI 0.04 to 0.95). In summary, a limited number of observational studies, some with important methodological limitations, do not show a consistent association between statin use and cognitive decline in older adults.

Table 53. Statins and risk of cognitive decline.

Table 53

Statins and risk of cognitive decline.

In summary, data from observational studies were limited and did not show a consistent association between statin use and cognitive change. Two large RCTs, discussed under Question 4, also did not show an association between statin use and cognitive change.

Antihypertensives. Two studies were identified examining the impact of antihypertensives on cognitive decline (Table 54).267,278 Tervo et al.267 looked at risk factors for incident MCI, diagnosed using modified Petersen criteria, in a sample from Kuopio, Finland. Self-reported antihypertensive use was elicited during a structured interview. It is not clear whether the indication for the antihypertensive was elicited, but 44 percent of the overall sample had directly measured HTN (≥ 160/95). Diagnosis of MCI was based primarily on a test of delayed recall. Over a 3.26-year followup, there was no statistically significant effect of antihypertensive use on incident MCI (adjusted OR 1.61; 95 percent CI 0.87 to 2.99).

Table 54. Antihypertensives and risk of cognitive decline.

Table 54

Antihypertensives and risk of cognitive decline.

In the Tzourio study278 of the Epidemiology of Vascular Aging (EVA) cohort from Nantes, France, cognitive decline was defined as a 4-point decrease in the Mini-Mental State Examination (MMSE) over 4 years. A 4-point decline in the MMSE would represent a significant decline in cognitive function, and was found in 8.5 percent of the sample. Again, the use of antihypertensives was not found to be protective against cognitive decline in the overall sample (as compared to those with normal measured BP and not on antihypertensives; adjusted OR 1.1; 95 percent CI 0.7 to 1.7). However, among subjects with HTN, the risk of cognitive decline was decreased in those taking antihypertensive medication at baseline and followup compared to those not taking antihypertensives at either time. Both studies evaluated relatively young populations: the mean age in Tervo267 was 67.7 years and in Tzourio278, 65 years.

Both studies were of relatively brief duration. Cognitive decline would be expected to be fairly uncommon under these circumstances and definitions of decline.

In summary, the limited evidence from observational studies does not support an association between antihypertensive use and lower risk for cognitive decline.

Anti-inflammatories. Our search identified six cohort studies examining NSAID use and risk of cognitive decline.240,270,315–318 These studies are summarized in Table 55; detailed evidence tables are provided in Appendix B. The classification of NSAID exposure was variable, ranging from any use, to use before a certain age, to use for a variable duration of time. All of the studies evaluated decline over time in cognitive testing, and all used memory tests. Other tests, as listed in Table 55, were added to individual studies. Studies by Hee Kang et al.318 and Grodstein et al.317 used a global test score formed from the combination of the tests given. Cognitive decline in the Cache County cohort240,315 was defined by change in the 3MS, a measure of general cognition that includes memory, orientation, and similarities among other items.

Table 55. NSAIDs and risk of cognitive decline.

Table 55

NSAIDs and risk of cognitive decline.

Five studies used community cohorts from the United States and one used a cohort from Amsterdam.316 Two of the studies examined the same Cache County cohort, one evaluating NSAIDs alone,315 the other evaluating the combination of NSAIDs with vitamins C and E.240 In total, approximately 32,600 subjects were included from the five unique cohorts. Followup ranged from 2 to 9 years. Three studies restricted analysis to subjects without cognitive impairment at baseline.240,315,318 Three studies did not eliminate cognitively impaired subjects when forming the cohorts, but one316 had a mean baseline MMSE of 27.5, suggesting few subjects had significant cognitive impairment, and another270 formed the cohort in middle age (47 to 70 years old) when prevalent impairment would be expected to be low. The CHAP cohort317 did not limit inclusion to cognitively intact subjects and enrolled subjects with a mean baseline age of 79.9, suggesting that there may be a significant level of cognitive impairment in the sample. Cognitive impairment among participants could differentially bias the recollection of exposure.

Fotuhi et al.240 found that subjects using vitamin C, vitamin E, and NSAIDs who had at least one APOE e4 allele maintained cognitive functioning, in contrast to declines in every other group. Hayden and colleagues,315 examining the same Cache County cohort, found a protective effect of NSAIDs that was greatest when NSAIDs were started before age 65 and when at least one e4 allele was present. Mean age at baseline for the Cache County study was approximately 74 years; recall of NSAID use decades earlier is of unclear accuracy. Grodstein317 found slower rates of decline with longer NSAID use (5+ years) as compared to no use. Information on duration of NSAID use was collected 3 years after baseline (length of followup 3 to 9 years). It is possible that, in subjects developing cognitive deficits, information on duration of use is more likely to be biased than the data on any NSAID use collected at baseline. However, a sensitivity analysis eliminating subjects with baseline cognitive scores in the bottom 10 percent did not show substantially different results. Grodstein et al.317 found a greater protective effect in this relatively more intact group, especially among those on NSAIDs longer. Jonker and colleagues found a lower odds ratio for decline on tests of delayed recall, but confidence intervals were wide and included no effect.316 An analysis of the large Nurses' Health Study cohort did not show an association between global cognitive decline and aspirin use or NSAID use of at least 8 years (RR for substantial decline 0.77; 95 percent CI 0.57 to 1.05).318 Finally, another large cohort study found no association between NSAID use and cognitive decline on any of three measures.270

In summary, results from the five unique cohorts are inconsistent regarding the association between NSAIDs and cognitive decline. Several studies found no association between NSAID use and cognitive decline, and no study found an association of sufficient magnitude that the impact of unmeasured confounders can be dismissed. The Cache County cohort found protective effects only in some subgroups. Results from the CHAP cohort may be biased given the unknown cognitive status of the cohort at baseline and the collection of duration-of-use data a few years into the study. There is limited support, based on subgroup analyses, for a greater effect of NSAIDs when used at younger ages, for a longer duration, in combination with vitamin supplements, and in those with at least one e4 allele.

Gonadal steroids. We identified a single good quality systematic review that examined the association between hormone replacement therapy and cognitive decline.38 The review identified nine RCTs and eight cohort studies. Ten studies were from the United States, four from European countries, and three from Canada. The nine RCTs are discussed separately under Question 4, below. Of the eight cohort studies, six were rated as having fair quality and two as poor. The cohort studies included 15,298 subjects ranging in age from 59 to 77 years, with duration of followup ranging from 1.5 to 15 years. The formulation of estrogen varied in composition, dose, and method of administration. Most subjects received an estrogen formulation that did not include a progestin.

Studies were not combined because more than 40 different tests were used to assess cognitive function. Thirty of these tests were used in a single study, and seven tests were used in more than two studies, but test administration was not always uniform. Verbal memory using the immediate verbal recall test was examined in four studies and showed a benefit of estrogen treatment in one study. Delayed verbal recall was improved by estrogen treatment in two of three studies, while visual memory improved in one of two studies. Attention tasks were divided into complex attention (0 of 3 positive studies) and mental tracking (0 of 3 positive studies). Most of the studies reporting benefit noted an effect of estrogen on attention tasks and involved women with menopausal symptoms. Abstract reasoning was shown to be improved by estrogen treatment in one of two studies, and mental status, as measured by an improved score in a dementia screening examination, was improved in two of five studies. Verbal fluency was reported to be improved in one of four studies, with users of estrogen more fluent in naming than non-users. The review authors concluded that estrogen does not consistently enhance asymptomatic women’s cognitive performance on formal testing.

Our search identified one new observational study published since 2001 (Table 56; a detailed evidence table is provided in Appendix B). Ryan et al. examined the association of self-reported, lifetime estrogen exposure with late-life cognition in a prospective cohort study involving 996 French women aged 65 or older.319 A battery of tests – including the MMSE, the 5-word Test of Dubois, the Isaacs Set Test, a semantic fluency task, the BVRT, and Trails A and B – was administered at baseline, 2 years, and 4 years. In the fully adjusted model, which accounted for age, education, and baseline test performance, the authors found no association between estrogen use and cognitive change.

Table 56. Gonadal steroids and risk of cognitive decline.

Table 56

Gonadal steroids and risk of cognitive decline.

In summary, there may be a slight benefit for symptomatic postmenopausal women on tests of verbal memory, vigilance, reasoning, and motor speed, which could be mediated by symptom relief. Available data do not support a consistent benefit of estrogen use in modifying cognitive decline. There is insufficient evidence to determine the optimal formulation of estrogen; the dose, duration, and onset of treatment; or whether progestins attenuate the effect of estrogen.

Cholinesterase inhibitors. We did not identify any systematic reviews or primary studies that evaluated the association between cholinesterase inhibitors and risk of cognitive decline.

Memantine. We did not identify any systematic reviews or primary studies that evaluated the association between memantine and risk of cognitive decline.

Social, Economic, and Behavioral Factors

Early childhood factors. We identified three eligible cohort studies that examined the association between childhood factors and cognitive decline in later life.173,320,321 These studies are summarized in Table 57; detailed evidence tables are in Appendix B. One of the studies reported a categorical outcome;321 the other two reported continuous outcomes of cognitive decline.173,320 Given the small number of eligible studies, the studies reporting continuous outcomes are included in the current discussion. All three of the studies used community samples in the United States, and one of the studies included individuals residing in religious order facilities.173 The length of followup ranged from 2 to 5.6 years. The cognitive status of participants differed across the studies. One study320 included all participants who completed cognitive testing at a minimum of two time points, so some individuals may have had dementia at baseline. The other two studies173,321 included only individuals who were non-demented at baseline. All of the studies at least partially used sample selection methods to minimize selection bias. All of the studies collected exposure data using self-report of a range of childhood factors; one study also used public records.173 There was no objective validation of the indices derived to represent childhood socioeconomic status or childhood cognitive milieu.

Table 57. Childhood factors and risk of cognitive decline.

Table 57

Childhood factors and risk of cognitive decline.

The studies did not compare baseline characteristics between those exposed and unexposed, but one compared baseline differences by outcome groups (i.e., those who had cognitive decline versus those who did not).321 The case definitions and cognitive outcomes for the studies are described in Table 57. The analyses appear generally appropriate and controlled for relevant potential confounders, but none of the studies conducted a priori sample size calculations. Two studies found no association between early life socioeconomic status or childhood cognitive milieu and cognitive decline in later life.173,320 The third study used a Japanese-American cohort and found that numerous factors associated with stronger affiliation with Japan or the Japanese culture were associated with protection against cognitive decline in later life.321 For select variables that allowed graded responses, the study found a dose-response effect where the greater the exposure to Japanese culture the less likelihood of cognitive decline. The differences in the sample characteristics and the childhood factors examined among these three studies make drawing conclusions difficult. The authors of the Japanese-American cohort study point to a number of differences between the Japanese and American cultures that may explain their findings. Based on the two studies using predominantly individuals born and raised in the United States, there does not appear to be a strong influence of childhood socioeconomic status or childhood cognitive milieu on cognitive decline in later life.

Education/occupation. As described above (under Key Question 1), we consider educational and occupational factors as subcategories under a single heading.

Education. We identified eight eligible cohort studies that had a categorical outcome.183,258,267,322–326 We also review an additional six studies that had a continuous outcome because they provide information on population subgroups of interest, such as different ethnic groups327–329 or different APOE genotypes.330–332 In the majority of these studies, the focus was on investigating the risk of cognitive decline and years of education completed, but in a few studies, education was only one of several risk factors examined in relation to cognitive decline. The studies are summarized in Table 58; detailed evidence tables are provided in Appendix B. Seven of the studies used community samples in the United States,258,323–325,327–329 one used a health maintenance organization sample in the United States,330 three used community samples in Europe,322,326,332 one used a sample in Europe that included participants from both the community and institutions,267 one used a community sample in Australia,331 and one used a religious order sample in the United States.183 At baseline, participants were non-demented in some studies322–324,330,332 and cognitively normal in some others.183,267,327 In other studies, those with mild cognitive impairment and dementia were not specifically excluded;325,326,328,329,331 however, given the length of the followup period328,329 or the baseline age of the sample,331 it is likely that the majority of the participants were non-demented at baseline in most of these studies. The final study258 included only individuals with a baseline 3MS score > 80 in an attempt to exclude those with dementia. Length of followup ranged from 1 to 11 years. Exposure was determined based on self-reported information about years of education completed; this is a standard and well-accepted method of data collection for this information. 
One study examined the association between literacy level as measured by a standard neuropsychological test and cognitive decline.327 Most of the studies used sample selection methods to minimize selection bias; however, one study required that participants agree to brain donation at the time of death, and this may have introduced some selectivity into the sample.183 One other study only partially used sample selection methods to minimize selection bias.322 Definitions of cognitive decline varied among the studies and are described in Table 58. In all but one study331 analyses were appropriate and controlled for relevant potential confounders.

Table 58. Years of education and risk of cognitive decline.

Table 58

Years of education and risk of cognitive decline.

Among the eight studies that had categorical outcomes, four reported that having fewer years of education was associated with an increased risk of cognitive decline on at least some of the cognitive measures used,258,323–325 and the odds ratio for the fifth study326 was in the same direction but did not reach statistical significance. An additional two studies reported that having fewer years of education was associated with an increased risk of incident MCI.183,267 In general, the association was strongest at the extremes of high and low education, thus suggesting a dose-response pattern even when the association was not significant at all intermediary education levels. In the one study that did not find a significant association between education level and cognitive decline, the education level for the sample was quite low.322

In contrast to the studies that used categorical outcomes, the studies that used non-categorical outcomes typically did not find an association between years of education and cognitive decline. Four studies reported no association between years of education and rate of cognitive decline in their total samples.329–332 A fifth study328 found only a non-linear association such that the rate of cognitive decline at average or high levels of education was slightly increased during earlier years of followup, but slightly decreased in later years in comparison to low levels of education. The sixth study327 reported that after controlling for education, participants with lower literacy were more likely to have faster decline in cognition.

These six studies using continuous variables also examined the association between years of education and longitudinal cognitive performance in selected subgroups of the samples. The three studies that reported results comparing ethnic subgroups showed few differences across the subgroups. The study by Wilson and colleagues328 showed no differences between whites and African-Americans in the association between level of education and cognitive decline. Manly and colleagues327 reported no significant differences among whites, Hispanics, and African-Americans regarding the association between literacy level and rate of cognitive decline. The study by Karlamangla and colleagues329 reported only one difference among multiple ethnic groups: among non-Mexican Hispanic Americans, a greater number of years of education was associated with less cognitive decline over time.

Four studies (three using continuous outcomes and one using a categorical outcome) assessed potential interactions between APOE genotype and education, and each study reported different results. Winnock and colleagues332 found no interaction between APOE genotype and education in their association with cognitive decline. Kalmijn and colleagues326 reported that APOE e4 non-carriers with less education tended to show greater decline than APOE e4 carriers with low education, but the risk estimate was not statistically significant. Shadlen and colleagues330 found that lower education was associated with greater cognitive decline among APOE e4 homozygotes but not among heterozygotes. Christensen and colleagues331 found that among individuals with < 16 years education, those with at least one APOE e4 allele had greater cognitive decline on selected cognitive measures. This latter study did not control for baseline cognitive performance or age; it also reported many statistical comparisons without adjustment for multiple comparisons. The widely discrepant findings from these studies make it difficult to draw any conclusions about the interaction between education and APOE genotype in their association with cognitive decline.

The generally inconsistent findings reported by studies using categorical outcomes compared to those using continuous outcomes raise fundamental questions about the best methodological approach for examining this issue. The studies using categorical outcomes often categorize “cognitive decliners” as those who show the most pronounced decline. This may identify individuals who are in the prodromal stages of a progressive dementing disorder such as AD. In this case, it is possible that the association between years of education and cognitive decline reflects an underlying association between years of education and AD. Some additional methodological points might be considered. In the United States and some other developed countries, years of education may be a poor reflection of inherent ability for ethnic minorities and other groups for whom educational opportunities were not accessible. In the first half of the 20th century, when the participants in these studies were attending school, the quality of education in the United States differed across regions of the country and also across racial and ethnic groups. For these reasons, it has been suggested that reading skills, not years of education, may be a better marker of ability in these groups.327

In conclusion, the evidence is inconsistent regarding the putative association between years of education or its underlying construct and risk of cognitive decline. Further research is needed in this area that directly compares the association between years of education and cognitive decline using both categorical and non-categorical outcomes in the same sample.

Occupation. We identified four eligible cohort studies in which the focus of the study was investigating the risk of cognitive decline and occupational history.322,333–335 All of these studies had a continuous outcome; there were no eligible studies with categorical outcomes. The studies are summarized in Table 59; detailed evidence tables are provided in Appendix B. One of the studies used a community twin sample in the United States,333 one used an HMO sample in the United States,335 and two used community samples in Europe.322,334 Length of followup ranged from 4 to 14 years. Exposure was determined based on self-reported information about occupation or job characteristics. Two studies used sample selection methods to minimize selection bias,333,334 while the other two studies partially used such methods.322,335 Definitions of cognitive decline varied among the studies and are described in Table 59. Analyses were appropriate and controlled for relevant potential confounders.

Table 59. Occupation and risk of cognitive decline.

Table 59

Occupation and risk of cognitive decline.

The four studies examined different aspects of jobs, making it difficult to compare results. One study examined the association between the number of hours worked each week and cognitive change over time;334 another examined the association between job characteristics, such as general intellectual demand and physical exertion, and longitudinal cognitive performance;333 the third study dichotomized occupation by farmers versus non-farmers to investigate the relation between jobs and cognitive decline;322 and the fourth study assessed the association between three aspects of self-directed work (perceived autonomy, work control, and innovation) and cognitive decline.335 The results from Alvarado and colleagues,322 Potter and colleagues,333 and Yu and colleagues335 are broadly consistent. One showed a trend toward greater cognitive decline for farmers;322 one reported that individuals who worked in jobs with high levels of physical exertion showed greater decline;333 and the third reported that greater control over one’s work is associated with better maintenance of cognition.335 In all three studies, other factors, such as fewer years of education, actually accounted for more of the cognitive decline than did occupational characteristics. The study by Virtanen and colleagues334 found that working longer hours, a characteristic of jobs associated with higher levels of education, was linked to greater decline on a test of reasoning. This finding makes the point that the impact of jobs may be multi-faceted, and further work needs to be done to examine the relation between different aspects of jobs and cognitive outcomes.

In conclusion, the currently available data suggest that the reported association between occupation and cognitive decline is largely attributable to level of education. Further research is needed to decompose the various components of occupation and their roles in late-life cognition.

Social engagement. We identified 15 cohort studies that examined the association between social engagement and the development of MCI or cognitive decline.188,190,258,271,321,329,336–344 These studies are summarized in Table 60; detailed evidence tables are presented in Appendix B. Social engagement as a risk factor was described by different exposures across the studies, including objective measures such as marital status, living situation, and number of people in the social network, as well as subjective measures such as feelings of loneliness and perceptions of social support. Engaging in social activities (which may or may not involve social engagement) is mostly covered in the sections on cognitive engagement and leisure activities, although some overlap is unavoidable given that the boundaries are sometimes nebulous. Although 15 social engagement studies were identified, the measurement of exposure and reporting of outcomes varied among the studies. Hence, they were not combined to provide a single summary statistic; rather, qualitative descriptions of the studies are provided in this discussion.

Table 60. Social engagement and risk of cognitive decline.

Table 60

Social engagement and risk of cognitive decline.

Qualitatively, the measurement of exposures in the studies falls into three main categories: social network size, social support, and marital status. Ten of the 15 studies were conducted in various regions of the United States,188,190,258,271,321,329,336–339 four in Europe,340,341,343,344 and one in Hong Kong.342 Ten studies reported continuous outcomes of cognitive decline.188,271,329,336–341,344 The other five reported decline as a categorical outcome, although the definition varied from study to study.190,258,321,342,343 All studies used community samples and attempted to decrease selection bias. Three studies did not exclude individuals who were cognitively impaired at baseline336,338,340 and therefore could have included some demented individuals, although this is unlikely for Green et al.,338 in which the mean age at baseline was 47.3 years. Though all other studies attempted to exclude those who were cognitively impaired at baseline, the criteria and methods of screening were heterogeneous and included: 3MS > 80;258 MMSE > 24;341 MMSE > 28;337 and participants scoring in the top third of the screening test.339 The remaining studies applied only the broad inclusion criterion of being non-demented. Some studies had substantial loss to followup (> 50 percent).337,338,344 One study338 was a continuation of another.337 The average length of followup ranged from 2 to 21 years. Measurement of social engagement was done through self-report in all the studies. There was no objective validation of the measure for ascertaining exposure in any of the studies.
Five studies compared baseline characteristics between the exposed and unexposed groups.190,258,321,340,341 Some studies compared the baseline characteristics of participants who completed followup with those who were lost to followup and found statistically significant differences between the two groups; for example, participants who completed followup were younger and more educated than those who did not.337,338 One study used informant interviews;339 however, this was mainly to collect proxy information for missing data. The methods used to define cases and cognitive decline are described in Table 60. None of the studies reported a priori power calculations. The analyses were largely appropriate with adequate adjustment for confounders.

Of the 15 studies, five examined the relationship between social network and cognitive decline. Two studies concluded that a larger social network was associated with less cognitive decline.336,337 However, continued followup of one of these cohorts337 found discrepant results in that there was no association between social network size or support and cognitive decline.338 Of the 2607 participants in the original study, only 874 were in the followup study, and they differed in demographic characteristics from the original cohort; this may explain the difference in findings between the two studies. In another study, there was no association between the size of the social network and cognitive decline; however, decreased social engagement as measured by group memberships was associated with an increased risk of cognitive decline (OR 2.92; 95 percent CI 1.35 to 6.36).343 In a study examining characteristics of a social network, men of Japanese descent living in the United States who had current friends who were Japanese were at lower risk of cognitive decline (OR 0.64; 95 percent CI 0.44 to 0.93).321
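Many of the findings above are reported as odds ratios with 95 percent confidence intervals (e.g., OR 2.92; 95 percent CI 1.35 to 6.36). As a minimal sketch of how such figures are constructed, the code below computes a crude odds ratio and Wald confidence interval from a 2×2 table of exposure by outcome; the counts are purely illustrative and are not drawn from any of the cited studies, which report adjusted estimates from regression models.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf method
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for illustration only
or_, lo, hi = odds_ratio_ci(30, 70, 15, 85)
```

A confidence interval whose lower bound exceeds 1 (as in the OR 2.92 example above) indicates a statistically significant increase in risk at the 5 percent level.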

Four studies examined the relationship between social support and cognitive decline. One found that lack of social support increased the risk of cognitive decline (OR 1.2; 95 percent CI 1.01 to 1.43),271 while another found an increased risk in men but not women in a preliminary model, an association that became non-significant in the adjusted model.342 In the third study, lack of social support was measured as loneliness, which was found to be associated with more rapid decline in global cognition, semantic memory, perceptual speed, and visuospatial ability.188 In another study, participants who reported having enough social support had a lower risk of becoming a major cognitive decliner.258 Perception of social support in a high-functioning group was examined in only one study and was not found to be significantly associated with cognitive decline in a fully adjusted model.339

Six studies examined the relationship between marital status and cognitive decline.190,329,340–342,344 Loss of spouse was a significant risk factor for cognitive decline in three studies.190,329,340 Of these, only one defined MCI as a categorical diagnosis using published criteria; investigators found that being widowed (OR 3.30; 95 percent CI 1.6 to 6.9) and being without a partner (OR 2.14; 95 percent CI 1.2 to 3.8) at midlife were both associated with increased risk of MCI.190 However, two other studies did not report a significant association between cognitive decline and marital status341,344 or living situation.341 The final study342 reported that greater cognitive decline was associated with being divorced in males, but the association was not found in single or widowed males or in females. Given the extremely broad confidence intervals, the robustness of the finding that being divorced is associated with greater cognitive decline in males is questionable. The association between living with someone and cognitive decline was inconsistent across studies. One study showed no significant association,258 while two other studies found that those who lived alone at baseline and followup had an increased risk of cognitive decline.190,341

In conclusion, although comparison across studies is difficult given the different measures of exposure, it is evident that results are inconsistent among the studies on social network size and social support. In addition, those studies reporting a beneficial association between measures of social engagement and maintenance of cognition generally report relatively small effect sizes, suggesting that residual confounding may explain the association. Thus, there is currently insufficient evidence to support a protective effect of social engagement. However, there appears to be a more robust association between the loss of a spouse and cognitive decline, as evidenced by the findings from three studies. The findings are inconsistent regarding living alone or being without a partner for any reason. We suggest that some of the heterogeneity in the findings may be attributed to the shorter followup time in some studies in which the majority of the data were collected in late life; the two studies with over 15 years of followup showed that those who were single at baseline and followup were at increased risk of cognitive decline.

Cognitive engagement. We identified four eligible cohort studies that examined the association between cognitive engagement and development of MCI or cognitive decline.335,345–347 These studies are summarized in Table 61; detailed evidence tables are in Appendix B. One of the studies reported a categorical outcome,345 and the other three reported continuous outcomes of cognitive decline.335,346,347 Given the small number of eligible studies, the studies reporting continuous outcomes are included in this discussion. Two of the studies used community samples in the United States,345,346 one used a sample from a health maintenance organization (HMO) in the United States,335 and the fourth used a clinical sample in Europe.347 The average length of followup ranged from 3 to 14 years. Two of the studies used sample selection methods to minimize selection bias,346,347 while the remaining studies partially used such methods.335,345 In two of the studies, the participants were non-demented at baseline;345,347 the other two studies appeared to include all participants at baseline regardless of cognitive status, but given the length of the followup periods, it is assumed here that few participants were demented at baseline.335,346 All of the studies used self-report of the frequency of current involvement in specific activities. There was no objective validation of the method for measuring exposure, but one study194 did ask an informant to confirm the participant’s report of involvement in activities. The studies did not compare baseline characteristics between those exposed and unexposed, but one study compared baseline differences between individuals who developed amnestic MCI and those who did not.345 The case definitions and cognitive outcomes used are described in Table 61. The analyses appear generally appropriate and controlled for relevant potential confounders, but none of the studies conducted a priori sample size calculations.

Table 61. Cognitive activities and risk of cognitive decline.

Table 61

Cognitive activities and risk of cognitive decline.

One study345 reported an attenuation of risk of amnestic MCI with increasing frequency of cognitive activities; another study346 reported a reduction in a global measure of cognitive decline with increasing levels of cognitive activity; and the third study347 reported that cognitively engaging activity was associated with less cognitive decline on selected cognitive measures. In contrast to these findings, Yu and colleagues reported no association between cognitive decline and involvement in cognitive leisure activities.335 Their study report did not detail the activities included in the cognitive activity group, nor did it provide any specific results on the cognitive activity analyses. This limits our ability to identify differences among the studies that may contribute to the discrepant results.

In conclusion, there is limited but inconsistent evidence suggesting that increased involvement in cognitive activities in later life is associated with less cognitive decline and lower risk of incident amnestic MCI. In addition to the limited information reported in one study and noted above, there are some other challenges to interpreting these results. First, the effect sizes are relatively small and limited to selective measures in some studies. It is possible that residual confounding may contribute to some of the findings. Second, given the long subclinical prodromal phase of AD, it is not possible to determine whether less involvement in cognitive activities in some individuals is an early symptom of AD. Third, validation of the type and extent of exposure is needed.

Physical activities. We identified eight eligible cohort studies that examined the association between physical activity and development of MCI or cognitive decline.199,258,261,342,345,348–350 These studies are summarized in Table 62; detailed evidence tables are in Appendix B. Seven studies had a categorical outcome; the eighth had a continuous outcome, but we have included it here because it focused on a select subgroup of individuals (women with diabetes) relevant to one of the key questions of this systematic review.261 These eight studies are the focus of attention in what follows. There were three additional studies with continuous outcomes of cognitive decline,335,347,351 which are not reviewed in detail, but for which we report general conclusions. Five of the studies used community samples in the United States,258,261,345,348,350 and one study each used community samples in Canada,199 Europe,349 and Hong Kong.342 The length of followup ranged from 2 to 8 years. The cognitive status of participants at baseline differed among the studies. Two studies348,349 may have included some individuals with dementia at baseline, as they did not appear to apply exclusionary criteria regarding impaired cognitive function. Other studies258,350 attempted to exclude those with dementia by only including individuals who scored above 80 points on the 3MS or at least 23 out of 26 points on an abbreviated MMSE test. One study342 excluded participants categorized at baseline as having cognitive impairment based on performance on 12 items of an orientation and information test.
For the remaining three studies,199,261,345 participants were non-demented at baseline, with a subset of the individuals specifically having MCI in one study,345 and a subset of individuals being cognitively normal in some studies.199,345 Seven studies used sample selection methods to minimize selection bias,199,258,261,342,348–350 and the other partially used such methods.345 All studies used self-reported information on involvement in physical activities at baseline; some asked about specific activities, but the majority asked more general questions about any physical activities. Since a number of the studies used open-ended questions to obtain information about engagement in physical activities, it was difficult to assess the degree of overlap among the activities across studies. Only one study provided some information on the reliability and validity of the physical activity questions.349 Five studies compared baseline characteristics between those exposed and unexposed.258,261,348–350 The case definitions for the studies are described in Table 62. The analyses appear generally appropriate and most controlled for relevant potential confounders. However, one study342 defined a case by a threshold score on the cognitive measure, but in the analysis did not control for performance on the cognitive measure at baseline. This may have markedly influenced their results since a large proportion of the females, compared to the males, scored just above this threshold at baseline. None of the studies reported a priori sample size calculations.

Table 62. Physical activity and risk of cognitive decline.

Table 62

Physical activity and risk of cognitive decline.

For all studies, the risk estimates were typically in the hypothesized direction, and there was often a dose-response pattern for the association between more physical activity and the various case definitions of cognitive decline. However, when statistical significance was considered, the results were inconsistent. Five studies reported that more physical activity was associated with a lower risk of CIND, cognitive impairment, or cognitive decline,199,258,342,348,350 but in two of these studies the benefit was attributed entirely to a significant effect among females,199,342 and in one the entire sample was female.350 Devore and colleagues261 found that among women with diabetes those in the highest tertile level of physical activity showed less cognitive decline compared to those in the lowest tertile level of physical activity. The difference in longitudinal change between the two groups was small, and when the model included adjustment for physical disability the results were no longer statistically significant. The last two studies345,349 found no significant association between levels of physical activity and risk of either amnestic MCI or cognitive decline. These two studies had the smallest sample sizes, which suggests that lack of statistical power may have been an issue. One of these studies349 found that among carriers of the APOE e4 allele, physical activity showed significant protective benefit against cognitive decline.

The three studies that had continuous outcome variables also reported inconsistent results. Weuve and colleagues351 showed that increased levels of physical activity were associated with better long-term performance on multiple cognitive tests, but Bosma and colleagues347 showed that physical activity was associated with less decline on only one of a number of cognitive tests. In contrast, Yu and colleagues335 showed no association between physical activity and cognitive decline.

The inconsistent findings may be due to a number of factors, including the small number of cases and heterogeneity in both the types and quality of the exposure and outcome measures.

In conclusion, the data currently available provide preliminary evidence for a beneficial effect of physical activity deterring cognitive decline, but overall the results are not robust. Further work using standardized methods to assess exposure is needed to confirm these findings and draw firmer conclusions.

Other leisure activities. We identified two eligible cohort studies that examined the association between non-physical leisure activities and cognitive decline.336,347 Both studies defined the activities as “social” activities. We also identified one cohort study that examined the association between a range of leisure activities – including those considered to be social, productive, and physical – and cognitive decline.352 Two of the studies included “leisure employment, or part- or full-time work” in the list of leisure activities. Although work is not typically considered a leisure activity, these studies best fit in the present section. The leisure activities assessed for each study are listed in Table 63. Some of the leisure activities in these studies overlapped with some of the activities considered to be “cognitively engaging” in other studies, so the results described here should be interpreted in conjunction with the findings for Question 2 for the “Cognitive Engagement” factor. One of the studies reported a categorical outcome,352 and the other two reported continuous outcomes of cognitive decline.336,347 Given that there were only three eligible studies for this factor for Question 2, we include all studies in the current discussion. The three studies are summarized in Table 63; detailed evidence tables are provided in Appendix B. One study used a community sample in the United States,336 one used a community sample in Singapore,352 and the third study used a clinical sample in Europe.347 The length of followup ranged from 1.5 to 6 years. Two studies used sample selection methods that minimized selection bias,336,352 and the third study partially used such methods.347 All three studies used self-report of current involvement and/or frequency of involvement in specific activities, a method of measuring exposure that has not been validated.
Only one of the studies reported comparisons of baseline characteristics between those exposed and unexposed.352 The cognitive outcomes for the study are described in Table 63. The analyses appear generally appropriate and controlled for relevant potential confounders, but the studies did not report a priori sample size calculations.

Table 63. Leisure activities and risk of cognitive decline.

Table 63

Leisure activities and risk of cognitive decline.

One study347 reported that involvement in social activities was associated with less decline on the immediate and delayed recall trials of a verbal memory task (0.01 < p < 0.05). It is noteworthy that no adjustment was made for multiple statistical comparisons, and the three other cognitive measures did not show significant differences in rate of cognitive decline based on participation in social activities. Another study336 found that higher levels of involvement in social activities were associated with slightly less decline on a global cognitive measure. The third study examined a wide range of leisure activities and found that individuals in the medium and high tertile levels of leisure activity were less likely to exhibit decline on the MMSE.352 In addition, individuals who participated in at least one activity considered to be a “productive leisure activity” were less likely to decline on the MMSE, while those who participated in at least one social or physical leisure activity did not show such a benefit. Among APOE e4 carriers, those who participated in at least one physical leisure activity or one social leisure activity were significantly less likely to decline on the MMSE; this same association was not observed among APOE e4 non-carriers.

In conclusion, these studies provide preliminary evidence that a range of leisure activities may be associated with preservation of cognitive function. The findings are not entirely consistent, as two of the three studies reported an association between greater involvement in social activities and less cognitive decline, while one did not find such an association. In addition, the differences in how exposure was defined, the limited number of statistically significant associations among the multiple comparisons, and the relatively small effect sizes limit the conclusions that can be drawn. Further research is needed in this area.

Tobacco use. We identified one good quality systematic review, published in 2007, that examined the association between tobacco use and cognitive decline, cognitive impairment, or cognitive performance change.50 The review included three prospective cohort studies for cognitive performance change published between 1996 and 2004.353–355 The three studies included a total of 7872 subjects; one was conducted in the United States, and the other two in European countries. The review also included three prospective cohort studies reporting a dichotomous measure of cognitive decline (776 subjects; two from the United States and one from Europe),356–358 and three prospective cohort studies reporting cognitive impairment (8385 subjects; one each from the United States, Canada, and Australia).74,359,360 Studies were selected that had at least two occasions of measurement, used cognitive measures compatible with those used by other studies, included at least a 12-month followup period, and measured exposure to smoking at baseline. The MMSE was the only cognitive measure common to enough studies to analyze cognitive performance change. Cognitive performance change was defined as a continuous measure of yearly change on the MMSE. The definitions for cognitive decline and cognitive impairment differed across studies, but in general terms cognitive decline was defined as a dichotomous variable based on cognitive measures that included more than just the MMSE. Cognitive impairment was a decline on cognitive measures sufficient to represent a pre-set definition of impairment. Study quality was not reported in this systematic review. Length of followup ranged from 2 to 7 years. The covariate adjustment for most studies included at least age and education; many studies included additional covariates such as sex, APOE, biological measures, and health conditions. The results with the most covariates were given preference when reporting the data from the individual studies.
Exposure was determined by self-report, and smoking was classified as ever, current, former, or never smokers.

Studies were combined using fixed-effect meta-analyses if there was no evidence of heterogeneity. If heterogeneity was present, random-effects models were used. Standard χ2 tests using a p-value of 10 percent were used to examine heterogeneity. The small number of studies within each group with compatible measures precluded investigation of heterogeneity using meta-regression or subgroup analyses, as well as assessment of publication bias. The results for the various exposure definitions and the outcome measures are reported in Table 64. To briefly summarize the main findings, current smokers were more likely to show a greater decline on the MMSE than either former or never smokers. Current smokers were also more likely to be categorized as “cognitive decliners” compared to individuals who were former smokers and those who never smoked. Finally, former smokers showed greater yearly decline on the MMSE compared to those who never smoked.
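As background on the fixed-effect approach used in the review, pooled estimates are conventionally computed by inverse-variance weighting, with heterogeneity assessed by a χ2 (Cochran's Q) statistic on k − 1 degrees of freedom. The sketch below uses made-up effect estimates (e.g., log relative risks), not data from the reviewed studies.

```python
import math

def fixed_effect_pool(effects, ses):
    """Inverse-variance fixed-effect meta-analysis.
    effects: per-study estimates on an additive scale (e.g., log RR);
    ses: their standard errors. Returns the pooled estimate, its
    standard error, and Cochran's Q with its degrees of freedom."""
    w = [1 / se ** 2 for se in ses]
    pooled = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    se_pooled = math.sqrt(1 / sum(w))
    # Cochran's Q: weighted squared deviations from the pooled estimate
    q = sum(wi * (ei - pooled) ** 2 for wi, ei in zip(w, effects))
    return pooled, se_pooled, q, len(effects) - 1

# Illustrative estimates only (not from the reviewed studies)
pooled, se, q, df = fixed_effect_pool([0.20, 0.35, 0.28], [0.10, 0.15, 0.12])
```

If Q is large relative to a χ2 distribution on k − 1 degrees of freedom (the review used a 10 percent significance threshold), the fixed-effect assumption is rejected and a random-effects model is used instead.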

Table 64. Smoking and risk of cognitive decline – results from studies reviewed by Anstey et al., 2007.

Table 64

Smoking and risk of cognitive decline – results from studies reviewed by Anstey et al., 2007.

The authors noted that one limitation of the review was that the former smokers group included a broad range of exposure periods. Unfortunately, there were not enough studies with data on the number of smoking pack-years to use this as the exposure variable. Publication bias was not assessed formally. Quality ratings of the studies were not provided, but strict selection criteria may have increased the likelihood that only high-quality studies were included in the review. The authors concluded that current smokers are at increased risk of cognitive decline compared to those who never smoked.

We identified five additional eligible cohort studies examining the association between smoking and cognitive decline.257,258,266,271,361 These studies are summarized in Table 65; detailed evidence tables are in Appendix B. Four of the five studies used a categorical outcome;257,258,266,271 the fifth361 assessed cognitive decline as a continuous variable. This latter study is included in the current discussion because it is the only one of the five studies that assessed the association of smoking and cognitive decline based on APOE e4 allele status. All five studies used samples drawn from the community; one also studied nursing home residents.266 Three studies used U.S. samples,258,271,361 one used an Australian sample,257 and the other used a European sample.266 In three of the studies, participants were cognitively normal at baseline.257,266,361 In the other two studies, it is likely the vast majority of the participants were non-demented since one study258 included only individuals with 3MS scores > 80 in an attempt to exclude individuals with cognitive impairment, and the other study had a 6-year period between the two cognitive assessments.271 Length of followup ranged from 3.5 to 10 years (mean or median). All five studies used self-report history of smoking obtained at baseline to characterize exposure. The studies used sample selection methods to minimize selection bias; however, only three of the studies compared baseline characteristics to assess differences between exposed and unexposed.258,271,361 The case definitions and cognitive outcomes used in the studies are described in Table 65. The analyses appear generally appropriate and controlled for relevant potential confounders, but none of the studies conducted a priori sample size calculations.

Table 65. Tobacco use and risk of cognitive decline – recent cohort studies.

Table 65

Tobacco use and risk of cognitive decline – recent cohort studies.

The results were inconsistent across studies, with two studies258,271 reporting that those who did not smoke had a greater likelihood of maintaining optimal cognitive function rather than exhibiting minor cognitive decline. In one study, past smoking was associated with increased risk of MCI compared to never smoking,257 but another study did not find a significant association between number of pack-years of smoking and risk of incident MCI.266 The fifth study361 did not find a significant association between current smoking and cognitive decline on abstract reasoning-visuospatial tasks in either the group that was ≤ 75 years old or > 75 years old. However, on memory tasks, current smokers over age 75 showed greater cognitive decline than individuals who did not currently smoke; there was no comparable difference among the group that was ≤ 75 years old. Three of the studies examined current smoking,258,271,361 but only one of them271 used the MMSE as the main cognitive measure, making it relatively comparable to the measures of exposure and outcome in the systematic review by Anstey et al.50 In general terms, the findings from this study were consistent with those from the review; that is, current smoking was related to cognitive decline.

One study examined the association between current smoking and cognitive decline by APOE e4 allele status.361 This study found that its reported association between current smoking and decline on memory tasks was limited to the group over age 75 years with no APOE e4 allele. This finding is generally consistent with those reported for current smoking and AD; that is, the statistically significant association found in the entire group was due to the association in the subgroup with no APOE e4 allele and not to the subgroup with at least one APOE e4 allele.

In conclusion, these studies provide fairly consistent evidence for an association between current smoking and increased risk of cognitive decline. The evidence on past smoking is less consistent.

Alcohol use. We identified a single, good quality systematic review, published in 2009, that examined the association between alcohol use and cognitive decline.51 The review included two prospective cohort studies for cognitive decline published in 2000 and 2005.220,362 The two studies were community samples and included a total of 2192 subjects; one was conducted in the United States, and the other in a European country. Studies were selected that had measured cognition at both baseline and followup periods and either implemented a dementia assessment at baseline, which excluded those participants with cognitive impairment or dementia, or adjusted for incident dementia and/or baseline cognitive performance in analyses. Included studies also had at least a 12-month followup period, had cognitive decline as an outcome, and measured exposure to alcohol at baseline or during the followup period prior to the final followup examination. Study participants were non-demented at baseline. The meta-analysis was based on current alcohol use, but one of the studies220 also collected data on former alcohol users versus lifetime abstainers.

The cognitive outcome was defined as longitudinal change on the MMSE for one study362 and change on a composite score of multiple cognitive measures for the other study.220 There was not a structured quality assessment of the studies reported in this systematic review; however, the strict inclusion/exclusion criteria provided an indirect assessment of quality, and the study characteristics for key design variables were reported. Length of followup ranged from 4 years in one study362 to an average of 7.3 years in the other study.220 No information was provided on the followup rates in the studies. The covariate adjustment for the two studies included at least age, sex, education, baseline cognitive score, and depressive symptoms; each study had additional covariates such as health behaviors and conditions. The results with the most covariates were given preference when reporting the data from the individual studies. Exposure was determined by self-report, and alcohol use was classified as drinkers versus nondrinkers.

Studies were combined using fixed-effect meta-analyses if there was no evidence of heterogeneity. If heterogeneity was present, random-effects models were used. Standard χ2 tests using a P value of 10 percent were used to examine heterogeneity. Publication bias was not formally assessed.

The results for cognitive decline in drinkers versus non-drinkers are reported in Table 66. The test for heterogeneity was significant (χ2[1] = 11.80, P < 0.001), suggesting that the pooled result from only two studies may not be reliable. To summarize the main findings, the direction of the relative risk suggested that drinkers may have a lower risk of cognitive decline, but the result did not reach statistical significance.

Table 66. Alcohol use and risk of cognitive decline – results from studies reviewed by Anstey et al., 2009.

Table 66

Alcohol use and risk of cognitive decline – results from studies reviewed by Anstey et al., 2009.

As discussed above under Question 1, the authors of the systematic review noted a number of complicating factors in the study of alcohol use as a risk factor for late-life cognitive outcomes. These factors make interpretation of the meaning of both significant and null results challenging. Publication bias was not assessed formally in this systematic review, but the authors suggested that the potential for such bias was likely reduced because of the inclusion of studies from article reference lists and articles that did not focus on alcohol use, but in which alcohol use was a covariate. Quality ratings of the studies were not provided, but strict selection criteria may have increased the likelihood that only high-quality studies were included in the review. The authors concluded that the pooled analysis showed no significant association between alcohol use and cognitive decline, but the measure of heterogeneity indicated that the results from the two studies combined may not be reliable.

We identified five additional eligible cohort studies examining the association between alcohol use and cognitive decline published since June 2006.257,258,363–365 These studies are summarized in Table 67; detailed evidence tables are in Appendix B. In two of the studies,257,365 participants were cognitively normal at baseline, and in two other studies the vast majority of participants are assumed here to have been non-demented based on either the minimum cognitive scores required for inclusion in the study364 or the mean age and mean cognitive score of the group.363 The fifth study258 included individuals who scored 80 or higher on the 3MS; thus it may have included some individuals with cognitive impairment or dementia. Two of these studies used the categorical outcome of incident mild cognitive impairment (MCI),257,365 and one used a categorical outcome indicating maintenance of cognition or minor or major decline.258 The other two studies363,364 assessed cognitive decline as a continuous variable. All five studies used samples drawn from the community; one of them also studied nursing home residents.365 Two studies used a U.S. sample,258,363 two used European samples,364,365 and one used an Australian sample.257 The average length of followup ranged from 2.2 to 8 years. The studies used self-reported current alcohol use obtained at baseline to characterize exposure during late life. Four of the studies used sample selection methods to minimize selection bias;257,258,363,365 the inclusion criteria for the fifth study364 required that participants have vascular risk factors or vascular disease, which may have confounded the association between alcohol and cognitive change. Three studies compared baseline characteristics to assess differences between exposed and unexposed participants.258,363,364 The case definitions and cognitive outcomes for the studies are described in Table 67.
The analyses appear generally appropriate and controlled for relevant potential confounders, but none of the studies conducted a priori sample size calculations.

Table 67. Alcohol and risk of cognitive decline – recent cohort studies.

Table 67

Alcohol and risk of cognitive decline – recent cohort studies.

The results on incident MCI were inconsistent between the two studies evaluating this outcome, with one reporting no association between use of alcohol and risk of incident MCI,365 and the other showing that drinkers overall had a lower risk of incident MCI compared to non-drinkers.257 This latter study also showed a U-shaped quadratic relationship, with very low and very high alcohol intake associated with higher risk of MCI compared to moderate alcohol intake. The third study using a categorical outcome for cognitive decline found no significant association between alcohol intake and cognition, but the odds ratios were in the direction of more than one drink per day being protective for cognition.258 This study simultaneously examined the association between approximately 20 factors and cognitive decline; alcohol was not the focus of the study. The two studies assessing rate of cognitive decline as a continuous measure consistently found that any alcohol use was associated with greater preservation of cognition over time. One study364 reported a significant association (but negligible effect size) between alcohol use and maintenance of performance on the MMSE. However, this study did not find an association between alcohol intake and longitudinal performance on measures assessing other areas of cognition, such as memory and executive function, domains that typically show decline in the early stages of AD. The other study363 showed a dose-response effect in which greater amounts of alcohol per week were associated with less decline on the modified Telephone Interview for Cognitive Status (TICS), a measure of cognitive status similar to the MMSE. This dose-response association was not modified by the presence of an APOE e4 allele. Interestingly, this study did not find that an excessive amount of alcohol (i.e., more than two drinks per day) had a detrimental effect on cognition.

In conclusion, the results are inconsistent regarding the association between cognitive decline and alcohol use in any amount. Obvious differences between the studies do not point to a clear explanation for these inconsistencies.

Toxic Environmental Exposures

We identified no systematic reviews or primary studies that examined toxic environmental exposures and the risk of cognitive decline.

Genetic Factors

Although there is an extensive literature examining genetic factors associated with AD, the literature linking genes and cognitive decline is more limited. The relation between genetic polymorphisms and cognitive change has been studied for the apolipoprotein E gene (APOE). APOE has three common alleles (e2, e3, and e4) that act as susceptibility factors for late-onset (age ≥ 65 years) AD. APOE e4 increases the risk of AD in a dose-dependent fashion, and e2 reduces the risk.

We identified 15 cohort studies involving 8509 subjects that examined the association between APOE and the risk of cognitive decline.183,258,260,262,267,274,284,330,331,366–371 These studies are summarized in Table 68; detailed evidence tables are in Appendix B. Four of the studies reported a categorical outcome,183,258,267,367 and the other 11 reported continuous outcomes.260,262,274,284,330,331,366,368–371 Fourteen were community samples, and one sample came from a clinical trial.284 One sample was from Australia,331 four from Europe,267,284,369,371 and 10 from the United States.183,258,260,262,274,330,366–368,370 The length of followup ranged from 1 to 14 years, and approximately 60 percent of subjects were women. All of the studies used sample selection methods to minimize selection bias and reported comparisons of baseline characteristics between those exposed and unexposed. The case definitions and cognitive outcomes for the studies are described in Table 68. None of the studies reported a priori sample size calculations. The analyses appear generally appropriate, with all of the studies using education as a covariate, and all but one adjusting for age and sex. The one study that did not adjust for age or sex had a sample with a narrow age range (ages 65 to 69).331 The studies used different baseline cognitive criteria for inclusion, and some may have included individuals with mild cognitive impairment. One study included some individuals with dementia at baseline,331 but we report here only the analyses that excluded those individuals. The followup rates for four of the studies were fairly high, but one study had a followup rate of about 50 percent when combining non-participation due to both attrition and exclusion criteria.368

Table 68. APOE genotype and risk of cognitive decline.

Table 68

APOE genotype and risk of cognitive decline.

Generally, studies reported that APOE e4 was associated with greater decline on some, but not all, cognitive measures. Five studies reported a categorical outcome, and all found that the APOE e4 allele increased the risk of cognitive decline.183,258,267,330,367 Tervo et al.267 reported that subjects with an APOE e4 allele were more likely to receive a diagnosis of MCI than those without one (OR 2.23; 95 percent CI 1.23 to 4.05). Bretsky et al.367 assessed global cognitive function using the SPMSQ and found that subjects with an APOE e4 allele were at increased risk for decline (OR 2.3; 95 percent CI 1.5 to 3.4). Yaffe et al.258 divided subjects into cognitive maintainers, minor decliners, and major decliners based on their performance on the Modified Mini-Mental State test (3MS). Investigators reported that major decliners were more likely to carry an APOE e4 allele than minor decliners (OR 2.31; 95 percent CI 1.75 to 3.05). Presence of an APOE e4 allele was not, however, significantly different in those who maintained cognitive performance compared to those with minor declines. Shadlen et al.330 used the Cognitive Abilities Screening Instrument (CASI) to assess performance at baseline and after 6 years. At followup, 6 percent of the 2168 subjects had a decline of ≥ 1.5 SD on the CASI. Individuals who were homozygous for APOE e4 were at increased risk for decline compared to non-e4 subjects, but e4 heterozygotes were not. Tyas et al.183 reported on data from 470 subjects in the Religious Orders Study followed for 1 to 12 years and found that subjects with an APOE e4 allele had an increased risk of transition from normal cognition to mild cognitive impairment compared to subjects without one (OR 1.87; 95 percent CI 1.27 to 2.73).

Several studies used a battery of tests to assess longitudinal cognitive function. Comparison across studies is difficult because of the wide variety of non-overlapping tests used. Blair et al.368 studied subjects from the Atherosclerosis Risk in Communities (ARIC) project over a 6-year period. Investigators found a racial difference in APOE genotype effect. In Caucasians, decline on the Digit Symbol Substitution Test (DSST) and the Delayed Word Recall (DWR) test was correlated with APOE genotype, with the e2 group showing less decline than the e3 group, which in turn showed less decline than the e4 group. Among African-Americans, an APOE correlation similar to that observed in Caucasians was shown for DSST, but not DWR. Word fluency (WF) was not correlated with APOE genotype in either group. Knopman et al.260 extended the findings with the ARIC population reported by Blair et al.368 After 14 years of followup, APOE e4 genotype was still associated with a more rapid decline in DSST and DWR, but not WF, but the differential race effect was no longer significant.260 Christensen and colleagues administered the MMSE, symbol digit modalities, reaction time, California Verbal Learning Test, and digits backward, and found that APOE e4 was associated with greater decline on the MMSE, but not on any of the other cognitive measures.331 Interpretation of the results for these multivariate analyses is problematic because of the large number of analyses performed without correction for multiple comparisons. Haan et al.274 reported that APOE e4 was associated with a higher annual rate of decline on the DSST, but not on the 3MS. Staehelin et al.369 examined free recall, reaction time, and the WAIS-R vocabulary test and found that baseline scores were lowest in subjects who were hetero- or homozygous for APOE e4, but after 2 years there was no APOE genotype-related change in performance on any measure.
Yaffe et al.370 followed a cohort of Caucasian women recruited for an osteoporotic fracture study and found that after an average followup of 6.4 years, women with an APOE e4 allele had a greater decline on all tests (26-point modified MMSE, P = 0.01; DSST, P = 0.05; Trails B, P = 0.0503). A similar association between APOE e4 and decline on the DSST was also reported by Blair et al. and Knopman et al.260,368 Packard and colleagues284 analyzed the association between APOE genotype and cognitive decline in 5804 subjects from the PROSPER trial of pravastatin in hypercholesterolemia. Subjects were between the ages of 70 and 82 and were followed for an average of 3.2 years. Investigators reported that subjects with an APOE e4 allele had poorer baseline performance on immediate and delayed memory scores, and slower information processing. Subjects with APOE e4 also showed a greater decline in immediate and delayed recall, but no significant change in speed of information processing, as measured by the Stroop test.284

Three studies reported an interaction effect for APOE e4 and diabetes or hypercholesterolemia, such that the presence of at least one e4 allele was associated with greater decline among individuals with either of these medical conditions.262,274,368 However, one of these studies reported that this interaction was evident only on the 3MS,274 and another reported it for the DSST.368 Carmelli and colleagues found a significant difference in the MMSE, DSST, and BVRT.262 Shadlen et al. reported that lower education was associated with steeper 4-year declines on the CASI in APOE e4 homozygotes, but not in heterozygotes, suggesting that modifiable factors, such as education, could mitigate the association of this genetic risk factor with cognitive decline.330 Dik et al. examined the association between APOE genotype and cognitive decline in cognitively normal subjects (MMSE > 26) and subjects with mild cognitive impairment (MMSE 21 to 26).371 Investigators reported that APOE e4 was a risk factor for memory decline, but only in cognitively impaired individuals (MMSE 21 to 26). No association between decline and APOE e4 was found in subjects with baseline MMSE > 26.

Two studies also found that the APOE e2 allele may provide some protection against cognitive decline compared to both APOE e3 and e4.366,368 Wilson and colleagues reported that inheriting an APOE e2 allele was associated with a reduced rate of decline in episodic memory, while inheriting an APOE e4 allele was associated with an increased rate of decline in semantic memory, episodic memory, and perceptual speed.366 Blair et al. found that, compared to non-e2 genotypes, APOE e2 was associated with a slower rate of decline in DSST and DWR, but not WF, in Caucasians; among African-Americans, this association was not seen for DWR.368

In summary, the majority of studies suggest that APOE e4 is associated with an increased rate of cognitive decline in elderly individuals, especially on some memory tasks (DWR) and tasks of perceptual speed (e.g., the DSST). Not all cognitive domains appear to be affected by APOE genotype, and there is variability between studies. There is some evidence that APOE e2 protects against memory decline, which is consistent with its proposed protective role against AD, but more data are needed. The effect of APOE on cognitive decline in African-Americans remains uncertain and will require studies with larger numbers of participants. There is modest evidence of interactions between APOE genotype and other risk factors, such as diabetes, hypercholesterolemia, and education, but no firm conclusions can be drawn. Data examining the role of other genetic factors linked to AD, such as PICALM and CLU, in the rate of cognitive decline are lacking.

Key Question 3 – Interventions to Delay the Onset of Alzheimer’s Disease

Key Question 3 is: What are the therapeutic and adverse effects of interventions to delay the onset of Alzheimer’s disease? Are there differences in outcomes among identifiable subgroups?

Nutritional and Dietary Factors

B vitamins and folate. We identified no RCTs that used B vitamins or folate to examine prevention of AD.

Other vitamins. We identified one RCT that examined the effect of supplemental vitamin E on progression of amnestic MCI to AD.372 Participants were recruited from 69 Alzheimer’s Disease Cooperative Study (ADCS) sites throughout the United States and Canada. Inclusion criteria were: a diagnosis of amnestic MCI; impaired memory; a Logical Memory delayed-recall score approximately 1.5 to 2 SD below an education-adjusted norm; a Clinical Dementia Rating Scale (CDR) score of 0.5; a score of 24 to 30 on the MMSE; and age between 55 and 90 years. There were three treatment arms: (1) 2000 IU of vitamin E, placebo donepezil, and a multivitamin daily; (2) 10 mg of donepezil, placebo vitamin E, and a multivitamin daily; and (3) placebo vitamin E, placebo donepezil, and a multivitamin daily (placebo group). The trial lasted for 3 years, during which participants were assessed every 6 months. A total of 769 participants were randomized, of whom 230 discontinued due to death, adverse events, or withdrawal of consent. Followup rates did not differ between the treatment and placebo groups. The primary outcome was incident AD determined by standard assessment procedures and diagnostic criteria. Compared to the placebo group, the group treated with vitamin E did not show a difference in the rate of progression from amnestic MCI to AD (HR 1.02; 95 percent CI 0.74 to 1.41; P = 0.91).

Ginkgo biloba. We identified one RCT that examined the effectiveness of ginkgo biloba versus placebo in reducing the incidence of AD in older individuals with normal cognition and those with mild cognitive impairment.373 Volunteers aged 75 years or older were recruited using voter registration and other purchased mailing lists from four U.S. communities with academic medical centers. To enroll in the study, individuals needed to have a proxy informant who was willing to be interviewed every 6 months. Individuals with prevalent dementia were excluded. There were additional exclusion criteria, primarily related to medication use, that are outlined in the evidence table in Appendix B. The treatment group took two daily doses of 120 mg ginkgo biloba extract; the placebo group took placebo pills on the same schedule. At the end of the trial, 60 percent of the active participants were taking their assigned study medication, and compliance did not differ between the two groups. A total of 3069 individuals were enrolled and randomized, of whom 482 had a diagnosis of MCI at enrollment; the remainder were considered cognitively normal. Primary outcomes were known for 93 percent of the participants at the end of the study. The primary outcome was a diagnosis of dementia. Secondary outcomes were the effects of ginkgo biloba on overall cognitive decline, functional disability, total mortality, and incidence of cardiovascular disease. Only results on the primary outcome were presented in this article. The trial results showed that the HR for AD in the entire sample, comparing treatment to placebo, was 1.16 (95 percent CI 0.97 to 1.39; P = 0.11); the HR > 1.0 suggests increased risk of AD for ginkgo biloba users. Ginkgo biloba also had no effect on the rate of progression to AD in participants with MCI (HR 1.10; 95 percent CI 0.83 to 1.47; P = 0.51).
The investigators concluded that ginkgo was not effective in preventing incident AD.

In conclusion, there is little evidence to support the use of ginkgo biloba to delay the onset of Alzheimer’s disease.

Omega-3 fatty acids. We identified two good quality systematic reviews evaluating the association between omega-3 fatty acids and risk of Alzheimer’s disease.29,31 The Cochrane review by Lim et al.31 searched multiple databases for randomized, double-blinded, placebo-controlled trials lasting at least 6 months in persons aged 60 or above without pre-existing dementia. Through October 2005, no eligible studies were identified. Fotuhi et al.29 searched for trials that examined any form of omega-3 fatty acids in participants aged 65 or older and used a standard diagnosis of dementia. No trials were identified. Our independent search also failed to identify any relevant studies.

Other fats. We identified no RCTs of other fats used to delay the onset of AD.

Trace metals. We identified no RCTs evaluating the use of trace metals to delay the onset of AD.

Mediterranean diet. We identified no RCTs evaluating the use of a Mediterranean diet to delay the onset of AD.

Intake of fruits and vegetables. We identified no RCTs assessing the relationship between fruit and vegetable intake and onset of AD.

Total intake of calories, carbohydrates, fats, and proteins. We identified no RCTs evaluating the relationship between total intake of calories, carbohydrates, fats, and protein and onset of AD.

Medical Factors

Medications. Prescription and non-prescription drugs considered under this heading include statins, antihypertensives, anti-inflammatories, gonadal steroids, cholinesterase inhibitors, and memantine.

Statins. No systematic reviews or RCTs were identified evaluating the effects of statins on the incidence of AD.

Antihypertensives. We identified one good quality systematic review evaluating the association between antihypertensive medications and the prevention of dementia.34 Our own independent search did not identify any additional studies. The systematic review included randomized, double-blind, placebo-controlled trials whose subjects had a diagnosis of hypertension without clinical evidence of cerebrovascular disease. Three trials (SCOPE, Syst-Eur, and SHEP) were included.374–376 SCOPE randomized participants (who had an entry SBP of 160 to 179 mmHg, a DBP of 90 to 99 mmHg, or both) in a 1:1 ratio to placebo or candesartan. There was a stepwise progression of medication changes based on blood pressure. SHEP included subjects with isolated systolic hypertension (SBP 160 to 219 mmHg, DBP < 90 mmHg). Participants were randomized to placebo or chlorthalidone, with atenolol or reserpine added if necessary. Syst-Eur also enrolled subjects with isolated systolic hypertension; SBP was 160 to 219 mmHg at entry, with DBP < 95 mmHg. Subjects were randomized to placebo (with medications added if SBP remained high) versus nitrendipine with the addition of enalapril and/or hydrochlorothiazide. All studies reported rates of incident dementia, but only one376 reported the proportion of patients with AD, and only a total of 23 cases were observed. In all studies, cognitive outcomes were secondary, and investigators did not report a priori sample size calculations specific to these outcomes. Given the low rate of incident dementia, these studies may have been underpowered to detect clinically important differences between interventions.

All studies had a significant number of control-assigned subjects on active medication (84 percent in SCOPE, 27 percent in Syst-Eur during study, and 44 percent in SHEP). This treatment contamination would decrease differences between groups. Also, in each study, a minority of subjects was taking initially assigned medications (25 percent in SCOPE, 30 percent in Syst-Eur, 30 percent in SHEP). All studies did achieve a differential BP response, with lower pressures in subjects assigned to active treatment (differences for SBP/DBP were SCOPE 3.2/1.6 mmHg, SHEP 11.1/3.4 mmHg, and Syst-Eur 10.1/4.5 mmHg). In a meta-analysis, treatment with antihypertensives did not decrease the risk for general dementia (OR 0.89; 95 percent CI 0.69 to 1.16).

Forette and colleagues376,377 followed the Syst-Eur subjects for an additional 2 years after the end of the study. All control subjects who wished to continue were changed to active medication at that time. Throughout the followup, blood pressure remained lower in the group initially assigned to active treatment by 7.0/3.2 mmHg. Although the majority of subjects were then on active treatment, the rates of dementia remained lower in the group initially assigned to active treatment. The reasons for the decrease in incidence of dementia are not known. Nitrendipine, a calcium channel blocker, was the first study drug started and showed an HR of 0.38 (95 percent CI 0.23 to 0.64). It may be that the subjects opting not to continue in the trial had a higher rate of cognitive decline, biasing results away from a null effect. Notably, a protective effect was also observed in the Syst-Eur study during the trial itself (incidence decreased by 50 percent, from 7.7 to 3.8 cases per 1000 patient-years).34,376,377

In summary, a meta-analysis of three large, multi-site RCTs did not suggest a protective effect of antihypertensive treatment on incident dementia. The proportion of subjects with incident AD was not consistently reported, so the applicability of these data to our study question is uncertain. The trials are difficult to evaluate, as many patients were lost to followup and many subjects assigned to the control group received medications when their blood pressure remained elevated; bias could be introduced via both mechanisms. The study durations were also fairly brief (about 5 years), and all studies assessed all-cause dementia rather than AD specifically. The brief followup study to Syst-Eur did suggest a benefit of antihypertensives, but in this open-label continuation study all subjects were receiving active treatment. Details of dementia diagnosis are limited, and it is possible that subjects with cognitive impairment were less likely to participate in the followup study.

Anti-inflammatories. Our search identified two RCTs using NSAIDs and reporting AD as an outcome. Both studies invoked early stopping rules. Thal et al.378 evaluated subjects who had MCI at baseline. A total of 1457 subjects with an MMSE ≥ 24 were randomized to rofecoxib 25 mg daily (a drug that subsequently was withdrawn from the market due to safety concerns) or placebo. The study was powered based on a projection of 220 incident cases of AD. The original 2-year study duration was lengthened to 4 years because of lower than expected conversion rates, and was then shortened to approximately 3 years when reaching the goal was determined to be futile. Over the course of the study, 189 subjects developed AD by the study definition, primarily a Clinical Dementia Rating Scale (CDR) score ≥ 1, which triggered further evaluation. A minority of subjects actually completed the study on drug (40 percent of the rofecoxib group and 41 percent of the placebo group). The most common reasons for discontinuation were withdrawal of consent, followed by adverse events, and then subjects who were lost to followup, uncooperative, or moved. A “completers” analysis evaluating those subjects who finished the study on drug showed an increased risk of AD with rofecoxib (HR 1.49; 95 percent CI 1.08 to 2.05), as did the intention-to-treat analysis (HR 1.46; 95 percent CI 1.09 to 1.94).

The second trial, by the Adapt Research Group,379 randomized 2528 subjects who were first-degree relatives of AD patients to celecoxib, naproxen, or placebo. Subjects received cognitive testing at baseline, and those scoring low enough were referred for a more comprehensive dementia evaluation. The cut-points used to trigger a full evaluation were not reported. This study was closed early because of concerns over the safety of COX-2 inhibitors as a class. Poor adherence to study medications was common: 83 to 85 percent of subjects had sufficient data to be included in analysis, but almost half of subjects had terminated use of the study drug. According to subject reports, the active medications were taken for medians of 546 and 561 days out of a possible 733 days. Seven demented subjects were inadvertently enrolled. When the data from these subjects were excluded, 5 subjects developed AD in the placebo group, 11 in the celecoxib group, and 9 in the naproxen group. Reflecting the small number of cases, the confidence intervals were large: the HR for celecoxib versus placebo was 4.11 (95 percent CI 1.30 to 13.0), and the HR for naproxen versus placebo was 3.57 (95 percent CI 1.09 to 11.7).

In summary, both RCTs suggest that NSAIDs increase the risk of incident AD. Explanations are not clear, but it is possible that NSAIDs adversely affect cognition. Alternatively, initiation of an anti-inflammatory after MCI has manifested itself may be too late in the course of illness for benefit. Studies that have attempted to treat AD with NSAIDs have also had negative findings, and MCI may well represent a prodrome in many patients. The trial conducted by the Adapt Research Group379 attempted to begin with unimpaired subjects. The randomization of a small number who had actual dementia casts some doubt on the sensitivity of the screening process. The frequent termination of study drugs may also have introduced some bias, depending on the cause of termination. It has been suggested that cognitively impaired subjects may be more likely to terminate participation in a trial. By subject report, it also appears that those remaining on medication had a cumulative time on NSAIDs of approximately 18 months, less than has been suggested necessary by observational studies.

Gonadal steroids. No good quality systematic review was identified that specifically addressed the therapeutic and adverse effects of gonadal steroids on development of AD. We identified two RCTs involving 7479 women that examined the effect of estrogen with or without progestins on the development of Alzheimer’s disease.380,381 Both studies were conducted in the United States and used a staged diagnosis based on DSM-IV criteria for AD, with data from cognitively impaired individuals referred to a central adjudication committee. Participants were community-dwelling, non-demented, postmenopausal women between the ages of 65 and 79 years recruited from 39 of the 40 Women’s Health Initiative Centers. Treatment consisted of conjugated equine estrogen (CEE 0.625 mg) or CEE plus 2.5 mg medroxyprogesterone acetate (MPA) versus placebo. Both studies assessed cognitive function at baseline using the Modified Mini-Mental State Examination (3MS); duration of followup was 4 (CEE plus MPA) or 5 (CEE) years.

The primary outcome in both studies was probable dementia, but a diagnosis of AD was made in half the patients with dementia (54 of 108 cases). An additional 16 percent of cases with dementia (17 of 108) were attributed to a mixture of cerebrovascular disease and AD (mixed dementia). Results are summarized in Table 69. Treatment with CEE alone did not reduce the risk of probable dementia, but appeared rather to increase it (HR 1.76; 95 percent CI 1.19 to 2.60; P = 0.005). After excluding participants with baseline 3MS scores at or below the screening cut point, the association of increased risk of dementia with CEE alone was no longer statistically significant (HR 1.77; 95 percent CI 0.74 to 4.23; P = 0.20), suggesting that these subjects may have been cognitively impaired at the time of enrollment in the study. Treatment with CEE plus MPA significantly increased the risk of developing dementia (HR 2.05; 95 percent CI 1.21 to 3.48; P = 0.01). No statistics were provided for AD as a separate endpoint in either the CEE or CEE plus MPA treatment arm.

Table 69. Therapeutic effects of gonadal steroids on development of AD.

Table 69

Therapeutic effects of gonadal steroids on development of AD.

The findings from these studies demonstrate that hormone therapy with estrogen with or without progestin does not reduce the risk of dementia in postmenopausal women. Regular use of CEE plus MPA by postmenopausal women slightly increases the risk of dementia.

Our own independent search did not identify any additional studies.

Cholinesterase inhibitors. We identified one good quality systematic review that examined the effects of cholinesterase inhibitors on the progression to dementia or AD.44 The review included eight RCTs (4127 subjects). Four were multi-site studies in North America or the United States; one was a multi-site study in North America and Western Europe; one was a small, single-site U.S. study; and two did not report location. RCTs were selected that compared a cholinesterase inhibitor (donepezil, galantamine, rivastigmine) to placebo in participants with abnormal memory function and/or who met diagnostic criteria for mild cognitive impairment (MCI); individuals with dementia were excluded. Only English-language studies presenting original data were included. Study quality was assessed using the Jadad criteria and was judged to be low to medium. Only one trial adequately described the randomization process; four followed an intention-to-treat principle for analysis; loss to followup was substantial and greater for intervention than control subjects; and in all but one study, multiple secondary outcome measures were evaluated without correction for multiple comparisons. Formal tests for publication bias (e.g., funnel plot) were not performed, but three completed studies382–384 identified at ClinicalTrials.gov have not reported results, suggesting possible publication bias. One was a 16-week industry-sponsored study of rivastigmine that was terminated early in 2004.384 The second was a 1-year National Institute of Mental Health-sponsored study of donepezil and ginkgo biloba extract completed in 2004.383 The third was a 1-year, industry-sponsored study of donepezil in subjects with MCI completed in March 2007.382 From the available records, it is unclear whether conversion to AD was an outcome measure in these trials.

Of the eight identified trials, four (described in three publications385–387) reported rates of conversion to dementia/AD. Cholinesterase inhibitors evaluated were donepezil 10 mg daily, rivastigmine 3 to 12 mg daily, and galantamine 16 or 24 mg daily (two studies). The number of subjects ranged from 769 to 1062. Participants were age ≥ 50 years; race was reported in only one publication, describing two studies,386 and over 90 percent of subjects were white. One study387 recruited subjects with amnestic MCI, while the other studies used more inclusive criteria for MCI. Two studies specifically reported conversion to AD at 3 to 4 years using NINCDS-ADRDA criteria,385,387 while the other two reported conversion to dementia at 2 years based on an increase in the CDR from 0.5 to 1.0.386 The authors of the systematic review did not compute a summary estimate of effect due to important heterogeneity in the definition for MCI.

Conversion to dementia or AD for subjects treated with a cholinesterase inhibitor ranged from 13 percent (at 2 years) to 25 percent (at 3 years). In comparison, control subjects converted at a rate of 18 percent (at 2 years) to 28 percent (at 3 years). Hazard ratios for conversion to AD were reported in two studies and did not show a statistically significant reduction in risk: HRs were 0.85 (95 percent CI 0.64 to 1.12)385 and 0.80 (95 percent CI 0.57 to 1.13).387 Treatment discontinuation due to adverse events was significantly higher for intervention subjects, ranging from 21 to 24 percent, compared to 7 to 13 percent in control subjects. Effects on mortality were not adequately reported. The authors concluded that the use of cholinesterase inhibitors in MCI was not associated with any delay in the onset of AD or dementia, and that the safety profile showed that the risks associated with cholinesterase inhibitors are not negligible. A fair quality systematic review388 evaluated the same four trials and computed a summary estimate showing a decreased risk of conversion to AD or dementia (RR 0.75; 95 percent CI 0.66 to 0.87) and a higher all-cause dropout risk (RR 1.36; 95 percent CI 1.24 to 1.49). We judged these summary estimates to be suspect due to significant variability in the definition of MCI, variability in outcomes (AD versus any dementia), and probable publication bias.
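The crude risk ratios implied by these conversion percentages can be sketched as follows (illustrative only; the fair quality review's pooled RR of 0.75 was computed from full trial data, not from these rounded percentages):

```python
# Crude risk ratios implied by the rounded conversion rates quoted above
# (treated conversion rate divided by control conversion rate).
rr_2yr = 0.13 / 0.18  # 2-year conversion: 13 percent treated vs. 18 percent control
rr_3yr = 0.25 / 0.28  # 3-year conversion: 25 percent treated vs. 28 percent control

print(round(rr_2yr, 2))  # 0.72
print(round(rr_3yr, 2))  # 0.89
```

Both crude ratios fall below 1.0, but as the text notes, the trial-level hazard ratios had confidence intervals crossing 1.0, so no significant benefit was shown.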

We did not identify any additional trials that evaluated cholinesterase inhibitors and reported conversion to AD or dementia, but we identified a subgroup analysis from the Petersen trial comparing donepezil to placebo in 769 subjects with amnestic MCI and Hamilton Depression Rating Scale (HDRS) scores < 12 (a threshold consistent with the absence of current mild to moderate major depressive disorder [MDD]). The primary outcomes publication reported subgroup analyses in APOE e4 carriers, showing a reduced risk of conversion to AD among those treated with donepezil (HR 0.66; 95 percent CI 0.44 to 0.98). Benefit in APOE e4 carriers was not demonstrated in subgroup analyses from the study by Feldman et al.385 The subgroup analysis42 evaluated the effects of donepezil in subjects with Beck Depression Inventory (21-item) scores ≥ 10, consistent with significant depressive symptoms despite the absence of MDD. In the subgroup with significant depressive symptoms (n = 208), donepezil treatment was associated with lower conversion to AD at 1.7 years (11 percent versus 25 percent) and 2.2 years (14 percent versus 29 percent), but not at 2.7 years (18 percent versus 32 percent; p = 0.07).

In summary, a systematic review found four low to medium quality trials reporting the effects of cholinesterase inhibitors on conversion to dementia/AD in subjects with MCI. Study heterogeneity precluded a valid summary estimate of effect, but conversion rates were similar in intervention and control subjects. Evidence for differential effects in subgroups at higher risk for progression to AD (e.g., amnestic MCI, depressive symptoms, APOE e4 carriage) is inconsistent.

Memantine. We did not identify any systematic reviews or primary studies that evaluated the effects of memantine on incident AD in subjects who were cognitively normal or had mild cognitive impairment.

Social, Economic, and Behavioral Factors

We did not identify any good quality systematic reviews or RCTs that evaluated the effects of the following types of interventions for delaying the onset of AD:

  • Social engagement;
  • Cognitive engagement;
  • Physical activities;
  • Other leisure activities.

Key Question 4 – Interventions to Improve or Maintain Cognitive Ability or Function

Key Question 4 is: What are the therapeutic and adverse effects of interventions to improve or maintain cognitive ability or function? Are there differences in outcomes among identifiable subgroups?

Nutritional and Dietary Factors

B vitamins and folate. We identified two RCTs that examined the effect of B vitamins on maintenance of cognitive function.389,390 The first389 was a substudy of a larger ongoing RCT examining the effect of antioxidants and folic acid on reduction of cardiovascular disease in female health professionals throughout the United States. Those included in the parent study were at least 40 years of age, had at least three coronary disease risk factors, completed the run-in phase of the RCT adequately, were willing to forgo use of other vitamin supplements during the course of the study, and had no history of cancer, active liver disease, chronic kidney failure, or use of anticoagulants. The cognitive substudy was limited to participants in the parent RCT who were aged 65 years and older. No details were provided regarding the baseline cognitive status of participants, but since these individuals were part of an ongoing RCT for cardiovascular disease, one would predict that the vast majority were non-demented at baseline. This substudy included participants assigned to one of the treatment arms and a placebo group of the parent study. The intervention was a daily combination supplement containing 2.5 mg folic acid, 50 mg vitamin B6, and 1 mg vitamin B12. The time from randomization to the end of the trial was 6.6 years. The initial cognitive assessment was about 1 year after the start of the intervention, and assessments were then repeated about every 2 years for a total of four assessments. Cognitive function was measured using a telephone assessment protocol that included the Telephone Interview for Cognitive Status (TICS), an immediate and delayed verbal memory task, and a semantic fluency task. The primary outcome measures were: longitudinal change on a global composite score that was derived by combining scores from all cognitive tests; and longitudinal change on a memory composite score that was derived by combining the memory measures.

Of the 2164 individuals meeting inclusion criteria for the substudy, 93 percent (2009) completed the first cognitive assessment. Ninety-four percent completed at least one followup assessment, and 83 percent completed at least three cognitive assessments. Cumulatively, however, just over 50 percent of the sample completed the final (fourth) followup assessment, due primarily to the short interval of time between the third cognitive assessment and the RCT end date. Participation rates did not differ by treatment group. Compliance with treatment was 83 percent for both the intervention and placebo groups. The main independent variable was the treatment arm (intervention versus placebo). There was no difference in cognitive decline over time between the intervention and placebo groups: the mean difference in decline was 0.03 standard units per year (95 percent CI −0.03 to 0.08; p = 0.30) for the global composite score and 0.03 (95 percent CI −0.03 to 0.09; p = 0.36) for the verbal memory score. In subgroup analyses, the investigators examined whether other possible risk factors for cognitive decline modified the effect of treatment on rates of cognitive change. One of the potential effect modifiers was dietary intake of vitamins B6 and B12 and folate, based on self-reported responses to a semi-quantitative food-frequency questionnaire administered at baseline. These subgroup analyses showed no difference in rate of cognitive change for the global composite score or the memory composite score when stratified by estimated dietary intake of B vitamins or folate. However, the analyses provided some support for the idea that supplemental B vitamins and folate may help maintain cognitive performance on the TICS for those with low dietary intake of B vitamins and folate.

In summary, this cognitive substudy of a larger RCT found no association between supplemental B vitamins and folate and rate of cognitive change over a period of about 6 years. The substudy used assignment to the treatment or placebo group as the predictor variable and did not use serum levels of the B vitamins or folate.

The second RCT390 investigated the effect of combined folate, B6, and B12 supplementation on cognition (primary outcome) and homocysteine levels (secondary outcome) among individuals with elevated baseline homocysteine levels. The study was conducted in New Zealand. Participants were recruited from service clubs. Inclusion criteria included age 65 and older, a fasting homocysteine > 13 μmol/L, and a normal creatinine level. Exclusion criteria were suspected dementia; taking medications known to interfere with folate metabolism (e.g., oral hypoglycemic agents or antiepileptic agents); taking vitamin supplements containing folic acid, vitamin B12, or vitamin B6; being treated for depression; diabetes; or a history of stroke or transient ischemic attacks. The participants were randomized to either the treatment arm (one capsule per day containing 1000 μg of folate, 500 μg of vitamin B12, and 10 mg of vitamin B6) or the placebo group (one placebo capsule per day). The trial lasted 24 months, with cognitive testing at three time points (baseline, 1 year, and 2 years). Of the 276 individuals randomized in the trial, 253 (92 percent) completed the study and were included in the analyses, including 15 individuals who discontinued the supplement but completed all cognitive testing. Compliance was adequate: 85 percent of participants took 95 percent or more of the supplements. The cognitive battery included a number of standard neuropsychological measures for the assessment of cognitive decline in later life and assessed a range of cognitive domains including memory, verbal fluency, executive function, reasoning, and orientation. Scores were reported for individual tests and also for a standardized combined score for all tests. Change in performance on the cognitive measures over time was calculated controlling for age, sex, and baseline cognitive score.

We note that the study authors interpreted the results as showing no significant overall differences in cognitive performance between the treatment and placebo groups. However, the results showed a modest but statistically significant difference (p = 0.05) in the summary cognitive score, with the treatment group scoring 0.11 standard deviations lower than the placebo group. Analysis of change in individual test scores revealed a statistically significant worsening on Trails B in the treatment group compared to the placebo group (because Trails B scores were log-transformed for analysis, exponentiating the between-group difference gives the ratio of scores in the vitamin group relative to the placebo group: 1.08; 95 percent CI 1.02 to 1.14). The treatment group tended to show greater decline on most of the cognitive measures, but these differences did not meet standard statistical significance levels.
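Because Trails B completion times were analyzed on a log scale, exponentiating the between-group difference yields a ratio rather than an absolute difference. A minimal sketch with hypothetical completion times (the actual group means were not reported here):

```python
import math

# Hypothetical mean completion times (seconds), chosen only for illustration;
# the difference on the log scale back-transforms to a ratio of geometric means.
log_mean_vitamin = math.log(90.0)  # hypothetical vitamin-group mean time
log_mean_placebo = math.log(83.3)  # hypothetical placebo-group mean time

ratio = math.exp(log_mean_vitamin - log_mean_placebo)
print(round(ratio, 2))  # 1.08, i.e., about 8 percent slower (worse) in the vitamin group
```

A ratio of 1.08 therefore corresponds to Trails B times roughly 8 percent longer in the treated group, matching the reported estimate.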

The secondary outcome of homocysteine levels showed that the vitamin group had homocysteine levels that were, on average, 4.36 μmol/L (p < 0.001) lower than the placebo group.

In summary, the two RCTs that have examined the effect of supplemental folate and vitamins B6 and B12 on maintenance of cognition in later life have not shown a beneficial effect.

Other vitamins. We identified five RCTs that examined the effect of supplemental vitamin E, vitamin C, or a multivitamin on maintenance of cognitive function.372,391–394 The studies are summarized here, and further details are provided in evidence tables in Appendix B. The first trial392 was a substudy of the Women’s Health Study that examined the effect of vitamin E and low-dose aspirin on the prevention of cardiovascular disease and cancer. Inclusion criteria for the parent study were women at least 45 years old; no history of coronary heart disease, cerebrovascular disease, cancer (except for non-melanoma skin cancer), or other major chronic illnesses; not actively using any of the study medications; and no history of adverse effects from any study medications. The cognitive substudy was limited to women age 65 and older and began about 5.6 years after randomization for the parent study. A detailed description of the participants’ baseline cognitive status was not provided, but based on the mean baseline scores on the TICS, the majority of participants were likely non-demented at baseline. The intervention group took vitamin E (600 IU) and low-dose aspirin (100 mg) every other day; the placebo group took a placebo pill on the same schedule. Of the 7175 eligible for the study, 6377 (89 percent) completed the baseline cognitive assessment, and 5073 completed all three cognitive assessments. Followup rates did not differ for the treatment and placebo groups. Compliance was similar for the two groups, as 75.4 percent of the vitamin E group and 76.9 percent of the placebo group reported taking at least two-thirds of the assigned pills. The cognitive substudy lasted for 4 years, which included three telephone cognitive assessments at 2-year intervals. The assessment included five tests measuring general cognition, verbal memory, and category fluency. The primary outcome was longitudinal change on a global composite score derived by averaging standardized scores across all five tests.

The findings showed essentially no difference in rates of cognitive decline between the two groups. Mean cognitive change over time was similar in the vitamin E group compared with the placebo group for the global score (mean difference in change = 0.00; 95 percent CI −0.04 to 0.04). The relative risk of substantial decline in the global score in the vitamin E group compared with the placebo group was 0.92 (95 percent CI 0.77 to 1.10); a relative risk below 1.0 would indicate that vitamin E was associated with a lower risk of substantial decline. There were no statistically significant differences in cognitive change between the treatment and placebo groups for the secondary outcomes of longitudinal performance on the individual cognitive tests.

The second RCT391 examined the effect of micronutrient supplementation on maintenance of cognition. This was a secondary outcome of the MAVIS (Mineral And Vitamin Intervention Study) trial, a large RCT of multivitamin and multimineral supplementation designed to assess possible effects on infection in men and women aged 65 years or over, using a supplement containing 11 vitamins and 5 minerals. Participants were recruited from six primary care clinics in northeast Scotland. Participants could not have taken vitamin, mineral, or fish oil supplements within the 3 months before recruitment (1 month for supplements of water-soluble vitamins other than vitamin B12). Cognitively impaired individuals were not explicitly excluded, although the authors noted that individuals with dementia were unlikely to volunteer to participate or would have been excluded by their physician. A total of 910 individuals enrolled in the study and were randomized. Over the 12-month study, the dropout rates were 12.7 percent for the treatment group and 17.6 percent for the placebo group. The treatment group took a daily supplement containing 16 vitamins and minerals at one to two times the recommended daily allowance; the placebo group took a placebo pill on the same schedule. Compliance with taking the tablets was over 78 percent in both supplemented and placebo groups. The cognitive assessment included the Digit Span forward test and a phonemic verbal fluency measure. The tests were administered in person at the beginning of the study and then by telephone at the end of the study.

The results of the study showed no differences in cognitive change between the treatment and the placebo groups, either in the groups as a whole or in analyses of the over age 75 subgroup or the subgroup at risk for micronutrient deficiency (as defined by self-report responses to a food frequency questionnaire). The different modes of test administration at baseline (in person) and at the end of the study (telephone) may have increased the variability in performance over time, but this was unlikely to differ between treatment and placebo groups.

The third RCT372 examined the association between vitamin E supplementation and decline on cognitive tests and other cognitive/functional outcomes over a 3-year period among individuals with amnestic MCI. These were secondary outcomes of the RCT described under Key Question 3; details of the study are described in that section. Every 6 months during the 3-year trial, participants completed a range of standard cognitive tests in addition to the Clinical Dementia Rating scale, the Activities of Daily Living Scale, and the Global Deterioration Scale. Among the numerous outcomes assessed, there were few significant differences between the treatment and placebo groups: less decline in the vitamin E treatment group on measures of executive function (p < 0.05) and overall cognitive score at the 6-month assessment, and on language measures (p < 0.05) up through the 18-month assessment. No significant differences remained after 18 months.

The fourth RCT393 examined the association between an antioxidant vitamin supplement and cognitive decline over a 1-year period. Participants were recruited through advertisements in the United Kingdom. Inclusion criteria were: age between 60 and 80 years; within two standard deviations of the normal weight for height, age, and sex; no history of significant disease or mental illness; able to give informed consent; and capable of taking 80 to 120 percent of the prescribed number of capsules during the run-in period. Exclusion criteria were: current medication likely to influence the outcome measures; use of vitamin supplements in the preceding 3 months; history of drug abuse, including alcohol; significant cardiovascular, respiratory, hepatic, renal, gastrointestinal, endocrine, or neurological disease or abnormality; malabsorption syndrome; psychiatric disorder; inability to give informed consent; disorders that would interfere with understanding of or compliance with the study; hypersensitivity to any of the constituents of the active treatment; MMSE score < 18; participation in another drug clinical trial within the previous 6 months; and inability to provide blood samples. These criteria would have excluded individuals with dementia of moderate or greater severity; however, it is possible that some individuals with mild dementia were included. The treatment group took a daily vitamin containing 2 mg beta carotene, 400 mg alpha-tocopherol, and 500 mg ascorbic acid; the placebo group took a placebo pill on the same schedule. No information was provided on whether the outcome assessors or the participants were blind to group assignment. A total of 205 subjects were randomized; 185 appeared to have completed all assessments, but few details were provided. Participants were assessed at baseline and at 4, 8, and 12 months thereafter. The cognitive assessment included measures of verbal memory, logical reasoning, attention, and reaction time. The analyses did not adjust for baseline cognitive performance. The authors reported that the number of significant findings across all cognitive measures did not exceed the number one would expect by chance (4 of 117 comparisons significant); actual significance levels were not reported for any of the measures.

The fifth and final RCT394 was a substudy of a larger ongoing RCT examining the effect of antioxidants and beta carotene on reduction of cardiovascular disease in female health professionals throughout the United States. Those included in the parent study were at least 40 years of age, had at least three coronary disease risk factors, completed the run-in phase of the RCT adequately, were willing to forgo use of other vitamin supplements during the course of the study, and had no history of cancer, active liver disease, chronic kidney failure, or use of anticoagulants. The cognitive substudy was limited to participants in the parent RCT who were aged 65 years and older. No details were provided regarding the baseline cognitive status of participants, but since these individuals were part of an ongoing RCT for cardiovascular disease, one would predict that the vast majority were non-demented at baseline. This substudy included participants assigned to one of the treatment arms and a placebo group of the parent study. The intervention comprised three antioxidants: 420 mg vitamin E every other day, 500 mg vitamin C daily, and 50 mg beta carotene every other day; there were eight treatment groups, ranging from zero to three active agents. The time from randomization to the end of the trial was 8.9 years. The initial cognitive assessment was about 3.5 years after the start of the intervention, and assessments were then repeated about every 2 years for a total of four assessments. Cognitive function was measured using a telephone assessment protocol that included the TICS, an immediate and delayed verbal memory task, and a semantic fluency task. The primary outcome measures were: longitudinal change on a global composite score that was derived by combining scores from all cognitive tests; and longitudinal change on a memory composite score that was derived by combining the memory measures.

Of the 3170 individuals meeting inclusion criteria for the substudy, 89 percent (2824) completed the first cognitive assessment. Ninety-one percent completed at least one followup assessment, and 81 percent completed at least three cognitive assessments. The number of participants who completed the fourth assessment was not given, but the study authors stated that 24 percent of the sample was not contacted for the final followup assessment due to the short interval of time between the third cognitive assessment and the RCT end date. Participation rates did not differ by treatment group. Compliance with treatment, defined as taking at least two-thirds of the study pills, ranged from 64 to 68 percent and was comparable across all groups. The main independent variables were the treatment arms (each intervention versus placebo). The primary outcome was a global composite score averaging all scores; repeated-measures analyses were used to examine cognitive change over time.

Results showed that vitamin E supplementation and beta carotene supplementation were not associated with slower rates of cognitive change (mean difference in change for vitamin E versus placebo −0.01 standard unit; 95 percent CI −0.05 to 0.04; P = 0.78; for beta carotene, 0.03; −0.02 to 0.07; P = 0.28). Although vitamin C supplementation was associated with better performance at the last assessment (mean difference 0.13; 95 percent CI 0.06 to 0.20; P = 0.0005), it was not associated with cognitive change over time (mean difference in change 0.02; 95 percent CI −0.03 to 0.07; P = 0.39). In secondary analyses, those taking at least one of the three antioxidant supplements (n = 2471) did not differ in cognitive change from baseline compared with those assigned to all placebos (n = 353); mean difference in cognitive change over time was 0.02 standard units (95 percent CI −0.04 to 0.09; P = 0.64). A number of other secondary analyses were done to examine whether the results differed by various subgroups. One result from these secondary analyses was the finding that vitamin C supplementation was associated with better performance over time among those who developed cardiovascular events during followup (difference in change from baseline in global score for vitamin C group versus placebo 0.15; 95 percent CI 0.04 to 0.26) compared to those who did not experience new cardiovascular events (difference in change 0.00; 95 percent CI −0.05 to 0.05; P for interaction = 0.009). The authors of the study concluded that vitamin E, vitamin C, or beta carotene supplementation was not associated with less cognitive decline in women with cardiovascular disease or risk factors.

In conclusion, the five included RCTs do not provide support for a beneficial effect of supplemental vitamin E, vitamin C, or a multivitamin on maintenance of cognitive function.

Ginkgo biloba. We identified one RCT that examined the effect of ginkgo biloba on maintenance of cognitive function.395 Participants were recruited through mass mailings to individuals on public registry lists in Oregon. Inclusion criteria were: over 84 years of age; no subjective complaint of memory impairment compared to others of similar age; had not sought assessment for memory or cognitive dysfunction; normal memory based on performance on specific cognitive tests; functionally independent; not depressed; adequate vision and hearing to complete all testing; adequate English language skills to complete all testing; general health status that would not interfere with ability to complete the longitudinal study; and an informant available with frequent contact with the subject to verify functional status. Exclusion criteria were: diseases associated with dementia or significant cognitive impairments, current alcohol or substance abuse, B12 deficiency, thyroid disease, or urinary tract infection. The trial lasted for 42 months. The treatment group took 80 mg of ginkgo biloba three times per day, and the placebo group took a placebo pill on the same schedule. All subjects also took a daily multivitamin with 40 IU of vitamin E. Two outcomes were examined: incident mild cognitive decline, defined as progression from a Clinical Dementia Rating (CDR) scale score of 0 to 0.5, and decline in memory function over time, measured by performance on a delayed verbal recall task.

A total of 134 participants were enrolled and randomized in the trial, of whom 118 (88 percent) met completion criteria and were included in the analyses. The dropout rate did not differ between the treatment and the placebo groups. The assessment included an in-person interview and cognitive status screening every 6 months and more comprehensive neuropsychological testing annually; the maximum number of assessments was seven. Overall, 68.6 percent of participants met the definition of medication compliance, and the ginkgo biloba group (65.0 percent) did not differ from the placebo group (72.4 percent).

The results showed that in the intention-to-treat analysis, the ginkgo biloba group did not have a significantly reduced risk of progression to a CDR score of 0.5 (HR 0.43; 95 percent CI 0.17 to 1.08), nor significantly less decline in memory function (coefficient [SE] 0.111 [0.057]; p = 0.055). In the secondary analysis that controlled for medication adherence, the ginkgo biloba group had a lower risk of progression from a CDR score of 0 to 0.5 (HR 0.33; 95 percent CI 0.12 to 0.89) and a smaller decline in memory scores (coefficient [SE] 0.115 [0.057]; p = 0.047). The study authors noted that a larger RCT was needed to clarify whether ginkgo biloba deters cognitive decline, especially among those compliant with medication use.

In conclusion, there is little evidence to support the use of ginkgo biloba to improve or maintain cognitive ability or function.

Omega-3 fatty acids. We identified a single good quality systematic review evaluating the use of omega-3 fatty acids to improve or maintain cognitive ability or function.29 The authors searched for trials that addressed the specific association between any form of omega-3 fatty acids and cognitive decline in participants age 65 or older. Four trials were identified, but three were conducted in subjects with dementia or organic brain lesions. A single 26-week trial compared DHA-EPA 400 mg or 1800 mg versus placebo in adults age ≥ 65 with MMSE score > 21 at baseline.396 There was no statistically significant effect for any of the four cognitive domains evaluated. Four ongoing RCTs397–400 were identified evaluating the effects of omega-3 fatty acids in mid-life to older adults without dementia. Study recruitment had been completed in three of the four studies, but results had not been published at the time of our search.

Other fats. We identified no RCTs of other fats used to improve or maintain cognitive ability or function.

Trace metals. We identified no RCTs assessing the relationship between trace metals and cognitive decline. We did identify one RCT391 that assessed the association between multivitamins (which included trace metals) and cognitive function; this study is described above, under “Other vitamins.”

Mediterranean diet. We identified no RCTs of the Mediterranean diet used to improve or maintain cognitive ability or function.

Intake of fruits and vegetables. We identified no RCTs assessing the relationship between fruit and vegetable intake and improvement or maintenance of cognitive ability or function.

Total intake of calories, carbohydrates, fats, and proteins. We identified no RCTs evaluating the relationship between total intake of calories, carbohydrates, fats, and protein and improvement or maintenance of cognitive ability or function.

Medical Factors

Medications. Prescription and non-prescription drugs considered under this heading include statins, antihypertensives, anti-inflammatories, gonadal steroids, cholinesterase inhibitors, and memantine.

Statins. A good quality systematic review examined the effects of two HMG-CoA reductase inhibitors (statins) on cognitive decline.33 The review included two trials that randomized participants to a statin or placebo for the primary purpose of examining effects on cardiovascular events.401,402 Change in cognitive status and adverse events were secondary outcomes. The two trials randomized 26,340 adults from Western Europe, aged 40 to 82 years, at elevated risk for vascular events. A summary estimate of effect was not computed because the cognitive outcomes and duration of followup varied significantly. Both studies were assessed overall as good quality.

The Heart Protection Study401 (n = 20,536) excluded individuals with a history of dementia but did not assess cognition at baseline; a modified Telephone Interview for Cognitive Status (TICS-m) score below 22 (of a possible 39) was pre-specified as indicating cognitive impairment. Participants were randomized to simvastatin (40 mg daily), a lipophilic statin, or matching placebo. Adherence was reported as 85 percent; 17 percent of the placebo group used a non-study statin. Mean duration of followup was 5 years, and the followup rate was high. The proportion of subjects with cognitive impairment at final followup did not differ between treatment (23.7 percent) and placebo groups (24.2 percent; p = not significant). The unadjusted difference in mean TICS-m scores was not statistically significant (24.08 simvastatin versus 24.06 placebo; difference = 0.02 [SE 0.07]). Discontinuations due to adverse events were similar (4.8 percent simvastatin versus 5.1 percent placebo). Key methodological limitations are the limited cognitive outcomes and the lack of a baseline cognitive assessment.

The PROSPER study402 (n = 5806) excluded individuals with an MMSE < 24; changes in scores on four cognitive tests were reported. Participants were randomized to pravastatin (40 mg daily), a hydrophilic statin, or matching placebo. Adherence was reported as 94 percent; 10 percent of the placebo group used a non-study statin. Mean duration of followup was 3.2 years; approximately 25 percent of participants in each group withdrew. Cognitive status – including global cognition, cognitive speed, and cognitive inhibition – was measured by MMSE, picture-word learning test, Stroop Color-Word Test, and a letter digit coding test. Changes in cognition adjusted for age, SBP, body mass index, alcohol use, concomitant drugs, Barthel Index score, Instrumental Activities of Daily Living (IADL) score, sex, smoking, diabetes mellitus, vascular disease, country, and test version (if applicable) did not differ significantly for any of the cognitive assessments. Discontinuations due to adverse events were similar (3.7 percent pravastatin versus 3.98 percent placebo). However, new cancer diagnoses were higher for the pravastatin-treated group (HR 1.25; 95 percent CI 1.04 to 1.51). The study authors completed a meta-analysis of eight randomized placebo-controlled trials lasting at least 3 years, which did not show an association between statin use and cancer (HR 1.02; 95 percent CI 0.96 to 1.09).

Our search did not identify any additional trials. In summary, two large RCTs conducted in mid- to late-life adults at high risk for vascular disease did not show an effect on cognitive function of statins taken for 3 to 5 years.

Antihypertensives. A good quality Cochrane systematic review34 evaluated the effects of antihypertensive medications on cognitive impairment (CI) and dementia. In addition to the systematic review, we identified five additional manuscripts discussing several secondary analyses related to the trials in the included review, and two additional trials excluded from the review.273,374,375,403–406

Included and discussed in the McGuinness review34 were three randomized, double-blind, placebo-controlled trials whose subjects had a diagnosis of hypertension without clinical evidence of cerebrovascular disease: SCOPE,374 SHEP,375 and Syst-Eur.376 SCOPE randomized participants (who had an entry SBP of 160 to 179 mmHg, a DBP of 90 to 99 mmHg, or both) in a 1:1 ratio to placebo or candesartan. There was a stepwise progression of medication changes based on blood pressure. SHEP included subjects with isolated systolic hypertension (SBP 160 to 219 mmHg, DBP < 90 mmHg). Participants were randomized to placebo or chlorthalidone, with atenolol or reserpine added if necessary. Syst-Eur also enrolled subjects with isolated systolic hypertension (SBP 160 to 219 mmHg at entry, DBP < 95 mmHg). Subjects were randomized either to placebo, with medications added if SBP remained high, or to nitrendipine, with enalapril and/or hydrochlorothiazide added as needed.

All studies in the systematic review had a significant number of control subjects taking active medication (84 percent in SCOPE, 27 percent in Syst-Eur during the study, and 44 percent in SHEP). This treatment contamination would decrease differences between groups. Also, in each study only a minority of subjects remained on their initially assigned medications (25 percent in SCOPE, 30 percent in Syst-Eur, 30 percent in SHEP). All studies achieved a differential BP response, with lower pressures in subjects assigned to active treatment (differences were 3.2/1.6 mmHg in SCOPE, 11.1/3.4 mmHg in SHEP, and 10.1/4.5 mmHg in Syst-Eur). Cognitive outcomes were secondary analyses in all three studies. Sample size calculations for cognitive decline were reported for SHEP but not SCOPE. Sample size calculations reported for Syst-Eur were for nonspecific dementia only.

In the discussion that follows, we review the secondary analyses identified in our search. In SCOPE, for ethical reasons 84 percent of placebo patients received antihypertensive medications. One publication407 examined only those subjects without add-on therapy, a comparison that would be expected to amplify any observed treatment benefit. Change in MMSE scores did not differ between placebo and candesartan groups.

Saxby et al.403 analyzed information from one site of the SCOPE study using a computerized test set to define cognitive decline. As in other sites and trials, there was considerable contamination of allocated groups: by the study’s end, 81 percent of control subjects were taking an active antihypertensive, while 68 percent of the treatment group had stopped their assigned medications. At the study’s close, the average difference in BP was 8/3 mmHg. Small beneficial effects associated with antihypertensive use were seen on episodic memory and attention, but not on speed of cognition, working memory, or executive function.

In SHEP, which included only patients with isolated systolic hypertension, use of add-on medications was common. Add-on medication was triggered by high blood pressure and was more common in the placebo group than in the treatment group, possibly biasing the results towards a null effect. There was no apparent protective effect of antihypertensives. Di Bari and colleagues408 also suggest that differential dropouts may have hidden a protective effect of antihypertensives.

The Medical Research Council’s (MRC) trial405 was not included in the systematic review but met our eligibility criteria. This study randomized older adults in a 2:1:1 ratio to placebo, atenolol 50 mg, or hydrochlorothiazide 25 mg. Patients had SBP 160 to 209 mmHg and mean DBP < 115 mmHg during the 8 weeks preceding randomization. The mean fall in SBP was 16.4 mmHg in the placebo group, 30.9 mmHg in the atenolol group, and 33.5 mmHg in the hydrochlorothiazide group. A variable proportion of subjects in all treatment arms received additional medications: 20 percent of the hydrochlorothiazide group, 27 percent of the atenolol group, and 1.3 percent of the placebo group. Non-adherence to study medications was substantial; 43 percent of the hydrochlorothiazide group, 52 percent of the atenolol group, and 51 percent of the placebo group were off allocated treatment for at least part of the 54-month trial. Cognitive outcomes were assessed using the paired associate learning test and the Trail Making Test, Part A (Trails A). There was no difference in cognitive outcomes based on group assignment.

The PROGRESS study406 was excluded from the McGuinness systematic review34 because all subjects had a history of stroke or transient ischemic attack. There were no blood pressure requirements for inclusion. Subjects were randomly assigned to either active treatment with perindopril (plus indapamide if there was neither an indication for nor a contraindication to a diuretic) or a placebo. During the study, 22 percent of subjects discontinued study medication. Cognitive decline was defined by change in MMSE over a mean of 3.9 years of followup. Active treatment was associated with decreased risk for cognitive decline when decline was defined as a drop of ≥ 3 points on the MMSE (RR 0.81; 95 percent CI 0.68 to 0.96). Sensitivity analysis defining cognitive decline as ≥ 2 or ≥ 4 points did not “materially alter” the results. The mean difference between randomized groups in decline in MMSE score (placebo minus active) was 0.19 (0.07 SE), with less decline for active treatment (p = 0.01).

In summary, participants in these trials had mean ages ranging from 70 to 77 years, except for the PROGRESS trial, where mean age at baseline was 64 years. For individuals with hypertension, antihypertensives were not demonstrably protective against cognitive decline over 4.5 to 5 years. However, all studies were limited by substantial treatment contamination and loss to followup. A single trial in subjects with known vascular disease suggests possible benefit with antihypertensive treatment.

Anti-inflammatories. We identified three randomized, placebo-controlled trials evaluating the effects of NSAIDs on cognitive decline. Two studies409,410 randomized 6244 subjects to 100 mg acetylsalicylic acid (ASA) daily or on alternate days versus placebo for 5 to 6 years. From the trial by Price et al.,409 only the cognitive change subset, which included 399 subjects, met our inclusion criteria. Within this subset, 24.8 percent of the aspirin group (n = 63) and 16.8 percent of the placebo group (n = 42) were lost to followup. Cognitive outcomes were assessed using a summary score from multiple measures of cognition. Over the 5-year followup, there was no statistically significant difference in cognition (adjusted mean difference 0.01; 95 percent CI −0.07 to 0.09).

The Women’s Health Study410 involved 5845 subjects who had completed at least two cognitive assessments. The authors analyzed this study as a cohort study within the context of an RCT. The trial was originally designed to examine the impact of aspirin on cardiovascular disease and cancer. In the aspirin and placebo groups, 79 and 80 percent of subjects, respectively, completed all three assessments. The cognitive cohort was started at a mean of 5.6 years after randomization, and only women 65 years of age and older were included.

This study found a slower decline in verbal fluency in the aspirin group, but no effect on the global summary score. Category fluency (number of animals named in a minute) had a mean of 17.76 (SE 0.10) for the aspirin group at the third assessment, and a mean of 17.38 (SE 0.10) for the placebo group at the same assessment; the mean difference between the two groups was 0.37 (95 percent CI 0.10 to 0.65). The mean difference between aspirin and placebo groups for the global summary score at the third assessment was 0.00 (95 percent CI −0.04 to 0.04).

The Alzheimer’s Disease Anti-inflammatory Prevention Trial (ADAPT)411 randomized 2528 subjects to celecoxib 200 mg two times per day, naproxen 220 mg two times per day, or placebo. This 2-year study terminated early and had a high dropout rate and poor medication compliance. There was a suggestion of worsening with both active drugs. The global summary score was significantly worse only with naproxen. Actual time on drug, as reported by the subjects, averaged 1.50 years for celecoxib and 1.42 years for naproxen.

In summary, there was no effect of low-dose aspirin on cognitive decline in these studies. There was worsening over time with naproxen versus placebo, but not celecoxib versus placebo, in the one RCT available. This study had a high dropout rate, and the actual time on drug was brief. There is no support in these studies for the use of NSAIDs to slow or prevent cognitive decline. As with the incident dementia analysis, the ADAPT study411 gives some concern for possible worsening of cognitive functioning, but the problems in the trial mitigate this concern.

Gonadal steroids. We identified a single good quality systematic review that examined the effects of gonadal steroids on cognitive function.37 Studies were included if they were double-blind RCTs that examined the effects of estrogen or estrogen plus progestin on cognitive function over a treatment period of at least 2 weeks in postmenopausal women. Twenty-four studies were included in the review, but only 16 had analyzable data (10,114 women). Eleven studies were from the United States, seven from Europe, three from Canada, and one from Australia. Treatment duration ranged from 2 weeks to 5.4 years, and five studies had duration of greater than 1 year. The eight largest studies included postmenopausal women over the age of 65 years. Eleven studies had comparable age and education status at baseline for the HRT and placebo groups, while four studies did not report education. The type of hormone therapy, dose, and mode of administration varied greatly across studies.

The effect of HRT on the development of mild cognitive impairment (MCI) was examined in two studies. MCI was defined by a strict protocol with four phases of ascertainment, including performance on neuropsychological tests and clinical assessment. Odds ratios (ORs) were calculated using fixed-effect models for rates of cognitive impairment, and weighted mean differences (WMDs) were calculated for continuous data. Meta-analyses showed no statistically significant effect of estrogen or estrogen plus progestin on prevention of MCI (OR for MCI with estrogen after 5 years 1.34 [95 percent CI 0.95 to 1.9]; OR for MCI with estrogen plus progestin after 4 years 1.05 [95 percent CI 0.72 to 1.54]). Estrogen or estrogen plus progestin treatment did not maintain or improve cognitive function (estrogen WMD −0.45 [95 percent CI −0.99 to 0.09]; estrogen plus progestin WMD −0.16 [95 percent CI −0.58 to 0.26]). There was no significant statistical heterogeneity (I2 < 50 percent) in any of the analyses. No assessment of publication bias was performed.

The effect of HRT on various cognitive domains was also examined. In one study, the immediate Paired Associate test, a test of verbal memory, showed a significant beneficial effect after 2 to 3 months of estrogen therapy, but other larger studies found that 4 to 5 years of HRT was associated with impaired verbal memory using the CVLT (total WMD −0.52 [95 percent CI −0.91 to −0.13]; short delay WMD −0.24 [95 percent CI −0.44 to −0.04]; and long delay WMD −0.23 [95 percent CI −0.43 to −0.03]). Removal of two studies with inadequate allocation concealment also resulted in a loss of statistical significance in the Paired Associate test. In most studies, HRT had no effect on visual memory, but women randomized to receive CEE plus MPA in a large study showed a small, but statistically significant benefit on the Benton Visual Retention Test (BVRT), a test of short-term figural memory and visuo-constructional abilities (a difference of −0.27 [95 percent CI −0.49 to −0.05] errors per year). The clinical significance of this small change is unknown. There was no evidence for benefit in verbal fluency, word list recall, Wechsler Memory scale tests, Boston Naming, or PMA Vocabulary.

The review authors concluded that estrogen and combined estrogen plus progestin do not prevent cognitive decline in older postmenopausal women. Treatment with CEE plus MPA was associated with a small decrement in a number of verbal memory tests and a slight increase in figural memory. It is unknown whether HRT may benefit specific subgroups, such as younger women or women with different types of menopause (natural versus surgical), or whether effects vary with the formulation, dose, or method of administration of HRT.

We identified one RCT examining the effect of raloxifene, a selective estrogen receptor modulator (SERM), on cognitive change.39 This U.S.-based, double-blind, randomized, two-site, parallel-group, placebo-controlled study compared two doses of raloxifene (60 or 120 mg/day) to placebo on a battery of cognitive tests. One hundred and forty-three postmenopausal women ranging in age from 45 to 75 years participated. Tests were derived from the Memory Assessment Clinics (MAC) computerized psychometric battery and the Walter Reed Performance Assessment Battery and were performed at baseline and at 1, 6, and 12 months. Study duration was 12 months. There were no differences in any cognitive measure following 1 year of treatment with 60 or 120 mg raloxifene.39

Tierney et al. performed a 2-year randomized, double-blind, placebo-controlled trial of the effect of 1 mg 17-beta-estradiol and 0.35 mg norethindrone in 142 women between the ages of 61 and 87.412 The primary outcome was short-delay verbal recall on the CVLT, and subjects were stratified by baseline performance on the short-delay recall trial of the Rey Auditory Verbal Learning Test (RVLT). Treated women who scored at or above average on the baseline RVLT showed significantly less decline on the CVLT at 1 year (p = 0.007) and 2 years (p = 0.01) than women who received placebo. There was no treatment effect in women who scored below average on the RVLT at either year, suggesting that any benefit of estrogen on cognitive function may be limited to women with average or above average memory at baseline.

We also identified one good quality systematic review that examined the effect of dehydroepiandrosterone (DHEA) supplementation on cognitive function.36 Studies were included if subjects received DHEA or DHEA sulfate for any duration and were assessed by a valid neuropsychometric measure. The review included six studies: three from the United States and three from European countries (264 women and 281 men). Four studies included cognitive measures, while two had quality-of-life measures without cognitive testing. The age range of subjects was 44 to 85 years, and the duration for studies with cognitive measures ranged from 2 weeks to 1 year. Bias was assessed and described. DHEA 50 mg or placebo was administered daily, and outcomes were change in neuropsychometric test results.

No consistent benefit of DHEA supplementation on cognitive function was identified. The authors concluded that although the evidence is limited, controlled trials do not support a beneficial effect of DHEA supplementation on cognitive function in non-demented middle-aged or elderly people.

We identified one additional eligible United States-based study of DHEA on cognitive function involving administration of 50 mg DHEA to 225 cognitively normal subjects (aged 55 to 85 years) for 1 year.413 This double-blind RCT measured cognitive function at baseline and 12 months using a battery of tests including the 3MS, word list memory and recall, Trail Making Part B, category fluency, and modified Boston Naming Test. The authors found no benefit in cognitive performance from treatment with DHEA.

The combined data do not support a benefit of 50 mg DHEA on cognition. No data are available on whether regular administration of DHEA has an effect on development of AD.

In summary, as described under Key Questions 1 and 2, respectively, some cohort studies of estrogen treatment suggest a decreased incidence of AD and, for symptomatic post-menopausal women, decreased cognitive decline. Double-blind RCTs of estrogen, however, have not demonstrated a protective effect in preventing dementia or cognitive decline. Use of CEE plus MPA may increase the risk of dementia in postmenopausal women. In studies administering a battery of cognitive tests, there is no consistent evidence for the benefit of routine administration of estrogen in any cognitive domain. Limited evidence suggests that there is also no beneficial effect on cognitive function of raloxifene, a SERM, or of DHEA. There is insufficient evidence to determine whether other groups, such as women < 60 years of age, may benefit from estrogen treatment; when treatment should begin; or whether effects differ in women who have natural versus surgical menopause. Other remaining questions about the effect of gonadal steroids on cognitive function include the type of estrogen or SERM used, whether it is supplemented with a progestin, the duration of therapy, and whether different modes of delivery would alter efficacy.

Cholinesterase inhibitors. We identified one good quality systematic review that examined the effects of cholinesterase inhibitors on the progression to dementia or AD and included, as secondary outcomes, effects on cognitive testing.44 The review included eight RCTs (4127 subjects). Four were multi-site studies in North America or the United States; one was a multi-site study in North America and Western Europe; one was a small, single-site U.S. study; and two did not report location. RCTs were selected that compared a cholinesterase inhibitor (donepezil, galantamine, rivastigmine) to placebo control in participants with abnormal memory function and/or who met diagnostic criteria for mild cognitive impairment (MCI); individuals with dementia were excluded. Only English-language studies and studies presenting original data were included. Study quality was assessed using the Jadad criteria and was judged to be low to medium. Only one trial adequately described the randomization process; four followed an intention-to-treat principle for analysis; loss to followup was substantial and greater for intervention than control subjects; and in all but one study, multiple secondary outcome measures were evaluated without correction for multiple comparisons. Formal tests for publication bias (e.g., funnel plot) were not performed, but three completed studies382–384 identified at ClinicalTrials.gov have not reported results, suggesting possible publication bias. One was a 16-week industry-sponsored study of rivastigmine that was terminated early in 2004.384 The second was a 1-year National Institute of Mental Health (NIMH)-sponsored study of donepezil and ginkgo biloba extract completed in 2004.383 Finally, the third was a 1-year, industry-sponsored study of donepezil in subjects with MCI completed in March 2007.382 All three studies planned assessment of cognitive outcomes.

Of the eight identified trials, six (described in five publications385–387,414,415) reported effects on measures of cognition, activities of daily living, or neuropsychiatric symptoms. Cholinesterase inhibitors evaluated were donepezil 10 mg daily (two studies), rivastigmine 3 to 12 mg daily, and galantamine 16 or 24 mg daily (three studies). The number of subjects ranged from 19 to 1062. The two smaller studies followed subjects for ≤ 6 months, while the large studies followed subjects for 2 to 4 years. Participants were aged ≥ 50 years; race was reported in three studies (described in two publications386,415), and over 90 percent of subjects were white. A total of 36 different scales, tests, and neuropsychological batteries and two measures of volumetric imaging were used. The authors of the systematic review did not compute a summary estimate of effect due to important heterogeneity in the definition of MCI and the variability in outcome measures. Point estimates and 95 percent confidence intervals were reported for the 18 outcomes for which specific results from the original studies were reported. One small study414 was excluded from analyses because results were only reported for 10 subjects and were not based on intention-to-treat analyses.

For the 18 outcomes reported, statistically significant differences favoring treatment were seen in individual studies only for the rate of brain volume atrophy by MRI (mean difference 0.21; 95 percent CI 0.14 to 0.27);386 a measure of global cognition, the CDR-Sum of boxes (mean difference 0.2; 95 percent CI 0.0 to 0.4);386 and the cognitive functions evaluated by the ADAS-Cog 13 (mean difference 1.9; 95 percent CI 0.5 to 3.3).415 After correcting for multiple comparisons with Bonferroni methods, only the difference in rate of brain atrophy remained statistically significant. Treatment discontinuation due to adverse events was significantly higher for intervention subjects, ranging from 21 to 24 percent, compared to 7 to 13 percent in control subjects.

We identified two additional RCTs comparing 1 year of treatment with donepezil to placebo.416,417 Doody and colleagues evaluated donepezil in subjects with amnestic MCI.416 Amnestic MCI is of particular interest because it progresses to AD more commonly than general MCI. This U.S.-based, multi-center, industry-sponsored study randomized 821 adults aged 45 to 90 to donepezil 5 mg daily for 6 weeks followed by 10 mg daily for 42 weeks, or placebo. Participants were mostly male (54 percent), mostly white (87 percent), and had a memory complaint corroborated by an informant, along with neuropsychological testing consistent with amnestic MCI (see the evidence table in Appendix B for details). Individuals with medical conditions (e.g., neurological, psychiatric) that could affect cognition, who had taken a cholinesterase inhibitor for > 1 month, or who were taking a concomitant anticonvulsant, anti-Parkinsonian drug, stimulant, or drug with anticholinergic or procholinergic effects, were excluded. The followup rate (60.8 percent of those randomized) was low, with fewer subjects in the intervention group completing 48-week followup. The primary efficacy measures were the ADAS-Cog (range 0 to 70), a measure of cognition, and the CDR-SB (range 0 to 18), a measure of global function. Investigators pre-specified statistically significant differences on both measures to conclude treatment benefit. At 48 weeks, intervention subjects showed greater improvement from baseline than did control subjects on the ADAS-Cog (mean difference −0.9; SE 0.37; p = 0.01), but no significant difference on the CDR-SB (mean difference not given). Of eight secondary measures, donepezil-treated patients showed statistically significant benefit on two. More subjects assigned to donepezil (n = 72, 18.4 percent) than placebo (n = 32, 8.3 percent) discontinued treatment due to adverse effects.

Yesavage et al.417 compared donepezil 5 mg daily for 6 weeks, and then 10 mg daily for 46 weeks, to placebo in 168 adults aged 55 to 90. At study weeks 13 and 14, all subjects received cognitive training consisting of 10 separate 2-hour sessions that taught visualization and mnemonic techniques. Participants were 65 years old on average, male (48 percent), in good general health, and had MCI (29 percent) or non-impaired cognitive functioning. Randomization and allocation concealment were adequate. Patients, providers, and outcome assessors were blind to intervention status, but followup rates were not reported. Analyses were conducted with random regression models using the intention-to-treat principle; funding was from the NIMH and VA.

For the primary cognitive outcomes (word list recall, name-face recall), there were no significant between-group differences at any of the three followup time points. Similarly, there were no significant between-group differences for secondary outcomes: symbol digit, digit span, quality of life, and functional status. More subjects treated with donepezil dropped out within the first 12 weeks (15 of 83 versus 6 of 85), or experienced muscle cramps (19 versus 1) or insomnia (18 versus 8; p < 0.05 for all comparisons).

In summary, a systematic review found six low- to medium-quality trials reporting the effects of cholinesterase inhibitors on cognition in subjects with MCI; three other studies have not reported outcomes. We identified two additional donepezil trials that did not show treatment benefit on the primary cognitive outcomes at 1 year, but did show greater dropouts with treatment. In aggregate, over 5000 subjects have participated in these trials, but no consistent positive effects have been demonstrated. Treatment discontinuations due to adverse effects are consistently higher in the cholinesterase inhibitor-treated groups.

Memantine. We did not identify any systematic reviews or primary studies that evaluated the effects of memantine on cognitive testing in subjects who were cognitively normal or had mild cognitive impairment.

Social, Economic, and Behavioral Factors

Social engagement. No good quality systematic reviews or RCTs were identified that evaluated a social engagement intervention to improve or maintain cognitive ability or function.

Cognitive engagement. Our review identified three reports from the Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE) trial that examined the effects of cognitive training on improving long-term cognitive performance.418–420 In the ACTIVE trial, participants were randomized to one of three cognitive treatment groups (memory training, reasoning training, or speed of processing training) or a no-contact control group. Participants in the treatment groups attended 10 sessions over a 5- to 6-week period. A randomly selected subsample of 60 percent of each treatment group received four sessions (over a 2- to 3-week period) of booster training at 11 months and then again at 35 months after the initial training sessions. Primary outcomes were performance on cognitive measures and functional performance in daily activities.

Individuals were recruited from senior housing, community centers, hospitals, and medical clinics in six cities in the United States. Participants had to be over age 65 years, living independently, and able to perform their activities of daily living (ADLs) independently. Excluded individuals had an MMSE score < 22 points; reported a diagnosis of AD; reported substantial functional decline; reported having a medical condition that could predispose them to severe functional decline or death; had severe loss of vision, hearing, or ability to communicate; had recently participated in another cognitive training study; or planned to move out of the area during the time course of the trial. A total of 2802 individuals were enrolled and appropriately randomized. Dependent variables were measured at baseline, immediately post-treatment, and then at 1, 2, and 5 years post-treatment. Eighty-nine percent of participants completed at least eight training sessions.

The mean age of the sample was 73.6 (SD 5.9) years. Comparisons of baseline characteristics among intervention and control groups were reported. Followup rates were 80 percent at 2 years and 63 percent at 5 years. Although participants who did not complete all 5 years of data collection were more likely to be older, male, have less education and more health problems, and have lower cognitive function on baseline measures, there were no significant interactions between treatment group and these variables.

Each intervention group showed improvement in the targeted cognitive ability compared with baseline, and the effect was still evident at 2 years post-intervention (memory: effect size 0.17; p < 0.001; reasoning: effect size 0.26; p < 0.001; speed of processing: effect size 0.87; p < 0.001).420 Booster training increased training gains in speed (p = 0.001) and reasoning (p = 0.001) interventions at both the 1-year and the 2-year followup. One of the outcome measures was an estimate of reliable change. At the 2-year followup, only the speed of processing training group showed marked differences in this outcome, with 79 percent of the booster speed processing group showing reliable change, compared to 65 percent of the no booster group and 37 percent of the control group. No training effects were observed on everyday functioning at the 2-year followup.

At the 5-year followup,418 each intervention group maintained positive effects on its specific targeted cognitive ability (memory: effect size 0.23 [99 percent CI 0.11 to 0.35]; reasoning: effect size 0.26 [99 percent CI 0.17 to 0.35]; speed of processing: effect size 0.76 [99 percent CI 0.62 to 0.90]). Booster training on the targeted ability produced additional improvement for reasoning performance (effect size 0.28; 99 percent CI 0.12 to 0.43) and for speed of processing performance (effect size 0.85; 99 percent CI 0.61 to 1.09). The booster training for the speed of processing group, but not for the other two groups, showed a significant effect on the performance-based functional measure of everyday speed of processing (effect size 0.30; 99 percent CI 0.08 to 0.52).
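The effect sizes above are standardized mean differences. For reference, a conventional pooled-SD effect size (Cohen's d) can be computed as sketched below; the group statistics are made up for illustration, and this generic formula is not the trial's covariance-adjusted estimate:

```python
import math

def cohens_d(mean_a, mean_b, sd_a, sd_b, n_a, n_b):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Hypothetical groups with equal SDs: a 3-point advantage against SD 10
# corresponds to d = 0.3, in the general range of the memory and
# reasoning effects reported above.
d = cohens_d(mean_a=53.0, mean_b=50.0, sd_a=10.0, sd_b=10.0, n_a=100, n_b=100)
```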

A third report on the ACTIVE trial419 assessed whether a subgroup of individuals classified as memory impaired showed as much improvement on cognitive measures after training as the remainder of the group who were not memory impaired. Individuals scoring ≥ 1.5 standard deviations below their expected score on a verbal memory test at baseline were considered to be memory impaired (n = 193). Results indicated that memory-impaired participants failed to benefit from memory training, but did show normal training gains after reasoning (effect size 0.28; p < 0.05) and speed training (effect size −0.76; p < 0.001). The study authors concluded that memory function mediates the response to some forms of cognitive training.

Our search did not identify any additional trials. In summary, one large cognitive training trial has shown modest long-term benefits from cognitive training delivered over a 5- to 6-week period, with subsequent periodic booster training. Overall, the smallest effect was on tasks of verbal declarative memory.

Physical activities. We identified one good quality systematic review examining the effect of physical activity interventions on cognitive change over time.48 This review identified 11 eligible RCTs whose participants met the following criteria: aged 55 or older, not demented from any cause, not recovering from surgery, and free of comorbidities that precluded participation in physical exercise programs. Acceptable physical activity interventions were any form of exercise of any intensity, duration, or frequency aimed at improving cardiorespiratory fitness. The studies identified in this review had followup periods of no more than 6 months, and the majority lasted 4 months or less. No studies were identified that examined the longer term effects of physical activity on cognition.

The authors of the review reported that 8 of the 11 eligible studies showed an improvement in at least one aspect of cognitive function, but the domains of cognitive function that improved were not the same in each study, and the majority of comparisons yielded no significant results. Thus, the review authors concluded that there were insufficient data to state that aerobic physical activity improves cognitive function.

Our own review identified one eligible RCT that examined the effect of physical activity on improving or maintaining long-term cognitive performance.421 Participants were randomized to an education and usual care group or to a 24-week home-based program of physical activity. The primary outcome was change in the Alzheimer Disease Assessment Scale-Cognitive Subscale (ADAS-Cog) scores over 18 months. Individuals were recruited from a number of sources, including two memory clinics and the general community, using advertisements in the local media in Perth, Australia. Participants had to be over age 50 years and had to have responded “yes” to the question, “Do you have any difficulty with your memory?” Excluded individuals had scores lower than 19 of 50 on the TICS-m (a score consistent with significant cognitive impairment or dementia); had a Geriatric Depression Scale-15 score of 6 or higher; reported drinking more than four standard units of alcohol a day; had a chronic mental illness, such as schizophrenia; had medical conditions likely to compromise survival, such as metastatic cancer, or to render them unable to engage in physical activity, such as severe cardiac failure; or had severe sensory impairment or lack of fluency in written or spoken English. Additional exclusion criteria were a diagnosis of dementia, an MMSE score < 24, a Clinical Dementia Rating ≥ 1, and inability to walk for 6 minutes without assistance.

The aim of the physical activity intervention was to have participants engage in moderate intensity physical activity for at least 150 minutes per week, to be completed in three 50-minute sessions. Participants recorded the details of their physical activities in a diary. To enhance compliance with the program, participants were also given a modified behavioral intervention package based on social cognitive theory. Participants in the usual care control group received educational material about memory loss, stress management, healthful diet, alcohol consumption, and smoking, but not about physical activity. Participants in the physical activity group were also offered these educational materials.

A total of 170 individuals were enrolled and appropriately randomized. Outcomes were measured at baseline, and then at 6, 12, and 18 months after baseline. A total of 81.2 percent of the participants completed the trial. Adherence to the prescribed physical activity over the 24 weeks was 78.2 percent. The mean age of the sample was 68.6 (8.7) years for the exercise group and 68.7 (8.5) years for the control group. Baseline characteristics were reported for the intervention and control groups, but statistical comparisons were not reported. The values reported in Table 1 of the manuscript suggest that the control group may have had a higher frequency of moderately intense physical activity at baseline, but without a statistical comparison this cannot be confirmed. Women were more likely than men to drop out in both groups, and those who dropped out had higher ADAS-Cog scores than those who remained in the trial.

At the 6-month point, the physical activity group showed a change of −0.26 (95 percent CI −0.89 to 0.54) points on the ADAS-Cog (lower scores indicate better performance), and the control group showed an increase of 1.04 (0.32 to 1.82) points on this measure. At the 18-month followup, the difference between the two groups had diminished, with the treatment group showing a change of −0.73 (−1.27 to 0.03) points on the ADAS-Cog, and the control group a change of −0.04 (−0.46 to 0.88) points. The repeated measures ANCOVA across the 6-, 12-, and 18-month followups showed significantly less decline in the intervention group (p = 0.04). Analyses of secondary outcomes showed differences on the delayed word list task, with the physical activity group showing an increase of 0.45 (0.03 to 0.87) points compared to the control group's increase of 0.38 (−0.01 to 0.77) points at the 6-month followup. This pattern of differences continued at the 18-month followup, with the physical activity group showing an increase of 0.76 (0.41 to 1.10) points, and the control group a change of −0.02 (−0.36 to 0.32) points (p = 0.02 for ANCOVA for repeated measures across the three time points). There were no statistically significant differences on the other cognitive measures. Similar differences were seen on the ADAS-Cog when the analyses were limited to individuals categorized as having mild cognitive impairment. When considering only those individuals who completed all assessments, the physical activity group showed more improvement or maintenance of cognition on the ADAS-Cog (p = 0.009 for ANCOVA for repeated measures across the three time points), the delayed word list (p = 0.01), and the Clinical Dementia Rating scale (CDR) (p = 0.003).

In summary, this RCT found a modest but positive effect of physical activity on one relatively comprehensive cognitive measure (the ADAS-Cog) and on a delayed recall task over an 18-month period, that is, 1 year post-intervention. The participants were individuals who confirmed having problems with their memory, and some in fact met criteria for a diagnosis of mild cognitive impairment, suggesting that the individuals were likely at increased risk for cognitive decline. Thus, relatively greater preservation of cognition associated with physical activity in this group may be particularly meaningful. Furthermore, the study authors noted that the effect associated with physical activity was comparable to or better than the results from some of the medication treatment trials.

Other leisure activities. We did not identify any good quality systematic reviews or RCTs that assessed the effects of non-cognitive, non-physical leisure activities for preserving cognitive ability.

Nicotine. We did not identify any good quality systematic reviews or RCTs that evaluated the effects of nicotine for preserving cognitive ability.

Key Question 5 – Relationships Between Factors Affecting Alzheimer’s Disease and Cognitive Decline

Key Question 5 is: What are the relationships between the factors that affect Alzheimer’s disease and the factors that affect cognitive decline?

Introduction

Concordance for factors affecting cognitive decline and Alzheimer’s disease has a number of potential implications. A consistent body of evidence increases our confidence in the observed association. It is also consistent with the proposed analytic framework that the symptoms of AD begin with insidious cognitive decline that progresses to more marked cognitive and functional impairment. Finding consistent evidence for cognitive decline and AD would reinforce the potential effectiveness of early interventions that could diminish the risk of both cognitive decline and AD. Discordant findings weaken our confidence in the association, but may simply reflect the heterogeneity of the etiology of cognitive decline; that is, cognitive decline may be due to normal aging mechanisms or the prodromal stage of other types of dementing disorders, such as vascular or frontal lobe dementia. To address this question, we used the results from Key Questions 1 through 4 to compare the evidence for the effects of each exposure on risk of AD and cognitive decline. For factors with both randomized controlled trial (RCT) and observational evidence, we first compared the consistency of findings across study designs for each outcome. RCTs are a stronger design than observational studies and were prioritized when there were high-quality studies that used robust outcome measures. When studies showed a consistent effect on risk that was in the same direction for both AD and cognitive decline, we judged the results concordant. For many factors, the available data are quite limited, and concordant evidence across outcomes should not necessarily be interpreted as a robust finding.

Nutritional and Dietary Factors

These factors include vitamins, diet composition, and ginkgo biloba. In Table 70 we summarize the number of studies and subjects and provide a qualitative summary of the association.

Table 70. Summary of evidence for association between nutritional factors and AD or cognitive decline.

Table 70

Summary of evidence for association between nutritional factors and AD or cognitive decline.

Concordant evidence. Concordant evidence for these factors was as follows:

  • Increased risk with higher exposure: None.
  • No consistent association with risk: Beta carotene, flavonoids, ginkgo biloba, multivitamins, vitamin B12, vitamin C, and vitamin E.
  • Decreased risk with higher exposure: Mediterranean diet (limited evidence).

Discordant evidence. In observational studies, folic acid was associated with decreased risk for AD, but this association was not consistent for cognitive decline. RCTs did not show a protective effect for folic acid. In observational studies, omega-3 fatty acids were associated with less risk for cognitive decline but not a decreased risk for AD. No RCTs of at least a year’s duration have been conducted for omega-3 fatty acids.

Concordance not determined. For some factors, concordance was not determined because of the lack of evidence for both AD and cognitive decline, namely: diet composition, trace metals, vitamin B3 (niacin), and vitamin B6 (pyridoxine). For fruit and vegetable consumption, we made the judgment that the exposures were not comparable across outcomes and concordance could not be determined. Preliminary evidence suggests that saturated fat intake may be associated with AD and cognitive decline, but the evidence was considered too limited to judge concordance.

Medical Factors

Vascular, other medical, and psychological and emotional health. These factors include diabetes mellitus, metabolic syndrome, hypertension, hyperlipidemia, homocysteine, sleep apnea, obesity, traumatic brain injury, and depressive and anxiety disorders. In Table 71 we summarize the number of studies and subjects and provide a qualitative summary of the association.

Table 71. Summary of evidence for association between medical factors and AD or cognitive decline.

Table 71

Summary of evidence for association between medical factors and AD or cognitive decline.

Concordant evidence. Concordant evidence for these factors was as follows:

  • Increased risk with higher exposure: Diabetes mellitus, depressive disorders (although evidence less consistent for cognitive decline).
  • No consistent association with risk: Hypertension, homocysteine, obesity.

Discordant evidence. In observational studies, metabolic syndrome was associated with increased risk for cognitive decline in the young-old, but was not associated with risk for AD. Hyperlipidemia was associated with AD in mid- but not late-life and did not show a consistent association with cognitive decline.

Concordance not determined. For some factors, concordance was not determined because of the lack of evidence for both AD and cognitive decline, namely: anxiety disorders and traumatic brain injury. There were no studies for sleep apnea or resiliency.

Prescription and non-prescription medications. These factors include HMG-CoA reductase inhibitors (statins), antihypertensive medications, non-steroidal anti-inflammatory medications (NSAIDs), gonadal steroids (estrogens, raloxifene, dehydroepiandrosterone), and cholinesterase inhibitors. In Table 72 we summarize the number of studies and subjects and provide a qualitative summary of the association.

Table 72. Summary of evidence for association between medications and AD or cognitive decline.

Table 72

Summary of evidence for association between medications and AD or cognitive decline.

Concordant evidence. Concordant evidence for these factors was as follows:

  • Increased risk with higher exposure: None
  • No consistent association with risk or rate of cognitive decline: Cholinesterase inhibitors, estrogens

Discordant evidence. Observational studies suggest that statins decrease risk for AD, but observational and trial data do not show a consistent benefit for cognitive decline. Treatment with antihypertensive medication may decrease risk for AD, but no protective effect was found for cognitive decline. These studies are limited by the absence of trial data for AD, and by data from a trial for cognitive decline that used an outcome measure relatively insensitive to change. In observational studies, exposure to NSAIDs was possibly associated with decreased risk for AD and cognitive decline, but RCTs support an increased risk for AD and show no consistent association for cognitive decline.

Concordance not determined. For some factors, concordance was not determined because of the lack of evidence for both AD and cognitive decline, namely: raloxifene and dehydroepiandrosterone. There were no studies for memantine.

Social, Economic, and Behavioral Factors

These factors include early childhood factors, education and occupation, social engagement, cognitive engagement, physical activities, other leisure activities, smoking, and alcohol use. In Table 73 we summarize the number of studies and subjects and a qualitative summary of the association.

Table 73. Summary of evidence for association between social/economic/behavioral factors and AD or cognitive decline.

Table 73

Summary of evidence for association between social/economic/behavioral factors and AD or cognitive decline.

Concordant evidence. Concordant evidence for these factors was as follows:

  • Increased risk with higher exposure: Tobacco.
  • No consistent association with risk of AD or rate of cognitive decline: Early childhood socioeconomic environment (limited data).
  • Decreased risk with higher exposure:
    • Observational studies show that greater cognitive engagement decreases risk of AD and cognitive decline, but they are limited by imprecise and variable measures of exposure; the effect of cognitive training on cognitive decline has been evaluated in a single RCT.
    • Greater physical activity in late adult life is associated with decreased risk of AD and less cognitive decline, but conclusions are limited by imprecise measures of exposure, variable measures of cognitive decline, and a single small RCT.

Discordant evidence. Light to moderate alcohol intake is associated with a decreased risk of AD, and lower educational level is associated with an increased risk of AD; neither shows a consistent association with cognitive decline.

Concordance not determined. For some factors, concordance was not determined because of the lack of evidence for both AD and cognitive decline, namely: physical activity during mid-adult life. For marital status, data are insufficient to determine concordance. Occupation exposure, social support, and social network were defined too heterogeneously, both within and between the studies for AD and cognitive decline, to determine concordance. Other leisure activities are not consistently associated with AD but probably decrease the risk of cognitive decline; again, definition of exposure varied substantially between studies, leading us to conclude that the evidence is insufficient to determine concordance.

Toxic Environmental Exposures

For toxic environmental exposures, concordance was not determined because of the lack of evidence for both AD and cognitive decline.

Genetic Factors

Of the six genes associated with risk for AD and included in this review, only one (APOE e4) has been evaluated in cohort studies for risk of cognitive decline. These studies are generally concordant. The presence of APOE e4 increases the risk of AD and the risk of cognitive decline, especially on some memory tasks and tasks of perceptual speed.

Key Question 6 – Future Research Needs

Key Question 6 is: If recommendations for interventions cannot be made currently, what studies need to be done that could provide the quality and strength of evidence necessary to make such recommendations to individuals?

Introduction

To address this question, we first identified the factors included in the present review that are potential interventions. Only a subset of the factors considered meets this criterion. Childhood exposures, education, genetics, toxic exposures, and the medical conditions considered are not potential interventions, but rather potential targets for intervention. Components or intermediary measures of some of the risk factors evaluated (e.g., treatment for diabetes mellitus) may be appropriate for intervention, but these factors were not on the list of exposures to be considered in this review. This discussion focuses primarily on the factors reviewed that are potential interventions.

Based on a review of the quality, strength, consistency, and extent of evidence for each factor, the only factors with moderate quality evidence for an increased risk of AD were the APOE e4 allele, some non-steroidal anti-inflammatory drugs (NSAIDs), and conjugated equine estrogen with methyl progesterone. For cognitive decline, there was moderate quality evidence for an increased risk with some NSAIDs, and high quality evidence for a decreased risk with cognitive training. Chapter 5 describes the other factors that had low quality evidence supporting either an increased or decreased risk of AD or cognitive decline.

Given the number of studies that have investigated one or more of the factors on this lengthy list of putative risk or protective exposures for AD and cognitive decline, this much abbreviated list of factors with even moderate support may seem discouraging. But it is important to note that the factors on the list that lack even moderate supporting evidence may be associated with cognitive decline and AD; there just was not sufficient evidence to draw such a conclusion. For example, findings from cohort studies showed an association between statins and decreased AD risk, but there were no RCTs confirming this finding. The findings on the Mediterranean diet and other dietary components, such as folic acid, look intriguing, but the research is limited or too heterogeneous to draw firm conclusions. Many of these prior studies, including those reporting on factors with some supporting evidence, should be viewed as exploratory investigations that need to be followed up by well-designed hypothesis-testing observational studies or RCTs. The current literature does not provide adequate evidence to make recommendations for interventions.

We discuss below the characteristics unique to AD that present particular challenges when assessing the effect of given exposures on disease outcome. We also discuss some of the disease-related issues and the methodological challenges to assimilating the present studies in this area.

Protracted Course of Disease without Overt Clinical Symptoms

Issues. Neuropathological evidence suggests that the pathological changes associated with AD may begin as early as the 4th decade of life, but overt clinical symptoms do not present until years later, during the 7th, 8th, and 9th decades of life.20 Subtle cognitive changes may begin prior to age 60 among those with an APOE e4 allele,422 but these changes are difficult to detect in individuals. The age criterion for the present review was 50 years and older, but the majority of studies examined exposures well beyond mid-adult life, meaning that for some individuals (e.g., APOE e4 allele positive individuals) or some factors, the studies may have missed the critical exposure period. The extended sub-clinical prodromal phase of AD also means that exposures measured 1 to 2 years prior to onset of symptoms may conflate the risk factor exposure with prodromal AD.

Addressing the gap. Observational studies need to begin assessing exposures years prior to the expected onset of symptoms. The collection of exposure data should continue over an extended period of time because it is not known whether exposures with a protective effect or those with a detrimental effect may still be influential even after the pathological process has begun. It is also important to collect longitudinal exposure data to examine whether the timing of the exposure makes a difference, and whether changes in exposure over time alter the risk of cognitive decline. Prospectively collecting this exposure information for decades prior to onset of clinical disease is costly and logistically challenging. Realistically, intermediate or shorter-term outcomes may need to be integrated into such a life course approach to make the studies viable. Some of the initial work in this area may be able to use established registries. In fact, some research groups have already taken advantage of longitudinal registry databases, such as those from the Veterans Administration and health maintenance organizations, to collect more objective information on exposure variables spanning decades. The value of these registries can be optimized by linking the exposure data to a prospective and comprehensive evaluation for diagnosis of dementia. Multiple registries could contribute data to research consortia conducting planned prospective meta-analyses. This would be particularly useful for research questions that require very large sample sizes, such as assessing interaction effects and differential effects in sample subgroups. This approach would have the potential additional benefit of encouraging some standardization of data collection methods and instruments across studies.

The protracted course of the disease also means that early symptoms of AD may be mistakenly reported as risk factors for the disease when in fact they are correlates or symptoms of disease. An example of this is depression, which is often a symptom of AD, especially in the early stages of disease. It is difficult to separate depressive symptoms that are antecedents of AD from depression that is an early symptom of the disease.

The long prodromal phase of AD means not only that interventions should be implemented as early as possible, but also that there may be windows of time during which interventions are most effective, and these time periods may differ for different risk factors and interventions. Due to the long prodromal period of AD, RCTs also need to continue for extended periods of time. The current review identified few clinical trials that were even 1 year in length. If exposures throughout the lifespan are being examined for their role in AD and cognitive decline in late life, it is unrealistic to expect interventions of less than 1 year to change the path of the disease. However, if unaffected or mildly affected individuals are to be exposed to interventions for long periods of time, the interventions need to be low risk. For example, one intervention trial of non-steroidal anti-inflammatory (NSAID) medication (the Alzheimer’s Disease Anti-inflammatory Prevention Trial [ADAPT]) was discontinued due to concerns about serious side effects of the medication.379 It is important to note that the interventions do not need to be pharmacologic; low-risk interventions could involve lifestyle changes such as exercise and diet, or aggressively monitored treatment of existing conditions such as diabetes, high cholesterol, or hypertension. Another approach to limiting the period of exposure to an intervention and optimizing outcomes would be to enrich the sample with individuals at particularly high risk of progressing to AD. This would result in shorter followup times and smaller sample sizes required to evaluate the intervention. Some RCTs have already used this approach.372 Another efficient way to test interventions for potential risk factors such as diabetes mellitus is to include robust measures of cognition as a secondary outcome in trials designed to test multifactorial interventions for the disease of interest. The ACCORD (Action to Control Cardiovascular Risk in Diabetes) and ACCORD-MIND (Action to Control Cardiovascular Risk in Diabetes – Memory in Diabetes) trials are current examples of this strategy.423,424

Although long-term RCTs are the ideal approach, in many cases the barriers to implementing such studies may make them unrealistic. In these cases, alternative analytical approaches such as structural equation modeling, path analysis, and multi-level modeling applied to life course data may help to identify causal relations between exposure and disease.425 In addition, alternative designs such as randomized encouragement designs and non-random quantitative assignment of treatment designs are options which may be more feasible and still allow one to make causal inferences.426

Long-term studies may not only be unrealistic, but they also create their own issues, such as attrition for numerous reasons (e.g., mortality, refusal), which need to be carefully considered in the planning stages. Those who continue to participate throughout the course of the study may be younger, healthier, and of higher socioeconomic status than those who discontinue participation.427 This may create selectivity in the sample over time, another issue that should be addressed in the planning stages of the study. Although evidence-based approaches to decrease attrition are not well established, a recent systematic review found that studies using multiple strategies, such as community involvement, frequently updating participant contact information, financial incentives, and minimizing participant burden, were associated with less attrition.428

Lack of Validation of Exposure Measures

Issues. There are a number of issues regarding the measurement of exposures. Large cohort studies often rely on self-reported information from questionnaires that briefly assess a range of exposures. Typically any given exposure is assessed with just a few questions. Often responses to questions are then combined post hoc to create exposure variables that were never intended when the questionnaire was designed. Often the derived exposure variables have not been formally assessed for construct validity; that is, do the variables measure what they say they do? For example, among the studies cited in this review, there was a good deal of overlap among the activities categorized as cognitive, physical, and leisure. Validation for the categorization was not provided. Another example that raised questions about construct validity was that some studies interpreted exposure on a single item to have broader meaning. For example, the variable “being married” was used to indicate more social support, even though the benefits of marriage are multi-dimensional. Another issue related to construct validity is that exposure variables may actually serve as surrogates for other variables. An example of this is that education may be a surrogate for premorbid intellect or cumulative advantage throughout life. The association between a greater number of years of education and lower risk of AD may also reflect an insensitivity of diagnostic methods to identify impairment among those with high education. Identifying an association between a surrogate factor and disease outcome is an important first step, but prior to implementing RCTs or interventions based on these findings, the underlying risk factor needs to be identified.

Another issue is the imprecision of the measurement of exposure in observational studies. One example is the accuracy of self-reported information on food intake and the conversion of this information to actual nutritional components. In addition, it is unclear whether nutrient intake directly equates to in-vivo nutrient levels. The studies reporting validation analyses indicate a limited correlation between responses on the food frequency questionnaire and 24-hour records of food intake. Another example of imprecise exposure data is the variability in the type, duration, and frequency of exercise. It is difficult to retrospectively assess exercise activity over decades of exposure when there may be periods of regular activity followed by no activity.

Yet another issue of measurement is that studies ordinarily investigate a single exposure, but many of the exposures of interest are likely inter-related. This is particularly true for nutrition, as it is unrealistic to consider single nutrients in isolation. In addition, many of the exposures of interest are behaviors that commonly co-occur in individuals aiming to maintain a healthy lifestyle; measuring any single exposure among the numerous healthy behaviors would lead to inaccurate conclusions.

Another issue relates to how exposure is defined. It was often not clear whether exposure levels were determined a priori and whether they were linked to biological rationale, clinical relevance, or informed by prior studies. For example, definitions of hypertension and nutritional intake levels varied across studies or were defined by proportions of the available data. In addition, categorizing the exposure based on distributional properties (e.g., quartiles) may decrease the power to detect an association. To assist in interpreting the results, the reader will need to know whether the analysis was exploratory, with multiple definitions of exposures being tested, or whether the analysis addressed a specific hypothesis, with the exposure level being predetermined as part of the hypothesis.
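The loss of power from categorizing a continuous exposure can be illustrated with a small simulation: dichotomizing the exposure (here a median split, an even cruder version of quartile coding) attenuates the observed exposure-outcome correlation, and with it the power to detect the association. The data-generating model below (linear effect, normal noise) is an illustrative assumption, not data from any study reviewed here:

```python
import random
import statistics

random.seed(0)
n = 20000
exposure = [random.gauss(0.0, 1.0) for _ in range(n)]
outcome = [0.5 * x + random.gauss(0.0, 1.0) for x in exposure]

def corr(a, b):
    """Pearson correlation (population formula; n is large)."""
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)
    return cov / (statistics.pstdev(a) * statistics.pstdev(b))

# Categorize the exposure with a median split (cruder than quartiles).
med = statistics.median(exposure)
exposure_binary = [1.0 if x > med else 0.0 for x in exposure]

r_continuous = corr(exposure, outcome)          # theory: 0.5/sqrt(1.25) ≈ 0.447
r_categorized = corr(exposure_binary, outcome)  # attenuated by ≈ sqrt(2/pi)
```

The categorized correlation is reliably smaller than the continuous one, so a study analyzing the coarsened exposure needs a larger sample to detect the same underlying effect.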

One final issue, related to both exposure and outcome, is that power analyses were rarely reported in the included studies. Providing a priori power analyses for planned analyses, or post hoc calculations for exploratory analyses, would allow readers and systematic reviewers to better judge whether null findings were due to low power.
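As an illustration of the kind of a priori calculation that could be reported, the sketch below uses the power routines in the `statsmodels` library to compute the per-group sample size needed to detect a modest standardized effect in a simple two-group comparison; the design parameters are hypothetical assumptions, not taken from any study in this review.

```python
from statsmodels.stats.power import TTestIndPower

# Hypothetical design parameters -- illustrative only
effect_size = 0.3  # standardized mean difference (Cohen's d)
alpha = 0.05       # two-sided type I error rate
power = 0.80       # desired probability of detecting the effect

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=alpha, power=power,
                                   alternative='two-sided')
print(round(n_per_group))  # ~175 participants per group for these assumptions
```

Reporting a calculation like this alongside the planned analyses would let readers judge whether a null finding reflects an absent association or an underpowered study.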

Addressing the gap. There are multiple issues related to exposure variables, as noted above. A few basic steps would advance the field substantially in addressing these issues, although some of these steps are quite challenging. A first step should be to develop standard methods to measure exposures and to provide validation data showing that the measures are reliable and valid. Some areas of research have established “measures warehouses” to standardize measurement with the aim of advancing the research. Along similar lines, sponsors of research might establish a web-based resource for dementia studies that inventories exposure measures and data about their validity. Finally, more journal editors might require that authors follow standard guidelines for reporting observational studies (e.g., the Strengthening the Reporting of Observational Studies in Epidemiology [STROBE] guidelines).429

Sensitivity, Validation, and Homogeneity of Outcome Measures

Issues. For inclusion in this review, we required that studies on AD use standard diagnostic criteria. However, there was wide variation in how these diagnostic criteria were operationalized, particularly regarding the extent of neuropsychological testing used and whether information was collected from both a knowledgeable informant and the study participant to determine the diagnosis. In contrast to AD, diagnostic standards for mild impairment are still evolving, with most diagnostic nomenclature suggesting that cognitive decline leading to mild impairment has many causes. For example, there are multiple types of mild cognitive impairment (MCI) and cognitive impairment not demented (CIND). This heterogeneity in both the cognitive profile and, most likely, the underlying etiology may be part of the reason why only a few cognitive measures show significant change associated with a risk factor. This alone makes interpretation of the results difficult. Compounding the issue, different cognitive measures are used across studies, and associations between specific exposures and specific cognitive tests, or tests in the same cognitive domain, are not replicated across studies. The heterogeneity of cognitive measures has made synthesis of the literature on cognitive decline difficult.

Often studies reported that exposure to a factor was associated with statistically significant but very modest decline on only one or two cognitive measures. It is important to note that statistical significance does not equate to clinical significance.

Addressing the gap. Further work is required to reach consensus on which cognitive measures are best validated, most responsive to change, and cover the needed domains. If experts could agree on a limited battery of measures, synthesis of the literature would be more straightforward and pre-planned meta-analysis would be possible. If the same exposure and cognitive assessments were used in different studies, then datasets could be combined and patient-level meta-analysis could be performed. This is a more efficient way to examine subgroups than powering individual studies for this type of analysis. The National Institutes of Health (NIH) Toolbox, part of the NIH Neuroscience Blueprint Initiative, is a brief, comprehensive battery of assessment tools being developed to measure cognitive, motor, sensory, and emotional function over the full range of normal function.430 It is an example of a standardized assessment tool that has the potential to improve uniformity of assessment and make synthesis of results from multiple studies more meaningful.

With a more standard battery of measures, it may be possible to better identify individual measures or domains of cognition that predict progression to clinically significant cognitive impairment (e.g., MCI or AD). Performance on these measures could be used to identify individuals at greater risk of decline in the near term.

More work is needed to better characterize and validate the various subtypes of CIND and MCI to be able to identify specific cognitive measures or domains of cognition that predict progression to AD. These findings could be used to enrich intervention samples with individuals at highest risk of progressing to dementia.

Finally, more research is needed to determine the effect size necessary for clinical significance, and this information should be used when interpreting results. In addition, measurement of meaningful change needs to include assessment of practice effects.

Identifying Differential Effects of Risk Factors in Subgroups and Interactions

Issues. Due to lack of data, we were not able to address whether there were important subgroup differences or interactions in the association between factors and cognitive outcomes. Even when studies reported that an interaction effect was not statistically significant, they often did not provide the information needed to determine whether the study had statistical power to detect an interaction effect. Important subgroups in which differential risk and differential intervention effects are sometimes observed include those defined by sex, ethnicity, and specific genotypes.

Addressing the gap. Observational studies should be powered to examine differences between subgroups and interaction effects. Interactions may be present for specific individual characteristics or between exposure factors. Because the effects of exposures and interventions may depend on individual characteristics, assessing the sample as a whole may mask effects. The use of a standard assessment battery (as suggested above) would allow studies to be combined, so that no single study would need to be sufficiently powered to examine differences among multiple subgroups.

Publication Bias

Issues. Most large epidemiological studies of aging ask questions about many of the same exposures. Depression and diabetes are examples of conditions that are not only routinely inquired about, but that also show the most consistent association with AD and cognitive decline. It is therefore striking that many of the large cohort studies have not published reports on the association between depression or diabetes and AD. It is possible that these studies have investigated these exposures in their data and that the lack of publication means that they did not find a significant association. This suggests the potential for important publication bias.

Addressing the gap. One idea for addressing this gap would be to establish a registry of cohort studies that includes planned analyses. Although establishing a registry of cohort studies would be more challenging than creating registries of clinical trials, it would allow planned versus published results to be tracked to gauge publication bias. This would require the cooperation of journal editors and funding agencies to provide incentives for researchers to submit their data to the registry. In addition, statistical techniques for identifying publication bias in observational studies need to be developed and validated.

Determining the Cost and Benefit of Intervention

Issues. There is a lack of information on the overall effectiveness and the cost-effectiveness of the interventions. To date, the few RCTs conducted on the factors of interest here have not shown a positive effect of the intervention. However, once evidence becomes available to indicate the efficacy of an intervention, further research will be needed to determine the effectiveness of the intervention from multiple perspectives.

Addressing the gap. Demonstrating an association between an intervention and a cognitive outcome in an RCT provides an indication of efficacy, but that is just the first step in evaluating the benefit of an intervention. From that point, the effectiveness of the intervention will need to be determined on many levels. Typically, effectiveness research has focused on cognitive decline and decline in the patient's performance of daily activities, but research in pharmacoeconomics has shown that other outcomes should also be assessed when estimating the cost-benefit values of AD interventions.431 These include not only other patient-related outcomes, such as the presence and severity of neuropsychiatric and other behavioral symptoms, but also factors related to caregiver burden, such as the extent of care needed, whether providing care has required the caregiver to leave the workforce, and the impact of providing care on the caregiver's physical and mental health.

Some of the interventions suggested by this review (e.g., lipid-lowering agents) have been shown to have net benefit for other outcomes (e.g., cardiovascular outcomes). Evidence for a positive effect on cognition would simply be one more reason to use the intervention, but the additional cognitive benefit would also alter the cost-effectiveness ratio. In these situations, it will also be important to assess whether the threshold for effectiveness is the same for both outcomes (i.e., the cognitive outcome and the other outcome). There may be a threshold effect or curvilinear effect for the intervention (e.g., glucose control for diabetes mellitus), and these effects may differ for the two outcomes, which would influence recommendations for intervention intensity.