
Fordham B, Sugavanam T, Edwards K, et al. Cognitive–behavioural therapy for a variety of conditions: an overview of systematic reviews and panoramic meta-analysis. Southampton (UK): NIHR Journals Library; 2021 Feb. (Health Technology Assessment, No. 25.9.)


Chapter 3 Review methods

The methods for the mapping stage of the overview are presented first, followed by the methods for the PMA. The protocol was registered with PROSPERO, the international prospective register of systematic reviews (number CRD42017078690), and published open access.16

Mapping

Inclusion and exclusion criteria for the mapping

Types of reviews

We included all reviews that reported RCTs if they met four of the five methodological criteria outlined by the Centre for Reviews and Dissemination (CRD) at the University of York as part of the Database of Abstracts of Reviews of Effects (DARE).17 DARE guidance was consulted when applying the criteria:

  1. Were inclusion/exclusion criteria reported?
  2. Was the search adequate? (Databases stated, more than one database searched or one database plus checking references, hand-searching, contact with researchers, citation searching, internet searching.)
  3. Were the included studies synthesised?
  4. Was the quality of the included studies assessed?
  5. Are sufficient details about the individual studies presented? (Details on the population/setting, intervention and a result for each included study.)

We included reviews of RCTs comparing CBT with an active or non-active comparator. Reviews containing randomised and non-randomised studies were included only if RCT data were summarised separately. We excluded reviews based on any other study designs (e.g. quasi-randomised, non-randomised).

Type of health condition

The ICD-11 classifies mental and physical diseases, disorders, injuries and other related health problems in a comprehensive and hierarchical fashion, and is used as a standard for both clinical and research purposes.13 The term condition will be used throughout this report to represent diseases, disorders and injuries. Participants with any conditions recognised by the ICD-11 or its nominal categorisation, and of any severity, were included. Non-health-related problems, such as procrastination, were excluded.

For physical conditions, we categorised reviews under the primary ICD-11 codes. For mental conditions, we used the secondary level of ICD-11 codes listed underneath the primary code of 06 mental, behavioural or neurodevelopmental disorders. A review was categorised according to its primary aims. For example, if a review examined the effectiveness of CBT to improve quality of life for people living with diabetes, then 05: Endocrine diseases was the condition category. However, if a review examined the effectiveness of CBT for improving depression in people living with diabetes, then the review was classified as 6A60-80 mood disorders, with comorbid 05 endocrine diseases. Box 1 shows all of the primary and secondary codes that could be considered in grouping reviews together.

BOX 1

Primary and secondary ICD-11 codes

Types of participants

We included participants of any age [children/adolescents (aged < 18 years), adults (aged 18–65 years) and older adults (aged > 65 years)], either sex and any ethnicity.

Types of health-care setting

We included reviews of RCTs that were conducted in any setting [e.g. primary care, secondary care, school/university, institutional (residential care)] and in any country.

Types of delivery timing

We included reviews of RCTs in which CBT was delivered preventatively, as standard responsive care or as relapse prevention.

Types of interventions

We included only reviews that evaluated CBT. Interventions were accepted as CBT when authors explicitly stated so in the title, abstract or keywords, or when the review defined the intervention as including at least one cognitive and one behavioural element.

If a trial intervention combined CBT with another therapy and the other therapy was used as a comparator condition (e.g. CBT plus pharmacotherapy compared with pharmacotherapy), then we included these trials. If a trial combined CBT with another therapy and this was compared with another type of comparator [e.g. CBT plus pharmacotherapy compared with wait-list control (WLC)], then we excluded these reviews because we could not extract the isolated effects of CBT.

All modes of CBT delivery were included and categorised into high or low intensity, based on the definitions by Roth and Pilling.1 High-intensity CBT was defined as face-to-face, individual or group therapy, delivered by a trained CBT therapist. Low-intensity was CBT delivered via media (internet, written, telephone), or was when face-to-face, individual or group CBT was administered by a non-CBT therapist (paraprofessional or layperson). If the review did not report the intensity of the intervention, it was assumed to be high-intensity CBT. We excluded all non-CBTs: cognitive therapy, behavioural therapy, third-wave CBT (e.g. acceptance and commitment therapy, mindfulness therapy), motivational interviewing, stress inoculation therapy, problem-solving therapy and stress management therapy.
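The intensity rule above can be sketched as a small decision function. This is an illustrative reconstruction, not code from the study, and the field names (`mode`, `therapist`) are hypothetical stand-ins for the extracted review data:

```python
def classify_intensity(mode, therapist=None):
    """Classify CBT delivery intensity using the Roth and Pilling-style
    rule described in the text. `mode` and `therapist` are hypothetical
    extraction fields, not names used in the study."""
    if mode in {"internet", "written", "telephone"}:  # media-delivered CBT
        return "low"
    if mode == "face-to-face" and therapist in {"paraprofessional", "layperson"}:
        return "low"  # face-to-face but administered by a non-CBT therapist
    # Face-to-face delivery by a trained CBT therapist, or intensity
    # not reported (assumed high, as in the text).
    return "high"
```

Note the default: reviews that did not report intensity fall through to "high", mirroring the assumption stated above.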

Types of comparators

We included reviews that compared CBT with one of the following: (1) a non-CBT-based active comparator (e.g. other psychological therapy, pharmacotherapy), (2) a non-active comparator [e.g. placebo, WLC, treatment as usual (TAU), standard care, no treatment] or (3) a CBT-based active comparator of different intensity [e.g. face-to-face CBT (high intensity) compared with internet-based CBT (low intensity)]. We excluded reviews that compared variations of high-intensity (e.g. group CBT compared with individual CBT) or low-intensity CBT (internet CBT compared with bibliotherapy CBT).

Types of outcomes

We included reviews if they reported data on at least one of the following outcomes: HRQoL, anxiety, depression or a condition-specific outcome (e.g. pain).

Length of follow-up

We included reviews with post-treatment, short-term (< 12 months) or long-term (≥ 12 months) follow-up data. If both short- and long-term follow-up data were reported, the synthesis of only the longest follow-up time point was included.

Search methods for identification of systematic reviews

We followed the principles of the Cochrane Handbook for Systematic Reviews of Interventions3 and recommendations for conducting overviews of systematic reviews9 to identify systematic reviews for the overview.

Information sources

The DARE (up to March 2015), Cochrane Database of Systematic Reviews, MEDLINE (via Ovid), EMBASE (via Ovid), PsycINFO (via Ovid), Cumulative Index to Nursing and Allied Health Literature (CINAHL) (via EBSCOhost), Child Development & Adolescent Studies (via EBSCOhost) and OpenGrey databases were searched on 25–27 April 2018 to identify relevant systematic reviews published between 1992 and 2018. An updated search was run on all the above databases on 30 January 2019, covering the period from April 2018 to 30 January 2019, excluding DARE, which is no longer updated. Owing to the volume of material being processed and the time constraints associated with this process, the reference lists of included reviews were not hand-searched for additional reviews. We did not contact authors for additional information to confirm inclusion/exclusion.

Search strategy

Comprehensive search strategies for each of the eight databases were designed by a senior research information specialist (SK). Each search strategy was developed iteratively, and a sensitivity check was performed in each database for the ability of each strategy to retrieve 36 key known papers (where indexed) that had been identified a priori (see Appendix 1). The included search terms were identified from these reviews and their associated database indexing terms, and with input from the expert consultation group (ECG). The search strategies utilised a combination of free text and controlled vocabulary search terms covering variations of ‘CBT’ searched in the title, abstract or keyword fields, and were combined with validated study-type filters for ‘systematic review’. The Scottish Intercollegiate Guidelines Network systematic review search filters available on the InterTASC (Technology Appraisal Support Collaboration) Information Specialists’ Sub-Group website18 were used to search the MEDLINE, EMBASE and CINAHL databases. The McMaster University Health Information Research Unit systematic review filter19 was modified and used in the PsycINFO search. The full search strategies for all the databases can be found in Appendix 2.

Restrictions

The scoping work identified that the earliest published review of CBT is from 1992.20 Therefore, we restricted our search to reviews published since 1992. The search was not restricted in terms of language, although we subsequently excluded non-English-language reviews (see Protocol revisions).

Data management

The database search results were exported into Endnote [Clarivate Analytics (formerly Thomson Reuters), Philadelphia, PA, USA] for deduplication and then exported into Covidence (Melbourne, VIC, Australia), a Cochrane technology platform designed and recommended for systematic review management,3 and a final deduplication check was performed. The full texts of reviews shortlisted for full-text analysis were also uploaded to and screened in Covidence. Data extraction was performed using Microsoft Excel® (Microsoft Corporation, Redmond, WA, USA).

Study selection

Two review authors (TS and BF) independently screened the titles and abstracts of all the references identified by the search strategy. The full texts of the selected reviews were obtained via online resources or through Bodleian Libraries. Reviews were screened for eligibility by two review authors (KE and TS), using the criteria stipulated in Inclusion and exclusion criteria for the mapping; disagreements were resolved by consensus or deliberation with a third reviewer (BF).

Data extraction

A bespoke data extraction form was developed. This form was piloted by two reviewers (BF and TS) using the sensitivity check papers recommended by the ECG (see Appendix 1).

We extracted the following information:

  • review identification details – author, date of publication, aim, number of included RCTs and number of participants, risk-of-bias tool used
  • participant details – primary condition (that which the intervention is primarily aiming to treat) and comorbid conditions, severity, age category (children and adolescents aged < 18 years, adults aged 18–65 years and older adults aged > 65 years), sex, ethnicity
  • setting – from where participants were recruited, treatment timing (e.g. preventative, early, standard, relapse prevention) and countries where the individual RCTs were conducted
  • intervention details – CBT intensity, and, if available, number, duration and frequency of sessions and intervention content description
  • comparator details – description of comparator interventions (active: CBT or non-CBT interventions; non-active: WLC, TAU, no treatment)
  • outcomes: what outcome was measured, follow-up period (short or long), the number of RCTs and number of participants summarised for this outcome, and whether or not a meta-analysis was conducted.

No numerical data were extracted at this stage. If a review had looked for one of our relevant outcomes but did not find any CBT RCTs, this was recorded. When available, we extracted information on patients’ perspectives of CBT (e.g. satisfaction ratings, acceptability, levels of adherence, dropout rates), any reported adverse events and economic evaluations. An example data extraction form can be found in Appendix 3.

Quality assessment of reviews

The methodological quality of all the included systematic reviews was independently assessed by two review authors (KE, TS or BC) using the A MeaSurement Tool to Assess systematic Reviews (AMSTAR)-2 checklist.4 This checklist assesses the quality of the review design, analysis and reporting, but does not account for the risk of bias of the included RCTs. Because of the overview design (i.e. the review was conducted at the review level), it was outside the scope of this study to return to the RCT level to perform risk-of-bias assessments. Discrepancies between reviewers were adjudicated by another reviewer (BF). We used the online checklist21 (see Appendix 4) to complete the 16 items scored either as ‘yes’, ‘no’ or ‘partial yes.’ This automatically generated a review rating of ‘critically low’, ‘low’, ‘moderate’ or ‘high’ quality. We stratified the reviews based on their AMSTAR-2 score into higher-quality reviews (those rated ‘high’ or ‘moderate’ on the AMSTAR-2 checklist) and lower-quality reviews (those rated as ‘low’ or ‘critically low’) (Beverly Shea, University of Ottawa, 25 March 2019, personal communication).

We calculated the agreement on the overall quality rating between the two main reviewers (KE and TS) using weighted kappa (κw) (interpreted as < 0.20, poor; 0.21–0.40, fair; 0.41–0.60, moderate; 0.61–0.80, good; and 0.81–1.00, very good).22
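For illustration, linear-weighted kappa between two raters' ordinal ratings (here the four AMSTAR-2 categories) can be computed with the standard formula; this is a generic textbook implementation, not code from the study:

```python
def weighted_kappa(r1, r2, categories):
    """Linear-weighted Cohen's kappa for two raters' ordinal ratings.
    `categories` lists the ordered rating levels (worst to best)."""
    k = len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    n = len(r1)
    # Disagreement weights proportional to distance between categories.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    # Observed joint proportions and marginal proportions.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(r1, r2):
        obs[idx[a]][idx[b]] += 1 / n
    p1 = [sum(obs[i][j] for j in range(k)) for i in range(k)]
    p2 = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Weighted observed vs chance-expected disagreement.
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * p1[i] * p2[j] for i in range(k) for j in range(k))
    return 1 - d_obs / d_exp
```

With perfect agreement the weighted observed disagreement is zero and κw = 1; the interpretation bands quoted above (poor to very good) are then applied to the resulting value.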

Independent, double data extraction was undertaken by two reviewers (KE, TS or BC). All data extraction forms and quality checklists were then cross-checked by a third reviewer (BF). All information from the data extraction sheets was entered into a review database, and graphic representation of quality was provided.

Visualisations mapping

The evidence from all the included systematic reviews was synthesised using the following types of charts, tables and maps.

Bubble chart

The evidence was grouped under the corresponding ICD-11 primary or secondary code. The volume of evidence, in terms of number of reviews, RCTs and participants, was then imported from Microsoft Excel into TIBCO Spotfire® (TIBCO Software Inc., Palo Alto, CA, USA) software23 to produce a bubble chart. The axes of the bubble charts were very large, ranging from 0 to 45,000 participants. To aid readability of the charts, we stratified reviews into those with < 1000 participants and those with ≥ 1000 participants.

Summary tables

The detailed description of each included review was represented in summary tables.

Gap maps

The condition and population and context characteristics extracted from the included reviews were populated in an Excel spreadsheet to identify any gaps in the evidence base, and were summarised.

Panoramic meta-analysis

Inclusion and exclusion criteria

From the reviews identified in the mapping stage, we selected the higher-quality reviews (rated ‘high’ or ‘moderate’ on AMSTAR-2) that contained quantitative data (either a single RCT or a meta-analytic effect estimate generated from pooling across multiple RCTs). We extracted these data for HRQoL, depression, anxiety and pain (the most commonly reported condition-specific outcome).

Reviews often contained multiple meta-analyses conducted on data from the same participants for a single outcome (e.g. CBT vs. active comparators, CBT vs. non-active comparators, symptom response, recovery, relapse, remission). To avoid double-counting studies, one meta-analysis per outcome per condition had to be chosen from each review. We used a predefined, step-by-step hierarchy in line with the review objectives. We included the meta-analysis (or single RCT) (1) with the longest follow-up time; (2) with the largest number of included RCTs; (3) that used measurement tools with the highest psychometric properties; (4) with the largest number of participants; (5) for which an active comparator was prioritised over non-active comparators; (6) for which continuous outcomes were prioritised over dichotomous outcomes; (7) for which, within dichotomous outcomes, the odds ratio (OR) was prioritised over the risk ratio (RR); (8) for which a random-effects meta-analysis was prioritised over fixed effects; and (9) for which self-report measures were prioritised over clinician-rated measures.
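One way to operationalise such a stepwise hierarchy is a lexicographic sort, where earlier criteria dominate and later criteria only break remaining ties. The dictionary keys below (`follow_up_months`, `psychometric_rank`, etc.) are hypothetical stand-ins for the extracted fields, not names from the study:

```python
def pick_meta_analysis(candidates):
    """Select one meta-analysis (dict of hypothetical fields) per
    outcome per condition by applying the nine criteria in order."""
    def key(c):
        return (
            -c["follow_up_months"],             # 1. longest follow-up first
            -c["n_rcts"],                       # 2. most included RCTs
            -c["psychometric_rank"],            # 3. best-validated tool
            -c["n_participants"],               # 4. most participants
            c["comparator"] != "active",        # 5. active comparator first
            c["outcome_type"] != "continuous",  # 6. continuous outcomes first
            c["effect_measure"] != "OR",        # 7. OR over RR
            c["model"] != "random",             # 8. random effects first
            c["rating"] != "self-report",       # 9. self-report first
        )
    return min(candidates, key=key)
```

Because the key is a tuple, Python compares criterion 1 first and only consults later elements on ties, which is exactly the behaviour a stepwise hierarchy requires.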

We then grouped the reviews that included quantitative data on each outcome (HRQoL, depression, anxiety and pain). Some of the reviews shared the same RCTs. To avoid double-counting evidence, we had to select one review to include in the PMA. We used a predefined selection process.15 If two or more reviews shared the same RCT(s), we included the review (1) with the longest follow-up time, (2) with the highest AMSTAR-2 rating, (3) that was the most recently published, (4) with the largest number of RCTs or (5) with the largest number of participants.

Data extraction

We extracted the following data: number of participants in total and per group, number of participants who achieved the desired outcome in the case of dichotomous outcomes, effect sizes, confidence intervals (CIs), direction of effect, heterogeneity measures and type of meta-analysis. An example data extraction form can be found in Appendix 5.

Data management

Data from the data extraction sheets were entered into a master database (Excel) and exported into Stata® versions 13.1 and 16.0 (StataCorp LP, College Station, TX, USA). The PMAs were conducted by four reviewers (BC, HL, KE and TS).

Data analyses

Heterogeneity tests

We conducted a PMA per outcome measure. Review data were entered into an ICD-11 condition subgroup analysis and we tested the within-condition statistical heterogeneity across the reviews. In parallel, we tested the heterogeneity across the ICD-11 condition subgroup categories.

Assumptions for pooling across conditions

We developed three a priori conditions that had to be met before we pooled effect estimates across ICD-11 condition categories:

  1. Intervention homogeneity: the ECG and investigators (see Expert consultation group including patient and public involvement) agreed that, although investigators often use condition-, population- and context-specific protocols, the principles of CBT (see Figure 1) are the same across all conditions. This allows us to make a judgement of intervention homogeneity and, provided the other criteria are met, to pool estimates across conditions.
  2. Design homogeneity: it is possible that meta-analytic estimates of effects would be moderated by differences in the reviews’ underlying designs and methodologies. The review estimates would also be influenced by the quality of the included RCTs. However, an RCT-level quality assessment was beyond the scope of this overview. Therefore, we used the proxy of assuming that the highest-quality reviews would be more likely to be unbiased in their methods and would probably report from the best-available evidence. We minimised review design (but not RCT design) variation by including only reviews that adhered to the CRD review criteria and were graded as being of high or moderate quality on the AMSTAR-2 tool (higher-quality reviews); hence, we could claim design homogeneity. We ran a sensitivity analysis (which included higher- and lower-quality review data) to ascertain if the variation in review quality affected the homogeneity of effect estimates across conditions.
  3. Statistical homogeneity: statistical heterogeneity was assessed using the I2 statistic; this is expressed as a percentage. A higher percentage is indicative of greater heterogeneity. I2 reflects the variation in effect estimates between reviews that is attributable to heterogeneity.24 There is no guidance regarding acceptable heterogeneity for PMAs. We used the guidance for meta-analysis heterogeneity,25 which suggests that I2 of < 75% is acceptable for pooling across the categories.
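The I2 calculation referred to above follows directly from Cochran's Q and the degrees of freedom; a minimal sketch of the standard formula:

```python
def i_squared(q, k):
    """I^2 (as a percentage) from Cochran's Q across k pooled
    estimates: the share of between-estimate variation beyond what
    chance alone would produce. Truncated at zero by convention."""
    if q <= 0 or k < 2:
        return 0.0
    df = k - 1
    return max(0.0, (q - df) / q) * 100.0
```

For example, Q = 20 across six estimates gives I2 = 75%, exactly the threshold quoted above for pooling across categories; anything above it would preclude a cross-condition pooled estimate.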

Panoramic meta-analysis method

The PMA was undertaken using a two-step frequentist random-effects model with the ‘metan’ command in Stata (versions 13.1 and 16). The two-step analysis consists of performing a conventional meta-analysis of a series of meta-analyses. The first step is undertaken by the original reviewers in obtaining a pooled treatment effect based on their included trials. Many of these will have been estimated via a random-effects meta-analysis, but some will have been analysed using a fixed-effects approach. Nonetheless, we assume that within-review variability has been appropriately allowed for. In the second step, the pooled estimates (with CIs) from each of the systematic reviews are combined into an overall (over all reviews) pooled estimate using the DerSimonian–Laird26 random-effects method. We obtained a pooled estimate within each condition and also across conditions, if the across-condition heterogeneity was < 75%. In the few cases where data required for the meta-analysis, such as standard deviations or CIs, were missing, we referred to the individual RCT paper to extract this information.
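The second pooling step can be sketched as follows, assuming each review contributes a point estimate and standard error on a common (e.g. SMD) scale. This illustrates the DerSimonian–Laird method generically; it is not a reproduction of the Stata ‘metan’ implementation:

```python
import math

def dl_pool(estimates, std_errs):
    """DerSimonian-Laird random-effects pool of review-level estimates.
    Returns (pooled estimate, 95% CI, tau^2)."""
    # Fixed-effect (inverse-variance) weights and pooled mean.
    w = [1 / se ** 2 for se in std_errs]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, estimates)) / sw
    # Cochran's Q and the method-of-moments between-review variance.
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, estimates))
    df = len(estimates) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights incorporate tau^2.
    w_re = [1 / (se ** 2 + tau2) for se in std_errs]
    pooled = sum(wi * y for wi, y in zip(w_re, estimates)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    ci = (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)
    return pooled, ci, tau2
```

When the review estimates agree exactly, Q falls below its degrees of freedom, τ² truncates to zero and the result collapses to the fixed-effect pool; heterogeneous estimates inflate τ² and hence widen the CI.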

Primary analysis

The primary analysis was conducted on continuous, end-point data extracted from higher-quality reviews (AMSTAR-2 rating of ‘moderate’ or ‘high’ quality) if there were more than two systematic reviews per comparison. The primary outcome was HRQoL and the secondary outcomes were depression, anxiety and pain.

We analysed the standardised mean differences (SMDs). When reviews reported values as mean differences, we converted the pooled estimate into a SMD using the standard deviation reported.27 We reported the 95% CIs and the prediction intervals. These offer a prediction of the distribution of SMDs from future reviews, perhaps in other conditions that have not been included in our overview.
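The prediction interval takes the usual Higgins-style form, pooled estimate ± t × √(τ² + SE²), where t is on k − 2 degrees of freedom for k reviews. A sketch, with the t critical value passed in rather than computed (obtaining it would normally require a statistics library):

```python
import math

def prediction_interval(pooled, se_pooled, tau2, t_crit):
    """Approximate 95% prediction interval for the effect expected in
    a new review: the pooled estimate plus/minus t_crit times the
    combined between-review and estimation uncertainty."""
    half_width = t_crit * math.sqrt(tau2 + se_pooled ** 2)
    return pooled - half_width, pooled + half_width
```

Because τ² enters under the square root alongside the pooled standard error, the prediction interval is always at least as wide as the CI, which is why it better conveys what a future review in an unstudied condition might find.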

Secondary analysis

Some reviews reported change scores only; we pooled these separately because of concerns that these may be biased as a result of regression to the mean.28 We performed separate PMAs for RRs and ORs.

We grouped reviews that directly compared high- with low-intensity CBT, irrespective of the condition, and analysed this group separately.

Transforming the standardised mean difference into a mean difference

To make meaningful interpretations, the overall pooled estimate (i.e. the SMD) for each outcome was transformed into a mean difference. The SMD was multiplied by the standard deviation of the most commonly used outcome measure (e.g. Beck Depression Inventory for depression) for each outcome.29 To find a suitable standard deviation for the measurement tool, we identified a higher-quality review, which included a low risk-of-bias RCT that had used the most common outcome measure. From that trial, we extracted the standard deviation of the outcome measure at baseline.
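The back-transformation itself is a single multiplication. For example, with a hypothetical reference standard deviation of 10 Beck Depression Inventory points, an SMD of −0.5 corresponds to a 5-point reduction:

```python
def smd_to_md(smd, sd_reference):
    """Re-express a pooled SMD on a familiar clinical scale by
    multiplying by the baseline SD of the most commonly used outcome
    measure (extracted from a low risk-of-bias trial, as described)."""
    return smd * sd_reference
```

The value 10 here is purely illustrative; the actual reference SD was extracted at baseline from a trial within a higher-quality review, as the text describes.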

Publication bias

When ≥ 10 reviews were included in the meta-analysis, publication bias and small-study effects were tested for using Egger’s regression intercept30 and a visual assessment of funnel plot asymmetry. We used a conservative threshold of p < 0.1 (with 95% CIs) to indicate asymmetry.
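Egger's test regresses the standardised effect (estimate/SE) on precision (1/SE); an intercept far from zero signals funnel-plot asymmetry. A minimal illustration of the intercept calculation only, with the significance test (which supplies the p < 0.1 decision) omitted:

```python
def egger_intercept(effects, std_errs):
    """Intercept of the Egger regression of standardised effect
    (estimate / SE) on precision (1 / SE), via simple least squares."""
    y = [e / s for e, s in zip(effects, std_errs)]  # standardised effects
    x = [1 / s for s in std_errs]                   # precisions
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return mean_y - slope * mean_x
```

In a perfectly symmetric funnel the standardised effect is proportional to precision, so the intercept is zero; small-study effects pull it away from zero.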

Subgroup analysis

Subgroup analyses were agreed a priori and were performed if four or more reviews were included in the meta-analysis across all conditions for each of the outcomes on the following: (1) CBT intensity (high/low intensity), (2) age (children and adolescents, adults, older adults), (3) duration of follow-up (short: < 12 months, long: ≥ 12 months) and (4) comparator group (active, non-active). The subgroups were separated using the ‘by()’ command in Stata.

We ran interaction tests between the subgroups using an exploratory meta-regression. The meta-regression used the method of moments estimate of between-study variance and the ‘metareg’ command in Stata.

If we identified any reviews that directly compared high-intensity CBT with low-intensity CBT, we grouped these reviews together by outcomes (HRQoL, depression, anxiety and pain). If their estimates were homogeneous, then we pooled across the reviews as an example of direct comparison.

Sensitivity analysis

To test whether or not the quality of the reviews moderated the effect estimate and/or heterogeneity, we ran a sensitivity analysis in which we included data from all reviews, irrespective of their AMSTAR-2 quality, for each outcome PMA. Then we compared the heterogeneity and pooled effect estimates between the sensitivity analyses and the primary analyses (which included only data from higher-quality reviews, i.e. those of ‘high’ and ‘moderate’ AMSTAR-2 quality).

We also conducted a sensitivity analysis in the HRQoL PMA for the two subscales of the Short Form questionnaire-12 items (SF-12)/Short Form questionnaire-36 items (SF-36) instruments. These instruments include a physical composite score and a mental composite score, but the tool does not pool them together. We prioritised the physical component scale (as recommended by the ECG; see Expert consultation group including patient and public involvement), then we re-ran the analyses using the mental component scale to determine if this changed the results.

Consistency of effect

We employed an ontological argument, which suggests that a lack of inconsistency across evidence suggests consistency of effect.31 We examined the effect estimates from each condition subgroup (pooled effect across all reviews conducted within one condition), subgroup analyses (e.g. active/non-active comparator groups), sensitivity analyses (including the additional condition subgroup analyses) and secondary analyses (e.g. pooled effects across dichotomous outcomes). If any analyses produced a statistically significant effect in favour of the group (comparator or CBT) that was contrary to the overall pooled effect estimate, then the evidence for the general effect was inconsistent across the included conditions. If we did not identify any contrary evidence across any of the conditions or subgroups, then we declared that the overall effect was consistent across all included conditions.
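The decision rule described above can be expressed as a simple check over subgroup confidence intervals. The sign convention here (e.g. a negative SMD favouring CBT) is an assumption for illustration, not a convention stated in the text:

```python
def effect_is_consistent(overall_sign, subgroup_cis):
    """Declare the overall effect consistent unless any subgroup,
    condition, sensitivity or secondary analysis is statistically
    significant in the direction opposite the overall pooled effect.
    `overall_sign` is +1 or -1; each CI is a (lower, upper) tuple."""
    for lo, hi in subgroup_cis:
        significant = lo > 0 or hi < 0     # CI excludes zero
        direction = 1 if lo > 0 else -1    # only meaningful if significant
        if significant and direction != overall_sign:
            return False                   # contrary evidence found
    return True
```

Non-significant subgroup results do not count against consistency under this rule; only a significant effect in the opposite direction does.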

Expert consultation group including patient and public involvement

We worked with a CBT ECG consisting of clinical academics (n = 6), research academics (n = 8) and patient representatives (n = 4). We met with this group directly on three occasions (January 2018, February 2019 and September 2019) and communicated via telephone/e-mail throughout the overview process. For each meeting, the group was sent a workbook of the work to date and a set of questions for the members to comment on. These were collected at the end of each meeting to ensure that all members’ contributions were collated and recognised. The group was not involved in any of the data extraction or quality assessment of the reviews. The ECG provided advice on methods and interpretation, but the final decisions were taken by the study investigators.

Expert consultation group meeting 1: January 2018

In the first meeting, we achieved consensus on the search strategy (terms and databases), the inclusion/exclusion criteria (population, intervention, comparison, outcomes, review design), data extraction form (data to extract) and analysis plan (avoid double-counting of RCTs, subgroup analyses).

Expert consultation group meeting 2: February 2019

The results of the data screening and extraction and the plan for the data synthesis were presented at this meeting. The ECG agreed the following actions:

  • Protocol amendment to include both pooled and single trial data in the PMAs.
  • Protocol amendment to include behavioural outcomes as condition-specific outcomes.
  • The ECG did not reach consensus on how the generalisation framework should be used (see Chapter 6). Beth Fordham, Jeremy Howick and Karla Hemming were to continue work on how this could be conceptualised for sharing with the ECG at the next meeting.

Expert consultation group meeting 3: September 2019

The preliminary results were presented at the third meeting. The ECG agreed with the mapping and PMA processes. We agreed to prioritise higher-quality reviews over any-quality reviews. However, again, we did not reach agreement on the generalisation framework. The ECG agreed that there was intervention homogeneity, but could not agree that CBT always effects change through the same mechanisms. It agreed that CBT is implemented via the core principles (see Figure 1), but felt it important to recognise that the mechanisms of change would differ for patients living with different conditions. The ECG suggested that a review of all the mechanistic evidence for CBT’s effectiveness would be required before assuming a common mechanism of action.

The patient and public involvement (PPI) representatives guided our visual representation of the data and reflected that the real-life mechanisms of CBT will ‘feel’ very different for each individual receiving the treatment.

Protocol revisions

We intended to translate non-English-language reviews. However, the available resources and time were insufficient for the number and complexity of the reviews found. On discussion with the National Institute for Health Research (NIHR) Health Technology Assessment (HTA) programme board, we amended the protocol to exclude non-English-language reviews at the full-text screening stage.

In the protocol, we selected three general outcomes (HRQoL, depression and anxiety) and suggested collecting condition-specific outcomes, such as psychosis and physical/physiological outcomes. Subsequently, we chose to present the three general outcomes plus the most commonly reported other outcome, which was pain.

We had not envisaged the problem of a review reporting the mental and the physical component subscales of the SF-12 HRQoL tool separately. After consulting the ECG, we selected the physical subscale to be included in the primary analysis, and conducted a sensitivity analysis using the mental subscale data to examine if that affected the PMA estimates and heterogeneity.

Other specific changes included to:

  • include both single RCT data and pooled meta-analysis data in a PMA
  • include behavioural outcomes as condition-specific outcomes
  • prioritise higher-quality reviews over any-quality reviews.

All the above changes were approved by the NIHR HTA programme board.

In response to comments from reviewers of the draft HTA monograph, we added prediction intervals to our primary panoramic meta-analyses.

Copyright © Queen’s Printer and Controller of HMSO 2021. This work was produced by Fordham et al. under the terms of a commissioning contract issued by the Secretary of State for Health and Social Care. This issue may be freely reproduced for the purposes of private research and study and extracts (or indeed, the full report) may be included in professional journals provided that suitable acknowledgement is made and the reproduction is not associated with any form of advertising. Applications for commercial reproduction should be addressed to: NIHR Journals Library, National Institute for Health Research, Evaluation, Trials and Studies Coordinating Centre, Alpha House, University of Southampton Science Park, Southampton SO16 7NS, UK.