BMJ. 2000 Aug 19; 321(7259): 504.
PMCID: PMC1118396
Statistics Notes

Blinding in clinical trials and other studies

Simon J Day, manager, clinical biometrics, and Douglas G Altman, professor of statistics in medicine

Human behaviour is influenced by what we know or believe. In research there is a particular risk of expectation influencing findings, most obviously when there is some subjectivity in assessment, leading to biased results. Blinding (sometimes called masking) is used to try to eliminate such bias.

It is a tenet of randomised controlled trials that the treatment allocation for each patient is not revealed until the patient has irrevocably been entered into the trial, to avoid selection bias. This sort of blinding, better referred to as allocation concealment, will be discussed in a future statistics note. In controlled trials the term blinding, and in particular “double blind,” usually refers to keeping study participants, those involved with their management, and those collecting and analysing clinical data unaware of the assigned treatment, so that they should not be influenced by that knowledge.

The relevance of blinding will vary according to circumstances. Blinding patients to the treatment they have received in a controlled trial is particularly important when the response criteria are subjective, such as alleviation of pain, but less important for objective criteria, such as death. Similarly, medical staff caring for patients in a randomised trial should be blinded to treatment allocation to minimise possible bias in patient management and in assessing disease status. For example, the decision to withdraw a patient from a study or to adjust the dose of medication could easily be influenced by knowledge of which treatment group the patient has been assigned to.

In a double blind trial neither the patients nor their caregivers are aware of the treatment assignment. Blinding means more than just keeping the name of the treatment hidden. Patients may well see the treatment being given to patients in the other treatment group(s), and the appearance of the drug used in the study could give a clue to its identity. Differences in taste, smell, or mode of delivery may also influence efficacy, so these aspects should be identical for each treatment group. Even the colour of medication has been shown to influence efficacy.1

In studies comparing two active compounds, blinding is possible using the “double dummy” method. For example, if we want to compare two medicines, one presented as green tablets and one as pink capsules, we could also supply green placebo tablets and pink placebo capsules so that both groups of patients would take one green tablet and one pink capsule.
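The double dummy scheme can be sketched in code. This is an illustrative sketch only, assuming the green-tablet/pink-capsule example above; the function name and labels are hypothetical, not part of any trial software.

```python
# Hypothetical sketch of double-dummy dispensing: every patient receives one
# green tablet and one pink capsule, so appearance cannot reveal the arm.

def double_dummy_pack(arm: str) -> dict:
    """Return what a patient assigned to the given arm actually takes."""
    if arm == "A":  # active drug A is presented as the green tablet
        return {"green tablet": "active A", "pink capsule": "placebo"}
    if arm == "B":  # active drug B is presented as the pink capsule
        return {"green tablet": "placebo", "pink capsule": "active B"}
    raise ValueError(f"unknown arm: {arm}")

# Both arms receive identical-looking packs: one green tablet, one pink capsule,
# so neither patient nor caregiver can infer the allocation from appearance.
for arm in ("A", "B"):
    print(arm, sorted(double_dummy_pack(arm)))
```

The point of the sketch is that the set of dose forms handed out is identical in both arms; only the hidden contents differ.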

Blinding is certainly not always easy or possible. Single blind trials (where either only the investigator or only the patient is blind to the allocation) are sometimes unavoidable, as are open (non-blind) trials. In trials of different styles of patient management, surgical procedures, or alternative therapies, full blinding is often impossible.

In a double blind trial it is implicit that the assessment of patient outcome is done in ignorance of the treatment received. Such blind assessment of outcome can often also be achieved in trials which are open (non-blinded). For example, lesions can be photographed before and after treatment and assessed by someone not involved in running the trial. Indeed, blind assessment of outcome may be more important than blinding the administration of the treatment, especially when the outcome measure involves subjectivity. Despite the best intentions, some treatments have unintended effects that are so specific that their occurrence will inevitably identify the treatment received to both the patient and the medical staff. Blind assessment of outcome is especially useful when this is a risk.

In epidemiological studies it is preferable that the identification of “cases” as opposed to “controls” be kept secret while researchers are determining each subject's exposure to potential risk factors. In many such studies blinding is impossible because exposure can be discovered only by interviewing the study participants, who obviously know whether or not they are a case. The risk of differential recall of important disease-related events between cases and controls must then be recognised and if possible investigated.2 As a minimum the sensitivity of the results to differential recall should be considered. Blinded assessment of patient outcome may also be valuable in other epidemiological studies, such as cohort studies.

Blinding is important in other types of research too. For example, in studies to evaluate the performance of a diagnostic test those performing the test must be unaware of the true diagnosis. In studies to evaluate the reproducibility of a measurement technique the observers must be unaware of their previous measurement(s) on the same individual.

We have emphasised the risks of bias if adequate blinding is not used. This may seem to be challenging the integrity of researchers and patients, but bias associated with knowing the treatment is often subconscious. On average, randomised trials that have not used appropriate levels of blinding show larger treatment effects than blinded studies.3 Similarly, diagnostic test performance is overestimated when the reference test is interpreted with knowledge of the test result.4 Blinding makes it difficult to bias results intentionally or unintentionally and so helps ensure the credibility of study conclusions.
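A toy simulation can make the mechanism concrete. The numbers below are invented for illustration: both arms have the same true outcome distribution, but an unblinded assessor subconsciously shifts treated patients' subjective scores by a small amount, producing a spurious treatment effect that blinded assessment removes.

```python
import random

random.seed(0)
N = 10_000            # patients per arm (hypothetical)
ASSESSOR_BIAS = 0.5   # assumed subconscious shift applied to the treated group

def mean_score(treated: bool, blinded: bool) -> float:
    """Mean assessed score for one arm under a given blinding regime."""
    total = 0.0
    for _ in range(N):
        true_score = random.gauss(0, 1)  # identical distribution in both arms
        # The bias enters only when the assessor knows the allocation.
        bias = ASSESSOR_BIAS if (treated and not blinded) else 0.0
        total += true_score + bias
    return total / N

unblinded = mean_score(True, blinded=False) - mean_score(False, blinded=False)
blinded = mean_score(True, blinded=True) - mean_score(False, blinded=True)
print(f"unblinded estimate: {unblinded:.2f}")  # near 0.5, entirely bias
print(f"blinded estimate:   {blinded:.2f}")    # near 0, the true effect
```

The apparent effect under unblinded assessment is pure measurement bias, consistent with the empirical finding that inadequately blinded trials tend to report larger treatment effects.3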


1. De Craen AJM, Roos PJ, de Vries AL, Kleijnen J. Effect of colour of drugs: systematic review of perceived effect of drugs and their effectiveness. BMJ. 1996;313:1624–1626.
2. Barry D. Differential recall bias and spurious associations in case/control studies. Stat Med. 1996;15:2603–2616.
3. Schulz KF, Chalmers I, Hayes R, Altman DG. Empirical evidence of bias: dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA. 1995;273:408–412.
4. Lijmer JG, Mol BW, Heisterkamp S, Bonsel GJ, Prins MH, van der Meulen JH, et al. Empirical evidence of design-related bias in studies of diagnostic tests. JAMA. 1999;282:1061–1066.
