
Irwig L, Irwig J, Trevena L, et al. Smart Health Choices: Making Sense of Health Advice. London: Hammersmith Press; 2008.


Chapter 16: Is this a useful diagnostic test?

The next three chapters have been provided for those readers who really want to understand and learn some basic epidemiological skills. You may be a health consumer who has really found this book interesting and wants to go a bit further. You may be a health practitioner or practitioner in training and want to brush up on some skills in evidence-based practice. Whoever you are, if you are the sort of person who does not like numbers, you might want to skip over this part.

Sensitivity and specificity of a diagnostic test

  • Sensitivity indicates the probability that the test will accurately pick up disease when there truly is disease.
  • Specificity indicates the probability that the test will accurately detect ‘NO disease’ when the disease is truly absent.

To illustrate these, imagine I have a bag of toffees, some of which are liquorice flavoured (L) and some of which are not (NOT-L). L and NOT-L toffees have a slightly different shape so it’s easy for me (or so I believe) to feel which is which without looking. To see how accurate I am at detecting which are which, I try it out.

This is the result: of 100 L toffees, my hand correctly calls 80 of them L toffees. Of 100 NOT-L toffees, my hand correctly calls 90 of them NOT-L toffees.

In technical jargon, if I consider my hand as a diagnostic test, it has a sensitivity of 80 per cent (the proportion of L toffees that I correctly identified) and a specificity of 90 per cent (the proportion of NOT-L toffees that I correctly identified).
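If you like to check such arithmetic yourself, here is a minimal sketch in Python (the language and variable names are mine, purely for illustration; the counts are the ones from the toffee experiment above):

    # Sensitivity and specificity of "my hand" as a test,
    # using the toffee counts described in the text.
    true_positives = 80    # L toffees correctly called L
    false_negatives = 20   # L toffees wrongly called NOT-L
    true_negatives = 90    # NOT-L toffees correctly called NOT-L
    false_positives = 10   # NOT-L toffees wrongly called L

    sensitivity = true_positives / (true_positives + false_negatives)
    specificity = true_negatives / (true_negatives + false_positives)

    print(f"Sensitivity: {sensitivity:.0%}")  # 80%
    print(f"Specificity: {specificity:.0%}")  # 90%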

Pre-test and post-test probability

Now if I put my hand in a bag of toffees and say ‘This is a liquorice toffee’, what are my chances of being correct? Well, I cannot tell what my chances are of being right unless I know something about the existing probability of finding a liquorice toffee. This is referred to as the pre-test probability of an L toffee. For example, if there are no L toffees in that bag, all of those that I call L would be wrong calls. On the other hand, if the bag contains only L toffees, all those that I call L will be correct calls (and, of course, any ‘NOT-L’ calls would be wrong!). So even though I may know the sensitivity and specificity of my hand as a test, I need more information to interpret the test result.

Clearly the interpretation of the test depends on what percentage of the toffees in the bag were L or NOT-L before I put my hand in it. Put another way, it depends on the pre-test probability of a toffee being L. Now if I knew that, working out the result of my test would be easy. Here are a few numerical examples of the toffee test:

Suppose I have a bag of 400 toffees, of which 25 per cent (i.e. 100 toffees) are L. If I have been told that this is the case, I can apply the sensitivity and specificity of my hand to this set of information as shown in Table 16.1.

Table 16.1. My probability of correctly detecting L toffees: pre-test probability of 25 per cent.

                      Truly L   Truly NOT-L   Total
    My call: L             80            30     110
    My call: NOT-L         20           270     290
    Total                 100           300     400

In Table 16.1, the sensitivity is 80 per cent (80/100) and the specificity 90 per cent (270/300). I know this from applying my known sensitivity and specificity in detecting L and NOT-L toffees as described earlier. Now, if I put my hand in and detect a toffee as L, the probability of being correct is 73 per cent (80/110). If I think that it is NOT-L, of course, there is still a chance that it actually is L – a 7 per cent (20/290) chance to be precise.
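The same calculation can be written as a small Python function, sketched here for illustration (the function name post_test_probabilities is my own; it simply applies the sensitivity and specificity to a notional population split according to the pre-test probability):

    def post_test_probabilities(sensitivity, specificity, pre_test):
        """Return (probability of L if I call it L,
                   probability of L if I call it NOT-L)."""
        true_positive = sensitivity * pre_test                # truly L, called L
        false_negative = (1 - sensitivity) * pre_test         # truly L, called NOT-L
        true_negative = specificity * (1 - pre_test)          # truly NOT-L, called NOT-L
        false_positive = (1 - specificity) * (1 - pre_test)   # truly NOT-L, called L

        prob_if_positive = true_positive / (true_positive + false_positive)
        prob_if_negative = false_negative / (false_negative + true_negative)
        return prob_if_positive, prob_if_negative

    # Table 16.1: pre-test probability of 25 per cent.
    pos, neg = post_test_probabilities(0.80, 0.90, 0.25)
    print(f"If I call it L:     {pos:.0%} chance it really is L")  # about 73%
    print(f"If I call it NOT-L: {neg:.0%} chance it really is L")  # about 7%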

Now, let’s imagine that I am given another bag of 400 toffees and, this time, 75 per cent of them are L toffees instead of 25 per cent. Needless to say, the sensitivity and specificity of my hand (remember my hand is the diagnostic test) remain the same, so this time the table would be as shown in Table 16.2.

Table 16.2. My probability of correctly detecting L toffees: pre-test probability of 75 per cent.

                      Truly L   Truly NOT-L   Total
    My call: L            240            10     250
    My call: NOT-L         60            90     150
    Total                 300           100     400

Now, if I say I think that a toffee is L, I will be correct 96 per cent of the time and, if I identify it as NOT-L, there is a 40 per cent chance that it turns out to be L, which translates to a 60 per cent chance that I will be right in my call.
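Running the same sketch (the hypothetical post_test_probabilities function above) with a pre-test probability of 75 per cent reproduces these figures:

    pos, neg = post_test_probabilities(0.80, 0.90, 0.75)
    print(f"If I call it L:     {pos:.0%} chance it really is L")  # about 96%
    print(f"If I call it NOT-L: {neg:.0%} chance it really is L")  # about 40%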

In medical jargon, then, the PRE-TEST PROBABILITY is the probability that a toffee is L before I put my hand in the bag or, more appropriately, the probability that there really is disease before a diagnostic test is carried out. The POST-TEST PROBABILITY if the test turns out to be POSITIVE is the probability that a toffee really is L when I call it L, whereas the POST-TEST PROBABILITY if the test is NEGATIVE is the probability of it really being L when I judge it not to be. In terms of disease, the latter is the probability that disease exists even though the test detects no disease.

Table 16.3. Summary of all the above information.

                                                   Pre-test 25%   Pre-test 75%
    Probability it really is L when I call it L            73%            96%
    Probability it really is L when I call it NOT-L         7%            40%

In summary, the post-test probability of disease given a diagnostic test result depends on the sensitivity and specificity of the test AND on the pre-test probability. There is no such thing as being absolutely certain of what a test result means; it varies from one patient to another depending on his or her pre-test probability. For instance, a positive HIV test in an intravenous drug user means something different to a positive HIV test on a blood donation from, say, a nun. For the drug user the pre-test probability may be appreciable and a positive test is likely to indicate HIV. In the nun, on the other hand, the pre-test probability is close to zero and any positive test is likely to be a false positive.

Note that the post-test probabilities for negative and positive tests straddle the pre-test probability – that is, a positive test increases the probability of disease above the pre-test level, whereas a negative test decreases it to below the pre-test level. When you do choose to have a further diagnostic test, use of pre-test and post-test probabilities will tell you how the test results affect your chances of having the disease.
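The straddling can be seen by running the illustrative sketch above over a range of pre-test probabilities; in every case the post-test probability after a positive call sits above the pre-test probability, and the post-test probability after a negative call sits below it:

    for pre_test in (0.05, 0.25, 0.50, 0.75):
        pos, neg = post_test_probabilities(0.80, 0.90, pre_test)
        print(f"Pre-test {pre_test:.0%}: "
              f"post-test {pos:.0%} if positive, {neg:.0%} if negative")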

To give you some examples, screening mammography has a sensitivity of about 85 per cent in women over 50 and about 70 per cent in women aged between 40 and 49.[1] Specificity is about 95 per cent – that is, about 5 per cent of women without cancer will require some further investigation. Ultrasound has a sensitivity of about 85 per cent and a specificity of 90 per cent in detecting blockage in the arteries to the brain.[2] However, ultrasound has a sensitivity of only about 60 per cent and a specificity of 97 per cent for detecting clots in the veins in the legs after operations.[3] Of course, one can also assess the accuracy of symptoms and signs. For example, if you are admitted to hospital with possible appendicitis, the pain in the bottom right part of your abdomen has a sensitivity of 81 per cent and a specificity of 53 per cent.[4]
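As a purely illustrative calculation (the pre-test probability here is an assumption of mine, not a figure from this book), suppose a woman attending screening has a pre-test probability of breast cancer of about 1 per cent. Feeding the quoted sensitivity of 85 per cent and specificity of 95 per cent into the sketch above:

    # Assumed pre-test probability of 1 per cent -- illustrative only.
    pos, neg = post_test_probabilities(0.85, 0.95, 0.01)
    print(f"Positive mammogram: {pos:.0%} chance of cancer")   # roughly 15%
    print(f"Negative mammogram: {neg:.1%} chance of cancer")   # roughly 0.2%

Even with a positive result, most of the probability still lies with 'no cancer', which is the same false-positive point made about the nun's HIV test above.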

Like you, many practitioners find this complex. This type of information may not have been part of their medical training. Consequently, you are likely to find the answers about your pre-test and post-test probabilities less satisfactory than answers about the effects of treatments.

References

1. Kerlikowske K, Grady D, Sickles EA, Ernster V. Effect of age, breast density, and family history on the sensitivity of first screening mammography. JAMA. 1996;276(1):33–38. [PubMed: 8667536]
2. Blakeley DD, Oddone EZ, Hasselblad V, et al. Noninvasive carotid artery testing: a meta-analytic review. Annals of Internal Medicine. 1995;122(5):360–367. [PubMed: 7847648]
3. Wells PS, Lansing AW, Davidson BL, et al. Accuracy of ultrasound for the diagnosis of deep venous thrombosis in asymptomatic patients after orthopaedic surgery: a meta-analysis. Annals of Internal Medicine. 1995;122(1):47–53. [PubMed: 7985896]
4. Wagner JM, McKinney WP, Carpenter JC, et al. Does this patient have appendicitis? JAMA. 1996;276:1584–1594. [PubMed: 8918857]
Copyright © 2008, Professor Les Irwig, Judy Irwig, Dr Lyndal Trevena, Melissa Sweet.

All rights reserved. No part of this publication may be reproduced, stored in any retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, without the prior permission of the publishers and copyright holder or in the case of reprographic reproduction in accordance with the terms of licences issued by the appropriate Reprographic Rights Organisation.
