Radiology. Jul 2011; 260(1): 174–181.
Published online Jul 2011. doi: 10.1148/radiol.11101913
PMCID: PMC3121011

Improving Communication of Diagnostic Radiology Findings through Structured Reporting

Abstract

Purpose:

To compare the content, clarity, and clinical usefulness of conventional (ie, free-form) and structured radiology reports of body computed tomographic (CT) scans, as evaluated by referring physicians, attending radiologists, and radiology fellows at a tertiary care cancer center.

Materials and Methods:

The institutional review board approved the study as a quality improvement initiative; no written consent was required. Three radiologists, three radiology fellows, three surgeons, and two medical oncologists evaluated 330 randomly selected conventional and structured radiology reports of body CT scans. For nonradiologists, reports were randomly selected from patients with diagnoses relevant to the physician’s area of specialization. Each physician read 15 reports in each format and rated both the content and clarity of each report from 1 (very dissatisfied or very confusing) to 10 (very satisfied or very clear). By using a previously published radiology report grading scale, physicians graded each report’s effectiveness in advancing the patient’s position on the clinical spectrum. Mixed-effects models were used to test differences between report types.

Results:

Mean content satisfaction ratings were 7.61 (95% confidence interval [CI]: 7.12, 8.16) for conventional reports and 8.33 (95% CI: 7.82, 8.86) for structured reports, and the difference was significant (P < .0001). Mean clarity satisfaction ratings were 7.45 (95% CI: 6.89, 8.02) for conventional reports and 8.25 (95% CI: 7.68, 8.82) for structured reports, and the difference was significant (P < .0001). Grade ratings did not differ significantly between conventional and structured reports.

Conclusion:

Referring clinicians and radiologists found that structured reports had better content and greater clarity than conventional reports.

© RSNA, 2011

Supplemental material: http://radiology.rsna.org/lookup/suppl/doi:10.1148/radiol.11101913/-/DC1

Introduction

The complexity of medical imaging has increased dramatically over the past few decades, providing radiologists with an ever-larger number of images to interpret and more imaging modalities to compare. Radiologists and referring physicians are required to correlate and integrate ever-greater amounts of radiologic, clinical, and laboratory data. Remarkably, despite these changes, the style and format of radiology reports have generally remained unaltered. Most reports still contain free-form text dictated or typed by the radiologist, with an introductory section (summarizing the examination technique and clinical history), a main body (consisting of a paragraph or more describing the findings), and a brief overall impression section (1). Some radiologists view the writing of radiology reports as an art and resist attempts at standardization. However, given the growing complexity of the information radiologists are charged with interpreting, it is worth considering whether greater standardization could result in better communication, more-complete reports, and fewer misdiagnoses (2,3).

An alternative to free-form reporting is structured reporting, which involves the presentation of a standard set of concepts in a standard sequence (4). Structured reports use a template with standardized headings analogous to a checklist of necessary report elements (5). Preliminary information (6) suggests that such checklist-style reports are preferred by many referring clinicians. In addition, structured reports often use standardized language, such as the standardized lexicon called RadLex that is being developed by the Radiological Society of North America (7). The use of such standardized language not only reduces the chances of miscommunication, but also makes the reports more accessible for data mining and research (8).
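To make the data-mining point concrete, the following minimal sketch (Python) shows how a report with standardized headings can be split into machine-readable fields, something free-form prose does not reliably permit. The section names and the sample report are hypothetical illustrations, not the templates used in this study.

    import re

    # Standardized headings make structured reports machine-parseable.
    # These section names are illustrative, not the study's actual template.
    SECTION_PATTERN = re.compile(
        r"^(CLINICAL HISTORY|TECHNIQUE|FINDINGS|IMPRESSION):", re.MULTILINE)

    def split_sections(report_text):
        """Split a structured report into {heading: body} pairs."""
        sections = {}
        matches = list(SECTION_PATTERN.finditer(report_text))
        for i, match in enumerate(matches):
            start = match.end()
            end = matches[i + 1].start() if i + 1 < len(matches) else len(report_text)
            sections[match.group(1)] = report_text[start:end].strip()
        return sections

    report = ("CLINICAL HISTORY: Colon cancer, restaging.\n"
              "TECHNIQUE: CT chest, abdomen, and pelvis with IV contrast.\n"
              "FINDINGS: Liver: Unremarkable.\n"
              "IMPRESSION: No evidence of metastatic disease.")
    print(split_sections(report)["IMPRESSION"])  # -> No evidence of metastatic disease.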

Recognizing the advantages of structured reporting, the U.S. Food and Drug Administration mandated the use of the Breast Imaging Reporting and Data System for all mammography reports nearly 2 decades ago (9,10). The use of the Breast Imaging Reporting and Data System, which requires assignment of a specific diagnosis code with its associated clinical recommendation, has reduced variability in reporting and improved the clarity of communication between radiologists and referring physicians (11). Still, the broader radiologic community has been slow to adopt structured reporting even as other medical disciplines, including pathology, endoscopy, and surgery, have embraced it (12–15). In surgery, the use of structured reporting in operating room notes has been found to increase the amount and consistency of information conveyed (2); for instance, structured surgical reports were associated with a significant increase in the completeness of prespecified data and were available in the electronic medical record in a shorter period of time.

Few studies have investigated the value of structured reporting in areas of radiology outside of breast imaging. Therefore, we conducted this study to compare the content, clarity, and clinical usefulness of conventional (ie, free-form) and structured radiology reports of body computed tomographic (CT) scans, as evaluated by referring physicians, attending radiologists, and radiology fellows in a tertiary care cancer center.

Materials and Methods

The institutional review board of Memorial Sloan-Kettering Cancer Center approved the study as a quality improvement initiative; no written consent was required. The study was in full compliance with the Health Insurance Portability and Accountability Act guidelines.

Respondents

Respondents were physicians from our institution who agreed to participate in this quality improvement project. Radiology respondents were selected from the members of the diagnostic imaging group who routinely interpret body CT studies. Representative high-volume referring clinicians, including surgical and medical oncologists, were selected from interdisciplinary disease management teams that provide subspecialty care to patients with specific tumor types (ie, gastric, colorectal, pancreatic, hepatobiliary, cervical, uterine, and ovarian). Everyone who was asked to participate agreed to review the reports; no one refused. All respondents (n = 11) provided data on the number of years they had been in practice and the approximate number of radiology reports they reviewed per day. Three respondents were radiologists with 25 (D.M.P.), 7, and 2.5 years of practice experience who reported reviewing an average of five, 16, and 22 radiology reports per day, respectively. Three respondents were radiology fellows enrolled in a 1-year body imaging fellowship who reported reviewing an average of 15, 25, and 35 reports daily. Three respondents were surgeons specializing in oncologic surgery, hepatopancreaticobiliary oncologic surgery, and gynecologic oncologic surgery who had an average of 8.3 years in practice and reported reviewing an average of 10 radiology reports per day. The final two respondents were medical oncologists who had 20 and 4 years of experience in practice and reported reviewing an average of six radiology reports per day.

Selection and Assignment of Radiology Reports

Each respondent reviewed 15 conventional and 15 structured radiology reports of body CT examinations of the chest, abdomen, and pelvis or of the abdomen and pelvis, from which all patient identifiers had been removed. No reports were reviewed by more than one respondent; thus, a total of 330 radiology reports were reviewed. The reports were selected at random from a database containing all imaging studies performed in the radiology department. The conventional reports were from scans obtained between January 2009 and June 2009; the structured reports were from scans obtained between June 2009 (when structured reporting was first implemented throughout our radiology department) and September 2009. The six radiologists in the study reviewed randomly selected CT reports from the above date ranges for all tumor types. The five surgeons and oncologists reviewed randomly selected reports from the above date ranges from patients whose tumor types were related to their areas of subspecialization and covered within their respective disease management teams. All respondents had at least 6 months of experience reading both types of reports at our institution.

Structured Reporting Method

Before the conversion to structured reporting, a committee was formed within the radiology department and tasked with creating content standards and report templates for each imaging modality based on the needs of and input from 16 multidisciplinary disease management teams. A separate template was developed for each of the 205 most commonly performed radiologic examinations and procedures in our department. In CT, for example, 43 templates were created, corresponding to 43 different scan protocols (eg, CT chest; CT chest, abdomen, and pelvis; CT urogram; CT preoperative pancreas; CT triphasic liver). The overall structure of the templates was standardized so that all contained certain common elements (Fig 1a). Various standardized entries, with associated default results in brackets, were included in the Findings section of each template (Fig 1b). The default results are phrases that describe normal or unremarkable findings; they become part of the final report unless modified by the radiologist. An example of an actual structured report is shown in Appendix E1 (online).

Figure 1:
Structured reporting template. (a) Elements included in all CT templates in the order shown. (b) Subcategories (under “FINDINGS”) specific to the chest CT template, with default entries in brackets. DLP = dose-length product.
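To make the default-entry mechanism concrete, here is a minimal sketch in Python, modeled on Figure 1b; the subcategory names and default phrases are hypothetical examples, not our actual template content. A bracketed default survives into the final report unless the radiologist overrides it.

    # Each Findings subcategory carries a default phrase describing a normal
    # result; it becomes part of the final report unless explicitly modified.
    # Subcategory names and defaults here are hypothetical examples.
    CHEST_CT_FINDINGS_DEFAULTS = {
        "Lungs": "No suspicious pulmonary nodules.",
        "Pleura": "No pleural effusion.",
        "Mediastinum": "No mediastinal lymphadenopathy.",
    }

    def render_findings(defaults, dictated):
        """Merge dictated results over defaults; untouched items keep the default."""
        return "\n".join(
            f"{name}: {dictated.get(name, default_text)}"
            for name, default_text in defaults.items()
        )

    # Only the Lungs entry is dictated; the other two keep their defaults.
    print(render_findings(CHEST_CT_FINDINGS_DEFAULTS,
                          {"Lungs": "8-mm nodule in the right upper lobe."}))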

The structured report templates were entered into a commercially available speech recognition system (PowerScribe; Nuance Technology, Burlington, Mass). After saying “PowerScribe” and the name of a specific template, the radiologist can begin dictating results into the template.

Report Evaluation

On the basis of prior work addressing clinician satisfaction with radiology reports (6,16), respondents were asked the following two questions: (a) How satisfied are you with the content of this radiology report? (b) How satisfied are you with the clarity of this radiology report? To answer each question, respondents rated their degree of satisfaction on a scale ranging from 1 (very dissatisfied or very confusing) to 10 (very satisfied or very clear).

In addition, respondents used a previously developed radiology report grading scale (17) to categorize the effectiveness of each report in advancing the patient’s position on a clinical spectrum (POCS) consisting of signs and symptoms, differential diagnosis, diagnosis, and change in status. The report grading scale is defined as follows (17):

Grade I: Does not take the clinical picture at least one step forward on the POCS algorithm; does not include pertinent information in the description or the impression of the report.

Grade IIA: Does not take the clinical picture at least one step forward on the POCS algorithm; includes pertinent information in the description but not in the impression of the report.

Grade IIB: Does not take the clinical picture at least one step forward on the POCS algorithm; includes pertinent information in both the description and the impression of the report.

Grade III: Takes the clinical picture at least one step forward on the POCS algorithm; the findings are in the description but not in the impression of the report.

Grade IV: Takes the clinical picture at least one step forward on the POCS algorithm; the findings are in the description and summarized in the impression of the report.
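Because the scale reduces to two questions (does the report advance the POCS, and where does the pertinent information appear?), it can be restated compactly. The sketch below is our own illustrative encoding, using the numeric values (grade I = 1 through grade IV = 5) that are applied in the Results; combinations the published scale leaves undefined (eg, pertinent information in the impression only) are conservatively mapped to grade I here.

    # Illustrative encoding of the Lee et al grading scale (17), with the
    # numeric coding (I=1, IIA=2, IIB=3, III=4, IV=5) used in the Results.
    def pocs_grade(advances_pocs, in_description, in_impression):
        if advances_pocs:
            # Grades III and IV assume the findings appear in the description.
            return ("IV", 5) if in_impression else ("III", 4)
        if in_description and in_impression:
            return ("IIB", 3)
        if in_description:
            return ("IIA", 2)
        # Includes combinations the published scale leaves undefined.
        return ("I", 1)

    print(pocs_grade(True, True, False))   # -> ('III', 4)
    print(pocs_grade(False, True, True))   # -> ('IIB', 3)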

Statistical Analysis

Mixed-effects models were used (Y.L.) to test the differences between conventional and structured reports in regard to three outcomes: satisfaction with content, satisfaction with report clarity, and POCS grade ratings. In each of the three mixed-effects models, report type and practice type (radiologist vs nonradiologist) were entered as fixed effects. A respondent effect explaining individual respondent differences was fitted as the sole random effect. Intraclass correlations between 30 repeated ratings nested within each of the 11 respondents were thus accounted for appropriately. Additionally, histograms of the response distributions were plotted (A.R.B.) to enable further examination of patterns of response across report and practice type.
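The paper does not name its statistical software, so the following is only a sketch of an analogous analysis in Python with statsmodels, under hypothetical column names: report type and practice type enter as fixed effects (with their interaction), and a per-respondent random intercept absorbs the correlation among the 30 ratings nested within each respondent.

    import pandas as pd
    import matplotlib.pyplot as plt
    import statsmodels.formula.api as smf

    # Hypothetical data layout: one row per rating, with columns
    # respondent, report_type, practice_type, content_score.
    ratings = pd.read_csv("report_ratings.csv")

    # Fixed effects: report type, practice type, and their interaction.
    # Random effect: a per-respondent intercept for the nested ratings.
    model = smf.mixedlm("content_score ~ report_type * practice_type",
                        data=ratings, groups=ratings["respondent"])
    result = model.fit()
    print(result.summary())

    # Histograms of the response distributions by report type, as in the paper.
    ratings["content_score"].hist(by=ratings["report_type"])
    plt.show()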

Results

Satisfaction with Content

For satisfaction with content, conventional reports received a mean rating of 7.61 (95% confidence interval [CI]: 7.12, 8.16), and structured reports received a mean rating of 8.33 (95% CI: 7.82, 8.86); the difference was significant (P < .0001) (Table 1). Figure 2 shows the distribution of ratings for satisfaction with content for both conventional and structured reports. While conventional reports received a modal response score of 8, structured reports received visibly more ratings of 10 (very satisfied) (46 vs 15 instances). In addition, a small minority of reviewers gave ratings in the 2–3 range for conventional reports (three instances), whereas none of the reviewers gave these extremely low ratings to structured reports.

Table 1
Mixed-Effects Model and Adjusted Means for Satisfaction with Content
Figure 2:
Bar graph of distribution of content satisfaction ratings for conventional and structured reports.

Nonradiologists reported more satisfaction with the structured reports than did radiologists, but the interaction between report and practice type was not significant (P = .058) (Table 1). No radiologist gave a satisfaction rating lower than 4, whereas there were three instances of nonradiologists giving such low ratings to the conventional reports (Fig 3). Conventional reports received 15 ratings of 10 from radiologists but none from nonradiologists. For structured reports, however, both radiologists’ and nonradiologists’ ratings clustered around the higher end of the scale, with a nearly equal number of 10 ratings (24 for nonradiologists, 22 for radiologists) and no ratings below 4.

Figure 3:
Bar graphs of distribution of content satisfaction ratings for nonradiologists and radiologists. Top: conventional reports. Bottom: structured reports.

Satisfaction with Clarity

Mean clarity satisfaction ratings for conventional and structured reports were 7.45 (95% CI: 6.89, 8.02) and 8.25 (95% CI: 7.68, 8.82), respectively; the difference was significant (P < .0001) (Table 2). Respondents gave more clarity satisfaction ratings of 10 (very clear) for structured reports (44 ratings) than for conventional reports (13 ratings), as seen in Figure 4.

Table 2
Mixed-Effects Model and Adjusted Means for Satisfaction with Clarity
Figure 4:
Bar graph of distributions of clarity satisfaction ratings for conventional and structured reports.

Clarity scoring did not differ significantly between radiologists and nonradiologists (P = .462), and no interaction was present between report and practice type (P = .208) (Table 2). Conventional reports received 13 clarity satisfaction ratings of 10 from radiologists but none from nonradiologists (Fig 5). For structured reports, no clarity ratings by either radiologists or nonradiologists were below 4, and no ratings by radiologists were below 6.

Figure 5:
Bar graphs of distribution of clarity satisfaction ratings for nonradiologists and radiologists. Top: conventional reports. Bottom: structured reports.

Radiology Report Grading Scale

The POCS grades were similar for the two report types. For these calculations, grade I was assigned a value of 1; grade IIA, 2; grade IIB, 3; grade III, 4; and grade IV, 5. Conventional reports received a mean rating of 4.11 (95% CI: 3.67, 4.54) (approximately grade III), whereas structured reports received a mean rating of 4.27 (95% CI: 3.82, 4.70) (still close to grade III but slightly closer to grade IV than the mean rating of conventional reports). The difference was not significant (P = .146) (Table 3).

Table 3
Mixed-Effects Model and Adjusted Means for POCS Grades
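For orientation only, the sketch below computes a raw mean and a naive t-based 95% CI over coded grades; note that the intervals reported in Table 3 are model-adjusted means from the mixed-effects analysis, not raw means like these, and the sample data are invented.

    import numpy as np
    from scipy import stats

    # Naive (unadjusted) mean and t-based 95% CI over coded POCS grades
    # (I=1, IIA=2, IIB=3, III=4, IV=5); illustrative data, not study data.
    def mean_with_ci(coded_grades, confidence=0.95):
        arr = np.asarray(coded_grades, dtype=float)
        mean = arr.mean()
        half_width = stats.sem(arr) * stats.t.ppf((1 + confidence) / 2, len(arr) - 1)
        return mean, (mean - half_width, mean + half_width)

    print(mean_with_ci([5, 4, 5, 3, 4, 5, 4, 4]))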

Grade ratings did not differ significantly between radiologists and nonradiologists (P = .822), and no interaction was present between report and practice type (P = .745) (Table 3). While the majority of grades given by both radiologists and nonradiologists were greater than or equal to IIB, radiologists more frequently gave grade IV ratings and less frequently gave grade I ratings than did nonradiologists for both report types (Fig 6). Few grades of I or IIA were given by radiologists or nonradiologists for either report type (Fig 6).

Figure 6:
Bar graphs of distribution of POCS grades for nonradiologists and radiologists. Top: conventional reports. Bottom: structured reports.

Discussion

Face-to-face contact between radiologists and referring physicians has been diminishing with the growing use of picture archiving and communication systems. Thus, the quality of written radiologic reports is more important than ever and paramount for optimal patient care. To our knowledge, relatively little has been published regarding the effect of structured reporting on physician interpretation of radiology reports (5,1825). Our study evaluated a uniform group of reports of body CT examinations in the setting of a tertiary care cancer center.

For over 20 years, radiologists have been concerned about the quality of their radiology reports and referring clinicians’ perceptions of these reports. In one study (26), 32% of referring clinicians preferred the summary statements or impressions to be at the beginning of the report. Researchers in another study (20) found wide variability in the content of chest radiography reports and a large degree of uncertainty in the findings. In that study (20), eight content characteristics were evaluated in the radiology reports of 822 patients; overall, only 67% of the characteristics were included in the radiology reports. In our study, physicians displayed significantly greater satisfaction with the content and clarity of structured reports than with those of conventional reports. Since satisfaction with the content and clarity of conventional reporting was already high, the fact that a significant improvement could be achieved with structured reporting is remarkable. The improvement in satisfaction was greater for referring physicians than for radiologists. A likely explanation is that radiologists, who in clinical practice review patients’ prior reports with the prior images conveniently at hand, can readily interpret those images themselves and thus more easily tease out important information regardless of report format. Some referring physicians, however, may rely more on the written content of reports than on the actual images, and thus, their increased satisfaction with structured reports may be meaningful.

The overall grades given by using the report grading scale (17) did not differ significantly between conventional and structured reports. This scale is a useful metric for measuring clinical change, but it may be better used with more specific clinically relevant data and information, which we did not provide. Grades did not decrease with structured reporting, an effect that has been feared by some (17). The lack of a significant difference in the grade ratings may also be a result of the distribution of grade ratings, which was skewed toward positive ratings and, therefore, had quite a constricted range. The majority of physicians gave high grade ratings to the conventional reports, leaving little room to assign higher ratings to structured reports.

These findings differ from those of a study by Johnson et al (27), who found decreases in accuracy and completeness with structured reporting. However, the reports in that study were simulations (ie, not based on real-time image interpretation) dictated by resident trainees, and the grading for accuracy and completeness was performed by a single neuroradiologist. Naik et al (23) confirmed many of our findings in a different type of analysis: they first performed a retrospective audit of randomly selected reports and then administered a questionnaire to radiologists and referring clinicians containing three mock clinical scenarios and pairs of prose and itemized reports for each scenario. Their results showed a strong preference for computer-generated itemized reports among both referring clinicians and radiologists, with appearance, completeness, and structured format cited as the most important advantages of the structured reports. Furthermore, their initial audit of existing reports showed the inconsistencies that appear with the use of traditional prose reports. Prose reports may be confusing to some clinicians. For instance, in the United Kingdom, a group of general practitioners was found to prefer a detailed report in a tabulated or structured format (16). Interestingly, this same group of referring clinicians was found to be confused when the size of a structure was given without an explanation of its relevance (16).

Structured reporting is new to radiology and faces both human and technological challenges. Some radiologists complain that the structuring process interferes with their interpretation of the images, since attention diverted to interactions with the reporting system could potentially reduce diagnostic accuracy (5). In addition, some radiologists fear having to change behaviors they have been accustomed to since their training. It is interesting that, while radiologists have been slow to adopt structured reporting, their clinical referral base seems strongly in favor of this type of reporting. Langlotz and Siegel (28) point out that radiology may appear slow to adopt structured reporting relative to specialties such as cardiology and gastroenterology because those specialties address a more limited set of diseases and can therefore work from a manageable list of templates.

Structured reporting systems have generally not been built into picture archiving and communication system workstations, and the templates for structured reports need to be customized to meet the needs of the particular referring physicians in a given practice (eg, the needs of medical oncologists differ substantially from those of emergency room physicians) (29). In developing the templates for our structured reporting system, we sought input from many attending radiologists with expertise in different disease processes and/or imaging modalities. Equally importantly, we consulted with referring physicians to address their needs and concerns. We believe that this intense involvement of radiologists and referring physicians greatly facilitated user acceptance of structured reporting. Furthermore, the physicians who evaluated the reports were at least somewhat familiar with the output of this system, as it had been used at our institution for a few months before the study began. Therefore, we were evaluating the system in a relatively steady-state environment, as might be found in routine clinical practice.

Our study had a number of limitations. It was performed at a tertiary care cancer center. While this focus limits the general applicability of the results, it is likely that structured reporting systems will be implemented with many templates and vocabularies that are indeed highly focused and tailored to specific diseases and perhaps even disease states (eg, preoperative staging, postoperative follow-up, posttherapy assessment, routine posttreatment surveillance). Our sample size was relatively small, but that was unavoidable owing to the need for expert readers. We also tested only one structured reporting system, without matching of content, and different systems would likely have yielded different results; however, our system uses templates that could be incorporated into any structured reporting software package. Further, the same reports were not dictated in both the conventional and structured systems, which may have biased respondents toward the newly implemented system, the one to be used exclusively at our institution moving forward. Finally, we did not evaluate or compare the time spent by interpreting radiologists in producing conventional versus structured reports.

The advent of digital imaging, new imaging modalities, and image postprocessing has dramatically increased the amount of raw data available for radiologists to interpret. Throughout these changes, the convention of free-form reporting in radiology has prevailed, presenting a contrast to the growing movement toward standardization in medicine that has developed out of the desire for more efficient evidence-based care. Our results and those of prior studies indicate that structured reporting can provide the benefits of standardization (eg, clearer communication, increased accessibility of data for research) without compromising radiologists’ ability to communicate qualitative findings and opinions. Furthermore, a key feature of evidence-based medicine is the ability to assess quality, and structured reporting makes the evaluation of quality indicators for both radiologic studies and reports much easier, since individual elements measuring quality are more easily defined in a structured report. Developing user-friendly systems for structured reporting that do not diminish efficiency by imposing new distractions remains a major challenge.

Advance in Knowledge

  • As compared with conventional radiology reports, structured radiology reports that employed specialized templates developed with input from interdisciplinary clinical teams received significantly higher mean ratings for clarity (8.25 [95% confidence interval {CI}: 7.68, 8.82] vs 7.45 [95% CI: 6.89, 8.02]; P < .0001) and content (8.33 [95% CI: 7.82, 8.86] vs 7.61 [95% CI: 7.12, 8.16]; P < .0001) when evaluated by radiologists and referring physicians at a tertiary care cancer center.

Implication for Patient Care

  • Structured radiology reporting may improve patient care by increasing clarity and thoroughness in the communication of imaging findings to referring physicians.

Disclosures of Potential Conflicts of Interest: L.H.S. No potential conflicts of interest to disclose. D.M.P. Financial activities related to the present article: none to disclose. Financial activities not related to the present article: received royalties from Elsevier. Other relationships: none to disclose. A.R.B. No potential conflicts of interest to disclose. Y.L. No potential conflicts of interest to disclose. H.H. No potential conflicts of interest to disclose.

Acknowledgments

This project was supported in part by grant number NCI P30 CA08748, which provides partial support for the Behavioral Research Methods Core at Memorial Sloan-Kettering Cancer Center that was used in conducting this investigation. The National Cancer Institute did not play any role in the study design; in the collection, analysis, or interpretation of data; or in the decision to submit the manuscript for publication. The authors thank Jamie S. Ostroff, PhD, and Paul Krebs, PhD, for assisting with the study analyses, Ada Muellner, MS, for editing the manuscript, and Carolina Montalvo, BA, for preparing the figures. They also thank the physicians who participated in the study by reading radiology reports.

Received September 22, 2010; revision requested November 17; revision received February 3, 2011; accepted February 21; final version accepted March 2.

Funding: This research was supported by the National Cancer Institute (grant NCI P30 CA08748).

Abbreviations:

CI = confidence interval
POCS = position on a clinical spectrum

References

1. Gagliardi RA. The evolution of the X-ray report. AJR Am J Roentgenol 1995;164(2):501–502.
2. Park J, Pillarisetty VG, Brennan MF, et al. Electronic synoptic operative reporting: assessing the reliability and completeness of synoptic reports for pancreatic resection. J Am Coll Surg 2010;211(3):308–315.
3. Kahn CE Jr, Wang K, Bell DS. Structured entry of radiology reports using World Wide Web technology. RadioGraphics 1996;16(3):683–691.
4. Bell DS, Greenes RA. Evaluation of UltraSTAR: performance of a collaborative structured data entry system. Proc Annu Symp Comput Appl Med Care 1994:216–222.
5. Weiss DL, Langlotz CP. Structured reporting: patient care enhancement or productivity nightmare? Radiology 2008;249(3):739–747.
6. Plumb AA, Grieve FM, Khan SH. Survey of hospital clinicians’ preferences regarding the format of radiology reports. Clin Radiol 2009;64(4):386–394; 395–396.
7. Langlotz CP. RadLex: a new method for indexing online educational materials. RadioGraphics 2006;26(6):1595–1597.
8. Langlotz CP, Meininger L. Enhancing the expressiveness and usability of structured image reporting systems. Proc AMIA Symp 2000:467–471.
9. Kopans DB. Standardized mammography reporting. Radiol Clin North Am 1992;30(1):257–264.
10. Kopans DB, D’Orsi CJ, Adler DD, et al. Breast Imaging Reporting and Data System. Reston, Va: American College of Radiology, 1993.
11. Burnside ES, Sickles EA, Bassett LW, et al. The ACR BI-RADS experience: learning from history. J Am Coll Radiol 2009;6(12):851–860.
12. Ficaro EP, Lee BC, Kritzman JN, Corbett JR. Corridor4DM: the Michigan method for quantitative nuclear cardiology. J Nucl Cardiol 2007;14(4):455–465.
13. Korman LY, Delvaux M, Bidgood D. Structured reporting in gastrointestinal endoscopy: integration with DICOM and minimal standard terminology. Int J Med Inform 1998;48(1-3):201–206.
14. Leslie KO, Rosai J. Standardization of the surgical pathology report: formats, templates, and synoptic reports. Semin Diagn Pathol 1994;11(4):253–257.
15. Markel SF, Hirsch SD. Synoptic surgical pathology reporting. Hum Pathol 1991;22(8):807–810.
16. Grieve FM, Plumb AA, Khan SH. Radiology reporting: a general practitioner’s perspective. Br J Radiol 2010;83(985):17–22.
17. Lee R, Cohen MD, Jennings GS. A new method of evaluating the quality of radiology reports. Acad Radiol 2006;13(2):241–248.
18. Johnson AJ. Radiology report quality: a cohort study of point-and-click structured reporting versus conventional dictation. Acad Radiol 2002;9(9):1056–1061.
19. Reiner BI, Knight N, Siegel EL. Radiology reporting, past, present, and future: the radiologist’s perspective. J Am Coll Radiol 2007;4(5):313–319.
20. Sobel JL, Pearson ML, Gross K, et al. Information content and clarity of radiologists’ reports for chest radiography. Acad Radiol 1996;3(9):709–717.
21. Hobby JL, Tom BD, Todd C, Bearcroft PW, Dixon AK. Communication of doubt and certainty in radiological reports. Br J Radiol 2000;73(873):999–1001.
22. Sierra AE, Bisesi MA, Rosenbaum TL, Potchen EJ. Readability of the radiologic report. Invest Radiol 1992;27(3):236–239.
23. Naik SS, Hanbidge A, Wilson SR. Radiology reports: examining radiologist and clinician preferences regarding style and content. AJR Am J Roentgenol 2001;176(3):591–598.
24. Hussein R, Engelmann U, Schroeter A, Meinzer HP. DICOM structured reporting. II. Problems and challenges in implementation for PACS workstations. RadioGraphics 2004;24(3):897–909.
25. Morioka CA, Sinha U, Taira R, el-Saden S, Duckwiler G, Kangarloo H. Structured reporting in neuroradiology. Ann N Y Acad Sci 2002;980:259–266.
26. Clinger NJ, Hunter TB, Hillman BJ. Radiology reporting: attitudes of referring physicians. Radiology 1988;169(3):825–826.
27. Johnson AJ, Chen MY, Swan JS, Applegate KE, Littenberg B. Cohort study of structured reporting compared with conventional dictation. Radiology 2009;253(1):74–80.
28. Langlotz CP, Siegel E. Structured radiology reporting: are we there yet? Radiology 2009;253(1):23–25.
29. Reiner B, Siegel E. Radiology reporting: returning to our image-centric roots. AJR Am J Roentgenol 2006;187(5):1151–1155.
