J Digit Imaging. Apr 2011; 24(2): 256–270.
Published online Mar 23, 2010. doi:  10.1007/s10278-010-9285-6
PMCID: PMC3056962

Mapping LIDC, RadLex™, and Lung Nodule Image Features

Abstract

Ideally, an image should be reported and interpreted in the same way (e.g., the same perceived likelihood of malignancy) or similarly by any two radiologists; however, as much research has demonstrated, this is often not the case. Various efforts have attempted to reduce the variability in radiologists’ interpretations of images. The Lung Image Database Consortium (LIDC) has provided a database of lung nodule images and associated radiologist ratings to aid in the analysis of computer-aided tools. Likewise, the Radiological Society of North America has developed a radiological lexicon called RadLex. The goal of this paper is therefore to investigate the feasibility of associating LIDC characteristics and terminology with RadLex terminology. If matches between LIDC characteristics and RadLex terms are found, probabilistic models based on image features may be used as decision-based rules to predict whether an image or lung nodule can be characterized or classified with an associated RadLex term. This study found matches in RadLex for 25 (74%) of the 34 LIDC terms, suggesting that LIDC characteristics and associated rating terminology could be better conceptualized or reduced to produce even more matches with RadLex. Ultimately, the goal is to identify and establish a more standardized rating system and terminology to reduce the subjective variability between radiologist annotations. A standardized rating system can then be utilized by future researchers to develop automatic annotation models and tools for computer-aided decision systems.

Key words: Chest CT, digital imaging, image data, image interpretation, imaging informatics, lung, radiographic image interpretation, computer-assisted, reporting, RadLex, semantic, LIDC

Introduction

An image annotation is the explanatory or descriptive information about the pixel data of an image that is generated by a human or machine observer. Much of the time, image annotation is captured as free text in a radiology report. Currently, there is little standardization of the terms used in annotations, which moves us farther from semantic universality. The situation is further complicated when comparing human image observations to their algorithmic equivalents, that is, when relating image pixel-level data to the human perception of that image.

Ideally, one image should be described in the same way by multiple trained observers. There is research1–3 to suggest that this is often not the case (Fig. 1). Reeves et al.1 found very high interobserver variation in lung nodule boundaries marked by radiologists. Similarly, in our previous work,2 we showed high uncertainty and low levels of agreement between radiologist annotations when attempting to map semantic characteristics to lung nodule image content. Ochs et al.3 showed the importance of enforcing agreement between radiologists when creating a reference standard for computer-aided diagnosis (CAD) systems.

Fig 1
Example lung nodule boundaries marked by LIDC radiologists.

The content and usefulness of radiological reports in diagnosis have been criticized for the amount of variability in the medical terminology used in text reports.4 To remedy this variability in interpretations, lexicons such as the Unified Medical Language System (UMLS), ICD-9, and SNOMED have been developed in an effort to standardize terminology.4 However, Langlotz and Caldwell4 demonstrated that these lexicons are minimally successful in capturing the terms used to describe medical images in actual radiological reports: none of the lexicons achieved greater than 50% completeness for any test set of imaging terms evaluated.

On the other hand, the Breast Imaging Reporting and Data System (BI-RADS)5 provides a lexicon of standardized terminology, a reporting organization and assessment structure, a coding system, and a data collection structure for mammography. By specializing on a single organ and a narrow set of pathologies, BI-RADS has proven effective: McKay et al.6 demonstrated that it increases interobserver objectivity and yields moderately high reliability between radiologists’ interpretations and moderate accuracy of interpretations. The gap between image content and reliable subjective image interpretation is even harder to bridge for a larger set of pathologies, such as those in the lung, and in the face of the exponential growth of image data.

In 2005, the Radiological Society of North America (RSNA) recognized this gap and formed a project to develop RadLex™, a radiology lexicon.7 RadLex™ currently contains nearly 12,000 terms, many of which are not found in other lexicons. It is organized hierarchically as an ontology, with the primary relationship being “is_a.”
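An “is_a” hierarchy such as RadLex™’s can be pictured as a parent-pointer tree in which each term points to its parent and the path to the root is recovered by walking upward. The sketch below is a minimal illustration using a handful of terms mentioned later in this paper; the exact placement of intermediate nodes is an assumption, not the actual RadLex™ tree.

```python
# Minimal sketch of an "is_a" hierarchy like RadLex's: each term maps to
# its parent term, and a term's lineage is found by walking to the root.
# The terms and their placement are an illustrative fragment, not the
# real lexicon.
IS_A = {
    "imaging observation characteristic": None,  # root of this fragment
    "morphologic characteristic": "imaging observation characteristic",
    "shape": "morphologic characteristic",
    "round": "shape",
    "ovoid": "shape",
    "linear": "shape",
}

def path_to_root(term):
    """Return the chain of is_a ancestors from a term up to the root."""
    chain = [term]
    while IS_A.get(chain[-1]) is not None:
        chain.append(IS_A[chain[-1]])
    return chain
```

Locating a term’s ancestors this way is what makes the “location in the term tree” arguments later in the paper possible: two lexically identical terms with different lineages carry different meanings.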

In a separate endeavor, the National Cancer Institute has developed the Lung Image Database Consortium (LIDC) in an effort to provide an image database as a resource to aid in the analysis of CAD algorithms for detecting lung nodules in computed tomography (CT) scans.8 LIDC radiologists were given the task of identifying lung nodules with size greater than or equal to 3 mm and marking their associated boundaries. The radiologists also independently rated each CT image based on nine characteristics as deemed appropriate for lung nodule diagnostic descriptors by the LIDC committee. These characteristics include: calcification, internal structure, lobulation, malignancy, margin, sphericity, spiculation, subtlety, and texture (Table 1).

Table 1
LIDC Nodule Characteristics, Definitions, and Ratings

The development of LIDC has led to a large amount of research based on the image sets it has provided. One common theme these studies share is a discussion of radiologist agreement or variability in interpreting the images.9,10 Ochs et al.3 investigate the effects of radiologist agreement on the development of a “ground truth” and the subsequent impact of these effects on CAD performance. Reeves et al.1 investigate variability in radiologists’ demarcation of nodule boundaries. Likewise, differences in the performance of radiologists’ annotation methods have also been analyzed.11

In comparison to radiologist ratings and annotations, which are subject to high levels of variability, lung nodule image features provide a quantitative and objective way to capture information about lung nodule images.2,12 The difference between radiologists’ subjective, high-level interpretations and objective, low-level image features is known as the semantic gap.21 Figure 2 provides an example of an LIDC radiologist’s ratings for an image and the associated low-level image features captured from the same image based on nodule boundaries marked by the radiologists.

Fig 2
Example of radiologist ratings and associated low-level image features.

For example, in our previous work,2 we proposed ways to reduce the semantic gap in the medical imaging community by investigating various methods to develop computer-aided tools to be used as second readers by rating nodules based on automatically discovered image-semantic mappings. Using the LIDC dataset, we found that it is possible to develop probabilistic models of lung nodule image characteristics using image content. Specifically, we used low-level image features such as shape, size, intensity, and texture to develop probabilistic models for LIDC characteristics (lobulation, malignancy, margin, sphericity, spiculation, subtlety, and texture).
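As a rough illustration of the kind of image-semantic mapping described above, the sketch below estimates P(rating | binned feature) from labeled examples. The feature choice, binning, and training pairs are invented for the example; they are not the actual LIDC models from our previous work.

```python
from collections import Counter, defaultdict

# Toy illustration of learning P(rating | binned image feature) from
# (feature_value, rating) pairs. The data, binning, and thresholds are
# invented for the example, not taken from the LIDC models.
def bin_feature(value, edges=(0.33, 0.66)):
    """Discretize a normalized feature value into low/mid/high bins."""
    if value < edges[0]:
        return "low"
    if value < edges[1]:
        return "mid"
    return "high"

def learn_rating_model(samples):
    """samples: iterable of (feature_value, rating) -> {bin: {rating: prob}}."""
    counts = defaultdict(Counter)
    for value, rating in samples:
        counts[bin_feature(value)][rating] += 1
    model = {}
    for b, c in counts.items():
        total = sum(c.values())
        model[b] = {r: n / total for r, n in c.items()}
    return model

# Invented training pairs: (normalized intensity feature, subtlety rating)
training = [(0.1, 1), (0.2, 1), (0.3, 2), (0.5, 3), (0.6, 3), (0.9, 5)]
model = learn_rating_model(training)
```

A model of this shape, built per characteristic, is what can later be thresholded into the decision-based rules discussed in the paper.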

Using these semantic characteristics as a bridge to RadLex™’s ontology, a standardized annotation system that utilizes models learned from LIDC image features could be developed. Probabilistic models, such as those learned on the LIDC data in our previous work,2 may then serve as decision-based rules to predict whether a lung nodule can be characterized or classified with an associated RadLex™ term. As shown in Figure 3, we can then use RadLex™ terminology and predictive models based on imaging to continuously derive a standardized, semantically meaningful rating system. Given the uniqueness of the publicly available LIDC dataset, which provides both image data and radiologists’ semantic interpretations of these data, the goal of this work is to investigate the feasibility of associating LIDC rating terminology with RadLex™ terminology. We hypothesize that the more mappings are found between image rating terminology and RadLex™, the closer we are to providing a standardized system of image interpretation and diagnosis and, therefore, to bridging the semantic gap between image content and high-level radiologists’ interpretation.

Fig 3
Linking LIDC and RadLex™.

Methods

Figure 4 presents an overview of our methodology. We searched RadLex™ for each LIDC characteristic shown in Table 1 and its associated rating terms (for example, “sphericity” and its rating “round”).13 Each LIDC characteristic and its associated rating terms were first positioned within the RadLex™ hierarchy under related parent terms (Fig. 5). The current version of RadLex™ does not provide definitions for all of its terms, making it more difficult to accurately match terminology. In the absence of a perfect RadLex™ match, manual browsing, matching based on combined term searches (e.g., “fatty internal structure” instead of just “fatty”), and UMLS and SNOMED were used to identify terms. Results, therefore, are categorized in three ways: (1) exact RadLex™ matches, (2) synonymous and conceptual matches, and (3) manually searched matches. In the following sections, LIDC terms are denoted in italics while RadLex™ terms are shown in quotes.

While the methodology outlined above relies on manual matching techniques, with results later confirmed by an expert, automated tools do exist to find matches within RadLex™. For example, WordNet could be used to automate the provision of synonyms for terms; however, there remains the difficulty of finding terms that provide meaningful semantic matches.14 An additional technique would be to prepare an LIDC term database, download the RadLex™ term tree, and develop queries to automate the matching process.16 However, with a relatively small number of terms, direct querying of RadLex™ did not necessitate an automated matching process; given a larger sample of terms to match, an automated process would be necessary. Even then, matches would still need to be confirmed by an expert to ensure that the meanings of the image and rating terminology are consistent.
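The three-tier matching strategy described above can be sketched as a small lookup procedure: try an exact match, then synonym-based candidates, then a combined-term search. The tiny RadLex™ subset and synonym table below are illustrative stand-ins for the real lexicon and the UMLS/SNOMED synonym sources.

```python
# Sketch of the three-tier matching strategy: (1) exact, (2) synonym or
# conceptual, (3) combined-term search. The term set and synonym table
# are small illustrative stand-ins, not the real RadLex or UMLS.
RADLEX_TERMS = {"round", "ovoid", "linear", "poorly-defined margin",
                "circumscribed margin", "fat-containing", "solid"}
SYNONYMS = {"sharp": ["circumscribed margin", "sharply marginated"],
            "fat": ["fat-containing", "fatty"]}

def match_term(lidc_term, qualifier=None):
    """Return (matched_term, method) for an LIDC term, or (None, 'unmatched')."""
    if lidc_term in RADLEX_TERMS:                      # (1) exact match
        return lidc_term, "exact"
    for syn in SYNONYMS.get(lidc_term, []):            # (2) synonym/conceptual
        if syn in RADLEX_TERMS:
            return syn, "synonym"
    if qualifier:                                      # (3) combined-term search
        combined = f"{lidc_term} {qualifier}"
        if combined in RADLEX_TERMS:
            return combined, "combined"
    return None, "unmatched"
```

As the paper notes, any hit found this way still needs expert confirmation, since a lexical match does not guarantee that the term’s position in the RadLex™ tree carries the intended meaning.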

Fig 4
Diagram of proposed methodology in relation to related work.
Fig 5
Mapping organization based on RadLex™ term tree.

Results

Exact Matches

Sphericity

The LIDC characteristic sphericity does not appear within RadLex™. Its rating terms linear, ovoid, and round, however, all appear as “shapes” in RadLex™. The location of these terms in the “imaging observation characteristic” hierarchy suggests that they are being used consistently with the LIDC meaning (Fig. 6).

Fig 6
Example of exact matches for linear, ovoid and round.

Margin

For margin, LIDC uses an arbitrarily defined five-point scale that varies from 1, poorly defined, to 5, sharp. The term poorly defined is found in RadLex™ as “poorly-defined margin,” both a conceptual and exact match.

The term “sharp” also appears as an exact match within RadLex™, but as a generic “morphologic characteristic.” Rather, the RadLex™ term “circumscribed margin” is intended for use with respect to margins and, in fact, “sharply marginated” is present as a synonym for it.

Internal Structure

The LIDC characteristic internal structure contains the rating terms soft tissue, fluid, fat, and air. The term soft tissue is found in RadLex™ but refers to a “route of administration.” Conceptually, soft tissue is much more closely related to the composition modifier “solid.” This should be clarified in RadLex™, and a precise definition provided.

Fluid, itself, appears as an exact match, though as a generic RadLex™ “body substance.” RadLex™ provides several “composition modifiers,” such as “serous,” “hemorrhagic,” “mucinous,” “proteinaceous,” etc. that suggest liquid or fluid imaging characteristics, but a somewhat more generic, “fluid” or “liquid” is missing.

Fat, similarly, is found as a generic “body substance.” Conceptually, the LIDC usage is much closer to the RadLex™ “composition modifier,” “fat-containing.” “Fatty” which appears in RadLex™ as a “morphological characteristic” also appears to be conceptually matched to LIDC’s use of the term given its location in the RadLex™ tree, “imaging observation characteristic” → “morphological characteristic”→“fatty.”

The term “air” does not appear in RadLex™. “Air-containing” is, however, a “composition modifier” similar to “fat-containing” and most closely matches the LIDC usage.

Calcification

The LIDC characteristic calcification uses the rating terms popcorn, laminated, solid, non-central, central, and absent. The term calcification appears as an exact match in RadLex™ exactly where expected, “imaging observation”→“pathophysiologic process”→“degenerative disorder”→“deposition”→“mineral deposition”→“calcification.”

An exact match for solid is found located in “modifier”→“imaging observation modifier”→“composition modifier”→“solid.” These two terms together capture precisely the meaning intended in LIDC. Similarly, the term central is found in RadLex™ as a “position modifier.” Again, in conjunction with the “calcification” term, this matches the LIDC meaning.

Non-central is not found in RadLex™. This is a difficult term to place, as its meaning is vague. RadLex™ contains a number of “focality” imaging observation characteristics that would have been more useful in this setting; “scattered,” “patchy,” “multifocal,” “focal,” “diffuse,” “clustered,” and “coalescent” are all examples. In addition, there are “position” modifiers like “peripheral,” “superficial,” “superior,” “inferior,” “lateral,” and “medial” that are related to non-central but unrelated to its use as a calcification term.

Finally, absent is found in RadLex™ as a component of “definitely absent,” a synonym for the RadLex™ term “definitely not present.” Combined with the “calcification” term, these two terms would capture the LIDC meaning. It is interesting to note that doing so forces a choice for the calcification characteristic. The absence of the characteristic itself would indicate the lack of calcification though perhaps in a less uniform fashion.

Neither popcorn nor laminated appears in RadLex™. Popcorn does appear as “Coarse (popcorn-like) calcification” in BI-RADS. When the integration of BI-RADS into RadLex™ is completed in the coming year, several other terms for types of calcification will be standardized.

Laminated is more difficult. This term appears in neither BI-RADS nor RadLex™. If the intended meaning is closer to “lamellated,” then the RadLex™ “shape” term “plate-like” may be suitable.

Texture

Rating terms within this characteristic include nonsolid, part-solid/mixed, and solid. An exact match for solid was found located in “modifier”→“imaging observation modifier”→“composition modifier”→“solid,” which belongs to both the calcification and texture characteristics. So, because solid has a match in RadLex™, the texture rating nonsolid would follow as a match as well. It should also be noted that, since one term is shared by two different LIDC characteristics, mappings for probabilistic decision rules must specify which solid term is being predicted. Likewise, the term mixed, which belongs to the LIDC characteristic texture, appears in RadLex™ under “imaging observation characteristic”→“morphological characteristic”→“mixed”; however, as with other LIDC terms, we cannot say with complete confidence that this is an exact match due to its location in the RadLex™ term tree.
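One way to keep the shared solid mapping unambiguous, as noted above, is to key decision-rule mappings on (characteristic, rating term) pairs rather than on the bare term. The condensed RadLex™ paths below are illustrative abbreviations, not full lexicon paths.

```python
# Keying decision-rule mappings on (characteristic, term) pairs keeps the
# two uses of "solid" (calcification vs. texture) distinct. The RadLex
# path strings are condensed for illustration.
RULE_MAP = {
    ("calcification", "solid"): "composition modifier -> solid",
    ("texture", "solid"): "composition modifier -> solid",
    ("texture", "mixed"): "morphological characteristic -> mixed",
}

def radlex_target(characteristic, term):
    """Return the RadLex target for an (LIDC characteristic, rating term) pair."""
    return RULE_MAP.get((characteristic, term))
```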

Lobulation and Spiculation

Both lobulation and spiculation contain the rating terms marked and none. Exact matches for the characteristic terms lobulation and spiculation were not found within RadLex™. The term marked does appear as an exact match; however, it is listed as a synonym of “severe” under “modifier”→“imaging observation modifier”→“severity modifier” in RadLex™, which suggests a conceptual match. The rating term none was not found within RadLex™.

Non-matches

Malignancy did not yield an exact match within RadLex™. None of the rating terms highly unlikely, moderately unlikely, indeterminate, moderately suspicious, or highly suspicious appeared in RadLex™. These rating terms were searched for as whole phrases (using both words) as well as individual words. Similarly, the rating terms for subtlety (extremely subtle, moderately subtle, fairly subtle, moderately obvious, and obvious) were not present. Terms were searched for as single-word and two-word queries, and none of them were found as exact matches within RadLex™.

Synonymous and Conceptual Matches Using UMLS

According to RSNA’s RadLex™ documentation, available via the RadLex™ website, some of the terms used in RadLex™ appear in SNOMED. To identify additional search terms, synonyms were therefore located using UMLS (2007 AC release), which includes SNOMED terminology as well. LIDC terms were searched for within UMLS to identify possible synonyms, and these synonyms were then used as new search terms for RadLex™. Unfortunately, few new insights or matches were made using this method, as illustrated by Table 2. Most UMLS-listed synonyms were not found in RadLex™, and most associated definitions did not pertain to radiology or lung nodules. As such, although some of the UMLS-listed synonyms were found in RadLex™, none of the terms (unless mapped in the previous section) could be considered a conceptual match. For example, the terms calcification, central, soft tissue, fluid, fat, air, lobular, elevated, sharp, and spherical (denoted by parentheses in Table 2) are all synonyms of LIDC terms that were found in RadLex™, but their positions in the RadLex™ term tree suggest a conceptual mismatch. Conversely, calcified, solid, marked, moderate, border, margin, poorly (defined margin), linear, ovoid, and round, listed as UMLS synonyms, were found in RadLex™ with conceptual matches made between UMLS definitions and positions in RadLex™; these terms were all matched in the previous section without the aid of UMLS. However, UMLS does provide definitions for some of the LIDC terms, which lend additional support to the conceptual mismatches identified in “Exact Matches” and, later, to the manual matching presented in “Manually Searched Conceptual Matches.”

Table 2
LIDC Characteristics, UMLS Definitions, and RadLex™–UMLS Matches

Manually Searched Conceptual Matches

Margin Characteristics

The LIDC rating term “sharp,” as discussed in the previous sections, appeared to be an exact match but was a conceptual mismatch. However, upon further investigation of RadLex™ terms under the parent term “margin characteristic,” the term “circumscribed margin” appears as a child term. As shown in the RadLex™ viewer, a synonym term for “circumscribed margin” is “sharply-marginated.” The term circumscribed margin therefore provides a conceptual match to the LIDC characteristic margins rating term sharp.

Similarly, the LIDC characteristic lobulation does not appear as an exact match to any RadLex™ term. However, conceptually, lobulation is defined as a margin characteristic,17 and under the RadLex™ parent term “margin characteristic,” the term “lobulated margin” appears with a child term “microlobulated margin.” Another LIDC characteristic, spiculation, does not appear as an exact match in RadLex™ either. However, conceptually, spiculation is also defined as a margin characteristic.18 Within the RadLex™ term browser, the term “spiculated margin” appears under the RadLex™ parent term “margin characteristic.” Figure 7 provides screenshots of examples of the conceptual matches for sharp, lobulation, and spiculation in RadLex™.

Fig 7
Screenshots of margin characteristic related matches.

Specifically, Figure 7 illustrates the importance of term relationships within a RadLex™ header. It further supports the matching methods described in “Exact Matches” and “Synonymous and Conceptual Matches Using UMLS” in that RadLex™ child terms located within a parent header can be identified as either conceptually related or conceptually unrelated to a given LIDC term.

Internal Structure, Calcification, and Texture Revisited

Terms listed under “composition modifier” appear to be conceptually matched to LIDC characteristics related to both the internal structure and calcification of a nodule. Specifically, a conceptual match for fat was found under “modifier”→“imaging observation modifier”→“composition modifier”→“fat-containing.” In the same section of RadLex™, as seen in Figure 8, a match for air can be found under “gas-containing”→“air-containing.” A conceptual match for fluid was found under the same header under the term “serous,” defined as “of thin watery constitution” (Dictionary.com, retrieved March 25, 2008). Similarly, the texture rating term part-solid/mixed has a conceptual match under “modifier”→“imaging observation modifier”→“composition modifier”→“semisolid.” Likewise, as mentioned in “Calcification,” calcification was found in RadLex™ as an exact match; however, its location within the term tree suggests that there might be a better match, namely “calcified,” which appears under “modifier”→“imaging observation modifier”→“composition modifier”→“calcified.”

Fig 8
Screenshots of internal structure and texture related matches.

Malignancy and Subtlety

While no exact matches for ratings terms in malignancy and subtlety were found in RadLex™, a match for the calcification rating term absent leads to the discovery of the RadLex™ header “uncertainty.” Other RadLex™ terms listed under this header include “definitely not present,” “almost certainly absent,” “probably not present,” “possibly present,” “almost certainly present,” “probably present,” and “definitely present.” These terms suggest a possible conceptual match with ratings terms for both malignancy and subtlety. A proposed matching schema is presented in Table 3. Using RadLex™ terms as standardized rating terms for both subtlety and malignancy may provide a way for decision rules to be more easily applied; however, it may not convey the intended interpretation exactly.
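As a sketch of how such a schema could be applied, the mapping below aligns the five-point malignancy scale with the “uncertainty” terms listed above. The specific pairing of ratings to terms is only an assumption for illustration; the schema actually proposed by the paper appears in Table 3.

```python
# Illustrative alignment of the five-point malignancy scale with the
# RadLex "uncertainty" terms. The exact pairing is an assumption; the
# paper's proposed schema is given in Table 3.
UNCERTAINTY_SCALE = [
    "almost certainly absent",   # 1: highly unlikely
    "probably not present",      # 2: moderately unlikely
    "possibly present",          # 3: indeterminate
    "probably present",          # 4: moderately suspicious
    "almost certainly present",  # 5: highly suspicious
]

def malignancy_to_radlex(rating):
    """Map a 1-5 malignancy rating to a RadLex-style uncertainty term."""
    if not 1 <= rating <= 5:
        raise ValueError("rating must be between 1 and 5")
    return UNCERTAINTY_SCALE[rating - 1]
```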

Table 3
Possible RadLex™ Matches for Subtlety and Malignancy

Discussion

After performing all three methods for matching LIDC terminology to RadLex™ terms, 74% of the terms (25 out of 34) were found in RadLex™. The mapped terminology can now be matched with probabilistic models developed using an LIDC dataset, which can serve as decision-based rules to predict whether a lung nodule can be characterized or classified with an associated RadLex™ term. Table 4 contains an example of a decision rule mapped to subtlety. The decision rule contains the image features maxIntensity (a gray-level intensity feature) and minorAxisLength (a size feature) and applies the following logic to assign a rating: if (minorAxisLength ≤ 0.15) and (maxIntensity ≤ 0.23), then the associated subtlety rating is 1, or extremely subtle, with 100% confidence. The particular decision rule presented in Table 4 was based on our previous research.2
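The Table 4 rule can be expressed directly as a predicate. The comparison operators did not survive in the source text, so the ≤ directions below are an assumption; the feature names, thresholds, and resulting rating are from the example above.

```python
# Sketch of the Table 4 decision rule as an executable predicate. The
# comparison directions (<=) are an assumption, since the operators were
# lost in the source; thresholds and the rating come from the paper.
def subtlety_rule(minor_axis_length, max_intensity):
    """Return (rating, label, confidence) if the rule fires, else None."""
    if minor_axis_length <= 0.15 and max_intensity <= 0.23:
        return (1, "extremely subtle", 1.0)
    return None
```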

Table 4
Example of Matched Term with Decision Rule

Table 5 summarizes our results; specifically, it contains all LIDC terms with matches in RadLex™. Items denoted as shared indicate that the same RadLex™ term is shared by two or more LIDC terms, so any decision rule applied to the RadLex™ term must specify which LIDC term it belongs to. Similarly, items denoted as “opposite of” indicate that the matched term for a particular rating (e.g., none) is simply the opposite of the rating term on the other end of the scale (e.g., lobulated margin). So, given that there is no match for the LIDC rating term none, a nodule with the predicted annotation none under the lobulation rating 5 decision rules would be interpreted as having the opposite meaning of lobulated margin, that is, as (not) lobulated margin.
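The “opposite of” convention can likewise be sketched as a simple lookup that negates the matched term when only the opposite end of the scale has a RadLex™ match; the term strings below are illustrative.

```python
# Sketch of the "opposite of" convention: when an LIDC rating (e.g.,
# none for lobulation) has no RadLex match, the annotation is expressed
# as the negation of the matched term at the other end of the scale.
OPPOSITE_OF = {
    ("lobulation", "none"): "lobulated margin",
    ("spiculation", "none"): "spiculated margin",
}

def annotate(characteristic, rating_term):
    """Return a RadLex-style annotation, negating when only the opposite matches."""
    key = (characteristic, rating_term)
    if key in OPPOSITE_OF:
        return f"(not) {OPPOSITE_OF[key]}"
    return rating_term
```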

Table 5
Summary of Matched Terms

Conclusion

The results of this study identified matches for 25 out of 34 LIDC terms in RadLex™. With most of the LIDC terms now mapped to RadLex™, the predictive rules for annotation learned in our previous research2 can be applied. With these rules, a CT image of a lung nodule can be interpreted by a computer, annotated, and further verified by a radiologist using similar descriptors. On a larger scale, automatic lung nodule annotation based on low-level image features can narrow the semantic gap. Predictive rules for annotating images, learned using the LIDC dataset and terminology, were mapped to RadLex™ terminology in an effort to reduce the subjective variability of radiologist image interpretations. Future work may include re-annotating lung nodule images based on a modified LIDC terminology that utilizes RadLex™ terms. Work can also be done using RadLex™ terminology directly to develop a new, radiologist-annotated image database from which future researchers may derive predictive rules and apply them to a larger, more standardized CAD system. Likewise, additional work may include clustering analysis or another unsupervised learning technique to uncover new meanings, labels, or characteristics for groups of nodules.

Not all LIDC terms, however, were found to have matches in RadLex™, and many of the mapped terms were not found as exact matches. The fewest matches were found for calcification, suggesting that the terminology used by LIDC may be inconsistent with terms found in existing lexicons. There are thus opportunities to better conceptualize, define, or reduce LIDC characteristics and associated rating terminology in an effort to produce even more matches with RadLex™. It is also important to note that even with the use of UMLS and SNOMED to help define LIDC terminology, no additional matches were discovered. As shown in Table 2, numerous synonyms for each LIDC characteristic were provided by SNOMED; however, when these synonyms were searched for in RadLex™, no additional matches were found compared with direct querying of RadLex™ with the original LIDC terms. As such, while several lexicons, ontologies, and image reporting initiatives exist and are being developed,15,16 a single concise lexicon and ontology would greatly advance radiology reporting consistency and accuracy.19,20

By utilizing LIDC semantic characteristics as a bridge to RadLex™’s ontology, a standardized annotation system based on models learned using LIDC image features could be developed. This research aims to capture the universality of radiological concepts and terminology, in an effort to reduce the semantic gap between image content and high-level radiologists’ interpretation and, ultimately, to provide a standardized system of image interpretation and diagnosis.

Contributor Information

Pia Opulencia, Phone: +1-312-4543572, pia@opulencia.net.

David S. Channin, Phone: +1-312-9269165, Fax: +1-312-9265991, dsc@northwestern.edu.

Daniela S. Raicu, Phone: +1-312-3625512, Fax: +1-312-3626116, dstan@cdm.depaul.edu.

Jacob D. Furst, Phone: +1-312-3625158, Fax: +1-312-3626116, jfurst@cdm.depaul.edu.

References

1. Reeves AP, Biancardi AM, Apanasovich TV, Meyer CR, MacMahon H, Beek EJR, Kazerooni EA, et al. The Lung Image Database Consortium (LIDC): pulmonary nodule measurements, the variation, and the difference between different size metrics. Proc SPIE Int Soc Opt Eng. 2007;8:65140J.1–65140J.8.
2. Raicu DS, Varutbangkul E, Furst JD, Armato SG III. Modeling semantics from image data: opportunities from LIDC. IJBET. 2008;2:1–22. doi: 10.1504/IJBE.2008.016838.
3. Ochs R, Kim HJ, Angel E, Panknin C, McNitt-Gray M, Brown M. Forming a reference standard from LIDC data: impact of reader agreement on reported CAD performance. Medical Imaging 2007: Computer-Aided Diagnosis. Proc SPIE Int Soc Opt Eng. 2007;6514:65142A.
4. Langlotz CP, Caldwell S. The completeness of existing lexicons for representing radiology report information. J Digit Imaging. 2001;15:201–205. doi: 10.1007/s10278-002-5046-5.
5. Eberl MM, Fox CH, Edge SB, Carter CA, Mahoney MC. BI-RADS classification for management of abnormal mammograms. JABFM. 19:161–164.
6. McKay C, Hart CL, Erbacher G. Objectivity and accuracy of mammogram interpretation using the BI-RADS final assessment categories in 40- to 49-year-old women. JAOA. 2000;100(10):615–620.
7. Langlotz CP. RadLex™: a new method for indexing online educational materials. RadioGraphics. September 14, 2006. doi: 10.1148/rg.266065168.
8. Armato SG III, McLennan G, McNitt-Gray MF, Meyer CR, Yankelevitz D, Aberle DR, Henschke CI, Hoffman EA, Kazerooni EA, MacMahon H, Reeves AP, Croft BY, Clarke LP. Lung Image Database Consortium: developing a resource for the medical imaging research community. Radiology. 2004;232:739–748. doi: 10.1148/radiol.2323032035.
9. Armato SG III, McNitt-Gray MF, Reeves AP, Meyer CR, McLennan G, Aberle DR, Kazerooni EA, MacMahon H, Beek EJR, Yankelevitz D, Hoffman EA, Henschke CI, Roberts RY, Brown MS, Engelmann RM, Pais RC, Piker CW, Qing D, Kocherginsky M, Croft BY, Clarke LP. The Lung Image Database Consortium (LIDC): an evaluation of radiologist variability in the identification of lung nodules on CT scans. Acad Radiol. 2007;14:1409–1421. doi: 10.1016/j.acra.2007.07.008.
10. Armato SG III, Roberts RY, Kocherginsky M, Aberle DR, Kazerooni EA, MacMahon H, Beek EJR, Yankelevitz DF, McLennan G, McNitt-Gray MF, Meyer CR, Reeves AP, Caligiuri P, Quint LE, Sundaram B, Croft BY, Clarke LP. Assessment of radiologist performance in the detection of lung nodules: dependence on the definition of “truth.” Acad Radiol. 2008;16(1):28–38. doi: 10.1016/j.acra.2008.05.022.
11. Meyer CR, Johnson TD, McLennan G, Aberle DR, Kazerooni EA, MacMahon H, Mullan BF, Yankelevitz DF, Beek EJR, Armato SG III, McNitt-Gray MF, Reeves AP, Gur D, Henschke CI, Hoffman EA, Bland PH, Laderach G, Pais R, Qing D, Piker C, Guo J, Starkey A, Max D, Croft BY, Clarke LP. Evaluation of lung MDCT nodule annotation across radiologists and methods. Acad Radiol. 2006;13:1254–1265. doi: 10.1016/j.acra.2006.07.012.
12. Horsthemke W, Varutbangkul E, Raicu D, Furst J. Predictive data mining for lung nodule interpretation. In: Proceedings of the Seventh IEEE International Conference on Data Mining Workshops (ICDMW), Washington, DC, October 28–31, 2007, pp 157–162. doi: 10.1109/ICDMW.2007.74.
13. McNitt-Gray MF, Armato SG III, Meyer CR, Reeves AP, McLennan G, Pais R, Freymann J, Brown MS, Engelmann RM, Bland PH, Laderach GE, Piker C, Guo J, Towfic Z, Qing DP, Yankelevitz DF, Aberle DR, Beek EJR, MacMahon H, Kazerooni EA, Croft BY, Clarke LP. The Lung Image Database Consortium (LIDC) data collection process for nodule detection and annotation. Acad Radiol. 2007;14:1464–1474. doi: 10.1016/j.acra.2007.07.021.
14. Fellbaum C. WordNet: An Electronic Lexical Database. 1998.
15. Channin DS. The caBIG Annotation and Image Markup Development Project. Chicago: Radiological Society of North America Annual Meeting and Scientific Assembly; 2006.
16. Rubin D, Channin DS, Mongolowat P, et al. LIDC conversion to AIM, February 13, 2009. Last accessed March 12, 2009. Available at: https://wiki.nci.nih.gov/display/Imaging/LIDC+Conversion+to+AIM
17. Sluimer I, Schilhan A, Prokop M, Ginneken B. Computer analysis of computed tomography scans of the lung: a survey. IEEE Trans Med Imaging. 2006;25:385–405. doi: 10.1109/TMI.2005.862753.
18. Zerhouni EA, Stitik FP, Siegelman SS, Nadidich DP, Sagel SS, et al. CT of the pulmonary nodule: a cooperative study. Radiology. 1986;160:319–327.
19. Vanel D. The American College of Radiology (ACR) Breast Imaging and Reporting Data System (BI-RADS™): a step towards a universal radiological language? Eur J Radiol. 2007;61(2):183. doi: 10.1016/j.ejrad.2006.08.030.
20. Sistrom CL, Langlotz CP. A framework for improving radiology reporting. J Am Coll Radiol. 2005;2(2):159–167. doi: 10.1016/j.jacr.2004.06.015.
21. Deserno TM, Antani S, Long LR. Exploring access to literature using content-based image retrieval. Proc SPIE Med Imaging. 2007;6516.
