Sci Data. 2018 Nov 20;5:180251. doi: 10.1038/sdata.2018.251.

A dataset of clinically generated visual questions and answers about radiology images.

Author information

Lister Hill National Center for Biomedical Communications, National Library of Medicine, Bethesda, MD, USA.

Abstract

Radiology images are an essential part of clinical decision making and population screening, e.g., for cancer. Automated systems could help clinicians cope with large volumes of images by answering questions about image contents. An emerging area of artificial intelligence, Visual Question Answering (VQA) in the medical domain explores approaches to this form of clinical decision support. The success of such machine learning tools hinges on the availability and design of collections of medical images augmented with question-answer pairs directed at the content of the image. We introduce VQA-RAD, the first manually constructed dataset in which clinicians asked naturally occurring questions about radiology images and provided reference answers. Manual categorization of images and questions provides insight into clinically relevant tasks and the natural language used to phrase them. Evaluating with well-known algorithms, we demonstrate the richer quality of this dataset compared with other, automatically constructed ones. We propose VQA-RAD to encourage the community to design VQA tools with the goal of improving patient care.

PMID: 30457565
PMCID: PMC6244189
DOI: 10.1038/sdata.2018.251
[Indexed for MEDLINE]
Free PMC Article
