Endoscopy. 2019 Aug 23. doi: 10.1055/a-0981-6133. [Epub ahead of print]

Automated classification of gastric neoplasms in endoscopic images using a convolutional neural network.

Cho BJ1,2,3, Bang CS3,4,5, Park SW4,5, Yang YJ3,4,5, Seo SI4,5, Lim H4,5, Shin WG4,5, Hong JT4,5, Yoo YT6, Hong SH6, Choi JH3, Lee JJ3,7, Baik GH4,5.

Author information

1. Department of Ophthalmology, Hallym University College of Medicine, Chuncheon, Korea.
2. Interdisciplinary Program in Medical Informatics, Seoul National University College of Medicine, Seoul, Korea.
3. Institute of New Frontier Research, Hallym University College of Medicine, Chuncheon, Korea.
4. Department of Internal Medicine, Hallym University College of Medicine, Chuncheon, Korea.
5. Institute for Liver and Digestive Diseases, Hallym University, Chuncheon, Korea.
6. Dudaji Inc., Seoul, Korea.
7. Department of Anesthesiology and Pain Medicine, Hallym University College of Medicine, Chuncheon, Korea.



Background and study aims: Visual inspection, lesion detection, and differentiation between malignant and benign features are key aspects of an endoscopist's role. The use of machine learning for the recognition and differentiation of images has been increasingly adopted in clinical practice. This study aimed to establish convolutional neural network (CNN) models to automatically classify gastric neoplasms based on endoscopic images.


Methods: Endoscopic white-light images of pathologically confirmed gastric lesions were collected and classified into five categories: advanced gastric cancer, early gastric cancer, high grade dysplasia, low grade dysplasia, and non-neoplasm. Three pretrained CNN models were fine-tuned using a training dataset. The classifying performance of the models was evaluated using a test dataset and a prospective validation dataset.
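The abstract does not show the fine-tuning step itself. As a rough, self-contained illustration of the transfer-learning recipe it describes (a pretrained feature extractor kept fixed while a new classification head is trained on labelled images), here is a minimal NumPy sketch; the random projection "backbone" and the synthetic data are stand-ins, not the study's Inception-ResNet-v2 pipeline or endoscopic images:

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES = 5   # AGC, EGC, HGD, LGD, non-neoplasm
FEAT_DIM = 64

# Frozen "backbone": a fixed random projection standing in for
# pretrained CNN features (the study fine-tuned pretrained CNNs).
W_backbone = rng.normal(size=(256, FEAT_DIM))

def extract_features(images):
    # images: (n, 256) flattened stand-ins for endoscopic images
    return np.tanh(images @ W_backbone)

# Synthetic labelled data standing in for the training dataset:
# each class gets a characteristic offset so the task is learnable.
n = 500
y = rng.integers(0, NUM_CLASSES, size=n)
class_means = rng.normal(scale=2.0, size=(NUM_CLASSES, 256))
X = rng.normal(size=(n, 256)) + class_means[y]

F = extract_features(X)           # backbone stays frozen
onehot = np.eye(NUM_CLASSES)[y]

# Trainable softmax head, optimized by plain gradient descent
W_head = np.zeros((FEAT_DIM, NUM_CLASSES))
b = np.zeros(NUM_CLASSES)
for _ in range(300):
    logits = F @ W_head + b
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    grad = (p - onehot) / n                       # softmax cross-entropy gradient
    W_head -= 1.0 * (F.T @ grad)
    b -= 1.0 * grad.sum(axis=0)

train_acc = (np.argmax(F @ W_head + b, axis=1) == y).mean()
print(f"training accuracy: {train_acc:.2f}")
```

In the actual study the frozen-projection stand-in would be replaced by a pretrained network's convolutional layers, with some or all layers unfrozen during fine-tuning.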


Results: A total of 5017 images were collected from 1269 patients, of which 812 images from 212 patients were used as the test dataset. An additional 200 images from 200 patients were collected and used for prospective validation. For the five-category classification, the weighted average accuracy of the Inception-ResNet-v2 model reached 84.6 %. The mean areas under the curve (AUCs) of the model for differentiating gastric cancer and neoplasm were 0.877 and 0.927, respectively. In the prospective validation, the Inception-ResNet-v2 model showed lower performance than the best-performing endoscopist (five-category accuracy 76.4 % vs. 87.6 %; cancer 76.0 % vs. 97.5 %; neoplasm 73.5 % vs. 96.5 %; P < 0.001). However, there was no statistically significant difference between the Inception-ResNet-v2 model and the worst-performing endoscopist in the differentiation of gastric cancer (accuracy 76.0 % vs. 82.0 %) or neoplasm (AUC 0.776 vs. 0.865).
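The two summary metrics reported above can be computed without any ML framework. The sketch below shows one common reading of "weighted average accuracy" (per-class accuracy weighted by class frequency; the paper's exact definition may differ) and AUC via the rank-sum (Mann-Whitney U) formulation:

```python
import numpy as np

def weighted_average_accuracy(y_true, y_pred):
    """Per-class accuracy averaged with class-frequency weights."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    classes, counts = np.unique(y_true, return_counts=True)
    per_class = np.array([(y_pred[y_true == c] == c).mean() for c in classes])
    return float(np.average(per_class, weights=counts))

def roc_auc(y_true, scores):
    """Binary AUC: fraction of (positive, negative) pairs ranked
    correctly by the score, with ties counting half."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    pos = scores[y_true == 1]
    neg = scores[y_true == 0]
    diff = pos[:, None] - neg[None, :]
    return float(((diff > 0).sum() + 0.5 * (diff == 0).sum())
                 / (len(pos) * len(neg)))
```

For example, `roc_auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75, because three of the four positive-negative score pairs are ranked correctly. For the five-category task, one AUC per class (one-vs-rest) would be computed and averaged.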


Conclusions: The evaluated deep-learning models have the potential for clinical application in classifying gastric cancer or neoplasm on endoscopic white-light images.

