Public Health Nutr. 2018 Mar 26:1-12. doi: 10.1017/S1368980018000538. [Epub ahead of print]

Automatic food detection in egocentric images using artificial intelligence technology.

Author information

1. Department of Neurosurgery, University of Pittsburgh, 3520 Forbes Avenue, Suite 202, Pittsburgh, PA 15213, USA.
2. Department of Biomedical Engineering, Hebei University of Technology, Tianjin, People's Republic of China.
3. Department of Pediatrics, Baylor College of Medicine, Houston, TX, USA.
4. School of Nursing, University of Pittsburgh, Pittsburgh, PA, USA.
5. Image Processing Center, Beihang University, Beijing, People's Republic of China.
6. Department of Electrical and Computer Engineering, University of Pittsburgh, Pittsburgh, PA, USA.

Abstract

OBJECTIVE:

To develop an artificial intelligence (AI)-based algorithm which can automatically detect food items from images acquired by an egocentric wearable camera for dietary assessment.

DESIGN:

To study human diet and lifestyle, large sets of egocentric images were acquired with a wearable device, called eButton, from free-living individuals. Three thousand nine hundred images containing real-world activities, manually selected from thirty subjects, formed eButton data set 1. eButton data set 2 contained 29 515 images acquired from a research participant during a week-long unrestricted recording. The images covered both food- and non-food-related real-life activities, such as dining at home and in restaurants, cooking, shopping, gardening, housekeeping chores, taking classes and gym exercise. All images in these data sets were classified as food/non-food images based on tags generated by a convolutional neural network.
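
A minimal sketch of the tag-based food/non-food decision described above is given below. The tag vocabulary, the confidence threshold and the example tags are illustrative assumptions, not the authors' implementation:

```python
# Sketch: classify an image as food/non-food from the tags a CNN
# image tagger assigns to it. FOOD_TAGS and the 0.5 threshold are
# hypothetical choices for illustration only.

FOOD_TAGS = {"food", "drink", "meal", "dish", "fruit", "vegetable"}

def is_food_image(tags, threshold=0.5):
    """Return True if any sufficiently confident tag is food-related.

    tags -- list of (tag, confidence) pairs from a CNN tagger.
    """
    return any(tag in FOOD_TAGS and conf >= threshold
               for tag, conf in tags)

# Example tags as a CNN tagger might emit them for two images:
print(is_food_image([("table", 0.9), ("meal", 0.8)]))    # True
print(is_food_image([("garden", 0.95), ("tool", 0.6)]))  # False
```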

RESULTS:

A cross data-set test was conducted on eButton data set 1: each half of the data set was used in turn for training and the other half for testing, yielding overall food-detection accuracies of 91·5 and 86·4 %, respectively. For eButton data set 2, 74·0 % sensitivity and 87·0 % specificity were obtained when both 'food' and 'drink' items were counted as food images. When only 'food' items were counted, sensitivity and specificity reached 85·0 and 85·8 %, respectively.
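
For reference, sensitivity and specificity are computed from the food/non-food confusion counts as sketched below; the counts are hypothetical, chosen only to reproduce the reported 74·0 %/87·0 % figures:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN): fraction of food images detected.
    Specificity = TN/(TN+FP): fraction of non-food images rejected."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts for illustration only:
sens, spec = sensitivity_specificity(tp=740, fn=260, tn=870, fp=130)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")  # 74.0%, 87.0%
```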

CONCLUSIONS:

AI technology can automatically detect foods with reasonable accuracy in low-quality, real-world egocentric images acquired by a wearable camera, reducing both the burden of data processing and privacy concerns.

KEYWORDS:

Artificial intelligence; Deep learning; Egocentric image; Food detection; Technology-assisted dietary intake estimation; Wearable device

PMID: 29576027
DOI: 10.1017/S1368980018000538
