Sensors (Basel). 2018 May 29;18(6). pii: E1746. doi: 10.3390/s18061746.

A Kinect-Based Segmentation of Touching-Pigs for Real-Time Monitoring.

Author information

1–7: Department of Computer Convergence Software, Korea University, Sejong City 30019, Korea. Emails: misoalth@korea.ac.kr, ycc4477@korea.ac.kr, goyangi100@korea.ac.kr, sjwon92@korea.ac.kr, peacfeel@korea.ac.kr, ychungy@korea.ac.kr, dhpark@korea.ac.kr.

Abstract

Segmenting touching-pigs in real time is an important problem for surveillance cameras intended for the 24-h tracking of individual pigs; however, no real-time method for this task has yet been reported. We focus in particular on segmenting touching-pigs in a crowded pig room using low-contrast images obtained from a Kinect depth sensor. We reduce the execution time by combining an object detection technique based on a convolutional neural network (CNN) with image processing techniques, instead of applying time-consuming operations such as optimization-based segmentation. We first apply the fastest CNN-based object detection technique (You Only Look Once, YOLO) to separate the touching-pigs. If the quality of the YOLO output is unsatisfactory, we then try to find a possible boundary line between the touching-pigs by analyzing their shape. Our experimental results show that this method separates touching-pigs effectively in terms of both accuracy (91.96%) and execution time (real-time execution), even with low-contrast images obtained using a Kinect depth sensor.
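
The shape-analysis fallback described in the abstract can be illustrated with a standard computer-vision idiom. The sketch below is a minimal illustration, not the authors' implementation: the function name, the OpenCV-based approach, and the depth threshold are assumptions. It finds a candidate boundary line between two touching blobs by connecting the two deepest convexity defects of the merged contour and cutting the binary mask along that line.

```python
import cv2
import numpy as np

# Assumed threshold: a concavity must be at least ~10 px deep to count as a
# boundary cue (cv2 stores defect depths in fixed point, value = pixels * 256).
MIN_DEFECT_DEPTH = 10 * 256

def split_touching_blob(mask):
    """Given a binary (uint8) mask of one merged blob, look for a boundary
    line between the two deepest concavity points and cut along it."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return mask
    cnt = max(contours, key=cv2.contourArea)        # largest blob only
    hull = cv2.convexHull(cnt, returnPoints=False)  # hull as contour indices
    defects = cv2.convexityDefects(cnt, hull)
    if defects is None or len(defects) < 2:
        return mask  # fewer than two concavities: likely a single pig
    # Take the two defects farthest from the convex hull (deepest "pinches").
    order = np.argsort(defects[:, 0, 3])[::-1]
    if defects[order[1], 0, 3] < MIN_DEFECT_DEPTH:
        return mask  # concavities too shallow to justify a cut
    p1 = tuple(int(v) for v in cnt[defects[order[0], 0, 2]][0])
    p2 = tuple(int(v) for v in cnt[defects[order[1], 0, 2]][0])
    separated = mask.copy()
    cv2.line(separated, p1, p2, color=0, thickness=2)  # erase along the cut
    return separated
```

In a full pipeline of the kind the abstract describes, such a routine would run only on blobs whose YOLO detections are judged low-quality, with each resulting region then relabeled via connected-component analysis (e.g., cv2.connectedComponents).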

KEYWORDS:

YOLO; agriculture IT; computer vision; convolutional neural network; depth information; touching-objects segmentation

PMID: 29843479
PMCID: PMC6021839
DOI: 10.3390/s18061746
[Indexed for MEDLINE]