Sensors (Basel). 2019 May 14;19(10). pii: E2225. doi: 10.3390/s19102225.

An Improved Point Cloud Descriptor for Vision Based Robotic Grasping System.

Author information

1. Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110169, China. wangfei@mail.neu.edu.cn.
2. Faculty of Robot Science and Engineering, Northeastern University, Shenyang 110169, China. 1700951@stu.neu.edu.cn.
3. College of Information Science and Engineering, Northeastern University, Shenyang 110819, China. 1870652@stu.neu.edu.cn.
4. School of Mechanical Engineering and Automation, Northeastern University, Shenyang 110819, China. chenght@me.neu.edu.cn.

Abstract

In this paper, a novel global point cloud descriptor is proposed for reliable object recognition and pose estimation, which can be effectively applied to robotic grasping operations. The viewpoint feature histogram (VFH) is widely used for three-dimensional (3D) object recognition and pose estimation in real scenes captured by depth sensors because of its recognition performance and computational efficiency. However, when an object has a mirrored structure, VFH often cannot distinguish poses that are mirrored relative to the viewpoint. To address this difficulty, this study presents an improved feature descriptor named the orthogonal viewpoint feature histogram (OVFH), which contains two components: a surface shape component and an improved viewpoint direction component. The improved viewpoint component is calculated from a vector orthogonal to the viewpoint direction, obtained from a reference frame estimated for the entire point cloud. An evaluation of OVFH on a publicly available dataset indicates that it enhances the ability to distinguish between mirrored poses while preserving object recognition performance. The proposed method uses OVFH to recognize and register objects against a database and refines the resulting poses with the iterative closest point (ICP) algorithm. The experimental results show that the proposed approach can effectively guide a robot to grasp objects with mirrored poses.
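To make the key idea concrete, the following is a minimal sketch of what an OVFH-style improved viewpoint component might look like. It is an assumed interpretation of the abstract, not the authors' implementation: the function name, the PCA-based reference frame, and the choice of orthogonal vector are all illustrative. The point is that binning normal directions against a vector orthogonal to the viewing direction (rather than against the viewing direction itself, as VFH does) produces different histograms for mirrored poses.

```python
import numpy as np

def orthogonal_viewpoint_component(points, normals, viewpoint, bins=45):
    """Hypothetical OVFH-style viewpoint component (illustrative only).

    VFH bins the angles between surface normals and the viewpoint
    direction; mirrored poses yield nearly identical histograms.
    Here we instead bin angles against a vector orthogonal to the
    viewpoint direction, derived from a PCA reference frame of the
    whole cloud, so mirrored poses fall into different bins.
    """
    centroid = points.mean(axis=0)
    view_dir = centroid - viewpoint
    view_dir /= np.linalg.norm(view_dir)

    # Reference frame estimated from the entire point cloud via PCA.
    cov = np.cov((points - centroid).T)
    _, eigvecs = np.linalg.eigh(cov)
    major_axis = eigvecs[:, -1]  # direction of largest variance

    # Orthogonal viewpoint vector: the component of the major axis
    # perpendicular to the viewing direction (an assumed choice).
    ortho = major_axis - major_axis.dot(view_dir) * view_dir
    ortho /= np.linalg.norm(ortho)

    # Histogram of cos(angle) between each normal and the orthogonal
    # vector; a mirrored pose flips the sign of these cosines.
    cos_angles = normals @ ortho
    hist, _ = np.histogram(cos_angles, bins=bins, range=(-1.0, 1.0))
    return hist / len(points)  # normalized, as in VFH components
```

In a full pipeline along the lines the abstract describes, a descriptor like this would be matched against a database of object views for coarse recognition and pose hypotheses, after which ICP refines the alignment.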

KEYWORDS:

global feature descriptor; iterative closest point; object recognition; pose estimation; vision-guided robotic grasping
