
Ground-distance segmentation of 3D LiDAR point cloud toward autonomous driving

Industrial Technology Advances

Jian Wu, University of Science and Technology of China, China; Qingxiong Yang, liiton.research@gmail.com
 
Suggested Citation
Jian Wu and Qingxiong Yang (2020), "Ground-distance segmentation of 3D LiDAR point cloud toward autonomous driving", APSIPA Transactions on Signal and Information Processing: Vol. 9: No. 1, e24. http://dx.doi.org/10.1017/ATSIP.2020.21

Publication Date: 23 Nov 2020
© 2020 Jian Wu and Qingxiong Yang
 
Keywords
Computer vision, Image processing, Machine learning
 

Open Access

This article is published under the terms of the Creative Commons Attribution licence.


In this article:
I. INTRODUCTION 
II. RELATED WORK 
III. APPROACH 
IV. EXPERIMENTAL RESULTS 
V. CONCLUSION 

Abstract

In this paper, we study the semantic segmentation of 3D LiDAR point cloud data in urban environments for autonomous driving and propose a method that utilizes the surface information of the ground plane. In practice, the resolution of a LiDAR sensor installed on a self-driving vehicle is relatively low, so the acquired point cloud is quite sparse. While recent work on dense point cloud segmentation has achieved promising results, its performance is relatively low when applied directly to sparse point clouds. This paper focuses on the semantic segmentation of sparse point clouds obtained from a 32-channel LiDAR sensor using deep neural networks. The main contribution is the integration of ground information, which is used to group ground points that are far away from each other. Qualitative and quantitative experiments on two large-scale point cloud datasets show that the proposed method outperforms the current state of the art.
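The paper's own way of grouping distant ground points is detailed in Section III. As a rough illustration of the kind of ground information the abstract refers to, the sketch below fits a dominant ground plane to a sparse point cloud with a simple RANSAC loop and splits the sweep into ground and non-ground points. This is a generic sketch, not the authors' method; the function name fit_ground_plane_ransac, the thresholds, and the placeholder point cloud are all assumptions.

```python
import numpy as np

def fit_ground_plane_ransac(points, n_iters=200, dist_thresh=0.2, rng=None):
    """Fit a ground plane n . x + d = 0 to an (N, 3) array of points with a
    basic RANSAC loop; returns ((nx, ny, nz, d), inlier_mask)."""
    rng = np.random.default_rng(rng)
    best_plane = None
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Sample three distinct points and compute the plane through them.
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-8:  # degenerate (near-collinear) sample, try again
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Points within dist_thresh of the plane are treated as ground inliers.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_plane, best_inliers = (*normal, d), inliers
    return best_plane, best_inliers

# Hypothetical usage on a placeholder cloud: half the points lie near z = 0
# (the "ground"), the rest are scattered obstacles.
cloud = np.random.uniform(-20.0, 20.0, size=(5000, 3))
near_ground = np.random.rand(5000) < 0.5
cloud[near_ground, 2] = np.random.normal(0.0, 0.05, near_ground.sum())
plane, ground_mask = fit_ground_plane_ransac(cloud)
ground_points, obstacle_points = cloud[ground_mask], cloud[~ground_mask]
```

In practice the distance threshold would be tuned to the sensor's noise level and the expected ground slope; the paper's contribution is in how such ground information is integrated into the segmentation network rather than in the plane fit itself.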

DOI: 10.1017/ATSIP.2020.21