End-to-end learning of local point cloud feature descriptors
Emerging technologies such as augmented reality and autonomous vehicles have created a growing need to identify and track objects in the environment. Object tracking and localization are frequently accomplished with local feature descriptors, either in 2D or 3D. However, state-of-the-art feature descriptors often suffer from incorrect matches, which degrades tracking and localization accuracy. More robust 3D feature descriptors would make these applications more accurate, reliable, and safe. This research studies the use of a pointwise convolutional neural network for creating local 3D feature descriptors on point clouds. A network that produces feature descriptors and keypoint scores is designed, and a loss function and training method are developed. The resulting learned descriptors are evaluated on four different objects, using both synthetic and scanned point clouds. The evaluation shows that the descriptors can effectively register objects in the presence of noise, and that the keypoint scores reduce the number of iterations required for registration by a factor of three. An analysis of the learned filters provides insight into what the descriptors encode and suggests avenues for improvement.
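To make the registration step concrete, the sketch below shows how learned per-point descriptors are typically used to align two point clouds: descriptors are matched by nearest neighbour, and the resulting correspondences feed a least-squares rigid-transform estimate (the Kabsch algorithm). This is a minimal illustration under assumed inputs; the descriptor network itself is not reproduced, and the function names are illustrative rather than taken from the thesis.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Brute-force nearest-neighbour matching in descriptor space.

    Returns, for each descriptor in desc_a, the index of its
    closest descriptor in desc_b.
    """
    dists = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    return np.argmin(dists, axis=1)

def estimate_rigid_transform(src, dst):
    """Kabsch algorithm: least-squares rotation R and translation t
    such that dst ≈ src @ R.T + t, given point correspondences."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # correct a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Demo with synthetic data: recover a known rigid motion.
# Identical random descriptors stand in for the network's output.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 3))
theta = 0.7
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true

desc = rng.normal(size=(100, 8))               # placeholder descriptors
matches = match_descriptors(desc, desc)        # identity correspondences here
R_est, t_est = estimate_rigid_transform(src, dst[matches])
```

In a full pipeline, correspondences would be sampled inside a RANSAC loop; biasing that sampling toward points with high keypoint scores is what allows registration to converge in fewer iterations, as the abstract reports.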