Deep learning frameworks for point cloud reconstruction

Deva Prasad, Anjana
Major Professor: Krishnamurthy, Adarsh
Committee Members: Sarkar, Soumik; Ganapathysubramanian, Baskar
Computer Engineering
Rapid advancements have been made in the field of surface reconstruction over the last two decades. Nonetheless, traditional approaches to reconstructing surface representations, although robust for a plethora of objects, fail to scale well for the 3D point cloud datasets available today, which contain a diverse class of shapes. Due to the widely acknowledged success of deep learning-based methods on 2D images, there has been growing interest in using deep learning to obtain surface representations from 3D point clouds. While several such methods have been proposed, many are not easy to adapt for applications in fields like computer-aided design (CAD) and agriculture. This thesis aims to develop robust deep learning frameworks for explicit and implicit surface reconstruction that can be seamlessly integrated into the research pipelines of such fields.

Boundary representations (B-reps) using Non-Uniform Rational B-splines (NURBS) are the de facto standard in CAD, but their utility in deep learning-based approaches is not well researched. In our first work, we propose a differentiable NURBS module to integrate the NURBS representation of CAD models with deep learning methods. We mathematically define the derivatives of NURBS curves and surfaces with respect to the input parameters, which are then used to perform the "backward" evaluation required when training deep learning models. This allows NURBS to be incorporated into the differentiable programming paradigm used in deep learning, making the representation easier to integrate with modern deep learning frameworks.

Reconstructing the geometry of crops from 3D point cloud data is useful for various plant phenotyping applications. Because plants have very thin and slender segments, obtaining accurate surface geometry representations from their 3D point cloud data is challenging. Further, defects in the point cloud data may produce errors in the reconstructed plant structures.
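To illustrate the kind of "backward" evaluation a differentiable NURBS module needs, the sketch below (our own minimal example, not the thesis's actual module) evaluates a NURBS curve point with the Cox–de Boor recursion and uses the standard fact that the derivative of the curve point with respect to a control point is the corresponding rational basis function, verified against a finite difference:

```python
import numpy as np

def bspline_basis(i, p, u, knots):
    # Cox–de Boor recursion for the B-spline basis function N_{i,p}(u).
    if p == 0:
        return 1.0 if knots[i] <= u < knots[i + 1] else 0.0
    left = 0.0
    denom = knots[i + p] - knots[i]
    if denom > 0:
        left = (u - knots[i]) / denom * bspline_basis(i, p - 1, u, knots)
    right = 0.0
    denom = knots[i + p + 1] - knots[i + 1]
    if denom > 0:
        right = (knots[i + p + 1] - u) / denom * bspline_basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u, ctrl, w, p, knots):
    # C(u) = sum_i R_i(u) P_i, with rational basis R_i = N_i w_i / sum_j N_j w_j.
    N = np.array([bspline_basis(i, p, u, knots) for i in range(len(ctrl))])
    R = N * w / np.dot(N, w)
    # dC(u)/dP_i = R_i(u) * I, the analytic "backward" derivative
    # with respect to the control points.
    return R @ ctrl, R

# Quadratic NURBS (here a rational Bézier segment) in 2D.
ctrl = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]])
w = np.array([1.0, 2.0, 1.0])
knots = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])
C, R = nurbs_point(0.5, ctrl, w, 2, knots)

# Check the analytic derivative w.r.t. ctrl[1] against a finite difference.
eps = 1e-6
ctrl_pert = ctrl.copy()
ctrl_pert[1, 1] += eps
C_pert, _ = nurbs_point(0.5, ctrl_pert, w, 2, knots)
assert abs((C_pert[1] - C[1]) / eps - R[1]) < 1e-4
```

In a full differentiable module, derivatives with respect to the weights and knots would be defined analogously and registered as a custom backward pass so that gradients flow through the NURBS evaluation during training.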
In our second work, we leverage deep learning frameworks that learn neural implicit representations to reconstruct the surfaces of fully developed maize plants using data acquired from Terrestrial Laser Scanners (TLS).
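A neural implicit representation encodes a surface as the zero level set of a learned signed distance field. As a minimal sketch of how a surface point is recovered from such a field, the example below (our simplification) sphere-traces an analytic sphere SDF; in the learned setting, a small network fitted to the plant point cloud would stand in for the analytic function:

```python
import numpy as np

def sdf_sphere(p, r=1.0):
    # Signed distance to a sphere of radius r at the origin:
    # negative inside, zero on the surface, positive outside.
    # A trained neural network would replace this analytic SDF.
    return np.linalg.norm(p) - r

def sphere_trace(origin, direction, sdf, max_steps=128, tol=1e-5):
    # March along the ray; the SDF value bounds how far we can
    # safely step without crossing the surface.
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < tol:
            return t  # ray parameter at the surface hit
        t += d
    return None  # no hit within max_steps

# Ray from (0, 0, -3) toward the origin hits the unit sphere at t = 2.
t_hit = sphere_trace(np.array([0.0, 0.0, -3.0]), np.array([0.0, 0.0, 1.0]), sdf_sphere)
```

Meshing the zero level set (e.g., with marching cubes) follows the same idea: query the field on a grid and extract where it changes sign.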