High-throughput robotic plant phenotyping using 3D machine vision and deep neural networks

dc.contributor.advisor Tang, Lie
dc.contributor.advisor Birrell, Stuart
dc.contributor.advisor Steward, Brian
dc.contributor.advisor Schnable, Patrick
dc.contributor.advisor Yin, Yanhai
dc.contributor.author Xiang, Lirong
dc.contributor.department Department of Agricultural and Biosystems Engineering (ENG)
dc.date.accessioned 2022-11-09T00:15:17Z
dc.date.available 2022-11-09T00:15:17Z
dc.date.issued 2022-05
dc.date.updated 2022-11-09T00:15:18Z
dc.description.abstract The ability to correlate morphological traits of plants with their genotypes plays an important role in plant phenomics research. However, traditional plant phenotyping is time-consuming, labor-intensive, and prone to human error. This dissertation documents my research in high-throughput robotic plant phenotyping for sorghum and maize plants using 3D machine vision and convolutional neural networks. Sorghum is an important grain crop and a promising feedstock for biofuel production due to its excellent drought tolerance and water use efficiency. The 3D surface model of a plant can potentially provide an efficient and accurate way to digitize plant architecture and accelerate sorghum breeding programs. A non-destructive 3D scanning system using a commodity depth camera was developed to take side-view images of plants at multiple growth stages. A 3D skeletonization algorithm was developed to analyze plant architecture and segment individual leaves. Multiple phenotypic parameters were obtained from the skeleton and the reconstructed point cloud, including plant height, stem diameter, leaf angle, and leaf surface area. These image-derived features were highly correlated with the ground truth. Additionally, the results showed that stem volume was a promising predictor of shoot fresh weight and shoot dry weight. To address the challenges of in-field imaging for plant phenotyping caused by variable outdoor lighting, wind, and occlusions among plants, a customized stereo module, PhenoStereo, was developed to acquire high-quality image data under field conditions. PhenoStereo was used to acquire a set of sorghum plant images, and an automated point cloud processing pipeline was developed to extract the stems and quantify their diameters via an optimized 3D modeling process.
The pipeline employed a Mask Region-based Convolutional Neural Network (Mask R-CNN) to detect stalk contours and the Semi-Global Block Matching (SGBM) stereo matching algorithm to generate disparity maps. The system-derived stem diameters were highly correlated with the ground truth. Additionally, PhenoStereo was used to quantify the leaf angle of maize plants under field conditions. Multiple tiers of PhenoStereo cameras were mounted on PhenoBot 3.0, a robotic vehicle designed to traverse between pairs of agronomically spaced crop rows, to capture side-view images of maize plants in the field. An automated image processing pipeline (AngleNet) was developed to detect each leaf angle as a triplet of keypoints in two-dimensional images and extract quantitative data from reconstructed 3D models. AngleNet-derived leaf angles and their associated internode heights were highly correlated with manually collected ground-truth measurements. In summary, this dissertation investigates and develops automated, computer-vision-based robotic systems for plant phenotyping both in controlled environments and under field conditions. In particular, a stereo module was customized and used to acquire high-quality image data for in-field plant phenotyping. With high-fidelity reconstructed 3D models and robust image processing algorithms, a series of plant-level and organ-level phenotypic traits of sorghum and maize plants were accurately extracted. The results demonstrate that, with proper customization, stereo vision can be a highly desirable sensing method for field-based plant phenotyping using high-fidelity 3D models reconstructed from stereoscopic images. The approaches proposed in this dissertation provide efficient alternatives to traditional phenotyping and could accelerate breeding programs for improved plant architecture.
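The stereo-matching step named in the abstract can be illustrated with a toy example. The sketch below is a naive winner-take-all block-matching disparity search in plain NumPy; it is a simplified stand-in for SGBM (which additionally aggregates matching costs along multiple scanline directions), not the dissertation's actual implementation. The window size, search range, and synthetic image pair are illustrative assumptions.

```python
import numpy as np

def block_match_disparity(left, right, max_disp=16, win=5):
    """Naive block-matching disparity using a sum-of-absolute-differences (SAD)
    cost. For each left-image pixel, search disparities 0..max_disp-1 and keep
    the one whose right-image window matches best. SGBM refines this idea by
    smoothing the cost volume along multiple directions before picking."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.float32)
            best_d, best_cost = 0, np.inf
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.float32)
                cost = np.abs(patch - cand).sum()  # SAD matching cost
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    # Depth then follows from standard triangulation: Z = f * B / disparity,
    # for focal length f and stereo baseline B (both from camera calibration).
    return disp

# Synthetic rectified pair: the right image is the left shifted 4 px leftward,
# so the true disparity is 4 everywhere away from the image borders.
rng = np.random.default_rng(0)
left = rng.integers(0, 255, (40, 60)).astype(np.uint8)
right = np.roll(left, -4, axis=1)
d = block_match_disparity(left, right, max_disp=8, win=5)
```

On this noise-free synthetic pair the SAD cost is exactly zero at the true disparity, so interior pixels recover d = 4; real field imagery needs the cost smoothing and uniqueness checks that SGBM provides.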
dc.format.mimetype PDF
dc.identifier.doi https://doi.org/10.31274/td-20240329-387
dc.identifier.uri https://dr.lib.iastate.edu/handle/20.500.12876/WwPg1BRz
dc.language.iso en
dc.language.rfc3066 en
dc.subject.disciplines Agricultural engineering en_US
dc.subject.keywords Convolutional neural network en_US
dc.subject.keywords Plant phenotyping en_US
dc.subject.keywords Point cloud en_US
dc.subject.keywords Stereo vision en_US
dc.title High-throughput robotic plant phenotyping using 3D machine vision and deep neural networks
dc.type dissertation en_US
dc.type.genre dissertation en_US
dspace.entity.type Publication
relation.isOrgUnitOfPublication 8eb24241-0d92-4baf-ae75-08f716d30801
thesis.degree.discipline Agricultural engineering en_US
thesis.degree.grantor Iowa State University en_US
thesis.degree.level dissertation
thesis.degree.name Doctor of Philosophy en_US
File
Original bundle
Name: Xiang_iastate_0097E_20215.pdf
Size: 3.93 MB
Format: Adobe Portable Document Format
License bundle
Name: license.txt
Size: 0 B
Format: Item-specific license agreed upon to submission