Object detection for low-contrast complex background applications

Date
2023-05
Authors
Sangha, Harman Singh
Major Professor
Darr, Matthew
Committee Members
Peschel, Joshua
Kaleita, Amy
McNaull, Robert
Zhou, Yuyu
Department
Agricultural and Biosystems Engineering
Abstract
This research focused on advancing the understanding of how model structure and image augmentation affect overall model performance in computer vision machine learning applications. The first goal of this dissertation was to develop an encompassing definition of real-world agricultural dataset scenes that explains the nature of images in real-world agricultural applications. The second goal was to evaluate the effect of model architecture, in both one-stage and two-stage detectors, on model performance for low-contrast complex background applications, and to gauge the influence of different photometric image augmentation methods on the performance of a standard one-stage and two-stage detector. The third goal was to benchmark and perform initial training of a cotton boll object detection deep learning model that applies best practices for image augmentation, to be used on a row unit of a large-scale spindle cotton picker. A definition was provided that characterizes scenes in real-world agricultural datasets as low-contrast complex background. For the one-stage detector, smaller models performed better than larger models. Among the image augmentation methods, every method except random contrast significantly improved model performance; even when random contrast was combined with other augmentation methods, there was no considerable improvement. For the third study, the results indicate the model was able to detect cotton bolls in a dark space when assisted with LED lights. The final model achieved a mAP of 0.688, corresponding to approximately 69% detection accuracy. The trained model can be further deployed on large-scale machinery.
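The photometric augmentations the abstract evaluates (random contrast among them) can be illustrated with a minimal sketch. This is not the dissertation's code; the function names, parameter ranges, and the grayscale pixel-list representation are all illustrative assumptions. Random brightness shifts every intensity by a random offset, while random contrast scales intensities about the image mean by a random factor.

```python
import random

def random_brightness(pixels, max_delta=32.0, rng=None):
    """Shift all intensities by one random offset in [-max_delta, max_delta].

    `pixels` is a flat list of grayscale intensities in [0, 255]
    (an illustrative stand-in for a real image array).
    """
    rng = rng or random.Random()
    delta = rng.uniform(-max_delta, max_delta)
    return [min(255.0, max(0.0, p + delta)) for p in pixels]

def random_contrast(pixels, lower=0.5, upper=1.5, rng=None):
    """Scale intensities about the image mean by a random factor.

    A factor below 1 flattens contrast; above 1 stretches it.
    """
    rng = rng or random.Random()
    factor = rng.uniform(lower, upper)
    mean = sum(pixels) / len(pixels)
    return [min(255.0, max(0.0, mean + factor * (p - mean))) for p in pixels]

# Example: chain both augmentations on a toy 4-pixel "image".
image = [10.0, 120.0, 200.0, 40.0]
augmented = random_contrast(random_brightness(image, rng=random.Random(0)),
                            rng=random.Random(0))
```

In a training pipeline these would be applied per image each epoch, so the detector sees a slightly different exposure and contrast of the same scene every time; the dissertation's finding was that the contrast perturbation, unlike the others, did not measurably help.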