Comparing Visual Assembly Aids for Augmented Reality Work Instructions

Date
2017-01-01
Authors
MacAllister, Anastacia
Hoover, Melynda
Gilbert, Stephen
Oliver, James
Radkowski, Rafael
Garrett, Timothy
Holub, Joseph
Winer, Eliot
Terry, Scott
Davies, Paul
Department
Mechanical Engineering; Virtual Reality Applications Center; Electrical and Computer Engineering; Psychology; Materials Science and Engineering; Industrial and Manufacturing Systems Engineering
Abstract

Increased product complexity and the focus on zero defects, especially when manufacturing complex engineered products, mean new tools are required to help workers conduct challenging assembly tasks. Augmented reality (AR) has shown considerable promise over traditional methods for delivering work instructions. Many proof-of-concept systems have demonstrated the feasibility of AR, but little work has been devoted to understanding how users perceive different AR work instruction interface elements. This paper presents a between-subjects study examining how interface elements for object depth placement in a scene impact a user’s ability to quickly and accurately assemble a mock aircraft wing in a standard work cell. For object depth placement, modes with varying degrees of 3D modeled occlusion were tested, including a control group with no occlusion, virtual occlusion, and occlusion by contours. Results for total assembly time and total errors indicated no statistically significant difference between interfaces, leading the authors to conclude that a floor has been reached for optimizing the current assembly when using AR for work instruction delivery. However, examining a handful of highly error-prone steps showed the impact different types of occlusion have on helping users correctly complete an assembly task. The results of the study provide insight into how to construct an interface for delivering AR work instructions using occlusion. Based on these results, the authors recommend customizing the occlusion method based on the features of the required assembly task. The authors also identified a floor effect for the steps of the assembly process that involved picking the necessary parts from tables and bins. The authors recommend using vibrant outlines and large textual cues (e.g., numbers on parts bins) as interface elements to guide users during these types of “picking” steps.

Comments

This proceeding is published as MacAllister, Anastacia, Melynda Hoover, Stephen Gilbert, James Oliver, Rafael Radkowski, Timothy Garrett, Joseph Holub, Eliot Winer, Scott Terry, and Paul Davies. "Comparing Visual Assembly Aids for Augmented Reality Work Instructions." In Proceedings of the 2017 Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC). Volume 2017, Paper no. 17208. Arlington, VA: National Training and Simulation Association. Posted with permission.
