Evaluating the Microsoft Kinect compared to the mouse as an effective interaction device for medical imaging manipulations
Volume-rendered medical images give medical professionals more information than two-dimensional (2D) slices, supporting more advanced diagnoses. Three-dimensional (3D) images enable a non-invasive depiction of a patient's body as a surgeon would expect to see it during invasive surgery. These generated 3D representations can convey information about the patient to the surgeon more effectively and efficiently, bypassing the mental reconstruction radiologists must perform to interpret the same patient data displayed as a 2D array of images. Time demands on doctors prohibit mastering complicated software packages with steep learning curves. Medical imaging software must therefore be easy to learn and offer effective functionality if it is to be used by and accessible to medical professionals. Interaction with the software is a key component of usability and accessibility. Commercial off-the-shelf (COTS) interaction devices provide new opportunities to manipulate 3D medical imaging software and further reduce its traditionally steep learning curve. Introducing these devices into medical environments, however, raises new concerns about sterilization and effective utilization. Certain COTS devices offer sterile, touch-less interaction that would be ideal for medical operating rooms (OR), anatomy labs, or clinics, allowing medical professionals direct control of the patient data being examined. This thesis explores the usability and functionality of the Microsoft Kinect™ as an interaction device for medical imaging technology. A user study was conducted to evaluate participants' performance and experience while completing a task called windowing: changing the range of tissue densities displayed in an anatomical image.
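The windowing task can be sketched as a linear window/level mapping of raw intensities (e.g. Hounsfield units) to display grayscale. The following is a generic illustration of that mapping, not the implementation used in the thesis software; the function name and the sample values are illustrative.

```python
import numpy as np

def apply_window(pixels, center, width):
    """Map raw intensity values (e.g. Hounsfield units) to 8-bit
    grayscale using a linear window defined by its center (level)
    and width. Values below the window render black, above it white.
    This is a generic window/level sketch, not the thesis software."""
    low = center - width / 2.0
    high = center + width / 2.0
    scaled = (pixels - low) / (high - low)          # normalize to [0, 1]
    return (np.clip(scaled, 0.0, 1.0) * 255).astype(np.uint8)

# Illustrative soft-tissue window (center 40 HU, width 400 HU)
ct_values = np.array([-1000, -100, 40, 240, 1000])  # air .. dense bone
print(apply_window(ct_values, center=40, width=400))
# -> [  0  38 127 255 255]
```

Widening the window (a larger width value) compresses a broader range of densities into the same grayscale span, lowering contrast but showing more tissue types at once, which is relevant to the window-width differences reported below.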
Participants completed four rounds of five tasks, viewing particular anatomical features across two datasets. Participants using either device achieved 75% accuracy in identifying the anatomy, while those using the Kinect (μ = 9.739 minutes) spent on average two minutes less completing the series of 20 tasks than those using the mouse (μ = 11.709 minutes). Participants using the Kinect also used larger window width values than mouse users; however, this did not appear to affect their accuracy on the identification tasks.