Multimodality Assistive Technology for Users with Dyslexia

Gordon, Shira
Major Professor: James Oliver
Mechanical Engineering

To assist dyslexic users with reading and writing, several approaches have been explored that convey text to users with text-to-speech (TTS) technology and transcribe what the user dictates with speech-to-text (STT) technology. Currently available assistive technologies are limited by their compatibility with digital-only formats and by the need to listen to speech synthesis and dictate out loud, which can create social stigma. Thinking beyond single-modality solutions and considering multimodal, multisensory approaches opens the door to creative ways of helping people with dyslexia. An alternative approach would assist with reading all text, regardless of format, in a way that is inaudible to anyone other than the user, and would let the user transcribe their thoughts without the need to speak out loud. This work explores a multimodal wearable device for assistance with reading and writing for users with dyslexia, using augmented reality for input, and neuromuscular signals picked up by electrodes and bone conduction for output. The device would recognize text from digital, printed, or environmental sources, highlight the passage being read, and read that text to the user through bone conduction output, so the sound is audible only to the user. For transcription, electrodes in the device would pick up neuromuscular signals in the jaw and face triggered by internal verbalization, requiring users only to say words "in their head." All of this would take place through a device barely distinguishable from an average pair of glasses, reducing the stigma a dyslexic user may experience when needing to read or write in social situations.
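The read-aloud path described above can be sketched as a small pipeline: recognize text from a captured source, highlight each word in the AR display as it is read, and route synthesized speech to an output channel private to the wearer. This is a minimal illustrative sketch only; all class and function names here (`ReadAloudPipeline`, `BoneConductionOutput`, etc.) are hypothetical stand-ins, not components from the thesis, and real OCR and speech synthesis are replaced with stubs.

```python
# Hypothetical sketch of the read-aloud pipeline: OCR'd text is highlighted
# word by word while audio goes to a user-private (bone conduction) channel.
from dataclasses import dataclass
from typing import List


@dataclass
class RecognizedText:
    """Text recognized from a digital, printed, or environmental source."""
    source: str            # e.g. "digital", "printed", "environmental"
    words: List[str]


class BoneConductionOutput:
    """Stub for an audio channel audible only to the wearer."""

    def __init__(self) -> None:
        self.spoken: List[str] = []

    def speak(self, word: str) -> None:
        # A real device would synthesize speech here; we just record it.
        self.spoken.append(word)


class ReadAloudPipeline:
    """Highlights the word currently being read and speaks it privately."""

    def __init__(self, output: BoneConductionOutput) -> None:
        self.output = output
        self.highlighted: List[str] = []

    def read(self, text: RecognizedText) -> None:
        for word in text.words:
            self.highlighted.append(word)  # AR display highlights this word
            self.output.speak(word)        # audio stays inaudible to others


out = BoneConductionOutput()
pipeline = ReadAloudPipeline(out)
pipeline.read(RecognizedText(source="printed",
                             words=["multimodal", "assistive", "device"]))
print(out.spoken)  # → ['multimodal', 'assistive', 'device']
```

The key design point the sketch mirrors is the coupling of the two modalities: the visual highlight and the private audio advance together, word by word, so the wearer gets synchronized multisensory feedback without anything audible to bystanders.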

2019