Multimodality Assistive Technology for Users with Dyslexia
To assist dyslexic users with reading and writing, several approaches have been explored that convey text to the user through text-to-speech (TTS) technology and transcribe the user's dictation through speech-to-text (STT) technology. Currently available assistive technologies suffer from two limitations: they are largely compatible only with digital formats, and they require the user to play synthesized speech and to dictate out loud, creating social stigma. Thinking beyond single-modality solutions and expanding the possibilities to multimodal, multisensory ones opens the door to creative ways of helping people with dyslexia. An alternative approach would assist with reading all text, regardless of format, in a way that is inaudible to anyone other than the user, and would let the user transcribe their thoughts without needing to speak out loud. This work explores a multimodal wearable device that assists users with dyslexia in reading and writing, using augmented reality for input, and neuromuscular signals picked up by electrodes together with bone conduction for output. The device would recognize text from digital, printed, or environmental sources, highlight the passage being read, and read that text to the user through bone conduction output, so the sound is audible only to the user. For transcription, electrodes in the device would pick up neuromuscular signals in the jaw and face that are triggered by internal verbalization, requiring users only to say words "in their head." All of this would take place through a device that is barely distinguishable from an average pair of glasses, reducing the stigma a dyslexic user may experience when needing to read or write in social situations.
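The read-aloud behavior described above couples highlighting with speech: the recognized text is broken into short phrases, and the phrase currently being spoken is highlighted in the augmented-reality display. A minimal sketch of that phrase-chunking step is below; it is an illustration, not the paper's implementation, and the `ReadingChunk` structure, `chunk_for_reading` function, and single-space-text assumption are all hypothetical. The OCR input and the bone-conduction TTS playback are stubbed out entirely.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReadingChunk:
    text: str   # the phrase to be spoken next (hypothetical structure)
    start: int  # character offset into the recognized text, for highlighting
    end: int

def chunk_for_reading(recognized_text: str, max_words: int = 5) -> List[ReadingChunk]:
    """Split recognized (OCR'd) text into short phrases so the AR display
    can highlight the words currently being spoken over bone conduction.
    Assumes words are separated by single spaces (a sketch-level assumption)."""
    words = recognized_text.split()
    chunks: List[ReadingChunk] = []
    cursor = 0
    for i in range(0, len(words), max_words):
        phrase = " ".join(words[i:i + max_words])
        start = recognized_text.index(words[i], cursor)
        end = start + len(phrase)
        chunks.append(ReadingChunk(phrase, start, end))
        cursor = end
    return chunks
```

Each chunk would then be handed to a TTS engine while its `(start, end)` span is highlighted, advancing chunk by chunk as playback completes.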
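For the transcription side, a typical first stage in processing raw electrode signals is to window the signal and compute an energy feature, so that only windows with muscle activity above a calibration threshold are passed to a word classifier. The sketch below shows that windowing step only; the frame lengths and the `frame_rms` function are illustrative assumptions, not the device's actual signal chain, and the downstream classifier is not shown.

```python
from typing import List, Sequence

def frame_rms(signal: Sequence[float], frame_len: int = 250, hop: int = 125) -> List[float]:
    """Root-mean-square energy over sliding windows of one raw electrode
    channel. High-energy windows would indicate internal verbalization and
    be forwarded to a (not shown) word classifier; quiet windows are skipped.
    frame_len and hop are in samples (hypothetical values)."""
    rms: List[float] = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        rms.append((sum(x * x for x in frame) / frame_len) ** 0.5)
    return rms
```

In a calibration phase, the user would silently mouth nothing at all so the system can estimate a resting-energy threshold per electrode site.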