Augmenting Human Communication with Cutting-Edge Technology
Millions of people with neurological conditions, such as cerebral palsy or ALS, are physically prevented from producing natural human speech. They are intelligent and socially aware, and want to participate fully in a social world, but largely depend on caregivers to interpret grunts or gestures in response to prompted guessing. For such a person, generating an original sentence often requires tediously cycling through the alphabet, slowly assembling letters into words and words into sentences.
Computers have enormous potential to augment human communication with speech, as eyeglasses do for vision and hearing aids do for hearing. The field of Augmentative and Alternative Communication (AAC) has produced some products to help, allowing people to construct words and sentences from computer menus. Most of these products, however, are limited by their user interfaces.
Tom did early work on AAC 30 years ago, using AI to accelerate the rate of speech generation (see Knowledge-based Communication Prosthesis for details). Yet most AAC products are still using the primitive scanning techniques of the 1980s. Since then, there have been amazing advances in AI and in advanced user interfaces, such as eye and head motion detection for augmented reality (AR) and brain-computer interfaces (BCI). But these UI technologies have been aimed largely at able-bodied people operating games or simulations, or at scientific study.
Startup company Cognixion is applying these cutting-edge technologies to AAC and control applications. Its mission is to give people of all abilities the power of speech. The company has created an iPad app that allows you to speak with your eyes. It uses the face-recognition cameras and AR technology available on iOS devices, combined with proprietary AI algorithms, to provide an interaction modality that can track where you are looking on the screen. With this tracking input, users can quickly navigate a control interface to generate speech. The same modality drives input to a virtual assistant, which can then issue commands to control IoT devices, like opening a door or turning on the lights. This makes a huge difference in the lives of the people who use it.
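Gaze-driven interfaces like this commonly turn a stream of "where is the user looking" samples into discrete selections via dwell time: holding one's gaze on a button for a fraction of a second triggers it. The sketch below illustrates that general technique only; it is not Cognixion's implementation, and the `DwellSelector` class, button layout, and 0.8-second threshold are all illustrative assumptions.

```python
# Minimal sketch of dwell-time selection for a gaze-driven interface.
# All names and parameters here are hypothetical, for illustration only.

DWELL_SECONDS = 0.8  # assumed dwell threshold before a gaze "click" fires


class DwellSelector:
    def __init__(self, buttons, dwell=DWELL_SECONDS):
        self.buttons = buttons      # name -> (x, y, width, height) in screen px
        self.dwell = dwell
        self.current = None         # button the gaze is currently resting on
        self.start = None           # timestamp when the gaze entered it

    def _hit(self, x, y):
        """Return the name of the button containing point (x, y), if any."""
        for name, (bx, by, bw, bh) in self.buttons.items():
            if bx <= x < bx + bw and by <= y < by + bh:
                return name
        return None

    def update(self, x, y, t):
        """Feed one gaze sample at time t; return a button name on selection."""
        name = self._hit(x, y)
        if name != self.current:
            # Gaze moved to a new target (or off all targets): restart the timer.
            self.current, self.start = name, t
            return None
        if name is not None and t - self.start >= self.dwell:
            self.start = t  # reset so the button doesn't re-fire every frame
            return name
        return None


# Usage: gaze rests inside the "speak" button across four samples.
sel = DwellSelector({"speak": (0, 0, 100, 50)})
events = [sel.update(10, 10, t) for t in (0.0, 0.3, 0.6, 0.9)]
print(events)  # → [None, None, None, 'speak']
```

The timer reset after firing is a deliberate choice: without it, a steady gaze would re-trigger the button on every subsequent frame.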
Recently, Cognixion has produced the ultimate in AAC technology — a wearable product that allows you to speak with your brain. It is an AR headset that projects computer-generated options overlaid on the user's visual field. When the user thinks about one of the options, like a word they want to speak, electroencephalogram (EEG) electrodes on the device detect that intention. This has been theoretically possible for a while and demonstrated under laboratory conditions, but the EEG signal has been too noisy to be useful as a practical input modality. By combining EEG signal processing with AI models, Cognixion has significantly improved the accuracy and reliability of interpretation. As a result, the company has produced a breakthrough practical AAC solution.
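One common laboratory approach to this kind of EEG-driven selection is frequency tagging: each on-screen option flickers at a distinct rate, and attending to an option produces a measurable response at that frequency in the EEG. The sketch below shows that general idea on synthetic data; it is an assumption-laden illustration, not Cognixion's actual pipeline, and the frequencies, sampling rate, and option names are invented for the example.

```python
# Illustrative sketch of frequency-tagged EEG classification on synthetic data.
# Not Cognixion's method; all parameters below are assumptions for the example.
import numpy as np

FS = 250  # assumed EEG sampling rate in Hz
# Hypothetical flicker frequencies assigned to three on-screen options
OPTION_FREQS = {"yes": 8.0, "no": 12.0, "help": 15.0}


def band_power(signal, fs, freq, bandwidth=0.5):
    """FFT-based power of `signal` in a narrow band around `freq`."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= freq - bandwidth) & (freqs <= freq + bandwidth)
    return spectrum[mask].sum()


def classify_intention(signal, fs=FS):
    """Pick the option whose tagged frequency dominates the EEG spectrum."""
    scores = {opt: band_power(signal, fs, f) for opt, f in OPTION_FREQS.items()}
    return max(scores, key=scores.get)


# Synthetic 2-second "EEG" trace: a 12 Hz response buried in noise,
# simulating a user attending to the option flickering at 12 Hz.
t = np.arange(0, 2.0, 1.0 / FS)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 12.0 * t) + 0.5 * rng.standard_normal(t.size)

print(classify_intention(eeg))  # → no
```

Real systems replace the simple band-power comparison with trained models that cope with the far lower signal-to-noise ratio of genuine EEG — which is precisely the gap the AI-based processing described above addresses.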
Professor Lorenzo Minelli, who spoke nine languages until rendered speechless by a stroke, used an early prototype of the AR/BCI headset to say this using only his brain:
“Medicine is an art as well as a science, and great art requires great imagination, the capacity for empathy. Take the opportunity to see your own suffering in the suffering of others. For when you can see others’ suffering as your own, you will be greatly rewarded.”
This is a case where AI is beautifully aligned with the elemental human need to communicate, augmenting our ability to speak when injury or circumstance might otherwise have made it impossible. That's Humanistic AI.