Apple Partners Up to Improve Voice Recognition Systems for Disabled Users


While Apple’s voice assistant, Siri, is still often ridiculed for mishearing requests, it has become much more accurate over the years. A new project aims to improve that functionality even further. The University of Illinois Urbana-Champaign (UIUC) is working on the Speech Accessibility Project along with Apple and others to improve voice recognition systems for disabled users.

UIUC’s Speech Accessibility Project

The project’s aim is to expand the range of speech patterns that voice recognition systems can understand. Currently, people with speech impediments and disabilities tend to have more difficulty with Siri and other such voice assistants than the general public.

In particular, the Speech Accessibility Project focuses on improving voice recognition systems for people with a handful of diseases and disabilities. These include Lou Gehrig’s disease (amyotrophic lateral sclerosis), Parkinson’s disease, cerebral palsy and Down syndrome.

These individuals, quite often, simply cannot benefit from current speech recognition tools and voice assistants. As much as the tools have improved, their algorithms remain unable to interpret the speech patterns these diseases, ailments and disabilities cause.

The idea is that speech recognition systems could dramatically improve the quality of life for such users, especially when they also experience mobility issues. Data samples are being collected from individuals “representing a diversity of speech patterns” in order to create a private, de-identified dataset.
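
The project has not published its data pipeline, but de-identification of this kind generally means replacing speaker identities with random codes and keeping only the fields needed for training. The sketch below is purely illustrative, with a hypothetical record schema that is not the project’s actual format:

```python
import uuid

# Map real speaker identities to random codes so that released records carry
# the code but never the name. (Hypothetical schema for illustration; the
# Speech Accessibility Project's actual pipeline has not been published.)
_pseudonyms: dict[str, str] = {}

def deidentify(speaker_name: str, recording_path: str, transcript: str) -> dict:
    # A stable random code per speaker lets multiple clips from the same
    # person stay grouped without storing anything identifying.
    code = _pseudonyms.setdefault(speaker_name, uuid.uuid4().hex[:12])
    return {
        "speaker_id": code,        # random code, not linkable to the speaker
        "audio": recording_path,   # path to the audio clip
        "transcript": transcript,  # reference text for supervised training
    }

print(deidentify("Jane Doe", "clips/clip_001.wav", "turn on the lights"))
```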

Apple Joins With Other Tech Giants to Help UIUC Improve Voice Recognition Systems

Apple, Amazon, Google, Meta and Microsoft have partnered with UIUC in the project. So have various non-profit organizations. Beginning with American English, the groups hope to use the dataset to train machine learning models. The goal here is to help the models better cope with different speech patterns, impediments and disabilities.
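
None of the partners has described the training setup, but the general idea of continuing to train an existing speech recognizer on recordings that reflect a wider range of speech patterns can be sketched with an open-source model. The checkpoint, single-sample loop, and hyperparameters below are illustrative assumptions, not the project’s actual pipeline:

```python
# Illustrative fine-tuning sketch using the open-source wav2vec 2.0 model;
# the Speech Accessibility Project's actual models and data are not public.
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

def training_step(waveform, transcript):
    """One gradient step on a single (audio, transcript) pair.

    `waveform` is a 1-D float array sampled at 16 kHz; `transcript` is the
    reference text. Batching, padding, and evaluation are omitted.
    """
    inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
    labels = processor.tokenizer(transcript, return_tensors="pt").input_ids
    loss = model(inputs.input_values, labels=labels).loss  # CTC loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```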

Getting so many tech giants on board for the project can only help. Each has its own virtual assistant or speech recognition features. Integrating the study with all of their tools could speed up development. At a minimum, it will provide more resources and avoid duplication of efforts.

Mark Hasegawa-Johnson, the UIUC professor of electrical and computer engineering leading the project, pointed out the importance of the work.

“The option to communicate and operate devices with speech is crucial for anyone interacting with technology or the digital economy today. Speech interfaces should be available to everybody, and that includes people with disabilities. This task has been difficult because it requires a lot of infrastructure, ideally the kind that can be supported by leading technology companies, so we’ve created a uniquely interdisciplinary team with expertise in linguistics, speech, AI, security, and privacy to help us meet this important challenge.”

The project also includes Heejin Kim, a research professor in linguistics, and Clarion Mendes, a clinical professor in speech and hearing science who is also a speech-language pathologist. Staff members from UIUC’s Beckman Institute for Advanced Science and Technology, including IT professionals, are also helping the project.
