There is an accent gap in speech recognition technology. Research shows that speech recognition systems are far less accurate at understanding nonnative accents than they are at understanding the speech of white, non-immigrant, upper-middle-class Americans.
This is an unpalatable yet unsurprising phenomenon: it was this demographic that had access to the technology and trained it from the beginning. Unfortunately, the result is speech models that are more useful to some people than to others.
Beyond the ethical case for creating more inclusive technology, accent gaps in voice recognition models are also bad for business. According to the U.S. Census, over 35 million people in the United States are native speakers of a language other than English, and 60 percent of them speak Spanish at home.
For companies with AI solutions to compete in the large nonnative English-speaking market in the U.S., speech models need to understand a wide range of Spanish accents, originating from across the Americas and, indeed, the rest of the world.
Join DefinedCrowd’s Director of Machine Learning, Christopher Shulby; Director of Product Management, Daan Baldewijns; and Vice President of Product Management, Andrew Webb, for a fascinating discussion on accent bias in artificial intelligence: how to identify it, test for it, and correct it.