Mozilla's Common Voice: The immediate future of human-machine interaction lies in voice control, with smart speakers, home appliances, and phones listening for commands and carrying them out.
However, voice assistants such as Amazon's Alexa or Apple's Siri are built by overwhelmingly white, male development teams, whose biases appear to carry over into the products.
For example, if you speak with an unfamiliar accent, or English is not your native language, chances are the assistant will rarely understand what you are asking for.
To address this, Mozilla, the free-software community, created Common Voice in 2017: a tool that gathers voices into datasets for building a different kind of AI, one that represents the world's population, not just the West.
Common Voice works by publicly releasing an ever-growing dataset that any company can use to research, build, and train its own voice applications, improving voice recognition for everyone, regardless of language, gender, age, or accent.
Currently, the dataset contains more than 2,400 hours of voice data across 29 languages, including English, French, German, Chinese, and Kabyle.
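To give a sense of how the released data is consumed, here is a minimal sketch of filtering a Common Voice-style metadata file. Common Voice releases ship audio clips alongside tab-separated metadata files; the column names used below (`path`, `sentence`, `up_votes`, `down_votes`, `age`, `gender`, `accent`) follow that convention, but they are assumptions here and should be checked against the version you actually download. The sample rows are invented for illustration.

```python
import csv
import io

# Tiny invented sample in the shape of a Common Voice metadata TSV.
SAMPLE_TSV = """\
client_id\tpath\tsentence\tup_votes\tdown_votes\tage\tgender\taccent
a1\tclip_001.mp3\tThe quick brown fox.\t3\t0\ttwenties\tfemale\tindia
b2\tclip_002.mp3\tHello world.\t1\t2\tthirties\tmale\tus
c3\tclip_003.mp3\tGood morning.\t2\t0\t\t\t
"""

def load_validated_clips(tsv_text):
    """Keep only clips whose community up-votes outnumber down-votes."""
    reader = csv.DictReader(io.StringIO(tsv_text), delimiter="\t")
    return [row for row in reader
            if int(row["up_votes"]) > int(row["down_votes"])]

clips = load_validated_clips(SAMPLE_TSV)
print([c["path"] for c in clips])  # prints ['clip_001.mp3', 'clip_003.mp3']
```

The transcript in the `sentence` column, paired with the audio file named in `path`, is exactly the kind of (audio, text) pair a speech-to-text engine trains on; the crowd-sourced vote columns are how the project separates verified recordings from unverified ones.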
"Existing speech recognition services are only available in cost-effective languages," Kelly Davis, Mozilla's head of Machine Learning, told TNW.
"Speech is becoming the preferred way to interact with technology, helped along by new services from Amazon (Alexa) and Google (Google Assistant). These voice assistants have transformed the way we communicate with technology. However, the innovative momentum of this technology remains largely untapped, because developers, researchers, and start-ups around the world working on voice recognition face one problem: a lack of voice data in many languages for training speech-to-text engines," explains Davis.
Although Davis believes AI systems are slowly beginning to improve, they are far from where they need to be. At the end of 2017, Amazon added an Indian-English accent to Alexa, allowing her to pronounce Indian phrases and understand some shades of Indian speech.
But the voice assistant is still very much anchored in the West: six of the seven languages it supports are European.
In early 2018, Google added Hindi support to its voice assistant, but the feature was limited to a few questions. A few months after the initial release, Google updated the feature so that Google Assistant can now hold a conversation in Hindi, the third most widely spoken language in the world.
"Efforts to bridge the AI gap have largely fallen to independent groups," Davis said.
For example, Black in AI, a project looking for ways to integrate non-Western voice features into AI, was started by former Google employees in 2017.
However, it did not begin as a formal extension of the company's work; it was started to address what its founders saw as a pressing need in the community.
Davis argues that, for many people, voice recognition technology offers little benefit right now.
"Think about how speech recognition could be used by minority language speakers to allow more people to access the technology and services that the internet can offer, even if they have never learned to read."
"The same goes for the visually impaired or the disabled, but today's market does not seem to be able to help them."
The Common Voice project hopes to accelerate voice-data collection in every language and every part of the world, regardless of accent, gender, or age.
"By making this data available, and by developing a speech recognition engine (the Deep Speech project), we can empower entrepreneurs and communities to bridge the gaps," Davis added.
If you want to help diversify the voices behind Common Voice, make a recording of yourself reading the suggested sentences, or listen to other people's recordings and verify that they are accurate.