Chris Doty, Content Marketing Manager at RapidMiner, was recently featured in an article from Speech Technology Magazine.
Experts agree that biases in algorithms come from the developers who build them: even when unintentional, developers bring their own biases, shaped by their backgrounds and experiences. Chris Doty shares:
“Simply put, AI models are biased because the data that feeds them is biased. For example, if you have a speech recognition system trained only on speakers of American English, it’s going to struggle with…people from Australia, speakers of non-standard varieties of American English, non-native speakers, etc.”
Doty adds that bias can also creep in over time through model drift, the technical term for the disconnect that occurs when a model's static training data is not updated to reflect changes in the real world.
“Ultimately, models are only as good as the data that is used to train them,” Doty says. “Ensuring you have a wide range of relevant, up-to-date data is key, especially when it comes to speech projects.”
You can read the full article here: Overcoming Bias Requires an AI Reboot