
Is Bias Baked Into AI?


Jutta Treviranus, leading expert in inclusive design, discusses what is required to ensure AI accounts for diversity and achieves true inclusivity.

John Baker

In the past year, AI has been making the news a lot, though not always in a good way. Amazon’s facial recognition AI recently made headlines when it was revealed that the system misidentified the gender of images of women with dark skin 31% of the time. Uber made the news when a story broke about transgender drivers being locked out of its app after its face-verification feature failed to recognize them. But it’s not just facial recognition AIs that are under the microscope: Amazon also scrapped a recruiting tool that favored men’s resumes over women’s, and a recent study found that some mortgage-lending algorithms are biased against non-white homebuyers.

Initially, some people worried that the unconscious, or perhaps conscious, biases of programmers were being replicated in the technology. The resulting erosion of confidence in AI even led the city of San Francisco to ban government use of facial recognition technology. But the truth is more complex and harder to fix: the bias creeping into AI is, in some ways, a reflection of biases embedded in our culture itself, which surface in the data used to fuel AI.

I recently spoke with Dr. Jutta Treviranus, Professor and the Director and Founder of the Inclusive Design Research Centre and the Inclusive Design Institute at the Ontario College of Art and Design University in Toronto. Dr. Treviranus is one of the world’s foremost authorities in the field of inclusive design, work that has led to her addressing the European Parliament and the United Nations. Her research focuses on supporting inclusive design through authoring and development tools and, more broadly, on recognizing the importance of diversity by considering outliers and people at the margins to ensure greater economic and social prosperity.

She told me a story about how her recent work reveals some of the bias inherent in AI.

“I was working with some learning models that were intended to direct traffic in intersections, and I tested them to see what they would do with an unexpected scenario—in this case someone pushing a wheelchair backwards through a crossing,” said Treviranus. “I tested six learning models, all of which initially decided to keep driving through the intersection—which people thought was a reflection of the model not having enough data. After researchers exposed the AI to more data, the result was in fact dramatically different: the AI decided to run the person over, but with much greater confidence.”

The issue with AI, according to Treviranus, is often not the size of the data sample used to train it but the quality of that sample. The bias can begin in the company itself, where homogeneous cultures unknowingly create sampling biases: only 10% of the AI research staff at Google are women, for example, and only 2.5% of its workforce is black. But the bias can also come from the sample itself. One large facial recognition program relied on data sets that were more than 75% male and more than 80% white, which meant it performed very poorly on people who were, according to the data inputs, “outliers.”
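A skew like the one described above is easy to measure before training ever begins. The sketch below is a minimal, illustrative audit, not any particular team's tooling: `composition_report` and its `floor` threshold are hypothetical names, assumed here to flag any group whose share of the data falls below a chosen fraction.

```python
from collections import Counter

def composition_report(labels, floor=0.25):
    """Compute each group's share of a dataset and flag groups whose
    representation falls below `floor` (an illustrative threshold)."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {
        group: {"share": n / total, "underrepresented": n / total < floor}
        for group, n in counts.items()
    }

# Toy data mirroring the skew described above: 80% one group, 20% another.
report = composition_report(["white"] * 80 + ["nonwhite"] * 20)
# report["nonwhite"]["underrepresented"] is True at this floor.
```

An audit like this only surfaces the problem; deciding which groups and thresholds matter is itself a design decision that benefits from the inclusive-design lens Treviranus advocates.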

Because the data used to train AI represents “average” populations, it often performs poorly when presented with someone who is not average. Think, for example, of speech recognition systems: they have a hard time understanding someone speaking heavily accented English. One solution, says Treviranus, is to build inclusiveness into the design from the beginning of a project. This means considering access for all, design for all, and designing digital systems that encompass more individuals.

Another approach is to take a large, multivariate data set and flatten its Gaussian curve by allowing no more than a small number of repeats of any data element, then train the models on this capped data, effectively leveling the playing field by taking away the advantage of being like many other people. While a model trained this way may take longer to reach useful decisions, it can ultimately save money because fewer fixes are needed along the way, it can respond to changing contexts, and it can deal better with the unexpected, because flexibility is built in from the beginning. Where the stakes are high, as in education, health care, or cybersecurity, taking that additional time is critical.

While Treviranus sees promise in adaptive learning, she cautions that adaptive systems shouldn’t be used to flatten our differences. Ultimately, we need to make learning systems inclusive by design and we need to be okay with different learners reaching different destinations.

“Our challenge,” says Treviranus, “is that we’ve taken our need for simplicity and applied it to our machines. We should use them, instead, to help us as people better deal with and understand complexity and diversity.”

Speaking with Treviranus challenges all of us to think about how much further AI needs to go to remove bias and achieve true inclusivity. As tech leaders, it’s our responsibility to challenge the way we move forward with AI and to embrace inclusivity by design. To reach every learner, we need to think more deeply about the needs, circumstances, and uniqueness of each one. While it isn’t a simple task, our job isn’t done until all learners can move forward together.

