
Is Bias Baked Into AI?

5 Min Read

Jutta Treviranus, a leading expert in inclusive design, discusses what is required to ensure AI accounts for diversity and achieves true inclusivity.

John Baker

CEO and Board Chair

In the past year, AI has been making the news a lot, though not always in a good way. Amazon’s facial recognition system recently made headlines when it was revealed to have an error rate of 31% when identifying the gender of images of women with dark skin. Uber drew criticism when a story broke about transgender drivers being locked out of its app after its facial recognition security check failed to recognize them. And it’s not just facial recognition that’s under the microscope: Amazon also had to scrap a recruiting tool that favored men’s resumes over women’s, and a recent study revealed that some mortgage-lending algorithms are biased against non-white homebuyers.

Initially, some people worried that the unconscious, or perhaps conscious, biases of the programmers were being replicated in the technology. The resulting erosion of confidence in AI has had real consequences, including the city of San Francisco banning the use of facial recognition technology by city agencies. But the truth is far more complex and harder to fix: the bias creeping into AI is, in some ways, a reflection of biases embedded in our culture itself, which manifest in the data used to train AI.

I recently spoke with Dr. Jutta Treviranus, professor and founding director of the Inclusive Design Research Centre and the Inclusive Design Institute at OCAD University (the Ontario College of Art and Design University) in Toronto. Dr. Treviranus is one of the world’s foremost authorities on inclusive design, and her expertise has led to her addressing the European Parliament and the United Nations. Her work focuses on tools and practices that support inclusive design and, more broadly, on recognizing the importance of diversity by attending to outliers and people at the margins as a path to greater economic and social prosperity.

She told me a story about how her recent work reveals the bias inherent in AI.

“I was working with some learning models that were intended to direct traffic at intersections, and I tested them to see what they would do with an unexpected scenario, in this case someone pushing a wheelchair backwards through a crossing,” said Treviranus. “I tested six learning models, all of which initially decided to keep driving through the intersection, which people thought was a reflection of the model not having enough data. After researchers exposed the AI to more data, the result was in fact dramatically different: the AI decided to run the person over, but with much greater confidence.”

The issue with AI, according to Treviranus, is often not the size of the data sample being used to train it but rather the quality of that sample. The bias can begin in the company itself, where a homogeneous culture unknowingly creates sample biases: only 10% of the AI research staff at Google are women, for example, and only 2.5% of its workforce is black. But the bias can also come from the sample itself. One large facial recognition program relied on data sets that were more than 75% male and more than 80% white, meaning it had very poor ability to identify people who, according to the data it was trained on, were “outliers.”

Because the data used to train AI is drawn from “average” populations, it often performs poorly when presented with someone who is not average. Think, for example, of speech recognition systems: they have a hard time understanding someone speaking heavily accented English. One solution, says Treviranus, is to think about inclusiveness from the beginning of a project. That means access for all, design for all, and digital systems built to encompass more people, not fewer.

Another approach is to take a large, multivariate data set and flatten its Gaussian curve, allowing no more than a small number of repeats of any data pattern, and then train the model on that data. This effectively levels the playing field by taking away the statistical advantage of being like many other people. A model trained this way may take longer to reach useful decisions, but it can ultimately save money, because fewer fixes are needed along the way; it can respond to changing contexts; and it can deal better with the unexpected, because flexibility is built in from the beginning. Where the stakes are high, as in education, health care, or cybersecurity, taking that additional time is critical.
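To make that distribution-flattening idea concrete, here is a minimal sketch in Python. It is my illustration, not code from Treviranus or the Inclusive Design Research Centre: the function flatten_distribution, the rounding-based definition of a “repeat,” and the max_repeats cap are all assumptions made for the example.

```python
import numpy as np

def flatten_distribution(X, max_repeats=3, decimals=1, rng=None):
    """Cap the number of near-duplicate rows in a training set.

    Rows count as near-duplicates when they match after rounding each
    feature to `decimals` places -- a deliberately crude stand-in for
    "being like many other people." Each rounded pattern keeps at most
    `max_repeats` rows, so rare (outlier) rows carry relatively more
    weight in whatever model is trained on the result.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    keys = np.round(X, decimals)
    counts = {}
    kept = []
    # Visit rows in random order so the cap doesn't always favor
    # whichever duplicates happen to appear first in the file.
    for i in rng.permutation(len(X)):
        key = tuple(keys[i])
        if counts.get(key, 0) < max_repeats:
            counts[key] = counts.get(key, 0) + 1
            kept.append(i)
    return X[np.sort(kept)]

# Example: a synthetic data set dominated by points near the mean.
rng = np.random.default_rng(42)
X = rng.normal(loc=0.0, scale=1.0, size=(10_000, 2))
X_flat = flatten_distribution(X, max_repeats=3, decimals=1, rng=rng)
print(len(X), "->", len(X_flat))  # far fewer rows survive near the dense center
```

Training on X_flat rather than X is what takes away the advantage of being like many other people: the dense middle of the bell curve is thinned out while the tails are left untouched, which is also why such a model needs more time, and often more data, before its decisions become useful.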

While Treviranus sees promise in adaptive learning, she cautions that adaptive systems shouldn’t be used to flatten our differences. Ultimately, we need to make learning systems inclusive by design and we need to be okay with different learners reaching different destinations.

“Our challenge,” says Treviranus, “is that we’ve taken our need for simplicity and applied it to our machines. We should use them, instead, to help us as people better deal with and understand complexity and diversity.”

Speaking with Treviranus challenges all of us to think about how much further AI needs to go to remove bias and achieve true inclusivity. As tech leaders, it’s our responsibility to challenge how we move forward with AI and to embrace inclusivity by design. To reach every learner, we need to think more deeply about each learner’s needs, circumstances, and uniqueness. It isn’t a simple task, but our job isn’t done until we can make sure that all learners move forward together.

Written by:

John Baker

CEO and Board Chair

John founded D2L in 1999, at the age of twenty-two, while attending the University of Waterloo. D2L is a global software company that believes learning is the foundation upon which all progress and achievement rests.

A strong believer in community involvement, John devotes both his personal and business efforts to supporting young entrepreneurs who are developing and applying technology to improve society worldwide.

He has been appointed to the Governing Council of the Social Sciences and Humanities Research Council of Canada, is a member (Entrepreneurs’ Circle) of the Business Council of Canada, serves on the Business Higher Education Roundtable, is Past Chair of the Board of Communitech, and is a board member of Canada’s National Ballet School.

John has been awarded the Meritorious Service Cross, EY Entrepreneur of the Year (Ontario, Software and Technology), the Young Alumni Achievement Medal from the University of Waterloo, and Intrepid Entrepreneur of the Year in the Waterloo Region Hall of Fame.

John graduated from the University of Waterloo with an Honours B.A.Sc. in Systems Design Engineering, First Class Honours, with an option in Management Sciences.

Twitter: @JohnBakerD2L
LinkedIn: John Baker
