A husband and wife separately apply online for the same credit card. They’ve been married for 20 years, share the same financial resources, co-own all their assets and have nearly identical credit scores. When they each enter their information to get “instant approval” for a card, the data is crunched in the background by an artificial intelligence (AI)-based algorithm…and BINGO! Both are approved in a matter of seconds. Except, for some reason, the husband is given a $40,000 credit limit and the wife only $10,000.

How can this be when they’re both seemingly identical credit risks?

Don’t blame the AI – it only “knows” what it’s been taught. That teaching typically comes from training on available datasets, which can be outdated, drawn from a biased sample, or shaped by the software developers’ own perceptions. If you teach AI that men are less of a credit risk than women based on historical but out-of-date information, you wind up with situations like this.

How AI Learns

AI becomes “intelligent” through machine learning – an automated way of building algorithms by processing enormous amounts of data to uncover patterns that enable AI to make decisions. In this case, the system was trained on historical credit risk data, and one of the patterns it identified was that women had been assigned lower credit limits than men (a legacy of financial institutions historically treating women as higher credit risks).
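To make the mechanics concrete, here is a deliberately simplified sketch in Python. The data is synthetic, the model (a plain linear regression from scikit-learn) is far simpler than any real credit-decisioning system, and gender appears as an explicit column only for illustration – in practice, bias more often leaks in through proxy variables. The point is simply to show how a model trained on biased historical limits reproduces that bias for new, otherwise-identical applicants.

```python
# A toy illustration, not any vendor's actual model: train a regression on
# synthetic "historical" credit limits that carry a built-in penalty for
# women, and watch the model learn and reproduce that penalty.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(seed=0)
n = 10_000

income = rng.normal(80_000, 15_000, n)        # annual income
credit_score = rng.normal(760, 20, n)         # nearly identical scores
is_female = rng.integers(0, 2, n)             # 0 = male, 1 = female

# Historical limits: same financial inputs, minus an arbitrary penalty
# applied to women in the past.
historical_limit = (0.3 * income + 50 * credit_score
                    - 20_000 * is_female + rng.normal(0, 2_000, n))

X = np.column_stack([income, credit_score, is_female])
model = LinearRegression().fit(X, historical_limit)

# Two applicants identical in every respect except the gender flag.
husband = np.array([[90_000, 780, 0]])
wife = np.array([[90_000, 780, 1]])
print(f"husband: ${model.predict(husband)[0]:,.0f}")
print(f"wife:    ${model.predict(wife)[0]:,.0f}")
# The gap between the two predictions is roughly the $20,000 penalty
# baked into the training data.
```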

There’s nothing nefarious about the AI in this situation – it simply identified a historical bias against women in its training data and made that bias part of its algorithm for calculating credit risk. But to the women on the receiving end of that algorithm’s results, it certainly feels nefarious. Companies need to consider these potential biases as they develop the datasets that drive their AI systems, and they need to continuously update those datasets to reflect changing trends and societal norms.
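One practical way to act on that advice is to audit a dataset before it ever reaches a model. The sketch below is hypothetical – the column names, the pandas-based check and the 5% tolerance are assumptions for illustration, not an industry standard – but it shows the shape of a simple pre-training check for outcome gaps across a protected attribute.

```python
# A hypothetical pre-training audit: before historical records are used to
# train a credit model, compare the outcome across a protected attribute.
# Column names and the 5% tolerance are illustrative choices only.
import pandas as pd

def audit_outcome_gap(df: pd.DataFrame, outcome: str, group: str,
                      max_relative_gap: float = 0.05) -> bool:
    """Print mean outcome per group and return True if the relative gap
    between the best- and worst-treated groups is within tolerance."""
    means = df.groupby(group)[outcome].mean()
    print(means)
    gap = (means.max() - means.min()) / means.max()
    print(f"relative gap: {gap:.1%}")
    return gap <= max_relative_gap

# Toy historical records in which the outcome already encodes past bias.
history = pd.DataFrame({
    "credit_limit": [40_000, 38_000, 41_000, 10_000, 12_000, 11_000],
    "gender":       ["M", "M", "M", "F", "F", "F"],
})

if not audit_outcome_gap(history, outcome="credit_limit", group="gender"):
    print("Dataset flagged: investigate, rebalance or re-label before training.")
```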

For Patriot One and Xtract, AI bias is an incredibly important issue – you can imagine the challenges that could develop around the training data for AI-based weapons detection and video surveillance technology. It is vitally important that our systems view each person as “simply another human,” and do not use race, gender, attire or similar attributes as a basis for unfairly flagging security threats.

Having ethical AI in high-stakes security environments is an absolute must: if, for example, a concert venue is constantly sending security guards to pat down people of a particular race, it insults those patrons and exposes the venue to legal and reputational damage. That’s why at Patriot One, the Xtract team has strict processes in place for identifying bias in training data and is taking a leadership role in industry consortia focused on ethical AI.
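By way of illustration only – this is a generic sketch, not a description of Patriot One’s or Xtract’s actual process – one check such a review might include is comparing a detection model’s alert rate across demographic groups on a labelled evaluation set, so that a large gap triggers human review before deployment.

```python
# Generic post-training check (illustrative only): compare how often a
# detection model raises alerts for each demographic group in an
# evaluation set, and flag large gaps for human review.
from collections import defaultdict

def alert_rates_by_group(alerts, groups):
    """alerts: iterable of 0/1 model alerts; groups: matching group labels."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for alert, group in zip(alerts, groups):
        totals[group] += 1
        flagged[group] += alert
    return {g: flagged[g] / totals[g] for g in totals}

# Toy evaluation results: the model alerts far more often on group "B".
alerts = [0, 1, 0, 0, 0, 1, 1, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = alert_rates_by_group(alerts, groups)
print(rates)  # {'A': 0.2, 'B': 0.8} -- a gap this large should trigger review
```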

Taming the Wild West

AI has evolved much faster than government regulations and industry frameworks – so the field is currently a “Wild West” with no regulations or accepted principles of conduct. This means it’s up to technology vendors and data scientists to ensure the ethical quality of their AI. Some will take shortcuts, some will do it right, and some will try to do it right but make mistakes. All technology vendors and corporate data scientists should take AI bias seriously, though, because today’s legal and reputational risks will only increase over time, and they will be joined by compliance risks once governments begin regulating AI (which they will).

In this environment, it’s a very good idea to stay ahead of the game and take measures to produce ethical AI. Failure to do so today risks creating a toxic AI waste-dump that will need to be cleaned up tomorrow.