There are no “good” or “bad” technologies. There are only good and bad applications of technology. A hammer used to fix the hole in your roof is good. A hammer used to break a window for a burglary is bad. So, when we discuss the merits of new technologies like artificial intelligence (AI), the same dynamic applies. 

This dynamic is particularly important with perhaps the most controversial technology today: facial recognition. Most people would agree that when used to identify the location of kidnapped children, facial recognition is good. However, when used to falsely accuse and arrest innocent people, as was famously reported by 60 Minutes, well…that’s bad. 

As with all AI systems, facial recognition is only as good as its data and its governance. In a controlled setting, where clear facial images are used to test the system, it's possible to achieve extremely high accuracy rates. However, this can change drastically once images captured "in the wild" by video cameras are used. It's one thing to capture a perfect image of someone sitting right in front of the camera; it's another thing altogether when the subject is walking through an airport terminal in a crowd of people. 

Another challenge is that the data available for training facial recognition systems varies widely across demographic groups, which has led to corresponding gaps in accuracy. A test of several facial recognition systems by the National Institute of Standards and Technology (NIST), for example, found significantly higher false positive rates for women than for men, and for people of color than for Caucasians. 
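
One way to surface gaps like these is to report error rates per demographic group rather than a single aggregate number. Below is a minimal sketch of that kind of audit in Python; the records, group labels, and the false_positive_rates helper are illustrative assumptions, not NIST's actual methodology or data.

```python
from collections import defaultdict

# Hypothetical evaluation records: (demographic group, same person in truth?, system matched?).
# In a real audit these would come from a labeled benchmark, as in NIST's tests.
results = [
    ("group_a", False, False),
    ("group_a", False, False),
    ("group_a", True, True),
    ("group_b", False, True),   # impostor pair wrongly matched -> a false positive
    ("group_b", False, False),
    ("group_b", True, True),
]

def false_positive_rates(records):
    """False positive rate per group: wrong matches divided by all impostor pairs."""
    false_pos = defaultdict(int)
    impostor_pairs = defaultdict(int)
    for group, same_person, matched in records:
        if not same_person:             # only different-person pairs can yield false positives
            impostor_pairs[group] += 1
            if matched:
                false_pos[group] += 1
    return {g: false_pos[g] / impostor_pairs[g] for g in impostor_pairs}

print(false_positive_rates(results))    # e.g. {'group_a': 0.0, 'group_b': 0.5}
```

A system that looks accurate on average can still perform far worse for one group than another, which is exactly what this per-group breakdown is designed to reveal.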

These issues are manageable with the right kinds of applications. For example, when facial recognition is used to gain access to buildings, or even your smartphone, it more closely resembles the controlled environment one might use in a lab test. When subjects understand their face needs to be scanned to gain some sort of benefit (e.g., access to a building), they will position themselves squarely in front of the camera so a clean image can be captured. And even if a false negative occurs and the person can't enter the building, it's an annoyance but not a potentially life-changing event. 
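
At its core, this kind of access-control check is a similarity score compared against a threshold. The sketch below is a simplified illustration of that trade-off; the verify function, the 0.8 threshold, and the scores are assumptions for the example, not any vendor's API.

```python
def verify(similarity: float, threshold: float = 0.8) -> bool:
    """Grant access only when the face-match similarity score clears the threshold."""
    return similarity >= threshold

# Raising the threshold trades false accepts (a stranger let in) for false
# rejects (an employee turned away). Here a rejection is a mere annoyance,
# because there is a fallback such as a badge or a receptionist.
for score in (0.95, 0.79):
    print(f"score {score}: {'door opens' if verify(score) else 'use your badge instead'}")
```

Because the stakes of a failure are low, an access-control system can afford a strict threshold: the occasional false reject costs a few seconds, not someone's liberty.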

It's in surveillance applications where things can get dicey. Obviously, the false arrest mentioned earlier is a vivid example of why the use of facial recognition in law enforcement is so controversial. But even in routine security surveillance – say, at an airport – facial recognition can be a problem. For one, the data suggests that people of color and women will be subjected to unnecessary secondary screenings more often than white males. And the problem cuts both ways: a false negative means someone who actually does represent a threat might go undetected. 

As long as the systems are developed ethically and accurately, and with proper oversight, both low-risk and higher-risk implementations can deliver value. For example, facial recognition is being used in some sports stadiums on an opt-in basis to replace tickets at the gate and as a "biometric ID" at concession stands selling alcohol. This is an extremely low-risk implementation because patrons choose to participate, and even if the technology fails, there are alternative means of achieving the objective – producing a traditional ID before buying a beer. 

That same stadium could also use the technology in a higher-risk fashion, monitoring the crowd to identify people who have caused trouble at previous events and are not supposed to be in the venue. This type of application is where the "ethical, accurate and proper oversight" qualities mentioned earlier become essential. One can see the potential benefits these technologies can deliver for a better patron experience – care must be taken, however, to mitigate the risks. That mitigation can range from understanding how the application was developed and the quality of the data used, to the processes followed by security staff – for example, how guards react when the system flags someone as a threat. 

Technology usually moves at a faster rate than legislation, and facial recognition is no exception. The previously mentioned 60 Minutes segment reported that facial recognition had already been used in hundreds of thousands of arrests in the U.S. alone, and that there are no federal regulations governing law enforcement's use of the technology. 

This legislation is coming – but beyond that, facial recognition provides an object lesson in how all organizations should approach AI. Just because it's math doesn't make it right – so understanding how AI systems are developed, what data was used to train them, and what safeguards have been implemented to mitigate bias and other ethical issues should be near the top of the "checklist" for organizations developing or purchasing AI-based systems, particularly those that might impact customers or other key stakeholders.

People, process and technology are the three pillars of efficiency. If you ignore the first two in favor of the "shiny object" third, your technology runs the risk of causing significantly more problems than it solves. This is particularly true of AI – and in too many cases facial recognition sits at the center of a storm of unintended consequences.