There has been significant discussion about artificial intelligence (AI) replacing humans in the workforce, and it has caused considerable consternation: are we going to let soulless AI “Terminators” replace hard-working people in the office? Most people are not happy with this vision of the future.

However, if one turns the conversation to politicians, a different story emerges. According to a study from The Center for the Governance of Change, a research organization within Spain’s IE University, 51% of Europeans would like to see their parliamentarians replaced by an AI algorithm. When the field was narrowed to 24- to 35-year-olds, that number rose to 60%.

The U.S. was a bit more circumspect, with only 40% indicating they would like to see their elected officials replaced with AI. Then again, the U.S. Congress hasn’t had a 40% approval rating since 2005, so AI still enjoys a more positive image than members of Congress do.

Does the public really trust computer algorithms more than their elected officials? Or is this just a commentary on people’s low opinions of those officials? Or does it simply show that people don’t understand AI?

The short answer is: “Yes, to all of the above.” We’re not going to get into why people do or don’t trust their politicians, but we can definitively say that it’s a problem when people have an overinflated opinion of AI’s capabilities. It creates irrational fear and failed AI projects. 

What people seem to forget is that AI is created by humans, and humans have biases and make mistakes. When those biases and mistakes are programmed into an AI algorithm, they scale with the system and become a far bigger problem. For example, if one real estate agent uses race as the determining factor in choosing which houses to show potential buyers, that’s a lamentable but limited problem. But if an AI-based real estate application on the web applies that same bias across millions of visitors, the issue becomes much more severe.
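To make the scaling point concrete, here is a minimal, hypothetical sketch. Every name, rule, and data structure in it is invented for illustration; the point is only that a single biased rule, once written into a filter, is applied to every request the application serves.

```python
# Hypothetical sketch: one biased rule coded into a listing filter is applied
# to every visitor. All names, rules, and data here are invented.
from dataclasses import dataclass

@dataclass
class Listing:
    address: str
    neighborhood: str

def select_listings(listings, buyer_segment, steering_rules):
    """Return the listings a visitor is shown.

    steering_rules maps a buyer segment to the neighborhoods its author
    decided that segment should see. If the rule encodes a human prejudice,
    every request served by this function repeats it at web scale, whereas
    one agent's prejudice affects only that agent's own clients.
    """
    allowed = steering_rules.get(buyer_segment)
    if allowed is None:
        return listings  # no rule for this segment: show everything
    return [l for l in listings if l.neighborhood in allowed]
```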

So, let’s assume you actually want to build an AI parliamentarian and give it the name “Sue.” Let’s also say you want Sue to reflect the will of the people, so you rely on public opinion data to drive her decision-making. But here’s the problem: what if public opinion is taken out of context or is misinformed? Or what if the population surveyed is not representative of the population at large?

Let’s look at the COVID-19 crisis to see how this could play out. When the pandemic was starting to heat up and shut down economies, it was important for governments to provide monetary relief to suffering businesses and people. But the public opinion data says “it’s bad to run up deficits,” so Sue votes against providing any aid. She doesn’t have the judgment to consider whether the dataset reflects current thinking, whether it represents only a slice of the population, or whether she has information the poll respondents lacked. She just votes “no.”
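Here is a minimal sketch of the naive decision rule Sue would be following; the function name and poll data are invented for illustration. She simply tallies the responses and votes with the majority, with no check on whether the sample is representative or whether circumstances have changed since the question was asked.

```python
# Hypothetical sketch of Sue's decision rule; poll data and names are invented.
from collections import Counter

def sue_votes(poll_responses):
    """Vote with the simple majority of poll responses.

    Nothing here asks whether the respondents represent the whole population,
    whether the question was understood in context, or whether circumstances
    have changed since the poll was taken, which is exactly the judgment a
    human legislator is expected to apply.
    """
    tally = Counter(poll_responses)
    return "yes" if tally["support"] > tally["oppose"] else "no"

# A poll on "should the government run up deficits?" taken before the crisis:
pre_pandemic_poll = ["oppose"] * 700 + ["support"] * 300

# Bill on the floor: deficit-funded emergency pandemic relief.
print(sue_votes(pre_pandemic_poll))  # -> "no", regardless of the emergency
```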

So we see from this simple example that Sue would, in short order, likely become far less popular than the human politician she replaced. But how does this play out beyond a purely speculative, science-fiction “what if” scenario?

Actually, we see every day the problems that arise when people don’t understand the proper applications of AI. Untold millions of dollars have been spent on AI applications that have never been deployed, due to poorly defined projects and objectives. AI is most useful when there is a clear and limited objective, such as identifying people carrying guns in digital video feeds. Humans are bad at this “finding a needle in a haystack” kind of activity: watching video for hours and days on end, hoping to catch a glimpse of someone carrying a gun. AI, however, never gets tired, recognizes the gait and behavior of someone carrying a gun, and can alert instantly when such a person walks into the room.

Even then, though, the AI should not be programmed to automatically call 911, because human judgment is needed to understand the situation. Perhaps the system has alerted on someone with a legal concealed-carry permit, or an off-duty police officer. The AI’s job is simply to alert when someone is carrying a gun; it’s up to humans to determine whether the person is a threat or harmless.
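As a rough illustration of that human-in-the-loop design, here is a minimal sketch; the class names, function names, and threshold are invented and not taken from any particular product. The detector only pushes an alert into a review queue, and escalation remains a human decision.

```python
# Hypothetical sketch of an alert-only detection pipeline; names and
# thresholds are invented for illustration.
from dataclasses import dataclass
from queue import Queue

@dataclass
class Alert:
    camera_id: str
    timestamp: float
    confidence: float
    snapshot: bytes

review_queue: "Queue[Alert]" = Queue()

def on_detection(camera_id, timestamp, confidence, snapshot, threshold=0.8):
    """Push a possible-weapon detection into a human review queue.

    Deliberately does NOT contact emergency services: an operator decides
    whether the alert is an actual threat, a permit holder, or an
    off-duty officer.
    """
    if confidence >= threshold:
        review_queue.put(Alert(camera_id, timestamp, confidence, snapshot))

def operator_review():
    """A human drains the queue and makes the judgment call."""
    while not review_queue.empty():
        alert = review_queue.get()
        print(f"Review camera {alert.camera_id}: confidence {alert.confidence:.2f}")
        # Escalation to dispatch happens only after a human confirms a threat.
```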

Unfortunately, too many AI projects don’t have clearly defined objectives, so they wind up in the dustbin of “well, we tried” … which is right where the “replace parliamentarians with AI” idea belongs!