With all the hype around artificial intelligence (AI), it can be hard to distinguish fact from fiction. When you go beyond that hype and look at what’s really happening, though, there’s no doubt that AI can be a transformative technology when it is used for the right applications and purposes.


This last point is important, because too many organizations today are rushing to adopt AI without fully considering the potential ethical and business ramifications. This is causing discomfort among many business leaders – a problem that was exposed in a recent KPMG report that found, among other things:

  • 55% of business leaders in industrial manufacturing, and 49% in retail and technology, said “AI is moving faster than it should.”
  • Small company leaders are even less comfortable – 63% of them agreed that AI is moving too fast.
  • And, business leaders with the most AI knowledge are also the wariest: 92% of leaders with high AI knowledge said they’d like more government regulation around AI.


Does this mean we should fear AI? Not at all – but it does mean that we need to develop ethical and legal guardrails around AI. (And we need to close the knowledge gap around AI – companies need to understand where and why the application of AI is appropriate. We will address this in more detail in our next blog.)


The KPMG report found that the lack of these guardrails has become a front-burner issue in the last year, due largely to the tremendous acceleration in AI adoption caused by the need to digitally transform businesses during the pandemic. In the previous year’s report, respondents said AI was not moving fast enough. Now, a decided majority believe the exact opposite – and the fear is that all this AI adoption is introducing new regulatory and legal risk to companies.


All of this brings to mind the scene in the movie Jurassic Park in which Dr. Malcolm says, “…your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”


In the current “wild west” phase of AI, there are no real rules or frameworks in place yet, which means data scientists and their employers are governed only by their own ethical compasses. There are those of us who take that seriously – and there are those who don’t. And for the latter group, there are no real corrective penalties in place to encourage a change in behavior.


But the reason business leaders are so concerned is that unethical AI can exact other kinds of penalties on companies. At a time when environmental, social and governance issues are a top boardroom priority for corporations (Exhibit A: see the recent corporate reactions to the Georgia voting legislation), the potential to violate people’s privacy or to exhibit automated, biased behavior is a major concern. No company wants to wind up in the headlines for these reasons.


Another concern is that rapid adoption of AI might create a portfolio of systems that need to be redeveloped or decommissioned once AI regulations are created and enforced (this process is already well underway in the European Union, which introduced its first AI regulations in April).


For AI system developers like Xtract AI/Patriot One, the most sensible approach to this problem is to join relevant industry consortia and to train data scientists in ethical AI and AI bias. Taking this approach enables us to establish guardrails that will keep us from straying into the troubled waters described in the KPMG survey. A data scientist can have the best of intentions – but without knowledge of ethical AI, they can make potentially costly mistakes without realizing it.


By self-imposing these guardrails now, companies can develop and deploy AI systems with the confidence that it’s not only something they could do, but also something they should do.