Companies everywhere are finding “responsibility religion.” The old “check the box” playbook of corporate social responsibility (throw money at a few charities, give employees a day off to do good deeds, and call it a day) is over. The impossible-to-ignore challenges brought about by climate change, social unrest, and COVID-19 are requiring companies to make fundamental changes in how they operate and go to market.

There are practical reasons behind this shift. 83% of millennials want the brands they do business with to align with their personal values. BlackRock made headlines last year in a letter to CEOs in which the company said its investment strategy moving forward would focus on companies with a demonstrable path to net-zero carbon emissions by 2050. And competitive pressures are signaling the end of internal combustion engines in automobiles. Adopting responsible business practices is no longer just “the right thing to do”; it’s the only way companies will be able to attract and retain customers and employees, access financial markets, and remain competitive as the century progresses.

According to a new report from the Pew Research Center and Elon University, however, experts in artificial intelligence (AI) do not believe companies feel the same pressure when it comes to corporate responsibility for their AI systems. The report canvasses “602 technology innovators, developers, business and policy leaders, researchers and activists.” All were asked the same question: By 2030, will most of the AI systems being used by organizations of all sorts employ ethical principles focused primarily on the public good?

The results were startling: 68% of respondents said that most AI systems in 2030 will not use ethical principles focused primarily on the public good. The comments from the experts canvassed in the report indicate that the thirst for efficiency and revenue generation – not to mention the relentless drumbeat of competitive pressure between China and the West – will be the top priority for companies deploying AI, with ethical considerations an afterthought.

As we’ve written before on this blog, all AI systems reflect the data on which they are trained. If the data contains implicit bias, then the AI system will generate biased results. If the design of the system captures and uses data in an unethical way, then the system will be fundamentally unethical. For example, an HR recruiting application that uses AI to scour candidates’ social media profiles and preferences, and then uses that data to predict each candidate’s suitability for employment, is overstepping what most would consider appropriate reference and background checking.
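
To make the point concrete, below is a minimal, hypothetical sketch using synthetic data and scikit-learn’s LogisticRegression (none of this comes from the Pew report, and the numbers are invented for illustration). It shows how a model trained on historically biased hiring decisions simply reproduces that bias, even though the algorithm itself is “neutral.”

```python
# Hypothetical sketch: bias in the training labels, not the algorithm,
# drives biased predictions. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two equally qualified groups (0 and 1) with the same skill distribution.
group = rng.integers(0, 2, n)
skill = rng.normal(0, 1, n)

# Historical hiring labels: past decisions rewarded skill but penalized group 1.
hired = (skill - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Train an ordinary classifier on the historical data.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# Despite identical skill distributions, the model predicts a lower
# hire rate for group 1 -- it has learned the bias baked into the labels.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"Predicted hire rate, group {g}: {rate:.2f}")
```

The design flaw in a case like this is upstream of the model: no amount of tuning fixes predictions that faithfully mirror biased historical decisions.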

To be fair, there is an enormous amount of confusion around AI ethics. For example, on whose ethics do we base ethical AI? And by what standard do we measure ethical behavior? If an AI system performs more ethically than the human process it replaces, is that good enough? And how do we set AI standards if ethics are in the eye of the beholder? It may seem unethical, for instance, to allow the AI in a fighter jet to target and shoot down an opposing plane with no human intervention – but if the pilot of that fighter jet is your child, the ethical lines become considerably blurred.

Fundamentally, the report boils it all down to humans. The question people should ask is not “What do we want AI to be?” It’s “What kind of humans do we want to be?” Interestingly, companies that were early to the “responsible business game” (Starbucks, Patagonia, Ben & Jerry’s, etc.) asked that question when building their businesses, and now find themselves well positioned for a responsibility-centric economy. One wonders how many companies today are asking that same question as they ramp up their AI programs. According to the Pew survey, not many. However, once the proliferation of AI leads to a corresponding increase in business-damaging events caused by unethical AI, it’s easy to see how this trend could change.