We’ve previously blogged about how business leaders feel there is a need for more government regulation of AI because of the immense speed at which the technology is moving. And while this may be true from a business standpoint, there are a lot of challenges around early regulation of AI. How do you manage it without stunting its growth and potential? We’re seeing the first glimpses of AI regulation come out of Europe, which is not surprising: Europe traditionally takes a “regulate first” approach to technology (see GDPR), while the U.S. tends to take a “litigate first” approach. These initial attempts at regulation center on which data-use practices should be banned, an effort that is generally well-meaning and intended to protect people’s privacy and rights.

The New York Times recently published an op-ed on the subject, urging the U.S. to follow the EU’s guidance on regulation (more on that later). One point in the op-ed was particularly interesting: “The list of prohibited A.I. uses is not comprehensive enough — for example, many forms of nonconsensual A.I.-driven emotion recognition, mental health diagnoses, ethnicity attribution and lie detection should also be banned.” While this may seem like a good idea at first glance (aren’t these excessively intrusive uses of AI?), it is easy to imagine applications where these capabilities actually benefit people.

For example, what if a retail customer is angry about something and trying to get the problem solved through an online feedback form? In customer service, it is important to understand a customer’s emotional state, because an angry customer may require a different approach than one making a routine inquiry. AI could detect the anger in the feedback and escalate the complaint to a manager, rather than leaving an entry-level customer service representative to handle it.
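To make that concrete, here is a minimal sketch of such a triage step in Python, using an off-the-shelf sentiment classifier from Hugging Face’s transformers library. The confidence threshold and queue names are illustrative assumptions, not a description of any particular product:

```python
# Minimal sketch: route strongly negative feedback to a manager.
# The threshold and queue names below are illustrative assumptions.
from transformers import pipeline

# Off-the-shelf sentiment classifier (downloads a default model on first use).
classifier = pipeline("sentiment-analysis")

def route_feedback(text: str, threshold: float = 0.9) -> str:
    """Decide which support queue a piece of feedback should go to."""
    result = classifier(text)[0]  # e.g. {'label': 'NEGATIVE', 'score': 0.98}
    if result["label"] == "NEGATIVE" and result["score"] >= threshold:
        return "manager"    # strongly negative: escalate past the front line
    return "frontline"      # routine: standard support queue

# Likely routes to "manager", assuming the model scores this as strongly negative.
print(route_feedback("This is the third time my order has gone missing!"))
```

The same “emotion recognition” capability that a blanket ban would prohibit is exactly what lets the angriest customers reach someone empowered to help them.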

The Ongoing Process of Technology Development

If you had been told 25 years ago that there would be a website that collects all sorts of personal information on you (where you live, how old you are, your favorite hobbies, your family makeup, and so on), you probably would have said, “That’s creepy.” But if you had then been told, “and it’ll allow you to shop for anything in the world online and have it delivered to your door in two days,” your answer might have changed to “Wow, that’s cool!” Imagine if e-commerce websites had been heavily regulated back then: forbidden from capturing any personal information, so you had to fill in the same details every time you made a purchase; incapable of providing any level of personalization; and, to protect brick-and-mortar stores, required to pay a special “online tax” that made them more expensive than their offline competitors. If that had happened, amazon.com might never have ventured beyond being an online bookstore.

The same dynamic applies to AI. Over-regulation today could prevent the creation of new systems and capabilities that genuinely improve our lives. Take biometric identification, for example. At face value, it seems overly intrusive and dangerous: “What if my fingerprints, retinal patterns or face map winds up in the wrong hands?” But how many of us now use a thumbprint or facial scan to pay for products from our phones? A lot of us, because it’s easy and it makes our lives better.

Obviously, we have seen biometrics misused, facial recognition in particular. But to simply dismiss the technology as too invasive at this stage of its development is shortsighted, and that risk of overcorrection is something the regulatory process needs to account for.

The Razor’s Edge of Regulation

The New York Times piece urges global leaders to develop strict regulations around how AI can be used. The EU has already begun this journey, releasing a proposal for more systematic regulation of AI that would forbid certain uses outright, including some forms of advertising and facial recognition. Several U.S. states have introduced similar regulations.

Some of these regulations require vendors to prove that their AI works as advertised, to provide a roadmap for the technology, and to explain how they will protect against discriminatory or biased practices. These types of protections are generally a good thing, since “good AI” systems should be able to pass such a test rather easily. But over-regulating the technology could hinder some truly remarkable advances in commerce, health and medicine, public safety and other areas.
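As an illustration of what checking for biased practices can look like, here is a minimal sketch of one common fairness audit, a demographic-parity check using the “four-fifths” rule of thumb. The sample data, group labels and threshold are illustrative assumptions, not what any regulation actually mandates:

```python
# Minimal sketch of a demographic-parity audit: compare the rate at
# which a model approves applicants across groups. The sample data
# and the four-fifths threshold are illustrative assumptions.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    approved, total = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / total[g] for g in total}

def passes_four_fifths(decisions, threshold=0.8):
    """Rule of thumb: the lowest group's approval rate should be at least
    `threshold` times the highest group's rate."""
    rates = selection_rates(decisions)
    return min(rates.values()) >= threshold * max(rates.values())

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(selection_rates(sample))     # {'A': 0.666..., 'B': 0.333...}
print(passes_four_fifths(sample))  # False: group B is approved far less often
```

A well-built system should clear a check like this without much trouble, which is the sense in which disclosure requirements are a low bar for “good AI” and a real hurdle only for systems that deserve scrutiny.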

As long as regulators take a nuanced approach to AI, they should be able to walk that razor’s edge of “just enough but not too much” regulation. Striking that balance means that for every potential negative side effect of AI they guard against, they also need to fully explore AI’s potential to positively transform people’s lives; preventing the former does not need to impede the latter. Whichever side of the debate someone falls on, it’s important to keep having these discussions, because that kind of dialogue is what pushes progress forward.