
It's time to regulate AI — here's how

Zest AI team
August 16, 2018

We’re now at peak hand-wringing over artificial intelligence, which is more or less what you’d expect of a technology whose influence and hype have outstripped people’s understanding of how it works.

Policymakers are moving at varying paces on the issue. The Trump Administration’s strategy is, more or less, to get out of the way of the private sector. China, by contrast, has crafted a national AI initiative built around massive central investment to control future standards. The European Commission has taken a more pro-citizen approach, publishing a 20-page strategic AI framework centered on strong ethics, and Europe’s new General Data Protection Regulation requires any company handling EU citizens’ data to provide a “right to an explanation” of how AI arrived at its decisions.

It’s good to see all these stakes put in the ground, and as technologists focused on AI, we want to see smart regulation that protects citizens’ rights while creating proper incentives for continued innovation. What concerns us is that the current debate is being driven by emotion, specifically fear and anxiety over “the robots” taking our jobs and our humanity. Automation has been “taking our jobs” for almost 200 years. And as for the threat to our humanity: robots enslaving humans might make for good sci-fi flicks, but we’re fairly convinced it isn’t going to happen any time soon, if ever.

AI can do only what you tell it to do, and think only about what you tell it to think about. If a system produces results that are biased, unfair, or unpredictable, that’s more often the fault of human error or of training the models on wrong (or biased) data. Those who champion humans over machines need to check their assumptions, too. People make stunningly bad decisions all the time: not buckling seat belts, forgoing immunizations for their children, smoking. Credit and lending were often discriminatory and arbitrary processes before the advent in the 1970s of simple algorithms to score people. Credit scores expanded financial access dramatically for millions of deserving people, and access will expand again as AI and machine learning models spot the next generation of deserving borrowers.

Any regulation of AI should champion smart combinations of humans and machines in a way we haven’t had before. We have to know when and where it’s safe to use AI. We need tools to check for bias in the data we feed our AI models, and tools that explain the decisions being made inside those “black boxes.”
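To make the bias-checking idea concrete: one of the oldest tests in fair-lending analysis is the “four-fifths” adverse impact ratio, which compares each group’s approval rate to a reference group’s and flags ratios below 0.8 for review. Here is a minimal Python sketch; the DataFrame, column names, and figures are all hypothetical, not a description of any lender’s actual data.

```python
import pandas as pd

def adverse_impact_ratio(df: pd.DataFrame, outcome: str,
                         group: str, reference: str) -> pd.Series:
    """Each group's approval rate divided by the reference group's rate.

    Under the common four-fifths rule of thumb, a ratio below 0.8
    is a flag for potential disparate impact worth investigating.
    """
    rates = df.groupby(group)[outcome].mean()  # approval rate per group
    return rates / rates[reference]            # ratio vs. the reference group

# Hypothetical lending decisions: 1 = approved, 0 = denied.
decisions = pd.DataFrame({
    "approved": [1, 1, 0, 1, 0, 1, 0, 0, 1, 1],
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B", "B", "A"],
})
print(adverse_impact_ratio(decisions, outcome="approved",
                           group="group", reference="A"))
# Group B's ratio of 0.5 falls below 0.8, so this toy dataset gets flagged.
```

Real monitoring tools are never this simple, but the point stands: once the data is in order, checks like this are a few lines of code, so “we can’t audit the model” is rarely a real excuse.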

Most regulators, schooled in law or public policy, do not have the technical expertise to evaluate AI models or even know what questions to ask. We can bridge the knowledge gap between technologists and policymakers by creating an advisory committee to craft the beginnings of a regulatory framework with teeth. The work kicked off by the Congressional AI Caucus is a great start, and it should be reinforced by bringing on more computer science advisers and ethicists who can connect Silicon Valley and Washington. People living that far apart from each other’s worlds cannot begin to frame the right questions.

Here are some of our proposals for regulating AI:

  • Promote transparency – Consumers must be empowered to pick and choose what information they make public and have the right to remove it. Companies must know what data they are collecting and analyzing – and the relevant data inputs they are feeding into their models. Any false or inaccurate data should be discarded.
  • Promote explainability – Consumers should have a right to understand in plain English how and why an AI model arrived at a certain decision. Companies should be able to clearly explain the logic behind those decisions and make that explanation available to their customers; at the same time, they should be allowed to keep any proprietary algorithm private (a sketch of one way to do both follows this list).
  • Affirm trust – Consumers can’t trust what they can’t see, so every attempt should be made to open a window into AI models for continuous monitoring. This is hard to do, but not impossible, and not all data need to be revealed. Think of it as the glass wall between you and the car wash.
  • Outline a path for recourse – Consumers should be entitled to legal or transactional recourse if a company’s model misuses their data. Companies, meanwhile, should be held accountable by regulators and face penalties. To do this right, we need better methods to determine the financial impact of “bad AI” events or violation of trust. Class actions are not going to be the best approach. What we need is a new form of insurance policy to cover adverse events.
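
On the explainability point above: one widely used way to give a plain-English answer without publishing the proprietary algorithm is to rank each applicant’s feature contributions and report the top adverse factors, much like the reason codes on a credit denial letter. Below is a minimal sketch, assuming a linear model; the feature names, synthetic data, and wording templates are hypothetical, not any lender’s actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features (columns) and approve/deny labels.
FEATURES = ["utilization", "late_payments", "income", "tenure_months"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, len(FEATURES)))
y = (X @ np.array([-1.5, -2.0, 1.0, 0.5]) + rng.normal(size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def reason_codes(x: np.ndarray, top_n: int = 2) -> list[str]:
    """Top factors pushing this applicant's score toward denial.

    For a linear model, each feature's contribution to the score is its
    coefficient times the feature's deviation from the average applicant,
    so the most negative contributions make the most defensible reasons.
    """
    contributions = model.coef_[0] * (x - X.mean(axis=0))
    worst = np.argsort(contributions)[:top_n]  # most negative first
    return [f"{FEATURES[i]} hurt this application" for i in worst]

print(reason_codes(X[0]))
```

The design choice matters: contributions are measured against the average applicant, so the reasons are stated relative to the applicant pool, and nothing about the model’s internals has to be disclosed to produce them.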

AI’s progress is inevitable, but its widespread acceptance is not guaranteed. The only certainty is that AI will fail us someday; all models, like humans, make mistakes. Let’s set up the guardrails now to ensure effective oversight without stifling the technology.
