Credit & Risk

How to get the fair lending piece right with AI underwriting

Zest AI team
October 19, 2021

Credit unions and banks are fast adopting the benefits of AI-driven lending: faster and more approvals with no added risk. But, lenders do have legitimate concerns about ensuring that AI and machine learning (ML) credit underwriting models don’t discriminate against borrowers of color or any other protected class, even incidentally.

This is why we covered the topic of fair lending with AI in our Zest Guide, "The five building blocks for compliant AI-driven lending." One of those five fundamentals is performing rigorous fair lending analysis on every underwriting model you plan to use, whether it’s based on an industry score, old school methods, or ML.

Algorithmic bias is real. Regulators know it. And they are looking to solve the problem by penalizing industry players who see it and do nothing and by rewarding those who are actively using solutions. Sticking your head in the sand is not an option, unless you want it handed to you. 

Fortunately, fair lending testing and review for an AI/ML model should look much like the process you follow today. You’ll need to conduct and document the outcomes of three main tests: disparate treatment, disparate impact, and a search for less discriminatory alternative models. We built Zest AI's technology to automate each of these fair lending steps based on our experience working with lenders and regulators. Here’s what to know about each step.

Disparate treatment — are you considering a protected characteristic in the model?

No model, ML or otherwise, can use demographic characteristics directly to assess credit risk. This disparate treatment is illegal. Any fair lending analysis must verify that variables directly tied to race, gender, age, and other protected class statuses are not predictive variables in credit decisioning.

The standard way to spot disparate treatment is to find variables that correlate highly with protected classes. Some variables appear neutral based on their name and description, but you still need to test them to make sure that they are not — whether by themselves or in combination with other variables in the model — proxies for a demographic characteristic.
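A simple correlation screen is a reasonable first pass for surfacing candidates for closer review, even though, as described next, you should not stop there. Below is a minimal sketch in Python, assuming a pandas DataFrame of numeric candidate features alongside a binary protected-class indicator; the column names and the 0.3 threshold are illustrative assumptions, not any particular vendor’s methodology.

```python
# Minimal sketch of a correlation screen for potential proxy variables.
# Assumes a pandas DataFrame with numeric candidate features and a binary
# protected-class indicator column (all names here are hypothetical).
import pandas as pd

def flag_correlated_features(df: pd.DataFrame,
                             protected_col: str = "is_protected_class",
                             threshold: float = 0.3) -> pd.Series:
    """Return features whose absolute correlation with the protected-class
    flag exceeds `threshold`, sorted from strongest to weakest."""
    features = df.drop(columns=[protected_col])
    corr = features.corrwith(df[protected_col]).abs()
    return corr[corr > threshold].sort_values(ascending=False)

# Toy usage: a geography-derived score that tracks the protected class
# would be flagged here and passed on to the deeper proxy testing below.
df = pd.DataFrame({
    "zip_density_score":  [0.90, 0.80, 0.20, 0.10, 0.85, 0.15],
    "debt_to_income":     [0.40, 0.30, 0.50, 0.20, 0.35, 0.45],
    "is_protected_class": [1, 1, 0, 0, 1, 0],
})
print(flag_correlated_features(df))
```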

To do so, you should go beyond simple correlation and do feature proxy testing to spot variables that function as a close proxy for a prohibited characteristic. Zest AI's technology does this automatically. Feature proxy testing quantifies the ability of a variable to predict credit risk and protected class separately.

For example, a (hypothetical) feature that flags a lender’s Hispanic-majority neighborhoods will be highly predictive of whether a randomly selected applicant is — or isn’t — Hispanic. Features like this, which predict protected-class membership far better than they predict credit risk, are easy to identify. Note that Zest AI performs this test without considering the underlying model, focusing solely on the input model features.
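To make the idea concrete, here is a minimal sketch of univariate feature proxy testing, assuming a labeled dataset with a default flag and a protected-class flag. All column names are hypothetical, and this illustrates the general technique only, not Zest AI's implementation; note also that this simple version tests features one at a time and would need a multivariate extension to catch proxies formed by combinations of variables.

```python
# Minimal sketch of univariate feature proxy testing. For each candidate
# feature, fit a one-variable classifier twice: once against the credit
# outcome and once against the protected-class flag, then compare AUCs.
# Column names are hypothetical; this is not Zest AI's implementation.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def proxy_test(df: pd.DataFrame, feature_cols,
               outcome_col: str = "defaulted",
               protected_col: str = "is_protected_class") -> pd.DataFrame:
    rows = []
    for col in feature_cols:
        X = df[[col]]
        # How well does this feature alone predict credit risk?
        risk_auc = roc_auc_score(
            df[outcome_col],
            LogisticRegression().fit(X, df[outcome_col]).predict_proba(X)[:, 1])
        # How well does it alone predict protected-class membership?
        class_auc = roc_auc_score(
            df[protected_col],
            LogisticRegression().fit(X, df[protected_col]).predict_proba(X)[:, 1])
        rows.append({"feature": col, "risk_auc": risk_auc, "class_auc": class_auc})
    # Features with high class_auc but near-0.5 risk_auc behave like proxies
    # for the protected characteristic and warrant removal or justification.
    return pd.DataFrame(rows).sort_values("class_auc", ascending=False)
```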

Disparate impact — is bias showing up in outcomes driven by the model?

The Equal Credit Opportunity Act and its implementation through Regulation B also prohibit disparate impact (DI). DI occurs when a facially neutral practice has a disproportionately negative effect on a protected class on a prohibited basis, “unless the creditor practice meets a legitimate business need that cannot reasonably be achieved as well by means that are less disparate in their impact.”

The first step in disparate impact testing is to see if it’s there. Zest AI’s approach to DI testing is to look at a credit model globally, all the variables and interactions at once, to figure out which features adversely impact protected class applicants more than their control group counterparts. Features causing significant harm that offer little predictive value can be further examined and considered for removal by the lender.
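As an illustration of what a first-pass DI measurement can look like, the sketch below computes an adverse impact ratio on approval decisions and then decomposes the average score gap between groups by feature using SHAP attributions. The column names, the use of a tree-based model, and the conventional 0.80 flag level are assumptions for the example, not a description of Zest AI's tooling.

```python
# Minimal sketch of disparate impact measurement: an adverse impact ratio
# (AIR) on approvals, plus a per-feature decomposition of the score gap
# between protected and control groups using SHAP attributions.
# Names and thresholds are illustrative only.
import pandas as pd
import shap

def adverse_impact_ratio(approved: pd.Series, protected: pd.Series) -> float:
    """Approval rate of the protected group divided by the control group's.
    Values well below 0.80 are commonly treated as a red flag."""
    return approved[protected == 1].mean() / approved[protected == 0].mean()

def feature_gap_contributions(model, X: pd.DataFrame,
                              protected: pd.Series) -> pd.Series:
    """Mean SHAP value per feature for protected minus control applicants.
    Large negative entries are features pushing protected applicants' scores
    down; those with little predictive value are candidates for removal."""
    # Assumes a gradient-boosted model whose TreeExplainer output is a single
    # (n_samples, n_features) array; some model types return a list per class.
    shap_values = shap.TreeExplainer(model).shap_values(X)
    sv = pd.DataFrame(shap_values, columns=X.columns, index=X.index)
    mask = (protected == 1).to_numpy()
    return (sv[mask].mean() - sv[~mask].mean()).sort_values()

# Usage, given a trained tree-based model, its feature matrix X, approval
# decisions, and a protected-class flag for the same applicants:
# print(adverse_impact_ratio(approved, protected))
# print(feature_gap_contributions(model, X, protected).head(10))
```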

Some in the industry may question whether disparate impact is a viable theory of liability, but Zest AI, and many others, stand by it. Plus, managing disparate impact is just the right thing to do. When techniques exist to make your lending both fairer and more profitable, it's in your interest to use them. 


LDA search & de-biasing — is there a less biased model that is nearly as accurate?

As a final step in fair lending testing, regulators will want to see that you completed a thorough, well-documented search for less discriminatory alternative (LDA) models. LDA search is how you show regulators, and opposing parties in any legal claim, that your credit practice “meets a legitimate business need that cannot reasonably be achieved as well by means that are less disparate in their impact,” as ECOA and Regulation B require.

This “business need” justification carve-out was written in an era when building fairer models usually meant less accurate underwriting, something no lender can sustain for long. With the performance gains of AI/ML, lenders have real options to deliver more profit and fairer lending. That’s something you can’t do right now with popular credit scores (or people, for that matter).

The method of LDA search we endorse uses a technique called adversarial debiasing, which allows lenders to optimize their underwriting models to reduce disparate impact with only minimal impact on accuracy. These alternate models sacrifice little to no profit in exchange for a lot more fairness. Lenders can choose the extent to which fairness enters into the process, develop many alternative models, and select one for production.
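Here is a heavily simplified PyTorch sketch of the adversarial debiasing idea, under the usual setup of a predictor scored on credit risk and an adversary that tries to recover protected-class membership from the predictor’s output. It is an illustration of the general technique only, not Zest AI's production approach; the network sizes, the fairness weight `lam`, and all names are assumptions.

```python
# Simplified sketch of adversarial debiasing. The predictor scores credit
# risk from underwriting features only; the protected-class flag is used
# solely as a training label for the adversary, never as a model input.
import torch
import torch.nn as nn

class Predictor(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                 nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x)              # logit of default probability

class Adversary(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                 nn.Linear(16, 1))

    def forward(self, score_logit):
        return self.net(score_logit)    # logit of protected-class membership

def train_step(pred, adv, opt_pred, opt_adv, x, y_default, y_protected, lam=1.0):
    bce = nn.BCEWithLogitsLoss()

    # 1) Train the adversary to detect protected class from the credit score.
    adv_loss = bce(adv(pred(x).detach()), y_protected)
    opt_adv.zero_grad()
    adv_loss.backward()
    opt_adv.step()

    # 2) Train the predictor to stay accurate while fooling the adversary;
    #    a larger `lam` trades more accuracy for more fairness.
    score = pred(x)
    pred_loss = bce(score, y_default) - lam * bce(adv(score), y_protected)
    opt_pred.zero_grad()
    pred_loss.backward()
    opt_pred.step()
    return pred_loss.item(), adv_loss.item()
```

Sweeping `lam` over a range of values is one way to generate the family of candidate models described above, from which a lender can pick the point on the fairness/accuracy frontier it is comfortable putting into production.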

For example, one Zest AI client saw approval rates for women jump 26 percent after using a de-biased AI credit model. Likewise, a mortgage lender generated a model that shrank the approval rate gap between Black and white borrowers by 50 percent. ML, appropriately done, holds the key to ending racial disparities in financial services. 

The key thing to remember: LDA search has to achieve its fairness gains without using protected status as an input to the underwriting model, since doing so can violate ECOA or Reg. B. At Zest AI, for example, we designed our LDA Search tool to optimize for fairness without knowing the borrower’s status.

A growing number of lenders are choosing to deliver this kind of fairness at minimal cost across the board. For them, saying yes to more near-prime applicants is a growth strategy and a way to make good on their social commitments. As for liability concerns? We believe they’re unfounded. A bank can conduct these analyses under legally privileged conditions, and large lenders have used similar methods of identifying and choosing between less discriminatory alternatives for decades.

In case you missed it, our blog already tackled another essential element to compliant AI: Knowing exactly how the model is making its decisions.
