Credit & Risk

How To Get Fair Lending Right With AI-Driven Lending

Zest AI team

October 19, 2021

Credit unions and banks are fast adopting the benefits of AI-driven lending: faster decisions and more approvals with no added risk. But lenders do have legitimate concerns about ensuring that AI and machine learning (ML) credit underwriting models don’t discriminate against borrowers of color or any other protected class, even incidentally.

This is why we covered the topic of fair lending with AI in our latest Zest Guide, “The Five Building Blocks For Compliant AI-Driven Lending.” One of those five fundamentals is performing rigorous fair lending analysis on every underwriting model you plan to use, whether it’s based on an industry score, old-school methods, or ML. (In case you missed it, our Insights blog last week tackled another essential element of compliant AI: knowing exactly how the model makes its decisions. Read that here.)

Algorithmic bias is real. Regulators know it. And they are looking to solve the problem by penalizing industry players who see it and do nothing and by rewarding those who are actively using solutions. Sticking your head in the sand is not an option, unless you want it handed to you. 

Fortunately, fair lending testing and review for an AI/ML model should look pretty similar to the process you follow today in most respects. You’ll need to conduct and document the outcomes of three main tests: disparate treatment (Are you considering a protected characteristic in the model?), disparate impact (Is bias showing up in outcomes driven by the model?), and a search for less discriminatory alternative models (Is there a less biased model that is nearly as accurate?). We built Zest software to automate each of these fair lending steps based on our experience working with lenders and regulators. Here’s what to know about each step. 


Disparate treatment

No model, ML or otherwise, can use demographic characteristics directly to assess creditworthiness. This kind of disparate treatment is illegal. Any fair lending analysis must verify that variables directly tied to race, gender, age, and other protected class statuses are not predictive variables in credit decisioning. The standard way to spot disparate treatment is to find variables that correlate highly with protected classes. Some problematic variables appear neutral based on their name and description, so you still need to test every variable to make sure it is not, by itself or in combination with other variables in the model, a proxy for a demographic characteristic.


To do so, you should go beyond simple correlation and perform feature proxy testing to spot variables that function as close proxies for a prohibited characteristic. Zest software does this automatically. Feature proxy testing quantifies, separately, a variable’s ability to predict credit risk and its ability to predict protected class. For example, a (hypothetical) feature that flags a lender’s Hispanic-majority neighborhoods will be highly predictive of whether a randomly selected applicant is (or isn’t) Hispanic. Features that are strong predictors of protected class but weak predictors of credit risk are easy to identify. Note that Zest performs this test without considering the underlying model, focusing solely on the input features.
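
Zest’s implementation isn’t public, but here is a minimal sketch of what per-feature proxy testing can look like in practice: for each candidate variable, fit a simple one-variable model twice, once to predict credit risk and once to predict protected class status, and compare the two. Everything specific below is an illustrative assumption: the DataFrame `df`, the column names, the candidate feature list, and the 0.60 flag threshold.

```python
# Minimal sketch of per-feature proxy testing (illustrative; not Zest's actual code).
# Assumes a pandas DataFrame `df` with one row per applicant, a binary default
# label `defaulted`, and a binary flag `is_protected` that is available only for
# fair lending analysis and is never used as a model input.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

CANDIDATE_FEATURES = ["utilization", "months_since_delinquency", "zip_density"]  # hypothetical names
PROXY_AUC_FLAG = 0.60  # illustrative review threshold, not a regulatory standard

def single_feature_auc(df, feature, target):
    """AUC of a one-variable logistic model predicting `target` from `feature` alone."""
    X_train, X_test, y_train, y_test = train_test_split(
        df[[feature]], df[target], test_size=0.3, random_state=0, stratify=df[target]
    )
    clf = LogisticRegression().fit(X_train, y_train)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

rows = []
for feature in CANDIDATE_FEATURES:
    risk_auc = single_feature_auc(df, feature, "defaulted")      # how well it predicts credit risk
    proxy_auc = single_feature_auc(df, feature, "is_protected")  # how well it predicts protected class
    rows.append({
        "feature": feature,
        "risk_auc": round(risk_auc, 3),
        "proxy_auc": round(proxy_auc, 3),
        # Strong proxy power with weaker risk power -> candidate for review or removal.
        "flag_for_review": proxy_auc >= PROXY_AUC_FLAG and proxy_auc > risk_auc,
    })

print(pd.DataFrame(rows))
```

A production test would also look at combinations of variables, not just one at a time, since a proxy can emerge from interactions even when each feature looks harmless on its own.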

Disparate impact

The Equal Credit Opportunity Act and its implementation through Regulation B also prohibit disparate impact (DI). DI occurs when a facially neutral practice has a disproportionately negative effect on a protected class on a prohibited basis, “unless the creditor practice meets a legitimate business need that cannot reasonably be achieved as well by means that are less disparate in their impact.”

The first step in disparate impact testing is to see if it’s there. Zest AI’s approach to DI testing is to look at a credit model globally, all the variables and interactions at once, to figure out which features adversely impact protected class applicants more than their control group counterparts. Features causing significant harm that offer little predictive value can be further examined and considered for removal by the lender.

While some in the industry may question whether disparate impact is a viable theory of liability under the current Supreme Court, Zest AI and many others stand by it. Plus, managing disparate impact is just the right thing to do. Even if the court seems to be leaning a certain way, it is a long way from knocking down disparate-impact liability. When techniques exist to make your lending both fairer and more profitable, it's in your interest to use them.
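
As for the measurement itself, Zest’s global analysis is proprietary, but a minimal starting point for a DI check is easy to sketch: compare approval rates between the protected group and the control group (the familiar four-fifths rule of thumb), then attribute the average score gap to individual features. The fitted `model`, the DataFrame `X`, and the `approved` and `is_protected` columns are assumptions for illustration, and the simple linear decomposition stands in for the all-variables-and-interactions analysis described above.

```python
# Minimal sketch of a disparate impact check (illustrative; not Zest's method).
# Assumes: `model` is a fitted sklearn LogisticRegression over DataFrame `X`,
# `approved` is a boolean Series of decisions, and `is_protected` is a boolean
# Series used only for testing, never for scoring.
import pandas as pd

def adverse_impact_ratio(approved, is_protected):
    """Protected-group approval rate divided by control-group approval rate."""
    return approved[is_protected].mean() / approved[~is_protected].mean()

def per_feature_score_gap(model, X, is_protected):
    """For a linear model, attribute the average score gap between groups to
    individual features: coefficient * (protected-group mean - control-group mean).
    Large entries show which inputs account for most of the group-level difference."""
    mean_gap = X[is_protected].mean() - X[~is_protected].mean()
    return (pd.Series(model.coef_.ravel(), index=X.columns) * mean_gap).sort_values()

air = adverse_impact_ratio(approved, is_protected)
print(f"Adverse impact ratio: {air:.2f} (values well below 0.80 warrant a closer look)")
print(per_feature_score_gap(model, X, is_protected))
```

Features that account for a large share of the gap while adding little predictive lift are the ones a lender would examine and consider for removal, as described above.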

While some in the industry may question whether disparate impact is a viable theory of liability under the current Supreme Court, Zest AI and many others stand by it. Plus, managing disparate impact is just the right thing to do.

LDA Search + De-biasing 

As a final step in fair lending testing, regulators are going to want to see that you completed a thorough and well-documented search for any less discriminatory alternative (LDA) models. LDA search is how you show regulators and counterparties in legal claims that your credit practice “meets a legitimate business need that cannot reasonably be achieved as well by means that are less disparate in their impact,” as the ECOA requires.

This “business need” justification carve-out was written in an era when building fairer models usually meant less accurate underwriting, something no lender can sustain for long. With the performance gains of AI/ML, lenders have real options to deliver more profit and fairer lending. That’s something you can’t do right now with popular credit scores (or people, for that matter).


The method of LDA search we endorse uses a technique called adversarial debiasing, which allows lenders to optimize their underwriting models to reduce disparate impact with only minimal impact on accuracy. These alternate models sacrifice little to no profit in exchange for a lot more fairness. Lenders can choose the extent to which fairness enters into the process, develop many alternative models, and select one for production.
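
Zest’s production tooling is proprietary, but the general shape of adversarial debiasing is well documented in the ML fairness literature: train the main credit model while a second “adversary” model tries to recover protected status from the main model’s scores, and penalize the main model to the degree the adversary succeeds. The sketch below is a bare-bones illustration under that framing, not Zest’s implementation; the network sizes, loss weights, and tensor names are assumptions, and the protected flag `z` is used only to train the adversary, never as a model input.

```python
# Minimal sketch of adversarial debiasing (illustrative; not Zest's implementation).
# Assumes float tensors: X of shape (n, n_features) with no protected attributes,
# y of shape (n, 1) with default labels, and z of shape (n, 1) with the protected
# class flag, used only inside the training loop below.
import torch
import torch.nn as nn

class Predictor(nn.Module):
    """Main credit risk model: applicant features -> default logit."""
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x)

class Adversary(nn.Module):
    """Tries to recover protected status from the predictor's score alone."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
    def forward(self, score):
        return self.net(score)

def train_debiased(X, y, z, fairness_weight=1.0, epochs=200, lr=1e-3):
    predictor, adversary = Predictor(X.shape[1]), Adversary()
    opt_p = torch.optim.Adam(predictor.parameters(), lr=lr)
    opt_a = torch.optim.Adam(adversary.parameters(), lr=lr)
    bce = nn.BCEWithLogitsLoss()

    for _ in range(epochs):
        # 1) Train the adversary to guess protected status from the current scores.
        scores = predictor(X).detach()
        opt_a.zero_grad()
        bce(adversary(scores), z).backward()
        opt_a.step()

        # 2) Train the predictor to predict defaults well while making the
        #    adversary's job harder; fairness_weight sets the trade-off.
        opt_p.zero_grad()
        scores = predictor(X)
        loss = bce(scores, y) - fairness_weight * bce(adversary(scores), z)
        loss.backward()
        opt_p.step()

    return predictor

# Sweeping fairness_weight yields a menu of candidate models; the lender can compare
# each one's accuracy and disparate impact metrics and choose one for production.
```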

For example, one Zest client saw approval rates for women jump 26% after using a de-biased AI credit model. Likewise, a mortgage lender generated a model that shrank the approval rate gap between Black and white borrowers by 50%. ML, appropriately done, holds the key to ending racial disparities in financial services. 

The key thing to remember: LDA search has to achieve its fairness gains without using protected status as an input to the underwriting model, since doing so can violate ECOA or Reg. B. At Zest, for example, we designed our LDA Search tool to optimize for fairness without the model ever knowing the borrower’s protected status.

A growing number of lenders are choosing to deliver fairness at minimal cost across the board. For them, saying yes to more near-prime applicants is a growth strategy and a way to make good on their social commitments. As for liability concerns? We believe they’re unfounded. A bank can conduct these analyses under legally privileged conditions, and major lenders have used similar methods of identifying and choosing between less discriminatory alternatives for decades.

