Credit & Risk

Five simple policy changes that will unlock credit for millions of Americans

Teddy Flo
July 7, 2021

Last week, Zest AI joined dozens of industry leaders to provide feedback to the federal government on how the financial services industry uses artificial intelligence and machine learning technology (AI/ML).

Zest AI pioneered the application of AI/ML to credit underwriting, and we’ve spent years working with lawmakers and regulators to encourage the safe use of AI/ML. We care because our technology is used every day by lenders large and small, from coast to coast, to build, audit, and run ML underwriting models that make better lending decisions and cut down default rates.  

The industry will inevitably standardize around AI/ML lending. The economics are too obvious to pass up. But, if we get it right, the bigger prize to society will be fairness.

An ML algorithm is simply better at identifying risk across the credit spectrum than traditional credit scores. That gives lenders more options to say yes to underserved borrowers. For example, one Zest AI client saw approval rates for women jump by 20 percent after using an ML model. Another generated a model that shrank the approval rate gap between Black and white borrowers by 50 percent.

It’s our core belief that, used properly, AI/ML hold the key to ending racial disparities in financial services while improving the soundness of the U.S. banking and lending system. But, unfortunately, the vast majority of industry participants are using suboptimal legacy methods while waiting for regulatory guidance on the use of ML. Their legacy techniques make it nearly impossible to reverse the generations of systemic bias baked into public credit databases.

Modernizing law and policy to facilitate the adoption of AI and ML is an urgent challenge. The tolerance for mediocre results has to end. Zest AI’s written response to federal financial regulators seeks five uncontroversial and straightforward changes that can expand access to fair, affordable credit.

What we are asking for is doable. In combination, these changes would build atop the Equal Credit Opportunity Act to produce something akin to a Fair Credit Opportunity Act.

One: clarify that Regulation B allows the use of advanced explainability techniques

The Equal Credit Opportunity Act (ECOA) and its implementing regulation, Regulation B (Reg. B for short), require lenders to give consumers specific and valid reasons when they are denied credit. But, as we wrote in our December 2020 comment letter to the CFPB, legacy techniques for identifying those reasons will not work when used on AI/ML underwriting models.

Methods now exist to explain AI/ML models accurately, but the 40-year-old commentary to the regulations wasn't written with AI explainability in mind. One simple fix would be for the agencies, through updated guidance or their Offices of Innovation, to make clear that more accurate explainability methods are acceptable for generating adverse action notices in a world migrating to AI underwriting. Clearing up this ambiguity in the commentary would let lenders rely on more rigorous, math-based explainability methods and allow AI/ML models to make credit fairer and more transparent for protected groups.
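To make this concrete, here is a minimal sketch of how a math-based attribution method such as SHAP can surface the features most responsible for a denial. The features, synthetic data, and scoring setup are illustrative assumptions, not a description of Zest AI's methodology or any lender's model.

```python
# Minimal sketch: deriving adverse action reasons from an ML credit model with
# SHAP, a game-theoretic attribution method. The features, synthetic data, and
# scoring setup below are illustrative assumptions, not any lender's model.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "utilization_pct": rng.uniform(0, 100, 1000),
    "months_since_delinquency": rng.integers(0, 120, 1000),
    "inquiries_6m": rng.integers(0, 10, 1000),
    "debt_to_income": rng.uniform(0, 60, 1000),
})
# Synthetic "repaid" label loosely tied to two of the features, for demonstration only.
y = ((X["utilization_pct"] < 60) & (X["debt_to_income"] < 40)).astype(int)

model = GradientBoostingClassifier().fit(X, y)
explainer = shap.TreeExplainer(model)

# Pick the lowest-scoring (denied) applicant and attribute the decision to features.
denied = X.iloc[[np.argmin(model.predict_proba(X)[:, 1])]]
contributions = explainer.shap_values(denied)[0]

# The features pushing the score down the most become the stated denial reasons.
reasons = pd.Series(contributions, index=X.columns).sort_values().head(2)
print("Adverse action reasons:", list(reasons.index))
```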

Consumers deserve precise and accurate information from any kind of scoring model or algorithm that denies them credit.

Two: encourage the GSEs and FHFA to study and approve alternatives to traditional generic credit scores that don't perpetuate significant racial disparities

The process set up to approve alternative credit scores isn't working as effectively as it should. Among the many requirements that restrict competition and innovation, merely applying for certification to both GSEs requires an upfront fee of $400,000, plus a slew of other expenses that can add hundreds of thousands of dollars more. As a result, few innovators beyond the giant incumbents can afford to apply.

Consumer advocates have repeatedly raised concerns that the FHFA and GSEs unreasonably privilege the industry's leading score even though it unnecessarily restricts access to credit for the millions of Americans with thin or no credit histories.

To ensure that some of the most consequential uses of model scores in consumer financial markets do not unnecessarily exclude people from safe credit, the Agencies should encourage and work with the FHFA and GSEs to streamline the model approval process and level the playing field for credit scoring models that can expand access for the millions of consumers locked out by the leading generic score.

Three: make credit model documentation a requirement for all financial institutions and score providers, not just supervised lenders

Lenders overseen by prudential regulators are required to document their credit models for safety, soundness, and fairness. Yet unsupervised lenders and score providers, including some very large ones, face no such requirement. Everyone should.

Documenting credit models need not be a tortuous or expensive process. Technology can now automate and standardize the documentation of everything from feature selection and model performance to outputs and fair lending analysis. With an automated model risk management (MRM) framework, banks can cut the length of their compliance reviews by 75 percent. When it's time to produce and review MRM documentation, for example, Zest AI's clients can hit the print button and out comes a nine-chapter, SR 11-7-compliant MRM report, generated by tools from the data captured during the modeling process.
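The report format itself isn't specified here, but the underlying idea is simple: capture modeling artifacts as structured data during development, then render them into a standard template. A minimal sketch follows; the section names, metrics, and identifiers are illustrative assumptions, not the report Zest AI generates.

```python
# Minimal sketch of automating model documentation: metadata captured during
# development is rendered into a standard report template. The section names,
# metrics, and identifiers are illustrative assumptions, not the SR 11-7 format.
import json
from datetime import date

model_record = {
    "model_id": "auto_loan_v3",                                   # hypothetical name
    "trained_on": str(date.today()),
    "features": ["utilization_pct", "debt_to_income", "inquiries_6m"],
    "performance": {"auc": 0.74, "ks": 0.38},                     # placeholder metrics
    "fair_lending": {"approval_rate_ratio_black_white": 0.92},    # placeholder metric
}

def render_report(record: dict) -> str:
    """Render captured modeling metadata into a human-readable report."""
    lines = [f"Model Documentation: {record['model_id']} ({record['trained_on']})", ""]
    lines.append("1. Feature Selection")
    lines += [f"   - {f}" for f in record["features"]]
    lines.append("2. Model Performance")
    lines += [f"   - {k}: {v}" for k, v in record["performance"].items()]
    lines.append("3. Fair Lending Analysis")
    lines += [f"   - {k}: {v}" for k, v in record["fair_lending"].items()]
    return "\n".join(lines)

print(render_report(model_record))                 # review a draft on screen
with open("model_record.json", "w") as fh:         # keep the raw metadata alongside the report
    json.dump(model_record, fh, indent=2)
```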

Four: improve upon existing race estimation techniques so lenders and regulators can tell whether credit models are fair to people of color, women, and other protected groups

Improving fair lending testing requires improving race estimation. Because ECOA prohibits lenders from collecting data on applicants' race or gender outside of the mortgage context, lenders must estimate their applicants' races when conducting fair lending analysis. Currently, the CFPB and many lenders do so with a simplistic formula called Bayesian Improved Surname Geocoding (BISG). More data and better math can improve this method, and the Agencies should study how, to ensure poor data do not obscure real disparities.
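For context, BISG combines a surname-based race probability with a geography-based one under a strong independence assumption, which is part of why it breaks down when either input is noisy. A minimal sketch of the calculation follows; the probability tables are invented for illustration, since real implementations draw on Census surname lists and block-group demographics.

```python
# Minimal sketch of the BISG calculation: surname-based race probabilities are
# updated with geography-based probabilities under an independence assumption.
# All probability values below are invented for illustration only.
CATEGORIES = ["white", "black", "hispanic", "asian", "other"]

# P(race | surname), e.g. from the Census surname list (values invented here)
p_race_given_surname = {"white": 0.62, "black": 0.20, "hispanic": 0.10,
                        "asian": 0.05, "other": 0.03}

# P(race | geography) for the applicant's ZIP/block group (values invented here)
p_race_given_geo = {"white": 0.30, "black": 0.55, "hispanic": 0.08,
                    "asian": 0.04, "other": 0.03}

# National base rates P(race) (values invented here)
p_race = {"white": 0.60, "black": 0.13, "hispanic": 0.18,
          "asian": 0.06, "other": 0.03}

# Bayes update assuming surname and geography are independent given race:
#   P(race | surname, geo) is proportional to P(race | surname) * P(race | geo) / P(race)
unnormalized = {r: p_race_given_surname[r] * p_race_given_geo[r] / p_race[r]
                for r in CATEGORIES}
total = sum(unnormalized.values())
posterior = {r: v / total for r, v in unnormalized.items()}

print(posterior)  # probabilities a fair lending analysis would use as race proxies
```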

In a test using Florida voter registration data (one of the few publicly available datasets that include ZIP code, name, and race/national origin), the Zest Race Predictor (ZRP) identified Black consumers with 30 percent more accuracy than BISG at the 80 percent confidence threshold. It also cut the number of white consumers identified as non-white by 70 percent. In a test conducted by the Harvard Computer Society's Tech for Social Good program using North Carolina voter data, the ZRP algorithm was more than twice as accurate at identifying Black individuals and 35 percent more accurate at identifying Hispanic individuals than BISG.

Multiply those numbers nationwide, and we’re talking about correcting the race and ethnicity classification of tens of millions of Americans. With a more accurate count, we would better know the scope of the equity problem and the efficacy of our solutions. We believe that the Agencies should start looking for a better race estimation technique. We’re happy to give ours away and would be honored to collaborate on improving it.

Five: have regulators study and use advanced techniques for identifying less discriminatory alternative credit models when performing fair lending reviews

The most effective way to close existing wealth and homeownership disparities is to ensure lenders are not using discriminatory credit models to make decisions. Under “disparate impact” discrimination analysis, an ECOA violation may exist if a creditor uses a model that unnecessarily causes discriminatory effects. Thus, if a lender’s model causes disparities and a “less discriminatory alternative” model exists, the lender must adopt the alternative model.

Responsible lenders already perform this testing, but, unfortunately, they often fail to use the most effective techniques available, settling for second-best models when it comes to fairness. To make matters worse, when lenders search for and find multiple fairer models, existing regulations offer no guidance on which one the lender can safely choose. Regulators often find themselves in the same position during supervision and enforcement: they lack the tools to test whether a particular model is the fairest one available under reasonable business constraints.
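To illustrate what an automated search could look like (this is a sketch of the general idea, not Zest AI's method), candidate models can be generated by varying a single knob, then scored on both predictive power and a disparity metric before applying a business-performance constraint. The synthetic data, feature-subset candidates, adverse impact ratio, and tolerances below are all illustrative assumptions.

```python
# Minimal sketch of a less-discriminatory-alternative search: generate candidate
# models, score each on accuracy (AUC) and on an approval-rate ratio between
# groups, then pick the fairest candidate that still meets a business tolerance.
# Data, candidate scheme, and thresholds are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)                       # 0 = reference, 1 = protected (synthetic)
income = rng.normal(50 - 8 * group, 15, n)          # predictive, correlated with group
on_time_pct = rng.normal(80, 10, n)                 # predictive, uncorrelated with group
y = ((0.5 * income + 0.5 * on_time_pct + rng.normal(0, 10, n)) > 60).astype(int)
X = np.column_stack([income, on_time_pct])

feature_sets = {"income + payment history": [0, 1],
                "payment history only": [1],
                "income only": [0]}
candidates = []
for name, cols in feature_sets.items():
    model = LogisticRegression(max_iter=1000).fit(X[:, cols], y)
    scores = model.predict_proba(X[:, cols])[:, 1]
    approve = scores >= np.quantile(scores, 0.5)    # approve the top half (illustrative policy)
    air = approve[group == 1].mean() / approve[group == 0].mean()  # adverse impact ratio
    candidates.append((name, roc_auc_score(y, scores), air))

# Keep candidates whose AUC is within a business tolerance of the best model,
# then choose the one with the smallest disparity (AIR closest to 1).
best_auc = max(auc for _, auc, _ in candidates)
viable = [c for c in candidates if c[1] >= best_auc - 0.02]
chosen = min(viable, key=lambda c: abs(1 - c[2]))
print("Candidates (name, AUC, AIR):", candidates)
print("Chosen less discriminatory alternative:", chosen)
```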

The Agencies should upgrade their capabilities to ensure that lenders search for and adopt these less discriminatory alternative models. These alternative models exist and, if adopted, will drive fairer lending decisions for people of color and other historically marginalized groups. To complement the work that responsible lenders already undertake, the Agencies should study ways to improve and automate searches for less discriminatory models and provide guidance and clarity on selecting from among these alternatives. We detailed how to do this in our December 2020 comment to the CFPB.

Calls for equity in the financial system are more vital than ever. Unfortunately, the status quo is not going to produce the changes needed for a fairer economy. Fortunately, the advent of AI/ML modeling has given us the tools to do something about it. The market is moving toward these solutions, but more clarity and stronger incentives from regulators can speed their adoption and impact.
