An independent study of AI and fair lending

Zest AI team
April 15, 2021

Every industry should have something akin to what financial services has in FinRegLab. The independent, nonprofit research organization investigates new technologies and data to make the financial marketplace more responsible and inclusive. By sitting as an honest broker between the industry and the regulators, FinRegLab plays an increasingly crucial role in shaping public policy and advancing the use of tech in finance that can help consumers.

Two years ago, FinRegLab released authoritative research on the impact of using cash-flow data in consumer credit underwriting. The results, aggregated from six fintech lenders, were eye-opening.

FinRegLab found compelling evidence that cash-flow variables and scores were just as predictive of credit risk and loan performance as traditional credit bureau data. And — across the diverse set of companies, populations, and products studied — cash-flow data was just as good as, if not better than, traditional data at saying yes to applicants who would otherwise struggle to obtain credit at fair prices. This was a big boost for industry players such as Accion, Kabbage, and Petal that use cash flow to score applicants with thin or no credit files.

FinRegLab is doing it again.

This week it announced the launch of a ground-breaking new research project, the first empirical effort to evaluate the performance of available open source and proprietary machine learning (ML) credit model diagnostic tools in the context of three critical areas: model risk management, fair lending, and adverse action reporting.

“Advances in machine learning are likely to reshape lending in the coming years,” said Jonathan Levin, the Philip H. Knight Professor and Dean of Stanford Graduate School of Business. “Understanding how these algorithms can be designed to make credit allocation both more efficient and more equitable is an urgent challenge.”

FinRegLab will be working with researchers from Stanford Graduate School of Business (GSB) to evaluate the ability of available technology to explain, document, and govern ML underwriting models. This will be the first public research guided by all relevant stakeholders to address questions about explainability and fairness that have made lenders hesitate to adopt ML underwriting models at scale.

Zest AI is excited to be one of the industry participants in this research. We pioneered the use of machine learning in consumer credit underwriting and have seen great leaps in its fairness and transparency over the years. The application of ML to credit offers a once-in-a-generation opportunity to safely expand access to economic opportunity and address the severe economic inequality in the United States and everywhere.

But we have to get ML right. Algorithms can unintentionally perpetuate bias in lending if the models aren’t carefully built and monitored. Concerns over bias and explainability, as well as a lack of clear guidance from regulators, have led to hesitancy to adopt ML in lending.

These are the areas in which the FinRegLab study was designed to advance our understanding. How good are today’s ML technology products at proving reliability and governance? How good are they at complying with legal requirements to provide loan applicants with honest answers when they are denied credit? How good are they at spotting and mitigating disparate impact on protected classes (race, gender, sex, etc.)?

Answers will come in the months ahead. Note that there are no winners in this evaluation. The study is designed to provide aggregated results that will help all stakeholders, especially those steeped in traditional practices, gain a better understanding of, and comfort with, ML underwriting.
