Data Science & AI

Explainable Machine Learning in Credit

Jay Budzik

March 13, 2018

Machine learning is being used to solve an astonishing array of previously unsolvable problems. Besides powering search results and internet advertising, machine learning is used to help computers hear and see. Machine learning can recognize voices and characters, automatically label your personal photo library, pick the right music, assign jobs to the right worker, and help drivers prevent their car from veering out of their lane, among many other things.

Machine learning is being used to solve increasingly high-stakes problems. Radiologists use machine learning to identify regions in a scan that are more likely to be cancerous tumors so that doctors can catch the disease while a cure is still possible. And, in some jurisdictions, machine learning models even inform judgments in legal proceedings.

Machine Learning in Consumer Credit

Despite offering significantly higher accuracy than traditional methods, machine learning models have found limited use in credit underwriting. Credit underwriting is still based on techniques that originated in the 1950s. These techniques are relatively easy to understand and implement, and the industry has decades of experience using them. But they are not without shortcomings: they can only make use of a handful of variables, they are brittle in the face of missing data, and they can’t capture all the nuance in the data.
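To make the missing-data point concrete, here is a minimal sketch in Python using synthetic data: a standard logistic regression rejects records with missing values outright, while a gradient-boosted tree ensemble (XGBoost, used here purely as an example) learns a default branch for them at each split.

    import numpy as np
    import xgboost as xgb
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    X[rng.random(X.shape) < 0.2] = np.nan  # knock out 20% of the values
    y = rng.integers(0, 2, size=500)       # synthetic good/bad labels

    # scikit-learn's logistic regression raises on missing values.
    try:
        LogisticRegression().fit(X, y)
    except ValueError as err:
        print("logistic regression:", err)

    # XGBoost routes missing values down a learned default direction
    # at each split, so the same data trains without imputation.
    model = xgb.XGBClassifier(n_estimators=20, max_depth=3).fit(X, y)
    print("tree ensemble trained; sample scores:", model.predict_proba(X[:2])[:, 1])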

While most data analysis today has moved on to make use of machine learning, credit underwriting still predominantly relies on traditional scoring techniques. There are a number of reasons for this. One key reason is that machine learning models are hard to understand. They are complex “black boxes.” Unlike standard approaches based on logistic regression, it isn’t immediately obvious why machine learning models make the decisions they make, or which variables are the most important to the model in general.
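The contrast is easy to see in code. Below is a minimal sketch, with synthetic data and hypothetical feature names, of why a logistic regression score is straightforward to explain: each feature’s contribution to the log-odds is just its coefficient times its value, an additive breakdown that tree ensembles and neural networks do not offer directly.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Hypothetical features: credit utilization and account tenure.
    X = rng.normal(size=(1000, 2))
    y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    # Each feature's contribution to an applicant's log-odds of default
    # is simply coefficient * value, so any single decision can be read
    # off additively, feature by feature.
    applicant = X[0]
    contributions = model.coef_[0] * applicant
    print(dict(zip(["utilization", "tenure_months"], contributions)))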

This is fine when all you want to do is place an ad on the right web page to target a prospective client. But lending is not online advertising. Each decision must be understood well enough that it can be explained to a regular person. And the model’s decision-making process must be understood in general and analyzed in detail to understand its full impact.

In the U.S., the Fair Credit Reporting Act (FCRA) requires lenders to provide the reasons a loan or credit card application was denied. These are called adverse action notices. In a machine learning world, a credit model might have thousands of variables and use complex ensembles of decision trees and neural networks to assess an applicant’s credit risk. Given that much complexity, how can the model’s decision be explained in enough detail to provide the required adverse action reasons? Explainable machine learning solves this problem.
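One common way this is done (a sketch, not necessarily any particular vendor’s method) is to compute per-applicant feature attributions, such as SHAP values, and report the features that pushed the score hardest toward decline. XGBoost can produce these attributions natively via pred_contribs; the feature names and reason mapping below are hypothetical.

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(1)
    features = ["utilization", "num_delinquencies", "tenure_months", "inquiries_6mo"]
    X = rng.normal(size=(2000, len(features)))
    y = (X[:, 0] + 0.7 * X[:, 1] - 0.4 * X[:, 2] + rng.normal(size=2000) > 0).astype(int)

    dtrain = xgb.DMatrix(X, label=y, feature_names=features)
    booster = xgb.train({"objective": "binary:logistic", "max_depth": 3},
                        dtrain, num_boost_round=50)

    # Per-feature contributions to one applicant's score; the last column
    # is the bias term, so we drop it. Positive values push toward decline.
    applicant = xgb.DMatrix(X[:1], feature_names=features)
    contribs = booster.predict(applicant, pred_contribs=True)[0][:-1]

    # Report the features that hurt the applicant most as adverse action reasons.
    for i in np.argsort(contribs)[::-1][:2]:
        print(f"Adverse action reason: {features[i]} (contribution {contribs[i]:+.3f})")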

Model Risk Management and Mitigating the Risk of Another Financial Crisis

A second issue is understanding how the model makes decisions in general. There are a few reasons this is important.

The first is that if you are going to run a multibillion-dollar loan portfolio on a fancy algorithm, you had better be able to understand and explain how it works. You need to be able to recreate it easily and compare its performance against previous models. You need to understand how the model behaves in all circumstances, to be sure it isn’t going to go haywire and drive your business into a brick wall. And you need tools to monitor its performance over time. In the industry this is called Model Risk Management, and the Office of the Comptroller of the Currency (OCC) and the Federal Reserve have published detailed guidance designed to prevent catastrophes like those that occurred in 2008.
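As one concrete example of ongoing monitoring, here is a sketch of the population stability index (PSI), a standard check for drift between the score distribution the model was built on and the one it sees in production. The 0.25 alert threshold is a widely used rule of thumb, not something mandated by the OCC or the Fed.

    import numpy as np

    def psi(expected, actual, bins=10):
        """Population stability index between two score samples."""
        edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
        edges[0], edges[-1] = -np.inf, np.inf          # cover the full range
        e_pct = np.histogram(expected, edges)[0] / len(expected) + 1e-6
        a_pct = np.histogram(actual, edges)[0] / len(actual) + 1e-6
        return np.sum((a_pct - e_pct) * np.log(a_pct / e_pct))

    rng = np.random.default_rng(2)
    dev_scores = rng.beta(2, 5, size=10_000)     # scores at development time
    prod_scores = rng.beta(2.5, 5, size=10_000)  # scores observed in production

    value = psi(dev_scores, prod_scores)
    print(f"PSI = {value:.3f}" + ("  -> investigate drift" if value > 0.25 else ""))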

Machine Learning and Fair Lending Practices

Just as important as being able to replicate a model and ensure it is safe is being able to ensure it is fair. Historically, banks have struggled to ensure their practices do not discriminate. Despite landmark legislation like the Equal Credit Opportunity Act, signed into law in 1974 to prevent discrimination against protected classes, questions about whether banks lend fairly to minorities persist. Ensuring that machine learning models make fair decisions that do not discriminate against minorities and other protected classes is yet another important application of explainable machine learning.
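A basic version of such a check is easy to state. The sketch below computes the adverse impact ratio (AIR): each group’s approval rate divided by the most-favored group’s rate. The groups and approval rates here are illustrative, and the 0.8 threshold comes from the four-fifths rule in EEOC employment guidance; it is a screening heuristic, not a legal bright line for lending.

    import numpy as np

    rng = np.random.default_rng(3)
    # Hypothetical approval decisions (1 = approved) for two groups.
    approvals = {
        "group_a": rng.binomial(1, 0.62, size=5000),
        "group_b": rng.binomial(1, 0.55, size=5000),
    }

    rates = {group: a.mean() for group, a in approvals.items()}
    reference = max(rates.values())
    for group, rate in rates.items():
        air = rate / reference
        flag = "  -> review for disparate impact" if air < 0.8 else ""
        print(f"{group}: approval rate {rate:.2%}, AIR {air:.2f}{flag}")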

The good news is that explainable machine learning is available today. ZestFinance’s Automated Machine Learning (ZAML) solution offers unique tools that let you benefit from the power of machine learning while meeting the transparency standards needed to ensure your models are safe, fair, and compliant with the law. Our customers see decreases in credit losses of up to 33% thanks to the power of ensembled machine learning models. These same customers produce adverse action notices, perform disparate impact analysis, and create model risk management documentation that allows them to remain compliant with ECOA, FCRA, and OCC/Fed guidance on Model Risk Management.

Contact partner@zestfinance.com to learn more.
