Data Science & AI

Why Human-Interpretable Models Are A Myth (And That’s Totally Okay)

Jay Budzik
November 16, 2021

The current credit scoring system is in need of improvement. Seven out of ten financial services employees we recently surveyed said that racism is built into the system, and more than eight out of ten said AI and machine learning technology would lead to better credit scoring. Nine out of ten said regulators should allow greater use of AI in financial services.

As more lenders begin adopting AI/ML to approve consumer loans with more accuracy and fairness, we humans deserve to understand these formulas and models so we can trust them to make the right call.

Understanding how ML models work requires knowing their underlying assumptions and a detailed analysis of what's happening under the hood. Zest has always believed that ML models can and should be transparent. Fortunately, a wealth of academic ML research has produced substantial advances in understanding how models make decisions, under the umbrella term of model explainability. At Zest, we've spent the last ten years adapting these research breakthroughs to the lending business and developing some of our own techniques, grounded in rigorous math, for explaining algorithmic credit models.
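
To give a flavor of what explainability tooling looks like in practice, here is a generic sketch (not Zest's production method) of computing Shapley-value attributions for a toy gradient-boosted credit model with the open-source shap package. The features, data, and model here are made up purely for illustration.

```python
# Illustrative sketch only. The feature names and synthetic data are
# hypothetical stand-ins for a real credit dataset.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income": rng.normal(60_000, 15_000, 1_000),
    "dti": rng.uniform(0.05, 0.6, 1_000),
    "utilization": rng.uniform(0.0, 1.0, 1_000),
})
# Toy "default" outcome driven by debt-to-income and utilization plus noise.
y = (X["dti"] + X["utilization"] + rng.normal(0, 0.2, 1_000) > 0.8).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Shapley-value attributions: how much each feature pushed an applicant's
# score away from the average, in a way that sums back to the prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:5])
print(shap_values)  # one attribution per feature, per applicant
```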

However, we've noticed a worrying trend in which financial technologists claim that models should meet the standard of being "interpretable," meaning simple enough that anyone can understand them just by looking at the equation. This may sound like a common-sense criterion, but when you unpack it, it turns out to be a chimera used to exempt model authors from true transparency.

Demanding that a model be human interpretable limits the number of variables and the kinds of models that can be used. It also creates a false sense of security: analysts may think they understand their model, but that understanding often turns out to be relatively shallow. The true and legitimate approach, model explainability, places no such limits on the model.

In a new Zest report, our CTO Jay Budzik explains why human interpretability is a myth, especially when it comes to credit underwriting models. Most off-the-shelf credit scores don't qualify as interpretable, and even if they did, good luck getting credit scoring vendors to share their model equations with you. Most custom underwriting models developed by credit bureaus or statisticians don't qualify as interpretable either. And more complicated models, like adaptive explainable neural networks, clearly don't qualify.

We advocate the use of radical inventions such as calculus and computers to understand how models work. The same kind of analysis Newton used in 1687 to predict the motion of comets, and NASA used in 1969 to land on the moon, can be used to understand how a machine learning model generates a credit score. It's not that controversial a position, and we weren't the first ones to go there. Nobel prize-winning economist Lloyd Shapley and his co-authors came up with a provably fair way to attribute an outcome to the players in a cooperative game, the mathematical foundation for explaining complex models. Sundararajan and his colleagues applied Shapley's methods to neural networks, and we extended those methods to accommodate a more diverse set of models.
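
To make the calculus-based attribution idea concrete, here is a minimal numpy sketch of path-integrated gradients in the spirit of Sundararajan's work. The toy scoring function, the all-zeros baseline, and the finite-difference gradient are assumptions for illustration only; a production system would differentiate the actual scoring model.

```python
# Minimal sketch of integrated gradients, assuming a hypothetical smooth
# two-feature scoring function f. Not a production implementation.
import numpy as np

def f(x):
    # Toy differentiable "score": a smooth nonlinear function of two features.
    return np.tanh(0.8 * x[0] - 0.5 * x[1])

def numerical_gradient(f, x, eps=1e-5):
    # Central finite differences, used here in place of an analytic gradient.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        grad[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return grad

def integrated_gradients(f, x, baseline, steps=100):
    # Average the gradient of f along the straight path from baseline to x,
    # then scale by (x - baseline); attributions sum to f(x) - f(baseline).
    path = [baseline + (k / steps) * (x - baseline) for k in range(1, steps + 1)]
    grads = np.array([numerical_gradient(f, p) for p in path])
    return (x - baseline) * grads.mean(axis=0)

x = np.array([1.2, 0.4])
baseline = np.zeros_like(x)
attributions = integrated_gradients(f, x, baseline)
print(attributions, attributions.sum(), f(x) - f(baseline))
```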

We appreciate a healthy dose of skepticism, especially when it comes to high-stakes business applications like credit approvals. But we think it’s time to “step into the future,” and leave phony limitations behind. It really is OK to trust in the same mathematics and computational methods we have been reliably using for decades in the engineering and science disciplines, to explain our credit underwriting models.

