Innovation in Lending

How To Make Sure Results From AI Lending Models Hold Up Over Time

Zest AI team

September 9, 2021

Lenders everywhere are switching from legacy credit scoring to AI-driven underwriting models. Why? AI-based models produce faster decisions 24/7 and generate more good loans and fewer bad ones. We've recently written about how to compare the statistical outperformance of AI-driven models over traditional models and how to translate those improvements into revenue and profit gains for your business. 

Download the complete Zest Guide, “Doing The Math: How To Assess The Value of AI-Driven Lending” today.

Knowing that an AI model outperforms a benchmark is nice, but you want to ensure that this statistical superiority holds up over time. The way to do that is to analyze the AI model’s K-S score month by month and compare its AUC to your benchmark model across the entire test period, using the same target. (K-S and AUC are common measures of statistical accuracy.) Slight variation between months is okay, but wide swings could indicate an overfitted model. Here’s an example of what the outcomes of successful stability tests look like:
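As a minimal sketch of this kind of stability check, the snippet below computes K-S and AUC for each month of a holdout period. The column layout and the synthetic data are illustrative assumptions, not Zest's actual pipeline; in practice you'd feed in your own scores, outcomes, and observation months.

```python
# Illustrative monthly K-S / AUC stability check for a credit score.
# Data here is synthetic; the point is the per-month metric loop.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.metrics import roc_auc_score

def monthly_stability(scores, labels, months):
    """Return {month: (ks, auc)} so drift between months is easy to spot."""
    results = {}
    for m in sorted(set(months)):
        mask = months == m
        s, y = scores[mask], labels[mask]
        # K-S: max separation between the score distributions of bads and goods
        ks = ks_2samp(s[y == 1], s[y == 0]).statistic
        auc = roc_auc_score(y, s)
        results[m] = (ks, auc)
    return results

# Synthetic six-month test set: higher scores indicate higher default risk.
rng = np.random.default_rng(0)
n = 6000
months = np.repeat(np.arange(1, 7), n // 6)
labels = rng.binomial(1, 0.15, n)            # ~15% bad rate
scores = rng.normal(labels * 0.8, 1.0)       # bads score higher on average

for month, (ks, auc) in monthly_stability(scores, labels, months).items():
    print(f"month {month}: K-S={ks:.3f}, AUC={auc:.3f}")
```

If the monthly K-S and AUC values cluster tightly, the model's rank-ordering power is stable; a steady drift or a sudden drop is the signal to investigate before refitting.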


The first chart shows excellent stability over a six-month test period for all three product models: score accuracy is holding up well. The second chart compares the AUC of the AI/ML models (aggregated) against benchmark generic scores over a similar test period. Again, the AI model outperforms consistently over time.

Zest software builds these kinds of powerful and durable models, even with a limited set of performance data, by using a proprietary, rules-based reject inference method that augments the funded and unfunded populations the models are trained on. The result: resiliency that reduces your need for costly refits or rebuilds. You should always ask about model and score stability over time when you’re considering or reviewing an internal or vendor model.

One last note: accuracy and performance are not the end of your analysis. What about the cost to document your model? How will you set up performance monitoring to ensure the model is working as designed?

The sizable benefits of AI-driven lending come with a bit more complexity downstream, especially for organizations that are new to machine learning and lack deep benches of analysts, compliance experts, and data scientists, i.e., pretty much every lender below the top tier. You’re going to want to automate as much of the regulatory and compliance work as possible to spare your existing teams the added burden. Zest AI software saves you a lot of time by automating critical compliance functions and producing an SR 11-7 compliant model risk management report on demand when your model is complete.

Regarding responsible monitoring, any AI/ML modeling solution you’re looking to buy or build should come with multivariate input monitoring to ensure that the distribution of features in production matches expectations, as well as output monitoring to ensure the score distribution is consistent with training and your score cut-off decisions are still valid. You’ll also want performance monitoring to highlight the ongoing economic and technical performance of the model. Fortunately, Zest’s Model Management System includes all of these monitors so that your risk and compliance teams can sleep at night.
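One common, generic way to check that a production score distribution still matches training is the Population Stability Index (PSI). The sketch below is an assumption-laden illustration, not Zest's monitoring implementation; the decile binning and the conventional 0.1 / 0.25 alert thresholds are industry rules of thumb, not Zest specifics.

```python
# Illustrative PSI check: does the production score distribution still
# match the training distribution? Synthetic data for demonstration.
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between baseline and current scores."""
    # Bin edges from baseline deciles; open the ends to catch outliers.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)  # avoid log(0) on empty bins
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(1)
train_scores = rng.normal(600, 50, 10_000)
prod_scores = rng.normal(605, 50, 10_000)   # mild shift: PSI stays small
drifted = rng.normal(650, 60, 10_000)       # large shift: PSI blows up

print(f"mild shift PSI:  {psi(train_scores, prod_scores):.4f}")
print(f"large shift PSI: {psi(train_scores, drifted):.4f}")
```

By the usual rule of thumb, PSI below 0.1 means the population is stable, 0.1 to 0.25 warrants a closer look, and above 0.25 suggests the score distribution has shifted enough that your cut-offs may no longer do what you expect.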


