Best Practices In AI Lending: Data, Documentation, Monitoring

Zest AI team
December 14, 2021

Lenders thinking about switching to AI-based lending always want to know three big things: what kinds of data should I be using, how do I document an AI/ML credit model, and how do I make sure my results hold up over time?

Data

Machine learning algorithms can consume a virtually limitless variety of data, from many sources and in large quantities, to make accurate predictions. However, the Fair Credit Reporting Act (FCRA) limits the types of data lenders can use for credit underwriting to data provided by credit reporting agencies, or CRAs. So lenders need to ensure that they're using fully FCRA-compliant data sources in their underwriting.

Alternative data is all the rage, with lots of new evidence that using rent, utility, cellphone payment, and other cash-flow data can boost thin-file scores. While there are great new platforms for sourcing open data, for now you still need to make sure your data is FCRA compliant. Besides, we find that richer use of standardized bureau data remains an untapped resource. We get a lot of signal out of the bureau data that's already there.

Documentation

Here again, the AI/ML approach will look similar to what you already do, with some twists. For example, ML models usually require more transparency in feature analysis and validation because their variables don't always point in the same direction, and there are many more of them.

That said, the documentation doesn’t have to be a chore. Zest software, for example, comes with an Autodoc application that, at the push of a button, produces a model risk management report compliant with the SR 11-7 model risk management standard set in 2011 by the Fed and OCC, the FDIC Financial Institution Letter FIL-22-2017, and the NCUA’s Corporate Credit Union Guidance Letter 2013-02. Ours runs dozens of pages and explains the model's development, implementation, and use, along with the soundness of its validation, governance, policies, and controls. You may only need the executive summary.

Either way, it’s a chance to up your compliance game. Now is a good time to do so, as federal and certain state regulators discuss imposing new or additional documentation requirements on risk models. Of course, those could be two years or ten years out. Still, even if you're not already prudentially regulated, you should be thinking about model documentation, because it is a regulatory regime that's coming down the pike.

Download Zest's new, free guide to AI Lending compliance

One last note about documentation: Take special care to show your validation steps. All prudential regulators require that models be validated to ensure that they're accurate and performing as expected. But because machine learning is new to most regulators, they will scrutinize your validation work more than they would a traditional underwriting method. So you should be prepared to demonstrate the three essential requirements of a solid validation process, which we share in the next section.

Validation

One of the final steps in the compliance process is proving to examiners or auditors that your models perform as expected and designed. That applies to all model components -- inputs, processing, outputs, and reports -- and it applies equally to models developed in-house and those acquired from vendors or consultants. The three essential requirements of a solid validation process are:

Evaluation of Conceptual Soundness. You’ll need to assess the quality of your model's design and construction, and how carefully you kept within common industry practice. This section is where you justify the modeling method used and why it’s appropriate for the purpose. For ML, it could be as straightforward as saying the model is “intended to improve the risk assessment of loan applications to better support underwriting decisions to increase approvals and/or reduce losses within the loan portfolio.”
Ongoing Monitoring. Many lenders mistakenly believe that underwriting algorithms are living, breathing organisms that learn and adapt over time. Far from it. ML lending models are trained and then locked down to pass validation. But their performance can still drift if you’re not watching. Monitoring is essential to ensure the model isn’t losing predictive accuracy, especially as products, clients, and market conditions shift.
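One common way to watch for that kind of drift is a population stability index (PSI) check comparing the score distribution your model saw at development time against what it sees in production. The sketch below is a minimal illustration of the technique, not Zest's monitoring tooling; the thresholds in the docstring are the rule-of-thumb values commonly used in credit risk.

```python
import numpy as np

def population_stability_index(expected, actual, n_bins=10):
    """Compare a development-time score distribution (`expected`) with a
    production distribution (`actual`).  Common rule of thumb:
    PSI < 0.1 is stable, 0.1-0.25 warrants investigation, and
    PSI > 0.25 signals significant drift."""
    # Bin edges come from the development-time distribution (deciles).
    edges = np.percentile(expected, np.linspace(0, 100, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range scores
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid log(0) in sparsely populated bins.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))
```

Run on each month's scored applications, a check like this flags when the incoming population has moved away from what the model was trained on, even though the model itself has not changed.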

Outcomes Analysis. This step involves comparing model outputs to corresponding outcomes. One way to do this (we recommend it) is to back-test the trained model using an “out-of-time” period not used in model development but of roughly similar length to the model’s performance window. You can also run stress and scenario tests on a model, seeing what would happen if every applicant were missing bankruptcy data or had a thin file.
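An out-of-time back-test can be sketched in a few lines: fit on an earlier window, score a later window the model never saw during development, and measure discrimination on that holdout. Everything here is illustrative; the column names (`app_date`, `default_flag`), the features, and the logistic regression standing in for whatever model you actually deploy are assumptions, not a prescribed setup.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def out_of_time_backtest(df, features, cutoff):
    """Train on applications before `cutoff`, then measure AUC on the
    later, out-of-time window that played no part in development."""
    train = df[df["app_date"] < cutoff]
    holdout = df[df["app_date"] >= cutoff]
    model = LogisticRegression(max_iter=1000)
    model.fit(train[features], train["default_flag"])
    preds = model.predict_proba(holdout[features])[:, 1]
    return roc_auc_score(holdout["default_flag"], preds)
```

The same harness extends naturally to scenario tests: copy the holdout frame, null out or perturb a feature (say, the bankruptcy fields), rescore, and compare the resulting AUC against the baseline.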

People say that getting an ML model through a compliance review is impossible. Far from it. Your first pass will take longer because regulators are less familiar with these models, but many regulators, in our experience, look favorably on the switch because ML models tend to be more accurate and fairer. It is a task that will only get easier with time, as more industry participants and examiners get used to it.

AI-driven lending can transform your organization’s growth and bottom line. Putting in the time to understand the results and prepare your organization early will reap outsized rewards. Take advantage of outside resources to learn. We put out new learning Guides all the time, including our definitive six-step guide to adopting machine learning, so check back frequently at www.zest.ai.
