Why Criticism Of Machine Learning Is Suspect

Zest AI team

March 28, 2020

We’ve read the criticism that using AI and machine learning in consumer lending is risky and suspect as we head into a sharp economic downturn. Any new technology comes with risk, but, properly built and operated, AI lending models can perform well even in an economic environment like the one we’re experiencing now. We’ve seen this firsthand with one of the largest banks in Turkey, which weathered a sharp currency shock and runaway inflation in 2018. The ML lending model we built with them held up well enough that the bank chose to accelerate adoption of AI across its lending businesses. Anyone who says machine learning has not been battle-tested in difficult times is simply misinformed.

Some of the criticisms leveled at AI lending models are true of any kind of model. Models built using data only from recent years don’t capture consumer behavior from prior periods of economic contraction. All models must be put through a thorough review process as required by Fed SR 11-7. Machine learning will always be more powerful than traditional methods because it uses more data and better math, but you can’t assume that ML will compensate for sloppy modeling or loose model risk management practices.

Machine learning models have to be tested rigorously for stability over time and against synthetic data that emulate sudden economic changes and the resulting shifts in the applicant population. Our highly experienced team has helped large lenders navigate downturns here and abroad.
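One way to probe that kind of stability is to re-score the same applicants under synthetic shocks and measure how far the model moves. Here is a minimal sketch of the idea; the `ToyScorecard` model, the feature indices, and the shock sizes are all illustrative assumptions, not Zest’s actual tooling:

```python
import numpy as np

class ToyScorecard:
    """Stand-in for any fitted model that exposes predict_proba (illustrative)."""
    def __init__(self, coef):
        self.coef = np.asarray(coef, dtype=float)

    def predict_proba(self, X):
        p = 1.0 / (1.0 + np.exp(-(X @ self.coef)))
        return np.column_stack([1.0 - p, p])

def stress_test(model, X, shocks):
    """Compare baseline scores to scores under synthetic input shifts.

    shocks maps a scenario name to (column index, multiplier), emulating
    e.g. a sudden drop in reported income or a spike in utilization.
    """
    baseline = model.predict_proba(X)[:, 1]
    report = {}
    for name, (col, factor) in shocks.items():
        X_shocked = X.copy()
        X_shocked[:, col] *= factor           # apply the synthetic shock
        shocked = model.predict_proba(X_shocked)[:, 1]
        report[name] = {
            "mean_score_shift": float(np.mean(shocked - baseline)),
            "max_abs_shift": float(np.max(np.abs(shocked - baseline))),
        }
    return report
```

A model whose scores swing wildly under plausible shocks deserves extra scrutiny before being trusted in a downturn.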

We’ve put together the following list of facts to help risk managers understand how ML and traditional modeling methods stack up, as they consider using more powerful analytics to get a leg up during these difficult times:

Sensitivity to downturns: All models, traditional (scorecard, logistic regression) and machine learning (XGBoost, neural networks) alike, are trained on historical data; this is nothing unique to AI. Any model built on loan data from the expansion will have blind spots in a sharp contraction. The timeframe of the training data has nothing to do with whether the model is a machine learning model or a traditional one. What you want is a modeling process and toolset that can generate updates quickly when new information comes in from model monitors. Side note: some people assume ML lending uses continuously self-learning AI, as in search engines or fraud detection, but self-learning models could never be validated or allowed by regulators. Zest AI, like anyone else in this space, must build discrete AI models trained and tested on a fixed period of time so they can be validated and documented properly.
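The “discrete model” point can be made concrete: the training window is fixed and documented, and the model is then checked on a later, out-of-time window rather than updating itself continuously. A minimal sketch, with dates and cutoffs made up for the example:

```python
import numpy as np

def time_split(dates, train_end, valid_end):
    """Boolean masks for a fixed training window and a later out-of-time
    validation window, so exactly which data the model saw can be
    documented and performance checked on months it never trained on."""
    dates = np.asarray(dates, dtype="datetime64[D]")
    train = dates <= np.datetime64(train_end)
    valid = (dates > np.datetime64(train_end)) & (dates <= np.datetime64(valid_end))
    return train, valid

# Example: train on loans through 2018, validate out-of-time on 2019.
dates = ["2018-01-15", "2018-06-01", "2019-02-01", "2019-08-01"]
train_mask, valid_mask = time_split(dates, "2018-12-31", "2019-12-31")
```

Because the windows are frozen, the same split can be handed to validators and reproduced exactly during model review.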
Heterogeneity (aka the ability to identify good borrowers in a sea of bad): AI models use more data and better math to generate a more accurate, more granular rank ordering of risk across the credit spectrum. Traditional models can miss key drivers that differentiate risk and assign very different applicants the same score. All modeling methods suffer when there isn’t sufficient statistical support for a trustworthy prediction: when a model encounters data unlike anything it has seen before, its predictions should be viewed with skepticism and subjected to increased scrutiny. That is what’s happening right now for models of every type. An ML model of any vintage, paired with rigorous explainability, allows real-time monitors to pick up anomalies early in a downturn. Zest customers are taking action now on outlier activity they spotted a month ago.
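Rank ordering is a measurable property, not just a talking point. One common yardstick (our choice for illustration, not necessarily the metric any particular lender uses) is AUC: the probability that a randomly chosen defaulter receives a higher risk score than a randomly chosen non-defaulter.

```python
import numpy as np

def auc(scores, defaulted):
    """Mann-Whitney formulation of AUC: the probability a random defaulter
    outscores a random non-defaulter. 0.5 means no rank ordering; 1.0 is
    perfect. Assumes continuous scores (no ties)."""
    scores = np.asarray(scores, dtype=float)
    defaulted = np.asarray(defaulted)
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    n_bad = int(defaulted.sum())
    n_good = len(defaulted) - n_bad
    bad_rank_sum = ranks[defaulted == 1].sum()
    return (bad_rank_sum - n_bad * (n_bad + 1) / 2) / (n_bad * n_good)
```

Comparing this number for a challenger ML model against an incumbent scorecard, overall and within borrower segments, is a simple way to quantify the “more granular rank ordering” claim.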


Risk management: All models material to a business, traditional and ML alike, should be validated and tested per SR 11-7 guidance so that lenders understand how the model behaves under adverse conditions. AI models can be tested and validated just like their more traditional counterparts. Risk assessment at origination will always be imperfect because it is a prediction about the future made at a specific point in time, so it is important to test model behavior across a variety of scenarios and game out action plans in advance.

Input monitoring: The guidance calls for ongoing monitoring of all material models to detect shifts in the character of the applicants the models are being asked to score. In the last few weeks, we have seen dramatic shifts in the credit quality of applicants, and lenders are using our explainability tools to pinpoint the key drivers of score differences so they can make informed credit policy decisions. Whether your origination models are based on machine learning or more traditional methods, monitoring and model transparency are both critical right now.
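A standard way to quantify such population shifts is the Population Stability Index (PSI) computed between the model’s development sample and current applicants. A minimal sketch follows; the decile binning and the usual <0.10 / 0.10–0.25 / >0.25 thresholds are industry conventions, not regulatory requirements:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline batch (e.g. the
    development sample) and a current batch of applicants, over decile
    bins of the baseline. Rule of thumb: <0.10 stable, 0.10-0.25
    moderate shift, >0.25 significant shift worth investigating."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf    # catch out-of-range values
    e_frac = np.histogram(expected, edges)[0] / len(expected)
    a_frac = np.histogram(actual, edges)[0] / len(actual)
    e_frac = np.clip(e_frac, 1e-6, None)     # avoid log(0) in empty bins
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))
```

Run per input feature and per score, a PSI spike is exactly the kind of early anomaly signal described above.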

Portfolio review: Banks should review the loans they’ve already made in order to mitigate charge-offs driven by changes in the economy. This applies regardless of whether a loan was made using a black-box credit score, a custom scorecard, or an ML model. AI-powered portfolio review tools can provide a more accurate risk assessment of outstanding loans because they can incorporate additional data, such as employment type, income patterns, and other signals from checking and savings accounts, that can be key indicators of repayment risk.

Agility: As new data becomes available, it’s important to be on an agile footing. Automation lets lenders swiftly refit, document, test, and deploy models as rapidly changing market conditions demand, refreshing them to reflect new data and better inform decision-making.

All models, from ML to Excel, are built to predict outcomes based on historical data. The current unprecedented economic environment, with sharply rising unemployment claims, volatile stocks and bonds, and decreased consumer spending due to widespread shelter-in-place orders, is not represented in any historical dataset. Models must be carefully monitored and analyzed in order for banks to make prudent policy decisions. Responsible modeling practice requires sensitivity testing and validation called for by regulatory guidance, regardless of the technology and tools you use.

Zest AI is a recognized leader in advanced analytics for financial services. Contact us at hello@zest.ai
