Credit & Risk
Overcoming the Biggest Barrier to AI Adoption: Explainability
February 26, 2021
As banks and credit unions of all sizes adopt AI and machine learning more widely, one of the most powerful use cases to emerge is credit risk underwriting. By replacing legacy underwriting models with machine learning-based models, lenders can lift approvals by 15% or more, or cut charge-offs by 30%. How? By using more information. ML models consume 10 to 100 times more data, including trended data and inclusive data such as cash flow, rent, and utility bills. The millions of correlations among all these variables yield deeper insight and more accurate risk prediction.
Some risk officers see that complexity as a dealbreaker. To them, ML is black-box technology, or simply too hard to explain to auditors and regulators. While that may have been true two or three years ago, data science has moved fast. The good news is that ML models can be rendered transparent down to every variable and denial factor using provable math. Speaking at a Federal Reserve Bank AI symposium in January 2021, Fed Governor Lael Brainard highlighted the industry's advancements in ML transparency: "The AI community has made notable strides in explaining complex machine learning models."
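To make the "provable math" idea concrete: one widely used family of techniques is Shapley-value attribution, which decomposes an individual applicant's score into per-feature contributions relative to a baseline applicant. The sketch below is purely illustrative (the feature names, toy scoring function, and numbers are hypothetical assumptions, not Zest's actual models or method): it computes exact Shapley values for a tiny model and ranks the features that pulled the score down, which is the raw material for adverse-action reason codes.

```python
from itertools import combinations
from math import factorial

# Hypothetical baseline applicant (assumed values for illustration only).
BASELINE = {"utilization": 0.30, "late_payments": 0.0, "income": 60000.0}

def score(features):
    """Toy additive credit score in [0, 1]; higher = more creditworthy.
    A stand-in for a trained ML model -- Shapley values apply to
    nonlinear models the same way."""
    s = 0.7
    s -= 0.4 * features["utilization"]        # high utilization lowers score
    s -= 0.1 * features["late_payments"]      # each late payment lowers score
    s += 0.000002 * (features["income"] - 60000.0)
    return max(0.0, min(1.0, s))

def shapley_values(features, baseline, model):
    """Exact Shapley attribution: each feature's average marginal
    contribution to the score across all coalitions, measured against
    a baseline applicant. Exponential in feature count, so only
    practical for small n; real systems use faster approximations."""
    names = list(features)
    n = len(names)
    phi = {}
    for f in names:
        others = [x for x in names if x != f]
        total = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {x: features[x] if (x in subset or x == f) else baseline[x]
                          for x in names}
                without_f = {x: features[x] if x in subset else baseline[x]
                             for x in names}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

applicant = {"utilization": 0.90, "late_payments": 3.0, "income": 42000.0}
phi = shapley_values(applicant, BASELINE, score)

# Denial factors: the features that pushed the score down the most.
reasons = sorted(phi.items(), key=lambda kv: kv[1])[:2]
```

The key property is additivity: the per-feature contributions sum exactly to the gap between the applicant's score and the baseline score, which is what makes the attribution provable rather than heuristic.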
Other risk officers are holding off on AI adoption because they want more clarity from regulators on how ML fits within current model risk management and fair lending requirements. That clarity should arrive before long, but in the meantime, Zest-built models are going live after passing rigorous agency audits. The test is whether these ML models meet or exceed current requirements to prove they're free of bias, safe to use, and aligned with business goals. They do.
Adopting AI Responsibly
In helping lenders move to AI-powered underwriting, we see that the banks and credit unions that successfully adopt AI do two things:
- Select the right explainability technique and streamline your model risk management (MRM) process to address the specific needs of risk, data science, regulatory, IT, legal, and business functions.
- Gather and address stakeholder concerns early to avoid downstream regulatory and compliance challenges. By soliciting feedback sooner rather than later, you identify and resolve problems well before the model reaches production.
So, what’s the best way to achieve explainability in ML? Our latest guide, A Lender’s Roadmap to AI Adoption, covers the most common compliance and regulatory concerns you’ll need to address and shares best practices for adopting AI. For those who want to dig deeper into ML explainability, here are a few resources:
- Robust Explainability in AI Models
- Video: How to Build Transparent Machine Learning Credit Models
- Getting Adverse Action Notices Right for Machine Learning Models
- Most AI Explainability is Snake Oil, Ours Isn't
Change is challenging, but the journey to AI is easier than you might think. With the right strategy and prescriptive approach, you can successfully implement ML models to deliver better results for every lending objective.