Why Not Just Use SHAP?

Jay Budzik

May 30, 2019

It’s becoming an accepted fact in credit underwriting that, as a way to model and predict borrower risk, machine learning outperforms traditional methods such as logistic regression and manually constructed scorecards by a wide margin.

Yet explaining ML models such as gradient boosted trees and deep neural networks (or ensembles thereof) has been a persistent challenge. If you can’t show how each input variable contributes to a model’s score, it’s impossible to use ML models for tasks such as credit decisioning, where complete transparency is required both by common sense and by regulation.

Open-source software tools have recently come onto the scene promising to solve ML’s lack of transparency. One of them, called LIME, creates a simpler, linear approximation of any underlying model and explains that simpler approximation instead. We’ve shown before that LIME is often too slow to be useful, and also significantly inaccurate, even on the simplest of models.
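LIME’s core move can be seen in a few lines of code. The sketch below is illustrative only: the black-box function, the kernel width, and the point being explained are all assumptions for the example, not the real lime library. It perturbs inputs around one instance, weights the samples by proximity, and fits a weighted linear surrogate whose slopes serve as the "explanation."

```python
import numpy as np

# Minimal sketch of LIME's core idea (toy model; not the lime library):
# approximate a black-box model near one point with a proximity-weighted
# linear surrogate fit on perturbed samples.

rng = np.random.default_rng(0)

def black_box(X):
    # hypothetical nonlinear "risk score" used only for illustration
    return np.sin(X[:, 0]) + X[:, 1] ** 2

x0 = np.array([0.5, 1.0])                       # instance to explain
X = x0 + rng.normal(scale=0.1, size=(500, 2))   # local perturbations
y = black_box(X)

# proximity kernel: samples closer to x0 get more weight
w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.01)
sw = np.sqrt(w)

# weighted least squares on centered features (plus an intercept column)
A = np.hstack([X - x0, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)

local_slopes = coef[:2]  # approximates the model's gradient at x0
```

The slopes recovered here track the true local gradient of the toy function, which is exactly the fragility the LIME critique points at: the quality of the explanation depends entirely on how well a linear surrogate fits in that neighborhood.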
A newer open-source tool, SHAP, is much more advanced. It uses cooperative game theory and a clever algorithm to produce model explanations for a particular kind of model called XGBoost, a tree-based model that performs well for some predictive analytics use cases.
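The game-theoretic quantity behind SHAP is the Shapley value, which SHAP approximates efficiently for trees. To make the idea concrete, here is an exact computation over a tiny toy "value function" with three features; the game itself is a made-up example, not anything from the SHAP library:

```python
from itertools import combinations
from math import factorial

# Exact Shapley values for a toy cooperative game (illustrative only).
# value(S) is the "score" achievable by the feature subset S.

def shapley_values(value, n):
    """Return the Shapley value of each of n players for value(frozenset)."""
    phi = [0.0] * n
    for i in range(n):
        others = [p for p in range(n) if p != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                S = frozenset(S)
                # weight = |S|! (n - |S| - 1)! / n!
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi[i] += weight * (value(S | {i}) - value(S))
    return phi

def v(S):
    # toy game: features 0 and 1 each add value, plus an interaction bonus
    score = 0.0
    if 0 in S: score += 1.0
    if 1 in S: score += 2.0
    if 0 in S and 1 in S: score += 1.0  # split evenly by the Shapley rule
    return score

phi = shapley_values(v, 3)
```

By the efficiency axiom, the attributions sum exactly to the total payoff, and the interaction credit is split evenly between features 0 and 1. This exhaustive computation is exponential in the number of features, which is why SHAP’s fast tree algorithm matters, and why the approximations it makes deserve scrutiny.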

Tools to explain machine learning models have to stand up to the rigors of credit underwriting.

It’s easy to assume open-source packages such as SHAP are good enough for your model explainability problem. But in the highly regulated context of credit underwriting, you are taking a big risk using SHAP without fully understanding the mathematics behind it and other approaches. Plugging and chugging on math you don’t really understand doesn’t meet the letter or the spirit of the Federal Reserve’s Guidance on Model Risk Management. It’s easy to think you’re safe when you’ve inadvertently gotten the wrong answer, and our research on SHAP strongly suggests that it often returns the wrong answer. Giving out wrong answers for why you’ve just rejected a borrower could expose you to enforcement actions over fair lending and fair credit violations, as well as lawsuits from consumer advocacy groups.

Read our deeper technical dive into the issues around SHAP

If that isn’t enough to persuade you, many open-source model explainability methods (SHAP included) are limited to a single model type. You may want to use continuous modeling methods such as radial basis function networks, Gaussian mixture models, and, perhaps most commonly, deep neural networks. The current implementation of SHAP cannot explain other types of tree models, and it can explain only a small collection of continuous models, and then only by falling back on algorithms other than SHAP itself.

What’s more, SHAP cannot explain ensembles of continuous and tree-based models, such as stacked or deeply stacked models that combine XGBoost and deep neural networks. In our experience, these types of ensemble models are more accurate and stable over time. Don’t just take our word for it: the same approach was used by the Netflix Prize winners, and it has been used countless times to win data science competitions worldwide.
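The structure of a stacked ensemble is simple to sketch. The toy base learners below are stand-ins chosen for brevity (not XGBoost, a neural network, or anything from ZAML): each base model’s out-of-fold predictions become input features for a meta-model, which learns how to weight the base learners.

```python
import numpy as np

# Minimal sketch of stacking with two toy base learners (illustrative
# assumptions throughout): out-of-fold base predictions feed a linear
# meta-model, so the blend is learned on data the bases never saw.

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X[:, 0] + np.tanh(X[:, 1]) + 0.1 * rng.normal(size=200)

def fit_linear(X, y):
    # least-squares linear model with an intercept
    A = np.hstack([X, np.ones((len(X), 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Z: np.hstack([Z, np.ones((len(Z), 1))]) @ coef

def fit_stump(X, y):
    # crude tree-like rule: mean of y on each side of feature 1
    pos, neg = y[X[:, 1] > 0].mean(), y[X[:, 1] <= 0].mean()
    return lambda Z: np.where(Z[:, 1] > 0, pos, neg)

# out-of-fold predictions from each base learner (2 folds for brevity)
oof = np.zeros((len(X), 2))
folds = [(slice(0, 100), slice(100, 200)), (slice(100, 200), slice(0, 100))]
for tr, va in folds:
    for j, fit in enumerate([fit_linear, fit_stump]):
        oof[va, j] = fit(X[tr], y[tr])(X[va])

# meta-model: a learned linear blend of the base predictions
meta = fit_linear(oof, y)
blend = meta(oof)
```

Explaining a model like this requires attributing the final score back through both the meta-model and the heterogeneous base learners, which is exactly the case single-model-type tools don’t cover.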

That’s why we built ZAML to explain a much wider variety of model types, enabling you to use world-beating ensemble models to drive your lending business. You’re leaving significant profits on the table if you settle for anything less.

ZestFinance’s ZAML suite of credit modeling tools enables you to access advanced ensemble modeling methods practically and transparently. Our ZAML tools incorporate everything we’ve learned from building and running live ML credit risk models for more than a decade. Unlike open-source tools, ZAML is not a raw library you have to wrap a workflow and business process around. Instead, ZAML offers a step up in operational efficiency by integrating accurate explainability methods into tools that mirror the credit risk modeling lifecycle: development, verification, and operationalization.

SHAP may be fine for lab work, research, or low-stakes predictive analytics. Credit underwriting is anything but low-stakes. We built ZAML tools to solve the rigorous use cases facing banks and lenders. They are extensible, so you can adapt them to your specific business case and documentation requirements. ZAML tools enable you to assemble reusable model build and analysis workflows that can be fully automated.

Our tools are designed specifically for credit underwriting and support functions such as:

  • Analysis and selection of predictive variables
  • Feature engineering, model selection and tuning
  • Feature contribution, partial dependence, and analysis of interactions
  • Model comparison
  • Risk analysis, including swap-set analysis
  • Financial projections
  • Model verification
  • Fair lending analysis, testing, and less discriminatory alternative search
  • Adverse action
  • Model documentation
  • Model deployment
  • Monitoring

With ZAML, you can spend less time reworking open-source tools and more time making money by putting superior models into production.

Jay Budzik is the CTO of Zest AI
