The Ball Is Finally Moving On AI Explainability

A couple of weeks ago, Google released a suite of cloud-based machine-learning (ML) tools that explain decisions made by neural network models. This was great news for those of us in the AI applications space, adding to the work released by IBM earlier this year. Google’s heft in the data science world will draw more attention to the progress being made to dispel the myth that algorithmic models are black boxes. They’re not -- if you have the right tools to explain their decisions. For a few years now, we’ve been helping lenders deploy ML credit underwriting models that are fully interpretable as to why an applicant was approved or denied a loan.

While we laud Google’s explainable AI, it won’t solve ML adoption for every business, and it has limitations, especially in financial services. First, Google’s explainability tools (now in beta) require that model design and deployment reside within the Google Cloud. What most companies need are explainability tools that can be used in any cloud environment or on local servers.
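
To make that concrete, here is a minimal sketch of the kind of environment-agnostic workflow we mean, assuming the open-source shap and scikit-learn packages and entirely synthetic data. Nothing here depends on a particular cloud: the model is trained and explained on whatever hardware you choose.

```python
# A minimal sketch of cloud-agnostic explainability, assuming the open-source
# shap and scikit-learn packages. Features, labels, and the model are synthetic
# and for illustration only.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                                   # synthetic applicant features
y = (X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley-value attributions for tree ensembles,
# running entirely on local servers or in any cloud environment.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:5])                      # attributions for 5 applicants
print(shap_values[0])                                           # per-feature reasons, first applicant
```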

Second, Google’s support for ensembled models is limited. Ensembled models, which combine multiple models often built with diverse modeling techniques, boost the predictive power of AI credit scoring and will become standard as more lenders embrace ML over the next few years. While both Google’s and Zest AI’s explainability tools are rooted in similar mathematical principles (an implementation of the Aumann-Shapley method), Google provides only limited support for tree-based models and for ensembles of trees and neural networks.
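
For readers curious what that math looks like in practice, here is a small illustrative sketch of the Aumann-Shapley idea as it appears in integrated gradients: integrate the model’s gradient along a straight path from a baseline to the applicant, approximated with a Riemann sum. This is a toy logistic model with made-up weights, feature names, and values; it is not either company’s implementation.

```python
# A minimal sketch of Aumann-Shapley style attributions (integrated gradients)
# for a hypothetical logistic "underwriting" model. All weights, features, and
# applicant values are invented for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w, b):
    """Toy probability-of-default score for one applicant."""
    return sigmoid(np.dot(w, x) + b)

def model_grad(x, w, b):
    """Analytic gradient of the toy score with respect to each feature."""
    p = model(x, w, b)
    return p * (1.0 - p) * w

def integrated_gradients(x, baseline, w, b, steps=50):
    """Approximate attribution_i = (x_i - baseline_i) * integral over alpha
    of dF/dx_i along the straight line from baseline to x (Riemann sum)."""
    alphas = np.linspace(0.0, 1.0, steps)
    grads = np.array([model_grad(baseline + a * (x - baseline), w, b)
                      for a in alphas])
    return (x - baseline) * grads.mean(axis=0)

w = np.array([2.0, -1.5, 3.0])          # hypothetical model weights
b = -0.5
applicant = np.array([0.9, 0.3, 0.6])   # hypothetical applicant features
baseline = np.array([0.4, 0.5, 0.1])    # e.g., an "average approved" profile

attributions = integrated_gradients(applicant, baseline, w, b)
for name, a in zip(["utilization", "income", "delinquencies"], attributions):
    print(f"{name:>14}: {a:+.4f}")

# Completeness: the attributions sum (approximately) to the difference between
# the applicant's score and the baseline score, so per-feature reasons add up.
print("sum of attributions:", attributions.sum())
print("score difference  :", model(applicant, w, b) - model(baseline, w, b))
```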

Third, while the Google AI “What-If” tool is clever and lets modelers test different scenarios at a glance, the tooling around it is not easy to pick up: developers have to learn a specific coding language and configuration convention just to access the explainability functions.
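
Scenario testing itself does not require a special interface. As a generic illustration (this is not Google’s What-If Tool API), here is what a bare-bones what-if check looks like in plain Python, with a hypothetical scoring function and feature values:

```python
# A generic "what-if" scenario check: perturb one feature and re-score to see
# how the decision changes. The scoring function and values are hypothetical.
import numpy as np

def score(x):
    w, b = np.array([2.0, -1.5, 3.0]), -0.5        # toy underwriting model
    return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

applicant = np.array([0.9, 0.3, 0.6])              # utilization, income, delinquencies
scenario = applicant.copy()
scenario[0] = 0.5                                   # what if utilization were lower?

print("original score:", round(score(applicant), 3))
print("scenario score:", round(score(scenario), 3))
```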

Last, Google’s explainability package is aimed primarily at data scientists -- not credit analysts working for heavily regulated financial-services firms. In theory, data scientists at banks could build models using Google’s tools. But those same teams would then need to build additional tools to test their models for accuracy and fairness and to generate all the compliance reporting regulators require. Those are capabilities Zest has built into our modeling software for more than two years, including automated model validation and risk documentation.
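
As one small example of what such tooling has to cover, here is a sketch of a single, basic fairness screen -- the adverse impact ratio, with 0.8 as a common screening threshold -- computed on synthetic decisions and group labels. A real compliance workflow runs many such tests and documents them for regulators.

```python
# A minimal fairness-screen sketch: the adverse impact ratio compares approval
# rates across groups. Decisions and group labels below are synthetic.
import numpy as np

approved = np.array([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])   # model decisions
group    = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
air = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}, adverse impact ratio: {air:.2f}")
```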

So while Google now offers a great explainability tool for cloud-based neural networks, the vast majority of end users -- especially in highly regulated industries such as financial services and healthcare -- will continue to need more complete explainability solutions that cover their regulatory risk. A complete solution should include managed services with expert help from a dedicated team, model risk management (MRM) documentation and model monitoring tailored to the industry, underwriting expertise and experience working with the largest lenders, and, not least, a track record of deploying models that stand up to regulatory scrutiny.

Google is taking important first steps to bring more transparency to the world of ML. That’s encouraging—for lenders, consumers, and the tech community—and we will be following its progress.

In the meantime, Zest AI will continue working with financial clients, data infrastructure firms such as credit bureaus and loan origination software vendors, and regulators and lawmakers to advance broader adoption of explainable AI in financial services.