Data Science & AI

Most AI Explainability Is Snake Oil. Ours Isn't, and Here's Why

Jay Budzik
December 15, 2018

Advanced machine learning (ML) is a subset of AI that uses more data and more sophisticated math to make better predictions and decisions. Banks and lenders could make a lot more money using ML-powered credit scoring instead of the legacy methods in use today. But adoption of ML has been held back by the technology's "black-box" nature: you can see the model's results, but not how it arrived at them. You can't run a model safely or accurately if you can't explain its decisions, especially in a regulated use case such as credit underwriting.
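To make the black-box problem concrete, here is a minimal sketch in Python. The feature names and data are synthetic illustrations (not drawn from any real credit model): a gradient-boosted model hands back a risk score with no accompanying reasons.

```python
# A minimal sketch of the "black-box" problem: a gradient-boosted credit
# model returns a score but no per-applicant reasons. All feature names
# and data here are synthetic illustrations.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(680, 60, n),    # hypothetical bureau score
    rng.uniform(0, 1, n),      # hypothetical utilization ratio
    rng.integers(0, 10, n),    # hypothetical recent inquiries
])
# Synthetic default labels loosely tied to the features.
logit = -0.01 * (X[:, 0] - 680) + 2.0 * X[:, 1] + 0.1 * X[:, 2] - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))

model = GradientBoostingClassifier().fit(X, y)

applicant = X[:1]
print(f"Predicted default risk: {model.predict_proba(applicant)[0, 1]:.3f}")
# The hundreds of trees inside the model offer no built-in answer to
# "why?" -- nothing resembling the reason codes regulators require.
```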

At Zest AI, we've built new kinds of explainability math into our Zest Model Management system that quickly render the inner workings of ML models transparent, from creation through deployment. You can use these tools to monitor model health at run time. You can trust that the results are fair and accurate. And we've automated the reporting, so producing all the documentation required to comply with regulations takes just the push of a button.
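Zest's explainability math itself is proprietary, but the general idea (attributing each decision to the features that drove it) can be illustrated with the open-source shap package. Continuing the sketch above:

```python
# An illustration of per-decision feature attribution using the
# open-source `shap` package (a generic stand-in, not Zest AI's actual
# math), applied to the model and applicant from the sketch above.
import shap

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(applicant)

feature_names = ["bureau_score", "utilization", "recent_inquiries"]
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name:>18}: {contribution:+.3f}")
# Each contribution says how much that feature pushed this applicant's
# score up or down -- the raw material for adverse-action reason codes
# and for monitoring a model's behavior in production.
```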

A handful of other techniques are coming to market claiming to solve this black-box problem, but these methods can be inconsistent, inaccurate, or slow, and they can fail to spot unacceptable outcomes such as race- and gender-based discrimination. We produced a short white paper reviewing the pros and cons of these techniques compared to Zest AI's approach, including the results of a head-to-head test on the key measures of consistency, accuracy, and speed.
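To see what the consistency measure is getting at, consider a sampling-based explainer such as the open-source LIME package: run twice on the same applicant with different random seeds, it can return different attributions. A sketch, again continuing from the model above (the seeds and feature names are illustrative):

```python
# Consistency check on a sampling-based explainer: the open-source
# `lime` package can explain the same applicant differently depending
# on the random seed, because it fits its explanation to random samples.
from lime.lime_tabular import LimeTabularExplainer

feature_names = ["bureau_score", "utilization", "recent_inquiries"]

def attributions(seed):
    explainer = LimeTabularExplainer(
        X, feature_names=feature_names, mode="classification", random_state=seed
    )
    exp = explainer.explain_instance(
        applicant[0], model.predict_proba, num_features=3
    )
    return dict(exp.as_list())

print(attributions(seed=0))
print(attributions(seed=1))  # Different weights for the same applicant.
```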

It's time to distinguish real explainability from the imitations. Read the full paper here.
