Human-Interpretable Models Are A Myth (And Why That's Okay)

We've noticed a worrying trend: financial technologists claiming that credit underwriting models should be "interpretable," meaning simple enough that anyone can understand them just by looking at the equation. This may sound like a common-sense criterion, but when you unpack it, it turns out to be a chimera that exempts model developers from true transparency. The legitimate approach, model explainability, places no such limits on the model. In this report, Zest AI CTO Jay Budzik advocates for a different standard: using calculus and computers to understand how models work. We're not the first to argue this.
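To make the "calculus and computers" idea concrete, here is a minimal sketch of gradient-based feature attribution, one common calculus-based explainability technique. The model, weights, and feature names below are hypothetical illustrations, not anything from the report: the point is only that a derivative can tell you how each input pushes a score, no matter how complex the model's equation is.

```python
import numpy as np

# Hypothetical logistic credit-scoring model (illustrative only):
# score = sigmoid(w . x + b), with x = [income, debt_ratio, years_on_file]
w = np.array([0.8, -1.5, 0.4])
b = -0.2

def score(x):
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

def gradient_attribution(x, eps=1e-6):
    """Estimate d(score)/d(x_i) for each feature with central
    differences -- the calculus behind many explainability methods
    (gradient-times-input, Integrated Gradients, and the like)."""
    grads = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        hi, lo = x.copy(), x.copy()
        hi[i] += eps
        lo[i] -= eps
        grads[i] = (score(hi) - score(lo)) / (2 * eps)
    return grads

applicant = np.array([1.2, 0.6, 0.3])  # standardized feature values
attributions = gradient_attribution(applicant)
# The sign of each attribution shows the direction of influence:
# here the negative-weight debt_ratio feature pulls the score down.
```

This works for any differentiable model, which is exactly why explainability does not require the model itself to be simple.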