Robust Explainability in AI Models

Recent research in machine learning suggests that certain explainability approaches can yield unreliable explanations, either lacking robustness or being overly sensitive to small changes in a model's inputs. Inaccurate or misleading explanations carry serious implications for financial institutions and technology providers using or considering machine learning. These firms are subject to stringent regulations, such as the requirement to provide accurate reasons for denying an applicant and to identify the drivers of algorithmic bias (race, gender, etc.).

At Zest AI, we've spent years making machine learning safe to use in highly regulated environments. The challenges raised in this research prompted us to verify that our own explainability methods do not suffer from the weaknesses it identifies.

In this white paper, our Data Scientist Ian Hardy covers:
  • The key components of our explainability approach
  • How we design for robustness and regulatory compliance
  • An experimental demonstration of our approach on a real credit model and dataset
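
For readers unfamiliar with what "robustness" means in this context, the sketch below illustrates one common kind of check; it is not Zest AI's method and is not drawn from the paper. It fits a toy model on synthetic data, computes simple per-feature attributions, and measures how much those attributions move when the inputs are slightly perturbed. The model, data, and attribution scheme are all illustrative assumptions.

    # Illustrative robustness check for local explanations (not Zest AI's method):
    # perturb each input slightly and verify that per-feature attributions stay stable.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Toy stand-in for a credit model: synthetic data + logistic regression.
    X, y = make_classification(n_samples=2000, n_features=8, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X, y)

    def attributions(x):
        """Simple linear attribution: coefficient * (feature value - training mean)."""
        return model.coef_[0] * (x - X.mean(axis=0))

    # Compare explanations before and after a small input perturbation.
    sample = X[:100]
    noise = rng.normal(scale=0.01, size=sample.shape)  # small Gaussian perturbation
    base = np.array([attributions(x) for x in sample])
    perturbed = np.array([attributions(x) for x in sample + noise])

    # Robustness metric: largest change in any attribution across the sample.
    max_shift = np.abs(base - perturbed).max()
    print(f"max attribution shift under small noise: {max_shift:.4f}")

A robust explainer should report only a small shift under such perturbations; explanations that swing sharply in response are the kind of instability the research discussed above warns about.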