The New Way To Reduce Bias In AI


The best lenders rely on Zest software to build and safely operate ML credit models.

Zest's ZAML suite of tools provides full machine learning (ML) transparency. That means total awareness of what's driving disparity in your credit models. Until now, lenders had to sacrifice accuracy to reduce disparity in underwriting. 

We don't think you have to sacrifice accuracy to be fair.

How Does It Work?


Every credit model has some bias in it.

The traditional way of reducing bias is slow and manual, and it often comes late in model development. It requires making lots of hard choices between fairness and performance and then re-running the model. Lenders end up simply tossing out offending credit signals, which leaves a lot of performance on the table. With ZAML Fair, lenders stay in control and can pick a better model in a fraction of the time and effort required by legacy techniques.


ZAML Fair relies on the transparency tools built into ZAML to rank a model's credit signals by their influence on model bias.
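To make the ranking idea concrete, here is a toy sketch (not Zest's proprietary method, and all names and numbers are invented for illustration): score each signal by how much the average score gap between two demographic groups shrinks when that signal is neutralized, then sort.

```python
# Toy illustration: rank credit signals by their influence on the score
# gap between two demographic groups ("a" and "b"). A signal's "bias
# influence" is how much the gap shrinks when the signal is neutralized
# (replaced by its population mean for every applicant).

def mean(xs):
    return sum(xs) / len(xs)

def score(row, weights):
    """Stand-in linear credit model: a weighted sum of signal values."""
    return sum(weights[name] * row[name] for name in weights)

def score_gap(rows, groups, weights):
    """Absolute difference in average model score between the groups."""
    a = [score(r, weights) for r, g in zip(rows, groups) if g == "a"]
    b = [score(r, weights) for r, g in zip(rows, groups) if g == "b"]
    return abs(mean(a) - mean(b))

def bias_influence(rows, groups, weights):
    """For each signal, measure the drop in the group score gap when
    that signal is set to its overall mean for every applicant."""
    baseline = score_gap(rows, groups, weights)
    influence = {}
    for name in weights:
        avg = mean([r[name] for r in rows])
        neutralized = [{**r, name: avg} for r in rows]
        influence[name] = baseline - score_gap(neutralized, groups, weights)
    return sorted(influence.items(), key=lambda kv: kv[1], reverse=True)

# Tiny synthetic portfolio: 'income' differs sharply between the groups,
# 'utilization' does not, so income should rank first.
rows = [
    {"income": 90, "utilization": 30}, {"income": 80, "utilization": 50},
    {"income": 40, "utilization": 35}, {"income": 35, "utilization": 45},
]
groups = ["a", "a", "b", "b"]
weights = {"income": 0.5, "utilization": -0.2}

ranking = bias_influence(rows, groups, weights)
print(ranking[0][0])  # prints "income", the signal driving the disparity
```

The real ZAML tooling works on full ML models rather than a linear toy, but the output is the same shape: a ranked list of which signals drive disparity and by how much.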

ZAML Fair then deploys a "helper AI" that combines with the existing model to carefully reduce the impact of the offending variables that drive racial and gender disparity. These are often common credit signals such as income and the traditional credit score. You can't toss those out, but you can mitigate their impact.
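The combining step can be sketched in miniature. This is an illustrative stand-in, not Zest's actual algorithm: the "helper" term here is simply each applicant's deviation of one disparity-driving signal (income, one of the signals named above) from the population mean, and a dial `lam` controls how strongly the blend mutes it. The data and model are invented for the example.

```python
# Toy sketch of blending a "helper" correction with an existing model:
# rather than dropping a disparity-driving signal, subtract a scaled
# correction that offsets the signal's group-level skew. lam = 0 keeps
# the original model; larger lam shrinks the group score gap.

def mean(xs):
    return sum(xs) / len(xs)

def base_score(row):
    # Hypothetical existing model: income helps, utilization hurts.
    return 0.5 * row["income"] - 0.2 * row["utilization"]

def helper_correction(rows, signal):
    """Helper term: each applicant's deviation of `signal` from the
    population mean, i.e. the component the blend will dampen."""
    avg = mean([r[signal] for r in rows])
    return [r[signal] - avg for r in rows]

def fair_score(rows, lam, signal="income"):
    """Blended model: base score minus lam times the scaled helper
    correction for the chosen signal."""
    corr = helper_correction(rows, signal)
    return [base_score(r) - lam * 0.5 * c for r, c in zip(rows, corr)]

def gap(scores, groups):
    a = [s for s, g in zip(scores, groups) if g == "a"]
    b = [s for s, g in zip(scores, groups) if g == "b"]
    return abs(mean(a) - mean(b))

rows = [
    {"income": 90, "utilization": 30}, {"income": 80, "utilization": 50},
    {"income": 40, "utilization": 35}, {"income": 35, "utilization": 45},
]
groups = ["a", "a", "b", "b"]

# Turning up lam shrinks the group score gap step by step.
for lam in (0.0, 0.5, 1.0):
    print(lam, round(gap(fair_score(rows, lam), groups), 2))
```

In practice the dial gives lenders a menu of candidate models along the fairness-accuracy frontier, so they can pick the trade-off they want instead of having it forced on them.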

Lenders shouldn’t have to choose between fairness and accuracy. With ZAML Fair, they can optimize for both.


Get started with ZAML Fair

ZAML Fair works on most credit models and is available today to all ZAML customers. For more information, fill out the form below or email us.