Data Science & AI

Why Transparent AI Is More Important Now Than Ever

Zest AI team

April 3, 2020

There’s no question that in the next six months to a year, the world is going to look radically different — especially for the financial industry. Lenders, in particular, are going to have to re-evaluate the metrics they use for giving loans and, as they make those changes, transparency is going to be essential.

Consumers, shareholders, and regulators are going to demand it. Why was one person given a loan while another was denied? Why did one borrower get a payment freeze while another didn’t? In an economy of skyrocketing unemployment, how are lenders deciding who is a good risk? What measures are lenders taking to limit their exposure to non-performing loans?

For banks that have adopted artificial intelligence (AI) to improve their lending practices, transparency means being able to determine how and why an algorithm arrived at its decision. Because the mathematics behind these models is so complex, that's not straightforward: even the best mathematicians could take weeks to unwind some AI results by hand. That's why it's crucial that algorithmic models not be black boxes churning out results. They need to have transparency built in from the very start.

Algorithms, like humans, are susceptible to bias. To scrub algorithms for systematic bias, lenders need to be able to look inside models and see how they are making connections among thousands of different variables. Without transparency, it's impossible to use AI in credit underwriting in a way that is responsible to both business goals and fairness requirements.
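To make "looking inside a model" concrete, one simple, widely used diagnostic is permutation importance: shuffle a single feature across applicants and measure how much the model's scores move. The sketch below is a generic illustration, not Zest's actual tooling; the toy scoring function, its weights, and the feature names are hypothetical.

```python
import random

# Hypothetical toy credit scorer standing in for an opaque ML model.
# Weights (0.6 / 0.3 / 0.1) are illustrative only.
def score(applicant):
    return (0.6 * applicant["payment_history"]
            + 0.3 * applicant["income_stability"]
            + 0.1 * applicant["utilization"])

def mean_abs_change(a, b):
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def permutation_importance(model, rows, feature):
    """Shuffle one feature across applicants and measure how much
    the model's output changes. Larger change = the model leans
    harder on that variable."""
    baseline = [model(r) for r in rows]
    shuffled = [r[feature] for r in rows]
    random.shuffle(shuffled)
    perturbed = []
    for row, val in zip(rows, shuffled):
        copy = dict(row)
        copy[feature] = val
        perturbed.append(model(copy))
    return mean_abs_change(baseline, perturbed)

random.seed(0)
applicants = [
    {"payment_history": random.random(),
     "income_stability": random.random(),
     "utilization": random.random()}
    for _ in range(500)
]

importances = {
    f: permutation_importance(score, applicants, f)
    for f in ("payment_history", "income_stability", "utilization")
}
# With these weights, payment_history should dominate the ranking.
```

Model-agnostic checks like this work on any scorer, but as the next paragraph notes, naive explanation techniques can mislead when applied to complex ML models, which is why purpose-built explainability matters.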

Until recently, the credit underwriting market didn't have the tools to open up AI's black box. One danger is that some systems claiming to solve the black-box problem rely on explanation techniques that work fine for traditionally constructed models but produce an unacceptably high share of bad explanations when applied to machine learning. The false positives from transparency tools not built for AI and ML approaches mean lenders take on more portfolio and regulatory risk than they realize.

At Zest, we’ve spent years refining our AI systems to include explainability tools at the core of every model. The results are solid: we’ve helped lenders expand access to credit for underserved populations, with an average 15% increase in approval rates and no additional risk. The ability to understand a model's reasoning and economic value lets lenders make credit decisions with confidence while ensuring compliance with regulations around disparate impact and adverse action.

Without transparent AI in lending models, millions of deserving people could find it nearly impossible to get affordable credit to buy a home, finance a car, or take out a student loan. Regulators will notice and the market will notice too.

