
Why AI Transparency Is So Important

Jay Budzik

April 23, 2020

What is transparency in AI?

Transparency in AI is the ability to determine how and why an algorithm arrived at its decision. Many organizations claim to advocate for the fair and transparent use of AI, but that commitment is rarely backed by real action or business strategy. The gap can lead to a whole host of problems, including discrimination, unfairness, and general mistrust, all of which have received increased attention lately. Algorithmic models get a bad rap as black boxes prone to unfair bias, but they don't have to be opaque if you have the right tools to explain their decisions.

If the results are right, why is it important for enterprises to know how the AI arrived at them?

Public trust, disclosure, and transparency are necessary governing ethics for AI technologies. Businesses using AI models in high-stakes applications need to know how those systems work, how they reach their decisions, and how those decisions impact Americans. The industry term for answering these questions is "explainability," and when making high-stakes decisions, AI users should be required to build rigorous explainability processes and methods. Failing to do so can lead businesses to adopt opaque and flawed AI that threatens consumers, needlessly perpetuates discrimination, and endangers the safety and soundness of the financial system.
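To make "explainability" concrete, here is a minimal sketch of one common approach: computing per-decision feature attributions for a tree-based model with the open-source shap library. The model, feature names, and data below are hypothetical placeholders for illustration, not a description of any specific lender's system.

```python
# Minimal sketch: per-decision feature attributions with SHAP.
# The features, data, and model are synthetic placeholders.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["debt_to_income", "utilization",
                 "months_since_delinquency", "inquiries_6mo"]
X = rng.random((1000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0.9).astype(int)  # toy "default" label

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer attributes each individual score to the input features,
# so every decision can be traced back to the factors that drove it.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]

for name, value in zip(feature_names, contributions):
    print(f"{name}: {value:+.3f}")
```

In a lending workflow, signed contributions like these are what let a risk or compliance team see which factors pushed an individual applicant's score up or down.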

Truly transparent AI is going to be a crucial edge for those who have it, and the lack of it a roadblock for those who don't.

Organizations often don't want to invite additional scrutiny. Why would they want important, potentially sensitive systems such as AI to be more transparent? How does it benefit them?

Algorithms, like humans, are susceptible to bias. To scrub systematic bias out of an algorithm, it must be explainable and transparent, and organizations should strive for that goal to reduce risk, increase fairness, and satisfy regulatory and compliance requirements. There are concrete business benefits, like increasing approvals among creditworthy borrowers who have historically been misjudged by traditional scoring methods. There is also the threat of running afoul of regulation and incurring significant fines, and organizations certainly don't want to do that unwittingly because of a system they can't explain.
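One widely used screen for this kind of systematic bias is the four-fifths rule: compare the approval rate for a protected group with that of a reference group, and investigate if the ratio falls below 0.8. A minimal sketch, with made-up approval counts:

```python
# Minimal sketch of a disparate-impact screen using the four-fifths rule.
# The approval counts below are made up for illustration.
def adverse_impact_ratio(approved_protected, total_protected,
                         approved_reference, total_reference):
    """Ratio of the protected group's approval rate to the reference group's."""
    rate_protected = approved_protected / total_protected
    rate_reference = approved_reference / total_reference
    return rate_protected / rate_reference

air = adverse_impact_ratio(approved_protected=310, total_protected=500,
                           approved_reference=420, total_reference=500)
print(f"Adverse impact ratio: {air:.2f}")
if air < 0.8:
    print("Below the four-fifths threshold: review the model for disparate impact.")
else:
    print("Passes the four-fifths screen (not proof of fairness on its own).")
```

A screen like this is only a starting point: passing it does not establish fairness, and failing it calls for a deeper look at the model and its inputs.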

Can you share examples of cases where AI transparency is vital (for instance, applications that would be impossible without it)?

It would be impossible to use AI in credit underwriting without transparency. Until recently, the market didn’t have the tools to open up AI’s black box, but a lot of folks are working on it. We’ve been working specifically on solving the explainability problem for AI-based credit underwriting. The results are pretty solid: We’ve helped a handful of lenders expand access to credit for underserved populations, with a 15% increase in approval rates on average. Providing the ability to understand a model’s reasoning and economic value allows lenders to make credit decisions with confidence while ensuring compliance with regulations on disparate impact and adverse action. Without transparent AI, millions of deserving people would find it nearly impossible to get affordable credit to buy a home, finance a car, or take out a student loan.
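To illustrate the adverse-action side of compliance, the sketch below turns a declined applicant's most negative feature attributions into reason codes. The attribution values and reason texts are hypothetical placeholders, not an actual lender's codes.

```python
# Minimal sketch: map a declined applicant's most negative attributions
# to adverse action reasons. Values and reason texts are hypothetical.
REASON_CODES = {
    "debt_to_income": "Income insufficient for amount of credit requested",
    "utilization": "Proportion of balances to credit limits is too high",
    "months_since_delinquency": "Delinquency on accounts",
    "inquiries_6mo": "Too many recent credit inquiries",
}

def adverse_action_reasons(attributions, top_n=2):
    """Return reason texts for the features that pushed the score down the most."""
    negative = [(name, value) for name, value in attributions.items() if value < 0]
    negative.sort(key=lambda item: item[1])  # most negative first
    return [REASON_CODES[name] for name, _ in negative[:top_n]]

applicant = {"debt_to_income": -0.42, "utilization": -0.15,
             "months_since_delinquency": 0.05, "inquiries_6mo": -0.03}
print(adverse_action_reasons(applicant))
```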

Would there be any cases where AI transparency is bad?

Transparency would be bad if the explanations themselves were wrong, effectively false positives. And not all AI needs to be explained in detail if the use case isn't regulated. A conversational marketing or customer service bot, or an image recognition algorithm, doesn't require an explanation as long as the results are good.

Given increasing adoption of AI, where do you see the importance of AI transparency going in the future?

Well, like I said, AI transparency is most important in highly regulated areas, credit underwriting being the one we're in today. But you can imagine a number of other applications of AI with similar needs to satisfy regulation or demonstrate clear model explainability, such as healthcare or government services. There are a lot of exciting potential applications just beginning to come into view. I think we'll see AI and AI transparency continue to shape critical industries over the next several years.


Photo by Bud Helisson on Unsplash
