Federal Reserve Governor Lael Brainard

Fair Lending & Compliance

Fed Governor Lael Brainard Gets The Promise Of Transparent AI

Bruce Upbin

January 19, 2021

Federal Reserve Governor Lael Brainard is one of the more forward-thinking financial regulators in Washington today. She has spoken openly about the promise that technologies such as AI and machine learning hold for improving the accuracy and fairness of consumer credit decisions. Her core message: The benefits of AI are real, but they must be accompanied by a better understanding of the risks and more clarity about how banks are expected to manage those risks effectively. The bedrock essential is strong standards for fairness and transparency.

Back in November 2018, Brainard gave a talk called “What Are We Learning About AI In Financial Services?,” in which she highlighted the growing interest in AI across financial services, especially in operational risk, cyberfraud, and credit risk assessment. The benefits of AI/ML were clear: better pattern recognition, more accurate and efficient decision-making, and the ability to consume far more data, which helps expand credit access to the tens of millions of Americans whose incomplete or missing credit files make them hard to score in traditional ways.

The challenge she pointed to then was the opacity of AI models. Banks and credit unions still perceive this as an issue, even though it’s a solved problem. The members of the Fed Advisory Council, in their meeting a month ago, agreed that the use of AI would increase with time as long as the industry and its regulators “recognize some of the complexities inherent in broader deployment.” Specifically, lenders need transparency: absolute confidence that their lending algorithms don’t perpetuate or exaggerate racial or gender bias, and that they produce credit denial reasons that are provably correct should any decision be challenged.

But Brainard saw progress being made on the transparency front even in 2018, pointing to important advances in AI model explainability. (Zest has invested heavily in this area.) In a follow-up talk last week at an academic AI symposium hosted by the Fed, Brainard updated her assessment of what algorithmic transparency should mean in financial services. Context matters. The nature of the user matters. Models that are meant only for data scientists to parse don’t need the transparency required for analysis by compliance officers. But models that make decisions affecting consumers must be held to rigorous standards of transparency to ensure they comply with fair lending and UDAAP laws.

Two things worth clarifying, though, are what kinds of models are appropriate to use and what kinds of data make for the best application of ML in underwriting.

First, Governor Brainard talks about the use of AI that “drives continuous change in models” with “continuous data elements.” While this so-called online learning approach to ML is quite good at improving Google’s search algorithm or cyberfraud detection models, it makes models exceedingly difficult to parse. Zest-built models used by banks and credit unions are fixed models, trained on a discrete set of loan applications and validated on a separate discrete set of applications. The models are then locked down and monitored until it’s time to re-fit or rebuild them. They don’t “learn” on their own, and therefore they’re straightforward to explain with the right math.
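To make that workflow concrete, here is a minimal sketch of the fixed-model lifecycle described above (train on one discrete set, validate on a separate holdout, lock the model down, then explain individual decisions). The synthetic data, scikit-learn model, and SHAP attributions are illustrative stand-ins of our own choosing, not Zest’s actual pipeline:

```python
import numpy as np
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a historical book of loan applications and outcomes.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# Train on one discrete set of applications; validate on a separate one.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)  # fit once, on the fixed training set

auc = roc_auc_score(y_valid, model.predict_proba(X_valid)[:, 1])
print(f"holdout AUC: {auc:.3f}")

# The model is then "locked down": its parameters never change in production,
# so every decision can be reproduced and explained after the fact. Here SHAP
# values stand in for "the right math" behind per-applicant reason codes.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X_valid[:1])   # one applicant's attributions
adverse = np.argsort(contributions[0])[:3]           # strongest score-lowering inputs
print("top adverse-action feature indices:", adverse)
```

Because nothing in the fitted model changes after validation, the same applicant run through the same model always yields the same score and the same attributions, which is precisely what makes challenged decisions auditable.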

Put simply, innovative AI models do not need to be black boxes containing continuously changing algorithms; on the contrary, AI models can be objectively provable and discernible, which creates the ultimate in transparency.

Second, Governor Brainard (and others) reference the use of non-structured data such as social media activity, mobile phone use, and text message activity to score someone who’s hard to score because they lack enough standard bureau data. While AI’s more expansive math creates opportunities to use non-traditional data in lending decisions, and lenders (especially in China) have used non-structured data from digital platforms, we don’t do it or recommend it. There are real accuracy, completeness, and fair lending risks in using data that can easily perpetuate biases and disparate impact against protected classes such as race and gender. So, while the ability to leverage that kind of data exists, you don’t need it to make quicker and more accurate credit decisions. Ultimately, standard bureau and application data are almost always good enough for an ML model to do far better than traditional models. We should applaud these evolutionary gains that, when accumulated, become revolutionary.
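One common way to screen any candidate data source or model for the disparate impact risk mentioned above is the four-fifths-rule adverse impact ratio. The sketch below, with made-up numbers, is our illustration of that standard check, not a test Brainard prescribes:

```python
import numpy as np

def adverse_impact_ratio(approved: np.ndarray, protected: np.ndarray) -> float:
    """Approval rate of the protected group divided by that of the control group."""
    rate_protected = approved[protected].mean()
    rate_control = approved[~protected].mean()
    return rate_protected / rate_control

# Toy example: approval decisions plus a mask marking protected-class applicants.
approved = np.array([1, 0, 1, 1, 0, 1, 1, 0, 1, 1], dtype=bool)
protected = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0], dtype=bool)

ratio = adverse_impact_ratio(approved, protected)
print(f"adverse impact ratio: {ratio:.2f}")  # values below ~0.8 warrant review
```

Running this kind of check on a model built from a proposed data source is one straightforward way to catch a biased input before it ever touches a production lending decision.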

Brainard’s thoughtful approach is setting a great standard in Washington and we look forward to the Fed engaging more with the AI community to improve financial services and outcomes for consumers.
