In search of responsible AI

Yolanda D. McGill, Esq.
August 22, 2023

How the financial services industry has paved the road for AI legislation

Headlines blaring about the imminent takeover of AI don’t have it exactly right — AI’s not going to take over. It’s already everywhere around us.

This shouldn’t shock anyone who is paying attention. AI has been integrated into most aspects of daily life: taking on mundane tasks, recommending movies and songs, navigating your car, and even talking to you through your smartphone or home assistant.

AI is developed and deployed in many different ways, and those differences matter. An AI algorithm can be limited or broad in its task and capabilities, and that distinction makes all the difference in how we should approach the conversation about regulating AI.

Here’s a specific example: an AI algorithm is fed data to train it to “understand” and better accomplish its task. The AI is supervised while it learns, and once it reaches its optimal ability to perform its set task, the algorithm is locked and disconnected from its training data sources. Once disconnected, the algorithm cannot ingest new inputs or deviate from its task. This controlled machine learning technique is the very strategy Zest AI deploys for our models: it is safe, controlled, and helps ensure compliance for our clients.
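To make that pattern concrete, here is a minimal sketch in Python using scikit-learn. The dataset, model, and file name are illustrative stand-ins for the general train-then-lock technique, not Zest AI’s actual system:

```python
# Minimal sketch of the "train, then lock" pattern: supervised training on a
# fixed dataset, then a frozen artifact that is never retrained in production.
import pickle

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Stand-in training data; a real credit model would use a fixed, vetted
# historical dataset, not a live feed.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning phase: the model fits labeled examples.
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")

# "Locking" the model: serialize the fitted parameters and deploy that
# artifact read-only, disconnected from any training data source.
with open("model.pkl", "wb") as f:
    pickle.dump(model, f)

# Inference phase: the locked model only ever predicts. No code path calls
# fit() on new inputs, so the model cannot drift from its validated task.
with open("model.pkl", "rb") as f:
    locked_model = pickle.load(f)
predictions = locked_model.predict(X_test)
```

The guarantee here is structural: once deployed, the artifact exposes only prediction, so production inputs have no path back into the model’s learned parameters.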

In other use cases, the algorithm may remain unlocked, with ongoing access to new data from which it continues to “learn” and “grow” — ChatGPT is a good example of this. Across its use cases, AI can draw on controlled or uncontrolled data sources for learning and decision-making, and a substantial body of research examines these different applications.
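For contrast, here is a sketch of the “unlocked” pattern, again illustrative rather than any vendor’s real pipeline, using scikit-learn’s incremental partial_fit interface. (Large language models such as ChatGPT are built very differently; this only miniaturizes the ongoing-learning idea.)

```python
# Sketch of an "unlocked" model that keeps updating as new data arrives.
# Requires scikit-learn >= 1.1 for the "log_loss" loss name.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")

# Each new batch of incoming data nudges the model's parameters, so its
# behavior keeps shifting after deployment.
for _ in range(100):
    X_batch = rng.normal(size=(32, 10))
    y_batch = (X_batch[:, 0] + rng.normal(scale=0.1, size=32) > 0).astype(int)
    model.partial_fit(X_batch, y_batch, classes=[0, 1])
```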

I believe the questions we ask about regulating AI should focus on specific use cases rather than broad strokes about how to control the technology. Regulation done well can foster invention while protecting the end consumer. AI has already brought simplicity and personalization into our everyday lives, and the best path forward is ensuring that AI applications are purpose-built to better the human experience.

The state of AI in the world today, from beliefs to use cases

State and federal legislators are racing to examine AI and craft AI-focused legislation. In their sprint to pass new laws, policymakers should resist the urge to lump all AI use cases together.

On May 23, 2023, the White House announced a Request for Information, among other fact-finding initiatives, on the research, development, and deployment of what it calls “responsible AI.” The announcement is noteworthy because it signals an effort to discern the distinct issues that may arise across AI’s many use cases.

AI’s far-reaching use cases, and the potential for even broader ones not yet known, have generated tremendous attention in recent months. A 2016 story about Google’s AI creating its own “language” has resurfaced, AI has been used to generate art from artists’ work, and the WGA has spent more than 100 days on strike, joined by SAG-AFTRA, in part over how Hollywood plans to use AI.

The White House noted last year that “U.S. law and policy already provides a range of protections that can be applied to these [AI] technologies and the harms they enable.” Some AI use cases — such as AI credit decisioning technology — benefit from decades of innovation in accordance with regulations that prohibit discrimination, protect data, and mandate consumer notices.

In furtherance of the Biden Administration’s AI efforts, four federal agencies issued a joint statement making it abundantly clear that they regard AI-based underwriting and certain other use cases as covered by existing law, and that they expect compliance with those laws. The laws in question include the Truth in Lending Act, the Equal Credit Opportunity Act, the Fair Credit Reporting Act, Dodd-Frank’s UDAAP provisions, and the Federal Trade Commission’s trade regulation rules and UDAP authority, to name a few.

Regulation of AI in the financial services industry has given legislators a blueprint for AI regulation in other industries

We cannot allow fear of the unknown to rule our decisions when it comes to innovation, but we do need to approach innovation with a sense of purpose. Many considerations go into building AI that enhances our lives, including the human component of developing and managing these systems. It is crucial that these systems be developed by people with diverse experiences, backgrounds, cultural beliefs, and ideologies, and with diversity in markers like race, sex, and age.

Explainable, transparent AI-driven lending technology has been available to the financial services sector for almost 15 years. This technology is helping credit unions and banks lend to more of their customers more fairly, inclusively, and efficiently. The banking industry proves that AI can deliver on its promise within regulatory guidelines.

But AI-enhanced innovation that improves lending outcomes could be stifled, or even eliminated, if it is swept into broad-brush legislative efforts aimed at more novel, less controlled AI use cases where legal parameters have not yet been established.

To avoid this unfortunate outcome, lawmakers should collaborate with AI stakeholders to enhance existing laws and offer consumers more clarity and stronger safeguards. The National AI Advisory Committee (NAIAC) and Congress are working to carefully evaluate the advantages and drawbacks of AI in the coming year, and some work on AI governance has already been outlined at the federal level.

Senator Schumer’s SAFE Innovation framework outlines a vision for governance that reflects four priorities for AI in financial services:

Security: Data security and privacy are already required of financial service providers.

Accountability: Lending is a highly regulated sector in which providers must answer for their own activities as well as those of their service providers.

Foundations: AI-driven lending must adhere to fair lending and other consumer protection mandates, as well as safety and soundness, all of which are foundational tenets of our society.

Explainability: Innovative AI/ML technology is paving the way for the explainability that is crucial for compliant AI-driven decisioning.

Financial services have proven that AI can be governed to deliver immense benefit with clear lines of sight into potential risks. Fairer systems that use more data and better math can finally include borrowers shut out through no fault of their own, helping individuals attain their goals and build generational wealth. The path to these innovations ran through safety and soundness requirements, consumer protection, and other laws, regulations, and guidelines, and in doing so it paved the way to a framework for responsible AI.

Read Yolanda's August 2023 Letter to the Editor in the CU Times here.
