Zest's Seth Silverstein Shares The Lender’s View On Machine Learning

Seth Silverstein spent more than three decades in investment banking and consumer lending. For the last 10 years, he was the corporate head of modeling and analytics at Ally Financial. Previously, he held long-term posts as a managing director at RBS, JP Morgan and Bear Stearns, working on structured products and rates markets. In November, Seth joined Zest AI as chief credit analytics officer. We recently sat down with Seth to get his insight into how a modeling executive approaches the challenges of the industry and the promise of machine learning.

Zest AI: You recently joined Zest as chief credit analytics officer. What made you take the leap to the startup world and to Zest in particular?

Seth Silverstein: I’ve had a pretty extensive career in investment banking, consumer lending, and as a consultant to other lenders. On projects at previous banks, we knew we had to start bringing in alternative kinds of data to lower our losses and bring more loans onto the books. I included these newly available data items, and we built a new model using standard logistic regression. But we didn’t pick up a lot of intuitive variables, like trending data, because many of the alternative data sources are correlated with existing, traditional variables, and standard logistic regression doesn’t handle correlated variables well. I knew there had to be a better way because I knew these new variables had to be important. Eventually, I started talking with Jay [Budzik, Zest CTO] about what Zest does, the tools Zest builds and the explainability it has. An ML model can bring in thousands of variables! I got really excited about ML and the Zest product and what my experience could bring to bear. I also wanted to have fun and do something new. And that brought me to Zest.

Zest: What do you mean by alternative data?

Seth: Alternative data covers a very broad spectrum. In finance, I’m speaking of things like utility bills, phone bills, payday loans and how consumers pay them, as well as trending data. We’re not talking about things like your friends on Facebook, which people are using in other applications. Trending data is especially interesting. Let’s say you have a conventional credit score of 650. Not all 650s are alike. You could be a subprime borrower who has gotten their act together and your credit history is trending up, or you could be someone who was super-prime but had a situation, such as a lost job, and is trending down. Using trending data lets you see where this person has been and where they’re going. It helps you get more signal through the noise.
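
To make the trending idea concrete, here is a minimal sketch (not Zest’s implementation, and with hypothetical inputs) of how a simple trend feature could be derived from a borrower’s recent score history:

```python
# Minimal sketch: a simple trend feature from a borrower's recent score history.
# The inputs and the feature definition are hypothetical illustrations.

def score_trend(score_history):
    """Average month-over-month change in score; most recent months last."""
    if len(score_history) < 2:
        return 0.0
    deltas = [b - a for a, b in zip(score_history, score_history[1:])]
    return sum(deltas) / len(deltas)

# Two borrowers with the same current score of 650 but opposite trajectories.
recovering_subprime = [580, 600, 615, 635, 650]   # trending up
slipping_prime      = [720, 700, 685, 665, 650]   # trending down

print(score_trend(recovering_subprime))   # positive trend (about +17.5 points per month)
print(score_trend(slipping_prime))        # negative trend (about -17.5 points per month)
```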

Zest: Tell us more about ML models compared to standard logistic-regression models.

Seth: Speaking very simply — because I always like to simplify ideas — a standard logistic regression model is like algebra: Y equals 5 times X plus 2 times Z. Now, X and Z can’t be highly correlated, or the model will perform poorly or even overfit some variables. You need largely uncorrelated variables for a logistic regression model to be valuable. The nice thing about ML models is that you can use many more variables, even correlated ones, and extract power from their relationships, as long as you can explain those relationships. That’s where the power of these extra, alternative data variables comes into play. Back to that last project I mentioned: I brought in a lot of new variables but ended up able to use only two new ones versus the traditional model, and no trending data. With an ML model, I could have effectively used hundreds.
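
As a rough illustration of that point (generic scikit-learn code, not Zest’s tooling), here is a sketch in which two predictors are near-duplicates of one another. The logistic regression splits the shared signal across the correlated pair almost arbitrarily, which makes the coefficients hard to interpret, while a tree ensemble can still use both columns:

```python
# Synthetic example: two highly correlated predictors carrying one shared signal.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
x_twin = x + rng.normal(scale=0.1, size=n)               # nearly a copy of x
y = (x + x_twin + rng.normal(size=n) > 0).astype(int)     # outcome driven by the shared signal

X = np.column_stack([x, x_twin])
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

logit = LogisticRegression().fit(X_train, y_train)
gbm = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# The two coefficients trade off against each other, because either column
# can stand in for the shared signal -- the split between them is not meaningful.
print("logit coefficients:", logit.coef_.round(2))
print("logit AUC:", round(roc_auc_score(y_test, logit.predict_proba(X_test)[:, 1]), 3))
print("gbm   AUC:", round(roc_auc_score(y_test, gbm.predict_proba(X_test)[:, 1]), 3))
```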

Zest: What’s the thinking in the lending industry now about ML?

Seth: We’re at an inflection point: People think it’s going to add value, but they don’t know for sure. At the conferences I’ve been to over the past few years, AI is a top subject. Yet there are not a lot of concrete projects that have shown results or are in production. Why is that? The technology is still pretty new, so lenders have to explain it to their own risk and IT people. Compliance folks are worried about explainability. Regulators may not totally understand it and could see it as hocus-pocus.

That said, everyone out there is saying, “We’ve really got to look at this.” Most firms dealing in the mid-prime and subprime spaces are making some effort, but overall, it’s not a production-quality atmosphere yet. Where Zest helps is we have shown you can take these new techniques to internal model review, compliance and external regulators, and our explanations make sense to them. They have accepted them. And we’ve taken models through to production — working very closely with IT groups — whereas a lot of other companies haven’t.

Zest: That’s interesting, because the consumer credit landscape has been healthy in recent years. Yet lenders have concerns and want to explore new methods. Why is that?

Seth: The credit landscape changes very often. Lenders know how important it is to monitor their portfolios and see how loan characteristics are performing. Credit models are traditionally built with three years of recent data: two years the model is built from, and one year used toward the end of development to make sure the model is working properly. Obviously, because the last recession was in 2008, lender models aren’t covering downturns at all. A lot of risk managers — and, just as importantly, regulators — are always asking: “What’s going to happen to this model when a recession comes? How can you show me it’s going to perform well?” The answer, historically, has been very difficult. Tools have to be built and used to monitor portfolios before actual losses take place, and Zest is at the forefront of this development.
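
A minimal sketch of that split (the file and column names here are hypothetical): build on the older originations and hold out the most recent year as an out-of-time check.

```python
# Hypothetical example of the "two years to build, one year to validate" split.
import pandas as pd

loans = pd.read_csv("originations.csv", parse_dates=["origination_date"])  # hypothetical file

cutoff = loans["origination_date"].max() - pd.DateOffset(years=1)
development = loans[loans["origination_date"] <= cutoff]   # roughly the first two years: fit the model
out_of_time = loans[loans["origination_date"] > cutoff]    # most recent year: confirm it still works

print(len(development), "development loans,", len(out_of_time), "out-of-time validation loans")
```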

Zest: Subprime auto went through a bit of a slump in 2017 and early 2018, and there was a rise in delinquencies. Are those portfolios better-seasoned now as a result of being able to include recent recession-type data in their models?

Seth: Obviously they have better data now; the question is whether it’s enough. A lot of lenders took too big a hit and left the business, which traditionally happens. The ones that have enough data generally have to start building new models, because that data wasn’t in their current model, or they wouldn’t have issued those delinquent loans. For most lenders, the timeline for building a new model and getting it into production is 18 months, including all the regulatory and compliance reviews. It takes a while to get something new into production.

Zest: Are the monitoring tools in place now adequate for doing that?

Seth: I’ve been impressed by Zest’s monitoring tools. I wish we’d been able to monitor things like input and output distributions and reason-code stability when I was at my old firms. Zest’s monitoring tools have been battle-tested in some volatile economic environments, including Eastern Europe and Latin America, where the credit landscape has been changing dramatically. Our tools were able to pick up shifts in the credit markets before losses took place.

Zest: How do you advise lenders who want to make the leap from traditional models to ML? I assume it’s a classic dilemma of build, partner or buy.

Seth: I would go to the head of the business, the person who has to prove profitability. It helps to show that machine learning can generate more loans. It helps to show that it can lower losses. If you can show the business head that he or she is going to make a lot more money, that leader is the one who’s going to say, “I need to use this, because we can’t do it any other way. Even if we did it ourselves, it would take a long time.” Getting that person on board is important to the process. You’re still going to have potential roadblocks: IT saying, “This doesn’t fit in our systems,” risk and compliance saying, “We don’t understand this,” and regulators saying, “This is new.” This is where our past experience at Zest really helps. We have seen these roadblocks and have broken through them to get to the finish line.

On the good side, I think people throughout the system are starting to realize they need to adopt machine learning for underwriting, and that standard techniques are breaking down. One big advantage of ML is time to market. Whoever gets this online first is going to have a win period of 12 to 18 months over the competition.

Zest: What advice would you give to lenders evaluating a potential fintech partner?

Seth: If you’re looking at a partner or vendor, you look at their experience and do due diligence inside the shop. See who their customers are right now, and how they’ve done with them. Has it been an easy transition? Did the IT organization work with them? How did they satisfy their regulators? There is an investment; it’s not turnkey. You have to work closely with the vendor and buy in.

Zest: Do traditional proofs of concept make sense?

Seth: POCs still make sense, but they have to be streamlined. I think three months is long enough from beginning to end. Modelers at any company are busy and stretched really thin. Fintechs need to give them bang for the buck, so drawn-out POCs don’t make sense.

Zest: What should lenders do to ensure the results they get in a POC carry over to production?

Seth: The data set is critical. If you don’t bring in an extensive data set that can also be used in production, you may not capture gains or even see much lift over traditional modeling. That includes the number of historical records, the time frames and the alternative data that will make the model more robust. When it comes to machine learning, the more data you have, the better. You need to make sure the POC model is well validated and documented, or you risk “garbage in, garbage out.” Also, your ML model should make sense intuitively: make sure the decisions coming out are sensible, and that variable importance and interactions can be explained to a layman. Finally, you want to monitor your model and its input data. Data and economic conditions change over time, so make sure the properties of the incoming data stay consistent with the properties of the data the ML model was trained on.
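
One common way to check that, shown here as a generic sketch (not Zest’s monitoring suite), is the population stability index, which compares the distribution of a feature or score at training time against what the model is seeing today:

```python
# Generic illustration: population stability index (PSI) for input/score drift.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between a training-time sample and a recent sample of the same feature."""
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf                # catch out-of-range values
    e_pct = np.histogram(expected, edges)[0] / len(expected)
    a_pct = np.histogram(actual, edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)                   # avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(640, 60, 50_000)    # synthetic training-time score distribution
recent_scores = rng.normal(615, 70, 10_000)   # synthetic recent applicants, skewing riskier

# A common rule of thumb: below 0.1 is stable, 0.1-0.25 is worth watching,
# above 0.25 usually means investigate before losses show up.
print(round(psi(train_scores, recent_scores), 3))
```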

Zest: What are some techniques that help ensure a quality modeling process?

Seth: Using reject inference — the practice of pulling in data on the loans you didn’t make, whether because you declined the application or the borrower didn’t take your offer — is very important. If you only use your own lending data, you’re going to have built-in bias from previous models or from the loan officers, behind the scenes, approving only certain types of loans. Those models and loan officers may not approve someone with no credit history or someone who wasn’t a customer in the past. You have to be careful that your ML model doesn’t become overfit to the bias in your legacy data. The credit bureaus can help you figure out whether someone else issued a loan that you didn’t approve, or that a customer didn’t accept from you, and then you can see how that loan performed. Reject inference techniques are important for another reason, too. There’s a well-known lending phenomenon called “adverse selection”: you get the loan someone else didn’t want, for whatever reason. You think it’s good, but other lenders thought it was bad. You get adversely selected because you take on potential losses based on something another lender’s model captured that yours didn’t.
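
Here is a rough sketch of one simple reject-inference step along those lines (field names are hypothetical, and real programs are more involved): attach bureau-observed outcomes for applicants you turned down who were later booked by another lender, so the training data isn’t limited to loans your own policy approved.

```python
# Hypothetical sketch of a basic reject-inference join; not Zest's methodology.
import pandas as pd

approved = pd.DataFrame({
    "app_id": [1, 2],
    "defaulted": [0, 1],           # outcomes observed on our own book
})
declined = pd.DataFrame({
    "app_id": [3, 4, 5],           # applications we turned down
})
bureau_outcomes = pd.DataFrame({   # performance of loans those applicants got elsewhere
    "app_id": [3, 5],
    "defaulted": [1, 0],
})

inferred = declined.merge(bureau_outcomes, on="app_id", how="inner")
training_set = pd.concat(
    [approved.assign(source="approved"), inferred.assign(source="reject_inference")],
    ignore_index=True,
)
print(training_set)
```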

Zest: But what about dealing with algorithmic bias?

Seth: Disparate impact — the situation where a model generates unintentional bias against a protected class, like race or gender — is a common issue. That’s because some of the most-used variables are highly correlated with membership in those classes. A good example is conventional credit scores, which in general are higher for some groups than others due to historical factors. This unfortunate reality means a lot of models are open to charges of bias.

Zest has the ability to check for unintended bias while the model is being built. That allows lenders to adjust models along an efficient-frontier curve, correcting for bias while maintaining their economics. You can shift your model along that curve to get the biggest bang for the buck while also being fair. You can reduce the influence of variables that generate potential bias but are also predictive, whereas traditional modeling techniques would require those variables to be removed entirely. Zest has briefed the fair-lending regulators on this approach, and they have appreciated our efforts to preemptively correct for bias in models instead of just trying to explain it away.
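
As a generic illustration of the kind of check involved (synthetic data, not Zest’s fairness tooling or its efficient-frontier method), here is a simple disparate-impact test on approval rates using the adverse impact ratio and the commonly cited four-fifths reference point:

```python
# Synthetic example of a basic approval-rate fairness check.
import numpy as np

rng = np.random.default_rng(0)
scores = rng.uniform(0, 1, 10_000)               # hypothetical model scores
group = rng.choice(["A", "B"], size=10_000)      # synthetic protected-class labels
scores[group == "B"] -= 0.05                      # inject a small score gap for illustration

approved = scores > 0.6                           # hypothetical approval cutoff
rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()

print("approval rates:", round(rate_a, 3), round(rate_b, 3))
# A ratio below roughly 0.8 (the "four-fifths rule") is a common flag for disparate impact.
print("adverse impact ratio:", round(min(rate_a, rate_b) / max(rate_a, rate_b), 3))
```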

Zest: Looking back over your years in the industry, and then ahead to where it’s going, what’s going to change and what isn’t?

Seth: Everything will change — but it will be evolutionary, not revolutionary. Standard logistic regression models have been around for a long time. I firmly believe that’s going to change in the next year. We’re going to see machine learning in credit take off in a big way. But fears about robots taking over are totally overblown. Even with ML credit models, you still need humans in the loop — building, validating, monitoring and auditing those models — because there are always issues. Credit officers will always exist; I suspect that will never change in most lending businesses. But machine learning, alternative data and higher levels of automation allow credit officers to be put to higher and better use.