Transcription:
Transcripts are generated using a combination of speech recognition software and human transcribers, and may contain errors. Please check the corresponding audio for the authoritative record.
Penny Crosman (00:03):
Welcome to the American Banker Podcast. I’m Penny Crosman. The use of AI in lending has been controversial, with advocates saying it can deter the kinds of discrimination that creep into human loan officers’ decisions and even FICO score minimums, while critics worry that AI is a black box and that models can also pick up bias. We’re here today with former Acting Comptroller of the Currency and former chair of the National Credit Union Administration Rodney Hood. Full disclosure, he is on the board of Zest AI, a company that makes AI-based lending software. Welcome, Rodney.
Rodney Hood (00:38):
Thank you, Penny. I’m delighted to join you today.
Penny Crosman (00:42):
Thanks for coming. So you recently wrote an opinion piece in which you said, “The way we decide who deserves credit in America has remained largely unchanged since the Eisenhower era, and it shows. A system designed for the mid-20th century cannot adequately serve a 21st century economy driven by data, innovation, and evolving patterns of work and income. The result is a rigid black and white view of creditworthiness that leaves nearly 36 million Americans underbanked, creating a systemic barrier to opportunity.” So that’s a strong statement. So let’s unpack that a little. In this traditional world where we approve loans, bankers typically rely on a few measures of creditworthiness such as credit score, debt-to-income ratio, and so forth. What is wrong with this more traditional method?
Rodney Hood (01:37):
Well, Penny, before I answer the question, I just want to acknowledge that financial inclusion is something that I’ve long championed, and I believe that financial inclusion is the civil rights issue of our time. As someone who’s been both a banker and a financial regulator, when I look at data points showing 40% of our households unable to obtain a $400 loan in an emergency, or the 65 million households where people are credit invisible, I recognize that a lot of these folks are invisible or lack access because the traditional scoring mechanisms are simply not working. I would say the modern economy has changed dramatically. Traditional scores work if you have stable employment, but they don’t work if you are perhaps an actor, or if you are in the gig economy, or if you are earning income in ways that aren’t reported traditionally and wouldn’t be captured by a model of yesteryear.
(02:43):
I think it’s important to note that the system we’ve had of late does not measure accurately, and the result, as I’m trying to articulate, is exclusion rather than true risk-based decisions being made. Far too often, the old scoring models aren’t measuring risk; they’re measuring whether someone fits the old credit template. I think modern tools are able to go a little deeper in analyzing underserved folks who wish to be a part of the mainstream. Through the strategic use of AI, the credit aperture, the financial access aperture, can be opened, and I think we’re starting to see a lot of cases of that. So while I don’t wish to pick on some of the old systems of yesteryear, I just don’t think they are as relevant today for our 21st century economy.
(03:39):
So that’s how I’d like to begin our question, our conversation today.
Penny Crosman (03:44):
Sure. So from your perspective, what makes AI better at making a lending decision than a loan officer or a FICO-based scoring model?
Rodney Hood (03:59):
I believe that the strategic use of AI, with intentionality, allows lenders to see people, not just profiles and not just a credit score from a particular point in time. You’re looking at the person in totality; you’re able to get a true snapshot of that individual, not just, again, a profile from a specific point in time.
Penny Crosman (04:27):
And have you seen this actually work? Have you spoken to or seen lenders get their approval rates up or have the ability to lend to people that they wouldn’t have loaned to in the past?
Rodney Hood (04:42):
Yes, we’ve seen some real-world deployments where loan approvals have gone up. And not every lender is using Zest. Yes, I’m fortunate to be on the board of Zest as an independent director, but I have long championed and cared about these issues; the reason I’m on their board is because they share my mission and purpose around broadening financial access, with financial inclusion being, again, a civil rights issue. But in some of the real-world deployments, and I’m talking now holistically, lenders using advanced AI underwriting have reported approval increases of approximately 20% to 25%, and I’m going to say without increasing credit losses. I think it’s important to note, Penny, that I, above all, am a safety and soundness regulator. You mentioned in my introduction that not only did I oversee the banking system as acting comptroller of the currency, I’ve also been the chairman of the NCUA.
(05:44):
So collectively, I’ve been able to oversee some $26 trillion in American bank and credit union assets, providing sound prudential and regulatory oversight to the banking and credit union system. I care about that system remaining strong: $26 trillion in assets to date, with over 11% in capital for both of those entities, so they’re doing very well. I want the audience to know that when I’m talking about using artificial intelligence to bolster lending, I’m not wanting to do so at the expense of a safe and sound banking system. With that being said, alongside the 20% to 25% increase I referenced, you’re not seeing any demonstrable uptick in losses or loan delinquencies. In fact, I’m very proud that the modeling is really able to ensure that the individuals getting credit are able to comply with their loan terms. The institutions have seen very few defaults, as a result of these models more accurately distinguishing between higher- and lower-risk borrowers, and that’s really worth noting.
Penny Crosman (06:52):
Now, the last time I wrote a story where I talked about banks and credit unions seeing these kinds of outcomes, like a better ability to approve credit for people in disadvantaged areas and groups and so forth, I got criticism from someone who said, “Well, they’ve only been doing it a few years. You need to look at many years of data before you can see the true delinquency rates and loss rates, default rates, and so forth.” What would you say to that?
Rodney Hood (07:27):
I would say that yes, we do want to see empirical data, and yes, we do want to see a larger sample size. But when you look at the 40% of households that are unable to obtain small-dollar lending, or the folks who are credit invisible, they don’t have time for a longitudinal study. They really want access to financial capital now so they can meet the needs they have, whether it be a car to go to and from work or a small business owner who wants to bring his idea to fruition. So I would say to that: let’s use the technology. We’ve not seen a demonstrable uptick in losses or delinquencies. And as I noted in the high-level overview of the state of America’s banking and credit union system, with capital levels over 11% for both entities, the capital is there to be a bit of a buffer.
(08:18):
The resources are there. And I would contend that if we saw issues as regulators, we would not be promoting these activities, and if the banks themselves saw that their risk levels were not being adhered to, they would perhaps not embrace these technological platforms as readily. But to date, they’re able to do so without any deleterious impact, I would say, on the banking and credit union system or on the balance sheets of the banks that choose to pursue these products.
Penny Crosman (08:50):
And this is a tangent, but I’ve been thinking a little bit about how household debt has been going up and total household debt reached $18.8 trillion in the fourth quarter. Does that worry you?
Rodney Hood (09:09):
I would need to look at more than just the aggregate of the $18 trillion. Is it because folks are having opportunities to take on debt to get cars that they need? I think the average price of cars has gone up somewhat. Is it that they’re able to get mortgage opportunities? Are they using debt to finance businesses? So I probably would want to see, Penny, a breakdown of where the debt levels are occurring before I say yea or nay. If that’s the case, it means that credit is being extended. But I hope that the credit numbers you’ve just articulated reflect credit that is coming from well-regulated entities. And what do I mean by well-regulated entities? It means insured depositories that are following consumer protection and consumer compliance requirements, that are looking at BSA and know-your-customer, KYC, entities in the regulated field where folks are complying with Reg B and ECOA and UDAAP, unfair, deceptive, or abusive acts or practices. If the loans are being made through regulated entities, then I know they are being made not just to extend credit, but with fidelity to consumer protection and consumer compliance.
Penny Crosman (10:35):
All right. So I just want to ask you about some of the other objections that people sometimes raise about the idea of using AI in underwriting decisions. So one is regulators have warned that AI models can be black boxes and that lenders can’t make their decisions in a black box. They have to clearly be able to explain their credit decision, especially when they decline somebody. They need to provide a clear adverse action notice. Their reasons need to follow certain guidelines. Do you think that the AI lending vendors have truly solved this issue?
Rodney Hood (11:19):
I can’t speak for all AI lenders, but I can say the ones that I have gotten to know firsthand are taking this issue very seriously. The use of AI does not erase the need for consumer protection and consumer compliance; if anything, it raises the expectation. So I would say that the AI groups, whether it be Zest or others, are making sure that the models have explainability. And if there is an adverse action, yes, that letter needs to be distributed, and there also needs to be explainability. In the sophistication of a lot of the models, from day one to where we are present day, I think there’s been a seismic shift. A lot of the components now, when it comes to using artificial intelligence, it’s not just looking at things such as Reg B and all the other regulations and adverse action as an afterthought, Penny.
(12:15):
I think what you’re finding today is that it’s being built into the model from day one. I would say in the days of yore, with some of the examples or perhaps some of the naysayers that you’re referring to, a lot of these activities were built as an add-on. You would have the model built to get an uptick in approvals, but they were not looking at what I would call the other components of a fine model, and that is GRC. Are you looking at governance? Are you looking at risk management and mitigation? Are you looking at compliance? As I now tell folks, you build all of that into your model, especially around fair lending, fair treatment, and disparate treatment, making sure that you’re building that in early on throughout the enterprise, as opposed to it being bolted on at the last minute.
(13:01):
So I think almost everyone that I now know in this space agrees; we have had a few years now to look at these models, and yes, with the models today, you have to have that explainability. And if I may, Penny, when you are being examined, the regulators are going to ask you to defend your model. Are you doing the validation? Are you doing the testing? Are you doing what I would call continuous monitoring? Are you looking at opportunities to implement changes? You can’t just set the model and then leave it on autopilot. You get to some of those things by testing and looking at different subsets and things of that nature. So I think we’ve evolved quite a bit. It’s an iterative process, and I think you’re going to continue to see fine-tuning and tweaking.
(13:54):
But the main thing is build the model, use the AI with already an overarching lens, again, governance, risk, and compliance.
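The explainability and adverse-action workflow Hood describes can be illustrated with a toy sketch. This is not Zest’s method or any regulator-endorsed approach; the feature names and contribution values below are hypothetical. The idea is simply that a model’s per-feature score contributions (for example, from a SHAP-style explainability tool) can be ranked to produce the principal reasons for a decline, rather than treating the decision as an unexplainable black box.

```python
# Hypothetical sketch: turn a declined applicant's per-feature score
# contributions into ranked adverse-action reasons. Negative values
# pushed the score down; the most negative factors become the reasons.

def adverse_action_reasons(contributions, top_n=2):
    """Return the top_n feature names with the most negative contributions."""
    negative = [(name, c) for name, c in contributions.items() if c < 0]
    negative.sort(key=lambda pair: pair[1])  # most negative first
    return [name for name, _ in negative[:top_n]]

# Illustrative applicant: contribution values are made up for the example.
applicant = {
    "payment_history": -0.30,   # hurt the score most
    "utilization": -0.12,       # also hurt the score
    "income_stability": 0.08,   # helped the score
}
print(adverse_action_reasons(applicant))  # ['payment_history', 'utilization']
```

In practice the ranked factors would be mapped to standardized reason-code language for the adverse action notice; the point of the sketch is only that the reasons are derived from the model itself, not bolted on afterward.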
Penny Crosman (14:05):
That makes sense. So another concern that people sometimes raise is that an AI model could have some kind of bias arise in it. It could, for instance, learn from decisions made in the past that were biased, or it could infer a loan applicant’s race or gender or some other characteristic from other data in their loan application. What do you think of those kinds of concerns?
Rodney Hood (14:44):
Well, they’re valid concerns, but again, that’s why we’re going to look at the testing and the ongoing monitoring I mentioned. And again, you’re building fairness and fair lending into the models, and of course you can continue to look at them and test them. But I don’t want this fear to prevent people from using tools that can really bring more and more individuals into the financial mainstream. Penny, I’m going to share an example that I know firsthand as a banker, as someone who’s been in this industry for some 30 years now. I was a young banker working on a loan, and I couldn’t understand why I had a minority gentleman and a non-minority gentleman, same income, same loan request for a mortgage, and the credit scores were vastly different when everything else on paper was the same. Why would one credit score be vastly different for the minority than the non-minority?
(15:45):
Well, at the time, and we’re talking about 30 years ago, a lot of minority neighborhoods were served not by what we would call traditional mainstream banks, but by what were called finance companies. They were not quite check cashers or pawn shops, but pretty close; yet in most people’s opinion, they were still dispensing credit. Whereas the minority borrower went to the finance company, the non-minority went to a traditional bank, fill in the blank. Well, the credit model, the black box of sorts, assumed that the minority borrower must not have been able to get A-paper credit, or at least that was its opinion, because they went to a local finance company as opposed to going to the traditional bank that we think of today. So the model had that built-in bias.
(16:39):
Models in and of themselves aren’t biased. Someone must have built in an assumption that said, “Hey, this model needs to give that person a hundred-point devaluation because they didn’t go to a mainstream firm.” So the models can be incorrect. With this particular loan, we were able to do what’s called manual underwriting, which means a separate set of eyes had to look at the loan and decide whether to make it. But the point is, models can have bias. That is why it’s so important to go into it intent on ruling that out. You get to the root of that through the testing and the monitoring and the validation, but it could happen. I like to think that we are going to be able to weed some of those things out.
(17:30):
And if you don’t catch it in the first tranche, I’m hoping that, again, if you’re going back and doing the sample test and doing everything else that one needs to do to really make sure that they are doing this appropriately, then these things will be weeded out.
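The kind of sample testing Hood refers to is often framed, in disparate-impact analysis, as comparing approval rates across groups; a common rule of thumb flags a group whose approval rate falls below 80% of the highest group’s rate (the “four-fifths rule”). The sketch below is a minimal illustration under that assumption; the group labels and counts are made-up example data, not any lender’s results.

```python
# Minimal fairness-testing sketch: compute each group's approval rate
# relative to the best-performing group's rate, and flag any group whose
# ratio falls below a threshold (0.8 per the four-fifths rule of thumb).

def adverse_impact_ratios(approvals):
    """approvals: {group: (approved, total)} -> {group: rate / best_rate}."""
    rates = {g: approved / total for g, (approved, total) in approvals.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

def flag_disparate_impact(approvals, threshold=0.8):
    """Return the groups whose adverse-impact ratio is below the threshold."""
    ratios = adverse_impact_ratios(approvals)
    return sorted(g for g, r in ratios.items() if r < threshold)

# Illustrative data: 75% vs. 45% approval rates.
data = {"group_a": (90, 120), "group_b": (45, 100)}
print(flag_disparate_impact(data))  # ['group_b']
```

A real fair-lending program would go well beyond this single ratio (statistical significance, controls for legitimate credit factors, proxy analysis), but a check of this shape is one concrete form the “going back and doing the sample test” can take.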
Penny Crosman (17:46):
All right. And another objection that I have heard is that with AI-based lending, you can use many different data points. And there are lenders and platforms that will use things like what school a person went to, their … I don’t know if they use their grade exactly, but they’ll use the kind of profession, the kind of job they have. So there’s the concern, for instance, that somebody who went to a community college might be penalized versus somebody who went to an Ivy League school, which is partly a matter of how wealthy a family is. What do you think about that? And there’s also the concern about an AI model looking at someone’s social media posts or location data. What do you think of those kinds of objections?
Rodney Hood (18:39):
I think those are valid and sound objections. I’d hate for a model to be used to prevent anyone from having not just access to the financial system, but access affordably and confidently, knowing that their ability to repay is what’s going to be the deciding factor, as opposed to any of those other externalities that you’ve just referenced, Penny. I would take umbrage at that, and I’d certainly like to think that none of the models to date are using that. I am aware that there have been some that look at social media profiles or where one went to school or things of that nature, and again, that is just unseemly. I’d rather use the AI models that are out there to help provide more data. And the one thing I would say is that AI is not about replacing judgment in banking.
(19:34):
No one is saying that you’re not still going to have a set of eyes to look at it. It’s just about improving the measurement, and I think that’s the thing that often gets taken out of context. No one’s replacing sound judgment; it’s about giving a better opportunity around measurement. And Penny, I would pose the question: when folks decide that they’re going to use these models, the models need to have that degree of explainability, and I can’t emphasize that enough. When you’re meeting with that examiner and you’re walking them through your approvals and your declinations, you cannot say it’s the algorithm, or that you’re not a technologist, or that it’s not your bailiwick. It is going to be incumbent upon everyone involved within the enterprise to have explainability around the model and the decisions that are being made.
(20:25):
And Penny, another piece of the conversation we’re having today: it doesn’t just stop and end with the chief loan officer or the sales force or risk management and compliance policy. You also need to have board-level oversight for this endeavor as well. I often encounter bank and credit union leaders who are thrilled about using technology, and then I ask them about the board policy and I get the, “Oh, we didn’t think about including the board.” So I would encourage all the individuals who are looking at these types of activities to make sure that there is board oversight. Has the board given you its imprimatur not only to use this technology, but also to deploy it throughout the enterprise? And just as I can talk very boldly and enthusiastically about use cases, I think it’s also important to recognize that through the use of AI, you do open up a number of other risk variables, such as how the data is being preserved.
(21:30):
How are you looking at personally identifiable information? How are you looking at cyber and operational resiliency? So I would just tell folks, if you’re exploring these types of AI activities, which I hope you are, make sure you’re looking at all the other things regarding governance, having AI be a part of the whole enterprise in terms of who’s going to touch it. And I would conclude with this: you can’t just say the AI system made the mistake. At some point, there does need to be ownership. Companies need to decide who’s going to be on point, because at the end of the day, someone does need to own the solution. Some of the things I often see lacking are that no one has a defined owner, and in some instances no one has thought to have a board-level policy.
Penny Crosman (22:25):
All right. Well, that’s a good note to end on. So Rodney Hood, thank you so much for joining us today. And to all of you, thank you for joining the American Banker Podcast. I produced this episode with audio production by Adnan Khan, WenWyst Jeanmary, and Anna Mints. Special thanks this week to Rodney Hood. Rate us, review us, and subscribe to our content at www.americanbanker.com/subscribe. For American Banker, I’m Penny Crosman, and thanks for listening.