Better AI/ML Analytical Methods Can Both Remove Unfairness and Enhance Revenue for Lenders

This article was originally published on LinkedIn by Jeff Marowits, President at Keystone.

Financial institutions decide every day whether thousands of people qualify for credit. When those decisions are made with artificial intelligence and machine learning (AI/ML), many credit-worthy individuals may be turned away because of inadequate financial modeling. Now two Keystone economists, my colleagues Jill Furzer, Ph.D., and Vitoria Rabello de Castro, Ph.D., have developed a novel application for correcting those errant computations. Their research shows how institutions can make decisions that are both fairer to applicants and potentially more profitable for lenders in the long run. It's important work, and I wanted to highlight the contribution they've made in this area.

As with so many modern businesses, financial organizations increasingly rely on AI/ML to make decisions. In this case, the decisions are made by compiling the results of previous lending decisions in a database and training a model to determine who is likely to be a good credit risk and who is likely to default on a loan. These predictions are driven by evaluating long lists of an individual's characteristics—age, income, gender, work history, etc.—and producing a credit-worthiness score. The resulting decision is binary: the applicant gets the loan or is turned down.

Jill and Vitoria looked at a group of financial decisions made by German lenders in the 1990s. Because fewer candidates received negative scores, there was more statistical variation—and thus more uncertainty about the predictions' accuracy—than for the positive scores. They used a relatively new technique called data Shapley values to further refine the AI/ML process and improve the predictability of outcomes.

A baseball analogy: The value of a 5-8, stocky outfielder

Before delving into the importance of that, consider how statistics and predictions play out in sports.
Professional teams take care to recruit players based on predicting how they will perform at the highest level of their game. Training players is costly, so organizations use various methods to predict who will perform well, based largely on how those with comparable qualities have performed. A baseball team, for example, would be unlikely to take a chance on a center fielder who stood a mere 5-8 and weighed 178 pounds, based solely on how that person compared to the physical characteristics of most Major League Baseball center fielders. But the Minnesota Twins took a gamble and signed a player with those dimensions. That player, Kirby Puckett, went on to become an All-Star and is now enshrined in the Baseball Hall of Fame. From a statistical point of view, Puckett had an extremely high “value” for the Twins' overall objective of winning baseball games. He helped them win two World Series during his career.

The value of different classes of borrowers

Loosely speaking, the data Shapley method employed by Jill and Vitoria looks at the predictive “value” of potential borrowers from a game theory perspective: how much would each potential borrower be expected to contribute to the goal of loans that are repaid in full rather than defaulting? Complicating the issue is that many factors are evaluated all at once, so it takes time and experimentation to learn the true risk a borrower represents. In practice, that means lenders would have to take some short-term risk that borrowers in categories previously denied loans would repay them. But Jill and Vitoria found quantitative benefits to using the data Shapley value method. Before using this approach, lenders incurred losses in two ways:

  • Borrowers who were predicted to be good credit risks but turned out not to be cost the lender by failing to repay
  • Borrowers who were predicted to be bad credit risks but would have repaid cost the lender in lost potential revenue
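To make the game-theory idea concrete, here is a minimal sketch of how data Shapley values are typically estimated: each data point's value is its average marginal contribution to a model's performance across many random orderings. Everything below is illustrative—the toy "borrower" data, the stand-in 1-nearest-neighbour model, and the function names are assumptions for this sketch, not the model or dataset Jill and Vitoria actually used.

```python
import random

def utility(train_points, test_points):
    """Utility of a training subset: accuracy of a 1-nearest-neighbour
    classifier on a held-out test set (a stand-in for any lending model)."""
    if not train_points:
        return 0.0
    correct = 0
    for x, y in test_points:
        nearest = min(train_points, key=lambda p: abs(p[0] - x))
        correct += (nearest[1] == y)
    return correct / len(test_points)

def data_shapley(train_points, test_points, n_permutations=200, seed=0):
    """Monte Carlo data Shapley estimate: average the marginal change in
    utility from adding each point, over random insertion orders."""
    rng = random.Random(seed)
    n = len(train_points)
    values = [0.0] * n
    for _ in range(n_permutations):
        order = list(range(n))
        rng.shuffle(order)
        subset = []
        prev_u = utility(subset, test_points)
        for idx in order:
            subset.append(train_points[idx])
            u = utility(subset, test_points)
            values[idx] += u - prev_u  # marginal contribution of this point
            prev_u = u
    return [v / n_permutations for v in values]

# Toy "borrowers": feature = income score, label = 1 (repaid) / 0 (defaulted).
# The last point is deliberately mislabeled noise.
train = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1), (0.15, 1)]
test = [(0.05, 0), (0.3, 0), (0.7, 1), (0.95, 1)]

values = data_shapley(train, test)
```

A useful property of this estimator is that the values sum (up to Monte Carlo noise) to the utility of the full dataset minus the utility of the empty set, so low- or negative-value points—such as mislabeled records—can be identified and down-weighted.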

After applying the approach, the more accurate predictions resulted in fewer defaults and more revenue from borrowers who had previously been excluded. That meant an improvement in cost savings of 33 percentage points. As Vitoria points out, it also means significant social payoffs: groups of people previously excluded from taking out mortgage loans could do so and greatly enhance their overall net worth over their lifetimes.

Ironically enough, even though they are both gainfully employed by Keystone, Jill and Vitoria have each been turned down for credit because of their status. Jill is Canadian and has little U.S. credit history, while Vitoria has the added detrimental factor of working in the U.S. on a visa. They are clear evidence of the second type of loss lenders incur: both have good jobs and solid career prospects and could readily repay the loans. Lenders might make different decisions if they used the methods our Keystone economists devised.

To learn more about how financial institutions and other organizations can achieve better, more equitable outcomes using AI/ML, contact us at Keystone.