Credit Denial in the Age of AI

Cheyenne Noelle · November 26, 2019 · Finance


Are you trustworthy? It’s the question banks and financial services firms ask when deciding who qualifies for a loan, credit card, or mortgage. Even in the age of artificial intelligence (AI), gaps remain in the system. This article explores the traditional credit score, who is at risk, the race to capitalize on the “unbanked,” and how AI can work for or against positive change in the credit denial process.

The logic of the credit score

In the 1950s, engineer Bill Fair and mathematician Earl Isaac created an automated scoring system that eventually became the FICO score. As more banks adopted the system, the need for regulation became apparent. The Fair Credit Reporting Act, passed in 1970, established rules for what data could be collected, what could be reported and for how long, and how consumers could obtain copies of their credit reports.

For years, traditional scorecards, linear models, decision trees, and the FICO score have played a significant role in determining who can and cannot qualify for various lines of credit. Although they don’t capture an applicant’s full history, they are still used by 90% of top lenders in America.
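To make the mechanics concrete, here is a minimal sketch of how a traditional points-based scorecard works. All attribute names, point values, and the cutoff are invented for this illustration; they are not FICO's actual factors or weights.

```python
# Hypothetical points-based scorecard: each applicant attribute maps to
# a point value, and the total is compared against a fixed cutoff.
SCORECARD = {
    "on_time_payment_history": {"good": 120, "fair": 60, "poor": 0},
    "credit_utilization": {"low": 100, "medium": 50, "high": 0},
    "account_age_years": {"10+": 80, "3-9": 40, "0-2": 10},
}
CUTOFF = 180  # invented approval threshold

def score(applicant):
    """Sum the points earned for each attribute the applicant presents."""
    return sum(SCORECARD[attr][value] for attr, value in applicant.items())

applicant = {
    "on_time_payment_history": "fair",
    "credit_utilization": "low",
    "account_age_years": "0-2",
}
total = score(applicant)
print(total, "approved" if total >= CUTOFF else "denied")  # 170 denied
```

The appeal of such models is their transparency: each denial traces back to specific low-scoring attributes, which is exactly what adverse action notices require a lender to explain.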


When you are denied credit, federal law requires the lender to tell you why. This is a reasonable policy on several fronts. First, it gives the consumer the information needed to improve their chances of receiving credit in the future. Second, it creates a record of the decision that (ideally) helps guard against illegal discrimination. As the desire grows to capitalize on the unbanked (what American Banker defines as those with limited access to credit), more banks are turning to technology to change the game. One way financial services firms are rethinking credit denial is with artificial intelligence.


Determining who is worthy

An Efma report found that 58% of banking providers believe AI will have a significant impact on the fintech industry. Lenders are already using machine learning algorithms to solve problems big and small by making manual processes simpler, faster, more accurate, and less expensive.

A growing number of startups, like CredoLab, focus on alternative scoring models to predict the trustworthiness of borrowers. Older scoring models like the FICO score rely on a limited amount of data in the decision-making process, leaving millions of Americans out of the lending lifecycle. For borrowers with scores in the high 500s and low 600s, or anyone else on the cusp of a cutoff, these alternative models can help significantly.
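A hypothetical sketch shows why cusp borrowers stand to gain the most. The cutoff, signal names, and weights below are all invented for illustration (they are not CredoLab's model): a traditional score of 605 falls just under a 620 cutoff, but positive alternative-data signals push it over.

```python
# Hypothetical sketch: a lender's cutoff of 620 denies a borrower at 605,
# but points for alternative-data signals (names and weights invented here)
# can lift a near-cutoff applicant over the line.
TRADITIONAL_CUTOFF = 620

ALT_SIGNAL_WEIGHTS = {  # invented alternative-data signals
    "on_time_rent_payments": 15,
    "stable_utility_history": 10,
    "consistent_income_deposits": 10,
}

def adjusted_score(traditional_score, alt_signals):
    """Add bonus points for each positive alternative-data signal observed."""
    bonus = sum(ALT_SIGNAL_WEIGHTS[s] for s in alt_signals)
    return traditional_score + bonus

final = adjusted_score(605, ["on_time_rent_payments", "stable_utility_history"])
print(final, final >= TRADITIONAL_CUTOFF)  # 630 True
```

The same adjustment does little for an applicant far below the cutoff, which is why these models matter most at the margins.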

So what’s holding more lenders back from adopting these alternative models? For some, it’s a combination of legacy software, resistance to change from the market and the C-suite, and regulatory requirements that make major progress slow to achieve. These obstacles persist even as a volatile financial market pressures banks to meet customer demands. Emerging technologies like machine learning, natural language processing (NLP), and automation are how the financial industry is adapting to market changes and to the growing need to serve those previously deemed “un-creditworthy.”

On the other hand, many experts speculate that banks are unwilling to modify their scoring models because it would not be in their best interest. According to Slate, banks and credit bureaus alike rely on data collected from consumers to build their own scoring models, creating internal risk scores against consumers deemed “unworthy” of credit increases. This poses a substantial risk not only to the consumer but to the integrity of the banking industry.

AI: A Viable Replacement

Technology is often said to be neutral: it’s what we choose to do with it that is good or bad. In the age of AI, augmented agents like chatbots, virtual assistants, robots, and other automation are woven into our lives a little more every day. Today, data is a currency, and for credit scoring models that is big news.

The CEO of ZestFinance recently argued that “all data is credit data.” For applicants looking to qualify for loans, help from outside the outdated models looks promising. Lenddo, based in Singapore, is one of a handful of startups using alternative data points for credit scoring. It reviews behavioral traits and smartphone habits to build creditworthiness models for consumers in emerging markets, where standard credit reporting barely exists.

“The potential this technology has is massive,” Arjuna Costa, a partner at the Omidyar Network, founded by billionaire entrepreneur Pierre Omidyar, told MarketWatch.

It all seems like sunshine and rainbows until you look at what researchers have found about the risks of AI. Much like the consumer-data privacy scandals at Google and Amazon, algorithms in credit scoring models are proving prone to the same mistakes as historical scoring methods.

Where does AI bias come from? Machine learning models are designed and trained purely for accuracy as the user defines it, not for fairness; it’s all data and numbers. AI isn’t biased in itself; it simply reveals the biased attitudes that already exist. It’s when the majority designs products for everyone without fairness in mind that AI falls short of its transformative potential.
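A small sketch makes this concrete. The groups and approval counts below are synthetic, invented for this example: if historical approvals were skewed against one group, a model trained to reproduce those decisions inherits the skew, and a simple approval-rate comparison, like the four-fifths rule used in US disparate-impact analysis, surfaces it.

```python
# Synthetic historical lending decisions (invented data). A model trained
# purely for accuracy on these labels will learn to mimic the skew.
# The four-fifths rule flags an approval-rate ratio below 0.8 as adverse impact.
history = (
    [("group_a", True)] * 70 + [("group_a", False)] * 30 +
    [("group_b", True)] * 40 + [("group_b", False)] * 60
)

def approval_rate(records, group):
    """Fraction of this group's historical applications that were approved."""
    decisions = [approved for g, approved in records if g == group]
    return sum(decisions) / len(decisions)

rate_a = approval_rate(history, "group_a")  # 0.70
rate_b = approval_rate(history, "group_b")  # 0.40
impact_ratio = rate_b / rate_a              # well under the 0.8 threshold
print(f"disparate impact ratio: {impact_ratio:.2f}")  # disparate impact ratio: 0.57
```

The point is that the bias is visible in the training data before any model is fit; an accuracy-only objective simply carries it forward.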


The future of lending

Today, lenders can collect more data about their customers than ever before. Beyond sociodemographic data, that includes transactional data, credit bureau records, social media, Google Analytics, and more. AI has the potential to transform the credit scoring game, but it carries its own risks if it isn’t deployed mindfully.

Developers must take care that a model doesn’t replicate the human processes that were historically a source of bias. Ethics must be considered from the very start to build a fair model. It’s a gray area that technologists are still working through.

Undoubtedly, algorithms and automation tools can assess and mitigate risk more effectively than manual review. Most importantly, creating and assessing risk ethically is a group effort: policymakers, banks, technologists, and consumers will have to work together on policies, products, and processes that maintain the integrity of the financial system.

Considering an automated lending solution for your enterprise? Learn more about how we help banks mitigate risk and deliver exceptional customer service with our award-winning workflow automation solution at www.processmaker.com.

About ProcessMaker

ProcessMaker is low-code business process management and workflow software. ProcessMaker makes it easy for business analysts to collaborate with IT to automate complex business processes, connecting people and existing company systems. Headquartered in Durham, North Carolina, in the United States, ProcessMaker has a partner network spread across 35 countries on five continents. Hundreds of commercial customers, including many Fortune 100 companies, rely on ProcessMaker to digitally transform their core business processes, enabling faster decision making, improved compliance, and better performance.
