
AI TRANSPARENCY IN FINANCE – UNDERSTANDING THE BLACK BOX

By Daniel Faggella, last updated on January 27, 2020

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations,
World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after
expert on the competitive strategy implications of AI for business and
government leaders.


The financial sector was one of the first to start experimenting with machine
learning applications for a variety of use-cases. In 2019, banks and other
lenders are looking to machine learning as a way to win market share and stay
competitive in a changing landscape, one in which people are no longer
exclusively going to banks to handle all of their banking needs.

While lenders may be excited to take on machine learning projects at their
businesses, many aren’t fully aware of the challenges that come with adopting
machine learning in finance. Lenders face several unique difficulties when it
comes to implementing machine learning, particularly with regards to machine
learning-based credit models.

In order to better understand these domain-specific challenges, we spoke with
Jay Budzik, CTO at Zest AI, about transparency in machine learning as applied to
the financial sector and how lenders may be able to overcome what is often
referred to as the “black box” problem of machine learning.

In this article, we describe the black box of machine learning in finance and
explain how a lack of transparency may cause problems for lenders and consumers
that interact with machine learning-based credit models. These problems include:

 * Ineffective Model Development And Validation
 * Inability to Explain Why a Credit Applicant Was Rejected
 * Algorithmic Bias (Race, Gender, Age)
 * Inability to Monitor Models In Production

Later in the article, we discuss Budzik’s critique of one popular technique for
explainable machine learning. Finally, we finish the article by exploring what
machine learning-based credit models mean for both lenders and credit
applicants.


THE BLACK BOX OF MACHINE LEARNING IN FINANCE

Although ML-based credit models are more predictive than traditional models
(such as logistic regression and simple FICO-based scorecards) in many cases,
they often lack one of the most critical advantages of these traditional models:
explainability. 

Machine learning has what’s called a “black box” problem, meaning it’s often
extremely difficult to figure out how an ML-based credit model came to the score
it did for a particular credit applicant. 

This is due to the nature of ML algorithms such as tree-based models or neural
networks. In neural networks, each node in the network fires in response to
patterns that it’s “seen” before, but it doesn’t actually understand what’s
being presented to it. In other words, one can’t probe a node to figure out what
concept or idea made it fire without complex diagnostics that are far from
ubiquitous in the world of machine learning.
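To make the contrast concrete, below is a minimal sketch (in Python, with synthetic data standing in for hypothetical credit features) of why a linear model is readable in a way a neural network is not: the logistic regression exposes one signed coefficient per input, while the network exposes only stacks of weight matrices that don't map to per-feature reasons.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for credit features (hypothetical; illustration only)
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 4))
y = (0.8 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

linear = LogisticRegression().fit(X, y)
print(linear.coef_)                      # one signed weight per feature: directly readable

net = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000).fit(X, y)
print([w.shape for w in net.coefs_])     # layered weight matrices: no per-feature "reason"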

In less regulated industries, the black box problem is less of an issue. For
example, if a recommendation engine for an online store mistakenly recommends a
product a customer isn't interested in, the worst that happens is the customer
doesn't buy that product. In this case, it ultimately doesn't matter how the
model came to recommend that product to the customer.

In time, the model will simply get better at recommending products, and the
online store's operations go on as usual. But in the financial sector, a machine
learning model's inaccurate output can have serious consequences for the
business using the model.


INABILITY TO EXPLAIN WHY A CREDIT APPLICANT WAS REJECTED

In credit underwriting, lenders must be able to explain to credit applicants why
they were rejected. Traditional linear models make this relatively simple
because one can easily interpret how the model and the underwriter arrived at
the lending decision, and they factor in significantly fewer variables than a
machine learning model could.

There may be hundreds of variables involved in a machine learning-based credit
model, and the interactions among each of those variables can themselves be
variables.

Without rigorous explainability, lenders aren't able to provide applicants with
an adverse action notice that details why they were rejected, information
applicants can use to improve their credit profiles and successfully obtain
credit in the future. We return to the effect this can have on consumers later
in the article, when we cover the SHAP technique for explainability in machine
learning models.
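As a rough illustration of why linear models make adverse action notices tractable, the sketch below (synthetic data, hypothetical feature names, not a regulator-prescribed methodology) ranks one applicant's reasons by each feature's contribution to the log-odds relative to a reference point; the most negative contributions become the top reasons on the notice.

import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["utilization", "late_payments", "income", "inquiries"]   # hypothetical
rng = np.random.default_rng(7)
X = rng.normal(size=(2000, len(feature_names)))
y = (X @ np.array([-1.0, -1.5, 0.8, -0.4]) + rng.normal(size=2000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
reference = X.mean(axis=0)                                  # reference applicant (portfolio mean)

applicant = X[0]                                            # one denied applicant
contributions = model.coef_[0] * (applicant - reference)    # per-feature log-odds contribution

# The most negative contributions become the ranked adverse-action reasons.
for idx in np.argsort(contributions)[:3]:
    print(f"{feature_names[idx]}: {contributions[idx]:+.3f}")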

Unexplainable machine learning-based credit models can have serious legal
consequences for lenders that use them. The Fair Credit Reporting Act of 1970
requires lenders to be able to explain the models they use to approve and deny
credit applicants.

Failure to do so accurately can result in large fines and/or the suspension of a
banking license. Noncompliance risk is perhaps the most important reason that
lenders have been cautious about adopting machine learning for credit scoring
and underwriting.


ALGORITHMIC BIAS (RACE, GENDER, AGE)

Another critical concern for lenders looking to adopt machine learning-based
credit models is algorithmic bias. Although one may assume an algorithm is
objective and neutral to the social context in which it's created, those who
develop the algorithm bring with them their own assumptions about the society to
which they belong. These assumptions can inadvertently influence the way a
machine learning engineer develops an algorithm, and, as a result, the outputs
of that algorithm can be unintentionally biased.

In lending models, bias is most often introduced when determining the data or
the weightings that the algorithm will use to decide whether or not to approve
an applicant. Lenders know which data points they need to avoid using when
making decisions with traditional credit models, be they manual scorecards or
linear regression models, including:

 * The applicant’s race or ethnicity
 * The applicant’s gender or sex
 * The applicant’s age

There will undoubtedly be other data points barred from use in credit modeling
as more regulations are passed in the coming years. Other data points, such as
property values, can serve as clear proxies for race.

One might think it's easy enough to simply ensure these specific data points
aren't fed to an ML algorithm. But the black box problem makes it difficult to
know whether discriminatory credit signals are inadvertently being factored into
a machine learning-based credit model through combinations of other, seemingly
unrelated data points. According to Budzik:

The location of the applicant can be a pretty good predictor of their race and
ethnicity, and so you have to be pretty careful when you’re considering
attributes like the location of someone. We’ve seen even at the state level when
you associate the state in which a person lives with other attributes like the
mileage on the car, you can end up with pretty perfect predictors of their race.
So it’s important to be able to do a full inspection of the model to make sure
it’s not doing the wrong thing from a discrimination perspective, that it isn’t
making decisions that are biased and unfair.
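Budzik's point about combinations of attributes acting as proxies can be checked empirically. One common diagnostic, sketched below with synthetic data (a generic illustration, not Zest AI's method), is to train an auxiliary model to predict the protected attribute from the candidate inputs; if it predicts well above chance, those inputs jointly encode a proxy and deserve closer review.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 5000
protected = rng.integers(0, 2, size=n)                    # hypothetical protected attribute
state_signal = protected + rng.normal(scale=0.7, size=n)  # toy stand-in for "state of residence"
car_mileage = 0.8 * protected + rng.normal(size=n)        # toy stand-in for "mileage on the car"
X_candidate = np.column_stack([state_signal, car_mileage])

aux = GradientBoostingClassifier()
auc = cross_val_score(aux, X_candidate, protected, cv=5, scoring="roc_auc").mean()
print(f"proxy AUC: {auc:.2f}")   # well above 0.5 means the combination behaves as a proxy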

The consequences of a biased algorithm can vary, but in credit modeling they can
be harmful to those who are already at a disadvantage when it comes to acquiring
credit. People of color are already much more likely to be denied mortgages than
white applicants under traditional credit scores, in part because those scores
don't take into account recurring payments such as rent, phone bills, and
utility bills.

As a result, people of color may have thin credit histories but be just as
likely to pay off a loan as white borrowers with extensive credit histories.
This is the state of the current credit system; machine learning could
exacerbate the problem if lenders and regulators are unable to look into an
algorithm and figure out how it’s coming to its conclusions.

The political sphere is starting to concern itself with algorithmic bias as
well. In a letter to several government agencies, including the Consumer
Financial Protection Bureau, presidential candidate Elizabeth Warren and Senator
Doug Jones of Alabama asked what these agencies were doing to combat algorithmic
bias in automated credit modeling, calling out FinTech companies specifically
for rarely including a human in the loop when underwriting a loan.

In addition, Zest AI’s CEO, Douglas Merrill, spoke before the AI Task Force of
the House Committee on Financial Services on algorithmic bias and the company’s
purported ability to develop transparent underwriting algorithms.


SHAP AND ITS LIMITATIONS IN CREDIT UNDERWRITING

SHapley Additive exPlanations (SHAP) is one method of bringing explainability to
machine learning that’s gained some popularity since its inception in 2017. The
method helps explain how a tree-based machine learning model comes to the
decisions it does. It incorporates elements of game theory to do this, and it
can purportedly do this quickly enough for use in business.

According to Zest AI, however, SHAP, when used alone, has some limitations in
credit underwriting when it comes to explaining ML-based credit models. The
greatest of these is in large part a domain-specific problem. 

Lenders may want their model to approve a percentage of the applicants that it
sees. This percentage, this desired outcome, exists in what’s called “score
space.” Meanwhile, the machine learning model’s actual output is not a
percentage; it exists in what’s called the “margin space.”

SHAP is designed to explain machine learning model outputs that exist in the
“margin space,” and Zest AI argues that this poses a challenge when using it to
explain why an applicant was denied credit. In other words, lenders need the
explanation to be in “score space,” but SHAP doesn’t allow this easily. As a
result, according to Budzik in a post on the topic:

If you compute the set of weighted key factors in margin space, you’ll get a
very different set of factors and weights than if you compute them in score
space, which is where banks derive their top five explanations for rejecting a
borrower. Even if you are using the same populations and are only looking at the
transformed values, you will not get the same importance weights. Worse, you
likely won’t end up with the same factors in the top five.

As a result, a lender might provide an adverse action notice to a consumer that
inaccurately ranks the reasons the applicant was denied. This could lead the
applicant to try to correct their credit profile in the wrong way, focusing on a
reason that was actually less critical to the denial than another reason
inaccurately ranked as less important.
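A toy sketch of that margin-space versus score-space gap is below (Python, synthetic data; it assumes shap's TreeExplainer returns margin-space attributions for a binary XGBoost model, and the features and probability conversion are illustrative only). Because the probability a lender acts on is a sigmoid of the margin, factors with similar margin attributions can move the final score by different amounts, so the two rankings need not agree.

import numpy as np
import xgboost
import shap

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.6 * X[:, 1] * X[:, 2] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

model = xgboost.XGBClassifier(n_estimators=100, max_depth=3).fit(X, y)
explainer = shap.TreeExplainer(model)
margin_attr = explainer.shap_values(X)               # attributions in log-odds ("margin") space
base = float(np.ravel(explainer.expected_value)[0])  # baseline margin

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

i = 0                                                # one applicant
full = base + margin_attr[i].sum()
# Crude "score space" view: how much the probability moves when one feature's
# margin contribution is removed. The sigmoid's nonlinearity can reorder factors.
prob_effect = np.array([sigmoid(full) - sigmoid(full - margin_attr[i, j])
                        for j in range(X.shape[1])])

print("top factors by margin-space attribution:", np.argsort(-np.abs(margin_attr[i]))[:3])
print("top factors by score-space effect:", np.argsort(-np.abs(prob_effect))[:3])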


WHAT ML-BASED CREDIT MODELS MEAN FOR LENDERS

As of now, there are few examples of explainable machine learning as applied to
credit underwriting. Zest AI does claim its ZAML software is one of them,
working to explain multiple types of machine learning-based credit models.

That said, it’s likely going to be several years before lenders are comfortable
fully automating their credit underwriting with machine learning, especially due
to uncertainty around regulations. As evidenced by the recent interest in
discussing machine learning in finance, the government is likely to make several
decisions regarding the use of ML-based credit models within the next few
years. 

That doesn’t, however, necessarily mean that lenders should wait to figure out
how exactly they would implement a machine learning-based credit model when the
time comes. They should be paying attention to the black box problem and how
FinTechs, banks, and regulators are responding to it.

The black box problem matters because overcoming it can unlock the real
potential that machine learning has in finance, particularly in credit
underwriting. Explainable models could allow lenders to approve more loan
applicants who would otherwise have been denied by more rigid credit models,
without increasing the risk they take on.

In terms of dollars, this means more revenue and less money lent out to
borrowers that won’t be able to pay it back.

The vast number of data points involved in a machine learning model may be a
major source of the black box problem, but those data points are also what will
allow lenders to create more nuanced profiles of loan applicants.


WHAT ML-BASED CREDIT MODELS MEAN FOR BORROWERS

Machine learning-based credit models may allow lenders to approve borrowers for
loans they wouldn't have qualified for before. When explainable ML becomes
ubiquitous in credit underwriting, lenders will ideally be able to run more
effective businesses and give more people fair access to credit, regardless of
age, gender, or race.

 

This article was sponsored by Zest AI and was written, edited and published in
alignment with our transparent Emerj sponsored content guidelines. Learn more
about reaching our AI-focused executive audience on our Emerj advertising page.

Header Image Credit: WallStreet.com


