
Corporate Talent & Inclusion


NEW STUDY FINDS AI-ENABLED ANTI-BLACK BIAS IN RECRUITING

Dawn Zapata  Senior Content Producer / Thomson Reuters

18 Jun 2021


Too often, the biases that professionals from minority groups experience in the
real world are replicated in the AI-enabled algorithms used in training &
recruiting

Without human intervention, it is easy for algorithms used in the recruiting
process to reproduce bias from the real world, according to a 2019 Harvard
Business Review (HBR) study by researchers from Northeastern University and the
University of Southern California.

Since then, it is questionable whether the situation has improved, despite the
emergence of artificial intelligence as a powerful tool in the evolving 21st
century business landscape and its ability to learn and identify trends. A new
report entitled The Elephant in AI, produced by Prof. Rangita de Silva de Alwis,
founder of the AI & Implicit Bias Lab at the University of Pennsylvania Carey
Law School, examines employment platforms through the perceptions of 87 Black
students and professionals, coupled with an analysis of 360 online professional
profiles, with the goal of understanding how AI-powered platforms “reflect,
recreate, and reinforce anti-Black bias.”


KEY FINDINGS FROM THE RESEARCH

The new report explored a range of AI-related employment processes, from job
searches and online networking opportunities to electronic resume submission
platforms. More specifically, key findings include:

 * In an analysis of the job board recommendations of those surveyed, 40% of
   respondents noted that they had received recommendations based upon their
   identities rather than their qualifications. Moreover, 30% noted that the
   job alerts they had received were below their current skill level.
 * Almost two-thirds (63%) of respondents noted that the academic
   recommendations made by the platforms were lower than their current academic
   achievements. This finding was particularly disappointing, as the survey
   highlights the fact that Black women are the most educated group in America.

For the most part, Silicon Valley is still predominantly populated by white
people, with men comprising the majority of leadership positions. This raises
the question of how the technology industry can create fair and balanced AI for
the masses if there are still diversity challenges within the very teams
designing and implementing the algorithms upon which that AI relies. In fact,
Amazon scrapped a recruiting tool in 2018 because of such bias.

Further, a 2019 study from the U.S. National Institute of Standards and
Technology that examined 189 facial recognition algorithms from 99 developers
found that a majority falsely identified non-white faces at higher rates than
white faces. Although facial recognition is commonly used by both federal and
state governments, it has raised concerns over AI-enabled bias and has led
cities such as Boston and San Francisco to ban its use by their police
departments.


AI-ENABLED BIASES IN RECRUITING & TESTING

As with facial recognition, long-known hiring discrimination practices are
increasingly AI-enabled. The UPenn report notes that Black professionals in
today’s employment marketplace continue to receive 30% to 50% fewer job
call-backs when their resumes contain information tied to their racial or
ethnic identity.

With AI being developed as an employment tool meant to help provide equality of
opportunity, the survey asked respondents whether they feared not being
considered for employment by employers using AI-based recruiting technologies.
Fewer than 10% said it would cause them little worry, yet more than 20% said it
would worry them considerably. The report expands on hiring discrimination by
exploring potential biases incorporated within pre-programmed “expected
responses”, with researchers noting that these responses signal potential data
inequity.

Other inequities centered on skills-based test questions programmed into hiring
platforms that have been known to be biased. Such questions, built upon exams
such as the Law School Admission Test (LSAT), create unfair screening
assessments. And given the amount of research on these biases in standardized
testing, the use of legacy assessment models continues to inhibit Black and
other minority candidates from advancing in employment hiring pools.

In considering the use of AI platforms by employers, the report points both to
the technical complexity of the AI behind the platforms and to the limited
understanding of such complexities among those in human resources and other
hiring roles.


ACTIONS FOR EMPLOYERS & DEVELOPERS

Until there are industry-wide best practices, the responsibility for ensuring
that the AI algorithms in use promote equity falls upon the vendors that build
the tools and the employers that use them. According to the 2019 HBR study,
employers using AI-enabled recruiting tools should analyze their entire
recruiting pipeline — from attraction to on-boarding — in order to “detect
places where latent bias lurks or emerges anew.”
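
In practice, such a pipeline audit can start with something as simple as
comparing pass-through rates by demographic group at each stage. The sketch
below is purely illustrative rather than drawn from the HBR study: the column
names are hypothetical, and the 80% cutoff is the familiar “four-fifths rule”
of thumb from U.S. EEOC adverse-impact guidance.

    import pandas as pd

    # Hypothetical candidate records; the column names are assumptions made
    # for illustration, not fields from any real recruiting platform.
    candidates = pd.DataFrame({
        "group":       ["a", "a", "a", "a", "b", "b", "b", "b"],
        "screened":    [1, 1, 0, 1, 1, 0, 0, 1],
        "interviewed": [1, 0, 0, 1, 0, 0, 0, 1],
        "hired":       [1, 0, 0, 1, 0, 0, 0, 0],
    })

    stages = ["screened", "interviewed", "hired"]
    rates = candidates.groupby("group")[stages].mean()
    print(rates)  # per-group rate of reaching each stage

    # Four-fifths rule of thumb: flag any stage where a group's rate falls
    # below 80% of the best-off group's rate at that stage.
    for stage in stages:
        ratio = rates[stage] / rates[stage].max()
        flagged = ratio[ratio < 0.8]
        if not flagged.empty:
            print(f"{stage}: potential adverse impact for {list(flagged.index)}")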

Prof. de Silva de Alwis calls for diverse teams to develop less biased models
and algorithms, and she advises employers and software developers to leverage
tools that minimize bias, such as Microsoft’s Fairlearn, an open-source toolkit
that empowers data scientists and developers to assess and improve the fairness
of their AI systems. InterpretML, also from Microsoft, is another tool that
helps AI-model developers assess their models’ behavior and de-bias their data.
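
As a rough illustration of what such an assessment looks like, the sketch below
uses Fairlearn’s MetricFrame to disaggregate a screening model’s accuracy and
selection rate by demographic group; the synthetic labels and group names are
stand-ins, not data from the report.

    import numpy as np
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import (
        MetricFrame, selection_rate, demographic_parity_difference,
    )

    rng = np.random.default_rng(0)
    n = 1_000
    y_true = rng.integers(0, 2, n)  # ground-truth "qualified" labels
    y_pred = rng.integers(0, 2, n)  # a screening model's decisions
    group = rng.choice(["group_a", "group_b"], n)  # demographic group

    # MetricFrame computes each metric overall and per sensitive-feature group.
    mf = MetricFrame(
        metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=group,
    )
    print(mf.by_group)      # per-group accuracy and selection rate
    print(mf.difference())  # largest between-group gap for each metric

    # 0.0 would mean every group is selected at exactly the same rate.
    print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))

Fairlearn also ships mitigation algorithms (its reductions module, for example)
that retrain a model subject to fairness constraints once an assessment
surfaces a gap.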

Employers should also take a “second look” at the resumes and CVs of
underrepresented minorities to mitigate the biases that risk being reproduced
on a vast scale by AI-led recruitment platforms, says Eric Rosenblum, Managing
Partner at Tsingyuan Ventures, the largest Silicon Valley venture capital fund
for Chinese diaspora innovators.


