



5 ETHICAL QUESTIONS ABOUT ARTIFICIAL INTELLIGENCE

There will be consequences.
Written by Allie Grace Garnett
Allie Grace Garnett is a content marketing professional with a lifelong passion
for the written word. She is a Harvard Business School graduate with a
professional background in investment finance and engineering.

Fact-checked by Doug Ashburn
Doug is a Chartered Alternative Investment Analyst who spent more than 20 years
as a derivatives market maker and asset manager before “reincarnating” as a
financial media professional a decade ago.

Updated: Nov. 21, 2024
Table of Contents

--------------------------------------------------------------------------------

 * Introduction
 * 1. Is AI biased?
 * 2. Does AI compromise data privacy?
 * 3. Who is accountable for AI decisions?
 * 4. Is AI harmful to the environment?
 * 5. Will AI steal my job?
 * The bottom line

[Image: Can AI understand fairness? © StockPhotoPro/stock.adobe.com]

Are you wondering about the ethical implications of artificial intelligence?
You’re not alone. AI is an innovative, powerful tool that many fear could
produce significant consequences—some positive, some negative, and some
downright dangerous.

Ethical concerns about an emerging technology aren’t new, but with the rise of
generative AI and rapidly increasing user adoption, the conversation is taking
on new urgency. Is AI fair? Does it protect our privacy? Who is accountable when
AI makes a mistake—and is AI the ultimate job killer? Enterprises, individuals,
and regulators are grappling with these important questions.


KEY POINTS

 * Bias in AI design can lead to fairness issues.
 * Storing and processing large datasets raises the risk of data breaches.
 * When AI makes a mistake, it’s unclear who should be held accountable.

Let’s explore the major ethical concerns surrounding artificial intelligence and
how AI designers can potentially address these problems.


1. IS AI BIASED?

AI systems can be biased, producing discriminatory and unjust outcomes in
hiring, lending, law enforcement, health care, and other important aspects of
modern life. Biases in AI typically arise from the training data used. If the
training data contains historical prejudices or lacks representation from
diverse groups, then the AI system’s output is likely to reflect and perpetuate
those biases.

Bias in AI systems is a significant ethical concern, especially as the use of AI
becomes more common, because it can lead to unfair treatment. Biased AI systems
may consistently favor certain individuals or groups, or make inequitable
decisions.

Designers of AI systems can proactively combat bias by employing a few best
practices:

 * Use diverse and representative training data.
 * Implement mathematical processes to detect and mitigate biases (see the
   sketch after this list).
 * Develop algorithms that are transparent and explainable.
 * Establish or adhere to ethical standards that prioritize fairness.
 * Conduct regular system audits to continuously monitor bias.
 * Engage in learning and improvement to further reduce bias over time.
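
To make the second best practice concrete, here’s a minimal Python sketch of
one such mathematical check: comparing a model’s approval rates across
demographic groups (a test known as demographic parity). The loan data, group
labels, and 0.1 audit threshold are all illustrative assumptions, not an
industry standard.

    from collections import defaultdict

    def demographic_parity_gap(groups, predictions):
        """Return the spread in positive-prediction rates across groups."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, pred in zip(groups, predictions):
            totals[group] += 1
            positives[group] += pred       # pred is 1 (approve) or 0 (deny)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    # Hypothetical loan decisions for two demographic groups.
    groups = ["A"] * 4 + ["B"] * 4
    predictions = [1, 1, 1, 0, 1, 0, 0, 0]  # group A: 75% approved; B: 25%

    gap, rates = demographic_parity_gap(groups, predictions)
    print(rates)                            # {'A': 0.75, 'B': 0.25}
    if gap > 0.1:                           # illustrative audit threshold
        print(f"Possible bias: approval rates differ by {gap:.0%}")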

Granted, there’s a lot of subjectivity in determining fairness and bias, and to
some degree a generative AI model needs to reflect the world as it is (not as we
wish it to be). For today’s models, it’s still a work in progress.


2. DOES AI COMPROMISE DATA PRIVACY?

Many artificial intelligence models are developed by training on large datasets.
That data comes from a variety of sources, and it may include personal data that
the data owners did not consent to provide. AI’s heavy appetite for data raises
ethical concerns about how the data is collected, used, and shared.

AI systems generally do not enhance data privacy or protection. When
developers store and process large datasets, which can be attractive targets
for scammers, the risk of data breaches rises, and the data can be misused or
accessed without authorization.

AI system developers have an ethical responsibility to prevent unauthorized
access, use, disclosure, disruption, modification, or destruction of data.
Here’s what you can expect from an AI system that puts users’ data interests
first:

 * The AI model collects and processes only the minimum data that is necessary.
 * Your data is used transparently and only with your consent.
 * Data storage and transmission are encrypted to protect against unauthorized
   access.
 * Data is anonymized or pseudonymized whenever possible (a sketch follows
   this list).
 * Access controls and authentication mechanisms strictly control data access.
 * Users are granted as much control as possible over their data.
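
As one illustration of that practice, here’s a minimal Python sketch of
pseudonymization: replacing a direct identifier (an email address) with a
keyed hash before a record enters a training dataset. The secret key and
record fields are hypothetical; a real system would also need key management
and a retention policy.

    import hashlib
    import hmac

    SECRET_KEY = b"replace-with-a-managed-secret"   # hypothetical key

    def pseudonymize(identifier: str) -> str:
        """Map a direct identifier to a stable, non-reversible pseudonym."""
        return hmac.new(SECRET_KEY, identifier.encode(),
                        hashlib.sha256).hexdigest()

    record = {"email": "jane@example.com", "loan_amount": 25000}
    safe_record = {
        "user_id": pseudonymize(record["email"]),  # pseudonym, not the email
        "loan_amount": record["loan_amount"],      # keep only needed fields
    }
    print(safe_record)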



Are today’s generative AI models employing these best practices? With the
secrecy and mystique surrounding the latest rollouts, it’s difficult to know for
sure.


3. WHO IS ACCOUNTABLE FOR AI DECISIONS?

If you or an enterprise uses a generative AI tool and it makes a mistake, who is
accountable for that error? What if, for example, the AI in a health care system
makes a false diagnosis, or a loan is unfairly denied by an AI algorithm? The
use of artificial intelligence in consequential decision-making can quickly
obscure responsibility, raising important questions about AI and accountability.

This accountability problem in AI stems partly from the lack of transparency in
how AI systems are built. Many AI systems, especially those that use deep
learning, operate as “black boxes” for decision-making. AI decisions are
frequently the result of complex interactions between algorithms and data,
making it difficult to attribute responsibility.

Accountability matters for building widespread trust in AI systems. AI
developers
can address issues of accountability by taking proactive measures:

 * Follow ethical design principles that specifically prioritize accountability.
 * Define and document the responsibilities of all stakeholders in an AI
   system (see the sketch after this list).
 * Ensure that the system design includes meaningful human oversight.
 * Engage stakeholders to understand concerns and expectations regarding AI
   accountability.
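
One way to support the documentation and oversight practices above is an
audit trail that records every consequential AI decision alongside the human
responsible for it. The Python sketch below is a minimal illustration; every
field name and value is hypothetical.

    import datetime
    import json

    def log_decision(model_version, inputs, output, reviewer,
                     path="audit.jsonl"):
        """Append one auditable AI decision as a line of JSON."""
        record = {
            "timestamp": datetime.datetime.now(
                datetime.timezone.utc).isoformat(),
            "model_version": model_version,
            "inputs": inputs,
            "output": output,
            "responsible_reviewer": reviewer,  # a named human, not "the model"
        }
        with open(path, "a") as f:
            f.write(json.dumps(record) + "\n")

    # Hypothetical loan denial: the record ties the outcome to a person.
    log_decision(
        model_version="credit-model-v2.3",
        inputs={"income": 58000, "credit_score": 640},
        output={"approved": False, "reason": "debt-to-income too high"},
        reviewer="j.doe@lender.example",
    )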

Still, if you’re one of the millions who use ChatGPT, then you may have noticed
the disclaimer telling you that the generative AI tool makes mistakes. And it
does—so be sure to fact-check all of the information you receive. In other
words, the accountable party is you, the user.


4. IS AI HARMFUL TO THE ENVIRONMENT?

Training and operating artificial intelligence models can be highly energy
intensive. AI models may require substantial computational power, which can
result in significant greenhouse gas emissions if the power source isn’t
renewable. The production and disposal of hardware used in AI systems may also
worsen the problems of electronic waste and natural resource depletion.

It’s worth noting that AI also has the potential to benefit the environment by
optimizing energy usage, reducing waste, and aiding in environmental monitoring.
But that doesn’t erase the eco-ethical concerns of using AI. System designers
can help address them by:

 * Designing energy-efficient algorithms that use minimal computing power.
 * Optimizing and minimizing data processing needs.
 * Choosing hardware with maximum power efficiency.
 * Using data centers powered by renewable energy sources.
 * Comprehensively assessing the carbon footprint of an AI model (a rough
   sketch follows this list).
 * Supporting or engaging in research on sustainable artificial intelligence.
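
To show what even a rough carbon-footprint assessment involves, here’s a
back-of-envelope Python sketch: energy equals power draw times hours times
data-center overhead, and emissions equal energy times the grid’s carbon
intensity. Every number is an illustrative assumption, not a measurement of
any real model.

    # All figures below are illustrative assumptions.
    gpu_count = 64              # hypothetical training cluster size
    gpu_power_kw = 0.4          # average draw per GPU, in kilowatts
    training_hours = 500
    pue = 1.2                   # data-center overhead (power usage effectiveness)
    grid_kg_co2_per_kwh = 0.4   # carbon intensity of the local grid

    energy_kwh = gpu_count * gpu_power_kw * training_hours * pue
    co2_kg = energy_kwh * grid_kg_co2_per_kwh

    print(f"Energy used: {energy_kwh:,.0f} kWh")  # 15,360 kWh
    print(f"Emissions:   {co2_kg:,.0f} kg CO2")   # 6,144 kg CO2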

Since the Industrial Revolution, we have been turning fossil fuels into economic
growth. But there are associated negative externalities that must be addressed.


5. WILL AI STEAL MY JOB?

You may be paying close attention to artificial intelligence because you’re
concerned about your job. That’s relatable! The potential for AI to automate
tasks or perform them more efficiently creates a serious ethical concern with
broad economic implications.

Enterprises have a moral—if not legal—responsibility to use artificial
intelligence in a way that enhances rather than replaces their workforces.
Employers that integrate AI while providing opportunities for retraining,
upskilling, and transitioning employees into new AI-based roles are using the
technology in an ethically defensible way.

The fear that AI will “steal” jobs is real. And it likely won’t be assuaged
anytime soon. AI system designers cannot entirely mitigate this risk, but they
can use a few tactics to discourage enterprises from using AI in economically
disastrous ways. Strategies include:

 * Develop complementary AI designs that augment human labor rather than replace
   it.
 * Deploy AI tools incrementally in ways that only gradually improve workforce
   efficiency.
 * Focus on developing AI tools for tasks too dangerous or impractical for
   humans.
 * Actively engage with stakeholders of an AI tool to ensure that all
   perspectives are heard.


THE BOTTOM LINE

The ethical deployment of AI is crucial to the economy and all of its
participants. When used ethically, AI can support economic growth by driving
innovation and efficiency. AI that’s used only to enhance profitability could
produce many unintended consequences. As the adoption of artificial intelligence
continues, these ethical questions are likely to become more important to all of
us.



