URL: https://www.darkreading.com/cyber-risk/ai-remains-wild-card-in-war-against-disinformation

AI REMAINS A WILD CARD IN THE WAR AGAINST DISINFORMATION

Digital literacy and protective measures will be key to detecting disinformation
and deepfakes as AI is used both to shape public opinion and erode trust in
democratic processes, and to identify nefarious content.

Erin Drake, Melissa DeOrio

July 18, 2024

5 Min Read
Source: Enrico01 via Alamy Stock Photo


COMMENTARY

Disinformation — information created and shared to mislead opinion or
understanding — isn't a new phenomenon. However, digital media and the
proliferation of open source generative artificial intelligence (GenAI) tools
like ChatGPT, DALL-E, and DeepSwap, coupled with the mass dissemination
capabilities of social media, are exacerbating the challenges of preventing the
spread of potentially harmful fake content.

Although still in their infancy, these tools have begun shaping how we create
digital content, requiring little skill or budget to produce convincing photo
and video imitations of individuals or to generate believable conspiratorial
narratives. In fact, the World Economic Forum ranks AI-amplified disinformation
among the most severe global risks of the next few years, citing the potential
for exploitation amid heightened global political and social tensions and during
critical junctures such as elections.



In 2024, as more than 2 billion voters across 50 countries have already headed
to the polls or await upcoming elections, disinformation has driven concerns
over its ability to shape public opinion and erode trust in the media and
democratic processes. But while AI-generated content can be leveraged to
manipulate a narrative, there is also potential for these tools to improve our
capabilities to identify and protect against these threats. 




ADDRESSING AI-GENERATED DISINFORMATION

Governments and regulatory authorities have introduced various guidelines and
legislation to protect the public from AI-generated disinformation. In November
2023, 18 countries — including the US and UK — entered into a nonbinding AI
Safety agreement, while in the European Union, an AI Act approved in mid-March
limits various AI applications. The Indian government drafted legislation, in
response to a proliferation of deepfakes during election cycles, that
compels social media companies to remove reported deepfakes or lose their
protection from liability for third-party content.



Nevertheless, authorities have struggled to adapt to the shifting AI landscape,
which often outpaces their ability to develop relevant expertise and reach
consensus across multiple (and often opposing) stakeholders from government,
civil, and commercial spheres. 

Social media companies have also implemented guardrails to protect users,
including increased scanning and removal of fake accounts, and steering users
toward reliable sources of information, particularly around elections.
Amid financial challenges, many platforms have downsized teams dedicated to AI
ethics and online safety, creating uncertainty as to the impact this will have
on platforms' abilities and appetite to effectively stem false content in the
coming years. 



Meanwhile, technical challenges persist around identifying and containing
misleading content. The sheer volume and rate at which information spreads
through social media platforms — often where individuals first encounter
falsified content — seriously complicates detection efforts; harmful posts can
"go viral" within hours as platforms prioritize engagement over accuracy.
Automated moderation has improved capabilities to an extent, but such solutions
have been unable to keep up. For instance, significant gaps remain in automated
attempts to detect certain hashtags, keywords, misspellings, and non-English
words.
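These detection gaps can be illustrated with a toy filter. The watchlist, the substitution table, and the sample post below are all hypothetical; the point is that exact-match filtering misses trivially obfuscated terms, and even adding Unicode normalization plus leetspeak reversal only closes part of the gap:

```python
import re
import unicodedata

# Hypothetical watchlist; real moderation term lists are far larger and curated.
WATCHLIST = {"electionfraud"}

# Common character substitutions used to evade exact-match filters.
SUBSTITUTIONS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"}
)

def normalize(text: str) -> str:
    """Lowercase, strip accents, and undo simple leetspeak substitutions."""
    text = unicodedata.normalize("NFKD", text)
    text = "".join(c for c in text if not unicodedata.combining(c))
    return text.lower().translate(SUBSTITUTIONS)

def naive_match(post: str) -> bool:
    """Exact keyword matching: the approach that leaves gaps."""
    return any(t in WATCHLIST for t in re.findall(r"\w+", post.lower()))

def normalized_match(post: str) -> bool:
    """Matching after normalization closes some, but not all, of those gaps."""
    return any(t in WATCHLIST for t in re.findall(r"\w+", normalize(post)))

post = "Share this before they delete it! #3lecti0nFraud"
# naive_match(post) is False, while normalized_match(post) is True
```

Even the normalized version says nothing about novel phrasings, coded language, or non-English variants, which is why keyword-based moderation alone cannot keep pace.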

Disinformation can be exacerbated when it is unknowingly disseminated by
mainstream media or influencers who have not sufficiently verified its
authenticity. In May 2023, the Irish Times apologized after gaps in its editing
and publication process resulted in the publication of an AI-generated article.
In the same month, while an AI-generated image on Twitter of an explosion at the
Pentagon was quickly debunked by US law enforcement, it nonetheless prompted a
0.26% drop in the stock market. 




WHAT CAN BE DONE?

Not all applications of AI are malicious. Indeed, leaning into AI may help
circumvent some limitations of human content moderation, decreasing reliance on
human moderators to improve efficiency and reduce costs. But there are
limitations. Content moderation using large language models (LLMs) is often
overly sensitive in the absence of sufficient human oversight to interpret
context and sentiment, blurring the line between preventing the spread of
harmful content and suppressing alternative views. Continued challenges with
biased training data and algorithms and AI hallucinations (occurring most
commonly in image recognition tasks) have also contributed to difficulties in
employing AI technology as a protective measure. 
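One common way to balance automation against this over-sensitivity is confidence-tiered triage: the model acts alone only on high-confidence cases, and ambiguous scores are escalated to human reviewers who can judge context and sentiment. The sketch below is illustrative; the thresholds and the harm-score source are hypothetical, not any platform's actual policy:

```python
from dataclasses import dataclass

# Illustrative thresholds; real systems tune these per harm category and language.
REMOVE_THRESHOLD = 0.95  # act automatically only when the model is very confident
REVIEW_THRESHOLD = 0.60  # ambiguous scores are escalated, not auto-removed

@dataclass
class ModerationDecision:
    action: str  # "remove", "human_review", or "allow"
    score: float

def triage(harm_score: float) -> ModerationDecision:
    """Route a classifier's harm score so that borderline, context-dependent
    content reaches a human reviewer instead of being suppressed outright."""
    if harm_score >= REMOVE_THRESHOLD:
        return ModerationDecision("remove", harm_score)
    if harm_score >= REVIEW_THRESHOLD:
        return ModerationDecision("human_review", harm_score)
    return ModerationDecision("allow", harm_score)
```

The design trade-off is explicit: lowering the review threshold catches more harmful content at the cost of more human workload, while raising the removal threshold reduces wrongful takedowns of alternative views.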

A further potential solution, already in use in China, involves "watermarking"
AI-generated content to aid identification. Though the differences between AI-
and human-generated content are often imperceptible to humans, deep-learning
models and algorithms within existing solutions can readily detect these
variations. The dynamic nature of AI-generated content poses a unique challenge
for digital forensic investigators, who must develop increasingly sophisticated
methods to counter the adaptive techniques of malicious actors leveraging these
technologies. While existing watermark technology is a step in the right
direction, diversifying solutions will ensure continued innovation that can
outpace, or at least keep pace with, adversarial uses.
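To make the watermarking idea concrete, here is a minimal least-significant-bit (LSB) sketch over toy grayscale pixel values. This is not any production scheme (real image and text watermarks use statistical, spread-spectrum, or token-bias methods designed to survive compression and editing), but it shows the basic embed/detect round trip:

```python
WATERMARK = 0b10110010  # hypothetical 8-bit provenance tag

def embed(pixels, mark=WATERMARK):
    """Hide one bit of the mark in the least significant bit of each of the
    first 8 pixel values, leaving the visible image essentially unchanged."""
    out = list(pixels)
    for i in range(8):
        bit = (mark >> (7 - i)) & 1
        out[i] = (out[i] & ~1) | bit  # overwrite only the LSB
    return out

def detect(pixels):
    """Read the LSBs back; a match against a known tag flags generated content."""
    mark = 0
    for i in range(8):
        mark = (mark << 1) | (pixels[i] & 1)
    return mark

image = [200, 201, 198, 77, 76, 80, 81, 79, 50, 51]  # toy grayscale pixels
tagged = embed(image)
# detect(tagged) == WATERMARK, while each pixel value changes by at most 1
```

The fragility of this toy version is exactly why the article's call for diversified, more robust watermarking solutions matters: a simple LSB tag is destroyed by re-encoding, cropping, or a screenshot.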


BOOSTING DIGITAL LITERACY

Combating disinformation also requires addressing users' ability to critically
engage with AI-generated content, particularly during election cycles. This
requires improved vigilance in identifying and reporting misleading or harmful
content. However, research shows that our understanding of what AI can do and
our ability to spot fake content remains limited. Although skepticism is often
taught from an early age in the consumption of written content, technological
innovations now necessitate the extension of this practice to audio and visual
media to develop a more discerning audience. 




TESTING GROUND

As adversarial actors adapt and evolve their use of AI to create and spread
disinformation, 2024 and its multitude of elections will be a testing ground for
how effectively companies, governments, and consumers are able to combat this
threat. Not only will authorities need to double down on ensuring sufficient
protective measures to guard people, institutions, and political processes
against AI-driven disinformation, but it will also become increasingly critical
to ensure that communities are equipped with the digital literacy and vigilance
needed to protect themselves where other measures may fail.




ABOUT THE AUTHOR(S)

Erin Drake

Associate, Strategic Intelligence, S-RM

Erin Drake is an associate in S-RM’s Strategic Intelligence team, where she
leads on case management of regular and bespoke consulting projects. She joined
the firm in 2017 and has worked on a variety of projects ranging from threat
assessments to security risk assessments across several markets. This often
entails developing client-specific approaches and methodological frameworks
for high-level and detailed bespoke projects, to support clients in
understanding and monitoring the security, political, regulatory, reputational,
geopolitical, and macroeconomic threats present in their operating environment.
Erin’s expertise includes global maritime security issues, political stability
concerns in the commercial sector, and conflict analysis. Erin holds a master's
degree in international relations with a focus on global security issues like
nuclear proliferation and multilateral diplomacy.


Melissa DeOrio

Global Cyber Threat Intelligence Lead, S-RM

Melissa DeOrio is Global Cyber Threat Intelligence Lead at S-RM, supporting
clients on a variety of proactive cyber and cyber-threat intelligence services.
Before joining S-RM, Melissa worked on US Federal Law Enforcement cyber
investigations as a cyber targeter. In this role, Melissa utilized numerous
cyber-investigative techniques and methodologies to investigate cyber threat
actors and groups including open source intelligence techniques, cryptocurrency
asset tracing as well as identifying and mapping threat actor tactics,
techniques, and procedures (TTPs) to provide tactical and strategic intelligence
reports. Melissa began her career in corporate intelligence, where she
specialized in Turkish regional investigations, managed a global team of
researchers, and played a role in the development and implementation of a new
compliance program at a leading management consulting firm. 
