URL: https://therecord.media/openai-report-china-russia-iran-influence-operations



Image: Zac Wolff via Unsplash
James Reddick
May 30th, 2024

OpenAI models used in nation-state influence campaigns, company says

Threat actors linked to the governments of Russia, China and Iran used OpenAI’s
tools for influence operations, the company said Thursday. 

In its first report on the abuse of its models, OpenAI said that over the last
three months it had disrupted five campaigns carrying out influence operations. 

The groups used the company’s tools to generate a variety of content — usually
text, with some photos — including articles and social media posts, and to debug
code and analyze social media activity. Multiple groups used the service to
create phony engagement by replying to artificial content with fake comments.

“All of these operations used AI to some degree, but none used it exclusively,”
the company said. “Instead, AI-generated material was just one of many types of
content they posted, alongside more traditional formats, such as manually
written texts, or memes copied from across the internet.”

The rise of generative AI has sparked fears that the tools will make it easier
than ever to carry out malicious activity online, like the creation and spread
of deepfakes. With a spate of elections this year, and stark divisions between
China, Russia, Iran and the West, experts have raised alarms. 

According to the company, however, the influence operations have had little
reach, and none scored higher than a 2 out of 6 on a metric called the “Breakout
Scale”, which measures how much influence specific malicious activity likely has
on audiences. A recent report by Meta on influence operations reached a similar
conclusion about inauthentic activity on its platforms.  

OpenAI detected campaigns by two different Russian actors — one an unknown
group it dubbed Bad Grammar and the other Doppelgänger, a prolific malign
network known for spreading disinformation about the war in Ukraine. It also
disrupted the activity of the Chinese group Spamouflage, which the FBI has said
is tied to China’s Ministry of Public Security. 

An Iranian group, the International Union of Virtual Media (IUVM), reportedly
used the tools to create content for its website, usually with an anti-US and
anti-Israel focus. An Israeli political campaign management firm called STOIC
was also discovered abusing the models, creating content “loosely associated”
with the war in Gaza and relations between Jews and Muslims.

OpenAI disrupted four Doppelgänger clusters. One used generative AI to create
short text comments in English, French, German, Italian and Polish; a second
translated articles from Russian and generated text about them for social
media; a third generated articles in French; and a fourth used the technology
to take content from a Doppelgänger website and synthesize it into Facebook
posts.

The report also highlights instances where the company’s software prevented
threat actors from achieving their goals. For example, Doppelgänger tried to
create images of European politicians but was stopped, and Bad Grammar posted
generated content that included denials from the AI model.

“AI can change the toolkit that human operators use, but it does not change
the operators themselves,” the company said. “Our investigations showed that
they were as prone to human error as previous generations have been.”


Tags
 * artificial intelligence (AI)
 * OpenAI
 * influence operations
 * Iran
 * Russia
 * social media


James Reddick has worked as a journalist around the world, including in
Lebanon and in Cambodia, where he was Deputy Managing Editor of The Phnom Penh
Post. He is also a radio and podcast producer for outlets like Snap Judgment.




© Copyright 2024 | The Record from Recorded Future News