Use of AI Is Seeping Into Academic Journals—and It’s Proving Difficult to Detect

Amanda Hoover
Science
Aug 17, 2023 7:00 AM


Ethics watchdogs are looking out for potentially undisclosed use of generative
AI in scientific writing. But there’s no foolproof way to catch it all yet.
Photograph: Yifei Fang/Getty Images

In its August edition, Resources Policy, an academic journal under the Elsevier
publishing umbrella, featured a peer-reviewed study about how ecommerce has
affected fossil fuel efficiency in developing nations. But buried in the report
was a curious sentence: “Please note that as an AI language model, I am unable
to generate specific tables or conduct tests, so the actual results should be
included in the table.”

The study’s three listed authors had names and university or institutional
affiliations—they did not appear to be AI language models. But for anyone who
has played around with ChatGPT, that phrase may sound familiar: The generative AI
chatbot often prefaces its statements with this caveat, noting its weaknesses in
delivering some information. After a screenshot of the sentence was posted to X,
formerly Twitter, by another researcher, Elsevier began investigating. The
publisher is looking into the use of AI in this article and “any other possible
instances,” Andrew Davis, vice president of global communications at Elsevier,
told WIRED in a statement.

Elsevier’s AI policies do not block the use of AI tools to help with writing,
but they do require disclosure. The publishing company uses its own in-house AI
tools to check for plagiarism and completeness, but it does not allow editors to
use outside AI tools to review papers.



The authors of the study did not respond to emailed requests for comment from
WIRED, but Davis says Elsevier has been in contact with them, and that the
researchers are cooperating. “The author intended to use AI to improve the
quality of the language (which is within our policy), and they accidentally left
in those comments—which they intend to clarify,” Davis says. The publisher
declined to provide more information on how it would remedy the Resources Policy
situation, citing the ongoing nature of the inquiry.

The rapid rise of generative AI has stoked anxieties across disciplines. High
school teachers and college professors are worried about the potential for
cheating. News organizations have been caught with shoddy articles penned by AI.
And now, peer-reviewed academic journals are grappling with submissions in which
the authors may have used generative AI to write outlines, drafts, or even
entire papers, but failed to make the AI use clear.



Journals are taking a patchwork approach to the problem. The JAMA Network, which
includes titles published by the American Medical Association, prohibits listing
artificial intelligence generators as authors and requires disclosure of their
use. The family of journals produced by Science does not allow text, figures,
images, or data generated by AI to be used without editors’ permission. PLOS ONE
requires anyone who uses AI to detail what tool they used, how they used it, and
ways they evaluated the validity of the generated information. Nature has banned
images and videos that are generated by AI, and it requires the use of language
models to be disclosed. Many journals’ policies make authors responsible for the
validity of any information generated by AI.

Experts say there’s a balance to strike in the academic world when using
generative AI—it could make the writing process more efficient and help
researchers more clearly convey their findings. But the tech—when used in many
kinds of writing—has also dropped fake references into its responses, made
things up, and reiterated sexist and racist content from the internet, all of
which would be problematic if included in published scientific writing.



If researchers use these generated responses in their work without strict
vetting or disclosure, they raise major credibility issues. Not disclosing use
of AI would mean authors are passing off generative AI content as their own,
which could be considered plagiarism. They could also potentially be spreading
AI’s hallucinations, or its uncanny ability to make things up and state them as
fact.



It’s a big issue, David Resnik, a bioethicist at the National Institute of
Environmental Health Sciences, says of AI use in scientific and academic work.
Still, he says, generative AI is not all bad—it could help researchers whose
native language is not English write better papers. “AI could help these authors
improve the quality of their writing and their chances of having their papers
accepted,” Resnik says. But those who use AI should disclose it, he adds.

For now, it's impossible to know how extensively AI is being used in academic
publishing, because there’s no foolproof way to check for AI use, as there is
for plagiarism. The Resources Policy paper caught a researcher’s attention
because the authors seem to have accidentally left behind a clue to a large
language model’s possible involvement. “Those are really the tips of the iceberg
sticking out,” says Elisabeth Bik, a science integrity consultant who runs the
blog Science Integrity Digest. “I think this is a sign that it's happening on a
very large scale.”


In 2021, Guillaume Cabanac, a professor of computer science at the University of
Toulouse in France, found odd phrases in academic articles, like “counterfeit
consciousness” instead of “artificial intelligence.” He and a team coined the
idea of looking for “tortured phrases,” or word soup in place of straightforward
terms, as indicators that a document likely comes from text generators. He’s
also on the lookout for generative AI in journals, and is the one who flagged
the Resources Policy study on X.
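
Automating that kind of screening can start as a simple dictionary lookup: scan
a manuscript for known tortured phrases and report the standard term each one
likely replaces. The Python sketch below illustrates the idea; only
“counterfeit consciousness” comes from this story, and the other dictionary
entries are invented stand-ins, not Cabanac’s actual list.

```python
# Minimal sketch of tortured-phrase screening. Only "counterfeit
# consciousness" -> "artificial intelligence" appears in this story;
# the other entries are invented stand-ins, not Cabanac's real list.
import re

TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",  # from the story
    "profound learning": "deep learning",                     # stand-in
    "irregular backwoods": "random forest",                   # stand-in
}

def flag_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in the text."""
    lowered = text.lower()
    return [
        (phrase, expected)
        for phrase, expected in TORTURED_PHRASES.items()
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered)
    ]

sample = "Counterfeit consciousness can improve the irregular backwoods model."
for phrase, expected in flag_tortured_phrases(sample):
    print(f"'{phrase}' may be a tortured form of '{expected}'")
```

A real screener would need a far larger phrase list and fuzzier matching,
since paraphrasing tools vary their output.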

Cabanac investigates studies that may be problematic, and he has been flagging
potentially undisclosed AI use. To protect scientific integrity as the tech
develops, scientists must educate themselves, he says. “We, as scientists, must
act by training ourselves, by knowing about the frauds,” Cabanac says. “It’s a
whack-a-mole game. There are new ways to deceive.”

Tech advances since then have made these language models even more
convincing—and more appealing as writing partners. In July, two researchers
used ChatGPT to write an entire research paper in an hour to test the
chatbot’s ability to compete in the scientific publishing world. It wasn’t
perfect, but with prompting, the chatbot pulled together a paper with solid
analysis.

That was a study to evaluate ChatGPT, but it shows how the tech could be used by
paper mills—companies that churn out scientific papers on demand—to create more
questionable content. Paper mills are used by researchers and institutions that
may feel pressure to publish research but who don’t want to spend the time and
resources to conduct their own original work. With AI, this process could become
even easier. AI-written papers could also draw attention away from good work by
diluting the pool of scientific literature.

And the issues could reach beyond text generators—Bik says she also worries
about AI-generated images, which could be manipulated to create fraudulent
research. It can be difficult to prove such images are not real.

Some researchers want to crack down on undisclosed AI writing, to screen for it
just as journals might screen for plagiarism. In June, Heather Desaire, a
professor of chemistry at the University of Kansas, was an author on a study
demonstrating a tool that can differentiate with 99 percent accuracy between
science writing produced by a human and entries produced by ChatGPT. Desaire
says the team sought to build a highly accurate tool, “and the best way to do
that is to focus on a narrow type of writing.” Other AI writing detection tools
billed as “one-size-fits-all” are usually less accurate.



The study found that ChatGPT typically produces less complex content than
humans, is more general in its references (using terms like “others” instead of
specifically naming groups), and uses fewer types of punctuation. Human writers
were more likely to use words like “however,” “although,” and “but.” But the study
only looked at a small data set of Perspectives articles published in Science.
Desaire says more work is needed to expand the tool’s capabilities in detecting
AI writing across different journals. The team is “thinking more about how
scientists—if they wanted to use it—would actually use it,” Desaire says, “and
verifying that we can still detect the difference in those cases.”
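
As a rough illustration of the surface features the study describes
(punctuation variety and discourse markers such as “however,” “although,” and
“but”), here is a minimal Python sketch. It is not Desaire’s actual tool, and
the feature set is a stand-in for whatever the published classifier uses.

```python
# Illustrative only: compute the kinds of surface features the study
# describes. This is NOT Desaire's tool; the features are stand-ins.
import string

DISCOURSE_MARKERS = {"however", "although", "but"}  # named in the story

def stylometric_features(text: str) -> dict[str, float]:
    words = text.lower().split()
    # The study found humans tend to use more types of punctuation.
    punct_types = {ch for ch in text if ch in string.punctuation}
    # Humans also used however/although/but more often than ChatGPT.
    markers = sum(
        1 for w in words if w.strip(string.punctuation) in DISCOURSE_MARKERS
    )
    return {
        "punct_diversity": float(len(punct_types)),
        "marker_rate": markers / max(len(words), 1),
    }

print(stylometric_features(
    "However, although the effect is small, it is real; the data (n=12) agree!"
))
```

Features like these could then feed a standard supervised classifier (logistic
regression, say) trained on paragraphs labeled as human- or ChatGPT-written,
which is broadly the setup such detection studies use.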






Amanda Hoover is a general assignment staff writer at WIRED. She previously
wrote tech features for Morning Brew and covered New Jersey state government for
The Star-Ledger. She was born in Philadelphia, lives in New York, and is a
graduate of Northeastern University.
Topics: artificial intelligence, ChatGPT, science, research, ethics, publishing, academia