CRITICAL BUGS PUT HUGGING FACE AI PLATFORM IN A 'PICKLE'

One issue would have allowed cross-tenant attacks, and another enabled access to
a shared registry for container images; exploitation via an insecure Pickle file
showcases emerging risks for AI-as-a-service more broadly.

Jai Vijayan, Contributing Writer

April 5, 2024

4 Min Read
Source: Barry Mason via Alamy Stock Photo


Two critical security vulnerabilities in the Hugging Face AI platform opened the
door to attackers looking to access and alter customer data and models.

One of the security weaknesses gave attackers a way to access machine learning
(ML) models belonging to other customers on the Hugging Face platform, and the
second allowed them to overwrite all images in a shared container registry.
Both flaws, discovered by researchers at Wiz, stemmed from the ability of
attackers to take over parts of Hugging Face's inference infrastructure.



Wiz researchers found weaknesses in three specific components: Hugging Face's
Inference API, which allows users to browse and interact with available models
on the platform; Hugging Face Inference Endpoints — or dedicated infrastructure
for deploying AI models into production; and Hugging Face Spaces, a hosting
service for showcasing AI/ML applications or for working collaboratively on
model development.


THE PROBLEM WITH PICKLE

In examining Hugging Face's infrastructure and ways to weaponize the bugs they
discovered, Wiz researchers found that anyone could easily upload an AI/ML model
to the platform, including models based on the Pickle format. Pickle is a widely
used Python module for serializing Python objects to a file. Though even the
Python Software Foundation itself has deemed Pickle insecure, the format remains
popular because of its ease of use and the familiarity people have with it.



"It is relatively straightforward to craft a PyTorch (Pickle) model that will
execute arbitrary code upon loading," according to Wiz.
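
The mechanism Wiz describes is easy to illustrate with Python's standard pickle
module. The sketch below uses a deliberately harmless payload; the class name
and the print() call are placeholders, and a real attack would substitute
arbitrary code such as a reverse shell.

    import pickle

    # Illustrative only: __reduce__ tells pickle which callable to invoke when
    # the object is deserialized. Here the payload is a harmless print(); in
    # the attack Wiz describes, it would be attacker-controlled code.
    class PickledPayload:
        def __reduce__(self):
            return (print, ("code executed during unpickling",))

    blob = pickle.dumps(PickledPayload())

    # Simply loading the data runs the payload; no method call is needed.
    pickle.loads(blob)

Loading a conventional PyTorch checkpoint with torch.load() runs this same
unpickling step, which is why a booby-trapped model can execute code the moment
it is loaded.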



Wiz researchers took advantage of the ability to upload a private Pickle-based
model to Hugging Face that would run a reverse shell upon loading. They then
interacted with it using the Inference API to achieve shell-like functionality,
which the researchers used to explore their environment on Hugging Face's
infrastructure.

That exercise quickly showed the researchers that their model was running in a
pod in a cluster on Amazon Elastic Kubernetes Service (EKS). From there, they
leveraged common misconfigurations to extract information and escalate their
privileges, gaining access to secrets that could have exposed other tenants on
the shared infrastructure.
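
The article does not detail which misconfigurations were involved. One common
Kubernetes pattern, sketched below purely as a hedged illustration rather than
a description of Wiz's actual steps, is a pod that can read its automatically
mounted service-account token and use it against the cluster API with broader
permissions than it needs.

    # Minimal sketch (not Wiz's tooling): check whether the pod has a
    # Kubernetes service-account token mounted at the standard default path.
    # An over-privileged token is the kind of misconfiguration that lets an
    # attacker query the API server for cluster secrets.
    from pathlib import Path

    SA_DIR = Path("/var/run/secrets/kubernetes.io/serviceaccount")

    if (SA_DIR / "token").exists():
        token = (SA_DIR / "token").read_text()
        namespace = (SA_DIR / "namespace").read_text()
        # With this token, an attacker would probe the in-cluster API server
        # (https://kubernetes.default.svc) to see which secrets it can list.
        print(f"service-account token mounted for namespace {namespace!r}")
    else:
        print("no service-account token mounted; the pod is better isolated")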

With Hugging Face Spaces, Wiz found that an attacker could execute arbitrary
code during application build time and use it to examine network connections
from their machine. That review revealed a connection to a shared container
registry containing images belonging to other customers, which the researchers
could have tampered with.



"In the wrong hands, the ability to write to the internal container registry
could have significant implications for the platform's integrity and lead to
supply chain attacks on customers’ spaces," Wiz said.

Hugging Face said it had completely mitigated the risks that Wiz discovered.
The company meanwhile attributed the issues at least partly to its decision to
continue allowing Pickle files on the platform, despite the well-documented
security risks associated with such files.

"Pickle files have been at the core of most of the research done by Wiz and
other recent publications by security researchers about Hugging Face," the
company noted. Allowing Pickle use on Hugging Face is "a burden on our
engineering and security teams and we have put in significant effort to mitigate
the risks while allowing the AI community to use tools they choose."
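
The article does not name the specific mitigations. One widely used way to let
the community keep the tools it chooses while removing Pickle's code-execution
risk is to distribute weights in a tensor-only format; the sketch below is an
assumption along those lines, not a description of Hugging Face's actual
controls, and the file names are hypothetical.

    # Sketch of pickle-free model loading; file names are hypothetical.
    import torch
    from safetensors.torch import load_file

    # Option 1: safetensors stores raw tensors only, so loading it cannot run
    # arbitrary code the way unpickling can.
    state_dict = load_file("model.safetensors")

    # Option 2: if a pickle-based checkpoint is unavoidable, restrict the
    # unpickler to plain tensor data (available in PyTorch 1.13 and later).
    state_dict = torch.load("pytorch_model.bin", weights_only=True)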




EMERGING RISKS WITH AI-AS-A-SERVICE

Wiz described its discovery as indicative of the risks organizations need to be
cognizant of when using shared infrastructure to host, run, and develop new AI
models and applications, an approach becoming known as "AI-as-a-service." The
company likened the risks and associated mitigations to those that
organizations encounter in public cloud environments and recommended they apply
the same mitigations in AI environments as well.

"Organizations should ensure that they have visibility and governance of the
entire AI stack being used and carefully analyze all risks," Wiz said in a blog
this week. This includes analyzing "usage of malicious models, exposure of
training data, sensitive data in training, vulnerabilities in AI SDKs, exposure
of AI services, and other toxic risk combinations that may exploited by
attackers," the security vendor said.

Eric Schwake, director of cybersecurity strategy at Salt Security, says there
are two major issues related to the use of AI-as-a-service that organizations
need to be aware of. "First, threat actors can upload harmful AI models or
exploit vulnerabilities in the inference stack to steal data or manipulate
results," he says. "Second, malicious actors can try to compromise training
data, leading to biased or inaccurate AI outputs, commonly known as data
poisoning."

Identifying these issues can be challenging, especially given how complex AI
models are becoming, he says. To help manage some of this risk, it's important
for organizations to understand how their AI apps and models interact with APIs
and to find ways to secure them. "Organizations might also want to explore
Explainable AI (XAI) to help make AI models more comprehensible," Schwake says,
"and it could help identify and mitigate bias or risk within the AI models."




ABOUT THE AUTHOR(S)

Jai Vijayan, Contributing Writer



Jai Vijayan is a seasoned technology reporter with over 20 years of experience
in IT trade journalism. He was most recently a Senior Editor at Computerworld,
where he covered information security and data privacy issues for the
publication. Over the course of his 20-year career at Computerworld, Jai also
covered a variety of other technology topics, including big data, Hadoop,
Internet of Things, e-voting, and data analytics. Prior to Computerworld, Jai
covered technology issues for The Economic Times in Bangalore, India. Jai has a
Master's degree in Statistics and lives in Naperville, Ill.
