URL: https://www.darkreading.com/vulnerabilities-threats/samsung-engineers-sensitive-data-chatgpt-warnings-ai-use-workplace

SAMSUNG ENGINEERS FEED SENSITIVE DATA TO CHATGPT, SPARKING WORKPLACE AI WARNINGS

In three separate incidents, engineers at the Korean electronics giant
reportedly shared sensitive corporate data with the AI-powered chatbot.
Jai Vijayan
Contributing Writer, Dark Reading
April 11, 2023
Source: Wright Studio via Shutterstock


Recent reports that engineers at Samsung Electronics inadvertently leaked
sensitive company information via ChatGPT in three separate incidents highlight
why policies governing employees' use of AI services in the workplace are
quickly becoming a must for enterprise organizations.



The Economist Korea, one of the first to report on the data leaks, described the
first incident as involving an engineer who pasted buggy source code from a
semiconductor database into ChatGPT, with a prompt to the chatbot to fix the
errors. In the second instance, an employee wanting to optimize code for
identifying defects in certain Samsung equipment pasted that code into ChatGPT.
The third leak resulted when an employee asked ChatGPT to generate the minutes
of an internal meeting at Samsung.

The incidents played out exactly as researchers have been warning they could
ever since OpenAI made ChatGPT publicly available in November. Security
analysts have noted that when users share data with ChatGPT, the information
can end up as training data for the underlying machine learning/large language
model (ML/LLM), and that someone could later retrieve it with the right
prompts.

ChatGPT's creator, OpenAI, has itself warned users of the risk: "We are not
able to delete specific prompts from your history. Please don't share any
sensitive information in your conversations," OpenAI's user guide notes.




SAMSUNG ENACTS EMERGENCY ANTI-CHATGPT MEASURES

The situation has apparently prompted a rethink of ChatGPT use at Samsung. The
third incident came barely three weeks after the South Korean electronics giant
allowed employees access to the generative AI tool; the company had initially
banned the technology over security and privacy concerns before relenting.



The Economist reported that Samsung's emergency measures to limit internal
ChatGPT use include restricting employees from submitting prompts larger than
1,024 bytes and considering disciplinary action against employees who share
corporate data with LLMs such as ChatGPT.
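A size restriction like that is straightforward to enforce in an internal chat
wrapper or outbound proxy. The sketch below is purely illustrative (the
function name and enforcement point are hypothetical, not Samsung's actual
implementation); it simply rejects any prompt whose UTF-8 encoding exceeds
1,024 bytes before it leaves the network:

```python
# Hypothetical guard modeled on the 1,024-byte prompt limit Samsung
# reportedly imposed. Names and structure are illustrative only.
MAX_PROMPT_BYTES = 1024

def is_prompt_allowed(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the UTF-8-encoded prompt fits within the byte limit."""
    return len(prompt.encode("utf-8")) <= limit

# A short question passes; a pasted source file does not.
print(is_prompt_allowed("Why does this loop never terminate?"))  # True
print(is_prompt_allowed("x" * 2000))                             # False
```

Note that the check counts bytes, not characters: multibyte text (such as
Korean) hits the limit sooner than ASCII of the same length.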

Samsung did not respond to a Dark Reading request seeking clarification on the
three incidents and the company's response to them. Security researchers,
however, have been warning that such leaks could become common as employees
begin leveraging ChatGPT for various use cases within the enterprise.

A study that Cyberhaven conducted earlier this year found many workers at client
companies pasting source code, client data, regulated information, and other
confidential data into ChatGPT. Examples included an executive who pasted his
company's 2023 strategy document into the chatbot so it could generate a
PowerPoint slide presentation, and a doctor who entered a patient's name and
medical diagnosis so ChatGPT could generate a letter to the patient's insurance
company.




AI: A RISK VS. BENEFIT GAMBLE FOR ENTERPRISES

"From an employee's perspective, ChatGPT-like tools offer the potential to be
exponentially more productive, making them hard to ignore," says Krishna
Vishnubhotla, vice president of product strategy at Zimperium. However, it's
important to weigh the risks against the rewards, an equation that will vary
depending on specific roles and responsibilities, he notes. For instance,
employees working with intellectual property would require more guidance and
precautions on how to use tools like ChatGPT, he says: "It is crucial for
organizations to understand how productivity will look and where it will come
from before they embrace this opportunity."

Michael Rinehart, vice president of artificial intelligence at Securiti, says
it's important to remember that advanced generative AI tools such as ChatGPT
cannot distinguish between what they should and should not memorize during
training. So, organizations that want to harness them for different use cases
should consider using tools for classifying, masking, or tokenizing personal
and other sensitive data.
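The masking idea can be illustrated in a few lines. This is a hedged sketch
only: the regex patterns and placeholder tokens below are hypothetical, not
the API of any particular data-classification product, and real tools use far
more sophisticated detection than simple pattern matching:

```python
import re

# Illustrative redaction pass applied before text reaches an LLM.
# Patterns and placeholder tokens are hypothetical examples.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def mask_sensitive(text: str) -> str:
    """Replace matched identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

prompt = "Write to alice@example.com about claim 123-45-6789."
print(mask_sensitive(prompt))
# → "Write to [EMAIL] about claim [SSN]."
```

Tokenization-style tools go a step further, mapping each identifier to a
reversible token so the original value can be restored in the response.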

"A second approach is the use of differential privacy. Differential privacy
offers provable protection of individuals in structured data," Rinehart says.


DIFFERENTIAL PRIVACY

Rinehart describes differential privacy as a technique for protecting data in a
dataset. "It involves adding noise — or error — to real data," he says. The
synthetic data will have many of the important characteristics of real-world
data without exposing the individuals in the dataset. "If used properly, even if
the synthetic dataset were leaked, privacy would still be protected.
Enterprise-grade synthetic data generators with differential privacy are now
available," he says.
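The core mechanism is simple to illustrate. The sketch below is a textbook
Laplace mechanism for a counting query, not the internals of any vendor's
synthetic-data generator: it adds noise calibrated to the query's sensitivity
so that any one individual's presence in the dataset is statistically hidden.

```python
import math
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) via inverse transform sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(true_count: int, epsilon: float, rng: random.Random) -> float:
    """Release a count with epsilon-differential privacy.

    A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so the Laplace scale is 1 / epsilon.
    """
    scale = 1.0 / epsilon
    return true_count + laplace_noise(scale, rng)

rng = random.Random(42)
noisy = dp_count(1000, epsilon=0.5, rng=rng)
print(noisy)  # a value near 1000; the deviation is the privacy noise
```

Smaller epsilon means more noise and stronger privacy; the "provable" part is
that the released value's distribution changes only slightly whether or not
any single individual is in the data.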

Rinehart perceives attempts by organizations to ban ChatGPT use in the workplace
as ill conceived and unlikely to work. "A consistent story in the field of
security is that it may be better to offer a secure path to using a tool than to
block it," he says. "If a tool offers incredibly high benefits, people may
attempt to circumvent blocks to take advantage of it."

Melissa Bischoping, director of endpoint security at Tanium, says the issue
with sharing data with ChatGPT lies in the fact that its creators can see that
data and use it to continue training and growing the model. Once a user shares
information with ChatGPT, that information becomes part of the next model.

"As organizations want to enable use of powerful tools like ChatGPT, they
should explore options that allow them to leverage a privately trained model
so their valuable data is only used by their model and not leveraged by
iterations on training for the next publicly available model," she says.

Threat Intelligence | Application Security | Advanced Threats
Copyright © 2023 Informa PLC Informa UK Limited is a company registered in
England and Wales with company number 1072954 whose registered office is 5
Howick Place, London, SW1P 1WG.




