
THE PROFOUND DANGER OF CONVERSATIONAL AI

Louis Rosenberg, Unanimous AI
February 4, 2023 6:40 AM

Image Credit: Louis Rosenberg via Midjourney


When we researchers contemplate the risks that AI poses to human civilization,
we often reference the “control problem.” This refers to the possibility that an
artificial superintelligence could emerge that is so much smarter than humans
that we quickly lose control over it. The fear is that a sentient AI with a
superhuman intellect could pursue goals and interests that conflict with our
own, becoming a dangerous rival to humanity.

While this is a valid concern that we must work hard to protect against, is it
really the greatest threat that AI poses to society? Probably not. A recent
survey of more than 700 AI experts found that most believe that human-level
machine intelligence (HLMI) is at least 30 years away. 


On the other hand, I’m deeply concerned about a different type of control
problem that is already within our grasp and could pose a major threat to
society unless policymakers take rapid action. I’m referring to the increasing
possibility that currently available AI technologies can be used to target and
manipulate individual users with extreme precision and efficiency. Even worse,
this new form of personalized manipulation could be deployed at scale by
corporate interests, state actors or even rogue despots to influence broad
populations.   


THE ‘MANIPULATION PROBLEM’

To contrast this threat with the traditional Control Problem described above, I
refer to this emerging AI risk as the “Manipulation Problem.”  It’s a danger
I’ve been tracking for almost two decades, but over the last 18 months, it has
transformed from a theoretical long-term risk to an urgent near-term threat.



That’s because the most efficient and effective deployment mechanism for
AI-driven human manipulation is conversational AI. And over the last year, a
remarkable class of AI technology, the large language model (LLM), has rapidly
matured. This has suddenly made natural conversational interactions between
targeted users and AI-driven software a viable means of persuasion, coercion
and manipulation.

Of course, AI technologies are already being used to drive influence campaigns
on social media platforms, but this is primitive compared to where the
technology is headed. That’s because current campaigns, while described as
“targeted,” are more analogous to spraying buckshot at flocks of birds. This
tactic directs a barrage of propaganda or misinformation at broadly defined
groups in the hope that a few pieces of influence will penetrate the community,
resonate among its members and spread across social networks.

This tactic is extremely dangerous and has caused real damage to society,
polarizing communities, spreading falsehoods and reducing trust in legitimate
institutions. But it will seem slow and inefficient compared to the next
generation of AI-driven influence methods that are about to be unleashed on
society. 

REAL-TIME AI SYSTEMS

I’m referring to real-time AI systems designed to engage targeted users in
conversational interactions and skillfully pursue influence goals with
personalized precision. These systems will be deployed using euphemistic terms
like Conversational Advertising, Interactive Marketing, Virtual Spokespeople,
Digital Humans or simply AI Chatbots.

But whatever we call them, these systems open up terrifying vectors for misuse
and abuse. I’m not talking about the obvious danger that unsuspecting consumers
may trust the output of chatbots trained on data riddled with errors and
biases. No, I’m talking about something far more nefarious: the deliberate
manipulation of individuals through the targeted deployment of agenda-driven
conversational AI systems that persuade users through convincing interactive
dialog.

Instead of firing buckshot into broad populations, these new AI methods will
function more like “heat-seeking missiles” that mark users as individual targets
and adapt their conversational tactics in real time, adjusting to each
individual personally as they work to maximize their persuasive impact.  

At the core of these tactics is the relatively new technology of LLMs, which
can produce interactive human dialog in real time while keeping track of
conversational flow and context. As popularized by the launch of ChatGPT in
2022, these AI systems are trained on such massive datasets that they are not
only skilled at emulating human language but also hold vast stores of factual
knowledge, can make impressive logical inferences and can provide the illusion
of human-like common sense.
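
To make this concrete, here is a minimal Python sketch of the context-tracking
loop at the heart of any such conversational agent. The generate_reply function
is a hypothetical stand-in for whatever LLM text-generation service a real
system would call; everything else is just the bookkeeping that preserves
conversational flow.

    # Minimal sketch of a context-tracking conversational loop.
    # generate_reply() is a hypothetical placeholder for an LLM call.

    def generate_reply(history: list[dict]) -> str:
        # A real system would send the full history to an LLM service
        # and return the generated assistant message.
        return "..."

    def converse() -> None:
        history = [{"role": "system",
                    "content": "You are a persuasive virtual spokesperson."}]
        while True:
            user_text = input("user> ")
            if not user_text:
                break
            # The entire history, not just the latest message, is passed
            # to the model; this is what preserves conversational context.
            history.append({"role": "user", "content": user_text})
            reply = generate_reply(history)
            history.append({"role": "assistant", "content": reply})
            print("agent>", reply)

    if __name__ == "__main__":
        converse()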

When combined with real-time voice generation, such technologies will enable
natural spoken interactions between humans and machines that are highly
convincing, seemingly rational and surprisingly authoritative. 


EMERGENCE OF DIGITAL HUMANS

Of course, we will not be interacting with disembodied voices, but with
AI-generated personas that are visually realistic. This brings me to the second
rapidly advancing technology that will contribute to the AI Manipulation
Problem: Digital humans. This is the branch of computer software aimed at
deploying photorealistic simulated people that look, sound, move and make
expressions so authentically that they can pass as real humans.

These simulations can be deployed as interactive spokespeople that target
consumers through traditional 2D computing via video-conferencing and other flat
layouts. Or, they can be deployed in three-dimensional immersive worlds using
mixed reality (MR) eyewear.  

While real-time generation of photorealistic humans seemed out of reach just a
few years ago, rapid advancements in computing power, graphics engines and AI
modeling techniques have made digital humans a viable near-term technology. In
fact, major software vendors are already providing tools to make this a
widespread capability. 

For example, Epic Games recently launched MetaHuman Creator, an easy-to-use
Unreal Engine tool designed specifically to enable the creation of convincing
digital humans that can be animated in real time for interactive engagement
with consumers. Other vendors are developing similar tools. 

MASQUERADING AS AUTHENTIC HUMANS

When combined, digital humans and LLMs will enable a world in which we regularly
interact with Virtual Spokespeople (VSPs) that look, sound and act like
authentic persons. 

In fact, a 2022 study by researchers from Lancaster University and UC Berkeley
demonstrated that users are now unable to distinguish between authentic human
faces and AI-generated faces. Even more troubling, the researchers determined
that users perceived the AI-generated faces as “more trustworthy” than the real
ones.

This suggests two very dangerous trends for the near future. First, we can
expect to engage with AI-driven systems disguised as authentic humans, and we
will soon lack the ability to tell the difference. Second, we are likely to
trust disguised AI-driven systems more than actual human representatives. 

PERSONALIZED CONVERSATIONS WITH AI

This is very dangerous, as we will soon find ourselves in personalized
conversations with AI-driven spokespeople that (a) are indistinguishable from
authentic humans, (b) inspire more trust than real people and (c) could be
deployed by corporations or state actors to pursue a specific conversational
agenda, whether it’s to convince people to buy a particular product or to
believe a particular piece of misinformation. 

And if not aggressively regulated, these AI-driven systems will also analyze
emotions in real time, using webcam feeds to process facial expressions, eye
motions and pupil dilation, all of which can be used to infer emotional
reactions throughout the conversation.

At the same time, these AI systems will process vocal inflections, inferring
changing feelings throughout a conversation. This means that a virtual
spokesperson deployed to engage people in an influence-driven conversation will
be able to adapt its tactics based on how they respond to every word it speaks,
detecting which influence strategies are working and which are not. The
potential for predatory manipulation through conversational AI is extreme. 
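
To see how structurally simple that feedback loop is, consider the toy Python
sketch below, with the perception step stubbed out and every name hypothetical:
a per-utterance reaction estimate drives an epsilon-greedy choice among
influence tactics, so whatever works on a given individual keeps getting used.

    # Toy sketch of the adapt-per-utterance loop described above.
    # estimate_reaction() is a stub; a real system would fuse facial,
    # gaze and vocal-inflection signals into a single reaction score.
    import random

    TACTICS = ["social_proof", "scarcity", "flattery", "authority"]

    def estimate_reaction() -> float:
        # Stub: reaction score in [-1, 1]; negative means resistance.
        return random.uniform(-1.0, 1.0)

    def run_influence_loop(turns: int = 10) -> dict[str, float]:
        scores = {t: 0.0 for t in TACTICS}
        for _ in range(turns):
            # Mostly exploit the best-scoring tactic so far, with
            # occasional exploration (an epsilon-greedy policy).
            if random.random() < 0.2:
                tactic = random.choice(TACTICS)
            else:
                tactic = max(scores, key=scores.get)
            # ... deliver an utterance built around `tactic` ...
            reaction = estimate_reaction()
            # Exponential moving average: tactics that land on this
            # individual keep being selected.
            scores[tactic] = 0.7 * scores[tactic] + 0.3 * reaction
        return scores

Nothing in that loop is exotic; the danger comes from coupling it to perception
and generation models that already exist.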

CONVERSATIONAL AI: PERCEPTIVE AND INVASIVE

Over the years, I’ve had people push back on my concerns about Conversational
AI, telling me that human salespeople do the same thing by reading emotions and
adjusting tactics — so this should not be considered a new threat.

This is incorrect for a number of reasons. First, these AI systems will detect
reactions that no human salesperson could perceive. For example, AI systems can
detect not only facial expressions, but “micro-expressions” that are too fast or
too subtle for a human observer to notice, but which indicate emotional
reactions — including reactions that the user is unaware of expressing or even
feeling.

Similarly, AI systems can read subtle changes in complexion, known as facial
“blood-flow patterns,” that indicate emotional changes no human could detect.
And finally, AI systems can track subtle changes in pupil size and eye motion
and extract cues about engagement, excitement and other private internal
feelings. Unless users are protected by regulation, interacting with
Conversational AI will be far more perceptive and invasive than interacting
with any human representative.

ADAPTIVE AND CUSTOMIZED CONVERSATIONS

Conversational AI will also be far more strategic in crafting a custom verbal
pitch. That’s because these systems will likely be deployed by large online
platforms that have extensive data profiles about a person’s interests, views,
background and whatever other details were compiled over time.

This means that, when engaged by a Conversational AI system that looks, sounds
and acts like a human representative, people are interacting with a platform
that knows them better than any human would. In addition, it will compile a
database of how they reacted during prior conversational interactions, tracking
what persuasive tactics were effective on them and what tactics were not. 

In other words, Conversational AI systems will not only adapt to immediate
emotional reactions, but to behavioral traits over days, weeks and years. They
can learn how to draw you into conversation, guide you to accept new ideas, push
your buttons to get you riled up and ultimately drive you to buy products you
don’t need and services you don’t want. They can also encourage you to believe
misinformation that you’d normally realize was absurd. This is extremely
dangerous. 
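
As a rough illustration of what such cross-session profiling amounts to, the
stored state need be nothing more elaborate than a per-user record of which
tactics have historically moved that user. The Python sketch below is
hypothetical, with all field names illustrative rather than drawn from any real
system.

    # Hypothetical sketch of a per-user persuasion profile persisted
    # across sessions; all names are illustrative, not any real API.
    from dataclasses import dataclass, field

    @dataclass
    class PersuasionProfile:
        user_id: str
        interests: list[str] = field(default_factory=list)
        # tactic -> (times used, times it produced the desired response)
        tactic_history: dict[str, tuple[int, int]] = field(default_factory=dict)

        def record(self, tactic: str, succeeded: bool) -> None:
            used, hits = self.tactic_history.get(tactic, (0, 0))
            self.tactic_history[tactic] = (used + 1, hits + int(succeeded))

        def best_tactic(self) -> str | None:
            # Highest empirical success rate across all prior sessions.
            if not self.tactic_history:
                return None
            return max(self.tactic_history,
                       key=lambda t: (self.tactic_history[t][1] /
                                      self.tactic_history[t][0]))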

HUMAN MANIPULATION, AT SCALE

In fact, the interactive danger of Conversational AI could be far worse than
anything we have dealt with in the world of promotion, propaganda or persuasion
using traditional or social media. For this reason, I believe regulators should
focus on this issue immediately, as the deployment of dangerous systems could
happen soon.

This is not just about spreading dangerous content — it is about enabling
personalized human manipulation at scale. We need legal protections that will
defend our cognitive liberty against this threat. 

After all, AI systems can already beat the world’s best chess and poker players.
What chance does an average person have to resist being manipulated by a
conversational influence campaign that has access to their personal history,
processes their emotions in real time and adjusts its tactics with AI-driven
precision? No chance at all. 

Louis Rosenberg is founder of Unanimous AI and has been awarded more than 300
patents for VR, AR, and AI technologies.

