URL: https://www.helpnetsecurity.com/2023/09/06/ai-social-engineering/


Dita Pesek, Associate Security Consultant, WithSecure
September 6, 2023

EMERGING THREAT: AI-POWERED SOCIAL ENGINEERING



Social engineering has always been a potent form of manipulation but, thanks to
AI advancements, malicious groups now have access to highly sophisticated tools,
suggesting that we may be facing far more elaborate social engineering attacks
in the future.

It is becoming increasingly evident that the current “don’t click the link”
training approach will not suffice to tackle the evolving nature of social
engineering.




THE IMPLEMENTATION OF LLMS BY MALICIOUS ACTORS

Large language models (LLMs) like ChatGPT are trained on vast amounts of text
data to generate human-like responses and perform various language-related
tasks. These models have millions or even billions of parameters, allowing them
to understand and generate text in a coherent and contextually relevant manner.

ChatGPT has become a powerful tool in malicious actors’ arsenal. The days of
poorly worded, error-ridden emails cluttering our spam boxes may soon be gone.
The text can now be enhanced and refined, making emails sound more convincing.

It’s worth noting that many phishing emails are crafted by non-native English
speakers, as numerous hacking organizations operate outside of English-speaking
countries. LLMs like ChatGPT allow these individuals to rewrite phishing emails
to better match their target audience’s language and context.

The recipients of such emails are frequently individuals who handle financial
matters or hold influential positions within the organization, enabling them to
execute transactions. Well-crafted emails tend to yield higher success rates.
WormGPT, an AI model available on the dark net, is designed specifically to
create text for hacking campaigns. With it, malicious actors can produce any
type of content without worrying about their accounts being blocked.


DEEPFAKES ARE GOOD BUT NOT (YET) FLAWLESS

Deepfake videos use AI and deep learning techniques to create highly realistic
but fake or fabricated content. Deepfakes often involve replacing the faces of
individuals in existing videos with other people’s faces, typically using
machine learning algorithms known as generative adversarial networks (GANs).
These advanced algorithms analyze and learn from vast amounts of data to
generate highly convincing visual and audio content that can deceive viewers
into believing that the manipulated video is authentic.
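The adversarial idea behind GANs can be illustrated with a deliberately tiny, one-dimensional sketch — nothing like a real deepfake pipeline, and all numbers here are illustrative: a linear "generator" learns to mimic a Gaussian "real data" distribution while a logistic-regression "discriminator" tries to tell the two apart.

```python
import math
import random

random.seed(0)

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

REAL_MEAN = 4.0          # "real" samples come from N(4, 1)
w, b = 0.1, 0.0          # discriminator D(x) = sigmoid(w*x + b)
a, c = 1.0, 0.0          # generator G(z) = a*z + c, with z ~ N(0, 1)
lr = 0.02

for _ in range(5000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    x_real = random.gauss(REAL_MEAN, 1.0)
    x_fake = a * random.gauss(0.0, 1.0) + c
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    b += lr * ((1 - d_real) - d_fake)

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss).
    z = random.gauss(0.0, 1.0)
    d_fake = sigmoid(w * (a * z + c) + b)
    a += lr * (1 - d_fake) * w * z
    c += lr * (1 - d_fake) * w

mean_gen = sum(a * random.gauss(0, 1) + c for _ in range(2000)) / 2000
print(f"generator mean after training: {mean_gen:.2f} (real mean {REAL_MEAN})")
```

At equilibrium the generator's samples become statistically hard for the discriminator to distinguish from the real ones — the same property that, at vastly larger scale, makes GAN-generated video so convincing.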

Deepfake technology is most effectively evaluated by watching videos in which
the “deepfaked” person is a celebrity or someone the viewer is visually
familiar with. Within the realm of available deepfake technology, Deepfakes
Web, a well-known deepfake generator, falls short: it is immediately apparent
that something is wrong with the videos.

However, DeepFaceLab, another deepfake creation tool, is a different story. It
is the tool behind most current deepfakes, and its believability hinges on the
skill of the creator. A Lucy Liu deepfake created with DeepFaceLab, for
example, is particularly impressive.

The challenge of achieving believability in deepfake videos lies in accurately
replicating hair and facial features. When the person serving as the canvas for
the deepfake has a significantly different hairline or facial structure, the
resulting deepfake appears less convincing.

However, malicious actors find themselves fortunate in this regard. There is an
abundance of aspiring actors who are willing to have their videos recorded and
their appearances altered. Furthermore, there is no shortage of individuals who
are open to being recorded engaging in various activities, especially when they
are assured that their identity will never be exposed.

The current use of deepfakes is even more worrying than the availability of the
tools to create them. Shockingly, around 90% of deepfakes are used for
nonconsensual pornography, particularly for revenge purposes. What compounds the
issue is the absence of specific laws in Europe to protect the victims.


A POTENT METHOD FOR BLACKMAIL

Imagine if someone were to capture fake hidden camera footage and utilize AI to
replace the participants’ faces with that of the victim. Although the footage is
fabricated, explaining the situation to a spouse or a boss becomes an incredibly
difficult task. The possibilities for compromising individuals are boundless.

As malicious actors gain the upper hand, we could potentially find ourselves
stepping into a new era of espionage, where the most resourceful and innovative
threat actors thrive. The introduction of AI brings about a new level of
creativity in various fields, including criminal activities.

The crucial question remains: How far will malicious actors push the boundaries?
We must not overlook the fact that cybercrime is a highly profitable industry
with billions at stake. Certain criminal organizations operate similarly to
legal corporations, having their own infrastructure of employees and resources.
It is only a matter of time before they delve into developing their own deepfake
generators (if they haven’t already done so).

With their substantial financial resources, it’s not a matter of whether it is
feasible but rather whether it will be deemed worthwhile. And in this case, it
likely will be.

What preventative measures are currently on offer? Various scanning tools have
emerged, asserting their ability to detect deepfakes. One such tool is
Microsoft’s Video Authenticator Tool. Unfortunately, it is currently limited to
a handful of organizations engaged in the democratic process.

Another free option is the Deepware deepfake scanner. Tested against YouTube
videos, it proved proficient at recognizing known deepfakes. However, when
presented with real content, it struggles to scan accurately, raising doubts
about its overall effectiveness. It appears to have been trained mainly on
known deepfakes and struggles to recognize anything else.

Additionally, Intel claims its FakeCatcher scanner has a 96% accuracy in
deepfake detection. However, given that most existing deepfakes can already be
recognized by humans, one may question the actual significance of this claim.
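A further reason to be cautious about a headline accuracy figure is the base-rate effect: if genuine deepfakes are rare in the stream of videos being scanned, even a 96%-accurate detector will mostly raise false alarms. A quick illustration — the 1% prevalence and the equal sensitivity/specificity split are assumptions for the sake of the example, not Intel's published figures:

```python
# Illustrative assumptions (not Intel's published figures): the detector has
# 96% sensitivity and 96% specificity, and 1% of scanned videos are deepfakes.
sensitivity = 0.96   # P(flagged | deepfake)
specificity = 0.96   # P(not flagged | genuine)
prevalence = 0.01    # P(deepfake) among scanned videos

true_pos = sensitivity * prevalence
false_pos = (1 - specificity) * (1 - prevalence)
ppv = true_pos / (true_pos + false_pos)  # P(deepfake | flagged)
print(f"P(actually a deepfake | flagged) = {ppv:.1%}")  # ≈ 19.5%
```

Under these assumptions, roughly four out of five flagged videos would be genuine, which is why accuracy alone says little about real-world usefulness.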


VOICE FAKES ALSO POSE A SIGNIFICANT THREAT TO ORGANIZATIONS

Voice fakes are artificially generated or manipulated audio recordings that aim
to imitate or impersonate someone’s voice. Like deepfake videos, voice fakes are
generated with advanced machine learning techniques, particularly speech
synthesis and voice conversion algorithms. The result is highly convincing audio
that mimics a specific individual’s speech pattern, tone, and nuances.

Voice fakes can be created based on just a few seconds of audio. However, to
effectively deceive someone who knows the individual well, longer recordings are
required. Obtaining such recordings becomes simpler when the targeted person
maintains a strong online presence.

Alternatively, adept social engineers can skillfully keep individuals talking
for over a minute, making the acquisition of voice samples relatively
effortless. Currently, voice fakes are more believable than deepfake videos,
and research into the target’s speech patterns only increases the probability
of a successful attack.

Consequently, we find ourselves in a situation where the success of such attacks
relies on the extent of effort that malicious actors are willing to invest. This
evolving landscape may have profound implications for so-called whale phishing
attacks, where high-profile figures are targeted. These types of social
engineering attacks garner the utmost attention and allocation of resources
within malicious organizations.

With the threat that voice fakes pose, it is becoming evident that implementing
two-factor authentication for sensitive phone calls, where transactions or the
sharing of sensitive information occur, is essential. We are entering a digital
communication landscape where the authenticity of any form of communication may
be called into question.
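One way to implement such a second factor is a time-based one-time password (TOTP, RFC 6238) that the caller must read out before any sensitive request is honored. A minimal sketch using only the Python standard library — the shared secret value and the phone-call framing are illustrative:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HOTP (RFC 4226): HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, period: int = 30, digits: int = 6, at=None) -> str:
    """TOTP (RFC 6238): HOTP keyed by the current 30-second time step."""
    t = int((time.time() if at is None else at) // period)
    return hotp(secret, t, digits)

# Both parties hold the same pre-shared secret (illustrative value).
secret = b"12345678901234567890"
print("code to read out over the call:", totp(secret))
```

Checking the spoken code against the locally computed one (allowing a time step of clock drift) gives the callee evidence that the caller holds the shared secret, no matter how convincing the voice sounds.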


SHOULD WE PEN TEST HUMANS?

As AI becomes increasingly integrated into everyday life, it naturally becomes
intertwined with the cybersecurity landscape. While the presence of voice fake
and deepfake scanners is promising, their accuracy must be thoroughly tested. It
is reasonable to anticipate that pen testing efforts will increasingly focus on
AI, leading to a shift in some security assessments.

Evaluating the online presence of high-profile individuals and the ease of
creating convincing deepfakes may soon become integral to cybersecurity and red
team engagements. We might even see incident prevention and response teams
specifically dedicated to combating social engineering attacks.

Currently, if someone falls victim to extortion through a deepfake, where can
they turn for help? They certainly won’t approach their employer and say, “There
might be a sensitive video circulating, but don’t worry, it’s just a deepfake.”
However, having a team capable of addressing this issue confidentially and
mitigating the impact of such attacks on individuals could become a vital
service for companies to consider.

While the transformative power of the new AI-driven world on the cybersecurity
landscape is evident, the exact nature of these changes remains uncertain.




More about
 * artificial intelligence
 * cybercrime
 * cybersecurity
 * deepfakes
 * email
 * Europe
 * extortion
 * law
 * phishing
 * social engineering
 * WithSecure

Share this

FEATURED NEWS

 * Old vulnerabilities are still a big problem
 * Cybercriminals target MS SQL servers to deliver ransomware
 * Emerging threat: AI-powered social engineering

CIS Benchmarks Communities: Where configurations meet consensus


SPONSORED


EBOOK: 9 WAYS TO SECURE YOUR CLOUD APP DEV PIPELINE


FREE ENTRY-LEVEL CYBERSECURITY TRAINING AND CERTIFICATION EXAM


GUIDE: ATTACK SURFACE MANAGEMENT (ASM)




DON'T MISS


OLD VULNERABILITIES ARE STILL A BIG PROBLEM


CYBERCRIMINALS TARGET MS SQL SERVERS TO DELIVER RANSOMWARE


EMERGING THREAT: AI-POWERED SOCIAL ENGINEERING


CYBER TALENT GAP SOLUTIONS YOU NEED TO KNOW


ATLAS VPN ZERO-DAY ALLOWS SITES TO DISCOVER USERS’ IP ADDRESS




Cybersecurity news
Daily Newsletter
Weekly Newsletter
(IN)SECURE - monthly newsletter with top articles
Subscribe
I have read and agree to the terms & conditions
Leave this field empty if you're human:

© Copyright 1998-2023 by Help Net Security
Read our privacy policy | About us | Advertise
Follow us
×