
URL: https://www.malwarebytes.com/blog/business/2023/05/chatgpt-cybersecurity-friend-or-foe
Submission: On May 23 via api from TR — Scanned from DE



Business


CHATGPT: CYBERSECURITY FRIEND OR FOE?

Posted: May 22, 2023 by Marcin Kleczynski

There are a lot of benefits to ChatGPT, but many in the security community have
concerns about it. Malwarebytes' CEO Marcin Kleczynski takes a deep dive into
the topic.

If you haven’t heard about ChatGPT yet, perhaps you’ve just been thawed from
cryogenic slumber or returned from six months off the grid. ChatGPT—the
much-hyped, artificial intelligence (AI) chatbot that provides human-like
responses from an enormous knowledge base—has been embraced practically
everywhere, from private sector businesses to K–12 classrooms.

Upon its launch in November 2022, tech enthusiasts quickly jumped at the shiny
new disruptor, and for good reason: ChatGPT has the potential to democratize AI,
personalize and simplify digital research, and assist in both creative
problem-solving and tackling “busywork.” But the security community and other
technology leaders have started raising the alarm, worried about the program’s
potential to write malware and spread mis- and disinformation.

Do you think your organization should embrace ChatGPT? Or do you believe
implementing the platform will compromise your company’s cybersecurity posture?
Read on to learn more about the pros, cons, conversations, and controversies
surrounding ChatGPT, including a call to (halt) action from technology leaders.


WHY CHATGPT SECURITY CONCERNS ARE BOTH OVERBLOWN AND VALID

Artificial intelligence (AI) was once a science fiction cautionary tale—stories
consistently warned against designing machines capable of surpassing human
ingenuity. But over the last 20 years, AI has transitioned from the stuff of
nightmares to mainstream adoption. From the first Roomba in 2002 to the first
virtual assistant (Siri) in 2011, AI has slowly and steadily penetrated the
consumer technology market, often with little comprehension from buyers that
artificial intelligence is actually powering the functionality behind their
favorite devices.

As of November 2022, people can no longer ignore the artificial elephant in the
room. Enter: ChatGPT. Developed by OpenAI, ChatGPT (Chat Generative Pre-Trained
Transformer) is an AI-powered chatbot that uses an enormous and sophisticated
language model to generate human-like responses in text format. ChatGPT amassed
an incredible 100 million users within two months of its explosive launch, and
its website receives over 1 billion visitors per month.

Unlike other chatbots, which are typically only able to produce canned replies
on a few subjects, ChatGPT can mimic the style of celebrity CEOs, craft business
pitches, compose music and song lyrics, answer test questions, simulate an
entire chat room, write computer code, and much more. For example, Malwarebytes
asked ChatGPT to write the opening paragraph of a novel about an antiquated
antivirus program that relies on signature-based detection to stop new and
emerging threats.

Here's what the program came back with:

> “The antivirus program blinked to life, its archaic interface flickering on
> the outdated CRT monitor. The program’s once cutting-edge signature-based
> detection technology was now decades out of date, unable to keep pace with the
> rapidly evolving landscape of digital threats. But for the small team of
> programmers responsible for maintaining the program, it was still their best
> line of defense against the unrelenting tide of new and unknown viruses that
> threatened to breach their systems. As they watched the program scan through
> the code, line by line, they knew that they were fighting an uphill battle -
> but they had no other choice but to keep trying.”

Such articulate, detailed responses caused a near frenzy upon ChatGPT’s initial
release. Many corporate customers and app developers have rushed to integrate
the platform, along with OpenAI’s DALL-E image-generation model, into their
business processes and products. However, the security community and those wary
of artificial intelligence’s steady drumbeat forward have warned organizations
to exercise caution, citing a myriad of potential risks.

Because of its meteoric rise into public consciousness and rapid adoption, the
generative AI chatbot has been the subject of continuing, complex conversations
about its impact on the cybersecurity industry, threat landscape, and humanity
as a whole. Will ChatGPT be the sentient harbinger of death some have claimed?
Or is it a unicorn that’s going to solve every business, academic, and creative
problem? The answer, as usual, lies somewhere in the gray.


SECURITY PROS OF CHATGPT

AI can be a powerful tool for cybersecurity and information technology
professionals. It will change the way we defend against cyberattacks by
improving the industry’s ability to detect and respond to threats in real time.
And it will help businesses shore up their IT infrastructure to better withstand
the constant stream of increasingly sophisticated attacks. The most effective
security solutions today, including Malwarebytes, already employ some form of
machine learning. That’s why some in the security community argue that
generative AI tools can be safely deployed to strengthen an organization’s
cybersecurity posture as long as they’re implemented according to best
practices.
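The “machine learning” behind modern spam and malware filtering is, in principle, a classifier trained on labeled samples. As a toy illustration only (real products use far more sophisticated models, and the training data below is contrived), a from-scratch naive Bayes spam filter looks like this:

```python
import math
from collections import Counter

def train(samples):
    """samples: list of (text, label) pairs. Returns per-label word counts
    and per-label document totals used as priors."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in samples:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Score the text under each label with add-one smoothing and
    return the more likely label."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        # log prior for the label...
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(counts[label].values()) + len(vocab)
        # ...plus the log likelihood of each word given the label
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting notes attached", "ham"),
    ("lunch at noon tomorrow", "ham"),
]
counts, totals = train(training)
print(classify("free prize waiting", counts, totals))  # prints "spam"
```

A production filter differs in every particular (feature extraction, model family, training corpus), but the underlying idea of learning from labeled examples is the same one generative AI tools build on.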


INCREASES EFFICIENCY

ChatGPT can increase efficiency for cybersecurity staff on the front lines. For
one, it can significantly reduce notification fatigue, a growing concern within
the field. With companies grappling with limited resources and a widening talent
gap, a tool like ChatGPT could simplify certain labor-intensive tasks and give
defenders back valuable time to commit to higher-level strategic thinking.
ChatGPT can be trained to identify and mitigate network security threats like
DDoS attacks when used in conjunction with other technologies. It can also help
automate security incident analysis and vulnerability detection, as well as more
accurately filter spam.
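The alert-triage assistance described above might be sketched as follows. Everything here is illustrative, not a description of any shipping product: the alert fields, prompt wording, and model name are invented, and the commented-out call assumes access to OpenAI’s chat-completion API.

```python
import json

def build_triage_messages(alert: dict) -> list:
    """Package a raw security alert into a chat prompt that asks the model
    for a severity rating and a one-line summary in JSON form."""
    system = (
        "You are a SOC analyst assistant. Given a security alert as JSON, "
        'reply with JSON: {"severity": "low|medium|high", "summary": "..."}'
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": json.dumps(alert)},
    ]

# A hypothetical alert from an intrusion-detection system
alert = {
    "source_ip": "203.0.113.7",
    "event": "multiple failed RDP logins followed by a success",
    "count": 312,
}
messages = build_triage_messages(alert)

# The messages would then be sent to a chat-completion endpoint, e.g.:
# response = openai.ChatCompletion.create(model="gpt-3.5-turbo",
#                                         messages=messages)
print(messages[1]["content"])
```

The value in a sketch like this is not the model call itself but the packaging: a defender spends seconds reading a one-line summary instead of minutes parsing raw alert fields, which is where the notification-fatigue savings come from.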


ASSISTS ENGINEERS

Malware analysts and reverse engineers could also benefit from ChatGPT’s
assistance on traditionally challenging tasks, such as writing proof-of-concept
code, comparing language- or platform-specific conventions, and analyzing
malware samples. The chatbot can also help engineers learn how to write in
different programming languages, master difficult software programs, and
understand vulnerabilities and exploit code.


TRAINS EMPLOYEES

ChatGPT’s security applications aren’t limited to Information Security (IS)
personnel. The program can help close the security knowledge gap by assisting in
employee training. Cybersecurity training is crucial for organizations
interested in mitigating cyberattacks and fraud, yet IT departments are often
far too busy to offer more than a single course per year. ChatGPT can step in to
offer insights on identifying the latest scams, avoiding social engineering
pitfalls, and setting stronger passwords in concise, conversational text that
may be more effective than a lecture or slide presentation.


AIDS LAW ENFORCEMENT

Finally, ChatGPT has the potential to assist law enforcement with investigating
and anticipating criminal activities. In a March 2023 report from Europol,
subject matter experts found that ChatGPT and other large language models (LLMs)
opened up “explorative communication” for law enforcement to quickly gather key
information without having to manually search through and summarize data from
search engines. LLMs can significantly speed up the learning process, enabling a
much faster gateway into technological comprehension than was previously thought
possible. This could help officers get a leg up on cybercriminals, whose
understanding of emerging technologies has typically outpaced their own.


SECURITY CONCERNS OVERBLOWN

Not long after ChatGPT was first introduced, the inevitable hand-wringing by
technology decision-makers took hold. In a February survey of IT professionals
by BlackBerry, 51 percent predicted we are less than a year away from a
successful cyberattack being credited to ChatGPT, and 71 percent believed
nation-states are likely already using the technology for malicious purposes.

The following month, thousands of tech leaders, including Steve Wozniak and Elon
Musk, signed an open letter to all AI labs calling on them to pause the
development of systems more powerful than the latest version of ChatGPT for at
least six months. The letter cites the potential for profound risks to society
and humanity that arise from the rapid development of advanced AI systems
without shared safety protocols. More than 27,500 signatures have since been
added to the letter.

However, even when ChatGPT is engaged in ominous activities, the outcomes at
present are rather harmless. Since OpenAI allows developers to modify its
official APIs, some have tested a few nefarious theories by creating ChaosGPT,
an internet-connected “evil” variant that autonomously plans and executes its
own actions. One user commanded the AI to destroy humanity; it responded by
planning a nuclear winter, all while maintaining its own Twitter account, which
was ultimately suspended.



So maybe ChatGPT isn’t going to take over the world just yet—what about some of
the more realistic security concerns being voiced, like the ability to develop
malware or phishing kits?

When it comes to writing malicious code, ChatGPT isn't yet ready for prime time.
In fact, the platform is a terrible programmer in general. It's currently easier
for an expert threat actor to create malware from scratch than to spend time
correcting what ChatGPT has produced. The fear that ChatGPT would hand script
kiddies the programming power to produce thousands of new malware strains is
unfounded, as amateur cybercriminals lack the knowledge to pick up on minor
errors in code, as well as the understanding of how code works.

One of our researchers recently embarked on an experiment to get ChatGPT to
write ransomware, and despite the chatbot’s initial protests that it couldn’t
“engage in activities that violate ethical or legal standards, including those
related to cybercrime or ransomware,” with a little coaxing, ChatGPT eventually
complied. The result: snippets of ransomware code that switched languages
throughout, stopped short after a certain number of characters, dropped features
at random, and were essentially incoherent and useless.

Since the primary focus of ChatGPT’s training was in language skills, security
pros have been most anxious about its ability to generate believable phishing
kits. While the chatbot can produce a clean phishing email that’s free from
grammatical or spelling errors, many modern phishing samples already do the
same. The AI tool’s phishing skills begin and end with writing emails because,
again, it lacks the coding talent to produce other elements like credential
harvesters, infected macros, or obfuscated code. Its attempts so far have been
rudimentary at best—and that’s with the assistance of other tools and
researchers.

ChatGPT can only pull from what’s already in its public database, and it has
only been trained on data up until 2021. Even today, there are simply not enough
well-written phishing scripts in the wild for ChatGPT to surpass what
cybercriminals have already developed. In addition, OpenAI has safety protocols
that explicitly prohibit the use of its models for malware development, fraud
(including spam and scams), and invasions of privacy. Unfortunately, that hasn’t
stopped crafty individuals from “jailbreaking” ChatGPT to get around them.


CHATGPT SECURITY CONS

Just because some of the worst fears about ChatGPT are overhyped doesn’t mean
there are no justifiable concerns. According to the NIST AI Risk Management
Framework published in January, an AI system can only be deemed trustworthy if
it adheres to the following six criteria:  

 1. Valid and reliable
 2. Safe
 3. Secure and resilient
 4. Accountable and transparent
 5. Explainable and interpretable
 6. Fair, with harmful biases managed

However, risks can emerge from socio-technical tensions and ambiguity related to
how an AI program is used, its interactions with other systems, who operates it,
and the context in which it is deployed.


RACIAL AND GENDER BIAS

There are many inherent uncertainties in LLMs that render them opaque by nature,
including limited explainability and interpretability, and a lack of
transparency and accountability, including insufficient documentation.
Researchers have also reported multiple cases of harmful bias in AI, including
crime prediction algorithms that unfairly target Black and Latino people and
facial recognition systems that have difficulty accurately identifying people of
color. Without proper controls, ChatGPT could amplify, perpetuate, and
exacerbate toxic stereotypes, leading to undesirable or inequitable outcomes for
certain communities and individuals.


LACK OF VERIFIABLE METRICS

AI systems suffer from a deficit of verifiable measurement metrics, which would
help security teams determine whether a particular program is safe, secure, and
resilient. What little data exists is far from robust and lacks consensus among
AI developers and security professionals alike. What’s worse, different AI
developers interpret risk in different ways and measure it at different
intervals in the AI lifecycle, which could yield inconsistent results. Some
threats may be latent at one time but increase as AI systems adapt and evolve.


CYBERCRIMINAL EXPERIMENTATION

Despite its struggles with malicious code, ChatGPT has already been weaponized
by enterprising cybercriminals. By January, threat actors in underground forums
were experimenting with ChatGPT to recreate malware variants and techniques
described in research publications. Criminals shared malicious tools, such as an
information stealer, an automated exploit, and a program designed to phish for
credentials. Researchers also discovered cybercriminals exchanging ideas about
how to create dark web marketplaces using ChatGPT that sell stolen credentials,
malware, or even drugs in exchange for cryptocurrency.


VULNERABILITIES AND EXPLOITS

There are few ways to know in advance if an LLM is free from vulnerabilities. In
March, OpenAI temporarily took down ChatGPT because of a bug that allowed some
users to see the titles of other people’s chat histories and first messages of
newly created conversations. After further investigation, OpenAI discovered the
vulnerability had exposed some user payment and personal data, including first
and last names, email addresses, payment addresses, the last four digits of
credit card numbers, and card expiration dates. While OpenAI claims, “We are
confident that there is no ongoing risk to users’ data,” there’s no way (at
present) to confirm or deny whether personal information was exfiltrated for
criminal purposes.

Also in March, OpenAI massively expanded ChatGPT’s capabilities to support
plugins that allow access to live data from the web, as well as from third-party
applications like Expedia and Instacart. In code provided to ChatGPT customers
interested in integrating the plugins, security analysts found a potentially
serious information disclosure vulnerability. The bug can be leveraged to
capture secret keys and root passwords, and researchers have already seen
attempted exploits in the wild.


PRIVACY CONCERNS

Compounding worries that vulnerabilities could lead to data breaches, several
top brands recently chastised employees for entering sensitive business data
into ChatGPT without realizing that all messages are saved on OpenAI’s servers.
When Samsung engineers asked ChatGPT to fix errors in their source code, they
accidentally leaked confidential notes from internal meetings and performance
data in the process. An executive at another company cut-and-pasted the firm's
2023 strategy into ChatGPT to create a slide deck, and a doctor submitted his
patient's name and medical condition for ChatGPT to craft a letter to his
insurance company.



Both privacy and security concerns have prompted major banks, including Bank of
America, JPMorgan Chase, Goldman Sachs, and Wells Fargo, to restrict or all-out
ban ChatGPT and other generative AI models until they can be further vetted.
Companies outside the financial sector, including Amazon, Microsoft, and
Walmart, have likewise warned their staff to refrain from divulging proprietary
information or sharing personal or customer data on ChatGPT.


SOCIAL ENGINEERING

Finally, cybercriminals wouldn’t be cybercriminals if they didn’t capitalize on
ChatGPT’s wild popularity. Because of its accelerated growth, ChatGPT was forced
to throttle its free tool and launch a $20/month paid tier for those wanting
unlimited access. This gave threat actors the ammunition to develop convincing
social engineering schemes that promised uninterrupted, free access to ChatGPT
but really lured users into entering their credentials on malicious webpages or
unknowingly installing malware. Security researchers also found more than 50
malicious Android apps on Google Play and elsewhere that spoof ChatGPT's icon
and name but are designed for nefarious purposes.


CHATGPT’S DISINFORMATION PROBLEM

While vulnerabilities, data breaches, and social engineering are valid concerns,
what’s causing the most anxiety at Malwarebytes is ChatGPT’s ability to spread
misinformation and disinformation on a massive scale. That which enamors the
public most—ChatGPT’s ability to generate thoughtful, human-like responses—is
the very same capability that could lull users into a false sense of security.
Just because ChatGPT’s answers sound natural and intelligent doesn’t mean they
are accurate. Incorrect information and associated biases are often incorporated
into its responses.

OpenAI CEO Sam Altman himself expressed worries that ChatGPT and other LLMs have
the potential to sow widespread discord through extensive disinformation
campaigns. Altman said the latest version, GPT-4, is still susceptible to
“hallucinating” incorrect facts and can be manipulated to produce deceptive or
harmful content. “The model will boldly assert made-up things as if they were
completely true,” he told ABC News.

In the age of clickbait journalism and social media, it can be challenging to
discern fake content from authentic, and propaganda from legitimate fact. With
ChatGPT, bad actors can quickly write fake
news stories that mimic the voice and tone of established journalists,
celebrities, or even politicians. For example, Malwarebytes was able to get
ChatGPT to write a story in the voice of Barack Obama about the earthquake in
Turkey, which could easily be modified to spread disinformation or collect
fraudulent payments through fake donation links.


EDUCATIONAL CONCERNS

In education, mis- and disinformation are especially troubling byproducts of
ChatGPT that have led some of the biggest school districts in the US to ban the
program from K–12 classrooms. From its lack of cultural competency to its
potential to undermine human teachers, academia is understandably apprehensive.
For every student using ChatGPT to research debate prompts or develop study
guides, there’s another abusing the platform to plagiarize essays or take exams.

The education industry might be willing (for now) to let teachers use ChatGPT
for simple tasks like creating lesson plans and emailing parents, but the tool
will likely remain off-limits for students, or at least highly regulated in
public schools. Educators are aware that over-reliance on AI-powered tools and
generated content could lead to a decrease in problem solving, creativity, and
critical thinking—the very skills teachers and administrators aim to develop in
students. Without them, it’ll be that much harder to recognize and avoid
misinformation.


FINAL VERDICT

Suggesting that ChatGPT is low risk and unworthy of the security community’s
attention is like putting your head in the sand and pretending AI doesn’t exist.
ChatGPT is only the start of the generative AI revolution. Our industry should
take its potential for disruption—and destruction—seriously and focus on
developing safeguards to combat AI threats. Halting “dangerous” research on
advanced models ignores the reality of rampant AI use today. Instead, it’s
better to demand that AI systems meet NIST’s criteria for trustworthiness and
to establish regulation around the development of AI through both government
intervention and corporate security innovation.

Some artificial intelligence regulation is already in motion: the Algorithmic
Accountability Act of 2022, introduced in the US Congress, would require
businesses to assess critical AI algorithms and provide public disclosures for
increased transparency. The legislation was endorsed by AI advocates and
experts, and it sets the stage for future government oversight.
we’re one step closer to providing some important guardrails for AI. In fact,
expect to see changes (aka limitations) implemented to ChatGPT in the near
future in response to a country-wide ban by the Italian government.

Just as cybersecurity relies on commercial software to defend people and
businesses, so too might generative AI models. New companies are already
springing up that specialize in AI vulnerability detection, bot mitigation, and
data input cleansing. One such company, Kasada Pty, has been tracking ChatGPT
misuse and abuse. Another new tool from Robust Intelligence, modeled after
VirusTotal, scans AI applications for security flaws and tests whether they’re
as effective as advertised or if they have issues around bias. And Hugging Face,
one of the most popular repositories of machine learning models, has been
working with Microsoft’s threat intelligence team on an application that scans
AI programs for cyberthreats.

As organizations look to integrate ChatGPT—whether to augment employee tasks,
make workflows more efficient, or supplement cyberdefenses—it will be important
to note the program’s risks alongside its benefits, and recognize that
generative AI still requires an appreciable amount of oversight before
large-scale adoption. Security leaders should consider AI-related
vulnerabilities across their people, processes, and technology—especially those
related to mis- and disinformation. By putting the right safeguards in place,
generative AI tools can be used to support existing security infrastructures.

Awareness alone won’t solve the more nebulous threats associated with ChatGPT.
To bring disparate security efforts together, the AI community will need to
adopt a similar modus operandi to traditional software, which benefits from an
entire ecosystem of government, academia, and enterprise that has developed over
more than 20 years. That ecosystem is in its infancy for LLMs like ChatGPT
today, but continued diligence should bring cybersecurity and generative AI
into a symbiotic relationship.

The benefits of ChatGPT are many, and there’s no doubt that generative AI tools
have the potential to transform humanity. In what way remains to be seen.



ABOUT THE AUTHOR

Marcin Kleczynski
CEO and Co-Founder of Malwarebytes

Likes long walks on the beach and hates fish.

