
THREAT HUNTING - SUSPICIOUS USER AGENTS

mthcht · Published in Detect FYI · 14 min read · Dec 31, 2023


WHAT IS A USER-AGENT?

A User-Agent string is a line of text that a browser or application sends to a
web server to identify itself. It typically includes the name and version of the
browser/application, the operating system, and the language. It’s constructed as
a list of product tokens (keywords) with optional comments that provide further
detail. Tokens are typically separated by spaces, and comments are enclosed in
parentheses. Each part of the User-Agent string helps the server determine how
to deliver content in a compatible format for the client’s software environment.

In the early days of the internet, when browsers were competing for market
share, User-Agent strings were straightforward, but as competition increased,
browsers started to mimic each other’s strings to bypass compatibility issues.
For example, Mozilla’s format, “Mozilla/5.0 (…)”, became a standard prefix for
many browsers, regardless of their actual connection to Mozilla.

A classic example of how user-agents have been manipulated for broader web
compatibility: Opera faced compatibility issues because some websites were
optimized only for popular browsers like Internet Explorer. To ensure
compatibility, Opera began to include the string “MSIE” (indicating Microsoft
Internet Explorer) in its User-Agent, alongside its own identifier. This way,
Opera could access websites that were exclusively designed for IE users.

These practices led to the complex and sometimes misleading User-Agent strings
we see today.


EXAMPLE:

> Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like
> Gecko) Chrome/120.0.0.0 Safari/537.36

 * Mozilla/5.0: This is a general identifier used for compatibility (it has no
   real meaning anymore)
 * (Windows NT 10.0; Win64; x64): This part specifies the operating system as
   Windows 10, 64-bit edition, on an x64-based processor.
 * AppleWebKit/537.36: It signifies that the browser uses the AppleWebKit
   rendering engine, which is responsible for how web content is displayed.
 * (KHTML, like Gecko): This indicates that the browser is compatible with both
   KHTML and Gecko rendering engines, enhancing cross-browser compatibility.
 * Chrome/120.0.0.0: This specifies the browser as Chrome and gives the version
   number, which in this case is 120.0.0.0.
 * Safari/537.36: The inclusion of Safari with the same version number as
   AppleWebKit suggests compatibility with Safari's rendering standards.

For more detailed examples and variations used by different browsers, you can
refer to the information provided by MDN Web Docs
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/User-Agent
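To make the token/comment structure concrete, here is a minimal Python sketch that splits a User-Agent string into product tokens and parenthesized comments. This is an illustration only; production parsing should use a dedicated library (e.g. ua-parser), which handles far more edge cases.

```python
import re

def parse_user_agent(ua):
    """Split a User-Agent string into product tokens (keyword/version pairs)
    and parenthesized comments -- a rough sketch of the structure above."""
    parts = re.findall(r'([\w.]+/[\w.]+)|\(([^)]*)\)', ua)
    tokens = [p[0] for p in parts if p[0]]      # e.g. "Chrome/120.0.0.0"
    comments = [p[1] for p in parts if p[1]]    # e.g. "Windows NT 10.0; Win64; x64"
    return tokens, comments

ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36")
tokens, comments = parse_user_agent(ua)
```

Running this on the example above yields the four product tokens and the two comments discussed in the bullet list.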

These sites can help you identify User-Agent strings:

 * https://explore.whatismybrowser.com/useragents/parse/
 * https://useragentstring.com/
 * https://www.whatsmyua.info/


WHY DETECT USER-AGENT STRINGS?

Threat actors frequently alter or fabricate User-Agent strings, sometimes aiming
to camouflage their traffic within legitimate web requests.


[MALWARE EXAMPLE] RACCOON STEALER



A prime example is the Raccoon Stealer, notorious for using specific HTTP
User-Agent strings when communicating with its C2 server. These User-Agent
strings are unique and distinct, minimizing the chances of false positives
during threat hunting sessions or in detection rules.

I added them to my hunting list:
https://github.com/mthcht/awesome-lists/blob/main/Lists/suspicious_http_user_agents_list.csv


Raccoon Stealer strings

A detailed behavioral analysis of a Raccoon Stealer sample can be found here:
https://tria.ge/230404-kmka5adg89/behavioral2


User-Agent iMightJustPayMySelfForAFeature

Another recent sample analysis of Raccoon Stealer:
https://www.joesandbox.com/analysis/1342102/0/html


User-Agent SouthSide



[MALWARE EXAMPLE] BUNNY LOADER

Bunny Loader is a Malware-as-a-Service (MaaS) discussed and sold on various
underground forums. This loader is designed to distribute and execute various
types of malware, making it a versatile tool for cybercriminals. Like Raccoon
Stealer, Bunny Loader may also use unique User-Agent strings as part of its
operation, further emphasizing the importance of monitoring and analyzing
User-Agent strings in network traffic.


Bunny Loader strings


Raccoon Stealer and Bunny Loader are just two examples from a vast array of
malware using unique User-Agent strings, as detailed in my extensive list. They
highlight the importance of monitoring specific User-Agent strings in your SIEM.

For enhanced network security and visibility, it’s advisable to configure your
workstations to exclusively use your company proxy for internet access. By doing
so, you effectively restrict all web traffic to pass through the proxy, ensuring
that it can be monitored and controlled.


LIST OF BAD USER AGENTS

I’ve put together a list of suspicious User-Agents on GitHub that you can use
to hunt in your environment. This same list will be the cornerstone of the
hunting strategies discussed throughout this article:


https://github.com/mthcht/awesome-lists/blob/main/Lists/suspicious_http_user_agents_list.csv




STRUCTURE OF THE FILE:

Each User-Agent is categorized with these fields:

 * http_user_agent: This field is used to match suspicious User-Agent strings
   in our SIEM logs. It supports wildcard entries for flexible matching and is
   not case-sensitive.
 * metadata_description: Simple description of the User-Agent.
 * metadata_link: Link to the source code, repository, or article referencing
   the suspicious User-Agent.
 * metadata_flow_direction: Flow direction for the detection (this should
   primarily be employed for detecting internal threats).
 * metadata_category: Threat category of the User-Agent (C2, Malware, RMM,
   Compliance, Phishing, Vulnerability Scanner, Exploitation…).
 * metadata_priority: Priority assigned to the suspicious User-Agent (info →
   low → medium → high → critical).
 * metadata_fp_risk: False positive risk assigned to the suspicious User-Agent
   (none → low → medium → high → very high).
 * metadata_severity: Threat severity assigned to the suspicious User-Agent
   (info → low → medium → high → critical).
 * metadata_usage: Determined by the priority, false positive risk, and
   severity associated with a User-Agent. User-Agents likely to trigger
   high-confidence alerts are labeled “Detection rule”, making them ideal for
   inclusion in scheduled detection rules on your SIEM. Conversely, User-Agents
   likely to produce low-confidence alerts (due to high false positive
   potential, low severity, or low priority) are categorized as “Hunting”,
   making them better suited for threat hunting activities than for detection
   rules.
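As a sketch of how the http_user_agent field's wildcard, case-insensitive semantics could be emulated outside Splunk (the two sample rows below are hypothetical placeholders, not entries copied from the actual CSV):

```python
from fnmatch import fnmatchcase

# Hypothetical (pattern, metadata_usage) rows mimicking the list's format
PATTERNS = [
    ("*SouthSide*", "Detection rule"),
    ("curl/*", "Hunting"),
]

def match_user_agent(ua):
    """Return the first (pattern, usage) pair matching the User-Agent.
    Lower-casing both sides emulates the list's case-insensitive matching;
    fnmatchcase handles the '*' wildcards deterministically."""
    for pattern, usage in PATTERNS:
        if fnmatchcase(ua.lower(), pattern.lower()):
            return pattern, usage
    return None
```

For example, `match_user_agent("CURL/8.4.0")` matches the `curl/*` pattern regardless of case.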




DETECTING THREATS INSIDE YOUR NETWORK

> Required: HTTP Proxy logs (internal > external flow)


HUNTING WITH A SUSPICIOUS USER-AGENT LIST

> Required: Suspicious User-Agent List



Let’s leverage the list of suspicious User-Agents in our Splunk environment,
specifically focusing on our proxy logs. Here’s how you can proceed:

 1. Upload the list suspicious_http_user_agents_list.csv to your Splunk instance
 2. Create a Lookup Definition named suspicious_http_user_agents_list in Splunk.
    This will allow you to quickly cross-reference the User-Agents in your logs
    against the list (in Splunk’s Settings menu, under the ‘Lookups’ option)


Not case-sensitive, with wildcard matching

3. Use the lookup definition in searches

 * Hunt for everything

`proxy`
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| where isnotnull(http_user_agent_pattern)
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) dc(src_user) last(src_user) values(url) values(dest_host) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by http_user_agent metadata_usage
| rename values(*) as *
| sort + count

 * High confidence searches (Detection rule)

`proxy`
 NOT [|inputlookup Exclusion_List.csv | fields - "metadata_*"]
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| where metadata_usage="Detection rule"
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) values(url) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by src_user dest_host http_user_agent metadata_usage
| rename values(*) as *

Implement this search as a detection rule in your environment. You shouldn’t
encounter these specific User-Agents in a secure network setting.

 * Low confidence searches (Threat Hunting)

`proxy`
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| where metadata_usage="Hunting"
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) dc(src_user) last(src_user) values(url) values(dest_host) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by http_user_agent metadata_usage
| rename values(*) as *



Adding an exclusion list and filtering on internal source IP addresses
 * Hunt for LOLBIN usages

`proxy`
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| where metadata_category="LOLBIN"
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) dc(src_user) last(src_user) values(url) values(dest_host) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by http_user_agent metadata_usage
| rename values(*) as *

 * Hunt for C2

`proxy`
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| where metadata_category="C2"
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) dc(src_user) last(src_user) values(url) values(dest_host) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by http_user_agent metadata_usage
| rename values(*) as *

 * Hunt for Malware

`proxy`
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| where metadata_category="Malware"
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) dc(src_user) last(src_user) values(url) values(dest_host) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by http_user_agent metadata_usage
| rename values(*) as *

 * Hunt for RMM

`proxy`
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| where metadata_category="RMM"
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) dc(src_user) last(src_user) values(url) values(dest_host) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by http_user_agent metadata_usage
| rename values(*) as *

 * Hunt for Scanners

`proxy`
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| search metadata_category IN ("Bots & Vulnerability Scanner","Vulnerability Scanner","Discovery")
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) dc(src_user) last(src_user) values(url) values(dest_host) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by http_user_agent metadata_usage
| rename values(*) as *

 * Hunt for critical severity and low false positives

`proxy`
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| search metadata_severity IN ("high","critical") AND metadata_fp_risk IN ("none","low")
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) dc(src_user) last(src_user) values(url) values(dest_host) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by http_user_agent metadata_usage
| rename values(*) as *

 * Compliance detections

`proxy`
| lookup suspicious_http_user_agents_list http_user_agent as http_user_agent OUTPUT http_user_agent as http_user_agent_pattern metadata_description metadata_link metadata_category metadata_priority metadata_fp_risk metadata_severity metadata_usage
| where metadata_category="Compliance"
| stats values(http_user_agent_pattern) values(metadata_category) values(http_method) values(status) dc(src_user) last(src_user) values(url) values(dest_host) values(metadata_description) values(metadata_link) values(metadata_priority) values(metadata_fp_risk) values(metadata_severity) earliest(_time) as firsttime latest(_time) as lasttime count by http_user_agent metadata_usage
| rename values(*) as *

4. Analyze and Investigate: any matches found between your proxy logs and the
suspicious User-Agents list should be thoroughly analyzed.


ANOMALY DETECTION IN USER-AGENT STRINGS

Here are some search techniques for identifying User-Agent anomalies in your
data without needing to rely on my hunting list.


[HUNTING] UNUSUALLY LONG USER AGENT STRINGS

Simple hunting search showing User-Agent strings longer than 250 characters
(you should adjust the threshold to something relevant for the usual lengths
you see in your environment)



⚠️ A high number of false positives is expected; you can filter through the
noise with the dest, URL category/severity, and HTTP method fields. This may
catch benign cases where user agents are legitimately long due to plugins or
toolbars (establish a baseline of the legitimate applications used in your
environment for exclusion)
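Outside of Splunk, the same length-threshold idea can be sketched in a few lines of Python over exported proxy events (the `http_user_agent` field name is an assumption about your export format):

```python
LENGTH_THRESHOLD = 250  # tune to the usual lengths seen in your environment

def long_user_agents(events, threshold=LENGTH_THRESHOLD):
    """Return proxy events whose User-Agent exceeds the length threshold."""
    return [e for e in events
            if len(e.get("http_user_agent", "")) > threshold]
```

The short-string variant below is the same filter with the comparison reversed.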


[HUNTING] UNUSUALLY SHORT USER AGENT STRINGS

Simple hunting search showing User-Agent strings shorter than 10 characters



⚠️ A high number of false positives is expected (it will include empty user
agents, which have the value ‘unknown’ in Splunk); you can filter through the
noise with the dest, URL category/severity, and HTTP method fields. You will
catch lots of benign cases (establish a baseline of the known applications
used in your environment for exclusion)

Speaking of empty User-Agents (which have the value ‘unknown’ in Splunk), they
are included in my list. Hunt for them specifically on successful external
sign-ins (excluding failed attempts) in cloud services like Microsoft Office
365: empty User-Agent instances there are relatively rare, and searching for
them is a good way of identifying unusual or suspicious activities.


[HUNTING] UNUSUAL USER AGENT STRING LENGTH

A more advanced search compared to the two previous length searches, using the
Splunk commands avg and stdev (much slower, but it could give more relevant
results)


 * eval length=len(http_user_agent): This creates a new field named length that
   stores the length of each user agent string.
 * eventstats avg(length) as avgLength, stdev(length) as stdevLength: This
   calculates the average length and standard deviation of the length field
   across the events.
 * where length > avgLength + (3*stdevLength) OR length < avgLength -
   (4*stdevLength): This condition keeps only user agents whose lengths are
   more than 3 standard deviations above the average or more than 4 below it.
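The same eval/eventstats/where logic can be sketched in Python, assuming an in-memory list of User-Agent strings exported from your proxy logs:

```python
from statistics import mean, stdev

def length_outliers(user_agents, upper_sigma=3, lower_sigma=4):
    """Flag User-Agents whose length is more than `upper_sigma` standard
    deviations above the mean length, or more than `lower_sigma` below it --
    mirroring the eval/eventstats/where pipeline described above."""
    lengths = [len(ua) for ua in user_agents]
    avg, sd = mean(lengths), stdev(lengths)
    return [ua for ua, n in zip(user_agents, lengths)
            if n > avg + upper_sigma * sd or n < avg - lower_sigma * sd]
```

A single 1000-character string among fifty 100-character ones, for example, is flagged as a high outlier.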


[HUNTING] MULTIPLE USER AGENTS FROM SAME SOURCE IN SHORT TIME

Detects potential scanning or enumeration activities from a single source
(either with the src_ip or the src_user)

Simple hunting search to find sources making web requests with more than 4
different User-Agent strings to a domain (dest_host) within 10 minutes:



⚠️ Alternative query: you can swap src_user after the ‘count by’ for src_ip
and change ‘values(src_ip)’ to ‘values(src_user)’, but be aware of any Virtual
IP addresses (VIPs) used within your environment, as you could have multiple
users behind a single IP address.
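A rough Python equivalent buckets events into fixed 10-minute windows per (src_user, dest_host) pair and counts distinct User-Agents (field names here are assumptions about your proxy log schema):

```python
from collections import defaultdict

def noisy_sources(events, window=600, threshold=4):
    """Report (src_user, dest_host, bucket) groups that used more than
    `threshold` distinct User-Agents inside one `window`-second bucket."""
    buckets = defaultdict(set)
    for e in events:
        key = (e["src_user"], e["dest_host"], int(e["_time"]) // window)
        buckets[key].add(e["http_user_agent"])
    return {k: uas for k, uas in buckets.items() if len(uas) > threshold}
```

Fixed buckets are an approximation of a sliding 10-minute window, which is usually good enough for hunting.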


[HUNTING] RAREST USER-AGENTS

You can use Splunk’s rare command to identify the least common User-Agent
strings in your data
https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Rare



This command will generate a list of the top 30 rarest User-Agents with count
and percentage for each one. While a rare User-Agent doesn't inherently indicate
suspicious activity, this information can be invaluable for threat hunting,
providing insights into unusual patterns or anomalies in your network traffic.
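For exported data, the rare command's output can be approximated with a Counter:

```python
from collections import Counter

def rarest_user_agents(user_agents, limit=30):
    """Return up to `limit` least common User-Agents as
    (user_agent, count, percent) tuples -- a rough stand-in for `rare`."""
    counts = Counter(user_agents)
    total = len(user_agents)
    rarest = sorted(counts.items(), key=lambda kv: kv[1])[:limit]
    return [(ua, n, round(100 * n / total, 2)) for ua, n in rarest]
```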

You could also test the anomalydetection command on your proxy logs and check
the anomaly results for the field http_user_agent



https://docs.splunk.com/Documentation/SplunkCloud/9.1.2308/SearchReference/Anomalydetection

These searches might reveal some interesting insights, though it’s not
guaranteed (reserved for threat hunting)



Alex Teixeira also shared insightful techniques for identifying rare
user-agents with Splunk here:
https://opstune.com/2020/09/16/tracking-rare-http-agent-context-rich-alerts-splunk/




[COMPLIANCE] MISMATCH BETWEEN USER AGENT AND HOST OS

> Required:
> 
> - Each request in the proxy logs should include the source IP address, or
> alternatively, the proxy logs should directly indicate the host operating
> system (without using the HTTP user agent)
> 
> - CMDB logs in the SIEM to correlate with proxy logs (if only the IP address
> is in the proxy logs)
> 
> - A Splunk lookup with the src_ip — host OS association (if only the IP
> address is in the proxy logs)

You can identify a host OS with any of the following:

 * Configuration Management Database (CMDB) logs, which should encompass
   details like IP address, hostname, running services, and OS
 * A simple Splunk lookup with the relationship between IP address and OS
 * Your proxy logs (without using the HTTP user agent); some proxy solutions
   like McAfee Web Gateway can give you process names and host OS information.

To illustrate the hunt:



You’ve successfully identified a Windows server using one of the mentioned
methods and noticed it making a web request with a Linux User-Agent (the first
case in the illustration). This could indicate the presence of a virtual
machine on the host that’s not under your monitoring. You should verify the
legitimacy of this activity on the specified machine. Ideally, if your
Configuration Management Database (CMDB) is well maintained, you should be
able to determine the expected services running on that machine directly from
your SIEM!
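The mismatch check can be sketched as follows, with a hypothetical CMDB lookup and a deliberately crude OS guess from the User-Agent (real UA-to-OS mapping needs a proper parser):

```python
# Hypothetical CMDB lookup: src_ip -> host OS
CMDB = {"10.1.2.3": "Windows", "10.1.2.4": "Linux"}

def os_from_user_agent(ua):
    """Crude OS guess from the User-Agent comment -- illustration only."""
    ua = ua.lower()
    if "windows" in ua:
        return "Windows"
    if "linux" in ua or "x11" in ua:
        return "Linux"
    if "mac os x" in ua or "macintosh" in ua:
        return "macOS"
    return "unknown"

def os_mismatches(events):
    """Flag proxy events whose CMDB OS differs from the UA-derived OS."""
    flagged = []
    for e in events:
        expected = CMDB.get(e["src_ip"])
        if expected and expected != os_from_user_agent(e["http_user_agent"]):
            flagged.append(e)
    return flagged
```

A Windows host from the CMDB sending a Linux User-Agent would be flagged, matching the first case in the illustration.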


[COMPLIANCE] USER AGENTS INDICATING OUTDATED OR VULNERABLE BROWSERS

> Required: needs constant updating with the latest vulnerability information
> and versions to stay relevant.

Using the same example illustration, we identified that the last two Windows
machines were using outdated and vulnerable browsers.



I’ve included extremely old user-agents in my list of bad user-agents. For
instance, the presence of Internet Explorer 8.0 in 2023, as seen in the third
machine from our example, is a clear anomaly and should raise immediate
concerns.

Further parsing of user-agent strings may be required to correctly identify
outdated versions. Typically, this is a compliance detection task that should be
handled by software management tools.
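As one hedged illustration of such parsing, here is a check for an outdated Chrome major version; the floor value is hypothetical and must be kept current, and dedicated software management tooling remains the right place for this kind of compliance check:

```python
import re

MIN_CHROME_MAJOR = 120  # hypothetical compliance floor -- keep it updated

def outdated_chrome(ua):
    """True when the UA advertises a Chrome major version below the floor."""
    m = re.search(r"Chrome/(\d+)\.", ua)
    return bool(m) and int(m.group(1)) < MIN_CHROME_MAJOR
```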


LOGGING EXTERNAL THREATS

> Required: (WAF logs | WebServer access logs | Reverse Proxy logs)
> + Firewall logs

I want to clarify my stance on alerts for external web attacks (with
suspicious User-Agents or not) on our internet-exposed servers: they are not a
priority. Constantly monitoring these would lead to an overwhelming number of
alerts, most of which will lead to nothing interesting. A more effective
strategy is to correlate the low-confidence signals and raise only
high-confidence alerts.

For example, we could log only the external IP addresses associated with a
blocked attack (or a vulnerability scanner User-Agent), then generate an alert
only when an internal machine initiates a connection to any of these flagged
IP addresses (having the TCP flag information in our firewall logs will
further enhance accuracy and reduce false positives), or when a successful
connection to one of our services is observed from these flagged IP addresses.
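The correlation step can be sketched like this (field names and the SYN-flag convention are assumptions about your firewall log schema):

```python
def correlate(flagged_ips, firewall_events):
    """Alert only when an internal host initiates an outbound connection
    (approximated here by the TCP SYN flag) to a previously flagged IP."""
    return [(e["src_ip"], e["dest_ip"])
            for e in firewall_events
            if e["dest_ip"] in flagged_ips and e.get("tcp_flags") == "SYN"]
```

The flagged-IP set would be populated from the WAF logging step described above.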


LOGGING

Illustration: logging external attacks with suspicious User-Agents using a WAF



This approach should be broadly applied across all types of logs and external
attack vectors. It’s important to expand beyond just User-Agents, to include
any traces or indicators that can pinpoint an attack with high confidence.




ALERTING

Then we only alert if the attacker’s IP address:

 * Connected successfully to any of our exposed services (AD, VPN, MAIL, SSH,
   other cloud authentication services…)
 * Is contacted by an internal source after the attack (as explained earlier)
 * Is observed associated with an email sender/recipient address

Illustration:



More detail on how to identify TCP flags with AWS network traffic logs (in this
example):


See “Logging IP traffic using VPC Flow Logs” on docs.aws.amazon.com: create a
flow log to capture information about IP traffic going to and from your
network interfaces.


TCP segment structure:
https://en.wikipedia.org/wiki/Transmission_Control_Protocol#TCP_segment_structure

Applying this approach to any external service, and not only to User-Agent
detection but to any high-confidence attack pattern:




CONCLUSION

User-Agent string detection, already a well-known practice, is crucial for SOC
teams to identify high-confidence threat patterns. However, its role in threat
hunting and weak-signal detection is often underestimated, primarily due to
concerns about dealing with false positives. With the list and examples I’ve
provided, I hope to enhance your threat hunting process, helping you uncover
more significant insights with fewer false alarms.


Happy Hunting!



