
Published in

Palantir Blog

Palantir

Jan 11, 2021

· 15 min read

MICROSOFT DEFENDER ATTACK SURFACE REDUCTION RECOMMENDATIONS

This blog post provides a set of recommendations based on the audit data
Palantir’s Infosec team has collected from the Windows Defender Attack Surface
Reduction (ASR) family of security controls over the past two years. We hope it
will assist other security teams who are considering a deployment. We’ll aim to
highlight the considerations for each setting based on production deployment
experience.



For those who are new to the topic, Windows Defender Attack Surface Reduction
(ASR) is the name Microsoft gave to a collection of controls that restrict
common malware and exploit techniques on Windows endpoints.

Unlike Windows Defender Exploit Guard, ASR controls are simple on/off switches
that administrators can deploy in very short order with group policy or Intune,
especially if they plan to use audit-only mode.

This paragraph from the Microsoft website provides a great overview:

ASR targets software behaviors that are often abused by attackers, such as:

 * Launching executable files and scripts that attempt to download or run files
 * Running obfuscated or otherwise suspicious scripts
 * Performing behaviors that apps don’t usually initiate during normal
   day-to-day work

Such behaviors are sometimes seen in legitimate applications; however, they are
considered risky because they are commonly abused by malware. Attack surface
reduction rules can constrain these kinds of risky behaviors and help keep your
organization safe.


RECOMMENDATION SUMMARY

We aimed to be somewhat opinionated in this post to provide value to readers,
but we also acknowledge that all of these settings have their own environmental
nuance. Please refer to the descriptions below this summary section for the
reasoning behind our recommendations. It’s very likely that you’ll be dealing
with a different set of constraints in the networks you protect.



For those looking to dive right into the logs in their environment, the
information will be recorded in two different events:

 * Audit Only: Windows Event 1122
 * Block Mode: Windows Event 1121

Configuring any ASR setting in block mode will cause Windows to deny the
behavior and log the event. The obvious challenge is that Windows is likely to
restrict legitimate use cases in some environments, and that’s what we are
hoping to help you avoid with this post. Audit mode on its own will provide an
excellent source of data for defenders and can be configured fleet-wide in most
corporate environments in under an hour using group policy.
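For teams scripting this triage, a minimal Python sketch of the first pass
might look like the following. The record shape and field names are
illustrative assumptions about an event export, not a real schema; only the
event IDs (1122 audit, 1121 block) come from the text above.

```python
from collections import Counter

# Illustrative records as they might arrive from an event-forwarding
# pipeline; the field names are assumptions, not a real schema.
events = [
    {"EventID": 1122, "Computer": "ws-01"},
    {"EventID": 1122, "Computer": "ws-02"},
    {"EventID": 1121, "Computer": "ws-03"},
    {"EventID": 4624, "Computer": "ws-01"},  # unrelated event, ignored
]

# Keep only ASR events: 1122 = audit-only, 1121 = blocked.
asr = [e for e in events if e["EventID"] in (1121, 1122)]
by_mode = Counter("audit" if e["EventID"] == 1122 else "block" for e in asr)

print(by_mode["audit"], by_mode["block"])  # 2 1
```

From here, grouping the same records by rule and device quickly shows which
settings are safe to move from audit to block.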

> A note on ASR deployment and event log forwarding: At Palantir, we make heavy
> use of Windows Event Forwarding (WEF) to collect the 1121 and 1122 events for
> analysis. Our WEF guide can be found here and configuration samples here. Our
> ASR deployment was configured by setting the group policy settings described
> here. You can use a single policy for all ASR rules; you’ll just need to
> decide between block mode and audit mode for each setting you configure. An
> alternative is to create multiple policies where exceptions need to be granted
> to block mode rules, configuring the exceptions as higher priority group
> policies, restricted by "apply group policy" access control entries on the
> group policy object.


RULE BREAKDOWN


BLOCK UNTRUSTED AND UNSIGNED PROCESSES THAT RUN FROM USB

Configuring “Block untrusted and unsigned processes that run from USB” in block
mode caused no issues whatsoever. We configured this rule in block mode on day
one and didn’t register a single event related to this ASR rule over the past 18
months. We feel this rule would be safe to deploy in block mode in almost any
corporate environment.

Recommendation: Block Mode.


BLOCK ADOBE READER FROM CREATING CHILD PROCESSES

We identified early in our ASR journey that “Block Adobe Reader Child Process”
does cause a minor issue related to the way Adobe currently approaches updates
for some products.

Here’s an example event:

Message=Windows Defender Exploit Guard has blocked an operation that
is not allowed by your IT administrator. For more information please contact your IT administrator.
     ID: 7674BA52-32EB-4A4F-A9A1-F0F9A1619A2C
     Detection time: 2020-10-16T19:35:03.723Z
     User: DOMAIN\User
     Path: C:\xxx\Acrobat_DC_Set-Up.exe
     Process Name: C:\xxx\AcroRd32.exe
     Security intelligence Version: 1.325.883.0
     Engine Version: 1.1.17500.4
     Product Version: 4.18.2009.7

The update mechanism of the Adobe application attempts to launch a child
process to do the work, which is exactly the behavior this ASR rule blocks.

Rather than rolling back the rule, we opted to perform the Adobe updates via our
central software patching and maintenance service so that there is no legitimate
need for Adobe to be launching child processes. We’ve decided to accept the
slight operational cost for the sake of additional security related to PDF
malware.

Recommendation: Block Mode.
Tip: Be careful with the Adobe update process. Make sure you have an alternative
update approach.


BLOCK EXECUTABLE CONTENT FROM EMAIL CLIENT AND WEBMAIL

We registered no events relating to this control in the data we collected, even
though many of our users leverage the email clients and webmail the rule
targets. Our theory is that other controls in the email delivery path do a
reasonable job of protecting users against executable content in email. We
still recommend leaving this control in place as a final backstop to protect
endpoints.

Recommendation: Block Mode.


BLOCK JAVASCRIPT OR VBSCRIPT FROM LAUNCHING DOWNLOADED EXECUTABLE CONTENT

We registered no events relating to this control in the 18+ months of data we
collected.

Microsoft provide the following advice:

 * “Although not common, line-of-business applications sometimes use scripts to
   download and launch installers”.

However, the Windows environment this post is based on contains roughly 1000
endpoints and we did not record any indication of this behavior. For that
reason, we feel it is most likely safe for the majority of corporate networks
but administrators should be aware of the warning above.

Recommendation: Block Mode.


BLOCK PERSISTENCE THROUGH WMI EVENT SUBSCRIPTION

We registered no events relating to this control in the 18+ months of data we
collected. We actually started in audit mode because we missed the note about
it being a block-mode-only control (audit mode defaults to block).

Microsoft describe this one as: “this rule prevents malware from abusing WMI to
attain persistence on a device”.

This persistence behavior has been implemented in many C2 frameworks, making it
an easy win for adversaries. For that reason, we’d already developed additional
alerts related to this behavior, but we did appreciate the simplicity of this
control. We configured block mode and have not recorded any issues.

Recommendation: Block Mode.


BLOCK CREDENTIAL STEALING FROM THE WINDOWS LOCAL SECURITY AUTHORITY SUBSYSTEM
(LSASS.EXE)

The lsass protection rule is one of the most common ASR audit mode events we’ve
come across. It generates roughly 12 million events every six months in our
environment.

Many safe processes will generate ASR alerts for the lsass.exe rule, and from a
defender perspective it’s reasonably hard to differentiate between legitimate
use cases and adversary tradecraft.

While planning our block mode implementation, we originally decided not to
enforce blocking on this rule because of the significant number of events
recorded in audit mode.

After thinking about it for a while, and digging deeper into some of the use
cases and alerts, we felt that we were capturing mostly the events described by
Microsoft in this note:

> In some apps, the code enumerates all running processes and attempts to open
> them with exhaustive permissions. This rule denies the app’s process open
> action and logs the details to the security event log. This rule can generate
> a lot of noise. If you have an app that simply enumerates LSASS, but has no
> real impact in functionality, there is NO need to add it to the exclusion
> list. By itself, this event log entry doesn’t necessarily indicate a malicious
> threat.

We should also mention that we had already deployed Credential Guard in the
environment. This gave us extra comfort in thinking it was unlikely there was a
legitimate, critical use case for cross-process lsass access.

For those reasons, we moved the lsass rule into block mode and it paid off.
We’ve had the rule deployed for three months as of the date of this post and
have recorded no user impact. We’ll be sure to update this post if new
considerations come to light.

> Microsoft also provide some advice for a statistical approach to
> differentiate between signal and noise in this post.
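One way to approximate that statistical approach is to rank the source
processes behind the lsass-access audit events by volume: heavy, steady
enumerators are usually benign noise, while rare source processes deserve a
closer look. A minimal sketch, with made-up paths and an arbitrary 10%
threshold:

```python
from collections import Counter

# Illustrative source processes from 1122 audit events for the lsass
# rule; the paths are invented for this sketch.
source_processes = (
    [r"C:\Program Files\Vendor\inventory.exe"] * 19
    + [r"C:\Users\bob\AppData\Local\Temp\x.exe"]
)

counts = Counter(source_processes)
total = sum(counts.values())

# Processes behind a small fraction of the volume stand out for review.
rare = sorted(p for p, n in counts.items() if n / total < 0.10)
```

This is triage, not verdict: a rare source process still needs investigation
before it is treated as malicious, and a noisy one is not automatically safe.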

Recommendation: Block Mode.
Tip: You will see a lot of audit mode events for this setting. Don’t let that be
the end of your attempt to move to block mode.


BLOCK OFFICE APPLICATIONS FROM CREATING EXECUTABLE CONTENT

We recorded roughly 100 audit events every six months for a small subset of
users. On closer investigation, we learned that the file causing an issue with
the ASR rule “Block Office applications from creating executable content” is:

 * “microsoft.office.smartlookup.ssr.js”



This file appears to be part of the Office smart lookup feature, which is
described by Microsoft here.

The Office processes leveraging the .js file for smart lookup break down as
follows:

 * MS Excel: 75% of the time
 * MS OneNote: 20% of the time
 * MS Word: 5% of the time

A question we asked ourselves as defenders was whether an attacker could
leverage this .js file. While unlikely, it seems as though this file might
provide an opportunity for persistence at the very least. For that reason,
we’ve not excluded the file from ASR auditing at this point. We moved the rule
to block mode with some exceptions for the users with a business case and have
updated our internal wiki to make it easier for people to request exceptions
based on business need.

Recommendation: Block Mode.
Tip: Be careful if the Microsoft Office Smart Lookup feature is important to
your users.


BLOCK OFFICE APPLICATIONS FROM INJECTING CODE INTO OTHER PROCESSES

It was surprising and disappointing to learn that we had legitimate use cases
that would prevent us from moving forward with block mode for this rule right
away. It seemed like an easy one.

The Microsoft explanation for this rule says:

> This rule blocks code injection attempts from Office apps into other
> processes. Attackers might attempt to use Office apps to migrate malicious
> code into other processes through code injection, so the code can masquerade
> as a clean process. There are no known legitimate business purposes for using
> code injection.

Over a six month period, we collected around 10,000 events related to this
behavior.

The majority of the events relate to a proprietary/commercial, non-Microsoft
application we identified in the environment, deployed for a very small number
of users. To give a sense of how irregular the use of process injection seems
to be:

 * The proprietary/commercial application was responsible for 97% of this
   behavior in the environment.
 * Adobe Acrobat represented 1% (roughly 100 events per 6 months)
 * Microsoft Word alerts represented an additional 1%
 * The remaining events combined for 1% and were distributed across the office
   suite.

At this point we are unable to move forward with this rule in block mode for the
majority of the fleet and have had to create a separate group policy to
accommodate a small subset of users.

Recommendation: Block Mode with a small exceptions group.
Tip: This is one of the controls you’ll need to carefully assess your audit
data for. If you’re unlucky enough to have a business critical application with
this behavior and only a small number of people require the add-in, we
recommend using a separate exception group policy for that small group of
users.


BLOCK WIN32 API CALLS FROM OFFICE MACROS

If you’ve read this far, thank you, but you must be thinking, “This one should
have been an easy block decision.” We felt the same way and were surprised by
the data.

We’re currently recording about 15,000 events in this category every half year.

The events all relate to the use of an older style of Office macro and point to
variations of:

 * C:\Program Files\Microsoft Office\root\Office16\STARTUP\ProprietaryThing.dotm
   → word/vbaProject.bin

The most succinct explanation of this behavior we found is here.

The page explains that:

> “The vbaProject.bin file is a binary OLE COM container. This was the format
> used in older xls versions of Excel prior to Excel 2007. Unlike all of the
> other components of an xlsx/xlsm file the data isn’t stored in XML format.
> Instead the functions and macros as stored as a pre-parsed binary format”.

In our environment we can attribute this behavior to a legacy plugin that is
required for a very small number of users. Interestingly, a single user (who
relies heavily on the plugin) generates roughly 30% of the event volume.

As a side note: We block macros for most users in this network and recommend the
same for other environments we work with. @Oddvarmoe recently released this
excellent blog post describing the security options for macros in a corporate
setting.

Recommendation: Block Mode with a small exceptions group.
Tip: This is a second control that you’ll need to carefully assess your data
for. If you’re unlucky enough to have a business critical application with this
behavior, we recommend that you try to use a separate exception group policy
for those users while you push the vendor for an update.


BLOCK ALL OFFICE APPLICATIONS FROM CREATING CHILD PROCESSES

This control generates about 150 events every six months across a small number
of endpoint devices. The endpoint devices are used by team members that share a
common set of workflows. Interestingly, half of the events relate to a
commercial/proprietary application used by the team.

We reviewed the vendor documentation for the application and found that this
child process approach is actually the lesser of two evils offered for the
correct functioning of the application: it cannot be avoided while still
meeting the business need. The alternate option provided by the vendor
leverages rundll32.exe to achieve a similar goal.

The remaining 50% (~75 per 6 months) of events in this ASR category are split
between the well-known Office binaries (Word, Excel, etc.) and installers for
some of the less common applications in the suite (officeconnectsetup,
officeconnectupdate, etc.).

Recommendation: Block Mode with a small exceptions group.
Tip: Another control that you’ll need to carefully assess the data for. We
recommend using a separate exception group policy for users with a business
need.


BLOCK EXECUTION OF POTENTIALLY OBFUSCATED SCRIPTS

This was a low volume but widespread event. We observed about 70 events per six
months. The events were spread across a number of different endpoint devices.

The majority of events (80%+) in each six month period belong to:

 * C:\Windows\System32\WindowsPowerShell\v1.0\Modules\SmbShare\Smb.types.ps1xml

The Smb.types.ps1xml file is interesting. The leaf attributes in the
XML-structured file are PowerShell commands, but they do not actually appear to
be obfuscated beyond being part of this uncommon file type.

For example:

<GetScriptBlock>
    [Microsoft.PowerShell.Cmdletization.GeneratedTypes.SmbShare.ShareState]
    ($this.PSBase.CimInstanceProperties['ShareState'].Value)
</GetScriptBlock>

There is also an unsigned PowerShell module that gets flagged often compared to
the remaining data. You’ll find it in the location:

 * C:\Windows\System32\WindowsPowerShell\v1.0\Modules\ServerManagerTasks\ServerManagerTasks.psd1

Like the SMBTypes file, this one is also created by Microsoft and doesn’t appear
to be a concern.

# Author of this module
Author = 'Microsoft Corporation'

It does not appear to be obfuscated in any way, but is definitely tripping up
the detection.

As defenders, we are not particularly excited about excluding unsigned scripts
from the ASR ruleset, even when they sit in a reasonably secure path. We don’t
love the possibility that an adversary could easily modify them and potentially
stay under our radar.

In addition to the two interesting Microsoft files, we also had alerts for
“potential obfuscation” which related to internal development workflows.

Recommendation: Block Mode in a corporate setting, but be careful with developer
machines.
Tip: We were not as concerned with the Microsoft files above being blocked as we
were in making sure our large developer workforce was not negatively impacted.
If you’re in an environment where it is easy to separate developer machines from
corporate machines from a group policy perspective, that may be a sensible way
to approach this control. You may be able to build detections based on the audit
mode events.
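That last point, building detections from the audit events, can be sketched as
an allow-list comparison. The benign entries below are the two Microsoft files
discussed above; the alerting path and field handling are illustrative
assumptions about your pipeline:

```python
# Paths triaged as benign Microsoft-shipped files (from the audit data
# above); anything else flagged as "potentially obfuscated" is a
# candidate detection for the IR team.
KNOWN_BENIGN = {
    r"c:\windows\system32\windowspowershell\v1.0\modules\smbshare\smb.types.ps1xml",
    r"c:\windows\system32\windowspowershell\v1.0\modules\servermanagertasks\servermanagertasks.psd1",
}

# Illustrative paths pulled from 1122 audit events.
audit_paths = [
    r"C:\Windows\System32\WindowsPowerShell\v1.0\Modules\SmbShare\Smb.types.ps1xml",
    r"C:\Users\mallory\AppData\Roaming\update.vbs",
]

# Case-insensitive comparison, since NTFS paths are case-insensitive.
alerts = [p for p in audit_paths if p.lower() not in KNOWN_BENIGN]
```

Note the trade-off discussed above: an allow-list on unsigned files is only as
trustworthy as your ability to detect modification of those files.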


BLOCK EXECUTABLE FILES FROM RUNNING UNLESS THEY MEET A PREVALENCE, AGE, OR
TRUSTED LIST CRITERION

We realized right away that this ASR control was unlikely to provide value in
block mode at Palantir simply because our developers generate new files
frequently. We still felt it might assist others to share the data we collected
regarding this control:

 * We generated about 180,000 1122 ASR events of this category every six months.
 * Roughly 110,000 (60%) of the events relate directly to internal development
   workflows.
 * The remaining events were attributed to very common, frequently updated,
   external packages.

In a practical sense, we’ve accepted that we won’t be able to move past audit
mode on this one. This control still provides great value in audit mode, though.
We’re able to see, in a very simple query, all of the binaries that Microsoft
Defender raises an eyebrow at because of their age and other trust heuristics.

Recommendation: Audit Mode.
Tip: Only the most strictly controlled environments will be able to support
this ASR setting in block mode. In addition to developer workflows, the
applications we collect audit events for are very common; they’re just
frequently updated.


USE ADVANCED PROTECTION AGAINST RANSOMWARE

Here’s how Microsoft describe this control:

> This rule provides an extra layer of protection against ransomware. It scans
> executable files entering the system to determine whether they’re trustworthy.
> If the files closely resemble ransomware, this rule blocks them from running,
> unless they’re in a trusted list or an exclusion list.

In the environment we audited, this control was similar to the “block
executables unless they meet a prevalence…” control and did catch some developer
workflows. We saw ~500 events per six months and all of the events related to
internal product development for a team working on a Windows application.

For that reason, it wasn’t a good fit for the whole company to have this ASR
rule in enforced mode, but we were able to leverage a separate policy for those
that required an exception, and would encourage others to do the same. The key
to reducing friction is to monitor for events after enabling the rule and have a
process for exceptions.

Recommendation: Block Mode.
Tip: Be careful if you support developer workflows.


BLOCK PROCESS CREATIONS ORIGINATING FROM PSEXEC AND WMI COMMANDS

One of our “favorite” lateral movement techniques, sometimes leveraged by the
SpecterOps adversary simulation team we work with, is to use the “network
login” type and WMI/psexec style lateral movement. We say it’s our “favorite”
because it’s very challenging to prevent.

We wrote a long post on the topic here, but for the purposes of considering this
ASR control consider that:

 * Network login type will bypass things like "allow logon" rights management
   via group policy on the client side and can be as simple as: runas
   /user:contoso\bob-t1 "psexec.exe \\tier1server.contoso.com cmd.exe"
 * Network login type and psexec will often bypass network inspection devices
   looking for tier violations.
 * Modification of the DisableRestrictedAdmin key can be used to disable MFA in
   many cases and this can be achieved via PSExec/WMI with a single
   authentication factor.

But most importantly (and sadly):

 * This ASR rule, which seems perfect, cannot be moved into block mode if you
   use SCCM in the environment.

The paragraph provided by Microsoft is:

> This rule is incompatible with management through Microsoft Endpoint
> Configuration Manager because this rule blocks WMI commands the Configuration
> Manager client uses to function correctly.

As expected, this rule generates about 1,000,000 events per six months, spread
among ~1000 devices. It’s a big-volume event when you use SCCM. With careful
filtering, we are still able to derive value from the events, though: anything
outside of the SCCM use cases stands out.

The recommendation here is to analyze the events that do not relate to SCCM to
quickly identify abnormalities that this rule was designed to block.
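As an illustration of that filtering, the sketch below drops events whose
initiating process is the SCCM client and keeps the rest for review. The
CcmExec.exe path, record shape, and field names are assumptions for the sketch;
verify the actual SCCM client path in your environment before relying on it:

```python
# Illustrative audit records for the PSExec/WMI process-creation rule.
events = [
    {"initiating_process": r"C:\Windows\CCM\CcmExec.exe", "target": "cmd.exe"},
    {"initiating_process": r"C:\Windows\CCM\CcmExec.exe", "target": "msiexec.exe"},
    {"initiating_process": r"C:\Windows\System32\wbem\WmiPrvSE.exe",
     "target": "powershell.exe"},
]

# SCCM machinery accounts for almost all of the volume; filter it out so
# the remainder, which this rule was designed to catch, stands out.
SCCM_PROCESSES = {r"c:\windows\ccm\ccmexec.exe"}

review = [
    e for e in events
    if e["initiating_process"].lower() not in SCCM_PROCESSES
]
```

An attacker who can write to the excluded path inherits the exemption, so pair
a filter like this with integrity monitoring on the SCCM client binaries.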

Recommendation: Audit Mode, especially if you use SCCM or Intune.
Tip: If you use SCCM and event volume is a concern, you may wish to leave this
ASR rule unconfigured. If you don’t use SCCM or Intune, start with Audit mode,
review the data, and aim to move to Block mode.


BLOCK OFFICE COMMUNICATION APPLICATIONS FROM CREATING CHILD PROCESSES

In an environment with Microsoft Outlook and Microsoft Teams, we’re tracking
about 5000 events per 6 months.

 * 50% of the events are Outlook.
 * 40% are Teams.

The remaining events are linked to the communication apps as expected and
originate from processes with integrations.

The event spread is very similar to what we experienced with “Block Win32 API
calls from Office macros”; it is mostly attributed to third-party commercial
integrations used by a small portion of our user base. Half of all the events
belong to a single power user, and fewer than 50 users have logged events of
this type.

Recommendation: Audit Mode for users with Office integrations.
Tip: This will be different in most environments. Investigate audit events first
and move to block mode if none are registered.


FILE AND PATH EXCEPTIONS

Attack Surface Reduction policies can be configured with file and folder
exclusions. The process is described here.

There are three important notes you should be aware of:

 * Exclusions apply to all of your ASR rules; there are no per-rule ASR
   file/folder exclusions.
 * Exclusions apply to audit events as well.
 * Block persistence through WMI event subscription is not supported for
   exclusions.

We didn’t like the idea that the audit-only events would also be excluded, so
we have limited the use of file exclusions relating to ASR.


ASR AUDIT MODE

It’s important to mention that a lot of value is derived from audit mode events,
and while our goal was block mode for everything if possible, we don’t consider
remaining in audit mode a strictly negative outcome. Our incident response team
have taken the time to understand what “normal” looks like for each of the
categories we are not blocking and have been able to derive high fidelity signal
from the events.

Deploying ASR in audit mode can be done quickly and relatively safely. The key
tip is to remain on the lookout for excessive logging that could contribute to
performance problems or increased volume in your logging pipelines.

> A note for reviewing audit logs: The way to track and assess an individual
> ASR rule when searching event logs is via the per-rule unique GUID. The GUID
> for each rule can be found in the Microsoft documentation. For example, to
> review the “Adobe Reader” rule above we would search for 1121 events
> containing “7674ba52-37eb-4a4f-a9a1-f0f9a1619a2c”.
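That GUID-based search is easy to automate. The sketch below pulls the rule
GUID out of raw event message text with a regex and tallies events per rule;
the message strings are illustrative, and the GUID is the Adobe Reader rule
GUID from the note above:

```python
import re
from collections import Counter

# Matches "ID: <guid>" as it appears in ASR event message text.
GUID_RE = re.compile(
    r"ID:\s*([0-9a-fA-F]{8}-(?:[0-9a-fA-F]{4}-){3}[0-9a-fA-F]{12})"
)

# Illustrative 1121/1122 message bodies.
messages = [
    "Exploit Guard ... ID: 7674ba52-37eb-4a4f-a9a1-f0f9a1619a2c ...",
    "Exploit Guard ... ID: 7674ba52-37eb-4a4f-a9a1-f0f9a1619a2c ...",
]

# Tally events per rule GUID (lowercased for consistent grouping).
per_rule = Counter(
    m.group(1).lower() for msg in messages if (m := GUID_RE.search(msg))
)
```

Joining the tally against the GUID table in the Microsoft reference gives a
per-rule event volume report in a few lines.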


ADDITIONAL INFORMATION

 * Microsoft Attack Surface Reduction Reference


AUTHOR

Chad D.

Palantirtech
Infosec
Microsoft Defender
Information Security
Palantirinfosec



