
June 3, 2022



AUTO MODERATION IN DISCORD

So, you’ve set up your community and established some rules to serve as a
starting point. The next step to getting your server’s safety practices into
place is enforcing those rules. Automated moderation can play a large role in
the process of rule enforcement and keeping your community safe even when there
aren’t always eyes to do it for you.

This article will cover general and specific implementations and configurations
of automoderation, both with the aid of tools Discord has readily available, as
well as with tools provided by third party bots. Before reading on, be sure to
familiarize yourself with the following terms in order to best make use of this
article:

‘Raid’ / ‘Raider’ - A raid is when a large number of users join a community with the express intention of causing issues for it. A raider is an account engaging in this activity.

‘Alt’ / ‘Alt account’ - An alt is a throwaway account owned by a Discord user. In the context of raids, alts are created en masse to participate in raiding.

‘Self-bot’ - A self-bot is an account controlled via custom code or tools, which is against Discord’s Terms of Service. In the context of raids and moderation, these accounts are automated to spam, bypass filters, or engage in other disruptive activities.


WHY IS AUTO MODERATION IMPORTANT?

Auto moderation is integral to many communities on Discord, especially those of any notable size. There are many good reasons for this, some of which may apply to your community as well. The security that auto moderation provides can give your users a much better experience, make the lives of your moderators easier, and prevent malicious users from damaging your community, or even from joining it in the first place.

AUTO MODERATION VS MANUAL MODERATION

If you run a well-established community, you likely already have a moderation team in place. You may wonder: why should I use auto moderation when I already have moderators? Auto moderation isn’t a replacement for manual moderation; rather, it serves to enrich it. Your moderation team can continue to make informed decisions within your community while auto moderation makes that process easier for them by responding to common issues at any time, more quickly than a human moderator can.

KNOWING WHAT’S RIGHT FOR YOUR COMMUNITY

Different communities will warrant varying levels of auto moderation. It’s
important to be able to classify your community and consider what level of auto
moderation is most suitable to your community’s needs. Keep in mind that Discord
does impose some additional guidelines depending on how you designate your
community. Below are different kinds of communities and their recommended auto
moderation systems:

PRIVATE COMMUNITIES

If you run a Discord community with limited invites where every new member is known, auto moderation won’t be a critical function unless you have a significantly larger member count. It’s still recommended to have at least some auto moderation, however, namely text filters, anti-spam, or Discord’s AutoMod keyword filters.

PUBLIC COMMUNITIES

If you run a Discord community that is Discoverable or has public invites where
new members can come from just about anywhere, it’s strongly recommended to have
anti-spam and text filters or Discord’s AutoMod keyword filters in place.
Additionally, you should be implementing some level of member verification to
facilitate the server onboarding process. If your community is large, with
several thousand members, anti-raid functionality may become necessary.
Remember, auto moderation is configurable to your rules, as strict or loose as
they may be, so keep this principle in mind when deciding what level of
automation works best for you.

VERIFIED AND PARTNERED COMMUNITIES

If your Discord community is Verified or Partnered, you will need to adhere to additional guidelines to maintain that status. Auto moderation is recommended for these communities so you can feel confident that these guidelines are being enforced effectively at all times, so consider using anti-spam and text filters or Discord’s AutoMod keyword filters. If you have a Vanity URL or your community is Discoverable, anti-raid is a must-have in order to protect your community from malicious actors.




BUILT-IN MODERATION FEATURES

Some of the most powerful tools in auto moderation come with your community and
are built directly into Discord. Located under the Server Settings tab, you will
find the Moderation settings. This page houses some of the strongest safety
features that Discord has to natively offer. These settings can help secure your
Discord community without the elaborate setup of a third party bot involved. The
individual settings will be detailed below.


AUTOMOD

AutoMod is a new content moderation feature as of 2022, allowing those with the
“Manage Server” and “Administrator” permissions to set up keyword and spam
filters that can automatically trigger moderation actions such as blocking
messages that contain specific keywords or spam from being posted, and logging
flagged messages as alerts for you to review.

This feature has a wide variety of uses within the realm of auto moderation, allowing mods to automatically log malicious messages and protect community members from harm and from exposure to undesirable spam and words like slurs or severe profanity. AutoMod’s coverage also extends to messages within threads, text-in-voice channels, and Forum channels, giving moderation teams peace of mind across these message surfaces without adding more manual moderation work.

Setting up AutoMod is very straightforward. First, make sure your server has the Community feature enabled. Then, navigate to your server’s settings and click the AutoMod tab. From there, you can start setting up keyword and spam filters.

Keyword Filters

Keyword filters allow you to flag and block messages containing specific words,
characters, and symbols from being posted. You can set up one “Commonly Flagged
Words” filter, along with up to 3 custom keyword filters that allow you to enter
a maximum of 1,000 keywords each, for a total of four keyword filters.

When inserting keywords, you should separate each word with a comma, like so: Bad, words, go, here. Keyword matches are exact and whitespace-aware. For example, the keyword “Test Filter” will be triggered by “test filter” but not by “testfilter” or “test”. Note that keywords also ignore capitalisation.

To have AutoMod filter messages containing words that partially match your
keywords, which is helpful for preventing users from circumventing your filters,
you can modify your keywords with the asterisk (*) wildcard character. This
works as follows:

 * *cat - flags “bobcat” or “copycat”.
 * cat* - flags “catching” or “caterpillar”.
 * *cat* - flags “scathing” or “locate”.
   

Be careful with wildcards so as to not have AutoMod incorrectly flag words that
are acceptable and commonly used!
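To make the wildcard rules above concrete, here is a minimal Python sketch of how such matching could work. This is not Discord's implementation: the regex word boundary (`\b`) is an approximation of AutoMod's whitespace-aware exact matching.

```python
import re

def compile_keyword(keyword: str) -> re.Pattern:
    r"""Translate an AutoMod-style keyword into a regex (illustrative sketch).

    A bare keyword matches as a whole word, case-insensitively; a leading or
    trailing * permits extra characters on that side, mirroring the examples
    above. \b approximates AutoMod's whitespace-aware matching.
    """
    prefix_wild = keyword.startswith("*")
    suffix_wild = keyword.endswith("*")
    core = re.escape(keyword.strip("*"))
    pattern = ("" if prefix_wild else r"\b") + core + ("" if suffix_wild else r"\b")
    return re.compile(pattern, re.IGNORECASE)

def is_flagged(message: str, keywords: list) -> bool:
    """True if the message triggers any of the given keyword filters."""
    return any(compile_keyword(k).search(message) for k in keywords)
```

With this sketch, `*cat` flags “copycat” but not “catching”, and `cat*` does the opposite, matching the behaviour described above.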



COMMONLY FLAGGED WORDS

AutoMod’s Commonly Flagged Words keyword filter comes equipped with three
predefined wordlists that provide communities with convenient protection against
commonly flagged words. There are three predefined categories of words
available: Insults and Slurs, Sexual Content, and Severe Profanity. These
wordlists will all share one rule, meaning they’ll all have the same response
configured. These lists are maintained by Discord and can help keep
conversations in your Community consistent with Discord's Community Guidelines.
This can be particularly helpful for Partnered and Verified communities.




EXEMPTIONS

Both AutoMod’s commonly flagged word filters and custom filters allow for exemptions in the form of roles and channels, with the commonly flagged word filter also allowing for the exemption of words from Discord’s predefined wordlists. Messages from anyone with an exempted role, messages sent in an exempted channel, and messages containing exempted keywords from Discord’s wordlists will not trigger responses from AutoMod.

This is notably useful for allowing moderators to bypass filters, allowing highly trusted users to send less restricted messages, and tailoring the commonly flagged wordlists to your community’s needs. As an example, you could prevent new users from sending Discord invites with a keyword filter of *discord.gg/*, *discord.com/invites/* and then give an exemption to moderators or users who have a certain role, allowing them to send Discord invites. This could also be used to allow sharing Discord invites only in a specific channel. There are a lot of potential use cases for exemptions! Members with the Manage Server and Administrator permissions will always be exempt from all AutoMod filters. Bots and webhooks are also exempt.

SPAM FILTERS

Spam, by definition, is irrelevant or unsolicited messages. AutoMod comes
equipped with two spam filters that allow you to flag messages containing
mention spam and content spam.

MENTION SPAM

Mention spam is when users post messages containing excessive mentions for the
purpose of disrupting your server and unnecessarily pinging others.

AutoMod’s mention spam filter lets you flag and block messages containing an
excessive number of unique @role and @user mentions. You define what is
“excessive” by setting a limit on the number of unique mentions that a message
may contain, up to 50.

It is recommended to select "Block message" as an AutoMod response when it
detects a message containing excessive mentions as this prevents notifications
from being sent out to tagged users and roles. This helps prevent your channels
from being clogged up by disruptive messages containing mention spam during mass
mention attempts and mention raids, and saves your members from the annoyance of
getting unnecessary notifications and ghost pings.
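Unique-mention counting can be sketched using Discord's documented message syntax for user (`<@id>`, `<@!id>`) and role (`<@&id>`) mentions. The limit of 5 below is an illustrative default, not Discord's.

```python
import re

# User mentions appear as <@id> or <@!id>; role mentions as <@&id>.
MENTION_RE = re.compile(r"<@([!&]?)(\d+)>")

def exceeds_mention_limit(content: str, limit: int = 5) -> bool:
    """Flag a message whose count of unique user/role mentions exceeds limit.

    <@id> and <@!id> refer to the same user and are counted once.
    """
    unique = {("role" if kind == "&" else "user", uid)
              for kind, uid in MENTION_RE.findall(content)}
    return len(unique) > limit
```

Counting unique mentions, rather than total mentions, mirrors the rule described above: repeating the same ping many times still counts as one mention.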



SPAM CONTENT

This filter flags spammy text content that has been widely reported by other users as spam, such as unsolicited messages, free Nitro scams and advertisements, and invite spam.

This filter identifies spam at large by using a model trained on messages that users have reported to Discord as spam. Enabling this filter is an effective way to block a variety of messages that resemble spammy content reported by Discord users, and to identify spammers in your community who should be weeded out. However, this filter isn’t perfect and might not catch all forms of spam, such as DM spam, copypasta, or repeated messages.



AUTOMATIC RESPONSES

You can configure AutoMod’s keyword and spam filters with the following
automatic responses when a message is flagged:

BLOCK MESSAGE

This response will prevent a message containing a keyword or spam from being
sent entirely. Users will be notified with an ephemeral message when this
happens, informing them the community has blocked the message from being sent.

Discord will seamlessly block all messages matching your keyword, spam content, and excessive mention filters from being sent, regardless of the volume of messages, making this response especially effective for preventing or de-escalating raids where raiders try to spam your channels with repeated messages and excessive mentions.






SEND AN ALERT

This response will send an alert containing who-what-where information about a flagged message to a logging channel of your choice.

The alert will preview what the caught message would’ve looked like, including its full content. It also shows a pair of buttons at the bottom of the message, ⛨ Actions and Report Issues. These action buttons will bring up a user context menu, allowing you to use any permissions you have to kick, ban, or time out the member. The alert also displays the channel in which the message was attempted to be sent and the filter that was triggered. In the future, some auto moderation bots may be able to detect these alerts and action users accordingly.




TIME OUT USER

This response will automatically apply a time out penalty to a user, preventing
them from interacting in the server for the duration of the penalty. Affected
users are unable to send messages, react to messages, join voice channels or
video calls during their timeout period. Keep in mind that they are able to see
messages being sent during this period.

To remove a timeout penalty, Moderators and Admins can right-click on any
offending user’s name to bring up their Profile Context Menu and select “Remove
Timeout.”



RECOMMENDED CONFIGURATION

AutoMod is a very powerful tool that you can set up easily to reduce moderation work and keep your community's channels and conversations clean and welcoming during all hours of the day. For example, you may want to use three keyword filters: one to just block messages, one to just send alerts for messages, and one to do both.

Overall, it's recommended to have AutoMod block messages you wouldn't want community members to see. For example, high-harm keywords such as slurs and other extreme language should have AutoMod’s “block message” and “send alerts” responses enabled. This will allow your moderation team to take action against undesirable messages and the users behind them while shielding the rest of your community from exposure. Low-harm keywords or commonly spammed phrases can have only AutoMod’s “block message” response enabled, without alerts. This will still prevent undesirable messages from being sent without spamming your logs. You can also quickly reconfigure AutoMod’s keyword and spam filters in real time to prevent and de-escalate raids, by adding spammed keywords or adjusting your mention limit in the event of a mention raid, so that raids can't cause lasting damage.

It's also recommended to have AutoMod send you alerts for more subjective content that requires a closer look from your moderation team, rather than having it blocked entirely. This will allow your moderation team to investigate flagged messages with additional context to ensure there’s nothing malicious going on. This is useful for keywords that are commonly misinterpreted or sent in a non-malicious context.


VERIFICATION LEVEL

None - This turns off verification for your community, meaning anyone can join
and immediately interact with your community. This is typically not recommended
for public communities as anyone with malicious intent can immediately join and
wreak havoc.

Low - This requires people joining your community to have a verified email which
can help protect your community from the laziest of malicious users while
keeping everything simple for well-meaning users. This would be a good setting
for a small, private community.

Medium - This requires the user to have a verified email address and for their
account to be at least 5 minutes old. This further protects your community by
introducing a blocker for people creating accounts solely to cause problems.
This would be a good setting for a moderately sized community or small public
community.


High - This includes the same protections as both medium and low verification
levels but also adds a 10 minute barrier between someone joining your community
and being able to interact. This can give you and anyone else responsible for
keeping things clean in your community time to respond to ‘raids’, or large
numbers of malicious users joining at once. For legitimate users, you can
encourage them to do something with this 10 minute time period such as read the
rules and familiarize themselves with informational channels to pass the time
until the waiting period is over. This would be a good setting for a large
public community.

Highest - This requires a joining user to have a verified phone number in addition to the above requirements. This setting can be bypassed by determined ‘raiders’, but it takes additional effort. This would be a good setting for a private community where security is paramount, or a public community with custom verification. This is a requirement many normal Discord users won’t meet, whether by choice or by inability. It’s worth noting that Discord’s phone verification disallows VoIP numbers, to prevent abuse.



EXPLICIT MEDIA CONTENT FILTER

Not everyone on the internet is sharing content with the best intentions in
mind. Discord provides a robust system to scan images and embeds to make sure
inappropriate images don’t end up in your community. There are varying levels of
scrutiny to the explicit media content filter which are:

Don’t scan any media content - Nothing sent in your community will go through
Discord’s automagical image filter. This would be a good setting for a small,
private community where only people you trust can post images, videos etc.

Scan media content from users without a role - Self-explanatory, this works well to stop new users from filling your community with unsavoury imagery. When combined with the proper verification methods, this would be a good setting for a moderately sized private or public community.

Scan media content from all members - This setting makes sure everyone,
regardless of their roles, isn’t posting unsavoury things in your community. In
general, we recommend this setting for ALL public facing communities.

Once you’ve decided on the base level of auto moderation you want for your
community, it’s time to look at the extra levels of auto moderation bots can
bring to the table! The next few sections are going to detail the ways in which
a bot can moderate.




BOT-CONTROLLED AUTO MODERATION

If you want to keep your chats clean and clear of certain words, phrases, spam, mentions, and everything else that can be misused by malicious users, you’re going to need a little help from a robotic friend or two. Examples of freely available bots are referenced below. If you decide to use several bots, you may need to juggle several moderation systems.

When choosing a bot for auto moderation, you should also consider their
capabilities for manual moderation (things like managing mutes, warns etc.).
Find a bot with an infraction/punishment system you and the rest of your
moderator team find to be the most appropriate. All of the bots listed in this
article have a manual moderation system.

The main and most pivotal forms of auto moderation are:

 * Anti-Spam
 * Text Filters
 * Anti-Raid
 * User Filters

Each of these subsets of auto moderation will be detailed below along with
recommended configurations depending on your community.

BOTS SEEN IN THIS GUIDE:

 * Mee6 - https://mee6.xyz/
 * Dyno - https://dyno.gg/
 * Giselle - https://docs.gisellebot.com/bot-invite.html
 * AutoModerator - https://automoderator.app/
 * Fire - https://getfire.bot/
 * Bulbbot - https://bulbbot.rocks/
 * Gearbot - https://gearbot.rocks/

SUPPORT OF DISCORD API FEATURES

It’s important that your auto moderation bot(s) of choice adopt the cutting edge of Discord API features, as this allows them to provide better capabilities and integrate more powerfully with Discord. Slash commands are especially important, as you’re able to configure which commands are usable on a case-by-case basis for each command. This allows you to maintain very detailed moderation permissions for your moderation team. Bots that support more recent API features are generally also more actively developed, and thus more reliable in reacting to new threat vectors and in adapting to new features on Discord. A severely outdated bot could react insufficiently to a high-harm situation.






SLASH COMMAND PERMISSIONS

As mentioned above, one of the more recent features is slash commands. Slash commands are configurable per-command, per-role, and per-channel. This allows you to restrict moderation commands solely to your moderation team without relying on the bot’s own permission checks working perfectly. This is relevant because there have been documented cases of a moderation bot’s permission checking being bypassed, allowing normal users to execute moderation commands.





ANTI-SPAM

One of the most common forms of auto moderation is anti-spam, a type of filter
that can detect and prevent various kinds of spam. Depending on what bot(s)
you’re using, this comes with various levels of configurability.



[Table not captured: comparison of anti-spam features across the listed bots. *Unconfigurable filters catch all instances of the trigger, whether spammed or a single instance. **Giselle combines these elements into one filter.]

Anti-spam is integral to running a large private community, or a public
community. There are multiple types of spam a user can engage in, with some of
the most common forms listed in the table above. These types of spam messages
are also very typical of raids, especially Fast Messages and Repeated Text.
While spam can largely be defined as irrelevant or unsolicited messages, the
nature of spam can vary greatly. However, the vast majority of instances involve
a user or users sending lots of messages with the same content with the intent
of disrupting your community.

There are subsets of this spam that many anti-spam filters will be able to catch. For example, if any of the following: Mentions, Links, Invites, Emoji, and Newline Text are spammed repeatedly in one message, or spammed repeatedly across several messages, they will trigger most Repeated Text and Fast Messages filters appropriately. Subset filters are still a good thing for your anti-spam filter to have, as you may wish to punish more or less harshly depending on the kind of spam. Notably, Emoji and Links may warrant separate punishments: spamming 10 links in a single message is inherently worse than having 10 emoji in a message.

Anti-spam will only act on these things contextually, usually in an X in Y
fashion where if a user sends, for example, ten links in five seconds, they will
be punished to some degree. This could be ten links in one message, or one link
in ten messages. In this respect, some anti-spam filters can act simultaneously
as Fast Messages and Repeated Text filters.
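The "X in Y" behaviour described above can be sketched with a per-user deque of timestamps. The limits below are illustrative, and a real bot would pass in the current time from its event loop (e.g. `time.monotonic()`).

```python
from collections import defaultdict, deque

class FastMessageFilter:
    """Sketch of an 'X in Y' anti-spam rule: flag a user who sends more than
    max_events messages within window_seconds (limits are illustrative)."""

    def __init__(self, max_events: int = 10, window_seconds: float = 5.0):
        self.max_events = max_events
        self.window = window_seconds
        self.history = defaultdict(deque)   # user_id -> recent message timestamps

    def record(self, user_id: int, now: float) -> bool:
        """Record one message at time `now`; return True if over the limit."""
        q = self.history[user_id]
        q.append(now)
        while q and now - q[0] > self.window:   # drop events outside the window
            q.popleft()
        return len(q) > self.max_events
```

A True result would then feed into the bot's mute, kick, or ban response, since (as noted above) deleting messages one by one can fall behind during fast spam.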

Sometimes, spam may happen too quickly for a bot to keep up. Discord imposes rate limits on bots, to stop them from harming communities, which can prevent the deletion of individual messages when those messages are sent too quickly. This often happens in raids. As such, Fast Messages filters should prevent offenders from sending messages at all; this can be done via a mute, kick, or ban. If you want to protect your community from raids, read on to the Anti-Raid section of this article.

TEXT FILTERS

Text filters allow you to control the types of words and/or links that people
are allowed to put in your community. Different bots will provide various ways
to filter these things, keeping your chat nice and clean.




[Table not captured: comparison of text-filter features across the listed bots. *Defaults to banning ALL links. **Users can bulk-input a YML config. ***Only the templates may be used; custom filters cannot be made.]

A text filter is a must for a well-moderated community. It’s strongly recommended you use a bot that can filter text based on a banlist. A banned-words filter can catch links and invites, provided http:// and https:// are added to the word banlist (to block all links) or specific full site URLs are added to block individual websites. In addition, discord.gg can be added to a banlist to block ALL Discord invites.

A banned-words filter is integral to running a public community, especially for Partnered, Community, or Verified servers, which must meet additional content guidelines that such a filter can help uphold.

Before configuring a filter, it’s a good idea to work out what is and isn’t okay to say in your community, regardless of context. For example, racial slurs are generally unacceptable in almost all communities. Banned-word filters with an explicit banlist often won’t account for context. For this reason, it’s also important that a robust filter contains allowlisting options. For example, if you add ‘cat’ to your filter and someone says ‘catch’, they could get in trouble for using an otherwise acceptable word.
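The banlist/allowlist interaction can be sketched as follows, reusing the ‘cat’/‘catch’ example from the text. The word lists are placeholders: substring matching catches simple obfuscation, while the allowlist rescues acceptable words.

```python
import re

BANLIST = {"cat"}          # placeholder banned term from the example above
ALLOWLIST = {"catch"}      # acceptable whole words containing a banned term

def banned_word_hits(message: str):
    """Return the tokens that trip the banlist, honouring the allowlist.

    Substring matching catches obfuscation like 'xcatx', while allowlisted
    whole words are never flagged.
    """
    tokens = re.findall(r"[\w']+", message.lower())
    return [t for t in tokens
            if t not in ALLOWLIST and any(b in t for b in BANLIST)]
```

The design choice here is exactly the trade-off the text describes: substring matching is strict enough to catch evasion, so an allowlist is needed to avoid punishing ordinary words.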

Filter immunity may also be important to your community, as there may be individuals who need to discuss the use of banned words, namely members of a moderation team. There may also be channels that allow the usage of otherwise banned words. For example, a serious channel dedicated to discussion of real-world issues may require discussions about slurs or other demeaning language; in this case, channel-based immunity is integral to allowing those conversations.

Link filtering is important to communities where sharing links in ‘general’ chats isn’t allowed, or where there are specific channels dedicated to sharing that content. This allows a community to remove links with an appropriate reprimand, without treating that misstep with the same gravity as one would treat someone who used a slur.

Allowlisting/banlisting and templates for links are also good to have. While many communities will use catch-all filters to make sure links stay in specific channels, some links will always be inherently unsavory. Being able to filter specific links is a good feature, with preset filters (like the Google filter provided by YAGPDB) coming in very handy for protecting your user base without requiring intricate setup on your behalf. However, it is recommended you configure a custom filter as a supplement, to ensure specific slurs, words, etc. that break the rules of your community aren’t being said.

Invite filtering is equally important in large or public communities, where users will attempt to raid, scam, or otherwise assault your community with links intended to manipulate your user base, or where unsolicited self-promotion is potentially fruitful. Filtering allows these invites to be recognized instantly and dealt with more harshly. Some bots also allow per-community allowlisting/banlisting, letting you control which communities are approved to share invites and which aren’t. A good example of invite filtering usage would be something like a partners channel, where invites to other, closely linked communities are shared. These communities should be added to an invite allowlist to prevent their deletion.
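Invite filtering with an allowlist might be sketched like this. The allowlisted code is a hypothetical placeholder; the regex covers the common discord.gg and discord.com/invite(s) URL forms mentioned in this article.

```python
import re

INVITE_RE = re.compile(r"discord(?:\.gg|\.com/invites?)/([A-Za-z0-9-]+)",
                       re.IGNORECASE)
APPROVED_INVITES = {"partnercode"}   # placeholder: allowlisted invite codes

def unapproved_invites(message: str):
    """Extract Discord invite codes and return those not on the allowlist."""
    return [code for code in INVITE_RE.findall(message)
            if code.lower() not in APPROVED_INVITES]
```

A bot could run this per message and delete anything returning a non-empty list, while partner-channel invites pass through untouched.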

BUILT-IN SUSPICIOUS LINK AND FILE DETECTION

Discord also implements a native filter on links and files, though this filter is entirely client-side and doesn’t prevent malicious links or files from being sent. It does, however, warn users who attempt to click suspicious links or download suspicious files (executables, archives, etc.) and prevents known malicious links from being clicked at all. While this doesn’t remove offending content, and shouldn’t be relied on as auto moderation, it does stop some cracks in your auto moderation from harming users.

ANTI-RAID

Raids, as defined earlier in this article, are mass-joins of users (often
selfbots) with the intent of damaging your community. Protecting your community
from these raids can come in various forms. One method involves gating your
server using a method detailed elsewhere in the DMA.


[Table not captured: comparison of anti-raid features across the listed bots. *Unconfigurable; triggers raid prevention based on user joins and damage prevention based on humanly impossible user activity. Will not automatically trigger on the free version of the bot.]




Raid detection means a bot can detect the large number of users joining that’s
typical of a raid, usually in an X in Y format. This feature is usually chained
with Raid Prevention or Damage Prevention to prevent the detected raid from
being effective, wherein raiding users will typically spam channels with
unsavory messages.
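Join-rate raid detection follows the same "X in Y" shape as message anti-spam, but over member joins. The thresholds below are illustrative only.

```python
from collections import deque

class RaidDetector:
    """Sketch: flag a raid when more than max_joins members join within
    window_seconds (thresholds are illustrative)."""

    def __init__(self, max_joins: int = 10, window_seconds: float = 30.0):
        self.max_joins = max_joins
        self.window = window_seconds
        self.joins = deque()   # timestamps of recent joins

    def on_member_join(self, now: float) -> bool:
        """Record a join; return True if the join rate looks like a raid."""
        self.joins.append(now)
        while self.joins and now - self.joins[0] > self.window:
            self.joins.popleft()
        return len(self.joins) > self.max_joins
```

A True result is where the chaining described above happens: the bot would hand off to raid prevention or damage prevention, for example by kicking the triggering accounts or locking public channels.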

Raid-user detection is a system designed to detect users who are likely to be participating in a raid, independently of the quantity or frequency of new user joins. These systems typically look for users that were created recently or have no profile picture, among other triggers depending on how elaborate the system is.
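Account age, one of the signals mentioned above, can be derived without any API call: Discord snowflake IDs encode the creation timestamp in their top bits, measured in milliseconds since 2015-01-01 UTC. The one-week threshold below is an illustrative choice, not a standard.

```python
DISCORD_EPOCH_MS = 1_420_070_400_000   # 2015-01-01T00:00:00Z, Discord's snowflake epoch

def account_created_ms(user_id: int) -> int:
    """Unix timestamp in milliseconds encoded in a Discord snowflake ID."""
    return (user_id >> 22) + DISCORD_EPOCH_MS

def looks_like_raid_alt(user_id: int, now_ms: int,
                        min_age_ms: int = 7 * 24 * 3_600_000) -> bool:
    """Flag accounts younger than min_age_ms (one week here, illustrative)."""
    return now_ms - account_created_ms(user_id) < min_age_ms
```

In practice this check would be one of several signals (alongside missing avatars and the like) rather than a sole trigger, since plenty of legitimate members also have young accounts.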

Raid prevention stops a raid from happening, triggered either by raid detection or by raid-user detection. These countermeasures stop participants of a raid from harming your community by preventing raiding users from accessing it in the first place, such as through kicks, bans, or mutes of the users that triggered the detection.

Damage prevention stops raiding users from causing any disruption via spam to
your community by closing off certain aspects of it either from all new users,
or from everyone. These functions usually prevent messages from being sent or
read in public channels that new users will have access to. This differs from
Raid Prevention as it doesn’t specifically target or remove new users in the
community.

Raid anti-spam is an anti-spam system robust enough to prevent raiding users’
messages from disrupting channels via the typical spam found in a raid. For an
anti-spam system to fit this dynamic, it should be able to prevent Fast Messages
and Repeated Text. This is a subset of Damage Prevention.
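As a sketch of those two triggers, the Python class below tracks each user’s
recent messages and flags Fast Messages (too many in a short window) and
Repeated Text (the same content sent over and over). The thresholds are
illustrative assumptions.

```python
from collections import deque

class RaidAntiSpam:
    """Per-user tracker covering the two triggers named above:
    Fast Messages (rate) and Repeated Text (identical content).
    Thresholds are illustrative, not recommendations."""

    def __init__(self, max_msgs=5, window_seconds=5, max_repeats=3):
        self.max_msgs = max_msgs
        self.window_seconds = window_seconds
        self.max_repeats = max_repeats
        self.history = {}  # user_id -> deque of (timestamp, content)

    def check(self, user_id, timestamp, content):
        """Return a trigger name if the message should be actioned, else None."""
        msgs = self.history.setdefault(user_id, deque())
        msgs.append((timestamp, content))
        # Keep only messages inside the sliding window.
        while msgs and timestamp - msgs[0][0] > self.window_seconds:
            msgs.popleft()
        if len(msgs) > self.max_msgs:
            return "fast_messages"
        repeats = sum(1 for _, c in msgs if c == content)
        if repeats >= self.max_repeats:
            return "repeated_text"
        return None
```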

Raid cleanup commands are typically mass-message removal commands to clean up
channels affected by spam as part of a raid, often aliased to ‘Purge’ or
‘Prune’.

Built-in anti-raid

It should be noted that Discord features built-in raid and user bot detection,
which is rather effective at preventing raids as or before they happen. If you
are logging member joins and leaves, you can infer that Discord has taken action
against shady accounts if the time difference between the join and the leave
times is extremely small (such as between 0-5 seconds). However, you shouldn’t
rely solely on these systems if you run a large or public community.
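If you do log joins and leaves, that inference can be automated. The helper
below is a minimal sketch: given Unix timestamps for a member’s join and leave,
it flags the near-instant departures the paragraph describes (the 0-5 second
window is the article’s heuristic).

```python
def likely_discord_action(join_time, leave_time, max_delta=5.0):
    """True when the gap between a member's join and leave is so small
    that the account was most likely removed by Discord's built-in
    detection rather than leaving on its own.
    Times are Unix timestamps in seconds."""
    delta = leave_time - join_time
    return 0 <= delta <= max_delta
```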

USER FILTERS

Messages aren’t the only way potential evildoers can introduce unwanted content
to your community. They can also manipulate their Discord username or Nickname
to be abusive. There are a few different ways a username can be abusive and
different bots offer different filters to prevent this.



Username filtering is less important than other forms of auto moderation. When
choosing which bot(s) to use for your auto moderation needs, this should
typically be a later priority, since users with malicious usernames can simply
be nicknamed to hide their actual username.
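A common wrinkle with username filters is Unicode lookalikes: a blocked word
written with fullwidth or accented characters slips past a naive substring
check. The sketch below normalizes a name before matching; the blocklist is a
placeholder for illustration, and a real bot would use your server’s own terms.

```python
import unicodedata

# Placeholder blocklist for illustration only.
BLOCKED_TERMS = {"admin", "moderator"}

def normalize_name(name):
    """Fold lookalike characters (fullwidth letters, accents, etc.)
    down to plain lowercase ASCII so substring checks can't be dodged."""
    decomposed = unicodedata.normalize("NFKD", name)
    return "".join(c for c in decomposed if c.isascii()).lower()

def username_flagged(name):
    """True when the normalized name contains a blocked term."""
    cleaned = normalize_name(name)
    return any(term in cleaned for term in BLOCKED_TERMS)
```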




SPECIALIZED AUTO MODERATION BOTS

So far, we’ve covered general auto moderation bots with a wide toolset. However,
there are some specialized bots that only cover one specific facet of auto
moderation and execute it especially well. A few examples and descriptions are
below:

 * Beemo - Bot raid detection and prevention

This bot detects raids as they happen globally, banning raiders from your
community. This is especially notable as it’ll ban detected raiders from raids
in other communities it’s in as they join your community, making it
significantly more effective than other anti-raid solutions that only pay
attention to your community.

 * Fish - Malicious link and DM raider detection

Fish is designed to counter scamming links and accounts, targeting patterns in
joining users to prevent DM raids (like normal raids, but members are directly
messaged instead). These DM raids are typically phishing scams, which Fish also
filters by deleting known phishing sites.

 * Safelink and Crosslink - Link automoderation

Both of these bots are highly specialized link and file moderation bots,
effectively filtering adult sites, scamming sites and other categories of sites
as defined by your moderation team.




WHICH BOT DO I USE?

When choosing a bot for auto moderation, ensure it has an infraction/punishment
system you and your mod team are comfortable with, and that its features suit
your community. Consider testing
out several bots and their compatibility with Discord’s built-in auto moderation
features to find what works best for your server’s needs. You should also keep
in mind that the list of bots in this article is not comprehensive - you can
consider bots not listed here. The world of Discord moderation bots is vast and
fascinating, and we encourage you to do your own research!

FOR SUPER-LARGE PUBLIC COMMUNITIES (>100,000)

For the largest of communities, it’s recommended you employ everything Discord
has to offer. You should use the High or Highest Verification level, all of
Discord’s AutoMod keyword filters and a robust moderation bot like Gearbot or
Gaius. You should seriously consider additional bots like Fish, Beemo and
Safelink/Crosslink to aid in keeping your users safe, along with detailed
Content Moderation filters. At this scale, you should also consider premium,
self-hosted, or custom moderation bots to meet the unique demands of your
community.

FOR LARGE PUBLIC COMMUNITIES (>10,000)

It’s recommended you use a bot with a robust and diverse toolset, while
simultaneously utilizing AutoMod’s commonly flagged word filters. You should use
the High Verification level to aid in preventing raids. If raiding isn’t a large
concern for your community, Gearbot and Giselle are viable options. Your largest
concerns in a community of this size are going to be anti-spam and text filters,
meaning robust keyword filters are also highly recommended, with user filters as
a good bonus. Beemo is generally recommended for any server of this size. At
this scale, a self-hosted, custom, or premium bot may also be a viable option,
but such bots aren’t covered in this article.

FOR MIDSIZED PUBLIC COMMUNITIES (>1,000)

It’s recommended you use Fire, Gearbot, Bulbbot, AutoModerator or Giselle. Mee6
and Dyno are also viable options; however, as they’re very large bots, they have
been known to experience outages, leaving your community unprotected for long
stretches of time. At this community size you’re likely not going to be very
concerned about anti-raid, with anti-spam and text filters being your main
focus.
You’ll likely be able to get by just using AutoMod’s keyword filters and
commonly flagged words lists provided by Discord. User filters, at this size,
are largely unneeded and your Verification Level shouldn’t need to be any higher
than Medium.

FOR SMALL PUBLIC COMMUNITIES AND PRIVATE COMMUNITIES

If your community is small or private, the likelihood of malicious users joining
to wreak havoc is rather low. As such, you can choose a bot with general
moderation features you like the most and use that for auto moderation. Any of
the bots listed in this article should serve this purpose. At this scale, you
should be able to rely solely on AutoMod’s keyword filters. Your Verification
Level is largely up to you at this scale, depending on where you anticipate
member growth coming from, with Medium being the default recommendation.




CONFIGURING AUTO MODERATION FOR LISTED BOTS

Mee6

First, make sure Mee6 is in the communities you wish to configure it for. Then
log into its online dashboard (https://mee6.xyz/dashboard/), navigate to the
community, then to Plugins, and enable the ‘Moderator’ plugin. Within the
settings of this plugin are all the auto moderation options.

Dyno

First, make sure Dyno is in the communities you wish to configure it for. Then
log into its online dashboard (https://dyno.gg/account), navigate to the
community, then to the ‘Modules’ tab. Within this tab, navigate to ‘Automod’ and
you will find all the auto moderation options.

Giselle

First, make sure Giselle is in the communities you wish to configure it for.
Then, look at its documentation (https://docs.gisellebot.com/) for full details
on how to configure auto moderation for your community.

AutoModerator

First, make sure AutoModerator is in the communities you wish to configure it
for. Then, look at its documentation (https://automoderator.app/docs/setup/) for
full details on how to configure auto moderation for your community.

Fire

First, make sure Fire is in the communities you wish to configure it for. Then,
look at its documentation (https://getfire.bot/commands) for full details on how
to configure auto moderation for your community.

Bulbbot

First, make sure Bulbbot is in the communities you wish to configure it for.
Then, look at its documentation (https://docs.bulbbot.rocks/getting-started/)
for full details on how to configure auto moderation for your community.

Gearbot

First, make sure Gearbot is in the communities you wish to configure it for.
Then, look at its documentation (https://gearbot.rocks/docs) for full details on
how to configure auto moderation for your community.





Tags:
Moderation
Server Safety