INLINIAC


EVERYTHING INLINE.


VUURMUUR 0.8 HAS BEEN RELEASED

Posted on 24/02/2019 by inliniac

I’ve just pushed the 0.8 release. See my announcement here. Get it from github
or the ftp server.

Largest changes:

 * ipv6 support using ip6tables
 * logging uses nflog – initial work by Fred Leeflang
 * connection logging and viewer
 * add rpfilter and improved helper support
 * a ‘dialog’ based setup wizard
 * single code base / package
 * massive code cleanup

I plan to continue working on Vuurmuur, but development will likely remain at a
low pace. Suricata development is simply taking too much of my time.

As a next big step, I’m thinking about making the leap to nftables. This would
be quite a project, so I’m resisting it a bit. On the other hand, I would like
to learn more about nftables as well.

Another thing I’ve been dreaming of is somehow integrating support for Suricata.
Fully supporting Suricata would be a massive effort, but a simpler integration
may be feasible, probably starting with showing logs and setting some basic
config options.

If you’d like to help with Vuurmuur development, that would be great. It’s still
written in C, but at least the code is a lot cleaner than in 0.7.

Posted in Vuurmuur | Tagged firewall, iptables, IPv6, nftables, Suricata,
Vuurmuur | 1 Reply


LEARNING RUST: HASH MAP LOOKUP/INSERT PATTERN

Posted on 19/05/2017 by inliniac

In Suricata we’re experimenting with implementing app-layer parsers in Rust. See
Pierre Chifflier’s presentation at the last SuriCon: [pdf].

The first experimental parsers will soon land in master.

So coming from a C world I often use a pattern like:

value = hash_lookup(hashtable, key);
if (!value) {
    hash_insert(hashtable, key, somevalue);
}

Playing with Rust and its HashMap implementation I wanted to do something very
similar. Look up a vector and update it with the new data if it exists, or
create a new vector if not:

match self.chunks.get_mut(&self.cur_ooo_chunk_offset) {
    Some(mut v) => {
        v.extend(data);
    },
    None => {
        let mut v = Vec::with_capacity(32768);
        v.extend(data);
        self.chunks.insert(self.cur_ooo_chunk_offset, v);
    },
};

Not super compact but it looks sane to me. However, Rust’s borrow checker
doesn’t accept it.

src/filetracker.rs:233:29: 233:40 error: cannot borrow `self.chunks` as mutable more than once at a time [E0499]
src/filetracker.rs:233                             self.chunks.insert(self.cur_ooo_chunk_offset, v);
                                                   ^~~~~~~~~~~
src/filetracker.rs:233:29: 233:40 help: run `rustc --explain E0499` to see a detailed explanation
src/filetracker.rs:224:27: 224:38 note: previous borrow of `self.chunks` occurs here; the mutable borrow prevents subsequent moves, borrows, or modification of `self.chunks` until the borrow ends
src/filetracker.rs:224                     match self.chunks.get_mut(&self.cur_ooo_chunk_offset) {
                                                 ^~~~~~~~~~~
src/filetracker.rs:235:22: 235:22 note: previous borrow ends here
src/filetracker.rs:224                     match self.chunks.get_mut(&self.cur_ooo_chunk_offset) {
...
src/filetracker.rs:235                     };
                                           ^
error: aborting due to previous error

Rust has strict rules on taking references: at any one time there can be either
a single mutable reference or any number of immutable references.

The ‘match self.chunks.get_mut(&self.cur_ooo_chunk_offset)’ counts as one
mutable reference. ‘self.chunks.insert(self.cur_ooo_chunk_offset, v)’ would be
the second. Thus the error.

My naive way of working around it is this:

let found = match self.chunks.get_mut(&self.cur_ooo_chunk_offset) {
    Some(mut v) => {
        v.extend(data);
        true
    },
    None => { false },
};
if !found {
    let mut v = Vec::with_capacity(32768);
    v.extend(data);
    self.chunks.insert(self.cur_ooo_chunk_offset, v);
}

This is accepted by the compiler and works.

But I wasn’t quite happy yet, so I started looking for something better. I found
this post on StackOverflow (where else?).

It turns out there is a Rust pattern for this:

use std::collections::hash_map::Entry::{Occupied, Vacant};

let c = match self.chunks.entry(self.cur_ooo_chunk_offset) {
    Vacant(entry) => entry.insert(Vec::with_capacity(32768)),
    Occupied(entry) => entry.into_mut(),
};
c.extend(data);


Much better 🙂

It can even be done in a single line:

(*self.chunks.entry(self.cur_ooo_chunk_offset).or_insert(Vec::with_capacity(32768))).extend(data);

Personally I think this is getting too hard to read, but maybe I just need to
grow into Rust syntax a bit more.
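
For reference, here is a minimal standalone sketch of the same pattern using
‘or_insert_with’, which has the small advantage of only allocating the vector
when the key is actually vacant. This is a toy HashMap example, not Suricata’s
file tracker code:

use std::collections::HashMap;

fn main() {
    let mut chunks: HashMap<u64, Vec<u8>> = HashMap::new();
    let data = b"some data";

    // Allocate the vector only if the entry is vacant, then extend it
    // with the new data in either case.
    chunks
        .entry(1024)
        .or_insert_with(|| Vec::with_capacity(32768))
        .extend_from_slice(data);

    assert_eq!(chunks[&1024], b"some data");
}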

Posted in Development, Suricata, Uncategorized | Tagged rust, Suricata | Leave a
reply


VUURMUUR DEVELOPMENT UPDATE

Posted on 12/01/2017 by inliniac

Over the holidays I’ve spent some time refreshing the Vuurmuur code. One major
thing that is now done is that the 3 different ‘projects’ (libvuurmuur, vuurmuur
and vuurmuur-conf) are now merged into a single ‘project’. This means that a
single ‘./configure && make && make install’ now installs everything.

When I originally started Vuurmuur I had much bigger dreams for it than
eventually materialized. Also, I didn’t understand autotools very well, so it
was easier to keep the project split up. At some point there were even 5
projects!

One very convenient consequence is that development can now be done without
system wide installation of the libs. This may sound trivial, but it really
speeds things up.

I’ve updated the install script and the debian scripts for this new model as
well.


QA

A second point is the use of better QA.

 1. Travis-CI integration. This tests gcc/clang builds for compilation warnings
    and errors, the install script, and debian package generation.
 2. Scan-build and cppcheck. Vuurmuur is now clean in scan-build 3.9 and
    cppcheck 1.77.
 3. Coverity Scan. I’ve registered Vuurmuur with Coverity’s Scan program.
    Initially there were quite a few issues, although most of them minor. I’ve
    fixed all of them so now Vuurmuur is clean for Coverity as well.
 4. ASAN/UBSAN: I’m running Vuurmuur with address and undefined behavior
    sanitizers enabled. Fixed a few issues because of that.


ERROR HANDLING

One major source of issues with the static checkers was the error handling in
vuurmuur_conf. This led to many completely untested code paths, usually for
things like memory allocation failure or other ‘internal’ errors. I’ve
simplified that handling enormously by adding a class of ‘fatal’ errors that
simply exit vuurmuur_conf in such conditions. This has led to a smaller and
cleaner code base.


USER VISIBLE CHANGES

Most of the changes are internal, but a few things are user visible.

 1. removal of QUEUE support. ip_queue is long dead and has been replaced with
    NFQUEUE.
 2. proper sorting of connections in Connection Viewer.
 3. default to black background in vuurmuur_conf

I’m hoping to push out a new release soon(ish). Time constraints will continue to
be a big issue though. So if anyone wants to help out, please let me know.

Posted in Development, Vuurmuur | Tagged Vuurmuur, vuurmuur-conf | Leave a reply


SURICATA BITS, INTS AND VARS

Posted on 20/12/2016 by inliniac

Since the beginning of the project we’ve spoken about variables on multiple
levels. Of course flowbits defined by the Snort language came first, but other
flow based variables quickly followed: flowints for basic counting, and vars for
extracting data using pcre expressions.

I’ve always thought of the pcre data extraction using substring capture as a
potentially powerful feature. However the implementation was lacking. The
extracted data couldn’t really be used for much.


INTERNALS

Recently I’ve started work to address this. The first thing that needed to be
done was to move the mapping between variable names, such as flowbit names, and
the internal IDs out of the detection engine. The detection engine is not
available in the logging engines, and logging of the variables was one of my
goals.

This is a bit tricky, as we want a lock-free data structure to avoid runtime
slowdowns. However, rule reloads need to be able to update it. The solution I’ve
created uses a read-only lookup structure after initialization that is ‘hot
swapped’ with the new data at reload.
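
To illustrate the idea, here is a minimal Rust sketch of such a hot-swapped
lookup table. This is only an illustration of the pattern (Suricata’s actual
implementation is in C, and the ‘VarNameMap’ type below is made up):

use std::collections::HashMap;
use std::sync::{Arc, RwLock};

struct VarNameMap {
    // The lock is only held briefly to clone or swap the pointer,
    // never while a lookup is in progress.
    current: RwLock<Arc<HashMap<u32, String>>>,
}

impl VarNameMap {
    // Detect/logging threads take a snapshot once, then do lookups
    // against an immutable table without any locking.
    fn snapshot(&self) -> Arc<HashMap<u32, String>> {
        self.current.read().unwrap().clone()
    }

    // At rule reload: build a complete new table, then swap it in.
    fn reload(&self, new_map: HashMap<u32, String>) {
        *self.current.write().unwrap() = Arc::new(new_map);
    }
}

fn main() {
    let map = VarNameMap {
        current: RwLock::new(Arc::new(HashMap::from([(1, "old.bit".to_string())]))),
    };
    let snap = map.snapshot();
    map.reload(HashMap::from([(1, "new.bit".to_string())]));
    assert_eq!(snap[&1], "old.bit");           // old snapshot stays valid
    assert_eq!(map.snapshot()[&1], "new.bit"); // new readers see the swap
}

The old table is freed automatically once the last reader drops its snapshot,
so a reload never invalidates lookups that are already in progress.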


PCRE

The second part of the work is to allow for more flexible substring capture.
There are 2 limitations in the current code: first, only a single substring can
be captured per rule. Second, the names of the variables were limited by
libpcre: 32 chars, with hardly any special chars like dots. The way to express
these names has been a bit of a struggle.

The old way looks like this:

pcre:"/(?P<flow_somename>.*)/";

This creates a flow-based variable named ‘somename’ that is filled by this pcre
expression. The ‘flow_’ prefix can be replaced by ‘pkt_’ to create a packet
based variable.

In the new method the names are no longer inside the regular expression, but
they come after the options:

pcre:"/([a-z]+)\/[a-z]+\/(.+)\/(.+)\/changelog$/GUR, \
    flow:ua/ubuntu/repo,flow:ua/ubuntu/pkg/base,     \
    flow:ua/ubuntu/pkg/version";

After the regular pcre regex and options comes a comma-separated list of
variable names. The prefix here is ‘flow:’ or ‘pkt:’, and the names can contain
special characters now. The names map to the capturing substring expressions in
order.
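
As a toy illustration of that in-order mapping, here is a hypothetical Rust
sketch. It uses the external regex crate rather than libpcre, the URI and the
names are just made-up examples, and the mapping is done by zipping the name
list with the capture groups:

use regex::Regex; // assumes the `regex` crate as a dependency

fn main() {
    let re = Regex::new(r"([a-z]+)/[a-z]+/(.+)/(.+)/changelog$").unwrap();
    let names = ["ua/ubuntu/repo", "ua/ubuntu/pkg/base", "ua/ubuntu/pkg/version"];
    let uri = "main/source/libxml2/libxml2_2.7.8/changelog"; // made up

    if let Some(caps) = re.captures(uri) {
        // The variable names map to the capturing groups in order;
        // group 0 is the whole match, so skip it.
        for (name, m) in names.iter().zip(caps.iter().skip(1)) {
            println!("{} = {}", name, m.unwrap().as_str());
        }
    }
}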


KEY/VALUE

While developing this a logical next step became extraction of key/value pairs.
One capture would be the key, the second the value. The notation is similar to
the last:

pcre:"^/([A-Z]+) (.*)\r\n/G, pkt:key,pkt:value";

‘key’ and ‘value’ are simply hardcoded names to trigger the key/value
extraction.


LOGGING

Things start to get interesting when logging is added. First, by logging
flowbits, existing rulesets can benefit.

{
  "timestamp": "2009-11-24T06:53:35.727353+0100",
  "flow_id": 1192299914258951,
  "event_type": "alert",
  "src_ip": "69.49.226.198",
  "src_port": 80,
  "dest_ip": "192.168.1.48",
  "dest_port": 1077,
  "proto": "TCP",
  "tx_id": 0,
  "alert": {
    "action": "allowed",
    "gid": 1,
    "signature_id": 2018959,
    "rev": 2,
    "signature": "ET POLICY PE EXE or DLL Windows file download HTTP",
    "category": "Potential Corporate Privacy Violation",
    "severity": 1
  },
  "http": {
    "hostname": "69.49.226.198",
    "url": "/x.exe",
    "http_user_agent": "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1;
SV1)",
    "http_content_type": "application/octet-stream",
    "http_method": "GET",
    "protocol": "HTTP/1.1",
    "status": 200,
    "length": 23040
  },
  "vars": {
    "flowbits": {
      "exe.no.referer": true,
      "http.dottedquadhost": true,
      "ET.http.binary": true
    }
  }
}

When rules are created to extract info and set specific ‘information’ flowbits,
logging can create value:

"vars": {
  "flowbits": {
    "port/http": true,
    "ua/os/windows": true,
    "ua/tool/msie": true
  },
  "flowvars": {
    "ua/tool/msie/version": "6.0",
    "ua/os/windows/version": "5.1"
  }
}

"http": {
  "hostname": "db.local.clamav.net",
  "url": "/daily-15405.cdiff",
  "http_user_agent": "ClamAV/0.97.5 (OS: linux-gnu, ARCH: x86_64, CPU: x86_64)",
  "http_content_type": "application/octet-stream",
  "http_method": "GET",
  "protocol": "HTTP/1.0",
  "status": 200,
  "length": 1688
},
"vars": {
  "flowbits": {
    "port/http": true,
    "ua/os/linux": true,
    "ua/arch/x64": true,
    "ua/tool/clamav": true
  },
  "flowvars": {
     "ua/tool/clamav/version": "0.97.5"
  }
}

In the current code the alert and http logs are showing the ‘vars’.

Next to this, an ‘eve.vars’ log is added, which is a specific output of vars
independent of the other logs.


USE CASES

Some of the use cases could be to add more information to logs without having to
add code. For example, I have a set of rules that extracts which packages are
installed by apt-get, or for which packages Ubuntu’s updater fetches change
logs:

"vars": {
  "flowbits": {
    "port/http": true,
    "ua/tech/python/urllib": true
  },
  "flowvars": {
    "ua/tech/python/urllib/version": "2.7",
    "ua/ubuntu/repo": "main",
    "ua/ubuntu/pkg/base": "libxml2",
    "ua/ubuntu/pkg/version": "libxml2_2.7.8.dfsg-5.1ubuntu4.2"
  }
}

It could even be used as a simple way to ‘parse’ protocols and create logging
for them.


PERFORMANCE

Using rules to extract data from traffic is not going to be cheap, for 2
reasons. First, Suricata’s performance mostly comes from avoiding inspecting
rules. It has a lot of tricks to make sure as few rules as possible are
evaluated. Likewise, the rule writers work hard to make sure their rules are
only evaluated if they have a good chance of matching.

The rules that extract data from user agents or URIs are going to match very
often. So even if the rules are written to be efficient, they will still be
evaluated a lot.

Secondly, extraction currently can be done through PCRE and through Lua
scripts, neither of which is very fast.


TESTING THE CODE

Check out this branch https://github.com/inliniac/suricata/pull/2468 or its
replacements.


BONUS: UNIX SOCKET HOSTBITS

Now that variable names can exist outside of the detection engine, it’s also
possible to add unix socket commands that modify them. I created this for
‘hostbits’. The idea here is to simply use hostbits to implement
white/blacklists. A set of unix socket commands will be added to add and remove
them. The existing hostbits implementation handles expiration and matching.

To block on the blacklist:

drop ip any any -> any any (hostbits:isset,blacklist; sid:1;)

To pass all traffic on the whitelist:

pass ip any any -> any any (hostbits:isset,whitelist; sid:2;)

Both rules are ‘ip-only’ compatible, so they will be efficient.

A major advantage of this approach is that the black/whitelists can be
modified from the ruleset itself, just like any hostbit.

E.g.:

alert tcp any any -> any any (content:"EVIL"; \
    hostbits:set,blacklist; sid:3;)

A new ‘list’ can be created this way by simply creating a rule that
references a hostbit name.


UNIX COMMANDS

Unix socket commands to add and remove hostbits need to be added.

Add:

suricatasc -c "add-hostbit <ip> <hostbit> <expire>"
suricatasc -c "add-hostbit 1.2.3.4 blacklist 3600"

If a hostbit is added for an existing hostbit, its expiry timer is updated.

Hostbits expire after the expiration timer passes. They can also be manually
removed.

Remove:

suricatasc -c "remove-hostbit <ip> <hostbit>"
suricatasc -c "remove-hostbit 1.2.3.4 blacklist"


FEEDBACK & FUTURE WORK

I’m looking forward to getting some feedback on a couple of things:

 * log output structure and logic. The output needs to be parseable by things
   like ELK, Splunk and jq.
 * pcre options notation
 * general feedback about how it runs

Some things I’ll probably add:

 * storing extracted data into hosts, ippairs
 * more logging

Some other ideas:

 * extraction using a dedicated keyword, so outside of pcre
 * ‘int’ extraction

Let me know what you think!

Posted in Development, ids, IPS, Suricata | Tagged hostbits, pcre, Suricata,
unix socket | Leave a reply


FUZZING SURICATA WITH PCAPS

Posted on 09/02/2016 by inliniac

Yesterday I wrote about fuzzing Suricata with AFL. Today I’m going to show
another way. Since early in the project, we’ve shipped a perl based fuzzer
called ‘wirefuzz’. The tool is very simple. It takes a list of pcaps, changes
random bits in them using Wireshark’s editcap and runs them through Suricata.
Early in the project Will Metcalf, who wrote the tool, found a lot of issues
with it.

Since it’s random-based, the fuzzing is quite shallow. It is still a
great way of stressing the decoder layers of Suricata though, as we need to be
able to process all junk input correctly.

Lately we had an issue that I thought should have been found using fuzzing:
#1653, and indeed, when I started fuzzing the code I found the issue within an
hour. Pretty embarrassing.

Another reason to revisit the tool is Address Sanitizer. It’s great because it’s
so unforgiving: if it finds something, it blows up. That is exactly what you
want when fuzzing. It’s recommended to use AFL with Asan as well. Wirefuzz does
support a valgrind mode, but that is very slow. With Asan things are quite fast
again, while doing much more thorough checking.

So I decided to spend some time on improving this tool so that I can add it to
my CI setup.

Here is how to use it.

git clone https://github.com/inliniac/suricata -b dev-fuzz-v3.1
cd suricata
git clone https://github.com/OISF/libhtp -b 0.5.x
bash autogen.sh
export CFLAGS="-fsanitize=address"
./configure --disable-shared --sysconfdir=/etc
make
mkdir fuzzer

# finally run the fuzzer
qa/wirefuzz.pl -r=/home/victor/pcaps/*/* -c=suricata.yaml -e=0.02 \
    -p=src/suricata -l=fuzzer/ -S=rules/http-events.rules -N=1

What this command does is:

 * run from the source dir, output into fuzzer/
 * modify 2% of each pcap randomly while making sure the pcap itself stays valid
   (-e=0.02)
 * use the rules file rules/http-events.rules exclusively (-S)
 * use all the pcaps from /home/victor/pcaps/*/*
 * return success if a single pass over the pcaps was done (-N=1)

One thing to keep in mind is that the script creates a copy of the pcap when
randomizing it. This means that very large files may cause problems depending on
your disk space.

I would encourage everyone to fuzz Suricata using your private pcap collections.
Then report issues to me… pretty please? 🙂

*UPDATE 2/15*: the updated wirefuzz.pl is now part of the master branch.

Posted in Development, Suricata | Tagged fuzzing, Suricata | Leave a reply


FUZZING SURICATA WITH AFL

Posted on 08/02/2016 by inliniac

AFL is a very powerful fuzzer that tries to be smarter than random input
generating fuzzers. It’s cool, but needs a bit more babysitting. I’ve added
some support to Suricata to assist AFL.

Here’s how to get started on fuzzing pcaps.

mkdir ~/tmp/fuzz
git clone https://github.com/inliniac/suricata -b dev-afl-v5
cd suricata
git clone https://github.com/OISF/libhtp -b 0.5.x
bash autogen.sh
export CFLAGS="-fsanitize=address"
export AFLDIR=/opt/afl-1.96b/bin/
export CC="${AFLDIR}/afl-gcc"
export CXX="${AFLDIR}/afl-g++"
./configure --disable-shared --sysconfdir=/etc --enable-afl


The configure output should show:
Compiler: /opt/afl-1.96b/bin//afl-gcc (exec name) / gcc (real)

make

# create tmp output dir for suricata
mkdir tmp/

# test the command to be fuzzed
src/suricata --runmode=single -k none -c suricata.yaml -l tmp/ \
    -S /dev/null \
    -r /opt/afl-1.96b/share/afl/testcases/others/pcap/small_capture.pcap

# start the fuzzer
export AFL_SKIP_CPUFREQ=1
/opt/afl-1.96b/bin/afl-fuzz -t 100000 -m none \
    -i /opt/afl-1.96b/share/afl/testcases/others/pcap/ -o aflout -- \
    src/suricata --runmode=single -k none -c suricata.yaml -l tmp/ \
    -S /dev/null -r @@

AFL should start running.

Couple of things to keep in mind:

 * the above list assumes you have a /etc/suricata/ set up already, including a
   reference.config and classification.config
 * don’t skip the test step or you risk that AFL will just fuzz some basic error
   reporting by Suricata
 * the used ‘dev-afl-v5’ branch makes fuzzing faster and more reliable by
   disabling random, threading and a few other things
 * src/suricata --build-info should show the compiler is afl
 * keep your test cases small, even then runtime is going to be very long. AFL
   takes the input and modifies it to find as many unique code paths as possible

 


FUZZING RULES AND YAMLS

For fuzzing rules and YAMLs the compilation steps are the same.

To fuzz rules, create a directory & test input:

mkdir testrules
echo 'alert http any any -> any any (content:"abc"; sid:1; rev:1;)' \
    > testrules/rules.txt

# test command
src/suricata -c suricata.yaml -l tmp/ --afl-parse-rules -T \
    -S testrules/rules.txt

# run AFL
export AFL_SKIP_CPUFREQ=1
/opt/afl-1.96b/bin/afl-fuzz -t 100000 -m none \
    -i testrules/ -o aflout -- \
    src/suricata -c suricata.yaml -l tmp/ --afl-parse-rules \
    -T -S @@


Finally, YAMLs:

mkdir testyamls/
cp suricata.yaml testyamls/

# test command
src/suricata -l tmp/ --afl-parse-rules -T -S testrules/rules.txt \
    -c testyamls/suricata.yaml

# run AFL
export AFL_SKIP_CPUFREQ=1
/opt/afl-1.96b/bin/afl-fuzz -t 100000 -m none \
    -i testyamls/ -o aflout -- \
    src/suricata -l tmp/ --afl-parse-rules \
    -T -S testrules/rules.txt -c @@


Note that the default YAML is HUGE for this purpose. It may be more efficient to
use a subset of it.

I plan to create some wrapper scripts to make things easier in the near future.
Meanwhile, if you have crashes to report, please send them my way!

Posted in Development, Suricata | Tagged afl, fuzzing, Suricata | 3 Replies


SURICATA 3.0 IS OUT!

Posted on 27/01/2016 by inliniac

Today, almost 2 years after the release of Suricata 2.0, we released 3.0! This
new version of Suricata improves performance, scalability, accuracy and general
robustness. Next to this, it brings a lot of new features.

New features are too numerous to mention here, but I’d like to highlight a few:

 * netmap support: finally a high speed capture method for our FreeBSD friends,
   IDS and IPS
 * multi-tenancy: single instance, multiple detection configs
 * JSON stats: making it much easier to graph the stats in ELK, etc
 * Much improved Lua support: many more fields/protocols available, output
   scripts

Check the full list here in the announcement:
http://suricata-ids.org/2016/01/27/suricata-3-0-available/


NEW RELEASE MODEL

As explained here, this is the first release of the new release model where
we’ll be trying for 3 ‘major’ releases a year. We originally hoped for a month
of release candidate cycles, but due to some issues found and the holidays +
travel on my end it turned into 2 months.

My goal is to optimize our testing and planning to reduce this further, as this
release cycle process is effectively an implicit ‘freeze’. Take a look at the
number of open pull requests to see what I mean. For the next cycle I’ll also
make the freeze explicit, and announce it.


LOOKING FORWARD

While doing a release is great, my mind is already busy with the next steps. We
have a bunch of things coming that are exciting to me.

Performance: my detection engine rewrite work has been tested by many already,
and reports are quite positive. I’ve heard reports of up to a 25% increase,
which is a great bonus considering the work was started to clean up this messy
code.

ICS/SCADA: Jason Ish is finalizing a DNP3 parser that is very full featured,
with detection, logging and lua support. Other protocols are also being
developed.

Documentation: we’re in the process of moving our user docs from the wiki to
sphinx. This means we’ll have versioned docs, nice pdf exports, etc. It’s
already 180 pages!

Plus lots of other things. Keep an eye out on our mailing lists, bug tracker or
IRC channel.

Posted in ids, IPS, oisf, Suricata | Tagged new release, release, Suricata |
Leave a reply


NEW SURICATA RELEASE MODEL

Posted on 24/11/2015 by inliniac

As the team is back from a very successful week in Barcelona, I’d like to take a
moment to go over what we discussed and decided with regard to development.

One thing no one was happy with is how the release schedules are working.
Releases were meant to be reasonably frequent, but the time between major
releases was growing longer and longer. The 2.0 branch, for example, is closing
in on 2 years as the stable branch. The result is that many people are missing
out on many of the improvements we’ve been doing. Currently many people using
Suricata actually use a beta version, or even our git master, in production!

What we’re going to try is time-based releases. Releases will pretty much be
more like snapshots of the development branch. We think this can work as our dev
branch is more and more stable due to our extensive QA setup.

Of course, we’ll have to make sure we’re not going to merge super intrusive
changes just before a release. We’ll likely get into some pattern of merge
windows and (feature) freezes, but how this will exactly play out is something
we’ll figure out as we go.

We’re going to shoot for 3 such releases per year.

In our redmine ticket tracker, I’ve also created a new pseudo-version ‘Soon’.
Things we think should be addressed for the next release will be added there,
and we’ll retarget the tickets when they are actually implemented.

Since it’s already almost 2 years since we’ve done 2.0, we think the next
release warrants a larger jump in the versioning. So we’re going to call it 3.0.
The first release candidate will likely be released this week, hopefully
followed by a stable release in December.

Posted in Development, ids, IPS, oisf, Suricata | Tagged release, Suricata |
Leave a reply


GET PAID TO WORK ON SURICATA?

Posted on 09/10/2015 by inliniac

If you like fiddling with Suricata development, maybe you can get paid to do it.

Companies ask me regularly if I can recommend Suricata developers. I’m going to
assemble a list of people who are interested in such work. If you’d like me to
consider you in such cases, drop me an email.

If you really want me to *recommend* you, it’s important that I actually know
you somewhat. So becoming a (volunteer) contributor will help a lot.

Things to mention in your email:
– interests
– github profile
– open source contributions
– social media, blog links
– availability, whether you’re a contractor or looking for a real J-O-B

Who knows, maybe something good will come from it!

Btw, if you’re not a dev but great at research, or deployments and tuning, I
know some ppl that are always looking for such new hires as well!

Posted in Development, Suricata | Tagged Suricata, work | Leave a reply


DOMAIN BACK UP

Posted on 30/09/2015 by inliniac

Due to an ‘administrative problem’ between my registrar Xs4all and their
US-partner Network Solutions, my domain has been offline since Sunday. Resolving
the issue took them some time, and there was a technical issue after the
administrative one was resolved. Add long DNS TTL values into the mix, and the
disruption was quite lengthy. The domain is back up, although it may still take
some hours for everyone to see it due to DNS caching.

Sadly, every email that has been sent to my domain during this time is lost. You
should have gotten an error from Network Solutions: a very ugly error for that
matter, looking more like spam or even something malicious. That was completely
out of my control.

So if you have something to send me you can probably do so now again. If not,
please wait a few more hours.

I did like the silence though, so not all at once please! 😛

Posted in Personal | Leave a reply

