If you are an engineer whose organization uses Linux in production, I have two quick questions for you:
1) How many unique outbound TCP connections have your servers made in the past hour?
2) Which processes and users initiated each of those connections?
If you can answer both of these questions, fantastic! You can skip the rest of this blog post. If you can’t, boy-oh-boy do we have a treat for you! We call it go-audit.
Syscalls are how all software communicates with the Linux kernel. Syscalls are used for things like connecting network sockets, reading files, loading kernel modules, and spawning new processes (and much much much more). If you have ever used strace, dtrace, ptrace, or anything with trace in the name, you’ve seen syscalls.
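To make that concrete, here is roughly what a single outbound connection looks like through strace (output abridged to one line; the file descriptor and address below are made up for illustration):

strace -e trace=connect curl -s https://example.com > /dev/null
connect(5, {sa_family=AF_INET, sin_port=htons(443), sin_addr=inet_addr("198.51.100.7")}, 16) = 0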
Most folks who use these *trace tools are familiar with syscall monitoring for one-off debugging, but at Slack we collect syscalls as a source of data for continuous monitoring, and so can you.
Linux Audit has been part of the kernel since the 2.6 series. The audit system consists of two major components: kernel code that hooks and monitors syscalls, and a userspace daemon that logs these syscall events.
To demonstrate what we can do with auditd, let’s use an example. Say we want to log an event every time someone reads the file /data/topsecret.data. (Note: Please don’t store actual top secret data in a file called topsecret.data). With auditd, we must first tell the kernel that we want to know about these events. We accomplish this by running the userspace auditctl command as root with the following syntax:
auditctl -w /data/topsecret.data -p rwxa
Now, every time /data/topsecret.data is accessed (even via a symlink), the kernel will generate an event. The event is sent to a userspace process (usually auditd) via something called a “netlink” socket. (The tl;dr on netlink: you tell the kernel to send messages to a process via its PID, and the events appear on that socket.)
In most Linux distributions, the userspace auditd process then writes the data to /var/log/audit/audit.log. If there is no userspace process connected to the netlink socket, these messages generally appear on the console and can be seen in the output of dmesg.
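For reference, a single read of that file produces something along these lines in audit.log (abridged; the timestamps, IDs, and other field values here are illustrative). Note that one logical event spans several lines, all sharing the same msg=audit(timestamp:id) pair:

type=SYSCALL msg=audit(1475531855.122:6847): arch=c000003e syscall=257 success=yes exit=3 ... pid=14610 uid=1000 comm="cat" exe="/usr/bin/cat"
type=CWD msg=audit(1475531855.122:6847): cwd="/home/alice"
type=PATH msg=audit(1475531855.122:6847): item=0 name="/data/topsecret.data" inode=272817 ... nametype=NORMAL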
This is pretty cool, but watching a single file is also a very simple case. Let’s do something a bit more fun, something network related.
Daemon processes (or rogue netcats, ahem) usually use the listen syscall to listen for incoming connections. For example, if Apache wants to listen for incoming connections on port 80, it requests this from the kernel. To log these events, we again notify the kernel of our interest by running auditctl:
auditctl -a exit,always -S listen
Now, every time a process starts listening on a socket, we receive a log event. Neat! This logging can be applied to any syscall you like. If you want to handle the questions I mentioned at the top of this post, you’ll want to look at the connect syscall. If you want to watch every new process or command on a host, check out execve.
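For example, rules along these lines would cover those two cases (the arch filter and the key labels after -k are just illustrative; on 64-bit hosts you would typically add both b64 and b32 variants of each rule):

auditctl -a exit,always -F arch=b64 -S connect -k outbound-connections
auditctl -a exit,always -F arch=b64 -S execve -k process-execution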
Super note: We are not limited to the actions of human users. Think about more advanced cases, like Apache spawning `bash` or making an outbound connection to some sketchy IP, and what that can tell you.
So now we have a bunch of events in /var/log/audit/audit.log, but logfiles do not a monitoring system make. What should we do with this data? Unfortunately, there are some properties of the auditd log format that make it challenging to work with:
- The data format is (mostly) key=value
- Events can be one or multiple lines
- Events can be interleaved and arrive out of order
There are a few existing tools to parse these log events, such as aureport or ausearch, but they seem to be focused on investigations that happen after the fact, as opposed to being used continuously.
We saw a lot of potential uses for the data we could get from auditd, but needed a way to run this at scale. We developed the project go-audit as a replacement for the userspace part of auditd, with the following goals in mind:
- Convert auditd’s multiline events into a single JSON blob (see the sketch after this list)
- Speak directly to the kernel via netlink
- Be (very) performant
- Do minimal (or zero) filtering of events on the hosts themselves
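To make the first goal concrete, the converted output is a single self-contained JSON document per kernel event, along these general lines (this is an illustrative sketch only, not go-audit’s exact schema; see the repository for the real field names):

{
  "sequence": 1226433,
  "timestamp": "1459447820.317",
  "messages": [
    { "type": 1300, "data": "arch=c000003e syscall=42 success=yes exit=0 ... comm=\"curl\" exe=\"/usr/bin/curl\"" },
    { "type": 1306, "data": "saddr=020001BB..." }
  ]
}

(Type 1300 is an audit SYSCALL record and 1306 a SOCKADDR record.)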
The first three items on that list probably won’t raise any eyebrows, but the last one should, so it feels worth explaining.
Some obvious questions we need to address about #4 are, “Why don’t we filter these events on each server and only send the interesting ones?” and “Won’t you be sending lots of useless information?”
Imagine your servers have the curl command installed (no need to imagine, yours probably do). During a red team exercise, the attackers use curl to download a rootkit and then to exfiltrate data. Having learned from this, you start logging every command and filtering everything that isn’t curl. Every time someone runs curl, you generate an alert.
There are some serious problems with doing things this way:
- There are approximately 92,481,124 ways to accomplish “download rootkit” and “exfiltrate data” that don’t involve curl. We can’t possibly enumerate all of them.
- The attacker can look at your auditd rules and see that you are watching curl.
- There are legitimate uses of the curl command.
We need something better. What if instead of looking for specific commands, we send everything to a centralized logging and alerting infrastructure? This has some amazingly useful properties:
- The attacker has no idea which commands or network calls you have flagged as interesting. (As mentioned in a recent talk from Rob Fuller, unknowable tripwires give attackers nightmares.)
- We can correlate events and decide that curl is okay sometimes and bad other times.
- New rules can be evaluated and tested against existing data.
- We now have a repository of forensic data that lives off-host.
So here it is, friends: go-audit. We are releasing this tool as open source, for free (as in love). The Secops team at Slack created the first version of go-audit over a year ago, and we have been using it in production for nearly that long. It’s a small, but important, piece of our monitoring infrastructure. (I recommend checking out my previous post for more context on how we handle alerting.)
In the go-audit repository, we have provided extensive examples for configuring and collecting this data. Here at Slack, we like rsyslog + RELP, because we want to send everything off-host immediately, but also spool events to disk if syslog is temporarily undeliverable. You can freely use a different mechanism to deliver these logs, and we look forward to seeing your ideas.
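As one hedged sketch of the idea (not our exact production config; the program name, target host, and port below are placeholders), an rsyslog action like this forwards events over RELP and spools to disk when the collector is unreachable:

module(load="omrelp")
if $programname == 'go-audit' then {
    action(
        type="omrelp"
        target="logs.internal.example.com"
        port="2514"
        queue.type="LinkedList"
        queue.filename="go_audit_relp_q"
        queue.saveOnShutdown="on"
        action.resumeRetryCount="-1"
    )
}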
We welcome contributions to this project and hope others will find it useful. This repository was privately shared with a number of external folks over the past year, and some friends of ours are already using it in production.
You may have noticed that I haven’t used the word “security” yet in this post. I’m of the opinion that good general purpose tools can often be used as security tools, but the reverse is not usually the case. Auditd facilitates security monitoring that you’d be hard pressed to replicate in any other way, but go-audit was developed as a general purpose tool. The utility of something like go-audit is immediately apparent to an operations or development person, who can use it to debug problems across a massive, modern fleet.
Let’s revisit the questions at the top of this post. Any company with IDS/IPS/Netflow/PCAP/etc on a network tap can tell you a lot about their network connections and probably answer the first question, but none of those solutions can give you the context about a user/pid/command needed to answer the second. This context is the difference between “someone ran something somewhere on our network and it connected to an IP” vs “Mallory ran curl as root on bigserver01 and connected to the IP 1.2.3.4 on port 1337”.
At Slack, we often say “Don’t let perfection be the enemy of good”. This tool isn’t perfect, but we think it is really, really good, and we’re happy to share it with you today.
FAQ
Why auditd and not sysdig or osquery?
Osquery is great. In fact, we use it at Slack. For our production servers, we prefer go-audit because these systems are connected 24/7, which allows us to stream data constantly. With osquery, you are generally receiving a snapshot of the current machine state. If something runs to completion between polling intervals, you might miss it. I think this model works well for laptops and other employee endpoints, but I prefer a stream of data for highly-available machines.
Sysdig is also a fantastic debugging tool, and I’ve used it pretty extensively. The main issue is that sysdig requires a kernel module to be loaded on each machine. Sysdig Falco looks useful, but it runs its detection on each endpoint. As mentioned above, we prefer centralized rules that aren’t visible to an attacker logged in to a host.
Auditd has the advantage of having been around for a very long time and living in the mainline kernel. It is as ubiquitous a mechanism for syscall auditing as you’ll find in Linux-land.
What do we do with all these logs?
We send them to an Elasticsearch cluster. From there, we use ElastAlert to continuously query the incoming data for alert generation and general monitoring. You can also use other popular large-scale logging systems for this, but (insert the usual “my opinion, not my employer’s” disclaimer here) I have big fundamental problems with pricing structures that incentivize you to log less to save money.
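As a rough sketch of what that looks like (the index, field names, and webhook below are placeholders, and depend on how you map the go-audit JSON in Elasticsearch), an ElastAlert rule for “curl executed as root” might look like:

name: curl-run-as-root
type: any
index: go-audit-*
filter:
- query:
    query_string:
      query: 'comm: "curl" AND uid: "0"'
alert:
- slack
slack_webhook_url: "https://hooks.slack.com/services/REPLACE_ME"

In practice you would correlate this with other signals rather than page on every match, per the “curl is okay sometimes” point above.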
How much log volume are we talking?
Short answer: It is highly variable.
Long answer: It depends which syscalls you log and how many servers you have. We log hundreds of gigabytes per day. This may sound like a lot, but as of this writing we have 5500-ish instances streaming data constantly. You should also consider over-provisioning your cluster to ensure that an attacker will have a difficult time DoSing your collection infrastructure.
Why rsyslog?
We have a lot of experience with rsyslog, and it has some nice properties. We highly recommend using version 8.20+, which includes some bug fixes we contributed back upstream. We could have built reliable delivery into go-audit itself, but the advantages of doing so didn’t outweigh the benefit of using something we’ve run successfully for years.
What if someone turns it off?
You should be using canaries to validate that data is flowing in from each of your servers. Generating events that you expect to see on the other side is a useful way to confirm hosts are reporting to you. go-audit also has mechanisms to detect messages that were missed within a stream.
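One low-effort way to build such a canary (the schedule and command here are arbitrary examples): run a distinctive, harmless command from cron on every host, e.g. in /etc/cron.d/go-audit-canary:

*/5 * * * * root /bin/echo go-audit-canary > /dev/null

then alert centrally when the corresponding execve events stop arriving from any host, for example with a flatline-style ElastAlert rule.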
Thanks to
My teammate Nate Brown, for making go-audit much better.
Mozilla’s audit-go folks, for inspiring this project.
The numerous individuals who reviewed this blog post and gave feedback.
The Chicago Cubs, for winning the World Series.