A Brief Introduction to auditd

2013-01-18


The auditd subsystem is an access monitoring and accounting system for Linux, developed and maintained by Red Hat. It was designed to integrate pretty tightly with the kernel and watch for interesting system calls. Additionally, likely because of this level of integration and detailed logging, it is used as the logger for SELinux.

All in all, it is a pretty fantastic tool for monitoring what’s happening on your system. Since it operates at the kernel level, it gives us a hook into any system operation we want. We have the option to write a log any time a particular system call happens, whether that be unlink or getpid. We can monitor access to any file, all network traffic, really anything we want. The level of detail is pretty phenomenal and, since it operates at such a low level, the granularity of information is incredibly useful.
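For example, something like the following (a quick throwaway sketch; the key name “unlink-watch” is just a label I made up) logs every unlink call on the system and then pulls the matching events back out:

# Log every exit of the unlink syscall, tagged with a key so it's easy to find later
auditctl -a exit,always -S unlink -k unlink-watch

# Delete a scratch file, then search for the events that rule generated
ausearch -k unlink-watch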

The biggest downfall is actually a result of the design that makes it so handy: auditd is its own logging system and does not use syslog. The good thing here is that it doesn’t have to rely on anything external to operate, so a typo in your (syslog|rsyslog|syslog-ng).conf file won’t result in losing your system audit logs. The flip side is that you’ll have to manage all the audit logging using the auditd suite of tools. This means any kind of log collection, organization, or archiving may not work with these files, including remote logging. As an aside, auditd does have provisions for remote logging; however, they are not as trivial as we’ve come to expect from syslog.
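(For the curious, remote logging is handled by the audisp-remote plugin rather than by auditd itself. As a rough sketch, and assuming the audispd-plugins package is installed, it amounts to pointing the plugin at a collector and switching it on; “loghost.example.com” below is a placeholder, and file locations vary a bit by distribution.)

# /etc/audisp/audisp-remote.conf - where to ship the audit stream
remote_server = loghost.example.com
port = 60

# /etc/audisp/plugins.d/au-remote.conf - turn the plugin on
active = yes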

Thanks to the level of integration that it provides, your auditd configurations can be quite complex, but I’ve found that there are really only two options you need to know.

  1. -a exit,always -S <syscall>
  2. -w <filename>

The first of these generates a log whenever the listed syscall exits; the second, whenever the listed file is modified. Seems pretty easy, right? It certainly can be, but it does require some investigation into what system calls interest you, particularly if you’re not familiar with OS programming or POSIX. Fortunately for us there are some standards that give us some guidance on what to look out for. Let’s take, for example, the Center for Internet Security Red Hat Enterprise Linux 6 Benchmark. The relevant section is “5.2 Configure System Accounting (auditd)” starting on page 99. There are a large number of interesting examples listed, but for our purposes we’ll whittle those down to a more minimal set and assume your /etc/audit/audit.rules looks like this.

# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.
# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 1024

-a always,exit -S adjtimex -S settimeofday -S stime -k time-change
-a always,exit -S clock_settime -k time-change

-a always,exit -S sethostname -S setdomainname -k system-locale

-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/sudoers -p wa -k identity

-w /var/run/utmp -p wa -k session
-w /var/log/wtmp -p wa -k session
-w /var/log/btmp -p wa -k session

-w /etc/selinux/ -p wa -k MAC-policy

# Disable adding any additional rules - note that adding new rules will require a reboot
-e 2
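These rules get loaded whenever auditd starts, but you can also load and verify them by hand. A rough sketch, assuming Red Hat style init scripts; keep in mind that once the trailing “-e 2” has taken effect the rules are locked, and further changes really will require a reboot:

# Restart the daemon so it re-reads /etc/audit/audit.rules...
service auditd restart

# ...or feed the file to auditctl directly
auditctl -R /etc/audit/audit.rules

# Confirm what is actually loaded
auditctl -l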

Based on our earlier discussion we should be able to see that we generate a log message every time any of the following system calls exit: adjtimex, settimeofday, stime, clock_settime, sethostname, setdomainname. This will let us know whenever the time gets changed, or whenever the host or domain name of the system gets changed.
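Because each rule carries a -k key, pulling those events back out later is straightforward; for example, with the ausearch and aureport tools that ship alongside auditd:

# Show any events tagged by the time-change rules
ausearch -k time-change

# Or get a summary of how often each keyed rule has fired
aureport -k --summary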

We’re also watching a few files. The first four (group, passwd, shadow, sudoers) will let us know whenever users get added or modified, or privileges get changed. The next three files (utmp, wtmp, btmp) store the current login state of each user, login/logout history, and failed login attempts respectively. So monitoring these will let us know any time an account is used or a login attempt fails, or, more precisely, whenever these files get changed, which includes malicious covering of tracks. Lastly, we’re watching the directory ‘/etc/selinux/’. Directories are a special case: the watch causes the system to recursively monitor the files in that directory. The one caveat is that you cannot watch ‘/’.
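As with the syscall rules, the keys (and the watched paths themselves) make these events easy to dig out later; for example:

# Everything that touched the identity watches (group, passwd, shadow, sudoers)
ausearch -k identity

# Or narrow it down to a single watched file
ausearch -f /etc/passwd

# And a quick report of failed authentication attempts
aureport -au --failed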

When watching files we also added the option ‘-p wa’. This tells auditd to watch only for (w)rites or (a)ttribute changes. It should be noted that for write (and read, for that matter) we aren’t actually logging on those system calls; instead we’re logging on ‘open’ if the appropriate flags are set.

It should also be said that the logs are rather…complete. As an example I added the system call rule for sethostname to a Fedora 17 system, with audit version 2.2.1. This is the resultant log from running “hostname audit-test.home.private” as root.

type=SYSCALL msg=audit(1358306046.744:260): arch=c000003e syscall=170 success=yes exit=0 a0=2025010 a1=17 a2=7 a3=18 items=0 ppid=23922 pid=26742 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=16 comm="hostname" exe="/usr/bin/hostname" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="system-locale"
There are gobs of fields listed; however, the ones that interest me the most are the various field names containing the letters “id”, “exe”, and that ugly string of numbers in the first parens. The first bit, 1358306046.744, is the timestamp of the event in epoch time. The exe field contains the full path to the binary that was executed. Useful, since we know what was run, but it does not contain the full command line including arguments. Not ideal.
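If decoding those fields by hand gets old, ausearch can interpret the numeric values for you; for example, the following turns syscall=170 into sethostname, the uids into account names, and the epoch timestamp into something readable:

# -i / --interpret translates numeric values into human-readable names
ausearch -k system-locale -i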

Next we see that the command was run by root, since the euid is 0. Interestingly, the field auid (called audit uid) contains 1000, which is the uid of my regular user account on that host. The auid field actually contains the user id of the originally logged-in user for this login session. This means that, even though I used “su -” to gain a root shell, the auditing subsystem still knows who I am. Using su to gain a root shell has always been the bane of account auditing, but the auditd system records enough information to usefully identify a user. It does not forgive the lack of command line arguments, but it certainly makes me feel better about it.
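That makes auid a handy pivot when digging through the logs; for example:

# Every audit event tied to the original login of uid 1000, su and all
ausearch -ul 1000

# And a summary of login activity on the host
aureport -l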

These examples, while handy, are also only the tip of the iceberg. One would be hard pressed to find a way to get more detailed audit logging than is available here. To help make our way down the rabbit hole of auditd let’s turn this into a series. We’ll collect ideas for use cases and work up an audit config to meet the requirements, much like what I ended up doing on this security.stackexchange.com answer.

If this sounds like fun let me know in the comments and I’ll work up a way to collect the information. Until then…Happy Auditing!

There will be future auditd posts, so check back regularly on the auditd tag.

 

Filed under Configuration

4 Comments


  • Jason says:

    Nice quick overview of auditd. Are there any good open source reporting tools out there for monitoring the logs across many servers? I’m thinking either a dashboard or daily email with changed files. It’d also be nice to crosscheck with puppet reports to see that a file was changed intentionally. If not, then I’m considering rolling my own using logstash and elasticsearch.

    • Bob Cat says:

      Have you tried the Linux Auditd app for Splunk? https://splunkbase.splunk.com/app/2642/

  • A central logging system is going to fit this use case. Logstash, Logentries and Splunk all give a great user experience and have powerful querying languages.

      However, I would encourage one to use Logstash/Elasticsearch, since the free versions of Splunk and Logentries do have several caveats when going over data storage or bandwidth restrictions.

      Also, one can utilize Fluentd and collectd+statsd+Grafana with Logstash and get a better picture of what is going on with one’s environment footprint. With Fluentd one can even create actionable agents based on the alerts to perform basic procedural administrative tasks via arbitrary executables (i.e., kill long-running tasks, null-route a DDoS, launch additional virtual instances in one’s cloud).

  • tdurden says:

    Splunk is $$, try Graylog2.