WiFi security: history of insecurities in WEP, WPA and WPA2

2013-08-28 by . 1 comments

Recently I answered a question on security.stackexchange.com regarding security in wireless protocols. The OP wanted to know more about what methods were available to break into a wireless network and how long it would take.


Filed under Attack, Network, Wireless

QoTW #47: Lessons learned and misconceptions regarding encryption and cryptology

2013-06-10 by . 1 comments

This one is a slightly different Question of the Week. Makerofthings7 asked a list-type question, which generally doesn’t fit on the Stack Exchange network; however, this question generated a lot of interest and some excellent answers containing a lot of useful information, so it is worth posting an excerpt of that content here. If you are a budding cryptographer, or a developer asked to implement a crypto function, read these guidelines first!

D.W., one of our resident high-rep cryptographers, provided a number of the highest-scoring answers.

Don’t roll your own crypto.

Don’t invent your own encryption algorithm or protocol; that is extremely error-prone. As Bruce Schneier likes to say,

“Anyone can invent an encryption algorithm they themselves can’t break; it’s much harder to invent one that no one else can break”.

Crypto algorithms are very intricate and need intensive vetting to be sure they are secure; if you invent your own, you won’t get that, and it’s very easy to end up with something insecure without realizing it.

Instead, use a standard cryptographic algorithm and protocol. Odds are that someone else has encountered your problem before and designed an appropriate algorithm for that purpose.

Your best case is to use a high-level, well-vetted scheme: for communication security, use TLS (or SSL); for data at rest, use GPG (or PGP). If you can’t do that, use a high-level crypto library, like cryptlib, GPGME, Keyczar, or NaCl, instead of a low-level one, like OpenSSL, CryptoAPI, JCE, etc. Thanks to Nate Lawson for this suggestion.

Don’t use encryption without message authentication

It is a very common error to encrypt data without also authenticating it.

Example: The developer wants to keep a message secret, so encrypts the message with AES-CBC mode. The error: This is not sufficient for security in the presence of active attacks, replay attacks, reaction attacks, etc. There are known attacks on encryption without message authentication, and the attacks can be quite serious. The fix is to add message authentication.

This mistake has led to serious vulnerabilities in deployed systems that used encryption without authentication, including ASP.NET, XML encryption, Amazon EC2, JavaServer Faces, Ruby on Rails, OWASP ESAPI, IPsec, WEP, ASP.NET again, and SSH2. You don’t want to be the next one on this list.

To avoid these problems, you need to use message authentication every time you apply encryption. You have two choices for how to do that:

Probably the simplest solution is to use an encryption scheme that provides authenticated encryption, e.g., GCM, CWC, EAX, CCM, or OCB. The authenticated encryption scheme handles this for you, so you don’t have to think about it.

Alternatively, you can apply your own message authentication, as follows. First, encrypt the message using an appropriate symmetric-key encryption scheme (e.g., AES-CBC). Then, take the entire ciphertext (including any IVs, nonces, or other values needed for decryption), apply a message authentication code (e.g., AES-CMAC, SHA1-HMAC, SHA256-HMAC), and append the resulting MAC digest to the ciphertext before transmission. On the receiving side, check that the MAC digest is valid before decrypting. This is known as the encrypt-then-authenticate construction. This also works fine, but requires a little more care from you.
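To make the encrypt-then-authenticate construction concrete, here is a minimal Python sketch. It assumes the third-party cryptography package for AES-CBC and the standard library for HMAC-SHA256; the function names and the 32-byte tag length are illustrative choices, not part of the original answer.

import os
import hmac
import hashlib

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_then_mac(plaintext: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    iv = os.urandom(16)  # fresh random IV for every message
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    encryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv), backend=default_backend()).encryptor()
    ciphertext = iv + encryptor.update(padded) + encryptor.finalize()
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()  # MAC covers the IV too
    return ciphertext + tag

def verify_then_decrypt(blob: bytes, enc_key: bytes, mac_key: bytes) -> bytes:
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time compare, before any decryption
        raise ValueError("MAC check failed - message rejected")
    iv, body = ciphertext[:16], ciphertext[16:]
    decryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv), backend=default_backend()).decryptor()
    padded = decryptor.update(body) + decryptor.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()

Note that the receiver rejects a tampered message before any decryption or padding handling happens, which is the whole point of encrypt-then-authenticate.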

Be careful when concatenating multiple strings before hashing.

An error I sometimes see: People want a hash of the strings S and T. They concatenate them to get a single string S||T, then hash it to get H(S||T). This is flawed.

The problem: Concatenation leaves the boundary between the two strings ambiguous. Example: builtin||securely = built||insecurely. Put another way, the hash H(S||T) does not uniquely identify the strings S and T. Therefore, the attacker may be able to change the boundary between the two strings without changing the hash. For instance, if Alice wanted to send the two strings builtin and securely, the attacker could change them to the two strings built and insecurely without invalidating the hash.

Similar problems apply when applying a digital signature or message authentication code to a concatenation of strings.

The fix: rather than plain concatenation, use some encoding that is unambiguously decodable. For instance, instead of computing H(S||T), you could compute H(length(S)||S||T), where length(S) is a 32-bit value denoting the length of S in bytes. Or, another possibility is to use H(H(S)||H(T)), or even H(H(S)||T).
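As a quick illustration of the length-prefix fix, here is a small Python sketch; the hash_pair helper name is mine, chosen purely for illustration.

import hashlib
import struct

def hash_pair(s: bytes, t: bytes) -> bytes:
    # Prefix S with its length as a 32-bit big-endian integer, so the
    # boundary between S and T is unambiguous.
    return hashlib.sha256(struct.pack(">I", len(s)) + s + t).digest()

# hash_pair(b"builtin", b"securely") != hash_pair(b"built", b"insecurely"),
# whereas sha256(b"builtin" + b"securely") == sha256(b"built" + b"insecurely").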

For a real-world example of this flaw, see this flaw in Amazon Web Services or this flaw in Flickr [pdf].

Make sure you seed random number generators with enough entropy.

Make sure you use crypto-strength pseudorandom number generators for things like generating keys, choosing IVs/nonces, etc. Don’t use rand(), random(), drand48(), etc.

Make sure you seed the pseudorandom number generator with enough entropy. Don’t seed it with the time of day; that’s guessable.

Examples: srand(time(NULL)) is very bad. A good way to seed your PRNG is to grab 128 bits of true-random data, e.g., from /dev/urandom, CryptGenRandom, or similar. In Java, use SecureRandom, not Random. In .NET, use System.Security.Cryptography.RandomNumberGenerator, not System.Random. In Python, use random.SystemRandom, not random. Thanks to Nate Lawson for some examples.
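In Python, for example, the difference looks roughly like this (a sketch; secrets is the usual shortcut on Python 3.6+, and random.SystemRandom is the older equivalent):

import random
import secrets

# Bad: Mersenne Twister, seeded from predictable state - fine for simulations, not for keys.
weak_key = random.getrandbits(128)

# Good: 128 bits drawn from the OS CSPRNG (/dev/urandom, CryptGenRandom, etc.).
key = secrets.token_bytes(16)
iv = secrets.token_bytes(16)

# Pre-3.6 equivalent, also backed by the OS CSPRNG.
nonce = random.SystemRandom().getrandbits(128)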

Real-world example: see this flaw in early versions of Netscape’s browser, which allowed an attacker to break SSL.

Don’t reuse nonces or IVs

Many modes of operation require an IV (Initialization Vector). You must never use the same IV value twice under a given key; doing so can void all the security guarantees and cause a catastrophic breach of security.

For stream cipher modes of operation, like CTR mode or OFB mode, re-using an IV is a security disaster. It can cause the encrypted messages to be trivially recoverable. For other modes of operation, like CBC mode, re-using an IV can also facilitate plaintext-recovery attacks in some cases.

No matter what mode of operation you use, you shouldn’t reuse the IV. If you’re wondering how to do it right, the NIST specification provides detailed documentation of how to use block cipher modes of operation properly.
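To see why stream-cipher IV reuse is so catastrophic, here is a hedged Python sketch (assuming the third-party cryptography package): encrypting two messages under the same key and the same CTR nonce lets an observer XOR the ciphertexts and cancel the keystream entirely.

import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
nonce = os.urandom(16)  # mistakenly reused for both messages below

def ctr_encrypt(msg: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce), backend=default_backend()).encryptor()
    return enc.update(msg) + enc.finalize()

m1 = b"Transfer $100 to Alice.."
m2 = b"The launch code is 1234."
c1, c2 = ctr_encrypt(m1), ctr_encrypt(m2)

# The keystream cancels out: c1 XOR c2 == m1 XOR m2, so knowing (or guessing)
# one plaintext immediately reveals the other.
assert bytes(a ^ b for a, b in zip(c1, c2)) == bytes(a ^ b for a, b in zip(m1, m2))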

Don’t use a block cipher with ECB for symmetric encryption

(Applies to AES, 3DES, … )

Equivalently, don’t rely on library default settings to be secure. Specifically, many libraries which implement AES implement the algorithm described in FIPS 197, which is so called ECB (Electronic Code Book) mode, which is essentially a straightforward mapping of:

AES(plaintext [16]byte, key [32]byte) -> ciphertext [16]byte

Used directly like this, it is very insecure. The reasoning is simple: while the number of possible keys in the keyspace is quite large, the weak link here is the amount of entropy in the message, and identical plaintext blocks always produce identical ciphertext blocks. As always, xkcd describes it better than I can: http://xkcd.com/257/

It’s very important to use something like CBC (Cipher Block Chaining) which basically makes ciphertext[i] a mapping:

ciphertext[i] = SomeFunction(ciphertext[i-1], message[i], key)

Just to point out a few language libraries where this sort of mistake is easy to make: http://golang.org/pkg/crypto/aes/ provides an AES implementation which, if used naively, would result in ECB mode.

The pycrypto library defaults to ECB mode when creating a new AES object.

OpenSSL does this right: every AES call is explicit about the mode of operation. Really, the safest thing IMO is to just try not to do low-level crypto like this yourself. If you’re forced to, proceed as if you’re walking on broken glass (carefully), and try to make sure your users are justified in placing their trust in you to safeguard their data.
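A hedged Python sketch of the problem (again assuming the cryptography package): two identical plaintext blocks encrypt to identical ciphertext blocks under ECB, while CBC with a fresh random IV hides the repetition.

import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
plaintext = b"ATTACK AT DAWN!!" * 2  # two identical 16-byte blocks

ecb = Cipher(algorithms.AES(key), modes.ECB(), backend=default_backend()).encryptor()
ecb_ct = ecb.update(plaintext) + ecb.finalize()
print(ecb_ct[:16] == ecb_ct[16:32])  # True: ECB leaks the repeated block

cbc = Cipher(algorithms.AES(key), modes.CBC(os.urandom(16)), backend=default_backend()).encryptor()
cbc_ct = cbc.update(plaintext) + cbc.finalize()
print(cbc_ct[:16] == cbc_ct[16:32])  # False: chaining hides the repetition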

Don’t use the same key for both encryption and authentication. Don’t use the same key for both encryption and signing.

A key should not be reused for multiple purposes; that may open up various subtle attacks.

For instance, if you have an RSA private/public key pair, you should not both use it for encryption (encrypt with the public key, decrypt with the private key) and for signing (sign with the private key, verify with the public key): pick a single purpose and use it for just that one purpose. If you need both abilities, generate two keypairs, one for signing and one for encryption/decryption.
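A minimal sketch of generating the two separate key pairs, assuming the Python cryptography package; the key size and exponent shown are common defaults, not something mandated by the answer.

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.asymmetric import rsa

# One key pair reserved for signing, a completely separate one for encryption.
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                       backend=default_backend())
encryption_key = rsa.generate_private_key(public_exponent=65537, key_size=2048,
                                          backend=default_backend())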

Similarly, with symmetric cryptography, you should use one key for encryption and a separate independent key for message authentication. Don’t re-use the same key for both purposes.
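If you only have a single master secret to work with, derive independent sub-keys from it rather than using it directly for both jobs. A minimal sketch, assuming the cryptography package’s HKDF; the info labels are arbitrary, they just need to differ:

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def split_keys(master_secret: bytes):
    # Two independent 256-bit keys derived from one master secret; the
    # distinct "info" labels guarantee they are not interchangeable.
    enc_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"encryption", backend=default_backend()).derive(master_secret)
    mac_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"authentication", backend=default_backend()).derive(master_secret)
    return enc_key, mac_key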

Kerckhoffs’s principle: A cryptosystem should be secure even if everything about the system, except the key, is public knowledge

A wrong example: LANMAN hashes

LANMAN hashes would be hard to figure out if no one knew the algorithm; however, once the algorithm was known, they became trivial to crack.

The algorithm is as follows (from Wikipedia):

  1. The user’s ASCII password is converted to uppercase.
  2. This password is null-padded to 14 bytes
  3. The “fixed-length” password is split into two seven-byte halves.
  4. These values are used to create two DES keys, one from each 7-byte half
  5. Each of the two keys is used to DES-encrypt the constant ASCII string “KGS!@#$%”, resulting in two 8-byte ciphertext values.
  6. These two ciphertext values are concatenated to form a 16-byte value, which is the LM hash

Because the algorithm is now public, you can split the hash into its two independent halves, each derived from an uppercase-only, at-most-seven-character chunk of the password. That leaves a very limited set of characters the password could possibly contain, and each half can be cracked separately.

A correct example: AES encryption

  • Known algorithm
  • Scales with technology: increase the key size when in need of more cryptographic oomph

Try to avoid using passwords as encryption keys.

A common weakness in many systems is to use a password or passphrase, or a hash of a password or passphrase, as the encryption/decryption key. The problem is that this tends to be highly susceptible to offline keysearch attacks. Most users choose passwords that do not have sufficient entropy to resist such attacks.

The best fix is to use a truly random encryption/decryption key, not one deterministically generated from a password/passphrase.

However, if you must use one based upon a password/passphrase, use an appropriate scheme to slow down exhaustive keysearch. I recommend PBKDF2, which uses iterative hashing (along the lines of H(H(H(….H(password)…)))) to slow down dictionary search. Arrange to use sufficiently many iterations to cause this process to take, say, 100ms on the user’s machine to generate the key.
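A minimal sketch using the PBKDF2 implementation in Python’s standard library; the 200,000-iteration figure is only a placeholder, so benchmark it on your own hardware and raise it until derivation takes on the order of 100ms.

import hashlib
import os

def derive_key(password: str, salt: bytes = None, iterations: int = 200_000):
    # The salt and iteration count are not secret; store them alongside the ciphertext.
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, iterations, dklen=32)
    return key, salt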

In a cryptographic protocol: Make every authenticated message recognisable: no two messages should look the same

A generalisation/variant of:

Be careful when concatenating multiple strings before hashing. Don’t reuse keys. Don’t reuse nonces.

During a run of a cryptographic protocol, many messages that cannot be counterfeited without a secret (key or nonce) may be exchanged. These messages can be verified by the receiver because he knows some public (signature) key, or because only he and the sender know some symmetric key or nonce. This makes sure that these messages have not been modified.

But this does not make sure that these messages have been emitted during the same run of the protocol: an adversary might have captured these messages previously, or during a concurrent run of the protocol. An adversary may start many concurrent runs of a cryptographic protocol to capture valid messages and reuse them unmodified.

By cleverly replaying messages, it might be possible to attack a protocol without compromising any primary key, without attacking any RNG, any cypher, etc.

By making every authenticated message of the protocol obviously distinct for the receiver, opportunities to replay unmodified messages are reduced (not eliminated).

Don’t use the same key in both directions.

In network communications, a common mistake is to use the same key for communication in the A->B direction as for the B->A direction. This is a bad idea, because it often enables replay attacks that replay something A sent to B, back to A.

The safest approach is to negotiate two independent keys, one for each direction. Alternatively, you can negotiate a single key K, then use K1 = AES(K,00..0) for one direction and K2 = AES(K,11..1) for the other direction.
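A hedged sketch of that derivation, assuming the cryptography package; encrypting a single fixed block under K is exactly the K1/K2 construction described above (ECB here is just “raw AES on one block” used for key derivation, not for encrypting messages).

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def directional_keys(k: bytes):
    # K1 = AES(K, 00..0) for the A->B direction, K2 = AES(K, 11..1) for B->A.
    enc = Cipher(algorithms.AES(k), modes.ECB(), backend=default_backend()).encryptor()
    k1 = enc.update(b"\x00" * 16)  # all-zero block
    k2 = enc.update(b"\xff" * 16)  # all-one block
    enc.finalize()
    return k1, k2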

Don’t use insecure key lengths.

Ensure you use algorithms with a sufficiently long key.

For symmetric-key cryptography, I’d recommend at least an 80-bit key, and if possible, a 128-bit key is a good idea. Don’t use 40-bit crypto; it is insecure and easily broken by amateurs, simply by exhaustively trying every possible key. Don’t use 56-bit DES; it is not trivial to break, but breaking DES is within the reach of dedicated attackers. A 128-bit algorithm, like AES, is not appreciably slower than 40-bit crypto, so you have no excuse for using crummy crypto.

For public-key cryptography, key length recommendations are dependent upon the algorithm and the level of security required. Also, increasing the key size does harm performance, so massive overkill is not economical; thus, this requires a little more thought than selection of symmetric-key key sizes. For RSA, El Gamal, or Diffie-Hellman, I’d recommend that the key be at least 1024 bits, as an absolute minimum; however, 1024-bit keys are on the edge of what might become crackable in the near term and are generally not recommended for modern use, so if at all possible, I would recommend 1536- or even 2048-bit keys. For elliptic-curve cryptography, 160-bit keys appear adequate, and 224-bit keys are better. You can also refer to published guidelines establishing rough equivalences between symmetric- and public-key key sizes.

Don’t re-use the same key on many devices.

The more widely you share a cryptographic key, the less likely you’ll be able to keep it secret. Some deployed systems have re-used the same symmetric key on every device in the system. The problem with this is that sooner or later, someone will extract the key from a single device, and then they’ll be able to attack all the other devices. So, don’t do that.

See also “Symmetric Encryption Don’t #6: Don’t share a single key across many devices” in this blog article. Credits to Matthew Green.

A one-time pad is not a one-time pad if the key is stretched by an algorithm

The identifier “one-time pad” (also known as a Vernam cipher) is frequently misapplied to various cryptographic solutions in an attempt to claim unbreakable security. But by definition, a Vernam cipher is secure if and only if all three of these conditions are met:

  • The key material is truly unpredictable; AND
  • The key material is the same length as the plaintext; AND
  • The key material is never reused.

Any violation of those conditions means it is no longer a one-time pad cipher.

The common mistake made is that a short key is stretched with an algorithm. This action violates the unpredictability rule (never mind the key length rule.) Once this is done, the one-time pad is mathematically transformed into the key-stretching algorithm. Combining the short key with random bytes only alters the search space needed to brute force the key-stretching algorithm. Similarly, using “randomly generated” bytes turns the random number generator algorithm into the security algorithm.

You may have a very good key-stretching algorithm. You may also have a very secure random number generator. However, your algorithm is by definition not a one-time pad, and thus does not have the unbreakable property of a one-time pad.

Don’t use an OTP or stream cipher in disk encryption

Example 1

Suppose a file is saved using a stream cipher / OTP and then resaved, with the same keystream, after a minor edit. An attacker can compare the two ciphertexts, see that only certain bits were changed, and infer information about the document. (Imagine changing the salutation “Dear Bob” to “Dear Alice”.)

Example 2

There is no integrity protection in the output: an attacker can modify the contents of the data predictably, simply by XORing bits into the ciphertext.
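A small Python sketch of this malleability; the message format and byte offsets are made up for illustration.

import os

plaintext = b"PAY 0100 DOLLARS"
keystream = os.urandom(len(plaintext))  # stands in for any stream cipher / OTP keystream
ciphertext = bytes(p ^ k for p, k in zip(plaintext, keystream))

# The attacker knows the message format and flips "0100" (bytes 4-7) to "9999"
# by XORing the difference into the ciphertext - no key required.
delta = bytes(a ^ b for a, b in zip(b"0100", b"9999"))
tampered = ciphertext[:4] + bytes(c ^ d for c, d in zip(ciphertext[4:8], delta)) + ciphertext[8:]

print(bytes(c ^ k for c, k in zip(tampered, keystream)))  # b'PAY 9999 DOLLARS'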

Take away: Modifications to ciphertext are undetected and have predictable impact on the plaintext.

Solution

Use a block cipher for these situations, together with message integrity checks.

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Filed under Crypto, Question of the Week

QoTW #46: CTRL+ALT+DEL Login – Rationale behind it?

2013-05-10 by . 1 comments

CountZero asked this interesting question: Why is CTRL+ALT+DEL required at login on Windows systems?

His perspective was that it adds an extra step before login, which is bad from a usability perspective, so there must be a good reason for it.

This got a lot of attention, but looking at the top answers:

Adnan‘s answer briefly describes the Secure Attention Key – the Windows kernel will only notify the Winlogon process about this key combination, which prevents it being hijacked by an application, malware or some other process. In this way, when you press Ctrl+Alt+Del, you can be sure that you’re typing your password into the real login form and not into some fake process trying to steal your password, for example an application which looks exactly like the Windows login screen. An equivalent of this in Linux is Ctrl+Alt+Pause.

Polynomial‘s comment on the answer further expands on the history of this notification:

As a side note: when you say it’s “wired”, what that actually means is that Ctrl+Alt+Del is mapped to a hardware-defined interrupt (set in the APIC, a physical chip on your motherboard). The interrupt was, historically, triggered by the BIOS’ keyboard handler routine, but these days it’s less clear cut. The interrupt is mapped to an ISR which is executed at ring0, which triggers the OS’s internal handler for the event. When no ISR for the interrupt is set, it (usually) causes an ACPI power-cycle event, also known as a hard reboot.

Thomas Pornin describes an attack which would work if the Secure Attention Key didn’t exist:

You could make an application which goes full-screen, grabs the keyboard, and displays something which looks like the normal login screen, down to the last pixel. You then log on the machine, launch the application, and go away until some unsuspecting victim finds the machine, tries to log on, and gives his username and password to your application. Your application then just has to simulate a blue screen of death, or maybe to actually log the user on, to complete the illusion.

There is also an excellent answer over on ServerFault, which TerryChia linked to in his answer:

The Windows (NT) kernel is designed to reserve the notification of this key combination to a single process: Winlogon. So, as long as the Windows installation itself is working as it should – no third party application can respond to this key combination (if it could, it could present a fake logon window and keylog your password ;)

So there you have it – as long as your OS hasn’t been hacked, CTRL+ALT+DEL protects you.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

About the recent DNS Amplification Attack against Spamhaus: Countermeasures and Mitigation

2013-04-15 by . 2 comments

A few weeks ago the anti-spam provider Spamhaus was hit by one of the biggest denial of service attacks ever seen, generating over 300 Gbit/s of traffic. The technique used to generate most of the traffic was DNS amplification, a technique which doesn’t require thousands of infected hosts, but exploits misconfigured DNS servers and a serious design flaw in DNS. We will discuss how this works, what it abuses, and how Spamhaus was able to mitigate the attack.


QoTW #45: Is my developer’s home-brew password security right or wrong, and why?

2013-04-05 by . 0 comments

Viewed 17,000 times in its first three weeks, this incredibly popular question has even led to a new Sec.SE meta meme.

In fact, our top meta meme explains why – the First Rule of Crypto is “Don’t Roll Your Own!”

So, with that in mind, Polynomial’s answer, delivered with a liberal dose of snark, explains in simple language:

This home-brew method offers no real resistance against brute force attacks, and gives a false impression of “complicated” security…Stick to tried and tested key derivation algorithms like PBKDF2 or bcrypt, which have undergone years of in-depth analysis and scrutiny from a wide range of professional and hobbyist cryptographers.

Konerak lists out some advantages of going with an existing public protocol:

  • Probably written by smarter people than you
  • Tested by a lot more people (probably some of them smarter than you)
  • Reviewed by a lot more people (probably some of them smarter than you), often has mathematical proof
  • Improved by a lot more people (probably some of them smarter than you)
  • The moment just one of those thousands of people finds a flaw, a lot of people start fixing it

KeithS also gives more detail:

  • MD5 is completely broken
  • SHA-1 is considered vulnerable
  • More hashes don’t necessarily mean better hashing
  • Passwords are inherently low-entropy
  • This scheme is not adding any significant proof of work

Along with further answers, the discussion on this post covered a wide range of issues – well worth reading the whole thing!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Presentations: Starting your security career – where can you go?

2013-03-28 by . 4 comments

I gave a talk on career planning in Information Security at Abertay University on the 16th of January 2013.

Securi-Tay is an annual security conference organised by students at Abertay, and it is a very well organised and run event – it could put some professional conferences to shame!

Video of my talk


The talk went down very well, with a lot of discussion spinning off afterwards, and the odd additional visitor to Sec.SE.

Most of the video should be straightforward, but a couple of the slides may be hard to read so I have included them here:

Slide 8, industry trends:


Slide 13, some useful certifications:


Slide 14, the time-bounded nature of certifications:

Slide 16, self marketing (see that nice big Sec.SE logo :-) ):

Filed under Community

QoTW #44: How to block or detect user setting up their own personal wifi AP in our LAN?

2013-03-22 by . 0 comments

Nominated by Terry Chia, this question by User15580 should be of interest to anyone managing the security of networks.

The answers show the variety of aspects security covers in this sort of scenario:

Daniel posted the top answer, and it has nothing to do with IT, but instead focuses on the cause – if a user has installed an access point it is because they need something the existing network is not providing. This is always worth considering:

Discuss with the users what they are trying to accomplish. Perhaps create an official wifi network (use all the security methods you wish – it will be ‘yours’). Or, better, two – Guest and Corporate WAPs.

Polynomial and Thomas Pornin also highlighted the fact this is a user/managerial problem, rather than a technical one.

Remember Immutable Law of Security #10: Technology is not a panacea. Whilst technology can do some amazing things, it can’t enforce user behaviour. You have a user that is bringing undue risk to the organisation, and that risk needs to be dealt with. The solution to your problem is _policy_, not technology. Set up a security policy that details explicitly disallowed behaviours, and have your users sign it. If they violate that policy, you can go to your superiors with evidence of the violation and a penalty can be enforced.

As long as the users have physical access to the machines they use and their USB ports (that’s hard to avoid, unless you pour glue in all the USB ports…) and the installed operating systems allow it (then again, hard to avoid if users are “administrators” on their systems, in particular in BYOD contexts), then the users can set up custom access points which give access to, at least, their machine.

Rory McCune provided some information on the types of solutions which generally are used in large corporates, where they work well, including NAC and port lockdowns. Lie Ryan‘s comments tend to be appropriate on smaller networks.

k1DBLITZ also focuses on the use of technical solutions in addition to policy, JasperWallace recommends looking for and blocking unapproved MAC addresses, and further answers discuss wireless scanning and scripted checks.

Overall, it would seem that a mixture of technical and management controls is required, with the balance depending on your specific environment.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #43: Teaching a loved one about secure coding practices

2013-03-01 by . 2 comments

Today’s blog post is on a question posted on Security Stack Exchange last year, titled Teaching a loved one about secure coding practices.

Ignoring the obvious innuendos in the comments, I think this is an excellent question. While the question is far longer, this is the gist of it.

As ITSec pros, we talk about infusing the development cycle with secure coding practices and design, but how does that apply to a brand-new learner? A new programmer is at the start of their own ‘lifelong development cycle’, as it were. At what point is it appropriate, from an educational perspective, to switch from the mindset of ‘getting it to work’ to ‘it absolutely must be secure’? At what point should a student ‘fail’ an assignment because of a security issue?

As a student in an infosec diploma course, I have rather strong opinions on this matter. Let’s start with a personal anecdote. I started learning programming on my own out of self-interest. My first exposure to “real” programming was through PHP (I know… shudders). Do a quick Google search using the terms “php tutorial”. Go on. The very first link points towards w3schools.com.

A quick browse through the site looks good. Nice, simple, easy to follow tutorials on the basics of PHP and HTML. Wait, are they really teaching unparameterized queries? In 2013? Really? I’d like to point you to this website. In particular, this quote.

W3Schools.com is not affiliated with the W3C in any way. Members of the W3C have asked W3Schools to explicitly disavow any connection in the past, and they have refused to do so. W3Schools frequently publishes inaccurate or misleading content. We have collected several examples illustrating this problem below.

This is an obvious problem. A website on the top of Google’s search results targeted at new programmers providing misleading information? What could go wrong right?

Moving on to the actual question.

User Everett stated this in his answer.

The problem I see, is that secure programming is taught as an add on. Best practices should be taught from the beginning (including security). The lie people are taught is that practice makes perfect. The truth is practice makes permanent. So if you are doing it wrong, you have to unlearn what you have learned. That is a bassackwards approach. I would say that secure coding practices should be taught from day one. There’s no reason to learn how to do it, and then learn how to do it securely. It’s a waste of time and money…

I disagree with his opinion. I think user KeithS provides a very good point.

It’s great to say “Secure coding practices should be taught from day one”, and very hard to demonstrate how that day-one “Hello World” program may be vulnerable, especially when “what is a computer program” is a new concept for the class.

I agree. Many of my peers who entered the diploma course without any prior programming experience have a tough time even wrapping their heads around basic concepts like looping and conditional statements. Introducing more complex security topics at this point in their education would likely cause more harm than good.

This is the answer I provided to the question.

I would say a great way to learn is for her to break the applications she has already written. Assuming she is writing web applications, point her towards the OWASP Top 10. Have her see if she can find any of those flaws in her own code. There is no better way to learn about security concepts than actually seeing it happen on your own code. Once a flaw has been found, have her rewrite the application to fix the flaw. Doing so will allow her to appreciate the effect of things like sanitation and validation of user inputs and parameterized queries. Take incremental steps. I wouldn’t jump straight into designing a new application with security in mind before truly understanding what type of codes result in security flaws.

With 37 upvotes and the answer being accepted, it is clear that the community agrees with me.

Conclusion

I think the best approach to teaching secure programming is an iterative one. Start off the students with writing simple applications. Have the students go back and look at their code and see how it can be broken. Refer them to good resources like the OWASP Top 10 list. With a little critical thinking, the students should be able to start figuring out what went wrong in their code and how to fix it.

Like user AviD said,

Students that do not practice critical thinking shouldn’t really be learning programming….

This post is a cross-post from my blog at http://www.infosecstudent.com/2013/02/teaching-secure-programming-how-to-do-it-right/

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Filed under Question of the Week

QoTW #42: Would publishing a network diagram make the network less secure?

2013-01-25 by . 4 comments

I chose this week’s Question of the Week, saber tabatabaee yazdi‘s “Would publishing a network diagram make the network less secure?” because this is a point which seems to be often misunderstood.

Saber asked this question because he had come across various websites designed to let people share their network diagrams and designs in order that others can comment on them and provide guidance and he wondered what the risks would be from this.

As an example, this diagram from www.ratemynetworkdiagram.com provides IP addresses, host names and even descriptions.

AJ Henderson provided the very valid comment that security through obscurity is not security, but admits that any network will have some weaknesses, and avoiding giving this information to a potential attacker is probably advised.

My answer is taken from the experience of managing many hundreds of penetration tests. My take on it is:

having a map helps me target my attack, avoiding possible sensors, honeypots etc and aiming at high value targets or sources of information. This can speed up an attack immensely, reducing the defender’s chance of preventing it.

But the value from these sites is that you can have obvious mistakes pointed out to you – peer review can be a very valuable thing. So how can you do that safely?

To reduce risk, some steps you can take are:
  • remove addresses, function titles etc
  • only include sections of the network
  • post under an anonymous profile
  • include fake network sections

An attacker will still get information, but it hopefully won’t be enough to let them navigate your entire network.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

A Brief Introduction to auditd

2013-01-18 by . 4 comments

The auditd subsystem is an access monitoring and accounting system for Linux, developed and maintained by Red Hat. It was designed to integrate pretty tightly with the kernel and watch for interesting system calls. Additionally, likely because of this level of integration and detailed logging, it is used as the logger for SELinux.

All in all, it is a pretty fantastic tool for monitoring what’s happening on your system. Since it operates at the kernel level this gives us a hook into any system operation we want. We have the option to write a log any time a particular system call happens, whether that be unlink or getpid. We can monitor access to any file, all network traffic, really anything we want. The level of detail is pretty phenomenal and, since it operates at such a low level, the granularity of information is incredibly useful.

The biggest downfall is actually a result of the design that makes it so handy. This is itself a logging system and as a result does not use syslog. The good thing here is that it doesn’t have to rely on anything external to operate, so a typo in your (syslog|rsyslog|syslog-ng).conf file won’t result in losing your system audit logs. As a result you’ll have to manage all the audit logging using the auditd suite of tools. This means any kind of log collection, organization, or archiving may not work with these files, including remote logging. As an aside, auditd does have provisions for remote logging, however they are not as trivial as we’ve come to expect from syslog.

Thanks to the level of integration that it provides, your auditd configurations can be quite complex, but I’ve found that there are primarily only two options you need to know.

  1. -a exit,always -S <syscall>
  2. -w <filename>

The first of these generates a log whenever the listed syscall exits; the second, whenever the listed file is modified. Seems pretty easy, right? It certainly can be, but it does require some investigation into what system calls interest you, particularly if you’re not familiar with OS programming or POSIX. Fortunately for us there are some standards that give us some guidance on what to look out for. Let’s take, for example, the Center for Internet Security Red Hat Enterprise Linux 6 Benchmark. The relevant section is “5.2 Configure System Accounting (auditd)” starting on page 99. There are a large number of interesting examples listed, but for our purposes we’ll whittle those down to a more minimal set and assume your /etc/audit/audit.rules looks like this.

# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.
# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 1024

-a always,exit -S adjtimex -S settimeofday -S stime -k time-change
-a always,exit -S clock_settime -k time-change
-a always,exit -S sethostname -S setdomainname -k system-locale
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/sudoers -p wa -k identity
-w /var/run/utmp -p wa -k session
-w /var/log/wtmp -p wa -k session
-w /var/log/btmp -p wa -k session
-w /etc/selinux/ -p wa -k MAC-policy

# Disable adding any additional rules - note that adding new rules will require a reboot
-e 2

Based on our earlier discussion we should be able to see that we generate a log message every time any of the following system calls exits: adjtimex, settimeofday, stime, clock_settime, sethostname, setdomainname. This will let us know whenever the time gets changed or the host or domain name of the system gets changed.

We’re also watching a few files. The first four (group, passwd, shadow, sudoers) will let us know whenever users get added or modified, or privileges get changed. The next three files (utmp, wtmp, btmp) store the current login state of each user, login/logout history, and failed login attempts respectively. So monitoring these will let us know any time an account is used or a login attempt fails; more specifically, whenever these files get changed, which includes malicious covering of tracks. Lastly, we’re watching the directory ‘/etc/selinux/’. Directories are a special case in that this will cause the system to recursively monitor the files in that directory. There is a special caveat that you cannot watch ‘/’.

When watching files we also added the option ‘-p wa’. This tells auditd to only watch for (w)rites or (a)ttribute changes. It should be noted that for write (and read for that matter) we aren’t actually logging on those system calls. Instead we’re logging on ‘open’ if the appropriate flags are set.

It should also be said that the logs are rather… complete. As an example I added the system call rule for sethostname to a Fedora 17 system, with audit version 2.2.1. This is the resultant log from running “hostname audit-test.home.private” as root.

type=SYSCALL msg=audit(1358306046.744:260): arch=c000003e syscall=170 success=yes exit=0 a0=2025010 a1=17 a2=7 a3=18 items=0 ppid=23922 pid=26742 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=16 comm="hostname" exe="/usr/bin/hostname" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="system-locale"

There are gobs of fields listed; however, the ones that interest me the most are the various field names containing the letters “id”, the “exe” field, and that ugly string of numbers in the first parens. The first bit, 1358306046.744, is the timestamp of the event in epoch time. The exe field contains the full path to the binary that was executed. Useful, since we know what was run, but it does not contain the full command line including arguments. Not ideal.

Next we see that the command was run by root, since the euid is 0. Interestingly, the field auid (called the audit uid) contains 1000, which is the uid of my regular user account on that host. The auid field actually contains the user id of the original logged-in user for this login session. This means that even though I used “su -” to gain a root shell, the auditing subsystem still knows who I am. Using su to gain a root shell has always been the bane of account auditing, but the auditd system records enough information to usefully identify a user. It does not make up for the missing command-line arguments, but it certainly makes me feel better about it.

These examples, while handy, are also only the tip of the iceberg. One would be hard pressed to find a way to get more detailed audit logging than is available here. To help make our way down the rabbit hole of auditd let’s turn this into a series. We’ll collect ideas for use cases and work up an audit config to meet the requirements, much like what I ended up doing on this security.stackexchange.com answer.

If this sounds like fun let me know in the comments and I’ll work up a way to collect the information. Until then…Happy Auditing!

There will be future auditd posts, so check back regularly on the auditd tag.


Filed under Configuration