Debunking SQRL

2013-10-17 by . 20 comments

This blog post is inspired by a question on Information Security Stack Exchange about two weeks ago. The question seems to have attracted quite a bit of attention (some of it in an attempt to defend the flawed scheme) so I would like to write a more thorough analysis of the scheme.

Let us start by discussing the current standard for authentication: passwords. No one is arguing that passwords are perfect. Far from it. Passwords, at least the passwords in use by most people, are incredibly weak: short, predictable, easy to guess. This problem is made worse by the fact that most applications do not store passwords securely. Instead, many applications store passwords in plaintext, unsalted, or hashed with weak hashing algorithms.

However, passwords aren’t going anywhere because they do have several advantages that no other authentication scheme provides.

  • Passwords are convenient. They do not require users to carry around additional equipment for authentication.
  • Password authentication is easy to set up. There are countless libraries in every single programming language out there designed to assist you with the task.
  • Most importantly, passwords can be secure if used in the right way. In this case, the “right way” requires users to choose long, unique and randomly generated passwords for each application, and requires the application to store passwords hashed with a strong algorithm like bcrypt, scrypt or PBKDF2.
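As a concrete illustration of the server side of that “right way”, here is a minimal Python sketch using the standard library’s PBKDF2 (the salt size and iteration count here are illustrative choices of mine, not a recommendation from any standard):

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    # Random per-user salt plus a deliberately slow hash. PBKDF2 is used
    # here because it ships with Python; bcrypt or scrypt are equally valid.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    # Split off the salt, recompute the hash, compare in constant time.
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("guess", stored)
```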

So, let us compare Steve Gibson’s SQRL scheme against the gold standard of password authentication: long, unique and randomly generated passwords hashed with bcrypt, scrypt or PBKDF2, in addition to a one-time password generated using the HOTP or TOTP algorithms.

(I am going to be borrowing heavily from the answers to the original Information Security Stack Exchange question since many excellent points have already been made.)

Disadvantages

Authentication and identification are combined

No annoying account creation: Suppose you wish to simply comment on a blog posting. Rather than going through the annoying process of “creating an account” to uniquely identify yourself to a new website (which such websites know causes them to lose valuable feedback traffic), you can login using your SQRL identity. If the site hasn’t encountered your SQRL ID before, it might prompt you for a “handle name” to use for your postings. But either way, you immediately have an absolutely secure and unique identity on that system where no one can possibly impersonate you, and any time you ever return, you will be immediately and uniquely known. No account, no usernames or passwords. Nothing to remember or to forget. Your SQRL identity eliminates all of that.

What Gibson describes as an advantage of his scheme I consider a huge weakness. Consider a traditional username/password authentication process. What happens when my password gets compromised? I simply request a password reset through another (hopefully still secure) medium such as my email address. This is impossible with Gibson’s scheme. If my SQRL master key gets compromised, there is absolutely no way for the application to verify my identity through another medium. Sure, the application could associate my SQRL key with a particular email address or similar, but that would defeat one of the supposed advantages of the SQRL scheme in the first place.

Single point of failure

The proposed SQRL scheme derives all application specific keys from a single master key. This essentially provides a single juicy target for attackers to go after. Remember how Mat Honan got his online life ruined due to interconnected accounts? Imagine that but with all your online identities. All your identities are belong to us.
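I am not reproducing Gibson’s exact derivation here, but the structure of the problem can be sketched in a few lines of Python: every per-site key is a deterministic function of the one master key (an HMAC over the site domain, as an assumed stand-in for the real derivation), so whoever holds the master can recompute every identity.

```python
import hashlib
import hmac
import os

def site_key(master: bytes, domain: bytes) -> bytes:
    # Sketch only: each per-site key is derived deterministically from the
    # single master key, so leaking the master leaks every site identity.
    return hmac.new(master, domain, hashlib.sha256).digest()

master = os.urandom(32)
bank = site_key(master, b"bank.example.com")
mail = site_key(master, b"mail.example.com")
assert bank != mail                                    # distinct per site...
assert bank == site_key(master, b"bank.example.com")   # ...but fully derivable
```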

So how is this master key to your online life stored? On a smartphone. Not exactly the safest of places, what with all the malware out there targeting iOS or Android devices. You have bigger problems if you are using Windows Phone or Blackberry. 😉

People also have a tendency to misplace their smartphones. Combined with the previous point that there is no other means to identify yourself to the application, this is a very good way of locking yourself out of your “house”.

Not exactly very reassuring.

Social Engineering attacks

@tylerl has a very good point about this in his answer. I don’t see a way to put it more clearly, so I will quote him verbatim.

So, for example, a phishing site can display an authentic login QR code which logs in the attacker instead of the user. Gibson is confident that users will see the name of the bank or store or site on the phone app during authentication and will therefore exercise sufficient vigilance to thwart attacks. History suggests this unlikely, and the more reasonable expectation is that seeing the correct name on the app will falsely reassure the user into thinking that the site is legitimate because it was able to trigger the familiar login request on the phone. The caution already beaten in to the heads of users regarding password security does not necessarily translate to new authentication techniques like this one, which is what makes this app likely inherently less resistant to social engineering.

Conclusion

The three major flaws I have pointed out are definitely enough to kill SQRL’s viability as a replacement for password authentication. The scheme not only fails to offer any benefits over conventional methods such as storing passwords in a password manager; it has several major flaws that Gibson fails to address in a satisfactory manner.

This blog post has been cross-posted on http://www.infosecstudent.com/2013/10/debunking-sqrl/

Stump the Chump with Auditd 01

2013-09-27 by . 1 comment

ServerFault user ewwhite describes a rather interesting situation regarding application distribution wherein code must be compiled in production. In short, he wants to keep track of changes to a specific directory path and send alerts via email.

Let’s assume that there is already some basic auditd configuration in play, so we’ll be building out a snippet to be inserted into your existing /etc/audit/audit.rules. Ed was sparse on some of the specifics related to the application, understandably so; let’s make some additional assumptions. Let’s assume that the source code directory in question is “/opt/application/src” and that all binaries are installed into “/opt/application/bin”.

-w /opt/application/src -p w -k app-policy
-w /opt/application/bin -p wa -k app-policy
I’ve decided to process each directory, source and binaries, separately. The commonality between the two is the ‘-w’ and the ‘-k’ options. The ‘-w’ option says to watch that directory recursively. The ‘-k’ option says to add the text “app-policy” to the output log message; this is just to make log reviews easier. The ‘-p’ option is actually where the magic happens, and is the real reason to separate these two rules out.

As we discussed in the previous post, nearly 8 months ago, the option ‘-p w’ instructs the kernel to log on file writes. One would assume this is accomplished by attaching to the POSIX write system call. That syscall, however, gets called quite a lot when files are actually saved, so as not to overwhelm the logging system, auditd instead attaches to the open system call. By using the (w)rite argument we look for any instance of open that uses the O_WRONLY or O_RDWR flags. It’s worth noting that this does not mean a file was actually modified, only that it was opened in such a way that would allow for modification. For example, if a user opened “/opt/application/src/app.h” in a text editor, a log would be generated; however, if it was written to the terminal using cat or read using less, no log would be generated. This is pretty important to remember, as many people will read a file using a text editor and simply exit without saving changes (hopefully).

We also want to watch for file writes in the binary directory, except here we would expect them to be rarer and thus more meaningful. It would be rather unusual, but not out of the question, for someone to use a text editor to open an executable. In addition we added the (a)ttribute option. This will alert us if any of the ownership or access permissions change, most importantly if a file is made executable or its ownership is changed. This will not catch SELinux context changes, but since SELinux uses the auditd logging engine, those changes will show up in the same log file.

Now that we have the rules constructed we can move on to the alerting. Ed wanted the events to be emailed out. This is actually quite a bit more complicated. By default auditd uses its own built-in logger. This does make some sense when you consider the type of logging this system is intended to perform: by not relying on an external logger, like syslogd or rsyslog, it can better survive mistaken configurations. On the downside, it makes alternate logging setups trickier. There does exist a subsystem called ‘audispd’ that acts as a log multiplexer. There are a number of output plugins available, such as syslog, UNIX socket, prelude IDS, etc. None of them really does what Ed wants, so I think our best bet would be to run reports. Auditd is, after all, an auditing tool and not an enforcement tool. So let’s look at something a little different.

Remember how we tacked on ‘-k app-policy‘ to those rules above? Now we get to the why. Let’s try running the command:

aureport -k -ts yesterday 00:00:00 -te yesterday 23:59:59
We should now see a list of all of the logs that contain keys and occurred yesterday. Let’s look at a concrete example of me editing a file in that directory and the subsequent logs.
root@ node1:~> mkdir -p /opt/application/src
root@ node1:~> vim /opt/application/src/app.h
root@ node1:~> aureport -k

Key Report
===============================================
# date time key success exe auid event
===============================================
1. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13446
2. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13445
3. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13447
4. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13448
5. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13449
6. 09/24/2013 11:41:35 app-policy yes /usr/bin/vim 1000 13451
7. 09/24/2013 11:41:35 app-policy yes /usr/bin/vim 1000 13450

The report tells us that at 11:41:29 on September the 24th a user ran the command “/usr/bin/vim” and triggered a rule labeled app-policy. It’s all good so far, but not very detailed. The last two fields, however, are quite useful. The first, 1000, is the UID of my personal account. That is important: notice I was actually running as root. Since I had originally used “sudo -i” to gain a root shell, my original UID was still preserved. This is good! The last field is a unique event ID generated by auditd. Let’s look at that first event, numbered 13446.
root@ node1:~> grep :13446 /var/log/audit/audit.log
type=SYSCALL msg=audit(1380037289.364:13446): arch=c000003e syscall=2 success=yes exit=4 a0=bffa20 a1=c2 a2=180 a3=0 items=2 ppid=21950 pid=22277 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=1226 tty=pts0 comm="vim" exe="/usr/bin/vim" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="app-policy"
type=CWD msg=audit(1380037289.364:13446):  cwd="/root"
type=PATH msg=audit(1380037289.364:13446): item=0 name="/opt/application/src/" inode=2747242 dev=fd:01 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=unconfined_u:object_r:usr_t:s0
type=PATH msg=audit(1380037289.364:13446): item=1 name="/opt/application/src/.app.h.swx" inode=2747244 dev=fd:01 mode=0100600 ouid=0 ogid=0 rdev=00:00 obj=unconfined_u:object_r:usr_t:s0
This is what we mean when we say audit logs are verbose. In the introductory blog post we discussed some of those fields so I’ll save us the pain of going over it again. What we can see, however, is that the user with uid 1000 (see auid=1000) ran the command vim as root (see euid=0) and that command resulted in a change to both “/opt/application/src/” and “/opt/application/src/.app.h.swx“.

What we should be able to see here is that the report generated by aureport doesn’t contain everything we need to see what happened, but it does tell us something happened and gives us what we need to track down the details. In an ideal world you would have some kind of log aggregation system, like Splunk or a SIEM, and send the raw logs there. That system would then have all the alerting functionality built in to alert an admin to the potential policy violation. However, we don’t live in a perfect world, and Ed’s request for email alerts indicates he doesn’t have access to such a system. What I would do is set up a daily cron job to run that report for the previous day. Every morning the log reviewer can check their mailbox and see if any of those files changed when they weren’t supposed to. If daily isn’t reactive enough then we can simply change the values passed to ‘-ts’ and ‘-te’ and run the job more frequently.

Pulling it all together we should have something that looks like this.

#/etc/audit/audit.rules
# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.

# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 320

# Feel free to add below this line. See auditctl man page
-w /opt/application/src -p w -k app-policy
-w /opt/application/bin -p wa -k app-policy

#/etc/cron.d/audit-report
MAILTO=ewwhite@example.com

1 0 * * * root /sbin/aureport -k -ts yesterday 00:00:00 -te yesterday 23:59:59

About Secure Password Hashing

2013-09-13 by . 5 comments

An often overlooked and misunderstood concept in application development is the one involving secure hashing of passwords. We have evolved from plain text password storage, to hashing a password, to appending salts and now even this is not considered adequate anymore. In this post I will discuss what hashing is, what salts and peppers are and which algorithms are to be used and which are to be avoided.

QoTW #48: Difference between Privilege and Permission

2013-09-06 by . 0 comments

Ali Ahmad asked, “What is the difference between Privilege and Permission?”

In many cases they seem to be used interchangeably, but in an IT environment knowing the meanings can have an impact on how you configure your systems.

D3C4FF’s top scoring answer references this question on English Stack Exchange, which does give the literal meanings:

A permission is a property of an object, such as a file. It says which agents are permitted to use the object, and what they are permitted to do (read it, modify it, etc.). A privilege is a property of an agent, such as a user. It lets the agent do things that are not ordinarily allowed. For example, there are privileges which allow an agent to access an object that it does not have permission to access, and privileges which allow an agent to perform maintenance functions such as restart the computer.

AJ Henderson supports this view

…a user might be granted a privilege that corresponds to the permission being demanded, but that would really be semantics of some systems and isn’t always the case.

As does Gilles with this comment:

That distinction is common in the unix world, where we tend to say that a process has privileges (what they can or cannot do) and files have permissions (what can or cannot be done to them)

Callum Wilson offers the more specific case under full Role Based Access Control (RBAC)

the permission is the ER link between the role, function and application, i.e. permissions are given to roles; the privilege is the ER link between an individual and the application, i.e. privileges are given to people.

And a further slight twist from KeithS:

A permission is asked for, a privilege is granted. When you think about the conversational use of the two words, the “proactive” use of a permission (the first action typically taken by any subject in a sentence within a context) is to ask for it; the “reactive” use (the second action taken in response to the first) is to grant it. By contrast, the proactive use of a privilege is to grant it, while the reactive use is to exercise it. Permissions are situation-based; privileges are time-based. Again, the connotation of the two terms is that permission is something that must be requested and granted each and every time you perform the action, and whether it is granted can then be based on the circumstances of the situation. A privilege, however, is in regard to some specific action that the subject knows they are allowed to do, because they have been informed once that they may do this thing, and then the subject proceeds to do so without specifically asking, until they are told they can no longer do this thing.

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

WiFi security: history of insecurities in WEP, WPA and WPA2

2013-08-28 by . 0 comments

Recently I answered a question on security.stackexchange.com regarding security in wireless protocols. The OP wanted to know more about what methods were available to break into a wireless network and how long it would take.



QoTW #47: Lessons learned and misconceptions regarding encryption and cryptology

2013-06-10 by . 0 comments

This one is a slightly different Question of the Week. Makerofthings7 asked a list-type question, which generally doesn’t fit on the Stack Exchange network; however, this question generated a lot of interest and some excellent answers containing a lot of useful information, so it is probably worthwhile posting an excerpt of that content here. If you are a budding cryptographer, or a developer asked to implement a crypto function, read these guidelines first!

D.W., one of our resident high-rep cryptographers provided a number of the highest scoring answers.

Don’t roll your own crypto.

Don’t invent your own encryption algorithm or protocol; that is extremely error-prone. As Bruce Schneier likes to say,

“Anyone can invent an encryption algorithm they themselves can’t break; it’s much harder to invent one that no one else can break”.

Crypto algorithms are very intricate and need intensive vetting to be sure they are secure; if you invent your own, you won’t get that, and it’s very easy to end up with something insecure without realizing it.

Instead, use a standard cryptographic algorithm and protocol. Odds are that someone else has encountered your problem before and designed an appropriate algorithm for that purpose.

Your best case is to use a high-level well-vetted scheme: for communication security, use TLS (or SSL); for data at rest, use GPG (or PGP). If you can’t do that, use a high-level crypto library, like cryptlib, GPGME, Keyczar, or NaCl, instead of a low-level one, like OpenSSL, CryptoAPI, JCE, etc. Thanks to Nate Lawson for this suggestion.

Don’t use encryption without message authentication

It is a very common error to encrypt data without also authenticating it.

Example: The developer wants to keep a message secret, so encrypts the message with AES-CBC mode. The error: This is not sufficient for security in the presence of active attacks, replay attacks, reaction attacks, etc. There are known attacks on encryption without message authentication, and the attacks can be quite serious. The fix is to add message authentication.

This mistake has led to serious vulnerabilities in deployed systems that used encryption without authentication, including ASP.NET, XML encryption, Amazon EC2, JavaServer Faces, Ruby on Rails, OWASP ESAPI, IPsec, WEP, ASP.NET again, and SSH2. You don’t want to be the next one on this list.

To avoid these problems, you need to use message authentication every time you apply encryption. You have two choices for how to do that:

Probably the simplest solution is to use an encryption scheme that provides authenticated encryption, e.g., GCM, CWC, EAX, CCM, OCB. The authenticated encryption scheme handles this for you, so you don’t have to think about it.

Alternatively, you can apply your own message authentication, as follows. First, encrypt the message using an appropriate symmetric-key encryption scheme (e.g., AES-CBC). Then, take the entire ciphertext (including any IVs, nonces, or other values needed for decryption), apply a message authentication code (e.g., AES-CMAC, SHA1-HMAC, SHA256-HMAC), and append the resulting MAC digest to the ciphertext before transmission. On the receiving side, check that the MAC digest is valid before decrypting. This is known as the encrypt-then-authenticate construction. This also works fine, but requires a little more care from you.
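Python’s standard library has no AES, so treating the ciphertext as an opaque blob, a sketch of the encrypt-then-authenticate construction looks roughly like this (the 32-byte tag length follows from HMAC-SHA256; the random bytes stand in for real cipher output):

```python
import hashlib
import hmac
import os

def authenticate(ciphertext: bytes, mac_key: bytes) -> bytes:
    # MAC the entire ciphertext, IV included, and append the tag.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify(blob: bytes, mac_key: bytes) -> bytes:
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    # Constant-time comparison, checked BEFORE any decryption is attempted.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("MAC check failed - do not decrypt")
    return ciphertext

mac_key = os.urandom(32)
ciphertext = os.urandom(48)   # stand-in for AES-CBC output (IV || blocks)
blob = authenticate(ciphertext, mac_key)
assert verify(blob, mac_key) == ciphertext
```

Note that the MAC key here is independent of the (implied) encryption key, in line with the advice later in this post about not reusing keys across purposes.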

Be careful when concatenating multiple strings, before hashing.

An error I sometimes see: People want a hash of the strings S and T. They concatenate them to get a single string S||T, then hash it to get H(S||T). This is flawed.

The problem: Concatenation leaves the boundary between the two strings ambiguous. Example: builtin||securely = built||insecurely. Put another way, the hash H(S||T) does not uniquely identify the strings S and T. Therefore, the attacker may be able to change the boundary between the two strings, without changing the hash. For instance, if Alice wanted to send the two strings builtin and securely, the attacker could change them to the two strings built and insecurely without invalidating the hash.

Similar problems apply when applying a digital signature or message authentication code to a concatenation of strings.

The fix: rather than plain concatenation, use some encoding that is unambiguously decodeable. For instance, instead of computing H(S||T), you could compute H(length(S)||S||T), where length(S) is a 32-bit value denoting the length of S in bytes. Or, another possibility is to use H(H(S)||H(T)), or even H(H(S)||T).
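The flaw and the length-prefix fix are easy to demonstrate with Python’s hashlib (the 32-bit big-endian length prefix mirrors the H(length(S)||S||T) construction above):

```python
import hashlib
import struct

def naive_hash(s: bytes, t: bytes) -> bytes:
    # Plain concatenation: the boundary between s and t is lost.
    return hashlib.sha256(s + t).digest()

def safe_hash(s: bytes, t: bytes) -> bytes:
    # Prefix s with its 32-bit length so the boundary is unambiguous.
    return hashlib.sha256(struct.pack(">I", len(s)) + s + t).digest()

# Plain concatenation cannot tell these apart...
assert naive_hash(b"builtin", b"securely") == naive_hash(b"built", b"insecurely")
# ...length-prefixing can.
assert safe_hash(b"builtin", b"securely") != safe_hash(b"built", b"insecurely")
```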

For a real-world example of this flaw, see this flaw in Amazon Web Services or this flaw in Flickr [pdf].

Make sure you seed random number generators with enough entropy.

Make sure you use crypto-strength pseudorandom number generators for things like generating keys, choosing IVs/nonces, etc. Don’t use rand(), random(), drand48(), etc.

Make sure you seed the pseudorandom number generator with enough entropy. Don’t seed it with the time of day; that’s guessable.

Examples: srand(time(NULL)) is very bad. A good way to seed your PRNG is to grab 128 bits of true-random data, e.g., from /dev/urandom, CryptGenRandom, or similar. In Java, use SecureRandom, not Random. In .NET, use System.Security.Cryptography.RandomNumberGenerator, not System.Random. In Python, use random.SystemRandom, not random. Thanks to Nate Lawson for some examples.
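In Python specifically, the contrast looks like this (the secrets module, like random.SystemRandom, draws from the OS CSPRNG):

```python
import random
import secrets

# Predictable: anyone who guesses the seed reproduces the "random" key.
weak_key = random.Random(42).getrandbits(128)
assert random.Random(42).getrandbits(128) == weak_key  # fully reproducible

# Unpredictable: 128 bits from the OS CSPRNG, no manual seeding needed.
strong_key = secrets.token_bytes(16)
assert len(strong_key) == 16
```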

Real-world example: see this flaw in early versions of Netscape’s browser, which allowed an attacker to break SSL.

Don’t reuse nonces or IVs

Many modes of operation require an IV (Initialization Vector). You must never re-use the same value for an IV twice; doing so can cancel all the security guarantees and cause a catastrophic breach of security.

For stream cipher modes of operation, like CTR mode or OFB mode, re-using an IV is a security disaster. It can cause the encrypted messages to be trivially recoverable. For other modes of operation, like CBC mode, re-using an IV can also facilitate plaintext-recovery attacks in some cases.

No matter what mode of operation you use, you shouldn’t reuse the IV. If you’re wondering how to do it right, the NIST specification provides detailed documentation of how to use block cipher modes of operation properly.

Don’t use a block cipher with ECB for symmetric encryption

(Applies to AES, 3DES, … )

Equivalently, don’t rely on library default settings to be secure. Specifically, many libraries which implement AES implement the algorithm described in FIPS 197, which is so-called ECB (Electronic Code Book) mode: essentially a straightforward mapping of

AES(plaintext [32]byte, key [32]byte) -> ciphertext [32]byte

This is very insecure. The reasoning is simple: while the number of possible keys in the keyspace is quite large, the weak link here is the amount of entropy in the message. As always, xkcd.com describes it better than I can: http://xkcd.com/257/

It’s very important to use something like CBC (Cipher Block Chaining) which basically makes ciphertext[i] a mapping:

ciphertext[i] = SomeFunction(ciphertext[i-1], message[i], key)

Just to point out a few language libraries where this sort of mistake is easy to make: http://golang.org/pkg/crypto/aes/ provides an AES implementation which, if used naively, would result in ECB mode.

The pycrypto library defaults to ECB mode when creating a new AES object.

OpenSSL does this right: every AES call is explicit about the mode of operation. Really, the safest thing IMO is to just try not to do low-level crypto like this yourself. If you’re forced to, proceed as if you’re walking on broken glass (carefully), and try to make sure your users are justified in placing their trust in you to safeguard their data.

Don’t use the same key for both encryption and authentication. Don’t use the same key for both encryption and signing.

A key should not be reused for multiple purposes; that may open up various subtle attacks.

For instance, if you have an RSA private/public key pair, you should not both use it for encryption (encrypt with the public key, decrypt with the private key) and for signing (sign with the private key, verify with the public key): pick a single purpose and use it for just that one purpose. If you need both abilities, generate two keypairs, one for signing and one for encryption/decryption.

Similarly, with symmetric cryptography, you should use one key for encryption and a separate independent key for message authentication. Don’t re-use the same key for both purposes.

Kerckhoffs’s principle: A cryptosystem should be secure even if everything about the system, except the key, is public knowledge

A wrong example: LANMAN hashes

The LANMAN hashes would be hard to figure out if no one knew the algorithm; once the algorithm was known, however, it became very trivial to crack.

The algorithm is as follows (from Wikipedia):

  1. The user’s ASCII password is converted to uppercase.
  2. This password is null-padded to 14 bytes
  3. The “fixed-length” password is split into two seven-byte halves.
  4. These values are used to create two DES keys, one from each 7-byte half
  5. Each of the two keys is used to DES-encrypt the constant ASCII string “KGS!@#$%”, resulting in two 8-byte ciphertext values.
  6. These two ciphertext values are concatenated to form a 16-byte value, which is the LM hash
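Steps 1–3 are trivial to sketch in Python (steps 4–6 need DES, which the standard library does not provide); note how each seven-byte half can be attacked independently:

```python
def lm_preprocess(password: str):
    # Steps 1-3 of the LM scheme: uppercase, null-pad to 14 bytes,
    # then split into two independent seven-byte halves.
    p = password.upper().encode("ascii")[:14].ljust(14, b"\x00")
    return p[:7], p[7:]

left, right = lm_preprocess("Passw0rd!")
assert left == b"PASSW0R"
assert right == b"D!\x00\x00\x00\x00\x00"
# A short password leaves the second half entirely null - its DES output
# is a well-known constant, instantly revealing the password length class.
assert lm_preprocess("abc")[1] == b"\x00" * 7
```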

Because you know the algorithm, you can easily split the ciphertext into its two independent halves, each of which you know was produced from an upper-cased seven-byte password half, resulting in a greatly limited set of characters the password could possibly be.

A correct example: AES encryption

  • Known algorithm
  • Scales with technology. Increase key size when in need of more cryptographic oomph

Try to avoid using passwords as encryption keys.

A common weakness in many systems is to use a password or passphrase, or a hash of a password or passphrase, as the encryption/decryption key. The problem is that this tends to be highly susceptible to offline keysearch attacks. Most users choose passwords that do not have sufficient entropy to resist such attacks.

The best fix is to use a truly random encryption/decryption key, not one deterministically generated from a password/passphrase.

However, if you must use one based upon a password/passphrase, use an appropriate scheme to slow down exhaustive keysearch. I recommend PBKDF2, which uses iterative hashing (along the lines of H(H(H(….H(password)…)))) to slow down dictionary search. Arrange to use sufficiently many iterations to cause this process to take, say, 100ms on the user’s machine to generate the key.
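One way to hit that ~100ms target is to measure on the machine in question. A rough Python sketch of that calibration (the starting count and doubling strategy are my own illustrative choices, not from the answer):

```python
import hashlib
import os
import time

def calibrate_iterations(target: float = 0.1, start: int = 10_000) -> int:
    # Double the PBKDF2 iteration count until one derivation takes roughly
    # `target` seconds on this machine (the ~100 ms guidance above).
    salt, n = os.urandom(16), start
    while True:
        t0 = time.perf_counter()
        hashlib.pbkdf2_hmac("sha256", b"passphrase", salt, n, dklen=32)
        if time.perf_counter() - t0 >= target:
            return n
        n *= 2

iterations = calibrate_iterations()
assert iterations >= 10_000
```

Calibrate once at deployment and store the chosen count alongside the salt, so keys remain derivable later.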

In a cryptographic protocol: Make every authenticated message recognisable: no two messages should look the same

A generalisation/variant of:

Be careful when concatenating multiple strings, before hashing. Don’t reuse keys. Don’t reuse nonces.

During a run of a cryptographic protocol, many messages that cannot be counterfeited without a secret (key or nonce) may be exchanged. These messages can be verified by the receiver because he knows some public (signature) key, or because only he and the sender know some symmetric key or nonce. This ensures that these messages have not been modified.

But this does not make sure that these messages have been emitted during the same run of the protocol: an adversary might have captured these messages previously, or during a concurrent run of the protocol. An adversary may start many concurrent runs of a cryptographic protocol to capture valid messages and reuse them unmodified.

By cleverly replaying messages, it might be possible to attack a protocol without compromising any primary key, without attacking any RNG, any cypher, etc.

By making every authenticated message of the protocol obviously distinct for the receiver, opportunities to replay unmodified messages are reduced (not eliminated).

Don’t use the same key in both directions.

In network communications, a common mistake is to use the same key for communication in the A->B direction as for the B->A direction. This is a bad idea, because it often enables replay attacks that replay something A sent to B, back to A.

The safest approach is to negotiate two independent keys, one for each direction. Alternatively, you can negotiate a single key K, then use K1 = AES(K,00..0) for one direction and K2 = AES(K,11..1) for the other direction.
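The standard library has no AES, so as a sketch of the same idea, distinct labels under HMAC-SHA256 (substituted here for the AES construction in the text) derive an independent key per direction:

```python
import hashlib
import hmac
import os

def direction_key(master: bytes, label: bytes) -> bytes:
    # Derive an independent subkey per direction from the negotiated key K.
    # The text uses AES over fixed blocks; HMAC with distinct labels is an
    # equivalent stdlib sketch of the separation.
    return hmac.new(master, label, hashlib.sha256).digest()

k = os.urandom(32)
k_ab = direction_key(k, b"A->B")
k_ba = direction_key(k, b"B->A")
assert k_ab != k_ba  # a message MACed for A->B will not verify as B->A
```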

Don’t use insecure key lengths.

Ensure you use algorithms with a sufficiently long key.

For symmetric-key cryptography, I’d recommend at least an 80-bit key, and if possible, a 128-bit key is a good idea. Don’t use 40-bit crypto; it is insecure and easily broken by amateurs, simply by exhaustively trying every possible key. Don’t use 56-bit DES; it is not trivial to break, but it is within the reach of dedicated attackers. A 128-bit algorithm, like AES, is not appreciably slower than 40-bit crypto, so you have no excuse for using crummy crypto.

For public-key cryptography, key length recommendations are dependent upon the algorithm and the level of security required. Also, increasing the key size does harm performance, so massive overkill is not economical; thus, this requires a little more thought than selection of symmetric-key key sizes. For RSA, El Gamal, or Diffie-Hellman, I’d recommend that the key be at least 1024 bits, as an absolute minimum; however, 1024-bit keys are on the edge of what might become crackable in the near term and are generally not recommended for modern use, so if at all possible, I would recommend 1536- or even 2048-bit keys. For elliptic-curve cryptography, 160-bit keys appear adequate, and 224-bit keys are better. You can also refer to published guidelines establishing rough equivalences between symmetric- and public-key key sizes.

Don’t re-use the same key on many devices.

The more widely you share a cryptographic key, the less likely you’ll be able to keep it secret. Some deployed systems have re-used the same symmetric key on every device in the system. The problem with this is that sooner or later, someone will extract the key from a single device, and then they’ll be able to attack all the other devices. So, don’t do that.

See also “Symmetric Encryption Don’t #6: Don’t share a single key across many devices” in this blog article. Credits to Matthew Green.

A one-time pad is not a one-time pad if the key is stretched by an algorithm

The term “one-time pad” (also known as a Vernam cipher) is frequently misapplied to various cryptographic solutions in an attempt to claim unbreakable security. But by definition, a Vernam cipher is secure if and only if all three of these conditions are met:

  • The key material is truly unpredictable; AND
  • The key material is the same length as the plaintext; AND
  • The key material is never reused.

Any violation of those conditions means it is no longer a one-time pad cipher.

The common mistake is that a short key is stretched with an algorithm. This violates the unpredictability rule (never mind the key-length rule). Once this is done, the one-time pad is mathematically transformed into the key-stretching algorithm. Combining the short key with random bytes only alters the search space needed to brute-force the key-stretching algorithm. Similarly, using “randomly generated” bytes turns the random number generator into the security algorithm.

You may have a very good key-stretching algorithm. You may also have a very secure random number generator. However, your algorithm is by definition not a one-time pad, and thus does not have the unbreakable property of a one-time pad.
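To see why, here is a minimal sketch using SHA-256 in counter mode as a stand-in for an arbitrary key-stretching algorithm: because the whole “pad” is determined by a short key, brute-forcing that key recovers the message, which a true one-time pad makes impossible.

```python
import hashlib
import itertools

def keystream(key: bytes, n: int) -> bytes:
    """Stretch a short key into n pad bytes: SHA-256 in counter mode
    (a stand-in for any key-stretching algorithm)."""
    out = b""
    for ctr in itertools.count():
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        if len(out) >= n:
            return out[:n]

plaintext = b"attack at dawn"
key = b"\x01\x02"  # a deliberately tiny 16-bit key
ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

# A true OTP would force an attacker to consider 2**(8*14) possible pads;
# here the whole pad is determined by a 16-bit key, so 2**16 guesses suffice.
# (We cheat by comparing against the known plaintext; a real attacker would
# look for plausible English instead.)
for guess in range(2**16):
    gk = guess.to_bytes(2, "big")
    if bytes(c ^ k for c, k in zip(ct, keystream(gk, len(ct)))) == plaintext:
        break

assert gk == key  # the "unbreakable" pad falls to a trivial brute force
```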

Don’t use an OTP or stream cipher in disk encryption

Example 1

Suppose a file is saved using a stream cipher / OTP, then resaved after a minor edit with the same keystream. An attacker comparing the two versions can see that only certain bits changed and infer information about the document. (Imagine changing the salutation “Dear Bob” to “Dear Alice”.)

Example 2

There is no integrity protection in the output: an attacker can make predictable modifications to the plaintext simply by XORing chosen bits into the ciphertext.

Take away: Modifications to ciphertext are undetected and have predictable impact on the plaintext.
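This predictable impact is easy to demonstrate: with any pad-style cipher, XORing a chosen difference into the ciphertext XORs the same difference into the plaintext. A minimal sketch (the message format and payee names are invented for illustration):

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pad = os.urandom(12)
plaintext = b"Pay Bob $100"          # invented message for illustration
ct = xor(plaintext, pad)

# The attacker knows the message format but not the pad: XORing a chosen
# difference into bytes 4..6 of the ciphertext renames the payee.
delta = xor(b"Bob", b"Eve")
forged = ct[:4] + xor(ct[4:7], delta) + ct[7:]

assert xor(forged, pad) == b"Pay Eve $100"
```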

Solution

Use a block cipher mode that includes message integrity checks (authenticated encryption) in these situations.
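An AEAD mode such as AES-GCM provides this integrity check as part of the cipher itself. Python’s standard library has no AES, so the sketch below illustrates just the integrity half with the encrypt-then-MAC construction, applied to an already-encrypted byte string (a placeholder here):

```python
import hashlib
import hmac
import os

def protect(mac_key: bytes, ciphertext: bytes) -> bytes:
    # Encrypt-then-MAC: append an HMAC-SHA256 tag over the ciphertext,
    # so any modification is detected before the data is trusted.
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify(mac_key: bytes, blob: bytes) -> bytes:
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):   # constant-time comparison
        raise ValueError("ciphertext was tampered with")
    return ciphertext

mac_key = os.urandom(32)
blob = protect(mac_key, b"some ciphertext")          # store blob on disk
assert verify(mac_key, blob) == b"some ciphertext"   # intact: accepted

tampered = bytes([blob[0] ^ 1]) + blob[1:]           # attacker flips one bit
try:
    verify(mac_key, tampered)
except ValueError:
    pass                                             # tampering detected
```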

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Filed under Crypto, Question of the Week

QoTW #46: CTRL+ALT+DEL Login – Rationale behind it?

2013-05-10 by . 1 comments

CountZero asked this interesting question: Why is CTRL+ALT+DEL required at login on Windows systems?

His perspective was that it adds an extra step before login, which is bad from a usability perspective, so there must be a reason for it.

This got a lot of attention, but looking at the top answers:

Adnan‘s answer briefly describes the Secure Attention Key: the Windows kernel will only notify the Winlogon process about this key combination, which prevents it from being hijacked by an application, malware or some other process. In this way, when you press Ctrl+Alt+Del, you can be sure that you are typing your password into the real login form and not into a fake process trying to steal your password, such as an application that looks exactly like the Windows login screen. An equivalent of this in Linux is Ctrl+Alt+Pause.

Polynomial‘s comment on the answer further expands on the history of this notification:

As a side note: when you say it’s “wired”, what that actually means is that Ctrl+Alt+Del is mapped to a hardware-defined interrupt (set in the APIC, a physical chip on your motherboard). The interrupt was, historically, triggered by the BIOS’ keyboard handler routine, but these days it’s less clear cut. The interrupt is mapped to an ISR which is executed at ring0, which triggers the OS’s internal handler for the event. When no ISR for the interrupt is set, it (usually) causes an ACPI power-cycle event, also known as a hard reboot.

ThomasPornin describes an attack which would work if the Secure Attention Key didn’t exist:

You could make an application which goes full-screen, grabs the keyboard, and displays something which looks like the normal login screen, down to the last pixel. You then log on the machine, launch the application, and go away until some unsuspecting victim finds the machine, tries to log on, and gives his username and password to your application. Your application then just has to simulate a blue screen of death, or maybe to actually log the user on, to complete the illusion.

There is also an excellent answer over on ServerFault, which TerryChia linked to in his answer:

The Windows (NT) kernel is designed to reserve the notification of this key combination to a single process: Winlogon. So, as long as the Windows installation itself is working as it should, no third party application can respond to this key combination (if it could, it could present a fake logon window and keylog your password 😉)

So there you have it – as long as your OS hasn’t been hacked, CTRL+ALT+DEL protects you.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

About the recent DNS Amplification Attack against Spamhaus: Countermeasures and Mitigation

2013-04-15 by . 1 comments

A few weeks ago the anti-spam provider Spamhaus was hit by one of the biggest denial-of-service attacks ever seen, producing over 300 Gbit/s of traffic. The technique used to generate most of the traffic was DNS amplification, which doesn’t require thousands of infected hosts, but instead exploits misconfigured DNS servers and a serious design flaw in DNS. We will discuss how this works, what it abuses and how Spamhaus was able to mitigate the attack.

more »

QoTW #45: Is my developer’s home-brew password security right or wrong, and why?

2013-04-05 by . 0 comments

An incredibly popular question, viewed 17000 times in its first 3 weeks, this question has even led to a new Sec.SE meta meme.

In fact, our top meta meme explains why – the First Rule of Crypto is “Don’t Roll Your Own!”

So, with that in mind, Polynomial’s answer, delivered with a liberal dose of snark, explains in simple language:

This home-brew method offers no real resistance against brute force attacks, and gives a false impression of “complicated” security…Stick to tried and tested key derivation algorithms like PBKDF2 or bcrypt, which have undergone years of in-depth analysis and scrutiny from a wide range of professional and hobbyist cryptographers.
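For reference, PBKDF2 ships in Python’s standard library, so there is no need to roll your own. A minimal sketch (the iteration count is an illustrative work factor, not a recommendation; tune it for your hardware):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune for your hardware

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # a fresh random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
assert check_password("correct horse battery staple", salt, digest)
assert not check_password("tr0ub4dor&3", salt, digest)
```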

Konerak lists out some advantages of going with an existing public protocol:

  • Probably written by smarter people than you
  • Tested by a lot more people (probably some of them smarter than you)
  • Reviewed by a lot more people (probably some of them smarter than you), often has mathematical proof
  • Improved by a lot more people (probably some of them smarter than you)
  • The moment just one of those thousands of people finds a flaw, a lot of people start fixing it

KeithS also gives more detail:

  • MD5 is completely broken
  • SHA-1 is considered vulnerable
  • More hashes don’t necessarily mean better hashing
  • Passwords are inherently low-entropy
  • This scheme is not adding any significant proof of work

Along with further answers, the discussion on this post covered a wide range of issues – well worth reading the whole thing!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Presentations: Starting your security career – where can you go?

2013-03-28 by . 3 comments

I gave a talk on career planning in Information Security at Abertay University on the 16th of January 2013.

Securi-Tay is an annual security conference organised by students at Abertay, and it was a very well organised and run event that could put some professional conferences to shame!

Video of my talk


The talk went down very well, with a lot of discussion spinning off afterwards, and the odd additional visitor to Sec.SE

Most of the video should be straightforward, but a couple of the slides may be hard to read so I have included them here:

Slide 8, industry trends:


Slide 13, some useful certifications:


Slide 14, the time-bounded nature of certifications:

Slide 16, self marketing (see that nice big Sec.SE logo :-):

Filed under Community