Author Archive

QoTW #53 How can I punish a hacker?

2016-02-05 by roryalsop. 4 comments

Elmo asked:

I am a small business owner. My website was recently hacked; although no damage was done, some non-sensitive data was stolen and some backdoor shells were uploaded. Since then, I have deleted the shells, fixed the vulnerability and blocked the IP address of the hacker. Can I do something to punish the hacker, since I have the IP address? Like can I get them in jail or something?

This question comes up time and time again, as people do get upset and angry when their online presence has been attacked, and we have some very simple guidance which will almost always apply:

Terry Chia wrote:

You don’t punish the hacker. The law does. Just report whatever pieces of information you have to the police and let them handle it.

And @TildalWave asked:

What makes you believe that this IP is indeed the hacker’s IP address, and not simply another hacked-into computer running in zombie mode? And who is to say that your own web server didn’t run in exactly the same zombie mode until you removed the shells installed through the backdoor you later identified? Shouldn’t you expect another person, whose web server was attacked (or indeed hacked) through your compromised web server’s IP, to be thinking exactly the same about you, and to already be looking for ways to get even, just as you are?

justausr takes this even further:

Don’t play their game, you’ll lose. I’ve learned not to play that game; hackers by nature have more spare time than you and will ultimately win. Even if you get him back, your website will be unavailable to your customers for a solid week afterwards. Remember, you’re the one with public-facing servers, while all you have is the IP of a random server he probably used once. He’s the one with a bunch of scripts and likely more knowledge than you will gain in your quest for revenge. The odds aren’t in your favor, and the cost to your business is probably too high to risk losing.

Similarly, the other answers mostly discuss the difficulty in identifying the correct perpetrator, and the risks of trying to do something to them.

But Scott Pack‘s answer does provide a little side-step from the generally accepted principles most civilians must follow:

The term most often used to describe what you’re talking about is Hacking Back. It’s part of the Offensive Countermeasures movement that’s gaining traction lately. Some really smart people are putting their heart and soul into figuring out how we, as an industry, should be doing this. There are lots of things you can do, but unless you’re a nation-state, or have orders and a contract from a nation-state your options are severely limited.

tl;dr – don’t be a vigilante. If you do, you will have broken the law, and the police are likely to be able to prove your guilt a lot more easily than that of the unknown hacker.

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #52 Which factors should I consider for devices that accept handwritten digital signatures?

2014-12-19 by roryalsop. 1 comment

Indrek asked this question about digital signature devices, such as the ones delivery drivers ask you to sign for your packages. While he identified EU directive 1999/93/EC as appearing to regulate this area, he had some concerns about what should be done to ensure these signatures are as valid as their paper counterparts.

sam280 pointed out that:

EU directive 1999/93/EC (and its upcoming replacement) enforces legal equivalence between a qualified electronic signature and a handwritten signature in all Member States, and “some legal value” for other types of advanced electronic signatures. However, this directive does not address “handwritten digital signatures” but actual electronic signatures, as standardized for instance by PAdES or CAdES. In other words, 1999/93/EC will not help you here, and I doubt technical measures alone will ensure that this kind of signature is accepted in court.

and

advanced electronic signatures which provide legal equivalence with a handwritten signature require the use of a qualified certificate (1999/93/EC article 5.1): tablet-based solutions obviously do not belong to this category.

Which implies that the existing regulations don’t fully cater for this use case, and this is borne out by the accepted answer from D.W.:

If there is no previous case law on this topic, then I would expect this to come down to an assessment of credibility, based upon the testimony of the people involved, possibly testimony from expert witnesses, and the rest of the circumstances surrounding the court case. Keep in mind that the legal process doesn’t treat signatures as absolute ironclad guarantees; they are considered as part of the totality of the evidence.

D.W.’s answer discusses the problem of law here but sums up with a very down to earth conclusion:

…for a low-value transaction, you probably don’t need any crypto. If you’ve captured a signature that appears to match how Alice signs her name, that’s probably going to be good enough for a low-value transaction. Conversely, for a high-value transaction, I’m skeptical about whether these devices are going to be persuasive in court, no matter how much fancy crypto you’ve thrown into them.

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Is our entire password strategy flawed?

2014-06-19 by roryalsop. 8 comments

paj28 posed a question that really fits better here as a blog post:

Security Stack Exchange gets a lot of questions about password strength, password best practices, attacks on passwords, and there’s quite a lot for both users and sites to do, to stay in line with “best practice”.

Web sites need a password strength policy, account lockout policy, and secure password storage with a slow, salted hash. Some of these requirements have usability impacts, denial of service risks, and other drawbacks. And it’s generally not possible for users to tell whether a site actually does all this (hence plaintextoffenders.com).

Users are supposed to pick a strong password that is unique to every site, change it regularly, and never write it down. And carefully verify the identity of the site every time you enter your password. I don’t think anyone actually follows this, but it is the supposed “best practice”.

In enterprise environments there’s usually a pretty comprehensive single sign-on system, which helps massively, as users only need one good work password. And with just one authentication to protect, using multi-factor is more practical. But on the web we do not have single sign-on; every attempt from Passport, through SAML, OpenID and OAuth has failed to gain a critical mass.

But there is a technology that presents to users just like single sign-on, and that is a password manager with browser integration. Most of these can be told to generate a unique, strong password for every site, and rotate it periodically. This keeps you safe even in the event that a particular web site is not following best practice. And the browser integration ties a password to a particular domain, making phishing all but impossible. To be fair, there are risks with password managers: they put all your eggs in one basket, and they are particularly vulnerable to malware, which is the greatest threat at present.

But if we look at the technology available to us, it’s pretty clear that the current advice is barking up the wrong tree. We should be telling users to use a password manager, not remember loads of complex passwords. And sites could simply store an unsalted fast hash of the password, forget password strength rules and account lockouts.

A problem we have though is that banks tell customers never to write down passwords, and some explicitly include ‘storage on PC’ in this. Banking websites tend to disallow pasting into password fields, which also doesn’t help.

So what’s the solution? Do we go down the ‘all my eggs are in a nice secure basket’ route and use password managers?

I, like all the techies I know, use a password manager for everything. Of the 126 passwords I have in mine, I probably use 8 frequently. Another 20 monthly-ish. Some of the rest of them have been used only once or twice – and despite having a good memory for letters and numbers, I’m not going to be able to remember them so this password manager is essential for me.

I want to be able to easily open my password manager, copy the password and paste it directly into the password field.

I definitely don’t want this password manager to be part of the browser, however, as in the event of browser compromise I don’t wish all my passwords to be vacuumed up, so while functional interaction like copy and paste is essential, I’d like separation of executables.

What do you think? Please comment below.

Communicating Security Risks to Senior Management – 3 years on

2014-04-04 by roryalsop. 0 comments

Back in July 2011 I wrote this brief blog post on the eternal problem of how to bridge the divide between security professionals and senior management.

Thought I’d revisit it nearly 3 years on and hopefully be able to point out ways in which the industry is improving…

Actually, I would have to say it is improving. I am seeing risk functions in larger organisations, especially in the Financial Services industry, with structures that are designed explicitly to oversee technology and report through a CRO to the board, and in the UK, the regulator is providing strong encouragement for financial institutions to get this right. CEOs and shareholders are also seeing the impact of security breaches, denial of service attacks, defacements etc., so now is a very receptive time.

But there is still a long way to go. Many organisations still do not have anyone at board level who can speak both the language of business or operational risk and the language of information security or IT risk. So it may have to be you –

The points in my previous blog post about good questions to help you prepare for conversations with senior management are still valid:

If there is one small piece of advice I’d give to anyone in the security profession it would be to learn about business and operational risk – even if it is just a single course it will help!

QoTW #50: Does password protecting the BIOS help in securing sensitive data?

2014-02-28 by roryalsop. 0 comments

Camil Staps asked this question back in April 2013. Although it is generally accepted that using a BIOS password is good practice, he couldn’t see what protection this would provide, given, in his words, “…there aren’t really interesting settings in the BIOS, are there?”

While Camil reckoned that the BIOS only holds things like date and time, and enabling drives etc., the answers and comments point out some of the risks, both in relying on a BIOS password and in thinking the BIOS is not important!

The accepted answer from Iszi covers off why the BIOS should be protected:

…[this allows] an attacker to bypass access restrictions you have in place on any non-encrypted data on your drives. With this, they can:

  • Read any data stored unencrypted on the drive.
  • Run cracking tools against local user credentials, or download the authenticator data for offline cracking.
  • Edit the Registry or password files to gain access within the native OS.
  • Upload malicious files and configure the system to run them on next boot-up.

And what you should do as a matter of course:

That said, a lot of the recommendations in my post here (and other answers in that, and linked, threads) are still worth considering.

  • Encrypt the hard drive
  • Make sure the computer is physically secure (e.g.: locked room/cabinet/chassis)
  • Use strong passwords for encryption & BIOS

Password-protecting the BIOS is not entirely an effort in futility.

Thomas Pornin covers off a possible reason for changing the date:

…by making the machine believe it is in the far past, the attacker may trigger some other behaviour which could impact security. For instance, if the OS-level logon uses smart cards with certificates, then the OS will verify that the certificate has not been revoked. If the attacker got to steal a smart card with its PIN code, but the theft was discovered and the certificate was revoked, then the attacker may want to alter the date so that the machine believes that the certificate is not yet revoked.

But all the answers agree that all a BIOS password does on its own is protect the BIOS settings – all of which can be bypassed by an attacker who has physical access to the machine, so as per Piskvor‘s answer:

you need to set up some sort of disk encryption (so that the data is only accessible when your system is running)

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #49: How can someone go off-web, and anonymise themselves after a life online?

2014-01-27 by roryalsop. 2 comments

Everything we do these days is online, whether through our own social media, purchases from online stores, or tracking by Google, Amazon and others. With the leaking of NSA snooping documents and other privacy concerns, the idea of regaining some sort of freedom from all this is gaining traction in the media, so before Christmas I asked the deceptively simple question:

How can someone go off-web, and anonymise themselves after a life online?

Which ended up being the most popular question I have ever asked on Stack Exchange, by a good margin.

Lucas Kauffman supplied the top rated answer, which goes into some detail on heuristics and data mining, likely to be the biggest problem for anyone trying to do this successfully:

Avoiding heuristics means changing everything you do completely. Stop using the same apps and accounts, go live somewhere else, and do not buy the same food from the same brands. The problem here is that this might itself show up as a distinctive pattern, because it is so atypical. Changing your identity is the first step. The second one is not being discovered… the internet doesn’t forget. This means that photos of you will remain online, messages you posted, maybe even IDs you shared will remain on the net. So even when you change your behavior, it will only take one picture to expose you.

The Little Bear provided a short but insightful message, with disturbing undertones:

You cannot enforce forgetfulness. The Web is like a big memory, and you cannot force it to forget everything about you(*). The only way, thus, is to change your identity so that everything the Web knows about you becomes stale. From a cryptographic point of view, this is the same case as with a secret value shared by members of a group: to evict a group member, you have to change the secret value. (*) Except by applying sufficiently excessive force. A global thermonuclear war, with all the involved EMP, might do the trick, albeit with some side effects.

Question3CPO looks again at the statistics on your financial footprint, with a focus on how to muddy the waters:

When it comes to finances, it’s similar; I have to make an assumption that the data I receive are an accurate indicator of who you are. Suppose you make 1/3 or more of your purchases completely away from your interest, for instance, you’re truly a Libertarian, but you decide to subscribe to a Socialist magazine. How accurate are my data then? Also, you may change in ten years, so how accurate will my data be then, unless I account for it (and how effective then is it to have all the historic data)?

and Ajoy follows up with some more pointers on poisoning data stores:

  1. Make a list of all websites where you have accounts or which are linked to you in some way.
  2. One by one, remove your personal details, friends, etc. Add misinformation – new obscure data, new friends, new interests, anything else you can think of. De-link your related accounts, re-link them to other fake ones.
  3. Let the poisoned information stay for some time. Meanwhile, you could additionally change these details again. Poisoning the poisoned! Ensure that there is no visible pattern or link between any of the poisoned accounts.
  4. Then you could delete all of them, again very slowly.

There are quite a few other insightful answers, and the question attracted a couple of very interesting comments, including my favourite:

At the point you have succeeded you will also be someone else. –  stackunderflow

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #48: Difference between Privilege and Permission

2013-09-06 by roryalsop. 0 comments

Ali Ahmad asked, “What is the difference between Privilege and Permission?”

In many cases they seem to be used interchangeably, but in an IT environment knowing the meanings can have an impact on how you configure your systems.

D3C4FF’s top scoring answer references this question on English Stack Exchange, which gives the literal meanings:

A permission is a property of an object, such as a file. It says which agents are permitted to use the object, and what they are permitted to do (read it, modify it, etc.). A privilege is a property of an agent, such as a user. It lets the agent do things that are not ordinarily allowed. For example, there are privileges which allow an agent to access an object that it does not have permission to access, and privileges which allow an agent to perform maintenance functions such as restart the computer.

AJ Henderson supports this view:

…a user might be granted a privilege that corresponds to the permission being demanded, but that would really be semantics of some systems and isn’t always the case.

As does Gilles with this comment:

That distinction is common in the unix world, where we tend to say that a process has privileges (what they can or cannot do) and files have permissions (what can or cannot be done to them).
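
Gilles’ unix distinction is easy to see from a script. A minimal illustrative sketch in Python (unix-only; the file path is just an example):

    import os, stat

    path = "/etc/shadow"                        # a file most users may not read
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print("permissions (a property of the file):", oct(mode))
    print("effective UID (the basis of this process's privileges):", os.geteuid())
    print("may this process read the file?", os.access(path, os.R_OK))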

Callum Wilson offers the more specific case under full Role Based Access Control (RBAC):

the permission is the ER link between the role, function and application, i.e. permissions are given to roles; the privilege is the ER link between an individual and the application, i.e. privileges are given to people.

And a further slight twist from KeithS:

A permission is asked for, a privilege is granted. When you think about the conversational use of the two words, the “proactive” use of a permission (the first action typically taken by any subject in a sentence within a context) is to ask for it; the “reactive” use (the second action taken in response to the first) is to grant it. By contrast, the proactive use of a privilege is to grant it, while the reactive use is to exercise it.

Permissions are situation-based; privileges are time-based. Again, the connotation of the two terms is that permission is something that must be requested and granted each and every time you perform the action, and whether it is granted can then be based on the circumstances of the situation. A privilege, however, is in regard to some specific action that the subject knows they are allowed to do, because they have been informed once that they may do this thing; the subject then proceeds to do so without specifically asking, until they are told they can no longer do this thing.

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #47: Lessons learned and misconceptions regarding encryption and cryptology

2013-06-10 by roryalsop. 0 comments

This one is a slightly different Question of the Week. Makerofthings7 asked a list-type question, which generally doesn’t fit on the Stack Exchange network; however, this question generated a lot of interest and some excellent answers containing a lot of useful information, so it is worth posting an excerpt of that content here. If you are a budding cryptographer, or a developer asked to implement a crypto function, read these guidelines first!

D.W., one of our resident high-rep cryptographers provided a number of the highest scoring answers.

Don’t roll your own crypto.

Don’t invent your own encryption algorithm or protocol; that is extremely error-prone. As Bruce Schneier likes to say,

“Anyone can invent an encryption algorithm they themselves can’t break; it’s much harder to invent one that no one else can break”.

Crypto algorithms are very intricate and need intensive vetting to be sure they are secure; if you invent your own, you won’t get that, and it’s very easy to end up with something insecure without realizing it.

Instead, use a standard cryptographic algorithm and protocol. Odds are that someone else has encountered your problem before and designed an appropriate algorithm for that purpose.

Your best case is to use a high-level well-vetted scheme: for communication security, use TLS (or SSL); for data at rest, use GPG (or PGP). If you can’t do that, use a high-level crypto library, like cryptlib, GPGME, Keyczar, or NaCl, instead of a low-level one, like OpenSSL, CryptoAPI, JCE, etc. Thanks to Nate Lawson for this suggestion.

Don’t use encryption without message authentication

It is a very common error to encrypt data without also authenticating it.

Example: The developer wants to keep a message secret, so encrypts the message with AES-CBC mode. The error: This is not sufficient for security in the presence of active attacks, replay attacks, reaction attacks, etc. There are known attacks on encryption without message authentication, and the attacks can be quite serious. The fix is to add message authentication.

This mistake has led to serious vulnerabilities in deployed systems that used encryption without authentication, including ASP.NET, XML encryption, Amazon EC2, JavaServer Faces, Ruby on Rails, OWASP ESAPI, IPsec, WEP, ASP.NET again, and SSH2. You don’t want to be the next one on this list.

To avoid these problems, you need to use message authentication every time you apply encryption. You have two choices for how to do that:

Probably the simplest solution is to use an encryption scheme that provides authenticated encryption, e.g., GCM, CWC, EAX, CCM, or OCB. The authenticated encryption scheme handles this for you, so you don’t have to think about it.
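
Most modern libraries expose authenticated encryption as a one-call interface. A minimal sketch in Python, assuming the third-party cryptography package (any AEAD library would do):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)                       # never reuse a nonce under one key
    ct = AESGCM(key).encrypt(nonce, b"attack at dawn", None)  # ciphertext plus built-in tag
    pt = AESGCM(key).decrypt(nonce, ct, None)    # raises InvalidTag if tampered with
    assert pt == b"attack at dawn"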

Alternatively, you can apply your own message authentication, as follows. First, encrypt the message using an appropriate symmetric-key encryption scheme (e.g., AES-CBC). Then, take the entire ciphertext (including any IVs, nonces, or other values needed for decryption), apply a message authentication code (e.g., AES-CMAC, SHA1-HMAC, SHA256-HMAC), and append the resulting MAC digest to the ciphertext before transmission. On the receiving side, check that the MAC digest is valid before decrypting. This is known as the encrypt-then-authenticate construction. This also works fine, but requires a little more care from you.

Be careful when concatenating multiple strings before hashing.

An error I sometimes see: People want a hash of the strings S and T. They concatenate them to get a single string S||T, then hash it to get H(S||T). This is flawed.

The problem: concatenation leaves the boundary between the two strings ambiguous. Example: builtin||securely = built||insecurely. Put another way, the hash H(S||T) does not uniquely identify the strings S and T. Therefore, the attacker may be able to change the boundary between the two strings without changing the hash. For instance, if Alice wanted to send the two strings builtin and securely, the attacker could change them to the two strings built and insecurely without invalidating the hash.

Similar problems apply when applying a digital signature or message authentication code to a concatenation of strings.

The fix: rather than plain concatenation, use some encoding that is unambiguously decodeable. For instance, instead of computing H(S||T), you could compute H(length(S)||S||T), where length(S) is a 32-bit value denoting the length of S in bytes. Or, another possibility is to use H(H(S)||H(T)), or even H(H(S)||T).
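
The length-prefix fix is only a couple of lines in practice. A sketch in Python (hash_pair is a hypothetical helper name):

    import hashlib, struct

    def hash_pair(s: bytes, t: bytes) -> bytes:
        # H(length(S) || S || T): a 32-bit length prefix removes the ambiguity.
        return hashlib.sha256(struct.pack(">I", len(s)) + s + t).digest()

    # Plain concatenation collides; the length-prefixed version does not.
    assert (hashlib.sha256(b"builtin" + b"securely").digest()
            == hashlib.sha256(b"built" + b"insecurely").digest())
    assert hash_pair(b"builtin", b"securely") != hash_pair(b"built", b"insecurely")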

For a real-world example of this flaw, see this flaw in Amazon Web Services or this flaw in Flickr [pdf].

Make sure you seed random number generators with enough entropy.

Make sure you use crypto-strength pseudorandom number generators for things like generating keys, choosing IVs/nonces, etc. Don’t use rand(), random(), drand48(), etc.

Make sure you seed the pseudorandom number generator with enough entropy. Don’t seed it with the time of day; that’s guessable.

Examples: srand(time(NULL)) is very bad. A good way to seed your PRNG is to grab 128 bits of true-random data, e.g., from /dev/urandom, CryptGenRandom, or similar. In Java, use SecureRandom, not Random. In .NET, use System.Security.Cryptography.RandomNumberGenerator, not System.Random. In Python, use random.SystemRandom, not random. Thanks to Nate Lawson for some examples.
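
In Python, the standard-library secrets module (like random.SystemRandom) draws from the OS CSPRNG, so the contrast is only a few lines:

    import random, secrets

    random.seed(1234)                  # deterministic: fine for simulations,
    weak = random.getrandbits(128)     # fatal for keys (fully predictable)

    strong = secrets.token_bytes(16)   # 128 bits from the OS CSPRNG (/dev/urandom)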

Real-world example: see this flaw in early versions of Netscape’s browser, which allowed an attacker to break SSL.

Don’t reuse nonces or IVs

Many modes of operation require an IV (Initialization Vector). You must never re-use the same value for an IV twice; doing so can cancel all the security guarantees and cause a catastrophic breach of security.

For stream cipher modes of operation, like CTR mode or OFB mode, re-using an IV is a security disaster. It can cause the encrypted messages to be trivially recoverable. For other modes of operation, like CBC mode, re-using an IV can also facilitate plaintext-recovery attacks in some cases.

No matter what mode of operation you use, you shouldn’t reuse the IV. If you’re wondering how to do it right, the NIST specification provides detailed documentation of how to use block cipher modes of operation properly.
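
To see why stream-mode IV reuse is so damaging: two CTR ciphertexts produced under the same key and nonce XOR to the XOR of their plaintexts, so the keystream cancels out entirely. A demonstration sketch, assuming the third-party cryptography package:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key, nonce = os.urandom(16), os.urandom(16)      # nonce wrongly reused below

    def ctr_encrypt(pt: bytes) -> bytes:
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return enc.update(pt) + enc.finalize()

    xor = lambda a, b: bytes(x ^ y for x, y in zip(a, b))
    p1, p2 = b"pay alice 100 dollars", b"pay mabel 900 dollars"
    c1, c2 = ctr_encrypt(p1), ctr_encrypt(p2)
    assert xor(c1, c2) == xor(p1, p2)                # the keystream has cancelled out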

Don’t use a block cipher with ECB for symmetric encryption

(Applies to AES, 3DES, … )

Equivalently, don’t rely on library default settings to be secure. Specifically, many libraries which implement AES implement the algorithm described in FIPS 197, which is so-called ECB (Electronic Code Book) mode: essentially a straightforward mapping of

AES(plaintext [16]byte, key [32]byte) -> ciphertext [16]byte

This is very insecure. The reasoning is simple: while the number of possible keys in the keyspace is quite large, the weak link here is the amount of entropy in the message. As always, xkcd.com describes it better than I can: http://xkcd.com/257/

It’s very important to use something like CBC (Cipher Block Chaining) which basically makes ciphertext[i] a mapping:

ciphertext[i] = SomeFunction(ciphertext[i-1], message[i], key)

Just to point out a few language libraries where this sort of mistake is easy to make: http://golang.org/pkg/crypto/aes/ provides an AES implementation which, if used naively, would result in ECB mode.

The pycrypto library defaults to ECB mode when creating a new AES object.

OpenSSL does this right: every AES call is explicit about the mode of operation. Really, the safest thing IMO is to just try not to do low-level crypto like this yourself. If you’re forced to, proceed as if you’re walking on broken glass (carefully), and try to make sure your users are justified in placing their trust in you to safeguard their data.
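
The core ECB weakness fits in one assertion: identical plaintext blocks encrypt to identical ciphertext blocks, so structure in the message leaks straight into the ciphertext. A sketch, again assuming the cryptography package:

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(16)
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    ct = enc.update(b"A" * 16 + b"A" * 16) + enc.finalize()
    assert ct[:16] == ct[16:32]   # repeated blocks are plainly visible to an observer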

Don’t use the same key for both encryption and authentication. Don’t use the same key for both encryption and signing.

A key should not be reused for multiple purposes; that may open up various subtle attacks.

For instance, if you have an RSA private/public key pair, you should not both use it for encryption (encrypt with the public key, decrypt with the private key) and for signing (sign with the private key, verify with the public key): pick a single purpose and use it for just that one purpose. If you need both abilities, generate two keypairs, one for signing and one for encryption/decryption.

Similarly, with symmetric cryptography, you should use one key for encryption and a separate independent key for message authentication. Don’t re-use the same key for both purposes.
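
One common way to honour this rule without managing many independent secrets is to derive purpose-specific sub-keys from a single master key, for example with HKDF. A sketch assuming the cryptography package; the labels are illustrative:

    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    master = os.urandom(32)   # the one secret you actually manage

    def subkey(label: bytes) -> bytes:
        # Distinct `info` labels yield independent-looking keys.
        return HKDF(algorithm=hashes.SHA256(), length=32,
                    salt=None, info=label).derive(master)

    enc_key = subkey(b"encryption")
    mac_key = subkey(b"authentication")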

Kerckhoffs’s principle: A cryptosystem should be secure even if everything about the system, except the key, is public knowledge

A wrong example: LANMAN hashes

The LANMAN hashes would be hard to figure out if no one knew the algorithm; however, once the algorithm was known, they became very trivial to crack.

The algorithm is as follows (from Wikipedia):

  1. The user’s ASCII password is converted to uppercase.
  2. This password is null-padded to 14 bytes.
  3. The “fixed-length” password is split into two seven-byte halves.
  4. These values are used to create two DES keys, one from each 7-byte half.
  5. Each of the two keys is used to DES-encrypt the constant ASCII string “KGS!@#$%”, resulting in two 8-byte ciphertext values.
  6. These two ciphertext values are concatenated to form a 16-byte value, which is the LM hash.

Because the algorithm and the constant are now public, an attacker can split the 16-byte hash into its two independent 8-byte halves and attack each seven-character, upper-case half separately, leaving a drastically reduced set of characters the password could possibly contain.
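
The six steps above translate almost line-for-line into code. A minimal sketch, assuming the third-party pycryptodome package for DES (lm_hash is a hypothetical helper name):

    from Crypto.Cipher import DES

    def _des_key(half: bytes) -> bytes:
        # Spread 7 key bytes (56 bits) across 8 bytes; DES ignores the low
        # (parity) bit of each key byte.
        bits = int.from_bytes(half, "big")
        return bytes(((bits >> (49 - 7 * i)) & 0x7F) << 1 for i in range(8))

    def lm_hash(password: str) -> bytes:
        pw = password.upper().encode("ascii")[:14].ljust(14, b"\x00")  # steps 1-2
        halves = pw[:7], pw[7:]                                        # step 3
        return b"".join(DES.new(_des_key(h), DES.MODE_ECB).encrypt(b"KGS!@#$%")
                        for h in halves)                               # steps 4-6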

A correct example: AES encryption

  • Known algorithm.
  • Scales with technology: increase the key size when in need of more cryptographic oomph.

Try to avoid using passwords as encryption keys.

A common weakness in many systems is to use a password or passphrase, or a hash of a password or passphrase, as the encryption/decryption key. The problem is that this tends to be highly susceptible to offline keysearch attacks. Most users choose passwords that do not have sufficient entropy to resist such attacks.

The best fix is to use a truly random encryption/decryption key, not one deterministically generated from a password/passphrase.

However, if you must use one based upon a password/passphrase, use an appropriate scheme to slow down exhaustive keysearch. I recommend PBKDF2, which uses iterative hashing (along the lines of H(H(H(….H(password)…)))) to slow down dictionary search. Arrange to use sufficiently many iterations to cause this process to take, say, 100ms on the user’s machine to generate the key.
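
In Python this needs nothing beyond the standard library. A sketch; the iteration count is an illustrative placeholder that you should calibrate to roughly 100ms on the target machine:

    import hashlib, os

    salt = os.urandom(16)     # unique per user, stored alongside the derived key
    key = hashlib.pbkdf2_hmac(
        "sha256", b"correct horse battery staple", salt,
        600_000,              # iterations: tune so derivation takes ~100ms
        dklen=32,             # 256-bit derived key
    )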

In a cryptographic protocol: Make every authenticated message recognisable: no two messages should look the same

A generalisation/variant of:

Be careful when concatenating multiple strings before hashing. Don’t reuse keys. Don’t reuse nonces.

During a run of a cryptographic protocol, many messages that cannot be counterfeited without a secret (key or nonce) may be exchanged. These messages can be verified by the receiver because he knows some public (signature) key, or because only he and the sender know some symmetric key or nonce. This makes sure that these messages have not been modified.

But this does not make sure that these messages have been emitted during the same run of the protocol: an adversary might have captured these messages previously, or during a concurrent run of the protocol. An adversary may start many concurrent runs of a cryptographic protocol to capture valid messages and reuse them unmodified.

By cleverly replaying messages, it might be possible to attack a protocol without compromising any primary key, without attacking any RNG, any cypher, etc.

By making every authenticated message of the protocol obviously distinct for the receiver, opportunities to replay unmodified messages are reduced (not eliminated).

Don’t use the same key in both directions.

In network communications, a common mistake is to use the same key for communication in the A->B direction as for the B->A direction. This is a bad idea, because it often enables replay attacks that replay something A sent to B, back to A.

The safest approach is to negotiate two independent keys, one for each direction. Alternatively, you can negotiate a single key K, then use K1 = AES(K,00..0) for one direction and K2 = AES(K,11..1) for the other direction.
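
The single-key variant is one block-cipher call per direction: encrypt two distinct constant blocks under K. A sketch, assuming the cryptography package (raw ECB is acceptable here only because the inputs are fixed, distinct constants):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    K = os.urandom(16)                 # the one negotiated key
    enc = Cipher(algorithms.AES(K), modes.ECB()).encryptor()
    K1 = enc.update(b"\x00" * 16)      # key for the A -> B direction
    K2 = enc.update(b"\xff" * 16)      # key for the B -> A direction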

Don’t use insecure key lengths.

Ensure you use algorithms with a sufficiently long key.

For symmetric-key cryptography, I’d recommend at least an 80-bit key, and if possible, a 128-bit key is a good idea. Don’t use 40-bit crypto; it is insecure and easily broken by amateurs, simply by exhaustively trying every possible key. Don’t use 56-bit DES; it is not trivial to break, but it is within the reach of dedicated attackers. A 128-bit algorithm, like AES, is not appreciably slower than 40-bit crypto, so you have no excuse for using crummy crypto.

For public-key cryptography, key length recommendations are dependent upon the algorithm and the level of security required. Also, increasing the key size does harm performance, so massive overkill is not economical; thus, this requires a little more thought than selection of symmetric-key key sizes. For RSA, El Gamal, or Diffie-Hellman, I’d recommend that the key be at least 1024 bits, as an absolute minimum; however, 1024-bit keys are on the edge of what might become crackable in the near term and are generally not recommended for modern use, so if at all possible, I would recommend 1536- or even 2048-bit keys. For elliptic-curve cryptography, 160-bit keys appear adequate, and 224-bit keys are better. You can also refer to published guidelines establishing rough equivalences between symmetric- and public-key key sizes.

Don’t re-use the same key on many devices.

The more widely you share a cryptographic key, the less likely you’ll be able to keep it secret. Some deployed systems have re-used the same symmetric key on every device on the system. The problem with this is that sooner or later, someone will extract the key from a single device, and then they’ll be able to attack all the other devices. So, don’t do that.

See also “Symmetric Encryption Don’t #6: Don’t share a single key across many devices” in this blog article. Credits to Matthew Green.

A one-time pad is not a one-time pad if the key is stretched by an algorithm

The term “one-time pad” (also known as a Vernam cipher) is frequently misapplied to various cryptographic solutions in an attempt to claim unbreakable security. But by definition, a Vernam cipher is secure if and only if all three of these conditions are met:

  • The key material is truly unpredictable; AND
  • The key material is the same length as the plaintext; AND
  • The key material is never reused.

Any violation of those conditions means it is no longer a one-time pad cipher.

The common mistake made is that a short key is stretched with an algorithm. This action violates the unpredictability rule (never mind the key length rule). Once this is done, the one-time pad is mathematically transformed into the key-stretching algorithm. Combining the short key with random bytes only alters the search space needed to brute force the key-stretching algorithm. Similarly, using “randomly generated” bytes turns the random number generator algorithm into the security algorithm.

You may have a very good key-stretching algorithm. You may also have a very secure random number generator. However, your algorithm is by definition not a one-time pad, and thus does not have the unbreakable property of a one-time pad.

Don’t use an OTP or stream cipher in disk encryption

Example 1

Suppose two files are saved using a stream cipher / OTP. If the file is resaved after a minor edit, an attacker can see that only certain bits were changed and infer information about the document. (Imagine changing the salutation “Dear Bob” to “Dear Alice”).

Example 2

There is no integrity in the output: an attacker can modify the contents of the data simply by XORing bits of the ciphertext.

Take away: Modifications to ciphertext are undetected and have predictable impact on the plaintext.
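
Example 2 is plain XOR arithmetic: flipping ciphertext bits flips exactly the same plaintext bits, and nothing detects it. A self-contained sketch (the strings and keystream are illustrative):

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    keystream = b"keystream!"                   # stands in for the cipher's output
    ct = xor(b"Dear Bob  ", keystream)          # what is stored on disk
    delta = xor(b"Dear Bob  ", b"Dear Alice")   # attacker-chosen change, no key needed
    forged = xor(ct, delta)
    assert xor(forged, keystream) == b"Dear Alice"   # decrypts to the forgery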

Solution

For these situations, use a block cipher mode that includes message integrity checks.

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #46: CTRL+ALT+DEL Login – Rationale behind it?

2013-05-10 by roryalsop. 1 comment

CountZero asked this interesting question: Why is CTRL+ALT+DEL required at login on Windows systems?

His perspective was that it adds an extra step before login, which is bad from a usability perspective, so there must be a reason for it.

This got a lot of attention, but looking at the top answers:

Adnan’s answer briefly describes the Secure Attention Key: the Windows kernel will only notify the Winlogon process about this key combination, which prevents it being hijacked by an application, malware or some other process. In this way, when you press Ctrl+Alt+Del, you can be sure that you’re typing your password into the real login form and not some other fake process trying to steal your password, such as an application which looks exactly like the Windows login. An equivalent of this in Linux is Ctrl+Alt+Pause.

Polynomial‘s comment on the answer further expands on the history of this notification:

As a side note: when you say it’s “wired”, what that actually means is that Ctrl+Alt+Del is a mapped to a hardware defined interrupt (set in the APIC, a physical chip on your motherboard). The interrupt was, historically, triggered by the BIOS’ keyboard handler routine, but these days it’s less clear cut. The interrupt is mapped to an ISR which is executed at ring0, which triggers the OS’s internal handler for the event. When no ISR for the interrupt is set, it (usually) causes an ACPI power-cycle event, also known as a hard reboot.

Thomas Pornin describes an attack which would work if the Secure Attention Key didn’t exist:

You could make an application which goes full-screen, grabs the keyboard, and displays something which looks like the normal login screen, down to the last pixel. You then log on the machine, launch the application, and go away until some unsuspecting victim finds the machine, tries to log on, and gives his username and password to your application. Your application then just has to simulate a blue screen of death, or maybe to actually log the user on, to complete the illusion.

There is also an excellent answer over on ServerFault, which Terry Chia linked to in his answer:

The Windows (NT) kernel is designed to reserve the notification of this key combination to a single process: Winlogon. So, as long as the Windows installation itself is working as it should, no third party application can respond to this key combination (if it could, it could present a fake logon window and keylog your password 😉)

So there you have it – as long as your OS hasn’t been hacked, CTRL+ALT+DEL protects you.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #45: Is my developer’s home-brew password security right or wrong, and why?

2013-04-05 by roryalsop. 0 comments

Incredibly popular, viewed 17,000 times in its first 3 weeks, this question has even led to a new Sec.SE meta meme.

In fact, our top meta meme explains why – the First Rule of Crypto is “Don’t Roll Your Own!”

So, with that in mind, Polynomial’s answer, delivered with a liberal dose of snark, explains in simple language:

This home-brew method offers no real resistance against brute force attacks, and gives a false impression of “complicated” security…Stick to tried and tested key derivation algorithms like PBKDF2 or bcrypt, which have undergone years of in-depth analysis and scrutiny from a wide range of professional and hobbyist cryptographers.
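
For contrast with the home-brew scheme, “tried and tested” fits in a couple of lines. A sketch assuming the third-party bcrypt package:

    import bcrypt

    hashed = bcrypt.hashpw(b"hunter2", bcrypt.gensalt(rounds=12))  # salted, deliberately slow
    assert bcrypt.checkpw(b"hunter2", hashed)                      # timing-safe verification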

Konerak lists out some advantages of going with an existing public protocol:

  • Probably written by smarter people than you
  • Tested by a lot more people (probably some of them smarter than you)
  • Reviewed by a lot more people (probably some of them smarter than you), often has mathematical proof
  • Improved by a lot more people (probably some of them smarter than you)
  • The moment just one of those thousands of people finds a flaw, a lot of people start fixing it

KeithS also gives more detail:

  • MD5 is completely broken
  • SHA-1 is considered vulnerable
  • More hashes don’t necessarily mean better hashing
  • Passwords are inherently low-entropy
  • This scheme is not adding any significant proof of work

Along with further answers, the discussion on this post covered a wide range of issues – well worth reading the whole thing!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.