Posts Tagged ‘QOTW’

QoTW #53 How can I punish a hacker?

2016-02-05 by roryalsop. 4 comments

Elmo asked:

I am a small business owner. My website was recently hacked; although no damage was done, non-sensitive data was stolen and some backdoor shells were uploaded. Since then, I have deleted the shells, fixed the vulnerability and blocked the IP address of the hacker. Can I do something to punish the hacker, since I have the IP address? Like can I get them in jail or something?

This question comes up time and time again, as people do get upset and angry when their online presence has been attacked, and we have some very simple guidance which will almost always apply:

Terry Chia wrote:

You don’t punish the hacker. The law does. Just report whatever pieces of information you have to the police and let them handle it.

And @TildalWave asked:

What makes you believe that this IP is indeed the hacker’s IP address, and not simply another hacked computer running in zombie mode? And who is to say that your own web server didn’t run in exactly the same zombie mode until you removed the shells installed through the later-identified backdoor? Should you expect another person, whose web server was attacked (or indeed hacked) through your compromised web server’s IP, to think exactly the same about you, and to already be looking for ways to get even, just as you are?

justausr takes this even further:

Don’t play their game, you’ll lose. I’ve learned not to play that game; hackers by nature have more spare time than you and will ultimately win. Even if you get him back, your website will be unavailable to your customers for a solid week afterwards. Remember, you’re the one with public-facing servers; you have the IP of a random server that he probably used once. He’s the one with a bunch of scripts and likely more knowledge than you will gain in your quest for revenge. The odds aren’t in your favor, and the cost to your business is probably too high to risk losing.

Similarly, the other answers mostly discuss the difficulty in identifying the correct perpetrator, and the risks of trying to do something to them.

But Scott Pack’s answer does provide a little side-step from the generally accepted principles most civilians must follow:

The term most often used to describe what you’re talking about is Hacking Back. It’s part of the Offensive Countermeasures movement that’s gaining traction lately. Some really smart people are putting their heart and soul into figuring out how we, as an industry, should be doing this. There are lots of things you can do, but unless you’re a nation-state, or have orders and a contract from a nation-state your options are severely limited.

tl;dr – don’t be a vigilante. If you do, you will have broken the law, and the police are likely to be able to prove your guilt a lot more easily than that of the unknown hacker.

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #52 Which factors should I consider for devices that accept handwritten digital signatures?

2014-12-19 by roryalsop. 1 comment

Indrek asked this question on digital signature devices, such as the ones delivery drivers get you to sign for your packages. While he identified EU directive 1999/93/EC as appearing to regulate this area, he had some concerns around what should be done to ensure these signatures are as valid as their paper counterparts.

sam280 pointed out that:

EU directive 1999/93/EC (and its upcoming replacement) enforces legal equivalence between a qualified electronic signature and a handwritten signature in all Member States, and “some legal value” for other types of advanced electronic signatures. However, this directive does not address “handwritten digital signatures” but actual electronic signatures, as standardized for instance by PAdES or CAdES. In other words, 1999/93/EC will not help you here, and I doubt technical measures alone will ensure that this kind of signature is accepted in court.

and

advanced electronic signatures which provide legal equivalence with a handwritten signature require the usage of a qualified certificate (1999/93/EC article 5.1): tablet-based solutions obviously do not belong to this category.

Which implies that the existing regulations don’t cater fully for this use case, and this is borne out by the accepted answer by D.W.

If there is no previous case law on topic, then I would expect this to come down to an assessment of credibility, based upon the testimony of the people involved, possibly testimony from expert witnesses, and the rest of the circumstances surrounding the court case. Keep in mind that the legal process doesn’t treat signatures as absolute ironclad guarantees; it considers them as part of the totality of the evidence.

D.W.’s answer discusses the problem of law here but sums up with a very down-to-earth conclusion:

…for a low-value transaction, you probably don’t need any crypto. If you’ve captured a signature that appears to match how Alice signs her name, that’s probably going to be good enough for a low-value transaction. Conversely, for a high-value transaction, I’m skeptical about whether these devices are going to be persuasive in court, no matter how much fancy crypto you’ve thrown into them.

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #51 Would it be good secure programming practice to overwrite a “sensitive” variable before deleting it?

2014-12-12 by Xander. 0 comments

Jonathan recently asked this question about secure development practices: specifically, whether it makes a difference to your application’s security if you overwrite the values of sensitive variables as soon as you’re through with them. The rationale is that if you don’t clear the variable values, there is a wider window of opportunity for a malicious party to find and use the sensitive data by reading it out of RAM.

Gilles’ answer confirms that yes, this is important, and explains why. There are a number of reasons, and while an attacker reading the values out of RAM is a consideration, it isn’t even one of the more important ones.

Yes, it is good practice security-wise to overwrite data that is particularly sensitive when the data is no longer necessary, i.e. as part of an object destructor (either an explicit destructor provided by the language or an action that the program takes before deallocating the object). It is even good practice to overwrite data that isn’t in itself sensitive, for example to zero out pointer fields in a data structure that goes out of use, and also to zero out pointers when the object they point to is freed, even if you know you aren’t going to use that field anymore.

One reason to do this is in case the data leaks through external factors such as an exposed core dump, a stolen hibernation image, a compromised server allowing a memory dump of running processes, etc. Physical attacks where an attacker extracts the RAM sticks and makes use of data remanence are rarely a concern except on laptop computers and perhaps mobile devices such as phones (where the bar is higher because the RAM is soldered), and even then mostly in targeted scenarios only. Remanence of overwritten values is not a concern: it would take very expensive hardware to probe inside a RAM chip to detect any lingering microscopic voltage difference that might be influenced by an overwritten value. If you’re worried about physical attacks on the RAM, a bigger concern would be to ensure that the data is overwritten in RAM and not just in the CPU cache. But, again, that’s usually a very minor concern.

The most important reason to overwrite stale data is as a defense against program bugs that cause uninitialized memory to be used, such as the infamous Heartbleed. This goes beyond sensitive data, because the risk is not limited to a leak of the data: if there is a software bug that causes a pointer field to be dereferenced without having been initialized, the bug is both less prone to exploitation and easier to trace if the field contains all-bits-zero than if it potentially points to a valid but meaningless memory location.

The accepted answer by makerofthings7 agrees that not only is this important, but also lays out some conditions that need to be considered.

Yes, that is a good idea: overwrite, then delete/release the value. Do not assume that all you have to do is “overwrite the data” or let it fall out of scope for the GC to handle, because each language interacts with the hardware differently. When securing a variable you might need to think about:
  • encryption (in case of memory dumps or page caching)
  • pinning in memory
  • ability to mark as read-only (to prevent any further modifications)
  • safe construction by NOT allowing a constant string to be passed in
  • optimizing compilers (see note in linked article re: ZeroMemory macro)

Andy Dent’s answer has some advice for achieving this in C and C++:

  • Use volatile
  • Use pragmas to surround the code using that variable and disable optimisations.
  • If possible, only assemble the secret in intermediate values rather than any named variables, so it only exists during calculations.

Lawtenfogle points out that you need to be wary of constructs like .NET’s immutable strings, and re-iterates the potential problem with optimizing compilers.

Storing a value that isn’t used again? Seems like something that would be optimized out, regardless of any benefit it might provide. Also, you may not actually overwrite the data in memory, depending upon how the language itself works. For example, in a language using a garbage collector, it wouldn’t be removed immediately (and this assumes you didn’t leave any other references hanging around). For example, in C#, I think the following doesn’t work:
string secret = "my secret data";

...lots of work...

string secret = "blahblahblah";
"my secret data" hangs around until garbage collected because it is immutable. That last line is actually creating a new string and having secret point to it. It does not speed up how fast the actual secret data is removed.

Several answers and comments also pointed out that when using a non-managed language like C or C++, when you can, you should also pin the memory in order to prevent it from being swapped to disk where sensitive values might remain indefinitely.

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #50: Does password protecting the BIOS help in securing sensitive data?

2014-02-28 by roryalsop. 0 comments

Camil Staps asked this question back in April 2013, as although it is generally accepted that using a BIOS password is good practice, he couldn’t see what protection this would provide, given, in his words, “…there aren’t really interesting settings in the BIOS, are there?”

While Camil reckoned that the BIOS only holds things like date and time, and enabling drives etc., the answers and comments point out some of the risks, both in relying on a BIOS password and in thinking the BIOS is not important!

The accepted answer from Iszi covers off why the BIOS should be protected:

…[this allows] an attacker to bypass access restrictions you have in place on any non-encrypted data on your drives. With this, they can:

  • Read any data stored unencrypted on the drive.
  • Run cracking tools against local user credentials, or download the authenticator data for offline cracking.
  • Edit the Registry or password files to gain access within the native OS.
  • Upload malicious files and configure the system to run them on next boot-up.

And what you should do as a matter of course:

That said, a lot of the recommendations in my post here (and other answers in that, and linked, threads) are still worth considering.

  • Encrypt the hard drive
  • Make sure the computer is physically secure (e.g.: locked room/cabinet/chassis)
  • Use strong passwords for encryption & BIOS

Password-protecting the BIOS is not entirely an effort in futility.

Thomas Pornin covers off a possible reason for changing the date:

…by making the machine believe it is in the far past, the attacker may trigger some other behaviour which could impact security. For instance, if the OS-level logon uses smart cards with certificates, then the OS will verify that the certificate has not been revoked. If the attacker got to steal a smart card with its PIN code, but the theft was discovered and the certificate was revoked, then the attacker may want to alter the date so that the machine believes that the certificate is not yet revoked.

But all the answers agree that all a BIOS password does on its own is protect the BIOS settings – all of which can be bypassed by an attacker who has physical access to the machine, so as per Piskvor’s answer:

you need to set up some sort of disk encryption (so that the data is only accessible when your system is running)

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #49: How can someone go off-web, and anonymise themselves after a life online?

2014-01-27 by roryalsop. 2 comments

Everything we do these days is online, whether through our own social media, purchases from online stores, or tracking by Google, Amazon and others. With the leaking of NSA snooping documents and other privacy concerns, the concept of regaining some sort of freedom is gaining traction in the media, so before Christmas I asked the deceptively simple question:

How can someone go off-web, and anonymise themselves after a life online?

Which ended up being the most popular question I have ever asked on Stack Exchange, by a good margin.

Lucas Kauffman supplied the top rated answer – which goes into some detail on heuristics and data mining, which is likely to be the biggest problem for anyone trying to do this successfully:

Avoiding heuristics means changing everything you do completely. Stop using the same apps and accounts, go live somewhere else, and do not buy the same food from the same brands. The problem here is that this might also pop up as a special pattern, because it is so atypical. Changing your identity is the first step. The second one is not being discovered: the internet doesn’t forget. This means that photos of you will remain online, messages you posted, maybe even IDs you shared will remain on the net. So even when you change your behavior, it only takes one picture to expose you.

The Little Bear provided a short but insightful message, with disturbing undertones:

You cannot enforce forgetfulness. The Web is like a big memory, and you cannot force it to forget everything about you (*). The only way, thus, is to change your identity so that everything the Web knows about you becomes stale. From a cryptographic point of view, this is the same case as with a secret value shared by members of a group: to evict a group member, you have to change the secret value. (*) Except by applying sufficiently excessive force. A global thermonuclear war, with all the involved EMP, might do the trick, albeit with some side effects.

Question3CPO looks again at statistics on your financial footprint, with a focus on how to muddy the waters:

When it comes to finances, it’s similar; I have to make an assumption that the data I receive are an accurate indicator of who you are. Suppose you make 1/3 or more of your purchases completely away from your interest, for instance, you’re truly a Libertarian, but you decide to subscribe to a Socialist magazine. How accurate are my data then? Also, you may change in ten years, so how accurate will my data be then, unless I account for it (and how effective then is it to have all the historic data)?

and Ajoy follows up with some more pointers on poisoning data stores:

  1. Make a list of all websites where you have accounts or which are linked to you in some way.
  2. One by one, remove your personal details, friends, etc. Add misinformation – new obscure data, new friends, new interests, anything else you can think of. De-link your related accounts, re-link them to other fake ones.
  3. Let the poisoned information stay for some time. Meanwhile, you could additionally change these details again. Poisoning the poisoned! Ensure that there is no visible pattern or link between any of the poisoned accounts.
  4. Then you could delete all of them, again very slowly.

There are quite a few other insightful answers, and the question attracted a couple of very interesting comments, including my favourite:

At the point you have succeeded you will also be someone else. –  stackunderflow

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #48: Difference between Privilege and Permission

2013-09-06 by roryalsop. 0 comments

Ali Ahmad asked, “What is the difference between Privilege and Permission?”

In many cases they seem to be used interchangeably, but in an IT environment knowing the meanings can have an impact on how you configure your systems.

D3C4FF’s top-scoring answer references this question on English Stack Exchange, which gives the literal meanings:

A permission is a property of an object, such as a file. It says which agents are permitted to use the object, and what they are permitted to do (read it, modify it, etc.). A privilege is a property of an agent, such as a user. It lets the agent do things that are not ordinarily allowed. For example, there are privileges which allow an agent to access an object that it does not have permission to access, and privileges which allow an agent to perform maintenance functions such as restart the computer.

AJ Henderson supports this view:

…a user might be granted a privilege that corresponds to the permission being demanded, but that would really be semantics of some systems and isn’t always the case.

As does Gilles with this comment:

That distinction is common in the unix world, where we tend to say that a process has privileges (what they can or cannot do) and files have permissions (what can or cannot be done to them)
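
As a toy sketch of that distinction (our illustration; all names, including the capability-style “override_dac” privilege, are invented), permissions hang off the object while privileges hang off the agent:

# Toy sketch: a permission is a property of the object,
# a privilege is a property of the agent.
class File:
    def __init__(self, permissions):
        self.permissions = permissions          # e.g. {"alice": {"read"}}

class User:
    def __init__(self, name, privileges):
        self.name = name
        self.privileges = privileges            # e.g. {"override_dac"}

def can_read(user, f):
    # Either the object permits this agent, or the agent holds a
    # privilege that bypasses the object's permission check entirely.
    return ("read" in f.permissions.get(user.name, set())
            or "override_dac" in user.privileges)

report = File({"alice": {"read"}})
print(can_read(User("alice", set()), report))            # True: permission
print(can_read(User("root", {"override_dac"}), report))  # True: privilege
print(can_read(User("bob", set()), report))              # False: neither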

Callum Wilson offers the more specific case under full Role Based Access Control (RBAC):

the permission is the ER link between the role, function and application, i.e. permissions are given to roles; the privilege is the ER link between an individual and the application, i.e. privileges are given to people.

And a further slight twist from KeithS:

A permission is asked for; a privilege is granted. When you think about the conversational use of the two words, the “proactive” use of a permission (the first action typically taken by any subject in a sentence within a context) is to ask for it; the “reactive” use (the second action taken in response to the first) is to grant it. By contrast, the proactive use of a privilege is to grant it, while the reactive use is to exercise it.

Permissions are situation-based; privileges are time-based. Again, the connotation of the two terms is that permission is something that must be requested and granted each and every time you perform the action, and whether it is granted can then be based on the circumstances of the situation. A privilege, however, is in regard to some specific action that the subject knows they are allowed to do, because they have been informed once that they may do this thing; the subject then proceeds to do so without specifically asking, until they are told they can no longer do this thing.

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #47: Lessons learned and misconceptions regarding encryption and cryptology

2013-06-10 by roryalsop. 0 comments

This one is a slightly different Question of the Week. Makerofthings7 asked a list-type question, which generally doesn’t fit on the Stack Exchange network; however, this question generated a lot of interest and some excellent answers containing a lot of useful information, so it is probably worthwhile posting an excerpt of that content here. If you are a budding cryptographer, or a developer asked to implement a crypto function, read these guidelines first!

D.W., one of our resident high-rep cryptographers, provided a number of the highest scoring answers.

Don’t roll your own crypto.

Don’t invent your own encryption algorithm or protocol; that is extremely error-prone. As Bruce Schneier likes to say,

“Anyone can invent an encryption algorithm they themselves can’t break; it’s much harder to invent one that no one else can break”.

Crypto algorithms are very intricate and need intensive vetting to be sure they are secure; if you invent your own, you won’t get that, and it’s very easy to end up with something insecure without realizing it.

Instead, use a standard cryptographic algorithm and protocol. Odds are that someone else has encountered your problem before and designed an appropriate algorithm for that purpose.

Your best case is to use a high-level well-vetted scheme: for communication security, use TLS (or SSL); for data at rest, use GPG (or PGP). If you can’t do that, use a high-level crypto library, like cryptlib, GPGME, Keyczar, or NaCl, instead of a low-level one, like OpenSSL, CryptoAPI, JCE, etc. Thanks to Nate Lawson for this suggestion.

Don’t use encryption without message authentication

It is a very common error to encrypt data without also authenticating it.

Example: The developer wants to keep a message secret, so encrypts the message with AES-CBC mode. The error: This is not sufficient for security in the presence of active attacks, replay attacks, reaction attacks, etc. There are known attacks on encryption without message authentication, and the attacks can be quite serious. The fix is to add message authentication.

This mistake has led to serious vulnerabilities in deployed systems that used encryption without authentication, including ASP.NET, XML encryption, Amazon EC2, JavaServer Faces, Ruby on Rails, OWASP ESAPI, IPsec, WEP, ASP.NET again, and SSH2. You don’t want to be the next one on this list.

To avoid these problems, you need to use message authentication every time you apply encryption. You have two choices for how to do that:

Probably the simplest solution is to use an encryption scheme that provides authenticated encryption, e.g., GCM, CWC, EAX, CCM, or OCB. The authenticated encryption scheme handles this for you, so you don’t have to think about it.
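
For instance, a minimal sketch with AESGCM from the Python cryptography library (our illustration, not part of the original answer):

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)                            # fresh 96-bit nonce per message
ct = aead.encrypt(nonce, b"attack at dawn", None) # ciphertext with built-in tag
pt = aead.decrypt(nonce, ct, None)                # raises InvalidTag if tampered with
assert pt == b"attack at dawn"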

Alternatively, you can apply your own message authentication, as follows. First, encrypt the message using an appropriate symmetric-key encryption scheme (e.g., AES-CBC). Then, take the entire ciphertext (including any IVs, nonces, or other values needed for decryption), apply a message authentication code (e.g., AES-CMAC, SHA1-HMAC, SHA256-HMAC), and append the resulting MAC digest to the ciphertext before transmission. On the receiving side, check that the MAC digest is valid before decrypting. This is known as the encrypt-then-authenticate construction. This also works fine, but requires a little more care from you.
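
And a sketch of that encrypt-then-authenticate recipe, again with the Python cryptography library (our illustration; the 16-byte IV prefix and 32-byte HMAC-SHA256 tag are layout choices of this sketch, and the two keys must be independent):

import hmac, hashlib, os
from cryptography.hazmat.primitives import padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_then_mac(enc_key, mac_key, plaintext):
    # Pad to the AES block size and encrypt under AES-CBC with a fresh IV...
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(plaintext) + padder.finalize()
    enc = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    ciphertext = iv + enc.update(padded) + enc.finalize()
    # ...then MAC the entire ciphertext, IV included
    tag = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def mac_then_decrypt(enc_key, mac_key, blob):
    ciphertext, tag = blob[:-32], blob[-32:]
    expected = hmac.new(mac_key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):    # constant-time comparison
        raise ValueError("MAC check failed; refusing to decrypt")
    iv, body = ciphertext[:16], ciphertext[16:]
    dec = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).decryptor()
    padded = dec.update(body) + dec.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()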

Be careful when concatenating multiple strings, before hashing.

An error I sometimes see: People want a hash of the strings S and T. They concatenate them to get a single string S||T, then hash it to get H(S||T). This is flawed.

The problem: Concatenation leaves the boundary between the two strings ambiguous. Example: builtin||securely = built||insecurely. Put another way, the hash H(S||T) does not uniquely identify the strings S and T. Therefore, the attacker may be able to change the boundary between the two strings without changing the hash. For instance, if Alice wanted to send the two strings builtin and securely, the attacker could change them to the two strings built and insecurely without invalidating the hash.

Similar problems apply when applying a digital signature or message authentication code to a concatenation of strings.

The fix: rather than plain concatenation, use some encoding that is unambiguously decodeable. For instance, instead of computing H(S||T), you could compute H(length(S)||S||T), where length(S) is a 32-bit value denoting the length of S in bytes. Or, another possibility is to use H(H(S)||H(T)), or even H(H(S)||T).

For a real-world example of this flaw, see this flaw in Amazon Web Services or this flaw in Flickr [pdf].
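
Both the ambiguity and the length-prefix fix are easy to check with Python’s hashlib (our sketch):

import hashlib, struct

def h_cat(s, t):
    return hashlib.sha256(s + t).digest()        # ambiguous boundary

def h_len(s, t):
    # Prefix S with its 32-bit length so the boundary is unambiguous
    return hashlib.sha256(struct.pack(">I", len(s)) + s + t).digest()

assert h_cat(b"builtin", b"securely") == h_cat(b"built", b"insecurely")
assert h_len(b"builtin", b"securely") != h_len(b"built", b"insecurely")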

Make sure you seed random number generators with enough entropy.

Make sure you use crypto-strength pseudorandom number generators for things like generating keys, choosing IVs/nonces, etc. Don’t use rand(), random(), drand48(), etc.

Make sure you seed the pseudorandom number generator with enough entropy. Don’t seed it with the time of day; that’s guessable.

Examples: srand(time(NULL)) is very bad. A good way to seed your PRNG is to grab 128 bits of true-random numbers, e.g., from /dev/urandom, CryptGenRandom, or similar. In Java, use SecureRandom, not Random. In .NET, use System.Security.Cryptography.RandomNumberGenerator, not System.Random. In Python, use random.SystemRandom, not random. Thanks to Nate Lawson for some examples.
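
In Python, for example, the contrast looks like this (our sketch):

import secrets, random

key = secrets.token_bytes(16)        # 128 bits straight from the OS CSPRNG
iv  = secrets.token_bytes(16)

sysrand = random.SystemRandom()      # also backed by the OS CSPRNG
token = sysrand.randrange(2**64)

# Don't do this for keys: the seed is the time of day, hence guessable:
#   random.seed(time.time()); weak = random.getrandbits(128)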

Real-world example: see this flaw in early versions of Netscape’s browser, which allowed an attacker to break SSL.

Don’t reuse nonces or IVs

Many modes of operation require an IV (Initialization Vector). You must never re-use the same IV value twice; doing so can cancel all the security guarantees and cause a catastrophic breach of security.

For stream cipher modes of operation, like CTR mode or OFB mode, re-using an IV is a security disaster. It can make the encrypted messages trivially recoverable. For other modes of operation, like CBC mode, re-using an IV can also facilitate plaintext-recovery attacks in some cases.

No matter what mode of operation you use, you shouldn’t reuse the IV. If you’re wondering how to do it right, the NIST specification provides detailed documentation of how to use block cipher modes of operation properly.
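
As a minimal illustration (our sketch, using the Python cryptography library): generate a fresh random IV for every message and send it with the ciphertext, since the IV must be unique but need not be secret.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_ctr(key, plaintext):
    nonce = os.urandom(16)             # fresh random value for every message
    enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    # The nonce is not secret; prepend it so the receiver can decrypt
    return nonce + enc.update(plaintext) + enc.finalize()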

Don’t use a block cipher with ECB for symmetric encryption

(Applies to AES, 3DES, … )

Equivalently, don’t rely on library default settings to be secure. Specifically, many libraries which implement AES implement the raw algorithm described in FIPS 197, in so-called ECB (Electronic Code Book) mode, which is essentially the straightforward mapping:

AES(plaintext [32]byte, key [32]byte) -> ciphertext [32]byte

This is very insecure. The reasoning is simple: while the number of possible keys in the keyspace is quite large, the weak link here is the amount of entropy in the message. As always, xkcd.com describes it better than I can: http://xkcd.com/257/

It’s very important to use something like CBC (Cipher Block Chaining) which basically makes ciphertext[i] a mapping:

ciphertext[i] = SomeFunction(ciphertext[i-1], message[i], key)

Just to point out a few language libraries where this sort of mistake is easy to make: http://golang.org/pkg/crypto/aes/ provides an AES implementation which, if used naively, would result in ECB mode.

The pycrypto library defaults to ECB mode when creating a new AES object.

OpenSSL does this right: every AES call is explicit about the mode of operation. Really, the safest thing IMO is to just try not to do low-level crypto like this yourself. If you’re forced to, proceed as if you’re walking on broken glass (carefully), and try to make sure your users are justified in placing their trust in you to safeguard their data.
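
The weakness is easy to demonstrate with the Python cryptography library (our sketch): under ECB, equal plaintext blocks produce equal ciphertext blocks, so patterns in the message show straight through.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()

pt = b"SIXTEEN BYTE BLK" * 2     # two identical 16-byte plaintext blocks
ct = enc.update(pt) + enc.finalize()
assert ct[:16] == ct[16:]        # ECB leaks the repetition to any observer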

Don’t use the same key for both encryption and authentication. Don’t use the same key for both encryption and signing.

A key should not be reused for multiple purposes; that may open up various subtle attacks.

For instance, if you have an RSA private/public key pair, you should not both use it for encryption (encrypt with the public key, decrypt with the private key) and for signing (sign with the private key, verify with the public key): pick a single purpose and use it for just that one purpose. If you need both abilities, generate two keypairs, one for signing and one for encryption/decryption.

Similarly, with symmetric cryptography, you should use one key for encryption and a separate independent key for message authentication. Don’t re-use the same key for both purposes.
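
One convenient way to get independent keys from a single master secret is a key-derivation function with distinct labels; a sketch using HKDF from the Python cryptography library (our illustration, not part of the original answer):

import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

master = os.urandom(32)

def derive(purpose):
    # Distinct 'info' labels yield independent keys from the same master secret
    return HKDF(algorithm=hashes.SHA256(), length=32,
                salt=None, info=purpose).derive(master)

enc_key = derive(b"encryption")
mac_key = derive(b"authentication")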

Kerckhoffs’s principle: A cryptosystem should be secure even if everything about the system, except the key, is public knowledge

A wrong example: LANMAN hashes

The LANMAN hashes would be hard to figure out if no one knew the algorithm; however, once the algorithm was known, they became very trivial to crack.

The algorithm is as follows (from Wikipedia):

  1. The user’s ASCII password is converted to uppercase.
  2. This password is null-padded to 14 bytes.
  3. The “fixed-length” password is split into two seven-byte halves.
  4. These values are used to create two DES keys, one from each 7-byte half.
  5. Each of the two keys is used to DES-encrypt the constant ASCII string “KGS!@#$%”, resulting in two 8-byte ciphertext values.
  6. These two ciphertext values are concatenated to form a 16-byte value, which is the LM hash.

Because you now know these facts, you can very easily break the ciphertext into two separate ciphertexts, each of which you know was derived from an uppercase-only password half, resulting in a very limited set of characters the password could possibly be.

A correct example: AES encryption

  • Known algorithm.
  • Scales with technology: increase the key size when in need of more cryptographic oomph.

Try to avoid using passwords as encryption keys.

A common weakness in many systems is to use a password or passphrase, or a hash of a password or passphrase, as the encryption/decryption key. The problem is that this tends to be highly susceptible to offline keysearch attacks. Most users choose passwords that do not have sufficient entropy to resist such attacks.

The best fix is to use a truly random encryption/decryption key, not one deterministically generated from a password/passphrase.

However, if you must use one based upon a password/passphrase, use an appropriate scheme to slow down exhaustive keysearch. I recommend PBKDF2, which uses iterative hashing (along the lines of H(H(H(….H(password)…)))) to slow down dictionary search. Arrange to use sufficiently many iterations to cause this process to take, say, 100ms on the user’s machine to generate the key.
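
A sketch with Python’s standard library (our example; the passphrase and iteration count are illustrative, and the count should be tuned until derivation takes about 100ms on the target machine):

import hashlib, os

salt = os.urandom(16)                  # random per-user salt, stored with the hash
key = hashlib.pbkdf2_hmac("sha256",
                          b"correct horse battery staple",  # the passphrase
                          salt,
                          600_000,     # illustrative count; tune for ~100ms
                          dklen=32)    # 256-bit derived key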

In a cryptographic protocol: Make every authenticated message recognisable: no two messages should look the same

A generalisation/variant of:

Be careful when concatenating multiple strings, before hashing. Don’t reuse keys. Don’t reuse nonces.

During a run of a cryptographic protocol, many messages that cannot be counterfeited without a secret (key or nonce) can be exchanged. These messages can be verified by the receiver because he knows some public (signature) key, or because only he and the sender know some symmetric key or nonce. This makes sure that these messages have not been modified.

But this does not make sure that these messages have been emitted during the same run of the protocol: an adversary might have captured these messages previously, or during a concurrent run of the protocol. An adversary may start many concurrent runs of a cryptographic protocol to capture valid messages and reuse them unmodified.

By cleverly replaying messages, it might be possible to attack a protocol without compromising any primary key, without attacking any RNG, any cipher, etc.

By making every authenticated message of the protocol obviously distinct for the receiver, opportunities to replay unmodified messages are reduced (not eliminated).

Don’t use the same key in both directions.

In network communications, a common mistake is to use the same key for communication in the A->B direction as for the B->A direction. This is a bad idea, because it often enables replay attacks that replay something A sent to B, back to A.

The safest approach is to negotiate two independent keys, one for each direction. Alternatively, you can negotiate a single key K, then use K1 = AES(K,00..0) for one direction and K2 = AES(K,11..1) for the other direction.
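
That last construction amounts to two raw block-cipher calls; here is a sketch with the Python cryptography library (our illustration of the construction quoted above; ECB here is just the bare block cipher applied to fixed inputs):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def direction_keys(k):
    # Encrypt two distinct constant blocks under K to get two independent subkeys
    enc = Cipher(algorithms.AES(k), modes.ECB()).encryptor()
    k1 = enc.update(b"\x00" * 16)      # K1 = AES(K, 00..0) for the A->B direction
    k2 = enc.update(b"\xff" * 16)      # K2 = AES(K, 11..1) for the B->A direction
    enc.finalize()
    return k1, k2

k1, k2 = direction_keys(os.urandom(16))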

Don’t use insecure key lengths.

Ensure you use algorithms with a sufficiently long key.

For symmetric-key cryptography, I’d recommend at least an 80-bit key, and if possible, a 128-bit key is a good idea. Don’t use 40-bit crypto; it is insecure and easily broken by amateurs, simply by exhaustively trying every possible key. Don’t use 56-bit DES; it is not trivial to break, but breaking DES is within the reach of dedicated attackers. A 128-bit algorithm, like AES, is not appreciably slower than 40-bit crypto, so you have no excuse for using crummy crypto.

For public-key cryptography, key length recommendations are dependent upon the algorithm and the level of security required. Also, increasing the key size does harm performance, so massive overkill is not economical; thus, this requires a little more thought than selection of symmetric-key key sizes. For RSA, El Gamal, or Diffie-Hellman, I’d recommend that the key be at least 1024 bits, as an absolute minimum; however, 1024-bit keys are on the edge of what might become crackable in the near term and are generally not recommended for modern use, so if at all possible, I would recommend 1536- or even 2048-bit keys. For elliptic-curve cryptography, 160-bit keys appear adequate, and 224-bit keys are better. You can also refer to published guidelines establishing rough equivalences between symmetric- and public-key key sizes.

Don’t re-use the same key on many devices.

The more widely you share a cryptographic key, the less likely you’ll be able to keep it secret. Some deployed systems have re-used the same symmetric key on every device in the system. The problem with this is that sooner or later, someone will extract the key from a single device, and then they’ll be able to attack all the other devices. So, don’t do that.

See also “Symmetric Encryption Don’t #6: Don’t share a single key across many devices” in this blog article. Credits to Matthew Green.

A one-time pad is not a one-time pad if the key is stretched by an algorithm

The identifier “one-time pad” (also known as a Vernam cipher) is frequently misapplied to various cryptographic solutions in an attempt to claim unbreakable security. But by definition, a Vernam cipher is secure if and only if all three of these conditions are met:

  • The key material is truly unpredictable; AND
  • The key material is the same length as the plaintext; AND
  • The key material is never reused.

Any violation of those conditions means it is no longer a one-time pad cipher.

The common mistake is that a short key is stretched with an algorithm. This violates the unpredictability rule (never mind the key-length rule). Once this is done, the one-time pad is mathematically transformed into the key-stretching algorithm. Combining the short key with random bytes only alters the search space needed to brute-force the key-stretching algorithm. Similarly, using “randomly generated” bytes turns the random number generator algorithm into the security algorithm.

You may have a very good key-stretching algorithm. You may also have a very secure random number generator. However, your algorithm is by definition not a one-time pad, and thus does not have the unbreakable property of a one-time pad.
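
For reference, a cipher that actually meets all three conditions takes only a few lines (our sketch): the key comes straight from a CSPRNG, is exactly as long as the plaintext, and must be discarded after a single use.

import secrets

def otp_encrypt(plaintext):
    key = secrets.token_bytes(len(plaintext))   # unpredictable, same length
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext                      # the key must never be reused

key, ct = otp_encrypt(b"attack at dawn")
pt = bytes(c ^ k for c, k in zip(ct, key))      # decryption is the same XOR
assert pt == b"attack at dawn"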

Don’t use an OTP or stream cipher in disk encryption

Example 1

Suppose a file is saved using a stream cipher / OTP, and then re-saved after a minor edit. An attacker comparing the two versions can see that only certain bits were changed and infer information about the document. (Imagine changing the salutation “Dear Bob” to “Dear Alice”.)

Example 2

There is no integrity protection in the output: an attacker can modify the contents of the data by simply XORing chosen bits into the ciphertext.
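
A few lines of Python make this concrete (our toy sketch; the random keystream stands in for any stream cipher or pad):

import secrets

pt = b"PAY $0100 TO BOB"
ks = secrets.token_bytes(len(pt))                      # stand-in keystream
ct = bytes(p ^ k for p, k in zip(pt, ks))

# The attacker XORs the difference between "0100" and "9999" into bytes 5..8,
# without knowing the key or the keystream:
diff = bytes(a ^ b for a, b in zip(b"0100", b"9999"))
ct2 = ct[:5] + bytes(c ^ d for c, d in zip(ct[5:9], diff)) + ct[9:]

pt2 = bytes(c ^ k for c, k in zip(ct2, ks))
assert pt2 == b"PAY $9999 TO BOB"                      # silent, targeted tampering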

Takeaway: modifications to the ciphertext are undetected and have a predictable impact on the plaintext.

Solution

Use a block cipher in a mode that includes message integrity checks for these situations.

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #46: CTRL+ALT+DEL Login – Rationale behind it?

2013-05-10 by roryalsop. 1 comment

CountZero asked this interesting question: Why is CTRL+ALT+DEL required at login on Windows systems?

His perspective was that it adds an extra step before login, which is bad from a usability perspective, so there must be a reason for it.

This got a lot of attention, but looking at the top answers:

Adnan’s answer briefly describes the Secure Attention Key: the Windows kernel will notify only the Winlogon process about this key combination, which prevents it being hijacked by an application, malware or some other process. In this way, when you press Ctrl+Alt+Del, you can be sure that you’re typing your password into the real login form and not into some fake process trying to steal your password, for example an application which looks exactly like the Windows login screen. An equivalent of this in Linux is Ctrl+Alt+Pause.

Polynomial’s comment on the answer further expands on the history of this notification:

As a side note: when you say it’s “wired”, what that actually means is that Ctrl+Alt+Del is mapped to a hardware-defined interrupt (set in the APIC, a physical chip on your motherboard). The interrupt was, historically, triggered by the BIOS’ keyboard handler routine, but these days it’s less clear-cut. The interrupt is mapped to an ISR which is executed at ring 0, which triggers the OS’s internal handler for the event. When no ISR for the interrupt is set, it (usually) causes an ACPI power-cycle event, also known as a hard reboot.

Thomas Pornin describes an attack which would work if the Secure Attention Key didn’t exist:

You could make an application which goes full-screen, grabs the keyboard, and displays something which looks like the normal login screen, down to the last pixel. You then log on the machine, launch the application, and go away until some unsuspecting victim finds the machine, tries to log on, and gives his username and password to your application. Your application then just has to simulate a blue screen of death, or maybe to actually log the user on, to complete the illusion.

There is also an excellent answer over on Server Fault, which Terry Chia linked to in his answer:

The Windows (NT) kernel is designed to reserve the notification of this key combination to a single process: Winlogon. So, as long as the Windows installation itself is working as it should – no third party application can respond to this key combination (if it could, it could present a fake logon window and keylog your password 😉

So there you have it – as long as your OS hasn’t been hacked, CTRL+ALT+DEL protects you.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #42: Would publishing a network diagram make the network less secure?

2013-01-25 by roryalsop. 3 comments

I chose this week’s Question of the Week, saber tabatabaee yazdi’s “Would publishing a network diagram make the network less secure?”, because this is a point which seems to be often misunderstood.

Saber asked this question because he had come across various websites designed to let people share their network diagrams and designs so that others can comment on them and provide guidance, and he wondered what the risks of doing this would be.

As an example, one diagram from www.ratemynetworkdiagram.com provides IP addresses, host names and even descriptions.

AJ Henderson made the very valid comment that security through obscurity is not security, but admitted that any network will have some weaknesses, and that avoiding handing this information to a potential attacker is probably advisable.

My answer is taken from the experience of managing many hundreds of penetration tests. My take on it is:

having a map helps me target my attack, avoiding possible sensors, honeypots etc and aiming at high value targets or sources of information. This can speed up an attack immensely, reducing the defender’s chance of preventing it.

But the value from these sites is that you can have obvious mistakes pointed out to you – peer review can be a very valuable thing. So how can you do that safely?

To reduce risk, some steps you can take are:
  • remove addresses, function titles etc
  • only include sections of the network
  • post under an anonymous profile
  • include fake network sections

An attacker will still get information, but it hopefully won’t be enough to let them navigate your entire network.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #41: Why do we lock our computers?

2012-11-30 by roryalsop. 2 comments

Iszi chose this week’s question of the week, Tom Marthenal’s “Why do we lock our computers?” – as Tom puts it:

It’s common knowledge that if somebody has physical access to your machine they can do whatever they want with it, so what is the point?

This one attracted a lot of views, as it is a simple question of interest to everyone.

Both Bruce Ediger and Polynomial answered with the core reason – it removes the risk from the casual attacker while costing the user next to nothing! This is an essential factor in cost/usability tradeoffs for security. From Bruce:

The value of locking is somewhat larger than the price of locking it. Sort of like how in good neighborhoods, you don’t need to lock your front door. In most neighborhoods, you do lock your front door, but anyone with a hammer, a large rock or a brick could get in through the windows.

and from Polynomial:

An attacker with a short window of opportunity (e.g. whilst you’re out getting coffee) must be prevented at minimum cost to you as a user, in such a way that makes it non-trivial to bypass under tight time constraints.

Kaz pointed out another essential point, traceability:

If you don’t lock, it is easy for someone to poke around inside your session in such a way that you will not notice it when you return to your machine.

And zzzzBov added this in a comment:

…few bystanders would question someone walking up to a house and entering through the front door. The assumption is that the person entering it has a reason to. If a bystander watches someone break into a window, they’re much more likely to call the authorities. This is analogous with sitting down at a computer that’s unlocked, vs physically hacking into the system after crawling under a desk.

It removes a large percentage of possible attacks – those from your co-workers wanting to mess with your stuff – thanks, enedene.

So – protect yourself from co-workers, casual snooping and pilfering and other mischief by simply locking your machine every time you leave your desk!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.