Standards

QoTW #38: What is SHA-3 – and why did we change it?

2012-10-12 by ninefingers. 0 comments

Lucas Kauffman selected this week’s Question of the Week: What is SHA3 and why did we change it? 

No doubt if you are at least a little bit curious about security, you’ll have heard of AES, the advanced encryption standard. Way back in 1997, when winters were really hard, our modems froze as we used them and Windows 98 had yet to appear, NIST saw the need to replace the then mainstream Data Encryption Standard with something resistant to the advances in cryptography that had occurred since its inception. So, NIST announced a competition and invited interested parties to submit algorithms matching the desired specification – the AES Process was underway.

A number of algorithms with varying designs were submitted to the process, and three rounds were held, with comments, cryptanalysis and feedback submitted at each stage. Between rounds, designers could tweak their algorithms if needed to address minor concerns; clearly broken algorithms did not progress. This was somewhat of a first for the crypto community – after the export restrictions and the so-called “crypto wars” of the 90s, open cryptanalysis of published algorithms was novel, and it worked. We ended up with AES (and some of you may also have used Serpent, or Twofish) as a result.

Now, onto hashing. As far back as 1996, discussions were underway in the cryptographic community on the possibility of finding collisions in MD5. Practical collision attacks arrived in the mid-2000s and have since been used to create a rogue certificate authority; more recently, the FLAME malware used an MD5 collision to bypass Windows signature checks. Indeed, we covered this attack right here on the security blog.

The need for new hash functions has therefore been known for some time. SHA-1 was released to replace MD5, but, like its predecessor, cryptanalysis revealed that collisions could be found with less than a brute-force search. Since such weaknesses eventually yield practical exploits that undermine cryptographic systems, a hash standard is needed whose collision resistance holds up.

Since 2001 we have also had SHA-2, a family of functions that has so far survived cryptanalysis. However, SHA-2 is similar in design to its predecessor, SHA-1, so one might suspect that similar weaknesses could eventually be found.

So, in a similar vein to the AES process, NIST launched the SHA-3 competition in 2007 – in their words, in response to recent advances in the cryptanalysis of hash functions. Over the past few years, various algorithms have been analyzed and the field of candidates narrowed, much like a reality TV show (perhaps without the tears, though). The final-round algorithms essentially became the candidates for SHA-3.

The big event this year is that Keccak has been announced as the SHA-3 hash standard. Before we go much further, we should clarify some parts of the NIST process. The round an algorithm reaches determines how much cryptanalysis it has received – the longer a function stays in the competition, the more analysis it faces. The report on the round-two candidates does not suggest that any of them were broken; rather, NIST selected its final-round candidates based on a combination of performance factors and safety margins. Respected cryptographer Bruce Schneier even suggested that perhaps NIST should consider declaring several of the finalist functions suitable.

That’s the background, so I am sure you are wondering: how does this affect me? Well, here’s what you should take into consideration:

  • MD5 is broken. You should not use it; it has been used in practical exploits in the wild, if reports are to be believed – and even if they are not, there are alternatives.
  • SHA-1 is shown to be theoretically weaker than expected. It is possible it may become practical to exploit it. As such, it would be prudent to migrate to a better hash function.
  • In spite of concerns, the family of SHA-2 functions has thus far survived cryptanalysis. These are fine for current usage.
  • Keccak and selected other SHA-3 finalists will likely become available in mainstream cryptographic libraries soon. SHA-3 is approved by NIST, so it is fine for current usage (a quick command-line sketch follows below).
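
To make that concrete, here is a quick command-line sketch. It assumes a Unix-like system with coreutils and a reasonably recent OpenSSL build (SHA-3 support only appeared in OpenSSL 1.1.1, well after this post was written, and the parameters NIST finally standardized differ slightly from the original competition submission); file.tar.gz is just a stand-in for whatever you need to hash:

# SHA-256, from the SHA-2 family – fine for current usage
sha256sum file.tar.gz
# SHA-3-256, if your OpenSSL is new enough to know about it
openssl dgst -sha3-256 file.tar.gz
# MD5 still ships for legacy reasons – don't rely on it for anything security-related
md5sum file.tar.gz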

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #37: How does SSL work?

2012-10-05 by Thomas Pornin. 0 comments

In this week’s question, we will talk about SSL. This question was asked by @Polynomial, who noticed that our site did not yet have a generic question on how SSL works. There were already some questions on the concept of SSL, but nothing really detailed.

Three answers were given, one by @Luc, and two by myself (because I got really verbose and there is a size limit on answers). The three answers concentrated on distinct aspects of SSL; together, they can explain why SSL works: SSL appears to be decently secure and we can see how this is achieved.

My first answer is a painfully long description of the detailed protocol as it appears on the wire. I wrote it as an introduction to the intricacies of the protocol; the information it contains must be known if you want to understand the details of the cryptographic attacks which have been tried on SSL. This is not much more than RFC-reading, but I still made an effort to merge four RFCs (for the four protocol versions: SSLv3, TLS 1.0, 1.1 and 1.2) into one text which should be readable linearly. If you plan on implementing your own SSL client or server (a very instructive exercise, which I recommend for its pedagogical virtues), then I hope that this answer will be a useful reading guide for the actual standards.

What the protocol description shows is that at one point, during the initial steps of the SSL connection (the “handshake”), the server sends its “certificate” to the client (actually, a certificate chain), and then, a few steps later, the client appears to have gained some knowledge of the server’s public key, with which asymmetric cryptography is then performed. The SSL/TLS protocol handles these certificates as opaque blobs. What usually happens is that the client decodes the blobs as X.509 certificates and validates them with regard to a set of known trust anchors. The validation yields the server’s public key, with some guarantee that it really is the key owned by the intended server.
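
If you are curious what those blobs look like in practice, OpenSSL’s s_client will perform the handshake from the command line and show you the chain the server sends. A minimal sketch, assuming the openssl tool is installed and using example.com purely as a placeholder host:

# Perform a handshake and dump the certificate chain sent by the server
openssl s_client -connect example.com:443 -servername example.com -showcerts < /dev/null
# Decode the server certificate into something human-readable
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer -dates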

This certificate validation is the first foundation of SSL, as it is used for the Web (i.e. HTTPS). @Luc’s answer contains clear explanations on why certificates are used, and on what the guarantees rely on. Most enlightening is this excerpt:

You have to trust the CA not to make certificates as they please. When organizations like Microsoft, Apple and Mozilla trust a CA though, the CA must have audits; another organization checks on them periodically to make sure everything is still running according to the rules.

So the whole system relies on big companies checking on each other. Some of the trusted CAs are governmental (from various governments), but the most commonly used are private businesses (e.g. Thawte, Verisign…). An important point to make is that it suffices to subvert or corrupt one CA to get a fake certificate which will be trusted worldwide; so the system really is only as robust as the weakest of the hundred or so trusted CAs which browser vendors include by default. Nevertheless, it works quite well (attacks on CAs are rare).

Note that since the certificate parts are quite isolated in the protocol itself (the certificates are just opaque blobs), SSL/TLS can be used without certificate validation in setups where the client “just knows” the server’s public key. This happens a lot in closed environments, such as embedded systems which talk to a mother server. Also, there are a few certificate-less cipher suites, such as the ones which use SRP.

This brings us to the second foundation of SSL: its intricate usage of cryptographic algorithms. Asymmetric encryption (RSA) or key exchange (Diffie-Hellman, or an elliptic curve variant), symmetric encryption with stream or block ciphers, hashing, message authentication codes (HMAC)… the whole paraphernalia is there. Assembling all these primitives into a coherent and secure protocol is not easy at all, and the history of SSL is a rather lengthy sequence of attacks and fixes. My second answer gives details on some of them. What must be remembered is that SSL is state of the art: every attack which has ever been conceived has been tried on SSL, because it is a high-value target. SSL got a lot of exposure, and its survival is testimony to its strength. Sure, it was occasionally harmed, but it was always salvaged. It is rather telling that the recent crop of attacks from Duong and Rizzo (ASP.NET padding oracles, BEAST, CRIME) are actually old attacks which Duong and Rizzo applied; their merit is not in inventing them (they didn’t) but in showing how practical they can be in a Web context (and masterfully did they show it).

From all of this we can list the reasons which make SSL work:

  • The binding between the alleged public key and the intended server is addressed. Granted, it is done with X.509 certificates, which have been designed by the Adversary to drive implementors crazy; but at least the problem is dealt with upfront.
  • The encryption system includes checked integrity, with a decent primitive (HMAC). The encryption uses CBC mode for block ciphers and the MAC is applied in the MAC-then-encrypt order: both characteristics are suboptimal, and security with these choices requires special care in the specification (the need for random, unpredictable IVs – the basis of the BEAST attack, fixed in TLS 1.1) and in the implementation (information leaks through the padding, used in padding oracle attacks, fixed when Microsoft finally consented to notice the warning raised by Vaudenay in 2002).
  • All internal key expansion and checksum tasks are done with a specific function (called “the PRF”) which builds on standard primitives (HMAC with cryptographic hash functions).
  • The client and the server send random values, which are included in all PRF invocations, and protect against replay attacks.
  • The handshake ends with a couple of checksums, which are covered by the encryption+MAC layer, and the checksums are computed over all of the handshake messages (and this is important in defeating a lot of nasty things that an active attacker could otherwise do).
  • Algorithm agility. The cryptographic algorithms (cipher suites) and other features (protocol version, compression) are negotiated between the client and server, which allows for a smooth and gradual transition. This is how current browsers and servers can use AES encryption, which was defined in 2001, several years after SSLv3. It also facilitates recovery from attacks on some features, which can be deactivated on the client and/or the server (e.g. compression, which is used in CRIME). A quick way to see this negotiation in action is sketched below.
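
To see the result of that negotiation on a live connection, the same s_client trick works (again, example.com is only a placeholder):

# The session summary reports the negotiated protocol version and cipher suite
openssl s_client -connect example.com:443 -servername example.com < /dev/null 2>/dev/null | grep -E 'Protocol|Cipher'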

All these characteristics contribute to the strength of SSL.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #18: How can we destroy data on a hard drive?

2012-02-16 by roryalsop. 0 comments

Rather than focus on a specific question this week, we have 9 questions related to the destruction of data, 5 of which are specifically about destroying hard drives. In this modern age where everything is recorded, there are good reasons for ensuring data is deleted when required.

So this post will concentrate on destroying the drive itself. For the deletion of data from a storage device, have a look at our blog post for Question of the Week number 4: How can you reliably wipe data from a storage device?

Matthew Doucette asked simply: How do you destroy an old hard drive?

To which Scott Pack produced the following incredibly detailed answer:

When it comes to drive destruction you typically see one of two main fields:

  1. Disk Degaussing
  2. Physical Destruction

Degaussing

Degaussing used to be the norm, but I am not such a big fan. On the plus side it is fast, you’ll normally just dump the disks on a conveyor belt and watch them get fed through the device. The problem is auditability. Since the circuitry is rendered wobbly, you won’t be able to do a spot check of the drives and verify that the data is gone. It is possible, with some level of probability unknown to me, that data could still exist on the platters. Retrieving the data would, without question, be difficult, but the fact still remains that you cannot demonstrate the data is actually gone. As such, most companies now will actually be doing physical destruction.

Physical Destruction

At the low end, say a small box of drives at a time, you’ll have hard drive crushers. They’re often pneumatic presses that deform the platters beyond useful recognition. At the risk of supporting a specific product, I have personally used this product from eDR. It works well, and is very cathartic.

At a larger scale, say dozens or hundreds of disks, you’ll find large industrial shredders. They operate just like a paper shredder, but are designed to process much stiffer equipment. The mangled bits of metal that are left over are barely identifiable as hard drives.

At an even larger scale you can start looking at incinerators that will melt the drives down to unidentifiable lumps of slag. Since most electronics can produce some rather scary fumes and airborne particulates, I would not recommend doing this on your own. No, this is not a good use of your chiminea.

Manual Dis-assembly

If you are dealing with one or two drives at a time, then simple dis-assembly might be sufficient. Most drives these days are largely held together with torx screws, and will come apart with varying levels of difficulty. Simply remove the top cover, remove the platters from the central spindle. Taking a pocket knife, nail file, screwdriver, whatever, have fun scoring both surfaces of each platter. Then dispose of the materials appropriately. I cannot speak to how recoverable the data is afterwards, but it is probably sufficient. The biggest thing to keep in mind is that while most desktop hard drive platters are metal, some are glass. The glass ones shatter quite extravagantly.

Additional Considerations

Before you decide on a destruction method, make sure to identify what kind of data is stored on each device and treat it appropriately. There may be regulatory or legal requirements for information disposal depending on what data is stored on the disk. As an example, see section 8-306 of DoD 5220.22-M.

For hard drive destruction, DoD 5220.22-M section 8-306 recommends: “Disintegrate, incinerate, pulverize, shred, or melt”

All that being said, performing a single-pass zero wipe is probably sufficient for your purposes. Modern research indicates that current hard drives are largely immune to the “magnetic memory” problem we used to see on magnetic tape. I would never bother doing anything more on a household drive unless the drive itself was exhibiting failures.
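
For completeness – and this is my addition rather than Scott’s – here is what such a single-pass zero wipe looks like from a Linux shell. /dev/sdX is a placeholder for the drive you actually intend to erase; getting the device name wrong will wipe the wrong disk, so double-check it first.

# One pass of zeros over the whole device, flushed to disk before dd exits
dd if=/dev/zero of=/dev/sdX bs=1M conv=fsync
# Or with shred: zero passes of random data, then a final pass of zeros
shred -v -n 0 -z /dev/sdX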

Ryan M asked a very similar question – What is the best method of retiring hard drives?

And Scott also gave these 2 excellent points in his answer:

Electrical Scrambling

In the olden days when you had a room packed with tape there were few things better than a big honkin’ degausser for making sure that you knew what left the room. As hard drives supplanted tape, their use simply got transferred to the new medium. The biggest advantage to using a degausser to take care of hard drives is speed. Just pass a box through the unit, ignore the jiggling in your fillings, and walk away with clean drives. The downside is the lack of ability to audit data destruction. As discussed in the Wikipedia article, once a hard drive is degaussed, the drive is mechanically unusable. As such, one cannot spot-check the drive to ensure cleanliness. In theory the platters could be relocated to a new device and we cannot state, categorically, that the data will not be accessible.

Wanton Destruction

This is without question my favorite. Not only because we demonstrate, without question, that the data is gone, but the process is very cathartic. I have been known to take an hour or so, dip into the “To Be Destroyed” bin, and manually disassemble drives. For modern hard drives all you need is a torx set and time (possibly pliers). While one will stock up on their magnet collection, this method of destruction is very time consuming. Many companies have developed equipment specifically for hard drive destruction en-masse. These range from large industrial shredders to single unit crushers such as this beauty from eDR. I have personally used that particular crusher, and highly recommend it to any Information Security professional who has had a bit of a rough day.

I’m thinking if I ever need to destroy hard drives, I’ll either blow them up / give them to my kids / use them for target practice or ask Scott to have fun with them.

Dan Beale points out that exactly what approach you take depends on:

  • how sensitive is the information
  • how serious are the attackers
  • do you need to follow a protocol
  • do you need to persuade other people the data has gone

Auditability is essential if you are subject to regulations around data retention and destruction, which for most organisations means requirements such as the Data Protection Act 1998 (UK), GLB or HIPAA (US), among others.

QotW #16: When businesses don’t protect your data…

2012-01-27 by roryalsop. 0 comments

This week’s blog post was inspired by camokatu’s question on what to do when a utility company doesn’t hash passwords in their database.

It seems the utility company couldn’t understand the benefit of hashing the passwords. Wizzard0 listed some reasons why they might not want to implement protection – added complexity, implementation and test costs, changes in procedures and so on – and this is often the key battle. If a company doesn’t see this as a risk worth remediating, nothing will get done. To be fair, this is the way business risk should be managed; here, however, it appears that the company simply hasn’t understood the risks, or isn’t aware of them.

Obviously the consequences of this can range from minimal to disastrous, so most of the answers concentrate on consequences which could negatively impact the customer, the main one being where the database includes financial information such as accounts, banking details or credit card details.

The key point, raised by Iszi, is that if personally identifiable information is held, it must be protected in most jurisdictions (under data protection acts), and if credit card details are held, the Payment Card Industry Data Security Standard (PCI-DSS) requires them to be protected. (For further background information check out the answers to this question on industry best practices). These regulations tend to be enforced by fining companies, and the PCI can remove a company’s ability to accept credit card payments if it fails to meet PCI-DSS.

Does the company realise they can be fined or lose credit card payments? Maybe they do but have decided that is an acceptable risk, but I’d be tempted to say in this case that they just don’t appear to get it.

So when they don’t get it, don’t care, or won’t respond in a way that protects you, the customer, what are your next steps?

from tdammers – responsible disclosure:

Contact the company, offer to keep the vulnerability quiet for a limited amount of time, giving them an opportunity to fix it.

In the meantime, make sure you’re not using the compromised password anywhere else, make sure you don’t have any valuable information stored on their systems, and if you can afford to, cancel your account.

from userunknown

contact their marketing team and explain what a PR disaster it would be if the media learnt about it (no, I’m not suggesting blackmail…:-)

from drjimbob – 3 excellent suggestions:

Submit it to plaintext offenders?

Switch to another utility company?

Lobby your local politicians to pass legislation that companies that do not use secure hashes (e.g., bcrypt or at very least salted hash) on their password data are liable for identity theft damages from any compromise of their systems?

But in addition to those thoughts, which at best will still require time before the company does anything, follow this guidance, repeated in almost every question on password security and listed here by Iszi:

Use long and complex passwords for all websites & applications, and do not re-use passwords across any websites & applications. Additionally, limit the information you give these websites & applications to only that which is absolutely necessary for them to serve their purpose
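
On the server side, the fix drjimbob alludes to – a salted, deliberately slow hash such as bcrypt – is easy to produce with standard tooling. A minimal sketch, assuming a Unix-like system with Apache’s htpasswd utility and/or a recent OpenSSL (1.1.1 or later for the -6 option); both commands prompt for the password and print a salted hash:

# bcrypt hash with a random salt (-n prints to stdout instead of writing a file)
htpasswd -nB alice
# SHA-512-crypt hash with a random salt
openssl passwd -6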

QotW #13: Standards for server security, besides PCI-DSS?

2011-11-11 by roryalsop. 0 comments

The Question of the week this week was asked by nealmcb in response to the ever wider list of standards which apply in different industries. The Financial Services industry has a well defined set of standards including the Payment Card Industry Data Security Standard (PCI-DSS) which focuses specifically on credit card data and primary account numbers, but neal’s core question is this:

Are there standards and related server certifications that are more suitable for e.g. web sites that hold a variety of sensitive personal information that is not financial (e.g. social networking sites), or government or military sites, or sites that run private or public elections?

This question hasn’t inspired a large number of answers, which is surprising, as complying with security standards is becoming an ever more important part of running a business.

The answers which have been provided are useful, however, with links to standards provided by the following:

From Gabe:

Of these, the CIS standards are being used more and more in industry as they provide a simple baseline which should be altered to fit circumstances but is a relatively good starting point out of the box.

Jeff Ferland provided a longer list:

And as I tend to be pretty heavily involved with the ISF, I included a link to the Standard of Good Practice which is publicly available and is exactly what it sounds like: rational good practice in security.

From all these (and many more) it can be seen that there is a wide range of standards which all have a different focus on security – which supports this.josh’s comment:

As is often noted in questions and answers on this site, the solution depends on what you are protecting and who you are protecting it from. Even similar industries under different jurisdictions may need different protections. Thus I think it makes sense for specific industries and organizations to produce their own standards.

A quick look at questions tagged Compliance shows discussion on Data Protection Act, HIPAA, FDA, SEC guidelines, RBI and more.

If you are in charge of IT or Information Security, Audit or Risk, it is essential that you know which standards are appropriate to you, which ones are mandatory, which are optional, which may be required by a business partner etc., and to be honest it can be a bit of a minefield.

The good thing is – this is one of the areas where the Stack Exchange model works really well. If you ask the question “Is this setup PCI compliant?” there are enough practitioners, QSAs and experienced individuals on the site that an answer should be very straightforward. Of course, you would still need a QSA to accredit, but as a step towards understanding what you need to do, Security.StackExchange.com proves its worth.

QotW #10: Carrying Out an IT Audit

2011-09-23 by Jeff Ferland. 0 comments

This week, a newly-minted master’s degree holder asked on our site, “[How do I go about] Carrying out a professional IT audit procedure?” That’s a really big question in one sentence. In trying to break it down into parts we can address, let’s look at the perspectives of the people involved in an audit.

Staff at an audit client interact with auditors who ask a lot of questions from a script that is usually referred to as a work plan. Many times, they ask you to open the appropriate settings dialogs or configuration files and show them the configuration details they are interested in. Management at the audit client will discuss beforehand which systems are in scope, and the managers for the auditors will select or create appropriate work plans.

The quality, nature, and scope of work plans vary widely, and a single audit will often involve the use of many different plans. Work plans might be written by the audit company or available from regulatory bodies, standards organizations, or software vendors. One example of a readily-available plan is the PCI DSS 2.0 standard. That plan displays both high-level overviews and mid-level configuration requirements across a broad spectrum. Low-level details would be operating system or application specific. Many audit plans related to banking applications have granular detail about core system configuration settings. Also have a look at this question about audit standards for law firms for an example from a different industry showing similarities and differences.

While some plans are regulatory-compliance related, most are best-practices focused. All plans are written with best practices in mind, and for those who are new to the world of IT security, that’s the most challenging part about them. There is no universal standard; many plans greatly overlap, but still differ. If the auditor is appropriately considering their client’s needs, they’ll almost certainly end up marking parts of plans as not applicable, or as non-compliant yet acceptable because of mitigating circumstances.

Another challenging point for those new to auditing is the sometimes hard-to-grasp separation between objectives and controls. An objective might be to ensure that each user authenticates to their own account. Controls listed in a work plan might include discussion about password expiry requirements to help prevent shared account passwords (among other things). Don’t get crossed up focusing on the control if the objective is met through some other means – perhaps everybody is using biometric authentication instead. There are too many instances of this in the audit world, and it’s a common mistake among newer auditors.

From a good auditor’s perspective, meeting the goals of the work plan might be considered 60% of the goal of the work. An auditor’s job isn’t complete unless they’re looking at the whole organization. Some examples: to fulfill a question about password change requirements, the auditor should ask an administrator, see the configuration and ask users about it (“When was the last time the system made you change your password?”). To review a system setting, the auditor should ask to see settings in a general manner, adding detail only as needed: “Can you show me the net effect of policies on this user’s account?” as opposed to “Start->run->rsop.msc”. Users reporting different experiences with password resets than the system configuration shows, or a system administrator who fumbles their way around for everything, won’t ever be in the work plan, but they will be a concern.

With that background in mind, here are some general steps to performing an IT audit procedure:

  • Meet with management and determine their needs. You should understand many of the possible accepted risks before you begin the engagement. For example, high-speed traders may stand to lose more money by implementing a firewall than not.
  • Select appropriate audit plans based on available resources and your own relevant work.
  • Properly review the controls with the objectives they meet in mind. Use multiple references when possible, and always try to confirm any settings directly.
  • Pay attention to the big picture. Things should “feel right.”
  • Review your findings with management and consider their thoughts on them. Many times the apparent severity of something needs to be adjusted.
  • At the end of the day, sometimes a business unit may accept the risk from a weak control, despite it looking severe to you as an auditor. That is their prerogative, as long as you have correctly articulated the risks.

The last part of that, auditors reviewing findings with client management, takes the most finesse and unexplainable skill. Does your finding really matter? How can you smooth things over and still deliver over 100 findings? At the end of the day, experience and repetition are the biggest part of delivering professional work, and that’s regardless of the kind of work.

Some further starting points for more detail can be found at http://en.wikipedia.org/wiki/Information_technology_audit_process and http://ithandbook.ffiec.gov/.

Base Rulesets in IPTables

2011-08-30 by scottpack. 3 comments

This question on Security.SE made me think in a rather devious way. At first, I found it to be rather poorly worded, imprecise, and potentially not worth salvaging. After a couple of days I started to realize exactly how many times I’ve been asked this question by well-intentioned, and often knowledgeable, people. The real question should be, “Is there a recommended set of firewall rules that can be used as a standard config?” Or more plainly: I have a bunch of systems, so what rules should they all have, no matter what services they provide? Now that is a question that can reasonably be answered, and what I’ve typically given is this:

-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp --icmp-type any -j ACCEPT
# Force SYN checks
-A INPUT -p tcp ! --syn -m state --state NEW -j DROP
# Drop all fragments
-A INPUT -f -j DROP
# Drop XMAS packets
-A INPUT -p tcp --tcp-flags ALL ALL -j DROP
# Drop NULL packets
-A INPUT -p tcp --tcp-flags ALL NONE -j DROP
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

The first and last lines should be pretty obvious so I won’t go into too much detail. Enough services use loopback that, except in very restrictive environments, attempting to firewall them will almost certainly be more work than is useful. Similarly, the ESTABLISHED keyword allows return traffic for outgoing connections. The RELATED keyword takes care of things like FTP that use multiple ports and may trigger multiple flows that are not part of the same connection but are nonetheless dependent on each other. In a perfect world these two rules would sit at the top for performance reasons: since rules are processed in order, we want as few rules as possible evaluated per packet. However, to get the full benefit of the rule set above, we want as many packets as possible to run past those checks first.
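
As a usage note (my addition, not part of the original post): the rules above are written in iptables-save syntax, so you can either prefix each line with the iptables command, or wrap the whole set in a file and load it atomically with iptables-restore. A sketch, with /etc/iptables/base.rules as a made-up path:

# Apply a single rule interactively by prefixing it with the iptables command
iptables -A INPUT -i lo -j ACCEPT
# Or save the whole set to a file in iptables-restore format:
#   *filter
#   :INPUT ACCEPT [0:0]
#   :FORWARD ACCEPT [0:0]
#   :OUTPUT ACCEPT [0:0]
#   (the -A INPUT rules above)
#   COMMIT
# and load it in one shot:
iptables-restore < /etc/iptables/base.rules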

The allow-all rule for ICMP is probably the most controversial part of this set. While there are reconnaissance concerns with ICMP, the network infrastructure is designed with the assumption that ICMP is passed. You’re better off allowing it (at least within your organization’s network space), or grokking all of the ICMP messages and determining your own balance. Look at this question for some worthwhile discussion on the matter.
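
If you would rather strike that balance than allow everything, a common middle ground (my sketch, not part of the original rule set) is to permit only the ICMP types the infrastructure genuinely relies on:

# Allow only the ICMP messages most networks actually need
-A INPUT -p icmp --icmp-type echo-request -j ACCEPT
-A INPUT -p icmp --icmp-type echo-reply -j ACCEPT
# destination-unreachable includes fragmentation-needed, which path MTU discovery depends on
-A INPUT -p icmp --icmp-type destination-unreachable -j ACCEPT
-A INPUT -p icmp --icmp-type time-exceeded -j ACCEPT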

Now, down to the brass tacks. Let’s look at each of the wonky rules in order.

Forcing SYN Checks

-A INPUT -p tcp ! --syn -m state --state NEW -j DROP

This rule performs two checks:

  1. Is the SYN bit NOT set, and
  2. Is this packet NOT part of a connection in the state table

If both conditions match, then the packet gets dropped. This is a bit of a low-hanging fruit kind of rule. If these conditions match, then we’re looking at a packet that we just downright shouldn’t be interested in. This could indicate a situation where the connection has been pruned from the conntrack state table, or perhaps a malicious replay event. Either way, there isn’t any typical benefit to allowing this traffic, so let’s explicitly block it.
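
If you want to convince yourself the rule is doing its job, hping3 can generate exactly this kind of stray packet. A test sketch of my own (192.0.2.10 is a documentation-range placeholder; run it as root against a host you control):

# Send three bare ACK packets to port 22 – no SYN, no entry in the state table
hping3 -A -p 22 -c 3 192.0.2.10

With the rule in place the probes should simply go unanswered; without it, the target’s TCP stack would normally answer each one with a RST.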

Fragments Be Damned

-A INPUT -f -j DROP

This is an easy one: drop any packet that is flagged as a fragment. I fully realize this sounds pretty severe. Networks were designed with the notion of fragmentation; in fact, the IPv4 header specifically contains a flag that indicates whether or not a packet may be fragmented. Considering that this is a core feature of IPv4, fragmentation is still a bit of a touchy subject. If the DF bit is set, then path MTU discovery should just work. However, not all devices respond back with the correct ICMP message – and the rule above is itself one of the offenders, though in fairness that is because ICMP type 3 code 4 (wikipedia) isn’t an available reject option in iptables. As a result, one can’t really know whether your packets will get fragmented somewhere along the path. Nowadays, on internal networks at least, this usually isn’t a problem. You may run into problems, however, when dealing with VPNs and similar setups where your 1500-byte ethernet segment suddenly needs to make space for an extra header.

So now that we’ve talked about all the reasons not to drop fragments, here’s the reason to do it. By default, standard iptables rules are only applied to the packet marked as the first fragment; any packet marked as a fragment with an offset of 2 or greater is passed through, the assumption being that if we receive an invalid packet then reassembly will fail and the packets will get dropped anyway. In my experience, fragmentation is a small enough problem that I would rather block it than deal with the risk. This should only get better with IPv6, as path MTU discovery is placed more firmly on the client and is considered less “magic.”

Christmas in July

-A INPUT -p tcp --tcp-flags ALL ALL -j DROP

Network reconnaissance is a big deal. It allows us to get a good feel for what’s out there, so that when doing our work we have some indication of what might exist instead of just blindly stabbing in the dark. So-called ‘Christmas Tree Packets’ are one of the reconnaissance techniques used by most network scanners. The idea is that for whatever protocol we use, whether TCP/UDP/ICMP/etc., every flag is set and every option is enabled. The notion is that, just like an old-style indicator board, the packet is “lit up like a Christmas tree”. By using a packet like this we can look at the behavior of the responses and make some guesses about what operating system and version the remote host is running. For a White Hat, that information can be used to build out a distribution graph of what types of systems we have, what versions they’re running, where we might want to focus our protections, or who we might need to visit for an upgrade/remediation. For a Black Hat, this information can be used to find areas of the network to focus their attacks on, or particularly vulnerable-looking systems that they can attempt to exploit. By design some flags are incompatible with each other, so any Christmas Tree Packet is at best a protocol anomaly, and at worst a precursor to malicious activity. In either case, there is normally no compelling reason to accept such packets, so we should drop them just to be safe.

I have read instances of Christmas Tree Packets resulting in Denial of Service situations, particularly with networking gear. The idea being that since so many flags and options are set, the processing complexity, and thus time, is increased. Flood a network with these and watch the router stop processing normal packets. In truth, I do not have experience with this failure scenario.

Nothing to See Here

-A INPUT -p tcp --tcp-flags ALL NONE -j DROP

When we see a packet where none of the flags or options are set, we use the term Null Packet. Just as with the Christmas Tree Packet above, one should not see these on a normal, well-behaved network, and just as above, they can be used for reconnaissance to try to determine the OS of the remote host.
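
Both packet types are easy to generate if you want to verify these rules: nmap’s Xmas and NULL scans produce exactly this traffic. My sketch below uses 192.0.2.10 as a placeholder target – only scan hosts you are authorised to probe:

# Xmas scan: probes with FIN, PSH and URG set
nmap -sX 192.0.2.10
# NULL scan: probes with no TCP flags set at all
nmap -sN 192.0.2.10
# Add -f to split the probes into tiny fragments and exercise the fragment rule as well
nmap -sX -f 192.0.2.10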

QotW #3: Does an established SSL connection mean a line is really secure?

2011-07-29 by M'vy. 0 comments

Last week, we looked at the hardening tag. Today we are going back to a specific question: Does an established SSL connection mean a line is really secure?

Why did we pick this question? Because almost everyone has heard of SSL, but many are not sure how it works and what it is used for.

Well, the first thing we need to talk about is the history of the protocol.

Secure Sockets Layer

The Secure Sockets Layer (SSL) is a cryptographic protocol – now renamed to Transport Layer Security (or TLS). This protocol is designed to create eavesdropping-proof and tampering-proof communication over the Internet and other untrusted networks.

Originally developed by Netscape, the protocol came out in 1995 with its 2.0 version (1.0 was never publicly released). But SSL 2.0 came with some serious security flaws, including a man-in-the-middle vulnerability which could allow an attacker to sit in the middle of an encrypted communication, unknown to either end, and decrypt the traffic. A new version was released in 1996 as SSL 3.0. The next step of the standard was SSL 3.1, renamed TLS 1.0. Only a few improvements were made in this version, but enough to make TLS 1.0 and SSL 3.0 incompatible. The current version of TLS is the 1.2 release from 2008.

Applications

So what is it used for? Well, many people use it on a regular basis. In fact, TLS is used by many websites to provide HTTPS connections. But it can also encapsulate other protocols, like FTP, SMTP, NNTP or XMPP. You can even use it to secure an entire VPN, as with OpenVPN.

So is it secure?

The question can’t be answered as is. It depends…

The first thing to rule out is the use of SSL versions older than 3.0. Since they have all shown flaws, they should not be used any more.

Secondly, we must ensure the implementation is correct. This question discusses the SSL/TLS renegotiation vulnerability which is present in some browser versions.

Next we must make sure the connection is using encryption. What? Yes: TLS supports a mode of NULL encryption.

Out of curiosity, I’ve looked in the about:config page of Firefox 5.x. Be relieved: all ssl2 settings and the null encryption modes are disabled.
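
If you want to check a server rather than a browser, OpenSSL can do it from the command line (a sketch of mine; example.com is a placeholder, and whether NULL suites are even listed depends on how your OpenSSL was built):

# List the cipher suites that provide no encryption at all (Enc=None)
openssl ciphers -v 'NULL'
# Offer only NULL suites over TLS 1.2 – a sane server should refuse the handshake
openssl s_client -connect example.com:443 -cipher NULL -tls1_2 < /dev/null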

And finally you need to check that the connection has been established in accordance with the protocol. You may be wondering what you could possibly have done not to respect the protocol; let me explain:

Handshaking

The connection is established in multiple steps called a handshake (the sketch after this list shows how to watch these steps on a live connection).

  • The client connects, and if it requires a secure connection it sends a list of supported ciphers.
  • The server picks a cipher from the list (usually the first cipher in the client’s list which is compatible with the server’s list, not necessarily the strongest), then notifies the client.
  • The server also sends back its identification, packed into a digital certificate. This certificate contains the asymmetric public key of the server and is signed by a Certificate Authority.
  • The client MAY contact this Certificate Authority to confirm the validity of the server’s public key. This is very important, but the cost of online verification makes it impractical. So usually the browser ships with the Certificate Authorities’ public keys and checks the server’s certificate itself, locally.
  • To generate the session keys, the client encrypts a random number using the server’s public key. Asymmetric cryptography makes deciphering the number without the private key infeasible.
  • Now the client and the server have a shared secret they can use to derive the keys for the actual connection.
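
To watch these steps happen on a real connection, you can ask OpenSSL’s s_client to print the handshake as it proceeds (my sketch; example.com is a placeholder host):

# -state prints each handshake step: client hello, server hello, certificate, key exchange...
openssl s_client -connect example.com:443 -servername example.com -state < /dev/null

The end of the output also reports the certificate verification result and the cipher suite that was negotiated.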

Protocol failure

One of the key points in this scenario is the verification of the certificate. Did you ever connect to a site and see your browser pop up a message about the certificate? Did you read the message? Usually those kinds of security warnings are there to tell you that the server certificate has expired, or that it is not signed by a trusted authority.

These warnings are critical! You may be subject to a man-in-the-middle attack. By clicking “go to this site anyway”, you are giving your browser the express command to trust the certificate you were presented. But unless you had verified it with the Certificate Authority yourself and decided it should be trusted, you could be accepting a forged certificate from a compromised authority. Your connection is then no longer secure.

Conclusion

Does this mean that respecting the protocol ensures your security? Well, yes.

Provided you made all the verifications, and as AviD said in today’s featured question:

While there are some mostly theoretical attacks on the cryptography of SSL[TLS], from my PoV its still plenty strong enough for almost all purposes, and will be for a long time.

So I would say that yes, the connection is secure. But even without turning into security paranoiacs, one should always ask what the server will be doing with the data provided. It is a good thing to send data over a secured connection, but it means little if the other endpoint then forwards it over unsecured lines.

To dig more: