Archive for August, 2011

A Risk-Based Look at Fixing the Certificate Authority Problem

2011-08-31 by Jeff Ferland. 3 comments

Bogus SSL certificates are not a new problem, and incidents can be found going back over a decade. With a fake certificate for Google.com in the wild, they’re in the news again today. My last two posts have touched on the SSL topic, and Moxie Marlinspike’s Convergence.io software is being mentioned in these news articles. At the same time, Dan Kaminsky has been pushing for DNSSEC as a replacement for the SSL CA system. Last night, Moxie and Dan had it out 140 characters at a time on Twitter over whether DNSSEC for key distribution was wise.

I’m going to do two things with this post: cover the discussion I said should be had about replacing the CA system, and try to show a risk-based assessment that justifies it.

The Risks of the Current System

Risk: Any valid CA chain can create a valid certificate. Currently, there are a number of root certificate authorities. My laptop lists 175 root certificates, and these include names like Comodo, DigiNotar, GoDaddy and Verisign. Any of those 175 root certificates can sign a trusted chain for any certificate and my system will accept it, even if it has previously seen a different, unexpired certificate for the same site, and even if that certificate was issued by a different CA. A greater exposure for this risk comes from intermediate certificates. These exist where the CA has signed a certificate for another party that carries flags authorizing it to sign other keys while maintaining a trusted path. While Google’s real certificate chains to the Equifax root, another CA signed a valid certificate for google.com that users’ browsers would accept.

Risk mitigation: DNSSEC. DNSSEC limits the exposure we see above. In the DNSSEC system, a record for *.google.com can only be valid if the signature chain runs from the DNS root, through the “com.” domain, to google.com. If Google’s SSL key is distributed through DNSSEC, then we know the key can only be inappropriate if the DNS root or the top-level domain has been compromised or is misbehaving. Just as we trust the properties of an SSL key to secure data, we trust it to maintain a chain, and from that we can say there is no other risk of a spoofed certificate path if the only certificate innately trusted is the one belonging to the DNS root. Courtesy of offline domain signing, this also means that hosting a root server does not give an attacker the opportunity to serve malicious responses, even by modifying the zone files (they would become invalid).
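
To see what that chain of trust looks like in practice, here is a minimal sketch using dig (any DNSSEC-aware resolver tool works). The domain is just the example from the article and may not actually be signed; the point is the record types, which are real DNSSEC types. Distributing the TLS key itself would additionally need a dedicated record type, which was still a draft at the time of writing.

# Ask a validating resolver for the signed record; the "ad" flag in the
# response header means the RRSIG chain checked out.
dig +dnssec www.google.com A
# Walk the chain by hand: the root signs com.'s DS record, com. signs
# google.com.'s DS record, and google.com. signs its own zone keys.
dig +dnssec com. DS
dig +dnssec google.com. DS
dig +dnssec google.com. DNSKEY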

Risks After Moving to DNSSEC

Risk: We distrust that chain. We presume from China’s actions that China has a direct interest in being able to track all information flowing over the Internet for its users. While the Chinese government has no control over the “com.” domain, it does control the “cn.” domain. Visiting google.cn does not mean that a user intends to let the Chinese government view their data. Many countries have enough resources to perform collusion attacks where they can alter both the name resolution under their control and the data path.

Risk mitigation: Multiple views of a single site. Convergence.io is a tool that allows one to view the encryption information as seen from different sites all over the world. Without Convergence.io, an attacker needs to compromise the DNS chain and a point between you and your intended site. With Convergence.io, the bar is raised again so that the attacker must compromise both the DNS chain and the point between your target website and the rest of the world.
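
The underlying check is easy to approximate by hand: compare the certificate fingerprint you see locally with the fingerprint seen from another network vantage point. A rough sketch, assuming you have shell access on some remote machine (the host names are placeholders):

# Fingerprint of the certificate as seen from here
echo | openssl s_client -connect www.google.com:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1
# The same check from a second vantage point; differing fingerprints for
# the same site are a strong hint that someone is in the middle
ssh some-remote-host "echo | openssl s_client -connect www.google.com:443 2>/dev/null | openssl x509 -noout -fingerprint -sha1"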

Weighing the Risks

The two threats we have identified require gaining a valid certificate and compromising the information path to the server. To be a successful attack, both must occur together. In the current system, the bar for getting a validly signed yet inappropriately issued certificate is too low. The bar for performing a Man-in-the-Middle attack on the client side is also relatively low, especially over public wireless networks. DNSSEC raises the certificate bar, and Convergence raises the network bar (your attack will be detected unless you compromise the server side).

That seems ideal for the risks we identified, so what are we weighing? Well, we have to weigh the complexity of Convergence. A new layer is being added, and at the moment it requires active participation by the user. This isn’t weighing a risk so much as weighing how much we think we can benefit. We also must remember that Convergence doesn’t raise the bar for an attacker who can perform a MITM attack between the target server and the whole world.

Weighing DNSSEC, Moxie drives home the following risk for key distribution: “DNSSEC provides reduced trust agility. We have to trust VeriSign forever.” I argue that we must already trust the DNS system to point us to the right place, however. If we distrust that system, be it for name resolution or for key distribution, the action is still the same — the actors must be replaced, or the resolution done outside of it. The counter to Moxie’s statement, in implementing DNSSEC for SSL key distribution, is that we’ve made the problem an entirely social one. We now know that if we see a fake certificate in the wild, it is because somebody in the authorized chain has lost control of their systems or acted maliciously. We’ve reduced the exposure to only include those in the path — a failure with the “us.” domain doesn’t affect the “com.” domain.

So we’re left with the social risk that we have a bad actor who is now clearly identified. Convergence.io can help to detect that issue even if it only enjoys limited deployment. We are presently hamstrung by the fact that any of a broad number of CAs, and an ever-broader number of delegates, can be responsible for issuing “certified lies,” and that still needs to be reduced.

Making a Decision

Convergence.io is a bolt-on. It does not require anybody else to change except the user. It does add complexity that not all users may be comfortable with, and that may limit its utility. I see no additional exposure risk (in the realm of SSL discussion, not in the realm of somebody changing the source code illicitly) from using it. To that end, Moxie has released a great tool and I see no reason not to be a proponent of the concept.

Moxie still argues that DNSSEC has a potential downside. The two-part question is thus: is reducing the number of valid certification paths for a site to one an improvement? And when we remove the risk of an unwanted certification of the site, is the new risk that we can’t drop a certifier a credible increase in risk? Because we must already trust the would-be DNSSEC certifier to direct us to the correct site, the technical differences in my mind are moot.

To put it into a practical example: China as a country can already issue a “valid” certificate for Google. They can control any resolution of google.cn. Whether they control the sole key channel for google.cn or any valid key channel for google.cn, you can’t reach the “true” server and certificate/key combination without trusting China. The solution for that, whether it be for the IP address or the keypair, is to hold them accountable or find another channel. DNSSEC does not present an additional risk in that regard. It also removes a lot of ugliness related to parsing X.509 certificates and codes (which includes such storied history as the “*” certificate that worked on anything before browser vendors hard-coded it out). Looking at the risks presented and arguments from both sides, I think it’s time to start putting secure channel keys in the DNS stream.

Base Rulesets in IPTables

2011-08-30 by scottpack. 3 comments

This question on Security.SE made me think in a rather devious way. At first, I found it to be rather poorly worded, imprecise, and potentially not worth salvaging. After a couple of days I started to realize exactly how many times I’ve really been asked this question by well-intentioned, and often knowledgeable, people. The real question should be, “Is there a recommended set of firewall rules that can be used as a standard config?” Or more plainly, “I have a bunch of systems, so what rules should they all have, no matter what services they provide?” Now that is a question that can reasonably be answered, and what I’ve typically given is this:

-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp --icmp-type any -j ACCEPT
# Force SYN checks
-A INPUT -p tcp ! --syn -m state --state NEW -j DROP
# Drop all fragments
-A INPUT -f -j DROP
# Drop XMAS packets
-A INPUT -p tcp --tcp-flags ALL ALL -j DROP
# Drop NULL packets
-A INPUT -p tcp --tcp-flags ALL NONE -j DROP
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
The first and last lines should be pretty obvious, so I won’t go into too much detail. Enough services use loopback that, except in very restrictive environments, attempting to firewall them will almost certainly be more work than is useful. Similarly, the ESTABLISHED keyword allows return traffic for outgoing connections. The RELATED keyword takes care of things like FTP that use multiple ports and may trigger multiple flows that are not part of the same connection, but are nonetheless dependent on each other. In a perfect world these two rules would sit at the top for performance reasons: since rules are processed in order, we want each packet to traverse as few rules as possible. However, to get the full benefit from the rule set above, we want to run as many packets as possible past the checks in the middle, so the ESTABLISHED,RELATED rule comes last.

The allow all on ICMP is probably the most controversial part of this set. While there are reconnaissance concerns with ICMP, the network infrastructure is designed with the assumption that ICMP is passed. You’re better off allowing it (at least within your organization’s network space), or grokking all of the ICMP messages and determining your own balance. Look at this question for some worthwhile discussion on the matter.
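
For completeness, here is what the base set looks like as a file that iptables-restore can load in one atomic step. This is a sketch: the default-deny INPUT policy and the sample SSH rule are assumptions you would swap for your own service rules.

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -p icmp --icmp-type any -j ACCEPT
# Force SYN checks
-A INPUT -p tcp ! --syn -m state --state NEW -j DROP
# Drop all fragments
-A INPUT -f -j DROP
# Drop XMAS packets
-A INPUT -p tcp --tcp-flags ALL ALL -j DROP
# Drop NULL packets
-A INPUT -p tcp --tcp-flags ALL NONE -j DROP
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Service-specific rules go here, e.g. allow new SSH connections:
-A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT
COMMIT

Load it with iptables-restore < /etc/iptables.rules (the path is just an example); the whole filter table is replaced in one step, so you never run with a half-built ruleset.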

Now, down to the brass tacks. Let’s look at each of the wonky rules in order.

Forcing SYN Checks

-A INPUT -p tcp ! --syn -m state --state NEW -j DROP
This rule performs two checks:

  1. Is the SYN bit NOT set, and
  2. Is this packet NOT part of a connection in the state table

If both conditions match, then the packet gets dropped. This is a bit of a low-hanging fruit kind of rule. If these conditions match, then we’re looking at a packet that we just downright shouldn’t be interested in. This could indicate a situation where the connection has been pruned from the conntrack state table, or perhaps a malicious replay event. Either way, there isn’t any typical benefit to allowing this traffic, so let’s explicitly block it.

Fragments Be Damned

-A INPUT -f -j DROP
This is an easy one. Drop any packet with the fragment bit set. I fully realize this sounds pretty severe. Networks were designed with the notion of fragmentation; in fact, the IPv4 header specifically contains a flag that indicates whether or not the packet may be fragmented. Considering that this is a core feature of IPv4, fragmentation is still a bit of a touchy subject. If the DF bit is set, then path MTU discovery should just work. However, not all devices respond back with the correct ICMP message. The above rule is one such offender, because ICMP type 3 code 4 (see Wikipedia) isn’t available as a reject option in iptables. As a result, one can’t really know whether or not packets will get fragmented along the network path. Nowadays, on internal networks at least, this usually isn’t a problem. You may run into problems, however, when dealing with VPNs and similar setups where your 1500 byte Ethernet segment suddenly needs to make space for an extra header.

So now that we’ve talked about all the reasons not to drop fragments, here’s the reason to. By default, standard iptables rules are only applied to the packet marked as the first fragment. Meaning, any packet marked as a fragment with an offset of 2 or greater is passed through, the assumption being that if we receive an invalid packet then reassembly will fail and the packets will get dropped anyway. In my experience, fragmentation is a small enough problem that I’d rather not accept the risk, so I block it anyway. This should only get better with IPv6, as path MTU discovery is placed more firmly on the client and is considered less “magic.”

Christmas in July

-A INPUT -p tcp --tcp-flags ALL ALL -j DROP
Network reconnaissance is a big deal. It allows us to get a good feel for what’s out there so that when doing our work we have some indication of what might exist instead of just blindly stabbing in the dark. So-called ‘Christmas Tree Packets’ are one of the reconnaissance techniques used by most network scanners. The idea is that for whatever protocol we use, whether TCP/UDP/ICMP/etc., every flag is set and every option is enabled. The notion is that, just like an old-style indicator board, the packet is “lit up like a Christmas tree”. By using a packet like this we can look at the behavior of the responses and make some guesses about what operating system and version the remote host is running. For a White Hat, we can use that information to build out a distribution graph of what types of systems we have, what versions they’re running, where we might want to focus our protections, or who we might need to visit for an upgrade/remediation. For a Black Hat, this information can be used to find areas of the network to focus their attacks on, or particularly vulnerable-looking systems that they can attempt to exploit. By design some flags are incompatible with each other, and as a result any Christmas Tree Packet is at best a protocol anomaly and at worst a precursor to malicious activity. In either case, there is normally no compelling reason to accept such packets, so we should drop them just to be safe.

I have read instances of Christmas Tree Packets resulting in Denial of Service situations, particularly with networking gear. The idea being that since so many flags and options are set, the processing complexity, and thus time, is increased. Flood a network with these and watch the router stop processing normal packets. In truth, I do not have experience with this failure scenario.

Nothing to See Here

-A INPUT -p tcp --tcp-flags ALL NONE -j DROP
When we see a packet where none of the flags or options are set, we use the term Null Packet. Just like the Christmas Tree Packet above, one should not see this on a normal, well-behaved network. Also, just as above, they can be used for reconnaissance purposes to try and determine the OS of the remote host.
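
If you want to confirm the wonky rules above are actually biting, point a scanner at the box from another machine and watch the counters climb. A rough sketch (the target name is a placeholder; the nmap scan types are standard options):

# From another machine: generate the traffic each rule is meant to catch
nmap -sA -Pn target.example.com   # ACK without SYN -> caught by the SYN check
nmap -sX -Pn target.example.com   # Christmas tree packets
nmap -sN -Pn target.example.com   # Null packets
nmap -f  -Pn target.example.com   # Fragmented probes
# On the firewall: per-rule packet/byte counters show what was dropped
iptables -L INPUT -v -n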

QotW #7: How to write an email regarding IT Security that will be read, and not ignored by the end user?

2011-08-26 by roryalsop. 1 comments

A key aspect of IT and Information Security is the acceptance piece. People aren’t naturally good at this kind of security and generally see it as an annoyance and something they would rather not have to deal with.

Because of this, it is typical in organisations to send out regular security update emails – to help remind users of risks, threats, activities etc.

However it is often the case that these are deleted without even being read. This week’s question: How to write an email regarding IT Security that will be read, and not ignored by the end user? was asked by @makerofthings and generated quite a number of interesting responses, including:

  • Provide a one line summary first, followed by a longer explanation for those who need it
  • Provide a series of options to answer – those who fail to answer at all can be chased
  • Tie in reading and responding to disciplinary procedures – a little bit confrontational here, but can work for items such as mandatory annual updates (I know various organisations do this for Money Laundering awareness etc)
  • Using colours – either for the severity of the notice, or to indicate required action by the user
  • Vary the communications method – email, corporate website, meeting invite etc
  • Don’t send daily update messages – only send important, actionable notices
  • Choose whose mailbox should send these messages – if it is critical everyone reads it, perhaps it should come from the FD or CEO, or the individual’s line manager
  • Be personal, where relevant – users who receive a few hundred emails a day will happily filter or delete boring ones, or those too full of corporate-speak. Impress upon users how it is relevant to them
  • Add “Action Required” in the subject

It is generally agreed that if it isn’t interesting it will be deleted. If there are too many ‘security’ emails, they’ll get deleted. In general, unless you can grab the audience’s attention, the message will fail.

Having said that, another take on it is – do you need to send all these security emails to your users? For example, should they be getting antivirus updates or patch downtime info in an email every week?

Can users do anything about antivirus, or should that be entirely the responsibility of IT or Ops? And wouldn’t a fixed patch downtime each week, listed on the internal website, make less of an impact on users?

Thinking around the common weak points in security – such as users – and taking them out of the loop for routine actions means that when you actually do need users to carry out an action, the request will have much more impact.

Associated questions and answers you should probably read:

What are good ways to educate about IT Security in a company?

What policies maximize employee buy-in to security?

Some More BSidesLV and DEFCON

2011-08-23 by Jeff Ferland. 0 comments

Internet connectivity issues kept me from being timely in updating, and a need for sleep upon my return led me to soak up all of the rest of BSides and DEFCON. That means just a few talks are going to be brought up.

First, there’s Moxie Marlinspike’s talk about SSL. In my last post, I had mentioned that I thought SSL had reached the point where it is due to be replaced. In the time between writing that and seeing his talk, I talked with a few other security folk. We all agreed that DNSSEC made for a better distribution model than the current SSL system, and wondered before seeing Moxie’s presentation why he would add so much more complexity beyond that. The guy next to me (I’ll skip name drops, but he’s got a security.stackexchange.com shirt now and was all over the news in the last year) and I talked before the presentation, and at the end we agreed that Moxie had a point: trusting the DNS registry and operators not to change keys could be a mistake.

Thus, we’re still in the world of certificates and complicated X.509 parsing that has a lot of loopholes, and we’ve added something that the user needs to be aware of. However, we have one solid bonus: many independent and distributed sources must now collaborate to verify a secure connection. If one of them squawks, you at least have an opportunity to be aware. An equally large entry exists in the negative column: it is likely that many security professionals themselves won’t enjoy the added complexity. There’s still a lot of research work to be done, but the discussion is needed now.

PCI came up in discussion a little bit last year, and a lot more this year. In relation, the upcoming Penetration Testing Execution Standard (PTES) was discussed. Charlie Vedaa gave a talk at BSides titled “Fuck the Penetration Testing Execution Standard”. It was a frank and open talk with a quick vote at the end: the room as a whole felt that despite the downsides we see in structures like PCI and the PTES, we are better off with them than without.

The line for DEFCON badges took most people hours and the conference was out of the hard badges in the first day. Organizers say it wasn’t an issue of under-ordering, but rather that they had exhausted the entire commercial supply of “commercially pure” titanium to make the badges. Then the madness started…

AT&T’s network had its back broken under the strain of DEFCON, resulting in tethering being useless and text messages showing up in batches, sometimes more than 30 minutes late. The Rio lost its ability to check people into their hotel rooms or process credit cards. Some power issues affected the neighboring hotel at a minimum — the Gold Coast had a respectable chunk of casino floor and restaurant space in the dark last evening. The audio system for the Rio’s conference area was apparently taken over, with the technicians locked out of their own system. Rumors of MITM cellular attacks at the conference abound, and now, days later, they are in the press. Given talks last year including a demonstration, and talks this week at the Chaos Communication Camp, the rumors are credible. We’ll wait to see evidence, though.

DEFCON this year likely had more than 15,000 attendees, and they hit the hotel with an unexpected force. Restaurants were running out of food. Talks were sometimes packed beyond capacity. The Penn and Teller theater was completely filled for at least three talks I had interest in, locking me outside for one of them. The DEFCON WiFi network (the “most hostile in the world”) suffered some odd connectivity issues and a slow-or-dead DHCP server.

Besides a few articles in the press that have provided interesting public opinion, one enterprising person asked a few random non-attendees at the hotel what they thought of the event. The results are… enlightening.

QotW #6: When the sysadmin leaves…

2011-08-19 by ninefingers. 1 comments

When we talk about security of systems, a lot of focus and attention goes on the external attacker. In fact, we often think of “attacker” as some form of abstract concept, usually “some guy just out causing mischief”. However, the real goal of security is to keep a system running, and internal, trusted users can upset that goal more easily and more completely than any external attacker, especially a systems administrator. These are, after all, the men and women you ask to set up your system and defend it.

So we Security SE members took particular interest when user Toby asked: when the systems admin leaves, what extra precautions should be taken? Toby wanted to know what the best practice was, especially where a sysadmin was entrusted with critical credentials.

A very good question, and it did not take long for a reference to Terry Childs to appear, the San Francisco FiberWAN administrator who, in response to a disciplinary action, refused to divulge critical passwords to routing equipment. Clearly, no company wants to find itself locked out of its own network when a sysadmin leaves, whether the split was amicable or not. So what can we do?

Where possible, ensure “super user” privileges can be revoked when the user account is shut down.

From Per Wiklander. The most popular answer by votes so far also makes a lot of sense. The answer itself references the typical Ubuntu setup, where the root password is deliberately unknown even to the installer and users are part of a group of administrators, such that if their account is removed, so is their administrator access. Using this level of granular control you can also downgrade access temporarily, e.g. to allow the systems admin to do non-administrative work prior to leaving, as in the sketch below.
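
A minimal sketch of what that looks like on an Ubuntu-style box, assuming the admin-group model described above (the username is a placeholder):

# Downgrade first: drop the user from the admin group but leave the account usable
gpasswd -d jdoe admin        # the group is "sudo" on newer releases
# When they leave: lock the account, expire it, and kill any live sessions
passwd -l jdoe
usermod --expiredate 1 jdoe
pkill -KILL -u jdoe
# Don't forget their SSH keys and any cron jobs they own
rm -f /home/jdoe/.ssh/authorized_keys
crontab -r -u jdoe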

However, other users soon pointed out a potential flaw in this kind of technical solution – there is no way to prevent someone with administrative access from changing this setup, or adding additional back doors, especially if they set up the box. A further raised point was the need not just to have a corporate policy on installation and maintenance, but also ensure it is followed and regularly checked.

Credential control and review matters – make sure you shut down all the non-centralized accounts too.

Mark Davidson‘s answer focused heavily on the need for properly revoking credentials after an administrator leaves. Specifically, he suggests:

  • Do revoke all credentials belonging to that user.
  • Change any root password/Administrator user password of which they are aware.
  • Access to other services on behalf of the business, like hosting details, should also be altered.
  • Ensure any remote access is properly shut down – including incoming IP addresses.
  • Logging and monitoring access attempts should happen to ensure that any possible intrusions are caught.

Security SE moderator AviD was particularly pleased with the last suggestion and added that it is not just administrator accounts that need to be monitored – good practice suggests monitoring all accounts.

It is not just about passwords – do not forget the physical/social dimension.

this.josh raises a critical point. So far, all suggestions have been technical, but there are many other issues that might present a window of opportunity to a sysadmin with a mind to cause chaos. From the answer itself:

  • Physical access is important too. Don’t just collect badges; ensure that any access to cabinets, server rooms, drawers etc are properly revoked. The company should know who has keys to what and why so that this is easy to check when the sysadmin leaves.
  • Ensure the phone/voicemail system is included in the above checks.
  • If the sysadmin could buy on behalf of the company, ensure that by the time they have left, they no longer have authority to do this. Similarly, ensure any contracts the employee has signed have been reviewed – ideally, just as you log sysadmin activities, you are also checking contracts as they are signed.
  • Check critical systems, including update status, to ensure they are still running and haven’t been tampered with.
  • My personal favorite: let people know. As this.josh points out, it is very easy as a former employee knowing all the systems to persuade someone to give you access, or give you information that may give you access. Make sure everyone understands that the individual has left and that any suspicious requests are highlighted or queried and not just granted.

Conclusions

So far, that’s it for answers to the question, and no answer has been accepted yet, so if you think anything is missing, please feel free to sign up and present your own answer.

In summary, there appears to be a set of common issues being raised in the discussion. Firstly: technical measures help, but cannot account for every circumstance. Good corporate policy and planning is needed. A company should know what systems a systems administrator has access to, be they virtual or physical, as well as being able to audit access to these systems both during the employee’s tenure and after they have left. Purchasing mechanisms and other critical business functions such as contracts are just other systems and need exactly the same level of oversight.

I did leave one observation out from one of the three answers above: that it depends on whether the departing employee is leaving amicably or not. That is certainly a factor in how likely a problem is, but a proper process is needed anyway, because the risk to your systems and your business is huge. The difference in this scenario is around closing critical access in a timely manner. In practice this includes:

  • Escorting the employee off the premises
  • Taking their belongings to them once they are off premises
  • Removing all logical access rights before they are allowed to leave

These steps reduce the risk that the employee could trigger malicious code or steal confidential information.

QotW #5: Defending your website

2011-08-12 by roryalsop. 0 comments

This week’s post came from this question: I just discovered major security flaws in my web store!, but covers a wider scope and includes some resources on Security Stack Exchange on defending your website.

Any application you connect to the Internet will be attacked within minutes of plugging it in, so you need to look very carefully at protecting it appropriately. Guess what – many application owners and developers go about this in entirely the wrong way, building the application and then thinking about security at the very end, where it is expensive and difficult to do well.

The recommended approach is to build security in from the start – have a look at this post for a discussion on Secure Development Lifecycle. This means having security as a core pre-requisite, just like performance and functionality, and understanding the risks your new application will bring. Read How do you compare risks from your websites, physical perimeter, staff etc

To do this you will need developers to write the application securely. Studies show that this is the most cost-effective approach in the long run, as securely developed applications require very little remediation work (they don’t suffer from anywhere near as many security flaws as traditionally developed applications).

So, educating developers is key – even basic awareness can make the difference: What security resources should a white-hat developer follow these days?

Assuming your application has been developed using secure practices and you are ready to go live, What are the most important security checks for new web applications? and Testing your web application to gain some confidence that the security controls you have implemented are correctly working and effective is an essential step before allowing connection to the Internet. Penetration testing and associated security assessments are essential checks.

What tools are available to assess the security of a web application?

The globally recognised open standard on web application security is the Open Web Application Security Project (OWASP), who provide guidance on testing, sample vulnerable applications, and a list of the top ten vulnerabilities. The answers to Is there a typical step-by-step A-Z process for testing a Web site for possible exploits? provide more information on approaches, as do the answers to Books about Penetration Testing.

Once your application is live, you can’t just sit back and relax, as new attacks are developed all the time – being aware of what is being attempted is important. Can I detect web app attacks by viewing my Apache log file? is Apache specific, but the answers are broadly relevant for all web servers.

This is obviously a very quick run-through – but reading the linked questions, and following the related questions links on them will give you a lot of background on this topic.

Could Apple bring secure email to the masses with iOS 5

2011-08-09 by rakkhi. 0 comments

One of my worst ever projects was implementing PGP email encryption. Considering I have only worked in large financial services companies, that is saying something. I have reflected on the lessons learnt from this project before, but I felt that PGP was fundamentally flawed. When Apple iOS 5 was unveiled, there was a small feature that no one talked about. It was hidden among the sparkling jewels of notifications, free messaging and just-works synchronization. That feature was S/MIME email encryption support. Now, S/MIME is not new; however, Apple is uniquely positioned, with their ecosystem and user-centric design, to solve the fundamental problems and bring secure email to the masses.

I have detailed before the problems that email encryption solves, and those it does not solve. To reiterate and update:

  • Attacker eavesdropping on or modifying an email in transit, both on public networks such as the Internet and unprotected wireless networks, as well as on private internal networks. Recently I have experienced things that have hammered home the reality of this threat: heartfelt presentations at security conferences such as Uncon about the difficulties faced by those in oppressive regimes such as Iran, where the government reading your email could result in death or worse for you and your family; even “liberal” governments such as the US blatantly ignoring the law to perform mass domestic wire tapping in the name of freedom. The right to privacy is a human right. Most critical communication these days is electronic. There is a clear problem to be solved in keeping this communication only to its intended participants.

  • Attacker reading or modifying emails in storage. The recent attacks on high-profile targets such as Sony, and low-profile ones like MtGox, which were brought low by simple exploitation of unpatched systems and SQL injection vulnerabilities, have revealed an unintended consequence: the email addresses and passwords published were often enough to allow attackers access to a user’s email account, as the passwords were re-used. I don’t know about you, but these days a lot of my most important information is stored in my Gmail account. The awesome search capability, high storage capacity and accessibility everywhere means that it is a natural candidate for scanned documents and notes as well as the traditional highly personal and private communications. There is a problem to be solved of adding a layered defence; an additional wall in your castle if the front gate is breached.

  • Attacker is able to send emails impersonating you. It is amazing how much email is trusted today as being actually from the stated sender. In reality, normal email provides very little non-repudiation; it is trivial to send an email that appears to come from someone else. This is a major reason why phishing, and spear phishing in particular, are still so successful. Now spam blockers, especially good ones like Postini, have become very good at examining email headers, verified sender domains, MX records etc. to reject fraudulent emails (which is effective unless of course you are an RSA employee and dig it out of your quarantine). You can also run a more localised proof of concept of this: if your work accepts manager e-mail approvals for things like pay rises and software access, and it also has open SMTP relays, you can have some fun by Googling SMTP telnet commands. There is a problem to solve in how to reliably verify the sender of the email and ensure the contents have not been tampered with.

Problems not solved by email encryption and signing:

  • Sending the wrong contents to the right person(s)
  • Sending the right contents to the wrong person(s)
  • Sending the wrong contents to the wrong person(s)

These are important to keep in mind because you need to choose the right tool for the job. Email encryption is not a panacea for all email mis-use cases.

So, having established that there is a need for email encryption and signing, what are the fundamental problems with a market-leading technology like PGP (now owned by Symantec)?

Fundamental problems in summary:

  • Key exchange. Bob Greenpeace wants to send an email to Alice Activist for the first time. He is worried about Evil Government and Greedy OilCorp reading it and killing them both. Bob needs Alice’s public key to be able to encrypt the document. If Bob and Alice both worked at the same company and had PGP Universal configured correctly, or worked in organisations that both had PGP Universal servers exposed externally, or had the foresight to publish their public keys to the public PGP global key server, all would be well. However, this is rarely the case. Also, email is a conversation: Alice not only wants to reply but also to add in Karl Boatdriver in planning the operation. Now suddenly all three need each other’s public keys. Bob, Alice and Karl are all experts in their field but not technical, and have no idea how to export and send their public keys and install the others’ keys before sending secure email. They have an operation to plan for Thursday!

  • Working transparently and reliably everywhere. Bob is also running Outlook on his Windows phone 7 (chuckles), Alice Pegasus on Ubuntu and Karl Gmail via Safari on his iPad. Email encryption, decryption, signing, signature verification, key exchange all need to just work on all of these and more. It also cannot be like PGP desktop which uses a dirty hack to hook into a rich email client. This gives it extremely good reliability and performance. Intermittent issues like emails going missing and blank emails never occur. Calls to PGP support for any enterprise installation are few and far between. All users in companies love PGP and barely notice it is there. In many organisations email is the highest tier application. It has to have three nines uptime. If Alice and Karl are about to be captured by Greedy Oilcorp and they need to get an email to Bob they cannot afford to get a message blocked by PGP error.

Less fundamental but still annoying:

  • Adding storage encryption – there is no simple way with PGP to encrypt a clear-text email in storage. You can send it to yourself again, or export it to a file system and encrypt it there, but that is it. This is a problem where the email is not sensitive enough to encrypt in transit (or where you just forgot), but where, after the fact, you do care if script kiddies with your email password get that email and publish it on Pastebin.

How Apple could solve these fundamental problems with iOS 5

Key exchange – If you read marknca‘s excellent iOS security guide, you will see that Apple has effectively built a Public Key Infrastructure (PKI). Cryptographically signing things like apps and checking this signature before allowing them to run is key to their security model. This could be extended to users to provide email encryption and specifically transparent key exchange in the following manner:

  • Each user with an Apple ID would have a public/private key pair automatically generated
  • The public keys would be stored on the Apple servers and available via API and on the web to anyone
  • The private keys would be encrypted to the user’s Apple account passphrase and be synchronised via iCloud to all iOS and Mac endpoints
  • To send a secure email the user would just click a secure button on their email tool
  • When sending an email via an iOS device, a Mac or via the MobileMe web interface, it would query the Apple servers for the recipient’s public key and encrypt and (optionally) sign the email
  • On receipt, as long as the device had been unlocked, the email would be decrypted. For web access, as long as the certificate was stored in the browser, the email would be decrypted (this could be an add-on to Safari)
  • To encrypt an email in storage you would just drag it to a folder called private email. You could write rules for what emails were automatically stored there.

This system would make key exchange, email encryption, decryption and signing totally transparent. Because S/MIME uses certificates, it is easy to get the works-everywhere property. However, Apple would first enable this only for .me addresses, iOS devices and Macs, to sell more of these and to beef up their enterprise cred. Smart hackers and add-on writers would soon make it work everywhere, though. There you go: secure email for the masses.
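
For readers who want to see what S/MIME looks like today, without waiting for Apple, here is a minimal command-line sketch using OpenSSL. The file names and certificates are placeholders; in practice the certificates would come from a CA or, in the scheme above, from Apple’s servers.

# Sign a message with your own certificate and private key
openssl smime -sign -in message.txt -signer my-cert.pem -inkey my-key.pem -out signed.eml
# Encrypt it to the recipient's certificate (their public key)
openssl smime -encrypt -aes256 -in signed.eml -out encrypted.eml recipient-cert.pem
# The recipient decrypts with their private key, then verifies the signature
openssl smime -decrypt -in encrypted.eml -recip recipient-cert.pem -inkey recipient-key.pem -out signed-out.eml
openssl smime -verify -in signed-out.eml -CAfile ca-cert.pem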

BSidesLV 2011, Day 1

2011-08-05 by Jeff Ferland. 0 comments

This week in Las Vegas is Christmas for security. In listening to four BSidesLV talks today, I’ve come to conclude that the community suffers from a real lack of discussion about interacting with management, mandatory access controls need to be enhanced to focus on applications, the SSL system is irreparably broken and DNSSEC really should replace it, and some potential laws related to hacking may be harbingers of a 100 year security dark age.

That’s a loaded paragraph, so here’s the breakdown: Adam Ely’s talk “Exploiting Management for Fun and Profit – or – Management is not Stupid, You Are” made a fantastic point about budgeting for security. Getting better security isn’t about convincing executives that they need better security. Better security is about understanding what the corporate goals are and fitting the security program to that model. Consider the primary goal of a hospital executive: increase the survival rate of emergency room patients. How can your goals for security further that goal?

Val Smith’s “Are There Still Wolves Among Us” presented research showing a very skilled black-hat community that has a quiet history of program modification at vendors, years-old 0-day exploits and wholesale compromise of security researchers. The summary point is that “cyber warfare” and “government-level” threats may come from non-government hackers, and they’re the quiet ones. LulzSec, Anonymous and the like are providing covering noise for the ones who don’t get caught. It is further a possibility that attacks that appear to be from foreign countries may be intentional proxying by talented hackers.

“A Study of What Breaks SSL” by Ivan Ristic conveyed that the majority of servers are misconfigured somehow: acceptance of data, and sometimes the presentation of login forms, on unencrypted pages; broken certificate chains; and servers still offering up SSLv2 in abundance. I’ve personally come to believe that the purpose of SSL — providing assurance that an encryption key belongs to the registered domain of the certificate — can now be supplanted by the implementation of DNSSEC. As DNSSEC provides for a similar signature chain and distribution of keys, it ought to be used as the in-channel distribution method. Further to that, the bolt-on nature of SSL permits numerous attacks and misconfiguration possibilities that can prevent even negotiating SSL with a client. Those thoughts may be worthy of their own paper.

Finally, Schuyler Towne’s “Vulnerability Research Circa 1851” was a great look at the security culture of physical locks. It showed the evolution of lock security as it moved toward a system where knowing the mechanical construction of a lock didn’t prevent it from being secure. More importantly, it showed a 100-year drought of lock security filled with closed and legally enforced locksmith guilds, laws against lockpicking and the stalling of progress in adopted residential security locks — namely that most American household locks are using 100-year-old technology. It emphasized the potential disaster that adoption of laws such as Germany’s 202c “anti-hacking tools” law could present the security industry with. The golden age of lock development was spurred on by constant public challenges over lock security and then followed by a century-long dark age where laws and culture prevented research that would advance security.

The first day of BSides has drawn to a close, and the second day is opening. The lines for badges at DEFCON are some kind of absurd, and the week is just warming up. DEFCON organizers (“Goons”) are expecting 12,000 attendees. Why they have only pressed 9,200 attendee badges is a notable question, though, given the badge shortages of previous years. Security companies are actively and openly recruiting attendees at BSides, and I expect more of the same at DEFCON.

QotW #4: How can you reliably wipe data from a storage device?

2011-08-03 by Thomas Pornin. 1 comments

Today we investigate the problem of disposing of hardware storage devices (say, hard disks) which may contain sensitive data. The question which prompted this discussion, “Is it enough to only wipe a flash drive once?”, is about Flash disks (SSDs) and received some very good answers; here, we will try to look at the wider picture.

Confidentiality Issues and How to Avoid Them

Storage devices grow old; at some point we want to get rid of them, either because they broke down and need to be replaced, or because they became too small with regard to what we want to do with them. A storage device does not shrink over time, but our needs increase. Quite some time ago, I was sharing some disk space with about one hundred co-students, and the disk offered a hefty 120 megabytes. At that time, a colorful GIF picture was considered to be the height of technology. Today, we take for granted that we can store 2-hour-long HD videos on a cellphone. The increase in disk space shows no sign of slowing down, so we have a steady stream of old disks (or USB sticks or SD cards or even ZIP/Jaz cartridges for the old-timers — I will not delve into floppy disks) to dispose of. The problem is that all these pieces of hardware have been used quite liberally to store data, possibly confidential data. For the home user, think about the browser cookies, the saved passwords, the cryptographic private keys… In a business context, just about any data element could be of value to competitors.

This is a matter of confidentiality. There are two generic ways to deal with it: encryption, and destruction.

Disk Encryption

Disk encryption is about transforming data in a way which is reversible only with the knowledge of a given short secret element called a key. We are talking about symmetric encryption here; the key is a simple sequence of, say, 128 bits, chosen at random. The confidentiality issue is not totally obliterated, only severely reduced: supposedly, it is easier to deal with the secrecy of a sequence of 128 bits than with that of 100 GB worth of data. Encryption can be done either by the disk itself, by the operating system, or by the application.

Application-based encryption is limited to what the application can control. For instance, it may have to fight a bit with the OS about virtual memory (that’s pure memory from the point of view of the application, but the OS is prone to write it to the disk anyway). Also, most applications do not have any encryption feature, and modifying all of them is out of the question (not even counting the closed-source ones, that’s just too much work).

OS disk encryption is more thorough, since it can be applied on the complete disk for all files from all applications. It has a few drawbacks, though:

  • The computer must still be able to boot up; in particular, to read from the disk the code which is used to decrypt data. So there must be at least an unencrypted area on the disk. This can cause some system administration headaches.
  • Performance may suffer, for an internal hard disk. An x86 Core2 CPU at 2.4 GHz can encrypt about 160 MB/s with AES (that’s what my PC does with the well-known OpenSSL crypto library): not only do some SSDs go faster than that, I also have other things to do with my CPU (my OS is multitasking).
  • For external media (USB sticks, Flash cards…), there can be interoperability issues. There is no well-established standard on disk encryption. You could transport some appropriate software on the media itself but most places will not let you install applications as easily.
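
For the OS-level option above, a minimal sketch with cryptsetup (LUKS) on Linux looks like this. The device name is an example, and remember this only helps for data written after the volume is created:

# One-time setup: turn the partition into a LUKS container and open it
cryptsetup luksFormat /dev/sdb1
cryptsetup luksOpen /dev/sdb1 securedata
mkfs.ext4 /dev/mapper/securedata
mount /dev/mapper/securedata /mnt/secure
# Normal shutdown
umount /mnt/secure
cryptsetup luksClose securedata
# Decommissioning: destroying the LUKS header (which holds the key slots)
# makes the rest of the device unreadable; check the payload offset with
# "cryptsetup luksDump" before trusting a fixed size
dd if=/dev/zero of=/dev/sdb1 bs=1M count=2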

Encryption on the hardware itself is easier, but you do not really know if the drive does it properly. Also, the drive must keep the key in some way, and you want it to “forget” that key when the media is decommissioned. As Jesper’s answer describes, good encrypted disks keep the key in NVRAM (i.e. RAM with a battery) and can be instructed to forget the key, but this can prove difficult if the disk is broken: if it does not respond to commands anymore, you cannot really be sure that the NVRAM got blanked.

So while encryption is the theoretically appropriate way to go, it is not complete (you still have to manage the confidentiality of the key) and, in practice, it is not easy. Most of all, it works only if it is applied from day one: encryption can do nothing about data which was written before encryption was envisioned at all. So let’s see what can be done to destroy data.

Data Wiping

Wiping out data is a popular method; but popular does not necessarily mean efficient. The traditional wiping patterns rely on the idea that each data bit will be written exactly over the bit which was at the same logical location on the disk: this was mostly true in 1996, but technology has evolved. Today’s hard disks do not have “track railings” to guide the read/write heads; instead, they use the data itself as a guide. The net result is that the new data may be physically offset from the previous data by a small margin; the old data is still readable “on the edge”.

Also, modern hard disks do not have visibly bad sectors. Bad sectors still exist, but the disk transparently substitutes good sectors for bad ones. This happens dynamically: when a disk writes some data to a sector and detects that the write operation did not go well, it will allocate a new good sector from its spare area and do the write again there. From the point of view of the operating system, this is invisible; the only consequence is that the write took a few more milliseconds than could have been expected. However, the data has been written to the bad sector (admittedly, one or two bits of it may be wrong, but this leaves more than 4000 genuine bits) and, since the sector is now marked as “bad”, it is forever inaccessible from the host computer. No amount of wiping can do anything about that.

On Flash, the same issue arises, multiplied a thousand times. Flash memory works by “blocks” (a few dozen kilobytes) in which two operations can be done:

  • changing a bit from value 1 to value 0;
  • erasing a whole block: all the bits in the block are set to 1.

A given block will endure only so many “erase” operations, so Flash devices use wear leveling techniques, in which write operations are scattered all about the place. The “Flash Translation Layer” is a standard wear leveling algorithm, designed to operate smoothly with a FAT filesystem. The wear leveling means that if you try to overwrite a file, the wiping pattern will most of the time be written elsewhere. Moreover, some blocks can be declared as “bad” and remapped to spare blocks, in a way similar to what magnetic hard disks do (only more often). So some data blocks just linger, forever unreachable from any software wiping.

The basic conclusion is that wiping does not work against a determined attacker. Simply overwriting the whole partition with zeros is enough to deter an attacker who will simply plug the disk in a computer: logically, a single write is enough to “remove” the data, and by working on the partition instead of files, you avoid any filesystem shenanigans. But if your enemies are so cheap that they will limit themselves to the logical layer, then you are lucky indeed.
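
For that logical-layer wipe, and for the stronger option of asking the drive itself to erase everything (which also reaches remapped sectors, if the firmware does its job), a rough sketch looks like this. The device names are examples, and ATA Secure Erase support varies by drive and controller, so treat it as a starting point rather than a guarantee.

# Logical wipe: overwrite the whole device with zeros (enough against casual attackers)
dd if=/dev/zero of=/dev/sdb bs=1M
# ATA Secure Erase: the drive firmware erases all sectors, including the
# reserved/remapped ones that dd can never reach
hdparm -I /dev/sdb                                    # check the "Security" section for support
hdparm --user-master u --security-set-pass p /dev/sdb
hdparm --user-master u --security-erase p /dev/sdb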

@nealmcb’s question, How can I reliably erase all information on a hard drive? sparked some excellent discussion including NIST’s guidelines for Media Sanitisation.

Media Destruction

Without encryption (which does not work for data which is already there) and data wiping (which does not work reliably, or at all), the remaining solution is physical destruction. It is not that easy, though; a good sledgehammer swing, for instance, though satisfying, is not very effective towards data destruction. After all, this is a hard disk. To destroy its contents, you have to remove the cover (there are often many screws, some of which are hidden, and glue, and rivets in some models) and then extract the platters, which are quite rigid disks. A simple office shredder will choke on those (although it would easily munch through older floppy disks). The magnetic layer is not thick, so mechanical abrasion may do the trick: use a sander or a grinder. Otherwise, dipping the platters in concentrated sulfuric acid should work.

For Flash devices, things are simpler: that’s mostly silicon with small bits of copper or aluminium, and a plastic cover. Just burn it.

Bottom line: media destruction requires resources. In a business environment, this could be a system administrator task, but it will involve extra manpower, safety issues (seriously, a geeky system administrator with access to an acid cauldron or a furnace, isn’t it a bit scary?) and possibly environmental considerations.

Conclusion

Data security must be thought about throughout the complete life cycle of storage devices. Whether you go crypto or physical, you must put some thought and resources into it. Most people will simply store old disks and hope for the problem to go away of its own accord.