A company’s data is as much a risk as it is an asset. Customer data, stored the way it is today, must be protected as a vulnerable asset and regarded as an unrealized liability, even if it never appears on the annual financial statement. In many jurisdictions this is enforced by legislation (e.g., the Data Protection Act 1998 in the UK).
One of the central challenges of this modern concern lies in knowing where your data is.
To make a comparable example, consider a company that owns its place of operation. That asset must be protected with an investment, both for sound business sense and by regulatory requirement. Fire protection, insurance, and ongoing maintenance are assumed costs that are considered all but mandatory. A company that isn’t covering those costs is probably not standing on all its legs. The loss of a primary building through inadequate investment in protection and maintenance could be anything from a major setback to fatal to the company.
It may seem a surreal comparison, but data exposure can have an impact as substantial as losing a primary building. A bank without customer records is dead regardless of the cash on hand. A bank with all its customer records made public is at risk of the same fate. Illegitimate transactions, damage to reputation, liability for customer losses related to disclosure and regulatory reaction may all work together to end an institution.
Consider how earlier this year Sony’s PlayStation online gaming network was hacked. In the face of this compromise Sony estimated its losses to date at $171 million, suffered extended periods of service downtime, and suffered further breaches while attempting to restore service. Further losses are likely as countries around the world investigate the situation and consider fines. Private civil suits have also been brought in response to the breaches. To put the numbers in perspective, the division responsible for the PlayStation network posted a 2010 loss of approximately $1 billion.
In total (across several divisions), Sony suffered at least 16 distinct instances of data loss or unintended access, including an Excel file containing sweepstakes data from 2001 that Sony posted online. No compromise was needed for that particular failure — the file was readily available and indexed by Google. In the end, proprietary source code, administrator credentials, customer birthdays, credit card information, telephone numbers and home addresses were all exposed.
In regards to Sony’s breaches, the following question was posed: “If you were called by Sony right now, had 100% control over security, what would you do in the first 24 hours?” The traditional response is isolating breaches, identifying them, restoring service with the immediate attack vectors closed, and shoring up after that. From the outside, however, Sony’s issue appears to be systemic. Their systems are varied, complicated, and numerous, and weaknesses appear to be everywhere. They struggled to return to a functioning state precisely because they were not compromised in an isolated way. Isolating and identifying breaches was as far as one could get in those first 24 hours. Indeed, a central part of the division’s core revenue business was out of service for roughly three weeks.
The root issue behind everything is likely that Sony isn’t aware of its assets. Everything grew in place organically, and they didn’t know what to protect. Sony claims it encrypted its credit card data, but researchers have found numbers related to the attack. Other identifying information wasn’t encrypted. Regardless of the circumstances surrounding the disclosure of credit card data, Sony failed to protect other valuable information that can make it liable for fraud. Damage to Sony’s reputation and regulatory reaction, including Japan preventing the service from restarting there at least through July, are additional issues.
The lack of awareness about what data is at risk can impact a business of any size. Much as different types of fire protection are needed to deal with an office environment, outdoor tire storage at an auto shop, a server room and a chemical factory, different types of security need to be applied to your data based on its exposure. Water won’t work on a grease fire, and encrypting a database has no value if a query can be run against it to retrieve all the information without encryption. While awareness is acute about credit card sensitivity, losing names, addresses, phone numbers and dates of birth can present just as much risk for fraud and civil or regulatory liability. This is discussed in ‘Data Categorisation – critical or not?’ and should be a key step in any organisation’s plans for resilience.
This week, a newly-minted master’s degree holder asked on our site, “[How do I go about] Carrying out a professional IT audit procedure?” That’s a really big question in one sentence. In trying to break that down into parts we can address, let’s look at the perspectives people involved in an audit might have.
Staff at an audit client interact with auditors who ask a lot of questions from a script usually referred to as a work plan. Many times, they will ask you to open the appropriate settings dialogs or configuration files and show them the configuration details they are interested in. Management at an audit client will discuss beforehand which systems are in scope, and the managers on the audit side will select or create appropriate work plans.
The quality, nature, and scope of work plans vary widely, and a single audit will often involve the use of many different plans. Work plans might be written by the audit company or available from regulatory bodies, standards organizations, or software vendors. One example of a readily-available plan is the PCI DSS 2.0 standard. That plan displays both high-level overviews and mid-level configuration requirements across a broad spectrum. Low-level details would be operating system or application specific. Many audit plans related to banking applications have granular detail about core system configuration settings. Also have a look at this question about audit standards for law firms for an example from a different industry showing similarities and differences.
While some plans are regulatory-compliance related, most are best-practices focused. All plans are written with best practices in mind, and for those new to the world of IT security, that’s the most challenging part about them: there is no universal standard, and many plans greatly overlap yet still differ. If the auditor is appropriately considering their client’s needs, they’ll almost certainly end up marking parts of plans as not applicable, or as not compliant yet acceptable because of mitigating circumstances.
Another challenging point for those new to auditing is the sometimes hard-to-grasp separation between objectives and controls. An objective might be to ensure that each user authenticates to their own account. Controls listed in a work plan might include password expiry requirements that help prevent shared account passwords (among other things). Don’t get crossed up focusing on the control if the objective is met through some other means – perhaps everybody is using biometric authentication instead. There are too many instances of this in the audit world, and it’s a common mistake among newer auditors.
From a good auditor’s perspective, meeting the goals of the work plan might be considered 60% of the work. An auditor’s job isn’t complete unless they’re looking at the whole organization. Some examples: to answer a question about password change requirements, the auditor should ask an administrator, see the configuration, and ask users about it (“When was the last time the system made you change your password?”). To review a system setting, the auditor should ask to see settings in a general manner, adding detail only as needed: “Can you show me the net effect of policies on this user’s account?” as opposed to “Start->run->rsop.msc”. Users reporting different password-reset experiences than the system configuration shows, or a system administrator who fumbles their way around everything, will never appear in the work plan, but they are concerns.
With that background in mind, here are some general steps to performing an IT audit procedure:
- Meet with management and determine their needs. You should understand many of the possible accepted risks before you begin the engagement. For example, high-speed traders may stand to lose more money by implementing a firewall than not.
- Select appropriate audit plans based on available resources and your own relevant work.
- Properly review the controls with the objectives they meet in mind. Use multiple references when possible, and always try to confirm any settings directly.
- Pay attention to the big picture. Things should “feel right.”
- Review your findings with management and consider their thoughts on them. Many times the apparent severity of something needs to be adjusted.
- At the end of the day, sometimes a business unit may accept the risk from a weak control, despite it looking severe to you as an auditor. That is their prerogative, as long as you have correctly articulated the risks.
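The objective-versus-control distinction discussed earlier can be captured in a small data model. Here is a minimal sketch (the class and field names are hypothetical, not from any real audit toolkit), assuming a failed control can still satisfy its objective when a documented mitigation exists:

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    """One control from a work plan, e.g. 'passwords expire every 90 days'."""
    description: str
    status: str              # "compliant", "not_compliant", or "not_applicable"
    mitigation: str = ""     # why a non-compliant control is still acceptable

@dataclass
class Objective:
    """The goal the controls serve, e.g. 'each user authenticates individually'."""
    description: str
    controls: list = field(default_factory=list)

    def is_met(self) -> bool:
        # An objective is met when every applicable control is either
        # compliant, or non-compliant with a documented mitigation.
        applicable = [c for c in self.controls if c.status != "not_applicable"]
        return all(c.status == "compliant" or c.mitigation for c in applicable)

# The biometric example from earlier: password expiry "fails", but the
# objective is still met because shared passwords are impossible.
obj = Objective("Each user authenticates to their own account", [
    Control("Passwords expire every 90 days", "not_compliant",
            mitigation="Biometric authentication in use"),
    Control("Accounts are individually assigned", "compliant"),
])
print(obj.is_met())  # True
```

The point of the structure is the one the text makes: the finding hangs off the objective, not the individual control.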
The last part of that, auditors reviewing findings with client management, takes the most finesse and the least teachable skill. Does your finding really matter? How can you smooth things over and still deliver over 100 findings? At the end of the day, experience and repetition are the biggest part of delivering professional work, regardless of the kind of work.
Some further starting points for more detail can be found at http://en.wikipedia.org/wiki/Information_technology_audit_process and http://ithandbook.ffiec.gov/.
Bogus SSL certificates are not a new problem, and incidents can be found going back over a decade. With a fake certificate for Google.com in the wild, they’re in the news again today. My last two posts have touched on the SSL topic, and Moxie Marlinspike’s Convergence.io software is being mentioned in these news articles. At the same time, Dan Kaminsky has been pushing for DNSSEC as a replacement for the SSL CA system. Last night, Moxie and Dan had it out 140 characters at a time on Twitter over whether DNSSEC for key distribution was wise.
I’m going to do two things here: cover the discussion I said should be had about replacing the CA system, and try to show a risk-based assessment to justify it.
The Risks of the Current System
Risk: Any valid CA chain can create a valid certificate. Currently, there are a number of root certificate authorities. My laptop lists 175 root certificates, including names like Comodo, DigiNotar, GoDaddy and Verisign. Any of those 175 root certificates can sign a trusted chain for any certificate and my system will accept it, even if it has previously seen a different unexpired certificate for the same site, and even if that certificate was issued by a different CA. A greater exposure for this risk comes from intermediate certificates. These exist where the CA has signed a certificate for another party with flags authorizing it to sign other keys while maintaining a trusted path. While Google’s real root CA is Equifax, another CA signed a valid certificate that would be accepted by users.
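The flat-trust problem can be illustrated with a toy model (the names and functions below are illustrative only, not a real PKI implementation): any one of the trusted roots can vouch for any hostname, and nothing ties a site to the CA that legitimately issued its certificate.

```python
import hashlib

# A toy model of today's CA trust: the client ships a flat set of root CAs,
# and a certificate for ANY hostname is accepted if ANY root signed its chain.
TRUSTED_ROOTS = {"Comodo", "DigiNotar", "GoDaddy", "Verisign"}  # 175 in reality

def toy_sign(ca: str, hostname: str) -> str:
    """Stand-in for a signature: just binds a CA name to a hostname."""
    return hashlib.sha256(f"{ca}:{hostname}".encode()).hexdigest()

def client_accepts(hostname: str, ca: str, signature: str) -> bool:
    # No pinning, no history: nothing restricts which root may vouch
    # for which name.
    return ca in TRUSTED_ROOTS and signature == toy_sign(ca, hostname)

# A chain from the "expected" root is accepted...
print(client_accepts("google.com", "Verisign",
                     toy_sign("Verisign", "google.com")))   # True
# ...and so is a chain from any other trusted root, e.g. a compromised one.
print(client_accepts("google.com", "DigiNotar",
                     toy_sign("DigiNotar", "google.com")))  # True
```

The second `True` is the whole problem: the client has no way to prefer the first chain over the second.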
Risk mitigation: DNSSEC. DNSSEC limits the exposure we see above. In the DNSSEC system, *.google.com can only be valid if the signature chain runs from the DNS root through the “com.” zone to google.com. If Google’s SSL key is distributed through DNSSEC, then we know that the key can only be inappropriate if the DNS root or the top-level domain was compromised or is misbehaving. Just as we trust the properties of an SSL key to secure data, we trust it to maintain a chain, and from that we can say there is no other risk of a spoofed certificate path if the only certificate innately trusted is the one belonging to the DNS root. Courtesy of offline domain signing, this also means that hosting a root server does not provide the opportunity to serve malicious responses, even by modifying the zone files (they would become invalid).
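By contrast, the DNSSEC constraint can be sketched like this (a simplified, hypothetical model of zone delegation, not a DNSSEC implementation): exactly one chain of zones, from the root down, may vouch for a name.

```python
def delegation_path(name: str) -> list:
    """The unique chain of zones that may vouch for `name`, e.g.
    'google.com.' -> ['.', 'com.', 'google.com.']."""
    labels = name.rstrip(".").split(".")
    return ["."] + [".".join(labels[i:]) + "."
                    for i in range(len(labels) - 1, -1, -1)]

def chain_is_valid(signing_chain: list, name: str) -> bool:
    # Unlike the CA model, only the delegation path itself may sign:
    # there is exactly one acceptable chain per name.
    return signing_chain == delegation_path(name)

print(chain_is_valid([".", "com.", "google.com."], "google.com."))   # True
print(chain_is_valid([".", "cn.", "google.com."], "google.com."))    # False: wrong TLD
print(chain_is_valid(["SomeRootCA", "google.com."], "google.com."))  # False: not the DNS root
```

Compare this with the CA toy model: instead of 175 interchangeable vouchers, a spoofed key now requires compromising a zone that is actually on the name’s delegation path.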
Risks After Moving to DNSSEC
Risk: We distrust that chain. We presume from China’s actions that China has a direct interest in being able to track all information flowing over the Internet for its users. While the Chinese government has no control over the “com.” domain, they do control the “cn.” domain. Visiting google.cn. does not mean that one aims to let the Chinese government view their data. Many countries have enough resources to perform attacks of collusion where they can alter both the name resolution under their control and the data path.
Risk mitigation: Multiple views of a single site. Convergence.io is a tool that allows one to view the encryption information as seen from different sites all over the world. Without Convergence.io, an attacker needs to compromise the DNS chain and a point between you and your intended site. With Convergence.io, the bar is raised again so that the attacker must compromise both the DNS chain and the point between your target website and the rest of the world.
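The notary idea can be sketched in a few lines (a hypothetical simplification of what Convergence does, not its actual protocol): if the certificate fingerprint seen from independent vantage points disagrees with what the client sees, something between the client and the site is interfering.

```python
from collections import Counter

def perspectives_agree(reports: dict, threshold: float = 1.0):
    """`reports` maps a vantage point (notary) to the certificate
    fingerprint it observed for the same site.
    Returns (ok, majority_fingerprint)."""
    counts = Counter(reports.values())
    fingerprint, seen = counts.most_common(1)[0]
    return seen / len(reports) >= threshold, fingerprint

# All vantage points see the same certificate: nothing suspicious.
ok, _ = perspectives_agree({"eu": "ab:cd", "us": "ab:cd", "asia": "ab:cd"})
print(ok)  # True

# A MITM near the client presents a different certificate to one viewpoint.
ok, _ = perspectives_agree({"eu": "ab:cd", "us": "ab:cd", "client": "99:99"})
print(ok)  # False
```

This is why the attacker’s bar moves: a client-side MITM changes only one perspective’s report, so the disagreement is visible.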
Weighing the Risks
The two threats we have identified require gaining a valid certificate and compromising the information path to the server. To be a successful attack, both must occur together. In the current system, the bar for getting a validly signed yet inappropriately issued certificate is too low. The bar for performing a Man-in-the-Middle attack on the client side is also relatively low, especially over public wireless networks. DNSSEC raises the certificate bar, and Convergence raises the network bar (your attack will be detected unless you compromise the server side).
That seems ideal for the risks we identified, so what are we weighing? Well, we have to weigh the complexity of Convergence. A new layer is being added, and at the moment it requires an active participation by the user. This isn’t weighing a risk so much as weighing how much we think we can benefit. We also must remember that Convergence doesn’t raise the bar for one who can perform a MITM attack that is between the target server and the whole world.
Weighing DNSSEC, Moxie drives the following risk home for it in key distribution: “DNSSEC provides reduced trust agility. We have to trust VeriSign forever.” I argue that we must already trust the DNS system to point us to the right place, however. If we distrust that system, be it for name resolution or for key distribution, the action is still the same: the actors must be replaced, or the resolution done outside of it. The counter to Moxie’s statement in implementing DNSSEC for SSL key distribution is that we’ve made the problem an entirely social one. We now know that if we see a fake certificate in the wild, it is because somebody in the authorized chain has lost control of their systems or acted maliciously. We’ve reduced the exposure to only those in the path: a failure with the “us.” domain doesn’t affect the “com.” domain.
So we’re left with the social risk of a bad actor who is now clearly identified. Convergence.io can help to detect that issue even if it enjoys only limited deployment. We are presently hamstrung by the fact that any of a broad number of CAs, and an ever-broader number of delegates, can be responsible for issuing “certified lies,” and that still needs to be reduced.
Making a Decision
Convergence.io is a bolt-on. It does not require anybody else to change except the user. It does add complexity that all users may not be comfortable with, and that may limit its utility. I see no additional exposure risk (in the realm of SSL discussion, not in the realm of somebody changing the source code illicitly) from using it. To that end, Moxie has released a great tool and I see no reason to not be a proponent of the concept.
Moxie still argues that DNSSEC has a potential downside. The two-part question is thus: is reducing the number of valid certification paths for a site to one an improvement? And when we remove the risk of an unwanted certification of the site, is the new risk that we can’t drop a certifier a credible increase in risk? Because we must already trust the would-be DNSSEC certifier to direct us to the correct site, the technical differences in my mind are moot.
To put it in a practical example: China as a country can already issue a “valid” certificate for Google. It can control any resolution of google.cn. Whether China controls the sole key channel for google.cn. or any valid key channel for google.cn., you can’t reach the “true” server and certificate/key combination without trusting China. The solution for that, whether for the IP address or the keypair, is to hold them accountable or find another channel. DNSSEC does not present an additional risk in that regard. It also removes a lot of ugliness related to parsing X.509 certificates and codes (which includes such storied history as the “*” certificate that worked on anything before browser vendors hard-coded it out). Looking at the risks presented and the arguments from both sides, I think it’s time to start putting secure channel keys in the DNS stream.
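What “keys in the DNS stream” could look like, sketched in a few lines (a hypothetical scheme for illustration; the real record formats were still being standardized at the time of writing): the zone publishes a DNSSEC-signed hash of the site’s certificate, and the client accepts a presented certificate only if it matches.

```python
import hashlib

def key_matches_dns(presented_cert: bytes, dns_published_hash: str) -> bool:
    """Accept the presented certificate only if its hash matches the one
    published (and DNSSEC-signed) in the domain's own zone: one valid
    path to trust, instead of 175 interchangeable roots."""
    return hashlib.sha256(presented_cert).hexdigest() == dns_published_hash

real_cert = b"...google.com certificate bytes..."   # placeholder bytes
published = hashlib.sha256(real_cert).hexdigest()   # what the zone would publish
print(key_matches_dns(real_cert, published))              # True
print(key_matches_dns(b"forged certificate", published))  # False
```

Note that the X.509 parsing ugliness disappears from the trust decision entirely: the client compares a hash, and the hard problem moves to securing the zone’s signing chain.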
Internet connectivity issues kept me from being timely in updating, and a need for sleep upon my return led me to soak up all of the rest of BSides and DEFCON. That means just a few talks are going to be brought up.
First, there’s Moxie Marlinspike’s talk about SSL. In my last post, I mentioned that I thought SSL had reached the point where it is due to be replaced. In the time between writing that and seeing his talk, I talked with a few other security folk. We all agreed that DNSSEC made for a better distribution model than the current SSL system, and wondered before seeing Moxie’s presentation why he would add so much more complexity beyond that. The guy next to me (I’ll skip name drops, but he’s got a security.stackexchange.com shirt now and was all over the news in the last year) and I talked before his presentation, and at the end we agreed with Moxie’s point that trusting the DNS registry and operators not to change keys could be a mistake.
Thus, we’re still in the world of certificates and complicated X.509 parsing with a lot of loopholes, and we’ve added something the user needs to be aware of. However, we gain one solid bonus: many independent and distributed sources must now collaborate to verify a secure connection. If one of them squawks, you at least have an opportunity to be aware. An equally large entry exists in the negative column: it is likely that many security professionals themselves won’t enjoy the added complexity. There’s still a lot of research work to be done, but the discussion is needed now.
PCI came up in discussion a little last year, and a lot more this year. Relatedly, the upcoming Penetration Testing Execution Standard (PTES) was discussed. Charlie Vedaa gave a talk at BSides titled “Fuck the Penetration Testing Execution Standard”. It was a frank and open talk with a quick vote at the end: the room as a whole felt that despite the downsides we see in structures like PCI and the PTES, we are better off with them than without.
The line for DEFCON badges took most people hours and the conference was out of the hard badges in the first day. Organizers say it wasn’t an issue of under-ordering, but rather that they had exhausted the entire commercial supply of “commercially pure” titanium to make the badges. Then the madness started…
AT&T’s network broke under the strain of DEFCON: tethering was useless and text messages showed up in batches, sometimes more than 30 minutes late. The Rio lost its ability to check people into their hotel rooms or process credit cards. Power issues affected at least one neighboring hotel: the Gold Coast had a respectable chunk of casino floor and restaurant space in the dark last evening. The audio system for the Rio’s conference area was apparently taken over, with the technicians locked out of their own system. Rumors of MITM cellular attacks abound, at the conference and now, days later, in the press. Given talks last year, including a demonstration, and talks this week at the Chaos Computer Camp, the rumors are credible. We’ll wait to see evidence, though.
DEFCON this year likely had more than 15,000 attendees, and they hit the hotel with an unexpected force. Restaurants were running out of food. Talks were sometimes packed beyond capacity. The Penn and Teller theater was completely filled for at least three talks I had interest in, locking me outside for one of them. The DEFCON WiFi network (the “most hostile in the world”) suffered some odd connectivity issues and a slow-or-dead DHCP server.
Besides a few articles in the press that have provided interesting public opinion, one enterprising person asked a few random non-attendees at the hotel what they thought of the event. The results are… enlightening.
This week in Las Vegas is Christmas for security. In listening to four BSidesLV talks today, I’ve come to conclude that the community suffers from a real lack of discussion about interacting with management, mandatory access controls need to be enhanced to focus on applications, the SSL system is irreparably broken and DNSSEC really should replace it, and some potential laws related to hacking may be harbingers of a 100 year security dark age.
That’s a loaded paragraph, so here’s the breakdown: Adam Ely’s talk “Exploiting Management for Fun and Profit – or – Management is not Stupid, You Are” made a fantastic point about budgeting for security. Getting better security isn’t about convincing executives that they need better security. Better security comes from understanding the corporate goals and fitting security to that model. Consider a hospital executive’s primary goal: increase the survival rate of emergency room patients. How can your goals for security further that goal?
Val Smith’s “Are There Still Wolves Among Us” presented research showing a very skilled black-hat community with a quiet history of program modification at vendors, years-old 0-day exploits, and wholesale compromise of security researchers. The summary point is that “cyber warfare” and “government-level” threats may come from non-government hackers, and they’re the quiet ones. LulzSec, Anonymous, and the like are providing covering noise for the ones who don’t get caught. It is further possible that attacks appearing to come from foreign countries are intentional proxying by talented hackers.
“A Study of What Breaks SSL” by Ivan Ristic conveyed that the majority of servers are misconfigured somehow: acceptance of data, and sometimes the presentation of login forms, on unencrypted pages; broken certificate chains; and servers still offering up SSLv2 in abundance. I’ve personally come to believe that the purpose of SSL (providing assurance that an encryption key belongs to the registered domain of the certificate) could be supplanted by DNSSEC. As DNSSEC provides a similar signature chain and distribution of keys, it ought to be used as the in-channel distribution method. Further, the bolt-on nature of SSL permits numerous attacks and misconfiguration possibilities that can prevent even negotiating SSL with a client. Those thoughts may be worthy of their own paper.
Finally, Schuyler Towne’s “Vulnerability Research Circa 1851” was a great look at the security culture of physical locks. It showed the evolution of lock security as it moved toward a system where knowing the mechanical construction of a lock didn’t prevent it from being secure. More importantly, it showed a 100-year drought in lock security filled with closed and legally enforced locksmith guilds, laws against lockpicking, and stalled progress in residential security locks; most American household locks use 100-year-old technology. It emphasized the potential disaster that laws such as Germany’s §202c “anti-hacking tools” statute could present to the security industry. The golden age of lock development was spurred on by constant public challenges over lock security, and was then followed by a century-long dark age where laws and culture prevented research that would advance security.
The first day of BSides has drawn to a close and the second day is opening. The lines for badges at DEFCON are some kind of absurd, and the week is just warming up. DEFCON organizers (“Goons”) are expecting 12,000 attendees. Why they pressed only 9,200 attendee badges is a notable question, given the badge shortages of previous years. Security companies are actively and openly recruiting attendees at BSides, and I expect more of the same at DEFCON.