Risk

A short statement on the Heartbleed problem and its impact on common Internet users.

2014-04-11 by lucaskauffman. 2 comments

On the 7th of April 2014 a team of security engineers (Riku, Antti and Matti) at Codenomicon and Neel Mehta of Google Security published information on a security issue in OpenSSL. OpenSSL is a piece of software used in the encryption process; it helps encrypt your computer traffic so that unauthorized people cannot understand what you are sending from one computer network to another. It is used in many applications: for example, if you use on-line banking websites, code such as OpenSSL helps to ensure that your PIN code remains secret.

The information that was released caused great turmoil in the security community, and many panic buttons were pressed because of the widespread use of OpenSSL. If you use a computer and the Internet you might be impacted: people at home just as much as major corporations. OpenSSL is used, for example, in web, e-mail and VPN servers and even in some security appliances. However, the fact that you have been impacted does not mean you can no longer use your PC or any of its applications. You may be a little more vulnerable, but the end of the world may still be further off than you think. First of all, some media reported on the “Heartbleed virus”. Heartbleed is in fact not a virus at all. You cannot be infected with it and you cannot protect against being infected. Instead it is an error in the computer programming code of specific OpenSSL versions (not all) which a hacker could potentially use to obtain information from the server (which could include passwords and encryption keys, along with other random data in the server’s memory), potentially allowing him to break into a system or account.

Luckily, most applications in which OpenSSL is used rely on more security measures than OpenSSL alone. Most banks, for instance, continuously work to remain abreast of security issues and have implemented several measures that lower the risk this vulnerability poses. An example of such a protective measure is transaction signing with an off-line card reader or other forms of two-factor authentication. Typically, exploiting the vulnerability on its own will not allow an attacker to post fraudulent transactions if you are using two-factor authentication or an offline token generator for transaction signing.

So in summary, does the Heartbleed vulnerability affect end-users? Yes, but not dramatically. A lot of the risk to the end-users can be lowered by following common-sense security principles:

  • Regularly change your on-line passwords (as soon as the websites you use let you know they have updated their software, this is worthwhile, but it should be part of your regular activity)
  • Ideally, do not use the same password for two on-line websites or applications
  • Keep the software on your computer up-to-date.
  • Do not perform on-line transactions on a public network (e.g. WiFi hotspots in an airport). Anyone could be trying to listen in.

Security Stack Exchange has a wide range of questions on Heartbleed ranging from detail on how it works to how to explain it to non-technical friends. 

Authors: Ben Van Erck, Lucas Kauffman

Communicating Security Risks to Senior Management – 3 years on

2014-04-04 by roryalsop. 0 comments

Back in July 2011 I wrote this brief blog post on the eternal problem of how to bridge the divide between security professionals and senior management.

Thought I’d revisit it nearly 3 years on and hopefully be able to point out ways in which the industry is improving…

Actually, I would have to say it is improving. I am seeing risk functions in larger organisations, especially in the Financial Services industry, with structures that are designed explicitly to oversee technology and report through a CRO to the board, and in the UK, the regulator is providing strong encouragement for financial institutions to get this right. CEOs and shareholders are also seeing the impact of security breaches, denial of service attacks, defacements etc., so now is a very receptive time.

But there is still a long way to go. Many organisations still do not have anyone at board level who can speak the language of business or operational risk and the language of information security or IT risk. So it may have to be you –

The points at the end of my previous blog about good questions to help you prepare for conversations with senior management are still valid:

If there is one small piece of advice I’d give to anyone in the security profession it would be to learn about business and operational risk – even if it is just a single course it will help!

QoTW #40: What’s the impact of disclosing the front-face of a credit or debit card?

2012-11-23 by roryalsop. 0 comments

“What is the impact of disclosing the front face of a credit card?” and “How does Amazon bill me without the CVC/CVV/CVV2?” are two questions which worry a lot of people, especially those who are aware of the security risks of disclosing information, but who don’t fully understand them.

Rory McCune‘s question was inspired by a number of occasions where someone was called out for disclosing the front of their credit card – and he wondered what the likely impact of disclosing this information could be, as the front of the card gives the card PAN (16-digit number), start date, expiry date and cardholder name. For debit cards it may also give the cardholder’s account number and sort code (this varies by region).

TC1 asked how Amazon and other vendors can bill you without the CVV (the special number on the back of the card).

atdre‘s answer on that second question states that for Amazon:

The only thing necessary to make a purchase is the card number, whether in number form or magnetic. You don’t even need the expiration date.

Ron Robinson also provides this answer:

Amazon pays a slightly higher rate to accept your payment without the CVV, but the CVV is not strictly required to present a transaction – everybody uses CVV because they get a lower rate if it is present (less risk, less cost).

So there is one rule for the Amazons out there and one rule for the rest of us – which is good, as this reduces the risk of card fraud for us, certainly for online transactions.

So what about physical transactions – if I have a photo of the front of a credit card and use it to create a fake card, is that enough to commit fraud?

From Polynomial‘s answer:

On most EFTPOS systems, it’s possible to manually enter the card details. When a field is not present, the operator simply presses enter to skip, which is common with cards that don’t carry a start date. On these systems, it is trivial to charge a card without the CVV. When I worked in retail, we would frequently do this when the chip on a card wasn’t working and the CVV had rubbed off. In such cases, all that was needed was the card number and expiry date, with a signature on the receipt for verification.

So if a fraudster could fake a card it could be accepted in retail establishments, especially in countries that don’t yet use Chip and Pin.

Additionally, bushibytes pointed out the social engineering possibilities:

As a somewhat current example, see how Mat Honan got hacked last summer: http://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-honan-hacking/all/ In his case, Apple only required the last digits for his credit card (which his attacker obtained from Amazon) in order to give up the account. It stands to reason that other vendors may be duped if an attacker were to provide a full credit card number including expiration dates.

In summary, there is a very real risk of not only financial fraud, but also social engineering, from the public disclosure of the front of your credit card. Online, the simplest fraud is through the big players like Amazon, who don’t need the CVV, and in the real world an attacker who can forge a card with just an image of the front of your card is likely to be able to use that card.

So take care of it – don’t divulge the number unless you have to!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #30: Are common passwords at particular risk?

2012-06-29 by roryalsop. 0 comments

User Experience Stack Exchange moderator Ben Brocka asked this question on Security Stack Exchange after the UX community asked whether they should disallow common passwords as a matter of course.

Cyril N‘s top scoring answer focuses on a highly valuable response – educating the individual to behave better. From his answer, two key points arise:

Some UX specialists says that it’s not a good idea to refuse a password. One of the arguments is the one you provide : “but if you ban them, users will use other weak passwords”, or they will add random chars like 1234 -> 12340, which is stupid, nonsensical and will then force the user to go through the “lost my password” process because he can’t remember which chars he added.

and

Let the user enter the password he wants. This goes against your question, but as I said, if you force your user to enter another password than one of the 25 known worst passwords, this will result in 1: A bad User Experience, 2: A probably lost password and the whole “lost my password” workflow. Now, what you can do, is indicate to your user that this password is weak, or even add more details by saying it is one of the worst known passwords (with a link to them), that they shouldn’t use it, etc etc. If you detail this, you’ll incline your users to modify their password to a more complex one because now, they know the risk. For the one that will use 1234, let them do this because there is maybe a simple reason : I often put a dumb password in some site that requests my login/pass just to see what this site provides me.

The only problem with this is called out in a comment by user Polynomial who suggests a hybrid solution:

Reject outright terrible passwords, e.g. “qwerty123”, and warn on passwords that are a derivation of a dictionary word / bad password, e.g. “Qw3rty123” or “drag0n1”
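As a rough illustration of that hybrid approach, here is a minimal sketch in Python. The blacklist, the normalisation rules and the example passwords are illustrative assumptions, not anyone’s production policy:

    # Sketch of a hybrid password policy: reject the very worst passwords outright,
    # warn when a password looks like a trivial variation of a blacklisted one.
    # The word list and substitution rules below are illustrative only.
    COMMON_PASSWORDS = {"password", "qwerty123", "12345678", "dragon", "letmein"}

    LEET_MAP = str.maketrans("01345$@7", "oleassat")  # crude "de-leet" normalisation

    def canonical(password: str) -> str:
        """Lower-case, strip trailing digits, then undo common character substitutions."""
        p = password.lower().rstrip("0123456789")
        return p.translate(LEET_MAP)

    CANONICAL_BLACKLIST = {canonical(p) for p in COMMON_PASSWORDS if canonical(p)}

    def check_password(password: str) -> str:
        if password.lower() in COMMON_PASSWORDS:
            return "reject"              # outright terrible, e.g. "qwerty123"
        if canonical(password) in CANONICAL_BLACKLIST:
            return "warn"                # derivation, e.g. "Qw3rty123" or "drag0n1"
        return "accept"

    if __name__ == "__main__":
        for candidate in ["qwerty123", "Qw3rty123", "drag0n1", "correct horse battery staple"]:
            print(candidate, "->", check_password(candidate))

The point is the shape of the decision rather than the particular rules: outright rejection is reserved for the very worst passwords, while near-misses only trigger the kind of warning and explanation that Cyril N recommends.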

Personally, I like the hybrid solution, because as Iszi points out, most password attacks are conducted offline, where the attacker has a copy of the hash database. In this scenario, dictionary attacks are a very low CPU load, very fast option that is easily automated:

…it is realistic to assume that attackers will target “common” passwords before resorting to brute force. John The Ripper, perhaps one of the most well-known offline password cracking tools, even uses this as a default action. So, if your password is “password” or “12345678”, it is very likely to be cracked in less than a minute.
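To see why, here is a minimal sketch of that kind of dictionary pass against a leaked hash database. The usernames and passwords are made up for the example, and unsalted MD5 is used only to keep it short; the cheapness it demonstrates is exactly why slow, salted hashes are recommended instead:

    import hashlib

    def md5_hex(s: str) -> str:
        return hashlib.md5(s.encode()).hexdigest()

    # Illustrative "stolen" hash database: in an offline attack this is what the attacker holds.
    stolen_hashes = {md5_hex("password"): "alice", md5_hex("12345678"): "bob"}

    # A tiny stand-in for the "common passwords" list a cracker tries first.
    wordlist = ["123456", "password", "12345678", "qwerty", "letmein"]

    for word in wordlist:
        if md5_hex(word) in stolen_hashes:
            print(f"{stolen_hashes[md5_hex(word)]}'s password appears to be '{word}'")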

Dr Jimbob has provided the logical Security answer: It depends! The requirements will be very different for an online banking application and for an application that will only cause minor inconvenience to one end user if compromised. Regulatory requirements may define the level of security protection you require. He also points out that:

Very weak passwords (top 1000) can be randomly attacked online by botnets (even if you use captchas/delays after so many incorrect attempts)

Bangdang also supports disallowing common passwords, and has a final section on the tradeoffs between security and usability, along with the effects of successful compromise, which include fingerpointing and blame.

Tylerl provides some insight from his experience analyzing attack code:

the password ordering is this:

    1. Try the most common passwords first. Usually there’s a list of between 10 and 500 passwords to try
    2. Try dictionary passwords second. This often includes variations like substituting “4” where an “A” would be or a “1” where there was the letter “l”, as well as adding numbers to the end.
    3. Exhaust the password space, starting at “a”, “b”, “c”… “aa”, “ab”, “ac”… etc.

Step 3 is usually omitted, and step 1 is usually attempted on a range of usernames before moving to step 2.

In general the answers go to show just how pervasive the security industry’s key problem is: the trade-off between usability and security. You could add strong security at every layer, but if the user experience isn’t appropriate it will not work.

This is why, for major roles/access improvement projects, we are seeing significant investment in the people/human capital side of things – helping projects understand human acceptance criteria, the reasons users reject controls, and the passive blocking of projects which on the face of it seem perfectly logical.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #27: Open Source vs Closed Source Systems

2012-05-25 by ninefingers. 0 comments

Question of the Week number 27 is a contentious, hotly debated issue in the software world. The question itself was posed by Security.SE user blunders, who quoted the argument often used in the defence of open source as a business model:

My understanding is that open source systems are commonly believed to be more secure than closed source systems. … A common position against closed source systems is that a lack of awareness is at best a weak security measure; commonly referred to as security through obscurity.

and then we hit the question itself:

Question is, are open source systems on average better for security than closed source systems?

We’ll begin with the top voted answer by SE user Jesper Mortensen, who explained that the whole notion of being able to generally compare open versus closed source systems is a bad one when there are so many other factors involved. To compare two systems you really need to look beyond the licensing model they use, and look at other factors too. I’ll quote Jesper’s list in its entirety:

  • Licenses.
  • Access to source code.
  • Very different incentive structures, for-profit versus for fun.
  • Very different legal liability situations.
  • Different, and wildly varying, team sizes and team skillsets.

Of course, this is by no means a complete treatment of the possible differences.

Jesper also highlighted the importance of comparing pieces of software that solve specific domain issues – not software in general. You have to do this, to even remotely begin to utilise the list above.

Security.SE legend Thomas Pornin also answered this question. I’ll begin coverage of his answer with his summary:

… the “opensource implies security” idea is overrated. What is important is the time (and skill) devoted to the tracking and fixing of security issues, and this is mostly orthogonal to the question of openness of the source.

The main thrust of Thomas’ answer was that actually, maintained software is more secure than unmaintained software. As an example, Thomas cited OpenSSL remote execution bugs that had been left lying in the code tree unfixed for some time – highlighting a possible advantage of closed source systems in that, when developed by companies, the effort and time spent on QA is generally higher than in open source systems.

Thomas’ answer also covers the counterpoint to this – that closed source systems can easily conceal security issues, too, and that having the source allows you to convince yourself of security more easily.

The next answer was provided by Ori, who lists a set of premises used for justifying the security of open source:

  1. The Customization premise
  2. The License Management premise
  3. The Open Format premise
  4. The Many Eyes premise
  5. The Quick Fix premise

As Ori rightly says, the customization premise means a company can take an open source platform and add an additional set of security controls. Ori quotes NSA’s SELinux as an example of such a project. For companies with the time and money to produce such platforms and make such fixes, this is clearly an advantage for open source systems.

Ori covers the license management and open format arguments from a compliance and resilience perspective. Using open source software (and making modifications) carries certain license constraints – the potential to violate these constraints is clearly a risk to the business. Likewise, for business continuity purposes the ability to not be locked in to a specific platform is a huge win for any company.

Finally, an answer by yours truly. The major thrust of my answer is succinctly summarised by AviD‘s comment on it:

I’ve always proposed an amendment to Linus’ Law: “Given enough trained eyeballs, most bugs are relatively shallow”

I explained, through use of a rather intriguing vulnerability introduced into development kernels by a compiler bug, that having the knowledge to detect these issues is critical to security. The source being available does not directly guarantee you have the knowledge to detect such issues.

That’s it for answers. As you can see, none of us took sides generally on the “open versus closed” debate, instead pointing out that there are many factors to consider beyond the license under which source is available. I think the whole set of answers is best summarised by this.josh‘s comment on the top voted answer – so I’ll leave you with that:

I agree. What matters most is how many people with knowledge and experience in the security domain actively design, implement, test, and maintain the software. Any project where no-one is looking at security will have significant vulnerabilities, regardless of how many people are on the project.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Some security implications of virtualisation

2011-12-07 by roryalsop. 0 comments

I prepared this presentation for an ISACA event and realised it is applicable to a much wider audience:

The business benefits of virtualisation are fantastic in terms of cost savings, agility in delivering services and business resiliency. It enables organisations to spend less on hardware assets and use their existing hardware more efficiently by running multiple machines on a single piece of server hardware without significant degradation in performance. Industry and vendor studies indicate up to 50% savings in IT expenditure.

An organisation can also run just the number of servers they need at any time, with demand led increases possible at short notice simply by enabling a new virtual machine. This is a further driver for virtualisation today as data centre power costs are significant to any large organisation and a reduction in the number of servers needing power and cooling offers a direct saving in bottom line costs. Studies indicate savings in energy costs through server virtualisation can reach 75% for a typical mid-sized organisation.

What are the risks?

Segregation of systems and data

Security around the standard organisational architecture is quite well understood, with acceptance that there are layers of security based on risk or impact levels, or around the value placed on particular assets – a key end result being separation: keep unauthorised individuals out, restrict an individual’s access to that which they need to do their job and host data of differing sensitivity on networks with different levels of protection.

Virtualisation can break this model – imagine your HR database and customer database virtualised onto the same environment as your web servers. All of a sudden the data which was once hidden behind layers of firewalls and network controls may be vulnerable to a smart attacker who gains access to the web server and finds an exploitable vulnerability in the virtualisation layer.

The same is true for servers hosted for customers – in the past, good practice kept these separate where possible. Now the segregation between different customers may be no more than between adjacent areas of memory or hard disk. This may not be appropriate for environments with differing risk, threat or impact ratings, for example external web servers and internal databases.

This also means that all servers or applications in the same environment should be secured to the same level, which means patch management becomes very important.

Segregation of duties

Consolidation of systems also introduces a risk of a loss of segregation of duties (SOD). It increases the risk that both system administrators and users could (even unintentionally) gain access to data above their normal privilege levels. Administration of the internal virtual switch is also a concern, as it introduces a conflict between server administrators and network administrators and potentially compromises the SOD between the two groups.

Licensing and asset management

The ability to rapidly deploy new virtual infrastructure at short notice raises challenges around licensing. Virtualising applications and servers is often implemented by copying an instance of a server, and this instance is recreated on the virtual host. When further capacity is required new instances are created, but as the number of running instances can vary over time, the licensing model has to be updated to ensure that an organisation only runs applications it owns a licence for (or that don’t require a licence in a virtual environment).

This applies also to inventory – a virtual server is just as much of an asset to the business as a physical server, but with the number of virtual servers fluctuating with demand, how do you manage your assets appropriately?

Resilience

Although virtualisation can improve resiliency, there is also a risk if implementation is flawed.  If several key services are hosted within the same physical device in a single virtual environment then failure of the hardware device would impact several services simultaneously.

When consolidating servers, consideration should be given to distributing business critical applications with high availability requirements across physical server instances. There should be no single point of failure for a service and the use of anti-affinity rules should be considered to ensure resilient virtual machines within a service are kept on separate infrastructure hosts.

Maturity of tools and security awareness

While some virtual environments (such as the IBM LPAR architecture on mainframes) are considered mature, there is now a range of products in this space which have not been through the same level of development and which run on platforms less suited to multi-user segregation. And while there has been large-scale development of PaaS platforms based on Xen, VMware and others, treating these as secure by default may not be appropriate for your business.

Another major risk around security awareness is that virtualisation is moving forward in some organisations without the knowledge or active participation of Information Security.

Administrators need to look at the particular security requirements prior to building a virtual environment in order to use the correct tools and security configuration to support the business requirements in an appropriate manner.

Information Security involvement is critical at these early planning and architecture decision stages.

Communication and Storage Security

Sensitive data being transmitted across networks should be encrypted where possible and supported by the hypervisor. Encryption should be implemented for traffic between the hypervisor, management networks and remote client sessions, and MUST be implemented for virtualised DMZ environments.

Virtual disks are used to store the operating system, applications and data and also record the state of the virtual machine. Virtual disks (including backed up copies) must be protected from unauthorised modification.

Where virtual disk password protection features exist, they should be enabled.

Restrict disk shrinking features which allow non-privileged users and processes to reclaim unused virtual disk space as this may cause denial of service to legitimate users and processes.

Auditing, logging and monitoring

Enable security logging and monitoring on all virtual machines and hypervisors according to approved operational standards.

Hypervisors must be configured, at a minimum, to log failed and successful login attempts to management interfaces.

Activities of privileged hypervisor and VM accounts (for example, the creation, modification and deletion of roles) must be logged and reviewed.

Summary

  • Understand all the key IT controls in the existing environment before virtualisation
  • Understand virtual platform controls and configure for security
  • Replicate IT controls in the virtualised environment
  • Implement an asset management system which can cope with high volatility
  • Work with software and hardware vendors to understand licensing implications
  • If outsourcing to a cloud vendor, choose one that can match your data location requirements and build in a robust reporting framework
  • Rearrange your support teams to suit the new environment, but during the transition it is likely that the learning curve will be steep as new tools are used, and back office and front office support teams are created anew
  • Use a compliance and governance model which can manage the concepts of changing security boundaries and levels
  • Ideally work with a service provider who has done it before!

How long is a password string?

2011-10-18 by grahamlee. 1 comment

Password policy questions are a perennial fixture of IT Security stack exchange, and take many forms.

Take, for example, the recent XKCD comic on the subject:

Shortly after it was posted, user Billy ONeal asked directly whether the logic was sound: Is a short complex password or a long dictionary passphrase better? You will find answers that support either conclusion, as well as answers that put the trade-offs involved into context.

Of course, part of knowing how easy a password is to crack is knowing how a password cracker works. Are there state of the art techniques or theory specifically for attacking pass phrases? Are there lists of the most common words or ngrams used in passwords and pass phrases?

So we’ve got no consistent idea about what constitutes a “good” password, although we can probably guess which weak passwords will fall first. But then who is responsible for deciding a user’s password strength? Pretend for a moment that we agree what a strong password is. Should we stop users from choosing “weak” passwords, or is it their own fault for not understanding information theory and the entropy content of their chosen string of characters?
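To make the information-theory point a little more concrete, here is a minimal sketch of the entropy arithmetic behind the comic. The pool sizes and word-list length are assumptions chosen for the example, and a human-chosen “complex” password is rarely uniformly random, which is precisely the comic’s argument:

    import math

    def entropy_bits(pool_size: int, length: int) -> float:
        """Entropy, in bits, of `length` symbols drawn uniformly at random from a pool."""
        return length * math.log2(pool_size)

    # 8 characters drawn uniformly from ~94 printable ASCII characters.
    short_complex = entropy_bits(94, 8)

    # 4 words drawn uniformly from a 2048-word list ("correct horse battery staple" style).
    passphrase = entropy_bits(2048, 4)

    print(f"8-character random password: ~{short_complex:.0f} bits")
    print(f"4-word random passphrase:    ~{passphrase:.0f} bits")
    # Note: human-chosen "complex" passwords follow predictable patterns, so their
    # effective entropy is far lower than the uniform figure computed here.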

The definitive answer is “it depends”. It depends on how valuable the assets being protected by password access are. It depends on whether the value is appreciated by you as service provider/systems administrator, or by the user as customer/asset owner (or by a combination of those roles, or by someone else). It depends on how much inconvenience you’re willing to put your users to, and on how much inconvenience your users are willing to accept.

So, you’ve decided that you do want to enforce a policy. How does that work? Some websites enforce a maximum password length – is that a good idea? Should passwords be truly random?

To summarise the discussion this far: there are different ideas of what makes a good password, of who is responsible for deciding whether a password is good or bad, and of how to enforce good passwords once you do decide you want to. But all of this discussion has already put the cart before the horse. Are your passwords going to be brute-forced, or will the attacker use a key logger? Maybe they’ll attack the password where it’s stored?

Indeed, while we’re asking the big questions, why passwords at all? Might biometric identification be better? Why not just forget all of this complication over strong passwords and start taking fingerprints instead? Some reasons include the possibility of false positives from the biometrics system (who hasn’t tried holding up a photograph to a facial recognition system?), and the icky disgustingness of some other attacks.

Suffice it to say that the problem of password security is a complex one, but one that the denizens of security.stackexchange.com enjoy tackling.

QotW #12: How to counter the statement: “You don’t need (strong) security if you’re not doing anything illegal”?

2011-10-10 by roryalsop. 0 comments

Ian C posted this interesting question, which does come up regularly in conversation with people who don’t deal with security on a daily basis, and seems to be highlighted in the media for (probably) political reasons. The argument is “surely as long as you aren’t breaking the law, you shouldn’t need to prevent us having access – just to check, you understand”

This can be a very emotive subject, and it is one that has been used and abused by various incumbent authorities to impose intrusions on the liberty of citizens, but how can we argue the case against it in a way the average citizen can understand?

Here are some viewpoints already noted – what is your take on this topic?

M’vy made this point from the perspective of a business owner:

Security is not about doing something illegal, it’s about someone else doing something illegal (that will impact you).

If you don’t encrypt your phone calls, someone could know about what all your salesman are doing and can try to steal your clients. If you don’t shred your documents, someone could use all this information to mount a social engineering attack against your firm, to steal R&D data, prototype, designs…

Graham Lee supported this with a simple example:

 Commercial confidential data…could provide competitors with an advantage in the marketplace if the confidentiality is compromised. If that’s still too abstract, then consider the personal impact of being fired for being the person who leaked the trade secrets.

So we can easily see a need for security in a commercial scenario, but why should a non-technical individual worry? From a more personal perspective, Robert David Graham pointed this out

 As the Miranda Rights say: “anything you say can and will be used against you in a court of law”. Right after the police finish giving you the Miranda rights, they then say “but if you are innocent, why don’t you talk to us?”. This leads to many people getting convicted of crimes because it is indeed used against them in a court of law. This is a great video on YouTube that explains in detail why you should never talk to cops, especially if you are innocent: http://www.youtube.com/watch?v=6wXkI4t7nuc

Tate Hansen‘s thought is to ask,

“Do you have anything valuable that you don’t want someone else to have?”

If the answer is Yes then follow up with “Are you doing anything to protect it?”

From there you can suggest ways to protect what is valuable (do threat modeling, attack modeling, etc.).

But the most popular answer by far was from Justice:

You buy a lock and lock your front door if you live in a city, in close proximity to hundreds of thousands of others. There is a reason for that. And it’s the same reason why you lock your Internet front door.

Iszi asked a very closely linked question “Why does one need a high level of privacy/anonymity for legal activities”, which also inspired a range of answers:

From Andrew Russell, these 4 thoughts go a long way to explaining the need for security and privacy:

If we don’t encrypt communication and lock systems then it would be like:

  • Sending letters with transparent envelopes.
  • Living with transparent clothes, buildings and cars.
  • Having a webcam for your bed and in your bathroom.
  • Leaving unlocked cars, homes and bikes.

And finally, from the EFF’s privacy page:

Privacy rights are enshrined in our Constitution for a reason — a thriving democracy requires respect for individuals’ autonomy as well as anonymous speech and association. These rights must be balanced against legitimate concerns like law enforcement, but checks must be put in place to prevent abuse of government powers.

A lot of food for thought…

Risk Assessments: Knowing What to Protect

2011-10-04 by Jeff Ferland. 0 comments

A company’s data is as much a risk as it is an asset. The way that customer data is stored in modern times requires it to be protected as a vulnerable asset and regarded as an unrealized liability, even if not considered on the annual financial statement. In many jurisdictions this is enforced by legislation (e.g. the Data Protection Act 1998 in the UK).

One of the challenges of this as a modern concern lies in knowing where your data is.

To make a comparable example, consider a company that owns its place of operation. That asset must be protected with an investment both for sound business sense and by regulatory requirement. Fire protection, insurance, and ongoing maintenance are assumed costs that are considered all but mandatory. A company that isn’t covering those costs is probably not standing on all its legs. The loss of a primary building because of inadequate investment in protection and maintenance could be anywhere from a major event to fatal to the company.

It may seem a surreal comparison, but data exposure can have an impact as substantial as losing a primary building. A bank without customer records is dead regardless of the cash on hand. A bank with all its customer records made public is at risk of the same fate. Illegitimate transactions, damage to reputation, liability for customer losses related to disclosure and regulatory reaction may all work together to end an institution.

Consider how earlier this year Sony’s Playstation online gaming network was hacked. In the face of this compromise Sony estimated their present losses at $171 million, suffered extended periods of service downtime, and suffered further breaches in attempting to restore service. Further losses are likely as countries around the world investigate the situation and consider fines. Private civil suits have also been brought in response to the breaches. To put the numbers in perspective, the division responsible for the Playstation network posted a 2010 loss of approximately $1 billion.

In total (across several divisions), Sony suffered at least 16 distinct instances of data loss or unintended access including an Excel file containing sweepstakes data from 2001 that Sony posted online. No compromise was needed for that particular failure — the file was readily available and indexed by Google. In the end, proprietary source code, administrator credentials, customer birthdays, credit card information, telephone numbers and home addresses were all exposed.

In regards to Sony’s breaches, the following question was posed, “If you were called by Sony right now, had 100% control over security, what would you do in the first 24-hours?” The traditional response is isolating breaches, identifying them, restoring service with the immediate attack vectors closed, and shoring up after that. However, from the outside Sony’s issue appears to be systemic. Their systems are varied, complicated, and numerous. Weaknesses appear to be everywhere. They were challenged in returning to a functioning state because they were not compromised in an isolated way. Isolating and identifying breaches was as far as one could get in those first 24 hours. Indeed, a central part of the division’s core revenue business was out of service for roughly 3 weeks.

The root issue behind everything is likely that Sony isn’t aware of their assets. Everything grew in place organically and they didn’t know what to protect. Sony claims it encrypted its credit card data, but researchers have found numbers related to the attack. Other identifying information wasn’t encrypted. Regardless of the circumstances related to the disclosure of credit card data, Sony failed to protect other valuable information that can make it liable for fraud. Damage to Sony’s reputation and regulatory reaction, including Japan preventing restarting the service there at least through July, are additional issues.

The lack of awareness about what data is at risk can impact a business of any size. Much as different types of fire protection are needed to deal with an office environment, outdoor tire storage at an auto shop, a server room and a chemical factory, different types of security need to be applied to your data based on its exposure. Water won’t work on a grease fire, and encrypting a database has no value if a query can be run against it to retrieve all the information without encryption. While awareness is acute about credit card sensitivity, losing names, addresses, phone numbers and dates of birth can present just as much risk for fraud and civil or regulatory liability. This is discussed in ‘Data Categorisation – critical or not?’ and should be a key step in any organisation’s plans for resilience.

A Risk-Based Look at Fixing the Certificate Authority Problem

2011-08-31 by Jeff Ferland. 3 comments

Bogus SSL certificates are not a new problem and incidents can be found going back over a decade. With a fake certificate for Google.com in the wild, they’re in the news again today. My last two posts have touched on the SSL topic, and Moxie Marlinspike’s Convergence.io software is being mentioned in these news articles. At the same time, Dan Kaminsky has been pushing for DNSSEC as a replacement for the SSL CA system. Last night, Moxie and Dan had it out 140 characters at a time on Twitter over whether DNSSEC for key distribution was wise.

I’m going to do two things with the discussion: I’m going to cover the discussion I said should be had about replacing the CA system, and I’m going to try and show a risk-based assessment to justify that.

The Risks of the Current System

Risk: Any valid CA chain can create a valid certificate. Currently, there are a number of root certificate authorities. My laptop lists 175 root certificates, and these include names like Comodo, DigiNotar, GoDaddy and Verisign. Any of those 175 root certificates can sign a trusted chain for any certificate and my system will accept it, even if it has previously seen a different unexpired certificate and even if that certificate was issued by a different CA. A greater exposure for this risk comes from intermediate certificates. These exist where the CA has signed a certificate for another party that has flags authorizing it to sign other keys and maintain a trusted path. While Google’s real root CA is Equifax, another CA signed a valid certificate that would be accepted by users.

Risk mitigation: DNSSEC. DNSSEC limits the exposure we see above. In the DNSSEC system, *.google.com can only be valid if the signature chain is the DNS root, then the “com.” domain. If Google’s SSL key is distributed through DNSSEC, then we know that the key can only be inappropriate if the DNS root or top-level domain was compromised or is misbehaving. Just as we trust the properties of an SSL key to secure data, we trust it to maintain a chain, and from that we can say that there is no other risk of a spoofed certificate path if the only certificate innately trusted is the one belonging to the DNS root. Courtesy of offline domain signing, this also means that hosting a root server does not provide the opportunity to serve malicious responses, even by modifying the zone files (they would become invalid).
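As a rough sketch of what distributing a site’s key through the signed DNS tree could look like from the client side, here is how one might compare the certificate a server presents against a TLSA record (the later-standardised DANE record type) using the third-party dnspython library. The hostname is illustrative, the record is assumed to use selector 0 and matching type 1 (a SHA-256 digest of the full certificate), and the comparison only means something if the DNS answers are themselves DNSSEC-validated:

    import hashlib
    import ssl

    import dns.resolver  # third-party: dnspython

    host = "www.example.com"  # illustrative hostname

    # The certificate the server actually presents, converted from PEM to DER.
    pem_cert = ssl.get_server_certificate((host, 443))
    observed = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem_cert)).hexdigest()

    # The digest(s) published in DNS for this service (assumed "3 0 1 <sha256>" records).
    answers = dns.resolver.resolve(f"_443._tcp.{host}", "TLSA")
    published = {
        record.cert.hex()
        for record in answers
        if record.selector == 0 and record.mtype == 1  # full certificate, SHA-256
    }

    if observed in published:
        print("Presented certificate matches the key material published in DNS")
    else:
        print("Mismatch: the presented certificate is not the one published in DNS")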

Risks After Moving to DNSSEC

Risk: We distrust that chain. We presume from China’s actions that China has a direct interest in being able to track all information flowing over the Internet for its users. While the Chinese government has no control over the “com.” domain, they do control the “cn.” domain. Visiting google.cn. does not mean that one aims to let the Chinese government view their data. Many countries have enough resources to perform attacks of collusion where they can alter both the name resolution under their control and the data path.

Risk mitigation: Multiple views of a single site. Convergence.io is a tool that allows one to view the encryption information as seen from different sites all over the world. Without Convergence.io, an attacker needs to compromise the DNS chain and a point between you and your intended site. With Convergence.io, the bar is raised again so that the attacker must compromise both the DNS chain and the point between your target website and the rest of the world.
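A minimal sketch of that multi-perspective idea, assuming the notary responses have already been fetched out of band (the notary hostnames and fingerprints below are placeholders, not real Convergence notaries):

    import hashlib
    import ssl

    host = "www.example.com"  # illustrative hostname

    # Fingerprint of the certificate as seen from our own network position.
    pem_cert = ssl.get_server_certificate((host, 443))
    local_fp = hashlib.sha256(ssl.PEM_cert_to_DER_cert(pem_cert)).hexdigest()

    # In a real deployment these would be fetched from notaries in different networks;
    # here they are placeholder values standing in for those responses.
    notary_fps = {
        "notary-eu.example.net": "placeholder-fingerprint-1",
        "notary-us.example.net": "placeholder-fingerprint-2",
    }

    agreeing = sum(1 for fp in notary_fps.values() if fp == local_fp)
    if agreeing == len(notary_fps):
        print("Every vantage point sees the same certificate")
    elif agreeing == 0:
        print("No notary agrees with our view: likely an attack near the client")
    else:
        print("Notaries disagree with each other: possible attack near the server")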

Weighing the Risks

The two threats we have identified require gaining a valid certificate and compromising the information path to the server. To be a successful attack, both must occur together. In the current system, the bar for getting a validly signed yet inappropriately issued certificate is too low. The bar for performing a Man-in-the-Middle attack on the client side is also relatively low, especially over public wireless networks. DNSSEC raises the certificate bar, and Convergence raises the network bar (your attack will be detected unless you compromise the server side).

That seems ideal for the risks we identified, so what are we weighing? Well, we have to weigh the complexity of Convergence. A new layer is being added, and at the moment it requires active participation by the user. This isn’t weighing a risk so much as weighing how much we think we can benefit. We also must remember that Convergence doesn’t raise the bar for one who can perform a MITM attack that is between the target server and the whole world.

Weighing DNSSEC, Moxie drives the following risk home for it in key distribution: “DNSSEC provides reduced trust agility. We have to trust VeriSign forever.” I argue that we must already trust the DNS system to point us to the right place, however. If we distrust that system, be it for name resolution or for key distribution, the action is still the same — the actors must be replaced, or the resolution done outside of it. The counter to Moxie’s statement, in implementing DNSSEC for SSL key distribution, is that we’ve made the problem an entirely social one. We now know that if we see a fake certificate in the wild, it is because somebody in the authorized chain has lost control of their systems or acted maliciously. We’ve reduced the exposure to only include those in the path — a failure with the “us.” domain doesn’t affect the “com.” domain.

So we’re left with the social risk that we have a bad actor who is now clearly identified. Convergence.io can help to detect that issue even if it only enjoys limited deployment. We are presently ham-strung by the fact that any of a broad number of CAs and ever-broader number of delegates can be responsible for issuing “certified lies,” and that still needs to be reduced.

Making a Decision

Convergence.io is a bolt-on. It does not require anybody else to change except the user. It does add complexity that all users may not be comfortable with, and that may limit its utility. I see no additional exposure risk (in the realm of SSL discussion, not in the realm of somebody changing the source code illicitly) from using it. To that end, Moxie has released a great tool and I see no reason to not be a proponent of the concept.

Moxie still argues that DNSSEC has a potential downside. The two-part question is thus: is reducing the number of valid certification paths for a site to one an improvement? When we remove the risk of an unwanted certification of the site, is the new risk that we can’t drop a certifier a credible increase in risk? Because we must already trust the would-be DNSSEC certifier to direct us to the correct site, the technical differences in my mind are moot.

To put into a practical example: China as a country can already issue a “valid” certificate for Google. They can control any resolution of google.cn. Whether they control the sole key channel for google.cn. or any valid key channel for google.cn., you can’t reach the “true” server and certificate/key combination without trusting China. The solution for that, whether it be the IP address or the keypair, is to hold them accountable or find another channel. DNSSEC does not present an additional risk in that regard. It also removes a lot of ugliness related to parsing X.509 certificates and codes (which includes such storied history as the “*” certificate that worked on anything before browser vendors hard-coded it out). Looking at the risks presented and arguments from both sides, I think it’s time to start putting secure channel keys in the DNS stream.