Posts Tagged ‘QOTW’

QotW #13: Standards for server security, besides PCI-DSS?

2011-11-11 by roryalsop. 0 comments

The question of the week this week was asked by nealmcb in response to the ever-widening list of standards that apply in different industries. The Financial Services industry has a well-defined set of standards, including the Payment Card Industry Data Security Standard (PCI-DSS), which focuses specifically on credit card data and primary account numbers, but neal’s core question is this:

Are there standards and related server certifications that are more suitable for e.g. web sites that hold a variety of sensitive personal information that is not financial (e.g. social networking sites), or government or military sites, or sites that run private or public elections?

This question hasn’t inspired a large number of answers, which is surprising, as complying with security standards is becoming an ever more important part of running a business.

The answers that have been provided are useful, however, with links to standards from the following:

From Gabe:

Of these, the CIS standards are being used more and more in industry, as they provide a simple baseline which should be altered to fit circumstances, but which is a relatively good starting point out of the box.

Jeff Ferland provided a longer list:

And as I tend to be pretty heavily involved with the ISF, I included a link to the Standard of Good Practice which is publicly available and is exactly what it sounds like: rational good practice in security.

From all these (and many more) it can be seen that there is a wide range of standards, each with a different focus on security – which supports this.josh‘s comment:

As is often noted in questions and answers on this site, the solution depends on what you are protecting and who you are protecting it from. Even similar industries under different jurisdictions may need different protections. Thus I think it makes sense for specific industries and organizations to produce their own standards.

A quick look at questions tagged Compliance shows discussion of the Data Protection Act, HIPAA, FDA and SEC guidelines, RBI, and more.

If you are in charge of IT, Information Security, Audit or Risk, it is essential that you know which standards are appropriate to you: which are mandatory, which are optional, which may be required by a business partner, and so on. To be honest, it can be a bit of a minefield.

The good thing is – this is one of the areas where the Stack Exchange model works really well. If you ask the question “Is this setup PCI compliant?” there are enough practitioners, QSAs and experienced individuals on the site that an answer should be very straightforward. Of course, you would still need a QSA to accredit it, but as a step towards understanding what you need to do, Security.StackExchange.com proves its worth.

QotW #12: How to counter the statement: “You don’t need (strong) security if you’re not doing anything illegal”?

2011-10-10 by roryalsop. 0 comments

Ian C posted this interesting question, which does come up regularly in conversation with people who don’t deal with security on a daily basis, and seems to be highlighted in the media for (probably) political reasons. The argument is “surely as long as you aren’t breaking the law, you shouldn’t need to prevent us having access – just to check, you understand”

This can be a very emotive subject, and it is one that has been used and abused by various incumbent authorities to impose intrusions on the liberty of citizens, but how can we argue the case against it in a way the average citizen can understand?

Here are some viewpoints already noted – what is your take on this topic?

M’vy made this point from the perspective of a business owner:

Security is not about doing something illegal, it’s about someone else doing something illegal (that will impact you).

If you don’t encrypt your phone calls, someone could know what all your salesmen are doing and could try to steal your clients. If you don’t shred your documents, someone could use all that information to mount a social engineering attack against your firm, to steal R&D data, prototypes, designs…

Graham Lee supported this with a simple example:

 Commercial confidential data…could provide competitors with an advantage in the marketplace if the confidentiality is compromised. If that’s still too abstract, then consider the personal impact of being fired for being the person who leaked the trade secrets.

So we can easily see a need for security in a commercial scenario, but why should a non-technical individual worry? From a more personal perspective, Robert David Graham pointed this out:

As the Miranda Rights say: “anything you say can and will be used against you in a court of law”. Right after the police finish giving you the Miranda rights, they then say “but if you are innocent, why don’t you talk to us?”. This leads to many people getting convicted of crimes because what they say is indeed used against them in a court of law. There is a great video on YouTube that explains in detail why you should never talk to cops, especially if you are innocent: http://www.youtube.com/watch?v=6wXkI4t7nuc

Tate Hansen‘s thought is to ask,

“Do you have anything valuable that you don’t want someone else to have?”

If the answer is Yes then follow up with “Are you doing anything to protect it?”

From there you can suggest ways to protect what is valuable (do threat modeling, attack modeling, etc.).

But the most popular answer by far was from Justice:

You buy a lock and lock your front door if you live in a city, in close proximity to hundreds of thousands of others. There is a reason for that. And it’s the same reason why you lock your Internet front door.

Iszi asked a very closely linked question “Why does one need a high level of privacy/anonymity for legal activities”, which also inspired a range of answers:

From Andrew Russell, these four thoughts go a long way towards explaining the need for security and privacy:

If we don’t encrypt communication and lock systems then it would be like:

  • Sending letters in transparent envelopes.
  • Living with transparent clothes, buildings and cars.
  • Having a webcam for your bed and in your bathroom.
  • Leaving cars, homes and bikes unlocked.

And finally, from the EFF’s privacy page:

Privacy rights are enshrined in our Constitution for a reason — a thriving democracy requires respect for individuals’ autonomy as well as anonymous speech and association. These rights must be balanced against legitimate concerns like law enforcement, but checks must be put in place to prevent abuse of government powers.

A lot of food for thought…

QotW #11: Is it possible to have a key for encryption, that cannot be used for decryption?

2011-09-30 by ninefingers. 0 comments

This week’s question of the week was asked by George Bailey, who wanted to know if it were possible to have a key for encryption that could not be used for decryption. This seems at first sight like a simple question, but underneath it there are some cryptographic truths that are interesting to look at.

Firstly, as our first answerer SteveS pointed out, the process of encrypting data according to this model is asymmetric encryption. Steve provided links to several other answers we have. First up from this list was asymmetric vs symmetric encryption. From our answers there, public key cryptography requires two keys: one that can only encrypt material and another that can decrypt it. As was observed in several answers, when compared to straightforward symmetric encryption, public key cryptography carries a large additional burden and depends heavily on careful mathematics, while symmetric key encryption relies on the confusion and diffusion principles outlined in Shannon’s 1949 Communication Theory of Secrecy Systems. I’ll cover some other points raised in answers later on.

A similarly excellent source of information is what are private and public key cryptography and where are they useful?

So that answered the “is it possible to have such a system” question; the next step is how. This question was asked on the SE network’s Crypto site – how does asymmetric encryption work?. In brief, in the most commonly used asymmetric encryption algorithm (RSA), the core element is a trapdoor function or permutation – a process that is relatively trivial to perform in one direction, but difficult (ideally, impossible, but we’ll discuss that in a minute) to perform in reverse, except for those who own some “insider information” — knowledge of the private key being that information. For this to work, the “insider information” must not be guessable from the outside.
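To make the trapdoor idea concrete, here is a toy sketch in Python using the textbook RSA equations with tiny primes – no padding and no security, purely to illustrate that computing with the public values (n, e) is easy, while reversing the operation needs the private exponent derived from the secret factors:

    # Toy RSA (tiny primes, no padding) -- illustration only, NOT real crypto.
    p, q = 61, 53             # the secret factors: the "insider information"
    n = p * q                 # public modulus (3233)
    e = 17                    # public exponent
    phi = (p - 1) * (q - 1)   # computable only if you can factor n
    d = pow(e, -1, phi)       # private exponent (modular inverse, Python 3.8+)

    message = 42
    ciphertext = pow(message, e, n)    # the easy direction: anyone can encrypt
    recovered = pow(ciphertext, d, n)  # easy only with the trapdoor value d
    assert recovered == message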

This leads directly into interesting territory on our original question. The next linked answer was what is the mathematical model behind the security claims of symmetric ciphers and hash algorithms. Our accepted answer there by D.W. tells you everything you need to know – essentially, there isn’t one. We only believe these functions are secure based on the fact no vulnerability has yet been found.

The problem then becomes: are asymmetric algorithms “secure”? Let’s take RSA as an example. RSA uses a trapdoor permutation, which is raising values to some exponent (e.g. 3) modulo a big non-prime integer (the modulus). Anybody can do that (well, with a computer at least). However, the reverse operation (extracting a cube root) appears to be very hard, except if you know the factorization of the modulus, in which case it becomes easy (again, using a computer). We have no actual proof that factoring the modulus is required to compute a cube root, but more than 30 years of research have failed to come up with a better way. And we have no actual proof either that integer factorization is inherently hard; but that specific problem has been studied for at least 2,500 years, so easy integer factorization is certainly not obvious. Right now, the best known factorization algorithm is the General Number Field Sieve, and its cost becomes prohibitive as the modulus grows (the current world record is for a 768-bit modulus). So it seems that RSA is secure (with a long enough modulus): breaking it would require outsmarting the best mathematicians in the field. Yet it is conceivable that a new mathematical advance may occur any day, leading to an easy (or at least easier) factorization algorithm. The basis for the security claim remains the same: smart people spent time thinking about it, and found no weakness.

Cryptography offers very few algorithms with mathematically proven security (e.g. One-Time Pad), let alone practical algorithms with mathematically proven security; none of them is an asymmetric encryption algorithm. There is no proof that asymmetric encryption can really exist. But there is no proof that hash functions exist, either, and it never prevented anybody from using hash functions.

Blog promotion aficionado Jeff Ferland provided some extra detail in his answer. Specifically, Jeff addressed which cipher setup should be used for actually encrypting the data, noting that the best setup for most real-world scenarios is the combined use of asymmetric and symmetric cryptography, as occurs in PGP, for example, where a transfer key encrypts the data using symmetric encryption and that key, a much smaller piece of data, can be effectively protected by asymmetric encryption; this is often called “hybrid encryption”. The reason asymmetric encryption is not used throughout, aside from speed, is the padding requirement, as Jeff himself and this question over on Crypto.SE discuss.
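A rough sketch of that hybrid setup in Python (assuming the third-party “cryptography” package is available; the key sizes and AES-GCM choice here are illustrative, not PGP’s exact construction):

    import os
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = recipient_key.public_key()

    # Symmetric part: fast, handles data of any size.
    data_key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = AESGCM(data_key).encrypt(nonce, b"a large message...", None)

    # Asymmetric part: only the small 16-byte key is RSA-encrypted, with the
    # OAEP padding the post mentions as a requirement.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(data_key, oaep)

    # The recipient unwraps the key with the private key, then decrypts the data.
    unwrapped = recipient_key.decrypt(wrapped_key, oaep)
    assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == b"a large message..."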

So in conclusion, it is definitely possible to have a key that works only for encryption and not for decryption; it requires mathematical structure, and faith in the difficulty of inverting some of these operations. However, using asymmetric encryption correctly and effectively is one of the biggest challenges in the security field; beyond the maths, private key storage, public key distribution, and key usage without leaking confidential information through careless implementation are very difficult to get right.

QotW #10: Carrying Out an IT Audit

2011-09-23 by Jeff Ferland. 0 comments

This week, a newly-minted master’s degree holder asked on our site, “[How do I go about] Carrying out a professional IT audit procedure?” That’s a really big question in one sentence. In trying to break that down to some parts we can address, let’s look at the perspective people involved in an audit might see.

Staff at an audit client interact with auditors who ask a lot of questions from a script that is usually referred to as a work plan. Many times, they will ask you to open the appropriate settings dialogs or configuration files and show them the configuration details they are interested in. Management at an audit client will discuss beforehand which systems are in scope, and the managers for the auditors will select or create appropriate work plans.

The quality, nature, and scope of work plans vary widely, and a single audit will often involve the use of many different plans. Work plans might be written by the audit company or available from regulatory bodies, standards organizations, or software vendors. One example of a readily-available plan is the PCI DSS 2.0 standard. That plan displays both high-level overviews and mid-level configuration requirements across a broad spectrum. Low-level details would be operating system or application specific. Many audit plans related to banking applications have granular detail about core system configuration settings. Also have a look at this question about audit standards for law firms for an example from a different industry showing similarities and differences.

While some plans are regulatory-compliance related, most are best-practices focused. All plans are written with best practices in mind, and for those who are new to the world of IT security, that’s the most challenging part about them. There is no universal standard; many plans greatly overlap, but still differ. If the auditor is appropriately considering their client’s needs, they’ll almost certainly end up marking parts of plans as not applicable or not compliant yet okay because of mitigating circumstances.

Another challenging point for those new to auditing is the sometimes hard-to-grasp separation between objectives and controls. An objective might be to ensure that each user authenticates to their own account. Controls listed in a work plan might include discussion of password expiry requirements to help prevent shared account passwords (among other things). Don’t get crossed up focusing on the control if the objective is met through some other means – perhaps everybody is using biometric authentication instead. There are too many instances of this in the audit world, and it’s a common mistake among newer auditors.

From a good auditor’s perspective, meeting the goals of the work plan might be considered 60% of the goal of the work. An auditor’s job isn’t complete unless they’re looking at the whole organization. Some examples: to fulfill a question about password change requirements, the auditor should ask an administrator, see the configuration, and ask users about it (“When was the last time the system made you change your password?”). To review a system setting, the auditor should ask to see settings in a general manner, adding detail only as needed: “Can you show me the net effect of policies on this user’s account?” as opposed to “Start->run->rsop.msc”. Users reporting a different password-reset experience than the system configuration shows, or a system administrator who fumbles their way around everything, will never appear in the work plan – but they will be a concern.

With that background in mind, here are some general steps to performing an IT audit procedure:

  • Meet with management and determine their needs. You should understand many of the possible accepted risks before you begin the engagement. For example, high-speed traders may stand to lose more money by implementing a firewall than not.
  • Select appropriate audit plans based on available resources and your own relevant work.
  • Properly review the controls with the objectives they meet in mind. Use multiple references when possible, and always try to confirm any settings directly.
  • Pay attention to the big picture. Things should “feel right.”
  • Review your findings with management and consider their thoughts on them. Many times the apparent severity of something needs to be adjusted.
  • At the end of the day, sometimes a business unit may accept the risk from a weak control, despite it looking severe to you as an auditor. That is their prerogative, as long as you have correctly articulated the risks.

The last part of that – auditors reviewing findings with client management – takes the most finesse, and a skill that is hard to teach. Does your finding really matter? How can you smooth things over and still deliver over 100 findings? At the end of the day, experience and repetition are the biggest part of delivering professional work, whatever the kind of work.

Some further starting points for more detail can be found at http://en.wikipedia.org/wiki/Information_technology_audit_process and http://ithandbook.ffiec.gov/.

QotW #9: What are Rainbow Tables and how are they used?

2011-09-09 by roryalsop. 0 comments

This week’s question, asked by @AviD, turned out to have subtle implications that even experienced security professionals may not have been aware of.

A quick bit of background:

With all the furore about passwords, most companies and individuals know they should have strong passwords in place, and many use system-enforced password complexity rules (e.g. 8 characters, with at least 1 number and 1 special character) – but how can a company actually audit password strength?

John the Ripper was a pretty good tool for this – it would brute-force password hashes or run a dictionary attack on them, and if it broke them quickly, they were weak; if they lasted longer, they were stronger (broadly speaking).

So far so good, but what if you are a security professional emulating an attacker to assess controls? You could run the brute forcer for a while, but this isn’t what an attacker will do – maths has provided much faster ways to get passwords:

Hash Tables and Rainbow Tables

Hash Tables are exactly what the name suggests – tables of hashes generated from every possible password in the space you want, stored along with the password: for example, a table of all DES crypt hashes for unsalted alphanumeric passwords of 8 characters or less. If you manage to get hold of the password hashes from the target, you simply match them against the hashes in this table, and if a password is in the table, you win – the password is there. (This excludes the relatively small possibility of hash collisions, which for most security purposes is irrelevant, as you can still use the “wrong” password if its hash matches the correct one.) The main problem with hash tables is that they get very big very quickly, which means you need a lot of storage space and an efficient table lookup over this space.
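A minimal sketch of the idea in Python (using MD5 and a four-entry wordlist purely for brevity – a real table covers an entire password space):

    import hashlib

    # Precompute: hash every candidate password once, keyed by its hash.
    wordlist = ["password", "letmein", "qwerty12", "trustno1"]
    table = {hashlib.md5(w.encode()).hexdigest(): w for w in wordlist}

    # Attack: a captured hash is then cracked by a single dictionary lookup.
    captured = hashlib.md5(b"letmein").hexdigest()
    print(table.get(captured))  # -> letmein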

Which is where Rainbow Tables come in. @Crunge‘s answer provides excellent detail in relatively simple language to describe the combination of hashing function, reduction function and the mechanism by which chains of these can lead to an efficient way to search for passwords that are longer or more complex than those that lend themselves well to a hash table.

In fact @Crunge’s conclusion is:

Hash tables are good for common passwords, Rainbow Tables are good for tough passwords. The best approach would be to recover as many passwords as possible using hash tables and/or conventional cracking with a dictionary of the top N passwords. For those that remain, use Rainbow Tables.
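To make the hash/reduce chains tangible, here is a toy rainbow-table sketch in Python (four-character lowercase passwords, MD5, short chains; real tables use far larger parameters and handle the chain merges and collisions this sketch ignores):

    import hashlib
    import string

    CHARSET = string.ascii_lowercase
    PW_LEN = 4
    CHAIN_LEN = 100

    def h(pw):
        return hashlib.md5(pw.encode()).hexdigest()

    def reduce_fn(digest, position):
        # Map a digest back into the password space; mixing in the position
        # gives each link of the chain a different reduction function.
        n = int(digest, 16) + position
        pw = []
        for _ in range(PW_LEN):
            pw.append(CHARSET[n % len(CHARSET)])
            n //= len(CHARSET)
        return "".join(pw)

    def chain_end(start):
        pw = start
        for pos in range(CHAIN_LEN):
            pw = reduce_fn(h(pw), pos)
        return pw

    # Build a tiny table: only the chain start/end pairs are stored.
    table = {chain_end(s): s for s in ("arms", "beds", "cars", "dogs")}

    def lookup(target):
        # Guess which position in a chain the target hash occupies, walk
        # forward to the chain's end, and if that end is known, replay the
        # chain from its start to recover the password.
        for pos in range(CHAIN_LEN - 1, -1, -1):
            pw = reduce_fn(target, pos)
            for p in range(pos + 1, CHAIN_LEN):
                pw = reduce_fn(h(pw), p)
            start = table.get(pw)
            if start is None:
                continue
            pw = start
            for p in range(CHAIN_LEN):
                if h(pw) == target:
                    return pw
                pw = reduce_fn(h(pw), p)
        return None

    # Any password appearing inside one of the four chains can be recovered,
    # even though only eight short strings were stored.
    some_pw = reduce_fn(h("arms"), 0)   # the second password in a chain
    print(lookup(h(some_pw)), "==", some_pw)

Note the space/time trade-off: four stored endpoints stand in for four hundred password/hash pairs, at the cost of re-computing chains during lookup.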

@Mark Davidson points us in the direction of resources. You can either generate the rainbow tables yourself using an application like RainbowCrack, or you can download them from sources like The Shmoo Group, the Free Rainbow Tables project website, the Ophcrack project and many other places, depending on what type of hashes you need tables for.

Now from a defence perspective, what do you need to know? 

Two things:

Longer passwords are still stronger against attack, but be aware that if they are too long then users may not be able to remember them. (Correct Horse Battery Staple!)

Salt and Pepper – @Rory McCune describes salt and pepper in this answer:

A simple and effective defence that most password hashing functions provide now is salting – the addition of a per-user “salt” value stored in the database along with the hashed password. It isn’t intended to be secret, but is used to slow down the brute force process and to make rainbow tables impractical to use. Another add-on is what is called a “pepper” value. This is just another random string, but it is the same for all users and is stored with the application code as opposed to in the database. The theory here is that in some circumstances the database may be compromised while the application code is not, and in those cases this could improve the security. It does, however, introduce problems if there are multiple applications using the same password database.
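A short sketch of both ideas in Python (the PBKDF2 parameters and the pepper value here are arbitrary placeholders, not recommendations):

    import hashlib
    import hmac
    import os

    PEPPER = b"application-wide-secret"  # lives with the code, not in the DB

    def hash_password(password, salt=None):
        if salt is None:
            salt = os.urandom(16)  # per-user random salt, stored with the hash
        digest = hashlib.pbkdf2_hmac(
            "sha256", password.encode() + PEPPER, salt, 100000
        )
        return salt, digest

    def verify(password, salt, expected):
        _, digest = hash_password(password, salt)
        return hmac.compare_digest(digest, expected)  # constant-time compare

    salt, stored = hash_password("correct horse battery staple")
    print(verify("correct horse battery staple", salt, stored))  # True

Because every user gets a different salt, a precomputed table would have to be rebuilt per user, which defeats its whole purpose.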

QotW #8: how to determine what to whitelist with NoScript?

2011-09-02 by ninefingers. 2 comments

Our question of the week this week was asked by Iszi, who wanted to know how exactly we should determine what to trust when employing NoScript. In the question itself Iszi raises some valid points: how does somebody know, other than by trial and error, whether scripts from a given site or a third-party site are trustworthy? How does a user determine which parts of the javascript are responsible for which bits of functionality? And how do we do all this without exposing ourselves to the risks of such scripts?

Richard Gadsden suggests one way to approach a solution is for each site to declare what Javascript it directly controls. He notes that such a mechanism could trivially be subverted if the responding page lists any and all javascript as “owned” by the site in question.

Such resource-based mechanisms have been tried and implemented before, albeit for a different problem domain (preventing XSS attacks). Cross-Origin Resource Sharing (W3C, Wikipedia) attempts to do just that by vetting incoming third party requests. However, like HTML-based lists, it does not work well when the trusted end users are “everyone”, i.e. a public web service.
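For illustration, a server opts a resource into cross-origin use by naming the origins it trusts in a response header; a minimal sketch using Python’s standard http.server (the origin is a made-up example):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CORSHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Only pages served from this origin may read the response from a
            # script; a wildcard '*' would open it to everyone -- the weak
            # "trusted end users are everyone" case described above.
            self.send_header("Access-Control-Allow-Origin",
                             "https://trusted.example.com")
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b'{"ok": true}')

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), CORSHandler).serve_forever()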

Zuly Gonzalez discusses a potential solution her startup has been working on – running scripts on a disposable VM. Zuly makes some good points: even with a whitelisted domain, you cannot necessarily trust each and every script that is added to the domain; moreover, after you have made your trust decision, a simple whitelist is not enough without re-vetting the script.

Zuly’s company – if you’re interested, check out her answer – runs scripts on a disposable virtual machine rather than on your computer. Disclaimer: we haven’t tested it, but the premise sounds good.

Clearly, however, such a solution is not available to everyone. Karrax suggested that the best option might be to install plugins such as McAfee SiteAdvisor to help inform users as to which domains they should be trusting. He notes that the NoScript team are beginning to integrate such functionality into the user interface of NoScript itself. This was a feature I did not know I had, so I tried it. According to the trial page, at the time of writing the service is experimental, but all of the linked-to sites provide a lot of information about the domain name and whether to trust it.

This is an area with no single solution yet, and these various solutions are in continuous development. Let’s see what the future holds.

QotW #7: How to write an email regarding IT Security that will be read, and not ignored by the end user?

2011-08-26 by roryalsop. 1 comment

A key aspect of IT and Information Security is the acceptance piece. People aren’t naturally good at this kind of security and generally see it as an annoyance and something they would rather not have to deal with.

Because of this, it is typical in organisations to send out regular security update emails – to help remind users of risks, threats, activities etc.

However it is often the case that these are deleted without even being read. This week’s question: How to write an email regarding IT Security that will be read, and not ignored by the end user? was asked by @makerofthings and generated quite a number of interesting responses, including:

  • Provide a one line summary first, followed by a longer explanation for those who need it
  • Provide a series of options to answer – those who fail to answer at all can be chased
  • Tie in reading and responding to disciplinary procedures – a little bit confrontational here, but can work for items such as mandatory annual updates (I know various organisations do this for Money Laundering awareness etc)
  • Using colours – either for the severity of the notice, or to indicate required action by the user
  • Vary the communications method – email, corporate website, meeting invite etc
  • Don’t send daily update messages – only send important, actionable notices
  • Choose whose mailbox should send these messages – if it is critical everyone reads it, perhaps it should come from the FD or CEO, or the individual’s line manager
  • Be personal, where relevant – users who receive a few hundred emails a day will happily filter or delete boring ones, or those too full of corporate-speak. Impress upon users how it is relevant to them
  • Add “Action Required” in the subject

It is generally agreed that if it isn’t interesting it will be deleted. If there are too many ‘security’ emails, they’ll get deleted. In general, unless you can grab the audience’s attention, the message will fail.

Having said that, another take on it is – do you need to send all these security emails to your users? For example, should they be getting antivirus updates or patch downtime info in an email every week?

Can users do anything about antivirus, or should that be entirely the responsibility of IT or Ops? And wouldn’t a fixed weekly patch window, listed on the internal website, have less of an impact on users?

Designing around the common weak points in security – such as users – for most routine actions means that when you actually do need users to carry out an action, your request makes much more impact.

Associated questions and answers you should probably read:

What are good ways to educate about IT Security in a company?

What policies maximize employee buy-in to security?

QotW #6: When the sysadmin leaves…

2011-08-19 by ninefingers. 1 comment

When we talk about the security of systems, a lot of focus and attention goes on the external attacker. In fact, we often think of “attacker” as some form of abstract concept, usually “some guy just out causing mischief”. However, the real goal of security is to keep a system running, and internal, trusted users can upset that goal more easily and more completely than any external attacker – especially a systems administrator. These are, after all, the men and women you ask to set up your system and defend it.

So we Security SE members took particular interest when user Toby asked: when the systems admin leaves, what extra precautions should be taken? Toby wanted to know what the best practice was, especially where a sysadmin was entrusted with critical credentials.

A very good question, and it did not take long for a reference to Terry Childs to appear – the San Francisco FiberWAN administrator who, in response to a disciplinary action, refused to divulge critical passwords to routing equipment. Clearly, no company wants to find itself locked out of its own network when a sysadmin leaves, whether the split was amicable or not. So what can we do?

Where possible, ensure “super user” privileges can be revoked when the user account is shut down.

From Per Wiklander. The most popular answer by votes so far also makes a lot of sense. The answer itself references the typical Ubuntu setup, where the root password is deliberately unknown to even the installer and users are part of a group of administrators, such that if their account is removed, so is their administrator access. Using this level of granular control you can also downgrade access temporarily, e.g. to allow the systems admin to do non-administrative work prior to leaving.

However, other users soon pointed out a potential flaw in this kind of technical solution – there is no way to prevent someone with administrative access from changing this setup, or adding additional back doors, especially if they set up the box. A further point raised was the need not just to have a corporate policy on installation and maintenance, but also to ensure it is followed and regularly checked.

Credential control and review matters – make sure you shut down all the non-centralized accounts too.

Mark Davidson‘s answer focused heavily on the need for properly revoking credentials after an administrator leaves. Specifically, he suggests:

  • Revoke all credentials belonging to that user.
  • Change any root password or Administrator password of which they are aware.
  • Alter access to other services used on behalf of the business, such as hosting details.
  • Ensure any remote access is properly shut down – including incoming IP addresses.
  • Log and monitor access attempts, to ensure that any possible intrusions are caught.

Security SE moderator AviD was particularly pleased with the last suggestion and added that it is not just administrator accounts that need to be monitored – good practice suggests monitoring all accounts.

It is not just about passwords – do not forget the physical/social dimension.

this.josh raises a critical point. So far, all suggestions have been technical, but there are many other issues that might present a window of opportunity to a sysadmin with a mind to cause chaos. From the answer itself:

  • Physical access is important too. Don’t just collect badges; ensure that any access to cabinets, server rooms, drawers etc are properly revoked. The company should know who has keys to what and why so that this is easy to check when the sysadmin leaves.
  • Ensure the phone/voicemail system is included in the above checks.
  • If the sysadmin could buy on behalf of the company, ensure that by the time they have left they no longer have authority to do this. Similarly, ensure any contracts the employee has signed have been reviewed – ideally, just as you log sysadmin activities, you are also checking contracts as they are signed.
  • Check critical systems, including update status, to ensure they are still running and haven’t been tampered with.
  • My personal favorite: let people know. As this.josh points out, it is very easy as a former employee knowing all the systems to persuade someone to give you access, or give you information that may give you access. Make sure everyone understands that the individual has left and that any suspicious requests are highlighted or queried and not just granted.

Conclusions

So far, that’s it for answers to the question, and no answer has yet been accepted, so if you think anything is missing, please feel free to sign up and present your own answer.

In summary, there appears to be a set of common issues being raised in the discussion. Firstly: technical measures help, but cannot account for every circumstance. Good corporate policy and planning is needed. A company should know what systems a systems administrator has access to, be they virtual or physical, as well as being able to audit access to these systems both during the employee’s tenure and after they have left. Purchasing mechanisms and other critical business functions such as contracts are just other systems and need exactly the same level of oversight.

I did leave one observation out from one of the three answers above: that it depends on whether the departing employee is leaving amicably or not. That is certainly a factor in whether there is likely to be a problem, but a proper process is needed anyway, because the risk to your systems and your business is huge. The difference in this scenario is around closing critical access in a timely manner. In practice this includes:

  • Escorting the employee off the premises
  • Taking their belongings to them once they are off premises
  • Removing all logical access rights before they are allowed to leave

These steps reduce the risk that the employee could trigger malicious code or steal confidential information.

QotW #5: Defending your website

2011-08-12 by roryalsop. 0 comments

This week’s post came from this question: I just discovered major security flaws in my web store!, but covers a wider scope and includes some resources on Security Stack Exchange on defending your website.

Any application you connect to the Internet will be attacked within minutes of plugging it in, so you need to look very carefully at protecting it appropriately. Guess what – many application owners and developers go about this in entirely the wrong way, building the application and then thinking about security at the very end, when it is expensive and difficult to do well.

The recommended approach is to build security in from the start – have a look at this post for a discussion of the Secure Development Lifecycle. This means having security as a core prerequisite, just like performance and functionality, and understanding the risks your new application will bring. Read How do you compare risks from your websites, physical perimeter, staff etc.

To do this you will need developers to write the application securely. Studies show that this is the most cost-effective approach in the long run, as securely developed applications require very little remediation work (they don’t suffer from anywhere near as many security flaws as traditionally developed applications).

So, educating developers is key – even basic awareness can make the difference: What security resources should a white-hat developer follow these days?

Assuming your application has been developed using secure practices and you are ready to go live, see What are the most important security checks for new web applications? Testing your web application, to gain some confidence that the security controls you have implemented are working correctly and effectively, is an essential step before allowing connection to the Internet. Penetration testing and associated security assessments are essential checks.

What tools are available to assess the security of a web application?

The globally recognised open standard on web application security is the Open Web Application Security Project (OWASP), which provides guidance on testing, sample vulnerable applications, and a list of the top ten vulnerabilities. The answers to Is there a typical step-by-step A-Z process for testing a Web site for possible exploits? provide more information on approaches, as do the answers to Books about Penetration Testing.

Once your application is live, you can’t just sit back and relax, as new attacks are developed all the time – being aware of what is being attempted is important. Can I detect web app attacks by viewing my Apache log file? is Apache-specific, but the answers are broadly relevant for all web servers.
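As a trivial illustration of the idea in Python (the pattern list and log path are assumptions, and real detection needs far more than a handful of regexes):

    import re

    # Request substrings that commonly appear in injection and traversal probes.
    SUSPICIOUS = [re.compile(p, re.IGNORECASE) for p in (
        r"union\s+select",   # SQL injection probe
        r"<script",          # reflected XSS attempt
        r"\.\./\.\./",       # directory traversal
        r"/etc/passwd",      # file disclosure attempt
    )]

    def suspicious_lines(path):
        with open(path, errors="replace") as log:
            for line in log:
                if any(p.search(line) for p in SUSPICIOUS):
                    yield line.rstrip()

    for hit in suspicious_lines("/var/log/apache2/access.log"):
        print(hit)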

This is obviously a very quick run-through – but reading the linked questions, and following the related questions links on them will give you a lot of background on this topic.

QotW #4: How can you reliably wipe data from a storage device?

2011-08-03 by Thomas Pornin. 1 comment

Today we investigate the problem of disposing of hardware storage devices (say, hard disks) which may contain sensitive data. The question which prompted this discussion, “Is it enough to only wipe a flash drive once?”, is about Flash disks (SSD) and received some very good answers; here, we will try to look at the wider picture.

Confidentiality Issues and How to Avoid Them

Storage devices grow old; at some point we want to get rid of them, either because they broke down and need to be replaced, or because they became too small with regards to what we want to do with them. A storage device does not shrink over time, but our needs increase. Quite some time ago, I was sharing some disk space with about one hundred co-students, and the disk offered a hefty 120 megabytes. At that time, a colorful GIF picture was considered to be the top of technology. Today, we take for granted that we can store 2-hour long HD videos on a cellphone. The increase in disk space shows no sign of slowing down, so we have a steady stream of old disks (or USB sticks or SD cards or even ZIP/Jaz cartridges for the old timers — I will not delve into floppy disks) to dispose of. The problem is that all these pieces of hardware have been used quite liberally to store data, possibly confidential data. For the home user, think about the browser cookies, the saved passwords, the cryptographic private keys… In a business context, just about any data element could be of value for competitors.

This is a matter of confidentiality. There are two generic ways to deal with it: encryption, and destruction.

Disk Encryption

Disk encryption is about transforming data in a way which is reversible only with the knowledge of a short secret element called a key. We are talking about symmetric encryption here; the key is a simple sequence of, say, 128 bits chosen at random. The confidentiality issue is not totally obliterated, only severely reduced: supposedly, it is easier to deal with the secrecy of a sequence of 128 bits than that of 100 GB worth of data. Encryption can be done either by the disk itself, by the operating system, or by the application.

Application-based encryption is limited to what the application can control. For instance, it may have to fight a bit with the OS about virtual memory (that’s pure memory from the point of view of the application, but the OS is prone to write it to the disk anyway). Also, most applications do not have any encryption feature, and modifying all of them is out of the question (not even counting the closed-source ones, that’s just too much work).

OS disk encryption is more thorough, since it can be applied on the complete disk for all files from all applications. It has a few drawbacks, though:

  • The computer must still be able to boot up; in particular, to read from the disk the code which is used to decrypt data. So there must be at least an unencrypted area on the disk. This can cause some system administration headaches.
  • Performance may suffer, for an internal hard disk. An x86 Core2 CPU at 2.4 GHz can encrypt about 160 MB/s with AES (that’s what my PC does with the well-known OpenSSL crypto library): not only do some SSDs go faster than that, I also have other things to do with my CPU (my OS is multitasking).
  • For external media (USB sticks, Flash cards…), there can be interoperability issues. There is no well-established standard on disk encryption. You could transport some appropriate software on the media itself but most places will not let you install applications as easily.

Encryption on the hardware itself is easier, but you do not really know if the drive does it properly. Also, the drive must keep the key in some way, and you want it to “forget” that key when the media is decommissioned. As Jesper’s answer describes, good encrypted disks keep the key in NVRAM (i.e. RAM with a battery) and can be instructed to forget the key, but this can prove difficult if the disk is broken: if it does not respond to commands anymore, you cannot really be sure that the NVRAM got blanked.
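The same principle can be sketched in software (assuming the third-party “cryptography” package): once the key is destroyed, the bulk ciphertext is unrecoverable, which is the idea behind what is sometimes called cryptographic erase.

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)  # the only secret to manage
    nonce = os.urandom(12)
    sector = AESGCM(key).encrypt(nonce, b"sensitive data block", None)

    # Decommissioning: destroying the 16-byte key "erases" the data, however
    # many gigabytes of ciphertext remain physically readable on the platters.
    key = None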

So while encryption is the theoretically appropriate way to go, it is not complete (you still have to manage the confidentiality of the key) and, in practice, it is not easy. Most of all, it works only if it is applied from day one: encryption can do nothing about data which was written before encryption was envisioned at all. So let’s see what can be done to destroy data.

Data Wiping

Wiping out data is a popular method; but popular does not necessarily mean efficient. The traditional wiping patterns rely on the idea that each data bit will be written exactly over the bit which was at the same logical location on the disk: this was mostly true in 1996, but technology has evolved. Today’s hard disks do not have “track railings” to guide the read/write heads; instead, they use the data itself as a guide. The net result is that the new data may be physically offset from the previous data by a small amount; the old data is still readable “on the edge”.

Also, modern hard disks do not have visibly bad sectors. Bad sectors still exist, but the disk transparently substitutes good sectors for bad ones. This happens dynamically: when a disk writes some data to a sector and detects that the write operation did not go well, it will allocate a new good sector from its spare area and do the write again there. From the point of view of the operating system, this is invisible; the only consequence is that the write took a few more milliseconds than could have been expected. However, the data has been written to the bad sector (admittedly, one or two bits of it may be wrong, but this leaves more than 4,000 genuine bits) and, since the sector is now marked as “bad”, it is forever inaccessible from the host computer. No amount of wiping can do anything about that.

On Flash, the same issue arises, multiplied a thousand times. Flash memory works by “blocks” (a few dozen kilobytes) in which two operations can be done:

  • changing a bit from value 1 to value 0;
  • erasing a whole block: all the bits in the block are set back to 1.

A given block will endure only so many “erase” operations, so Flash devices use wear leveling techniques, in which write operations are scattered all over the place. The “Flash Translation Layer” is a standard wear leveling algorithm, designed to operate smoothly with a FAT filesystem. Wear leveling means that if you try to overwrite a file, the wiping pattern will, most of the time, be written elsewhere. Moreover, some blocks can be declared “bad” and remapped to spare blocks, in a way similar to what magnetic hard disks do (only more often). So some data blocks just linger, forever unreachable by any software wiping.

The basic conclusion is that wiping does not work against a determined attacker. Simply overwriting the whole partition with zeros is enough to deter an attacker who will simply plug the disk in a computer: logically, a single write is enough to “remove” the data, and by working on the partition instead of files, you avoid any filesystem shenanigans. But if your enemies are so cheap that they will limit themselves to the logical layer, then you are lucky indeed.
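That single logical-layer pass is easy to do yourself; a sketch in Python (the device path is a placeholder – running this destroys whatever is on the device, and you need the privileges to open it):

    BLOCK = 1024 * 1024  # write in 1 MiB chunks
    DEVICE = "/dev/sdX"  # placeholder -- double-check before running!

    zeros = bytes(BLOCK)
    with open(DEVICE, "wb") as disk:
        try:
            while True:
                disk.write(zeros)  # overwrite every logical sector with zeros
        except OSError:
            pass  # writing past the end of the device raises an error; done

As the paragraph above notes, this only scrubs what the host can address: remapped sectors and wear-leveled Flash blocks are untouched.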

@nealmcb’s question, How can I reliably erase all information on a hard drive? sparked some excellent discussion including NIST’s guidelines for Media Sanitisation.

Media Destruction

Without encryption (which does not work for data which is already there) and data wiping (which does not work reliably, or at all), the remaining solution is physical destruction. It is not that easy, though; a good sledgehammer swing, for instance, though satisfying, is not very effective for data destruction. After all, this is a hard disk. To destroy its contents, you have to remove the cover (there are often many screws, some of which are hidden, and glue, and rivets on some models) and then extract the platters, which are quite rigid disks. A simple office shredder will choke on those (although it would easily munch through older floppy disks). The magnetic layer is not thick, so mechanical abrasion may do the trick: use a sander or a grinder. Otherwise, dipping the platters in concentrated sulfuric acid should work.

For Flash devices, things are simpler: that’s mostly silicon with small bits of copper or aluminium, and a plastic cover. Just burn it.

Bottom line: media destruction requires resources. In a business environment, this could be a system administrator task, but it will involve extra manpower, safety issues (seriously, a geeky system administrator with access to an acid cauldron or a furnace – isn’t that a bit scary?) and possibly environmental considerations.

Conclusion

Data security must be considered throughout the complete life cycle of storage devices. Whether you go crypto or physical, you must put some thought and resources into it. Most people will simply store old disks and hope the problem will go away of its own accord.