Author Archive

QotW #31: What cryptographic flaw was exploited by Flame, to get its code signed by Microsoft?

2012-07-27 by roryalsop. 0 comments

Community member D.W. nominated this week’s question: What cryptographic flaw was exploited by Flame to get its code signed by Microsoft?

Hendrik Brummerman provided an in depth answer which was subsequently confirmed by updates from Microsoft:

Certificate Purpose

There are multiple purposes a certificate may be used for. For example, it may be used as proof of identity of a person or webserver, for code signing, or to sign other certificates.

In this case a certificate that was intended to sign license information was able to sign code.

It might be as simple as Microsoft not checking the purpose-flag of customer certificates they signed:

Specifically, when an enterprise customer requests a Terminal Services activation license, the certificate issued by Microsoft in response to the request allows code signing.

MD5 collision attack

The reference to an old algorithm might indicate a collision attack on the signing process. There was a talk at CCC 2008 called MD5 considered harmful today – Creating a rogue CA Certificate. In that talk the researchers explained how to generate two certificates with the same hash: they generated a harmless-looking certification request and submitted it to a CA, which signed it and produced a valid certificate for HTTPS servers. But this certificate had the same hash as another, specially generated certificate whose purpose was to act as a CA certificate, so the CA's signature on the harmless certificate was valid for the dangerous one as well. The researchers exploited a weakness in MD5 to generate the collision; in order for the attack to work, they had to predict the information the CA would write into the certificate.

The combination of a collision attack and a misuse of the certificate purpose were both theoretical possibilities before this attack, but the researchers behind the original MD5 collision attack have since published that the Flame attackers used a new variant of the known MD5 chosen-prefix attack.
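To see just how broken MD5's collision resistance is, you can verify a collision yourself: the hex below is the widely published example pair from Wang et al.'s 2004 work – two distinct 128-byte messages with identical MD5 digests – checked here with Python's standard hashlib:

```python
import hashlib

# The classic published MD5 collision pair: two 128-byte messages that
# differ in only six bytes yet produce the same MD5 digest.
m1 = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab58712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e488832571415a085125e8f7cdc99fd91dbdf280373c5b"
    "960b1dd1dc417b9ce4d897f45a6555d535739ac7f0ebfd0c3029f166d109b18f"
    "75277f7930d55ceb22e8adba79cc155ced74cbdd5fc5d36db19b0ad835cca7e3"
)
m2 = bytes.fromhex(
    "d131dd02c5e6eec4693d9a0698aff95c2fcab50712467eab4004583eb8fb7f89"
    "55ad340609f4b30283e4888325f1415a085125e8f7cdc99fd91dbd7280373c5b"
    "960b1dd1dc417b9ce4d897f45a6555d535739a47f0ebfd0c3029f166d109b18f"
    "75277f7930d55ceb22e8adba794c155ced74cbdd5fc5d36db19b0a5835cca7e3"
)

assert m1 != m2  # genuinely different messages...
print(hashlib.md5(m1).hexdigest())  # ...yet both print the same digest
print(hashlib.md5(m2).hexdigest())
```

A chosen-prefix collision, as used against the Terminal Services licensing certificate, is a stronger variant: the two colliding messages can start from two *different* attacker-chosen prefixes, which is what makes forging a certificate with different fields practical.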

Mark Hillick listed a few useful links around the wider problems the antivirus industry has – being a largely reactive industry, its effectiveness is reduced – and a related presentation by Moxie Marlinspike on authentication.

D.W. also provided some useful links for further reading, from Microsoft’s own TechNet and from Ars Technica.

Makerofthings7‘s answer focused on reducing the surface area of public trust – in this instance, it wouldn’t have prevented the attack, as the cert was signed by Microsoft, but it would improve security in general.

Silvercore linked to an excellent blog post on the incident – well worth a read.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #30: Are common passwords at particular risk?

2012-06-29 by roryalsop. 0 comments

User Experience Stack Exchange moderator Ben Brocka asked this question on Security Stack Exchange after the UX community asked whether they should disallow common passwords as a matter of course.

Cyril N‘s top scoring answer focuses on a highly valuable response – educating the individual to behave better. From his answer, two key points arise:

Some UX specialists says that it’s not a good idea to refuse a password. One of the arguments is the one you provide : “but if you ban them, users will use other weak passwords”, or they will add random chars like 1234 -> 12340, which is stupid, nonsensical and will then force the user to go through the “lost my password” process because he can’t remember which chars he added.

and

Let the user enter the password he wants. This goes against your question, but as I said, if you force your user to enter another password than one of the 25 known worst passwords, this will result in 1: A bad User Experience, 2: A probably lost password and the whole “lost my password” workflow. Now, what you can do, is indicate to your user that this password is weak, or even add more details by saying it is one of the worst known passwords (with a link to them), that they shouldn’t use it, etc etc. If you detail this, you’ll incline your users to modify their password to a more complex one because now, they know the risk. For the one that will use 1234, let them do this because there is maybe a simple reason : I often put a dumb password in some site that requests my login/pass just to see what this site provides me.

The only problem with this is called out in a comment by user Polynomial who suggests a hybrid solution:

Reject outright terrible passwords, e.g. “qwerty123”, and warn on passwords that are a derivation of a dictionary word / bad password, e.g. “Qw3rty123” or “drag0n1”
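Polynomial's hybrid policy is easy to prototype: reject anything on a known-worst list outright, and warn when a password is merely a simple derivation of one. The sketch below is illustrative – the banned list and the leet-speak mapping are stand-ins, not taken from any answer:

```python
# Minimal sketch of the hybrid policy. The word list is illustrative;
# a real deployment would load the top-N list from a breach corpus.
WORST_PASSWORDS = {"password", "qwerty", "qwerty123", "12345678", "dragon", "letmein"}

# Substitutions attackers already know about ("leet-speak").
LEET_MAP = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "@": "a"})

def normalise(password: str) -> str:
    """Undo trivial obfuscation: lower-case, strip trailing digits, un-leet."""
    p = password.lower().rstrip("0123456789")
    return p.translate(LEET_MAP)

def check_password(password: str) -> str:
    """Return 'reject', 'warn' or 'ok' under the hybrid policy."""
    if password.lower() in WORST_PASSWORDS:
        return "reject"                      # outright terrible
    if normalise(password) in WORST_PASSWORDS:
        return "warn"                        # derivation of a bad password
    return "ok"

print(check_password("qwerty123"))                     # reject
print(check_password("Qw3rty123"))                     # warn
print(check_password("correct horse battery staple"))  # ok
```

The "warn" branch is where the user education happens: explain why the password is weak and let the user decide, exactly as Cyril N suggests.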

Personally, I like the hybrid solution, because as Iszi points out, most password attacks are conducted offline, where the attacker has a copy of the hash database. In this scenario, dictionary attacks are a very low CPU load, very fast option that is easily automated:

…it is realistic to assume that attackers will target “common” passwords before resorting to brute force. John The Ripper, perhaps one of the most well-known offline password cracking tools, even uses this as a default action. So, if your password is “password” or “12345678”, it is very likely to be cracked in less than a minute.

Dr Jimbob has provided the logical Security answer: It depends! The requirements will be very different for an online banking application and for an application that will only cause minor inconvenience to one end user if compromised. Regulatory requirements may define the level of security protection you require. He also points out that:

Very weak passwords (top 1000) can be randomly attacked online by botnets (even if you use captchas/delays after so many incorrect attempts)

Bangdang also supports disallowing common passwords, and has a final section on the tradeoffs between security and usability, along with the effects of successful compromise, which include fingerpointing and blame.

Tylerl provides some insight from his experience analyzing attack code:

the password ordering is this:

    1. Try the most common passwords first. Usually there’s a list of between 10 and 500 passwords to try
    2. Try dictionary passwords second. This often includes variations like substituting “4” where an “A” would be or a “1” where there was the letter “l”, as well as adding numbers to the end.
    3. Exhaust the password space, starting at “a”, “b”, “c”… “aa”, “ab”, “ac”… etc.

Step 3 is usually omitted, and step 1 is usually attempted on a range of usernames before moving to step 2.
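Tylerl's ordering can be expressed as a generator: the common list first, then dictionary words with the usual manglings, then (rarely) exhaustive search. The lists here are illustrative stand-ins for the real multi-thousand-entry files attack tools ship with:

```python
import itertools
import string

COMMON = ["123456", "password", "12345678", "qwerty"]  # step 1: top-N list
DICTIONARY = ["dragon", "monkey", "shadow"]            # step 2: wordlist

def variations(word):
    """Step-2 mangling: leet substitutions and appended digits."""
    yield word
    yield word.replace("a", "4").replace("l", "1").replace("o", "0")
    for d in range(10):
        yield f"{word}{d}"

def guesses(max_brute_len=0):
    """Yield candidate passwords in the order attack tools try them."""
    yield from COMMON                                  # step 1
    for word in DICTIONARY:                            # step 2
        yield from variations(word)
    alphabet = string.ascii_lowercase                  # step 3 (usually omitted)
    for length in range(1, max_brute_len + 1):
        for combo in itertools.product(alphabet, repeat=length):
            yield "".join(combo)

print(list(itertools.islice(guesses(), 8)))
# ['123456', 'password', '12345678', 'qwerty', 'dragon', 'dr4g0n', 'dragon0', 'dragon1']
```

Note how cheap steps 1 and 2 are compared to step 3: against an offline hash database, the first few thousand guesses fall out almost instantly, which is why common passwords are at particular risk.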

In general, the answers go to show just how pervasive a key problem of the security industry is: the trade-off between usability and security. You could add strong security at every layer, but if the user experience isn’t appropriate, it will not work.

This is why, for major role and access improvement projects, we are seeing significant investment in the people/human capital side of things – helping projects understand human acceptance criteria, reasons for rejection, and the passive blocking of projects which on the face of it seem perfectly logical.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #29: Risks of giving developers admin rights to their own PCs

2012-06-08 by roryalsop. 3 comments

Carolinegordon asked Question of the Week number 29 to try and understand what risks are posed by giving developers admin rights to their machines, as it is something many developers expect in order to be able to use their machines effectively, but that security or IT may deny based on company policy.

Interestingly, for a question asked on a security site, the highest voted answers all agree that developers should be given admin rights:

alanbarber listed two very good points – developer toolsets are frequently updated, so the IT load for implementing updates can be high, and debugging tools may need admin rights. His third point, that developers are more security conscious, I’m not so sure about. I tend to think developers are just like other people – some are good at security, some are bad.

Bruno answered along similar lines, but also added the human aspect in two important areas. Treating developers and sysadmins differently can lead to a divide, and a them-and-us culture, which may impact productivity. Additionally, as developers tend to be skilled in their particular platform, you run the risk of them getting around your controls anyway – which could open up wider risks.

DKNUCKLES made some strong points on what could happen if developers have admin rights:

  • Violation of security practices – especially the usual rule of least privilege
  • Legal violations – you may be liable if you don’t protect code/data appropriately (a grey area at best here, but worth thinking about)
  • Installation of malware – deliberately or accidentally

wrb posted a short answer, but with an important key concept:

The development environment is always isolated from the main network. It is IT’s job to make sure you provide them with what ever setup they need while making sure nothing in the dev environment can harm the main network. Plan ahead and work with management to buy the equipment/software you need to accomplish this.

Todd Dill has a viewpoint which I see a lot in the regulated industries I work in most often – there could be a regulatory requirement which specifies the separation between developers and administrator access. Admittedly this is usually managed by strongly segregating Development, Testing, Staging and Live environments, as at the end of the day there is a business requirement that developers can do their job and deliver application code that works in the timelines required.

Daniel Azuelos came at it with a very practical approach, which is to ask what the difference in risk is between the two scenarios. As these developers are expected to be skilled, and have physical access to their computers, they could in theory run whatever applications they want to, so taking the view that preventing admin access protects from the “evil inside” is a false risk reduction.

This question also generated a large number of highly rated comments, some of which may be more tongue in cheek than others:

  • The biggest risk is that the developers would actually be able to get some work done.
  • Explain to them that the biggest security risk to their network is an angry developer… or just let them learn that the hard way.
  • It should be noted that access to machine hardware is the same as granting admin rights in security terms. A smart malicious agent can easily transform one into the other.
  • If you can attach a debugger to a process you don’t own, you can make that process do anything you want. So debugging rights = admin.

My summary of the various points:

While segregating and limiting access is a good security tenet, practicality must rule – developers need to have the functionality to produce applications and code to support the business, and often have the skills to get around permissions, so why not accept that they need admin rights to the development environment, but restrict them elsewhere?

This is an excellent question, as it not only generated interest from people on both sides of the argument, but they produced well thought out answers which helped the questioner and are of value to others who find themselves in the same boat.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #28: I found the company I work for is putting backdoors into mobile phones

2012-06-01 by roryalsop. 0 comments

Question of the Week number 28 got an astonishing amount of views and answers – it is a very hot topic in the world of privacy and data protection.

User anonymousquery wrote this question, as he is concerned about the ethical implications of such a backdoor, whether it is intentional or not, whilst his employers don’t see it as a big deal because they “aren’t going to use it.”

Oleksi posted the top scoring answer which makes the point which should be raised in any similar circumstance:

Just because they won’t use it, doesn’t mean someone else won’t find it and use it.

It will present a major risk by just existing – if an attacker finds this backdoor, their job is made so much easier. In a comment on this answer, makerofthings7 added the interesting fact that Microsoft have even taken the step of banning harmless Easter Eggs from their software in order to help customers buy in to their Trustworthy Computing concept and to meet government regulations.

Mason Wheeler targeted the question specifically, answering the “What should I do?” part by discussing the moral and ethical responsibility to protect customers from a product with serious security flaws.  He suggests whistleblowing – possibly to the FBI or similar body if it is serious enough!

Martianinvader also pointed out the following important point:

Fixing this issue isn’t just ethical, it’s essential for your company’s survival. It’s far, far better to fix it quietly now than a week after all your users and customers have left you because it was revealed by some online journalist.

Avio pointed out that there are risks to you and your company, and points out another course of action which may be preferable:

And if I were you, I’ll just be very cautious. First, I’ll make really really sure that what I saw was a backdoor, I mean legally speaking. Second, I’ll try in any way to convince the company to remove the backdoor.

Bruce Ediger gave some essential information on protecting yourself – as this is now almost public knowledge, you may get blamed if it is exploited!

With another 17 answers in addition to these, there is a wide range of viewpoints and pieces of advice, but the overall view is that the first thing to do is understand where you stand legally, and where ethical issues come into the equation, then consider the impact of either whistleblowing or staying quiet about the issue before making a decision which may affect your career.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #26: Malicious QR Code and Mitigation

2012-05-04 by roryalsop. 0 comments

This week’s Question of the Week was asked by Purge back in February.  His concern has been echoed in various publications – the worry that scanning one of the common QR codes you see in magazine adverts and on billboards could cause something malicious to happen as most QR scanners on smartphones take you straight to the URL encoded in the QR image. This isn’t a malicious QR (unless you count linking to a particular genre of music malicious) but how would you know?

logicalscope pointed out that a QR code was simply an encoding, so anything you could put in a URL could be encoded in a QR code. This could include XSS, SQL Injection or any other URL based attack.

handyjohn linked to a brief paper over on http://dl.packetstormsecurity.net/papers/attack/attaging.pdf outlining how QR codes could be used to direct victims to an attack website. An attacker could simply print QR code stickers and place them over existing ones on popular advertising hoardings to fool people into going to a site either with malicious code, or that is a spoof of the expected website which can ask for credentials from the victim.

roryalsop focused on the mitigation, which can be very straightforward: rather than send the browser directly to the website, just display the URL that is encoded in the QR image. This way the user can make a decision whether it is a malicious website or not (within the usual bounds for Internet users). Admittedly, logicalscope’s final point – that the QR decoder application could itself have a vulnerability – is also true, but by adding in a user validation step we can at least improve security.
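That display-before-open mitigation can be sketched in a few lines: parse the decoded URL, surface the hostname, and flag patterns commonly used in URL-based attacks. The heuristics below are illustrative, not exhaustive:

```python
from urllib.parse import urlparse

def describe_qr_url(decoded: str) -> str:
    """Build a human-readable prompt for a URL decoded from a QR code,
    instead of opening it in the browser automatically."""
    parts = urlparse(decoded)
    warnings = []
    if parts.scheme not in ("http", "https"):
        warnings.append(f"unusual scheme {parts.scheme!r}")
    if parts.username or parts.password:
        # http://paypal.com@evil.example/ really goes to evil.example
        warnings.append("embedded credentials (often used to spoof hostnames)")
    if any(ch in decoded for ch in "<>'\""):
        warnings.append("characters often used in injection attacks")
    prompt = f"This code points to: {parts.hostname or decoded}"
    if warnings:
        prompt += " -- WARNING: " + "; ".join(warnings)
    return prompt + " -- open it? [y/N]"

print(describe_qr_url("http://user@evil.example/login"))
print(describe_qr_url("https://security.stackexchange.com/"))
```

The important design point is that the user sees the real hostname, not whatever text was printed next to the QR sticker.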

How about storing this one in your phone as a Security Stack Exchange business card – assuming people trust you enough to scan it.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW#25: Introducing QotW

2012-04-27 by roryalsop. 1 comments

What is the QotW?

At nearly 3,500 questions we have a wide variety of topics, answers and styles, and in general when someone comes to the site they are looking for answers to a specific problem, or to give answers to questions in their field so they may not see the vast majority of questions. Question of the Week posts on meta.security.stackexchange.com allow the community to vote for their favourite question to be discussed on the blog. This blog itself is quite young – we have 44 posts published, of which 24 are QotW posts.

Why do we do it?

On the Internet, getting visitors to your site is the key metric – QotW is another avenue to get what we do in front of a wider audience. Our QotW blog posts link to questions, answers, community members and external sites where relevant in order to add context and depth, showcasing our site, and this is demonstrated in our referrer stats: we get good traffic from slashdot, reddit, facebook, twitter as well as Bruce Schneier and Dan Kaminsky’s blogs, and even explainxkcd.com so we are doing something right and gaining visibility.

How do we do it?

@Iszi’s answer here lists the process in detail, but to summarise:

We post a QotW meta question on a Friday to invite ideas for the following week. In order to avoid dupes, we maintain a list of previous questions featured on the blog, as well as those which have been proposed but not yet published.

By Tuesday we have a topic and author decided (typically individuals volunteer in our chat room, the DMZ – feel free to become a volunteer; we can add you in a contributor role on the blog site).

The administrators manage the workflow planning through a Trello workspace.

QotW posts aren’t expected to be in-depth treatises, so drafts are ready by Thursday morning and can be reviewed in time for a midday Friday publication (we’ve gone with UTC timing for this schedule, as we have members from Australia to the west coast of the USA).

Why should you contribute?

First, and most importantly, because you want to. You’ve seen something interesting happening on the site, or have an interesting topic you want to cover and you’d like to share it with the world.

Did we mention it is a nice addition to your careers.stackoverflow.com or LinkedIn profile?

In addition, you help grow the community you are a member of (now over 8000 individuals – a good blog post can more than double the rate of new users joining that day). Your words and name will be attracting the up and coming security experts of tomorrow.

We welcome all contributors to the blog, and the light touch of the QotW posts is a relatively easy way to start security blogging. Seasoned reviewers will be more than happy to assist.

Liked this post? Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #23: Why is it difficult to catch Anonymous/Lulzsec?

2012-04-13 by roryalsop. 2 comments

This week’s question of the week was asked by user claws back in February 2011, and while a lot has happened since then, it is still a very valid question.

The top scoring answer, “What makes you think they don’t get caught”, by atdre, while more of a challenge to the original question, has proven to be quite appropriate: over the last year various alleged members of Anonymous have been caught, some through informants, others through intelligence work. The remainder of the answers focus on the technical and structural reasons why Anonymous continues to be a major force on the Internet.

SteveS, Purge, mrnap, tylerl and others mention the usual ways attackers hide on the Internet: using machines in other countries, generally owned by unwitting individuals who have not protected them sufficiently (this includes botnets – though there are also willing botnets, provided by followers of Anonymous who allow their machines to be used for attacks), and routing through networks such as TOR (The Onion Router). Even if law enforcement try to trace a connection back, they will fail, either because there are too many connections to track, or because some of the connections pass through countries where the Internet Service Providers are unable or unwilling to assist with the trace.

I think Eli hit the nail on the head, however, with “because anonymous can be anyone, literally”. While there is certainly a core group of skilled and motivated individuals, there are many thousands more who will contribute to an attack, and these individuals may differ from one month to the next, as the nature of Anonymous allows people to join and take part as and when they want to, if a particular cause is of interest to them.

The Lulzsec spinoff from Anonymous appeared to be a deliberately short lived group who wanted to do something less political, and more for “the lulz” – focusing on large corporates and security organisations to highlight weaknesses in controls, and nealmcb provided links in comments to articles on this group in particular. In terms of detection, the same comments apply here as to the wider Anonymous group.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com

QotW #21: What should I do when my boss asks me to fabricate audit log data?

2012-03-30 by roryalsop. 0 comments

Asked over on Programmers in January, this question is our 5th highest rated of all time, so it’s obviously resonating with our community.

With businesses the world over reliant on the accuracy, availability and integrity of IT systems and data this type of request demonstrates not only unethical behaviour, but a willingness on the part of the boss to sacrifice the building blocks used to ensure their business can continue.

Behaviour like this, if discovered by an audit team, could lead to much wider and deeper audits being conducted to reassure them that the financial records haven’t been tampered with – never mind the possible legal repercussions!

Some key suggestions from our community:

MarkJ suggested getting it in writing before doing it, but despite this being the top answer, having the order in writing will not absolve you from blame if you do actually go ahead with the action.

Johnnyboats advised contacting the auditor, ethics officer or internal counsel, as they should be in a position to manage the matter. In a small company these roles may not exist, however, or there may be pressure put upon you to just toe the line.

Iszi covered off a key point – knowing about the boss’s proposed unethical behaviour and not reporting the order could potentially put you at risk of being an accessory. He suggests not only getting the order in writing, but contacting legal counsel.

Sorin pointed out that as getting the order in writing may be difficult, especially if the boss knows how unethical it is, the only realistic option may just be to CYA as best you can and leave quietly without making a fuss.

Arjang came at this from the other side – perhaps the boss needs help:

This is not just a case of doing or not doing something wrong cause someone asked you to do it, it is a case of making them realize what they are asking

I am pretty cynical so I’m not sure how you’d do this, but I do like the possibility that the best course of action may be to provide moral guidance and help the boss stop cheating.

Most answers agree on the key points:

  • Don’t make the requested changes – it’s not worth compromising your professional integrity, or getting deeper involved in what could become a very messy legal situation.
  • Record the order – so it won’t just be your word against his, if it comes into question.
  • Get legal counsel – they can provide advice at each stage.
  • Leave the company – the original poster was planning to leave in a month or so anyway, but even if this wasn’t the case, an unethical culture is no place to have your career.

The decision you will have to make is how you report this. At the end of the day, a security professional encounters this sort of thing far too regularly, so we must adhere to an ethical code. In fact, some professional security certifications, like the CISSP, require it!

Liked this question of the week? Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #20: Are Powerline ethernet adapters inherently secure?

2012-03-15 by roryalsop. 0 comments

ZM15 asked this interesting question just before Christmas over on Superuser. It came over to Security Stack Exchange for some security specific input and I was delighted to see it, as I have done a fair bit of work in the practical elements of securing communications – so this blog post may be a tad biased towards my experiences.

For those not in the know, Powerline ethernet is a technology which allows you to transmit ethernet over your existing mains wiring – very useful for buildings which aren’t suitable for running cabling, as all you need to do is pop one of these adapters wherever you want to connect a computer or other ethernet-enabled device, and they will be able to route TCP/IP packets. There are some caveats, of course: the signal really only works on a single phase, so if you have multiple phases in your house the signal may not travel from one to another – although, as DBasnett commented, commercial properties may get around this by deliberately injecting the signal onto all phases.

Early Powerline adapters had very poor signal quality – noise on the mains caused many problems – but since then the technology has improved considerably, partly through increasing the signal strength, but also through improving the filters which allow you to separate signal from mains.

This is where the security problem lies – that signal can travel quite far down wires, and despite fuse boxes offering some resistance to signals, you can often find the signal is retrievable in the neighbour’s house. Damien answered:

I have experienced the signal bleed from my next door neighbor. I … could identify two other powerline adapters using the same network name. I got anywhere between 10 to 20Mbps of throughput between their adapters and mine. I was able to access their router, watch streaming video and see the computers on the network. I also noticed they had gotten IPs on my router also.

This prompted him to enable security.

Tylerl gave an excellent viewpoint, which is as accurate here as it has ever been:

Many of the more expensive network security disasters in IT have come from the assumption that “behind the firewall” everything is safe.

Here the assumption was that the perimeter of the house is a barrier, but it really isn’t.

Along even weirder lines, as is the way with any electrical signal, it will be transmitted to some degree from every wire that carries it, so if you have the right equipment you may be able to pick up the traffic from a vehicle parked on the street. This has long been an issue for organisations dealing in highly sensitive information, so various techniques have been developed to shield against these transmissions, however you are unlikely to have a Faraday cage built into your house. (See the article on TEMPEST over on Wikipedia or this 1972 NSA document for more information)

For similar wireless eavesdropping, read about keyboards, securing physical locations, this answer from Tom Leek and this one from Rook – all pointing out that to a determined attacker, there is not a lot the average person can do to protect themselves.

Scared yet?

Well, unless you have attackers specifically targeting you, you shouldn’t be, as it is very straightforward to enable security that would be appropriate for most individuals, at least for the foreseeable future. TEMPEST shielding should not be necessary and if you do run Powerline ethernet:

Most Powerline adapters have a security option – simple encryption using a shared key. It adds a little overhead to each communication, but as you can now get 1Gbps adapters, this shouldn’t affect most of us. If you need more than 1Gbps, get your property wired.

Liked this question of the week? Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #18: How can we destroy data on a hard drive?

2012-02-16 by roryalsop. 0 comments

Rather than focus on a specific question this week, we have 9 questions related to the destruction of data, 5 of which are specifically about destroying hard drives – in this modern age where everything is recorded, there are good reasons for ensuring data is deleted when required.

So this post will concentrate on destroying the drive itself. For the deletion of data from a storage device, have a look at our blog post for Question of the Week number 4: How can you reliably wipe data from a storage device?

Matthew Doucette asked simply: How do you destroy an old hard drive?

To which Scott Pack produced the following incredibly detailed answer:

When it comes to drive destruction you typically see one of two main fields:

  1. Disk Degaussing
  2. Physical Destruction

Degaussing

Degaussing used to be the norm, but I am not such a big fan. On the plus side it is fast, you’ll normally just dump the disks on a conveyor belt and watch them get fed through the device. The problem is auditability. Since the circuitry is rendered wobbly, you won’t be able to do a spot check of the drives and verify that the data is gone. It is possible, with some level of probability unknown to me, that data could still exist on the platters. Retrieving the data would, without question, be difficult, but the fact still remains that you cannot demonstrate the data is actually gone. As such, most companies now will actually be doing physical destruction.

Physical Destruction

At the low end, say a small box of drives at a time, you’ll have hard drive crushers. They’re often pneumatic presses that deform the platters beyond useful recognition. At the risk of supporting a specific product, I have personally used this product from eDR. It works well, and is very cathartic.

At a larger scale, say dozens or hundreds of disks, you’ll find large industrial shredders. They operate just like a paper shredder, but are designed to process much stiffer equipment. The mangled bits of metal that are left over are barely identifiable as hard drives.

At an even larger scale you can start looking at incinerators that will melt the drives down to unidentifiable lumps of slag. Since most electronics can produce some rather scary fumes and airborne particulates, I would not recommend doing this on your own. No, this is not a good use of your chiminea.

Manual Dis-assembly

If you are dealing with one or two drives at a time, then simple dis-assembly might be sufficient. Most drives these days are largely held together with torx screws, and will come apart with varying levels of difficulty. Simply remove the top cover, remove the platters from the central spindle. Taking a pocket knife, nail file, screwdriver, whatever, have fun scoring both surfaces of each platter. Then dispose of the materials appropriately. I cannot speak to how recoverable the data is afterwards, but it is probably sufficient. The biggest thing to keep in mind is that while most desktop hard drive platters are metal, some are glass. The glass ones shatter quite extravagantly.

Additional Considerations

Before you decide on a destruction method, make sure to identify what kind of data is stored on each device and treat it appropriately. There may be regulatory or legal requirements for information disposal depending on what data is stored on the disk. As an example, see section 8-306 of DoD 5220.22-M.

For hard drive destruction, DoD 5220.22-M section 8-306 recommends: “Disintegrate, incinerate, pulverize, shred, or melt”

All that being said, performing a single pass zero wipe is probably sufficient for your purposes. Modern research indicates that modern hard drives are largely immune to the “magnetic memory” problem we used to see on magnetic tape. I would never bother doing anything more on a household drive unless the drive itself was exhibiting failures.
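The single-pass zero wipe Scott mentions is what `dd if=/dev/zero of=/dev/sdX` does on Unix-like systems. The same idea can be sketched in Python – demonstrated here against an ordinary file standing in for a device, since wiping a real drive would need the device path and root privileges:

```python
import os

def zero_wipe(path: str, chunk_size: int = 1024 * 1024) -> int:
    """Overwrite every byte of `path` with zeros, in place, and flush
    to disk. Returns the number of bytes written."""
    size = os.path.getsize(path)
    written = 0
    with open(path, "r+b") as f:
        while written < size:
            n = min(chunk_size, size - written)
            f.write(b"\x00" * n)
            written += n
        f.flush()
        os.fsync(f.fileno())  # force the data out of the OS cache
    return written

# Demonstration on a throwaway file standing in for a device:
with open("old_disk.img", "wb") as f:
    f.write(b"secret data" * 1000)
print(zero_wipe("old_disk.img"))  # 11000
```

Note the caveat for flash storage: SSD wear-levelling means overwriting a file (or even the whole block device) may leave copies of old data in remapped cells, which is one more argument for physical destruction of sensitive drives.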

Ryan M asked a very similar question – What is the best method of retiring hard drives?

And Scott also gave these 2 excellent points in his answer:

Electrical Scrambling

In the olden days when you had a room packed with tape there were few things better than a big honkin’ degausser for making sure that you knew what left the room. As hard drives supplanted tape, their use simply got transferred to the new medium. The biggest advantage to using a degausser to take care of hard drives is speed. Just pass a box through the unit, ignore the jiggling in your fillings, and walk away with clean drives. The downside is the lack of ability to audit data destruction. As discussed in the Wikipedia article, once a hard drive is degaussed, the drive is mechanically unusable. As such, one cannot spot check the drive to ensure cleanliness. In theory the platters could be relocated to a new device and we cannot state, categorically, that the data will not be accessible.

Wanton Destruction

This is without question my favorite. Not only because we demonstrate, without question, that the data is gone, but the process is very cathartic. I have been known to take an hour or so, dip into the “To Be Destroyed” bin, and manually disassemble drives. For modern hard drives all you need is a torx set and time (possibly pliers). While one will stock up on their magnet collection, this method of destruction is very time consuming. Many companies have developed equipment specifically for hard drive destruction en-masse. These range from large industrial shredders to single unit crushers such as this beauty from eDR. I have personally used that particular crusher, and highly recommend it to any Information Security professional who has had a bit of a rough day.

I’m thinking if I ever need to destroy hard drives, I’ll either blow them up / give them to my kids / use them for target practice or ask Scott to have fun with them.

Dan Beale points out that exactly what approach you take depends on:

  • how sensitive is the information
  • how serious are the attackers
  • do you need to follow a protocol
  • do you need to persuade other people the data has gone

Auditability is essential if you are subject to regulations around data retention and destruction, and for most organisations this will be driven by regulations such as the Data Protection Act 1998 (UK), GLBA or HIPAA (US), and others.