
QoTW #38: What is SHA-3 – and why did we change it?

2012-10-12 by ninefingers. 0 comments

Lucas Kauffman selected this week’s Question of the Week: What is SHA3 and why did we change it? 

No doubt if you are at least a little bit curious about security, you’ll have heard of AES, the advanced encryption standard. Way back in 1997, when winters were really hard, our modems froze as we used them and Windows 98 had yet to appear, NIST saw the need to replace the then mainstream Data Encryption Standard with something resistant to the advances in cryptography that had occurred since its inception. So, NIST announced a competition and invited interested parties to submit algorithms matching the desired specification – the AES Process was underway.

A number of algorithms with varying designs were submitted for the process and three rounds were held, with comments, cryptanalysis and feedback submitted at each stage. Between rounds, designers could tweak their algorithms if needed to address minor concerns; clearly broken algorithms did not progress. This was something of a first for the crypto community – after the export restrictions and the so-called “crypto wars” of the 90s, open cryptanalysis of published algorithms was novel, and it worked. We ended up with AES (and some of you may also have used Serpent or Twofish) as a result.

Now, onto hashing. Way back in 1996, discussions were already underway in the cryptographic community on the possibility of finding a collision within MD5. In practice, MD5 collisions began to be exploited around 2005 to create fake certificate authorities. More recently, the FLAME malware used MD5 collisions to bypass Windows signature restrictions. Indeed, we covered this attack right here on the security blog.

The need for new hash functions has therefore been known for some time. SHA-1 was released as a replacement for MD5. However, as with its predecessor, cryptanalysis began to reveal that finding a SHA-1 collision requires less than a brute-force search. Given that such weaknesses eventually yield practical exploits that undermine cryptographic systems, a hash standard is needed that resists collision-finding.

As of 2001, we have also had available to us SHA-2, a family of functions that as yet has survived cryptanalysis. However, SHA-2 is similar in design to its predecessor, SHA-1, and one might deduce that similar weaknesses may hold.

So, in response and in a similar vein to the AES process, NIST launched the SHA-3 competition in 2007 – in their words, in response to recent improvements in the cryptanalysis of hash functions. Over the past few years, various algorithms have been analyzed and the number of candidates reduced, much like a reality TV show (perhaps without the tears, though). The final round algorithms essentially became the candidates for SHA-3.

The big event this year is that Keccak has been announced as the SHA-3 hash standard. Before we go too much further, we should clarify some parts of the NIST process. The round an algorithm reached determines how much cryptanalysis it will have received – the longer a function stays in the competition, the more analysis it faces. The report on round two candidates does not reveal any suggestion of breakage; rather, NIST selected its final round candidates based on a combination of performance factors and safety margins. Respected cryptographer Bruce Schneier even suggested that perhaps NIST should consider adopting several of the finalist functions as suitable.

That’s the background, so I am sure you are wondering: how does this affect me? Well, here’s what you should take into consideration:

  • MD5 is broken. You should not use it; it has been used in practical exploits in the wild, if reports are to be believed – and even if they are not, there are alternatives.
  • SHA-1 is shown to be theoretically weaker than expected. It is possible it may become practical to exploit it. As such, it would be prudent to migrate to a better hash function.
  • In spite of concerns, the family of SHA-2 functions has thus far survived cryptanalysis. These are fine for current usage (see the sketch after this list).
  • Keccak and selected other SHA-3 finalists will likely become available in mainstream cryptographic libraries soon. SHA-3 is approved by NIST, so it is fine for current usage.
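
In day-to-day code that mostly means reaching for SHA-2 now, and SHA-3 once your libraries ship it. Here is a minimal sketch using Python’s hashlib (the sha3_256 constructor assumes a build recent enough to include it):

import hashlib

data = b"message to be hashed"

# Fine for current usage: SHA-2 (here, the 256-bit variant).
print("SHA-256 :", hashlib.sha256(data).hexdigest())

# Once your library ships it: SHA-3, i.e. the standardised Keccak.
print("SHA3-256:", hashlib.sha3_256(data).hexdigest())

# Broken - do not use MD5 for anything security related.
print("MD5     :", hashlib.md5(data).hexdigest())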

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #27: Open Source vs Closed Source Systems

2012-05-25 by ninefingers. 0 comments

Question of the Week number 27 is a contentious, hotly debated issue in the software world. The question itself was posed by Security.SE user blunders, who quoted the argument often used in the defence of open source as a business model:

My understanding is that open source systems are commonly believed to be more secure than closed source systems. … A common position against closed source systems is that a lack of awareness is at best a weak security measure; commonly referred to as security through obscurity.

and then we hit the question itself:

Question is, are open source systems on average better for security than closed source systems?

We’ll begin with the top voted answer by SE user Jesper Mortensen, who explained that the whole notion of being able to generally compare open versus closed source systems is a bad one when there are so many other factors involved. To compare two systems you really need to look beyond the licensing model they use, and look at other factors too. I’ll quote Jesper’s list in its entirety:

  • Licenses.
  • Access to source code.
  • Very different incentive structures, for-profit versus for fun.
  • Very different legal liability situations.
  • Different, and wildly varying, team sizes and team skillsets.

Of course, this is by no means a complete treatment of the possible differences.

Jesper also highlighted the importance of comparing pieces of software that solve specific domain issues – not software in general. You have to do this to even remotely begin to utilise the list above.

Security.SE legend Thomas Pornin also answered this question. I’ll begin coverage of his answer with his summary:

… the “opensource implies security” idea is overrated. What is important is the time (and skill) devoted to the tracking and fixing of security issues, and this is mostly orthogonal to the question of openness of the source.

The main thrust of Thomas’ answer was that, actually, maintained software is more secure than unmaintained software. As an example, Thomas cited OpenSSL remote execution bugs that had been left lying in the code tree unfixed for some time – highlighting a possible advantage of closed source systems in that, when software is developed by companies, the effort and time spent on QA is generally higher than for open source systems.

Thomas’ answer also covers the counterpoint to this – that closed source systems can easily conceal security issues, too, and that having the source allows you to convince yourself of security more easily.

The next answer was provided by Ori, who lists a set of premises used for justifying the security of open source:

  1. The Customization premise
  2. The License Management premise
  3. The Open Format premise
  4. The Many Eyes premise
  5. The Quick Fix premise

As Ori rightly says, the customization premise means a company can take an open source platform and add an additional set of security controls. Ori quotes NSA’s SELinux as an example of such a project. For companies with the time and money to produce such platforms and make such fixes, this is clearly an advantage for open source systems.

Ori covers the license management and open format premises from a compliance and resilience perspective. Using open source software (and making modifications to it) carries certain license constraints – the potential to violate these constraints is clearly a risk to the business. Likewise, for business continuity purposes the ability not to be locked in to a specific platform is a huge win for any company.

Finally, an answer by yours truly. The major thrust of my answer is succinctly summarised by AviD‘s comment on it:

I’ve always proposed an amendment to Linus’ Law: “Given enough trained eyeballs, most bugs are relatively shallow”

I explained, through use of a rather intriguing vulnerability introduced into development kernels by a compiler bug, that having the knowledge to detect these issues is critical to security. The source being available does not directly guarantee you have the knowledge to detect such issues.

That’s it for answers. As you can see, none of us took sides generally on the “open versus closed” debate, instead pointing out that there are many factors to consider beyond the license under which source is available. I think the whole set of answers is best summarised by this.josh‘s comment on the top voted answer – so I’ll leave you with that:

I agree. What matters most is how many people with knowledge and experience in the security domain actively design, implement, test, and maintain the software. Any project where no-one is looking at security will have significant vulnerabilities, regardless of how many people are on the project.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #22: What are legal/ethical concerns to bear in mind, when hacking websites with open invitations?

2012-04-06 by ninefingers. 2 comments

This week’s question of the week was asked by user Yoav Aner, who wanted to understand the legal and ethical concerns of executing an attack on a web site which carried a notice inviting attacks. Yoav specifically wanted to know what contractual implications, if any, there were and, if none were specified, how far would be too far. This spun off into two further questions – What security measures to have before openly allowing security researchers to hack your site and What security concerns should one bear in mind when hacking open-invitation websites? – so this post will look at all three.

Before we start, I must re-iterate: we are security professionals here, not lawyers, so if in doubt, consult a lawyer.

Legal and ethical concerns:

The answer Yoav accepted was provided by Rory McCune, who raised the point that ultimately a no-holds-barred approach is extremely unlikely to be acceptable in any circumstance. Rory highlighted the importance of ensuring the page in question was written by the administrators of the site, as opposed to being supplied through user content. Unless it is clear that the page was written by someone with the authority to make that kind of invitation, attacking the site would definitely be hostile.

Another excellent point raised in this answer was that some companies actively invite finding bugs in their web sites and products, provided you follow a number of guidelines. More on this can be found in the answer itself.

Finally, Rory touched on what many people may forget – although the website may invite attempted break-ins, it may actually be illegal where you are even to make the attempt.

Our next answer was provided by Security moderator, user and blogger Rory Alsop. According to Rory, one problem when dealing with this kind of issue is that no test cases have yet passed through the courts, so no precedent has been established for dealing with these kinds of issues. Rory also raised the criminal activity point again: always understand that the law of your country/jurisdiction still applies.

Next up, Rory explained that during a penetration test a contract established for the work may include rules about what should happen, how far a test may go, who should be notified if a vulnerability is found etc. Even with this safeguard, there is still the potential for legal action should something break. Rory advised logging absolutely everything that was going on so as to have proof of actions.

When applied to the website scenario, Rory pointed out that in this case there is no signed contract establishing this understanding, just an implied one by which neither party is legally bound.

That is all for answers on this question. I tend to miss questions on ethics on the main site, so writing them up is actually quite interesting. As such, I am going to summarise the key points below:

  1. The rules of engagement are not well established. Given a “feel free to hack this” message, we have no idea to what extent that is actually what they mean. By contrast, penetration testing is usually better scoped.
  2. The author of the page might not have the authority to make such an invitation. As I was reading this, I did wonder – if this is a shared host, the administrator of the site is a different person from the company who owns and maintains the box. So even though the site author can put up this message, they’re not actually entitled to make that call (and it probably violates the ToS on their hosting package).
  3. It may be illegal even to attempt an attack, whether or not the site in question has given you permission.
  4. Some sites actively encourage hunting for bugs.

Security concerns when hacking open-invitation websites:

Iszi raised the following worries in his question:

  • The site could be a honeypot, run by government or other entities looking to gather information about active (or would-be) hackers.
  • The site could be set up by a black-hat as a honeypot to gather a list of interesting, hackable amateurs to target.
  • A third-party black-hat could potentially access the site’s logs and farm them for data about interesting, hackable amateurs to target.

Lucas Kauffman confirmed that he had a school project where he faked an open sendmail relay and

just piped all the incoming emails to a python script that got all the destinations out, generating my own spamlist. I think in the end after about 3 weeks I had close to 300.000 different email addresses.
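
Lucas’s actual script isn’t shown, but as a rough sketch of how little code that kind of harvesting takes (Python, reading a single piped message on stdin and appending its recipients to a list; the filename is made up for illustration):

import sys
from email import message_from_file
from email.utils import getaddresses

# Parse one incoming message handed to us by the fake relay.
msg = message_from_file(sys.stdin)

# Pull every address out of the To/Cc headers and append it to our list.
recipients = msg.get_all("To", []) + msg.get_all("Cc", [])
with open("spamlist.txt", "a") as out:
    for _name, addr in getaddresses(recipients):
        if addr:
            out.write(addr + "\n")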

Rory Alsop focused on the reputational and professional risks, as the host of the site will be able to see everything you did in their logs…do you ever mistype commands, use dir instead of ls, accidentally stray outside the scope of the test? This will be recorded and could negatively impact you.

Think about what you are divulging when hacking a website….

  • Your methodology
  • Your tools
  • Your mistakes?
  • etc

Finally – Yoav also asked a question focusing on the other side,

What security measures should I have in place before inviting people to hack my website?

Ttfd’s answer went into some considerable detail on the practical logistics – how you think about the problem is probably as important as actually implementing security in this situation:

  1. Do you have the money to do that?
  2. Do you have the resources? (servers, security teams and etc)
  3. How far can you limit the damages a hacker can make to your system? I.E. If a hacker hacks into your server what access will he have ? Will he be able to connect to your database and retrieve/store/update data? Is your data encrypted ? Will he be able to decrypt it? (and so on)
  4. Can your security team find how a hacker exploited your system?
  5. Does your security team have the skills to fix problems that may occur ?
  6. Probably many more questions that you need to ask and answer before you decide.

M15K gave a summarised answer

I can’t imagine very many positive scenarios in declaring open season on you’re front door will result in something useful. But let’s say you do, and you do get some positive feedback. Are you and your security team in a position to remediate those vulnerabilities?

Interesting stuff! All in all, this looks like a relatively risky business on both sides, so core to the decision must be a full understanding of the risks, and how these match to your risk appetite. If you are hosting such a site, you may get some valuable information into attack techniques, but you need to protect yourself from an escalation from the attack environment to your own systems. If you are testing the site, think about the risks you may be facing, and plan accordingly.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com

QotW #19: Why can’t a password hash be reverse engineered?

2012-02-24 by ninefingers. 0 comments

Are you a systems administrator of professional computer systems? Well, serverfault is where you want to be and that’s where this week’s question of the week came from.

New user mucker wanted to understand why, if hashing is just an algorithm, it cannot simply be reverse engineered. A fair question and security.SE as usual did not disappoint.

Since I’m a moderator on crypto.se this question is a perfect fit to write up – so much so that I’m going to take a slight detour, define some terms, and explain a little about how hashes work.

A background on the internals of hash functions

First, an analogy for hash functions. A hash function works on one block of data at a time – so when you hash a large file, the function processes it a fixed-size block at a time (the block size depends on the algorithm). It has an initial state – i.e. a starting configuration – which is why the SHA of an empty input still has a value. Each incoming block then alters that state. (Side note: this chaining is part of how collision resistance for long inputs is achieved.)

The analogy in this case is like a bike lock with twisty bits on. Imagine the default state is “1234” and every time you get a number, you alter each of the digits according to the input. When you’ve processed all of the incoming blocks, you then read the number you have in front of you. Hash functions work in a similar way – the state is an array and individual parts of it are shifted, xor’d etc depending on each incoming block. See the linked articles above for more.
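
To make the analogy slightly more concrete, here is a deliberately toy sketch in Python of that “state updated by each incoming block” structure. It is not a real hash function – there is no serious diffusion and it is trivially breakable – it just has the right shape:

def toy_hash(data: bytes, block_size: int = 4) -> int:
    # The initial state - this is why hashing an empty input still gives a value.
    state = 0x1234

    # Process the input one fixed-size block at a time, mixing each block into
    # the state. Real functions use many rounds of shifts, XORs and additions.
    for i in range(0, len(data), block_size):
        block = int.from_bytes(data[i:i + block_size], "big")
        state = ((state ^ block) * 31 + 7) & 0xFFFF  # keep 16 bits of state

    # The final state is the number you read off the lock: the digest.
    return state

print(hex(toy_hash(b"")))             # just the initial state, no blocks processed
print(hex(toy_hash(b"hello world")))  # the same input always gives the same digest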

We can then talk about inputs and outputs on two levels: a single application of the compression function has its own inputs and outputs, and so does the overall process of passing all your data through the hash function.

The answers

The top answer from Dietrich Epp is excellent – a simple example was provided of a function, in this case multiplication, which one can do easily forwards (O(N^2)) but which becomes difficult backwards. Factoring large numbers, especially ones with large prime factors, is a famous “hard problem”. Hash functions rely on exactly this property: it is not that they cannot be inverted, it is just that inverting them is hard.
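
You can get a feel for that asymmetry yourself with a toy sketch (Python; real moduli are hundreds of digits long, not seven):

p, q = 2003, 2017   # two small primes

# Forwards: multiplication is quick and easy.
n = p * q           # 4040051

# Backwards: recovering p and q from n by trial division. Fine at this size,
# but the work grows brutally as n gets longer - the one-way gap the
# multiplication example is pointing at.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(factor(n))    # (2003, 2017)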

Before migration, Serverfault user Coredump also provided a similar explanation. Some interesting debate came up in the comments of this answer – user nealmcb observed that collisions are actually available in abundance. To go back to the mathsy stuff: the set of possible inputs is every piece of data there is, whereas the output is only 256 bits. So there are many really long passwords that map to each valid hash value, but that still doesn’t help you find them.

Neal then answered the question himself to raise some further important issues – from a security perspective, it is important to not think of hashes as “impossible” to reverse.  At best they are “hard”, and that is true only if the hash is expertly designed.   As Neal alludes to, breaking hashes often involves significant computing power and dictionary attacks, and might be considered, to steal his words, “messy” (as opposed to a pretty closed-form inverted function) but it can be done.  And all-too-often, it is not even “hard”, as we see with both the famously bad LanMan hash that the original poster mentioned, and the original MYSQL hash.

Several other answers also provided excellent explanations – one to note, from Mikeazo, is that in practice hash functions are many-to-one, since there are infinitely many possible inputs but only a fixed number of outputs (hash strings). Luckily for us, a well designed hash function has a large enough output space that collisions aren’t a problem.
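
The many-to-one point is easy to demonstrate if you shrink the output space: truncate a real hash to a few bytes and a brute-force search finds a collision almost instantly, whereas with the full 256 bits collisions certainly exist but are computationally out of reach. A quick Python sketch:

import hashlib

def tiny_hash(data: bytes) -> str:
    # Keep only the first 3 bytes (24 bits) of SHA-256: an absurdly small output space.
    return hashlib.sha256(data).hexdigest()[:6]

seen = {}
i = 0
while True:
    msg = b"input %d" % i
    digest = tiny_hash(msg)
    if digest in seen:
        # Two different inputs, same (truncated) output - a collision.
        print("collision:", seen[digest], "and", msg, "->", digest)
        break
    seen[digest] = msg
    i += 1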

So hashes can be inverted?

As a final point on hash functions I’m going to briefly link to this question about the general justification for the security of block ciphers and hash functions. The answer is that even for the best common hashes, no, there is no guarantee of the hardness of reversing them – just as there is no cast-iron guarantee that products of large primes cannot be factored efficiently.

Liked this question of the week? Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #17: What would one need to do in order to hijack a satellite?

2012-02-04 by ninefingers. 0 comments

Slightly later than officially planned, question of the week number 17, a weekly feature on Security Stack Exchange, is a rather unusual but very interesting choice. We’ve featured it by community vote – and because it’s an interesting study of “how to think about security”.

So, without further ado, Security.SE member Incognito asked: What would one need to do in order to hijack a satellite?.

I did warn you! Well, never fear, it turns out our members know exactly how to do it! So without further ado:

Overview

In terms of radio communications security, most satellite communications systems are repeaters, accepting communication from the highest-strength incoming signal. Most satellites also contain a command module allowing the satellite to be ordered to perform certain actions as necessary. Due to the highly custom nature of individual satellites, the commands that are accepted and the security around them are highly variable, so there’s a lot of potential for exploitation. As one of our answerers puts it:

When it comes to satellites, the word general does not apply.

Legal Concerns

Given the wide variety of frequencies and power requirements in use, chances are that attempting to send commands to a satellite will violate local radio laws – as such, we do not recommend it (although we find the study of security very interesting, all the same).

Finding and talking to a satellite

Clearly, if you’re going to communicate with a satellite, you need equipment with sufficient power and range. You’ll need to be aware of the carrier frequency, the maximum satellite range, the data rate and satellite transmitter power. The location and altitude of satellites also matters – some are geostationary and as such are always in range, while others orbit and may only be in range for a specific period of time. Directional antennas with tracking motors will help an awful lot if the satellite changes position at all. Our answers provide even more detailed radio advice and links, so if you’re interested in your radio, do have a read!

Taking control of a satellite

There are several means by which you can take control of a satellite:

  1. Direct comms: If you have identified your target satellite, the most obvious method would be to communicate directly with it, sending it the commands you desire. Depending on the satellite you target your options will vary. You’ll need to be aware of the protocol and options available to you.
  2. MITM: Another option for hijacking a satellite is to identify its command and control – the ground station – and intercept its communications. If you can afford to rent a small plane and fly it over the site, that may give you an advantage.

Doing it legally

It may be possible to purchase satellite time, depending on who you ask – and as such it may be possible to legitimately control a satellite, even if only for a brief period!

The expensive way

Many of the answers given focused on the radio communication protocols – however, Security.SE member and former moderator Graham Lee highlighted the physical security of satellites as a major concern, the only problem being the cost of getting into space. If you can get up there, nudging the satellite enough to alter its antenna direction is enough to deny service – and you may be able to exploit it in other ways while you’re up there. Of course, you don’t necessarily need to go up there yourself – a rocket will do the job adequately well and apparently doesn’t even need explosives!

Summary

Satellite security is an interesting area with many concerns that has perhaps been overlooked in our focus on the security of online stores and the like. Thankfully, people are looking at the security of communications systems that rely on satellites!

This QotW writeup relied on answers from Jeff Ferland, this.josh and Graham Lee primarily. Thanks to all our answerers on this particular question for providing their insights!

Can you improve on these answers? Feel free to visit the question and provide additional detail!

QotW #14: How to secure an environment both physically and technically?

2011-12-16 by ninefingers. 0 comments

So after a slight hiatus we are back running question of the week posts again. This time, chosen by me because we had a tie, is the question asked by user jwegner: How to secure an environment both physically and technically?

An interesting question for anyone working in a scenario that involves processing personal data, where a data leak cannot be afforded. So, as a very quick summary, jwegner had:

  • no local storage
  • used cctv cameras
  • used biometric locks and key-card locks
  • used sftp to transfer data in and out when necessary.

However, jwegner was still concerned about a number of issues including mobile phones, preventing data release when the rules need to be relaxed and the fact that their external gateway ran both an sftp server and an ftp server.

Security.SE responded. Jeff Ferland has the highest voted answer. He recommended not allowing any mobile phones inside the secure area at all, as even in offline mode many phones have data ports and cameras, and said that an internet connection for the “red” zone was a no-go. However, on the subject of achieving no local storage with flash drives, Jeff recommended the opposite, citing loss of the drives as a big potential risk factor. Jeff went on to recommend monitoring USB ports as a necessary precaution, possibly filling them with epoxy – though his answer also notes that many devices are now highly reliant on the USB interface, including keyboards and mice.

On the ftp gateway area, Jeff recommended looking into access control to ensure internal accounts only had read access, and possibly using ProFTPd as opposed to the standard sftp subsystem. Finally, Jeff added an extra detail – using Deep Freeze to ensure machine configuration cannot persist across reboots.

Rory Alsop echoed many of these sentiments in his answer. Over and above Jeff, Rory recommended banning mobiles with very strict consequences for their use inside the secure area as a deterrent – as well as enforcing searches on entry/exit. In addition, he recommended not using ftp at all. Rory also echoed Jeff’s deep freeze, recommending read-only file systems. Finally, his answer mentioned two key points:

  • Internal risks of using ftp – may be worth moving over to sftp to ensure internal traffic is harder to sniff.
  • Staff vetting.

From the comments, an interesting point was raised – blocking cellphones is illegal in the US and may be in other jurisdictions, so whilst detection methods could be used for enforcement, outright blocking may require an in-depth review of the options before proceeding.

So far, these are the only two answers. Our questions of the week aim to highlight potential interesting questions from the community; if you think you can help answer then the link you need is here.

QotW #11: Is it possible to have a key for encryption, that cannot be used for decryption?

2011-09-30 by ninefingers. 0 comments

This week’s question of the week was asked by George Bailey, who wanted to know if it were possible to have a key for encryption that could not be used for decryption. This seems at first sight like a simple question, but underneath it there are some cryptographic truths that are interesting to look at.

Firstly, as our first answerer SteveS pointed out, the process of encrypting data according to this model is asymmetric encryption. Steve provided links to several other answers we have. First up from this list was asymmetric vs symmetric encryption. From our answers there, public key cryptography requires two keys, one that can only encrypt material and another which can decrypt material. As was observed in several answers, when compared to straightforward symmetric encryption, the requirement for the public key in public key cryptography creates a large additional burden that depends heavily on careful mathematics, while symmetric key encryption really relies on the confusion and diffusion principle outlined in Shannon’s 1949 Communication Theory of Secrecy Systems. I’ll cover some other points raised in answers later on.

A similarly excellent source of information is what are private and public key cryptography and where are they useful?

So that answered the “is it possible to have such a system” question; the next step is how. This question was asked on the SE network’s Crypto site – how does asymmetric encryption work?. In brief, in the most commonly used asymmetric encryption algorithm (RSA), the core element is a trapdoor function or permutation – a process that is relatively trivial to perform in one direction, but difficult (ideally, impossible, but we’ll discuss that in a minute) to perform in reverse, except for those who own some “insider information” — knowledge of the private key being that information. For this to work, the “insider information” must not be guessable from the outside.

This leads directly into interesting territory on our original question. The next linked answer was what is the mathematical model behind the security claims of symmetric ciphers and hash algorithms. Our accepted answer there by D.W. tells you everything you need to know – essentially, there isn’t one. We only believe these functions are secure based on the fact no vulnerability has yet been found.

The problem then becomes: are asymmetric algorithms “secure”? Let’s take RSA as an example. RSA uses a trapdoor permutation, which is raising values to some exponent (e.g. 3) modulo a big non-prime integer (the modulus). Anybody can do that (well, with a computer at least). However, the reverse operation (extracting a cube root) appears to be very hard, except if you know the factorization of the modulus, in which case it becomes easy (again, using a computer). We have no actual proof that factoring the modulus is required to compute a cube root; but more than 30 years of research have failed to come up with a better way. And we have no actual proof either that integer factorization is inherently hard; but that specific problem has been studied for at least 2500 years, so easy integer factorization is certainly not obvious. Right now, the best known factorization algorithm is the General Number Field Sieve, and its cost becomes prohibitive when the modulus grows (the current world record is for a 768-bit modulus). So it seems that RSA is secure (with a long enough modulus): breaking it would require outsmarting the best mathematicians in the field. Yet it is conceivable that a new mathematical advance may occur any day, leading to an easy (or at least easier) factorization algorithm. The basis for the security claim remains the same: smart people spent time thinking about it, and found no weakness.
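
To see the trapdoor shape in miniature, here is a toy RSA sketch in Python with absurdly small numbers (never do this for real: real keys use primes hundreds of digits long plus proper padding, and pow(e, -1, phi) needs Python 3.8 or later):

# Two secret primes and the public modulus.
p, q = 61, 53
n = p * q                      # 3233 - published
e = 17                         # public exponent - published

# The "insider information": the private exponent, derived from p and q.
phi = (p - 1) * (q - 1)        # 3120
d = pow(e, -1, phi)            # 2753 - kept secret

message = 65
ciphertext = pow(message, e, n)      # anyone can do this with just (e, n)
recovered = pow(ciphertext, d, n)    # only the holder of d can undo it

print(ciphertext, recovered)   # 2790 65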

Cryptography offers very few algorithms with mathematically proven security (e.g. One-Time Pad), let alone practical algorithms with mathematically proven security; none of them is an asymmetric encryption algorithm. There is no proof that asymmetric encryption can really exist. But there is no proof that hash functions exist, either, and it never prevented anybody from using hash functions.

Blog promotion aficionado Jeff Ferland provided some extra detail in his answer. Specifically, Jeff addressed which cipher setup should be used for actually encrypting the data, noting that the best setup for most real-world scenarios is the combined use of asymmetric and symmetric cryptography, as occurs in PGP, for example: a transfer key encrypts the data using symmetric encryption, and that key – a much smaller piece of data – can effectively be protected by asymmetric encryption; this is often called “hybrid encryption”. The reason asymmetric encryption is not used throughout, aside from speed, is the padding requirement, as Jeff himself and this question over on Crypto.SE discuss.
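
As a hedged sketch of that hybrid pattern, here is what it might look like with the third-party Python cryptography package (assuming it is installed; this uses its RSA-OAEP and Fernet interfaces as stand-ins, it is not PGP itself):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# The recipient's long-term asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# 1. Encrypt the bulk data with a fresh symmetric key (fast, no size limit).
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"the actual, possibly large, message")

# 2. Encrypt only the small session key with the recipient's public key.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# The recipient reverses the two steps with their private key.
plaintext = Fernet(private_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert plaintext == b"the actual, possibly large, message"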

So in conclusion, it is definitely possible to have a key that works only for encryption and not for decryption; it requires mathematical structure, and faith in the difficulty of inverting some of these operations. However, using asymmetric encryption correctly and effectively is one of the biggest challenges in the security field; beyond the maths, private key storage, public key distribution, and key usage without leaking confidential information through careless implementation are very difficult to get right.

An accessible overview of browser security

2011-09-20 by ninefingers. 2 comments

So the purpose of this post is to introduce you, an IT-aware but perhaps not software-engineering person, to the various issues surrounding browser security. In order to do this I’ll go quickly through some theory you need to know first and there is some simple C-like pseudocode and the odd bad joke, but otherwise, this isn’t too much of a technical post and you should be able to read it without having to dive into 50 or so books or google every other word. At least, that’s the idea.

What is a browser?

Before we answer that we really need a crash course on kernel internals. It’s not rocket science though; it’s actually very simple. An application on your system runs as what is called a process. Each process, for most systems you’re likely to be running, has its own address space, which is where everything lives (memory). A process has some code, and the operating system (kernel) gives it a set amount of time to run before giving another process some time. How that is done so that everything keeps going would really be a technical blog post in itself, so we’ll just skip it. Suffice to say, it’s being constantly improved.

Of course, sometimes you might want to get more than one thing done at once, which is where threads come in. How threads are implemented differs across OSes – on Windows, each process contains a single, initial thread, and if you want to do more things you can add threads to the process. On Linux, threads and processes are essentially the same kind of object, except that threads share memory. That is the really important common idea behind a thread, and it is also true of Windows – threads share memory, processes do not.
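
If you would rather see that difference than take my word for it, here is a minimal Python sketch: the thread’s change to the shared variable is visible afterwards, while the separate process’s change is not, because the process got its own copy of the address space.

import threading
import multiprocessing

counter = 0  # lives in this process's address space

def bump():
    global counter
    counter += 1

if __name__ == "__main__":
    # A thread runs inside the same process, so its write is visible to us.
    t = threading.Thread(target=bump)
    t.start()
    t.join()
    print("after thread:", counter)    # 1

    # A separate process gets its own address space, so it bumps *its* copy
    # of counter and we never see the change.
    p = multiprocessing.Process(target=bump)
    p.start()
    p.join()
    print("after process:", counter)   # still 1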

How does my process get stuff done then?

So now we’ve got that out the way, how does your program get stuff done? Well, you can do one of these three things:

  1. If you can do it yourself, as a program, you just do it.
  2. If another library can do it, you might use that.
  3. Or alternatively, you might ask the operating system to do it.

In reality, what might happen is your program asks a library to do it which then asks the operating system to do it. That’s exactly how the C standard library works, for example. The end result, however, and the one we care about, is whether you end up asking the operating system. So you get this:

Program → Operating system

Later on, I’ll make the program side of things slightly more complicated, but let’s move on.

How do we do plugins generically?

Firstly, a word on “shared objects”. Windows calls these dynamic link libraries and they’re very powerful, but all we’re concerned with here really is how they end up working. On the simplest possible level, you might have a function like this:

void parse_html(char* input)
{
    call_to_os();
    do_something_with(input);
}

A shared library might implement a function like this:

void decode_and_play_swf(char* data)
{
    call_to_os();
    do_some_decoding_and_playing_of(data);
}

When a shared object loads with your code, the result, in your program’s memory, is this:

void decode_and_play_swf(char* data)
{
    call_to_os();
    do_some_decoding_and_playing_of(data);
}

void parse_html(char* input)
{
    call_to_os();
    do_something_with(input);
}

Told you it was straightforward. Now, making a plugin work is a little more complicated than that. The program code needs to be able to load the shared object and must expect a standard set of functions to be present. For example, a browser might expect a function like this in a plugin:

int plugin_iscompatible(int version)
{
    if ( version >= min_required_version && version <= max_supported_version )
    {
        return 0;
    }
    else
    {
        return -1;
    }
}

Similarly, the app we’re plugging into might offer certain functions that a plugin can call to do things. A good example would be a browser plugin that draws custom content to a page – the browser needs to provide the plugin with the functionality to do that.
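
To make the “load it and look up the agreed entry points” idea concrete, here is a minimal sketch in Python using ctypes. The library name and function are the hypothetical ones from the pseudocode above, not any real browser’s plugin API:

import ctypes

# Load the (hypothetical) plugin shared object into our own address space.
# On Windows this would be a .dll rather than a .so.
plugin = ctypes.CDLL("./myplugin.so")

# Describe the agreed entry point: int plugin_iscompatible(int version)
plugin.plugin_iscompatible.argtypes = [ctypes.c_int]
plugin.plugin_iscompatible.restype = ctypes.c_int

# Ask the plugin whether it can work with this version of the host.
HOST_VERSION = 12
if plugin.plugin_iscompatible(HOST_VERSION) == 0:
    print("plugin accepted, handing it work")
else:
    print("plugin refused, not loading it")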

So can all this code do what it likes then?

Well that depends. Basically, when you do this:

Program → Operating system

the operating system doesn’t just do whatever you asked for. In most modern OSes, that program is also run in the context of a given user. So that program can do exactly what the user can do.

I should also point out that, for the purposes of loading shared libraries, the loaded code appears to most reporting tools as part of the program it is loaded into. After all, it is literally loaded into that program. So when the plugin is doing something, it does it on behalf of the program.

Note also that when you load the plugin, it literally gets joined to the memory space of the program. Yes, this is an attack vector (called DLL injection), and yes, you can do bad things with it, but it is also highly useful.

I read this blog post to know about browsers! What’s all this?

One last point to make. Clearly, browsers run on lots of different platforms. Browsers have to be compiled on each platform, but forcing plugin writers to do that is a bit hostile really. It means many decent plugins might not get written, so browser makers came up with a solution: embed a programming language.

That sounds a lot crazier than it is. Technically, you already have one quite complicated parser in the browser; you’ve got to support this weird HTML thing (without using regex) – and CSS, and Javascript… wait a minute, given we’ve stuck a programming language parser for Javascript into the browser, why don’t we extend that to support plugins?

This is exactly what Mozilla did. See their introductory guide (and as a side note on how good Stack Overflow really is, here’s a Mozilla developer on SO pointing someone to the docs). Actually in truth it’s a little bit more complicated than I made out. Since Firefox already includes an XML/XHTML/HTML parser, the developers went all out – a lot of the user interface can be extended using XUL, an XML variant for that purpose. Technically, you could write an entire user interface in it.

Just as a quick recap, in order for your XML, XHTML, HTML, Javascript, CSS to get things done, it has to do this:

Javascript or something → Browser → Operating system

So, now the browser makes decisions based on what the Javascript asks before it asks the operating system. This has some security implications, which we’ll get to in exactly one more diagram’s time. It’s worth noting that on a broad concept level, languages like Python and Java also work this way:

Script/Java bytecode → Interpreter/JVM etc → Operating system

Browsers! Focus, please!

Ok, ok. All of that was to enable you to understand the various levels on which a browser can be attacked – you need to understand the difference in locations and the various concepts for this explanation to work, so hopefully we’re there. So let’s get to it.

So, basically, security issues arise for the most part (ignoring some more complicated ones for now) because of bad input. Bad input gets into a program and hopefully crashes it, because the alternative is that specially crafted input makes it do something it shouldn’t and that’s pretty much always a lot worse. So the real aim of the game is evaluating how bad bad input really is. The browser accepts a lot of input on varying levels, so let’s take them turn by turn:

  1. The address bar. You might seriously think, wait, that can’t be a problem can it, but if the browser does not correctly pass the right thing to the “page getting” bit, or the “page getting” bit doesn’t error when it gets bad input, you have a problem, because malformed links anywhere on the web could break your browser. Believe it or not, in the early days of Internet Explorer this was actually an issue that could lead you to believe you were on a different site to the one you were actually on. See here. These days, this particular attack vector is more of a phishing/spoofing style problem than browser manufacturers getting it wrong.

  2. Content parsers. By content parsers I mean all the bits that parse what makes up a page – HTML, XHTML, Javascript, CSS. If there are bugs in these, the best case scenario is really that the browser crashes. However, you might be able to get the browser to do something it shouldn’t with a specially crafted web page, javascript or CSS file. Technically speaking, the browser should prevent the javascript it is executing from doing anything too malicious to your system. That’s the idea. The browser should also prevent said javascript from doing anything too malicious to other pages. A full discussion of web vulnerabilities like this is a blog post in itself, really, so we won’t cover it here.

  3. Extensions in Javascript/whatever the browser uses. Again, this comes down to whether there are vulnerabilities in the parser or the enforcement rules. Otherwise, an extension of this nature will have more “power” than in-page Javascript, but still won’t be able to do too much “bad stuff”.

  4. Binary formats in the rendering engine. Clearly, a web page is more than just parseable programming-language like things – it’s also images in a whole variety of complicated formats. A bug in these processors might be exploitable; given they’re written as native functions inside the browser, or as shared libraries the browser uses, the responsibility for access control is at the OS level and so a renderer that is persuaded to do something bad could do anything the user running the process can do.

  5. Extensions (Firefox plugins)/File format handlers. These enable the browser to do things it can’t do in and of itself and are often developed externally to the core “browsing” part. Examples of this are embedding adobe reader in web pages and displaying flash applications. Again, these have decoders or parsers, so bugs in these can persuade the plugin to ask the operating system to do what we want it to do, rather than what it should.

You might wonder if 4 and 5 really should be subject to the same protections afforded to the javascript part. Well, the interesting thing is, that is part of the solution, but you can’t really do it in Javascript, because compared to having direct access to memory and the operating system, it would be much slower. Clearly, that’s a problem for video decoding and really heavy duty graphical applications such as flash is often used for, so plugins are and were the solution.

Ok, ok, tell me, what’s the worst the big bad hacker can do?

Depends. There are two really lucrative possible exploits from bugs of this sort: either arbitrary code execution (do what you want) or drive-by downloading (which is, basically, send code that downloads something, then run it). In and of itself, the fact that you’re browsing as a restricted user and not an administrator (right?) means the damage the attacker can do is limited to what your user can do. That could be fairly critical, including infecting all your files with malware, deleting stuff, whatever it is they want. You might then unwittingly pass on the infection.

The real problem, however, is that the attacker can use this downloaded code as a launch platform to attack more important parts. A full discussion of botnets, rootkits and persistent malware really ought to also be a separate blog post, but let’s just say the upshot is your computer could be taking part in attacks on other computers orchestrated by criminal gangs. Serious then.

Oh my unicorns, why haven’t you developers fixed this?

First up, no piece of code is entirely perfect. The problem with large software projects is that they become almost too complicated for one person to understand every part and every change that is going on. So the project is split up with maintainers for each part and coding standards, rules, guidelines, tools you must run and that must pass, tests you must run and must pass etc. In my experience, this improves things, but no matter how much you throw at it in terms of management, you’re still dealing with humans writing programs. So mistakes happen.

However, our increasing reliance on the web has got armies of security people of the friendly, helpful sort trying to fix the problem, including the people who build browsers. I went to great lengths to discuss threads and processes – there was a good reason for that, because now I can discuss what Google Chrome tries to do.

Ok…

So as I said, the traditional way you think about building an application when you want to do more than one thing at once is to use threads. You can store your data in memory and any thread can get it (I’m totally ignoring a whole series of problems you can encounter with this for simplicity, but if you’re a budding programmer, read up on concurrency before just trying this) reasonably easily. So when you load a page and run a download, both can happen at once and the operating system makes it look like they’re both happening at the same time.

Any bugs from 2-5, however, can affect this whole process. There’s nothing to stop a plugin waltzing over and modifying the memory of the parser, or totally replacing it, or well, anything. Also, a crash in a single thread in a program tends to bring the whole thing down, so from a stability point of view, it can also be a problem. Here’s a nice piece of ascii art:

|-----------------------------------------------------------------------|
| Process no 3412  Thread/parsing security.blogoverflow.com             |
|  Big pile of     Thread/rendering PNG                                 |
|  shared memory   Thread/running flash plugin - video from youtube     |
|-----------------------------------------------------------------------|
So, all that code is in the same address space as each other, basically. Simple enough.

The Chrome developers looked at this and decided it was a really bad idea. For starters, there’s the stability problem – plugins do crash fairly frequently, especially if you’ve ever used the 64-bit flash plugin for linux. That might take the whole process down, or it might cause something else to happen, who knows.

The Chrome model executes sub-processes for each site to be rendered (displayed) and for every plugin in use, like this:

   |--------------------------|       |-----------------------------------|
   | Master chrome            |-------| Rendering process for security....|
   | process. User interface  |       |-----------------------------------|
   | May also use threads     |
   |--------------------------|-------|-----------------------------------|
                |                     | Rendering process for somesite... |
   |--------------------------|       |-----------------------------------|
   | Flash plugin process     |
   |--------------------------|
So now what happens is the master process becomes only the user interface. Child processes actually do the work, but if they get bad input, they die and don’t bring the whole browser down.

Now, this also helps solve the security problem to some extent. Those sub-processes cannot alter each other’s memory easily at all, so there’s not much for them to exploit. In order to get anything done, they have to use IPC – interprocess communication, to talk to the master process and ask for certain things to happen. So now the master process can do some filtering based on a policy of who should be able to access what. Child processes are then denied access to things they can’t have, by the master process.
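
As a toy illustration of that broker pattern (a minimal Python sketch, not Chrome’s actual IPC), the child has to send every request up a pipe, and the master applies a policy before doing the work on its behalf:

import multiprocessing

ALLOWED = {"/tmp/safe.txt"}  # hypothetical policy: the only file children may read

def renderer(pipe):
    # The untrusted child: in this toy it politely asks rather than opening files
    # itself; a real sandbox also strips its OS privileges, as described below.
    for path in ("/etc/passwd", "/tmp/safe.txt"):
        pipe.send(("read_file", path))
        print("renderer got:", pipe.recv())
    pipe.close()

if __name__ == "__main__":
    master_end, child_end = multiprocessing.Pipe()
    child = multiprocessing.Process(target=renderer, args=(child_end,))
    child.start()

    # The trusted master: filter each request against the policy.
    for _ in range(2):
        request, path = master_end.recv()
        if request == "read_file" and path in ALLOWED:
            try:
                with open(path) as f:
                    master_end.send(f.read())
            except OSError as err:
                master_end.send("error: %s" % err)
        else:
            master_end.send("denied by policy")

    child.join()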

But, can’t sub processes just ask the operating system directly? Well, not on Windows for certain. When the master process creates the child process, it also asks Windows for a new security “level” if you like (token is the technical term) and creates the process in that level, which is highly restricted. So the child process will actually be denied certain things by the operating system unless it needs them, which is harder to do on a single process where parts of it do need unrestricted access.

In defence of the Windows engineers, this functionality (security tokens) has been part of the Windows API for a while; it’s just that developers haven’t really used it. Also, it applies to threads too, although there remains a problem with access to the process’s memory.

Great! I can switch to Chrome and all will be well!

Hold up a second… yes, this is an excellent concept, and hopefully other browser writers will start looking at integrating OS security into their browsers too. However, any solution like this will, under certain conditions, be circumventable – it’s just a matter of finding a scenario. Also, Chrome or any other browser cannot defend you from yourself – some basic rules still apply. They are, briefly: do not open dodgy content, and do keep your system/antivirus/firewall up to date.

Right, so what’s the best browser?

I can’t answer that, nor will I attempt to try. All browser manufacturers know that “security” bugs happen and all of them release fixes, as do third parties. A discussion on which has the best architecture is not a simple one – we’ve outlined the theory here, but how it is implemented will really determine whether it works – the devil is in the detail, as they say.

So that concludes our high level discussion of the various security issues with browsers, plugins and drive by downloads. However, I would really like to stress there is a whole other area of security which you might broadly call “web application” security, which concerns in part what the browser allows to happen, but also encompasses servers and databases and many other things. We’ve covered a small corner of the various problems today; hopefully some of our resident web people will over time cover some of the interesting things that happen in building complex web sites. Here is a list of all the questions here tagged web-browser.

Storing secrets in software

2011-09-06 by ninefingers. 0 comments

This question comes up on Stack Overflow and IT Security relatively regularly, and goes along one of these lines:

  1. I have a symmetric encryption key I would like to store in my application so attackers can’t find it.
  2. I have an asymmetric encryption key I would like to store in my application so attackers can’t find it.
  3. I would like to store authentication details of some kind in my application so attackers can’t find them.
  4. I have developed an algorithm. How do I make it so attackers can never find it?
  5. I am selling commercial music. I need to make it so The Nasty Pirates can’t decode it.

Firstly, a review of what cryptography is at heart: encryption and decryption are all about sending data over untrusted networks or storing data in untrusted places such that only the intended recipient can read that information. This gives you confidentiality; cryptography as a whole also aims to provide integrity checks through signatures and message digests. Cryptography is frequently conflated with access control, in which it so often plays a part. Cryptography ceases to be able to protect you when you decide to put the key and the encrypted data together – at this point, the data has reached its destination, the trusted place. The expectation that cryptography can protect data once it is decrypted is similar to the expectation that a locked door will protect your house if you leave the key under the mat. The act of accessing the key and decrypting the data (and even encrypting it) is a weak link in the chain: it assumes the system you’re performing these actions on guarantees your confidentiality and integrity – it assumes that system is trusted. Reading the argument presented here, our resident cryptographer provided the following explanation: “Encryption does not create confidentiality, it just concentrates confidentiality into the key. Presumably, it is easier to keep confidential a small key of fixed size, and the key uniform structure allows for the key confidentiality to be measured. Yet you have to start confidentiality at something. Once the key is known, confidentiality has left.”

All of the above questions are really forms of the same thing: how do I safely decode some encrypted data on an untrusted system, without interception? In other words, you’re now asking cryptography to guarantee the security of that information even after you’ve decrypted it. This isn’t possible. The problem then becomes one of how you ensure that the system in question will maintain confidentiality and integrity for you.

The answer, then, is to create a system which acts as the recipient such that data can be decoded in it and never needs to be transferred outside of it. I’ll call it the black box. The rest of this blog post will be about looking for the black box setup.

  1. We will begin with the idea we want to write a program that stores something securely whilst preventing the user from accessing that information. So the first place people want to do this is in their source code. Which is fine, except source code can be disassembled. Yes, there are ways to make this more difficult but it is not possible to prevent. The same goes for hiding or stashing files around the system, since a cursory analysis with the right tools will tell you exactly where to look. I should add that disassembly prevention probably makes your software less stable and/or portable.
  2. The next option is to coerce the system into helping you hide your information. In any form, this will essentially look like a rootkit. This is dangerous: you may well have your application categorised as malware, for starters, but more importantly at this stage it is very easy to introduce extra vulnerability into the system you have just hooked. You could also crash your customer’s system, which is “not cool” whichever way you look at it. Finally, whilst rootkits are difficult to remove, taking a live listing of files and then an offline one is not. Rootkits can be found and their installation prevented. In fact, a cunning reverse engineer might replace your rootkit with their own equivalent, stealing the information you send it. Handy.
  3. Stage 3 is one for the slashdot home page: the operating system vendor is complicit in helping you. Unless you are a music giant I suspect this probably is not an option for you. A complicit operating system is harder to circumvent, but entirely possible. All you need is access to ring 0 (for people not familiar with the term, ring 0 is the mode where the processor will not stop you lifting any restrictions placed upon memory or code. You can ignore read only page checks, rewrite chunks of kernel memory, whatever you want to do). Depending on the system, this might be difficult to achieve, but it certainly is not impossible. Another route to this stage is to compromise the boot process. You can pass the OS the correct validation codes if it checks anything. Etc. Clearly, this stage is difficult to pull off and requires time/effort, but it can be done.
  4. So, OS security not enough? Hardware then. At this stage you actually have a real black box (or chip) somewhere on the computer. Of course, if the OS has responses sent to it, see stages 1-3. So now your black box needs to talk to all your other hardware, like your monitor or speaker system. Oh and they need to be intercept-proof too – maybe they can’t be trusted either and are really decoding the data straight back to the hard disk. This might sound far fetched, but High-bandwidth Digital Content Protection (wikipedia) aims to provide exactly this kind of protection.

The move/counter-move sequence carries on – so what is to stop a hardware engineer taking your box apart? Self-destructing hardware? OK, but then how do you get around the fact that the content must eventually be displayed on a monitor, sending light waves and electronic signals out?

What does this mean for software on the PC?

  1. You cannot store passwords, encryption keys etc. in your code, or anywhere on the system. The only way around that is to ask for user input and aim to prevent interception, which can itself be difficult (a sketch of this approach follows the list).
  2. License keys do not work either, for the same reason.
  3. Nor does obfuscated code: it has to be decrypted (or de-obfuscated) to execute at some point.
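Here is a minimal sketch of the “allow for user input” alternative from point 1, using only the Python standard library (the iteration count and salt handling are illustrative, not a recommendation): the key is derived from a passphrase at runtime and never stored anywhere on the system.

```python
# Sketch: derive a key from user input instead of storing it on the system.
import hashlib
import os

def derive_key(passphrase: str, salt: bytes) -> bytes:
    # PBKDF2-HMAC-SHA256; the iteration count is purely illustrative.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode("utf-8"), salt, 200_000)

salt = os.urandom(16)  # stored alongside the data; it does not need to be secret
key = derive_key(input("Passphrase: "), salt)
# `key` now exists only in memory for this session. Interception on an
# untrusted system (keyloggers, memory scraping) of course remains possible.
```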

However, there is a case where hardware-complicit defences are perfectly possible and may well be encouraged: where the hardware and the software come together as a single product. There are many scenarios in which this happens, the most obvious being the mobile phone. A mobile phone is, relatively speaking, hard to take apart and put back together again whole (unless you're a mobile phone engineer), so it is possible for hardware-based security to work reasonably effectively. Smart cards with on-board cryptographic functions likewise ensure the keys are exceptionally hard to steal. In the case of smart cards, the boundary problem still exists in terms of transferring the decrypted data back to the untrusted system, but on a mobile phone-like device it would be entirely feasible not to route that information through the OS itself and instead render it directly on the screen, isolating those buffers from the “untrusted” sections of the OS. However, let's leave that idea here; trusted platforms are a blog post for another day.

Is there a case for devices capable of such secure display? Absolutely. Want to read the details of a financial transaction securely, or have a trusted communication channel with your bank? Upping the bar like this definitely helps protect against malware and other interception threats. However, the most common use case appears to be “how do I defend my asset in an untrusted environment in order to enforce my desired price model?” Which brings me full circle: that is not the problem set cryptography solves.

When people say secure, they often mean “impossible for the bad guy, possible for me”. That is almost never the case. Clearly, looking at the above, the number of people skilled enough to counter point 3 is actually pretty small, so you will achieve only part of that aim: “hard for the bad guy, easy-ish for me”. When people say secure, they also frequently mean purely technical security measures. Security is more than just technical security – that is why we have policies, community, law, education, awareness and so on. Going as far as points 2 and 3 reduces the number of people capable of subverting your protection system to the point where legal action is feasible.

However, I think we still have too narrow a definition of security in the first place. If you are talking about delivering content or software, the number of issues a customer is likely to run into rises sharply as you move from point 1 through to point 4. Does your customer really want to be told that you do not support Windows 8 yet? Or that they cannot use their favourite media player for your content? Or that they need to upgrade their BigMediaCorpSatelliteTVBox because you altered the algorithms? Or that they have been locked out of that Abba Specials subscription channel they paid for because of a hardware fault, or… Security here is not just about protecting your content; it is about protecting your business. Will the cost of securing the content using any of points 1-4 make up for the potential revenue you could have made if everyone had bought a copy legally? Or will the number of legitimate sales simply go down as consumers react to all those technical barriers shattering their plug-and-play expectations? I do not have any numbers on that one, but I am willing to bet the net result of adding this extra “security” is a loss.

Security needs to be appropriate to the risk of the situation at hand, having been fully evaluated from all angles. The general consensus is that DRM is an excessive protection measure given the risks involved – indeed, a very simple alternative is to offer the software/music/whatever at a price point and value such that the vast majority of users buy it, and to accept that some people will pirate, steal or otherwise make unauthorised use of it. In certain situations, providing a value-add around the product can make the difference (some companies selling open source software generate their entire revenue using this model) and even persuade users of illegal copies to buy a licence in order to gain access to those services (e.g. support and upgrades).

 

QotW #8: how to determine what to whitelist with NoScript?

2011-09-02 by ninefingers. 2 comments

Our question of the week this week was asked by Iszi, who wanted to know how exactly we should determine what to trust when employing NoScript. In the question itself, Iszi raises some valid points: how does somebody know, other than by trial and error, whether scripts from a given site or a third-party site are trustworthy? How does a user determine which parts of the Javascript are responsible for which bits of functionality? And how do we do this without exposing ourselves to the risks of running such scripts?

Richard Gadsden suggests that one way to approach a solution is for each site to declare what Javascript it directly controls. He notes that such a mechanism could be trivially subverted if the responding page simply lists any and all Javascript as “owned” by the site in question.

Such resource-based declarations have been tried and implemented before, albeit for a different problem domain. Cross-Origin Resource Sharing (W3C, Wikipedia) lets a server declare which origins may make cross-site requests to it and read the responses. However, like per-site ownership lists, it does not work well when the trusted end users are “everyone”, i.e. a public web service.
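For a flavour of what that looks like on the wire, here is a minimal sketch (standard-library Python; the origin and port are made up) of a server stating which single origin may read its responses cross-site. A public web service would effectively have to send the wildcard * instead, which is exactly why the approach breaks down there.

```python
# Sketch of a CORS-style declaration: the server names the one origin it trusts.
from http.server import BaseHTTPRequestHandler, HTTPServer

class CORSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Browsers will only let scripts running on this origin read the response.
        self.send_header("Access-Control-Allow-Origin", "https://trusted.example.com")
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"status": "ok"}')

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CORSHandler).serve_forever()
```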

Zuly Gonzalez discusses a potential solution her startup has been working on – running scripts in a disposable VM. Zuly makes some good points: even with a whitelisted domain, you cannot necessarily trust each and every script that is added to that domain; moreover, after you have made your trust decision, a simple whitelist is not enough without re-vetting the scripts as they change.

Zuly’s company – if you’re interested, check out her answer – runs scripts on a disposable virtual machine rather than on your computer. Disclaimer: we haven’t tested it, but the premise sounds good.

Clearly, however, such a solution is not available to everyone. Karrax suggested that the best option might be to install plugins such as McAfee SiteAdvisor to help inform users as to which domains they should be trusting. He notes that the NoScript team are beginning to integrate such functionality into the user interface of NoScript itself. This was a feature I did not know I had, so I tried it. According to the trial page, the service is experimental at the time of writing, but all of the linked-to sites provide a lot of information about the domain name and whether to trust it.

This is an area with no single solution yet, and these various solutions are in continuous development. Let’s see what the future holds.