Posts Tagged ‘Browsers’

QotW #36: How does the zero-day Internet Explorer vulnerability discovered in September 2012 work?

2012-09-28 by roryalsop. 0 comments

Community member Iszi nominated this week’s question, which asks for an explanation of the issue from the perspective of a developer/engineer: What exactly is being exploited, and why does it work?

Polynomial provided the following detailed, technical answer:

CVE-2012-4969, aka the latest IE 0-day, is based on a use-after-free bug in IE’s rendering engine. A use-after-free occurs when a dynamically allocated block of memory is used after it has been disposed of (i.e. freed). Such a bug can be exploited by creating a situation where an internal structure contains pointers to sensitive memory locations (e.g. the stack or executable heap blocks) in a way that causes the program to copy shellcode into an executable area.

In this case, the problem is with the CMshtmlEd::Exec function in mshtml.dll. The CMshtmlEd object is freed unexpectedly, then the Exec method is called on it after the free operation.

First, I’d like to cover some theory. If you already know how use-after-free works, then feel free to skip ahead.

At a low level, a class can be equated to a memory region that contains its state (e.g. fields, properties, internal variables, etc) and a set of functions that operate on it. The functions actually take a “hidden” parameter, which points to the memory region that contains the instance state.

For example (excuse my terrible pseudo-C++):

class Account
{
    int balance = 0;
    int transactionCount = 0;

    void AddBalance(int amount)
    {
        balance += amount;
        transactionCount++;
    }

    void SubtractBalance(int amount)
    {
        balance -= amount;
        transactionCount++;
    }
};
The above can actually be represented as the following:
private struct Account
{
    int balance = 0;
    int transactionCount = 0;
}

public void* Account_Create()
{
    Account* account = (Account*)malloc(sizeof(Account));
    account->balance = 0;
    account->transactionCount = 0;
    return (void*)account;
}

public void Account_Destroy(void* instance)
{
    free(instance);
}

public void Account_AddBalance(void* instance, int amount)
{
    ((Account*)instance)->balance += amount;
    ((Account*)instance)->transactionCount++;
}

public void Account_SubtractBalance(void* instance, int amount)
{
    ((Account*)instance)->balance -= amount;
    ((Account*)instance)->transactionCount++;
}

public int Account_GetBalance(void* instance)
{
    return ((Account*)instance)->balance;
}

public int Account_GetTransactionCount(void* instance)
{
    return ((Account*)instance)->transactionCount;
}
I’m using void* to demonstrate the opaque nature of the reference, but that’s not really important. The point is that we don’t want anyone to be able to alter the Account struct manually, otherwise they could add money arbitrarily, or modify the balance without increasing the transaction counter.

Now, imagine we do something like this:

void* myAccount = Account_Create();
Account_AddBalance(myAccount, 100);
Account_SubtractBalance(myAccount, 75);
// ...
Account_Destroy(myAccount);
// ...
if(Account_GetBalance(myAccount) > 1000) // <-- !!! use after free !!!
    ApproveLoan();
By the time we reach Account_GetBalance, the pointer value in myAccount actually points to memory that is in an indeterminate state. Now, imagine we can do the following:

  1. Trigger the call to Account_Destroy reliably.
  2. Execute any operation after Account_Destroy but before Account_GetBalance that allows us to allocate a reasonable amount of memory, with contents of our choosing.

Usually, these calls are triggered in different places, so it’s not too difficult to achieve this. Now, here’s what happens:

  1. Account_Create allocates an 8-byte block of memory (4 bytes for each field) and returns a pointer to it. This pointer is now stored in the myAccount variable.
  2. Account_Destroy frees the memory. The myAccount variable still points to the same memory address.
  3. We trigger our memory allocation, containing repeating blocks of 39 05 00 00 01 00 00 00. This pattern correlates to balance = 1337 and transactionCount = 1. Since the old memory block is now marked as free, it is very likely that the memory manager will write our new memory over the old memory block.
  4. Account_GetBalance is called, expecting to point to an Account struct. In actuality, it points to our overwritten memory block, resulting in our balance actually being 1337, so the loan is approved!

This is all a simplification, of course, and real classes create rather more obtuse and complex code. The point is that a class instance is really just a pointer to a block of data, and class methods are just the same as any other function, but they “silently” accept a pointer to the instance as a parameter.

This principle can be extended to control values on the stack, which in turn causes program execution to be modified. Usually, the goal is to drop shellcode on the stack, then overwrite a return address such that it now points to a jmp esp instruction, which then runs the shellcode.
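As a minimal illustration of that idea, here is a sketch of the kind of bug that hands an attacker the stack. The function is hypothetical, and modern mitigations (stack canaries, ASLR, DEP) make this exact pattern unreliable in practice:

#include <string.h>

void handle_input(const char* input)
{
    char buf[64];
    // No bounds check: anything past 64 bytes overwrites saved registers
    // and, eventually, the return address stored further up the stack.
    // An attacker places shellcode in 'input' and replaces the return
    // address with the address of a 'jmp esp' instruction, so that
    // returning from this function lands in the shellcode.
    strcpy(buf, input);
}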

This trick works on non-DEP machines, but when DEP is enabled it prevents execution of the stack. Instead, the shellcode must be designed using Return-Oriented Programming (ROP), which uses small blocks of legitimate code from the application and its modules to perform an API call, in order to bypass DEP.
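As a rough sketch of the shape of such a payload (every address below is invented purely for illustration; real chains are built from gadgets found in the application’s loaded DLLs):

// Conceptual shape of a ROP chain for a DEP bypass. Each entry is the
// address of a tiny instruction sequence ending in 'ret', so each 'ret'
// hands control to the next gadget. The usual goal is a single API call,
// e.g. VirtualProtect, to make the shellcode's memory executable.
unsigned int fake_stack[] = {
    0x637d1111, // gadget: pop eax ; ret (loads the next value into eax)
    0x00000040, //   value for eax: PAGE_EXECUTE_READWRITE
    0x637d2222, // gadget that invokes VirtualProtect on the shellcode block
    0x01234f00, // address of the shellcode, jumped to once it is executable
};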

Anyway, I’m going off-topic a bit, so let’s get into the juicy details of CVE-2012-4969!

In the wild, the payload was dropped via a packed Flash file, designed to exploit the Java vulnerability and the new IE bug in one go. There’s also been some interesting analysis of it by AlienVault.

The Metasploit module says the following:

> This module exploits a vulnerability found in Microsoft Internet Explorer (MSIE). When rendering an HTML page, the CMshtmlEd object gets deleted in an unexpected manner, but the same memory is reused again later in the CMshtmlEd::Exec() function, leading to a use-after-free condition.

There’s also an interesting blog post about the bug, albeit in rather poor English – I believe the author is Chinese. Anyway, the blog post goes into some detail:

> When the execCommand function of IE execute a command event, will allocated the corresponding CMshtmlEd object by AddCommandTarget function, and then call mshtml@CMshtmlEd::Exec() function execution. But, after the execCommand function to add the corresponding event, will immediately trigger and call the corresponding event function. Through the document.write("L") function to rewrite html in the corresponding event function be called. Thereby lead IE call CHTMLEditor::DeleteCommandTarget to release the original applied object of CMshtmlEd, and then cause triggered the used-after-free vulnerability when behind execute the msheml!CMshtmlEd::Exec() function.

Let’s see if we can parse that into something a little more readable:

  1. An event is applied to an element in the document.
  2. The event executes, via execCommand, which allocates a CMshtmlEd object via the AddCommandTarget function.
  3. The target event uses document.write to modify the page.
  4. The event is no longer needed, so the CMshtmlEd object is freed via CHTMLEditor::DeleteCommandTarget.
  5. execCommand later calls CMshtmlEd::Exec() on that object, after it has been freed.
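In C-like pseudocode, that sequence looks something like this. The function names come from the reports quoted above, but the control flow shown here is a simplified guess, not the actual mshtml.dll source:

// Simplified sketch of the flawed sequence inside mshtml.dll
void execCommand(Document* doc, Command* cmd)
{
    CMshtmlEd* ed = AddCommandTarget(doc, cmd); // allocate the object

    FireEventHandlers(doc, cmd); // script runs document.write() here,
                                 // which ends up calling
                                 // CHTMLEditor::DeleteCommandTarget
                                 // and frees 'ed' behind our back

    CMshtmlEd_Exec(ed, cmd);     // use-after-free: 'ed' is stale
}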

Part of the code at the crash site looks like this:

637d464e 8b07      mov  eax,dword ptr [edi]
637d4650 57        push edi
637d4651 ff5008    call dword ptr [eax+8]
The use-after-free allows the attacker to control the value of edi, which can be modified to point at memory that the attacker controls. Let’s say that we can insert arbitrary code into memory at 01234f00, via a memory allocation. We populate the data as follows:
01234f00: 01234f00   // the first dword points back at 01234f00 itself
01234f04: 41414141
01234f08: 01234f0c   // the entry that call [eax+8] will fetch
01234f0c: cccccccc   // int3 breakpoints
  1. We set edi to 01234f00, via the use-after-free bug.
  2. mov eax,dword ptr [edi] populates eax with the dword stored at the address in edi, i.e. the 01234f00 we placed there, so eax is 01234f00.
  3. push edi pushes 01234f00 to the stack.
  4. call dword ptr [eax+8] takes eax (which is 01234f00) and adds 8 to it, giving us 01234f08. It then dereferences that memory address, giving us 01234f0c. Finally, it calls 01234f0c.
  5. The data at 01234f0c is treated as instructions. cc translates to an int3, which causes the debugger to raise a breakpoint. We’ve executed code!

This allows us to control eip, so we can modify program flow to our own shellcode, or to a ROP chain.
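In C terms, the crash-site instructions are just a virtual method call made through a stale object pointer. Here is a sketch (the struct and names are illustrative, not IE’s actual types):

// What the crash-site assembly does, expressed in C. 'edi' holds the
// object pointer, [edi] its method-table pointer, and [table+8] the
// third entry (offset 8 = index 2 with 4-byte pointers).
typedef struct
{
    void (**methods)(void* self); // first field: pointer to method table
} Object;

void call_third_method(Object* obj)
{
    // mov eax, [edi] ; push edi ; call [eax+8]
    obj->methods[2](obj);
}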

Please keep in mind that the above is just an example, and in reality there are many other challenges in exploiting this bug. It’s a pretty standard use-after-free, but the nature of JavaScript makes for some interesting timing and heap-spraying tricks, and DEP forces us to use ROP to gain an executable memory block.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Are you a security expert who wants to help others? Come and join us at security.stackexchange.com.

How can you protect yourself from CRIME, BEAST’s successor?

2012-09-10 by roryalsop. 11 comments

For those who haven’t been following Juliano Rizzo and Thai Duong, the two researchers who developed the BEAST attack against TLS 1.0/SSL 3.0 in September 2011: they have developed another attack, which they plan to present at the Ekoparty conference in Argentina later this month. This time it gives them the ability to hijack HTTPS sessions, and it has started people worrying again.

Security Stack Exchange member Kyle Rozendo asked this question:

With the advent of CRIME, BEAST’s successor, what possible protection is available for an individual and/or system owner in order to protect themselves and their users against this new attack on TLS?

And the community expectation was that we wouldn’t get an answer until Rizzo and Duong presented their attack.

However, our highest reputation member, Thomas Pornin delivered this awesome hypothesis, which I will quote here verbatim:


This attack is supposed to be presented in 10 days from now, but my guess is that they use compression.

SSL/TLS optionally supports data compression. In the ClientHello message, the client states the list of compression algorithms that it knows of, and the server responds, in the ServerHello, with the compression algorithm that will be used. Compression algorithms are specified by one-byte identifiers, and TLS 1.2 (RFC 5246) defines only the null compression method (i.e. no compression at all). Other documents specify compression methods, in particular RFC 3749 which defines compression method 1, based on DEFLATE, the LZ77-derivative which is at the core of the GZip format and also modern Zip archives. When compression is used, it is applied on all the transferred data, as a long stream. In particular, when used with HTTPS, compression is applied on all the successive HTTP requests in the stream, header included. DEFLATE works by locating repeated subsequences of bytes.

Suppose that the attacker is some Javascript code which can send arbitrary requests to a target site (e.g. a bank) and runs on the attacked machine; the browser will send these requests with the user’s cookie for that bank — the cookie value that the attacker is after. Also, let’s suppose that the attacker can observe the traffic between the user’s machine and the bank (plausibly, the attacker has access to the same LAN or WiFi hotspot as the victim; or he has hijacked a router somewhere on the path, possibly close to the bank server).

For this example, we suppose that the cookie in each HTTP request looks like this:

> Cookie: secret=7xc89f+94/wa

The attacker knows the “Cookie: secret=” part and wishes to obtain the secret value. So he instructs his Javascript code to issue a request containing in the body the sequence “Cookie: secret=0”. The HTTP request will look like this:

POST / HTTP/1.1
Host: thebankserver.com
(…)
Cookie: secret=7xc89f+94/wa
(…)

Cookie: secret=0

When DEFLATE sees that, it will recognize the repeated “Cookie: secret=” sequence and represent the second instance with a very short token (one which states “previous sequence has length 15 and was located n bytes in the past”); DEFLATE will have to emit an extra token for the ‘0’.

The request goes to the server. From the outside, the eavesdropping part of the attacker sees an opaque blob (SSL encrypts the data) but he can see the blob length (with byte granularity when the connection uses RC4; with block ciphers there is a bit of padding, but the attacker can adjust the contents of his requests so that he may phase with block boundaries, so, in practice, the attacker can know the length of the compressed request).

Now, the attacker tries again, with “Cookie: secret=1” in the request body. Then, “Cookie: secret=2”, and so on. All these requests will compress to the same size (almost — there are subtleties with Huffman codes as used in DEFLATE), except the one which contains “Cookie: secret=7”, which compresses better (16 bytes of repeated subsequence instead of 15), and thus will be shorter. The attacker sees that. Therefore, in a few dozen requests, the attacker has guessed the first byte of the secret value.

He then just has to repeat the process (“Cookie: secret=70”, “Cookie: secret=71”, and so on) and obtain, byte by byte, the complete secret.


What I describe above is what I thought of when I read the article, which talks about “information leak” from an “optional feature”. I cannot know for sure that what will be published as the CRIME attack is really based upon compression. However, I do not see how the attack on compression cannot work. Therefore, regardless of whether CRIME turns out to abuse compression or be something completely different, you should turn off compression support from your client (or your server).

Note that I am talking about compression at the SSL level. HTTP also includes optional compression, but this one applies only to the body of the requests and responses, not the header, and thus does not cover the Cookie: header line. HTTP-level compression is fine.

(It is a shame to have to remove SSL compression, because it is very useful to lower bandwidth requirements, especially when a site contains many small pictures or is Ajax-heavy with many small requests, all beginning with extremely similar versions of a mammoth HTTP header. It would be better if the security model of Javascript was fixed to prevent malicious code from sending arbitrary requests to a bank server; I am not sure it is easy, though.)
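To make Thomas’s hypothesis concrete, here is a minimal sketch of the length-leak oracle using zlib’s DEFLATE. The cookie value and request layout are made up, and a real attack measures TLS record sizes on the wire rather than calling compress() itself; link with -lz:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

// Compress 'data' with DEFLATE and return the compressed length.
static unsigned long compressed_size(const char* data)
{
    unsigned char out[1024];
    uLongf out_len = sizeof(out);
    compress(out, &out_len, (const unsigned char*)data, strlen(data));
    return out_len;
}

int main(void)
{
    const char* charset = "0123456789abcdefghij+/";
    char request[512];
    size_t i;

    for (i = 0; charset[i] != '\0'; i++)
    {
        // Headers (holding the secret) plus the attacker-chosen body.
        snprintf(request, sizeof(request),
                 "POST / HTTP/1.1\r\n"
                 "Host: thebankserver.com\r\n"
                 "Cookie: secret=7xc89f+94/wa\r\n"
                 "\r\n"
                 "Cookie: secret=%c", charset[i]);
        // The correct guess ('7') extends the repeated substring by one
        // byte, so it should compress slightly shorter than the rest
        // (Huffman coding can blur the difference, as noted above).
        printf("guess '%c': %lu bytes\n", charset[i],
               compressed_size(request));
    }
    return 0;
}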


As bobince commented:

I hope CRIME is this and we don’t have two vulns of this size in play! However, I wouldn’t say that being limited to entity bodies makes HTTP-level compression safe in general… whilst a cookie header is an obvious first choice of attack, there is potentially sensitive material in the body too. eg Imagine sniffing an anti-XSRF token from response body by causing the browser to send fields that get reflected in that response.

It is reassuring that there is a fix, and my recommendation would be for everyone to assess the risk to them of having sessions hijacked and seriously consider disabling SSL compression support.

An accessible overview of browser security

2011-09-20 by ninefingers. 2 comments

So the purpose of this post is to introduce you, an IT-aware but perhaps not software-engineering person, to the various issues surrounding browser security. In order to do this I’ll go quickly through some theory you need to know first; there is some simple C-like pseudocode and the odd bad joke, but otherwise this isn’t too much of a technical post, and you should be able to read it without having to dive into 50 or so books or google every other word. At least, that’s the idea.

What is a browser?

Before we answer that we really need a crash course on kernel internals. It’s not rocket science though; it’s actually very simple. An application on your system runs as what’s called a process. Each process, on most systems you’re likely to be running, has its own address space, which is where everything it works with lives (memory). A process has some code, and the operating system (kernel) gives it a set amount of time to run before giving another process some time. How the kernel does that while keeping everything going would be a technical blog post in itself, so we’ll just skip it. Suffice to say, it’s being constantly improved.

Of course, sometimes you might want to get more than one thing done at once, which is where threads come in. How threads are implemented differs across OSes – on Windows, each process contains a single, initial thread, and if you want to do more things at once, you can add threads to the process. On Linux, threads and processes are essentially the same, except that they share memory. That sharing is the really important common idea behind a thread, and it is true of Windows too: threads share memory, processes do not.
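A quick sketch of that common idea, using POSIX threads (compile with -pthread): both threads read and write the very same variable.

#include <pthread.h>
#include <stdio.h>

static int counter = 0; // lives in the process's shared address space

static void* worker(void* arg)
{
    (void)arg;
    counter++; // real code would guard this with a mutex
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    // Prints 1: the write made by the other thread is visible here,
    // because threads share memory. Separate processes would not.
    printf("counter = %d\n", counter);
    return 0;
}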

How does my process get stuff done then?

So now we’ve got that out the way, how does your program get stuff done? Well, you can do one of these three things:

  1. If you can do it yourself, as a program, you just do it.
  2. If another library can do it, you might use that.
  3. Or alternatively, you might ask the operating system to do it.

In reality, what might happen is your program asks a library to do it which then asks the operating system to do it. That’s exactly how the C standard library works, for example. The end result, however, and the one we care about, is whether you end up asking the operating system. So you get this:

Operating system <------ Program
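For example, on a POSIX system even a one-line program exercises the whole chain: the program calls the C library, and the C library asks the operating system.

#include <stdio.h>  // C standard library

int main(void)
{
    // printf() is a library call; internally the C library buffers the
    // text and eventually issues the write() system call, i.e. it asks
    // the operating system to put the bytes on standard output.
    printf("hello, kernel\n");
    return 0;
}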

Later on, I’ll make the program side of things slightly more complicated, but let’s move on.

How do we do plugins generically?

Firstly, a word on “shared objects”. Windows calls these dynamic link libraries and they’re very powerful, but all we’re concerned with here really is how they end up working. On the simplest possible level, you might have a function like this:

void parse_html(char* input)
{
    call_to_os();
    do_something_with(input);
}

A shared library might implement a function like this:

void decode_and_play_swf(char* data)
{
    call_to_os();
    do_some_decoding_and_playing_of(data);
}

When a shared object loads with your code, the result, in your program’s memory, is this:

void decode_and_play_swf(char* data)
{
    call_to_os();
    do_some_decoding_and_playing_of(data);
}

void parse_html(char* input)
{
    call_to_os();
    do_something_with(input);
}

Told you it was straightforward. Now, making a plugin work is a little more complicated than that. The program code needs to be able to load the shared object and must expect a standard set of functions to be present. For example, a browser might expect a function like this in a plugin:

int plugin_iscompatible(int version)
{
    if ( version >= min_required_version && version <= max_supported_version )
    {
        return 0;
    }
    else
    {
        return -1;
    }
}
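On the browser side, loading the plugin and finding that function might look roughly like this on Linux, using dlopen/dlsym (the file name is made up; error handling kept minimal):

#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    // Load the shared object into our own address space.
    void* plugin = dlopen("./example_plugin.so", RTLD_NOW);
    if (plugin == NULL)
    {
        fprintf(stderr, "%s\n", dlerror());
        return 1;
    }

    // Look up the agreed-upon entry point by name.
    int (*iscompatible)(int) =
        (int (*)(int))dlsym(plugin, "plugin_iscompatible");

    if (iscompatible != NULL && iscompatible(5) == 0)
        printf("plugin accepted\n");

    dlclose(plugin);
    return 0;
}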

Similarly, the app we’re plugging into might offer certain functions that a plugin can call to do things. An example of this would be exactly a browser plugin for drawing custom content to a page – the browser needs to provide the plugin the functionality to do that.

So can all this code do what it likes then?

Well that depends. Basically, when you do this:

Operating system <------ Program

the operating system doesn’t just do whatever you asked for. In most modern OSes, that program is also run in the context of a given user. So that program can do exactly what the user can do.

I should also point out that, for the purposes of loading shared libraries, a library appears to most reporting tools as part of the program it is loaded into. After all, it is literally loaded into that program, so when the plugin is doing something, it does it on behalf of the program.

Note also that when you load the plugin, it literally gets joined to the memory space of the program. Yes, this is an attack vector (called DLL injection), and yes, you can do bad things with it, but it is also highly useful.

I read this blog post to know about browsers! What’s all this?

One last point to make. Clearly, browsers run on lots of different platforms. Browsers have to be compiled on each platform, but forcing plugin writers to do that is a bit hostile really. It means many decent plugins might not get written, so browser makers came up with a solution: embed a programming language.

That sounds a lot crazier than it is. Technically, you already have one quite complicated parser in the browser; you’ve got to support this weird HTML thing (without using regex) – and CSS, and Javascript… wait a minute: given we’ve stuck a programming-language parser for Javascript into the browser, why don’t we extend that to support plugins?

This is exactly what Mozilla did. See their introductory guide (and as a side note on how good Stack Overflow really is, here’s a Mozilla developer on SO pointing someone to the docs). Actually in truth it’s a little bit more complicated than I made out. Since Firefox already includes an XML/XHTML/HTML parser, the developers went all out – a lot of the user interface can be extended using XUL, an XML variant for that purpose. Technically, you could write an entire user interface in it.

Just as a quick recap, in order for your XML, XHTML, HTML, Javascript, CSS to get things done, it has to do this:

Operating system <------ Browser <------ Javascript or something

So, now the browser makes decisions based on what the Javascript asks before it asks the operating system. This has some security implications, which we’ll get to in exactly one more diagram’s time. It’s worth noting that on a broad concept level, languages like Python and Java also work this way:

Operating system <------ Interpreter/JVM etc <------ Script/Java Bytecode

Browsers! Focus, please!

Ok, ok. All of that was to enable you to understand the various levels on which a browser can be attacked – you need to understand the difference in locations and the various concepts for this explanation to work, so hopefully we’re there. So let’s get to it.

So, basically, security issues arise for the most part (ignoring some more complicated ones for now) because of bad input. Bad input gets into a program and hopefully crashes it, because the alternative is that specially crafted input makes it do something it shouldn’t, and that’s pretty much always a lot worse. So the real aim of the game is evaluating how bad bad input really is. The browser accepts a lot of input on varying levels, so let’s take them in turn:

  1. The address bar. You might seriously think, wait, that can’t be a problem, can it? But if the browser does not correctly pass the right thing to the “page getting” bit, or the “page getting” bit doesn’t raise an error when it gets bad input, you have a problem, because malformed links anywhere on the internet could break your browser. Believe it or not, in the early days of Internet Explorer, this was actually an issue that could lead you to believe you were on a different site from the one you were actually on. See here. These days, this particular attack vector is more of a phishing/spoofing style problem than browser manufacturers getting it wrong.

  2. Content parsers. By content parsing I mean all the bits that make a page up – html, xhtml, javascript, css. If there are bugs in these, the best case scenario is really that the browser crashes. However, you might be able to get the browser to do something it shouldn’t with a specially crafted web page, javascript or CSS file. Technically speaking, the browser should prevent the javascript it is executing from doing anything too malicious to your system; it will also prevent said javascript from doing anything too malicious to other pages. A full discussion of web vulnerabilities like this is really a blog post in itself, so we won’t cover it here.

  3. Extensions in Javascript/whatever the browser uses. Again, this comes down to whether there are vulnerabilities in the parser/enforcement rules. Otherwise, an extension of this nature will have more “power” than in-page Javascript, but still won’t be able to do too much “bad stuff”.

  4. Binary formats in the rendering engine. Clearly, a web page is more than just parseable programming-language-like things – it also contains images in a whole variety of complicated formats. A bug in these processors might be exploitable; given they’re written as native functions inside the browser, or as shared libraries the browser uses, the responsibility for access control is at the OS level, so a renderer that is persuaded to do something bad can do anything the user running the process can do (see the sketch after this list).

  5. Extensions (Firefox plugins)/File format handlers. These enable the browser to do things it can’t do in and of itself and are often developed externally to the core “browsing” part. Examples of this are embedding adobe reader in web pages and displaying flash applications. Again, these have decoders or parsers, so bugs in these can persuade the plugin to ask the operating system to do what we want it to do, rather than what it should.
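Here is the sketch promised in item 4: the classic shape of a decoder bug is trusting a length field taken from the file itself. The format and field layout are invented for illustration.

#include <stdio.h>

// Reads one "chunk" of a made-up binary image format.
void decode_chunk(FILE* f)
{
    unsigned char header[4];
    unsigned char payload[64];

    if (fread(header, 1, 4, f) != 4)
        return;

    // The attacker controls this 32-bit length field...
    unsigned long len = (unsigned long)header[0]
                      | ((unsigned long)header[1] << 8)
                      | ((unsigned long)header[2] << 16)
                      | ((unsigned long)header[3] << 24);

    // ...but the buffer is only 64 bytes. A crafted file overflows it,
    // corrupting nearby memory with attacker-chosen data.
    fread(payload, 1, len, f);
}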

You might wonder if 4 and 5 really should be subject to the same protections afforded to the javascript part. Well, the interesting thing is, that is part of the solution, but you can’t really do it in Javascript, because compared to having direct access to memory and the operating system, it would be much slower. Clearly, that’s a problem for video decoding and really heavy duty graphical applications such as flash is often used for, so plugins are and were the solution.

Ok, ok, tell me, what’s the worst the big bad hacker can do?

Depends. There are two really lucrative possible exploits from bugs of this sort: either arbitrary code execution (do what you want) or drive-by downloading (which is basically: send code that downloads something, then run it). In and of itself, the fact that you’re browsing as a restricted user and not an administrator (right?) means the damage the attacker can do is limited to what your user can do. That could be fairly critical, including infecting all your files with malware, deleting stuff, whatever it is they want. You might then unwittingly pass on the infection.

The real problem, however, is that the attacker can use this downloaded code as a launch platform to attack more important parts. A full discussion of botnets, rootkits and persistent malware really ought to also be a separate blog post, but let’s just say the upshot is your computer could be taking part in attacks on other computers orchestrated by criminal gangs. Serious then.

Oh my unicorns, why haven’t you developers fixed this?

First up, no piece of code is entirely perfect. The problem with large software projects is that they become almost too complicated for one person to understand every part and every change that is going on. So the project is split up with maintainers for each part and coding standards, rules, guidelines, tools you must run and that must pass, tests you must run and must pass etc. In my experience, this improves things, but no matter how much you throw at it in terms of management, you’re still dealing with humans writing programs. So mistakes happen.

However, our increasing reliance on the web has got armies of security people of the friendly, helpful sort trying to fix the problem, including the people who build browsers. I went to great lengths to discuss threads and processes for a good reason: now I can discuss what Google Chrome tries to do.

Ok…

So as I said, the traditional way you think about building an application when you want to do more than one thing at once is to use threads. You can store your data in memory and any thread can get it (I’m totally ignoring a whole series of problems you can encounter with this for simplicity, but if you’re a budding programmer, read up on concurrency before just trying this) reasonably easily. So when you load a page and run a download, both can happen at once and the operating system makes it look like they’re both happening at the same time.

Any bugs from 2-5, however, can affect this whole process. There’s nothing to stop a plugin waltzing over and modifying the memory of the parser, or totally replacing it, or well, anything. Also, a crash in a single thread in a program tends to bring the whole thing down, so from a stability point of view, it can also be a problem. Here’s a nice piece of ascii art:

|-----------------------------------------------------------------------|
| Process no 3412  Thread/parsing security.blogoverflow.com             |
|  Big pile of     Thread/rendering PNG                                 |
|  shared memory   Thread/running flash plugin - video from youtube     |
|-----------------------------------------------------------------------|
So, basically, all that code shares the same memory address space. Simple enough.

The Chrome developers looked at this and decided it was a really bad idea. For starters, there’s the stability problem – plugins do crash fairly frequently, especially if you’ve ever used the 64-bit flash plugin for linux. That might take the whole process down, or it might cause something else to happen, who knows.

The Chrome model executes sub-processes for each site to be rendered (displayed) and for every plugin in use, like this:

   |--------------------------|       |-----------------------------------|
   | Master chrome            |-------| Rendering process for security....|
   | process. User interface  |       |-----------------------------------|
   | May also use threads     |
   |--------------------------|-------|-----------------------------------|
                |                     | Rendering process for somesite... |
   |--------------------------|       |-----------------------------------|
   | Flash plugin process     |
   |--------------------------|
So now what happens is the master process becomes only the user interface. Child processes actually do the work, but if they get bad input, they die and don’t bring the whole browser down.

Now, this also helps solve the security problem to some extent. Those sub-processes cannot alter each other’s memory easily at all, so there’s not much for them to exploit. In order to get anything done, they have to use IPC – interprocess communication, to talk to the master process and ask for certain things to happen. So now the master process can do some filtering based on a policy of who should be able to access what. Child processes are then denied access to things they can’t have, by the master process.
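A toy sketch of that broker pattern follows. All the names, the message format and the policy are invented for illustration; real browsers use far richer policies and IPC mechanisms:

#include <fcntl.h>
#include <string.h>

typedef enum { REQ_OPEN_FILE, REQ_CONNECT } RequestType;

typedef struct
{
    RequestType type;
    char        path[256];
} Request;

// Runs in the master process, on a message received from a child
// over IPC. The child never touches the OS directly.
int handle_child_request(const Request* req)
{
    if (req->type == REQ_OPEN_FILE)
    {
        // Policy: renderer children may only read from the cache.
        if (strncmp(req->path, "/cache/", 7) != 0)
            return -1; // denied before the OS ever sees it

        return open(req->path, O_RDONLY); // fd passed back over IPC
    }
    return -1; // everything else is denied by default
}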

But, can’t sub processes just ask the operating system directly? Well, not on Windows for certain. When the master process creates the child process, it also asks Windows for a new security “level” if you like (token is the technical term) and creates the process in that level, which is highly restricted. So the child process will actually be denied certain things by the operating system unless it needs them, which is harder to do on a single process where parts of it do need unrestricted access.
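For the curious, here is a bare-bones sketch of how that looks with the Windows API. Error handling is omitted and a real sandbox does considerably more (job objects, alternate desktops, integrity levels):

#include <windows.h>

void launch_restricted_child(wchar_t* command_line)
{
    HANDLE token, restricted;
    STARTUPINFOW si;
    PROCESS_INFORMATION pi;

    // Start from our own token and derive a weaker one.
    OpenProcessToken(GetCurrentProcess(), TOKEN_ALL_ACCESS, &token);
    CreateRestrictedToken(token, DISABLE_MAX_PRIVILEGE,
                          0, NULL, 0, NULL, 0, NULL, &restricted);

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    // The child starts with the restricted token, so the OS itself
    // refuses it access to things the master can still reach.
    CreateProcessAsUserW(restricted, NULL, command_line, NULL, NULL,
                         FALSE, 0, NULL, NULL, &si, &pi);
}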

In defence of the Windows engineers, this functionality (security tokens) has been part of the Windows API for a while; it’s just that developers haven’t really used it. Also, it applies to threads too, although there remains a problem with access to the process’s memory.

Great! I can switch to Chrome and all will be well!

Hold up a second… yes, this is an excellent concept, and hopefully other browser writers will start looking at integrating OS security into their browsers too. However, any solution like this will, under certain conditions, be circumventable; it’s just a matter of finding a scenario. Also, Chrome, like any other browser, cannot defend you from yourself – some basic rules still apply. Briefly: do not open dodgy content, and do keep your system/antivirus/firewall up to date.

Right, so what’s the best browser?

I can’t answer that, nor will I attempt to try. All browser manufacturers know that “security” bugs happen and all of them release fixes, as do third parties. A discussion on which has the best architecture is not a simple one – we’ve outlined the theory here, but how it is implemented will really determine whether it works – the devil is in the detail, as they say.

So that concludes our high level discussion of the various security issues with browsers, plugins and drive-by downloads. However, I would really like to stress there is a whole other area of security which you might broadly call “web application” security, which concerns in part what the browser allows to happen, but also encompasses servers and databases and many other things. We’ve covered a small corner of the various problems today; hopefully some of our resident web people will over time cover some of the interesting things that happen in building complex web sites. Here is a list of all the questions tagged web-browser.