Attack

QoTW #53 How can I punish a hacker?

2016-02-05 by roryalsop. 4 comments

Elmo asked:

I am a small business owner. My website was recently hacked, although no real damage was done: only non-sensitive data was stolen, and some backdoor shells were uploaded. Since then, I have deleted the shells, fixed the vulnerability, and blocked the hacker’s IP address. Can I do something to punish the hacker, since I have the IP address? Can I get them sent to jail or something?

This question comes up time and time again, as people do get upset and angry when their online presence has been attacked, and we have some very simple guidance which will almost always apply:

Terry Chia wrote:

You don’t punish the hacker. The law does. Just report whatever pieces of information you have to the police and let them handle it.

And @TildalWave asked

What makes you believe that this IP address is indeed the hacker’s, and not simply that of another hacked computer running in zombie mode? And who is to say your own web server wasn’t running in exactly the same zombie mode until you removed the shells installed through that backdoor? Should you expect another person, whose web server was attacked (or indeed hacked) from your compromised server’s IP address, to think exactly the same about you, and to already be looking for ways to get even, just as you are?

justausr takes this even further:

Don’t play their game: you’ll lose. I’ve learned not to play that game; hackers by nature have more spare time than you and will ultimately win. Even if you get him back, your website will be unavailable to your customers for a solid week afterwards. Remember, you’re the one with public-facing servers, while all you have is the IP address of a random server he probably used once. He’s the one with a bunch of scripts and likely more knowledge than you will gain in your quest for revenge. The odds aren’t in your favor, and the cost to your business is probably too high to risk losing.

Similarly, the other answers mostly discuss the difficulty in identifying the correct perpetrator, and the risks of trying to do something to them.

But Scott Pack‘s answer does provide a little side-step from the generally accepted principles most civilians must follow:

The term most often used to describe what you’re talking about is Hacking Back. It’s part of the Offensive Countermeasures movement that’s gaining traction lately. Some really smart people are putting their heart and soul into figuring out how we, as an industry, should be doing this. There are lots of things you can do, but unless you’re a nation-state, or have orders and a contract from a nation-state, your options are severely limited.

tl;dr – don’t be a vigilante. If you do, you will have broken the law, and the police are likely to be able to prove your guilt a lot more easily than that of the unknown hacker.

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Attacking RSA through Sound

2013-12-23 by Terry Chia. 1 comment

A new attack against RSA has been made known this week. Details can be found in the paper RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis. One notable name amongst the co-authors is Adi Shamir, one of the three researchers who originally published the RSA algorithm.

This attack is a type of side-channel attack against RSA. A side-channel attack targets the implementation of a cryptosystem rather than the algorithm itself. RSA implementations have been broken by many side-channel attacks in the past, the most famous of which is probably the timing attack described by Paul C. Kocher in his paper Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and Other Systems.

This attack works by taking advantage of the fact that a computer emits different sounds when performing different tasks. From these emanations, it is possible to recover information about the RSA key used during encryption or decryption. Genkin, Shamir and Tromer demonstrated that when the same plaintext is encrypted with different RSA keys, it is possible to discern which key was used in the encryption process: a form of key-distinguishing attack.
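To see how a computation can betray a secret through a physical channel at all, consider square-and-multiply, the textbook algorithm behind RSA exponentiation. The C sketch below is my own toy illustration with made-up parameters (the paper’s attack distinguishes whole keys from their acoustic signatures, rather than reading bits one at a time): the amount of work done per bit, and hence the machine’s power draw and acoustic emissions, depends on the secret exponent.

/* Toy square-and-multiply modular exponentiation (illustration only).
   The branch below does an extra multiply for every 1-bit of the
   secret exponent, so the sequence of operations, and therefore the
   machine's power and acoustic signature, depends on the key. */
#include <stdint.h>
#include <stdio.h>

static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1;
    base %= mod;
    while (exp) {
        if (exp & 1)                        /* secret-dependent branch */
            result = (result * base) % mod; /* extra work leaks a 1-bit */
        base = (base * base) % mod;
        exp >>= 1;
    }
    return result;
}

int main(void)
{
    /* toy parameters; real RSA exponents are thousands of bits long */
    printf("%llu\n", (unsigned long long)modexp(42, 0xB7, 1000003));
    return 0;
}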

This concept is not new. In fact, it is the topic of Tromer’s PhD thesis, Hardware-Based Cryptanalysis, published in 2007. What is new about this paper is that the researchers demonstrated an actual attack that is able to distinguish between RSA keys, instead of just the theoretical possibility. Even more surprisingly, the researchers were able to pull off the attack using mobile phones, demonstrating that no specialized recording equipment is required.

Should you be worried? The attack was demonstrated in lab conditions. It might be a little harder to pull off in real-life scenarios, where there will presumably be much more background noise to mask the sounds. The actual attack was demonstrated against GnuPG. Updating to the latest version of GnuPG 1.4.x will fix this particular problem. Better still, use the GnuPG 2.x branch, which employs RSA blinding and should protect against such side-channel attacks.
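For the curious, blinding works by randomising the value that actually gets exponentiated, so the side channel no longer correlates with the ciphertext under attack. Here is a minimal C sketch of the idea, using the textbook toy parameters n = 3233, e = 17, d = 2753; this is my own illustration of the principle, not GnuPG’s implementation.

/* RSA blinding sketch with textbook toy parameters (n = 3233 = 61 * 53,
   e = 17, d = 2753). Illustration of the countermeasure only. */
#include <stdint.h>
#include <stdio.h>

static uint64_t modexp(uint64_t base, uint64_t exp, uint64_t mod)
{
    uint64_t result = 1;
    for (base %= mod; exp; exp >>= 1) {
        if (exp & 1)
            result = (result * base) % mod;
        base = (base * base) % mod;
    }
    return result;
}

/* modular inverse via the extended Euclidean algorithm */
static int64_t modinv(int64_t a, int64_t m)
{
    int64_t g = m, r = a % m, x = 0, x1 = 1;
    while (r) {
        int64_t q = g / r, t;
        t = g - q * r;  g = r;  r = t;
        t = x - q * x1; x = x1; x1 = t;
    }
    return ((x % m) + m) % m;
}

int main(void)
{
    uint64_t n = 3233, e = 17, d = 2753;
    uint64_t c = modexp(65, e, n);   /* ciphertext of the message m = 65 */

    /* In real code, r must be a fresh random value coprime to n. */
    uint64_t r = 99;
    uint64_t r_inv = (uint64_t)modinv((int64_t)r, (int64_t)n);

    uint64_t c_blind = (c * modexp(r, e, n)) % n; /* blind:   c' = c * r^e  */
    uint64_t m_blind = modexp(c_blind, d, n);     /* decrypt: m' = m * r    */
    uint64_t m = (m_blind * r_inv) % n;           /* unblind: m  = m' / r   */

    printf("recovered m = %llu\n", (unsigned long long)m); /* prints 65 */
    return 0;
}

Because the attacker can neither predict nor observe r, measurements taken during the blinded exponentiation tell them nothing useful about the ciphertext being decrypted.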

While this attack might not be practical as of now, it is still very interesting because many cryptosystems suffer from what are basically implementation problems. Once again, don’t roll your own cryptography!

For some further detail, read the related question on security.stackexchange.com.


WiFi security: history of insecurities in WEP, WPA and WPA2

2013-08-28 by lucaskauffman. 0 comments

Recently I answered a question on security.stackexchange.com regarding security in wireless protocols. The OP wanted to know more about what methods were available to break into a wireless network and how long it would take.


QoTW #46: CTRL+ALT+DEL Login – Rationale behind it?

2013-05-10 by roryalsop. 1 comment

CountZero asked this interesting question: Why is CTRL+ALT+DEL required at login on Windows systems?

His perspective was that it adds an extra step before login, which is bad from a usability perspective, so there must be a good reason for it.

This got a lot of attention, but looking at the top answers:

Adnan‘s answer briefly describes the Secure Attention Key – the Windows kernel will only notify the Winlogon process about this key combination, which prevents it being hijacked by an application, malware or some other process. This way, when you press Ctrl+Alt+Del, you can be sure that you’re typing your password into the real login form and not some fake process trying to steal your password, such as an application which looks exactly like the Windows login screen. An equivalent of this in Linux is Ctrl+Alt+Pause.

Polynomial‘s comment on the answer further expands on the history of this notification:

As a side note: when you say it’s “wired”, what that actually means is that Ctrl+Alt+Del is mapped to a hardware-defined interrupt (set in the APIC, a physical chip on your motherboard). The interrupt was, historically, triggered by the BIOS’ keyboard handler routine, but these days it’s less clear cut. The interrupt is mapped to an ISR which is executed at ring0, which triggers the OS’s internal handler for the event. When no ISR for the interrupt is set, it (usually) causes an ACPI power-cycle event, also known as a hard reboot.

Thomas Pornin describes an attack which would work if the Secure Attention Key didn’t exist:

You could make an application which goes full-screen, grabs the keyboard, and displays something which looks like the normal login screen, down to the last pixel. You then log on to the machine, launch the application, and go away until some unsuspecting victim finds the machine, tries to log on, and gives his username and password to your application. Your application then just has to simulate a blue screen of death, or maybe actually log the user on, to complete the illusion.

There is also an excellent answer over on ServerFault, which TerryChia linked to in his answer:

The Windows (NT) kernel is designed to reserve the notification of this key combination to a single process: Winlogon. So, as long as the Windows installation itself is working as it should, no third-party application can respond to this key combination (if it could, it could present a fake logon window and keylog your password 😉).
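As a small experiment (my own illustration, not from the thread), the Win32 sketch below installs a low-level keyboard hook, which is exactly how a user-mode keylogger would watch keystrokes. It happily observes ordinary keys, but the Ctrl+Alt+Del sequence is reserved by the kernel for Winlogon, and no user-mode hook can intercept or suppress it.

/* Sketch: observe keystrokes with a WH_KEYBOARD_LL hook (build with,
   e.g., cc sas_demo.c -luser32). Ordinary keys show up here; the
   Ctrl+Alt+Del secure attention sequence still goes to Winlogon,
   and nothing this hook returns can swallow it. */
#include <windows.h>
#include <stdio.h>

static LRESULT CALLBACK KbdProc(int nCode, WPARAM wParam, LPARAM lParam)
{
    if (nCode == HC_ACTION && wParam == WM_KEYDOWN) {
        const KBDLLHOOKSTRUCT *k = (const KBDLLHOOKSTRUCT *)lParam;
        printf("key down: vk=0x%02lX\n", (unsigned long)k->vkCode);
    }
    return CallNextHookEx(NULL, nCode, wParam, lParam);
}

int main(void)
{
    HHOOK hook = SetWindowsHookEx(WH_KEYBOARD_LL, KbdProc,
                                  GetModuleHandle(NULL), 0);
    if (!hook)
        return 1;

    /* low-level hooks are serviced via this thread's message loop */
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
    UnhookWindowsHookEx(hook);
    return 0;
}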

So there you have it – as long as your OS hasn’t been hacked, CTRL+ALT+DEL protects you.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #41: Why do we lock our computers?

2012-11-30 by roryalsop. 2 comments

Iszi chose this week’s question of the week, Tom Marthenal‘s “Why do we lock our computers?” – as Tom puts it:

It’s common knowledge that if somebody has physical access to your machine they can do whatever they want with it, so what is the point?

This one attracted a lot of views, as it is a simple question of interest to everyone.

Both Bruce Ediger and Polynomial answered with the core reason – it removes the risk from the casual attacker while costing the user next to nothing! This is an essential factor in cost/usability tradeoffs for security. From Bruce:

The value of locking is somewhat larger than the price of locking it. Sort of like how in good neighborhoods, you don’t need to lock your front door. In most neighborhoods, you do lock your front door, but anyone with a hammer, a large rock or a brick could get in through the windows.

and from Polynomial:

An attacker with a short window of opportunity (e.g. whilst you’re out getting coffee) must be prevented at minimum cost to you as a user, in such a way that makes it non-trivial to bypass under tight time constraints.

Kaz pointed out another essential point, traceability:

If you don’t lock, it is easy for someone to poke around inside your session in such a way that you will not notice it when you return to your machine.

And zzzzBov added this in a comment:

…few bystanders would question someone walking up to a house and entering through the front door. The assumption is that the person entering has a reason to. If a bystander watches someone break in through a window, they’re much more likely to call the authorities. This is analogous to sitting down at a computer that’s unlocked, vs physically hacking into the system after crawling under a desk.

It removes a large percentage of possible attacks – those from your co-workers wanting to mess with your stuff – thanks enedene.

So – protect yourself from co-workers, casual snooping and pilfering and other mischief by simply locking your machine every time you leave your desk!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #40: What’s the impact of disclosing the front-face of a credit or debit card?

2012-11-23 by roryalsop. 0 comments

“What is the impact of disclosing the front face of a credit card?” and “How does Amazon bill me without the CVC/CVV/CVV2?” are two questions which worry a lot of people, especially those who are aware of the security risks of disclosing information, but who don’t fully understand them.

Rory McCune‘s question was inspired by a number of occasions where someone was called out for disclosing the front of their credit card, and he wondered what the likely impact of disclosing this information could be, as the front of the card gives the card’s PAN (the 16-digit number), start date, expiry date and cardholder name – and, for debit cards, the cardholder’s account number and sort code (though this may vary by region).

TC1 asked how Amazon and others can bill you without the CVV (the special number on the back of the card).

atdre‘s answer on that second question states that for Amazon:

The only thing necessary to make a purchase is the card number, whether in number form or magnetic. You don’t even need the expiration date.

Ron Robinson also provides this answer:

Amazon pays a slightly higher rate to accept your payment without the CVV, but the CVV is not strictly required to present a transaction – everybody uses CVV because they get a lower rate if it is present (less risk, less cost).

So there is one rule for the Amazons out there and another for the rest of us – which is good, as this reduces our exposure to card fraud, certainly for online transactions.

So what about physical transactions – if I have a photo of the front of a credit card and use it to create a fake card, is that enough to commit fraud?

From Polynomial‘s answer:

On most EFTPOS systems, it’s possible to manually enter the card details. When a field is not present, the operator simply presses enter to skip, which is common with cards that don’t carry a start date. On these systems, it is trivial to charge a card without the CVV. When I worked in retail, we would frequently do this when the chip on a card wasn’t working and the CVV had rubbed off. In such cases, all that was needed was the card number and expiry date, with a signature on the receipt for verification.

So if a fraudster could fake a card, it could be accepted in retail establishments, especially in countries that don’t yet use Chip and PIN.

Additionally, bushibytes pointed out the social engineering possibilities:

As a somewhat current example, see how Mat Honan got hacked last summer: http://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-honan-hacking/all/ In his case, Apple only required the last digits of his credit card (which his attacker obtained from Amazon) in order to give up the account. It stands to reason that other vendors may be duped if an attacker were to provide a full credit card number, including expiration dates.

In summary, there is a very real risk of not only financial fraud, but also social engineering, from the public disclosure of the front of your credit card. Online, the simplest fraud is through the big players like Amazon, who don’t need the CVV, and in the real world an attacker who can forge a card with just an image of the front of your card is likely to be able to use that card.

So take care of it – don’t divulge the number unless you have to!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #36: How does the zero-day Internet Explorer vulnerability discovered in September 2012 work?

2012-09-28 by roryalsop. 0 comments

Community member Iszi nominated this week’s question, which asks for an explanation of the issue from the perspective of a developer/engineer: what exactly is being exploited, and why does it work?

Polynomial provided the following detailed, technical answer:

CVE-2012-4969, aka the latest IE 0-day, is based on a use-after-free bug in IE’s rendering engine. A use-after-free occurs when a dynamically allocated block of memory is used after it has been disposed of (i.e. freed). Such a bug can be exploited by creating a situation where an internal structure contains pointers to sensitive memory locations (e.g. the stack or executable heap blocks) in a way that causes the program to copy shellcode into an executable area.

In this case, the problem is with the CMshtmlEd::Exec function in mshtml.dll. The CMshtmlEd object is freed unexpectedly, then the Exec method is called on it after the free operation.

First, I’d like to cover some theory. If you already know how use-after-free works, then feel free to skip ahead.

At a low level, a class can be equated to a memory region that contains its state (e.g. fields, properties, internal variables, etc) and a set of functions that operate on it. The functions actually take a “hidden” parameter, which points to the memory region that contains the instance state.

For example (excuse my terrible pseudo-C++):

class Account
{
    int balance = 0;
    int transactionCount = 0;

    void Account::AddBalance(int amount)
    {
        balance += amount;
        transactionCount++;
    }

    void Account::SubtractBalance(int amount)
    {
        balance -= amount;
        transactionCount++;
    }
}
The above can actually be represented as the following:
private struct Account
{
    int balance = 0;
    int transactionCount = 0;
}

public void* Account_Create()
{
    Account* account = (Account*)malloc(sizeof(Account));
    account->balance = 0;
    account->transactionCount = 0;
    return (void*)account;
}

public void Account_Destroy(void* instance)
{
    free(instance);
}

public void Account_AddBalance(void* instance, int amount)
{
    ((Account*)instance)->balance += amount;
    ((Account*)instance)->transactionCount++;
}

public void Account_SubtractBalance(void* instance, int amount)
{
    ((Account*)instance)->balance -= amount;
    ((Account*)instance)->transactionCount++;
}

public int Account_GetBalance(void* instance)
{
    return ((Account*)instance)->balance;
}

public int Account_GetTransactionCount(void* instance)
{
    return ((Account*)instance)->transactionCount;
}
I’m using void* to demonstrate the opaque nature of the reference, but that’s not really important. The point is that we don’t want anyone to be able to alter the Account struct manually, otherwise they could add money arbitrarily, or modify the balance without increasing the transaction counter.

Now, imagine we do something like this:

void* myAccount = Account_Create();

Account_AddBalance(myAccount, 100);
Account_SubtractBalance(myAccount, 75);

// ...

Account_Destroy(myAccount);

// ...

if(Account_GetBalance(myAccount) > 1000) // <-- !!! use after free !!!
    ApproveLoan();
By the time we reach Account_GetBalance, the pointer value in myAccount points to memory that is in an indeterminate state. Now, imagine we can do the following:

  1. Trigger the call to Account_Destroy reliably.
  2. Execute any operation after Account_Destroy but before Account_GetBalance that allows us to allocate a reasonable amount of memory, with contents of our choosing.

Usually, these calls are triggered in different places, so it’s not too difficult to achieve this. Now, here’s what happens (a runnable C sketch follows the list):

  1. Account_Create allocates an 8-byte block of memory (4 bytes for each field) and returns a pointer to it. This pointer is now stored in the myAccount variable.
  2. Account_Destroy frees the memory. The myAccount variable still points to the same memory address.
  3. We trigger our memory allocation, containing repeating blocks of 39 05 00 00 01 00 00 00. This pattern correlates to balance = 1337 and transactionCount = 1. Since the old memory block is now marked as free, it is very likely that the memory manager will write our new memory over the old memory block.
  4. Account_GetBalance is called, expecting to point to an Account struct. In actuality, it points to our overwritten memory block, resulting in our balance actually being 1337, so the loan is approved!
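The heap-reuse behaviour in step 3 is easy to reproduce for yourself. The C program below is my own self-contained sketch (deliberately undefined behaviour, so results vary by allocator): it frees the structure, makes a same-sized “attacker” allocation with chosen contents, then reads through the stale pointer.

/* Use-after-free demo (undefined behaviour, for study only). On many
   allocators, the very next same-sized malloc() reuses the freed block,
   so the dangling pointer now "sees" attacker-chosen data. */
#include <stdio.h>
#include <stdlib.h>

struct Account { int balance; int transactionCount; };

int main(void)
{
    struct Account *acct = malloc(sizeof *acct);
    acct->balance = 50;
    acct->transactionCount = 2;

    free(acct);                 /* acct is now dangling */

    /* "attacker" allocation of the same size, with chosen contents:
       39 05 00 00 01 00 00 00 -> balance = 1337, transactionCount = 1 */
    int *spray = malloc(sizeof(struct Account));
    spray[0] = 1337;
    spray[1] = 1;

    /* use after free: often prints 1337 because the block was reused */
    printf("balance = %d\n", acct->balance);

    free(spray);
    return 0;
}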

This is all a simplification, of course, and real classes create rather more obtuse and complex code. The point is that a class instance is really just a pointer to a block of data, and class methods are just the same as any other function, but they “silently” accept a pointer to the instance as a parameter.

This principle can be extended to control values on the stack, which in turn causes program execution to be modified. Usually, the goal is to drop shellcode on the stack, then overwrite a return address such that it now points to a jmp esp instruction, which then runs the shellcode.

This trick works on non-DEP machines, but when DEP is enabled it prevents execution of the stack. Instead, the shellcode must be designed using Return-Oriented Programming (ROP), which uses small blocks of legitimate code from the application and its modules to perform an API call, in order to bypass DEP.

Anyway, I’m going off-topic a bit, so let’s get into the juicy details of CVE-2012-4969!

In the wild, the payload was dropped via a packed Flash file, designed to exploit the Java vulnerability and the new IE bug in one go. There’s also been some interesting analysis of it by AlienVault.

The metasploit module says the following:

> This module exploits a vulnerability found in Microsoft Internet Explorer (MSIE). When rendering an HTML page, the CMshtmlEd object gets deleted in an unexpected manner, but the same memory is reused again later in the CMshtmlEd::Exec() function, leading to a use-after-free condition.

There’s also an interesting blog post about the bug, albeit in rather poor English – I believe the author is Chinese. Anyway, the blog post goes into some detail:

> When the execCommand function of IE execute a command event, will allocated the corresponding CMshtmlEd object by AddCommandTarget function, and then call mshtml@CMshtmlEd::Exec() function execution. But, after the execCommand function to add the corresponding event, will immediately trigger and call the corresponding event function. Through the document.write("L") function to rewrite html in the corresponding event function be called. Thereby lead IE call CHTMLEditor::DeleteCommandTarget to release the original applied object of CMshtmlEd, and then cause triggered the used-after-free vulnerability when behind execute the msheml!CMshtmlEd::Exec() function.

Let’s see if we can parse that into something a little more readable:

  1. An event is applied to an element in the document.
  2. The event executes, via execCommand, which allocates a CMshtmlEd object via the AddCommandTarget function.
  3. The target event uses document.write to modify the page.
  4. The event is no longer needed, so the CMshtmlEd object is freed via CHTMLEditor::DeleteCommandTarget.
  5. execCommand later calls CMshtmlEd::Exec() on that object, after it has been freed.

Part of the code at the crash site looks like this:

637d464e 8b07     mov eax, dword ptr [edi]
637d4650 57       push edi
637d4651 ff5008   call dword ptr [eax+8]
The use-after-free allows the attacker to control the value of edi, which can be modified to point at memory that the attacker controls. Let’s say that we can insert arbitrary code into memory at 01234f00, via a memory allocation. We populate the data as follows:
01234f00: 01234f08
01234f04: 41414141
01234f08: 01234f0a
01234f0a: cccccccc   // int3 breakpoint
  1. We set edi to 01234f00, via the use-after-free bug.
  2. mov eax, dword ptr [edi] results in eax being populated with the memory at the address in edi, i.e. 01234f00.
  3. push edi pushes 01234f00 to the stack.
  4. call dword ptr [eax+8] takes eax (which is 01234f00) and adds 8 to it, giving us 01234f08. It then dereferences that memory address, giving us 01234f0a. Finally, it calls 01234f0a.
  5. The data at 01234f0a is treated as an instruction. cc translates to an int3, which causes the debugger to raise a breakpoint. We’ve executed code!

This allows us to control eip, so we can modify program flow to our own shellcode, or to a ROP chain.

Please keep in mind that the above is just an example, and in reality there are many other challenges in exploiting this bug. It’s a pretty standard use-after-free, but the nature of JavaScript makes for some interesting timing and heap-spraying tricks, and DEP forces us to use ROP to gain an executable memory block.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #35: Dealing with excessive “Carding” attempts

2012-09-14 by scottpack. 0 comments

Community moderator Jeff Ferland nominated this week’s question: Dealing with excessive “Carding” attempts.

I found this to be an interesting question for two reasons:

  1. It turned the classic password brute force on its head by applying it to credit cards
  2. It attracted attention from a large number of relatively new users

Jeff Ferland postulated that the website was too helpful with its error codes and recommended returning the same “Transaction Failed” message no matter the error.

User w.c suggested using some kind of additional verification like a CAPTCHA. Also mentioned was the notion of instituting time delays for multiple successive CAPTCHA or transaction failures.

A slightly different tack was discussed by GdD. Instead of suggesting specific mitigations, GdD pointed out the inevitability of the attackers adapting to whatever protections are put in place. The recommendation was to make sure that you keep adapting in turn and force the attackers into your cat and mouse game.

Ajacian81 felt that the attacker’s purpose may not be finding valid numbers at all, and may instead be a payment-processing denial of service. The suggested fix was to randomize the names of the input fields in an effort to prevent scripting of the site.

John Deters described a company that had previously had the same problem. They completely transferred the problem to their payment processor by having it automatically decline all charges below a certain threshold. He also pointed out that the FBI may be interested in the situation and should be notified – this, of course, assumes USA jurisdiction.

Both ddyer and Winston Ewert suggested different ways of instituting artificial delays into the processing. Winston discussed outright delaying all transactions while ddyer discussed automated detection of “suspicious” transactions and blocking further transactions from that host for some time period.
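As a rough sketch of that last idea (the thresholds, names and in-memory table here are mine, not from the thread), the server-side bookkeeping could look something like this:

/* Sketch of per-source throttling after repeated card declines
   (toy in-memory table; a real deployment would persist this state
   and handle proxies and NAT far more carefully). */
#include <stdio.h>
#include <string.h>
#include <time.h>

#define MAX_SOURCES 1024
#define FAIL_LIMIT  3      /* failures allowed before a block */
#define BLOCK_SECS  600    /* how long a blocked source stays blocked */

struct source { char ip[46]; int failures; time_t blocked_until; };
static struct source table[MAX_SOURCES];
static int n_sources;

static struct source *lookup(const char *ip)
{
    for (int i = 0; i < n_sources; i++)
        if (strcmp(table[i].ip, ip) == 0)
            return &table[i];
    /* toy eviction: recycle slot 0 when the table is full */
    struct source *s = &table[n_sources < MAX_SOURCES ? n_sources++ : 0];
    snprintf(s->ip, sizeof s->ip, "%s", ip);
    s->failures = 0;
    s->blocked_until = 0;
    return s;
}

/* Call on every declined card; returns 1 if the source should be refused. */
static int record_failure(const char *ip)
{
    struct source *s = lookup(ip);
    time_t now = time(NULL);
    if (s->blocked_until > now)
        return 1;                          /* still serving a time-out */
    if (++s->failures >= FAIL_LIMIT) {
        s->failures = 0;
        s->blocked_until = now + BLOCK_SECS;
        return 1;
    }
    return 0;
}

int main(void)
{
    for (int i = 1; i <= 5; i++)
        printf("failure %d from 203.0.113.9: blocked=%d\n",
               i, record_failure("203.0.113.9"));
    return 0;
}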

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

How can you protect yourself from CRIME, BEAST’s successor?

2012-09-10 by roryalsop. 11 comments

For those who haven’t been following Juliano Rizzo and Thai Duong, the two researchers who developed the BEAST attack against TLS 1.0/SSL 3.0 in September 2011: they have developed another attack, which they plan to publish at the Ekoparty conference in Argentina later this month – this time giving them the ability to hijack HTTPS sessions – and this has started people worrying again.

Security Stack Exchange member Kyle Rozendo asked this question:

With the advent of CRIME, BEAST’s successor, what protection is available for an individual and/or system owner in order to protect themselves and their users against this new attack on TLS?

And the community expectation was that we wouldn’t get an answer until Rizzo and Duong presented their attack.

However, our highest reputation member, Thomas Pornin delivered this awesome hypothesis, which I will quote here verbatim:


This attack is supposed to be presented in 10 days from now, but my guess is that they use compression.

SSL/TLS optionally supports data compression. In the ClientHello message, the client states the list of compression algorithms that it knows of, and the server responds, in the ServerHello, with the compression algorithm that will be used. Compression algorithms are specified by one-byte identifiers, and TLS 1.2 (RFC 5246) defines only the null compression method (i.e. no compression at all). Other documents specify compression methods, in particular RFC 3749 which defines compression method 1, based on DEFLATE, the LZ77-derivative which is at the core of the GZip format and also modern Zip archives. When compression is used, it is applied on all the transferred data, as a long stream. In particular, when used with HTTPS, compression is applied on all the successive HTTP requests in the stream, header included. DEFLATE works by locating repeated subsequences of bytes.

Suppose that the attacker is some Javascript code which can send arbitrary requests to a target site (e.g. a bank) and runs on the attacked machine; the browser will send these requests with the user’s cookie for that bank — the cookie value that the attacker is after. Also, let’s suppose that the attacker can observe the traffic between the user’s machine and the bank (plausibly, the attacker has access to the same LAN or WiFi hotspot as the victim; or he has hijacked a router somewhere on the path, possibly close to the bank server).

For this example, we suppose that the cookie in each HTTP request looks like this:

> Cookie: secret=7xc89f+94/wa

The attacker knows the “Cookie: secret=” part and wishes to obtain the secret value. So he instructs his Javascript code to issue a request containing in the body the sequence “Cookie: secret=0”. The HTTP request will look like this:

POST / HTTP/1.1
Host: thebankserver.com
(…)
Cookie: secret=7xc89f+94/wa
(…)

Cookie: secret=0

When DEFLATE sees that, it will recognize the repeated “Cookie: secret=” sequence and represent the second instance with a very short token (one which states “previous sequence has length 15 and was located n bytes in the past”); DEFLATE will have to emit an extra token for the ‘0’.

The request goes to the server. From the outside, the eavesdropping part of the attacker sees an opaque blob (SSL encrypts the data) but he can see the blob length (with byte granularity when the connection uses RC4; with block ciphers there is a bit of padding, but the attacker can adjust the contents of his requests so that he may phase with block boundaries, so, in practice, the attacker can know the length of the compressed request).

Now, the attacker tries again, with “Cookie: secret=1” in the request body. Then, “Cookie: secret=2”, and so on. All these requests will compress to the same size (almost — there are subtleties with Huffman codes as used in DEFLATE), except the one which contains “Cookie: secret=7”, which compresses better (16 bytes of repeated subsequence instead of 15), and thus will be shorter. The attacker sees that. Therefore, in a few dozen requests, the attacker has guessed the first byte of the secret value.

He then just has to repeat the process (“Cookie: secret=70”, “Cookie: secret=71”, and so on) and obtain, byte by byte, the complete secret.


What I describe above is what I thought of when I read the article, which talks about “information leak” from an “optional feature”. I cannot know for sure that what will be published as the CRIME attack is really based upon compression. However, I do not see how the attack on compression cannot work. Therefore, regardless of whether CRIME turns out to abuse compression or be something completely different, you should turn off compression support from your client (or your server).

Note that I am talking about compression at the SSL level. HTTP also includes optional compression, but this one applies only to the body of the requests and responses, not the header, and thus does not cover the Cookie: header line. HTTP-level compression is fine.

(It is a shame to have to remove SSL compression, because it is very useful to lower bandwidth requirements, especially when a site contains many small pictures or is Ajax-heavy with many small requests, all beginning with extremely similar versions of a mammoth HTTP header. It would be better if the security model of Javascript was fixed to prevent malicious code from sending arbitrary requests to a bank server; I am not sure it is easy, though.)
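Pornin’s length oracle is easy to demonstrate offline with zlib, which implements DEFLATE. The C sketch below (my own illustration; build with -lz) compresses one fake request per candidate byte; the guess that matches the secret’s next character, ‘7’, usually comes out about a byte shorter than the rest.

/* Offline demo of the compression length oracle behind CRIME
   (sketch; build with: cc crime_demo.c -lz). */
#include <stdio.h>
#include <string.h>
#include <zlib.h>

/* Deflate buf at maximum compression and return the compressed size. */
static unsigned long deflated_len(const char *buf)
{
    Bytef out[1024];
    uLongf out_len = sizeof out;
    if (compress2(out, &out_len, (const Bytef *)buf, (uLong)strlen(buf),
                  Z_BEST_COMPRESSION) != Z_OK)
        return 0;
    return (unsigned long)out_len;
}

int main(void)
{
    const char *alphabet = "0123456789abcdef+/";
    char request[256];

    for (const char *c = alphabet; *c; c++) {
        /* the real cookie travels in the header; the attacker's guess
           travels in the request body he controls */
        snprintf(request, sizeof request,
                 "POST / HTTP/1.1\r\n"
                 "Host: thebankserver.com\r\n"
                 "Cookie: secret=7xc89f+94/wa\r\n"
                 "\r\n"
                 "Cookie: secret=%c", *c);
        printf("guess '%c' -> %lu compressed bytes\n",
               *c, deflated_len(request));
    }
    /* '7' extends the repeated "Cookie: secret=" match by one byte, so
       it usually compresses shorter than every other guess (Huffman
       coding can occasionally blur this by a byte, as Pornin notes). */
    return 0;
}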


As bobince commented:

I hope CRIME is this and we don’t have two vulns of this size in play! However, I wouldn’t say that being limited to entity bodies makes HTTP-level compression safe in general… whilst a cookie header is an obvious first choice of attack, there is potentially sensitive material in the body too, e.g. imagine sniffing an anti-XSRF token from a response body by causing the browser to send fields that get reflected in that response.

It is reassuring that there is a fix, and my recommendation would be for everyone to assess the risk to them of having sessions hijacked and seriously consider disabling SSL compression support.

QOTW #33 – Communications infrastructure after a nuclear explosion

2012-08-24 by Polynomial. 3 comments

A few weeks ago, I was sat pondering nuclear bombs. To any of you who frequent the DMZ, this shouldn’t be much of a surprise, as it’s pretty much par-for-the-course for my imagination on a Wednesday night. I started thinking about the electromagnetic pulse released from a nuclear explosion, and how it might affect electronic devices. “But surely,” I thought, “a nuclear bomb would negate any interesting post-blast resilience implications by turning all local electronic devices into glowing dust!”. How wrong I was.

After a bit of research, I came across High-Altitude Nuclear Explosions (HANE). No, this isn’t the title of the latest Michael Bay movie, but rather the concept of detonating a nuclear bomb at an altitude greater than 30km. Whilst this might sound rather ridiculous, certain countries have actually done it. In the early 60s, the USA and the Soviet Union performed a series of HANE tests, in order to better understand the potential for anti-satellite weaponry. These tests showed that the electromagnetic pulse and ensuing radioactive fallout was capable of destroying or damaging satellites in space rather indiscriminately. The tests caused numerous satellites to fail after a short period of time, due to severe radiation damage. What’s really interesting about HANE, though, is that the explosion releases a giant electromagnetic pulse, without the hassle of re-enacting that dream scene from Terminator 2 where Sarah Connor’s skeleton is left clinging onto a chain-link fence.

To quote the Wikipedia article:

This high-altitude EMP occurs between 30 and 50 kilometers (18 and 31 miles) above the Earth’s surface. The potential as an anti-satellite weapon became apparent in August 1958 during Hardtack Teak (a HANE test). The EMP observed at the Apia Observatory at Samoa was four times more powerful than any created by solar storms, while in July 1962 the Starfish Prime test damaged electronics in Honolulu and New Zealand (approximately 1,300 kilometers away), fused 300 street lights on Oahu (Hawaii), set off about 100 burglar alarms, and caused the failure of a microwave repeating station on Kauai, which cut off the sturdy telephone system from the other Hawaiian islands.

That’s one hell of an electromagnetic pulse!

After I’d got over the initial excitement of learning that it is actually theoretically possible to sizzle microchips from space using nukes, I decided to post a question on Security SE. In short, I wanted to know how worldwide communications infrastructure, especially the Internet, was protected from such attacks. This is where our resident nuclear weapons “enthusiast” Thomas Pornin came in, with a wonderfully detailed answer.

It turns out that the Internet is, like a condom machine in the Vatican, amazingly redundant. Obviously not quite the same kind of redundant, but redundant nonetheless. If a network link goes down, packets are routed through other links. It’s a self-healing network. This means that you would have to knock out multiple inter-continental network links in order to cause severe disruption. Furthermore, it has plenty of redundant channels that are likely to still function after a large EMP.

An interesting issue with radio and satellite communication is temporary disturbance of the ionosphere. The electromagnetic pulse would cause electrons to be jiggled about in the upper atmosphere, creating areas of increased and decreased electron density. A lot of long-range radio communication relies on “bouncing” radio waves off of the ionosphere, which means that such communications are likely to suffer strong interference. Longer wavelength signals are likely to be disrupted more than short wavelength (GHz) signals, due to some interesting physics wizardry I won’t go into.

Of course, in the case of a real war, we’d have to worry about anti-satellite weapons knocking out communications too. Thankfully though, under-sea cables are shielded by a big blanket of water, which absorbs electromagnetic radiation. This is why submarines send up a radio mast or buoy. The good news is that our fastest and most reliable network links will actually end up being our most resilient, because they’re on the sea bed. Furthermore, optical fiber cable is not affected by electromagnetic induction in the way that copper wiring is. So, if all hell does break loose, the phat-pipes should still be able to serve you your daily dose of StackExchange! Yay!

But not everything relies on hardware interlinks. We also have to take into account the systems that facilitate the core functionality of the internet. An interesting example of this is DNS. There are thirteen root nameservers that facilitate the absolute top tier of DNS functionality. Without the root name servers, we wouldn’t be able to reliably translate domain names into IP addresses. Think of them as a set of mirrors for a global directory, where top-level domains (TLDs) are indexed. These entries point to other DNS servers, which point to other DNS servers, etc. We’re left with a large hierarchical map of DNS resolution. Now, whilst thirteen servers sounds rather flimsy, the actual number of servers involved is much larger. A neat technique called anycast allows multiple physical servers across the world to represent a single root name server. This is not the same as cloud computing, which is often touted as a panacea for uptime-critical solutions. Guess what? The cloud is not redundant! But I digress. In order to take down the entire DNS system, you’d need to nuke Japan, England, France, Germany, most of eastern Europe and the entire east and west coasts of America into oblivion. At that point, I don’t think the remaining radiation-resistant insects would be particularly interested in whether the DNS system is functioning or not.

The biggest threat HANE poses to our infrastructure is to our power grid. The world is coated with a mesh of power wiring, stretching over hundreds of miles of land. Since the cables are so long, and are very likely to cut through the magnetic lines of flux in a perpendicular manner, the induced currents could be massive. Local substations would burn out, street lighting would fry, circuit breakers would pop, and fuses would blow. Without power, our worries about communications infrastructure being disrupted would be rather pointless. The servers that host anything worth accessing would be down anyway, and it’d be likely that you couldn’t boot your PC or charge your laptop, due to lack of power. Even when sections of the power grid come back online, the demand would be massive, causing further outages and brown-outs.

So, how can we protect ourselves? Here’s a few ideas:

  • Wrap everything important in a giant Faraday cage. Not exactly practical for power plants or datacenters, though.
  • Ensure that significant chunks of the world’s communications infrastructure are built around trans-oceanic cables and optical links.
  • Ensure that nation-critical services are distributed globally, with the ability to self-heal if remote systems become unreachable.
  • Be prepared to switch to lo-tech communications in the event of such a disaster.

Perhaps the thing to take away from this analysis is that nukes are dangerous, and there’s not much you can do to prevent their aftermath. The best policy is to not detonate nukes at high altitude, or if possible, not to detonate nukes at all.

Nukes are bad, m’kay.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.