Posts Tagged ‘QOTW’

QoTW #39: Why would a virus writer bother to check to see if a machine is infected before infecting it?

2012-10-26 by roryalsop. 0 comments

Terry Chia proposed Swaroop’s question from 2 October 2012: Why would a virus writer bother to check to see if a machine is infected before infecting it?

Swaroop wondered whether this was an attempt to reuse files already present on the machine instead of transferring another copy over, but it actually boils down to two main reasons: the malware writer wants the computer to remain stable for as long as possible, and virus writers compete with one another for control over your PC.

Polynomial wrote:

The fewer resources used and the less disturbance caused, the less likely the user is to notice something is wrong. Once a user notices something isn’t working properly, they’ll try to fix it, which might result in the malware being removed. As such, it’s better for the malware creator to prevent these problems and remain covert.

Thomas Pornin linked to the original Internet worm, which stopped networks and systems from working precisely because it didn’t carry out such checks, and both Celeritas and OtisBoxcar supported Polynomial’s top answer with theirs.

WernerCD wrote:

Preemptive removal of competitors’ products, as well as verification that no prior version of my product is present, would help keep the system mine. As mentioned, the goal would be to keep the CPU “mine”. That means lay low and don’t attract notice.
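
WernerCD’s “verification of no prior version” is usually implemented with something as simple as a named marker. Here is a minimal sketch (my own illustration, not code from the thread), assuming a Windows target and an invented mutex name:

#include <windows.h>

/* Returns nonzero if another copy already created the marker mutex. */
int alreadyInfected(void)
{
    HANDLE h = CreateMutexA(NULL, FALSE, "Global\\example_infection_marker");
    if (h != NULL && GetLastError() == ERROR_ALREADY_EXISTS) {
        CloseHandle(h);  /* someone got here first: exit quietly */
        return 1;
    }
    return 0;            /* first copy: keep the handle so the marker persists */
}

If the marker already exists, the malware exits without touching anything – no duplicated resource usage, no instability, and no fight with a rival copy over the same machine.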

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #38: What is SHA-3 – and why did we change it?

2012-10-12 by ninefingers. 0 comments

Lucas Kauffman selected this week’s Question of the Week: What is SHA3 and why did we change it? 

No doubt if you are at least a little bit curious about security, you’ll have heard of AES, the advanced encryption standard. Way back in 1997, when winters were really hard, our modems froze as we used them and Windows 98 had yet to appear, NIST saw the need to replace the then mainstream Data Encryption Standard with something resistant to the advances in cryptography that had occurred since its inception. So, NIST announced a competition and invited interested parties to submit algorithms matching the desired specification – the AES Process was underway.

A number of algorithms with varying designs were submitted for the process, and three rounds were held, with comments, cryptanalysis and feedback submitted at each stage. Between rounds, designers could tweak their algorithms if needed to address minor concerns; clearly broken algorithms did not progress. This was somewhat of a first for the crypto community – after the export restrictions and the so-called “crypto wars” of the 90s, open cryptanalysis of published algorithms was novel, and it worked. We ended up with AES (and some of you may also have used Serpent, or Twofish) as a result.

Now, onto hashing. Way back in 1996, discussions were underway in the cryptographic community on the possibility of finding a collision within MD5. In practice, MD5 began to be commonly exploited from 2005 onwards to create fake certificate authorities. More recently, the FLAME malware used MD5 collisions to bypass Windows signature restrictions. Indeed, we covered this attack right here on the security blog.

The need for new hash functions has therefore been known for some time. SHA-1 was released to replace MD5; however, cryptanalysis began to reveal that, like its predecessor, its collision resistance could be beaten with a less-than-brute-force search. Given that this eventually yields practical exploits that undermine cryptographic systems, a hash standard is needed that is resistant to finding collisions.

As of 2001, we have also had available to us SHA-2, a family of functions that as yet has survived cryptanalysis. However, SHA-2 is similar in design to its predecessor, SHA-1, and one might deduce that similar weaknesses may hold.

So, in response and in a similar vein to the AES process, NIST launched the SHA3 competition in 2007, in their words, in response to recent improvements in cryptanalysis of hash functions. Over the past few years, various algorithms have been analyzed and the number of candidates reduced, much like a reality TV show (perhaps without the tears, though). The final round algorithms essentially became the candidates for SHA3.

The big event this year is that Keccak has been announced as the SHA-3 hash standard. Before we go too much further, we should clarify some parts of the NIST process. The round an algorithm reaches determines the amount of cryptanalysis it will have received – the longer a function stays in the competition, the more analysis it faces. The report on round two candidates does not reveal any suggestion of breakage; however, NIST selected its final round candidates based on a combination of performance factors and safety margins. Respected cryptographer Bruce Schneier even suggested that perhaps NIST should consider declaring several of the finalist functions suitable.

That’s the background, so I am sure you are wondering: how does this affect me? Well, here’s what you should take into consideration:

  • MD5 is broken. You should not use it; it has been used in practical exploits in the wild, if reports are to be believed – and even if they are not, there are alternatives.
  • SHA-1 has been shown to be theoretically weaker than expected, and it may become practical to exploit it. As such, it would be prudent to migrate to a better hash function (see the sketch after this list).
  • In spite of concerns, the family of SHA-2 functions has thus far survived cryptanalysis. These are fine for current usage.
  • Keccak and selected other SHA-3 finalists will likely become available in mainstream cryptographic libraries soon. SHA-3 is approved by NIST, so it is fine for current usage.
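
Since the above boils down to “migrate when your hash falls behind”, it helps that mainstream libraries select the digest through a single parameter. As a minimal sketch, assuming OpenSSL’s EVP interface (SHA-3 itself had not yet landed in mainstream libraries at the time of writing), a SHA-256 digest looks like this, and migrating later is a one-line change:

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>

int main(void)
{
    const char *msg = "hello world";
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int len = 0;

    /* One-shot digest; migrating to another hash means swapping EVP_sha256() */
    if (!EVP_Digest(msg, strlen(msg), digest, &len, EVP_sha256(), NULL))
        return 1;

    for (unsigned int i = 0; i < len; i++)
        printf("%02x", digest[i]);
    printf("\n");
    return 0;
}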

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #37: How does SSL work?

2012-10-05 by Thomas Pornin. 2 comments

In this week’s question, we will talk about SSL. The question was asked by @Polynomial, who noticed that our site did not yet have a generic question on how SSL works. There were already some questions on the concept of SSL, but nothing really detailed.

Three answers were given, one by @Luc, and two by myself (because I got really verbose and there is a size limit on answers). The three answers concentrated on distinct aspects of SSL; together, they explain why SSL works: it appears to be decently secure, and we can see how that security is achieved.

My first answer is a painfully long description of the protocol in detail, as it appears on the wire. I wrote it as an introduction to the intricacies of the protocol; the information it contains must be known if you want to understand the details of the cryptographic attacks which have been tried on SSL. This is not much more than RFC-reading, but I still made an effort to merge four RFCs (for the four protocol versions: SSLv3, TLS 1.0, 1.1 and 1.2) into one text which should be readable linearly. If you plan on implementing your own SSL client or server (a very instructive exercise, which I recommend for its pedagogical virtues), then I hope that this answer will be a useful reading guide for the actual standards.

What the protocol description shows is that at one point, during the initial steps of the SSL connection (the “handshake”), the server sends its “certificate” to the client (actually, a certificate chain), and then, a few steps later, the client appears to have gained some knowledge of the server’s public key, with which asymmetric cryptography is then performed. The SSL/TLS protocol handles these certificates as opaque blobs. What usually happens is that the client decodes the blobs as X.509 certificates and validates them with regards to a set of known trust anchors. The validation yields the server’s public key, with some guarantee that it really is the key owned by the intended server.

This certificate validation is the first foundation of SSL, as it is used for the Web (i.e. HTTPS). @Luc’s answer contains clear explanations on why certificates are used, and on what the guarantees rely on. Most enlightening is this excerpt:

You have to trust the CA not to make certificates as they please. When organizations like Microsoft, Apple and Mozilla trust a CA though, the CA must have audits; another organization checks on them periodically to make sure everything is still running according to the rules.

So the whole system relies on big companies checking on each other. Some of the trusted CAs are governmental (from various governments) but the most often used are private businesses (e.g. Thawte, Verisign…). An important point to make is that it suffices to subvert or corrupt one CA to get a fake certificate which will be trusted worldwide; so the system really is only as robust as the weakest of the hundred or so trusted CAs which browser vendors include by default. Nevertheless, it works quite well (attacks on CAs are rare).

Note that since the certificate parts are quite isolated in the protocol itself (the certificates are just opaque blobs), SSL/TLS can be used without certificate validation in setups where the client “just knows” the server’s public key. This happens a lot in closed environments, such as embedded systems which talk to a mother server. Also, there are a few certificate-less cipher suites, such as the ones which use SRP.
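
To make the certificate part concrete, here is a minimal client sketch, assuming OpenSSL and a placeholder host name, that performs the handshake and then confirms that the server’s chain validated against the local trust anchors:

#include <stdio.h>
#include <openssl/ssl.h>
#include <openssl/err.h>

int main(void)
{
    SSL_library_init();
    SSL_load_error_strings();

    SSL_CTX *ctx = SSL_CTX_new(SSLv23_client_method());
    SSL_CTX_set_default_verify_paths(ctx);          /* load trust anchors */
    SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL); /* require a valid chain */

    BIO *bio = BIO_new_ssl_connect(ctx);
    BIO_set_conn_hostname(bio, "example.com:443");  /* placeholder host */

    if (BIO_do_connect(bio) <= 0 || BIO_do_handshake(bio) <= 0) {
        ERR_print_errors_fp(stderr);
        return 1;
    }

    SSL *ssl = NULL;
    BIO_get_ssl(bio, &ssl);
    if (SSL_get_verify_result(ssl) != X509_V_OK)    /* X509_V_OK: chain is trusted */
        return 1;

    BIO_free_all(bio);
    SSL_CTX_free(ctx);
    return 0;
}

Note that a validated chain is not the whole story: a real client must also check that the certificate actually names the host it intended to reach, otherwise any valid certificate would do.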

This brings us to the second foundation of SSL: its intricate usage of cryptographic algorithms. Asymmetric encryption (RSA) or key exchange (Diffie-Hellman, or an elliptic curve variant), symmetric encryption with stream or block ciphers, hashing, message authentication codes (HMAC)… the whole paraphernalia is there. Assembling all these primitives into a coherent and secure protocol is not easy at all, and the history of SSL is a rather lengthy sequence of attacks and fixes. My second answer gives details on some of them. What must be remembered is that SSL is state of the art: every attack which has ever been conceived has been tried on SSL, because it is a high-value target. SSL got a lot of exposure, and its survival is testimony to its strength. Sure, it was occasionally harmed, but it was always salvaged. It is rather telling that the recent crop of attacks from Duong and Rizzo (ASP.NET padding oracles, BEAST, CRIME) are actually old attacks which Duong and Rizzo applied; their merit is not in inventing them (they didn’t) but in showing how practical they can be in a Web context (and masterfully did they show it).

From all of this we can list the reasons which make SSL work:

  • The binding between the alleged public key and the intended server is addressed. Granted, it is done with X.509 certificates, which have been designed by the Adversary to drive implementors crazy; but at least the problem is dealt with upfront.
  • The encryption system includes checked integrity, with a decent primitive (HMAC). The encryption uses CBC mode for block ciphers and the MAC is included in the MAC-then-encrypt way: both characteristics are suboptimal, and security with these choices requires special care in the specification (the need for random, unpredictable IVs, the basis of the BEAST attack, fixed in TLS 1.1) and in the implementation (information leaks through the padding, used in padding oracle attacks, fixed when Microsoft finally consented to notice the warning raised by Vaudenay in 2002).
  • All internal key expansion and checksum tasks are done with a specific function (called “the PRF”) which builds on standard primitives (HMAC with cryptographic hash functions) – see the sketch after this list.
  • The client and the server send random values, which are included in all PRF invocations, and protect against replay attacks.
  • The handshake ends with a couple of checksums, which are covered by the encryption+MAC layer, and the checksums are computed over all of the handshake messages (and this is important in defeating a lot of nasty things that an active attacker could otherwise do).
  • Algorithm agility. The cryptographic algorithms (cipher suites) and other features (protocol version, compression) are negotiated between the client and server, which allows for a smooth and gradual transition. This is how current browsers and servers can use AES encryption, which was defined in 2001, several years after SSLv3. It also facilitates recovery from attacks on some features, which can be deactivated on the client and/or the server (e.g. compression, which is used in CRIME).
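
As an illustration of that PRF, here is a sketch of the TLS 1.2 expansion function P_SHA256 (RFC 5246, section 5), which is just iterated HMAC. It assumes OpenSSL’s one-shot HMAC() and, to keep it short, a seed of at most 256 bytes and no error handling:

#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Expand (secret, seed) to outlen bytes. In the full PRF, "seed" is
   the concatenation of an ASCII label and the actual seed data. */
void p_sha256(const unsigned char *secret, int slen,
              const unsigned char *seed, size_t seedlen,
              unsigned char *out, size_t outlen)
{
    unsigned char a[32], block[32], buf[32 + 256];
    unsigned int len;

    HMAC(EVP_sha256(), secret, slen, seed, seedlen, a, &len);      /* A(1) */

    while (outlen > 0) {
        memcpy(buf, a, 32);             /* block = HMAC(secret, A(i) || seed) */
        memcpy(buf + 32, seed, seedlen);
        HMAC(EVP_sha256(), secret, slen, buf, 32 + seedlen, block, &len);

        size_t n = outlen < 32 ? outlen : 32;
        memcpy(out, block, n);
        out += n;
        outlen -= n;

        HMAC(EVP_sha256(), secret, slen, a, 32, a, &len);          /* A(i+1) */
    }
}

In TLS 1.2, session keys, IVs and the Finished-message checksums are all derived through this one construction (some cipher suites substitute SHA-384), so the protocol’s internal key material stands or falls with HMAC.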

All these characteristics contribute to the strength of SSL.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #36: How does the zero-day Internet Explorer vulnerability discovered in September 2012 work?

2012-09-28 by roryalsop. 0 comments

Community member Iszi nominated this week’s question, which asks for an explanation of the issue from the perspective of a developer/engineer: What is exactly being exploited and why does it work?

Polynomial provided the following detailed, technical answer:

CVE-2012-4969, aka the latest IE 0-day, is based on a use-after-free bug in IE’s rendering engine. A use-after-free occurs when a dynamically allocated block of memory is used after it has been disposed of (i.e. freed). Such a bug can be exploited by creating a situation where an internal structure contains pointers to sensitive memory locations (e.g. the stack or executable heap blocks) in a way that causes the program to copy shellcode into an executable area.

In this case, the problem is with the CMshtmlEd::Exec function in mshtml.dll. The CMshtmlEd object is freed unexpectedly, then the Exec method is called on it after the free operation.

First, I’d like to cover some theory. If you already know how use-after-free works, then feel free to skip ahead.

At a low level, a class can be equated to a memory region that contains its state (fields, properties, internal variables, etc.) and a set of functions that operate on it. These functions actually take a “hidden” parameter, which points to the memory region that contains the instance state.

For example (excuse my terrible pseudo-C++):

class Account
{
    int balance = 0;
    int transactionCount = 0;

    void AddBalance(int amount)
    {
        balance += amount;
        transactionCount++;
    }

    void SubtractBalance(int amount)
    {
        balance -= amount;
        transactionCount++;
    }
};
The above can actually be represented as the following:
private struct Account
{
    int balance = 0;
    int transactionCount = 0;
};

public void* Account_Create()
{
    Account* account = (Account*)malloc(sizeof(Account));
    account->balance = 0;
    account->transactionCount = 0;
    return (void*)account;
}

public void Account_Destroy(void* instance)
{
    free(instance);
}

public void Account_AddBalance(void* instance, int amount)
{
    ((Account*)instance)->balance += amount;
    ((Account*)instance)->transactionCount++;
}

public void Account_SubtractBalance(void* instance, int amount)
{
    ((Account*)instance)->balance -= amount;
    ((Account*)instance)->transactionCount++;
}

public int Account_GetBalance(void* instance)
{
    return ((Account*)instance)->balance;
}

public int Account_GetTransactionCount(void* instance)
{
    return ((Account*)instance)->transactionCount;
}
I’m using void* to demonstrate the opaque nature of the reference, but that’s not really important. The point is that we don’t want anyone to be able to alter the Account struct manually, otherwise they could add money arbitrarily, or modify the balance without increasing the transaction counter.

Now, imagine we do something like this:

void* myAccount = Account_Create();
Account_AddBalance(myAccount, 100);
Account_SubtractBalance(myAccount, 75);
// ...
Account_Destroy(myAccount);
// ...
if(Account_GetBalance(myAccount) > 1000) // <-- !!! use after free !!!
    ApproveLoan();
By the time we reach Account_GetBalance, the pointer value in myAccount actually points to memory that is in an indeterminate state. Now, imagine we can do the following:

  1. Trigger the call to Account_Destroy reliably.
  2. Execute any operation after Account_Destroy but before Account_GetBalance that allows us to allocate a reasonable amount of memory, with contents of our choosing.

Usually, these calls are triggered in different places, so it’s not too difficult to achieve this. Now, here’s what happens:

  1. Account_Create allocates an 8-byte block of memory (4 bytes for each field) and returns a pointer to it. This pointer is now stored in the myAccount variable.
  2. Account_Destroy frees the memory. The myAccount variable still points to the same memory address.
  3. We trigger our memory allocation, containing repeating blocks of 39 05 00 00 01 00 00 00. This pattern correlates to balance = 1337 and transactionCount = 1. Since the old memory block is now marked as free, it is very likely that the memory manager will write our new memory over the old memory block.
  4. Account_GetBalance is called, expecting to point to an Account struct. In actuality, it points to our overwritten memory block, resulting in our balance actually being 1337, so the loan is approved!

This is all a simplification, of course, and real classes create rather more obtuse and complex code. The point is that a class instance is really just a pointer to a block of data, and class methods are just the same as any other function, but they “silently” accept a pointer to the instance as a parameter.

This principle can be extended to control values on the stack, which in turn causes program execution to be modified. Usually, the goal is to drop shellcode on the stack, then overwrite a return address such that it now points to a jmp esp instruction, which then runs the shellcode.

This trick works on non-DEP machines, but when DEP is enabled it prevents execution of the stack. Instead, the shellcode must be designed using Return-Oriented Programming (ROP), which uses small blocks of legitimate code from the application and its modules to perform an API call, in order to bypass DEP.

Anyway, I’m going off-topic a bit, so let’s get into the juicy details of CVE-2012-4969!

In the wild, the payload was dropped via a packed Flash file, designed to exploit the Java vulnerability and the new IE bug in one go. There’s also been some interesting analysis of it by AlienVault.

The Metasploit module says the following:

> This module exploits a vulnerability found in Microsoft Internet Explorer (MSIE). When rendering an HTML page, the CMshtmlEd object gets deleted in an unexpected manner, but the same memory is reused again later in the CMshtmlEd::Exec() function, leading to a use-after-free condition.

There’s also an interesting blog post about the bug, albeit in rather poor English – I believe the author is Chinese. Anyway, the blog post goes into some detail:

> When the execCommand function of IE execute a command event, will allocated the corresponding CMshtmlEd object by AddCommandTarget function, and then call mshtml@CMshtmlEd::Exec() function execution. But, after the execCommand function to add the corresponding event, will immediately trigger and call the corresponding event function. Through the document.write("L") function to rewrite html in the corresponding event function be called. Thereby lead IE call CHTMLEditor::DeleteCommandTarget to release the original applied object of CMshtmlEd, and then cause triggered the used-after-free vulnerability when behind execute the msheml!CMshtmlEd::Exec() function.

Let’s see if we can parse that into something a little more readable:

  1. An event is applied to an element in the document.
  2. The event executes, via execCommand, which allocates a CMshtmlEd object via the AddCommandTarget function.
  3. The target event uses document.write to modify the page.
  4. The event is no longer needed, so the CMshtmlEd object is freed via CHTMLEditor::DeleteCommandTarget.
  5. execCommand later calls CMshtmlEd::Exec() on that object, after it has been freed.

Part of the code at the crash site looks like this:

637d464e 8b07    mov  eax, dword ptr [edi]
637d4650 57      push edi
637d4651 ff5008  call dword ptr [eax+8]
The use-after-free allows the attacker to control the value of edi, which can be modified to point at memory that the attacker controls. Let’s say that we can insert arbitrary code into memory at 01234f00, via a memory allocation. We populate the data as follows:
01234f00: 01234f08 // fake object: its first dword acts as the vtable pointer
 01234f04: 41414141
 01234f08: 41414141 // fake vtable, entry at offset 0
 01234f0c: 41414141 // fake vtable, entry at offset 4
 01234f10: 01234f14 // fake vtable, entry at offset 8, pointing at our code
 01234f14: cccccccc // int3 breakpoints

  1. We set edi to 01234f00, via the use-after-free bug.
  2. mov eax,dword ptr [edi] populates eax with the dword stored at 01234f00, i.e. 01234f08.
  3. push edi pushes 01234f00 to the stack.
  4. call dword ptr [eax+8] takes eax (which is 01234f08) and adds 8 to it, giving us 01234f10. It then dereferences that memory address, giving us 01234f14. Finally, it calls 01234f14.
  5. The data at 01234f14 is treated as instructions. cc translates to an int3, which causes the debugger to raise a breakpoint. We’ve executed code!

This allows us to control eip, so we can modify program flow to our own shellcode, or to a ROP chain.

Please keep in mind that the above is just an example, and in reality there are many other challenges in exploiting this bug. It’s a pretty standard use-after-free, but the nature of JavaScript makes for some interesting timing and heap-spraying tricks, and DEP forces us to use ROP to gain an executable memory block.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #35: Dealing with excessive “Carding” attempts

2012-09-14 by scottpack. 0 comments

Community moderator Jeff Ferland nominated this week’s question: Dealing with excessive “Carding” attempts.

I found this to be an interesting question for two reasons:

  1. It turned the classic password brute force on its head by applying it to credit cards
  2. It attracted attention from a large number of relatively new users

Jeff Ferland postulated that the website was too helpful with its error codes and recommended returning the same “Transaction Failed” message no matter the error.

User w.c suggested using some kind of additional verification like a CAPTCHA. Also mentioned was the notion of instituting time delays for multiple successive CAPTCHA or transaction failures.

A slightly different tack was taken by GdD. Instead of suggesting specific mitigations, GdD pointed out the inevitability of attackers adapting to whatever protections are put in place, and recommended making sure that you keep adapting in turn, forcing the attackers to play your cat-and-mouse game.

Ajacian81 felt that the attacker’s purpose may not be to find valid numbers at all, but rather to mount a payment-processing denial of service. The suggested fix was to randomize the names of the input fields in an effort to prevent scripting against the site.

John Deters described a company that had previously had the same problem. They completely transferred the problem to their processor by having them automatically decline all charges below a certain threshold. He also pointed out that the FBI may be interested in the situation and should be notified. This, of course, depends on USA jurisdiction.

Both ddyer and Winston Ewert suggested different ways of instituting artificial delays into the processing. Winston discussed outright delaying all transactions, while ddyer discussed automated detection of “suspicious” transactions and blocking further transactions from that host for some time period.
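
As a rough sketch of how the delay idea combines with Jeff Ferland’s uniform error message (the names below are invented, not taken from the answers), a front-end might track failures per client and refuse to process anything during an exponentially growing back-off window:

#include <algorithm>
#include <chrono>
#include <map>
#include <string>

using Clock = std::chrono::steady_clock;

struct FailureState {
    int failures = 0;
    Clock::time_point nextAllowed = Clock::now();
};

std::map<std::string, FailureState> g_failures;   // keyed by client IP, say

// The response is identical for every failure mode, so the attacker
// learns nothing about why a card was declined.
std::string attemptCharge(const std::string& client, bool cardAccepted)
{
    FailureState& s = g_failures[client];
    if (Clock::now() < s.nextAllowed)
        return "Transaction Failed";              // still inside back-off window

    if (!cardAccepted) {
        s.failures++;                             // delay doubles per failure
        s.nextAllowed = Clock::now() +
            std::chrono::seconds(1LL << std::min(s.failures, 10));
        return "Transaction Failed";
    }
    s = FailureState{};                           // success: reset the counter
    return "Transaction Approved";
}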

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QOTW #34 – iMessage – what security features are present?

2012-09-07 by Terry Chia. 0 comments

Two weeks ago, a phishing vulnerability in the text messaging function of Apple’s iPhone was discovered by pod2g. The statement released by Apple said that iMessage, Apple’s proprietary instant messaging service, was secure, and suggested using it instead:

Apple takes security very seriously. When using iMessage instead of SMS, addresses are verified which protects against these kinds of spoofing attacks. One of the limitations of SMS is that it allows messages to be sent with spoofed addresses to any phone, so we urge customers to be extremely careful if they’re directed to an unknown website or address over SMS.

This prompted rather a lot of interest in the security community over how secure iMessage really is, given that the technology behind it is not public information. There have been several blog posts and articles on the topic, including one right here on security.stackexchange – The inner workings of iMessage security?

The answer given by dr jimbob provided a few clues, sourced from several links.

It appears that the connection between the phone and Apple’s servers is encrypted with SSL/TLS (it is unclear which version) using a certificate self-signed by Apple with a 2048-bit RSA key.

The iMessage service has been partially reverse engineered. More information can be found here and here.

My thoughts – Is iMessage truly more secure?

Compared to traditional SMS, yes. I do consider iMessage more secure. For one thing, it uses SSL/TLS to encrypt the connection between the phone and Apple’s servers. Compared to the A5/1 cipher used to encrypt SMS communications, this is much more secure.

However, iMessage still should not be used to send sensitive information. All data so far indicates that messages are stored in plaintext on Apple’s servers. This presents several vulnerabilities: Apple, or anyone able to compromise Apple’s servers, would be able to read your messages – for as long as they’re cached.

Treat iMessage as you would emails or SMS communications. It is safe enough for daily usage, but highly sensitive information should not be sent through it.

References:

http://en.wikipedia.org/wiki/IMessage

http://www.pod2g.org/2012/08/never-trust-sms-ios-text-spoofing.html

http://www.engadget.com/2012/08/18/apple-responds-to-iphone-text-message-spoofing/

http://blog.cryptographyengineering.com/2012/08/dear-apple-please-set-imessage-free.html

http://imfreedom.org/wiki/IMessage

https://github.com/meeee/pushproxy

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QOTW #33 – Communications infrastructure after a nuclear explosion

2012-08-24 by Polynomial. 5 comments

A few weeks ago, I was sat pondering nuclear bombs. To any of you who frequent the DMZ, this shouldn’t be much of a surprise, as it’s pretty much par-for-the-course for my imagination on a Wednesday night. I started thinking about the electromagnetic pulse released from a nuclear explosion, and how it might affect electronic devices. “But surely,” I thought, “a nuclear bomb would negate any interesting post-blast resilience implications by turning all local electronic devices into glowing dust!”. How wrong I was.

After a bit of research, I came across High-Altitude Nuclear Explosions (HANE). No, this isn’t the title of the latest Michael Bay movie, but rather the concept of detonating a nuclear bomb at an altitude greater than 30km. Whilst this might sound rather ridiculous, certain countries have actually done it. In the early 60s, the USA and Russia performed a series of HANE tests, in order to better understand the potential for anti-satellite weaponry. These tests showed that the electromagnetic pulse and ensuing radioactive fallout were capable of destroying or damaging satellites in space rather indiscriminately. The tests caused numerous satellites to fail after a short period of time, due to severe radiation damage. What’s really interesting about HANE, though, is that the explosion releases a giant electromagnetic pulse, without the hassle of re-enacting that dream scene from Terminator 2 where Sarah Connor’s skeleton is left clinging onto a chain-link fence.

To quote the Wikipedia article:

This high-altitude EMP occurs between 30 and 50 kilometers (18 and 31 miles) above the Earth’s surface. The potential as an anti-satellite weapon became apparent in August 1958 during Hardtack Teak (a HANE test). The EMP observed at the Apia Observatory at Samoa was four times more powerful than any created by solar storms, while in July 1962 the Starfish Prime test damaged electronics in Honolulu and New Zealand (approximately 1,300 kilometers away), fused 300 street lights on Oahu (Hawaii), set off about 100 burglar alarms, and caused the failure of a microwave repeating station on Kauai, which cut off the sturdy telephone system from the other Hawaiian islands.

That’s one hell of an electromagnetic pulse!

After I’d got over the initial excitement of learning that it is actually theoretically possible to sizzle microchips from space using nukes, I decided to post a question on Security SE. In short, I wanted to know how worldwide communications infrastructure, especially the Internet, was protected from such attacks. This is where our resident nuclear weapons “enthusiast” Thomas Pornin came in, with a wonderfully detailed answer.

It turns out that the Internet is, like a condom machine in the Vatican, amazingly redundant. Obviously not quite the same kind of redundant, but redundant nonetheless. If a network link goes down, packets are routed through other links. It’s a self-healing network. This means that you would have to knock out multiple inter-continental network links in order to cause severe disruption. Furthermore, the network has plenty of redundant channels – radio and satellite links among them – that are likely to still function after a large EMP.

An interesting issue with radio and satellite communication is temporary disturbance of the ionosphere. The electromagnetic pulse would cause electrons to be jiggled about in the upper atmosphere, creating areas of increased and decreased electron density. A lot of long-range radio communication relies on “bouncing” radio waves off of the ionosphere, which means that such communications are likely to suffer strong interference. Longer wavelength signals are likely to be disrupted more than short wavelength (GHz) signals, due to some interesting physics wizardry I won’t go into.

Of course, in the case of a real war, we’d have to worry about anti-satellite weapons knocking out communications too. Thankfully though, under-sea cables are shielded by a big blanket of water, which absorbs electromagnetic radiation. This is why submarines send up a radio mast or buoy. The good news is that our fastest and most reliable network links will actually end up being our most resilient, because they’re on the sea bed. Furthermore, optical fiber cable is not affected by electromagnetic induction, like copper wiring is. So, if all hell does break loose, the phat-pipes should still be able to serve you your daily dose of StackExchange! Yay!

But not everything relies on hardware interlinks. We also have to take into account the systems that facilitate the core functionality of the internet. An interesting example of this is DNS. There are thirteen root nameservers that facilitate the absolute top tier of DNS functionality. Without the root name servers, we wouldn’t be able to reliably translate domain names into IP addresses. Think of them as a set of mirrors for a global directory, where top-level domains (TLDs) are indexed. These entries point to other DNS servers, which point to other DNS servers, etc. We’re left with a large hierarchical map of DNS resolution. Now, whilst thirteen servers sounds rather flimsy, the actual number of servers involved is much larger. A neat technique called anycast allows multiple physical servers across the world to represent a single root name server. This is not the same as cloud computing, which is often touted as a panacea for uptime-critical solutions. Guess what? The cloud is not redundant! But I digress. In order to take down the entire DNS system, you’d need to nuke Japan, England, France, Germany, most of eastern Europe and the entire east and west coasts of America into oblivion. At that point, I don’t think the remaining radiation-resistant insects would be particularly interested in whether the DNS system is functioning or not.

The biggest threat HANE poses to our infrastructure is to our power grid. The world is coated with a mesh of power wiring, stretching over hundreds of miles of land. Since the cables are so long, and are very likely to cut through the magnetic lines of flux in a perpendicular manner, the induced currents could be massive. Local substations would burn out, street lighting would fry, circuit breakers would pop, and fuses would blow. Without power, our worries about communications infrastructure being disrupted would be rather pointless. The servers that host anything worth accessing would be down anyway, and it’d be likely that you couldn’t boot your PC or charge your laptop, due to lack of power. Even when sections of the power grid come back online, the demand would be massive, causing further outages and brown-outs.

So, how can we protect ourselves? Here’s a few ideas:

  • Wrap everything important in a giant Faraday cage. Not exactly practical for power plants or datacenters, though.
  • Ensure that significant chunks of the world’s communications infrastructure are built around trans-oceanic cables and optical links.
  • Ensure that nation-critical services are distributed globally, with the ability to self-heal if remote systems become unreachable.
  • Be prepared to switch to lo-tech communications in the event of such a disaster.

Perhaps the thing to take away from this analysis is that nukes are dangerous, and there’s not much you can do to prevent their aftermath. The best policy is to not detonate nukes at high altitude, or if possible, not to detonate nukes at all.

Nukes are bad, m’kay.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #29: Risks of giving developers admin rights to their own PCs

2012-06-08 by roryalsop. 3 comments

Carolinegordon asked Question of the Week number 29 to try to understand the risks posed by giving developers admin rights to their machines – something many developers expect in order to use their machines effectively, but which security or IT may deny based on company policy.

Interestingly, for a question asked on a security site, the highest voted answers all agree that developers should be given admin rights:

alanbarber listed two very good points – developer toolsets are frequently updated, so the IT load for implementing updates can be high, and debugging tools may need admin rights. His third point, that developers are more security conscious, I’m not so sure about. I tend to think developers are just like other people – some are good at security, some are bad.

Bruno answered along similar lines, but also added the human aspect in two important areas. Treating developers differently from sysadmins can lead to a divide, and a them-and-us culture, which may impact productivity. Additionally, as developers tend to be skilled on their particular platform, you run the risk of them getting around your controls anyway – which could open up wider risks.

DKNUCKLES made some strong points on what could happen if developers have admin rights:

  • Violation of security practices – especially the usual rule of least privilege
  • Legal violations – you may be liable if you don’t protect code/data appropriately (a grey area at best here, but worth thinking about)
  • Installation of malware – deliberately or accidentally

wrb posted a short answer, but with an important key concept:

The development environment is always isolated from the main network. It is IT’s job to make sure you provide them with whatever setup they need, while making sure nothing in the dev environment can harm the main network. Plan ahead and work with management to buy the equipment/software you need to accomplish this.

Todd Dill has a viewpoint which I see a lot in the regulated industries I work in most often – there could be a regulatory requirement which specifies the separation between developers and administrator access. Admittedly this is usually managed by strongly segregating Development, Testing, Staging and Live environments, as at the end of the day there is a business requirement that developers can do their job and deliver application code that works in the timelines required.

Daniel Azuelos came at it with a very practical approach, which is to ask what the difference in risk is between the two scenarios. As these developers are expected to be skilled, and have physical access to their computers, they could in theory run whatever applications they want to, so taking the view that preventing admin access protects from the “evil inside” is a false risk reduction.

This question also generated a large number of highly rated comments, some of which may be more tongue in cheek than others:

  • The biggest risk is that the developers would actually be able to get some work done.
  • Explain to them that the biggest security risk to their network is an angry developer… or just let them learn that the hard way.
  • It should be noted that access to machine hardware is the same as granting admin rights in security terms. A smart malicious agent can easily transform one into the other.
  • If you can attach a debugger to a process you don’t own, you can make that process do anything you want. So debugging rights = admin.

My summary of the various points:

While segregating and limiting access is a good security tenet, practicality must rule: developers need the functionality to produce applications and code that support the business, and they often have the skills to get around permissions anyway. So why not accept that they need admin rights in the development environment, and restrict them elsewhere?

This is an excellent question, as it not only generated interest from people on both sides of the argument, but they produced well thought out answers which helped the questioner and are of value to others who find themselves in the same boat.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #25: Introducing QotW

2012-04-27 by roryalsop. 2 comments

What is the QotW?

At nearly 3,500 questions, we have a wide variety of topics, answers and styles. In general, someone coming to the site is looking for answers to a specific problem, or giving answers to questions in their field, so they may never see the vast majority of questions. Question of the Week posts on meta.security.stackexchange.com allow the community to vote for their favourite question to be discussed on the blog. The blog itself is quite young – we have 44 posts published, of which 24 are QotW posts.

Why do we do it?

On the Internet, getting visitors to your site is the key metric, and QotW is another avenue to get what we do in front of a wider audience. Our QotW blog posts link to questions, answers, community members and external sites where relevant in order to add context and depth, showcasing our site. This is demonstrated in our referrer stats: we get good traffic from Slashdot, Reddit, Facebook and Twitter, as well as from Bruce Schneier’s and Dan Kaminsky’s blogs, and even explainxkcd.com – so we are doing something right and gaining visibility.

How do we do it?

@Iszi’s answer here lists the process in detail, but to summarise:

We post a QotW meta question on a Friday to invite ideas for the following week. In order to avoid dupes, we maintain a list of previous questions featured on the blog, as well as those which have been proposed but not yet published.

By Tuesday we have a topic and author decided (typically individuals volunteer in our chat room, the DMZ – feel free to become a volunteer; we can add you as a contributor on the blog site).

The administrators manage the workflow planning through a Trello workspace.

QotW posts aren’t expected to be in-depth treatises, so drafts are ready by Thursday morning, leaving time for review before a midday Friday publication. (We’ve gone with UTC timing for this schedule, as we have members from Australia to the west coast of the USA.)

Why should you contribute?

First, and most importantly, because you want to. You’ve seen something interesting happening on the site, or have an interesting topic you want to cover and you’d like to share it with the world.

Did we mention it is a nice addition to your careers.stackoverflow.com or LinkedIn profile?

In addition, you help grow the community you are a member of (now over 8000 individuals – a good blog post can more than double the rate of new users joining that day). Your words and name will be attracting the up and coming security experts of tomorrow.

We welcome all contributors to the blog, and the light touch of the QotW posts is a relatively easy way to start security blogging. Seasoned reviewers will be more than happy to assist.

Liked this post? Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #24: Why do people tell me not to use VLANs for security?

2012-04-20 by scottpack. 1 comment

This week’s question of the week was asked by user jtnire, and it is one very near and dear to those security professionals who came out of networking or systems backgrounds. He was doing some network design and came across the classic statement that “VLANs are Not a Security Tool”.

As of this writing, jtnire had not accepted any answers, but user Rory McCune was leading the pack. Rory focused primarily on the classic human problem of misconfiguration, which is particularly easy when we’re talking about typing Gi/0/4 when you meant Gi/1/4, as opposed to plugging a cable into the wrong port. He also specifically called out VLAN hopping, which can abuse a misconfiguration to give a malicious user access to a non-authorized VLAN.

User and moderator Rory Alsop, speaking from an audit perspective, expanded on what the other Rory mentioned and focused more generally on what would make him double-take. He pointed out that VLANs are generally used for cheap network segmentation, and that if you’re using them as a security tool, you probably want to do it right and use a physically isolated network instead.

Jakob Borg came in with a completely different approach. He explained that, as an ISP, VLANs are a crucial component of his environment, and when done right they can be a very powerful tool from both a security and a service perspective. User jliendo largely agreed with Jakob that configuration is king, and that when configured properly VLANs are an excellent tool in your security arsenal. He also went into more technical detail about some of the possible attacks against VLANs and how they can be mitigated.

In this author’s opinion this is a fantastic question, as VLANs are becoming an extremely common mechanism for network isolation. The answers also did a great job of coming at the problem from all manner of angles, from external auditors to in-the-trenches technicians.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.