QoTW #53 How can I punish a hacker?

2016-02-05 by . 4 comments

Elmo asked:

I am a small business owner. My website was recently hacked; although no damage was done, some non-sensitive data was stolen and some backdoor shells were uploaded. Since then, I have deleted the shells, fixed the vulnerability and blocked the IP address of the hacker. Can I do something to punish the hacker, since I have the IP address? Like can I get them in jail or something?

This question comes up time and time again, as people do get upset and angry when their online presence has been attacked, and we have some very simple guidance which will almost always apply:

Terry Chia wrote:

You don’t punish the hacker. The law does. Just report whatever pieces of information you have to the police and let them handle it.

And @TildalWave asked:

What makes you believe that this IP is indeed a hacker’s IP address, and not simply another hacked into computer running in zombie mode? And who is to say, that your own web server didn’t run in exactly the same zombie mode until you removed the shells installed through, as you say, later identified backdoor? Should you expect another person, whose web server was attempted to be, or indeed was hacked through your compromised web server’s IP, thinking exactly the same about you, and is already looking for ways to get even like you are?

justausr takes this even further:

Don’t play their game, you’ll lose. I’ve learned not to play that game; hackers by nature have more spare time than you and will ultimately win. Even if you get him back, your website will be unavailable to your customers for a solid week afterwards. Remember, you’re the one with public facing servers, you have an IP of a random server that he probably used once. He’s the one with a bunch of scripts and likely more knowledge than you will get in your quest for revenge. Odds aren’t in your favor and the cost to your business is probably too high to risk losing.

Similarly, the other answers mostly discuss the difficulty in identifying the correct perpetrator, and the risks of trying to do something to them.

But Scott Pack‘s answer does provide a little side-step from the generally accepted principles most civilians must follow:

The term most often used to describe what you’re talking about is Hacking Back. It’s part of the Offensive Countermeasures movement that’s gaining traction lately. Some really smart people are putting their heart and soul into figuring out how we, as an industry, should be doing this. There are lots of things you can do, but unless you’re a nation-state, or have orders and a contract from a nation-state your options are severely limited.

tl;dr – don’t be a vigilante. If you do, you will have broken the law, and the police are likely to be able to prove your guilt a lot more easily than that of the unknown hacker.

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Filed under Attack, ethics, Question of the Week

Business Continuity is concerned with information security risks and impacts

2015-08-02 by . 0 comments

A Business Continuity Programme (BCP) is primarily concerned with those business functions and operations that are critically important to achieve the organization’s operational objectives. It seeks to reduce the impact of a disaster condition before the condition occurs. Buy-in from top-level management is required, as each function defined in the business must be reviewed to ensure all key personnel are identified. Why would a business require a BCP?


The BCP ensures the business can continue in case of (un)foreseen circumstances. To motivate top-level management to support the BCP, the best way is to set up a risk/reward overview and use examples to show what can happen when you do not have a BCP in place. The most important question to ask is: “If we (partially) shut down the business for x amount of time, how much money would this cost, both short term (direct business loss) and long term (indirect business loss from reputational damage)?” Losing critical systems, processes or data because of an interruption in the business could send an organization into a financial tailspin.
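That shutdown question invites a quick back-of-the-envelope calculation. The sketch below is illustrative only; the function name and every figure in it are invented for the example:

```python
def outage_cost(hours_down, revenue_per_hour, recovery_cost,
                churn_fraction, annual_revenue):
    """Estimate direct (short-term) and indirect (long-term) outage losses."""
    # Direct loss: revenue missed while systems are down, plus recovery work.
    direct = hours_down * revenue_per_hour + recovery_cost
    # Indirect loss: customers lost to reputational damage.
    indirect = churn_fraction * annual_revenue
    return direct, indirect

# A hypothetical 8-hour outage for a business turning over 10M a year:
direct, indirect = outage_cost(hours_down=8, revenue_per_hour=5_000,
                               recovery_cost=20_000, churn_fraction=0.02,
                               annual_revenue=10_000_000)
# direct == 60_000, indirect == 200_000.0
```

Even crude numbers like these give management something concrete to weigh the cost of a BCP against.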

The main concern of a BCP is maintaining the availability of the business, but confidentiality and integrity should also be addressed within the plan. In terms of availability, the risk to business continuity is often explained as a service interruption on a critical system, e.g. a bank’s payment gateway goes down, preventing transactions from occurring. The short- and long-term impacts are financial losses due to the bank not being able to process transactions, but also clients becoming more and more dissatisfied.

A confidentiality concern in a BCP could, for example, be the transfer of personal data during disaster recovery. An objective of disaster recovery is to minimize risk to the organization during recovery, so there should be a baseline set of documented access controls to use during recovery activities; these are necessary to prevent intrusions and data breaches while recovering. The impact here can be reputational but also financial. If a competing company can, for example, obtain a set of investment strategies, it could use them to invest against the organization, resulting in significant financial losses and even bankruptcy.

Integrity of information means that it is accurate and reliable and has not been tampered with by an unauthorized party. For example, it is important to be able to ensure the integrity of each customer’s data, but also of information originating from third parties. As an example of the impact of an integrity violation: when a bank cannot rely on the integrity of its data, for instance if it authorizes transactions to a nation or person on a sanctions list (originating from a third party), it could be heavily fined and might even lose its banking license.

A BCP goes wider than just impacts; it also addresses risks. A business impact analysis (BIA) is performed to understand which business processes are important. These “critical” business processes are given special protection in the framework of business continuity management, and precautions are taken in case of a crisis. “Critical” in the sense of business continuity management means “time-critical”: such a process must be restored to operation faster, because otherwise a high amount of damage to the organisation can be expected. While the BIA answers the question of what effects the failure of a process will have on the organisation, it is also necessary to know what the possible causes of that failure could be. Risks at the process level as well as risks at the resource level need to be examined. A risk at the process level could be, for example, the failure of one or more (critical) resources. A risk analysis at the resource level only looks for the possible causes of the failure of these critical resources.

BCP relies on both impact and risk assessments, but making a risk assessment without an impact assessment is difficult. ISO 22301 requires a risk assessment process to be present. The goal of this requirement is to establish, implement, and maintain a formal documented risk assessment process that systematically identifies, analyses, and evaluates the risk of disruptive incidents to the organization.
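That identify–analyse–evaluate loop can be sketched with a toy risk register; the incidents and figures below are invented purely for illustration:

```python
# Identify: list disruptive incidents that could hit critical processes.
risks = [
    {"incident": "data centre power failure", "likelihood": 0.10, "impact": 500_000},
    {"incident": "key supplier outage",       "likelihood": 0.30, "impact": 100_000},
    {"incident": "ransomware on file server", "likelihood": 0.05, "impact": 900_000},
]

# Analyse: a rough annualised exposure per incident.
for r in risks:
    r["exposure"] = r["likelihood"] * r["impact"]

# Evaluate: treat the highest exposures first.
ranked = sorted(risks, key=lambda r: r["exposure"], reverse=True)
for r in ranked:
    print(f'{r["incident"]}: {r["exposure"]:.0f}')
```

A real ISO 22301 risk assessment is far richer than likelihood times impact, but the ordering step is the same: it turns a list of fears into a defensible treatment priority.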

To conclude: risk analysis and business impact analysis (BIA) are cornerstones in understanding the threats, vulnerabilities and mission-critical functions of the organization, and are thus required if one wants to discover the business’s critical processes and prioritize them correctly.

Filed under Business

QoTW #52 Which factors should I consider for devices that accept handwritten digital signatures?

2014-12-19 by . 1 comments

Indrek asked this question on digital signature devices, such as the ones delivery drivers get you to sign for your packages. While he identified EU directive 1999/93/EC as appearing to regulate this area, he had some concerns about what should be done to ensure these signatures are as valid as their paper counterparts.

sam280 pointed out that:

EU directive 1999/93/EC (and its upcoming replacement) enforces legal equivalence between a qualified electronic signature and a handwritten signature in all Member States, and “some legal value” for other types of advanced electronic signatures. However, this directive does not address “handwritten digital signatures” but actual electronic signatures, as standardized for instance by PAdES or CAdES. In other words, 1999/93/EC will not help you here, and I doubt technical measures alone will ensure that this kind of signature is accepted in court.

and

advanced electronic signatures which provide legal equivalence with a handwritten signature require the usage of a qualified certificate (1999/93/EC article 5.1): tablet-based solutions obviously do not belong to this category.

Which implies that the existing regulations don’t cater fully for this use case, and this is borne out by the accepted answer by D.W.

If there is no previous case law on topic, then I would expect this to come down to an assessment of credibility, based upon the testimony of the people involved, possibly testimony from expert witnesses, and the rest of the circumstances surrounding the court case. Keep in mind that the legal process doesn’t treat signatures as absolute ironclad guarantees; they consider them as part of the totality of the evidence.

D.W.’s answer discusses the problem of law here but sums up with a very down to earth conclusion:

…for a low-value transaction, you probably don’t need any crypto. If you’ve captured a signature that appears to match how Alice signs her name, that’s probably going to be good enough for a low-value transaction. Conversely, for a high-value transaction, I’m skeptical about whether these devices are going to be persuasive in court, no matter how much fancy crypto you’ve thrown into them.

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Filed under Question of the Week

QoTW #51 Would it be good secure programming practice to overwrite a “sensitive” variable before deleting it?

2014-12-12 by . 0 comments

Jonathan recently asked this question about secure development practices: specifically, whether it makes a difference to your application’s security if you overwrite the values of sensitive variables as soon as you’re through with them. The rationale is that if you don’t clear the variable values, there is a wider window of opportunity for a malicious party to find and use the sensitive data by reading it out of RAM.

Gilles’ answer explains that yes, this is important, and explains why. There are a number of reasons, and while an attacker reading the values out of RAM is a consideration, it isn’t even one of the more important ones.

Yes, it is good practice security-wise to overwrite data that is particularly sensitive when the data is no longer necessary, i.e. as part of an object destructor (either an explicit destructor provided by the language or an action that the program takes before deallocating the object). It is even good practice to overwrite data that isn’t in itself sensitive, for example to zero out pointer fields in a data structure that goes out of use, and also zero out pointers when the object they point to is freed even if you know you aren’t going to use that field anymore.

One reason to do this is in case the data leaks through external factors such as an exposed core dump, a stolen hibernation image, a compromised server allowing a memory dump of running processes, etc. Physical attacks where an attacker extracts the RAM sticks and makes use of data remanence are rarely a concern except on laptop computers and perhaps mobile devices such as phones (where the bar is higher because the RAM is soldered), and even then mostly in targeted scenarios only. Remanence of overwritten values is not a concern: it would take very expensive hardware to probe inside a RAM chip to detect any lingering microscopic voltage difference that might be influenced by an overwritten value. If you’re worried about physical attacks on the RAM, a bigger concern would be to ensure that the data is overwritten in RAM and not just in the CPU cache. But, again, that’s usually a very minor concern.

The most important reason to overwrite stale data is as a defense against program bugs that cause uninitialized memory to be used, such as the infamous Heartbleed. This goes beyond sensitive data because the risk is not limited to a leak of the data: if there is a software bug that causes a pointer field to be dereferenced without having been initialized, the bug is both less prone to exploitation and easier to trace if the field contains all-bits-zero than if it potentially points to a valid but meaningless memory location.
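Gilles’ advice is language-agnostic. As a minimal sketch of the idea in Python: keep the secret in a mutable bytearray (immutable str/bytes values cannot be overwritten at all) and zero it in place when done. Note this is only a sketch; CPython gives no guarantee that other copies of the data were never made elsewhere.

```python
def wipe(buf: bytearray) -> None:
    """Overwrite a mutable secret buffer in place before releasing it."""
    for i in range(len(buf)):
        buf[i] = 0

secret = bytearray(b"correct horse battery staple")
# ... use the secret ...
wipe(secret)
assert all(b == 0 for b in secret)  # the original bytes are gone from this buffer
```

The explicit loop mutates the existing allocation, which is the whole point; rebinding the name (`secret = bytearray()`) would leave the old bytes behind for the garbage collector, exactly the trap discussed below for C# strings.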

The accepted answer by makerofthings7 reckons that not only is this important, but also lays out some conditions that need to be considered:

Yes that is a good idea to overwrite then delete/release the value. Do not assume that all you have to do is “overwrite the data” or let it fall out of scope for the GC to handle, because each language interacts with the hardware differently. When securing a variable you might need to think about:
  • encryption (in case of memory dumps or page caching)
  • pinning in memory
  • ability to mark as read-only (to prevent any further modifications)
  • safe construction by NOT allowing a constant string to be passed in
  • optimizing compilers (see note in linked article re: ZeroMemory macro)

Andy Dent‘s answer has some advice for achieving this in C and C++

  • Use volatile
  • Use pragmas to surround the code using that variable and disable optimisations.
  • If possible, only assemble the secret in intermediate values rather than any named variables, so it only exists during calculations.

Lawtenfogle points out that you need to be wary of constructs like .NET’s immutable strings, and re-iterates the potential problem with optimizing compilers.

Storing a value that isn’t used again? Seems like something that would be optimized out, regardless of any benefit it might provide. Also, you may not actually overwrite the data in memory depending upon how the language itself works. For example, in a language using a garbage collector, it wouldn’t be removed immediately (and this is assuming you didn’t leave any other references hanging around). For example, in C#, I think the following doesn’t work.

string secret = "my secret data";

// ... lots of work ...

secret = "blahblahblah";

"my secret data" hangs around until garbage collected because it is immutable. That last line actually creates a new string and makes secret point to it. It does not speed up how fast the actual secret data is removed.

Several answers and comments also pointed out that when using a non-managed language like C or C++, when you can, you should also pin the memory in order to prevent it from being swapped to disk where sensitive values might remain indefinitely.

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Filed under Data, Question of the Week

Is our entire password strategy flawed?

2014-06-19 by . 8 comments

paj28 posed a question that really fits better here as a blog post:

Security Stack Exchange gets a lot of questions about password strength, password best practices, attacks on passwords, and there’s quite a lot for both users and sites to do, to stay in line with “best practice”.

Web sites need a password strength policy, account lockout policy, and secure password storage with a slow, salted hash. Some of these requirements have usability impacts, denial of service risks, and other drawbacks. And it’s generally not possible for users to tell whether a site actually does all this (hence plaintextoffenders.com).

Users are supposed to pick a strong password that is unique to every site, change it regularly, and never write it down. And carefully verify the identity of the site every time you enter your password. I don’t think anyone actually follows this, but it is the supposed “best practice”.

In enterprise environments there’s usually a pretty comprehensive single sign-on system, which helps massively, as users only need one good work password. And with just one authentication to protect, using multi-factor is more practical. But on the web we do not have single sign-on; every attempt from Passport, through SAML, OpenID and OAuth has failed to gain a critical mass.

But there is a technology that presents to users just like single sign-on, and that is a password manager with browser integration. Most of these can be told to generate a unique, strong password for every site, and rotate it periodically. This keeps you safe even in the event that a particular web site is not following best practice. And the browser integration ties a password to a particular domain, making phishing all but impossible. To be fair, there are risks with password managers “putting all your eggs in one basket”, and they are particularly vulnerable to malware, which is the greatest threat at present.

But if we look at the technology available to us, it’s pretty clear that the current advice is barking up the wrong tree. We should be telling users to use a password manager, not remember loads of complex passwords. And sites could simply store an unsalted fast hash of the password, forget password strength rules and account lockouts.


A problem we have though is that banks tell customers never to write down passwords, and some explicitly include ‘storage on PC’ in this. Banking websites tend to disallow pasting into password fields, which also doesn’t help.

So what’s the solution? Do we go down the ‘all my eggs are in a nice secure basket’ route and use password managers?
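As a sketch of what a manager does on your behalf, Python’s standard `secrets` module can generate that kind of unique, high-entropy password. The helper name and the default length here are arbitrary choices for the example:

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Build a random password from letters, digits and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per site; nothing for the user to memorise.
print(generate_password())
```

A 20-character draw from a 94-symbol alphabet is roughly 131 bits of entropy, which is why the slow-hash and lockout machinery matters so much less for manager-generated credentials than for human-chosen ones.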

I, like all the techies I know, use a password manager for everything. Of the 126 passwords I have in mine, I probably use 8 frequently. Another 20 monthly-ish. Some of the rest of them have been used only once or twice – and despite having a good memory for letters and numbers, I’m not going to be able to remember them so this password manager is essential for me.

I want to be able to easily open my password manager, copy the password and paste it directly into the password field.

I definitely don’t want this password manager to be part of the browser, however, as in the event of browser compromise I don’t wish all my passwords to be vacuumed up, so while functional interaction like copy and paste is essential, I’d like separation of executables.

What do you think – please comment below.

A short statement on the Heartbleed problem and its impact on common Internet users.

2014-04-11 by . 2 comments

On the 7th of April 2014, a team of security engineers (Riku, Antti and Matti) at Codenomicon and Neel Mehta of Google Security published information on a security issue in OpenSSL. OpenSSL is a piece of software used in the encryption process; it helps encrypt your computer traffic so that unauthorized people cannot understand what you are sending from one computer network to another. It is used in many applications: for example, if you use on-line banking websites, code such as OpenSSL helps to ensure that your PIN code remains secret.

The information that was released caused great turmoil in the security community, and many panic buttons were pressed because of the wide-spread use of OpenSSL. If you are using a computer and the Internet, you might be impacted: people at home just as much as major corporations. OpenSSL is used, for example, in web, e-mail and VPN servers, and even in some security appliances. However, the fact that you have been impacted does not mean you can no longer use your PC or any of its applications. You may be a little more vulnerable, but the end of the world may still be further away than you think.

First of all, some media reported on the “Heartbleed virus”. Heartbleed is in fact not a virus at all. You cannot be infected with it and you cannot protect against being infected. Instead it is an error in the computer programming code of specific OpenSSL versions (not all) which a hacker could potentially use to obtain information from the server (possibly including passwords and encryption keys, along with other random data in the server’s memory), potentially allowing him to break into a system or account.

Luckily, most applications in which OpenSSL is used rely on more security measures than OpenSSL alone. Most banks, for instance, continuously work to remain abreast of security issues, and have implemented several measures that lower the risk this vulnerability poses. An example of such a protective measure is transaction signing with an off-line card reader, or other forms of two-factor authentication. Typically, exploiting the vulnerability on its own will not allow an attacker to post fraudulent transactions if you are using two-factor authentication or an offline token generator for transaction signing.

So in summary, does the Heartbleed vulnerability affect end-users? Yes, but not dramatically. A lot of the risk to the end-users can be lowered by following common-sense security principles:

  • Regularly change your on-line passwords (as soon as the websites you use let you know they have updated their software, this is worthwhile, but it should be part of your regular activity)
  • Ideally, do not use the same password for two on-line websites or applications
  • Keep the software on your computer up-to-date.
  • Do not perform on-line transactions on a public network (e.g. WiFi hotspots in an airport). Anyone could be trying to listen in.

Security Stack Exchange has a wide range of questions on Heartbleed ranging from detail on how it works to how to explain it to non-technical friends. 

Authors: Ben Van Erck, Lucas Kauffman

Communicating Security Risks to Senior Management – 3 years on

2014-04-04 by . 0 comments

Back in July 2011 I wrote this brief blog post on the eternal problem of how to bridge the divide between security professionals and senior management.

Thought I’d revisit it nearly 3 years on and hopefully be able to point out ways in which the industry is improving…

Actually, I would have to say it is improving. I am seeing risk functions in larger organisations, especially in the Financial Services industry, with structures that are designed explicitly to oversee technology and report through a CRO to the board, and in the UK, the regulator is providing strong encouragement for financial institutions to get this right. CEOs and shareholders are also seeing the impact of security breaches, denial of service attacks, defacements etc., so now is a very receptive time.

But there is still a long way to go. Many organisations still do not have anyone at board level who can speak both the language of business or operational risk and the language of information security or IT risk. So it may have to be you.

The points in my previous blog post about good questions to help you prepare for conversations with senior management are still valid:

If there is one small piece of advice I’d give to anyone in the security profession it would be to learn about business and operational risk – even if it is just a single course it will help!

Filed under Business, Risk

QoTW #50: Does password protecting the BIOS help in securing sensitive data?

2014-02-28 by . 0 comments

Camil Staps asked this question back in April 2013, as although it is generally accepted that using a BIOS password is good practice, he couldn’t see what protection this would provide, given, in his words, “…there aren’t really interesting settings in the BIOS, are there?”

While Camil reckoned that the BIOS only holds things like date and time, and enabling drives etc., the answers and comments point out some of the risks, both in relying on a BIOS password and in thinking the BIOS is not important!

The accepted answer from Iszi covers off why the BIOS should be protected:

…[this allows] an attacker to bypass access restrictions you have in place on any non-encrypted data on your drives. With this, they can:

  • Read any data stored unencrypted on the drive.
  • Run cracking tools against local user credentials, or download the authenticator data for offline cracking.
  • Edit the Registry or password files to gain access within the native OS.
  • Upload malicious files and configure the system to run them on next boot-up.

And what you should do as a matter of course:

That said, a lot of the recommendations in my post here (and other answers in that, and linked, threads) are still worth considering.

  • Encrypt the hard drive
  • Make sure the computer is physically secure (e.g.: locked room/cabinet/chassis)
  • Use strong passwords for encryption & BIOS

Password-protecting the BIOS is not entirely an effort in futility.

Thomas Pornin covers off a possible reason for changing the date:

…by making the machine believe it is in the far past, the attacker may trigger some other behaviour which could impact security. For instance, if the OS-level logon uses smart cards with certificates, then the OS will verify that the certificate has not been revoked. If the attacker got to steal a smart card with its PIN code, but the theft was discovered and the certificate was revoked, then the attacker may want to alter the date so that the machine believes that the certificate is not yet revoked.

But all the answers agree that all a BIOS password does on its own is protect the BIOS settings – all of which can be bypassed by an attacker who has physical access to the machine, so as per Piskvor‘s answer:

you need to set up some sort of disk encryption (so that the data is only accessible when your system is running)

Like this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #49: How can someone go off-web, and anonymise themselves after a life online?

2014-01-27 by . 2 comments

Everything we do these days is online, whether through our own social media, purchases from online stores, or tracking by Google, Amazon, etc. The concept of regaining some sort of freedom is gaining traction in the media, with the leaking of NSA snooping documents and other privacy concerns, so before Christmas I asked the deceptively simple question:

How can someone go off-web, and anonymise themselves after a life online?

Which ended up being the most popular question I have ever asked on Stack Exchange, by a good margin.

Lucas Kauffman supplied the top rated answer – which goes into some detail on heuristics and data mining, which is likely to be the biggest problem for anyone trying to do this successfully:

Avoiding heuristics means changing everything you do completely. Stop using the same apps, accounts, go live somewhere else and do not buy the same food from the same brands. The problem here is that this might also pop up as a special pattern because it is so atypical. Changing your identity is the first step. The second one is not being discovered…the internet doesn’t forget. This means that photos of you will remain online, messages you posted, maybe even IDs you shared will remain on the net. So even when changing your behavior it only will need one picture which might expose you.

The Little Bear provided a short but insightful message, with disturbing undertones:

You cannot enforce forgetfulness. The Web is like a big memory, and you cannot force it to forget everything about you(*). The only way, thus, is to change your identity so that everything the Web knows about you becomes stale. From a cryptographic point of view, this is the same case as with a secret value shared by members of a group: to evict a group member, you have to change the secret value. (*) Except by applying sufficiently excessive force. A global thermonuclear war, with all the involved EMP, might do the trick, albeit with some side effects.

Question3CPO looks again at statistics on your financial footprint, but with a focus on how to muddy the waters with:

When it comes to finances, it’s similar; I have to make an assumption that the data I receive are an accurate indicator of who you are. Suppose you make 1/3 or more of your purchases completely away from your interest, for instance, you’re truly a Libertarian, but you decide to subscribe to a Socialist magazine. How accurate are my data then? Also, you may change in ten years, so how accurate will my data be then, unless I account for it (and how effective then is it to have all the historic data)?

and Ajoy follows up with some more pointers on poisoning data stores:

  1. Make a list of all websites where you have accounts or which are linked to you in some way.
  2. One by one, remove your personal details, friends, etc. Add misinformation – new obscure data, new friends, new interests, anything else you can think of. De-link your related accounts, re-link them to other fake ones.
  3. Let the poisoned information stay for some time. Meanwhile, you could additionally change these details again. Poisoning the poisoned! Ensure that there is no visible pattern or link between any of the poisoned accounts.
  4. Then you could delete all of them, again very slowly.

There are quite a few other insightful answers, and the question attracted a couple of very interesting comments, including my favourite:

At the point you have succeeded you will also be someone else. –  stackunderflow

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Filed under Data, Question of the Week

Attacking RSA through Sound

2013-12-23 by . 1 comments

A new attack against RSA has been made known this week. Details about it can be found in the paper RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysis. One notable name amongst the co-authors of the paper is Adi Shamir, one of the three researchers who published the original RSA algorithm.

This attack is a type of side-channel attack against RSA. A side channel attack is an attack that targets the implementation of a cryptosystem instead of targeting the algorithm. RSA has been broken by many side channel attacks in the past. The most famous of which is probably the timing attack described by Paul C. Kocher in his paper Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and Other Systems.

This attack works by taking advantage of the fact that a computer emits different sounds when performing different tasks. With this, it is possible to recover information about the RSA key used during the process of encryption or decryption. Genkin, Shamir and Tromer demonstrated that when the same plaintext is encrypted with different RSA keys, it is possible to discern which key was used in the encryption process. This is a form of key distinguishing attack.

This concept is not new. In fact, it is the topic of Tromer’s PhD thesis, Hardware-Based Cryptanalysis, published in 2007. What is new about this paper is that the researchers demonstrated an actual attack that is able to distinguish between RSA keys, instead of just the theoretical possibility. What is even more surprising is that they were able to pull off the attack using mobile phones, demonstrating that it does not require specialized recording equipment.

Should you be worried? The attack was demonstrated in lab conditions. It might be a little harder to pull off in real life scenarios where there will presumably be much more background noise to mask the sounds. The actual attack was demonstrated on GnuPG. Updating to the latest version of GnuPG 1.4.x will fix this particular problem. Better still, use the GnuPG 2.x branch which employs RSA blinding that should protect against such side-channel attacks.
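RSA blinding defeats this class of side channel by randomising the input to the private-key operation on every call, so what the hardware leaks no longer correlates with the ciphertext the attacker controls. A minimal sketch with deliberately tiny textbook parameters (never use numbers this small, or textbook RSA at all, in practice):

```python
import math
import secrets

# Toy textbook-RSA parameters, far too small for real use.
p, q, e = 61, 53, 17
n = p * q                      # modulus
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)            # private exponent (Python 3.8+ modular inverse)

def blinded_decrypt(c: int) -> int:
    # Choose a random blinding factor r coprime to n.
    while True:
        r = secrets.randbelow(n - 2) + 2
        if math.gcd(r, n) == 1:
            break
    c_blind = (c * pow(r, e, n)) % n      # the private-key op now sees c * r^e
    m_blind = pow(c_blind, d, n)          # equals m * r (mod n)
    return (m_blind * pow(r, -1, n)) % n  # strip the blinding factor

m = 65
c = pow(m, e, n)
assert blinded_decrypt(c) == m  # same plaintext recovered, different work each call
```

Because `r` is fresh on every decryption, repeated acoustic or timing measurements of the exponentiation average out instead of accumulating information about `d`.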

While this attack might not be practical as of now, it is still very interesting because many cryptosystems suffer from what are basically implementation problems. Once again, don’t roll your own cryptography!

For some further detail, read the related question on security.stackexchange.com.


Filed under Attack, Crypto