Archive for June, 2012

QotW #30: Are common passwords at particular risk?

2012-06-29 by roryalsop. 0 comments

User Experience Stack Exchange moderator Ben Brocka asked this question on Security Stack Exchange after the UX community asked whether they should disallow common passwords as a matter of course.

Cyril N's top-scoring answer focuses on a highly valuable response – educating the individual to behave better. From his answer, two key points arise:

Some UX specialists say that it’s not a good idea to refuse a password. One of the arguments is the one you provide: “but if you ban them, users will use other weak passwords”, or they will add random chars like 1234 -> 12340, which is stupid, nonsensical and will then force the user to go through the “lost my password” process because he can’t remember which chars he added.

and

Let the user enter the password he wants. This goes against your question, but as I said, if you force your user to enter a password other than one of the 25 known worst passwords, this will result in 1: A bad User Experience, 2: A probably lost password and the whole “lost my password” workflow. Now, what you can do is indicate to your user that this password is weak, or even add more details by saying it is one of the worst known passwords (with a link to them), that they shouldn’t use it, etc. If you detail this, you’ll incline your users to modify their password to a more complex one because now they know the risk. For those who still use 1234, let them do it, because there may be a simple reason: I often put a dumb password into a site that requests my login/pass just to see what the site provides me.

The only problem with this is called out in a comment by user Polynomial, who suggests a hybrid solution:

Reject outright terrible passwords, e.g. “qwerty123”, and warn on passwords that are a derivation of a dictionary word / bad password, e.g. “Qw3rty123” or “drag0n1”
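To make that hybrid approach concrete, here is a minimal sketch in Python of what such a check might look like. The banned list, the substitution map and the function name are all hypothetical placeholders – a real implementation would check against a much larger list of known-bad passwords:

    # Hypothetical hybrid check: reject the worst passwords outright,
    # warn when a password is only a trivial variation of one of them.
    WORST_PASSWORDS = {"password", "qwerty", "qwerty123", "12345678", "dragon", "letmein"}

    # Simple "leetspeak" substitutions, reversed so input can be normalised.
    LEET_MAP = str.maketrans({"0": "o", "1": "l", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"})

    def check_password(candidate: str) -> str:
        """Return 'reject', 'warn' or 'ok' for a candidate password."""
        lowered = candidate.lower()
        if lowered in WORST_PASSWORDS:
            return "reject"            # outright terrible, e.g. "qwerty123"
        # Strip trailing digits and undo the substitutions, so that
        # "Qw3rty123" or "drag0n1" reduce to a known bad password.
        normalised = lowered.rstrip("0123456789").translate(LEET_MAP)
        if normalised in WORST_PASSWORDS:
            return "warn"              # derivation of a bad password
        return "ok"

    print(check_password("qwerty123"))    # reject
    print(check_password("drag0n1"))      # warn
    print(check_password("tr0ub4dor&3"))  # ok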

Personally, I like the hybrid solution because, as Iszi points out, most password attacks are conducted offline, where the attacker has a copy of the hash database. In this scenario, dictionary attacks are a very cheap, very fast option that is easily automated:

…it is realistic to assume that attackers will target “common” passwords before resorting to brute force. John The Ripper, perhaps one of the most well-known offline password cracking tools, even uses this as a default action. So, if your password is “password” or “12345678”, it is very likely to be cracked in less than a minute.
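As a rough illustration of why this is so cheap, here is a minimal sketch in Python of a dictionary attack against a dump of unsalted MD5 hashes. The stolen hashes and the wordlist are tiny hypothetical placeholders – real attacks use stolen databases and wordlists with millions of entries:

    # Against unsalted, fast hashes (MD5 here), each guess costs one hash computation.
    import hashlib

    # Hypothetical stolen hash database: hash -> username.
    stolen_hashes = {
        "5f4dcc3b5aa765d61d8327deb882cf99": "alice",   # md5("password")
        "25d55ad283aa400af464c76d713c07ad": "bob",     # md5("12345678")
    }
    wordlist = ["letmein", "password", "dragon", "12345678"]

    for guess in wordlist:
        digest = hashlib.md5(guess.encode()).hexdigest()
        if digest in stolen_hashes:
            print(f"{stolen_hashes[digest]} uses the password {guess!r}")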

Dr Jimbob has provided the logical Security answer: It depends! The requirements will be very different for an online banking application and for an application that will only cause minor inconvenience to one end user if compromised. Regulatory requirements may define the level of security protection you require. He also points out that:

Very weak passwords (top 1000) can be randomly attacked online by botnets (even if you use captchas/delays after so many incorrect attempts)

Bangdang also supports disallowing common passwords, and ends with a section on the trade-offs between security and usability, along with the effects of a successful compromise, which include finger-pointing and blame.

Tylerl provides some insight from his experience analyzing attack code:

the password ordering is this:

    1. Try the most common passwords first. Usually there’s a list of between 10 and 500 passwords to try
    2. Try dictionary passwords second. This often includes variations like substituting “4” where an “A” would be or a “1” where there was the letter “l”, as well as adding numbers to the end.
    3. Exhaust the password space, starting at “a”, “b”, “c”… “aa”, “ab”, “ac”… etc.

Step 3 is usually omitted, and step 1 is usually attempted on a range of usernames before moving to step 2.
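As a rough sketch of that ordering in Python (the lists here are tiny hypothetical placeholders – real attack tools ship with far larger common-password lists and dictionaries, and far richer mangling rules):

    # Generate guesses in the order tylerl describes.
    COMMON = ["password", "123456", "qwerty"]        # step 1: most common passwords
    DICTIONARY = ["dragon", "monkey", "letmein"]     # step 2: dictionary words
    SUBS = {"a": "4", "l": "1", "o": "0", "e": "3"}  # simple character substitutions

    def variations(word):
        """Yield a dictionary word plus simple substitution and suffix variations."""
        yield word
        leet = "".join(SUBS.get(c, c) for c in word)
        if leet != word:
            yield leet                               # e.g. "dragon" -> "dr4g0n"
        for suffix in ("1", "123"):
            yield word + suffix                      # e.g. "dragon1"

    def guesses():
        yield from COMMON                            # step 1
        for word in DICTIONARY:                      # step 2
            yield from variations(word)
        # Step 3 - exhausting the whole password space ("a", "b", ..., "aa", ...) -
        # is usually omitted because it is vastly more expensive than steps 1 and 2.

    for g in guesses():
        print(g)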

In general, the answers go to show just how pervasive a key problem for the security industry is: the trade-off between usability and security. You can add strong security at every layer, but if the user experience isn’t appropriate, it will not work.

This is why, for major roles and access improvement projects, we are seeing significant investment in the people side of things – helping projects understand human acceptance criteria, reasons for rejection, and the passive blocking of projects which on the face of it seem perfectly logical.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #29: Risks of giving developers admin rights to their own PCs

2012-06-08 by roryalsop. 3 comments

Carolinegordon asked Question of the Week number 29 to try to understand what risks are posed by giving developers admin rights to their machines – something many developers expect in order to use their machines effectively, but which security or IT may deny based on company policy.

Interestingly, for a question asked on a security site, the highest voted answers all agree that developers should be given admin rights:

alanbarber listed two very good points – developer toolsets are frequently updated, so the IT load of implementing updates can be high, and debugging tools may need admin rights. His third point, that developers are more security-conscious, I’m not so sure about. I tend to think developers are just like other people – some are good at security, some are bad.

Bruno answered along similar lines, but also added the human aspect in two important areas. Giving developers and sysadmins different levels of access can lead to a divide, and a them-and-us culture, which may impact productivity. Additionally, as developers tend to be skilled in their particular platform, you run the risk of them getting around your controls anyway – which could open up wider risks.

DKNUCKLES made some strong points on what could happen if developers have admin rights:

  • Violation of security practices – especially the usual rule of least privilege
  • Legal violations – you may be liable if you don’t protect code/data appropriately (a grey area at best here, but worth thinking about)
  • Installation of malware – deliberately or accidentally

wrb posted a short answer, but one with an important concept:

The development environment is always isolated from the main network. It is IT’s job to make sure you provide them with whatever setup they need while making sure nothing in the dev environment can harm the main network. Plan ahead and work with management to buy the equipment/software you need to accomplish this.

Todd Dill has a viewpoint which I see a lot in the regulated industries I work in most often – there may be a regulatory requirement mandating separation between developer and administrator access. Admittedly, this is usually managed by strongly segregating the Development, Testing, Staging and Live environments, as at the end of the day there is a business requirement that developers can do their job and deliver working application code in the timelines required.

Daniel Azuelos came at it with a very practical approach, asking what the difference in risk is between the two scenarios. As these developers are expected to be skilled and have physical access to their computers, they could in theory run whatever applications they want, so taking the view that preventing admin access protects against the “evil inside” is a false risk reduction.

This question also generated a large number of highly rated comments, some of which may be more tongue in cheek than others:

  • The biggest risk is that the developers would actually be able to get some work done.
  • Explain to them that the biggest security risk to their network is an angry developer… or just let them learn that the hard way.
  • It should be noted that access to machine hardware is the same as granting admin rights in security terms. A smart malicious agent can easily transform one into the other.
  • If you can attach a debugger to a process you don’t own, you can make that process do anything you want. So debugging rights = admin.

My summary of the various points:

While segregating and limiting access is a good security tenet, practicality must rule – developers need the functionality to produce applications and code to support the business, and they often have the skills to get around permissions, so why not accept that they need admin rights to the development environment, but restrict them elsewhere?

This is an excellent question, as it not only generated interest from people on both sides of the argument, but they produced well thought out answers which helped the questioner and are of value to others who find themselves in the same boat.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #28: I found the company I work for is putting backdoors into mobile phones

2012-06-01 by roryalsop. 0 comments

Question of the Week number 28 got an astonishing amount of views and answers – it is a very hot topic in the world of privacy and data protection.

User anonymousquery asked this question because he is concerned about the ethical implications of such a backdoor, whether it is intentional or not, whilst his employers don’t see it as a big deal as they “aren’t going to use it.”

Oleksi posted the top-scoring answer, which makes a point that should be raised in any similar circumstance:

Just because they won’t use it, doesn’t mean someone else won’t find it and use it.

It will present a major risk just by existing – if an attacker finds this backdoor, their job is made so much easier. In a comment on this answer, makerofthings7 added the interesting fact that Microsoft have even taken the step of banning harmless Easter eggs from their software, in order to help customers buy into their Trustworthy Computing concept and to meet government regulations.

Mason Wheeler targeted the question specifically, answering the “What should I do?” part by discussing the moral and ethical responsibility to protect customers from a product with serious security flaws. He suggests whistleblowing – possibly to the FBI or a similar body if it is serious enough!

Martianinvader also made the following important point:

Fixing this issue isn’t just ethical, it’s essential for your company’s survival. It’s far, far better to fix it quietly now than a week after all your users and customers have left you because it was revealed by some online journalist.

Avio noted that there are risks to both you and your company, and suggested another course of action which may be preferable:

And if I were you, I’d just be very cautious. First, I’d make really, really sure that what I saw was a backdoor – I mean legally speaking. Second, I’d try in every way to convince the company to remove the backdoor.

Bruce Ediger gave some essential information on protecting yourself – as this is now almost public knowledge, you may get blamed if it is exploited!

With another 17 answers in addition to these, there is a wide range of viewpoints and pieces of advice, but the overall view is that the first thing to do is understand where you stand legally, and where ethical issues come into the equation, then consider the impact of either whistleblowing or staying quiet about the issue before making a decision which may affect your career.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.