Question of the Week

QotW #31: What cryptographic flaw was exploited by Flame, to get its code signed by Microsoft?

2012-07-27 by roryalsop. 0 comments

Community member D.W. nominated this week’s question: What cryptographic flaw was exploited by Flame to get its code signed by Microsoft?

Hendrik Brummerman provided an in-depth answer which was subsequently confirmed by updates from Microsoft:

Certificate Purpose

There are multiple purposes a certificate may be used for. For example, it may be used as proof of identity of a person or web server, it may be used for code signing, or it may be used to sign other certificates.

In this case a certificate that was intended to sign license information was able to sign code.

It might be as simple as Microsoft not checking the purpose-flag of customer certificates they signed:

Specifically, when an enterprise customer requests a Terminal Services activation license, the certificate issued by Microsoft in response to the request allows code signing.
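
To make the purpose check concrete, here is a minimal sketch (my own illustration, not code from the answer) of how one might inspect a certificate's Extended Key Usage with Python's third-party cryptography package to see whether it permits code signing; the file name is hypothetical.

    from cryptography import x509
    from cryptography.x509.oid import ExtendedKeyUsageOID

    def allows_code_signing(pem_bytes: bytes) -> bool:
        """Return True if the certificate's EKU extension includes code signing."""
        cert = x509.load_pem_x509_certificate(pem_bytes)
        try:
            eku = cert.extensions.get_extension_for_class(x509.ExtendedKeyUsage).value
        except x509.ExtensionNotFound:
            # No EKU extension at all; some verifiers treat this as "any purpose",
            # which is exactly the kind of looseness discussed above.
            return False
        return ExtendedKeyUsageOID.CODE_SIGNING in eku

    with open("customer_cert.pem", "rb") as f:  # hypothetical file name
        print("Code signing allowed:", allows_code_signing(f.read()))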

MD5 collision attack

The reference to an old algorithm might indicate a collision attack on the signing process: there was a talk at CCC 2008 called MD5 considered harmful today – Creating a rogue CA Certificate. In that talk the researchers explained how to generate two certificates with the same hash. They generated a harmless-looking certification request and submitted it to a CA. The CA signed it and issued a valid certificate for HTTPS servers. But this certificate had the same hash as another, separately generated certificate whose purpose was to act as a CA certificate, so the CA's signature on the harmless certificate was valid for the dangerous one as well. The researchers exploited a weakness in MD5 to generate collisions. In order for the attack to work, they had to predict the information the CA would write into the certificate.

The combination of a collision attack and a misuse of the certificate purpose were both theoretical possibilities before this attack. Since then, the researchers behind the original MD5 collision attack have published that the Flame attackers used a new variant of the known MD5 chosen-prefix collision attack.
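
For a sense of why hash collisions undermine signatures at all, here is a small sketch (my own, with hypothetical file names) showing the core issue: a signature is computed over a digest, so any two certificates that share an MD5 digest effectively share the signature.

    import hashlib

    def md5_hex(path: str) -> str:
        h = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical file names standing in for a colliding pair like the one from
    # the rogue-CA research: one harmless TLS certificate, one rogue CA certificate.
    benign = md5_hex("harmless_tls_cert.der")
    rogue = md5_hex("rogue_ca_cert.der")
    print(benign, rogue)
    if benign == rogue:
        print("Same MD5 digest: a CA signature over one is equally valid for the other.")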

Mark Hillick listed a few useful links around the wider problems the antivirus industry has – being a largely reactive industry, its effectiveness is reduced – and a related presentation by Moxie Marlinspike on authentication.

D.W. also provided some useful links for further reading, from Microsoft’s own TechNet and from Ars Technica.

Makerofthings7‘s answer focused on reducing the surface area of public trust – in this instance, it wouldn’t have prevented the attack, as the cert was signed by Microsoft, but it would improve security in general.

Silvercore linked to an excellent blog post on the incident – well worth a read.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #30: Are common passwords at particular risk?

2012-06-29 by roryalsop. 0 comments

User Experience Stack Exchange moderator Ben Brocka asked this question on Security Stack Exchange after the UX community asked whether they should disallow common passwords as a matter of course.

Cyril N‘s top-scoring answer focuses on a highly valuable response – educating the individual to behave better. From his answer, two key points arise:

Some UX specialists says that it’s not a good idea to refuse a password. One of the arguments is the one you provide : “but if you ban them, users will use other weak passwords”, or they will add random chars like 1234 -> 12340, which is stupid, nonsensical and will then force the user to go through the “lost my password” process because he can’t remember which chars he added.

and

Let the user enter the password he wants. This goes against your question, but as I said, if you force your user to enter another password than one of the 25 known worst passwords, this will result in 1: A bad User Experience, 2: A probably lost password and the whole “lost my password” workflow. Now, what you can do, is indicate to your user that this password is weak, or even add more details by saying it is one of the worst known passwords (with a link to them), that they shouldn’t use it, etc etc. If you detail this, you’ll incline your users to modify their password to a more complex one because now, they know the risk. For the one that will use 1234, let them do this because there is maybe a simple reason : I often put a dumb password in some site that requests my login/pass just to see what this site provides me.

The only problem with this is called out in a comment by user Polynomial who suggests a hybrid solution:

Reject outright terrible passwords, e.g. “qwerty123”, and warn on passwords that are a derivation of a dictionary word / bad password, e.g. “Qw3rty123” or “drag0n1”
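
As a rough illustration of what such a hybrid check might look like (a sketch of my own, with tiny placeholder word lists rather than a real common-password list):

    # Placeholder word lists; a real deployment would use a much larger
    # common-password list and dictionary.
    WORST_PASSWORDS = {"password", "12345678", "qwerty123", "letmein"}
    BAD_ROOTS = {"password", "qwerty", "dragon", "letmein"}
    LEET_MAP = str.maketrans("431!05$7", "aeiiosst")  # 4->a, 3->e, 1->i, ...

    def normalise(pw: str) -> str:
        """Lower-case, drop trailing digits, undo simple substitutions (Qw3rty123 -> qwerty)."""
        return pw.lower().rstrip("0123456789").translate(LEET_MAP)

    def check(pw: str) -> str:
        if pw.lower() in WORST_PASSWORDS:
            return "reject"   # outright terrible: refuse it
        if normalise(pw) in BAD_ROOTS:
            return "warn"     # derivation of a known-bad word: warn the user
        return "ok"

    for candidate in ("qwerty123", "Qw3rty123", "drag0n1", "correct horse battery"):
        print(candidate, "->", check(candidate))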

Personally, I like the hybrid solution because, as Iszi points out, most password attacks are conducted offline, where the attacker has a copy of the hash database. In this scenario, dictionary attacks are a low-CPU, very fast option that is easily automated:

…it is realistic to assume that attackers will target “common” passwords before resorting to brute force. John The Ripper, perhaps one of the most well-known offline password cracking tools, even uses this as a default action. So, if your password is “password” or “12345678”, it is very likely to be cracked in less than a minute.

Dr Jimbob has provided the logical Security answer: It depends! The requirements will be very different for an online banking application and for an application that will only cause minor inconvenience to one end user if compromised. Regulatory requirements may define the level of security protection you require. He also points out that:

Very weak passwords (top 1000) can be randomly attacked online by botnets (even if you use captchas/delays after so many incorrect attempts)

Bangdang also supports disallowing common passwords, and has a final section on the trade-offs between security and usability, along with the effects of successful compromise, which include finger-pointing and blame.

Tylerl provides some insight from his experience analyzing attack code:

the password ordering is this:

    1. Try the most common passwords first. Usually there’s a list of between 10 and 500 passwords to try
    2. Try dictionary passwords second. This often includes variations like substituting “4” where an “A” would be or a “1” where there was the letter “l”, as well as adding numbers to the end.
    3. Exhaust the password space, starting at “a”, “b”, “c”… “aa”, “ab”, “ac”… etc.

Step 3 is usually omitted, and step 1 is usually attempted on a range of usernames before moving to step 2.
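
A rough sketch of that candidate ordering (my own illustration, not tylerl's code, with tiny placeholder word lists) might look like this:

    import itertools
    import string

    COMMON = ["123456", "password", "letmein"]         # step 1: short list of top passwords
    DICTIONARY = ["dragon", "monkey", "shadow"]         # step 2: dictionary words
    SUBS = str.maketrans("al", "41")                    # "a"->"4", "l"->"1" style mangling

    def candidates():
        yield from COMMON                               # step 1: most common passwords first
        for word in DICTIONARY:                         # step 2: words plus simple variations
            yield word
            yield word.translate(SUBS)
            for n in range(10):
                yield f"{word}{n}"
        for length in itertools.count(1):               # step 3: exhaust the space (usually skipped)
            for combo in itertools.product(string.ascii_lowercase, repeat=length):
                yield "".join(combo)

    # Print the first 20 guesses just to show the ordering.
    for guess in itertools.islice(candidates(), 20):
        print(guess)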

In general, the answers show just how pervasive a key problem of the security industry is: the trade-off between usability and security. You could add strong security at every layer, but if the user experience isn’t appropriate it will not work.

This is why, for major roles and access improvement projects, we are seeing significant investment in the people and human capital side of things – helping projects understand human acceptance criteria, reasons for rejection, and the passive blocking of projects which on the face of it seem perfectly logical.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #29: Risks of giving developers admin rights to their own PCs

2012-06-08 by roryalsop. 3 comments

Carolinegordon asked Question of the Week number 29 to try to understand what risks are posed by giving developers admin rights to their machines – something many developers expect in order to use their machines effectively, but that security or IT may deny based on company policy.

Interestingly, for a question asked on a security site, the highest voted answers all agree that developers should be given admin rights:

alanbarber listed two very good points – developer toolsets are frequently updated, so the IT load for implementing updates can be high, and debugging tools may need admin rights. His third point, that developers are more security-conscious, I’m not so sure about. I tend to think developers are just like other people – some are good at security, some are bad.

Bruno answered along similar lines, but also added the human aspect in two important areas. Treating developers differently from sysadmins can lead to a divide, and a them-and-us culture, which may impact productivity. Additionally, as developers tend to be skilled in their particular platform, you run the risk of them getting around your controls anyway – which could open up wider risks.

DKNUCKLES made some strong points on what could happen if developers have admin rights:

  • Violation of security practices – especially the usual rule of least privilege
  • Legal violations – you may be liable if you don’t protect code/data appropriately (a grey area at best here, but worth thinking about)
  • Installation of malware – deliberately or accidentally

wrb posted a short answer, but with an important key concept:

The development environment is always isolated from the main network. It is IT’s job to make sure you provide them with what ever setup they need while making sure nothing in the dev environment can harm the main network. Plan ahead and work with management to buy the equipment/software you need to accomplish this.

Todd Dill has a viewpoint which I see a lot in the regulated industries I work in most often – there could be a regulatory requirement which mandates separation between developer and administrator access. Admittedly this is usually managed by strongly segregating Development, Testing, Staging and Live environments, as at the end of the day there is a business requirement that developers can do their job and deliver application code that works in the timelines required.

Daniel Azuelos came at it with a very practical approach, which is to ask what the difference in risk is between the two scenarios. As these developers are expected to be skilled, and have physical access to their computers, they could in theory run whatever applications they want to, so taking the view that preventing admin access protects from the “evil inside” is a false risk reduction.

This question also generated a large number of highly rated comments, some of which may be more tongue in cheek than others:

  • The biggest risk is that the developers would actually be able to get some work done.
  • Explain them that the biggest security risk to their network is an angry developer …or just let them learn that the hard way.
  • It should be noted that access to machine hardware is the same as granting admin rights in security terms. A smart malicious agent can easily transform one into the other.
  • If you can attach a debugger to a process you don’t own, you can make that process do anything you want. So debugging rights = admin.

My summary of the various points:

While segregating and limiting access is a good security tenet, practicality must rule – developers need the functionality to produce applications and code to support the business, and often have the skills to get around permissions, so why not accept that they need admin rights in the development environment, and restrict them elsewhere?

This is an excellent question, as it not only generated interest from people on both sides of the argument, but they produced well thought out answers which helped the questioner and are of value to others who find themselves in the same boat.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #28: I found the company I work for is putting backdoors into mobile phones

2012-06-01 by roryalsop. 0 comments

Question of the Week number 28 got an astonishing number of views and answers – it is a very hot topic in the world of privacy and data protection.

User anonymousquery asked this question because he is concerned about the ethical implications of such a backdoor, whether it is intentional or not, whilst his employers don’t see it as a big deal as they “aren’t going to use it.”

Oleksi posted the top scoring answer which makes the point which should be raised in any similar circumstance:

Just because they won’t use it, doesn’t mean someone else won’t find it and use it.

It will present a major risk by just existing – if an attacker finds this backdoor, their job is made so much easier. In a comment on this answer, makerofthings7 added the interesting fact that Microsoft have even taken the step of banning harmless Easter Eggs from their software in order to help customers buy in to their Trustworthy Computing concept and to meet government regulations.

Mason Wheeler targeted the question specifically, answering the “What should I do?” part by discussing the moral and ethical responsibility to protect customers from a product with serious security flaws. He suggests whistleblowing – possibly to the FBI or a similar body if it is serious enough!

Martianinvader also made the following important point:

Fixing this issue isn’t just ethical, it’s essential for your company’s survival. It’s far, far better to fix it quietly now than a week after all your users and customers have left you because it was revealed by some online journalist.

Avio pointed out that there are risks to you and your company, and suggested another course of action which may be preferable:

And if I were you, I’ll just be very cautious. First, I’ll make really really sure that what I saw was a backdoor, I mean legally speaking. Second, I’ll try in any way to convince the company to remove the backdoor.

Bruce Ediger gave some essential information on protecting yourself – as this is now almost public knowledge, you may get blamed if it is exploited!

With another 17 answers in addition to these, there is a wide range of viewpoints and pieces of advice, but the overall view is that the first thing to do is understand where you stand legally, and where ethical issues come into the equation, then consider the impact of either whistleblowing or staying quiet about the issue before making a decision which may affect your career.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #27: Open Source vs Closed Source Systems

2012-05-25 by ninefingers. 0 comments

Question of the Week number 27 is a contentious, hotly debated issue in the software world. The question itself was posed by Security.SE user blunders, who quoted the argument often used in the defence of open source as a business model:

My understanding is that open source systems are commonly believed to be more secure than closed source systems. … A common position against closed source systems is that a lack of awareness is at best a weak security measure; commonly referred to as security through obscurity.

and then we hit the question itself:

Question is, are open source systems on average better for security than closed source systems?

We’ll begin with the top voted answer by SE user Jesper Mortensen, who explained that the whole notion of being able to generally compare open versus closed source systems is a bad one when there are so many other factors involved. To compare two systems you really need to look beyond the licensing model they use, and look at other factors too. I’ll quote Jesper’s list in its entirety:

  • Licenses.
  • Access to source code.
  • Very different incentive structures, for-profit versus for fun.
  • Very different legal liability situations.
  • Different, and wildly varying, team sizes and team skillsets.

Of course, this is by no means a complete treatment of the possible differences.

Jesper also highlighted the importance of comparing pieces of software that solve specific domain issues – not software in general. You have to do this, to even remotely begin to utilise the list above.

Security.SE legend Thomas Pornin also answered this question. I’ll begin coverage of his answer with his summary:

… the “opensource implies security” idea is overrated. What is important is the time (and skill) devoted to the tracking and fixing of security issues, and this is mostly orthogonal to the question of openness of the source.

The main thrust of Thomas’ answer was that, actually, maintained software is more secure than unmaintained software. As an example, Thomas cited OpenSSL remote execution bugs that had been left lying in the code tree unfixed for some time – highlighting a possible advantage of closed source systems in that, when developed by companies, the effort and time spent on QA is generally higher than for open source systems.

Thomas’ answer also covers the counterpoint to this – that closed source systems can easily conceal security issues, too, and that having the source allows you to convince yourself of security more easily.

The next answer was provided by Ori, who lists a set of premises used for justifying the security of open source:

  1. The Customization premise
  2. The License Management premise
  3. The Open Format premise
  4. The Many Eyes premise
  5. The Quick Fix premise

As Ori rightly says, the customization premise means a company can take an open source platform and add an additional set of security controls. Ori quotes NSA’s SELinux as an example of such a project. For companies with the time and money to produce such platforms and make such fixes, this is clearly an advantage for open source systems.

Ori covers the license management and open format premises from a compliance and resilience perspective. Using open source software (and making modifications) carries certain license constraints – the potential to violate these constraints is clearly a risk to the business. Likewise, for business continuity purposes, the ability not to be locked in to a specific platform is a huge win for any company.

Finally, an answer by yours truly. The major thrust of my answer is succinctly summarised by AviD‘s comment on it:

I’ve always proposed an amendment to Linus’ Law: “Given enough trained eyeballs, most bugs are relatively shallow”

I explained, through use of a rather intriguing vulnerability introduced into development kernels by a compiler bug, that having the knowledge to detect these issues is critical to security. The source being available does not directly guarantee you have the knowledge to detect such issues.

That’s it for answers. As you can see, none of us took sides generally on the “open versus closed” debate, instead pointing out that there are many factors to consider beyond the license under which source is available. I think the whole set of answers is best summarised by this.josh‘s comment on the top voted answer – so I’ll leave you with that:

I agree. What matters most is how many people with knowledge and experience in the security domain actively design, implement, test, and maintain the software. Any project where no-one is looking at security will have significant vulnerabilities, regardless of how many people are on the project.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #26: Malicious QR Code and Mitigation

2012-05-04 by roryalsop. 0 comments

This week’s Question of the Week was asked by Purge back in February. His concern has been echoed in various publications – the worry that scanning one of the common QR codes you see in magazine adverts and on billboards could cause something malicious to happen, as most QR scanners on smartphones take you straight to the URL encoded in the QR image. This isn’t a malicious QR code (unless you count linking to a particular genre of music as malicious) – but how would you know?

logicalscope pointed out that a QR code is simply an encoding, so anything you could put in a URL could be encoded in a QR code. This could include XSS, SQL injection or any other URL-based attack.

handyjohn linked to a brief paper over on http://dl.packetstormsecurity.net/papers/attack/attaging.pdf outlining how QR codes could be used to direct victims to an attack website. An attacker could simply print QR code stickers and place them over existing ones on popular advertising hoardings to fool people into going to a site either with malicious code, or that is a spoof of the expected website which can ask for credentials from the victim.

roryalsop focused on the mitigation, which can be very straightforward: rather than send the browser directly to the website, just display the URL that is encoded in the QR image. This way the user can decide whether it is a malicious website or not (within the usual bounds for Internet users). Admittedly logicalscope’s final point, that the QR decoder application could itself have a vulnerability, is also true, but by adding a user validation step we can at least improve security.
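
A minimal sketch of that mitigation (my own illustration; it assumes the third-party pyzbar and Pillow packages and a hypothetical image file – on a phone this logic would live in the scanner app itself):

    import webbrowser
    from PIL import Image
    from pyzbar.pyzbar import decode

    def scan_and_confirm(image_path: str) -> None:
        # Decode the QR code and show the user the URL before opening anything.
        results = decode(Image.open(image_path))
        if not results:
            print("No QR code found.")
            return
        url = results[0].data.decode("utf-8", errors="replace")
        print(f"This QR code points to: {url}")
        if input("Open it? [y/N] ").strip().lower() == "y":
            webbrowser.open(url)

    scan_and_confirm("billboard_advert.png")  # hypothetical image file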

How about storing this one in your phone as a Security Stack Exchange business card – assuming people trust you enough to scan it.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW#25: Introducing QotW

2012-04-27 by roryalsop. 1 comment

What is the QotW?

At nearly 3,500 questions we have a wide variety of topics, answers and styles. In general, when someone comes to the site they are looking for answers to a specific problem, or to give answers to questions in their field, so they may not see the vast majority of questions. Question of the Week posts on meta.security.stackexchange.com allow the community to vote for their favourite question to be discussed on the blog. This blog itself is quite young – we have 44 posts published, of which 24 are QotW posts.

Why do we do it?

On the Internet, getting visitors to your site is the key metric – QotW is another avenue to get what we do in front of a wider audience. Our QotW blog posts link to questions, answers, community members and external sites where relevant in order to add context and depth, showcasing our site, and this is demonstrated in our referrer stats: we get good traffic from Slashdot, Reddit, Facebook and Twitter, as well as Bruce Schneier’s and Dan Kaminsky’s blogs, and even explainxkcd.com, so we are doing something right and gaining visibility.

How do we do it?

@Iszi’s answer here lists the process in detail, but to summarise:

We post a QotW meta question on a Friday to invite ideas for the following week. In order to avoid dupes, we maintain a list of previous questions featured on the blog, as well as those which have been proposed but not yet published.

By Tuesday we have the topic and author decided (typically individuals volunteer in our chat room, the DMZ – feel free to become a volunteer; we can add you as a contributor on the blog site).

The administrators manage the workflow planning through a Trello workspace.

QotW posts aren’t expected to be in-depth treatises, so drafts are ready by Thursday morning so they can be reviewed in time for a midday Friday publication (we’ve gone with UTC timing for this schedule as we have members from Australia to the west coast of the USA).

Why should you contribute?

First, and most importantly, because you want to. You’ve seen something interesting happening on the site, or have an interesting topic you want to cover and you’d like to share it with the world.

Did we mention it is a nice addition to your careers.stackoverflow.com or LinkedIn profile?

In addition, you help grow the community you are a member of (now over 8000 individuals – a good blog post can more than double the rate of new users joining that day). Your words and name will be attracting the up and coming security experts of tomorrow.

We welcome all contributors to the blog, and the light touch of the QotW posts is a relatively easy way to start security blogging. Seasoned reviewers will be more than happy to assist.

Liked this post? Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #24: Why do people tell me not to use VLANs for security?

2012-04-20 by scottpack. 1 comment

This week’s question of the week was asked by user jtnire, and is very near and dear to those security professionals who came out of networking or systems backgrounds. He was doing some network design and came across the classic statement that “VLANs are Not a Security Tool”.

As of this writing, jtnire had not accepted any answers; however, user Rory McCune was leading the pack. Rory focused primarily on the classic human problem of misconfiguration – particularly easy when we’re talking about typing Gi/0/4 when you meant Gi/1/4, as opposed to plugging a cable into the wrong port. He also specifically called out VLAN hopping, which can abuse a misconfiguration to allow a malicious user access to a non-authorized VLAN.

User and moderator Rory Alsop, speaking from an audit perspective, expanded on what the other Rory mentioned and focused more generally on what would make him double-take. He pointed out that VLANs are generally used for cheap network segmentation, and that if you’re using them as a security tool, you probably want to do it right and use a physically isolated network instead.

Jakob Borg came in with a completely different approach. He explained that, as an ISP, VLANs are a crucial component of their environment and, when done right, can be a very powerful tool from both a security and service perspective. User jliendo largely agreed with Jakob that configuration is king, and that when configured properly VLANs are an excellent tool in your security arsenal. He also went into more technical detail about some of the possible attacks against VLANs and how they can be mitigated.

In this author’s opinion this is a fantastic question, as VLANs are becoming an extremely common mechanism for network isolation. The answers also did a great job of coming at the problem from all manner of angles, from external auditors to in-the-trenches technicians.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #23: Why is it difficult to catch Anonymous/Lulzsec?

2012-04-13 by roryalsop. 2 comments

This week’s question of the week was asked by user claws back in February 2011, and while a lot has happened since then, it is still a very valid question.

The top-scoring answer, “What makes you think they don’t get caught”, by atdre, while more of a challenge to the original question, has proven to be quite appropriate, as over the last year various alleged members of Anonymous have been caught – some through informants, others through intelligence work. The remainder of the answers focus on the technical and structural reasons why Anonymous continues to be a major force on the Internet.

SteveS, Purge, mrnap, tylerl and others mention the usual ways attackers hide on the Internet: using machines in other countries, generally owned by unwitting individuals who have not protected them sufficiently (this includes botnets – though there are also willing botnets, provided by followers of Anonymous who allow their machines to be used for attacks), and routing through networks such as Tor (The Onion Router), so that even if law enforcement try to trace the connection back they will fail, either because there are too many connections to track, or because some of the connections will pass through countries where the Internet Service Providers are not able or willing to assist with the trace.

I think Eli hit the nail on the head, however, with “because anonymous can be anyone, literally” – while there is certainly a core group of skilled and motivated individuals, there are many thousands of individuals who will contribute to an attack, and these individuals may be different from one month to the next, as the nature of Anonymous allows people to join and take part as and when they want to, if a particular cause is of interest to them.

The Lulzsec spinoff from Anonymous appeared to be a deliberately short-lived group who wanted to do something less political and more for “the lulz” – focusing on large corporates and security organisations to highlight weaknesses in controls – and nealmcb provided links in comments to articles on this group in particular. In terms of detection, the same comments apply here as to the wider Anonymous group.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com

QotW #22: What are legal/ethical concerns to bear in mind, when hacking websites with open invitations?

2012-04-06 by ninefingers. 2 comments

This week’s question of the week was asked by user Yoav Aner, who wanted to understand the legal and ethical concerns of executing an attack on a web site which carried a notice inviting attacks. Yoav specifically wanted to know what, if any, contractual implications there were and, if it were not specified, how far would be too far. This spun off into two further questions – What security measures to have before openly allowing security researchers to hack your site and What security concerns should one bear in mind when hacking open-invitation websites? – so this post will look at all three.

Before we start, I must reiterate: we are security professionals here, not lawyers, so if in doubt, consult a lawyer.

Legal and ethical concerns:

The answer Yoav accepted was provided by Rory McCune, who raised the point that ultimately a no-holds-barred approach is extremely unlikely to be acceptable in any circumstance. Rory highlighted the importance of ensuring the page in question was written by the administrators of the site, as opposed to being supplied through user content. Unless it is clear that the page was written by someone with the authority to make that kind of invitation, attacking the site would definitely be hostile.

Another excellent point raised in this answer was that some companies actively invite finding bugs in their web sites and products, provided you follow a number of guidelines. More on this can be found in the answer itself.

Finally, Rory touched on what many people may forget – although the website may invite attempted break-ins, it may actually be illegal where you are even to make the attempt.

Our next answer was provided by Security moderator, user and blogger Rory Alsop. According to Rory, one problem when dealing with this kind of issue is that no test cases have yet passed through the courts – so until they do, no precedent is likely to be established. Rory also raised the criminal activity point again: always understand that the law of your country/jurisdiction still applies.

Next up, Rory explained that during a penetration test, a contract established for the work may include rules about what should happen, how far a test may go, who should be notified if a vulnerability is found, etc. Even with this safeguard, there is still the potential for legal action should something break. Rory advised logging absolutely everything that goes on, so as to have proof of actions.

When applied to the website scenario, Rory pointed out that in this case there is no signed contract establishing this understanding, just the implication of one, to which neither party is legally bound.

That is all for answers on this question. I tend to miss questions on ethics on the main site, so writing them up is actually quite interesting. As such, I am going to summarise the key points below:

  1. The rules of engagement are not well established. Given a “feel free to hack this” message, we have no idea to what extent that is actually what they mean. By contrast, penetration testing is usually better scoped.
  2. The author of the page might not have the authority to make such an invitation. As I was reading this, I did wonder – if this is a shared host, the administrator of the site is a different person from the company who owns and maintains the box. So even though the site author can put up this message, they’re not actually entitled to make that call (and it probably violates the ToS on their hosting package).
  3. It may be illegal to engage in the act of attempting to break in, whether or not the site in question has given you permission.
  4. Some sites actively encourage hunting for bugs.

Security concerns when hacking open-invitation websites:

Iszi raised the following worries in his question:

  • The site could be a honeypot, run by government or other entities looking to gather information about active (or would-be) hackers.
  • The site could be set up by a black-hat as a honeypot to gather a list of interesting, hackable amateurs to target.
  • A third-party black-hat could potentially access the site’s logs and farm them for data about interesting, hackable amateurs to target.

Lucas Kauffman confirmed that he had a school project where he faked an open sendmail relay and

just piped all the incoming emails to a python script that got all the destinations out, generating my own spamlist. I think in the end after about 3 weeks I had close to 300.000 different email addresses.
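
As a rough illustration of the kind of harvesting Lucas describes (a sketch of my own, not his actual script; a real relay would take the envelope recipients from the SMTP dialogue rather than the message headers, and the output file name is hypothetical):

    import sys
    from email import message_from_file
    from email.utils import getaddresses

    # Read one message from stdin (as a fake relay might pipe it in) and record
    # the addresses named in its To/Cc headers.
    msg = message_from_file(sys.stdin)
    recipients = getaddresses(msg.get_all("To", []) + msg.get_all("Cc", []))

    with open("spamlist.txt", "a") as out:  # hypothetical output file
        for _name, addr in recipients:
            if addr:
                out.write(addr + "\n")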

Rory Alsop focused on the reputational and professional risks, as the host of the site will be able to see everything you did in their logs… do you ever mistype commands, use dir instead of ls, or accidentally stray outside the scope of the test? This will be recorded and could negatively impact you.

Think about what you are divulging when hacking a website….

  • Your methodology
  • Your tools
  • Your mistakes?
  • etc

Finally – Yoav also asked a question focusing on the other side,

What security measures should I have in place before inviting people to hack my website?

Ttfd’s answer went into some considerable detail on the practical logistics – how you think about the problem is probably as important as actually implementing security in this situation:

  1. Do you have the money to do that?
  2. Do you have the resources? (servers, security teams and etc)
  3. How far can you limit the damages a hacker can make to your system? I.E. If a hacker hacks into your server what access will he have ? Will he be able to connect to your database and retrieve/store/update data? Is your data encrypted ? Will he be able to decrypt it? (and so on)
  4. Can your security team find how a hacker exploited your system?
  5. Does your security team have the skills to fix problems that may occur ?
  6. Probably many more questions that you need to ask and answer before you decide.

M15K gave a summarised answer:

I can’t imagine very many positive scenarios in declaring open season on you’re front door will result in something useful. But let’s say you do, and you do get some positive feedback. Are you and your security team in a position to remediate those vulnerabilities?

Interesting stuff! All in all, this looks like a relatively risky business on both sides, so core to the decision must be a full understanding of the risks, and how these match to your risk appetite. If you are hosting such a site, you may get some valuable information into attack techniques, but you need to protect yourself from an escalation from the attack environment to your own systems. If you are testing the site, think about the risks you may be facing, and plan accordingly.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com