
Debunking SQRL

2013-10-17 by Terry Chia. 20 comments

This blog post is inspired by a question asked on Information Security Stack Exchange about two weeks ago. The question seems to have attracted quite a bit of attention (some of it in an attempt to defend the flawed scheme), so I would like to write a more thorough analysis of the scheme.

Let us start by discussing the current standard for authentication: passwords. No one is arguing that passwords are perfect. Far from it. Passwords, at least the passwords in use by most people, are incredibly weak: short, predictable, easy to guess. The problem is made worse by the fact that many applications do not store passwords securely, keeping them instead in plaintext, hashed without a salt, or hashed with weak algorithms.

However, passwords aren’t going anywhere because they do have several advantages that no other authentication scheme provides.

  • Passwords are convenient. They do not require users to carry around additional equipment for authentication.
  • Password authentication is easy to set up. There are countless libraries in every programming language designed to assist you with the task.
  • Most importantly, passwords can be secure if used in the right way. In this case, the “right way” involves users choosing long, unique and randomly generated passwords for each application, and the application storing those passwords hashed with a strong algorithm like bcrypt, scrypt or PBKDF2.

So, let us compare Steve Gibson’s SQRL scheme against the gold standard of password authentication: long, unique and randomly generated passwords hashed with bcrypt, scrypt or PBKDF2, combined with a one-time password generated using the HOTP or TOTP algorithm.
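To make the server side of that concrete, here is a rough sketch in C using OpenSSL. The function names, iteration count and sizes are purely illustrative assumptions, not recommendations; bcrypt or scrypt would slot in analogously.

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <openssl/rand.h>
#include <stdint.h>
#include <string.h>

/* Store only salt + PBKDF2(password), never the password itself. */
int hash_password(const char *password,
                  unsigned char *salt, int salt_len,
                  unsigned char *out, int out_len)
{
    /* a fresh random salt per user defeats precomputed tables */
    if (RAND_bytes(salt, salt_len) != 1)
        return -1;

    /* 100,000 iterations of HMAC-SHA256 as an illustrative work factor */
    return PKCS5_PBKDF2_HMAC(password, (int)strlen(password),
                             salt, salt_len, 100000, EVP_sha256(),
                             out_len, out) == 1 ? 0 : -1;
}

/* The second factor: an RFC 4226 HOTP code from a shared secret and a
   counter. TOTP simply derives the counter from the current time. */
uint32_t hotp(const unsigned char *secret, size_t secret_len,
              uint64_t counter)
{
    unsigned char msg[8], digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = sizeof digest;
    uint32_t code;
    int i, off;

    for (i = 7; i >= 0; i--) {          /* counter as big-endian bytes */
        msg[i] = (unsigned char)(counter & 0xff);
        counter >>= 8;
    }
    HMAC(EVP_sha1(), secret, (int)secret_len,
         msg, sizeof msg, digest, &digest_len);

    off = digest[digest_len - 1] & 0x0f;          /* dynamic truncation */
    code = ((uint32_t)(digest[off] & 0x7f) << 24) |
           ((uint32_t)digest[off + 1] << 16) |
           ((uint32_t)digest[off + 2] << 8) |
           (uint32_t)digest[off + 3];
    return code % 1000000;                        /* six digits */
}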

(I am going to be borrowing heavily from the answers to the original Information Security Stack Exchange question, since many excellent points have already been made.)

Disadvantages

Authentication and identification are combined

No annoying account creation: Suppose you wish to simply comment on a blog posting. Rather than going through the annoying process of “creating an account” to uniquely identify yourself to a new website (which such websites know causes them to lose valuable feedback traffic), you can login using your SQRL identity. If the site hasn’t encountered your SQRL ID before, it might prompt you for a “handle name” to use for your postings. But either way, you immediately have an absolutely secure and unique identity on that system where no one can possibly impersonate you, and any time you ever return, you will be immediately and uniquely known. No account, no usernames or passwords. Nothing to remember or to forget. Your SQRL identity eliminates all of that.

What Gibson describes as an advantage of his scheme I consider a huge weakness. Consider a traditional username/password authentication process. What happens when my password gets compromised? I simply request a password reset through another (hopefully still secure) medium such as my email address. This is impossible with Gibson’s scheme. If my SQRL master key gets compromised, there is absolutely no way for the application to verify my identity through another medium. Sure, the application could associate my SQRL key with a particular email address or similar, but that would defeat one of the supposed advantages of the SQRL scheme in the first place.

Single point of failure

The proposed SQRL scheme derives all application specific keys from a single master key. This essentially provides a single juicy target for attackers to go after. Remember how Mat Honan got his online life ruined due to interconnected accounts? Imagine that but with all your online identities. All your identities are belong to us.
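To see why this is such a juicy target, consider the general shape of such a derivation, sketched here as an HMAC of the site’s domain under the master key. This is not Gibson’s exact construction; derive_site_key and the 32-byte sizes are assumptions for illustration only.

#include <openssl/evp.h>
#include <openssl/hmac.h>
#include <string.h>

/* Per-site key = deterministic function of the one master key and the
   site's domain. Same master key + same domain = same site key, always. */
void derive_site_key(const unsigned char master_key[32],
                     const char *domain,
                     unsigned char site_key[32])
{
    unsigned int len = 32;
    HMAC(EVP_sha256(), master_key, 32,
         (const unsigned char *)domain, strlen(domain),
         site_key, &len);
}

Anyone who obtains the master key can re-derive every site key at will; there is no per-site secret that can be rotated independently.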

So how is this master key to your online life stored? On a smartphone. Not exactly the safest of places, what with all the malware out there targeting iOS and Android devices. You have bigger problems if you are using Windows Phone or Blackberry. 😉

People also have a tendency to misplace their smartphones. Combined with the previous point (no other means of identifying yourself to the application), this is a very good way of locking yourself out of your “house”.

Not exactly very reassuring.

Social Engineering attacks

@tylerl makes a very good point about this in his answer. I don’t see a way to put it more clearly, so I will quote him verbatim.

So, for example, a phishing site can display an authentic login QR code which logs in the attacker instead of the user. Gibson is confident that users will see the name of the bank or store or site on the phone app during authentication and will therefore exercise sufficient vigilance to thwart attacks. History suggests this unlikely, and the more reasonable expectation is that seeing the correct name on the app will falsely reassure the user into thinking that the site is legitimate because it was able to trigger the familiar login request on the phone. The caution already beaten in to the heads of users regarding password security does not necessarily translate to new authentication techniques like this one, which is what makes this app likely inherently less resistant to social engineering.

Conclusion

The three major flaws I have pointed out are definitely enough to kill SQRL’s viability as a replacement for password authentication. The scheme not only fails to offer any benefit over conventional methods, such as storing passwords in a password manager, but also has several major flaws that Gibson fails to address in a satisfactory manner.

This blog post has been cross-posted on http://www.infosecstudent.com/2013/10/debunking-sqrl/

QotW #15: What is the difference between $200 and $1,000+ Firewalls?

2012-01-06 by roryalsop. 0 comments

This week’s question was asked by mdegges – What is the difference between $200 and $1,000+ firewalls?

This sparked some interesting discussion, and made me think quite deeply about it. Like many other IT people, I see enterprise-level firewalls in my day job and run cheap SoHo firewalls on my home network. While it makes sense that there would be cost differences at these two extremes, what are the actual differences in practical terms?

tylerl flagged the two most important differentiating factors for an enterprise, and they aren’t even security features:

  • bandwidth
  • latency

In addition, Paul listed the number of concurrent connections as a third factor. Again, this is not a security feature.

For an organisation with thousands of connections or large data flow this makes perfect sense – you can’t allow the security device to become a bottleneck which hampers your business.

So does this just mean that you are paying for the firewall to be faster?

No – there is more to it than that:

SoHo firewalls tend to be very simple, applying rules based on source or destination address, port and protocol, sometimes with some intelligence around matching responses to requests (see the sketch below). Enterprise-grade firewalls, by contrast, often have deep inspection capability; tylerl gave some good comments around this functionality.
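As a toy illustration of how simple that rule matching is, here is a rough sketch in C. All the structures and field names are hypothetical, and real firewalls also track connection state:

#include <stdbool.h>
#include <stdint.h>

struct rule {
    uint32_t src_ip, src_mask;   /* match a source network */
    uint16_t dst_port;           /* 0 means any port */
    uint8_t  protocol;           /* e.g. 6 = TCP, 17 = UDP */
    bool     allow;
};

struct packet {
    uint32_t src_ip;
    uint16_t dst_port;
    uint8_t  protocol;
};

/* A SoHo-class check: compare a handful of header fields, nothing more.
   Deep inspection would go on to parse the payload itself. */
static bool rule_matches(const struct rule *r, const struct packet *p)
{
    return (p->src_ip & r->src_mask) == (r->src_ip & r->src_mask) &&
           (r->dst_port == 0 || r->dst_port == p->dst_port) &&
           r->protocol == p->protocol;
}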

Paul also described the different frameworks, with home firewalls usually being single-function devices and enterprise firewalls providing multiple features dependent on licence, including IPsec, VPNs, etc.

David also pointed out a final factor that is very important for large organisations: availability. Firewalls at this level need High Availability or clustering capability to minimise downtime.

As a quick summary then, the difference is mostly about speed, with a few extra security capabilities.

If you have more information on other differences, please add an answer of your own over on http://security.stackexchange.com/q/9409/what-is-the-difference-between-200-and-1-000-firewalls

QotW #14: How to secure an environment both physically and technically?

2011-12-16 by ninefingers. 0 comments

So after a slight hiatus, we are back to running Question of the Week posts. This week’s question, chosen by me because we had a tie, was asked by user jwegner: How to secure an environment both physically and technically?

This is an interesting question for anyone working in an environment that processes personal data and cannot afford a data leak. As a very quick summary, jwegner had:

  • no local storage
  • CCTV cameras
  • biometric locks and key-card locks
  • SFTP to transfer data in and out when necessary

However, jwegner was still concerned about a number of issues, including mobile phones, preventing data release when the rules need to be relaxed, and the fact that their external gateway ran both an SFTP server and an FTP server.

Security.se responded. Jeff Ferland has the highest-voted answer. He recommended not allowing any mobile phones inside the secure area at all, as even in offline mode many phones have data ports and cameras, and said that an internet connection for the “red” zone was a no-go. On the subject of achieving no local storage with flash drives, however, Jeff recommended the opposite, citing loss of the drives as a big potential risk factor. He went on to recommend monitoring USB ports as a necessary precaution, possibly using epoxy to fill them – though his answer also mentions that many devices, including keyboards and mice, are now highly reliant on the USB interface.

On the FTP gateway, Jeff recommended looking into access control to ensure internal accounts only had read access, and possibly using ProFTPD as opposed to the standard sftp subsystem. Finally, Jeff added an extra detail: using Deep Freeze to ensure machine configuration cannot persist across reboots.

Rory Alsop echoed many of these sentiments in his answer. Over and above Jeff, Rory recommended banning mobiles, with very strict consequences for their use inside the secure area as a deterrent, as well as enforcing searches on entry/exit. In addition, he recommended not using FTP at all. Rory also echoed Jeff’s Deep Freeze suggestion, recommending read-only file systems. Finally, his answer mentioned two key points:

  • Internal risks of using FTP – it may be worth moving over to SFTP to ensure internal traffic is harder to sniff.
  • Staff vetting.

From the comments, an interesting point was raised: jamming cellphone signals is illegal in the US and may be in other jurisdictions, so while detection methods could be used for enforcement, outright blocking may require an in-depth review of the options before proceeding.

So far, these are the only two answers. Our Question of the Week posts aim to highlight potentially interesting questions from the community; if you think you can help answer this one, the link you need is here.

QotW #12: How to counter the statement: “You don’t need (strong) security if you’re not doing anything illegal”?

2011-10-10 by roryalsop. 0 comments

Ian C posted this interesting question, which comes up regularly in conversation with people who don’t deal with security on a daily basis, and seems to be highlighted in the media for (probably) political reasons. The argument is: “surely, as long as you aren’t breaking the law, you shouldn’t need to prevent us having access – just to check, you understand”.

This can be a very emotive subject, and it is one that has been used and abused by various incumbent authorities to impose intrusions on the liberty of citizens, but how can we argue the case against it in a way the average citizen can understand?

Here are some viewpoints already noted – what is your take on this topic?

M’vy made this point from the perspective of a business owner:

Security is not about doing something illegal, it’s about someone else doing something illegal (that will impact you).

If you don’t encrypt your phone calls, someone could know about what all your salesman are doing and can try to steal your clients. If you don’t shred your documents, someone could use all this information to mount a social engineering attack against your firm, to steal R&D data, prototype, designs…

Graham Lee supported this with a simple example:

 Commercial confidential data…could provide competitors with an advantage in the marketplace if the confidentiality is compromised. If that’s still too abstract, then consider the personal impact of being fired for being the person who leaked the trade secrets.

So we can easily see a need for security in a commercial scenario, but why should a non-technical individual worry? From a more personal perspective, Robert David Graham pointed this out:

 As the Miranda Rights say: “anything you say can and will be used against you in a court of law”. Right after the police finish giving you the Miranda rights, they then say “but if you are innocent, why don’t you talk to us?”. This leads to many people getting convicted of crimes because it is indeed used against them in a court of law. This is a great video on YouTube that explains in detail why you should never talk to cops, especially if you are innocent: http://www.youtube.com/watch?v=6wXkI4t7nuc

Tate Hansen’s thought is to ask,

“Do you have anything valuable that you don’t want someone else to have?”

If the answer is Yes then follow up with “Are you doing anything to protect it?”

From there you can suggest ways to protect what is valuable (do threat modeling, attack modeling, etc.).

But the most popular answer by far was from Justice:

You buy a lock and lock your front door if you live in a city, in close proximity to hundreds of thousands of others. There is a reason for that. And it’s the same reason why you lock your Internet front door.

Iszi asked a very closely linked question “Why does one need a high level of privacy/anonymity for legal activities”, which also inspired a range of answers:

From Andrew Russell, these 4 thoughts go a long way to explaining the need for security and privacy:

If we don’t encrypt communication and lock systems then it would be like:

  • Sending letters with transparent envelopes.
  • Living with transparent clothes, buildings and cars.
  • Having a webcam for your bed and in your bathroom.
  • Leaving unlocked cars, homes and bikes.

And finally, from the EFF’s privacy page:

Privacy rights are enshrined in our Constitution for a reason — a thriving democracy requires respect for individuals’ autonomy as well as anonymous speech and association. These rights must be balanced against legitimate concerns like law enforcement, but checks must be put in place to prevent abuse of government powers.

A lot of food for thought…

An accessible overview of browser security

2011-09-20 by ninefingers. 2 comments

So the purpose of this post is to introduce you, an IT-aware but perhaps not software-engineering person, to the various issues surrounding browser security. To do that I’ll quickly go through some theory you need to know first. There is some simple C-like pseudocode and the odd bad joke, but otherwise this isn’t too technical a post, and you should be able to read it without having to dive into 50 or so books or google every other word. At least, that’s the idea.

What is a browser?

Before we answer that, we really need a crash course on kernel internals. It’s not rocket science though; it’s actually very simple. A running application on your system is called a process. Each process, on most systems you’re likely to be running, has its own address space, which is where everything it keeps in memory lives. A process has some code, and the operating system (kernel) gives it a set amount of time to run before giving another process some time. How to do that so everything keeps going really would be a technical blog post in itself, so we’ll just skip it. Suffice to say, it’s being constantly improved.

Of course, sometimes you might want to get more than one thing done at once, which is where threads come in. How threads are implemented differs across OSes: on Windows, each process contains a single initial thread, and if you want to do more things at once, you can add threads to the process. On Linux, threads and processes are essentially the same kind of object, except that threads share memory. That is the important common idea behind a thread, and it is also true of Windows: threads share memory, processes do not.
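A minimal example makes the point concrete, assuming a POSIX system with pthreads. Both threads see the same counter variable; two separate processes would each get their own copy:

#include <pthread.h>
#include <stdio.h>

static int counter = 0;                   /* lives in the shared memory */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *bump(void *arg)
{
    (void)arg;
    pthread_mutex_lock(&lock);            /* threads must coordinate */
    counter++;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %d\n", counter);    /* prints 2: both threads saw it */
    return 0;
}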

How does my process get stuff done then?

So now we’ve got that out of the way, how does your program get stuff done? Well, you can do one of these three things:

  1. If you can do it yourself, as a program, you just do it.
  2. If another library can do it, you might use that.
  3. Or alternatively, you might ask the operating system to do it.

In reality, what might happen is your program asks a library to do it which then asks the operating system to do it. That’s exactly how the C standard library works, for example. The end result, however, and the one we care about, is whether you end up asking the operating system. So you get this:

Operating system ← Program
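To make that concrete, here is a trivial sketch of the middle route on a POSIX system: the program calls the C standard library, and libc in turn issues the operating system’s open, write and close system calls on its behalf.

#include <stdio.h>

int main(void)
{
    FILE *f = fopen("hello.txt", "w");  /* libc asks the OS: open() */
    if (!f)
        return 1;
    fputs("hello\n", f);                /* buffered, eventually: write() */
    fclose(f);                          /* flush, then OS: close() */
    return 0;
}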

Later on, I’ll make the program side of things slightly more complicated, but let’s move on.

How do we do plugins generically?

Firstly, a word on “shared objects”. Windows calls these dynamic link libraries and they’re very powerful, but all we’re concerned with here really is how they end up working. On the simplest possible level, you might have a function like this:

void parse_html(char* input)
{
    call_to_os();
    do_something_with(input);
}

A shared library might implement a function like this:

void decode_and_play_swf(char* data)
{
    call_to_os();
    do_some_decoding_and_playing_of(data);
}

When a shared object loads with your code, the result, in your program’s memory, is this:

void decode_and_play_swf(char* data)
{
    call_to_os();
    do_some_decoding_and_playing_of(data);
}

void parse_html(char* input)
{
    call_to_os();
    do_something_with(input);
}

Told you it was straightforward. Now, making a plugin work is a little more complicated than that. The program code needs to be able to load the shared object, and must expect a standard set of functions to be present. For example, a browser might expect a function like this in a plugin:

int plugin_iscompatible(int version)
{
    /* compatible only if the browser version is within our supported range */
    if ( version >= min_required_version && version <= max_supported_version )
    {
        return 0;   /* compatible */
    }
    else
    {
        return -1;  /* browser too old or too new */
    }
}

Similarly, the app we’re plugging into might offer certain functions that a plugin can call to do things. An example of this would be exactly a browser plugin for drawing custom content to a page – the browser needs to provide the plugin the functionality to do that.
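On a POSIX system, loading such a plugin might look roughly like this sketch. The name plugin_iscompatible matches the hypothetical interface above; real plugin APIs are rather more involved:

#include <dlfcn.h>
#include <stdio.h>

typedef int (*compat_fn)(int);

int load_plugin(const char *path, int browser_version)
{
    void *handle = dlopen(path, RTLD_NOW);   /* map the shared object in */
    if (!handle) {
        fprintf(stderr, "dlopen: %s\n", dlerror());
        return -1;
    }

    /* look up the agreed-upon entry point by name */
    compat_fn iscompatible = (compat_fn)dlsym(handle, "plugin_iscompatible");
    if (!iscompatible || iscompatible(browser_version) != 0) {
        dlclose(handle);
        return -1;   /* missing or incompatible plugin */
    }
    /* from here on, the plugin's code runs inside our address space */
    return 0;
}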

So can all this code do what it likes then?

Well that depends. Basically, when you do this:

Operating system ← Program

the operating system doesn’t just do whatever you asked for. In most modern OSes, that program is also run in the context of a given user. So that program can do exactly what the user can do.

I should also point out that, for the purposes of loading shared libraries, they appear to most reporting tools as the program they are loaded into. After all, they are literally loaded into that program, so when the plugin is doing something, it does it on behalf of the program.

Note also that when you load a plugin, it literally gets joined to the memory space of the program. Yes, this is an attack vector, called DLL injection, and yes, you can do bad things with it, but it is also highly useful.

I read this blog post to know about browsers! What’s all this?

One last point to make. Clearly, browsers run on lots of different platforms. Browsers have to be compiled on each platform, but forcing plugin writers to do that is a bit hostile really. It means many decent plugins might not get written, so browser makers came up with a solution: embed a programming language.

That sounds a lot crazier than it is. Technically, you already have one quite complicated parser in the browser – you’ve got to support this weird HTML thing (without using regex) – and CSS, and Javascript… wait a minute: given we’ve stuck a programming-language parser for Javascript into the browser, why don’t we extend that to support plugins?

This is exactly what Mozilla did. See their introductory guide (and as a side note on how good Stack Overflow really is, here’s a Mozilla developer on SO pointing someone to the docs). Actually in truth it’s a little bit more complicated than I made out. Since Firefox already includes an XML/XHTML/HTML parser, the developers went all out – a lot of the user interface can be extended using XUL, an XML variant for that purpose. Technically, you could write an entire user interface in it.

Just as a quick recap, in order for your XML, XHTML, HTML, Javascript, CSS to get things done, it has to do this:

Operating system ← Browser ← Javascript or something

So, now the browser makes decisions based on what the Javascript asks before it asks the operating system. This has some security implications, which we’ll get to in exactly one more diagram’s time. It’s worth noting that on a broad concept level, languages like Python and Java also work this way:

Operating system ← Interpreter/JVM etc ← Script/Java bytecode

Browsers! Focus, please!

Ok, ok. All of that was to enable you to understand the various levels on which a browser can be attacked – you need to understand the difference in locations and the various concepts for this explanation to work, so hopefully we’re there. So let’s get to it.

So, basically, security issues arise for the most part (ignoring some more complicated ones for now) because of bad input. Bad input gets into a program and hopefully crashes it, because the alternative is that specially crafted input makes it do something it shouldn’t, and that is pretty much always a lot worse. So the real aim of the game is evaluating how bad bad input really is. The browser accepts a lot of input on varying levels, so let’s take them in turn:

  1. The address bar. You might seriously think, wait, that can’t be a problem, can it? But if the browser does not correctly pass the right thing to the “page getting” bit, or the “page getting” bit doesn’t error when it gets bad input, you have a problem, because malicious links anywhere on the web can then break your browser. Believe it or not, in the early days of Internet Explorer this was actually an issue that could lead you to believe you were on a different site from the one you were actually on. See here. These days, this particular attack vector is more of a phishing/spoofing-style problem than browser manufacturers getting it wrong.

  2. Content parsers. By content parsers I mean all the bits that process the page – HTML, XHTML, Javascript, CSS. If there are bugs in these, the best-case scenario is really that the browser crashes. However, you might be able to get the browser to do something it shouldn’t with a specially crafted web page, Javascript or CSS file. Technically speaking, the browser should prevent the Javascript it is executing from doing anything too malicious to your system. That’s the idea. The browser will also prevent said Javascript from doing anything too malicious to other pages. A full discussion of web vulnerabilities like this is a blog post in itself, really, so we won’t cover it here.

  3. Extensions in Javascript/whatever the browser uses. Again, this comes down to whether there are vulnerabilities in the parser or in the enforcement of the rules. Otherwise, an extension of this nature will have more “power” than in-page Javascript, but still won’t be able to do too much “bad stuff”.

  4. Binary formats in the rendering engine. Clearly, a web page is more than just parseable programming-language-like things – it also contains images in a whole variety of complicated formats. A bug in these processors might be exploitable; given they’re written as native functions inside the browser, or as shared libraries the browser uses, the responsibility for access control is at the OS level, and so a renderer that is persuaded to do something bad can do anything the user running the process can do.

  5. Extensions (Firefox plugins)/file format handlers. These enable the browser to do things it can’t do in and of itself, and are often developed externally to the core “browsing” part. Examples include embedding Adobe Reader in web pages and displaying Flash applications. Again, these have decoders or parsers, so bugs in them can be used to persuade the plugin to ask the operating system for what the attacker wants, rather than what it should.

You might wonder whether 4 and 5 really should be subject to the same protections afforded to the Javascript part. The interesting thing is, that is part of the solution, but you can’t really implement them in Javascript: compared to having direct access to memory and the operating system, it would be much slower. Clearly, that’s a problem for video decoding and the really heavy-duty graphical applications Flash is often used for, so plugins are and were the solution.

Ok, ok, tell me, what’s the worst the big bad hacker can do?

Depends. There are two really lucrative possible exploits from bugs of this sort: either arbitrary code execution (do what you want) or drive-by downloading (which is basically: send code that downloads something, then run it). In and of itself, the fact that you’re browsing as a restricted user and not an administrator (right?) means the damage the attacker can do is limited to what your user can do. That could still be fairly critical: infecting all your files with malware, deleting stuff, whatever it is they want. You might then unwittingly pass on the infection.

The real problem, however, is that the attacker can use this downloaded code as a launch platform to attack more important parts. A full discussion of botnets, rootkits and persistent malware really ought to also be a separate blog post, but let’s just say the upshot is your computer could be taking part in attacks on other computers orchestrated by criminal gangs. Serious then.

Oh my unicorns, why haven’t you developers fixed this?

First up, no piece of code is entirely perfect. The problem with large software projects is that they become almost too complicated for one person to understand every part and every change that is going on. So the project is split up with maintainers for each part and coding standards, rules, guidelines, tools you must run and that must pass, tests you must run and must pass etc. In my experience, this improves things, but no matter how much you throw at it in terms of management, you’re still dealing with humans writing programs. So mistakes happen.

However, our increasing reliance on the web has got armies of security people of the friendly, helpful sort trying to fix the problem, including the people who build browsers. I went to great lengths to discuss threads and processes for a good reason: now I can discuss what Google Chrome tries to do.

Ok…

So as I said, the traditional way you think about building an application when you want to do more than one thing at once is to use threads. You can store your data in memory and any thread can get at it reasonably easily (I’m totally ignoring a whole series of problems you can encounter with this, for simplicity, but if you’re a budding programmer, read up on concurrency before trying this). So when you load a page and run a download, both can happen at once, and the operating system makes it look like they’re both happening at the same time.

Any bugs in 2-5, however, can affect this whole process. There’s nothing to stop a plugin waltzing over and modifying the memory of the parser, or totally replacing it, or, well, anything. Also, a crash in a single thread tends to bring the whole program down, so from a stability point of view it can also be a problem. Here’s a nice piece of ASCII art:

|-----------------------------------------------------------------------|
| Process no 3412  Thread/parsing security.blogoverflow.com             |
|  Big pile of     Thread/rendering PNG                                 |
|  shared memory   Thread/running flash plugin - video from youtube     |
|-----------------------------------------------------------------------|
So, all that code is in the same address space, basically. Simple enough.

The Chrome developers looked at this and decided it was a really bad idea. For starters, there’s the stability problem: plugins do crash fairly frequently, especially if you’ve ever used the 64-bit Flash plugin for Linux. That might take the whole process down, or it might cause something else to happen, who knows.

The Chrome model executes sub-processes for each site to be rendered (displayed) and for every plugin in use, like this:

   |--------------------------|       |-----------------------------------|
   | Master chrome            |-------| Rendering process for security....|
   | process. User interface  |       |-----------------------------------|
   | May also use threads     |
   |--------------------------|-------|-----------------------------------|
                |                     | Rendering process for somesite... |
   |--------------------------|       |-----------------------------------|
   | Flash plugin process     |
   |--------------------------|
So now what happens is the master process becomes only the user interface. Child processes actually do the work, but if they get bad input, they die and don’t bring the whole browser down.

Now, this also helps solve the security problem to some extent. Those sub-processes cannot easily alter each other’s memory, so there’s not much for them to exploit. In order to get anything done, they have to use IPC (inter-process communication) to talk to the master process and ask for certain things to happen. The master process can then do some filtering, based on a policy of who should be able to access what, and deny child processes access to the things they can’t have.
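The shape of that idea, reduced to a toy sketch: this is not Chrome’s actual IPC code, and the policy and function names are invented for illustration.

#include <fcntl.h>
#include <stdbool.h>
#include <string.h>

/* Hypothetical policy: renderers may only read from the browser cache. */
static bool policy_allows(const char *path, int flags)
{
    return (flags & O_ACCMODE) == O_RDONLY &&
           strncmp(path, "/var/cache/browser/", 19) == 0;
}

/* Runs in the master process when a child asks, over IPC, for a file. */
int broker_handle_open(const char *path, int flags)
{
    if (!policy_allows(path, flags))
        return -1;               /* denied: the child never touches the OS */
    return open(path, flags);    /* the fd would be passed back over IPC */
}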

But can’t sub-processes just ask the operating system directly? Well, not on Windows, for certain. When the master process creates a child process, it also asks Windows for a new security “level”, if you like (token is the technical term), and creates the process at that level, which is highly restricted. So the child process will actually be denied certain things by the operating system, which is harder to arrange with a single process, where some parts do need unrestricted access.

In defence of the Windows engineers, this functionality (security tokens) has been part of the Windows API for a while; it’s just that developers haven’t really used it. It applies to threads too, although there remains the problem of access to the process’s memory.

Great! I can switch to Chrome and all will be well!

Hold up a second… yes, this is an excellent concept, and hopefully other browser writers will start looking at integrating OS security into their browsers too. However, any solution like this will, under certain conditions, be circumventable; it’s just a matter of finding a scenario. Also, Chrome or any other browser cannot defend you from yourself: some basic rules still apply. Briefly, do not open dodgy content, and do keep your system/antivirus/firewall up to date.

Right, so what’s the best browser?

I can’t answer that, nor will I attempt to. All browser manufacturers know that “security” bugs happen, and all of them release fixes, as do third parties. A discussion of which has the best architecture is not a simple one – we’ve outlined the theory here, but how it is implemented will really determine whether it works. The devil is in the detail, as they say.

So that concludes our high level discussion of the various security issues with browsers, plugins and drive by downloads. However, I would really like to stress there is a whole other area of security which you might broadly call “web application” security, which concerns in part what the browser allows to happen, but also encompasses servers and databases and many other things. We’ve covered a small corner of the various problems today; hopefully some of our resident web people will over time cover some of the interesting things that happen in building complex web sites. Here is a list of all the questions here tagged web-browser.

QotW #5: Defending your website

2011-08-12 by roryalsop. 0 comments

This week’s post came from this question: I just discovered major security flaws in my web store!, but covers a wider scope and includes some resources on Security Stack Exchange on defending your website.

Any application you connect to the Internet will be attacked within minutes of plugging it in, so you need to look very carefully at protecting it appropriately. Guess what – many application owners and developers go about this in entirely the wrong way, building the application first and only thinking about security at the very end, when it is expensive and difficult to do well.

The recommended approach is to build security in from the start – have a look at this post for a discussion of the Secure Development Lifecycle. This means treating security as a core prerequisite, just like performance and functionality, and understanding the risks your new application will bring. Read How do you compare risks from your websites, physical perimeter, staff etc.

To do this you will need developers who write the application securely. Studies show that this is the most cost-effective approach in the long run, as securely developed applications require very little remediation work (they don’t suffer from anywhere near as many security flaws as traditionally developed applications).

So, educating developers is key – even basic awareness can make the difference: What security resources should a white-hat developer follow these days?

Assuming your application has been developed using secure practices and you are ready to go live, see What are the most important security checks for new web applications? Testing your web application, to gain some confidence that the security controls you have implemented are working and effective, is an essential step before allowing connection to the Internet. Penetration testing and associated security assessments are essential checks.

What tools are available to assess the security of a web application?

The globally recognised open standard on web application security is the Open Web Application Security Project (OWASP), which provides guidance on testing, sample vulnerable applications, and a list of the top ten vulnerabilities. The answers to Is there a typical step-by-step A-Z process for testing a Web site for possible exploits? provide more information on approaches, as do the answers to Books about Penetration Testing.

Once your application is live, you can’t just sit back and relax, as new attacks are developed all the time – being aware of what is being attempted is important. Can I detect web app attacks by viewing my Apache log file is Apache-specific, but the answers are broadly relevant for all web servers.

This is obviously a very quick run-through – but reading the linked questions, and following the related questions links on them will give you a lot of background on this topic.