QoTW #44: How to block or detect user setting up their own personal wifi AP in our LAN?

2013-03-22

Nominated by Terry Chia, this question by User15580 should be of interest to anyone managing the security of networks.

The answers show the variety of aspects security covers in this sort of scenario:

Daniel posted the top answer, and it has nothing to do with IT, but instead focuses on the cause – if a user has installed an access point it is because they need something the existing network is not providing. This is always worth considering:

Discuss with the users what they are trying to accomplish. Perhaps create an official wifi network ( use all the security methods you wish – it will be ‘yours’ ). Or, better, two – Guest and Corporate WAPs.

Polynomial and Thomas Pornin also highlighted the fact this is a user/managerial problem, rather than a technical one.

Remember Immutable Law of Security #10: Technology is not a panacea. Whilst technology can do some amazing things, it can’t enforce user behaviour. You have a user that is bringing undue risk to the organisation, and that risk needs to be dealt with. The solution to your problem is _policy_, not technology. Set up a security policy that details explicitly disallowed behaviours, and have your users sign it. If they violate that policy, you can go to your superiors with evidence of the violation and a penalty can be enforced.

As long as the users have physical access to the machines they use and their USB ports (that’s hard to avoid, unless you pour glue in all the USB ports…) and the installed operating systems allow it (then again, hard to avoid if users are “administrators” on their systems, in particular in BYOD contexts), then the users can set up custom access points which give access to, at least, their machine.

Rory McCune provided some information on the types of solutions generally used in large corporates, where they work well, including NAC and port lockdowns. Lie Ryan‘s suggestions are more appropriate for smaller networks.

k1DBLITZ also focuses on the use of technical solutions in addition to policy, JasperWallace recommends looking for and blocking unapproved MAC addresses, and further answers discuss wireless scanning and scripted checks.
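Several of those scripted checks boil down to comparing what is actually on the network against a known-good inventory. As a minimal sketch of that idea (my own illustration, not code from the answers – the whitelist file name is made up), a script run on a Linux box on the LAN could flag MAC addresses that are not on an approved list:

#!/usr/bin/env python3
# Minimal sketch: flag MAC addresses on the LAN that are not on an approved list.
# Reads the kernel ARP cache (/proc/net/arp on Linux); "approved_macs.txt" is a
# hypothetical one-MAC-per-line whitelist maintained by the network team.

def load_approved(path="approved_macs.txt"):
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def current_arp_entries(path="/proc/net/arp"):
    entries = []
    with open(path) as f:
        next(f)                      # skip the header line
        for line in f:
            fields = line.split()
            if len(fields) >= 4:
                ip, mac = fields[0], fields[3].lower()
                if mac != "00:00:00:00:00:00":   # ignore incomplete entries
                    entries.append((ip, mac))
    return entries

if __name__ == "__main__":
    approved = load_approved()
    for ip, mac in current_arp_entries():
        if mac not in approved:
            print(f"Unapproved device: {ip} ({mac})")

This only sees hosts the gateway has recently talked to, so in practice it would be combined with wireless scanning and switch-port data, but it shows how little code a basic scripted check needs.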

Overall, it would seem that a mixture of technical and management controls are required – the balance depending on your specific environment.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #43: Teaching a loved one about secure coding practices

2013-03-01

Today’s blog post is on a question posted on Security Stackexchange last year titled Teaching a loved one about secure coding practices.

Ignoring the obvious innuendos in the comments, I think this is an excellent question. While the question is far longer, this is the gist of it.

As ITSec pros, we talk about infusing the development cycle with secure coding practices and design, but how does that apply to a brand-new learner? A new programmer is at the start of their own ‘lifelong development cycle’, as it were. At what point is it appropriate, from an educational perspective, to switch from the mindset of ‘getting it to work’ to ‘it absolutely must be secure’? At what point should a student ‘fail’ an assignment because of a security issue?

As a student in an infosec diploma course, I have rather strong opinions on this matter. Let’s start with a personal anecdote. I started learning programming on my own out of personal interest. My first exposure to “real” programming was through PHP (I know… shudders). Do a quick Google search using the terms “php tutorial”. Go on. The very first link points towards w3schools.com.

A quick browse through the site looks good. Nice, simple, easy to follow tutorials on the basics of PHP and HTML. Wait, are they really teaching unparameterized queries? In 2013? Really? I’d like to point you to this website. In particular, this quote.

W3Schools.com is not affiliated with the W3C in any way. Members of the W3C have asked W3Schools to explicitly disavow any connection in the past, and they have refused to do so. W3Schools frequently publishes inaccurate or misleading content. We have collected several examples illustrating this problem below.

This is an obvious problem. A website at the top of Google’s search results, targeted at new programmers, providing misleading information? What could go wrong, right?

Moving on to the actual question.

User Everett stated this in his answer.

The problem I see, is that secure programming is taught as an add on. Best practices should be taught from the beginning (including security). The lie people are taught is that practice makes perfect. The truth is practice makes permanent. So if you are doing it wrong, you have to unlearn what you have learned. That is a bassackwards approach. I would say that secure coding practices should be taught from day one. There’s no reason to learn how to do it, and then learn how to do it securely. It’s a waste of time and money…

I disagree with his opinion. I think user KeithS provides a very good point.

It’s great to say “Secure coding practices should be taught from day one”, and very hard to demonstrate how that day-one “Hello World” program may be vulnerable, especially when “what is a computer program” is a new concept for the class.

I agree. Many of my peers who entered the diploma course without any prior programming experience had a tough time even wrapping their heads around basic concepts like looping and conditional statements. Introducing more complex security topics at this point in their education would likely do more harm than good.

This is the answer I provided to the question.

I would say a great way to learn is for her to break the applications she has already written. Assuming she is writing web applications, point her towards the OWASP Top 10. Have her see if she can find any of those flaws in her own code. There is no better way to learn about security concepts than actually seeing it happen on your own code. Once a flaw has been found, have her rewrite the application to fix the flaw. Doing so will allow her to appreciate the effect of things like sanitation and validation of user inputs and parameterized queries. Take incremental steps. I wouldn’t jump straight into designing a new application with security in mind before truly understanding what type of codes result in security flaws.

With 37 upvotes and the answer being accepted, it is clear that the community agrees with me.
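To make the point about parameterized queries concrete, here is a minimal sketch (my own illustration, not code from the question or answers) using Python’s built-in sqlite3 module; the table and data are invented:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

user_input = "' OR '1'='1"   # classic injection payload

# Vulnerable: user input is concatenated straight into the SQL string,
# so the payload rewrites the query and matches every row.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returned:", rows)

# Parameterized: the driver passes the value separately from the SQL text,
# so the payload is treated as a literal string and matches nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returned:", rows)

Seeing the first query return every row while the second returns nothing is exactly the kind of “break your own code” exercise the answer describes.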

Conclusion

I think the best approach to teaching secure programming is an iterative one. Start off the students with writing simple applications. Have the students go back and look at their code and see how it can be broken. Refer them to good resources like the OWASP Top 10 list. With a little critical thinking, the students should be able to start figuring out what went wrong in their code and how to fix it.

Like user AviD said,

Students that do not practice critical thinking shouldn’t really be learning programming….

This post is a cross-post from my blog at http://www.infosecstudent.com/2013/02/teaching-secure-programming-how-to-do-it-right/

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Filed under Question of the Week

QoTW #42: Would publishing a network diagram make the network less secure?

2013-01-25

I chose this week’s Question of the Week, saber tabatabaee yazdi‘s “Would publishing a network diagram make the network less secure?” because this is a point which seems to be often misunderstood.

Saber asked this question because he had come across various websites designed to let people share their network diagrams and designs in order that others can comment on them and provide guidance and he wondered what the risks would be from this.

As an example, one diagram from www.ratemynetworkdiagram.com provides IP addresses, host names and even descriptions.

AJ Henderson provided the very valid comment that security through obscurity is not security, but admits that any network will have some weaknesses, and avoiding giving this information to a potential attacker is probably advised.

My answer is taken from the experience of managing many hundreds of penetration tests. My take on it is:

having a map helps me target my attack, avoiding possible sensors, honeypots etc and aiming at high value targets or sources of information. This can speed up an attack immensely, reducing the defender’s chance of preventing it.

But the value from these sites is that you can have obvious mistakes pointed out to you – peer review can be a very valuable thing. So how can you do that safely?

To reduce risk, some steps you can take are:
  • remove addresses, function titles etc
  • only include sections of the network
  • post under an anonymous profile
  • include fake network sections

An attacker will still get information, but it hopefully won’t be enough to let them navigate your entire network.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

A Brief Introduction to auditd

2013-01-18

The auditd subsystem is an access monitoring and accounting system for Linux, developed and maintained by Red Hat. It was designed to integrate pretty tightly with the kernel and watch for interesting system calls. Additionally, likely because of this level of integration and detailed logging, it is used as the logger for SELinux.

All in all, it is a pretty fantastic tool for monitoring what’s happening on your system. Since it operates at the kernel level, it gives us a hook into any system operation we want. We have the option to write a log any time a particular system call happens, whether that be unlink or getpid. We can monitor access to any file, all network traffic, really anything we want. The level of detail is pretty phenomenal and, since it operates at such a low level, the granularity of information is incredibly useful.

The biggest downfall is actually a result of the design that makes it so handy. This is itself a logging system and as a result does not use syslog. The good thing here is that it doesn’t have to rely on anything external to operate, so a typo in your (syslog|rsyslog|syslog-ng).conf file won’t result in losing your system audit logs. As a result you’ll have to manage all the audit logging using the auditd suite of tools. This means any kind of log collection, organization, or archiving may not work with these files, including remote logging. As an aside, auditd does have provisions for remote logging, however they are not as trivial as we’ve come to expect from syslog.

Thanks to the level of integration that it provides, your auditd configurations can be quite complex, but I’ve found that there are really only two options you need to know.

  1. -a exit,always -S <syscall>
  2. -w <filename>

The first of these generates a log whenever the listed syscall exits; the second, whenever the watched file is modified. Seems pretty easy, right? It certainly can be, but it does require some investigation into what system calls interest you, particularly if you’re not familiar with OS programming or POSIX. Fortunately for us there are some standards that give us some guidance on what to look out for. Let’s take, for example, the Center for Internet Security Red Hat Enterprise Linux 6 Benchmark. The relevant section is “5.2 Configure System Accounting (auditd)” starting on page 99. There are a large number of interesting examples listed, but for our purposes we’ll whittle those down to a more minimal set and assume your /etc/audit/audit.rules looks like this.

# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.
# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 1024

-a always,exit -S adjtimex -S settimeofday -S stime -k time-change
-a always,exit -S clock_settime -k time-change
-a always,exit -S sethostname -S setdomainname -k system-locale

-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/shadow -p wa -k identity
-w /etc/sudoers -p wa -k identity
-w /var/run/utmp -p wa -k session
-w /var/log/wtmp -p wa -k session
-w /var/log/btmp -p wa -k session
-w /etc/selinux/ -p wa -k MAC-policy

# Disable adding any additional rules - note that adding new rules will require a reboot
-e 2

Based on our earlier discussion we should be able to see that we generate a log message every time any of the following system calls exit: adjtimex, settimeofday, stime, clock_settime, sethostname, setdomainname. This will let us know whenever the time gets changed or if the host or domain name of the system get changed.

We’re also watching a few files. The first four (group, passwd, shadow, sudoers) will let us know whenever users get added, modified, or have their privileges changed. The next three files (utmp, wtmp, btmp) store the current login state of each user, login/logout history, and failed login attempts respectively. So monitoring these will let us know any time an account is used or a login fails – or, more specifically, whenever these files get changed, which includes malicious covering of tracks. Lastly, we’re watching the directory ‘/etc/selinux/’. Directories are a special case in that this will cause the system to recursively monitor the files in that directory. There is a special caveat that you cannot watch ‘/’.

When watching files we also added the option ‘-p wa’. This tells auditd to only watch for (w)rites or (a)ttribute changes. It should be noted that for write (and read for that matter) we aren’t actually logging on those system calls. Instead we’re logging on ‘open’ if the appropriate flags are set.

It should also be said that the logs are rather…complete. As an example I added the system call rule for sethostname to a Fedora 17 system, with audit version 2.2.1. This is the resultant log from running “hostname audit-test.home.private” as root.

type=SYSCALL msg=audit(1358306046.744:260): arch=c000003e syscall=170 success=yes exit=0 a0=2025010 a1=17 a2=7 a3=18 items=0 ppid=23922 pid=26742 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts4 ses=16 comm="hostname" exe="/usr/bin/hostname" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="system-locale"
There are gobs of fields listed; however, the ones that interest me the most are the various field names containing the letters “id”, “exe”, and that ugly string of numbers in the first parens. The first bit, 1358306046.744, is the timestamp of the event in epoch time. The exe field contains the full path to the binary that was executed. Useful, since we know what was run, but it does not contain the full command line including arguments. Not ideal.
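As a small aside (my own sketch, not part of the auditd tooling), the key=value layout and the epoch timestamp make these records straightforward to pull apart, for example in Python; the record below is an abridged copy of the log line above:

import re
from datetime import datetime

record = ('type=SYSCALL msg=audit(1358306046.744:260): arch=c000003e syscall=170 '
          'success=yes exit=0 auid=1000 uid=0 euid=0 comm="hostname" '
          'exe="/usr/bin/hostname" key="system-locale"')

# Pull the epoch timestamp and event serial out of msg=audit(<epoch>:<serial>):
ts, serial = re.search(r'audit\((\d+\.\d+):(\d+)\)', record).groups()
print(datetime.fromtimestamp(float(ts)), "event", serial)

# Everything else is key=value; values may be quoted.
fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', record))
print(fields["auid"], fields["euid"], fields["exe"].strip('"'))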

Next we see that the command was run by root, since the euid is 0. Interestingly, the field auid (called audit uid) contains 1000, which is the uid of my regular user account on that host. The auid field actually contains the user id of the original logged-in user for this login session. This means that even though I used “su -” to gain a root shell, the auditing subsystem still knows who I am. Using su to gain a root shell has always been the bane of account auditing, but the auditd system records enough information to usefully identify a user. It does not make up for the lack of command line arguments, but it certainly makes me feel better about it.

These examples, while handy, are also only the tip of the iceberg. One would be hard pressed to find a way to get more detailed audit logging than is available here. To help make our way down the rabbit hole of auditd let’s turn this into a series. We’ll collect ideas for use cases and work up an audit config to meet the requirements, much like what I ended up doing on this security.stackexchange.com answer.

If this sounds like fun let me know in the comments and I’ll work up a way to collect the information. Until then…Happy Auditing!

There will be future auditd posts, so check back regularly on the auditd tag.

 

Filed under Configuration

Securi-Tay 2 Conference

2013-01-17

I spent January 16th up in Dundee, at the University of Abertay, for Securi-Tay 2. It was a very well run conference – it was organised by students on the Ethical Hacking and Countermeasures course, but was better organised than some professional conferences I have been to.

I saw some excellent speakers, and gave a talk on career planning in information security, so mine was by far the least technical talk there.

Highlights for me:

  • Rory McCune gave an excellent talk on automation of security testing, not only as a standard practice to make life easier, but also to help consistency and standards in testing.

 

  • Graham Sutherland’s talk on attacking office hardware ranged from simple and relatively harmless, to pretty hardcore hacking via chip removal and analysis. Excellent fun, but sadly there was no party hat…

  • Nick Walker’s talk on Android Security Assessments, while slightly too technical for me, was very interesting, and reminded me to pop Cyanogenmod on my Galaxy S3 this weekend.
  • The “Rory track” – of the two lecture theatres, one had 3 Rorys presenting, which just goes to confirm one of the Memes of Meta…

As we had 4 members of Security Stack Exchange presenting, Stack Exchange managed to supply me with a few T-shirts, pens and stickers, so quite a few speakers presented their talks wearing them, which was nice. I also gave swag out for good questions and interesting discussion.
Once the videos are up online I will add links here…

And the good folks at Securi-Tay kindly donated this bright red t-shirt to my con swag collection, so I went home even happier!

Filed under Community

QoTW #41: Why do we lock our computers?

2012-11-30

Iszi chose this week’s question of the week, Tom Marthenal‘s “Why do we lock our computers?” – as Tom puts it:

It’s common knowledge that if somebody has physical access to your machine they can do whatever they want with it, so what is the point?

This one attracted a lot of views, as it is a simple question of interest to everyone.

Both Bruce Ediger and Polynomial answered with the core reason – it removes the risk from the casual attacker while costing the user next to nothing! This is an essential factor in cost/usability tradeoffs for security. From Bruce:

The value of locking is somewhat larger than the price of locking it. Sort of like how in good neighborhoods, you don’t need to lock your front door. In most neighborhoods, you do lock your front door, but anyone with a hammer, a large rock or a brick could get in through the windows.

and from Polynomial:

An attacker with a short window of opportunity (e.g. whilst you’re out getting coffee) must be prevented at minimum cost to you as a user, in such a way that makes it non-trivial to bypass under tight time constraints.

Kaz pointed out another essential point, traceability:

If you don’t lock, it is easy for someone to poke around inside your session in such a way that you will not notice it when you return to your machine.

And zzzzBov added this in a comment:

…few bystanders would question someone walking up to a house and entering through the front door. The assumption is that the person entering it has a reason to. If a bystander watches someone break into a window, they’re much more likely to call the authorities. This is analogous with sitting down at a computer that’s unlocked, vs physically hacking into the system after crawling under a desk.

It removes a large percentage of possible attacks – those from your co-workers wanting to mess with your stuff – thanks enedene.

So – protect yourself from co-workers, casual snooping and pilfering and other mischief by simply locking your machine every time you leave your desk!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #40: What’s the impact of disclosing the front-face of a credit or debit card?

2012-11-23

“What is the impact of disclosing the front face of a credit card?” and “How does Amazon bill me without the CVC/CVV/CVV2?” are two questions which worry a lot of people, especially those who are aware of the security risks of disclosing information, but who don’t fully understand them.

Rory McCune‘s question was inspired by a number of occasions where someone was called out for disclosing the front of their credit card – and he wondered what the likely impact of disclosing this information could be, as the front of the card gives the card PAN (16-digit number), start date, expiry date and cardholder name. For debit cards it may also give the cardholder’s account number and sort code (though that varies by region).

TC1 asked how Amazon and others can bill you without the CVV (the security code on the back of the card).

atdre‘s answer on that second question states that for Amazon:

The only thing necessary to make a purchase is the card number, whether in number form or magnetic. You don’t even need the expiration date.

Ron Robinson also provides this answer:

Amazon pays a slightly higher rate to accept your payment without the CVV, but the CVV is not strictly required to present a transaction – everybody uses CVV because they get a lower rate if it is present (less risk, less cost).

So there is one rule for the Amazons out there and another for the rest of us – which is good, as it reduces the risk of card fraud for us, at least for online transactions.

So what about physical transactions – if I have a photo of the front of a credit card and use it to create a fake card, is that enough to commit fraud?

From Polynomial‘s answer:

On most EFTPOS systems, it’s possible to manually enter the card details. When a field is not present, the operator simply presses enter to skip, which is common with cards that don’t carry a start date. On these systems, it is trivial to charge a card without the CVV. When I worked in retail, we would frequently do this when the chip on a card wasn’t working and the CVV had rubbed off. In such cases, all that was needed was the card number and expiry date, with a signature on the receipt for verification.

So if a fraudster could fake a card it could be accepted in retail establishments, especially in countries that don’t yet use Chip and Pin.

Additionally, bushibytes pointed out the social engineering possibilities:

As a somewhat current example, see how Mat Honan got hacked last summer: http://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-honan-hacking/all/ In his case, Apple only required the last digits for his credit card (which his attacker obtained from Amazon) in order to give up the account. It stands to reason that other vendors may be duped if an attacker were to provide a full credit card number including expiration dates.

In summary, there is a very real risk of not only financial fraud, but also social engineering, from the public disclosure of the front of your credit card. Online, the simplest fraud is through the big players like Amazon, who don’t need the CVV, and in the real world an attacker who can forge a card with just an image of the front of your card is likely to be able to use that card.

So take care of it – don’t divulge the number unless you have to!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #39: Why would a virus writer bother to check to see if a machine is infected before infecting it?

2012-10-26

Terry Chia proposed Swaroop’s question from 2 October 2012: Why would a virus writer bother to check to see if a machine is infected before infecting it?

Swaroop was wondering if this would be an attempt to use files that are already present instead of transferring another copy to the machine, but actually it boils down to two main reasons – the malware writer wants the computer to remain stable for as long as possible, and there is competition between virus writers for control over your PC.

Polynomial wrote:

The less resources used and the less disturbance caused, the less likely the user is to notice something is wrong. Once a user notices something isn’t working properly, they’ll try to fix it, which might result in the malware being removed. As such, it’s better for the malware creator to prevent these problems and remain covert.

Thomas Pornin linked to the original internet worm, which did prevent networks and systems from working because it didn’t carry out such checks, and both Celeritas and OtisBoxcar supported Polynomial’s top answer with theirs.

WernerCD wrote:

Preemptive removal of competitors products, as well as verification of no prior version of my product, would help keep the system mine. As mentioned, the goal would be to keep the CPU “mine”. That means lay low and don’t attract notice.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Filed under Question of the Week

QoTW #38: What is SHA-3 – and why did we change it?

2012-10-12

Lucas Kauffman selected this week’s Question of the Week: What is SHA3 and why did we change it? 

No doubt if you are at least a little bit curious about security, you’ll have heard of AES, the advanced encryption standard. Way back in 1997, when winters were really hard, our modems froze as we used them and Windows 98 had yet to appear, NIST saw the need to replace the then mainstream Data Encryption Standard with something resistant to the advances in cryptography that had occurred since its inception. So, NIST announced a competition and invited interested parties to submit algorithms matching the desired specification – the AES Process was underway.

A number of algorithms with varying designs were submitted for the process, and three rounds were held, with comments, cryptanalysis and feedback submitted at each stage. Between rounds, designers could tweak their algorithms if needed to address minor concerns; clearly broken algorithms did not progress. This was somewhat of a first for the crypto community – after the export restrictions and the so called “crypto wars” of the 90s, open cryptanalysis of published algorithms was novel, and it worked. We ended up with AES (and some of you may also have used Serpent, or Twofish) as a result.

Now, onto hashing. Way back in 1996, discussions were underway in the cryptographic community on the possibility of finding a collision within MD5. In practice, MD5 started to be commonly exploited in 2005 to create fake certificate authorities. More recently, the FLAME malware used MD5 collisions to bypass Windows signature restrictions. Indeed, we covered this attack right here on the security blog.

The need for new hash functions has therefore been known for some time. To replace MD5, SHA-1 was released. However, as with its predecessor, cryptanalysis began to reveal that its collision resistance required a less-than-brute-force search. Given that this eventually yields practical exploits that undermine cryptographic systems, a hash standard is needed that is resistant to finding collisions.

As of 2001, we have also had available to us SHA-2, a family of functions that as yet has survived cryptanalysis. However, SHA-2 is similar in design to its predecessor, SHA-1, and one might deduce that similar weaknesses may hold.

So, in response and in a similar vein to the AES process, NIST launched the SHA3 competition in 2007, in their words, in response to recent improvements in cryptanalysis of hash functions. Over the past few years, various algorithms have been analyzed and the number of candidates reduced, much like a reality TV show (perhaps without the tears, though). The final round algorithms essentially became the candidates for SHA3.

The big event this year is that Keccak has been announced as the SHA-3 hash standard. Before we go too much further, we should clarify some parts of the NIST process. The round an algorithm has reached determines the amount of cryptanalysis it will have received – the longer a function stays in the competition, the more analysis it faces. The report of round two candidates does not reveal any suggestion of breakage; however, NIST has selected its final round candidates based on a combination of performance factors and safety margins. Respected cryptographer Bruce Schneier even suggested that perhaps NIST should consider adopting several of the finalist functions as suitable.

That’s the background, so I am sure you are wondering: how does this affect me? Well, here’s what you should take into consideration:

  • MD5 is broken. You should not use it; it has been used in practical exploits in the wild, if reports are to be believed – and even if they are not, there are alternatives.
  • SHA-1 is shown to be theoretically weaker than expected. It is possible it may become practical to exploit it. As such, it would be prudent to migrate to a better hash function.
  • In spite of concerns, the family of SHA-2 functions has thus far survived cryptanalysis. These are fine for current usage.
  • Keccak and selected other SHA-3 finalists will likely become available in mainstream cryptographic libraries soon. SHA-3 is approved by NIST, so it is fine for current usage.
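If you want to see where your own code stands, a quick, hedged illustration (my own sketch, not from the question): Python’s hashlib module exposes MD5, SHA-1 and the SHA-2 family, and includes SHA-3 from Python 3.6 onwards.

import hashlib

data = b"Hello World"

# Legacy and current hash functions side by side.
print("MD5:     ", hashlib.md5(data).hexdigest())       # broken - avoid
print("SHA-1:   ", hashlib.sha1(data).hexdigest())      # weakened - migrate away
print("SHA-256: ", hashlib.sha256(data).hexdigest())    # SHA-2 family, fine today
print("SHA3-256:", hashlib.sha3_256(data).hexdigest())  # Keccak-based SHA-3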

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #37: How does SSL work ?

2012-10-05

In this week’s question, we will talk about SSL. This question was asked by @Polynomial, who noticed that our site did not yet have a generic question on how SSL works. There were already some questions on the concept of SSL, but nothing really detailed.

Three answers were given, one by @Luc, and two by myself (because I got really verbose and there is a size limit on answers). The three answers concentrated on distinct aspects of SSL; together, they can explain why SSL works: SSL appears to be decently secure and we can see how this is achieved.

My first answer is a painfully long description of the detailed protocol as it appears on the wire. I wrote it as an introduction to the intricacies of the protocol; what information it contains must be known if you want to understand the details of the cryptographic attacks which have been tried on SSL. This is not much more than RFC-reading, but I still made an effort to merge four RFCs (for the four protocol versions: SSLv3, TLS 1.0, 1.1 and 1.2) into one text which should be readable linearly. If you plan on implementing your own SSL client or server (a very instructive exercise, which I recommend for its pedagogical virtues), then I hope that this answer will be a useful reading guide for the actual standards.

What the protocol description shows is that at one point, during the initial steps of the SSL connection (the “handshake”), the server sends its “certificate” to the client (actually, a certificate chain), and then, a few steps later, the client appears to have gained some knowledge of the server’s public key, with which asymmetric cryptography is then performed. The SSL/TLS protocol handles these certificates as opaque blobs. What usually happens is that the client decodes the blobs as X.509 certificates and validates them with regards to a set of known trust anchors. The validation yields the server’s public key, with some guarantee that it really is the key owned by the intended server.
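As a practical aside (my own sketch, not from the answers), most SSL/TLS libraries will perform this handshake and certificate validation for you and let you inspect the result; here is what that looks like with Python’s standard ssl module, using security.stackexchange.com purely as an example host:

import socket, ssl

host = "security.stackexchange.com"   # example target
ctx = ssl.create_default_context()    # uses the system's trust anchors

with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("protocol:", tls.version())        # negotiated protocol version
        print("cipher:  ", tls.cipher())         # negotiated cipher suite
        cert = tls.getpeercert()                  # validated leaf certificate
        print("subject: ", cert["subject"])
        print("issuer:  ", cert["issuer"])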

This certificate validation is the first foundation of SSL, as it is used for the Web (i.e. HTTPS). @Luc’s answer contains clear explanations on why certificates are used, and on what the guarantees rely on. Most enlightening is this excerpt:

You have to trust the CA not to make certificates as they please. When organizations like Microsoft, Apple and Mozilla trust a CA though, the CA must have audits; another organization checks on them periodically to make sure everything is still running according to the rules.

So the whole system relies on big companies checking on each other. Some of the trusted CAs are governmental (from various governments) but the most often used are private businesses (e.g. Thawte, Verisign…). An important point to make is that it suffices to subvert or corrupt one CA to get a fake certificate which will be trusted worldwide; so this really is only as robust as the weakest of the hundred or so trusted CAs which browser vendors include by default. Nevertheless, it works quite well (attacks on CAs are rare).

Note that since the certificate parts are quite isolated in the protocol itself (the certificates are just opaque blobs), SSL/TLS can be used without certificate validation in setups where the client “just knows” the server’s public key. This happens a lot in closed environments, such as embedded systems which talk to a mother server. Also, there are a few certificate-less cipher suites, such as the ones which use SRP.

This brings us to the second foundation of SSL: its intricate usage of cryptographic algorithms. Asymmetric encryption (RSA) or key exchange (Diffie-Hellman, or an elliptic curve variant), symmetric encryption with stream or block ciphers, hashing, message authentication codes (HMAC)… the whole paraphernalia is there. Assembling all these primitives into a coherent and secure protocol is not easy at all, and the history of SSL is a rather lengthy sequence of attacks and fixes. My second answer gives details on some of them. What must be remembered is that SSL is state of the art: every attack which has ever been conceived has been tried on SSL, because it is a high-value target. SSL got a lot of exposure, and its survival is testimony to its strength. Sure, it was occasionally harmed, but it was always salvaged. It is rather telling that the recent crop of attacks from Duong and Rizzo (ASP.NET padding oracles, BEAST, CRIME) are actually old attacks which Duong and Rizzo applied; their merit is not in inventing them (they didn’t) but in showing how practical they can be in a Web context (and masterfully did they show it).

From all of this we can list the reasons which make SSL work:

  • The binding between the alleged public key and the intended server is addressed. Granted, it is done with X.509 certificates, which have been designed by the Adversary to drive implementors crazy; but at least the problem is dealt with upfront.
  • The encryption system includes checked integrity, with a decent primitive (HMAC). The encryption uses CBC mode for block ciphers and the MAC is included in the MAC-then-encrypt way: both characteristics are suboptimal, and security with these choices requires special care in the specification (the need for random, unpredictable IVs, the basis of the BEAST attack, fixed in TLS 1.1) and in the implementation (information leak through the padding, used in padding oracle attacks, fixed when Microsoft finally consented to notice the warning which was raised by Vaudenay in 2002).
  • All internal key expansion and checksum tasks are done with a specific function (called “the PRF”) which builds on standard primitives (HMAC with cryptographic hash functions).
  • The client and the server send random values, which are included in all PRF invocations, and protect against replay attacks.
  • The handshake ends with a couple of checksums, which are covered by the encryption+MAC layer, and the checksums are computed over all of the handshake messages (and this is important in defeating a lot of nasty things that an active attacker could otherwise do).
  • Algorithm agility. The cryptographic algorithms (cipher suites) and other features (protocol version, compression) are negotiated between the client and server, which allows for a smooth and gradual transition. This is how current browsers and servers can use AES encryption, which was defined in 2001, several years after SSLv3. It also facilitates recovery from attacks on some features, which can be deactivated on the client and/or the server (e.g. compression, which is used in CRIME).

All these characteristics contribute to the strength of SSL.
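To give a feel for the PRF mentioned in the list above, here is a minimal sketch of the TLS 1.2 construction from RFC 5246 (P_hash built on HMAC-SHA-256). This is my own illustration with dummy inputs, not code from the answers:

import hmac, hashlib

def p_hash(secret: bytes, seed: bytes, length: int) -> bytes:
    """P_SHA256 from RFC 5246: expand (secret, seed) into 'length' bytes."""
    out = b""
    a = seed                                                      # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hashlib.sha256).digest()          # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hashlib.sha256).digest()
    return out[:length]

def prf(secret: bytes, label: bytes, seed: bytes, length: int) -> bytes:
    """TLS 1.2 PRF: PRF(secret, label, seed) = P_SHA256(secret, label + seed)."""
    return p_hash(secret, label + seed, length)

# Example: derive a 48-byte master secret from a premaster secret and the
# client/server random values exchanged in the handshake (dummy values here).
master = prf(b"premaster-secret", b"master secret",
             b"client-random" + b"server-random", 48)
print(master.hex())

The same function, with different labels and seeds, drives both key expansion and the Finished-message checksums, which is part of what keeps the protocol’s internal plumbing consistent.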

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.