“What is the impact of disclosing the front face of a credit card?” and “How does Amazon bill me without the CVC/CVV/CVV2?” are two questions which worry a lot of people, especially those who are aware of the security risks of disclosing information, but who don’t fully understand them.
Rory McCune’s question was inspired by a number of occasions where someone was called out for disclosing the front of their credit card – he wondered what the likely impact of disclosing this information could be, as the front of the card gives the PAN (16-digit card number), start date, expiry date and cardholder name. For debit cards, it may also show the cardholder’s account number and sort code (this varies by region).
TC1 asked how Amazon and other merchants can bill you without the CVV (the security code on the back of the card).
atdre’s answer to that second question states that, for Amazon:
The only thing necessary to make a purchase is the card number, whether in number form or magnetic. You don’t even need the expiration date.
Ron Robinson also provides this answer:
Amazon pays a slightly higher rate to accept your payment without the CVV, but the CVV is not strictly required to present a transaction – everybody uses CVV because they get a lower rate if it is present (less risk, less cost).
So there is one rule for the Amazons of this world and another for the rest of us – which is good, as it reduces our exposure to card fraud, certainly for online transactions.
So what about physical transactions – if I have a photo of the front of a credit card and use it to create a fake card, is that enough to commit fraud?
From Polynomial’s answer:
On most EFTPOS systems, it’s possible to manually enter the card details. When a field is not present, the operator simply presses enter to skip, which is common with cards that don’t carry a start date. On these systems, it is trivial to charge a card without the CVV. When I worked in retail, we would frequently do this when the chip on a card wasn’t working and the CVV had rubbed off. In such cases, all that was needed was the card number and expiry date, with a signature on the receipt for verification.
So if a fraudster could fake a card, it could be accepted in retail establishments, especially in countries that don’t yet use Chip and PIN.
Additionally, bushibytes pointed out the social engineering possibilities:
As a somewhat current example, see how Mat Honan got hacked last summer: http://www.wired.com/gadgetlab/2012/08/apple-amazon-mat-honan-hacking/all/ In his case, Apple only required the last digits of his credit card (which his attacker obtained from Amazon) in order to give up the account. It stands to reason that other vendors may be duped if an attacker were to provide a full credit card number, including expiration dates.
In summary, there is a very real risk of not only financial fraud, but also social engineering, from the public disclosure of the front of your credit card. Online, the simplest fraud is through the big players like Amazon, who don’t need the CVV, and in the real world an attacker who can forge a card with just an image of the front of your card is likely to be able to use that card.
So take care of it – don’t divulge the number unless you have to!
Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.
Cyril N’s top-scoring answer focuses on a highly valuable response – educating the user to behave better. From his answer, two key points arise:
Some UX specialists say that it’s not a good idea to refuse a password. One of the arguments is the one you provide: “but if you ban them, users will use other weak passwords”, or they will add random chars like 1234 -> 12340, which is nonsensical and will then force the user to go through the “lost my password” process because he can’t remember which chars he added.
Let the user enter the password he wants. This goes against your question, but as I said, if you force your user to enter a password other than one of the 25 known worst passwords, this will result in 1) a bad user experience, 2) a probably-lost password and the whole “lost my password” workflow. What you can do is indicate to your user that this password is weak, or even add more detail by saying it is one of the worst known passwords (with a link to them), that they shouldn’t use it, etc. If you detail this, you’ll incline your users to change their password to a more complex one, because now they know the risk. For those who still use 1234, let them, because there may be a simple reason: I often put a dumb password into a site that requests a login/pass just to see what the site provides.
The only problem with this is called out in a comment by user Polynomial who suggests a hybrid solution:
Reject outright terrible passwords, e.g. “qwerty123”, and warn on passwords that are a derivation of a dictionary word / bad password, e.g. “Qw3rty123” or “drag0n1”
Personally, I like the hybrid solution, because as Iszi points out, most password attacks are conducted offline, where the attacker has a copy of the hash database. In this scenario, dictionary attacks are a very low CPU load, very fast option that is easily automated:
…it is realistic to assume that attackers will target “common” passwords before resorting to brute force. John The Ripper, perhaps one of the most well-known offline password cracking tools, even uses this as a default action. So, if your password is “password” or “12345678”, it is very likely to be cracked in less than a minute.
Dr Jimbob has provided the classic security answer: it depends! The requirements will be very different for an online banking application and for an application whose compromise would only cause minor inconvenience to one end user. Regulatory requirements may define the level of security protection you require. He also points out that:
Very weak passwords (top 1000) can be randomly attacked online by botnets (even if you use captchas/delays after so many incorrect attempts)
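A back-of-envelope model illustrates Dr Jimbob’s point about why lockouts alone don’t neutralise this attack; every figure below is an illustrative assumption, not measured data:

```python
# Rough model of a botnet guessing the most common passwords online,
# despite lockouts. Every figure here is an illustrative assumption.
accounts = 100_000        # accounts reachable via the login endpoint
guesses_per_account = 10  # guesses a botnet can spread out under lockout/captcha limits
top10_coverage = 0.05     # assumed share of users with one of the 10 most common passwords

# Trying the 10 most common passwords against every account compromises,
# in expectation, roughly every account using one of them.
expected_compromised = accounts * top10_coverage
print(f"Expected compromised accounts: {expected_compromised:.0f}")  # 5000
```

Even with aggressive per-account rate limiting, an attacker who spreads a handful of guesses across a large user base harvests thousands of accounts, which is why banning the very top of the worst-password list has outsized value.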
Bangdang also supports disallowing common passwords, and has a final section on the tradeoffs between security and usability, along with the effects of successful compromise, which include fingerpointing and blame.
Tylerl provides some insight from his experience analyzing attack code:
the password ordering is this:
- Try the most common passwords first. Usually there’s a list of between 10 and 500 passwords to try
- Try dictionary passwords second. This often includes variations like substituting “4” where an “A” would be or a “1” where there was the letter “l”, as well as adding numbers to the end.
- Exhaust the password space, starting at “a”, “b”, “c”… “aa”, “ab”, “ac”… etc.
Step 3 is usually omitted, and step 1 is usually attempted on a range of usernames before moving to step 2.
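tylerl’s three-step ordering can be sketched as a candidate generator. The word lists here are tiny illustrative samples, not real cracking dictionaries:

```python
from itertools import product
from string import ascii_lowercase

COMMON = ["123456", "password", "qwerty", "letmein"]   # step 1: most common first
DICTIONARY = ["dragon", "monkey", "shadow"]            # step 2: dictionary base words

def variants(word):
    # Step 2 variations: "4" for "a", "1" for "l", "0" for "o", digits appended.
    swapped = word.replace("a", "4").replace("l", "1").replace("o", "0")
    for w in dict.fromkeys([word, swapped]):   # preserves order, drops duplicates
        yield w
        for d in range(10):
            yield f"{w}{d}"

def candidates(max_brute_len=2):
    yield from COMMON                            # step 1
    for word in DICTIONARY:                      # step 2
        yield from variants(word)
    for n in range(1, max_brute_len + 1):        # step 3: usually never reached
        for combo in product(ascii_lowercase, repeat=n):
            yield "".join(combo)

guesses = list(candidates())
print(guesses[:6])
```

Because the generator is lazy, an attacker pays almost nothing for steps 1 and 2 before the expensive exhaustive search – which is exactly why common and dictionary-derived passwords fall first.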
In general, the answers show just how pervasive the security industry’s key problem is: the trade-off between usability and security. You could add strong security at every layer, but if the user experience isn’t appropriate, it will not work.
This is why, for major role/access improvement projects, we are seeing significant investment in the people side of things – helping projects understand human acceptance criteria, reasons for rejection, and the passive blocking of projects which on the face of it seem perfectly logical.
Question of the Week number 27 is a contentious, hotly debated issue in the software world. The question itself was posed by Security.SE user blunders, who quoted the argument often used in the defence of open source as a business model:
My understanding is that open source systems are commonly believed to be more secure than closed source systems. … A common position against closed source systems is that a lack of awareness is at best a weak security measure; commonly referred to as security through obscurity.
and then we hit the question itself:
Question is, are open source systems on average better for security than closed source systems?
We’ll begin with the top voted answer by SE user Jesper Mortensen, who explained that the whole notion of being able to generally compare open versus closed source systems is a bad one when there are so many other factors involved. To compare two systems you really need to look beyond the licensing model they use, and look at other factors too. I’ll quote Jesper’s list in its entirety:
- Access to source code.
- Very different incentive structures, for-profit versus for fun.
- Very different legal liability situations.
- Different, and wildly varying, team sizes and team skillsets.
Of course, this is by no means a complete treatment of the possible differences.
Jesper also highlighted the importance of comparing pieces of software that solve specific domain issues – not software in general. You have to do this, to even remotely begin to utilise the list above.
Thomas’ answer took a similar line:
… the “opensource implies security” idea is overrated. What is important is the time (and skill) devoted to the tracking and fixing of security issues, and this is mostly orthogonal to the question of openness of the source.
The main thrust of Thomas’ answer was that, actually, maintained software is more secure than unmaintained software. As an example, Thomas cited OpenSSL remote execution bugs that had been left lying in the code tree unfixed for some time – highlighting a possible advantage of closed source systems in that, when developed by companies, the effort and time spent on QA is generally higher than in open source projects.
Thomas’ answer also covers the counterpoint to this – that closed source systems can easily conceal security issues, too, and that having the source allows you to convince yourself of security more easily.
Ori’s answer broke the debate down into five premises:
- The Customization premise
- The License Management premise
- The Open Format premise
- The Many Eyes premise
- The Quick Fix premise
As Ori rightly says, the customization premise means a company can take an open source platform and add an additional set of security controls. Ori quotes NSA’s SELinux as an example of such a project. For companies with the time and money to produce such platforms and make such fixes, this is clearly an advantage for open source systems.
Ori covers the license management and open format premises from a compliance and resilience perspective. Using open source software (and making modifications) carries certain license constraints, and the potential to violate these constraints is clearly a risk to the business. Likewise, for business continuity purposes, the ability to avoid being locked in to a specific platform is a huge win for any company.
I’ve always proposed an amendment to Linus’ Law: “Given enough trained eyeballs, most bugs are relatively shallow”
I explained, through use of a rather intriguing vulnerability introduced into development kernels by a compiler bug, that having the knowledge to detect these issues is critical to security. The source being available does not directly guarantee you have the knowledge to detect such issues.
That’s it for answers. As you can see, none of us took sides in the general “open versus closed” debate, instead pointing out that there are many factors to consider beyond the licence under which source is available. I think the whole set of answers is best summarised by this.josh’s comment on the top voted answer – so I’ll leave you with that:
I agree. What matters most is how many people with knowledge and experience in the security domain actively design, implement, test, and maintain the software. Any project where no-one is looking at security will have significant vulnerabilities, regardless of how many people are on the project.
I prepared this presentation for an ISACA event and realised it is applicable to a much wider audience:
The business benefits of virtualisation are fantastic in terms of cost savings, agility in delivering services and business resiliency. It enables organisations to spend less on hardware assets and use their existing hardware more efficiently by running multiple machines on a single piece of server hardware without significant degradation in performance. Industry and vendor studies indicate up to 50% savings in IT expenditure.
An organisation can also run just the number of servers it needs at any time, with demand-led increases possible at short notice simply by enabling a new virtual machine. This is a further driver for virtualisation today, as data centre power costs are significant to any large organisation, and a reduction in the number of servers needing power and cooling offers a direct saving in bottom-line costs. Studies indicate savings in energy costs through server virtualisation can reach 75% for a typical mid-sized organisation.
What are the risks?
Segregation of systems and data
Security around the standard organisational architecture is quite well understood, with acceptance that there are layers of security based on risk or impact levels, or around the value placed on particular assets – a key end result being separation: keep unauthorised individuals out, restrict an individual’s access to that which they need to do their job and host data of differing sensitivity on networks with different levels of protection.
Virtualisation can break this model – imagine your HR database and customer database virtualised onto the same environment as your web servers. All of a sudden the data which was once hidden behind layers of firewalls and network controls may be vulnerable to a smart attacker who gains access to the web server and finds an exploitable vulnerability in the virtualisation layer.
The same is true for servers hosted for customers – in the past, good practice kept these separate where possible. Now the segregation between different customers may be no more than between adjacent areas of memory or hard disk. This may not be appropriate for environments with differing risk, threat or impact ratings, for example external web servers and internal databases.
This also means that all servers or applications in the same environment should be secured to the same level, which means patch management becomes very important.
Segregation of duties
Consolidation of systems also introduces the risk of a loss of segregation of duties (SOD), increasing the chance that both system administrators and users could potentially (and unintentionally) gain access to data above their normal privilege levels. Administration of the internal virtual switch is also a concern, as it introduces a conflict between server administrators and network administrators, potentially compromising the SOD between the two groups.
Licensing and asset management
The ability to rapidly deploy new virtual infrastructure at short notice raises challenges around licensing. Virtualising applications and servers is often implemented by copying an instance of a server, which is then recreated on the virtual host. When further capacity is required, new instances are created, but as these can be short-lived, the licensing model has to be updated to ensure that an organisation only runs applications it holds licences for (or that don’t require a licence in a virtual environment).
This applies also to inventory – a virtual server is just as much of an asset to the business as a physical server, but with the number of virtual servers fluctuating with demand, how do you manage your assets appropriately?
Resilience
Although virtualisation can improve resiliency, there is also a risk if implementation is flawed. If several key services are hosted on the same physical device in a single virtual environment, then failure of that hardware device would impact several services simultaneously.
When consolidating servers, consideration should be given to distributing business critical applications with high availability requirements across physical server instances. There should be no single point of failure for a service and the use of anti-affinity rules should be considered to ensure resilient virtual machines within a service are kept on separate infrastructure hosts.
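A minimal sketch of such an anti-affinity check, assuming a simple inventory of VM-to-host placements (the host and VM names are made up for illustration; platforms such as VMware DRS provide anti-affinity rules natively):

```python
# Check that no two VMs belonging to the same service share a physical host.
def anti_affinity_violations(placement: dict, services: dict) -> list:
    """placement maps VM -> host; services maps service -> list of its VMs."""
    violations = []
    for service, vms in services.items():
        hosts = [placement[vm] for vm in vms]
        if len(set(hosts)) < len(hosts):   # a host appears twice: co-located VMs
            violations.append(service)
    return violations

placement = {"web-1": "hostA", "web-2": "hostB", "db-1": "hostA", "db-2": "hostA"}
services = {"web": ["web-1", "web-2"], "db": ["db-1", "db-2"]}

print(anti_affinity_violations(placement, services))  # ['db']: both db VMs on hostA
```

Running a check like this against the live inventory, rather than trusting the initial deployment, matters because live migration can silently re-co-locate VMs over time.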
Maturity of tools and security awareness
While some virtual environments (such as the IBM LPAR architecture on mainframes) are considered mature, there is now a range of products in this space which have not been through the same level of development, and which run on platforms less suited to segregating multiple users. And while there has been large-scale development of PaaS platforms based on Xen, VMware and others, treating these as secure by default may not be appropriate for your business.
Another major risk around security awareness is that virtualisation is moving forward in some organisations without the knowledge or active participation of Information Security.
Administrators need to look at the particular security requirements prior to building a virtual environment in order to use the correct tools and security configuration to support the business requirements in an appropriate manner.
Information Security involvement is critical at these early planning and architecture decision stages.
Communication and Storage Security
Sensitive data transmitted across networks should be encrypted where possible and supported by the hypervisor. Encryption should be implemented for traffic between the hypervisor, management networks and remote client sessions, and must be implemented for virtualised DMZ environments.
Virtual disks are used to store the operating system, applications and data and also record the state of the virtual machine. Virtual disks (including backed up copies) must be protected from unauthorised modification.
Where virtual disk password protection features exist, they should be enabled.
Restrict disk shrinking features which allow non-privileged users and processes to reclaim unused virtual disk space as this may cause denial of service to legitimate users and processes.
Auditing, logging and monitoring
Enable security logging and monitoring on all virtual machines and hypervisors according to approved operational standards.
Hypervisors must be configured, at a minimum, to log (failed and successful) login attempts to management interfaces.
Activities of privileged hypervisor and VM accounts (for example, the creation, modification and deletion of roles) must be logged and reviewed.
Recommendations
- Understand all the key IT controls in the existing environment before virtualisation
- Understand virtual platform controls and configure for security
- Replicate IT controls in the virtualised environment
- Implement an asset management system which can cope with high volatility
- Work with software and hardware vendors to understand licensing implications
- If outsourcing to a cloud vendor, choose one that can match your data location requirements and build in a robust reporting framework
- Rearrange your support teams to suit the new environment, but during the transition it is likely that the learning curve will be steep as new tools are used, and back office and front office support teams are created anew
- Use a compliance and governance model which can manage the concepts of changing security boundaries and levels
- Ideally work with a service provider who has done it before!
Password policy questions are a perennial fixture of IT Security stack exchange, and take many forms.
Take, for example, the recent XKCD comic on the subject:
Shortly after it was posted, user Billy ONeal asked directly whether the logic was sound: Is a short complex password or a long dictionary passphrase better? You will find answers that support either conclusion, as well as answers that put the trade-offs involved into context.
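The heart of that debate is a simple entropy calculation, which is easy to reproduce. The character-set and word-list sizes below are assumptions (the 2048-word list matches the comic’s), and both figures assume the attacker knows the generation scheme:

```python
import math

# Entropy of a password scheme, assuming uniformly random choices.
def charset_entropy(length: int, charset_size: int) -> float:
    return length * math.log2(charset_size)

def passphrase_entropy(words: int, wordlist_size: int) -> float:
    return words * math.log2(wordlist_size)

short_complex = charset_entropy(8, 94)     # 8 truly random printable-ASCII chars
long_phrase = passphrase_entropy(4, 2048)  # 4 words drawn from a 2048-word list

print(f"8-char random password: {short_complex:.1f} bits")  # ~52.4
print(f"4-word passphrase:      {long_phrase:.1f} bits")    # 44.0
```

Note that the comic rates a patterned password like “Tr0ub4dor&3” at roughly 28 bits, far below the 52-bit uniform-random upper bound above, because humans pick predictable patterns; the passphrase’s 44 bits, by contrast, are achievable simply by picking words at random.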
Of course, part of knowing how easy a password is to crack is knowing how a password cracker works. Are there state of the art techniques or theory specifically for attacking pass phrases? Are there lists of the most common words or ngrams used in passwords and pass phrases?
So we’ve got no consistent idea about what constitutes a “good” password, although we can probably guess at what weak passwords will fall first. But then who is responsible for deciding a user’s password strength? Pretend for a moment that we agree what a strong password is. Should we stop users from choosing “weak” passwords, or is it their own fault for not understanding information theory and the entropy content of their chosen string of characters?
The definitive answer is “it depends”. It depends on how valuable the assets being protected by password access are. It depends on whether the value is appreciated by you as service provider/systems administrator, or by the user as customer/asset owner (or by a combination of those roles, or by someone else). It depends on how much inconvenience you’re willing to put your users to, and on how much inconvenience your users are willing to accept.
To summarise the discussion this far: there are different ideas of what makes a good password, of who is responsible for deciding whether a password is good or bad, and of how to enforce good passwords once you do decide you want to. But all of this discussion has already put the cart before the horse. Will your passwords be brute-forced, or will the attacker use a keylogger? Maybe they’ll attack the password where it’s stored?
Indeed, while we’re asking the big questions, why passwords at all? Might biometric identification be better? Why not just forget all of this complication over strong passwords and start taking fingerprints instead? Some reasons include the possibility of false positives from the biometrics system (who hasn’t tried holding up a photograph to a facial recognition system?), and the icky disgustingness of some other attacks.
Suffice it to say that the problem of password security is a complex one, but one that the denizens of security.stackexchange.com enjoy tackling.
QotW #12: How to counter the statement: “You don’t need (strong) security if you’re not doing anything illegal”?
Ian C posted this interesting question, which does come up regularly in conversation with people who don’t deal with security on a daily basis, and seems to be highlighted in the media for (probably) political reasons. The argument is “surely as long as you aren’t breaking the law, you shouldn’t need to prevent us having access – just to check, you understand”
This can be a very emotive subject, and it is one that has been used and abused by various incumbent authorities to impose intrusions on the liberty of citizens, but how can we argue the case against it in a way the average citizen can understand?
Here are some viewpoints already noted – what is your take on this topic?
M’vy made this point from the perspective of a business owner:
Security is not about doing something illegal, it’s about someone else doing something illegal (that will impact you).
If you don’t encrypt your phone calls, someone could know about what all your salesmen are doing and could try to steal your clients. If you don’t shred your documents, someone could use all this information to mount a social engineering attack against your firm, to steal R&D data, prototypes, designs…

Graham Lee supported this with a simple example:
Commercial confidential data…could provide competitors with an advantage in the marketplace if the confidentiality is compromised. If that’s still too abstract, then consider the personal impact of being fired for being the person who leaked the trade secrets.

So we can easily see a need for security in a commercial scenario, but why should a non-technical individual worry? From a more personal perspective, Robert David Graham pointed this out:
As the Miranda rights say: “anything you say can and will be used against you in a court of law”. Right after the police finish giving you the Miranda rights, they then say “but if you are innocent, why don’t you talk to us?”. This leads to many people being convicted of crimes because what they say is indeed used against them in a court of law. There is a great video on YouTube that explains in detail why you should never talk to cops, especially if you are innocent: http://www.youtube.com/watch?v=6wXkI4t7nuc
Tate Hansen’s thought is to ask:
“Do you have anything valuable that you don’t want someone else to have?”
If the answer is Yes then follow up with “Are you doing anything to protect it?”
From there you can suggest ways to protect what is valuable (do threat modeling, attack modeling, etc.).
But the most popular answer by far was from Justice:
You buy a lock and lock your front door if you live in a city, in close proximity to hundreds of thousands of others. There is a reason for that. And it’s the same reason why you lock your Internet front door.
Iszi asked a very closely linked question, “Why does one need a high level of privacy/anonymity for legal activities?”, which also inspired a range of answers:
From Andrew Russell, these four thoughts go a long way towards explaining the need for security and privacy:
If we don’t encrypt communication and lock systems then it would be like:
- Sending letters with transparent envelopes.
- Living with transparent clothes, buildings and cars.
- Having a webcam for your bed and in your bathroom.
- Leaving unlocked cars, homes and bikes.
And finally, from the EFF’s privacy page:
Privacy rights are enshrined in our Constitution for a reason — a thriving democracy requires respect for individuals’ autonomy as well as anonymous speech and association. These rights must be balanced against legitimate concerns like law enforcement, but checks must be put in place to prevent abuse of government powers.
A lot of food for thought…
A company’s data is as much a risk as it is an asset. The way customer data is stored in modern times requires it to be protected as a vulnerable asset and regarded as an unrealised liability, even if it is not considered on the annual financial statement. In many jurisdictions this is enforced by legislation (e.g. the Data Protection Act 1998 in the UK).
One of the challenges of this as a modern concern lies in knowing where your data is.
To make a comparable example, consider a company that owns its place of operation. That asset must be protected with investment, both for sound business sense and by regulatory requirement. Fire protection, insurance and ongoing maintenance are assumed costs, considered all but mandatory; a company that isn’t covering them is probably not on a sound footing. The loss of a primary building because of inadequate investment in protection and maintenance could be anything from a major event to fatal for the company.
It may seem a surreal comparison, but data exposure can have an impact as substantial as losing a primary building. A bank without customer records is dead regardless of the cash on hand. A bank with all its customer records made public is at risk of the same fate. Illegitimate transactions, damage to reputation, liability for customer losses related to disclosure and regulatory reaction may all work together to end an institution.
Consider how, earlier this year, Sony’s PlayStation online gaming network was hacked. In the face of this compromise, Sony estimated its immediate losses at $171 million, suffered extended periods of service downtime, and suffered further breaches while attempting to restore service. Further losses are likely as countries around the world investigate the situation and consider fines. Private civil suits have also been brought in response to the breaches. To put the numbers in perspective, the division responsible for the PlayStation network posted a 2010 loss of approximately $1 billion.
In total (across several divisions), Sony suffered at least 16 distinct instances of data loss or unintended access including an Excel file containing sweepstakes data from 2001 that Sony posted online. No compromise was needed for that particular failure — the file was readily available and indexed by Google. In the end, proprietary source code, administrator credentials, customer birthdays, credit card information, telephone numbers and home addresses were all exposed.
In regards to Sony’s breaches, the following question was posed: “If you were called by Sony right now, and had 100% control over security, what would you do in the first 24 hours?” The traditional response is isolating breaches, identifying them, restoring service with the immediate attack vectors closed, and shoring up after that. However, from the outside Sony’s issue appears to be systemic. Their systems are varied, complicated, and numerous, and weaknesses appear to be everywhere. They were challenged in returning to a functioning state because they were not compromised in an isolated way. Isolating and identifying breaches was as far as one could get in those first 24 hours. Indeed, a central part of the division’s core revenue business was out of service for roughly 3 weeks.
The root issue behind everything is likely that Sony isn’t aware of its assets. Everything grew in place organically, and they didn’t know what to protect. Sony claims it encrypted its credit card data, but researchers have found numbers related to the attack. Other identifying information wasn’t encrypted. Regardless of the circumstances related to the disclosure of credit card data, Sony failed to protect other valuable information, which can make it liable for fraud. Damage to Sony’s reputation and regulatory reaction, including Japan preventing the service from restarting there at least through July, are additional issues.
The lack of awareness about what data is at risk can impact a business of any size. Much as different types of fire protection are needed to deal with an office environment, outdoor tire storage at an auto shop, a server room and a chemical factory, different types of security need to be applied to your data based on its exposure. Water won’t work on a grease fire, and encrypting a database has no value if a query can be run against it to retrieve all the information without encryption. While awareness is acute about credit card sensitivity, losing names, addresses, phone numbers and dates of birth can present just as much risk for fraud and civil or regulatory liability. This is discussed in ‘Data Categorisation – critical or not?’ and should be a key step in any organisation’s plans for resilience.
Bogus SSL certificates are not a new problem, and incidents can be found going back over a decade. With a fake certificate for Google.com in the wild, they’re in the news again today. My last two posts have touched on the SSL topic, and Moxie Marlinspike’s Convergence.io software is being mentioned in these news articles. At the same time, Dan Kaminsky has been pushing for DNSSEC as a replacement for the SSL CA system. Last night, Moxie and Dan had it out 140 characters at a time on Twitter over whether DNSSEC for key distribution is wise.
I’m going to do two things with the discussion: I’m going to cover the discussion I said should be had about replacing the CA system, and I’m going to try to show a risk-based assessment to justify it.
The Risks of the Current System
Risk: Any valid CA chain can create a valid certificate. Currently, there are a number of root certificate authorities. My laptop lists 175 root certificates, including names like Comodo, DigiNotar, GoDaddy and Verisign. Any of those 175 root certificates can sign a trusted chain for any certificate, and my system will accept it, even if it has previously seen a different unexpired certificate for the same site, and even if that certificate was issued by a different CA. A greater exposure for this risk comes from intermediate certificates. These exist where the CA has signed a certificate for another party that has flags authorizing it to sign other keys while maintaining a trusted path. While Google’s real root CA is Equifax, another CA signed a valid certificate that would be accepted by users.
Risk mitigation: DNSSEC. DNSSEC limits the exposure we see above. In the DNSSEC system, *.google.com can only be valid if the signature chain runs from the DNS root, through the “com.” domain, to google.com. If Google’s SSL key is distributed through DNSSEC, then we know that the key can only be inappropriate if the DNS root or the top-level domain was compromised or is misbehaving. Just as we trust the properties of an SSL key to secure data, we trust it to maintain a chain, and from that we can say that there is no other risk of a spoofed certificate path if the only certificate innately trusted is the one belonging to the DNS root. Courtesy of offline domain signing, this also means that hosting a root server does not provide the opportunity to serve malicious responses, even by modifying the zone files (they would become invalid).
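To make the chain structure concrete, here is a toy Python model of that trust path. Real DNSSEC uses public-key signatures (DNSKEY/RRSIG records); the HMAC below is only a stand-in so the sketch stays self-contained, and the key values are invented for illustration:

```python
import hmac, hashlib

# Toy model of the DNSSEC chain of trust. Real DNSSEC uses public-key
# signatures; HMAC stands in here purely to show the chain structure:
# the only innately trusted key is the DNS root's, and each zone's key
# is only valid if its parent zone has signed it.

def sign(key: bytes, data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

root_key = b"dns-root-key"              # the single trust anchor
com_key = b"com-zone-key"
google_key = b"google.com-zone-key"

# Parent zones sign their children's keys.
sig_com = sign(root_key, com_key)
sig_google = sign(com_key, google_key)

def chain_valid(root, com, com_sig, site, site_sig) -> bool:
    # Valid only if the root vouches for com. AND com. vouches for the site.
    return (hmac.compare_digest(sign(root, com), com_sig)
            and hmac.compare_digest(sign(com, site), site_sig))

print(chain_valid(root_key, com_key, sig_com, google_key, sig_google))       # True
# A key substituted by anyone outside the root -> com. -> site path fails:
print(chain_valid(root_key, com_key, sig_com, b"attacker-key", sig_google))  # False
```

The point of the model is that there is exactly one innately trusted key; everything else is only valid if its parent zone vouches for it, which is what shrinks the exposure compared to 175 co-equal roots.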
Risks After Moving to DNSSEC
Risk: We distrust that chain. We presume from China’s actions that China has a direct interest in being able to track all information flowing over the Internet for its users. While the Chinese government has no control over the “com.” domain, it does control the “cn.” domain. Visiting google.cn does not mean that one intends to let the Chinese government view one’s data. Many countries have enough resources to perform attacks of collusion, where they can alter both the name resolution under their control and the data path.
Risk mitigation: Multiple views of a single site. Convergence.io is a tool that allows one to view the encryption information as seen from different sites all over the world. Without Convergence.io, an attacker needs to compromise the DNS chain and a point between you and your intended site. With Convergence.io, the bar is raised again so that the attacker must compromise both the DNS chain and the point between your target website and the rest of the world.
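A rough sketch of that idea in Python: the notaries below are simulated with static data rather than real network probes, but the logic is the same; the client only trusts a certificate when enough independent vantage points report the same fingerprint.

```python
import hashlib

# Sketch of Convergence-style multi-perspective validation. Each "notary"
# reports the certificate it sees for a host from its own network vantage
# point (simulated here as static byte strings); the client only proceeds
# when enough notaries agree with its own local view.

def fingerprint(cert_der: bytes) -> str:
    return hashlib.sha256(cert_der).hexdigest()

genuine_cert = b"-----GENUINE-CERT-----"
spoofed_cert = b"-----MITM-CERT-----"

# What each notary observes for the target site. A MITM near the client
# can only alter the client's own view, not what the rest of the world sees.
notary_views = {
    "notary-us": genuine_cert,
    "notary-eu": genuine_cert,
    "notary-asia": genuine_cert,
}

def consensus(local_cert: bytes, views: dict, threshold: int = 2) -> bool:
    local_fp = fingerprint(local_cert)
    agree = sum(1 for cert in views.values() if fingerprint(cert) == local_fp)
    return agree >= threshold

print(consensus(genuine_cert, notary_views))  # True: all perspectives match
print(consensus(spoofed_cert, notary_views))  # False: local MITM detected
```

This is why the attacker's bar moves: to fool the consensus they must sit between the target server and the whole world, not merely between the server and one client.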
Weighing the Risks
The two threats we have identified require gaining a valid certificate and compromising the information path to the server. To be a successful attack, both must occur together. In the current system, the bar for getting a validly signed yet inappropriately issued certificate is too low. The bar for performing a Man-in-the-Middle attack on the client side is also relatively low, especially over public wireless networks. DNSSEC raises the certificate bar, and Convergence raises the network bar (your attack will be detected unless you compromise the server side).
That seems ideal for the risks we identified, so what are we weighing? Well, we have to weigh the complexity of Convergence. A new layer is being added, and at the moment it requires active participation by the user. This isn’t weighing a risk so much as weighing how much we think we can benefit. We also must remember that Convergence doesn’t raise the bar against an attacker who can perform a MITM attack between the target server and the whole world.
Weighing DNSSEC, Moxie drives the following risk home for it in key distribution: “DNSSEC provides reduced trust agility. We have to trust VeriSign forever.” I argue that we must already trust the DNS system to point us to the right place, however. If we distrust that system, be it for name resolution or for key distribution, the action is still the same: the actors must be replaced, or the resolution done outside of it. The counter to Moxie’s statement, in implementing DNSSEC for SSL key distribution, is that we’ve made the problem an entirely social one. We now know that if we see a fake certificate in the wild, it is because somebody in the authorized chain has lost control of their systems or acted maliciously. We’ve reduced the exposure to only include those in the path: a failure with the “us.” domain doesn’t affect the “com.” domain.
So we’re left with the social risk that we have a bad actor who is now clearly identified. Convergence.io can help to detect that issue even if it only enjoys limited deployment. We are presently hamstrung by the fact that any of a broad number of CAs, and an ever-broader number of delegates, can be responsible for issuing “certified lies,” and that still needs to be reduced.
Making a Decision
Convergence.io is a bolt-on. It does not require anybody else to change except the user. It does add complexity that all users may not be comfortable with, and that may limit its utility. I see no additional exposure risk (in the realm of SSL discussion, not in the realm of somebody changing the source code illicitly) from using it. To that end, Moxie has released a great tool and I see no reason to not be a proponent of the concept.
Moxie still argues that DNSSEC has a potential downside. The two-part question is thus: is reducing the number of valid certification paths for a site to one an improvement? And when we remove the risk of an unwanted certification of the site, is the new risk that we can’t drop a certifier a credible increase in risk? Because we must already trust the would-be DNSSEC certifier to direct us to the correct site, the technical differences in my mind are moot.
To put it into a practical example: China as a country can already issue a “valid” certificate for Google. It can control any resolution of google.cn. Whether it controls the sole key channel for google.cn or any valid key channel for google.cn, you can’t reach the “true” server and certificate/key combination without trusting China. The solution for that, whether it concerns the IP address or the keypair, is to hold them accountable or find another channel. DNSSEC does not present an additional risk in that regard. It also removes a lot of ugliness related to parsing X.509 certificates and codes (which includes such storied history as the “*” certificate that worked on anything before browser vendors hard-coded it out). Looking at the risks presented and the arguments from both sides, I think it’s time to start putting secure channel keys in the DNS stream.
When we talk about security of systems, a lot of focus and attention goes on the external attacker. In fact, we often think of “attacker” as some form of abstract concept, usually “some guy just out causing mischief”. However, the real goal of security is to keep a system running, and internal, trusted users can upset that goal more easily and more completely than any external attacker, especially a systems administrator. These are, after all, the men and women you ask to set up your system and defend it.
So we Security SE members took particular interest when user Toby asked: when the systems admin leaves, what extra precautions should be taken? Toby wanted to know what best practice was, especially where a sysadmin was entrusted with critical credentials.
A very good question, and it did not take long for a reference to Terry Childs to appear – the San Francisco FiberWAN administrator who, in response to a disciplinary action, refused to divulge critical passwords to routing equipment. Clearly, no company wants to find itself locked out of its own network when a sysadmin leaves, whether the split was amicable or not. So what can we do?
Where possible, ensure “super user” privileges can be revoked when the user account is shut down.
From Per Wiklander. The most popular answer by votes so far also makes a lot of sense. The answer itself references the typical Ubuntu setup, where the root password is deliberately unknown to even the installer and users are part of a group of administrators, such that if their account is removed, so is their administrator access. Using this level of granular control you can also downgrade access temporarily, e.g. to allow the systems admin to do non-administrative work prior to leaving.
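On a Unix-like system, that group-based model can be audited with a few lines of Python via the standard `grp` module. The group names below are common defaults and may differ on your systems; this is a review helper, not a complete control:

```python
import grp

# Sketch of the group-based admin model described above (Unix-only):
# administrative rights come from membership of a group such as 'sudo',
# 'admin' or 'wheel', so revoking a leaver's access is just removing
# them from the group. This helper lists members of any such groups
# present on the machine, for periodic review.

ADMIN_GROUPS = {"sudo", "admin", "wheel"}   # names vary by distribution

def admin_members() -> dict:
    found = {}
    for g in grp.getgrall():
        if g.gr_name in ADMIN_GROUPS:
            found[g.gr_name] = list(g.gr_mem)
    return found

for group, members in admin_members().items():
    print(f"{group}: {', '.join(members) or '(no supplementary members)'}")
```

Run regularly, a report like this also catches the scenario raised below: an administrator quietly adding themselves (or a second account) back into a privileged group.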
However, other users soon pointed out a potential flaw in this kind of technical solution – there is no way to prevent someone with administrative access from changing this setup, or adding additional back doors, especially if they set up the box. A further raised point was the need not just to have a corporate policy on installation and maintenance, but also ensure it is followed and regularly checked.
Credential control and review matters – make sure you shut down all the non-centralized accounts too.
Mark Davidson‘s answer focused heavily on the need for properly revoking credentials after an administrator leaves. Specifically, he suggests:
- Do revoke all credentials belonging to that user.
- Change any root password/Administrator user password of which they are aware.
- Access to other services on behalf of the business, like hosting details, should also be altered.
- Ensure any remote access is properly shut down – including incoming IP addresses.
- Log and monitor access attempts to ensure that any possible intrusions are caught.
Security SE moderator AviD was particularly pleased with the last suggestion and added that it is not just administrator accounts that need to be monitored – good practice suggests monitoring all accounts.
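As a small sketch of what that monitoring might look like, the Python below scans sshd-style log lines for repeated failed logins; the sample log and the threshold are invented for the example, and a real deployment would feed in the actual auth log:

```python
import re
from collections import Counter

# Toy sketch of the monitoring suggestion above: scan auth-log lines for
# failed logins and flag accounts with repeated failures. The log format
# mimics OpenSSH's sshd messages; adjust the pattern to your own logs.

SAMPLE_LOG = """\
Oct 12 03:11:01 host sshd[411]: Failed password for olduser from 203.0.113.9 port 52011 ssh2
Oct 12 03:11:04 host sshd[411]: Failed password for olduser from 203.0.113.9 port 52013 ssh2
Oct 12 03:12:30 host sshd[412]: Accepted password for alice from 198.51.100.7 port 40400 ssh2
Oct 12 03:13:00 host sshd[413]: Failed password for olduser from 203.0.113.9 port 52020 ssh2
"""

FAILED = re.compile(r"Failed password for (\S+) from (\S+)")

def failed_attempts(log_text: str) -> Counter:
    return Counter(m.group(1) for m in FAILED.finditer(log_text))

attempts = failed_attempts(SAMPLE_LOG)
for user, count in attempts.items():
    if count >= 3:
        print(f"ALERT: {count} failed logins for {user!r} - is this account disabled?")
```

Repeated failures against an account that should have been disabled are exactly the signal you want surfaced after an administrator departs.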
It is not just about passwords – do not forget the physical/social dimension.
this.josh raises a critical point. So far, all suggestions have been technical, but there are many other issues that might present a window of opportunity to a sysadmin with a mind to cause chaos. From the answer itself:
- Physical access is important too. Don’t just collect badges; ensure that any access to cabinets, server rooms, drawers etc are properly revoked. The company should know who has keys to what and why so that this is easy to check when the sysadmin leaves.
- Ensure the phone/voicemail system is included in the above checks.
- If the sysadmin could buy on behalf of the company, ensure that by the time they have left, they no longer have authority to do this. Similarly, ensure any contracts the employee has signed have been reviewed; ideally, just as you log sysadmin activities, you are also reviewing contracts as they are signed.
- Check critical systems, including update status, to ensure they are still running and haven’t been tampered with.
- My personal favorite: let people know. As this.josh points out, it is very easy as a former employee knowing all the systems to persuade someone to give you access, or give you information that may give you access. Make sure everyone understands that the individual has left and that any suspicious requests are highlighted or queried and not just granted.
So far, that’s it for answers to the question, and no answer has yet been accepted, so if you think anything is missing, please feel free to sign up and present your own answer.
In summary, there appears to be a set of common issues being raised in the discussion. Firstly: technical measures help, but cannot account for every circumstance. Good corporate policy and planning is needed. A company should know what systems a systems administrator has access to, be they virtual or physical, as well as being able to audit access to these systems both during the employee’s tenure and after they have left. Purchasing mechanisms and other critical business functions such as contracts are just other systems and need exactly the same level of oversight.
I did leave one observation out from one of the three answers above: that it depends on whether the moving on employee is leaving amicably or not. That is certainly a factor as to whether there is likely to be a problem, but a proper process is needed anyway, because the risk to your systems and your business is huge. The difference in this scenario is around closing critical access in a timely manner. In practice this includes:
- Escorting the employee off the premises
- Taking their belongings to them once they are off premises
- Removing all logical access rights before they are allowed to leave
These steps reduce the risk that the employee could trigger malicious code or steal confidential information.
One of my worst ever projects was implementing PGP email encryption. Considering I have only worked in large financial services companies, that is saying something. I have reflected on the lessons learnt from this project before, but I felt that PGP was fundamentally flawed. When Apple iOS 5 was unveiled, there was a small feature that no one talked about, hidden among the sparkling jewels of notifications, free messaging and just-works synchronization. That feature was S/MIME email encryption support. Now S/MIME is not new; however, Apple is uniquely positioned with their ecosystem and user-centric design to solve the fundamental problems and bring secure email to the masses.
I have detailed the problems that email encryption solves, and those it does not solve before. To re-iterate and update:
Attacker eavesdropping on or modifying an email in transit, both on public networks such as the Internet and unprotected wireless networks, as well as on private internal networks. Recently I have experienced things that have hammered home the reality of this threat: heartfelt presentations at security conferences such as Uncon about the difficulties faced by those in oppressive regimes such as Iran, where the government reading your email could result in death or worse for you and your family; and even “liberal” governments such as the US blatantly ignoring the law to perform mass domestic wiretapping in the name of freedom. The right to privacy is a human right, and most critical communication these days is electronic. There is a clear problem to be solved in keeping this communication only to its intended participants.
Attacker reading or modifying emails in storage. The recent attacks on high-profile targets such as Sony, and low-profile ones like MtGox, which were brought low by simple exploitation of unpatched systems and SQL injection vulnerabilities, have revealed an unintended consequence. The email addresses and passwords published were often enough to allow attackers access to a user’s email account, as the passwords were re-used. I don’t know about you, but these days a lot of my most important information is stored in my Gmail account. The awesome search capability, high storage capacity and accessibility everywhere mean that it is a natural candidate for scanned documents and notes as well as the traditional highly personal and private communications. There is a problem to be solved of adding a layered defence; an additional wall in your castle if the front gate is breached.
Attacker is able to send emails impersonating you. It is amazing how much email is trusted today as being actually from the stated sender. In reality, normal email provides very little non-repudiation: it is trivial to send an email that appears to come from someone else. This is a major reason why phishing, and spear phishing in particular, is still so successful. Now spam blockers, especially good ones like Postini, have become very good at examining email headers, verified sender domains, MX records etc. to reject fraudulent emails (which is effective unless, of course, you are an RSA employee and dig it out of your quarantine). You can also see a more localised proof of concept of this. If your work accepts manager email approvals for things like pay rises and software access, and it also has open SMTP relays, you can have some fun by Googling SMTP telnet commands. There is a problem to solve in how to reliably verify the sender of an email and ensure the contents have not been tampered with.
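To see how little binds the From header to anything, consider this short Python sketch using the standard `email` module; the addresses are made up, and nothing is actually sent:

```python
from email.message import EmailMessage

# Minimal illustration of the point above: nothing in a plain email binds
# the From header to the actual sender. Any client, or a raw SMTP session,
# can set it freely. The addresses below are invented for the example.

msg = EmailMessage()
msg["From"] = "ceo@example.com"          # forged - we are not the CEO
msg["To"] = "payroll@example.com"
msg["Subject"] = "Please raise Bob's salary"
msg.set_content("Approved. Effective immediately.\n- The CEO")

# An open relay would accept this message as-is via
# smtplib.SMTP(host).send_message(msg). Only a cryptographic signature
# (S/MIME, PGP) ties the content to a verifiable sender.
print(msg["From"])
```

This is the whole argument for signing: the transport trusts whatever headers it is handed, so the proof of origin has to travel inside the message.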
Problems not solved by email encryption and signing:
- Sending the wrong contents to the right person(s)
- Sending the right contents to the wrong person(s)
- Sending the wrong contents to the wrong person(s)
These are important to keep in mind because you need to choose the right tool for the job. Email encryption is not a panacea for all email mis-use cases.
So, having established that there is a need for email encryption and signing, what are the fundamental problems with a market-leading technology like PGP (now owned by Symantec)?
Fundamental problems in summary:
Key exchange. Bob Greenpeace wants to send an email to Alice Activist for the first time. He is worried about Evil Government and Greedy OilCorp reading it and killing them both. Bob needs Alice’s public key to be able to encrypt the email. If Bob and Alice both worked at the same company and had PGP Universal configured correctly, or worked in organisations that both had PGP Universal servers exposed externally, or had the foresight to publish their public keys to the public PGP global key server, all would be well. However, this is rarely the case. Also, email is a conversation: Alice not only wants to reply but also to bring Karl Boatdriver into planning the operation. Now suddenly all three need each other’s public keys. Bob, Alice and Karl are all experts in their field but not technical, and have no idea how to export and send their public keys and install each other’s before sending secure email. They have an operation to plan for Thursday!
Working transparently and reliably everywhere. Bob is also running Outlook on his Windows Phone 7 (chuckles), Alice Pegasus on Ubuntu, and Karl Gmail via Safari on his iPad. Email encryption, decryption, signing, signature verification and key exchange all need to just work on all of these and more. It also cannot be like PGP Desktop, which uses a dirty hack to hook into a rich email client. This, of course, gives it extremely good reliability and performance: intermittent issues like emails going missing and blank emails never occur, calls to PGP support for any enterprise installation are few and far between, and all users in companies love PGP and barely notice it is there. In many organisations email is the highest-tier application; it has to have three nines uptime. If Alice and Karl are about to be captured by Greedy OilCorp and need to get an email to Bob, they cannot afford to have the message blocked by a PGP error.
Less fundamental but still annoying:
- Adding storage encryption – there is no simple way with PGP to encrypt a clear-text email in storage. You can send it to yourself again, or export it to a file system and encrypt it there, but that is it. This is a problem where the email is not sensitive enough to encrypt in transit (or where you just forgot), but where, after the fact, you do care if script kiddies with your email password get that email and publish it on Pastebin.
How Apple could solve these fundamental problems with iOS 5
Key exchange – If you read marknca‘s excellent iOS security guide, you will see that Apple has effectively built a Public Key Infrastructure (PKI). Cryptographically signing things like apps and checking this signature before allowing them to run is key to their security model. This could be extended to users to provide email encryption and specifically transparent key exchange in the following manner:
- Each user with an Apple ID would have a public/private key pair automatically generated
- The public keys would be stored on the Apple servers and available via API and on the web to anyone
- The private keys would be encrypted to the user’s Apple account passphrase and be synchronised via iCloud to all iOS and Mac endpoints
- To send a secure email the user would just click a secure button on their email tool
- When sending an email via an iOS device, a Mac or the MobileMe web interface, it would query the Apple servers for the recipient’s public key and encrypt and (optionally) sign the email
- On receipt, as long as the device had been unlocked, the email would be decrypted. For web access, the email would be decrypted as long as the user’s private key was available in the browser (this could be an add-on to Safari)
- To encrypt an email in storage you would just drag it to a folder called private email. You could write rules for what emails were automatically stored there.
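Here is a toy Python model of that flow, to show what “transparent” key exchange looks like from the user’s side. A real implementation would use S/MIME certificates and proper public-key cryptography; the directory, the hash-derived keystream cipher and the addresses below are all invented placeholders and are not secure:

```python
import hashlib, secrets

# Toy model of the proposed flow. Real S/MIME would use X.509 certificates
# and asymmetric crypto; here a per-user random key plus a hash-derived
# keystream stands in, so the focus stays on the part Apple would make
# transparent: looking up the recipient's key material by Apple ID.

key_directory = {}          # models Apple's server: Apple ID -> key material

def register(apple_id: str) -> None:
    # Generated automatically when the Apple ID is created.
    key_directory[apple_id] = secrets.token_bytes(32)

def _keystream(key: bytes, n: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def send_secure(recipient: str, body: str) -> bytes:
    # The client queries the directory for the recipient's key; the user
    # just pressed a "secure" button and never saw any of this happen.
    key = key_directory[recipient]
    data = body.encode()
    return bytes(a ^ b for a, b in zip(data, _keystream(key, len(data))))

def receive_secure(recipient: str, ciphertext: bytes) -> str:
    key = key_directory[recipient]
    plain = bytes(a ^ b for a, b in zip(ciphertext, _keystream(key, len(ciphertext))))
    return plain.decode()

register("alice@me.com")
ct = send_secure("alice@me.com", "Operation is on for Thursday")
print(receive_secure("alice@me.com", ct))
```

The design point is that the directory lookup happens inside `send_secure`: Bob, Alice and Karl never export, mail or install keys, which is exactly the PGP pain point described earlier.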
This system would make key exchange, email encryption, decryption and signing totally transparent. Because S/MIME uses certificates, it is easy to get the works-everywhere property. However, Apple would first enable this only for .me addresses, iOS devices and Macs, to sell more of these and to beef up their enterprise cred. Smart hackers and add-on writers would soon make it work everywhere though. There you go: secure email for the masses.