You may have noticed all the blacked-out sites on the Internet on the 18th of January, but possibly aren’t aware of why they were doing this. The answer is SOPA – the poorly named “Stop Online Piracy Act”.
In itself, that sounds fine, right? We want to cut piracy so this must be good. Well, no.
We already have laws in many countries which allow us to take down websites hosting pirated content, but in many countries the process is one of “innocent until proven guilty” – this means evidence needs to be provided, legal processes have to take place, and a court can rule that a website must be shut down. The DMCA lets the US do that.
What SOPA does is change the balance to “Guilty until proven innocent” – which means that a website can be taken down just because of an allegation of copyright infringement.
For StackExchange, for example, occasionally people post plagiarised content. We rely on the community flagging this content, and moderators then remove it as fast as possible. (Look at Joel’s answer to this question to see the full process under the DMCA and how this protects StackExchange legally.) If SOPA were in force, StackExchange would be in danger of being removed from the Internet for any violation, as it would be treated as guilty from the outset.
What is also very worrying is the US’s history of going after individuals in other countries citing US law, when those individuals have no connection to the US. See Richard O’Dwyer’s case – while what he did may have been used for nefarious purposes, all he really provided was links to other websites, and yet the US has forced the UK to extradite him. With a significant portion of the Internet under some US control, it wouldn’t be stretching the truth to assume this US bill would affect many other countries.
Another outcome if SOPA came into force would be an acceleration of businesses away from networks owned or controlled by the US, which may directly lead to greater piracy threats.
The Electronic Frontier Foundation – a strong force for free speech on the Internet – has this handy guide to SOPA.
So why didn’t StackExchange join the blackout? – see this answer: the content is owned by the community, so the entire community would have had to agree on the blackout.
QotW #12: How to counter the statement: “You don’t need (strong) security if you’re not doing anything illegal”?
Ian C posted this interesting question, which does come up regularly in conversation with people who don’t deal with security on a daily basis, and seems to be highlighted in the media for (probably) political reasons. The argument is “surely as long as you aren’t breaking the law, you shouldn’t need to prevent us having access – just to check, you understand”.
This can be a very emotive subject, and it is one that has been used and abused by various incumbent authorities to impose intrusions on the liberty of citizens, but how can we argue the case against it in a way the average citizen can understand?
Here are some viewpoints already noted – what is your take on this topic?
M’vy made this point from the perspective of a business owner:
Security is not about doing something illegal, it’s about someone else doing something illegal (that will impact you).
If you don’t encrypt your phone calls, someone could learn what all your salesmen are doing and try to steal your clients. If you don’t shred your documents, someone could use all this information to mount a social engineering attack against your firm, to steal R&D data, prototypes, designs…

Graham Lee supported this with a simple example:
Commercial confidential data…could provide competitors with an advantage in the marketplace if the confidentiality is compromised. If that’s still too abstract, then consider the personal impact of being fired for being the person who leaked the trade secrets.

So we can easily see a need for security in a commercial scenario, but why should a non-technical individual worry? From a more personal perspective, Robert David Graham pointed this out:
As the Miranda Rights say: “anything you say can and will be used against you in a court of law”. Right after the police finish giving you the Miranda rights, they then say “but if you are innocent, why don’t you talk to us?”. This leads to many people getting convicted of crimes because it is indeed used against them in a court of law. This is a great video on YouTube that explains in detail why you should never talk to cops, especially if you are innocent: http://www.youtube.com/watch?v=6wXkI4t7nuc
Tate Hansen‘s thought is to ask,
“Do you have anything valuable that you don’t want someone else to have?”
If the answer is Yes then follow up with “Are you doing anything to protect it?”
From there you can suggest ways to protect what is valuable (do threat modeling, attack modeling, etc.).
But the most popular answer by far was from Justice:
You buy a lock and lock your front door if you live in a city, in close proximity to hundreds of thousands of others. There is a reason for that. And it’s the same reason why you lock your Internet front door.
Iszi asked a very closely linked question “Why does one need a high level of privacy/anonymity for legal activities”, which also inspired a range of answers:
From Andrew Russell, these 4 thoughts go a long way to explaining the need for security and privacy:
If we don’t encrypt communication and lock systems then it would be like:
Sending letters with transparent envelopes. Living with transparent clothes, buildings and cars. Having a webcam for your bed and in your bathroom. Leaving unlocked cars, homes and bikes.
And finally, from the EFF’s privacy page:
Privacy rights are enshrined in our Constitution for a reason — a thriving democracy requires respect for individuals’ autonomy as well as anonymous speech and association. These rights must be balanced against legitimate concerns like law enforcement, but checks must be put in place to prevent abuse of government powers.
A lot of food for thought…
A company’s data is as much a risk as it is an asset. The way that customer data is stored in modern times requires it to be protected as a vulnerable asset and regarded as an unrealized liability, even if not considered on the annual financial statement. In many jurisdictions this is enforced by legislation (e.g. the Data Protection Act 1998 in the UK).
One of the challenges of this as a modern concern lies in knowing where your data is.
For a comparable example, consider a company that owns its place of operation. That asset must be protected with an investment, both for sound business sense and by regulatory requirement. Fire protection, insurance, and ongoing maintenance are assumed costs that are considered all but mandatory. A company that isn’t covering those costs is probably not on a sound footing. The loss of a primary building because of inadequate investment in protection and maintenance could be anywhere from a major event to fatal to the company.
It may seem a surreal comparison, but data exposure can have an impact as substantial as losing a primary building. A bank without customer records is dead regardless of the cash on hand. A bank with all its customer records made public is at risk of the same fate. Illegitimate transactions, damage to reputation, liability for customer losses related to disclosure and regulatory reaction may all work together to end an institution.
Consider how earlier this year Sony’s Playstation online gaming network was hacked. In the face of this compromise, Sony estimated their present losses at $171 million, suffered extended periods of service downtime, and suffered further breaches in attempting to restore service. Further losses are likely as countries around the world investigate the situation and consider fines. Private civil suits have also been brought in response to the breaches. To put the numbers in perspective, the division responsible for the Playstation network posted a 2010 loss of approximately $1 billion.
In total (across several divisions), Sony suffered at least 16 distinct instances of data loss or unintended access including an Excel file containing sweepstakes data from 2001 that Sony posted online. No compromise was needed for that particular failure — the file was readily available and indexed by Google. In the end, proprietary source code, administrator credentials, customer birthdays, credit card information, telephone numbers and home addresses were all exposed.
In regard to Sony’s breaches, the following question was posed: “If you were called by Sony right now, had 100% control over security, what would you do in the first 24 hours?” The traditional response is isolating breaches, identifying them, restoring service with the immediate attack vectors closed, and shoring up after that. However, from the outside Sony’s issue appears to be systemic. Their systems are varied, complicated, and numerous. Weaknesses appear to be everywhere. They were challenged in returning to a functioning state because they were not compromised in an isolated way. Isolating and identifying breaches was as far as one could get in those first 24 hours. Indeed, a central part of the division’s core revenue business was out of service for roughly 3 weeks.
The root issue behind everything is likely that Sony isn’t aware of their assets. Everything grew in place organically, and they didn’t know what to protect. Sony claims it encrypted its credit card data, but researchers have found numbers related to the attack. Other identifying information wasn’t encrypted. Regardless of the circumstances related to the disclosure of credit card data, Sony failed to protect other valuable information that can make it liable for fraud. Damage to Sony’s reputation and regulatory reaction, including Japan preventing the restart of the service there at least through July, are additional issues.
The lack of awareness about what data is at risk can impact a business of any size. Much as different types of fire protection are needed to deal with an office environment, outdoor tire storage at an auto shop, a server room and a chemical factory, different types of security need to be applied to your data based on its exposure. Water won’t work on a grease fire, and encrypting a database has no value if a query can be run against it to retrieve all the information without encryption. While awareness is acute about credit card sensitivity, losing names, addresses, phone numbers and dates of birth can present just as much risk for fraud and civil or regulatory liability. This is discussed in ‘Data Categorisation – critical or not?’ and should be a key step in any organisation’s plans for resilience.
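The idea of matching protection to exposure can be sketched as a simple classification lookup. Here is a minimal, hypothetical Python example – the sensitivity levels, field names and controls are invented for illustration, not taken from any standard:

```python
# A minimal sketch of data categorisation: tag each field with a
# sensitivity level and look up the handling controls it requires.
# Levels, fields and controls below are illustrative only.

SENSITIVITY_CONTROLS = {
    "public":       {"encrypt_at_rest": False, "mask_in_logs": False},
    "internal":     {"encrypt_at_rest": False, "mask_in_logs": True},
    "confidential": {"encrypt_at_rest": True,  "mask_in_logs": True},
    "restricted":   {"encrypt_at_rest": True,  "mask_in_logs": True},
}

FIELD_SENSITIVITY = {
    "product_name":  "public",
    "office_phone":  "internal",
    "date_of_birth": "confidential",  # fraud risk, as noted above
    "card_number":   "restricted",
}

def controls_for(field: str) -> dict:
    """Return the handling controls a field's category demands.
    Unclassified fields fall back to the strictest handling."""
    level = FIELD_SENSITIVITY.get(field, "restricted")
    return SENSITIVITY_CONTROLS[level]

print(controls_for("card_number"))
print(controls_for("some_unclassified_field"))  # defaults to strictest
```

The defaulting choice is the point: a field nobody has thought about yet should get the most protection, not the least, which is exactly the gap the Sony incidents exposed.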
This week, a newly-minted master’s degree holder asked on our site, “[How do I go about] Carrying out a professional IT audit procedure?” That’s a really big question in one sentence. In trying to break it down into parts we can address, let’s look at the perspectives of the people involved in an audit.
Staff at an audit client interact with auditors who ask a lot of questions off a script that is usually referred to as a work plan. Many times, they ask you to open the appropriate settings dialogs or configuration files and show them the configuration details they are interested in. Management at an audit client will discuss beforehand which systems are in scope, and the managers for the auditors will select or create appropriate work plans.
The quality, nature, and scope of work plans vary widely, and a single audit will often involve the use of many different plans. Work plans might be written by the audit company or available from regulatory bodies, standards organizations, or software vendors. One example of a readily-available plan is the PCI DSS 2.0 standard. That plan displays both high-level overviews and mid-level configuration requirements across a broad spectrum. Low-level details would be operating system or application specific. Many audit plans related to banking applications have granular detail about core system configuration settings. Also have a look at this question about audit standards for law firms for an example from a different industry showing similarities and differences.
While some plans are driven by regulatory compliance, most are focused on best practice, and for those who are new to the world of IT security, that’s the most challenging part about them. There is no universal standard; many plans overlap greatly, but still differ. If the auditor is appropriately considering their client’s needs, they’ll almost certainly end up marking parts of plans as not applicable, or as non-compliant but acceptable because of mitigating circumstances.
Another challenging point for those new to auditing is the sometimes hard-to-grasp separation between objectives and controls. An objective might be to ensure that each user authenticates to their own account. Controls listed in a work plan might include password expiry requirements to help prevent shared account passwords (among other things). Don’t get crossed up focusing on the control if the objective is met through some other means – perhaps everybody is using biometric authentication instead. There are too many instances of this in the audit world, and it’s a common mistake among newer auditors.
From a good auditor’s perspective, meeting the goals of the work plan might be considered 60% of the job. An auditor’s job isn’t complete unless they’re looking at the whole organization. Some examples: to fulfill a question about password change requirements, the auditor should ask an administrator, see the configuration, and ask users about it (“When was the last time the system made you change your password?”). To review a system setting, the auditor should ask to see settings in a general manner, adding detail only as needed: “Can you show me the net effect of policies on this user’s account?” as opposed to “Start->run->rsop.msc”. Users reporting different experiences about password resets than the system configuration shows, or a system administrator who fumbles their way around everything, won’t ever be in the work plan, but they will be a concern.
With that background in mind, here are some general steps to performing an IT audit procedure:
- Meet with management and determine their needs. You should understand many of the possible accepted risks before you begin the engagement. For example, high-speed traders may stand to lose more money by implementing a firewall than not.
- Select appropriate audit plans based on available resources and your own relevant work.
- Properly review the controls with the objectives they meet in mind. Use multiple references when possible, and always try to confirm any settings directly.
- Pay attention to the big picture. Things should “feel right.”
- Review your findings with management and consider their thoughts on them. Many times the apparent severity of something needs to be adjusted.
- At the end of the day, sometimes a business unit may accept the risk from a weak control, despite it looking severe to you as an auditor. That is their prerogative, as long as you have correctly articulated the risks.
The last part of that, auditors reviewing findings with client management, takes the most finesse and the most hard-to-explain skill. Does your finding really matter? How can you smooth things over and still deliver over 100 findings? At the end of the day, experience and repetition are the biggest part of delivering professional work, whatever the kind of work.
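The objectives-versus-controls distinction discussed above can be made concrete in a few lines of code. This is a hypothetical sketch – the class names and status values are invented, not drawn from any real audit standard:

```python
# Sketch: record work-plan findings so controls stay tied to the
# objectives they serve. A deviating control does not automatically
# fail the objective if it is mitigated or not applicable.

from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    status: str        # "compliant", "not_applicable", "mitigated", "failed"
    note: str = ""

@dataclass
class Objective:
    description: str
    controls: list = field(default_factory=list)

    def met(self) -> bool:
        # Only an outright failure counts against the objective.
        return all(c.status != "failed" for c in self.controls)

obj = Objective("Each user authenticates to their own account")
obj.controls.append(Control("Password expiry <= 90 days", "mitigated",
                            "Biometric authentication in use instead"))
obj.controls.append(Control("No shared accounts found", "compliant"))

print(obj.met())  # the objective is met despite the control deviation
```

Structuring findings this way makes the “not compliant but acceptable” judgment explicit and auditable, rather than a silent checkbox.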
Some further starting points for more detail can be found at http://en.wikipedia.org/wiki/Information_technology_audit_process and http://ithandbook.ffiec.gov/.
QotW #7: How to write an email regarding IT Security that will be read, and not ignored by the end user?
A key aspect of IT and Information Security is the acceptance piece. People aren’t naturally good at this kind of security and generally see it as an annoyance and something they would rather not have to deal with.
Because of this, it is typical in organisations to send out regular security update emails – to help remind users of risks, threats, activities etc.
However, it is often the case that these are deleted without even being read. This week’s question: How to write an email regarding IT Security that will be read, and not ignored by the end user? was asked by @makerofthings and generated quite a number of interesting responses, including:
- Provide a one line summary first, followed by a longer explanation for those who need it
- Provide a series of options to answer – those who fail to answer at all can be chased
- Tie in reading and responding to disciplinary procedures – a little bit confrontational here, but can work for items such as mandatory annual updates (I know various organisations do this for Money Laundering awareness etc)
- Using colours – either for the severity of the notice, or to indicate required action by the user
- Vary the communications method – email, corporate website, meeting invite etc
- Don’t send daily update messages – only send important, actionable notices
- Choose whose mailbox should send these messages – if it is critical everyone reads it, perhaps it should come from the FD or CEO, or the individual’s line manager
- Be personal, where relevant – users who receive a few hundred emails a day will happily filter or delete boring ones, or those too full of corporate-speak. Impress upon users how it is relevant to them
- Add “Action Required” in the subject
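Several of these suggestions can be combined quite mechanically. Here is a small illustration using Python’s standard library `email.message` – the addresses, sender and wording are invented examples:

```python
# Sketch: compose a security notice that follows the advice above -
# "Action Required" subject, one-line summary first, a longer
# explanation for those who need it, and response options to chase.

from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Action Required: change your password by Friday 17:00"
msg["From"] = "ciso@example.com"   # a named, senior sender
msg["To"] = "alice@example.com"
msg.set_content(
    "One-line summary: change your password before Friday 17:00.\n"
    "\n"
    "Why this matters to you: your account gives access to customer\n"
    "records, and stale passwords are a common attack route.\n"
    "\n"
    "Reply with one of: DONE / NEED HELP / NOT APPLICABLE\n"
)

print(msg["Subject"])
```

The structure matters more than the tooling: the summary line survives preview panes, and the fixed reply options make non-responders easy to identify and follow up.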
It is generally agreed that if it isn’t interesting it will be deleted. If there are too many ‘security’ emails, they’ll get deleted. In general, unless you can grab the audience’s attention, the message will fail.
Having said that, another take on it is – do you need to send all these security emails to your users? For example, should they be getting antivirus updates or patch downtime info in an email every week?
Can users do anything about antivirus, or should that be entirely the responsibility of IT or Ops? And wouldn’t a fixed patch downtime each week that is listed on the internal website make less of an impact on users?
Thinking about the common weak points in security – such as users – and reserving user-facing communications for the occasions when you genuinely need users to act means each message carries far more impact.
Associated questions and answers you should probably read:
When we talk about security of systems, a lot of focus and attention goes on the external attacker. In fact, we often think of “attacker” as some abstract concept, usually “some guy just out causing mischief”. However, the real goal of security is to keep a system running, and internal, trusted users can upset that goal more easily and more completely than any external attacker – especially a systems administrator. These are, after all, the men and women you ask to set up your system and defend it.
So we Security SE members took particular interest when user Toby asked: when the systems admin leaves, what extra precautions should be taken? Toby wanted to know what the best practice was, especially where a sysadmin was entrusted with critical credentials.
A very good question, and it did not take long for a reference to Terry Childs to appear – the San Francisco FiberWAN administrator who, in response to a disciplinary action, refused to divulge critical passwords to routing equipment. Clearly, no company wants to find itself locked out of its own network when a sysadmin leaves, whether the split was amicable or not. So what can we do?
Where possible, ensure “super user” privileges can be revoked when the user account is shut down.
This suggestion, from Per Wiklander, is the most popular answer by votes so far, and it makes a lot of sense. The answer references the typical Ubuntu setup, where the root password is deliberately unknown even to the installer and users are part of a group of administrators, such that if a user’s account is removed, so is their administrator access. Using this level of granular control you can also downgrade access temporarily, e.g. to allow the systems admin to do non-administrative work prior to leaving.
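The group-based scheme can be shown with a toy model. This is a pure-Python simulation of the idea, not a wrapper around real OS accounts – the group and user names are invented:

```python
# Toy model of group-based administration: admin rights flow from
# membership of an "admin" group, so shutting down the account (or
# removing it from the group) revokes those rights in one step.

groups = {"admin": {"alice", "bob"}}
accounts = {"alice", "bob", "carol"}

def is_admin(user: str) -> bool:
    return user in accounts and user in groups["admin"]

def remove_account(user: str) -> None:
    accounts.discard(user)
    for members in groups.values():
        members.discard(user)   # group memberships go with the account

assert is_admin("alice")
remove_account("alice")         # the leaver's account is shut down...
assert not is_admin("alice")    # ...and admin access goes with it
```

The appeal is that there is a single revocation point; the weakness, as the next point explains, is that nothing in the model stops an administrator from changing the model itself.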
However, other users soon pointed out a potential flaw in this kind of technical solution – there is no way to prevent someone with administrative access from changing this setup, or adding additional back doors, especially if they set up the box. A further point raised was the need not just to have a corporate policy on installation and maintenance, but also to ensure it is followed and regularly checked.
Credential control and review matters – make sure you shut down all the non-centralized accounts too.
Mark Davidson‘s answer focused heavily on the need for properly revoking credentials after an administrator leaves. Specifically, he suggests:
- Do revoke all credentials belonging to that user.
- Change any root password/Administrator user password of which they are aware.
- Access to other services on behalf of the business, like hosting details, should also be altered.
- Ensure any remote access is properly shut down – including incoming IP addresses.
- Log and monitor access attempts to ensure that any possible intrusions are caught.
Security SE moderator AviD was particularly pleased with the last suggestion and added that it is not just administrator accounts that need to be monitored – good practice suggests monitoring all accounts.
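It is easy to lose track of which credentials have actually been revoked, particularly the non-centralized ones. A sketch of tracking the checklist in code – the credential names here are invented examples, not an exhaustive list:

```python
# Sketch: track the revocation checklist for a leaver and surface
# anything still outstanding. Non-centralized credentials (hosting
# accounts, third-party services) are the easiest to miss.

leaver_credentials = [
    {"name": "LDAP account",        "centralized": True,  "revoked": True},
    {"name": "root password",       "centralized": False, "revoked": True},
    {"name": "hosting provider",    "centralized": False, "revoked": False},
    {"name": "VPN / remote access", "centralized": True,  "revoked": True},
]

outstanding = [c["name"] for c in leaver_credentials if not c["revoked"]]
print(outstanding)
```

Even a list this simple forces the question “what accounts exist outside the directory?” to be answered before the leaver walks out of the door.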
It is not just about passwords – do not forget the physical/social dimension.
this.josh raises a critical point. So far, all suggestions have been technical, but there are many other issues that might present a window of opportunity to a sysadmin with a mind to cause chaos. From the answer itself:
- Physical access is important too. Don’t just collect badges; ensure that any access to cabinets, server rooms, drawers etc are properly revoked. The company should know who has keys to what and why so that this is easy to check when the sysadmin leaves.
- Ensure the phone/voicemail system is included in the above checks.
- If the sysadmin could buy on behalf of the company, ensure that by the time they have left, they no longer have authority to do this. Similarly, ensure any contracts the employee has signed have been reviewed – ideally, just as you log sysadmin activities, you are also reviewing contracts as they are signed.
- Check critical systems, including update status, to ensure they are still running and haven’t been tampered with.
- My personal favorite: let people know. As this.josh points out, it is very easy as a former employee knowing all the systems to persuade someone to give you access, or give you information that may give you access. Make sure everyone understands that the individual has left and that any suspicious requests are highlighted or queried and not just granted.
So far, that’s it for answers to the question, and no answer has yet been accepted, so if you think anything is missing, please feel free to sign up and present your own answer.
In summary, there appears to be a set of common issues being raised in the discussion. Firstly: technical measures help, but cannot account for every circumstance. Good corporate policy and planning is needed. A company should know what systems a systems administrator has access to, be they virtual or physical, as well as being able to audit access to these systems both during the employee’s tenure and after they have left. Purchasing mechanisms and other critical business functions such as contracts are just other systems and need exactly the same level of oversight.
I did leave one observation out from one of the three answers above: that it depends on whether the departing employee is leaving amicably or not. That is certainly a factor in whether there is likely to be a problem, but a proper process is needed anyway, because the risk to your systems and your business is huge. The difference in this scenario is around closing critical access in a timely manner. In practice this includes:
- Escorting the employee off the premises
- Taking their belongings to them once they are off premises
- Removing all logical access rights before they are allowed to leave
These steps reduce the risk that the employee could trigger malicious code or steal confidential information.
Despite the security industry getting ever more professional, with well trained teams, security incidents seem to be increasing. Just look at the news recently:
- Sony – 23 incidents so far?
- UK NHS Laptop – over 8 million patient records
- Citibank – 200,000 customer records lost
- WordPress platform vulnerabilities (see Steve Lord’s presentations on this)
- Stuxnet – targeted at a specific installation
From the 2011 Verizon DBIR
- 761 breaches of which 87% were in Hospitality, Retail and Financial Services
- 436 of these were in companies with 11-100 employees – so this is not just a problem for the big targets any more
- 92% of attacks (leading to 99% of records compromised) were external
Some new threat groups:
Anonymous and LulzSec – often punitive, sometimes for the Lulz
- ACS Law
- Spanish Police
- FBI, CIA
- Porn sites
- Took requests
So what do the security professionals do?
- After an audit, fix vulnerabilities
- After an attack, fix the issues
- Use encryption and strong authentication
- Patch regularly
- Validate input, encode output
- Assess your 3rd parties
- Use a Secure Development Lifecycle
- Build security into everything you do
No problem, right? If only it were that easy…
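To pick one item from the list, “validate input, encode output” is easy to state and easy to get wrong. A minimal sketch using only the standard library – the username policy is an invented example:

```python
# Sketch of "validate input, encode output": whitelist-validate a
# username on the way in, and HTML-escape anything reflected back
# into a page on the way out. Do both; neither replaces the other.

import html
import re

USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{1,32}$")

def valid_username(name: str) -> bool:
    return bool(USERNAME_RE.match(name))

def render_greeting(name: str) -> str:
    # Encode on output even if the input was validated elsewhere.
    return "<p>Hello, %s</p>" % html.escape(name)

assert valid_username("alice_99")
assert not valid_username("<script>")
print(render_greeting("<script>alert(1)</script>"))
```

The design point is defence in depth: validation narrows what enters the system, but output encoding is what actually prevents markup injection when data reaches a page.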
Your CEO wants the business to make money, so wants to minimise bottom line spend but will spend appropriately on risks to the business.
So why doesn’t the CEO see these security issues as business risks?
- Maybe they aren’t significant when compared to other business risks
- Or perhaps the CEO just doesn’t understand the latest security threat
Both of these are your problems. The security community is often called the ‘echo chamber’ for a good reason – we say good things and have good ideas about how to fix security issues, but we pretty much only communicate with other enlightened security professionals. We need to tell others in a way they can understand – but in general we talk technical 1337 speak full of jargon and no one can make head or tail of it.
It is up to you, the security professional, to learn how to talk the language of business risk.
There are materials out there that will help you:
- IIA GAIT – Guide to the Assessment of IT Risk
- FAIR – Factor Analysis of Information Risk
- ISACA COBIT – Control Objectives for Information and Related Technology
- ISACA – CRISC and CGEIT certifications
- IISP – approved Information Risk Management courses
Begin to understand what is important to a business executive and how they talk about it, and you are already well on the way to being able to articulate security in the same way. If you are a manager of security professionals, make it easy for them to gain exposure to heads of business, C-suite, executive, directors etc. and it is likely to benefit you in the long run.
These are good questions to prepare you for conversations you may have along the way:
- What metrics should a CIO rely on to gauge risk
- How would you work with the security paranoiac on your team
- Can you provide loss values on security breaches
- How do you compare risks from website, physical perimeter, staff, etc.
- What are good threats to kick off conversations with others about what worries them