Posts Tagged ‘access-control’

Is our entire password strategy flawed?

2014-06-19 by roryalsop. 8 comments

paj28 posed a question that really fits better here as a blog post:

Security Stack Exchange gets a lot of questions about password strength, password best practices, attacks on passwords, and there’s quite a lot for both users and sites to do, to stay in line with “best practice”.

Web sites need a password strength policy, account lockout policy, and secure password storage with a slow, salted hash. Some of these requirements have usability impacts, denial of service risks, and other drawbacks. And it’s generally not possible for users to tell whether a site actually does all this (hence plaintextoffenders.com).
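
As a rough illustration of the “slow, salted hash” part, here is a sketch that assumes the htpasswd tool from apache2-utils is available and uses bcrypt, one common choice:

htpasswd -nbB alice 'correct horse battery staple'   # -B selects bcrypt: salted and deliberately slow
# prints something like alice:$2y$05$... – run it twice and the output differs, because the salt differs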

Users are supposed to pick a strong password that is unique to every site, change it regularly, never write it down, and carefully verify the identity of the site every time they enter their password. I don’t think anyone actually follows this, but it is the supposed “best practice”.

In enterprise environments there’s usually a pretty comprehensive single sign-on system, which helps massively, as users only need one good work password. And with just one authentication to protect, using multi-factor is more practical. But on the web we do not have single sign-on; every attempt from Passport, through SAML, OpenID and OAuth has failed to gain a critical mass.

But there is a technology that presents to users just like single sign-on, and that is a password manager with browser integration. Most of these can be told to generate a unique, strong password for every site, and rotate it periodically. This keeps you safe even in the event that a particular web site is not following best practice. And the browser integration ties a password to a particular domain, making phishing all but impossible. To be fair, there are risks with password managers – “putting all your eggs in one basket” – and they are particularly vulnerable to malware, which is the greatest threat at present.
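
For illustration, the kind of password a manager generates can be approximated from the command line (assuming OpenSSL is installed – this is a sketch, not a recommendation to manage passwords by hand):

openssl rand -base64 18    # ~144 bits of randomness; long and unique enough that only the manager needs to remember it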

But if we look at the technology available to us, it’s pretty clear that the current advice is barking up the wrong tree. We should be telling users to use a password manager, not remember loads of complex passwords. And sites could simply store an unsalted fast hash of the password, forget password strength rules and account lockouts.


A problem we have though is that banks tell customers never to write down passwords, and some explicitly include ‘storage on PC’ in this. Banking websites tend to disallow pasting into password fields, which also doesn’t help.

So what’s the solution? Do we go down the ‘all my eggs are in a nice secure basket’ route and use password managers?

I, like all the techies I know, use a password manager for everything. Of the 126 passwords I have in mine, I probably use 8 frequently. Another 20 monthly-ish. Some of the rest have been used only once or twice – and despite having a good memory for letters and numbers, I’m not going to be able to remember them, so this password manager is essential for me.

I want to be able to easily open my password manager, copy the password and paste it directly into the password field.

I definitely don’t want this password manager to be part of the browser, however: in the event of browser compromise I don’t want all my passwords to be vacuumed up. So while functional interaction like copy and paste is essential, I’d like separation of executables.

What do you think? Please comment below.

A short statement on the Heartbleed problem and its impact on common Internet users.

2014-04-11 by lucaskauffman. 2 comments

On the 7th of April 2014 a team of security engineers (Riku, Antti and Matti) at Codenomicon and Neel Mehta of Google Security published information on a security issue in OpenSSL. OpenSSL is a piece of software used in the encryption process; it helps encrypt your computer traffic so that unauthorized people cannot understand what you are sending from one computer network to another. It is used in many applications: for example, if you use on-line banking websites, code such as OpenSSL helps to ensure that your PIN code remains secret.

The information that was released caused great turmoil in the security community, and many panic buttons were pressed because of the widespread use of OpenSSL. If you are using a computer and the Internet, you might be impacted: people at home just as much as major corporations. OpenSSL is used, for example, in web, e-mail and VPN servers and even in some security appliances. However, the fact that you have been impacted does not mean you can no longer use your PC or any of its applications. You may be a little more vulnerable, but the end of the world may still be further away than you think.

First of all, some media reported on the “Heartbleed virus”. Heartbleed is in fact not a virus at all. You cannot be infected with it and you cannot protect against being infected. Instead it is an error in the computer programming code for specific OpenSSL versions (not all) which a hacker could potentially use to obtain information from the server (which could possibly include passwords and encryption keys, along with other random data in the server’s memory), potentially allowing him to break into a system or account.

Luckily, most applications in which OpenSSL is used rely on more security measures than OpenSSL alone. Most banks, for instance, continuously work to remain abreast of security issues, and have implemented several measures that lower the risk this vulnerability poses. An example of such a protective measure is transaction signing with an off-line card reader or other forms of two-factor authentication. Typically, exploiting the vulnerability on its own will not allow an attacker to post fraudulent transactions if you are using two-factor authentication or an offline token generator for transaction signing.

So in summary, does the Heartbleed vulnerability affect end-users? Yes, but not dramatically. A lot of the risk to the end-users can be lowered by following common-sense security principles:

  • Regularly change your on-line passwords (as soon as the websites you use let you know they have updated their software, this is worthwhile, but it should be part of your regular activity)
  • Ideally, do not use the same password for two on-line websites or applications
  • Keep the software on your computer up-to-date.
  • Do not perform on-line transactions on a public network (e.g. WiFi hotspots in an airport). Anyone could be trying to listen in.

Security Stack Exchange has a wide range of questions on Heartbleed ranging from detail on how it works to how to explain it to non-technical friends. 

Authors: Ben Van Erck, Lucas Kauffman

Stump the Chump with Auditd 01

2013-09-27 by scottpack. 1 comment

ServerFault user ewwhite describes a rather interesting situation regarding application distribution wherein code must be compiled in production. In short he wants to keep track of changes to a specific directory path and send alerts via email.

Let’s assume that there is already some basic form of auditd in play, so we’ll be building out a snippet to be inserted into your existing /etc/audit/audit.rules. Ed was sparse on some of the specifics related to the application, understandably so, so let’s make some additional assumptions. Let’s assume that the source code directory in question is “/opt/application/src” and that all binaries are installed into “/opt/application/bin”.

-w /opt/application/src -p w -k app-policy
-w /opt/application/bin -p wa -k app-policy
I’ve decided to process each directory, source and binaries, separately. The commonality between the two is the ‘-w’ and the ‘-k’ options. The ‘-w’ option says to watch that directory recursively. The ‘-k’ option says to add the text “app-policy” to the output log message; this is just to make log reviews easier. The ‘-p’ option is actually where the magic happens, and is the real reason to separate these two rules out.

As we discussed in the previous post, nearly 8 months ago, the option ‘-p w’ instructs the kernel to log on file writes. One would assume this is accomplished by attaching to the POSIX system call write. That syscall, however, gets called quite a lot when files are actually saved. So as not to overwhelm the logging system, auditd instead attaches to the open system call. By using the (w)rite argument we look for any instance of open that uses the O_WRONLY or O_RDWR flags. It’s worth noting that this does not mean a file was actually modified, only that it was opened in such a way that would allow for modification. For example, if a user opened “/opt/application/src/app.h” in a text editor a log would be generated; however, if it was written to the terminal using cat or read using less, no log would be generated. This is pretty important to remember, as many people will read a file using a text editor and simply exit without saving changes (hopefully).
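
If you want to convince yourself of that behaviour, a quick test might look like the following (a sketch only – it assumes the rule above is already loaded, that app.h exists, and scratch.tmp is just a throwaway file name):

cat /opt/application/src/app.h                  # read-only open: no app-policy record expected
echo test > /opt/application/src/scratch.tmp    # write open in the watched tree: should be recorded
ausearch -k app-policy --start recent           # list recent events carrying our key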

We also want to watch for file writes in the binary directory, except here we would expect them to be a more reliable signal. It would be rather unusual, but not out of the question, for someone to attempt to use a text editor to open an executable. In addition we added the (a)ttribute option. This will alert us if any of the ownership or access permissions change, most importantly if a file is changed to be executable or the ownership is changed. This will not catch SELinux context changes, but since SELinux uses the auditd logging engine, those changes will show up in the same log file.
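
Again as a sketch (the helper binary name is made up), attribute changes in the bin directory would be caught like this:

chmod +x /opt/application/bin/helper            # permission change: logged because of the (a) flag
chown root:root /opt/application/bin/helper     # ownership change: also logged under the same app-policy key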

Now that we have the rules constructed we can move on to the alerting. Ed wanted the events to be emailed out. This is actually quite a bit more complicated. By default auditd uses its own built-in logger. This does make some sense when you consider the type of logging this system is intended to perform. By not relying on an external logger, like syslogd or rsyslog, it can better survive mistaken configurations. On the downside it makes alternate logging setups trickier. There does exist a subsystem called ‘audispd’ that acts as a log multiplexor. There are a number of output plugins available, such as syslog, UNIX socket, prelude IDS, etc. None of them really do what Ed wants, so I think our best bet would be to run reports. Auditd is, after all, an auditing tool and not an enforcement tool. So let’s look at something a little different.
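
For completeness, the audispd syslog plugin is normally switched on by editing its plugin file – typically something like the snippet below, though the exact path varies by distribution – but that still only copies events to syslog; it doesn’t email anyone on its own:

# /etc/audisp/plugins.d/syslog.conf
active = yes
direction = out
path = builtin_syslog
type = builtin
args = LOG_INFO
format = string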

Remember how we tacked on ‘-k app-policy‘ to those rules above? Now we get to the why. Let’s try running the command:

aureport -k -ts yesterday 00:00:00 -te yesterday 23:59:59
We should now see a list of all of the logs that contain keys and occurred yesterday. Let’s look at a concrete example of me editing a file in that directory and the subsequent logs.
root@node1:~> mkdir -p /opt/application/src
root@node1:~> vim /opt/application/src/app.h
root@node1:~> aureport -k

Key Report
===============================================
# date time key success exe auid event
===============================================
1. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13446
2. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13445
3. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13447
4. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13448
5. 09/24/2013 11:41:29 app-policy yes /usr/bin/vim 1000 13449
6. 09/24/2013 11:41:35 app-policy yes /usr/bin/vim 1000 13451
7. 09/24/2013 11:41:35 app-policy yes /usr/bin/vim 1000 13450

The report tells us that at 11:41:29 on September the 24th a user ran the command “/usr/bin/vim” and triggered a rule labeled app-policy. It’s all good so far, but not very detailed. The last two fields, however, are quite useful. The first, 1000, is the UID of my personal account. That is important because, as you can see, I was actually running as root. Since I had originally used “sudo -i” to gain a root shell, my original UID was still preserved – this is good! The last field is a unique event ID generated by auditd. Let’s look at that first event, numbered 13446.
root@node1:~> grep :13446 /var/log/audit/audit.log
type=SYSCALL msg=audit(1380037289.364:13446): arch=c000003e syscall=2 success=yes exit=4 a0=bffa20 a1=c2 a2=180 a3=0 items=2 ppid=21950 pid=22277 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 ses=1226 tty=pts0 comm="vim" exe="/usr/bin/vim" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key="app-policy"
type=CWD msg=audit(1380037289.364:13446):  cwd="/root"
type=PATH msg=audit(1380037289.364:13446): item=0 name="/opt/application/src/" inode=2747242 dev=fd:01 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=unconfined_u:object_r:usr_t:s0
type=PATH msg=audit(1380037289.364:13446): item=1 name="/opt/application/src/.app.h.swx" inode=2747244 dev=fd:01 mode=0100600 ouid=0 ogid=0 rdev=00:00 obj=unconfined_u:object_r:usr_t:s0
This is what we mean when we say audit logs are verbose. In the introductory blog post we discussed some of those fields, so I’ll save us the pain of going over them again. What we can see, however, is that the user with uid 1000 (see auid=1000) ran the command vim as root (see euid=0) and that command resulted in a change to both “/opt/application/src/” and “/opt/application/src/.app.h.swx”.

What we should be able to see here is that the report generated by aureport doesn’t contain everything we need to see what happened, but it does tell us that something happened and gives us what we need to track down the details. In an ideal world you would have some kind of log aggregation system, like Splunk or a SIEM, and send the raw logs there. That system would then have all the alerting functionality built in to alert an admin to the potential policy violation. However, we don’t live in a perfect world, and Ed’s request for email alerts indicates he doesn’t have access to such a system. What I would do is set up a daily cron job to run that report for the previous day. Every morning the log reviewer can check their mailbox and see if any of those files changed when they weren’t supposed to. If daily isn’t reactive enough then we can simply change the values passed to ‘-ts’ and ‘-te’ and run the job more frequently.

Pulling it all together we should have something that looks like this.

#/etc/audit/audit.rules
# This file contains the auditctl rules that are loaded
# whenever the audit daemon is started via the initscripts.
# The rules are simply the parameters that would be passed
# to auditctl.

# First rule - delete all
-D

# Increase the buffers to survive stress events.
# Make this bigger for busy systems
-b 320

# Feel free to add below this line. See auditctl man page
-w /opt/application/src -p w -k app-policy
-w /opt/application/bin -p wa -k app-policy

#/etc/cron.d/audit-report
MAILTO=ewwhite@example.com

1 0 * * * root /sbin/aureport -k -ts yesterday 00:00:00 -te yesterday 23:59:59

QoTW #48: Difference between Privilege and Permission

2013-09-06 by roryalsop. 0 comments

Ali Ahmad asked, “What is the difference between Privilege and Permission?”

In many cases they seem to be used interchangeably, but in an IT environment knowing the meanings can have an impact on how you configure your systems.

D3C4FF’s top scoring answer references this question on English Stack Exchange, which does give the literal meanings:

A permission is a property of an object, such as a file. It says which agents are permitted to use the object, and what they are permitted to do (read it, modify it, etc.). A privilege is a property of an agent, such as a user. It lets the agent do things that are not ordinarily allowed. For example, there are privileges which allow an agent to access an object that it does not have permission to access, and privileges which allow an agent to perform maintenance functions such as restart the computer.

AJ Henderson supports this view:

…a user might be granted a privilege that corresponds to the permission being demanded, but that would really be semantics of some systems and isn’t always the case.

As does Gilles with this comment:

That distinction is common in the unix world, where we tend to say that a process has privileges (what they can or cannot do) and files have permissions (what can or cannot be done to them)
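
A rough unix-flavoured illustration of that distinction (just a sketch – the exact output depends on your system):

ls -l /etc/shadow    # permission: a property of the object (the file) – who may read or write it
sudo -l              # privilege: a property of the agent (the user) – what they may do beyond the ordinary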

Callum Wilson offers the more specific case under full Role Based Access Control (RBAC):

the permission is the ER link between the role, function and application, i.e. permissions are given to roles; the privilege is the ER link between an individual and the application, i.e. privileges are given to people.

And a further slight twist from KeithS:

A permission is asked for, a privilege is granted. When you think about the conversational use of the two words, the “proactive” use of a permission (the first action typically taken by any subject in a sentence within a context) is to ask for it; the “reactive” use (the second action taken in response to the first) is to grant it. By contrast, the proactive use of a privilege is to grant it, while the reactive use is to exercise it.

Permissions are situation-based; privileges are time-based. Again, the connotation of the two terms is that permission is something that must be requested and granted each and every time you perform the action, and whether it is granted can then be based on the circumstances of the situation. A privilege, however, is in regard to some specific action that the subject knows they are allowed to do, because they have been informed once that they may do this thing, and then the subject proceeds to do so without specifically asking, until they are told they can no longer do this thing.

Liked this question of the week? Interested in reading more detail, and other answers? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QoTW #44: How to block or detect user setting up their own personal wifi AP in our LAN?

2013-03-22 by roryalsop. 0 comments

Nominated by Terry Chia, this question by User15580 should be of interest to anyone managing the security of networks.

The answers show the variety of aspects security covers in this sort of scenario:

Daniel posted the top answer, and it has nothing to do with IT, but instead focuses on the cause – if a user has installed an access point it is because they need something the existing network is not providing. This is always worth considering:

Discuss with the users what they are trying to accomplish. Perhaps create an official wifi network (use all the security methods you wish – it will be ‘yours’). Or, better, two – Guest and Corporate WAPs.

Polynomial and Thomas Pornin also highlighted the fact this is a user/managerial problem, rather than a technical one.

Remember Immutable Law of Security #10: Technology is not a panacea. Whilst technology can do some amazing things, it can’t enforce user behaviour. You have a user that is bringing undue risk to the organisation, and that risk needs to be dealt with. The solution to your problem is _policy_, not technology. Set up a security policy that details explicitly disallowed behaviours, and have your users sign it. If they violate that policy, you can go to your superiors with evidence of the violation and a penalty can be enforced.

As long as the users have physical access to the machines they use and their USB ports (that’s hard to avoid, unless you pour glue in all the USB ports…) and the installed operating systems allow it (then again, hard to avoid if users are “administrators” on their systems, in particular in BYOD contexts), then the users can set up custom access points which give access to, at least, their machine.

Rory McCune provided some information on the types of solutions which generally are used in large corporates, where they work well, including NAC and port lockdowns. Lie Ryan‘s comments tend to be appropriate on smaller networks.

k1DBLITZ also focuses on the use of technical solutions in addition to policy, and JasperWallace recommends looking for and blocking unapproved MAC addresses, and further answers discuss wireless scanning and scripted checks.
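
As a sketch of the scripted-check idea (assuming arp-scan is installed; approved-macs.txt is a hypothetical, sorted whitelist of known MAC addresses):

arp-scan --localnet | awk '{print $2}' | grep -Ei '^([0-9a-f]{2}:){5}[0-9a-f]{2}$' | sort -u > seen-macs.txt
comm -23 seen-macs.txt approved-macs.txt    # anything printed here is on the LAN but not on the approved list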

Overall, it would seem that a mixture of technical and management controls is required – the balance depending on your specific environment.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #29: Risks of giving developers admin rights to their own PCs

2012-06-08 by roryalsop. 3 comments

Carolinegordon asked Question of the Week number 29 to try and understand what risks are posed by giving developers admin rights to their machines, as it is something many developers expect in order to be able to use their machines effectively, but that security or IT may deny based on company policy.

Interestingly, for a question asked on a security site, the highest voted answers all agree that developers should be given admin rights:

alanbarber listed two very good points – developer toolsets are frequently updated, so the IT load for implementing updates can be high, and debugging tools may need admin rights. His third point, that developers are more security conscious, I’m not so sure about. I tend to think developers are just like other people – some are good at security, some are bad.

Bruno answered along similar lines, but also added the human aspect in two important areas. Giving developers and sysadmins different levels of access can lead to a divide, and a them-and-us culture, which may impact productivity. Additionally, as developers tend to be skilled in their particular platform, you run the risk of them getting around your controls anyway – which could open up wider risks.

DKNUCKLES made some strong points on what could happen if developers have admin rights:

  • Violation of security practices – especially the usual rule of least privilege
  • Legal violations – you may be liable if you don’t protect code/data appropriately (a grey area at best here, but worth thinking about)
  • Installation of malware – deliberately or accidentally

wrb posted a short answer, but with an important key concept:

The development environment is always isolated from the main network. It is IT’s job to make sure you provide them with whatever setup they need while making sure nothing in the dev environment can harm the main network. Plan ahead and work with management to buy the equipment/software you need to accomplish this.

Todd Dill has a viewpoint which I see a lot in the regulated industries I work in most often – there could be a regulatory requirement which specifies the separation between developers and administrator access. Admittedly this is usually managed by strongly segregating Development, Testing, Staging and Live environments, as at the end of the day there is a business requirement that developers can do their job and deliver application code that works in the timelines required.

Daniel Azuelos came at it with a very practical approach, which is to ask what the difference in risk is between the two scenarios. As these developers are expected to be skilled, and have physical access to their computers, they could in theory run whatever applications they want to, so taking the view that preventing admin access protects from the “evil inside” is a false risk reduction.

This question also generated a large number of highly rated comments, some of which may be more tongue in cheek than others:

  • The biggest risk is that the developers would actually be able to get some work done.
  • Explain to them that the biggest security risk to their network is an angry developer …or just let them learn that the hard way.
  • It should be noted that access to machine hardware is the same as granting admin rights in security terms. A smart malicious agent can easily transform one into the other.
  • If you can attach a debugger to a process you don’t own, you can make that process do anything you want. So debugging rights = admin.

My summary of the various points:

While segregating and limiting access is a good security tenet, practicality must rule – developers need the functionality to produce applications and code to support the business, and they often have the skills to get around permissions, so why not accept that they need admin rights to the development environment, but restrict them elsewhere?

This is an excellent question, as it not only generated interest from people on both sides of the argument, but they produced well thought out answers which helped the questioner and are of value to others who find themselves in the same boat.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #6: When the sysadmin leaves…

2011-08-19 by ninefingers. 1 comment

When we talk about the security of systems, a lot of focus and attention goes on the external attacker. In fact, we often think of “attacker” as some form of abstract concept, usually “some guy just out causing mischief”. However, the real goal of security is to keep a system running, and internal, trusted users can upset that goal more easily and more completely than any external attacker – especially a systems administrator. These are, after all, the men and women you ask to set up your system and defend it.

So we Security SE members took particular interest when user Toby asked: when the systems admin leaves, what extra precautions should be taken? Toby wanted to know what the best practice was, especially where a sysadmin was entrusted with critical credentials.

A very good question, and it did not take long for a reference to Terry Childs to appear, the San Francisco FiberWAN administrator who, in response to a disciplinary action, refused to divulge critical passwords to routing equipment. Clearly, no company wants to find itself locked out of its own network when a sysadmin leaves, whether the split was amicable or not. So what can we do?

Where possible, ensure “super user” privileges can be revoked when the user account is shut down.

From Per Wiklander. The most popular answer by votes so far also makes a lot of sense. The answer itself references the typical Ubuntu setup, where the root password is deliberately unknown to even the installer and users are part of a group of administrators, such that if their account is removed, so is their administrator access. Using this level of granular control you can also downgrade access temporarily, e.g. to allow the systems admin to do non-administrative work prior to leaving.
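
On a sudo-group based setup like that, the revocation itself is a one-liner (a sketch – “jsmith” is a hypothetical account name):

gpasswd -d jsmith sudo    # remove the departing admin from the sudo group; the account still works, the admin rights do not
usermod -L jsmith         # or lock the account outright once they have actually left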

However, other users soon pointed out a potential flaw in this kind of technical solution – there is no way to prevent someone with administrative access from changing this setup, or adding additional back doors, especially if they set up the box. A further point raised was the need not just to have a corporate policy on installation and maintenance, but also to ensure it is followed and regularly checked.

Credential control and review matters – make sure you shut down all the non-centralized accounts too.

Mark Davidson‘s answer focused heavily on the need for properly revoking credentials after an administrator leaves. Specifically, he suggests:

  • Do revoke all credentials belonging to that user.
  • Change any root password/Administrator user password of which they are aware.
  • Access to other services on behalf of the business, like hosting details, should also be altered.
  • Ensure any remote access is properly shut down – including incoming IP addresses.
  • Logging and monitoring access attempts should happen to ensure that any possible intrusions are caught.

Security SE moderator AviD was particularly pleased with the last suggestion and added that it is not just administrator accounts that need to be monitored – good practice suggests monitoring all accounts.

It is not just about passwords – do not forget the physical/social dimension.

this.josh raises a critical point. So far, all suggestions have been technical, but there are many other issues that might present a window of opportunity to a sysadmin with a mind to cause chaos. From the answer itself:

  • Physical access is important too. Don’t just collect badges; ensure that any access to cabinets, server rooms, drawers etc. is properly revoked. The company should know who has keys to what and why, so that this is easy to check when the sysadmin leaves.
  • Ensure the phone/voicemail system is included in the above checks.
  • If the sysadmin could buy on behalf of the company, ensure that by the time they have left, they no longer have authority to do this. Similarly, ensure any contracts the employee has signed have been reviewed – ideally, just as you log sysadmin activities, you are also checking contracts as they are signed.
  • Check critical systems, including update status, to ensure they are still running and haven’t been tampered with.
  • My personal favorite: let people know. As this.josh points out, it is very easy as a former employee knowing all the systems to persuade someone to give you access, or give you information that may give you access. Make sure everyone understands that the individual has left and that any suspicious requests are highlighted or queried and not just granted.

Conclusions

So far, that’s it for answers to the question, and no answer has yet been accepted, so if you think anything is missing, please feel free to sign up and present your own answer.

In summary, there appears to be a set of common issues being raised in the discussion. Firstly: technical measures help, but cannot account for every circumstance. Good corporate policy and planning is needed. A company should know what systems a systems administrator has access to, be they virtual or physical, as well as being able to audit access to these systems both during the employee’s tenure and after they have left. Purchasing mechanisms and other critical business functions such as contracts are just other systems and need exactly the same level of oversight.

I did leave one observation out from one of the three answers above: that it depends on whether the departing employee is leaving amicably or not. That is certainly a factor in whether there is likely to be a problem, but a proper process is needed anyway, because the risk to your systems and your business is huge. The difference in this scenario is around closing critical access in a timely manner. In practice this includes:

  • Escorting the employee off the premises
  • Taking their belongings to them once they are off premises
  • Removing all logical access rights before they are allowed to leave

These steps reduce the risk that the employee could trigger malicious code or steal confidential information.