
Business Continuity is concerned with information security risks and impacts

2015-08-02 by lucaskauffman. 0 comments

A Business Continuity Programme (BCP) is primarily concerned with the business functions and operations that are critically important to achieving the organization’s operational objectives. It seeks to reduce the impact of a disaster condition before the condition occurs. Buy-in from top-level management is required, as each function defined in the business must be reviewed to ensure all key personnel are identified. Why would a business require a BCP?


The BCP ensures the business can continue in the face of (un)foreseen circumstances. The best way to motivate top-level management to support the BCP is to set up a risk/reward overview and use examples of what can happen when you do not have a BCP in place. The most important question to ask is: “If we (partially) shut down the business for x amount of time, how much money would this cost, both short term (direct business loss) and long term (indirect business loss from reputational damage)?” Losing critical systems, processes or data because of an interruption in the business could send an organization into a financial tailspin.

The main concern of a BCP is to ensure that the availability of the business is maintained, but confidentiality and integrity should also be addressed within the Business Continuity Plan. In terms of availability, the risk to business continuity is often explained as a service interruption on a critical system: a bank’s payment gateway goes down, for example, preventing transactions from occurring. The short- and long-term impacts are financial losses from the bank being unable to process transactions, and clients becoming increasingly dissatisfied. A confidentiality concern in a BCP could be, for example, the transfer of personal data during disaster recovery. One objective of disaster recovery is to minimize risk to the organization during recovery, so there should be a baseline set of documented access controls to use during recovery activities; these are necessary to prevent intrusions and data breaches while the recovery is under way. The impact here can be reputational as well as financial. If a competing company can, for example, obtain a set of investment strategies, it could invest against them, resulting in significant financial losses and even bankruptcy.

Integrity of information means that it is accurate and reliable and has not been tampered with by an unauthorized party. It is important, for example, that the integrity of each customer’s data, as well as of information originating from third parties, can be ensured. As an example of the impact of an integrity violation: if a bank cannot rely on the integrity of its data and authorizes transactions to a nation or person on a sanctions list (a list originating from a third party), it could be heavily fined and might even lose its banking license.

A BCP goes wider than just impacts; it also addresses risks. A business impact analysis (BIA) is performed to understand which business processes are important. These “critical” business processes are given special protection in the framework of business continuity management, and precautions are taken in case of a crisis. “Critical” in the sense of business continuity management means “time-critical”: such a process must be restored to operation quickly, because otherwise serious damage to the organisation can be expected. While the BIA answers the question of what effects the failure of a process will have on the organisation, it is also necessary to know what the possible causes of that failure could be. Risks at the process level as well as at the resource level need to be examined. A risk at the process level could, for example, be the failure of one or more critical resources; a risk analysis at the resource level then looks for the possible causes of the failure of those critical resources.

BCP relies on both impact and risk assessments, but making a risk assessment without an impact assessment is difficult. ISO 22301 requires a risk assessment process to be present. The goal of this requirement is to establish, implement, and maintain a formal documented risk assessment process that systematically identifies, analyses, and evaluates the risk of disruptive incidents to the organization.

I want to conclude by stating that risk analysis and business impact analysis (BIA) are cornerstones in understanding the threats, vulnerabilities and mission-critical functions of the organization, and are thus required if one wants to discover the business’s critical processes and prioritize them correctly.

Is our entire password strategy flawed?

2014-06-19 by roryalsop. 8 comments

paj28 posed a question that really fits better here as a blog post:

Security Stack Exchange gets a lot of questions about password strength, password best practices, attacks on passwords, and there’s quite a lot for both users and sites to do, to stay in line with “best practice”.

Web sites need a password strength policy, account lockout policy, and secure password storage with a slow, salted hash. Some of these requirements have usability impacts, denial of service risks, and other drawbacks. And it’s generally not possible for users to tell whether a site actually does all this (hence plaintextoffenders.com).

Users are supposed to pick a strong password that is unique to every site, change it regularly, never write it down, and carefully verify the identity of the site every time they enter their password. I don’t think anyone actually follows this, but it is the supposed “best practice”.

In enterprise environments there’s usually a pretty comprehensive single sign-on system, which helps massively, as users only need one good work password. And with just one authentication to protect, using multi-factor is more practical. But on the web we do not have single sign-on; every attempt from Passport, through SAML, OpenID and OAuth has failed to gain a critical mass.

But there is a technology that presents to users just like single sign-on, and that is a password manager with browser integration. Most of these can be told to generate a unique, strong password for every site, and rotate it periodically. This keeps you safe even in the event that a particular web site is not following best practice. And the browser integration ties a password to a particular domain, making phishing all but impossible. To be fair, there are risks with password managers “putting all your eggs in one basket”, and they are particularly vulnerable to malware, which is the greatest threat at present.

But if we look at the technology available to us, it’s pretty clear that the current advice is barking up the wrong tree. We should be telling users to use a password manager, not remember loads of complex passwords. And sites could simply store an unsalted fast hash of the password, forget password strength rules and account lockouts.
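
(As an aside, for readers wondering what the “slow, salted hash” mentioned above looks like in practice, here is a minimal sketch using scrypt from Python’s standard library. The cost parameters are illustrative assumptions, not a tuning recommendation.)

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a slow, salted hash of a password; the n/r/p costs are illustrative."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```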


A problem we have, though, is that banks tell customers never to write down passwords, and some explicitly include ‘storage on PC’ in this. Banking websites tend to disallow pasting into password fields, which also doesn’t help.

So what’s the solution? Do we go down the ‘all my eggs are in a nice secure basket’ route and use password managers?

I, like all the techies I know, use a password manager for everything. Of the 126 passwords I have in mine, I probably use 8 frequently. Another 20 monthly-ish. Some of the rest of them have been used only once or twice – and despite having a good memory for letters and numbers, I’m not going to be able to remember them so this password manager is essential for me.

I want to be able to easily open my password manager, copy the password and paste it directly into the password field.

I definitely don’t want this password manager to be part of the browser, however, as in the event of browser compromise I don’t wish all my passwords to be vacuumed up. So while functional interaction like copy and paste is essential, I’d like separation of executables.

What do you think – please comment below.

Communicating Security Risks to Senior Management – 3 years on

2014-04-04 by roryalsop. 0 comments

Back in July 2011 I wrote this brief blog post on the eternal problem of how to bridge the divide between security professionals and senior management.

Thought I’d revisit it nearly 3 years on and hopefully be able to point out ways in which the industry is improving…

Actually, I would have to say it is improving. I am seeing risk functions in larger organisations, especially in the Financial Services industry, with structures that are designed explicitly to oversee technology and report through a CRO to the board, and in the UK, the regulator is providing strong encouragement for financial institutions to get this right. CEOs and shareholders are also seeing the impact of security breaches, denial of service attacks, defacements etc., so now is a very receptive time.

But there is still a long way to go. Many organisations still do not have anyone at board level who can speak the language of business or operational risk and the language of information security or IT risk. So it may have to be you –

The points at the end of my previous blog, with good questions to help you prepare for conversations with senior management, are still valid:

If there is one small piece of advice I’d give to anyone in the security profession it would be to learn about business and operational risk – even if it is just a single course it will help!

QoTW #41: Why do we lock our computers?

2012-11-30 by roryalsop. 2 comments

Iszi chose this week’s question of the week, Tom Marthenal‘s “Why do we lock our computers?” – as Tom puts it:

It’s common knowledge that if somebody has physical access to your machine they can do whatever they want with it, so what is the point?

This one attracted a lot of views, as it is a simple question of interest to everyone.

Both Bruce Ediger and Polynomial answered with the core reason – it removes the risk from the casual attacker while costing the user next to nothing! This is an essential factor in cost/usability tradeoffs for security. From Bruce:

The value of locking is somewhat larger than the price of locking it. Sort of like how in good neighborhoods, you don’t need to lock your front door. In most neighborhoods, you do lock your front door, but anyone with a hammer, a large rock or a brick could get in through the windows.

and from Polynomial:

An attacker with a short window of opportunity (e.g. whilst you’re out getting coffee) must be prevented at minimum cost to you as a user, in such a way that makes it non-trivial to bypass under tight time constraints.

Kaz pointed out another essential point, traceability:

If you don’t lock, it is easy for someone to poke around inside your session in such a way that you will not notice it when you return to your machine.

And zzzzBov added this in a comment:

…few bystanders would question someone walking up to a house and entering through the front door. The assumption is that the person entering it has a reason to. If a bystander watches someone break into a window, they’re much more likely to call the authorities. This is analogous with sitting down at a computer that’s unlocked, vs physically hacking into the system after crawling under a desk.

It removes a large percentage of possible attacks – those from your co-workers wanting to mess with your stuff – thanks enedene.

So – protect yourself from co-workers, casual snooping and pilfering and other mischief by simply locking your machine every time you leave your desk!

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #35: Dealing with excessive “Carding” attempts

2012-09-14 by scottpack. 0 comments

Community moderator Jeff Ferland nominated this week’s question: Dealing with excessive “Carding” attempts.

I found this to be an interesting question for two reasons:

  1. It turned the classic password brute force on its head by applying it to credit cards
  2. It attracted the attention of a large number of relatively new users

Jeff Ferland postulated that the website was too helpful with its error codes and recommended returning the same “Transaction Failed” message no matter the error.

User w.c suggested using some kind of additional verification like a CAPTCHA. Also mentioned was the notion of instituting time delays for multiple successive CAPTCHA or transaction failures.

A slightly different tack was discussed by GdD. Instead of suggesting specific mitigations, GdD pointed out the inevitability of the attackers adapting to whatever protections are put in place. The recommendation was to make sure that you keep adapting in turn and force the attackers into your cat and mouse game.

Ajacian81 felt that the attacker’s purpose may not be to find valid numbers at all, but rather to perform a payment-processing denial of service. The suggested fix was to randomize the names of the input fields in an effort to prevent scripting the site.

John Deters described a company that had previously had the same problem. They transferred the problem entirely to their payment processor by having the processor automatically decline all charges below a certain threshold. He also pointed out that the FBI may be interested in the situation and should be notified; this, of course, assumes US jurisdiction.

Both ddyer and Winston Ewert suggested different ways of instituting artificial delays into the processing. Winston discussed outright delaying all transactions while ddyer discussed automated detection of “suspicious” transactions and blocking further transactions from that host for some time period.
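
To make the delay-based suggestions concrete, here is a minimal sketch of the kind of per-source throttling ddyer and Winston Ewert describe. The threshold, window and in-memory store are illustrative assumptions; a real gateway would persist this state and key it on more than the source IP.

```python
import time
from collections import defaultdict

FAILURE_THRESHOLD = 3   # failures tolerated per window (illustrative)
WINDOW_SECONDS = 300    # sliding window for counting failures (illustrative)

_failures: dict[str, list[float]] = defaultdict(list)

def allow_transaction(source_ip: str) -> bool:
    """Refuse further attempts from a source with too many recent failures."""
    now = time.time()
    recent = [t for t in _failures[source_ip] if now - t < WINDOW_SECONDS]
    _failures[source_ip] = recent
    return len(recent) < FAILURE_THRESHOLD

def record_failure(source_ip: str) -> None:
    """Call this whenever a card transaction is declined."""
    _failures[source_ip].append(time.time())
```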

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

Confidentiality, Integrity, Availability: The three components of the CIA Triad

2012-08-20 by Terry Chia. 2 comments

In this post, I shall be exploring one of the fundamental concepts of security that should be familiar to most security professionals and students: the CIA triad.

What is the CIA triad? No, CIA in this case is not referring to the Central Intelligence Agency. CIA refers to Confidentiality, Integrity and Availability. Confidentiality of information, integrity of information and availability of information. Many security measures are designed to protect one or more facets of the CIA triad. I shall be exploring some of them in this post.

Confidentiality

When we talk about confidentiality of information, we are talking about protecting the information from disclosure to unauthorized parties.

Information has value, especially in today’s world. Bank account statements, personal information, credit card numbers, trade secrets, government documents: everyone has information they wish to keep secret. Protecting such information is a major part of information security.

A key component of protecting information confidentiality is encryption. Encryption ensures that only the right people (those who know the key) can read the information. Encryption is VERY widespread in today’s environment and can be found in almost every major protocol in use. A prominent example is SSL/TLS, a security protocol for communications over the internet that is used in conjunction with a large number of other internet protocols to ensure security.
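
As a toy illustration of encryption protecting confidentiality, here is a sketch using the Fernet recipe from the third-party Python cryptography package (an assumed dependency); real systems also need careful key management, which is glossed over entirely here.

```python
from cryptography.fernet import Fernet

# The key must be generated once and stored securely:
# anyone who holds it can read every message encrypted under it.
key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"account number: 12-345-678")  # ciphertext, safe to store or transmit
print(f.decrypt(token))                           # only key holders recover the plaintext
```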

Other ways to ensure information confidentiality include enforcing file permissions and access control lists to restrict access to sensitive information.

Keeping valuable algorithms secret

This is an excellent question on Security.Stackexchange that covers how to keep important information confidential. Similar questions can be found here.

Integrity

Integrity of information refers to protecting information from being modified by unauthorized parties.

Information only has value if it is correct. Information that has been tampered with could prove costly. For example, if you were sending an online money transfer for $100, but the information was tampered with in such a way that you actually sent $10,000, it could prove very costly for you.

As with data confidentiality, cryptography plays a major role in ensuring data integrity. A commonly used method is to hash the data you receive and compare it with the hash of the original message. However, this means that the hash of the original data must be provided to you in a secure fashion. A more convenient approach is to use an existing scheme such as GPG to digitally sign the data.
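
For illustration, here is a minimal sketch of the hash-and-compare approach described above, using Python’s standard library. It assumes the reference hash reached you over a trusted channel, which is exactly the hard part in practice.

```python
import hashlib
import hmac

def verify_integrity(data: bytes, expected_sha256_hex: str) -> bool:
    """Hash the received data and compare it with the trusted reference hash."""
    actual = hashlib.sha256(data).hexdigest()
    return hmac.compare_digest(actual, expected_sha256_hex)

# The reference value would normally come from the publisher, e.g. signed release notes.
payload = b"example message"
reference = hashlib.sha256(payload).hexdigest()  # stand-in for the published hash
assert verify_integrity(payload, reference)
```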

Why aren’t application downloads routinely done over HTTPS?

This is a question regarding data integrity, with several suggestions on how to protect data integrity. You can find more questions with the integrity tag here.

Availability

Availability of information refers to ensuring that authorized parties are able to access the information when needed.

Information only has value if the right people can access it at the right times. Denying access to information has become a very common attack nowadays. Almost every week you can find news about high-profile websites being taken down by DDoS attacks. The primary aim of a DDoS attack is to deny users of the website access to its resources, and such downtime can be very costly. Other factors that could lead to a lack of availability of important information include accidents such as power outages, or natural disasters such as floods.

How does one ensure data availability? Backup is key. Regularly doing off-site backups can limit the damage caused by hard drive failures or natural disasters. For information services that are highly critical, redundancy might be appropriate: having an off-site location ready to restore services will heavily reduce the downtime should anything happen to your primary data centers.
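
As a small illustration of the backup idea, here is a sketch that writes a timestamped archive to a second location. The paths are hypothetical, and a real deployment would add rotation, restore testing and genuinely off-site storage.

```python
import tarfile
import time
from pathlib import Path

def backup(source: Path, destination: Path) -> Path:
    """Pack `source` into a timestamped .tar.gz archive under `destination`."""
    destination.mkdir(parents=True, exist_ok=True)
    archive = destination / f"backup-{time.strftime('%Y%m%d-%H%M%S')}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=source.name)
    return archive

# Hypothetical paths; point the destination at an off-site mount in practice.
backup(Path("/var/www/data"), Path("/mnt/offsite/backups"))
```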

Conclusion

The CIA triad is a fundamental concept in security, and ensuring that its three facets are protected is an important step in designing any secure system. However, it has been suggested that the CIA triad is not enough. Alternative models such as the Parkerian hexad (Confidentiality, Possession or Control, Integrity, Authenticity, Availability and Utility) have been proposed, and other factors besides the three facets of the CIA triad can be very important in certain scenarios, such as non-repudiation. There have been debates over the pros and cons of such alternative models, but that is a post for another time.

Thank you for reading.

QotW #29: Risks of giving developers admin rights to their own PCs

2012-06-08 by roryalsop. 3 comments

Carolinegordon asked Question of the Week number 29 to try to understand the risks posed by giving developers admin rights to their own machines: something many developers expect in order to use their machines effectively, but which security or IT may deny based on company policy.

Interestingly, for a question asked on a security site, the highest voted answers all agree that developers should be given admin rights:

alanbarber listed two very good points – developer toolsets are frequently updated, so the IT load for implementing updates can be high, and debugging tools may need admin rights. His third point, that developers are more security conscious, I’m not so sure about. I tend to think developers are just like other people – some are good at security, some are bad.

Bruno answered along similar lines, but also added the human aspect in two important areas. Denying developers rights that sysadmins enjoy can lead to a divide, and a them-and-us culture, which may impact productivity. Additionally, as developers tend to be skilled in their particular platform, you run the risk of them getting around your controls anyway – which could open up wider risks.

DKNUCKLES made some strong points on what could happen if developers have admin rights:

  • Violation of security practices – especially the usual rule of least privilege
  • Legal violations – you may be liable if you don’t protect code/data appropriately (a grey area at best here, but worth thinking about)
  • Installation of malware – deliberately or accidentally

wrb posted a short answer, but with a key concept:

The development environment is always isolated from the main network. It is IT’s job to make sure you provide them with whatever setup they need, while making sure nothing in the dev environment can harm the main network. Plan ahead and work with management to buy the equipment/software you need to accomplish this.

Todd Dill has a viewpoint I see a lot in the regulated industries where I most often work – there could be a regulatory requirement specifying separation between developer and administrator access. Admittedly this is usually managed by strongly segregating Development, Testing, Staging and Live environments, as at the end of the day there is a business requirement that developers can do their job and deliver application code that works in the timelines required.

Daniel Azuelos came at it with a very practical approach, which is to ask what the difference in risk is between the two scenarios. As these developers are expected to be skilled, and have physical access to their computers, they could in theory run whatever applications they want to, so taking the view that preventing admin access protects from the “evil inside” is a false risk reduction.

This question also generated a large number of highly rated comments, some of which may be more tongue in cheek than others:

  • The biggest risk is that the developers would actually be able to get some work done.
  • Explain to them that the biggest security risk to their network is an angry developer… or just let them learn that the hard way.
  • It should be noted that access to machine hardware is the same as granting admin rights in security terms. A smart malicious agent can easily transform one into the other.
  • If you can attach a debugger to a process you don’t own, you can make that process do anything you want. So debugging rights = admin.

My summary of the various points:

While segregating and limiting access is a good security tenet, practicality must rule: developers need the functionality to produce applications and code to support the business, and often have the skills to get around permissions, so why not accept that they need admin rights to the development environment, but restrict them elsewhere?

This is an excellent question, as it not only generated interest from people on both sides of the argument, but they produced well thought out answers which helped the questioner and are of value to others who find themselves in the same boat.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #27: Open Source vs Closed Source Systems

2012-05-25 by ninefingers. 0 comments

Question of the Week number 27 is a contentious, hotly debated issue in the software world. The question itself was posed by Security.SE user blunders, who quoted the argument often used in the defence of open source as a business model:

My understanding is that open source systems are commonly believed to be more secure than closed source systems. … A common position against closed source systems is that a lack of awareness is at best a weak security measure; commonly referred to as security through obscurity.

and then we hit the question itself:

Question is, are open source systems on average better for security than closed source systems?

We’ll begin with the top voted answer by SE user Jesper Mortensen, who explained that the whole notion of being able to generally compare open versus closed source systems is a bad one when there are so many other factors involved. To compare two systems you really need to look beyond the licensing model they use, and look at other factors too. I’ll quote Jesper’s list in its entirety:

  • Licenses.
  • Access to source code.
  • Very different incentive structures, for-profit versus for fun.
  • Very different legal liability situations.
  • Different, and wildly varying, team sizes and team skillsets.

Of course, this is by no means a complete treatment of the possible differences.

Jesper also highlighted the importance of comparing pieces of software that solve specific domain issues – not software in general. You have to do this to even remotely begin to utilise the list above.

Security.SE legend Thomas Pornin also answered this question. I’ll begin coverage of his answer with his summary:

… the “opensource implies security” idea is overrated. What is important is the time (and skill) devoted to the tracking and fixing of security issues, and this is mostly orthogonal to the question of openness of the source.

The main thrust of Thomas’ answer was that, actually, maintained software is more secure than unmaintained software. As an example, Thomas cited OpenSSL remote execution bugs that had been left lying in the code tree unfixed for some time – highlighting a possible advantage of closed source systems: when software is developed by companies, the effort and time spent on QA is generally higher than on open source projects.

Thomas’ answer also covers the counterpoint to this – that closed source systems can easily conceal security issues, too, and that having the source allows you to convince yourself of security more easily.

The next answer was provided by Ori, who lists a set of premises used for justifying the security of open source:

  1. The Customization premise
  2. The License Management premise
  3. The Open Format premise
  4. The Many Eyes premise
  5. The Quick Fix premise

As Ori rightly says, the customization premise means a company can take an open source platform and add an additional set of security controls. Ori quotes NSA’s SELinux as an example of such a project. For companies with the time and money to produce such platforms and make such fixes, this is clearly an advantage for open source systems.

Ori covers the license management and open format arguments from a compliance and resilience perspective. Using open source software (and making modifications to it) is subject to certain license constraints, and the potential to violate those constraints is clearly a risk to the business. Likewise, for business continuity purposes, the ability to avoid being locked in to a specific platform is a huge win for any company.

Finally, an answer by yours truly. The major thrust of my answer is succinctly summarised by AviD‘s comment on it:

I’ve always proposed an amendment to Linus’ Law: “Given enough trained eyeballs, most bugs are relatively shallow”

I explained, through use of a rather intriguing vulnerability introduced into development kernels by a compiler bug, that having the knowledge to detect these issues is critical to security. The source being available does not directly guarantee you have the knowledge to detect such issues.

That’s it for answers. As you can see, none of us took sides generally on the “open versus closed” debate, instead pointing out that there are many factors to consider beyond the license under which source is available. I think the whole set of answers is best summarised by this.josh‘s comment on the top voted answer – so I’ll leave you with that:

I agree. What matters most is how many people with knowledge and experience in the security domain actively design, implement, test, and maintain the software. Any project where no-one is looking at security will have significant vulnerabilities, regardless of how many people are on the project.

Liked this question of the week? Interested in reading it or adding an answer? See the question in full. Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #21: What should I do when my boss asks me to fabricate audit log data?

2012-03-30 by roryalsop. 0 comments

Asked over on Programmers in January, this question is our 5th highest rated of all time, so it’s obviously resonating with our community.

With businesses the world over reliant on the accuracy, availability and integrity of IT systems and data, this type of request demonstrates not only unethical behaviour, but a willingness on the part of the boss to sacrifice the building blocks used to ensure their business can continue.

Behaviour like this, if discovered by an audit team, could lead to much wider and deeper audits being conducted to reassure them that the financial records haven’t been tampered with – never mind the possible legal repercussions!

Some key suggestions from our community:

MarkJ suggested getting it in writing before doing it, but despite this being the top answer, having the order in writing will not absolve you from blame if you do actually go ahead with the action.

Johnnyboats advised contacting the auditor, ethics officer or internal counsel, as they should be in a position to manage the matter. In a small company these roles may not exist, however, or there may be pressure put upon you to just toe the line.

Iszi covered off a key point – knowing about the boss’s proposed unethical behaviour and not reporting the order could potentially put you at risk of being an accessory. He suggests not only getting the order in writing, but contacting legal counsel.

Sorin pointed out that as getting the order in writing may be difficult, especially if the boss knows how unethical it is, the only realistic option may just be to CYA as best you can and leave quietly without making a fuss.

Arjang came at this from the other side – perhaps the boss needs help:

This is not just a case of doing or not doing something wrong cause someone asked you to do it, it is a case of making them realize what they are asking

I am pretty cynical so I’m not sure how you’d do this, but I do like the possibility that the best course of action may be to provide moral guidance and help the boss stop cheating.

Most answers agree on the key points:

  • Don’t make the requested changes – it’s not worth compromising your professional integrity, or getting deeper involved in what could become a very messy legal situation.
  • Record the order – so it won’t just be your word against his, if it comes into question.
  • Get legal counsel – they can provide advice at each stage.
  • Leave the company – the original poster was planning to leave in a month or so anyway, but even if this wasn’t the case, an unethical culture is no place to have your career.

The decision you will have to make is how you report this. At the end of the day, security professionals encounter this sort of thing far too regularly, so we must adhere to an ethical code. In fact, some professional security certifications, like the CISSP, require it!

Liked this question of the week? Have questions of a security nature of your own? Security expert and want to help others? Come and join us at security.stackexchange.com.

QotW #16: When businesses don’t protect your data…

2012-01-27 by roryalsop. 0 comments

This week’s blog post was inspired by camokatu‘s question on what to do when a Utility company doesn’t hash passwords in their database.

It seems the utility company couldn’t understand the benefit of hashing the passwords. Wizzard0 listed some reasons why they might not want to implement protection – added complexity, implementation and test costs, changes in procedures, etc. – and this is often the key battle. If a company doesn’t see this as a risk they want to remediate, nothing will get done. And to be fair, this is the way business risk should be managed; here, however, it appears that the company just hasn’t understood the risks or isn’t aware of them.

Obviously the consequences of this can range from minimal to disastrous, so most of the answers concentrate on consequences which could negatively impact the customer, and the main one of these is where the database includes financial information such as accounts, banking details or credit card details.

The key point, raised by Iszi, is that if personally identifiable information is held, it must be protected in most jurisdictions (under data protection acts), and if credit card details are held, the Payment Card Industry Data Security Standard (PCI-DSS) requires them to be protected. (For further background information check out the answers to this question on industry best practices.) These regulations tend to be enforced through fines, and the PCI can remove a company’s ability to accept credit card payments if it fails to meet PCI-DSS.

Does the company realise they can be fined or lose credit card payments? Maybe they do but have decided that is an acceptable risk, but I’d be tempted to say in this case that they just don’t appear to get it.

So when they don’t get it, don’t care, or won’t respond in a way that protects you, the customer, what are your next steps?

from tdammers – responsible disclosure:

Contact the company, offer to keep the vulnerability quiet for a limited amount of time, giving them an opportunity to fix it.

In the meantime, make sure you’re not using the compromised password anywhere else, make sure you don’t have any valuable information stored on their systems, and if you can afford to, cancel your account.

from userunknown:

contact their marketing team and explain what a PR disaster it would be if the media learnt about it (no, I’m not suggesting blackmail…:-)

from drjimbob – 3 excellent suggestions:

Submit it to plaintext offenders?

Switch to another utility company?

Lobby your local politicians to pass legislation that companies that do not use secure hashes (e.g., bcrypt or at very least salted hash) on their password data are liable for identity theft damages from any compromise of their systems?

But in addition to those thoughts, which at best will still require time before the company does anything, follow this guidance, repeated in almost every question on password security and listed here by Iszi:

Use long and complex passwords for all websites & applications, and do not re-use passwords across any websites & applications. Additionally, limit the information you give these websites & applications to only that which is absolutely necessary for them to serve their purpose