February 23, 2011
Disaster Recovery (DR) and Business Continuity Planning (BCP) continue to be driving factors for some organizations looking to move to the cloud. Many are looking to manage their Disaster Recovery planning through extensive use of managed cloud services, and for good reason. These are the most common benefits of leveraging cloud services for Disaster Recovery planning, as cited by cloud customers:
- 1. I only have to pay for what I use. If I don’t declare a disaster scenario, my costs are nominal.
- 2. I have flexibility in how much management my provider requires of me to maintain my DR, anywhere from “full control” to “no control”.
- 3. I can leverage a world-class redundant facility to provide the greatest assurance of business continuity in the event of a major event.
- 4. I can keep my applications as up-to-date as I want, by defining my Recovery Point Objective (RPO) and Recovery Time Objective (RTO).
- 5. When I declare a disaster, I can rely on my cloud service provider for support rather than expect my staff to travel to a Disaster Recovery site for recovery work.
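The RPO/RTO trade-off in benefit 4 can be reasoned about numerically. A minimal sketch of that reasoning; all figures below are hypothetical illustrations, not vendor quotes:

```python
# Illustrative sketch: how a replication interval bounds worst-case data
# loss (RPO), and how both objectives constrain a DR design.

def worst_case_data_loss_minutes(replication_interval_minutes: float) -> float:
    """If data is replicated every N minutes, a disaster just before the
    next replication cycle loses up to N minutes of data (the RPO)."""
    return replication_interval_minutes

def meets_objectives(replication_interval: float, failover_time: float,
                     rpo_target: float, rto_target: float) -> bool:
    """Check a DR design against stated RPO/RTO targets (all in minutes)."""
    return (worst_case_data_loss_minutes(replication_interval) <= rpo_target
            and failover_time <= rto_target)

# Hourly replication with a 4-hour failover fails a 15-minute RPO target:
print(meets_objectives(replication_interval=60, failover_time=240,
                       rpo_target=15, rto_target=480))  # False
```

Tightening either objective (more frequent replication, faster failover) is exactly what drives the cost options discussed in the questions below.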
However, some cloud customers are not sure how their managed cloud service providers deliver redundant cloud environments, Disaster Recovery options, and Business Continuity Planning and execution. After all, not all cloud providers are the same.
Nobody wants to be left out in the rain when disaster strikes. The mistaken notion is, “It is all in the cloud, so it must be highly available.” However, that is not necessarily the case. Here are some key questions to ask your managed cloud service provider about its cloud infrastructure:
- To what level of redundancy do you maintain your cloud infrastructure within the primary location? N + 50%? N + 1? N x 2?
- To what level of capacity do you maintain your cloud infrastructure Disaster Recovery services in the redundant location or locations?
- Are Disaster Recovery or Business Continuity services included in my contract and managed cloud environment?
- How am I billed during steady state?
- How am I billed in the event of a declared disaster?
- What are the options for providing the best Recovery Point Objective (RPO) and the costs associated with those options?
- What are the options for providing the best Recovery Time Objective (RTO) and the costs associated with those options?
- When I declare a disaster, what are the resources I can rely on to provide assistance to perform full recovery of services and data?
- How often and to what extent are you willing to perform regular DR tests?
- Are your cloud data centers diverse in the following manner:
- Are they geographically disparate?
- Do they have redundant power feeds?
- Do you maintain redundant circuits into diverse sides of the facilities?
- Is the network distribution to the cloud environment fully redundant?
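The redundancy models named above (N + 1, N x 2) translate directly into availability math. A simplified sketch, assuming independent component failures (an assumption that shared power, cooling, and network feeds in a real facility can violate):

```python
# Sketch: what running redundant copies means for availability,
# under the simplifying assumption that copies fail independently.

def parallel_availability(component_availability: float, copies: int) -> float:
    """Availability of a service that stays up as long as one copy is up."""
    return 1 - (1 - component_availability) ** copies

single = 0.99                            # one copy: ~3.65 days of downtime/year
doubled = parallel_availability(single, 2)   # N x 2: ~0.9999 (~53 min/year)
print(doubled)
```

This is why the questions above matter: the answer “N + 1” versus “N x 2” implies a very different worst-case downtime, and a correspondingly different price.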
Another common concern with Disaster Recovery and Business Continuity in the cloud is whether all of the policies, procedures and controls are maintained in the cloud environment when a disaster is declared. Most organizations maintain strict compliance with policies or regulations that could be violated if not maintained in the cloud environment. Here are the common questions regarding policies and procedures:
- What processes are in place to be sure my data is synchronized?
- What processes are in place to ensure changes are implemented consistently in all cloud node environments?
- Are the environments run in an active/active, active/passive, or active/off-line configuration?
- How often does the managed cloud service provider support DR testing?
- Are all security measures mirrored in the redundant location, even when inactive?
- Authentication and Authorization
- Security Event Correlation
- What options are there to maintain development, quality assurance, and Disaster Recovery environments with version control?
- What processes and services are available to ensure a smooth recovery to primary location after the disaster is over, if necessary?
- What is the sustainability of the DR environment? Is the DR environment architected to provide degraded or minimal performance?
- Are the same compliance controls provided in all Cloud node environments (e.g., SAS70 in every Data Center)?
- What processes are in place to maintain backups during disaster declaration, and synchronize backups and restore the backup processes to normal after restoration of services to primary location?
Disaster Recovery and Business Continuity Planning can be extremely difficult to manage and maintain. However, the right managed cloud service provider can ensure that your environment is fully protected, your systems remain available and accessible, and you recover seamlessly when disaster strikes.
Allen Allison, Chief Security Officer at NaviSite (www.navisite.com)
During his 20+ year career in the information security industry, Allen Allison has served in management and technical roles, including the development of NaviSite’s industry-leading cloud computing platform; chief engineer and developer for a market-leading managed security operations center; and lead auditor and assessor for information security programs in the healthcare, government, e-commerce, and financial industries. With experience in the fields of systems programming; network infrastructure design and deployment; and information security, Allison has earned the highest industry certifications, including CCIE, CCSP, CISSP, MCSE, CCSE, and INFOSEC Professional. A graduate of the University of California, Irvine, Allison has lectured at colleges and universities on the subject of information security and regulatory compliance.
February 23, 2011
By Ian Huynh, Vice President of Engineering, Hubspan
Cloud computing has become an integrated part of IT strategy for companies in every sector of our economy. By 2012, IDC predicts that IT spending on cloud services will grow almost threefold to $42 billion. So it’s no surprise that decision makers no longer wonder “if” they can benefit from cloud computing. Instead, the question being asked now is “how” best to leverage the cloud while keeping data and systems secure.
With such astounding growth in cloud computing expected over the next few years, it’s important for all executives, not just IT professionals, to understand the opportunities and precautions when considering a cloud solution. Security questions range from whether information transferred between systems in the cloud is safe, to what type of data is best stored in the cloud, to how to control who accesses that data.
It’s important to arm executives with actionable advice when considering a cloud computing service provider. Below is a list of the top six questions every CIO should consider when evaluating how secure a cloud solution is:
- 1. How does your vendor plan on securing your data?
You need to understand how your provider’s physical security, personnel, access controls and architecture work together to build a secure environment for your company, your data and your external partners or customers that also might be using the solution.
Application Access Control
For application access control, think front-end as well as back-end. While there may be rigorous user access management rules when the application is accessed via the application interface (i.e. front-end), what about system maintenance activities and related accesses that are routinely performed by your cloud vendor, on the back end, to ensure optimal application and system performance? Does your cloud vendor also apply the same rigorous access control, if not more?
Physical Access Control
Most people are familiar with application access control and user entitlements, but physical access control is just as important. In fact, many people forget that behind every cloud platform is a physical data center, and while it’s easy to assume vendors will have robust access controls around their data center, this isn’t always the case. Vendors should limit physical access to not only the overall data center facility but also to key areas like backup storage, servers and other critical network systems.
Personnel Access Control
Personnel considerations are another aspect of network security closely related to physical access control. Who does your vendor let access your data and how are they trained? Do they approach operations with a security-centric mindset? The security of any platform depends on the people that run it. This means that HR practices can have a huge impact on your vendor’s security operations. Smart vendors will institute background checks and special security training for their employees to defend against social engineering and phishing attacks.
Your cloud vendor’s solution needs to keep your data separate from that of other cloud tenants that use the same platform. This should be a primary concern even when your data resides in a “virtual private cloud,” where there is an expectation of stronger segregation controls. When your data is stored in the same storage space as your neighboring tenants’, you need to know how your cloud vendor will ensure that your data isn’t improperly accessed.
Also, the overall level of security for cloud applications needs to be addressed. Depending on your vendor’s architecture, there may be customers with differing security needs operating within the same multi-tenant environment. In these cases, the entire system needs to operate at the highest level of security to avoid the “weakest link syndrome.” Incidentally, this highlights one of the benefits of cloud computing – you can have the benefits of world-class security without the cost of building and maintaining such infrastructure.
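One common segregation control in shared storage is to force every read path through a tenant-scoped query, so a request can never return another tenant’s rows. A minimal sketch of this pattern; the schema, table, and tenant names are hypothetical:

```python
import sqlite3

# Sketch of row-level tenant segregation in shared storage.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (tenant_id TEXT, name TEXT)")
conn.executemany("INSERT INTO documents VALUES (?, ?)",
                 [("acme", "roadmap.doc"), ("globex", "payroll.xls")])

def documents_for(tenant_id: str) -> list:
    # All reads go through this function; the tenant_id predicate is
    # applied unconditionally, so one tenant can never see another's rows.
    rows = conn.execute(
        "SELECT name FROM documents WHERE tenant_id = ?", (tenant_id,))
    return [name for (name,) in rows]

print(documents_for("acme"))    # ['roadmap.doc']
```

The question to ask a vendor is whether such scoping is enforced centrally in the platform, or left to each application’s discipline.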
- 2. Do they secure data in transit as well as data at rest?
Most vendors claim strong data encryption, but do they truly provide end-to-end encryption, with security in place while the data is at rest or in storage? Also, cloud security should go beyond data encryption to include encryption key management, which is a vital part of any cloud security scheme and should not be overlooked.
Most data centers don’t encrypt their data at rest, encrypt their backups, or audit their data encryption processes, but they should. A truly secure system takes these considerations into account. Data in backups will likely stick around much longer than the information currently on your servers. A mandate that provides strong guidance for data encryption is the Federal Information Processing Standards (FIPS) 140 security standard, which specifies the requirements for cryptographic modules. Ask your vendor if they adhere to FIPS guidelines.
How are encryption keys stored and secured? You can encrypt all of your data, but the encryption keys are the proverbial “keys to the kingdom.” Best practices call for splitting the knowledge of each key between two or more individuals – hence, to re-construct an entire key, you need all those individuals present for authorization.
Furthermore, where business practice requires that at least one person in the company has knowledge of the entire key (e.g. the CEO or CSO), then procedures and processes should be in place to ensure that those individuals with the knowledge cannot access the data (e.g. they may have the key but cannot get access to the lock to open it – hence, there’s still a degree of separation).
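The dual-control practice described above can be illustrated with a simple XOR split of a key into two shares: either share alone reveals nothing, and both custodians must be present to reconstruct the key. This is only a sketch; production systems would typically use a threshold scheme such as Shamir’s secret sharing, which supports k-of-n reconstruction:

```python
import secrets

# Sketch of dual-control key custody via a two-way XOR split.

def split_key(key: bytes) -> tuple:
    """Split a key into two shares; both are required to recover it."""
    share_a = secrets.token_bytes(len(key))            # random pad
    share_b = bytes(k ^ a for k, a in zip(key, share_a))
    return share_a, share_b

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    """XOR the shares back together to recover the original key."""
    return bytes(a ^ b for a, b in zip(share_a, share_b))

key = secrets.token_bytes(32)       # e.g. an AES-256 data-encryption key
a, b = split_key(key)
assert reconstruct(a, b) == key     # both custodians present: key recovered
```

Because each share is statistically random on its own, compromising one custodian yields no information about the key.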
- 3. Does the vendor follow secure development principles?
A truly secure cloud platform is built for security through and through. That means security starts from “ground zero” – the design phase of the application as well as the platform. It simply isn’t enough to operate your system with a security-centric mindset; you have to design your system using the same guiding principles, following an unbroken chain of secure procedures from conception in the lab to real-life implementation. This means that design reviews, development practices and quality assurance plans must be engineered using the same strict security guidelines you would use in a production data center.
- 4. What are the vendor’s security certifications, audits and compliance mandates?
There are many regulations in the market, but the two most important ones covering cloud security and data protection are PCI DSS and SAS 70 Type II mandates.
Consider vendors that follow the industry standard PCI DSS guidelines, developed and governed by the Payment Card Industry Security Standards Council. It is a set of requirements for enhancing payment account data security. While created for the credit card and banking industries, it is relevant for any sector, as the goal is keeping data safe and personally identifiable information protected.
Another major control mechanism is the Statement on Auditing Standards No. 70 (SAS 70) Type II. SAS 70 compliance means a service provider has been through an in-depth audit of their control objectives and activities.
In addition to these certifications, there are a couple of other associations and groups the vendor should acknowledge and use as guidance in prioritizing data security issues. They are the Open Web Application Security Project (OWASP), which has a top ten list outlining the most dangerous current Web application security flaws along with the effective methods of dealing with them. And the Cloud Security Alliance (CSA), an industry group that advises best practices for data security in the cloud.
In addition to third-party compliance, the cloud vendor should be conducting its own annual security audits. Your vendor should have scheduled audits, including penetration tests, performed by an independent third-party audit provider to evaluate the quality of the security your cloud vendor delivers. Although the PCI DSS version 1.2 specifications mandate only annual security audits, find a vendor that goes above and beyond; some vendors perform quarterly audits, four times the typical industry requirement.
- 5. How does your vendor detect a compromise or intrusion?
Attempts by hackers to breach data security measures are becoming the norm in today’s high-tech computing environment. Whether you maintain your infrastructure and data on premises or in the cloud, the issues of securing your data are the same.
Your cloud vendor should include strong mechanisms for both intrusion prevention, or keeping your data safe from attack or a breach; and intrusion detection, which is the ability to monitor and know what’s happening with your data and if or when an intrusion happens. The vendor should be able to monitor, measure and react to any potential breach, particularly the ability to monitor access to its systems and detect any unauthorized changes to systems, policies or configuration files.
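One common way to detect unauthorized changes to systems, policies, or configuration files is to compare content hashes against a trusted baseline. A minimal sketch of that technique; the file paths and contents are hypothetical:

```python
import hashlib

# Sketch of change detection for configuration files: flag any file whose
# current content hash no longer matches a trusted baseline.

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Baseline recorded when the configuration was known-good.
baseline = {"/etc/app/policy.conf": sha256(b"allow admins only\n")}

def detect_changes(current_files: dict) -> list:
    """Return paths whose content no longer matches the baseline."""
    return [path for path, content in current_files.items()
            if sha256(content) != baseline.get(path)]

# An unauthorized edit to the policy file is flagged:
print(detect_changes({"/etc/app/policy.conf": b"allow everyone\n"}))
```

A vendor’s monitoring should run checks like this continuously and alert on any mismatch, rather than relying on periodic manual review.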
Also, what does your vendor do when things go wrong, and is that communicated to you? A good Service Level Agreement (SLA) would have an intrusion notification clause built in. A great SLA provides some transparency into the vendor’s operations in the areas of audits and compliance, and how those processes compare to your own requirements.
- 6. What are their disaster recovery plans and how does data security figure into those plans?
Your vendor’s security story needs to include their business continuity plan. First of all, they need to have a failover system or back-up data center. They should also be able to convincingly demonstrate to you that they can execute their backup plan. Many of the biggest cloud computing outages in recent memory were the result of a failure of disaster recovery processes.
Second, this secondary data center must have all of the same security processes and procedures applied to it as the primary one. It’s no good to have a second system in place if you cannot operate securely in that environment.
Finally, if there were some sort of impending disaster, they need to notify you in advance. Keep in mind that you may not always know where your data is physically located, so the onus of reporting is on your provider.
Your vendor’s plan for securing your data should be like a well-choreographed dance with a strong beginning, middle and end. Their system needs to be protected at the network and application layers, beginning with the development process. Access control policies should span the entire operation. The vendor needs to have a coherent plan that protects data at all times, whether in motion or at rest. They need to include robust compliance, auditing and reporting processes to ensure the integrity of the overall security scheme. And your vendor should have robust disaster recovery procedures in place, and be able to show you that they are capable of executing them.
While cloud computing brings many benefits, all clouds are not created equal. Make sure your vendor provides the security you need to confidently move your data to the cloud.
Ian Huynh, Vice President of Engineering, Hubspan
Ian Huynh has over 20 years’ experience in the software and services markets, with particular expertise in cloud computing, security and application architecture. Ian has been featured in publications such as Network World and CS Techcast, a technology network for IT pros. Prior to joining Hubspan, Ian served as Software Architect at Concur Technologies, and has held technical leadership positions at 7Software and Microsoft Corp.
February 1, 2011
by Mark O’Neill, CTO, Vordel
In this blog post we examine how Single Sign-On from the enterprise to Cloud-based services is enabled. Single Sign-On is a critical component for any organization wishing to leverage Cloud services. In fact, an organization accessing Cloud-based services without Single Sign-On risks increased exposure to security threats and higher IT Help Desk costs, as well as the danger of “dangling” accounts from former employees that are open to rogue use.
Let’s take a look at Google Apps and the concept of Single Sign-On. Organizations are increasingly using Cloud services such as Google Apps for email and document sharing. Google Apps, especially Gmail, are a popular option for organizations making their first foray into leveraging Cloud-based Services. While the cost advantages of this model are compelling, organizations do not want to create a whole new set of accounts for their employees in the Cloud, or force their employees to remember a new password.
The solution to this problem is to allow users to continue to use their own local accounts, logging into their computers as normal, but then seamlessly being logged into the Cloud services. In this way, the user experiences a continuous link from the corporate systems, such as their Windows login, into the Cloud services, such as email. This is known as Single Sign-On, and is enabled by technologies such as Security Assertion Markup Language (SAML). This allows operations staff to manage their organization’s usage of the external Cloud services as if they were a part of their internal network, even without the same degree of physical control. As a result, the usual problems of password synchronization, user provisioning (adding users) and de-provisioning (removing users), and auditing are minimized.
When an organization wants to use Gmail for its employees, it usually gets a key from Google to enable single sign-on. This application programming interface (API) key is valid only for that organization and enables its employees to sign in. As such, it is vitally important that this key is protected. If an unauthorized person obtains the key, they can log in and impersonate the email account owners, share Google documents, and generally have unlimited access to users’ email and documents.
A good solution to overcome this issue is to provide Single Sign-On between on-premises systems and the Cloud. However, the key security requirement of Single Sign-On is protection of API keys. In effect, these API keys are the keys of the kingdom for Cloud Single Sign-On. I will discuss the topic of protecting API keys in a future blog, but want to underscore the importance of their security. After all, if an organization wishes to enable single sign-on to their Google Apps (so that their users can access their email without having to log in a second time) then this access is via API Keys. If these keys were to be stolen, then an attacker would have access to the email of every person in that organization, by using the key to create a signed SAML assertion and sending it to Google. Clearly that must be avoided.
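The risk described above can be sketched conceptually. Real SAML assertions are XML documents signed with XML Signature (typically RSA); in the sketch below an HMAC stands in for that signature purely to keep the example self-contained, and the key and domain names are hypothetical:

```python
import hashlib
import hmac

# Conceptual sketch: whoever holds the signing key can mint a valid
# assertion for ANY user, which is why the key must be protected.

SIGNING_KEY = b"example-org-signing-secret"   # hypothetical provider key

def signed_assertion(subject: str) -> str:
    """Build a (simplified) assertion and attach a signature over it."""
    assertion = f"<Assertion subject='{subject}@example.com'/>"
    sig = hmac.new(SIGNING_KEY, assertion.encode(), hashlib.sha256).hexdigest()
    return assertion + "|" + sig

def verify(blob: str) -> bool:
    """The service provider's check: does the signature match the content?"""
    assertion, sig = blob.rsplit("|", 1)
    expected = hmac.new(SIGNING_KEY, assertion.encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = signed_assertion("alice")
assert verify(token)                 # accepted: signed with the real key
# An attacker who steals SIGNING_KEY can impersonate anyone:
assert verify(signed_assertion("ceo"))
```

The last line is the whole point: the verifier has no way to distinguish a legitimately issued assertion from one minted by a key thief.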
Single Sign-on Options:
There are two broad paths for any organization interested in implementing Single Sign-On today. One option is for an organization’s developer staff to create Single Sign-On via the sample code offered by all Cloud Service providers for the purpose of connecting to Cloud Services. This approach appeals to developers who want to create and code the connections into existing applications. The programming approach means it is the developer who is doing the work by writing code and making the connections to an organization’s applications.
A second approach is to take an off-the-shelf product such as a Cloud Service Broker and use this technology to configure the managed or “brokered” connection up to the Cloud service. If an organization leverages an off-the-shelf product, the work is usually configuration by systems staff rather than developers writing code. This is because the Cloud Service Broker sits external to the application and acts as a piece of network infrastructure brokering the connection. As a result, the management of this process falls to those managing the network infrastructure. Additionally, the Cloud Service Broker brokers the connection to the Cloud without getting mired in the intricacies of each product’s particular APIs.
Implementation of Single Sign-On:
The implementation of Single Sign-On for a large enterprise is challenging. Typically it is a long, involved project that requires stitching together applications that were not originally intended to work together, using products with proprietary approaches and proprietary (read: not SAML or OAuth) tokens. This approach is labor intensive and time consuming. In the consumer world, the rise of more agile technologies such as REST and the Web Services stack has enabled more efficient adoption of Single Sign-On. Additionally, the growth of Cloud-based services like Google Apps means we are seeing more lightweight Web technologies. These more straightforward Web technologies mean organizations, especially SMEs, can leverage off-the-shelf technologies such as a Cloud Service Broker to broker users’ identity up to the Cloud provider and secure the API keys via Single Sign-On.
API keys must be protected just like passwords and private keys. This means they should not be stored as files on the file system, or baked into non-obfuscated applications that can be analyzed relatively easily. In the case of a Cloud Service Broker, API keys are stored encrypted, and when a Hardware Security Module (HSM) is used, there is the option of storing the API keys in hardware, since a number of HSM vendors now support the storage of material other than RSA/DSA keys. The secure storage of API keys means that operations staff can apply a policy to their key usage. It also means that regulatory criteria related to privacy and the protection of critical communications (for later legal “discovery” if mandated) are met.
Other Benefits of Single Sign-On
In addition to protecting API keys, it is worth noting the cost and productivity benefits Single Sign-On offers an organization. Consider the fact that users with multiple passwords are also a potential security threat and a drain on IT Help Desk resources. The risks and costs associated with multiple passwords are particularly relevant for any large organization making its first steps into Cloud Computing and leveraging Software-as-a-Service (SaaS) applications. For example, if an organization has 10,000 employees, it is very costly to have the IT department assign new passwords to access Cloud Services for each individual user, and indeed reassign new passwords whenever a user forgets their original access details.
By leveraging Single Sign-On capabilities an organization can enable a user to access both the user’s desktop and any Cloud Services via a single password. In addition to preventing security issues, there are significant cost savings to this approach. For example, Single Sign-On users are less likely to lose passwords, reducing the assistance required by IT help desks. Single Sign-On is also helpful for the provisioning and de-provisioning of passwords. If a user joins or leaves the organization, there is only a single password to activate or deactivate vs. having multiple passwords to deal with.
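The help-desk savings can be put into a back-of-the-envelope calculation. Every figure below is an assumption chosen for illustration (only the 10,000-employee headcount comes from the example above):

```python
# Back-of-the-envelope sketch of help-desk savings from Single Sign-On.
# All per-reset costs and rates are hypothetical assumptions.

employees = 10_000
resets_per_user_per_year = 2     # assumed forgotten-password rate
cost_per_reset = 20.0            # assumed help-desk cost per reset, dollars
apps_per_user = 4                # separate Cloud passwords without SSO

without_sso = employees * apps_per_user * resets_per_user_per_year * cost_per_reset
with_sso = employees * 1 * resets_per_user_per_year * cost_per_reset

print(without_sso - with_sso)    # 1200000.0 (annual savings, dollars)
```

Even with conservative assumptions, collapsing several passwords into one shifts the reset cost by a multiple of the per-application count.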
Although Single Sign-On is not a new concept, it is finding new application for connecting organizations to Cloud service providers such as Google Apps. It is a powerful concept, enabling users to experience seamless connections from their computers up to their email, calendars, and shared documents. Standards such as SAML are enabling this trend. A Cloud Service Broker is an important enabling component for this trend, enabling the connection while protecting the all-important API keys.
Mark O’Neill – Chief Technology Officer – Vordel
As CTO at Vordel he oversees the development of Vordel’s technical development strategy for the delivery of high performance Cloud Computing and SOA management solutions to Fortune 500 companies and Governments worldwide. Mark is author of the book, “Web Services Security”, and a contributor to “Hardening Network Security”, both published by Osborne-McGrawHill. Mark is also a representative of the Cloud Security Alliance, where he is a member of the Identity Management advisory panel.