Cloud Security: An Oxymoron?

Written by Torsten George, Vice President of Worldwide Marketing at Agiliance

 

Cloud computing represents today’s big innovation trend in the information technology (IT) space. Because it allows organizations to deploy quickly, move swiftly, and share resources, cloud computing is rapidly replacing conventional in-house facilities at organizations of all sizes.

 

However, the 2012 Global State of Information Security Survey, conducted by PwC US in conjunction with CIO and CSO magazines among more than 9,600 security executives from 138 countries, reveals that uncertainty about organizations’ ability to enforce their cloud providers’ security policies is still a major inhibitor to cloud computing. More than 30 percent of respondents identified their company’s uncertain ability to enforce their cloud providers’ security policies as the greatest security threat from cloud computing. With this in mind, is cloud security achievable at all, or is it just an oxymoron?

 

In their eagerness to adopt cloud platforms and applications, organizations often neglect to recognize and address the compliance and security risks that come with implementation. The ease of getting a business into the cloud – a credit card and a few keystrokes are all that is required – combined with service level agreements often provides a false sense of security.

 

However, shortcomings in a cloud provider’s security strategy can trickle down to the organizations that leverage its services. The damage can range from simple power outages that impact business performance to data loss, unauthorized disclosure, data destruction, copyright infringement, and reputational harm to the brand.

 

Cloud Computing Vs. Cloud Security

 

A naturally risk-averse group, IT professionals are facing a strong executive push to harness the obvious advantages of the cloud (greater mobility, flexibility, and savings), while continuing to protect their organizations against the new threats that appear as a result.

 

For organizations planning to transition their IT environment to the cloud, it is imperative to be cognizant of often overlooked issues such as loss of control and lack of transparency. Cloud providers may have service level agreements in place, but security provisions, the physical location of data, and other vital details may not be well-defined. This leaves organizations in a bind, as they must also meet contractual agreements and regulatory requirements for securing data and comply with countless breach notification and data protection laws.

 

Whether organizations plan to use public clouds, which promise an even higher return on investment, or private clouds, better security and compliance are needed. To address this challenge, organizations should institute policies and controls that match their pre-cloud requirements. In the end, why would you apply less stringent requirements to a third-party IT environment than to your own – especially if it potentially impacts your performance and valuation?

 

Recent cyber attacks and associated data breaches at Google and Epsilon (a leading marketing services firm) are prime examples of why organizations need to think about an advanced risk and compliance plan that includes their third-party managed cloud environment.

 

Enabling Cloud Security

 

With most organizations beyond debating whether or not to embrace the cloud model, IT professionals should now re-focus their resources on managing the move to the cloud so that the risks are mitigated appropriately.

 

When transitioning your IT infrastructure to a cloud environment, you have to determine how far you can trust your cloud provider with your sensitive data. Practically speaking, you need the ability to assess security standards, trust security implementations, and prove infrastructure compliance to auditors.

 

As part of a Cloud Readiness Assessment, organizations should evaluate potential cloud service models and providers. Organizations should insist that cloud service providers grant visibility into security processes and controls to ensure confidentiality, integrity, and availability of data. It is important not to rely on certifications alone (e.g., SAS 70), but also to review documented security practices (e.g., assessment of threat and vulnerability management capabilities, continuous monitoring, business continuity plans), compliance posture, and the ability to generate dynamic and detailed compliance reports that can be used by the provider, auditors, and an organization’s internal resources.

 

Considering that many organizations deal with a heterogeneous cloud ecosystem, comprised of infrastructure service providers, cloud software providers (e.g., cloud management, data, compute, file storage, and virtualization), and platform services (e.g., business intelligence, integration, development and testing, as well as database), it is often challenging to gather the above-mentioned information manually. Thus, automating the vendor risk assessment might be a viable option.

 

Following the guidelines developed by the Cloud Security Alliance, a non-profit organization formed to promote the use of best practices for providing security assurance within cloud computing, organizations should not stop with the initial Cloud Risk Assessment, but continuously monitor the cloud operations to evaluate the associated risks.

 

A portion of the cost savings obtained by moving to the cloud should be invested into increasing the scrutiny of the security qualifications of an organization’s cloud service provider, particularly as it relates to security controls, and ongoing detailed assessments and audits to ensure continuous compliance.

 

If at all possible and accepted by the cloud service provider, organizations should consider leveraging monitoring services or security risk management software that provides:

 

  • Continuous compliance monitoring.
  • Segregation and virtualization provisioning management.
  • Automation of CIS benchmarks and secure configuration management integrations with security tools such as VMware vShield, McAfee ePO, and NetIQ SCM.
  • Threat management with automated data feeds from zero-day vendors such as VeriSign and the National Vulnerability Database (NVD), as well as virtualized vulnerability integrations with products such as eEye Retina and Tenable Nessus.
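To make the threat-management bullet above concrete, here is a minimal sketch of how an automated NVD feed could be polled and folded into a risk register. The endpoint and parameters reflect the publicly documented NVD 2.0 REST API as best understood at the time of writing, and the keyword and output handling are illustrative assumptions rather than anything prescribed in this article; verify against current NVD documentation before relying on it.

```python
# Hypothetical sketch: poll the NVD data feed for recent CVEs that mention a
# product in the cloud stack, so findings can flow into the risk register.
# The endpoint and parameters are assumptions based on the public NVD 2.0 API.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def recent_cves(keyword: str, limit: int = 20) -> list:
    """Return a list of {id, description} dicts for CVEs matching the keyword."""
    resp = requests.get(
        NVD_URL,
        params={"keywordSearch": keyword, "resultsPerPage": limit},
        timeout=30,
    )
    resp.raise_for_status()
    findings = []
    for item in resp.json().get("vulnerabilities", []):
        cve = item.get("cve", {})
        descriptions = cve.get("descriptions", [])
        summary = descriptions[0]["value"] if descriptions else ""
        findings.append({"id": cve.get("id"), "description": summary})
    return findings

if __name__ == "__main__":
    for finding in recent_cves("hypervisor"):
        print(finding["id"], "-", finding["description"][:80])
```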

 

Automated technology that enables a risk-based approach and continuous compliance monitoring is well suited to organizations seeking to protect and manage their data in the cloud.

 

Many cloud service providers might be opposed to such measures, but the increasing number of cyber security attacks and associated data breaches gives them a strong incentive to offer these capabilities to their clients, not only to establish trust but also as a competitive advantage.

Cloud Security Considerations

Can a cloud be as secure as a traditional network?  In a word, yes!  I realize that some may find this statement surprising.  Depending on the network, that may be a low bar, but good security principles and approaches are just as applicable to cloud environments as they are to traditional network environments.  However, the key is knowing how to extend a multi-layered defense into the cloud/virtualization layer.

 

One of the cloud security benefits frequently mentioned is the standardization and hardening of VM images.  This can reduce complexity and ensure that all systems start from a good security posture.  It also enables a rapid response to fix identified issues.  Some people claim that the complexity, or diversity, of systems in a traditional network environment is a security benefit because a single vulnerability cannot compromise all systems.  In reality, it is usually more difficult to manage disparate systems because of the tools and expert resources required to maintain them.

 

Hardening is not only for VMs.  It has to be extended throughout the cloud environment to include the hypervisor, management interfaces, and all other virtual components, such as network devices.  This requires some time and expertise in understanding how to control functionality without losing productivity.  If you ask your service provider or internal team about hardening the virtualization layer and you get blank stares back, you may have a problem.  Also, you should not accept the default statement that “the hypervisor is essentially a hardened O/S” as a complete answer.  Securing the virtualization layer is one of the new and key areas to providing protection for cloud environments.
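As a small illustration of what “hardening beyond the VM” can look like in practice, the sketch below spot-checks a couple of host-level settings of the kind a CIS-style benchmark covers. The specific files, settings, and expected values are illustrative assumptions, not a complete benchmark; a real deployment would follow the vendor’s hardening guide or use an automated configuration-assessment tool.

```python
# Illustrative hardening spot-check (not a full CIS benchmark): verify a few
# settings that commonly appear in management-host hardening guides.
# The files and expected values below are assumptions chosen for illustration.
import re
from pathlib import Path

CHECKS = [
    # (description, file, regex that should match a line in that file)
    ("SSH root login disabled", "/etc/ssh/sshd_config", r"^\s*PermitRootLogin\s+no\b"),
    ("SSH logging at INFO or higher", "/etc/ssh/sshd_config", r"^\s*LogLevel\s+(INFO|VERBOSE)\b"),
]

def run_checks(checks=CHECKS) -> bool:
    all_passed = True
    for description, path, pattern in checks:
        text = Path(path).read_text() if Path(path).exists() else ""
        passed = any(re.match(pattern, line) for line in text.splitlines())
        print(f"[{'PASS' if passed else 'FAIL'}] {description}")
        all_passed = all_passed and passed
    return all_passed

if __name__ == "__main__":
    raise SystemExit(0 if run_checks() else 1)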

 

Strong authentication and authorization methods are critical to address, since this is an often neglected area even in traditional networks, and it is important to do it right.  It is worth noting that the Verizon 2011 Data Breach Investigative Report cites “exploitation of default or guessable credentials” and “use of stolen login credentials” as some of the most used hacking attacks.  Whether in a private or public cloud environment, there needs to be a solid layer of protection against unauthorized access.  Two-factor authentication is a must for remote and administrative access, and it is a best practice to require it throughout the virtualized environment wherever practicable.
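One concrete flavor of two-factor authentication is a time-based one-time password. The sketch below uses the widely available pyotp library; the secret handling and the in-line “user input” are assumptions made for illustration, and a production deployment would integrate this with the access-management platform already in place.

```python
# Minimal TOTP sketch using pyotp (pip install pyotp): the server stores a
# per-user secret, the user's authenticator app generates a short-lived code,
# and the server verifies it as the second factor after the password check.
import pyotp

# Enrollment: generate and store a secret for the user (illustrative storage).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print("Provisioning URI for the authenticator app:",
      totp.provisioning_uri(name="admin@example.com", issuer_name="ExampleCloud"))

# Login time: the user submits the current code from their device.
submitted_code = totp.now()  # stand-in for user input in this sketch

# valid_window=1 tolerates one 30-second step of clock drift.
if totp.verify(submitted_code, valid_window=1):
    print("Second factor accepted")
else:
    print("Second factor rejected")
```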

 

Encryption should be utilized for both data in transit and data at rest.  In addition to providing confidentiality and integrity, encryption plays a critical role in protecting data that sits in an environment where it may not be possible to destroy it by normal methods.  Once encrypted data is no longer needed, the encryption key for that data set can be destroyed.  However, this requires that the organization, not the service provider, retain and manage the encryption keys.
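The “destroy the key instead of the data” idea described above is sometimes called crypto-shredding. Below is a minimal sketch using the cryptography package’s Fernet recipe; the key storage details are assumptions left out for brevity, and in practice the keys would live in a key-management system controlled by the data owner, not the provider.

```python
# Crypto-shredding sketch with the cryptography package (pip install cryptography):
# data written to cloud storage is encrypted under a customer-held key, and
# "deleting" the data later is done by destroying that key.
from cryptography.fernet import Fernet, InvalidToken

# Customer-side key generation; in practice this lives in the customer's KMS/HSM.
key = Fernet.generate_key()
ciphertext = Fernet(key).encrypt(b"patient record 1234: sensitive details")

# The ciphertext can sit on provider storage; without the key it is unreadable.
print(Fernet(key).decrypt(ciphertext))  # readable while the key exists

# Crypto-shred: discard the key. Recovery of the plaintext now fails even though
# copies of the ciphertext may persist in backups or snapshots.
wrong_key = Fernet.generate_key()
try:
    Fernet(wrong_key).decrypt(ciphertext)
except InvalidToken:
    print("Key destroyed: ciphertext can no longer be decrypted")
```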

 

Encryption is also being used in innovative ways to create an isolated environment within a cloud.  This can be used to extend security and compliance controls from an organization’s traditional network into a cloud.  This can help overcome barriers to cloud security by enabling enterprises to run selected applications and maintain data in the cloud with the same protection and control available internally.

 

Summary

Clouds, like traditional network environments, require careful security planning, design, and operations.  The various types of clouds and delivery models offer varying degrees of security and flexibility, some with the ability to layer in additional security controls.  This is why it is important to have a firm understanding of security and compliance requirements prior to moving to the cloud.

 

It is fortunate that good security practices are applicable to the cloud.  However, the virtualization layer is a new area – one that requires specialized attention, understanding, and proficiency when it comes to implementing security controls.  Hardening, access control, and encryption are three primary areas of focus in building a multi-layered defense in cloud environments.  Clouds can meet security and compliance requirements, but only if essential security practices are applied throughout them.

 

About the Author

Ken Biery is a principal security consultant with Terremark, Verizon’s IT services subsidiary, focused on providing governance, risk, and compliance counsel to enterprises moving to the cloud. With extensive knowledge in the area of cloud computing, he enables companies around the globe to securely migrate to the cloud and create more efficient IT operations.

Leveraging Managed Cloud Services to Meet Cloud Compliance Challenges

By Allen Allison

 

Regardless of your industry, customer base, or product, it is highly likely that you face regulatory compliance requirements.  If you handle Protected Health Information (PHI), the Health Insurance Portability and Accountability Act (HIPAA) – along with the HITECH enhancements – is a primary concern for your organization.  If you work with government agencies, you may need to be compliant with the Federal Information Security Management Act (FISMA) or National Institute of Standards and Technology (NIST) requirements.  In addition, most states have privacy laws protecting residents’ Personally Identifiable Information.

It is a common misunderstanding that these regulatory compliance requirements preclude many organizations from being able to leverage outsourced, managed cloud services.  Depending on the cloud services provider you choose, you may not only be able to meet your existing compliance requirements; the cloud provider is also likely to have controls and processes that improve your compliance program.

When HIPAA was enhanced by the Health Information Technology for Economic and Clinical Health (HITECH) Act, companies with PHI began to panic.  Not only were they expected to protect patient health information, but they had the added requirement of ensuring that third-party providers enabled the same stringent controls on the systems they support.  Furthermore, these organizations had the added responsibility of providing breach notification in the event of a loss of confidentiality.

If nothing else, HITECH gives us two things.  First, heightened awareness of the sensitivity of each individual’s health information has driven stronger security programs and gives the public assurance that privacy is being protected.  Second, because no organization wants to be in the headlines for a security breach, HITECH spurs organizations to improve their information security, enhance their response services, and put in place a platform to notify affected individuals if their information has been compromised.  I can, with all honesty, say that I do feel a bit more secure about my Protected Health Information.

I use HIPAA and HITECH as an example, not because it is the model information security regulation (it is not), but because it is a topic that everyone can relate to.  Similar security requirements stretch across most industries.  What HITECH has done for cloud service providers is enable them to build a common control platform, implement technologies that may be too expensive for some organizations to implement themselves, and leverage a world class security and compliance platform to ensure that the PHI, which is vital to the ongoing management of health care, remains secure, protected, and confidential.

When searching for a cloud provider, it is important to understand which of the controls the provider has built into the underlying platform are applicable to your compliance requirements.  I recommend asking these three questions:

  1. How many customers in my industry do you have on your cloud platform?
  2. May I see your most recent SSAE 16 SOC report or other applicable audit?
  3. What is the development lifecycle process your team undergoes to build cloud services and the underlying platform?

With a complete understanding of how ingrained security is in a cloud service provider’s technology and processes, you can begin to understand how it will deal with your sensitive data.

I would like to point out one pitfall.  Not all compliance programs apply to a cloud service provider’s customers.  For example, the SSAE 16 program is of great benefit to customers of cloud service providers, and customers to whom SSAE 16 extends can rely on the SOC report as part of their own internal controls and compliance.  On the other hand, a provider’s compliance with, for example, Safe Harbor does not extend to the customer; the customer must pursue Safe Harbor separately.

Remember, working with a reputable cloud service provider may be an excellent way to leverage expertise and processes you may not otherwise have in-house, and to mitigate some risk by assigning responsibility to a third party you can hold accountable for protecting your data.  The cloud is rapidly becoming the hosting platform of choice for highly regulated industries because more organizations are leveraging the expertise of these purely information-centric service providers.

 

Allen Allison, Chief Security Officer at NaviSite (www.navisite.com)

During his 20+ year career in information security, Allen Allison has served in management and technical roles, including the development of NaviSite’s industry-leading cloud computing platform; chief engineer and developer for a market-leading managed security operations center; and lead auditor and assessor for information security programs in the healthcare, government, e-commerce, and financial industries. With experience in systems programming, network infrastructure design/deployment, and information security, Allison has earned the highest industry certifications, including CCIE, CCSP, CISSP, MCSE, CCSE, and INFOSEC Professional. A graduate of the University of California, Irvine, Allison has lectured at universities and spoken at industry shows such as Interop, RSA Conference, Cloud Computing Expo, MIT Sloan CIO Symposium, and Citrix Synergy.

 

Cloud Security: Confident, Fearful, or Surprised

By Ken Biery

 

This two-part guest blog series explores the topic of cloud security.  Part one of the series focuses on the questions enterprise IT decision makers should ask when considering moving business applications to a cloud-based computing environment.

 

 

There is no shortage of information about cloud security. There are those who say the cloud is inherently more secure because of its ability to create and maintain a more hardened, centralized environment.  Others claim that, because of multi-tenancy, virtual systems and data will never be even modestly secure.

 

The big surprise about cloud security may be that there are not really any big surprises.  The good security practices that work in a traditional network also work for cloud-based IT.  The key is understanding how to apply security practices to a cloud environment and to develop a security strategy that uses known and sound security foundations to address various cloud environments.

 

A more secure cloud is the product of careful planning, design, and operations.  This begins with understanding the type of cloud (public, private, hybrid) that is being used and then its model, whether it be software-as-a-service (SaaS), platform-as-a-service (PaaS), or infrastructure-as-a-service (IaaS).  These two factors will determine the type and amount of security controls needed and who is responsible for them.

 

Public and Private Clouds

Public clouds typically have a limited number of security measures, providing a more open and flexible computing environment.  These clouds usually have a lower cost since their security features are basic.  While this may be perfectly acceptable for some circumstances, such as non-critical or non-sensitive environments, it will not usually meet the requirements of most enterprise users.

 

Public clouds also generate the most concern about using a shared virtualized environment.  These concerns are mainly centered on how to properly segment systems and isolate processing resources.  Segmentation and isolation can be challenging to accomplish and to measure, especially for an auditor or assessor looking at these primary security control areas.  Another factor is that many public cloud providers do not, or cannot, sufficiently support the types of controls required by enterprises to meet security and compliance requirements.

 

When considering a public cloud, it is important to ask the provider about its security measures, such as segmentation, firewalls/intrusion protection systems, monitoring, logging, access controls, and encryption.  The provider’s responses and transparency about the details of its environment’s security measures speak volumes about what to expect.  Also, you may want to do some searches on the provider, as it may have a reputation for harboring “bad neighborhoods,” which tend to host botnets or malware sites.

 

Private clouds can be internally hosted or located at a service provider’s facility.  For internally hosted clouds, just as in traditional environments, the security design and controls can be highly customized and controlled by the organization.  If hosted at a service provider, the number of controls can vary considerably depending on the model selected.  This is not to say that a service provider cannot provide a good set of default and optional security controls.  Obviously, this is why having a good understanding of the provider’s cloud design and its features, as well as your own requirements, is crucial.

 

Multi-tenancy, Segmentation, and Isolation

Multi-tenancy is one of the major issues when it comes to security and compliance in the cloud.  In some cases, multi-tenancy may require that an environment’s controls be set to the lowest level to support the broadest set of requirements for the largest number of potential users.  One of the main concerns around multi-tenancy is that, due to the use of a shared pool of computing resources, one entity’s virtual machine (VM) could compromise another entity’s VM.  A lack of proper segmentation between the two entities’ environments could make this possible.

 

This lack of separation can also create compliance challenges for multi-tenant environments.  Assessors and auditors are looking for sufficient controls to help prevent information leakage between virtual environment components.  Improperly configured hypervisors, management interfaces, and VMs have the potential to become a leading cause of non-compliance and risk exposure.  In a traditional network, if a system is misconfigured, it can be compromised.  If a virtual environment is misconfigured, it can compromise all of the systems within it.

 

It is important to note that there has not been any major publicly disclosed compromise of a hypervisor.  However, it is likely only a matter of time.  The virtualization layer is too tantalizing a target for hackers not to pursue aggressively.

 

One of the cleanest ways to show separation within a virtualized environment is to have VMs with compliance or higher security requirements run on dedicated physical hardware.  Yes, this is contrary to one of the benefits of cloud computing – until the effort and cost of compliance and robust security are considered.  This approach can be easier to establish and maintain, since only a small number of systems may need advanced protection.

 

Isolation needs to be enforced at the operating system (O/S) layer, and no two VMs should share an operating system instance. Specifically, the random-access memory (RAM), processor, and storage area network (SAN) resources should be logically separated, with no visibility into other client instances. From a network perspective, each entity is separated from the next by use of a private virtual local area network (VLAN).

 

The second part of this blog series will explore the cloud security best practices that can be employed to create a multi-layered defense for cloud-based computing environments.

 

About the Author

Ken Biery is a principal security consultant with Terremark, Verizon’s IT services subsidiary, focused on providing governance, risk, and compliance counsel to enterprises moving to the cloud. With extensive knowledge in the area of cloud computing, he enables companies around the globe to securely migrate to the cloud and create more efficient IT operations.

Test Accounts: Another Compliance Risk

By: Merritt Maximi

A major benefit of deploying identity management and/or identity governance in an organization is that these solutions provide the ability to detect and remove orphan accounts.  Orphan accounts are active accounts belonging to a user who is no longer involved with the organization.  From a compliance standpoint, orphan accounts are a major concern, since they mean that ex-employees and former contractors or suppliers still have legitimate credentials and access to internal systems.  Identity management and identity governance solutions can help identify potential orphan accounts, which the IT and audit teams can review to determine whether these accounts should be deleted.  By actively monitoring and managing orphan accounts, organizations can reduce IT risk and manage their users and entitlements more effectively.

However, there is another type of account that can present many of the same problems as orphans, yet is often overlooked during the certification and governance process. The accounts in question are test accounts, which reside in almost every application.  Test accounts serve a very valuable function, especially as organizations prepare to move a new application or version from test to production.  The test account is how IT verifies functionality.  Because of application requirements, most test accounts have full administrative privileges, meaning that the account has access to every capability in the given application.  The challenge is that test accounts serve a valuable purpose and cannot simply be removed.

The preferred best practice is to have test accounts only in the test environment, or the staging environment at most, but never in the production environment.

However, as is often the case in today’s highly complex, heterogeneous and distributed IT environment, test accounts often end up in production environments.  Even worse, these test accounts often lie undetected or in a large grouping of unaligned accounts.  And generally speaking, the longer an application has been in production in an organization, the greater the probability is that test accounts reside within those systems.

So what is the best approach for managing test accounts?

  1. First, if a test account is to reside in a production environment (and there may be legitimate business reasons for this), make sure that this account is assigned the fewest privileges possible.  This allows for some basic testing of the production system without exposing the entire application.
  2. Leave the full test accounts for the test and staging environments.
  3. Adopt an organization-wide common syntax for test accounts.  Name them all “test” or choose some other consistent convention.  This will make step 4 even easier.
  4. Conduct periodic audits of your production environments to identify potential test accounts.  This can be laborious manual work, but your auditors (and others) will thank you in the long run.  The simplest approach is to start with the group of unaligned accounts (those not tied to any individual) and then look for syntax like “test” or “12345,” which developers often use when naming test accounts (a minimal scan of this kind is sketched after this list).
  5. When conducting your periodic review of test accounts, you should also review the test accounts’ activity. This helps determine whether anyone used the test accounts for actual changes that could affect the production environment.  The impact could be indirect (e.g., policy changes made in the staging environment may impact production if, by error, they are automatically pushed into the production environment as part of some bigger configuration rollout), and analyzing the activity can help prevent these issues.
  6. Consider utilizing privileged user password management (PUM) functionality to protect all your test accounts, especially those in production.  PUM solutions can help mitigate the risk of test accounts by securing them in a secure, encrypted vault, and can ensure appropriate access to the passwords based on a documented policy.  Doing this also makes the users of test accounts accountable and means that test account use is no longer anonymous, since all user actions are securely recorded.
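The following is a minimal sketch of the kind of scan step 4 describes. The account-export format, the column names, and the naming patterns are illustrative assumptions; a real review would pull accounts from each application’s identity store and feed the results to the audit team.

```python
# Illustrative scan for likely test accounts in a production account export.
# Assumes a CSV export with "username" and "owner" columns, where a blank owner
# means the account is unaligned (not tied to any individual).
import csv
import re

SUSPECT_PATTERNS = [r"test", r"^12345", r"demo", r"dummy"]  # common developer naming habits

def find_suspect_accounts(export_path: str) -> list:
    suspects = []
    with open(export_path, newline="") as handle:
        for row in csv.DictReader(handle):
            username = row.get("username", "")
            unaligned = not row.get("owner", "").strip()
            looks_like_test = any(re.search(p, username, re.IGNORECASE) for p in SUSPECT_PATTERNS)
            if unaligned or looks_like_test:
                suspects.append(username)
    return suspects

if __name__ == "__main__":
    for account in find_suspect_accounts("production_accounts.csv"):
        print("Review:", account)
```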

In summary, test accounts are not the enemy, but they do represent a potential risk that every IT organization should manage.

When It Comes To Cloud Security, Don’t Forget SSL

By Michael Lin, Symantec

 

Cloud computing appears to be here to stay, bringing with it new challenges and security risks on one hand, while on the other boasting efficiencies, cost savings, and competitive advantage. Given the new security risks of the cloud and the mounting skill and cunning of today’s malicious players on the Web, Secure Sockets Layer (SSL) certificates are well placed to stand up to those risks. Providing SSL encryption and authentication, SSL certificates have long been established as a primary security standard for computing and the Internet, and a no-brainer for securely transferring information between parties online.


What is SSL?

SSL Certificates encrypt private communications over the public Internet. Using public key infrastructure, SSL consists of a public key (which encrypts information) and a private key (which deciphers information), with encryption mathematically encoding data so that only the key owners can read it. Each certificate provides information about the certificate owner and issuer, as well as the certificate’s validity period.

Certificate Authorities (CAs) issue each certificate, which serves as a credential for the online world, to only one specific domain or server.  When the browser connects, the server sends it the identification information along with a copy of its SSL Certificate. The browser verifies the certificate and sends a message to the server, and the server returns a digitally signed acknowledgement to start an SSL-encrypted session, allowing encrypted data to pass between the browser and the server.
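A small sketch of the client-side verification described above, using Python’s standard ssl and socket modules, is shown below. The host name is a placeholder, and this only illustrates the check that the certificate chains to a trusted CA and matches the server’s name, not the full handshake internals.

```python
# Sketch: open a TLS connection and inspect the server certificate.
# ssl.create_default_context() enforces chain validation and hostname matching,
# which is the browser-style verification described above.
import socket
import ssl

host = "www.example.com"  # placeholder host
context = ssl.create_default_context()  # uses the system's trusted CA store

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()
        print("Negotiated protocol:", tls.version())
        print("Cipher suite:", tls.cipher()[0])
        print("Issued to:", dict(x[0] for x in cert["subject"]).get("commonName"))
        print("Issued by:", dict(x[0] for x in cert["issuer"]).get("commonName"))
        print("Valid until:", cert["notAfter"])
```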


How does it secure data in the cloud?

If SSL seems a little old-school in comparison to the whiz-bang novelty of cloud computing, consider this:  since SSL offers encryption that prevents prying eyes from reading data traversing the cloud, as well as authentication to verify the identity of any server or endpoint receiving that data, it’s well-suited to address a host of cloud security challenges.

Where does my data reside, and who can see it? Moving to the cloud means giving up control of private and confidential data, bringing data segregation risks. Traditional on-site storage lets businesses control where data is located and exactly who can access it, but putting information in the cloud means putting location and access in the cloud provider’s hands.

This is where SSL swoops in to quell data segregation worries. By requiring cloud providers to use SSL encryption, data can securely move between servers or between servers and browsers. This prevents unauthorized interceptors from reading that data. And, don’t forget that SSL device authentication identifies and vets the identity of each device involved in the transaction, before one bit of data moves, keeping rogue devices from accessing sensitive data.

How can I maintain regulatory compliance in the cloud? In addition to surrendering control of the location of data, organizations also need to address how regulatory compliance is maintained when data lives in the cloud.  SSL encryption thwarts accidental disclosure of protected or private data according to regulatory requirements. It also provides the convenience of automated due diligence.

Will my data be at risk in transit? Putting data in the cloud usually means not knowing where it physically resides, as discussed earlier. The good news is that cloud providers using SSL encryption protect data wherever it goes. This approach not only safeguards data where it lives, but also helps assure customers that data is secure while in transit.

Another point to note here is that cloud providers using a legitimate third-party SSL CA will not issue SSL certificates to servers in interdicted countries, nor store data on servers located in those countries. SSL therefore further ensures that organizations are working with trusted partners.


Will any SSL do?

Recent breaches and hacks reinforce the fact that not all SSL is created equal, and neither are all CAs. Security is a serious matter and needs to be addressed as organizations push data to the cloud. Well-established best practices help those moving to the cloud make smart choices and protect themselves. Here are some things to keep in mind while weighing cloud providers:

  • Be certain that the cloud providers you work with use SSL from established and reliable independent CAs. Even among trusted CAs, not all SSL is the same, so choose cloud providers whose SSL certificates come from certificate authorities that:
  • Ensure the SSL supports at least AES 128-bit encryption, preferably the stronger AES 256-bit encryption, based on the new 2048-bit global root
  • Require a rigorous, annual audit of the authentication process
  • Maintain military-grade data centers and disaster recovery sites optimized for data protection and availability

Who will you trust? That’s the question with cloud computing, and with SSL. Anybody can generate and issue certificates with free software. Partnering with a trusted CA ensures that it has verified the identity information on the certificate. Therefore, organizations seeking an SSL Certificate need to partner with a trusted CA.

SSL might not be the silver bullet for cloud security, but it is a valuable tool with a strong track record for encrypting and authenticating data online. Amid new and complex cloud security solutions, with SSL, one of the most perfectly suited solutions has been here all along.

 

Securing Your File Transfer in the Cloud

By Stuart Lisk, Sr. Product Manager, Hubspan Inc. 

File transfer has been around since the beginning of time. Ok, maybe that is an exaggeration, but the point is that file transfer was one of the earliest uses of “network” computing, dating back to the early 1970s when IBM introduced the floppy disk. While we have been sharing files with each other for ages, the security of the data shared is often questionable.

Although the File Transfer Protocol (FTP) was published in 1971, it took until the mid-80s, as LANs began to find their way into the business environment, for systems to catch up to the original vision of FTP. During this period, transferring files internally became easier, and the ability to move files externally by leveraging the client-server topology eliminated the “here’s the disk” approach. If you think about it, these were pretty confined environments, with the client and server having a true relationship. Securing the file in this scenario had more to do with making sure that no one could access the data, as opposed to worrying about protecting the transport itself. Centralized control and access was the way of the world back in these “good ole days.”

Fast forward to the proliferation of the Internet and the World Wide Web, and the concern of securing files in transit to their destination became top of mind. IT managers were ultimately concerned that anyone within a company could log on via the web and access a self-service, cloud-based file transfer application without IT’s knowledge, adding to the security risk of file transfer.

Performing file transfer over the internet, via the “cloud”, has provided major benefits over the traditional methods.  In fact, we’ve seen that the ability to quickly deploy and provision file transfer activities actually drives more people to the cloud. However, along with the quick on-boarding of companies and individuals comes the challenge of ensuring secure connectivity, managed access, reporting, adaptability, and compliance.

Having a secure connection is not as easy as it should be. Many companies still utilize legacy file transfer protocols that don’t encrypt traffic, exposing the payload to anyone who can access the network. While the FTP protocol is dated, the majority of companies still use it.  According to a file transfer survey conducted in March 2011, over 70% of respondents currently utilize FTP as their primary transport protocol. Furthermore, over 56% of those responding stated that they use a mailbox or other email applications to transfer files.

In order for enterprises to move beyond FTP and ensure sensitive files are transferred securely, they must implement protection policies that include adherence to security compliance mandates, and do so with the same ease of use that exists with simple email. IT managers must be concerned with who is authorizing and initiating file transfers, as well as with controlling what gets shared. Any time files leave a company without going through proper file transfer policy checks, the business is put at risk. Typical email attachments and ad-hoc, web-based file transfer applications make it easy for someone to share files they shouldn’t.

In today’s computing environment, securing file transfer in the cloud requires the use of protocols that integrate security during transit and at rest.  Common secure protocols include Secure FTP (SFTP), FTPS (FTP over SSL), AS2, and HTTPS, to name a few. Companies should be actively looking at one of these protocols, as it will encrypt data while minimizing risk.
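As a quick illustration of moving off plain FTP, the sketch below uploads a file over SFTP using the widely used paramiko library. The host, account, key path, and file paths are placeholders, and key-based authentication against verified host keys is assumed rather than passwords.

```python
# SFTP upload sketch with paramiko (pip install paramiko): the session runs
# over SSH, so both credentials and file contents are encrypted in transit.
import os
import paramiko

HOST = "sftp.example.com"   # placeholder endpoint
USERNAME = "transfer-user"  # placeholder account

client = paramiko.SSHClient()
client.load_system_host_keys()                         # trust only known host keys
client.set_missing_host_key_policy(paramiko.RejectPolicy())

client.connect(HOST, username=USERNAME,
               key_filename=os.path.expanduser("~/.ssh/id_rsa"))
try:
    sftp = client.open_sftp()
    sftp.put("payroll_2011.csv", "/inbound/payroll_2011.csv")  # placeholder paths
    print(sftp.stat("/inbound/payroll_2011.csv"))              # confirm the upload landed
    sftp.close()
finally:
    client.close()
```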

When leveraging the cloud for file transfer, IT managers need to be sure that the application and/or vendor they are working with utilizes a proven encryption method. Encrypting the file when it is most vulnerable, in transit, is best. Additionally, IT managers would be wise to work with cloud vendors that have security already built into their platform.  Built-in encryption, certification, and validation of data are vital to ensure safe delivery of files. While you may not have influence over what your partner implements as its transport, you can take steps to mitigate issues. In fact, today there are a number of file transfer applications that validate content both before and after the transfer occurs.

Another area of focus for IT managers when assessing file transfer security is access controls. Simply put: who has access, and to what data?  Companies must have a plan to control access to each file and to the data stored there. Again, in this scenario, encrypting the file and the methods used to access it is the best way to mitigate a breach. As mentioned earlier, FTP does not protect credentials from predators. More than 30% of the respondents to the March survey indicated that access control is one of the most important criteria for cloud-based transfers.

Receipt notification is yet another way for senders to ensure their confidential files are delivered to, and opened by, the right people.  Additionally, using file transfer applications that apply an expiration time limiting how long the file remains available is a great way to mitigate unauthorized access.
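One common way to implement that kind of expiration is an expiring, signed download link, sketched below. The secret, URL shape, and lifetime are illustrative assumptions rather than any particular product’s mechanism.

```python
# Sketch of an expiring, signed download link: the link embeds an expiry time
# and an HMAC so it cannot be altered, and the server refuses it after expiry.
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # placeholder; held only by the file transfer service

def make_link(file_id: str, lifetime_seconds: int = 3600) -> str:
    expires = int(time.time()) + lifetime_seconds
    payload = f"{file_id}:{expires}".encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return f"https://transfer.example.com/download/{file_id}?expires={expires}&sig={signature}"

def link_is_valid(file_id: str, expires: int, sig: str) -> bool:
    payload = f"{file_id}:{expires}".encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and time.time() < expires

print(make_link("report-42"))  # share this; it stops working after one hour
```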

As mentioned earlier, adhering to industry and corporate compliance policies is critical. Corporate governance regulations include, but are not limited to:

  • Sarbanes-Oxley Section 404: Requires audit trails, authenticity, record retention
  • HIPAA requirements: Record retention, privacy protection, service trails
  • 21 CFR Part 11: Record retention, authenticity, confidentiality, audit trails
  • Department of Defense (DOD) 5015.2: Record authenticity, protection, secure shredding

While there are many criteria to consider when deciding how to implement and leverage file transfer activities within your organization, there are really a few simple areas to focus on:

  • Choose a secure protocol
  • Implement data protection in-transit and at-rest
  • Utilize effective encryption technology
  • Maximize access controls
  • Leverage auditing and reporting functionality
  • Adhere to corporate and industry compliance policies

While that may seem like an endless number of steps, it can be easier than it sounds as long as you evaluate and execute file transfer activity in a way that protects and secures your sensitive data.

 

Stuart Lisk, Senior Product Manager, Hubspan

Stuart Lisk is a Senior Product Manager for Hubspan, working closely with customers, executives, engineering and marketing to establish and drive an aggressive product strategy and roadmap.  Stuart has over 20 years of experience in product management, spanning enterprise network, system, storage and application products, including ten years managing cloud computing (SaaS) products. He brings extensive knowledge and experience in product positioning, messaging, product strategy development, and product life cycle development process management.  Stuart holds a Certificate of Cloud Security Knowledge (CCSK) from the Cloud Security Alliance, and a Bachelor of Science in Business Administration from Bowling Green State University.

 

CSA Blog: The “Don’t Trust Model”

By Ed King

The elephant in the room when it comes to barriers to the growth and adoption of Cloud computing by enterprises is the lack of trust in Cloud service providers.  Enterprise IT has legitimate concerns over the security, integrity, and reliability of Cloud based services.  The recent high profile outages at Amazon and Microsoft Azure, as well as security issues at DropBox and Sony, only add to the argument that Cloud computing poses substantial risks for enterprises.

 

Cloud service providers realize this lack of trust is preventing enterprise IT from completely embracing Cloud computing.  To ease this concern, Cloud service providers have traditionally taken one or both of the following approaches:

  1. Cloud service providers, especially the larger ones, have implemented substantial security and operational procedures to ensure customer data safety, system integrity, and service availability.  This typically includes documenting the platform’s security architecture and data center operating procedures, and adding service-side security options like encryption and strong authentication.  On top of this, they obtain SAS-70 certification to provide proof that “we did what we said we would do.”
  2. Cloud service providers also like to point out that their security and operational technology and controls are no worse than, and indeed are probably better than, the security procedures most enterprises have implemented on their own.

 

Both of these approaches boil down to a simple maxim: “trust me, I know what I am doing!”  This “Trust Me” approach has launched the Cloud computing industry, but to date most large enterprises have not put mission critical applications and sensitive data into the public Cloud.  As enterprises look to leverage Cloud technologies for mission critical applications, the talk has now shifted towards private Cloud, because fundamentally the “Trust Me” approach has reached its limit.

 

In terms of further development, Cloud service providers must come to the realization that enterprises will never entrust them with business critical applications and data unless they have more direct control over security, integrity, and availability.  No amount of documentation, third party certification, or on-site auditing can mitigate risks enough to replace the loss of direct control.  The sooner the industry realizes that we need solutions that give Cloud control back to the customer, the sooner enterprises and the industry will reap the true commercial benefits of Cloud computing.  As such, the approach becomes: “you don’t have to trust your Cloud providers, because you own the risk-mitigating controls.”  Security professionals normally talk about best practice approaches to implementing trust models for IT architectures. I like to refer to this self-enablement of the customer as the “Don’t Trust Model.”  Let’s examine how we can put control back into the customer’s hands so we can shift to a “Don’t Trust Model.”

 

Manage Cloud Redundancy

Enterprises usually dual-source critical services and build redundancy into their mission critical infrastructures.  Why should Cloud based services be any different?  When Amazon Web Services (AWS) experienced an outage on April 21, 2011, a number of businesses that used AWS went completely offline, but Netflix did not.  Netflix survived the outage with some degradation in service because it had designed redundancy into its Cloud based infrastructure, spreading it across multiple vendors.  Features like stateless services and fallback are designed specifically to deal with scenarios such as the AWS outage (see an interesting technical discussion on Netflix’s Tech Blog).  Technologies like Cloud Gateway, Cloud Services Broker, and Cloud Switch can greatly simplify the task of setting up, managing, monitoring, and switching between redundant Cloud providers.

 

For example, a Cloud Gateway can provide continuous monitoring of Cloud service availability and quality.  When service quality dips below a certain threshold, the Cloud Gateway can send out alerts and automatically divert traffic to back-up providers.
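A toy version of that monitoring-and-failover behavior might look like the sketch below. The endpoints, latency threshold, and polling interval are illustrative assumptions; a real Cloud Gateway would add alerting, hysteresis, and actual traffic redirection rather than simply printing a choice.

```python
# Toy availability monitor: measure response time of the primary provider and
# "divert" to a backup when quality dips past a threshold. Endpoints are placeholders.
import time
import requests

PRIMARY = "https://api.primary-cloud.example.com/health"
BACKUP = "https://api.backup-cloud.example.com/health"
MAX_LATENCY_SECONDS = 2.0

def healthy(url: str) -> bool:
    try:
        started = time.monotonic()
        response = requests.get(url, timeout=MAX_LATENCY_SECONDS)
        return response.ok and (time.monotonic() - started) <= MAX_LATENCY_SECONDS
    except requests.RequestException:
        return False

def choose_endpoint() -> str:
    return PRIMARY if healthy(PRIMARY) else BACKUP

if __name__ == "__main__":
    while True:
        print("Routing traffic to:", choose_endpoint())
        time.sleep(30)  # polling interval; a real gateway would also raise alerts
```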

Put Security Controls On-premise

Salesforce.com (SFDC) is the poster child of a successful Cloud based service.  However, as SFDC expanded beyond the small and medium business sector to go after large enterprises, it found a more reluctant customer segment due to concerns over data security in the Cloud.  On August 26, 2011, SFDC bought Navajo Systems, acquiring a technology that puts security control back in the hands of SFDC customers.  Navajo Systems provides a Cloud Data Gateway: a solution that encrypts and tokenizes data before it is stored in the Cloud.

 

A Cloud Data Gateway secures data before it leaves the enterprise premises.  The Gateway monitors data traffic to the Cloud and enforces policies to block, remove, mask, encrypt, or tokenize sensitive data.  The technology has different deployment options: using a combination of Gateways at the Cloud service provider and Gateways on-premise, different levels of data security can be achieved.  Because customers control data security before the data leaves the premises, they do not have to rely on the Cloud service provider alone to ensure the safekeeping of their data.
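To illustrate the tokenization side of what such a gateway does, the sketch below swaps sensitive field values for random tokens before a record leaves the premises and keeps the token-to-value map locally. The in-memory vault and the field names are purely illustrative assumptions; a real gateway would persist the vault securely on-premise and apply policy-driven rules per field.

```python
# Tokenization sketch: sensitive fields are replaced with opaque tokens before a
# record is sent to the cloud; the mapping stays on-premise so the provider never
# sees real values. The in-memory dict stands in for a secured on-premise vault.
import secrets

class TokenVault:
    def __init__(self):
        self._token_to_value = {}

    def tokenize(self, value: str) -> str:
        token = "tok_" + secrets.token_hex(8)
        self._token_to_value[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()
record = {"name": "Jane Doe", "ssn": "123-45-6789", "plan": "gold"}
sensitive_fields = ("name", "ssn")

outbound = {k: (vault.tokenize(v) if k in sensitive_fields else v) for k, v in record.items()}
print("Sent to cloud:", outbound)  # provider sees only tokens for sensitive fields
print("Recovered on-premise:",
      {k: (vault.detokenize(v) if k in sensitive_fields else v) for k, v in outbound.items()})
```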

Integrate Cloud With Enterprise Security Platforms

Enterprises have spent millions of dollars on security infrastructure, including identity and access management, data security, and application security.  The deployments of these technologies are accompanied by supporting processes such as user on-boarding, data classification, and software development lifecycle management.  These processes take years to rollout and provide critical controls to mitigate security risks.  These tools and processes will evolve to incorporate new technologies like Cloud computing and mobile devices, but for Cloud computing to gain acceptance within the enterprise, Cloud services must be seamlessly integrated into existing security platforms and processes.

 

Single sign-on (SSO) is a great example.  After years of effort deploying an enterprise access management solution like CA SiteMinder, Oracle Access Manager, or IBM Tivoli Access Manager to enable SSO, and after finally training all users on how to perform a password reset, do you think IT has the appetite to let each Cloud service become a security silo?  Users simply expect SSO to be SSO, not “SSO, excluding Cloud based services.”  Most major Cloud service providers support standards such as SAML (Security Assertion Markup Language) for SSO and provide detailed instructions on how to integrate with on-premise access management systems.  Usually this involves some consulting work and maybe a third party product.  A more scalable approach is to use technologies such as an Access Gateway (also known as a SOA Gateway, XML Gateway, or Enterprise Gateway) to provide out-of-the-box integrations with access management platforms.  Gateway based solutions extend existing access policies and SSO processes to Cloud based services, placing access control back with information security teams.

 

It’s clear that more needs to be done to place control back in the hands of the customer.  Cloud computing is a paradigm shift and holds great promise for cost savings and new revenue generation.  However, to accelerate the acceptance of Cloud computing by enterprise IT, we as an industry must change our way of thinking from a trust model to a “Don’t Trust” model.

 

Ed King VP Product Marketing, Vordel
Ed has responsibility for Product Marketing and Strategic Business Alliances. Prior to Vordel, he was VP of Product Management at Qualys, where he directed the company’s transition to its next generation product platform. As VP of Marketing at Agiliance, Ed revamped both product strategy and marketing programs to help the company double its revenue in his first year of tenure. Ed joined Oracle as Senior Director of Product Management, where he built Oracle’s identity management business from a niche player to the undisputed market leader in just three years. Ed also held product management roles at Jamcracker, Softchain, and Thor Technologies. He holds an engineering degree from the Massachusetts Institute of Technology and an MBA from the University of California, Berkeley.

Seven Steps to Securing File Transfer’s Journey to the Cloud

By Oded Valin, Product Line Manager, Cyber-Ark Software

 

“When it absolutely, positively has to be there overnight.”  There’s a lot we can identify with when it comes to reciting FedEx’s famous slogan, especially as it relates to modern file transfer processes. When you think about sharing health care records, financial data or law enforcement-related information, peace of mind is only made possible when utilizing technology and processes that are dependable, trustworthy – and traceable.  Organizations that rely on secure file transfer to conduct business with partners, customers and other third-parties must maintain the same level of confidence that that slogan inspired.  Now, consider taking the transfer of sensitive information to the cloud.  Still confident?

 

In many ways, when you consider the number of USB sticks that have been lost in the past six-to-nine months due to human error or the number of FTP vulnerabilities that have been routinely exploited, it’s clear there must be a better way.

 

For organizations seeking a cost-effective solution for exchanging sensitive files that can be deployed quickly and with minimal training, it may be time to consider cloud-based alternatives.  But how can organizations safely exchange sensitive files in the cloud while maintaining security and compliance requirements, and remaining accountable to third-parties?  Following are seven steps to ensuring a safe journey for taking governed file transfer activities to the cloud.

 

For those organizations interested in starting off on the right foot for a cloud-based governed file transfer project, either starting from scratch or migrating from an existing enterprise program, here are important steps to consider:

 

  1. Identify Painful and Costly Processes: Examine existing transfer processes and consider costs to maintain them. Do they delay the business and negatively impact IT staff? If starting from scratch, what processes must you be securing and ensuring are free from vulnerabilities in the cloud?  Typically, starting a file transfer program from scratch requires significant IT and administrative investments ranging from setting up the firewall and VPN to engaging with a courier service to handle files that are too large to be transferred electronically.  The elasticity of the cloud enables greater flexibility and scalability and significantly decreases the amount of time and resources required to establish a reliable program.  Utilizing a cloud-based model, organizations can become fully operational within days or weeks versus months, while reducing the drag on IT resources.  Ultimately, in cases like one Healthcare provider that turned to the cloud to share images with primary MRI and CT scan providers, services being provided to the patient were more timely and efficient, and less expensive.
  2. Define Initial Community: Who are the users – internal? external?  When exchanging files with third-party partners, particularly business users, it’s important to provide a file transfer solution that works the way they work.  User communities are increasingly relying on tablets and browser-based tools to conduct business, so the file transfer process and user-interface must reflect the community’s skill sets and computing preferences.  The ease of deployment and the level of customization made possible in cloud-based environments encourage adoption and effective use of file transfer solutions.
  3. Determine File Transfer Type: Do you need something scalable or ad-hoc? How important is automation?  Compared to manual file transfer process, a cloud computing environment can support centralized administration for any file type while also providing the benefits of greater storage, accommodation for large file transfers and schedule-based processes, all without negatively impacting server or network performance.
  4. Integrate with Existing Systems: Can you integrate your existing systems with a cloud-based file transfer solution? What automation tools are provided by the cloud vendor?  Many organizations believe that file transfer systems are stand-alone platforms that can’t be integrated with existing systems, like finance and accounting, for example.  Utilizing a flexible cloud-based solution with open APIs and out of the box plug-ins not only assists with secure integration with current databases and applications, but it can also be deployed very quickly with the flexibility to support the adoption of a hybrid cloud/on-premise model, should the organization decide that scenario worked best for its business.
  5. Define Workflows: Examine how business, operations and security are interrelated.  What regulations and transparency requirements need to be considered?  How are they different in the cloud?  Ensure segregation of duties between operations and content, and between the content owners themselves.  Organizations seeking to adopt a cloud-based file transfer solution must make sure the service provider can support their user-defined workflows. It’s also important to ensure your cloud vendor goes “beyond the basics.”  Specifically, many file sharing services allow organizations to share data and information simply from Point A to Point B.  But, if you need additional functionality like automatically converting files to PDF and adding a watermark for extra security, managing audit permissions, scanning files for viruses, and other advanced features, an enterprise class cloud solution is necessary.
  6. Continuous Monitoring: Take steps to ensure file download activity is being monitored, file exchange validated and transfers are smooth. Organizations must be able to verify when files arrived and know who opened them. These actions are absolutely supported in a cloud environment, and are overall governed file transfer best practices.
  7. Ongoing Operations: Is it quick and easy to add new partners or set up new file transfer processes? How reliable is the service in terms of high availability, disaster recovery and automatic recovery of file transfer processes?  The cloud-based solution should provide an easy-to-use interface to empower the business user and encourage autonomy at the operations level without requiring IT involvement. Additionally, organizations should find a cloud provider that offers a simple pricing model.  For example, paying per email is not scalable and doesn’t align with typical business use.  Finally, you shouldn’t have to fly alone; be sure to take advantage of all the consulting services and expertise your service provider offers to support ongoing operations without interruption.

 

To conclude, given the traditional reliance on antiquated technologies and unreliable processes, it’s absolutely time for organizations to consider adopting cloud-based approaches to governed file transfer activities.  Moving beyond the well-established cost and resource benefits of the cloud, for those companies with complex requirements or special file transfer needs, the flexibility and security that are possible in the cloud will ensure that high quality standards are continuously met and that the confidence and peace of mind necessary to secure your file transfer’s trip to the cloud are achieved. Rest assured.

 

Oded Valin is a Product Line Manager at Cyber-Ark Software (www.cyber-ark.com). Drawing on his 15 years of high-tech experience, Valin’s responsibilities include leading definition and delivery of Cyber-Ark’s Sensitive Information Management product line, product positioning and overall product roadmap.

Five Ways to Achieve Cloud Compliance

With the rapid adoption of cloud computing technologies, IT organizations have found a way to deliver applications and services more quickly and efficiently to their customers, incorporating the nearly ubiquitous, utility-like platforms of managed cloud services companies.  The use of these cloud technologies is enabling the delivery of messaging platforms, financial applications, Software as a Service offerings, and systems consolidation in a manner more consistent with the speed of the business.

However, audit and compliance teams have been less aggressive in adopting cloud technologies as a solution of choice for a variety of reasons: there may be a lack of understanding of what security components are available in the cloud; there may be a concern that the controls in the cloud are inadequate for securing data; or there may be a fear that control over the environment is lost when the application and data move to the cloud.  While these concerns are understandable, there is an ever-growing recognition of the security and compliance benefits available in managed cloud services, which is putting the minds of corporate audit and compliance teams at ease.

Here are five steps you can take to ensure that your audit and compliance team is comfortable with the cloud:

1.       Understand and be able to relay the compliance requirements to your cloud service provider.  I have worked with organizations in all industries subject to a wide variety of regulations, and the most successful organizations adopting cloud come with a very in-depth understanding of what security controls and technologies are necessary to meet their own compliance requirements.  For example, we had a large provider of healthcare services approach us with a request to move a portion of their environment to the cloud.  This environment contained Protected Health Information (PHI), and the customer knew that, in order to pass their audit, they must be able to:

a)      Enforce their own security policies in the new environment including password policies, standard builds, change management, incident handling, and maintenance procedures.

b)      Incorporate specific technologies in the environment including file integrity monitoring, intrusion detection, encryption, two-factor authentication, and firewalls.

c)       Integrate the security architecture into their already robust security operations processes for multisite event correlation, security incident response, and eDiscovery.

By ensuring that the cloud environment was architected from the very beginning with those controls in mind, the audit and compliance team had very little work to do to ensure the new environment would be consistent with the corporate security policies and achieve HIPAA compliance.
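As a minimal illustration of the kind of pre-engagement gap analysis just described, the sketch below compares the controls an audit requires against the controls a provider can demonstrate. The control names and the sample provider data are illustrative assumptions, not a standard taxonomy.

    # Minimal sketch: compare required controls against what the provider reports.
    # Control names and the sample provider data are illustrative assumptions.
    REQUIRED_CONTROLS = {
        "password_policy", "standard_builds", "change_management",
        "incident_handling", "file_integrity_monitoring", "intrusion_detection",
        "encryption_at_rest", "two_factor_authentication", "firewalls",
    }

    def gap_analysis(provider_controls):
        """Return the required controls the provider cannot demonstrate."""
        return REQUIRED_CONTROLS - set(provider_controls)

    provider = {"password_policy", "firewalls", "encryption_at_rest",
                "intrusion_detection", "change_management"}

    print("Controls to negotiate or supply yourself:", sorted(gap_analysis(provider)))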

2.       Select a cloud provider with a history of transparency in security and policies built into the cloud platform.  It is extremely important that the controls in place supporting the cloud infrastructure are consistent with those of your organization or that the cloud provider has the flexibility to incorporate your controls into the cloud environment that will house your data.  It is important to note that compliance is not one-size-fits-all. An example of this is the financial industry, where there are very specific controls that must be incorporated into an IT infrastructure, such as data retention, data classification, business continuity, and data integrity.  Be sure that the managed cloud services provider is able to incorporate those policies that differ from the standard policies.  Key policies and services that are often adjustable for different industries include the following:

a)      Data and Backup Retention

b)      Data encryption at rest and in transit

c)       Business resumption and continuity plans

d)      eDiscovery and data classification policies

e)      Data integrity assurance

f)       Identity and access management

Most organizations maintain a risk management program. If your company has a risk assessment process, include your provider early to ensure that the controls you need are included.  If your organization does not, there are several accessible questionnaires that you can tailor to suit your needs.  Two great resources are the Cloud Security Alliance (https://cloudsecurityalliance.org) and the Shared Assessments program (http://www.sharedassessments.org).

3.       Understand what the application, the data, and the traffic flow look like.  It is not uncommon for a cloud customer not to understand exactly what data exists in the system and what controls need to be incorporated.  For example, one of the early adopters of cloud services I worked with years ago did not know that the application it hosted processed credit card transactions on a regular basis.  When they first came to us, they wanted to put their Software as a Service application in the cloud, not knowing that one of their customers used it to process credit cards in a high-touch retail model; the Payment Card Industry Data Security Standard (PCI DSS) was the furthest thing from their mind.  After the end customer performed an audit, the gaps in security and policies were closed by incorporating the policies and technologies made available in the cloud platform.  Further, by understanding the transaction and process flow, the customer was able to reduce costs by segmenting the cardholder environment from the rest of the environment and implementing the more stringent security controls only where cardholder data resides.

4.       Clearly define the roles and responsibilities between your organization and the managed cloud services provider.  Some of the roles and responsibilities in a hosted service clearly belong to the hosting provider, and some clearly belong to the customer.  For example, in cloud, the underlying cloud infrastructure, its architecture, its maintenance, and its redundancy is clearly the responsibility of the provider; likewise, the application (in many cases) and all of the data maintenance is clearly the responsibility of the customer.  However, how an organization assigns roles and responsibilities for everything in between and assigns responsibility for the ongoing compliance of those roles and responsibilities is extremely important to the ongoing management of the compliance program. Remember that some of the controls and security technologies may be in addition to the cloud platform, and your requirements may result in additional services and scope.

5.       Gain an understanding of the certifications and compliance you can leverage from your managed cloud services provider. Your managed cloud services provider may have an existing compliance program that incorporates many of the controls that your audit team will require when assessing the compliance of the cloud environment.  In many cases, this compliance program, and the audited controls, can be adopted and audited as though they were those of your organization.  For example, some cloud providers have included the cloud platform and customer environments in their SSAE 16 (formerly SAS70) program.  The SSAE 16 compliance program is audited by a third party, and provides the assurance that the controls and policies that are stated within the provider’s compliance program are in place and followed.  By inclusion into that compliance program, you may provide your auditors with a quick path to assessment completion.

The most important thing to remember in moving your environment to the cloud is to be sure to have conversations early and often with your provider regarding your requirements and the specific expectations of the provider.  They should be able to provide the information necessary to be sure that your environment includes all of the security and controls to achieve your company’s compliance and certifications.

 

Allen Allison, Chief Security Officer at NaviSite (www.navisite.com)

During his 20+ year career in information security, Allen Allison has served in management and technical roles, including developing NaviSite’s industry-leading cloud computing platform, serving as chief engineer and developer for a market-leading managed security operations center, and acting as lead auditor and assessor for information security programs in the healthcare, government, e-commerce, and financial industries. With experience in systems programming, network infrastructure design/deployment, and information security, Allison has earned the highest industry certifications, including CCIE, CCSP, CISSP, MCSE, CCSE, and INFOSEC Professional. A graduate of the University of California, Irvine, Allison has lectured at universities and spoken at industry shows such as Interop, RSA Conference, Cloud Computing Expo, MIT Sloan CIO Symposium, and Citrix Synergy.

 

Cloud Hosting and Security Demystified

I am always amazed when I read the daily cloud blogs, articles and news headlines. Any given day will bring conflicting points of view from cloud industry experts and pundits on how secure clouds are, both private and public.  There never seems to be a real consensus on how far security in the cloud has evolved.  How, then, can any corporate CIO sort through the conflicting information and make an informed decision?  The good news is that several cloud industry publications, security vendors and research organizations are making a concerted effort to cut through the hype and provide CIOs with unbiased, research-driven data to help with the decision-making process.

 

According to Gartner’s 2011 CIO Agenda survey, just 3% of the CIOs surveyed say the majority of their IT operations are in the cloud today. Looking ahead, 43% say that within four years they expect to have the majority of their IT running in the cloud on Infrastructure-as-a-Service (IaaS) or Software-as-a-Service (SaaS) technologies. This article will review the security issues that are holding back CIOs right now, and what will be needed to accelerate that growth.

 

CIOs have a fiduciary duty and the ultimate responsibility, legally and ethically, to ensure that the corporation’s sensitive information and data are protected from unauthorized access.  CIOs also have limited budgets and resources to work with, so they are always researching new and emerging technologies that will reduce cost, increase security and scalability, and maximize efficiencies in their infrastructure.  Independent studies have demonstrated that both IaaS and SaaS cloud models decrease cost, increase scalability and are extremely efficient when it comes to rapid deployment of new systems.  So what are the main security issues that have CIOs delaying a move to the cloud?

 

Perceived Lack of Control in the Cloud

 

To a CIO, control is everything; on the surface hosting your sensitive information on an outsourced, shared, multi-tenant cloud platform would seem like a complete surrender and loss of control.   How can you control risk and security of an information system that resides in someone else’s data center and is co-managed by outsourced personnel?

 

There are several secure cloud service providers that understand this concern and have built their entire core business around providing facilities, services, policies and procedures that give their clients complete transparency and control over their information systems. Most secure cloud service providers have adopted and implemented the same security best practices and regulatory and compliance controls that CIOs enforce inside their own organizations, such as PCI DSS 2.0, NIST 800-53, ISO 27001 and ITIL.

 

In fact, CIOs can leverage a secure CSP’s infrastructure and services that may otherwise be cost-prohibitive to implement internally, giving them greater control over their information systems and sensitive data than they might have if those systems were hosted internally.

 

Another area of concern for CIOs is the perceived outsourcing of the risk management of their systems.  A great deal of trust is required between a secure CSP and a CIO.  The CIO depends on the cloud service provider for patch management, vulnerability scanning, virus/malware detection, intrusion detection, firewall management, network management, account management, log management and more.  Surely outsourcing all of these critical tasks constitutes a loss of control, right?  Wrong!  As part of their standard service offering, most secure cloud service providers give customers system access, dashboards, portals, and configuration and risk reports in real time, giving CIOs complete control of and transparency into their systems.  In fact, CIOs should consider secure cloud service providers an extension of their own IT departments.

 

Multi-tenant Cloud Security – is it possible?

 

One area that keeps CIOs and potential cloud adopters awake at night is the idea that their virtual machines and data will reside on the same server as other customers’ VMs and data.  In addition, multiple customers will be accessing the same server remotely. As discussed above, to a CIO control is everything.  So is it possible to isolate and secure multiple environments in a multi-tenant cloud?  The answer is yes.

 

So how do you secure a virtual environment hosted in a multi-tenant cloud?  The same security best practices that apply to a dedicated, standalone information system also apply to a VM.  Virtual machines live in a virtual network on the hypervisor, the operating system that your virtual machines run on top of.  Through VM isolation, each customer’s VMs sit on their own virtual network, isolated from other tenants’ VMs; there is no way for other tenants to see your VMs or your data.  The same goes for network security: you simply implement firewalls in front of your VMs, just as you would in front of a dedicated system.
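A highly simplified sketch of that isolation model appears below. It only models the policy (per-tenant networks plus default-deny filtering); in practice the enforcement lives in the hypervisor's virtual switch and the provider's firewalls, and the networks, addresses and ports shown are illustrative assumptions.

    # Simplified model of per-tenant isolation: each tenant gets its own virtual
    # network, and a default-deny rule set only admits that tenant's traffic plus
    # the provider's management network. Addresses and ports are illustrative.
    from dataclasses import dataclass

    @dataclass
    class FirewallRule:
        action: str      # "allow" or "deny"
        src_cidr: str
        dst_cidr: str
        port: int = 0    # 0 means any port

    def tenant_ruleset(tenant_net, mgmt_net):
        """Default-deny: only the tenant's own network and the provider's
        management network may reach the tenant's VMs."""
        return [
            FirewallRule("allow", tenant_net, tenant_net),          # intra-tenant traffic
            FirewallRule("allow", mgmt_net, tenant_net, port=22),   # provider management
            FirewallRule("deny", "0.0.0.0/0", tenant_net),          # everyone else
        ]

    for rule in tenant_ruleset("10.20.0.0/24", "192.168.100.0/24"):
        print(rule)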

 

Another area of concern for CIOs that should not be left out is disk wiping and data remanence.  In a public, multi-tenant cloud environment, customer data is typically co-mingled on a shared storage device.  Conventional wisdom says that the only way to truly remove data from a disk drive is to literally shred the drives, and degaussing disks is time-consuming, expensive and impractical for a public cloud environment.  So what can a cloud service provider do to address this problem and assure CIOs, system owners, and security and compliance officers that their data has been completely wiped from all storage in the public cloud?  Again, the approach is the same as it would be for a dedicated system: using a DoD-approved disk wiping utility, you can boot the VM with the utility and perform the recommended number of passes to properly wipe the data from the shared storage.
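For illustration only, the sketch below shows the multi-pass overwrite idea applied to a virtual disk image file. It is not a DoD-certified utility and omits the verification and reporting a real wiping tool performs; the image path is a placeholder.

    # Illustrative multi-pass overwrite of a virtual disk image (not a certified
    # wiping utility). Each pass writes random bytes over the entire file before
    # the storage is returned to the shared pool.
    import os

    def wipe(path, passes=3, block_size=1024 * 1024):
        size = os.path.getsize(path)
        for p in range(passes):
            with open(path, "r+b") as disk:
                written = 0
                while written < size:
                    chunk = min(block_size, size - written)
                    disk.write(os.urandom(chunk))
                    written += chunk
                disk.flush()
                os.fsync(disk.fileno())
            print("pass %d/%d complete" % (p + 1, passes))

    # Example (placeholder path): wipe("/var/lib/images/tenant42.img")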

 

In summary, there are a variety of reasons CIOs are delaying their move to the cloud, from lifecycle management considerations to budgetary constraints.  One concern that should not delay the move is cloud security.  If architected and configured properly, using security best practices, both private and public clouds can securely host and protect your information systems and sensitive data.

 

Mark McCurley is the Director of Security and Compliance for FireHost, where he oversees security feature development and management of the company’s cloud hosting platform and PCI-compliant hosting environments. Prior to joining FireHost, McCurley played a key role in the development of a large managed service provider’s compliance practice, focused on delivering IT security, compliance and C&A services to commercial and federal agencies. His career has centered on data centers and customer IT systems that must adhere to federal, DoD and commercial compliance mandates and directives. He holds CISSP, CAP and Security+ certifications, and specializes in security and compliance for the following federal, DoD and commercial mandates: DIACAP, FISMA, SOX, HIPAA and PCI.

Cloud Signaling – The Data Center’s Best Defense

By Rakesh Shah, Director, Product Marketing & Strategy at Arbor Networks

Recent high-profile security incidents have heightened awareness of how Distributed Denial of Service (DDoS) attacks can compromise the availability of critical Web sites, applications and services.  Any downtime can result in lost business, brand damage, financial penalties, and lost productivity. For many large companies and institutions, DDoS attacks have been a sobering wake-up call, and threats to availability are one of the biggest potential hurdles to moving to, or rolling out, a cloud infrastructure.

Arbor Networks’ sixth annual Worldwide Infrastructure Security Report shows that DDoS attacks are growing rapidly and can vary widely in scale and sophistication. At the high end of the spectrum, large volumetric attacks, reaching sustained peaks of 100 Gbps have been reported. These attacks exceed the aggregate inbound bandwidth capacity of most Internet Service Providers (ISPs), hosting providers, data center operators, enterprises, application service providers (ASPs) and government institutions that interconnect most of the Internet’s content.

At the other end of the spectrum, application and service-layer DDoS attacks focus not on denying bandwidth but on degrading the back-end computation, database and distributed storage resources of Web-based services. For example, service or application-level attacks may cause an application server to patiently wait for client data—thus causing a processing bottleneck.  Application-layer attacks are the fastest-growing DDoS attack vector.
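The "patiently waiting" failure mode is easy to picture in code. The sketch below shows one basic application-layer defense: a per-connection read deadline so a slow or stalled client (a Slowloris-style attacker, for instance) cannot pin a server worker indefinitely. It is a minimal illustration, not a complete DDoS mitigation.

    # Minimal illustration: without a read deadline, one slow client can hold a
    # worker forever while the server "patiently waits" for the rest of the
    # request. A per-connection timeout is a basic application-layer defense.
    import socket

    def handle_client(conn):
        conn.settimeout(10)                        # client gets at most 10s to send its request
        try:
            request = b""
            while b"\r\n\r\n" not in request:      # read until the end of the HTTP headers
                chunk = conn.recv(4096)
                if not chunk:
                    return
                request += chunk
            conn.sendall(b"HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nok")
        except socket.timeout:
            pass                                   # slow or stalled client: drop the connection
        finally:
            conn.close()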

Detecting and mitigating the most damaging attacks is a challenge that must be shared by network operators, hosting providers and enterprises. The world’s leading carriers generally use specialized, high-speed mitigation infrastructures—and sometimes the cooperation of other providers—to detect and block attack traffic. Beyond ensuring that their providers have these capabilities, enterprises must also deploy intelligent DDoS mitigation systems to protect critical applications and services.

Why Existing Security Solutions Can’t Stop DDoS Attacks

Why can’t enterprises protect themselves against DDoS attacks when they have sophisticated security technology? Enterprises continually deploy products like firewalls and Intrusion Prevention Systems (IPS), yet the attacks continue. While IPS, firewalls and other security products are essential elements of a layered-defense strategy, they do not solve the DDoS problem.  Because they are designed to protect the network perimeter from infiltration and exploits, and to act as policy enforcement points in an organization’s security portfolio, they rely on stateful traffic inspection to enforce network policy and integrity. This makes these devices susceptible to state resource exhaustion, which results in dropped traffic, device lock-ups and potential crashes.

The application-layer DDoS threat actually amplifies the risk to data center operators. That’s because IPS devices and firewalls become more vulnerable to the increased state demands of this emerging attack vector—making the devices themselves more susceptible to the attacks.  Moreover, there is a distinct gap in the ability of existing edge-based solutions to leverage the cloud’s growing DDoS mitigation capacity, the service provider’s DDoS infrastructure or the dedicated DDoS mitigation capacity deployed upstream of the victim’s infrastructure.

Current solutions do not take advantage of the distributed computing power available in the network and cannot coordinate upstream resources to deflect an attack before it saturates the last mile. No existing solution enables DDoS mitigation both at the edge and in the cloud.

Cloud Signaling: A Faster, Automated Approach to Comprehensive DDoS Mitigation

Enterprises need comprehensive, integrated protection from the data center edge to the service provider cloud. For example, when data center operators discover they are under a service-disrupting DDoS attack, they should be able to quickly mitigate the attack in the cloud by triggering a signal to upstream infrastructure of their provider’s network.

The following scenario demonstrates the need for cloud signaling from an enterprise’s perspective. A network engineer notices that critical services such as corporate sites, email and DNS are no longer accessible. After a root cause analysis, the engineer realizes that the company’s servers are under a significant DDoS attack. Because its external services are down, the entire company, along with its customers, is suddenly watching his every move. He must then work with customer support centers from multiple upstream ISPs to coordinate a broad DDoS mitigation response to stop the attack.

Simultaneously, he must provide constant updates internally to management teams and various application owners. To be effective, the engineer must also have the right internal tools available in front of the firewalls to stop the application-layer attack targeting the servers. All of this must be done in a high-pressure, time-sensitive environment.

Until now, no comprehensive threat resolution mechanism has existed that completely addresses application-layer DDoS attacks at the data center edge and volumetric DDoS attacks in the cloud. True, many data center operators have purchased DDoS protection services from their ISP or MSSP. But they lack a simple mechanism to connect the premises to the cloud and a single dashboard to provide visibility. Such capabilities are what make it possible to stop targeted application attacks as well as upstream volumetric threats distributed across multiple providers.

The previous hypothetical scenario would be quite different if the data center engineer had the option of signaling to the cloud. Once he discovered that the source of the problem is a DDoS attack, the engineer could choose to mitigate the attack in the cloud by triggering a cloud signal to the provider network. The cloud signal would include details about the attack to increase the effectiveness of the provider’s response. This would take internal pressure off the engineer from management and application owners. It would also allow the engineer to communicate with the upstream cloud provider to give more information about the attack and fine-tune the cloud defense.
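The actual Cloud Signaling message format is defined by the vendor and the participating providers; purely as an illustration, a signal of this kind might carry attack metadata along the lines of the sketch below. The endpoint, field names and values are hypothetical.

    # Hypothetical illustration of the kind of attack metadata an edge device
    # might forward upstream so the provider can tune its mitigation. The field
    # names, values and endpoint are placeholders, not a vendor's actual format.
    import json
    import urllib.request

    signal = {
        "customer_id": "example-dc-01",
        "attack_type": "http_flood",
        "started_at": "2011-09-14T10:22:00Z",
        "target": {"ip": "203.0.113.10", "port": 80},
        "observed_pps": 1450000,
        "top_sources": ["198.51.100.0/24", "192.0.2.0/24"],
        "requested_action": "divert_and_scrub",
    }

    req = urllib.request.Request(
        "https://mitigation.provider.example/api/cloud-signal",   # placeholder URL
        data=json.dumps(signal).encode(),
        headers={"Content-Type": "application/json"},
    )
    # urllib.request.urlopen(req)   # sent once the attack is confirmed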

As DDoS attacks become more prevalent, data center operators and service providers must find new ways to identify and mitigate evolving DDoS attacks. Vendors must empower data center operators to quickly address both high-bandwidth attacks and targeted application-layer attacks in an automated and simple manner. This saves companies from major operational expense, customer churn and revenue loss. It’s called Cloud Signaling and it’s the next step in protecting data centers in the cloud, including revenue-generating applications and services.

Rakesh Shah has been with Arbor Networks since 2001, helping to take products from early stage to category-leading solutions.  Before managing the product marketing group, Rakesh was the Director of Product Management for Arbor’s Peakflow products, and he was also a manager in the engineering group.  Previously, Rakesh held various engineering and technical roles at Lucent Technologies and CGI/AMS.  He holds an M.Eng. from Cornell University and a B.S. from the University of Illinois at Urbana-Champaign, both in Electrical and Computer Engineering.

Pass the Buck: Who’s Responsible for Security in the Cloud?

Cloud computing changes the equation of responsibility and accountability for information security and poses some new challenges for enterprise IT. At Vormetric we are working with service providers and enterprises to help them secure and control sensitive data in the cloud with encryption, which has given us a good perspective on the issues surrounding who is responsible for cloud security.

While data owners are ultimately accountable for maintaining security and control over their information, the cloud introduces a shared level of responsibility between the data owner and the service provider. This division of responsibility varies depending on the cloud delivery model and specific vendor agreements with the cloud service provider (CSP).  In addition, the use of multi-tenant technology by CSPs to achieve economies of scale by serving customers using shared infrastructure and applications introduces another layer of risk.

Where the buck stops or gets passed on poses some new operational and legal issues.  Let’s look at each cloud delivery model to understand how each creates a slightly different balance of security responsibility between the data owner and CSP.

Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) models typically place much of the responsibility for data security and control in the hands of the SaaS or PaaS provider. There is not much leeway for enterprises to deploy data security or governance solutions in SaaS and PaaS environments since the CSP owns most of the IT and security stack.

Infrastructure-as-a-Service (IaaS) tilts the balance towards a greater degree of shared responsibility.  IaaS providers typically provide some baseline level of security such as firewalls and load balancing to mitigate Distributed Denial of Service (DDoS) attacks. Meanwhile, responsibility for securing the individual enterprise instance and control of the data inside of that instance typically falls to the enterprise.

A widely-referenced example that clearly describes IaaS security responsibilities can be found in the Amazon Web Services Terms of Service. While enterprises can negotiate liability, terms and conditions in their Enterprise Agreements with service providers, the IaaS business model is not well suited for CSPs to assume inordinate amounts of security risk. CSPs aren’t typically willing to take on too much liability because this could jeopardize their business.

Since an enterprise’s ownership of security in the cloud gradually increases between SaaS, PaaS and IaaS, it’s important to clearly understand the level of responsibility provided in the terms and conditions of CSP agreements.

Having established what a cloud provider is delivering in the way of security, enterprises should backfill these capabilities with additional controls necessary to adequately protect and control data.  This includes identity and access management, encryption, data masking and monitoring tools such as Security Information and Event Management (SIEM) or Data Loss Prevention (DLP).  One valuable resource for evaluating cloud service provider security is the Cloud Security Alliance Cloud Controls Matrix.

Enterprises looking to further mitigate the risk of data security incidents in the cloud can also investigate Cyber insurance offerings that protect against cyber events such as cyber extortion, loss of service or data confidentiality breach.  Finally, enterprises should develop both a data recovery plan and exit strategy if they need to terminate their relationship with a CSP.

Cloud security is a new and evolving frontier for enterprises as well as CSPs. Understanding the roles, responsibilities, and accountability for security in the cloud is critical for making sure that data is protected as well in the cloud as it is in an enterprise data center. The process starts with a thorough due diligence of what security measures are provided and not provided by the CSP, which enables enterprises to know where they need to shore up cloud defenses. Until further notice, the cloud security buck always stops with the enterprise.

Todd Thiemann is Senior Director of Product Marketing at Vormetric and co-chair of the Cloud Security Alliance (CSA) Solution Provider Advisory Council.

PKI Still Matters, Especially in the Cloud

By:  Merritt Maxim

Director of IAM Product Marketing

CA Technologies Inc.

Infosec veterans probably remember (with a smirk) how Public Key Infrastructure (PKI) was heralded as the next “big thing” in information security at the dawn of the 21st century.  While PKI failed to reach the broad adoption the hype suggested, certain PKI capabilities, such as key management, are still important.  The Diffie-Hellman key exchange protocol, which solved the serious technical challenge of how two parties can establish a shared secret over an insecure channel, laid the cryptographic groundwork for PKI.
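As a toy walk-through of the Diffie-Hellman idea, the numbers below are kept tiny so the arithmetic is visible; real deployments use large primes (or elliptic curves) and vetted libraries, never hand-picked parameters like these.

    # Toy Diffie-Hellman exchange with deliberately tiny, insecure parameters.
    p, g = 23, 5                 # public prime modulus and generator (demo only)

    a = 6                        # Alice's private value
    b = 15                       # Bob's private value

    A = pow(g, a, p)             # Alice sends g^a mod p  -> 8
    B = pow(g, b, p)             # Bob sends   g^b mod p  -> 19

    shared_alice = pow(B, a, p)  # (g^b)^a mod p
    shared_bob = pow(A, b, p)    # (g^a)^b mod p

    assert shared_alice == shared_bob == 2   # both sides derive the same secret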

I had not thought about key management until a recent visit to my local car dealer for an oil change.  While waiting, I noticed several dealer employees struggling with a large wall-mounted metal box.  This box is the dealer’s central repository for all car keys on the dealer’s lot.  The box is accessed via a numeric keypad which appeared to be a sensible approach since the keypad logs all access attempts for auditing and tracking purposes.

However, on this particular day, the numeric codes would not open the box, leaving the keys inaccessible and employees quite frustrated.  I left before seeing how the problem was resolved, but this incident reminded me of key management and how this technology is still crucial for data management especially with rise of cloud computing.

Key management often goes unnoticed for extended periods of time and only surfaces when a problem appears, as was the case at the dealer.  When a problem does appear, key management is either the solution or, more often, the culprit, usually because of an improper implementation.  Poor key management can create several significant problems, such as:

  • Complete compromise: A poor key management system, if broken, could mean that all keys are compromised and all encrypted data is thus at risk (see my postscript for a great example).  And fixing a broken key management system can be complex and costly.
  • Inaccessibility: As I witnessed at the dealer, a poorly implemented key management system may prevent some or all access to encrypted data.  That may seem good from a security standpoint, but the security must be weighed against the inconvenience and productivity loss created by being unable to access data.

With the continued stream of data breaches in the daily headlines, a common refrain is that data encryption is the solution to preventing them.  While data encryption is certainly a good security practice and an important first step, especially for sensitive data or PII, effective key management must accompany any data encryption effort to ensure a comprehensive implementation.

Here’s why.

Just throwing encryption at a problem, especially after a breach, is not a panacea; it must be deployed within the context of a broader key management system.  NIST Special Publication 800-57, “Recommendation for Key Management, Part 1: General,” published in March 2007, states:

“The proper management of cryptographic keys is essential to the effective use of cryptography for security. Keys are analogous to the combination of a safe. If the combination becomes known to an adversary, the strongest safe provides no security against penetration. Similarly, poor key management may easily compromise strong algorithms. Ultimately, the security of information protected by cryptography directly depends on the strength of the keys, the effectiveness of mechanisms and protocols associated with keys, and the protection afforded the keys. “

Even though this NIST publication is more than four years old, this statement is still relevant.

A centralized key management solution should deal with the three ‘R’s-Renewal, Revocation and Recovery.  Key management is necessary to solve problems such as:

  • Volume of keys: In a peer-to-peer model, using freeware like PGP may work, but when you are an organization with thousands of users, you need centralized key management.  Just as organizations need to revoke privileges and entitlements when a user leaves, they need to do the same with cryptographic keys.  This can only be achieved via central key management and would crumble in a peer-to-peer model.
  • Archiving and data recovery: Data retention policies vary by regulation and policy, but anywhere from three to 10 years is common.  If archived data is encrypted (generally a good practice), key management is necessary to ensure that the data can be recovered and decrypted in the future if needed as part of an investigation.  The growth of cloud-based storage makes this problem particularly acute.

Organizations that encrypt data without a centralized comprehensive key management system are still at risk of a breach because the lack of a centralized system can cause inconsistencies and error-prone manual processes.  Further, today’s sophisticated hackers are more likely to attack a poorly implemented key management system rather than attack an encrypted file, much like the German Army flanked France’s Maginot Line in 1940 to avoid dealing with the line’s formidable defenses.  This is why an important aspect of key management is ensuring appropriate checks and balances on the administrators of these systems as well as ongoing auditing of the key management processes and systems to detect any potential design errors, or worse, malicious activity by authorized users.
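To make the centralized approach concrete, here is a minimal sketch of the envelope-encryption pattern that central key management commonly relies on: each object gets its own data key, only a wrapped copy of that key is stored with the ciphertext, and the master key never leaves the key management system. It assumes the third-party Python 'cryptography' package and is illustrative rather than production-ready.

    # Minimal envelope-encryption sketch (assumes the 'cryptography' package).
    # The master key stands in for a key held by a KMS/HSM; it can be rotated,
    # revoked or recovered centrally without touching every encrypted object.
    from cryptography.fernet import Fernet

    master_key = Fernet.generate_key()          # in practice: held by the KMS
    kms = Fernet(master_key)

    def encrypt_object(plaintext):
        data_key = Fernet.generate_key()        # per-object data key
        ciphertext = Fernet(data_key).encrypt(plaintext)
        wrapped_key = kms.encrypt(data_key)     # only the wrapped key is stored
        return ciphertext, wrapped_key

    def decrypt_object(ciphertext, wrapped_key):
        data_key = kms.decrypt(wrapped_key)     # requires access to the KMS
        return Fernet(data_key).decrypt(ciphertext)

    ct, wk = encrypt_object(b"archived financial record")
    assert decrypt_object(ct, wk) == b"archived financial record"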

Key management is not going away.  As cloud computing adoption grows, key management is going to become even more crucial especially around data storage in the cloud.  We have already seen some examples with online storage providers that show how key management is already an issue in the cloud.  Cloud computing and encryption are great concepts, but organizations must accompany these with a sound key management strategy.  Otherwise, the overall effectiveness of such systems will be reduced.

PS: A great example of what happens with an ineffective key management implementation is convicted spy John Walker, who managed cryptographic keys for US Naval communications but copied the keys and gave them to the USSR for cash.  Walker compromised a significant volume of US Navy encrypted traffic, but because there was no significant auditing of his duties, his spying went undetected for years. There are several books on the Walker case, but I recommend Pete Earley’s “Family of Spies.”

 

Merritt Maxim is director of IAM product marketing and strategy at CA Technologies.  He has 15+ years of product management and product marketing experience in Identity and Access Management (IAM) and is the co-author of “Wireless Security.”  Merritt blogs and tweets actively on a range of IAM, security and privacy topics.  Merritt received his BA cum laude from Colgate University and his MBA from the MIT Sloan School of Management.

 

Understanding Best-in-Class Cloud Security Measures and How to Evaluate Providers

By Fahim Siddiqui

Despite broad interest in cloud computing, many organizations have been reluctant to embrace the technology due to security concerns. While today’s businesses can benefit from cloud computing’s on-demand capacity and economies of scale, the model does require that they relinquish some control over the application and data.

 

Unfortunately, security controls vary significantly from one cloud provider to the next.  Therefore, companies need to make certain the providers they use have invested in state-of-the-art security measures. This will help ensure that a company’s customer security and data protection policies can be seamlessly extended to the cloud applications to which they subscribe. Best practices dictate that critical information should be protected at all times, and from all possible avenues of attack. When evaluating cloud providers, practitioners should address four primary areas of concern — application, infrastructure, process and personnel security — each of which is subject to its own security regimen.

 

1. Application Security

With cloud services, the need for security begins as soon as users access the supporting application. The best cloud providers protect their offerings with strong authentication and equally potent authorization systems. Authentication ensures that only those with valid user credentials (who can also prove their identity claims) obtain access, while authorization controls allow administrators to decide which services and data items users may access and update. Multi-factor authentication may also be provided for controlling access to high sensitivity privileges (e.g. administrators) or information.
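The authentication/authorization split described above can be summarized in a few lines of Python. The users, roles and the rule that high-sensitivity privileges require a second factor are illustrative assumptions; a real service would back these checks with a directory service and a policy engine.

    # Illustrative split between authentication (who are you?) and authorization
    # (what may you do?), with multi-factor required for sensitive privileges.
    USERS = {"alice": {"password_ok": True, "mfa_ok": True, "role": "admin"},
             "bob":   {"password_ok": True, "mfa_ok": False, "role": "viewer"}}

    PERMISSIONS = {"admin": {"read", "update", "delete", "manage_users"},
                   "viewer": {"read"}}

    def authenticate(user, high_sensitivity=False):
        """Verify the identity claim; sensitive access also needs a second factor."""
        u = USERS.get(user)
        if not u or not u["password_ok"]:
            return False
        return u["mfa_ok"] if high_sensitivity else True

    def authorize(user, action):
        """Decide what an already-authenticated user may access or update."""
        return action in PERMISSIONS.get(USERS[user]["role"], set())

    assert authenticate("alice", high_sensitivity=True)
    assert not authenticate("bob", high_sensitivity=True)    # no second factor
    assert authorize("bob", "read") and not authorize("bob", "delete")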

 

All application-level access should be protected using strong encryption to prevent unauthorized sniffing or snooping of online activities. Application data needs to be validated on the way in and on the way out to ensure security. Robust watermarking features ensure that materials cannot be reproduced or disseminated without permission. More advanced security measures include the use of rights management technology to enforce who can print, copy or forward data, and prevent such activity unless it is specifically authorized, as well as impose revocation and digital shredding even after documents leave the enterprise.

 

2. Infrastructure Security

Best-in-class providers will have a highly available, redundant infrastructure to provide uninterruptible services to their customers. A cloud provider or partner should use real-time replication, multiple connections, alternate power sources and state-of-the-art emergency response systems to provide complete and thorough data protection. Network and periphery security are paramount for infrastructure elements. Therefore, leading-edge technologies for firewalls, load balancers and intrusion detection/prevention should be in place and continuously monitored by experienced security personnel.

 

3. Process Security

Cloud providers, particularly those handling business-critical information, invest large amounts of time and resources in developing security procedures and controls for every aspect of their service offerings. Truly qualified cloud providers will have earned SAS 70 Type II certification or international equivalents.  Depending upon geography or industry requirements, they may have enacted measures to keep their clients in compliance with appropriate regulations (e.g., the U.S. Food and Drug Administration (FDA) 21 CFR 11 regulations for the pharmaceutical industry). ISO 27001 certification is another good measure of a provider’s risk management strategies. These certifications ensure thorough outside reviews of security policies and procedures.

 

4. Personnel Security

People are an important component of any information system, but they can also present insider threats that no outside attacker can match. At the vendor level, administrative controls should be in place to limit employee access to client information. Background checks of all employees and enforceable confidentiality agreements should be mandatory.

 

Putting Providers to the Test

When evaluating a cloud provider’s security approach, it’s important to ask them to address how they provide the following:

  • Holistic, 360-degree security: Providers must adhere to the most stringent of industry security standards, and meet client expectations, regulatory requirements and prevailing best practices. This includes their coverage of application, data, infrastructure, product development, personnel and process security.

  • Complete security cycle: A competent cloud provider understands that implementing security involves more than technology — it requires a complete lifecycle approach. Providers should offer a comprehensive approach to training, implementation and auditing/testing.
  • Proactive security awareness and coverage: The best cloud providers understand that security is best maintained through constant monitoring, and they take swift, decisive steps to limit potential exposures to risks.
  • Defense-in-depth strategy: Savvy cloud vendors understand the value of defense in depth, and can explain how they use multiple layers of security protection to protect sensitive data and assets.
  • 24/7 customer support: Just as their applications are available around-the-clock, service providers should operate support and incident response teams at all times.

 

Tips for Obtaining Information from Service Providers

When comparing cloud providers, it is essential to check their ability to deliver on their promises. All cloud providers promise to provide excellent security, but only through discussions with existing customers, access to the public record and inspection of audit and incident reports can the best providers be distinguished from their run-of-the-mill counterparts.

 

Ideally, obtaining information about security from providers should require little or no effort. The providers who understand security — particularly those for whom security is a primary focus — will provide detailed security information as a matter of course, if not a matter of pride.

Fahim Siddiqui, chief product officer, IntraLinks (www.intralinks.com)

Fahim has been with IntraLinks since January 2008. Prior to joining IntraLinks, he served as CEO at Sereniti, a privately held technology company. He was also the Managing Partner of K2 Software Group, a technology consulting partnership providing product solutions to companies in the high tech, energy and transportation industries. Previously, Fahim held executive and senior management positions in engineering and information systems with ICG Telecom, Enron Energy Services, MCI, Time Warner Telecommunications and Sprint.

 

Watch Out for the Top 6 Cloud Gotchas!

By Margaret Dawson, VP of Product Management, Hubspan

I am a huge proponent of cloud-based solutions, but I also have a pet peeve about people who look to the cloud just for cloud’s sake and do not take the time to do due diligence.  While the cloud can bring strong technical, economic and business benefits if managed correctly, it can also cause pain, just like any solution adopted without clear evaluation criteria to make sure it meets your needs today and in the future.

In my many discussions with IT leaders and from my own experience, I have outlined the top six cloud gotchas that you need to watch out for:

  1. Standards: The cloud, while seemingly everywhere right now, is still relatively young, with minimal standards. This gotcha is particularly important with Platform as a Service (PaaS) vendors. Many of these platforms provide an easy-to-use and fast-to-deploy application development and lifecycle environment. However, most are also based on proprietary platforms that do not play nicely with other solutions. It’s important to understand potential proprietary lock-in as well as how you interface with the cloud platform or with its API infrastructure.
  2. Flexibility: This seems an odd cloud gotcha, since flexibility and agility are touted as among the cloud’s greatest benefits.  In this case, I’m talking about flexibility within the cloud environment and in the way you interact with the cloud.  What communication protocols are supported, such as REST, SOAP and FTPS?  In the PaaS world, what languages are supported – is the platform flexible or is it, for example, a Java or .NET environment only?  Does it have a flexible API infrastructure?
  3. Reliability & Scalability: Everyone knows that the cloud provides on-demand scalability, but make sure your solution scales both up and DOWN – with the latter being the stickler for most companies.  Burst capacity and quick addition of capacity might be easy, but what if you want to scale back your deployment? Make sure it’s just as easy and carries no penalties.  Overall, know the bandwidth capability across the deployment, not just the first or last mile. On the reliability front, be wary of claims of four or five nines of uptime (99.99% or 99.999%) and ask for an uptime report from your cloud vendor.  Build uptime into your SLA (service level agreement) if this cloud deployment is mission critical for your business.
  4. Security: This one is probably the most discussed and debated.  I believe, and many vendors have proved, that a cloud-based solution can be as secure as, if not more secure than, an on-premise approach.  But as with technology in general, not all clouds are created equal, and security needs to be evaluated holistically.  The platform should provide end-to-end data protection, which means encryption both in motion and at rest, as well as strong and auditable access control rules.  Do you know where the data is located among the vendor’s many data centers, and is the level of data protection consistent across all of those environments? Does the vendor use secure protocols, such as SSL/TLS, for moving the data (see the short verification sketch after this list)? Look for key compliance adherence by the vendor, such as PCI DSS and SAS 70 Type II.  There’s a reason the Cloud Security Alliance (CSA) is now developing PCI courseware – there’s a clear link between the security capabilities of a cloud platform and its ability to meet the stringent security and data protection demands of the PCI mandate.
  5. Costs: I can hear everyone now saying “duh,” this is obvious.  Yes, the initial cost of deployment or your monthly subscription fee is an easy evaluation.  However, look for hidden or unexpected costs, and make sure you fully understand the pricing model.  Many cloud solutions are cost-effective for a standard deployment, but each additional module or add-on feature slaps you with additional costs.  Does the vendor charge a per-support-incident fee? Are upgrades to new versions included?  Also, there are often pricing tiers or “buckets,” and when you hit the next tier, your costs can increase significantly.  Finally, look for a way to clearly show the ROI or success metrics for this solution.  Align your costs with your expected results, whether quantitative or qualitative.  This is particularly important if your company is new to cloud consumption, as your ability to show success with an initial deployment will influence future implementations.
  6. Integration: Integration is truly the missing link in the cloud.  It’s so appealing to put our data in the cloud or develop new applications or extend our current infrastructure that sometimes we forget that the data in the cloud needs to be accessible, secured and managed just like on-premise data.  How are you migrating data to the cloud?  If you are putting everything on a physical disk and shipping it to the cloud vendor, doesn’t that rather run contrary to the whole cloud benefit?  How are you exchanging and sharing information between cloud-based environments and on-premise infrastructure or even between two clouds?  Think about integration before you deploy a new cloud solution and think about integration among internal systems and people as well as external partners and corporate divisions.  Gartner is doing a lot of work in this area, and has a new market category called “cloud brokers”.
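As one small, concrete check from gotcha #4, the sketch below connects to a vendor endpoint and confirms that it presents a valid certificate over a modern TLS version before any data is sent. The hostname is a placeholder for your vendor's actual API endpoint.

    # Quick transport-security check: verify the certificate and report the
    # negotiated protocol for a vendor endpoint. The hostname is a placeholder.
    import socket
    import ssl

    def inspect_endpoint(host, port=443):
        ctx = ssl.create_default_context()        # verifies the cert chain and hostname
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                cert = tls.getpeercert()
                print("negotiated protocol:", tls.version())      # e.g. 'TLSv1.2'
                print("certificate expires:", cert["notAfter"])
                print("issued to:", dict(item[0] for item in cert["subject"]))

    inspect_endpoint("api.vendor.example")        # placeholder hostname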

As I’ve said many times in presentations on the cloud, you should first buy the solution, then buy the cloud. The cloud is not a panacea, and while a cloud architectural approach brings strong business and IT value, you need to thoroughly evaluate any solution to ensure it not only meets your company’s technical and business requirements, but also enables you to grow and evolve.

 

Margaret Dawson is vice president of product management for Hubspan. She’s responsible for the overall product vision and roadmap and works with key partners in delivering innovative solutions to the market. She has over 20 years’ experience in the IT industry, working with leading companies in the network security, semiconductor, personal computer, software, and e-commerce markets, including Microsoft and Amazon.com. Dawson has worked and traveled extensively in Asia, Europe and North America, including ten years working in the Greater China region, consulting with many of the area’s leading IT companies and serving as a BusinessWeek magazine foreign correspondent.

 

How Public Cloud Providers Can Improve Their Trustworthiness

By Matthew Gardiner

When you meet someone for the first time, in a place you have never been, do you trust him?  Would you have him hold your wallet, or would you share sensitive personal information with him?  Of course not. This person is obviously not trusted by you at this point in time, but that doesn’t mean he never could be.  Assuming you have good, trustworthy friends, it’s possible that this person could come to be trusted if you got to know him better.  This analogy can be applied to the current state of security and trust in the public cloud.

The biggest barrier to broader and faster adoption of public cloud services (whether SaaS, PaaS or IaaS) is trust. Consider the results of nearly any survey on cloud adoption, or talk with your friends and colleagues in IT, and you’ll find the message is the same: the public cloud has great promise and impressive early adoption, but there remains a nagging set of concerns that are proving hard to address.  Many characterize these concerns as being about security.  While I agree there are important issues around security that need to be resolved, such as how security can be managed jointly by the cloud provider and cloud consumer, I prefer to characterize the issue more broadly as being about trust.  Overall, it stands to reason that the greater the trust, the greater the adoption.

Trust is about more than just security controls. Trust also emerges from good execution of “abilities” such as reliability, availability, portability, and interoperability.  Is the public cloud trustworthy enough for organizations’ more sensitive and mission-critical applications and data?  The only one who can ultimately decide this for you is you.  While trust can be influenced by third parties, it can only occur between two parties.

In order to improve their trustworthiness, cloud providers should:

  1. Avoid being a black box, in particular for security and “ability” related systems and processes. I am not saying public cloud providers should publicly disclose everything and risk elevating their vulnerability levels, but they should give their customers as much control and visibility as possible over the services, systems and processes being delivered for them.  The systems and processes behind those services should not be a secret. Control or visibility of the customers’ services should extend all the way up the application stack – from the network, through the storage, servers, applications, and data.  People tend to trust those who don’t appear to be hiding anything, and thus transparency by cloud providers can help foster trust.  Audits can also serve as vehicles to gain trust – whether they are done by third parties or by the customers themselves.
  2. Improve trust by reducing technical lock-in. Portability will be high on the list of cloud consumers’ priorities. Instead of keeping your customers through technical lock-in, put yourself in your customers’ hands right at the beginning of the relationship and make sure they have all the flexibility needed to swap vendors.  Make sure that your cloud service, as appropriate, offers data and application service portability that is crisply defined and free or inexpensive to invoke.  Bend over backward to avoid customer technical lock-in, and strive to keep your customers through great service at a great price.  In addition, offer clear SLAs with strong warranties: put clear financial penalties in your SLAs for missing their terms, and maybe even bonuses for surpassing them.  I recognize that this may be counterintuitive for some, but if the goal is enhancing trust, this is a great way to do it.
  3. When things go wrong, be open and honest about them. Said another way, keep your promises. And if you can’t, tell your cloud customers quickly and honestly about your mistakes and explain what you will do better next time.  In fact, this should be part of your corporate philosophy, so that prospective customers hear about it before they actually experience it.  Just as with personal relationships, most good cloud provider/cloud consumer relationships can survive some broken promises.

 

We all know that trust is relative, as in “I trust that person (or service) more than that one” or “I trust this service more than I used to.”  Mathematically I think of it this way: Trust = Performance x Time.  As good performance accumulates over time overall trust goes up.  And good performance over a short time period elicits some more trust, but not much more.  For public cloud services to attain their prospective position as the next major IT service delivery architecture (following mainframe, client/server, and Web) it is imperative that the industry take proactive steps to improve their trustworthiness.

 

Matthew Gardiner is a Director working in the Security business unit at CA Technologies. He is a recognized industry leader in the security and Identity & Access Management (IAM) markets worldwide. He writes and is interviewed regularly in leading industry media on a wide range of IAM, cloud security, and other security-related topics. He is a member of the Kantara Initiative Board of Trustees. Matthew has a BSEE from the University of Pennsylvania and an SM in Management from MIT’s Sloan School of Management.  He blogs regularly at: http://community.ca.com/members/Matthew-Gardiner.aspx and also tweets @jmatthewg1234.  More information about CA Technologies can be found at www.ca.com.

 

Security Standards – Why they are so Critical for the Cloud

By Matthew Gardiner

Everyone loves standards, right?  When was the last time you heard a vendor proudly say that its product or service was closed and proprietary?  Yet every time a new IT architecture sweeps through the market, this time one based on cloud models, the lessons about the critical value of standards need to be relearned.  While it is easy to poke fun at standards by saying things like “I love standards because there are so many to choose from,” it is also easy to see the incredible value they can unlock. Look at the Internet itself as an example.  It is hard to imagine the cloud reaching its potential without a set of widely adopted standards – security and otherwise.

In the context of this blog when I refer to security standards, I am talking about security interface standards (basically cloud security APIs) that enable security systems in one domain, whether in a cloud service or in an on-premise enterprise system, to communicate and interoperate programmatically with security systems in other domains.  The absence of such standards drives the use of customized integrations which have been the bane of IT agility since the beginning of modern computing.

Why is it that everyone loves standards in concept, including those for security, but often standards definition and deployment is less than speedy?  Why doesn’t everyone involved just pull together and solve this obvious problem now, instead of waiting until we are all suffering from lack of standards?  While this is a general issue with standards, let’s look at this issue through the lens of the emerging public cloud-based services (public IaaS, PaaS, & SaaS).  There are both rational and less rational reasons why standards are developed and used at a rate slower than they should be for maximum benefit.

While not the only factor to consider, the reality is that standards must be considered as an element of the overall vendor competitive struggle, where differentiation is key.  There are logical economic reasons why market dominant vendors — in this case dominant cloud service providers — tend to be wary of using publicly available interface standards for their services.  For one it makes their differentiation that much harder and it lowers the cost of switching to competitive services.  Thus interface standards can serve as a competitive threat.

While no vendor will come out explicitly against standards (remember that everybody loves them), when pressed on the issue, they will come back with answers such as, “existing standards are too immature” or the “market is moving too fast to standardize yet” to explain why they are not moving more quickly to standardize their interfaces.  Of course they might be partially right, but these are not objections which generally hold up under explicit and consistent customer demand for standardization.  See the broad adoption of SAML by cloud providers as an example of what this pressure can accomplish.

This leads me to one of the less rational reasons why standards are not used as readily as they could be: lack of customer vision! Without a clear long-term vision of the future and of how cloud services will be engaged to support the business, customers of today’s cloud service providers basically stumble into using the available proprietary interfaces and thus enable the current providers to largely get away with not providing standards-based interfaces.  IT departments are doing what they need to get the job done, which optimizes short-term results, but unfortunately at the expense of the longer term.

What does the future of the cloud look like over the next three to five years? In my view, organizations of all sizes will be deep in the middle of a dynamic and hybrid mix of public cloud services, private cloud services, and traditional on-premise IT systems. The mix will vary by organization. We could see 20 percent public cloud services and 80 percent on-premise and private cloud services at some organizations, and a 50/50 split or some other mix at others. Even within the public cloud category there will be a tremendous variety of usage at most organizations, not only in the types of cloud services used (Infrastructure-as-a-Service, Platform-as-a-Service, Software-as-a-Service) but also in the variety of service providers from which they receive them. If you agree with this view of the future, then you should understand the need for security interface standards that enable effective security management across this mix.

If supporting dynamic and hybrid IT requires organizations to continually build up and tear down proprietary security integrations that bridge their on-premise and cloud worlds, then they will either be spending an inordinate amount of time and money creating these integrations or, worse, will be living in the middle of a hodge-podge of security silos, which are neither secure nor convenient for users.

For the cloud to reach its potential as the next transformative IT architecture, akin to the Internet itself, it is critical that it operate like Legos that can be assembled and re-assembled quickly and securely as required. Furthermore, it is imperative that automated controls, both preventive and detective, can be configured to flow back and forth between and among all components of the organization’s mix of public and on-premise IT systems. This prospective future is not as far off as it might seem. Many security interface standards built to enable this hybrid cloud and on-premise application world already exist (XACML, WS-Security, CloudAudit), and some, such as SAML, are already relatively widely deployed. The primary issue now is the adoption of these standards.
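
To make the idea of a security interface standard concrete, here is a minimal sketch, in Python, of what an XACML-style authorization request could look like when one system asks another for an access decision. It is illustrative only, not a conformant XACML profile; the subject, resource, and action values are invented for the example.

import xml.etree.ElementTree as ET

# XACML 2.0 request-context namespace and the XML Schema string datatype.
XACML_NS = "urn:oasis:names:tc:xacml:2.0:context:schema:os"
XS_STRING = "http://www.w3.org/2001/XMLSchema#string"

def _add(section_tag, request, attribute_id, value):
    # Add a <Subject>, <Resource>, or <Action> element carrying one string attribute.
    section = ET.SubElement(request, ET.QName(XACML_NS, section_tag))
    attr = ET.SubElement(section, ET.QName(XACML_NS, "Attribute"),
                         AttributeId=attribute_id, DataType=XS_STRING)
    ET.SubElement(attr, ET.QName(XACML_NS, "AttributeValue")).text = value

def build_authz_request(subject_id, resource_id, action_id):
    # Build a minimal XACML-style request: may this subject perform this action on this resource?
    request = ET.Element(ET.QName(XACML_NS, "Request"))
    _add("Subject", request, "urn:oasis:names:tc:xacml:1.0:subject:subject-id", subject_id)
    _add("Resource", request, "urn:oasis:names:tc:xacml:1.0:resource:resource-id", resource_id)
    _add("Action", request, "urn:oasis:names:tc:xacml:1.0:action:action-id", action_id)
    return ET.tostring(request)

# "May alice read the customer-records resource?"  The same vendor-neutral request
# could be evaluated by an on-premise policy decision point or by a cloud provider's.
print(build_authz_request("alice@example.com", "customer-records", "read").decode())

The point of the sketch is not the XML itself but the interchangeability: because the request format is standardized, the asking system does not need a custom integration for each provider that answers it.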

While I recognize that collective action on the use of security standards such as these is not easy, I believe it is imperative that customers start envisioning and working towards this future now – and pushing their cloud service providers to get on board with it too.

 

Matthew Gardiner is a Director working in the Security business unit at CA Technologies. He is a recognized industry leader in the security and Identity & Access Management (IAM) markets worldwide. He writes and is interviewed regularly in leading industry media on a wide range of IAM, cloud security, and other security-related topics. He is a member of the Kantara Initiative Board of Trustees. Matthew has a BSEE from the University of Pennsylvania and an SM in Management from MIT’s Sloan School of Management.  He blogs regularly at: http://community.ca.com/members/Matthew-Gardiner.aspx and also tweets @jmatthewg1234.  More information about CA Technologies can be found at www.ca.com.

 

OAuth – authentication & authorization for mobile applications

By Paul Madsen


Federation is a model of identity management that distributes the various individual components of an identity operation amongst different actors, the presumption being that the jobs can be distributed according to which actors are best suited or positioned to take them on. For instance, when an enterprise employee accesses services at a SaaS provider, single sign-on (SSO) has the employee’s authentication performed by their company, while the subsequent authorization decision is made by the SaaS provider.

Federation’s primary underlying mechanisms are ‘security tokens.’ It is by the creation, delivery, and interpretation of security tokens between the actors involved in a transaction that each is given the necessary information to perform its function. Security tokens serve to insulate each actor from the specific IT and security infrastructure of its partners, customers, and others by standardizing how identity information can be shared across company and policy boundaries. Returning to the enterprise employee SSO example: after authenticating the employee, the enterprise creates a security token attesting to that fact, as well as to additional attributes that might determine what actions she can perform at a particular SaaS provider (e.g. she is in Engineering, not Sales), and then delivers the security token to the SaaS provider. The SaaS provider, rather than directly authenticating the employee against some stored password, instead relies on the authentication performed by the enterprise and acts accordingly.
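
To make that flow concrete, below is a minimal, deliberately simplified sketch in Python of the token exchange just described. It is not a conformant SAML implementation; the issuer and audience URLs, the five-minute validity window, and the ‘department’ check are invented for illustration, and a real deployment would also verify a digital signature on the token.

from datetime import datetime, timedelta

def issue_token(employee, department):
    # The enterprise (identity provider) authenticates the employee and attests
    # to that fact, plus attributes, in a token intended for the SaaS provider.
    now = datetime.utcnow()
    return {
        "issuer": "https://idp.enterprise.example",     # who is vouching (hypothetical)
        "subject": employee,                            # who was authenticated
        "audience": "https://app.saas.example",         # who may rely on it (hypothetical)
        "not_on_or_after": now + timedelta(minutes=5),  # short validity window
        "attributes": {"department": department},       # e.g. Engineering vs. Sales
    }

def saas_accepts(token):
    # The SaaS provider never sees a password; it checks the token instead.
    # (A real relying party would also verify the token's signature.)
    return (token["issuer"] == "https://idp.enterprise.example"
            and token["audience"] == "https://app.saas.example"
            and datetime.utcnow() < token["not_on_or_after"]
            and token["attributes"].get("department") == "Engineering")

token = issue_token("alice@enterprise.example", "Engineering")
print(saas_accepts(token))  # True: access granted based on the enterprise's attestation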

SSO simplifies life for the employee because she need not manage a password for each SaaS application her job demands. Furthermore, SSO provides security benefits for the employer, such as being able to easily and quickly terminate access to all those applications should the employee leave the company. SSO arguably offers even greater value when the service being accessed is a mobile web application (i.e. delivered through the browser on an employee’s mobile phone). Data entry remains challenging on mobile devices, even more so when corporate password policy requires entering a mix of cases and characters. If an employee is tempted to create (or reuse) an easy password at her desktop, she will be doubly tempted on a phone.

The federation standards for browser-based SSO to web applications are well established (if perhaps a bit duplicative). On the consumer web, OpenID is the preferred choice; in the enterprise and cloud world, the Security Assertion Markup Language (SAML) is the default, with WS-Federation an option in Microsoft environments. SSO for mobile web applications works the same as for desktop browsers: the protocol messages and security tokens are delivered through the browser between the actors. The only potential difference is that the HTML served up may be optimized for the smaller screen and/or processing capabilities of the phone.

The popularity of the iPhone App Store and Android Market in the consumer world highlights an increasingly important alternative to browser-based applications. Native applications have the user download and install the application to her device; the application then interacts with servers to retrieve data rather than relying on the browser. Both native and web applications have their pros and cons. A seeming trend towards the native model may well be reversed as HTML5 makes possible richer user experiences and device integration for web applications.

Native applications on the phone push and pull data from the server, typically through REST APIs. The IdM challenge for native applications is how the application can authenticate to these APIs so that the API can make an appropriate access control decision. Security tokens provide a solution, offering advantages similar to those they offer for web applications. Critically, though, the federation protocols relevant for web applications (i.e. SAML, OpenID, WS-Federation) are generally not optimized for the requirements, challenges, and opportunities presented by native applications.

OAuth 2.0 is a federation protocol, currently nearing finalization as an IETF standard, that is well suited to just such native applications. OAuth emerged from the consumer web (an archetypal use case being one web site posting to a user’s Twitter stream) but has evolved to meet enterprise and cloud requirements. For mobile native applications, OAuth defines 1) how a native application can obtain a security token from an ‘authorization server’ and 2) how to include that security token on its calls to the relevant REST APIs. Importantly, OAuth supports the concept of the user being able to control the issuance of security tokens to native applications, and so indirectly control the authorizations those applications have for accessing personal data behind APIs. Before OAuth, the default authentication model for native applications was the so-called ‘password anti-pattern,’ in which the native application would ask the user to provide her password for the site hosting the APIs the application wanted to call. Teaching users to share their passwords with arbitrary (and potentially untrustworthy) applications is less than ideal. OAuth mitigates the practice by having the native application authenticate to the API with a security token rather than the password itself.
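
As a rough illustration of those two steps, the sketch below shows how a native application might exchange an authorization code for an access token and then call a REST API with it. The endpoint URLs, client identifier, and redirect URI are placeholders, not part of any particular provider’s service.

import requests

AUTHZ_SERVER_TOKEN_URL = "https://auth.example.com/oauth2/token"   # hypothetical
API_URL = "https://api.example.com/v1/me/messages"                 # hypothetical

def exchange_code_for_token(authorization_code):
    # Step 1: after the user approves access (typically in a browser), the native
    # app exchanges the short-lived authorization code for an access token --
    # the user's password is never handed to the app.
    resp = requests.post(AUTHZ_SERVER_TOKEN_URL, data={
        "grant_type": "authorization_code",
        "code": authorization_code,
        "redirect_uri": "com.example.app://oauth/callback",  # app-specific callback
        "client_id": "example-native-app",
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def call_api(access_token):
    # Step 2: the app presents the security token, not a password, on each REST call.
    resp = requests.get(API_URL, headers={"Authorization": "Bearer " + access_token})
    resp.raise_for_status()
    return resp.json()

# token = exchange_code_for_token(code_returned_by_the_authorization_server)
# print(call_api(token))

Because the API only ever sees the token, the user can later revoke that one application’s access without changing her password or disturbing any other application.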

By abstracting away the particulars of each participant’s security infrastructure, and by obviating the need to place passwords ‘on the wire,’ federation (and the more fundamental security token model) offers many benefits for both web (browser-based) and native (installed) mobile applications. Ultimately, an authentication and authorization framework for mobile applications should address the needs of both application models through support for the relevant federation protocols, such as SAML and OAuth.

About Paul Madsen

Paul Madsen is a Senior Technical Architect within the Office of the CTO at Ping Identity. He has served in various design, chairing, editing, and education roles for a number of federation standards, including OASIS Security Assertion Markup Language (SAML), OASIS Service Provisioning Markup Language (SPML), and Liberty Identity Web Services Framework (ID-WSF). He participates in a number of the Kantara Initiative’s activities, as well as various other cloud identity initiatives. He holds an M.Sc. in Applied Mathematics and a Ph.D. in Theoretical Physics from Carleton University and the University of Western Ontario respectively.

Who Moved My Cloud

by Allen Allison, Chief Security Officer at NaviSite (www.navisite.com)

Managed cloud services are quickly being adopted by large enterprises. Organizations are increasingly embracing cloud technologies for core services like financial systems, IT infrastructure, online merchant sites, and messaging solutions. This adoption rate is creating an ever-increasing role for audit and compliance in the cloud.

Before cloud computing gave IT environments elasticity, flexibility, and transportability, it was relatively simple to demonstrate regulatory compliance. Prior to the cloud, an organization was able to isolate all of the devices, operating systems, and applications on which sensitive or regulated data could reside, and the auditors had the easy task of auditing the security controls and verifying policies, procedures, and processes for those isolated environments. However, as the industry began to adopt more flexible solutions such as cloud, it became more difficult to contain environments so that auditors could provide the same review without a significantly higher level of work. While a managed cloud services company may deploy the same policies and security solutions for cloud computing as it would in a traditional IT environment, proof of those controls becomes more difficult to demonstrate to the satisfaction of the auditors.

For example, if an organization had a virtualized environment with well-defined boundaries or security zones, and all events, logs, and incidents were easily tracked and verified even during a failover or disaster recovery, it took little effort for an auditor to review the environment and provide assurance of its compliance.

Cloud changes this game a bit, with its ability to move environments dynamically, without human intervention. This move could be within a single data center, but is often from data center to data center, from coast to coast, or even from continent to continent. This flexibility, while often necessary to support business needs, introduces a level of complexity that many auditors have had difficulty with. When the auditor can’t pin down the environment, how can she or he assess its compliance?

But a number of cloud providers have been working to overcome these challenges in conjunction with their auditors. For example, SAS70 (soon to be SSAE16) has been especially difficult for auditors to assess in cloud environments. Depending on the controls, SAS70 will likely require aggregating the review of physical access to the facility, at-console access to systems, and logical access to the environment. To add to the complexity, there may be differing controls for the application that provides the user interface and for the application presented to end users. Furthermore, the controls in place may incorporate role-based access controls with built-in workflow for provisioning and approvals. This makes for a very complicated system of buttons and levers to assess. However, by providing a common platform for audit trails and logs, managed cloud providers are simplifying the work for the assessor and allowing those events to be aggregated and correlated in a single place.

In addition to the aggregation of these access events, the following are additional controls that cloud service providers are incorporating in order to make a cloud platform commonly manageable and auditable:

Security Event Correlation – By incorporating industry-leading Security Information and Event Management (SIEM) solutions, more cloud providers are able to aggregate the logs from multiple platforms, multiple customer-specific and customer-shared devices, and multiple data centers into a centralized security management solution that provides an easy-to-review aggregation point for all related security events (a minimal sketch of this kind of aggregation and correlation follows this list).

Centralized Authentication – Providing a single authority for authentication and authorization, while centralizing all accounting, is a significant step toward providing an auditor with proof of access and attempted access. This authentication, authorization, and accounting (AAA) is a critical aspect of auditing and verifying access to key systems housing data or intellectual property.

Data Replication – A growing requirement for organizations moving to the cloud is the seamless failover and recovery of applications in the event of an outage. While we have long enjoyed highly available, fault-tolerant systems, the gating factor has always been the integrity and currency of the backend data. To provide assurance that the data in all systems and all data centers is consistent, data replication solutions are often deployed to guarantee the low Recovery Point Objective (RPO) typically required in a disaster recovery solution. Such replication may require high-bandwidth, low-latency backend networks, and most globally diverse managed cloud service providers deliver these networks across their infrastructure.

Common Monitoring and Management Solutions – A single pane of glass is often required to provide a unified view of the entire infrastructure. This gives an auditor the ability to verify that the provider is delivering the level of service guaranteed by the solution. Auditors often look for event handling and common management across all systems. By automating the deployment of such monitoring solutions and relying on a common management platform (including patch management, software revision control, and system lockdown procedures), a provider can give the auditor assurance that all systems are uniform and follow the monitoring and management controls.
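
The following sketch, referenced in the Security Event Correlation item above, illustrates the aggregation and correlation idea in deliberately simplified form: events from several platforms and data centers are normalized into one chronological audit trail, and a trivial rule flags a suspicious pattern across sources. The event data and the rule are invented for illustration and are not representative of any particular SIEM product.

from collections import Counter
from datetime import datetime

# Normalized events gathered from different sources (different data centers and platforms).
events = [
    {"time": datetime(2011, 9, 1, 9, 0), "source": "dc-east/firewall",   "user": "bob",   "action": "login_failed"},
    {"time": datetime(2011, 9, 1, 9, 1), "source": "dc-west/hypervisor", "user": "bob",   "action": "login_failed"},
    {"time": datetime(2011, 9, 1, 9, 2), "source": "dc-east/app",        "user": "bob",   "action": "login_failed"},
    {"time": datetime(2011, 9, 1, 9, 5), "source": "dc-west/app",        "user": "alice", "action": "login_ok"},
]

# Aggregation: one chronologically ordered audit trail across all sources,
# which is what an assessor reviews instead of visiting each silo separately.
audit_trail = sorted(events, key=lambda e: e["time"])

# Correlation: a trivial rule that flags users with repeated failed logins
# observed across the aggregated stream.
failures = Counter(e["user"] for e in audit_trail if e["action"] == "login_failed")
for user, count in failures.items():
    if count >= 3:
        print(f"ALERT: {count} failed logins for '{user}' across multiple sources")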

As the adoption of cloud accelerates, auditors will increasingly be required to understand these ever-changing, elastic environments and to provide the same compliance and accreditation that they have historically provided for more static, pre-defined solutions. These requirements are growing at a significant pace, and the industry relies heavily on managed cloud service providers to guide auditors through these more difficult assessments.

Allen Allison, Chief Security Officer at NaviSite (www.navisite.com)

During his 20+ year career in the information security industry, Allen Allison has served in management and technical roles, including the development of NaviSite’s industry-leading cloud computing platform; chief engineer and developer for a market-leading managed security operations center; and lead auditor and assessor for information security programs in the healthcare, government, e-commerce, and financial industries. With experience in the fields of systems programming; network infrastructure design and deployment; and information security, Allison has earned the highest industry certifications, including CCIE, CCSP, CISSP, MCSE, CCSE, and INFOSEC Professional. A graduate of the University of California, Irvine, Allison has lectured at colleges and universities on the subject of information security and regulatory compliance.