Cloud Computing and Device Security: The “Always Able” Era

By Mark Bregman, CTO of Symantec

Device Proliferation: Mobility and Security in the Cloud

Chief Information Security Officers know instinctively that the world under their purview is undergoing a shift every bit as significant as the rise of the World Wide Web more than 15 years ago. The demand on our workforce to be ever more productive is driving us to rethink how we use technology to get the job done. Today’s workers expect and demand smart, mobile, powerful devices that place the capabilities of a PC in the palm of the hand.

In this new environment, IT departments are faced with a hard choice: remain committed to an outdated model that limits productivity by placing stringent restrictions on the technology workers use or look for ways to implement new policies that give employees the tools they need to be “always able” while keeping company information safe.

This change in attitude has been driven more by the cloud than many IT decision makers may realize. For enterprise users to do their jobs, they must be able to create, retrieve, manipulate and store massive amounts of data. In the past, the PC was the best tool for this job because it could store and process data locally. But today, storing data in the cloud sets device makers free to create a wide range of computing products – from highly portable to highly stylish and more. Increasingly, these devices can be used to create everything from office documents to rich multimedia content, driving demand for even smarter and more powerful devices.

With mobile devices and cloud-driven services, many traditional security controls are lost, and a new approach to security is needed. According to findings from security firm Mocana, 47% of organizations do not believe they can adequately manage the risks introduced by mobile devices, and more than 45% say security concerns are one of the biggest obstacles to the proliferation of smart devices. Organizations must now cope with workers introducing personal devices to the enterprise cloud and accessing workplace technology for personal purposes. For IT, the ultimate goal is protecting data: defining who should access what data, and defining rights management for viewing and manipulating that data.

At the 30,000-foot level, users demand the flexibility to choose the devices they want, which means IT is tasked with enforcing governance over those devices to ensure corporate data is protected. To allow for uniform enforcement, administrators need the ability to centrally define and distribute security policies to all devices – using what else but the cloud – to secure data at rest and in motion.

To this end, there are five important guidelines enterprises should consider as they reshape IT policy to enable mobile devices to function seamlessly and securely in the cloud:

Take an inventory of all devices – You can’t protect or manage what you can’t see. This begins with device inventory to gain visibility across multiple networks and into the cloud. After taking stock, implement continuous security practices, such as scanning for current security software, operating system patches, and hardware information, e.g., model and serial number.
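As a sketch of what such an inventory check might look like in practice (the field names and the patch-level baseline below are illustrative assumptions, not taken from any particular product):

```python
from dataclasses import dataclass

@dataclass
class Device:
    serial: str
    model: str
    os_patch_level: int            # vendor patch revision reported by the scan
    security_software_present: bool

# Hypothetical policy baseline: minimum patch level required before access
REQUIRED_PATCH_LEVEL = 12

def non_compliant(inventory):
    """Return devices missing security software or current OS patches."""
    return [d for d in inventory
            if not d.security_software_present
            or d.os_patch_level < REQUIRED_PATCH_LEVEL]

fleet = [
    Device("SN-001", "PhoneX", 12, True),
    Device("SN-002", "PhoneY", 9, True),    # outdated patches
    Device("SN-003", "TabZ", 13, False),    # no security software
]
flagged = non_compliant(fleet)
```

The point of the sketch is the order of operations: visibility first (the inventory records), then continuous checks against a central baseline.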

Device security equals cloud security – Since they are essentially access points to the cloud, mobile devices need the same multi-layer protection we apply to other business endpoints, including:

• Firewalls protecting the device and its contents by port and by protocol.

• Antivirus protection spanning multiple attack vectors, which might include MMS (multimedia messaging service), infrared, Bluetooth, and e-mail.

• Real-time protection, including intrusion prevention with heuristics to block “zero-day” attacks for unpublished signatures, and user and administrator alerts for attacks in progress.

• Antispam for the growing problem of short messaging service spam.

Unified protection – Security and management for mobile devices should be integrated into the overall enterprise security and management framework and administered in the same way – ideally using compatible solutions and unified policies. This creates operational efficiencies, but more importantly, it ensures consistent protection across your infrastructure, whether it be on premises or in the cloud. Security policy should be unified across popular mobile operating systems such as Symbian, Windows Mobile, BlackBerry, Android or Apple iOS, and their successors. And non-compliant mobile devices should be denied network access until they have been scanned, and if necessary patched, upgraded, or remediated.
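The admission logic implied here – quarantine until a device is scanned, patched, and protected, regardless of its operating system – can be sketched as follows (the policy fields and thresholds are invented for illustration):

```python
# One policy document applied uniformly, regardless of mobile OS
POLICY = {"min_patch_level": 12, "require_av": True}

def admission(device: dict) -> str:
    """Gate network access: deny until the device is scanned and compliant."""
    if not device["scanned"]:
        return "quarantine: pending scan"
    if device["patch_level"] < POLICY["min_patch_level"]:
        return "quarantine: patch required"
    if POLICY["require_av"] and not device["av_installed"]:
        return "quarantine: endpoint protection required"
    return "allow"

decision = admission({"scanned": True, "patch_level": 12, "av_installed": True})
```

Because the checks read from a single central policy, updating the baseline in one place changes enforcement for every platform at once.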

Cloud-based Encryption – Millions of mobile devices used in the U.S. alone “go missing” every year. To protect against unauthorized users gaining access to valuable corporate data, encryption delivered in the cloud is necessary to protect the data that resides there. As an additional layer of security, companies should ensure they have a remote-wipe capability for unrecovered devices.

Scalability – Threats that target mobile devices are the same for small businesses and enterprises. As businesses grow, they require security management technology that is automated, policy-based, and scalable so that the infrastructure can accommodate new mobile platforms and services as they are introduced. With this information-centric framework in place, companies can take full advantage of the benefits offered by the cloud. At the same time, having the right policies and technologies in place provides confidence that data – the new enterprise currency – is secure from unauthorized access.

Combined, the five guidelines provide a strong baseline policy, which should give IT and business leaders confidence in the cloud and the mobile devices it enables.

Mark Bregman is Executive Vice President and Chief Technology Officer at Symantec, responsible for the Symantec Research Labs, Symantec Security Response and shared technologies, emerging technologies, architecture and standards, localization and secure coding, and developing the technology strategy for the company.

Is Tokenization or Encryption Keeping You Up at Night?

By Stuart Lisk, Senior Product Manager, Hubspan

Are you losing sleep over whether to implement tokenization or full encryption as your cloud security methodology? Do you find yourself lying awake wondering if you locked all the doors to your sensitive data? Your “sleepless with security” insomnia can be treated by analyzing your current situation and determining the level of coverage you need.

Do you need a heavy blanket that covers you from head to toe to keep you warm and cozy or perhaps just a special small blanket to keep your feet warm? Now extend this idea to your data security – do you need end-to-end encryption that blankets all of the data being processed or is a tokenization approach enough, with the blanket covering only the part of the data set that needs to be addressed?

Another major reason for the debate over which method is right for you relates to compliance with industry standards and government regulations. PCI DSS is the most common compliance concern, as it focuses specifically on financial data being transmitted over a network, where it is exposed to hackers and “the boogie man.”

There is much hype in the industry that makes us believe we must choose one approach over the other. Instead of the analysts and security experts helping us make the decision, they have actually caused more confusion and sleepless nights.

As with anything involving choice, there are pros and cons to each approach. Tokenization provides flexibility, because you can select (and thereby limit) the data that needs to be protected, such as credit card numbers. Another example of how tokenization is often leveraged is in the processing of Social Security numbers. We all know how valuable those digits are. People steal those golden numbers and, in essence, steal identities. Isolating a Social Security number allows it to be replaced with a token during transit and then restored to the real number upon arrival at the destination. This process prevents the numbers from being illegally obtained in transit.
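The tokenization flow just described can be sketched roughly as follows. The vault class and token format here are hypothetical; a production vault lives in hardened, audited storage at the trusted endpoint:

```python
import secrets

class TokenVault:
    """Toy token vault: replaces a sensitive value with a random token.
    Detokenization happens only at the trusted destination."""
    def __init__(self):
        self._store = {}

    def tokenize(self, value: str) -> str:
        # The token is random, so it has no mathematical link to the value
        token = "tok_" + secrets.token_hex(8)
        self._store[token] = value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("078-05-1120")   # SSN-formatted sample value
# Only `token` crosses the network; the real number never leaves the vault
```

Note what this buys you for compliance: systems that handle only the token never touch the sensitive value, which is exactly how tokenization shrinks PCI DSS scope.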

However, in order to do this successfully, you must be able to identify the specific data to tokenize, which means you must have intimate knowledge of your data profile. Are you confident you can identify every piece of sensitive data within your data set? If not, then encryption may be a better strategy.

Another advantage of utilizing tokenization as your security methodology is that it minimizes the cost and complexity of compliance with industry standards and government regulations. Certainly from a PCI DSS compliance issue, leveraging tokenization as a means to secure credit card data is less expensive than E2EE as the information that needs to be protected is well known and clearly identified.

Full, end-to-end encryption secures all the data regardless of its makeup, from one end of the process through to the destination. This “full” protection leaves no chance of missing data that should be protected. However, it could also be overkill, more expensive or potentially hurt performance.

Many companies will utilize full encryption if there are concerns about computers being lost or stolen, or worries about natural disasters. Full end-to-end encryption ensures data protection from the source throughout the entire transmission. All data, without regard to its content, is passed securely over the network, including public networks, to its destination, where it is decrypted and managed by the recipient.
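Conceptually the flow looks like the sketch below: the same shared key encrypts everything at the source and decrypts it at the destination, with no need to know what the payload contains. The keystream construction here is purely illustrative – a real deployment would use a vetted cipher such as AES-GCM from an established library:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from the shared key.
    Illustration only -- use a vetted AEAD cipher in production."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream is its own inverse:
    # the same call encrypts and decrypts
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-secret-established-out-of-band"
ciphertext = xor_crypt(key, b"entire payload, content unknown to the carrier")
plaintext = xor_crypt(key, ciphertext)
```

The contrast with tokenization is visible in the code: nothing inspects or classifies the payload, which is why E2EE needs no knowledge of the data profile.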

While there is much being said in the market about performance, this should not be a deal breaker; optimization technologies and methodologies can minimize the performance difference. It also depends on whether security is the highest priority. In a recent survey Hubspan conducted on cloud security, more than 77% of respondents said they were willing to sacrifice some level of performance in order to ensure data security. The reality is that full-encryption performance is acceptable for most implementations.

Also, you do not need to choose one methodology over the other. As with cloud implementations, many companies are adopting a hybrid approach when it comes to data security in the cloud. If your data set is well known and defined, and the data subset is sensitive, then tokenization is a reliable and proven method to implement. However, if you are not sure of the content of the data and you are willing to basically lock it down, then encrypting the data end-to-end is most likely the best approach.

Clearly there are a number of approaches one can take to secure data from malware and other threats. Tokenization and E2EE are two of the most popular today. The fact is, you must look at a variety of approaches and incorporate any and all of them to keep your data out of the hands of those who would do you harm.

It is also important to realize that each of these methodologies requires different infrastructure to support it, and the cost of implementing them varies just as much. Keep that in mind as you consider how best to secure your data.

At the risk of over-simplifying your decision criteria, think of data security as deciding whether to use a full comforter to keep you warm at night, or a foot blanket to provide warmth only where you specifically need it.


Stuart Lisk is a Senior Product Manager for Hubspan with over 20 years’ experience in enterprise network, system, storage and application product management. He has over ten years of experience managing cloud computing (SaaS) products.

Protect the API Keys to your Cloud Kingdom

API keys should become first-class citizens of security policies, just like SSL keys

By Mark O’Neill, CTO, Vordel

Much lip service is paid to protecting information in the Cloud, but the reality is often seat-of-the-pants Cloud security. Most organizations use some form of API keys to access their cloud services, so protection of these API keys is vital. This blog post will explore the issues at play when protecting API keys and recommend some solutions.

In 2011, the sensitivity of API Keys will start to be realized, and organizations will better understand the need to protect these keys at all costs. After all, API keys are directly linked to access to sensitive information in the cloud (like email, sales leads, or shared documents) and pay-as-you-use Cloud services. As such, if an organization condones the casual management of API keys they are at risk of: 1) unauthorized individuals using the keys to access confidential information and 2) the possibility of huge credit card bills for unapproved access to pay-as-you-use Cloud services.

In effect, easily accessed API keys means anyone can use them and run up huge bills on virtual machines. This is akin to having access to someone’s credit card and making unauthorized purchases.


Let’s take a look at APIs. As you know, many Cloud services are accessed using simple REST Web Services interfaces. These are commonly called APIs, since they are similar in concept to the more heavyweight C++ or Visual Basic APIs of old, though they are much easier to leverage from a Web page or from a mobile phone, hence their increasing ubiquity. In a nutshell, API Keys are used to access these Cloud services. As Darryl Plummer of Gartner noted in his blog, “The cloud has made the need for integrating between services (someone told me, “if you’re over 30 you call it an ‘API’, and if you are under 30 you call it a ‘service’”) more evident than ever. Companies want to connect from on-premises apps to cloud services and from cloud services to cloud services. And, all of these connections need to be secure and governed for performance.” [i]

As such, it’s clear that API keys control access to the Cloud service’s crown jewels, yet they are often emailed around an organization without due regard to their sensitivity, or stored on file servers accessed by many people. For example, if an organization is using a SaaS offering, such as Gmail for its employees, it usually gets an API key from Google to enable single sign-on. This API key is only valid for the organization and allows employees to sign in and access company email. You can read more about the importance of API keys for single sign-on in my earlier blog titled “Extend the enterprise into the cloud with single sign-on to cloud based services.”

How are API keys protected?

API Keys must be protected just like passwords and private keys. This means they should not be stored as files on the file system, or baked into non-obfuscated applications that can be analyzed relatively easily. In the case of a Cloud Service Broker, API keys are stored encrypted, and when a Hardware Security Module (HSM) is used, the keys can be stored in hardware, since a number of HSM vendors – Sophos-Utimaco, Thales nCipher, SafeNet, and Bull, among others – now support the storage of material other than RSA/DSA keys. The secure storage of API keys means that operations staff can apply a policy to key usage. It also means that regulatory criteria related to privacy and protection of critical communications (for later legal “discovery,” if mandated) are met.

Broker Model for Protecting API Keys

The following are some common approaches to handling API keys with recommended solutions:

1) Developers Emailing API Keys: organizations often email API keys to developers, who copy and paste them into code. This casual practice is rife with security issues and should be avoided. Additionally, if a developer bakes the API keys into code, a request for a new key requires a code change, resulting in extra work.

2) Configuration Files: another common scenario is when a developer puts an API key into a configuration file on the system, where it can be easily discovered. People should think of API keys as the equivalent of private SSL keys and handle them in a secure fashion. In fact, if API keys get into the wrong hands they are more dangerous than private SSL keys, because they are immediately actionable: if someone uses an organization’s API keys, the organization gets the bill. One solution is to keep the keys in network infrastructure that is built for security – a cloud service broker architecture, with the broker product managing the API keys.
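A minimal illustration of the safer pattern: code fetches the key from its runtime environment (standing in here for broker-managed injection) instead of reading it from a configuration file on disk. The variable name is invented for the example:

```python
import os

def load_api_key(name: str) -> str:
    """Fetch an API key from the process environment rather than a
    config file. In a broker setup, the broker injects this value at
    deploy time; source code and config files never hold a copy."""
    key = os.environ.get(name)
    if key is None:
        raise RuntimeError(f"{name} not provisioned for this environment")
    return key

# Stand-in for broker/ops injection; real code would not set this itself
os.environ["EXAMPLE_CLOUD_API_KEY"] = "k-123"
api_key = load_api_key("EXAMPLE_CLOUD_API_KEY")
```

Failing loudly when the key is absent also means a rotated or revoked key surfaces as a clear deployment error rather than a silent authentication failure.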

3) Inventory of Keys: a way to avoid issues with managing API keys is to implement an explicit security policy regarding these keys. Ideally, this should come under the control of the Corporate Security Policy, with a clear focus on governance and accountability. The foundation of this approach is to keep an inventory of API keys. Despite the benefits of such an inventory, many organizations continue to take an ad hoc approach to keeping track of API keys.

Some of the key questions organizations should ask when developing an inventory of API keys are:

a) What keys are being used, and for what purposes?

b) Who is the contact person responsible for specific keys?

c) Is there a plan for handling key expiry, and how will notification happen? If there is no clear plan for handling expired API keys, it can cause pandemonium when a key expires.

The inventory could be managed in a home-grown encrypted Excel spreadsheet or database, or via more specialized off-the-shelf products. The disadvantage of the home-grown approach is the time required to manage the spreadsheet or database and the possibility of human error. An alternative is to leverage the capabilities of an off-the-shelf product such as a cloud service broker. In addition to providing other services, a broker allows an organization to easily view critical information about API keys, including who is responsible for them, how they are being used, and their expiry dates.
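A toy version of such an inventory, carrying the purpose, owner, and expiry fields the questions above call for (the records, dates, and notification window are invented):

```python
from datetime import date, timedelta

# One record per key: purpose/id, responsible owner, expiry date
inventory = [
    {"key_id": "gmail-sso",   "owner": "j.doe",   "expires": date(2011, 6, 1)},
    {"key_id": "storage-api", "owner": "a.smith", "expires": date(2011, 3, 1)},
]

def expiring_soon(records, today, warn_days=30):
    """Flag keys that expire within the notification window, so
    rotation happens before anything breaks."""
    horizon = today + timedelta(days=warn_days)
    return [r["key_id"] for r in records if r["expires"] <= horizon]

alerts = expiring_soon(inventory, today=date(2011, 2, 15))
```

Even this trivial structure answers the three questions; the broker products mentioned above essentially maintain the same records with the manual upkeep and human-error risk removed.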

4) Encrypted File Storage: one of the more potentially dangerous options is when a developer tries to implement their own security for API keys. For example, the developer understands that the API keys have to be protected and chooses to store them in a difficult-to-find spot – sometimes encrypting them and hiding them in files or a registry that people would not typically access. Someone will inevitably find out about this “secret” hiding spot, and before long the information is known throughout the organization. This classic mistake highlights the old adage that security through obscurity is no security at all.

In summary, as organizations increasingly access Cloud services, it is clear the casual use and sharing of API keys is an accident waiting to happen. Regardless of how an organization chooses to manage API keys – using a home-grown approach or an off-the-shelf product – the critical goal is to safeguard the access and usage of these keys.

I would encourage CIOs and CSOs to recognize API keys as first-class citizens of security policies, similar to SSL private keys. I would also advise anyone dealing with API keys to treat them as a sensitive resource, since they provide access both to sensitive information and to pay-as-you-use Cloud services. Effective management of API keys enhances an organization’s Cloud security and avoids unauthorized credit card charges for running virtual machines; slack management will likely result in leakage of sensitive information, as well as unrestricted access to pay-as-you-go Cloud services – courtesy of the organization’s credit card.

Mark O’Neill – Chief Technology Officer – Vordel
As CTO at Vordel he oversees the development of Vordel’s technical development strategy for the delivery of high performance Cloud Computing and SOA management solutions to Fortune 500 companies and Governments worldwide. Mark is author of the book “Web Services Security” and a contributor to “Hardening Network Security,” both published by Osborne/McGraw-Hill. Mark is also a representative of the Cloud Security Alliance, where he is a member of the Identity Management advisory panel.

[i]Cloudstreams: The Next Cloud Integration Challenge – November 8, 2010

Constant Vigilance

By Jon Heimerl


Constant Vigilance. Mad-Eye Moody puts it very well. Constant Vigilance.

Unfortunately, these days we need constant vigilance to help protect ourselves and our companies from peril. That is not to say that we can never relax and breathe. Relief comes from a key part of any decent cyber-security program: prioritizing the threats you face and considering the potential impact they could have on your business. Good practice says we need to do the things that really protect us from the big, bad, important threats – those that can really hurt us. “Constant vigilance” says we will actually follow through, do the analysis, and take appropriate mitigating actions.

Why should we worry? Because of the difference that mitigating actions, taken in advance, can make when a threat becomes real.

Did BP exercise any sense of constant vigilance in the operations of their Deepwater Horizon oil well? The rupture in the oil well originally occurred on April 20, 2010, and the well was finally capped 87 days later on July 16. Estimates are rough, but something on the order of 328 million gallons of oil spilled into the Gulf, along with a relatively unknown amount of natural gas, methane, and other gases and pollutants. At a current market price of $3.59 per gallon, that would have been about $1.2 billion worth of gasoline. BP estimated cleanup costs in excess of another $42 billion. Of that amount, BP estimated “response” costs as $2.9 billion. And with the recent news that U.S. prosecutors are considering filing manslaughter charges against some of the BP managers for decisions they made before the explosion, there is a good chance that the U.S. Department of Justice could be considering filing Gross Negligence charges against BP, which could add another $16 billion in fines, and lead the way to billions more in lawsuits.

So I have to ask, what form of vigilance did BP exercise when they constructed and drilled the well? As they demonstrated, they were clearly unprepared for the leak. They responded slowly, and their first attempts at stopping the leak were feeble and ineffective. It seriously looked like they had no idea what they were doing. Not only that, but it quickly became obvious that they did not even have a plan for how to deal with the leaking well, or with the cleanup, other than to let the ocean disperse the oil. Even if we ignore the leaked oil and associated cleanup, if they had spent $2 billion on measures to address the leak, before it happened, they would have come out nearly a billion dollars ahead. $2 billion could have paid for a lot of monitoring, safety equipment, and potential well caps; maybe even a sea-floor level emergency cutoff valve, if they had things ready beforehand. If they had evaluated the potential threat and prepared ahead of time. If they had exercised just a little bit of vigilance. Yes, hindsight is 20/20, but by all appearances BP had not even seriously considered how to deal with something like this.

On Friday, March 11, an 8.9 magnitude earthquake struck off the coast of Japan, followed by a massive tsunami. The earthquake and tsunami struck the Fukushima nuclear power plant, located almost due west of the quake epicenter. Since then, all six of the reactors at the Fukushima plant have had problems. As of the end of March, Japan is still struggling with the reactors and the radioactive material that has leaked from them. Radioactive plutonium has been discovered in the soil and water outside some of the reactors, and we still do not know the exact extent of the danger or the eventual cost of this part of the disaster in Japan. The single largest crisis at the plant has been the lack of power that could help keep cooling systems active. The issue at hand is that the plant had skipped a scheduled inspection that would have covered 33 pieces of equipment across the six reactors. Among the equipment that was not inspected were a motor and backup power generator, which failed during the earthquake. Efforts to restore power have been hampered by water from the tsunami, which breached the sea wall and flooded parts of the low-lying reactor complex.

We don’t yet know the exact extent of the reactor disaster, or the potential costs for continued cleanup and containment, or whether such cleanup is even possible. But we can safely estimate that the cost will run to many millions of dollars. Would a good measure of diligence have helped minimize the extent of the disaster at Fukushima? We cannot say for sure, but perhaps. Would the inspection have found a problem with the generator that could have helped provide the needed power to the reactor cooling systems? Perhaps the 19-foot sea wall that protected the plant was determined by experts to be appropriate for the job, but the 46-foot tsunami overwhelmed the wall and flooded the facility. I would have to hear from an expert in that area before I made a final judgment, but perhaps better drainage and water pumps to remove excess water would have been appropriate. Much of this is easy to say in hindsight, but perhaps more vigilance upfront would have helped make the disaster more manageable. Or at least, less unmanageable.

We can’t foresee everything, and cannot anticipate every conceivable threat. But we can ask ourselves a couple of basic questions.

  1. Where are my critical information, systems and resources?
  2. What are the major threats to those things identified in #1?
  3. What can I do to minimize the impact that those threats have on me?
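The prioritization step behind these questions can be as simple as scoring likelihood times impact and working from the top of the list. A sketch (the threats and scores below are invented for illustration):

```python
# Score each threat on likelihood and impact (1-5), then rank by risk
threats = [
    {"name": "lost mobile device", "likelihood": 4, "impact": 3},
    {"name": "datacenter flood",   "likelihood": 1, "impact": 5},
    {"name": "phishing",           "likelihood": 5, "impact": 4},
]

def prioritize(items):
    """Rank threats by risk = likelihood x impact, highest first."""
    return sorted(items, key=lambda t: t["likelihood"] * t["impact"],
                  reverse=True)

ranked = prioritize(threats)
```

The model is crude, but it forces the analysis the article calls for: deciding explicitly which threats deserve your mitigation budget first.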

After that, it just takes a little vigilance.

Jon-Louis Heimerl, CISSP

Cloud Annexation

By Stephen R Carter

The Cloud is the next evolutionary step in the life of the Internet. From the experimental ARPANET (Advanced Research Projects Agency Network) to the Internet to the Web – and now to the Cloud, the evolution continues to advance international commerce and interaction on a grand scale. The Web did not become what it is today until SSL (Secure Sockets Layer) was developed together with the collection of root certificates that are a part of every secure browser. Until SSL (and later TLS [Transport Layer Security]) the Web was an interesting way to look at content but without the benefit of secured commerce. It was the availability of secure commerce that really woke the Web up and changed the commerce model of the planet Earth forever.

While the user saw massive changes in interaction patterns from ARPANET to Internet to Web, the evolution to the cloud will be mostly restricted to the way that service and commerce providers see things. With the Cloud, service and commerce providers are expecting to see a decrease in costs because of the increase of economy of scale and the ability to operate a sophisticated data center with only very little brick and mortar to care for (if any). With a network link and a laptop a business in the Cloud era could be a first class citizen in the growing nation of on-line commerce providers.

However, just as the lack of SSL prevented commerce on the Web, the lack of security in the Cloud is holding that nation of on-line commerce providers back from the promise of Cloud. As early as February 2011, this author has seen advertised seminars and gatherings concerning the lack of security in the Cloud. Any gathering concerning the Cloud will have a track or two on the agenda concerning Cloud security.

The issue is not that Cloud providers do not use strong cryptographic mechanisms and materials, rather, the issue stems from the control that a business or enterprise has over the operational characteristics of a Cloud together with audit events to show regulatory compliance. Every data center has a strict set of operations policies that work together to show to the world and shareholders that the business is under control and can meet its compliance reporting requirements. If the enterprise adopts a “private cloud” or a Cloud inside of the data center, the problems start to show themselves and they compound at an alarming rate when a public Cloud is utilized.

So, what is to be done? There is no single solution to the security issue surrounding the Cloud as there was for the Web. The enterprise needs the ability to control operations according to policy – an ability that is compromised by a private cloud and breaks down with a public cloud. The answer is described by a term I call “Cloud Annexation.” Just as Sovereign Nation 1 can work with Sovereign Nation 2 to obtain property and annex it into Sovereign Nation 1, thus making the laws of Sovereign Nation 1 the prevailing law-of-the-land within the property, so too should an enterprise be able to annex a portion of a cloud (private or public) and impose policy (law) upon the annexed portion so that, as far as policy is concerned, it becomes part of the data center. Annexation also allows enterprise identities, policy, and compliance to be maintained locally if desired.

Figure 1: Cloud Annexation

This is obviously not what we have today. But, it is not unreasonable to expect that we could have it in the future. Standards bodies such as the DMTF are working on Cloud interoperability and Cloud management where the interfaces and infrastructure necessary to provide the functions of cloud annexation would be made available. The cloud management of the future should allow for an enterprise to impose its own crypto materials, policy, and event monitoring upon the portion of a cloud that it is using, thus annexing that portion of the Cloud. The imposition of enterprise policy must not, of course, interfere with the policy that the cloud provider must enforce – after all, the cloud provider has a business to care for as well. This will require that there be some facility to normalize the policies of the cloud provider and cloud consumer so that, without exposing sensitive information, both parties can be assured that appropriate policies can be enforced from both sides. The situation would be improved substantially if, like we have a network fabric today, we were to have an Identity Fabric – a network layer that overlays the network fabric that would provide identity as pervasively as network interconnectivity is today. But that is the topic of another posting.

In conclusion, the Cloud will not be as successful as it could be if the enterprise must integrate yet another operating and policy environment. The Cloud must become a natural extension of the data center so that the cost and effort of Cloud adoption are reduced and the “security” concerns are alleviated. If Cloud annexation becomes a reality, the evolution will be complete.

Novell Fellow Stephen R Carter is a computer scientist focused on creating solutions for identity, cloud infrastructure and services, and advanced network communication and collaboration. Carter is named on more than 100 worldwide patents with more than 100 patents still pending. He is the recipient of the State of Utah’s 2004 Governor’s Medal for Science and Technology and was recognized in 2009 and 2011 as a “Utah Genius” because of his patent work.

Privileged Administrators and the Cloud: Who will Watch the Watchmen?

By Matthew Gardiner

One of the key advantages of the cloud, whether public or private, flows from the well-known economic concept of “economies of scale.” Economies of scale refers to an operation that, up to a point, gets more efficient as it gets bigger – think electricity power plants, car factories, and semiconductor fabs. Getting bigger is a way of building differential advantage for the provider and thus becomes a key business driver, as he who gets bigger faster maintains the powerful position of low-cost provider. These efficiencies generally come from spreading fixed costs, whether human or otherwise, across more units of production; thus the cost per unit goes down as unit production goes up.

One important source of economies of scale for cloud providers is the IT administrators who keep the cloud service and related datacenters operating.  A typical measure of this efficiency is the ratio of managed servers to administrators.  In a typical traditional enterprise datacenter this ratio is in the hundreds, whereas cloud providers, through homogeneity and greater automation, can often attain ratios of thousands or tens of thousands of servers per administrator.

However, what is good from an economic point of view is not always good from a security and risk point of view.  With so many IT “eggs” from so many cloud consumers in one basket, the risk posed by these privileged cloud provider administrators must be explicitly recognized and addressed.  Privileged administrators going “rogue” – by accident, for profit, or for retribution – has happened so often around the world that it is hard to believe cloud providers will somehow be immune.  The short answer is they won’t.  The question is: what should you, as a cloud consumer, do to protect yourself from one of the cloud provider’s administrators “going rogue” on your data and applications?

For the purposes of this analysis I will focus on public cloud providers as opposed to private cloud providers.  While the basic principles I discuss apply equally to both, I use public cloud providers because controls are generally hardest to design and enforce when the systems are largely operated by someone else.

I find the well-worn IT concept of “people, process, and technology” to be a perfectly good framework for addressing this privileged administrator risk.  As cloud consumers move more sensitive applications to the cloud, they first need to be comfortable with who these IT administrators are in terms of location, qualifications, hiring, training, vetting, and supervision.  Shouldn’t the cloud providers’ HR processes for IT administrators be at least as rigorous as your own?

However, given that there is always a bad apple in a large enough bunch no matter the precautions, the next step is for the cloud providers to have operational processes that exhibit good security hygiene.   Practices such as segregation-of-duties, checks-and-balances, and need-to-know apply perfectly to how cloud administrators should be operating.  Cloud consumers also need to understand what the cloud providers’ practices, policies and processes are for the role of IT administrator.  Is it a violation for cloud provider administrators to look at or copy customer data, or stop customer applications, or copy virtual images?  It certainly should be.

The final area to consider is the various technologies that are being used to automate and enforce the security controls discussed above.  This certainly is made more challenging due to the variety of cloud services that are available.  What cloud consumers can do with public SaaS or PaaS providers (where they have little direct control or visibility into the cloud provider’s systems), is significantly less than that of IaaS providers, where the cloud consumer can install any software that they want at least at the virtual layer and above.  With SaaS and PaaS providers it is important that cloud consumers push hard for regular access at least to logs related to their data and applications, so that normal historical analysis can be conducted.  Of course, real-time, anytime access to system monitors would be even better.

For IaaS-based public cloud services the security options for the cloud consumer are much wider.  For example, it should become regular practice for cloud consumers to encrypt their sensitive data that resides in the cloud – to avoid prying eyes – as well as to use privileged user management software that combines control of the host operating system with privileged user password management, fine-grained access control, and privileged user auditing and reporting.  This type of privileged user management software enables the cloud-consuming organization to control its own administrators and, perhaps more importantly, to control and/or monitor the cloud provider’s administrators as well.
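The encrypt-before-upload principle can be illustrated as follows. The toy stream cipher below (a SHA-256 counter-mode keystream) is for illustration only – production code should use a vetted cipher such as AES-GCM from an established library – but it shows the essential property: the provider’s administrators see only ciphertext, while the tenant, who keeps the key, can recover the data:

```python
import hashlib

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy SHA-256 counter-mode keystream -- illustration only.
    Production code should use a vetted cipher such as AES-GCM."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    ks = keystream(key, nonce, len(plaintext))
    return bytes(a ^ b for a, b in zip(plaintext, ks))

decrypt = encrypt  # XOR stream ciphers are their own inverse

key, nonce = b"tenant-held-secret-key", b"unique-nonce-1"
record = b"customer SSN: 000-00-0000"
blob = encrypt(key, nonce, record)          # only this ciphertext goes to the cloud
assert blob != record
assert decrypt(key, nonce, blob) == record  # tenant can recover; provider cannot
```

The design point is key custody: as long as the key never leaves the tenant, a rogue provider administrator who copies the stored blob gains nothing useful.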

While there are huge benefits to using the cloud, it is equally important for organizations moving increasingly sensitive data and applications to the cloud that they think through how to mitigate all potential attack vectors.  The unfortunate reality is that people are a source of vulnerability and highly privileged people only increase this risk.  As the ancient Romans said – Quis custodiet ipsos custodes? – Who will watch the watchmen?

Matthew Gardiner is a Director working in the Security business unit at CA Technologies. He is a recognized industry leader in the security and Identity & Access Management (IAM) markets worldwide. He writes and is interviewed regularly in leading industry media on a wide range of IAM, cloud security, and other security-related topics. He is a member of the Kantara Initiative Board of Trustees. Matthew has a BSEE from the University of Pennsylvania and an SM in Management from MIT’s Sloan School of Management.  He blogs regularly and tweets @jmatthewg1234.

Debunking the Top Three Cloud Security Myths

By Margaret Dawson

The “cloud” is one of the most discussed topics among IT professionals today, and organizations are increasingly exploring the potential benefits of using cloud computing or solutions for their businesses. It’s no surprise Gartner predicts that cloud computing will be a top priority for CIOs in 2011.

In spite of this, many companies and IT leaders remain skeptical about the cloud, with many simply not knowing how to get started or how to evaluate which cloud platform or approach is right for them. Furthermore, uncertainty and fear around cloud security and reliability continue to permeate the market and media coverage. And finally, there remains confusion around what the cloud is and what it is not, leading some CIOs to want to scrap the term “cloud” altogether.

My number one piece of advice to companies of all sizes is not to buy the cloud, but rather to buy the solution.  Just as we have always done in IT, begin by identifying the challenge or pain that needs to be solved. In evaluating solutions that address your challenge, include both on-premise and “as a service” options.  Then use the same critical criteria to evaluate cloud solutions as you would any other, making sure they address your requirements around data protection, identity management, compliance, access control rules, and other security capabilities.

Also, do not get sucked into the hype.  Below, I attempt to dispel some of the most common myths about cloud security today:

1. All clouds are created equal

One of the biggest crimes committed by the vendor community and media over the last couple of years has been talking about “the cloud” as if it were a single, monolithic entity. This mindset disregards the dozens of ways companies need to configure the infrastructure underlying a cloud solution, and the many more ways of configuring and running applications on a cloud platform.

Often people lump together established, enterprise-class cloud solutions with free services offered by social networks and similar “permanent beta” products. As a result, many organizations fear that cloud solutions could expose critical enterprise resources and valuable intellectual property in the public domain. This conflation is a fundamental disservice to the cloud security discussion and only increases apprehension towards cloud adoption.

While the cloud can absolutely be as secure as or even more secure than an on-premise solution, all clouds are NOT created equal.  There are huge variances in security practices and capability, and you must establish clear criteria to make sure any solution addresses your requirements and compliance mandates.

2. Cloud security is so new, there’s no way it can be secure

With all the buzz surrounding the cloud, there’s a misconception that cloud security is a brand-new challenge that has not been addressed. What most people don’t understand is that while the cloud is already bringing radical changes in cost, scalability and deployment time, most of the underlying security concerns are, in fact, not new. It’s true that the cloud represents a new attack vector that hackers love to go after, but the vulnerabilities and security holes are the same ones you face in your traditional infrastructure.

Today’s cloud security issues are much the same as any other outsourcing model that organizations have been using for years. What companies need to remember is that when you talk about the cloud, you’re still talking about data, applications and operating systems in a data center, running the cloud solution.

It’s important to note that many cloud vendors leverage best-in-class security practices across their infrastructure, application and services layers.  What’s more, a cloud solution provides this same industry-leading security for all of its users, often providing a level of security your own organization could not afford to implement or maintain.

3. All clouds are inherently insecure

As previously mentioned, a cloud solution is no more or less secure than the datacenter, network and application on which it is built. In reality, the cloud can actually be more secure than your own internal IT infrastructure. A key advantage of third-party cloud solutions is that a cloud vendor’s core competency is to keep its network up and deliver the highest level of security. In fact, most cloud service providers have clear SLAs around this.

In order to run a cloud solution securely, cloud vendors can become PCI DSS compliant, SAS 70 certified and more. Undergoing these rigorous compliance and security regimes can provide organizations with the assurance that cloud security is top of mind for their vendor and appropriately addressed. The economies of scale involved in cloud computing also extend to vendor expertise in areas like application security, IT governance and system administration. A recent move towards cloud computing by the security-conscious U.S. Federal Government is a prime example of how clouds can be extremely secure, depending on how they are built.

One area folks often forget is the services piece of many cloud solutions.  Beyond the infrastructure and the application, make sure you understand how the vendor controls access to your data by its services and support personnel.

Anxiety over cloud security is not likely to dissipate any time soon. However, focusing on the facts and addressing the market’s concerns directly – like debunking cloud security myths – will go a long way toward helping companies gain confidence in deploying the cloud. There are also a growing number of associations and industry forums, such as the Cloud Security Alliance, that provide vendor-neutral best practices and advice.  In spite of the jokes, cloud security is not an oxymoron, but in fact an achievable and real goal.

Margaret Dawson is Vice President of Product Management for Hubspan. She’s responsible for the overall product vision and roadmap, and works with key partners in delivering innovative solutions to the market. She has over 20 years of experience in the IT industry, working with leading companies in the network security, semiconductor, personal computer, software, and e-commerce markets, including Microsoft. She is a frequent speaker on cloud security, cloud platforms, and other cloud-related themes. Dawson has worked and traveled extensively in Asia, Europe and North America, including ten years working in the Greater China region, consulting with many of the area’s leading IT companies, and serving as a BusinessWeek magazine foreign correspondent.

What NetFlix Can Teach Us About Security in the Cloud

By Eric Baize

For years, the security industry has been complacent, using complex concepts to keep security discussions isolated from mainstream IT infrastructure conversations.  The cloud revolution is bringing an end to this security apartheid. The emergence of an integrated IT infrastructure stack, the need for information-centric security and the disruption brought by virtualization are increasingly making security a feature of the IT infrastructure. The industry consolidation, initiated by EMC’s acquisition of RSA in 2006 and now well on its way with the recent acquisitions of McAfee by Intel and ArcSight by HP, demonstrates that the security and IT infrastructure conversations are one and the same.

We, the security people, must follow this transition and lay out a vision that non-security experts can understand without having to take a PhD course in prime number computation.

Let me give it a try by using the video rental industry as an example on why security in the cloud will be different and more effective.

Video rental industry:

1 – You start with a simple need:   Most families want to watch movies in their living room, a movie of their choosing, at a time of their choosing.

2 – A new market emerges:   Video rental stores with chains such as Blockbuster in the U.S.  Do you remember the late fees?

3 – Then comes a new business model.  Instead of paying per movie and driving to the store, you pay a monthly subscription fee and movies are delivered directly to your home.  Netflix* jumps in and makes the new delivery model work with legacy technology by sending DVDs through postal mail.

4 – Increase in network bandwidth makes video on demand possible on many kinds of end-user devices from cell phones to video game consoles.  Netflix expands its footprint by embedding its technology into any video viewing device that makes it into your home:   Game consoles, streaming players and smart phones.

5 – Blockbuster files for Chapter 11 bankruptcy.  Netflix is uniquely positioned to help consumers transition from the old world of video viewing with DVDs to video on demand.  The customer wins with better movie choices delivered faster.

The Security Industry

The parallel with the evolution the security industry is going through is striking:

1 – You start with a simple need from CIOs and CSOs:  They want to secure their information.

2 – A new market emerges:  IT security with early players focusing on perimeter security:  Building firewalls around information and bolting on security controls on top of insecure infrastructure.

3 – Here comes the cloud, a different way of delivering, operating and consuming IT.  IT is delivered as a service.  Enterprises use virtualization to build private clouds operated by internal IT teams.  The IT infrastructure is invisible and security is becoming much more information-centric. New security solutions such as the RSA Solution for Cloud Security and Compliance emerge, that focus on gaining visibility over the new cloud infrastructure and on controlling information.

4 – Increase in bandwidth makes it possible to expand private clouds into hybrid clouds, using a cloud provider’s IT infrastructure to develop new applications or to run server or desktop workloads.  Security is changing as controls are directly embedded in the new cloud infrastructure, making it security-aware. The need for visibility expands to the cloud provider’s IT infrastructure, and new approaches such as the Cloud Security Alliance GRC Stack enable enterprises to expand their GRC platform to manage compliance of their cloud provider infrastructure.

5 – What will happen to the security industry?  It must adapt and manage the transition from physical to virtual to cloud infrastructures.  First, by dealing with traditional security controls in physical IT infrastructure;  then, by embedding its control in the virtual and cloud infrastructure to build a trusted cloud; and finally by providing a consolidated view of risk and compliance across all types of IT infrastructure: physical or virtual, on-premise or on a cloud provider’s premises. The customer wins:  IT infrastructures have become security-aware, making security and compliance more effective and easier to manage.

So, does this explanation work for you? I welcome all comments below!

* Netflix is a registered trademark of Netflix, Inc.

Eric Baize is Senior Director in RSA’s Office of Strategy and Technology with responsibility for developing RSA’s strategy for cloud and virtualization. Mr. Baize also leads the EMC Product Security Office with company-wide responsibility for securing EMC and RSA products.

Previously, Mr. Baize pioneered EMC’s push towards security. He was a founding member of the leadership team that defined EMC’s vision of information-centric security, and which drove the acquisition of RSA Security and Network Intelligence in 2006.

Mr. Baize is a Certified Information Security Manager, holder of a U.S. patent and author of international security standards. He represents EMC on the Board of Directors of SAFECode.

[How to] Be Confident Storing Information in the Cloud

By Anil Chakravarthy and Deepak Mohan

Over the past few years, the information explosion has inhibited organizations’ ability to effectively secure, manage and recover data. This complexity is only increasing as organizations try to manage data growth by moving it to the cloud. It’s clear that storage administrators must regain control of information to reduce costs and recovery times while complying with regulatory standards, including privacy laws.

Data growth is currently one of the biggest challenges for organizations. In a recent survey by Gartner, 47 percent of respondents ranked data growth as the biggest data center hardware infrastructure challenge for large enterprises. In fact, IDC says that enterprise storage will grow an average of 60 percent annually.

As a result, companies are turning to the cloud to help them alleviate some of the pains caused by these issues.

The Hype of the Cloud: Public, Private and/or Hybrid?

There is so much hype associated with cloud computing. Companies often struggle with defining the potential benefits of the cloud to their executives, and with deciding which model to recommend. In short, the cloud is a computing environment that can deliver on-demand resources in a scalable, elastic manner and is typically, but not necessarily, accessible from anywhere through the internet. The cloud embodies the principle that users should have the ability to access data when, where and how they want – regardless of device.

The public cloud typically means that a third party owns the infrastructure and operations and delivers a service to multiple private entities (e.g., cloud-based email or document sharing). While these services typically provide low-cost storage, the model has a few drawbacks: companies have limited control over implementation, security, and privacy. This can be less than ideal for some organizations.

We believe most enterprises will implement a private cloud over the next few years. A private cloud retains control by enabling the company to host data and applications on its own infrastructure and maintain its own operations. This gives it maximum control, protecting against unforeseen issues. Private clouds can be scalable and elastic (similar to public clouds), providing options to improve storage performance, capacity, and availability as needed.

A hybrid approach enables organizations to combine the inexpensive nature of the public cloud with the control of a private cloud, giving additional control over the management, performance, privacy and accessibility of the cloud. For example, an organization may define a private cloud storage infrastructure for a set of applications and take advantage of public cloud storage for off-site copies and/or longer-term retention. This gives the organization the flexibility to deliver a service-oriented model to its internal customers.

Deciding which model to use is crucial. Each organization ought to evaluate its application portfolio, determine its corporate risk tolerance, and look for an agile way to consume cloud services. For small and medium-sized enterprises the propensity for public cloud applications and infrastructure can be much greater than for large enterprises.

The Private Cloud and Virtualization: Tools to Minimize Data Growth

As companies look to private clouds, often leveraging server virtualization, to deliver applications to the business more efficiently, the same approach can also help manage, back up, archive and discover information. Compounding the data-growth problem, IDC reports that companies often waste up to 65 percent of storage capacity, with disk utilization rates ranging from 28-35 percent. Cloud initiatives seem like the natural solution.

The private cloud is the clear answer. Combined with virtual environments, and if managed correctly, the cloud can help organizations save money, speed application delivery, increase application availability, and reduce risk.

As a best practice, organizations need to increase storage utilization within virtual infrastructures, as virtual machine deployments can often result in unused and wasted storage. There can also be performance implications with virtual desktops: when a large number of users log into their desktops simultaneously, performance can suffer dramatically. Organizations can use new tools that address the storage challenges of virtual environments and integrate with the virtual machine management console for rapid provisioning of servers and virtual desktops. This includes the cloning and setup of the virtual machines, desktops, applications, and files needed across all virtual servers.

By having intelligent storage management tools, organizations can reduce the number of copies stored for virtual machines and desktops yet still deliver the same number of applications and desktops to the business. This enables administrators to utilize the appropriate storage (with the appropriate characteristics: cost, performance, availability, etc.). According to our own tests, this can eliminate as much as 70 percent of storage requirements and costs by storing only the differences between VM boot images.
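The mechanism can be sketched with a toy block-level deduplication model (block size, image contents, and the 100-desktop scenario are invented for illustration): each fixed-size block is keyed by its hash, so blocks shared between cloned VM images are stored once and only the blocks that differ consume extra space:

```python
import hashlib

BLOCK = 4096  # bytes per block (illustrative)

def blocks(image: bytes):
    return [image[i:i + BLOCK] for i in range(0, len(image), BLOCK)]

def dedup_store(images):
    """Store each unique block once, keyed by its SHA-256 digest."""
    store = {}    # digest -> block contents
    recipes = []  # per-image ordered list of block digests
    for img in images:
        recipe = []
        for blk in blocks(img):
            digest = hashlib.sha256(blk).hexdigest()
            store.setdefault(digest, blk)
            recipe.append(digest)
        recipes.append(recipe)
    return store, recipes

# 100 desktops cloned from one golden image, each differing in one small region.
golden = bytes(BLOCK) * 256  # 1 MiB golden image of identical blocks
clones = [golden[:BLOCK] + (i + 1).to_bytes(4, "big").ljust(BLOCK, b"\0")
          + golden[2 * BLOCK:] for i in range(100)]

store, recipes = dedup_store(clones)
raw = sum(len(c) for c in clones)
stored = sum(len(b) for b in store.values())
print(f"raw: {raw} bytes, deduplicated: {stored} bytes "
      f"({100 * (1 - stored / raw):.1f}% saved)")
```

In this contrived example the savings are extreme because the clones are nearly identical; real VM and desktop images share less, but the same store-once-per-unique-block principle drives the reductions described above.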

In addition, by utilizing appropriate management tools that look across all environments – whether physical, virtual or cloud-based – organizations can drive down costs by gaining a better understanding of how they are using storage, improving utilization and making better purchasing decisions. Furthermore, using such centralized management tools will help them better automate tasks to improve services and reduce errors. This automation helps organizations deliver storage as a service (a key tenet of private cloud computing) with capabilities including on-host storage provisioning, policy-driven storage tiering and chargeback reporting.

Another example: when organizations back up applications within their virtual environment, they have normally performed two separate backups – one for full-image recovery and one of the individual files within the environment for later recovery. Organizations can reduce this waste by implementing solutions that perform a single off-host backup, in the cloud, and still allow two separate recoveries: of the full image and of granular files. This more effective implementation of deduplication keeps data volumes lower and allows for better storage utilization.

Hybrid Cloud Solutions: Control of Storage Utilization and Archiving

Data protection and archiving environments within virtual and cloud environments tend to grow faster than anticipated. These environments will need to be managed closely to keep costs down. Luckily, there are software tools that address this quite effectively.

Implementing a hybrid model allows organizations to get storage offsite (through public cloud storage), eliminating tape rotations and other expenses associated with off-premise storage. However, organizations should be cautious of tools that don’t provide consistent management across physical, virtual, and cloud-based infrastructures.

Many organizations are examining cloud-based email such as Google’s Gmail and Microsoft Office 365. But, as a best practice for this hybrid model, organizations can’t compromise corporate security and governance policies. This often results in organizations needing to maintain on-premise email archiving and discovery capabilities for information that resides in the cloud. In doing so, organizations have a consistent way to discover information that resides in their private cloud as well as information hosted in the public cloud.

Of course, organizations that integrate tightly with major cloud storage partners will see the biggest benefit of this hybrid approach – especially if they need to quickly deploy a cloud implementation to meet rapid growth.

Moving Forward with an Eye to the Sky

IDC reports that 62 percent of respondents to a recent survey say that they will be investing in data archiving or retirement in 2011 to address the challenges associated with data growth. IT organizations are in the process of trying to re-architect their environments to meet these challenges. Private and hybrid clouds, combined with virtualization, seem key in addressing these challenges.

By implementing cloud solutions, storage administrators are regaining control of information, helping them to reduce storage costs, and better deal with tomorrow’s challenges.

Anil Chakravarthy, Senior Vice President, Storage and Availability Management Group and Deepak Mohan, Senior Vice President, Information Management Group, Symantec Corporation

Hey, You, Get off of My Cloud

By Allen Allison

The emerging Public Cloud versus Private Cloud debate is not just about which solution is best; it extends to the very definition of cloud.  I won’t pretend that my definitions of public cloud and private cloud match everybody else’s, but I would like to begin by establishing my point of reference for the differences between the two.

Public Cloud:  A multi-tenant computing environment that can deliver on-demand resources in a scalable, elastic manner that is both measured and metered, and often charged, on a per-usage basis.  The public cloud environment is typically, but not necessarily, accessible from anywhere – through the internet.

Private Cloud: A single-tenant computing environment that may provide scalability and over-subscription similar to the Public Cloud, but solely within the single tenant’s infrastructure.  This infrastructure may exist on the tenant’s premises, or may be delivered in a dedicated model through a managed services offering.  The private cloud environment is typically accessible from within the tenant’s infrastructure.  However, it may be necessary to enable external access via the internet or other connectivity.

It is commonly understood that a cloud environment, whether public or private, has several benefits including lower total cost of ownership (TCO).  However, there are considerations that should be made when determining whether the appropriate option is a public or private cloud.  Below are some key points to consider, as well as some perceptions, or misperceptions, of the benefits of each.

In a Private Cloud, the owner or tenant may have more flexibility in establishing policies and procedures for provisioning, usage, and security.  If there are specific controls that might otherwise impact other tenants in a shared environment, the organization may be given greater control over them within a dedicated environment.

In a Public Cloud, the tenant has less control over the shared resources, the security of the platform, and the compliance of the infrastructure. The tenant, however, may be able to leverage common security controls or compliance certifications that inspire greater confidence in the use of a managed cloud offering.  For example, if the public cloud infrastructure is included in the provider’s SAS70 (soon to be replaced by SSAE16) audit by a third party, the tenant may be in a position to offer those controls and compliance as part of its own compliance program.

In a Private Cloud, the owner or tenant may be able to leverage the scalability and capacity management of a platform that is able to handle the over-subscription or provisioning processes of a multi-resource infrastructure.  This allows for a consolidation of hardware and management resources, a potential reduction in administrative costs, and a scale that enables the use of idle resources (e.g. memory, CPU, etc.).  However, these benefits may come with a significant capital expense, depending on the cost model.

In a Public Cloud, tenants enjoy greater scalability and capacity benefits because the costs of adding resources and managing the environment are not tied to a single tenant but spread over all tenants of the platform.  Typically, in a public cloud, the tenant is billed only for the use of those resources.  This allows for a lower initial expense and a growth in cost to match utilization, which, in many cases, can equate to growth in revenue for the hosted application.  Likewise, when the need for resources is reduced, the total cost is also reduced.  This can be especially helpful when the platform is used to support a seasonal business (e.g., an online merchant).
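To make the seasonal example concrete, here is a toy cost comparison (the demand curve and prices are entirely hypothetical): a merchant whose load spikes in November and December pays only for usage in a public cloud, versus carrying peak capacity as a fixed cost all year:

```python
# Monthly compute demand (server-hours) for a seasonal online merchant:
# quiet most of the year, peaking in November and December.
demand = [200, 200, 220, 230, 240, 250, 260, 270, 300, 400, 900, 1200]

RATE_PER_HOUR = 0.50          # hypothetical public-cloud usage price
PRIVATE_MONTHLY_COST = 500.0  # hypothetical fixed cost to own peak capacity

public_cost = sum(h * RATE_PER_HOUR for h in demand)
private_cost = PRIVATE_MONTHLY_COST * 12

print(f"public cloud (pay-per-use): ${public_cost:,.2f}/year")
print(f"dedicated peak capacity:    ${private_cost:,.2f}/year")
```

With a sufficiently peaky demand curve, paying per use beats provisioning for the peak; a flat, predictable load would shift the comparison the other way.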

In a Private Cloud, the tenant has more control over maintenance schedules, upgrades, and the change-management process.  This allows for greater flexibility in managing the platform to comply with specific requirements, such as FDA 21 CFR or NIST 800-53.  As the stringent requirements of these regulations impair the flexibility of cloud environments, it is easier to maintain an entire dedicated cloud platform to these specific controls than to attempt to carve out exceptions in an otherwise multi-tenant cloud environment.

In a Public Cloud, the costs of the shared security infrastructure available to customers can be spread over multiple tenants.  For example, the cloud provider may enable the use of shared firewall resources for the inspection of traffic at the ingress of the cloud environment.  Customers can share the costs of maintenance and management as well as of the shared hardware resources used to deliver those firewall services.  This is important to note when those security resources include threat management and intrusion detection services, as the deployment and support of dedicated security infrastructure can be expensive.  Furthermore, most security infrastructure can be tailored to comply with specific regulations or security standards, such as HIPAA, PCI DSS, and others.

It is important to understand how cloud providers deliver managed cloud services on a public cloud platform.  Typically, the elastic environment is built on a robust, highly scalable platform with the ability to grow much larger than any individual private cloud environment.  This implies that there are a significant number of benefits of scale built into a common platform.  This allows for the following benefits to the provider, with a trickle-down effect to each tenant.

  1. The per-unit cost of each additional resource is greatly reduced, because enhancements to the platform are amortized across far more resources in a public cloud than in a private cloud.
  2. When a provider delivers security services in a public cloud environment, each tenant gains the benefits of security measures enforced for other clients.  An example of these benefits would be if a specific, known vulnerability is remediated for one customer, the same vulnerability remediation may be easily applied to all customers.
  3. The cloud provider’s reputation may work to the tenant’s advantage.  A cloud provider may take better precautions, such as adding additional redundancy, adding capacity sooner, or establishing more stringent change-management programs, for a shared public cloud infrastructure than they may be willing to deliver in a dedicated private cloud.  This may lend itself to better Service Level Agreements (SLA), greater availability, better flexibility, and rapid growth.

It is rare that a new cloud customer will require a dedicated cloud infrastructure.  This is most often reserved for those in the government, servicing the government, or in highly regulated industries.  For the rest, a public cloud infrastructure will likely provide the flexibility, growth, cost savings, and elasticity necessary to make the move from a dedicated physical environment to the cloud.  Those who choose to move to the public cloud understand the benefits and are able to leverage their providers to deliver the service levels and manageability to make the cloud experience a positive one.

Allen Allison, Chief Security Officer at NaviSite

During his 20+ year career in the information security industry, Allen Allison has served in management and technical roles, including the development of NaviSite’s industry-leading cloud computing platform; chief engineer and developer for a market-leading managed security operations center; and lead auditor and assessor for information security programs in the healthcare, government, e-commerce, and financial industries. With experience in the fields of systems programming; network infrastructure design and deployment; and information security, Allison has earned the highest industry certifications, including CCIE, CCSP, CISSP, MCSE, CCSE, and INFOSEC Professional. A graduate of the University of California, Irvine, Allison has lectured at colleges and universities on the subject of information security and regulatory compliance.

Three Cloud-Computing Data Security Risks That Can’t be Overlooked

By Slavik Markovich, CTO of Sentrigo

The move to Cloud Computing brings with it a number of attributes that require special consideration when it comes to securing data.  And since nearly every organization's most sensitive data is stored either directly in a relational database, or ultimately reaches a relational database through an application, it is worth considering how these new risks impact database security in particular.  As users move applications involving sensitive data to the cloud, they need to be concerned with three key issues that affect database security:

1)      Privileged User Access – Sensitive data processed outside the enterprise brings with it an inherent level of risk, because outsourced services bypass the physical, logical and personnel controls IT departments exert over in-house programs. Put simply, outsiders are now insiders.

2)      Server Elasticity – One of the major benefits of cloud computing is flexibility, so aside from the fact that you may not know (or could have little control over) exactly where your data is hosted, the servers hosting this data may also be provisioned and de-provisioned frequently to reflect current capacity requirements. This changing topology can be an obstacle to some technologies you rely on today, or a management nightmare if configurations must be updated with every change.

3)      Regulatory Compliance – Organizations are ultimately responsible for the security and integrity of their own data, even when it is held by a service provider. The ability to demonstrate to auditors that their data is secure despite a lack of physical control over systems hinges in part on educating them, and in part on providing them with the necessary visibility into all activity.

Access control and monitoring of cloud administrators is a critical issue in ensuring sensitive data is secure.  You likely perform background checks on your own privileged users and may have significant physical monitoring in place (card keys for entry to the datacenter, cameras, and even monitoring by security personnel).  Even if your cloud provider does the same, it is still not your own process, and that means giving up some element of control.  Yet these individuals may have nearly unlimited access to your infrastructure, access they need in many cases to ensure the performance and availability of the cloud resources for all customers.

So, it is reasonable to ask the cloud provider what kinds of controls exist on the physical infrastructure – most will have this well under control (run away, do not walk, if this is not the case).  The same is likely true for background checks on administrators.  However, you’ll also want to know if a malicious administrator at the cloud provider makes an unauthorized copy of your database, or simply connects directly to the database and changes records in your customer accounts.  You can’t trust simple auditing solutions as they are easily bypassed by DBAs, and audit files can be doctored or deleted by System Administrators.
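One defense worth asking a provider about is tamper-evident audit logging. The sketch below is illustrative Python, not any particular vendor's product: each audit record is chained to the hash of the previous one, so a doctored or deleted entry breaks verification even if the attacker is a privileged user with write access to the log file.

```python
import hashlib
import json

def append_record(log, record):
    """Append an audit record, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev_hash": prev_hash, "hash": entry_hash})

def verify_chain(log):
    """Recompute every hash; any doctored or deleted entry breaks the chain."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"user": "dba1", "action": "SELECT * FROM accounts"})
append_record(log, {"user": "dba1", "action": "UPDATE accounts SET ..."})
assert verify_chain(log)

log[0]["record"]["action"] = "SELECT 1"   # a doctored entry...
assert not verify_chain(log)              # ...is detected on verification
```

In practice the chain head would be anchored somewhere the DBA and system administrator cannot reach, such as a separate monitoring service, which is exactly the segregation of duties discussed below.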

You have a number of ways to address this (encryption, tokenization, masking, auditing and monitoring), but in all cases you need to make sure the solution you deploy cannot be easily defeated, even by privileged users, and will also work well in the distributed environment of the cloud.  This brings us to our next point.

Much has been written about how the location of your data assets in the cloud can impact security, but in fact potentially even more challenging, is the fact that the servers hosting this data are often reconfigured over the course of a day or week, in some cases without your prior knowledge.  In order to provide high availability and disaster recovery capabilities, cloud providers typically have data centers in multiple locations.  And to provide the elastic element of cloud computing, where you can expand capacity requirements in near real-time, additional resources may be provisioned as needed wherever capacity is available.  This results in an environment that is simply not static, and unless you are hosting your own private cloud, you may have limited visibility into these physical infrastructure updates.

How does this impact security?  Many of the traditional methods used to protect sensitive data rely on an understanding of the network topology, including perimeter protection, proxy filtering and network sniffing.  Others may rely on physical devices or connections to the server, for example some types of encryption, or hardware-assisted SSL.  In all of these cases, the dynamic nature of the cloud will render these models untenable, as they will require constant configuration changes to stay up-to-date.  Some approaches will be impossible, as you will not be able to ensure hardware is installed in the servers hosting your VMs, or on specific network segments along with the servers.

To work in this model, you need to rethink database security and take a distributed approach: look for components that run efficiently wherever data assets are located (locally on your cloud VMs) and that require minimal (if any) configuration as VMs are provisioned, de-provisioned, and moved.

Lastly, you will likely face a somewhat more challenging regulatory audit as you move data subject to these provisions to the cloud.  It's not that the cloud is inherently less secure; it is simply that it will be something different for most auditors.  And to the majority of auditors, different is not usually a good thing (apologies up front to all those very flexible auditors reading this – why is it we never have you on our customer audits?).  So, if the data you need for an application hosted in the cloud is subject to Sarbanes-Oxley, HIPAA/HITECH, PCI DSS, or other regulations, you need to make sure the controls necessary to meet compliance are in place, AND that you can demonstrate this to your auditor.

We’re seeing many cloud providers trying to allay these concerns by obtaining their own SAS 70 attestations, or even PCI DSS certifications covering their general environment.  While this is a nice touch and can even be helpful in your own audit, you are ultimately responsible for your own data and the processes related to it — and YOUR auditor will audit YOUR environment, including any cloud services.  So, you will need to be able to run reports on all access to the database in question and prove that in no case could an insider have gained access undetected (assuming your auditor is doing his or her job well, of course).  The key here is strong segregation of duties, including the ability for you (or a separate third party, NOT the cloud provider) to monitor all activity on your databases.  Then, if a privileged user touches your data, alerts go off, and if they turn off the monitoring altogether, you are notified in real time.

It is certainly possible to address these issues, and implement database security that is not easily defeated, that operates smoothly in the dynamic environment of the cloud, and provides auditors with demonstrable proof that regulatory compliance requirements have been satisfied.  But, it very well may mean looking at a whole new set of security products, developed with the specific needs of cloud deployments in mind.

Slavik is CTO and co-founder of Sentrigo, a leading provider of database security for on-premises and cloud computing environments and corporate member of the Cloud Security Alliance (CSA). Previously, Slavik was VP R&D and Chief Architect at [email protected], a leading IT architecture consultancy, and led projects for clients including Orange, Comverse, Actimize and Oracle. Slavik is a recognized authority on Oracle and Java/Java EE technologies, has contributed to open source projects and is a regular speaker at industry conferences. He holds a BSc in Computer Science.

Cloud Security: The Identity Factor

The Problem with Passwords


by Patrick Harding, CTO, Ping Identity

The average enterprise employee uses 12 userid/password pairs for accessing the many applications required to perform his or her job (Osterman Research, 2009).  It is unreasonable to expect anyone to create, regularly change (also a prudent security practice) and memorize a dozen passwords, yet this is common practice today.  Users are forced to take shortcuts, such as using the same userid and password for all applications, or writing down their many strong passwords on Post-It notes or, even worse, in a file on their desktop or smartphone.

Even if most users could memorize several strong passwords, there remains risk to the enterprise when passwords are used to access cloud services (such as Google Apps), where they can be phished, intercepted or otherwise stolen.

The underlying problem with passwords is that they work well only in “small” spaces; that is, in environments that have other means to mitigate risk.  Consider as an analogy the bank vault.  Its combination is the equivalent of a strong password, and is capable of adequately protecting the vault’s contents if, and only if, there are other layers of security at the bank.

Such other layers of security also exist within the enterprise in the form of locked doors, receptionists, ID badges, security guards, video surveillance, etc.  These layers of security explain why losing a laptop PC in a public place can be a real problem (and why vaults are never located outside of banks!).

Ideally, these same layers of internal security could also be put to use securing access to cloud services.  Also ideally, users could then be asked to remember only one strong password (like the bank vault combination), or use just one method of multi-factor authentication.  And ideally, the IT department could administer user access controls for cloud services centrally via a common directory (and no longer be burdened by constant calls to the Help Desk from users trying to recall so many different passwords).

One innovation in cloud security makes this ideal a practical reality:  federated identity.

Federated Identity Secures the Cloud

Parsing “federated identity” into its two constituent words reveals the power behind this approach to securing the cloud.  The identity is of an individual user, which is the basis for both authentication (the credentials for establishing the user is who he/she claims to be) and authorization (the cloud services permitted for use by specific users).  Federation involves a set of standards that allows identity-related information to be shared securely between parties, in this case:  the enterprise and cloud-based service providers.

The major advantage of federated identity is that it enables the enterprise to maintain full and centralized control over access to all applications, whether internal or external. Further, federated single sign-on (SSO) allows a user to log in once and then access all authorized cloud services via a portal or other convenient means of navigation. The IT department essentially controls how users authenticate, including whatever credentials may be required.  A related advantage is that, with all access control provisions fully centralized, “on-boarding” (adding new employees) and “off-boarding” (terminating employees) become at once more secure and substantially easier to perform.

Security Tokens

Identity-related information is shared between the enterprise and cloud-based service providers through security tokens: not the physical kind, but cryptographically signed documents (e.g. XML-based SAML tokens) that contain data about a user.  Under this trust model, the good guys have good documents (security tokens) issued from a trusted source; the bad guys never do.  For this reason, both the enterprise and the service providers are protected.  These security tokens essentially replace the use of a password at each cloud service.

When enabling a federated relationship with different cloud services, there are always two parties: the Identity Provider (IdP) and the Relying Party (RP), also known as the Service Provider.  The Identity Provider (the enterprise) is the authoritative source of the identity information contained in the security tokens.  The Relying Parties (the cloud service providers) establish relationships with one or more Identity Providers and verify and trust the security tokens containing the assertions needed to govern access control.

The authoritative nature of, and the structured relationship between, the two parties are fundamental to federated identity.  Based on the trust established between the enterprise and the cloud service, the Relying Parties have full confidence in the security tokens they receive.
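The mechanics can be illustrated with a deliberately simplified sketch. Real federation uses SAML assertions signed with X.509 certificates; here, Python's standard library and a hypothetical shared HMAC key stand in for that machinery, to show how a Relying Party accepts only claims issued by a trusted Identity Provider:

```python
import base64
import hashlib
import hmac
import json

SHARED_KEY = b"idp-rp-shared-secret"   # hypothetical; SAML uses X.509 signatures

def issue_token(subject, roles):
    """Identity Provider: sign a claims document about the user."""
    claims = json.dumps({"sub": subject, "roles": roles}, sort_keys=True).encode()
    sig = hmac.new(SHARED_KEY, claims, hashlib.sha256).digest()
    return base64.b64encode(claims) + b"." + base64.b64encode(sig)

def verify_token(token):
    """Relying Party: accept the claims only if the signature checks out."""
    claims_b64, sig_b64 = token.split(b".")
    claims = base64.b64decode(claims_b64)
    expected = hmac.new(SHARED_KEY, claims, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, base64.b64decode(sig_b64)):
        raise ValueError("signature invalid: token not from a trusted source")
    return json.loads(claims)

token = issue_token("user@example.com", ["crm-user"])
assert verify_token(token)["sub"] == "user@example.com"
```

The essential property is the same as in SAML: the Relying Party never sees a password, only a signed assertion it can verify against a key it already trusts.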


As the popularity of cloud-based services continues to grow, IT departments will increasingly turn to federated identity as the preferred means for managing access control.  With federated identity, users and the IT staff both benefit from greater security but also from greater convenience and productivity.  Users log in only once, remembering only one strong password, to access all authorized cloud services.

To learn more about Identity’s role in Cloud Security, visit Ping Identity

Patrick Harding, CTO, Ping Identity

Harding brings more than 20 years of experience in software development, networking infrastructure and information security to the role of Chief Technology Officer for Ping Identity. Previously, Harding was a vice president and security architect at Fidelity Investments where he was responsible for aligning identity management and security technologies with the strategic goals of the business. An active leader in the Identity Security space, Harding is a Founding Board Member for the Information Card Foundation, a member of the Cloud Security Alliance Board of Advisors, on the steering committee for OASIS and actively involved in the Kantara Initiative and Project Concordia. He is a regular speaker at RSA, Digital ID World, SaaS Summit, Catalyst and other conferences. Harding holds a BS Degree in Computer Science from the University of New South Wales in Sydney, Australia.


Navigating Cloud Application Security: Myths vs. Realities

Chris Wysopal, CTO, Veracode

Developers and IT departments are being told they need to move applications to the cloud and are often left on their own to navigate the challenges related to developing and managing the security of applications in those environments.  Because no one should have to fly blind through these uncertain skies, it’s important to dispel the myths, expose the realities and establish best practices for securing cloud-based applications.

Inherent Threats

Whether we are talking about IaaS (Infrastructure as a Service), PaaS (Platform as a Service) or SaaS (Software as a Service), perceived security vulnerabilities in the cloud are abundant.  A common myth is that organizations utilizing cloud applications should be most concerned about someone breaking into the hosting provider, or an insider gaining access to applications they shouldn’t.  This is an outdated, generic IT/infrastructure point of view.  What’s more important and elemental is to examine whether the web application is more vulnerable because of the way it was built and then deployed in the cloud, rather than focusing on cloud security risks purely from an environmental or infrastructure perspective.

It’s imperative to understand the inherent (and less publicized) threats facing applications in virtualized environments.  Common vulnerabilities associated with multi-tenancy and cloud provider services, like identity and access management, must be examined from both a security and a compliance perspective.  In a multi-tenant environment, hardware devices are obviously shared among other companies – potentially competitors and other customers, as well as would-be attackers.  Organizations lose control over the physical network and computing systems; even local storage for debugging and logging is remote.  Additionally, auditors may be concerned that the cloud provider has access to sensitive data at rest and in transit.

Inherent threats are present not only in the virtualized deployment environment, but also in the way applications for the cloud are developed in the first place.  Consider the choices many architects and designers are forced to make when developing and deploying applications in the cloud.  Because they now rely on external controls put in place by the provider, they may feel comfortable taking shortcuts when it comes to building in application security features.  Developers can rationalize this by the time-to-market advantages of writing, and testing, less code.  However, by handing external security controls to the provider, new attack surfaces quickly emerge around the VM, PaaS APIs and the cloud management infrastructure.

Security – Trust No One

Security trust boundaries change completely with the movement of applications from internal networks or the DMZ to the cloud.  As opposed to traditional internal application infrastructures, in the cloud the trust boundary shrinks to encompass only the application itself, with all the users and related storage, database and identity management systems becoming “external” to that application.  In this situation, “trust no one” takes on great significance for the IT organization.  With all these external sources wanting access to the application, how do you know which requests are legitimate?  How can we make up for the lack of trust?  It boils down to establishing an additional layer of security controls: organizations must encrypt all sensitive data stored or transmitted and treat all environmental inputs as untrusted in order to protect assets from attackers and from the cloud provider itself.
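Treating environmental inputs as untrusted can be as simple as validating every value the environment hands you against an allow-list before use. A minimal sketch follows; the variable names and validation rules are hypothetical, but the pattern applies to any configuration the cloud platform injects:

```python
import os
import re

# Hypothetical names; the point is the pattern: validate every external input
# against an allow-list before use, rather than trusting the cloud environment.
ALLOWED_LOG_LEVELS = {"DEBUG", "INFO", "WARNING", "ERROR"}
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]{1,253}$")

def read_trusted_config():
    """Read configuration from the environment, rejecting anything suspect."""
    level = os.environ.get("APP_LOG_LEVEL", "INFO").upper()
    if level not in ALLOWED_LOG_LEVELS:
        raise ValueError(f"rejecting untrusted log level: {level!r}")
    db_host = os.environ.get("APP_DB_HOST", "localhost")
    if not HOSTNAME_RE.match(db_host):
        raise ValueError(f"rejecting untrusted DB host: {db_host!r}")
    return {"log_level": level, "db_host": db_host}

os.environ["APP_LOG_LEVEL"] = "info"
os.environ["APP_DB_HOST"] = "db.internal.example"
print(read_trusted_config())  # {'log_level': 'INFO', 'db_host': 'db.internal.example'}
```

Failing closed (raising on anything unexpected) matters here: in a shared environment you cannot assume that only your own deployment tooling sets these values.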

Fasten Your Seatbelts

Best practices aimed at building in protection must be incorporated into the development process to minimize risks.  How can you help applications become more secure?  It starts with a seatbelt – in the form of application-level security controls that can be built into application code or implemented by the cloud services provider itself.  Examples of these controls include encryption at rest, encryption in transit (point-to-point and of message contents), auditing and logging, and authentication and authorization.  Unfortunately, in an IaaS environment, having the provider manage these controls may not be an option.  The advantage of using PaaS APIs to establish these controls is that in most cases the service provider has tested and debugged the API, which speeds time to market for the application.  SaaS environments offer no choice to the developer, as the SaaS provider is totally in control of how data is secured and identity is managed.

Traditional Application Security Approaches Still Apply

Another myth that must be debunked is the belief that any approach to application security testing – perhaps with a slightly different wrapper on it – can be used in a cloud environment.  While it is true that traditional application security issues still apply in the cloud, and that you still need to take advantage of established processes associated with requirements, design, implementation and testing, organizations can’t simply repackage what they know about application security.  Applications in the cloud require special care.  IT teams can’t be content to use mitigation techniques only at the network or operating system level anymore.

Security testing must be done at the application level, not the environmental level.  Threat modeling and design phases need to take additional cloud environmental risks into account.  And implementation needs to use cloud-security-aware coding patterns in order to effectively eliminate vulnerability classes such as Cross-Site Scripting (XSS) and SQL Injection.  Standards such as the OWASP Top 10 and CWE/SANS Top 25 are still applicable for testing IaaS and PaaS applications, and many SaaS extensions.
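Two of those coding patterns can be shown concretely. The sketch below uses Python with its built-in sqlite3 and html modules; the same patterns (bound parameters for SQL, output escaping for HTML) apply in any language or database:

```python
import html
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

# SQL injection: bind user input as a parameter, never concatenate it into SQL.
attacker_input = "alice' OR '1'='1"
rows = conn.execute("SELECT email FROM users WHERE name = ?",
                    (attacker_input,)).fetchall()
assert rows == []   # the injection attempt matches nothing

# XSS: escape untrusted data before embedding it in HTML output.
comment = '<script>alert("xss")</script>'
safe = html.escape(comment)
assert "<script>" not in safe
```

Because the attacker's string is passed as a bound parameter, the database treats it as a literal value rather than SQL, and the escaped comment renders as inert text rather than executing in the browser.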

Overall, dynamic web testing and manual testing are relatively unchanged from traditional enterprise application testing, but it’s important to get permission and notify your cloud provider if you plan to do dynamic or manual testing, especially on a SaaS extension you have written, so it doesn’t create the appearance that your organization is attempting an attack on the provider.

It’s also important to note that cloud design and implementation patterns are still being researched, with efforts being led by organizations like the Cloud Security Alliance and NIST.  Ultimately, it would be valuable for service providers to come up with a recipe-like implementation for APIs.

Pre-Flight Checklists

After applications have been developed and application security testing has been performed according to the requirements of the platform, how do you know you are ready to deploy?  Each environment (IaaS, PaaS or SaaS) requires its own checklist to ensure the applications are ready for prime time.  For example, for an IaaS application, the organization must have taken steps such as securing inter-host communication with channel-level encryption and message-based security, and filtering and masking sensitive information sent to debugging and logging functions.  For a PaaS application, threat modeling must have incorporated the platform APIs’ multi-tenancy risks.  For SaaS, it’s critical to have reviewed the provider’s documentation on how data is isolated from other tenants’ data.  You must also verify the SaaS provider’s certifications and their SDLC security processes.
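Filtering and masking sensitive information before it reaches remote logging can be implemented as a logging filter at the application level. A sketch in Python follows; the redaction patterns are illustrative, not exhaustive:

```python
import logging
import re

class MaskingFilter(logging.Filter):
    """Redact card numbers and password fields before log records leave the
    application (e.g. toward a cloud logging service)."""
    PATTERNS = [
        (re.compile(r"\b\d{13,16}\b"), "[CARD REDACTED]"),
        (re.compile(r"(password=)\S+"), r"\1[REDACTED]"),
    ]

    def filter(self, record):
        msg = record.getMessage()
        for pattern, repl in self.PATTERNS:
            msg = pattern.sub(repl, msg)
        record.msg, record.args = msg, ()
        return True

logger = logging.getLogger("app")
logger.addFilter(MaskingFilter())
logger.warning("login failed for card 4111111111111111 password=hunter2")
# emits: login failed for card [CARD REDACTED] password=[REDACTED]
```

Doing the masking inside the application, rather than at the logging service, matters in the cloud: once the record leaves your VM you no longer control who can read it.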

Future Threats

Myth: being prepared for a safe flight means it will be one.  Even with the best preparation and safety measures in place, there is no debating the nascent nature of this deployment environment, and much more research needs to be done.  One effective approach is to use threat modeling to help developers better understand the special risks of applications in the cloud.  For example, using this approach they can identify software vulnerabilities that can be exploited by a “pause and resume” attack, where a virtual machine becomes temporarily frozen.  A seemingly innocent halt to end-user productivity can actually mean a hacker has been able to enter a system to cause damage by accessing sensitive information or planting malicious code that can be released at a future time.

As a security community, with security vendors, cloud service providers, research organizations and end-users who all have a vested interest in securely deploying applications in the cloud, we have the power to establish guidelines and best practices aimed at building protection into the development process to prevent deployment risks.  Fasten your seatbelts, it’s going to be a fun ride.

As co-founder and CTO of Veracode, Chris Wysopal is responsible for the security analysis capabilities of Veracode technology. He’s a recognized expert and well-known speaker in the information security field. His groundbreaking work while at the company @stake was instrumental in developing industry guidelines for responsibly disclosing software security vulnerabilities. Chris is co-author of the award-winning password auditing and recovery application @stake LC (L0phtCrack), currently used by more than 6,000 government, military and corporate organizations worldwide.

Trusted Client to Cloud Access

Cloud computing has become an integral part of all IT decision making today, across industries and geographies, and the market is growing at a rapid pace. By 2014, IDC expects public cloud spending to rise to $29.5 billion, growing at 21.6 percent per year. At the same time, Forrester predicts the cloud security market will grow to $1.5 billion by 2015. This is good news, yet many CIOs are still sitting on the fence, not jumping on the opportunity cloud computing presents, because they worry about the security of their data and applications. The figure below shows the results of a TechTarget survey in which top CIOs were asked about their top-of-mind concerns with using cloud services.

Loss of control, compliance implications, and confidentiality and auditing topped the results. Under these three themes, the issues they listed are:

  • They find it hard to trust cloud providers’ security models
  • Managing the proliferation of user accounts across cloud application providers
  • The extended enterprise boundary complicates compliance
  • Shared infrastructure: if the cloud gets hacked, so do you
  • Audit-log silos on proprietary cloud platforms

This blog post lists a potential solution to address these issues and more.

Security Layers

First, let’s look at the various layers that are required to secure cloud applications and data.

You need to protect applications and data for assurance and compliance, access control, and defend against malicious attacks at the perimeter. Yet, the weakest link remains the client as malware and phishing attacks can send requests as if it were coming from a human user. To achieve end-to-end security, you need to look holistically at how to provide “trusted client to cloud access”. You can watch a webinar on this topic I recently did with security expert Gunnar Peterson.


One solution to this problem is a trusted broker that provides the glue between client security and cloud security. It should be able to determine whether cloud applications are being accessed from trusted and attested client devices, and block access from all non-trusted clients. One way to get client attestation is through Intel® Identity Protection Technology (IPT), which embeds second-factor authentication in the processor itself.

While a trusted broker enforces the above check, it should also be able to provide supplemental security on top of what cloud applications provide by offering:

  • Federated Single Sign-On (SSO) using industry standards such as SAML, OAuth and OpenID
  • Two-factor strong authentication with convenient soft OTP token support
  • Elevated authentication (a term for step-up authentication on a per-request basis, coined by Mark Diodati of Burton Group in his report on the Authentication Decision Point Reference Architecture)
  • Automated account provisioning and deprovisioning, with automated identity attribute synchronization to ensure that identity attributes across enterprise and cloud applications never go out of sync
  • A centralized audit repository with a common audit record across cloud applications
  • Orphan account reporting to catch unauthorized account creation by administrators in cloud applications
  • And a single dashboard giving 360-degree visibility into how cloud applications are being accessed by users (aka user activity monitoring)
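Orphan account reporting, for example, reduces to comparing the authoritative enterprise directory against the accounts that actually exist in each cloud application. A toy sketch (the account names are hypothetical; a real broker would pull both lists via directory and provisioning APIs):

```python
# Authoritative source of identity: the enterprise directory.
enterprise_directory = {"alice", "bob", "carol"}

# Accounts exported from one cloud application.
cloud_app_accounts = {"alice", "bob", "dave", "svc-admin-backdoor"}

# Orphans exist in the cloud app but not in the directory: possible
# unauthorized account creation by an administrator.
orphans = sorted(cloud_app_accounts - enterprise_directory)

# Directory users with no cloud account simply have not been provisioned yet.
unprovisioned = sorted(enterprise_directory - cloud_app_accounts)

print("Orphan accounts (exist in cloud app only):", orphans)
# → ['dave', 'svc-admin-backdoor']
```

Run on a schedule across every federated application, this simple set difference is what surfaces the accounts a deprovisioning process missed or an administrator created out of band.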

Such “Trusted Broker” software can ensure that enterprises adopt cloud applications with the tools to achieve “Control, Visibility, and Compliance” when accessing them. View more on Intel’s solutions in this space.

Industry initiatives

The Cloud Security Alliance (CSA) is working feverishly to provide awareness and guidance, with reference implementations, to address some of the security concerns listed earlier in this blog post. At the CSA Summit 2011, held at the RSA Conference 2011, I presented a roadmap for the Trusted Cloud Initiative (TCI), one of the subgroups of the CSA. In its reference architecture, TCI lists the following use cases for trusted access to the cloud.

TCI also published a whitepaper covering identity and access control for cloud applications.


While cloud application providers continue to enhance their security posture, it is in the best interest of enterprises to supplement it with additional security controls, using technologies such as a “Trusted Broker” that enable end-to-end secure client-to-cloud access and provide 360-degree visibility and compliance into how various cloud applications are being accessed by enterprise users. One such implementation of a “Trusted Broker” is Intel’s Expressway Cloud Access 360 product.

Vikas Jain, Director of Product Management for Application Security and Identity Products at Intel Corporation, has over 16 years of experience in the software and services market, with particular expertise in cloud security, identity and access management, and application architecture. Prior to joining Intel, Vikas held leadership roles in product management and software development at a wide range of technology companies including Oracle, Oblix, Wipro and Infosys. You can follow him on Twitter @VikasJainTweet.

And the Thunder Rolls: All the Noise about Cloud and What that Means When Lightning Strikes

Disaster Recovery (DR) and Business Continuity Planning (BCP) continue to be driving factors for some organizations looking to move to cloud.  Many are looking to manage their Disaster Recovery planning through extensive use of managed cloud services – and for good reasons.  These are the most common benefits of leveraging cloud services for disaster recovery planning cited by cloud customers:

  1. I only have to pay for what I use.  If I don’t declare a disaster scenario, my costs are nominal.
  2. I have flexibility in how much management my provider requires of me to maintain my DR, from “full control” to “no control”.
  3. I can leverage a world-class redundant facility to provide the greatest assurance of business continuity in the event of a major event.
  4. I can keep my applications as up-to-date as I want by defining my Recovery Point Objective (RPO) and Recovery Time Objective (RTO).
  5. When I declare a disaster, I can rely on my cloud service provider for support rather than expect my staff to travel to a Disaster Recovery site for recovery work.
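The RPO in item 4 is easy to reason about mechanically: it is satisfied as long as the age of the most recent successful replication to the DR site does not exceed the target. A small sketch (the 15-minute target is an arbitrary example, not a recommendation):

```python
from datetime import datetime, timedelta, timezone

# Hypothetical target: lose at most 15 minutes of data in a disaster.
RPO = timedelta(minutes=15)

def rpo_satisfied(last_replication: datetime, now: datetime) -> bool:
    """True if the most recent replication to the DR site is within the RPO."""
    return now - last_replication <= RPO

now = datetime(2011, 6, 1, 12, 0, tzinfo=timezone.utc)
assert rpo_satisfied(now - timedelta(minutes=10), now)       # within target
assert not rpo_satisfied(now - timedelta(minutes=45), now)   # RPO violated
```

The same check, run continuously against the provider's replication timestamps, is how a tenant verifies during steady state that the contracted RPO is actually being met rather than merely promised.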

However, some cloud customers are not sure how their managed cloud service providers deliver redundant cloud environments, Disaster Recovery options, and Business Continuity Planning and execution.  After all, not all cloud providers are the same.

Nobody wants to be left out in the rain when disaster strikes.  The mistaken notion is, “It is all in the cloud; it must be highly available.”  However, that is not necessarily the case.  Here are some key questions to ask your managed cloud service provider about its cloud infrastructure:

  1. To what level of redundancy do you maintain your cloud infrastructure within the primary location? N + 50%?  N + 1?  N x 2?
  2. To what level of capacity do you maintain your cloud infrastructure Disaster Recovery services in the redundant location or locations?
  3. Are Disaster Recovery or Business Continuity services included in my contract and managed cloud environment?
  4. How am I billed during steady state?
  5. How am I billed in the event of a declared disaster?
  6. What are the options for providing the best Recovery Point Objective (RPO) and the costs associated with those options?
  7. What are the options for providing the best Recovery Time Objective (RTO) and the costs associated with those options?
  8. When I declare a disaster, what are the resources I can rely on to provide assistance to perform full recovery of services and data?
  9. How often and to what extent are you willing to perform regular DR tests?
  10. Are your cloud data centers diverse in the following manner:
    1. Are they geographically disparate?
    2. Do they have redundant power feeds?
    3. Do you maintain redundant circuits into diverse sides of the facilities?
    4. Is the network distribution to the cloud environment fully redundant?
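To see why the N + 1 versus N × 2 question above matters, a rough availability model helps. The sketch below treats units as independent and uses an invented 99% per-unit availability; a real facility needs a far more careful model, but the relative effect of headroom is visible:

```python
from math import comb

def availability(n_needed: int, n_total: int, unit_up: float) -> float:
    """Probability that at least n_needed of n_total independent units are up
    (simple binomial model; unit failures assumed independent)."""
    return sum(
        comb(n_total, k) * unit_up**k * (1 - unit_up)**(n_total - k)
        for k in range(n_needed, n_total + 1)
    )

# Workload needs 4 units, each hypothetically up 99% of the time:
print(availability(4, 4, 0.99))  # N     (no headroom)
print(availability(4, 5, 0.99))  # N + 1
print(availability(4, 8, 0.99))  # N x 2
```

Each step of headroom adds another "nine" or so of availability under these assumptions, which is why the answer to "N + 1 or N × 2?" directly shapes both the price and the continuity guarantee.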

Another common concern with Disaster Recovery and Business Continuity in the cloud is whether all of the policies, procedures and controls are maintained in the cloud environment when a disaster is declared.  Most organizations maintain strict compliance with policies or regulations that could be violated if not maintained in the cloud environment.  Here are the common questions regarding policies and procedures:

  1. What processes are in place to be sure my data is synchronized?
  2. What processes are in place to ensure changes are implemented consistently in all cloud node environments?
  3. Are the environments run in an active/active, active/passive, or active/off-line configuration?
  4. How often does the managed cloud service provider support DR testing?
  5. Are all security measures mirrored in the redundant location, even when inactive?
    1. Auditing
    2. Logging
    3. Authentication and Authorization
    4. Encryption
    5. Security Event Correlation
  6. What options are there to maintain development, quality assurance, and Disaster Recovery environments with version control?
  7. What processes and services are available to ensure a smooth recovery to the primary location after the disaster is over, if necessary?
  8. What is the sustainability of the DR environment?  Is the DR environment architected to provide degraded or minimal performance?
  9. Are the same compliance controls provided in all cloud node environments (e.g., SAS 70 in every data center)?
  10. What processes are in place to maintain backups during a disaster declaration, and to synchronize backups and restore the backup processes to normal after services return to the primary location?

Disaster Recovery and Business Continuity Planning can be extremely difficult to manage and maintain.  However, the right managed cloud service provider can ensure that your environment is fully protected, your systems remain available and accessible, and you recover seamlessly when disaster strikes.

Allen Allison, Chief Security Officer at NaviSite

During his 20+ year career in the information security industry, Allen Allison has served in management and technical roles, including developing NaviSite’s industry-leading cloud computing platform, serving as chief engineer and developer for a market-leading managed security operations center, and acting as lead auditor and assessor for information security programs in the healthcare, government, e-commerce, and financial industries. With experience in systems programming, network infrastructure design and deployment, and information security, Allison has earned the highest industry certifications, including CCIE, CCSP, CISSP, MCSE, CCSE, and INFOSEC Professional. A graduate of the University of California, Irvine, Allison has lectured at colleges and universities on the subject of information security and regulatory compliance.

Top Six Security Questions Every CIO Should Ask a Cloud Vendor

By Ian Huynh, Vice President of Engineering, Hubspan

Cloud computing has become an integrated part of IT strategy for companies in every sector of our economy.  By 2012, IDC predicts that IT spending on cloud services will grow almost threefold to $42 billion. So it’s no surprise that decision makers no longer wonder “if” they can benefit from cloud computing. Instead, the question being asked now is “how” best to leverage the cloud while keeping data and systems secure.

With such an astounding amount of cloud computing growth expected in the next few years, it’s important for all executives, not just IT professionals, to understand the opportunities and precautions when considering a cloud solution. Security questions can range from whether information transferred between systems in the cloud is safe, to what type of data is best stored in the cloud, to how to control who accesses your data.

It’s important to arm executives with actionable advice when considering a cloud computing service provider.  Below is a list of the top six questions every CIO should consider when evaluating how secure a cloud solution is:

  1. How does your vendor plan on securing your data?

You need to understand how your provider’s physical security, personnel, access controls and architecture work together to build a secure environment for your company, your data and your external partners or customers that also might be using the solution.

Application Access Control

For application access control, think front-end as well as back-end. While there may be rigorous user access management rules when the application is accessed via the application interface (i.e. front-end), what about system maintenance activities and related accesses that are routinely performed by your cloud vendor, on the back end, to ensure optimal application and system performance?  Does your cloud vendor also apply the same rigorous access control, if not more?

Physical Access Control

Most people are familiar with application access control and user entitlements, but physical access control is just as important. In fact, many people forget that behind every cloud platform is a physical data center, and while it’s easy to assume vendors will have robust access controls around their data center, this isn’t always the case. Vendors should limit physical access to not only the overall data center facility but also to key areas like backup storage, servers and other critical network systems.

Personnel Access Control

Personnel considerations are another aspect of network security closely related to physical access control. Who does your vendor let access your data and how are they trained? Do they approach operations with a security-centric mindset? The security of any platform depends on the people that run it. This means that HR practices can have a huge impact on your vendor’s security operations. Smart vendors will institute background checks and special security training for their employees to defend against social engineering and phishing attacks.


Data Segregation

Your cloud vendor’s solution needs to keep your data separate from that of other cloud tenants that use the same platform. This should be a primary concern even when your data resides in “virtual private clouds,” where there is an expectation of stronger segregation controls.  Because your data is stored in the same storage space as your neighboring tenants’, you need to know how your cloud vendor will ensure that your data isn’t improperly accessed.

Also, the overall level of security for cloud applications needs to be addressed. Depending on your vendor’s architecture, there may be customers with differing security needs operating within the same multi-tenant environment. In these cases, the entire system needs to operate at the highest level of security to avoid the “weakest link syndrome.” Incidentally, this highlights one of the benefits of cloud computing – you can have the benefits of world-class security without the cost of building and maintaining such an infrastructure.

  2. Do they secure the transactional data as well as the data at rest?

Most vendors claim strong data encryption, but do they truly provide end-to-end encryption, with security in place while the data is at rest or in storage? Also, cloud security should go beyond data encryption to include encryption key management, which is a vital part of any cloud security scheme and should not be overlooked.

Data Encryption

Most data centers don’t encrypt their data at rest, encrypt their backups or audit their data encryption process – but they should. A truly secure system would take these considerations into account. Data in backups will likely stick around much longer than the information that is currently on your servers. A mandate that provides strong guidance for data encryption is the Federal Information Processing Standards (FIPS) 140 security standard, which specifies the requirements for cryptographic modules.  Ask your vendor if they adhere to FIPS guidelines.

Key Security

How are encryption keys stored and secured? You can encrypt all of your data, but the encryption keys are the proverbial “keys to the kingdom.”  Best practices call for splitting the knowledge of each key between two or more individuals – hence, to reconstruct an entire key, you need all of those individuals present for authorization.

Furthermore, where business practice requires that at least one person in the company has knowledge of the entire key (e.g. the CEO or CSO), then procedures and processes should be in place to ensure that those individuals with the knowledge cannot access the data (e.g. they may have the key but cannot get access to the lock to open it – hence, there’s still a degree of separation).
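The split-knowledge practice described above can be illustrated with a minimal two-share XOR split (a 2-of-2 scheme): each custodian’s share alone is statistically random, and the key exists only when both are combined. This is a sketch of the idea, not a production key-management design; real deployments use HSMs and schemes such as Shamir’s secret sharing for k-of-n splits.

```python
import secrets

def split_key(key: bytes) -> tuple[bytes, bytes]:
    """2-of-2 split: share1 is a random pad, share2 = key XOR share1.
    Either share alone reveals nothing about the key."""
    share1 = secrets.token_bytes(len(key))
    share2 = bytes(a ^ b for a, b in zip(key, share1))
    return share1, share2

def reconstruct_key(share1: bytes, share2: bytes) -> bytes:
    """Both custodians must present their shares to recover the key."""
    return bytes(a ^ b for a, b in zip(share1, share2))

master_key = secrets.token_bytes(32)   # hypothetical 256-bit data key
s1, s2 = split_key(master_key)
assert reconstruct_key(s1, s2) == master_key
```

The degree-of-separation point in the text maps directly onto this: even the custodian who could reassemble the key should be barred, by process, from reaching the encrypted data store itself.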

  3. Does the vendor follow secure development principles?

A truly secure cloud platform is built for security through and through. That means security starts from “ground zero” – the design phase of the application as well as the platform. It simply isn’t enough to operate your system with a security-centric mindset; you have to design your system using the same guiding principles, following an unbroken chain of secure procedures from conception in the lab to real-life implementation. This means that design reviews, development practices and quality assurance plans must be engineered using the same strict security guidelines you would use in a production data center.

  4. What are the vendor’s security certifications, audits and compliance mandates?

There are many regulations in the market, but the two most important ones covering cloud security and data protection are PCI DSS and SAS 70 Type II mandates.

Consider vendors that follow the industry standard PCI DSS guidelines, developed and governed by the Payment Card Industry Security Standards Council. It is a set of requirements for enhancing payment account data security. While created for the credit card and banking industries, it is relevant for any sector, as the goal is keeping data safe and personally identifiable information protected. 

Another major control mechanism is the Statement on Auditing Standards No. 70 (SAS 70) Type II. SAS 70 compliance means a service provider has been through an in-depth audit of their control objectives and activities. 

In addition to these certifications, there are a couple of other organizations the vendor should acknowledge and use as guidance in prioritizing data security issues: the Open Web Application Security Project (OWASP), which maintains a top-ten list outlining the most dangerous current Web application security flaws along with effective methods of dealing with them, and the Cloud Security Alliance (CSA), an industry group that advises on best practices for data security in the cloud.

In addition to third-party compliance, the cloud vendor should be conducting its own annual security audits. Your vendor should have scheduled audits, including penetration tests performed by an independent third-party audit provider, to evaluate the quality of the security your cloud vendor provides. Although the PCI DSS version 1.2 specifications only mandate annual security audits, find a vendor that goes above and beyond; there are vendors that perform quarterly audits, four times the industry norm.

  5. How does your vendor detect a compromise or intrusion?

Attempts by hackers to breach data security measures are becoming the norm in today’s high-tech computing environment. Whether you maintain your infrastructure and data on premise or in the cloud, the issues of securing your data are the same.

Your cloud vendor should include strong mechanisms for both intrusion prevention (keeping your data safe from attack or breach) and intrusion detection (monitoring what is happening to your data and knowing if and when an intrusion occurs). The vendor should be able to monitor, measure and react to any potential breach – in particular, to monitor access to its systems and detect any unauthorized changes to systems, policies or configuration files.
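One concrete way to detect unauthorized changes to systems, policies or configuration files is file-integrity monitoring: hash a known-good baseline, then re-hash on a schedule and flag differences. A minimal sketch (the file name and contents are hypothetical):

```python
import hashlib
import tempfile
from pathlib import Path

def fingerprint(path: Path) -> str:
    """SHA-256 digest of a file's contents."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def take_baseline(paths: list[Path]) -> dict[str, str]:
    """Record known-good hashes for the watched files."""
    return {str(p): fingerprint(p) for p in paths}

def changed_files(baseline: dict[str, str]) -> list[str]:
    """Return files whose current contents no longer match the baseline."""
    return [p for p, digest in baseline.items() if fingerprint(Path(p)) != digest]

# Demo against a throwaway "policy file":
with tempfile.TemporaryDirectory() as d:
    policy = Path(d) / "access-policy.conf"
    policy.write_text("admin_access = restricted\n")
    baseline = take_baseline([policy])
    policy.write_text("admin_access = open\n")   # simulated unauthorized edit
    print(changed_files(baseline))               # the tampered file is flagged
```

Production tools layer on tamper-proof baseline storage, scheduling, and alerting, but the core comparison is exactly this.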

Also, what does your vendor do when things go wrong, and is that communicated to you? A good Service Level Agreement (SLA) has an intrusion-notification clause built in. A great SLA provides some transparency into the vendor’s operations in the areas of audits and compliance, and shows how those processes compare to your own requirements.

  6. What are their disaster recovery plans, and how does data security figure into those plans?

Your vendor’s security story needs to include their business continuity plan. First of all, they need to have a failover system or back-up data center. They should also be able to convincingly demonstrate to you that they can execute their backup plan. Many of the biggest cloud computing outages in recent memory were the result of a failure of disaster recovery processes.

Second, this backup data center must have all of the same security processes and procedures applied to it as the primary one. It’s no good to have a second system in place if you cannot operate securely in that environment.

Finally, if there is some sort of impending disaster, they need to notify you in advance. Keep in mind that you may not always know where your data is physically located, so the onus of reporting is on your provider.

Your vendor’s plan for securing your data should be like a well-choreographed dance with a strong beginning, middle and end. Their system needs to be protected at the network and application layers, starting with the development process.  Access control policies should span the entire operation.  The vendor needs to have a coherent plan that protects data at all times, whether in motion or at rest. They need to include robust compliance, auditing and reporting processes to ensure the integrity of the overall security scheme. And your vendor should have robust disaster recovery procedures in place, and be able to show you that they are capable of executing them.

While cloud computing brings many benefits, all clouds are not created equal.  Make sure your vendor provides the security you need to confidently move your data to the cloud.

Ian Huynh, Vice President of Engineering, Hubspan

Ian Huynh has over 20 years’ experience in the software and services markets, with particular expertise in cloud computing, security and application architecture. Ian has been featured in publications such as Network World and CS Techcast, a technology network for IT pros. Prior to joining Hubspan, Ian served as Software Architect at Concur Technologies, and has held technical leadership positions at 7Software and Microsoft Corp.

Extend the Enterprise into the Cloud with Single Sign-On to Cloud-Based services

by Mark O’Neill, CTO, Vordel

In this blog post we examine how Single Sign-On from the enterprise to Cloud-based services is enabled. Single Sign-On is a critical component for any organization wishing to leverage Cloud services. In fact, an organization accessing Cloud-based services without Single Sign-On risks increased exposure to security threats and higher IT Help Desk costs, as well as the danger of “dangling” accounts belonging to former employees, which are open to rogue usage.

Let’s take a look at Google Apps and the concept of Single Sign-On. Organizations are increasingly using Cloud services such as Google Apps for email and document sharing. Google Apps, and especially Gmail, is a popular option for organizations making their first foray into leveraging Cloud-based services. While the cost advantages of this model are compelling, organizations do not want to create a whole new set of accounts for their employees in the Cloud, or force their employees to remember a new password.

The solution to this problem is to allow users to continue to use their own local accounts, logging into their computers as normal, but then seamlessly being logged into the Cloud services. In this way, the user experiences a continuous link from the corporate systems, such as their Windows login, into the Cloud services, such as email. This is known as Single Sign-On, and is enabled by technologies such as Security Assertion Markup Language (SAML). This allows operations staff to manage their organization’s usage of the external Cloud services as if they were a part of their internal network, even without the same degree of physical control. As a result, the usual problems of password synchronization, user provisioning (adding users) and de-provisioning (removing users), and auditing are minimized.
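To make the SAML mechanism less abstract, the sketch below builds the skeleton of a SAML 2.0 assertion: the identity provider asserts “this user is authenticated” to the Cloud service. This is an illustrative, unsigned skeleton only; a real deployment must XML-sign the assertion using a proper SAML toolkit, and all names and URLs here are invented.

```python
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

SAML = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_assertion(user: str, issuer: str, audience: str) -> str:
    """Unsigned SAML 2.0 assertion skeleton: who is asserting (Issuer),
    about whom (Subject/NameID), to whom (Audience), and for how long."""
    now = datetime.now(timezone.utc)
    a = ET.Element(f"{{{SAML}}}Assertion",
                   {"Version": "2.0", "IssueInstant": now.isoformat()})
    ET.SubElement(a, f"{{{SAML}}}Issuer").text = issuer
    subject = ET.SubElement(a, f"{{{SAML}}}Subject")
    ET.SubElement(subject, f"{{{SAML}}}NameID").text = user
    conditions = ET.SubElement(
        a, f"{{{SAML}}}Conditions",
        {"NotOnOrAfter": (now + timedelta(minutes=5)).isoformat()})
    restriction = ET.SubElement(conditions, f"{{{SAML}}}AudienceRestriction")
    ET.SubElement(restriction, f"{{{SAML}}}Audience").text = audience
    return ET.tostring(a, encoding="unicode")

# Hypothetical identity provider asserting a user to a Cloud email service:
print(build_assertion("alice@example.com",
                      "https://idp.example.com",
                      "https://mail.cloud.example"))
```

The signature over this XML is what the Cloud provider trusts, which is why the signing key (discussed next) is so sensitive.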

When an organization wants to use Gmail for its employees, it usually gets a key from Google to enable single sign-on. This application programming interface (API) key is valid only for that organization and enables its employees to sign in. As such, it is vitally important that this key is protected. If an unauthorized person gets the key, they can log in and impersonate the email account owners, share Google documents, and generally have unlimited access to users’ email and documents.

A good solution to this problem is to provide Single Sign-On between on-premises systems and the Cloud. However, the key security requirement of Single Sign-On is protection of API keys. In effect, these API keys are the keys to the kingdom for Cloud Single Sign-On. I will discuss the topic of protecting API keys in a future blog, but want to underscore the importance of their security. After all, if an organization wishes to enable single sign-on to its Google Apps (so that users can access their email without having to log in a second time), then this access is via API keys. If these keys were stolen, an attacker would have access to the email of every person in that organization, by using the key to create a signed SAML assertion and sending it to Google. Clearly that must be avoided.

Single Sign-On Options:
There are two broad paths for any organization interested in implementing Single Sign-On today. One option is for an organization’s developer staff to create Single Sign-On via the sample code offered by all Cloud Service providers for the purpose of connecting to Cloud Services. This approach appeals to developers who want to create and code the connections into existing applications. The programming approach means it is the developer who is doing the work by writing code and making the connections to an organization’s applications.

A second approach is to take an off-the-shelf product, such as a Cloud Service Broker, and use this technology to configure the managed or “brokered” connection up to the Cloud service. If an organization decides to leverage an off-the-shelf product, the work is usually done through configuration and does not involve developers writing code. This is because the Cloud Service Broker sits outside the application, acting as a piece of network infrastructure that brokers the connection. As a result, the management of this process falls to those managing the network infrastructure. Additionally, the Cloud Service Broker brokers the connection to the Cloud without getting mired in the intricacies of each product’s particular APIs.

Implementation of Single Sign-On:

The implementation of Single Sign-On for a large enterprise is challenging. Typically it is a long, involved project that requires stitching together applications that were not originally intended to work together, using products with proprietary approaches and proprietary (read: not SAML or OAuth) tokens. This approach is labor intensive and time consuming. In the consumer world, the rise of more agile technologies like REST and the Web services stack has enabled more efficient adoption of Single Sign-On. Additionally, the growth of Cloud-based services like Google Apps means we are seeing more lightweight Web technologies. These more straightforward Web technologies mean organizations, especially SMEs, can leverage off-the-shelf technologies such as a Cloud Service Broker to broker users’ identity up to the Cloud provider and secure the API keys via Single Sign-On.

How are API keys protected?

API keys must be protected just as passwords and private keys are protected. This means they should not be stored as files on the file system, or baked into non-obfuscated applications that can be analyzed relatively easily. In the case of a Cloud Service Broker, API keys are stored encrypted, and when a Hardware Security Module (HSM) is used, there is the option of storing the API keys in hardware, since a number of HSM vendors now support the storage of material other than just RSA/DSA keys. The secure storage of API keys means that operations staff can apply a policy to their key usage. It also means that regulatory criteria related to privacy and protection of critical communications (for later legal “discovery,” if mandated) are met.

Other Benefits of Single Sign-On

In addition to protecting API keys, it is worth noting the cost and productivity benefits Single Sign-On offers an organization. Consider the fact that users with multiple passwords are also a potential security threat and a drain on IT Help Desk resources. The risks and costs associated with multiple passwords are particularly relevant for any large organization making its first steps into Cloud Computing and leveraging Software-as-a-Service (SaaS) applications. For example, if an organization has 10,000 employees, it is very costly to have the IT department assign new passwords for Cloud Service access for each individual user, and indeed reassign new passwords whenever a user forgets their original access details.

By leveraging Single Sign-On capabilities, an organization can enable a user to access both the desktop and any Cloud Services via a single password. In addition to preventing security issues, there are significant cost savings to this approach. For example, Single Sign-On users are less likely to lose passwords, reducing the assistance required from IT help desks. Single Sign-On is also helpful for the provisioning and de-provisioning of passwords. If a user joins or leaves the organization, there is only a single password to activate or deactivate, versus having multiple passwords to deal with.


Although Single Sign-On is not a new concept, it is finding new application for connecting organizations to Cloud service providers such as Google Apps. It is a powerful concept, enabling users to experience seamless connections from their computers up to their email, calendars, and shared documents. Standards such as SAML are enabling this trend. A Cloud Service Broker is an important enabling component for this trend, enabling the connection while protecting the all-important API keys.

Mark O’Neill – Chief Technology Officer – Vordel
As CTO at Vordel, he oversees the company’s technical strategy for the delivery of high-performance Cloud Computing and SOA management solutions to Fortune 500 companies and governments worldwide. Mark is author of the book “Web Services Security” and a contributor to “Hardening Network Security”, both published by Osborne-McGraw-Hill. Mark is also a representative of the Cloud Security Alliance, where he is a member of the Identity Management advisory panel.

Moving to a “Show Me” State – Gaining Control and Visibility in Cloud Services

In survey after survey, security – and more specifically the lack of control and visibility over what is happening to your information on the cloud provider’s premises – is listed as the number one barrier to cloud adoption.

So far, there have been two approaches to solving the problem:

1 – The “Trust Me” approach: The enterprise relies on the cloud provider to apply best practices to secure its data, and the only tool available to get visibility into what is happening on the cloud provider’s premises is Google Earth.

2 – The “Show Me” approach: The cloud provider gets bombarded with hundreds of questions and demands for site visits that vary from one customer to another. In most cases, these questionnaires are not tailored to cloud computing; they are based on the existing control frameworks and best practices used to manage internal IT environments and external vendors.

Neither approach has been satisfactory thus far.

The “Trust Me” approach creates frustration for enterprises moving to the cloud: they cannot meet their compliance processes, which often demand providing detailed evidence of compliance and answers to very specific questions.

The “Show Me” approach creates a tremendous burden for the cloud provider and a very long process for end-customers before any cloud-based service can be deployed. It completely defeats the cloud agility promise.

Auditors’ insatiable demand for evidence of compliance is pushing the industry towards standardizing a “Show Me” approach.

The Cloud Security Alliance Governance, Risk Management and Compliance (GRC) Stack for assessing the security of cloud environments is a great step in that direction. It defines an industry-accepted approach to documenting the security controls implemented in cloud offerings:

– CloudAudit provides the technical foundation to enable transparency and trust between cloud computing providers and their customers

– Cloud Controls Matrix provides fundamental security principles to guide cloud vendors and to assist prospective cloud customers in assessing the overall security risk of a cloud provider.

– Consensus Assessments Initiative Questionnaire provides industry-accepted ways to document what security controls exist in a cloud provider’s offering.

The Cloud Security Alliance’s high profile, with members representing the leading cloud providers, technology vendors, and enterprise consumers of cloud services, provides the necessary weight and credibility such an initiative needs to be successful.

It offers cloud providers and end-customers alike a consistent and common approach to establish more transparency in cloud services. Enterprise GRC solutions such as RSA Archer have integrated the CSA GRC controls into the core platform so that customers can use the same GRC platform to assess cloud service providers as the one they already use to manage risk and compliance across the enterprise.
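Because the Consensus Assessments questionnaire yields structured per-control answers, comparing providers becomes simple tabulation. The sketch below is purely hypothetical: the control IDs and answers are invented, not actual CAIQ content.

```python
# Hypothetical CAIQ-style answers: control ID -> implemented? (True/False)
providers = {
    "ProviderA": {"DG-01": True, "IS-03": True, "SA-02": False},
    "ProviderB": {"DG-01": True, "IS-03": False, "SA-02": False},
}

def coverage(answers: dict[str, bool]) -> float:
    """Fraction of assessed controls the provider reports as implemented."""
    return sum(answers.values()) / len(answers)

def gaps(answers: dict[str, bool]) -> list[str]:
    """Controls to raise in the 'Show Me' conversation with the provider."""
    return sorted(c for c, ok in answers.items() if not ok)

for name, answers in providers.items():
    print(f"{name}: {coverage(answers):.0%} coverage, gaps: {gaps(answers)}")
```

The value of the standardized questionnaire is exactly this comparability: every provider answers the same controls once, instead of answering hundreds of bespoke spreadsheets.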

This is a great step forward towards solving the “Verify” part of the “Trust and Verify” equation that needs to be addressed to help drive cloud adoption.

What do readers think of this new approach by the Cloud Security Alliance? Is it a step in the right direction or should it go further?

Eric Baize is Senior Director in RSA’s Office of Strategy and Technology, with responsibility for developing RSA’s strategy and roadmaps for cloud and virtualization. Mr. Baize also leads the EMC Product Security Office, with company-wide responsibility for securing EMC and RSA products.

Previously, Mr. Baize pioneered EMC’s push towards security. He was a founding member of the leadership team that defined EMC’s vision of information-centric security, and which drove the acquisition of RSA Security and Network Intelligence in 2006.

Mr. Baize is a Certified Information Security Manager, the holder of a U.S. patent, and an author of international security standards. He represents EMC on the Board of Directors of SAFECode.

Building a Secure Future in the Cloud

By Mark Bregman

Executive Vice President and Chief Technology Officer, Symantec

Cloud computing offers clear and powerful benefits to IT organizations of all sizes, but the path to cloud computing – please excuse the pun – is often cloudy.

With cloud computing, IT resources can scale almost immediately in response to business needs and can be delivered under a predictable (and budget-friendly) pay-as-you-go model. An InformationWeek survey[1] in June 2010 found that 58 percent of companies have either already moved to a private cloud or plan to do so soon. Many others, meanwhile, are considering whether to shift some or all of their IT infrastructure to public clouds.

One of the biggest challenges in moving to a public cloud is security – organizations must be confident their data is protected, whether at rest or in motion.  This is new territory for our industry.  We don’t yet have a standard method or a broadly accepted blueprint for IT leaders to follow.  In my role as CTO of Symantec, I have invested a lot of time examining this challenge, both through internal research and in numerous discussions with our customers around the world.  And while we are not yet at the point of writing the book on this subject, I can tell you that there are a number of common themes that arise in nearly every conversation I have on it.  From this information, I’ve developed a checklist of five critical business considerations that decision makers should examine as they think about moving their infrastructure – and their data – to the cloud.

1. Cost-benefit analysis. The business case for cloud computing requires a clear understanding of costs as compared to an organization’s in-house solution. The key measure is that cloud must reduce capital and operational expenses without sacrificing user functionality, such as availability. The best delivery model for cloud functionality is a hardware-agnostic approach that embraces the commodity architectures in use by the world’s leading Internet and SaaS providers. This can be achieved through low-cost commodity servers and disks coupled with intelligent management software, providing true cloud-based economies of scale and efficiency.
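The capital-versus-operational comparison in point 1 often reduces to a simple break-even model: amortized in-house cost versus metered cloud cost. The sketch below uses invented figures purely for illustration, not actual pricing.

```python
def on_prem_monthly(capex: float, amortize_months: int, opex_monthly: float) -> float:
    """Amortized monthly cost of owning the infrastructure."""
    return capex / amortize_months + opex_monthly

def cloud_monthly(price_per_hour: float, hours_per_month: float) -> float:
    """Pay-as-you-go: cost scales with actual consumption."""
    return price_per_hour * hours_per_month

# Hypothetical: $120,000 of hardware amortized over 36 months plus $2,000/month
# of power, space, and administration, versus 12,000 metered instance-hours per
# month at $0.50/hour.
in_house = on_prem_monthly(120_000, 36, 2_000)
cloud = cloud_monthly(0.50, 12_000)
print(round(in_house, 2), cloud)   # compare the two monthly run rates
```

At these made-up numbers the in-house option wins at steady state, but halving utilization flips the answer; the cloud case depends heavily on how bursty the workload is, which is exactly why the cost-benefit analysis must precede the migration decision.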

2. Robust security. When you move to the cloud, you’re entrusting the organization’s intellectual property to a third party. Do their security standards meet the needs of your business? Even the smallest entry point can create an opening for unauthorized access and theft. Authentication and access controls are even more critical in a public cloud where cluster attacks aimed at a hypervisor can compromise multiple customers. Ideally, the cloud provider should offer a broad set of security solutions enabling an information-centric approach to securing critical interfaces – between services and end users, private and public services, as well as virtual and physical cloud infrastructures.

3. Data availability. As the cloud places new demands on storage infrastructure, data availability, integrity, and confidentiality must be guaranteed. Often, these provisions come from vendors who offer massive scalability and elasticity in their clouds. To make this approach manageable for customers, cloud vendors must offer tools that provide visibility and control across heterogeneous storage platforms. The final test for cloud storage is interoperability with virtual infrastructures. This allows service providers to standardize on a single approach to data protection, de-duplication, assured availability, and disaster recovery across physical and virtual cloud server environments, including VMware, Microsoft Hyper-V and a variety of UNIX virtualization platforms.
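De-duplication, one of the storage capabilities mentioned above, can be illustrated with content addressing: identical chunks are stored once and referenced thereafter. This is a minimal sketch of the idea, not any vendor’s implementation:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self):
        self.chunks = {}  # SHA-256 digest -> chunk bytes

    def put(self, data: bytes) -> str:
        """Store a chunk and return its content address (digest)."""
        digest = hashlib.sha256(data).hexdigest()
        self.chunks.setdefault(digest, data)  # store only if not seen before
        return digest

store = DedupStore()
first = store.put(b"virtual machine image block")
second = store.put(b"virtual machine image block")  # duplicate write
assert first == second          # same content, same address
assert len(store.chunks) == 1   # stored once despite two writes
```

At cloud scale the same principle, applied across many tenants’ backups and VM images, is what makes the economics of elastic storage work.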

4. Regulatory compliance. Cloud computing brings a host of new governance considerations. Organizations must evaluate the ability of the cloud provider to address the company’s own regulations, national and worldwide rules for conducting business in different regions, and customer needs. For example, many healthcare customers will require HIPAA (and, if publicly traded, SOX) compliance, while financial customers must comply with Gramm-Leach-Bliley and the Red Flags Rule.

5. Check the fine print. Don’t forget to thoroughly evaluate your organization’s SLA requirements and ensure the cloud provider can – and is contractually obligated to – deliver on these provisions. The most common SLAs relate to disaster recovery services. Make sure a contingency plan is in place to cover outages. In the event of a disaster, can the facility hosting your data quickly fail over to another data center? On a related note, an SLA best practice is to perform data classification for everything – including customer data – being considered for cloud migration. Know where your vendor’s cloud assets are physically located, because customer SLAs – such as those with federal agencies – may require highly confidential data to stay on-shore. Non-sensitive information can reside in offshore facilities.

These five critical business considerations serve as a checklist for building trust into the cloud. This trust is crucial as the consumerization of IT continues to redefine the goals and requirements of IT organizations. Consider that, by the end of 2011, one billion smartphones will connect to the Internet, compared with 1.3 billion PCs.[2] The acceptance of mobile devices into the enterprise environment creates more demand for SaaS and remote access. As a result, businesses large and small will increasingly turn to the cloud to keep pace with demand.

# # #

[1] InformationWeek, “These Private Cloud Stats Surprised Me”:

[2] Mocana Study: “2010 Mobile & Smart Device Security Survey”

Moving to the Cloud? Take Your Application Security With You

By Bill Pennington, Chief Strategy Officer, WhiteHat Security

Cloud computing is becoming a fundamental part of information technology. Nearly every enterprise is evaluating or deploying cloud solutions. Even as business managers turn to the cloud to reduce costs, streamline staff, and increase efficiencies, they remain wary about the security of their applications.  Many companies express concern about turning over responsibility for their application security to an unknown entity, and rightly so.

Who is responsible for application security in the new world of cloud computing? Increasingly, we see third-party application providers, who are not necessarily security vendors, being asked to verify the thoroughness and effectiveness of their security strategies. Nevertheless, the enterprise ultimately still bears most of the responsibility for assessing application security regardless of where the application resides. Cloud computing or not, application security is a critical component of any operational IT strategy.

Businesses run on the Internet, and as cloud computing expands, a host of new data is exposed publicly. History and experience tell us that well over 80% of all websites have at least one serious software flaw, or vulnerability, that exposes an organization to loss of sensitive corporate or customer data, significant brand and reputation damage, and in some cases, huge financial repercussions.

Recent incidents involving popular websites like YouTube, Twitter and iTunes; hosting vendors like Go Daddy; and the Apple iPad have exposed millions of records, often by taking advantage of garden-variety cross-site scripting (XSS) vulnerabilities. The 2009 Heartland Payment Systems breach was accomplished via a SQL injection vulnerability. Thus far, the financial cost to Heartland is $60 million and counting; the soft costs are more difficult to determine.
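SQL injection of the kind that hit Heartland is typically prevented with parameterized queries, which keep attacker-supplied input out of the SQL grammar. A minimal sketch using Python’s built-in sqlite3 module (the table and column names are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cards (holder TEXT, number TEXT)")
conn.execute("INSERT INTO cards VALUES ('alice', '4111-xxxx')")

# UNSAFE pattern: string concatenation lets input rewrite the query.
#   conn.execute("SELECT number FROM cards WHERE holder = '" + user_input + "'")
# With user_input = "' OR '1'='1", that query would return every row.

# SAFE pattern: the ? placeholder passes input as data, never as SQL.
user_input = "' OR '1'='1"
rows = conn.execute(
    "SELECT number FROM cards WHERE holder = ?", (user_input,)
).fetchall()
assert rows == []  # the injection string matches no real holder

rows = conn.execute(
    "SELECT number FROM cards WHERE holder = ?", ("alice",)
).fetchall()
assert rows == [("4111-xxxx",)]
```

The same placeholder discipline applies to any database driver; the point is that the query text never changes shape based on user input.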

Across the board, organizations will have the opportunity to prioritize security on the most exposed, and often most seriously underfunded, part of the business: Web applications. The following issues must be understood in order to align business goals and security needs as the enterprise transitions to cloud computing.

1. Web Applications are the Primary Attack Target – Securing Applications Must be a Priority

Most experts agree that websites are the target of choice. Why? With more than 200 million websites in production today, it follows that attackers would make them their target. No matter the skill level, there is something for everyone on the Web, from random, opportunistic attackers to focused criminals after data from a specific organization. In one of the most recognized cases, an attacker used SQL injection to steal credit and debit card numbers that were then used to steal more than $1 million from ATMs worldwide.

And yet, most organizations believe that application security is underfunded, with only 18 percent of IT security budgets allocated to address the threat posed by insecure Web applications, while 43 percent of IT security budgets go to network and host security.

While businesses are fighting yesterday’s wars, hackers have already moved on. Even more puzzling, application security is not a strategic corporate initiative at more than 70 percent of organizations. And these same security practitioners do not believe they have sufficient resources specifically budgeted for Web application security to address the risk. As more applications move to the cloud, this imbalance must change. IT and the business must work together to prioritize the most critical security risks.

2. The Network Layer has Become Abstracted

Prior to cloud computing, organizations felt a certain confidence level about the security of applications that resided behind the firewall. To a certain extent, they were justified in their beliefs. Now, we see the network layer, which had been made nearly impenetrable over the past 10 years, becoming abstracted by the advent of cloud computing. Where once there was confidence, there is now confusion among security teams as to where to focus their resources. The short answer is: Keep your eye on the application because it is the only thing you can control.

With cloud computing, the customer is left vulnerable in many ways. First, the security team has lost visibility into the network security infrastructure. If the cloud provider makes a change to its infrastructure, it naturally changes the risk profile of the customer’s application. However, the customer is most likely not informed of these changes and is therefore unaware of their ultimate impact. It is the customer’s responsibility to demand periodic security reports from its cloud vendor and to thoroughly understand how its valuable data is being protected.

3. Security Team Loses Visibility with Cloud Computing: No IPS/IDS

One of the main concerns of security professionals anticipating an organizational switch to cloud computing is loss of visibility into attacks in progress, particularly with software-as-a-service (SaaS) offerings. With enterprise applications hosted by the cloud service provider, the alarm bells that the security team could rely on to alert them of attack, typically Intrusion Prevention or Intrusion Detection Systems, are now in the hands of the vendor. For some, this loss of visibility can translate into loss of control. In order to retain a measure of control, it is critical to understand the security measures that are in place at your cloud vendor and also to require that vendor to provide periodic security updates.

4. A Change in Infrastructure is a Great Time to Make Policy Changes/New Security Controls

Any time there is a change from one infrastructure to another, businesses have an impetus to review their security policies and procedures. In fact, a move to cloud computing can be an excellent opportunity to institute new security policies and controls across the board. A credible case can be made to review budgets and allocate more funds to application security. Where previously application security was a second-tier spending priority, it now rises to the top when SaaS comes into play.

This is a great time to pull business, security and development teams together to develop a strategy.

5. Cloud Security Brings App Security More in Line with Business Goals – Decision Making Based on Business Value and Appropriate Risk

For many organizations, application security is an afterthought. The corporate focus is on revenue, and often that means frequently pushing new code. Even with rigid development and QA processes, there will be differences between QA websites and actual production applications. This was not as critical when the applications resided behind the firewall, but now managers must take into account the value of the data stored in an application residing in the cloud.

Ideally, the security team and the business managers would inventory their cloud (and existing Web) application deployments. Once an accurate asset inventory is obtained, the business managers should evaluate every application and prioritize the security measures based on business value and create a risk profile. Once these measurements have occurred, an accurate application vulnerability assessment should be performed on all applications. Only then can the team assign value and implement an appropriate solution for the risk level. For example, a brochureware website will not need the same level of security as an e-commerce application.
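The inventory-then-prioritize process above can be sketched as a simple scoring exercise: rank each application by business value and the severity of its worst known vulnerability, and work the queue from the top. The application names, fields, and weighting below are assumptions for illustration only:

```python
# Hypothetical application inventory: business value and worst vulnerability
# severity on a 1-5 scale. The scoring rule is an illustrative assumption.
inventory = [
    {"app": "e-commerce checkout", "business_value": 5, "worst_vuln": 5},
    {"app": "brochureware site",   "business_value": 1, "worst_vuln": 3},
    {"app": "partner portal",      "business_value": 4, "worst_vuln": 4},
]

def risk_score(app: dict) -> int:
    # Simple product: high-value apps with severe flaws bubble to the top.
    return app["business_value"] * app["worst_vuln"]

remediation_queue = sorted(inventory, key=risk_score, reverse=True)
for app in remediation_queue:
    print(app["app"], risk_score(app))
```

Even a crude model like this captures the article’s point: the brochureware site lands at the bottom of the queue, while the revenue-bearing application with the worst flaw is addressed first.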

Once an organization has accurate and actionable vulnerability data about all its websites, it can then create a mitigation plan. Having the correct foundation in place simplifies everything. Urgent issues can be “virtually patched” with a Web application firewall; less serious issues can be sent to the development queue for later remediation. Instead of facing the undesirable choice between shutting a website down or leaving it exposed, organizations armed with the right data can be proactive about security, reduce risk and maintain business continuity.
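“Virtual patching” with a Web application firewall amounts to blocking requests that match a known exploit signature until the vulnerable code itself is fixed. A toy illustration in Python – the rule and the filtering function are hypothetical, not a real WAF’s syntax:

```python
import re

# Hypothetical virtual-patch rule: block any request parameter carrying a
# classic SQL-injection signature, until developers remediate the query.
VIRTUAL_PATCHES = [
    re.compile(r"('|\")\s*or\s+('|\")?1('|\")?\s*=\s*('|\")?1", re.IGNORECASE),
]

def waf_filter(params: dict) -> bool:
    """Return True if the request may pass, False if a patch rule blocks it."""
    for value in params.values():
        for rule in VIRTUAL_PATCHES:
            if rule.search(value):
                return False  # blocked: matches a known exploit signature
    return True

assert not waf_filter({"q": "' OR '1'='1"})  # classic injection is blocked
assert waf_filter({"q": "cloud security"})   # normal traffic passes
```

The virtual patch does not fix the flaw; it buys time so the issue can sit safely in the development queue rather than forcing the shut-down-or-stay-exposed choice described above.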


Ultimately, website security in the cloud is no different from website security in your own environment. Every enterprise needs to create a website risk management plan that will protect its valuable corporate and customer data from attackers. If your organization has not prioritized website security previously, now is the time to make it a priority. Attackers understand what many organizations do not – that Web applications are the easiest and most profitable target. And cloud applications are accessed via the browser, which means website security is the only preventive measure that will help fight attacks.

At the same time, enterprises need to hold cloud vendors responsible for a certain level of network security while remaining accountable for their own data security. Ask vendors what type of security measures they employ, how frequently they assess their security and more. As a customer, you have a right to know before you hand over your most valuable assets. And, vendors know that a lack of security can mean lost business.

There may be some hurdles to jump during the transition from internal to cloud applications. But, by following these recommendations, an organization can avoid pitfalls:

1. You can’t secure what you don’t know you own – Inventory your applications to gain visibility into what data is at risk and where attackers can exploit the money or data being transacted.

2. Assign a champion – Designate someone who can own and drive data security and is empowered to direct numerous teams for support. Without accountability, security and compliance will suffer.

3. Don’t wait for developers to take charge of security – Deploy shielding technologies to mitigate the risk of vulnerable applications.

4. Shift budget from infrastructure to application security – With the proper resource allocation, corporate risk can be dramatically reduced.