Cloud Computing and Device Security: The “Always Able” Era

April 29, 2011

By Mark Bregman, CTO of Symantec

Device Proliferation: Mobility and Security in the Cloud

Chief Information Security Officers know instinctively that the world under their purview is undergoing a shift every bit as significant as the rise of the World Wide Web more than 15 years ago. The demand on our workforce to be ever more productive is driving us to rethink how we use technology to get the job done. Today’s workers expect and demand smart, mobile, powerful devices that place the capabilities of a PC in the palm of the hand.

In this new environment, IT departments are faced with a hard choice: remain committed to an outdated model that limits productivity by placing stringent restrictions on the technology workers use or look for ways to implement new policies that give employees the tools they need to be “always able” while keeping company information safe.

This change in attitude has been driven more by the cloud than many IT decision makers may realize. For enterprise users to do their jobs, they must be able to create, retrieve, manipulate and store massive amounts of data. In the past, the PC was the best tool for this job because it could store and process data locally. But today, storing data in the cloud sets device makers free to create a wide range of computing products – from highly portable to highly stylish and more. Increasingly, these devices can be used to create everything from office documents to rich multimedia content, driving demand for even smarter and more powerful devices.

The loss of traditional security controls over mobile devices, combined with cloud-driven services, creates the need for a new approach to security. According to findings from security firm Mocana, 47% of organizations do not believe they can adequately manage the risks introduced by mobile devices, and more than 45% of organizations say security concerns are one of the biggest obstacles to the proliferation of smart devices. Organizations now must cope with workers introducing personal devices to the enterprise cloud and accessing workplace technology for personal purposes. For IT, the ultimate goal is protecting data by defining who should access what data and defining rights management for viewing and manipulating that data.

At the 30,000-foot level, users demand the flexibility to choose the devices they want, which means IT is tasked with enforcing governance over those devices to ensure corporate data is protected. To allow for uniform enforcement, administrators need the ability to centrally define and distribute security policies to all devices – using what else but the cloud – to secure data at rest or in motion.

To this end, there are five important guidelines enterprises should consider as they reshape IT policy to enable mobile devices to function seamlessly and securely in the cloud:

Take an inventory of all devices – You can’t protect or manage what you can’t see. This begins with device inventory to gain visibility across multiple networks and into the cloud. After taking stock, implement continuous security practices, such as scanning for current security software, operating system patches, and hardware information, e.g., model and serial number.
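To make the idea concrete, here is a minimal sketch in Python of what such an inventory record and posture scan might look like. The field names, baseline values, and sample devices are invented for illustration; a real deployment would pull this data from a management agent rather than hard-code it.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class Device:
        serial_number: str
        model: str
        owner: str
        os_version: str
        last_patch_date: date
        security_agent_installed: bool = False

    MIN_OS_VERSION = (4, 0)      # hypothetical baseline values
    MAX_PATCH_AGE_DAYS = 30

    def check_posture(device, today):
        """Return a list of findings for a device that falls short of the baseline."""
        findings = []
        if not device.security_agent_installed:
            findings.append("missing security software")
        if tuple(int(p) for p in device.os_version.split(".")) < MIN_OS_VERSION:
            findings.append("operating system below minimum version")
        if (today - device.last_patch_date).days > MAX_PATCH_AGE_DAYS:
            findings.append("patches older than %d days" % MAX_PATCH_AGE_DAYS)
        return findings

    inventory = [
        Device("SN-0001", "TabletX", "alice", "4.2", date(2011, 4, 1), True),
        Device("SN-0002", "PhoneY", "bob", "3.1", date(2011, 1, 15), False),
    ]

    for d in inventory:
        issues = check_posture(d, today=date(2011, 4, 29))
        print(d.serial_number, "compliant" if not issues else issues)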

Device security equals cloud security – Since they are essentially access points to the cloud, mobile devices need the same multi-layer protection we apply to other business endpoints, including:

• Firewalls protecting the device and its contents by port and by protocol.

• Antivirus protection spanning multiple attack vectors, which might include MMS (multimedia messaging service), infrared, Bluetooth, and e-mail.

• Real-time protection, including intrusion prevention with heuristics to block “zero-day” attacks for unpublished signatures, and user and administrator alerts for attacks in progress.

• Antispam for the growing problem of short messaging service spam.

Unified protection – Security and management for mobile devices should be integrated into the overall enterprise security and management framework and administered in the same way – ideally using compatible solutions and unified policies. This creates operational efficiencies, but more importantly, it ensures consistent protection across your infrastructure, whether it be on premises or in the cloud. Security policy should be unified across popular mobile operating systems such as Symbian, Windows Mobile, BlackBerry, Android or Apple iOS, and their successors. And non-compliant mobile devices should be denied network access until they have been scanned, and if necessary patched, upgraded, or remediated.
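As a rough illustration of that last point, the hedged Python sketch below gates network admission on a posture check. The checks and the grant/quarantine callbacks are placeholders, not any particular network access control product.

    def is_compliant(device):
        # Placeholder checks; in practice these would query the management agent.
        return (device.get("agent_running", False)
                and device.get("signatures_current", False)
                and device.get("os_patched", False))

    def admit(device, grant_access, quarantine):
        """Deny network access until the device has been scanned and remediated."""
        if is_compliant(device):
            grant_access(device["id"])
        else:
            quarantine(device["id"])  # e.g. a VLAN with access only to patch servers

    admit({"id": "SN-0002", "agent_running": True, "signatures_current": False},
          grant_access=lambda d: print("admit", d),
          quarantine=lambda d: print("quarantine", d))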

Cloud-based Encryption – Millions of mobile devices used in the U.S. alone “go missing” every year. To protect against unauthorized users gaining access to valuable corporate data, encryption delivered in the cloud is necessary to protect the data that resides there. As an additional layer of security, companies should ensure they have a remote-wipe capability for unrecovered devices.

Scalability – Threats that target mobile devices are the same for small businesses and enterprises. As businesses grow, they require security management technology that is automated, policy-based, and scalable so that the infrastructure can accommodate new mobile platforms and services as they are introduced. With this information-centric framework in place, companies can take full advantage of the benefits offered by the cloud. At the same time, having the right policies and technologies in place provides confidence that data – the new enterprise currency – is secure from unauthorized access.

Combined, the five guidelines provide a strong baseline policy, which should give IT and business leaders confidence in the cloud and the mobile devices it enables.

Mark Bregman is Executive Vice President and Chief Technology Officer at Symantec, responsible for the Symantec Research Labs, Symantec Security Response and shared technologies, emerging technologies, architecture and standards, localization and secure coding, and developing the technology strategy for the company.

Is Tokenization or Encryption Keeping You Up at Night?

April 20, 2011

By Stuart Lisk, Senior Product Manager, Hubspan

Are you losing sleep over whether to implement tokenization or full encryption as your cloud security methodology? Do you find yourself lying awake wondering if you locked all the doors to your sensitive data? Your “sleepless with security” insomnia can be treated by analyzing your current situation and determining the level of coverage you need.

Do you need a heavy blanket that covers you from head to toe to keep you warm and cozy or perhaps just a special small blanket to keep your feet warm? Now extend this idea to your data security – do you need end-to-end encryption that blankets all of the data being processed or is a tokenization approach enough, with the blanket covering only the part of the data set that needs to be addressed?

Another major reason for the debate over which method is right for you relates to compliance with industry standards and government regulations. PCI DSS is the most common compliance driver, as it focuses specifically on financial data being transmitted over a network and its resulting exposure to hackers and “the boogie man.”

There is much hype in the industry that makes us believe we must choose one approach over the other. Instead of the analysts and security experts helping us make the decision, they have actually caused more confusion and sleepless nights.

As with anything involving choice, there are pros and cons for each approach. Tokenization provides flexibility, because you can select (and thereby limit) the data that needs to be protected, such as credit card numbers. Another example of how tokenization is often leveraged is in the processing of Social Security numbers. We all know how valuable those digits are. People steal those golden numbers and, in essence, steal identities. Isolating a Social Security number allows it to be replaced with a token during transit, and then replaced with the real numbers upon arrival at the destination. This process secures the numbers from being illegally obtained.
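As a purely illustrative sketch (not any particular tokenization product), the Python below shows the basic vault pattern: the real value is swapped for a random surrogate before transit and looked up again only at the trusted destination.

    import secrets

    class TokenVault:
        """Minimal in-memory vault; production vaults are hardened, audited services."""
        def __init__(self):
            self._by_token = {}

        def tokenize(self, sensitive_value):
            token = "tok_" + secrets.token_hex(8)   # random surrogate, reveals nothing
            self._by_token[token] = sensitive_value
            return token

        def detokenize(self, token):
            return self._by_token[token]

    vault = TokenVault()
    record = {"name": "J. Doe", "ssn": vault.tokenize("078-05-1120")}
    # 'record' can now cross less-trusted systems; only the vault maps the token back.
    original_ssn = vault.detokenize(record["ssn"])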

However, in order to do this successfully, you must be able to identify the specific data to tokenize, which means you must have intimate knowledge of your data profile. Are you confident you can identify every piece of sensitive data within your data set? If not, then encryption may be a better strategy.

Another advantage of utilizing tokenization as your security methodology is that it minimizes the cost and complexity of compliance with industry standards and government regulations. Certainly from a PCI DSS compliance perspective, leveraging tokenization as a means to secure credit card data is less expensive than full end-to-end encryption (E2EE), as the information that needs to be protected is well known and clearly identified.

Full, end-to-end encryption secures all the data regardless of its makeup, from one end of the process through to the destination. This “full” protection leaves no chance of missing data that should be protected. However, it could also be overkill, more expensive or potentially hurt performance.

Many companies will utilize full encryption if there is concern about computers being lost or stolen, or about a natural disaster. Full end-to-end encryption ensures data protection from the source throughout the entire transmission. All data, without regard to its content, is passed securely over the network, including public networks, to its destination, where it is decrypted and managed by the recipient.
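A minimal sketch of that flow, using the open-source Python cryptography package’s Fernet construction, is shown below. A real E2EE deployment would add key provisioning, rotation, and endpoint authentication; here the shared key is generated inline purely for demonstration.

    from cryptography.fernet import Fernet

    # In practice the key is provisioned out of band and shared only with the recipient.
    key = Fernet.generate_key()
    sender = Fernet(key)

    payload = b'{"order_id": 42, "card_number": "4111111111111111"}'
    ciphertext = sender.encrypt(payload)      # opaque to every intermediary in transit

    # ... ciphertext crosses any number of networks or intermediaries ...

    receiver = Fernet(key)
    plaintext = receiver.decrypt(ciphertext)  # only the key holder recovers the data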

While there is much being said in the market about performance, this should not be a deal breaker, and optimization technologies and methodologies can minimize the performance difference. It also depends on whether security is the highest priority. In a recent survey Hubspan conducted on cloud security, more than 77% of respondents said they were willing to sacrifice some level of performance in order to ensure data security. The reality is that full encryption performance is acceptable for most implementations.

Also, you do not need to choose one methodology over the other. As with cloud implementations, many companies are adopting a hybrid approach when it comes to data security in the cloud. If your data set is well known and defined, and the sensitive subset is clearly identified, then tokenization is a reliable and proven method to implement. However, if you are not sure of the content of the data and you are willing to simply lock it all down, then encrypting the data end-to-end is most likely the best approach.

Clearly there are a number of approaches one can take to secure data from malware and other threats. Tokenization and E2EE are two of the most popular today. The fact is, you must look at a variety of approaches and incorporate whichever of them are needed to keep your data out of the hands of those who would do you harm.

It is also important to realize that each of these methodologies requires a different supporting infrastructure, and the cost of implementing them varies just as much. Keep that in mind as you consider how best to secure your data.

At the risk of over-simplifying your decision criteria, think of data security as deciding whether to use a full comforter to keep you warm at night or a foot blanket to provide the warmth your feet specifically desire.

Stuart Lisk, Senior Product Manager, Hubspan (www.hubspan.com)

Stuart Lisk is a Senior Product Manager for Hubspan with over 20 years’ experience in enterprise network, system, storage and application product management. He has over ten years of experience managing cloud computing (SaaS) products.

Protect the API Keys to your Cloud Kingdom

April 18, 2011

API keys to become first class citizens of security policies, just like SSL keys

By Mark O’Neill, CTO, Vordel

Much lip service is paid to protecting information in the Cloud, but the reality is often seat-of-the-pants Cloud security. Most organizations use some form of API keys to access their cloud services. Protection of these API keys is vital. This blog post explores the issues at play when protecting API keys and recommends some solutions.

In 2011, the sensitivity of API Keys will start to be realized, and organizations will better understand the need to protect these keys at all costs. After all, API keys are directly linked to access to sensitive information in the cloud (like email, sales leads, or shared documents) and to pay-as-you-use Cloud services. As such, if an organization condones the casual management of API keys, it is at risk of: 1) unauthorized individuals using the keys to access confidential information and 2) huge credit card bills for unapproved use of pay-as-you-use Cloud services.

In effect, easily accessible API keys mean anyone can use them and run up huge bills on virtual machines. This is akin to having access to someone’s credit card and making unauthorized purchases.

APIs

Let’s take a look at APIs. As you know, many Cloud services are accessed using simple REST Web Services interfaces. These are commonly called APIs, since they are similar in concept to the more heavyweight C++ or Visual Basic APIs of old, though they are much easier to leverage from a Web page or from a mobile phone, hence their increasing ubiquity. In a nutshell, API Keys are used to access these Cloud services. As Daryl Plummer of Gartner noted in his blog, “The cloud has made the need for integrating between services (someone told me, “if you’re over 30 you call it an ‘API’, and if you are under 30 you call it a ‘service’”) more evident than ever. Companies want to connect from on-premises apps to cloud services and from cloud services to cloud services. And, all of these connections need to be secure and governed for performance.” [i]

As such, it’s clear that API keys control access to the Cloud service’s crown jewels, yet they are often emailed around an organization without due regard to their sensitivity, or stored on file servers accessed by many people. For example, if an organization is using a SaaS offering, such as Gmail for its employees, it usually gets an API key from Google to enable single sign-on. This API key is valid only for the organization and allows employees to sign in and access company email. You can read more about the importance of API keys for single sign-on in my earlier blog post titled “Extend the enterprise into the cloud with single sign-on to cloud based services.”

How are API keys protected?

API Keys must be protected just like passwords and private keys. This means that they should not be stored as files on the file system, or baked into non-obfuscated applications that can be analyzed relatively easily. In the case of a Cloud Service Broker, API keys are stored encrypted, and when a Hardware Security Module (HSM) is used this provides the option of storing the API keys in hardware, since a number of HSM vendors – including Sophos-Utimaco, Thales nCipher, SafeNet, and Bull, among others – now support the storage of material other than RSA/DSA keys. The secure storage of API keys means that operations staff can apply a policy to their key usage. It also means that regulatory criteria related to privacy and the protection of critical communications (for later legal “discovery” if mandated) are met.
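To make the contrast with plaintext files concrete, here is a hedged sketch of keeping an API key encrypted at rest. The file name is arbitrary, and the master key is generated inline only for the demo; in practice it would live in an HSM or an operating-system key store, not next to the data.

    from cryptography.fernet import Fernet

    def save_api_key(api_key, master_key, path="apikeys.enc"):
        with open(path, "wb") as f:
            f.write(Fernet(master_key).encrypt(api_key.encode()))

    def load_api_key(master_key, path="apikeys.enc"):
        with open(path, "rb") as f:
            return Fernet(master_key).decrypt(f.read()).decode()

    # Demo only: a real master key would come from an HSM or OS keychain.
    master_key = Fernet.generate_key()
    save_api_key("AKIA-EXAMPLE-NOT-A-REAL-KEY", master_key)
    print(load_api_key(master_key))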

Broker Model for Protecting API Keys

The following are some common approaches to handling API keys with recommended solutions:

1)      Developers Emailing API Keys: organizations often email API keys to developers, who copy and paste them into code. This casual practice is rife with security issues and should be avoided. Additionally, if a developer bakes an API key into code, a request for a new key requires a code change, resulting in extra work.

2)      Configuration Files: another common scenario is a developer putting an API key into a configuration file on the system, where it can be easily discovered. People should think of API keys as the equivalent of private SSL keys, to be handled in a secure fashion. In fact, if API keys get into the wrong hands they are actually more dangerous than private SSL keys because they are immediately actionable: if someone uses an organization’s API keys, the organization gets the bill. One solution to this scenario is to keep the keys in network infrastructure that is built for security. This would involve a cloud service broker architecture, with the broker product managing the API keys.

3)      Inventory of Keys: a way to avoid issues with managing API keys is to implement an explicit security policy regarding these keys. Ideally, this should come under the control of the Corporate Security Policy, with a clear focus on governance and accountability. The foundation of this approach is to keep an inventory of API keys. Despite the benefits of such an inventory, many organizations continue to take an ad hoc approach to tracking their API keys.

Some of the key questions organizations should ask when developing an inventory of API keys are:

a)      what keys are being used and for what purposes?

b)      who is the contact person responsible for specific keys?

c)      is there a plan for handling key expiry dates? How will notification happen? If there is no clear plan for handling expired API keys, it can cause pandemonium when a key expires.

The inventory could be managed in a home-grown encrypted Excel spreadsheet or database, or via more specific off-the-shelf products. The disadvantage of the home-grown approach is the amount of time required to maintain the spreadsheet or database and the possibility of human error. An alternative approach is to leverage the capabilities of an off-the-shelf product such as a cloud service broker. In addition to providing other services, a broker allows an organization to easily view critical information about API keys, including who is responsible for them, how they are being used, and when they expire. (A minimal sketch of such an inventory appears after this list.)

4)      Encrypted File Storage:
One of the more potentially dangerous options is when a developer tries to implement their own security for API keys. For example, the developer understands that the API keys have to be protected and chooses to store them in a difficult-to-find spot – sometimes encrypting them and hiding them in files or a registry location that people would not typically access. Someone will inevitably find out about this “secret” hiding spot, and before long the information spreads throughout the organization. This classic mistake highlights the old adage that security through obscurity is no security at all.
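Referring back to the inventory questions in point 3, the sketch below models them as simple records with an expiry check. It is a stand-in for whatever spreadsheet, database, or broker product actually holds the inventory, and every name and date in it is made up.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class ApiKeyRecord:
        key_id: str        # an identifier, never the key material itself
        purpose: str       # what the key is used for
        owner: str         # contact person responsible for the key
        expires: date

    def expiring_soon(inventory, today, window_days=30):
        """Return records whose keys expire within the notification window."""
        horizon = today + timedelta(days=window_days)
        return [r for r in inventory if r.expires <= horizon]

    inventory = [
        ApiKeyRecord("google-sso", "Gmail single sign-on", "j.smith", date(2011, 5, 1)),
        ApiKeyRecord("iaas-prod", "Virtual machine provisioning", "ops-team", date(2012, 1, 1)),
    ]

    for record in expiring_soon(inventory, today=date(2011, 4, 18)):
        print(f"Notify {record.owner}: key '{record.key_id}' expires {record.expires}")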

In summary, as organizations increasingly access Cloud services, it is clear the casual use and sharing of API keys is an accident waiting to happen. As such, regardless of how an organization chooses to manage API keys, whether using a home-grown approach or an off-the-shelf product, the critical goal is to safeguard the access and usage of these keys.

I would encourage CIOs and CSOs to recognize API keys as first class citizens of security policies, similar to SSL private keys. I would also advise anyone dealing with API keys to treat them as a sensitive resource, since they provide access to sensitive information and to pay-as-you-use Cloud services. Ultimately, effective management of API keys enhances an organization’s Cloud security and avoids unauthorized credit card charges for running virtual machines, whereas slack management will likely result in leakage of sensitive information in addition to unrestricted access to pay-as-you-go Cloud services – courtesy of the organization’s credit card.

Mark O’Neill – Chief Technology Officer – Vordel
As CTO at Vordel, he oversees the company’s technology strategy for delivering high performance Cloud Computing and SOA management solutions to Fortune 500 companies and governments worldwide. Mark is the author of the book “Web Services Security” and a contributor to “Hardening Network Security,” both published by McGraw-Hill Osborne. Mark is also a representative of the Cloud Security Alliance, where he is a member of the Identity Management advisory panel.


[i] Daryl Plummer, “Cloudstreams: The Next Cloud Integration Challenge,” November 8, 2010

http://blogs.gartner.com/daryl_plummer/2010/11/08/cloudstreams-the-next-cloud-integration-challenge/

Constant Vigilance

April 14, 2011

By Jon Heimerl

 

Constant Vigilance. Mad-Eye Moody puts it very well. Constant Vigilance.

Unfortunately, these days we need constant vigilance to help protect ourselves and our companies from peril. That is not to say that we can never relax and breathe. A key part of any decent cyber-security program is to prioritize the threats you face and consider the potential impact they could have on your business. Good practice says we should do the things that really protect us from the big, bad, important threats – the ones that can really hurt us. “Constant vigilance” says we will actually follow through, do the analysis, and take appropriate mitigating actions.

Why should we worry? We worry because of the difference that mitigating actions could make against an exigent threat.

Did BP exercise any sense of constant vigilance in the operations of their Deepwater Horizon oil well? The rupture in the oil well originally occurred on April 20, 2010, and the well was finally capped 87 days later on July 16. Estimates are rough, but something on the order of 328 million gallons of oil spilled into the Gulf, along with a relatively unknown amount of natural gas, methane, and other gases and pollutants. At a current market price of $3.59 per gallon, that would have been about $1.2 billion worth of gasoline. BP estimated cleanup costs in excess of another $42 billion. Of that amount, BP estimated “response” costs at $2.9 billion. And with the recent news that U.S. prosecutors are considering filing manslaughter charges against some of the BP managers for decisions they made before the explosion, there is a good chance that the U.S. Department of Justice could be considering filing gross negligence charges against BP, which could add another $16 billion in fines and lead the way to billions more in lawsuits.

So I have to ask, what form of vigilance did BP exercise when they constructed and drilled the well? As they demonstrated, they were clearly unprepared for the leak. They responded slowly, and their first attempts at stopping the leak were feeble and ineffective. It seriously looked like they had no idea what they were doing. Not only that, but it quickly became obvious that they did not even have a plan for how to deal with the leaking well, or with the cleanup, other than to let the ocean disperse the oil. Even if we ignore the leaked oil and associated cleanup, if they had spent $2 billion on measures to address the leak before it happened, they would have come out nearly a billion dollars ahead. $2 billion could have paid for a lot of monitoring, safety equipment, and potential well caps – maybe even a sea-floor-level emergency cutoff valve – if they had things ready beforehand. If they had evaluated the potential threat and prepared ahead of time. If they had exercised just a little bit of vigilance. Yes, hindsight is 20/20, but by all appearances BP had not even seriously considered how to deal with something like this.

On Friday, March 11, an 8.9-magnitude earthquake struck off the coast of Japan, followed by a massive tsunami. The earthquake and tsunami hit the Fukushima nuclear power plant, located almost due west of the quake’s epicenter. Since then, all six of the reactors at the Fukushima plant have had problems. As of the end of March, Japan is still struggling with the reactors and the radioactive material that has leaked from them. Radioactive plutonium has been discovered in the soil and water outside some of the reactors, and we still do not know the exact extent of the danger or the eventual cost of this part of the disaster in Japan. The single largest crisis at the plant has been the lack of power to keep the cooling systems running. The point at issue is that the plant had skipped a scheduled inspection that would have covered 33 pieces of equipment across the six reactors. Among the equipment that was not inspected were a motor and a backup power generator, which failed during the earthquake. Efforts to restore power have been hampered by water from the tsunami, which breached the sea wall and flooded parts of the low-lying reactor complex.

We don’t yet know the exact extent of the reactor disaster, the potential costs of continued cleanup and containment, or whether such cleanup is even possible. But, at best, we can estimate that the cost will run to many millions of dollars. Would a good measure of diligence have helped minimize the extent of the disaster at Fukushima? We cannot say for sure, but perhaps. Would the inspection have found a problem with the generator that could have provided the needed power to the reactor cooling systems? Perhaps the 19-foot sea wall that protected the plant was determined by experts to be appropriate for the job, but the 46-foot tsunami overwhelmed the wall and flooded the facility. I would have to hear from an expert in that area before making a final judgment, but perhaps better drainage and water pumps to remove excess water would have been appropriate. Much of this is easy to say in hindsight, but perhaps more vigilance up front would have helped make the disaster more manageable. Or at least, less unmanageable.

We can’t foresee everything, and we cannot anticipate every conceivable threat. But we can ask ourselves a few basic questions.

  1. Where are my critical information, systems and resources?
  2. What are the major threats to those things identified in #1?
  3. What can I do to minimize the impact that those threats have on me?

After that, it just takes a little vigilance.
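As a tiny, purely illustrative exercise, those three questions can be turned into a first-cut risk ranking. The assets, threats, and scores below are invented; the point is only that vigilance effort should follow the top of the sorted list.

    # Hypothetical inventory: asset -> (value, {threat: estimated likelihood 0-1})
    assets = {
        "customer database": (9, {"data breach": 0.3, "ransomware": 0.2}),
        "public web site":   (5, {"defacement": 0.4, "outage": 0.5}),
    }

    def ranked_risks(assets):
        """Rank (asset, threat) pairs by a simple value x likelihood score."""
        risks = [(value * likelihood, asset, threat)
                 for asset, (value, threats) in assets.items()
                 for threat, likelihood in threats.items()]
        return sorted(risks, reverse=True)

    for score, asset, threat in ranked_risks(assets):
        print(f"{score:4.1f}  {threat} -> {asset}")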

Jon-Louis Heimerl, CISSP

Cloud Annexation

April 12, 2011

By Stephen R Carter

The Cloud is the next evolutionary step in the life of the Internet. From the experimental ARPANET (Advanced Research Projects Agency Network) to the Internet to the Web – and now to the Cloud – the evolution continues to advance international commerce and interaction on a grand scale. The Web did not become what it is today until SSL (Secure Sockets Layer) was developed, together with the collection of root certificates that are a part of every secure browser. Until SSL (and later TLS [Transport Layer Security]) arrived, the Web was an interesting way to look at content but lacked the benefit of secured commerce. It was the availability of secure commerce that really woke the Web up and changed the commerce model of the planet Earth forever.

While users saw massive changes in interaction patterns from ARPANET to Internet to Web, the evolution to the Cloud will be felt mostly by service and commerce providers. With the Cloud, service and commerce providers expect to see a decrease in costs because of greater economies of scale and the ability to operate a sophisticated data center with very little brick and mortar to care for (if any). With a network link and a laptop, a business in the Cloud era can be a first-class citizen in the growing nation of on-line commerce providers.

However, just as the lack of SSL once prevented commerce on the Web, the lack of security in the Cloud is holding that nation of on-line commerce providers back from the promise of the Cloud. As early as February 2011, this author has seen advertised seminars and gatherings concerning the lack of security in the Cloud, and any gathering concerning the Cloud will have a track or two on the agenda devoted to Cloud security.

The issue is not that Cloud providers fail to use strong cryptographic mechanisms and materials; rather, the issue stems from the control that a business or enterprise has over the operational characteristics of a Cloud, together with the audit events needed to show regulatory compliance. Every data center has a strict set of operations policies that work together to show the world and shareholders that the business is under control and can meet its compliance reporting requirements. If the enterprise adopts a “private cloud,” or a Cloud inside the data center, the problems start to show themselves, and they compound at an alarming rate when a public Cloud is utilized.

So, what is to be done? There is no single solution to the security issue surrounding the Cloud like there was for the Web. The enterprise needs the ability to control operations according to policy, an ability that is compromised by a private cloud and breaks down with a public cloud. The answer is described by a term I call “Cloud Annexation.” Just as Sovereign Nation 1 can work with Sovereign Nation 2 to obtain property and annex it into Sovereign Nation 1, thus making the laws of Sovereign Nation 1 the prevailing law of the land within that property, so too should an enterprise be able to annex a portion of a cloud (private or public) and impose policy (law) upon the annexed portion so that, as far as policy is concerned, the annexed portion of the cloud becomes a part of the data center. Annexation also allows enterprise identities, policy, and compliance to be maintained locally if desired.

Figure 1: Cloud Annexation

This is obviously not what we have today, but it is not unreasonable to expect that we could have it in the future. Standards bodies such as the DMTF are working on Cloud interoperability and Cloud management, where the interfaces and infrastructure necessary to provide the functions of cloud annexation would be made available. The cloud management of the future should allow an enterprise to impose its own crypto materials, policy, and event monitoring upon the portion of a cloud it is using, thus annexing that portion of the Cloud. The imposition of enterprise policy must not, of course, interfere with the policy that the cloud provider must enforce – after all, the cloud provider has a business to care for as well. This will require some facility to normalize the policies of the cloud provider and the cloud consumer so that, without exposing sensitive information, both parties can be assured that appropriate policies can be enforced from both sides. The situation would be improved substantially if, just as we have a network fabric today, we had an Identity Fabric – a layer that overlays the network fabric and provides identity as pervasively as network interconnectivity is provided today. But that is the topic of another posting.
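Since no annexation standard exists yet, the following is purely a conceptual sketch of what an annexation agreement might carry: the policies the enterprise wants imposed on its portion of the cloud, the provider’s own minimums, and a normalization step that merges them and flags anything the provider cannot honor. Every name and setting here is invented for illustration.

    # Conceptual only: there is no standard cloud annexation interface today.
    enterprise_policy = {
        "encryption_at_rest": True,
        "key_ownership": "enterprise",      # enterprise supplies its own crypto material
        "audit_events": ["admin_login", "data_export", "vm_snapshot"],
        "data_location": ["EU"],
    }

    provider_minimums = {
        "encryption_at_rest": True,         # provider will not run unencrypted tenants
        "audit_events": ["admin_login"],    # provider always records these for itself
    }

    def normalize(enterprise, provider):
        """Merge the two policy sets, flagging requirements the provider cannot honor."""
        merged, conflicts = dict(provider), []
        for setting, value in enterprise.items():
            if isinstance(value, list):
                merged[setting] = sorted(set(value) | set(provider.get(setting, [])))
            elif setting in provider and provider[setting] != value:
                conflicts.append(setting)
            else:
                merged[setting] = value
        return merged, conflicts

    effective_policy, conflicts = normalize(enterprise_policy, provider_minimums)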

In conclusion, the Cloud will not be as successful as it could be if the enterprise must integrate yet another operating and policy environment. The Cloud must become a natural extension of the data center so that the cost and effort of Cloud adoption are reduced and the “security” concerns are alleviated. If Cloud annexation becomes a reality, the evolution will be complete.

Novell Fellow Stephen R Carter is a computer scientist focused on creating solutions for identity, cloud infrastructure and services, and advanced network communication and collaboration. Carter is named on more than 100 worldwide patents with more than 100 patents still pending. He is the recipient of the State of Utah’s 2004 Governor’s Medal for Science and Technology and was recognized in 2009 and 2011 as a “Utah Genius” because of his patent work.

www.novell.com

Privileged Administrators and the Cloud: Who will Watch the Watchmen?

April 1, 2011

By Matthew Gardiner

One of the key advantages of the cloud, whether public or private, flows from a well-known economic concept known as “economies of scale.” Economies of scale refers to an operation that, up to a point, gets more efficient as it gets bigger – think electricity power plants, car factories, and semiconductor fabs. Getting bigger is a way of building differential advantage for the provider and thus becomes a key business driver, as he who gets bigger faster maintains the powerful position of low-cost provider. These efficiencies generally come from spreading fixed costs, whether human or otherwise, across more units of production. Thus the cost per unit goes down as unit production goes up.

One important source of economies of scale for cloud providers is the IT administrators who make the cloud service and related datacenters operate. A typical measure of this efficiency is the ratio of managed servers to administrators. In a typical traditional enterprise datacenter this ratio is in the hundreds, whereas cloud providers, through homogeneity and greater automation, can often attain ratios of thousands or tens of thousands of servers per administrator.

However, what is good from an economic point of view is not always good from a security and risk point of view. With so many IT “eggs” from so many cloud consumers in one basket, the risk from these privileged cloud provider administrators must be explicitly recognized and addressed. Privileged administrators going “rogue” – by accident, for profit, or for retribution – has happened so often around the world that it’s hard to believe cloud providers will somehow be immune. The short answer is they won’t. The question is, what should you as a cloud consumer do to protect yourself from one of the cloud provider’s administrators going rogue on your data and applications?

For the purposes of this analysis I will focus on the use of public cloud providers as opposed to private cloud providers.  While the basic principles I discuss apply equally to both, I use public cloud providers because controls are generally hardest to design and enforce when they are largely operated by someone else.

I find the well-worn IT concept of “people, process, and technology” to be a perfectly good framework with which to address this privileged administrator risk. As cloud consumers move more sensitive applications to the cloud, they first need to be comfortable with who these IT administrators are in terms of location, qualifications, hiring, training, vetting, and supervision. Shouldn’t the cloud provider’s HR processes for IT administrators be at least as rigorous as your own?

However, given that there is always a bad apple in a large enough bunch no matter the precautions, the next step is for cloud providers to have operational processes that exhibit good security hygiene. Practices such as segregation of duties, checks and balances, and need-to-know apply perfectly to how cloud administrators should operate. Cloud consumers also need to understand the cloud provider’s practices, policies, and processes for the IT administrator role. Is it a violation for cloud provider administrators to look at or copy customer data, stop customer applications, or copy virtual images? It certainly should be.

The final area to consider is the various technologies used to automate and enforce the security controls discussed above. This is made more challenging by the variety of cloud services available. What cloud consumers can do with public SaaS or PaaS providers (where they have little direct control or visibility into the provider’s systems) is significantly less than with IaaS providers, where the cloud consumer can install any software they want, at least at the virtual layer and above. With SaaS and PaaS providers it is important that cloud consumers push hard for regular access, at least to logs related to their data and applications, so that normal historical analysis can be conducted. Of course, real-time, anytime access to system monitors would be even better.

For IaaS-based public cloud services the security options for the cloud consumer are much wider. For example, it should become regular practice for cloud consumers to encrypt their sensitive data that resides in the cloud – to avoid prying eyes – and to use privileged user management software that combines control of the host operating system with privileged user password management, fine-grained access control, and privileged user auditing and reporting. Using this type of privileged user management software enables the cloud-consuming organization to control its own administrators and, perhaps more importantly, to control and/or monitor the cloud provider’s administrators as well.
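As one small, generic illustration of the monitoring half of that idea (not any specific privileged user management product), the sketch below flags provider-side administrator actions that fall outside an approved list. The log format, actor naming convention, and approved actions are all assumptions.

    # Actions we expect a provider administrator to legitimately perform on our tenant.
    APPROVED_PROVIDER_ACTIONS = {"host_patch", "hypervisor_restart", "hardware_maintenance"}

    activity_log = [
        {"actor": "provider-admin-17", "action": "host_patch",  "target": "vm-042"},
        {"actor": "provider-admin-17", "action": "volume_copy", "target": "vm-042"},
        {"actor": "tenant-admin-3",    "action": "app_deploy",  "target": "vm-042"},
    ]

    def suspicious_provider_actions(log):
        """Return provider-side actions that are not on the approved list."""
        return [event for event in log
                if event["actor"].startswith("provider-")
                and event["action"] not in APPROVED_PROVIDER_ACTIONS]

    for event in suspicious_provider_actions(activity_log):
        print("review:", event)   # e.g. raise an alert or open an investigation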

While there are huge benefits to using the cloud, it is equally important for organizations moving increasingly sensitive data and applications to the cloud that they think through how to mitigate all potential attack vectors.  The unfortunate reality is that people are a source of vulnerability and highly privileged people only increase this risk.  As the ancient Romans said – Quis custodiet ipsos custodes? – Who will watch the watchmen?

Matthew Gardiner is a Director working in the Security business unit at CA Technologies. He is a recognized industry leader in the security and Identity & Access Management (IAM) markets worldwide. He writes and is interviewed regularly in leading industry media on a wide range of IAM, cloud security, and other security-related topics. He is a member of the Kantara Initiative Board of Trustees. Matthew has a BSEE from the University of Pennsylvania and an SM in Management from MIT’s Sloan School of Management.  He blogs regularly at: http://community.ca.com/members/Matthew-Gardiner.aspx and also tweets @jmatthewg1234.  More information about CA Technologies can be found at www.ca.com.
