Cloud Computing and Device Security: The “Always Able” Era

April 29, 2011

By Mark Bregman, CTO of Symantec

Device Proliferation: Mobility and Security in the Cloud

Chief Information Security Officers know instinctively that the world under their purview is undergoing a shift every bit as significant as the rise of the World Wide Web more than 15 years ago. The demand on our workforce to be ever more productive is driving us to rethink how we use technology to get the job done. Today’s workers expect and demand smart, mobile, powerful devices that place the capabilities of a PC in the palm of the hand.

In this new environment, IT departments are faced with a hard choice: remain committed to an outdated model that limits productivity by placing stringent restrictions on the technology workers use, or look for ways to implement new policies that give employees the tools they need to be “always able” while keeping company information safe.

This change in attitude has been driven more by the cloud than many IT decision makers may realize. For enterprise users to do their jobs, they must be able to create, retrieve, manipulate and store massive amounts of data. In the past, the PC was the best tool for this job because it could store and process data locally. But today, storing data in the cloud sets device makers free to create a wide range of computing products – from highly portable to highly stylish and more. Increasingly, these devices can be used to create everything from office documents to rich multimedia content, driving demand for even smarter and more powerful devices.

The loss of traditional security controls on mobile devices, combined with cloud-driven services, creates the need for a new approach to security. According to findings from security firm Mocana, 47% of organizations do not believe they can adequately manage the risks introduced by mobile devices, and more than 45% of organizations say security concerns are one of the biggest obstacles to the proliferation of smart devices. Organizations now must cope with workers introducing personal devices into the enterprise cloud and accessing workplace technology for personal purposes. For IT, the ultimate goal is protecting data by defining who should access what data and defining rights management for viewing and manipulating that data.

At the 30,000-foot level, users demand the flexibility to choose the devices they want, which means IT is tasked with enforcing governance over those devices to ensure corporate data is protected. To allow for uniform enforcement, administrators need the ability to centrally define and distribute security policies to all devices – using what else but the cloud – to secure data at rest or in motion.

To this end, there are five important guidelines enterprises should consider as they reshape IT policy to enable mobile devices to function seamlessly and securely in the cloud:

Take an inventory of all devices – You can’t protect or manage what you can’t see. This begins with device inventory to gain visibility across multiple networks and into the cloud. After taking stock, implement continuous security practices, such as scanning for current security software, operating system patches, and hardware information, e.g., model and serial number (a minimal sketch of such an inventory record appears after this list).

Device security equals cloud security – Since they are essentially access points to the cloud, mobile devices need the same multi-layer protection we apply to other business endpoints, including:

• Firewalls protecting the device and its contents by port and by protocol.

• Antivirus protection spanning multiple attack vectors, which might include MMS (multimedia messaging service), infrared, Bluetooth, and e-mail.

• Real-time protection, including intrusion prevention with heuristics to block “zero-day” attacks for which signatures have not yet been published, and user and administrator alerts for attacks in progress.

• Antispam for the growing problem of short messaging service spam.

Unified protection – Security and management for mobile devices should be integrated into the overall enterprise security and management framework and administered in the same way – ideally using compatible solutions and unified policies. This creates operational efficiencies, but more importantly, it ensures consistent protection across your infrastructure, whether it be on premises or in the cloud. Security policy should be unified across popular mobile operating systems such as Symbian, Windows Mobile, BlackBerry, Android or Apple iOS, and their successors. And non-compliant mobile devices should be denied network access until they have been scanned, and if necessary patched, upgraded, or remediated.

Cloud-based Encryption – Millions of mobile devices used in the U.S. alone “go missing” every year. To protect against unauthorized users gaining access to valuable corporate data, encryption delivered in the cloud is necessary to protect the data that resides there. As an additional layer of security, companies should ensure they have a remote-wipe capability for unrecovered devices.

Scalability – Threats that target mobile devices are the same for small businesses and enterprises. As businesses grow, they require security management technology that is automated, policy-based, and scalable so that the infrastructure can accommodate new mobile platforms and services as they are introduced. With this information-centric framework in place, companies can take full advantage of the benefits offered by the cloud. At the same time, having the right policies and technologies in place provides confidence that data – the new enterprise currency – is secure from unauthorized access.
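The first guideline above calls for an inventory that records, per device, its owner, hardware details, patch level, and security software. A minimal sketch of such a record in Python follows; the field names, version logic, and sample data are illustrative assumptions, not drawn from any Symantec product:

```python
from dataclasses import dataclass
from datetime import date

def version_tuple(v: str) -> tuple[int, ...]:
    """Turn a dotted version string like '4.2' into (4, 2) so versions compare numerically."""
    return tuple(int(part) for part in v.split("."))

@dataclass
class DeviceRecord:
    """One entry in the mobile device inventory."""
    serial_number: str
    model: str
    owner: str
    os_version: str
    security_agent: str | None = None   # None means no security software was found
    last_scanned: date | None = None    # None means the device has never been scanned

def needs_remediation(device: DeviceRecord, minimum_os: str) -> bool:
    """Flag devices with no security agent, no scan on record, or an out-of-date OS."""
    return (
        device.security_agent is None
        or device.last_scanned is None
        or version_tuple(device.os_version) < version_tuple(minimum_os)
    )

inventory = [
    DeviceRecord("SN-1001", "ExamplePhone 4", "asmith", "4.2", "agent 2.1", date(2011, 4, 1)),
    DeviceRecord("SN-1002", "ExampleTab 7", "bjones", "3.9"),
]

for device in inventory:
    if needs_remediation(device, minimum_os="4.0"):
        print(f"{device.serial_number} ({device.owner}): deny network access until remediated")
```

The same record is what an admission-control check – the “deny network access until scanned and patched” step in the unified protection guideline – would consult before letting a device onto the network.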

Combined, the five guidelines provide a strong baseline policy, which should give IT and business leaders confidence in the cloud and the mobile devices it enables.

Mark Bregman is Executive Vice President and Chief Technology Officer at Symantec, responsible for the Symantec Research Labs, Symantec Security Response and shared technologies, emerging technologies, architecture and standards, localization and secure coding, and developing the technology strategy for the company.

Is Tokenization or Encryption Keeping You Up at Night?

April 20, 2011

By Stuart Lisk, Senior Product Manager, Hubspan

Are you losing sleep over whether to implement tokenization or full encryption as your cloud security methodology? Do you find yourself lying awake wondering if you locked all the doors to your sensitive data? Your “sleepless with security” insomnia can be treated by analyzing your current situation and determining the level of coverage you need.

Do you need a heavy blanket that covers you from head to toe to keep you warm and cozy or perhaps just a special small blanket to keep your feet warm? Now extend this idea to your data security – do you need end-to-end encryption that blankets all of the data being processed or is a tokenization approach enough, with the blanket covering only the part of the data set that needs to be addressed?

Another reason there is so much discussion over which method is right for you relates to compliance with industry standards and government regulations. PCI DSS is the most common compliance issue as it focuses specifically on financial data being transmitted over a network, resulting in exposure to hackers and “the boogie man.”

There is much hype in the industry that makes us believe we must choose one approach over the other. Instead of the analysts and security experts helping us make the decision, they have actually caused more confusion and sleepless nights.

As with anything involving choice, there are pros and cons for each approach. Tokenization provides flexibility, because you can select (and thereby limit) the data that needs to be protected, such as credit card numbers. Another example of how tokenization is often leveraged is in the processing of Social Security numbers. We all know how valuable those digits are. People steal those golden numbers and, in essence, steal identities. Isolating a Social Security number allows it to be replaced with a token during transit, and then replaced with the real numbers upon arrival at the destination. This process secures the numbers from being illegally obtained.
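To make the flow concrete, here is a minimal sketch of that substitution in Python; the vault class, its in-memory dictionary, and the sample number are assumptions made for illustration, not a description of any particular tokenization product:

```python
import secrets

class TokenVault:
    """Maps random tokens back to the sensitive values they stand in for.

    In production this mapping lives in a hardened, access-controlled store;
    a plain dict is used here only to show the flow.
    """

    def __init__(self) -> None:
        self._store: dict[str, str] = {}

    def tokenize(self, sensitive_value: str) -> str:
        token = secrets.token_urlsafe(16)   # random surrogate with no relation to the real value
        self._store[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        return self._store[token]

vault = TokenVault()
ssn = "078-05-1120"                          # a well-known sample SSN, not a real person's
token = vault.tokenize(ssn)

print("travels with the transaction:", token)                    # only the token crosses the network
print("restored at the trusted endpoint:", vault.detokenize(token))
```

The point of the exercise is that the token carries no mathematical relationship to the original value, so intercepting it in transit reveals nothing.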

However, in order to do this successfully, you must be able to identify the specific data to protect, which means you must have intimate knowledge of your data profile. Are you confident you can identify every piece of sensitive data within your data set? If not, then encryption may be a better strategy.

Another advantage of utilizing tokenization as your security methodology is that it minimizes the cost and complexity of compliance with industry standards and government regulations. Certainly from a PCI DSS compliance standpoint, leveraging tokenization as a means to secure credit card data is less expensive than E2EE, as the information that needs to be protected is well known and clearly identified.

Full, end-to-end encryption secures all the data regardless of its makeup, from one end of the process through to the destination. This “full” protection leaves no chance of missing data that should be protected. However, it could also be overkill, more expensive or potentially hurt performance.

Many companies will utilize full encryption if there are concerns about computers being lost or stolen, or about natural disasters. Full end-to-end encryption ensures data protection from the source throughout the entire transmission. All data, without regard for knowing the content, is passed securely over the network, including public networks, to its destination where it is decrypted and managed by the recipient.
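By way of contrast with the selective approach above, here is a minimal sketch of symmetric end-to-end encryption using the third-party Python cryptography package’s Fernet recipe; the shared-key handling is deliberately simplified, and real deployments would add key management, which this sketch assumes away:

```python
# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

# Source and destination share this key out of band; distributing and
# protecting it is the key-management problem glossed over here.
key = Fernet.generate_key()

payload = b"order=1234;card=4111111111111111;amount=19.99"

# Everything is encrypted before it leaves the source, regardless of content...
ciphertext = Fernet(key).encrypt(payload)

# ...and only decrypted once it reaches the destination.
plaintext = Fernet(key).decrypt(ciphertext)
assert plaintext == payload
```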

While there is much being said in the market about performance, this should not be a deal breaker, and optimization technologies and methodologies can minimize the performance difference. It also depends on whether security is the highest priority.  In a recent survey Hubspan conducted on cloud security, more than 77% of the respondents said they were willing to sacrifice some level of performance in order to ensure data security. The reality is full encryption performance is acceptable for most implementations.

Also, you do not need to choose one methodology over the other. As with cloud implementations, many companies are adopting a hybrid approach when it comes to data security in the cloud. If your data set is well known and defined, and the data subset is sensitive, then tokenization is a reliable and proven method to implement. However, if you are not sure of the content of the data and you are willing to basically lock it down, then encrypting the data end-to-end is most likely the best approach.

Clearly there are a number of approaches one can take to secure data from malware and other threats. Tokenization and E2EE are two of the popular ways today. The fact is, you must look at a variety of approaches and incorporate any or all of them to keep your data out of the hands of those who would do you harm.

It is also important to realize that each of these methodologies requires a different set of infrastructure to support it. And the cost of implementing them will vary just as much. Keep that in mind as you consider how best to secure your data.

At the risk of over-simplifying your decision criteria, think of data security as if you were deciding whether to use a full comforter to keep you warm at night, or a foot blanket that provides just the warmth to your feet that you specifically desire.

Stuart Lisk, Senior Product Manager, Hubspan (www.hubspan.com)

Stuart Lisk is a Senior Product Manager for Hubspan with over 20 years’ experience in enterprise network, system, storage and application product management. He has over ten years of experience managing cloud computing (SaaS) products.

Protect the API Keys to your Cloud Kingdom

April 18, 2011

API keys to become first class citizens of security policies, just like SSL keys

By Mark O’Neill, CTO, Vordel

Much lip service is paid to protecting information in the Cloud, but the reality is often seat-of-the-pants Cloud security. Most organizations use some form of API keys to access their cloud services. Protection of these API keys is vital. This blog post will explore the issues at play when protecting API keys, and recommend some solutions.

In 2011, the sensitivity of API Keys will start to be realized, and organizations will better understand the need to protect these keys at all costs. After all, API keys are directly linked to access to sensitive information in the cloud (like email, sales leads, or shared documents) and pay-as-you-use Cloud services. As such, if an organization condones the casual management of API keys, it is at risk of: 1) unauthorized individuals using the keys to access confidential information and 2) the possibility of huge credit card bills for unapproved access to pay-as-you-use Cloud services.

In effect, easily accessed API keys mean anyone can use them and run up huge bills on virtual machines. This is akin to having access to someone’s credit card and making unauthorized purchases.

APIs

Let’s take a look at APIs. As you know, many Cloud services are accessed using simple REST Web Services interfaces. These are commonly called APIs, since they are similar in concept to the more heavyweight C++ or Visual Basic APIs of old, though they are much easier to leverage from a Web page or from a mobile phone, hence their increasing ubiquity. In a nutshell, API Keys are used to access these Cloud services. As Daryl Plummer of Gartner noted in his blog, “The cloud has made the need for integrating between services (someone told me, “if you’re over 30 you call it an ‘API’, and if you are under 30 you call it a ‘service’”) more evident than ever. Companies want to connect from on-premises apps to cloud services and from cloud services to cloud services. And, all of these connections need to be secure and governed for performance.” [i]

As such, it’s clear that API keys control access to the Cloud service’s crown jewels, yet they are often emailed around an organization without due regard to their sensitivity, or stored on file servers accessed by many people. For example, if an organization is using a SaaS offering, such as Gmail for its employees, it usually gets an API key from Google to enable single sign-on. This API key is only valid for the organization and allows employees to sign in and access company email.  You can read more about the importance of API keys for single sign-on in my earlier blog titled “Extend the enterprise into the cloud with single sign-on to cloud based services.”

How are API keys protected?

API Keys must be protected just like passwords and private keys are protected. This means that they should not be stored as files on the file system, or baked into non-obfuscated applications that can be analyzed relatively easily. In the case of a Cloud Service Broker, API keys are stored encrypted, and when a Hardware Security Module (HSM) is used, this provides the option of storing the API keys on hardware, since a number of HSM vendors, including Sophos-Utimaco, Thales nCipher, SafeNet and Bull, among others, now support the storage of material other than only RSA/DSA keys. The secure storage of API keys means that operations staff can apply a policy to their key usage. It also means that regulatory criteria related to privacy and protection of critical communications (for later legal “discovery” if mandated) are met.
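As a rough illustration of “never leave API keys lying around as plaintext files,” the sketch below encrypts a small set of keys before writing them to disk, using the third-party Python cryptography package’s Fernet recipe. The file name, key names, and the co-location of the master key with the data are simplifications made only for illustration; in a broker or HSM deployment the master key would live elsewhere:

```python
import json
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# In practice the master key comes from an HSM or OS key store,
# not generated next to the data it protects.
master_key = Fernet.generate_key()
cipher = Fernet(master_key)

api_keys = {
    "mail-sso":    "EXAMPLE-KEY-NOT-REAL-1",
    "crm-service": "EXAMPLE-KEY-NOT-REAL-2",
}

# Only ciphertext is written to disk.
with open("api_keys.enc", "wb") as f:
    f.write(cipher.encrypt(json.dumps(api_keys).encode()))

# An application holding the master key decrypts at the moment of use.
with open("api_keys.enc", "rb") as f:
    restored = json.loads(cipher.decrypt(f.read()).decode())
print(restored["mail-sso"])
```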

Broker Model for Protecting API Keys

The following are some common approaches to handling API keys with recommended solutions:

1)      Developers Emailing API Keys: organizations often email API keys to developers who copy and paste them into code. This casual practice is rife with security issues and should be avoided. Additionally, if a developer bakes the API keys into code, a request for a new key requires a code change resulting in extra work.

2)      Configuration Files: another common scenario is where a developer puts an API key into a configuration file on the system where it can be easily discovered. As such, people should think of API keys as the equivalent of private SSL keys that should be handled in a secure fashion. In fact, if API keys get into the wrong hands they are actually more dangerous than private SSL keys as they are very actionable. For example, if someone uses an organization’s API keys, the organization gets the bill. One solution to this scenario is to have the keys in a network infrastructure that is built for security. This would involve cloud service broker architecture with the broker product managing the API keys.

3)      Inventory of Keys: a way to avoid issues with managing API keys is the implementation of an explicit security policy regarding these keys. Ideally, this should come under the control of the Corporate Security Policy with a clear focus on governance and accountability. The foundation of this approach is to keep an inventory of API keys. Despite the benefits of such an inventory, many organizations continue to adopt an ad hoc approach to keeping track of API keys.

Some of the key questions organizations should ask when developing an inventory of API keys are:

a)      what keys are being used and for what purposes?

b)      who is the contact person responsible for specific keys?

c)      is there a plan for handling key expiry dates? How will notification happen? If there is no clear plan on how to handle expired API keys, it can cause pandemonium when a key expires.

The inventory could be managed in a home-grown encrypted Excel spreadsheet or database, or via other more specific off-the-shelf products. The disadvantage of the home-grown approach is the amount of time required to manage the spreadsheet or database and the possibility of human error. An alternate approach is to leverage the capabilities of an off-the-shelf product such as a cloud service broker. In addition to providing other services, a broker allows an organization to easily view critical information about API keys, including identifying who is responsible for them, as well as providing information on how they are being used and the expiry dates.
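A minimal sketch of what an inventory entry might capture, mapping directly onto questions (a) through (c) above; the field names, sample entries, and 30-day notification window are assumptions for illustration rather than a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class ApiKeyRecord:
    """One row in the API key inventory: what, who, and when it expires."""
    key_id: str      # an identifier for the key, never the secret itself
    purpose: str     # (a) what the key is used for
    owner: str       # (b) the contact person responsible for it
    expires: date    # (c) the expiry date that drives the renewal plan

def expiring_soon(inventory: list[ApiKeyRecord], within_days: int = 30) -> list[ApiKeyRecord]:
    """Return keys whose expiry falls inside the notification window."""
    cutoff = date.today() + timedelta(days=within_days)
    return [k for k in inventory if k.expires <= cutoff]

inventory = [
    ApiKeyRecord("gmail-sso", "Employee single sign-on", "j.doe", date(2011, 6, 1)),
    ApiKeyRecord("iaas-provisioning", "Virtual machine provisioning", "ops-team", date(2012, 1, 1)),
]

for key in expiring_soon(inventory):
    print(f"Renew {key.key_id} (owner: {key.owner}) before {key.expires}")
```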

4)      Encrypted File Storage:
One of the more potentially dangerous options is when a developer tries to implement their own security for API keys. For example, the developer understands that the API keys have to be protected and chooses to store the keys in a difficult-to-find spot – sometimes by using an encryption algorithm and hiding the result in files or a registry that people would not typically access. Someone will inevitably find out about this “secret” hiding spot and before long this information is publicized throughout an organization. This classic mistake really highlights the old adage that security through obscurity is no security at all.

In summary, as organizations increasingly access Cloud services, it is clear the casual use and sharing of API keys is an accident waiting to happen. As such, regardless of how an organization chooses to manage API keys, either using a home-grown approach or an off-the-shelf product, the critical goal is to safeguard the access and usage of these keys.

I would encourage CIOs and CSOs to recognize API keys as first class citizens of security policies similar to SSL private keys.  I would also advise anyone dealing with API keys to think of them as a sensitive resource to be protected, as they provide access to sensitive information and to pay-as-you-use Cloud services. In short, effective management of API keys can enhance an organization’s Cloud security and avoid unauthorized credit card charges for running virtual machines, whereas slack management would likely result in leakage of sensitive information in addition to providing unrestricted access to pay-as-you-go Cloud Services – courtesy of the organization’s credit card.

Mark O’Neill – Chief Technology Officer – Vordel
As CTO at Vordel, he oversees Vordel’s technical development strategy for the delivery of high-performance Cloud Computing and SOA management solutions to Fortune 500 companies and Governments worldwide. Mark is author of the book “Web Services Security” and a contributor to “Hardening Network Security”, both published by Osborne/McGraw-Hill. Mark is also a representative of the Cloud Security Alliance, where he is a member of the Identity Management advisory panel.


[i] Cloudstreams: The Next Cloud Integration Challenge – November 8, 2010

http://blogs.gartner.com/daryl_plummer/2010/11/08/cloudstreams-the-next-cloud-integration-challenge/

Constant Vigilance

April 14, 2011

By Jon Heimerl

 

Constant Vigilance. Mad-Eye Moody puts it very well. Constant Vigilance.

Unfortunately, these days we need constant vigilance to help protect ourselves and our companies from peril. That is not to say that we can never relax and breathe. It rests on a key part of any decent cyber-security program – prioritizing the threats you face and considering the potential impact they could have on your business. Good practice says we need to do those things that really protect us from the big, bad important things – those threats that can really hurt us. “Constant vigilance” says we will actually follow through, do the analysis, and take appropriate mitigating actions.

Why should we worry? We worry because of the difference that mitigating actions could make against an exigent threat.

Did BP exercise any sense of constant vigilance in the operations of their Deepwater Horizon oil well? The rupture in the oil well originally occurred on April 20, 2010, and the well was finally capped 87 days later on July 16. Estimates are rough, but something on the order of 328 million gallons of oil spilled into the gulf, along with a relatively unknown amount of natural gas, methane, and other gases and pollutants. At a current market price of $3.59 per gallon, that would have been about $1.2 billion worth of gasoline. BP estimated cleanup costs in excess of another $42 billion. Of that amount, BP estimated “response” costs as $2.9 billion. And with the recent news that U.S. prosecutors are considering filing manslaughter charges against some of the BP managers for decisions they made before the explosion, there is a good chance that the U.S. Department of Justice could be considering filing Gross Negligence charges against BP, which could add another $16 billion in fines, and lead the way to billions more in lawsuits.

So I have to ask, what form of vigilance did BP exercise when they constructed and drilled the well? As they demonstrated, they were clearly unprepared for the leak. They responded slowly, and their first attempts at stopping the leak were feeble and ineffective. It seriously looked like they had no idea what they were doing. Not only that, but it quickly became obvious that they did not even have a plan for how to deal with the leaking well, or with the cleanup, other than to let the ocean disperse the oil. Even if we ignore the leaked oil and associated cleanup, if they had spent $2 billion on measures to address the leak, before it happened, they would have come out nearly a billion dollars ahead. $2 billion could have paid for a lot of monitoring, safety equipment, and potential well caps; maybe even a sea-floor level emergency cutoff valve, if they had things ready beforehand. If they had evaluated the potential threat and prepared ahead of time. If they had exercised just a little bit of vigilance. Yes, hindsight is 20/20, but by all appearances BP had not even seriously considered how to deal with something like this.

On Friday, March 11, an 8.9-magnitude earthquake struck off the coast of Japan, followed by a massive tsunami. The earthquake and tsunami struck the Fukushima nuclear power plant, located almost due west of the quake epicenter. Since then, all six of the reactors at the Fukushima plant have had problems. As of the end of March, Japan is still struggling with the reactors and the radioactive material that has leaked from them. Radioactive plutonium has been discovered in the soil and water outside of some of the reactors, and we still do not know the exact extent of the danger or the eventual cost of this part of the disaster in Japan. The single largest crisis at the plant has been the lack of power that could help keep cooling systems active. The issue at hand is that the plant had skipped a scheduled inspection that would have covered 33 pieces of equipment across the six reactors. Among the equipment that was not inspected were a motor and backup power generator, which failed during the earthquake. Efforts to restore power have been hampered by the water from the tsunami, which breached the sea wall and flooded parts of the low-lying reactor complex.

We don’t yet know the exact extent of the reactor disaster, the potential costs for continued cleanup and containment, or whether such cleanup is even possible. But, at best, we can estimate that the cost will exceed many millions of dollars. Would a good measure of diligence have helped minimize the extent of the disaster at Fukushima? We cannot say for sure, but perhaps. Would the inspection have found a problem with the generator that could have helped provide the needed power to the reactor cooling systems? Perhaps the 19-foot sea wall that protected the plant was determined by experts to be appropriate for the job, but the 46-foot tsunami overwhelmed the wall and flooded the facility. I would have to hear from an expert in that area before I made a final judgment, but perhaps better drainage and water pumps to remove excess water would have been appropriate. Much of this is easy to say in hindsight, but perhaps more vigilance upfront would have helped make the disaster more manageable. Or at least, less unmanageable.

We can’t foresee everything, and cannot anticipate every conceivable threat. But, we can ask ourselves a couple of basic questions.

  1. Where can I find my cool information, systems and resources?
  2. What are the major threats to those things identified in #1?
  3. What can I do to minimize the impact that those threats have on me?

After that, it just takes a little vigilance.

Jon-Louis Heimerl, CISSP

Cloud Annexation

April 12, 2011

By Stephen R Carter

The Cloud is the next evolutionary step in the life of the Internet. From the experimental ARPANET (Advanced Research Projects Agency Network) to the Internet to the Web – and now to the Cloud, the evolution continues to advance international commerce and interaction on a grand scale. The Web did not become what it is today until SSL (Secure Sockets Layer) was developed together with the collection of root certificates that are a part of every secure browser. Until SSL (and later TLS [Transport Layer Security]) arrived, the Web was an interesting way to look at content, but without the benefit of secured commerce. It was the availability of secure commerce that really woke the Web up and changed the commerce model of the planet Earth forever.

While the user saw massive changes in interaction patterns from ARPANET to Internet to Web, the evolution to the cloud will be mostly restricted to the way that service and commerce providers see things. With the Cloud, service and commerce providers are expecting to see a decrease in costs because of increased economies of scale and the ability to operate a sophisticated data center with only very little brick and mortar to care for (if any). With a network link and a laptop, a business in the Cloud era could be a first-class citizen in the growing nation of on-line commerce providers.

However, just as the lack of SSL prevented commerce on the Web, the lack of security in the Cloud is holding that nation of on-line commerce providers back from the promise of the Cloud. As early as February 2011, this author has seen advertised seminars and gatherings concerning the lack of security in the Cloud. Any gathering concerning the Cloud will have a track or two on the agenda concerning Cloud security.

The issue is not that Cloud providers do not use strong cryptographic mechanisms and materials, rather, the issue stems from the control that a business or enterprise has over the operational characteristics of a Cloud together with audit events to show regulatory compliance. Every data center has a strict set of operations policies that work together to show to the world and shareholders that the business is under control and can meet its compliance reporting requirements. If the enterprise adopts a “private cloud” or a Cloud inside of the data center, the problems start to show themselves and they compound at an alarming rate when a public Cloud is utilized.

So, what is to be done? There is no single solution to the security issue surrounding the Cloud like there was for the Web. The enterprise needs to have the ability to control operations according to policy, an ability that is compromised by a private cloud and breaks down with a public cloud.  The answer is described by a term I call “Cloud Annexation.” Just as Sovereign Nation 1 can work with Sovereign Nation 2 to obtain property and annex it into Sovereign Nation 1, thus making the laws of Sovereign Nation 1 the prevailing law-of-the-land within the property, so too should an enterprise be able to annex a portion of a cloud (private or public) and impose policy (law) upon the annexed portion of the cloud so that, as far as policy is concerned, the annexed portion of the cloud becomes a part of the data center. Annexation also allows enterprise identities, policy, and compliance to be maintained locally if desired.

Figure 1: Cloud Annexation

This is obviously not what we have today. But, it is not unreasonable to expect that we could have it in the future. Standards bodies such as the DMTF are working on Cloud interoperability and Cloud management, where the interfaces and infrastructure necessary to provide the functions of cloud annexation would be made available. The cloud management of the future should allow an enterprise to impose its own crypto materials, policy, and event monitoring upon the portion of a cloud that it is using, thus annexing that portion of the Cloud. The imposition of enterprise policy must not, of course, interfere with the policy that the cloud provider must enforce – after all, the cloud provider has a business to care for as well. This will require that there be some facility to normalize the policies of the cloud provider and cloud consumer so that, without exposing sensitive information, both parties can be assured that appropriate policies can be enforced from both sides. The situation would be improved substantially if, just as we have a network fabric today, we were to have an Identity Fabric – a layer overlaying the network fabric that would provide identity as pervasively as network interconnectivity is provided today. But that is the topic of another posting.

In conclusion, the Cloud will not be as successful as it could be if the enterprise must integrate yet another operating and policy environment. The Cloud must become a natural extension of the data center so that the cost and effort of Cloud adoption are reduced and the “security” concerns are alleviated. If Cloud annexation becomes a reality, the evolution will be complete.

Novell Fellow Stephen R Carter is a computer scientist focused on creating solutions for identity, cloud infrastructure and services, and advanced network communication and collaboration. Carter is named on more than 100 worldwide patents with more than 100 patents still pending. He is the recipient of the State of Utah’s 2004 Governor’s Medal for Science and Technology and was recognized in 2009 and 2011 as a “Utah Genius” because of his patent work.

www.novell.com

Privileged Administrators and the Cloud: Who will Watch the Watchmen?

April 1, 2011

By Matthew Gardiner

One of the key advantages of the cloud, whether public or private, flows from a well-known economic concept known as “economies of scale.” The concept of economies of scale refers to an operation that, to a point, gets more efficient as it gets bigger – think electricity power plants, car factories, and semiconductor fabs. Getting bigger is a way of building differential advantage for the provider and thus becomes a key business driver, as he who gets bigger faster maintains the powerful position of low-cost provider. These efficiencies generally come from spreading fixed costs, whether human or otherwise, across more units of production. Thus the cost per unit goes down as unit production goes up.

One important source of economies of scale for cloud providers is the IT administrators who make the cloud service and related datacenters operate. A typical measure of this efficiency is the ratio of managed servers to the number of administrators. With a typical traditional enterprise datacenter this ratio is in the hundreds, whereas cloud providers, through homogeneity and greater automation, can often attain ratios of thousands or tens of thousands of servers per administrator.

However, what is good from an economic point of view is not always good from a security and risk point of view. With so many IT “eggs” from so many cloud consumers in one basket, the risk from these privileged cloud provider administrators must be explicitly recognized and addressed. Privileged administrators going “rogue” by accident, for profit, or for retribution has happened so often around the world that it’s hard to believe cloud providers will somehow be immune. The short answer is they won’t. The question is, what should you as a cloud consumer do to protect yourself from one of the cloud provider’s administrators “going rogue” on your data and applications?

For the purposes of this analysis I will focus on the use of public cloud providers as opposed to private cloud providers. While the basic principles I discuss apply equally to both, I use public cloud providers because controls are generally hardest to design and enforce when the systems are largely operated by someone else.

I find the well worn IT concept of “people, process, and technology” to be a perfectly good framework with which to address this privileged administrator risk.  As cloud consumers move more sensitive applications to the cloud, they first need to be comfortable with who these IT administrators are in terms of location, qualifications, hiring, training, vetting, and supervision.  Shouldn’t the cloud providers’ HR processes for IT administrators be at least as rigorous as your own?

However, given that there is always a bad apple in a large enough bunch no matter the precautions, the next step is for the cloud providers to have operational processes that exhibit good security hygiene.   Practices such as segregation-of-duties, checks-and-balances, and need-to-know apply perfectly to how cloud administrators should be operating.  Cloud consumers also need to understand what the cloud providers’ practices, policies and processes are for the role of IT administrator.  Is it a violation for cloud provider administrators to look at or copy customer data, or stop customer applications, or copy virtual images?  It certainly should be.

The final area to consider is the various technologies that are being used to automate and enforce the security controls discussed above.  This certainly is made more challenging due to the variety of cloud services that are available.  What cloud consumers can do with public SaaS or PaaS providers (where they have little direct control or visibility into the cloud provider’s systems), is significantly less than that of IaaS providers, where the cloud consumer can install any software that they want at least at the virtual layer and above.  With SaaS and PaaS providers it is important that cloud consumers push hard for regular access at least to logs related to their data and applications, so that normal historical analysis can be conducted.  Of course, real-time, anytime access to system monitors would be even better.
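As an illustration of the kind of historical analysis a cloud consumer might run once it has log access, the sketch below scans a hypothetical CSV export for entries in which the provider’s own administrators touched tenant resources. The log format, field names, and role labels are invented for the example; real providers each expose different logs:

```python
import csv
from io import StringIO

# Hypothetical export of a provider's access log; real formats vary by provider.
SAMPLE_LOG = """timestamp,actor,role,action,resource
2011-03-28T02:14:00Z,ops-admin-17,provider_admin,read,customer_db_snapshot
2011-03-28T09:30:00Z,jsmith,tenant_user,read,sales_report
"""

def provider_admin_touches(log_text: str) -> list[dict]:
    """Return log entries in which the provider's own administrators accessed tenant resources."""
    reader = csv.DictReader(StringIO(log_text))
    return [row for row in reader if row["role"] == "provider_admin"]

for entry in provider_admin_touches(SAMPLE_LOG):
    print(f"{entry['timestamp']}: {entry['actor']} performed {entry['action']} on {entry['resource']}")
```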

For IaaS based public cloud services the security options for the cloud consumer are much wider.  For example, it should become regular practice that cloud consumers encrypt their sensitive data that resides in the cloud – to avoid prying eyes – as well as use privileged user management software that combines control of the host operating system with privileged user password management, fine grained access control, and privileged user auditing and reporting.  Using this type of privileged user management software enables the cloud consuming organization to control their own administrators and perhaps more importantly control and/or monitor the cloud provider’s administrators as well.

While there are huge benefits to using the cloud, it is equally important for organizations moving increasingly sensitive data and applications to the cloud that they think through how to mitigate all potential attack vectors.  The unfortunate reality is that people are a source of vulnerability and highly privileged people only increase this risk.  As the ancient Romans said – Quis custodiet ipsos custodes? – Who will watch the watchmen?

Matthew Gardiner is a Director working in the Security business unit at CA Technologies. He is a recognized industry leader in the security and Identity & Access Management (IAM) markets worldwide. He writes and is interviewed regularly in leading industry media on a wide range of IAM, cloud security, and other security-related topics. He is a member of the Kantara Initiative Board of Trustees. Matthew has a BSEE from the University of Pennsylvania and an SM in Management from MIT’s Sloan School of Management.  He blogs regularly at: http://community.ca.com/members/Matthew-Gardiner.aspx and also tweets @jmatthewg1234.  More information about CA Technologies can be found at www.ca.com.

Debunking the Top Three Cloud Security Myths

March 30, 2011

By Margaret Dawson

The “cloud” is one of the most discussed topics among IT professionals today, and organizations are increasingly exploring the potential benefits of using cloud computing or solutions for their businesses. It’s no surprise Gartner predicts that cloud computing will be a top priority for CIOs in 2011.

In spite of this, many companies and IT leaders remain skeptical about the cloud, with many simply not knowing how to get started or how to evaluate which cloud platform or approach is right for them. Furthermore, uncertainty and fears around cloud security and reliability continue to permeate the market and media coverage. And finally, there remains confusion around the definition of what the cloud is and what it is not, leading some CIOs to want to scrap the term “cloud” altogether.

My number one piece of advice to companies of all sizes is to not buy the cloud, but rather, buy the solution.  Just as we have always done in IT, begin with identifying the challenge or pain that needs to be solved. In evaluating solutions that help address your challenge, include both on-premise and “as a service” based solutions.  And then use the same critical criteria to evaluate those cloud solutions as you would any other, making sure each addresses your requirements around data protection, identity management, compliance, access control rules, and other security capabilities.

Also, do not get sucked into the hype.  Below, I attempt to dispel some of the most common myths about cloud security circulating today:

1. All clouds are created equal

One of the biggest crimes committed by the vendor community and media over the last couple of years has been in talking about “the cloud” as if it was a single, monolithic entity. This mindset disregards the dozens of ways companies need to configure the infrastructure underlying a cloud solution, and the many more ways of configuring and running applications on a cloud platform.

Often people lump together established, enterprise-class cloud solutions with free services offered by social networks and similar “permanent beta” products. As a result of this definition of “the cloud”, many organizations fear that cloud solutions could expose critical enterprise resources and valuable intellectual property in the public domain. An unfortunate result of this fundamental disservice to the cloud security discussion is that it will only increase apprehension towards cloud adoption.

While the cloud can absolutely be as secure as or even more secure than an on-premise solution, all clouds are NOT created equal.  There are huge variances in security practices and capability, and you must establish clear criteria to make sure any solution addresses your requirements and compliance mandates.

2. Cloud security is so new, there’s no way it can be secure

With all the buzz surrounding the cloud, there’s a misconception that cloud security is a brand new challenge that has not been addressed. What most people don’t understand is that while the cloud is already bringing radical changes in cost, scalability and deployment time, most of the underlying security concerns are, in fact, neither new nor unaddressed. It’s true that the cloud represents a brand new attack vector that hackers love to go after, but the vulnerabilities and security holes are the same ones you face in your traditional infrastructure.

Today’s cloud security issues are much the same as any other outsourcing model that organizations have been using for years. What companies need to remember is that when you talk about the cloud, you’re still talking about data, applications and operating systems in a data center, running the cloud solution.

It’s important to note that many cloud vendors leverage best-in-class security practices across their infrastructure, application and services layers.  What’s more, a cloud solution provides this same industry-leading security for all of its users, often offering you a level of security your own organization could not afford to implement or maintain.

3. All clouds are inherently insecure

As previously mentioned, a cloud solution is no more or less secure than the datacenter, network and application on which it is built. In reality, the cloud can actually be more secure than your own internal IT infrastructure. A key advantage of third-party cloud solutions is that a cloud vendor’s core competency is to keep its network up and deliver the highest level of security. In fact, most cloud service providers have clear SLAs around this.

In order to run a cloud solution securely, cloud vendors have the opportunity to become PCI DSS compliant, SAS 70 certified and more. Undergoing these rigorous compliance and certification processes can provide organizations with the assurance that cloud security is top of mind for their vendor and appropriately addressed. The economies of scale involved in cloud computing also extend to vendor expertise in areas like application security, IT governance and system administration. A recent move towards cloud computing by the security-conscious U.S. Federal Government is a prime example of how clouds can be extremely secure, depending on how they are built.

One area that folks often forget is the services piece of many cloud solutions.  Beyond the infrastructure and the application, make sure you understand how the vendor controls access to your data by its services and support personnel.

Anxiety over cloud security is not likely to dissipate any time soon. However, focusing on the facts and addressing the market’s concerns directly – like debunking cloud security myths – will go a long way toward helping companies gain confidence in deploying the cloud. There are also an increasing number of associations and industry forums, such as the Cloud Security Alliance, that provide vendor-neutral best practices and advice.  In spite of the jokes, cloud security is not an oxymoron, but in fact, an achievable and real goal.


Margaret Dawson is Vice President of Product Management for Hubspan (www.hubspan.com). She’s responsible for the overall product vision and roadmap, and works with key partners in delivering innovative solutions to the market. She has over 20 years experience in the IT industry, working with leading companies in the network security, semiconductor, personal computer, software, and e-commerce markets, including Microsoft and Amazon.com. She is a frequent speaker on cloud security, cloud platforms, and other cloud-related themes. Dawson has worked and traveled extensively in Asia, Europe and North America, including ten years working in the Greater China region, consulting with many of the area’s leading IT companies, and serving as a BusinessWeek magazine foreign correspondent.

What NetFlix Can Teach Us About Security in the Cloud

March 29, 2011

By Eric Baize

For years, the security industry has been complacent, using complex concepts to keep security discussions isolated from mainstream IT infrastructure conversation.  The cloud revolution is bringing an end to this security apartheid. The emergence of an integrated IT infrastructure stack, the need for information-centric security and the disruption brought by virtualization are more and more making security a feature of the IT infrastructure. The industry consolidation, initiated by EMC’s acquisition of RSA in 2006 and now well on its way with the recent acquisitions of McAfee by Intel and ArcSight by HP, is demonstrating that the security and IT infrastructure conversations are one and the same.

We, the security people, must follow this transition and lay out a vision that non-security experts can understand without having to take a PhD course in prime number computation.

Let me give it a try by using the video rental industry as an example on why security in the cloud will be different and more effective.

Video rental industry:

1 – You start with a simple need:   Most families want to watch movies in their living room, a movie of their choosing, at a time of their choosing.

2 – A new market emerges:   Video rental stores with chains such as Blockbuster in the U.S.  Do you remember the late fees?

3 – Then comes a new business model.  Instead of paying per movie and driving to the store, you pay a monthly subscription fee and movies are delivered directly to your home.  Netflix* jumps in and makes the new delivery model work with legacy technology by sending DVDs through postal mail.

4 – Increase in network bandwidth makes video on demand possible on many kinds of end-user devices from cell phones to video game consoles.  Netflix expands its footprint by embedding its technology into any video viewing device that makes it into your home:   Game consoles, streaming players and smart phones.

5 – Blockbuster has filed for Chapter 11 bankruptcy.  Netflix is uniquely positioned to help consumers transition from the old world of video viewing with DVDs to video on-demand.  The customer wins with better movie choices delivered faster.

The Security Industry

The parallel with the evolution the security industry is going through is striking:

1 – You start with a simple need from CIOs and CSOs:  They want to secure their information.

2 – A new market emerges:  IT security with early players focusing on perimeter security:  Building firewalls around information and bolting on security controls on top of insecure infrastructure.

3 – Here comes the cloud, a different way of delivering, operating and consuming IT.  IT is delivered as a service.  Enterprises use virtualization to build private clouds operated by internal IT teams.  The IT infrastructure is invisible and security is becoming much more information-centric. New security solutions emerge, such as the RSA Solution for Cloud Security and Compliance, that focus on gaining visibility over the new cloud infrastructure and on controlling information.

4 – Increase in bandwidth makes it possible to expand private cloud into hybrid clouds, using a cloud provider’s IT infrastructure to develop new applications or to run server or desktop workloads.  Security is changing as controls are directly embedded in the new cloud infrastructure, making it security aware. The need for visibility expands to cloud provider’s IT infrastructure and new approaches such as the Cloud Security Alliance GRC Stack enable enterprises to expand their GRC platform to manage compliance of their cloud provider infrastructure.

5 – What will happen to the security industry?  It must adapt and manage the transition from physical to virtual to cloud infrastructures.  First, by dealing with traditional security controls in physical IT infrastructure;  then, by embedding its control in the virtual and cloud infrastructure to build a trusted cloud; and finally by providing a consolidated view of risk and compliance across all types of IT infrastructure: physical or virtual, on-premise or on a cloud provider’s premises. The customer wins:  IT infrastructures have become security-aware, making security and compliance more effective and easier to manage.

So, does this explanation work for you? I welcome all comments below!

* Netflix is a registered trademark of Netflix, Inc.

Eric Baize is Senior Director in RSA’s Office of Strategy and Technology, with responsibility for developing RSA’s strategy for cloud and virtualization. Mr. Baize also leads the EMC Product Security Office, with company-wide responsibility for securing EMC and RSA products.

Previously, Mr. Baize pioneered EMC’s push towards security. He was a founding member of the leadership team that defined EMC’s vision of information-centric security, and which drove the acquisition of RSA Security and Network Intelligence in 2006.

Mr Baize is a Certified Information Security Manager, holder of a U.S. patent and author of international security standards. He represents EMC on the Board of Directors of SAFECode.

[How to] Be Confident Storing Information in the Cloud

March 29, 2011

By Anil Chakravarthy and Deepak Mohan

Over the past few years, information explosion has inhibited organizations’ ability to effectively secure, manage and recover data. This complexity is only increasing as organizations try to manage the data growth by moving it to the cloud. It’s clear that storage administrators must regain control of information to reduce costs and recovery times while complying with regulatory standards, including privacy laws.

Data growth is currently one of the biggest challenges for organizations. In a recent survey by Gartner, 47 percent of respondents ranked data growth as the biggest data center hardware infrastructure challenge for large enterprises. In fact, IDC says that enterprise storage will grow an average of 60 percent annually.

As a result, companies are turning to the cloud to help them alleviate some of the pains caused by these issues.

The Hype of the Cloud: Public, Private and/or Hybrid?

There is so much hype associated with cloud computing. Companies often struggle with defining the potential benefits of the cloud to their executives, and which model to recommend. In short, the cloud is a computing environment that can deliver on-demand resources in a scalable, elastic manner and is typically, but not necessarily, accessible from anywhere – through the internet (https://blog.cloudsecurityalliance.org/). The cloud encompasses the principle that users should have the ability to access data when, where and how they want – regardless of device.

The public cloud is typically one in which a third party owns the infrastructure and operations and delivers a service to multiple private entities (i.e., cloud-based email or document sharing). While these services typically provide low-cost storage, this model has a few drawbacks: companies have limited control over implementation, security, and privacy. This can be less than ideal for some organizations.

We believe most enterprises will implement a private cloud over the next few years. A private cloud retains control by enabling the company to host data and applications on their own infrastructure and maintain their own operations. This gives them maximum control, protecting against unforeseen issues. Private clouds can be scalable and elastic (similar to public clouds), providing them the best online storage operations and options to improve performance, capacity, and availability as needed.

A hybrid approach enables organizations to combine the inexpensive nature of the public cloud with private clouds, giving additional control over the management, performance, privacy and accessibility of the cloud for their organization. For example, an organization may define a private cloud storage infrastructure for a set of applications and take advantage of public cloud storage for off-site copies and/or longer-term retention. This gives the organization the flexibility to deliver a service-oriented model to their internal customers.

Deciding which model to use is crucial. Each organization ought to evaluate its application portfolio, determine its corporate risk tolerance, and look for an agile way to consume cloud services. For small and medium-sized enterprises, the propensity for public cloud applications and infrastructure can be much greater than for large enterprise organizations.

The Private Cloud and Virtualization: Tools to Minimize Data Growth

As companies look to private clouds, often leveraging server virtualization, to more efficiently deliver applications to their business, those same initiatives can also help them manage, back up, archive and discover information. Adding to this issue, IDC reports that companies often waste up to 65 percent of storage capacity, as disk utilization rates range from 28-35 percent. Cloud initiatives seem like the natural solution.

The private cloud is the clear answer. Combined with virtual environments, if managed correctly, the cloud can help organizations save money, accelerate application delivery, increase application availability, and reduce risks.

As a best practice, organizations need to increase storage utilization within virtual infrastructures, as virtual machine deployments can often result in unused and wasted storage. Moreover, there can be performance implications as companies look to virtual desktops: when a large number of users log into their desktops simultaneously, performance can suffer dramatically. Organizations can use new tools that address the storage challenges of virtual environments and integrate with the virtual machine management console for rapid provisioning of servers and virtual desktops. This would include the cloning and setup of virtual machines, desktops, applications, and files needed across all virtual servers.

By having intelligent storage management tools, organizations can reduce the number of copies stored for virtual machines and desktops yet still deliver the same number of applications and desktops to the business. This enables administrators to utilize the appropriate storage (with the appropriate characteristics: cost, performance, availability, etc.). According to our own tests, this can eliminate as much as 70 percent of storage requirements and costs by storing only the differences between VM boot images.
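A toy illustration of the underlying idea, storing each unique fixed-size block of a virtual machine image only once; the block size, hashing scheme, and in-memory dictionaries are assumptions kept deliberately simple (real products use variable-size chunking and persistent, indexed stores):

```python
import hashlib

BLOCK_SIZE = 4096  # fixed-size blocks keep the example simple

def dedup_store(images: dict[str, bytes]) -> tuple[dict[str, bytes], dict[str, list[str]]]:
    """Store each unique block once; each image becomes an ordered list of block hashes."""
    blocks: dict[str, bytes] = {}       # block hash -> block contents (stored once)
    recipes: dict[str, list[str]] = {}  # image name -> ordered block hashes
    for name, data in images.items():
        hashes = []
        for offset in range(0, len(data), BLOCK_SIZE):
            block = data[offset:offset + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            blocks.setdefault(digest, block)
            hashes.append(digest)
        recipes[name] = hashes
    return blocks, recipes

# Two nearly identical boot images: only the blocks that differ are stored twice.
base_image = b"A" * 40960
images = {"vm1.img": base_image, "vm2.img": base_image[:-4096] + b"B" * 4096}
blocks, recipes = dedup_store(images)

raw_bytes = sum(len(d) for d in images.values())
stored_bytes = sum(len(b) for b in blocks.values())
print(f"raw: {raw_bytes} bytes, after deduplication: {stored_bytes} bytes")
```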

In addition, by utilizing appropriate management tools that look across all environments – whether physical, virtual or cloud-based – organizations can drive down costs by giving them a better understanding of how they are using storage to improve utilization and make better purchasing decisions. Furthermore, using such centralized management tools will help them to better automate tasks to improve services and reduce errors. This automation helps organizations deliver storage as a service (a key tenet of private cloud computing) with capabilities including on-host storage provisioning, policy-driven storage tiering and chargeback reporting.

Another example is backup: when organizations back up applications within their virtual environment, they have normally done two separate backups – one for the full image recovery and one of the individual files within the environment for recovery later. Organizations can reduce this waste by implementing solutions that will do a single backup that is off-host, in the cloud, and will allow them to do two separate recoveries of the full image and of granular files. This more effective implementation of deduplication keeps data volumes lower and allows for better storage utilization.

Hybrid Cloud Solutions: Control of Storage Utilization and Archiving

Data protection and archiving environments within virtual and cloud infrastructures tend to grow faster than anticipated, so they need to be managed closely to keep costs down. Fortunately, there are software tools that address this quite effectively.

Implementing a hybrid model allows organizations to get storage offsite (through public cloud storage), eliminating tape rotations and other expenses associated with off-premise storage. However, organizations should be cautious about tools that do not provide consistent management across physical, virtual, and cloud-based infrastructures.

Many organizations are examining cloud-based email such as Google’s Gmail and Microsoft Office 365. As a best practice for this hybrid model, however, organizations cannot compromise corporate security and governance policies. This often means maintaining on-premise email archiving and discovery capabilities for information that resides in the cloud. In doing so, organizations have a consistent way to discover information in their private cloud as well as information hosted in the public cloud.

Of course, organizations that integrate tightly with major cloud storage partners will see the biggest benefit of this hybrid approach – especially if they need to quickly deploy a cloud implementation to meet rapid growth.

Moving Forward with an Eye to the Sky

IDC reports that 62 percent of respondents to a recent survey say that they will be investing in data archiving or retirement in 2011 to address the challenges associated with data growth. IT organizations are in the process of trying to re-architect their environments to meet these challenges. Private and hybrid clouds, combined with virtualization, seem key in addressing these challenges.

By implementing cloud solutions, storage administrators are regaining control of information, reducing storage costs, and positioning themselves to deal with tomorrow’s challenges.

Anil Chakravarthy, Senior Vice President, Storage and Availability Management Group and Deepak Mohan, Senior Vice President, Information Management Group, Symantec Corporation

Hey, You, Get off of My Cloud

March 22, 2011 | Leave a Comment

By Allen Allison

The emerging Public Cloud versus Private Cloud debate is not just about which solution is best; it extends to the very definition of cloud. I won’t pretend that my definitions of public cloud and private cloud match everybody else’s, but I would like to begin by establishing my point of reference for the differences between the two.

Public Cloud: A multi-tenant computing environment that delivers on-demand resources in a scalable, elastic manner, measured and metered (and often charged) on a per-usage basis. The public cloud environment is typically, but not necessarily, accessible from anywhere through the internet.

Private Cloud: A single tenant computing environment that may provide similar scalability and over-subscription to the Public Cloud, but solely within the single tenant’s infrastructure.  This infrastructure may exist on the tenant’s premises, and may be delivered in a dedicated model through a managed services offering.  The private cloud environment is typically accessible from within the tenant’s infrastructure.  However, it may be necessary to enable external access via the internet or other connectivity.

It is commonly understood that a cloud environment, whether public or private, has several benefits including lower total cost of ownership (TCO).  However, there are considerations that should be made when determining whether the appropriate option is a public or private cloud.  Below are some key points to consider, as well as some perceptions, or misperceptions, of the benefits of each.

In a Private Cloud, the owner or tenant may have more flexibility in establishing policies and procedures for provisioning, usage, and security. If specific controls would otherwise impact other tenants in a shared environment, the organization may be given greater control over them within a dedicated environment.

In a Public Cloud, the tenant has less control over the shared resources, the security of the platform, or the compliance of the infrastructure. The tenant may, however, be able to leverage common security controls or compliance certifications that inspire greater confidence in the use of a managed cloud offering. For example, if the public cloud infrastructure is included in the provider’s third-party SAS 70 audit (soon to be replaced by SSAE 16), the tenant may be in a position to present those controls and that compliance as part of its own compliance program.

In a Private Cloud, the owner or tenant may be able to leverage the scalability and capacity management of a platform that can handle the over-subscription and provisioning processes of a multi-resource infrastructure. This allows for a consolidation of hardware and management resources, a potential reduction in administrative costs, and a scale that puts idle resources (memory, CPU, and so on) to use. However, these benefits may come with a significant capital expense, depending on the cost model.
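A back-of-the-envelope sketch shows why over-subscribing idle resources translates into consolidation. The host size, VM size, and utilization figure below are assumptions for illustration only.

    # Illustrative over-subscription arithmetic: how many VMs fit on one host when
    # capacity is planned against average rather than peak demand. Numbers assumed.
    host_cores = 32
    vcpus_per_vm = 4
    avg_cpu_utilization = 0.20    # each VM is busy roughly 20% of the time

    # Without over-subscription: one vCPU per physical core.
    vms_dedicated = host_cores // vcpus_per_vm

    # With over-subscription sized to average utilization, keeping 25% headroom.
    oversubscription_ratio = (1 / avg_cpu_utilization) * 0.75
    vms_oversubscribed = int(vms_dedicated * oversubscription_ratio)

    print("Dedicated allocation:  ", vms_dedicated, "VMs per host")
    print("With over-subscription:", vms_oversubscribed, "VMs per host")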

In a Public Cloud, the tenants enjoy greater scalability and capacity benefits because the costs of adding resources or managing the environment are not tied to a single tenant but spread across all tenants of the platform. Typically, in a public cloud, the tenant is billed only for the resources it uses. This allows for a lower initial expense and costs that grow to match utilization, which, in many cases, equates to growth in revenue for the hosted application. Likewise, when the need for resources is reduced, the total cost is also reduced. This can be especially helpful when the platform supports a seasonal business (e.g., an online merchant).
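To illustrate why metered billing suits a seasonal workload, the sketch below compares a private deployment sized for peak demand against a public cloud bill that tracks monthly usage. Every figure is an assumption chosen purely for illustration.

    # Illustrative cost comparison: fixed capacity sized for the annual peak versus
    # pay-per-use for a seasonal demand curve. All figures are assumed.
    monthly_demand_units = [10, 10, 12, 15, 15, 18, 20, 25, 40, 60, 90, 100]  # peaks late in the year

    private_unit_cost = 80     # $/unit/month; capacity must cover the annual peak
    public_unit_cost = 110     # $/unit/month; metered, higher per unit but pay-as-you-go

    private_annual = max(monthly_demand_units) * private_unit_cost * 12
    public_annual = sum(d * public_unit_cost for d in monthly_demand_units)

    print("Private (sized for peak): ${:,}".format(private_annual))
    print("Public (pay per use):     ${:,}".format(public_annual))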

In a Private Cloud, the tenant has more control over maintenance schedules, upgrades, and the change-management process. This allows greater flexibility in bringing the managed platform into line with specific requirements, such as FDA 21 CFR or NIST 800-53. Because the stringent requirements of these regulations limit the flexibility of cloud environments, it is easier to hold an entire dedicated cloud platform to these specific controls than to carve out exceptions in an otherwise multi-tenant cloud environment.

In a Public Cloud, the costs of shared security infrastructure made available to customers can be spread over multiple tenants. For example, the cloud provider may offer shared firewall resources that inspect traffic at the ingress of the cloud environment. Customers can share the costs of maintenance and management as well as of the shared hardware used to deliver those firewall services. This is especially worth noting when those security resources include threat management and intrusion detection services, since the deployment and support of dedicated security infrastructure can be expensive. Furthermore, most security infrastructure can be tailored to comply with specific regulations or security standards, such as HIPAA, PCI DSS, and others.

It is important to understand how cloud providers deliver managed cloud services on a public cloud platform. Typically, the elastic environment is built on a robust, highly scalable platform that can grow much larger than any individual private cloud environment, so significant economies of scale are built into the common platform. This yields the following benefits for the provider, with a trickle-down effect for each tenant.

  1. The per-unit cost of each additional resource is greatly reduced, because far more enhancements can be made to a shared public cloud platform than to any single private cloud platform.
  2. When a provider delivers security services in a public cloud environment, each tenant gains the benefit of security measures enforced for other clients. For example, if a specific, known vulnerability is remediated for one customer, the same remediation can easily be applied to all customers.
  3. The cloud provider’s reputation may work to the tenant’s advantage. A cloud provider may take better precautions, such as adding additional redundancy, adding capacity sooner, or establishing more stringent change-management programs, for a shared public cloud infrastructure than it may be willing to deliver in a dedicated private cloud. This may lend itself to better Service Level Agreements (SLAs), greater availability, better flexibility, and rapid growth.

It is rare that a new cloud customer will require a dedicated cloud infrastructure. This is most often reserved for organizations in government, serving the government, or in highly regulated industries. For the rest, a public cloud infrastructure will likely provide the flexibility, growth, cost savings, and elasticity necessary to make the move from a dedicated physical environment to the cloud. Those who choose to move to the public cloud understand the benefits and are able to leverage their providers to deliver the service levels and manageability that make the cloud experience a positive one.

Allen Allison, Chief Security Officer at NaviSite (www.navisite.com)

During his 20+ year career in the information security industry, Allen Allison has served in management and technical roles, including the development of NaviSite’s industry-leading cloud computing platform; chief engineer and developer for a market-leading managed security operations center; and lead auditor and assessor for information security programs in the healthcare, government, e-commerce, and financial industries. With experience in the fields of systems programming; network infrastructure design and deployment; and information security, Allison has earned the highest industry certifications, including CCIE, CCSP, CISSP, MCSE, CCSE, and INFOSEC Professional. A graduate of the University of California, Irvine, Allison has lectured at colleges and universities on the subject of information security and regulatory compliance.
