Security Considerations When Evaluating Google Apps Marketplace Applications

By: Tsahy Shapsa, VP of Sales & Marketing and Co-Founder, CloudLock


Customers care about the security of their data in the cloud, and the security of customer data is clearly important to Google as well, which is why Google has invested in completing numerous security audits and certifications such as FISMA, SSAE 16 and, most recently, ISO 27001.

 

Since 2010, we’ve had the honor of engaging with some of the largest Google Apps customers in the world. When dealing with these large organizations (as well as smaller ones that care about the security of their data), “security assessment” questions inevitably arise at some point during the evaluation process.

 

One might argue that the real value of Google Apps is not confined to messaging and collaboration, but lies in the ability to transform the way businesses consume applications. This transformation is demonstrated and driven by the Google Apps Marketplace, which offers hundreds of applications, broken down into multiple categories, that any Google Apps customer can add to their domain with a click of a mouse.

With great power comes great responsibility, and as enterprise IT security professionals know, adding a Google Apps Marketplace application extends the security perimeter of the organization to include that application and the company behind it (including its employees).

 

[Screenshot: the warning message displayed when installing a marketplace application, with an explanation of the data access being granted.]

 

Technically speaking, adding non-Google services (aka “installing” marketplace applications) to a domain really means granting that application privileges to access domain and end-user data. Different applications require access to different data repositories: documents, spreadsheets, and calendars are just a few examples.

 

Security of organizational data is a critical part of any comprehensive DLP strategy. Security audits and certifications can be equally important to internal auditors, legal and compliance teams, as well as customers (if the company is hosting customer data).

Google reminds customers that it is their responsibility to trust and verify the 3rd party (non-Google) services they would like to add to their domain.

 

How do customers trust and verify 3rd party applications?

 

Here’s a quick checklist that any organization can use in evaluating whether to add a marketplace application to their domain:

 

Assessing the trustworthiness of a marketplace application provider

Here’s a quick explanation of each of these security controls and its impact on the trustworthiness of the marketplace application provider; a simple scoring sketch follows the list. Highly trustworthy 3rd party application vendors will be able to proactively provide the security assurances customers require:

 

  • SSAE 16 Audited – an SSAE 16 audit means that a 3rd party auditing firm has reviewed and attested to the security controls reported by the application vendor
  • System Security Plan (SSP) – a system security plan is a ‘must have’ for a vendor to be considered even somewhat trustworthy. Just having an SSP isn’t enough, as anyone can write their own; security officers should look for independent verification of the controls, procedures and processes reported in the SSP
  • Ongoing Application Vulnerability Scanning – a standard practice for any SaaS application
  • Customer Security Assessment – in lieu of an industry-standard security audit, prospective customers should demand that the app provider respond to a security assessment that captures the controls it has in place. These include employee background checks, documented and implemented policies and procedures, change management, monitoring, and vendor self-audit verification
  • Application is strategic to the vendor – if the app is not strategic to the vendor’s core business, chances are that the necessary investment in security controls didn’t take place. And security does require an ongoing investment (as anyone who’s gone through a security audit will testify)
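
To make this checklist repeatable across vendors, it can be captured as a small script. The sketch below is illustrative only; the control names and weights are assumptions, not part of any formal standard:

    # Illustrative sketch: scoring a marketplace application provider against
    # the trust checklist above. Control names and weights are assumptions
    # for illustration, not part of any formal standard.

    CONTROLS = {
        "ssae16_audited": 30,                # independently attested controls
        "has_ssp": 15,                       # a System Security Plan exists
        "ssp_independently_verified": 15,    # SSP verified by a third party
        "ongoing_vuln_scanning": 15,         # regular application scanning
        "customer_assessment_answered": 15,  # responded to a security assessment
        "app_strategic_to_vendor": 10,       # app is core to the vendor's business
    }

    def trust_score(vendor_answers):
        """Sum the weights of the controls the vendor satisfies (0-100)."""
        return sum(weight for control, weight in CONTROLS.items()
                   if vendor_answers.get(control, False))

    if __name__ == "__main__":
        vendor = {"ssae16_audited": True, "has_ssp": True,
                  "ongoing_vuln_scanning": True}
        print("Trust score:", trust_score(vendor))  # prints 60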

 

In summary, security of customer data should be important to all of us.

 

Security is important to Google – as is evident from its heavy investment and excellent track record in world-class security.

 

Security is important to customers – To protect organizational data, to adhere to legal requirements, and to ensure that the organization’s security perimeter is not compromised by adding non-trustworthy services.

 

Security must be a top priority for vendors to be trusted – Trustworthy third party vendors make no compromises, and continuously invest, audit, and innovate around their security practices.

 

Though the transition to the cloud has brought unprecedented sharing, availability, and collaboration benefits to organizations of all sizes, companies must be aware of the 3rd party vendors that now have access to corporate data, and must be able to determine whether those vendors pose a security risk.


Tsahy Shapsa is the VP of Sales & Marketing and Co-Founder of CloudLock, where he helps the largest Google Apps customers in the world bring DLP to their data in the cloud. Prior to founding CloudLock, he held various business and technology management roles with companies such as Sun Microsystems and Network Appliance, both domestically and internationally.

 

Some Things To Consider When Extending Your IdM Into The Cloud

 

About Author
Mark O’Neill is CTO of Vordel, whose products enable companies to connect to mobile and cloud services.

 

Like many organizations, you no doubt face the challenge of extending your IT operations into the cloud to take advantage of the many cloud-based services demanded by your users today. As you make the transition from a firewall-protected in-house IT infrastructure to an IT environment that extends into the cloud, one challenge you cannot ignore is how to also transition your identity management (IdM) platform in such a way that you ensure the security of your new hybrid on-premises and cloud-based IT approach.

Indeed, IT organizations today must consider a number of things before they transition to a cloud-centric IdM strategy, including the likelihood that they will have to deal with complex security challenges posed by multiple identity storage silos.

 

IT organizations historically have had their own on-premises identity stores, containing directories such as Active Directory or Novell eDirectory. Their IdM challenge was to successfully federate these identity stores if, for example, they were doing business with, or merging with, another company and needed to tightly integrate the identity stores of both entities. This historical IdM federation challenge, as a result, was small and contained within a small number of organizations working together.

 

In today’s cloud-centric IT environment, the challenge of IdM federation has grown exponentially. As organizations extend their IT infrastructures to take advantage of the many cloud-based services available, they are faced with a proliferation of different on-premises and cloud-based identity stores, because their users have multiple IDs – both corporate and personal – that they use to access multiple cloud-based services.

 

The proliferation of cloud-based identity silos has clearly gotten out of control, so corporate IT organizations are now trying to regain control – starting by taking a tally of all the different cloud-based services being used by employees and then developing a strategy for managing these identities. Meeting this challenge increasingly falls to CIOs and their staffs.

 

How should they proceed? In many situations, the choice comes down to either integrating identities across their cloud-based services on a one-by-one basis, linking each one back into the on-premises system, or instead adopting an off-the-shelf product that links their on-premises IdM systems with many cloud-based services in one go, thereby avoiding the complexity of doing it one by one.

 

Consider your options

 

IT organizations must consider three options in developing an IdM strategy that successfully manages multiple identity silos in a hybrid on-premises and cloud-based IT environment. If a company’s employees use a number of cloud services – say, Salesforce.com, Dropbox, Concur for expenses and Google Apps for email – then IT can either (a) give each employee a separate password for every service (which the employee is likely to forget), (b) have the employee use the exact same password everywhere (but then require it to be changed everywhere at once if they suspect it has been compromised), or (c) enable the employee to log into all of the cloud services by logging into the company’s system with a single sign-on.

 

Option (c) is clearly preferable. It’s simple for the employee to remember and use, and it’s much more easily provisioned and managed by IT. The employee never needs to wrestle with, or even know, the passwords for the company’s cloud-based services.

 

Complete visibility is critical to the success of this single sign-on strategy: the IT organization must have visibility of all the cloud-based services being used. Once single sign-on has been implemented, IT can turn cloud-based services off just as easily as it turned them on.

 

One of our clients – an education management firm that manages over 100 private colleges with 150,000 students in North America – provides an excellent example. Each college has its own portal and homepage providing students with access to the various student services offered by the college. The common portal creates a common identity across all of the student applications and services – making it far easier for IT to manage. As a result, the student’s college password allows single sign-on access to a number of different cloud-based applications, including various Google offerings, that would otherwise each require its own password. When students log into the college portal, we log them into Gmail, Google Apps or Google Docs based on a single sign-on. And when the students graduate and leave college, we turn off single sign-on and remove the account.

 

Leverage industry standards

 

Industry standards also must be considered when extending your IdM platform into the cloud. Standards are shaping the way the federation of on-premises and cloud-based services can be set up and managed to ensure security.

 

A number of standards are in use today. For single sign-on, there are OpenID and OAuth, which allow you to log into one service – your internal systems – and then use that identity to log into other systems without handing your password over to them. You log into an Identity Provider and, if the other systems (the Service Providers) trust the Identity Provider, they allow you to log in. For example, powered by OpenID and OAuth, today you can log into other systems using your Facebook, Google or Twitter credentials, without even needing to create a new account and password for each of those systems.
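
As a rough illustration of the flow just described, here is a minimal sketch of the OAuth 2.0 authorization-code exchange in Python. The endpoint URLs, client credentials, and scope are placeholders, not those of any real Identity Provider:

    # Minimal sketch of the OAuth 2.0 authorization-code flow described above.
    # Endpoint URLs, client credentials, and scope are placeholders.
    import urllib.parse
    import requests

    AUTHORIZE_URL = "https://idp.example.com/oauth/authorize"
    TOKEN_URL     = "https://idp.example.com/oauth/token"
    CLIENT_ID     = "my-client-id"
    CLIENT_SECRET = "my-client-secret"
    REDIRECT_URI  = "https://app.example.com/callback"

    def authorization_request_url():
        """Step 1: send the user to the Identity Provider to log in."""
        params = {"response_type": "code", "client_id": CLIENT_ID,
                  "redirect_uri": REDIRECT_URI, "scope": "profile"}
        return AUTHORIZE_URL + "?" + urllib.parse.urlencode(params)

    def exchange_code_for_token(code):
        """Step 2: the Service Provider swaps the one-time code for a token.
        The user's password is never handed to the Service Provider."""
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code", "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET})
        resp.raise_for_status()
        return resp.json()["access_token"]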

 

Another standard, SCIM (Simple Cloud Identity Management), is also important to consider when you are addressing the need for cloud-based user management. SCIM allows you to manage user identity in the cloud, enabling your users to be easily provisioned and de-provisioned for cloud-based services.
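
For illustration, a minimal sketch of SCIM-style provisioning and de-provisioning might look like the following. The service endpoint and bearer token are placeholders, and the payload follows the general shape of the SCIM 1.x core user schema:

    # Sketch of provisioning a user via SCIM, per the discussion above.
    # The endpoint and auth token are placeholders.
    import requests

    SCIM_BASE = "https://idm.example.com/scim/v1"
    HEADERS = {"Authorization": "Bearer PLACEHOLDER"}

    new_user = {
        "schemas": ["urn:scim:schemas:core:1.0"],
        "userName": "jdoe",
        "name": {"givenName": "Jane", "familyName": "Doe"},
        "emails": [{"value": "jdoe@example.com", "primary": True}],
        "active": True,
    }

    def provision(user):
        """Create the user in the cloud identity store."""
        r = requests.post(SCIM_BASE + "/Users", json=user, headers=HEADERS)
        r.raise_for_status()
        return r.json()["id"]

    def deprovision(user_id):
        """Remove the user when they leave the organization."""
        r = requests.delete(SCIM_BASE + "/Users/" + user_id, headers=HEADERS)
        r.raise_for_status()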

 

Adhering to standards to build a secure cloud-based IdM platform, however, is not enough. You also must construct an entire framework for identity management, including an audit trail for transactions to ensure identities are not compromised and monitoring that provides a real-time view of what’s going on. This framework also must provide the scalability needed to accommodate all of the users who need to log in at any one time.

 

… And don’t forget the regulations

 

Last, but certainly not least, you must be aware of and address the regulations governing the use of the cloud for IdM. Different jurisdictions have different rules governing data retention, how and where information about your users can be stored, and the user notifications required regarding changes to personal information stored in the cloud. These regulations can vary greatly from country to country and must be considered based on the geographies in which your company is doing business.


Cloud Security Best Practices: Sharing Lessons Learned

By Frank Simorjay, Sr. Product Marketing Manager, Microsoft Trustworthy Computing

Compliance regulations and frameworks can be difficult to comprehend and even harder to explain to management when it’s time to invest in mastering IT governance. The Cloud Security Alliance (CSA) has taken steps to make this complex effort simpler for everyone working toward compliance. Using the CSA’s Security, Trust & Assurance Registry (STAR), Microsoft has worked to outline the critical thinking steps required to understand the complexity of complying with any framework or regulation. The STAR assessment makes this effort a bit cleaner and simpler to understand.

To help customers better understand compliance efforts, Microsoft published a new Microsoft approach to cloud transparency white paper that illustrates the benefits of using the STAR to enable the transparency of cloud services. The paper focuses on three cloud service offerings – Windows Azure, Office 365, and Microsoft Dynamics CRM – and provides visibility into how these services are operated, using the evaluation criteria documented in the CSA STAR. Since the ISO 27000 standards family is important to many of Microsoft’s customers, the paper also outlines how Microsoft’s cloud services are operated to meet or exceed those standards. The paper provides an overview of various risk, governance and information security frameworks and standards to consider when looking at cloud computing as a solution, including ISO/IEC 27001, the Control Objectives for Information and related Technology (COBIT) framework, and the NIST Special Publication (SP) 800 series.

If you are considering using a cloud service provider, check to see if they have submitted answers to the CSA STAR to learn more about their security and privacy practices. If the cloud provider has not submitted a self-assessment to the CSA STAR, you can use the free framework provided by the CSA to ask the cloud provider the questions that are relevant to your organization. Understanding how your cloud provider manages security and privacy in operating their cloud services can help to minimize headaches that might arise down the road.

Frank Simorjay, CISSP, is a Sr. Product Marketing Manager in Microsoft Trustworthy Computing, where he heads up the trustworthy cloud effort for security, privacy and reliability. Most recently Frank was responsible for the Security Intelligence Report (www.microsoft.com/sir) and served as a security subject matter expert. Frank is the founder and a long-standing member of ISSA Puget Sound, and a standing CPAC member for ISSA International. Additionally, Frank is a CSA solutions provider representative. Formerly, Frank was a Security Product and Program Manager and compliance subject matter expert (SME) for Microsoft Solutions Accelerator. Prior to joining Microsoft, Frank was a senior engineer for NetIQ and for NFR Security, where he designed security solutions for enterprise networks in banking and telecommunications for more than 10 years.


Think beyond securing the edge of the enterprise. It’s time to secure the “edge of the Cloud”

By Ed King, VP Product Marketing, Vordel

 

Everyone is familiar with the notion of securing the edge of the enterprise. With the growing adoption of cloud technologies, IT must now also think about securing the “edge of the Cloud” – the perimeter around any Cloud environment where it touches the open Internet. In this post we examine just what security at the edge of the Cloud means and how enterprises can achieve a Cloud security strategy that is consistent with their existing on-premise strategy. How an enterprise chooses to secure the edge of the Cloud has a direct impact on what Cloud strategy it adopts. The various flavors of SaaS, IaaS, PaaS, private, public and hybrid Cloud solutions all have individual security requirements, which we will examine.

 

Edge-of-the-enterprise security includes what gets deployed in the demilitarized zone (DMZ) and beyond, and can be divided into the following three areas: network, application and data security.

 

  • Network security focuses on keeping the bad guys out and securing communication channels.  Technologies include network firewalls, intrusion prevention and detection systems (IDS/IPS) and virtual private networks (VPN).
  • Application security is about giving good guys access to approved resources under the right context, by securing application access points. Technologies include web application firewalls (WAF), application/XML/SOA gateways and identity management.
  • Data security is about keeping sensitive data on the inside, as well as securing any data going out. Technologies include data leakage prevention (DLP), encryption and tokenization.

 

So how does edge-of-the-enterprise security apply to the edge of the Cloud? With public and hybrid Clouds there is at least one third-party company involved, so who owns what aspects of security needs to be clearly defined.  By default, an enterprise should assume it has the ultimate responsibility for securing the Cloud services it uses, including security at the edge of the Cloud, despite whatever reassurances the Cloud provider might offer.  The enterprise needs to define a security strategy for Cloud usage and delegate to Cloud service providers what they can provide and manage.

 

Security for the edge of the Cloud differs by the type of Cloud-based service.

 

Software-As-A-Service (SaaS) Security

A SaaS vendor owns its application delivery infrastructure, which makes things simple but limiting for enterprises looking to secure SaaS. Enterprises have no say in how security is implemented and have to trust the SaaS vendor’s documented security policies and SAS 70 certification. In an earlier CSA blog we discussed how enterprises can adopt a “don’t-trust” model when dealing with Cloud-based services.

 

While network security and data security are take-it-or-leave-it, some SaaS vendors offer a few application security options. Multi-factor authentication is a popular option, especially with software tokens such as VeriSign Identity Protection (VIP). Many SaaS vendors also provide SAML (Security Assertion Markup Language) based integration so enterprise users can single sign-on from on-premise identity management platforms such as CA SiteMinder, IBM Tivoli Access Manager, or Oracle Access Manager. OAuth-based federation is also quickly catching on for enterprise use. This in-depth article on SSO to Cloud-based services provides extra reading.

 

While enterprises have limited choice when it comes to directly securing the edge of SaaS Clouds, they should, at a minimum, ensure the protection of their API keys. API keys are used to authenticate applications calling SaaS APIs. API keys are frequently distributed directly to application developers and hard-coded into applications – an insecure and unscalable practice. Consider using a DMZ-based solution to securely manage and store the API keys, and to broker the authentication of on-premise applications to the SaaS. Technologies designed for this purpose go by a number of different names: API Server, API Gateway or Cloud Service Broker. These technologies also monitor data traffic going to the Cloud to block, mask or encrypt sensitive data. This podcast offers more information about API key security.
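
As a rough sketch of that brokering pattern, the toy proxy below holds the SaaS API key on the broker host and attaches it server-side, so internal applications never embed the key. The URLs and header names are assumptions for illustration, not any vendor’s actual interface:

    # Sketch of the API-key brokering pattern described above: internal
    # applications call the broker, which holds the SaaS API key and forwards
    # the request, so keys are never hard-coded into application code.
    import os
    import requests
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SAAS_BASE = "https://api.saas-provider.example.com"
    API_KEY = os.environ["SAAS_API_KEY"]   # stored only on the broker host

    class BrokerHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Forward the path to the SaaS, attaching the key server-side.
            upstream = requests.get(
                SAAS_BASE + self.path,
                headers={"Authorization": "Bearer " + API_KEY})
            self.send_response(upstream.status_code)
            self.send_header("Content-Type",
                             upstream.headers.get("Content-Type",
                                                  "application/json"))
            self.end_headers()
            self.wfile.write(upstream.content)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), BrokerHandler).serve_forever()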

 

Infrastructure-As-A-Service (IaaS) Security

An IaaS vendor provides hardware, the operating system and some software options.  Network security is typically provided as a standard service.  Communication is secured using SSL and VPN.  As with SaaS, network security for IaaS is also a take-it-or-leave-it proposition.  In contrast to SaaS, since the enterprise has complete ownership of what applications are deployed in the IaaS environment, it has complete responsibility and a good degree of flexibility when securing its applications at the edge of the Cloud.

 

Application security starts with an application firewall. Network firewalls are not content-aware and cannot protect applications against attacks such as cross-site scripting and injection. A WAF is good for protecting web applications but provides limited protection for APIs. API security products offer comprehensive API protection but lack the WAF’s self-learning capabilities. Application firewalls should be deployed as standard services for the IaaS, so that every time an application is deployed in the IaaS, an application firewall service is spun up to protect it. Look for firewalls that can be deployed in the Cloud, can be spun up and down elastically, and can protect REST/JSON style APIs.
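
To make the distinction concrete, here is a toy content-aware filter for a JSON API, written as a WSGI middleware. The blocked patterns are purely illustrative; a real application firewall uses far richer, self-updating rule sets:

    # Toy illustration of content-aware filtering for a JSON API, as a WSGI
    # middleware. The patterns are illustrative only.
    import re
    from io import BytesIO

    SUSPICIOUS = [
        re.compile(rb"<script", re.I),               # cross-site scripting probe
        re.compile(rb"('|%27)\s*(or|and)\s", re.I),  # SQL injection probe
    ]

    def api_firewall(app):
        def guarded(environ, start_response):
            size = int(environ.get("CONTENT_LENGTH") or 0)
            body = environ["wsgi.input"].read(size)
            if any(p.search(body) for p in SUSPICIOUS):
                start_response("400 Bad Request",
                               [("Content-Type", "application/json")])
                return [b'{"error": "request blocked"}']
            # Re-expose the consumed body to the wrapped application.
            environ["wsgi.input"] = BytesIO(body)
            return app(environ, start_response)
        return guarded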

 

Once the IaaS perimeter is protected from attacks, the next task is to control access to the application resources, including the API and data. Identity management technologies typically handle access control and single sign-on (SSO) to enterprise applications deployed on-premise. For IaaS environments that are accessed exclusively via VPN, enterprises can treat the applications deployed there like on-premise applications. This typically requires deploying an agent as the policy enforcement point (PEP) for each application. Deploying agents can be expensive and error-prone, especially in a highly dynamic IaaS environment where applications are spun up and down frequently. Using a proxy-based PEP is more scalable and secure for IaaS-deployed applications. For applications that need to be accessible to third parties, consider using a federation model instead of requiring third parties to obtain VPN access. To enable proxy-based access control and federation, enterprises have two technology options. Cloud-based federation services are single-purpose products that do well for user/browser access; they are a good low-cost option if API access is not important. To support both user and API access to IaaS-deployed applications, consider the API security products mentioned above.

 

For data security, DLP technology can work equally well for applications deployed in IaaS and for applications deployed on-premise. DLP should be made available as a standardized service that can be automatically provisioned as part of the IaaS provisioning process.  Since API security and security gateway products offer standard DLP functions as well, it may be feasible to use those products for both application and data security.

 

Platform-As-A-Service (PaaS) Security

PaaS lets enterprises develop and deploy applications completely in the Cloud. While there are public PaaS offerings such as CloudFoundry.com, Force.com, and Engine Yard, enterprise adoption of PaaS will likely be predominantly in the form of private Clouds. In terms of network security, application security and data security, PaaS is very similar to IaaS. Regardless of how an application is developed, once it is deployed in the Cloud, the run-time security is much the same.

 

What is unique about PaaS is the infrastructure services required for the development of applications, especially those services that need to connect PaaS applications to on-premise systems.  These services can be for integration of security, data, process or management.  Take identity management as an example.  Applications developed on PaaS should not have their own identity silos in the Cloud.  These applications will need access to identity, policy and entitlement data from on-premise identity management systems.  For instance, developers need an Account Service in the PaaS that can provide identity data from the corporate directory.  Leading PaaS providers do offer a library of standard infrastructure services, but the backend integrations that connect these services to on-premise systems remain the responsibility of the enterprise.

 

Creating these infrastructure services for PaaS involves two parts. First, create Cloud-ready APIs for on-premise systems. In other words, create REST-style APIs out of existing SOAP-based web services (or Java API, JMS, MQ, PL/SQL or other legacy interfaces). Use a technology like an API Server to create, manage, deliver and secure these APIs so they can be exposed to the PaaS. Next, at the edge of the PaaS Cloud, deploy API Brokers to mediate the security and protocol requirements from different on-premise API sources. Vordel’s CTO Mark O’Neill has blogged about an interesting example of edge-of-the-Cloud security for VMware’s Horizon Application Manager. See post here.
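
As a minimal sketch of the first step, the toy facade below exposes a legacy SOAP service as a REST/JSON API. The endpoint, namespace, and operation names are placeholders, not a real service’s interface:

    # Sketch of exposing a legacy SOAP web service as a Cloud-ready REST/JSON
    # API, as described above. Endpoint and field names are placeholders.
    from flask import Flask, jsonify
    import requests

    app = Flask(__name__)
    SOAP_ENDPOINT = "https://legacy.example.com/OrderService"

    ENVELOPE = """<soapenv:Envelope
      xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
      xmlns:ord="http://example.com/orders">
      <soapenv:Body><ord:GetOrder><ord:id>{order_id}</ord:id></ord:GetOrder></soapenv:Body>
    </soapenv:Envelope>"""

    @app.route("/orders/<order_id>")
    def get_order(order_id):
        # Translate the REST GET into the legacy SOAP call.
        resp = requests.post(SOAP_ENDPOINT,
                             data=ENVELOPE.format(order_id=order_id),
                             headers={"Content-Type": "text/xml"})
        # A real broker would parse the SOAP response; here it is returned wrapped.
        return jsonify({"order_id": order_id, "raw_soap": resp.text})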

 

It’s Time To Take Edge-of-The-Cloud Security Seriously

Too often, security is an afterthought when enterprises adopt new technologies. Cloud is no exception. Cloud computing introduces new wrinkles to existing security best practices and technologies, and it creates an additional perimeter, the edge of the Cloud, that the enterprise must now secure. How an enterprise chooses to secure the edge of the Cloud has a direct impact on what Cloud strategy is feasible. SaaS, IaaS, PaaS, private, public and hybrid Cloud solutions all carry their own unique security requirements that need to be factored in. The good news is that enough security technologies already exist today to make secure Cloud computing a reality even for the most risk-averse enterprises.

 

CNIL (French Data Protection Authority) recommendations on the use of cloud computing services

On June 25, CNIL – the French Data Protection Authority – published its recommendations on the use of cloud computing services. These recommendations are the result of a research project on cloud issues, which started in the fall of 2011 with a consultation with industry. The documents released by CNIL include a summary of the research, a compilation of the responses received to the consultation, and a set of recommendations.

Below is a summary of the recommendations, provided by CSA’s General Counsel, Francoise Gilbert, and reproduced here from her blog with permission:

http://www.francoisegilbert.com/2012/06/cnil-on-cloud-computing/

The recommendations include:

  • Clearly identify the type of data and type of processing that will be in the cloud
  • Identify the security and legal requirements
  • Conduct a risk analysis to identify the needed security measures
  • Identify the type of cloud service that is adapted for the contemplated type of processing
  • Choose a provider that provides sufficient guarantees

The CNIL document also provides an outline of the contractual clauses that should be included in a cloud contract and contains “Model Clauses” that may be added to contracts for cloud services.  These model clauses are provided as a sample, are not mandatory, and can be changed or adapted to each specific contract.

Except for a high level summary in English, the documents described above are currently available only in French on the CNIL website.  According to CNIL representatives, English translations of these documents should be available shortly.

  • Overview of CNIL Recommendation – Summary in English:

http://www.cnil.fr/english/news-and-events/news/article/cloud-computing-cnils-recommandations-for-companies-using-these-new-services/

  • Overview of CNIL Recommendation – Summary in French

http://www.cnil.fr/la-cnil/actualite/article/article/cloud-computing-les-conseils-de-la-cnil-pour-les-entreprises-qui-utilisent-ces-nouveaux-services/

  • Compilation of the responses to the CNIL consultation on cloud computing (in French)

http://www.cnil.fr/fileadmin/images/la_cnil/actualite/Synthese_des_reponses_a_la_consultation_publique_sur_le_Cloud_et_analyse_de_la_CNIL.pdf

  • Recommendation for companies wishing to use cloud services (in French)

http://www.cnil.fr/fileadmin/images/la_cnil/actualite/Recommandations_pour_les_entreprises_qui_envisagent_de_souscrire_a_des_services_de_Cloud.pdf


Free Your Data & the Apps Will Follow – But What About Security?

About Author
Mark O’Neill is CTO of Vordel, whose products enable companies to connect to mobile and cloud services.


Application Programming Interfaces (APIs) represent such an important technology trend that new business models are evolving on top of them, and this has led to the term “the API economy”. The API economy encompasses API developers, the businesses providing the APIs, the businesses hosting APIs and app developers. This growing API economy has produced a shift in the mindset of many organizations, which are now making access to internal data readily available to third parties, enabling partners and customers to develop value-added applications on top of this data. As such, many organizations no longer hold information close, but actually seek to make it available for external developers to write apps on top of the data. While many organizations are naturally concerned about the security risks posed by opening up and sharing access to data, and about how they can derive long-term revenues from new API-led business models, the good news is that these concerns are being addressed. In fact, if organizations are not prepared to play in the API economy, they run an even greater risk of being left behind. In this article we look at some of the security challenges APIs pose, and how these can be addressed to ensure organizations don’t miss out on the opportunities APIs offer.
The Organization is now the Platform

APIs thrive on data. Examples include shipping information APIs (shipping data), financial quote APIs (financial data), and geographic APIs (location data). The popular maxim around the API economy notes that if an organization is willing to free its data, the applications will follow.

This new paradigm shift driven by APIs has also had an impact at board room level. CEOs now expect their CIOs and CTOs to be able to showcase iPhone and Android app versions of their latest service offerings. However, rather than asking “why are we not building iPhone applications,” the CEO should be asking, “why aren’t we allowing others to write iPhone applications on top of our data?” In other words, the goal of the organization should be to become a transparent platform for serving up data to third parties who can develop mobile apps on top of this platform. For example, if a financial services company provides APIs enabling any developer to write applications on top of its data, it effectively becomes a platform itself.


Secure API Delivery

So we’ve seen how APIs enable enterprises to deliver business services via Cloud, mobile, and partner channels quickly and flexibly. Enterprises need an agile API Server platform to ensure quick time-to-market with new business services. APIs handle critical business transactions and often have a direct impact on customer interactions and the business’s ability to execute. Poor API security and performance can result in lost sales, missed opportunities and an inability to deliver. Every API requires a supporting infrastructure to make sure the APIs are properly managed, delivered, and secured.

 

Strong security is also essential, as organizations need to monitor any suspicious usage of APIs so that their APIs can be safely deployed without compromising the data. Critical business functions such as ordering, fulfilment and payment are conducted via APIs, and attacks on these business-critical services can result in loss of revenue and sensitive data. On the one hand, “enemy fire” attacks and exploits are becoming more sophisticated and organized; on the other hand, the proliferation of API clients is subjecting APIs to increased levels of “friendly fire” from poorly engineered or malfunctioning clients. Organizations need to protect their APIs from both enemy and friendly fire alike.

Protect APIs Against Enemy and Friendly Fire

Threats that organizations need to consider protecting their enterprises against include the common attacks outlined in the NIST SP 800-95 document “Common Attacks Against Web Services”, which include:

  • Denial of service attacks
  • Command injection attacks
  • Malicious code, virus
  • Sniffing
  • Spoofing, tampering, and impersonation
  • Data harvesting
  • Privilege escalation
  • Reconnaissance

The increase in both the number and variety of API clients can also lead to a larger number of poorly engineered clients, as well as an increase in incidents of client malfunction. A misbehaving client repeatedly sending requests can cause as much damage as a denial-of-service attack. Organizations need to protect their APIs from potential “friendly fire” by monitoring API call volume and client behaviour. Clients exhibiting disruptive behaviours can be blocked or throttled, as the sketch below illustrates.
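
A minimal sketch of such throttling, assuming a simple token-bucket policy with illustrative rates (not a recommendation for any particular product), might look like this:

    # Minimal per-client token-bucket throttle, illustrating how "friendly
    # fire" from a misbehaving client can be contained. Rates are illustrative.
    import time
    from collections import defaultdict

    RATE = 10.0      # tokens replenished per second, per client
    BURST = 20.0     # maximum bucket size (short burst allowance)

    _buckets = defaultdict(lambda: {"tokens": BURST, "stamp": time.time()})

    def allow_request(client_id):
        """Return True if this client may make a call right now."""
        b = _buckets[client_id]
        now = time.time()
        b["tokens"] = min(BURST, b["tokens"] + (now - b["stamp"]) * RATE)
        b["stamp"] = now
        if b["tokens"] >= 1.0:
            b["tokens"] -= 1.0
            return True
        return False   # block or throttle the disruptive client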

 

Conclusions
APIs are increasingly being exposed to larger and more diverse populations of developers and applications. With this increased exposure inevitably comes increased levels of operational and security risk. To guarantee good availability and user experience, IT must have security, control and monitoring capabilities as part of its API delivery platform. Having an API Server to manage, deliver and secure APIs is central to any coherent API strategy.

 

Outline of BCR for Processors Published by Article 29 Working Party (EU)

On June 19th, the Article 29 Working Party, which is composed of Data Protection Authorities from the member states of the European Union, released an important opinion on the use of a legal means of moving personal data outside the borders of the EU, called Binding Corporate Rules (BCRs). The guidelines apply to data processors, which has implications for cloud computing in Europe. The following blog entry on the Working Party’s opinion was written by the external legal counsel of the CSA, Ms. Francoise Gilbert of the IT Law Group. We repost it here with her permission.

Outline of BCR for Processors Published by Article 29 Working Party

Posted by fgilbert on June 20th, 2012

On June 19, 2012, the Article 29 Working Party adopted a Working Paper (WP 195) on Binding Corporate Rules (BCR) for processors, to allow companies acting as data processors to use BCR in the context of transborder transfers of personal data, such as in the case of cloud computing and outsourcing.

WP 195 includes a full checklist of the requirements for BCR for Processors and is designed both for companies and for data protection authorities. The document outlines the conditions to be met in order to facilitate the use of BCR for processors, and the information to be included in applications for approval of BCR filed with the Data Protection Authorities.

Are Network Perimeters the Berlin Walls of Cloud IdM?

A single enterprise wide identity and access management (IAM) platform is a noble but unattainable goal. The network perimeter is now a metaphorical “Berlin Wall” between the two identity platform domains of Cloud and On-Premise. It is time for enterprises to formalize a strategy of integrating their IAM silos using identity middleware.

Over the last decade, Identity and Access Management (IAM) has grown into a well-established product category anchored by the three big vendors: CA, IBM, and Oracle. Despite all the hard work and technologies developed, most customers have implemented just basic web single sign-on (SSO), have provisioned only a handful of core systems, and still have far too many directories. Oh, and then there is still that Microsoft problem. The integration of Microsoft technologies such as SharePoint with enterprise IAM is still like mixing oil and water. Microsoft-centric customers turn to Microsoft-centric vendors such as Quest and Omada, while other customers treat Microsoft integration as an afterthought. Furthermore, while most organizations are still struggling to implement enterprise-wide IAM across on-premise assets, along came Cloud Computing to muddy the water even more.

Cloud-based services pose a new set of challenges, as they are not owned by the enterprise and each service offers its own flavor of IAM integration. Vordel’s CTO Mark O’Neill has written extensively about the different challenges of IAM integration for IaaS, PaaS, and SaaS; he affectionately refers to this topic as “covering your *aaS”. As is often the case, the leading IAM vendors have been slow to address the Cloud integration problem. Seeing an opportunity, new IAM vendors have emerged offering Cloud-based IAM services. This group includes startups such as Okta, Symplified, and TriCipher (acquired by VMware), as well as large vendors new to the IAM space, like Intel/McAfee and Symantec. The basic offering of these Cloud-based IAM services is a Security Assertion Markup Language (SAML) based security token service (STS) with pre-built SSO integrations to popular Cloud-based services, usually referred to as “application catalogs”. There is usually some means of integration with an enterprise directory using an on-premise agent. These services make it very simple to SSO into the most popular Cloud-based services, and have gained good traction with enterprises large and small. That is positive progress, right? Not exactly.

Instead of further consolidation and moving towards a true vision of enterprise-wide IAM, enterprises now find themselves with more identity silos than ever.  Let me count the ways:

  • “Enterprise IAM” solutions from CA, IBM, Oracle, or one of the smaller vendors.  Many large enterprises have more than one of these.
  • Microsoft silo with integrations directly to Active Directory using Integrated Windows Authentication (IWA) and Active Directory Federation Services (ADFS).  Each Windows domain or SharePoint instance may be an individual silo.
  • Many point solutions exist specifically to solve the SharePoint mobile access challenge.
  • Mainframe IAM integration is notoriously challenging.  Instead of tackling RACF and ACF2 integrations, most companies opt to delay these projects, hoping these legacy applications will be modernized soon.
  • Cloud-based IAM for Cloud-based services.  This is often adopted by the business, bypassing enterprise IAM efforts.
  • Large business application vendors such as Oracle and SAP continue to push integrated IAM capabilities.  This limited interoperability is by design, leveraging their business application footprint as a means to push their middleware sales.

 

This proliferation of IAM silos has led to an explosion of agents, proxies, plug-ins and integration modules. For many enterprises, the management of these integration points consumes the majority of their IAM project resources. Some have long since lost track of how many of these integration modules exist in the enterprise.

I think it is time to pronounce that a single enterprise-wide IAM platform is a noble but sadly unachievable idea. While enterprises should strive to reduce the number of IAM silos, at some point the effort becomes prohibitively expensive. However much we wish otherwise, Cloud-based IAM services are not the solution to this problem; they just compound it. It is time for enterprises to formalize a strategy of integrating their IAM silos. It is time to introduce the concept of “identity middleware”. Identity middleware is a class of technologies that integrates identity silos introduced by different technologies, vendors, standards, network boundaries and business ownerships. Identity middleware does not duplicate capabilities offered by standard IAM products, and it does not introduce another identity silo. Its sole purpose is to consolidate IAM silo integrations into a single technology and platform to enhance manageability and scalability. Identity middleware should have these capabilities at a minimum (a token-exchange sketch follows the list):

  • Exchange standard-based and proprietary tokens (security token service)
  • Authentication scheme that can handle combination of user, device and application identities
  • Encryption and signing
  • SSL termination
  • Certificate and key management, with integration to key stores and certificate authorities (CA), as well as integration to Hardware Security Modules (HSM)
  • Token and session caching and management
  • Add, delete, and modify security artifacts to and from messages and APIs running on HTTP, FTP, TCP, and other popular protocols
  • Configurable orchestration of IAM mediation tasks
  • Route messages and API requests based on policy
  • Out-of-the-box integrations with leading IAM products and services
  • Support leading standards, such as SAML, OAuth, WS-Security, XACML, OpenID, etc.
  • Secure operations at the edge of the enterprise and edge of the Cloud to mediate both Cloud-based and on-premise access
  • High performance and low latency
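
As a toy illustration of the first capability on this list, the sketch below shows a security token service that exchanges a proprietary token for a SAML-style assertion. The token store and assertion are heavily simplified for illustration, and a real STS would sign the assertion:

    # Toy sketch of a core identity-middleware capability: a security token
    # service (STS) that exchanges a proprietary token for a SAML-style
    # assertion. Heavily simplified; a real STS would sign the assertion.
    import datetime
    import uuid

    LEGACY_SESSIONS = {"opaque-token-123": "jdoe"}   # proprietary token -> user

    def exchange_token(legacy_token):
        """Validate the proprietary token and issue a SAML-style assertion."""
        user = LEGACY_SESSIONS.get(legacy_token)
        if user is None:
            raise PermissionError("unknown token")
        now = datetime.datetime.utcnow().isoformat() + "Z"
        return (
            f'<saml:Assertion ID="_{uuid.uuid4()}" IssueInstant="{now}" '
            f'xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">'
            f'<saml:Issuer>https://sts.example.com</saml:Issuer>'
            f'<saml:Subject><saml:NameID>{user}</saml:NameID></saml:Subject>'
            f'</saml:Assertion>'
        )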

 

IAM is not a pure infrastructure technology. IAM technology shares many of the characteristics of business systems: it is closely integrated with, and often embedded within, business systems, and it needs to integrate with the IAM systems of business partners. Just as application integration requires mediation middleware, so does IAM integration.

Where can you find identity middleware technologies? While identity federation technologies handle standard token mediation tasks (mostly SAML-based), they lack the configurable orchestration and message manipulation capabilities required of a true identity middleware platform. Today your best bet is to look to integration technologies such as application gateways and enterprise service buses.


Look for a gateway or service bus that offers:

  • Out-of-the-box integrations with leading IAM products and services
  • Strong support for Microsoft security technologies, namely Integrated Windows Authentication, Kerberos, and SPNEGO
  • Support for mainstream standards such as SAML and OAuth

If your use cases involve integration across network boundaries to Cloud, B2B, and mobile endpoints, then only the gateway will suffice, since an enterprise service bus is not suitable for deployment in the DMZ.

Ed King, VP Product Marketing, Vordel
Ed has responsibility for Product Marketing and Strategic Business Alliances. Prior to Vordel, he was VP of Product Management at Qualys, where he directed the company’s transition to its next-generation product platform. As VP of Marketing at Agiliance, Ed revamped both product strategy and marketing programs to help the company double its revenue in his first year of tenure. Ed joined Oracle as Senior Director of Product Management, where he built Oracle’s identity management business from a niche player into the undisputed market leader in just three years. Ed also held product management roles at Jamcracker, Softchain and Thor Technologies. He holds an engineering degree from the Massachusetts Institute of Technology and an MBA from the University of California, Berkeley.

Cloud Market Maturity

by Henry St. Andre, CCSK | Trust Office Director | inContact

The Cloud Security Alliance, in conjunction with ISACA, will be initiating a new working group to perform research on what it means to have market maturity in the Cloud. This is a very interesting subject for me. I have been working in the telecommunications and data industry for over 25 years. During that time, I have observed in real terms the application of the phrase ‘ahead of its time’ and what that can mean to a nascent industry or technology. As an example, people are amazed to discover that the technology that would become the fax machine was first invented in 1843 by the Scottish inventor Alexander Bain. Yet it took almost 100 years for the fax machine to become the common business tool it is today. Some of the technological factors that influence the maturation of a product include communication, computing, fabrication, miniaturization and materials. Ultimately, one of the most critical factors is whether or not the technology exists to manufacture the product or perform the functions in a cost-effective fashion, and whether there is sufficient ubiquity of that technology to allow the masses to utilize it. There are, however, two other important elements in the maturation of a product or service: are people psychologically disposed to using it, and is there a legal and regulatory environment that describes its use?

Psychology has a huge impact on the acceptance and use of a technology or product. I am 50 years old now, and I remember slide rules, record players, cassette tapes, typewriters, COBOL, acoustic modems, DEC Writers, Archie and Gopher. I have been engaged in technology all my life and am fairly comfortable with it, but still, I know that I view technology, and in particular the Internet, very differently than my children do. Take my smart phone. It does 101 things, and oh yeah, it makes phone calls. I know about those 101 things, and I use some of them, but the main reason I have a cell phone is to make calls. Personally, though, I prefer not to use a cell phone for calls; I think it is inferior to my traditional ‘land line’ phone, and I will use a land line if I have the choice. My children, on the other hand, use their smart phones for 101 different purposes, and sometimes to make phone calls. Similarly, I find that the maturity of the cloud as a product that is both used and accepted by the masses is not simply a function of whether the technology exists to provide the service in a cost-effective fashion, but also of whether people are comfortable using it. For this reason, I believe that the younger generations will be the greatest drivers of the cloud market and its maturation. People of my generation are adopting the cloud because of the economics. Our children will use the cloud because they will think it is the obvious way to do things.

Finally, laws and regulations: in some ways we hate them, but ultimately businesses need them. While it is true that businesses can be choked by over-regulation, it is also true that businesses flounder when there is uncertainty. An absence of laws and regulations establishing the rules of the game and the field of play creates uncertainty and fear for businesses, and uncertainty and fear can kill a business model. Because the cloud, and the technology that has supported and enabled it, has changed and developed so quickly, laws and regulations are struggling to keep up.

That is changing, and organizations like ISACA and the Cloud Security Alliance have been and will be instrumental in that change.

This Cloud Market Maturity project will be an important endeavor. The results and guidance from this project will provide legislators, technologists, consumers and businesses with the information each needs in order to further the progress of this new cloud model.

Outsourcing B2B Integration: The Forgotten Option

Business continuity remains a major concern for enterprises as they move more mission-critical processes to the cloud. Outsourcing B2B integration while ensuring cloud security in order to effectively integrate business processes is challenging at best and ambiguous at worst. All too often, IT professionals feel that they will lose the reliability and availability they need if they don’t implement an on-premise environment. However, there are strategic approaches to outsourcing integration that provide both a secure cloud environment for business processes and reliability and availability that extend beyond traditional borders.

 

Gartner defines cloud services brokerage as follows: “A model in which a business acts on behalf of consumers of one or more cloud services to intermediate and add value to the service being consumed. Providers of cloud services can also benefit through the establishment of an ecosystem of partners which enhances the provider’s service and draws customers to it.” (October 2010: Defining Cloud Services Brokerage: Taking Intermediation to the Next Level.)

 

When comparing outsourced B2B cloud integration to on-premise solutions, a major area of consideration is the security of the cloud implementation. The burden of addressing the needs of an enterprise’s partner community while moving to a more secure connection methodology is difficult, especially given the disparity of transport protocols in use. And let’s not forget that the cost of adhering to the multiple security compliance regimes that help safeguard the data can be astronomical. For example, an outsourcer gets to spread the cost of implementing PCI DSS compliance over multiple tenants: everyone benefits without the individual capital outlay.

 

Before implementing any cloud strategy, there is a basic set of questions that all organizations should address before moving forward, including: “Which cloud implementation is best for our company’s needs? Do we outsource the cloud or manage it ourselves?” Also be sure to educate yourself on common industry terms and jargon such as cloud outsourcing, cloudsourcing, and cloudware. As you continue to compare outsourced and on-premise cloud security concerns, you’ll notice that it ultimately boils down to whether both options can be as secure as enterprises require.

 

Clearly, one of the implementations an enterprise can address is B2B integration. The process of an enterprise extending its IT processes to its business and trading partner community – including customers, vendors, suppliers and distributors – is no easy task, but it can be done efficiently and securely. The pressure on enterprises to connect more closely with their partner communities, tear down walls and optimize business processes such as procurement, eCommerce, supply chain management, inventory visibility, and logistics optimization is higher than ever.

 

It has been debated whether enterprises really need B2B integration or whether companies can get by with putting their applications in the cloud and providing broader access. From conversations with enterprise customers, it is evident that there is a lot of pressure on IT departments today to reduce data center overhead and provide a more efficient way to incorporate others into their ecosystem.

 

Many in the industry also question whether providing B2B integration on-site is part of an IT department’s charter, or whether IT pros should instead spend their time on more strategic projects and initiatives that help drive revenue. If there is agreement to integrate processes, which more and more companies are considering, then the options are: keep things at the status quo; build it yourself and keep it on premise, as many IT departments do today; or outsource to a cloud-based platform.

 

Taking a look at these three options can certainly result in a lively discussion. For most organizations, keeping things at the status quo means manual processes, time-intensive quality control to deal with the errors that occur, a need for in-house expertise on subjects such as cloud security, and the loss of revenue and/or opportunity that comes from failing to implement in a timely and cost-effective manner. Building the cloud integration environment yourself and keeping it on-premise may solve a specific integration challenge, but it does not necessarily provide the broader implementation that is conducive to a changing business environment.

 

The burden of finding and selecting the right combination of software, middleware, appliances, and hardware falls on you, as opposed to relying on someone else who already has an environment where those decisions and tests have occurred. The outsourcer has invested its time and resources to ensure a secure and robust environment that other companies can leverage, allowing for quicker implementation and achievement of business goals. In fact, the high upfront and ongoing capital expense of creating a battle-tested cloud environment is clearly something all IT managers need to consider. Typically, the cost just to get started entails a $50K to $100K hardware and software expense; implementation and consulting services run about $25K to $50K or more; and ongoing support and maintenance costs at least $50K to $100K annually.
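
Using the midpoints of the figures cited above, a rough build-versus-outsource comparison can be worked out in a few lines; the outsourcer’s subscription fee below is a hypothetical figure for illustration:

    # Rough build-vs-outsource comparison using the figures cited above.
    # The outsourcer subscription fee is an assumption for illustration.
    BUILD_UPFRONT = 75_000 + 37_500    # midpoints: hardware/software + services
    BUILD_ANNUAL = 75_000              # midpoint of ongoing support/maintenance
    OUTSOURCE_ANNUAL = 40_000          # hypothetical subscription fee

    for years in (1, 3, 5):
        build = BUILD_UPFRONT + BUILD_ANNUAL * years
        outsource = OUTSOURCE_ANNUAL * years
        print(f"{years} yr: build ${build:,} vs outsource ${outsource:,}")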

 

By leveraging a cloud platform to integrate their business processes, however, companies don’t have to pay any of the upfront cost. Instead, they can leverage the power of the Internet without having to install additional hardware or software. The limited upfront cost is focused on getting an organization and its community on board quickly. The subscription-based model that the “outsourcer” adheres to is an operating expense and eliminates the capital expenditure approval process.

 

While there are criteria to evaluate when considering whether to build or outsource, many organizations will find that planning resources around the core competency of the business, as opposed to whether a team has the skill set to implement and manage B2B integration, will be another hurdle they must address. The ability to minimize time-to-market, enabling enterprises to be more competitive in a timely manner, is critical to meeting the demands of the ultimate consumer. Do enterprises have the resources needed to on-board partners quickly? Most, if not all, do not.

 

Last but not least, we must consider the security and compliance implications. When an outsourcer integrates data, it is important that transport security is built into its model as well, to ensure a secure, closed-loop data process. Since many companies work with partners that have their own security policies, it is unrealistic for the enterprise to expect its community to follow its security guidelines. An outsourcer mediates the disparate security policies to ensure a smooth and secure experience.

 

As companies continue to evaluate their cloud strategy and debate implementing an on-premise solution versus utilizing an outsourcer, there are many considerations to ponder. Discuss these issues, and you will find that there are many ways for your organization to successfully implement a cloud integration strategy.

About the Author:

Stuart Lisk is a Senior Product Manager for Hubspan, working closely with customers, executives, engineering and marketing to establish and drive an aggressive product strategy and roadmap.  Stuart has over 20 years of experience in product management, spanning enterprise network, system, storage and application products, including ten years managing cloud computing (SaaS) products. Stuart holds a Certificate of Cloud Security Knowledge (CCSK) from the Cloud Security Alliance, and a Bachelor of Science in Business Administration from Bowling Green State University.  For more information, go to www.hubspan.com or follow the company on Twitter @Hubspan

 

Configuration Compliance in the Cloud

By David Meltzer

 

As a member solution provider in the Cloud Security Alliance, paying careful attention to risk and planning for improvement is second nature for my own company’s security services. As a consumer of many start-up cloud services built completely outside the security industry, however, I have observed that building secure cloud services is a much more daunting task for companies not staffed with security experts. Asking an early-stage SaaS start-up to answer 197 questions about their risk and how they comply with the 98 items in the Cloud Controls Matrix is more likely to get a “You have got to be joking” and/or a virtual blank stare than any substantive assurances about security risk.

 

Vendors might look at a list of questions like the CSA Consensus Assessments Initiative Questionnaire and be overwhelmed by all the requirements. Vendors that want to provide a more substantive answer than ‘YES’ or ‘NO’ are probably also asking, ‘How do I get started with the basics?’

 

In this article, I’ll walk through one of the basic security building blocks that can turn an average start-up SaaS service into one that takes security seriously and can ‘pass muster’ with even the most paranoid security auditors found at companies like mine.

 

One requirement that cuts across a broad cross-section of controls in the Cloud Controls Matrix is the performance of infrastructure audits. Infrastructure audits always begin with a discovery process: you have to know everything in your infrastructure before you can determine whether it is secure. This seems straightforward, but it’s not as easy as you think. Do you know specifically how many assets you have, where they are, and what they are? Discovery can be a simple process if all management is centralized, but most companies can find a few surprising things (or a lot of things) pretty quickly. For example, what started as a few virtual instances with a single provider can quickly morph into multiple cloud infrastructure providers with a private network or two thrown in for good measure. At this point an asset inventory becomes a very valuable step. A variety of open-source and free solutions that automate basic network discovery are available, so if the answers to these infrastructure questions aren’t totally straightforward, it’s easy and free to get detailed, reliable answers.
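
As a bare-bones illustration of discovery, the sketch below sweeps an address range for a few common open ports. The network range and ports are assumptions; purpose-built free tools (nmap, for example) do this far better:

    # Bare-bones discovery sweep: try a TCP connection to a few common ports
    # across an address range. The range and port list are illustrative.
    import socket

    def discover(network_prefix="10.0.0.", ports=(22, 80, 443, 3389)):
        found = {}
        for host in range(1, 255):
            addr = network_prefix + str(host)
            open_ports = []
            for port in ports:
                s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                s.settimeout(0.2)
                if s.connect_ex((addr, port)) == 0:
                    open_ports.append(port)
                s.close()
            if open_ports:
                found[addr] = open_ports   # a live, answering asset
        return found

    if __name__ == "__main__":
        for asset, ports in discover().items():
            print(asset, ports)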

 

Once you know what is there, the next question to ask yourself is, ‘Do I have a security configuration policy for each of these systems?’  It is rarely necessary to create configuration policies yourself; the security industry has spent the last decade building policy templates for a wide range of operating systems, servers, devices, and applications.  The most prominent sources for these policies today are the Center for Internet Security (http://www.cisecurity.org/) and NIST’s Security Content Automation Protocol (http://scap.nist.gov/content/index.html).

These policies can be applied to your systems ‘as-is’ or used as a baseline and modified to fit your particular application needs.

 

Now that you have a policy, the next step is auditing the assets against the policy.  A variety of solutions exist for doing this – it can be a manual effort, a host-based approach applied system by system, or a network-based approach assessing the entire discovered network at once.  Both CIS and NIST have certification processes and publicly list the certifications they have awarded, so if you decide to use a vendor instead of assessing each asset manually, it’s easy to narrow down the options.
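
To make the auditing step concrete, here is a minimal host-based sketch in Python that checks a handful of SSH daemon settings against a baseline. The parameter names and expected values below are illustrative stand-ins, not an official CIS or NIST benchmark.

    # Audit /etc/ssh/sshd_config against a small, CIS-style baseline.
    BASELINE = {
        "permitrootlogin": "no",
        "passwordauthentication": "no",
        "x11forwarding": "no",
    }

    def audit_sshd(path="/etc/ssh/sshd_config"):
        settings = {}
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    parts = line.split(None, 1)
                    if len(parts) == 2:
                        settings[parts[0].lower()] = parts[1].lower()
        # Report every setting that is missing or differs from the baseline.
        return [
            f"{key}: expected {expected}, found {settings.get(key, '<unset>')}"
            for key, expected in BASELINE.items()
            if settings.get(key) != expected
        ]

    if __name__ == "__main__":
        for finding in audit_sshd():
            print("FAIL", finding)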

 

Automation of configuration auditing pays dividends quickly, but the frequency of updates to your production services will dictate how much re-auditing is necessary.  In an ideal closed-loop solution, changes to a configuration will immediately trigger an automated re-audit, giving you a constantly updated assessment of how closely the configurations of your production assets compare to the policies you’ve set.  With manual processes, weekly or monthly audits may be a more practical goal to set. Almost anyone who implements an automated configuration auditing program will start to see how quickly policy deviations creep into production services.  With quick detection, these configuration errors are just as easy to remediate as they are to detect.
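
Here is a minimal sketch of that closed loop, assuming a simple polling approach: hash the watched configuration file and re-run the audit whenever the hash changes. A production deployment would hook the change-management pipeline instead of polling.

    import hashlib
    import time

    WATCHED = "/etc/ssh/sshd_config"  # illustrative watched file

    def file_hash(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def watch_and_reaudit(audit, interval=60):
        last = file_hash(WATCHED)
        while True:
            time.sleep(interval)
            current = file_hash(WATCHED)
            if current != last:
                last = current
                audit()  # configuration changed: trigger an immediate re-audit

    if __name__ == "__main__":
        watch_and_reaudit(lambda: print("re-auditing", WATCHED))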

 

Implementing a configuration compliance program from scratch that includes discovery, policy assignment, and auditing doesn’t require a lot of time and produces one of the biggest ‘bangs for the buck’ in securing a service. And, perhaps more importantly, with a configuration compliance program in place you are able to produce evidence of compliance for future customers and auditors.  Such a program gives you a broad set of documented configuration baselines for your infrastructure (with little work on your part), a process for auditing compliance against them, and evidence of compliance, in the form of your audit output, for every asset in your infrastructure.

 

A solid configuration compliance program is the cornerstone of every cloud security program.  It pays immediate dividends with customers and auditors and is relatively inexpensive to put together.

Cloud Security Requires All Hands on Deck

Andrew Wild, CSO at Qualys, discusses how security postures and attitudes need to change as more and more IT functionality moves to the cloud

 

It’s clear there are many compelling reasons, both financial and productivity-related, for enterprises to move IT functionality into the cloud, so it’s not surprising that they’re moving quickly to adopt popular collaboration services like Box.net, Yammer, Jive, and the like. According to a recent study by business technology service provider Avanade, 74 percent of enterprises are using cloud computing, a 25 percent increase over results for the same survey in September 2009.  Of those organizations yet to adopt cloud services, three-quarters say the cloud is in their future plans. The migration of IT functionality into the cloud magnifies the importance of ensuring users understand how to use these services productively and securely, especially since security for cloud services is typically implemented by the cloud service provider: the enterprise has limited control over that security while retaining legal responsibility for regulatory compliance.

 

Understanding Controls

The use of cloud services brings many advantages to the enterprise, but it’s vital that everyone involved understand how the differences between cloud and traditional enterprise IT services impact information security.  Most organizations have defined security policies that provide administrative guidance for users about how to use IT services securely, as well as the responsibility of all users to safeguard privileged information.  In addition to these administrative security controls, enterprises typically implement technical controls that detect, or in some cases prevent, violations of those policies.  Examples of these technical security controls include data loss prevention (DLP) technologies, firewalls, and email security appliances, among others.  These technical controls are often used to prevent disclosure (both malicious and unintentional) of sensitive information.

 

The effectiveness of these controls is generally acceptable for on-premise security, but for cloud-based IT, they bring little or no benefit, because they’re designed to function inside the enterprise IT infrastructure. Technical security controls for cloud IT services are designed, implemented and managed by the cloud provider.  The specific technical security controls implemented by cloud vendors vary by provider, but in general, enterprise IT and Security staff have significantly less visibility into or control over them than a comparable in-house deployment.

 

Embracing Constant Change without Sacrificing Security

Cloud-based IT services typically provide a feature-rich, highly interactive experience for end users.  Because of the deployment model, cloud service providers can introduce new functionality and service enhancements frequently and rapidly, usually with no involvement from the organization’s IT or security team.   This dynamic and rapidly evolving environment is challenging for both end users and organizational IT and security staff.  End users may find it difficult to keep up with all of the new functionality, and may not be able to make full use of the features, leading to less than optimal productivity, while  IT and security staff will not usually have sufficient time to assess the possible impact of the new functionality on the security of the organization.

 

Security Awareness and End User Training:  More Important than Ever

 

These combined factors require an altered approach to end-user education across the enterprise. Now more than ever, every person who accesses company information must play an active role in ensuring the security of an organization’s information.  Enterprises must fully educate their employees about those responsibilities, as well as about how to use cloud IT services securely.  This is particularly true now that many employees access corporate networks from personal devices such as smartphones and tablet computers; a recent study by Cisco Systems found that a majority of individuals believe they are not responsible for protecting information accessed through such devices.[1]

 

Here are some ideas to begin implementing security awareness and IT training programs that ensure security in the face of the disruptive nature of cloud-based IT:

 

  1. Establish clear IT objectives for each cloud-based IT service that you select.
    Understanding how you expect the particular cloud-based IT service to be used is essential in order to evaluate the possible risks the service may pose to your organization.  You can’t always avoid the risk, but educating your end users about the security risks and the appropriate use of the service will go a long way toward minimizing that risk.
  2. Ensure end users understand their responsibilities.
    Make sure that end users fully understand their role in securing the organization’s information.  Far too often, employees believe that security is solely the purview of the IT security team, rather than a responsibility of every employee.  Your organization’s culture should reflect that global responsibility, so all employees understand the critical role they play, and IT security staff are seen as shepherds and helpers rather than guards and enforcers.
  3. Ensure that your information security program encourages end user participation.
    In the rapidly changing world of cloud-based IT services, it is very likely that end users will learn of new features and capabilities before your IT and security staff does.  You can take advantage of this – involved end users are more likely to provide feedback to the organization about how new features may introduce risk.  This kind of feedback from end users is critical to the participatory process, enabling IT security staff to adapt awareness training and security controls as appropriate to minimize the risks.

 

A Plea to Cloud IT Service Providers

Enterprise IT security staff understand the differences between cloud and on-premise IT services, so it’s very clear to them that most cloud IT service providers do not give the enterprise sufficient transparency into the implementation and ongoing management of security controls.

 

Cloud service providers must continue to work to improve the visibility they provide to their enterprise customers to ensure the proper implementation of technical security controls.  While the responsibility for implementing the technical security controls shifts to the cloud provider in a cloud-based IT service model, the responsibility for securing the enterprise’s data still belongs to the enterprise security team.  Cloud providers must enable the enterprise to integrate the security of cloud-based IT services with enterprise-managed IT services.  The ability to integrate with existing enterprise processes is critical for the enterprise to meet compliance requirements by leveraging existing security resources while adopting cloud-based IT services.

 

A few examples of the ways in which transparency can be improved include:

  1. Federated identity services, allowing the enterprise to own the management of its user identities.  Giving the enterprise the ability to manage its own identities allows it to leverage existing user account provisioning and de-provisioning processes and controls.
  2. Access to event logging for the purpose of auditing user activity.  Providing the enterprise with detailed event logging information, especially regarding user activity, allows it to leverage existing event management processes and controls (a sketch of this integration follows this list).
  3. Configurable options at the organization level to manage the sharing of information.  Allowing the enterprise to configure how information is shared on the cloud-based IT service enables it to ensure consistency with its information classification and handling processes.
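
For the second item, here is a sketch of what that integration might look like: pulling user-activity events from a provider's audit-log API into the enterprise's own event pipeline. The endpoint, token, and field names below are hypothetical, since every provider exposes a different API.

    import requests  # third-party HTTP client

    API_URL = "https://api.example-provider.com/v1/audit/events"  # hypothetical
    TOKEN = "REPLACE_WITH_API_TOKEN"

    def fetch_user_activity(since):
        resp = requests.get(
            API_URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"since": since},
            timeout=30,
        )
        resp.raise_for_status()
        for event in resp.json().get("events", []):
            # In practice, forward each event to the enterprise SIEM here.
            print(event.get("user"), event.get("action"), event.get("timestamp"))

    if __name__ == "__main__":
        fetch_user_activity("2012-01-01T00:00:00Z")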

Enterprise security professionals understand that cloud-based IT services are still maturing; cloud IT service providers should not forget that a lack of progress towards improved transparency will eventually impede the adoption of their services.

 

Kudos to Microsoft! 3 Offerings Now on the STAR Registry

We at the CSA want to offer a hearty congratulations to the team at Microsoft, for their leadership in completing and publishing STAR assessments for their products. As of today, Office 365, Windows Azure and Dynamics all have STAR assessments completed and published.

We applaud Microsoft for leading the way in bringing visibility and transparency of security best practices to the cloud.

For more information on the Microsoft STAR Registry entries, and to learn more about what their customers are asking for, visit their blog post.

Cloud Fundamentals Video Series: Bring Your Own Device and the Cloud

Another great video out on the Trustworthy Computing site…

This latest video features Tim Rains, director, Trustworthy Computing, speaking with Jim Reavis of the CSA about the consumerization of IT and the issues that can be encountered when employees place an organization’s data on their personal devices such as smart phones. The challenges are heightened by the varied ways people share data today, such as through cloud services and social networks, and organizations are still learning how to manage the risks and their data security. Here’s the video, along with more details:

http://blogs.technet.com/b/trustworthycomputing/archive/2012/03/27/cloud-fundamentals-video-series-bring-your-own-device-and-the-cloud.aspx

Jim notes, “We’ve certainly seen an acceleration in cloud adoption. A lot of the organizations and enterprises we’re tracking are not only adopting both public and private cloud, but we’re seeing quite a change in Bring Your Own Device (BYOD).”

“The reality is, we’re going to see a lot of these highly mobile devices that may never make it inside of an enterprise actually need to be managed by cloud based services. That’s why at this RSA Conference we announced the CSA Mobile Initiative, in which we’re going to do a lot of research very similar to our original guidance to break down the different issues,” said Jim. “We’re looking to get the whole ecosystem involved.”

Secure Cloud – Myth or Reality?

Cloud security is not a myth.  It can be achieved.  The biggest hindrance to debunking this myth is getting enterprise businesses to begin thinking about the Cloud differently.  It is not the equipment of co-located dedicated servers or on-premises technology; it is changeable, flexible, and transforming every day at the speed of light.  With these changes come better security and technology to protect ‘big data’.  The other issue to take into consideration is the human factor: there will always be people involved in building and managing clouds, and there will always be people who want to attack them.  Therefore we need to consider these two key factors when enterprises choose their Cloud with security in mind.

First, technology and layers of security. “It’s more about giving up control of our assets and data (and not controlling the associated risk) than any technology specific to the cloud.” This quote comes from the ‘2011 Data Breach Investigations Report’, a study conducted by the Verizon RISK Team. If an environment is architected with security in mind, there is no evidence that specifically proves the Cloud is any more or less secure than a dedicated environment.  In fact, regulatory compliance such as PCI-DSS 2.0 for credit card information and HIPAA for healthcare data is regularly achieved in the public cloud.  The biggest reservation of organizations resistant to moving into the Cloud seems to be the fact that a majority of the infrastructure is shared.

 

Depending on your goals, there are essentially two key ingredients for true security in the Cloud.  The first and most important is separation.  This is absolutely essential – not only should your data be segregated from other tenants on the infrastructure, your network traffic, virtual machines and even security policies should be separate.

 

For instance, although a firewall or web application firewall may be shared, it’s imperative that policy modification does not impact anyone other than the tenant it was modified for.

 

The other key ingredient is transparency and auditability.  So you’ve decided to move to the cloud?  Great.  But how do you know you are getting what was advertised?  Simply put, you don’t.  Transparency is essential in keeping tabs on your Cloud hosting provider.  Being able to see behind the curtain should allow you to see exactly how your environment is being protected.  Not only does it give you peace of mind, but it’s required to perform regulatory compliance audits.

 

With data separation and the ability to keep a watchful eye on your resources, most organizations are better off moving to the Cloud security-wise.  Reducing cost by paying only for the resources you need, when you need them, is a substantial benefit, but being able to leverage a provider’s security infrastructure is even better.  Most organizations don’t have the expertise, much less the budget, to implement security measures such as high-end firewalls, DDoS mitigation, VPN with two-factor authentication, web application firewalls, IDS, IPS, patch management, anti-virus and a host of other security measures.  As a result, some may actually be more secure in the cloud.

 

Secondly, defending against attacks.  Cyber attacks are created and launched by people, and they happen in many ways, some more common than others.  In April 2011, Sony’s PlayStation Network was compromised.  It is estimated that 100 million players’ records containing names, addresses, e-mail accounts and passwords were stolen.  Some customers were hacked over and over again, as many as 10 times per customer.  This was the result of a planned and calculated hack.  In a letter to the U.S. House Commerce Committee, the Chairman of Sony said they had shut down the affected system while they investigated the attack and beefed up security.  This larger breach followed another: Sony believes that while it was tracking and defending against the large DDoS attack, the group Anonymous was able to exploit an exposed vulnerability and cause the larger, more troubling breach.  Security updates and bug fixes must be constantly monitored and applied to all applications.

 

Last year, CitiGroup was hacked by criminals who stole more than 200,000 Citigroup customer bank account details.  Unfortunately for Citigroup, the damage was done through what was apparently a trivial insecure direct object reference vulnerability – number four on the OWASP Top Ten.  By simply manipulating the URL in the address bar, authenticated users were able to jump from account to account, as the attackers did tens of thousands of times.  This vulnerability could easily have been prevented by avoiding direct references to account numbers, and caught through secure code review, web application firewalls, or application log monitoring and review.
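
To illustrate the flaw, here is a minimal sketch with made-up account data: the vulnerable handler trusts whatever account identifier arrives in the URL, while the fixed handler verifies that the authenticated user actually owns the referenced account.

    ACCOUNTS = {"1001": {"owner": "alice", "balance": 100},
                "1002": {"owner": "bob", "balance": 250}}

    def get_account_vulnerable(account_id, current_user):
        # Insecure direct object reference: no ownership check, so any
        # authenticated user can read any account by changing the ID.
        return ACCOUNTS[account_id]

    def get_account_fixed(account_id, current_user):
        # Verify that the authenticated user owns the requested account.
        account = ACCOUNTS.get(account_id)
        if account is None or account["owner"] != current_user:
            raise PermissionError("access denied")
        return account

    print(get_account_vulnerable("1002", "alice"))  # leaks bob's account
    # get_account_fixed("1002", "alice")            # raises PermissionError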

 

In conclusion, from a security perspective there are a number of perceived obstacles to implementing a public Cloud infrastructure.  All of these may appear, at first sight, to be perfectly valid.  This is largely because many existing public Cloud environments have been built with capacity, connectivity, scalability and other core hosting attributes as a priority, with security implemented as a secondary layer.  A truly secure public Cloud is possible, but only if it is built upon a secure framework – this ensures that, no matter how hosting technologies change and develop, and no matter what new tactics hackers devise to exploit them, there is always a secure foundation underpinning the entire architecture.

 

Chris Hinkley is a Senior Security Engineer at managed hosting provider FireHost where he maintains and configures network security devices, and develops policies and procedures to secure customer servers and websites. Hinkley has been with FireHost since the company’s inception. In his various roles within the organization, he’s serviced hundreds of customer servers, including Windows and Linux, and overseen the security of hosting environments to meet PCI, HIPAA and other compliance guidelines.

 

Seeing Through the Clouds: Gaining confidence when physical access to your data is removed

Cloud computing brings with it new opportunities, new frontiers, new challenges, and new chances for loss of intellectual property.  From hosting simple web sites to entire development environments, companies have been experimenting with cloud-based services for some time.  Whether a company decides to put a single application or entire datacenters in the cloud, there are different risks and threats that the business and IT need to think about.  All of these different uses and scenarios are going to require thorough planning and development to make sure whatever gets put in the cloud is protected.  When implemented properly, companies may actually find that they have improved their overall security posture.

 

When putting systems and information into your own datacenter, certain security measures have to be in place to ensure external threats are minimized.  One of the big security measures is the datacenter itself, with a security boundary only allowing authorized personnel to have direct access to the physical systems.  Within the datacenter, dedicated network connections ensure the data flows properly with little concern of unauthorized snooping.

 

These and other physical controls go away when working in a cloud environment.  Regardless of whether you choose an Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) or Software-as-a-Service (SaaS) cloud model, the physical boundary has gone from a select few authorized people to an unknown number of people who are not even part of your company.

 

Other controls inherent to locally hosted systems include firewalls, network segmentation, physical separation of systems and data, and a dizzying array of monitoring tools.  When going to a cloud model, whether it’s a public or private cloud, most of these controls either go away entirely or come with significant limitations.  The controls may still be there, but not under your direct management.  The three tenets of security are confidentiality, integrity, and availability.  When our data sits in our own datacenters, we feel confident that we have a pretty good level of control over all three of those tenets.  When we put our data in the cloud, we feel that we have lost control of all three.  This doesn’t mean that a cloud-based solution is bad; rather, it means we need to look at what we’re migrating to the cloud and make sure the three tenets are still covered.

 

Simply picking up an application or an in-house service and moving it to a cloud-based solution isn’t good enough, and will most likely leave information exposed.  You need to review:

  • How the information is secured
  • How access is authorized
  • How integrity and confidentiality are controlled

 

It may require new technologies to help enhance these things, but it may also just be a matter of tighter IT processes around how systems are configured and managed.

 

One of the attractive components of moving to a cloud-based service is the ability to expand on demand.  This model allows a company to handle high-load periods and have those services pulled back when not needed.  Implementing additional authentication and authorization controls, as well as data encryption both at rest and in motion, will also help increase the level of security and control kept by a company when using cloud services. Since a number of components of cloud-based services sit outside an administrator’s control, implementing additional security controls, including host-based and next-generation firewalls and monitoring, will also help enhance security while providing peace of mind.

 

The cloud has several attributes that make it attractive to business besides cost savings, including:

  • The ability to have highly redundant, geographically diverse systems helps companies handle disaster scenarios and enhances the customer experience.
  • The ability to quickly add more systems helps companies handle spikes in traffic.
  • Speed of deployment can also help a company to keep a competitive edge.
  • If implemented with the appropriate security controls in place, companies can have safe, secure systems that not only rival those they could have built within their own datacenters, but with more features and security than traditional IT deployments.

 

Expanding into the cloud requires the IT staff to think differently about security. Decisions made in the past may have provided enough security for information stored within your datacenter, but when using the cloud, security and monitoring have to be reassessed and modified to account for the changes in the risk boundaries.

 

David Lingenfelter is the Information Security Officer at Fiberlink.  David is a seasoned security professional with experience in risk management, information security, compliance, and policy development. As Information Security Officer of Fiberlink, David has managed projects for SAS70 Type 2 and SOC2 Type 2 certifications, as well as led the company through audits to become the first Mobile Device Management vendor with FISMA authorization from the GSA.  Through working with Fiberlink’s varied customer base, David has ensured the MaaS360 cloud architecture meets requirements for HIPAA, PCI, SOX, and NIST.  He has been an instrumental part in designing Fiberlink’s cloud model, and is an active member of the CSA, as well as the NIST Cloud working groups.


Fiberlink is the recognized leader in software-as-a-service (SaaS) solutions for secure enterprise mobile device and application management. Its cloud-based MaaS360 platform provides IT organizations with mobility intelligence and control over mobile devices, applications and content to enhance the mobile user experience and keep corporate data secure across smartphones, tablets and laptops. MaaS360 helps companies monitor the expanding suite of mobile operating systems, including Apple iOS, Android, BlackBerry and Windows Phone. Named by Network World as the Clear Choice Test winner for mobile device management solutions and honored with the 2012 Global Mobile Award for “Best Enterprise Mobile Service” at Mobile World Congress, MaaS360 is used to manage and secure more than one million endpoints globally. For more information, please visit http://www.maas360.com.

Lock Box: Where Should You Store Cloud Encryption Keys?

Whether driven by regulatory compliance or corporate mandates, sensitive data in the cloud needs protection along with access control. This usually involves encrypting data in transit as well as data at rest in some way, shape or form, and then managing the encryption keys to access the data. The new conundrum for enterprises lies in encryption key management for data in the cloud.

When considering Software-as-a-Service (SaaS) or Platform-as-a-Service (PaaS) offerings, protection for data-at-rest typically rests in the hands of the cloud service provider. Digging into the terms of service or master subscription agreement reveals the security commitments of the SaaS/PaaS provider.  For example, Salesforce.com’s Master Subscription Agreement indicates “We shall maintain appropriate administrative, physical, and technical safeguards for protection of the security, confidentiality and integrity of Your Data.” For Infrastructure-as-a-Service (IaaS), the security burden falls primarily on the cloud consumer to ensure protection of their data.  Encryption is a core requirement for protecting and controlling access to data-at-rest in the cloud, but the issue of who should control the encryption keys poses new questions in this context.

When weighing where to maintain encryption keys, enterprises should consider issues including security of the key management infrastructure (a compromised key can mean compromised data), separation of duties (for example, imposing access controls so administrators can backup files but not view sensitive data), availability (if a key is lost, data is cryptographically shredded), and legal issues (if keys are in the cloud, law enforcement could request and obtain encrypted data along with the keys without the enterprise’s consent).

There are a variety of ways to protect data-at-rest in the public cloud, such as tokenization or data anonymization, but the most commonly used approach is encrypting the data at rest. Whether you are encrypting a mounted storage volume, a file, or using native database encryption (sometimes referred to as “Transparent Data Encryption”, or TDE), all of these operations involve an encryption key. Where should that encryption key be stored and managed?  There are three primary options (with lots of variations on the three).
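
Whichever option you choose, the mechanics look much the same. A minimal sketch using the Fernet recipe from the open-source Python cryptography library makes the point: the key is a separate artifact, and deciding where that key lives is the real question.

    from cryptography.fernet import Fernet  # pip install cryptography

    # Where this key is stored afterwards (enterprise datacenter, SaaS key
    # manager, or IaaS provider) is exactly the decision discussed below.
    key = Fernet.generate_key()
    f = Fernet(key)

    ciphertext = f.encrypt(b"sensitive record")
    assert f.decrypt(ciphertext) == b"sensitive record"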

Keys in Enterprise Datacenter:  Holding the keys in the datacenter ensures maximum security and availability.  There is no risk of an external party being compromised (as in the RSA SecurID breach), and a high-availability/disaster-recovery configuration can be implemented to ensure keys are always available.  There are various deployment decisions, including whether to use a virtual appliance or a hardware appliance, depending on risk tolerance levels.

SaaS Key Management: A second alternative is using a SaaS key management solution, which means having a SaaS vendor take responsibility for the keys. While this approach takes advantage of cloud economics, there are risks. The SaaS key management vendor assumes responsibility for availability of the keys: if they experience an outage, the data could become unavailable, and if keys are somehow lost or corrupted, your data could be permanently unavailable. The vendor is also responsible for the security of the keys – any compromise of the SaaS infrastructure puts customer data at risk (the RSA SecurID episode again comes to mind).  There are also legal issues to consider if you do not hold the encryption keys: a cloud service provider (SaaS or IaaS) could be compelled to turn over encryption keys and data via the USA PATRIOT Act without the data owner being aware (a Forrester Research blog posting by Andrew Rose provides a nice summary of the issue).

IaaS Manages Keys: A third option is to rely on tokenization or encryption services provided by your IaaS vendor. This provides a checkbox confirming that data is encrypted, but it creates security and availability risks similar to those posed by the SaaS alternative (you are relying on the security and availability of your IaaS provider’s key management and effectively making the IaaS provider custodian of both the encryption keys and the encrypted data – not an ideal separation of duties).  Some IaaS providers offer encryption options that allow customers to choose whether to manage the keys themselves or have the vendor assume management responsibility. For example, Amazon’s S3 storage includes options to encrypt stored data while enabling you either to manage your own encryption keys or to have Amazon hold the keys.
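
Here is a sketch of those two choices using the AWS SDK for Python (boto3), whose S3 encryption options have expanded since this article was written; the bucket name is illustrative, and valid AWS credentials are assumed.

    import os
    import boto3  # AWS SDK for Python

    s3 = boto3.client("s3")
    BUCKET = "example-key-demo-bucket"  # illustrative name

    # Option A: Amazon manages the key (server-side encryption, SSE-S3).
    s3.put_object(
        Bucket=BUCKET, Key="provider-held-key.txt", Body=b"data",
        ServerSideEncryption="AES256",
    )

    # Option B: supply a key per request (SSE-C); Amazon encrypts the object
    # with it and then discards it, so you remain the key custodian.
    customer_key = os.urandom(32)  # 256-bit key you must store and protect
    s3.put_object(
        Bucket=BUCKET, Key="customer-held-key.txt", Body=b"data",
        SSECustomerAlgorithm="AES256",
        SSECustomerKey=customer_key,
    )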

The cloud may create new key management challenges, but the principles for choosing between the various alternatives remain the same. Enterprises must assess their risk tolerance and audit requirements before they can select a solution that best meets their encryption key management needs.

Todd Thiemann is senior director of product marketing at Vormetric and co-chair of the Cloud Security Alliance (CSA) Solution Provider Advisory Council.

Deprovisioning in the Cloud

Let’s be honest: how many of you have tried logging in to one of your former employer’s accounts?  Maybe you had a CRM solution and you wanted to get the name of that guy who suggested he had the next hot idea.  You didn’t set your out-of-office message with your new/personal contact information in the hosted email service.  The travel site for the previous company was just plain better than anything else you can access.  As security professionals, we know the risks: the lag time for deprovisioning varies, but best practices suggest that when an employee walks out the door, all of his administrative access shuts down as it closes.  That has been harder to do in the cloud.  Even with SAML tokens and a smattering of open standards for authentication, inconsistent support by SaaS providers and spotty enterprise directory integration leave opportunities for exploitation that simply don’t exist in the on-premise IT world.

 

The open identity standards were supposed to fix this, but even after six-plus years they haven’t been adopted across the industry. Federated identity management, OAuth, SAML (Security Assertion Markup Language), OpenID, and large initiatives to implement them, such as those by Google and Facebook, are beginning to pop up on various sites.  A Fortune 500 company easily uses over 100 cloud services, ranging from expense reporting with Concur to American Express’ GetThere Travel to SalesForce’s Customer Relationship Management software.  Not all 100 support a new authentication standard, and those that do don’t all support the same one.

 

Why is this important, you might ask?  Quite simply, until you have a single place to pull the plug or an extraordinarily mature configuration management / process control structure embedded in the corporation, you cannot fully disconnect an ex-employee from the company.  Most companies will immediately remove access to obvious things like Active Directory/LDAP and VPN credentials.  They may sync other passwords through automated processes and close down internal access to SAP or Oracle.  The remaining stragglers may seem innocuous, but there was some function that was important enough to enroll the employee in the first place.  Think back to the multitude of cloud services you use day to day for your job; many of them still rely on good old usernames and passwords.

 

Password policy controls

 

If you’re stuck with passwords, there is always the possibility of a password intermediary, a system you log into that stores all of your credentials in an encrypted format you can’t read or, better still, access.  There are plenty of programs that run locally on your machine, password vaults of sorts like KeePass, Password Vault, etc., where the user enters a master password (that may be looped into a directory structure of some sort) and receives back a set of service IDs.  Click on the service you want and a password is automatically copied to the clipboard.  Of course, there is a bit of customization necessary to make the commercial and open source projects into something single-purpose where, if an administrator removes your rights to that program, all other access simply goes away.
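
A toy sketch of the intermediary idea in Python: derive an encryption key from the master password with PBKDF2, then use it to unlock stored service credentials. A real vault adds per-entry protection, a memory-hard key derivation function, and careful storage of the salt and ciphertext.

    import base64
    import hashlib
    import os

    from cryptography.fernet import Fernet  # pip install cryptography

    def key_from_master(master_password, salt, iterations=200_000):
        # Derive a 32-byte key from the master password via PBKDF2-HMAC-SHA256.
        raw = hashlib.pbkdf2_hmac(
            "sha256", master_password.encode(), salt, iterations
        )
        return base64.urlsafe_b64encode(raw)  # Fernet expects a base64 key

    salt = os.urandom(16)
    vault = Fernet(key_from_master("correct horse battery staple", salt))

    # Store a credential encrypted; only the master password recovers it.
    token = vault.encrypt(b"crm.example.com: alice / s3cret")
    print(vault.decrypt(token))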

 

But what happens when a power user, or better still a corporate executive, says they want to venture away from the corporate standard (be it Wintel or Apple or even Linux) and use something different?  Customizing software is expensive; customizing and supporting multiple platforms becomes exponentially so.  Someone else may already have done the legwork – at least one vendor appears to have taken this approach to handling customization.  Between the major PC OS release schedules and versions, and the constantly (and quickly) evolving mobile platforms, can anyone really afford to wait for locally developed software?

 

There is a plethora of “as a Service” offerings – Software as a Service, Platform as a Service – and recently cloud vendors have begun pitching Identity as a Service.  Certificate Authority (CA) vendors like Entrust, GeoTrust and Verisign might argue that’s what public certificates were all about, but let’s table that discussion for later. Instead of Identity as a Service, which is trademarked by Fischer International Identity, let’s use Gartner’s “Identity and Access Management as a Service” (IAMaaS) as defined in their 2011 Magic Quadrant for User Administration and Provisioning.  So, how can IAMaaS help?

 

Let’s let the cloud fix the cloud problem

 

Since the cloud got us into this mess, can it be the solution as well?  What happens if we move the password vault to the cloud?  The idea for most IAMaaS providers centers on an information store that synchronizes internally, federates when it can, uses two-factor authentication when it’s important, and stores passwords when all else fails.

 

  • Internal synchronization – this is one of the stickier aspects of the deal.  The directory service that a corporation uses (LDAP, AD, etc.) has to be accessible to the IAMaaS – not so accessible that you’re handing over the password tables, but lookups and validations do need to occur (a minimal validation sketch follows this list).  In many cases, this is an on-premise device for network security reasons and so that the data remain fresh; real-time integration beats anything with latency.  Plus, when we keep the systems internally synchronized, we’re not amplifying the deprovisioning problem by introducing an across-the-board delay.
  • Multi-platform support – the cloud is always (usually) on, and all of the authentication happens over HTTP.  This makes the services cross-platform (provided the device has a TCP connection and a browser).
  • No expensive programming – most of the vendors write connectors for the various websites they support out-of-the-box.  This abstracts away the complications of Facebook changing their login processes or Google changing their APIs.  The number of connectors included out of the box may be indicative of the breadth of support by that vendor, or poor design/programming choices in creating the backend software.
  • Standards support – in the Fortune 500 company example, I can use the out-of-the-box connectors side by side with a federated OpenID, SAML or even a future standard.  And I expect that two-factor authentication will still work when it needs to.
  • Ease of adding new sites, services and apps – In addition to the out-of-the-box connectors and standards, several of the products include do-it-yourself wizard options that work in most cases.  When that doesn’t work, some vendors find the development of new connectors so trivial that they offer fixed-price development.
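
Returning to the first bullet above, the lookup-and-validation piece can be as simple as attempting a bind against the corporate directory. Here is a sketch using the open-source ldap3 library, with an illustrative server address and DN layout.

    from ldap3 import Server, Connection  # pip install ldap3

    def validate_user(username, password):
        # A successful simple bind proves the credentials are valid without
        # ever exporting the directory's password tables to the IAMaaS.
        server = Server("ldaps://directory.example.com")  # illustrative host
        user_dn = f"uid={username},ou=people,dc=example,dc=com"
        conn = Connection(server, user=user_dn, password=password)
        ok = conn.bind()
        conn.unbind()
        return ok

    if __name__ == "__main__":
        print(validate_user("alice", "s3cret"))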

 

The benefits of IAMaaS are numerous, and they include deprovisioning.  An administrator maintains access to the keys to the kingdom without the responsibility of legible text files or tons of endpoints to support.  Users don’t worry about complexity requirements, password rotation, multiple login credentials, federation, or any of the other headaches associated with good password policies.  When an employee leaves, the administrator uses the well-implemented processes already in place to eliminate internal access; the external sites simply fall away along with the user’s rights.

 

The same techniques are applicable to a wide range of sectors, and could even be useful in the public sector.  The US military stood up its DEERS/RAPIDS CA around the mid-’90s. Several of the agencies and military services use this CA through a smart-chip-embedded card called the Common Access Card (CAC) for identification and authentication.  With this card, users can log in to their systems, access secured web sites using client-side certificates, and send signed or encrypted emails – internally, they define secure.  But even the government uses publicly available websites (you thought LinkedIn and Facebook are only mined by corporate recruiters?) as well as “external” inter-departmental and inter-agency sites, where these same deprovisioning problems are even more pressing.  Automating the processes might help the government more than we’ll ever know.

 

The crop of identity providers clamoring to ride the cloud and become the next default solution includes players in Gartner’s MQ that have already embraced the cloud, as well as companies not represented there that were conceived in the cloud:

 

  • CA Technologies’ Role and Compliance Manager
  • Citrix OpenCloud Access
  • Courion’s Access Assurance
  • Fischer International Identity’s Identity as a Service (yes, the trademarked one)
  • Forge Rock
  • Intel’s Cloud Access 360
  • Iron Stratus
  • McAfee’s Cloud Identity Manager
  • Okta’s Application Network
  • OneLogin
  • Ping Identity’s PingFederate
  • Symantec’s O3
  • Symplified’s Symplified Suite
  • VMWare’s Horizon Application Manager

 

This is far from an exhaustive list, and each solution has its benefits and detractors.  If I left you out, please expand the article by way of comments.

 

Jon-Michael C. Brook is a Sr. Principal Security Architect with Symantec’s Public Sector Organization.  He holds a BS-CEN from the University of Florida and an MBA from the University of South Florida.  He has obtained a number of industry certifications, including the CISSP and CCSK, holds patents and trade secrets in intrusion detection, enterprise network controls, cross-domain security and semantic data redaction, and has a special interest in privacy.  More information may be found on his LinkedIn profile.

 

Opportunity Knocks Once…

In 1983, I was a young electrical engineering student when I took a job working for a small long-distance company in Phoenix, Arizona.  For me, Opportunity had Knocked, and I had just opened the door on an amazing future.  In the world of communications, things were already changing and were about to change in even more dramatic ways.  In 1984, the divestiture of AT&T would reshape the way the world communicates.  The personal computer was appearing.  In the 1980s, fiber optic cables, including transoceanic ones, began to crisscross the world; as of 2010, the only continent not connected by fiber optic cable was Antarctica.  Fiber optics enabled huge amounts of data to be transported anywhere in the world.  On the heels of this data explosion came the World Wide Web, and the Internet began to be something more than a tool for universities and the Defense Department.  Multiple-processor and multi-core computing systems became the norm, putting tremendous amounts of processing power in the hands of the masses.  In recent years, virtualization has made its debut and helped to launch the cloud revolution.  Truly, I have had a once-in-a-lifetime opportunity to be in the right place at the right time.

 

When I had the opportunity to get involved with the Cloud Security Alliance and become a part of the Subject Matter Expert (SME) Group, I realized that Opportunity was Knocking once again.  Just as all of these advancements have paved the way, making cloud services possible, I believe that the Cloud Security Alliance will serve a critical role and establish the patterns that will determine what cloud services will look like in the future, and the SME Group will have an important part in that future.

 

When the CSA was formed, it recognized the importance of involving and engaging the companies and people that were working with and making the technologies of tomorrow.  The SME Group was formed to involve and engage those companies, making available a forum where those companies that are using and creating the cloud can have a voice.  I learned long ago that I don’t know everything and that there is great power and opportunity in association.  The SME Group is such an association.  As a member of the SME Group, you will have the opportunity to meet and work with people from all over the world, working in all kinds of industries and technologies, with vast amounts of knowledge and experience.  Members of the SME Group not only have a front-row seat to where the cloud is going, but can also provide input and direction, allowing the entire cloud community to benefit from their knowledge and experience.

 

If you are a member of the SME Group, I want to thank you for your participation.  If you are a corporate sponsor of the CSA but not currently a member of or involved in the SME Group, I want to invite you to get involved.  And if you are a company interested in, involved in, or considering incorporating cloud services in your business, I want to invite you to become a corporate sponsor of the Cloud Security Alliance.  It will be an investment that is well worth it.

 

Opportunity is Knocking, all you need to do is open the door.

 

Henry St. Andre

Co-Chair SME Group

Cloud Fundamentals Video Series: The Benefits of Industry Collaboration to Cloud Computing Security

At the CSA Congress in November, Tim Rains, Director of Trustworthy Computing for Microsoft, sat down with Jim Reavis, our executive director, to talk about the biggest challenges for cloud computing security, and what vendors and customers are doing to help with these challenges.
Said Reavis, “Each day, a growing number of companies decide to leverage cloud computing for important business activities.  There is an immediate and compelling mandate for all of us to become better informed as to how cloud computing functions, its key benefits and considerations to establishing trust.”
Here’s the video from the event:

http://blogs.technet.com/b/trustworthycomputing/archive/2012/01/19/cloud-fundamentals-video-series-the-benefits-of-industry-collaboration-to-cloud-computing-security.aspx

Enjoy!