Join CSA’s New DC Metro Area Chapter

The Cloud Security Alliance (CSA) is pleased to announce that its DC Metro Area chapter has been chartered to serve CSA members in the DC metro area.

The chapter’s region includes a diverse range of businesses, government organizations and academic institutions that all have an interest in well-engineered, secure IT systems, including heavily regulated sectors such as the U.S. government, healthcare and finance. The new DC Metro Area chapter will influence policy around IT security regulations and privacy via position papers designed to modernize outdated regulations and to create new regulations and policies in areas where none exist.

The chapter will also develop guidance for IT modernization incorporating security-by-design, providing vendor-neutral guidance to modernize heavily regulated IT systems. The CSA DC membership, coupled with the Government, Healthcare and Financial Sector Advisory Boards, will provide direction around topics for research within the chapter. We encourage you to join CSA-DC in its mission to modernize IT to securely move at the speed of business.

CSA’s newest chapter needs volunteers! If you’d like to help build this chapter, you can contact the DC Metro Area Chapter via its LinkedIn Group. Or, attend the chapter’s inaugural event on Sept. 27 to learn about CSA-DC and how you can join its mission to modernize IT to securely move at the speed of business.

Five Distinguished Security Experts to Keynote SecureCloud 2014

SecureCloud 2014 is just around the corner and the CSA is pleased to announce the keynote speaker lineup for this must-attend event, which is taking place in Amsterdam on April 1-2.

Secure Cloud Speakers

This year’s event will feature keynote addresses from the following five security experts on a wide range of cloud security topics:

  • Prof. Dr. Udo Helmbrecht, executive director of the European Network and Information Security Agency (ENISA), will speak on the uptake of cloud computing in Europe and how ENISA supports cloud security in the Member States.
  • Prof. Dr. Reinhard Posch, CIO for the Austrian Federal Government, will present on the European Cloud Partnership and the Austrian Government’s approach to the cloud.
  • Alan Boehme, Chief of Enterprise Architecture for The Coca-Cola Company, will present on the CSA Software Defined Perimeter initiative.
  • Jim Reavis, CEO of the Cloud Security Alliance, will discuss trends and innovation in cloud security and CSA activities in 2014.
  • Richard Mogull, CEO of Securosis, will give the closing keynote on automation and DevOps.

If you haven’t already registered, early bird discount pricing is being offered through February 14. Registration information can be found at:

https://cloudsecurityalliance.org/events/securecloud2014/#_reg

We look forward to seeing all of you in Amsterdam in the Spring!

The Dark Side of Big Data: CSA Opens Peer Review Period for the “Top Ten Big Data and Privacy Challenges” Report

Big Data seems to be on the lips of every organization’s CXO these days. By exploiting Big Data, enterprises are able to gain valuable new insights into customer behavior via advanced analytics. However, what often gets lost amidst all the excitement are the many very real security and privacy issues that go hand in hand with Big Data. Traditional security mechanisms were simply never designed to deal with the reality of Big Data, which often relies on distributed, large-scale cloud infrastructures, a diversity of data sources, and the high volume and frequency of data migration between different cloud environments.

To address these challenges, the CSA Big Data Working Group released an initial report, The Top 10 Big Data Security and Privacy Challenges, at CSA Congress 2012. It was the first industry report to take a holistic view of the wide variety of big data challenges facing enterprises. Since then, the group has been working to further its research, assembling detailed information and use cases for each threat. The result is the expanded Top 10 Big Data and Privacy Challenges report and, beginning today, the report is open for peer review, during which CSA members are invited to review and comment on it prior to its final release. The 35-page report outlines the unique challenges presented by Big Data through narrative use cases and identifies the dimension of difficulty for each challenge.

The Top 10 Big Data and Privacy Challenges have been enumerated as follows:

  1. Secure computations in distributed programming frameworks
  2. Security best practices for non-relational data stores
  3. Secure data storage and transaction logs
  4. End-point input validation/filtering
  5. Real-time security monitoring
  6. Scalable and composable privacy-preserving data mining and analytics
  7. Cryptographically enforced data-centric security
  8. Granular access control
  9. Granular audits
  10. Data provenance

The goal of outlining these challenges is to raise awareness among security practitioners and researchers so that industry-wide best practices might be adopted to address these issues as they continue to evolve. The open review period ends March 18, 2013. To review the report and provide comments, please visit https://interact.cloudsecurityalliance.org/index.php/bigdata/top_ten_big_data_2013.

Tweet this: The Dark Side of Big Data: CSA Releases Top 10 Big Data and Privacy Challenges Report. http://bit.ly/VHmk0d

CSA Releases CCM v3.0

The Cloud Security Alliance (CSA) has today released a draft of the latest version of the Cloud Controls Matrix, CCM v3.0. This latest revision to the industry standard for cloud computing security controls realigns the CCM control domains to achieve tighter integration with the CSA’s “Security Guidance for Critical Areas of Focus in Cloud Computing version 3” and introduces three new control domains. Beginning February 25, 2013, the draft version of CCM v3.0 will be made available for peer review through the CSA Interact website, with the peer review period closing March 27, 2013 and final release of CCM v3.0 on April 1, 2013.

The three new control domains, “Mobile Security”, “Supply Chain Management, Transparency and Accountability”, and “Interoperability & Portability”, address the rapidly expanding methods by which cloud data is accessed, the need to ensure due care is taken in the cloud provider’s supply chain, and the minimization of service disruptions in the face of a change to the cloud provider relationship.

The “Mobile Security” controls are built upon the CSA’s “Security Guidance for Critical Areas of Mobile Computing, v1.0” and are the first mobile-device-specific controls incorporated into the Cloud Controls Matrix.

The “Supply Chain Management, Transparency and Accountability” control domain seeks to address risks associated with governing data within the cloud, while the “Interoperability & Portability” domain brings to the forefront considerations for minimizing service disruptions in the face of a change in a cloud vendor relationship or an expansion of services.

The realigned control domains have also benefited from changes in language that improve the clarity and intent of each control; in some cases, controls have been moved within the expanded domains to ensure cohesiveness within each control domain and to minimize overlap.

The draft of the Cloud Controls Matrix can be downloaded from the Cloud Security Alliance website, and the CSA welcomes peer review through the CSA Interact website.

The CSA invites all interested parties to participate in the peer review and in the CSA Cloud Controls Matrix Working Group Meeting, to be held during the week of the RSA Conference at 4pm PT on February 28, 2013, in the Franciscan Room of the Sir Francis Drake Hotel, 450 Powell St, San Francisco, CA.

Towards a “Permanent Certified Cloud”: Monitoring Compliance in the Cloud with CTP 3.0

Cloud services can be monitored for system performance but can they also be monitored for compliance? That’s one of the main questions that the Cloud Trust Protocol aims to address in 2013.

Compliance and transparency go hand in hand.

The Cloud Trust Protocol (CTP) is designed to allow cloud customers to query cloud providers in real-time about the security level of their service. This is measured by evaluating “security attributes” such as availability, elasticity, confidentiality, location of processing or incident management performance, just to name a few examples. To achieve this, CTP will provide two complementary features:

  • First, CTP can be used to automatically retrieve information about the security offering of cloud providers, as typically represented by an SLA.
  • Second, CTP is designed as a mechanism to report the current level of security actually measured in the cloud, enabling customers to be alerted about specific security events.

These features will help cloud customers compare competing cloud offerings to discover which ones provide the level of security, transparency and monitoring capabilities that best match the control objectives supporting their compliance requirements. Additionally, once a cloud service has been selected, the cloud customer will also be able to compare what the cloud provider offered with what was later actually delivered.

For example, a cloud customer might decide to implement a control objective related to incident management through a procedure that requires some security events to be reported back to a specific team within a well-defined time frame. This customer could then use CTP to ask for the maximum delay the cloud provider commits to when reporting incidents to customers during business hours. The same cloud customer may also ask for the percentage of incidents that were actually reported back to customers within that specific time limit during the preceding two-month period. The first example is typical of an SLA, while the second one describes the real measured value of a security attribute.
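As an illustration of how such a query might look, here is a minimal Python sketch; the base URL, endpoint paths and JSON fields are invented for the example and are not the actual CTP 3.0 API, which is still in peer review:

    # Hypothetical sketch of the two CTP features described above: querying a
    # provider's committed SLA value and the corresponding measured value for
    # a security attribute. Endpoint paths and JSON fields are illustrative
    # assumptions, not the real CTP 3.0 API.
    import json
    import urllib.request

    CTP_BASE = "https://ctp.provider.example/api"  # hypothetical endpoint

    def ctp_get(path):
        """GET a CTP resource and decode the JSON body."""
        with urllib.request.urlopen(CTP_BASE + path) as resp:
            return json.load(resp)

    # Feature 1: what the provider commits to (the SLA-style objective).
    sla = ctp_get("/attributes/incident-reporting/objective")
    print("Committed max reporting delay (minutes):", sla["max_delay_minutes"])

    # Feature 2: what was actually measured over the preceding two months.
    measured = ctp_get("/attributes/incident-reporting/measurements?period=P2M")
    print("Incidents reported within the committed delay:",
          str(measured["within_sla_percent"]) + "%")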

CTP is thus designed to promote transparency and accountability, enabling cloud customers to make informed decisions about the use of cloud services, as a complement to the other components of the GRC stack. Real time compliance monitoring should encourage more businesses to move to the cloud by putting more control in their hands.

From CTP 2.0 to CTP 3.0

CTP 2.0 was born in 2010 as an ambitious framework designed by our partner CSC to provide a tool for cloud customers to “ask for and receive information about the elements of transparency as applied to cloud service providers”. CSA research has begun the task of transforming this original framework into a practical and implementable protocol, referred to as CTP 3.0.

We are moving fast, and the first results are already ready for review. On January 15th, CSA completed a first review version of the data model and a RESTful API to support the exchange of information between cloud customers and cloud providers, in a way that is independent of any cloud deployment model (IaaS, PaaS or SaaS). This is now going through the CSA peer review process.

Additionally, a preliminary set of reference security attributes is also undergoing peer review. These attributes are an attempt to describe and standardize the diverse approaches taken by cloud providers to express the security features reported by CTP. For example, we have identified more than five different ways of measuring availability. Our aim is to make explicit the exact meaning of the metrics used. For example, what does unavailability really mean for a given provider? Is their system considered unavailable if a given percentage of users report complete loss of service? Is it considered unavailable according to the results of some automated test of system health?
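To make the ambiguity concrete, the short Python sketch below (purely illustrative, not part of CTP) computes “availability” over the same monitoring window under two different definitions; the sample data and the 50% user-report threshold are made up:

    # Illustrative only: the same monitoring window yields different
    # "availability" figures depending on the definition used.
    # probe_results: one boolean per automated health check (True = healthy).
    # user_reports: fraction of users reporting complete loss of service
    #               during each monitoring interval.
    probe_results = [True] * 9990 + [False] * 10
    user_reports = [0.0] * 9995 + [0.6, 0.6, 0.2, 0.1, 0.0]

    # Definition A: availability = share of automated probes that succeeded.
    availability_probes = sum(probe_results) / len(probe_results)

    # Definition B: an interval counts as "down" only if more than 50% of
    # users reported complete loss of service (an assumed threshold).
    down_intervals = sum(1 for frac in user_reports if frac > 0.5)
    availability_users = 1 - down_intervals / len(user_reports)

    print("Availability (probe-based):       {:.2%}".format(availability_probes))
    print("Availability (user-report-based): {:.2%}".format(availability_users))

Same system, same period, two defensible numbers; this is exactly why CTP needs precisely defined metrics.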

Alongside all this nice theory, we are also planning to get our hands dirty and build a working prototype implementation of CTP 3.0 in the second half of 2013.

Challenges and research initiatives

While CTP 3.0 may offer a novel approach to compliance and accountability in the cloud, it also creates interesting challenges.

To start with, providing metrics for some security attributes or control measures can be tricky. For example, evaluating the quality of vulnerability assessments performed on an information system is not trivial if we want results to be comparable across cloud providers. Other examples are data location and retention, which are both equally complex to monitor, because of the difficulty of providing supporting evidence.

As a continuous monitoring tool, CTP 3.0 is a nice complement to traditional audit and certification mechanisms, which typically only assess compliance at a specific point in time. In theory, this combination brings up the exciting possibility of a “permanently certified cloud”, where a certification could be extended in time through automated monitoring. In practice however, making this approach “bullet-proof” requires a strong level of trust in the monitoring infrastructure.

As an opportunity to investigate these points and several other related questions, CSA has recently joined two ambitious European Research projects: A4Cloud and CUMULUS. A4Cloud will produce an accountability framework for the entire cloud supply chain, by combining risk analysis, creative policy enforcement mechanisms and monitoring. CUMULUS aims to provide novel cloud certification tools by combining hybrid, incremental and multi-layer security certification mechanisms, relying on service testing, monitoring data and trusted computing proofs.

We hope to bring back plenty of new ideas for CTP!

Help us make compliance monitoring a reality!

A first draft of the “CTP 3.0 Data Model and API” is currently undergoing expert review and will then be opened to public review. If you would like to provide your expert feedback, please do get in touch!

by Alain Pannetrat 

[email protected]

Dr. Alain Pannetrat is a Senior Researcher at Cloud Security Alliance EMEA. He works on CSA’s Cloud Trust Protocol, providing monitoring mechanisms for cloud services, as well as on CSA research contributions to EU-funded projects such as A4Cloud and CUMULUS. He is a security and privacy expert, specialized in cryptography and cloud computing. He previously worked as an IT Specialist for the CNIL, the French data protection authority, and was an active member of the Technology Subgroup of the Article 29 Working Party, which informs European policy on data protection. He received a PhD in Computer Science after conducting research at Institut Eurecom on novel cryptographic protocols for IP multicast security.

 

 

EMEA Congress Recap

The inaugural EMEA Congress in Amsterdam was an unqualified success, with hundreds of security visionaries in attendance and presentations from some of the leading voices across the cloud security landscape. What follows is just a sample of the discussions and some of the key takeaways from the two-day event:

EMEA Congress Presenters

  • Monica Josi, Microsoft’s Chief Security Adviser EMEA, presented on Microsoft’s compliance strategy, emphasizing the importance of a common mapping strategy to define compliance standards. Microsoft has mapped over 600 controls and 1,500 audit obligations onto the ISO27001 framework and is using CSA’s CCM and ISO27001 to certify its Dynamics CRM, Azure and Office365 platforms. It has also published all relevant documentation in the CSA’s STAR repository.
  • Chad Woolf, Global Risk and Compliance Leader for Amazon Web Services, highlighted the difference between security IN the cloud as opposed to security OF the cloud. According to Chad, security IN the cloud presents a much greater risk, and he discussed some of the different assurance mechanisms provided by AWS.
  • Data security and privacy expert Stewart Room provided an update on some of the more pressing legal issues facing cloud security, including a plea for more realistic legislation (e.g. the subcontractor recommendations of the Art 29 WP).
  • Mark O’Neill, CTO for Vordel, gave an update on IDM standards, including OAuth 2.0 and OpenID Connect, and how they fit into the cloud ecosystem. OAuth 2.0 is now a stable standard which can be used to give granular, revocable access control. It is lighter than SAML and therefore more suitable for mobile/REST scenarios (a minimal token-exchange sketch follows this list).
  • Phil Dunkelberger made an impassioned call to arms for the industry to create a standard authentication protocol that would allow for the integration of appropriate authentication mechanisms into diverse services.
  • Jean-François Audenard, Cloud Security Advisor for Orange Business Services, presented Orange’s Secure Development Lifecycle, which covers security and legal obligations, mitigation plans, security reviews and ongoing operational security, as well as the roles of its security Advisors, Architects and Managers in the lifecycle.
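For readers unfamiliar with the plumbing behind Mark O’Neill’s point, here is a minimal Python sketch of the OAuth 2.0 authorization-code token exchange (RFC 6749). The token endpoint, client credentials and authorization code are placeholders; a production client would also validate TLS certificates and use state/PKCE checks:

    # Minimal sketch of an OAuth 2.0 authorization-code token exchange.
    # All URLs and credentials below are placeholders.
    import json
    import urllib.parse
    import urllib.request

    TOKEN_ENDPOINT = "https://auth.example.com/oauth2/token"  # placeholder

    def exchange_code_for_token(code, redirect_uri, client_id, client_secret):
        """Swap an authorization code for an access (and refresh) token."""
        body = urllib.parse.urlencode({
            "grant_type": "authorization_code",
            "code": code,
            "redirect_uri": redirect_uri,
            "client_id": client_id,
            "client_secret": client_secret,
        }).encode()
        req = urllib.request.Request(TOKEN_ENDPOINT, data=body)
        req.add_header("Content-Type", "application/x-www-form-urlencoded")
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # tokens = exchange_code_for_token("code-from-redirect",
    #                                  "https://app.example.com/callback",
    #                                  "my-client-id", "my-client-secret")
    # print(tokens["access_token"], tokens.get("refresh_token"))

The lightweight, token-based nature of this exchange is what makes OAuth 2.0 attractive for the mobile and REST scenarios mentioned above, and revoking the token revokes the access.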

Panel Discussion Takeaways:

  • While Gartner has some 26 definitions for Cloud, according to Bruce Schneier it can be boiled down to the fact that it’s simply your data on somebody else’s hard disk that you access over the Internet!
  • Cloud provider specialization and reputation means better security in many respects. As to the question of what can be more difficult in the cloud, forensics is a major issue (e.g., ‘freezing the crime scene’, confiscation of hardware, etc)
  • As a customer, there is a lot you can and should do to monitor the cloud service provider (either independently and/or via executive dashboards). This also allows you to establish trust in smaller companies with less history.
  • Internal IT teams are not redundant. There are lots of security-related tasks that still need to be taken care of. This is especially true with IaaS providers (e.g. credential management). The cloud provides opportunities for many of these individuals to perform higher-value tasks (e.g. security training of staff, service monitoring, etc.).
  • Business is consuming technology quicker than IT can provide it; as a result, more internal business users are utilising external third-party and cloud vendors to process their information. For example, MARS Information Services is using a modified version of ISO27001 (ISO++) and the CSA’s CCM to risk-assess its third-party vendors. As engagements move from IaaS to PaaS and SaaS, the level of risk increases as more of the controls are handed to the service provider.
  • Historically, organizations have been largely concerned with securing the network, not the information that resides on it. We now need to protect information based on the risk associated with the compromise of that data. As such, a risk-based approach to security requires data to be classified, at least at a high level.
  • Once data has migrated to the Cloud, access and authentication becomes key. Authentication is currently taken for granted (passport, room key, ID badge, airline ticket, cards), except online where credentials are often re-used. If they are compromised, all systems using those credentials are vulnerable.
  • As data moves to the Cloud, there will be situations that require the data to be recovered in a forensically sound way. The use of multi-tenant environments across multiple jurisdictions introduces numerous e-discovery and chain-of-custody challenges that are yet to be solved.

 

“Great conference with a number of speakers that really provided up to date, timely and in-depth information” – Peter Demmink, Merck / MSD

“The CSA delivered an excellent intro to all the aspects of cloud security and compliance” – Albert Brouwer, AEGON

 

 

 

Will the Cloud Cause the Reemergence of Security Silos?

by: Matthew Gardiner

Generally in the world, silos relate to things that are beneficial, such as silos for grain or corn. However, in the world of IT security, silos are very bad. In many forensic investigations, application silos turn up as a key culprit that enabled data leakage of one sort or another. It is not that any one application silo is inherently a problem – one can repair and manage a single silo much as a farmer would do – it is the existence of many silos, and silos of so many types, that is the core problem. Farmers generally don’t use thousands of grain silos to handle their harvest; they have a handful of large, sophisticated, centralized ones.

The same approach has proven highly effective in the world of application security, particularly since the emergence of the Web and its explosion of applications and users.  Managing security as a centralized service and applying it across large swaths of an organization’s infrastructure and applications is clearly a best practice.  However with the emergence of the Cloud as the hot application development and deployment platform going forward, organizations are at significant risk of returning to the bad days of security silos.  When speed overruns architecture, say hello to security silos and the weaknesses that they bring.

What do I mean by security silos?  I think of silos as application “architectures” which cause security (as well as IT management in general) to be conducted in “bits-and-pieces”, thus uniquely within the specific platform or system.  Applications are built this way because it feels faster in the short term. After all, the project needs to get done.  But after this approach is executed multiple times the organization is left with many inconsistent, custom, and diverse implementations and related security systems.  These systems are inevitably both complex to operate and expensive to maintain as well as easy to breach on purpose or by accident.

Perhaps this time it is different? Perhaps IT complexity will magically decline with the Cloud? Do you really think that the move to the Cloud is going to make the enterprise IT environment homogeneous and thus inherently easier to manage and secure? Not a chance. In fact, just the opposite is most likely. How many organizations will move all of their applications and data to public clouds? And, for that matter, to a single public cloud provider? Very few. Given this, it is imperative that security architects put in place security systems that are designed to operate in a highly heterogeneous, hybrid (mixed public cloud and on-premise) world. The cloud-connected world is one where applications and data will one day be inside the organization on a traditional platform, the next day hosted within the organization’s private cloud, the next day migrated to live within a public cloud service, and then back again, based on what is best for the organization at that time.

Are security silos inevitable with the move to the Cloud?  In the short term, unfortunately, probably yes.  With every new IT architecture the security approach has to do some catch-up.  It is the security professionals’ job to make this catch-up period as short as possible.

How should we shorten the catch-up period?

  • First, update your knowledge base around the Cloud and security. There are a lot of good sources out there; one in particular that I like is from the Cloud Security Alliance (CSA), Security Guidance for Critical Areas of Focus in Cloud Computing.
  • Second, rethink your existing people, processes, and technology (sorry for the classic IT management cliché) in terms of the cloud. You will find the control objectives don’t change, but how you accomplish them will.
  • Third, start making the necessary investments to prepare your organization for the transition to the cloud that is likely already underway.

While there are many areas covered in the above CSA document, let me focus on one area that in particular highlights some cloud specific security challenges, specifically around Identity and Access Management.

The CSA document says it well, “While an enterprise may be able to leverage several Cloud Computing services without a good identity and access management strategy, in the long run extending an organization’s identity services into the cloud is a necessary precursor towards strategic use of on-demand computing services.”  Issues such as user provisioning, authentication, session management, and authorization are not new issues to security professionals.  However, accomplishing them in the context of the cloud requires that the identity management systems that are on-premise in the enterprise automatically “dance” with the equivalent systems at the various cloud service providers.  This dance is best choreographed through the use of standards, such as SAML, XACML, and others.  In fact the rise of the cloud also raises the possibility of outsourcing even some of your identity management services, such as multi-factor authentication, access management, and other capabilities to specialized cloud security providers.
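As a rough illustration of one small step in that “dance”, the Python sketch below checks the audience and validity window of a SAML 2.0 assertion received from an identity provider. It is deliberately simplified: a real relying party must also verify the assertion’s XML signature and handle many more conditions, and should rely on a vetted SAML library rather than hand-rolled code like this:

    # Simplified illustration of relying-party checks on a SAML 2.0 assertion.
    # Signature verification is deliberately omitted; do NOT use as-is.
    from datetime import datetime, timezone
    import xml.etree.ElementTree as ET

    SAML_NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}
    TIME_FMT = "%Y-%m-%dT%H:%M:%SZ"

    def assertion_is_acceptable(assertion_xml, expected_audience, now=None):
        now = now or datetime.now(timezone.utc)
        root = ET.fromstring(assertion_xml)
        conditions = root.find("saml:Conditions", SAML_NS)
        if conditions is None:
            return False
        # Check the validity window (NotBefore / NotOnOrAfter attributes).
        not_before = conditions.get("NotBefore")
        not_after = conditions.get("NotOnOrAfter")
        if not_before and now < datetime.strptime(not_before, TIME_FMT).replace(tzinfo=timezone.utc):
            return False
        if not_after and now >= datetime.strptime(not_after, TIME_FMT).replace(tzinfo=timezone.utc):
            return False
        # Check that the assertion was actually issued for this service.
        audiences = [a.text for a in root.findall(
            "saml:Conditions/saml:AudienceRestriction/saml:Audience", SAML_NS)]
        return expected_audience in audiences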

While in the short term it would seem that the emergence of some security silos is inevitable with organizations’ aggressive move to the cloud, it doesn’t have to be this way forever. We know security silos are bad, we know how to avoid them, and we have much of the necessary technology already available to eliminate them. What remains is to take action.

Matthew Gardiner is a Director working in the Security business unit at CA Technologies. He is a recognized industry leader in the security & Identity and Access Management (IAM) markets worldwide. He is published, blogs, and is interviewed regularly in leading industry media on a wide range of IAM, cloud security, and other security-related topics. He is a member of the Kantara Initiative Board of Trustees. Matthew has a BSEE from the University of Pennsylvania and an SM in Management from MIT’s Sloan School of Management.  He blogs regularly at: http://community.ca.com/members/Matthew-Gardiner.aspx and also tweets @jmatthewg1234.  More information about CA Technologies can be found at www.ca.com.

Multi-tenancy and bad landlords

So there’s been a lot of discussion about multi-tenancy recently and what it means for cloud providers and users. To put it simply: multi-tenancy is highly desirable to providers because they can provide a service or a platform (such as WordPress) and cram a kajillion users into it without having to constantly customize it, modify it or otherwise do much work to sell it individually. The reality is that whether or not users like multi-tenancy, the providers love it, so it’s here to stay.

So what happens when you have a bad, or just unlucky landlord? In the last few months WordPress.com has had a number of outages:

What Happened: We are still gathering details, but it appears an unscheduled change to a core router by one of our datacenter providers messed up our network in a way we haven’t experienced before, and broke the site. It also broke all the mechanisms for failover between our locations in San Antonio and Chicago. All of your data was safe and secure, we just couldn’t serve it.

And more recently:

If you tried to access TechCrunch any time in the last hour or so, you probably noticed that it wasn’t working at all. Instead, you were greeted by the overly cheery notice “WordPress.com will be back in a minute!” Had we written that message ourselves, there would have been significantly more profanity.

http://techcrunch.com/2010/06/10/wordpress-gives-us-the-vip-treatment-goes-down-on-us-again/

So what can we do to support this leg (availability) of the A-I-C triad of information security?

I don’t honestly know. It’s such a service/provider specific issue (do they control DNS? do you control DNS? can you redirect to another provider with the same service who has a recent copy of your data? If you do so can you then export any updates/orders/etc. back to your original provider when they come back? etc.) that pretty much any answer you’ll get is useless unless it’s specifically tailored to that provider or service.
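That said, most of the tailored answers share a generic skeleton when you do control DNS and keep a warm standby at another provider: poll the primary from outside its network and repoint DNS after repeated failures. The Python sketch below assumes exactly that; the health-check URL and the point_dns_at_standby() hook are placeholders for whatever your DNS host’s update API actually looks like.

    # Generic availability-watchdog skeleton; everything here is a placeholder.
    import time
    import urllib.request

    PRIMARY_HEALTH_URL = "https://blog.example.com/health"  # placeholder
    FAILURES_BEFORE_FAILOVER = 3

    def primary_is_up(timeout=5):
        try:
            with urllib.request.urlopen(PRIMARY_HEALTH_URL, timeout=timeout) as resp:
                return resp.status == 200
        except OSError:
            return False

    def point_dns_at_standby():
        # Placeholder: call your DNS provider's update API here.
        print("Failing over: repointing DNS at the standby provider")

    def monitor(poll_seconds=60):
        failures = 0
        while True:  # run forever, e.g. under a process supervisor
            if primary_is_up():
                failures = 0
            else:
                failures += 1
                if failures == FAILURES_BEFORE_FAILOVER:
                    point_dns_at_standby()
            time.sleep(poll_seconds)

Even then you still need the standby to hold a recent copy of your data and a plan for merging changes back, which is where the provider-specific pain really lives.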

If you have an answer to this, please post it in the comments.

Backups and security for cloud applications

Backups, the thing we all love to hate, and hate to love. Recreating data is rarely cheap, especially if it involves detailed analysis and combination. So we back it up.

Take, for example, this blog: it’s based on WordPress, which is about as standard and supported as you can get for a blog. Backing up the entire blog isn’t that bad: just grab a copy of the database and you are mostly good to go, except for minor things like custom web pages and CSS files. So what is one to do? Well, the obvious thought is to outsource your cloud service backups to a cloud service backup service.

http://blog.vaultpress.com/2010/03/30/announcing/
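If you would rather keep the do-it-yourself route described above, the “database plus files” backup is easy to script. The Python sketch below assumes a typical self-hosted install with MySQL and mysqldump available; the database name, backup user and wp-content path are placeholders:

    # DIY WordPress backup: dump the database and archive wp-content
    # (themes, plugins, uploads, custom CSS). Paths and names are placeholders.
    import datetime
    import subprocess
    import tarfile

    DB_NAME = "wordpress"                     # placeholder
    DB_USER = "wp_backup"                     # placeholder, read-only account
    WP_CONTENT = "/var/www/blog/wp-content"   # placeholder path

    def backup(dest_dir="/backups"):
        stamp = datetime.date.today().isoformat()
        sql_path = "{}/wordpress-{}.sql".format(dest_dir, stamp)
        tar_path = "{}/wp-content-{}.tar.gz".format(dest_dir, stamp)

        # Dump the database (password supplied via ~/.my.cnf, not the CLI).
        with open(sql_path, "w") as out:
            subprocess.run(["mysqldump", "-u", DB_USER, DB_NAME],
                           stdout=out, check=True)

        # Archive the content directory.
        with tarfile.open(tar_path, "w:gz") as tar:
            tar.add(WP_CONTENT, arcname="wp-content")

        return sql_path, tar_path

    if __name__ == "__main__":
        print("Backed up to:", backup())

Of course this only works if you host the blog yourself; on WordPress.com you are back to exporting whatever the service lets you export.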

Update: Trend Micro appears to be getting in on the secure online backup thing.

http://www.mspmentor.net/2010/06/14/trend-micro-cloud-storage-and-saas-security-converge/

Put your chauffeur on the upgrade treadmill

I don’t know if anyone here remembers the “Billion Dollar Brain” by Len Deighton. One scene that stuck with me is General Midwinter making his minion (a chauffeur or bodyguard, I can’t remember which) do his time on the exercise bike for him and asking “how many miles did we bike today?”

Wouldn’t it be great if we were all rich enough to hire someone to do the horrible chores that have to be done every day (or weekly) like exercising in order to keep our bodies fit?

This is one of the more appealing aspects of Software-as-a-Service (SaaS). In fact, this blog is a perfect example of upgrade and maintenance avoidance. Rather than hosting the blog in-house and having to maintain and upgrade WordPress every few weeks, we decided to simply outsource it to WordPress.com. Now there are some downsides; we can’t run all the plugins we’d like to (basically you get what WordPress gives you and you learn to like it), but on the upside I will never have to upgrade a WordPress plugin or WordPress itself ever again (falling behind on those upgrades is a security disaster waiting to happen, as many have found out).

News roundup for May 28 2010

Financial Services Like The Cloud, Provided It’s Private – http://www.informationweek.com/cloud-computing/blog/archives/2010/05/financial_servi.html

Novell Identity Manager extended to cloud – http://www.computerworlduk.com/technology/applications/software-service/news/index.cfm?newsid=20357

Amazon CEO Jeff Bezos: Cloud services can be as big as retail business – http://www.zdnet.com/blog/btl/amazon-ceo-jeff-bezos-cloud-services-can-be-as-big-as-retail-business/35111

Software evaluation 2.0 ?

I spend a lot of time evaluating software: for product reviews, to see which versions are vulnerable to various exploits, and sometimes just to see if I should be using it. Most often this looks something like: find the software, download it, find the install and configuration documents, walk through them, find an error or two (documentation always seems to be out of date), fiddle with dependencies (database settings, etc.), finally get it mostly working (I think), and then test it out. I’m always wondering in the back of my mind if I’ve done it properly or if I’ve missed something, especially when it comes to performance issues (is the software slow, or did I mess up the database settings and not give it enough buffers?).

But it seems that some places are finally taking note of this and making their software available as a VM, fully configured and working: no fuss or mess, just download it, run it, and play with the software. Personally I love this, especially for large and complex systems that require external components such as a database server, message queues and so on. No more worries about configuring all the add-on bits, or making sure you have a compatible version and so on.

This really dovetails nicely with an increasing reliance on cloud computing: instead of buying a software package and having to maintain it, you can buy a VM image with the software, essentially getting SaaS levels of configuration and support while keeping IaaS levels of control (you can run it in house, behind a VPN, you can configure it specially for your needs if you have to, etc.). The expertise needed to properly configure the database (which varies hugely between databases and depends on what the product needs: latency? memory? bulk transfers? special character encoding?) is provided by the vendor, who (at least in theory) knows best. I also think vendors will start to appreciate this: the tech support and time needed to guide customers through install, configuration and integration with other services and components can be replaced by “OK sir, I need you to upload the image to your VMware server (or EC2 or whatever) and turn it on… OK, now log in as admin and change the password. OK, we’re done.”

Security is also a lot easier. The vendor can test patches knowing with certainty that they will work on customers’ systems, since the customers have the exact same system as the vendor, and customers can apply patches with a higher degree of confidence, knowing that they were tested in the same environment.

Reddit ships code as fully functional VM. – http://blog.reddit.com/2010/05/admins-never-do-what-you-want-now-it-is.html

Update: CERT releases fuzzing framework as a VMware image – http://threatpost.com/en_us/blogs/cert-releases-basic-fuzzing-framework-052710
http://www.cert.org/download/bff/

Counterfeit gear in the cloud

One of the best and worst things about outsourced cloud computing (as opposed to in house efforts) is the ability to spend more time on what is important to you, and leave things like networking infrastructure, hardware support and maintenance and so on to the provider. The thing I remember most about system and network administration is all the little glitches, some of which weren’t so little and had to be fixed right away (usually at 3 in the morning). One thing I love about outsourcing this stuff is I no longer have to worry about network infrastructure.

Assuming, of course, that the cloud provider does a good job. The good news here is that network availability and performance are really easy to measure, and really hard for a cloud provider to hide. Latency is latency, and you generally can’t fake low-latency networks (although if you can, please let me know! We’ll make millions). Ditto for bandwidth: either the data transfers in 3 minutes or 4 minutes, a provider can’t really fake that either. Reliability is a little tougher since you have to measure it continuously to get good numbers (are there short but total outages, longer “brownouts” with reduced network capacity, or is everything actually working fine?). But none of this takes into account or allows us to predict the type of catastrophic failures that result in significant downtime.
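For what it’s worth, those measurements are easy to script yourself rather than taking the provider’s word for it. The Python sketch below uses two placeholder URLs (a small “ping” object and a large object of known size) and is meant to run continuously, say from cron, so that brownouts show up instead of one lucky snapshot:

    # Rough-and-ready latency, bandwidth and availability checks.
    # The URLs are placeholders for objects hosted at your cloud provider.
    import time
    import urllib.request

    PING_URL = "https://cloud.example.com/ping"               # small response
    LARGE_OBJECT_URL = "https://cloud.example.com/100MB.bin"  # known size

    def measure_latency():
        start = time.monotonic()
        with urllib.request.urlopen(PING_URL, timeout=10) as resp:
            resp.read()
        return time.monotonic() - start

    def measure_bandwidth():
        start = time.monotonic()
        with urllib.request.urlopen(LARGE_OBJECT_URL, timeout=300) as resp:
            size = len(resp.read())
        return size / (time.monotonic() - start)  # bytes per second

    def measure_availability(samples=10, pause=5):
        up = 0
        for _ in range(samples):
            try:
                with urllib.request.urlopen(PING_URL, timeout=10):
                    up += 1
            except OSError:
                pass
            time.sleep(pause)
        return up / samples

    if __name__ == "__main__":
        print("latency: {:.1f} ms".format(measure_latency() * 1000))
        print("bandwidth: {:.1f} MB/s".format(measure_bandwidth() / 1e6))
        print("availability (10 samples): {:.0%}".format(measure_availability()))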

One way providers deal with this potential problem is simple: they buy good name-brand gear with support contracts that guarantee replacement times, how long it will take an engineer to show up, and so on. But this stuff is expensive. So what happens if a cloud provider finds, or is offered, name-brand equipment at reduced or even really cheap prices (this does happen legitimately; a company goes bust and stuff is sometimes sold off cheap)? If that gear turns out to be counterfeit, it isn’t under a support contract and is not built to the same specs as the real thing, meaning it is more likely to fail or suffer problems, causing you grief.

How do you, the cloud provider customer, know that your provider isn’t accidentally (or otherwise) buying counterfeit network gear?

Well, short of a physical inspection and phoning in the serial numbers to the manufacturer, you won’t. Unfortunately I can’t think of any decent solutions to this, so if you know of any or have any ideas, feel free to leave comments or email me, [email protected].

Feds shred counterfeit Cisco trade – With a new conviction today, the federal action known as Operation Network Raider has resulted in 30 felony convictions and more than 700 seizures of counterfeit Cisco network hardware with an estimated value of more than $143 million.

-By Layer 8, Network World

Yikes.

Amazon AWS – 11 9’s of reliability?

Amazon recently added a new redundancy option to its S3 data storage service. Amazon now claims that data stored in the “durable storage” class is 99.999999999% “durable” (not to be confused with availability – more on this later).

“If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so. This storage is designed in such a way that we can sustain the concurrent loss of data in two separate storage facilities.”

http://aws.typepad.com/aws/2010/05/new-amazon-s3-reduced-redundancy-storage-rrs.html – Jeff;

So how exactly does Amazon arrive at this claim? Well, reading further, they also offer a “REDUCED_REDUNDANCY” storage class (which is 33% cheaper than normal) that guarantees 99.99% and is “designed to sustain the loss of data in a single facility.” From this we can extrapolate that Amazon is simply storing the data in multiple physical data centers: if the chance of any one of them becoming unavailable (burning down, cable cut, etc.) is something like 0.01%, then storing at two data centers means roughly a 0.000001% chance that both fail at the same time (or, on the flip side, a 99.999999% durability figure), and three data centers give an even smaller chance of loss, and so on. I’m not sure of the exact numbers that Amazon is using, but you get the general idea: a small chance of failure, combined with multiple locations, makes for a very, very small chance of failure at all the locations at the same time.
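Here is that extrapolation made explicit. The arithmetic rests entirely on one assumption, that facility failures are independent, which is precisely the assumption questioned below; the 0.01% single-facility figure is the illustrative number from above, not Amazon’s actual one:

    # Durability from replication, assuming *independent* facility failures.
    # The single-facility number is illustrative, not Amazon's.
    single_facility_failure = 0.0001  # 0.01%, i.e. one facility is 99.99% durable

    for facilities in (1, 2, 3):
        p_all_fail = single_facility_failure ** facilities  # independence assumed
        durability = 1 - p_all_fail
        print("{} facility(ies): durability ~ {:.10%}".format(facilities, durability))

    # Output:
    # 1 facility(ies): durability ~ 99.9900000000%
    # 2 facility(ies): durability ~ 99.9999990000%
    # 3 facility(ies): durability ~ 99.9999999999%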

Except there is a huge gaping hole in this logic. To expose it, let’s revisit history, specifically the Hubble Space Telescope. The Hubble Space Telescope can be pointed in specific directions using six onboard gyroscopes. By adding momentum to a single gyroscope or applying the brakes to it, you can cause Hubble to spin clockwise or counter-clockwise on a single axis. With two of these gyroscopes you can move Hubble in three axes to point anywhere. Of course, having three sets of gyroscopes makes maneuvering it easier, and having spare gyroscopes ensures that a failure or three won’t leave you completely unable to point Hubble at interesting things.

But what happens when you have a manufacturing defect in the gyroscopes, specifically the use of regular air instead of inert nitrogen during their manufacture? Well, having redundancy doesn’t do much good, since the gyroscopes start failing in the same manner at around the same time (almost leaving Hubble useless, if not for a servicing mission).

The lesson here is that having redundant and backup systems that are identical to the primary systems may not increase the availability of the overall system significantly. And I’m willing to bet that Amazon’s S3 data storage facilities are near carbon copies of each other with respect to the hardware and software they use (to say nothing of configuration, access controls, authentication and so on). A single flaw in the software, for example a bug that results in loss or mangling of data, may hit multiple sites at the same time as the bad data is propagated. Alternatively, a security flaw in the administrative end of things could let an attacker gain access to and start deleting data from the entire S3 “cloud”.

You can’t just take the chance of failure and square it for two sites if the two sites are identical. The same goes for 3, 4 or 27 sites. Oh, and also read the fine print: “durability” means the data is stored somewhere, but Amazon makes no claims about availability or whether or not you can get at it. Something to keep in mind as you move your data into the cloud.

Season’s Greetings from the CSA!

By Zenobia Godschalk

2009 has been a busy year for the CSA, and 2010 promises to be even more fruitful. The alliance is now 23 corporate members strong and is affiliated with numerous leading industry groups (such as ISACA, OWASP and the Jericho Forum) to help advance the goal of cloud security. Below is a recap of recent news and events, as well as upcoming events. We have had a tremendous response to the work to date, and this month we will release version two of our guidance. Thanks to all our members for their hard work in our inaugural year!

RECENT NEWS

 

CSA and DMTF

The CSA and DMTF have partnered to help coordinate best practices for Cloud security. More details here: http://www.cloudsecurityalliance.org/pr20091201.html

Cloud Security Survey

The Cloud Security survey is still open for responses! Here’s your chance to influence cloud security research. Survey takes just a few minutes, and respondents will receive a free, advance copy of the results.

http://www.surveymonkey.com/s.aspx?sm=VqH8jHHwc9GhANj3EzDl1g_3d_3d

Computerworld: Clear Metrics for Cloud Security? Yes, Seriously

http://www.computerworld.com/s/article/9141010/Clear_Metrics_for_Cloud_Security_Yes_Seriously

The Cloud Computing Show (featuring an interview with CSA’s Chris Hoff)

http://cloudcomputingshow.blogspot.com/2009/11/cloud-computing-show-20.html

 

For more cloud security news, check out the press page on the CSA site.

 

RECENT EVENTS

 

State of California

At the end of October CSA was invited to present to Information Security professionals of the State of California. During this 2-day state-sponsored conference we provided education and transparency into CSA’s research around Cloud Security and how the federal government is using cloud deployments.

CSI DC

Also at the end of October, the CSA participated in a Cloud Security workshop during the annual CSI Conference in DC.

India Business Technology Summit

In November, Nils Puhlmann, co-founder of the CSA, presented to an audience of 1,400 at the annual India Business Technology Summit. Not only did he address the audience in a keynote, but he also delivered the CSA message and learnings in a workshop at the India Institute of Science and Technology in Bangalore. Puhlmann also participated in a follow-up panel in Mumbai at the India Business Technology Executive Summit.

Conference of ISMS Forum

In December CSA was represented at the 6th International Conference of ISMS Forum in Seville. Nils Puhlmann delivered a keynote and moderated a panel looking into the future of information security and how new technologies like cloud computing might affect our industry.

CSA and ISMS also signed an MOU to cooperate more closely. The event and CSA’s participation were covered in the prominent Spanish newspaper Cinco Días, the business supplement of El País.

ISMS and CSA also started activities to launch a Spanish Chapter of CSA to better address the unique and local issues around secure cloud adoption in Spain.

UPCOMING NEWS AND EVENTS

Guidance V2

The second version of the security guidance for critical areas of focus in cloud computing is coming soon! Watch for the next version of the guidance to be released this month.

 

SecureCloud 2010

Registration is now open for SecureCloud 2010, a joint conference with ENISA and ISACA being held in Barcelona, March 16th and 17th.

http://www.cloudsecurityalliance.org/sc2010.html

Cloud Security Alliance Summit

In addition, the Cloud Security Alliance Summit will be held in conjunction with the RSA Conference in San Francisco on March 1. Further details are below; check back on the CSA website for more updates coming soon!

Cloud Security Alliance Summit

March 1, 2010 – San Francisco, Moscone Center

The next generation of computing is being delivered as a utility.

Cloud Computing is a fundamental shift in information technology utilization, creating a host of security, trust and compliance issues.

The Cloud Security Alliance is the world’s leading organization focused on the cloud, and has assembled top experts and industry stakeholders to provide authoritative information about the state of cloud security at the Cloud Security Alliance Summit. This half-day event will provide broad coverage of cloud security domains and available best practices for governance, legal, compliance and technical issues. From encryption and virtualization to vendor management and electronic discovery, the speakers will provide guidance on key business and operational issues. We will also present the latest findings from the CSA working groups on Cloud Threats, Metrics and Controls Mappings.