The Cloud in the Fight Against Cyber-Bullying

By the Cybersecurity International Institute (CSI)

Learn about an upcoming, innovative social project that uses a cloud platform to combat cyber-bullying.

The CSI Institute (Cybersecurity International Institute) is a non-governmental, not-for-profit organization. Our goal is to contribute to the information, education, and overall practical awareness of citizens on new technologies, online safety, and cybersecurity issues. In this context, we aim to advance scientific research in the field of modern technology, with an emphasis on cybercrime and online threats (viruses, etc.).

The antibullying project is an original and innovative action, exclusively established by the CSI Institute. It is the first initiative of its kind, as no comparable action has been undertaken internationally with the potential to impact the community worldwide. As a first implementation step, the initiative will focus on supporting awareness, education, and prevention of bullying and cyber-bullying in all Greek schools.

The project aims to open a communication channel with teens and pre-teens across the whole country, to reduce the dramatic dimensions of bullying and cyberbullying. As many people might know, some of the countless negative effects of bullying and cyberbullying include depression, anxiety, social phobia, loneliness, isolation, panic attacks, difficulty concentrating and paying attention, substance use, eating disorders, online grooming, trafficking, self-injurious tendencies and behaviors, and even suicidal intentions.

In the antibullying project, students from every class in schools throughout the country will be elected as Anti-Bullying Ambassadors. Their role will be to pass on the knowledge they acquire from the CSI Institute to their environment. They will also report any dangerous activities or behaviors in the school or digital environment to the dedicated digital center that has been set up. In this way, students will receive first-aid psychological assistance and support from our specialized scientists. The target group for this action is students in Greece, ranging in age from the third grade of primary school to the third grade of senior high school. The aim is to inform, educate, and raise awareness in the Greek educational system and then expand the operation internationally, starting in Europe and continuing globally.

This action has generated a great deal of interest among many international organizations, and it is initially estimated to cost 150,000 euros. This figure includes the cost of setting up the whole project (computer systems, digital cloud platforms, and the yearly staff expenses for the experts who will be hired).

This action will be undertaken by any organization or group that shows an interest in supporting its full implementation and operation. Once implemented, the minimum cost per year will be approximately 90,000 euros. Additional costs will include the daily wages of the six specialized scientists, maintenance of the digital platform, and any national or international travel to places where there is increased interest due to multiple cases of bullying.

Our final goal is to have a positive impact internationally and reduce the number of bullying and cyber-bullying incidents, as well as their harmful effects.

If you want to receive more information, please send us an email at: [email protected]

Bitglass Security Spotlight: Uber, Apollo, & Chegg

By Jacob Serpa, Product Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

—Uber fined $148 million over cover-up
—Apollo database of 200 million contacts breached
—Chegg hack exposes 40 million users’ credentials
—Port of San Diego faces cyberattack

Uber fined $148 million over cover-up

In late 2016, Uber suffered a breach at the hands of hackers who were looking to infiltrate one of the company’s cloud services. However, instead of reporting the event as required, Uber paid the culprits $100,000 and elected to keep silent about the attack. Since then, all fifty states, as well as the District of Columbia, have sought legal action against the company, culminating in a fine of $148 million.

Apollo database of 200 million contacts breached

Apollo, a well-known sales engagement startup, recently had its database of 200 million contacts breached by malicious parties. Unfortunately, as detailed in the message that the company sent to the individuals whose information was exposed, the breach did take a number of weeks to detect. As massive damage can be done in a matter of moments, organizations must employ real-time security measures if they want to avoid a similar fate.

Chegg hack exposes 40 million users’ credentials

Chegg was recently found to have been breached by unauthorized users seeking to steal sensitive information. While it is believed that no Social Security numbers were stolen, data that was successfully exfiltrated included users’ names, usernames, passwords, email addresses, shipping addresses, and more. Unfortunately, the breach, which occurred in April of 2018, took months to detect, giving hackers plenty of time to pursue their malicious ends. The company has since reset the affected users’ passwords.

Port of San Diego faces cyberattack

Within a week of the cyberattack on the Port of Barcelona in Spain, another assault was launched upon the Port of San Diego. This pair of cyberattacks highlights the reality that hackers can target infrastructure and have widespread, adverse repercussions for organizations around the world. Fortunately, this particular attack affected only land-based operations at the port. The causes have yet to be discovered.

Learn about cloud access security brokers (CASBs) and how they can protect your enterprise from threats in the cloud and download the Definitive Guide to CASBs.

Bitglass Security Spotlight: Veeam, Mongo Lock, Password Theft, Atlas Quantum & the 2020 Census

By Jacob Serpa, Product Manager, Bitglass

Here are the top cybersecurity headlines of recent weeks:
—440 million email addresses exposed by Veeam
—Unprotected MongoDB databases being targeted
—42 million emails, passwords, and more leaked
—Cold-boot attacks steal passwords and encryption keys
—2 billion devices still vulnerable to Bluetooth attack
—Atlas Quantum, cryptocurrency platform, breached
—Security concerns around the 2020 census
—Air Canada’s mobile app breached
—WellCare breach exposes data of 20k children

440 million email addresses exposed by Veeam

Data management company Veeam has ironically mismanaged hundreds of millions of users’ data. A public-facing database exposed 440 million users’ email addresses, names, and, in some circumstances, IP addresses. While this leak may seem innocuous, names and email addresses are all that is needed to conduct targeted spear phishing attacks.

Unprotected MongoDB databases being targeted

The rise of the Mongo Lock attack is seeing improperly secured, poorly configured MongoDB databases targeted in a ransomware-like fashion. In these attacks, hackers scan for publicly accessible databases, remove their contents, and demand a Bitcoin ransom in exchange for having the data returned.
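The usual root cause is a MongoDB instance bound to a public interface with authentication disabled. As a minimal hardening sketch (a partial `mongod.conf` using MongoDB's standard YAML options; adjust interfaces and ports for your deployment):

```yaml
# Partial mongod.conf — hedged sketch: require authentication and
# keep the database off public interfaces.
net:
  bindIp: 127.0.0.1   # listen only on localhost (add private IPs as needed)
  port: 27017
security:
  authorization: enabled   # clients must authenticate before reading or writing
```

Combined with regular backups, these two settings defeat the opportunistic scanning that Mongo Lock relies on.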

42 million emails, passwords, and more leaked

A public hosting service that allows individuals to upload files for free was recently found to contain a massive amount of personal data. Over 42 million email addresses and passwords, as well as partial credit card numbers, were found within the platform. As noted in the Veeam section, hackers can easily use this type of data to conduct targeted spear phishing campaigns and steal more sensitive information.

Cold-boot attacks steal passwords and encryption keys

A new cold-boot attack can take information in under two minutes from unsuspecting victims. The attack, which is further detailed at the above link, involves stealing information from RAM, or random access memory. Through this tactic, passwords and even encryption keys can be stolen. Fortunately, hackers need physical access to a computer to execute this kind of technique. Rather than allowing a system to sleep, forcing it to hibernate or shut down is a helpful defense.

2 billion devices still vulnerable to Bluetooth attack

One year ago, BlueBorne, a collection of vulnerabilities in devices that leverage Bluetooth, was revealed. Unfortunately, despite the fact that an entire year has gone by, 2 billion devices remain exposed. This is due to systems that have not been patched, systems that cannot be patched, and more.

Atlas Quantum, cryptocurrency platform, breached

Well-known cryptocurrency platform Atlas Quantum was recently found to have been breached. 261,000 of the company’s users had their names, account balances, email addresses, and phone numbers exposed. While the company initially declined to disclose the circumstances surrounding the breach, it did state that users’ cryptocurrency holdings were safe – only their information was stolen.

Security concerns around the 2020 census

In the US, the Government Accountability Office has concerns about the cybersecurity of the Census Bureau. The bureau is reported to have thousands of security vulnerabilities – dozens of which are identified as highly risky and dangerous. Naturally, as conducting a census involves collecting data from countless citizens, these security gaps must be filled before the next census in 2020.

Air Canada’s mobile app breached

Late last month, Air Canada’s mobile app was found to have been breached. While only 1% of the application’s 1.7 million users were affected, that still amounts to 20,000 individuals whose names, phone numbers, passport numbers, and dates of birth were exposed.

WellCare breach exposes data of 20k children

In WellCare Health Plans’ recent breach, 20,000 children had their PHI (protected health information) exposed. The information’s security was compromised when WellCare accidentally mailed letters to the wrong addresses. Exposed data included children’s names, ages, and healthcare providers.

Learn about cloud access security brokers (CASBs) and how they can defend against the rising tide of data breaches.


Bitglass Security Spotlight: Yale, LifeLock, SingHealth, Malware Evolving & Reddit Breached

By Jacob Serpa, Product Manager, Bitglass

Here are the top cybersecurity headlines of recent months:

—Future malware to recognize victims’ faces
—Reddit suffers breach
—6 million records of Georgian voters exposed
—RASPITE Group attacks US infrastructure
—Decade-old breach at Yale uncovered
—Bug exposes LifeLock customer data
—Patient data of 1.5 million exposed in SingHealth breach
—Tesla, GM, Toyota, and others expose 157 GB of data
—COSCO hit with ransomware attack

Future malware to recognize victims’ faces

Malware is poised to continue its evolution and deploy newer, more advanced capabilities. In particular, it is believed that threats will leverage artificial intelligence in order to become increasingly context aware. For example, malware may soon employ facial recognition that uses an individual’s appearance to trigger an attack.

Reddit suffers breach

Early last month, a hacker was discovered to have breached Reddit’s systems and stolen a variety of user data, including email addresses, passwords, and private messages. While the breached data came from an unsecured database containing information from 2005 to 2007, the incident still highlights the importance of maintaining constant visibility and control over data.

6 million records of Georgian voters exposed

Voters in Georgia recently had their personal information exposed when the office of the Secretary of State granted various parties access to voter registration data in an unsecured fashion. This data included dates of birth, driver’s license numbers, and Social Security numbers. If the data were obtained by nefarious individuals, widespread identity theft could ensue very easily.

RASPITE Group attacks US infrastructure

Since 2017, the RASPITE Group has been a cybersecurity threat that has attacked nations around the world. Countries in the Middle East, Asia, and Europe have all suffered. Recently, the cybercriminal group was tied to Iran and found to be targeting electric utility companies in the US. Naturally, these organizations must have adequate defenses in place.

Decade-old breach at Yale uncovered

About ten years ago, Yale University suffered a breach. Unfortunately, at the time, the intrusion was not detected. Alumni and various faculty and staff had information like Social Security numbers exposed. This event highlights the need for proactive cybersecurity measures as well as constant threat monitoring.

Bug exposes LifeLock customer data

In an ironic twist of fate, LifeLock, an organization built upon defending customers from identity theft, was found to have exposed its users’ email addresses through a bug. The company’s users are now more vulnerable to targeted phishing attacks that imitate communications from LifeLock.

Patient data of 1.5 million exposed in SingHealth breach

Singaporean healthcare organization SingHealth was recently breached – much to the ire of those in the country pushing for Singapore to become a cloud-first nation. The cybersecurity incident exposed sensitive information belonging to 1.5 million patients, including 160,000 whose prescription details were stolen.

Tesla, GM, Toyota, and others expose 157 GB of data

Leading automotive companies (Ford, Volkswagen, and many others) were recently found to have extensive amounts of proprietary information publicly available online. The data was reportedly exposed by poorly configured rsync servers, demonstrating, once again, the importance of maintaining a robust and detail-oriented security posture.

COSCO hit with ransomware attack 

As one of the biggest shipping enterprises in the world, COSCO sends countless goods around the globe every day. Unfortunately, the company was recently hit with a ransomware attack that harmed some of its US operations. While the company has since responded to the attacks, ransomware continues to represent an imposing threat for businesses everywhere.

To learn about cloud access security brokers (CASBs) and how they can defend against malware, breaches, and more, download the Definitive Guide to CASBs.

Join CSA’s New DC Metro Area Chapter

The Cloud Security Alliance (CSA) is pleased to announce that its DC Metro Area chapter has been chartered to serve the DC metro area CSA membership.

The chapter’s region includes a diverse range of businesses, government organizations, and academic institutions that all have an interest in well-engineered, secure IT systems, including many heavily regulated industries such as the U.S. government, healthcare, and financial sectors. The new DC Metro Area chapter will influence policy around IT security regulations and privacy via position papers designed to modernize outdated regulations and create new regulations and policies in areas where none exist.

The chapter will also develop guidance for IT modernization incorporating security-by-design, providing vendor-neutral guidance to modernize heavily regulated IT systems. The CSA DC membership, coupled with the Government, Healthcare and Financial Sector Advisory Boards, will provide direction around topics for research within the chapter. We encourage you to join CSA-DC in its mission to modernize IT to securely move at the speed of business.

CSA’s newest chapter needs volunteers! If you’d like to help build this chapter, you can contact the DC Metro Area Chapter via its LinkedIn Group. Or, attend the chapter’s inaugural event on Sept. 27 to learn about CSA-DC and how you can join their mission to modernize IT to securely move at the speed of business.

Five Distinguished Security Experts to Keynote SecureCloud 2014

SecureCloud 2014 is just around the corner and the CSA is pleased to announce the keynote speaker lineup for this must-attend event, which is taking place in Amsterdam on April 1-2.

Secure Cloud Speakers

This year’s event will feature keynote addresses from the following five security experts on a wide range of cloud security topics:

  • Prof. Dr. Udo Helmbrecht, executive director of the European Network and Information Security Agency (ENISA) will speak on the uptake of Cloud computing in Europe and how ENISA supports Cloud Security in the Member States.
  • Prof. Dr. Reinhard Posch, CIO for the Austrian Federal Government will present on the European Cloud Partnership and Austrian Government approach to cloud
  • Alan Boehme, Chief of Enterprise Architecture for The Coca-Cola Company will present on the CSA Software Defined Perimeter initiative
  • Jim Reavis, CEO of the Cloud Security Alliance will discuss trends and innovation in cloud security and CSA activities in 2014
  • Richard Mogull, CEO of Securosis will give the closing keynote on Automation & DevOps

If you haven’t already registered, early bird discount pricing is being offered through February 14. Registration information can be found at:

We look forward to seeing all of you in Amsterdam in the Spring!

The Dark Side of Big Data: CSA Opens Peer Review Period for the “Top Ten Big Data and Privacy Challenges” Report

Big Data seems to be on the lips of every organization’s CXO these days. By exploiting Big Data, enterprises are able to gain valuable new insights into customer behavior via advanced analytics. However, what often gets lost amidst all the excitement are the many very real security and privacy issues that go hand in hand with Big Data. Traditional security mechanisms were simply never designed to deal with the reality of Big Data, which often relies on distributed, large-scale cloud infrastructures, a diversity of data sources, and the high volume and frequency of data migration between different cloud environments.

To address these challenges, the CSA Big Data Working Group released an initial report, The Top 10 Big Data Security and Privacy Challenges, at CSA Congress 2012. It was the first such industry report to take a holistic view of the wide variety of big data challenges facing enterprises. Since then, the group has been working to further its research, assembling detailed information and use cases for each threat. The result is the first Top 10 Big Data and Privacy Challenges report and, beginning today, the report is open for peer review, during which CSA members are invited to review and comment on the report prior to its final release. The 35-page report outlines the unique challenges presented by Big Data through narrative use cases and identifies the dimension of difficulty for each challenge.

The Top 10 Big Data and Privacy Challenges have been enumerated as follows:

  1. Secure computations in distributed programming frameworks
  2. Security best practices for non-relational data stores
  3. Secure data storage and transactions logs
  4. End-point input validation/filtering
  5. Real-time security monitoring
  6. Scalable and composable privacy-preserving data mining and analytics
  7. Cryptographically enforced data centric security
  8. Granular access control
  9. Granular audits
  10. Data provenance

The goal of outlining these challenges is to raise awareness among security practitioners and researchers so that industry-wide best practices might be adopted to address these issues as they continue to evolve. The open review period ends March 18, 2013. To review the report and provide comments, please visit .

Tweet this: The Dark Side of Big Data: CSA Releases Top 10 Big Data and Privacy Challenges Report.

CSA Releases CCM v 3.0

The Cloud Security Alliance (CSA) today released a draft of the latest version of the Cloud Controls Matrix, CCM v3.0. This latest revision to the industry standard for cloud computing security controls realigns the CCM control domains to achieve tighter integration with the CSA’s “Security Guidance for Critical Areas of Focus in Cloud Computing version 3” and introduces three new control domains. Beginning February 25, 2013, the draft version of CCM v3.0 will be made available for peer review through the CSA Interact website, with the peer review period closing March 27, 2013, and final release of CCM v3.0 on April 1, 2013.

The three new control domains – “Mobile Security”, “Supply Chain Management, Transparency and Accountability”, and “Interoperability & Portability” – address the rapidly expanding methods by which cloud data is accessed, the need to ensure due care is taken in the cloud provider’s supply chain, and the minimization of service disruptions in the face of changes to the cloud provider relationship.

The “Mobile Security” controls are built upon the CSA’s “Security Guidance for Critical Areas of Mobile Computing, v1.0” and are the first mobile-device-specific controls incorporated into the Cloud Controls Matrix.

The “Supply Chain Management, Transparency and Accountability” control domain seeks to address risks associated with governing data within the cloud, while the “Interoperability & Portability” domain brings to the forefront considerations for minimizing service disruptions in the face of a change in a cloud vendor relationship or an expansion of services.

The realigned control domains have also benefited from changes in language that improve the clarity and intent of each control; in some cases, controls have been moved within the expanded domains to ensure cohesiveness within each control domain and minimize overlap.

The draft of the Cloud Control Matrix can be downloaded from the Cloud Security Alliance website and the CSA welcomes peer review through the CSA Interact website.

The CSA invites all interested parties to participate in the peer review and the CSA Cloud Controls Matrix Working Group Meeting, to be held during the week of the RSA Conference, at 4pm PT on February 28, 2013, in the Franciscan Room of the Sir Francis Drake Hotel, 450 Powell St, San Francisco, CA.

Towards a “Permanent Certified Cloud”: Monitoring Compliance in the Cloud with CTP 3.0

Cloud services can be monitored for system performance but can they also be monitored for compliance? That’s one of the main questions that the Cloud Trust Protocol aims to address in 2013.

Compliance and transparency go hand in hand.

The Cloud Trust Protocol (CTP) is designed to allow cloud customers to query cloud providers in real-time about the security level of their service. This is measured by evaluating “security attributes” such as availability, elasticity, confidentiality, location of processing or incident management performance, just to name a few examples. To achieve this, CTP will provide two complementary features:

  • First, CTP can be used to automatically retrieve information about the security offering of cloud providers, as typically represented by an SLA.
  • Second, CTP is designed as a mechanism to report the current level of security actually measured in the cloud, enabling customers to be alerted about specific security events.

These features will help cloud customers compare competing cloud offerings to discover which ones provide the level of security, transparency and monitoring capabilities that best match the control objectives supporting their compliance requirements. Additionally, once a cloud service has been selected, the cloud customer will also be able to compare what the cloud provider offered with what was later actually delivered.

For example, a cloud customer might decide to implement a control objective related to incident management through a procedure that requires some security events to be reported back to a specific team within a well-defined time-frame. This customer could then use CTP to ask the maximum delay the cloud provider commits to for reporting incidents to customers during business hours. The same cloud customer may also ask for the percentage of incidents that were actually reported back to customers within that specific time-limit during the preceding two-month period. The first example is typical of an SLA while the second one describes the real measured value of a security attribute.
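The distinction between the committed (SLA) value and the measured value can be made concrete with a small sketch. The attribute names and figures below are invented for illustration and are not part of the CTP specification:

```python
# Hedged sketch: comparing an SLA commitment (static) with a measured
# security attribute (dynamic) for incident-report delays.
from datetime import timedelta

# SLA value: the provider commits to reporting incidents within 4 hours.
sla_max_report_delay = timedelta(hours=4)

# Measured values: actual report delays over the preceding two-month period.
observed_delays = [timedelta(hours=1), timedelta(hours=3),
                   timedelta(hours=6), timedelta(hours=2)]

# Percentage of incidents actually reported within the committed limit.
within_limit = sum(d <= sla_max_report_delay for d in observed_delays)
pct_within_limit = 100.0 * within_limit / len(observed_delays)
print(pct_within_limit)  # 3 of 4 incidents met the commitment: 75.0
```

The first number (4 hours) is what CTP would retrieve from the provider's offering; the second (75%) is what continuous monitoring would report back.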

CTP is thus designed to promote transparency and accountability, enabling cloud customers to make informed decisions about the use of cloud services, as a complement to the other components of the GRC stack. Real time compliance monitoring should encourage more businesses to move to the cloud by putting more control in their hands.

From CTP 2.0 to CTP 3.0

CTP 2.0 was born in 2010 as an ambitious framework designed by our partner CSC to provide a tool for cloud customers to “ask for and receive information about the elements of transparency as applied to cloud service providers”. CSA research has begun undertaking the task of transforming this original framework into a practical and implementable protocol, referred to as CTP 3.0.

We are moving fast and the first results are already ready for review. On January 15th, CSA completed a first review version of the data model and a RESTful API to support the exchange of information between cloud customers and cloud providers, in a way that is independent of any cloud deployment model (IaaS, PaaS or SaaS). This is now going through the CSA peer review process.

Additionally, a preliminary set of reference security attributes is also undergoing peer review. These attributes are an attempt to describe and standardize the diverse approaches taken by cloud providers to expressing the security features reported by CTP. For example, we have identified more than five different ways of measuring availability. Our aim is to make explicit the exact meaning of the metrics used. For example, what does unavailability really mean for a given provider? Is their system considered unavailable if a given percentage of users reports complete loss of service? Is it considered unavailable according to the results of some automated test to determine system health?
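As a toy illustration of why the exact metric definition matters, consider two providers computing "availability" for the same 30-day month from the same incident log. The incident data and the 50% threshold below are invented for illustration:

```python
# Hedged sketch: two plausible but different availability definitions
# yield different numbers for the same service month.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

# Incident log: (minutes of degradation, fraction of users reporting loss)
incidents = [(30, 1.00), (120, 0.02), (45, 0.60)]

# Definition A: any degradation counts as downtime.
down_a = sum(minutes for minutes, _ in incidents)
availability_a = 100.0 * (1 - down_a / MINUTES_PER_MONTH)

# Definition B: only incidents where more than 50% of users report
# complete loss of service count as downtime.
down_b = sum(minutes for minutes, frac in incidents if frac > 0.5)
availability_b = 100.0 * (1 - down_b / MINUTES_PER_MONTH)

print(round(availability_a, 3), round(availability_b, 3))  # 99.549 99.826
```

Both providers could truthfully advertise "99%+ availability," yet the numbers are not comparable until the underlying definition is made explicit – which is exactly what the reference attributes aim to do.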

As well as all this nice theory, we are also planning to get our hands dirty and build a working prototype implementation of CTP 3.0 in the second half of 2013.

Challenges and research initiatives

While CTP 3.0 may offer a novel approach to compliance and accountability in the cloud, it also creates interesting challenges.

To start with, providing metrics for some security attributes or control measures can be tricky. For example, evaluating the quality of vulnerability assessments performed on an information system is not trivial if we want results to be comparable across cloud providers. Other examples are data location and retention, which are both equally complex to monitor, because of the difficulty of providing supporting evidence.

As a continuous monitoring tool, CTP 3.0 is a nice complement to traditional audit and certification mechanisms, which typically only assess compliance at a specific point in time. In theory, this combination brings up the exciting possibility of a “permanently certified cloud”, where a certification could be extended in time through automated monitoring. In practice however, making this approach “bullet-proof” requires a strong level of trust in the monitoring infrastructure.

As an opportunity to investigate these points and several other related questions, CSA has recently joined two ambitious European Research projects: A4Cloud and CUMULUS. A4Cloud will produce an accountability framework for the entire cloud supply chain, by combining risk analysis, creative policy enforcement mechanisms and monitoring. CUMULUS aims to provide novel cloud certification tools by combining hybrid, incremental and multi-layer security certification mechanisms, relying on service testing, monitoring data and trusted computing proofs.

We hope to bring back plenty of new ideas for CTP!

Help us make compliance monitoring a reality!

A first draft of the “CTP 3.0 Data Model and API” is currently undergoing expert review and will then be opened to public review. If you would like to provide your expert feedback, please do get in touch!

by Alain Pannetrat 

[email protected]

Dr. Alain Pannetrat is a Senior Researcher at Cloud Security Alliance EMEA. He works on CSA’s Cloud Trust Protocol, providing monitoring mechanisms for cloud services, as well as CSA research contributions to EU-funded projects such as A4Cloud and CUMULUS. He is a security and privacy expert, specialized in cryptography and cloud computing. He previously worked as an IT Specialist for the CNIL, the French data protection authority, and was an active member of the Technology Subgroup of the Article 29 Working Party, which informs European policy on data protection. He received a PhD in Computer Science after conducting research at Institut Eurecom on novel cryptographic protocols for IP multicast security.



EMEA Congress Recap

The inaugural EMEA Congress in Amsterdam was an unqualified success, with hundreds of security visionaries in attendance and presentations from some of the leading voices across the cloud security landscape. What follows is just a sample of the discussions and some of the key takeaways from the two-day event:

EMEA Congress Presenters

  • Monica Josi, Microsoft’s Chief Security Adviser EMEA, presented on Microsoft’s compliance strategy, emphasizing the importance of a common mapping strategy to define compliance standards. Microsoft has mapped over 600 controls and 1500 audit obligations onto the ISO27001 framework and is using CSA’s CCM and ISO27001 to certify its Dynamics CRM, Azure, and Office365 platforms. It has also published all relevant documentation in the CSA’s STAR repository.
  • Chad Woolf, Global Risk and Compliance Leader for Amazon Web Services, highlighted the difference between security IN the cloud as opposed to security OF the cloud. According to Chad, security IN the cloud presents a much greater risk; he also discussed some of the different assurance mechanisms provided by AWS.
  • Data security and privacy expert Stewart Room provided an update on some of the more pressing legal issues facing cloud security, including a plea for more realistic legislation (e.g. subcontractor recommendations of Art 29 WP)
  • Mark O’Neill, CTO for Vordel gave an update on IDM standards, including oAuth 2.0 and OpenID Connect and how they fit into the cloud ecosystem. oAuth 2.0 is now a stable standard which can be used to give granular, revocable access control. It is lighter than SAML and therefore more suitable for mobile/REST scenarios.
  • Phil Dunkelberger made an impassioned call to arms for the industry to create a standard authentication protocol which would allow for the integration of appropriate authentication mechanisms into diverse services.
  • Jean-François Audenard, Cloud Security Advisor, for Orange Business Services presented their Secure Development Lifecycle that covers security and legal obligations, mitigation plans, security reviews and on-going operational security and the roles of their security Advisors, Architects and Managers in the lifecycle.

Panel Discussion Takeaways:

  • While Gartner has some 26 definitions for Cloud, according to Bruce Schneier it can be boiled down to the fact that it’s simply your data on somebody else’s hard disk that you access over the Internet!
  • Cloud provider specialization and reputation mean better security in many respects. As to the question of what can be more difficult in the cloud, forensics is a major issue (e.g., “freezing the crime scene”, confiscation of hardware, etc.)
  • As a customer, there is a lot you can and should do to monitor the cloud service provider (either independently and/or via executive dashboards). This also allows you to establish trust in smaller companies with less history.
  • Internal IT teams are not redundant. There are lots of security-related tasks that still need to be taken care of, especially with IaaS providers (e.g., credential management). The cloud gives many of these individuals the opportunity to perform higher-value tasks (e.g., security training of staff, service monitoring).
  • Business is consuming technology faster than IT can provide it; as a result, more internal business users are turning to external third-party and cloud vendors to process their information. For example, MARS Information Services uses a modified version of ISO 27001 (“ISO++”) and the CSA’s CCM to risk-assess its third-party vendors. As engagements move from IaaS to PaaS and SaaS, the level of risk increases, as more of the controls are handed over to the service provider.
  • Historically, organizations have been largely concerned with securing the network, not the information that resides on it. We now need to protect information based on the risk associated with the compromise of that data. As such, a risk-based approach to security requires data to be classified, at least at a high level.
  • Once data has migrated to the Cloud, access and authentication becomes key. Authentication is currently taken for granted (passport, room key, ID badge, airline ticket, cards), except online where credentials are often re-used. If they are compromised, all systems using those credentials are vulnerable.
  • As data moves to the Cloud, there will be situations that require data to be recovered in a forensically sound way. The use of multi-tenant environments across multiple jurisdictions introduces numerous e-discovery and chain-of-custody challenges that are yet to be solved.


“Great conference with a number of speakers that really provided up to date, timely and in-depth information” – Peter Demmink, Merck / MSD

“The CSA delivered an excellent intro to all the aspects of cloud security and compliance” – Albert Brouwer, AEGON




Will the Cloud Cause the Reemergence of Security Silos?

by: Matthew Gardiner

In most of the world, silos relate to things that are beneficial, such as silos for grain or corn.  In the world of IT security, however, silos are very bad.  In many forensic investigations, application silos turn up as a key culprit that enabled data leakage of one sort or another.  It is not that any one application silo is inherently a problem (one can repair and manage a single silo much as a farmer would); it is the existence of many silos, and silos of so many types, that is the core problem.  Farmers generally don’t use thousands of grain silos to handle their harvest; they have a handful of large, sophisticated, and centralized ones.

The same approach has proven highly effective in the world of application security, particularly since the emergence of the Web and its explosion of applications and users.  Managing security as a centralized service and applying it across large swaths of an organization’s infrastructure and applications is clearly a best practice.  However with the emergence of the Cloud as the hot application development and deployment platform going forward, organizations are at significant risk of returning to the bad days of security silos.  When speed overruns architecture, say hello to security silos and the weaknesses that they bring.

What do I mean by security silos?  I think of silos as application “architectures” that cause security (as well as IT management in general) to be handled in bits and pieces, uniquely within each specific platform or system.  Applications are built this way because it feels faster in the short term; after all, the project needs to get done.  But after this approach is executed multiple times, the organization is left with many inconsistent, custom, and diverse implementations and related security systems.  These systems are inevitably complex to operate and expensive to maintain, as well as easy to breach on purpose or by accident.

Perhaps this time it is different?  Perhaps IT complexity will magically decline with the Cloud?  Do you really think that the move to the Cloud is going to make the enterprise IT environment homogeneous and thus inherently easier to manage and secure?  Not a chance.  In fact, just the opposite is most likely.  How many organizations will move all of their applications and data to public clouds, let alone to a single public cloud provider?  Very few.  Given this, it is imperative that security architects put in place security systems that are designed to operate in a highly heterogeneous, hybrid (mixed public cloud and on-premise) world.  The cloud-connected world is one where applications and data will one day sit inside the organization on a traditional platform, the next day be hosted within the organization’s private cloud, the next day migrate to live within a public cloud service, and then move back again, based on what is best for the organization at that time.

Are security silos inevitable with the move to the Cloud?  In the short term, unfortunately, probably yes.  With every new IT architecture the security approach has to do some catch-up.  It is the security professionals’ job to make this catch-up period as short as possible.

How should we shorten the catch-up period?

  • First, update your knowledge base around the Cloud and security.  There are a lot of good sources out there; one in particular that I like is from the Cloud Security Alliance (CSA), Security Guidance for Critical Areas of Focus in Cloud Computing.
  • Second, rethink your existing people, processes, and technology (sorry for the classic IT management cliché) in terms of the cloud.  You will find that the control objectives don’t change, but how you accomplish them will.
  • Third, start making the necessary investments to prepare your organization for the transition to the cloud that is likely already underway.

While there are many areas covered in the above CSA document, let me focus on one that particularly highlights some cloud-specific security challenges: Identity and Access Management.

The CSA document says it well, “While an enterprise may be able to leverage several Cloud Computing services without a good identity and access management strategy, in the long run extending an organization’s identity services into the cloud is a necessary precursor towards strategic use of on-demand computing services.”  Issues such as user provisioning, authentication, session management, and authorization are not new issues to security professionals.  However, accomplishing them in the context of the cloud requires that the identity management systems that are on-premise in the enterprise automatically “dance” with the equivalent systems at the various cloud service providers.  This dance is best choreographed through the use of standards, such as SAML, XACML, and others.  In fact the rise of the cloud also raises the possibility of outsourcing even some of your identity management services, such as multi-factor authentication, access management, and other capabilities to specialized cloud security providers.

While in the short term it would seem that the emergence of some security silos is inevitable given organizations’ aggressive move to the cloud, it doesn’t have to be this way forever.  We know security silos are bad, we know how to avoid them, and we have much of the necessary technology already available to eliminate them.  What remains is to take action.

Matthew Gardiner is a Director working in the Security business unit at CA Technologies. He is a recognized industry leader in the security & Identity and Access Management (IAM) markets worldwide. He is published, blogs, and is interviewed regularly in leading industry media on a wide range of IAM, cloud security, and other security-related topics. He is a member of the Kantara Initiative Board of Trustees. Matthew has a BSEE from the University of Pennsylvania and an SM in Management from MIT’s Sloan School of Management.  He blogs regularly at: and also tweets @jmatthewg1234.  More information about CA Technologies can be found at

Multi-tenancy and bad landlords

So there’s been a lot of discussion about multi-tenancy recently and what it means for cloud providers and users. To put it simply: multi-tenancy is highly desirable to providers because they can provide a service or a platform (such as WordPress) and cram a kajillion users into it without having to constantly customize it, modify it or otherwise do much work to sell it individually. The reality is that whether or not users like multi-tenancy, the providers love it, so it’s here to stay.

So what happens when you have a bad, or just unlucky, landlord? In the last few months one provider has had a number of outages:

What Happened: We are still gathering details, but it appears an unscheduled change to a core router by one of our datacenter providers messed up our network in a way we haven’t experienced before, and broke the site. It also broke all the mechanisms for failover between our locations in San Antonio and Chicago. All of your data was safe and secure, we just couldn’t serve it.

And more recently:

If you tried to access TechCrunch any time in the last hour or so, you probably noticed that it wasn’t working at all. Instead, you were greeted by the overly cheery notice “ will be back in a minute!” Had we written that message ourselves, there would have been significantly more profanity.

So what can we do to support this leg (availability) of the A-I-C triad of information security?

I honestly don’t know. It’s such a service- and provider-specific issue (do they control DNS? do you control DNS? can you redirect to another provider with the same service who has a recent copy of your data? if you do, can you then export any updates/orders/etc. back to your original provider when they come back?) that pretty much any answer you’ll get is useless unless it’s specifically tailored to that provider or service.
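One building block that generalizes reasonably well, assuming you control your own DNS, is a health-check-driven failover decision: probe each candidate endpoint and route traffic to the first healthy one. A minimal sketch; the provider names and check URLs are hypothetical, and the DNS update itself is left to whatever API your registrar offers:

```python
import urllib.request


def is_healthy(url: str, timeout: float = 3.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False


def pick_provider(providers, check=is_healthy):
    """Return the first provider whose health check passes, else None.

    `providers` is an ordered list of (name, check_url) pairs, primary first,
    so a healthy primary always wins over the standby.
    """
    for name, url in providers:
        if check(url):
            return name
    return None


# Hypothetical endpoints; the chosen name would drive a DNS record update.
providers = [
    ("primary-host", "https://primary.example.com/health"),
    ("standby-host", "https://standby.example.com/health"),
]
```

Even this sketch dodges the hard part the paragraph above raises: reconciling writes made at the standby back to the primary once it returns.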

If you have an answer to this, please post it in the comments.

Backups and security for cloud applications

Backups, the thing we all love to hate, and hate to love. Recreating data is rarely cheap, especially if it involves detailed analysis and combination. So we back it up.

Take this blog, for example: it’s based on WordPress, which is about as standard and supported as you can get for a blog. Backing up the entire blog isn’t that bad; just grab a copy of the database and you are mostly good to go, except for minor things like custom web pages and CSS files. So what is one to do? Well, the obvious thought is to outsource your cloud service backups to a cloud service backup service.
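For a self-hosted install, the “database plus the loose files” recipe can be scripted. A minimal sketch in Python; the database name, user, and paths are assumptions for illustration, and a real script would also handle credentials and rotation:

```python
import tarfile


def backup_commands(db_name: str, db_user: str, dump_path: str):
    """Build the mysqldump invocation for the WordPress database."""
    return ["mysqldump", "--single-transaction", "-u", db_user,
            f"--result-file={dump_path}", db_name]


def archive_site(wp_root: str, dump_path: str, out_path: str) -> str:
    """Bundle the SQL dump plus wp-content (themes, custom CSS, uploads)
    into a single compressed tarball."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(dump_path, arcname="db.sql")
        tar.add(f"{wp_root}/wp-content", arcname="wp-content")
    return out_path


# Typical use (not run here):
#   subprocess.run(backup_commands("wordpress", "wp", "/tmp/db.sql"), check=True)
#   archive_site("/var/www/blog", "/tmp/db.sql", "/tmp/blog-backup.tar.gz")
```

The resulting tarball is exactly the kind of artifact you would then hand to an off-site or cloud backup service.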

Update: Trend Micro appears to be getting in on the secure online backup thing.

Put your chauffeur on the upgrade treadmill

I don’t know if anyone here remembers the “Billion Dollar Brain” by Len Deighton. One scene that stuck with me is General Midwinter making his minion (a chauffeur or bodyguard, I can’t remember which) do his time on the exercise bike for him and asking “how many miles did we bike today?”

Wouldn’t it be great if we were all rich enough to hire someone to do the horrible chores that have to be done every day (or weekly) like exercising in order to keep our bodies fit?

This is one of the more appealing aspects of Software-as-a-Service (SaaS). In fact, this blog is a perfect example of upgrade and maintenance avoidance. Rather than hosting the blog in-house and having to maintain and upgrade WordPress every few weeks, we decided to simply outsource it. Now there are some downsides; we can’t run all the plugins we’d like to (basically you get what WordPress gives you and you learn to like it), but on the upside I will never have to upgrade a WordPress plugin or WordPress itself ever again (neglecting those upgrades being a security disaster waiting to happen, as many have found out).

News roundup for May 28 2010

Financial Services Like The Cloud, Provided It’s Private –

Novell Identity Manager extended to cloud –

Amazon CEO Jeff Bezos: Cloud services can be as big as retail business –

Software evaluation 2.0 ?

I spend a lot of time evaluating software: for product reviews, to see which versions are vulnerable to various exploits, and sometimes just to see if I should be using it. Most often this looks something like: find the software, download it, find the install and configuration documents, walk through them, find an error or two (documentation always seems to be out of date), fiddle with dependencies (database settings, etc.), finally get it mostly working (I think), and then test it out. I’m always wondering in the back of my mind if I’ve done it properly or if I’ve missed something, especially when it comes to performance issues (is the software slow, or did I mess up the database settings and not give it enough buffers?).

But it seems that some places are finally taking note of this and making their software available as a VM, fully configured and working: no fuss or mess, just download it, run it, and play with the software. Personally I love this, especially for large and complex systems that require external components such as a database server or message queues. No more worries about configuring all the add-on bits, or making sure you have a compatible version, and so on.

This really dovetails nicely with the increasing reliance on cloud computing: instead of buying a software package and having to maintain it, you can buy a VM image with the software, essentially getting SaaS levels of configuration and support while keeping IaaS levels of control (you can run it in-house, behind a VPN, and configure it specially for your needs if you have to). The expertise needed to properly configure the database (which varies hugely between databases and depends on what the product needs: low latency? lots of memory? bulk transfers? special character encoding?) is provided by the vendor, who (at least in theory) knows best. I also think vendors will start to appreciate this; the tech support and time needed to guide customers through install, configuration, and integration with other services and components can be replaced by “Ok sir, I need you to upload the image to your VMware server (or EC2 or whatever) and turn it on… Ok, now log in as admin and change the password. Ok, we’re done.”

Security is also a lot easier. The vendor can test patches knowing with certainty that they will work on customer’s systems since the customer has the exact same system as the vendor, and customers can apply patches with a higher degree of confidence, knowing that they were tested in the same environment.

Reddit ships code as fully functional VM. –

Update: CERT release fuzzing framework, as a VMware image –

Counterfeit gear in the cloud

One of the best and worst things about outsourced cloud computing (as opposed to in house efforts) is the ability to spend more time on what is important to you, and leave things like networking infrastructure, hardware support and maintenance and so on to the provider. The thing I remember most about system and network administration is all the little glitches, some of which weren’t so little and had to be fixed right away (usually at 3 in the morning). One thing I love about outsourcing this stuff is I no longer have to worry about network infrastructure.

Assuming, of course, that the cloud provider does a good job. The good news here is that network availability and performance are really easy to measure, and really hard for a cloud provider to hide. Latency is latency, and you generally can’t fake a low-latency network (although if you can, please let me know! We’ll make millions). Ditto for bandwidth: either the data transfers in 3 minutes or in 4 minutes; a provider can’t really fake that either. Reliability is a little tougher, since you have to measure it continuously to get good numbers (are there short but total outages, longer “brownouts” with reduced network capacity, or is everything actually working fine?). But none of this takes into account, or allows us to predict, the type of catastrophic failures that result in significant downtime.
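Measuring is the easy part; what matters is summarizing the probes honestly, since short outages and slow tail responses hide inside averages. A small sketch of the kind of summary worth tracking, using made-up sample data:

```python
def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers (pct in 0..100)."""
    ordered = sorted(samples)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]


def summarize(latencies_ms, timeouts):
    """Summarize probe results: median and 95th-percentile latency of the
    successful probes, plus availability counting timeouts as failures."""
    total = len(latencies_ms) + timeouts
    return {
        "median_ms": percentile(latencies_ms, 50),
        "p95_ms": percentile(latencies_ms, 95),
        "availability": len(latencies_ms) / total,
    }


# Made-up probe data: mostly fast, a couple of slow responses, two timeouts.
stats = summarize([20, 21, 22, 20, 19, 23, 250, 300, 21, 20], timeouts=2)
```

The median here looks fine while the 95th percentile and availability expose exactly the “brownouts” and short outages the paragraph above describes.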

One way providers deal with this potential problem is simple: they buy good name-brand gear with support contracts that guarantee replacement times, how long it will take an engineer to show up, and so on. But this stuff is expensive. So what happens if a cloud provider finds, or is offered, name-brand equipment at reduced or even really cheap prices? (This does happen legitimately; a company goes bust and its equipment is sometimes sold off cheap.) Counterfeit gear sold this way isn’t under a support contract and is not up to the same specs as the real stuff, meaning it is more likely to fail or suffer problems, causing you grief.

How do you, the cloud provider customer, know that your provider isn’t accidentally (or otherwise) buying counterfeit network gear?

Well, short of a physical inspection and phoning the serial numbers in to the manufacturer, you won’t. Unfortunately I can’t think of any decent solutions to this, so if you know of any or have ideas, feel free to leave comments or email me, [email protected].

Feds shred counterfeit Cisco trade – With a new conviction today, the federal action known as Operation Network Raider has resulted in 30 felony convictions and more than 700 seizures of counterfeit Cisco network hardware with an estimated value of more than $143 million.

-By Layer 8, Network World


Amazon AWS – 11 9’s of reliability?

Amazon recently added a new redundancy service to their S3 data storage service. Amazon now claims that data stored in the “durable storage” class is 99.999999999% “durable” (not to be confused with availability – more on this later).

“If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so. This storage is designed in such a way that we can sustain the concurrent loss of data in two separate storage facilities.” – Jeff Barr, AWS Blog

So how exactly does Amazon arrive at this claim? Reading further, they also offer a “REDUCED_REDUNDANCY” storage class (33% cheaper than normal) that guarantees 99.99% durability and is “designed to sustain the loss of data in a single facility.” From this we can extrapolate that Amazon is simply storing the data in multiple physical data centers. If the chance of any one facility losing the data (burning down, cable cut, etc.) is something like 0.01%, then storing it in two independent data centers means a 0.01% × 0.01% = 0.000001% chance that both fail at the same time (or, on the flip side, a 99.999999% durability guarantee), three data centers give a 0.0000000001% chance of loss, and so on. I’m not sure of the exact numbers Amazon is using, but you get the general idea: a small chance of failure, combined with multiple locations, makes for a very, very small chance of failure at all the locations at the same time.
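The extrapolation above is plain independent-failure arithmetic, which is easy to check. A quick sketch; the 0.01% per-facility figure is an assumption for illustration, not a published Amazon number:

```python
def combined_failure(p_facility: float, n_facilities: int) -> float:
    """Probability that n independent facilities all lose the data."""
    return p_facility ** n_facilities


def durability(p_facility: float, n_facilities: int) -> float:
    """Durability is just the complement of total loss."""
    return 1.0 - combined_failure(p_facility, n_facilities)


p = 0.0001                 # assumed 0.01% chance one facility loses the data
two = durability(p, 2)     # eight nines
three = durability(p, 3)   # already past eleven nines, if truly independent
```

The catch, as the rest of the article argues, is the word “independent.”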

Except there is a huge gaping hole in this logic. To expose it, let’s revisit history, specifically the Hubble Space Telescope. Hubble can be pointed in specific directions using six on-board gyroscopes. By adding momentum to a single gyroscope, or applying the brakes to it, you can cause Hubble to spin clockwise or counter-clockwise about a single axis. With two of these gyroscopes you can move Hubble in all three axes to point anywhere. Of course, having three sets of gyroscopes makes maneuvering it easier, and having spare gyroscopes ensures that a failure or three won’t leave you completely unable to point Hubble at interesting things.

But what happens when you have a manufacturing defect in the gyroscopes, specifically the use of regular air instead of inert nitrogen during manufacturing? Well, having redundancy doesn’t do much good, since the gyroscopes start failing in the same manner at around the same time (which would have left Hubble nearly useless if not for the first servicing mission).

The lesson here is that having redundant and backup systems that are identical to the primary systems may not increase the availability of the system significantly. And I’m willing to bet that Amazon’s S3 storage facilities are near carbon copies of each other with respect to the hardware and software they use (to say nothing of configuration, access controls, authentication, and so on). A single flaw in the software, for example a bug that results in the loss or mangling of data, may hit multiple sites at the same time as the bad data is propagated. Alternatively, a security flaw in the administrative end of things could let an attacker gain access to, and start deleting data from, the entire S3 “cloud”.

You can’t just take the chance of failure and square it for two sites if the two sites are identical. The same goes for 3, 4, or 27 sites. Oh, and read the fine print: “durability” means the data is stored somewhere, but Amazon makes no claims about availability, or whether or not you can get at it.
Something to keep in mind as you move your data into the cloud.
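The “identical sites” objection can be made concrete with a small common-mode term: if some failures (a shared software bug, a compromised admin interface) hit every copy at once, adding more identical sites stops helping. A sketch with made-up numbers:

```python
def correlated_failure(p_independent: float, p_common: float, n_sites: int) -> float:
    """Loss probability for n identical sites that share a common-mode failure.

    Either the common-mode event hits (taking out every site at once), or
    each site has to fail independently on its own.
    """
    return p_common + (1.0 - p_common) * p_independent ** n_sites


# Made-up numbers: 0.01% independent loss per site, one-in-a-million shared bug.
naive = 0.0001 ** 3                          # independence predicts ~1e-12
real = correlated_failure(0.0001, 1e-6, 3)   # dominated by the shared 1e-6
```

With any non-trivial common-mode probability, the shared term becomes the floor on your loss rate no matter how many nines the independence arithmetic promises.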

Season’s Greetings from the CSA!

By Zenobia Godschalk

2009 has been a busy year for the CSA, and 2010 promises to be even more fruitful. The alliance is now 23 corporate members strong and is affiliated with numerous leading industry groups (such as ISACA, OWASP, and the Jericho Forum) to help advance the goal of cloud security. Below is a recap of recent news and events, as well as upcoming events. We have had tremendous response to our work to date, and this month we will release version two of our guidance. Thanks to all our members for their hard work in our inaugural year!




The CSA and DMTF have partnered to help coordinate best practices for Cloud security. More details here:

Cloud Security Survey

The Cloud Security survey is still open for responses! Here’s your chance to influence cloud security research. Survey takes just a few minutes, and respondents will receive a free, advance copy of the results.

Computerworld: Clear Metrics for Cloud Security? Yes, Seriously

The Cloud Computing Show (featuring an interview with CSA’s Chris Hoff)


For more cloud security news, check out the press page on the CSA site.




State of California

At the end of October CSA was invited to present to Information Security professionals of the State of California. During this 2-day state-sponsored conference we provided education and transparency into CSA’s research around Cloud Security and how the federal government is using cloud deployments.


Also at the end of October, the CSA participated in a Cloud Security workshop during the annual CSI Conference in DC.

India Business Technology Summit

In November, Nils Puhlmann, co-founder of the CSA, presented to an audience of 1,400 at the annual India Business Technology Summit. Not only did he address the audience in a keynote, he also delivered the CSA message and lessons learned in a workshop at the Indian Institute of Science in Bangalore. Puhlmann also participated in a follow-up panel in Mumbai at the India Business Technology Executive Summit.

Conference of ISMS Forum

In December CSA was represented at the 6th International Conference of ISMS Forum in Seville. Nils Puhlmann delivered a keynote and moderated a panel looking into the future of information security and how new technologies like cloud computing might affect our industry.

CSA and ISMS also signed an MOU to cooperate more closely together. The event and CSA’s participation were covered in the prominent Spanish newspaper, Cinco Días, the business supplement of El País.

ISMS and CSA also started activities to launch a Spanish Chapter of CSA to better address the unique and local issues around secure cloud adoption in Spain.


Guidance V2

The second version of the security guidance for critical areas of focus in cloud computing is coming soon! Watch for the next version of the guidance to be released this month.


SecureCloud 2010

Registration is now open for SecureCloud 2010, a joint conference with ENISA and ISACA being held in Barcelona, March 16th and 17th.

Cloud Security Alliance Summit

In addition, the Cloud Security Alliance Summit will be held in conjunction with the RSA Conference in SF March 1. Further details are below, and check back on the CSA website for more updates coming soon!

Cloud Security Alliance Summit

March 1, 2010, San Francisco, Moscone Center

The next generation of computing is being delivered as a utility.

Cloud Computing is a fundamental shift in information technology utilization, creating a host of security, trust and compliance issues.

The Cloud Security Alliance is the world’s leading organization focused on cloud security, and has assembled top experts and industry stakeholders to provide authoritative information about the state of cloud security at the Cloud Security Alliance Summit. This half-day event will provide broad coverage of the cloud security domains and available best practices for governance, legal, compliance, and technical issues. From encryption and virtualization to vendor management and electronic discovery, the speakers will provide guidance on key business and operational issues. We will also present the latest findings from the CSA working groups on Cloud Threats, Metrics, and Controls Mapping.