Introducing Reflexive Security for integrating security, development and operations

By the CSA DevSecOps Working Group 

Organizations today are confronted with spiraling compliance and governance costs, a shortage of information security professionals, and a disconnect between strategic and operational security. Facing these challenges, more and more companies value agility and integrated operations. In short, a security management program must now deliver more for less while becoming cost efficient.

How can organizations accomplish this task? In order to answer that question, CSA recently published a document defining Reflexive Security, a new framework that addresses today’s increasing risks and cybersecurity threats. 

Information Security Management through Reflexive Security — Six Pillars in the Integration of Security, Development and Operations 

This document provides a flexible framework that: 

  • Focuses on collaboration and integration 
  • Is outcome-oriented 
  • Provides a “reflexive” response to risks. 

The word “Reflexive” comes from the notion of a reflexive relation on a mathematical set, where every element is related to itself. In Reflexive Security, every action taken is related to the security context at hand and to the needs of the organization itself.
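For readers who want the underlying mathematics spelled out, here is the standard set-theoretic definition the name borrows from (a formality only; the framework itself requires no mathematics):

```latex
% Set-theoretic definition of a reflexive relation:
% a relation R on a set S is reflexive when every element relates to itself.
\[
  R \subseteq S \times S \ \text{is reflexive} \iff \forall a \in S : (a, a) \in R
\]
% Example: "less than or equal to" on the integers is reflexive,
% since a <= a holds for every integer a.
```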

Reflexive Security versus ISMS

While the information security management system (ISMS) approach is well defined by the International Standard ISO/IEC 27001, organizations that thrive on agile development or other collaboration-oriented processes have found it valuable to use the Reflexive Security framework. They value it for its non-prescriptive, holistic, needs-based, and interactive approach, especially where their existing activities are already tightly integrated.

Reflexive Security builds on examples from the Agile development and DevOps movements and focuses on a collaborative and integrated environment. It is especially suited to cloud environments, which are crucial for facilitating efficiencies for development and operations teams. Compared to the ISMS approach, Reflexive Security is like Agile software development compared to the Waterfall mindset.

Reflexive Security also emphasizes security across organizational roles that reacts to external and internal threats. Like the body’s immune system, Reflexive Security values a balance of decentralization and centralization over a top-down leadership approach, so that the responsibilities and activities of information security management are infused into all members of the organization.

The document describes the core principles of Reflexive Security in “Six Pillars,” which lead to the “Six Benefits,” and also explores a number of strategies for fulfilling this framework.

The Six Pillars of Reflexive Security (abbreviated as “RAMPAC”): 

  • Responsible collectively: Security leadership plays a shepherding role for information security within an organization; everyone is responsible for an organization’s security.
  • Pragmatic: Security should provide value, not a hindrance.
  • Align and bridge: Organizational risks and requirements must be fully aligned in order to derive maximum effectiveness and value from security processes.
  • Automate: Automated security practices are the core of optimizing process efficiency.
  • Measure and improve: Performance that cannot be measured cannot be improved.
  • Collaborate and integrate: Arguably the most important Pillar. Security can only be achieved through collaboration, not confrontation. A security-aware and collaborative culture is necessary for everyone to feel comfortable reporting potential anomalies. 

The Six Benefits of Reflexive Security: 

  • Human-centric: Security is integrated and internalized as an aspect of everyone’s work, and requires mind-share within every employee.
  • Elastic: Growing maturity of a Reflexive Security approach could lead to achievement of formal ISMS requirements, while being flexible enough to only target critical areas for maximum value based on actual risks.
  • Apt and holistic: Focused on business needs and responding to the actual risk context faced by the organization when compared to traditional information security management.
  • Resilient: Security no longer relies on a single security function, but security practices are integrated with business processes and embedded throughout the organization. 
  • Tailored: A prioritized approach provisions stronger protection for core or more vulnerable processes over those less exploitable.
  • Dynamic: The protection of business goals is performed by integrating security with business processes, allowing the organization to react faster and more effectively to threats and incidents. 

Key Takeaways

Reflexive Security is an information security management strategy that is dynamic, interactive, holistic, and effective. It represents cultural practices extrapolated from existing collaborative concepts and practices, and provides a set of broadly applicable and easily understandable principles that shape an organization’s cybersecurity posture. This approach is especially suitable for organizations operating under resource and personnel constraints in today’s fast-paced and challenging cybersecurity landscape.

Interested in learning more? Download this research report here: https://cloudsecurityalliance.org/artifacts/information-security-management-through-reflexive-security/

CAIQ V3 Updates

Cloud Security Alliance (CSA) would like to present the next version of the Consensus Assessments Initiative Questionnaire (CAIQ) v3.1.

The CAIQ offers an industry-accepted way to document what security controls exist in IaaS, PaaS, and SaaS services, providing security control transparency. It provides a set of yes/no questions a cloud consumer and cloud auditor may wish to ask of a cloud provider to ascertain compliance with the Cloud Controls Matrix (CCM). It thus helps cloud customers gauge the security posture of prospective cloud service providers and determine whether their cloud services are suitably secure.

CAIQ v3.1 is a minor update to the previous version, CAIQ v3.0.1. In addition to improving clarity and accuracy, it supports better auditability of the CCM controls. The updated version aims not only to correct errors but also to align and improve the semantics of unclear questions for the corresponding CCM v3.0.1 controls. In total, 49 new questions were added and 25 existing ones were revised.

For this new CAIQ version, CSA took into account comprehensive feedback collected over the years from its partners, the industry, and the CCM Working Group.

Egregious 11 Meta-Analysis Part 3: Weak Control Plane and DoS

By Victor Chin, Research Analyst, CSA

This is the third blog post in the series where we analyze the security issues in the new iteration of the Top Threats to Cloud Computing report. Each blog post features a security issue that is being perceived as less relevant and one that is being perceived as more relevant.

In this report, we found that traditional cloud security issues, like those stemming from concerns about having third-party providers, are being reported as less relevant, while more nuanced issues specific to cloud environments are being reported as more problematic. With this in mind, we will be examining Denial of Service and Weak Control Plane further.

**Please note that the Top Threats to Cloud Computing reports are not meant to be the definitive list of security issues in the cloud. Rather, the studies are a measure of industry perception of key security issues.

Weak Control Plane

Weak Control Plane debuted in 8th position in the latest iteration of the Top Threats to Cloud Computing report. A weak cloud control plane refers to a cloud service that does not provide adequate security controls to meet the security requirements of the customer. One example of a weak control plane is the lack of two-factor authentication and of the ability to enforce its usage. Like the other debuting security issues, a weak control plane is something that a customer might only notice after they have migrated to the cloud.
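To make the two-factor authentication example concrete, here is a minimal sketch of how a customer might audit for that gap on one provider. It assumes an AWS environment with boto3 installed and credentials configured, and it only detects the problem; enforcement still depends on what the control plane exposes:

```python
# Minimal sketch: list IAM users with no MFA device enrolled (AWS + boto3).
# Assumes configured AWS credentials with iam:ListUsers / iam:ListMFADevices;
# illustrative only, not part of the CSA guidance.
import boto3

def users_without_mfa():
    iam = boto3.client("iam")
    flagged = []
    # Page through every IAM user in the account.
    for page in iam.get_paginator("list_users").paginate():
        for user in page["Users"]:
            # An empty MFADevices list means no MFA is enrolled for this user.
            devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
            if not devices:
                flagged.append(user["UserName"])
    return flagged

if __name__ == "__main__":
    for name in users_without_mfa():
        print(f"No MFA enrolled: {name}")
```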

A key difference between traditional IT and Cloud

A key difference between traditional IT and cloud service applications might help explain why weak control planes are becoming a problem in cloud services. In traditional IT environments, customer-controlled applications and their security features were designed with the customer as the main user. The application is hosted on the customer’s infrastructure and configured by the customer. The customer has full visibility and control over the application and is thus also responsible for its security. The main role of the IT provider is to continually provide patches or updates to the application to ensure that bugs and vulnerabilities are fixed.

The situation for cloud services is different because the cloud service is never fully ‘shipped off’ to the customer. The cloud service will always be hosted by the cloud service provider. Hence, providers not only have to design a suite of security controls in the cloud service that is usable by their customers; they also have to consider the security mechanisms and features that protect the cloud service and the virtual infrastructure that hosts it. Furthermore, due to the nature of cloud services, customers generally cannot use their own security tools or technologies to augment the cloud service (e.g., monitoring the underlying virtual infrastructure). Both sets of security controls must meet the security, regulatory, and compliance requirements of a provider’s various customers. With ever more enterprises adopting a ‘cloud-first’ policy, cloud service providers face the challenge of satisfying the varied technical security requirements of their many customers. Hence, it is not surprising that some enterprises find the current security controls inadequate for their business needs.

Fulfilling regulatory and security requirements

To sidestep such issues, prospective customers have to do their due diligence when considering cloud migration. They have to ensure that the cloud services they wish to use can fulfill their regulatory and security requirements. Prospective cloud customers can use the Cloud Security Alliance’s Consensus Assessments Initiative Questionnaire (CAIQ)[2] to that end. The CAIQ is aligned with the Cloud Controls Matrix (CCM) and helps document what security controls exist in IaaS, PaaS, and SaaS offerings, providing security control transparency. Furthermore, after cloud migration, customers should continue to monitor their regulatory and compliance landscape and communicate any changes to their cloud service providers. An open communication channel helps ensure that cloud service providers can make timely changes to the cloud service to align with changing customer security, compliance, and regulatory requirements.

Denial of Service

Denial of Service was rated 8th and then 11th in the previous two iterations of the Top Threats report. In the latest Egregious 11 report, it has dropped off the list entirely. Denial of Service can take many forms: it can refer to a network attack, such as a Distributed Denial of Service (DDoS) attack, or to a system failure caused by a system administrator.

Denial of Service (like many other security issues that have dropped off the list) is a security concern stemming from the third-party nature of cloud services. In the early days of cloud computing, it was natural for enterprises to be concerned about service availability when considering cloud migration. These enterprises had valid concerns about cloud service providers’ network bandwidth as well as their compute and storage capacities. However, over the years, cloud service providers have invested significantly in their infrastructure and now have almost unrivaled bandwidth and processing capabilities. At the same time, they have built sophisticated DDoS protection for their customers. For example, Amazon Web Services (AWS) has AWS Shield[3], Microsoft Azure has Azure DDoS Protection[4], and Google Cloud Platform (GCP) has Google Cloud Armor[5].

In spite of all the infrastructure investment and the tools available to help customers mitigate DDoS attacks, other forms of denial of service can still happen. These incidents are often not malicious but rather occur due to mistakes by the cloud service provider. For example, in May 2019, Microsoft Azure and Office 365 experienced a three-hour outage due to a DNS configuration blunder[6]. Unfortunately, no amount of infrastructure investment or tooling can prevent such incidents from happening. Customers have to realize that by migrating to the cloud, they are relinquishing full control of certain aspects of their IT. They have to trust that the cloud service provider has put in place the necessary precautions to reduce, as much as possible, the occurrence of such incidents.

______________________________________________________________________________

[1] https://cloudsecurityalliance.org/artifacts/top-threats-to-cloud-computing-egregious-eleven

[2] https://cloudsecurityalliance.org/artifacts/consensus-assessments-initiative-questionnaire-v3-0-1/

[3] https://aws.amazon.com/shield/

[4] https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

[5] https://cloud.google.com/armor/

Open API Survey Report

By the Open API CSA Working Group

Cloud Security Alliance completed its first-ever Open API Survey Report in an effort to see exactly where the industry stands on knowledge of Open APIs, and how business professionals and consumers are using them day to day. The key findings from the survey are noted within this blog post to give the reader a picture of the current state of Open API knowledge and use. Source code for security and open platforms has become increasingly shareable, and as source code becomes more shareable between companies, it opens up new and robust ways to improve upon what we already know.

The survey set out to determine:

  • What the outlook and future of Open APIs are
  • The gaps we can observe from people actually using them
  • How they can become more useful for better security posture and development
  • How Open APIs can be used for emerging technologies

Interoperability is a key theme within this survey. Businesses like the idea of using Open APIs because they work with systems already in place and can be tailored to the specific needs of a business. With this, however, comes a lack of common education on where to go to implement them, or on how their security functions work internally from the original source.

One question stood out the most among all of the questions and answers: was anyone aware of a best practices guide concerning Open APIs? The number was quite staggering, with 84% saying no. This immediately raises a red flag. One of the things we use most within development lifecycles and to build new products doesn’t have well-known guidance supporting its usage and implementation in business models.

As we move toward a future of open banking and other services that will be shaped by Open APIs, it is notable that 44.74% of respondents to this survey have already implemented some form of Open API.

Among the Open API platforms businesses are currently using or planning to use, key management/organization led with 28%, with Open API universal banking (PSD2) coming in a very close second. With the growth of online banking, however, universal banking is likely to grow the most in the coming years compared to other areas of specific interest.

Building on this question, we next asked whether SaaS apps have proper security guarding them. 57% of respondents answered no, and of those, 40% had already implemented an Open API within their own workspace. Since these respondents are already familiar with Open APIs, we can reasonably conclude that SaaS apps are indeed lacking security features. Given the free availability of these programs, this points to the absence of a single guideline for implementing secure functions with each use of a specific API. A lack of guidelines and of security input from development teams is a vital part of this missing function.

A staggering 94% responded “yes,” security vendors should, in fact, be maintaining Open APIs for SaaS vendors in order to push real-time updates. Half of that group also already has a strong implementation of the Open APIs currently in use, and likewise suggested that the biggest benefit to their organization is interoperability.

Something to note from this data set specifically: the “yes” respondents above are presently split down the middle on whether the security future of Open APIs will lie more dominantly in the IoT devices category or in the B2C/AI category.

According to the study:

  • 71% – Lack of knowledge on how to get started with an Open API framework
  • 89% – Not enough information on securing Open APIs
  • 73% – Not enough information on how to implement Open APIs or where to look for a security posture checklist

These all flow together to form a larger picture: “How do we do this and where do we go?” A lack of guidance and policy surrounding these items is creating confusion beyond just implementing different Open APIs.

We had our respondents rate options for implementing security across SaaS vendors from best to worst, including forward and reverse proxies, webhook integration, and others. Forward and reverse proxies scored 22% in the category of worst choice (rated 1). Looking at the ratings from 1 to 5, the webhooks framework yielded the highest positive average as the best choice for implementing security across SaaS vendors.

It is important to note that webhook integration was the strongest choice for security posture and integration into a business environment. Though only 13% said they strongly agree, 52% agreed that webhook integration is critical to the expansion of an existing framework. Of that 52%, more than 60% belong to organizations working on universal banking initiatives or key management.
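To make the webhook option concrete, here is a minimal sketch of one common pattern for securing webhook integrations: verifying an HMAC signature over the request body with a shared secret. The header name and secret value are illustrative assumptions, since each SaaS vendor defines its own signing scheme:

```python
# Minimal sketch: authenticate an incoming webhook by verifying an
# HMAC-SHA256 signature over the raw request body. The header name and
# secret below are illustrative assumptions; each SaaS vendor defines
# its own signing scheme.
import hashlib
import hmac

SHARED_SECRET = b"replace-with-the-secret-issued-by-your-saas-vendor"

def is_valid_webhook(body: bytes, signature_header: str) -> bool:
    """Recompute the HMAC over the body and compare in constant time."""
    expected = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(expected, signature_header)

# Example usage inside any web framework's request handler:
#   if not is_valid_webhook(raw_body, headers.get("X-Signature", "")):
#       return 401  # reject events that fail verification
```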

There is much left to be developed in the realm of securing Open APIs and handing the reins to whoever should actually be responsible for such a job. With universal banking becoming dominant internationally and moving into North America, the focus needs to shift to an interoperable and flexible framework that can give enterprises a knowledge base for building their programming architecture outwards.

Interested in learning more about Open APIs? Visit the working group page here.

How to Share the Security Responsibility Between the CSP and Customer

By Dr. Kai Chen, Chief Security Technology Officer, Consumer BG, Huawei Technologies Co. Ltd.

The behemoths among cloud service providers (CSPs) have released papers and articles on shared security responsibility, explaining their roles and responsibilities in cloud provisioning. Although these share similar concepts, in reality there are different interpretations and implementations among CSPs.

While there are many cloud security standards to help guide CSPs in fulfilling their security responsibilities, cloud customers still find it challenging to design, deploy, and operate a secure cloud service. The “Guideline on Effectively Managing Security Service in the Cloud” (referred to as the ‘Guideline’), developed by CSA’s Cloud Security Services Management (CSSM) Working Group, provides easy-to-understand guidance for cloud customers. It covers how to design, deploy, and operate a secure cloud service for the different cloud service models, namely IaaS, PaaS, and SaaS. Cloud customers can use it to help ensure the secure running of their service systems.

In the Guideline, the shared security responsibility figure was developed with reference to Gartner’s shared security responsibility model[1]. It illustrates the security handoff points for IaaS, PaaS, and SaaS cloud models. The handoff point moves up the stack across the models.

[1] Staying Secure in the Cloud Is a Shared Responsibility, Gartner,
https://www.gartner.com/doc/3277620/staying-secure-cloud-shared-responsibility

Security responsibility division between CSPs and cloud customers in different cloud service models.
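As a rough sketch of the idea that the handoff point moves up the stack, the illustrative snippet below encodes a simplified division of layers per service model. The layer names and the exact split are assumptions for illustration only; the Guideline's figure is the authoritative reference:

```python
# Illustrative encoding of where the security handoff point sits in each
# cloud service model. Layer names and the exact split are assumptions
# for illustration; the Guideline's figure is the authoritative reference.
LAYERS = ["physical", "network", "virtualization", "os",
          "middleware", "application", "data"]

# Index of the first layer that becomes the customer's responsibility.
HANDOFF = {"IaaS": LAYERS.index("os"),
           "PaaS": LAYERS.index("application"),
           "SaaS": LAYERS.index("data")}

def responsibilities(model: str) -> dict:
    split = HANDOFF[model]
    return {"csp": LAYERS[:split], "customer": LAYERS[split:]}

# The handoff point moves up the stack from IaaS to SaaS:
print(responsibilities("IaaS")["customer"])   # ['os', ..., 'data']
print(responsibilities("SaaS")["customer"])   # ['data']
```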

While there are differences in the security responsibility across the models, some responsibilities are common to all cloud service models:

CSPs’ Common Security Responsibilities

  • Physical security of the infrastructure, including but not limited to: equipment room location selection; power supply assurance; cooling facilities; protection against fire, water, shock, and theft; and surveillance (for details about the security requirements, see related standards)
  • Security of computing, storage, and network hardware
  • Security of basic networks, such as anti-distributed denial of service and firewalls
  • Cloud storage security, such as backup and recovery
  • Security of cloud infrastructure virtualization, such as tenant resource isolation and virtualization resource management
  • Tenant identity management and access control
  • Secure access to cloud resources by tenant
  • Security management, operating monitoring, and emergency response of infrastructure
  • Formulating and rehearsing service continuity assurance plans and disaster recovery plans for infrastructure

Cloud Customers’ Common Security Responsibilities

  • User identity management and access control of service systems
  • Data security (under the European General Data Protection Regulation (GDPR) model, cloud customers control the data and should be responsible for data security, while CSPs only process the data and should take on the security responsibilities granted by data controllers)
  • Security management and control of terminals that access cloud services, including hardware, software, application systems, and device rights

Beyond that, the Guideline contains chapters that describe the technical requirements for the security assurance of cloud service systems and provides an implementation guide based on existing security technologies, products, and services. It also illustrates the security assurance technologies, products, and services that CSPs and customers should provide in the different cloud service models mentioned previously.

Security responsibilities between CSPs and cloud customers

Mapping of the Guideline with CCM

To help provide an overview to end users about the similarities and differences between the security recommendations listed in the Guideline and the Cloud Controls Matrix (CCM) controls, the CSSM working group conducted a mapping of CCM version 3.0.1 to the Guideline.

The Mapping of “Guideline on Effectively Managing Security Service in the Cloud” Security Recommendations to CCM was a one-way mapping, using the CCM as base, done in accordance with the Methodology for the Mapping of the Cloud Controls Matrix.

The mapping document is supplemented with a detailed gap analysis report that breaks down the gaps in each CCM domain and provides recommendations to readers.

“This mapping work brings users of the Guideline a step closer to being CCM compliant, beneficial to organizations looking to extrapolate existing security controls to match another framework, standard or best practice,” said Dr. Chen Kai, Chief Security Technology Officer, Consumer BG, Huawei Technologies Co. Ltd., and chair of the CSSM Working Group.

Based on the gap analysis, users of the Guideline will be able to bridge lacking areas with ease. By understanding what it takes to go from the Guideline to the CCM, the mapping work complements the Guideline to help users achieve holistic security controls.

Download the gap analysis report on mapping to the CSA’s Cloud Controls Matrix (CCM) now.

Learn more about the Cloud Security Services Management Working Group here.

Egregious 11 Meta-Analysis Part 2: Virtualizing Visibility

By Victor Chin, Research Analyst, CSA

This is the second blog post in the series where we analyze the security issues in the new iteration of the Top Threats to Cloud Computing report. Each blog post features a security issue that is being perceived as less relevant and one that is being perceived as more relevant.

In this report, we found that traditional cloud security issues stemming from concerns about having a third-party provider are being perceived as less relevant, while more nuanced issues specific to cloud environments are being perceived as more problematic. With this in mind, we will be examining Shared Technology Vulnerabilities and Limited Cloud Usage Visibility further.

**Please note that the Top Threats to Cloud Computing reports are not meant to be the definitive list of security issues in the cloud. Rather, the studies measure what industry experts perceive the key security issues to be.

Shared Technology Vulnerabilities

Shared Technology Vulnerabilities generally refers to vulnerabilities in the virtual infrastructure where resources are shared amongst tenants. Over the years, there have been several vulnerabilities of that nature, the most prominent being the VENOM vulnerability (CVE-2015-3456)[1] disclosed in 2015. Shared Technology Vulnerabilities used to be high on the list of problematic issues; in the first two iterations of the report, it was rated 9th and then 12th. In the latest iteration, it has dropped off entirely and is no longer perceived as relevant: it had a score of 6.27 (our cutoff was 7 and above) and ranked 16th out of the 20 security issues surveyed.

Virtualization itself is not a new cloud technology, and its benefits are well known. Organizations have been using virtualization technology for many years as it helps to increase organizational IT agility, flexibility, and scalability while generating cost savings. For example, organizations would only have to procure and maintain one physical asset. That physical IT asset is then virtualized so that its resources are shared across the organization. As the organization owns and manages the entire IT stack, it also has visibility and control over the virtualization technology.

In cloud environments, the situation is markedly different. Virtualization technology (like hypervisors) is generally considered underlying technology that is owned and managed by the cloud service provider. Consequently, the cloud customer has limited access or visibility into the virtualization layer.

Consider, for example, an architectural representation of the three cloud service models. In an Infrastructure-as-a-Service (IaaS) model, underlying technology refers to the APIs and everything below them. Those components are under the control and management of the CSP, while anything above the APIs is under the control and management of the cloud customer. For Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS), underlying technology refers to anything underneath Integration & Middleware and underneath Presentation Modality and Presentation Platform, respectively.

Naturally, in the early days of cloud computing, such vulnerabilities were a significant concern for customers. Not only did they have limited access and visibility into the virtualization layer, but the cloud services were also all multi-tenant systems which contained the data and services of other customers of the CSPs.

Over time, it seems like the industry has grown to trust the cloud service providers when it comes to Shared Technology Vulnerabilities. Cloud adoption is at its highest, with many organizations adopting a ‘Cloud First’ policy. However, there is still no industry standard or existing framework that formalizes vulnerability notifications for CSPs, even when a vulnerability is found in the underlying cloud infrastructure. For example, when there is a vulnerability disclosure for a particular hypervisor (e.g., Xen), an affected CSP does not have to provide any information to its customers. For more information on this issue, please read my other blog post on cloud vulnerabilities.

That said, it is notable that many recent cloud breaches are the result of misconfigurations by cloud customers. For example, in 2017, Accenture left at least four Amazon S3 buckets set to public and exposed mission-critical infrastructure data. As cloud services have developed, the major CSPs have, for the most part, provided sufficient security controls to enable cloud customers to properly configure their environments.
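As a small illustration of how customers can check for this class of misconfiguration themselves, the hedged sketch below flags S3 buckets whose ACLs grant access to everyone. It assumes an AWS environment with boto3 and permissions to read bucket ACLs, and it checks ACL grants only, not bucket policies:

```python
# Minimal sketch: flag S3 buckets whose ACL grants access to everyone.
# Assumes boto3 with permissions to list buckets and read bucket ACLs.
# Checks ACL grants only (not bucket policies); illustrative only.
import boto3

ALL_USERS_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

def publicly_granted_buckets():
    s3 = boto3.client("s3")
    public = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        # Any grant to the AllUsers group makes the bucket world-accessible.
        if any(g["Grantee"].get("URI") == ALL_USERS_URI for g in acl["Grants"]):
            public.append(bucket["Name"])
    return public

if __name__ == "__main__":
    for name in publicly_granted_buckets():
        print(f"Publicly granted bucket: {name}")
```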

Nevertheless, virtualization technology is a critical component to any cloud service, and vulnerabilities in the virtualization layer can have severe consequences. Cloud customers must remain vigilant when it comes to Shared Technology Vulnerabilities.

Limited Cloud Usage Visibility

In the latest Top Threats to Cloud Computing report, Limited Cloud Usage Visibility made its debut in the 10th position.

Limited Cloud Usage Visibility refers to organizations experiencing a significant reduction in visibility over their information technology stack. This is due to two main factors. Firstly, unlike in traditional IT environments, the enterprise does not own or manage the underlying cloud IT infrastructure. Consequently, it cannot implement security controls or monitoring tools with as much depth and autonomy as it did with a traditional IT stack. Instead, cloud customers often have to rely on logs provided by the cloud providers (a short illustration follows at the end of this subsection). Sometimes, these logs are not as detailed as the customer would like them to be.

Secondly, cloud services are highly accessible. They can generally be accessed from the public internet and do not have to be reached through a company VPN or gateway. Hence, the effectiveness of some traditional enterprise security tools is reduced. For instance, network traffic monitoring and perimeter firewalls are less effective because they cannot capture network traffic to cloud services that originates outside the organization. For many organizations, such monitoring capabilities are becoming more critical as they begin to host business-critical data and services in the cloud.

To alleviate the issue, enterprises can adopt more cloud-aware technology or services that provide more visibility into, and control of, the cloud environment. Most of the time, however, the level of control and granularity cannot match that of a traditional IT environment. This loss of visibility and control is something that enterprises moving to the cloud have to get used to. There will be some level of risk associated with it, and it is a risk they have to accept or work around. Organizations that are not prepared for this lack of visibility in the cloud might end up not applying the proper mitigations, or they will find themselves unable to fully realize the cost savings of a cloud migration.
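As one concrete example of leaning on provider-supplied logs, the sketch below pulls recent management events from AWS CloudTrail. It assumes boto3 with credentials permitted to call cloudtrail:LookupEvents; other providers expose analogous audit logs:

```python
# Minimal sketch: read recent provider-supplied audit events from AWS
# CloudTrail. Assumes boto3 with permission to call cloudtrail:LookupEvents;
# other providers expose analogous audit logs.
import boto3

def print_recent_events(max_results: int = 10) -> None:
    ct = boto3.client("cloudtrail")
    events = ct.lookup_events(MaxResults=max_results)["Events"]
    for event in events:
        # Each record names the API call, when it happened, and who made it.
        print(event["EventTime"], event["EventName"],
              event.get("Username", "unknown"))

if __name__ == "__main__":
    print_recent_events()
```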

Continue reading the series…

Read our next blog post in this series analyzing the overarching trend of cloud security issues highlighted in the Top Threats to Cloud Computing: Egregious 11 report. We will take a look at Weak Control Plane and Denial of Service.

[1] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3456

Egregious 11 Meta-Analysis Part 1: (In)sufficient Due Diligence and Cloud Security Architecture and Strategy  

By Victor Chin, Research Analyst, CSA

On August 6th, 2019, the CSA Top Threats working group released the third iteration of the Top Threats to Cloud Computing report. This is the first blog post in the series where we analyze the security issues in the new iteration of the Top Threats to Cloud Computing report. Each blog post features a security issue that is being perceived as less relevant and one that is being perceived as more relevant.

**Please note that the Top Threats to Cloud Computing reports are not meant to be the definitive list of security issues in the cloud. Rather, the studies are a measure of industry perception of key security issues.

The following security issues from the previous iteration (“The Treacherous Twelve”) appeared again in the latest report.

  • Data Breaches
  • Account Hijacking
  • Insider Threats
  • Insecure Interfaces and APIs
  • Abuse and Nefarious Use of Cloud Services

At the same time, the five new security issues below made their debuts.

  • Misconfiguration and Insufficient Change Control
  • Lack of Cloud Security Architecture and Strategy
  • Weak Control Plane
  • Metastructure and Applistructure Failures
  • Limited Cloud Usage Visibility

The Overarching Trends

Throughout the three iterations of the report, one particular trend has become increasingly prominent. Traditional cloud security issues stemming from concerns about having a third-party provider are being perceived as less relevant; examples include Data Loss, Denial of Service, and Insufficient Due Diligence. Meanwhile, more nuanced issues pertaining specifically to cloud environments are increasingly being perceived as more problematic. These include Lack of Cloud Security Architecture and Strategy, Weak Control Plane, and Metastructure and Applistructure Failures.

Most and Least Relevant Security Issues

Over the next few weeks, we will examine and try to account for the trend mentioned earlier. Each blog post will feature a security issue that is being perceived as less relevant and one that is being perceived as more relevant. In the first post, we will take a closer look at Insufficient Due Diligence and Lack of Cloud Security Architecture and Strategy.

(In)sufficient Due Diligence

Insufficient Due Diligence was rated 8th and 9th in the first and second iterations of the Top Threats to Cloud Computing report, respectively. In the current report, it has dropped off completely. Insufficient Due Diligence refers to prospective cloud customers failing to adequately evaluate cloud service providers (CSPs) to ensure they meet the customers’ various business and regulatory requirements. Such concerns were especially pertinent during the early years of cloud computing, when there were not many resources available to help cloud customers make that evaluation.

Frameworks to Improve Cloud Procurement

Since then, many frameworks and projects have been developed to make cloud procurement a smooth journey. The Cloud Security Alliance (CSA), for example, has several tools to help enterprises on their journey of cloud procurement and migration.

  • The Cloud Controls Matrix (CCM), a cybersecurity control framework that helps cloud customers assess the security posture of prospective CSPs
  • The Consensus Assessments Initiative Questionnaire (CAIQ), a set of yes/no questions, aligned with the CCM, that documents the security controls present in IaaS, PaaS, and SaaS offerings
  • The Security, Trust and Assurance Registry (STAR) program, a multi-level assurance framework that further supports the CAIQ and CCM by making CSP information such as completed CAIQs (Level 1) and third-party audit certifications (Level 2) publicly accessible

Around the world, we see many similar frameworks and guidance documents being developed. For example:

  • The Federal Risk and Authorization Management Program (FedRAMP) in the US
  • Multi-Tier Cloud Security (MTCS) Certification Scheme in Singapore
  • The European Security Certification Framework (EU-SEC) in the European Union.

With so many governance, risk and compliance support programs being developed globally, it is understandable that Insufficient Due Diligence has fallen off the Top Threats to Cloud Computing list.

Examining Lack of Cloud Security Architecture and Strategy

Lack of Cloud Security Architecture and Strategy was rated third in the Egregious Eleven. Large organizations migrating their information technology stack to the cloud without considering the nuances of IT operations in the cloud environment create a significant amount of business risk for themselves. Such organizations fail to plan for the shortcomings they will experience operating their IT stack in the cloud.

Moving workloads to the cloud results in organizations having less visibility and control over their data and the underlying cloud infrastructure. Coupled with the self-provisioning and on-demand nature of cloud resources, it becomes very easy to scale up cloud resources, sometimes in an insecure manner. For example, in 2017, Accenture left at least four cloud storage buckets unsecured and publicly downloadable. In highly complex and scalable cloud environments without proper cloud security architecture and processes, such misconfigurations can occur easily.

For cloud migration and operations to go smoothly, such shortcomings must be accounted for. Organizations can engage a Cloud Access Security Broker (CASB) or use cloud-aware technology to gain some visibility into the cloud infrastructure. Being able to monitor your cloud environment for misconfigurations or exposures is extremely critical when operating in the cloud.

On a different note, the fact that Lack of Cloud Security Architecture and Strategy ranks high in the Top Threats to Cloud Computing report is evidence that organizations are actively migrating to the cloud. These nuanced cloud security issues only crop up post-migration and will be the next tranche of problems for which solutions must be found.

Continue reading the series…

Read our next blog post analyzing the overarching trend of cloud security issues highlighted in the Top Threats to Cloud Computing: Egregious 11 report. Next time we will take a look at Shared Technology Vulnerabilities and Limited Cloud Usage Visibility.

Uncovering the CSA Top Threats to Cloud Computing with Jim Reavis

By Greg Jensen, Sr. Principal Director – Security Cloud Business Group, Oracle

For those attending this year’s Black Hat conference kicking off this week in Las Vegas, many will walk away with in-depth understanding and knowledge of risk, as well as actionable insights on how they can implement new strategies to defend against attacks. For the many others who don’t attend, Cloud Security Alliance has once again developed its CSA Top Threats to Cloud Computing: The Egregious 11.

I recently sat down with the CEO and founder of CSA, Jim Reavis, to gain a deeper understanding on what leaders and practitioners can learn from this year’s report that covers the top 11 threats to cloud computing – The Egregious 11.

(Greg) Jim, for those who have never seen this, what is the CSA Top Threats to Cloud report and who is your target reader?

(Jim) The CSA Top Threats to Cloud Computing is a research report that is periodically updated by our research team and working group of volunteers to identify high priority cloud security risks, threats and vulnerabilities to enable organizations to optimize risk management decisions related to securing their cloud usage.  The Top Threats report is intended to be a companion to CSA’s Security Guidance and Cloud Controls Matrix best practices documents by providing context around important threats in order to prioritize the deployment of security capabilities to the issues that really matter.

Our Top Threats research is compiled via industry surveys as well as through qualitative analysis from leading industry experts.  This research is among CSA’s most popular downloads and has spawned several translations and companion research documents that investigate cloud penetration testing and real world cloud incidents.  Top Threats research is applicable to the security practitioner seeking to protect assets, executives needing to validate broader security strategies and any others wanting to understand how cloud threats may impact their organization.  We make every effort to relate the potential pitfalls of cloud to practical steps that can be taken to mitigate these risks.

(Greg) Were there any findings in the Top Threats report that really stood out for you?

(Jim) Virtually all of the security issues we have articulated impact all different types of cloud.  This is important as we find a lot of practitioners who may narrow their cloud security focus on either Infrastructure as a Service (IaaS) or Software as a Service (SaaS), depending upon their own responsibilities or biases.  The cloud framework is a layered model, starting with physical infrastructure with layers of abstraction built on top of it.  SaaS is essentially the business application layer built upon some form of IaaS, so the threats are applicable no matter what type of cloud one uses.  Poor identity management, such as a failure to implement strong authentication, sticks out to me as a critical and eminently solvable issue.  I think the increased velocity of the “on demand” characteristic of cloud finds its way into the threat of insufficient due diligence and problems of insecure APIs.  The fastest way to implement cloud is to implement it securely the first time.

(Greg) What do you think are some of the overarching trends you’ve noticed throughout the last 3 iterations of the report?

(Jim) What has been consistent is that the highest impact threats are primarily the responsibility of the cloud user.  To put a bit of nuance around this, as the definition of a “cloud user” can be tricky, I like to think of this in three categories: a commercial SaaS provider, an enterprise building its own “private SaaS” applications on top of IaaS, or a customer integrating a large number of SaaS applications; each has the bulk of the technical security responsibilities.  So many of the real-world threats that these cloud users grapple with are improper configuration, poor secure software development practices and insufficient identity and access management strategies.

(Greg) Are you seeing any trends that show there is increasing trust in cloud services, as well as the CSP working more effectively around Shared Responsibility Security Model?

(Jim) The market growth in cloud is a highly quantifiable indicator that cloud is becoming more trusted.  “Cloud first” is a common policy we see for organizations evaluating new IT solutions, and it hasn’t yet caused an explosion of cloud incidents, although I fear we will see an inevitable increase in breaches as cloud becomes the default platform.

We have been at this for over 10 years at CSA and have seen a lot of maturation in cloud during that time.  One of the biggest contributions we have seen from the CSPs over that time is the amount of telemetry they make available to their customers.  The amount and diversity of logfile information customers have today does not compare to the relative “blackbox” that existed when we started this journey more than a decade ago.

Going back to the layered model of cloud yet again, CSPs understand that most of the interesting applications customers build are a mashup of technologies.  Sophisticated CSPs understand this shared responsibility for security and have doubled down on educational programs for customers.  Also, I have to say that one of the most rewarding aspects of being in the security industry is observing the collegial nature among competing CSPs to share threat intelligence and best practices to improve the security of the entire cloud ecosystem.

One of the initiatives CSA developed that helps promulgate shared responsibility is the CSA Security, Trust, Assurance & Risk (STAR) Registry.  We publish the answers CSPs provide to our assessment questionnaire so consumers can objectively evaluate a CSP’s best practices and understand the line of demarcation and where their responsibility begins.

(Greg) How does the perception of threats, risks and vulnerabilities help to guide an organization’s decision making & strategy?

(Jim) This is an example of why it is so important to have a comprehensive body of knowledge of cloud security best practices and to be able to relate it to Top Threats.  A practitioner must be able to evaluate any risk management strategy for a given threat, e.g. risk avoidance, risk mitigation, risk acceptance, etc.  If one understands the threats but not the best practices, one will almost always choose to avoid the risk, which may end up being a poor business decision.  Although the security industry has gotten much better over the years, we still fight the reputation of being overly conservative and obstructing new business opportunities over concerns about security threats.  While being paranoid has sometimes served us well, threat research should be one of a portfolio of tools that helps us embrace innovation.

(Greg) What are some of the security issues that are currently brewing/underrated that you think might become more relevant in the near future?

(Jim) I think it is important to understand that malicious attackers will take the easy route and if they can phish your cloud credentials, they won’t need to leverage more sophisticated attacks.  I don’t spend a lot of time worrying about sophisticated CSP infrastructure attacks like the Rowhammer dynamic random access memory (DRAM) attacks, although a good security practitioner worries a little bit about everything. I try to think about fast moving technology areas that are manipulated by the customer, because there are far more customers than CSPs.  For example, I get concerned about the billions of IoT devices that get hooked into the cloud and what kinds of security hardening they have.  I also don’t think we have done enough research into how blackhats can attack machine learning systems to avoid next generation security systems.

Our Israeli chapter recently published a fantastic research document on the 12 Most Critical Risks for Serverless Applications.  Containerization and Serverless computing are very exciting developments and ultimately will improve security as they reduce the amount of resource management considerations for the developer and shrink the attack surface.  However, these technologies may seem foreign to security practitioners used to a virtualized operating system and it is an open question how well our tools and legacy best practices address these areas.

The future will be a combination of old threats made new and exploiting fast moving new technology.  CSA will continue to call them as we see them and try to educate the industry before these threats are fully realized.

(Greg) Jim, it’s been great hearing from you today on this new Top Threats to Cloud report. Hats off to the team and the contributors for this year’s report. Has been great working with them all!

(Jim) Thanks Greg! To learn more about this, or to download a copy of the report, visit us at www.cloudsecurityalliance.org

Challenges & Best Practices in Securing Application Containers and Microservices

By Anil Karmel, Co-Chair, CSA Application Containers and Microservices (ACM) Working Group

Application containers have a long and storied history, dating back to the early 1960s with virtualization on mainframes and continuing through the 2000s with the release of Solaris Containers and Linux Containers (LXC). The rise of Docker in the early 2010s elevated the significance of application containerization as an efficient and reliable means to develop and deploy applications. Coupled with the rise of microservices as an architectural pattern for decomposing applications into fundamental building blocks, these two approaches have become the de facto means by which modern applications are delivered.

As with any new standard, challenges arise in how to secure application containers and microservices. The National Institute of Standards and Technology’s (NIST) Cloud Security Working Group launched a group focused on developing initial guidance around this practice area. The Cloud Security Alliance partnered with NIST on the development of this guidance and focused on maturing it, culminating in the release of two foundational artifacts:

  • Challenges in Securing Application Containers and Microservices
  • Best Practices for Implementing a Secure Application Container Architecture

CSA’s Application Container and Microservices Working Group continues the charge laid by NIST to develop additional guidance around best practices in securing Microservices.

We want to invite interested parties to contribute content towards this end.  Please visit https://cloudsecurityalliance.org/research/join-working-group/ to join this working group.

CCM v3.0.1 Update for AICPA, NIST and FedRAMP Mappings

By Victor Chin and Lefteris Skoutaris, Research Analysts, CSA

The CSA Cloud Controls Matrix (CCM) Working Group is glad to announce a new update to CCM v3.0.1. This minor update incorporates the following mappings:

  • AICPA Trust Services Criteria (TSC) 2017
  • NIST 800-53 R4 Moderate
  • FedRAMP Moderate

A total of four documents will be released. The updated CCM (CCM v3.0.1-03-08-2019) will replace the outdated CCM v3.0.1-12-11-2017. Additionally, three addenda will be released, one each for AICPA TSC 2017, NIST 800-53 R4 Moderate, and FedRAMP Moderate. The addenda will contain gap analyses as well as control mappings. We hope that organizations will find these documents helpful in bridging compliance gaps between the CCM, AICPA TSC 2017, FedRAMP, and NIST 800-53 R4 Moderate.

With the release of this update, the CCM Working Group will conclude all CCM v3 work and refocus its efforts on CCM v4.

The upgrade of CCM v3 to version 4 has become imperative due to the evolution of cloud security standards, the need for more efficient auditability of the CCM controls, and the need to integrate into the CCM the security requirements deriving from newly introduced cloud technologies.

In this context, a CCM task force has already been established to take on this challenge and drive CCM v4 development. The CCM v4 working group is composed of CSA community volunteers, including the industry’s leading experts in the domain of cloud computing and security. This endeavor is supported and supervised by the CCM co-chairs and strategic advisors (https://cloudsecurityalliance.org/research/working-groups/cloud-controls-matrix), who will ensure that the CCM v4 vision, requirements, and development plan are successfully implemented.

Some of the core objectives that drive CCM v4 development include:

  • Improving the auditability of the controls
  • Providing additional implementation and assessment guidance to organizations
  • Improving interoperability and compatibility with other standards
  • Ensuring coverage of requirements deriving from new cloud technologies (e.g., microservices, containers) and emerging technologies (e.g., IoT)

CCM v4 development work is expected to conclude by the end of 2020. Should you be interested in learning more, or in participating in and contributing to the development of CCM v4, please join the working group here: https://cloudsecurityalliance.org/research/join-working-group/.

Quantum Technology Captures Headlines in the Wall Street Journal

By the Quantum-Safe Security Working Group

Last month, we celebrated the 50th anniversary of the Apollo 11 moon landing. Apollo, which captured the imagination of the whole world, epitomizes the necessity for government involvement in long term, big science projects. What started as a fierce race between the USA and the USSR at the apex of the cold war ended up as a peaceful mission, “one giant leap for mankind”.

This “Leap” was just one of many steps that led to the US, Russia, Japan, Europe and Canada sharing the International Space Station for further space exploration. The parallel with the quantum computer, which recently made headlines in the Wall Street Journal, is striking: a gauntlet has been thrown down. A foreign power, in this case China, has developed advanced quantum technologies surpassing those of its Western counterparts, warranting a competitive response. Here again, US policymakers are rising to the challenge and calling for significant investment in quantum technologies (as presented in the WSJ article: In a White House Summit on Quantum Technology, Experts Map Next Steps).

Quantum technologies may not capture the imagination of as many star-gazing children as space does. However, show them the golden “chandelier” of a quantum computer, tell them that it operates at temperatures colder than space, explain that it can do more optimization calculations than all classical computers combined, and we might get some converts. We will need these engineers, developers, and professionals in roles we have not yet thought of to realize the full and profound impacts that are likely with quantum computers. If history is any guide, the currently expected applications in pharmaceuticals, finance, and transportation mentioned in the WSJ are only a small portion of the real potential. These fields alone will require broad education on quantum technologies, as called for by the bipartisan participants in the White House Summit on Quantum Technologies. In addition, the threat the quantum computer poses to our existing cybersecurity infrastructure (again reported in the WSJ: The Day When Computers Can Break All Encryption Is Coming) is real today. Sensitive digital data can already be recorded now and decrypted once a powerful-enough quantum computer is available.

This brings us back to the cold war space race, now with many potential players shielded in the obscurity of cyberspace. Let’s hope that, as with Apollo, the end result will be an improvement for humankind. The international effort, led by the National Institute of Standards and Technology (NIST), to develop new quantum-resistant algorithms, as well as the development of quantum technologies such as quantum random number generation and quantum key distribution (QKD) to counter the very threat of the quantum computer, are steps in the right direction.

CSA’s quantum-safe security working group has produced several research papers addressing many aspects of quantum-safe security that were discussed in both of these articles. These documents can help enterprises better understand the quantum threat and the steps they can start taking to address it.

The CSA QSS working group is an open forum for all interested in the development of these new technologies. Join the working group or download their latest research here.

Use Cases for Blockchain Beyond Cryptocurrency

CSA’s newest white paper, Documentation of Relevant Distributed Ledger Technology and Blockchain Use Cases v2, is a continuation of the efforts made in v1. The purpose of this publication is to describe relevant use cases beyond cryptocurrency for the application of these technologies.

In the process of outlining several use cases across discrete economic application sectors, we covered multiple industry verticals, as well as some use cases which cover multiple verticals simultaneously. For this document, we considered a use case as relevant when it provides the potential for any of the following:

  • disruption of existing business models or processes;
  • strong benefits for an organization, such as financial, improvement in speed of transactions, auditability, etc.;
  • large and widespread application; and
  • concepts that can be applied in real-world scenarios.

From concept to the production environment, we also identified six separate stages of maturity to better assess how much work has been done within the scope and how much more remains to be done.

  1. Concept
  2. Proof of concept
  3. Prototype
  4. Pilot
  5. Pilot production
  6. Production

Some of the industry verticals which we identified are finance, supply chain, media/entertainment, and insurance, all of which are ripe for disruption from a technological point of view.

The document also clearly identifies the expected benefits of adopting DLT/blockchain in these use cases, the type of DLT, the use of private versus public blockchains, the infrastructure provider (CSP), and the type of services involved (IaaS, PaaS, SaaS). Other key features of the use case implementations, such as smart contracts and distributed databases, have also been outlined.

The working group hopes this document will be a valuable reference to all key stakeholders in the blockchain/DLT ecosystem, as well as contribute to its maturity.

Documentation of Relevant Distributed Ledger Technology and Blockchain Use Cases v2

Organizations Must Realign to Face New Cloud Realities

By Jim Reavis, Co-founder and Chief Executive Officer, CSA

While cloud adoption is moving fast, many enterprises still underestimate the scale and complexity of cloud threats

Technology advancements often present benefits to humanity while simultaneously opening up new fronts in the ongoing and increasingly complex cybersecurity battle. We are now at that critical juncture when it comes to the cloud: while the compute model has inherent security advantages when properly deployed, the reality is that any fast-growth platform is bound to see a proportionate increase in incidents and exposure.

The Cloud Security Alliance (CSA) is a global not-for-profit organization that was launched 10 years ago as a broad coalition to create a trusted cloud ecosystem. A decade later, cloud adoption is pervasive to the point of becoming the default IT system worldwide. As the ecosystem has evolved, so have the complexity and scale of cyber security attacks. That shift challenges the status quo, mounting pressure on organizations to understand essential technology trends, the changing threat landscape and our shared responsibility to rapidly address the resultant issues.

There are real concerns that organizations have not adequately realigned for the cloud compute age and, in some cases, are failing to reinvent their cyber defense strategies. Symantec’s inaugural Cloud Security Threat Report (CSTR) shines a light on the current challenges and provides a useful roadmap to help organizations improve and mature their cloud security strategy. The report articulates the most pressing cloud security issues of today, clarifies the areas that should be prioritized to improve an enterprise security posture, and offers a reality check on the state of cloud deployment.

Cloud in the Fast Lane

What the CSTR reveals and the CSA can confirm is that cloud adoption is moving too fast for enterprises, which are struggling with increasing complexity and loss of control. According to the Symantec CSTR, over half (54%) of respondents agree that their organization’s cloud security maturity is not keeping pace with the rapid expansion of new cloud apps.

The report also revealed that enterprises underestimate the scale and complexity of cloud threats. For example, the CSTR found that the most commonly investigated incidents included garden-variety data breaches, DDoS attacks and cloud malware injections. However, Symantec internal data shows that unauthorized access accounts for the bulk of cloud security incidents (64%), covering both simple exploits as well as sophisticated threats such as lateral movement and cross-cloud attacks. Companies are beginning to recognize their vulnerabilities: nearly two-thirds (65%) of CSTR respondents believe the increasing complexity of their organization’s cloud infrastructure is opening them up to entirely new and dangerous threat vectors.

For example, identity-related attacks have escalated in the cloud, making proper identity and access management the fundamental backbone of security across domains in a highly virtualized technology stack. The speed with which cloud can be “spun up” and the often-decentralized manner in which it is deployed magnify human errors and create vulnerabilities that attackers can exploit. A lack of visibility into detailed cloud usage hampers optimal policies and controls.

As CSA delved into this report, we found strong alignment with the best practices research and education we advocate. As the CSTR reveals, a Zero Trust strategy, building out a software-defined perimeter, and adopting serverless and containerization technologies are critical building blocks for a mature cloud security posture.

The CSTR also advises organizations to develop robust governance strategies supported by a Cloud Center of Excellence (CCoE) to rally stakeholder buy-in and get everyone working from the same enterprise roadmap. Establishing security as a continuous process, rather than front-loading efforts at the outset of procurement and deployment, is a necessity given the frenetic pace of change.

As the CSTR suggests and we can confirm, security architectures must also be designed with an eye toward scalability; automation and cloud-native approaches like DevSecOps are essential for minimizing errors, optimizing limited manpower and facilitating new controls.

While there is a clear strategy for securing cloud operations, too few companies have embarked on the changes. Symantec internal data reports that 85% are not using best security practices as outlined by the Center for Internet Security (CIS). As a result, nearly three-quarters of respondents to the CSTR said they experienced a security incident in cloud-based infrastructure due to this immaturity.

The good news is that cloud users have a full portfolio of solutions at their disposal to address cloud security threats, including multi-factor authentication, data loss prevention, encryption, and identity and authentication tools, along with new processes and an educated workforce. The bad news is that many cloud users are unaware of the full magnitude of their cloud adoption and the demarcation lines of the shared responsibility model, and are inclined to rely on outdated security best practices. The CSTR is a pivotal first step in increasing that awareness.

Cloud is and will continue to be the epicenter of IT, and increasingly the foundation for cyber security. Understanding how threat vectors are shifting in cloud is fundamental to overhauling and modernizing an enterprise security program and strategy. CSA recommends the Symantec CSTR report be read widely and we look forward to future updates to its findings.

Download 2019 Cloud Security Threat Report >>

Interested in learning more? You can watch our CloudBytes webinar with Jim Reavis, Co-Founder & CEO at Cloud Security Alliance, and Kevin Haley, Director Security Technology and Response at Symantec as they discuss the key findings from the 2019 Cloud Security Threat Report. Watch it here >>

Signal vs. Noise: Banker Cloud Stories by Craig Balding

A good question to ask any professional in any line of business is: which “industry events” do you attend and why? Over a few decades of attending a wide variety of events – and skipping many more – my primary driver has been the “signal-to-noise” ratio. In other words, I look for events attended by people who are shaping our industry – specifically deep thinkers, leading experimenters, policy makers, risk takers and, of course, “in the field” practitioners. Skip the “talking shops” and seek out people “walking the talk.” This is the reason I look forward to attending the regional meet-ups of the CSA Financial Services Stakeholder Platform (FSSP), a members-only group of banks and financial services organisations focused on cloud security.

In June, 23 members of our broader group met in person in beautiful Leuven, Belgium, where we were generously hosted by Roel at KBC headquarters. We spent the day sharing experiences and discussing emerging practices under the Chatham House Rule.

What topics did we cover?

The day consisted of valuable presentations, book-ended by networking sessions. Each presentation served as a launchpad for in-depth question-and-answer sessions – a natural consequence of peers coming together. For every 10 minutes of presentation, there was an equal amount of discussion: digging into the challenges, the alternatives considered, the nuances of organisational fit, and the methods and measures that matter.

  • A financial institution’s cloud native journey
  • Compliance monitoring and automation
  • Key management & Protection: evaluation of hardware, tokens, TEEs and MPC, KUL
  • Continuous compliance (on AWS resources)
  • Modern Cloud Risk Assessment

Each cloud journey is different, and no single person or entity in the group can lay claim to all the answers. Certainly, some are further along in their journey than others, but each is delivering solutions in banks with a different profile, history and organisational culture. The trajectory is the same though: step-by-step getting to “cloud first”, striving for “control parity” whilst operating within the bank’s risk appetite.

What’s next?

Our members’ appetite for cloud is never satisfied! Not only do they bring their “A game” to our events, but they challenge us at CSA to facilitate and drive working groups (formal and informal) to work on things that matter. This already happened a while back with the formation of the dedicated CSA Key Management Working Group. So as our session concluded, members shared what was on their minds, and as a group we coalesced around the following topics for future focus:

  • Container security: check potential synergies with the Container & Microservices WG – the interest is not in container “use cases” but in addressing the underlying security aspects.
  • Understanding, quantifying, assessing, simplifying and benchmarking cloud complexity as cloud adoption scenarios become more and more complex (aka “complexity risk”).
  • With an increased focus on innovation and the ongoing transformation from waterfall to agile operations, FSSP members would like to more actively share cloud change stories in the financial industry.

If you have responsibility at a bank or financial firm for cloud security policy, architecture, engineering, risk management and/or controls assurance, you really are missing out if you skip these meet-ups. Skip the low signal-to-noise events and join us at a regional FSSP meet-up near you. Get in touch to find out more.

CSA Financial Services Stakeholders Platform: https://cloudsecurityalliance.org/research/working-groups/financial-services-stakeholder-platform/

The State of SDP Survey: A Summary

The CSA recently completed its first annual “State of Software-Defined Perimeter” Survey, gauging market awareness and adoption of this modern security architecture – summarized in this infographic.

The survey indicates it is still early for SDP market adoption and awareness, with only 24% of respondents claiming that they are very familiar or have fairly in-depth knowledge of SDP. The majority of respondents are less knowledgeable, with 29% being “somewhat” conversant in SDP, 35% having heard of it, and 11% knowing nothing about it.

A majority of organizations recognize the need to move to a Zero Trust architecture: 70% of respondents noted that they have a high or medium need to change their approach to user access control by better securing user authentication and authorization.

Survey respondents noted that the largest barrier to SDP adoption is existing in-place security technologies, closely followed by lack of organizational awareness and budgetary constraints. This is consistent with SDP’s early-adopter market status and its unique role as an integrated security architecture that enhances and, in some cases, eliminates the use of traditional security tools and technologies. Lack of awareness and perceived budgetary constraints point to a need for the CSA to educate the market on SDP’s security benefits and to provide additional research on the cost-benefit of SDP’s preventive security compared with after-the-fact breach detection.

Respondents clearly understand that SDP functionally overlaps with VPN and NAC solutions, and also understand that SDP will benefit in-place systems such as IAM and SIEM. Organizations also see the benefits that SDP provides, with a majority indicating they could realize an improved security posture (63%) and a reduced attack surface (52%). A strong minority also see the benefits of reduced costs (48%) and improved compliance (44%).

In terms of adoption, a majority of organizations see themselves using SDP as a VPN replacement (64%) or a NAC alternative (55%) – both of which are common first projects for SDP.      

Based on this initial survey, we’re pleased to see this level of awareness, and optimistic that the concept of Zero Trust can be achieved by implementing SDP. Clearly, organizations are just beginning the transition from traditional security technologies to SDP and are looking for guidance. The CSA is addressing this demand with SDP resources and information – in fact, a majority of survey respondents requested additional technical documents, marketing resources, and webcasts. The SDP Working Group has recently published the SDP Architecture Guide research document; other resources, such as version 2.0 of the SDP specification and the additional guidance noted in the architecture document, will follow.

SDP is clearly a very important security development, providing an updated approach to replace current measures that fail to address the inherent vulnerabilities in the network and application connectivity protocols of the past. If you’d like to download the above infographic as a PDF, you can find it here: https://cloudsecurityalliance.org/artifacts/sdp-awareness-and-adoption-infographic

We’d like to thank the following individuals from the SDP leadership team for their work in creating this report and accompanying blog post:

  • Juanita Koilpillai
  • Nya Murray
  • Jason Garbis
  • Junaid Islam

How to Improve the Accuracy and Completeness of Cloud Computing Risk Assessments?

By Jim de Haas, cloud security expert, ABN AMRO Bank

This whitepaper draws on the security challenges of cloud computing environments and suggests a logical approach to dealing with their security aspects in a holistic way by introducing the Cloud Octagon Model. This model makes it easier for organizations to identify, represent and assess risks in the context of their cloud implementation across multiple actors (legal, information risk management, operational risk management, compliance, architecture, procurement, privacy office, development teams and security).

Why the Cloud Octagon Model?

Developed to support your organization’s risk assessment methodology, the Cloud Octagon Model provides practical guidance and structure to all involved risk parties so they can keep up with rapid changes in privacy and data protection laws and regulations, as well as changes in technology and their security implications. The goals of this model are to reduce the risks associated with cloud computing, improve the effectiveness of the cloud risk team, improve the manageability of the solution and, lastly, improve security.

Positioning the Octagon Model in Risk Assessments

What if an organization already has procedures and tools for cloud risk assessments, or its regulator demands that the risk assessment methodology be supported by international standards? The Octagon Model can be used to supplement an organization’s existing risk assessment methodology; applying it makes risk assessments both more complete and more accurate.

Security controls

The whitepaper describes the 60 security controls included in the model, spread across the octagon’s aspects. No matter how complex or large your cloud project is, walking through these 60 controls will result in a proper risk assessment.

Game on!

In addition to structured education and certification programs, learning about cloud security through play is a great way to get the message across. One of the initiatives to raise awareness of cloud computing security among second-line experts is a board game version of the Octagon Model, developed with help from the gamemaster. By playing, participants learn which topics are relevant to discuss during a risk workshop.

Interested in learning more? You can download the Cloud Octagon Model for free here.

https://cloudsecurityalliance.org/artifacts/cloud-octagon-model/

Will Hybrid Cryptography Protect Us from the Quantum Threat?

By Roberta Faux, Director of Advanced Cryptography, BlackHorse Solutions

Our new white paper explains the pros and cons of hybrid cryptography. Produced by the CSA Quantum-Safe Security Working Group, this primer, “Mitigating the Quantum Threat with Hybrid Cryptography,” is aimed at helping non-technical corporate executives understand how to address the potential threat quantum computers pose to an organization’s infrastructure. Topics covered include:

  • Types of hybrids
  • Cost of hybrids
  • Who needs a hybrid
  • Caution about hybrids

The quantum threat

Quantum computers are already here. Well, at least tiny ones are. Scientists hope to solve the scaling issues and build large-scale quantum computers within, perhaps, the next 10 years. There are many exciting applications for quantum computing, but there is also one glaring threat: Large-scale quantum computers will render nearly all of today’s cryptography vulnerable.

Standards organizations prepare

The good news is that there already exist cryptographic algorithms believed to be unbreakable—even against large-scale quantum computers. These cryptographic algorithms are called “quantum resistant.” Standards organizations worldwide, including ETSI, IETF, NIST, ISO, and X9, have been scrambling to put guidance into place, but the task is daunting.

Quantum-resistant cryptography is based on complex underlying mathematical problems, such as the following:

  • Shortest-Vector Problem in a lattice
  • Syndrome Decoding Problem
  • Solving systems of multivariate equations
  • Constructing isogenies between supersingular elliptic curves

For such problems, there are no known attacks, even with a future large-scale quantum computer. There are many quantum-resistant cryptographic algorithms, each with numerous trade-offs (e.g., computation time, key size, security). No single algorithm satisfies all possible requirements; many factors need to be considered to determine the ideal match for a given environment.

Cryptographic migration

There is a growing concern about how and when to migrate from the current ubiquitously used “classical cryptography” of yesterday and today to the newer quantum-resistant cryptography of today and tomorrow. Historically, cryptographic migrations require at least a decade for large enterprises. Moreover, as quantum-resistant algorithms tend to have significantly larger key sizes, migration to quantum-resistant systems will likely involve updating both software and protocols. Consequently, live migrations will prove a huge challenge.

Cryptographic hybrids

A cryptographic hybrid scheme uses two cryptographic schemes to accomplish the same function. For instance, a hybrid system might digitally sign a message with one cryptographic scheme and then re-sign the same message with a second scheme. The benefit is that the message will remain secure even if one of the two cryptographic schemes becomes compromised. Hence, many are turning to hybrid solutions. As discussed in the paper, there are several flavors of hybrids:

  • A classical scheme and a quantum-resistant scheme
  • Two quantum-resistant schemes
  • A classical scheme with quantum key distribution
  • A classical asymmetric scheme along with a symmetric scheme
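
To make the first of these flavors concrete, below is a minimal sketch, in Python, of a hybrid signature along the lines described above: the same message is signed with a classical scheme (Ed25519, via the widely used cryptography package) and with a second, quantum-resistant signer. The HybridSigner class and the pq_signer interface are our illustrative assumptions, not anything specified in the paper; any real post-quantum library would be slotted in behind that interface.

    # A minimal sketch of hybrid signing, assuming a classical Ed25519 key
    # (from the `cryptography` package) paired with a placeholder
    # quantum-resistant signer. `pq_signer` is hypothetical: any object
    # exposing sign(message) and verify(message, signature) would do.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    class HybridSigner:
        def __init__(self, pq_signer):
            self.classical_key = Ed25519PrivateKey.generate()
            self.pq_signer = pq_signer  # hypothetical post-quantum signer

        def sign(self, message: bytes) -> dict:
            # Sign the same message twice, once per scheme.
            return {
                "classical": self.classical_key.sign(message),
                "post_quantum": self.pq_signer.sign(message),
            }

        def verify(self, message: bytes, sig: dict) -> bool:
            # Accept only if BOTH signatures check out, so the message stays
            # authentic even if one of the two schemes is later broken.
            try:
                self.classical_key.public_key().verify(sig["classical"], message)
            except InvalidSignature:
                return False
            return self.pq_signer.verify(message, sig["post_quantum"])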

However, adopting a quantum-resistant solution on its own prematurely may be even riskier: the newer algorithms are less battle-tested than the classical schemes they would replace, and a hybrid hedges that risk by keeping a proven scheme in place.

Hybrid drawbacks

Hybrids come at the cost of increased bandwidth, code management, and interoperability challenges. Cryptographic implementations, in general, can be quite tricky. A flawed hybrid implementation would potentially be even more dangerous than a quantum computer, as security breaches are more commonly the result of a flawed implementation than of an inherently weak cryptosystem. Even a small mistake in configuration or coding can erode some or all of the cryptographic security. Any hybrid cryptographic implementation therefore needs very careful attention to ensure it does not make us less secure.

Do you need a hybrid?

Some business models will need to begin migration before standards are in place. So, who needs to consider a hybrid as a mitigation to the quantum threat? Two types of organizations are at high risk, namely, those who:

  • need to keep secrets for a very long time, and/or
  • lack the ability to change cryptographic infrastructure quickly.

An organization that has sensitive data should be concerned if an adversary could potentially collect that data now in encrypted form and decrypt it later, whenever quantum computing capabilities become available. This is a threat facing governments, law firms, pharmaceutical companies, and many others. Also, organizations that rely on firmware or hardware will need significant development time to update and replace those dependencies. These include industries working in aerospace, automotive connectivity, data processing and telecommunications, as well as organizations that use hardware security modules.

Conclusion

The migration to quantum resistance is going to be a challenge. It is vital that corporate leaders plan for this now. Organizations need to start asking the following questions:

  • How is your organization dependent on cryptography?
  • How long does your data need to be secure?
  • How long will it take you to migrate?
  • Have you ensured you fully understand the ramifications of migration?

Well-informed planning will be key to a smooth transition to quantum-resistant security. Organizations need to start conducting experiments now to determine unforeseen impacts. Importantly, organizations are advised to seek expert advice so that their migration doesn’t introduce new vulnerabilities.

As you prepare your organization to secure against future threats from quantum computers, make sure to do the following:

  • Identify reliance on cryptography
  • Determine risks
  • Understand options
  • Perform a proof of concept
  • Make a plan

Mitigating the Quantum Threat with Hybrid Cryptography offers more insights into how hybrids will help address the threat of quantum computers. Download the full paper today.

CSA Issues Top 20 Critical Controls for Cloud Enterprise Resource Planning Customers

By Victor Chin, Research Analyst, Cloud Security Alliance

Cloud technologies are being increasingly adopted by organizations, regardless of their size, location or industry, and it’s no different when it comes to business-critical applications, typically known as enterprise resource planning (ERP) applications. Most organizations are migrating these business-critical applications to hybrid architectures that combine on-premises and cloud ERP applications. To assist in this process, CSA has released the Top 20 Critical Controls for Cloud Enterprise Resource Planning (ERP) Customers, a report that assesses and prioritizes the most critical controls organizations need to consider when transitioning their business-critical applications to cloud environments.

This document provides 20 controls, grouped into domains for ease of consumption, that align with the existing structure of controls and domains in the CSA Cloud Controls Matrix (CCM) v3.

The document focuses on the following domains:

  • Cloud ERP Users: Cloud enterprise resource planning applications are used extensively by thousands of users with very different access requirements and authorizations. This domain provides controls aimed at protecting users and their access to cloud enterprise resource planning applications.
  • Cloud ERP Application: A defining attribute of cloud ERP applications is the complexity of the technology and functionality provided to users. This domain provides controls aimed at protecting the application itself.
  • Integrations: Cloud ERP applications are not isolated systems; they tend to be extensively integrated and connected to other applications and data sources. This domain focuses on securing the integrations of cloud enterprise resource planning applications.
  • Cloud ERP Data: Cloud enterprise resource planning applications store highly sensitive and regulated data. This domain focuses on critical controls to protect access to this data.
  • Business Processes: Cloud enterprise resource planning applications support some of the most complex and critical business processes for organizations. This domain provides controls that mitigate risks to these processes.

While there are various ERP cloud service models such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)—each with different security/service-level agreements and lines of responsibility—organizations are required to protect their own data, users and intellectual property (IP). As such, organizations that are either considering an ERP cloud migration or already have workloads in the cloud can use these control guidelines to build or bolster a strong foundational ERP security program.

ERP applications are complex systems and, consequently, challenging to secure. In the cloud, their complexity increases due to factors such as shared security models, varying cloud service models, and the intersection between IT and business controls. Nevertheless, drawn by the benefits of cloud computing, organizations are increasingly migrating enterprise resource planning applications to the cloud.

Organizations should use this document as a guide for prioritizing the most important controls to implement when adopting cloud ERP applications. The CSA ERP Security Working Group will continue to keep this document updated and relevant. In the meantime, the group hopes readers find it useful when migrating or securing enterprise resource planning applications in the cloud.

Download this free resource now.

The 12 Most Critical Risks for Serverless Applications

By Sean Heide, CSA Research Analyst and Ory Segal, Israel Chapter Board Member

When planning a serverless architecture for your company, there are a few key risks to take into account to ensure the architecture carries the proper security controls and that your program can sustain applications over the long term. Though the paper highlights the 12 risks deemed most common, other potential risks should always be treated with the same rigor.

Serverless architectures (also referred to as “FaaS,” or Function as a Service) enable organizations to build and deploy software and services without maintaining or provisioning any physical or virtual servers. Applications made using serverless architectures are suitable for a wide range of services and can scale elastically as cloud workloads grow. At the same time, this wide array of off-site application structures opens up a string of potential attack surfaces, with vulnerabilities spanning the many APIs and HTTP endpoints such applications rely on.

From a software development perspective, organizations adopting serverless architectures can focus instead on core product functionality, rather than the underlying operating system, application server or software runtime environment. By developing applications using serverless architectures, users relieve themselves from the daunting task of continually applying security patches for the underlying operating system and application servers. Instead, these tasks are now the responsibility of the serverless architecture provider. In serverless architectures, the serverless provider is responsible for securing the data center, network, servers, operating systems, and their configurations. However, application logic, code, data, and application-layer configurations still need to be robust—and resilient to attacks. These are the responsibility of application owners.

While the comfort and elegance of serverless architectures is appealing, they are not without drawbacks. In fact, serverless architectures introduce a new set of issues that must be considered when securing such applications, including an increased attack surface, greater attack-surface complexity, inadequate security testing, and the limited applicability of traditional protections such as firewalls.

Serverless application risks by the numbers

Today, many organizations are exploring serverless architectures, or just making their first steps in the serverless world. To help them become successful in building robust, secure and reliable applications, the Cloud Security Alliance’s Israel Chapter has drafted “The 12 Most Critical Risks for Serverless Applications 2019.” This new paper enumerates what top industry practitioners and security researchers with vast experience in application security, cloud and serverless architectures believe to be the current top risks specific to serverless architectures.

Organized in order of risk factor, with SAS-1 being the most critical, the list breaks down as follows (SAS-1 is illustrated in a short sketch after the list):

  • SAS-1: Function Event Data Injection
  • SAS-2: Broken Authentication
  • SAS-3: Insecure Serverless Deployment Configuration
  • SAS-4: Over-Privileged Function Permissions & Roles
  • SAS-5: Inadequate Function Monitoring and Logging
  • SAS-6: Insecure Third-Party Dependencies
  • SAS-7: Insecure Application Secrets Storage
  • SAS-8: Denial of Service & Financial Resource Exhaustion
  • SAS-9: Serverless Business Logic Manipulation
  • SAS-10: Improper Exception Handling and Verbose Error Messages
  • SAS-11: Obsolete Functions, Cloud Resources and Event Triggers
  • SAS-12: Cross-Execution Data Persistency
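
To give a flavor of the most critical item on this list, here is a toy Python sketch of SAS-1, function event data injection, in an AWS-Lambda-style handler. The handler names and the event shape are our own illustrative assumptions, not examples taken from the paper.

    # A toy illustration of SAS-1 (function event data injection). The
    # event field names are hypothetical.
    import ipaddress
    import subprocess

    def handler_unsafe(event, context):
        # Vulnerable: attacker-controlled event data is interpolated into a
        # shell command, so a payload like "8.8.8.8; cat /etc/passwd" runs.
        subprocess.run(f"ping -c 1 {event['host']}", shell=True)

    def handler_safer(event, context):
        # Safer: validate the input, avoid the shell, and pass the value as
        # a discrete argument so it cannot be parsed as shell syntax.
        host = str(ipaddress.ip_address(event["host"]))  # raises if not an IP
        subprocess.run(["ping", "-c", "1", host], check=True)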

In developing this security awareness and education guide, researchers pulled information from such sources as freely available serverless projects on GitHub and other open source repositories; automated source code scanning of serverless projects using proprietary algorithms; and data provided by our partners, individual contributors and industry practitioners.

While the document provides information about what are believed to be the most prominent security risks for serverless architectures, it is by no means an exhaustive list. Interested parties should also check back often as this paper will be updated and enhanced based on community input along with research and analysis of the most common serverless architecture risks.

Thanks must also be given to the following contributors, who were involved in the development of this document: Ory Segal, Shaked Zin, Avi Shulman, Alex Casalboni, Andreas N, Ben Kehoe, Benny Bauer, Dan Cornell, David Melamed, Erik Erikson, Izak Mutlu, Jabez Abraham, Mike Davies, Nir Mashkowski, Ohad Bobrov, Orr Weinstein, Peter Sbarski, James Robinson, Marina Segal, Moshe Ferber, Mike McDonald, Jared Short, Jeremy Daly, and Yan Cui.

CVE and Cloud Services, Part 2: Impacts on Cloud Vulnerability and Risk Management

By Victor Chin, Research Analyst, Cloud Security Alliance, and Kurt Seifried, Director of IT, Cloud Security Alliance

This is the second post in a series where we discuss cloud service vulnerability and risk management trends in relation to the Common Vulnerabilities and Exposures (CVE) system. In the first blog post, we wrote about Inclusion Rule 3 (INC3) and how it affects the counting of cloud service vulnerabilities. Here, we will delve deeper into how the exclusion of cloud service vulnerabilities impacts enterprise vulnerability and risk management.

Traditional vulnerability and risk management

CVE identifiers are the linchpin of traditional vulnerability management processes. Besides identifying vulnerabilities, the CVE system allows different services and business processes to interoperate, making enterprise IT environments more secure. For example, a network vulnerability scanner can identify whether a vulnerability (e.g., CVE-2018-1234) is present in a deployed system by querying said system.

The queries can be conducted in many ways, such as via a banner grab, querying the system for what software is installed, or even via proof of concept exploits that have been de-weaponized. Such queries confirm the existence of the vulnerability, after which risk management and vulnerability remediation can take place.
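
As a minimal illustration of the banner-grab style of query, and not the logic of any particular scanner, the Python sketch below connects to a service, reads its self-reported banner, and matches it against a mapping of vulnerable product versions to CVE IDs. The product string, the mapping, and the target address are all hypothetical.

    # A toy banner-grab check; real scanners use far more robust
    # fingerprinting than a substring match on a self-reported banner.
    import socket

    AFFECTED = {"ExampleFTPd/2.3.4": "CVE-2018-1234"}  # hypothetical mapping

    def banner_grab(host: str, port: int, timeout: float = 3.0) -> str:
        # Connect and read whatever banner the service volunteers.
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(1024).decode(errors="replace").strip()

    banner = banner_grab("192.0.2.10", 21)  # RFC 5737 documentation address
    for product, cve in AFFECTED.items():
        if product in banner:
            print(f"{banner!r} appears to be vulnerable to {cve}")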

Once the existence of the vulnerability is confirmed, enterprises must conduct risk management activities. Enterprises might first prioritize remediation according to the criticality of the vulnerabilities. The Common Vulnerability Scoring System (CVSS) is one basis for this triage: it gives each vulnerability a score according to how critical it is, and from there enterprises can prioritize and remediate the more critical ones first. Like other vulnerability information, CVSS scores are normally associated with CVE IDs.
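
In code, this triage step can be as simple as sorting findings by CVSS base score, most critical first. The scores and all CVE IDs other than CVE-2018-1234 (used as an example above) are invented for illustration:

    # Toy CVSS-based triage: remediate the highest-scoring findings first.
    findings = [
        {"cve": "CVE-2018-1234", "cvss": 9.8, "host": "web-01"},
        {"cve": "CVE-2019-1111", "cvss": 5.3, "host": "db-02"},
        {"cve": "CVE-2019-2222", "cvss": 7.5, "host": "app-03"},
    ]
    for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
        print(f"{f['cve']} (CVSS {f['cvss']}) on {f['host']}")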

Next, mitigating actions can be taken to remediate the vulnerabilities. This could mean implementing patches or workarounds, or applying security controls. How the organization chooses to address the vulnerability is an exercise in risk management; it has to carefully balance its resources against its risk appetite. Generally, organizations choose risk avoidance/rejection, risk acceptance, or risk mitigation.

Risk avoidance and rejection is fairly straightforward: the organization does not want to mitigate the vulnerability and, based on the information available, determines that the risk the vulnerability poses is above its risk threshold, so it stops using the vulnerable software.

Risk acceptance is when the organization, based on the information available, determines that the risk posed is below its risk threshold and decides to accept it.

Lastly, in risk mitigation, the organization chooses to take mitigating actions and implement security controls that reduce the risk. In traditional environments, such actions are possible because the organization generally owns and controls the infrastructure that provisions the IT service. For example, to mitigate a vulnerability, organizations can implement firewalls or intrusion detection systems, conduct system-hardening activities, deactivate a service, change the configuration of a service, and much more.
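
One way to see how these three options relate is the threshold-style decision sketch below. It is only a cartoon of the reasoning above, with an invented scoring scale and rule, not a substitute for an actual risk management process:

    # Map the three risk responses onto a toy decision rule: accept risk at
    # or below the threshold; otherwise mitigate when controls are available,
    # and avoid (stop using the software) when they are not.
    def risk_decision(risk_score: float, threshold: float,
                      mitigation_available: bool) -> str:
        if risk_score <= threshold:
            return "accept"      # risk acceptance
        if mitigation_available:
            return "mitigate"    # patches, workarounds, security controls
        return "avoid"           # risk avoidance/rejection

    print(risk_decision(9.8, 7.0, mitigation_available=True))   # -> mitigate
    print(risk_decision(5.3, 7.0, mitigation_available=False))  # -> accept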

Thus, in traditional IT environments, organizations are able to take many mitigating actions because they own and control the stack. Furthermore, organizations have access to vulnerability information with which to make informed risk management decisions.

Cloud service customer challenges

Compared to traditional IT environments, the situation is markedly different for external cloud environments. The differences all stem from organizations not owning and controlling the infrastructure that provisions the cloud service, as well as not having access to vulnerability data of cloud native services.

Enterprise users don’t have ready access to cloud native vulnerability data because CVE IDs are not generally assigned to cloud native vulnerabilities, leaving no official identifier with which to associate that data. Consequently, it’s difficult for enterprises to make an informed, risk-based decision about a vulnerable cloud service: for example, when should an enterprise customer reject the risk and stop using the service, and when should it accept the risk and continue?

Furthermore, even if CVE IDs were assigned to cloud native vulnerabilities, the differences between traditional and cloud environments are so vast that the vulnerability data normally associated with a CVE in a traditional environment is inadequate for cloud service vulnerabilities. For example, in a traditional IT environment, CVEs are linked to the version of a software product, so an enterprise customer can verify that a vulnerable version is running by checking the software version. In cloud services, the versioning of the software (if there is one!) is usually known only to the cloud service provider and is not made public. Additionally, the enterprise user is unable to apply security controls or other mitigations to address the risk of a vulnerability.

This is not to say that CVEs and the associated vulnerability data are useless for cloud services. Rather, we should consider including vulnerability data that is useful in the context of a cloud service. In particular, cloud service vulnerability data should help enterprise cloud customers make the important risk-based decision of when to continue or stop using a service.

Thus, until such data is available, just as enterprise customers must trust cloud service providers with their sensitive data, they must also trust, blindly, that the cloud service providers are properly remediating the vulnerabilities in their environment in a timely manner.

The CVE gap

With the increasing global adoption and proliferation of cloud services, the exclusion of service vulnerabilities from the CVE system, and the impacts of that exclusion, have left a growing gap that the cloud services industry should address. This gap affects not only enterprise vulnerability and risk management but also other key stakeholders in the ecosystem.

In the next post, we’ll explore how other key stakeholders are affected by the shortcomings of cloud service vulnerability management.

Please let us know what you think about INC3’s impacts on cloud service vulnerability and risk management in the comments section below, or email us.