Egregious 11 Meta-Analysis Part 3: Weak Control Plane and DoS

By Victor Chin, Research Analyst, CSA

This is the third blog post in the series where we analyze the security issues in the new iteration of the Top Threats to Cloud Computing report[1]. Each blog post features one security issue that respondents perceive as less relevant and one that they perceive as more relevant.

In this report, we found that traditional cloud security issues, like those stemming from concerns about having third-party providers, are being reported as less relevant, while more nuanced issues specific to cloud environments are being reported as more problematic. With this in mind, we will be examining Denial of Service and Weak Control Plane further.

**Please note that the Top Threats to Cloud Computing reports are not meant to be the definitive list of security issues in the cloud. Rather, the studies are a measure of industry perception of key security issues.**

Weak Control Plane

Weak Control Plane featured in 8th position in the latest iteration of the Top Threats to Cloud Computing report. A weak control plane refers to a cloud service that does not provide security controls adequate to meet the customer's security requirements. One example of a weak control plane is the lack of two-factor authentication and of the ability to enforce its usage. Like the other debuting security issues, a weak control plane is something that a customer might only discover after migrating to the cloud.
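As an illustration of checking for that specific gap, here is a minimal sketch in Python using boto3 that lists AWS IAM users with no MFA device enrolled. It assumes configured AWS credentials with IAM read permissions and is not a prescribed audit procedure; detecting the gap is the easy part, while enforcing MFA depends on the controls the provider actually exposes.

```python
# Minimal sketch: flag IAM users without an MFA device, assuming
# boto3 and AWS credentials are already configured. Detecting the
# gap is easy; *enforcing* MFA depends on provider-exposed controls.
import boto3

iam = boto3.client("iam")

def users_without_mfa():
    """Return the names of IAM users that have no MFA device enrolled."""
    missing = []
    paginator = iam.get_paginator("list_users")
    for page in paginator.paginate():
        for user in page["Users"]:
            devices = iam.list_mfa_devices(UserName=user["UserName"])
            if not devices["MFADevices"]:
                missing.append(user["UserName"])
    return missing

if __name__ == "__main__":
    for name in users_without_mfa():
        print(f"No MFA device enrolled: {name}")
```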

A key difference between traditional IT and Cloud

A key difference between traditional IT and cloud service applications might help explain why weak control planes are becoming a problem in cloud services. In traditional IT environments, customer-controlled applications and their security features were designed with the customer as the main user. The application is hosted on the customer’s infrastructure and configured by the customer. The customer has full visibility and control over the application and is thus also responsible for its security. The main role of the IT provider is to continually provide patches or updates to the application to ensure that bugs and vulnerabilities are fixed.

The situation for cloud services is different because the cloud service is never fully ‘shipped off’ to the customer; it is always hosted by the cloud service provider. Hence, the provider not only has to design a suite of security controls in the cloud service that is usable by its customers, but also has to consider the security mechanisms and features that protect the cloud service and the virtual infrastructure that hosts it. Furthermore, due to the nature of cloud services, customers generally cannot use their own security tools or technologies to augment the cloud service (e.g., filtering incoming network traffic). Both sets of security controls must meet the security, regulatory, and compliance requirements of the provider's various customers. With more enterprises adopting a ‘cloud-first’ policy, cloud service providers face the challenge of satisfying the varied technical security requirements of their many customers. Hence, it is not surprising that some enterprises might find the current security controls inadequate for their business needs.
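To make that point concrete, the hedged sketch below shows what ‘filtering incoming network traffic’ looks like when it must be expressed through provider-native controls rather than the customer's own appliances: a Python/boto3 call adding an EC2 security group rule that restricts inbound HTTPS to a corporate address range. The group ID and CIDR are placeholders, not values from any real deployment.

```python
# Illustrative sketch: since customers cannot place their own filtering
# appliance in front of a managed service, traffic filtering is expressed
# through provider-native controls instead -- here, an EC2 security group
# rule restricting inbound HTTPS to a corporate CIDR.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group ID
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 443,
        "ToPort": 443,
        "IpRanges": [{
            "CidrIp": "203.0.113.0/24",  # example corporate range (TEST-NET-3)
            "Description": "Allow HTTPS from corporate network only",
        }],
    }],
)
```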

Fulfilling regulatory and security requirements

To sidestep such issues, prospective customers have to do their due diligence when considering cloud migration. Customers have to ensure that the cloud services they wish to use can fulfill their regulatory and security requirements. Prospective cloud customers can use the Cloud Security Alliance’s Consensus Assessments Initiative Questionnaire (CAIQ)[2] to that end. The CAIQ is aligned with the Cloud Controls Matrix (CCM) and helps document the security controls that exist in IaaS, PaaS, and SaaS offerings, providing security control transparency. Furthermore, after cloud migration, customers should continue to monitor their regulatory and compliance landscape and communicate any changes to the cloud service provider. Having an open communication channel helps ensure that cloud service providers can make timely changes to the cloud service to align with changing customer security, compliance, and regulatory requirements.

Denial of Service

Denial of Service was rated 8th and then 11th in the last two iterations of the Top Threats report. In the latest Egregious 11 report, it has dropped off the list entirely. Denial of Service can take many forms: a network attack such as a Distributed Denial of Service (DDoS) attack, or a system outage caused by a system administrator's mistake.

Denial of Service, like many other security issues that have dropped off the list, is a concern stemming from the third-party nature of cloud services. In the early days of cloud computing, it was natural for enterprises to worry about service availability when considering cloud migration. These enterprises had valid concerns about the cloud service providers’ network bandwidth as well as their compute and storage capacities. However, over the years, cloud service providers have invested heavily in their infrastructure and now have almost unrivaled bandwidth and processing capabilities. At the same time, cloud service providers have built sophisticated DDoS protection for their customers. For example, Amazon Web Services (AWS) has AWS Shield[3], Microsoft Azure has Azure DDoS Protection[4], and Google Cloud Platform (GCP) has Google Cloud Armor[5].
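Attaching one of these protections is typically a control-plane operation rather than something the customer builds. As a hedged sketch, the boto3 snippet below registers a resource with AWS Shield Advanced; it assumes an active Shield Advanced subscription, and the load balancer ARN and protection name are placeholders for illustration.

```python
# Hedged sketch: AWS Shield Standard applies automatically, but Shield
# Advanced protections are attached per resource. Assumes an active
# Shield Advanced subscription; the ARN below is a placeholder.
import boto3

shield = boto3.client("shield")

response = shield.create_protection(
    Name="web-tier-ddos-protection",  # hypothetical protection name
    ResourceArn=(
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "loadbalancer/app/my-alb/50dc6c495c0c9188"  # placeholder ARN
    ),
)
print("ProtectionId:", response["ProtectionId"])
```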

In spite of all the infrastructure investment and the tools available to help customers mitigate DDoS attacks, other forms of denial of service can still happen. These incidents are often not malicious but rather occur due to mistakes by the cloud service provider. For example, in May 2019, Microsoft Azure and Office 365 experienced a three-hour outage due to a DNS configuration blunder[6]. Unfortunately, no amount of infrastructure investment or tooling can prevent such incidents from happening. Customers have to realize that by migrating to the cloud, they are relinquishing full control of certain aspects of their IT. They have to trust that the cloud service provider has put in place the necessary precautions to reduce, as much as possible, the occurrence of such incidents.

______________________________________________________________________________

[1] https://cloudsecurityalliance.org/artifacts/top-threats-to-cloud-computing-egregious-eleven

[2] https://cloudsecurityalliance.org/artifacts/consensus-assessments-initiative-questionnaire-v3-0-1/

[3] https://aws.amazon.com/shield/

[4] https://docs.microsoft.com/en-us/azure/virtual-network/ddos-protection-overview

[5] https://cloud.google.com/armor/

Egregious 11 Meta-Analysis Part 2: Virtualizing Visibility

By Victor Chin, Research Analyst, CSA

This is the second blog post in the series where we analyze the security issues in the new iteration of the Top Threats to Cloud Computing report. Each blog post features one security issue that respondents perceive as less relevant and one that they perceive as more relevant.

In this report, we found that traditional cloud security issues stemming from concerns about having a third-party provider are being perceived as less relevant, while more nuanced issues specific to cloud environments are being perceived as more problematic. With this in mind, we will be examining Shared Technology Vulnerabilities and Limited Cloud Usage Visibility further.

**Please note that the Top Threats to Cloud Computing reports are not meant to be the definitive list of security issues in the cloud. Rather, the studies measure what industry experts perceive the key security issues to be.**

Shared Technology Vulnerabilities

Shared Technology Vulnerabilities generally refer to vulnerabilities in the virtual infrastructure where resources are shared among tenants. Over the years, there have been several vulnerabilities of this nature, the most prominent being the VENOM (CVE-2015-3456)[1] vulnerability disclosed in 2015. Shared Technology Vulnerabilities used to rank high on the list of problematic issues: in the first two iterations of the report, they were rated 9th and 12th. In the latest iteration, the issue has dropped off entirely and is no longer perceived as relevant, scoring 6.27 (our cutoff was 7 and above) and ranking 16th out of the 20 security issues surveyed.

Virtualization itself is not a new cloud technology, and its benefits are well known. Organizations have been using virtualization for many years as it helps to increase IT agility, flexibility, and scalability while generating cost savings. For example, an organization need only procure and maintain one physical asset, which is then virtualized so that its resources are shared across the organization. As the organization owns and manages the entire IT stack, it also has visibility and control over the virtualization technology.

In cloud environments, the situation is markedly different. Virtualization technology (like hypervisors) is generally considered underlying technology that is owned and managed by the cloud service provider. Consequently, the cloud customer has limited access or visibility into the virtualization layer.

For example, consider an architectural representation of the three cloud service models. In an Infrastructure-as-a-Service (IaaS) model, the underlying technology refers to the APIs and everything below them; those components are under the control and management of the CSP, while anything above the APIs is under the control and management of the cloud customer. For Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS), the underlying technology refers to anything underneath Integration & Middleware and underneath Presentation Modality and Presentation Platform, respectively.

Naturally, in the early days of cloud computing, such vulnerabilities were a significant concern for customers. Not only did they have limited access and visibility into the virtualization layer, but the cloud services were also all multi-tenant systems which contained the data and services of other customers of the CSPs.

Over time, the industry seems to have grown to trust the cloud service providers when it comes to Shared Technology Vulnerabilities. Cloud adoption is at its highest, with many organizations adopting a ‘Cloud First’ policy. However, there is still no industry standard or framework that formalizes vulnerability notifications from CSPs, even when a vulnerability is found in the underlying cloud infrastructure. For example, when there is a vulnerability disclosure for a particular hypervisor (e.g., Xen), an affected CSP does not have to provide any information to its customers. For more information on this issue, please read my other blog post on cloud vulnerabilities.

That said, it is worth noting that many recent cloud breaches are the result of misconfigurations by cloud customers. For example, in 2017, Accenture left at least four Amazon S3 buckets set to public, exposing mission-critical infrastructure data. As cloud services have matured, the major CSPs have, for the most part, provided sufficient security controls for cloud customers to properly configure their environments.
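As a minimal, assumption-laden sketch of guarding against that class of misconfiguration, the Python/boto3 snippet below flags buckets whose ACL grants access to the global ‘AllUsers’ group, which is one way a bucket ends up publicly readable. It assumes credentials with S3 read permissions and checks ACLs only, not bucket policies or account-level public access blocks.

```python
# Minimal audit sketch, assuming boto3 and S3 read permissions: flag
# buckets whose ACL grants access to "AllUsers" (public) -- the kind of
# misconfiguration behind incidents like the Accenture S3 exposure.
# Note: checks ACLs only, not bucket policies or public access blocks.
import boto3

s3 = boto3.client("s3")
PUBLIC_URI = "http://acs.amazonaws.com/groups/global/AllUsers"

for bucket in s3.list_buckets()["Buckets"]:
    acl = s3.get_bucket_acl(Bucket=bucket["Name"])
    grantees = [g.get("Grantee", {}).get("URI") for g in acl["Grants"]]
    if PUBLIC_URI in grantees:
        print(f"Publicly readable bucket: {bucket['Name']}")
```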

Nevertheless, virtualization technology is a critical component to any cloud service, and vulnerabilities in the virtualization layer can have severe consequences. Cloud customers must remain vigilant when it comes to Shared Technology Vulnerabilities.

Limited Cloud Usage Visibility

In the latest Top Threats to Cloud Computing report, Limited Cloud Usage Visibility made its debut in the 10th position.

Limited Cloud Usage Visibility refers to when organizations experience a significant reduction in visibility over their information technology stack. This is due to two main factors. Firstly, unlike in traditional IT environments, the enterprise does not own or manage the underlying cloud IT infrastructure. Consequently, it cannot implement security controls or monitoring tools with as much depth and autonomy as it could with a traditional IT stack. Instead, cloud customers often have to rely on the logs provided to them by the cloud service provider, and sometimes these logs are not as detailed as the customer would like them to be.

Secondly, cloud services are highly accessible. They can generally be reached from the public internet without going through a company VPN or gateway. Hence, the effectiveness of some traditional enterprise security tools is reduced. For instance, network traffic monitoring and perimeter firewalls cannot capture traffic to cloud services that originates outside the organization. For many organizations, such monitoring capabilities are becoming more critical as they begin to host business-critical data and services in the cloud.
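Both factors push customers toward provider-supplied telemetry. As a hedged illustration, the boto3 sketch below pulls recent management events from AWS CloudTrail; the event name queried is just an example, and the broader point is that the depth and retention of these records are set by the provider, not the customer.

```python
# Illustrative sketch of relying on provider-supplied logs: pull recent
# management events from AWS CloudTrail via boto3. The depth of these
# records is fixed by the provider, not the customer.
import boto3

cloudtrail = boto3.client("cloudtrail")

events = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventName",
        "AttributeValue": "ConsoleLogin",  # example event of interest
    }],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event.get("Username", "unknown"))
```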

To alleviate the issue, enterprises can adopt more cloud-aware technologies or services that provide greater visibility into and control over the cloud environment. However, most of the time, the level of control and granularity cannot match that of a traditional IT environment. This reduced visibility and control is something that enterprises moving to the cloud have to get used to. There will be some level of risk associated with it, and it is a risk they have to accept or work around. Organizations that are not prepared for this lack of visibility in the cloud might end up not applying the proper mitigations, or they will find themselves unable to fully realize the cost savings of a cloud migration.

Continue reading the series…

Read our next blog post in this series analyzing the overarching trend of cloud security issues highlighted in the Top Threats to Cloud Computing: Egregious 11 report. We will take a look at Weak Control Plane and Denial of Service.

[1] http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-3456