Egregious 11 Meta-Analysis Part 1: (In)sufficient Due Diligence and Cloud Security Architecture and Strategy  

By Victor Chin, Research Analyst, CSA

On August 6th, 2019, the CSA Top Threats working group released the third iteration of the Top Threats to Cloud Computing report.

The following security issues from the previous iteration (“The Treacherous Twelve”) appeared again in the latest report.

  • Data Breaches
  • Account Hijacking
  • Insider Threats
  • Insecure Interfaces and APIs
  • Abuse and Nefarious Use of Cloud Services

At the same time, five new security issues below made their debuts.

  • Misconfiguration and Insufficient Change Control
  • Lack of Cloud Security Architecture and Strategy
  • Weak Control Plane
  • Metastructure and Applistructure Failures
  • Limited Cloud Usage Visibility

To access the full report, please click here.

Before we go into the meta-analysis of The Egregious Eleven, it is important to note that the Top Threats to Cloud Computing reports focus on identifying prominent security issues in the industry based on perception. It is not meant to be the definitive list of security issues in the cloud — instead, the study measures what industry experts perceive the key security issues to be.

The Overarching Trends

Throughout the three iterations of the report, one trend has become increasingly prominent. Traditional cloud security issues stemming from concerns about having a third-party provider, such as Data Loss, Denial of Service, and Insufficient Due Diligence, are being perceived as less relevant. Meanwhile, more nuanced issues specific to cloud environments, such as Lack of Cloud Security Architecture and Strategy, Weak Control Plane, and Metastructure and Applistructure Failures, are increasingly being perceived as more problematic.

Most and Least Relevant Security Issues

Over the next few weeks, we will examine and try to account for the trend mentioned earlier. Each blog post will feature a security issue that is being perceived as less relevant and one that is being perceived as more relevant. In the first post, we will take a closer look at Insufficient Due Diligence and Lack of Cloud Security Architecture and Strategy.

(In)sufficient Due Diligence

Insufficient Due Diligence was rated 8th and 9th in the first and second iterations of the Top Threats to Cloud Computing report, respectively. In the current report, it has dropped off the list entirely. Insufficient Due Diligence refers to prospective cloud customers failing to adequately evaluate cloud service providers (CSPs) to ensure that they meet the various business and regulatory requirements. Such concerns were especially pertinent during the early years of cloud computing, when there were few resources available to help cloud customers make that evaluation.

Frameworks to Improve Cloud Procurement

Since then, many frameworks and projects have been developed to make cloud procurement a smooth journey. The Cloud Security Alliance (CSA), for example, has several tools to help enterprises on their journey of cloud procurement and migration.

  • The Consensus Assessments Initiative Questionnaire (CAIQ) and the Cloud Controls Matrix (CCM) are further supported by the Security, Trust and Assurance Registry (STAR) program, a multi-level assurance framework. The STAR program makes CSP information, such as completed CAIQs (Level 1) and third-party audit certifications (Level 2), publicly accessible.

Around the world, we see many similar frameworks and guidance documents being developed. For example:

  • The Federal Risk and Authorization Management Program (FedRAMP) in the US
  • Multi-Tier Cloud Security (MTCS) Certification Scheme in Singapore
  • The European Security Certification Framework (EU-SEC) in the European Union

With so many governance, risk and compliance support programs being developed globally, it is understandable that Insufficient Due Diligence has fallen off the Top Threats to Cloud Computing list.

Examining Lack of Cloud Security Architecture and Strategy

Lack of Cloud Security Architecture and Strategy was rated third in The Egregious Eleven. Large organizations migrating their information technology stack to the cloud without considering the nuances of IT operations in the cloud environment are creating a significant amount of business risk for themselves. Such organizations fail to plan for the shortcomings they will experience operating their IT stack in the cloud. Moving workloads to the cloud results in organizations having less visibility and control over their data and the underlying cloud infrastructure. Coupled with the self-provisioning and on-demand nature of cloud resources, it becomes very easy to scale up cloud resources, sometimes in an insecure manner. For example, in 2017, Accenture left at least four cloud storage buckets unsecured and publicly downloadable. In highly complex and scalable cloud environments without proper cloud security architecture and processes, such misconfigurations can occur easily.

For cloud migration and operations to go smoothly, such shortcomings must be accounted for. Organizations can engage a Cloud Access Security Broker (CASB) or use cloud-aware technology to gain visibility into their cloud infrastructure. Being able to monitor your cloud environment for misconfigurations or exposures is critical when operating in the cloud.
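Automated detection of such exposures can be sketched in a few lines. The snippet below is illustrative only: it assumes bucket metadata has already been pulled from a provider's inventory API into plain dictionaries, and the field names (`name`, `acl`, `grantee`) and principal names are hypothetical rather than any real CSP's schema.

```python
# Principals that should never appear in a bucket ACL. These names are
# illustrative; real CSPs use their own identifiers for "everyone" grants.
PUBLIC_PRINCIPALS = {"AllUsers", "AuthenticatedUsers"}

def find_public_buckets(buckets):
    """Return the names of buckets whose ACL grants access to everyone."""
    flagged = []
    for bucket in buckets:
        grantees = {grant["grantee"] for grant in bucket.get("acl", [])}
        if grantees & PUBLIC_PRINCIPALS:
            flagged.append(bucket["name"])
    return flagged
```

In practice, a CASB or cloud security posture management tool runs checks like this continuously across every account and region, rather than as a one-off audit.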

On a different note, the fact that Lack of Cloud Security Architecture and Strategy ranks high in the Top Threats to Cloud Computing list is evidence that organizations are actively migrating to the cloud. These nuanced cloud security issues only crop up post-migration and will be the next tranche of problems for which solutions must be found.

Continue reading the series…

Stay tuned for our next blog post analyzing the overarching trend of cloud security issues highlighted in the Top Threats to Cloud Computing: Egregious 11 report. Next time we will take a look at Shared Technology Vulnerabilities and Limited Cloud Usage Visibility.

Uncovering the CSA Top Threats to Cloud Computing with Jim Reavis

By Greg Jensen, Sr. Principal Director – Security Cloud Business Group, Oracle

For those attending this year's Black Hat conference kicking off this week in Las Vegas, many will walk away with an in-depth understanding of risk as well as actionable insights on how they can implement new strategies to defend against attacks. For the many others who don't attend, the Cloud Security Alliance has once again developed its CSA Top Threats to Cloud Computing: The Egregious 11.

I recently sat down with the CEO and founder of CSA, Jim Reavis, to gain a deeper understanding on what leaders and practitioners can learn from this year’s report that covers the top 11 threats to cloud computing – The Egregious 11.

(Greg) Jim, for those who have never seen this, what is the CSA Top Threats to Cloud report and who is your target reader?

(Jim) The CSA Top Threats to Cloud Computing is a research report that is periodically updated by our research team and working group of volunteers to identify high priority cloud security risks, threats and vulnerabilities to enable organizations to optimize risk management decisions related to securing their cloud usage.  The Top Threats report is intended to be a companion to CSA’s Security Guidance and Cloud Controls Matrix best practices documents by providing context around important threats in order to prioritize the deployment of security capabilities to the issues that really matter.

Our Top Threats research is compiled via industry surveys as well as through qualitative analysis from leading industry experts.  This research is among CSA’s most popular downloads and has spawned several translations and companion research documents that investigate cloud penetration testing and real world cloud incidents.  Top Threats research is applicable to the security practitioner seeking to protect assets, executives needing to validate broader security strategies and any others wanting to understand how cloud threats may impact their organization.  We make every effort to relate the potential pitfalls of cloud to practical steps that can be taken to mitigate these risks.

(Greg) Were there any findings in the Top Threats report that really stood out for you?

(Jim) Virtually all of the security issues we have articulated impact all different types of cloud. This is important as we find a lot of practitioners who may narrow their cloud security focus to either Infrastructure as a Service (IaaS) or Software as a Service (SaaS), depending upon their own responsibilities or biases. The cloud framework is a layered model, starting with physical infrastructure with layers of abstraction built on top of it. SaaS is essentially the business application layer built upon some form of IaaS, so the threats are applicable no matter what type of cloud one uses. Poor identity management practices, such as a failure to implement strong authentication, stick out to me as a critical and eminently solvable issue. I think the increased velocity of the "on demand" characteristic of cloud finds its way into the threat of insufficient due diligence and problems of insecure APIs. The fastest way to implement cloud is to implement it securely the first time.

(Greg) What do you think are some of the overarching trends you’ve noticed throughout the last 3 iterations of the report?

(Jim) What has been consistent is that the highest impact threats are primarily the responsibility of the cloud user. To put a bit of nuance around this, as the definition of a "cloud user" can be tricky, I like to think of it in three categories: a commercial SaaS provider, an enterprise building its own "private SaaS" applications on top of IaaS, and a customer integrating a large number of SaaS applications. These cloud users carry the bulk of the technical security responsibilities, and so much of the real-world threats they grapple with are improper configuration, poor secure software development practices and insufficient identity and access management strategies.


(Greg) Are you seeing any trends that show there is increasing trust in cloud services, as well as the CSP working more effectively around Shared Responsibility Security Model?

(Jim) The market growth in cloud is a highly quantifiable indicator that cloud is becoming more trusted. "Cloud first" is a common policy we see for organizations evaluating new IT solutions, and it hasn't yet caused an explosion of cloud incidents, although I fear an inevitable increase in breaches as cloud becomes the default platform.

We have been at this for over 10 years at CSA and have seen a lot of maturation in cloud during that time.  One of the biggest contributions we have seen from the CSPs over that time is the amount of telemetry they make available to their customers.  The amount and diversity of logfile information customers have today does not compare to the relative “blackbox” that existed when we started this journey more than a decade ago.

Going back to the layered model of cloud yet again, CSPs understand that most of the interesting applications customers build are a mashup of technologies.  Sophisticated CSPs understand this shared responsibility for security and have doubled down on educational programs for customers.  Also, I have to say that one of the most rewarding aspects of being in the security industry is observing the collegial nature among competing CSPs to share threat intelligence and best practices to improve the security of the entire cloud ecosystem.

One of the initiatives CSA developed that helps promulgate shared responsibility is the CSA Security, Trust, Assurance & Risk (STAR) Registry.  We publish the answers CSPs provide to our assessment questionnaire so consumers can objectively evaluate a CSP’s best practices and understand the line of demarcation and where their responsibility begins.

(Greg) How does the perception of threats, risks and vulnerabilities help to guide an organization’s decision making & strategy?

(Jim) This is an example of why it is so important to have a comprehensive body of knowledge of cloud security best practices and to be able to relate it to Top Threats. A practitioner must be able to evaluate any risk management strategy for a given threat, e.g. risk avoidance, risk mitigation, risk acceptance, etc. If one understands the threats but not the best practices, one will almost always choose to avoid the risk, which may end up being a poor business decision. Although the security industry has gotten much better over the years, we still fight the reputation of being overly conservative and obstructing new business opportunities over concerns about security threats. While being paranoid has sometimes served us well, threat research should be one of a portfolio of tools that helps us embrace innovation.

(Greg) What are some of the security issues that are currently brewing/underrated that you think might become more relevant in the near future?

(Jim) I think it is important to understand that malicious attackers will take the easy route: if they can phish your cloud credentials, they won't need to leverage more sophisticated attacks. I don't spend a lot of time worrying about sophisticated CSP infrastructure attacks like the Rowhammer dynamic random access memory (DRAM) attacks, although a good security practitioner worries a little bit about everything. I try to think about fast-moving technology areas that are manipulated by the customer, because there are far more customers than CSPs. For example, I get concerned about the billions of IoT devices that get hooked into the cloud and what kind of security hardening they have. I also don't think we have done enough research into how blackhats can attack machine learning systems to evade next-generation security systems.

Our Israeli chapter recently published a fantastic research document on the 12 Most Critical Risks for Serverless Applications.  Containerization and Serverless computing are very exciting developments and ultimately will improve security as they reduce the amount of resource management considerations for the developer and shrink the attack surface.  However, these technologies may seem foreign to security practitioners used to a virtualized operating system and it is an open question how well our tools and legacy best practices address these areas.

The future will be a combination of old threats made new and exploiting fast moving new technology.  CSA will continue to call them as we see them and try to educate the industry before these threats are fully realized.

(Greg) Jim, it's been great hearing from you today about the new Top Threats to Cloud report. Hats off to the team and the contributors for this year's report. It's been great working with them all!

(Jim) Thanks Greg! To learn more about this, or to download a copy of the report, visit us at www.cloudsecurityalliance.com

Challenges & Best Practices in Securing Application Containers and Microservices

By Anil Karmel, Co-Chair, CSA Application Containers and Microservices (ACM) Working Group

Application Containers have a long and storied history, dating back to the early 1960s with virtualization on mainframes up to the 2000s with the release of Solaris and Linux Containers (LXC). The rise of Docker in the early 2010s elevated the significance of Application Containerization as an efficient and reliable means to develop and deploy applications. Coupled with the rise of Microservices as an architectural pattern to decompose applications into fundamental building blocks, these two approaches have become the de facto means for how modern applications are delivered.

As with any new standard, challenges arise in how to secure application containers and microservices. The National Institute of Standards and Technology's (NIST) Cloud Security Working Group launched a group focused on developing initial guidance around this practice area. The Cloud Security Alliance partnered with NIST to develop and mature this guidance, culminating in the release of two foundational artifacts.

CSA’s Application Container and Microservices Working Group continues the charge laid by NIST to develop additional guidance around best practices in securing Microservices.

We want to invite interested parties to contribute content towards this end.  Please visit https://cloudsecurityalliance.org/research/join-working-group/ to join this working group.

CCM v3.0.1 Update for AICPA, NIST and FedRAMP Mappings

Victor Chin and Lefteris Skoutaris, Research Analysts, CSA

The CSA Cloud Controls Matrix (CCM) Working Group is glad to announce a new update to CCM v3.0.1. This minor update incorporates mappings to AICPA TSC 2017, NIST 800-53 R4 Moderate and FedRAMP Moderate.

A total of four documents will be released. The updated CCM (CCM v3.0.1-03-08-2019) will replace the outdated CCM v3.0.1-12-11-2017. Additionally, three addendums will be released separately for AICPA TSC 2017, NIST 800-53 R4 Moderate and FedRAMP Moderate. The addendums contain gap analyses as well as control mappings. We hope that organizations will find these documents helpful in bridging compliance gaps between the CCM, AICPA TSC 2017, FedRAMP and NIST 800-53 R4 Moderate.

With the release of this update the CCM Working Group will be concluding all CCM v3 work and refocusing our efforts on CCM v4.

The upgrade from CCM v3 to version 4 has become imperative due to the evolution of cloud security standards, the need for more efficient auditability of the CCM controls, and the integration into the CCM of security requirements deriving from newly introduced cloud technologies.

In this context, a CCM task force has already been established to take on this challenge and drive CCM v4 development. The CCM v4 working group is composed of CSA community volunteers, including the industry's leading experts in cloud computing and security. This endeavor is supported and supervised by the CCM co-chairs and strategic advisors (https://cloudsecurityalliance.org/research/working-groups/cloud-controls-matrix), who will ensure that the CCM v4 vision, requirements and development plan are successfully implemented.

Some of the core objectives that drive CCM v4 development include:

  • Improving the auditability of the controls
  • Providing additional implementation and assessment guidance to organizations
  • Improving interoperability and compatibility with other standards
  • Ensuring coverage of requirements deriving from new cloud technologies (e.g., microservices, containers) and emerging technologies (e.g., IoT)

CCM v4 development is expected to conclude by the end of 2020. Should you be interested in learning more, or in participating and contributing to the development of CCM v4, please join the working group here: https://cloudsecurityalliance.org/research/join-working-group/.

Quantum Technology Captures Headlines in the Wall Street Journal

By the Quantum-Safe Security Working Group

Last month, we celebrated the 50th anniversary of the Apollo 11 moon landing. Apollo, which captured the imagination of the whole world, epitomizes the necessity for government involvement in long term, big science projects. What started as a fierce race between the USA and the USSR at the apex of the cold war ended up as a peaceful mission, “one giant leap for mankind”.

This "Leap" was just one of many steps that led to the US, Russia, Japan, Europe and Canada sharing the International Space Station for further space exploration. The parallel with the quantum computer, which recently made headlines in the Wall Street Journal, is striking: once again, a gauntlet has been thrown down. A foreign power, in this case China, has developed advanced quantum technologies surpassing its western counterparts, warranting a competitive response. Here again, US policymakers are rising to the challenge and calling for significant investment in quantum technologies (as presented in the WSJ article: In a White House Summit on Quantum Technology, Experts Map Next Steps).

Quantum technologies may not capture the imagination of as many star-gazing children as space does. However, show them the golden "chandelier" of a quantum computer, tell them that it operates at temperatures colder than space, explain that it can do more optimization calculations than all classical computers combined, and we might get some converts. We will need these engineers, developers and professions we have not yet thought of to realize the full and profound impact that quantum computers are likely to have. If history is any guide, the currently expected applications in pharmaceuticals, finance and transportation mentioned in the WSJ are only a small portion of the real potential. These fields alone will require broad education on quantum technologies, as called for by the bipartisan participants in the White House Summit on Quantum Technologies. In addition, the threat of the quantum computer to our existing cybersecurity infrastructure (again reported in the WSJ: The Day When Computers Can Break All Encryption Is Coming) is real today. Sensitive digital data can already be recorded now and decrypted once a powerful-enough quantum computer is available.
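The arithmetic behind that threat can be sketched in a few lines. The figures below are a deliberate simplification for illustration: Shor's algorithm breaks today's widely deployed public-key schemes outright, while Grover's search roughly halves the effective strength of symmetric ciphers and hashes.

```python
# Simplified estimates of the bits of effective security remaining against a
# large, fault-tolerant quantum computer. Illustrative, not a formal analysis.
QUANTUM_IMPACT = {
    "RSA-2048":   ("Shor", 0),     # broken outright
    "ECDSA-P256": ("Shor", 0),     # broken outright
    "AES-128":    ("Grover", 64),  # key search quadratically accelerated
    "AES-256":    ("Grover", 128),
    "SHA-256":    ("Grover", 128),
}

def survives_quantum(primitive, required_bits=112):
    """Does a primitive retain at least `required_bits` of security post-quantum?"""
    _, remaining_bits = QUANTUM_IMPACT[primitive]
    return remaining_bits >= required_bits
```

This is why data protected by RSA-2048 can be harvested today and decrypted later, while AES-256 is generally expected to remain safe without structural change.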

This brings us back to the cold war space race, now with many potential players shielded in the obscurity of cyberspace. Let's hope that, as with Apollo, the end result will be improvement for humankind. The international effort, led by the National Institute of Standards and Technology (NIST), to develop new quantum-resistant algorithms, as well as the development of quantum technologies such as quantum random number generation and quantum key distribution (QKD) to counter the threat of the quantum computer, are steps in the right direction.

CSA’s quantum-safe security working group has produced several research papers addressing many aspects of quantum-safe security that were discussed in both of these articles.  These documents can help enterprises to better understand the quantum threat and steps they can start taking to address this coming threat.

The CSA QSS working group is an open forum for all interested in the development of these new technologies. Join the working group or download their latest research here.

Use Cases for Blockchain Beyond Cryptocurrency

CSA’s newest white paper, Documentation of Relevant Distributed Ledger Technology and Blockchain Use Cases v2 is a continuation of the efforts made in v1. The purpose of this publication is to describe relevant use cases beyond cryptocurrency for the application of these technologies.

In the process of outlining several use cases across discrete economic application sectors, we covered multiple industry verticals, as well as some use cases which cover multiple verticals simultaneously. For this document, we considered a use case as relevant when it provides the potential for any of the following:

  • disruption of existing business models or processes;
  • strong benefits for an organization, such as financial, improvement in speed of transactions, auditability, etc.;
  • large and widespread application; and
  • concepts that can be applied in real-world scenarios.

From concept to production environment, we also identified six separate stages of maturity to better assess how much work has been done within each use case's scope and how much more remains to be done.

  1. Concept
  2. Proof of concept
  3. Prototype
  4. Pilot
  5. Pilot production
  6. Production

Some of the industry verticals which we identified are finance, supply chain, media/entertainment, and insurance, all of which are ripe for disruption from a technological point of view.

For each use case, the document also identifies the expected benefits of adopting DLT/blockchain, the type of DLT, the use of private vs. public blockchains, the infrastructure provider (CSP) and the type of services used (IaaS, PaaS, SaaS). Other key features of the use case implementations, such as smart contracts and distributed databases, have also been outlined.
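The tamper evidence underpinning all of these use cases comes from one simple construction: each block commits to its own payload and to the hash of its predecessor, so altering any historical record invalidates every later link. A minimal sketch, deliberately omitting consensus, signatures and networking:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash commits to its payload and its predecessor."""
    block = {"data": data, "prev_hash": prev_hash}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def verify_chain(chain):
    """A chain is valid if every block's hash is intact and links to its predecessor."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True
```

Tampering with any block's data causes `verify_chain` to fail for that block and, through the broken links, for all of its successors.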

The working group hopes this document will be a valuable reference to all key stakeholders in the blockchain/DLT ecosystem, as well as contribute to its maturity.

Documentation of Relevant Distributed Ledger Technology and Blockchain Use Cases v2

Organizations Must Realign to Face New Cloud Realities

Jim Reavis, Co-founder and Chief Executive Officer, CSA

While cloud adoption is moving fast, many enterprises still underestimate the scale and complexity of cloud threats

Technology advancements often present benefits to humanity while simultaneously opening up new fronts in the on-going and increasingly complex cyber security battle. We are now at that critical juncture when it comes to the cloud: While the compute model has inherent security advantages when properly deployed, the reality is that any fast-growth platform is bound to see a proportionate increase in incidents and exposure.

The Cloud Security Alliance (CSA) is a global not-for-profit organization that was launched 10 years ago as a broad coalition to create a trusted cloud ecosystem. A decade later, cloud adoption is pervasive to the point of becoming the default IT system worldwide. As the ecosystem has evolved, so have the complexity and scale of cyber security attacks. That shift challenges the status quo, mounting pressure on organizations to understand essential technology trends, the changing threat landscape and our shared responsibility to rapidly address the resultant issues.


There are real concerns that organizations have not adequately realigned for the cloud compute age and in some cases, are failing to reinvent their cyber defense strategies. Symantec’s inaugural Cloud Security Threat Report (CSTR) is a landmark report that shines a light on the current challenges and provides a useful roadmap that can help organizations improve and mature their cloud security strategy. The report articulates the most pressing cloud security issues of today, clarifies the areas that should be prioritized to improve an enterprise security posture, and offers a reality check on the state of cloud deployment.

Cloud in the Fast Lane

What the CSTR reveals and the CSA can confirm is that cloud adoption is moving too fast for enterprises, which are struggling with increasing complexity and loss of control. According to the Symantec CSTR, over half (54%) of respondents agree that their organization’s cloud security maturity is not keeping pace with the rapid expansion of new cloud apps.

The report also revealed that enterprises underestimate the scale and complexity of cloud threats. For example, the CSTR found that the most commonly investigated incidents included garden-variety data breaches, DDoS attacks and cloud malware injections. However, Symantec internal data shows that unauthorized access accounts for the bulk of cloud security incidents (64%), covering both simple exploits as well as sophisticated threats such as lateral movement and cross-cloud attacks. Companies are beginning to recognize their vulnerabilities: nearly two thirds (65%) of CSTR respondents believe the increasing complexity of their organization's cloud infrastructure is opening them up to entirely new and dangerous threat vectors.

For example, identity-related attacks have escalated in the cloud, making proper identity and access management the fundamental backbone of security across domains in a highly virtualized technology stack. The speed with which cloud can be “spun up” and the often-decentralized manner in which it is deployed magnifies human errors and creates vulnerabilities that attackers can exploit. A lack of visibility into detailed cloud usage hampers optimal policies and controls.
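Multi-factor authentication is one of the most effective mitigations for identity-related attacks, and the machinery involved is modest. As an illustration, here is a minimal sketch of an RFC 6238 time-based one-time password (TOTP) using only the Python standard library; a real deployment would rely on a vetted library with base32 key provisioning and constant-time comparison.

```python
import hashlib
import hmac
import struct
import time

def hotp(key, counter, digits=6):
    """RFC 4226 HOTP: HMAC-SHA1 over the big-endian counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks the truncation window
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def totp(key, digits=6, step=30, at=None):
    """RFC 6238 TOTP: HOTP keyed on the current 30-second time step."""
    t = int(time.time()) if at is None else at
    return hotp(key, t // step, digits)
```

With the RFC 6238 test key `b"12345678901234567890"`, the 8-digit code at Unix time 59 is 94287082, matching the specification's published test vectors.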


As CSA delved into this report, we found strong alignment with the best practices research and education we advocate. As the CSTR reveals, a Zero Trust strategy, building out a software-defined perimeter, and adopting serverless and containerization technologies are critical building blocks for a mature cloud security posture.

The CSTR also advises organizations to develop robust governance strategies supported by a Cloud Center of Excellence (CCoE) to rally stakeholder buy-in and get everyone working from the same enterprise roadmap. Establishing security as a continuous process rather than front-loading efforts at the onset of procurement and deployment is a necessity given the frenetic pace of change.

As the CSTR suggests and we can confirm, security architectures must also be designed with an eye towards scalability; automation and cloud-native approaches like DevSecOps are essential for minimizing errors, optimizing limited manpower and facilitating new controls.

While there is a clear strategy for securing cloud operations, too few companies have embarked on the changes. Symantec internal data reports that 85% are not using best security practices as outlined by the Center for Internet Security (CIS). As a result, nearly three-quarters of respondents to the CSTR said they experienced a security incident in cloud-based infrastructure due to this immaturity.


The good news is that users of cloud have a full portfolio of solutions at their disposal to address cloud security threats, including multi-factor authentication, data loss prevention, encryption, and identity and authentication tools, along with new processes and an educated workforce. The bad news is that many users of cloud are not aware of the full magnitude of their cloud adoption or the demarcation of the shared responsibility model, and are inclined to rely on outdated security best practices. The CSTR is a pivotal first step in increasing that awareness.

Cloud is and will continue to be the epicenter of IT, and increasingly the foundation for cyber security. Understanding how threat vectors are shifting in cloud is fundamental to overhauling and modernizing an enterprise security program and strategy. CSA recommends the Symantec CSTR report be read widely and we look forward to future updates to its findings.

Download 2019 Cloud Security Threat Report >>

Interested in learning more? You can watch our CloudBytes webinar with Jim Reavis, Co-Founder & CEO at Cloud Security Alliance, and Kevin Haley, Director Security Technology and Response at Symantec as they discuss the key findings from the 2019 Cloud Security Threat Report. Watch it here >>

Signal vs. Noise: Banker Cloud Stories by Craig Balding

A good question to ask any professional in any line of business is: which "industry events" do you attend and why? Over a few decades of attending a wide variety of events, and skipping many more, my primary driver is "signal to noise" ratio. In other words, I look for events attended by people that are shaping our industry, specifically deep thinkers, leading experimenters, policy makers, risk takers and of course, "in the field" practitioners. Skip the "talking shops" and seek out people "walking the talk". This is the reason I look forward to attending the regional meet-ups of the CSA Financial Services Stakeholder Platform. The FSSP is a members-only group of banks and financial services organisations focused on cloud security.
In June, 23 members of our broader group met in person in beautiful Leuven, Belgium, where we were generously hosted by Roel at KBC headquarters. We spent the day sharing experiences and discussing emerging practices under the Chatham House Rule.

What topics did we cover?
The day comprised valuable presentations, book-ended with networking sessions.  Each presentation served as a launchpad for in-depth question and answer sessions – a natural consequence of peers coming together.  For every 10 minutes of presentation, there was an equal amount of discussion: digging into the challenges, the alternatives considered, the nuances of organisational fit, and the methods and measures that matter.

  • A financial institution's cloud native journey
  • Compliance monitoring and automation
  • Key management & Protection: evaluation of hardware, tokens, TEEs and MPC, KUL
  • Continuous compliance (on AWS resources)
  • Modern Cloud Risk Assessment

Each cloud journey is different, and no single person or entity in the group can lay claim to all the answers.  Certainly, some are further along in their journey than others, but each is delivering solutions in banks with a different profile, history and organisational culture.  The trajectory is the same though: step-by-step getting to "cloud first", striving for "control parity" whilst operating within the bank's risk appetite.

What’s next?

Our members' appetite for cloud is never satisfied!  Not only do they bring their "A game" to our events, but they also challenge us at CSA to facilitate and drive working groups (formal and informal) to work on things that matter.  This already happened a while back with the formation of the dedicated CSA Key Management Working Group.  So as our session concluded, members shared what was on their minds, and as a group we coalesced on the following topics for future focus:

  • Container security: explore potential synergies with the Container & microservices WG – the interest is not in container "use cases" but in addressing the underlying security aspects.
  • Understanding, quantifying, assessing, simplifying and benchmarking cloud complexity (aka "complexity risk") as cloud adoption scenarios become more and more complex.
  • With an increased focus on innovation and the transformation from waterfall to agile operations, FSSP members would like to share cloud change stories in the financial industry more actively.

If you have responsibility at a bank or financial firm for cloud security policy, architecture, engineering, risk management and/or controls assurance, you really are missing out if you miss these meet-ups.  Skip the low signal-to-noise events and join us at a regional FSSP meet-up near you.  Get in touch to find out more.

CSA Financial Services Stakeholders Platform: https://cloudsecurityalliance.org/research/working-groups/financial-services-stakeholder-platform/

The State of SDP Survey: A Summary

The CSA recently completed its first annual “State of Software-Defined Perimeter” Survey, gauging market awareness and adoption of this modern security architecture – summarized in this infographic.

The survey indicates it is still early for SDP market adoption and awareness, with only 24% of respondents claiming that they are very familiar or have fairly in-depth knowledge of SDP. The majority of respondents are less knowledgeable, with 29% being “somewhat” conversant in SDP, 35% having heard of it, and 11% knowing nothing about it.

A majority of organizations recognize the need to change their approach to a Zero Trust Architecture – 70% of respondents noted that they have a high or medium need to change their approach to user access control by better securing user authentication and authorization.

Survey respondents noted that the largest barrier to SDP adoption is existing in-place security technologies, closely followed by organizational lack of awareness and budgetary constraints. This is consistent with SDP’s early adopter market status, and its unique role as an integrated security architecture that enhances and, in some cases, eliminates the use of traditional security tools and technologies. Lack of awareness and perceived budgetary constraints point to a need for the CSA to educate the market on SDP’s security benefits and provide additional research to organizations about the cost benefit of SDP’s ability to provide preventive security compared with cyber breach detection after the fact.           

Respondents clearly understand that SDP functionally overlaps with VPN and NAC solutions, and also understand that SDP will benefit in-place systems such as IAM and SIEM. Organizations also see the benefits that SDP provides, with a majority indicating they could realize an improved security posture (63%) and a reduced attack surface (52%). A strong minority also see the benefits of reduced costs (48%) and improved compliance (44%).

In terms of adoption, a majority of organizations see themselves using SDP as a VPN replacement (64%) or a NAC alternative (55%) – both of which are common first projects for SDP.      

Based on this initial survey, we’re pleased to see this level of awareness, and optimistic that the concept of Zero Trust can be achieved by implementing SDP. Clearly, organizations are just beginning the transition from traditional security technologies to SDP and are looking for guidance. The CSA is addressing this demand with SDP resources and information – in fact, a majority of survey respondents requested additional technical documents, marketing resources, and webcasts. The SDP Working Group has recently published the SDP Architecture Guide research document, and other resources such as version 2.0 of the SDP specification, and additional guidance noted in the architecture document will follow.      

SDP is clearly a very important security development, providing an updated approach to current measures that fail to address the inherent vulnerabilities in the network and application connectivity protocols of the past. If you’d like to download the above infographic as a pdf, you can find it here: https://cloudsecurityalliance.org/artifacts/sdp-awareness-and-adoption-infographic

We’d like to thank the following individuals from the SDP leadership team for their work in creating this report and accompanying blog post:

  • Juanita Koilpillai
  • Nya Murray
  • Jason Garbis
  • Junaid Islam

How to Improve the Accuracy and Completeness of Cloud Computing Risk Assessments?

By Jim de Haas, cloud security expert, ABN AMRO Bank

This whitepaper aims to draw upon the security challenges in cloud computing environments and suggests a logical approach to dealing with the security aspects in a holistic way by introducing a Cloud Octagon model. This model makes it easier for organizations to identify, represent and assess risks in the context of their cloud implementation across multiple actors (legal, information risk management, operational risk management, compliance, architecture, procurement, privacy office, development teams and security).

Why the Cloud Octagon Model?

Developed to support your organization’s risk assessment methodology, the Cloud Octagon model provides practical guidance and structure to all involved risk parties in order to keep up with rapid changes in privacy & data protection laws, regulations, changes in technology and its security implications. The goals of this model are to reduce risks associated with cloud computing, improve the effectiveness of the cloud risk team, improve manageability of the solution and lastly to improve security.

Positioning the Octagon Model in Risk Assessments

What if an organization already has procedures and tools for cloud risk assessments or its regulator demands that the risk assessment methodology is supported by international standards? The octagon model can be used to supplement an organization’s existing risk assessment methodology. By applying it, risk assessments will be both more complete and accurate.

Security controls

The whitepaper describes the 60 security controls included in the model, spread across the octagon aspects. No matter how complex or large your cloud project is, working through these 60 controls will result in a thorough risk assessment.

Game on!

In addition to structured education and certification programs, learning about cloud security while playing a game is a great way to get the message across. One initiative to raise awareness of cloud computing security among second-line risk experts is to develop and produce a board game version of the octagon model. The game was developed with help from the gamemaster. By playing, participants learn which topics are relevant to discuss during a risk workshop.

Interested in learning more? You can download the Cloud Octagon Model for free here: https://cloudsecurityalliance.org/artifacts/cloud-octagon-model/

Will Hybrid Cryptography Protect Us from the Quantum Threat?

By Roberta Faux, Director of Advanced Cryptography, BlackHorse Solutions

The CSA Quantum-Safe Security Working Group has produced a new primer on hybrid cryptography. This paper, "Mitigating the Quantum Threat with Hybrid Cryptography," explains the pros and cons of hybrid cryptography and is aimed at helping non-technical corporate executives understand how to potentially address the threat of quantum computers to an organization's infrastructure. Topics covered include:

  • Types of hybrids
  • Cost of hybrids
  • Who needs a hybrid
  • Caution about hybrids

The quantum threat

Quantum computers are already here. Well, at least tiny ones are. Scientists hope to solve the scaling issues needed to build large-scale quantum computers within perhaps the next 10 years. There are many exciting applications for quantum computing, but there is also one glaring threat: Large-scale quantum computers will render vulnerable nearly all of today's cryptography.

Standards organizations prepare

The good news is that there already exist cryptographic algorithms believed to be unbreakable—even against large-scale quantum computers. These cryptographic algorithms are called “quantum resistant.” Standards organizations worldwide, including ETSI, IETF, NIST, ISO, and X9, have been scrambling to put guidance into place, but the task is daunting.

Quantum-resistant cryptography is based on complex underlying mathematical problems, such as the following:

  • Shortest-Vector Problem in a lattice
  • Syndrome Decoding Problem
  • Solving systems of multivariate equations
  • Constructing isogenies between supersingular elliptic curves

For such problems, there are no known attacks–even with a future large-scale quantum computer. There are many quantum-resistant cryptographic algorithms, each with numerous trade-offs (e.g., computation time, key size, security). No single algorithm satisfies all possible requirements; many factors need to be considered in order to determine the ideal match for a given environment.

Cryptographic migration

There is a growing concern about how and when to migrate from the current ubiquitously used “classical cryptography” of yesterday and today to the newer quantum-resistant cryptography of today and tomorrow. Historically, cryptographic migrations require at least a decade for large enterprises. Moreover, as quantum-resistant algorithms tend to have significantly larger key sizes, migration to quantum-resistant systems will likely involve updating both software and protocols. Consequently, live migrations will prove a huge challenge.

Cryptographic hybrids

A cryptographic hybrid scheme uses two cryptographic schemes to accomplish the same function. For instance, a hybrid system might digitally sign a message with one cryptographic scheme and then re-sign the same message with a second scheme. The benefit is that the message will remain secure even if one of the two cryptographic schemes becomes compromised. Hence, many are turning to hybrid solutions. As discussed in the paper, there are several flavors of hybrids:

  • A classical scheme and a quantum-resistant scheme
  • Two quantum-resistant schemes
  • A classical scheme with quantum key distribution
  • A classical asymmetric scheme along with a symmetric scheme

However, adopting a quantum-resistant solution prematurely may be even riskier.
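The first flavor above, pairing a classical scheme with a quantum-resistant one, can be illustrated with a short sketch. This is illustrative only: the secrets and context label below are hypothetical placeholders rather than outputs of real key exchanges, and real hybrid designs use standardized key-derivation functions bound to full protocol transcripts. The point is simply that an attacker must break both component schemes to recover the combined key.

```python
import hashlib
import hmac

def combine_shared_secrets(classical_secret: bytes, pq_secret: bytes) -> bytes:
    """Derive one session key from two independently established secrets
    (e.g. one from a classical ECDH exchange, one from a quantum-resistant
    KEM). The derived key stays secret unless BOTH inputs are recovered."""
    # HMAC under a fixed context label acts as a simple key-derivation
    # step, binding every bit of both secrets into the output.
    return hmac.new(b"hybrid-kdf-v1", classical_secret + pq_secret,
                    hashlib.sha256).digest()

# Hypothetical placeholder secrets standing in for real key-exchange outputs.
classical = b"\x01" * 32
post_quantum = b"\x02" * 32
session_key = combine_shared_secrets(classical, post_quantum)
```

If either input scheme is later broken, the session key remains as hard to recover as the surviving scheme allows, which is the fail-safe property hybrids are designed for.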

Hybrid drawbacks

Hybrids come at the cost of increased bandwidth, code-management burden, and interoperability challenges. Cryptographic implementations, in general, can be quite tricky, and a flawed hybrid implementation could potentially be even more dangerous than a quantum computer, as security breaches are more commonly the result of a flawed implementation than of an inherently weak cryptosystem. Even a small mistake in configuration or coding may diminish some or all of the cryptographic security. Very careful attention must be paid to any hybrid cryptographic implementation to ensure that it does not make us less secure.

Do you need a hybrid?

Some business models will need to begin migration before standards are in place. So, who needs to consider a hybrid as a mitigation to the quantum threat? Two types of organizations are at high risk, namely, those who:

  • need to keep secrets for a very long time, and/or
  • lack the ability to change cryptographic infrastructure quickly.

An organization that has sensitive data should be concerned if an adversary could potentially collect that data now in encrypted form and decrypt it later, whenever quantum computing capabilities become available. This is a threat facing governments, law firms, pharmaceutical companies, and many others. Also, organizations that rely on firmware or hardware will need significant development time to update and replace those dependencies. These would include industries working in aerospace, automotive connectivity, data processing, and telecommunications, as well as organizations that use hardware security modules.

Conclusion

The migration to quantum resistance is going to be a challenge. It is vital that corporate leaders plan for this now. Organizations need to start asking the following questions:

  • How is your organization dependent on cryptography?
  • How long does your data need to be secure?
  • How long will it take you to migrate?
  • Have you ensured you fully understand the ramifications of migration?

Well-informed planning will be key for a smooth transition to quantum-resistant security. Organizations need to start to conduct experiments now to determine unforeseen impacts. Importantly, organizations are advised to seek expert advice so that their migration doesn’t introduce new vulnerabilities.

As you prepare your organization to secure against future threats from quantum computers, make sure to do the following:

  • Identify reliance on cryptography
  • Determine risks
  • Understand options
  • Perform a proof of concept
  • Make a plan

"Mitigating the Quantum Threat with Hybrid Cryptography" offers more insights into how hybrids will help address the threat of quantum computers. Download the full paper today.

CSA Issues Top 20 Critical Controls for Cloud Enterprise Resource Planning Customers

By Victor Chin, Research Analyst, Cloud Security Alliance

Cloud technologies are being increasingly adopted by organizations, regardless of their size, location or industry. And it's no different when it comes to business-critical applications, typically known as enterprise resource planning (ERP) applications. Most organizations are migrating business-critical ERP applications to hybrid cloud architectures. To assist in this process, CSA has released the Top 20 Critical Controls for Cloud Enterprise Resource Planning (ERP) Customers, a report that assesses and prioritizes the most critical controls organizations need to consider when transitioning their business-critical applications to cloud environments.

This document provides 20 controls, grouped into domains for ease of consumption, that align with the existing CSA Cloud Control Matrix (CCM) v3 structure of controls and domains.

The document focuses on the following domains:

  • Cloud ERP Users: Thousands of different users with very different access requirements and authorizations extensively use cloud enterprise resource planning applications. This domain provides controls aimed at protecting users and their access to cloud ERP applications.
  • Cloud ERP Application: A defining attribute of cloud ERP applications is the complexity of the technology and functionality provided to users. This domain provides controls aimed at protecting the application itself.
  • Integrations: Cloud ERP applications are not isolated systems but instead tend to be extensively integrated and connected to other applications and data sources. This domain focuses on securing the integrations of cloud enterprise resource planning applications.
  • Cloud ERP Data: Cloud enterprise resource planning applications store highly sensitive and regulated data. This domain focuses on critical controls to protect access to this data.
  • Business Processes: Cloud enterprise resource planning applications support some of the most complex and critical business processes for organizations. This domain provides controls that mitigate risks to these processes.

While there are various ERP cloud service models such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)—each with different security/service-level agreements and lines of responsibility—organizations are required to protect their own data, users and intellectual property (IP). As such, organizations that are either considering an ERP cloud migration or already have workloads in the cloud can use these control guidelines to build or bolster a strong foundational ERP security program.

ERP applications are complex systems in their own right and, consequently, are challenging to secure. In the cloud, their complexity increases due to factors such as shared security models, varying cloud service models, and the intersection between IT and business controls. Nevertheless, due to the benefits of cloud computing, enterprise resource planning applications are increasingly migrating to the cloud.

Organizations should leverage this document as a guide to drive priorities around the most important controls that should be implemented while adopting Cloud ERP Applications. The CSA ERP Security Working Group will continue to keep this document updated and relevant. In the meantime, the group hopes readers find this document useful when migrating or securing enterprise resource planning applications in the cloud.

Download this free resource now.

The 12 Most Critical Risks for Serverless Applications

By Sean Heide, CSA Research Analyst and Ory Segal, Israel Chapter Board Member

When planning a serverless architecture for your company, there are a few key risks to take into account to ensure that proper security controls are in place and that your program can sustain the longevity of your applications. Though this list highlights the 12 risks deemed most common, other potential risks should be treated with the same rigor.

Serverless architectures (also referred to as "FaaS," or Function as a Service) enable organizations to build and deploy software and services without maintaining or provisioning any physical or virtual servers. Applications made using serverless architectures are suitable for a wide range of services and can scale elastically as cloud workloads grow. At the same time, this wide array of off-site application structures opens up new attack surfaces, with potential vulnerabilities stemming from the use of multiple APIs and HTTP endpoints.

From a software development perspective, organizations adopting serverless architectures can focus instead on core product functionality, rather than the underlying operating system, application server or software runtime environment. By developing applications using serverless architectures, users relieve themselves from the daunting task of continually applying security patches for the underlying operating system and application servers. Instead, these tasks are now the responsibility of the serverless architecture provider. In serverless architectures, the serverless provider is responsible for securing the data center, network, servers, operating systems, and their configurations. However, application logic, code, data, and application-layer configurations still need to be robust—and resilient to attacks. These are the responsibility of application owners.

While the comfort and elegance of serverless architectures is appealing, they are not without their drawbacks. In fact, serverless architectures introduce a new set of issues that must be considered when securing such applications, including an increased and more complex attack surface, inadequate security testing, and the inability to apply traditional security protections such as firewalls.

Serverless application risks by the numbers

Today, many organizations are exploring serverless architectures, or just making their first steps in the serverless world. In order to help them become successful in building robust, secure and reliable applications, the Cloud Security Alliance's Israel Chapter has drafted "The 12 Most Critical Risks for Serverless Applications 2019." This new paper enumerates what top industry practitioners and security researchers with vast experience in application security, cloud and serverless architectures believe to be the current top risks specific to serverless architectures.

Organized in order of risk factor, with SAS-1 being the most critical, the list breaks down as follows:

  • SAS-1: Function Event Data Injection
  • SAS-2: Broken Authentication
  • SAS-3: Insecure Serverless Deployment Configuration
  • SAS-4: Over-Privileged Function Permissions & Roles
  • SAS-5: Inadequate Function Monitoring and Logging
  • SAS-6: Insecure Third-Party Dependencies
  • SAS-7: Insecure Application Secrets Storage
  • SAS-8: Denial of Service & Financial Resource Exhaustion
  • SAS-9: Serverless Business Logic Manipulation
  • SAS-10: Improper Exception Handling and Verbose Error Messages
  • SAS-11: Obsolete Functions, Cloud Resources and Event Triggers
  • SAS-12: Cross-Execution Data Persistency
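As a concrete illustration of the top-ranked risk above (SAS-1, function event data injection), the sketch below shows a provider-agnostic function handler that treats its event payload as untrusted and fails closed on unexpected input. The event shape, field name, and status codes are hypothetical; real triggers (HTTP, queues, storage events) each carry their own formats.

```python
import re

# Allow-list for the expected field: short identifiers only, nothing else.
ALLOWED_NAME = re.compile(r"[A-Za-z0-9_-]{1,64}")

def handler(event: dict) -> dict:
    """Hypothetical serverless entry point. Every event field is treated
    as attacker-controlled, whatever the trigger source."""
    name = event.get("name", "")
    if not isinstance(name, str) or not ALLOWED_NAME.fullmatch(name):
        # Reject rather than sanitize: fail closed on unexpected input.
        return {"status": 400, "error": "invalid 'name' parameter"}
    return {"status": 200, "message": f"hello, {name}"}
```

Strict allow-listing of event data is one mitigation; the paper itself discusses the full range of controls for each risk.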

In developing this security awareness and education guide, researchers pulled information from such sources as freely available serverless projects on GitHub and other open source repositories; automated source code scanning of serverless projects using proprietary algorithms; and data provided by our partners, individual contributors and industry practitioners.

While the document provides information about what are believed to be the most prominent security risks for serverless architectures, it is by no means an exhaustive list. Interested parties should also check back often as this paper will be updated and enhanced based on community input along with research and analysis of the most common serverless architecture risks.

Thanks must also be given to the following contributors, who were involved in the development of this document: Ory Segal, Shaked Zin, Avi Shulman, Alex Casalboni, Andreas N, Ben Kehoe, Benny Bauer, Dan Cornell, David Melamed, Erik Erikson, Izak Mutlu, Jabez Abraham, Mike Davies, Nir Mashkowski, Ohad Bobrov, Orr Weinstein, Peter Sbarski, James Robinson, Marina Segal, Moshe Ferber, Mike McDonald, Jared Short, Jeremy Daly, and Yan Cui.

CVE and Cloud Services, Part 2: Impacts on Cloud Vulnerability and Risk Management

By Victor Chin, Research Analyst, Cloud Security Alliance, and Kurt Seifried, Director of IT, Cloud Security Alliance

This is the second post in a series, where we’ll discuss cloud service vulnerability and risk management trends in relation to the Common Vulnerability and Exposures (CVE) system. In the first blog post, we wrote about the Inclusion Rule 3 (INC3) and how it affects the counting of cloud service vulnerabilities. Here, we will delve deeper into how the exclusion of cloud service vulnerabilities impacts enterprise vulnerability and risk management.

Traditional vulnerability and risk management

CVE identifiers are the linchpin of traditional vulnerability management processes. Besides being an identifier for vulnerabilities, the CVE system allows different services and business processes to interoperate, making enterprise IT environments more secure. For example, a network vulnerability scanner can identify whether a vulnerability (e.g. CVE-2018-1234) is present in a deployed system by querying said system.

The queries can be conducted in many ways, such as via a banner grab, querying the system for what software is installed, or even via proof of concept exploits that have been de-weaponized. Such queries confirm the existence of the vulnerability, after which risk management and vulnerability remediation can take place.
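To make the version-matching step of that workflow concrete, here is a minimal sketch. The CVE record and product name are hypothetical stand-ins for a real vulnerability feed, and actual scanners also rely on de-weaponized proof-of-concept checks rather than version strings alone.

```python
# Hypothetical feed entry mapping a CVE ID to the versions known affected.
VULNERABLE = {
    "CVE-2018-1234": {"product": "examplesrv",
                      "affected": {"2.4.0", "2.4.1", "2.4.2"}},
}

def is_vulnerable(cve_id: str, product: str, banner_version: str) -> bool:
    """Flag a deployed system whose banner-reported version appears in
    the affected list for the given CVE."""
    record = VULNERABLE.get(cve_id)
    return bool(record
                and record["product"] == product
                and banner_version in record["affected"])
```

This is exactly the interoperability the CVE ID enables: the scanner, the feed, and the remediation ticket can all refer to the same identifier.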

Once the existence of the vulnerability is confirmed, enterprises must conduct risk management activities. Enterprises might first prioritize vulnerability remediation according to the criticality of the vulnerabilities. The Common Vulnerability Scoring System (CVSS) is one common basis for this triage: it gives each vulnerability a score according to how critical it is, and from there enterprises can prioritize and remediate the more critical ones. Like other vulnerability information, CVSS scores are normally associated with CVE IDs.
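A CVSS-driven triage pass of this kind can be sketched in a few lines. The findings below are hypothetical scanner output, and the 7.0 threshold is an illustrative policy choice rather than a standard cut-off.

```python
# Hypothetical scanner findings: CVE ID, CVSS base score, affected host.
findings = [
    {"cve": "CVE-2018-1234", "cvss": 9.8, "host": "web-01"},
    {"cve": "CVE-2018-5678", "cvss": 4.3, "host": "db-02"},
    {"cve": "CVE-2018-9012", "cvss": 7.5, "host": "web-01"},
]

def triage(findings, threshold=7.0):
    """Rank findings by CVSS base score (highest first) and return those
    at or above the organization's criticality threshold."""
    ranked = sorted(findings, key=lambda f: f["cvss"], reverse=True)
    return [f for f in ranked if f["cvss"] >= threshold]
```

In practice the threshold, and whether a finding is remediated at all, is set by the organization's risk appetite, as discussed next.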

Next, mitigating actions can be taken to remediate the vulnerabilities. This could mean implementing patches or workarounds, or applying security controls. How the organization chooses to address the vulnerability is an exercise in risk management: it has to carefully balance its resources against its risk appetite. Generally, organizations choose risk avoidance/rejection, risk acceptance, or risk mitigation.

Risk avoidance/rejection is fairly straightforward: based on the information available, the organization determines that the risk the vulnerability poses is above its risk threshold, chooses not to mitigate it, and stops using the vulnerable software.

Risk acceptance refers to when the organization, based on information available, determines that the risk posed is below their risk threshold and decides to accept the risk.

Lastly, in risk mitigation, the organization chooses to take mitigating actions and implement security controls that will reduce the risk. In traditional environments, such mitigating actions are possible because the organization generally owns and controls the infrastructure that provisions the IT service. For example, to mitigate a vulnerability, organizations are able to implement firewalls, intrusion detection systems, conduct system hardening activities, deactivate a service, change the configuration of a service, and many other options.

Thus, in traditional IT environments, organizations are able to take many mitigating actions because they own and control the stack. Furthermore, organizations have access to vulnerability information with which to make informed risk management decisions.

Cloud service customer challenges

Compared to traditional IT environments, the situation is markedly different for external cloud environments. The differences all stem from organizations not owning and controlling the infrastructure that provisions the cloud service, as well as not having access to vulnerability data of cloud native services.

Enterprise users don't have ready access to cloud native vulnerability data because CVE IDs are not generally assigned to cloud native vulnerabilities, so there is no official way to associate data with them. Consequently, it's difficult for enterprises to make an informed, risk-based decision regarding a vulnerable cloud service: for example, whether to reject the risk and stop using the service, or accept the risk and continue using it.

Furthermore, even if CVE IDs are assigned to cloud native vulnerabilities, the differences between traditional and cloud environments are so vast that vulnerability data which is normally associated to a CVE in a traditional environment is inadequate when dealing with cloud service vulnerabilities. For example, in a traditional IT environment, CVEs are linked to the version of a software. An enterprise customer can verify that a vulnerable version of a software is running by checking the software version. In cloud services, the versioning of the software (if there is one!) is usually only known to the cloud service provider and is not made public. Additionally, the enterprise user is unable to apply security controls or other mitigations to address the risk of a vulnerability.

This is not to say that CVEs and the associated vulnerability data are useless for cloud services. Instead, we should consider including vulnerability data that is useful in the context of a cloud service. In particular, cloud service vulnerability data should help enterprise cloud customers make the important risk-based decision of when to continue or stop using the service.

As things stand, just as enterprise customers must trust cloud service providers with their sensitive data, they must also trust, blindly, that the cloud service providers are properly remediating the vulnerabilities in their environments in a timely manner.

The CVE gap

With the increasing global adoption and proliferation of cloud services, the exclusion of service vulnerabilities from the CVE system and the impacts of said exclusion have left a growing gap that the cloud services industry should address. This gap not only impacts enterprise vulnerability and risk management but also other key stakeholders in the cloud services industry.

In the next post, we’ll explore how other key stakeholders are affected by the shortcomings of cloud service vulnerability management.

Please let us know what you think about INC3's impact on cloud service vulnerability and risk management in the comment section below, or email us.

CVE and Cloud Services, Part 1: The Exclusion of Cloud Service Vulnerabilities

By Kurt Seifried, Director of IT, Cloud Security Alliance and Victor Chin, Research Analyst, Cloud Security Alliance

The vulnerability management process has traditionally been supported by a finely balanced ecosystem, which includes such stakeholders as security researchers, enterprises, and vendors. At the crux of this ecosystem is the Common Vulnerabilities and Exposures (CVE) identification system. In order to be assigned an ID, vulnerabilities have to fulfill certain criteria. In recent times, these criteria have become problematic as they exclude vulnerabilities in certain categories of IT services that are becoming more and more common.

This is the first in a series of blog posts that will explore the challenges and opportunities in enterprise vulnerability management in relation to the increasing adoption of cloud services.

Common Vulnerabilities and Exposures

CVE® is a list of entries, each containing an identification number, a description, and at least one public reference for publicly known cybersecurity vulnerabilities[1].

CVEs are identifiers for security vulnerabilities that are—or are expected to become—public. Traditionally, they are assigned by one of two entities: the CNA (CVE Numbering Authority) that exists specifically for that piece of software (e.g. Microsoft, which covers Microsoft software) or a CNA that has been given coverage of said software (e.g. The Debian Project, Distributed Weakness Filing Project, and Red Hat all cover Open Source software to varying degrees). These CVEs are then published in the MITRE CVE database. Finally, they are consumed and republished by other organizations, often with additional information, such as workarounds or fixes, which makes tracking and remediating those vulnerabilities possible.

Customers of companies or organizations that are CNAs for their own products can be reasonably assured that CVE IDs are assigned to historical, current and future vulnerabilities found in those products.
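Because so many tools and feeds key off the identifier, validating its format is a common first step when ingesting vulnerability data. The sketch below checks the CVE ID syntax (`CVE-<year>-<sequence>`, where the sequence is at least four digits); the function name is illustrative, not part of any official tooling:

```python
import re

# CVE IDs follow the pattern CVE-<year>-<sequence>; the sequence number
# has at least four digits (the format was widened beyond four in 2014).
CVE_ID_PATTERN = re.compile(r"^CVE-\d{4}-\d{4,}$")

def is_valid_cve_id(candidate: str) -> bool:
    """Return True if the string is a syntactically valid CVE ID."""
    return bool(CVE_ID_PATTERN.match(candidate))

print(is_valid_cve_id("CVE-2007-0994"))   # True
print(is_valid_cve_id("CVE-14-0994"))     # False (two-digit year)
```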

CVE and Vulnerability Management

The CVE system is the linchpin of the vulnerability management process, as its widespread use and adoption allows different services and business processes to interoperate. The system provides a way for specific vulnerabilities to be tracked via the assignment of IDs. Enterprises, security researchers, penetration testers, software providers and even vulnerability scanning tools all use CVE IDs to track vulnerabilities in products. These IDs also allow important information regarding a vulnerability to be associated with it such as workarounds, vulnerable software versions, and Common Vulnerability Scoring System (CVSS) scores. Without the CVE system, it becomes difficult to track vulnerabilities in a way that allows the different stakeholders and their tools to interoperate.

Example of CVE and other associated information
Fig 1: Example of CVE and other associated information taken from CVEdetails.com (https://www.cvedetails.com/cve/CVE-2007-0994/)
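The interoperability described above comes down to a shared key: every tool attaches its own data to the same CVE ID. A minimal sketch of that merge, with hypothetical scanner and advisory feeds (the data and structure names here are illustrative only):

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class VulnerabilityRecord:
    """Information from multiple sources, unified under one CVE ID."""
    cve_id: str
    cvss_score: Optional[float] = None
    affected_versions: List[str] = field(default_factory=list)
    workaround: Optional[str] = None

# Hypothetical outputs from a vulnerability scanner and a vendor advisory
# feed -- both keyed by CVE ID, which is what makes the merge possible.
scanner_findings = {"CVE-2007-0994": {"affected_versions": ["1.5.0.9"]}}
vendor_advisories = {"CVE-2007-0994": {"cvss_score": 9.3,
                                       "workaround": "Disable JavaScript"}}

def merge_by_cve(cve_id: str) -> VulnerabilityRecord:
    """Combine scanner and advisory data for a single CVE ID."""
    finding = scanner_findings.get(cve_id, {})
    advisory = vendor_advisories.get(cve_id, {})
    return VulnerabilityRecord(
        cve_id=cve_id,
        cvss_score=advisory.get("cvss_score"),
        affected_versions=list(finding.get("affected_versions", [])),
        workaround=advisory.get("workaround"),
    )
```

Without the shared identifier, each source would describe the same flaw in its own words, and no automated join like this would be possible.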

CVE Inclusion Rules and Limitations

The decision to assign an ID to a vulnerability is governed by the CVE Inclusion Rules. To assign a CVE ID, the assigner evaluates the vulnerability against these rules, and generally only a vulnerability that satisfies all five criteria is assigned an ID. For example, one of the rules, INC3, states that a vulnerability should be assigned a CVE ID only if it is customer-controlled or customer-installable. A vulnerability in Customer Relationship Management (CRM) software installed on a server owned and managed by an enterprise fulfills that requirement.

CVE Inclusion Rule INC3
Fig 2: Inclusion Rule INC3 taken from cve.mitre.org (https://cve.mitre.org/cve/editorial_policies/counting_rules.html)
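The INC3 gate can be sketched as a simple predicate over a reported vulnerability. The class and field names below are illustrative, not part of the official counting rules:

```python
from dataclasses import dataclass

@dataclass
class ReportedVulnerability:
    description: str
    customer_controlled: bool  # can the customer install or patch the software?

def passes_inc3(vuln: ReportedVulnerability) -> bool:
    """INC3: assign a CVE ID only if the affected product is
    customer-controlled or customer-installable. Cloud-service-only
    flaws fail this check and thus receive no CVE ID."""
    return vuln.customer_controlled

on_prem_crm = ReportedVulnerability("SQL injection in self-hosted CRM",
                                    customer_controlled=True)
saas_only = ReportedVulnerability("Auth bypass in a SaaS-only API",
                                  customer_controlled=False)

print(passes_inc3(on_prem_crm))  # True  -> eligible for a CVE ID
print(passes_inc3(saas_only))    # False -> excluded under INC3
```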

INC3, as currently worded, is problematic for a world increasingly dominated by cloud services. In the past, this inclusion rule served the IT industry well, as most enterprise IT services were provisioned on infrastructure the enterprise owned. With the proliferation of cloud services, however, the rule has created a growing gap in enterprise vulnerability management, because cloud services, as we currently understand them, are not customer-controlled. As a result, vulnerabilities in cloud services are generally not assigned CVE IDs. Information such as workarounds, affected software or hardware versions, proofs of concept, references, and patches is therefore unavailable, since that information is normally associated with a CVE ID. Without the support of the CVE system, it becomes difficult, if not impossible, to track and manage such vulnerabilities.

Conclusion

The Cloud Security Alliance and the CVE board are currently exploring solutions to this problem.

One of the first tasks is to obtain industry feedback regarding a possible modification of INC3 to take into account vulnerabilities that are not customer-controlled. Such a change would officially put cloud service vulnerabilities in the scope of the CVE system. This would not only allow vulnerabilities to be properly tracked, it would also enable important information to be associated with a service vulnerability.

Please let us know what you think about a change to INC3 and the resulting impact on the vulnerability management ecosystem in the comment section below or you can also email us.

Stay tuned for our next blog post where we will explore the impacts that the current Inclusion Rules have on enterprise vulnerability management.

[1] https://cve.mitre.org/

CSA Releases New Whitepaper on Virtualization Security Best Practices

At this year’s RSA Conference, the Cloud Security Alliance released a new whitepaper, “Best Practices for Mitigating Risks in Virtualized Environments,” which provides guidance on identifying and managing security risks specific to compute virtualization technologies that run on server hardware.

The whitepaper was developed by CSA’s Virtualization Working Group, which is co-chaired by Kapil Raina of Elastica and Kelvin Ng of Nanyang Polytechnic, and sponsored by Trend Micro. The 35-page paper identifies 11 core risks related to virtualization, including:

  1. VM Sprawl
  2. Sensitive Data within a VM
  3. Security of Offline and Dormant VMs
  4. Security of Pre-Configured (Golden Image) VM / Active VMs
  5. Lack of Visibility Into and Controls Over Virtual Networks
  6. Resource Exhaustion
  7. Hypervisor Security
  8. Unauthorized Access to Hypervisor
  9. Account or Service Hijacking Through the Self-Service Portal
  10. Workload of Different Trust Levels Located on the Same Server
  11. Risk Due to Cloud Service Provider API

The report is free and can be downloaded in full here.


The Dark Side of Big Data: CSA Opens Peer Review Period for the “Top Ten Big Data and Privacy Challenges” Report

Big Data seems to be on the lips of every organization’s CXO these days. By exploiting Big Data, enterprises are able to gain valuable new insights into customer behavior via advanced analytics. However, what often gets lost amidst all the excitement are the very real security and privacy issues that go hand in hand with Big Data. Traditional security mechanisms were simply never designed to deal with the reality of Big Data, which often relies on distributed, large-scale cloud infrastructures, a diversity of data sources, and the high volume and frequency of data migration between different cloud environments.

To address these challenges, the CSA Big Data Working Group released an initial report, The Top 10 Big Data Security and Privacy Challenges, at CSA Congress 2012. It was the first industry report to take a holistic view of the wide variety of big data challenges facing enterprises. Since then, the group has been furthering its research, assembling detailed information and use cases for each threat. The result is the expanded Top 10 Big Data and Privacy Challenges report and, beginning today, the report is open for peer review, during which CSA members are invited to review and comment on the report prior to its final release. The 35-page report outlines the unique challenges presented by Big Data through narrative use cases and identifies the dimension of difficulty for each challenge.

The Top 10 Big Data and Privacy Challenges have been enumerated as follows:

  1. Secure computations in distributed programming frameworks
  2. Security best practices for non-relational data stores
  3. Secure data storage and transactions logs
  4. End-point input validation/filtering
  5. Real-time security monitoring
  6. Scalable and composable privacy-preserving data mining and analytics
  7. Cryptographically enforced data centric security
  8. Granular access control
  9. Granular audits
  10. Data provenance

The goal of outlining these challenges is to raise awareness among security practitioners and researchers so that industry-wide best practices might be adopted to address these issues as they continue to evolve. The open review period ends March 18, 2013. To review the report and provide comments, please visit https://interact.cloudsecurityalliance.org/index.php/bigdata/top_ten_big_data_2013 .

Tweet this: The Dark Side of Big Data: CSA Releases Top 10 Big Data and Privacy Challenges Report. http://bit.ly/VHmk0d

CSA Releases CCM v 3.0

The Cloud Security Alliance (CSA) today released a draft of the latest version of the Cloud Controls Matrix, CCM v3.0. This latest revision to the industry standard for cloud computing security controls realigns the CCM control domains to achieve tighter integration with the CSA’s “Security Guidance for Critical Areas of Focus in Cloud Computing version 3” and introduces three new control domains. Beginning February 25, 2013, the draft version of CCM v3.0 will be made available for peer review through the CSA Interact website, with the peer review period closing March 27, 2013, and final release of CCM v3.0 on April 1, 2013.

The three new control domains—“Mobile Security,” “Supply Chain Management, Transparency and Accountability,” and “Interoperability & Portability”—address the rapidly expanding methods by which cloud data is accessed, the need to ensure due care is taken in the cloud provider’s supply chain, and the minimization of service disruptions in the face of a change to a cloud provider relationship.

The “Mobile Security” controls are built upon the CSA’s “Security Guidance for Critical Areas of Mobile Computing, v1.0” and are the first mobile device specific controls incorporated into the Cloud Control Matrix.

The “Supply Chain Management, Transparency and Accountability” control domain seeks to address risks associated with governing data within the cloud, while the “Interoperability & Portability” domain brings to the forefront considerations for minimizing service disruptions in the face of a change in a cloud vendor relationship or an expansion of services.

The realigned control domains have also benefited from changes in language that improve the clarity and intent of each control; in some cases, controls were moved within the expanded domains to ensure cohesiveness within each control domain and minimize overlap.

The draft of the Cloud Control Matrix can be downloaded from the Cloud Security Alliance website and the CSA welcomes peer review through the CSA Interact website.

The CSA invites all interested parties to participate in the peer review and in the CSA Cloud Controls Matrix Working Group meeting to be held during the week of the RSA Conference, at 4 p.m. PT on February 28, 2013, in the Franciscan Room of the Sir Francis Drake Hotel, 450 Powell St., San Francisco, CA.

Your Chance to Influence Cloud Security Research!

By Zenobia Godschalk

The Cloud Security Alliance needs your help! We are conducting a survey to help us better understand users’ current cloud deployment plans and their biggest areas of security and compliance concern. The feedback generated here will assist the CSA in shaping our educational curriculum and areas of guidance over the coming months. So, if you’re concerned about cloud security, let your voice be heard!

http://www.surveymonkey.com/s.aspx?sm=VqH8jHHwc9GhANj3EzDl1g_3d_3d

(Survey takes just a few minutes, and you will receive a complimentary copy of the results)

Cloud Security and Privacy book by CSA founding members

By Jim Reavis

I wanted to let everyone know about the new book release, Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance. The book was written by three experts, two of whom are CSA founding members. I had the opportunity to read it prior to publication, and I can personally recommend it as a great resource for those seeking to learn about and securely adopt cloud computing. The book URL is below:

http://oreilly.com/catalog/9780596802769/