CSA Issues Top 20 Critical Controls for Cloud Enterprise Resource Planning Customers

By Victor Chin, Research Analyst, Cloud Security Alliance

Top 20 Critical Controls for Cloud ERP Customers

Cloud technologies are being increasingly adopted by organizations, regardless of their size, location or industry. And it’s no different when it comes to business-critical applications, typically known as enterprise resource planning (ERP) applications. Most organizations are migrating their business-critical ERP applications to hybrid cloud architectures. To assist in this process, CSA has released the Top 20 Critical Controls for Cloud Enterprise Resource Planning (ERP) Customers, a report that assesses and prioritizes the most critical controls organizations need to consider when transitioning their business-critical applications to cloud environments.

This document provides 20 controls, grouped into domains for ease of consumption, that align with the existing CSA Cloud Control Matrix (CCM) v3 structure of controls and domains.

The document focuses on the following domains:

  • Cloud ERP Users: Thousands of different users with very different access requirements and authorizations extensively use cloud enterprise resource planning applications. This domain provides controls aimed to protect users and access to cloud enterprise resource planning applications.
  • Cloud ERP Application: An attribute associated with cloud ERP applications is the complexity of the technology and functionality provided to users. This domain provides controls that are aimed to protect the application itself.
  • Integrations: Cloud ERP applications are not isolated systems but instead tend to be extensively integrated and connected to other applications and data sources. This domain focuses on securing the integrations of cloud enterprise resource planning applications.
  • Cloud ERP Data: Cloud enterprise resource planning applications store highly sensitive and regulated data. This domain focuses on critical controls to protect access to this data.
  • Business Processes: Cloud enterprise resource planning applications support some of the most complex and critical business processes for organizations. This domain provides controls that mitigate risks to these processes.

While there are various ERP cloud service models such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS)—each with different security/service-level agreements and lines of responsibility—organizations are required to protect their own data, users and intellectual property (IP). As such, organizations that are either considering an ERP cloud migration or already have workloads in the cloud can use these control guidelines to build or bolster a strong foundational ERP security program.

By themselves, ERP applications utilize complex systems and, consequently, are challenging to secure. In the cloud, their complexity increases due to factors such as shared security models, varying cloud service models, and the intersection between IT and business controls. Nevertheless, due to cloud computing benefits, enterprise resource planning applications are increasingly migrating to the cloud.

Organizations should leverage this document as a guide to drive priorities around the most important controls that should be implemented while adopting Cloud ERP Applications. The CSA ERP Security Working Group will continue to keep this document updated and relevant. In the meantime, the group hopes readers find this document useful when migrating or securing enterprise resource planning applications in the cloud.

Download this free resource now.

Financial Services: Counting on CASBs

By Will Houcheime, Product Marketing Manager, Bitglass

Financial institutions handle a great deal of sensitive data and are highly conscious of where they store and process it. Nevertheless, they are aware of the many benefits that they can gain by using cloud applications. In order to embrace the cloud’s myriad advantages without compromising the security of their data, financial institutions have been turning to cloud access security brokers (CASBs). To find out why, check out our latest episode of Glass Class:

Survey Says: Almost Half of Cloud Workloads Not Controlled by Privileged Access

By Nate Yocom, Chief Technology Officer, Centrify

For the past few years, Centrify has been using a statistic from Forrester to demonstrate the importance of protecting privileged accounts, which estimates that 80 percent of data breaches involve privileged credentials. This first showed up in The Forrester Wave: Privileged Identity Management report in Q3 2016, and was used again in the same report in Q4 2018.

Recently I was thrilled to see the results of a survey we conducted with FINN Partners, polling 1,000 IT decision makers (500 U.S./500 U.K.) about their awareness of the privileged credential threats they’re facing, their understanding of the Privileged Access Management (PAM) market, and how Zero Trust can help reduce their risk of becoming the next data breach headline.

The headline stat from the survey:

This confirms what we already know: the majority of cyber-attacks abuse privileged credentials, making privileged credential abuse the leading attack vector.

Furthermore, it’s pretty close to the Forrester estimate, and lends credibility to why Gartner named PAM a Top 10 Security Project in 2018, and again in 2019.

Still not prioritizing PAM

What’s concerning about the survey, however, is that despite knowing privileged credential abuse is involved in the majority of breaches, most organizations and IT leaders are not prioritizing PAM or implementing it effectively. What’s worse, they continue to grant too much trust and too much privilege.

We’ve said or written it a thousand times: attackers no longer “hack” in, they log in using weak credentials and then fan out, seeking privileged access to critical infrastructure and sensitive data.

There are some very basic PAM capabilities and best practices that are still not being implemented, namely:

  • 52 percent of respondents do not have a password vault! This is PAM 101, and one of the very first steps of the PAM maturity model. Over half aren’t even vaulting privileged passwords, which means they’re probably written down on shared spreadsheets.
  • 63 percent indicate their companies usually take more than one day to shut off privileged access for employees who leave the company.
  • 65 percent are still sharing root or privileged access to systems and data at least somewhat often, including to cloud infrastructure and workloads.

The modern threatscape – including cloud workloads – is not secure

If organizations are still struggling to implement some of the most basic or required PAM strategies, then it’s not surprising that the survey revealed most are also not securing modern attack surfaces, most notably cloud workloads.

While it’s encouraging to see that 63 percent of US respondents are controlling privileged access to cloud workloads, there’s a pretty big gap between them and the 47 percent of UK counterparts who are doing the same. Furthermore, that averages out to 55 percent of all respondents … which means that almost half are NOT leveraging PAM solutions to manage privileged access to cloud workloads.

This is a big focus area for Centrify right now. One major pain point we know of is directory services. Cloud services like AWS and Azure require the creation of a unique user directory, which makes it a mess to create, manage, update, and revoke privileges when needed.

One solution is to provide multi-directory brokering, enabling an organization to leverage whatever user directory it’s already using to broker access to cloud infrastructure, services, and workloads. So, for example, if an organization is using Active Directory (AD) to control authentication, they would be able to leverage the existing directory to manage and broker privileged access to AWS or Azure.

That’s a perfect example of a modern attack surface that needs privilege management, but doesn’t have the native capabilities to provide it simply and effectively. Legacy PAM solutions simply cannot secure modern attack surfaces.

Organizations need to quickly move to Zero Trust Privilege backed by cloud-ready services that minimize the attack surface, improve audit and compliance visibility, and reduce risk, complexity and costs for the modern, hybrid enterprise.

Download the survey report now.

Nate Yocom is Chief Technology Officer at Centrify and a member of CSA’s Hybrid Cloud Security Services Working Group.

12 Ways Cloud Upended IT Security (And What You Can Do About It)

By Andrew Wright, Co-founder & Vice President of Communications, Fugue


The cloud represents the most disruptive trend in enterprise IT over the past decade, and security teams have not escaped turmoil during the transition. It’s understandable for security professionals to feel like they’ve lost some control in the cloud and feel frustrated while attempting to get a handle on the cloud “chaos” in order to secure it from modern threats.

Here, we take a look at the ways cloud has disrupted security, with insights into how security teams can take advantage of these changes and succeed in their critical mission to keep data secure.

1. The cloud relieves security of some big responsibilities

Organizations liberate themselves from the burdens of acquiring and maintaining physical IT infrastructure when they adopt cloud, and this means security is no longer responsible for the security of physical infrastructure. The shared responsibility model of cloud dictates that Cloud Service Providers (CSPs) such as AWS and Azure are responsible for the security of the physical infrastructure. CSP customers (that’s you!) are responsible for the secure use of cloud resources. There’s a lot of misunderstanding out there about the Shared Responsibility Model, however, and that brings risk.

2. In the cloud, developers make their own infrastructure decisions

Cloud resources are available on-demand via Application Programming Interfaces (APIs). Because the cloud is self-service, developers move fast, sidestepping traditional security gatekeepers. When developers spin up cloud environments for their applications, they’re configuring the security of their infrastructure. And developers can make mistakes, including critical cloud resource misconfigurations and compliance policy violations.

3. And developers change those decisions constantly

Organizations can innovate faster in the cloud than they ever could in the datacenter. Continuous Integration and Continuous Deployment (CI/CD) means continuous change to cloud environments. And it’s easy for developers to change infrastructure configurations to perform tasks like getting logs from an instance or troubleshooting an issue. So, even if the security of their cloud infrastructure was correct on day one, a misconfiguration vulnerability may have been introduced on day two (or hour two).

4. The cloud is programmable and can be automated

Because cloud resources can be created, modified, and destroyed via APIs, developers have ditched web-based cloud “consoles” and taken to programming their cloud resources using infrastructure-as-code tools like AWS CloudFormation and Hashicorp Terraform. Massive cloud environments can be predefined, deployed on-demand, and updated at will–programmatically and with automation. These infrastructure configuration files include the security-related configurations for critical resources.
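
As a concrete illustration of that programmability, here is a minimal sketch, assuming AWS and the Python boto3 SDK (neither of which the article prescribes), in which a bucket is created and its security-relevant configuration is applied entirely through API calls. The bucket name is hypothetical.

```python
# Minimal sketch: provisioning a resource and its security configuration via API.
# Assumes AWS credentials are configured and boto3 is installed.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "example-reports-bucket"  # hypothetical name for illustration

# Create the bucket with an API call -- no web console involved.
s3.create_bucket(Bucket=bucket)

# The security posture is also just configuration set through the API:
# here, blocking every form of public access.
s3.put_public_access_block(
    Bucket=bucket,
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```

The same settings could equally be expressed in a CloudFormation or Terraform template; the point is that security-critical configuration now lives in code and API calls rather than in hardware.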

5. There are more kinds of infrastructure in the cloud to secure

In certain respects, security in the datacenter is easier to manage. You have your network, firewalls, and servers on racks. The cloud has those too, in virtualized form. But the cloud also produced a flurry of new kinds of infrastructure resources, like serverless and containers. AWS alone has introduced hundreds of new kinds of services over the past few years. Even familiar things like networks and firewalls operate in unfamiliar ways in the cloud. All require new and different security postures.

6. There’s also more infrastructure in the cloud to secure

There are simply more cloud infrastructure resources to track and secure, and due to the elastic nature of cloud, “more” varies by the minute. Teams operating at scale in the cloud may be managing dozens of environments across multiple regions and accounts, and each may involve tens of thousands of resources that are individually configured and accessible via APIs. These resources interact with each other and require their own identity and access management (IAM) permissions. Microservice architectures compound this problem.
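
To get a feel for the inventory problem, the following rough sketch (again assuming AWS and boto3, with pagination and error handling simplified) simply counts EC2 instances in every region of a single account; that is one resource type, in one account, at one moment in time.

```python
# Rough sketch: enumerate one resource type (EC2 instances) across all regions.
import boto3

session = boto3.session.Session()
total = 0

for region in session.get_available_regions("ec2"):
    ec2 = session.client("ec2", region_name=region)
    try:
        reservations = ec2.describe_instances()["Reservations"]
    except Exception:
        # Regions that are not enabled for the account will fail the call.
        continue
    count = sum(len(r["Instances"]) for r in reservations)
    total += count
    print(f"{region}: {count} instances")

print(f"Total EC2 instances across all regions: {total}")
```

Repeat that for every service, every account, and every environment, and the scale of the tracking problem becomes clear.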

7. Cloud security is all about configuration—and misconfiguration

Cloud operations is all about the configuration of cloud resources, including security-sensitive resources such as networks, security groups, and access policies for databases and object storage. Without physical infrastructure to concern yourself with, security focus shifts to the configuration of cloud resources to make sure they’re correct on day one, and that they stay that way on day two and beyond.

8. Cloud security is also all about identity

In the cloud, many services connect to each other via API calls, requiring identity management for security rather than IP based network rules, firewalls, etc. For instance, a connection from a Lambda to an S3 bucket is accomplished using a policy attached to a role that the Lambda takes on—its service identity. Identity and Access Management (IAM) and similar services are complex and feature rich, and it’s easy to be overly permissive just to get things to work. And since these cloud services are created and managed with configuration, see #7.
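
To make the “overly permissive” point concrete, here is a hedged sketch of two IAM policy documents expressed as plain JSON structures in Python. The bucket name and statement ID are hypothetical; the first policy reflects the “just get it to work” habit, while the second grants only what the Lambda’s identity actually needs.

```python
# Sketch: contrasting an overly permissive policy with a least-privilege one.
import json

overly_permissive = {
    "Version": "2012-10-17",
    "Statement": [
        # Grants every S3 action on every bucket -- easy, but far too broad.
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"}
    ],
}

least_privilege = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadReportsBucketOnly",  # hypothetical statement ID
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-reports-bucket/*",
        }
    ],
}

print(json.dumps(least_privilege, indent=2))
```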

9. The nature of threats to cloud are different

Bad actors use code and automation to find vulnerabilities in your cloud environment and exploit them, and automated threats will always outrun manual or semi-manual defenses. Your cloud security must be resilient against modern threats, which means your defenses must cover all critical resources and policies, and recover from any misconfiguration of those resources automatically, without human involvement. The key metric here is Mean Time to Remediation (MTTR) for critical cloud misconfiguration. If yours is measured in hours, days, or (gasp!) weeks, you’ve got work to do.

10. Datacenter security doesn’t work in the cloud

By now, you’ve probably concluded that many of the security tools that worked in the datacenter aren’t of much use in the cloud. This doesn’t mean you need to ditch everything you’ve been using, but learn which tools still apply and which are obsolete. For instance, application security still matters, but network monitoring tools that rely on spans or taps to inspect traffic don’t apply, because CSPs don’t provide direct network access. The primary security gap you need to fill is concerned with cloud resource configuration.

11. Security can be easier and more effective in the cloud

You’re probably ready for some good news. Because the cloud is programmable and can be automated, the security of your cloud is also programmable and can be automated. This means cloud security can be easier and more effective than it ever could be in the datacenter. In the midst of all this cloud chaos lies opportunity!

Monitoring for misconfiguration and drift from your provisioned baseline can be fully automated, and you can employ self-healing infrastructure for your critical resources to protect sensitive data. And before infrastructure is provisioned or updated, you can run automated tests to validate that infrastructure-as-code complies with your enterprise security policies, just like you do to secure your application code. This lets developers know earlier on if there are problems that need to be fixed, and it ultimately helps them move faster and keep innovating.
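
As a sketch of what that automation can look like, the following loop (assuming AWS and boto3, with a public-access-block baseline chosen purely for illustration) detects buckets that drift from a secure baseline and re-applies it. A real deployment would run on a schedule or from an event, and would alert as well as fix.

```python
# Sketch: detect and remediate drift from a simple S3 security baseline.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

BASELINE = {
    "BlockPublicAcls": True,
    "IgnorePublicAcls": True,
    "BlockPublicPolicy": True,
    "RestrictPublicBuckets": True,
}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        current = s3.get_public_access_block(Bucket=name)[
            "PublicAccessBlockConfiguration"
        ]
        drifted = current != BASELINE
    except ClientError:
        # No public access block configured at all -- treat as drift.
        drifted = True

    if drifted:
        print(f"Remediating {name}")
        s3.put_public_access_block(
            Bucket=name, PublicAccessBlockConfiguration=BASELINE
        )
```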

12. Compliance can also be easier and more effective in the cloud

There’s good news for compliance analysts as well. Traditional manual audits of cloud environments can be incredibly costly, error-prone, and time-consuming, and they’re usually obsolete before they’re completed. Because the cloud is programmable and can be automated, compliance scanning and reporting can be as well. It’s now possible to automate compliance audits and generate reports on a regular basis without investing a lot of time and resources. Because cloud environments change so frequently, a gap between audits that’s longer than a day is probably too long.

Where to start with cloud security

  1. Learn what your developers are doing
    What cloud environments are they using, and how are they separating concerns by account (i.e. dev, test, prod)? What provisioning and CI/CD tools are they using? Are they currently using any security tools? The answers to these questions will help you develop a cloud security roadmap and identify the best areas to focus on.
  2. Apply a compliance framework to an existing environment. 
    Identify violations and then work with your developers to bring it into compliance. If you aren’t subject to a compliance regime like HIPAA, GDPR, NIST 800-53, or PCI, then adopt the CIS Benchmark. Cloud providers like AWS and Azure have adapted it to their cloud platforms to help remove guesswork on how it applies to what your organization is doing.
  3. Identify critical resources and establish good configuration baselines.
    Don’t let the forest cause you to lose sight of the really important trees. Work with your developers to identify cloud resources that contain critical data, and establish secure configuration baselines for them (along with related resources like networks and security groups). Start detecting configuration drift for these and consider automated remediation solutions to prevent misconfiguration from leading to an incident.
  4. Help developers be more secure in their work. 
    Embrace a “Shift Left” mentality by working with developers to bake in security earlier in the software development lifecycle (SDLC). DevSecOps approaches such as automated policy checks during development exist to help keep innovation moving fast by eliminating slow, manual security and compliance processes.

The key to an effective and resilient cloud security posture is close collaboration with your development and operations teams to get everyone on the same page and talking the same language. In the cloud, security can’t operate as a stand-alone function.

Webinar: The Ever Changing Paradigm of Trust in the Cloud

By CSA Staff


The CSA closed its 10th annual Summit at RSA on Monday, and the consensus was that the cloud has come to dominate the technology landscape and revolutionize the market, creating a tectonic shift in accepted practice.

The advent of the cloud has been a huge advancement in technology. Today’s need for flexible access has led to an increase in business demand for cloud computing, bringing with it increased security and privacy concerns. How organizations evaluate Cloud Service Providers (CSPs) has become key to providing increased levels of assurance and transparency.

On Thursday, March 14 at 2 pm ET, John DiMaria, Cloud Security Alliance’s Assurance Investigatory Fellow and one of the key innovators in the evolution of CSA STAR, will share his insight on the:

  • current global landscape of cloud computing,
  • ongoing concerns regarding the cloud, and the
  • evolution of efforts to answer to the demand for higher transparency and assurance.

Join John DiMaria as he reviews the efforts being led by CSA to answer this call. You’ll walk away with a deeper understanding of how these efforts are aimed at helping organizations optimize processes, reduce costs, and decrease risk, while meeting the continuing, rigorous international demands on cloud services and allowing for the highest level of assurance and transparency.

Register today.

CSA Summit Recap Part 2: CSP & CISO Perspective

By Elisa Morrison, Marketing Intern, Cloud Security Alliance

When CSA was started in 2009, Uber was just a German word for ‘Super’ and all CSA stood for was Community Supported Agriculture. Now in 2019, spending on cloud infrastructure has finally exceeded on-premises, and CSA is celebrating its 10th anniversary. For those who missed the Summit, this is the CSA Summit Recap Part 2, and in this post we will be highlighting key takeaways from sessions geared towards CSPs and CISOs.

Can you trust your eyes? Context as the basis for “Zero Trust” systems – Jason Garbis

During this session, Jason Garbis identified three steps towards implementing Zero Trust: reducing attack surfaces, securing access, and neutralizing adversaries. He also addressed how to adopt modern security architecture to make intelligent actions for trust. In implementing Zero Trust, Garbis highlighted the need for:

  • Authentication. From passwords to biometrics to tokens. That said, authentication alone is not sufficient for adequate security; as he warned, it comes too late in the process.
  • Network technology changes. Firewall technology is too restrictive (e.g. IP addresses are shared across multiple people), and it reduces the question to yes-or-no access. That is not Zero Trust. Better security is based on the role of the person and the definition of the data, drawing on many attributes rather than a single allow-or-deny decision.
  • Access control requirements. There is a need for requirements that dynamically adjust based on context. If possible, organizations need to find a unified solution via Software-Defined Perimeter.

Securing Your IT Transformation to the Cloud – Jay Chaudhry, Bob Varnadoe, and Tom Filip

Every CEO wants to embrace cloud, but how can you do it securely? The old world was network-centric, and the data center was the center of the universe. We could build a moat around our network with firewalls and proxies. The new world is user-centric, and the network control is fluid. Not to mention, the network is decoupled from security, and we rely on policy-based access as depicted in the picture below.

Slide: Old World vs New World

In order to address this challenge, organizations need to view security with a clean slate. Applications and network must be decoupled. More cloud traffic is encrypted, but encryption also offers a way for malicious users to get in, so proxies and firewalls should be used to inspect traffic.

Ten Years in the Cloud – PANEL

The responsibility to protect consumers and enterprise has expanded dramatically. Meanwhile, the role of the CISO is changing – responsibilities now include both users and the company. CISOs are faced with challenges as legacy tools don’t always translate to the cloud. Now there is also a need to tie the value of the security program to business, and the function of security has changed especially in support. In light of these changes, the panel unearthed the following five themes in their discussion of lessons learned in the past 10 years of cloud.

  1. Identity as the new perimeter. How do we identify people are who they say they are?
  2. DevOps as critical for security. DevOps allows security to be embedded into the app, but it is also a risk since there is faster implementation and more developers.
  3. Ensuring that security is truly embedded in the code. Iterations in real-time require codified security.
  4. Threat and data privacy regulations. This is on the legislative to-do list for many states; comparable to the interest that privacy has in financial services and health care information.
  5. Security industry as a whole is failing us all. It is not solving problems in real-time; as software becomes more complex it poses security problems. Tools are multiplying but they do not address the overall security environment. Because of this, there’s a need for an orchestrated set of tools.

Finally! Cloud Security for Unmanaged Devices… for All Apps – Nico Popp

Now we have entered the gateway wars: Web vs. CASB vs. SDP. Whoever wins, the problem of BYOD and unmanaged devices still remains. There is also the issue that we can’t secure endpoint users’ mobile devices. As things stand, mirror gateway and forward proxy technologies solve the sins of the “reverse proxy” and have become indispensable blades. Forward proxy is the solution for all apps when you can manage the endpoint, and mirror gateway can be used for all users, all endpoints, and all sanctioned apps.

Lessons from the Cloud – David Cass


Cloud is a means to an end … and the end requires organizations to truly transform. This is especially important as regulators expect a high level of control in a cloud environment. Below are the key takeaways presented:

  • Cloud impacts strategy and governance at every level: from strategy, to controls, to monitoring, measuring, and managing information, all the way to external communications.
  • The enterprise cloud requires a programmatic approach with data at the center of the universe, and native controls only get you so far. Cloud is a journey, not just a change in technology.
  • Developing a cloud security strategy requires taking into account service consumption, IaaS, PaaS, and SaaS. It is also important to keep in mind that cloud is not just an IT initiative.

Security Re-Defined – Jason Clark and Bob Schuetter

This session examined how Valvoline went to the cloud to transform its security program and accelerate its digital transformation. When Valvoline split off via an IPO, it created two global multi-billion-dollar startups, neither of which had a datacenter. Data was flowing like water, complexity and control created friction, and there was a lack of visibility.

Slide: Digital transformation

They viewed cloud as security’s new north star, and said ‘The Fourth Industrial Revolution’ was moving to the cloud. So how did they get there? The following are the five lessons they shared:

  1. Stop technical debt
  2. Go where your data is going
  3. Think big, move fast, and start small
  4. Organizational structure, training, and mindset
  5. Use the power of new analytics

Blockchain Demo

Slide: A simple claim example

Inspired by the cryptocurrency model, OpenCPEs is a way to revolutionize how security professionals measure their professional development experiences.

OpenCPEs provides a method of validating experiences listed on your resume without maintaining or storing an individual’s personal data. Learn more about this project by downloading the presentation slides.

The full slides to the summit presentations are available for download.

CSA Summit Recap Part 1: Enterprise Perspective

By Elisa Morrison, Marketing Intern, Cloud Security Alliance

CSA’s 10th anniversary, coupled with the bestowal of the Decade of Excellence Awards, gave a sense of accomplishment to this Summit that bodes well yet also challenges the CSA community to continue its pursuit of excellence.

The common theme was the ‘Journey to the Cloud,’ with an emphasis on how organizations can not only go faster but also reduce costs during this journey. The Summit this year also touched on the future of privacy and disruptive technologies, and introduced CSA’s newest initiatives in Blockchain and IoT along with the launch of the STAR Continuous auditing program. Part 1 of this CSA Summit Recap highlights sessions from the Summit geared toward the enterprise perspective.

Securing Your IT Transformation to the Cloud – Jay Chaudhry, Bob Varnadoe, and Tom Filip

Slide: Network security is becoming irrelevant

Every CEO wants to embrace cloud, but how do you do it securely? To answer this question, the trio looked at the journeys other companies, such as Kellogg and NCR, took to the cloud. In Kellogg’s case, they found that when it comes to your transformation, single-tenant VMs won’t cut it. They also brought to light the ineffectiveness of services such as hybrid security: why pay the tax for services not used?

For NCR, major themes were how to streamline connectivity and access to cloud services. The big question was: how do end users access NCR data in a secure environment? They found that applications and network must be decoupled. And, while more cloud traffic is encrypted, it offers another way for malicious users to get in. Their solution was to use proxies and firewalls to inspect traffic.

The Future of Privacy: Futile or Pretty Good? – Jon Callas

ACLU technology fellow Jon Callas brought to light the false dichotomy we see when discussing privacy. It is easy to be nihilistic about privacy, but positives are out there as well.

There is movement in the right direction that we can already see; examples include GDPR, the California Privacy Law, the Illinois Biometric Privacy Law, and the Carpenter, Riley, and Oakland Magistrate decisions. There has also been a precedent set for laws with more privacy toward consumers. For organizations, privacy has also become a focus of competition, and companies such as Apple, Google, and Microsoft all compete on privacy. Protocols such as TLS and encrypted DNS are also becoming a reality. Other positive trends include default encryption and the fact that disasters are documented, reported on, and treated as a concern.

Unfortunately, there has also been movement in the wrong direction. There is a balancing act between the design for security versus design for surveillance. The surveillance economy is increasing, and too many platforms and devices are now collecting data and selling it. Lastly, government arrogance and the overreach to legislate surveillance over security is an issue.

All in all, Callas summarized that the future is neither futile nor pretty good and it’s necessary to balance both moving forward.

From GDPR to California Privacy – Kevin Kiley

Slide: Steps to better vendor risk management

This session touched on third-party breaches, regulatory liability, the need for strong data processing agreements, and how to comply with GDPR and CCPA. Kiley identified a need for a holistic approach with more detailed vendor vetting requirements. He outlined five areas organizations should improve to better their vendor risk management.

  1. Onboarding. Who’s doing the work for procurement, privacy, or security?
  2. Populating & Triaging. Leverage templated vendor evaluation assessments and populate with granular details.
  3. Documentation and demonstration
  4. Monitoring vendors
  5. Offboarding

Building an Award-Winning Cloud Security Program – Pete Chronis and Keith Anderson

This session covered key lessons learned along the way as Turner built its award-winning cloud security program. One of the constant challenges Turner faced was the battle between speed to market and the security program. To improve their program, Turner enacted continuous compliance measurement, using open source tools for cloud plane assessment. They also ensured each user attestation was signed by both the executive and technical support. For accounts, they implemented intrusion prevention, detection, and security monitoring. They learned to define what good looks like, while also developing a lexicon and definitions for security. It was emphasized that organizations should always be iterating from new > good > better. Lastly, when building your cloud security program, they emphasized that not all things need to be secured the same and not all data needs the same level of security.

Case Study: Behind the Scenes of MGM Resorts’ Digital Transformation – Rajiv Gupta and Scott Howitt

MGM’s global user base meant they wanted to expand functions to guest services, check-in volume management and find a way of bringing new sites online faster. To accomplish this, MGM embarked on a cloud journey. Their journey was broken into business requirements (innovation velocity and M&A agility) along with necessary security requirements (dealing with sensitive data, the need to enable employees to move faster, and the ability to deploy a security platform).

Slide: Where is your sensitive data in the cloud?

As they described MGM’s digital transformation, the question was raised: where is sensitive data stored in the cloud? An emerging issue that continues to come up is API management. Eighty-seven percent of companies permit employees to use unmanaged devices to access business apps, and the BYOD policy is often left unmanaged or unenforced. In addition, MGM found that an average of 14 misconfigured IaaS services are running at any given time in an average organization, and the average organization has 1,527 DLP incidents in PaaS/IaaS in a month.

To address these challenges, organizations need to consider the relations between devices, network and the cloud. The session ended with three main points to keep in mind during your organization’s cloud journey. 1) Focus on your data. 2) Apply controls pertinent to your data. 3) Take a platform approach to your cloud security needs.

Taking Control of IoT – Hillary Baron


There is a gap in the security controls framework for IoT. With the landscape changing at a rapid pace and more than 20 billion IoT devices expected by 2020, the need is great. Added to that is the fact that IoT manufacturers typically do not build security into devices; hence the need for the security controls framework. You can learn more about the framework and its accompanying guidebook covered in this session here.

Panel – The Approaching Decade of Disruptive Technologies

While buzzwords can mean different things to different organizations, organizations should still implement processes among new and emerging technologies such as AI, Machine Learning, and Blockchain, and be conscious of what is implemented.

This session spent a lot of its time examining Zero Trust. The security perimeter now sits in different locations, and it is challenging to find the best place to establish it. It can no longer be a fixed point, but must flex with the mobility of users; mobile phones, for example, require very flexible boundaries. Zero Trust can help address these issues, and it’s BYOD-friendly. There are still challenges, but Web Authentication helps as a standard for Zero Trust.

Cloud has revolutionized security in the past decade. With cloud, you inherit security, and with it the idea of a simple system has gone out the window. One of the key questions asked was “Why are we not learning the security lessons from the cloud?” The answer? Because the number of developers grows exponentially with each new technology.

The key takeaway: Don’t assume your industry is different. Realize that others have faced these threats and have come up with successful treatment methodologies when approaching disruptive technologies.

CISO Guide to Surviving an Enterprise Cloud Journey – Andy Kirkland, Starbucks

Five years ago, the Director of Information Security for Starbucks, Andy Kirkland, recommended against going to the cloud, out of caution. Since then, Starbucks has migrated to the cloud and learned a lot along the way. Below is an outline of Starbucks’ survival tips for organizations embarking on a cloud journey:

  • Establish workload definitions to understand criteria
  • Utilize standardized controls across the enterprise
  • Provide security training for the technologist
  • Have a security incident triage tailored to your cloud provider
  • Establish visibility into cloud security control effectiveness
  • Define the security champion process to allow for security to scale

PANEL – CISO Counterpoint

In this keynote panel, leading CISOs discussed their cloud adoption experiences for enterprise applications. Jerry Archer, CSO for Sallie Mae, described their cloud adoption journey as “nibbling our way to success.” They started by putting small things into the cloud. By keeping up constant conversations with regulators, there were no surprises during the migration to the cloud. Now, they don’t have any physical infrastructure remaining. Other takeaways were that containers have evolved in 2019, and we now see ember security, arbitrage workloads, and RAIN (Refracting Artificial Intelligence Networks).

Download the full summit presentation slides here.

Education: A Cloud Security Investigation (CSI)

By Will Houcheime, Product Marketing Manager, Bitglass

cloud education painted on pavement

Cloud computing is now widely used in higher education. It has become an indispensable tool for both the institutions themselves and their students. This is mainly because cloud applications, such as G Suite and Microsoft Office 365, come with built-in sharing and collaboration functionality – they are designed for efficiency, teamwork, and flexibility. This, when combined with the fact that education institutions tend to receive massive discounts from cloud service providers, has led to a cloud adoption rate in education that surpasses that of every other industry. Naturally, this means that education institutions need to find a cloud security solution that can protect their data wherever it goes.

Cloud adoption means new security concerns

When organizations move to the cloud, there are new security concerns that must be addressed; for example, cloud applications, which are designed to enable sharing, can be used to share data with parties that are not authorized to view it. Despite the fact that some of these applications have their own native security features, many lack granularity, meaning that sensitive data such as personally identifiable information (PII), personal health information (PHI), federally funded research, and payment card industry data (PCI) can still fall into the wrong hands.

Complicating the situation further is the fact that education institutions are required to comply with multiple regulations; for example, FERPA, FISMA, PCI DSS, and HIPAA. Additionally, when personal devices are used to access data (a common occurrence for faculty and students alike), securing data and complying with regulatory demands becomes even more challenging.

Fortunately, cloud access security brokers (CASBs) are designed to protect data in today’s business world. Leading CASBs provide complete visibility and control over data in any app, any device, anywhere. Identity and access management capabilities, zero-day threat detection, and granular data protection policies ensure that sensitive information is safe and regulatory demands are thoroughly addressed.

Want to learn more? Download the Higher Education Solution Brief.

Rocks, Pebbles, Shadow IT

By Rich Campagna, Chief Marketing Officer, Bitglass

Way back in 2013/14, Cloud Access Security Brokers (CASBs) were first deployed to identify Shadow IT, or unsanctioned cloud applications. At the time, the prevailing mindset amongst security professionals was that cloud was bad, and discovering Shadow IT was viewed as the first step towards stopping the spread of cloud in their organization.

Flash forward just a few short years and the vast majority of enterprises have done a complete 180º with regards to cloud, embracing an ever-increasing number of “sanctioned” cloud apps. As a result, the majority of CASB deployments today are focused on real-time data protection for sanctioned applications – typically starting with System of Record applications that handle wide swaths of critical data (think Office 365, Salesforce, etc.). Shadow IT discovery, while still important, is almost never the main driver in the CASB decision-making process.

Regardless, I still occasionally hear of CASB intentions that harken back to the days of yore – “we intend to focus on Shadow IT discovery first before moving on to protect our managed cloud applications.” Organizations that start down this path quickly fall into the trap of building time consuming processes for triaging and dealing with what quickly grows from hundreds to thousands of applications, all the while delaying building appropriate processes for protecting data in the sanctioned applications where they KNOW sensitive data resides.

This approach is a remnant of marketing positioning by early vendors in the CASB space. For me, it brings to mind Habit #3 from Stephen Covey’s The 7 Habits of Highly Effective People – “Put First Things First.”

Putting first things first is all about focusing on your most important priorities. There’s a video of Stephen famously demonstrating this habit on stage in one of his seminars. In the video, he asks an audience member to fill a bucket with sand, followed by pebbles, and then big rocks. The result is that once the pebbles and sand fill the bucket, there is no more room for the rocks. He then repeats the demonstration by having her add the big rocks first. The result is that all three fit in the bucket, with the pebbles and sand filtering down between the big rocks.

Now, one could argue that after you take care of the big rocks, perhaps you should just forget about the sand, but regardless, this lesson is directly applicable to your CASB deployment strategy:

You have major sanctioned apps in the cloud that contain critical data. These apps require controls around data leakage, unmanaged device access, credential compromise and malicious insiders, malware prevention, and more. Those are your big rocks and the starting point of your CASB rollout strategy. Focus too much on the sand and you’ll never get to the rocks.

Read what Gartner has to say on the topic in 2018 Critical Capabilities for CASBs.

Rethinking Security for Public Cloud

Symantec’s Raj Patel highlights how organizations should be retooling security postures to support a modern cloud environment

By Beth Stackpole, Writer, Symantec


Enterprises have come a long way with cyber security, embracing robust enterprise security platforms and elevating security roles and best practices. Yet with public cloud adoption on the rise and businesses shifting to agile development processes, new threats and vulnerabilities are testing traditional security paradigms and cultures, mounting pressure on organizations to seek alternative approaches.

Raj Patel, Symantec’s vice president of cloud platform engineering, recently shared his perspective on the shortcomings of a traditional security posture, along with the kinds of changes and tools organizations need to embrace to mitigate risk in an increasingly cloud-dominant landscape.

Q: What are the key security challenges enterprises need to be aware of when migrating to the AWS public cloud and what are the dangers of continuing traditional security approaches?

A: There are a few reasons why it’s really important to rethink this model. First of all, the public cloud by its very definition is a shared security model with your cloud provider. That means organizations have to play a much more active role in managing security in the public cloud than they may have had in the past.

Infrastructure is provided by the cloud provider, and as such, responsibility for security is being decentralized within an organization. The cloud provider provides a certain level of base security, but the application owner directly develops infrastructure on top of the public cloud, and thus now has to be security-aware.

The public cloud environment is also a very fast-moving world, which is one of the key reasons why people migrate to it. It is infinitely scalable and much more agile. Yet those very same benefits also create a significant amount of risk. Security errors are going to propagate at the same speed if you are not careful and don’t do things right. So from a security perspective, you have to apply that logic in your security posture.

Finally, the attack vectors in the cloud are the entire fabric of the cloud. Traditionally, people might worry about protecting their machines or applications. In the public cloud, the attack surface is the entire fabric of the cloud–everything from infrastructure services to platform services, and in many cases, software services. You may not know all the elements of the security posture of all those services … so your attack surface is much larger than you have in a traditional environment.

Q: Where does security fit in a software development lifecycle (SDLC) when deploying to a public cloud like AWS and how should organizations retool to address the demands of the new decentralized environment?

A: Most organizations going through a cloud transformation take a two-pronged approach. First, they are migrating their assets and infrastructure to the public cloud and second, they are evolving their software development practices to fit the cloud operating model. This is often called going cloud native and it’s not a binary thing—it’s a journey.

With that in mind, most cloud native transformations require a significant revision of the SDLC … and in most cases, firms adopt some form of a software release pipeline, often called a continuous integration, continuous deployment (CI/CD) pipeline. I believe that security needs to fit within the construct of the release management pipeline or CI/CD practice. Security becomes yet another error class to manage just like a bug. If you have much more frequent release cycles in the cloud, security testing and validation has to move at the same speed and be part of the same release pipeline. The software tools you choose to manage such pipelines should accommodate this modern approach.
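
One way to picture “security as another error class” is a simple gate script in the release pipeline, sketched below. Here run_policy_checks() is a hypothetical stand-in for whatever policy-as-code scanner a team actually uses; the point is only that critical findings fail the build just like failing tests do.

```python
# Sketch: a CI/CD gate that fails the build on critical security findings.
import sys


def run_policy_checks(paths):
    """Hypothetical stand-in: return a list of findings with severities."""
    # In practice this would invoke a policy or configuration scanner on `paths`.
    return [
        {"rule": "s3-public-access", "severity": "critical", "file": "storage.tf"},
        {"rule": "missing-tags", "severity": "low", "file": "network.tf"},
    ]


def main():
    findings = run_policy_checks(["infrastructure/"])
    for f in findings:
        print(f"[{f['severity']}] {f['rule']} in {f['file']}")

    critical = [f for f in findings if f["severity"] == "critical"]
    if critical:
        print(f"{len(critical)} critical finding(s): failing the build.")
        sys.exit(1)  # a non-zero exit code stops this pipeline stage


if __name__ == "__main__":
    main()
```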

Q: Explain the concept of DevSecOps and why it’s an important best practice for public cloud security?

A: DevOps is a cultural construct. It is not a function. It is a way of doing something—specifically, a way of building a cloud-native application. And a new term, DevSecOps, has emerged which contends that security should be part of the DevOps construct. In a sense, DevOps is a continuum from development all the way to operations, and the DevSecOps philosophy says that development, security, and operations are one continuum.

Q: DevOps and InfoSec teams are not typically aligned—what are your thoughts on how to meld the decentralized, distributed world of DevOps with the traditional command-and-control approach of security management?

A: It starts with a very strong, healthy respect for the discipline of security within the overall application development construct. Traditionally, InfoSec professionals didn’t intersect with DevOps teams because security work happened as an independent activity or as an adjunct to the core application development process. Now, as we’re talking about developing cloud-native applications, security is part of how you develop because you want to maximize agility and frankly, harness the pace of development changes going on.

One practice that works well is when security organizations embed a security professional or engineer within an application group or DevOps group. Oftentimes, the application owners complain that the security professionals are too far removed from the application development process so they don’t understand it or they have to explain a lot, which slows things down. I’m proposing breaking that log jam by embedding a security person in the application group so that the security professional becomes the delegate of the security organization, bringing all their tools, knowledge, and capabilities.

At Symantec, we also created a cloud security practitioners working group as we started our cloud journey. Engineers involved in migrating to the public cloud as well as our security professionals work as one common operating group to come up with best practices and tools. That has been very powerful because it is not a top-down approach, it’s not a bottoms-up approach–it is the best outcome of the collective thinking of these two groups.

Q: How does the DevSecOps paradigm address the need for continuous compliance management as a new business imperative?

A: It’s not as much that DevSecOps invokes continuous compliance validation as much as the move to a cloud-native environment does. Changes to configurations and infrastructure are much more rapid and distributed in nature. Since changes are occurring almost on a daily basis, the best practice is to move to a continuous validation mode. The cloud allows you to change things or move things really rapidly and in a software-driven way. That means lots of good things, but it can also mean increasing risk a lot. This whole notion of DevSecOps to CI/CD to continuous validation comes from that basic argument.

Cloud Security Alliance Announces the Release of the Spanish Translation of Guidance 4.0

By JR Santos, Executive Vice President of Research, Cloud Security Alliance.

The Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment, today announced the release of Guidance for Critical Areas of Focus in Cloud Computing 4.0 in Spanish. This is the second major translation release since Guidance 4.0 was released in July of 2017 (the previous version was released in 2011).

An actionable cloud adoption roadmap

Guidance 4.0, which acts as a practical, actionable roadmap for individuals and organizations looking to safely and securely adopt the cloud paradigm, includes significant content updates to address leading-edge cloud security practices.

Approximately 80 percent of the Guidance was rewritten from the ground up with domains restructured to better represent the current state and future of cloud computing security. Guidance 4.0 incorporates more of the various applications used in the security environment today to better reflect real-world security practices.

“Guidance 4.0 is the culmination of more than a year of dedicated research and public participation from the CSA community, working groups and the public at large,” said Rich Mogull, Founder & VP of Product, DisruptOPS. “The landscape has changed dramatically since 2011, and we felt the timing was right to make the changes we did. We worked hard with the community to ensure that the Guidance was not only updated to reflect the latest cloud security practices, but to ensure it provides practical, actionable advice along with the background material to support the CSA’s recommendations. We’re extremely proud of the work that went into this and the contributions of everyone involved.”

CCM, CAIQ, DevOps and more

Guidance 4.0 integrates the latest CSA research projects, such as the Cloud Controls Matrix (CCM) and the Consensus Assessments Initiative Questionnaire (CAIQ), and covers such topics as DevOps, IoT, Mobile and Big Data. Among the other topics covered are:

  • DevOps, continuous delivery, and secure software development;
  • Software Defined Networks, the Software Defined Perimeter, and cloud network security;
  • Microservices and containers;
  • New regulatory guidance and evolving roles of audits and compliance inheritance;
  • Using CSA tools such as the CCM, CAIQ, and STAR Registry to inform cloud risk decisions;
  • Securing the cloud management plane;
  • More practical guidance for hybrid cloud;
  • Compute security guidance for containers and serverless, plus updates to managing virtual machine security; and
  • The use of immutable, serverless, and “new” cloud architectures.

Development of Guidance 4.0 was overseen by the professional research analysts at Securosis and based on an open research model relying on community contributions and feedback during all phases of the project. The entire history of contributions and research development is available online for complete transparency.

CVE and Cloud Services, Part 2: Impacts on Cloud Vulnerability and Risk Management

By Victor Chin, Research Analyst, Cloud Security Alliance, and Kurt Seifried, Director of IT, Cloud Security Alliance


This is the second post in a series, where we’ll discuss cloud service vulnerability and risk management trends in relation to the Common Vulnerability and Exposures (CVE) system. In the first blog post, we wrote about the Inclusion Rule 3 (INC3) and how it affects the counting of cloud service vulnerabilities. Here, we will delve deeper into how the exclusion of cloud service vulnerabilities impacts enterprise vulnerability and risk management.

 

Traditional vulnerability and risk management

CVE identifiers are the linchpin of traditional vulnerability management processes. Besides being an identifier for vulnerabilities, the CVE system allows different services and business processes to interoperate, making enterprise IT environments more secure. For example, a network vulnerability scanner can identify whether a vulnerability (e.g. CVE-2018-1234) is present in a deployed system by querying said system.

The queries can be conducted in many ways, such as via a banner grab, querying the system for what software is installed, or even via proof of concept exploits that have been de-weaponized. Such queries confirm the existence of the vulnerability, after which risk management and vulnerability remediation can take place.
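
As a simple illustration of the banner-grab style of query (the host, port, and list of affected versions below are all hypothetical), a scanner might do something like this:

```python
# Sketch: grab a service banner and compare it against known-vulnerable versions.
import socket

HOST, PORT = "203.0.113.10", 22                       # hypothetical target
VULNERABLE_VERSIONS = {"OpenSSH_7.2", "OpenSSH_7.3"}  # illustrative list only

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    banner = sock.recv(1024).decode(errors="replace").strip()

print(f"Banner: {banner}")
if any(version in banner for version in VULNERABLE_VERSIONS):
    print("Banner matches a known-vulnerable version; flag for remediation.")
else:
    print("No match against the known-vulnerable versions.")
```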

Once the existence of the vulnerability is confirmed, enterprises must conduct risk management activities. Enterprises might first prioritize vulnerability remediation according to the criticality of the vulnerabilities. The Common Vulnerability Scoring System (CVSS) is one common basis for triaging vulnerabilities. The system gives each vulnerability a score according to how critical it is, and from there enterprises can prioritize and remediate the more critical ones. Like other vulnerability information, CVSS scores are normally associated with CVE IDs.
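
A minimal sketch of that triage step might look like the following; the scores and asset names are placeholders rather than real ratings for these identifiers.

```python
# Sketch: order findings by CVSS score and work the most critical first.
findings = [
    {"cve": "CVE-2018-1234", "cvss": 9.8, "asset": "web-frontend"},
    {"cve": "CVE-2018-5678", "cvss": 4.3, "asset": "internal-wiki"},
    {"cve": "CVE-2018-9012", "cvss": 7.5, "asset": "api-gateway"},
]

for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
    priority = "high" if f["cvss"] >= 7.0 else "normal"
    print(f"{f['cve']} (CVSS {f['cvss']}) on {f['asset']}: {priority} priority")
```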

Next, mitigating actions can be taken to remediate the vulnerabilities. This could refer to implementing patches, workarounds, or applying security controls. How the organization chooses to address the vulnerability is an exercise of risk management. They have to carefully balance their resources in relation to their risk appetite. But generally, organizations choose risk avoidance/rejection, risk acceptance, or risk mitigation.

Risk avoidance and rejection is fairly straightforward. Here, the organization doesn’t want to mitigate the vulnerability; based on the information available, it determines that the risk the vulnerability poses is above its risk threshold, and it stops using the vulnerable software.

Risk acceptance refers to when the organization, based on information available, determines that the risk posed is below their risk threshold and decides to accept the risk.

Lastly, in risk mitigation, the organization chooses to take mitigating actions and implement security controls that will reduce the risk. In traditional environments, such mitigating actions are possible because the organization generally owns and controls the infrastructure that provisions the IT service. For example, to mitigate a vulnerability, organizations are able to implement firewalls, intrusion detection systems, conduct system hardening activities, deactivate a service, change the configuration of a service, and many other options.

Thus, in traditional IT environments, organizations are able to take many mitigating actions because they own and control the stack. Furthermore, organizations have access to vulnerability information with which to make informed risk management decisions.

Cloud service customer challenges

Compared to traditional IT environments, the situation is markedly different for external cloud environments. The differences all stem from organizations not owning and controlling the infrastructure that provisions the cloud service, as well as not having access to vulnerability data of cloud native services.

Enterprise users don’t have ready access to cloud native vulnerability data because CVE IDs are not generally assigned to cloud native vulnerabilities, so there is no official way to associate data with them. Consequently, it’s difficult for enterprises to make an informed, risk-based decision regarding a vulnerable cloud service. For example, when should an enterprise customer reject the risk and stop using the service, and when should it accept the risk and continue using the service?

Furthermore, even if CVE IDs are assigned to cloud native vulnerabilities, the differences between traditional and cloud environments are so vast that the vulnerability data normally associated with a CVE in a traditional environment is inadequate when dealing with cloud service vulnerabilities. For example, in a traditional IT environment, CVEs are linked to the version of the software. An enterprise customer can verify that a vulnerable version of the software is running by checking the software version. In cloud services, the versioning of the software (if there is one!) is usually known only to the cloud service provider and is not made public. Additionally, the enterprise user is unable to apply security controls or other mitigations to address the risk of a vulnerability.

This is not to say that CVEs and the associated vulnerability data are useless for cloud services. Rather, we should consider including vulnerability data that is useful in the context of a cloud service. In particular, cloud service vulnerability data should help enterprise cloud customers make the important risk-based decision of whether to continue or stop using the service.

Thus, just as enterprise customers must trust cloud service providers with their sensitive data, they must also trust, blindly, that the cloud service providers are properly remediating the vulnerabilities in their environment in a timely manner.

The CVE gap

With the increasing global adoption and proliferation of cloud services, the exclusion of service vulnerabilities from the CVE system and the impacts of said exclusion have left a growing gap that the cloud services industry should address. This gap not only impacts enterprise vulnerability and risk management but also other key stakeholders in the cloud services industry.

In the next post, we’ll explore how other key stakeholders are affected by the shortcomings of cloud service vulnerability management.

Please let us know what you think about the INC3’s impacts on cloud service vulnerability and risk management in the comment section below, or you can also email us.

Cloud Migration Strategies and Their Impact on Security and Governance

By Peter HJ van Eijk, Head Coach and Cloud Architect, ClubCloudComputing.com


Public cloud migrations come in different shapes and sizes, but I see three major approaches. Each of these has very different technical and governance implications.

Three approaches to cloud migration

Companies dying to get rid of their data centers often get started on a ‘lift and shift’ approach, where applications are moved from existing servers to equivalent servers in the cloud. The cloud service model consumed here is mainly IaaS (infrastructure as a service). Not much is outsourced to cloud providers here. Contrast that with SaaS.

The other side of the spectrum is adopting SaaS solutions. More often than not, these trickle in from the business side, not from IT. They can range from small meeting planners to full-blown sales support systems.

More recently, developers have started to embrace cloud native architectures. Ultimately, both the target environment as well as the development environment can be cloud based. The cloud service model consumed here is typically PaaS.

I am not here to advocate the benefits of one over the other; I think there can be a business case for each of these.

The categories also have some overlap. Lift and shift can require some refactoring of code to have it better fit cloud native deployments. And hardly any SaaS application is standalone, so some (cloud native) integration with other software is often required.

Profound differences

The big point I want to make here is that there are profound differences in the issues that each of these categories faces, and the hard decisions that have to be made. Most of these decisions are about governance and risk management.

With lift and shift, the application functionality is pretty clear, but bringing that out to the cloud introduces data risks and technical risks. Data controls may be insufficient, and the application’s architecture may not be a good match for cloud, leading to poor performance and high cost.

One group of SaaS applications stems from ‘shadow IT’. The people that adopt them typically pay little attention to existing risk management policies. These can also add useless complexity to the application landscape. The governance challenges for these are obvious: consolidate and make them more compliant with company policies.

Another group of SaaS applications is the reincarnation of the ‘enterprise software package’. Think ERP, CRM or HR applications. These are typically run as a corporate project, with all its change management issues, except that you don’t have to run it yourself.

The positive side of SaaS solutions, in general, is that they are likely to be cloud native, which could greatly reduce their risk profile. Of course, this has to be validated, and a minimum risk control is to have a good exit strategy.

Finally, cloud native development is the most exciting, rewarding and risky approach. This is because it explores and creates new possibilities that can truly transform an organization.

One of the most obvious balances to strike here is between speed of innovation and independence of platform providers. The more you are willing to commit yourself to an innovative platform, the faster you may be able to move. The two big examples I see of that are big data and internet of things. The major cloud providers have very interesting offerings there, but moving a fully developed application from one provider to another is going to be a really painful proposition. And of course, the next important thing is for developers to truly understand the risks and benefits of cloud native development.

Again, there are big governance and risk management issues to address.

Peter van Eijk is one of the world’s most experienced cloud trainers. He has worked for 30+ years in research, with IT service providers and in IT consulting (University of Twente, AT&T Bell Labs, EDS, EUNet, Deloitte). In more than 100 training sessions he has helped organizations align on security and speed up their cloud adoption. He is an authorized CSA CCSK and (ISC)2 CCSP trainer, and has written or contributed to several cloud training courses. 

Continuous Monitoring in the Cloud

By Michael Pitcher, Vice President, Technical Cyber Services, Coalfire Federal


I recently spoke at the Cloud Security Alliance’s Federal Summit on the topic “Continuous Monitoring / Continuous Diagnostics and Mitigation (CDM) Concepts in the Cloud.” As government has moved and will continue to move to the cloud, it is becoming increasingly important to ensure continuous monitoring goals are met in this environment. Specifically, cloud assets can be highly dynamic, lacking persistence, and thus traditional methods for continuous monitoring that work for on-premise solutions don’t always translate to the cloud.

Coalfire has been involved with implementing CDM for various agencies and is the largest Third Party Assessment Organization (3PAO), having done more FedRAMP authorizations than anyone, uniquely positioning us to help customers think through this challenge. However, these concepts and challenges are not unique to the government agencies that are a part of the CDM program; they also translate to other government and DoD communities as well as commercial entities.

To review, Phase 1 of the Department of Homeland Security (DHS) CDM program focused largely on static assets and for the most part excluded the cloud. It was centered around building and knowing an inventory, which could then be enrolled in ongoing scanning, as frequently as every 72 hours. The objective is to determine if assets are authorized to be on the network, are being managed, and if they have software installed that is vulnerable and/or misconfigured. As the cloud becomes a part of the next round of CDM, it is important to understand how the approach to these objectives needs to adapt.

Cloud services enable resources to be allocated, consumed, and de-allocated on the fly to meet peak demands. Just about any system is going to have times where more resources are required than others, and the cloud allows compute, storage, and network resources to scale with this demand. As an example, within Coalfire we have a Security Parsing Tool (Sec-P) that spins up compute resources to process vulnerability assessment files that are dropped into a cloud storage bucket. The compute resources only exist for a few seconds while the file gets processed, and then they are torn down. Examples such as this, as well as serverless architectures, challenge traditional continuous monitoring approaches.
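For readers who want a concrete picture, the sketch below shows the general shape of that pattern (not the Sec-P tool itself): an object dropped into a storage bucket triggers a short-lived function that parses the file and then disappears. The bucket, event wiring, and parsing step are assumptions made for illustration.

```python
# Hypothetical sketch of the short-lived compute pattern described above:
# an object created in an S3 bucket triggers a Lambda function that reads
# and parses the file, then the compute resource is torn down.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """AWS Lambda entry point invoked by an S3 'object created' event."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # Parse the scan results here; the compute resource exists only
        # for the duration of this invocation.
        print(f"Processed {len(body)} bytes from s3://{bucket}/{key}")
```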

However, potential solutions are out there, including:

  • Adopting built-in services and third-party tools
  • Deploying agents
  • Leveraging Infrastructure as Code (IaC) review
  • Using sampling for validation
  • Developing a custom approach

Adopting built-in services and third-party tools

Dynamic cloud environments highlight the inadequacies of performing active and passive scanning to build inventories. Assets may simply come and go before they can be assessed by a traditional scan tool. Each of the major cloud service providers (CSPs) and many of the smaller ones provide inventory management services in addition to services that can monitor resource changes – examples include AWS Systems Manager Inventory and CloudWatch, Microsoft’s Azure Resource Manager and Activity Log, and Google’s Asset Inventory and Cloud Audit Logging. There are also quality third-party applications that can be used, some of them even already FedRAMP authorized. Regardless of the service/tool used, the key here is interfacing them with the integration layer of an existing CDM or continuous monitoring solution. This can occur via API calls to and from the solution, which are made possible by the current CDM program requirements.
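As a rough illustration of that interfacing, the sketch below pulls a point-in-time inventory from a CSP API and posts it to a hypothetical integration-layer endpoint. The endpoint URL is an assumption; a real deployment would use whichever inventory service and CDM integration layer is actually in place.

```python
# Sketch: pull a point-in-time asset inventory from a CSP API and hand it
# to an existing continuous-monitoring integration layer. The endpoint URL
# is a placeholder; the boto3 call is a standard EC2 inventory listing.
import boto3
import requests

def current_ec2_inventory() -> list:
    """Return instance IDs, images, and states for every EC2 instance."""
    ec2 = boto3.client("ec2")
    inventory = []
    for page in ec2.get_paginator("describe_instances").paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                inventory.append({
                    "id": instance["InstanceId"],
                    "image": instance["ImageId"],
                    "state": instance["State"]["Name"],
                })
    return inventory

# Hypothetical CDM/continuous-monitoring integration endpoint.
requests.post("https://cdm.example.internal/api/inventory",
              json=current_ec2_inventory(), timeout=30)
```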

Deploying agents

For resources that are going to have some degree of persistence, agents are a great way to perform continuous monitoring. Agents can check in with a master to maintain the inventory and also perform security checks once the resource is spun up, instead of having to wait for a sweeping scan. Agents can be installed as a part of the build process or even be made part of a deployment image. Interfacing with the master node that controls the agents and comparing that to the inventory is a great way to perform cloud-based “rogue” asset detection, a requirement under CDM. This concept employed on-premises is really about finding unauthorized assets, such as a personal laptop plugged into an open network port. In the cloud it is all about finding assets that have drifted from the approved configuration and are out of compliance with the security requirements.
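A minimal sketch of that comparison, assuming you can export the set of assets that have checked in with the agent master and the set of assets in the approved inventory (the asset IDs here are placeholders):

```python
# Sketch: cloud "rogue" asset detection by comparing the assets that have
# checked in with the agent master against the approved inventory.
# Both input sets are assumed to come from your agent platform and
# inventory service; the IDs below are placeholders.

def find_rogue_assets(agent_assets: set, approved_inventory: set):
    """Return assets running without agents and agents on unknown assets."""
    missing_agent = approved_inventory - agent_assets   # drifted from build
    unknown_assets = agent_assets - approved_inventory  # not in inventory
    return missing_agent, unknown_assets

missing, unknown = find_rogue_assets(
    agent_assets={"i-0abc", "i-0def"},
    approved_inventory={"i-0abc", "i-0123"},
)
print("No agent checked in:", missing)
print("Not in approved inventory:", unknown)
```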

For resources such as our Coalfire Sec-P tool from the previous example, where it exists as code more than 90 percent of the time, we need to think differently. An agent approach may not work as the compute resources may not exist long enough to even check in with the master, let alone perform any security checks.

Infrastructure as code review

IaC is used to deploy and configure cloud resources such as compute, storage, and networking. It is basically a set of templates that “programs” the infrastructure. It is not a new concept for the cloud, but the speed at which environments change in the cloud is bringing IaC into the security spotlight.

Now we need to consider how to assess the code that builds and configures the resources. There are many tools and approaches for doing this; application security is nothing new, it just must be re-examined as part of performing continuous monitoring on infrastructure. The good news is that IaC uses structured formats and common languages such as XML, JSON, and YAML. As a result, it is possible to use tools or even write custom scripts to perform the review. This structured format also allows for automated and ongoing monitoring of the configurations, even when the resources only exist as code and are not “living.” It is also important to consider what software is spinning up with the resources, as the packages that are leveraged must be up-to-date versions free of known vulnerabilities. Code should undergo a security review when it changes, so the approved code can be continuously monitored.
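As one small example of such a custom script, the sketch below assumes a CloudFormation-style JSON template and flags security group rules open to the entire internet; a real review would check many more conditions or use a dedicated policy-as-code tool.

```python
# Sketch: a static check over an infrastructure-as-code template.
# Assumes a CloudFormation-style JSON document; flags security group
# ingress rules open to 0.0.0.0/0. Template path is a placeholder.
import json

def open_ingress_rules(template_path: str) -> list:
    with open(template_path) as fh:
        template = json.load(fh)
    findings = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in resource.get("Properties", {}).get("SecurityGroupIngress", []):
            if rule.get("CidrIp") == "0.0.0.0/0":
                findings.append(f"{name}: port {rule.get('FromPort')} open to 0.0.0.0/0")
    return findings

for finding in open_ingress_rules("template.json"):
    print(finding)
```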

Setting asset expiry is one way to enforce CDM principles in a DevOps-heavy environment that leverages IaC. The goal of CDM is to assess assets every 72 hours, so we can set them to expire (get torn down, and therefore require a rebuild) within that timeframe, ensuring they are always living on fresh infrastructure built with approved code.
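A minimal sketch of that expiry check, assuming launch times are available from the CSP inventory (the asset IDs and timestamps are placeholders):

```python
# Sketch: enforce a 72-hour asset expiry so every resource is periodically
# rebuilt from approved, reviewed code. Launch times would normally come
# from the CSP inventory API; the values here are placeholders.
from datetime import datetime, timedelta, timezone
from typing import Optional

MAX_AGE = timedelta(hours=72)

def expired(launch_time: datetime, now: Optional[datetime] = None) -> bool:
    """True if the asset has outlived the CDM assessment window."""
    now = now or datetime.now(timezone.utc)
    return now - launch_time > MAX_AGE

fleet = {
    "i-0abc": datetime(2018, 6, 1, 8, 0, tzinfo=timezone.utc),
    "i-0def": datetime(2018, 6, 4, 8, 0, tzinfo=timezone.utc),
}
for asset_id, launched in fleet.items():
    if expired(launched, now=datetime(2018, 6, 4, 12, 0, tzinfo=timezone.utc)):
        print(f"{asset_id}: schedule teardown and rebuild from approved IaC")
```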

Sampling

Sampling is to be used in conjunction with the methods above. In a dynamic environment where the total number of assets is always changing, there should be a solid core of the fleet that can be scanned via traditional means of active scanning. We just need to accept that we are not going to be able to scan the complete inventory. There should also be far fewer profiles, or “gold images,” than there are total assets. The idea is that if you can get at least 25% of each profile in any given scan, there is a good chance you are going to find all the misconfigurations and vulnerabilities that exist on all the resources of the same profile, and/or identify if assets are drifting from the fleet. This is enough to identify systemic issues such as bad deployment code or resources being spun up with out-of-date software. If you are finding resources in a profile that have a large discrepancy with the others in that same profile, then that is a sign of DevOps or configuration management issues that need to be addressed. We are not giving up on the concept of having a complete inventory, just accepting the fact that there really is no such thing.

Building IaC assets specifically for the purposes of performing security testing is a great option to leverage as well. These assets can have persistence and be “enrolled” into a continuous monitoring solution to report on the vulnerabilities in a similar manner to on-premises devices, via a dashboard or otherwise. The total number of vulnerabilities in the fleet is the quantity found on these sample assets, multiplied by the number of those assets that are living in the fleet. As we stated above, we can get this quantity from the CSP services or third-party tools.
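A minimal sketch of that extrapolation, with placeholder profile names and counts:

```python
# Sketch of the extrapolation described above: vulnerabilities found on a
# sampled "gold image" asset are multiplied by the number of live assets
# built from the same profile. Profile names and counts are placeholders.

samples = {            # vulnerabilities found on the sampled asset per profile
    "web-frontend": 4,
    "api-worker": 1,
    "batch-parser": 0,
}
live_assets = {        # live assets currently built from each profile
    "web-frontend": 40,
    "api-worker": 12,
    "batch-parser": 3,
}

estimated_total = sum(samples[p] * live_assets.get(p, 0) for p in samples)
print(f"Estimated vulnerabilities across the fleet: {estimated_total}")
```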

Custom approaches

There are many different CSPs out there serving the endless range of cloud-based possibilities, and each has various services and tools available from it and for it. What I have reviewed here are high-level concepts; each customer will need to dial in the specifics based on their use cases and objectives.

Cloud Security Trailing Cloud App Adoption in 2018

By Jacob Serpa, Product Marketing Manager, Bitglass

In recent years, the cloud has attracted countless organizations with its promises of increased productivity, improved collaboration, and decreased IT overhead. As more and more companies migrate, more and more cloud-based tools arise.

In its fourth cloud adoption report, Bitglass reveals the state of cloud in 2018. Unsurprisingly, organizations are adopting more cloud-based solutions than ever before. However, their use of key cloud security tools is lacking. Read on to learn more.

The Single Sign-On Problem

Single sign-on (SSO) is a basic, but critical security tool that authenticates users across cloud applications by requiring them to sign in to a single portal. Unfortunately, a mere 25 percent of organizations are using an SSO solution today. When compared to the 81 percent of companies that are using the cloud, it becomes readily apparent that there is a disparity between cloud usage and cloud security usage. This is a big problem.

The Threat of Data Leakage

While using the cloud is not inherently more risky than the traditional method of conducting business, it does lead to different threats that must be addressed in appropriate fashions. As adoption of cloud-based tools continues to grow, organizations must deploy cloud-first security solutions in order to defend against modern-day threats. While SSO is one such tool that is currently underutilized, other relevant security capabilities include shadow IT discovery, data loss prevention (DLP), contextual access control, cloud encryption, malware detection, and more. Failure to use these tools can prove fatal to any enterprise in the cloud.

Microsoft Office 365 vs. Google’s G Suite

Office 365 and G Suite are the leading cloud productivity suites. They each offer a variety of tools that can help organizations improve their operations. Since Bitglass’ 2016 report, Office 365 has been deployed more frequently than G Suite. Interestingly, this year, O365 has extended its lead considerably. While roughly 56 percent of organizations now use Microsoft’s offering, about 25 percent are using Google’s. The fact that Office 365 has achieved more than two times as many deployments as G Suite highlights Microsoft’s success in positioning its product as the solution of choice for the enterprise.

The Rise of AWS

Through infrastructure as a service (IaaS), organizations are able to avoid making massive investments in IT infrastructure. Instead, they can leverage IaaS providers like Microsoft, Amazon, and Google in order to achieve low-cost, scalable infrastructure. In this year’s cloud adoption report, every analyzed industry exhibited adoption of Amazon Web Services (AWS), the leading IaaS solution. While the technology vertical led the way at 21.5 percent adoption, 13.8 percent of all organizations were shown to use AWS.

To gain more information about the state of cloud in 2018, download Bitglass’ report, Cloud Adoption: 2018 War.

Five Cloud Migration Mistakes That Will Sink a Business

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Today, with the growing popularity of cloud computing, there exists a wealth of resources for companies that are considering—or are in the process of—migrating their data to the cloud. From checklists to best practices, the Internet teems with advice. But what about the things you shouldn’t be doing? The best-laid plans of mice and men often go awry, and so, too, will your cloud migration unless you manage to avoid these common cloud mistakes:

“The Cloud Service Provider (CSP) will do everything.”

Cloud computing offers significant advantages: cost, scalability, on-demand service and infinite bandwidth. And the processes, procedures, and day-to-day activities a CSP delivers provide every cloud customer, regardless of size, with the capabilities of a Fortune 50 IT staff. But nothing is idiot-proof. CSPs aren’t responsible for everything; they are only in charge of the parts they can control under the shared responsibility model, and they expect customers to own much of the risk mitigation.

Advice: Take the time upfront to read the best practices of the cloud you’re deploying to. Follow cloud design patterns and understand your responsibilities–don’t trust that your cloud service provider will take care of everything. Remember, it is a shared responsibility model.

“Cryptography is the panacea; data-in-motion, data-at-rest and data-in-transit protection works the same in the cloud.”

Cybersecurity professionals refer to the triad balance: Confidentiality, Integrity and Availability. Increasing one decreases the other two. In the cloud, availability and integrity are built into every service and even guaranteed with Service Level Agreements (SLAs). The last bullet in the confidentiality chamber involves cryptography: mathematically adjusting information to make it unreadable without the appropriate key. However, cryptography works differently in the cloud. Customers expect service offerings to work together, and so the CSP provides the “80/20” security with less effort (i.e., CSP-managed keys).
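As one concrete illustration of that trade-off, the sketch below uses S3 server-side encryption: the first call relies on provider-managed keys (the “80/20” option), while the second supplies a customer-managed KMS key, which buys more control but also more key-management responsibility. The bucket, object, and key ARN are placeholders.

```python
# Sketch of the trade-off described above, using S3 server-side encryption
# as one concrete example. Bucket, object key, and KMS key ARN are placeholders.
import boto3

s3 = boto3.client("s3")

# "80/20" option: provider-managed keys, no key management effort required.
s3.put_object(Bucket="example-bucket", Key="report.csv", Body=b"...",
              ServerSideEncryption="AES256")

# Customer-managed key: more control over access and revocation, but you
# now own key policy, rotation, and availability decisions.
s3.put_object(Bucket="example-bucket", Key="report.csv", Body=b"...",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="arn:aws:kms:us-east-1:111122223333:key/example")
```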

Advice: Expect that while you must use encryption for the cloud, there will be a learning curve. Take the time to read through the FAQs and understand what threats each architectural option really opens you up to.

“My cloud service provider’s default authentication is good enough.”

One of cloud’s tenets is self-service. CSPs have a duty to protect not just you, but also themselves and everyone else virtualized in their environment. One of the earliest self-service aspects is authentication, the act of proving you are who you say you are. There are three ways to provide this proof: 1) reply with something you know (e.g., a password); 2) provide something you have (e.g., a key or token); or 3) produce something you are (e.g., a fingerprint or retina scan). These are all commonplace activities. For example, most enterprise systems require a password with a complexity factor (upper/lower/character/number), and even banks now require customers to enter additional one-time codes received as text messages. These techniques make authentication stronger and more reliable, and they are seeing wider adoption. Multi-factor authentication uses more than one of them.

Advice: Cloud Service Providers offer numerous authentication upgrades, including some sort of multi-factor authentication option—use them.

“Lift and shift is the clear path to cloud migration.”

Cloud cost advantages evaporate quickly due to poor strategic decisions or architectural choices. A lift-and-shift approach to cloud migration is one where existing virtualized images or snapshots of current in-house systems are simply converted and uploaded onto a cloud service provider’s systems. If all you want is to run the exact same system you ran in-house, only rented on an IaaS platform, it will usually cost less to buy the hardware as a capital asset and depreciate it over three years. The lift-and-shift approach ignores the cloud’s ability to scale elastically up and down on demand, and doesn’t use rigorously tested cloud design patterns that yield resiliency and security. There may be systems within a design that are appropriate to copy exactly; however, placing an entire enterprise architecture directly onto a CSP would be costly and inefficient.

Advice: Invest the time up front to redesign your architecture for the cloud, and you will benefit greatly.

“Of course, we’re compliant.”

Enterprise risk and compliance departments have decades of frameworks, documentation and mitigation techniques to draw on. Cloud-specific control frameworks are less than five years old, but they are solid and becoming better understood each year.

However, adopting the cloud demands special attention, especially when it comes to non-enterprise risks such as economic denial of service (a credit card over its limit), third-party managed encryption keys that potentially give the third party access to your data (warrants/eDiscovery), or a compromised root administrator account (the CSP shutting down your account and forcing physical verification for reinstatement).

Advice: These items don’t have direct analogs in the enterprise risk universe, so your understanding of risk must expand, especially in highly regulated industries. Don’t invite massive fines, operational downtime or reputational losses by failing to pay attention to this widened risk environment.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Cybersecurity and Privacy Certification from the Ground Up

By Daniele Catteddu, CTO, Cloud Security Alliance

The European Cybersecurity Act, proposed in 2017 by the European Commission, is the most recent of several policy documents adopted and/or proposed by governments around the world, each with the intent (among other objectives) to bring clarity to cybersecurity certifications for various products and services.

The reason why cybersecurity, and most recently privacy, certifications are so important is pretty obvious: They represent a vehicle of trust and serve the purpose of providing assurance about the level of cybersecurity a solution could provide. They represent, at least in theory, a simple mechanism through which organizations and individuals can make quick, risk-based decisions without the need to fully understand the technical specifications of the service or product they are purchasing.

What’s in a certification?

Most of us struggle to keep pace with technological innovations, and so we often find ourselves buying services and products without sufficient levels of education and awareness of the potential side effects these technologies can bring. We don’t fully understand the possible implications of adopting a new service, and sometimes we don’t even ask ourselves the most basic questions about the inherent risks of certain technologies.

In this landscape, certifications, compliance audits, trust marks and seals are mechanisms that help improve market conditions by providing a high-level representation of the level of cybersecurity a solution could offer.

Certifications are typically performed by a trusted third party (an auditor or a lab) who evaluates and assesses a solution against a set of requirements and criteria that are in turn part of a set of standards, best practices, or regulations. In the case of a positive assessment, the evaluator issues a certification or statement of compliance that is typically valid for a set length of time.

One of the problems with certifications under the current market condition is that they have a tendency to proliferate, which is to say that for the same product or service more than one certification exists. The example of cloud services is pretty illustrative of this issue. More than 20 different schemes exist to certify the level of security of cloud services, ranging from international standards to national accreditation systems to sectorial attestation of compliance.

Such a proliferation of certifications can produce the exact opposite of the result certifications were built for. Rather than supporting and streamlining the decision-making process, they can create confusion, and rather than increasing trust, they can foster uncertainty. It should be noted, however, that such proliferation isn’t always a bad thing. Sometimes it’s the result of the need to accommodate important nuances of various security requirements.

Crafting the ideal certification

CSA has been a leader in cloud assurance, transparency and compliance for many years now, supporting the effort to improve the certification landscape. Our goal has been—and still is—to make the cloud and IoT technology environment more secure, transparent, trustworthy, effective and efficient by developing innovative solutions for compliance and certification.

It’s in this context that we are surveying our community and the market at-large to understand what both subject matter experts and laypersons see as the essential features and characteristics of the ideal certification scheme or meta-framework.

Our call to action?

Tell us—in a paragraph, a sentence or a word—what you think a cybersecurity and privacy certification should look like. Tell us what the scope should be (security/privacy, products/processes/people, cloud/IoT, global/regional/national), what level of assurance should be offered, which guarantees and liabilities are expected, what the tradeoff between cost and value is, and how it should be proposed and communicated so that it is understood and valuable for the community at large.

Tell us, but do it before July 2 because that’s when the survey closes.

How ChromeOS Dramatically Simplifies Enterprise Security

By Rich Campagna, Chief Marketing Officer, Bitglass

Google’s Chromebooks have enjoyed significant adoption in education, but have seen very little interest in the enterprise until recently. According to Gartner’s Peter Firstbrook in Securing Chromebooks in the Enterprise (6 March 2018), a survey of more than 700 respondents showed that nearly half of organizations will definitely purchase or probably will purchase Chromebooks by EOY 2017. And Google has started developing an impressive list of case studies, including Whirlpool, Netflix, Pinterest, the Better Business Bureau, and more.

And why wouldn’t this trend continue? As the enterprise adopts cloud en masse, more and more applications are available anywhere through a browser – obviating the need for a full OS running legacy applications. Additionally, Chromebooks can represent a large cost savings – not only in terms of a lower up-front cost of hardware, but lower ongoing maintenance and helpdesk costs as well.

With this shift comes a very different approach to security. Since Chrome OS is hardened and locked down, the need to secure the endpoint diminishes, potentially saving a lot of time and money. At the same time, the primary storage mechanism shifts from the device to the cloud, meaning that the need to secure data in cloud applications, like G Suite, with a Cloud Access Security Broker (CASB) becomes paramount. Fortunately, the CASB market has matured substantially in recent years, and is now widely viewed as “ready for primetime.”

Overall, the outlook for Chromebooks in the enterprise is positive, with a very real possibility of dramatically simplifying security. Now, instead of patching and protecting thousands of laptops, the focus shifts toward protecting data in a relatively small number of cloud applications. Quite the improvement!

What If the Cryptography Underlying the Internet Fell Apart?

By Roberta Faux, Director of Research, Envieta

Without the encryption used to secure passwords for logging in to services like Paypal, Gmail, or Facebook, a user is left vulnerable to attack. Online security is becoming fundamental to life in the 21st century. Once quantum computing is achieved, all the secret keys we use to secure our online life are in jeopardy.

The CSA Quantum-Safe Security Working Group has produced a new primer on the future of cryptography. This paper, “The State of Post-Quantum Cryptography,” is aimed at helping non-technical corporate executives understand what the impact of quantum computers on today’s security infrastructure will be.

Some topics covered include:
–What Is Post-Quantum Cryptography
–Breaking Public Key Cryptography
–Key Exchange & Digital Signatures
–Quantum Safe Alternative
–Transition Planning for Quantum-Resistant Future

Quantum Computers Are Coming
Google, Microsoft, IBM, and Intel, as well as numerous well-funded startups, are making significant progress toward quantum computers. Scientists around the world are investigating a variety of technologies to make quantum computers real. While no one is sure when (or even if) quantum computers will be created, some experts believe that within 10 years a quantum computer capable of breaking today’s cryptography could exist.

Effects on Global Public Key Infrastructure
Quantum computing strikes at the heart of the security of the global public key infrastructure (PKI). PKI establishes secure keys for bidirectional encrypted communications over an insecure network. PKI authenticates the identity of information senders and receivers, as well as protects data from manipulation. The two primary public key algorithms used in the global PKI are RSA and Elliptic Curve Cryptography. A quantum computer would easily break these algorithms.

The security of these algorithms is based on mathematical problems in number theory that are intractably hard, but only for a classical computer, where each bit can hold a single value (a 1 or a 0). On a quantum computer, where k quantum bits (qubits) can represent 2^k values simultaneously, RSA and Elliptic Curve cryptography can be broken in polynomial time using Shor’s algorithm. If quantum computers can scale to work on even tens of thousands of bits, today’s public key cryptography becomes immediately insecure.
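To make the contrast concrete, the standard complexity estimates from the literature for factoring an RSA modulus N are sub-exponential for the best known classical attack (the general number field sieve) but polynomial for Shor’s algorithm on a quantum computer:

```latex
% Best known classical attack (general number field sieve): sub-exponential in the size of N
T_{\mathrm{GNFS}}(N) = \exp\!\left( \left(\left(\tfrac{64}{9}\right)^{1/3} + o(1)\right) (\ln N)^{1/3} (\ln \ln N)^{2/3} \right)

% Shor's algorithm on a quantum computer: polynomial in the bit length of N
T_{\mathrm{Shor}}(N) = O\!\left( (\log N)^{3} \right)
```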

Post-Quantum Cryptography
Fortunately, there are cryptographically hard problems that are believed to be secure even from quantum attacks. These crypto-systems are known as post-quantum or quantum-resistant cryptography. In recent years, post-quantum cryptography has received an increasing amount of attention in academic communities as well as from industry. Cryptographers have been designing new algorithms to provide quantum-safe security.

Proposed algorithms are based on a number of underlying hard problems widely believed to be resistant to attacks even with quantum computers. These fall into the following classes:

  • Multivariate cryptography
  • Hash-based cryptography
  • Code-based cryptography
  • Supersingular elliptic curve isogeny cryptography

Our new white paper explains the pros and cons of the various classes of post-quantum cryptography. Most post-quantum algorithms will require significantly larger key sizes than existing public key algorithms, which may pose unanticipated issues such as compatibility with some protocols. Bandwidth will need to increase for key establishment and signatures. These larger key sizes also mean more storage inside a device.

Cryptographic Standards
Cryptography is typically implemented according to a standard. Standard organizations around the globe are advising stakeholders to plan for the future. In 2015, the U.S. National Security Agency posted a notice urging the need to plan for the replacement of current public key cryptography with quantum-resistant cryptography. While there are quantum-safe algorithms available today, standards are still being put in place.

Standard organizations such as ETSI, IETF, ISO, and X9 are all working on recommendations. The U.S. National Institute of Standards and Technology (NIST) is currently working on a project to produce a draft standard for a suite of quantum-resistant algorithms in the 2022-2024 timeframe. This is a challenging process that has attracted worldwide debate. Various algorithms have advantages and disadvantages with respect to computation, key sizes and degree of confidence, and these factors need to be evaluated against the target environment.

Cryptographic Transition Planning
One of the most important issues that the paper underscores is the need to begin planning for a cryptographic transition to migrate from existing public key cryptography to post-quantum cryptography. Now is the time to vigorously investigate the wide range of post-quantum cryptographic algorithms and find the best ones for future use. It is vital that corporate leaders understand this and begin transition planning now.

The white paper, “The State of Post-Quantum Cryptography,” was released by the CSA Quantum-Safe Security Working Group and introduces non-technical executives to the current and evolving landscape of cryptographic security.

Download the paper now.

Building a Foundation for Successful Cyber Threat Intelligence Exchange: A New Guide from CSA

By Brian Kelly, Co-chair/Cloud Cyber Incident Sharing Center (CISC) Working Group, and CSO/Rackspace

No organization is immune from cyber attack. Malicious actors collaborate with skill and agility, moving from target to target at a breakneck pace. With new attacks spreading from dozens of companies to a few hundred within a matter of days, visibility into the past cyber environment won’t cut it anymore. Visibility into what’s coming next is critical to staying alive.

Sophisticated organizations, particularly cloud providers, know the difference between a minor incident and massive breach lies in their ability to quickly detect, contain, and mitigate an attack. To facilitate this, they are increasingly participating in cyber intelligence and cyber incident exchanges, programs that enable cloud providers to share cyber-event information with others who may be experiencing the same issue or who are at risk for the same type of attack.

To help organizations navigate the sometimes treacherous waters of cyber-intelligence sharing programs, CSA’s Cloud Cyber Incident Sharing Center (Cloud-CISC) Working Group has produced Building a Foundation for Successful Cyber Threat Intelligence Exchange. This free report is the first in a series that will provide a framework to help corporations seeking to participate in cyber intelligence exchange programs that enhance their event data and incident response capabilities.

The paper addresses such challenges as:

  • determining what event data to share, which is essential (and fundamental) for organizations that struggle to understand their internal event data;
  • incorporating cyber intelligence provided by others via email, a format which by its very nature limits the ability to integrate it into one’s own data;
  • scaling laterally to other sectors and vertically along one’s supply chains; and
  • understanding that the motive for sharing is not necessarily helping others, but rather supporting internal response capabilities.

Past, Present, Future

Previous programs were more focused on sharing information about cyber security incidents after the fact and acted more as a public service to others than as a tool to support rapid incident response. That’s changed, and today’s Computer Security Incident Response Teams have matured.

New tools and technologies in cyber intelligence, data analytics and security incident management have created new opportunities for faster and actionable cyber intelligence exchange. Suspicious event data can now be rapidly shared and analyzed across teams, tools and even companies as part of the immediate response process.

Even so, there are questions and concerns beyond simply understanding the basics of the exchange process itself:

  • How do I share this information without compromising my organization’s sensitive data?
  • How do I select an exchange platform that best meets my company’s needs?
  • Which capabilities and business requirements should I consider when building a value-driven cyber intelligence exchange program?

Because the cloud industry is already taking advantage of many of the advanced technologies that support cyber intelligence exchange—and has such a unique and large footprint across the IT infrastructure—we believe that we have a real opportunity to take the lead and make cyber-intelligence sharing pervasive.

The Working Group’s recommendations were based largely on the lessons learned through their own development and operation of Cloud CISC, as well as their individual experiences in managing these programs for their companies.

Our industry cannot afford to let another year pass working in silos while malicious actors collaborate against us. It is time to level the playing field, and perhaps even gain an advantage. Come join us.