Cloud Security Alliance Announces the Release of the Spanish Translation of Guidance 4.0

By JR Santos, Executive Vice President of Research, Cloud Security Alliance.

The Cloud Security Alliance (CSA), the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment, today announced the release of Guidance for Critical Areas of Focus in Cloud Computing 4.0 in Spanish. This is the second major translation release since Guidance 4.0 was published in July of 2017 (the previous version was released in 2011).

An actionable cloud adoption roadmap

Guidance 4.0, which acts as a practical, actionable roadmap for individuals and organizations looking to safely and securely adopt the cloud paradigm, includes significant content updates to address leading-edge cloud security practices.

Approximately 80 percent of the Guidance was rewritten from the ground up with domains restructured to better represent the current state and future of cloud computing security. Guidance 4.0 incorporates more of the various applications used in the security environment today to better reflect real-world security practices.

“Guidance 4.0 is the culmination of more than a year of dedicated research and public participation from the CSA community, working groups and the public at large,” said Rich Mogull, Founder & VP of Product, DisruptOPS. “The landscape has changed dramatically since 2011, and we felt the timing was right to make the changes we did. We worked hard with the community to ensure that the Guidance was not only updated to reflect the latest cloud security practices, but to ensure it provides practical, actionable advice along with the background material to support the CSA’s recommendations. We’re extremely proud of the work that went into this and the contributions of everyone involved.”

CCM, CAIQ, DevOps and more

Guidance 4.0 integrates the latest CSA research projects, such as the Cloud Controls Matrix (CCM) and the Consensus Assessments Initiative Questionnaire (CAIQ), and covers such topics as DevOps, IoT, Mobile and Big Data. Among the other topics covered are:

  • DevOps, continuous delivery, and secure software development;
  • Software Defined Networks, the Software Defined Perimeter, and cloud network security;
  • Microservices and containers;
  • New regulatory guidance and evolving roles of audits and compliance inheritance;
  • Using CSA tools such as the CCM, CAIQ, and STAR Registry to inform cloud risk decisions;
  • Securing the cloud management plane;
  • More practical guidance for hybrid cloud;
  • Compute security guidance for containers and serverless, plus updates to managing virtual machine security; and
  • The use of immutable, serverless, and “new” cloud architectures.

Development of Guidance 4.0 was overseen by the professional research analysts at Securosis and based on an open research model relying on community contributions and feedback during all phases of the project. The entire history of contributions and research development is available online for complete transparency.

CVE and Cloud Services, Part 2: Impacts on Cloud Vulnerability and Risk Management

By Victor Chin, Research Analyst, Cloud Security Alliance, and Kurt Seifried, Director of IT, Cloud Security Alliance

This is the second post in a series, where we’ll discuss cloud service vulnerability and risk management trends in relation to the Common Vulnerability and Exposures (CVE) system. In the first blog post, we wrote about the Inclusion Rule 3 (INC3) and how it affects the counting of cloud service vulnerabilities. Here, we will delve deeper into how the exclusion of cloud service vulnerabilities impacts enterprise vulnerability and risk management.

Traditional vulnerability and risk management

CVE identifiers are the linchpin of traditional vulnerability management processes. Besides being an identifier for vulnerabilities, the CVE system allows different services and business processes to interoperate, making enterprise IT environments more secure. For example, a network vulnerability scanner can identify whether a vulnerability (e.g. CVE-2018-1234) is present in a deployed system by querying said system.

The queries can be conducted in many ways, such as via a banner grab, querying the system for what software is installed, or even via proof of concept exploits that have been de-weaponized. Such queries confirm the existence of the vulnerability, after which risk management and vulnerability remediation can take place.
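
As a sketch of that version-based check, a scanner’s banner-grab logic might look like the following. The product name and the CVE-to-version mapping here are invented for illustration (CVE-2018-1234 is the placeholder ID from the example above, not a real advisory):

```python
import re

# Hypothetical mapping of CVE IDs to (product, first fixed version).
# Illustrative data only, not real advisories.
VULNERABLE_BEFORE = {
    "CVE-2018-1234": ("ExampleHTTPd", (2, 4, 30)),
}

def parse_banner(banner):
    """Extract a product name and version tuple from a grabbed banner."""
    match = re.match(r"(\w+)/(\d+)\.(\d+)\.(\d+)", banner)
    if not match:
        return None, None
    name = match.group(1)
    version = tuple(int(g) for g in match.groups()[1:])
    return name, version

def affected_cves(banner):
    """Return CVE IDs whose fix landed in a version newer than the banner's."""
    name, version = parse_banner(banner)
    hits = []
    for cve, (product, fixed_in) in VULNERABLE_BEFORE.items():
        if name == product and version is not None and version < fixed_in:
            hits.append(cve)
    return hits
```

A real scanner layers de-weaponized exploit checks on top of this, since banners can lie or be hidden, but the core logic is the same: map an observed version to known CVE IDs.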

Once the existence of the vulnerability is confirmed, enterprises must conduct risk management activities. Enterprises might first prioritize vulnerability remediation according to the criticality of the vulnerabilities. The Common Vulnerability Scoring System (CVSS) is one common basis for triaging vulnerabilities: the system gives each vulnerability a score according to how critical it is, and from there enterprises can prioritize and remediate the more critical ones. Like other vulnerability information, CVSS scores are normally associated with CVE IDs.
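
That CVSS-based triage can be sketched in a few lines. The CVE IDs and scores below are placeholders; the 7.0 threshold is an assumption, roughly where CVSS draws the line for “high” severity:

```python
def triage(findings, threshold=7.0):
    """Split findings into urgent and backlog buckets by CVSS base score,
    most critical first within each bucket."""
    ordered = sorted(findings, key=lambda f: f["cvss"], reverse=True)
    urgent = [f["cve"] for f in ordered if f["cvss"] >= threshold]
    backlog = [f["cve"] for f in ordered if f["cvss"] < threshold]
    return urgent, backlog
```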

Next, mitigating actions can be taken to remediate the vulnerabilities. This could mean applying patches, implementing workarounds, or adding security controls. How the organization chooses to address the vulnerability is an exercise in risk management: it has to carefully balance its resources against its risk appetite. Generally, organizations choose risk avoidance/rejection, risk acceptance, or risk mitigation.

Risk avoidance and rejection is fairly straightforward. Here, the organization doesn’t want to mitigate the vulnerability; based on the information available, it determines that the risk the vulnerability poses is above its risk threshold, and it stops using the vulnerable software.

Risk acceptance refers to when the organization, based on information available, determines that the risk posed is below their risk threshold and decides to accept the risk.

Lastly, in risk mitigation, the organization chooses to take mitigating actions and implement security controls that will reduce the risk. In traditional environments, such mitigating actions are possible because the organization generally owns and controls the infrastructure that provisions the IT service. For example, to mitigate a vulnerability, organizations are able to implement firewalls, intrusion detection systems, conduct system hardening activities, deactivate a service, change the configuration of a service, and many other options.

Thus, in traditional IT environments, organizations are able to take many mitigating actions because they own and control the stack. Furthermore, organizations have access to vulnerability information with which to make informed risk management decisions.

Cloud service customer challenges

Compared to traditional IT environments, the situation is markedly different for external cloud environments. The differences all stem from organizations not owning and controlling the infrastructure that provisions the cloud service, as well as not having access to vulnerability data of cloud native services.

Enterprise users don’t have ready access to cloud native vulnerability information because there is no way to officially associate the data with cloud native vulnerabilities, as CVE IDs are not generally assigned to them. Consequently, it’s difficult for enterprises to make an informed, risk-based decision regarding a vulnerable cloud service. For example, when should an enterprise customer reject the risk and stop using the service, or accept the risk and continue using it?

Furthermore, even if CVE IDs were assigned to cloud native vulnerabilities, the differences between traditional and cloud environments are so vast that the vulnerability data normally associated with a CVE in a traditional environment is inadequate when dealing with cloud service vulnerabilities. For example, in a traditional IT environment, CVEs are linked to the version of a piece of software. An enterprise customer can verify that a vulnerable version of the software is running by checking its version. In cloud services, the versioning of the software (if there is one!) is usually known only to the cloud service provider and is not made public. Additionally, the enterprise user is unable to apply security controls or other mitigations to address the risk of a vulnerability.

This is not to say that CVEs and the associated vulnerability data are useless for cloud services. Instead, we should consider including vulnerability data that is useful in the context of a cloud service. In particular, cloud service vulnerability data should help enterprise cloud customers make the important risk-based decision of when to continue or stop using the service.

Thus, just as enterprise customers must trust cloud service providers with their sensitive data, they must also trust, blindly, that the cloud service providers are properly remediating the vulnerabilities in their environment in a timely manner.

The CVE gap

With the increasing global adoption and proliferation of cloud services, the exclusion of service vulnerabilities from the CVE system and the impacts of said exclusion have left a growing gap that the cloud services industry should address. This gap not only impacts enterprise vulnerability and risk management but also other key stakeholders in the cloud services industry.

In the next post, we’ll explore how other key stakeholders are affected by the shortcomings of cloud service vulnerability management.

Please let us know what you think about the INC3’s impacts on cloud service vulnerability and risk management in the comment section below, or you can also email us.

Cloud Migration Strategies and Their Impact on Security and Governance

By Peter HJ van Eijk, Head Coach and Cloud Architect, ClubCloudComputing.com

Public cloud migrations come in different shapes and sizes, but I see three major approaches. Each of these has very different technical and governance implications.

Three approaches to cloud migration

Companies dying to get rid of their data centers often get started on a ‘lift and shift’ approach, where applications are moved from existing servers to equivalent servers in the cloud. The cloud service model consumed here is mainly IaaS (infrastructure as a service). Not much is outsourced to cloud providers here. Contrast that with SaaS.

The other side of the spectrum is adopting SaaS solutions. More often than not, these trickle in from the business side, not from IT. These could range from small meeting planners to full blown sales support systems.

More recently, developers have started to embrace cloud native architectures. Ultimately, both the target environment as well as the development environment can be cloud based. The cloud service model consumed here is typically PaaS.

I am not here to advocate the benefits of one over the other; I think there can be a business case for each of these.

The categories also have some overlap. Lift and shift can require some refactoring of code, to have it better fit cloud native deployments. And hardly any SaaS application is stand alone, so some (cloud native) integration with other software is often required.

Profound differences

The big point I want to make here is that there are profound differences in the issues that each of these categories faces, and the hard decisions that have to be made. Most of these decisions are about governance and risk management.

With lift and shift, the application functionality is pretty clear, but bringing that out to the cloud introduces data risks and technical risks. Data controls may be insufficient, and the application’s architecture may not be a good match for cloud, leading to poor performance and high cost.

One group of SaaS applications stems from ‘shadow IT’. The people that adopt them typically pay little attention to existing risk management policies. These can also add useless complexity to the application landscape. The governance challenges for these are obvious: consolidate and make them more compliant with company policies.

Another group of SaaS applications is the reincarnation of the ‘enterprise software package’. Think ERP, CRM or HR applications. These are typically run as a corporate project, with all its change management issues, except that you don’t have to run it yourself.

The positive side of SaaS solutions, in general, is that they are likely to be cloud native, which could greatly reduce their risk profile. Of course, this has to be validated, and a minimum risk control is to have a good exit strategy.

Finally, cloud native development is the most exciting, rewarding and risky approach. This is because it explores and creates new possibilities that can truly transform an organization.

One of the most obvious balances to strike here is between speed of innovation and independence of platform providers. The more you are willing to commit yourself to an innovative platform, the faster you may be able to move. The two big examples I see of that are big data and internet of things. The major cloud providers have very interesting offerings there, but moving a fully developed application from one provider to another is going to be a really painful proposition. And of course, the next important thing is for developers to truly understand the risks and benefits of cloud native development.

Again, big governance and risk management are issues to address.

Peter van Eijk is one of the world’s most experienced cloud trainers. He has worked for 30+ years in research, with IT service providers and in IT consulting (University of Twente, AT&T Bell Labs, EDS, EUNet, Deloitte). In more than 100 training sessions he has helped organizations align on security and speed up their cloud adoption. He is an authorized CSA CCSK and (ISC)2 CCSP trainer, and has written or contributed to several cloud training courses. 

Continuous Monitoring in the Cloud

By Michael Pitcher, Vice President, Technical Cyber Services, Coalfire Federal

I recently spoke at the Cloud Security Alliance’s Federal Summit on the topic “Continuous Monitoring / Continuous Diagnostics and Mitigation (CDM) Concepts in the Cloud.” As government has moved and will continue to move to the cloud, it is becoming increasingly important to ensure continuous monitoring goals are met in this environment. Specifically, cloud assets can be highly dynamic, lacking persistence, and thus traditional methods for continuous monitoring that work for on-premise solutions don’t always translate to the cloud.

Coalfire has been involved with implementing CDM for various agencies and is the largest Third Party Assessment Organization (3PAO), having done more FedRAMP authorizations than anyone, uniquely positioning us to help customers think through this challenge. However, these concepts and challenges are not unique to the government agencies that are a part of the CDM program; they also translate to other government and DoD communities as well as commercial entities.

To review, Phase 1 of the Department of Homeland Security (DHS) CDM program focused largely on static assets and for the most part excluded the cloud. It was centered around building and knowing an inventory, which could then be enrolled in ongoing scanning, as frequently as every 72 hours. The objective is to determine if assets are authorized to be on the network, are being managed, and if they have software installed that is vulnerable and/or misconfigured. As the cloud becomes a part of the next round of CDM, it is important to understand how the approach to these objectives needs to adapt.

Cloud services enable resources to be allocated, consumed, and de-allocated on the fly to meet peak demands. Just about any system is going to have times where more resources are required than others, and the cloud allows compute, storage, and network resources to scale with this demand. As an example, within Coalfire we have a Security Parsing Tool (Sec-P) that spins up compute resources to process vulnerability assessment files that are dropped into a cloud storage bucket. The compute resources only exist for a few seconds while the file gets processed, and then they are torn down. Examples such as this, as well as serverless architectures, challenge traditional continuous monitoring approaches.

However, potential solutions are out there, including:

  • Adopting built-in services and third-party tools
  • Deploying agents
  • Leveraging Infrastructure as Code (IaC) review
  • Using sampling for validation
  • Developing a custom approach

Adopting built-in services and third-party tools

Dynamic cloud environments highlight the inadequacies of performing active and passive scanning to build inventories. Assets may simply come and go before they can be assessed by a traditional scan tool. Each of the major cloud service providers (CSPs) and many of the smaller ones provide inventory management services in addition to services that can monitor resource changes – examples include AWS Systems Manager Inventory and CloudWatch, Microsoft’s Azure Resource Manager and Activity Log, and Google’s Cloud Asset Inventory and Cloud Audit Logs. There are also quality third-party applications that can be used, some of them even already FedRAMP authorized. Regardless of the service/tool used, the key here is interfacing them with the integration layer of an existing CDM or continuous monitoring solution. This can occur via API calls to and from the solution, which are made possible by the current CDM program requirements.
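
Whatever inventory service feeds the integration layer, the CDM-side core is a diff between successive snapshots. A minimal, provider-agnostic sketch, assuming each snapshot is a dict keyed by asset ID as returned from an inventory API:

```python
def inventory_delta(previous, current):
    """Compare two inventory snapshots (asset-id -> metadata dicts) and
    report which assets appeared and which disappeared between polls."""
    prev_ids, curr_ids = set(previous), set(current)
    return {
        "added": sorted(curr_ids - prev_ids),
        "removed": sorted(prev_ids - curr_ids),
    }
```

The “added” list is what gets enrolled for ongoing assessment; the “removed” list keeps the inventory from accumulating assets that no longer exist.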

Deploying agents

For resources that are going to have some degree of persistence, agents are a great way to perform continuous monitoring. Agents can check in with a master to maintain the inventory and also perform security checks once the resource is spun up, instead of having to wait for a sweeping scan. Agents can be installed as a part of the build process or even be made part of a deployment image. Interfacing with the master node that controls the agents and comparing that to the inventory is a great way to perform cloud-based “rogue” asset detection, a requirement under CDM. This concept employed on-premises is really about finding unauthorized assets, such as a personal laptop plugged into an open network port. In the cloud it is all about finding assets that have drifted from the approved configuration and are out of compliance with the security requirements.
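
The comparison itself is a simple set difference. A sketch, assuming both the CSP inventory and the agent master can be queried for lists of asset IDs:

```python
def rogue_assets(inventory_ids, agent_checkins):
    """Assets present in the CSP inventory but never checking in with the
    agent master have drifted from the approved, agent-instrumented build."""
    return sorted(set(inventory_ids) - set(agent_checkins))
```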

For resources such as our Coalfire Sec-P tool from the previous example, where it exists as code more than 90 percent of the time, we need to think differently. An agent approach may not work as the compute resources may not exist long enough to even check in with the master, let alone perform any security checks.

Infrastructure as code review

IaC is used to deploy and configure cloud resources such as compute, storage, and networking. It is basically a set of templates that “programs” the infrastructure. It is not a new concept for the cloud, but the speed at which environments change in the cloud is bringing IaC into the security spotlight.

Now, we need to consider how we can perform assessment on the code that builds and configures the resources. There are many tools and different approaches on how to do this; application security is not anything new, it just must be re-examined when we consider it part of performing continuous monitoring on infrastructure. The good news is that IaC uses structured formats and common languages such as XML, JSON, and YAML. As a result, it is possible to use tools or even write custom scripts to perform the review. This structured format also allows for automated and ongoing monitoring of the configurations, even when the resources only exist as code and are not “living.” It is also important to consider what software is spinning up with the resources, as the packages that are leveraged must include up-to-date versions that do not have vulnerabilities. Code should undergo a security review when it changes, and thus the approved code can be continuously monitored.
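
As an illustration of such a custom script, the following reviews a simplified, CloudFormation-like JSON template for security-group rules open to the whole internet. The `Resources`/`IngressRules` layout is invented for the example; a real template format would differ:

```python
import json

def find_open_ingress(template_text):
    """Flag security-group style rules that allow ingress from anywhere
    (0.0.0.0/0) in a simplified, hypothetical IaC template layout."""
    template = json.loads(template_text)
    findings = []
    for name, resource in template.get("Resources", {}).items():
        for rule in resource.get("Properties", {}).get("IngressRules", []):
            if rule.get("Cidr") == "0.0.0.0/0":
                findings.append((name, rule.get("Port")))
    return findings
```

Because the template is structured data, a check like this can run in the deployment pipeline and re-run whenever the code changes, even while the resources it describes only exist as code.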

Setting asset expiry is one way to enforce CDM principles in a high-DevOps environment that leverages IaC. The goal of CDM is to assess assets every 72 hours, and thus we can set them to expire (get torn down, and therefore require a rebuild) within that timeframe, so we know they are living on fresh infrastructure built with approved code.
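
The expiry check itself is trivial; a sketch against the 72-hour CDM window:

```python
from datetime import datetime, timedelta

# CDM's ongoing-assessment window, per the program goal described above.
MAX_AGE = timedelta(hours=72)

def expired(asset_built_at, now):
    """An asset older than the assessment window must be torn down and
    rebuilt from freshly reviewed IaC."""
    return now - asset_built_at > MAX_AGE
```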

Sampling

Sampling is to be used in conjunction with the methods above. In a dynamic environment where the total number of assets is always changing, there should be a solid core of the fleet that can be scanned via traditional means of active scanning. We just need to accept that we are not going to be able to scan the complete inventory. There should also be far fewer profiles, or “gold images,” than there are total assets. The idea is that if you can get at least 25% of each profile in any given scan, there is a good chance you are going to find all the misconfigurations and vulnerabilities that exist on all the resources of the same profile, and/or identify if assets are drifting from the fleet. This is enough to identify systemic issues such as bad deployment code or resources being spun up with out-of-date software. If you are finding resources in a profile that have a large discrepancy with the others in that same profile, then that is a sign of DevOps or configuration management issues that need to be addressed. We are not giving up on the concept of having a complete inventory, just accepting the fact that there really is no such thing.
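
A sketch of that per-profile sampling, rounding up so even small profiles contribute at least one asset. For determinism this sketch takes the first few assets of each profile; a real implementation would pick randomly each scan cycle:

```python
import math
from collections import defaultdict

def sample_per_profile(assets, fraction=0.25):
    """Group (asset_id, profile) pairs by profile ("gold image") and pick
    at least `fraction` of each group for traditional active scanning."""
    by_profile = defaultdict(list)
    for asset_id, profile in assets:
        by_profile[profile].append(asset_id)
    sample = {}
    for profile, ids in by_profile.items():
        k = max(1, math.ceil(len(ids) * fraction))
        sample[profile] = ids[:k]  # use random.sample(ids, k) in practice
    return sample
```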

Building IaC assets specifically for the purposes of performing security testing is a great option to leverage as well. These assets can have persistence and be “enrolled” into a continuous monitoring solution to report on the vulnerabilities in a similar manner to on-premises devices, via a dashboard or otherwise. The total number of vulnerabilities in the fleet is the quantity found on these sample assets, multiplied by the number of those assets that are living in the fleet. As we stated above, we can get this quantity from the CSP services or third-party tools.
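
That extrapolation is simple arithmetic; a sketch, assuming per-profile vulnerability counts from the enrolled sample assets and live-asset counts from the CSP services or third-party tools mentioned earlier:

```python
def estimated_fleet_vulns(sample_vuln_counts, live_counts):
    """Extrapolate total fleet vulnerabilities: for each profile, the count
    found on its sample asset times the number of live assets built from
    that same profile."""
    return sum(
        sample_vuln_counts[profile] * live_counts.get(profile, 0)
        for profile in sample_vuln_counts
    )
```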

Custom approaches

There are many different CSPs out there for the endless cloud-based possibilities, and all CSPs have various services and tools available from them, and for them. What I have reviewed are high-level concepts, but each customer will need to dial in the specifics based on their use cases and objectives.

Cloud Security Trailing Cloud App Adoption in 2018

By Jacob Serpa, Product Marketing Manager, Bitglass

In recent years, the cloud has attracted countless organizations with its promises of increased productivity, improved collaboration, and decreased IT overhead. As more and more companies migrate, more and more cloud-based tools arise.

In its fourth cloud adoption report, Bitglass reveals the state of cloud in 2018. Unsurprisingly, organizations are adopting more cloud-based solutions than ever before. However, their use of key cloud security tools is lacking. Read on to learn more.

The Single Sign-On Problem

Single sign-on (SSO) is a basic, but critical security tool that authenticates users across cloud applications by requiring them to sign in to a single portal. Unfortunately, a mere 25 percent of organizations are using an SSO solution today. When compared to the 81 percent of companies that are using the cloud, it becomes readily apparent that there is a disparity between cloud usage and cloud security usage. This is a big problem.

The Threat of Data Leakage

While using the cloud is not inherently more risky than the traditional method of conducting business, it does lead to different threats that must be addressed in appropriate fashions. As adoption of cloud-based tools continues to grow, organizations must deploy cloud-first security solutions in order to defend against modern-day threats. While SSO is one such tool that is currently underutilized, other relevant security capabilities include shadow IT discovery, data loss prevention (DLP), contextual access control, cloud encryption, malware detection, and more. Failure to use these tools can prove fatal to any enterprise in the cloud.

Microsoft Office 365 vs. Google’s G Suite

Office 365 and G Suite are the leading cloud productivity suites. They each offer a variety of tools that can help organizations improve their operations. Since Bitglass’ 2016 report, Office 365 has been deployed more frequently than G Suite. Interestingly, this year, O365 has extended its lead considerably. While roughly 56 percent of organizations now use Microsoft’s offering, about 25 percent are using Google’s. The fact that Office 365 has achieved more than two times as many deployments as G Suite highlights Microsoft’s success in positioning its product as the solution of choice for the enterprise.

The Rise of AWS

Through infrastructure as a service (IaaS), organizations are able to avoid making massive investments in IT infrastructure. Instead, they can leverage IaaS providers like Microsoft, Amazon, and Google in order to achieve low-cost, scalable infrastructure. In this year’s cloud adoption report, every analyzed industry exhibited adoption of Amazon Web Services (AWS), the leading IaaS solution. While the technology vertical led the way at 21.5 percent adoption, 13.8 percent of all organizations were shown to use AWS.

To gain more information about the state of cloud in 2018, download Bitglass’ report, Cloud Adoption: 2018 War.

Five Cloud Migration Mistakes That Will Sink a Business

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Today, with the growing popularity of cloud computing, there exists a wealth of resources for companies that are considering—or are in the process of—migrating their data to the cloud. From checklists to best practices, the Internet teems with advice. But what about the things you shouldn’t be doing? The best-laid plans of mice and men often go awry, and so, too, will your cloud migration unless you manage to avoid these common cloud mistakes:

“The Cloud Service Provider (CSP) will do everything.”

Cloud computing offers significant advantages: cost, scalability, on-demand service and infinite bandwidth. And the processes, procedures, and day-to-day activities a CSP delivers provide every cloud customer, regardless of size, with the capabilities of a Fortune 50 IT staff. But nothing is idiot-proof. CSPs aren’t responsible for everything; under the shared responsibility model, they are only in charge of the parts they can control, and they expect customers to own the rest of the risk mitigation.

Advice: Take the time upfront to read the best practices of the cloud you’re deploying to. Follow cloud design patterns and understand your responsibilities–don’t trust that your cloud service provider will take care of everything. Remember, it is a shared responsibility model.

“Cryptography is the panacea; data-in-motion, data-at-rest and data-in-transit protection works the same in the cloud.”

Cybersecurity professionals refer to the triad balance: confidentiality, integrity, and availability. Increasing one decreases the other two. In the cloud, availability and integrity are built into every service and even guaranteed with Service Level Agreements (SLAs). The last bullet in the confidentiality chamber involves cryptography: mathematically adjusting information to make it unreadable without the appropriate key. However, cryptography works differently in the cloud. Customers expect service offerings to work together, and so the CSP provides the “80/20” security with less effort (i.e., CSP-managed keys).

Advice: Expect that while you must use encryption for the cloud, there will be a learning curve. Take the time to read through the FAQs and understand what threats each architectural option really opens you up to.

“My cloud service provider’s default authentication is good enough.”

One of cloud’s tenets is self-service. CSPs have a duty to protect not just you, but themselves and everyone else that’s virtualized on their environment. One of the early self-service aspects is authentication: the act of proving you are who you say you are. There are three ways to accomplish this proof: 1) reply with something you know (i.e., a password); 2) provide something you have (i.e., a key or token); or 3) produce something you are (i.e., a fingerprint or retina scan). These are all commonplace activities. For example, most enterprise systems require a password with a complexity factor (upper/lower/character/number), and even banks now require customers to enter additional password codes received as text messages. These techniques make authentication stronger and more reliable. Multi-factor authentication uses more than one of them.

Advice: Cloud Service Providers offer numerous authentication upgrades, including some sort of multi-factor authentication option—use them.

“Lift and shift is the clear path to cloud migration.”

Cloud cost advantages evaporate quickly due to poor strategic decisions or architectural choices. A lift-and-shift approach to cloud migration is one where existing virtualized images or snapshots of current in-house systems are simply transformed and uploaded onto a Cloud Service Provider’s system. If you want to run the exact same system, renting it on an IaaS platform will cost more than buying the hardware as a capital asset and depreciating it over three years. The lift-and-shift approach ignores the elastic ability to scale up and down on demand, and doesn’t use rigorously tested cloud design patterns that result in resiliency and security. There may be systems within a design that are appropriate to copy exactly; however, placing an entire enterprise architecture directly onto a CSP would be costly and inefficient.
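
The underlying arithmetic is easy to sketch. All figures below are hypothetical placeholders, not real pricing; the point is only that an always-on rented instance is compared against capex plus ongoing operating cost over the same depreciation period:

```python
def three_year_cost(monthly_iaas_rate, hardware_capex, monthly_opex):
    """Compare three years of renting an always-on IaaS instance against
    buying hardware and depreciating it over the same 36 months."""
    months = 36
    iaas_total = monthly_iaas_rate * months
    on_prem_total = hardware_capex + monthly_opex * months
    return iaas_total, on_prem_total
```

With placeholder numbers (say, $400/month rented versus $9,000 capex plus $50/month to run), the rented instance comes out well ahead in cost, which is the lift-and-shift trap the text describes.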

Advice: Invest the time up front to redesign your architecture for the cloud, and you will benefit greatly.

“Of course, we’re compliant.”

Enterprise risk and compliance departments have decades of frameworks, documentation, and mitigation techniques. Cloud-specific control frameworks are less than five years old, but they are solid and become better understood each year.

However, adopting the cloud will need special attention, especially when it comes to non-enterprise risks such as an economic denial of service (credit card over-the-limit), third-party managed encryption keys that potentially give them access to your data (warrants/eDiscovery) or compromised root administrator account responsibilities (CSP shutting down your account and forcing physical verification for reinstatement).

Advice: These items don’t have direct analogs in the enterprise risk universe. Instead, your understanding of risk must expand, especially in highly regulated industries. Don’t invite massive fines, operational downtime, or reputational losses by failing to pay attention to a widened risk environment.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Cybersecurity and Privacy Certification from the Ground Up

By Daniele Catteddu, CTO, Cloud Security Alliance

The European Cybersecurity Act, proposed in 2017 by the European Commission, is the most recent of several policy documents adopted and/or proposed by governments around the world, each with the intent (among other objectives) to bring clarity to cybersecurity certifications for various products and services.

The reason why cybersecurity, and most recently privacy, certifications are so important is pretty obvious: They represent a vehicle of trust and serve the purpose of providing assurance about the level of cybersecurity a solution could provide. They represent, at least in theory, a simple mechanism through which organizations and individuals can make quick, risk-based decisions without the need to fully understand the technical specifications of the service or product they are purchasing.

What’s in a certification?

Most of us struggle to keep pace with technological innovations, and so we often find ourselves buying services and products without sufficient levels of education and awareness of the potential side effects these technologies can bring. We don’t fully understand the possible implications of adopting a new service, and sometimes we don’t even ask ourselves the most basic questions about the inherent risks of certain technologies.

In this landscape, certifications, compliance audits, trust marks and seals are mechanisms that help improve market conditions by providing a high-level representation of the level of cybersecurity a solution could offer.

Certifications are typically performed by a trusted third party (an auditor or a lab) who evaluates and assesses a solution against a set of requirements and criteria that are in turn part of a set of standards, best practices, or regulations. In the case of a positive assessment, the evaluator issues a certification or statement of compliance that is typically valid for a set length of time.

One of the problems with certifications under the current market condition is that they have a tendency to proliferate, which is to say that for the same product or service more than one certification exists. The example of cloud services is pretty illustrative of this issue. More than 20 different schemes exist to certify the level of security of cloud services, ranging from international standards to national accreditation systems to sectorial attestation of compliance.

Such a proliferation of certifications can produce the exact opposite of the result certifications were built for. Rather than supporting and streamlining the decision-making process, they can create confusion, and rather than increasing trust, they can breed uncertainty. It should be noted, however, that such proliferation isn’t always a bad thing. Sometimes it’s the result of the need to accommodate important nuances among security requirements.

Crafting the ideal certification

CSA has been a leader in cloud assurance, transparency and compliance for many years now, supporting the effort to improve the certification landscape. Our goal has been—and still is—to make the cloud and IoT technology environment more secure, transparent, trustworthy, effective and efficient by developing innovative solutions for compliance and certification.

It’s in this context that we are surveying our community and the market at-large to understand what both subject matter experts and laypersons see as the essential features and characteristics of the ideal certification scheme or meta-framework.

Our call to action?

Tell us—in a paragraph, a sentence or a word—what you think a cybersecurity and privacy certification should look like. Tell us what the scope should be (security/privacy, products/processes/people, cloud/IoT, global/regional/national), what level of assurance should be offered, which guarantees and liabilities are expected, what the tradeoff is between cost and value, and how it should be proposed and communicated to be understood and valuable for the community at large.

Tell us, but do it before July 2 because that’s when the survey closes.

How ChromeOS Dramatically Simplifies Enterprise Security

By Rich Campagna, Chief Marketing Officer, Bitglass

Google’s Chromebooks have enjoyed significant adoption in education, but until recently have seen very little interest in the enterprise. According to Gartner’s Peter Firstbrook in Securing Chromebooks in the Enterprise (6 March 2018), a survey of more than 700 respondents showed that nearly half of organizations definitely will or probably will purchase Chromebooks by EOY 2017. And Google has started developing an impressive list of case studies, including Whirlpool, Netflix, Pinterest, the Better Business Bureau, and more.

And why wouldn’t this trend continue? As the enterprise adopts cloud en masse, more and more applications are available anywhere through a browser – obviating the need for a full OS running legacy applications. Additionally, Chromebooks can represent a large cost savings – not only in terms of a lower up-front cost of hardware, but lower ongoing maintenance and helpdesk costs as well.

With this shift comes a very different approach to security. Since Chrome OS is hardened and locked down, the need to secure the endpoint diminishes, potentially saving a lot of time and money. At the same time, the primary storage mechanism shifts from the device to the cloud, meaning that the need to secure data in cloud applications, like G Suite, with a Cloud Access Security Broker (CASB) becomes paramount. Fortunately, the CASB market has matured substantially in recent years, and is now widely viewed as “ready for primetime.”

Overall, the outlook for Chromebooks in the enterprise is positive, with a very real possibility of dramatically simplifying security. Now, instead of patching and protecting thousands of laptops, the focus shifts toward protecting data in a relatively small number of cloud applications. Quite the improvement!

What If the Cryptography Underlying the Internet Fell Apart?

By Roberta Faux, Director of Research, Envieta

Without the encryption used to secure passwords for logging in to services like PayPal, Gmail, or Facebook, a user is left vulnerable to attack. Online security is becoming fundamental to life in the 21st century. Once large-scale quantum computing is achieved, all the secret keys we use to secure our online lives are in jeopardy.

The CSA Quantum-Safe Security Working Group has produced a new primer on the future of cryptography. This paper, “The State of Post-Quantum Cryptography,” is aimed at helping non-technical corporate executives understand what the impact of quantum computers on today’s security infrastructure will be.

Some topics covered include:
  • What Is Post-Quantum Cryptography
  • Breaking Public Key Cryptography
  • Key Exchange & Digital Signatures
  • Quantum Safe Alternatives
  • Transition Planning for a Quantum-Resistant Future

Quantum Computers Are Coming
Google, Microsoft, IBM, and Intel, as well as numerous well-funded startups, are making significant progress toward quantum computers. Scientists around the world are investigating a variety of technologies to make quantum computers real. While no one is sure when (or even if) quantum computers will be created, some experts believe that within 10 years a quantum computer capable of breaking today’s cryptography could exist.

Effects on Global Public Key Infrastructure
Quantum computing strikes at the heart of the security of the global public key infrastructure (PKI). PKI establishes secure keys for bidirectional encrypted communications over an insecure network. PKI authenticates the identity of information senders and receivers, as well as protects data from manipulation. The two primary public key algorithms used in the global PKI are RSA and Elliptic Curve Cryptography. A quantum computer would easily break these algorithms.

The security of these algorithms is based on intractably hard mathematical problems in number theory. However, they are only intractable for a classical computer, where each bit holds a single value (a 1 or a 0). On a quantum computer, where k qubits can represent not one but 2^k values, RSA and Elliptic Curve cryptography can be broken in polynomial time using Shor’s algorithm. If quantum computers can scale to work on even tens of thousands of qubits, today’s public key cryptography becomes immediately insecure.
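To make this concrete, here is a toy sketch (with deliberately tiny, insecure parameters of my own choosing, not taken from the white paper) of why fast factoring breaks RSA. The trial-division step stands in for Shor’s algorithm, which would make that step fast even for a 2048-bit modulus:

```python
# Toy RSA with insecure, illustrative parameters.
p, q = 61, 53                        # secret primes (real RSA uses ~1024-bit primes)
n = p * q                            # public modulus: 3233
e = 17                               # public exponent
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)              # anyone can encrypt with the public key (n, e)

# --- attacker's view: knows only (n, e, cipher) ---
def factor(n):
    """Trial division: exponential in the bit-length of n on classical hardware.
    Shor's algorithm performs this step in polynomial time on a quantum computer."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f, n // f
        f += 1
    raise ValueError("n is prime")

p2, q2 = factor(n)                        # factoring the public modulus...
d2 = pow(e, -1, (p2 - 1) * (q2 - 1))      # ...yields the private exponent
recovered = pow(cipher, d2, n)
assert recovered == msg                   # plaintext fully recovered
```

Everything an attacker needs beyond the public key is the factorization of n; that is the entire gap a scalable quantum computer would close.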

Post-Quantum Cryptography
Fortunately, there are cryptographically hard problems that are believed to be secure even from quantum attacks. These crypto-systems are known as post-quantum or quantum-resistant cryptography. In recent years, post-quantum cryptography has received an increasing amount of attention in academic communities as well as from industry. Cryptographers have been designing new algorithms to provide quantum-safe security.

Proposed algorithms are based on a number of underlying hard problems widely believed to be resistant to attacks even with quantum computers. These fall into the following classes:

  • Multivariate cryptography
  • Hash-based cryptography
  • Code-based cryptography
  • Supersingular elliptic curve isogeny cryptography

Our new white paper explains the pros and cons of the various classes of post-quantum cryptography. Most post-quantum algorithms will require significantly larger key sizes than existing public key algorithms, which may pose unanticipated issues such as incompatibility with some protocols. Bandwidth will need to increase for key establishment and signatures, and larger keys also mean more storage inside a device.
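A rough, illustrative comparison of public-key sizes shows why this matters. The classical figures below are standard; the post-quantum figures are ballpark values for early proposal families, not authoritative numbers for any particular parameter set:

```python
# Approximate public-key sizes in bytes. The post-quantum entries are
# illustrative, assumed ballpark values only.
sizes = {
    "ECC P-256 public key":         64,
    "RSA-2048 public key":          256,
    "lattice-based KEM public key": 1_100,      # ~1 KB class of proposals
    "code-based KEM public key":    1_000_000,  # McEliece-class, ~1 MB
}

baseline = sizes["ECC P-256 public key"]
for name, nbytes in sizes.items():
    print(f"{name:30s} {nbytes:>9,} B  ({nbytes / baseline:>8.1f}x ECC)")
```

Even the compact lattice-based proposals carry an order-of-magnitude bandwidth penalty per handshake relative to elliptic curves, and code-based keys can be four orders of magnitude larger.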

Cryptographic Standards
Cryptography is typically implemented according to a standard. Standard organizations around the globe are advising stakeholders to plan for the future. In 2015, the U.S. National Security Agency posted a notice urging the need to plan for the replacement of current public key cryptography with quantum-resistant cryptography. While there are quantum-safe algorithms available today, standards are still being put in place.

Standards organizations such as ETSI, IETF, ISO, and X9 are all working on recommendations. The U.S. National Institute of Standards and Technology (NIST) is currently working on a project to produce a draft standard for a suite of quantum-resistant algorithms in the 2022-2024 timeframe. This is a challenging process that has attracted worldwide debate. The various algorithms have advantages and disadvantages with respect to computation, key sizes and degree of confidence, and these factors need to be evaluated against the target environment.

Cryptographic Transition Planning
One of the most important issues the paper underscores is the need to begin planning for the cryptographic transition from existing public key cryptography to post-quantum cryptography. Now is the time to vigorously investigate the wide range of post-quantum cryptographic algorithms and find the best ones for future use. It is vital for corporate leaders to understand this and begin transition planning now.

The white paper, “The State of Post-Quantum Cryptography,” was released by the CSA Quantum-Safe Security Working Group. It introduces non-technical executives to the current and evolving landscape of cryptographic security.

Download the paper now.

Building a Foundation for Successful Cyber Threat Intelligence Exchange: A New Guide from CSA

By Brian Kelly, Co-chair/Cloud Cyber Incident Sharing Center (CISC) Working Group, and CSO/Rackspace

No organization is immune from cyber attack. Malicious actors collaborate with skill and agility, moving from target to target at a breakneck pace. With new attacks spreading from dozens of companies to a few hundred within a matter of days, visibility into the past cyber environment won’t cut it anymore. Visibility into what’s coming next is critical to staying alive.

Sophisticated organizations, particularly cloud providers, know the difference between a minor incident and massive breach lies in their ability to quickly detect, contain, and mitigate an attack. To facilitate this, they are increasingly participating in cyber intelligence and cyber incident exchanges, programs that enable cloud providers to share cyber-event information with others who may be experiencing the same issue or who are at risk for the same type of attack.

To help organizations navigate the sometimes treacherous waters of cyber-intelligence sharing programs, CSA’s Cloud Cyber Incident Sharing Center (Cloud-CISC) Working Group has produced Building a Foundation for Successful Cyber Threat Intelligence Exchange. This free report is the first in a series that will provide a framework to help corporations seeking to participate in cyber intelligence exchange programs that enhance their event data and incident response capabilities.

The paper addresses such challenges as:

  • Determining what event data to share. This is essential (and fundamental) information for organizations that struggle to understand their internal event data.
  • Incorporating cyber intelligence provided by others via email, a format which by its very nature limits the ability to integrate it with one’s own.
  • Scaling laterally to other sectors and vertically within one’s supply chains.
  • Understanding that the motive for sharing is not necessarily helping others, but rather supporting internal response capabilities.

Past, Present, Future

Previous programs were more focused on sharing information about cyber security incidents after the fact and acted more as a public service to others than as a tool to support rapid incident response. That’s changed, and today’s Computer Security Incident Response Teams have matured.

New tools and technologies in cyber intelligence, data analytics and security incident management have created new opportunities for faster and actionable cyber intelligence exchange. Suspicious event data can now be rapidly shared and analyzed across teams, tools and even companies as part of the immediate response process.

Even so, there are questions and concerns beyond simply understanding the basics of the exchange process itself:

  • How do I share this information without compromising my organization’s sensitive data?
  • How do I select an exchange platform that best meets my company’s needs?
  • Which capabilities and business requirements should I consider when building a value-driven cyber intelligence exchange program?

Because the cloud industry is already taking advantage of many of the advanced technologies that support cyber intelligence exchange—and has such a unique and large footprint across the IT infrastructure—we believe that we have a real opportunity to take the lead and make cyber-intelligence sharing pervasive.

The Working Group’s recommendations were based largely on the lessons learned through their own development and operation of Cloud CISC, as well as their individual experiences in managing these programs for their companies.

Our industry cannot afford to let another year pass working in silos while malicious actors collaborate against us. It is time to level the playing field, and perhaps even gain an advantage. Come join us.

 

Speeding the Secure Cloud Adoption Process

By Vinay Patel, Chair, CSA Global Enterprise Advisory Board, and Managing Director, Citigroup

Innovators and early adopters have been using the cloud for years, taking advantage of the quicker deployment, greater scalability, and cost savings of services. The growth of cloud computing continues to accelerate, offering more solutions with added features and benefits and, with proper implementation, enhanced security. In the age of information digitalization and innovation, enterprise users must keep pace with consumer demand and new technology solutions, ensuring they can meet both baseline capabilities and security requirements.

CSA’s new report, The State of Cloud Security 2018, is a free resource that provides a roadmap for developing best practices whereby providers, regulators, and the enterprise can come together to establish the baseline security requirements needed to protect organizational data.

The report, authored by the CSA Global Enterprise Advisory Board, examines such areas as the adoption of cloud and related technologies, what both enterprises and cloud providers are doing to ensure security requirements are met, how to best work with regulators, the evolving threat landscape, and goes on to touch upon the industry skills gap.

Among the report’s key takeaways are:

  • Exploration of case studies and potential use cases for blockchain, application containers, microservices and other technologies will be important to keep pace with market adoption and the creation of secure industry best practices.
  • With the rapid introduction of new features, safe default configurations and ensuring the proper use of features by enterprises should be a goal for providers.
  • As adversaries collaborate quickly, the information security community needs to respond to attacks swiftly with collaborative threat intelligence exchanges that include both providers and enterprise end users.
  • A staged approach on migrating sensitive data and critical applications to the cloud is recommended.
  • When meeting regulatory compliance, it is important for enterprises to practice strong security fundamentals to demonstrate compliance rather than use compliance to drive security requirements.

Understanding the use of cloud and related technologies will help in brokering the procurement and management of these services while maintaining proper responsibility for data security and ownership. Education and awareness around provider services and new technologies still need to improve in the enterprise. Small-scale adoption projects need to be shared so that security challenges and patterns can be adapted to scale with the business and across industry verticals. The skills gap, particularly around cloud and newer IT technologies, needs to be met by the industry through partnership and collaboration among all parties in the cyber ecosystem.

The state of cloud security is a work in progress with an ever-increasing variety of challenges and potential solutions. It is incumbent upon the cloud user community, therefore, to collaborate and speak with an amplified voice to ensure that their key security issues are heard and addressed.

Download the full report.

Five Reasons to Reserve Your Seat at the CCSK Plus Hands-on Course at RSAC 2018

By Ryan Bergsma, Training Program Director, Cloud Security Alliance

The IT job market is tough and it’s even tougher to stand out from the pack, whether it’s to your current boss or a prospective one. There is one thing, though, that can put you head and shoulders above the rest—achieving your Certificate of Cloud Security Knowledge (CCSK). CCSK certificate holders have an advantage over their colleagues and get noticed by employers across the IT industry, and no wonder.

It’s been called the “mother of all cloud computing security certifications” by CIO Magazine, and Search Cloud Security notes that it’s “a good alternative cloud security certification for an entry-level to midrange security professional with an interest in cloud security.” So it was no surprise when Certification Magazine listed CCSK at #1 on the Average Salary Survey 2016.

For those interested in taking their careers to the next level, we are offering the CCSK Plus Hands-on Course (San Francisco, April 15-16) at the 2018 RSA Conference.

Our intensive 2-day course gives you hands-on, in-depth cloud security training, where you’ll learn to apply your knowledge as you perform a series of exercises to complete a scenario bringing a fictional organization securely into the cloud.

Divided into six theoretical modules and six lab exercises, the course begins with a detailed description of cloud computing, and goes on to cover material from the official Security Guidance for Critical Areas of Focus in Cloud Computing, Cloud Controls Matrix v3.0.1 (CCM) documents from Cloud Security Alliance, and recommendations from the European Network and Information Security Agency (ENISA).

Still on the fence? Here are five reasons you need to register today.

  1. Get trained by THE best in the business. Rich Mogull, a prominent industry analyst and sought-after speaker at events such as RSAC and BlackHat, will be there to guide you through this 2-day, intensive cloud security course. Not only is he the most experienced CCSK trainer in the industry, but he created the course content. Need we say more?
  2. Gain actionable security knowledge. In addition to learning the foundational differences of cloud, you’ll acquire practical knowledge and the skills to build and maintain a secure cloud business environment right away. It’s good for you and good for your company.
  3. Make the boss sit up and notice. Your newfound knowledge will translate to increased confidence and credibility when working within the cloud, and just maybe a better job or dare we say, a raise?
  4. Move to the head of the class. By the end of the course, you’ll be prepared to take the CCSK exam to earn your Cloud Security Alliance CCSK v4.0 certificate, a highly regarded certification throughout the industry certifying competency in key cloud security areas. ‘Nuff said.
  5. Invest in your future. The course price includes the cost of the exam, a $395 value. That’s what we call a sound investment.

Still not convinced? Watch this and you will be.

Register.

CSA Summit at RSA Conference 2018 Turns Its Focus to Enterprise Grade Security: Will you be there?

By J.R. Santos,  Executive Vice President of Research, Cloud Security Alliance

Today’s enterprise cloud adoption has moved well beyond the early adopters to encompass a wide range of mission-critical business functions. As financial services, government and other industries with regulatory mandates have made significant steps into the cloud over the past year, it’s only fitting that this year’s CSA Summit at RSA Conference 2018, now in its ninth year, turn its attention to enterprise-grade security.

For both companies and governments, however, making this leap has not come without effort. It’s required a transformation in both the technology of security and the mindset of security professionals. To help facilitate this transformation, we’ll again be bringing together some of the best and brightest minds from across the industry to share the common practices that are enabling the shift to cloud as our dominant IT system.

Thought leaders from multi-national enterprises, government, cloud providers and the information security industry will be speaking on some of cloud security’s most pressing topics, including:

  • Appetite for Destruction – The Cloud Edition. Over the last two years, the multitude of data leaks and breaches in the cloud has skyrocketed. Many of these leaks are reminiscent of past security lessons, and some show new attributes unique to our evolving computing environments. In this short talk, Raj Samani, chief scientist at McAfee, takes a look at the past and peers toward the future.
  • Cloud Security Journey. Get a preview of how a major retailer solves the problem of security software chaos and fragmentation while addressing new security requirements in this session from Symantec and Albertsons Companies. You’ll get a real-world perspective on how they approached cloud security while addressing end-to-end compliance, data governance, and threat protection requirements.
  • A GDPR-Compliance & Preparation Report Card. With the impending May 2018 deadline for GDPR compliance, organizations worldwide need to account for the regulation in their security policies and programs. Join Netskope Chief Scientist Krishna Narayanaswamy and CSO Jason Clark for an interactive session that previews their recent study with the Cloud Security Alliance on how organizations are preparing for compliance.
  • The Software-Defined Perimeter in Action. Cyxtera’s Cybersecurity Officer Chris Day will chronicle how organizations have taken CSA’s Software-Defined Perimeter (SDP) from experimental to enterprise-grade. You’ll walk away with valuable insights and learn compelling best practices on how enterprises can make SDP adoption a reality.

Other discussions and panels will also explore new frontiers that are accelerating change in information security, such as artificial intelligence, blockchain and fog computing.

Register for RSAC and the Summit today using the discount code 18UCSAFD to receive $100 off the full conference pass to RSAC, or receive a complimentary expo pass with the code X8ECLOUD. The CSA Summit is a free event for all registered conference attendees, regardless of pass type.

For those interested in taking their careers to the next level, we also are offering the CCSK Plus Hands-on Course (April 15-16) at the RSA Conference 2018. Our intensive 2-day course gives you hands-on, in-depth cloud security training, where you’ll learn to apply your knowledge as you perform a series of exercises to complete a scenario bringing a fictional organization securely into the cloud and emerge prepared to take the Certificate of Cloud Security Knowledge exam.

The CCSK gives you a distinct edge over your cloud security colleagues. Why else would CIO Magazine have called it the “Mother of all cloud computing security certifications?” Certification Magazine even listed CCSK at #1 on the Average Salary Survey 2016.

So what are you waiting for? Register now.

 

Electrify Your Digital Transformation with the Cloud

By Tori Ballantine, Product Marketing, Hyland

Taking your organization on a digital transformation journey isn’t just a whimsical idea, something fun to daydream about, or an initiative that “other” companies probably have time to implement. It’s something every organization needs to seriously consider. If your business isn’t digital, it needs to be in order to remain competitive.

So if you take it as a given that you need to embrace digital transformation to survive and thrive in the current landscape, the next logical step is to look at how the cloud fits into your strategy. Because sure, it’s possible to digitally transform without availing yourself of the massive benefits of the cloud. But why would you?

Why would you intentionally leave on the table what could be one of the strongest tools in your arsenal? Why would you take a pass on the opportunity to transform – and vastly improve – the processes at the crux of how your business works?

Lightning strikes
In the case of content services, including capabilities like content management, process management and case management, cloud adoption is rising by the day. Companies with existing on-premises solutions are considering the cloud as the hosting location for their critical information, and companies seeking new solutions are looking at cloud deployments to provide them with the functionality they require.

If your company was born in the digital age, it’s likely that you inherently operate digitally. If your company was founded in the time before, perhaps you’re playing catch up.

Both types of companies can find major benefits in the cloud. Data may be created digitally and natively, but there is still paper that needs to be brought into the digital fold. Digitizing information, though, is just a small part of digital transformation. To truly take information management to the next level, the cloud offers transformative options that just aren’t available in a premises-bound solution.

People are overwhelmingly using the cloud in their personal lives, according to AIIM’s State of Information Management: Are Businesses Digitally Transforming or Stuck in Neutral? Of those polled, 75 percent use the cloud in their personal life and 68 percent report that they use the cloud for business. That’s nearly three-quarters of respondents!

When we look at the usage of cloud-based solutions in areas like enterprise content management (ECM) and related applications, 35 percent of respondents leverage the cloud as their primary content management solution; for collaboration and secure file sharing; or for a combination of primary content management and file sharing. These respondents are deploying these solutions either exclusively in the cloud or as part of hybrid on-prem/cloud solutions.

Another 46 percent are migrating all their content to the cloud over time; planning to leverage the cloud but haven’t yet deployed; or are still experimenting with different options. They are in the process of discerning exactly how best to leverage the power of the cloud for their organizations.

And only 11 percent have no plans for the cloud. Eleven percent! Can your business afford to be in that minority?

More and more, the cloud is becoming table stakes in information management. Organizations are growing to understand that a secure cloud solution can not only save them time and money, but also provide stronger security features, better functionality and larger storage capacity.

The bright ideas
So, what are some of the ways that leveraging the cloud for your content services can digitally transform your business?

  • Disaster recovery. When your information is stored on-premises and calamity strikes — a fire, a robbery, a flood — you’re out of luck. When your information is in the cloud, it’s up and ready to keep your critical operations running.
  • Remote access. Today’s workforce wants to be mobile, and they need to access their critical information wherever they are. A cloud solution empowers your workers by granting them the ability to securely access critical information from remote locations.
  • Enhanced security. Enterprise-level cloud security has come a long way and offers sophisticated protection that is out of reach for many companies to manage internally.

Here are other highly appealing advantages of cloud-based enterprise solutions, based on a survey conducted by IDG Enterprise:

  • Increased uptime
  • 24/7 data availability
  • Operational cost savings
  • Improved incident response
  • Shared/aggregated security expertise of vendor
  • Access to industry experts on security threats

Whether you’re optimizing your current practices or rethinking them from the ground up, these elements can help you digitally transform your business by looking to the cloud.

Can you afford not to?

What’s Hindering the Adoption of Cloud Computing in Europe?

As with their counterparts in North America, organizations across Europe are eagerly embracing cloud computing into their operating environment. However, despite the overall enthusiasm around the potential of cloud computing to transform their business practices, many CIOs have real concerns about migrating their sensitive data and applications to public cloud environments. Why? In essence, it boils down to a few core areas of concern:

  1. A perceived lack of clarity in existing Cloud Service Level Agreements and security policy agreements
  2. The application, monitoring, and enforcement of security SLAs
  3. The relative immaturity of cloud services

These issues, of course, are far from new and, in fact, great progress has been made over the past five years to address these and other concerns around fostering greater trust in cloud computing. The one thread that runs through all of them is transparency: the greater the transparency a cloud service provider can offer into its approach to information security, the more confident organizations will be in adopting public cloud providers and trusting them with their data and assets.

To this end, the European Commission (EC) launched the Cloud Selected Industry Group (SIG) on Certification in April of 2013 with the aim of supporting the identification of certifications and schemes deemed “appropriate” for the European Economic Area (EEA) market. Following this, ENISA (European Network and Information Security Agency) launched its Cloud Certification Schemes Metaframework (CCSM) initiative in 2014 to map the detailed security requirements used in the public sector to the security objectives described in existing cloud certification schemes. And of course, the Cloud Security Alliance has also played a role in defining security-specific certification schemes with the creation of the CSA Open Certification Framework (CSA OCF), which works to enable cloud providers to achieve global, accredited and trusted certification.

Beyond defining a common set of standards and certifications, SLAs have become an important proxy by which to gauge visibility into a cloud provider’s security and privacy capabilities. The specification of security parameters in Cloud Service Level Agreements (“secSLAs”) has been recognized as a mechanism to bring more transparency and trust to both cloud service providers and their customers. Unfortunately, the conspicuous lack of relevant cloud secSLA standards has also become a barrier to their adoption. For these reasons, standardized cloud secSLAs should become part of the more general SLAs/Master Service Agreements signed between CSPs and their customers. Current efforts from the CSA and ISO/IEC in this field are expected to bring some initial results by 2016.

This topic will be a key theme at this year’s EMEA Congress, taking place November 17-19 in Berlin, Germany, with a plenary panel on “Cloud Trust and Security Innovation” featuring Nathaly Rey, Head of Trust, Google for Work, as well as a track on Secure SLAs led by Dr. Michaela Iorga, Senior Security Technical Lead for Cloud Computing, NIST.

To register for the EMEA Congress, visit: https://csacongress.org/event/emea-2015/#registration.