CVE and Cloud Services, Part 1: The Exclusion of Cloud Service Vulnerabilities

By Kurt Seifried, Director of IT, Cloud Security Alliance and Victor Chin, Research Analyst, Cloud Security Alliance

The vulnerability management process has traditionally been supported by a finely balanced ecosystem of stakeholders, including security researchers, enterprises, and vendors. At the crux of this ecosystem is the Common Vulnerabilities and Exposures (CVE) identification system. To be assigned an ID, a vulnerability has to fulfill certain criteria. In recent times, these criteria have become problematic, as they exclude vulnerabilities in categories of IT services that are increasingly common.

This is the first in a series of blog posts that will explore the challenges and opportunities in enterprise vulnerability management in relation to the increasing adoption of cloud services.

Common Vulnerabilities and Exposures

CVE® is a list of entries, each containing an identification number, a description, and at least one public reference for publicly known cybersecurity vulnerabilities[1].

CVEs are identifiers for security vulnerabilities that are—or are expected to become—public. Traditionally, they are assigned by one of two entities: the CNA (CVE Numbering Authority) that exists specifically for that piece of software (e.g., Microsoft, which covers Microsoft software) or a CNA that has been given coverage of said software (e.g., The Debian Project, the Distributed Weakness Filing Project, and Red Hat all cover open source software to varying degrees). These CVEs are then published in the MITRE CVE database. Finally, they are consumed and republished by other organizations, often with additional information such as workarounds or fixes, which makes tracking and remediating those vulnerabilities possible.
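
Because CVE entries are designed to be consumed and republished programmatically, a quick way to see what a downstream consumer works with is to pull a single record from NVD, one of the major republishers. The sketch below assumes the NVD REST API 2.0 endpoint and its published JSON layout; treat the field names as illustrative and check the current NVD documentation before relying on them.

    # Minimal sketch: look up one CVE record as republished by NVD.
    # Endpoint and response layout assumed from NVD's public API docs.
    import requests

    NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

    def fetch_cve(cve_id: str) -> dict:
        """Fetch a single CVE record from the NVD republication service."""
        resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
        resp.raise_for_status()
        return resp.json()

    if __name__ == "__main__":
        record = fetch_cve("CVE-2007-0994")
        for vuln in record.get("vulnerabilities", []):
            cve = vuln["cve"]
            desc = next(d["value"] for d in cve["descriptions"] if d["lang"] == "en")
            print(cve["id"], "-", desc[:120])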

Customers of companies or organizations that are CNAs for their own products can be reasonably assured that CVE IDs are assigned to historical, current and future vulnerabilities found in those products.

CVE and Vulnerability Management

The CVE system is the linchpin of the vulnerability management process, as its widespread use and adoption allows different services and business processes to interoperate. The system provides a way for specific vulnerabilities to be tracked via the assignment of IDs. Enterprises, security researchers, penetration testers, software providers and even vulnerability scanning tools all use CVE IDs to track vulnerabilities in products. These IDs also allow important information regarding a vulnerability to be associated with it such as workarounds, vulnerable software versions, and Common Vulnerability Scoring System (CVSS) scores. Without the CVE system, it becomes difficult to track vulnerabilities in a way that allows the different stakeholders and their tools to interoperate.

Fig 1: Example of CVE and other associated information taken from CVEdetails.com (https://www.cvedetails.com/cve/CVE-2007-0994/)

CVE Inclusion Rules and Limitations

The decision to assign an ID to a vulnerability is governed by the Inclusion Rules: the assigner takes the vulnerability through the rules, and generally only a vulnerability that fulfills all five criteria is assigned an ID. For example, one of the Inclusion Rules, INC3, states that a vulnerability should only be assigned a CVE ID if it is customer-controlled or customer-installable. A vulnerability in Customer Relationship Management (CRM) software installed on a server owned and managed by an enterprise fulfills that requirement.

Fig 2: Inclusion Rule INC3 taken from cve.mitre.org (https://cve.mitre.org/cve/editorial_policies/counting_rules.html)

INC3, as it is currently worded, is problematic for a world increasingly dominated by cloud services. In the past, this inclusion rule worked well for the IT industry, as most enterprise IT services were provisioned on infrastructure owned by the enterprise. With the proliferation of cloud services, however, this particular rule has created a growing gap in enterprise vulnerability management. Cloud services, as we currently understand them, are not customer-controlled, so vulnerabilities in cloud services are generally not assigned CVE IDs. Information such as workarounds, affected software or hardware versions, proofs of concept, references, and patches is normally associated with a CVE ID, so without an ID that information is unavailable. Without the support of the CVE system, it becomes difficult, if not impossible, to track and manage these vulnerabilities.

Conclusion

The Cloud Security Alliance and the CVE board are currently exploring solutions to this problem.

One of the first tasks is to obtain industry feedback regarding a possible modification of INC3 to take into account vulnerabilities that are not customer-controlled. Such a change would officially put cloud service vulnerabilities in the scope of the CVE system. This would not only allow those vulnerabilities to be properly tracked but also enable important information to be associated with them.

Please let us know what you think about a change to INC3 and its impact on the vulnerability management ecosystem in the comments below, or email us.

Stay tuned for our next blog post where we will explore the impacts that the current Inclusion Rules have on enterprise vulnerability management.

[1] https://cve.mitre.org/

Software-Defined Perimeter Architecture Guide Preview

Part 1 in a four-part series.

By Jason Garbis, Vice President/Secure Access Products, Cyxtera Technologies Inc.

The Software-Defined Perimeter (SDP) Working Group was founded five years ago with a mission to promote and evangelize a new, more secure architecture for managing user access to applications. Since the initial publication of the SDP Specification, we've witnessed growing adoption and awareness throughout the industry. As practitioners, vendors, evangelists, and guides, we in the SDP Working Group have learned a great deal about SDP in practice, and we want to capture and share that knowledge.

This was the driver for us to create the forthcoming Software-Defined Perimeter Architecture Guide. We’ve decided to publish a preview blog series here to obtain feedback on this work-in-progress artifact, and to spark conversation about SDP architectures and deployments. Ultimately, we intend the final published Architecture Guide—scheduled for publication in Q4 2018—to encourage broader (and more successful) adoption of SDP architectures.

Please join the conversation in the SDP working group here—we’re open to feedback, questions, or even just good restaurant recommendations. Thanks for reading this, and we look forward to engaging with you.

In this first blog post, we're going to walk through the SDP Architecture Guide outline and provide color commentary. Keep in mind that this document is still a work in progress, so the content and structure may well change prior to publication. Let's dive in:

  • Introduction
    • Why We Wrote This Document
    • Target Audience
    • Goals
    • SDP Scenarios

In the introduction, we provide the motivation for the document, articulate who our target audience is, and explain our goals. Then, we enumerate SDP scenarios (AKA use cases), briefly explaining each one, and exploring the benefits that SDP provides in that scenario.

  • SDP, Zero Trust, and Google’s BeyondCorp

In addition to SDP, there is a lot of noise and activity in today’s marketplace around the Zero-Trust philosophy, and to some degree about Google’s internal BeyondCorp security initiative. In this section, we attempt to make sense of this and explain the similarities and differences between them.

  • SDP Overview
    • Core SDP Concepts
    • SDP Architecture
    • SDP Deployment Models
      • Client-to-Gateway Model
      • Client-to-Server Model
      • Server-to-Server Model
      • Client-to-Server-to-Client Model
      • Client-to-Gateway-to-Client Model
      • An Alternative Architecture: The Cloud-Routed Model

This section presents the foundational elements of SDP, including its core underlying concepts. We also dive into the SDP architecture and discuss each of the SDP deployment models.

    • Single-Packet Authorization
      • SPA Benefits

Single-Packet Authorization (SPA) is one of the most important parts of SDP. By compensating for the fundamentally open (and insecure) nature of TCP/IP, SPA enables secure and reliable deployment of SDP Controllers and Gateways onto insecure and public networks. In this section, we analyze the SPA protocol, suggest some improvements, and expand upon its benefits to SDP.
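
To make the idea concrete, here is a toy illustration of an SPA-style packet in Python: a single UDP datagram carrying a client ID, nonce, and timestamp, authenticated with an HMAC over a pre-shared seed. The field layout, port number, and HMAC construction are simplifying assumptions of our own, not the wire format defined in the SDP specification.

    # Toy SPA-style packet: the gateway stays silent unless the HMAC
    # verifies, so closed ports reveal nothing to scanners.
    # Field layout and port are illustrative assumptions only.
    import hashlib, hmac, os, socket, struct, time

    SHARED_SEED = b"per-client-secret-provisioned-out-of-band"
    CLIENT_ID = 42

    def build_spa_packet() -> bytes:
        nonce = os.urandom(8)                      # defeats replay
        ts = int(time.time())                      # bounds packet lifetime
        body = struct.pack("!I8sQ", CLIENT_ID, nonce, ts)
        mac = hmac.new(SHARED_SEED, body, hashlib.sha256).digest()
        return body + mac

    def send_spa(gateway: str, port: int = 62201) -> None:
        # Fire-and-forget UDP: no response is expected on failure.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.sendto(build_spa_packet(), (gateway, port))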

    • SDP Policy Model
      • SDP Policy Overview
      • Policy Components

SDP, as a specification, is silent on a policy model. In this section, we introduce the elements that an SDP policy model should have and the corresponding capabilities that an SDP platform should be able to express. We conclude this section with a few example policies.
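
As a teaser of what those example policies might look like, here is one hypothetical shape for a policy entry, expressed as a Python structure. Every field name below is our own assumption for illustration; the actual model is the subject of the Architecture Guide section itself.

    # Hypothetical SDP policy entry; all field names are illustrative.
    EXAMPLE_POLICY = {
        "name": "finance-erp-access",
        "subjects": {
            "group": "finance",
            "device_posture": ["disk_encrypted", "os_patched"],
        },
        "conditions": {"mfa_verified": True, "geo": ["US", "CA"]},
        "actions": {
            "allow_services": [
                {"host": "erp.internal", "port": 443, "protocol": "tcp"},
            ],
            "log_level": "full",
        },
    }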

  • SDP in the Enterprise
    • Architecture Considerations
    • Security and IT Technologies
      • SIEM
      • Firewalls
      • Intrusion Detection/Prevention Systems
      • Virtual Private Networks
      • Next-Generation Firewalls
      • Identity and Access Management
      • NAC
      • SDN
      • EMM / MDM
      • Web Application Firewalls
      • Cloud Access Security Brokers

This section introduces a simplified (but prototypical) enterprise model, exploring how each of the security and IT technologies shown above is impacted by the deployment of SDP.

  • SDP Business Benefits

We conclude with the business benefits that SDP can deliver. This section, which will be constructed in a tabular format, will provide an overview of these benefits. We look forward to providing more detailed, quantified benefits and case studies in a future document.

Thanks for reading through the outline. In our next blog post in this series we’ll talk through the SDP Core Concepts table.

Jason Garbis is Vice President of Secure Access Products at Cyxtera, a provider of secure infrastructure for today’s hybrid environments, where he leads strategy and management for the company’s security solutions. Jason has over 25 years of product management, engineering, and consulting experience at security and technology firms including RSA, HPE, BMC, and Iona. He is co-chair of the Software Defined Perimeter (SDP) Working Group at the Cloud Security Alliance, holds a CISSP certification, is a published author, and led the creation of the Cloud Security Alliance initiative applying Software-Defined Perimeter to Infrastructure-as-a-Service environments.

Convincing Organizations to Say “Yes to InfoSec”

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Security departments have their hands full. The first half of my career was government-centric, and we always seemed to be the "no" team, eliminating most initiatives before they started. The risks were often found to outweigh the benefits, and unless there was a very strong executive sponsor, say the CEO or Sector President, the ideas would be shelved.

More recently, as a response to the security "no" team, IT staff started several "Shadow IT" projects. People began using cloud computing systems and pay-as-you-go strategies on a corporate credit card to quickly develop and roll out projects before anyone in security could get a word in.

This "beg forgiveness" approach hamstrung security on several projects, especially when a data leakage incident occurred or a breach was in progress. What's more, we weren't unique in seeing shadow projects. They have increasingly become the norm as IT staff looking to move initiatives forward come up against cybersecurity professionals hell-bent on maintaining security, who know that in the event of a breach, heads could easily roll. Most likely theirs.

Tired of being seen as the “no” team? Here are three ideas that could reshape the value of security to your company as a whole:

Demonstrate Trust

Trust messages need to come from outside the security department, even if they're ghostwritten or created internally. Whether the messenger is the CTO, CFO, or CEO, the message should convey that risk comes in many forms and that the security department takes all of them into account before approving or denying projects.

Many compliance frameworks have an HR or training domain, and some security departments successfully use this for mandatory training on topics like phishing. When a non-infosec colleague clicks on a simulated attack, the trust point can be reiterated with a reminder of example fines and costs. Breach notifications and PCI violations aren't cheap, after all.

Show Security as a Business Enabler

Share a couple of department wins where the security team was involved early in the process and added value to the deployed program. Look for examples like OAuth or Single Sign-On (SSO) simplifying a portal's usage, or a project where business continuity planning or encryption helped pass an acceptance audit.

Demonstrating that security builds team success and is no longer the “no” department pays dividends.

Provide Educational Incentives

Lastly, extend the educational aspect beyond testing for ignorance. See if your organization offers reimbursement or even bonuses for security certifications, and stand up internal lunch-and-learn or video-conference preparation sessions. If your organization doesn't provide an across-the-board financial incentive, consider funding a raffle in which five of the folks who pass the test receive a spot bonus.

Hopefully, you'll see these ideas as an opportunity to impress upon the rest of the corporation the importance of the CISO's office. There's a long history of "no"; without effort on the infosec staff's part, that image will linger well past its truth.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

What Is a CASB?

By Dylan Press, Director of Marketing, Avanan

Email is the #1 attack vector. Cloud Account Takeover is the #1 attack target.
A CASB is the best way to protect against these threats.

Gartner first defined the term Cloud Access Security Broker (CASB) in 2011, when most IT applications were hosted in the data center and few companies trusted the cloud. Most online services were primarily aimed at the consumer. At the time, CASB products were designed to provide visibility into so-called Shadow IT and limit employee access to unauthorized cloud services.

Today, organizations have embraced the cloud, replacing many of their data center applications with Software as a Service (SaaS) or moving much of their IT into Infrastructure as a Service (IaaS) providers like Amazon Web Services or Microsoft Azure. Instead of limiting access, CASBs have evolved to protect cloud-hosted data and provide enterprise-class security controls so that organizations can incorporate SaaS and IaaS into their existing security architecture.

CASBs provide four primary security services: Visibility, Data Security, Threat Protection, and Compliance. When comparing CASB solutions, you should first make sure that they meet your needs in each of these categories.

Visibility

A CASB identifies all the cloud services (both sanctioned and unsanctioned) used by an organization's employees. Originally, this only included the services they would use directly from their computer or mobile device, often called "Shadow IT." Today, it is possible for an employee to connect an unsanctioned SaaS application directly to an approved SaaS application via API. This "Shadow SaaS" requires more advanced visibility tools.

Shadow IT Monitoring: Your CASB must connect to your cloud to monitor all outbound traffic for unapproved SaaS applications and capture real-time web activity. Since nearly all SaaS applications send your users email notifications, your CASB should also scan every inbox for rogue SaaS communication to identify unapproved accounts on approved cloud services.

Shadow SaaS Monitoring: Your CASB must connect to your approved SaaS and IaaS providers to monitor third-party SaaS applications that users might connect to their accounts. It should identify both the service and the level of access the user has granted.

Risk Reporting: A CASB should assess the risk level of each Shadow IT/Shadow SaaS connection, including the level of access each service might request (e.g., read-only access to a calendar might be appropriate; read-write access to email might not). This allows you to make informed decisions and prioritize the applications that need immediate attention (see the sketch at the end of this section for one way such scoring might work).

Event Monitoring: Your CASB should provide information about real-time and historical events in all of your organization's SaaS applications. If you do not know how the applications are being used, you cannot properly control them or properly assess the threats facing your organization.
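
A minimal sketch of the scope-weighted risk scoring described above, assuming hypothetical scope names and weights (no particular provider's API is implied):

    # Scope-weighted risk scoring for third-party OAuth grants.
    # Scope names and weights are illustrative assumptions.
    SCOPE_WEIGHTS = {
        "calendar.read": 1,
        "files.read": 3,
        "mail.read": 5,
        "mail.readwrite": 8,
        "directory.readwrite": 10,
    }

    def risk_score(granted_scopes: list[str]) -> int:
        """Sum weighted risk over the scopes granted to one third-party app."""
        return sum(SCOPE_WEIGHTS.get(scope, 2) for scope in granted_scopes)

    grants = {
        "acme-scheduler": ["calendar.read"],
        "mystery-app": ["mail.readwrite", "files.read"],
    }
    # Triage the riskiest grants first.
    for app, scopes in sorted(grants.items(), key=lambda kv: -risk_score(kv[1])):
        print(f"{app}: score={risk_score(scopes)} scopes={scopes}")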

Data Security

A CASB enforces data-centric security policies by offering granular access controls or encryption. It incorporates role-based policy tools, data classification, and loss prevention technologies to monitor user activity and audit, block, or limit access. Once, these were stand-alone systems; today it is vital that they are integrated into the organization's data policy architecture.

Data Classification: Your CASB should identify personally identifiable information (PII) and other confidential text within every file, email or message. Taking this further, it should be capable of applying policies to control how that sensitive information can be shared.

Data-Centric Access Management: Your CASB should allow you to manage file permissions based upon the user’s role and the type of data the file contains using cloud-aware enforcement options that work within the context of the cloud service.

Policy-based Encryption: Your CASB should be able to encrypt sensitive information across all your cloud services to ensure data security, even after files leave the cloud.
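
To illustrate the classification step that underpins these controls, here is a bare-bones, pattern-based PII detector. Real CASB engines layer validation (such as Luhn checks on card numbers), proximity rules, and machine-learning classifiers on top of this; the patterns below are deliberately simplistic.

    # Bare-bones pattern-based PII classification; illustrative only.
    import re

    PII_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def classify(text: str) -> set[str]:
        """Return the set of PII categories detected in a blob of text."""
        return {label for label, pat in PII_PATTERNS.items() if pat.search(text)}

    print(classify("Contact jane@example.com, SSN 123-45-6789"))
    # -> {'ssn', 'email'} (set order may vary)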

Threat Protection

A CASB protects cloud services from unwanted users or applications. This might include real-time malware detection, file sandboxing, or behavior analytics and anomaly detection. New threats require new protections, so the list should include anti-phishing, account-takeover detection, and predictive (A.I.) malware technologies.

Anti-phishing Protection: Phishing attacks are the #1 source of data breaches every year, but few CASBs offer phishing protection for cloud-based email. For a technology that is protecting your cloud environment, anti-phishing is a must. It has been proven over and over again that your email provider is not a viable solution to the phishing problem.

Account Takeover Protection: Your CASB should monitor every user event (not just logins) to identify anomalous behavior, permission violations, or configuration changes that indicate a compromised account (see the sketch at the end of this section for a toy example of such baselining).

URL Filtering: Your CASB should check every email, file, and chat message for malicious links.

Real-Time Malware Detection: Your CASB should scan every email and file for active code and malicious content before it reaches the inbox.

Advanced Threat Sandboxing: Your CASB should test suspicious files in an emulation environment to detect and stop zero-day threats.
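
As a toy example of the account-takeover baselining mentioned above, the sketch below flags logins from countries a user has never used and bursts of failed logins. The thresholds and event fields are illustrative assumptions; production detectors use far richer signals.

    # Toy account-takeover detector: new-country logins and failed-login
    # bursts. Thresholds and event fields are illustrative assumptions.
    from collections import defaultdict, deque
    from datetime import datetime, timedelta

    class TakeoverDetector:
        def __init__(self, fail_threshold=5, window=timedelta(minutes=10)):
            self.seen_countries = defaultdict(set)   # user -> countries seen
            self.failures = defaultdict(deque)       # user -> failure times
            self.fail_threshold = fail_threshold
            self.window = window

        def observe(self, user: str, country: str, success: bool,
                    ts: datetime) -> list[str]:
            alerts = []
            if self.seen_countries[user] and country not in self.seen_countries[user]:
                alerts.append(f"{user}: first login from {country}")
            self.seen_countries[user].add(country)
            if not success:
                q = self.failures[user]
                q.append(ts)
                while q and ts - q[0] > self.window:  # drop stale failures
                    q.popleft()
                if len(q) >= self.fail_threshold:
                    alerts.append(f"{user}: {len(q)} failed logins in {self.window}")
            return alerts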

Compliance

Regulated organizations require auditing and reporting tools to demonstrate data compliance, and a CASB should provide all the necessary auditing and reporting tools. More advanced solutions offer policy controls and remediation workflows that enforce regulatory compliance in real time for every industry, from GDPR and SOX to PCI and HIPAA.

SIEM Integration: Your CASB should collect and correlate user, file and configuration events from each cloud application installed in your organization’s environment and make them visible through your organization’s existing reporting infrastructure.

Auditing: Your CASB should have access to historical event data for retrospective compliance auditing as well as real-time reporting.

Enforcement: Your CASB should be able to move and encrypt files, change permissions, filter messages or use any number of cloud-native tools to ensure compliance through automated policies.

Email Security from Your CASB

As you may have noticed, email security is a major component across all the CASB criteria. Can it really be that important? After all, so few CASBs include email security.

No matter the motivation, email continues to be the most common vector for enterprise breaches. Phishing and pretexting represented 98% of social incidents and 93% of breaches last year. Protection for the cloud must include protection for cloud-based email. Without cloud-based email security, a CASB is not truly providing full cloud security and is just acting as a simple Shadow IT tool.

Conclusion

While a solution doesn't need every feature mentioned in this blog post to sell itself as a CASB, these are the criteria that separate CASBs that are complete security solutions from those that will need to be paired with additional security tools. If you want a CASB to act as your full security suite, protecting your organization from cloud-borne threats, this list will serve as a useful checklist.

Avoiding Cyber Fatigue in Four Easy Steps

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Cyber alert fatigue. In the cybersecurity space, it is inevitable. Every day there is a new disclosure, a new hack, a new catchy title for the latest twist on an old attack sequence. After 23 years as a practitioner, I can tell you the burnout is real, and it unfortunately comes in waves. You'll stay up on the latest and greatest for months on end. Take a couple weeks off at the wrong time of year, maybe around the big security conferences (think RSA or Black Hat/DEF CON), and you could spend six weeks catching back up. Everyone has a take, and without getting in front of the wave, the wheat may not be easy to separate from the chaff. How can you avoid, or at least lessen, the chance of missing the next question from a CISO while still maintaining a sense of sanity?

Where does the quest for knowledge transform into chasing your own tail?

Be picky

First and foremost, carefully vet your media input sources. Every source you sign up for will inevitably add to the noise in your feed. Each follow, every like, even entering your email address for more information opens more avenues for daily discourse. Pick a few trusted sources of information, the innovators in your niche. For cybersecurity, Bruce Schneier (@schneierblog), Gene Spafford (@therealspaf), and Brian Krebs (@briankrebs) fit the mold. They'll put enough content on the wire for a daily read in a short amount of time.

Set time limits

Set aside a period of time each day to catch up. It's easy to read articles 24×7. Personally, I'm click-baited any time I read a headline news article. My ADD increases my penchant for distraction, and suddenly three hours of my day have passed without a tangible memo, report, or other accomplishment.

Choose a duration that doesn't wipe out the entire day, probably in the morning so you'll have water-cooler talk. Maybe it's first thing before everyone comes in, on the train, or at lunchtime. Find a daily podcast (Raf Los aka @Wh1t3Rabbit's Down The Security Rabbit Hole is usually interesting) and listen to it during morning exercise. Whatever it is, limit your alert time per day; they don't call it Twitter for nothing.

Back-scatter and bit buckets

Be prepared to be bought and sold. The luckiest thing I ever did was buy my own domain name. I use a unique email address for everything I sign up for and then forward the important ones into folders to keep my immediate inbox clean; it's technically a back-scatter technique (a minimal sketch of the alias trick follows below). If you have to make it past a marketing wall and provide information, don't be afraid to unsubscribe, unfollow, or remove access. Your contact info will be monetized, and most reputable marketing/distribution houses fear the legal ramifications of not complying with spam prevention acts. When someone doesn't comply appropriately, simply point that individual address to the bit bucket.
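
For the curious, here is one minimal way to implement the per-service alias trick, assuming you own a domain with catch-all forwarding; the helper and tagging scheme are our own invention:

    # Per-service email aliases on your own catch-all domain; the tag lets
    # you spot (and later blackhole) any address that starts leaking.
    import hashlib

    MY_DOMAIN = "example.com"  # substitute your own domain

    def signup_address(service: str) -> str:
        tag = hashlib.sha256(service.encode()).hexdigest()[:8]
        return f"{service}.{tag}@{MY_DOMAIN}"

    print(signup_address("newsletter-vendor"))  # newsletter-vendor.<tag>@example.com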

The struggle is real

Add a separate account for friends-and-family threads outside business hours. Co-workers at the office won't think you're wasting work time on personal pursuits, and you get a chance to create some work/life balance.

No one wants to live, breathe and die work. Cyber fatigue is real …

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.


Cloud Migration Strategies and Their Impact on Security and Governance

By Peter HJ van Eijk, Head Coach and Cloud Architect, ClubCloudComputing.com


Public cloud migrations come in different shapes and sizes, but I see three major approaches. Each of these has very different technical and governance implications.

Three approaches to cloud migration

Companies dying to get rid of their data centers often start with a 'lift and shift' approach, where applications are moved from existing servers to equivalent servers in the cloud. The cloud service model consumed here is mainly IaaS (infrastructure as a service), and not much is outsourced to the cloud provider. Contrast that with SaaS.

The other side of the spectrum is adopting SaaS solutions. More often than not, these trickle in from the business side, not from IT, and they can range from small meeting planners to full-blown sales support systems.

More recently, developers have started to embrace cloud-native architectures. Ultimately, both the target environment and the development environment can be cloud-based. The cloud service model consumed here is typically PaaS.

I am not here to advocate the benefits of one over the other; I think there can be a business case for each of these.

The categories also have some overlap. Lift and shift can require some refactoring of code to better fit cloud-native deployments. And hardly any SaaS application is stand-alone, so some (cloud-native) integration with other software is often required.

Profound differences

The big point I want to make here is that there are profound differences in the issues that each of these categories faces, and the hard decisions that have to be made. Most of these decisions are about governance and risk management.

With lift and shift, the application functionality is pretty clear, but bringing it out to the cloud introduces data risks and technical risks. Data controls may be insufficient, and the application's architecture may not be a good match for the cloud, leading to poor performance and high cost.

One group of SaaS applications stems from 'shadow IT.' The people who adopt them typically pay little attention to existing risk management policies, and these applications can add useless complexity to the application landscape. The governance challenge here is obvious: consolidate them and bring them into compliance with company policies.

Another group of SaaS applications is the reincarnation of the ‘enterprise software package’. Think ERP, CRM or HR applications. These are typically run as a corporate project, with all its change management issues, except that you don’t have to run it yourself.

The positive side of SaaS solutions, in general, is that they are likely to be cloud native, which could greatly reduce their risk profile. Of course, this has to be validated, and a minimum risk control is to have a good exit strategy.

Finally, cloud-native development is the most exciting, rewarding, and risky approach, because it explores and creates new possibilities that can truly transform an organization.

One of the most obvious balances to strike here is between speed of innovation and independence from platform providers. The more you are willing to commit yourself to an innovative platform, the faster you may be able to move. The two big examples I see are big data and the internet of things. The major cloud providers have very interesting offerings there, but moving a fully developed application from one provider to another is going to be a really painful proposition. And of course, the next important thing is for developers to truly understand the risks and benefits of cloud-native development.

Again, there are big governance and risk management issues to address.

Peter van Eijk is one of the world’s most experienced cloud trainers. He has worked for 30+ years in research, with IT service providers and in IT consulting (University of Twente, AT&T Bell Labs, EDS, EUNet, Deloitte). In more than 100 training sessions he has helped organizations align on security and speed up their cloud adoption. He is an authorized CSA CCSK and (ISC)2 CCSP trainer, and has written or contributed to several cloud training courses. 

Top Security Tips for Small Businesses

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Most small businesses adopt some sort of cloud offering, be it Software as a Service like QuickBooks or Salesforce, or even renting computers in Amazon Web Services or Microsoft Azure in an Infrastructure as a Service environment. You get Fortune 50 IT support, including things that a small business could never afford on its own, like building security and power failover with 99.999-percent reliability.

While cloud has great advantages, you must know your supply chain. Cloud providers use something called the shared responsibility model: their risks and vulnerabilities become yours, so choosing a discount provider may open you up to compliance issues you never thought possible. That said, cloud does allow a small business to focus on what differentiates it competitively, leaving the technical aspects to others for essentially pay-as-you-go utility computing.

In today’s increasingly complex security environment, following these three top security tips will go a long way to letting small business owners concentrate on running their business rather than keeping up with the latest security issues.

Something you know

Let's talk about authentication, typically referred to as passwords. The first thing to establish is "something you know," like a PIN or password. The worst thing anyone can do in this day and age is use one username with one password everywhere. If any one of the sites you use becomes compromised, the username/password combination will be sold on the Dark Web as a known combination. The lists are huge, and testing known combinations is vastly faster than blind guessing, even against banking or e-commerce sites that implement effective security. This happened in the Yahoo! breach that nearly scuttled the Verizon acquisition a couple years ago, sending ripples throughout the web and forcing resets by nearly every company in the world.

At the very least, use a unique password of between eight and (preferably) 16 characters. Characters are more than numbers and letters; the more of the keyboard you utilize, the longer testing every combination in a brute-force attack takes (see the quick arithmetic below).
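
The arithmetic behind that advice is simple: the search space an attacker must cover is the alphabet size raised to the password length, so both knobs matter.

    # Keyspace = alphabet_size ** length; both knobs matter.
    for alphabet, size in [("digits", 10), ("lowercase", 26),
                           ("alphanumeric", 62), ("full keyboard", 95)]:
        print(f"{alphabet:14} 8 chars: {size**8:.2e}   16 chars: {size**16:.2e}")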

Password managers such as LastPass or KeePass make keeping these organized easier, and they sync across your various phone, laptop, and desktop devices through cloud providers like Dropbox, Box, and OneDrive. Many of these are now tying in to "something you are," such as fingerprint or facial recognition.

Something you have

The next step up is a technique known as one-time passwords. These are far more effective than single-step logins: they take the something you know and add "something you have," your mobile device. That's why banks and financial trading firms incorporated the technology a few years ago.

As security gets better, so, too, do the hackers. SIM-card duplication and other attacks gave rise to something called soft tokens, from apps such as Google Authenticator and Authy. The apps use a synchronized clock and the hard mathematics of cryptography to build a system where anyone holding the shared secret can easily compute the code for the current window of validity, while an outside observer cannot feasibly predict the next code before the timer ticks over.
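
For the curious, that math is standardized as TOTP (RFC 6238): an HMAC over a counter derived from the current 30-second interval. A minimal sketch:

    # Minimal RFC 6238-style TOTP: an HMAC-SHA1 over the current
    # 30-second interval counter, truncated to six digits.
    import base64, hashlib, hmac, struct, time

    def totp(secret_b32: str, step: int = 30, digits: int = 6) -> str:
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // step
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F                     # dynamic truncation
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # changes every 30 seconds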

Currently, the most secure consumer password scenario comes from mathematics developed in the late '70s called public key cryptography. This is the same technology as in the soft-token apps, but in a purpose-built device, typically seen as a key fob or USB token from manufacturers like Entrust, RSA, or Yubico. This takes the one-time password to the next level: the tamper-resistant hardware erases its secret on any attempt to extract it.

To recap, secure passwords should be a combination of something you know, something you have, and something you are, with an order of strength: Same Passwords -> Unique Passwords -> Text Messages -> Soft Tokens (Authenticator/Authy) -> Hard Tokens (SecurID/RSA/Yubico).

Built-in, not bolted on

Lastly, follow your industry/vertical’s rules early.

The typical adage of "built-in, not bolted on" holds true for small business if you really want to make it over the long haul. It's always easier to include security at the beginning than to shoehorn it in afterwards. A small business may be fined for non-compliance, to the point of bankruptcy, under any of the regulations below:

  • US Securities and Exchange Commission’s Sarbanes Oxley (SOX);
  • Payment Card Industry’s Data Security Standard (PCI-DSS);
  • Health Insurance Portability and Accountability Act (HIPAA);
  • Privacy controls by the US Federal Trade Commission’s Fair Credit Reporting Act (FCRA) and Children’s Online Privacy Protection Act (COPPA); and
  • European Union's General Data Protection Regulation (GDPR).

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA's Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Updated CCM Introduces Reverse Mappings, Gap Analysis

By Sean Cordero, VP of Cloud Strategy, Netskope

Since its introduction in 2010, the Cloud Security Alliance's Cloud Controls Matrix (CCM) has led the industry in the measurement of cloud service providers (CSPs). The CCM framework continues to deliver to CSPs and cloud consumers alike a uniform set of controls to measure the security readiness of a cloud-centric security program. It continues to be the industry standard used to measure, evaluate, and inform risk, information security, and audit professionals on the best practices for securing cloud services.

Consistent with the CSA’s commitment to driving greater trust, assurance, and accountability across the information risk and security industry, this latest expansion to the CCM incorporates the ISO/IEC 27017:2015, ISO/IEC 27018:2014, and ISO/IEC 27002:2013 controls, and introduces a new approach to the development of the CCM and an updated approach to incorporate new industry control standards.

Core to this release of the ISO 27017:2015, 27018:2014, and 27002:2013 reverse mappings and gap analysis were two additional goals defined by the CSA and the CCM Working Group:

  1. Improve the ease of operationalization and measurement for all new controls.
  2. Increase the flexibility for CSPs and cloud consumers adopting additional control frameworks while retaining alignment with the core CCM controls.

Improved ease of operational usage and measurement

The avoidance of overly prescriptive control statements has been central to the CCM's control development philosophy. This approach was required to avoid duplication across other control frameworks and to avoid rework for security and audit professionals. While this approach is reflected in the language of the CCM, the intentional lack of specificity has made the CCM, at times, challenging to fully integrate into architectural and validation efforts. To address this in the language of the newly developed controls, two key changes were made: first, to the alignment of the core research team, and second, to the method of delivery for new controls.

First, two working group sub-teams were created and leaders of each identified. One group specific to information risk management and the other for audit and control measurement. To ensure that both teams brought to bear their collective expertise across the entire revision, each team then collaborated on the review of the work product of the other team, which has led to the most comprehensive and well-defined release of the CCM to date.

The information security team was led by Ai Ping Foo. Her team focused on the identification and creation of new controls and mappings with a focus on ensuring the incorporation of these controls across security architectures.

The assurance team was led by Ahmed Maaloul, whose team drove the creation of the new controls and mappings with a focus on ensuring control clarity, ease of measurement, and reproducibility for audit and assurance professionals.

Improved flexibility and delivery for new controls

This latest release of the Cloud Controls Matrix introduces reverse mappings and gap analysis to the CCM program. We believe that this approach allows organizations to continue their alignment to the core CCM standard while giving the option of further expanding their controls without disruption to any STAR certification efforts underway or existing certifications.

As the CCM framework continues to mature we are confident it will give security, audit, and assurance professionals the most flexibility for control identification without compromising the existing CCM controls.

The CCM continues to define the standard for trust, assurance, and control for security, audit, and compliance analysts when conducting operations in the cloud. This latest release reflects the CSA’s and the CCM Working Group’s continued commitment towards ease of use, flexibility, and uniformity across the multiple disciplines which enable trusted cloud operations.

The success of the CCM continues to be the result of the dedicated professionals within the CCM Working Group. This latest release would not have been possible without the expertise, focus, and collaboration of the following working group members:

Security Team Leader: Ai Ping Foo

Assurance Team Leader: Ahmed Maaloul

CCM Working Group Volunteers:

Ai Ping Foo

Adnan Dakhwe

Ahmed Maaloul

Angela Dogan

Alejandro del Rio Betancourt

Bunmi Ogun

Chris Sellards

Chris Shull

Eric Tierling

Josep Bardallo

Kazuki Yonezawa

Kelvin Arcelay

Madhav Chablani

Masashiro Morozumi

Mariela Rengel

Mohin Gulzar

Muswagha Katya

Noutcha Gilles

Puneet Thapliyal

Shahid Sharif

Saraj Mohammed

M. Reid Leake

William Butler

Download the latest version of the CSA Cloud Controls Matrix.

Sean Cordero has over 18 years of experience in IT and information risk management. He has held senior security executive roles at leading biotechnology, financial, retail, and consulting organizations. Cordero is the Chair of the CSA's Cloud Controls Matrix Working Group and serves as the Co-Chair of the CSA's Consensus Assessments Initiative Questionnaire. Cordero was honored by the CSA with the Ron Knode Service Award in 2013 and inducted as a CSA Research Fellow in 2016. Cordero is a certified CISSP, CISM, CISA, and CRISC.


Cybersecurity Trends and Training Q and A

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Q: Why is it important for organizations and agencies to stay current in their cybersecurity training?

A: Change accelerates in technology. There's an idea called Moore's Law, named after Intel co-founder Gordon Moore, that the power of a microchip doubles roughly every 18 months. When combined with the virtualization aspects necessary for cloud computing, technology professionals now tackle ideas seen as science fiction 30 years ago. You carry around more processing power in an Apple Watch than launched the space shuttle. Big Data, blockchain, the Internet of Things, AI, and self-driving cars were inconceivable. Now you see advertisements for NCAA trend analysis (Big Data), Bitcoin (blockchain), Alexa and smart homes (Internet of Things), Watson (AI), and Tesla. Humans create all of this new technology; we're flaw-ridden, and cybersecurity researchers find exploitable bugs every day.

Training for developers is important—they're a small population and make a huge impact by limiting the types and quantities of flaws. Training for general users helps them avoid clicking malicious links, falling for phishing schemes, and opening files of unknown pedigree. Staying current keeps users only a half step behind the latest exploitation schemes; everything turns over entirely too fast to rely on 10-year-old security knowledge. Ransomware wasn't something we trained people on 15 years ago, even though the PC Cyborg virus demanded the first $378 ransom payment in 1989. Now, one person clicking a link can lock up a company's entire data store.

Q: Do you find that most organizations and agencies employ a workforce that is woefully undertrained in cybersecurity?

A: There are companies like KnowBe4 and PhishMe that specifically target under-trained employees. KnowBe4 calls it the Human Firewall—accurate when it works properly. In the cybersecurity world, we've said two things about users for years: you have to trust someone, and users are the weakest link in any computer architecture. We've made inroads limiting the damage by segmenting networks, limiting access privileges, and building better authentication capabilities, but training is a moving target, and people forget or get careless.

Q: Is cybercrime on the upswing? Do you have statistics or studies to back this up?

A: The trends for cybercrime show increases in total occurrences. Part of that is who's doing the work for the majority of the takeovers: in many cases, self-replicating viruses and bots do the work—they don't sleep. Some cybersecurity researchers find flaws and immediately publish their sample code. (Not contacting the product manufacturer first is irresponsible.) The sample code gets weaponized, added to existing exploit development kits, and loaded into malware, including ransomware. Ransomware encrypts all the files on a drive, and it rose from the 22nd- to the 5th-most-common form of malware between 2014 and 2016 (2017 Verizon Data Breach Investigations Report). Recently, the city of Atlanta was hit with a $51,000 demand.

Executives at a company the size and stature of Uber decided to pay a ransomware demand. They clearly didn't have good backup and recovery processes, and we can't expect the 718,000 other victims in 2016 to do much better. Uber, in turn, funded the next round of development. According to Symantec, cyber criminals saw per-victim value increases of 266 percent from 2015 to 2017, and they continue their efforts. There are over 50 families of ransomware alone—families, not applications—and cracking a single variant doesn't necessarily blunt the rest of its family. An effort by Europol and several cybersecurity vendors to inform users and collect decryption keys started last year with the site nomoreransom.org.

Q: Which organizations are currently most targeted for cybercrime, and why?

A: There was a quote in the New Yorker during the 1950s where Willie Sutton answered the question of why he robbed banks. His response was straightforward: "I rob banks because that's where the money is." This has held true throughout history, be it land during feudal times, stagecoaches and trains in the Old West, or cybercrime today.

So where is the proverbial money in today’s cloud-connected, on-demand, app-everywhere world?

The industry most people associate with cybercrime and fraud is credit cards and banking, referred to as the Payment Card Industry (PCI). They really worked to lock everything down, starting with the Payment Card Industry Data Security Standard (PCI-DSS) in December 2004. The rationale was simple—rampant fraud in the late 1990s. They were losing every time someone called about a bad charge.

Credit card companies are steadily improving to the point now where your bank tracks your location and habits and will proactively block suspicious transactions, calling or sending a text message as an additional authorization step. I've seen it fail miserably (a friend of mine received a denial on a charge at the local Kroger after using the same card at the same store weekly for the past 18 months) and work stupendously (a $1 Burger King charge in Mexico while I was buying snacks at the Ft. Lauderdale airport). The chip cards are also reducing fraud, as they prove to the card processors that you have the original card and not a fake copy. The Payment Card Industry does such a good job now that bulk credit card numbers on the Dark Web cost pennies per thousand.

That's not the case for the healthcare industry, however. Protected Health Information (PHI) continues to be the most profitable data, running in the $0.50 to $7 range per record, down significantly from the roughly $150 range of less than five years ago. Extensive health histories provide a treasure chest of fraud possibilities, and they are now purchased alongside additional information like birth dates, Social Security numbers, and driver's license data. Knowing a patient's previous diagnosis of high cholesterol makes fake claims for heart procedures more plausible. CIPP Guide pointed out how common abandoned medical records were 10 years ago. Doctors place a premium on their time, but the HIPAA compliance requirements for Electronic Health Records (EHRs) and the ease with which electronic information may be destroyed eliminate that sort of abandonment. It does open up a new situation, where a patient actually wants their previous health history to continue with a new practice. At that point, people must take personal responsibility and keep their own EHR.

Let's investigate where the money isn't … sort of. Cyberattacks were a significant part of the Russian attacks on Georgia and, more recently, Ukraine. Stuxnet, one of the first nation-state-attributed cyberweapons, set back the Iranian nuclear program in 2010 by attacking the industrial control equipment—Supervisory Control and Data Acquisition (SCADA) systems—responsible for Iran's uranium enrichment centrifuges. Russian government interference in US elections remains a continued congressional topic. And early in 2018, the city of Atlanta experienced ransomware demands. While governments typically have big budgets, getting to that money will prove more difficult.

Lastly, the area I'm most concerned about is transportation. Money is replaceable; lives aren't. More "intelligent" features are making their way into mass production, from braking assist and lane departure warnings to autopilot. Two researchers demonstrated a remote automobile attack at the DEF CON hacking conference in 2015, and the conference introduced a Car Hacking Village where attendees could try the exploits themselves. Since that time, self-driving vehicles, including cars and semi-trucks, have been under development by Tesla, Uber, and NVIDIA. Uber recently suspended self-driving car tests after a pedestrian accident in Arizona on March 19, 2018.

The possibility of a driverless future with limited road rage and fewer traffic fatalities sounds promising. The fact of the matter is that these systems use external connections to download updates, and history shows remote updates to be a vulnerability: flaws in automobile immobilizer/remote disablement features were demonstrated in 2016. The ability to stop a car suddenly is already part of police controls for theft prevention and recovery, and Hollywood TV shows dramatize sudden acceleration. The prospects of ransom or terrorism are frightening at 60 MPH.

Q: How bad is cybercrime expected to be in the future?

A: Cybercrime success in the future depends on the diligence of everyone involved. Punishment for unacceptable behavior has been documented since biblical times, and deterrence depends on risk versus reward, similar to the drug trade. The main difference surrounds education—hacking requires access to computers and coding skills. In the US, our Bill of Rights and Constitution keep American hackers from being executed, with the exception of treason. Life in prison or heavy fines are the punishments of choice. If you don't have money, the heavy fines don't look as daunting; a serious prison term carries a bit more weight. That's not how most US laws currently read, though. Kevin Mitnick, one of the best-known hackers, received a five-year sentence after breaking into several corporations' networks, including Pacific Bell's voice mail system. The main charge that got him jail time was wire fraud.

Folks outside of the US, especially organized crime in the poorer nations of Africa and Asia, already show a great deal of interest in cybercrime—mostly phishing schemes. Eastern Europe also has several well-known hacking groups. Their tools are getting better and easier to use. That's a double-edged sword—less knowledgeable users will probably make implementation mistakes that allow projects like NoMoreRansom to work.

Cybersecurity protections will continue evolving. Organizations within the PCI are now asking for continuous access to your location data so they can correlate your spending with your charge card and ATM usage, the next logical evolution in their fraud detection. Until you forget your phone. At that point, we need to adjust where the "money" is and start examining what can be done with your location information and other low-hanging fruit. If criminals know you're not in your residence, will the crime statistics show a spike in burglaries? Will social engineers or phishing scams target you based on the most susceptible device: email scams on your tablet, text scams on your phone, and click fraud on your laptop?

Q: Who are these cyber criminals and where do they come from?

A: In the past, we dealt a lot with individual hackers. There were hacktivists and folks who wanted to see how they could get in and what they could do once inside. That has since moved to organized crime, with the bulk of cyber criminals motivated by money and how quickly they can turn whatever they find into cash. Most of the latest attacks are external, financially focused, and automated to increase return on investment.

Q: A lot is now being discussed about cyber criminals holding the data of individuals and organizations hostage. How is this possible and what can be done to prevent it?

A: The data hostage-taking refers to a type of malware called ransomware, so named because an infected system will scramble all the stored data using encryption and demand payment for release of the decryption key. Most anti-virus products will catch all but the latest 0-day hacks (those not yet discovered by cybersecurity professionals).

Keep your cybersecurity software up to date. Likewise, keep ALL your systems patched—most operating systems will install patches automatically, and unlike the old days, at least for desktop systems, everything won't crash. Mobile device users are slightly less accepting of auto-updates, for fear of favorite apps no longer working or battery-draining updates; keep in mind that a favorite app could be part of the reason for the patch. Lastly, invest in some sort of backup software. Plenty of choices will automatically save all of your files—Apple has iCloud, Microsoft has OneDrive, and you could use Google Drive or Amazon's S3 cloud service. There are plenty of third-party solution providers as well, including Carbonite, CrashPlan, and others. Make the choice that best fits your lifestyle—if you own all Apple devices, iCloud is probably your best choice. And as mentioned on nomoreransom.org, paying the ransom equates to venture-funding the next round of attacks.

Q: Besides cyber blackmail, are there other new schemes in cybercrime that organizations need to be aware of?

A: An emerging scheme involves stealing cycles from people's web browsers, or cryptojacking. It's a combination of cryptocurrency mining and a "free" component—the advertising revenue stream is augmented or replaced with either pornography or a game, depending on the user base, while additional code on the page uses your computer to mine cryptocurrency for the attacker. My kids were playing a tank game that crashed my system from heat. Bitcoin thefts a couple years ago (see Mt. Gox, for instance) were popular because there was little risk of getting caught. With cryptojacking, people think it's just a poorly written web page and restart their browser or computer. You never get something for nothing.

These examples highlight the negatives, but they shouldn't all be seen as daunting. The technology behind Bitcoin opens up a new world of possibilities for worldwide money transactions. A company called Ripple, an "altcoin" using the same blockchain technology, based its whole business model on efficiently and effectively moving money between countries in Southeast Asia. IBM commercials tout the advantages for our food supply and for eliminating "blood diamonds." Even with all the accident reports on driverless cars, autonomous vehicles have the potential to save millions of lives by eliminating driving under the influence and distracted driving. EHRs and smart watches, for instance, allow doctors continuous monitoring of vital signs, looking for day-to-day abnormalities rather than relying on just an annual patient screening. All of these were science fiction or unfathomable even 20 years ago. As a society, we need to be aware of and diligent about criminal activity, but awareness shouldn't scare the world into a techno-free cave.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Cybersecurity Certifications That Make a Difference

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

The security industry is understaffed. By a lot. Previous estimates by the Ponemon Institute suggest as much as 50 percent underemployment for cybersecurity positions. Seventy percent of existing IT security organizations are understaffed, and 58 percent say it's difficult to retain qualified candidates. ESG's 2017 annual global survey of IT and cybersecurity professionals suggests the biggest skills shortage has been in cybersecurity for at least six years running. It's a fast-moving field with hackers' crosshairs constantly targeting companies; mess up and you're on the front page of the Wall Street Journal. With all of the pressure and demand, security is also one of the best-paying segments of IT.

Cybersecurity has its own vernacular, with a set of acronyms and ideas far outside even those of its information technology brethren. For the gold standard as a security professional, the title to have is the Certified Information Systems Security Professional (CISSP) from ISC2 (isc2.org). The requirements have grown increasingly strict since my testing in 2001—not lax, mind you, but five-year industry minimums and certified-professional attestation give the credential even more heft. There is an associate-level credential available, the Systems Security Certified Practitioner (SSCP), that eliminates the time and sponsorship minimums and would be appropriate for someone new to the field.

Adding to the professional shortage are new IT delivery methods, a la cloud computing. Amazon Web Services is the giant in the space, offering several certifications for cloud architecture and implementation; Microsoft and Google round out the top three. These, too, are hot commodities, as cloud is a relatively nascent industry and not very well understood. Layer security onto the cloud platform, and you find certifications such as the Cloud Security Alliance’s Certificate of Cloud Security Knowledge (CCSK) and, again, ISC2’s Certified Cloud Security Professional (CCSP). In 2017, Certification Magazine listed cloud security certifications among those delivering the highest salary increases available to an IT professional.

One caveat to all of this demand: recruiters, headhunters and hiring managers. Position requirements are sometimes outlandish or poorly vetted, such as a requisition asking for 10 years of cloud and 20 years of security experience. Amazon Web Services only started in 2006, and Microsoft Azure and Google Cloud Platform were long seen as cannibalizing their parents’ existing revenue streams. Even five years of cloud industry experience is a lifetime, and the industry moves so fast that AWS’s Certified Solutions Architect (AWS-ASA) requires re-certification every two years versus the standard three for the rest of IT. AWS also has a security exam recently out of beta, the AWS Certified Security Specialty, though it requires one of the associate certifications first.

If you have the appetite for learning, add privacy to the mix. The number of industry-vertical regulations (healthcare’s HIPAA, the Payment Card Industry’s PCI-DSS, finance’s FINRA/SOX, etc.) and regionally specific requirements (the EU’s GDPR) has the International Association of Privacy Professionals (IAPP) offering eight Certified Information Privacy Professional (CIPP) certifications. For an IT professional in the US, the Certified Information Privacy Technologist (CIPT) and CIPP/US are probably the most attainable and attractive.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Microsoft Workplace Join Part 2: Defusing the Security Timebomb

By Chris Higgins, Technical Support Engineer, Bitglass

In my last post, I introduced Microsoft Workplace Join. It’s a really convenient feature that automatically logs users in to corporate accounts from any device of their choosing. However, this approach essentially eliminates all sense of security.

So, if you’re a sane and rational security professional (or even if you’re not), you clearly want to disable this feature immediately. Your options?

Option #1 (Most Secure, Most Convenient): Completely disable Intune Mobile Device Management for O365 and then disable Workplace Join

As Workplace Join can create serious security headaches, one of the most secure and most convenient options is to disable Intune MDM for Office 365 and then disable Workplace Join completely. Obviously, these should quickly be replaced by other, less invasive security tools. In particular, organizations should consider agentless security for BYOD and mobile in order to protect data while preserving user privacy.

Option #2 (Least Convenient): Use Intune policies to block all personal devices

Microsoft does not provide a way to limit this feature that does not rely on Intune policies. Effectively, you must either not use Intune at all or pay to block unwanted access. The latter approach means blocking all BYO devices (reducing employee flexibility and efficiency) and introduces the complexity of pushing software to every device, raising additional costs.

Option #3 (Least Convenient and Least Secure): Whack-a-mole manual policing of new device registrations

As an administrator in Azure AD, deleting or disabling an account only prevents automated logins on that account’s registered devices, and it has to be done manually every time a user links a new endpoint. Unfortunately, deactivation and deletion in Azure do not remove the “Join Workplace or School” link from the control panel of the machine in question. Additionally, deactivation still allows the user to log in manually, as does deletion; neither action prevents the user from re-enrolling the same device. In other words, pursuing this route means playing an endless game of deactivation and deletion whack-a-mole.
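
If you do end up policing registrations by hand, scripting the loop at least eases the pain. Below is a minimal sketch against the Microsoft Graph v1.0 API; it assumes you have already acquired an OAuth bearer token with the Device.ReadWrite.All permission (for example via MSAL), and the placeholder token and commented-out disable call are intentional. Verify the endpoints and properties against your own tenant before relying on it.

    # Sketch: enumerate workplace-joined devices in Azure AD via Microsoft Graph
    # and optionally disable them. Token acquisition is omitted; TOKEN below is
    # a placeholder, not a working credential.
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = "<bearer-token-with-Device.ReadWrite.All>"
    HEADERS = {"Authorization": f"Bearer {TOKEN}"}

    def workplace_joined_devices():
        """Yield devices registered via Workplace Join (trustType == 'Workplace')."""
        url = f"{GRAPH}/devices"
        while url:
            page = requests.get(url, headers=HEADERS).json()
            for device in page.get("value", []):
                if device.get("trustType") == "Workplace":
                    yield device
            url = page.get("@odata.nextLink")  # follow paging until exhausted

    def disable_device(device_id):
        """Block automated sign-ins from one device; does not stop re-enrollment."""
        resp = requests.patch(f"{GRAPH}/devices/{device_id}",
                              headers={**HEADERS, "Content-Type": "application/json"},
                              json={"accountEnabled": False})
        resp.raise_for_status()

    for device in workplace_joined_devices():
        print(device["displayName"], device["id"])
        # disable_device(device["id"])  # uncomment to play whack-a-mole

Even scripted, this remains reactive: the device list has to be re-swept every time a user enrolls a new endpoint.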

Firmware Integrity in the Cloud Data Center

By John Yeoh, Research Director/Americas, Cloud Security Alliance

As valued members, we wanted you to be among the first to hear about the newest report from CSA, Firmware Integrity in the Cloud Data Center, in which key cloud providers and data-center development stakeholders share their thoughts on building cloud infrastructure with secure servers that enable customers to trust the cloud provider’s infrastructure at the hardware/firmware level.

Authored by the Cloud Security Industry Summit (CSIS) Technical Working Group, the position paper is aimed at hardware and firmware manufacturers. It identifies gaps in the industry that make it difficult to meet the recently published NIST SP 800-193 requirements with ‘standard’ general-purpose servers, offers ways to build servers designed to meet those requirements (including calling out missing technology where applicable) so that cloud providers can increase trust in commodity hardware, and suggests additional requirements that could further strengthen the security of servers.

Among the gaps that CSIS singles out for immediate attention by hardware manufacturers are:

  1. First-instruction integrity – The ability to ensure integrity of the first instruction (the first code or data loaded from mutable non-volatile media) in a way that is verifiable by the cloud provider and not just by the manufacturer.
  2. Chain-of-Trust for peripherals – The ability to leverage the host root of trust and other roots of trust to create a chain of trust to peripherals (e.g. for PCIe devices or other symbiont devices).
  3. Automatable Recovery – The ability to perform automated recovery back to a known boot-time state upon detection of corrupted firmware (after initial boot).

With the increasing sophistication of attackers and nation-state threats, we think it’s critical to build a new, more secure generation of servers. The hardware/firmware industry must do a better job of building firmware with high code quality and minimal potential for vulnerabilities at the firmware level, and it is vital that supply-chain security can be verified at every step along the way from component to system to solution. The CSIS’s opinion is that these requirements can be met without cloud vendors having to design and build specialized hardware, but rather through standardized commodity hardware.
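
To make the detection-and-recovery gaps above concrete, here is a deliberately toy sketch: a software-level comparison of measured firmware digests against a manifest of known-good values, with recovery left as a stub. Real NIST SP 800-193-style protection anchors both measurement and recovery in a hardware root of trust, which is precisely what the paper argues commodity servers still need; the file names and manifest format here are invented for illustration.

    # Toy illustration of firmware integrity checking: compare SHA-256 digests
    # of firmware images against a known-good manifest. A real platform does
    # this from a hardware root of trust, not from host software.
    import hashlib
    import json

    def digest(path):
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def find_drift(manifest_path, images):
        """images maps component name -> image file; returns corrupted components."""
        with open(manifest_path) as f:
            known_good = json.load(f)  # e.g. {"BMC": "ab12...", "UEFI": "cd34..."}
        return [name for name, path in images.items()
                if digest(path) != known_good.get(name)]

    for component in find_drift("manifest.json", {"BMC": "bmc.bin", "UEFI": "uefi.rom"}):
        print("corrupted firmware:", component)
        # recovery stub: restore the component to a known boot-time state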

We hope you will find this informative and that it will lead to further discussions within your own organization about the challenges involved in the future of cloud security.

Read the position paper.

Continuous Monitoring in the Cloud

By Michael Pitcher, Vice President, Technical Cyber Services, Coalfire Federal

I recently spoke at the Cloud Security Alliance’s Federal Summit on the topic “Continuous Monitoring / Continuous Diagnostics and Mitigation (CDM) Concepts in the Cloud.” As government has moved, and will continue to move, to the cloud, it is becoming increasingly important to ensure continuous-monitoring goals are met in this environment. Specifically, cloud assets can be highly dynamic and lack persistence, so traditional continuous-monitoring methods that work for on-premises solutions don’t always translate to the cloud.

Coalfire has been involved with implementing CDM for various agencies and is the largest Third Party Assessment Organization (3PAO), having completed more FedRAMP authorizations than anyone, which uniquely positions us to help customers think through this challenge. These concepts and challenges are not unique to the government agencies that are part of the CDM program, however; they translate to other government and DoD communities as well as commercial entities.

To review, Phase 1 of the Department of Homeland Security (DHS) CDM program focused largely on static assets and for the most part excluded the cloud. It centered on building and knowing an inventory, which could then be enrolled in ongoing scanning as frequently as every 72 hours. The objective is to determine whether assets are authorized to be on the network, whether they are being managed, and whether they have software installed that is vulnerable or misconfigured. As the cloud becomes part of the next round of CDM, it is important to understand how the approach to these objectives needs to adapt.

Cloud services enable resources to be allocated, consumed, and de-allocated on the fly to meet peak demands. Just about any system will have times when more resources are required than others, and the cloud allows compute, storage, and network resources to scale with that demand. As an example, within Coalfire we have a Security Parsing Tool (Sec-P) that spins up compute resources to process vulnerability-assessment files dropped into a cloud storage bucket. The compute resources exist for only a few seconds while a file is processed and are then torn down. Examples such as this, along with serverless architectures, challenge traditional continuous-monitoring approaches.
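
As a hedged illustration of that pattern (not Coalfire’s actual Sec-P code), here is what such a short-lived worker can look like: an AWS Lambda handler triggered by an S3 upload that parses a scan file and exits. The bucket wiring and the assumption that the export is JSON are invented for the example.

    # Sketch of ephemeral compute: a Lambda handler that processes a
    # vulnerability-assessment file the moment it lands in a bucket.
    import json
    import urllib.parse

    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
            findings = json.loads(body)  # assume a JSON scan export
            print(f"{key}: {len(findings)} findings")  # real code would persist these

The monitoring challenge is exactly that nothing in this picture survives long enough for a 72-hour scan window to ever see it.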

However, potential solutions are out there, including:

  • Adopting built-in services and third-party tools
  • Deploying agents
  • Leveraging Infrastructure as Code (IaC) review
  • Using sampling for validation
  • Developing a custom approach

Adopting built-in services and third-party tools

Dynamic cloud environments highlight the inadequacies of performing active and passive scanning to build inventories: assets may simply come and go before a traditional scan tool can assess them. Each of the major cloud service providers (CSPs), and many of the smaller ones, provides inventory-management services alongside services that monitor resource changes. Examples include AWS’s Systems Manager Inventory and CloudWatch, Microsoft’s Azure Resource Manager and Activity Log, and Google’s Cloud Asset Inventory and Cloud Audit Logs. There are also quality third-party applications that can be used, some of them already FedRAMP authorized. Regardless of the service or tool used, the key is interfacing it with the integration layer of an existing CDM or continuous-monitoring solution. This can occur via API calls to and from the solution, which are made possible by the current CDM program requirements.
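
As a sketch of that interfacing, the snippet below pulls an inventory snapshot from AWS Systems Manager with boto3 and hands each record to a stand-in for the CDM integration layer; push_to_cdm is a placeholder I invented, not a real API.

    # Sketch: feed the CSP's managed inventory into an existing continuous-
    # monitoring integration layer. Requires AWS credentials with SSM read access.
    import boto3

    ssm = boto3.client("ssm")

    def push_to_cdm(asset):
        """Placeholder for the POST into your CDM/continuous-monitoring solution."""
        print(asset.get("InstanceId"), asset.get("PlatformName"))

    def current_inventory():
        for page in ssm.get_paginator("get_inventory").paginate():
            for entity in page["Entities"]:
                info = entity["Data"].get("AWS:InstanceInformation", {})
                for asset in info.get("Content", []):
                    yield asset

    for asset in current_inventory():
        push_to_cdm(asset)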

Deploying agents

For resources that will have some degree of persistence, agents are a great way to perform continuous monitoring. Agents can check in with a master to maintain the inventory and can perform security checks as soon as a resource is spun up, rather than waiting for a sweeping scan. Agents can be installed as part of the build process or even baked into a deployment image. Interfacing with the master node that controls the agents and comparing its list against the inventory is a great way to perform cloud-based “rogue” asset detection, a requirement under CDM. Employed on-premises, this concept is really about finding unauthorized assets, such as a personal laptop plugged into an open network port; in the cloud, it is all about finding assets that have drifted from the approved configuration and are out of compliance with the security requirements.
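
Conceptually, the rogue check reduces to a set difference between what the CSP inventory reports and what the agent master has heard from, as in this minimal sketch (both input sets are placeholders for the APIs discussed above):

    # Sketch: cloud "rogue" asset detection as a set difference.
    def find_rogue_assets(inventory_ids, agent_ids):
        """Assets the CSP sees but no agent has reported are out of compliance."""
        return set(inventory_ids) - set(agent_ids)

    rogues = find_rogue_assets(
        {"i-0a1", "i-0b2", "i-0c3"},  # from the inventory service
        {"i-0a1", "i-0c3"},           # from the agent master's check-ins
    )
    print(rogues)  # {'i-0b2'} -> unmanaged or drifted resource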

For resources such as our Coalfire Sec-P tool from the previous example, which exists as code more than 90 percent of the time, we need to think differently. An agent approach may not work, as the compute resources may not exist long enough even to check in with the master, let alone perform any security checks.

Infrastructure as code review

IaC is used to deploy and configure cloud resources such as compute, storage, and networking. It is basically a set of templates that “program” the infrastructure. IaC is not a new concept, but the speed at which cloud environments change is bringing it into the security spotlight.

Now we need to consider how to assess the code that builds and configures the resources. There are many tools and approaches for doing this; application security is nothing new, it just must be re-examined when it becomes part of performing continuous monitoring on infrastructure. The good news is that IaC uses structured formats and common languages such as XML, JSON, and YAML. As a result, it is possible to use off-the-shelf tools or even write custom scripts to perform the review. The structured format also allows automated, ongoing monitoring of the configurations, even when the resources exist only as code and are not “living.” It is also important to consider what software spins up with the resources, as the packages leveraged must be up-to-date versions free of known vulnerabilities. Code should undergo a security review whenever it changes, so that the approved code can be continuously monitored.
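
To show how approachable structured IaC makes this, here is an illustrative custom script that walks a JSON or YAML template and flags world-open ingress rules. A production review would use a policy engine and a loader that understands CloudFormation’s intrinsic tags; this toy assumes plain JSON/YAML and the CidrIp key.

    # Toy IaC review: flag security-group ingress open to 0.0.0.0/0.
    import json
    import sys

    import yaml  # pip install pyyaml

    def load(path):
        with open(path) as f:
            text = f.read()
        return json.loads(text) if path.endswith(".json") else yaml.safe_load(text)

    def find_open_ingress(node, trail=""):
        if isinstance(node, dict):
            for key, value in node.items():
                if key == "CidrIp" and value == "0.0.0.0/0":
                    yield trail
                yield from find_open_ingress(value, f"{trail}/{key}")
        elif isinstance(node, list):
            for i, item in enumerate(node):
                yield from find_open_ingress(item, f"{trail}[{i}]")

    for hit in find_open_ingress(load(sys.argv[1])):
        print("world-open ingress at", hit)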

Setting asset expiry is one way to enforce CDM principles in a heavily DevOps-driven environment that leverages IaC. The goal of CDM is to assess assets every 72 hours, so we can set them to expire (get torn down, and therefore require rebuild) within that timeframe, guaranteeing they are living on fresh infrastructure built from approved code.
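
A minimal sketch of enforcing that expiry with boto3, assuming plain EC2 instances; the termination call is left commented so the rebuild decision stays with your pipeline:

    # Sketch: flag instances older than the 72-hour CDM window for rebuild.
    from datetime import datetime, timedelta, timezone

    import boto3

    MAX_AGE = timedelta(hours=72)
    ec2 = boto3.client("ec2")

    def expired_instances():
        now = datetime.now(timezone.utc)
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                for instance in reservation["Instances"]:
                    if now - instance["LaunchTime"] > MAX_AGE:
                        yield instance["InstanceId"]

    for instance_id in expired_instances():
        print("rebuild due:", instance_id)
        # ec2.terminate_instances(InstanceIds=[instance_id])  # IaC rebuilds it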

Sampling

Sampling is meant to be used in conjunction with the methods above. In a dynamic environment where the total number of assets is always changing, there should still be a solid core of the fleet that can be scanned via traditional active scanning; we just need to accept that we will never scan the complete inventory. There should also be far fewer profiles, or “gold images,” than total assets. The idea is that if you can scan at least 25 percent of each profile in any given scan window, there is a good chance you will find the misconfigurations and vulnerabilities that exist on all resources built from that profile, and identify assets drifting from the fleet. This is enough to catch systemic issues such as bad deployment code or resources being spun up with out-of-date software. If resources in a profile show a large discrepancy with others in that same profile, that is a sign of DevOps or configuration-management issues that need to be addressed. We are not giving up on the concept of a complete inventory, just accepting that in the cloud there really is no such thing.

Building IaC assets specifically for the purpose of performing security testing is also a great option. These assets can have persistence and be “enrolled” into a continuous-monitoring solution to report on vulnerabilities much as on-premises devices do, via a dashboard or otherwise. The total number of vulnerabilities in the fleet is then the quantity found on these sample assets multiplied by the number of assets of that profile living in the fleet. As stated above, we can get that count from the CSP services or third-party tools.
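
The extrapolation itself is simple arithmetic, as in this sketch (the profile names and counts are invented):

    # Sketch: extrapolate fleet-wide vulnerability counts from sampled assets.
    def estimate_fleet_vulns(profiles):
        """profiles: {name: (vulns_on_sample_asset, live_assets_of_profile)}"""
        return {name: vulns * count for name, (vulns, count) in profiles.items()}

    print(estimate_fleet_vulns({
        "web-gold-image": (4, 40),     # 4 vulns x 40 live copies -> 160
        "worker-gold-image": (1, 25),  # 1 vuln x 25 live copies -> 25
    }))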

Custom approaches

There are many CSPs out there serving endless cloud-based possibilities, and each has various services and tools available, both from them and for them. What I have reviewed are high-level concepts; each customer will need to dial in the specifics based on their use cases and objectives.

Microsoft Workplace Join Part 1: The Security Timebomb

By Chris Higgins, Technical Support Engineer, Bitglass

It’s no secret that enterprise users wish to access work data and applications from a mix of both corporate and personal devices. To help facilitate this mix, Microsoft has introduced a feature called Workplace Join into Azure Active Directory, Microsoft’s cloud-based directory and identity service. While the intent of streamlining user access to work-related data is helpful, the delivery of this feature has opened a large security gap, one that can’t easily be disabled. This is another example of an app vendor optimizing for user experience ahead of appropriate controls and protections, demonstrating the basis for the cloud app shared responsibility model and the need for third-party security solutions like cloud access security brokers (CASBs).

According to Microsoft, “…by using Workplace Join, information workers can join their personal devices with their company’s workplace computers to access company resources and services. When you join your personal device to your workplace, it becomes a known device and provides seamless second factor authentication and Single Sign-On to workplace resources and applications.”

How does it work?

When a user links their Windows machine to “Access Work or School,” the machine is registered in Azure AD, and a master OAuth token is created for use across all Microsoft client applications as well as the Edge and Internet Explorer browsers. Subsequent login attempts to any Office resource cause the application to gather an access token and log the user in without ever prompting for credentials. The idea behind this process is that logging in to Windows is enough to identify a user and give them unrestricted access to all Office 365 resources.

In plain language, this means that once you log in to Office 365 from any device (Grandma’s PC, a hotel kiosk, etc.), you, and anyone else using that device, are logged in to Office 365 automatically from then on.

Why is this such a big security issue?

Workplace Join undoes all of your organization’s hard work establishing strong identity processes and procedures—all so that an employee can access corporate data from Grandma’s PC (without entering credentials). Since Grandma only has three grandkids and one cat, it likely won’t take a sophisticated robot to guess her password—exposing corporate data to anyone who accesses her machine. Making matters worse, user accounts on Windows 10 don’t even require passwords, making it even easier for data to be exfiltrated from such unmanaged devices.

Workplace Join is enabled by default for all O365 tenants. Want to turn it off? You’ll have to wait for the next blog post to sort that out.

In the meantime, download the Definitive Guide to CASBs to learn how cloud access security brokers can help secure your sensitive data.

Cloud Security Trailing Cloud App Adoption in 2018

By Jacob Serpa, Product Marketing Manager, Bitglass

In recent years, the cloud has attracted countless organizations with its promises of increased productivity, improved collaboration, and decreased IT overhead. As more and more companies migrate, more and more cloud-based tools arise.

In its fourth cloud adoption report, Bitglass reveals the state of cloud in 2018. Unsurprisingly, organizations are adopting more cloud-based solutions than ever before. However, their use of key cloud security tools is lacking. Read on to learn more.

The Single Sign-On Problem

Single sign-on (SSO) is a basic but critical security tool that authenticates users across cloud applications by requiring them to sign in to a single portal. Unfortunately, a mere 25 percent of organizations are using an SSO solution today. Compared with the 81 percent of companies that are using the cloud, it becomes readily apparent that there is a disparity between cloud usage and cloud security usage. This is a big problem.

The Threat of Data Leakage

While using the cloud is not inherently riskier than the traditional way of conducting business, it does bring different threats that must be addressed appropriately. As adoption of cloud-based tools continues to grow, organizations must deploy cloud-first security solutions in order to defend against modern threats. While SSO is one such tool that is currently underutilized, other relevant security capabilities include shadow IT discovery, data loss prevention (DLP), contextual access control, cloud encryption, malware detection, and more. Failure to use these tools can prove fatal to any enterprise in the cloud.

Microsoft Office 365 vs. Google’s G Suite

Office 365 and G Suite are the leading cloud productivity suites. They each offer a variety of tools that can help organizations improve their operations. Since Bitglass’ 2016 report, Office 365 has been deployed more frequently than G Suite. Interestingly, this year, O365 has extended its lead considerably. While roughly 56 percent of organizations now use Microsoft’s offering, about 25 percent are using Google’s. The fact that Office 365 has achieved more than two times as many deployments as G Suite highlights Microsoft’s success in positioning its product as the solution of choice for the enterprise.

The Rise of AWS

Through infrastructure as a service (IaaS), organizations are able to avoid making massive investments in IT infrastructure. Instead, they can leverage IaaS providers like Microsoft, Amazon, and Google in order to achieve low-cost, scalable infrastructure. In this year’s cloud adoption report, every analyzed industry exhibited adoption of Amazon Web Services (AWS), the leading IaaS solution. While the technology vertical led the way at 21.5 percent adoption, 13.8 percent of all organizations were shown to use AWS.

To gain more information about the state of cloud in 2018, download Bitglass’ report, Cloud Adoption: 2018 War.

Five Cloud Migration Mistakes That Will Sink a Business

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Today, with the growing popularity of cloud computing, there exists a wealth of resources for companies that are considering, or are in the process of, migrating their data to the cloud. From checklists to best practices, the Internet teems with advice. But what about the things you shouldn’t be doing? The best-laid plans of mice and men often go awry, and so, too, will your cloud migration unless you manage to avoid these common cloud mistakes:

“The Cloud Service Provider (CSP) will do everything.”

Cloud computing offers significant advantages: cost, scalability, on-demand service and infinite bandwidth. And the processes, procedures, and day-to-day activities a CSP delivers provide every cloud customer, regardless of size, with the capabilities of a Fortune 50 IT staff. But nothing is idiot-proof. CSPs aren’t responsible for everything; they are only in charge of the parts they can control under the shared responsibility model, and they expect customers to own more of the risk mitigation.

Advice: Take the time upfront to read the best practices for the cloud you’re deploying to. Follow cloud design patterns and understand your responsibilities; don’t trust that your cloud service provider will take care of everything. Remember, it is a shared responsibility model.

“Cryptography is the panacea; data-at-rest, data-in-motion and data-in-use protection works the same in the cloud.”

Cybersecurity professionals refer to the triad balance: Confidentiality, Integrity and Availability. Increasing one decreases the other two. In the cloud, availability and integrity are built into every service and even guaranteed with Service Level Agreements (SLAs). The last bullet in the confidentiality chamber is cryptography: mathematically adjusting information to make it unreadable without the appropriate key. However, cryptography works differently in the cloud. Customers expect service offerings to work together, and so the CSP provides the “80/20” security with less effort (i.e., CSP-managed keys).

Advice: Expect that while you must use encryption for the cloud, there will be a learning curve. Take the time to read through the FAQs and understand what threats each architectural option really opens you up to.

“My cloud service provider’s default authentication is good enough.”

One of cloud’s tenets is self-service. CSPs have a duty to protect not just you, but themselves and everyone else virtualized in their environment. One of the earliest self-service aspects is authentication: the act of proving you are who you say you are. There are three ways to provide this proof: 1) something you know (e.g., a password); 2) something you have (e.g., a key or token); and 3) something you are (e.g., a fingerprint or retina scan). These are all commonplace. For example, most enterprise systems require a password with a complexity requirement (upper/lower case, characters, numbers), and even banks now require customers to enter additional one-time codes received as text messages. These techniques make authentication stronger and more reliable; multi-factor authentication simply uses more than one of them.

Advice: Cloud Service Providers offer numerous authentication upgrades, including some sort of multi-factor authentication option—use them.
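
For a feel of what one common second factor actually is under the hood, here is a minimal time-based one-time password (TOTP, RFC 6238) sketch, the “something you have” codes that authenticator apps generate from a shared secret. It is illustrative only; use your provider’s MFA offering rather than hand-rolled crypto.

    # Minimal RFC 6238 TOTP: HMAC-SHA1 over a 30-second time counter.
    import base64
    import hashlib
    import hmac
    import struct
    import time

    def totp(secret_b32, period=30, digits=6):
        key = base64.b32decode(secret_b32)
        counter = struct.pack(">Q", int(time.time() // period))
        mac = hmac.new(key, counter, hashlib.sha1).digest()
        offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % 10 ** digits).zfill(digits)

    print(totp("JBSWY3DPEHPK3PXP"))  # demo secret; matches standard authenticator apps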

“Lift and shift is the clear path to cloud migration.”

Cloud cost advantages evaporate quickly with poor strategic decisions or architectural choices. A lift-and-shift migration takes existing virtualized images or snapshots of current in-house systems and simply transforms and uploads them onto a cloud service provider’s system. If all you want is to run the exact same system you ran in-house, only rented on an IaaS platform, it will cost less to buy the hardware as a capital asset and depreciate it over three years. The lift-and-shift approach ignores the elastic ability to scale up and down on demand, and it forgoes rigorously tested cloud design patterns that deliver resiliency and security. There may be systems within a design that are appropriate to copy over exactly; however, placing an entire enterprise architecture directly onto a CSP would be costly and inefficient.

Advice: Invest the time up front to redesign your architecture for the cloud, and you will benefit greatly.

“Of course, we’re compliant.”

Enterprise risk and compliance departments have decades of frameworks, documentation and mitigation techniques to draw on. Cloud-specific control frameworks are less than five years old; they are solid, and they continue to become better understood each year.

However, adopting the cloud needs special attention, especially when it comes to non-enterprise risks such as economic denial of service (running your credit card over its limit), third-party-managed encryption keys that potentially give others access to your data (warrants/eDiscovery), or a compromised root administrator account (the CSP shutting down your account and forcing physical verification for reinstatement).

Advice: These items don’t have direct analogs in the enterprise risk universe, so your understanding must expand, especially in highly regulated industries. Don’t court massive fines, operational downtime or reputational losses by ignoring a widened risk environment.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Cybersecurity and Privacy Certification from the Ground Up

By Daniele Catteddu, CTO, Cloud Security Alliance

The European Cybersecurity Act, proposed in 2017 by the European Commission, is the most recent of several policy documents adopted or proposed by governments around the world, each intended (among other objectives) to bring clarity to cybersecurity certifications for various products and services.

The reason cybersecurity, and most recently privacy, certifications are so important is pretty obvious: they are a vehicle of trust and serve to provide assurance about the level of cybersecurity a solution offers. They represent, at least in theory, a simple mechanism through which organizations and individuals can make quick, risk-based decisions without needing to fully understand the technical specifications of the service or product they are purchasing.

What’s in a certification?

Most of us struggle to keep pace with technological innovations, and so we often find ourselves buying services and products without sufficient levels of education and awareness of the potential side effects these technologies can bring. We don’t fully understand the possible implications of adopting a new service, and sometimes we don’t even ask ourselves the most basic questions about the inherent risks of certain technologies.

In this landscape, certifications, compliance audits, trust marks and seals are mechanisms that help improve market conditions by providing a high-level representation of the level of cybersecurity a solution could offer.

Certifications are typically performed by a trusted third party (an auditor or a lab) who evaluates and assesses a solution against a set of requirements and criteria that are in turn part of a set of standards, best practices, or regulations. In the case of a positive assessment, the evaluator issues a certification or statement of compliance that is typically valid for a set length of time.

One of the problems with certifications under the current market condition is that they have a tendency to proliferate, which is to say that for the same product or service more than one certification exists. The example of cloud services is pretty illustrative of this issue. More than 20 different schemes exist to certify the level of security of cloud services, ranging from international standards to national accreditation systems to sectorial attestation of compliance.

Such a proliferation of certifications can produce the exact opposite of the result certifications were built for. Rather than supporting and streamlining the decision-making process, they can create confusion; rather than increasing trust, they can breed uncertainty. It should be noted, however, that such proliferation isn’t always a bad thing. Sometimes it is the result of the need to accommodate important nuances among security requirements.

Crafting the ideal certification

CSA has been a leader in cloud assurance, transparency and compliance for many years now, supporting the effort to improve the certification landscape. Our goal has been—and still is—to make the cloud and IoT technology environment more secure, transparent, trustworthy, effective and efficient by developing innovative solutions for compliance and certification.

It’s in this context that we are surveying our community and the market at large to understand what both subject-matter experts and laypersons see as the essential features and characteristics of the ideal certification scheme or meta-framework.

Our call to action?

Tell us—in a paragraph, a sentence or a word—what you think a cybersecurity and privacy certification should look like. Tell us what the scope should be (security/privacy, products/processes/people, cloud/IoT, global/regional/national), what level of assurance should be offered, which guarantees and liabilities should be expected, what the tradeoff between cost and value is, and how it should be positioned and communicated so that it is understood and valuable for the community at large.

Tell us, but do it before July 2 because that’s when the survey closes.

How ChromeOS Dramatically Simplifies Enterprise Security

By Rich Campagna, Chief Marketing Officer, Bitglass

Google’s Chromebooks have enjoyed significant adoption in education but, until recently, very little interest in the enterprise. According to Gartner’s Peter Firstbrook in Securing Chromebooks in the Enterprise (6 March 2018), a survey of more than 700 respondents showed that nearly half of organizations definitely or probably will purchase Chromebooks by EOY 2017. And Google has started developing an impressive list of case studies, including Whirlpool, Netflix, Pinterest, the Better Business Bureau, and more.

And why wouldn’t this trend continue? As the enterprise adopts cloud en masse, more and more applications are available anywhere through a browser – obviating the need for a full OS running legacy applications. Additionally, Chromebooks can represent a large cost savings – not only in terms of a lower up-front cost of hardware, but lower ongoing maintenance and helpdesk costs as well.

With this shift comes a very different approach to security. Since Chrome OS is hardened and locked down, the need to secure the endpoint diminishes, potentially saving a lot of time and money. At the same time, the primary storage mechanism shifts from the device to the cloud, meaning that the need to secure data in cloud applications, like G Suite, with a Cloud Access Security Broker (CASB) becomes paramount. Fortunately, the CASB market has matured substantially in recent years, and is now widely viewed as “ready for primetime.”

Overall, the outlook for Chromebooks in the enterprise is positive, with a very real possibility of dramatically simplified security. Instead of patching and protecting thousands of laptops, the focus shifts toward protecting data in a relatively small number of cloud applications. Quite the improvement!

What If the Cryptography Underlying the Internet Fell Apart?

By Roberta Faux, Director of Research, Envieta

Without the encryption used to secure passwords for logging in to services like PayPal, Gmail, or Facebook, users are left vulnerable to attack. Online security is becoming fundamental to life in the 21st century. Once quantum computing arrives, all the secret keys we use to secure our online lives are in jeopardy.

The CSA Quantum-Safe Security Working Group has produced a new primer on the future of cryptography. The paper, “The State of Post-Quantum Cryptography,” is aimed at helping non-technical corporate executives understand the impact quantum computers will have on today’s security infrastructure.

Some topics covered include:
–What Is Post-Quantum Cryptography
–Breaking Public Key Cryptography
–Key Exchange & Digital Signatures
–Quantum Safe Alternative
–Transition Planning for Quantum-Resistant Future

Quantum Computers Are Coming
Google, Microsoft, IBM, and Intel, as well as numerous well-funded startups, are making significant progress toward quantum computers. Scientists around the world are investigating a variety of technologies to make quantum computers real. While no one is sure when (or even if) quantum computers will be created, some experts believe that within 10 years a quantum computer capable of breaking today’s cryptography could exist.

Effects on Global Public Key Infrastructure
Quantum computing strikes at the heart of the security of the global public key infrastructure (PKI). PKI establishes secure keys for bidirectional encrypted communications over an insecure network. PKI authenticates the identity of information senders and receivers, as well as protects data from manipulation. The two primary public key algorithms used in the global PKI are RSA and Elliptic Curve Cryptography. A quantum computer would easily break these algorithms.

The security of these algorithms is based on mathematical problems in number theory that are intractably hard. However, they are intractable only for a classical computer, where each bit holds a single value (a 1 or a 0). On a quantum computer, where k qubits can represent 2^k values simultaneously, RSA and Elliptic Curve cryptography can be solved in polynomial time using Shor’s algorithm. If quantum computers can scale to work on even tens of thousands of qubits, today’s public key cryptography becomes immediately insecure.
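
A toy example makes the stakes concrete: once the modulus is factored, which is the step Shor’s algorithm makes easy, the RSA private key falls out in a few lines. The textbook-sized numbers below stand in for the 2048-bit moduli used in practice.

    # Toy RSA break: given the factorization n = p*q, recover the private key.
    def recover_private_key(e, p, q):
        phi = (p - 1) * (q - 1)
        return pow(e, -1, phi)  # modular inverse of e (Python 3.8+)

    p, q, e = 61, 53, 17
    n = p * q                    # 3233, the public modulus
    d = recover_private_key(e, p, q)

    message = 65
    cipher = pow(message, e, n)  # encrypt with the public key
    print(pow(cipher, d, n))     # prints 65: decrypted with the recovered key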

Post-Quantum Cryptography
Fortunately, there are cryptographically hard problems that are believed to be secure even from quantum attacks. These crypto-systems are known as post-quantum or quantum-resistant cryptography. In recent years, post-quantum cryptography has received an increasing amount of attention in academic communities as well as from industry. Cryptographers have been designing new algorithms to provide quantum-safe security.

Proposed algorithms are based on a number of underlying hard problems widely believed to be resistant to attacks even with quantum computers. These fall into the following classes:

  • Multivariate cryptography
  • Hash-based cryptography
  • Code-based cryptography
  • Supersingular elliptic curve isogeny cryptography

Our new white paper explains the pros and cons of the various classes of post-quantum cryptography. Most post-quantum algorithms require significantly larger key sizes than existing public key algorithms, which may pose unanticipated issues such as incompatibility with some protocols. Bandwidth will need to increase for key establishment and signatures, and larger keys also mean more storage inside a device.

Cryptographic Standards
Cryptography is typically implemented according to a standard, and standards organizations around the globe are advising stakeholders to plan for the future. In 2015, the U.S. National Security Agency posted a notice urging organizations to plan for the replacement of current public key cryptography with quantum-resistant cryptography. While quantum-safe algorithms exist today, standards are still being put in place.

Standards organizations such as ETSI, IETF, ISO, and X9 are all working on recommendations. The U.S. National Institute of Standards and Technology (NIST) is currently working on a project to produce a draft standard for a suite of quantum-resistant algorithms in the 2022-2024 timeframe. It is a challenging process that has attracted worldwide debate: various algorithms have advantages and disadvantages with respect to computation, key sizes and degree of confidence, and these factors need to be evaluated against the target environment.

Cryptographic Transition Planning
One of the most important issues the paper underscores is the need to begin planning the cryptographic transition from existing public key cryptography to post-quantum cryptography. Now is the time to vigorously investigate the wide range of post-quantum cryptographic algorithms and find the best ones for future use. It is vital that corporate leaders understand this and begin transition planning now.

The white paper, “The State of Post-Quantum Cryptography,” was released by the CSA Quantum-Safe Security Working Group and introduces non-technical executives to the current and evolving landscape of cryptographic security.

Download the paper now.

Surprise Apps in Your CASB PoC

By Rich Campagna, Chief Marketing Officer, Bitglass

Barely five years old, the Cloud Access Security Broker (CASB) market is undergoing its second major shift in primary usage. The first CASBs to hit the market way back in 2013-2014 primarily provided visibility into Shadow IT. Interest in that visibility use case quickly waned in favor of data protection (and later threat protection) for sanctioned, well-known SaaS applications like Office 365 and Box — this was the first major shift in the CASB market.

The second major shift, the one that we’re currently undergoing, doesn’t replace this use case, but adds on to it. As IT and security teams have gotten comfortable with cloud applications like Office 365, the business has responded with demands for more applications. Sometimes that means other SaaS apps; sometimes it means custom apps or packaged software moving to the cloud. Regardless, what started as a relatively small, defined set of applications has exploded to a much broader demand over the past year or so, and is showing no signs of slowing down — this is the second major shift and we’re seeing it in every industry and across organizations of all sizes.

The quandary here is trying to sort out whether the CASBs that you’re evaluating will meet not only your current needs, but the needs of your business down the road as well. A really interesting approach that I have seen several times now is the concept of surprise apps in a proof of concept (PoC). When calling vendors in for the PoC, the enterprise will enumerate some of the applications to be tested, but leave others as a surprise for the vendor. The objective is to test whether the CASB will be able to meet their organization’s future cloud security needs, whatever those might be.

Most CASB vendors still rely on a fixed catalog of supported applications, and you don’t want to wait months (or longer) for a new app to make it onto a vendor’s roadmap when the GM of your company’s biggest line of business is breathing down your neck to deploy the application they so desperately need.