What’s New with the Treacherous 12?

October 20, 2017

By the CSA Top Threats Working Group

In 2016, the CSA Top Threats Working Group published the Treacherous 12: Top Threats to Cloud Computing, which expounds on 12 categories of security issues that are relevant to cloud environments. The 12 security issues were determined by a survey of 271 respondents.

Following the publication of that document, the group has continued to track the cloud security landscape for incidents. This activity culminated in the creation of an update titled Top Threats to Cloud Computing Plus: Industry Insights.

The update validates the continued relevance of the security issues discussed in the earlier document and provides references to, and overviews of, related incidents. In total, 21 anecdotes and examples are featured in the document.

The reference and overview for each anecdote and example were written with the help of publicly available information.

The Top Threats Working Group hopes that shedding light on recent anecdotes and examples related to the 12 security issues will provide readers with relevant context that is current and in line with the security landscape.

CSA Releases Minor Update to CCM, CAIQ

October 19, 2017

By the CSA Research Team

The Cloud Security Alliance has released a minor update to the Cloud Controls Matrix (CCM) and the Consensus Assessments Initiative Questionnaire (CAIQ) v3.0.1. This update incorporates mappings to Shared Assessments 2017 Agreed Upon Procedures (AUP), PCI DSS v3.2, CIS-AWS-Foundations v1.1, HITRUST CSF v8.1, and NZISM v2.5.

The Cloud Security Alliance would like to thank the following individuals and organizations for their contributions to this minor update of the CCM.

Shared Assessments 2017 AUP
Angela Dogan
The Shared Assessments Team

PCI DSS v3.2 
Michael Fasere
Capital One

NZISM v2.5
Phillip Cutforth
New Zealand Government CIO

HITRUST CSF v8.1
CSA CCM Working Group

CIS-AWS-Foundations
Jon-Michael Brook

Learn more about this minor update to the CCM. Please feel free to contact us at [email protected]nce.org if you have any queries regarding the update.

If you are interested in participating in future CCM Working Group activities, please feel free to sign up for the working group.

The GDPR and Personal Data…HELP!

October 4, 2017

By Chris Lippert, Senior Associate, Schellman & Co.

With the General Data Protection Regulation (GDPR) becoming effective May 25, 2018, organizations (or rather, organisations) seem to be stressing a bit. Most organizations we speak with are asking, “Where do we even start?” or “What is included as personal data under the GDPR?” It is safe to say that these are exactly the questions organizations should be asking, but to know where to start, they first need to understand how the GDPR applies to them under its new definition of personal data. Without first understanding what to look for, an organization cannot begin to perform data discovery and data mapping exercises, review data management practices, and prepare for compliance with the GDPR.

Personal data redefined…sort of.
To start – is personal data redefined by the GDPR? Yes. Is the new definition more encompassing? Yes. Does the regulation provide a good amount of guidance on interpreting that definition? In some areas, but not in others.

The Articles of the GDPR open with a list of definitions in Article 4 that provide some guidance on how to digest the remainder of the regulation—the recitals also contain some nuggets of wisdom if you have time to review. Personal data is the very first definition listed under Article 4, hinting that it is most likely pertinent to a comprehensive understanding of the regulation. Article 4(1) states:

‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.

In breaking down this definition, there are a few key phrases to focus on. Any information is the big one, as it confirms that personal data, under this regulation, is not limited to a particular group or type of data. Relating to specifies that personal data can encompass any group or type of data, as long as the data is tied to or related to something else. What is that something else? A natural person. A natural person is just that—an actual human being to whom the data applies.

You may have noticed I skipped the ‘an identified or identifiable’ portion of the definition—identified or identifiable means that the natural person has either already been identified, or can readily be identified utilizing other available information. Article 4(1) adds further clarity here, stating that an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. The fact that name, identification number, location data and online identifier are specifically referenced at the beginning of this definition is important, as those pieces of data serve to directly identify an individual. If that specific data is held by the organization, all related data is in scope.

However, if those unique identifiers are not held, your organization should reference the list of other data that could otherwise identify the natural person and bring everything into scope. For example, you may not have John Smith’s name in your database, but you may have a salary, company name, and city that point directly to John Smith when linked together.
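To make the linkage risk concrete, here is a minimal sketch in Python; the records and field names are hypothetical, chosen only to show how a few innocuous attributes can single out one person:

    from collections import Counter

    # Hypothetical records: no names anywhere, yet a combination of
    # salary, company, and city can map to exactly one individual.
    records = [
        {"salary": 85000, "company": "Acme Corp", "city": "Leeds"},
        {"salary": 85000, "company": "Acme Corp", "city": "Bristol"},
        {"salary": 42000, "company": "Acme Corp", "city": "Leeds"},
    ]

    def singles_out(records, keys):
        """True if some combination of the given attribute values is
        unique to a single record."""
        counts = Counter(tuple(r[k] for k in keys) for r in records)
        return any(n == 1 for n in counts.values())

    print(singles_out(records, ["salary", "company", "city"]))  # True

Because each combination here is unique to one record, those records would relate to an identifiable natural person even though no name is stored.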

In addition to the new definition of personal data, the GDPR also adds some more specificity around what it deems “special categories” of personal data. Article 9(1) states:

processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade-union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.

This definition is important, as it states that certain personal data falls into a subcategory that has stricter processing requirements. Although the text above states that processing of special categories of personal data is prohibited, it is important to note that there are exceptions to this rule. Organizations should reference Article 9 if they believe special categories of data to be in scope.

So how does this definition differ from previous definitions of personal data?
Even though the GDPR “redefines” personal data, is it really all that different from existing definitions? As a baseline, let’s refer to two of the more commonly used definitions for personal data taken from the GDPR’s predecessor—the Data Protection Directive—and NIST 800-122.

The Data Protection Directive defines personal data in Article 2(a), which states: “‘personal data’ shall mean any information relating to an identified or identifiable natural person (‘data subject’); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity.” This definition is almost identical to that of the GDPR. The main difference is that the GDPR added further data that can identify an individual, such as name, location data, and online identifiers. By adding these into the mix, the GDPR clarifies where individuals are presumed to be identifiable, helping organizations understand that the data associated with those identifiers is in scope and covered under the regulation.

Special categories of personal data are also defined under the Data Protection Directive. Article 8(1) states: “Member States shall prohibit the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and the processing of data concerning health or sex life.” The GDPR expanded on this definition as well, adding genetic data, biometric data, and sexual orientation to the special categories. Essentially, the GDPR has taken the definitions for both personal data and special categories from the Data Protection Directive and provided more clarity, while making them more inclusive at the same time.

Most people probably expect the Data Protection Directive and GDPR to have similar definitions, as they are essentially versions 1 and 2 of modern EU data privacy legislation. However, when compared to the definition of personal data contained in U.S.-based guidance, we start to see some key differences. As the National Institute of Standards and Technology (NIST) is widely accepted, let’s look at the definition of personal data found in its 2010 Guide to Protecting the Confidentiality of Personally Identifiable Information (PII). NIST 800-122, Section 2.1, states that PII is “any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual’s identity, such as name, social security number, date and place of birth, mother’s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.”

In breaking down this NIST definition, we see some similarities, in that it starts off just as broadly with the phrase “any information.” In the same vein, the wording “about an individual” parallels the clarification provided in the GDPR definition. That said, the definition then goes on to add more specifics regarding information that can identify or be linked to an individual, which is where we start to notice some differences. The identifying pieces of information listed in the NIST definition include name, social security number, date and place of birth, mother’s maiden name, and biometric records. The GDPR is a bit more inclusive, listing name, identification number, location data, and online identifier, which covers most of the items from the NIST definition but adds online identifiers as well. While the GDPR doesn’t include biometric data in its main definition, it does cover physical and genetic factors in its list of other identifying factors.

The differences don’t stop there. The NIST definition goes on to provide guidance on other information that could be linked to the individual, but instead of listing specific data elements, it focuses on sectoral categories of data that appear to be derived from the sectoral privacy laws in the United States. The GDPR definition does not follow this pattern; it instead describes the data that can be linked to an individual more generically, listing factors that could be tied to an individual in most industries. Also, while the GDPR states that one or more of those other data elements can themselves identify the individual, the NIST definition only brings that other information into scope once the individual is already identified; it does not state that the information can be used to identify an otherwise unidentified individual.

Final Thoughts
With the GDPR becoming effective next year, it’s clear that this new definition of personal data expands on the preexisting EU definition contained in the Data Protection Directive. Additionally, it adds more specificity to the data that can be used to identify an individual in comparison with leading U.S. personal data definitions.

Why is this so important and relevant to organizations? This new definition of personal data is the most comprehensive to date, bringing more information into scope than any previous definition in industry regulations or standards. Organizations will need to take another look at their previous determinations of personal data and reevaluate their data management practices to ensure that the information they hold has been labeled and handled correctly. In fact, information deemed not applicable under past privacy regulations and standards may now become relevant under the new definition of personal data.

Look no further than IP addresses. Most companies wouldn’t normally lump IP addresses in with personal data, but the GDPR specifically calls out online identifiers in its definition of personal data. The Court of Justice of the European Union (CJEU) issued a judgment to this effect in Case C-582/14, Patrick Breyer v Bundesrepublik Deutschland, setting precedent that even dynamic IP addresses can be considered personal data in certain situations. Given this standard, it will be important for organizations to incorporate judgments from recent cases and guidance from the Article 29 Working Party (being replaced by the European Data Protection Board in May 2018) when determining how the GDPR impacts their organization and how best to comply.

New procedures and criteria can be confusing, but hopefully the information above has provided some clarity around the new definition of personal data that the GDPR introduces next year. Basic knowledge of these definitions can be a starting point for determining how the GDPR applies to your organization, and if approached from a comprehensive data and risk management standpoint, this information can help better prepare your organization for compliance with the GDPR and other future privacy regulations and frameworks.

If you should have any questions regarding the new definition of personal data or the GDPR in general, please feel free to reach out to your friendly neighborhood privacy team here at Schellman.

Webinar: How Threat Intelligence Sharing Can Help You Stay Ahead of Attacks

September 27, 2017

By Lianna Catino, Communications Manager, TruSTAR Technology

According to a recent Ponemon Institute survey of more than 1,000 security practitioners, 84 percent say threat intelligence is “essential to a strong security posture,” but the data is too voluminous and complex to be actionable.

Enter the CloudCISC Working Group. Powered by TruSTAR’s threat intelligence platform, more than 30 CSA enterprise members are now actively exchanging threat data on a daily basis to help them surface relevant intelligence. The platform allows security analysts to mine historical incident data and surface correlations among CSA members’ reports to take faster action against new threats.
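As a rough illustration of what correlating incident data across members can look like, here is a toy sketch in Python; the report identifiers and indicator values are hypothetical and not TruSTAR’s actual data model:

    from collections import defaultdict

    # Toy incident reports: each maps a report ID to the set of
    # indicators of compromise (IOCs) it mentions.
    reports = {
        "member-A/incident-17": {"203.0.113.7", "evil-c2.example", "d41d8cd9"},
        "member-B/incident-42": {"evil-c2.example", "198.51.100.3"},
        "member-C/incident-08": {"d41d8cd9", "10.0.0.9"},
    }

    # Index each IOC to the reports that mention it, then surface any
    # IOC seen by more than one member -- the overlap worth acting on first.
    seen_in = defaultdict(set)
    for report_id, iocs in reports.items():
        for ioc in iocs:
            seen_in[ioc].add(report_id)

    for ioc, where in sorted(seen_in.items()):
        if len(where) > 1:
            print(f"IOC {ioc} correlates across: {sorted(where)}")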

This month CloudCISC marks its one-year anniversary, and to celebrate we’re bringing you a recap of some of the hottest trending threats we’re seeing on the CSA platform in Q3.

Led by CSA and TruSTAR, the webinar walks you through the CloudCISC platform and dissects threats that are specifically relevant and trending among CSA members.

In the event you missed it, you can watch the replay.

Thinking of joining CSA’s Cloud Cyber Intelligence Exchange? Request your invitation today.

Improving Metrics in Cyber Resiliency: A Study from CSA

August 30, 2017

By Dr. Senthil Arul, Lead Author, Improving Metrics in Cyber Resiliency

With the growth in cloud computing, businesses rely on the network to access information assets stored away from the local server. Decoupling information assets from other operational assets could result in poor operational resiliency if the cloud is compromised. Therefore, to keep operational resiliency unaffected, it is essential to bolster the resiliency of information assets in the cloud.

To study the resiliency of cloud computing, the CSA formed a research team consisting of members from both private and public sectors within the Incident Management and Forensics Working Group and the Cloud Cyber Incident Sharing Center.

To measure cyber resiliency, the team leveraged a model developed to measure the resiliency of a community after an earthquake. Expanding this model to cybersecurity introduced two new variables that could be used to improve cyber resiliency.

  • Elapsed Time to Identify Failure (ETIF)
  • Elapsed Time to Identify Threat (ETIT)

Measuring these and developing processes to lower the values of ETIF and ETIT can improve the resiliency of an information system.
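As a minimal sketch of how these metrics might be computed (the timeline and field names below are illustrative assumptions, not the study’s data model):

    from datetime import datetime

    # Hypothetical incident timeline for one organization.
    incident = {
        "threat_published":   datetime(2017, 3, 14),  # vulnerability disclosed
        "threat_identified":  datetime(2017, 5, 12),  # org learns of the threat
        "failure_occurred":   datetime(2017, 5, 12),  # compromise begins
        "failure_identified": datetime(2017, 7, 1),   # org detects the compromise
    }

    # Elapsed Time to Identify Failure: occurrence -> detection
    etif = incident["failure_identified"] - incident["failure_occurred"]
    # Elapsed Time to Identify Threat: disclosure -> awareness
    etit = incident["threat_identified"] - incident["threat_published"]

    print(f"ETIF: {etif.days} days, ETIT: {etit.days} days")

Lower values on both mean the organization detects threats and failures sooner, which is the lever the study identifies for improving resiliency.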

The study also looked at recent cyberattacks and measured ETIF for each of them. The results showed that the forensic analysis process is not standard across industries and, as such, the data in the public domain are not comparable. Therefore, to improve cyber resiliency, the team recommends that the calculation and publication of ETIF be transferred from the companies that experienced the cyberattacks to an independent body (such as companies in the IDS space). A technical framework and an appropriate regulatory framework need to be created to enable the measurement and reporting of ETIF and ETIT.

Download the full study.

Security Needs Vs. Business Strategy – Finding a Common Ground

August 21, 2017

By Yael Nishry, Vice President of Business Development, Vaultive

Even before cloud adoption became mainstream, it wasn’t uncommon for IT security needs to conflict with both business strategy and end-user preferences. Almost everyone with a background in security has found themselves in the awkward position of having to advise against a technology with significant appeal and value because it would introduce too much risk.

In my time working both as a vendor and as a risk management consultant, few IT leaders I’ve come across want to be a roadblock when it comes to achieving business goals and accommodating (reasonable) user preferences and requests. However, they also understand the costs of a potential security or non-compliance issue down the road. Unfortunately, many IT security teams have also experienced the frustration of being overridden, either officially by executives electing to accept the risk or by users adopting unregulated, unsanctioned applications and platforms, introducing risk into the organization against their recommendation.

In today’s world of cloud computing, there are more vendor options than ever, and end users often come to the table with their preferences and demands. More and more, I speak with IT and security leaders who have been directed to move to the cloud or have been pressured to move data to a specific cloud application for business reasons but find themselves saying no because the native cloud security controls are not enough.

Fortunately, in the past few years, solutions have emerged that allow IT and security leaders to stop saying no and instead enable the adoption of business-driven requests while giving IT teams the security controls they need to reduce risk. Cloud vendors spend a lot of time and resources securing their infrastructure and applications, but they are not responsible for ensuring compliant cloud usage in their customers’ organizations.

The legal liability for data breaches is yours and yours alone. Only you can guarantee compliant usage within your organization, so it’s important to understand the types of data that will be flowing into the cloud environment and work with the various stakeholders to enforce controls that reduce risk to an acceptable level and comply with any geographic or industry regulations.

It can be tempting, as always, to lock everything down and allow users only the most basic functionality in cloud applications. However, that often results in a poor user experience and leads to unsanctioned cloud use and shadow IT.

While cloud environments are very different from on-premises environments, many of the same security principles remain valid. As a foundation, I often guide organizations to look at what they are doing today for on-premises security and begin by extending those same principles into the cloud. Three useful principles to begin with are:

Privilege Management
Privilege management has been used in enterprises for years as an on-premises method to secure sensitive data and guide compliant user behavior by limiting access. In some cloud services, like Amazon Web Services (AWS), individual administrators can quickly amass enough power to cause significant downtime or security concerns, either unintentionally or through compromised credentials. Ensuring appropriate privilege management in the cloud can help reduce that risk.
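As a small illustration, here is a hedged sketch using the AWS SDK for Python (boto3) to create a read-only policy rather than granting broad rights; the policy name and action list are hypothetical examples, not recommendations:

    import json
    import boto3  # AWS SDK for Python; assumes credentials are configured

    # Instead of granting an administrator broad EC2 rights, allow only
    # read-only describe calls: visibility without the power to change anything.
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["ec2:Describe*"],
            "Resource": "*",
        }],
    }

    iam = boto3.client("iam")
    iam.create_policy(
        PolicyName="analyst-ec2-read-only",  # hypothetical policy name
        PolicyDocument=json.dumps(policy_document),
    )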

In addition to traditional privilege management, the cloud also introduces a unique challenge when it comes to cloud service providers. Since they can access your cloud instance, it’s important to factor into your cloud risk assessment that your cloud provider also has access to your data. If you’re concerned about insider threats or government data requests served directly to the cloud provider, evaluating options to segregate data from your cloud provider is recommended.

Data Loss Prevention
Another reason it’s so important to speak with stakeholders and identify the type of data flowing into the cloud is to determine what data loss prevention (DLP) policies you need to enforce. Common data characteristics to look out for include personally identifiable information, credit card numbers, and even source code. If you’re currently using on-premises DLP, it’s a good time to review and update your organization’s already-defined patterns and data classification definitions to ensure that they remain valid and relevant as you extend them to the cloud.
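As an illustration, here is a minimal sketch of pattern-based DLP in Python; the patterns are simplified examples, and real deployments would tune them to the organization’s own classification definitions:

    import re

    # Simplified, illustrative DLP patterns.
    PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    }

    def luhn_ok(candidate: str) -> bool:
        """Luhn checksum, used to cut false positives on card-like digit runs."""
        digits = [int(d) for d in re.sub(r"\D", "", candidate)][::-1]
        total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
        return total % 10 == 0

    def scan(text: str) -> list[str]:
        """Return the labels of sensitive patterns found in the text."""
        hits = []
        for label, pattern in PATTERNS.items():
            for match in pattern.findall(text):
                if label == "credit_card" and not luhn_ok(match):
                    continue  # digit run failing the checksum is likely benign
                hits.append(label)
        return hits

    print(scan("Card 4111 1111 1111 1111 mailed to jane@example.com"))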

It’s also important to educate end users on what to expect. Good cloud security should be mostly frictionless, but if you decide to enforce policies such as blocking a transaction or requiring additional authentication for sensitive transactions, it’s important to include this in your training materials and any internal documentation provided to users. It not only lets users know what to expect, leading to fewer helpdesk tickets, but can also be used to refresh users on internal policies and security basics.

Auditing
A key aspect of any data security strategy is to maintain visibility into your data to ensure compliant usage. Companies need to make sure that they do not lose this capability as they migrate their data and infrastructure into the cloud. If you use security information and event management (SIEM) tools today, it’s worth taking the time to decide which cloud applications and transactions you should integrate into your reports.
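For instance, here is a minimal sketch of normalizing a cloud audit event into the kind of flat record a SIEM can ingest; the input mimics a CloudTrail-style event, and the output field names are illustrative assumptions:

    import json

    # Flatten a CloudTrail-style audit event for SIEM ingestion.
    def normalize(event: dict) -> dict:
        return {
            "timestamp": event.get("eventTime"),
            "actor": event.get("userIdentity", {}).get("arn", "unknown"),
            "action": event.get("eventName"),
            "source_ip": event.get("sourceIPAddress"),
            "service": event.get("eventSource"),
        }

    raw = {
        "eventTime": "2017-08-21T09:15:00Z",
        "eventName": "DeleteBucket",
        "eventSource": "s3.amazonaws.com",
        "sourceIPAddress": "203.0.113.7",
        "userIdentity": {"arn": "arn:aws:iam::123456789012:user/alice"},
    }
    print(json.dumps(normalize(raw)))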

By extending the controls listed above into your cloud environment, you can establish a common ground of good security practices that protect business-enabling technology. With the right tools and strategy in place, it’s possible to stop saying no outright and instead come to the table prepared to empower relevant business demands while maintaining appropriate security and governance controls.

Ransomware Explained

August 18, 2017

By Ryan Hunt, PR and Content Manager, SingleHop

How It Works, Plus Tips for Prevention and Recovery
Ransomware attacks — a type of malware (a.k.a. malicious software) — are proliferating around the globe at a blistering pace. In Q1 2017, a new specimen emerged every 4.2 seconds!* What makes ransomware a go-to mechanism for cyber attackers? The answer is in the name itself.

How it works
Unlike other hacks, the point of ransomware isn’t to steal or destroy valuable data; it’s to hold it hostage.

Ransomware enters computer systems via email attachments, pop-up ads, outdated business applications and even corrupted USB sticks.

Even if only one computer is initially infected, ransomware can easily spread network-wide via a LAN or by gaining access to usernames and passwords.

Once the malware activates, the hostage situation begins: Data is encrypted and the user is instructed to pay a ransom to regain control.

Ransomware Prevention

  1. Install Anti-Virus/Anti-Malware Software
  2. But Be Sure to Update & Patch Software/Operating Systems
  3. Invest In Enterprise Threat Detection Systems and Mail Server Filtering
  4. Educate Employees on Network Security

What to do if your data is held hostage? If attacked, should your company pay?
Remember: Preventative measures are never 100% effective.

Paying the ransom might get you off the hook quickly, but will make you a repeat target for attack.

There’s a better way
Beat the attackers to the punch by investing in Cloud Backups and Disaster Recovery as a Service.

Backups
Daily Offsite Backups = You’ll Always Have Clean, Recent Copies of Your Data

Disaster Recovery
Disaster Recovery Solutions are crucial in the event Ransomware compromises your entire system. Here, you’ll be able to operate your business as usual via a redundant network and infrastructure. Sorry, Malware Ninjas.
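For teams that script their own offsite copies, a minimal sketch of the daily-backup idea (using boto3 with a hypothetical bucket name and source path; a production disaster-recovery setup would add encryption, integrity checks, and retention policies):

    from datetime import date
    from pathlib import Path

    import boto3  # AWS SDK for Python; assumes credentials are configured

    # Upload every file under source_dir to an offsite bucket, keyed by
    # date so each day's run is a clean, separate snapshot.
    def backup_daily(source_dir: str, bucket: str) -> None:
        s3 = boto3.client("s3")
        prefix = f"backups/{date.today().isoformat()}"
        for path in Path(source_dir).rglob("*"):
            if path.is_file():
                key = f"{prefix}/{path.relative_to(source_dir)}"
                s3.upload_file(str(path), bucket, key)

    backup_daily("/var/data", "example-offsite-backups")  # hypothetical values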

Is the Cloud Moving Too Fast for Security?

July 28, 2017

By Doug Lane, Vice President/Product Marketing, Vaultive

In February 2017, a vulnerability was discovered in Slack that had the potential to expose the data of the company’s reported four million daily active users. The same month, a flaw at CloudFlare, a content delivery network, leaked sensitive customer data stored by millions of websites powered by the company. On March 7, the WikiLeaks CIA Vault 7 release exposed 8,761 documents on alleged agency hacking operations. On June 19, Deep Root Analytics, a conservative data firm, misconfigured an Amazon S3 server that housed information on 198 million U.S. voters. On July 12, Verizon had the same issue, announcing that a misconfigured Amazon S3 data repository at a third-party vendor had exposed the data of more than 14 million U.S. customers.

That’s at least five major cloud application and infrastructure data breach incidents in 2017, and we’re only in July. Add in the number of ransomware and other attacks during the first half of the year, and it’s clear the cloud has a real security problem.

By now, most everyone recognizes the benefits of the cloud: bringing new applications and infrastructure online quickly and scaling to meet ever-changing business demands. Although highly valuable for the business side, when security teams lose control over how and where new services are implemented, the network is at risk and, subsequently, so is their data. Balancing the business’s need to move at the speed of the cloud with the needed security controls is becoming increasingly difficult, and the spike in data exposures and breaches shows that security teams are struggling to secure cloud use.

The Slack vulnerability is a great example at the application level. Slack is simple to use and implement, which has driven the application’s record-breaking growth. Departments, teams, and small groups can easily spin up Slack without IT approval or support, and instances of the application can spread quickly across an organization. Although Slack patched the vulnerability identified in February before any known exposure occurred, a successful attacker could have had full access to and control over four million user accounts.

In the Verizon situation, a lack of control at the infrastructure level is what caused so many of their customers to be exposed this month. When servers can be brought online so easily and configured remotely by third-party partners, the right security protocols can be missed or ignored.

As more businesses move to the cloud and as cloud services continue to grow, organizations must establish a unified set of cloud security and governance controls for business-critical SaaS applications and IaaS resources. In most cases, cloud providers will have stronger security than any individual company can maintain and manage on-premises. However, each new service comes with its own security capabilities, which can increase risk because of feature gaps or human error during configuration. Adding encryption and policy controls independently of the vendor is a proven way for organizations to entrust their data to a cloud provider without giving up control over who can access it, while also making sure employees are compliant when using SaaS applications. These controls allow businesses to move at the speed of the cloud without placing their data at risk.
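The core idea of vendor-independent encryption fits in a few lines; this is a minimal sketch using the Python cryptography library’s Fernet recipe, with the key assumed to live in the organization’s own key store rather than with the cloud provider:

    from cryptography.fernet import Fernet  # pip install cryptography

    # The key stays in the organization's own key store; the provider
    # only ever stores ciphertext it cannot read.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b"customer: John Smith, account: 12345"
    ciphertext = cipher.encrypt(record)     # what the cloud provider stores
    plaintext = cipher.decrypt(ciphertext)  # recoverable only with the org's key
    assert plaintext == record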

The reality is that threats are increasing in frequency and severity. The people behind attacks are far more sophisticated, and their intentions far more sinister. We, as individuals and businesses, entrust a mind-boggling amount of data to the cloud, but there is no way today to entirely prevent hackers from getting through the door at the service, infrastructure, or software provider. Remaining in control of your data as it traverses all the cloud services you use is the safest thing you can do to protect your business. Because, in the end, if they can’t read it or use it, is data really data?

Guidance for Critical Areas of Focus in Cloud Computing Has Been Updated

July 26, 2017

Newest version reflects real-world security practices, future of cloud computing security

By J.R. Santos, Executive Vice President of Research, Cloud Security Alliance

Today marks a momentous day not only for CSA but for all IT and information security professionals as we release Guidance for Critical Areas of Focus in Cloud Computing 4.0, the first major update to the Guidance since 2011.

As anyone involved in cloud security knows, the landscape we face today is a far cry from what was going on 10, or even five, years ago. To keep pace with those changes, almost every aspect of the Guidance was reworked. In fact, almost 80 percent of it was rewritten from the ground up, and domains were restructured to better reflect the current state of cloud computing, as well as the direction in which this critical sector is heading.

For those unfamiliar with what is widely considered to be the definitive guide for cloud security, the Guidance acts as a practical, actionable roadmap for individuals and organizations looking to safely and securely adopt the cloud paradigm. This newest version includes significant content updates to address leading-edge cloud security practices and incorporates more of the various applications used in the security environment today.

Guidance 4.0 covers such topics as:

  • DevOps, continuous delivery, and secure software development;
  • Software Defined Networks, the Software Defined Perimeter, and cloud network security;
  • Microservices and containers;
  • New regulatory guidance and evolving roles of audits and compliance inheritance;
  • Using CSA tools such as the CCM, CAIQ, and STAR Registry to inform cloud risk decisions;
  • Securing the cloud management plane;
  • More practical guidance for hybrid cloud;
  • Compute security guidance for containers and serverless, plus updates to managing virtual machine security; and
  • The use of immutable, serverless, and “new” cloud architectures.

Today is the culmination of more than a year of input and review from the CSA and information security communities. Guidance 4.0 was drafted using an open research model (a herculean effort for those unfamiliar with the process), and none of it would have been possible without the assistance of Securosis, whose research analysts oversaw the project. We owe them—and everyone involved—a tremendous thanks.

You can learn more about the Guidance and read the updated version here.

Patch Me If You Can

July 24, 2017

By Yogi Chandiramani, Technical Director/EMEA, Zscaler

In May, the worldwide WannaCry attack infected more than 200,000 workstations. A month later, just as organizations were regaining their footing, we saw another ransomware attack, which impacted businesses in more than 65 countries.

What have we learned about these attacks?

  • Compromises/infections can happen no matter what types of controls you implement – zero risk does not exist
  • The security research community collaborated to identify indicators of compromise (IOCs) and provide steps for mitigation
  • Organizations with an incident response plan were more effective at mitigating risk
  • Enterprises with a patching strategy and process were better protected

Patching effectively
Two months before the attack, Microsoft released a patch for the vulnerability that WannaCry exploited. But, because many systems did not receive the patch, and because WannaCry was so widely publicized, the patching debate made it to companies’ board-level leadership, garnering the sponsorship needed for a companywide patch strategy.

Even so, the June 27 attack spread laterally using the SMB protocol a month after WannaCry, by which time most systems should have been patched. Does the success of this campaign reflect a disregard for the threat? A lack of urgency when it comes to patching? Or does the problem come down to the sheer volume of patches?

Too many security patches
As we deploy more software and more devices to drive productivity and improve business outcomes, we create new vulnerabilities. Staying ahead of them is daunting, given the need to continually update security systems and patch end-user devices running different operating systems and software versions. Along with patch and version management, there is change control, outage windows, documentation processes, post-patch support, and more. And it’s only getting worse.

The following graph illustrates the severity of vulnerabilities over time; halfway through 2017, the number of disclosed vulnerabilities is already close to the full-year total for 2016.

[Figure: CVSS severity distribution over time. Source: National Vulnerability Database, National Institute of Standards and Technology (NIST), https://nvd.nist.gov/vuln-metrics/visualizations/cvss-severity-distribution-over-time]

The challenge for companies is the sheer number of patches that need to be processed to remain fully up to date (a volume that continues to increase). Technically speaking, systems will always be one step behind in terms of vulnerability patching.

Companies must become aware of security gaps
In light of the recent large-scale attacks, companies should revisit their patching strategy as part of their fundamental security posture. Where are the gaps? The only way to know is through global visibility — for example, visibility into vulnerable clients or into botnet traffic — which provides key insights into where to start and what to focus on.

Your security platform’s access logs are a gold mine, providing data as well as context, with information such as who, when, where, and how traffic is flowing through the network. A sample log entry showing a botnet callback attempt, for instance, makes clear where to focus your attention and your security investments.
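In the spirit of such a log, here is a hypothetical sketch of scanning proxy-log lines for callback indicators; the log format and the indicator list are illustrative, and a real deployment would pull indicators from a threat-intelligence feed:

    # Hypothetical proxy-log scan for botnet callback attempts.
    KNOWN_CALLBACK_DOMAINS = {"evil-c2.example", "beacon.bad.example"}

    log_lines = [
        "2017-07-20T10:02:11Z 10.1.4.23 GET http://beacon.bad.example/ping 200",
        "2017-07-20T10:02:15Z 10.1.4.24 GET http://intranet.example/home 200",
    ]

    for line in log_lines:
        timestamp, client_ip, method, url, status = line.split()
        host = url.split("/")[2]  # crude host extraction, fine for the sketch
        if host in KNOWN_CALLBACK_DOMAINS:
            print(f"Possible botnet callback from {client_ip} at {timestamp}")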

Access logs can likewise identify potentially vulnerable browsers or plugins. It’s important to ensure that your update strategies include these potential entry points for malware as well.

These are but two examples of potential gaps that can be easily closed with the appropriate insight into what software and versions are being used within an organization. As a next step, companies should focus on patching the gaps with the highest known risk first.
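A simple way to operationalize “highest known risk first” is to rank outstanding findings by CVSS base score; in this minimal sketch, the hosts are hypothetical and the scores shown are illustrative:

    # Rank outstanding findings by CVSS base score and patch from the top.
    findings = [
        {"host": "web-01", "cve": "CVE-2017-0144", "cvss": 8.1},
        {"host": "hr-12", "cve": "CVE-2016-0189", "cvss": 7.5},
        {"host": "dev-03", "cve": "CVE-2017-5638", "cvss": 10.0},
    ]

    for f in sorted(findings, key=lambda f: f["cvss"], reverse=True):
        print(f'{f["host"]}: {f["cve"]} (CVSS {f["cvss"]})')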

But patching remains an onerous, largely manual task that is difficult to manage. A better alternative is a cloud-delivered security-as-a-service solution, which automates updates and the patching process. With threat actors becoming increasingly inventive as they design their next exploits, it pays to have a forward-thinking strategy that reduces the administrative overhead, improves visibility, and delivers protections that are always up to date.