Why You Need a CASB for GDPR Compliance

April 4, 2017 | Leave a Comment

By Rich Campagna, Senior Vice President/Products & Marketing, Bitglass

With enforcement of the EU’s General Data Protection Regulation (GDPR) just over a year away in May 2018, your planning efforts should already be well underway. Adoption of cloud applications across the EU continues at a rapid clip, and the global nature of leading cloud applications means that protecting personal data and maintaining data residency can be difficult.

With mandatory breach notifications and very steep fines (up to 4% of annual revenues), the cost of non-compliance is high. On the other hand, it’s nearly impossible to stop the move to cloud in most organizations, so that’s not an option either. Fortunately, you still have time to arm your organization with the key to combining cloud adoption and GDPR compliance: a cloud access security broker (CASB). Let’s take a look at some of the areas where a CASB can help:

  • Identifying personal data – The EU GDPR is primarily concerned with the protection of any data that can be used to identify a person (name, address, email, driver’s license number, and much, much more). The first thing you need to do in order to protect that data is to identify where it is. CASBs can scan both data-in-transit and data-at-rest across a wide range of cloud-delivered apps (SaaS, IaaS, and custom applications). Any CASB you choose should have a library of pre-built identifiers that can be used to scan for names, phone numbers, addresses, national identity and driver’s license numbers, health record information, bank account numbers, and more (see the sketch after this list).
  • Controlling the flow of personal data – Once you’ve identified where sensitive data resides, you want to control where it can go. CASBs include a range of policy options that allow you to do things like geofence personal information, control access from unmanaged/unprotected devices, control external sharing, and encrypt data upon download. All of these options can help mitigate the risk of non-compliance.
  • Maintaining data residency and sovereignty – Major cloud applications often have global architectures, which makes it difficult, if not impossible, to keep data within a given country or region. Fortunately, the GDPR allows encryption to be used to meet its requirements if the cloud provider transfers data outside of the EU. Seek out a CASB that offers the killer app for GDPR – full-strength cloud encryption – across both unstructured (file) and structured (field) data.
    • Word of caution: some cloud application vendors offer their own “built-in” or “platform” encryption. With these schemes, the cloud provider has access to the keys and, therefore, the data as well. This is a GDPR gray area and may leave you, the data controller, on the hook for those hefty fines and mandatory notifications.
  • Monitoring risky activity – A CASB can give you visibility into everything that’s happening with your users and your data across protected cloud applications. User Behavior Analytics and alerting capabilities let you know when risky activity is happening. This might mean reporting on indicators of breach, credential compromise, personal data access from outside the EU, and more. This critical visibility will allow you to identify and stop activities that might otherwise leave you staring down a fine of 4% of revenues (and a corresponding loss of your job).
  • Identifying Shadow IT – Simply put, GDPR and Shadow IT are a volatile and risky mix. There is no feasible way to get the necessary controls and visibility over applications that your organization has no ability to manage. A CASB can give you much-needed visibility into Shadow IT applications and their corresponding risk, but your only real option when faced with GDPR is to get out of the shadows – either sanction and protect Shadow IT through a CASB, or block unsanctioned apps altogether.
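
To make the identification step concrete, here is a minimal sketch of the kind of pattern-based scanning a CASB’s data identification engine performs. The regexes and the scan_for_personal_data helper are illustrative assumptions only; real identifier libraries are far more extensive and add validation such as checksums and contextual rules.

```python
# Minimal sketch of pattern-based personal data identification (illustrative only).
import re

IDENTIFIERS = {
    "email":      re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone_intl": re.compile(r"\+\d{1,3}(?:[\s-]?\d{2,4}){2,4}"),
    "iban":       re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def scan_for_personal_data(text: str) -> dict:
    """Return the identifier types (and matches) found in a piece of text,
    e.g. a file at rest in a SaaS app or a request captured in transit."""
    return {name: pattern.findall(text)
            for name, pattern in IDENTIFIERS.items() if pattern.search(text)}

sample = "Contact: jane.doe@example.eu, phone +49 30 1234 5678, IBAN DE89370400440532013000"
print(scan_for_personal_data(sample))
```

A production system would run this kind of scan continuously across sanctioned cloud apps and feed the matches into the geofencing, sharing, and encryption controls described above.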

These CASB controls can really jumpstart a successful GDPR program across your organization, leaving you free to consider some of the many other GDPR-related controls and policies you’ll need to put in place over the next 12 months, including appointing a Data Protection Officer, figuring out how to implement “right to be forgotten,” and reevaluating licensing terms and data ownership across your many cloud application vendors.

CASB Is Eating the IDaaS Market

March 31, 2017 | Leave a Comment

By Rich Campagna, Senior Vice President/Products & Marketing, Bitglass

In the past 6-9 months, I’ve noticed a trend amongst Bitglass customers where more and more of them are opting to use the identity capabilities built into our Cloud Access Security Broker (CASB) in lieu of a dedicated Identity as a Service (IDaaS) product. As CASB identity functionality has evolved, there is less need for a separate, standalone product in this space, and we are seeing the beginnings of CASBs eating the IDaaS market.

A few years back, Bitglass’ initial identity capabilities consisted solely of our SAML proxy, which ensures that even if a user goes directly to a cloud application from an unmanaged device on a public network, they are transparently redirected into Bitglass’ proxies – without agents!

From there, customer demand led us to build Active Directory synchronization capability for group and user management, authentication directly against AD, and native multifactor authentication. Next came SCIM support and the ability to provide SSO not only for sanctioned/protected cloud applications, but for any application.
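
For readers unfamiliar with SCIM, the sketch below shows what standards-based provisioning looks like at the protocol level (SCIM 2.0, RFC 7644). The endpoint URL and bearer token are placeholders, not Bitglass’ actual API.

```python
# Rough illustration of SCIM 2.0 user provisioning (RFC 7644).
# The base URL and bearer token are placeholders, not any vendor's real endpoint.
import json
import urllib.request

SCIM_BASE = "https://scim.example.com/scim/v2"   # placeholder
TOKEN = "replace-with-a-real-bearer-token"       # placeholder

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jane.doe@example.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jane.doe@example.com", "primary": True}],
    "active": True,
}

req = urllib.request.Request(
    f"{SCIM_BASE}/Users",
    data=json.dumps(new_user).encode(),
    headers={"Authorization": f"Bearer {TOKEN}",
             "Content-Type": "application/scim+json"},
    method="POST",
)
with urllib.request.urlopen(req) as resp:   # a successful create returns 201 with the new resource
    print(resp.status, json.loads(resp.read())["id"])
```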

So what’s left? If you look at Gartner’s Magic Quadrant for Identity and Access Management as a Service, Worldwide*, Greg Kreizmann and Neil Wynne break IDaaS capabilities into three categories:

  • Identity Governance and Administration – “At a minimum, the vendor’s service is able to automate synchronization (adds, changes and deletions) of identities held by the service or obtained from customers’ identity repositories to target applications and other repositories. The vendor also must provide a way for customers’ administrators to manage identities directly through an IDaaS administrative interface, and allow users to reset their passwords. In addition, vendors may offer deeper functionality, such as supporting identity life cycle processes, automated provisioning of accounts among heterogeneous systems, access requests (including self-service), and governance over user access to critical systems via workflows for policy enforcement, as well as for access certification processes. Additional capabilities may include role management and access certification.”
  • Access – “Access includes user authentication, single sign-on (SSO) and authorization enforcement. At a minimum, the vendor provides authentication and SSO to target applications using web proxies and federation standards. Vendors also may offer ways to vault and replay passwords to get to SSO when federation standards are not supported by the applications. Most vendors offer additional authentication methods — their own or through integration with third-party authentication products.”
  • Identity log monitoring and reporting – “The vendor logs IGA and access events, makes the log data available to customers for their own analysis, and provides customers with a reporting capability to answer the questions, ‘Who has been granted access to which target systems and when?’ and ‘Who has accessed those target systems and when?’”

Check, check, and check! Not only do leading CASBs offer these capabilities as part of their cloud data protection suites; in some cases, they go quite a bit further. Take logging and reporting, for example. An IDaaS product sees login and logout events, but nothing that happens during the session. CASBs can log and report on every single transaction – login, logout, and everything in between.

Another example is multifactor authentication. Whereas an IDaaS can trigger MFA at the beginning of a session due to a suspicious context, a CASB can trigger MFA at any time – such as mid-session, if a user starts to exhibit risky behavior.
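
As a hypothetical sketch of what mid-session step-up authentication might look like, the snippet below scores each transaction and issues an MFA challenge once a risk threshold is crossed. The signals, weights, threshold, and challenge_mfa() hook are all illustrative assumptions, not a description of any vendor’s actual engine.

```python
# Hypothetical sketch of mid-session step-up MFA (illustrative assumptions only).
from dataclasses import dataclass

@dataclass
class Event:
    user: str
    action: str            # e.g. "login", "download", "share_external"
    bytes_moved: int = 0
    country: str = "DE"

RISK_WEIGHTS = {"login": 0, "download": 10, "share_external": 30}
THRESHOLD = 50

def risk_score(events: list) -> int:
    score = sum(RISK_WEIGHTS.get(e.action, 5) for e in events)
    score += sum(20 for e in events if e.bytes_moved > 50_000_000)     # bulk download
    score += sum(25 for e in events if e.country not in {"DE", "FR"})  # unusual geo
    return score

def challenge_mfa(user: str) -> None:
    print(f"step-up MFA challenge issued to {user}")   # placeholder for a real MFA call

# Evaluated on every transaction, not just at login:
session = [Event("jane", "login"),
           Event("jane", "download", bytes_moved=80_000_000),
           Event("jane", "share_external")]
if risk_score(session) >= THRESHOLD:
    challenge_mfa("jane")
```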

Since these capabilities have evolved as part of CASBs, which offer comprehensive data protection capabilities for cloud applications, I expect that 2017 will be a year with a lot more enterprises considering CASB platforms for both cloud identity and cloud data protection.

*Magic Quadrant for Identity and Access Management as a Service, Worldwide, Greg Kreizmann and Neil Wynne, 06 June 2016

Brexit or Bust: What Does It Mean for Data?

March 23, 2017 | Leave a Comment

By Nic Scott, Managing Director/UK, Code 42

What’s the latest on Brexit? When the UK government triggers Article 50, it will signal the start of the official two-year countdown until the UK leaves the European Union. According to UK Prime Minister Theresa May, this is still on track to happen at some point in March.

While there are still many unknowns in regards to geopolitical policies and legislation that will be created, annulled, or abolished post-Brexit, the UK government has given away one handy hint when it comes to the now-infamous General Data Protection Regulation (GDPR).

Post-Brexit, the UK will mirror the data protection regulations that exist in Europe.

This means that from May 2018, while the UK is still an EU member, the GDPR will be applicable to UK businesses. And even when the UK exits the EU in 2019, an identical version of the GDPR will still be enforced.

Needless to say, this isn’t good news for UK organizations that have been burying their heads in the sand, hoping that this pesky EU legislation will just go away post-Brexit. Unfortunately, these rules aren’t going anywhere. It’s time for companies to wake up to the consequences of data negligence under the GDPR. This isn’t just infosecurity providers scaremongering for sales, and it’s not a ‘potential’ occurrence like the Y2K bug; this is actually happening.

Get your ducks in a row, or get fined
Should a sensitive data breach occur under the GDPR, the European Data Protection Board (or likely the Information Commissioner’s Office, post-Brexit) will evaluate whether the affected company has been negligent in its data protection operations and determine what the company must pay affected parties—penalties can reach €20m or up to four percent of its global turnover. Not a pretty thought for the C-suite, which by nature is tasked with mitigating risk.

Concerningly, Code42’s 2016 Datastrophe Study, which surveyed over 400 UK IT decision makers (ITDMs), found that 50 percent of them acknowledged that the security measures they currently have in place will not be enough to meet GDPR standards.

How to become compliant
The first step is for an organization to know what kind of data falls under GDPR protection, where it is stored, and for how long it should be kept. It then needs to determine the best way to secure that data, to what extent it should be backed up, and how to prevent leaks from inside the company. Simple, right?

The implementation of the right endpoint security stack is vital—one that covers everything from first-line defenses, such as intrusion detection systems and antivirus solutions, right down to last-line defenses that make it easy to remediate and recover should a breach occur. The right solution is an important advantage given the number of people and devices accessing potentially sensitive corporate information.

Also, enterprises should create internal policies that promote accessibility and flexibility with approved solutions, without locking the enterprise down to the point of stifling productivity. Employees play a big role regarding the sanctity of corporate information. That is why it is vital to train and educate your staff about possible intrusions, how they can secure data themselves, and how to avoid being tricked into leaking sensitive information.

Taking these precautions will allow an organization to gain control of its own information and ensure that the CIO’s overall focus is on increasing profit and expanding technological reach, rather than worrying about the safety of the zeroes and ones.

Odds Are in Quantum Encryption’s Favor

March 22, 2017 | Leave a Comment

By Jane Melia, Vice President of Strategic Business Development, QuintessenceLabs and Co-chair, CSA Quantum-safe Security Working Group

Image credit: Jeff Kubina

Few organizations have tighter security than the average casino. After all, the house always wins, and it wants to keep those winnings. A recent Wired article, however, explains how a team of Russian hackers managed to beat a lot of casinos worldwide. They did so by exploiting inherent flaws in the pseudo-random number generators (PRNGs) that randomize every spin of a slot machine. Even if you don’t care about wealthy casino bosses losing money, you still need to be concerned about the drawbacks of using PRNGs, because slots aren’t the only things that are vulnerable. Most of the world’s encryption is also based on pseudo-random numbers.

What’s in a Name?
Before going into detail about how the heists were carried out, let’s talk about PRNGs and why pseudo should be a no-no for slot machines and, more importantly, cybersecurity. As the prefix “pseudo” indicates, the numbers generated are not truly random. PRNGs are programs that start with a base number known as a seed. The seed gets tumbled together with other inputs, such as another algorithm and a random-ish physical component like the timing of the keystrokes on a user’s keyboard. Both humans and computers are really bad at randomness, so if someone is able to measure the pattern of your keystrokes and/or break one of the algorithms used, they can reverse engineer the other inputs and predict the next numbers in the “random” sequence. Find the pattern, break the code, and the jackpot (or encrypted data) is yours.
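
A minimal demonstration of why this matters: a PRNG is completely deterministic given its seed, so anyone who recovers the seed (or enough internal state) can reproduce the “random” sequence exactly. Python’s built-in generator is used here purely for illustration; it is not what slot machines or cryptographic libraries use.

```python
# Same seed, same state, same "random" numbers.
import random

seed = 20170322
attacker_guess = 20170322              # suppose the seed was reverse engineered

victim = random.Random(seed)
attacker = random.Random(attacker_guess)

victim_spins = [victim.randint(1, 100) for _ in range(5)]
predicted = [attacker.randint(1, 100) for _ in range(5)]

print(victim_spins)
print(predicted)
assert victim_spins == predicted       # the attacker predicts every spin
```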

One- and Two-Armed Bandits
In the case of the Russian casino swindlers, they were given a head start by Vladimir Putin, who had gambling outlawed in 2009. This meant a lot of slot machines were available on the cheap. Take apart a few machines, figure out how the PRNGs work, and you’re nearly there. Since the inputs for slot machine PRNGs change based on the time of day, the hackers in this case had to do more work on-site at the casinos. The leg man would set himself up in front of a machine and video a dozen or more spins using his smartphone. The video would be streamed live to his compatriots in St. Petersburg, who would analyze it and use what they knew about the machine’s innards to predict its pattern. Then they would send a list of timing markers that caused the phone to vibrate a split-second before a winning combination came up, signaling the man at the casino to hit the spin button. It didn’t work every time, but it was a whole lot more effective than chance – somewhere around $250K per week more effective.

To make things worse, not only did the engineered cheat allow a shadowy St. Petersburg group to snatch millions of dollars, but the problem they exploited is a fundamental part of PRNGs, so casinos are still vulnerable to this kind of fraud. That brings us back to cybersecurity. As the casino example shows, it takes a lot of work to figure out the patterns produced by a PRNG. Most hackers don’t have two dozen guys with a supercomputer in St. Petersburg to help. Soon, however, they will all have something better – at least if the goal is to defeat PRNGs and break encryption.

The Future is Yesterday
Any data that needs to be kept secret and safe over time is already at risk of being breached. Quantum computers exponentially more powerful than those we use today are already being developed. Current predictions are that quantum computers will be fully realized in the next five to ten years, but it could be even sooner. No PRNG will be able to stand up to the brute force of quantum computers. All too soon, only a true random number generator (RNG) will do.

The only way to generate true random numbers is by using the natural world (i.e., something not made by humans). Quantum encryption, for instance, uses the fully entropic (or completely random) nature of the quantum world to generate true random numbers that are the basis for the strongest possible encryption keys. Quantum key generation is designed to take on the coming quantum computing storm and keep medical records, tax returns, classified government documents, corporate secrets, and anything else that needs to stay under wraps after 2020 safe. Bet on it.
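
For contrast with a seeded PRNG, the snippet below draws 256 bits of key material from the operating system’s entropy pool via Python’s secrets module. This is a conventional CSPRNG, not a quantum random number generator, but it illustrates the point at which a true entropy source (quantum or otherwise) feeds key generation.

```python
# Key material from OS entropy (a CSPRNG) - shown for contrast, not a quantum RNG.
import secrets

key = secrets.token_bytes(32)      # 32 bytes = 256 bits of key material
print(key.hex())

# Anti-pattern: never derive keys from a guessable seed such as the current time;
# random.Random(time.time()) can be brute-forced by enumerating plausible timestamps.
```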

Observations on CSA Summit at RSA – Part 1

March 15, 2017 | Leave a Comment

By Katie Lewin, Federal Director, Cloud Security Alliance

CSA Summit at RSA was a day-long session on Securing the Converged Cloud organized around presentations and panels from leading vendors such as Centrify, Veracode, Microsoft, and Netskope, as well as a talk on “Effective Cybersecurity” by Ret. Gen. Keith Alexander and a fireside chat with Robert Herjavec of “Shark Tank” fame. (Session recordings from the CSA Summit are now available.)

Several themes emerged over the course of the day of presentations, panels and fireside chats:

  • The cloud is still the most secure environment for data, and acceptance of the cloud as a secure place for data storage is at a tipping point among IT users. In one survey cited, half of the respondents said that the cloud was more secure than on-premises infrastructure.
  • Identity continues to be important – the message of many of the speakers was that there are too many passwords and too many special privileges.
  • Emphasis should be placed on data protection rather than device protection. Security is moving to Modern Data Controls – from device and identity security to data protection and controls. Rights management and data classification are the key indicators in data control.
  • Security must move to a process that authenticates first and then connects as opposed to the current emphasis on connect and then authenticate.

Presentation slides will be available on the CSA web site.

Many speakers asserted that today’s security is not secure. Evidence of this includes breaches at Yahoo and the US Office of Personnel Management, and the incidents surrounding the 2016 presidential election. Network perimeters are fading with cloud use, mobile devices, IoT devices, and the mobile workforce. Therefore, security in the age of access must focus on passwords.

Too many passwords and privileged users require a paradigm shift to identity management.

There is evidence that focusing on identity reduces the number of breaches. Businesses must take steps to implement identity management, including:

  • Establish identity assurance across the IT environment;
  • Consolidate identities through single sign-on and then layer on multi-factor authentication;
  • Limit lateral movement – move to automated provisioning to identify who is still on staff and what they can access (see the sketch after this list);
  • Move to an approval workflow for access requests; and
  • Audit privileged access.
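
As a hypothetical illustration of that provisioning hygiene, the sketch below compares an HR roster against the accounts provisioned in a cloud app and flags orphaned accounts for deprovisioning. The data sources are stand-ins, not a real HR or application API.

```python
# Hypothetical orphaned-account check (data sources are stand-ins).
current_staff = {"jane", "arjun", "li"}                       # from the HR system
app_accounts = {"jane": "admin", "arjun": "user",
                "li": "user", "former_employee": "admin"}     # from the cloud app

orphaned = {user: role for user, role in app_accounts.items()
            if user not in current_staff}

for user, role in orphaned.items():
    print(f"deprovision {user} (role: {role})")   # feed into an approval workflow
```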

Speakers emphasized that the transition to the cloud can be revolutionary rather than evolutionary, and there were several real-life examples of a revolutionary transition. One large company wanted to eliminate its Intranet and rely solely on the Internet. The benefits of this approach included single sign-on, reduced complexity, establishment of standards, improved security, and cost efficiency. In addition, the company did not have to secure and maintain network devices on its premises. To effect this transition, the company determined that it needed to concentrate on securing its data assets rather than its appliances. The approach it took was to establish a strict policy-based access structure combined with micro-segmentation, and it was successful: the Internet gives users access comparable to private network transactions and eliminates the choke point of routing every transaction through a single data center. The company was also able to optimize data center traffic using a hybrid cloud approach.

One of the highlights of the day was a speech from Ret. Gen. Keith Alexander on the “Strategy of Effective Cybersecurity.” He began by outlining some of the current trends in the cyber world:

  • Technology is rapidly changing, and the data available is increasing exponentially; but this information becomes outdated on a 2-3 year horizon.
  • Advanced technology is playing a more important role in our lives – for example, IBM’s Watson is now working on formulating chemotherapy for brain cancer patients.
  • Moving to the cloud is good – resulting in better security and cost savings, especially for small and mid-size businesses.

However, there are threats that must be addressed in this environment. Cyber skills are now part of a nation’s power in the world. There are many examples of this, including cyber attacks from nation states aimed at other states. These attacks are evolving from disruptive to destructive.

What is the path forward to meet these threats? Entities must share metadata on attacks and intrusion attempts in order to have the information needed to formulate defensive strategies. There should also be Software-as-a-Service defensive tools in the cloud that entities can share. These tools and strategies can be developed and implemented while also protecting civil liberties and privacy.

Product Announcement from AWS – Regulatory Product Mapping Tool
This tool maps security control frameworks to reveal overlap and gaps between various security methodologies. Currently, the product includes FedRAMP controls and the AWS set of controls; other control sets will be added. This product could be useful in determining how long it could take and how much it could cost for a system to obtain an Authority to Operate from a Federal agency. Click for more information on this tool.


Preparing for the Quantum Future: Setting Global Security Standards to Make Us Quantum-Safe

March 13, 2017 | Leave a Comment

By Frank Guanco, Quantum-Safe Security Working Group, Cloud Security Alliance

Recently there has been an increase in the perceived threat of the quantum computer to modern cryptographic standards in widespread use. During the last year, security agencies such as the United States National Security Agency (NSA) and the United Kingdom’s Communications Electronics Security Group (CESG) have called for a move to a set of quantum-safe cryptographic standards. The consensus is that today’s cyber security solutions need to be retooled sooner rather than later, and that the transition to quantum-safe security must begin now. The arrival date for a practical quantum computer is still up for debate; however, most experts believe we will see a quantum computer capable of breaking current public key cryptosystems within five to 15 years.

Recently, the Quantum-Safe Security Working Group of the Cloud Security Alliance (CSA) released its ‘Applied Quantum-Safe Security’ paper, designed to provide individuals in the security industry and related fields with applicable knowledge regarding the quantum computer and its influence on cyber security. The white paper discusses how cryptographic tools must be adapted to fit specific types of data and serves as a call to arms, outlining the protection options that will be available when the quantum computer arrives.

Digital and physical security
Computer security has primarily focused on digital methods; however, the physical security of data is also critical. Algorithms provide authentication and encryption for online communications, and the security of a cryptographic scheme rests on mathematics and its resilience against large computing power. Consider a physical security example: security breaches impacting governments and large organizations are often linked to insiders capable of physical access not afforded to the outside world. These breaches occur despite the fact that digital avenues may have been closed and intensive security protocols employed. Cryptographic keys are not only abstract random strings, but also real physical objects that should be stored in secured physical appliances. To be more quantum-safe, new tools must cover both physical and mathematical security systems, each with its own practical application domain.

Impact of Cloud Computing
The ongoing move toward the cloud for all our IT needs greatly increases our reliance on data networks. Data is stored in huge data centers and transferred between them at ever-increasing rates. The cloud model—with its associated storage and network requirements—enables a stronger and more reliable IT infrastructure. However, this heavily networked model also opens some serious new post-quantum threat vectors, the most concerning being a “data-vaulting” or harvesting attack, in which an attacker stores communications between the client and the cloud so that the data can be decrypted in the future, once general purpose quantum computers are available. What we need to keep top of mind is that data stored today may already be compromised by future quantum computers, especially if the data is being monitored and stored.

Data “at rest” in enormous cloud data centers is also at risk since quantum computers will effectively reduce the keys protecting that data to half of their original strength. Additionally, post-quantum attack vectors will compromise the key management systems that generate, distribute and protect the keys needed to secure that data. Any connections and links between these large data centers must have the highest levels of protection possible. The need for quantum-safe cybersecurity is greatly compounded in a cloud-based IT environment.
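
As a rough illustration of that halving (commonly attributed to Grover’s search algorithm), an n-bit symmetric key retains only about n/2 bits of effective strength against a quantum adversary:

$$
2^{n} \;\xrightarrow{\ \text{Grover}\ }\; \approx 2^{n/2}, \qquad \text{e.g. AES-128: } 2^{128} \to 2^{64}, \quad \text{AES-256: } 2^{256} \to 2^{128}.
$$

This is why guidance on quantum-safe symmetric cryptography generally recommends doubling key lengths, for example moving from 128-bit to 256-bit keys.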

As we move towards a world of quantum computers, organizations need to take the knowledge outlined in the ‘Applied Quantum-Safe Security’ paper and assess their own quantum-safe needs. Not every organization will require the same security measures and it takes time to change an infrastructure. The best way to prepare is to follow what is going on with the development of the quantum computer and its security solutions.

Since the cloud relies heavily on secure communications, quantum safety is a critical issue for the CSA. Enterprises will only use cloud services if they believe that their data is safe, both in the cloud provider’s servers and in transit. Quantum-safe security is a true requirement for further expansion of the cloud. The CSA encourages industry leaders to start thinking and talking about quantum safety. Quantum-related technology is evolving quickly every day, both on the attack side and the defense side. Organizations should think about adopting some low-risk solutions now to improve their infrastructure.

Cyber security technology never has been, and never will be, ‘one size fits all’. There is no universal solution that provides perfect security against all possible threats. What we have learned, however, is that we must prepare ourselves for emerging technology, especially when we know it’s coming. The key to quantum computer protection is the use of adaptable cryptographic tools. These tools must be tailored to fit specific types of data and specific applications. To download a copy of the full white paper, please visit here.


Market & Technology Readiness (MTRLs)

March 9, 2017 | Leave a Comment

By Frank Khan Sullivan, Vice President/Marketing, Strategic Blue

There is a need to communicate a project’s maturity to a non-technical audience. The Market & Technology Readiness Level Framework [PDF] aims to provide decision makers with a holistic view of a project’s maturity in a simple way – with a single score. It offers a faster way to assess, measure and support technology projects. The MTRL Assessment form is at the bottom of this article for those interested.

This framework has been developed by Frank Khan Sullivan, Michel Drescher from Oxford University e-Research Centre and Frank Bennett at Cloud Industry Forum, and was originally used to support several European Research & Innovation projects in cloud software and security to develop a go-to-market strategy at CloudForward 2016.

We will be accepting the next intake round of projects and businesses at Cloud Expo 2017 in London on Thursday, March 16 at 14:00 in the Cloud Innovation Theatre. The session is free to attend and will be hosted at the ExCeL Centre near Docklands. We will also be joining the CloudWatch2 project consortium on March 15, speaking on the European Digital Single Market and why trust is vital to the future cloud market.

By adopting the MTRL framework, R&I projects can:

  • Access Direct Support Workshops* before or during project reviews
  • Quickly assess the maturity of a group of projects in a cluster/portfolio
  • Communicate clearly the current and desired future state of a project
  • Reduce the risk of project failure by intervening before crisis points
  • Understand roadblocks and dependencies between TRL and MRL

Understanding How To Communicate R&D Projects
The decision to exploit the outputs of applied research projects often rests on a decision maker’s understanding of how value will be created. The project leader must articulate a project’s current state of maturity and demonstrate how it will progress through development stages. However, what the project leader wishes to communicate and what the decision maker understands do not always match.

Creating a Common Framework for All Stakeholders
Without a common framework to understand how mature a technology is, or its level of traction with its target users or constituents, funding and operational decisions take longer. The MTRL framework provides a common language for project leaders and funding decision makers to articulate their progress between stages.

Technology Readiness Levels are a widely accepted measure of the maturity of a technology; however, they obscure an important dimension – is the technology or project output ready to be brought to market, and if not, what can practically be done to accelerate its entry and subsequent uptake within a group of constituent users?

For example, if a project has developed a small scale prototype but has yet to validate the needs of its intended users, much effort and funding may be expended in the pursuit of features for a large scale prototype that will never be used.

Combining Technology Readiness and Market Readiness
By understanding both the current state of a project’s technology readiness and its market readiness, it becomes possible to offer more targeted support, such as refining a value proposition or pairing more closely with an industry partner. This in turn increases the likelihood of a project’s outputs persisting outside the lab, reduces dependence on increasingly scarce grants, and makes more efficient use of existing resources.

First Success Story: CloudTeams.eu launches in Europe
CloudTeams.eu joined us at the CloudForward conference in Madrid back in October 2016 to conduct a Market & Technology Readiness workshop. Less than six months later, they have successfully launched and made a major leap forward in executing their go-to-market strategy. CloudTeams is an innovative online platform that connects developers and users to speed up the collection of feedback from a target group of users, reducing time-to-market, cutting costly development errors, and validating feature sets. We would like to take this opportunity to congratulate them on a really great project!

Conclusion: The MTRL Framework is ready for rollout!
In summary, the MTRL framework helps decision makers understand what resources may be required to progress through specific stages of development in the project lifecycle. This becomes particularly relevant in reducing the time it takes to assess groups of technology projects in clusters and making support accessible before reviews.

For project leaders: Request an MTRL Assessment and Direct Support Workshop

For funding bodies: Learn about implementing the MTRL Assessment Framework

Special thanks to Michel Drescher, Frank Bennett and Prof. David Wallom for their inputs in developing the methodology and thinking behind the framework. Feel free to connect with me directly to discuss how MTRL Assessments can be used to help your fund, project or go-to-market strategy/business models.

Prepare for Windows 10 Migration the Gartner Way

March 8, 2017 | Leave a Comment

By Jeremy Zoss, Managing Editor, Code42

It’s 2017, which means there’s a good chance your company is preparing to migrate to Windows 10. The operating system may have launched back in 2015, but this is the year that Gartner predicts enterprise adoption will truly take off, hitting its peak in 2020.

What caused the delay in adoption? Based on a Spiceworks survey, concerns included stability, application compatibility, and security. Perhaps the largest factor, however, was large corporations opting to combine their move to Windows 10 with a device migration. Typically, these purchases occur every two to four years, so many companies were simply waiting for the next hardware purchase cycle to switch to the new operating system.

Whether combined with new machines or upgrading existing hardware, there are many factors to consider during device migrations, and the costs may surprise you. Fortunately, Gartner has also prepared an extremely detailed report on the costs and challenges of moving to Windows 10. Read the report today to discover:

  • The typical costs to migrate a PC to Windows 10.
  • The key factors of migration cost.
  • How to determine your budget for migration costs using Gartner’s model.
  • How to improve migration before it starts.

Read the report today to prepare for your Windows 10 device migration.

Is Your Industry at High Risk of Insider Threat?

February 24, 2017 | Leave a Comment

By Jeremy Zoss, Managing Editor, Code42

In the movies, data theft is usually the work of outsiders. You’ve witnessed the scene a million times: A cyber thief breaks into a business, avoiding security measures, dodging guards and employees, and making off with a USB stick of valuable data seconds before he or she would have been spotted. But in the real world, data theft is much more mundane. Most cyberattacks are carried out by someone within the company or someone posing as such. Sometimes they take data that’s essentially harmless, like personal files they feel entitled to keep. Other times, what they take is potentially much more harmful. According to a 2016 report from Deloitte, 59 percent of employees who leave an organization say they take sensitive data with them! With IP making up 80 percent of a company’s value, insider threat is something that every company should take seriously.

Some industries are much more at risk of insider threat than others. Is your industry one of the most vulnerable? The infographic below details the industries hit with the most instances of insider threat in 2015. If you work in one of these industries, perhaps it is time to revisit your cyber security policies.


The Rise in SSL-based Threats

February 23, 2017 | Leave a Comment

By Derek Gooley, Security Researcher, Zscaler

Overview
The majority of Internet traffic is now encrypted. With the advent of free SSL providers like Let’s Encrypt, the move to encryption has become easy and free. On any given day in the Zscaler cloud, more than half of the traffic we inspect uses SSL. It is no surprise, then, that malicious actors have also been using the SSL protocol in their activities over the last several years. The increasing use of SSL creates problems for organizations that are unable to monitor SSL traffic, as they must rely on less-effective techniques like IP and domain blocking in an attempt to identify and block threats.

In this report, we will outline trends we have seen in the use of SSL in the malware lifecycle and in adware distribution, based on a review of traffic on the Zscaler cloud from August 2016 through January 2017. What follows is a graphic illustrating our findings, and an analysis of recent activities.


Malicious SSL Activity
During the six-month period, the ThreatLabZ research team observed that the Zscaler cloud blocked an average of 600,000 malicious activities each day that used SSL, including exploit kit traffic, malware and adware distribution, malware callbacks, and other malicious traffic.

Figure 1. Total SSL blocks, August 2016 – January 2017

In our cloud, we observed an overall increase in malicious SSL traffic in nearly all categories — a trend we expect to continue — with periodic spikes, such as those in early August and late November, when SSL malware blocks reached nearly two million a day.

Browser Exploits and Payload Delivery
Exploit kit (EK) authors are increasingly including SSL at some point in the infection chain. Previous malvertising campaigns have been observed in which EKs took advantage of SSL-enabled advertising networks to inject malicious scripts into legitimate webpages. EK authors may also abuse services that provide free SSL certificates to add HTTPS support to the domains they control. This maneuver enables them to bypass the SSL integrity checks built into modern web browsers.

Figure 2. SSL web exploit monthly total hits, August 2016 – January 2017

Figure 3. SSL web exploit blocks, August 2016 – January 2017

During the observation period, we saw an average of 10,000 hits per month for web exploits that included SSL as part of the infection chain.

Phishing

Figure 4. Phishing blocks, August 2016 – January 2017

Phishing campaigns have been increasingly using SSL in their attacks. Many phishing attacks involve hosting the phishing page on a legitimate domain that has been compromised. As the number of legitimate sites that support SSL constantly increases, so does the number of SSL-enabled phishing attacks. This rise presents a significant threat: organizations, in an attempt to thwart ransomware and other phishing schemes, have implemented security hardware solutions to detect and block phishing, but few of those solutions support SSL inspection.

Malware Families That Use SSL
Several years ago, it was rare to see malware using SSL to encrypt command-and-control (C&C) mechanisms. As malware design has become more sophisticated, and with the near ubiquity of SSL on the Internet, it made sense for malware authors to begin using SSL to hide their activities. Some malware families have gone further, using anonymity services such as Tor to hide the location of their C&C servers, connecting to (otherwise legitimate) HTTP Tor gateways via SSL.

Botnets typically use self-signed SSL certificates, frequently using the names and information of real companies to try to appear legitimate. The SSL Blacklist is a project that tracks the SSL certificates used by malware authors.
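
As a rough sketch of how such a blacklist can be consumed, the snippet below hashes a server’s certificate and compares the fingerprint against a local copy of the list. The fingerprint set shown is a placeholder, not real SSL Blacklist data; in practice the list would be downloaded from the project’s feed.

```python
# Rough sketch: check a server certificate's SHA1 fingerprint against a blacklist.
import hashlib
import ssl

BLACKLISTED_SHA1 = {
    "0000000000000000000000000000000000000000",   # placeholder entries only
}

def cert_sha1(host: str, port: int = 443) -> str:
    pem = ssl.get_server_certificate((host, port))   # fetch the server's certificate
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha1(der).hexdigest()

fingerprint = cert_sha1("www.example.com")
if fingerprint in BLACKLISTED_SHA1:
    print(f"blocked: certificate {fingerprint} matches a known malicious fingerprint")
else:
    print(f"allowed: {fingerprint}")
```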

Figure 5. Malware callbacks over SSL, September 2016 – January 2017

Corresponding with the increase in malicious payload deliveries in November 2016, we also observed an increase in blocked malicious SSL traffic during that time.

In our analysis, we came across many malware families that were using SSL for malicious purposes. Some of the recent and notorious malware families actively using SSL are:

  • Dridex/Dyre/TrickLoader: The Dridex, Dyre, and TrickLoader banking Trojans are capable of communicating with their C&C servers via SSL using their own SSL certificates. These families previously used the common browser-hooking technique for callbacks, but the latest versions can perform redirects to attacker-controlled fake websites via a local proxy or local DNS poisoning.
  • Vawtrak: Vawtrak is a well-crafted piece of malware supporting VNC and SOCKS proxies, screenshot and video capture, and extensibility through regular updates from C&C servers. Vawtrak samples contain code for downloading and validating SSL certificates and are capable of initiating an HTTPS connection. The malware contains a list of HTTPS-secured hosts that serve updated lists of live C&C servers.
  • Gootkit: Gootkit is a stealth banking trojan with backdoor and spyware capabilities that uses fileless infection and communications over SSL. Gootkit intercepts user data via web injections into HTTPS traffic.

Adware
A common function of adware is to inject unwanted advertisements into web traffic. These advertisements can also lead to malicious infections, as exploit authors frequently take advantage of less-scrupulous advertising networks to distribute exploit redirect scripts. Securing web traffic with SSL/HTTPS prevents this distribution in most cases. Adware installed on a client machine would not be able to perform a man-in-the-middle attack with a self-signed certificate due to the HTTPS safeguards included in modern browsers.

However, in several notable cases, major adware distributions have circumvented these safeguards to inject advertisements into HTTPS traffic. The two most high-profile examples are the Superfish and PrivDog adware distributions, which were first observed abusing SSL in 2015. Both of these adware programs install a self-signed root CA certificate onto the victim’s computer and intercept all web traffic in order to inject advertisements into web pages. PrivDog in particular was a serious concern because it did not validate SSL certificates on its end of the proxy, allowing users to inadvertently navigate to websites with invalid SSL certificates and exposing them to additional threats.

Adware variants have also started to host their files on HTTPS sites. We came across a family of adware called InstallCore that was doing exactly this. InstallCore is a Potentially Unwanted Application (PUA) that installs a program to display and/or download unwanted advertisements and toolbars, and tracks a computer’s web usage to feed the victim undesired ad pop-ups; some versions can even hijack a browser’s start or search pages, redirecting the user to a different site or search engine.

InstallCore is often delivered by tricking the user into installing a fake Flash plugin or Java update. In some cases, it is delivered through misleading download buttons. These fake Flash player pop-ups and download buttons appear on content distribution sites, such as torrent or free-software sites, that are served over HTTPS.

Figure 6. Fake Flash download pop-up

Conclusion
Due to the rising use of SSL encryption to hide exploit kits, malware, and other threats, it is important to have a security infrastructure that can detect and block these threats. The problem is that SSL inspection is compute-intensive, so even organizations whose security appliances support SSL inspection often disable this feature, as its use would slow traffic throughput to unacceptable levels. Dedicated appliances for SSL inspection are available, but their price puts them out of reach for many organizations. SSL inspection is built into the Zscaler security platform, which, due to its scale, can inspect all SSL traffic without latency.

Research by: Derek Gooley, Jithin Nair, Manohar Ghule