How CASB Is Different from Web Proxy / Firewall

April 8, 2016

By Cameron Coles, Sr. Product Marketing Manager, Skyhigh Networks

A common question that arises as IT teams begin to look at cloud access security broker (CASB) products goes something like, "we already have a web proxy and/or firewall, how is this different?" or "does CASB replace my web proxy / firewall?" These are natural questions because web proxies and firewalls have visibility into all traffic over the corporate network including traffic to and from cloud services. However, there are significant differences between existing network security solutions and a CASB. Let's first dispel a major misconception: a CASB is not a replacement for existing network security tools, and vice versa.

[CASBs] deliver capabilities that are differentiated and generally unavailable today in security controls such as Web application firewalls (WAFs), secure Web gateways (SWGs) and enterprise firewalls – Gartner Market Guide for Cloud Access Security Brokers, Craig Lawson, Neil MacDonald, Brian Lowans [Oct. 22, 2015]

CASB is a separate and differentiated market from proxies and firewalls. While CASBs can be deployed in forward or reverse proxy mode to enforce inline controls, the similarities to web proxies stop there. Unlike network security solutions that focus on a wide variety of inbound threats and on filtering millions of potentially illicit websites, a CASB is focused on deep visibility into, and granular controls for, cloud usage. A CASB can also be deployed in an API mode to scan data at rest in cloud services and enforce policies across this data. Here are some of the high-level functions of a CASB not available in existing network security solutions (a minimal sketch of a risk-based policy check follows the list):

  • Provide a detailed, independent risk assessment for each cloud service (e.g. compliance certifications, recent data breaches, security controls, legal jurisdiction).
  • Enforce risk-based policies (e.g. block access to all high-risk file sharing services and display a real-time coaching message directing users to a company-approved service).
  • Control access to individual user actions based on context (e.g. prevent users from downloading reports to unmanaged devices on remote networks).
  • Enforce data-centric security policies (e.g. encrypting data as it is uploaded to the cloud or applying rights management protection to sensitive data on download).
  • Apply machine learning to detect threats (e.g. an IT user downloading an unusual volume of sensitive data and uploading it to a personal account in another cloud app).
  • Respond to cloud-based threats in real time (e.g. terminating account access in the face of an insider threat or requiring additional authentication factors to continue using a cloud service in the face of a compromised account).
  • Enforce policies for data at rest in the cloud (e.g. revoking sharing permissions on files shared with a business partner or retroactively encrypting sensitive data).
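As a rough illustration of the first two bullets, here is a minimal sketch, in Python, of how a CASB-style risk-based access decision might work. The service registry, risk scores, and policy threshold are hypothetical and purely for illustration; they are not any vendor's actual API or data.

```python
# Minimal sketch of a risk-based cloud access policy.
# The registry entries, risk scores, and threshold below are hypothetical.
from dataclasses import dataclass

@dataclass
class CloudService:
    name: str
    category: str      # e.g. "file sharing", "CRM", "social media"
    risk_score: int    # 1 (low) to 10 (high), from an independent risk assessment

REGISTRY = {
    "approved-share.example.com": CloudService("ApprovedShare", "file sharing", 2),
    "risky-share.example.com": CloudService("RiskyShare", "file sharing", 9),
}

def evaluate_access(host: str) -> dict:
    """Return an access decision and, when blocking, a coaching message."""
    service = REGISTRY.get(host)
    if service is None:
        return {"action": "allow"}              # not a tracked cloud service
    if service.category == "file sharing" and service.risk_score >= 8:
        # Risk-based policy: block high-risk file sharing and coach the user.
        return {"action": "block",
                "coach": f"{service.name} is not approved; please use ApprovedShare."}
    return {"action": "allow"}

print(evaluate_access("risky-share.example.com"))  # -> block, with coaching message
```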

Cloud-related functions of web proxies / firewalls
Web proxies and firewalls offer broad protection against network threats and, as part of this protection, they offer some limited visibility into cloud usage, even without integrating with a CASB. For example, although these solutions may have difficulty mapping the URLs users access to specific cloud services, they do track cloud access over the corporate network. Some customers use their network security solutions to terminate SSL and inspect content for malware. Proxies and firewalls also bucket cloud services into high-level categories (e.g. Technology/Internet, Business/Economy, Suspicious); however, these categories generally do not reflect the underlying function of the service, such as file sharing, CRM, or social media.

One of the primary use cases of network security solutions is categorizing, and controlling access to, millions of illicit websites that contain pornography, drugs, gambling, etc. Web proxies can redirect access attempts to specific URLs to an alternate webpage hosting a notification that the URL was blocked. Similarly, firewalls can be configured to block access to specific IP addresses. Both solutions, however, lack the detailed, up-to-date registries of cloud service URLs and IP addresses needed to extend this access control to cloud services. Enterprises often find that while they may have initially blocked a cloud service, cloud providers routinely introduce new URLs and IPs that are not blocked. This results in the widespread phenomenon of "proxy leakage," in which employees regularly access cloud services that IT intends to block.

The focus on IP reputation is also not directly applicable to cloud services. A cloud service may have a high IP reputation but, because of weak or missing security controls, still be unsuitable for storing corporate data. For example, take a file sharing service with a good IP reputation that allows anonymous use, shares customer data with third parties, is hosted in a privacy-unfriendly country, and experienced a password breach three months ago. Few IT leaders would want sensitive corporate data uploaded to this service. Without a registry of these attributes, network security solutions are unable to enforce risk-based policies. Moreover, since many cloud services do not use standard content disposition headers, network security solutions are unable to enforce data loss prevention (DLP) policies to prevent the upload of sensitive data.

How CASB integrates with web proxies / firewalls
CASB is a complementary technology to web proxies and firewalls. By integrating with these solutions, a CASB can leverage existing network infrastructure to gain visibility into cloud usage. Simultaneously, a CASB enhances the value of these investments by making them cloud-aware. There are three primary methods a CASB uses to integrate with network security solutions: log collection, packet capture, and proxy chaining.

Log Collection
Web proxies and firewalls capture data about cloud usage occurring over the network, but they may not differentiate cloud usage from general Internet usage. A CASB can ingest log files from these solutions and reveal which cloud services are in use by which users, data volumes uploaded to and downloaded from the cloud, and the risk and category of each cloud service. In effect, a CASB makes existing infrastructure cloud-aware. A CASB also detects enforcement gaps in existing egress infrastructure and can push access policies, with up-to-date cloud service URLs, back to those devices to close the gaps. For customers that terminate SSL, a CASB can gather additional detail from these logs on the actions users take within cloud services. Using machine learning, a CASB can detect malware or botnets using the cloud as a vector for data exfiltration.
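To make the log collection idea concrete, here is a minimal sketch of the kind of processing involved: parse proxy log records, map destination hosts to known cloud services, and total upload volume per user and service. The CSV column names and the host-to-service mapping are assumptions for illustration, not the format of any particular proxy or CASB.

```python
# Minimal sketch of CASB-style log collection and attribution.
# The log columns (user, dest_host, bytes_sent) and the registry are assumed.
import csv
from collections import defaultdict

CLOUD_SERVICES = {
    "files.example-cloud.com": "ExampleCloud File Sharing",
    "crm.example-saas.com": "ExampleSaaS CRM",
}

def summarize_uploads(log_path: str) -> dict:
    """Return total bytes uploaded per (user, cloud service)."""
    uploads = defaultdict(int)
    with open(log_path, newline="") as fh:
        for row in csv.DictReader(fh):
            service = CLOUD_SERVICES.get(row["dest_host"])
            if service:                               # count only recognized cloud services
                uploads[(row["user"], service)] += int(row["bytes_sent"])
    return dict(uploads)

# Example usage against a hypothetical export:
# for (user, service), total in summarize_uploads("proxy_export.csv").items():
#     print(f"{user} uploaded {total} bytes to {service}")
```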


Packet Capture
In the packet capture deployment mode, a CASB ingests a feed of traffic from existing network security solutions to gain visibility into the content of data. For example, a CASB can integrate with a web proxy via ICAP. The web proxy is configured to copy and forward cloud traffic to the CASB to evaluate data loss prevention (DLP) policies in a monitor-only configuration. Many cloud services use custom content disposition headers in an effort to improve the performance of their applications. These custom headers have the unintended side effect of preventing network security solutions (and on-premises DLP solutions that integrate to them via ICAP) from inspecting content for DLP. CASBs leverage detailed cloud service signatures to inspect cloud traffic, evaluate DLP policies, and generate alerts for DLP policy violations.
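The sketch below shows, in simplified form, what a monitor-only DLP evaluation over mirrored content might look like: scan a decoded payload for sensitive-data patterns and emit alerts. The patterns and alert format are illustrative assumptions, not a production DLP rule set or an actual ICAP integration.

```python
# Minimal sketch of a monitor-only DLP check over mirrored cloud traffic.
# Patterns are simplified examples; a real DLP engine is far more sophisticated.
import re

DLP_RULES = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),            # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # very rough card-number pattern
}

def scan_payload(payload: bytes, user: str, service: str) -> list:
    """Return alert records for every DLP rule that matches the payload."""
    text = payload.decode("utf-8", errors="ignore")
    return [{"rule": rule, "user": user, "service": service}
            for rule, pattern in DLP_RULES.items() if pattern.search(text)]

print(scan_payload(b"SSN: 123-45-6789", "alice", "ExampleCloud"))
```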


Proxy Chaining
A CASB can also be deployed as a forward proxy without requiring another endpoint agent, which matters to the many organizations that already have a web proxy. In proxy chaining mode, the downstream web proxy is configured to route all cloud traffic through the CASB. In this deployment mode, the CASB can enforce real-time governance and security policies. For instance, a CASB can enforce access control policies that limit specific cloud service functionality and display educational messages when a user accesses a service outside of policy, with options to notify the user, let them justify access, and direct them to approved cloud services. Unlike packet capture, this deployment mode enables a CASB to enforce inline DLP policies and prevent violations before they occur.
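As a minimal sketch of the routing decision in a proxy-chaining deployment, the snippet below sends traffic for recognized cloud domains to an upstream CASB proxy and lets everything else egress directly. The domain list and upstream address are hypothetical; real deployments express this in the downstream proxy's own configuration syntax.

```python
# Minimal sketch of downstream-proxy routing in a proxy-chaining deployment.
# CLOUD_DOMAINS and CASB_UPSTREAM are hypothetical values for illustration.
CLOUD_DOMAINS = {"example-cloud.com", "example-saas.com"}   # from the CASB's registry
CASB_UPSTREAM = ("casb-proxy.example.net", 8080)            # upstream CASB proxy

def next_hop(dest_host: str):
    """Return the upstream CASB proxy for cloud traffic, or None for direct egress."""
    if any(dest_host == d or dest_host.endswith("." + d) for d in CLOUD_DOMAINS):
        return CASB_UPSTREAM          # chain cloud traffic through the CASB
    return None                       # non-cloud traffic goes out directly

print(next_hop("files.example-cloud.com"))   # -> ('casb-proxy.example.net', 8080)
print(next_hop("news.example.org"))          # -> None
```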


Taken together, CASBs enhance the value of investments enterprises have made in network security solutions. Rather than forcing a rip and replace of existing solutions, CASBs integrate with and extend their capabilities to the cloud. There are clear differences in the functionality of web proxies / firewalls and CASBs. Neither is a replacement for the other, but together they deliver better visibility into cloud usage and the ability to enforce compliance and governance policies that protect corporate data as it moves to the cloud. To learn more about the cloud access security broker (CASB) market, download a free copy of the Gartner report How to Evaluate and Operate a Cloud Access Security Broker (Neil MacDonald, Craig Lawson, Dec. 8, 2015).

 

How to Get C-suite Support for Insider Threat Prevention

April 6, 2016

By Susan Richardson, Manager/Content Strategy, Code42

If you're not getting support and adequate funding from the C-suite to address insider threats, a recent report highlights a powerful persuasive tool you may have overlooked: money—as in fines (cha-ching), lawsuits (cha-ching) and credit monitoring services (cha-ching) you'll have to pay as the result of a data breach.

The IDC report, “Endpoint Data Protection for Extensible DLP Strategies,” cites two health-care groups that paid six figures each in fines for data breaches as a result of improper employee behaviors. Here are even more powerful examples of the price your organization could pay for not addressing insider data security threats:

Target insider breach costs could reach $1 billion
Target may have skirted an SEC fine, but the retailer is still paying a hefty price because cyber thieves were able to access customer credit card data via a subcontractor’s systems. Breach costs included $10 million to settle a class action lawsuit, $39 million to financial institutions that had to reimburse customers who lost money, and $67 million to Visa for charges it incurred reissuing compromised cards. For 2014, Target had $191 million in breach costs on its books; estimated totals could reach $1 billion after everything shakes out.

AT&T fined $25 million for employee breach
In 2015, AT&T paid a $25 million fine to the Federal Communications Commission after three call center employees sold information about 68,000 customers to a third party. The cyber thieves used the information to unlock customers’ AT&T phones.

On top of the fine, AT&T was required to do things it should have done in the first place:

  • Appoint a senior compliance manager who is a certified privacy professional.
  • Conduct a privacy risk assessment.
  • Implement an information security program.
  • Create a compliance manual and regularly train employees.
  • File regular compliance reports with the FCC.

AvMed paid $3 million in settlement
While the health plan company avoided a HIPAA fine, it paid $3 million in settlements to 460,000 customers whose personal information was on two stolen, unencrypted laptops. On top of that were costs to reimburse customers’ actual monetary losses.

In addition, the company had to:

  • Provide mandatory security awareness and training programs for all company employees.
  • Provide mandatory training on appropriate laptop use and security.
  • Upgrade all company laptops with additional security mechanisms, including GPS tracking technology.
  • Add new password protocols and full-disk encryption technology on all company desktops and laptops so that electronic data stored on the devices would be encrypted at rest.
  • Upgrade physical security to further safeguard workstations from theft.
  • Review and revise written policies and procedures to enhance information security.

The lesson here should be obvious. It’s far cheaper to act now—by implementing available endpoint protection technology and instituting a security-aware culture—than to wait for a breach that forces you into action.

As security expert Philip Lieberman noted in the AT&T case, the penalty cost AT&T much more than the steps it should have taken to prevent the insider breach: “The C-level staff will have to explain this to the board as to why they did not implement a control when the cost would be trivial.”

To learn more about "Endpoint Data Protection for Extensible DLP Strategies," get the IDC analyst report.

Don’t Let Your Cloud Security Strategy Get Railroaded by Old Thinking

April 4, 2016

By Player Pate, Senior Manager/Product Marketing, Cisco Security Business Group

The standard gauge used for railroads (that is, the distance between the rails) in the U.S. is four feet, eight and a half inches, which is an odd number however you look at it. The history behind it is even stranger and is a cautionary tale of assumptions and the consequences of basing decisions on old thinking.

That oddly sized gauge was borrowed from the English standard of railroad width: English railroads were built with the same tools used to build wagons, which used that wheel spacing. And the wheel spacing had to be that width because that was the spacing of the wheel ruts that already existed in the roads throughout England.

So who created those?

Roman chariots created the wheel ruts in the roads when they occupied England some two thousand years ago. These Roman war chariots were built just wide enough to accommodate the rear-ends of two horses, which just happened to be…you guessed it: four feet, eight and a half inches wide. This created the standard gauge that is still used today.

Ok, so where’s this heading?

The space shuttles used in modern-day space exploration carried two large booster rockets on the sides of their main fuel tanks. These rockets, called solid rocket boosters or SRBs, gave the spacecraft its initial thrust at launch and were built in a factory in Utah. The engineers of the SRBs would have preferred to make them larger, but the SRBs had to be transported by train from the factory to the launch site. That railroad line ran through a tunnel in the Rocky Mountains, and the SRBs had to fit through that tunnel. The tunnel is only slightly wider than the railroad track, and the railroad track, as we now know, is only about as wide as the hindquarters of two horses.

Say that again?

A primary constraint in the design of one of the most advanced transportation systems ever developed was determined more than two thousand years ago by two horses’ asses.

Interesting, but what’s that have to do with cloud security?

That is the danger of getting caught in the rut of the same old thinking, and the same trap awaits when it comes to securing cloud infrastructure. Cloud security can't be solved with legacy security technologies or siloed approaches. It must be as dynamic as the cloud itself and should address the issues of:

  1. Keeping valuable data secure in the data center or wherever your cloud is hosted;
  2. Securing applications and data in the cloud;
  3. Enabling secure access anywhere, to anything for the mobile user or IoT;
  4. Consistently protecting against threats across the data center, cloud and wherever users roam before, during, and after attacks; while
  5. Providing visibility across the entire spectrum to enforce governance and compliance.

Cloud security doesn't simply require the deployment of a separate application or new technology. Nor does it require you to completely scrap your existing infrastructure. It is an extension of your entire security program: security embedded into the intelligent network infrastructure, integrated with a rich ecosystem of applications and services, and pervasive across the extended network. That means not just the networks themselves but all endpoints, mobile and virtual, extending to wherever employees are and wherever data is: from the beating heart of the enterprise data center out to the mobile endpoint and even onto the factory floor.

Think of the journey to cloud security adoption as your chance to take off into space. When planning the size of your rockets, are you imagining all the new possibilities, or limiting your opportunities to what's always been done? Hopefully the cautionary tale of the history of US railroads helps you expand your thinking.

Check out our Cisco Business Cloud Advisor adoption tool to evaluate the overall readiness of your organization's cloud strategy, including from a security perspective. Also stay tuned to this blog as we dig further into this topic.

Four Security Solutions Not Stopping Third-Party Data Breaches

March 31, 2016

By Philip Marshall, Director of Product Marketing, Cryptzone

A new breed of cyberattack is on the rise. Although it was practically unheard of a few years ago, the third-party data breach is rapidly becoming one of the most infamous IT security trends of modern times: Target, Home Depot, Goodwill, Dairy Queen, Jimmy John's and Lowe's are just a few of the US companies to have lost massive amounts of customer records as a result of their contractors' usernames and passwords falling into the wrong hands.

What went wrong? Hackers have started to see contractors as the easy way into their targets’ networks. Why? Because too many organizations are still using yesterday’s security solutions, which weren’t designed for today’s complex ecosystems and distributed (read cloud-based) applications and data.

Here are four examples of solutions that, in their traditional forms, simply aren’t capable of stopping third-party data breaches. Could your company be at risk?

1. Firewalls and Access Control Lists
Many organizations still control traffic flow between network segments in the same way they’ve done for decades: with firewalls and access control lists (ACLs). Unfortunately, security in the modern age isn’t as simple as just defining which IP addresses and ranges can access which resources.

Let’s say you have a single VPN for all of a department’s workers and contractors, with every authenticated user getting a DHCP-allocated IP address. Your firewall rules are going to have to be wide open to suit the access needs of each user on the IP range, and yet you’re not going to be able to trace suspicious activity back to a particular account and machine.

It's also a lot of work for your IT department to set up and maintain complex firewall rules across the entire organization, so it's likely that they'll make mistakes, respond slowly to employee departures, and leave access open wider than it should be.

2. Authentication and Authorization
Leading on from this, another problem with ACLs is that they generally rely on static rules, which in no way account for the security risks of today’s distributed workforces. A username and password pair will unlock the same resources whether used from a secure workstation at a contractor’s premises or from an unknown device on the other side of the world.

Authentication and authorization rules should be dynamic rather than static, and adjusted on the fly according to the risk profile of the connection. One of your contractors needs remote access to a management network segment? Fine – but only if they use a hardened machine during office hours. If the context of their connection is more suspicious, you might consider two-factor authentication and more limited access.
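A minimal sketch of that kind of dynamic decision follows. The attributes, roles, and thresholds are illustrative assumptions rather than any product's actual policy engine; the point is simply that the decision changes with context instead of being a static allow/deny rule.

```python
# Minimal sketch of a context-aware authorization decision.
# Roles, hours, and the notion of a "hardened" device are assumptions for illustration.
from datetime import datetime

def authorize(role: str, device_hardened: bool, hour: int, known_location: bool) -> str:
    """Return 'allow', 'step_up' (require a second factor), or 'deny'."""
    office_hours = 8 <= hour <= 18
    if role == "contractor":
        if device_hardened and office_hours and known_location:
            return "allow"            # low-risk context: grant the narrowly scoped access
        if device_hardened:
            return "step_up"          # riskier context: require two-factor authentication
        return "deny"                 # unmanaged device: no access to the segment
    return "step_up"                  # default to step-up for other remote users

print(authorize("contractor", True, datetime.now().hour, known_location=True))
```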

3. IPsec and SSL VPNs
More than nine in ten organizations (91 percent) still use VPNs – a 20-year-old technology – to provision remote access to their networks. It’s potentially their single greatest risk factor for third-party data breaches, because both IPsec and SSL VPNs are readily exploitable by hackers.

In an IPsec session, remote users are treated as full members of the network. Nothing is invisible – they have direct access to the underlying infrastructure. So, if they’re malicious, they can start digging around and looking for vulnerabilities in seconds.

SSL VPNs, meanwhile, deliver resources via the user’s browser. And what web application has ever been secure? Tricks like SQL injection and remote code execution attacks make it trivial for hackers to start widening their foothold on the network.

4. IDS, IPS and SIEM
Finally, a word on the technologies organizations use to detect data breaches. IDS, IPS and SIEM are generally mature and effective solutions that do the job they’re intended to do: identify suspicious activity on the network.

However, the combination of the antiquated technologies described above means that most networks are rife with false positives: legitimate users and harmless applications causing suspicious traffic in the network layer. Change this model, and IDS, IPS and SIEM systems might start to deliver more value. As it stands, though, they’re often resource-intensive and reactive rather than proactive, so they’re not really equipped to stop hackers in their tracks.

The Alternative to Prevent Third-Party Data Breaches
In the new world of pervasive internal and external threats, distributed organizations and global ecosystems, the perimeter is more porous and less relevant than ever. The old models simply aren’t working. We need to move from perimeter-centric, VLAN and IP-focused security to a model that focuses on securing the entire path from user to application, device to service – on a one-to-one basis.

That's where solutions like AppGate, which enable organizations to adopt a software-defined perimeter approach for granular security control, become a must-have. AppGate makes the application/server infrastructure effectively "invisible." It then delivers access to authorized resources only, creating a 'segment of one' and verifying a number of user variables and entitlements each session—including device posture and identity—before granting access to an application. Once the user logs out, the secure tunnel disappears.

Kicking Tires on World Backup Day: A Five-Point Inspection for Endpoint Backup

March 29, 2016

By Rachel Holdgrafer, Business Content Strategist, Code42

Living with the constant threat of data breach or loss, large organizations have comprehensive remediation plans designed to guarantee speedy data recovery and business continuity. March 31, 2016 is World Backup Day—the perfect time to evaluate your endpoint backup strategy to ensure you're ready if the worst happens.

A viable backup plan is a lot like having car insurance. While car insurance can't prevent an accident, it will replace the crumpled bumper and shattered headlamps after you've been rear-ended, or the entire vehicle if it's been totaled. In the same way, endpoint backup allows you to recover a few lost files, everything on a laptop, or the entire enterprise—to a point in time as recent as moments ago.

Here are five inspection points to consider as you evaluate your endpoint backup solution.

Point #1: Do you have continuous protection—everywhere? The modern workforce works when, where and how they choose; so they need endpoint backup that protects their files continuously, whether they are in the office or on the road. Choose centralized, cloud-based endpoint backup that works across geographies, platforms and devices. It should be simple to manage and scale, and offer powerful features to solve other data collection, migration and security problems.

Point #2: Does it work with Macs, Windows and Linux? The modern enterprise is no longer a PC-only environment. Employee preference for Apple devices has increased Mac’s market share in the enterprise—and there’s no going back. Choose an endpoint backup solution that protects a “hybrid workplace” that includes Windows, Linux and OS X laptops and desktops and offers a consistent user experience across all platforms. Make sure your backup solution restores files to any computer or mobile device—without requiring a VPN connection.

Point #3: Will it enable rapid response and remediation? When protected/classified data goes public, response time is critical. Choose an endpoint data backup solution that provides 100-percent visibility and attribution of file content on any device. This enables IT (and InfoSec) to quickly identify a threat, mitigate the impact and determine whether data on compromised devices—including those that are lost or stolen—requires the organization to notify agencies or individuals of breach. If there is a reportable breach, 100 percent data attribution prevents over reporting of affected records.

Point #4: Will it support fast data recovery in a dangerous world? Endpoint devices—and the humans who operate them—are the weakest link in the enterprise security profile. The 2016 Cyberthreat Defense Report found that 76 percent of organizations were breached in 2015, making it essential to plan for data breach and your recovery before it happens. Choose an endpoint backup solution that ensures rapid recovery of data—no matter the cause, without paying a ransom, without the original device, without the former employee. Endpoint backup is an investment in business continuity, risk mitigation and peace of mind.

Point #5: Does it let you decide where to store encryption keys? True data privacy means only the enterprise can view unencrypted files. Choose an endpoint backup solution that deduplicates locally rather than globally, encrypts data in transit and at rest, and enables you to hold encryption keys regardless of where your data is stored. On-premises key escrow ensures that only you can view decrypted data—keeping it safe from the cloud vendor, government surveillance and blind subpoena.
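As a minimal sketch of the "hold your own keys" idea in Point #5, the snippet below encrypts a file locally with a key that never leaves on-premises escrow, so only ciphertext is handed to the backup cloud. It uses the third-party cryptography package; the file paths and escrow location are hypothetical examples, not any vendor's implementation.

```python
# Minimal sketch: encrypt locally before backup, keep the key on premises.
# Requires the third-party 'cryptography' package (pip install cryptography).
from cryptography.fernet import Fernet

def generate_local_key(path: str = "escrow.key") -> bytes:
    key = Fernet.generate_key()        # stored only in on-premises key escrow
    with open(path, "wb") as fh:
        fh.write(key)
    return key

def encrypt_for_backup(src: str, dst: str, key: bytes) -> None:
    with open(src, "rb") as fh:
        ciphertext = Fernet(key).encrypt(fh.read())
    with open(dst, "wb") as fh:
        fh.write(ciphertext)           # only ciphertext is uploaded to the backup cloud

# key = generate_local_key()
# encrypt_for_backup("report.docx", "report.docx.enc", key)
```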

Proactively evaluating your endpoint backup processes at least once a year positions your enterprise for quick and total recovery before data loss or breach occurs.

Top 3 Malware Bogeymen Keeping CISOs Up at Night

March 22, 2016

By Susan Richardson, Manager/Content Strategy, Code42

What keeps CISOs up at night? Of all the cyberthreats, malware sends chills down a CISO's spine, according to The CyberEdge Group's recently released 2016 Cyberthreat Defense Report. Malware bogeymen come in many shapes and sizes. Here are three of the most nefarious in their respective categories:

Ransomware: CryptoWall
Ransomware has come a long way since 1989, when the AIDS Trojan first encrypted a user’s hard drive files and demanded money to unlock them. The latest version of CryptoWall, the most significant ransomware threat in the States, not only encrypts the file, it also encrypts the file name—making it a challenge to even find “kidnapped” files.

CryptoWall cost victims more than $18 million in losses in a single year, according to the FBI. While individual ransom fees are typically only $200 to $10,000, additional costs can include lost productivity, network remediation, new security countermeasures, and credit monitoring services for employees and/or customers.

Banking Trojan: Dyreza
Banking Trojans use a man-in-the-browser attack. They infect web browsers, lying in wait for the user to visit his or her online banking site. The Trojan steals the victim’s authentication credentials and sends them to the cyberthief, who transfers money from the victim’s account to another account, usually registered to a money mule.

For nearly a decade, the ZeuS Trojan conducted a reign of terror in the banking world. Even after Europol took down the Ukrainian syndicate suspected of operating ZeuS in 2015, new strains kept appearing. But it seems ZeuS has met its match in Dyreza (aka Dyre, aka Dyzap). More than 40% of banking Trojan attacks in 2015 were by Dyreza, according to Kaspersky Lab’s 2015 Security Bulletin. Dyreza’s one-two punch? It can now attack Windows 10 machines and hook into the Edge browser.

Mutant two-headed worm: Duqu 2.0
There isn't an official category yet for the most sophisticated malware seen to date. At a London press conference announcing an attack by the new version of the Duqu worm on its corporate network, Kaspersky Lab founder Eugene Kaspersky described the malware as a "mix of Alien, Terminator and Predator, in terms of Hollywood."

The original Duqu worm was mysterious enough, being written in an unknown, high-level programming language. Now Duqu 2.0 is further flabbergasting the security experts. Some describe it as a compound sequel of the Duqu worm that combines the features of a Trojan horse and a computer worm. Others call it a collection of malware or a malware platform.

I’m dubbing it the Mutant Two-Headed Worm because it has two variants. The first is a basic back door that gives attackers an initial foothold on a victim network. The second variant contains multiple modules that give it multiple superpowers: it can gather system information, steal data, do network discovery, infect other computers and communicate with command-and-control servers. And did I mention Duqu 2.0 has an invisibility Cloak? The malware resides solely in a computer’s memory, with no files written to disk, making it almost impossible to detect.

If Duqu 2.0 attacks increase in 2016, expect malware to be a CISO’s worst nightmare next year too.

Download the 2016 Cyberthreat Defense Report to learn how IT security professionals perceive cyberthreats and their plan to defend against them.

CIO, CISO and IT Practitioners Worry They Will Face a Datastrophe!

March 18, 2016

By Rick Orloff, Chief Security Officer, Code42

We are not lacking choices: whether it's in the information we consume, the things we can buy or the ability to express ourselves through multimedia channels. It's therefore no surprise that our most valuable asset, human capital, is finding ways to work outside of the boundaries of the traditional workplace. Enterprises are increasingly porous, as is the technology infrastructure that is supposed to keep all the bits and bytes of precious corporate data within the corporate infrastructure. This is because we now have an expectation of holding data wherever we are—on endpoint devices such as laptops, tablets and in cloud storage. An enterprise can secure the core, but data has become persistently mobile and accessible outside the corporate network perimeters.

In this rapidly changing and rather troublesome security landscape, we decided in the UK to conduct a piece of research—Code42's 2016 Datastrophe Study. This research aimed to get under the skin of how chief information officers (CIOs), chief information security officers (CISOs) and IT decision makers (ITDMs) view the porous enterprise. It also collated the views of the employees who hold most of the data outside the perimeter. Working with two independent research partners, we surveyed 400 IT decision makers—including CIOs and CISOs—and more than 1,500 UK-based knowledge workers aged 16 to 55+, all of whom work in enterprise-size organizations.

The results are startling: 45% of all corporate data today is also held on endpoint devices, according to the IT respondents. Yet at least one in four ITDMs acknowledge that they do not do enough, or do not know if they do enough, to protect corporate data. Putting this into perspective, the IT department knows it has a problem, but 25% of ITDMs know they are not tackling it. This is a huge risk. An impending data catastrophe. A datastrophe! Well, you get the point.

We all know the issues. 88% of CIOs/CISOs and 83% of ITDMs reveal that they understand the serious implications and risks of large swathes of corporate data residing on endpoint devices—stating that losing critical data would be seriously disruptive or could cause irreparable harm to the corporation and its brand. But, awareness of data risk is also felt on the shop floor, with 47% of employees agreeing that the risks of corporate data loss would pose a threat to business continuity.

Yet, despite this understanding, three-in-ten ITDMs (30%) acknowledge that they do not have, or—very worryingly—don’t know if they have a meaningful endpoint data protection security strategy or solution in place. In turn, one in four employees (25%) say they do not trust their IT teams or companies with their personal data. And a further 36% of employees believe their company is at risk of a public breach in the next 12 months. So, the employees in the trenches with the real-world view (that the C-suite sometimes lacks) are worried.

For IT departments, the issues are not just internal. There are added regulatory pressures to consider. Sixty-nine percent of ITDMs say that the upcoming EU General Data Protection Regulation (GDPR) will affect the way they purchase and/or provision data protection and data security tools/solutions. In fact, 76% suggest they will be increasing their security tools and capabilities. Yet 18% are waiting for the proposed regulatory changes to be finalized before making any commitments—this might be too little too late. Adding new capabilities requires careful planning with CapEx and OpEx considerations, and this rarely happens overnight. Add to this the 43% of ITDMs who say they have been affected by the invalidation of Safe Harbor (soon to be replaced by Privacy Shield), and you might see ITDMs engaged in a waiting game, a.k.a. analysis paralysis.

Security leaders need clear, effective, and measurable strategies that pursue proactive steps to protect their companies—or risk facing a datastrophe! From the CISO to the technical administrator, each individual needs to work with the lines of business in their organizations. They need to define their unique endpoint risks, embrace the agreed-upon solution(s), and deliver them according to plan—quickly. It's never too late for a proactive plan.

A last word: The 2016 Datastrophe Study is peppered with commentary from experts who share their views on the future of endpoint data protection, including CISOs, analysts and ethical hackers. To participate in the conversation, you can join us @code42 (a malware-free site ☺).

EU Safe Harbor and Privacy Shield: Timelines, Deadlines and Red Lines

March 16, 2016

What has happened since Safe Harbor was declared invalid, and what's next?

By Nigel Hawthorne, EMEA Marketing Director, Skyhigh Networks

As a quick reminder, Safe Harbor was the primary legal mechanism that allowed US-based companies and cloud providers to transfer data on European individuals to US data centers. However, this mechanism was declared invalid by the European Court of Justice on October 6, 2015.

It’s been five months since then and here are the main changes made by companies, negotiators, data protection authorities and lawmakers since then.

Most US-based organisations have looked at their mechanisms for transferring data and either adopted EU Model Clauses, Binding Corporate Rules, or new terms and conditions – as an example Salesforce issued new terms the day after the judgement; they were obviously ready.

More cloud providers have opened European-based data centers (or "centres," as they are referred to in the UK!), allowing data to stay in Europe, for example Skyhigh's own announcement and Microsoft's announcement jointly with Deutsche Telekom.

The various European data protection authorities that make up the EU Article 29 Working Party issued a statement on 16th October setting a deadline of the end of January for negotiators to come up with a new plan, with the threat that otherwise the data protection authorities "are committed to take all necessary and appropriate actions, which may include coordinated enforcement actions".

Some of the data protection authorities issued their own news with advice for companies; this one from the UK’s ICO puts the story in context and is very helpful.

The negotiators just missed the end-of-January deadline, but a few hours before the Working Party was to meet and decide on its actions, the EU-US Privacy Shield was announced. Frankly, there were few details at the time, so it was probably issued to hold off actions from the data protection authorities and buy a bit of negotiating time.

Another blog from the UK's ICO makes clear that their position is to wait and see what happens: "We will not be seeking to expedite complaints about Safe Harbor while the process to finalise its replacement remains ongoing and businesses await the outcome".

Fast forward to February 29, when the European Commission published its FAQ fact sheet on the EU-US Privacy Shield, which fills in many of the needed details. It shows that US organisations have stronger obligations, there are clearer safeguards, EU citizens have the right to redress, and the US has affirmed that there is no mass surveillance of data.

This isn’t the end of the process, but we continue down the road. The next steps are that the EU member states, the data protection working party (WP29) and the college of commissioners all need to approve the text for ratification, which is expected in June 2016.

If that happens, there could still be another claim back to the European Court of Justice that the framework is not strong enough and once the new EU GDPR (General Data Protection Regulation) becomes law in 2018, it is likely to be reviewed again.

There’s certainly been a lot of talking in the months since Safe Harbor was declared invalid. The situation isn’t completely clear, but no one with data on European individuals should be complacent in expecting that data privacy problems will all just go away.

Anyone with data on individuals in the 28 countries of the EU should consider how it is gathered, the opt-in given to users, how it is transferred, which cloud services hold that data, where those cloud services are based, where they store the data itself, which employees and third parties have access to that data, and the legal and privacy policies in use. Enterprises must look at the mechanisms being used to track the movement of data, the security technologies deployed, and the education of employees.

Ultimately, if you collect data, you are responsible for keeping it safe and the policies and mechanisms to ensure it is not lost. Transferring data from the EU to the US requires careful handling and organisations need to be able to follow the data that their users may be accessing. Outsourcing computing to the cloud may transfer personal information of EU individuals outside the EU, specifically to US cloud service providers. In this case, the employer needs to be able to track, log, manage and even block transfers made by employees if the appropriate legal and technical mechanisms are not in place to keep that data secure.

CSA Summit San Francisco 2016 Recap

March 11, 2016

By Frank Guanco, Research Project Manager, CSA Global

At the end of February, the Cloud Security Alliance (CSA) concluded its CSA Summit San Francisco 2016 with a full slate of presentations, releases, and announcements. CSA Summit kicked off the week with a full day of speakers and panels on the subject of 'Cloudifying Information Security' with a standing room only crowd. Throughout the week, CSA shared a number of updates, announcements, and releases that touched on the entire CSA portfolio. Below are links that recap some of the activity during CSA Summit San Francisco 2016.


Cloud Security Alliance Forms Global Enterprise Advisory Board
The Cloud Security Alliance announced the formation of the CSA Global Advisory Board, a 10-member body representing some of the world's most recognized experts within the information technology, information security, risk management and cloud computing industries. The Global Advisory Board has been established to support CSA in further anticipating emerging trends and, as a result, increase the influence enterprises have over the future of the cloud industry's ability to address dynamic cloud security requirements.

Cloud Security Alliance Establishes Research Fellowship Program
The Cloud Security Alliance announced the establishment of the CSA Research Fellowship Program designation, the highest honor and distinction awarded to a CSA Research Volunteer who has demonstrated significant contributions to CSA Research. The honor aims to recognize the talented and dedicated efforts of select CSA Research Volunteers whose work has led to groundbreaking and forward-thinking advancements of the CSA.

CCM Candidate Mapping update and CAIQ minor update
The CSA announced the release of the Candidate Mappings of ISO 27002/27017/27018 to version 3.0.1 of the CSA Cloud Controls Matrix (CCM). The ISO 27XXX series provides an overview of information security management systems. ISO 27002 provides further security techniques and controls based on ISO 27001. ISO 27017 extends that code of practice to the provision and use of cloud services. Finally, ISO 27018 is the first international standard delivering security techniques for the privacy and protection of PII (Personally Identifiable Information) in the cloud.

Additionally, CSA’s Consensus Assessments Initiative Working Group has released an update to version 3.0.1 of the Consensus Assessments Initiative Questionnaire (CAIQ) that included minor updates and corrections.

Cloud Security Alliance Releases New Network Function Virtualization Security Position Paper
The CSA's Virtualization Working Group released a new position paper on Network Function Virtualization (NFV), which discusses some of the potential security issues and concerns and offers guidance for securing an NFV-based architecture, whereby security services are provisioned in the form of Virtual Network Functions (VNFs). We refer to such an NFV-based architecture as the NFV Security Framework. This paper also references Software-Defined Networking (SDN) concepts, since SDN is a critical virtualization-enabling technology. The paper is the first step in developing practical guidance on how to secure NFV and SDN environments.

Cloud Security Alliance Releases The Treacherous 12: Cloud Computing Top Threats in 2016
The CSA's Top Threats Working Group released its latest report, The Treacherous 12: Cloud Computing Top Threats in 2016, developed to serve as an up-to-date guide that helps cloud users and providers make informed decisions about risk mitigation within a cloud strategy. While there are many security concerns in the cloud, this report focuses on 12 specifically related to the shared, on-demand nature of cloud computing.

Cloud Security Alliance Research Working Group Sessions
When CSA’s big events happen around the world, like CSA Summit San Francisco 2016, the CSA’s Research team hosts working group sessions for the various projects, groups, and initiatives that comprise the research portfolio. This year, about a dozen working groups shared their status updates and recent releases. The presentations from these sessions are available here.

Thanks to all who attended CSA Summit San Francisco 2016, those who visited our exhibition booth, and those we interacted with during the convention week. It was a successful event, and we look forward to seeing everyone at next year's CSA Summit San Francisco 2017.

Between SSL-cylla and Charib-TLS

March 11, 2016

By Jacob Ansari, Manager, Schellman & Company, Inc.

Securing encrypted Internet traffic transmissions, such as those between web browsers and web servers, is decidedly not simple. Despite the fact that well-established protocols, namely Secure Sockets Layer (SSL) and Transport Layer Security (TLS), have seen use for many years, they still have some complexities and security vulnerabilities. The most recent version of the Payment Card Industry Data Security Standard (PCI DSS), version 3.1, published in May 2015, specifically addressed some of the problems arising from vulnerable versions of SSL and TLS, and set the stage for a more rigorous approach to evaluating the security of the transport-layer protocols and mechanisms used for secure data communications. Unfortunately, navigating the particulars of communications protocols and cryptographic implementations can prove challenging. Further, some of the requirements to remove old, insecure protocols have proven particularly challenging for some organizations and in December 2015, the PCI SSC extended its deadline to eliminate SSL and early TLS by a full two years. Perhaps to best clear up some of the confusion around good security practice for these protocols, consider three elements: protocol versions, cipher groups, and software versions.

The last 12 to 15 months have seen a significant upheaval in the threat landscape for securing Internet communications. In late 2014, security researchers at Google published the details of an attack they called POODLE (Padding Oracle On Downgraded Legacy Encryption), which exploited a deficiency in one of the most common security protocols used on the Internet, Secure Sockets Layer (SSL), and allowed an attacker to decrypt portions of the data in transit on a supposedly secure connection. Despite the fact that this particular protocol was developed by Netscape in the 1990s and had been replaced by a better protocol called Transport Layer Security (TLS), version 3 of the SSL protocol (SSLv3) remained in popular use for many years. Further research revealed that version 1.0 of TLS had similar weaknesses that allowed a similar attack: decrypting web traffic with comparatively little reference data and computing power. Security practitioners and standards organizations began advising against the use of SSLv3 or TLSv1.0, which, while entirely correct from a security perspective, has caused no small difficulty for endpoints that don't support TLSv1.1 or v1.2, such as older versions of Internet Explorer and numerous Android devices that no longer get software updates from their manufacturer. This issue, in particular, has had a cascading effect, where organizations that interact with endpoints like old browsers often hesitate to disable support for these vulnerable protocols, particularly TLSv1.0, and then the organizations that interface with them find themselves faced with the need to continue this support. While the complexities of negotiating the interactions among clients, servers and differing organizations mean that eliminating vulnerable protocols will take careful planning and disciplined execution, organizations should not hesitate to actively eliminate the use of SSLv3 and TLSv1.0, as these versions of the protocols are fundamentally insecure, and attacks against them will only become easier and more widespread.

Beyond the question of protocol version, when a client and server first begin communication, they negotiate the means by which they will communicate, including the groups of cryptographic ciphers they use, commonly known as cipher suites. Like protocol versions, most systems open negotiations with the best options, but many will still accept other, weaker cipher suites. Without getting too far into the technical details, a cipher suite consists of a key-exchange protocol, usually Diffie-Hellman or RSA, used to prevent attackers from impersonating legitimate sites; a symmetric-key cipher used to encrypt the messages that go back and forth; and a cryptographic hash function, used to verify that an error or attack didn't alter the message in transmission. Over the years, certain ciphers and key lengths have proved less resistant to attacks given currently available computing power, and security experts no longer consider them effective for secure communications. Nevertheless, many servers and clients still support their use, and this can lead to significant security problems such as the 2015 attacks known as FREAK and Logjam. Properly securing this environment usually involves making configuration changes so that servers support only cipher suites with strong encryption.
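To make the last two paragraphs concrete, here is a minimal server-side sketch using Python's standard ssl module: it refuses SSLv3 and TLSv1.0 and accepts only forward-secret, AES-GCM cipher suites. The cipher string and file names are illustrative, not a complete hardening guide, and production settings should follow current guidance for your software stack.

```python
# Minimal sketch of transport-layer hardening with Python's ssl module.
# Cipher string and certificate paths are illustrative assumptions.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.options |= ssl.OP_NO_SSLv3 | ssl.OP_NO_TLSv1      # drop SSLv3 and TLSv1.0 outright
# Prefer forward-secret key exchange (ECDHE) with AES-GCM; exclude weak/export ciphers.
ctx.set_ciphers("ECDHE+AESGCM:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5")
# ctx.load_cert_chain("server.crt", "server.key")     # server certificate and private key
```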

The last major area is hardly unique to software used for securing data transmission: patches and security updates. The software used to establish TLS connections for web servers and other types of systems has complexities, bugs and security weaknesses like any other piece of software. In fact, because of the complexity of doing cryptography right, popular implementations often release frequent updates to address security deficiencies, and, like any other vulnerability management process, organizations looking to defend against insecure transmissions should pay attention to these releases and apply the fixes promptly.

Despite the many security vulnerabilities and potential pitfalls in using TLS, it remains an effective mechanism for securing Internet traffic transmissions. That said, it requires proper configuration and use and regular maintenance, just like any other piece of software. It has some unique properties in that advances in knowledge of vulnerabilities and security research can render previously sufficient controls inadequate quickly, but a careful and disciplined security operations practice can usually manage these issues.