May the Fourth Be with EU

April 20, 2016 | Leave a Comment

Data Privacy Gets a Stronger Light Saber

By Nigel Hawthorn, EMEA Marketing Director, Skyhigh Networks

On April 14, 2016, the EU Parliament passed the long-awaited new EU rules for personal data protection (GDPR). Everyone who holds or processes data on individuals in the 28 countries of the EU has until Star Wars Day 2018 (May 4) to comply.

The top 10 provisions of the regulation are:

  1. It is a global law. No matter where in the world you operate, if you hold data on individuals in the EU and lose it, you are responsible and can be fined. For example, if an EU resident enters their contact information on your website, you must comply.
  2. Increased fines. Penalties can reach 4% of global turnover or €20,000,000 (roughly US$22M), whichever is higher.
  3. Opt-in regulations. Users must give clear consent to opt-in to their data being collected and you must only use it for the purpose defined. No opting out, no hidden terms, no selling/giving data to other people.
  4. Breach notification. If you lose data, you have 72 hours to tell the authorities.
  5. Joint liability. If multiple companies process the data, all of them are liable if it is lost. So if you hold data, YOU are responsible even if it is lost via a risky cloud service.
  6. Rights over data. Users can demand a copy of their data, have it updated, or have it deleted. If you hold data, you need to work out how to fulfil these requests.
  7. Removes ambiguity. One law across all 28 countries of the EU.
  8. Common enforcement. The authorities are expected to enforce consistently across all the countries; the good news is that data holders only need to deal with one authority.
  9. Collective redress. If data is lost, users can sue together in class-action lawsuits.
  10. Data transfer. Data transfer from the EU is allowed, but subject to strict conditions.

If you work for a company collecting data, you are responsible for the security of that data no matter where it gets processed. It’s more important than ever that you know the shadow IT services that employees may be using, as they could be the conduit for data loss and your organisation will be liable.

There’s some good news for IT in the regulation – the new rules encourage privacy-friendly techniques such as pseudonymisation, anonymisation, encryption and data protection by design and by default. So capabilities such as encrypting data before it is uploaded to the cloud, especially when combined with keeping the keys on premises, can reduce your liabilities.
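As a minimal illustration of pseudonymisation, one common approach is to replace direct identifiers with keyed HMAC tokens, keeping the key on premises. This is a sketch of the concept, not a complete GDPR control; the key and identifier values are invented:

```python
import hmac
import hashlib

def pseudonymise(identifier: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed, repeatable token.

    The same identifier always maps to the same token (so records can
    still be joined), but without the key the mapping cannot be reversed.
    """
    digest = hmac.new(secret_key, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()

# Keep this key on premises; whoever holds it can re-link tokens to people.
key = b"example-on-premises-key"
token = pseudonymise("alice@example.com", key)
```

Because the token is deterministic per key, analytics and joins still work on the pseudonymised data, while a breach of the data store alone does not expose the identifiers.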

This is good news for EU citizens, as they will have strong and clear rights over their personal data, its collection, processing and security.

Some organizations have in the past treated personal data as a cheap commodity but this regulation clearly shows how valuable data really is and demands that they treat it with great respect.

We should all put a value on data about ourselves and our families and embrace this legislation because the outcome is that all of our data will be safer.

WP29: Thumbs Down to Draft EU-US Privacy Shield

April 20, 2016 | Leave a Comment

By Françoise Gilbert, Global Privacy and Cybersecurity Attorney, Greenberg Traurig

In a 58-page opinion published April 13, 2016, the influential European Union Article 29 Working Party (WP29), which includes representatives of the data protection authorities of the 28 EU Member States, expressed significant concerns with respect to the terms of the proposed EU-US Privacy Shield that is intended to replace the EU-US Safe Harbor.

The WP29 levelled numerous criticisms at the proposed EU-US Privacy Shield framework, including, for example, the lack of consistency between the principles set forth in the Privacy Shield documents and the fundamental EU data protection principles outlined in the 1995 EU Data Protection Directive, the proposed EU General Data Protection Regulation, and related documents.

The WP29 group also requested clearer restrictions for the onward transfer of personal information that occurs after personal data of EU residents is transferred to the US. The WP29 is especially concerned with the subsequent transfer of data to a third country, outside the United States. In addition, the WP29 continues to be concerned about the effect, scope, and effectiveness of the measures proposed to address activities of law enforcement and intelligence agencies, often described as a “massive collection” of data.

On Feb. 29, 2016, the European Commission and U.S. Department of Commerce published a series of documents intended to constitute a new framework for transatlantic exchanges of personal data for commercial purposes, to be named the EU-U.S. Privacy Shield. The Privacy Shield would replace the EU-US Safe Harbor, which was invalidated by the Court of Justice of the European Union (CJEU) in October 2015, in the Schrems case.

Since the publication of the draft Privacy Shield documents, the WP29 members have convened in a series of meetings over the past six weeks to evaluate these documents and arrive at a common position.

The results of this six-week evaluation were expressed in an opinion entitled “Opinion 01/2016 on the EU-US Privacy Shield Draft Adequacy Decision – WP 238,” published on April 13, 2016. The 58-page document, which is well-drafted and thoughtful, contains numerous positive comments about the efforts of the EU and US in trying to design a framework that would adhere to the two-page guidance published at the end of January, which outlined the key aspects of the proposed cross-Atlantic framework.

The document also expressed a wide variety of concerns with respect to the proposed EU-US Privacy Shield. The WP29 group was concerned by: (i) the commercial provisions (which address issues similar to those addressed in the Safe Harbor principles); (ii) the surveillance aspects (specifically, the possible derogations to the principles of the Privacy Shield for national security, law enforcement, and public interests purposes); as well as, (iii) the proposed joint review mechanism.

Commercial Aspects
Consistency with Data Protection Principles
The WP29 indicated in its Opinion that its key objective is to make sure that the Privacy Shield would offer an equivalent level of protection for individuals when personal data is processed. The WP29 believes that some key EU data protection principles are not reflected in the draft documents, or have been inadequately substituted by alternative notions.

While it does not expect the Privacy Shield to be a mere and exhaustive copy of the EU legal framework, the WP29 stressed that the Privacy Shield should contain the substance of the fundamental principles in effect in the European Union, so that it can ensure an “essentially equivalent” level of protection. To this point, WP29 explains that the data retention principle is not expressly mentioned and there is no wording on the protection that should be afforded against automated individual decisions based solely on automated processing. The application of the purpose limitation principle to data processing is also unclear.

Onward Transfers
The WP29 paid special attention to onward transfers, an issue that was key to the Safe Harbor decision. It believes that the Privacy Shield provisions addressing onward transfers of EU personal data are insufficiently framed, especially regarding their scope, the limitation of their purpose, and the guarantees applying to transfers to Agents.

The WP29 noted that since the Privacy Shield would be used to address onward transfers from a Privacy Shield entity located in the US to third country recipients, it should provide the same level of protection on all aspects of the Shield, including national security. In case of an onward transfer to a third country, every Privacy Shield organization should have the obligation to assess any mandatory requirements of the third country’s national legislation applicable to the data importer before making the transfer.

Recourse Mechanisms
Finally, although the WP29 notes the additional recourses made available to individuals to exercise their rights, it is concerned that the new redress mechanism may prove too complex in practice and too difficult for EU individuals to use, and therefore ineffective. The WP29 therefore stresses the need for further clarification of the various recourse procedures; in particular, it suggests that, where they are willing, EU data protection authorities could serve as a natural contact point for EU individuals involved in these complex redress procedures and could have the option to act on their behalf.

National Security
Derogations for National Security Purposes
The WP29 observed that the draft EU Commission Adequacy Decision extensively addresses the possible access to data processed under the Privacy Shield for purposes of national security and law enforcement. It also notes that the US Administration, in Annex VI of the documents, also provides for increased transparency on the legislation applicable to intelligence data collection.

Massive Collection
Regarding the massive collection of information, the WP29 notes that the representations of the U.S. Office of the Director of National Intelligence (ODNI) do not exclude massive and indiscriminate collection of personal data originating from the EU. This brings concerns for the protection of the fundamental rights to privacy and data protection. The WP29 pointed to other resources for clarification on this point, such as the forthcoming rulings of the CJEU in cases regarding massive and indiscriminate data collection.

Concerning redress, the WP29 welcomes the establishment of an Ombudsperson as a new redress mechanism. At the same time, it expressed concern that this new institution might not be sufficiently independent, might not be vested with adequate powers to effectively exercise its duty, and might not guarantee a satisfactory remedy in case of disagreement.

Annual Joint Review
Regarding the proposed Annual Joint Review mechanism mentioned in the Privacy Shield framework, the WP29 noted that the Joint Review is a key factor to the credibility of the Privacy Shield. It points out, however, that the specific modalities for operations, such as the resulting report, its publicity, and the possible consequences, as well as the financing, need to be agreed upon well in advance of the first review.

Drafting Deficiencies
Consistency with the General Data Protection Regulation
The WP29 notes that the Privacy Shield needs to be consistent with the EU data protection legal framework, in both scope and terminology. It suggests that a review should be undertaken shortly after the entry into application of the General Data Protection Regulation (GDPR), to ensure that the higher level of data protection offered by the GDPR is followed in the adequacy decision and its annexes.

Structure and Content
Regarding the structure and content of the documents, the WP29 noted that the complex structure of the documents that constitute the Privacy Shield makes them difficult to understand. It is also concerned that this lack of clarity might make the new framework difficult for data subjects, organizations, and even data protection authorities to comprehend. In addition, it notes occasional inconsistencies within the 110 pages that form the current draft of the Privacy Shield framework. The WP29 urges the Commission to make the documents clearer and more understandable for both sides of the Atlantic.

In its 58-page opinion, the WP29 made great efforts to point to the improvements brought by the Privacy Shield compared to the Safe Harbor decision. However, overall, the evaluation of the 110-page proposed Privacy Shield framework is generally negative. The WP29 appears to doubt that the protection that would be offered under the Privacy Shield would be equivalent to that of the EU. The extent to which the EU Commission will be able to address these concerns, identify appropriate solutions and provide the requested clarifications in order to improve the proposed documents remains to be seen.

Six months after the CJEU invalidated the EU Commission decision that had created the EU-US Safe Harbor, it seems that cross-Atlantic data transfers are still in limbo. There is still no simple, business friendly solution to addressing the stringent prohibition against cross border data transfers between EU/EEA entities and US based companies. The viability of the Privacy Shield remains in question. With the negative opinion issued by the WP29, a very influential body of the European Union, it is uncertain whether and when a stable and final draft will be completed. Assuming such framework may reach a form that is satisfactory to both sides, it would then need to be implemented. At a minimum, a new infrastructure, a website, and additional personnel will also be needed to make it operational—these are all things that take even more time.

In the meantime, US companies that built their operations and business models around the simple and easy to use EU-US Safe Harbor should review the legality of their cross border data transfers with their counsel. With no light at the end of the tunnel, it is urgent that they evaluate and implement means to address the stringent restriction against cross border data transfers in effect in the European Union and European Economic Area, and that they understand and address the needs of their counterparts in the EU/EEA region in order to minimize the risk of enforcement action against the European entities.

BYOD Stalled? Three Tips to Get It Going

April 19, 2016 | Leave a Comment

By Susan Richardson, Manager/Content Strategy, Code42

Despite some surveys that say Bring Your Own Device (BYOD) is growing, the CyberEdge Group’s recently released 2016 Cyberthreat Defense Report found that enterprise BYOD programs have stalled. Only one-third of respondents this year had implemented a BYOD policy—the same as two years ago. And 20 percent still have no plans to add one.

The delay in leveraging BYOD programs may be because organizations find them harder to establish, manage and secure than first thought. But the lack of an official policy doesn’t mean employees aren’t plugging their unapproved devices into the network. A Gartner survey found that 45 percent of workers use a personal device for work without their employer’s knowledge.

So here are answers to three key BYOD sticking points, to help organizations get unstuck and leverage the increased productivity gains BYOD can bring:

Q: How do we separate corporate and personal data on a device?
A: Containerization.

Most mobile device management (MDM) programs today allow you to separate the corporate workspace from the personal workspace on mobile devices. Containerization, also known as sandboxing, helps reduce the number of policies required to effectively manage mobile risks. It can also assuage employee fears that if they’re terminated or report a device missing, you’ll wipe away the entire contents of their device—including personal data like photographs and emails.

Q: How do we keep tabs on all that roaming mobile data?
A: With a comprehensive cloud endpoint backup system.

Modern cloud endpoint backup solutions serve as the new data guardian, continuously and automatically moving data from a device to the cloud and back again to a new machine whenever it’s needed. They protect enterprise data by continuously backing up every change and deletion. The best endpoint backup systems also give IT a comprehensive, single point of aggregation and control: you can see what’s on your network, how each device is configured, how it interacts with your environment, as well as where and when data was created, whether it has been altered, and who changed it. All of this happens whenever the machine is connected to the Internet, without prompting the user, running seamlessly and silently in the background.
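At the heart of continuous backup is a simple change-detection loop: hash the current contents of each file, compare against the last backed-up digest, and upload only what differs. A minimal sketch (the class and method names are illustrative, not any vendor’s API):

```python
import hashlib

class BackupIndex:
    """Track a SHA-256 digest per file path and report what changed."""

    def __init__(self):
        self._digests = {}  # path -> hex digest of last backed-up version

    def changed(self, path: str, contents: bytes) -> bool:
        digest = hashlib.sha256(contents).hexdigest()
        if self._digests.get(path) == digest:
            return False   # already backed up, nothing to do
        self._digests[path] = digest
        return True        # new or modified: queue for upload

idx = BackupIndex()
idx.changed("report.docx", b"draft 1")   # True: first sighting, back it up
idx.changed("report.docx", b"draft 1")   # False: unchanged, skip
idx.changed("report.docx", b"draft 2")   # True: edited, back up the new version
```

A production agent would run this against a file-system watcher and keep every prior version, which is what makes the "who changed what, and when" audit trail possible.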

Q: Who pays and how?
A: You, the enterprise, by automating reimbursement.

With California leading the way, BYOD reimbursement won’t just be the ethical thing to do, it will be legally required under fair labor laws. But manually managing reimbursement via expense reports is archaic and expensive. It can cost $15 to $20 per expense report in internal labor, because so many different departments have to touch the report, from accounts payable to finance to IT. Instead, do like Intel did and automate reimbursement by setting up corporate-funded plans with mobile providers. That way, your company takes care of the bill and can negotiate corporate discounts with providers.

To get started developing a BYOD strategy, download this BYOD checklist.

Panama Papers Expose Data Security Deficiencies in Law Firms

April 12, 2016 | Leave a Comment

By Rick Orloff, Chief Security Officer, Code42

The unprecedented leak of 11.5 million files from the database of the world’s fourth biggest offshore law firm is riveting. As details continue to emerge about the Panama Papers leak, the money laundering and secretive tax regimes and high-profile clientele make for a juicy story. But from an enterprise data security perspective, here at Code42 we’re shaking our heads.

It’s hard to imagine a situation where the stakes for data protection could be higher. This is an organization whose entire “empire” is built on “secret” data. And it was an all-or-nothing game: Mossack Fonseca will likely never recover to earn the trust of a future client—tax evader or otherwise. If there ever was an organization that warranted exceptional network security tools and data security measures, Mossack Fonseca was it.

A data security wake-up call for honest law firms everywhere
If a massive international law firm dealing exclusively in extremely sensitive data is this easily hacked, how vulnerable is your average, above board law firm?

According to the statistics, the answer is “very.” John McAfee penned an article for Business Insider in which he concludes that “law firms are easy pickings for hackers.” Bloomberg found that 80 percent of large U.S. law firms were hacked in 2015. Even more alarming, in the 2015 ABA Technology Survey, 23 percent of firms surveyed said they “don’t know” if they’ve experienced a breach, and only 10 percent have any sort of cyber liability coverage. For a cohort that knows a thing or two about liability lawsuits—and certainly knows that “ignorance of the law” is a poor defense—this is surprising.

Data protection is a high-stakes game for every law firm
And while a data breach at your average law-abiding law firm isn’t likely to result in indictments for fraud, the stakes are still extremely high. “The implications of law firm breaches are mind boggling,” Philip Lieberman, president of Lieberman Software, told Computer Business Review.

Most clearly, a firm stands to destroy every shred of trust with its clients—a reputation bomb that will be tough to recover from. In many cases, a leak could compromise legal proceedings and eliminate advantages by placing litigation strategy and privileged information out in the open.

Even if a firm’s clients and reputation escape unscathed, data loss of any kind can trigger significant financial impact. A damaged laptop, or ransomware that holds data hostage, can leave an associate without access to critical information. The loss of billable hours quickly adds up. Add to that breach reporting requirements and potential fines, and the ROI of modern enterprise data security tools is easily apparent.

It will be interesting to watch the continued fallout from the Panama Papers, and we’re happy to count this as a win for the “good guys.” But as it dominates headlines and newsfeeds, we hope it’s also a major reminder for law firms—and enterprises in every industry—to re-examine what they’re doing to protect their data.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

CSA Releases New White Paper on Current Cloud Certification Challenges Ahead and Proposed Solutions

April 11, 2016 | Leave a Comment

By Daniele Catteddu, Chief Technology Officer, Cloud Security Alliance

Today, the Cloud Security Alliance has released the CSA STAR Program & Open Certification Framework in 2016 and Beyond, an important new whitepaper that has been created to provide the security community with a description of some of the key security certification challenges and how the CSA intends to address them moving forward.

As background: launched in 2011, the CSA’s Security, Trust and Assurance Registry (STAR) program has become the industry’s leading trust mark for cloud security, meeting its objective of improving trust in the cloud market through increased transparency and information security assurance. The Open Certification Framework, also developed by the CSA, is an industry initiative to allow global, accredited, trusted certification of cloud providers. It allows for flexible, incremental and multi-layered cloud service provider (CSP) certifications according to the CSA’s industry-leading security guidance.

Together the OCF/STAR program comprises a global cloud computing assurance framework with a scope of capabilities, flexibility of execution, and completeness of vision that far exceeds the risk and compliance objectives of other security audit and certification programs.

Since the launch of STAR, the cloud market has evolved and matured, and so has the cloud audit and certification landscape, which now offers more than fifteen options, including national, regional and global, sector-specific, cloud-specific and generic certification schemes. This proliferation has resulted in, among other things, a barrier to entry for CSPs that cannot afford to get certified across multiple countries and organizations.

Aside from the time and cost of pursuing and maintaining these numerous certifications, there are a number of other concerns including:

  • Lack of means to provide higher level of assurance and transparency
  • Privacy not adequately taken into account
  • Limited transparency
  • Lack of means to streamline GRC

To address these certification challenges, the CSA is proposing, through the OCF, to offer the cloud community both a global recognition scheme for security and privacy certification and a set of GRC tools and practices that address the many complex assurance and transparency requirements of cloud stakeholders.

The three core ideas behind the CSA’s suggested solutions are that an effective and efficient approach to trust and assurance has to:

  • delicately balance the need of nations and business sectors to develop their specific certification schemes with the need of CSPs to reduce compliance costs
  • avoid having humans (auditors) perform activities that can be done by machines (e.g. collecting data)
  • ensure that accurate and reliable evidence and information is provided to the relevant people, in a timely fashion, leveraging automated means as much as possible

The paper also outlines how a number of other frameworks and controls should play a part in this solution including:

  • Leveraging CCM and OCF/STAR as normalizing factors
  • Conducting continuous monitoring/auditing
  • Integrating the privacy level agreements code of conduct into the STAR Program

The CSA is currently seeking validation of its proposed OCF-STAR program action plan, along with input and support from the CSA community. To become involved, visit the Open Certification Working Group.

How CASB Is Different from Web Proxy / Firewall

April 8, 2016 | Leave a Comment

By Cameron Coles, Sr. Product Marketing Manager, Skyhigh Networks

A common question that arises as IT teams begin to look at cloud access security broker (CASB) products goes something like, “we already have a web proxy and/or firewall, how is this different?” or “does CASB replace my web proxy / firewall?” These are natural questions because web proxies and firewalls have visibility into all traffic over the corporate network including traffic to and from cloud services. However, there are significant differences between existing network security solutions and a CASB. Let’s first dispel a major misconception: a CASB is not a replacement for existing network security tools, and vice versa.

[CASBs] deliver capabilities that are differentiated and generally unavailable today in security controls such as Web application firewalls (WAFs), secure Web gateways (SWGs) and enterprise firewalls – Gartner Market Guide for Cloud Access Security Brokers, Craig Lawson, Neil MacDonald, Brian Lowans [Oct. 22, 2015]

CASB is a separate, differentiated market from proxies and firewalls. While CASBs can be deployed in forward or reverse proxy mode to enforce inline controls, the similarities to web proxies stop there. Unlike network security solutions that focus on a wide variety of inbound threats and filtering for millions of potentially illicit websites, a CASB is focused on deep visibility into and granular controls for cloud usage. A CASB can also be deployed in an API mode to scan data at rest in cloud services and enforce policies across this data. Here are some of the high-level functions of a CASB not available in existing network security solutions:

  • Provide a detailed, independent risk assessment for each cloud service (e.g. compliance certifications, recent data breaches, security controls, legal jurisdiction).
  • Enforce risk-based policies (e.g. block access to all high-risk file sharing services and display a real-time coaching message directing users to a company-approved service).
  • Control access to individual user actions based on context (e.g. prevent users from downloading reports to unmanaged devices on remote networks).
  • Enforce data-centric security policies (e.g. encrypting data as it is uploaded to the cloud or applying rights management protection to sensitive data on download).
  • Apply machine learning to detect threats (e.g. an IT user downloading an unusual volume of sensitive data and uploading it to a personal account in another cloud app).
  • Respond to cloud-based threats in real time (e.g. terminating account access in the face of an insider threat or requiring additional authentication factors to continue using a cloud service in the face of a compromised account).
  • Enforce policies for data at rest in the cloud (e.g. revoking sharing permissions on files shared with a business partner or retroactively encrypting sensitive data).

Cloud-related functions of web proxies / firewalls
Web proxies and firewalls offer broad protection against network threats and, as part of this protection, they do offer some limited visibility into cloud usage, even without integrating with a CASB. For example, although these solutions may have difficulty mapping the URLs users access to specific cloud services, they track cloud access over the corporate network. Some customers use their network security solutions to terminate SSL and inspect content for malware. Proxies and firewalls also bucket cloud services into high-level categories (e.g. Technology/Internet, Business/Economy, Suspicious); however, these categories generally do not reflect the underlying function of the service such as file sharing, CRM, or social media.

One of the primary use cases of network security solutions is categorizing and enforcing access to millions of illicit websites that contain pornography, drugs, gambling, etc. Web proxies can redirect access attempts to specific URLs to an alternate webpage hosting a notification that the URL was blocked. Similarly, firewalls can be configured to block access to specific IP addresses. Both solutions lack detailed and up-to-date cloud registries with cloud service URLs and IP addresses to extend this access control functionality to cloud services. Enterprises often find that while they may have initially blocked a cloud service, cloud providers routinely introduce new URLs and IPs that are not blocked. This results in the widespread phenomenon of “proxy leakage” in which employees regularly access cloud services that IT intends to block.
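The registry-based approach that closes this gap can be sketched as follows: map each cloud service to every domain it uses, kept current as providers add new ones, and block by service rather than by individual URL. The service names and domains below are invented for illustration:

```python
from urllib.parse import urlparse

# Hypothetical registry: each cloud service maps to ALL domains it uses,
# updated as the provider introduces new URLs and CDN endpoints.
REGISTRY = {
    "riskyshare": {"riskyshare.com", "riskyshare-cdn.net", "rs-upload.io"},
}
BLOCKED_SERVICES = {"riskyshare"}

def service_for(url: str):
    """Resolve a URL to the cloud service behind it, if any."""
    host = urlparse(url).hostname or ""
    for service, domains in REGISTRY.items():
        if any(host == d or host.endswith("." + d) for d in domains):
            return service
    return None

def is_blocked(url: str) -> bool:
    return service_for(url) in BLOCKED_SERVICES

is_blocked("https://rs-upload.io/put")   # True: new domain, same blocked service
is_blocked("https://example.org/page")   # False: not a known cloud service
```

Because the policy is keyed to the service rather than to a static URL list, a newly introduced provider domain is caught as soon as the registry is updated, which is what prevents the “proxy leakage” described above.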

The focus on IP reputation is also not directly applicable to cloud services. A cloud service may have a high IP reputation, but due to its security controls, or lack thereof, it may also be unsuitable to store corporate data. For example, take a file sharing service with a good IP reputation that allows anonymous use, shares customer data with third parties, is hosted in a privacy-unfriendly country, and experienced a password breach three months ago. Few IT leaders would want sensitive corporate data uploaded to this service. Without a registry of these attributes, network security solutions are unable to enforce risk-based policies. Moreover, since many cloud services do not use standard content disposition headers, network security solutions are unable to enforce data loss prevention (DLP) policies to prevent the upload of sensitive data.
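A simplified sketch of a risk-based policy decision built on such attributes follows; the attribute names are hypothetical, and a real CASB registry tracks many more than these:

```python
def assess(service: dict) -> str:
    """Return 'block', 'coach', or 'allow' from service risk attributes.

    Note that IP reputation does not appear here at all: a service can
    have a clean reputation yet still be unfit to hold corporate data.
    """
    high_risk = (
        service.get("allows_anonymous_use")
        or service.get("shares_data_with_third_parties")
        or service.get("breach_in_last_year")
    )
    if high_risk:
        return "block"
    if service.get("jurisdiction_privacy_unfriendly"):
        return "coach"   # redirect the user toward an approved alternative
    return "allow"

assess({"allows_anonymous_use": True})   # 'block', despite any IP reputation
```

This is the kind of decision a file-sharing service like the one described above would fail, regardless of how reputable its IP addresses look.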

How CASB integrates with web proxies / firewalls
CASB is a complementary technology to web proxies and firewalls. By integrating with these solutions, a CASB can leverage existing network infrastructure to gain visibility into cloud usage. Simultaneously, a CASB enhances the value of these investments by making them cloud-aware. There are three primary methods a CASB uses to integrate with network security solutions: log collection, packet capture, and proxy chaining.

Log Collection
Web proxies and firewalls capture data about cloud usage occurring over the network, but they may not differentiate cloud usage from Internet usage. A CASB can ingest log files from these solutions and reveal which cloud services are in use by which users, data volumes uploaded to and downloaded from the cloud, and the risk and category of each cloud service. In effect, a CASB makes existing infrastructure cloud-aware. CASBs also detect enforcement gaps in existing egress infrastructure and can push access policies, with up-to-date cloud service URLs, back to those devices to close them. For customers that terminate SSL, a CASB can also gather additional detail from these logs on the actions users take within cloud services. Using machine learning, a CASB can detect malware or botnets using the cloud as a vector for data exfiltration.
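A toy example of this log-enrichment step, assuming a CSV-style log with user, domain, and bytes-uploaded columns (real proxy log formats vary widely, and the registry entries are invented):

```python
import csv
from collections import defaultdict
from io import StringIO

# Hypothetical domain -> (service, category) registry used to enrich logs.
REGISTRY = {"files.example-share.com": ("ExampleShare", "file sharing")}

def summarise(log_csv: str):
    """Aggregate proxy log rows into per-cloud-service usage."""
    usage = defaultdict(lambda: {"users": set(), "bytes_up": 0})
    for row in csv.DictReader(StringIO(log_csv)):
        entry = REGISTRY.get(row["domain"])
        if entry is None:
            continue  # plain web traffic, not a known cloud service
        service, _category = entry
        usage[service]["users"].add(row["user"])
        usage[service]["bytes_up"] += int(row["bytes_up"])
    return usage

log = (
    "user,domain,bytes_up\n"
    "alice,files.example-share.com,1048576\n"
    "bob,news.example.org,2048\n"
)
summary = summarise(log)  # only ExampleShare survives the enrichment
```

The raw log treats both rows as generic web traffic; the registry lookup is what turns them into "alice uploaded 1 MB to a file-sharing service," which is the visibility the surrounding text describes.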


Packet Capture
In the packet capture deployment mode, a CASB ingests a feed of traffic from existing network security solutions to gain visibility into the content of data. For example, a CASB can integrate with a web proxy via ICAP. The web proxy is configured to copy and forward cloud traffic to the CASB to evaluate data loss prevention (DLP) policies in a monitor-only configuration. Many cloud services use custom content disposition headers in an effort to improve the performance of their applications. These custom headers have the unintended side effect of preventing network security solutions (and on-premises DLP solutions that integrate to them via ICAP) from inspecting content for DLP. CASBs leverage detailed cloud service signatures to inspect cloud traffic, evaluate DLP policies, and generate alerts for DLP policy violations.
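Content inspection of this kind often starts with pattern matching plus a validity check. A minimal sketch for payment card numbers, using the standard Luhn checksum to cut false positives (the regex is deliberately simple; production DLP engines use far richer detectors):

```python
import re

# Candidate: 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum: doubles every second digit from the right."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str):
    """Return candidate card numbers that also pass the Luhn check."""
    return [m.group() for m in CARD_RE.finditer(text) if luhn_ok(m.group())]

find_card_numbers("order ref 4111 1111 1111 1111 attached")  # one match
find_card_numbers("invoice 1234 5678 9012 3456")             # none: fails Luhn
```

Running checks like this against the copied traffic lets the CASB raise a DLP alert even when custom content disposition headers have blinded the upstream proxy.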


Proxy Chaining
A CASB can be deployed as a forward proxy. Many organizations already have a web proxy, and they do not want to deploy another endpoint agent. In proxy chaining mode, the downstream web proxy is configured to route all cloud traffic through the CASB. In this deployment mode, the CASB can enforce real-time governance and security policies. For instance, a CASB can enforce access control policies limiting specific cloud service functionality and displaying educational messages when a user accesses a service outside of policy with options to notify, justify access, and direct users to approved cloud services. Unlike packet capture, this deployment mode enables a CASB to enforce inline DLP policies to prevent policy violations.


Taken together, CASBs enhance the value of investments enterprises have made in network security solutions. Rather than forcing a rip-and-replace of existing solutions, CASBs integrate with and extend their capabilities to the cloud. There are clear differences in the functionality of web proxies/firewalls and CASBs. Neither is a replacement for the other, but together they deliver better visibility into cloud usage and the ability to enforce compliance and governance policies to protect corporate data as it moves to the cloud. To learn more about the cloud access security broker (CASB) market, download a free copy of the latest Gartner report, How to Evaluate and Operate a Cloud Access Security Broker (Neil MacDonald and Craig Lawson, Dec. 8, 2015), here.


How to Get C-suite Support for Insider Threat Prevention

April 6, 2016 | Leave a Comment

By Susan Richardson, Manager/Content Strategy, Code42

If you’re not getting support and adequate funding from the C-suite to address insider threats, a recent report highlights a powerful persuasive tool you may have overlooked: money—as in fines (cha-ching), lawsuits (cha-ching) and credit monitoring services (cha-ching) you’ll have to pay as the result of a data breach.

The IDC report, “Endpoint Data Protection for Extensible DLP Strategies,” cites two health-care groups that paid six figures each in fines for data breaches as a result of improper employee behaviors. Here are even more powerful examples of the price your organization could pay for not addressing insider data security threats:

Target insider breach costs could reach $1 billion
Target may have skirted an SEC fine, but the retailer is still paying a hefty price because cyber thieves were able to access customer credit card data via a subcontractor’s systems. Breach costs included $10 million to settle a class action lawsuit, $39 million to financial institutions that had to reimburse customers who lost money, and $67 million to Visa for charges it incurred reissuing compromised cards. For 2014, Target had $191 million in breach costs on its books; estimated totals could reach $1 billion after everything shakes out.

AT&T fined $25 million for employee breach
In 2015, AT&T paid a $25 million fine to the Federal Communications Commission after three call center employees sold information about 68,000 customers to a third party. The cyber thieves used the information to unlock customers’ AT&T phones.

On top of the fine, AT&T was required to do things it should have done in the first place:

  • Appoint a senior compliance manager who is a certified privacy professional.
  • Conduct a privacy risk assessment.
  • Implement an information security program.
  • Create a compliance manual and regularly train employees.
  • File regular compliance reports with the FCC.

AvMed paid $3 million in settlement
While the health plan company avoided a HIPAA fine, it paid $3 million in settlements to 460,000 customers whose personal information was on two stolen, unencrypted laptops. On top of that were costs to reimburse customers’ actual monetary losses.

In addition, the company had to:

  • Provide mandatory security awareness and training programs for all company employees.
  • Provide mandatory training on appropriate laptop use and security.
  • Upgrade all company laptops with additional security mechanisms, including GPS tracking technology.
  • Add new password protocols and full-disk encryption technology on all company desktops and laptops so that electronic data stored on the devices would be encrypted at rest.
  • Upgrade physical security to further safeguard workstations from theft.
  • Review and revise written policies and procedures to enhance information security.

The lesson here should be obvious. It’s far cheaper to act now—by implementing available endpoint protection technology and instituting a security-aware culture—than to wait for a breach that forces you into action.

As security expert Philip Lieberman noted in the AT&T case, the penalty cost AT&T much more than the steps it should have taken to prevent the insider breach: “The C-level staff will have to explain this to the board as to why they did not implement a control when the cost would be trivial.”

To learn more about “Endpoint Data Protection for Extensible DLP Strategies” get the IDC analyst report.

Don’t Let Your Cloud Security Strategy Get Railroaded by Old Thinking

April 4, 2016 | Leave a Comment

By Player Pate, Senior Manager/Product Marketing, Cisco Security Business Group

The standard gauge used for railroads (that is, the distance between the rails) in the U.S. is four feet, eight and a half inches, which is an odd number however you look at it. The history behind it is even stranger and is a cautionary tale of assumptions and the consequences of basing decisions on old thinking.

That oddly sized gauge was borrowed from the English railroad standard. English railroads were built with the same tools and jigs used to build wagons, so they inherited the wagons' wheel spacing. And the wagon wheels had to be that width because it matched the wheel ruts worn into England's roads at the time.

So who created those?

Roman chariots created the wheel ruts in the roads when they occupied England some two thousand years ago. These Roman war chariots were built just wide enough to accommodate the rear-ends of two horses, which just happened to be…you guessed it: four feet, eight and a half inches wide. This created the standard gauge that is still used today.

Ok, so where’s this heading?

The space shuttles used in modern-day space exploration carried two large booster rockets on the sides of their main fuel tanks. These rockets, called solid rocket boosters or SRBs, gave the spacecraft its initial thrust at launch and were built in a factory in Utah. The engineers would have preferred to make them larger, but the SRBs had to be transported by train from the factory to the launch site. That railroad line ran through a tunnel in the Rocky Mountains, and the SRBs had to fit through it. The tunnel is only slightly wider than the railroad track, and the railroad track, as we now know, is only about as wide as the hindquarters of two horses.

Say that again?

A primary constraint in the design of one of the most advanced transportation systems ever developed was determined more than two thousand years ago by two horses’ asses.

Interesting, but what’s that have to do with cloud security?

That is the danger of getting caught in the rut of the same old thinking, and the same danger applies to securing cloud infrastructure. Cloud security can’t be solved with legacy security technologies or siloed approaches. It must be as dynamic as the nature of the cloud itself and should address the issues of:

  1. Keeping valuable data secure in the data center or wherever your cloud is hosted;
  2. Securing applications and data in the cloud;
  3. Enabling secure access anywhere, to anything for the mobile user or IoT;
  4. Consistently protecting against threats across the data center, cloud and wherever users roam before, during, and after attacks; while
  5. Providing visibility across the entire spectrum to enforce governance and compliance.

Cloud security doesn’t simply require deploying a separate application or new technology, nor does it require you to completely scrap your existing infrastructure. It is an extension of your entire security program: security embedded into the intelligent network infrastructure, integrated with a rich ecosystem of applications and services, and pervasive across the extended network. That means not just the networks themselves but all endpoints, mobile and virtual, wherever employees are and wherever data is, from the beating heart of the enterprise data center out to the mobile endpoint and even onto the factory floor.

Think of the journey to cloud security adoption as your chance to take off into space: when planning the size of your rockets, are you imagining all the new possibilities, or limiting your opportunities to what’s always been done? Hopefully the cautionary tale of the history of US railroads helps you expand your thinking.

Check out our Cisco Business Cloud Advisor adoption tool to evaluate the overall readiness of your organization’s cloud strategy, including from a security perspective. Also stay tuned to this blog as we dig further into this topic.

Four Security Solutions Not Stopping Third-Party Data Breaches

March 31, 2016 | Leave a Comment

By Philip Marshall, Director of Product Marketing, Cryptzone

A new breed of cyberattack is on the rise. Although it was practically unheard of a few years ago, the third-party data breach is rapidly becoming one of the most infamous IT security trends of modern times: Target, Home Depot, Goodwill, Dairy Queen, Jimmy John’s and Lowe’s are just a few of the US companies to have lost massive amounts of customer records as a result of their contractors’ usernames and passwords falling into the wrong hands.

What went wrong? Hackers have started to see contractors as the easy way into their targets’ networks. Why? Because too many organizations are still using yesterday’s security solutions, which weren’t designed for today’s complex ecosystems and distributed (read cloud-based) applications and data.

Here are four examples of solutions that, in their traditional forms, simply aren’t capable of stopping third-party data breaches. Could your company be at risk?

1. Firewalls and Access Control Lists
Many organizations still control traffic flow between network segments in the same way they’ve done for decades: with firewalls and access control lists (ACLs). Unfortunately, security in the modern age isn’t as simple as just defining which IP addresses and ranges can access which resources.

Let’s say you have a single VPN for all of a department’s workers and contractors, with every authenticated user getting a DHCP-allocated IP address. Your firewall rules are going to have to be wide open to suit the access needs of each user on the IP range, and yet you’re not going to be able to trace suspicious activity back to a particular account and machine.

It’s also a lot of work for your IT department to set up and maintain complex firewall rules across the entire organization, so it’s not unlikely that they’ll make mistakes, respond slowly to employee departures, and leave access more open than it should be.

2. Authentication and Authorization
Leading on from this, another problem with ACLs is that they generally rely on static rules, which in no way account for the security risks of today’s distributed workforces. A username and password pair will unlock the same resources whether used from a secure workstation at a contractor’s premises or from an unknown device on the other side of the world.

Authentication and authorization rules should be dynamic rather than static, and adjusted on the fly according to the risk profile of the connection. One of your contractors needs remote access to a management network segment? Fine – but only if they use a hardened machine during office hours. If the context of their connection is more suspicious, you might consider two-factor authentication and more limited access.
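One way to sketch such dynamic, context-aware authorization is to score each connection's risk attributes and map the score to an access level. The attributes, weights, and thresholds below are invented purely for illustration; a real policy engine would draw on device posture, geolocation, and identity signals:

```python
def risk_score(ctx: dict) -> int:
    """Hypothetical scoring: each risky attribute of the connection adds points."""
    score = 0
    if not ctx.get("hardened_device", False):
        score += 2  # unknown or unmanaged machine
    if not ctx.get("known_network", False):
        score += 1  # connecting from an unrecognized network
    hour = ctx.get("hour", 12)
    if hour < 8 or hour > 18:
        score += 1  # outside office hours
    return score

def authorize(ctx: dict) -> str:
    """Map the connection's risk to an access decision, not a static ACL."""
    score = risk_score(ctx)
    if score == 0:
        return "full-access"
    if score <= 2:
        return "mfa-required"   # suspicious context: require a second factor
    return "limited-access"     # high risk: restrict what can be reached

# Contractor on a hardened machine, known network, during office hours:
print(authorize({"hardened_device": True, "known_network": True, "hour": 10}))
# Unknown device, unknown network, 2 a.m.:
print(authorize({"hardened_device": False, "known_network": False, "hour": 2}))
```

The same username and password produce different entitlements depending on context, which is exactly what a static ACL cannot do.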

3. IPsec and SSL VPNs
More than nine in ten organizations (91 percent) still use VPNs – a 20-year-old technology – to provision remote access to their networks. It’s potentially their single greatest risk factor for third-party data breaches, because both IPsec and SSL VPNs are readily exploitable by hackers.

In an IPsec session, remote users are treated as full members of the network. Nothing is invisible – they have direct access to the underlying infrastructure. So, if they’re malicious, they can start digging around and looking for vulnerabilities in seconds.

SSL VPNs, meanwhile, deliver resources via the user’s browser. And what web application has ever been secure? Tricks like SQL injection and remote code execution attacks make it trivial for hackers to start widening their foothold on the network.

4. IDS, IPS and SIEM
Finally, a word on the technologies organizations use to detect data breaches. IDS, IPS and SIEM are generally mature and effective solutions that do the job they’re intended to do: identify suspicious activity on the network.

However, the combination of the antiquated technologies described above means that most networks are rife with false positives: legitimate users and harmless applications causing suspicious traffic in the network layer. Change this model, and IDS, IPS and SIEM systems might start to deliver more value. As it stands, though, they’re often resource-intensive and reactive rather than proactive, so they’re not really equipped to stop hackers in their tracks.

The Alternative to Prevent Third-Party Data Breaches
In the new world of pervasive internal and external threats, distributed organizations and global ecosystems, the perimeter is more porous and less relevant than ever. The old models simply aren’t working. We need to move from perimeter-centric, VLAN and IP-focused security to a model that focuses on securing the entire path from user to application, device to service – on a one-to-one basis.

That’s where solutions like AppGate, which enable organizations to adopt a software-defined perimeter approach for granular security control, become a must-have. AppGate makes the application/server infrastructure effectively “invisible.” It then delivers access to authorized resources only, creating a ‘segment of one’ and verifying a number of user variables and entitlements each session, including device posture and identity, before granting access to an application. Once the user logs out, the secure tunnel disappears.

Kicking Tires on World Backup Day: A Five-Point Inspection for Endpoint Backup

March 29, 2016 | Leave a Comment

By Rachel Holdgrafer, Business Content Strategist, Code42

Living with the constant threat of data breach or loss, large organizations have comprehensive remediation plans designed to guarantee speedy data recovery and business continuity. March 31, 2016 is World Backup Day—the perfect time to evaluate your endpoint backup strategy to ensure you’re ready if the worst happens.

A viable backup plan is a lot like having car insurance. While car insurance can’t prevent an accident, it will replace the crumpled bumper and shattered headlamps after you’ve been rear-ended, or the entire vehicle if it’s been totaled. In the same way, endpoint backup allows you to recover several lost files, everything on the laptop, or the entire enterprise—to a point in time as recent as moments ago.

Here are five inspection points to consider as you evaluate your endpoint backup solution.

Point #1: Do you have continuous protection—everywhere? The modern workforce works when, where and how it chooses, so employees need endpoint backup that protects their files continuously, whether they are in the office or on the road. Choose centralized, cloud-based endpoint backup that works across geographies, platforms and devices. It should be simple to manage and scale, and offer powerful features to solve other data collection, migration and security problems.

Point #2: Does it work with Macs, Windows and Linux? The modern enterprise is no longer a PC-only environment. Employee preference for Apple devices has increased Mac’s market share in the enterprise—and there’s no going back. Choose an endpoint backup solution that protects a “hybrid workplace” that includes Windows, Linux and OS X laptops and desktops and offers a consistent user experience across all platforms. Make sure your backup solution restores files to any computer or mobile device—without requiring a VPN connection.

Point #3: Will it enable rapid response and remediation? When protected/classified data goes public, response time is critical. Choose an endpoint data backup solution that provides 100 percent visibility and attribution of file content on any device. This enables IT (and InfoSec) to quickly identify a threat, mitigate the impact and determine whether data on compromised devices—including those that are lost or stolen—requires the organization to notify agencies or individuals of breach. If there is a reportable breach, 100 percent data attribution prevents over-reporting of affected records.

Point #4: Will it support fast data recovery in a dangerous world? Endpoint devices—and the humans who operate them—are the weakest link in the enterprise security profile. The 2016 Cyberthreat Defense Report found that 76 percent of organizations were breached in 2015, making it essential to plan for data breach and your recovery before it happens. Choose an endpoint backup solution that ensures rapid recovery of data—no matter the cause, without paying a ransom, without the original device, without the former employee. Endpoint backup is an investment in business continuity, risk mitigation and peace of mind.

Point #5: Does it let you decide where to store encryption keys? True data privacy means only the enterprise can view unencrypted files. Choose an endpoint backup solution that deduplicates locally rather than globally, encrypts data in transit and at rest, and enables you to hold encryption keys regardless of where your data is stored. On-premises key escrow ensures that only you can view decrypted data—keeping it safe from the cloud vendor, government surveillance and blind subpoena.
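The local-versus-global deduplication point can be illustrated with a small sketch: blocks are hashed on the client, and the hash index never leaves the device, so the backup provider cannot correlate identical content across customers (global deduplication requires exactly that cross-customer comparison). The fixed block size and in-memory storage layout here are simplified assumptions:

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and keep one copy per unique hash.

    Local deduplication: the hash index lives on (and never leaves) the
    client, so the backup provider never learns which blocks repeat.
    """
    store = {}      # hash -> block content (unique blocks only)
    manifest = []   # ordered hashes needed to reassemble the original data
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        store.setdefault(digest, block)
        manifest.append(digest)
    return store, manifest

data = b"A" * 8192 + b"B" * 4096  # two identical blocks plus one unique block
store, manifest = dedupe_blocks(data)
print(len(manifest), len(store))  # 3 blocks referenced, 2 actually stored
```

In a backup product, each unique block would then be encrypted with a key held in the customer's own escrow before upload, so data at rest in the cloud stays opaque to the vendor.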

Proactively evaluating your endpoint backup processes at least once a year positions your enterprise for quick and total recovery before data loss or breach occurs.