Survey of IT Pros Highlights Lack of Understanding of SaaS Data Loss Risks

April 26, 2016

By Melanie Sommer, Director of Marketing, Spanning by EMC

Recently, Spanning – an EMC company and provider of backup and recovery for SaaS applications – announced the results of a survey* of over 1,000 IT professionals across the U.S. and the U.K. about trends in SaaS data protection. It turns out that IT pros across the pond have the same concerns as here in the U.S.: the survey found that security is the top concern when moving critical applications to the cloud. Specifically, 44 percent of U.S. and U.K. IT pros cited external hacking/data breaches as their top concern, ahead of insider attacks and user error.

But that’s not the most interesting finding, as the survey found that perceived concerns differ from reality when it comes to actual data loss. In total, nearly 80 percent of respondents have experienced data loss in their organizations’ SaaS deployments. Accidental deletion of information was the leading cause of data loss from SaaS applications (43 percent in U.S., 41 percent in U.K.), ahead of data loss caused by malicious insiders and hackers.

While organizations in both the U.S. and U.K. have experienced data loss due to accidental deletions, migration errors (33 percent in the U.S., 31 percent in the U.K.) and accidental overwrites (27 percent in the U.S., 26 percent in the U.K.) also outranked external and insider attacks as leading causes of data loss.

How SaaS Backup and Recovery Helps
As a case in point, consider one serious user error – clicking a malicious link or file and triggering a ransomware attack. If an organization uses cloud-based collaboration tools like Office 365 OneDrive for Business or Google Drive, the impact from a ransomware attack is multiplied at compute speed. How? An infected laptop contains files that automatically sync to the cloud (via Google Drive or OneDrive for Business). Those newly infected files sync, then infect and encrypt other files in every connected system – including those of business partners or customers, whose files and collaboration tools will be similarly compromised.
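To make the sync-propagation risk concrete, here is a minimal sketch (the folder path and thresholds are hypothetical assumptions, and this is not any vendor's product) of a heuristic that flags a ransomware-style burst of file modifications in a locally synced folder before those changes replicate to the cloud:

import os
import time

# Assumptions: the local folder synced to OneDrive/Google Drive, and a
# threshold for "too many files changed too quickly."
SYNC_FOLDER = os.path.expanduser("~/OneDrive")   # hypothetical path
WINDOW_SECONDS = 60
MAX_CHANGES_PER_WINDOW = 200

def recently_modified(root, window_seconds):
    """Return files under root modified within the last window_seconds."""
    cutoff = time.time() - window_seconds
    changed = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) >= cutoff:
                    changed.append(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
    return changed

if __name__ == "__main__":
    changed = recently_modified(SYNC_FOLDER, WINDOW_SECONDS)
    if len(changed) > MAX_CHANGES_PER_WINDOW:
        # A real agent would pause the sync client and alert IT here.
        print(f"WARNING: {len(changed)} files changed in {WINDOW_SECONDS}s - possible ransomware activity before cloud sync.")
    else:
        print(f"{len(changed)} recent changes - within normal range.")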

This is where backup and recovery enters the picture. Nearly half of respondents in the U.S. not already using a cloud-to-cloud backup and recovery solution said that they trust their SaaS providers to manage backup, while the other half rely on manual solutions. In most cases, SaaS providers are not in a position to recover data lost or deleted due to user error, and they cannot blunt the impact of a ransomware attack on their customers. Further, with many organizations relying both on manual backups and on an assumption that none of the admins in charge are malicious, the opportunity for accidental neglect or oversight is too big to ignore. The industry would seem to agree: more than a third of organizations in the U.S. (37 percent) are already using, or plan to use within the next 12 months, a cloud-to-cloud provider for backup and recovery of their SaaS applications.

Since the survey included U.K. respondents, it also gauged sentiment around the rapidly changing data privacy regulations in the EU, specifically in regards to the “E.U.-U.S. Privacy Shield.” Most IT professionals surveyed (66 percent in the U.K., 72 percent in the U.S.) believe that storing data in a primary cloud provider’s EU data center will ensure 100 percent compliance with data and privacy regulations.

These results paint a picture of an industry that is as unsure as it is underprepared: while security is a top concern when moving critical applications to the cloud, most organizations trust the inherent protection of their SaaS applications to keep their data safe, even though the leading cause of data loss is user error, which is not normally covered by native SaaS application backup. The results also show that the concerns influencing cloud adoption have little to do with the real causes of everyday data loss and more to do with a fear of data breaches or hackers.

The takeaway from these survey results: IT pros need a clearer awareness and understanding of where, when, and how critical data can be lost in order to address their cloud adoption concerns, and they need to learn how to minimize the true sources of SaaS data loss risk. To learn more, download the full survey report, or view an infographic outlining the major findings of the survey.

*Survey Methodology
Spanning by EMC commissioned the online survey, which was completed by 1,037 respondents in December 2015. Of the respondents, 537 (52 percent) were based in the United Kingdom, and 500 in the United States (48 percent). A full 100 percent of the respondents “have influence or decision making authority on spending in the IT department” of their organization.
Respondents were asked which of two specific roles applied to them (more than one could apply): “IT Function with Oversight for SaaS Applications” (75 percent U.S., 78 percent U.K., 77 percent overall) and “Line of Business/SaaS application owner” (39 percent U.S., 43 percent U.K., 41 percent overall); the remainder identified as “other.”

Can a CASB Protect You From the Treacherous 12?

April 25, 2016

By Ganesh Kirti, Founder and CTO, Palerra

Many frequently asked questions related to cloud security have included concerns about compliance and insider threats. But lately, a primary question is whether cloud services are falling victim to the same level of external attack as the data center. With Software as a Service (SaaS) becoming the new normal for the corporate workforce, and Infrastructure as a Service (IaaS) on the rise, cloud services now hold mission-critical enterprise data, intellectual property, and other valuable assets. As a result, the cloud is coming under attack, and it’s happening from both inside and outside the organization.

On February 29, the CSA Top Threats Working Group clarified the nature of cloud service attacks in a report titled, “The Treacherous 12: Cloud Computing Top Threats in 2016.” In this report the CSA concludes that although cloud services deliver business-supporting technology more efficiently than ever before, they also bring significant risk.

The CSA suggests that these risks occur in part because enterprise business units often acquire cloud services independently of the IT department, and often without regard for security. In addition, regardless of whether the IT department sanctions new cloud services, the door is wide open for the Treacherous 12.

Because all cloud services (sanctioned or not) present risks, the CSA points out that businesses need to take security policies, processes, and best practices into account. That makes sense, but is it enough?

Gartner predicts that through 2020, 95 percent of cloud security failures will be the customer’s fault. This does not necessarily mean that customers lack security expertise. What it does mean, though, is that it’s no longer sufficient to know how to make decisions about risk mitigation in the cloud. To reliably address cloud security, automation will be key.

Cloud security automation is where Cloud Access Security Brokers (CASBs) come into play. A CASB can help automate visibility, compliance, data security, and threat protection for cloud services. We thought it would be interesting to take a look at how well CASBs in general would fare at helping enterprises survive the treacherous 12.

The good news is that CASBs clearly address nine of the treacherous 12 (along with many other risks not mentioned in the report). These include:

#1 Data breach
#2 Weak ID, credential, and access management
#3 Insecure APIs
#4 System and application vulnerabilities
#5 Account hijacking
#6 Malicious insiders
#7 Advanced persistent threats
#10 Abuse and nefarious use of cloud services
#12 Shared technology issues

There are countless examples of why being protected against the treacherous 12 is important. Some of the more high profile ones:

  • Data breach: In the 2015 Anthem breach, hackers used a third-party cloud service to steal over 80M customer records.
  • Insecure APIs: The mid-2015 IRS breach exposed over 300K records. While that’s a big number, the more interesting one is that it only took 1 vulnerable API to allow the breach to happen.
  • Malicious Insiders: Uber reported that their main database was improperly accessed. The unauthorized individual downloaded 50K names and numbers to a cloud service. Was it their former employee, the current Lyft CTO? That was Uber’s opinion. The DOJ disagreed and a lawsuit ensued.

In each of these cases a CASB could have helped. A CASB can help detect data breaches by monitoring privileged users, encryption policies, and movement of sensitive data. A CASB can also detect unusual activity within cloud services that originate from API calls, and support risk scoring of external APIs and applications based on the activity. And a CASB can spot malicious insiders by monitoring for overly-privileged user accounts as well as user profiles, roles, and privileges that drift from compliant baselines. Finally, a CASB can detect malicious user activity through user behavior analytics.
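As an illustration of the user behavior analytics idea (a simplified sketch, not any particular CASB’s algorithm; the sample data and threshold are assumptions), the following flags a user whose download volume on a given day deviates sharply from that user’s own historical baseline:

from statistics import mean, stdev

def is_anomalous(history_mb, today_mb, sigma=3.0):
    """Flag today's download volume if it exceeds the user's historical
    mean by more than sigma standard deviations."""
    if len(history_mb) < 2:
        return False  # not enough history to establish a baseline
    mu, sd = mean(history_mb), stdev(history_mb)
    return today_mb > mu + sigma * max(sd, 1.0)  # floor sd to avoid zero-variance noise

# Hypothetical usage: 30 days of per-day download volume (MB) for one user
baseline = [120, 95, 140, 110, 130, 105, 150, 100, 125, 115] * 3
print(is_anomalous(baseline, today_mb=135))    # False - a normal day
print(is_anomalous(baseline, today_mb=5200))   # True - unusual bulk download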

What about the three threats that aren’t covered by a CASB? Those include:

#8 Data loss
#9 Insufficient due diligence
#11 Denial of service

The cost of data loss (#8, above) is huge. A now-defunct company named Code Spaces had to shut down when its corporate assets were destroyed, because it did not follow best practices for business continuity and disaster recovery. Data loss prevention is a primary corporate responsibility, and a CASB can’t detect whether it is in place. Insufficient due diligence (#9) is the responsibility of the organization leveraging the cloud service, not the service provider. Executives need a good roadmap and checklist for due diligence. A CASB can provide advice, but it doesn’t automate the process. Finally, denial of service (DoS, #11, above) attacks are intended to take the provider down. It is the provider’s responsibility to take precautions to mitigate DoS attacks.

For a quick reference guide to the question, “Can a CASB protect you from the 2016 treacherous 12?,” download this infographic.

To learn more, join Palerra CTO Ganesh Kirti and CSA Executive VP of Research J.R. Santos as they discuss “CASBs and the Treacherous 12 Top Cloud Threats” on April 25, 2-3pm EDT. Register for the webinar now.

The Panama Papers, Mossack Fonseca and Security Fundamentals

April 21, 2016

By Matt Wilgus, Practice Director, Schellman

The release of details contained in the Panama Papers will be one of the biggest news stories of the year. The number of high-profile individuals implicated will continue to grow as teams comb through the 11.5 million documents leaked from Mossack Fonseca, a Panamanian law firm. While the news headlines will focus mainly on world leaders, athletes and the well-to-do, the overview from The International Consortium of Investigative Journalists (ICIJ) gets into additional details. This overview is worth reading to understand what services the firm provided, who uses the services, how they can be used legally and how they can be abused.

The overview seems like something out of a John Grisham book. In fact, some of the information being released is similar to the plot of a book he wrote 25 years ago. In 1991, John Grisham published “The Firm”, a novel that revolves around several lawyers working for the fictional law firm Bendini, Lambert and Locke. The similarities between the book and today include a law firm that primarily exists to assist money laundering and tax evasion, a plot that turns on the details of many transactions recovered from thousands of documents, and a whistleblower. The fictional firm also provided services to legitimate clients, although in the book that number is about 25 percent. It is unknown what percentage of Mossack Fonseca clients were legitimate and how many would be described as Ponzi schemers, drug kingpins and tax evaders, as the ICIJ overview mentions. While the novel is fiction, the book sets the stage as something that has been seen before.

Whether the leak started from an external breach of systems or an intentional leak from an insider, it is always intriguing to know how it occurred and what could have been done. Did it start with a phishing email, a rogue employee, a web application flaw, etc.? Forbes reported that the client portal server was running Drupal 7.23, which was susceptible to a SQL injection vulnerability announced in October 2014. There were many reports of exploitation of this vulnerability in the days after it was announced, so it is likely someone took advantage of the exploit. The team responsible for Wordfence, a popular WordPress security plugin, provided another possible exploitation scenario related to upload functionality in the Revolution Slider plugin. These are just some of the potential means that could have caused a breach at Mossack Fonseca. Other possibilities include scenarios related to weaknesses in the email server and a lack of encryption in transit. Mossack Fonseca does have a Data Security page on its site, although it primarily touts SSL and the fact that the firm houses all of its servers in-house as its primary security measures.

In 2011, I wrote a post on how the legal profession was an easy target for breaches. Looking back, I realize that technology has changed, but in many ways the weaknesses are likely to stay the same. One of the biggest changes to note since 2011 is the number of online applications law firms have now. This isn’t just the top 100 law firms; this includes smaller regional firms as well. In addition to the main corporate web site and an area to share documents (or client portal), which are now offerings that appear much more prevalent across firms of all sizes, firms have blog sites, premium service offerings, extranets and even applications that provide a gateway into all the other online applications. More applications means a larger attack surface. Unlike Mossack Fonseca, which claims it hosted everything internally, many law firms we see do use third-party SaaS offerings to handle some of these functions. Outsourcing to a third party which specializes in providing a particular service can often provide better security than a firm can provide in-house.

Given Mossack Fonseca’s focus on company formation, minimizing tax burdens, Private Interest Foundations and the like, the firm could easily have been a target, given the recent groundswell of activism against tax avoidance and income inequality. While the lapse in security at Mossack Fonseca may not be representative of security at all law firms, the details surrounding its environment point to likely weaknesses in people, processes and technology which could exist in any organization.

  • People – Given what we know about potential vulnerabilities in the environment and the exfiltration of data, we can surmise that someone was not paying attention for an extended period of time. There are many security roles in an organization including, but not limited to, policy development, administration and monitoring. In some environments one person may be responsible for many roles, and in some cases not all responsibilities can be met. This may be because no one was given the role or because the person who was given the responsibility left the organization. A recent search of LinkedIn did not turn up many IT-related profiles with Mossack Fonseca as a current or previous employer, although this doesn’t necessarily mean these individuals do not exist. Contractors may also have performed the role. That said, a third party could have been hired for a given job, say deploying the client portal, but perhaps was not responsible for post-implementation support.
  • Process – Being notified of vulnerabilities in the software supporting the organization is paramount to understanding where risks exist. Knowing what data is leaving the environment is also critical. The likelihood that either of these was occurring is low, and even if one was, there wasn’t necessarily anyone to act on it in a timely fashion.
  • Technology – A breakdown in people and processes can occasionally be mitigated by technology. The WordPress and Drupal sites are now protected by a third-party security provider, but other sites likely are not. An up-to-date intrusion detection system (IDS) may have detected some of the threats the organization faced, or activities that occurred, although there were several potential avenues of exploit, so one or another would likely have remained open. For an organization that appears to have missed some fundamental security concerns, it may have used technology to secure some data, as there is a site named crypt.mossfon.com, which is still up.

The Panama Papers incident may once again raise awareness around data security with legal firms. Organizations performing support services to legal firms, such as eDiscovery and Case Management providers, may also want to take note. Mossack Fonseca has a link on their page for ISO Certifications. However, the only one listed is ISO 9001:2008. An ISO 27001 assessment, or certification, may not have prevented the leak, but it would have demonstrated greater consideration of security on the part of Mossack Fonseca. A penetration test would also have been beneficial, although given the vulnerabilities that existed even a vulnerability scan would have detected some of the issues.

With most data breaches, the actual data on the people and companies is less interesting (albeit potentially more valuable) than the way in which the breach occurred or the attacker persisted in the attack. As it relates to the Panama Papers, it is the opposite. The forthcoming details related to various individuals, their transactions, and the potential future tax and privacy implications are far more interesting to the public than the means whereby the exfiltration actually occurred. That said, taking a few minutes to understand how it happened and what we can learn can be a worthwhile step in preventing future breaches.

May the Fourth Be with EU

April 20, 2016

Data Privacy Gets a Stronger Light Saber

By Nigel Hawthorn, EMEA Marketing Director, Skyhigh Networks

On April 14, 2016, the EU Parliament passed the long-awaited new EU rules for personal data protection (GDPR). Everyone who holds or processes data on individuals in the 28 countries of the EU has until Star Wars Day 2018 (May 4) to comply.

The top 10 provisions of the regulation are:

  1. It is a global law. No matter where you are in the world, if you have data on individuals in the EU and lose it, you are responsible and can be fined. For example, if you run a website and an EU resident enters their contact information, you have to comply.
  2. Increased fines. Up to 4% of global turnover or €20,000,000 (US$22M), whichever is greater (a quick illustration follows this list).
  3. Opt-in regulations. Users must give clear consent to opt-in to their data being collected and you must only use it for the purpose defined. No opting out, no hidden terms, no selling/giving data to other people.
  4. Breach notification. If you lose data, you have 72 hours to tell the authorities.
  5. Joint liability. If multiple companies process the data, they are all liable if data is lost, so if you hold data YOU are responsible if data gets lost via a risky cloud service.
  6. User rights. Users can demand their data back and require that it be updated or deleted. If you hold data, you need to work out how to fulfil those requests.
  7. Removes ambiguity. One law across all 28 countries of the EU.
  8. Common enforcement. The authorities are expected to enforce consistently across all the countries; the good news is that data holders only need to deal with one authority.
  9. Collective redress. Users whose data is lost can join together in class-action lawsuits.
  10. Data transfer. Data transfer from the EU is allowed, but subject to strict conditions.
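As a rough illustration of provision 2 (assuming, as the regulation provides, that the higher of the two amounts applies; the turnover figures below are hypothetical):

def max_gdpr_fine(global_turnover_eur):
    """Upper bound of a GDPR fine: 4% of global annual turnover
    or EUR 20 million, whichever is greater."""
    return max(0.04 * global_turnover_eur, 20_000_000)

# Hypothetical turnovers
print(max_gdpr_fine(100_000_000))    # 20,000,000 - the EUR 20M floor applies
print(max_gdpr_fine(2_000_000_000))  # 80,000,000 - 4% of turnover applies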

If you work for a company collecting data, you are responsible for the security of that data no matter where it gets processed. It’s more important than ever that you know the shadow IT services that employees may be using, as they could be the conduit for data loss and your organisation will be liable.

There’s some good news for IT in the regulation – the new rules encourage privacy-friendly techniques such as pseudonymisation, anonymisation, encryption and data protection by design and by default. So capabilities such as encrypting data before it is uploaded to the cloud, especially when combined with keeping the encryption keys on premises, can reduce your liabilities.

This is good news for EU citizens, as they will have strong and clear rights over their personal data, its collection, processing and security.

Some organizations have in the past treated personal data as a cheap commodity but this regulation clearly shows how valuable data really is and demands that they treat it with great respect.

We should all put a value on data about ourselves and our families and embrace this legislation because the outcome is that all of our data will be safer.

WP29: Thumbs Down to Draft EU-US Privacy Shield

April 20, 2016

By  Françoise Gilbert,Global Privacy and Cybersecurity Attorney, Greenberg Traurig

In a 58-page opinion published April 13, 2016, the influential European Union Article 29 Working Party (WP29), which includes representatives of the data protection authorities of the 28 EU Member States, expressed significant concerns with respect to the terms of the proposed EU-US Privacy Shield that is intended to replace the EU-US Safe Harbor.

The WP29 made numerous critiques of the proposed EU-US Privacy Shield framework, including, for example, the lack of consistency between the principles set forth in the Privacy Shield documents and the fundamental EU data protection principles outlined in the 1995 EU Data Protection Directive, the proposed EU General Data Protection Regulation, and related documents.

The WP29 group also requested clearer restrictions for the onward transfer of personal information that occurs after personal data of EU residents is transferred to the US. The WP29 is especially concerned with the subsequent transfer of data to a third country, outside the United States. In addition, the WP29 continues to be concerned about the effect, scope, and effectiveness of the measures proposed to address activities of law enforcement and intelligence agencies, often described as a “massive collection” of data.

Background
On Feb. 29, 2016, the European Commission and U.S. Department of Commerce published a series of documents intended to constitute a new framework for transatlantic exchanges of personal data for commercial purposes, to be named the EU-U.S. Privacy Shield. The Privacy Shield would replace the EU-US Safe Harbor, which was invalidated by the Court of Justice of the European Union (CJEU) in October 2015, in the Schrems case.

Since the publication of the draft Privacy Shield documents, the WP29 members have convened in a series of meetings over the course of the past six weeks in order to evaluate these documents and come to a common position.

The results of this six-week evaluation were expressed in an opinion entitled “Opinion 01/2016 on the EU-US Privacy Shield Draft Adequacy Decision – WP 238,” published on April 13, 2016. The 58-page document, which is well-drafted and thoughtful, contains numerous positive comments about the efforts of the EU and US in trying to design a framework that would adhere to the two-page guidance published at the end of January, which outlined the key aspects of the proposed cross-Atlantic framework.

The document also expressed a wide variety of concerns with respect to the proposed EU-US Privacy Shield. The WP29 group was concerned by: (i) the commercial provisions (which address issues similar to those addressed in the Safe Harbor principles); (ii) the surveillance aspects (specifically, the possible derogations to the principles of the Privacy Shield for national security, law enforcement, and public interests purposes); as well as, (iii) the proposed joint review mechanism.

Commercial Aspects
Consistency with Data Protection Principles
The WP29 indicated in its Opinion that its key objective is to make sure that the Privacy Shield would offer an equivalent level of protection for individuals when personal data is processed. The WP29 believes that some key EU data protection principles are not reflected in the draft documents, or have been inadequately substituted by alternative notions.

While it does not expect the Privacy Shield to be a mere and exhaustive copy of the EU legal framework, the WP29 stressed that the Privacy Shield should contain the substance of the fundamental principles in effect in the European Union, so that it can ensure an “essentially equivalent” level of protection. To this point, WP29 explains that the data retention principle is not expressly mentioned and there is no wording on the protection that should be afforded against automated individual decisions based solely on automated processing. The application of the purpose limitation principle to data processing is also unclear.

Onward Transfers
The WP29 paid special attention to onward transfers, an issue that was key to the Safe Harbor decision. It believes that the Privacy Shield provisions addressing onward transfers of EU personal data are insufficiently framed, especially regarding their scope, the limitation of their purpose, and the guarantees applying to transfers to Agents.

The WP29 noted that since the Privacy Shield would be used to address onward transfers from a Privacy Shield entity located in the US to third country recipients, it should provide the same level of protection on all aspects of the Shield, including national security. In case of an onward transfer to a third country, every Privacy Shield organization should have the obligation to assess any mandatory requirements of the third country’s national legislation applicable to the data importer before making the transfer.

Recourse Mechanisms
Finally, although the WP29 notes the additional recourses made available to individuals to exercise their rights, it is concerned that the new redress mechanism may prove too complex in practice and too difficult for EU individuals to use, and therefore ineffective. It therefore stresses the need for further clarification of the various recourse procedures; in particular, the WP29 suggests that EU data protection authorities, where willing, could be considered a natural contact point for EU individuals involved in these complex redress procedures and could have the option to act on their behalf.

National Security
Derogations for National Security Purposes
The WP29 observed that the draft EU Commission Adequacy Decision extensively addresses the possible access to data processed under the Privacy Shield for purposes of national security and law enforcement. It also notes that the US Administration, in Annex VI of the documents, also provides for increased transparency on the legislation applicable to intelligence data collection.

Massive Collection
Regarding the massive collection of information, the WP29 notes that the representations of the U.S. Office of the Director of National Intelligence (ODNI) do not exclude massive and indiscriminate collection of personal data originating from the EU. This brings concerns for the protection of the fundamental rights to privacy and data protection. The WP29 pointed to other resources for clarification on this point, such as the forthcoming rulings of the CJEU in cases regarding massive and indiscriminate data collection.

Redress
Concerning redress, the WP29 welcomes the establishment of an Ombudsperson as a new redress mechanism. Concurrently, it expressed its concern that this new institution might not be sufficiently independent, might not be vested with adequate powers to effectively exercise its duty, and does not guarantee a satisfactory remedy in case of disagreement.

Annual Joint Review
Regarding the proposed Annual Joint Review mechanism mentioned in the Privacy Shield framework, the WP29 noted that the Joint Review is a key factor to the credibility of the Privacy Shield. It points out, however, that the specific modalities for operations, such as the resulting report, its publicity, and the possible consequences, as well as the financing, need to be agreed upon well in advance of the first review.

Drafting Deficiencies
Consistency with the General Data Protection Regulation
The WP29 notes that the Privacy Shield needs to be consistent with the EU data protection legal framework, in both scope and terminology. It suggests that a review should be undertaken shortly after the entry into application of the General Data Protection Regulation (GDPR), to ensure that the higher level of data protection offered by the GDPR is followed in the adequacy decision and its annexes.

Structure and Content
Regarding the structure and content of the documents, the WP29 noted that the complexity of the structure of the documents that constitute the Privacy Shield makes them difficult to understand. It is also concerned that the lack of clarity in the new framework might make it difficult for data subjects, organizations, and even data protection authorities to comprehend. In addition, it notes occasional inconsistencies within the 110 pages that form the current draft of the Privacy Shield framework. The WP29 urges the Commission to make the documents clearer and more understandable for both sides of the Atlantic.

Conclusion
In its 58-page opinion, the WP29 made great efforts to point to the improvements brought by the Privacy Shield compared to the Safe Harbor decision. However, overall, the evaluation of the 110-page proposed Privacy Shield framework is generally negative. The WP29 appears to doubt that the protection that would be offered under the Privacy Shield would be equivalent to that of the EU. The extent to which the EU Commission will be able to address these concerns, identify appropriate solutions and provide the requested clarifications in order to improve the proposed documents remains to be seen.

Six months after the CJEU invalidated the EU Commission decision that had created the EU-US Safe Harbor, it seems that cross-Atlantic data transfers are still in limbo. There is still no simple, business-friendly solution to addressing the stringent prohibition against cross-border data transfers between EU/EEA entities and US-based companies. The viability of the Privacy Shield remains in question. With the negative opinion issued by the WP29, a very influential body of the European Union, it is uncertain whether and when a stable and final draft will be completed. Assuming the framework reaches a form that is satisfactory to both sides, it would then need to be implemented. At a minimum, a new infrastructure, a website, and additional personnel will be needed to make it operational—these are all things that take even more time.

In the meantime, US companies that built their operations and business models around the simple and easy to use EU-US Safe Harbor should review the legality of their cross border data transfers with their counsel. With no light at the end of the tunnel, it is urgent that they evaluate and implement means to address the stringent restriction against cross border data transfers in effect in the European Union and European Economic Area, and that they understand and address the needs of their counterparts in the EU/EEA region in order to minimize the risk of enforcement action against the European entities.

BYOD Stalled? Three Tips to Get It Going

April 19, 2016

By Susan Richardson, Manager/Content Strategy, Code42

Despite some surveys that say Bring Your Own Device (BYOD) is growing, the CyberEdge Group’s recently released 2016 Cyberthreat Defense Report found that enterprise BYOD programs have stalled. Only one-third of respondents this year had implemented a BYOD policy—the same as two years ago. And 20 percent still have no plans to add one.

The delay in leveraging BYOD programs may be because organizations find them harder to establish, manage and secure than first thought. But the lack of an official policy doesn’t mean employees aren’t plugging their unapproved devices into the network. A Gartner survey found that 45 percent of workers use a personal device for work without their employer’s knowledge.

So here are answers to three key BYOD sticking points, to help organizations get unstuck and leverage the increased productivity gains BYOD can bring:

Q: How do we separate corporate and personal data on a device?
A: Containerization.

Most mobile device management (MDM) programs today allow you to separate the corporate workspace from the personal workspace on mobile devices. Containerization, also known as sandboxing, helps reduce the number of policies required to effectively manage mobile risks. It can also assuage employee fears that if they’re terminated or report a device missing, you’ll wipe away the entire contents of their device—including personal data like photographs and emails.

Q: How do we keep tabs on all that roaming mobile data?
A: With a comprehensive cloud endpoint backup system.

Modern cloud endpoint backup solutions serve as the new data guardian, continuously and automatically moving data from a device to the cloud and back again to a new machine whenever it’s needed. It protects enterprise data by continuously backing up every change and deletion. The best endpoint backup systems also give IT a comprehensive, single point of aggregation and control. You can see what’s on your network, how each device is configured, how it interacts with your environment, as well as where and when data was created, if it’s been altered, and who changed it. This happens whenever the machine is connected to the Internet, without prompting the user to engage with it, all while running seamlessly and silently in the background.
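To illustrate the continuous-backup idea at toy scale (a minimal sketch with hypothetical paths and polling interval, not how any commercial endpoint backup product is built), the following loop hashes files in a watched folder and copies anything new or changed into a versioned backup directory:

import hashlib
import os
import shutil
import time

WATCH_DIR = os.path.expanduser("~/Documents")          # hypothetical path
BACKUP_DIR = os.path.expanduser("~/backup_versions")   # hypothetical path
POLL_SECONDS = 30

def file_hash(path):
    """Return the SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def backup_changed(seen):
    """Copy any new or modified file to a timestamped location in BACKUP_DIR."""
    for dirpath, _dirs, files in os.walk(WATCH_DIR):
        for name in files:
            src = os.path.join(dirpath, name)
            try:
                digest = file_hash(src)
            except OSError:
                continue  # unreadable file; skip it
            if seen.get(src) != digest:
                seen[src] = digest
                stamp = time.strftime("%Y%m%dT%H%M%S")
                os.makedirs(BACKUP_DIR, exist_ok=True)
                shutil.copy2(src, os.path.join(BACKUP_DIR, f"{name}.{stamp}"))  # keep every version
    return seen

if __name__ == "__main__":
    state = {}
    while True:  # a real agent runs silently in the background like this loop
        state = backup_changed(state)
        time.sleep(POLL_SECONDS)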

Q: Who pays and how?
A: You, the enterprise, by automating reimbursement.

With California leading the way, BYOD reimbursement won’t just be the ethical thing to do; it will be legally required under fair labor laws. But manually managing reimbursement via expense reports is archaic and expensive. It can cost $15 to $20 per expense report in internal labor, because so many different departments have to touch the report, from accounts payable to finance to IT. Instead, follow Intel’s example and automate reimbursement by setting up corporate-funded plans with mobile providers. That way, your company takes care of the bill and can negotiate corporate discounts with providers.

To get started developing a BYOD strategy, download this BYOD checklist.

Panama Papers Expose Data Security Deficiencies in Law Firms

April 12, 2016

By Rick Orloff, Chief Security Officer, Code42

The unprecedented leak of 11.5 million files from the database of the world’s fourth biggest offshore law firm is riveting. As details continue to emerge about the Panama Papers leak, the money laundering and secretive tax regimes and high-profile clientele make for a juicy story. But from an enterprise data security perspective, here at Code42 we’re shaking our heads.

It’s hard to imagine a situation where the stakes for data protection could be higher. This is an organization whose entire “empire” is built on “secret” data. And it was an all-or-nothing game: Mossack Fonseca will likely never recover to earn the trust of a future client—tax evader or otherwise. If there ever was an organization that warranted exceptional network security tools and data security measures, Mossack Fonseca was it.

A data security wake-up call for honest law firms everywhere
If a massive international law firm dealing exclusively in extremely sensitive data is this easily hacked, how vulnerable is your average, above board law firm?

According to the statistics, the answer is “very.” John McAfee penned an article for Business Insider in which he concludes that “law firms are easy pickings for hackers.” Bloomberg found that 80 percent of large U.S. law firms were hacked in 2015. Even more alarming, in the 2015 ABA Technology Survey, 23 percent of firms surveyed said they “don’t know” if they’ve experienced a breach, and only 10 percent have any sort of cyber liability coverage. For a cohort that knows a thing or two about liability lawsuits—and certainly knows that “ignorance of the law” is a poor defense—this is surprising.

Data protection is a high-stakes game for every law firm
And while a data breach at your average law-abiding law firm isn’t likely to result in indictments for fraud, the stakes are still extremely high. “The implications of law firm breaches are mind boggling,” Philip Lieberman, president of Lieberman Software, told Computer Business Review.

Most clearly, a firm stands to destroy every shred of trust with its clients—a reputation bomb that will be tough to recover from. In many cases, a leak could compromise legal proceedings and eliminate advantages by placing litigation strategy and privileged information out in the open.

Even if a firm’s clients and reputation escape unscathed, data loss of any kind can trigger significant financial impact. A damaged laptop, or ransomware that holds data hostage, can leave an associate without access to critical information. The loss of billable hours quickly adds up. Add to that breach reporting requirements and potential fines, and the ROI of modern enterprise data security tools is easily apparent.

It will be interesting to watch the continued fallout from the Panama Papers, and we’re happy to count this as a win for the “good guys.” But as it dominates headlines and newsfeeds, we hope it’s also a major reminder for law firms—and enterprises in every industry—to re-examine what they’re doing to protect their data.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

CSA Releases New White Paper on Current Cloud Certification Challenges Ahead and Proposed Solutions

April 11, 2016

By Daniele Catteddu, Chief Technology Officer, Cloud Security Alliance

Today, the Cloud Security Alliance has released the CSA STAR Program & Open Certification Framework in 2016 and Beyond, an important new whitepaper that has been created to provide the security community with a description of some of the key security certification challenges and how the CSA intends to address them moving forward.

As background: launched in 2011, the CSA’s Security, Trust and Assurance Registry (STAR) program has become the industry’s leading trust mark for cloud security, with the objective of improving trust in the cloud market by offering increased transparency and information security assurance. The Open Certification Framework (OCF), also developed by the CSA, is an industry initiative to allow global, accredited, trusted certification of cloud providers. It allows for flexible, incremental and multi-layered cloud service provider (CSP) certifications according to the CSA’s industry-leading security guidance.

Together the OCF/STAR program comprises a global cloud computing assurance framework with a scope of capabilities, flexibility of execution, and completeness of vision that far exceeds the risk and compliance objectives of other security audit and certification programs.

Since the launch of STAR, the cloud market has evolved and matured, and so has the cloud audit and certification landscape, with more than fifteen options now available, including national, regional and global, sector-specific, cloud-specific and generic certification schemes. This proliferation has resulted in, among other things, a barrier to entry for CSPs that cannot afford to get certified by multiple countries and organizations.

Aside from the time and cost of pursuing and maintaining these numerous certifications, there are a number of other concerns, including:

  • Lack of means to provide higher level of assurance and transparency
  • Privacy not adequately taken into account
  • Limited transparency
  • Lack of means to streamline GRC

To address these certification challenges, the CSA is proposing, through the OCF, to offer the cloud community both a global recognition scheme for security and privacy certification and a set of GRC tools and practices that address the many complex assurance and transparency requirements of cloud stakeholders.

The three core ideas behind the CSA suggested solutions are that an effective and efficient approach to trust and assurance has to:

  • delicately balance the need of nations and business sectors to develop their specific certification schemes with the need of CSPs to reduce compliance costs
  • avoid having humans (auditors) perform activities that can be done by machines (e.g. collecting data)
  • make sure that accurate and reliable evidence/information is provided to the relevant people, in a timely fashion, leveraging automated means as much as possible

The paper also outlines how a number of other frameworks and controls should play a part in this solution including:

  • Leveraging CCM and OCF/STAR as normalizing factors
  • Conducting continuous monitoring/auditing
  • Integrating the Privacy Level Agreement (PLA) code of conduct into the STAR Program

The CSA is currently seeking validation of its proposed OCF/STAR program action plan, along with input and support from the CSA community. To become involved, visit the Open Certification Working Group.

How CASB Is Different from Web Proxy / Firewall

April 8, 2016

By Cameron Coles, Sr. Product Marketing Manager, Skyhigh Networks

A common question that arises as IT teams begin to look at cloud access security broker (CASB) products goes something like, “we already have a web proxy and/or firewall, how is this different?” or “does CASB replace my web proxy / firewall?” These are natural questions because web proxies and firewalls have visibility into all traffic over the corporate network including traffic to and from cloud services. However, there are significant differences between existing network security solutions and a CASB. Let’s first dispel a major misconception: a CASB is not a replacement for existing network security tools, and vice versa.

[CASBs] deliver capabilities that are differentiated and generally unavailable today in security controls such as Web application firewalls (WAFs), secure Web gateways (SWGs) and enterprise firewalls – Gartner Market Guide for Cloud Access Security Brokers, Craig Lawson, Neil MacDonald, Brian Lowans [Oct. 22, 2015]

CASB is a separate and differentiated market from proxies and firewalls. While CASBs can be deployed in forward or reverse proxy mode to enforce inline controls, the similarities to web proxies stop there. Unlike network security solutions that focus on a wide variety of inbound threats and filtering for millions of potentially illicit websites, a CASB is focused on deep visibility into, and granular controls for, cloud usage. A CASB can also be deployed in an API mode to scan data at rest in cloud services and enforce policies across this data. Here are some of the high-level functions of a CASB not available in existing network security solutions:

  • Provide a detailed, independent risk assessment for each cloud service (e.g. compliance certifications, recent data breaches, security controls, legal jurisdiction).
  • Enforce risk-based policies (e.g. block access to all high-risk file sharing services and display a real-time coaching message directing users to a company-approved service; a simple sketch of such a policy follows this list).
  • Control access to individual user actions based on context (e.g. prevent users from downloading reports to unmanaged devices on remote networks).
  • Enforce data-centric security policies (e.g. encrypting data as it is uploaded to the cloud or applying rights management protection to sensitive data on download).
  • Apply machine learning to detect threats (e.g. an IT user downloading an unusual volume of sensitive data and uploading it to a personal account in another cloud app).
  • Respond to cloud-based threats in real time (e.g. terminating account access in the face of an insider threat or requiring additional authentication factors to continue using a cloud service in the face of a compromised account).
  • Enforce policies for data at rest in the cloud (e.g. revoking sharing permissions on files shared with a business partner or retroactively encrypting sensitive data).
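To show roughly what the risk-based policy in the second bullet might look like (a simplified sketch; the registry entries, hostnames, and risk threshold are hypothetical, and real CASB registries track far richer attributes):

# Hypothetical registry entries; a real CASB maintains detailed, independently
# researched risk attributes for tens of thousands of services.
REGISTRY = {
    "approvedshare.example.com": {"category": "file_sharing", "risk": 2},
    "riskyshare.example.net":    {"category": "file_sharing", "risk": 9},
    "crm.example.org":           {"category": "crm",          "risk": 3},
}

def decide(host):
    """Return an action for a cloud request: allow, or block with a coaching message."""
    svc = REGISTRY.get(host)
    if svc is None:
        return "allow"  # unknown host; a real policy might quarantine instead
    if svc["category"] == "file_sharing" and svc["risk"] >= 8:
        return ("block", "This service is high risk. Please use https://approvedshare.example.com instead.")
    return "allow"

print(decide("riskyshare.example.net"))  # blocked with coaching message
print(decide("crm.example.org"))         # allowed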

Cloud-related functions of web proxies / firewalls
Web proxies and firewalls offer broad protection against network threats and, as part of this protection, they do offer some limited visibility into cloud usage, even without integrating to a CASB. For example, although these solutions may have difficulty mapping URLs users access to cloud services, they track cloud access over the corporate network. Some customers use their network security solutions to terminate SSL and inspect content for malware. Proxies and firewalls also bucket cloud services into high-level categories (e.g. Technology/Internet, Business/Economy, Suspicious); however, these categories generally do not reflect the underlying function of the service such as file sharing, CRM, or social media.

One of the primary use cases of network security solutions is categorizing and controlling access to millions of illicit websites that contain pornography, drugs, gambling, etc. Web proxies can redirect access attempts to specific URLs to an alternate webpage hosting a notification that the URL was blocked. Similarly, firewalls can be configured to block access to specific IP addresses. Both solutions lack detailed and up-to-date cloud registries with cloud service URLs and IP addresses to extend this access control functionality to cloud services. Enterprises often find that while they may have initially blocked a cloud service, cloud providers routinely introduce new URLs and IPs that are not blocked. This results in the widespread phenomenon of “proxy leakage,” in which employees regularly access cloud services that IT intends to block.
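A simplified illustration of how proxy leakage can be surfaced (the blocked domains, registry domains, and observed hosts below are all hypothetical): compare hosts seen in egress traffic against the full set of domains a cloud registry attributes to services IT intended to block.

# Assumed inputs: the domains IT configured to block, the (larger) set of
# domains a cloud registry attributes to those same services, and hosts
# actually observed in proxy logs.
blocked_in_proxy = {"riskyshare.example.net"}
registry_domains_for_blocked_services = {
    "riskyshare.example.net",
    "cdn.riskyshare-usercontent.example",   # new upload endpoint the proxy missed
    "api.riskyshare.example.net",
}
observed_hosts = {"api.riskyshare.example.net", "mail.example.org"}

leakage = observed_hosts & (registry_domains_for_blocked_services - blocked_in_proxy)
for host in sorted(leakage):
    print(f"Proxy leakage: traffic to {host} for a service IT intended to block")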

The focus on IP reputation is also not directly applicable to cloud services. A cloud service may have a high IP reputation, but due to its security controls, or lack thereof, it may also be unsuitable to store corporate data. For example, take a file sharing service with a good IP reputation that allows anonymous use, shares customer data with third parties, is hosted in a privacy-unfriendly country, and experienced a password breach three months ago. Few IT leaders would want sensitive corporate data uploaded to this service. Without a registry of these attributes, network security solutions are unable to enforce risk-based policies. Moreover, since many cloud services do not use standard content disposition headers, network security solutions are unable to enforce data loss prevention (DLP) policies to prevent the upload of sensitive data.

How CASB integrates with web proxies / firewalls
CASB is a complementary technology to web proxies and firewalls. By integrating with these solutions, a CASB can leverage existing network infrastructure to gain visibility into cloud usage. Simultaneously, a CASB enhances the value of these investments by making them cloud-aware. There are three primary methods a CASB uses to integrate with network security solutions: log collection, packet capture, and proxy chaining.

Log Collection
Web proxies and firewalls capture data about cloud usage occurring over the network, but they may not differentiate cloud usage from general Internet usage. A CASB can ingest log files from these solutions and reveal which cloud services are in use by which users, data volumes uploaded to and downloaded from the cloud, and the risk and category of each cloud service. In effect, a CASB makes existing infrastructure cloud-aware. CASBs detect enforcement gaps in existing egress infrastructure and can push access policies with up-to-date cloud service URLs to close those gaps. For customers that terminate SSL, a CASB can also gather additional detail from these logs on the actions users take within cloud services. Using machine learning, a CASB can detect malware or botnets using the cloud as a vector for data exfiltration.
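A toy version of the log-collection step (the log format and field order are assumptions; real proxy log schemas vary, and CASBs parse many of them): aggregate bytes uploaded and downloaded per user and cloud service from a web proxy log.

import csv
from collections import defaultdict
from io import StringIO

# Assumed log format: user, destination host, bytes_out (uploaded), bytes_in
SAMPLE_LOG = """alice,approvedshare.example.com,524288,1024
bob,riskyshare.example.net,10485760,2048
alice,crm.example.org,2048,65536
"""

usage = defaultdict(lambda: {"up": 0, "down": 0})
for user, host, bytes_out, bytes_in in csv.reader(StringIO(SAMPLE_LOG)):
    key = (user, host)
    usage[key]["up"] += int(bytes_out)
    usage[key]["down"] += int(bytes_in)

for (user, host), totals in sorted(usage.items()):
    print(f"{user:8s} {host:30s} up={totals['up']:>10d}  down={totals['down']:>10d}")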


Packet Capture
In the packet capture deployment mode, a CASB ingests a feed of traffic from existing network security solutions to gain visibility into the content of data. For example, a CASB can integrate with a web proxy via ICAP. The web proxy is configured to copy and forward cloud traffic to the CASB to evaluate data loss prevention (DLP) policies in a monitor-only configuration. Many cloud services use custom content disposition headers in an effort to improve the performance of their applications. These custom headers have the unintended side effect of preventing network security solutions (and on-premises DLP solutions that integrate to them via ICAP) from inspecting content for DLP. CASBs leverage detailed cloud service signatures to inspect cloud traffic, evaluate DLP policies, and generate alerts for DLP policy violations.
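In this mode, the CASB’s task reduces to inspecting the forwarded content for policy violations. A deliberately naive DLP check (illustrative pattern matching only; production DLP engines use validated patterns, checksums such as Luhn, proximity rules, and document fingerprinting) might look like this:

import re

# Naive illustration only; the payload below is fabricated sample text.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){15}\d\b")
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def dlp_violations(payload: str):
    """Return a list of policy names that the uploaded content appears to violate."""
    hits = []
    if CARD_PATTERN.search(payload):
        hits.append("possible payment card number")
    if SSN_PATTERN.search(payload):
        hits.append("possible US Social Security number")
    return hits

upload = "Customer: Jane Doe, card 4111 1111 1111 1111, SSN 123-45-6789"
print(dlp_violations(upload))  # both patterns match; a monitor-mode CASB would raise an alert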


Proxy Chaining
A CASB can be deployed as a forward proxy. Many organizations already have a web proxy, and they do not want to deploy another endpoint agent. In proxy chaining mode, the downstream web proxy is configured to route all cloud traffic through the CASB. In this deployment mode, the CASB can enforce real-time governance and security policies. For instance, a CASB can enforce access control policies limiting specific cloud service functionality and displaying educational messages when a user accesses a service outside of policy with options to notify, justify access, and direct users to approved cloud services. Unlike packet capture, this deployment mode enables a CASB to enforce inline DLP policies to prevent policy violations.
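Because the CASB sits inline in proxy-chaining mode, it can return a decision for each request. A stripped-down sketch of that decision flow (the actions, service names, and URLs are hypothetical; real deployments plug into the proxy chain rather than a function call):

def inline_decision(user, host, action, sanctioned_services, coaching_url):
    """Decide how to handle a cloud request routed through the chained CASB."""
    if host not in sanctioned_services:
        # Redirect the user to an educational page with approved alternatives.
        return ("coach", coaching_url)
    if action == "share_externally":
        # Example of limiting specific functionality within a sanctioned service.
        return ("block", "External sharing is disabled by policy.")
    return ("allow", None)

# Hypothetical usage
sanctioned = {"approvedshare.example.com", "crm.example.org"}
print(inline_decision("alice", "riskyshare.example.net", "upload",
                      sanctioned, "https://intranet.example.com/cloud-policy"))
print(inline_decision("alice", "approvedshare.example.com", "share_externally",
                      sanctioned, "https://intranet.example.com/cloud-policy"))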


Taken together, CASBs enhance the value of investments enterprises have made in network security solutions. Rather than forcing a rip-and-replace of existing solutions, CASBs integrate with and extend their capabilities to the cloud. There are clear differences in the functionality of web proxies / firewalls and CASBs. Neither is a replacement for the other, but together they deliver better visibility into cloud usage and the ability to enforce compliance and governance policies to protect corporate data as it moves to the cloud. To learn more about the cloud access security broker (CASB) market, download a free copy of the latest Gartner report, “How to Evaluate and Operate a Cloud Access Security Broker,” Neil MacDonald, Craig Lawson [Dec. 8, 2015], here.

 

How to Get C-suite Support for Insider Threat Prevention

April 6, 2016

By Susan Richardson, Manager/Content Strategy, Code42

If you’re not getting support and adequate funding from the C-suite to address insider threats, a recent report highlights a powerful persuasive tool you may have overlooked: money—as in fines (cha-ching), lawsuits (cha-ching) and credit monitoring services (cha-ching) you’ll have to pay as the result of a data breach.

The IDC report, “Endpoint Data Protection for Extensible DLP Strategies,” cites two health-care groups that paid six figures each in fines for data breaches as a result of improper employee behaviors. Here are even more powerful examples of the price your organization could pay for not addressing insider data security threats:

Target insider breach costs could reach $1 billion
Target may have skirted an SEC fine, but the retailer is still paying a hefty price because cyber thieves were able to access customer credit card data via a subcontractor’s systems. Breach costs included $10 million to settle a class action lawsuit, $39 million to financial institutions that had to reimburse customers who lost money, and $67 million to Visa for charges it incurred reissuing compromised cards. For 2014, Target had $191 million in breach costs on its books; estimated totals could reach $1 billion after everything shakes out.

AT&T fined $25 million for employee breach
In 2015, AT&T paid a $25 million fine to the Federal Communications Commission after three call center employees sold information about 68,000 customers to a third party. The cyber thieves used the information to unlock customers’ AT&T phones.

On top of the fine, AT&T was required to do things it should have done in the first place:

  • Appoint a senior compliance manager who is a certified privacy professional.
  • Conduct a privacy risk assessment.
  • Implement an information security program.
  • Create a compliance manual and regularly train employees.
  • File regular compliance reports with the FCC.

AvMed paid $3 million in settlement
While the health plan company avoided a HIPAA fine, it paid $3 million in settlements to 460,000 customers whose personal information was on two stolen, unencrypted laptops. On top of that were costs to reimburse customers’ actual monetary losses.

In addition, the company had to:

  • Provide mandatory security awareness and training programs for all company employees.
  • Provide mandatory training on appropriate laptop use and security.
  • Upgrade all company laptops with additional security mechanisms, including GPS tracking technology.
  • Add new password protocols and full-disk encryption technology on all company desktops and laptops so that electronic data stored on the devices would be encrypted at rest.
  • Upgrade physical security to further safeguard workstations from theft.
  • Review and revise written policies and procedures to enhance information security.

The lesson here should be obvious. It’s far cheaper to act now—by implementing available endpoint protection technology and instituting a security-aware culture—than to wait for a breach that forces you into action.

As security expert Philip Lieberman noted in the AT&T case, the penalty cost AT&T much more than the steps it should have taken to prevent the insider breach: “The C-level staff will have to explain this to the board as to why they did not implement a control when the cost would be trivial.”

To learn more about “Endpoint Data Protection for Extensible DLP Strategies” get the IDC analyst report.