May 3, 2016
By Susan Richardson, Manager/Content Strategy, Code42
If you’re among the 28 percent of enterprises that still haven’t implemented a planned endpoint backup system, here are five key attributes to look for in a system to help drive adoption and success. These recommendations are courtesy of Laura DuBois, program vice president at IDC, a global market intelligence provider with 1,500 highly mobile, knowledge-driven employees:
1. Supports Productivity
Look for a lightweight system that doesn’t put a drag on memory, so employees can access data and collaborate quickly. If the system slows people down, they won’t use it.
2. Increases Security
While some people think of endpoint backup primarily for disaster recovery, you should think of it as a data loss prevention tool, too. A good endpoint backup system offers a multi-layered security model that includes transmission security, account security, password security, encryption security (both in transit and at rest) and secure messaging.
3. Offers Intuitive Self-Service
Employees don’t want to wait for IT to recover lost data. An easy-to-use, self-service interface lets employees locate and retrieve their own data. Not only does this help increase adoption, it also cuts down on calls to the IT Help Desk, saving administrative time and money. A survey of Code42 customers found that 36 percent had fewer restore support tickets after installing the CrashPlan endpoint backup system, and 49 percent reduced IT hours spent on data restores.
In fact, for CISOs looking to make the case for an endpoint backup system, DuBois suggests compiling Help Desk volume data and the productivity associated with it.
4. Supports Heterogeneity
DuBois’ research showed that the average corporate employee uses 2.5 devices for work, some company issued and some not. Your endpoint backup system has to accommodate today’s diversity in devices, platforms and network connectivity.
5. Handles the Added Traffic
Some endpoint backup systems can get bogged down when there are many users and not enough network bandwidth. Look for a system that backs up almost continuously, so the processing is spread out rather than taxing the system all at once and slowing it down.
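The "spread out" approach can be sketched in a few lines: rather than copying an entire directory in one heavy pass, the client keeps a record of what it has already backed up and ships only the changes. A minimal, vendor-neutral sketch (the function names and single-directory scope are illustrative, not any product's actual design):

```python
import hashlib
import os
import shutil

def file_digest(path):
    """Content hash used to decide whether a file changed."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def incremental_backup(src_dir, dst_dir, state):
    """Copy only files whose content changed since the last pass.

    `state` maps file names to the digest seen at the previous backup,
    so each pass moves a small increment of data instead of re-copying
    the whole directory at once."""
    copied = []
    os.makedirs(dst_dir, exist_ok=True)
    for name in os.listdir(src_dir):
        src = os.path.join(src_dir, name)
        if not os.path.isfile(src):
            continue
        digest = file_digest(src)
        if state.get(name) != digest:          # new or modified file
            shutil.copy2(src, os.path.join(dst_dir, name))
            state[name] = digest
            copied.append(name)
    return copied
```

Run frequently, each pass touches only the handful of files a user just edited, which is why near-continuous backup avoids the bandwidth spikes of a nightly full sweep.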
To learn more, see DuBois’ webinar, “5 Expert Tips to Drive User Adoption in Endpoint Backup Deployments.”
April 27, 2016
By Rachel Holdgrafer, Business Content Strategist, Code42
The evolution of software has made possible things we once only dreamed of. With software upgrades come new competencies and capabilities, better security, speed, power and, often, disruption. Whenever something new enters an existing ecosystem, it can upset the works.
The cadence of software upgrades in large organizations is typically guided by upgrade policies; the risk of disruption is greater in large organizations—which is the chief reason large companies lag up to two versions behind current software releases. They take a wait-and-see approach, observe how the early adopters fare with software upgrades and adopt as a late majority.
A proper upgrade process involves research, planning and execution. Ask these 10 questions to establish when and why to upgrade:
1. What’s driving the upgrade? Software upgrades addressing known security vulnerabilities are a priority in the enterprise. Usability issues that impact productivity should also be addressed quickly.
2. Who depends on the legacy software? Identifying departments that depend on legacy software allows IT to schedule an upgrade when it has the least impact on productivity.
3. Can the upgrade be scheduled according to our policy? Scheduling upgrades within the standard upgrade cycle minimizes distraction and duplication of effort. Change control policies formalize how products are introduced into the environment and minimize disruption to the enterprise and IT.
4. Is the organization ready for another upgrade? Just because an organization needs a software upgrade doesn’t mean it can sustain that upgrade. Upgrade and patch fatigue are very real. Consider the number of upgrades you’ve deployed in recent months when deciding whether to undertake another one.
5. What is the upgrade going to cost? Licensing costs are only one part of the total cost associated with software upgrades. Services, staff time, impact to other projects, tech support for associated systems and upgrades for systems that no longer work with the new platform must also be included in the total cost.
6. What is the ROI of the upgrade? Software updates that defeat security vulnerabilities are non-negotiable—security itself is the ROI. Non-security related upgrades, however, must demonstrate their value through increased productivity or improved efficiency and reduced costs.
7. How will the customer be impacted? Consider all the ways an upgrade could impact customers and make adjustments before the upgrade begins. Doing so ensures you mitigate any potential issues before they happen.
8. What could go wrong? Since your goal is to increase performance, not diminish it, identify every scenario that could go wrong and draft a contingency plan for each, so you can readily address performance and stability issues should they arise.
9. What level of support does the vendor provide? Once you understand what could go wrong during the upgrade, look into the level of support the vendor provides. Identify gaps in coverage and source outside resources to fill in as needed.
10. What’s your recourse? No one wants to think about it, but sometimes upgrades do more harm than good. In the event something goes wrong and you need to revert to a previous software version, can you?
Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about how a modern endpoint backup solution can simplify software upgrades.
April 26, 2016
By Melanie Sommer, Director of Marketing, Spanning by EMC
Recently, Spanning – an EMC company and provider of backup and recovery for SaaS applications – announced the results of a survey* of over 1,000 IT professionals across the U.S. and the U.K. about trends in SaaS data protection. It turns out that IT pros across the pond have the same concerns as those here in the U.S.: the survey found that security is the top concern when moving critical applications to the cloud. Specifically, 44 percent of U.S. and U.K. IT pros cited external hacking/data breaches as their top concern, ahead of insider attacks and user error.
But that’s not the most interesting finding, as the survey found that perceived concerns differ from reality when it comes to actual data loss. In total, nearly 80 percent of respondents have experienced data loss in their organizations’ SaaS deployments. Accidental deletion of information was the leading cause of data loss from SaaS applications (43 percent in U.S., 41 percent in U.K.), ahead of data loss caused by malicious insiders and hackers.
While organizations in both the U.S. and U.K. have experienced data loss due to accidental deletions, migration errors (33 percent in the U.S., 31 percent in the U.K.) and accidental overwrites (27 percent in the U.S., 26 percent in the U.K.) also outranked external and insider attacks as top causes of data loss.
How SaaS Backup and Recovery Helps
As a case in point, consider one serious user error – clicking a malicious link or file and triggering a ransomware attack. If an organization uses cloud-based collaboration tools like Office 365 OneDrive for Business or Google Drive, the impact of a ransomware attack is multiplied at compute speed. How? An infected laptop contains files that automatically sync to the cloud (via Google Drive or OneDrive for Business). Those newly infected files sync, then infect and encrypt other files in every connected system – including those of business partners or customers, whose files and collaboration tools will be similarly compromised.
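The propagation path described above can be made concrete with a toy model. Nothing here reflects any vendor's actual sync protocol; it simply shows why a faithful sync engine replicates damage as readily as it replicates work, and why a point-in-time backup taken before infection is the recovery path:

```python
def sync(local_files, cloud):
    """One sync pass: the cloud copy converges to the local state,
    whether that state is healthy or encrypted by ransomware."""
    cloud.update(local_files)

def ransomware_encrypt(files):
    """Stand-in for ransomware: replaces every file body with ciphertext."""
    for name in files:
        files[name] = "ENCRYPTED(" + files[name] + ")"

laptop = {"report.docx": "Q1 numbers", "notes.txt": "meeting notes"}
cloud = {}

sync(laptop, cloud)          # healthy files reach the cloud
backup = dict(cloud)         # point-in-time backup taken before infection

ransomware_encrypt(laptop)   # the laptop is infected
sync(laptop, cloud)          # sync faithfully pushes the damage upstream
coworker = dict(cloud)       # every connected client pulls it down

restored = dict(backup)      # only the versioned backup still has clean data
```

Sync keeps every copy identical; only the independent, versioned backup preserves a clean state to restore from.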
This is where backup and recovery enters the picture. Nearly half of respondents in the U.S. not already using a cloud-to-cloud backup and recovery solution said that they trust their SaaS providers to manage backup, while the other half rely on manual solutions. In most cases, SaaS providers are not in a position to recover data lost or deleted through user error, and cannot blunt the impact of a ransomware attack on their customers. Further, with many organizations relying both on manual backups and on the assumption that none of the admins in charge is malicious, the opportunity for accidental neglect or oversight is too big to ignore. The industry would seem to agree: more than a third of organizations in the U.S. (37 percent) are already using or plan to use a cloud-to-cloud backup provider for backup and recovery of their SaaS applications within the next 12 months.
Since the survey included U.K. respondents, it also gauged sentiment around the rapidly changing data privacy regulations in the EU, specifically with regard to the “E.U.-U.S. Privacy Shield.” A majority of the IT professionals surveyed (66 percent in the U.K., 72 percent in the U.S.) believe that storing data in a primary cloud provider’s EU data center will ensure 100 percent compliance with data and privacy regulations.
These results paint a picture of an industry that is as unsure as it is underprepared: while security is a top concern when moving critical applications to the cloud, most organizations trust the inherent protection of their SaaS applications to keep their data safe, even though the leading cause of data loss is user error, which is not normally covered by native SaaS application backup. The results also show that the concerns influencing cloud adoption have little to do with the real causes of everyday data loss and more to do with a fear of data breaches or hackers.
The takeaway from these survey results: more IT pros need an increased awareness and understanding about where, when, and how critical data can be lost to reduce their cloud adoption concerns; and, more IT pros need to learn how to minimize the true sources of SaaS data loss risk. To learn more, download the full survey report, or view an infographic outlining the major findings of the survey.
Spanning by EMC commissioned the online survey, which was completed by 1,037 respondents in December 2015. Of the respondents, 537 (52 percent) were based in the United Kingdom and 500 (48 percent) in the United States. A full 100 percent of the respondents “have influence or decision making authority on spending in the IT department” of their organization.
Respondents were asked to identify with one or both of two specific roles: “IT Function with Oversight for SaaS Applications” (75 percent U.S., 78 percent U.K., 77 percent overall) and “Line of Business/SaaS Application Owner” (39 percent U.S., 43 percent U.K., 41 percent overall); the remainder identified as “other.”
April 25, 2016
By Ganesh Kirti, Founder and CTO, Palerra
Many frequently asked questions related to cloud security have included concerns about compliance and insider threats. But lately, a primary question is whether cloud services are falling victim to the same level of external attack as the data center. With Software as a Service (SaaS) becoming the new normal for the corporate workforce, and Infrastructure as a Service (IaaS) on the rise, cloud services now hold mission-critical enterprise data, intellectual property, and other valuable assets. As a result, the cloud is coming under attack, and it’s happening from both inside and outside the organization.
On February 29, the CSA Top Threats Working Group clarified the nature of cloud service attacks in a report titled, “The Treacherous 12: Cloud Computing Top Threats in 2016.” In this report the CSA concludes that although cloud services deliver business-supporting technology more efficiently than ever before, they also bring significant risk.
The CSA suggests that these risks occur in part because enterprise business units often acquire cloud services independently of the IT department, and often without regard for security. In addition, regardless of whether the IT department sanctions new cloud services, the door is wide open for the Treacherous 12.
Because all cloud services (sanctioned or not) present risks, the CSA points out that businesses need to take security policies, processes, and best practices into account. That makes sense, but is it enough?
Gartner predicts that through 2020, 95 percent of cloud security failures will be the customer’s fault. This does not necessarily mean that customers lack security expertise. What it does mean, though, is that it’s no longer sufficient to know how to make decisions about risk mitigation in the cloud. To reliably address cloud security, automation will be key.
Cloud security automation is where Cloud Access Security Brokers (CASBs) come into play. A CASB can help automate visibility, compliance, data security, and threat protection for cloud services. We thought it would be interesting to take a look at how well CASBs in general would fare at helping enterprises survive the treacherous 12.
The good news is that CASBs clearly address nine of the Treacherous 12 (along with many other risks not mentioned in the report). These include:
#1 Data breach
#2 Weak ID, credential, and access management
#3 Insecure APIs
#4 System and application vulnerabilities
#5 Account hijacking
#6 Malicious insiders
#7 Advanced persistent threats
#10 Abuse and nefarious use of cloud services
#12 Shared technology issues
There are countless examples of why protection against the Treacherous 12 matters. Some of the more high-profile ones:
- Data breach: In the 2015 Anthem breach, hackers used a third-party cloud service to steal over 80M customer records.
- Insecure APIs: The mid-2015 IRS breach exposed over 300K records. While that’s a big number, the more interesting one is this: it took only one vulnerable API to allow the breach to happen.
- Malicious Insiders: Uber reported that their main database was improperly accessed. The unauthorized individual downloaded 50K names and numbers to a cloud service. Was it their former employee, the current Lyft CTO? That was Uber’s opinion. The DOJ disagreed and a lawsuit ensued.
In each of these cases a CASB could have helped. A CASB can help detect data breaches by monitoring privileged users, encryption policies, and movement of sensitive data. A CASB can also detect unusual activity within cloud services that originate from API calls, and support risk scoring of external APIs and applications based on the activity. And a CASB can spot malicious insiders by monitoring for overly-privileged user accounts as well as user profiles, roles, and privileges that drift from compliant baselines. Finally, a CASB can detect malicious user activity through user behavior analytics.
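As a rough illustration of the user-behavior-analytics idea mentioned above, consider flagging an account whose activity deviates sharply from its own baseline. This is a deliberately simplified z-score check, not any CASB vendor's actual model; the download counts are invented:

```python
from statistics import mean, stdev

def is_anomalous(baseline, observed, threshold=3.0):
    """Return True when `observed` (e.g. files downloaded today) sits
    more than `threshold` standard deviations above the account's own
    historical baseline -- the kind of signal behavior analytics uses
    to surface account hijacking or a bulk-exporting insider."""
    mu, sigma = mean(baseline), stdev(baseline)
    if sigma == 0:
        return observed > mu
    return (observed - mu) / sigma > threshold

# Typical daily download counts for one user, then a suspicious spike:
history = [10, 12, 11, 9, 10, 11]
```

A real system would track many signals per user (logins, geographies, sharing actions) and learn baselines continuously, but the core idea is the same: the account's own history defines "normal."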
What about the three threats that aren’t covered by a CASB? Those include:
#8 Data loss
#9 Insufficient due diligence
#11 Denial of service
The cost of data loss (#8, above) is huge. A now-defunct company named Code Spaces had to close down when its corporate assets were destroyed because it did not follow best practices for business continuity and disaster recovery. Data loss prevention is a primary corporate responsibility, and a CASB can’t detect whether it is in place. Insufficient due diligence (#9) is the responsibility of the organization leveraging the cloud service, not the service provider. Executives need a good roadmap and checklist for due diligence; a CASB can provide advice, but it can’t automate the process. Finally, denial-of-service (DoS, #11, above) attacks are intended to take the provider down, and it is the provider’s responsibility to take precautions to mitigate them.
For a quick reference guide to the question, “Can a CASB protect you from the 2016 treacherous 12?,” download this infographic.
To learn more, join Palerra CTO Ganesh Kirti and CSA Executive VP of Research J.R. Santos as they discuss “CASBs and the Treacherous 12 Top Cloud Threats” on April 25, 2-3pm EDT. Register for the webinar now.
April 21, 2016
By Matt Wilgus, Practice Director, Schellman
The release of details contained in the Panama Papers will be one of the biggest news stories of the year. The number of high-profile individuals implicated will continue to grow as teams comb through the 11.5 million documents leaked from Mossack Fonseca, a Panamanian law firm. While the news headlines will focus mainly on world leaders, athletes and the well-to-do, the overview from The International Consortium of Investigative Journalists (ICIJ) gets into additional details. It is worth reading to understand what services the firm provided, who uses those services, how they can be used legally and how they can be abused.
The overview seems like something out of a John Grisham book. In fact, some of the information being released resembles the plot of a book he wrote 25 years ago. In 1991, John Grisham published “The Firm,” a novel that revolves around several lawyers working for the fictional law firm Bendini, Lambert and Locke. The similarities between the book and today include a law firm that primarily exists to facilitate money laundering and tax evasion, a plot that hinges on the details of many transactions retrieved from thousands of documents, and a whistleblower. The fictional firm also provided services to legitimate clients, although in the book that share is about 25 percent. It is unknown what percentage of Mossack Fonseca’s clients were legitimate and how many would be described as Ponzi schemers, drug kingpins and tax evaders, as the ICIJ overview mentions. While the novel is fiction, it sets the stage for something that has been seen before.
Whether the leak started with an external breach of systems or an intentional leak by an insider, it is always intriguing to know how it occurred and what could have been done. Did it start with a phishing email, a rogue employee, a web application flaw? Forbes reported that the client portal server was running Drupal 7.23, which was susceptible to a SQL injection vulnerability announced in October 2014. There were many reports of exploitation within days of that announcement, so it is likely someone took advantage of the exploit. The team responsible for Wordfence, a popular WordPress security plugin, described another possible exploitation scenario related to upload functionality in the Revolution Slider plugin. These are just some of the potential means that could have caused a breach at Mossack Fonseca; other possibilities include weaknesses in the email server and a lack of encryption in transit. Mossack Fonseca does have a Data Security page on its site, although it primarily touts SSL and the fact that the firm houses all of its servers in-house as its primary security measures.
In 2011, I wrote a post on how the legal profession was an easy target for breaches. Looking back, I realize that technology has changed, but in many ways the weaknesses are likely to stay the same. One of the biggest changes to note since 2011 is the number of online applications law firms have now. This isn’t just the top 100 law firms; it includes smaller regional firms as well. In addition to the main corporate web site and an area to share documents (or client portal), which are now much more prevalent across firms of all sizes, firms have blog sites, premium service offerings, extranets and even applications that provide a gateway into all the other online applications. More applications means a larger attack surface.
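The vulnerability class behind that Drupal advisory is worth a concrete look. The snippet below is a generic illustration of SQL injection and its fix, not Drupal's actual code: the vulnerable version pastes user input into the query string, while the safe version binds it as a parameter so it can never be parsed as SQL. The table and payload are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

def find_user_vulnerable(name):
    # String concatenation: attacker-controlled input becomes SQL.
    query = "SELECT name, role FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # Parameterized query: input is bound as data, never parsed as SQL.
    return conn.execute(
        "SELECT name, role FROM users WHERE name = ?", (name,)).fetchall()

# A classic injection payload turns the WHERE clause into a tautology
# and dumps every row from the vulnerable query:
payload = "x' OR '1'='1"
```

The fix has been well understood for years, which is why an unpatched, internet-facing portal running a version with a known injection flaw is such a telling detail.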
Unlike Mossack Fonseca, which claims it hosted everything internally, many law firms we see do use third-party SaaS offerings to handle some of these functions. Outsourcing to a third party which specializes in providing a particular service can often provide better security than a firm can provide in house.
Given Mossack Fonseca’s focus on company formation, minimizing tax burdens, Private Interest Foundations and the like, the firm could easily have been a target, given the recent groundswell of activism against tax avoidance and income inequality. While the lapse in security at Mossack Fonseca may not be representative of security at all law firms, the details surrounding its environment point to likely weaknesses in people, processes and technology that could exist in any organization.
- People – Given what we know about potential vulnerabilities in the environment and the exfiltration of data, we can surmise that someone was not paying attention for an extended period of time. An organization has many security roles, including but not limited to policy development, administration and monitoring. In some environments one person may be responsible for many roles, and in some cases not all responsibilities can be met, either because no one was given the role or because the person who held it left the organization. A recent search of LinkedIn did not turn up many IT-related profiles with Mossack Fonseca as a current or previous employer, although this doesn’t necessarily mean these individuals do not exist; contractors may also have performed the role. That said, a third party could have been hired for a given job, say deploying the client portal, without being responsible for post-implementation support.
- Process – Being notified of vulnerabilities in the software supporting the organization is paramount to understanding where risks exist. Knowing what data is leaving the environment is also critical. The likelihood that either of these was occurring is low, and even if one was, there wasn’t necessarily anyone to act on it in a timely fashion.
- Technology – A breakdown in people and processes can occasionally be mitigated by technology. The WordPress and Drupal sites are now protected by a third-party security provider, but other sites likely are not. An up-to-date intrusion detection system (IDS) might have detected some of the threats the organization faced, or the activities that occurred, although with several exploitable options, one avenue or another would likely have remained open. For an organization that appears to have missed some fundamental security concerns, they may have used technology to secure some data, as there is a site named crypt.mossfon.com that is still up.
The Panama Papers incident may once again raise awareness around data security among law firms. Organizations providing support services to law firms, such as eDiscovery and case management providers, may also want to take note. Mossack Fonseca has a link on its page for ISO certifications; however, the only one listed is ISO 9001:2008. An ISO 27001 assessment, or certification, may not have prevented the leak, but it would have demonstrated greater consideration of security on the part of Mossack Fonseca. A penetration test would also have been beneficial, although given the vulnerabilities that existed, even a vulnerability scan would have detected some of the issues.
With most data breaches, the actual data on the people and companies is less interesting (albeit potentially more valuable) than the way in which the breach occurred or the attacker persisted in the attack. As it relates to the Panama Papers, it is the opposite. The forthcoming details related to various individuals, their transactions, and the potential future tax and privacy implications are far more interesting to the public than the means whereby the exfiltration actually occurred. That said, taking a few minutes to understand how it happened and what we can learn can be a worthwhile step in preventing future breaches.
April 20, 2016
Data Privacy Gets a Stronger Light Saber
By Nigel Hawthorn, EMEA Marketing Director, Skyhigh Networks
On April 14, 2016, the EU Parliament passed the long-awaited new EU rules for personal data protection, the General Data Protection Regulation (GDPR). Everyone who holds or processes data on individuals in the 28 countries of the EU has until May 25, 2018 to comply.
The top 10 provisions of the regulation are:
- It is a global law. No matter where you are in the world, if you hold data on individuals in the EU and lose it, you are responsible and can be fined. For example, if you run a web site and an EU resident enters their contact information, you have to comply.
- Increased fines. Up to 4% of global annual turnover or €20,000,000 (about US$22M), whichever is greater.
- Opt-in regulations. Users must give clear consent to the collection of their data, and you may use it only for the purpose defined. No opting out, no hidden terms, no selling or giving data to other parties.
- Breach notification. If you suffer a data breach, you have 72 hours to tell the authorities.
- Joint liability. If multiple companies process the data, they are all liable if data is lost, so if you hold data YOU are responsible if data gets lost via a risky cloud service.
- Data subject rights. Users can demand access to their data and require that it be updated or deleted. If you hold data, you need to work out how to meet those requests.
- Removes ambiguity. One law across all 28 countries of the EU.
- Common enforcement. The authorities are expected to enforce the regulation consistently across all member states; the good news is that data holders need to deal with only one authority.
- Collective redress. Users can band together to sue in class action lawsuits if data is lost.
- Data transfer. Data transfer from the EU is allowed, but subject to strict conditions.
If you work for a company collecting data, you are responsible for the security of that data no matter where it gets processed. It’s more important than ever that you know the shadow IT services that employees may be using, as they could be the conduit for data loss and your organisation will be liable.
There’s some good news for IT in the regulation – the new rules encourage privacy-friendly techniques such as pseudonymisation, anonymisation, encryption and data protection by design and by default. So capabilities such as encrypting data before it is uploaded to the cloud, especially when combined with keeping the keys on premises, can reduce your liabilities.
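The "encrypt before upload, keep the keys on premises" pattern looks roughly like this. The cipher below is a toy XOR keystream used purely to keep the example self-contained; a real deployment would use a vetted authenticated cipher such as AES-GCM from an established library, and the key and nonce values here are invented for illustration:

```python
import hashlib
from itertools import count

def keystream(key: bytes, nonce: bytes):
    """Derive a pseudo-random byte stream from key + nonce.
    Toy construction for illustration only -- never use in production."""
    for block in count():
        yield from hashlib.sha256(
            key + nonce + block.to_bytes(8, "big")).digest()

def encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # XOR each plaintext byte with the keystream.
    return bytes(b ^ k for b, k in zip(plaintext, keystream(key, nonce)))

decrypt = encrypt  # XOR stream cipher: the same operation both ways

# The key never leaves the premises; only ciphertext goes to the cloud.
key, nonce = b"on-prem-secret-key", b"unique-nonce-1"
uploaded = encrypt(key, nonce, b"quarterly payroll data")
```

The point of the pattern is the trust boundary: because the cloud provider only ever holds `uploaded`, a breach or subpoena on the provider's side exposes ciphertext, while the data remains readable only where the key lives.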
This is good news for EU citizens, as they will have strong and clear rights over their personal data, its collection, processing and security.
Some organizations have in the past treated personal data as a cheap commodity but this regulation clearly shows how valuable data really is and demands that they treat it with great respect.
We should all put a value on data about ourselves and our families and embrace this legislation because the outcome is that all of our data will be safer.
April 20, 2016
By Françoise Gilbert, Global Privacy and Cybersecurity Attorney, Greenberg Traurig
In a 58-page opinion published April 13, 2016, the influential European Union Article 29 Working Party (WP29), which includes representatives of the data protection authorities of the 28 EU Member States, expressed significant concerns with respect to the terms of the proposed EU-US Privacy Shield that is intended to replace the EU-US Safe Harbor.
The WP29 offered numerous critiques of the proposed EU-US Privacy Shield framework, including, for example, the lack of consistency between the principles set forth in the Privacy Shield documents and the fundamental EU data protection principles outlined in the 1995 EU Data Protection Directive, the proposed EU General Data Protection Regulation, and related documents.
The WP29 group also requested clearer restrictions for the onward transfer of personal information that occurs after personal data of EU residents is transferred to the US. The WP29 is especially concerned with the subsequent transfer of data to a third country, outside the United States. In addition, the WP29 continues to be concerned about the effect, scope, and effectiveness of the measures proposed to address activities of law enforcement and intelligence agencies, often described as a “massive collection” of data.
On Feb. 29, 2016, the European Commission and U.S. Department of Commerce published a series of documents intended to constitute a new framework for transatlantic exchanges of personal data for commercial purposes, to be named the EU-U.S. Privacy Shield. The Privacy Shield would replace the EU-US Safe Harbor, which was invalidated by the Court of Justice of the European Union (CJEU) in October 2015, in the Schrems case.
Since the publication of the draft Privacy Shield documents, the WP29 members have convened in a series of meetings over the past six weeks to evaluate these documents and arrive at a common position.
The results of this six-week evaluation were expressed in an opinion entitled “Opinion 01/2016 on the EU-US Privacy Shield Draft Adequacy Decision – WP 238,” published on April 13, 2016. The 58-page document, which is well drafted and thoughtful, contains numerous positive comments about the efforts of the EU and US in trying to design a framework that would adhere to the two-page guidance published at the end of January, which outlined the key aspects of the proposed cross-Atlantic framework.
The document also expressed a wide variety of concerns with respect to the proposed EU-US Privacy Shield. The WP29 group was concerned by: (i) the commercial provisions (which address issues similar to those addressed in the Safe Harbor principles); (ii) the surveillance aspects (specifically, the possible derogations to the principles of the Privacy Shield for national security, law enforcement, and public interests purposes); as well as, (iii) the proposed joint review mechanism.
Consistency with Data Protection Principles
The WP29 indicated in its Opinion that its key objective is to make sure that the Privacy Shield would offer an equivalent level of protection for individuals when personal data is processed. The WP29 believes that some key EU data protection principles are not reflected in the draft documents, or have been inadequately substituted by alternative notions.
While it does not expect the Privacy Shield to be a mere and exhaustive copy of the EU legal framework, the WP29 stressed that the Privacy Shield should contain the substance of the fundamental principles in effect in the European Union, so that it can ensure an “essentially equivalent” level of protection. To this point, WP29 explains that the data retention principle is not expressly mentioned and there is no wording on the protection that should be afforded against automated individual decisions based solely on automated processing. The application of the purpose limitation principle to data processing is also unclear.
The WP29 paid special attention to onward transfers, an issue that was key to the Safe Harbor decision. It believes that the Privacy Shield provisions addressing onward transfers of EU personal data are insufficiently framed, especially regarding their scope, the limitation of their purpose, and the guarantees applying to transfers to Agents.
The WP29 noted that since the Privacy Shield would be used to address onward transfers from a Privacy Shield entity located in the US to third country recipients, it should provide the same level of protection on all aspects of the Shield, including national security. In case of an onward transfer to a third country, every Privacy Shield organization should have the obligation to assess any mandatory requirements of the third country’s national legislation applicable to the data importer before making the transfer.
Finally, although the WP29 notes the additional avenues of recourse made available to individuals to exercise their rights, it is concerned that the new redress mechanism may prove too complex in practice and too difficult for EU individuals to use, and therefore ineffective. It therefore stresses the need for further clarification of the various recourse procedures, and suggests that EU data protection authorities, where willing, could serve as a natural contact point for EU individuals in these complex redress procedures and could have the option to act on their behalf.
Derogations for National Security Purposes
The WP29 observed that the draft EU Commission Adequacy Decision extensively addresses the possible access to data processed under the Privacy Shield for purposes of national security and law enforcement. It also notes that the US Administration, in Annex VI of the documents, provides for increased transparency regarding the legislation applicable to intelligence data collection.
Regarding the massive collection of information, the WP29 notes that the representations of the U.S. Office of the Director of National Intelligence (ODNI) do not exclude massive and indiscriminate collection of personal data originating from the EU. This raises concerns about the protection of the fundamental rights to privacy and data protection. The WP29 pointed to other resources for clarification on this point, such as the forthcoming rulings of the CJEU in cases regarding massive and indiscriminate data collection.
Concerning redress, the WP29 welcomes the establishment of an Ombudsperson as a new redress mechanism. At the same time, it expressed concern that this new institution might not be sufficiently independent, might not be vested with adequate powers to exercise its duty effectively, and might not guarantee a satisfactory remedy in case of disagreement.
Annual Joint Review
Regarding the proposed Annual Joint Review mechanism mentioned in the Privacy Shield framework, the WP29 noted that the Joint Review is a key factor to the credibility of the Privacy Shield. It points out, however, that the specific modalities for operations, such as the resulting report, its publicity, and the possible consequences, as well as the financing, need to be agreed upon well in advance of the first review.
Consistency with the General Data Protection Regulation
The WP29 notes that the Privacy Shield needs to be consistent with the EU data protection legal framework, in both scope and terminology. It suggests that a review should be undertaken shortly after the entry into application of the General Data Protection Regulation (GDPR), to ensure that the higher level of data protection offered by the GDPR is followed in the adequacy decision and its annexes.
Structure and Content
Regarding the structure and content of the documents, the WP29 noted that the complexity of the structure of the documents that constitute the Privacy Shield makes them difficult to understand. It is also concerned that the new framework’s lack of clarity may make it difficult for data subjects, organizations, and even data protection authorities to comprehend. In addition, it notes occasional inconsistencies within the 110 pages that form the current draft of the Privacy Shield framework. The WP29 urges the Commission to make the documents clearer and more understandable for both sides of the Atlantic.
In its 58-page opinion, the WP29 made great efforts to point to the improvements brought by the Privacy Shield compared to the Safe Harbor decision. However, overall, the evaluation of the 110-page proposed Privacy Shield framework is generally negative. The WP29 appears to doubt that the protection that would be offered under the Privacy Shield would be equivalent to that of the EU. The extent to which the EU Commission will be able to address these concerns, identify appropriate solutions and provide the requested clarifications in order to improve the proposed documents remains to be seen.
Six months after the CJEU invalidated the EU Commission decision that had created the EU-US Safe Harbor, cross-Atlantic data transfers are still in limbo. There is still no simple, business-friendly solution for addressing the stringent prohibition against cross-border data transfers between EU/EEA entities and US-based companies. The viability of the Privacy Shield remains in question. With the negative opinion issued by the WP29, a very influential body of the European Union, it is uncertain whether and when a stable and final draft will be completed. Assuming such a framework reaches a form that is satisfactory to both sides, it would then need to be implemented. At a minimum, a new infrastructure, a website, and additional personnel will be needed to make it operational—these are all things that take even more time.
In the meantime, US companies that built their operations and business models around the simple and easy-to-use EU-US Safe Harbor should review the legality of their cross-border data transfers with their counsel. With no light at the end of the tunnel, it is urgent that they evaluate and implement means to address the stringent restrictions on cross-border data transfers in effect in the European Union and European Economic Area, and that they understand and address the needs of their counterparts in the EU/EEA region in order to minimize the risk of enforcement action against the European entities.
April 19, 2016 | Leave a Comment
By Susan Richardson, Manager/Content Strategy, Code42
Despite some surveys that say Bring Your Own Device (BYOD) is growing, the CyberEdge Group’s recently released 2016 Cyberthreat Defense Report found that enterprise BYOD programs have stalled. Only one-third of respondents this year had implemented a BYOD policy—the same as two years ago. And 20 percent still have no plans to add one.
The delay in leveraging BYOD programs may be because organizations find them harder to establish, manage and secure than first thought. But the lack of an official policy doesn’t mean employees aren’t plugging their unapproved devices into the network. A Gartner survey found that 45 percent of workers use a personal device for work without their employer’s knowledge.
So here are answers to three key BYOD sticking points, to help organizations get unstuck and leverage the productivity gains BYOD can bring:
Q: How do we separate corporate and personal data on a device?
A: With containerization via mobile device management (MDM).
Most MDM programs today allow you to separate the corporate workspace from the personal workspace on mobile devices. Containerization, also known as sandboxing, helps reduce the number of policies required to effectively manage mobile risks. It can also assuage employee fears that if they’re terminated or report a device missing, you’ll wipe away the entire contents of their device—including personal data like photographs and emails.
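As a rough illustration (a hypothetical model, not any vendor’s actual MDM API), containerization amounts to keeping two isolated data stores on one device, so a remote wipe can target only the corporate workspace:

```python
# Hypothetical sketch of a containerized device: corporate and personal
# data live in separate "containers," and a remote wipe clears only the
# corporate one by default. Real MDM products enforce this separation
# with encryption and OS-level sandboxing.

from dataclasses import dataclass, field

@dataclass
class Device:
    corporate: dict = field(default_factory=dict)  # managed workspace
    personal: dict = field(default_factory=dict)   # employee's own data

    def remote_wipe(self, corporate_only: bool = True) -> None:
        """Wipe corporate data; leave personal data untouched by default."""
        self.corporate.clear()
        if not corporate_only:
            self.personal.clear()

device = Device(
    corporate={"crm_cache": "...", "work_email": "..."},
    personal={"photos": ["beach.jpg"], "home_email": "..."},
)
device.remote_wipe()              # employee reports the device missing
print(device.corporate)           # {} -- corporate workspace is gone
print(device.personal["photos"])  # personal photos survive the wipe
```

The design point is simply that the wipe operation is scoped to the managed container, which is what lets IT act on a lost device without touching employees’ personal files.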
Q: How do we keep tabs on all that roaming mobile data?
A: With a comprehensive cloud endpoint backup system.
Modern cloud endpoint backup solutions serve as the new data guardian, continuously and automatically moving data from a device to the cloud and back again to a new machine whenever it’s needed. They protect enterprise data by continuously backing up every change and deletion. The best endpoint backup systems also give IT a comprehensive, single point of aggregation and control. You can see what’s on your network, how each device is configured, and how it interacts with your environment, as well as where and when data was created, whether it’s been altered, and who changed it. This happens whenever the machine is connected to the Internet, without prompting the user to engage, all while running seamlessly and silently in the background.
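The core mechanics of continuous backup can be sketched in a few lines. This is an illustrative model only (not the CrashPlan implementation): it snapshots a directory, detects changes and deletions between snapshots, and records each with a timestamp; a real agent would also ship the changed bytes to the cloud so every version stays restorable.

```python
# Minimal sketch of continuous endpoint backup: hash-based snapshots of a
# directory, diffed to detect every change and deletion.

import hashlib
import os
import tempfile
import time

def snapshot(root):
    """Map each file path under root to a hash of its contents."""
    state = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[path] = hashlib.sha256(f.read()).hexdigest()
    return state

def diff(old, new):
    """Return (changed-or-added, deleted) paths between two snapshots."""
    changed = [p for p, h in new.items() if old.get(p) != h]
    deleted = [p for p in old if p not in new]
    return changed, deleted

# Demo: edit a file, then delete it, detecting both events.
root = tempfile.mkdtemp()
path = os.path.join(root, "notes.txt")
with open(path, "w") as f:
    f.write("v1")
before = snapshot(root)

with open(path, "w") as f:
    f.write("v2")                       # user edits the file
changed, deleted = diff(before, snapshot(root))
event_log = [(time.time(), changed, deleted)]

os.remove(path)                         # user deletes the file
changed2, deleted2 = diff(snapshot(os.path.dirname(path)) if False else {path: "x"}, snapshot(root)) if False else diff({path: "x"}, snapshot(root))
```

A real agent runs this loop continuously rather than twice, and the event log is what makes deletions recoverable: the last captured version can always be restored.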
Q: Who pays and how?
A: You, the enterprise, by automating reimbursement.
With California leading the way, BYOD reimbursement won’t just be the ethical thing to do, it will be legally required under fair labor laws. But manually managing reimbursement via expense reports is archaic and expensive. It can cost $15 to $20 per expense report in internal labor, because so many different departments have to touch the report, from accounts payable to finance to IT. Instead, do like Intel did and automate reimbursement by setting up corporate-funded plans with mobile providers. That way, your company takes care of the bill and can negotiate corporate discounts with providers.
To get started developing a BYOD strategy, download this BYOD checklist.
April 12, 2016 | Leave a Comment
By Rick Orloff, Chief Security Officer, Code42
The unprecedented leak of 11.5 million files from the database of the world’s fourth biggest offshore law firm is riveting. As details continue to emerge about the Panama Papers leak, the money laundering and secretive tax regimes and high-profile clientele make for a juicy story. But from an enterprise data security perspective, here at Code42 we’re shaking our heads.
It’s hard to imagine a situation where the stakes for data protection could be higher. This is an organization whose entire “empire” is built on “secret” data. And it was an all-or-nothing game: Mossack Fonseca will likely never recover to earn the trust of a future client—tax evader or otherwise. If there ever was an organization that warranted exceptional network security tools and data security measures, Mossack Fonseca was it.
A data security wake-up call for honest law firms everywhere
If a massive international law firm dealing exclusively in extremely sensitive data is this easily hacked, how vulnerable is your average, above board law firm?
According to the statistics, the answer is “very.” John McAfee penned an article for Business Insider in which he concludes that “law firms are easy pickings for hackers.” Bloomberg found that 80 percent of large U.S. law firms were hacked in 2015. Even more alarming, in the 2015 ABA Technology Survey, 23 percent of firms surveyed said they “don’t know” if they’ve experienced a breach, and only 10 percent have any sort of cyber liability coverage. For a cohort that knows a thing or two about liability lawsuits—and certainly knows that “ignorance of the law” is a poor defense—this is surprising.
Data protection is a high-stakes game for every law firm
And while a data breach at your average law-abiding law firm isn’t likely to result in indictments for fraud, the stakes are still extremely high. “The implications of law firm breaches are mind boggling,” Philip Lieberman, president of Lieberman Software, told Computer Business Review.
Most clearly, a firm stands to destroy every shred of trust with its clients—a reputation bomb that will be tough to recover from. In many cases, a leak could compromise legal proceedings and eliminate advantages by placing litigation strategy and privileged information out in the open.
Even if a firm’s clients and reputation escape unscathed, data loss of any kind can trigger significant financial impact. A damaged laptop, or ransomware that holds data hostage, can leave an associate without access to critical information. The loss of billable hours quickly adds up. Add to that breach reporting requirements and potential fines, and the ROI of modern enterprise data security tools is easily apparent.
It will be interesting to watch the continued fallout from the Panama Papers, and we’re happy to count this as a win for the “good guys.” But as it dominates headlines and newsfeeds, we hope it’s also a major reminder for law firms—and enterprises in every industry—to re-examine what they’re doing to protect their data.
Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.
April 11, 2016 | Leave a Comment
By Daniele Catteddu, Chief Technology Officer, Cloud Security Alliance
Today, the Cloud Security Alliance released The CSA STAR Program & Open Certification Framework in 2016 and Beyond, a new whitepaper created to give the security community a description of some of the key security certification challenges and how the CSA intends to address them moving forward.
As background, the CSA’s Security, Trust and Assurance Registry (STAR) program, launched in 2011, has become the industry’s leading trust mark for cloud security, meeting its objective of improving trust in the cloud market by offering increased transparency and information security assurance. The Open Certification Framework (OCF), also developed by the CSA, is an industry initiative enabling global, accredited, trusted certification of cloud providers. It allows for flexible, incremental and multi-layered cloud service provider (CSP) certifications according to the CSA’s industry-leading security guidance.
Together the OCF/STAR program comprises a global cloud computing assurance framework with a scope of capabilities, flexibility of execution, and completeness of vision that far exceeds the risk and compliance objectives of other security audit and certification programs.
Since the launch of STAR, the cloud market has evolved and matured, and so has the cloud audit and certification landscape, which now offers more than fifteen options, including national, regional and global, sector-specific, cloud-specific and generic certification schemes. This proliferation has resulted in, among other things, a barrier to entry for CSPs that cannot afford to get certified by multiple countries and organizations.
Aside from the time and cost of pursuing and maintaining these numerous certifications, there are a number of other concerns, including:
- Lack of means to provide a higher level of assurance and transparency
- Privacy not adequately taken into account
- Limited transparency
- Lack of means to streamline GRC
To address these certification challenges, the CSA is proposing, through the OCF, to offer the cloud community both a global recognition scheme for security and privacy certification and a set of GRC tools and practices that address the many complex assurance and transparency requirements of cloud stakeholders.
The three core ideas behind the CSA’s suggested solutions are that an effective and efficient approach to trust and assurance has to:
- delicately balance the need of nations and business sectors to develop their specific certification schemes against the need of CSPs to reduce compliance costs
- avoid having humans (auditors) perform activities that can be done by machines (e.g., collecting data)
- make sure that accurate and reliable evidence/information is provided to the relevant people in a timely fashion, leveraging automated means as much as possible
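The second and third ideas (machines collecting evidence and delivering it to the right people in a verifiable form) can be sketched as follows. The control identifier, config fields, and check logic here are hypothetical illustrations, not part of any CSA specification:

```python
# Hypothetical sketch of automated audit-evidence collection: run a
# machine-executable control check and package the result as a
# timestamped evidence record with an integrity digest, so a GRC tool
# or auditor can consume it without manual data gathering.

import hashlib
import json
import time

def collect_evidence(control_id, check, observed_config):
    """Run an automated control check and package the result as evidence."""
    record = {
        "control": control_id,            # e.g. a CCM-style control ID
        "passed": bool(check(observed_config)),
        "observed_config": observed_config,
        "timestamp": time.time(),
    }
    # Digest over the canonicalized record makes later tampering detectable.
    record["digest"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    return record

# Example: verify that data at rest is encrypted, per a hypothetical
# provider config pulled from an API rather than an auditor's spreadsheet.
config = {"encryption_at_rest": True, "tls_min_version": "1.2"}
evidence = collect_evidence(
    "EKM-03",
    lambda c: c["encryption_at_rest"] and c["tls_min_version"] >= "1.2",
    config,
)
print(evidence["passed"])  # True
```

Run continuously, checks like this are what turn point-in-time certification into the continuous monitoring/auditing the paper advocates.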
The paper also outlines how a number of other frameworks and controls should play a part in this solution including:
- Leveraging CCM and OCF/STAR as normalizing factors
- Conducting continuous monitoring/auditing
- Integrating privacy level agreements code of conduct into the STAR Program
The CSA is currently seeking validation for its proposed OCF-STAR program action plan and is seeking input and support from the CSA community. To become involved, visit the Open Certification Working Group.