What They’re Not Telling You About Global Deduplication

January 29, 2016

By Rachel Holdgrafer, Business Content Strategist, Code42

When it comes to endpoint backup, is global deduplication a valuable differentiator?

Not if data security and recovery are your primary objectives.

Backup vendors that promote global deduplication say it minimizes the amount of data that must be stored and provides faster upload speeds. What they don’t say is how data security and recovery are sacrificed to achieve these “benefits.”

Here’s a key difference: with local deduplication, data redundancy is evaluated and removed on the endpoint before data is backed up. Files are stored in the cloud per user and are easily located and restored to any device. With global deduplication, all data is sent to the cloud, but only one instance of a data block is stored.
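To make the block-level mechanics concrete, here is a minimal, hypothetical sketch of single-instance storage using content-addressed blocks. The block size, class and method names are invented for illustration and do not describe any particular vendor’s implementation:

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # split files into 4 MB blocks (illustrative choice)

def split_into_blocks(data: bytes):
    """Yield fixed-size blocks of a file's contents."""
    for i in range(0, len(data), BLOCK_SIZE):
        yield data[i:i + BLOCK_SIZE]

class DedupStore:
    """Single-instance store: one copy per unique block, keyed by its hash."""
    def __init__(self):
        self.blocks = {}      # block hash -> block bytes (stored once)
        self.file_index = {}  # (owner, filename) -> ordered list of block hashes

    def backup(self, owner: str, filename: str, data: bytes):
        hashes = []
        for block in split_into_blocks(data):
            digest = hashlib.sha256(block).hexdigest()
            # Store the block only if an identical block is not already present.
            self.blocks.setdefault(digest, block)
            hashes.append(digest)
        self.file_index[(owner, filename)] = hashes

    def restore(self, owner: str, filename: str) -> bytes:
        # Reassemble the file by following the owner's block references.
        return b"".join(self.blocks[h] for h in self.file_index[(owner, filename)])

# With *local* deduplication, each user effectively has their own DedupStore
# (a per-user archive); with *global* deduplication, all users share one store,
# so a block uploaded by one user is never stored again for anyone else.
store = DedupStore()
store.backup("alice", "handbook.pdf", b"company handbook contents")
store.backup("bob", "handbook.pdf", b"company handbook contents")  # no new blocks stored
assert store.restore("bob", "handbook.pdf") == b"company handbook contents"
```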

They tell you: “You’ll store less data!”
It’s true that global deduplication reduces the number of files in your data store, but that’s not always a good thing. At first blush, storing less data sounds like a benefit, especially if you’re paying for endpoint backup based on data volume. But other than potential cost savings, how does storing less data actually benefit your organization?

Not as much as you think.

For most organizations, the bulk of the files removed by the global deduplication process will be unstructured data such as documents, spreadsheets and presentations—files that are not typically big to begin with—making the storage savings from global dedupe minimal. The files that gobble up the bulk of your data storage—databases, video, design source files and the like—are the ones unlikely to be floating around in duplicate.

What they don’t tell you: Storing less data doesn’t actually benefit your organization. Smaller data stores benefit the solution provider. Why? Data storage costs money and endpoint backup providers pay for huge amounts of data storage and bandwidth every month. By limiting the data stored to one copy of each unique file, the solution provider can get away with storing less data for all of its customers, resulting in smaller procurement costs each month—for them.

Vendors that offer global dedupe also fail to mention that it puts an organization at risk of losing data because (essentially) all the eggs are in one basket. When one file or data block is used by many users but saved just once (e.g., the HR handbook for a global enterprise, sales pitch decks or customer contact lists), all users will experience the same file loss or corruption if the single instance of the file is corrupted in the cloud.

They tell you: “It uploads data faster.”
First, let’s define “faster.” The question is, faster than what? Admittedly, there’s a marginal difference in upload speeds between global and local deduplication, but it’s a lot like comparing a Ferrari and a Maserati. If a Ferrari tops out at 217 miles per hour and a Maserati tops out at 185 miles per hour, the Ferrari clearly wins. It’s technically faster, but considering that the maximum legal speed on most freeways is 70-75 miles per hour, the extra speed of either vehicle is a moot point. Both cars are wickedly fast, but since a driver is unlikely to take either to its top speed, does the difference matter? The fact is, it doesn’t.

The same can be said about the speed “gains” achieved by utilizing global deduplication over local deduplication. Quality endpoint backup solutions will provide fast data uploads regardless of whether they use global deduplication or local deduplication. There’s a good chance that there will be no detectable difference in speed between the two methods because upload speed is limited by bandwidth. Global deduplication promoters are positioning speed as a benefit you will not experience.

What they don’t tell you: Global deduplication comes at a cost: restore speeds will be orders of magnitude slower than restoration of data that has been locally deduplicated. Here’s why: with global deduplication, all of your data is stored in one place and only one copy of a unique file is stored in the cloud regardless of how many people save a copy. Rather than store multiples of the same file, endpoint backup that utilizes global deduplication maps each user to the single stored instance. As the data store grows in size, it becomes harder for the backup solution to quickly locate and restore a file mapped to a user in the giant data set.

Imagine that the data store is like a library. Mapping is like the Dewey Decimal System, only the mapped books are stored as giant book piles rather than by topic or author. When the library is small, it’s relatively easy to scan the book spines for the Dewey Decimal numbers. However, as the library collection (that is, book piles) gets larger, finding a single book becomes more time consuming and resource intensive.

Data storage under the global deduplication framework is like the library example above. Unique files or data blocks are indexed as they come into the data store and are not grouped by user. When the data store is small, it’s relatively easy for the system to locate all of the data blocks mapped to one user when a restore is necessary. As the data store grows in size, the process of locating all of the data blocks takes longer. This slows down the restore process and forces the end user to wait at the most critical point in the process—when he or she needs to get files back in order to continue working.

The real security story: What you’re not being told about global deduplication doesn’t stop there. Two-factor encryption doesn’t mean what you think it does. Frankly, an encryption key coupled with an administrator password is NOT two-factor encryption. It’s not even two-factor authentication. It’s simply a password layered over a regular encryption key. Should someone with the encryption key compromise the password, he or she will have immediate access to all of your data.
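To illustrate the distinction being drawn here, below is a hedged, hypothetical sketch: in the first scheme an administrator password merely gates access to a stored key, while in the second a key is derived from two independent secrets so that neither alone is sufficient. This is a conceptual illustration only, not a description of any vendor’s product:

```python
import hashlib, hmac, os

# Scheme A: what the post argues is being marketed as "two-factor encryption".
# The data key exists on its own; an administrator password merely gates access
# to it in software. Anyone holding both the key and the password gets the data.
def scheme_a_unlock(stored_key: bytes, stored_password_hash: bytes, password: str) -> bytes:
    if hmac.compare_digest(hashlib.sha256(password.encode()).digest(), stored_password_hash):
        return stored_key  # the key was never cryptographically bound to the password
    raise PermissionError("wrong password")

# Scheme B: closer to a genuine two-factor construction. The data key is *derived*
# from two independent secrets (e.g. a passphrase plus a separate key file), so
# neither secret on its own can reconstruct the key.
def scheme_b_derive_key(passphrase: str, key_file_secret: bytes, salt: bytes) -> bytes:
    material = passphrase.encode() + key_file_secret
    return hashlib.pbkdf2_hmac("sha256", material, salt, 200_000)

salt = os.urandom(16)
key_file_secret = os.urandom(32)
data_key = scheme_b_derive_key("correct horse battery staple", key_file_secret, salt)
# Losing either factor (passphrase or key file) means the key cannot be re-derived.
```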

Conclusion
Companies that deploy endpoint backup clearly care about the security of their data. They count on endpoint backup to reliably restore their data after a loss or breach. Given the vulnerabilities exposed by the global deduplication model, it is counterintuitive to sacrifice security and reliability in a backup model in favor of “benefits” that profit the seller or cannot be experienced by the buyer.

To learn more about how endpoint backup with local deduplication is a more strategic data security choice, download the ebook, Backup & Beyond.

Serious Cybersecurity Challenges Ahead in 2016

January 28, 2016

By Phillip Marshall, Director of Product Marketing, Cryptzone

By now you’ll have settled into the New Year, looking ahead at what’s to come as we move swiftly through January. However, numerous unsettling predictions suggest 2016 will be a year of serious cybersecurity challenges – from new types of hacks and skills shortages to increased insider threats. We’ve rounded up a number of 2016 predictions from industry experts and vendors that every organization, regardless of size, should pay close attention to and build a strategy to address.

  1. Increased Need to Restrict Access and Secure Content: Dark Reading presented our first noteworthy prediction. “Chief Information Security Officers (CISOs) will become the new “it” girl of security, not only in enterprises with healthy security budgets, but in data-driven startups where housing sensitive information is core to their business,” say Tim Chen, CEO, and Bruce Roberts, CTO of DomainTools.

It increasingly seems that a day does not pass without a news story on the loss of sensitive information. If that information isn’t secured properly and is accessed by unauthorized parties, the damage to an organization is massive. Financial penalties, regulatory sanctions, lost company confidential information and brand damage – all of these circumstances can be avoided by restricting access to and encrypting content wherever it lives and travels.

  2. Security is becoming a Shared Responsibility: TechCityNews offered our next prediction of merit which expands on who is responsible for cybersecurity. “Demand for security products has grown, and is only set to grow further; and responsibility for security is now held in more parts of any organization. In other words, people other than the security analyst and the chief information security officer, who have traditionally been the users of security tools, are being made responsible for making sure private information and intellectual property is secure. The responsibility lies with both the C-suite, as share price is directly impacted by a breach, as well as with the developer, who has to ship safe code and include security features on products as they are built.”

Too much is at stake for organizations that have been breached. We don’t necessarily think this is a prediction so much as a requirement for all organizations this year.

  3. Insider Threats to Increase: Insider Threats Abound – lock down your IT, says ITProPortal in its 2016 predictions. “Massive disruption (Uber style) to existing industries and wholesale digitization will create job losses and potentially significant numbers of disaffected employees capable of compromising IT systems. So, we’re likely to see a renewed focus on ‘locking down’ information systems, by ensuring secure configurations, removing vulnerabilities, strictly controlled use of privileges and by ensuring that critical systems and applications are patched up to date.”

Insider threats are a clear issue, especially as we believe all cybercrime is an inside job (see our webinar with Forrester analyst John Kindervag on this topic). In 2016, organizations first need to adopt the principles of zero trust to combat malicious insiders at the network level: individuals should only ever have access to the resources they need to do their job, and that access should only ever be granted in reasonable contexts (a minimal sketch of such a check appears after this list). Otherwise, there’s nothing stopping them from spending their downtime trawling entire network segments for sensitive information. Second, to avoid data breaches caused by careless behavior, organizations need strong content-level security. By encrypting, tracking and restricting access to files that contain sensitive information, they can mitigate the consequences of misdirected emails and similar incidents.

  4. You’ll need to do more with fewer skilled professionals: Another issue in 2016 – skills shortages in cyber security will increase. This prediction came up time and time again throughout our research. As the demand to defend against cyber threats increases, the resources to achieve it decrease. Skills shortages “will mean that fewer and fewer organizations are able to build or manage cyber security defenses themselves, or even be able to make effective use of cyber security technologies.”

Benjamin Jun, CEO, HVF Labs echoed this sentiment in his prediction that “Microservices will change the build vs. buy debate as identity management and customer data will be increasingly migrated to specialized cloud services in 2016. Developers will insert vetted services and code into their own software, avoid building from scratch, and obtain a security level better than most homegrown offerings. And, for companies who insist on build-your-own, relief is coming in 2017 when container technologies will allow in-house teams to practically manage and integrate microservices of their very own.”

Geoff Smith of Experis commented in one prediction that the “worrying news is that breaches are inevitable, while a shortage of skilled cybersecurity professionals is likely to push up the costs of beefing up defenses and dealing with attacks.”

The build vs. buy debate will never end, but with skills shortages aplenty, help from cyber security vendors that specialize in network security and data protection will be necessary in 2016.

  5. Customers Care! Increasingly, customers will want to know how you’re securing their data: Malcolm Marshall, Partner and Global Leader, Cyber Security at KPMG said “In 2016, we will see that consumers care about security shock – more businesses will realize that sophisticated customers actually care about security in the products and services and will realize that security, ease of use and “coolness” are not mutually exclusive.”

Allowing customers’ data to be stolen is bad for business. Your customers want to know their data is safe. They want you to comply with regulations and they want you to do everything you can to prevent cybercrime. We previously predicted this trend would continue and it has. Customers want proactive cybersecurity — not reactive analysis and temporary repairs. Findings show that companies are ramping up their spending to prevent cyberattacks after a string of breaches at financial firms and big retailers. This trend will continue.
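As promised under prediction 3, here is a minimal, hypothetical sketch of a context-aware, least-privilege access check in the zero-trust spirit; the roles, resources, network prefix and hours below are invented purely for illustration:

```python
from datetime import datetime, timezone

# Hypothetical policy: which roles may reach which resources, and in what context.
ROLE_RESOURCES = {
    "hr_analyst": {"hr_records"},
    "dba": {"customer_db"},
}
BUSINESS_HOURS = range(8, 18)  # 08:00-17:59 UTC, purely illustrative
CORPORATE_NET = "10.0."        # naive prefix check, for illustration only

def allow_access(role, resource, source_ip, when=None):
    """Grant access only when role, resource, time and network all check out."""
    when = when or datetime.now(timezone.utc)
    if resource not in ROLE_RESOURCES.get(role, set()):
        return False  # least privilege: not part of the job, no access
    if when.hour not in BUSINESS_HOURS:
        return False  # unusual hours: deny and route to review
    if not source_ip.startswith(CORPORATE_NET):
        return False  # off-network requests need another path (e.g. VPN plus MFA)
    return True

print(allow_access("dba", "customer_db", "10.0.3.7"))  # True during business hours
print(allow_access("dba", "hr_records", "10.0.3.7"))   # False: outside the role's scope
```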

2015 Breaches Show That Current Cybersecurity Measures Aren’t Enough

January 22, 2016

By Corey Williams, Senior Director/Product Management and Marketing, Centrify

Last year my colleague Chris Webber predicted that “Breach Headlines will Change IT Security Spend.” Unfortunately, the breach headlines of 2015 were even more striking than most could predict. 2015 breaches involved high-profile criminal and state-sponsored attacks. Millions of personnel records of government employees, tens of millions of records of insurance customers, and hundreds of millions of customer records from various other companies were among the information compromised. This year we even heard of a BILLION dollar bank heist!

Many of these companies had implemented advanced malware protection and next-generation firewalls, and delivered regular security training sessions for their employees. Yet the breaches are still happening. What we know from cybersecurity experts such as Verizon and Mandiant is that nearly half of breaches occurring today are due to a single vulnerability that is still not adequately addressed.

Compromised user credentials, AKA the humble username and password, can provide outsiders with access to an organization’s most critical data, applications, systems and network devices. Through phishing, trojans and APTs, hackers today are focused on these digital “keys to the kingdom,” which are used to access sensitive data and systems.

For 2016, companies will (and must) adopt measures to mitigate the risk of compromised credentials. Yes, complex and unique passwords are a start but will never be enough. Multi-factor authentication will be implemented more broadly and across more apps and devices, adaptive access will be used to detect and stop suspicious login attempts and granular privilege management will be adopted to reduce the impact of compromised credentials. Companies will start to accept that compromised credentials are the new normal and will take steps to mitigate the risk they represent.
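As an illustration of what broader multi-factor authentication looks like under the hood, here is a minimal sketch of the standard time-based one-time password (TOTP, RFC 6238) second factor that many MFA products build on; it is a generic example, not any specific vendor’s implementation:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Time-based one-time password (RFC 6238) over an HMAC-SHA1 of the time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

# A stolen password alone is no longer enough: the login flow also demands the
# current code generated on the user's enrolled device.
shared_secret = "JBSWY3DPEHPK3PXP"  # example base32 secret, provisioned at enrollment
print("Enter the 6-digit code:", totp(shared_secret))
```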

To read more about the state of corporate security, see our State of the Perimeter survey results.

Containers Aren’t New, But Ecosystem Growth Has Driven Development

January 21, 2016

By Thomas Campbell, Container World 2016


Containers are getting a fair bit of hype at the moment, and February 2016 will see the first ever event dedicated to both the business and technical advantages of containers take place in Silicon Valley in the US.

Here, Container World talks to Kyle Anderson, who is the lead developer for Yelp, to learn about the company’s use of containers, and whether containers will ultimately live up to all the hype.

What special demands does Yelp’s business put on its internal computing?
Kyle Anderson: I wouldn’t say they are very special. In some sense our computing demands are boring. We need standard things like capacity, scaling, and speed. But boring doesn’t quite cut it: if you can turn your boring compute needs into something a cut above the status quo, it can become a business advantage.

And what was the background to building your own container-based PaaS? What was the decision-making process there?
KA: Building our own container-based PaaS came from a vision that things could be better if they were in containers and could be scheduled on-demand.

Ideas started bubbling internally until we decided to “just build it” with manager support. We knew that containers were going to be the future, not VMs (virtual machines). At the same time, we evaluated what was out there and wrote down what it was that we wanted in a PaaS, and saw the gap. The decision-making process there was just internal to the team, as most engineers at Yelp are trusted to make their own technical decisions.

How did you come to make the decision to open-source it?
KA: Many engineers have the desire to open-source things, often simply because they are proud of their work and want to share it with their peers.

At the same time, management likes open-source because it increases brand awareness and serves as a recruiting tool. It was a natural progression for us. I tried to emphasize that it needed to work for Yelp first, and after one and a half years in production, we were confident that it was a good time to announce it.

There’s a lot of hype around containers, with some even suggesting this could be the biggest change in computing since client-server architecture. Where do you stand on its wider significance?
KA: Saying it’s the biggest change in computing since client-server architecture is very exaggerated. I am very anti-hype. Containers are not new, they just have enough ecosystem built up around them now, to the point where they become a viable option for the community at large.

Container World is taking place on February 16-18, 2016 at the Santa Clara Convention Center, CA, USA. Visit www.containervent.com to register for your pass.

 

What Is Data Deduplication and Who Cares?

January 19, 2016

By Rachel Holdgrafer, Business Content Strategist, Code42

Data deduplication is a critical component of managing the size (and cost) of a continuously growing data store that you will hear about when you research endpoint backup. Intelligent compression or “single-instance storage” eliminates redundant data by storing one copy of a file and referencing subsequent instances of the file back to the saved copy.

There is some misunderstanding of deduplication in the marketplace even among analysts, in part because vocal endpoint backup vendors have positioned deduplication capabilities around the concept of upload speed and cost of storage rather than security and speed to recovery.

What is data deduplication?
Data deduplication is a process by which an enterprise eliminates redundant data within a data set and stores only one instance of each unique piece of data. Data deduplication can be performed at the file level or at the data block level, and can occur on either the endpoint device or the server. Each of these variables plays a role in how deduplication works and its overall efficiency, but the biggest question for most folks is, “Does data deduplication matter?” Or, put another way, “Is it a differentiator I should care about?”

If you are considering a robust and scalable enterprise endpoint backup solution, you can count on the fact that the software uses some sort of data deduplication process. Some solutions use global deduplication, others local deduplication and some use a combination of the two.

Local deduplication happens on the endpoint before data is sent to the server. Duplicate data is removed on the endpoint, and the clean data is then stored on the server in a unique, per-user archive. Each data set is encrypted with a unique encryption key.

Global deduplication sends all of the data on an endpoint to the server. Every block of data is compared to the data index on the server, and new data blocks are indexed and stored. Only one copy of each identical block is kept in the data store; duplicates are replaced with a redirect to the unique data block. Since multiple users must be able to access any particular data block, data is encrypted using a common encryption key across all data sets.
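The key-management consequence of that difference is worth spelling out. The structures below are a simplified, hypothetical sketch of the two layouts just described; encryption itself is left abstract, since real products use vetted ciphers such as AES:

```python
import os

# Illustrative key management only: real products use vetted ciphers (e.g. AES),
# which the Python standard library does not ship, so encryption is left abstract.

# Local deduplication: each user's archive is deduplicated on its own and sealed
# with that user's unique key. Compromising one key exposes only that archive.
local_archives = {
    user: {"key": os.urandom(32), "blocks": {}}  # per-user key and block store
    for user in ("alice", "bob")
}

# Global deduplication: one shared block store for everyone. Because any user's
# client may need to read a block first uploaded by someone else, the store is
# tied to a common key shared across all data sets.
global_store = {
    "common_key": os.urandom(32),
    "blocks": {},          # block hash -> single stored copy
    "user_to_blocks": {},  # user -> ordered block hashes per backed-up file
}
```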

Regardless of the deduplication method used, the actual process should happen silently in the background, causing no slow-down or perceived impairment for the end user.

So, should I care about global deduplication?
In short, not as much as some vendors might want you to care. Data deduplication—whether global or local—is largely considered table stakes in the world of enterprise endpoint backup. There are instances where each type may be beneficial—the key is to understand how each type affects your stored data, security requirements and restore times.

Privileged-Account Attacks Are Behind Every Major Cyber Crime

January 14, 2016

By Susan Richardson, Manager/Content Strategy, Code42

It’s unsettling: the people most accountable for an organization’s security are those most likely to compromise it. Privileged user accounts—IT administrators, application developers and C-suite executives—have been the cause of high-profile breaches over the past few years. Some cases involve intentional actions by the privileged users themselves, such as Edward Snowden’s NSA leaks and the South Korean Credit Bureau breach that exposed the personal information of almost half of all South Koreans. In other cases, cyber criminals steal or hack all-access credentials, as was the case with the 2013 Target and the 2014 Home Depot breaches. Regardless of the cause, studies of major breaches find that 100 percent of cyber crime attacks exploit privileged credentials.

Two factors make privileged-account attacks particularly devastating. First, the wide-ranging access granted by privileged credentials (dubbed “the keys to the kingdom” by Dell Security Executive Director John Milburn), whether acquired through insider threat or theft of credentials by an outside party, allows a perpetrator to move horizontally and often vertically through an organization’s data infrastructure, accessing sensitive information and installing malware to wreak further damage.

Second, privileged credential attacks make harmful activity harder to detect and address. For IT administrators and other privileged technical staff, accessing sensitive areas of a network doesn’t trigger red flags—it looks like everyday activity. Identifying suspicious action from executive accounts is also a challenge, as these individuals’ activities often fall outside the view of traditional data security.

A 2014 Ponemon Institute survey on insider threats reports that sixty-nine percent of IT security professionals feel they lack sufficient contextual information to identify suspicious activity from privileged accounts. So what can an organization do to mitigate the threat posed by privileged users?

Start by tightening and standardizing control over privileged user credentials. The Ponemon Institute’s Privileged User Abuse & The Insider Threat report found that forty-nine percent of respondents do not have officially defined policies for assigning privileged user access. This can lead to over-privileging—where users are granted greater access than is critically necessary for their job functions—and makes it extremely difficult to ensure accountability for all activity.

Carefully consider the level of access that is necessary for privileged users. If executives truly require all-access credentials, create an accurate log of which individuals possess which privileged credentials.

Make privileged user activities completely transparent to data security personnel. Enterprise-wide visibility into privileged user activities—whether on a server, in the cloud or on an endpoint device—is critical to establishing regular activity patterns, quickly identifying abnormal and suspicious activities, and determining context and intent in the event of a privileged account breach.
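One hypothetical way to operationalize “establishing regular activity patterns” is to baseline each privileged account’s observed actions and flag anything that falls outside the baseline; the events and account names below are invented for illustration:

```python
from collections import defaultdict

# Events: (account, action, target) tuples pulled from server, cloud and endpoint logs.
HISTORY = [
    ("db_admin", "login", "db-server-01"),
    ("db_admin", "query", "customers_db"),
    ("db_admin", "login", "db-server-01"),
]

def build_baseline(events):
    """Record which (action, target) pairs each privileged account normally performs."""
    baseline = defaultdict(set)
    for account, action, target in events:
        baseline[account].add((action, target))
    return baseline

def flag_anomalies(baseline, new_events):
    """Return events that fall outside an account's established pattern."""
    return [e for e in new_events if (e[1], e[2]) not in baseline[e[0]]]

baseline = build_baseline(HISTORY)
suspicious = flag_anomalies(baseline, [
    ("db_admin", "query", "customers_db"),      # normal
    ("db_admin", "copy", "hr_salary_records"),  # never seen before -> review
])
print(suspicious)
```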

Increase in Federal Government Cyber Attacks Lays Groundwork for 2016

January 11, 2016

By John Sellers, Vice President/Federal, Lancope

At the end of last year, we looked back and said that 2014 was the year of high profile cyber attacks on the private sector. Target, Michaels, Sony and several healthcare companies and universities were plastered all over the news headlines. So did it get any better this year? In retrospect, 2015 was the year that government agencies were targeted. From the theft of more than 21 million records of government employees and their families from the Office of Personnel Management to breaches at the IRS, Defense Department, and the Census Bureau, both military and civilian agencies suffered significant intrusions.

Following the discovery of the OPM breach, and later revelations regarding its size and duration, President Obama ordered the federal government to undertake a 30-day Cyber Sprint to improve network defenses. Federal CIO Tony Scott directed agencies to take several steps to improve their cybersecurity postures, most notably to accelerate the use of multi-factor authentication, especially for privileged users. Though short-term in execution, the sprint has resulted in agencies stepping up their implementation of multi-factor authentication – with long-term benefits – and tightening their standards for users’ access to different parts of their networks.

Another area of emphasis was, and remains, improved overall situational awareness within agencies’ networks – using tools such as dashboards to identify abnormal or unexpected behaviors by an asset within the network. For instance, in the OPM breach, important data was exfiltrated out of the network by a resource that never should have been communicating with the data repository.
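A dashboard that catches the kind of unexpected communication described above can start from something as simple as a baseline of which assets normally talk to which. Below is a toy, hypothetical sketch with made-up host names:

```python
# Baseline of observed (source, destination) flows on the network, e.g. from flow records.
KNOWN_FLOWS = {
    ("hr-app-01", "personnel-db"),
    ("backup-srv", "personnel-db"),
}

def review_flow(src: str, dst: str) -> str:
    """Flag any asset reaching a destination it has never been seen talking to."""
    if (src, dst) in KNOWN_FLOWS:
        return "ok"
    return f"ALERT: {src} has no history of communicating with {dst}"

# An unfamiliar resource pulling from the data repository stands out immediately.
print(review_flow("hr-app-01", "personnel-db"))
print(review_flow("contractor-vm-17", "personnel-db"))
```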

With all of these incidents, agencies started to take a deeper look at threats, with a specific look at insider threat. They began to reassess their misconceptions about what constitutes an insider threat. It’s easy to view Edward Snowden as an insider threat; it is harder, but necessary, to recognize that any employee action that causes loss of data or degradation of network performance, even if taken innocently, is also an insider threat.

All of these lay the groundwork for 2016.

First, there is every reason to believe there will be more breaches. Intrusions set a record in 2014 – which was promptly obliterated by the pace of intrusions in 2015. The United States and China may have come to an agreement on cyber theft of intellectual property, but there are other players interested in that type of information, and cyber espionage, whether by nation-states or non-state actors, will continue to accelerate.

The government is doing what it can to address fundamental cyber hygiene as quickly as possible, but these problems grew over time and it will take time to fix them. For many years, organizations focused on building bigger (fire)walls, fortifying the perimeters of their networks, but that can only go so far before the walls themselves cause performance degradation. It’s fair to say that organizations have prioritized network availability over security.

As part of resetting that tradeoff, and as an extension of the idea of situational awareness, I see the emergence of “context-aware security.” It is not sufficient to be able to see what’s happening on a network; it is important to know what normal, everyday activity looks like on that network in order to identify anomalous behavior, whether by a device or a user.

The application of the concept of risk management to data resources will continue. Agencies have realized that all data are not created equal – the “crown jewels,” the databases with the information critical to meeting agencies’ missions, need the greatest protection. The containerization of these data assets will reinforce the categorization of users and devices that are allowed access.

The normalization of cloud services within government agencies will lead to another security development – ongoing discussions about how to enforce policy and monitor security in an asset not actually owned and controlled by the government. FedRAMP has done a lot in this regard, but it does not require visibility at the transaction level – identifying who is making a request and where that request is going inside the cloud.

Software-defined networks will continue to spread, as they provide both affordability and flexibility in configuration and management. But has there been enough threat modeling of SDNs to understand what their potential vulnerabilities may be? There should be concern that attackers may figure out those weaknesses for us, and attention paid to finding them before they are targeted.

Another trend in government IT that raises security implications in 2016 is the rapid growth of the Internet of Things. This reinforces the need for context-aware security; the proliferation of devices, the explosion of data, makes it imperative to have a better understanding of “normal” network behavior. With IoT, the stakes become very high – whether it’s driverless cars or automated power systems, intrusions could put many lives physically at risk.

A final observation for the New Year: Agencies will continue to be hamstrung by procurement rules written to buy tanks, aircraft, and commodity products and services. Between FedRAMP and FITARA, the government is doing what it can to address fundamental flaws in its purchases of IT systems, software, and services. More reform is needed, even if it is directed solely at IT security products and services – until the rules are changed for IT security solution procurements, the government won’t be able to keep up with the changing threat landscape.

Cost of Data Breach, Loss and Remediation

January 7, 2016

By Rachel Holdgrafer, Business Content Strategist, Code42

The Ponemon Institute’s 2015 Cost of Data Breach Study: Global Analysis reported that fallout and clean up efforts associated with a data breach cost companies across the globe $3.79 million on average. In the United States, organizations paid an average of $6.53 million per instance. This includes the cost of the data loss itself, impact on the company’s reputation, lost business and actual remediation costs. A breach hits organizations hard, and squarely in the wallet. And year-over-year growth indicates the breach trend is heating up.

Estimating the outlay an organization can expect following a data breach is not a simple calculation. Factors affecting the per-record-cost of the breach include the cause of the breach, where it occurred and the organization’s industry or category. Organizations must also factor in the inevitable cost of lost business following breach.

Breach origin affects cost-per-record
In 2015, the average data breach in the United States exposed 28,070 records. If the breach was caused by human error or negligence, global average cost per record reached $134 or $3.76 million per breach. System glitches cost a global average of $142 per lost or breached record (or $3.9 million per breach) and malicious attacks, whether from inside or outside the organization, caused the most damage at a global average of $170 per record or $4.77 million per breach. This is up from $159 per record in 2014.
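Those per-breach figures follow directly from the per-record costs and the average breach size; a quick back-of-the-envelope check:

```python
# Average 2015 U.S. breach size and the global per-record costs cited above.
records_per_breach = 28_070

per_record_cost = {
    "human error or negligence": 134,
    "system glitch": 142,
    "malicious attack": 170,
}

for cause, cost in per_record_cost.items():
    total = records_per_breach * cost
    print(f"{cause}: ${total / 1e6:.2f} million per breach")
# -> roughly $3.76M, $3.99M and $4.77M, in line with the report's rounded figures.
```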

World trends and per capita costs
The per capita cost associated with a data breach varies based on where the breach occurred. Factors that increase per-capita cost include third party involvement, lost or stolen device value, time to identify and notify, and cost of consultants hired to mitigate breach damages. Factors that reduce the per-capita cost of a data breach include the use of an incident response team, the presence of a CISO at the organization, extensive use of encryption, employee training, business continuity management involvement, board-level involvement and insurance protection.

The United States continues to lead the pack at $217 per capita, with Germany a close second at $211. Conversely, the per capita cost of breach is lowest in India and Brazil, at $56 and $78 respectively.

Industry (dis)advantage
Remember, the estimates above are averages. Depending on its industry sector, some companies face much higher financial consequences when a data breach occurs. For example, data privacy and security are heavily regulated in health care and education organizations to protect the personal information of patients and students. Breaches in these industries reach $363 and $300 per record respectively, while breached transportation and public sector records cost just $121 and $68 per exposed record. Clearly, not all data are created equal.

Lost business
Loss of business in the wake of a data breach is often overlooked. Suffering a data breach may result in an abnormally high amount of customer turnover and diminished trust. Moreover, customers (and prospects) will view the organization with suspicion after news of the data breach is announced. To overcome objections and win new customers, internal teams may incur additional costs to increase marketing and customer acquisition activities.

The cost attributed to lost business is significant. In 2015, an organization could expect to fork over a global average of $1.57 million in lost business alone, up from $1.33 million in 2014.

Reality check
Data breach is a very real threat. Ponemon reports that as of 2014, sixty percent of companies had experienced more than one data breach in the previous two years. Organizations that aren’t worried about (or protected from) data breach because they’ve “never had one before” are increasingly vulnerable to financial risk. Savvy organizations work to limit the risk and put a strategy in place to mitigate damage when breach occurs.

To learn more about how endpoint backup helps organizations recover quickly from data breach or loss and save money in the process, download the white paper, Protecting Data in the Age of Employee Churn.
