100 Best Practices in Big Data Security and Privacy

August 26, 2016

By Ryan Bergsma, Research Intern, CSA

‘Big data’ refers to the massive amounts of digital information companies and governments collect about human beings and our environment. Experts anticipate that the amount of data generated will double every two years, from 2,500 exabytes in 2012 to 40,000 exabytes in 2020. Security and privacy issues are magnified by the volume, variety, and velocity of big data. As big data expands through streaming cloud technology, traditional security mechanisms tailored to secure small-scale, static data on firewalled and semi-isolated networks offer inadequate protection.

Recently our Big Data Working Group, led by Sreeranga Rajan and Daisuke Mashima, released the “Big Data Security and Privacy Handbook: 100 Best Practices in Big Data Security and Privacy,” outlining the 100 best practices that any big data service provider should follow to fortify its infrastructure. The handbook presents 10 compelling solutions for each of the top 10 challenges in big data security and privacy, which the working group previously identified in the 2012 CSA document titled “Top Ten Big Data Security and Privacy Challenges.”

New Security Challenges
It is not merely the existence of large amounts of data that creates new security challenges. In reality, big data has been collected and utilized for several decades. The current uses of big data are novel because organizations of all sizes now have access to the information and the means to collect it. In the past, big data was limited to very large users such as governments and big enterprises that could afford to create and own the infrastructure necessary for hosting and mining large amounts of data. These infrastructures were typically proprietary and isolated from general networks. Today, big data is cheaply and easily accessible to organizations of all sizes through public cloud infrastructure.

Software infrastructure developers can easily leverage thousands of computing nodes to perform data-parallel computing. Combined with the ability to buy computing power on-demand from public cloud providers, the adoption of big data mining methodologies is greatly accelerated. Large-scale cloud infrastructures, diversity of data sources and formats, the streaming nature of data acquisition and high-volume, inter-cloud migration all play a role in the creation of unique security vulnerabilities.

Big Data Best Practices
Now that we have enormous amounts of data and know the security and privacy risks it presents, what can enterprises do to secure their information? This CSA handbook provides a roster of 100 best practices, ranging from typical cybersecurity measures, such as authentication and access control, to state-of-the-art cryptographic technologies. CSA presents 10 solutions for each of the top 10 major challenges in big data security and privacy. Each section explains what the best practice is, why it is needed, and how it can be implemented.

Read the entire “Big Data Security and Privacy Handbook: 100 Best Practices in Big Data Security and Privacy” handbook. Learn more about CSA.

Information Security Promises Are Made To Be Broken

August 25, 2016

By Mark Wojtasiak, Director of Product Marketing, Code42

Morality insists that people will abide by the law and do the right thing; those promises have been, and always will be, broken.

Code42, along with almost every other major player in the information security space, attended Black Hat 2016 in Las Vegas. Like every other Vegas trade show, Black Hat’s expo hall featured video screens, beer, popcorn and soaring banners over circus-sized booths. Nearly every booth offered sweet swag and some, a chance to win cash if you listened to their well-rehearsed threat warnings and the promise that their indispensable technology would identify, stop, detect, prevent, extract, decode, crack, and protect the enterprise against an army of intruders or individual bad actors.

Taking it all in, I came to one realization: security marketing is flawed. Booth to booth, banner to banner, sign to sign, even pitch to pitch, security decision makers are fed “information security promises” that we all know we just cannot keep. It’s not due to a lack of honesty, but a lack of velocity. We all know the bad guys are more nimble and collaborative, and they move faster to exploit vulnerabilities in software. We know it will be days, weeks, even months before we can detect and respond. It’s at the core of why the security industry exists in the first place. This is why we have Black Hat, RSA, DEF CON, InfoSecurity World, Gartner Security Summits, Cyber Security Summits, and dozens of other events.

How do we start to fix the flaw?

  1. Extend a hand: Dan Kaminsky, in his keynote at Black Hat, evangelized a message that flies in the face of the competitive tradeshow landscape. He suggested—in lieu of competition—that information sharing about the endless supply of cyber threats would work faster to counter them. Our need to make things secure and functional and effective has just exploded…the need to cooperate, share code and fixes in the name of better security is now.
  2. Empower the user: Kaminsky went on to say, “people think that it’s a zero sum game, that if you’re going to get security everyone else has to suffer. Well, if we want to get security, let’s make life better for everybody else. Let’s go ahead and give people environments that are easy to work with…think in terms of the lines that you’re impacting, the time that you’re taking…”
  3. Enable the experts: Deloitte Cyber Risk Services researcher Keith Brogan told Infosecurity Magazine, “Sometimes products don’t work. But more often, they’re not being used correctly…organizations don’t always focus on how to use the products to enable business…people need to take threat intelligence, give it to the right people, and use it in informed, considered ways.”
  4. Embrace the reality: Dan Raywood wrote in Infosecurity Magazine about Arun Vishwanath, associate professor at the State University of New York at Buffalo, who says people are the problem: “the bad guys are really good at the social side and people are easier to compromise and once compromised, those attackers have got the keys to the kingdom and that is the reality we grapple with.”

Modern endpoint backup is a good first step to making good on information security promises. Heck, that’s one of the main reasons Code42 exhibits at the likes of RSA, Black Hat and Gartner events. With visibility and control of data on endpoints, organizations can protect and monitor data movement and restore data following any data incident. Modern endpoint backup is continuous, automatic, silent and simple. The user is empowered not only to protect the data they store on their laptops, but to restore it when things go bad.

Securing end-user data makes the organization more secure and functional and effective—immediately—and closes gaps between IT, Security, Legal and HR teams to expose insider threats. By implementing this fundamental security layer, organizations embrace the reality that data loss is inevitable and that end users are both the target and the culprit of data theft, loss and breach.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

Which Approach Is Better When Choosing a CASB? API or Proxy? How About Both?

August 22, 2016

By Bob Gilbert, Vice President/Product Marketing, Netskope

There have been recent articles and blog posts arguing that the API approach is better than the proxy approach when it comes to selecting a cloud access security broker (CASB). The argument doesn’t really make sense. Both approaches have their advantages and disadvantages, but each covers unique use cases. While you could certainly select a CASB that supports one versus the other, why not choose a CASB that offers both, so you have the option to combine the two and address expanded use cases?

Pitting one against the other is like comparing a spoon vs. a fork. A spoon was designed to hold softer food in addition to liquid so you can place it in your mouth and eat a meal. Spoons come in various sizes depending on the application. In a similar fashion, an API deployment method is primarily focused on a set of specific use cases that includes being able to inspect content in sanctioned cloud apps and support for out-of-band policies such as restrict access, revoke shares, quarantine, and encrypt.

A fork, on the other hand, was designed primarily to grab and hold solid foods for eating. That is a job that the spoon cannot do. In a similar fashion, a proxy deployment method is primarily focused on a specific set of use cases around providing real-time visibility and control over cloud traffic, and depending on the type of proxy, you can cover both sanctioned and unsanctioned cloud apps in real time. Real-time coverage of unsanctioned cloud apps is not possible with an API deployment method. In addition to use cases, there is the comparison of effort to deploy and use. You can argue that a fork requires a bit more care than a spoon. You might not give that fork to a toddler, for example, but a spoon would be less risky, with the trade-off, of course, that they might have a hard time eating their vegetables with that spoon. Similarly, a proxy requires an inline deployment, and a forward proxy specifically requires extra configuration and care. The effort can be worth it given the use cases.

Let’s get back to my original question: why choose one versus the other? Choose a CASB that covers both an API method of deployment and multiple proxy methods of deployment. You can choose only one, or combine them to expand your use case coverage. Should we start calling API + proxy a spork?

Here is a table that compares use case coverage for API vs. proxy to help you decide which one to choose, or whether to choose both.

[Table: API vs. proxy use case coverage comparison]

Five Scenarios Where Data Visibility Matters—A Lot

August 19, 2016

By Charles Green, Systems Engineer, Code42

In case you were off enjoying a well-deserved summer holiday and are, like I am, a firm believer in disconnecting from the world while on holiday, you might have missed the recent hacker document dump of the U.S. Democratic National Committee (DNC) emails. Personal note: if you did find a place remote enough to not hear about this, please send me the coordinates as I want to visit there ASAP.

Information security professionals have long operated under the mantra ‘prevention is ideal, but detection is a must.’ Many professionals have extended that mantra to include the concept of ‘response’ to detection. Usually response is considered in terms of technical tools to speed remediation and improve prevention of future attacks. The DNC hack, like many other hacks before it, highlights the financial value of knowing what was in the data that was exposed.

When it comes to evaluating the monetary value of knowing what data is exposed, ransomware is the ultimate capitalistic exercise. Hackers attempt to determine the right balance of 1) the organization’s tolerance for data loss, including the safeguards the organization may have in place; 2) the value the organization places on the data; and 3) the value it places on public knowledge of a data loss incident. The ransomer’s goal is simple: set a price point that the organization is most likely to pay.

While ransomware is foremost in many of my conversations with C-level executives, the danger of an insider threat is also a recurring topic of conversation. In the past six months I’ve been asked for help with the following:

  • “Our top designer went to work for our biggest competitor, what data did they take with them?”
  • “We had a friendly merger with another firm but their top 6 engineers left shortly after the merger, did they take any data with them?”
  • “One of our senior execs’ laptops was stolen; do we have any government-mandated reporting requirements?”

All of these questions ultimately seek to assign a dollar value to knowing what data was exposed and what information was in that data.

A well-designed modern endpoint backup solution can help you know the value of your data and remediate those threats by:

  1. Performing point-in-time restores to before ransomware hits.
  2. Showing you what data was copied to USB devices or personal cloud accounts before an employee leaves your organization.
  3. Helping you determine what data was on a stolen device and the extent of your exposure.
  4. Making it easy for employees to restore their data after a viral ransomware incident.
  5. Never paying a ransom.

For years, those of us in the backup space have defined our value proposition as: Knowing what data was on a device that crashed/was lost/was stolen. Modern endpoint backup extends visibility to the data on a device that was compromised by an insider or a hacker.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

CISOs: Do You Have the Five Critical Skills of a DRO?

August 11, 2016

By Mark Wojtasiak, Director of Product Marketing, Code42

CISOs exploring career advancement opportunities have a new consideration, according to Gartner VP and Distinguished Analyst Paul Proctor. At a Gartner Security & Risk Management Summit presentation in June, Proctor talked about the evolution of a new enterprise role, which is a logical next step for some CISOs: Digital Risk Officer (DRO).

While few organizations have formally created the role, Gartner predicts that by 2020, 30 percent of large enterprises will have a DRO in place. Why? Because the increasing integration of digital technologies into business operations and products—the Internet of Things (IoT)—requires someone who can assess technology risk throughout the digital enterprise and inform executive decisions that impact business processes. An example is assessing the physical system that gathers personally identifiable information from wearable technology. The DRO would look at how the data is used in marketing and sales operations, identify privacy issues, and consider the legality of monetizing the data as a source of revenue.

Proctor reports that while CISOs may not have the title, many have gradually taken on some of the tasks associated with a DRO, such as:

  • Reviewing contract clauses for technology risk and security requirements
  • Developing policies to address the growing use of technology not controlled by IT
  • Addressing the privacy and security of data gathered by IoT devices
  • Providing security expertise to Mode 2 projects
  • Dotted-line reporting to operational risk groups

For CISOs interested in making the transition, here are the skills needed, according to several experts:

  1. Fully comprehend how the business is run, recognize desired strategic outcomes and speak the language of executives in order to articulate digital risk factors in operational and financial terms.
  2. Understand IT, IoT and operational technology (OT), and the overlap of technology and the physical world.
  3. Have the ability to work in a bimodal organization, supporting Mode 2 projects.
  4. Understand global privacy and e-commerce regulations.
  5. Have a people-centric style to work across the organization in collaboration with businesses, legal, compliance, operations, and digital marketing and sales.

Essentially, the DRO’s role is to bridge the cultural divide between business and technology, says Nick Sanna, president of the Digital Risk Management (DRM) Institute. To do that requires building the organizational processes and best practices necessary to measure and manage digital business risk—including mapping important business processes, assessing exposure to threats and prioritizing risk mitigation initiatives. Sanna admits that building a DRM program will be a complex challenge for DROs, but also a great personal stretch opportunity.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

API vs. Proxy: How to Get the Best Protection from Your CASB

August 11, 2016

By Ganesh Kirti, Founder and CTO, Palerra

Cloud Access Security Broker (CASB) software has emerged to help IT get its arms around the full cloud security situation. CASBs are security policy enforcement points between cloud service users and one or more cloud service providers. They can reside on the enterprise’s premises, or a cloud provider can host them. Either way, CASBs provide information security professionals with a critical control point for the secure and compliant use of cloud services across multiple cloud providers. They enforce the many layers of an enterprise’s security policies as users, devices, and other cloud entities attempt to access cloud resources.

Exactly how the CASB integrates your security policies with cloud access makes a big difference in the comprehensiveness of your security solution and network performance. There are two main CASB deployments: API and Proxy.

Proxy-based Solution
An in-line proxy solution checks and filters known users and devices through a single gateway. Because all traffic flows through a single checkpoint, the proxy can take security action in real-time. Unfortunately, the single checkpoint also means that it slows network performance, and only secures known users. Further, proxy-based solutions only secure SaaS cloud services, leaving IaaS and PaaS clouds vulnerable.

[Figure: Proxy-based CASB architecture]

API-based Solution
An API-based CASB is an out-of-band solution that does not follow the same network path as the data. Since the solution integrates directly with cloud services, API-based solutions introduce no performance degradation, and they secure both managed and unmanaged traffic across SaaS, IaaS, and PaaS cloud services.

[Figure: API-based CASB architecture]

Some industry experts recommend a multimode approach, which is a CASB architecture that supports both API and proxy approaches. In reality, both API and proxy approaches achieve multimode functionality, though they do it differently.

[Figure: Multimode CASB architecture]

As enterprises move more business-critical functions to the cloud, implementing a CASB has become a mandatory control. Prior to choosing a CASB, it is important to know the facts on the alternatives so you can make the choice that is best for you.

To learn more, join Palerra CTO Ganesh Kirti and CSA Co-Founder and CEO Jim Reavis as they discuss “API vs. Proxy: Understanding How to Get the Best Protection from Your CASB” today. Register for the webinar now, and download the full white paper for more information about API vs. Proxy CASB architecture.

Ransomware Growing More Common, More Complex; Modern Endpoint Backup Isn’t Scared

August 5, 2016

By Susan Richardson, Manager/Content Strategy, Code42

The growing ransomware threat isn’t just about more cybercriminals using the same cryptoware tools. The tools themselves are rapidly growing more sophisticated—and more dangerous.

Ransomware growing exponentially, with no signs of slowing
A new report from InformationWeek’s Dark Reading highlights key trends in the ransomware landscape, starting with the dramatic increase in total ransomware attacks. Ransomware attacks increased by 165 percent in 2015 (Lastline Labs), and this trend isn’t letting up. Anti-spyware company Enigma Software reported a 158 percent jump in the number of ransomware samples it detected between February and March 2016—and April 2016 was the worst month on record for ransomware in the U.S.

It’s also clear that ransomware growth is independent of the overall increase in cyberattacks over the past several years. The 2016 DBIR reported that phishing attacks are more common than ever, and Proofpoint found that in the first quarter of 2016, nearly 1 in 4 (24%) of all email attacks using malicious attachments contained just one strain of ransomware (Locky).

Not just more common—ransomware growing stronger and more effective
Most alarmingly, Dark Reading reports that cyberattackers are rapidly evolving and diversifying their ransomware arsenal. Ransomware has become big business, and with that cash flow comes development of more complex ransomware strains and more clever techniques for infecting targets. In an ironic twist, creators of popular ransomware such as Locky are now working to “protect” their cryptoware from enterprising copycats who create knockoff versions and variants. No honor among thieves, indeed.

Better phishing lures, more brute-force attacks
Dark Reading spotlighted two examples of this increasing sophistication. On the one hand, cybercriminals are developing new, more obscure ways of luring a user into installing ransomware. From personalized landing pages to actually hacking a device’s boot-up process, stopping these techniques is much more complicated than just saying, “Don’t click suspicious links.”

At the same time, attackers increasingly skip the phishing lure and go straight to brute-force attacks on internet-connected remote desktop servers. For the skilled hacker, this technique is more reliable than phishing, and immediately gets the attacker much deeper into an enterprise network, allowing them to compromise more devices and ransom more data.

“No backup, no protection”
With ransomware mutating into an even bigger threat, Dark Reading encouraged companies to go back to basics, citing data backup as the essential first step in enterprise ransomware defense. We couldn’t agree more. No matter how complex and advanced the ransomware, modern endpoint backup isn’t scared. Modern endpoint backup gives you guaranteed recovery in the face of ransomware. But its protection goes beyond backup: Modern endpoint backup sees your endpoint data, sees your users’ endpoint activities, and gives you the visibility to identify and neutralize an attack as soon as it hits.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

Take-aways from the 2016 Gartner Magic Quadrant for Secure Web Gateways

August 3, 2016

By Atri Chatterjee, CMO, Zscaler

Today’s smart enterprises, regardless of size, should be looking at a Secure Web Gateway (SWG) as part of their defense-in-depth security strategy. In Gartner’s opinion, if you aren’t using an SWG, you are in all likelihood leaving a hole in your enterprise security strategy. Firewalls – previous, current or next generation – are not enough because they do not provide the level of protection needed: deep content inspection of all web traffic, including encrypted (SSL) traffic, data leak prevention (DLP) and application control.

Once you’ve decided to deploy SWG technology, or are looking to upgrade or refresh your existing SWG environment, it’s important to consider the various deployment options: appliance, cloud or hybrid. In Gartner’s words:

“The market for secure web gateway solutions is still dominated by traditional on-premises appliances. However, cloud-based services continue to grow at a faster rate than appliances, leaving many vendors struggling to adapt.”

Gartner goes on to estimate that cloud-based SWG security is growing at a significantly higher rate than traditional appliance-based security – a 35% CAGR for cloud-based solutions compared to 6% for on-premises appliances. So it should be no surprise that cloud-based solutions play an important role in Gartner’s 2016 Magic Quadrant for SWG.

With this in mind, I recently sat down with the Cloud Security Alliance’s (CSA) Founder and CEO Jim Reavis to talk about the results presented in Gartner’s SWG Magic Quadrant, the role of SWG in enterprise security, and what the future holds in store for security. In the event you missed our webcast, you can listen to it here.

About Zscaler
Zscaler is revolutionizing Internet security with the industry’s first Security as a Service platform. As the most innovative firm in the $35 billion security market, Zscaler is used by more than 5,000 leading organizations, including 50 of the Fortune 500. Zscaler ensures that more than 15 million users worldwide are protected against cyber attacks and data breaches while staying fully compliant with corporate and regulatory policies.

Zscaler is a Gartner Magic Quadrant leader for Secure Web Gateways and delivers a safe and productive Internet experience for every user, from any device and from any location — 100% in the cloud. With its multi-tenant, distributed cloud security platform, Zscaler effectively moves security into the internet backbone, operating in more than 100 data centers around the world and enabling organizations to fully leverage the promise of cloud and mobile computing with unparalleled and uncompromising protection and performance.


A Game of Pwns: A Storm of (Pas)swords

July 25, 2016

By Jacob Ansari, Manager, Schellman

Despite their perpetual status as old news, passwords and their security weaknesses continue to make headlines and disrupt security in ever-expanding ways, while the usual advice about better protection continues to go unheeded or, more worryingly, no longer addresses the threats. As attacks continue to improve, they show that a cracked password for a given user account often has significant value beyond just the compromised environment.

We do not sow.

Attackers and security testers have been cracking passwords for decades. The usual situation involves capturing the cryptographic hash of the password: the password has undergone a one-way cryptographic transformation that has no decryption function (unlike encryption, which uses a key to encrypt or decrypt), so the only way to discover the value is to guess it, transform the guess using the same hash function, and compare the output against the captured password hash. While this may seem improbable, attackers have successfully cracked passwords this way for a long time. In part, this occurs as a result of bad password hash functions, but most of the success comes from easily guessed passwords.
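
To make the guess-hash-compare loop concrete, here is a minimal sketch in Python. The tiny wordlist, the stand-in captured hash, and the use of unsalted SHA-256 are illustrative assumptions, not a depiction of any particular breached system.

```python
import hashlib

# A stand-in for a password hash captured from a breached system.
# (Here we fabricate it ourselves so the example is self-contained.)
captured_hash = hashlib.sha256(b"sunshine1").hexdigest()

# A tiny illustrative wordlist; real attacks use lists with millions of entries.
wordlist = ["password", "letmein", "dragon", "qwerty", "sunshine1"]

for guess in wordlist:
    # Transform the guess with the same hash function and compare the output.
    if hashlib.sha256(guess.encode()).hexdigest() == captured_hash:
        print(f"Cracked: {guess}")
        break
else:
    print("No match in this wordlist")
```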

The North remembers.

Security incidents that expose passwords have a few significant effects. The most obvious, that an attacker can access that user’s account, is perhaps the least significant, barring a compromise of something significant like an online banking application, a work-related system, or a regularly used social media platform. The more likely scenario is that this compromised password is the same password used by the same individual for other accounts, and an attacker now has a pretty good guess at the password of something more valuable. The less well understood, but perhaps more important, consideration is that actual password disclosures, particularly on a large scale, improve the ability to crack passwords in the future.

That’s what I do. I drink and I know things.

Cracking passwords by guessing purely random strings of characters takes a comparatively long time in terms of computing effort. Because users typically pick easily guessed passwords, those who crack passwords have learned to take some shortcuts. In the beginning, these were lists of words, derived from dictionaries or other sources, but containing little insight about how users actually selected passwords.

Advancements such as rules for modifying words from the list (substituting a “3” for an “e” or appending a symbol like a “!” to the end of a word), or narrowing down brute-force attempts to set patterns like four alphabetic characters followed by three numerals, brought about some incremental improvements, but still guessed at the nature of user passwords rather than relying on much actual data. However, that changed with security incidents that exposed large numbers of passwords, such as the RockYou incident in 2010 or the LinkedIn incident in 2012. These events offered password crackers, both the proverbial good guys and bad guys, major insight into the ways users select passwords. As such, password crackers can use previously cracked passwords as the basis for new password cracking efforts. Given the high probability of password reuse, the ever-increasing knowledge of the patterns that successfully match user passwords, and the easy availability of specialized hardware and software tools (you can have an effective cracker up and running on Amazon Web Services in less than an hour), each significant breach of credentials drives the feedback loop: the breach improves password cracking, which results in a more effective crack of the next password breach, which in turn improves our collective knowledge and ability to crack passwords.
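
As a rough illustration of the rule-based mangling described above, the sketch below applies a couple of simple rules (character substitutions and appended symbols) to a base word. The specific rules and base words are illustrative assumptions; real rule sets used by cracking tools are far larger and are tuned against data from previous breaches.

```python
# Generate candidate passwords from a base word using a few simple mangling rules.
def mangle(word: str) -> set[str]:
    candidates = {word, word.capitalize()}
    # Common "leetspeak" substitutions.
    candidates.add(word.replace("e", "3").replace("a", "@").replace("o", "0"))
    # Append a symbol or digit to every candidate generated so far.
    candidates |= {c + "!" for c in list(candidates)}
    candidates |= {c + "1" for c in list(candidates)}
    return candidates

for base in ["dragon", "welcome"]:
    print(base, "->", sorted(mangle(base)))
```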

And make no mistake, the dead are coming.

Simply put, passwords that our minds are capable of remembering without assiduous effort are too susceptible to password cracking techniques. Also, reusing the same password across more than one account creates a significant risk that an attacker who obtains the password can leverage that credential to attack the user or the user’s employer more significantly (perhaps more embarrassing than dangerous is the recent news alleging that Mark Zuckerberg’s LinkedIn password was the same bad password he used for Twitter and Pinterest, although it illustrates the point quite splendidly). While regulatory requirements may call for a certain password complexity that humans can easily remember, and security advice from a few years ago suggests a few memory tricks to improve password selection and recall, the reality of modern cracking efforts leads to this: select a unique, random, lengthy password (ideally 20 characters or more) for each account and do not reuse it.
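
For the account holder’s side of that advice, a long random password is easy to generate programmatically. The sketch below uses Python’s standard secrets module; the 24-character length and the character set are illustrative choices.

```python
import secrets
import string

# Characters drawn from letters, digits, and punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    # secrets draws from a cryptographically secure random source,
    # unlike the random module's default generator.
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(generate_password())  # store the result in a password vault, not in your head
```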

The practical outcome of needing large, random, unique passwords is the urgent need for some sort of password vault.

Today, this typically takes one of two shapes: an application run locally on a computer or mobile device, such as KeePass or PasswordSafe, or a web service like LastPass. Like most security decisions, this involves a series of tradeoffs for matters of trust, usability, and protection of your credentials. Using an open-source local application like KeePass gives you perhaps more control over your accounts than a web service like LastPass. Additionally, the cost is usually $0 for the open source option. However, LastPass offers a number of useful features like accessibility on your mobile devices, a forgot password feature (which local applications usually do not have), and some ease-of-use features for browsers. Both also have security issues, as LastPass has reported some security incidents and local applications have security vulnerabilities like every other piece of software in existence.

That said, either choice constitutes a significant security improvement over reusing easily guessed passwords, and the cost-benefit analysis for choosing one over the other matters very little when placed next to the problem of doing neither.

I am the horn that wakes the sleeper. I am the shield that guards the realms of men.

As a consumer of Internet services, the best security advice is to begin transitioning to the use of a password vault of some sort as soon as possible, along with enabling multi-factor authentication for as many accounts as will support it (e.g., Amazon, Google). As an organization that operates applications where users authenticate, support strong passwords (shame on you if your site has a maximum length or disallows certain special characters) and start working on supporting multi-factor authentication. For your password storage, follow the current best practices for using slow hash functions like bcrypt with good, random salts, and move away from outdated hash functions like MD5 or SHA1 (which we still frequently see during assessments). Attacks get better, not worse, and attacks against passwords get better with almost blinding speed. Incremental defenses like requiring a few more characters of minimum length won’t suffice; a good defense needs to change the game about authentication altogether.
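
For the server side, here is a minimal sketch of that storage advice, using the third-party Python bcrypt package (an assumption about your stack; Argon2 or scrypt implementations serve the same purpose). The cost factor of 12 is an illustrative choice to tune against your own hardware.

```python
import bcrypt  # pip install bcrypt

def hash_password(plaintext: str) -> bytes:
    # gensalt() produces a random salt; the cost factor makes each hash deliberately slow.
    return bcrypt.hashpw(plaintext.encode("utf-8"), bcrypt.gensalt(rounds=12))

def verify_password(plaintext: str, stored_hash: bytes) -> bool:
    # checkpw re-derives the hash using the salt embedded in stored_hash and compares.
    return bcrypt.checkpw(plaintext.encode("utf-8"), stored_hash)

stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", stored)
assert not verify_password("not the password", stored)
```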

Modern Endpoint Backup Sees Data Leak Before It Hurts

July 22, 2016

By Ann Fellman, Vice President/Marketing and Enterprise Product Marketing Director, Code42

Picture this: You’re enjoying a beautiful summer Saturday, watching your kid on the soccer field, when your phone rings. It’s work. Bummer. “Hi, this is Ben from the InfoSec team. It appears that John Doe, whose last day is next Friday, just downloaded the entire contents of his work hard drive to an external drive. Given his role, there’s a high probability that it includes confidential and sensitive employee data.”

There goes your Saturday.

It happened to us—it’s probably happened to you
This happened to us at Code42 a few months ago. A longtime employee was coming up on his last day and innocently wanted to take years of work with him. We’ve all probably done this—grabbed some templates and examples of our work to use in our next chapter—and instead of sorting through years’ worth of work, it’s just easier to copy the whole drive. Unfortunately, this is against company policy and puts the company at risk. And in this case, the drive included confidential and sensitive files related to company personnel.

Not all data theft is malicious, but it’s still dangerous
Of the fifty percent of departing employees who take sensitive or confidential data, most are not malicious. Some don’t know the rules; some don’t follow the rules; and most see no harm in their small actions. At Code42, we’re fortunate to have great people, and they have good intentions. But even the best intentions can have terrible consequences, especially when it comes to enterprise data security.

Too often, “innocent” data taken by employees inadvertently includes sensitive corporate data such as financial information, employee data, trade secrets or even customer information. There are risks and costs associated with leaked data; but knowing what was leaked and where it is greatly reduces the risk and damages.

Code42 CrashPlan avenges data theft—saves the weekend
Back on the sunny soccer field, where I might have spent horrible moments dreading the fallout from this particular data pilfer, I instead make a single phone call and spend no time worrying about the cost of tracking down or trying to recreate lost files, or dealing with a potential breach.

With Code42 CrashPlan, I have complete certainty that all of this employee’s endpoint data is backed up, down to the minute. And I know our InfoSec team can tell me what the data is, what was copied and where it was copied to—down to the serial number of the external drive.

Modern endpoint backup: Sees what data you have and knows where it goes
From there, the resolution is quick and—while it sounds dramatic—painless. A company representative contacts the departing employee, explains that we observed the contents of the hard drive being copied to an external drive, and requests that the drive be returned to Code42 on Monday morning. The employee promptly returns the drive.

And the best part of the story: I enjoyed the rest of the weekend, without the threat of data theft clouding the summer sky.

This is the power of modern endpoint backup. No matter where insider threat comes from—malicious lone wolves, employees conspiring with external actors, or well-intentioned, accidental rule-breakers—modern endpoint backup sees it all, in real time.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.