Tentative Safe Harbour Agreement Reached—For Now

February 12, 2016 | Leave a Comment

By Rachel Holdgrafer, Business Content Strategist, Code42

The European Union and the United States have reached a preliminary agreement that would allow companies doing business on both sides of the Atlantic to resume transmitting individuals’ digital data.

Struck down in October 2015 for failing to sufficiently protect European citizens’ data, the transatlantic data transfer governed by the Safe Harbour Agreement is the lifeblood of thousands of companies, including Google, Amazon and Pfizer. European privacy groups demanded that an agreement be reached by January 31; while negotiators missed that deadline, they reached an agreement on February 2.

The new agreement guarantees that U.S. intelligence agencies would not have:

indiscriminate access to Europeans’ digital data when it is sent across the Atlantic in the course of business.

It also puts more responsibility on U.S. companies to protect the data of European citizens. Additionally, the European Commission reports that European citizens who believe that their data has been misused will have several possibilities for redress.

Companies have deadlines to reply to complaints. European data protection authorities (DPAs) can refer complaints to the Department of Commerce and the Federal Trade Commission. In addition, Alternative Dispute Resolution will be free of charge. For complaints about possible access by national intelligence authorities, a new Ombudsperson will be created.

But companies doing business in both Europe and the United States are not in the clear yet. The New York Times reports:

Many obstacles still await the deal, which must be officially approved by the union’s 28 member states. National data protection regulators have yet to give their support to the pact, and European privacy-rights advocates are preparing to file legal challenges seeking to overturn it.

European privacy groups are skeptical that the U.S. will uphold the data protection rights that European citizens demand and “support further restrictions on how companies can move the data if they suspect it may still be misused.”

Can Wanted Cybercriminals Be Stopped?

February 11, 2016 | Leave a Comment

By Leo Taddeo, Chief Security Officer, Cryptzone

Part 2 of a 2-part series

I recently wrote about the challenges around cybercrime reporting in the US. Organizations often fail to notify law enforcement after discovering a network intrusion – partly because of a reluctance on their part to admit having been a victim, but also because they may not be aware which agency has jurisdiction over their case.

The outcome of this is that a lot of cybercrime is never investigated by the authorities, and a lot of hackers – some of them extremely prolific – are never brought to justice. This makes it difficult for law enforcement to create a meaningful deterrent. The financial rewards of cybercrime are often very high; the risk of getting punished is very low.

However, it’s not just a lack of cybercrime reporting that feeds into this difficulty. There’s also the fact that while the US has had a lot of success in apprehending certain high-profile hackers, other wanted cybercriminals – individuals of similar, if not greater, stature – remain at large with little chance of arrest.

Some of these people are on the FBI’s most-wanted list. Bringing them to justice would act as a significant deterrent for other would-be hackers, and therefore do much to protect the networks of organizations in the US and elsewhere. But can they be stopped?

Apprehending Foreign Cybercriminals is Difficult
One of the key reasons the US has difficulty in stopping wanted cybercriminals is that many of them are located in China and Russia, which significantly hinders our ability to bring them to justice.

I’ve written before about the hacking threat from China and the 2014 indictment of five Chinese military officers for stealing intellectual property from American companies; naturally, those officers have never been extradited. And while President Xi and President Obama have since agreed not to “knowingly support” cybercrime, I would argue that this agreement is unenforceable. In all likelihood, China will continue to use hacking as a tool to further its global power.

Still, at least we’ve opened a discussion. No such dialogue has been sought with Russia, which means US authorities can’t rely on the cooperation of their Russian counterparts when it comes to cracking down on cybercriminal activity originating in that country.

Russian hackers have a long history of targeting financial institutions in the US, and – by all accounts – remain free to do so with relative impunity. Evgeniy Bogachev, one of the most prolific cybercriminals in the world, is a key example; despite having a bigger FBI bounty on his head – $3 million – than any other hacker, he’s reportedly treated as nothing less than a hero in Russia. One policeman in his hometown of Anapa told the British press in 2014: “I’d pin a medal on the guy.”

This is a man whose cybercriminal enterprise is believed to have stolen over $100 million from foreign banks. It’s hard to say for sure if being on the FBI’s most-wanted list has made him any less prolific, but is there any reason for him to stop what he’s doing?

US Organizations Must Act Now to Improve Security
As I said in my last blog post, law enforcement has a hugely important role to play in the fight against cybercrime. By gathering and sharing up-to-date threat intelligence, investigating network intrusions, and ultimately arresting and prosecuting hackers, agencies like the FBI make America a safer place to do business.

At the same time, issues like our inability to extradite wanted cybercriminals from Russia and China, as well as the fact that so many cyber attacks go unreported, mean no organization can rely on the government to protect it from this growing threat. Only by implementing the best possible controls – securing their networks, applications and data – can American companies truly defend themselves.

Don’t wait for the bad guys to be arrested; strengthen your defenses to stop the bad guys from getting in.

Learn more about Cryptzone’s secure access and data security solutions.

Five Surprising Truths from the Cloud Security Alliance’s Latest Survey

February 8, 2016 | 1 Comment

Survey of 200 IT leaders finds that cloud perceptions, IT security reporting structures, and cloud security approaches are changing

By Cameron Coles, Senior Product Marketing Manager, Skyhigh Networks

After years of IT leaders loudly voicing their concerns about the security of the cloud, trust in cloud services is now virtually on par with on-premises applications. That’s according to a survey conducted by the Cloud Security Alliance released this week (download a free copy here). It’s just one finding in the 26-page report, drawn from a survey of over 200 IT executives about the state of cloud adoption, the evolving role of IT, and how enterprises approach cloud security. While trust in the cloud may be on the rise, that doesn’t mean companies aren’t looking to implement many of the same security controls they did for their on-premises systems.

“As data leaves the company data center for the cloud, IT is caught between delivering technologies to support innovation and growth in the business and securing sensitive data against proliferating threats.”
– Cloud Security Alliance, “The Cloud Balancing Act for IT: Between Promise and Peril”

64.9% of IT leaders trust the cloud as much or more than on-premises software
It’s a well-worn refrain, heard whenever IT executives discuss the merits of cloud projects, that “the cloud is not secure.” That is changing. Despite concerns about the security of corporate data moving to the cloud, just 35.0% of IT leaders believe that, as a general rule, cloud-based systems of record are less secure than their on-premises counterparts. A majority, 64.9%, say that the cloud is either more secure than on-premises software or equally secure. One potential reason is that cloud providers like Salesforce and Workday have invested heavily in security, extending beyond even what some of their customers do to secure their on-premises applications.

While IT leaders are more confident in the platform security of cloud applications, there’s still a lot that can go wrong. Careless or malicious insiders, compromised accounts, and misconfigured security settings can all lead to data loss, even within enterprise-ready cloud services whose platforms are arguably more secure than what most companies run in their own data centers. Perhaps that’s why the ability to enforce corporate security policies is the number one barrier to moving applications to the cloud, indicated by 67.8% of IT leaders. That’s followed by the need to comply with regulatory requirements (61.2%) and lack of budget to replace legacy systems (31.6%).

64.9% of IT leaders say the cloud is as secure or more secure than on-premises software

The top barrier to securing data is a lack of skilled security professionals
Surprisingly, the biggest barrier to stopping incidents that result in data loss is not a limitation of security technology or budgets; it’s a human resource limitation. Companies are struggling to find and hire skilled employees to take advantage of their security technology, because businesses are hiring IT security professionals faster than the market can educate, train, and develop them. In August, it was reported that JP Morgan expected to spend $500 million on cyber security in 2015, double its 2014 budget of $250 million. Rapid hiring like this is leading to a shortage of people to fill open positions.

A 2015 report from labor analytics firm Burning Glass shows that cyber security job postings grew 91% from 2010 to 2014, more than three times the rate of growth in all IT jobs. More than a third (35%) of cyber security jobs require industry certifications such as CISSP, 84% of postings require at least a bachelor’s degree, and 83% require at least three years of experience. However, education, certifications, and experience pay off for security professionals. The same report revealed that cyber security jobs have a 9% salary premium over other IT jobs. That’s why some say it’s the hottest job of 2016 and one with job security.

24.6% of companies would pay a ransom to prevent a cyber attack
In the now infamous Sony cyber attack, hackers contacted the company and demanded a ransom before making over 100 terabytes of sensitive company data public and crippling its IT infrastructure. In the CSA survey, the greatest concern reported by IT leaders about the impact of a cyber attack is the loss of reputation and trust, followed by financial loss. In the Sony attack, external analysts estimate it cost the company $35 million to deal with the immediate aftermath of the data breach and another $83 million to completely rebuild its damaged IT infrastructure.

It’s not clear whether Sony could have stopped the release of company data if it had responded to hacker demands in the days leading up to the data dump (or if, indeed, the company attempted to answer the demands of the attackers). Nevertheless, if faced with a situation in which hackers have stolen information in a major breach and plan to make the information public, 24.6% of companies would be willing to pay a ransom to prevent the release of sensitive information. Across all companies, 14.0% would be willing to pay a ransom in excess of $1 million to prevent the release of such information. Not surprisingly, companies with cyber insurance were more likely to be willing to pay a ransom to stop a breach (28.6% vs 22.6%).

14% of companies would pay a ransom of $1+ million to prevent the release of data stolen by hackers

Systems of record are the next wave of cloud adoption
In 2011, Geoffrey Moore introduced the concept of systems of engagement and predicted they would be the next wave in enterprise IT. Systems of record, which capture every dimension of data relevant to a company and process that data, were the focus of information technology initiatives last century. The new focus, he said, was on systems of engagement that enabled greater collaboration and communication. These new tools allow users to share files and information and communicate in real time via video and chat, and they were built from the ground up to run in the cloud.

Fast-forward a few years and Moore’s prediction appears prescient. Companies have invested in a new generation of communication and collaboration tools that are cloud-native. However, as more companies experience the benefits of cloud computing, they are beginning to look toward extending those benefits to their systems of record. Systems of record, far from being left behind in legacy on-premises data centers, are starting to move to the cloud. The most common systems of record deployed in the cloud today are customer relationship management (CRM) solutions, but nearly one-third of companies plan to migrate their accounting/finance, HRM, and IT service management systems to the cloud.

Companies with a CISO are more prepared for a cyber attack
Companies with an executive in charge of information security, known as the chief information security officer (CISO), are more confident about their internal strategy to operationalize threat data. One of the reasons that companies with a CISO may be more confident is that they are more likely to have an incident response plan. Across all companies, 82.2% have some form of an incident response plan that details how the company would respond to a serious breach, including security remediation, legal, public relations, and customer support. However, fewer than half of these companies have a complete plan that covers all of these areas.

Just 19.0% of companies without a CISO have a complete incident response plan. However, 53.8% of companies with a CISO have a complete incident response plan. Companies with a CISO are also more likely to have cyber insurance to protect against the cost of a data breach. Across all companies, 24.6% have cyber insurance. However, just 17.2% of companies without a CISO have insurance compared with 29.2% of companies with a CISO. This insurance can help pay for the cost of a major cyber attack. Following the Target credit card breach in 2013, the company’s insurance covered $90 million of the $264 million cost related to the attack.

53.8% of companies with a CISO have a complete incident response plan vs 19.0% of companies without a CISO

Improving Data Privacy One Employee at a Time

February 4, 2016 | Leave a Comment

By Rick Orloff, Vice President and Chief Security Officer, Code42

It’s no Hallmark holiday, but here at Code42, Data Privacy Day is kind of a big deal. We think it should be a big deal for your organization, too. It’s a great chance to focus on the biggest security threat in your organization: your end users and their devices.

As IT and InfoSec professionals, we spend a lot of time on complex strategies that protect us from the most sophisticated cyber threats. And then we spend more time cleaning up the messes that employees get us into just by clicking corrupt links. These unintentional “user mistakes” are the biggest insider threat today, causing around 25 percent of data loss.

Your end users don’t care about data security procedures
Why are end users so mistake-prone? Because, frankly, most don’t care. They think data security is IT’s problem—that if IT does its “job” and filters out the threats, they have nothing to worry about. Moreover, when they do something stupid, they think it’s IT’s job to come to the rescue. They don’t understand the risks they create for the company, or the fact that once the bell has been rung, it can’t be unrung. So they go on ignoring security policies and finding creative workarounds for security measures that inconvenience them—such as utilizing “shadow IT.”

This is changing, and we’d like to help.

Code42 + National Cyber Security Alliance = Data Privacy Month 2016
Code42 is partnering with the National Cyber Security Alliance to champion Data Privacy Day and the entire Data Privacy Month of February. We’re helping enterprise security professionals address the problem of end-user education and motivation.

Making data security an end-user responsibility
Ready to celebrate this joyous holiday? Then it’s time to “talk turkey” with your end users. Here are some key considerations and topics to get you started:

1. Security education should be an in-your-face affair
Talk to employees, face-to-face. They ignore your emails and videos.

Your employee education has to a) deliver a crisp, meaningful message; b) demonstrate that security is a core responsibility bestowed by executives; c) close the loop between what you say and what employees understand; and d) hold employees accountable. Part of holding employees accountable is providing the easy-to-use tools and capabilities employees need to work.

2. Focus on keeping a clean machine
You might not be able to win the fight against “shadow IT,” but make sure your employees understand exactly how an unknown or unapproved app can quickly lead to a massive data breach that extends far beyond their device. It’s also important that they see how apps for personal use (social media, gaming, etc.) are not designed to offer the same level of data security as enterprise-grade productivity apps—and why installing these apps on work devices creates open doors to the entire enterprise ecosystem.

3. No more lazy passwords
This one can be fun. See if you can guess your end users’ passwords. It’s amazing how many people use something like “password” or “123456.” Call them out on using the same password for every login (as 73% of enterprise employees do). Call them out on never changing their passwords (47% of people use passwords that are 5+ years old). Take the group on a cubicle tour and see how many Post-It Note passwords you can find. If you haven’t already, implement technical controls to support your policies (see the sketch after this list).

4. Have doubts? Throw it out
This one’s simple: Don’t be gullible. Don’t be stupid. Remind them not to open emails, click links or open attachments from unknown or suspicious sources. It’s uncanny how many people say, in retrospect, that “something seemed odd” about that email in broken English—but they figured the spam filter didn’t catch it, so they clicked the link. To that end, make sure they understand that spam filters are just the first line of defense—that they’re not perfect. Show them how to use your company’s spam filters: how to make sure filters are on, how to refine the filtering by flagging spam, and how to report a suspicious email, attachment, etc.

5. Endpoint backup is your best friend
Make sure your employees know that endpoint backup is the closest thing to a “Get Out of Jail Free” card in the data security world. The best way to get employees to embrace endpoint backup is to promote its benefits. Demonstrate how the “utility” makes it easy to work anywhere and recover any file in real time with or without the original device. This capability (with no IT intervention) will make IT the hero when employees lose data or suffer a malware attack at a critical moment.

6. Make the call for accountability
Make it clear that data security is everyone’s responsibility and that it’s not a cliché.

End users are actually the ones on the front lines of the battle—IT and InfoSec teams are more like the generals pushing big-picture strategies. End users are often the primary points of attack and need to embrace the defense strategies provided to them. They need to understand that all the fancy security tools in the world are worthless if they don’t follow the rules. They need to understand the true impact of even a tiny mistake—that IT can’t always “fix” it, and that a small error could easily lead to immense costs, lost productivity, brand damage and more. This can’t be overstated. Most importantly, no employee—even trusted administrators and executives—should expect absolution for ignorant or careless actions. At Code42, several data privacy no-nos—not having full disk encryption on laptops, disabling Code42 CrashPlan for any reason, etc.—are fireable offenses. Considering the damaging impact of data loss, we don’t think this is harsh—we think it’s critical to creating a culture of accountability.
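To make point 3 actionable, here is a minimal sketch of the kind of technical control that can back up a password policy. It is illustrative only: the length threshold and the tiny common-password list are placeholder assumptions, not recommendations. Real values belong in your organization’s security standards.

```python
import re

# Placeholder policy values for illustration only; take real thresholds
# and blocklists from your organization's security standards.
MIN_LENGTH = 12
COMMON_PASSWORDS = {"password", "123456", "qwerty", "letmein"}

def check_password(candidate: str) -> list:
    """Return a list of policy violations; an empty list means acceptable."""
    problems = []
    if len(candidate) < MIN_LENGTH:
        problems.append(f"shorter than {MIN_LENGTH} characters")
    if candidate.lower() in COMMON_PASSWORDS:
        problems.append("found in common-password list")
    if not re.search(r"[A-Z]", candidate):
        problems.append("no uppercase letter")
    if not re.search(r"[0-9]", candidate):
        problems.append("no digit")
    return problems

print(check_password("123456"))                  # multiple violations
print(check_password("Tr1cky-Passphrase-2016"))  # [] - passes this policy
```

A check like this, enforced at password-change time, turns the policy from a poster on the wall into something employees cannot quietly ignore.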

Be privacy aware. Take the pledge and enter to win an iCloak.

You’ve Been the Victim of a Cybercrime. Who You Gonna Call?

February 2, 2016 | Leave a Comment

By Leo Taddeo, Chief Security Officer, Cryptzone

Part 1 of a 2-part series

Right now, one of the greatest challenges in the fight against cybercrime is the difficulty we have in creating a meaningful deterrent for hackers.

Basically, the number of cybercriminals out there is demonstrably very large, and all the available data shows it growing all the time. And yet the number of cybercriminals who are caught and punished is very small, and changes little from year to year. In terms of risk versus reward, it’s a very attractive game for hackers to be in.

In this blog post – the first of two – I’d like to talk about one of the reasons for this difficulty in creating a deterrent: US organizations often fail to engage law enforcement when their networks come under attack.

Let’s say you’ve been the victim of a cybercrime. Who you gonna call?

The Trouble with Cybercrime Reporting
The first challenge many US organizations encounter when they attempt to report cybercrime is that there’s no one correct way to do this. Even if you restrict your definition of the term to only cover network intrusions and not other illegal online activity like identity theft, there are still several different places a person can go to alert the authorities to an incident.

According to the official guidance of the Department of Justice, organizations have no fewer than three options when it comes to reporting cybercrime. They can call their local FBI office; they can call the Secret Service; or they can log a complaint with the Internet Crime Complaint Center (IC3).

On top of that, the Department of Homeland Security has its own online portal for reporting cybercrime of any type, including network intrusions. State and local authorities add more options, as some victims resort to calling their local police departments or prosecutors’ offices.

Then there’s the question of which agency actually has jurisdiction over what. According to Title 18 Section 1030 of the US Criminal Code, both the FBI and Secret Service have the authority to investigate criminally-motivated cyberattacks. Should an incident be a matter of national security, the FBI is designated the lead agency.

In a nutshell, cybercrime reporting can be confusing. This is exacerbated by the fact that it’s rarely possible to know whether a cyberattack is a criminal or national security issue at the outset of an investigation – you might need to study a large amount of forensic information before this becomes apparent. Who wants to deal with this level of confusion right after discovering a data breach?

Why Engage Law Enforcement, Anyway?
Consequently, a lot of cybercrime goes unreported. This is an issue I touched upon in a recent blog about the lack of reliable cybercrime statistics, and it’s troubling for a number of reasons. It means that authorities don’t consistently have access to up-to-date threat intelligence; the victim has no access to the intelligence that law enforcement does have; and, at the end of the day, nobody is arrested and prosecuted.

Obviously, no organization should rely on the government to protect it against network intrusions – or the damage they cause – by chasing down and locking up hackers. But if the authorities had a more complete picture of the threat landscape, it’d be an enormous net positive for the security community – we’d be better equipped as a country to fight cybercrime and therefore create the deterrent we so badly need.

My advice? If you’re the victim of a cybercrime, report it to the FBI, which has jurisdiction over both criminal and national security cases.

Really, though, you should be doing everything you possibly can to ensure it never comes to that. Invest now, and strengthen your network defenses, because we’re a long way from having a sufficiently powerful deterrent to prevent the threat from growing day by day.

In part two of this blog, I’ll talk about the difficulty we have in bringing wanted cybercriminals to justice.

What They’re Not Telling You About Global Deduplication

January 29, 2016 | Leave a Comment

By Rachel Holdgrafer, Business Content Strategist, Code42

When it comes to endpoint backup, is global deduplication a valuable differentiator?

Not if data security and recovery are your primary objectives.

Backup vendors that promote global deduplication say it minimizes the amount of data that must be stored and provides faster upload speeds. What they don’t say is how data security and recovery are sacrificed to achieve these “benefits.”

Here’s a key difference: with local deduplication, data redundancy is evaluated and removed on the endpoint before data is backed up. Files are stored in the cloud by the user and are easily located and restored to any device. With global deduplication, all data is sent to the cloud, but only one instance of a data block is stored.

They tell you: “You’ll store less data!”
It’s true that global deduplication reduces the number of files in your data store, but that’s not always a good thing. At first blush, storing less data sounds like a benefit, especially if you’re paying for endpoint backup based on data volume. But other than potential cost savings, how does storing less data actually benefit your organization?

Not as much as you think.

For most organizations, the bulk of the files removed by the global deduplication process will be unstructured data such as documents, spreadsheets and presentations—files that are not typically big to begin with—making storage savings resulting from global dedupe minimal. The files that gobble up the bulk of your data storage are those that are unlikely to be floating around in duplicate—such as databases, video and design source files, etc.

What they don’t tell you: Storing less data doesn’t actually benefit your organization. Smaller data stores benefit the solution provider. Why? Data storage costs money and endpoint backup providers pay for huge amounts of data storage and bandwidth every month. By limiting the data stored to one copy of each unique file, the solution provider can get away with storing less data for all of its customers, resulting in smaller procurement costs each month—for them.

Vendors that offer global dedupe also fail to mention that it puts an organization at risk of losing data because (essentially) all the eggs are in one basket. When one file or data block is used by many users but saved just once, (e.g., the HR handbook for a global enterprise, sales pitch decks or customer contact lists) all users will experience the same file loss or corruption if the single instance of the file is corrupted in the cloud.

They tell you: “It uploads data faster.”
First, let’s define “faster.” The question is, faster than what? Admittedly, there’s a marginal difference in upload speeds between global and local deduplication, but it’s a lot like comparing a Ferrari and a Maserati. If a Ferrari tops out at 217 miles per hour and a Maserati tops out at 185 miles per hour, clearly the Ferrari wins. It’s technically faster, but considering that the maximum legal speed on most freeways is 70-75 miles per hour, the additional speed of both vehicles is a moot point. Both cars are wickedly fast, but a person is unlikely to drive either at its top speed, so does the difference matter? The fact is, it doesn’t.

The same can be said about the speed “gains” achieved by utilizing global deduplication over local deduplication. Quality endpoint backup solutions will provide fast data uploads regardless of whether they use global deduplication or local deduplication. There’s a good chance that there will be no detectable difference in speed between the two methods because upload speed is limited by bandwidth. Global deduplication promoters are positioning speed as a benefit you will not experience.

What they don’t tell you: Global deduplication comes at a cost: restore speeds will be orders of magnitude slower than restoration of data that has been locally deduplicated. Here’s why: with global deduplication, all of your data is stored in one place and only one copy of a unique file is stored in the cloud regardless of how many people save a copy. Rather than store multiples of the same file, endpoint backup that utilizes global deduplication maps each user to the single stored instance. As the data store grows in size, it becomes harder for the backup solution to quickly locate and restore a file mapped to a user in the giant data set.

Imagine that the data store is like a library. Mapping is like the Dewey Decimal System, only the mapped books are stored as giant book piles rather than by topic or author. When the library is small, it’s relatively easy to scan the book spines for the Dewey Decimal numbers. However, as the library collection (that is, book piles) gets larger, finding a single book becomes more time consuming and resource intensive.

Data storage under the global deduplication framework is like the library example above. Unique files or data blocks are indexed as they come into the data store and are not grouped by user. When the data store is small, it’s relatively easy for the system to locate all of the data blocks mapped to one user when a restore is necessary. As the data store grows in size, the process of locating all of the data blocks takes longer. This slows down the restore process and forces the end user to wait at the most critical point in the process—when he or she needs to get files back in order to continue working.
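To make the mapping idea concrete, here is a minimal sketch of a globally deduplicated block store. It is a simplified illustration, not any vendor’s actual implementation: each unique block is stored once under its content hash, every user’s file is just an ordered list of hashes, and a restore must fetch every block from the single shared store.

```python
import hashlib

BLOCK_SIZE = 4096
block_store = {}    # content hash -> block bytes, shared by ALL users
user_manifest = {}  # (user, filename) -> ordered list of block hashes

def backup(user, filename, data):
    hashes = []
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)  # identical blocks stored once
        hashes.append(digest)
    user_manifest[(user, filename)] = hashes

def restore(user, filename):
    # Every block comes from the one giant shared index; as the store
    # grows, each restore is a walk through an ever-larger data set.
    return b"".join(block_store[h] for h in user_manifest[(user, filename)])

backup("alice", "handbook.docx", b"x" * 10000)
backup("bob", "handbook.docx", b"x" * 10000)  # adds no new blocks at all
print(len(block_store))  # 2 unique blocks serve both users' copies
```

The sketch also shows the single-basket risk described earlier: if the one stored copy of a block is corrupted, every user whose manifest points at it loses that data.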

The real security story: What you’re not being told about global deduplication doesn’t stop there. Two-factor encryption doesn’t mean what you think it does. Frankly, an encryption key coupled with an administrator password is NOT two-factor encryption. It’s not even two-factor authentication. It’s simply a password layered over a regular encryption key. Should someone with the encryption key compromise the password, he or she will have immediate access to all of your data.

Conclusion
Companies that deploy endpoint backup clearly care about the security of their data. They count on endpoint backup to reliably restore their data after a loss or breach. Given the vulnerabilities exposed by the global deduplication model, it is counterintuitive to sacrifice security and reliability in a backup model in favor of “benefits” that profit the seller or cannot be experienced by the buyer.

To learn more about how endpoint backup with local deduplication is a more strategic data security choice, download the ebook, Backup & Beyond.

Serious Cybersecurity Challenges Ahead in 2016

January 28, 2016 | Leave a Comment

By Phillip Marshall, Director of Product Marketing, Cryptzone

By now you’ll have settled into the New Year, looking ahead at what’s to come as we move swiftly through January. There are numerous unsettling predictions that make 2016 a year of serious cybersecurity challenges – from new types of hacks and skills shortages to increased insider threats. We’ve rounded up a number of 2016 predictions from industry experts and vendors that every organization, regardless of size, should pay close attention to and build a strategy to address.

1. Increased Need to Restrict Access and Secure Content: Dark Reading presented our first noteworthy prediction. “Chief Information Security Officers (CISOs) will become the new ‘it’ girl of security, not only in enterprises with healthy security budgets, but in data-driven startups where housing sensitive information is core to their business,” say Tim Chen, CEO, and Bruce Roberts, CTO, of DomainTools.

It increasingly seems that a day does not pass without a news story on the loss of sensitive information. If that information isn’t secured properly and is accessed by unauthorized parties, the damage to an organization is massive. Financial penalties, regulatory sanctions, loss of confidential company information and brand damage – all of these can be avoided by restricting access to and encrypting content wherever it lives and travels.

2. Security Is Becoming a Shared Responsibility: TechCityNews offered our next prediction of merit, which expands on who is responsible for cybersecurity. “Demand for security products has grown, and is only set to grow further; and responsibility for security is now held in more parts of any organization. In other words, people other than the security analyst and the chief information security officer, who have traditionally been the users of security tools, are being made responsible for making sure private information and intellectual property is secure. The responsibility lies with both the C-suite, as share price is directly impacted by a breach, as well as with the developer, who has to ship safe code and include security features on products as they are built.”

Too much is at stake for organizations that have been breached. We don’t necessarily think this is a prediction so much as a requirement for all organizations this year.

3. Insider Threats to Increase: Insider threats abound – lock down your IT, says ITProPortal in its 2016 predictions. “Massive disruption (Uber style) to existing industries and wholesale digitization will create job losses and potentially significant numbers of disaffected employees capable of compromising IT systems. So, we’re likely to see a renewed focus on ‘locking down’ information systems, by ensuring secure configurations, removing vulnerabilities, strictly controlled use of privileges and by ensuring that critical systems and applications are patched up to date.”

Insider threats are a clear issue, especially as we believe all cybercrime is an inside job (see our webinar with Forrester Analyst John Kindervag on this topic). In 2016, organizations need to first adopt the principles of zero trust to combat malicious insiders at the network level (see the sketch at the end of this article). Individuals should only ever have access to the resources they need to do their job, and this should only ever be granted in reasonable contexts. Otherwise, there’s nothing stopping them from spending their downtime trawling entire network segments for sensitive information. Second, to avoid data breaches caused by careless behavior, organizations need strong content-level security. By encrypting, tracking and restricting access to files that contain sensitive information, they can mitigate the consequences of misdirected emails and similar incidents.

4. You’ll need to do more with fewer skilled professionals: Another issue in 2016 is the growing skills shortage in cyber security. This prediction came up time and time again throughout our research. As the demand to defend against cyber threats increases, the resources to achieve this decrease. Skills shortages “will mean that fewer and fewer organizations are able to build or manage cyber security defenses themselves, or even be able to make effective use of cyber security technologies.”

Benjamin Jun, CEO, HVF Labs echoed this sentiment in his prediction that “Microservices will change the build vs. buy debate as identity management and customer data will be increasingly migrated to specialized cloud services in 2016. Developers will insert vetted services and code into their own software, avoid building from scratch, and obtain a security level better than most homegrown offerings. And, for companies who insist on build-your-own, relief is coming in 2017 when container technologies will allow in-house teams to practically manage and integrate microservices of their very own.”

Geoff Smith of Experis commented in one prediction that the “worrying news is that breaches are inevitable, while a shortage of skilled cybersecurity professionals is likely to push up the costs of beefing up defenses and dealing with attacks.”

The build vs. buy debate will never end, but with skills shortages aplenty, help from cyber security vendors that specialize in network security and data protection will be necessary in 2016.

5. Customers Care! Increasingly, customers will want to know how you’re securing their data: Malcolm Marshall, Partner and Global Leader, Cyber Security at KPMG, said, “In 2016, we will see a ‘consumers care about security’ shock – more businesses will realize that sophisticated customers actually care about security in the products and services and will realize that security, ease of use and ‘coolness’ are not mutually exclusive.”

Allowing customers’ data to be stolen is bad for business. Your customers want to know their data is safe. They want you to comply with regulations and they want you to do everything you can to prevent cybercrime. We previously predicted this trend would continue and it has. Customers want proactive cybersecurity — not reactive analysis and temporary repairs. Findings show that companies are ramping up their spending to prevent cyberattacks after a string of breaches at financial firms and big retailers. This trend will continue.
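Returning to prediction 3, here is a minimal sketch of what a zero-trust access decision can look like in code. The roles, resources and context rules are invented for illustration; a real deployment would drive them from a policy engine rather than a hard-coded table.

```python
# Illustrative entitlement table: role -> resources that role may reach.
# Invented names; a real system would pull this from a policy engine.
ENTITLEMENTS = {
    "hr_analyst": {"hr_records"},
    "developer": {"source_code", "build_servers"},
}

def allow(role, resource, hour, on_corp_network):
    """Default deny: grant access only with an explicit entitlement
    AND a reasonable context for the request."""
    if resource not in ENTITLEMENTS.get(role, set()):
        return False  # least privilege: no trawling other network segments
    if not on_corp_network and not (8 <= hour <= 18):
        return False  # off-network, off-hours requests are refused
    return True

print(allow("developer", "hr_records", 10, True))   # False: never entitled
print(allow("developer", "source_code", 3, False))  # False: bad context
print(allow("developer", "source_code", 10, True))  # True
```

The point of the sketch is the default-deny shape: access is the exception that must be justified by both entitlement and context, not the rule.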

2015 Breaches Show That Current Cybersecurity Measures Aren’t Enough

January 22, 2016 | Leave a Comment

By Corey Williams, Senior Director/Product Management and Marketing, Centrify

Last year my colleague Chris Webber predicted that “Breach Headlines will Change IT Security Spend.” Unfortunately, the breach headlines of 2015 were even more striking than most could have predicted. 2015 breaches involved high-profile criminal and state-sponsored attacks. Millions of personnel records of government employees, tens of millions of records of insurance customers, and hundreds of millions of customer records from various other companies were among the information compromised. This year we even heard of a BILLION-dollar bank heist!

Many of these companies had implemented advanced malware protection and next-generation firewalls, and delivered regular security training sessions for their employees. Yet the breaches are still happening. What we know from cybersecurity experts such as Verizon and Mandiant is that nearly half of breaches occurring today are due to a single vulnerability that is still not adequately addressed.

Compromised user credentials, AKA the humble username and password, can provide outsiders with access to an organization’s most critical data, applications, systems and network devices. Through phishing, trojans and APTs, hackers today are focused on these digital “keys to the kingdom,” which are used to access sensitive data and systems.

For 2016, companies will (and must) adopt measures to mitigate the risk of compromised credentials. Yes, complex and unique passwords are a start, but they will never be enough. Multi-factor authentication will be implemented more broadly and across more apps and devices; adaptive access will be used to detect and stop suspicious login attempts; and granular privilege management will be adopted to reduce the impact of compromised credentials. Companies will start to accept that compromised credentials are the new normal and will take steps to mitigate the risk they represent.
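As a concrete illustration of the multi-factor direction, here is a minimal sketch of a time-based one-time password (TOTP) in the style of RFC 6238, using only the Python standard library. The shared secret below is a made-up example value, not a real credential.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30):
    """Compute an RFC 6238-style time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)  # 30-second step
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# A login then requires something you know (the password) plus something
# you have (the enrolled device holding this shared secret).
print(totp("JBSWY3DPEHPK3PXP"))  # changes every 30 seconds
```

Because the code is derived from a secret on a device the attacker does not hold, a phished or reused password alone is no longer enough to log in.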

To read more about the state of corporate security, see our State of the Perimeter survey results.

Containers Aren’t New, But Ecosystem Growth Has Driven Development

January 21, 2016 | Leave a Comment

By Thomas Campbell, Container World 2016

Containers are getting a fair bit of hype at the moment, and February 2016 will see the first ever event dedicated to both the business and technical advantages of containers take place in Silicon Valley in the US.

Here, Container World talks to Kyle Anderson, who is the lead developer for Yelp, to learn about the company’s use of containers, and whether containers will ultimately live up to all the hype.

What special demands does Yelp’s business put on its internal computing?
Kyle Anderson: I wouldn’t say they are very special. In some sense our computing demands are boring. We need standard things like capacity, scaling, and speed. But boring doesn’t quite cut it, and if you can turn your boring compute needs into something that is a cut above the status quo, it can become a business advantage.

And what was the background to building your own container-based PaaS? What was the decision-making process there?
KA: Building our own container-based PaaS came from a vision that things could be better if they were in containers and could be scheduled on-demand.

Ideas started bubbling internally until we decided to “just build it” with manager support. We knew that containers were going to be the future, not VMs (virtual machines). At the same time, we evaluated what was out there and wrote down what it was that we wanted in a PaaS, and saw the gap. The decision-making process there was just internal to the team, as most engineers at Yelp are trusted to make their own technical decisions.

How did you come to make the decision to open-source it?
KA: Many engineers have the desire to open-source things, often simply because they are proud of their work and want to share it with their peers.

At the same time, management likes open source because it increases brand awareness and serves as a recruiting tool. It was a natural progression for us. I tried to emphasize that it needed to work for Yelp first, and after one and a half years in production, we were confident that it was a good time to announce it.

There’s a lot of hype around containers, with some even suggesting this could be the biggest change in computing since client-server architecture. Where do you stand on its wider significance?
KA: Saying it’s the biggest change in computing since client-server architecture is very exaggerated. I am very anti-hype. Containers are not new, they just have enough ecosystem built up around them now, to the point where they become a viable option for the community at large.

Container World is taking place on February 16-18, 2016 at the Santa Clara Convention Center, CA, USA. Visit www.containervent.com to register for your pass.


What Is Data Deduplication and Who Cares?

January 19, 2016 | Leave a Comment

By Rachel Holdgrafer, Business Content Strategist, Code42

Data deduplication is a critical component of managing the size (and cost) of a continuously growing data store, and one you will hear about when you research endpoint backup. Intelligent compression, or “single-instance storage,” eliminates redundant data by storing one copy of a file and referencing subsequent instances of the file back to the saved copy.

There is some misunderstanding of deduplication in the marketplace, even among analysts, in part because vocal endpoint backup vendors have positioned deduplication capabilities around upload speed and cost of storage rather than security and speed of recovery.

What is data deduplication?
Data deduplication is a process by which an enterprise eliminates redundant data within a data set and only stores one instance of a unique piece of data. Data deduplication can be completed at the file level or at the data block level, and can occur on either the endpoint device or the server. Each of these variables plays a role in how deduplication works and its overall efficiency, but the biggest question for most folks is: does data deduplication matter? Is it a differentiator I should care about?

If you are considering a robust and scalable enterprise endpoint backup solution, you can count on the fact that the software uses some sort of data deduplication process. Some solutions use global deduplication, others local deduplication and some use a combination of the two.

Local deduplication happens on the endpoint before data is sent to the server. Duplicate data is removed from the endpoint and then clean data is stored in a unique data set sorted by user archive on the server. Each data set is encrypted with a unique encryption key.

Global deduplication sends all of the data on an endpoint to the server. Every block of data is compared to the data index on the server and new data blocks are indexed and stored. All but one identical block of data is removed from the data store and duplicate data is replaced with a redirect to the unique data file. Since multiple users must be able to access any particular data block, data is encrypted using a common encryption key across all sets.
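A minimal sketch makes the difference in scope easy to see. This is illustrative Python, not any vendor’s implementation: local deduplication collapses duplicates inside each user’s own (separately encrypted) archive, while global deduplication collapses them across every user, which is why it needs a common key.

```python
import hashlib

def block_hashes(data, size=4096):
    """Hash fixed-size blocks; identical blocks produce identical hashes."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

# Two users back up identical copies of the same 8 KB file.
files = {"alice": b"a" * 8192, "bob": b"a" * 8192}

# Local dedupe: duplicates removed per user; each archive is keyed and
# encrypted separately, so the shared file is stored once in EACH archive.
local_archives = {user: set(block_hashes(data)) for user, data in files.items()}
print(sum(len(a) for a in local_archives.values()))  # 2: one block per archive

# Global dedupe: one pool across all users; the block is kept once in
# total, so a single common key must be able to decrypt it for everyone.
global_pool = {h for data in files.values() for h in block_hashes(data)}
print(len(global_pool))  # 1: one block total
```

The trade-off in miniature: the global pool stores less, but every user depends on the same stored block and the same encryption key.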

Regardless of the deduplication method used, the actual process should happen silently in the background, causing no slow-down or perceived impairment for the end user.

So, should I care about global deduplication?
In short, not as much as some vendors might want you to care. Data deduplication—whether global or local—is largely considered table stakes in the world of enterprise endpoint backup. There are instances where each type may be beneficial—the key is to understand how each type affects your stored data, security requirements and restore times.