December 10, 2015
By Rachel Holdgrafer, Content Business Strategist, Code42
What do the Mercedes-Benz C Class, teeth whitening strips, the Apple iPhone and personally identifiable information have in common? Each is the item most commonly stolen from its respective category: luxury cars, personal care items, smartphones and corporate data. In the 2015 study entitled Grand Theft Data – Data exfiltration study: Actors, tactics, and detection, Intel Security reports:
• Internal actors were responsible for 43% of data loss, half of which was intentional and half accidental.
• Microsoft Office documents were the most common format of stolen data (25%).
• Personal information from customers and employees was the number one target (65%).
• Internal actors were responsible for 40% of the serious data breaches experienced by respondents, while external actors accounted for 57% of data breaches.
The report describes internal actors as employees, contractors and third-party suppliers, with a 60/40 split between employees and contractors/suppliers. Office documents were the most common format of data stolen by internal actors, probably because these documents are stored on employee devices, which many organizations do not manage.
In a 2013 report by LogRhythm, a cyber threat defense firm, a survey of 2,000 employees found that 23 percent admitted to having looked at or taken confidential data from their workplace, with one in ten saying they did so regularly. In the same study, two-thirds of respondents said their employer had no enforceable systems in place to prevent access to data such as colleague salaries and bonus schemes.
Employees who move intellectual property outside the company believe it is acceptable to transfer work documents to personal computers, tablets, smartphones and file-sharing applications, and most do not delete the data because they see no harm in keeping it. As reported in the Employee Churn white paper, many employees attribute ownership of IP to the person who created it.
Four quick fixes to curb insider threat
As the rate of insider theft approaches the rate of successful hacks, organizations can start with four common sense principles to shore up security immediately:
- Trust but verify: Understand that the risk of data loss from trusted employees and partners is real and present. Watch for data movement anomalies in your endpoint backup data repositories and act upon them.
- Log, monitor and audit employee online actions and investigate suspicious insider behaviors.
- Disable employee credentials immediately when employees leave, and implement strict password and account management policies. Astonishingly, six in ten firms surveyed do not regularly change passwords to stop ex-employees from gaining access to sites and documents.
- Implement secure backup and recovery processes to prepare for the possibility of an attack or disruption and test the processes periodically.
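The "trust but verify" and monitoring principles above can be sketched in a few lines of code. This is a minimal illustration, not a description of any product's detection logic; the per-user daily-volume data and the 10x-median threshold are assumptions chosen purely for the example:

```python
from statistics import median

def flag_anomalies(daily_bytes, multiplier=10):
    """Flag days whose backup volume far exceeds a user's typical baseline.

    daily_bytes: list of (day, bytes_transferred) pairs for one user.
    Returns the days whose volume exceeds multiplier * the user's median,
    a deliberately simple stand-in for real anomaly detection.
    """
    volumes = [b for _, b in daily_bytes]
    if not volumes:
        return []
    baseline = median(volumes)
    return [day for day, b in daily_bytes if b > multiplier * baseline]

# Example: a user who normally backs up ~1 MB/day suddenly moves ~50 MB.
history = [("Mon", 1_000), ("Tue", 1_100), ("Wed", 950),
           ("Thu", 1_050), ("Fri", 50_000)]
print(flag_anomalies(history))  # prints ['Fri']
```

A median-based baseline is used instead of mean/standard deviation because a single large exfiltration event would otherwise inflate the threshold enough to hide itself.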
Download the executive brief, Protecting Data in the Age of Employee Churn, to learn more about how endpoint backup can mitigate the risks associated with insider threat.
December 9, 2015
By Kevin Beaver, Guest Blogger, Lancope
Fairly often, I have friends and colleagues outside of IT and security ask me how work is going. They’re curious about the information security industry and ask questions like: How much work are you getting? Why are we seeing so many breaches? Are things going to get better? Given what’s happening in the industry, I’m always quick to respond with some fairly strong opinions. So, where are things now, and what’s really needed to resolve our security issues?
First off, based on what I see in my work and what I hear from friends and colleagues in the industry, I’m convinced that the data breaches we’re seeing in the headlines are merely the tip of the iceberg. I suspect there are three to four times as many breaches that go undetected and unreported. I also see many IT and security shops merely going through the motions, just trying to keep up. Putting out fires is their daily tactic. Big-picture strategies don’t exist.
In my specific line of work performing security assessments, I see people sweating bullets anticipating the results, unsure of how the outcome is going to reflect on them, their credibility and their jobs. I’m not saying this to speak negatively of the people responsible for information security. I just think it’s a side-effect of how IT and security challenges have evolved in recent years. The rules and oversight are being piled on. Ironically, in an industry that traditionally offers a strong level of job security, it seems that more and more people are concerned about that very thing.
A core element contributing to these challenges – and something that doesn’t get the attention it deserves – is a glaringly obvious lack of support for information security initiatives at the executive and board level. Sure, there are occasional studies showing that security budgets are increasing. More often than not, however, I’m seeing and hearing sentiments along the lines of a recent study that showed the majority of C-level executives do not believe CISOs deserve a seat at the leadership table. So, it’s more than just budget. It’s political backing as well. This raises the question: who’s responsible for this lack of respect for the information security function? I believe it’s a chicken-and-egg situation involving responsibility and accountability on the part of both IT and security professionals and business leaders. I’ll save that for another blog post.
Politics and business culture aside, there are still many situations where all is assumed to be well in security when it is indeed not. The lack of visibility and data analytics is glaringly obvious in many enterprises, including large corporations and federal government agencies that one might assume really have their stuff together and are resilient to attack. In fact, I strongly believe that many – arguably most – security decisions are made based on information that’s questionable at best and this is why we continue to see the level of breaches we’re seeing.
So, where do we go from here? I’m not convinced that we need more policies. Nor am I convinced that we need better technologies. People are continually chasing down this rabbit hole and that rabbit hole in search of the latest magical security solution. Rather than a new direction, what we need is discipline. For decades, we’ve known about the core information security principles that are still lacking today. Unless and until everyone is on board with IT and security initiatives that impact business risk, I think we’re going to continue with the same struggles. I hope I am proven wrong.
Kevin Beaver is an information security consultant, expert witness and professional speaker with Atlanta-based Principle Logic, LLC.
December 7, 2015
Market Guide Compares CASB Vendors And Provides Evaluation Criteria
By Cameron Coles, Senior Product Marketing Manager, Skyhigh Networks
As sensitive data moves to the cloud, enterprises need new ways to meet their security, compliance, and governance requirements. According to Gartner Research, “through 2020, 95% of cloud security failures will be the customer’s fault,” meaning that enterprises need to look beyond the security capabilities of their core cloud services and focus on implementing controls over how those services are used in order to prevent the vast majority of potential security breaches.
Many companies invested in firewalls, proxies, intrusion prevention systems, data loss prevention solutions, and rights management solutions to protect on-premises applications. The cloud access security broker (CASB) offers similar controls for cloud services. According to a new Gartner report (download a free copy here), a CASB is “required technology” for any enterprise using multiple cloud services. By 2020, Gartner predicts 85% of large enterprises will use a CASB, up from fewer than 5% today.
“By 2020, 85% of large enterprises will use a cloud access security broker product for their cloud services, which is up from fewer than 5% today.”
– Gartner “Market Guide for Cloud Access Security Brokers”
The need for a solution is clear. Cloud adoption within the enterprise is growing exponentially – driven in large part by business units procuring cloud services and individual employees introducing ad hoc services without the involvement of IT. IT Security teams need a central control point to understand how their employees use cloud services and to enforce corporate policies across data in the cloud, rather than managing each cloud application individually. This functionality is not available in Web application firewalls (WAFs), secure Web gateways (SWGs) or enterprise firewalls, driving the need for a new solution that addresses these challenges.
Why do companies use CASBs?
In the report, Gartner explains there are three market forces driving enterprises to consider using a CASB. First, employees are moving to non-PC form factors. Employees use mobile devices to store corporate data in cloud services, and IT Security teams lack controls for this activity. Second, as corporate IT budgets are redirected toward cloud services, companies are beginning to think strategically about the security stack needed for the cloud. And lastly, as the largest enterprise software companies like Oracle, Microsoft, and IBM invest heavily in migrating their installed base to cloud services, more of these enterprises are looking to secure this data.
“CASB is a required security platform for organizations using cloud services.”
– Gartner “Market Guide for Cloud Access Security Brokers”
While some cloud providers are beginning to add security and compliance controls to their solutions, companies need a more centralized approach. The average enterprise uses 1,154 cloud services, and managing a different set of policies across each of these services would not be practical for any organization. A CASB offers a central control point for thousands of cloud services for any user on any device – delivering many of the security functions found in on-premises security solutions including data loss prevention (DLP), encryption, tokenization, rights management, access control, and anomaly detection.
Gartner’s 4 Pillars of CASB Functionality
Gartner uses a four-pillar framework to describe the functions of a CASB. Not all CASB providers cover all four pillars, so customers evaluating solutions should scrutinize vendors’ marketing claims and ask for customer references.
- Visibility – discover shadow IT cloud services and gain visibility into user activity within sanctioned apps
- Compliance – identify sensitive data in the cloud and enforce DLP policies to meet data residency and compliance requirements
- Data security – enforce data-centric security such as encryption, tokenization, and information rights management
- Threat protection – detect and respond to insider threats, privileged user threats, and compromised accounts
Deployment architecture is an important consideration in a CASB project. A CASB can be delivered via SaaS or as an on-premises virtual or physical appliance. According to Gartner, the SaaS form factor is significantly more popular and easier to deploy, making it the increasingly preferred option. Another factor to consider is whether to use an inline forward or reverse proxy model, direct API connectivity to each cloud provider, or both. Gartner refers to CASB providers that offer both proxy and API options as “multimode CASBs” and points out that certain functionality such as encryption, real-time DLP, and access control is not possible with API-only providers.
How to choose a CASB
Not all CASB solutions are equal, and the features, deployment architectures, and supported cloud applications vary widely from provider to provider. Gartner splits the CASB market into Tier 1 providers that frequently appear on short lists for Gartner clients, and other vendors. Tier 1 providers are distinguished by their product maturity, scalability, partnerships and channel, experience in the market, ability to address common CASB use cases across industries, and market share and visibility among Gartner clients.
In its latest report, Gartner offers numerous recommendations that customers should consider when evaluating a CASB, including these considerations:
- Consider the functionality not available with API-only CASBs compared with multimode CASBs before making a decision
- Start with shadow IT discovery in order to know what’s in your environment today before moving to policy enforcement
- Look for CASBs that support the widest range of cloud applications, including those you plan to use in the next 12-18 months
- Look past CASB providers’ “lists of supported applications and services,” because there are often substantial differences in the capabilities supported for each specific application
- Evaluate whether the CASB deployment path will work well with your current network topology
- Confirm that the solution integrates with your existing security systems such as IAM, firewalls, proxies, and SIEMs
One way to evaluate claims made by CASB vendors is to speak with several customer references. Another recommended element in the selection process is conducting a proof of concept. Using real data for the proof of concept enables a potential customer to try out the analytics capabilities of a CASB, including the ability to discover all cloud services in use by employees and detect internal and external threats that could result in data loss. When you’re ready to begin looking at solutions, Skyhigh offers a free cloud audit that reveals shadow IT usage and high-risk activity within approved cloud services.
December 4, 2015
By Krishna Narayanaswamy, Co-founder and Chief Scientist, Netskope
You don’t have to be European to care about the European Commission’s pending EU General Data Protection Regulation (GDPR). Set to be adopted in 2017 and implemented the following year, carrying penalties up to 5 percent of an enterprise’s global revenues, and replacing the current Data Protection Directive and all country-level data privacy regulations, this pending law should matter to any organization that has European customers. The purpose of the GDPR is to protect citizens’ personal data, increase the responsibility and accountability of organizations that process data (and ones that direct them to do so), and simplify the regulatory environment for businesses.
The information technology community has been abuzz on the topic for some time now. What’s been missing from the conversation, however, is the cloud and how it throws a wrench into the GDPR mix. One of the biggest trends of the last decade is shadow IT. According to our latest Netskope Cloud Report, the average enterprise is using 755 cloud apps. In Europe, it’s 608. Despite increased awareness over the last year or so, IT and security professionals continue to underestimate this by 90 percent or more. This is shadow IT at its finest. So the big question is: can organizations that only know about 10 percent of the cloud apps in use really ensure compliance with the GDPR?
We partnered with legal and privacy expert, Jeroen Terstegge, a partner with Privacy Management Partners in the Netherlands who specializes in data privacy legislation. He helped us make sense of the pending GDPR as it relates to cloud, and identified six things cloud-consuming organizations need to do to comply if they serve European customers (this is all fleshed out in this white paper, by the way):
- Know the location where cloud apps are processing or storing data. You can accomplish this by discovering all of the cloud apps in use in your organization and querying to understand where they are hosting your data. Hint: The app vendor’s headquarters are seldom where your data are being housed. Also, your data can be moved around between an app’s data centers.
- Take adequate security measures to protect personal data from loss, alteration, or unauthorized processing. You need to know which apps meet your security standards, and either block or institute compensating controls for ones that don’t. The Cloud Security Alliance’s Cloud Controls Matrix (CCM) is a perfect place to start. Netskope has automated this process by adapting the CCM to the most impactful, measurable set of 45+ parameters with our Cloud Confidence Index, so you can easily see where apps are lacking and quickly compare among similar apps.
- Conclude a data processing agreement with the cloud apps you’re using. Once you discover the apps in use in your organization and consolidate those with overlapping functionality, sanction a handful and execute a data processing agreement with them to ensure that they are adhering to the data privacy protection requirements set forth in the GDPR.
- Collect only “necessary” data and limit the processing of “special” data. Specify in your data processing agreement (and verify in your DLP policies) that only the personal data needed to perform the app’s function are collected by the app from your users or organization and nothing more, and that there are limits on the collection of “special” data, which are defined as those revealing things like race, ethnicity, political conviction, religion, and more.
- Don’t allow cloud apps to use personal data for other purposes. Ensure through your data processing agreement, as well as verify in your app due diligence, that apps state clearly in their terms that the customer owns the data and that they do not share the data with third parties.
- Ensure that you can erase the data when you stop using the app. Make sure that the app’s terms clearly state that you can download your own data immediately, and that the app will erase your data once you’ve terminated service. If available, find out how long it takes for them to do this. The more immediate (in less than a week), the better, as lingering data carry a higher risk of exposure.
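The first step above – discovering which cloud apps are actually in use – is often approximated by mining outbound proxy logs. Below is a minimal sketch of that idea; the log format, the two-label domain heuristic, and the sanctioned-app list are all hypothetical and would need to match your own environment:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical list of IT-sanctioned cloud app domains.
SANCTIONED = {"salesforce.com", "sharepoint.com"}

def discover_shadow_it(log_lines):
    """Count requests per cloud domain and return the unsanctioned ones.

    Each log line is assumed to end with the requested URL, e.g.
    '2015-12-04T10:02:11 alice https://unknownapp.io/upload'.
    """
    hits = Counter()
    for line in log_lines:
        url = line.rsplit(None, 1)[-1]          # last field is the URL
        host = urlparse(url).hostname or ""
        domain = ".".join(host.split(".")[-2:])  # naive registered-domain guess
        hits[domain] += 1
    return {d: n for d, n in hits.items() if d not in SANCTIONED}

logs = [
    "2015-12-04T10:02:11 alice https://emea.salesforce.com/login",
    "2015-12-04T10:05:40 bob https://unknownapp.io/upload",
    "2015-12-04T10:07:02 bob https://unknownapp.io/upload",
]
print(discover_shadow_it(logs))  # prints {'unknownapp.io': 2}
```

In practice the unsanctioned domains would then be checked against a cloud app registry to learn where each app hosts its data, which feeds directly into the data-location and security-vetting steps above.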
Of course, if you end up accomplishing some of these steps via policy, make sure you can take action whether your users are on-premises or remote, on a laptop or mobile device, or on a managed or BYOD device.
This week we announced the availability of a toolkit that includes a couple of services and several complimentary tools to help our community understand and comply with the GDPR. You can access it here.
Cloud apps are useful for users, and often business-critical for organizations. Blocking them – even the shadow ones – would be silly at this point. Instead, follow the above six steps to bring your cloud app usage into compliance with the GDPR.
December 3, 2015
By Kevin Beaver, Guest Blogger, Lancope
Look at the big security regulations, e.g. PCI DSS, and any of the long-standing security principles and you’ll see that network segmentation plays a critical role in how we manage information risks today. The premise is simple: you determine where your sensitive information and systems are located, you segment them off onto an area of the network that only those with a business need can access and everything stays in check. Or does it?
When you get down to specific implementations and business needs, that’s where complexity comes into the picture. For instance, it may be possible to segment off critical parts of the network on paper but when you consider variables such as protocols in use, web services links, remote access connections and the like, you inevitably come across distinct openings in what was considered to be a truly cordoned-off environment.
I see this all the time in my work performing security assessments. The network diagram shows one thing yet the vulnerability scanners and manual analysis paint a different picture. Digging in further and simply asking questions such as the following highlights what’s really going on:
- How are servers, databases and applications designed to communicate with one another?
- Who can really access the segmented environment? How does that access take place?
- What areas of the original system had to be changed to accommodate a technical or business need?
- What information is being gathered across the network segment in terms of network and security analytics and what is that information really telling us?
- What else are we forgetting?
Getting all of the key players involved such as database administrators, network architects, developers and even outside vendors that support systems running in these network segment(s) and asking questions such as these will often reveal what’s really going on beyond what’s documented or what’s assumed. This is not a terrible situation in and of itself. The systems need to work the way they need to work and business needs to get done. However, this exercise highlights a new level of network complexity that was otherwise unknown – or at least unacknowledged.
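One practical way to answer the "who can really access the segmented environment" question is to test it directly rather than trust the diagram. The sketch below attempts TCP connections that policy says should be blocked; the target hosts and ports are hypothetical, and a real assessment would cover far more than a simple connect test:

```python
import socket

def check_segmentation(targets, timeout=2.0):
    """Attempt TCP connections that segmentation policy says should fail.

    targets: (host, port) pairs that must NOT be reachable from this segment.
    Returns the pairs that unexpectedly accepted a connection.
    """
    leaks = []
    for host, port in targets:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                leaks.append((host, port))  # policy violation: path is open
        except OSError:
            pass  # refused or timed out, as the segmentation policy intends
    return leaks

# Hypothetical: the cardholder-data segment should be unreachable from here.
for host, port in check_segmentation([("10.10.50.11", 1433)], timeout=1.0):
    print(f"Segmentation gap: {host}:{port} is reachable")
```

Run from each segment in turn, a check like this often surfaces exactly the undocumented openings described above: protocols in use, web services links and remote access paths that the paper design never mentioned.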
This leads me to my final point that’s obvious yet needs to be repeated: complexity and security don’t go well together. It’s a direct relationship – the more complexity that exists in your network environment, the more out of control you’re going to be. I’m confident that if we looked at the root causes of most of the known security breaches uncovered by reports such as the Cisco 2015 Annual Security Report and publicized on websites such as the Privacy Rights Clearinghouse Chronology of Data Breaches, we’d see that network complexity was instrumental in facilitating those incidents.
Putting aside politics, lack of budget and all the other common barriers to an effective information security program, you cannot secure what you don’t acknowledge. If vulnerabilities exist in your network segmentation, threats will surely come along and find a way to take advantage. It’s your job to figure out where the weaknesses are among the complexity of your network segmentation so you can minimize the impact of any attempted exploits moving forward. Otherwise, regardless of the levels of security visibility and analytics you might have, your systems will remain fair game for attack.
Kevin Beaver is an information security consultant, expert witness and professional speaker with Atlanta-based Principle Logic, LLC.
December 1, 2015
By Susan Richardson, Manager/Content Strategy, Code42
If your organization relied on the now-invalid Safe Harbour agreement to legally transfer data between the U.S. and the EU, there’s good news and bad news.
The good news? The European Commission just threw you some life rings. The governing body issued guidance Nov. 6 that outlines alternative mechanisms for legally continuing transatlantic data transfers:
Standard contractual clauses
Sometimes referred to as model clauses, standard contractual clauses are boilerplate provisions for specific types of data transfers, such as between a company and a vendor. They’re often the least costly on a short-term basis.
Binding corporate rules for intra-group transfers
These allow personal data to move freely among the different branches of a worldwide corporation. Sounds easy, but the process can be time-consuming and expensive, depending on the scope of the company. That’s because the rules have to be approved by the Data Protection Authority (DPA) in each member state from which you want to transfer data.
Derogation where contractually necessary
This exception allows for data transfers that are required to fulfill a contractual obligation. For example, when a travel agent sends details of a flight booking to an airline.
Derogation for legal claims
This exception allows for data transfers that are required to process a legal claim.
Derogation based on individual consent
This exception allows for data transfers where the individual has unambiguously consented to the proposed transfer.
The bad news? You only have until the end of January 2016 to get the new mechanisms in place before DPAs start investigating and enforcing transfer violations. Or you could hedge your bets and hold out for U.S. and EU negotiators to hammer out a Safe Harbour 2.0 agreement by then, as they’ve committed to do.
After all, the U.S. House of Representatives did surprise everyone by quickly passing the baseline requirement for moving forward on October 20th: the Judicial Redress Act would give EU citizens some rights to file suit in the States for U.S. government misuse of their data. It was received in the Senate and referred to the Committee on the Judiciary on October 21.
November 23, 2015
By TK Keanini, Chief Technology Officer, Lancope
In last week’s post, I covered the methodologies Mark Watney used to stay alive on the surface of Mars and how those lessons can be adapted for better cyber security back on Earth. As usual, this post will contain spoilers for The Martian, so close it now if you haven’t yet read the book or seen the movie.
This week I’ll discuss the mentalities and interpersonal skills that allowed the Ares 3 crew to successfully rescue Watney after he was stranded for more than a year on a foreign planet. Whether it is the launch of a manned spacecraft or defending against advanced cyber threats, these lessons can be used to pull the best possible outcome out of impossible odds.
The Power of a Cross-functional Team
In space travel, every supply and gram of weight is invaluable, much like the limited resources available to most security teams. To help cope with these limitations, every member of the Ares 3 crew served multiple functions. Watney, for instance, was both a botanist and mechanical engineer. This knowledge allowed Watney to recognize that food would be his scarcest resource, find the chemical components necessary to create arable land inside his living quarters and modify the various life support systems to make the environment suitable to plant life.
When a cyber-attack hits, you may be the only one available to address it. To be able to adequately assess and respond to the event, you need to have a working knowledge of the various tools and processes at your disposal. In addition, understanding how different systems work and how different user roles interact with the network allows you to see the security weak points and understand how an attacker may operate in your environment.
Always remember to laugh
Tense situations can take a mental toll on responders, and it is important to keep a sound state of mind to make good decisions. Watney was a serial jokester, frequently laughing at the ridiculousness of his own situation and making wisecracks about what his fellow astronauts left behind on Mars. He particularly hated disco.
Though responders are in the middle of extreme circumstances, it is important not to take yourself too seriously. Laughter helps you keep a level head and can help relieve stress, both in you and your coworkers. Then you are in a better position to make sound decisions and not to give up.
Leadership is not an option, it is a necessity
Watney never faulted his fellow astronauts for leaving him on Mars. They thought he was dead, and leaving immediately was imperative to getting the others out alive. More importantly, Commander Lewis is regretful when she finds out Watney was left alive on Mars, but instead of letting that regret paralyze her, she focuses on the next course of action.
Tough situations need leaders who will make hard calls and live with them. CISOs and other security leaders are responsible for choosing which tools to implement and what practices to employ. When a cyber-attack occurs, they need to be ready to use those tools instead of wishing they had something else.
Communication makes your job easier
One of Watney’s largest challenges throughout The Martian is his inability to communicate with mission command or his own crew. Watney goes on a cross-country trip to find the Pathfinder probe just so he can use it to establish communication. It works but only until he accidentally fries the machinery a few pages later. Fortunately, we do not have this problem, but many cyber security professionals still fail to communicate effectively in the event of an attack.
It makes sense. After all, we are usually busy investigating the attack and trying to prevent data loss. But don’t forget that good communication in an attack helps prevent duplication of efforts and generally helps the entire security team respond effectively.
In a more general sense, the security team needs to be visible to the rest of the organization. Keeping all employees abreast of ongoing security issues reminds them to be vigilant against phishing and other forms of social engineering. Remember, they may know their area of the network better than you, and might be able to identify something abnormal there before you do. Of course, there are some exceptions to this mode of communication. For instance, if an insider threat is suspected, it is likely better to keep that information to a small number of individuals until actions are taken, but for the most part, regular communication with the larger organization is a good thing.
Roles are important
While versatility is a modern virtue, it is important to understand what your role is in a given scenario, even if it changes often. The crew members of Ares 3 had specializations that enabled them to perform specific duties, but they were also general enough that they could fulfill whatever role was needed in a time of emergency. While Watney was forced to rely on his own ingenuity to survive on Mars, his rescue was left almost entirely in the hands of his fellow crewmates. Each had to perform a duty in the rescue, and several had to suddenly change that role when the rescue attempt started to go south. The important thing is they were able to shift responsibilities quickly but with a clear understanding of who was best suited to perform each role, and it was all organized with a clear order of command.
In the world of cyber security, where organizations often deploy varied tools for detection, mitigation and policy enforcement, it is essential to utilize people to their greatest strengths. Investigators, operations and management all have a role to play, and while they should be flexible according to needs, they work best with what they know.
Personal connections matter
Massive amounts of money, resources, time and energy went into rescuing Watney from Mars. His struggle became a weekly news segment on Earth and no expense was spared to retrieve him alive because people feared for him, hoped for him and wanted to keep him safe. Never forget that there are real victims of data breaches. Customers, clients and employees can be deeply hurt for the simple act of doing business with your organization, so keep that in mind when you are rushing through those last few reports on Friday afternoon.
The bonds between the Ares 3 crew were unshakable, as is expected when six people spend months together traveling across the solar system to a new planet. This type of relationship should be encouraged among security practitioners because it facilitates smoother operations in the event of an emergency and reduces blaming. When a team cares about each other and their mission, attacks can be stopped and catastrophes can be salvaged.
The Martian contains many lessons that can be adapted to cyber security, but in the end it is still a work of fiction. Reality is more complex and difficult to grapple with, but we need these basic driving forces to properly prepare for disaster and to operate well under pressure. Mark Watney may not be our CISO, but we can take what he learned on Mars and use it to beat an advantaged enemy and difficult odds.
November 20, 2015
By Willy Leichter, Global Director of Cloud Security, CipherCloud
Last week’s tragic events in Paris, and fears over similar terrorist attacks around the world, have revived a long-standing debate. Early evidence suggests that the terrorists used a readily available encryption app to hide their plans and thwart detection by law enforcement. This has led to finger-pointing by intelligence officials and politicians demanding that something be done to control this dangerous technology. Keep in mind that the terrorists also used multiple other dangerous technologies including consumer electronics, explosives, lots of guns, cars, trains and probably airplanes – but these are better understood and attract less grandstanding about controlling them.
Setting aside the obvious privacy concerns, the argument for weakening encryption ignores a basic question – can this technology really be controlled? More specifically, those arguing for diluted encryption are demanding “back doors” that would allow easier access by law enforcement. For many reasons, this idea simply won’t work and will have no impact on bad guys. It also could have serious unintended negative consequences. Here are a few reasons why:
- Encryption = Keeping Secrets
Encryption is more of an idea than a technology, and trying to ban ideas generally backfires. For thousands of years, good and bad actors alike have used encryption to protect secrets while communicating across great distances.
In the wake of traumatic public events, it’s easy to start thinking that only bad guys need to keep secrets, but that’s clearly not true. Governments must keep important secrets. Businesses are legally required to protect secrets (such as their customers’ personal information) and individuals have reasonable expectations (and constitutional guarantees in many countries) that they can keep their personal data private. Encryption, if properly applied, can be a highly effective way to protect legitimate and important secrets.
- Who Keeps the Keys to the Back Door?
Allowing government agencies unfettered access to encrypted data is not only Orwellian – it’s also simplistic and unrealistic. Assuming back doors are created, who exactly should have access? Beyond the NSA, FBI, and CIA, should we share access with British Intelligence? How about the French? The Germans? The Israelis? Saudi Arabia? How about the Russians or the Chinese? Maybe Ban Ki-Moon can keep all the keys in his desk drawer at the UN…
As we all know, the Internet doesn’t respect national boundaries and assuming that all countries will cooperate and share equal access to encryption back doors is naïve. But if governments only require companies within their respective jurisdictions to provide back doors, the bad guys will simply use similar, readily available technology from other places.
- Keys to the Back Doors Can Easily Get into the Wrong Hands
If there are back doors to encryption, hackers will almost certainly steal and exploit them. As the Snowden revelations demonstrated, large government bureaucracies are not particularly good at protecting secrets or ensuring that the wrong people don’t get access. The OPM hack, which uncovered millions of government employees’ data (purportedly by Chinese hackers), highlights the risks when large numbers of humans are involved.
In a very real way, the existence of encryption back doors would represent a serious threat to data security across the government, business and private sector.
- To Control Encryption You Need to Control Math
Ironically, while some government agencies seek to crack encryption, other agencies such as NIST are chartered with testing and validating the security efficacy of encryption algorithms and implementations. The FIPS 140-2 validation process is globally recognized and provides assurance that an encryption implementation meets rigorous, publicly documented security requirements.
Today’s best encryption is based on publicly vetted and widely available algorithms such as AES-256. Most smart, college-level math majors could easily implement effective encryption based on a multitude of publicly available schemes.
So far I haven’t heard policy pundits recommend that potential terrorists be barred from high-level math education. Preventing clever people anywhere in the world from applying readily available encryption or developing their own encryption schemes is impossible.
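The point about publicly available schemes is easy to demonstrate. As a toy illustration (not production cryptography, and not a substitute for a vetted algorithm like AES-256), a one-time pad can be written in a few lines of standard-library Python. It is a centuries-old idea, and anyone with basic programming skill could produce it:

```python
import secrets

def otp_encrypt(plaintext):
    # Generate a random key exactly as long as the message: a classic one-time pad.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return ciphertext, key

def otp_decrypt(ciphertext, key):
    # XOR is its own inverse, so decryption is the same operation with the same key.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

ciphertext, key = otp_encrypt(b"attack at dawn")
recovered = otp_decrypt(ciphertext, key)
```

Used correctly (a truly random key, used once, as long as the message), a one-time pad is information-theoretically unbreakable, which is precisely why no back-door mandate can stop a determined adversary from rolling their own.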
- The Tools Do Not Cause the Actions
It does appear that the Paris terrorists used commercial encryption to hide some of their communications, and it must be acknowledged that this may have hindered law enforcement. They probably also used off-the-shelf electronics to detonate their explosives, drove modern rental cars to haul people and weapons, and perhaps were radicalized in the first place through social media. Today’s technology accelerates everything in ways that are often frightening, but going backwards is never an option. And the tools, no matter how advanced, do not create the murderous intent behind terrorism.
Readily available technology likely made their jobs easier, but in the absence of easy to find encryption tools, the terrorists could have found many other effective ways to hide their plans.
- Neutering Encryption Will Hurt Legitimate Businesses
So let’s imagine that in the heat of terrorist fears, the US, UK and a few other governments demand that companies within their jurisdictions create and turn over encryption back doors. Confidence in security technologies from those countries would plummet, while creative entrepreneurs in many other countries would quickly deliver more effective security products.
The growth of the Internet as a trusted platform for business has been closely tied to encryption. The development of SSL encryption by Netscape in the 90s enabled e-commerce and online banking to flourish. And today, encryption is playing a critical role in creating the trust required for the rapid growth of cloud applications.
There are many recent examples of governments trying to legally close barn doors after the horses have long since disappeared. Ironically, the US government already bars the export of advanced encryption technology to rogue states and terrorist groups including ISIS. Clearly this ban had zero effect on the terrorists’ ability to easily access encryption technology.
We live in scary times and should never underestimate the challenges we all face in deterring terror. But latching onto simplistic solutions that will not work does not make us safer. In fact, if we undermine the effectiveness of our critical security technology and damage an important industry, we will be handing the terrorists a victory.
November 20, 2015
By Rachel Holdgrafer, Content Business Strategist, Code42
CryptoWall has struck again—only this time it’s nastier than before. With a redesigned ransom note and new encryption capabilities, BleepingComputer.com’s description of the “new and improved” CryptoWall 4.0 sounds more like a marketing brochure for a well-loved software product than a ransom demand.
Like the iterations of CryptoWall that came before the 4.0 version, the only way to get your files back is to pay the ransom in exchange for the encryption key or wipe the computer clean and restore the files from an endpoint backup archive. The FBI agrees, stating “If your computer is infected with certain forms of ransomware, and you haven’t backed up that machine, just pay up.”
In addition to encrypting the data on an infected machine and demanding a ransom for the decryption key, CryptoWall 4.0 now encrypts the filenames on an infected machine too, leaving alphanumeric strings where file names once were.
The most significant change in CryptoWall 4.0 is that it now also encrypts the filenames of the encrypted files. Each file will have its name changed to a unique encrypted name like 27p9k967z.x1nep or 9242on6c.6la9. The filenames are probably encrypted to make it more difficult to know what files need to be recovered and to make it more frustrating for the victim.
Not unlike Bill Miner, infamously known as the Gentleman Robber, CryptoWall 4.0 makes a farcical attempt at politeness. CryptoWall 4.0’s ransom note reassures its victims that the infection of their computer is not done to cause harm and even congratulates its victims on becoming part of the CryptoWall community, as if it were some sort of honor.
CryptoWall Project is not malicious and is not intended to harm a person and his/her information data. The project is conducted for the sole purpose of instruction in the field of information security, as well as certification of antivirus products for their suitability for data protection. Together we make the Internet a better and safer place.
Ransomware is a lucrative business. It is estimated that the CryptoWall virus alone cost its victims more than $18 million in losses and ransom fees from April of 2014 to June of 2015. In the spirit that being robbed doesn’t have to be a bad experience, CryptoWall 4.0 makes a bad attempt at customer service, claiming “we are ready to help you always.” Additionally,
CryptoWall 4.0 continues to utilize the same Decrypt Service site as previous versions. From this site a victim can make payments, find out the status of a payment, get one free decryption, and create support requests.
In closing, the ransom note states,
…that the worst has already happened and now the further life of your files depends directly on your determination and speed of your actions.
Whether hackers use CryptoLocker, CryptoWall, CTB-Locker, TorrentLocker or one of the many variants, the outcome is the same. Users have no choice but to pay the ransom unless they have endpoint backup in place. Even with the best tech resources, breaking the encryption used to lock the files without the key would require several lifetimes. With automatic, continuous backup, however, end users never have to pay the ransom because a copy of their data is always preserved.
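The backup principle at work here can be sketched in a few lines. The snippet below is a minimal illustration of versioned endpoint backup, not any vendor’s actual product: each run copies files into a new timestamped snapshot folder, so even if ransomware encrypts the live files, an earlier clean snapshot survives for restore. Directory names and file paths are hypothetical.

```python
import shutil
import time
from pathlib import Path

def backup_files(source_dir, archive_dir):
    """Copy every file under source_dir into a new timestamped snapshot
    folder under archive_dir. Earlier snapshots are never overwritten,
    so a pre-infection copy of the data always remains available."""
    snapshot = Path(archive_dir) / time.strftime("%Y%m%d-%H%M%S")
    for path in Path(source_dir).rglob("*"):
        if path.is_file():
            dest = snapshot / path.relative_to(source_dir)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)  # copy2 preserves timestamps/metadata
    return snapshot
```

A real endpoint backup product adds continuous change detection, deduplication and off-device storage, but the core guarantee is the same: restore points that the ransomware cannot reach.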
November 19, 2015
By Sam Bleiberg, Corporate Communications Manager, Skyhigh Networks
In the not-too-distant past, service providers had a tough time convincing enterprise IT departments that cloud platforms were secure enough for corporate data. Fortunately, perspectives on cloud have matured, and more and more organizations are migrating their sanctioned file sharing applications to the cloud. Fast forward to 2020, when Gartner predicts 95% of cloud security failures will be the customers’ fault. Skyhigh Networks’ latest Cloud Adoption & Risk Report shows the stakes are high for preventing “cloud user error.”
Enterprise-ready services have extensive security capabilities against external attacks, but customers bear the ultimate responsibility for ensuring sensitive data is not improperly disclosed. Just as attackers circumvent perimeter defenses such as powerful firewalls in favor of stolen credentials or alternate attack vectors, hardened cloud services push attackers toward the vulnerabilities inherent in day-to-day use of applications. In addition to compromised accounts, in which attackers gain access to a cloud service via stolen user credentials, enterprises need to worry about malicious insiders, compliance violations, and even accidental mismanagement of access controls.
The report, which analyzes actual usage data from over 23 million enterprise employees, uncovered an epidemic of file over-sharing. Whether IT is aware or not, cloud-based file-sharing services serve as repositories of sensitive data for the average organization. According to the report, 15.8 percent of documents in file-sharing services contain sensitive data. The employees responsible for sensitive data are not a small group: 28.1% of all employees have uploaded a file containing sensitive data to the cloud.
Most concerning is the lack of controls on who can access files once uploaded to the cloud. 12.9 percent of files are accessible by any employee within the organization, which poses a significant liability given the size of the organizations analyzed. Employees shared 28.2 percent of files with external business partners. Given the critical role business partners have played in several highly publicized breaches, companies should closely monitor data shared outside the organization, even with trusted partners. Although they make up only 6 percent of collaborations, personal email addresses raise concerns over the recipient’s identity and necessitate granular access policies; companies may not want to grant the ability to download files to personal email domains, for example. Finally, 5.4 percent of files are available to anyone with the sharing link. These documents are just one forwarded email away from ending up in the hands of a competitor or other unwanted recipient.
Breakdown of Sharing Actions
What are the different profiles of sensitive data stored in the cloud? Confidential data, or proprietary information related to a company’s business, is the biggest offender, making up 7.6 percent of sensitive data. Personal data is second at 4.3 percent of files. Third is payment data at 2.3 percent, and last is health data at 1.6 percent. The majority of these files, 58.4 percent, are discovered in Microsoft Office files.
Files Containing Keyword in the File Name
Furthermore, a surprising number of workers violate best practices for securely storing important information in the cloud. Using keywords such as ‘passwords’, ‘budget’, and ‘salary’ when naming files makes it easy for attackers to locate sensitive information, and IT security professionals typically advise against this practice. Convenience all too often trumps security, unfortunately. Past breaches have revealed instances in which credentials for multiple accounts were kept in folders named “Passwords”. The report found that the average company had 21,825 documents stored across file sharing services containing one or more of these red flags in the file name. Out of these files, 7,886 files contained ‘budget’, 6,097 ‘salary’, and 2,217 ‘confidential’.
Lastly, the data revealed a few “worst employees of the month.” One prolific user was responsible for uploading 284 unencrypted documents containing credit card numbers to a file sharing service. Another user uploaded 46 documents labeled “private” and 60 documents labeled “restricted.” In all seriousness, while it’s easy to point the finger and call these users bad employees, it’s likely they were simply trying to do their jobs using the best tools available to them. The onus lies with IT to make the secure path the easy path.
With more companies migrating sensitive data to the cloud, attackers will increase their efforts to exploit vulnerabilities in enterprise use of cloud services. Tellingly, attacks against cloud services increased 45% over the past year. Locating sensitive data in file-sharing services is step one for companies aimed at preventing the next generation of cloud-based threats.