Five Ways Your Employees Sidestep Information Security Policies

By Susan Richardson, Manager/Content Strategy, Code42

A good employee finds ways to overcome roadblocks and get the job done. But in the case of enterprise IT security, good employees may be your biggest threat. In fact, a recent Dell survey found that nearly seventy percent of IT professionals believe employee workarounds are the greatest risk to their organizations’ security.

We’ve all been there: juggling numerous log-in credentials, following tedious document transfer policies, struggling with subpar app functionality—all the while knowing there’s a better way. IT security policies have a knack for getting in the way of getting the job done. Dell also found that ninety-one percent of workers feel their work productivity is negatively impacted by IT security measures. So what are some of the most common workarounds used by imaginative, driven but often password-fatigued employees?

Easy-to-remember passwords. The average person today has twenty-five personal and professional digital access points. Changing those twenty-five passwords every ninety days, as recommended, means creating and recalling 125 passwords each year. It’s no wonder people fall back on easy-to-remember passwords; unfortunately, simple passwords negate much of the security benefit of password-based authentication. One 2015 study found that seventy-three percent of online accounts are guarded by duplicated passwords—that is, the same key unlocks many different doors. Another study found that even those who try to be clever with unique passwords are unlikely to beat the hackers: 1 in 2 passwords follow one of thirteen predictable (read: hackable) patterns. And finally, to skirt the password-reset problem altogether, some savvy users simply call the help desk and claim they’ve forgotten their password. The IT-driven reset often bypasses the regular password-reset requirements, meaning employees can continually recycle the same password. Thanks to this workaround, TeleSign found that 1 in 2 people are using passwords that are at least five years old.
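
To make “predictable pattern” concrete, here is a minimal sketch of a pattern check, assuming a hypothetical list of three well-known structures (a capitalized word followed by digits, keyboard walks and common base words, very short passwords). The patterns and sample passwords are illustrative only; they are not the thirteen patterns referenced in the study.

```python
import re

# Hypothetical, illustrative checks for a few well-known predictable structures.
CHECKS = [
    (re.compile(r"^[A-Z][a-z]+\d{1,4}[!.]?$"), "capitalized word followed by digits"),
    (re.compile(r"(qwerty|asdf|zxcv|12345|password)", re.IGNORECASE), "keyboard walk or common base word"),
    (re.compile(r"^.{1,7}$"), "shorter than 8 characters"),
]

def flag_predictable(password: str):
    """Return descriptions of the common patterns a password matches."""
    return [label for pattern, label in CHECKS if pattern.search(password)]

if __name__ == "__main__":
    for pw in ("Summer2015!", "qwerty123", "x7#Qm!vR2pLz"):
        print(pw, "->", flag_predictable(pw) or "no common pattern matched")
```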

Tricking the session time-out. Most systems and applications have automatic session time-out features based on a defined idle period. But many organizations take this security feature a step further, using proximity detectors that time out a user’s session as soon as they step out of range. However, many users “beat” this feature by covering the detector with a piece of tape or a cup. When they do step away from their desks, their devices remain completely unsecured and vulnerable.

Transferring documents outside the secure network. The mobile workforce demands anytime-anywhere access to their documents and data. Most organizations have strict protocols on accessing data through secure network connections, such as a virtual private network (VPN). But many mobile workers aim to streamline their productivity by circumventing these protocols: emailing sensitive documents to themselves, storing files in a personal Dropbox account or other public cloud, and even taking photos or screenshots with a smartphone and texting the images.

Intentionally disabling security features. One of the most popular workarounds is also the most straightforward. Where possible, users will simply turn off security features that hinder their productivity. This is especially true for BYOD workplaces, where employees have greater control over the features, functionalities and settings of their endpoint devices.

The Post-It Note Pandemic. The most common workaround is also very simple. A survey by Meldium found that most people record their passwords somewhere—whether in a spreadsheet containing all their log-in credentials, on their smartphones, or on a piece of paper, such as a trusty Post-It Note™—likely affixed to the very device it is intended to secure.

So, what’s an IT administrator to do with all these well-intentioned, hard-working, security risk takers? Most experts agree that communication is key. IT security policies should avoid edicts without explanation, leaving the end user with productivity loss and no apparent upside. Instead, many organizations are implementing more rigorous IT security training for all employees, showing them specifically how security protocols protect against data leakage, data breaches and other threats, highlighting how workarounds put data (and their jobs) at risk, and keeping IT security top-of-mind with regular communications and meetings with staff.

Download the executive brief, Protecting Data in the Age of Employee Churn, to learn more about how endpoint backup can mitigate the risks associated with insider threat.

A Perspective on the Next Big Data Breach

By Kevin Beaver, Guest Blogger, Lancope

In looking at the headlines and breach databases, there haven’t been any spectacular, high-visibility incidents in recent weeks. It’s almost as if the criminals are lurking in the weeds, waiting to launch their next attack during the busy, upcoming holiday season. After all, the media tends to sensationalize such breaches given the timing and that’s part of the payoff for those with ill intent. Whether the next big breach will impact consumers, corporate intellectual property or national security, no one really knows. It may be that we witness all of the above before year’s end. One thing’s for sure, the next big data breach will be predictable.

Once the dust settles and the incident response team members, investigators and lawyers have done their work and had their say, I can foresee how it’s all going to go down. It’s not at all unlike what happened a couple of years ago with the crippling snowstorms that we experienced in my hometown of Atlanta:

  • There’s an impending threat that most people are aware of. Some argue that threats are evolving. I’m not convinced that’s true. I think the technologies and techniques the threats use against us are maturing, but the threats themselves – criminal hackers, malicious insiders, unaware users, etc. – have been the same since the beginning.
  • People will get caught “off-guard” and get themselves (and their organizations) into a pickle.
  • The subsequent impact will be a lot worse than expected, or assumed.
  • Key individuals will ponder the situation and try to figure out who’s to blame.
  • Management will vow to never let it happen again, including but not limited to, providing short-term political and budgetary support for much-needed security initiatives.
  • Things will go back to normal – the typical daily routine will set back in and then months, perhaps years, will go by. Either the same people will forget the pain of what transpired or new people will be in charge and then, all of a sudden, out of nowhere – it’ll happen again.

With practically all data breaches, there are no surprises. There’s really nothing new. It’s the same story that’s repeated time and again. Comedian Groucho Marx was quoted as saying “Politics is the art of looking for trouble, finding it everywhere, misdiagnosing it and then applying the wrong remedies.” In most cases, the same can be said for information security. There’s a lot of talk. Some tangible action (often wheel spinning and going through the motions). There are even policies and contracts that are signed and audits that come up clean. Yet, history repeats itself.

As businessman Warren Buffett once said, there seems to be some perverse human characteristic that likes to make easy things difficult. I know it’s not truly “easy” to manage an overall information security program. I don’t envy CISOs and others in charge of this business function. However, knowing what we know today, it is easy to not repeat the mistakes of others. It’s also easy to become complacent. That’s where you have to be really careful. Too many people feel like they’ve “made it” – that they’ve got everything in place in order to be successful. Then they end up relaxing too much and letting their guard down. Then they become vulnerable again. It’s a vicious, yet predictable, cycle that leads to breach after breach after breach.

When all is said and done, your primary goal should be to determine what the very worst thing is that could happen on your network and then go about doing whatever it takes to make sure that worst thing doesn’t happen. That’s how you’ll prevent the next data breach from happening to your organization. Let the criminals go pick on someone else.

Kevin Beaver is an information security consultant, expert witness and professional speaker with Atlanta-based Principle Logic, LLC.

Code42 CSO says, “Beware the data-stealing Grinch”

By Rick Orloff, Chief Security Officer, Code42

Historically, corporations viewed security as an overhead expense required to meet regulatory controls and audits. As we head into a new year, we know breaches are inevitable and questions about security and data protection are being asked at a higher level. Boards of directors and C-level executives want situational awareness. They want to know, as much as they can, how effective their security programs are and how they compare to peer group programs.

Companies are learning that their security tech stack should enable business functions, not restrict them. Companies are focusing on securing many different layers of their corporate infrastructure, but the real focus is on the data (e.g., customer PII, HIPAA-protected health information, financial records and intellectual property). In today’s workplace, a company’s most critical data isn’t living on a desktop connected to a server—it’s living on laptops, tablets, third-party applications and mobile devices. Many of those devices spend less than half of their time in the office, and represent the disappearing network edge, which can mean an increased risk of data loss. Now and into 2016, the data living on endpoint devices has become a central pillar of a company’s security strategy.

But data protection isn’t just for companies, especially this time of year. We should all follow these four tips to protect our data and ourselves during the busy holiday shopping season:

TIP 1: Don’t shop online using borrowed or public computers, such as those at a cyber cafe. A borrowed computer may be infected and could be recording all of your information.

TIP 2: Public Wi-Fi spots have significant security risks and should be avoided when possible. You’re much safer using your own Wi-Fi or cellular connection.

TIP 3: Protect your passwords—and your data. Do not reuse passwords for multiple accounts. Your email password is the most important password you have. If a hacker can access your email, he or she can simply go to your bank’s website and request a password reset, and quickly gain access to your personal information and bank account.

TIP 4: Do not use your ATM card for any shopping. If you’re the victim of fraud, you often don’t know until all of the cash has been drained from your account. It’s much better to use a credit card as a security buffer. If there is fraud, credit card issuers typically reverse the charges in minutes, but it’s not always the same situation with an ATM card.

How can people check to make sure they are going to a reputable website versus a fake one?
Customers should not provide their personal information to e-commerce sites with which they are not familiar. Secure sites use Secure Sockets Layer (SSL) and depict a “lock image” in or near their website address. As a precaution, it’s also best to always make sure antivirus software is updated.
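To illustrate what that “lock image” actually asserts, here is a minimal sketch using the Python standard library and an example hostname: it opens a TLS connection with certificate and hostname verification enabled and reports whether the site’s certificate validates against the system’s trusted CAs. The hostname is a placeholder, not a recommendation.

```python
import socket
import ssl

def certificate_is_valid(hostname: str, port: int = 443) -> bool:
    """Return True if the site presents a certificate that chains to a trusted
    CA and matches the hostname -- roughly what the browser padlock asserts."""
    context = ssl.create_default_context()  # enables CA and hostname checks
    try:
        with socket.create_connection((hostname, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=hostname) as tls:
                return bool(tls.getpeercert())  # populated only after validation
    except (ssl.SSLError, OSError):
        return False

if __name__ == "__main__":
    # Example hostname only; substitute the shop you are checking.
    print(certificate_is_valid("www.example.com"))
```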

To learn more about how endpoint backup can help your organization protect its data, download the ebook, Backup & Beyond.

Predicting Cyber Security Trends in 2016

By TK Keanini, Chief Technology Officer, Lancope

One of my annual rituals is to take stock of the cyber security industry and determine what trends and challenges we are likely to see in the coming year. In the ever-evolving cyberspace, technology changes on a daily basis, and attackers are always there to take advantage of it.

But before we get into what is coming, I’d like to look back on my predictions for 2015 and see how clear my crystal ball was.

2015: Three out of four

Last year, I predicted four major cyber security trends would rise to prominence – or continue rising – in 2015: Muleware, re-authentication exploitation, ransomware and targeted extortionware.

Three out of the four came true with muleware being the odd one out because it is difficult to track. That said, there were some rumblings of hotel staff physically delivering exploits to laptops left in the rooms of certain persons of interest.

Re-authentication exploitation remains popular as more attackers realize a compromised email account can facilitate the theft of many different kinds of accounts for other websites. Once an attacker controls your email account, he can begin the “forgot password” process of a website and steal the password before you notice. We need to stop looking at password authentication as a single point in time and instead treat it as an entire lifecycle. You could have the strongest password system in the world, but if the re-authentication process is weak, then the attacker has the upper hand.

Ransomware continues to thrive in the current environment and has expanded from only Windows to Apple, Android and Linux. These attacks are countered with proper backups, which are cheaper and easier than ever, but organizations are still failing to back up their data. This method has proved to be lucrative for attackers, and as long as people are still vulnerable to it, ransomware will become even more popular.

Targeted extortionware seeks to steal sensitive data about a person and threaten to publish the data publicly if the victim doesn’t pay up. Everyone has something they would like to keep secret, and some are undoubtedly willing to pay for it. Events like the breach at adult matchmaking site Ashley Madison led to cases of extortionware, and this trend is likely to continue in 2016.

What to expect in 2016

If 2014 was the “Year of the Data Breach,” then 2015 is on track to match it. We saw insurance companies, dating sites, U.S. federal agencies, surveillance technology companies and more fall victim to attacks this year, and there is no reason to believe it is going to slow down in 2016.

Cracking as a service
Encryption has always been a moving target. As technology becomes more advanced, encryption has to evolve with it or else it becomes too easy to crack. Trends such as Bitcoin mining have already led to large farms of compute clusters that could be set up for cryptanalysis without a lot of effort. Like any other Software-as-a-Service offering, using one could be as simple as setting up an account. You could submit captured ciphertext or a hash with some metadata and within a few minutes – maybe even seconds – a clear-text WEP key is delivered. This could extend to different hashes and ciphertext. Charging per compute cycle would make it an elastic business. A development such as this would require everyone to utilize longer key lengths or risk compromise.
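
To put the key-length point in perspective, here is a small back-of-the-envelope calculation, assuming a hypothetical cracking farm that tests 10^12 keys per second; the rate is an assumption for illustration, not a benchmark of any real service.

```python
# Rough, illustrative arithmetic: average time for an exhaustive key search as
# symmetric key length grows, assuming a hypothetical farm testing 1e12 keys/s.
GUESSES_PER_SECOND = 1e12
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

for bits in (40, 64, 80, 128):
    keyspace = 2 ** bits
    # On average the key is found after searching half the keyspace.
    years = (keyspace / 2) / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits:3d}-bit key: roughly {years:.2e} years on average")
```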

DNA breach
Every year, more and more sensitive data is stored on Internet-connected machines, and health data in particular is on the rise. Millions of people use DNA services that track an individual’s genetic history or search for markers of disease, and it is only a matter of time until a DNA repository is compromised. Unlike a credit card number or an account password, health information cannot be changed, which means once it is compromised, it is compromised forever. This makes it an exceptionally juicy target for attackers. A breach like this could affect millions, and compensation would be impossible.

Attack on the overlay network
As more and more organizations rush to develop and implement software-defined networking (SDN), there is widespread adoption of microservice architectures built on Docker containers. In the case of Docker, VXLAN tagging facilitates an overlay network that defines the structure of the system of applications. This could have severe security implications if there is no effective entity authenticating and checking the tags. Without adequate authentication, attackers could impersonate or abuse a tag, giving them privileged access to the system and the data stored within.

VXLAN is only one example of overlay technology, and frankly, there has not been enough threat modeling to determine how vulnerable it is to attack. Like all new technologies, if we don’t give enough thought to security during development, attackers will discover the vulnerabilities for us. There will be exploitation of overlay networks in 2016, and then defenders will be forced to implement security in the middle of a vulnerable and hostile environment.
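
To ground the point about authenticating and checking the tags, here is a minimal sketch that parses the 8-byte VXLAN header defined in RFC 7348 and rejects frames whose 24-bit VNI is not on an allowlist. The allowed VNI values are hypothetical, and a real deployment would enforce this in the virtual switch or gateway rather than in application code.

```python
# Minimal sketch: parse the 8-byte VXLAN header (RFC 7348) that follows the
# outer UDP header, and reject frames whose VNI is not explicitly allowed.
ALLOWED_VNIS = {1001, 2002}  # hypothetical overlay segments

def parse_vxlan_header(header: bytes):
    """Return (flags, vni) from an 8-byte VXLAN header."""
    if len(header) < 8:
        raise ValueError("truncated VXLAN header")
    flags = header[0]                         # the 0x08 ("I") bit marks a valid VNI
    vni = int.from_bytes(header[4:7], "big")  # 24-bit VXLAN Network Identifier
    return flags, vni

def frame_is_permitted(header: bytes) -> bool:
    flags, vni = parse_vxlan_header(header)
    return bool(flags & 0x08) and vni in ALLOWED_VNIS

if __name__ == "__main__":
    # Hand-built example header: I flag set, VNI = 1001, reserved bytes zero.
    sample = bytes([0x08, 0, 0, 0]) + (1001).to_bytes(3, "big") + bytes([0])
    print(frame_is_permitted(sample))  # True
```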

Namespace is the new battleground
Software developers are quickly adopting container technology to ensure performance is consistent across different machines and environments. When hypervisor-based virtualization became common, attackers learned how to compromise the hypervisor to gain control of the operating systems on virtual machines. With container technology like Docker, these attacks take aim at namespaces in userland, including networking, processes and filesystem namespaces. In the coming year, there will be attacks originating from malicious containers trying to share the same namespace as legitimate ones. A compromise like this could give attackers complete control of the container and potentially allow them to erase all evidence of the attack.
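
As a small illustration of what “sharing a namespace” means at the operating system level, the sketch below compares the namespace links Linux exposes under /proc; a containerized process that unexpectedly shares namespaces with the host’s init process is a warning sign. This is a Linux-only sketch, and reading another process’s /proc entries may require elevated privileges.

```python
import os

# Linux exposes each process's namespaces as symlinks under /proc/<pid>/ns.
NAMESPACE_KINDS = ("net", "pid", "mnt", "ipc", "uts")

def shared_namespaces(pid_a: int, pid_b: int):
    """Return the namespace kinds two processes share (same inode = same namespace)."""
    shared = []
    for kind in NAMESPACE_KINDS:
        link_a = os.readlink(f"/proc/{pid_a}/ns/{kind}")
        link_b = os.readlink(f"/proc/{pid_b}/ns/{kind}")
        if link_a == link_b:
            shared.append(kind)
    return shared

if __name__ == "__main__":
    # Compare this process against PID 1 (typically the host's init).
    print(shared_namespaces(os.getpid(), 1))
```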

There are companies working on cryptographic methods of securing namespaces, but until a major attack on these systems takes place, there won’t be a lot of demand for this as a required feature.

New approaches for new technology
Whenever a new technology receives widespread adoption, people often attempt to apply old security principles to it. In some cases that works, but it often creates inefficiencies and vulnerabilities. When virtual machines first became popular, operators would often attempt to patch VMs as if they were physical machines. It didn’t take long for them to realize it was quicker and easier to just kill the VM and start a new one with up-to-date software.

As we run headlong into new technology and continue to connect more and more sensitive information to the Internet, we must consider the security implications before a breach occurs. Every year attackers develop more ways to monetize and facilitate cybercrimes, and if we fail to evolve with them then we are inviting disaster.

Smart City Security

By Brian Russell, Co-Chair CSA IoT Working Group

Gartner defines a smart city as an “urbanized area where multiple sectors cooperate to achieve sustainable outcomes through the analysis of contextual, real time information shared among sector-specific information and operational technology systems,” and estimates that 9.7 billion devices will be used within smart cities by the year 2020.

A smart city connects multiple technologies and services together, often in manners that were not previously thought possible. According to Juniper Research, there are five essential components of a smart city: technologies, buildings, utilities, transportation and road infrastructure, and the smart city itself. All of these building blocks are brought together, according to the Intelligent Community Forum (ICF), to “create high quality employment, increase citizen population and become great places to live and work.”

There are myriad use cases for smart cities. City Pulse provides a great starting point for defining some of these. In the near future, citizens will benefit from improved service delivery as cities enable capabilities such as smart waste management, pollution sensors and smart transportation systems. Cities will also be able to stand up improved security and safety capabilities – from managing crisis situations using coordinated aerial and ground robotic tools, to monitoring seniors to identify elevated stress levels (e.g., potential falls or worse) in their home. New services will likely be stood up, both public and private, to leverage these new capabilities.    

This smart city ecosystem is dynamic.  This is true for the devices that will make up the edges of the smart city, as well as the cloud services that will support data processing, analytics and storage.  The data within a smart city is itself dynamic, crossing private and public boundaries, being shared between organizations, being aggregated with other data streams and having metadata attached, throughout its lifetime.  This all creates significant data privacy challenges that must be adequately addressed.    

These complex smart city implementations also introduce challenges to the task of keeping them secure. As an example, services will likely be implemented that ingest data from personal devices (e.g., connected automobiles, heart-rate monitors, etc.), making it important that only permitted data is collected and that citizens opt in. Interfacing with personally owned devices also introduces new attack vectors, requiring that solutions for determining and continuously monitoring the security posture of these devices be designed and used.

City infrastructures will also be updated and extended to support new smart capabilities. There are smart city management solutions that tie together inputs from smart devices and sensors and enable automated workflows. These solutions can be hosted in the cloud and can reach out across the cloud through integration with various web services, creating a rich attack surface that must be evaluated on a regular basis as new inputs and outputs are added. This requires the upkeep of a living security architecture and routine threat modeling activities.  

Understanding the threats facing smart cities and the vulnerabilities being introduced by new smart city technologies requires a collaborative effort between municipalities, technology providers and security researchers. Technology providers would be well served to review secure development guidance from organizations such as the Open Web Application Security Project (OWASP), and smart device vendors should make use of third-party security evaluations from organizations such as builditsecure.ly. Municipalities should look to secure implementation guidance from organizations such as the Securing Smart Cities initiative, as well as the Cloud Security Alliance (CSA).

The CSA Internet of Things (IoT) Working Group (IoTWG) recently teamed up with Securing Smart Cities, to publish a document titled Cyber Security Guidelines for Smart City Technology Adoption. This document is an effort to provide city leaders with the knowledge needed to acquire secure smart city solutions, and includes guidance on technology selection, technology implementation and technology disposal. Download the document.  

The CSA IoTWG will continue to support the Securing Smart Cities initiative in their focus on providing security guidance for smart cities, and we will continue our work on providing security guidance for the IoT as a whole, to include recommendations for securing IoT cloud services, research on the uses for blockchain technology to secure the IoT, and guidance on how to design and develop secure IoT components. Keep a look-out for new publications from our WG.  

Join the CSA IoTWG.

Brian Russell (twitter: @pbjason9) is Chief Engineer/CyberSecurity for Leidos.

Humans: Still the Weakest Link In the Enterprise Information Security Posture

By Rachel Holdgrafer, Business Content Strategist, Code42

When it comes to protecting enterprise data, it’s more about understanding processes, procedures and the humans using the system, and less about defending the physical hardware. Seventy-eight percent of respondents to the Ponemon 2015 State of the Endpoint Report: User-Centric Risk indicate that the biggest threat to endpoint security is negligent or careless employees who don’t follow security policies. The Skyhigh Report finds that 89.6% of organizations experience at least one insider threat each month while the average organization experiences 9.3 insider threats each month. Humans are the weakest link in information security—for a number of reasons.

According to McAfee, internal actors are responsible for 43% of enterprise data loss. In half the cases, data loss is accidental, while the other half is intentional. In 2013 alone, U.S. companies and organizations suffered $40 billion in losses from unauthorized use of computers by employees, including “…approaching, trespassing within, communicating with, storing data in, retrieving data from, or otherwise intercepting and changing computer resources without consent.” Whether accidental or deliberate, data loss at the hands of employees is a real and present danger.

 

Accidental data breach or loss
Well-meaning employees threaten data security every day, often without realizing it. They open suspicious email attachments, fall for social engineering ploys, carelessly manage network passwords or use shadow IT applications that give hackers a way into the network. Regardless of how data loss or breach happens, insider threat poses a significant risk to organizations.

  • Shadow IT applications. In an effort to get their jobs done, employees may install unsanctioned software on their devices, and in doing so, expose their employer to hackers and malware via vulnerabilities in the software.
  • Sync and share technology. Sync and share applications are powerful collaboration tools for increasing employee productivity, especially for distributed and remote teams. Unfortunately, sharing data has a downside: 28% of employees have uploaded a file containing sensitive data to the cloud. A team member might inadvertently delete a shared document or corrupt the only version of a key file, rendering the data useless. Sensitive data, such as Social Security numbers or customer payment information, could be shared with internal employees or with external users, putting the data at risk and the company out of compliance.
  • Social engineering. From urgent emails that appear to come from C-suite executives requesting large wire transfers to “friendly” phone calls from hackers posing as corporate IT staff, social engineering is on the rise at organizations of all sizes.
  • Poor password security. What appears to be innocuous password sharing can result in significant data loss. Employees good-naturedly share passwords with coworkers or post their network passwords at their workstations, unintentionally allowing others to access the system using their credentials.

Intentional data sabotage
In a perfect world, employees would always work in the best interest of their employers. Unfortunately, this is not always the case. As a result, organizations must monitor individuals on the payroll to spot incidents of intentional data sabotage.

  • Dealing with disgruntled employees. Malicious cyber-sabotage conducted by disgruntled employees is on the rise. Whether passed over for a promotion, terminated for cause or as a part of a reduction in force, unhappy employees pose a risk to data security. Disgruntled employees may delete important files or emails, lock administrators out of admin accounts by changing passwords or take sensitive data with them when they leave. NakedSecurity by Sophos reports that:

The FBI has found that terminated employees installed unauthorized RDP (remote desktop protocol) software before they exited their companies, thereby ensuring that they could retain access to the businesses’ networks to carry out their crimes.

  • Malware introduction and planting logic bombs. Employees on their way out the door may purposely infect the employer’s network with malware or plant logic bombs that “go off” in the future, wiping out data when the employee is long gone.
  • Selling corporate data for fun and profit. It’s troubling but true: current employees may extract and sell sensitive corporate data to the highest bidder on the black market. They may also sell customer account lists, product plans or other intellectual property to their employer’s competitors for financial gain. Some enjoy the challenge of accessing the data, some need the cash and others, like arsonists, enjoy watching the company burn.

Conclusion
Humans continue to be the weakest link in information security. Whether deliberate or accidental, the actions of employees can quickly destroy a company. Organizations must keep this in mind when creating information security policies and while implementing safeguards.

Learn more about the impacts of insider threat. Download the executive brief, Protecting data in the age of employee churn.

The Twelve Days of Cyber Plunder

By Phillip Marshall,  Director of Product Marketing, Cryptzone

As the holiday season approaches, we caution you to take heed of the cyber perils in this familiar holiday tune.
While we had a little fun with the verse, this cautionary tale unfortunately rings true for many.

On the first day of Christmas the Cyber Grinch sent to me, a holiday invitation phishing to see if I would give him info on me.
On the second day of Christmas the Cyber Grinch gave to me, malware on my PC.
On the third day of Christmas the Cyber Grinch took from me, my personal passwords and IDs.
On the fourth day of Christmas the Cyber Grinch stole from me, a credit card and bought a TV.
On the fifth day of Christmas the Cyber Grinch went on a shopping spree, and bought his girlfriend five golden rings – courtesy of me.
On the sixth day of Christmas the Cyber Grinch took from me, my company login and ID.
On the seventh day of Christmas the Cyber Grinch used my credentials to slink about the network and VLANs to boot, seeking something to loot.
On the eighth day of Christmas the Cyber Grinch got even bolder and found a password folder.
On the ninth day of Christmas the Cyber Grinch found, to his glee, some really neat company IP.
On the tenth day of Christmas the Cyber Grinch did the deed, and exfiltrated all our customer data, in his greed.
On the eleventh day of Christmas the Cyber Grinch tripped a false security alert, his detection he managed to avert.
On the twelfth day of Christmas, someone bought it on the dark web; the guy is now some hacker celeb.
To protect you and your company from further verses of this song,
Please consider taking us along,
Try our Segment-of-One and you’ll be safe,
Context and content controls in place,
You’ll be the envy of all the companies in your space,
And the likes of the Cyber Grinch lockout,
Let Cryptzone help you out.

 

Fix Insider Threat with Data Loss Prevention

By Rachel Holdgrafer, Business Content Strategist, Code42

What do the Mercedes-Benz C Class, teeth whitening strips, the Apple iPhone and personally identifiable information have in common? Each is the item most commonly stolen from its respective category: luxury cars, personal care items, smartphones and corporate data. In the 2015 study entitled Grand Theft Data – Data exfiltration study: Actors, tactics, and detection, Intel Security reports:

  • Internal actors were responsible for 43% of data loss, half of which is intentional, half accidental.
  • Microsoft Office documents were the most common format of stolen data (25%).
  • Personal information from customers and employees was the number one target (65%).
  • Internal actors were responsible for 40% of the serious data breaches experienced by respondents, and external actors for 57%.

Whodunnit?
The report describes internal actors as employees, contractors and third-party suppliers, with a 60/40 split between employees and contractors/suppliers. Office documents were the most common format of data stolen by internal actors—probably because these documents are stored on employee devices—which many organizations do not manage.

In a 2013 report by LogRhythm, a cyber threat defense firm, a survey of 2000 employees found that 23 percent admitted to having looked at or taken confidential data from their workplace, with one in ten saying they do it regularly. In this study, two thirds of respondents said their employer had no enforceable systems in place to prevent access to data such as colleague salaries and bonus schemes.

Employees that move intellectual property outside the company believe it is acceptable to transfer work documents to personal computers, tablets, smartphones and file-sharing applications, and most do not delete the data because they see no harm in keeping it. As reported in the Employee Churn white paper, many employees attribute ownership of IP to the person who created it.

Four quick fixes to curb insider threat
As the rate of insider theft approaches the rate of successful hacks, organizations can start with four common sense principles to shore up security immediately:

  1. Trust but verify: Understand that the risk of data loss from trusted employees and partners is real and present. Watch for data movement anomalies in your endpoint backup data repositories and act upon them (a minimal anomaly-flagging sketch follows this list).
  2. Log, monitor and audit employee online actions and investigate suspicious insider behaviors.
  3. Disable employee credentials immediately when employees leave, and implement strict password and account management policies. Astonishingly, six in ten firms surveyed do not regularly change passwords to stop ex-employees from gaining access to sites and documents.
  4. Implement secure backup and recovery processes to prepare for the possibility of an attack or disruption and test the processes periodically.
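
As referenced in item 1, here is a minimal sketch of the kind of anomaly flagging meant there: compare each user’s latest daily data movement against their own historical baseline and flag large deviations. The volumes, users and threshold are hypothetical; a real deployment would pull these figures from the backup repository’s reporting interface.

```python
from statistics import mean, stdev

# Hypothetical daily data-movement volumes (GB) per user, oldest to newest.
HISTORY = {
    "alice": [0.8, 1.1, 0.9, 1.0, 1.2, 0.7, 0.9],
    "bob":   [2.0, 1.8, 2.1, 1.9, 2.2, 2.0, 48.5],  # final day spikes sharply
}

def flag_anomalies(history, z_threshold=3.0):
    """Flag users whose latest day exceeds their own baseline by z_threshold sigmas."""
    flagged = []
    for user, volumes in history.items():
        baseline, latest = volumes[:-1], volumes[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue  # flat baseline; nothing meaningful to compare against
        if (latest - mu) / sigma > z_threshold:
            flagged.append((user, latest))
    return flagged

if __name__ == "__main__":
    print(flag_anomalies(HISTORY))  # expect bob's 48.5 GB day to be flagged
```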

Download the executive brief, Protecting Data in the Age of Employee Churn, to learn more about how endpoint backup can mitigate the risks associated with insider threat.

An Overview of the Security Space and What’s Needed Today

By Kevin Beaver, Guest Blogger, Lancope

Fairly often, I have friends and colleagues outside of IT and security ask me how work is going. They’re curious about the information security industry and ask questions like: How much work are you getting? Why are we seeing so many breaches? Are things going to get better? Given what’s happening in the industry, I’m always quick to respond with some fairly strong opinions. So, where are things now and what’s really needed to resolve our security issues?

First off, based on what I see in my work and what I hear from friends and colleagues in the industry, I’m convinced that what we’re seeing in the data breaches and hearing about in the headlines is merely the tip of the iceberg. I suspect that there are three to four times the number of breaches that go undetected and unreported. I also see many IT and security shops merely going through the motions, just trying to keep up. Putting out fires is their daily tactic. Big-picture strategies don’t exist.

In my specific line of work performing security assessments, I see people sweating bullets anticipating the results, unsure of how the outcome is going to reflect on them, their credibility and their jobs. I’m not saying this to speak negatively of the people responsible for information security. I just think it’s a side-effect of how IT and security challenges have evolved in recent years. The rules and oversight are being piled on. Ironically, in an industry that traditionally offers a strong level of job security, it seems that more and more people are concerned about that very thing.

A core element contributing to these challenges – and something that doesn’t get the attention it deserves – is a glaringly obvious lack of support for information security initiatives at the executive and board level. Sure, there are occasional studies that show that security budgets are increasing; however, more often than not I’m seeing and hearing sentiments along the lines of a recent study that showed the majority of C-level executives do not believe CISOs deserve a seat at the leadership table. So, it’s more than just budget. It’s political backing as well. This raises the question: who’s responsible for this lack of respect for the information security function? I believe it’s a chicken-and-egg situation involving responsibility and accountability on the part of both IT and security professionals and business leaders. I’ll save that for another blog post.

Politics and business culture aside, there are still many situations where all is assumed to be well in security when it is indeed not. The lack of visibility and data analytics is glaringly obvious in many enterprises, including large corporations and federal government agencies that one might assume really have their stuff together and are resilient to attack. In fact, I strongly believe that many – arguably most – security decisions are made based on information that’s questionable at best and this is why we continue to see the level of breaches we’re seeing.

So, where do we go from here? I’m not convinced that we need more policies. Nor am I convinced that we need better technologies. People are continually chasing down this rabbit hole and that rabbit hole in search of the latest magical security solution. Rather than a new direction, what we need is discipline. For decades, we’ve known about the core information security principles that are still lacking today. Unless and until everyone is on board with IT and security initiatives that impact business risk, I think we’re going to continue with the same struggles. I hope I am proven wrong.

Kevin Beaver is an information security consultant, expert witness and professional speaker with Atlanta-based Principle Logic, LLC.

Gartner’s Latest CASB Report: How to Evaluate Vendors

Market Guide Compares CASB Vendors And Provides Evaluation Criteria

By Cameron Coles, Senior Product Marketing Manager, Skyhigh Networks

As sensitive data moves to the cloud, enterprises need new ways to meet their security, compliance, and governance requirements. According to Gartner Research, “through 2020, 95% of cloud security failures will be the customer’s fault,” meaning that enterprises need to look beyond the security capabilities of their core cloud services and focus on implementing controls over how those services are used in order to prevent the vast majority of potential security breaches.

Many companies invested in firewalls, proxies, intrusion prevention systems, data loss prevention solutions, and rights management solutions to protect on-premises applications. The cloud access security broker (CASB) offers similar controls for cloud services. According to a new Gartner report (download a free copy here), a CASB is “required technology” for any enterprise using multiple cloud services. By 2020, Gartner predicts 85% of large enterprises will use a CASB, up from fewer than 5% today.

“By 2020, 85% of large enterprises will use a cloud access security broker product
for their cloud services, which is up from fewer than 5% today.”
– Gartner “Market Guide for Cloud Access Security Brokers”

The need for a solution is clear. Cloud adoption within enterprise is growing exponentially – driven in large part by business units procuring cloud services and individual employees introducing ad hoc services without the involvement of IT. IT Security teams need a central control point for cloud services to understand how their employees use cloud services and enforce corporate policies across data in the cloud, rather than managing each cloud application individually. This functionality is not available in Web application firewalls (WAFs), secure Web gateways (SWGs) and enterprise firewalls, driving the need for a new solution that addresses these challenges.

Why do companies use CASBs?
In the report, Gartner explains there are three market forces driving enterprises to consider using a CASB. First, employees are moving to non-PC form factors. Employees use mobile devices to store corporate data in cloud services, and IT security teams lack controls for this activity. Second, as corporate IT budgets are redirected toward cloud services, companies are beginning to think strategically about the security stack needed for the cloud. And lastly, as the largest enterprise software companies like Oracle, Microsoft, and IBM invest heavily in migrating their installed base to cloud services, more of these enterprises are looking to secure that data.

“CASB is a required security platform for organizations using cloud services.”
– Gartner “Market Guide for Cloud Access Security Brokers”

While some cloud providers are beginning to add security and compliance controls to their solutions, companies need a more centralized approach. The average enterprise uses 1,154 cloud services, and managing a different set of policies across each of these services would not be practical for any organization. A CASB offers a central control point for thousands of cloud services for any user on any device – delivering many of the security functions found in on-premises security solutions, including data loss prevention (DLP), encryption, tokenization, rights management, access control, and anomaly detection.

Gartner’s 4 Pillars of CASB Functionality
Gartner uses a four-pillar framework to describe the functions of a CASB. Not all CASB providers cover these four pillars, so customers evaluating solutions should carefully evaluate marketing claims made by vendors and ask for customer references.

  • Visibility – discover shadow IT cloud services and gain visibility into user activity within sanctioned apps
  • Compliance – identify sensitive data in the cloud and enforce DLP policies to meet data residency and compliance requirements
  • Data security – enforce data-centric security such as encryption, tokenization, and information rights management
  • Threat protection – detect and respond to insider threats, privileged user threats, compromised accounts

Deployment architecture is an important consideration in a CASB project. A CASB can be delivered via SaaS or as an on-premises virtual or physical appliance. According to Gartner, the SaaS form factor is significantly more popular and easier, making it the increasingly preferred option. Another factor to consider is whether to use an inline forward or reverse proxy model, direct API connectivity to each cloud provider, or both. Gartner refers to CASB providers that offer both proxy and API options as “multimode CASBs” and points out that certain functionality such as encryption, real-time DLP, and access control are not possible with API-only providers.

How to choose a CASB
Not all CASB solutions are equal and the features, deployment architectures, and supported cloud applications vary widely from provider to provider. Gartner splits the CASB market into Tier 1 providers that frequently appear on short lists for Gartner clients, and other vendors. Tier 1 providers are distinguished by their product maturity, scalability, partnerships and channel, experience in the market, ability to address common CASB use cases across industries, and market share and visibility among Gartner clients.

In its latest report, Gartner offers numerous recommendations that customers should consider when evaluating a CASB, including these considerations:

  1. Consider the functionality not available with API-only CASBs compared with multimode CASBs before making a decision
  2. Start with shadow IT discovery in order to know what’s in your environment today before moving to policy enforcement
  3. Look for CASBs that support the widest range of cloud applications, including those you plan to use in the next 12-18 months
  4. Look past CASB providers’ “lists of supported applications and services,” because there are often substantial differences in the capabilities supported for each specific application
  5. Consider whether the CASB deployment path will work well with your current network topology
  6. Consider whether the solution integrates with your existing security systems such as IAM, firewalls, proxies, and SIEMs

One way to evaluate claims made by CASB vendors is to speak with several customer references. Another recommended element in the selection process is conducting a proof of concept. Using real data for the proof of concept enables a potential customer to try out the analytics capabilities of a CASB, including the ability to discover all cloud services in use by employees and detect internal and external threats that could result in data loss. When you’re ready to begin looking at solutions, Skyhigh offers a free cloud audit that reveals shadow IT usage and high-risk activity within approved cloud services.

The EU GDPR and Cloud: Six Must-Dos to Comply

By Krishna Narayanaswamy, Co-founder and Chief Scientist, Netskope

You don’t have to be European to care about the European Commission’s pending EU General Data Protection Regulation (GDPR). Set to be adopted in 2017 and implemented the following year, carrying penalties up to 5 percent of an enterprise’s global revenues, and replacing the current Data Protection Directive and all country-level data privacy regulations, this pending law should matter to any organization that has European customers. The purpose of the GDPR is to protect citizens’ personal data, increase the responsibility and accountability of organizations that process data (and ones that direct them to do so), and simplify the regulatory environment for businesses.

The information technology community has been abuzz on the topic for some time now. What’s been missing from the conversation up to now, however, is the cloud and how it throws a wrench into the GDPR mix. One of the biggest trends over the last decade is shadow IT. According to our latest Netskope Cloud Report, the average enterprise is using 755 cloud apps. In Europe, it’s 608. Despite increased awareness over the last year or so, IT and security professionals continue to underestimate this by 90 percent or more. This is shadow IT at its finest. So the big question is: can organizations that only know about 10 percent of the cloud apps in use really ensure compliance with the GDPR?

We partnered with legal and privacy expert, Jeroen Terstegge, a partner with Privacy Management Partners in the Netherlands who specializes in data privacy legislation. He helped us make sense of the pending GDPR as it relates to cloud, and identified six things cloud-consuming organizations need to do to comply if they serve European customers (this is all fleshed out in this white paper, by the way):

  1. Know the location where cloud apps are processing or storing data. You can accomplish this by discovering all of the cloud apps in use in your organization and querying to understand where they are hosting your data. Hint: The app vendor’s headquarters are seldom where your data are being housed. Also, your data can be moved around between an app’s data centers.
  2. Take adequate security measures to protect personal data from loss, alteration, or unauthorized processing. You need to know which apps meet your security standards, and either block or institute compensating controls for ones that don’t. The Cloud Security Alliance’s Cloud Controls Matrix (CCM) is a perfect place to start. Netskope has automated this process by adapting the CCM to the most impactful, measurable set of 45+ parameters with our Cloud Confidence Index, so you can easily see where apps are lacking and quickly compare among similar apps.
  3. Close a data processing agreement with the cloud apps you’re using. Once you discover the apps in use in your organization and consolidate those with overlapping functionality, sanction a handful and execute a data processing agreement with them to ensure that they are adhering to the data privacy protection requirements set forth in the GDPR.
  4. Collect only “necessary” data and limit the processing of “special” data. Specify in your data processing agreement (and verify in your DLP policies) that only the personal data needed to perform the app’s function are collected by the app from your users or organization and nothing more, and that there are limits on the collection of “special” data, which are defined as those revealing things like race, ethnicity, political conviction, religion, and more (a minimal illustrative check follows this list).
  5. Don’t allow cloud apps to use personal data for other purposes. Ensure through your data processing agreement, as well as verify in your app due diligence, that apps state clearly in their terms that the customer owns the data and that they do not share the data with third parties.
  6. Ensure that you can erase the data when you stop using the app. Make sure that the app’s terms clearly state that you can download your own data immediately, and that the app will erase your data once you’ve terminated service. If available, find out how long it takes for them to do this. The more immediate (in less than a week), the better, as lingering data carry a higher risk of exposure.
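
For item 4, here is a minimal sketch of the sort of DLP-style check implied there: scan outbound content for terms that hint at GDPR “special” categories before it leaves for a cloud app. The keyword patterns and sample document are hypothetical; a production policy would rely on far more robust classification than keyword matching.

```python
import re

# Hypothetical patterns hinting at GDPR "special category" data.
SPECIAL_CATEGORY_TERMS = {
    "ethnicity": r"\bethnicit(y|ies)\b",
    "religion": r"\breligio(n|us)\b",
    "political opinion": r"\bpolitical (opinion|conviction|affiliation)\b",
    "health": r"\b(diagnosis|medical record|health condition)\b",
}

def find_special_categories(text: str):
    """Return the special-category labels whose patterns appear in the text."""
    return [
        label
        for label, pattern in SPECIAL_CATEGORY_TERMS.items()
        if re.search(pattern, text, re.IGNORECASE)
    ]

if __name__ == "__main__":
    document = "Employee survey: religious affiliation and medical record summary."
    hits = find_special_categories(document)
    if hits:
        print("Review before upload; possible special categories:", hits)
```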

Of course, if you end up accomplishing some of these steps via policy, make sure you can take action whether your users are on-premises or remote, on a laptop or mobile device, or on a managed or BYOD device.

This week we announced the availability of a toolkit that includes a couple of services and several complimentary tools to help our community understand and comply with the GDPR. You can access it here.

Cloud apps are useful for users, and often business-critical for organizations. Blocking them – even the shadow ones – would be silly at this point. Instead, follow the above six steps to bring your cloud app usage into compliance with the GDPR.

Network Segmentation and Its Unintended Complexity

By Kevin Beaver, Guest Blogger, Lancope

Look at the big security regulations, e.g. PCI DSS, and any of the long-standing security principles, and you’ll see that network segmentation plays a critical role in how we manage information risks today. The premise is simple: you determine where your sensitive information and systems are located, you segment them off onto an area of the network that only those with a business need can access, and everything stays in check. Or does it?

When you get down to specific implementations and business needs, that’s where complexity comes into the picture. For instance, it may be possible to segment off critical parts of the network on paper but when you consider variables such as protocols in use, web services links, remote access connections and the like, you inevitably come across distinct openings in what was considered to be a truly cordoned-off environment.

I see this all the time in my work performing security assessments. The network diagram shows one thing, yet the vulnerability scanners and manual analysis paint a different picture. Digging in further and simply asking questions such as the following highlights what’s really going on:

  • How are servers, databases and applications designed to communicate with one another?
  • Who can really access the segmented environment? How does that access take place?
  • What areas of the original system had to be changed to accommodate a technical or business need?
  • What information is being gathered across the network segment in terms of network and security analytics and what is that information really telling us?
  • What else are we forgetting?

Getting all of the key players involved, such as database administrators, network architects, developers and even outside vendors that support systems running in these network segments, and asking questions such as these will often reveal what’s really going on beyond what’s documented or assumed. This is not a terrible situation in and of itself. The systems need to work the way they need to work and business needs to get done. However, this exercise highlights a new level of network complexity that was otherwise unknown – or at least unacknowledged.
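
One way to move from questions to evidence is to test the segmentation claims empirically. The sketch below, assuming a hypothetical list of hosts and ports that should not be reachable from the segment you run it in, simply attempts TCP connections and reports anything that answers; compare the output against the network diagram.

```python
import socket

# Hypothetical hosts/ports that the network diagram says this zone cannot reach.
SHOULD_BE_BLOCKED = [
    ("10.10.20.5", 1433),  # example: database server in a cordoned-off segment
    ("10.10.20.6", 3389),  # example: RDP to a segmented admin host
]

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in SHOULD_BE_BLOCKED:
        status = "REACHABLE (possible segmentation gap)" if is_reachable(host, port) else "blocked"
        print(f"{host}:{port} -> {status}")
```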

This leads me to my final point that’s obvious yet needs to be repeated: complexity and security don’t go well together. It’s a direct relationship – the more complexity that exists in your network environment, the more out of control you’re going to be. I’m confident that if we looked at the root causes of most of the known security breaches uncovered by reports such as the Cisco 2015 Annual Security Report and publicized on websites such as the Privacy Rights Clearinghouse Chronology of Data Breaches, we’d see that network complexity was instrumental in facilitating those incidents.

Putting aside politics, lack of budget and all the other common barriers to an effective information security program, you cannot secure what you don’t acknowledge. If vulnerabilities exist in your network segmentation, threats will surely come along and find a way to take advantage. It’s your job to figure out where the weaknesses are among the complexity of your network segmentation so you can minimize the impact of any attempted exploits moving forward. Otherwise, regardless of the levels of security visibility and analytics you might have, your systems will remain fair game for attack.

Kevin Beaver is an information security consultant, expert witness and professional speaker with Atlanta-based Principle Logic, LLC.

Good and Bad News on Safe Harbour: Take a Life Ring or Hold Out for a New Agreement?

By Susan Richardson, Manager/Content Strategy, Code42

If your organization relied on the now-invalid Safe Harbour agreement to legally transfer data between the U.S. and the EU, there’s good news and bad news.

The good news? The European Commission just threw you some life rings. The governing body issued guidance on Nov. 6 that outlines alternative mechanisms for legally continuing transatlantic data transfers:

Standard contractual clauses
Sometimes referred to as model clauses, standard contractual clauses are boilerplate provisions for specific types of data transfers, such as between a company and a vendor. They’re often the least costly on a short-term basis.

Binding corporate rules for intra-group transfers
These allow personal data to move freely among the different branches of a worldwide corporation. Sounds easy, but the process can be time-consuming and expensive, depending on the scope of the company. That’s because the rules have to be approved by the Data Protection Authority (DPA) in each member state from which you want to transfer data.

Derogation where contractually necessary
This exception allows for data transfers that are required to fulfill a contractual obligation. For example, when a travel agent sends details of a flight booking to an airline.

Derogation for legal claims
This exception allows for data transfers that are required to process a legal claim.

Derogation based on individual consent
Legal folks say this option isn’t a slam dunk. Many DPAs have ruled that it’s not possible to obtain meaningful consent from employees, given the lopsided nature of the employer-employee relationship. On the consumer side, it may be difficult to demonstrate that consumers provided meaningful consent if the relevant notice is embedded in a lengthy privacy policy they may never read. Data privacy experts at law firm BakerHostetler recommend a click-through privacy policy with an “I agree” checkbox, as opposed to a browsewrap privacy policy that implies consent by virtue of the consumer simply using the website, app or service.

The bad news? You only have until the end of January 2016 to get the new mechanisms in place before DPAs start investigating and enforcing transfer violations. Or you could hedge your bets and hold out for U.S. and EU negotiators to hammer out a Safe Harbour 2.0 agreement by then, as they’ve committed to do.

After all, the U.S. House of Representatives did surprise everyone by quickly passing the baseline requirement for moving forward on October 20th: the Judicial Redress Act would give EU citizens some rights to file suit in the States for U.S. government misuse of their data. It was received in the Senate and referred to the Committee on the Judiciary on October 21.

More Cyber Security Lessons From “The Martian”

By TK Keanini, Chief Technology Officer, Lancope

In last week’s post, I covered the methodologies Mark Watney used to stay alive on the surface of Mars and how those lessons can be adapted for better cyber security back on Earth. As usual, this post will contain spoilers for The Martian, so close it now if you haven’t yet read the book or seen the movie.

This week I’ll discuss the mentalities and interpersonal skills that allowed the Ares 3 crew to successfully rescue Watney after he was stranded for more than a year on a foreign planet. Whether it is launching a manned space mission or defending against advanced cyber threats, these lessons can be used to pull the best possible outcome out of impossible odds.

The Power of a Cross-functional Team
In space travel, every supply and gram of weight is invaluable, much like the limited resources available to most security teams. To help cope with these limitations, every member of the Ares 3 crew served multiple functions. Watney, for instance, was both a botanist and a mechanical engineer. This knowledge allowed Watney to recognize that food would be his scarcest resource, find the chemical components necessary to create arable land inside his living quarters and modify the various life support systems to make the environment suitable for plant life.

When a cyber-attack hits, you may be the only one available to address it. To be able to adequately assess and respond to the event, you need to have a working knowledge of the various tools and processes at your disposal. In addition, understanding how different systems work and how different user roles interact with the network allows you to see the security weak points and understand how an attacker may operate in your environment.

Always remember to laugh
Tense situations can have a mental toll on responders, and it is important to keep a sound state of mind to make good decisions. Watney was a serial jokester, frequently laughing at the ridiculousness of his own situation and making wisecracks about what his fellow astronauts left behind on Mars. He particularly hated disco.

Even in the middle of extreme circumstances, it is important not to take yourself too seriously. Laughter helps you keep a level head and can help relieve stress, both in you and your coworkers. You are then in a better position to make sound decisions and not give up.

Leadership is not an option, it is a necessity
Watney never faulted his fellow astronauts for leaving him on Mars. They thought he was dead, and leaving immediately was imperative to getting the others out alive. More importantly, Commander Lewis is regretful when she finds out Watney was left alive on Mars, but instead of becoming too dejected to act, she focuses on the next course of action.

Tough situations need leaders who will make hard calls and live with them. CISOs and other security leaders are responsible for choosing which tools to implement and which practices to employ. When a cyber-attack occurs, they need to be ready to use those tools instead of wishing they had something else.

Communication makes your job easier
One of Watney’s largest challenges throughout The Martian is his inability to communicate with mission command or his own crew. Watney goes on a cross-country trip to find the Pathfinder probe just so he can use it to establish communication. It works but only until he accidentally fries the machinery a few pages later. Fortunately, we do not have this problem, but many cyber security professionals still fail to communicate effectively in the event of an attack.

It makes sense. After all, we are usually busy investigating the attack and trying to prevent data loss. But don’t forget that good communication in an attack helps prevent duplication of efforts and generally helps the entire security team respond effectively.

In a more general sense, the security team needs to be visible to the rest of the organization. Keeping all employees abreast of ongoing security issues reminds them to be vigilant against phishing and other forms of social engineering. Remember, they may know their area of the network better than you, and might be able to identify something abnormal there before you do. Of course, there are some exceptions to this mode of communication. For instance, if an insider threat is suspected, it is likely better to keep that information to a small number of individuals until actions are taken, but for the most part, regular communication with the larger organization is a good thing.

Roles are important
While versatility is a modern virtue, it is important to understand what your role is in a given scenario, even if it changes often. The crew members of Ares 3 had specializations that enabled them to perform specific duties, but they were also general enough that they could fulfill whatever role was needed in a time of emergency. While Watney was forced to rely on his own ingenuity to survive on Mars, his rescue was left almost entirely in the hands of his fellow crewmates. Each had to perform a duty in the rescue, and several had to suddenly change that role when the rescue attempt started to go south. The important thing is they were able to shift responsibilities quickly but with a clear understanding of who was best suited to perform each role, and it was all organized with a clear order of command.

In the world of cyber security, where organizations often deploy varied tools for detection, mitigation and policy enforcement, it is essential to utilize people to their greatest strengths. Investigators, operations and management all have a role to play, and while they should be flexible according to needs, they work best with what they know.

Personal connections matter
Massive amounts of money, resources, time and energy went into rescuing Watney from Mars. His struggle became a weekly news segment on Earth, and no expense was spared to retrieve him alive because people feared for him, hoped for him and wanted to keep him safe. Never forget that there are real victims of data breaches. Customers, clients and employees can be deeply hurt by the simple act of doing business with your organization, so keep that in mind when you are rushing through those last few reports on Friday afternoon.

The bonds between the Ares 3 crew were unshakable, as is expected when six people spend months together traveling across the solar system to a new planet. This type of relationship should be encouraged among security practitioners because it facilitates smoother operations in the event of an emergency and reduces blaming. When a team cares about each other and their mission, attacks can be stopped and catastrophes can be salvaged.

The Martian contains many lessons that can be adapted to cyber security, but in the end it is still a work of fiction. Reality is more complex and difficult to grapple with, but we need these basic driving forces to properly prepare for disaster and to operate well under pressure. Mark Watney may not be our CISO, but we can take what he learned on Mars and use it to beat an advantaged enemy and difficult odds.

Six Reasons Why Encryption Back Doors Won’t Deter Terrorists

By Willy Leichter, Global Director of Cloud Security, CipherCloud

Last week’s tragic events in Paris, and fears over similar terrorist attacks around the world, have revived a long-standing debate. Early evidence suggests that the terrorists used a readily available encryption app to hide their plans and thwart detection by law enforcement. This has led to finger-pointing by intelligence officials and politicians demanding that something be done to control this dangerous technology. Keep in mind that the terrorists also used multiple other dangerous technologies including consumer electronics, explosives, lots of guns, cars, trains and probably airplanes – but these are better understood and attract less grandstanding about controlling them.

Setting aside the obvious privacy concerns, the argument for weakening encryption ignores a basic question – can this technology really be controlled? More specifically, those arguing for diluted encryption are demanding “back doors” that would allow easier access by law enforcement. For many reasons, this idea simply won’t work and will have no impact on bad guys. It also could have serious unintended negative consequences. Here are a few reasons why:

  1. Encryption = Keeping Secrets

Encryption is more of an idea than a technology and trying to ban ideas generally backfires. For thousands of years, good and bad actors have used encryption to protect secrets, while communicating across great distances.

In the wake of traumatic public events, it’s easy to start thinking that only bad guys need to keep secrets, but that’s clearly not true. Governments must keep important secrets. Businesses are legally required to protect secrets (such as their customers’ personal information), and individuals have reasonable expectations (and constitutional guarantees in many countries) that they can keep their personal data private. Encryption, if properly applied, can be a highly effective way to protect legitimate and important secrets.

  2. Who Keeps the Keys to the Back Door?

Allowing government agencies unfettered access to encrypted data is not only Orwellian – it’s also simplistic and unrealistic. Assuming back doors are created, who exactly should have access? Beyond the NSA, FBI, and CIA, should we share access with British Intelligence? How about the French? The Germans? The Israelis? Saudi Arabia? How about the Russians or the Chinese? Maybe Ban Ki-Moon can keep all the keys in his desk drawer at the UN…

As we all know, the Internet doesn’t respect national boundaries and assuming that all countries will cooperate and share equal access to encryption back doors is naïve. But if governments only require companies within their respective jurisdictions to provide back doors, the bad guys will simply use similar, readily available technology from other places.

  3. Keys to the Back Doors Can Easily Get into the Wrong Hands

If there are back doors to encryption, hackers will almost certainly steal and exploit them. As the Snowden revelations demonstrated, large government bureaucracies are not particularly good at protecting secrets or ensuring that the wrong people don’t get access. The OPM hack, which exposed millions of government employees’ records (an attack purportedly carried out by Chinese hackers), highlights the risks when large numbers of humans are involved.

In a very real way, the existence of encryption back doors would represent a serious threat to data security across the government, business and private sectors.

  4. To Control Encryption You Need to Control Math

Ironically, while some government agencies seek to crack encryption, other agencies such as NIST are chartered with testing and validating the security efficacy of encryption algorithms and implementations. The FIPS 140-2 validation process is globally recognized and provides assurance that validated encryption modules meet rigorous, publicly defined security requirements.

Today’s best encryption is based on publicly vetted and widely available algorithms such as AES-256. Most smart, college-level math majors could easily implement effective encryption based on a multitude of publicly available schemes.
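To illustrate just how available this is, here is a minimal sketch of authenticated AES-256 encryption using the free, open-source Python cryptography library. The message and key handling are hypothetical and for illustration only; a real deployment also needs key storage and rotation.

    # Minimal sketch: authenticated encryption with AES-256-GCM using the
    # publicly available Python "cryptography" package.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)   # 256-bit key generated locally
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)                      # unique 96-bit nonce per message

    ciphertext = aesgcm.encrypt(nonce, b"meet at the usual place", None)
    assert aesgcm.decrypt(nonce, ciphertext, None) == b"meet at the usual place"

A dozen lines and a free library are all it takes, which is exactly why a back door mandated in one jurisdiction cannot keep this capability out of anyone’s hands.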

So far I haven’t heard policy pundits recommend that potential terrorists be barred from high-level math education. Preventing clever people anywhere in the world from applying readily available encryption or developing their own encryption schemes is impossible.

  5. The Tools Do Not Cause the Actions

It does appear that the Paris terrorists used commercial encryption to hide some of their communications, and it must be acknowledged that this may have hindered law enforcement. They probably also used off-the-shelf electronics to detonate their explosives, drove modern rental cars to haul people and weapons, and perhaps were radicalized in the first place through social media. Today’s technology accelerates everything in ways that are often frightening, but going backwards is never an option. And the tools, no matter how advanced, do not create the murderous intent behind terrorism.

Readily available technology likely made their job easier, but in the absence of easy-to-find encryption tools, the terrorists could have found many other effective ways to hide their plans.

  6. Neutering Encryption Will Hurt Legitimate Businesses

So let’s imagine that in the heat of terrorist fears, the US, UK and a few other governments demand that companies within their jurisdictions create and turn over encryption back doors. Confidence in security technologies from those countries would plummet, while creative entrepreneurs in many other countries would quickly deliver more effective security products.

The growth of the Internet as a trusted platform for business has been closely tied to encryption. The development of SSL encryption by Netscape in the 90s enabled e-commerce and online banking to flourish. And today, encryption is playing a critical role in creating the trust required for the rapid growth of cloud applications.

There are many recent examples of governments trying to legally close barn doors after the horses have long since disappeared. Ironically, the US government already bars the export of advanced encryption technology to rogue states and terrorist groups including ISIS. Clearly this ban had zero effect on the terrorists’ ability to easily access encryption technology.

We live in scary times and should never underestimate the challenges we all face in deterring terror. But latching onto simplistic solutions that will not work does not make us safer. In fact, if we undermine the effectiveness of our critical security technology and damage an important industry, we will be handing the terrorists a victory.

 

Never Pay the Ransomer

By Rachel Holdgrafer, Business Content Strategist, Code42

CryptoWall has struck again, and this time it’s nastier than before. With a redesigned ransom note and new encryption capabilities, BleepingComputer.com’s description of the “new and improved” CryptoWall 4.0 sounds more like a marketing brochure for a well-loved software product than a ransom demand.

As with the iterations of CryptoWall that came before version 4.0, the only way to get your files back is to pay the ransom in exchange for the decryption key or to wipe the computer clean and restore the files from an endpoint backup archive. The FBI agrees, stating “If your computer is infected with certain forms of ransomware, and you haven’t backed up that machine, just pay up.”

In addition to encrypting the data on an infected machine and demanding a ransom for the decryption key, CryptoWall 4.0 now encrypts the filenames on an infected machine too, leaving alphanumeric strings where file names once were. As BleepingComputer describes it:

The most significant change in CryptoWall 4.0 is that it now also encrypts the filenames of the encrypted files. Each file will have its name changed to a unique encrypted name like 27p9k967z.x1nep or 9242on6c.6la9. The filenames are probably encrypted to make it more difficult to know what files need to be recovered and to make it more frustrating for the victim.
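As a side note for defenders, the scrambled names themselves are a usable signal. The sketch below is a hypothetical heuristic, not part of CryptoWall’s code or any vendor’s product: it flags directories where most filenames match the random alphanumeric pattern described above, which can help scope what needs to be restored from backup.

    # Hypothetical heuristic: flag directories where most filenames look like
    # the random alphanumeric strings described above (e.g. 27p9k967z.x1nep).
    # The pattern, threshold and path are illustrative assumptions.
    import os
    import re

    SCRAMBLED = re.compile(r"^[a-z0-9]{6,12}\.[a-z0-9]{3,6}$")

    def scrambled_ratio(directory):
        names = [n for n in os.listdir(directory)
                 if os.path.isfile(os.path.join(directory, n))]
        if not names:
            return 0.0
        return sum(bool(SCRAMBLED.match(n)) for n in names) / len(names)

    if scrambled_ratio("/home/user/Documents") > 0.5:   # hypothetical path
        print("Possible ransomware filename scrambling detected")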

Not unlike Bill Miner, infamously known as the Gentleman Robber, CryptoWall 4.0 makes a farcical attempt at politeness. Its ransom note reassures victims that the infection of their computer is not meant to cause harm and even congratulates them on becoming part of the CryptoWall community, as if it were some sort of honor.

CryptoWall Project is not malicious and is not intended to harm a person and his/her information data. The project is conducted for the sole purpose of instruction in the field of information security, as well as certification of antivirus products for their suitability for data protection. Together we make the Internet a better and safer place.

Ransomware is a lucrative business. It is estimated that the CryptoWall virus alone cost its victims more than $18 million in losses and ransom fees from April of 2014 to June of 2015. In the spirit that being robbed doesn’t have to be a bad experience, CryptoWall 4.0 makes a halfhearted attempt at customer service, claiming “we are ready to help you always.” Additionally,

CryptoWall 4.0 continues to utilize the same Decrypt Service site as previous versions. From this site a victim can make payments, find out the status of a payment, get one free decryption, and create support requests.

In closing, the ransom note states,

…that the worst has already happened and now the further life of your files depends directly on your determination and speed of your actions.

Whether hackers use CryptoLocker, CryptoWall, CTB-Locker, TorrentLocker or one of the many variants, the outcome is the same. Users have no choice but to pay the ransom unless they have endpoint backup in place. Even with the best technical resources, brute-forcing the encryption that locks the files, without the key, would take several lifetimes. By contrast, with automatic, continuous backup, end users will NEVER pay the ransomer because a copy of their data is always preserved.
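To put “several lifetimes” in perspective, here is a back-of-envelope calculation. It assumes, purely for illustration, a 256-bit symmetric key and an attacker who can somehow test one trillion keys per second; actual key schemes vary by ransomware family.

    # Back-of-envelope: exhausting a 256-bit key space at a wildly optimistic
    # one trillion guesses per second. Both figures are illustrative assumptions.
    keys_total = 2 ** 256
    keys_per_second = 10 ** 12
    seconds_per_year = 60 * 60 * 24 * 365

    years = keys_total / (keys_per_second * seconds_per_year)
    print(f"{years:.2e} years to try every key")   # on the order of 10**57 years

Even shaving dozens of orders of magnitude off that estimate leaves the brute-force route hopeless, which is why a preserved backup copy, not decryption, is the realistic path to recovery.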

The Numbers Behind Cloud User Error

By Sam Bleiberg, Corporate Communications Manager, Skyhigh Networks

In the not-too-distant past, service providers had a tough time convincing enterprise IT departments that cloud platforms were secure enough for corporate data. Fortunately, perspectives on cloud have matured, and more and more organizations are migrating their sanctioned file sharing applications to the cloud. Fast forward to 2020, when Gartner predicts 95% of cloud security failures will be the customers’ fault. Skyhigh Networks’ latest Cloud Adoption & Risk Report shows the stakes are high for preventing “cloud user error.”

Enterprise-ready services have extensive security capabilities against external attacks, but customers bear the ultimate responsibility for ensuring sensitive data is not improperly disclosed. Just as attackers circumvent perimeter defenses such as powerful firewalls by using stolen credentials or alternate attack vectors, well-secured cloud services push attackers toward the vulnerabilities inherent in day-to-day use of applications. In addition to compromised accounts, in which attackers gain access to a cloud service via stolen user credentials, enterprises need to worry about malicious insiders, compliance violations, and even accidental mismanagement of access controls.

The report, which analyzes actual usage data from over 23 million enterprise employees, uncovered an epidemic of file over-sharing. Whether IT is aware or not, cloud-based file-sharing services serve as repositories of sensitive data for the average organization. According to the report, 15.8 percent of documents in file-sharing services contain sensitive data. The employees responsible for sensitive data are not a small group: 28.1 percent of all employees have uploaded a file containing sensitive data to the cloud.

Most concerning is the lack of controls on who can access files once uploaded to the cloud. 12.9 percent of files are accessible by any employee within the organization, which poses a significant liability given the size of the organizations analyzed. Employees shared 28.2 percent of files with external business partners. Given the critical role business partners have played in several highly publicized breaches, companies should closely monitor data shared outside the organization, even with trusted partners. Although they make up only 6 percent of collaborations, personal email addresses raise concerns over the recipient’s identity and necessitate granular access policies; companies may not want to grant the ability to download files to personal email domains, for example. Finally, 5.4 percent of files are available to anyone with the sharing link. These documents are just one forwarded email away from ending up in the hands of a competitor or other unwanted recipient.

Breakdown of Sharing Actions

What are the different profiles of sensitive data stored in the cloud? Confidential data, or proprietary information related to a company’s business, is the biggest offender at 7.6 percent of files, followed by personal data at 4.3 percent, payment data at 2.3 percent and health data at 1.6 percent. The majority of these files, 58.4 percent, are Microsoft Office documents.

 

Files Containing Keyword in the File Name

Furthermore, a surprising number of workers violate best practices for securely storing important information in the cloud. Using keywords such as ‘passwords’, ‘budget’, and ‘salary’ when naming files makes it easy for attackers to locate sensitive information, and IT security professionals typically advise against this practice. Convenience all too often trumps security, unfortunately. Past breaches have revealed instances in which credentials for multiple accounts were kept in folders named “Passwords”. The report found that the average company had 21,825 documents stored across file sharing services containing one or more of these red flags in the file name. Out of these files, 7,886 files contained ‘budget’, 6,097 ‘salary’, and 2,217 ‘confidential’.
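For IT teams that want to find these red flags before an attacker does, the check is easy to script. The sketch below walks a hypothetical file share and flags names containing the keywords cited above; the mount point and keyword list are placeholders, not part of Skyhigh’s methodology.

    # Illustrative filename audit: walk a file share and flag documents whose
    # names contain red-flag keywords. The path and keywords are placeholders.
    import os

    RED_FLAGS = ("password", "budget", "salary", "confidential")

    def flag_risky_filenames(root):
        hits = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if any(keyword in name.lower() for keyword in RED_FLAGS):
                    hits.append(os.path.join(dirpath, name))
        return hits

    for path in flag_risky_filenames("/mnt/file-share"):   # hypothetical mount
        print("Review sharing settings for:", path)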

 

 


Lastly, the data revealed a few “worst employees of the month.” One prolific user was responsible for uploading 284 unencrypted documents containing credit card numbers to a file sharing service. Another user uploaded 46 documents labeled “private” and 60 documents labeled “restricted.” In all seriousness, while it’s easy to point the finger and call these users bad employees, it’s likely they were simply trying to do their jobs using the best tools available to them. The onus lies with IT to make the secure path the easy path.

With more companies migrating sensitive data to the cloud, attackers will increase their efforts to exploit vulnerabilities in enterprise use of cloud services. Tellingly, attacks against cloud services increased 45% over the past year. Locating sensitive data in file-sharing services is step one for companies that aim to prevent the next generation of cloud-based threats.


Cyber Security Lessons from “The Martian”

By TK Keanini, Chief Technology Officer, Lancope

First things first, if you have not seen the movie or read the book “The Martian,” stop right now and do not continue because there will be spoilers. You have been warned.

On more than one occasion in my life as a security professional, I have felt like I was stranded on Mars – all alone with only my wits and spirit to survive. As I read The Martian, I kept thinking about what skills and practices would help a security practitioner in their day-to-day life. What would Mark Watney do?

During an ongoing attack, there is no time to deploy new tools and there is no one else who is more familiar with your network environment than you. Instead, you must use the tools and knowledge immediately available to survive, and time is not on your side. Maybe that is why this book resonated so well with me.

This post is the first in a two-part series. Watney’s approaches can be divided between methodologies and psychological skills, both of which are equally important in a stressful situation such as a cyber-attack. In this post, I’ll explore how Watney approached problem-solving and what logic he used to give himself the best chance of survival.

Science is helpful for what can be explained by science
Sciences like physics, chemistry and botany teach us that a small percentage of the future can be predicted if we play within laws that are deterministic. Within those formulas we can predict the outcome of an action, but what “The Martian” illustrates is that even with all that science provides, the majority of the future cannot be determined and we simply have to deal with it. Science explains only a very small percentage of what we as humans experience, so if you happen to be on the high horse of science, get off before you fall.

Science only takes you so far; for the rest you are on your own.

Adapt or die
During his entire time on Mars, Watney needed to adapt to an unfriendly and deadly environment. He needed to assume the roles of farmer, trucker and construction worker to survive. As a farmer, he used his limited resources to create an environment suitable for growing potatoes to sustain a diet until rescue. As a trucker, he had to make his entire living space mobile for the trek across plains and mountains to a rescue craft. As a construction worker, he needed to modify that craft and strip its weight so that he could reach orbit with the fuel on hand.

All of these roles are crafts, which means they encompass not just processes and skills but resources and tools as well. Watney needed all of it to survive. It is likely that an individual in your organization fulfills multiple roles such as incident responder, business leader, IT operations, etc. as they go about their daily job. Adaptation is a survival skill on any planet.

Utilize lateral thinking
While Watney had advanced machinery and materials designed specifically for Mars, none of it was meant for use beyond 31 days. Watney had to stretch it for a year and a half and use it in ways it wasn’t intended. To do that, he had to get creative. He modified machines, adapted materials and jury-rigged a potato farm in his living quarters.

In cyber security, organizations cannot afford to buy a new tool for every specific need. In fact, attempting to do so is ineffective and can lower overall security. Instead, we must adapt our tools. Oftentimes, we can use them for purposes the designer did not envision and make them work with our other tools in creative ways. The same applies to processes. What doesn’t work at another organization may work in yours. Maybe your team is versatile and benefits from regular role reassignments. Maybe your tools are also beneficial to network operations, which can help garner more funding for future cooperative investments. Don’t be afraid to try new and crazy things. It just might save you.

Plan for Failure
A plan is good until it makes first contact with the enemy. Unfortunately, systems sometimes fail and processes may prove ineffective. You cannot rely on success. For every plan that Watney thought of, he tested and prepared for failure. Whenever he made modifications to the rover, Watney would drive it around his living area for days to see how it held up to use. When he reestablished communication with Houston using the remains of the Mars Pathfinder probe, he created a plan on how to provide updates via Morse code should communications fail. Of course, Watney couldn’t imagine every failure scenario, but he planned for enough to keep himself alive.

In cyber security, we must plan for failures. Having strong network perimeter defenses is important, but they cannot be relied on as the sole source of security. Monitoring internal network traffic, utilizing proper segmentation and detecting anomalous and malicious behaviors are important measures to ensure attackers can be stopped after other defenses fail.
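As one concrete, simplified illustration of catching trouble after the perimeter fails, the sketch below baselines each internal host’s outbound volume and flags sudden spikes. The flow-record format, window and threshold are assumptions for illustration, not a description of any particular monitoring product.

    # Simplified illustration: flag hosts whose latest outbound volume is far
    # above their own recent baseline. Record format and threshold are assumed.
    from collections import defaultdict
    from statistics import mean, pstdev

    def flag_anomalous_hosts(flows, threshold=3.0):
        """flows: iterable of (src_host, bytes_out) tuples from internal traffic."""
        per_host = defaultdict(list)
        for host, bytes_out in flows:
            per_host[host].append(bytes_out)

        suspects = []
        for host, volumes in per_host.items():
            if len(volumes) < 3:
                continue
            baseline = mean(volumes[:-1])
            spread = pstdev(volumes[:-1]) or 1.0
            if volumes[-1] > baseline + threshold * spread:
                suspects.append(host)
        return suspects

    flows = [("10.0.0.7", 1200), ("10.0.0.7", 1100), ("10.0.0.7", 950000),
             ("10.0.0.9", 800), ("10.0.0.9", 820), ("10.0.0.9", 790)]
    print(flag_anomalous_hosts(flows))   # ['10.0.0.7']

Real products are far more sophisticated, but the principle is the same: know what normal looks like inside the network so that abnormal stands out.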

Also, don’t forget to save a nice meal for the day you survive something that should have killed you.

Testing and rehearsals are critical
According to Watney, “in space no one can hear you scream like a little girl.” We can plan for failure, but that doesn’t make it any less terrifying. To avoid that terror, Watney tested and tested and rehearsed and tested some more before he did anything. His modified rover had days’ worth of travel time on the odometer before he drove farther than walking distance from the Hab. He put his makeshift tent through the wringer, breaking it in the process, before he ever spent a night in it.

Some failures are so complete that there are no possible backup plans, so we must push our tools and responses until they break in order to make them as strong as possible. This is the mentality behind penetration testing. Security teams need to know exactly what to do in the event of an attack. If they don’t know something, they need to be able to find it out, in minutes. Security tools must function properly under pressure, and responses need to be effective.

Start with these questions: Do you have an incident response plan? (You should.) Have you tested that plan? (You should.) Do you know what to do in the event of an outside attack? What about an inside attack? What are the limits of your tools? Are there any critical blind spots or vulnerabilities in your network? How do you know? Rehearse attack scenarios to find the answers to these questions. Then rehearse some more, and do it regularly. If you don’t identify your own weaknesses first, someone else will.

Next week, I’ll cover what Watney did to stay sane in the face of isolation and death. I’ll also touch on what interpersonal factors were present in the entire Ares 3 crew, which ultimately allowed them to rescue Watney without losing a single person.

CISA Threatens Privacy, Moves on Anyway

By Rachel Holdgrafer, Business Content Strategist, Code42

The Cybersecurity Information Sharing Act (CISA) passed in a 74-21 U.S. Senate vote last week. Critics of CISA say the bill will allow the government to collect sensitive personal data unchecked. Civil liberty and privacy groups, leading technology companies and (via Twitter) Edward Snowden have come out against the bill.

The stated intention of CISA is “to improve cybersecurity in the United States through enhanced sharing of information about cybersecurity threats, and for other purposes.” In short, CISA encourages technology companies to share with the government information about cyber attacks on their networks—as a strategy to fight hacking and cyber crime. Sounds like a noble goal at first blush, but further investigation reveals something decidedly less so.

CISA empowers organizations to monitor and share private citizens’ personal data with government agencies—without the consent of the owner of the data. Given that CISA directly targets technology organizations, the data in question includes that of U.S. citizens.

Your personal data. My personal data. Monitored without a warrant or notification if it is deemed a “cyber threat.” The challenge is that CISA’s definition of a cybersecurity threat is broad; the only noted exclusion is “any action that solely involves a violation of a consumer term of service or licensing agreement.” Even more troubling, CISA incentivizes tech companies to share cyber threat indicators by providing them with legal immunity against antitrust lawsuits.

Legislators supporting CISA have gone on record saying that CISA “is not a surveillance bill” although privacy groups, technology organizations and legislators who oppose the bill disagree.

This is not the first time an information sharing bill has been proposed in an effort to fight cybercrime. CISPA, the Cyber Intelligence Sharing and Protection Act and the precursor to CISA, was passed by the House of Representatives in 2013 but was shelved when President Barack Obama threatened to veto it over concerns with the bill’s privacy protections. President Obama has endorsed CISA and has indicated that he will sign it into law.

Enterprise Data Breaches on the Rise Despite Infosec Policies

By Rachel Holdgrafer, Business Content Strategist, Code42

The 2014 Protiviti IT Security and Privacy Survey reports that:

•  77% of organizations have a password policy or standard.
•  67% of organizations have a data protection and privacy policy.
•  67% of organizations have an information security policy.
•  59% of organizations have a workstation/laptop security policy.
•  59% of organizations have a user (privileged) access policy.

Based on these statistics, the typical enterprise has plenty of IT and information security policies in place. And yet data breaches are on the rise, doubling from December of 2014 to August of 2015. It seems unlikely, then, that security policies alone are keeping enterprise organizations safe.

Human users are often cited as the weakest link in an information security system. Historically, IT has taken a top-down approach that forced users to work within the confines of a system that didn’t take user productivity into consideration. IT and security professionals focused on creating limits to protect the network from the user, throwing up barriers in the name of network security. This impacted user productivity but was accepted as collateral damage in the fight to keep the enterprise network safe. Users were left to choose between upholding security protocols and personal productivity.

Given the choice between job security and network security, most users will choose productivity and hope for the best when it comes to protecting the network. Christian Anschuetz, writing on the Wall Street Journal’s CIO Journal blog, agrees: “Forced to choose between disruptive, apparently irrational, and easily circumvented security directives and getting their job done, employees invariably choose to be productive.”

Changing priorities
While maintaining enterprise security will always be the number one priority of information security professionals everywhere, the modern information security professional recognizes that times are changing. Network security at the expense of user productivity is counterproductive. When threatened with limitations to productivity, users have proven that they will find ways around IT and information security initiatives through shadow IT.

Progressive, security-focused organizations must consider their users when they create security policies. Building security policies and initiatives around user needs allows enterprise organizations to meet security and user-productivity demands simultaneously. Rather than forcing users to work outside of their usual workflows, modern information security secures the enterprise where and how its users prefer to work, eliminating unsanctioned workarounds and shadow IT solutions. The result is greater enterprise security and happier end users.