Three Lessons From the San Francisco Muni Ransomware Attack

By Laurie Kumerow, Consultant, Code42

On Black Friday, a hacker hit San Francisco’s light rail agency with a ransomware attack. Fortunately, this story has a happy ending: the attack ended in failure. So why did it raise the hairs on the back of our collective neck? Because we fear that next time a critical infrastructure system is attacked, it could just as easily end in tragedy. But it doesn’t have to if organizations with Industrial Control Systems (ICS) heed three key lessons from San Francisco’s ordeal.

First, let’s look at what happened: On Friday, Nov. 25, a hacker infected the San Francisco Municipal Transportation Agency’s (SFMTA) network with ransomware that encrypted data on 900 office computers, spreading through the agency’s Windows systems. As a precautionary measure, the third party that operates SFMTA’s ticketing system shut down payment kiosks to prevent the malware from spreading. Rather than stop service, SFMTA opened the gates and offered free rides for much of the weekend. The attacker demanded a 100 Bitcoin ransom, or around $73,000, to unlock the affected files. SFMTA refused to pay since it had a backup system. By Monday, most of the agency’s computers and systems were back up and running.

Here are three key lessons other ICS organizations should learn from the event, so they’re prepared to derail similar ransomware attacks as deftly:

  1. Recognize you are increasingly in cybercriminals’ cross hairs. Cyberattacks on industrial control systems, which manage public and private infrastructure such as electrical grids, oil pipelines and water systems, are on the rise. In 2015, the U.S. Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) responded to 20% more cyber incidents than in 2014. And for the first time since the agency started tracking reported incidents in 2009, the critical manufacturing sector experienced more incidents than the energy sector. Critical manufacturing organizations produce products like turbines, generators, primary metals, commercial ships and rail equipment that are essential to other critical infrastructure sectors.
  2. Keep your IT and OT separate. Thankfully, the San Francisco Muni ransomware attack never went beyond SFMTA’s front-office systems. But, increasingly, cyber criminals are penetrating control systems through enterprise networks. An ICS-CERT report noted that while the 2015 penetration of OT systems via IT systems was low at 12 percent of reported incidents, it represented a 33 percent increase from 2014. Experts say the solution is to adopt the Purdue Model, a segmented network architecture with separate zones for enterprise, manufacturing and control systems (see the sketch following this list).
  3. Invest in off-site, real-time backup. SFMTA was able to recover the encrypted data without paying the ransom because it had a good backup system. That wasn’t the case with the Lansing (Michigan) Board of Water & Light. When its corporate network suffered a ransomware attack in April, the municipal utility paid $25,000 in ransom to unlock its accounting system, email service and phone lines.
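To make the Purdue Model in lesson 2 concrete, here is a minimal sketch of how a segmented zone policy might be expressed and checked. The zone names and allowed flows are illustrative assumptions, not a reference implementation:

```python
# Illustrative sketch of a Purdue-style zone policy (hypothetical zones and flows).
# Traffic may only cross explicitly allowed zone boundaries; office IT never
# reaches control systems directly, only through intermediate layers.

ALLOWED_FLOWS = {
    ("enterprise", "dmz"),         # office IT may reach the industrial DMZ
    ("dmz", "manufacturing"),      # the DMZ brokers access to manufacturing
    ("manufacturing", "control"),  # only the manufacturing layer touches control
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    """Allow traffic only along an explicitly whitelisted zone adjacency."""
    return (src_zone, dst_zone) in ALLOWED_FLOWS

# A ransomware-infected office PC trying to reach a controller is denied:
assert not flow_permitted("enterprise", "control")
assert flow_permitted("enterprise", "dmz")
```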

If San Francisco’s example isn’t enough to motivate ICS organizations to take cybersecurity seriously, then Booz Allen Hamilton’s 2016 Industrial CyberSecurity Threat Briefing should do the trick. It includes dozens of cyber threats to ICS organizations.

Adding Up the Full Cost of a Data Breach

By Nigel Hawthorn, Skyhigh Networks, EMEA Marketing Director

Data breaches happen all the time; they hit the news for a short while, then are replaced by the latest list of victims. So we thought we’d review a data breach from a year ago and look back at the total cost to the company involved. The breach took place in October 2015, when a UK service provider, TalkTalk, was the victim of a DDoS attack and a SQL injection used to extract the data.

Background
TalkTalk suffered a data breach in October 2015 resulting in the theft of personal data. Full details of the loss are available in other articles, so there’s no need to go into the technical details here.

There was a huge amount of publicity in the UK, and during the first few days the situation and the amount of data lost were not clear. In the end, 156,959 sets of personal details were stolen, and 15,656 of these included bank account details. The company contacted each of its customers to reassure them and provided a free credit monitoring subscription for a year in case other data had also been lost and was misused.

In its next financial results, the company admitted to lost customers, direct costs to the business of £60,000,000 and a revenue drop of £80,000,000. A subsequent review of the total market showed that it had lost 4.4% market share.

One year later, in October 2016, TalkTalk was fined £400,000 by the Information Commissioner’s Office (ICO) for the incident. The fine is the highest ever imposed by the ICO, with TalkTalk’s lack of cybersecurity cited as the reason for its size. The Information Commissioner, Elizabeth Denham, said that TalkTalk’s “failure to implement the most basic cybersecurity measures allowed hackers to penetrate systems with ease”. While in the eyes of some the fine may seem high, it’s only around £2.50 per impacted customer.

This breach can be examined further and there are key lessons all businesses should learn.

  1. The total cost of a data breach isn’t always obvious
    While the £400,000 fine is substantial, it’s really just the tip of the iceberg compared with how much the data breach actually cost. There were many other financial repercussions which, to some firms, could have been fatal: an 11 percent drop in share price, as well as the loss of 101,000 existing customers and of potential future ones. All in all, when remediation costs are included too, TalkTalk calculated that the breach cost it more than £80 million in revenue. That’s hardly pocket change.
  2. Acquisitions and demergers affect cyber risk
    When Carphone Warehouse purchased the UK subsidiary of Tiscali, the business was merged with TalkTalk, which it also owned at the time. Following the data breach, the ICO’s investigation revealed that the hackers had gained access to the customer database through vulnerable web pages that had belonged to Tiscali. When companies join or split, the impact on IT systems must be managed, no matter how insignificant individual systems may seem. Systems will have different parentage, which can affect how effective a cybersecurity solution or process is, leaving potential access points unguarded.
  3. Patching and updating can mitigate some of the risks caused by aging systems
    It’s no great surprise that older systems are more vulnerable to cyber attacks than newer ones. Yet some businesses continue to rely on aging systems without patching or updating them, which simply makes things even easier for cybercriminals. The targeted Tiscali web pages had not been patched for three and a half years, and the backend database was no longer supported by the supplier. When you consider the rapid pace of cyber threat evolution, that’s the equivalent of leaving the windows and doors open. Businesses must ensure they are patching on a regular basis and setting aside time for major updates.
  4. Warnings and red flags should be investigated
    TalkTalk has faced, and will continue to face, scrutiny for its handling of the debacle, but one of the biggest criticisms is that it did not investigate numerous warnings that something was wrong. While it was the October 2015 data breach that made these particular headlines, TalkTalk customers had already fallen victim to scams stemming from a previous breach, and the regulator’s investigation found there had been two earlier SQL injection attacks in the preceding three months; TalkTalk was not monitoring those particular web pages. Whether the company ignored the warnings or was simply ignorant, businesses should investigate any signs that an issue exists. This also includes red flags generated by cybersecurity systems. Almost a third of companies suffer from alert fatigue, due to the sheer frequency of alerts and numerous false positives, and do not investigate.
  5. Communication plans are essential
    How a company communicates a data breach is vital in mitigating the potential damage to its reputation. If customer data has been compromised, customers need to be made aware of it, and the need is even more pressing if bank details are taken. To ensure all stakeholders are reassured that the situation is being handled, firms must have a communication plan, including draft email, letter and script templates, in place so statements can be issued immediately. Unfortunately, TalkTalk’s initial responses fanned the flames, due in part to lack of preparation as well as slow identification of the total data loss. While companies must be proactive with their communications, they must also have the necessary resources to deal with customers calling in. TalkTalk customers faced long hold times when ringing to find out more information, compounding anger further.
  6. EU GDPR will increase fines
    The ICO’s fine is a record amount, but TalkTalk is fortunate that the breach took place before the EU GDPR comes into force in May 2018. The new regulation will see potential fines increase to four percent of global turnover or €20 million, whichever is higher. In TalkTalk’s case this could mean a fine of around £73M, roughly the same amount as its profit in its last financial year (a worked illustration follows this list).
  7. EU GDPR enforces disclosure
    The GDPR demands disclosure of all incidents involving loss of unencrypted data. Any company that experiences data loss, regardless of whether it’s the company’s fault or a third party’s, will have 72 hours to disclose it to the regulators and will have to inform data subjects “without delay”, so being able to investigate data transfers and monitor cloud use will become essential.
  8. Cybersecurity is a boardroom issue
    If a company were to take only one lesson away from TalkTalk’s breach, it’s that data is now the crown jewels of any business. Not only will it help drive sales and growth, but mishandling it can lead to severe fines and even closure. It needs to be treated with the utmost respect and that means understanding that cybersecurity is now a boardroom discussion. For too long it has been considered the remit of IT but, with so many areas where a business can become vulnerable, it must now be an enterprise-wide endeavour.
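As a back-of-the-envelope illustration of the GDPR arithmetic in lesson 6, the sketch below applies the higher-of rule. The turnover and exchange-rate figures are assumptions chosen to reproduce the ballpark above, not official TalkTalk numbers:

```python
# Hedged illustration of the GDPR maximum-fine rule: the greater of 4% of
# global turnover or EUR 20 million. Inputs are assumptions for the example.

def gdpr_max_fine_gbp(turnover_gbp: float, eur_to_gbp: float = 0.85) -> float:
    """Return the maximum GDPR fine in GBP: max(4% of turnover, EUR 20M)."""
    return max(0.04 * turnover_gbp, 20_000_000 * eur_to_gbp)

assumed_turnover = 1.83e9  # assumed annual turnover in GBP
print(f"£{gdpr_max_fine_gbp(assumed_turnover):,.0f}")  # ~£73,200,000
```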

Cyber Insurance Against Phishing? There’s a Catch

By Jeremy Zoss, Managing Editor, Code42

If one of your employees gets duped into transferring money or securities in a phishing scam, don’t expect your cyber insurance policy to cover it. And even your crime policy won’t cover it unless you purchase a specific social engineering endorsement. Many companies have learned the hard way and tried to sue their insurance carriers, with little luck.

Aqua Star, a New York seafood importer, expected to be covered after a spoofed email from a supplier led an employee to change the supplier’s bank account, causing Aqua Star to wire more than $700,000 to a hacker instead of the supplier. Aqua Star had a crime policy through Travelers, which included Computer Fraud coverage applying to loss caused by the fraudulent entry of electronic data into any computer system owned, leased or operated by the insured. But when Aqua Star filed the claim, Travelers pointed to an exclusion for data entered by an authorized user. Aqua Star then sued Travelers, but the court agreed with Travelers, ruling that the employee was clearly an authorized user.

A similar phishing scam resulted in Apache Corp., an oil and gas producer, wiring $2.4 million to cybercriminals. Its insurance company, Great American, denied the payout, so Apache went to district court and won. However, Great American appealed to a higher court, which reversed the decision, saying the bogus email didn’t directly cause the loss.

What commercial cyber insurance policies do cover
Cyber insurance policies cover losses that result from unauthorized data breaches or system failures. But they vary greatly in the details and exceptions. Most will cover forensic investigation fees, monetary losses caused by network downtime, data loss recovery fees, costs to notify affected parties and manage a crisis, legal expenses, and regulatory fines.

When it comes to ransomware, you need to look closely at the policy’s Cyber Extortion coverage. If it offers only third-party coverage, then ransomware isn’t covered.

Crime insurance policies cover losses that result from theft, fraud or deception. But as the Aqua Star and Apache examples illustrate, insurers typically deny coverage for social engineering fraud, claiming that the loss didn’t result from “direct” fraud. Insurers contend that the crime policy applies only if a cybercriminal penetrates the company’s computer system and illegally takes money out of company coffers.

Some crime policies also contain a “voluntary parting” exclusion that specifically bars social engineering claims by barring coverage for losses that arise out of anyone acting with authority who voluntarily gives up title to, or possession of, company property.

Fishing for a solution? Add an endorsement
Many insurance companies offer a social engineering fraud endorsement, like this one from Chubb. It’s offered under a crime policy for a nominal additional premium. The coverage, sometimes referred to as an impersonation fraud or fraudulent instruction endorsement, is typically up to $250,000 per occurrence, with no annual aggregate, but higher limits are available for a higher premium.

The net lesson: a phishing endorsement is an easy fix to a potentially costly oversight.

Standardizing Cloud Security with CSA STAR Certification

By Tolga Erbay, Senior Manager, Security Risk and Compliance, Dropbox

In early 2014 Dropbox joined the Cloud Security Alliance (CSA). Working with the CSA is an important part of Dropbox’s commitment to security and transparency.

In June of 2014, Dropbox completed STAR Level 1, the self-assessment tier of the CSA’s publicly available registry, which documents how Dropbox’s security practices measure up to industry-accepted standards and the CSA’s best practices. Building on that Level 1 Self-Assessment, Dropbox recently announced CSA STAR Level 2 Certification, which attests to its security controls and processes.

“Dropbox continuously proves to be at the forefront of compliance standards,” said Jim Reavis, co-founder and CEO of the Cloud Security Alliance (CSA). “With rigorous independent auditing and certification for both well-accepted and up-and-coming standards, they’re demonstrating an impressive dedication to their customers’ security. We’re excited to have Dropbox on the short list of companies that have achieved our Security, Trust & Assurance Registry (STAR) Level 2 Certification.”

Dropbox is dedicated to building trust with its customers across the globe, and helping them fit Dropbox into their compliance strategies. Dropbox is proud to work closely with the CSA to establish open and transparent cloud security best practices within the industry. Dropbox strives to stay ahead of the curve as new standards and certifications are introduced and will continue to partner with the CSA to support research and education in key cloud security areas.

Standards such as CSA STAR certification underscore Dropbox’s commitment to keeping customer data safe, operating at the highest levels of availability, and maintaining transparency in data storage and processing. And they demonstrate Dropbox’s leadership in the SaaS industry, as Dropbox is one of the first major providers to achieve CSA STAR certification. Dropbox is excited to make continued strides with these compliance milestones.

IBM Touts Major Mac Cost Savings; IT Professionals Still Hesitant

By Lance Logan, Manager/Global Marketing Program, Code42

For the second year in a row, IBM’s Fletcher Previn wowed the audience at the JAMF user conference with impressive statistics on how the company’s growing Mac-based workforce is delivering dramatic and measurable business value.

IBM expects Macs to save $26M in IT costs over four years
Big Blue says each Mac will save it at least $265 over a four-year lifespan (and up to $535, depending on model) versus comparable PCs. With IBM’s Mac workforce at 90,000 (and adding 1,300 Mac users per month), that adds up to more than $26 million in savings over the next four years—a huge margin. Simpler IT support and a high level of user self-service drive the bulk of this cost savings. IBM reports that just 3.5 percent of its Mac users currently call the help desk, compared to 25 percent of its PC users. This enables IBM to support 90,000+ Mac users (and 217,000 Apple device users) with just 50 IT employees.
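Those figures can be sanity-checked with simple arithmetic. The sketch below uses only the numbers cited above; the blended result depends on device mix and growth timing, so treat it as a rough bound rather than IBM’s actual model:

```python
# Back-of-the-envelope check of the reported IBM savings (inputs from the article).

fleet = 90_000        # current Mac users
low, high = 265, 535  # reported per-device savings (USD) over a 4-year lifespan
monthly_adds = 1_300  # new Mac users per month

base_low, base_high = fleet * low, fleet * high
print(f"Current fleet alone: ${base_low/1e6:.2f}M to ${base_high/1e6:.2f}M")
# -> roughly $23.85M to $48.15M; the 1,300 users added each month accrue
#    additional savings that push the low end past the $26M figure IBM cites.
```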

It’s not just IT cost savings driving Mac adoption among big names in business tech. Deloitte calls iOS “the most secure platform for business” and says “Apple’s products are essential to the modern workforce.” Cisco has also jumped on the Apple bandwagon, believing Apple devices will accelerate productivity. Basic user satisfaction also shouldn’t be ignored, as IBM reports a 91 percent satisfaction rate among Mac users and says its pro-Mac policies help the company attract and retain top talent.

The average enterprise is still hesitant about widespread Mac deployment
It’s one thing for big-name tech innovators like IBM and Cisco to proclaim the promise of Macs in the enterprise, but what’s happening across the rest of the enterprise landscape? Code42 recently conducted a survey on Mac deployment among our diverse business contacts, and the results tell a less enthusiastic story.

Macs have a major—and growing—presence in the modern enterprise
Among Code42’s enterprise contacts, one-third (33.6%) have more than 500 Mac users and one in five (22.8%) have 1,000+ Mac users. These numbers further demonstrate that the modern enterprise is supporting OS diversity with a substantial Mac-based workforce—and we fully expect these numbers to grow in the coming years.

User preference—not business value—still drives most Mac adoption
While IBM and others put total cost of ownership, security and productivity as top reasons for Mac adoption, our results show user preference continues to be the main reason that enterprises are embracing Macs today.

Top reasons for Mac adoption
1. Happier end users (37%)
2. Fewer help desk tickets (14%)
3. Better OS security (12%)

Top IT challenges are Macs’ top strengths
Our survey showed the time-consuming burdens of tech refresh and help desk tickets are the most significant IT challenges associated with end user devices across operating systems, followed by malware/ransomware. These challenges are actually two of Mac devices’ greatest strengths. Macs traditionally enable a much higher level of self-service, and Code42 enables user-driven tech refresh for Mac users (and PC users, too). This level of self-service produces the kind of IT cost savings IBM has seen with its dramatically reduced help desk tickets. For the time being, Macs also continue to be less targeted and less vulnerable to malware and ransomware.

Many IT professionals remain wary of widespread Mac deployment
While our survey showed most enterprises may not be seeing million-dollar IT savings from Mac deployments, they did report a range of definitive benefits. So it’s revealing that one in five respondents said they’re ultimately not big fans of their companies’ Mac adoption.

Realizing advantages of Macs in the enterprise requires preparation, time
Supporting a large Mac-based workforce isn’t as simple as flicking a switch or changing a policy. It requires substantial changes to technology infrastructure and processes to make sure everything from calendars to apps to backup works seamlessly across both Mac and PC users. This often leaves IT stuck between user preferences and resource realities: users want Macs, but IT needs the time—and the budget—to put the tools and processes in place to support a hybrid workforce.

But with IBM’s results ringing in the ears of the business world, more and more companies of every size and in every industry are sure to begin exploring the benefits of a larger Mac-based workforce. The best strategy for IT leaders is to act now to get ahead of this inevitable shift. Start examining your infrastructure to find the holes in Mac compatibility, and seek out technology partners that build solutions for this modern hybrid device environment.

Or, as IBM’s Previn put it, “Give employees the devices they want, manage those devices in a modern way, and drive self sufficiency in the environment.”

To learn more about how endpoint backup can protect the data on enterprise Macs, download the market brief Securing & Enabling the Mac-Empowered Enterprise.

DevOpsSec, SecDevOps, DevSecOps: What’s in a Name?

By Jamie Tischart, CTO Cloud/SaaS, Intel Security

The world is awash in DevOps, but what does that really mean? Although DevOps can mean several things to different individuals and organizations, ultimately it is about the cultural and technical changes that occur to deliver cloud services in a highly competitive environment.

Cultural changes come in the form of integrating historically disparate teams around a single vision. Technical changes come with automating as much of the development, deployment, and operational environment as possible to more rapidly deliver high-quality and highly secure code.

This is where I believe the DevOps debate becomes cloudy (sorry for the pun). As is normal in engineering endeavors, we often forget the purpose or the problem we are trying to solve and instead get mired in the details of the process or the tool. We tend to lose sight of the fact that bringing DevOps together has the purpose of solving how to more rapidly deliver higher-quality, more secure products to our customers, so they can solve their problems and we stay ahead of our competitors.

I found it interesting that there is little debate about whether DevOps or OpsDev is the right term, yet adding security into the mix has produced three coined terms: DevSecOps, SecDevOps, and DevOpsSec. At first I didn’t give it much thought; I figured that over time the industry would converge on a standard and we would move on our merry way toward that difficult goal of high-quality, highly secure continuous deployment of cloud services. Then I looked closer and thought that there might be something to these three nomenclatures: they highlight the different challenges that security has in integrating into the software development lifecycle.

Let’s talk about the general purpose of including security in DevOps practices. Security was often an assumed part of the development and testing process to which few people paid attention. Or security was an afterthought that slowed down the development process and release cycle, executed by some other team and requiring fixes to obscure vulnerabilities that would never be found or leveraged for harm.

That entire mindset, while flawed, worked reasonably well in the world of single-tenant application development where a 12-month release cycle was the norm and applications were deployed behind several layers of security appliances. This all changed when we started delivering multi-tenant cloud offerings where any vulnerability could put millions of customers and the reputation of our companies at risk. Yet, we still held onto some of these archaic practices. We were slow to integrate secure coding and testing practices into our everyday engineering execution. We continued to leave security activities until the end of cycles and we left many vulnerabilities unattended because it slowed the release. This was until, of course, someone exploited the vulnerability and then everyone dropped everything and all hell broke loose.

Integrating Security into DevOps
Integrating security into DevOps practices is the goal to alleviate these problems. It is the way to continuously evolve security through automated techniques and to achieve our goal of rapidly delivered high-quality, highly secure products. This brings me back to the different terms for integrating security into the DevOps movement and how each organization needs to determine how security is integrated.

Let’s first look at DevOpsSec. Consider the order, and how it implies that security still comes at the end of the process. Maybe I am just being paranoid, but this is a practice we need to curtail; instead we should embed security into every aspect of the lifecycle. If we expound on that a bit and take it literally (and maybe we shouldn’t), the team will complete dev, deploy and operate, and then review security. If this is done in small increments and completed rapidly, it is still a massive improvement over the end-game security testing we have seen in the past. However, it may still expose vulnerabilities within cloud production environments and require reversion or patching that could have been completed beforehand.

Next let’s review SecDevOps. This would imply that the security activities occur before any development or operations. I am not sure that this is truly practical, although it is certainly a well-intentioned principle and has merits that should be incorporated into the DevOps practice. My interpretation of this is that new requirements/user stories/features – whatever your method – include security requirements in the development. If we take this to the next step, then these security requirements would have automated tests created and added to the automation suites so they can run continuously to ensure that security is inclusive throughout the cycle. Hmm, this sounds pretty good…
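As a deliberately simple illustration of that idea, a security requirement such as “every response must enforce HTTPS” can live in the same automated suite as the functional tests and run on every commit. The endpoint, and the choice of pytest and requests, are assumptions for the sketch:

```python
# Sketch: security user stories expressed as continuously run automated tests.
# pytest + requests are illustrative choices; the endpoint is hypothetical.
import requests

SERVICE_URL = "https://staging.example.com/health"  # hypothetical endpoint

def test_hsts_header_present():
    """Security requirement: every response must enforce HTTPS via HSTS."""
    resp = requests.get(SERVICE_URL, timeout=10)
    assert "Strict-Transport-Security" in resp.headers

def test_server_header_leaks_no_version():
    """Security requirement: responses must not disclose server versions."""
    resp = requests.get(SERVICE_URL, timeout=10)
    assert not any(ch.isdigit() for ch in resp.headers.get("Server", ""))
```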

The last one is DevSecOps. Literally, you can expand this to completing development, then reviewing and automating for security, and then deploying and operating. This articulation hopes to catch security concerns before they are deployed to the world, but it is not as incorporated into the overall process as SecDevOps. Certainly DevSecOps has the benefit of focusing on security before introducing a vulnerability to the wild, but it is not security-focused in every activity.

Maybe I am taking it too literally, but maybe what we need is SecDevSecOpsSec. Here, security is a continuous activity in itself that needs to be incorporated into all stages of the product lifecycle. However, that is quite a mouthful…

The important thing is that when your organization is approaching DevOps, don’t forget the security aspect. Think about how you are going to integrate security into every aspect of your lifecycle. As for which term to utilize, I am going to standardize on SecDevOps. Integrating security at the start has the best of intentions and will lead to the most secure practices.

Insurance Carrot Beats Government Stick in Quest for Stronger Cybersecurity

By Laurie Kumerow, Consultant, Code42

When it comes to cybersecurity, the U.S. federal government recognizes the carrot is more effective than the stick. Instead of using regulations to increase data security and protect personal information within private organizations, the White House is enlisting the insurance industry to offer incentives for adopting security best practices.

In March 2016, the U.S. House Homeland Security Cybersecurity Subcommittee held a hearing to explore possible market-driven cyber insurance incentives. The idea, said Rep. John Ratcliffe, chairman of the subcommittee, is to enable “all boats to rise, thereby advancing the security of the nation.”

The issue isn’t a lack of cyber insurance. Today, 80% of companies with more than 1,000 employees have a standalone cybersecurity policy, according to a Risk and Insurance Management Society survey. The real issue is getting companies to maintain more than a minimum set of security standards.

Borrowing from the fire insurance playbook
The insurance industry has been a catalyst for change in the past. Attendees of the Homeland Security Cybersecurity Subcommittee hearing pointed to the fire insurance market as a good example of using a carrot to drive positive behavior. Insurers offer lower rates to policyholders who adhere to certain fire safety standards, such as installing sprinklers and having extinguishers nearby.

Identifying best practices
So, what are the cybersecurity equivalents of sprinklers and fire alarms? Hearing attendees highlighted four components of an effective cyber risk culture:

  • Executive leadership: what boards of directors should do to build corporate cultures that manage cyber risk well.
  • Education and awareness: training and other mechanisms that are necessary to foster a culture of cybersecurity.
  • Technology: specific technologies that can improve cybersecurity protections.
  • Information sharing: ensuring the right people within the company have the information they need to enhance cybersecurity risk investments.

Spurring much-needed actuarial data
The hearing also touched on a major missing element in the current cyber insurance industry: reliable actuarial data regarding data breaches and other cyber incidents. Auto insurers know the likelihood of car accidents, so they know how to price the liability and measure the risk. But the likelihood and ramifications of various data breaches are a wildcard today, leading to problems in pricing cybersecurity policies.

Hearing attendees discussed creating an actuarial data repository with data from leading actuarial firms, forensic technology firms and individual insurer cyber claims. The proposed database would be housed at a nongovernmental location such as the Insurance Services Office Inc. (ISO), which has managed insurer actuarial databases for more than four decades. The hope is the database would encourage voluntary sharing of information about data breaches, business interruption events and cybersecurity controls to aid in risk mitigation.

While the cyber insurance carrot is a long way from becoming reality, at least the seed has been planted.

One Day Is a Lifetime in Container Years

By Jon King, Security Technologist and Principal Engineer, Intel Security

Securing virtual assets that appear and disappear.

The average life span of a container is short and getting shorter. While some organizations use containers as replacements for virtual machines, many are using them increasingly for elastic compute resources, with life spans measured in hours or even minutes. Containers allow an organization to treat the individual servers providing a service as disposable units, to be shut down or spun up on a whim when traffic or behavior dictates.

Since the value of an individual container is low, and startup time is short, a company can be far more aggressive about its scaling policies, allowing the container service to scale both up and down faster. Since new containers can be spun up in seconds or sub-seconds instead of minutes, they also allow an organization to scale down further than was previously possible while still keeping sufficient overhead available to manage traffic spikes. Finally, if a service is advanced enough to have automated monitoring and self-healing, a minuscule perturbation in container behavior might be sufficient to cause the misbehaving instance to be destroyed and a new container started in its place.

At container speeds, behavior and traffic events arrive too quickly for humans to process and react to. By the time an event is triaged, assigned, and investigated, the container will be gone. Security and retention policies need to be set correctly from the moment the container is spawned. Is this workload allowed to run in this location? Are rules set up to manage the arbitration between security policies and SLAs?
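Enforcing policy at spawn time therefore has to be programmatic. A minimal sketch of the idea, assuming the Docker SDK for Python and a deployment pipeline that labels each container with its approved zone (the label, zones, and stop-on-violation response are all illustrative choices):

```python
# Sketch: spawn-time policy enforcement via Docker start events.
# Assumes the Docker SDK for Python (pip install docker); the label scheme
# and the policy response are illustrative assumptions.
import docker

ALLOWED_ZONES = {"us-east", "eu-west"}   # hypothetical approved locations

client = docker.from_env()
for event in client.events(decode=True, filters={"event": "start"}):
    attrs = event.get("Actor", {}).get("Attributes", {})
    zone = attrs.get("approved-zone")    # hypothetical label set at deploy time
    if zone not in ALLOWED_ZONES:
        container = client.containers.get(event["id"])
        print(f"Policy violation: {attrs.get('name')} (zone={zone}); stopping")
        container.stop()
```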

The volume of events from containers also overwhelms human capabilities. Automation and machine learning are essential to collect this data, filter it, and augment the human security professionals who are doing triage. Identifying suspicious traffic or unexpected container behavior through pattern recognition, correlation, and historical comparison is an essential job that machines are very good at.

Perhaps the biggest issue with container life spans is the potential lack of information available for investigations. If you have a container breach, the container is probably gone when you need it for forensic details. It’s like the scene of a crime being deleted before the detectives arrive.

The good news is that if you collect information from a container while it is running, you have a wealth of information available to you. Memory dumps can be captured and analyzed for traces of a malware infection or exfiltration function. And stopped containers can be saved for later analysis. Done well, this is like going back in time to a crime scene and being able to examine every detail—not just the faint traces the criminal left behind. Of course, saving this type of data runs counter to the ephemerality that gives containers many of their benefits, and could quickly consume a huge amount of storage, so once again automation and machine learning are crucial to help decide what artifacts to retain.
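A hedged sketch of what that capture step might look like, again assuming the Docker SDK for Python; the repository name, tag scheme, and storage path are illustrative, and a real pipeline would decide automatically which containers merit preservation:

```python
# Sketch: preserve a suspicious container for forensics before it disappears.
# Assumes the Docker SDK for Python; names and paths are illustrative.
import time
import docker

def preserve_for_forensics(container_id: str) -> None:
    client = docker.from_env()
    container = client.containers.get(container_id)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    short_id = container_id[:12]

    # Freeze the container filesystem as an image we can analyze at leisure.
    container.commit(repository="forensics/capture", tag=f"{short_id}-{stamp}")

    # Also export a raw tar of the filesystem as an off-box artifact.
    with open(f"/var/forensics/{short_id}-{stamp}.tar", "wb") as archive:
        for chunk in container.export():
            archive.write(chunk)
```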

As the latest form of resource virtualization, containers enable a new and growing set of security opportunities and threats. Actively involving the security team in container architecture discussions will make sure you are using them to best advantage.

Out of the Shadows

By Patty Hatter, Vice President and General Manager, Intel Security Group Professional Services

How to Bring Cloud Usage into the Light

On any given day – with a quick spot-check – you’ll probably find that up to half of your company’s IT usage is basically hidden in the shadows of various business units. Marketing, finance, sales, human resources, and engineering are using file sharing services with customers, online collaboration tools with contractors and suppliers, and multiple SaaS solutions in addition to on-demand IaaS compute resources. Business areas oftentimes make swift decisions to keep their business operations running. As departments look for the best way to do their jobs and efficiently meet their business objectives, they opt for immediate solutions that often operate outside of corporate IT security policies and guidelines.

When it comes to business units – if you haven’t created an environment of trust – IT can quickly rank as the least-loved group in a company. Worse yet, you could be seen as the department of prevention. While the business units are looking for new apps or elastic compute to increase productivity, IT is looking for efficiency, security, and compliance. Departments will sidestep IT if they believe the needed services won’t be available in time, or if the value proposition is weak.

In today’s cyberattack-riddled environments, “shadow IT” is undeniably risky. To ensure optimum safety, you’ve got to bring it into the light. Multiple file sharing services have been breached, and credential theft can potentially allow an adversary into any of these services. You’ve got to have IT security experts involved in the selection of these cloud services or the construction of private clouds. Period.

Soon after joining McAfee, I took on the added responsibility of CIO alongside my role as VP of operations. No easy task – but I saw what the business functions needed to move forward, and I knew that IT had to be at the center of it, as a “reliable and trustworthy business partner.” My first objective was the transformation of IT into a more collaborative and positive role. There was a lot of shadow IT at the company then and a pervasive attitude of mistrust.

Transformation is an issue of trust. If other groups within the company felt they could not work with IT, we needed to counter that perception. We started with the business functions, which tend to have simpler IT needs, such as marketing and sales, and moved up to the big challenge of winning over engineering.

Start with forgiveness
“It’s easier to ask for forgiveness than permission” is something you often hear when groups are discussing a shadow IT project. I suggest approaching with an attitude of forgiveness and understanding – to rebuild what are often strained relationships. Recent hacks and breaches will make this easier. You may have to remind your colleagues that their data is better off under the IT security tent if something bad happens, and that you will be their partner in this. Having to face the board of directors because the new marketing strategy, product designs, or customer data was stolen is a scenario that should convince most managers to at least participate in talks.

Build trust with transparency
You still need to address the agility and cost issues that are the root cause of shadow IT, or the problem will persist. We put together an effective governance model that enabled a high level of transparency on what was and wasn’t working. IT doesn’t always think the same way as the other groups, and clear communication and governance were important steps to understanding the business units’ needs and building trust. Developing the cost models together, our business units realized that they got a much better financial deal when working with IT. Moreover, they were operating within the boundaries of corporate security policies.

Set up a cloud architecture team
Tackling shadow IT from the engineering department brought new issues to light. With their own technical resources, “do it yourself” is often the default path for engineering. This not only results in a gap between IT and engineering, but also in different development stacks and services across the various product teams, which makes it costly and difficult to scale. We set up an engineering/IT cloud architecture team to build a consistent set of use cases and identify big bets that we could put our joint resources on, so we could move forward quickly. It took time to get this started, but we were playing the long game here, working to bridge these two groups, not trying for a quick takeover.

In the end, the teaming approach among IT, the business functions, and engineering enabled us to develop a total view of business needs and a joint architectural approach. We had full visibility of the on-prem and SaaS-managed infrastructure and capabilities, which allowed us to get the results we needed: rapid achievement of new capabilities and an improved cost model.

Evolving Threats Compel an About-face in Data Protection Strategy

By Vijay Ramanathan, Vice President of Product Management, Code42

It’s time to flip our thinking about enterprise information security. For a long time, the starting point of our tech stacks has been the network. We employ a whole series of solutions on servers and networks—from monitoring and alerts to policies and procedures—to prevent a network breach. We then install some antivirus and malware detection tools on laptops and devices to catch anything that might infect the network through endpoints.

But this approach isn’t working. The bad guys are still getting in. We like to think we can just keep building a bigger wall, but motivated cybercriminals and insiders keep figuring out ways to jump over it or tunnel underneath it. How? By targeting users, not the network. Today, one-third of data compromises are caused by insiders, either maliciously or unwittingly.

Just because we have antivirus software or malware detection on our users’ devices doesn’t mean we’re protected. Those tools are only effective about 60% to 70% of the time at best. And with the increasing prevalence of BYOD, we can’t control everything on an employee’s device.

Even when we do control enterprise-issued devices, our security tools can’t prevent a laptop from being stolen. Or keep an employee from downloading client data onto a USB drive. Or stop a high-level employee from emailing sensitive data to a spear phisher posing as a co-worker.

We need to change our thinking. We need to admit that breaches are inevitable and be prepared to quickly recover and remediate. That means starting at the outside, with our increasingly vulnerable endpoints.

With a good endpoint backup system in place, one that’s backing up data in real time, you gain a window into all your data. You can see exactly where an attack started and what path it took. You can see what an employee who just gave his two weeks’ notice is doing with data. You can see if a stolen laptop has any sensitive data on it, so you know if it’s reportable or not.

By starting with endpoints, you eliminate blind spots. And isn’t that the ultimate goal of enterprise infosec?

To learn more about the starting point in the modern security stack, watch the on-demand webinar.

Container Sprawl: The Next Great Security Challenge

By Jon King, Security Technologist and Principal Engineer, Intel Security

And you thought virtualization was tough on security …

Containers, the younger and smaller siblings of virtualization, are more active and growing faster than a litter of puppies. Recent stats for one vendor show containers now running on 10% of hosts, up from 2% 18 months ago. Adoption is skewed toward larger organizations running more than 100 hosts. And the number of running containers is expected to increase by a factor of 5 in nine months, with few signs of slowing. Once companies go in, they go all in. The number of containers per host is increasing, with 25% of companies running 10 or more containers simultaneously on one system. Containers also live for only one-sixth the time of virtual machines. These stats would appear to support the assertion that containers are not simply a replacement for server virtualization, but the next step in granular resource allocation.

Adequately protecting the large number of containers could require another level of security resources and capabilities. To better understand the scope of the problem, think of your containers as assets. How well are you managing your physical server assets? How quickly do you update details when a machine is repaired or replaced? Now multiply the number of units by 5 to 10, and cut the turnover time to a couple of days. If your current asset management system is just keeping up with the state of physical machines, patches, and apps, containers are going to overwhelm it.

Asset management addresses the initial state of your containers, but these are highly mobile and flexible assets. You need to be able to see where your containers are, what they are doing, and what data they are operating on. Then you need sufficient controls to apply policies and constraints to each container as it spins up, moves around, and shuts down. It is increasingly important to be able to govern data movement within virtual environments: controlling where data can go, encrypting it in transit, and logging access for compliance audits.

While the containers themselves have an inherent level of security and isolation, the large number of containers and their network of connections to other resources increase the attack surface. Interprocess communications have been exploited in other environments, so they should be monitored for unusual behavior, such as destinations, traffic volume, or inappropriate encryption.

One of the great things about containers, from a security perspective, is the large amount of information you can get from each one for security monitoring. This is also a significant challenge, as the volume will quickly overwhelm the security team. Security information and event management (SIEM) tools are necessary to find the patterns and correlations that may be indicators of attack, and compare them with real-time situational awareness and global threat intelligence.

Containers provide the next level of resource allocation and efficiency, and in many ways deliver greater isolation than virtual machines. However, if you are not prepared for the significant increase in numbers, connections, and events, your team will quickly be overwhelmed. Make sure that, as you take the steps to deploy containers within your data center, you also appropriately augment and equip your security team.

Fight Against Ransomware Takes to the Cloud

By Raj Samani, EMEA CTO, Intel Security

“How many visitors do you expect to access the No More Ransom Portal?”

This was the simple question asked prior to this law enforcement (Europol’s European Cybercrime Centre, Dutch Police) and private industry (Kaspersky Lab, Intel Security) portal going live, which I didn’t have a clue how to answer. What do YOU think? How many people do you expect to access a website dedicated to fighting ransomware? If you said 2.6 million visitors in the first 24 hours, then please let me know six numbers you expect to come up in the lottery this weekend (I will spend time until the numbers are drawn to select the interior of my new super yacht). I have been a long-time advocate of public cloud technology, and its benefit of rapid scalability came to the rescue when our visitor numbers blew expected numbers out of the water. To be honest, if we had attempted to host this site internally, my capacity estimates would have resulted in the portal crashing within the first hour of operation. That would have been embarrassing and entirely my fault.

Indeed, my thoughts on the use of cloud computing technology are well documented in various blogs, my work within the Cloud Security Alliance, and the book I recently co-authored. I have often used the phrase, “Cloud computing in the future will keep our lights on and water clean.” The introduction of Amazon Web Services (AWS) and AWS Marketplace into the No More Ransom Initiative to host the online portal demonstrates that the old myth, “One should only use public cloud for noncritical services,” needs to be quickly archived into the annals of history.

To ensure such an important site was ready for the large influx of traffic at launch, we had around-the-clock support out of Australia and the U.S. (thank you, Ben Potter and Nathan Case from AWS!), which meant everything was running as it should and we could handle millions of visitors on our first day. This, in my opinion, is the biggest benefit of the cloud. Beyond scalability, and the benefits of outsourcing the management and the security of the portal to a third party, an added benefit was that my team and I could focus our time on developing tools to decrypt ransomware victims’ systems, conduct technical research, and engage law enforcement to target the infrastructure to make such keys available.

AWS also identified controls to reduce the risk of the site being compromised. With the help of Barracuda, they implemented these controls and regularly test the portal to reduce the likelihood of an issue.

Thank you, AWS and Barracuda, and welcome to the team! This open initiative is intended to provide a noncommercial platform to address a rising issue targeting our digital assets for criminal gain. We’re thrilled that we are now able to take the fight to the cloud.

Personalized Ransomware: Price Set by Your Ability to Pay

By Susan Richardson, Manager/Content Strategy, Code42

Smart entrepreneurs have long employed differential pricing strategies to get more money from customers they think will pay a higher price. Cyber criminals have been doing the same thing on a small scale with ransomware: demanding a larger ransom from individuals or companies flush with cash, or organizations especially sensitive to downtime and service disruptions. But now it appears cyber criminals have figured out how to improve their ROI by attaching basic price discrimination to large-scale, phishing-driven ransomware campaigns. So choosing to pay a ransom could come with an even heftier price tag in the near future.

Personalization made easy: no code required
Typically, a ransom payment amount is provided by a command and control server or is hardcoded into the executable. But Malware Hunter Team recently discovered a new ransomware variant called Fantom that uses the filename to set the size of the ransom demand. A post on the BleepingComputer blog explains that this allows the developer to create various distribution campaigns using the same exact sample, but request different ransom amounts depending on how the distributed file is named—no code changes required. When executed, the ransomware will examine the filename and check if it contains certain substrings. Depending on the matched substrings, it will set the ransom to a particular amount.
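The mechanism BleepingComputer describes amounts to a substring lookup. The sketch below is a harmless illustration of that pricing logic with invented markers and amounts; it is not Fantom’s actual code:

```python
# Illustration only: filename-keyed price discrimination, as described for Fantom.
# Markers and amounts are invented; one binary can back many campaigns.
import sys

DEMAND_TABLE = {      # substring in the distributed filename -> demand (BTC)
    "hospital": 10.0,
    "finance": 8.0,
    "invoice": 0.5,
}
DEFAULT_DEMAND = 0.25

def demand_for(filename: str) -> float:
    """Pick a demand amount based on how the distributed file was named."""
    name = filename.lower()
    for marker, amount in DEMAND_TABLE.items():
        if marker in name:
            return amount
    return DEFAULT_DEMAND

print(demand_for(sys.argv[0]))  # same sample, different demand per campaign
```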

Businesses beware
The news is salt in the wound for businesses, which have already been targeted by ransomware at a growing pace with higher price demands. A 2016 Symantec survey found that while consumers account for a slight majority of ransomware attacks today, the long-term trend shows a steady increase in attacks on organizations.

Those most vulnerable? Healthcare and financial organizations, according to a 2016 global ransomware survey by Malwarebytes. Both industries were targeted well above the average 39 percent ransomware penetration rate. Over a one-year period, healthcare organizations were targeted the most at 53 percent penetration, with financial organizations a close second at 51 percent.

And while one-third of ransomware victims face demands of $500 or less, large organizations are being extorted for larger sums. Nearly 60 percent of all enterprise ransomware attacks demanded more than $1,000, and more than 20 percent asked for more than $10,000, according to the Malwarebytes survey.

A highly publicized five-figure ransom was demanded of the Los Angeles-based Hollywood Presbyterian Medical Center in February. A ransomware attack disabled access to the hospital’s network, email and patient data. After 10 days of major disruption, hospital officials paid the $17,000 (40-bitcoin) ransom to get their systems back up. Four months later, the University of Calgary paid $20,000 CDN in bitcoins to get its crippled systems restored.

Now with a new price-discrimination Fantom on the loose, organizations can expect to be held hostage for even higher ransoms in the future.

Cyber Security Tip for CISOs: Beware of Security Fatigue

By Susan Richardson, Manager/Content Strategy, Code42

What’s the most effective thing you can do for cyber security awareness? Stop talking about it, according to a new study that uncovered serious security fatigue among consumers. The National Institute of Standards and Technology study, published recently, found many users have reached their saturation point and become desensitized to cyber security. They’ve been so bombarded with security messages, advice and demands for compliance that they can’t take any more—at which point they become less likely to comply.

Security fatigue wasn’t even on the radar
Study participants weren’t even asked about security fatigue. It wasn’t until researchers analyzed their notes that they found eight pages (single-spaced!) of comments about being annoyed, frustrated, turned off and tired of being told to “watch out for this and watch out for that” or being “locked out of my own account because I forgot or I accidentally typed in my password incorrectly.” In fact, security fatigue was one of the most consistent topics that surfaced in the research, cited by 63 percent of the participants.

The biases tied to security fatigue
When people are fatigued, they’re prone to fall back on cognitive biases when making decisions. The study uncovered three cognitive biases underlying security fatigue:

  • Users believe they are personally not at risk because they have nothing of value—i.e., who would “want to steal that message about how I made blueberry muffins over the weekend.”
  • Someone else, such as an employer, a bank or a store, is responsible for security and will protect users if they are targeted—i.e., it’s not my responsibility.
  • No security measures will really make a difference—i.e., if Target and the government and all these large organizations can’t protect their data from cyber attacks, how can I?

The repercussions of security fatigue
The result of security fatigue is the kind of online behavior that keeps a CISO up at night. Fatigued users:

  • Avoid unnecessary decisions
  • Choose the easiest available option
  • Make decisions driven by immediate motivations
  • Behave impulsively
  • Feel a loss of control

What can you do to overcome employee security fatigue?
To help users maintain secure online habits, the study suggests organizations limit the number of security decisions users need to make because, as one participant said, “My [XXX] site, first it gives me a login, then it gives me a site key I have to recognize, and then it gives me a password. If you give me too many more blocks, I am going to be turned off.”

The study also recommends making it simple for users to choose the right security action. For example, if users can log in two ways—either via traditional username and password or via a more secure and more convenient personal identity verification card—the card should show up as the default option.

The Dyn Outage and Mirai Botnet: Using Yesterday’s Vulnerabilities to Attack Tomorrow’s Devices Today

By Jacob Ansari, Manager, Schellman

On Oct. 21, Dyn, a provider of domain name services (DNS)—an essential function of the Internet that translates names like www.schellmanco.com into numerical IP addresses—went offline after a significant distributed denial of service (DDoS) attack affected Dyn’s ability to provide DNS services to major Internet sites like Twitter, Spotify, and GitHub. Initial analysis showed that the DDoS attack made use of Mirai, malware that takes control of Internet of Things (IoT) devices for the purposes of directing Internet traffic at the target of the DDoS attack. Commonly referred to as botnets, these networks of compromised devices allow for the distributed version of denial of service attacks; the attack traffic comes from a broad span of Internet addresses and devices, making the attack more powerful and more difficult to contain.
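For readers who have not had to think about this dependency before: resolving a name is a one-line operation, and it is exactly this step that failed for users during the attack, even while the target sites’ own servers stayed healthy. A minimal illustration:

```python
# What DNS provides: translating a hostname into the IP address that traffic
# actually flows to. If the authoritative DNS provider is unreachable, this
# lookup raises socket.gaierror even though the web server itself may be fine.
import socket

print(socket.gethostbyname("www.schellmanco.com"))
```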

Mirai is not the first malware to target IoT devices for these purposes, and security researchers have found numerous security vulnerabilities in all manner of IoT devices, including cameras, kitchen appliances, thermostats, and children’s toys. The author of the Mirai code, however, published the full source code online, allowing attackers with only a modicum of technical capability to use it to hijack IoT devices and create potentially significant DDoS attacks. Still, the core of the issue remains the fundamental insecurity of IoT devices.

While IoT device manufacturers might face complicated security challenges from working in new environments or with the kinds of hardware or software constraints not seen on desktop systems or consumer mobile devices, the reality, at least for now, is that IoT devices have the kinds of security weaknesses the rest of the Internet learned about 20 years ago: default administrative accounts, insecure remote access, and out-of-date and vulnerable software components. Researchers have found that they can remotely control IoT devices, such as baby monitors or even automobiles, extract private data from the mobile apps used to interface with devices, or cause damage to other equipment the IoT device controls, such as harming a furnace by toggling the thermostat on and off repeatedly.

Ultimately, defending against DDoS attacks has a few components. ISPs and carriers bear some responsibility to identify these kinds of attacks and take the actions that only they can take. Security and Internet services like Dyn or companies that provide DDoS mitigation will need to scale up their capabilities to address greater orders of magnitude in the attacks they could face. But for IoT-based botnet attacks, the lion’s share of responsibility falls on IoT device manufacturers, who have a lot of catching up to do on good security practice for the devices and applications that they provide.

References:
rollbar.com/blog/dns-ddos-postmortem/
arstechnica.com/information-technology/2016/10/inside-the-machine-uprising
threatpost.com/dyn-confirms-ddos-attack-affecting-twitter-github-many-others/121438/

To Include or Not to Include – Scoping ISO 27001 and Colocation Service Providers

By Ryan Mackie, Principal and ISO Certification Services Practice Director, Schellman

Introduction
ISO/IEC 27001:2013 (ISO 27001) certification is becoming more of a conversation in most major businesses in the United States. To provide some depth: according to the most recent ISO survey, the number of ISO 27001 certificates maintained globally grew 20% from 2014 to 2015.

As for North America, ISO 27001 certificates grew 78% over the 2014 figure, clear evidence that the compliance effort known as ISO 27001 is making its imprint on organizations in the United States. However, it is just the beginning. Globally, there are 27,563 ISO 27001 certificates maintained, of which only 1,247, about 4.5%, are maintained in the United States.

As the standard makes its way into boardroom and compliance department discussions, one of the first questions is the scope of the effort. This short narrative discusses something that we, as an ANAB- and UKAS-accredited ISO 27001 certification body, deal with often: current clients or prospects asking how to scope their ISO 27001 information security management system (ISMS), and specifically how to handle third-party data centers or colocation service providers.

Scenario
Consider an organization that is a software as a service (SaaS) provider with customers throughout the world. All operations are centrally managed out of one location in the United States, but to meet the needs of global customers, the organization has placed its infrastructure in colocation facilities in India, Ireland, and Germany. It has a contractual requirement to obtain ISO 27001 certification for its SaaS services and is now starting from the ground up. First things first: it needs to determine what its scope should be.

Considerations
It is quite clear that, given the scenario above, the scope will include the SaaS offering. As with any ISO 27001 effort, the ISMS will encompass the full SaaS offering, ensuring that the right people, processes, procedures, policies, and controls are in place to meet confidentiality, integrity, and availability requirements as well as regulatory and contractual requirements. When determining the reach of the control set, organizations typically start with the straightforward elements: the technology stack, the operations and people supporting it, its availability and integrity, and the supply chain behind it. This example organization is no different, but it struggles with how to handle its colocation service providers. Ultimately, there are two options: inclusion and carve-out.

Inclusion
The organization can include the sites in the scope of its ISMS. The key benefit is that the locations themselves would appear on the final certificate. But an organization cannot include another organization's controls within its ISMS scope, as it bears no responsibility for the design, maintenance, and improvement of those controls in relation to the risk associated with the services provided.

Including a colocation service provider is therefore no different from including rented office space in a multi-tenant building. The organization is responsible for and maintains the controls inside its own boundaries, while all other controls remain the landlord's responsibility. The controls within the rented space of the colocation service provider would be considered relevant to the scope of the ISMS. These controls would be limited, understandably so given their already very low risk, but they would still need to be assessed. That means an onsite audit would be required, so that any location included within the scope, and ultimately on the final certificate, has the proper controls in place and has been physically validated by the certification body.

As a result, including these locations would allow them to appear on the certificate, but would require the time and cost necessary to audit them (albeit a limited assessment, focused only on those controls the organization is responsible for within its rented space at the colocation service provider).

Carve-out
The organization can instead choose to carve out the colocation service provider locations. Compared with the inclusion method, this is by far the cheaper option, since onsite assessments are not required. More reliance is placed on the controls supporting the Supplier Relationships control domain in Annex A of ISO 27001, although these controls are critical under both the inclusion and carve-out methods. The downside of this option is that the locations cannot appear on the final ISO 27001 certificate (as they were not included within the scope of the ISMS), and it may take additional conversations with customers to explain that, although those locations were not physically assessed as part of the audit, the logical controls of the infrastructure sited within them were in scope and were tested.

Conclusion
Ultimately, it is a clear business decision. Nothing in the ISO 27001 standard requires particular locations to be included within the scope of the ISMS, and the organization is free to scope its ISMS as it sees fit. Additionally, unlike other compliance efforts (such as AICPA SOC examinations), there is no required assertion from the third party regarding its controls, because the ISMS by design includes no controls outside the responsibility of the organization being assessed. However, the organization should keep the final certificate in mind and ask whether it will be fully accepted by the audience receiving it. Does the cost of an onsite audit justify including these locations, or is the justification simply not there?

If this scenario applies to your situation or scoping, Schellman is happy to talk through the benefits and drawbacks of each option so that you can head into the certification audit with scoping confidence.

Defeating Insider Threats in the Cloud

By Evelyn de Souza, Data Privacy and Security Leader, Cisco Systems  and Strategy Advisor, Cloud Security Alliance

Everything we know about defeating the insider threat seems not to be solving the problem. In fact, evidence from the Deep, Dark, and Open Web points to a problem that is getting considerably worse. Today's employees work with any number of applications, and with a few clicks information can be leaked, both maliciously and accidentally.

The Cloud Security Alliance has been keen to uncover the extent of the insider threat problem, in keeping with its overall mission of providing security assurance and education for cloud computing.

As a follow-up to the Top Threats in Cloud Computing report, over recent months we surveyed close to 100 professionals on the extent of the following:

  • Employees leaking critical information and tradecraft on illicit sites
  • Data types and formats being exfiltrated along with exfiltration mechanisms
  • Why so many data threats go undetected
  • What happens to the data after it has been exfiltrated
  • Tools to disrupt and prevent the data exfiltration cycle
  • Possibilities to expunge traces of data once exfiltrated

We asked some difficult questions that surprised our audience and that many were hard pressed to answer. We wanted a clear picture of the extent of knowledge and where the gaps lay. We hear lots of talk about threats to the cloud and the challenges organizations face in addressing them. And, in the wake of emerging data privacy regulation, we see much discussion about ensuring compliance. However, the results of this survey show a gap in dealing with both present and future requirements for data erasure in the cloud. And despite the fact that accidental insider threats and misuse of data are common, there is a distinct lack of procedure for dealing with such incidents across cloud computing.

To provide insights into what happens to data after it has been exfiltrated, we partnered with LemonFish to obtain their unique perspective. Download the Cloud Security Alliance survey report for the full findings.

Everything You’ve Ever Posted Becomes Public from Tomorrow

By Avani Desai, Executive Vice President, Schellman & Co.

As I sit here, ironically just wrapping up a privacy conference and scrolling my Facebook wall, I am seeing dozens of posts from smart, professional, aware people, all posting an apparent disclaimer to Facebook in an attempt to protect their personal privacy from a new Facebook privacy policy. This disclaimer, which cites "UCC 1 1-308-308 1-103" and the Rome Statute, is in fact a hoax. It first surfaced in 2012 but is making the rounds again. The post encourages users to share a Facebook status that supposedly makes them immune from Facebook sharing any of the data they upload to the platform.

As I read my Facebook wall, I realized this isn't new; these disclaimers have the same tone as the old chain letters with their stark warning, "DEADLINE tomorrow." I suddenly got flashbacks to 1980, when my mother would walk in the door after checking the mail, her face full of terror, holding a chain letter in her hand. She would sit down at the dining room table, frantically writing the same letter over and over to make sure our family avoided famine. This Facebook hoax is the 2016 version of the chain letter, minus the hand cramps.

My first reaction, as a privacy professional, was to scream at my screen. My second was to write on every single one of their walls and explain the concept of opt-in vs. opt-out and the use of Facebook privacy settings. My third, after my initial annoyance subsided, was to educate: to explain to Facebook users what level of privacy they should expect from a platform like Facebook.

In our society today, we fortunately have a heightened awareness of personal privacy online; we care about what people and organizations do with our personal data. This is especially true in the post-Snowden era. Yet our urge is to share, and overshare; it is a human instinct. We sternly tell our children and our employees, "Think before you post on social media... anything you post today can be seen years from now" and "Nothing is deleted in the technology era." We question the government when there is a breach, and we diligently check our credit reports to make sure we are not victims of identity theft. This increased awareness of security and privacy is borne out by industry analysts like Forrester, who have seen a sea change in attitudes toward privacy as people become more aware of the issues surrounding the sharing of personal data on social platforms. This Facebook "chain-disclaimer" proves how passionate the public is about its privacy.

However, a fundamental understanding of online privacy is still lacking, since many educated people believe that you can share, share, share, and that simply pasting a short statement will fully protect you. This leaves us with a question: why doesn't the mainstream user understand privacy? There are a number of possible reasons. I have attempted to highlight some of them here from a technical viewpoint, though I am sure sociologists, anthropologists, and psychologists could offer more insights.

  1. Privacy policies are only for the lawyers. Privacy, and the policies that shore it up, are written in legalese the average person cannot understand. If you are like most people, you click through those policies, hitting next, next, and next until you see submit, so you can go on your merry way using your new program, software, or service. I look forward to the day when companies offer an abridged version of their privacy policy so you can really understand what you are agreeing to. On the positive side, there have been a number of campaigns by industry leaders, such as the International Association of Privacy Professionals (IAPP), encouraging a more user-friendly approach to privacy policy writing, so those CliffsNotes may not be too far off.
  2. Opt-in and opt-out aren't as clear as they should be. Good privacy practice is to always offer opt-in. The U.S., however, is an "uncheck the box" country: companies will often have the box checked for you, and let's face it, we all skim through text and don't read the fine print, in which case you'll probably have opted into a wide variety of communication that effectively becomes spam.
  3. Breaches get a lot of media attention, but prevention isn't top of mind for individuals. We need more education on how to protect our personal data and on understanding who has access to it and what can be done with it.

So what can you do to be a good digital citizen?
Mostly it’s about being aware:

  1. Privacy aware – Use, update, and care about your privacy settings. They exist to let you choose what you want to share and with whom; they tell the hosting organization, e.g. Facebook, what to share and with whom. Putting a privacy disclaimer on your wall, or in an email, spoof or not, will have no effect on what the hosting platform shares.
  2. Spam aware – Fact-check before spreading the good word. If it is on the Internet, even from a reputable source, it may not be true. Remember those "Nigerian prince" emails? Of course he was neither Nigerian nor a prince, but rather a popular email scam. Or remember that email from your mom telling you she is stuck on some island without her passport and needs $10,000? A quick check on snopes.com will usually tell you whether a story is true.
  3. Spoof aware – Don't share links or "like" things on Facebook to win prizes. When you see Disney giving away free cruises, Target offering you a $500 gift card, or Bill Gates promising to send you $10 for every share a post gets, put on your logical cap; these offers are almost certainly too good to be true. Offers like these are typically after personal information, access to your social profile, or a way to spread dangerous links to your friends as part of a social engineering attack.

At the end of the day, the Facebook privacy disclaimer hoax is a lesson for all of us on personal privacy. Social media spreads information like wildfire, and the more we rely on digital venues to get our news, share updates and pictures with our families, and do professional work, the more diligent we have to be in understanding what privacy is and the impact it can have. In the meantime, please, please, please go delete that paragraph-long status from your wall and post a picture of your cute kids instead!

Five Prevention Tips and One Antidote for Ransomware

By Susan Richardson, Manager/Content Strategy, Code42

During National Cyber Security Awareness Month, understanding the ins and outs of ransomware seems particularly important, given the startling growth of this malware. In this webinar on ransomware hosted by SC Magazine, guest speaker John Kindervag, vice president and principal analyst at Forrester, talks about what ransomers are good at and offers best practices for hardening defenses. Code42 System Engineer Arek Sokol is also featured as a guest speaker, defining continuous data protection as a no-fail solution that assures recovery without paying the ransom.

The art of extortion
Kindervag says ransomers are good at leveraging known vulnerabilities when organizations are slow to patch. They are also excellent phishermen, posing skillfully as trusted brands to lure their prey; collaborative entrepreneurs who learn and share information; and enthusiastic teachers, eager to show the unschooled how to pay in bitcoin.

Like Pearl Harbor, Kindervag says, the day the enterprise gets hit with across-the-board ransomware will live in infamy—unless the organization has planned for the event with effective backup.

Kindervag advises the following to prevent the delivery of ransomware:

  1. Prioritized patch management to avoid poor security hygiene that puts computer systems at risk.
  2. Email and web content security that includes effective anti-spam, gray mail categorization, and protection for employees against poisoned attachments.
  3. Improved endpoint protection with key capabilities that include prevention, detection and remediation, USB device control to reduce the ransomware infection vector, and isolation of vulnerable software through app sandboxing and network segmentation.
  4. Hardening network security with a zero trust architecture, in which any entity (users, devices, applications, packets, etc.) requires verification regardless of its location on or relative to the corporate network, preventing the lateral movement of malware.
  5. A focus on clean, effective backups (a minimal sketch of the idea follows this list).
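
To make that last recommendation concrete, here is a minimal Python sketch of the versioned, off-site copy idea; the paths are hypothetical placeholders, and a real product such as continuous endpoint backup adds change detection, deduplication, and encryption on top of this:

    import shutil
    import time
    from pathlib import Path

    # Copy every file to a separate backup volume under a timestamped
    # name, so an encrypted original never overwrites the last good copy.
    # SOURCE and DEST are hypothetical placeholders; the directory
    # structure is flattened here for brevity.
    SOURCE = Path("/home/user/documents")
    DEST = Path("/mnt/offsite-backup")

    DEST.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    for path in SOURCE.rglob("*"):
        if path.is_file():
            shutil.copy2(path, DEST / "{}.{}".format(path.name, stamp))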

The ransomware antidote
Following Kindervag's presentation on hardening defenses, Sokol reports on the share of businesses hit by ransomware in 2015 (47 percent) and the share of incidents that come through the endpoint (78 percent). He also dispels the misconception that file sync and share is a substitute for, rather than a complement to, endpoint backup.


During the webinar, Sokol demonstrates the extensibility of modern, continuous, cross-platform endpoint backup. He describes the efficacy of endpoint backup in recovering data after ransomware or a breach, its utility in speeding and simplifying data migration, and its ability to visualize data movement, thereby identifying insider threats when employees leak or take confidential data. Don't miss it.


Happy Birthday to… Wait, Who’s This Guy?

By Jacob Ansari, Manager, Schellman

How many arbitrarily chosen people do you have to get into a room before two of them share the same birthday? Probability theory has considered this problem for so long that no one is quite certain who first posed the so-called "birthday problem" or "birthday paradox." What we do know is that it takes far fewer people than we might guess. In fact, there is a 50% chance that two people will share a birthday (month and day) in a group of only 23, and the probability rises to about 99% with just 57 people.
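
The arithmetic is easy to verify. This short Python sketch multiplies out the probability that all birthdays in a group are distinct, assuming 365 equally likely birthdays:

    # Probability that at least two of n people share a birthday,
    # assuming 365 equally likely birthdays and no leap days.
    def shared_birthday(n):
        p_distinct = 1.0
        for i in range(n):
            p_distinct *= (365 - i) / 365.0
        return 1.0 - p_distinct

    print(round(shared_birthday(23), 3))  # ~0.507, just over 50%
    print(round(shared_birthday(57), 3))  # ~0.990, about 99%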

Beyond awkward situations about who gets the first slice of cake, this idea has applications in cryptography and security. The short of it is that things that seem unpredictable or unlikely are often much more likely than we would think. For a security system based on random numbers and unpredictability, this can pose dangerous problems. Researchers from the French Institute for Research in Computer Science and Automation (INRIA) recently published work showing significant weaknesses, with practical exploits, in 64-bit block ciphers, particularly 3DES and Blowfish, in their most common uses in HTTPS and VPN connections.

Most modern ciphers that use a symmetric key (a key that both parties need in order to encrypt and decrypt messages) are what cryptographers call "block ciphers": they encrypt blocks of data rather than bit by bit. Often the block length is the size of the key, but in some cases it isn't. So 3DES, which performs three cryptographic operations using 64-bit blocks and 64-bit keys (technically 56-bit keys, with eight bits used for error checking), divides its message into 64-bit segments and encrypts each one. With a 64-bit key, an exhaustive attack would potentially need to try 2^64 guesses at the key value to see whether it could decrypt the encrypted message (this is what we call a brute-force attack). The birthday paradox, however, points to a much cheaper line of attack.

In practice, however, block ciphers use what are called modes of operation, which link blocks of a message together. In these situations, with a 64-bit block length, encrypting more than 2^(block length/2), or 2^32, blocks of data under a single key presents a well-known cryptographic danger: the operation will inevitably repeat enough data for patterns to emerge and for an attacker to recover portions of the plaintext from those patterns. Good design therefore prevents more than 2^32 blocks of data from being encrypted with the same key, and cryptographers refer to this limit as the birthday bound.
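
The same collision arithmetic, sketched in Python, shows where the roughly 32GB figure in the next paragraph comes from:

    # Birthday bound for a 64-bit block cipher: repeated blocks become
    # likely after roughly 2**(block_bits / 2) blocks under one key.
    block_bits = 64
    bound_blocks = 2 ** (block_bits // 2)           # 2**32 blocks
    bound_bytes = bound_blocks * (block_bits // 8)  # 8 bytes per block
    print(bound_bytes / 2 ** 30)                    # 32.0 (GiB of ciphertext)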

This attack goes from theoretical to practical in two significant applications: HTTPS using 3DES (typically with TLS 1.0 or earlier) and OpenVPN, which uses Blowfish (another 64-bit block cipher) as its default cipher. With 64-bit blocks, the birthday bound works out to approximately 32GB of data transfer, which a reasonably fast connection can handle in about an hour, so collecting that much traffic and attacking it is an entirely reasonable prospect. Further, modern HTTPS and VPN connections often keep a session, and thus the same key, alive for long periods, making the attack all the more practical and effective.

Ultimately, the solution to this kind of attack is to replace 64-bit block ciphers with 128-bit block ciphers like AES. In many cases the capability to do so already exists, and organizations facing this threat can make the change with reasonable expedience. In some cases, particularly when supporting legacy connections such as TLS 1.0 and the corresponding 3DES cipher suites, the change is more complicated. While many organizations have already moved to more secure block ciphers, others have compatibility and legacy support issues. These kinds of advances in attacks make those transitions all the more urgent.
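
Where the software stack allows it, the fix can be as small as a configuration change. Here is a minimal Python sketch, assuming Python 3.7+ and the standard ssl module, of a client-side TLS context that refuses 64-bit block ciphers:

    import ssl

    # Build a TLS context that negotiates TLS 1.2 or later and excludes
    # 3DES via an OpenSSL cipher string; Blowfish is not in standard TLS
    # suites, so excluding 3DES removes the 64-bit block ciphers.
    context = ssl.create_default_context()
    context.minimum_version = ssl.TLSVersion.TLSv1_2
    context.set_ciphers("HIGH:!3DES:!aNULL")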

Organizations currently in transition should strongly consider accelerating those efforts and eliminating the use of ciphers like 3DES and Blowfish entirely.