New Security Research – The Software-Defined Perimeter for the Cloud

February 13, 2017

By Jason Garbis, Vice President of Products, Cryptzone

On behalf of the Cloud Security Alliance, I’m pleased to announce the publication of our newest security research from the Software Defined Perimeter (SDP) Working Group, exploring how the SDP can be applied to Infrastructure-as-a-Service environments. Thanks to all the people who commented and contributed to this research over the past 10 months, especially Puneet Thapliyal from Trusted Passage.

Cloud adoption has soared over the past few years, and yet recent surveys indicate that security is still a concern. In one Cloud Security Alliance survey, over 67% of respondents indicated that an inability to enforce corporate security standards represents a barrier to cloud adoption, while 61% noted that compliance concerns pose a barrier.

SDP is quickly gaining recognition as the preferred way to deploy services securely. Leading analyst firms are recommending that public-facing services be protected with a new security approach, and are pointing to SDP as a strong alternative to traditional network security solutions.

Enterprises have recognized that SDP can address their concerns about adopting the cloud, but the Software-Defined Perimeter approach is still relatively unknown to many (here is a quick primer on SDP if you need a refresher). Security architects and IT leaders are eager to learn how best to design and deploy SDP-based systems.

As a vendor that offers an SDP solution, and as a leader of the SDP Working Group, we’re happy to share our knowledge and experience. This is why we’ve spent the time and effort, in partnership with other SDP practitioners, to create this new security research outlining how Software-Defined Perimeter applies to IaaS environments.

Security for IaaS is particularly interesting, because it’s a responsibility that’s shared between enterprises and cloud providers, and because IaaS has different (and in some ways more challenging) user access and security requirements than traditional on-premises systems. Our new research focuses on how SDP can be applied to Infrastructure-as-a-Service environments, and explores the following use cases:

  • Secure Access by Developers into IaaS Environment
  • Secure Business User Access to Internal Corporate Application Services
  • Secure Admin Access To Public Facing Services
  • Updating User Access When New Server Instances Are Created
  • Hardware Management Plane Access for Service Provider
  • Controlling Access Across Multiple Enterprise Accounts

This research is now available here – and we look forward to getting your feedback. Please join the SDP Working Group to collaborate.

Finally, now that this research has been published, we’re beginning work to outline more architectures and new applications of the protocol in version 2 of the SDP specification. Please join us if you’re interested in contributing or learning more about that project as well.

3-2-1, Takeoff. The STARWatch Cloud Security Management Application Has Launched

February 13, 2017

By Daniele Catteddu, Chief Technology Officer, Cloud Security Alliance

Compliance, assurance and vendor management are becoming increasingly complex and resource-intensive, so we created STARWatch, a Software-as-a-Service (SaaS) application designed to give organizations a centralized way to manage vendor reviews and maintain the integrity of the assessment process. Today, we’re excited to announce its official launch. Even more exciting, we are emerging from beta with more than 250 active licenses.

STARWatch delivers the content of CSA’s de facto standards, the Cloud Controls Matrix (CCM) and the Consensus Assessments Initiative Questionnaire (CAIQ) v3.0.1, in a database format, enabling users to manage compliance of cloud services with CSA best practices. It was designed to provide cloud users, providers, auditors and security providers with assurance and compliance on demand. Additionally, it provides users the ability to:

  • manage all cloud service providers and their own private clouds to assure a consistent security baseline is maintained;
  • build and maintain a CSA Security Trust and Assurance Registry (STAR) entry and provide customers with rapid responses to their compliance questions;
  • perform audits and assessments of cloud services/provider security;
  • have a clear reference between CCM controls and the corresponding controls in other industry standards;
  • leverage the STARWatch solution database format and technical specifications for integration within an organization’s cloud environment; and
  • enable sharing and peer review of cloud service security assessments.

CSA STARWatch is free to CSA corporate members. Non-members may purchase licenses starting at $3,000 annually for an Expert license and $5,000 annually for an Enterprise license. Learn more about CSA STARWatch.

STARWatch is part of the larger CSA STAR program, the industry’s most powerful program for security assurance in the cloud, which encompasses the key principles of transparency, rigorous auditing, harmonization of standards, and continuous monitoring. Currently there are 230 cloud service providers in the STAR program, which includes STAR Self-Assessment, STAR Certification, STAR Attestation and C-STAR Assessment.

On Data Privacy Day, Keep Your Data Safe by Identifying the Threats

January 30, 2017

By Rick Orloff, Chief Security Officer, Code42

Saturday, January 28th was Data Privacy Day. We’re proud champions of the National Cyber Security Alliance’s focused effort on protecting privacy and safeguarding data. But at Code42, we know that one day isn’t enough. We dedicate an entire month each year to reaffirm our critical role in keeping our customers’ data safe.

This year, we initiated an annual Certified Information Systems Security Professional (CISSP) training program at Code42 and trained staff on the eight domains of the Common Body of Knowledge defined by (ISC)² to earn the coveted credential. We embedded a new tool in our email system for Code42 employees to report phishing attempts. And we hosted a panel discussion with representatives from the FBI and Secret Service to learn more about how they combat cybercrime.

But we’re not here to talk about what we did to keep our data safe. We’re here to talk about what you can do to protect yours. The first step in any cybersecurity strategy: situational awareness.

Your Employees Are Being Targeted: Part One
Your end users, and their devices, represent a very large mobile attack surface. IT and InfoSec professionals spend far too much time cleaning up issues caused by employees who fall for phishing emails, click corrupt links, or engage in careless online behavior. These unintentional “user mistakes” are one of the biggest threats today, causing around 25 percent of data exfiltration events.

Why do users make so many mistakes? To put it simply, most don’t care. They believe that if IT is doing its job, no threats will reach them and they have nothing to worry about. They believe that if they have an error in judgment, or do something foolish, IT will always come to the rescue. They actively ignore security policies and find creative workarounds for security measures they view as an inconvenience.

Your Employees Are Being Targeted: Part Two
It’s one thing for your employees to make mistakes. It’s another for them to deliberately remove data from your organization. Unfortunately, that happens all too often, and it’s part of the reason why 78% of security professionals say insiders are the biggest contributors to data misappropriation.

With your company’s IP making up 80% of its value, the potential damage from a malicious insider threat is enormous. To help spot vulnerabilities, look for “Shadow IT”: the tools and solutions your employees use without explicit organizational approval, which often pose measurable risks. Tools unapproved by your IT department frequently place the data they access at risk, and there is often no overall management of these tools.

The Solution: Backup and Real-time Recovery
I have often said that there are only two types of networks in this world: those that have been breached and those that are being attacked. The fact is, security breaches of varying severity occur at all Fortune 500 companies. If a breach results in being denied access to your data, the C-suite expects IT to get them back up and running. What they are just now learning is that this can be accomplished in mere minutes or hours, without overwhelming support staff. The solution to protecting your company from insider threats, ransomware, or any other cybersecurity issue is real-time recovery on the endpoints.

This is what the FBI has been urging businesses to do for years: regularly back up data and verify the integrity of those backups. It’s equally important to ensure that backed-up files aren’t susceptible to ransomware’s ability to infect multiple sources and backups. Consider these key points:

  1. When endpoints are infected by ransomware, real-time recovery can roll back clean versions of every file, including system files (a simplified sketch of this roll-back logic follows this list).
  2. While other solutions such as File Sync and Share (FSS) programs can sync ransomware to their mirrored copies (as they are designed to do), enterprise endpoint recovery solutions can roll back all files to earlier versions and restore them.
  3. When a device is stolen or damaged, or when an employee leaves with valuable company data, real-time recovery can roll back each and every file on the device. This keeps the business operational and gives the organization options for dealing with the departed employee.
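
To make the roll-back idea concrete, here is a minimal sketch of restoring the newest file version saved before an infection was detected. The in-memory version store and file names are purely illustrative; real endpoint backup products capture versions continuously and keep them off the device.

```python
# A simplified sketch of version roll-back: given point-in-time
# snapshots of each file, restore the newest version saved before
# the infection was detected. The in-memory "version store" is
# purely illustrative.
from datetime import datetime

# path -> list of (timestamp, contents), oldest first
VERSIONS = {
    "report.docx": [
        (datetime(2017, 1, 10, 9, 0), b"clean draft"),
        (datetime(2017, 1, 12, 14, 30), b"clean final"),
        (datetime(2017, 1, 13, 8, 15), b"\x00encrypted-by-ransomware"),
    ],
}

def restore_before(path: str, infected_at: datetime) -> bytes:
    """Return the newest version of `path` saved before `infected_at`."""
    clean = [(ts, data) for ts, data in VERSIONS[path] if ts < infected_at]
    if not clean:
        raise LookupError(f"no pre-infection version of {path}")
    return max(clean)[1]   # tuples sort by timestamp first

print(restore_before("report.docx", datetime(2017, 1, 13, 0, 0)))
# b'clean final'
```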

There are many tools on the market that claim to protect your data, and many indeed do a good job. But a sound cybersecurity policy begins within. You can’t protect your data if you don’t understand where it is and the threats you’re up against.

CSA Releases Quantum-Safe Security Glossary

January 25, 2017

The Cloud Security Alliance’s Quantum-Safe Security (QSS) Working Group announces its latest release, the Quantum-Safe Security Glossary. The QSS Working Group was formed to address key generation and transmission methods and to help the industry understand quantum-safe methods for protecting networks and data. The working group is focused on long-term data protection amidst a climate of rising cryptanalysis capabilities. As the working group continues to produce documents addressing security concerns in a quantum world, this glossary provides a shared vocabulary and a starting point for learning more about quantum-safe security.

This glossary is a collective contribution of the QSS Working Group to increase quantum-safe security awareness, and compiles common terms used in the world of quantum-safe cryptography. The document was created with the working group’s input and went through an open peer review for collaboration and completeness. Quantum-safe cryptography is a fast-moving field, however, so the QSS Working Group plans to update this document periodically. For more information on the Quantum-Safe Security Working Group, please visit https://cloudsecurityalliance.org/group/quantum-safe-security/.

STAR – A Window to the Cloud

January 20, 2017

By Raj Samani, Chief Technology Officer/EMEA, Intel Security

We are all going to live in the cloud. Well, that is what every study and forecast tells us. From our Clash of Clans villages to our connected cars, we can expect all of our data to be hosted in an unmarked data center in a town we have never heard of. Perhaps this is a slight exaggeration, but the reality is that many of us simply have no idea where our data will be stored, and even if we are given the name of a physical location, we have little insight into the operational procedures, staff vetting, or physical security employed there. This old chestnut is described as the lack of transparency, but the truth is that cloud service providers are transparent, so long as you ask the question.

It sounds simple, and indeed, by all accounts, major providers have entire teams dedicated to just that: answering questions from potential customers about the security controls deployed on site. Such a process, however, is incredibly inefficient, and reminds me of how insurance used to work. I remember getting the telephone book and flicking to the section titled insurance. You would phone as many providers as you could, answering questions about your car in order to find the most competitive quote. With every call, you felt a small part of your youth ebbing away as your tolerance for small talk diminished. In the end you were met with a saving of eleven pounds for three hours’ work. Of course it was worth it, wasn’t it?

Every element of our industry is met with a similarly fragmented approach. Want a quote for staff training? Do a Google search and contact every training company you have the patience to reach. Differentiating commoditized offerings such as insurance on price is simple, but deciding which company you want to host all of your corporate data is a different matter.

It is for this reason that the Cloud Security Alliance, and in particular its Security, Trust & Assurance Registry (STAR), is such a valuable resource. The program encompasses the key principles of transparency and validation of the security posture of cloud offerings. It includes a complimentary, publicly accessible registry that documents the security controls provided by popular cloud computing offerings, designed to help users of cloud services assess their cloud providers, security providers, and advisory and assessment firms in order to make the best procurement decisions. In one place, potential cloud customers can gain insight into the security maturity of multiple providers. Recognizing the need for greater transparency, we are pleased to confirm that Intel Security has achieved STAR certification for McAfee ePolicy Orchestrator Cloud and will add other offerings as they come online.

It is not a question of whether the cloud will be ubiquitous, but whether we can ensure that the data centers holding every detail of our business or personal lives have the appropriate level of protection. The STAR initiative is integral to providing a foundation for anybody considering such services, and the CSA has been at the forefront of cloud security.

So if you are considering outsourcing your work, make sure that STAR is your first port of call, and consider attending the CSA Summit at RSA this year on February 13, where I will be sharing my thoughts on “Security in the Cloud: Evolution or Revolution?”

People Are Not IP Addresses…So Why Do Security Solutions Think They Are?

January 18, 2017

By Jason Garbis, Vice President of Products, Cryptzone

Attackers are erasing the contents of exposed MongoDB databases and replacing them with notes demanding a Bitcoin ransom payment for restoration. It also appears that victims who pay are often not getting their data back, and that multiple attackers are overwriting each other’s ransom demands. These databases are of course important to their owners, and these attacks are clearly a serious headache for them. Hopefully they have backups.

Let’s explore this situation a bit more, and then step back for some analysis.

Here’s What We Know

There is no indication of a vulnerability in MongoDB; rather these systems are allowing administrative access from any IP address, and are (mis)configured for either no authentication or default credentials. There are a large number of such systems – Internet service search engines show approximately 100,000 exposed instances, and several independent security researchers have identified over 27,000 instances that have been hijacked as of January 8, a number that’s growing daily.
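
If you run MongoDB yourself, a quick way to check for this misconfiguration is to see whether an unauthenticated client can list databases. Here is a minimal sketch using pymongo; the hostnames are placeholders, and you should only probe systems you own.

```python
# Probe a MongoDB instance you own to see whether it accepts
# unauthenticated access. Hostnames are placeholders.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

def is_open_to_anonymous(host: str, port: int = 27017) -> bool:
    client = MongoClient(host, port, serverSelectionTimeoutMS=3000)
    try:
        # Listing database names requires authorization on a properly
        # secured server, so success here means anonymous access works.
        client.list_database_names()
        return True
    except OperationFailure:
        return False          # auth required: good
    except ServerSelectionTimeoutError:
        return False          # unreachable: nothing to report
    finally:
        client.close()

if __name__ == "__main__":
    for h in ["db1.example.internal", "db2.example.internal"]:
        print(h, "OPEN" if is_open_to_anonymous(h) else "secured/unreachable")
```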

Putting aside the mistaken configuration that enabled access with no/weak authentication, let’s look at this from a user access and network perspective. At the risk of being too obvious, these systems are Internet-facing either intentionally or unintentionally. If intentional, their admins clearly require remote access, and therefore these systems must expose some network service.

“People are not IP addresses!”
— Jason Garbis, Vice President of Products at Cryptzone

The problem comes down to how access is restricted – and a realization that relying solely on authentication is not enough. Too many systems are either misconfigured (as appears to be the case with these MongoDB instances) or are subject to vulnerabilities – enterprises need to limit access at the network level. The issue is that network security tools are built around controlling access by IP address, yet the problem we need to solve is how people (identities) access these systems. And people are not IP addresses!

If these databases were unintentionally exposed to the Internet, then no remote access is required – either admins have local system access, or they’re relying on another security mechanism such as being on a LAN or accessing the network through a VPN. Yet, these systems are exposed directly to the Internet, and therefore not likely on an internal corporate network. Looking at the discovered instances on Shodan, it appears that many of them have IP addresses associated with cloud or hosting providers!
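
As an aside, exposure counts like the ones above can be reproduced with the Shodan API. Below is a hedged sketch using the official shodan Python library; it assumes you have your own API key, and the query filter shown is illustrative.

```python
# Reproduce rough exposure counts via the Shodan API. Requires your
# own API key; "product:MongoDB" is an illustrative query filter.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

# Count matching banners without downloading result pages.
total = api.count("product:MongoDB")["total"]
print("Exposed MongoDB instances:", total)

# Facet by organization to see how much sits with cloud/hosting providers.
by_org = api.count("product:MongoDB", facets=[("org", 10)])
for bucket in by_org["facets"]["org"]:
    print(bucket["value"], bucket["count"])
```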

This is an interesting pattern. Because cloud network access is managed by IP addresses, users may be simply setting their cloud network security groups to permit access from anyone on the internet – much to their detriment, as this attack shows.
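
On AWS, for example, this misconfiguration is visible in the security groups themselves. Here is a sketch using boto3 to flag groups that open MongoDB’s default port to the world; it assumes credentials and region come from your environment.

```python
# Audit AWS security groups for rules that expose MongoDB's default
# port (27017) to the whole internet.
import boto3

ec2 = boto3.client("ec2")

def find_world_open_mongo_groups():
    flagged = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for perm in sg.get("IpPermissions", []):
                from_p = perm.get("FromPort")
                to_p = perm.get("ToPort")
                # A missing FromPort means the rule covers all ports.
                covers_mongo = (from_p is None or
                                (from_p <= 27017 and (to_p or from_p) >= 27017))
                open_to_world = any(r.get("CidrIp") == "0.0.0.0/0"
                                    for r in perm.get("IpRanges", []))
                if covers_mongo and open_to_world:
                    flagged.append(sg["GroupId"])
    return flagged

print("Security groups exposing 27017 to the internet:",
      find_world_open_mongo_groups())
```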

Clearly, misconfiguring a database to not require authentication is a problem, but there are many exploits that exist even in properly secured and properly configured systems. It’s time to realize that the bigger problem is in allowing unauthorized users to have network access to these systems in the first place. Why are there 100,000 instances of MongoDB available for a public scan? I suggest that most of these were not intended for public access.

The ability to access a service on the network is a privilege, and it must be treated as such. The principle of least privilege demands that we prevent unauthorized users from scanning, connecting to, or accessing our services. Following this principle will dramatically reduce the ability of attackers to exploit misconfigurations or vulnerabilities.

But there’s a problem. There is a disconnect between how we need to model users – as people – and our network security systems, which are centered on IP addresses. And, to repeat myself, people are not IP addresses.

Let’s Bring This Together

Organizations need to secure network access in an identity-centric way, and in a way that’s driven by automated policies so that users – who are people – get appropriate access. Network security systems must be able to do this, and allow us to easily limit user access to the minimum necessary.

The good news is that this is achievable today. The Software-Defined Perimeter (SDP) – an open specification published by the Cloud Security Alliance – defines a model where network access is controlled in an identity-centric way. Every user obtains a dynamically adjusted network perimeter that’s individualized based on their specific requirements and entitlements. The Software-Defined Perimeter is well-suited to cloud environments; network services such as MongoDB can be easily protected by SDP network gateways.

With SDP, organizations can easily define policies that control which users get access to these database instances, and prevent all unauthorized users from scanning or accessing these services – even if they’re misconfigured and don’t require authentication. And, because this access is built around users, not IP addresses, authorized users can securely access these systems from anywhere, with strong authentication enforced at the network level.
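
To make the identity-centric model concrete, here is a toy policy evaluator in the spirit of SDP. The names and the policy structure are hypothetical; a real SDP deployment enforces such entitlements at the network layer through a controller and gateways.

```python
# Identity-centric, default-deny access control in the spirit of SDP:
# a toy policy evaluator. All names and the policy structure are
# hypothetical illustrations.
from dataclasses import dataclass

@dataclass(frozen=True)
class User:
    name: str
    roles: frozenset

# Map roles to the only services they may even see on the network.
POLICY = {
    "dba":       {"mongodb.internal:27017"},
    "developer": {"git.internal:22", "ci.internal:443"},
}

def allowed_services(user: User) -> set:
    """Union of entitlements for the user's roles; empty by default."""
    granted = set()
    for role in user.roles:
        granted |= POLICY.get(role, set())
    return granted

def may_connect(user: User, service: str) -> bool:
    # Default deny: anything not explicitly granted stays invisible,
    # so unauthorized users cannot even scan the service.
    return service in allowed_services(user)

alice = User("alice", frozenset({"dba"}))
print(may_connect(alice, "mongodb.internal:27017"))  # True
print(may_connect(alice, "ci.internal:443"))         # False (denied)
```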

We’ll never be completely safe in our hyper-connected world, but we’re unnecessarily making things harder for ourselves, as this latest attack shows. We need to take a new, identity-centric approach to network security, and the Software-Defined Perimeter model provides exactly this. Putting this in place will go a long way towards making our systems more secure while keeping our users productive.

Windows 10 Steps Up Ransomware Defense

January 17, 2017

By Jeremy Zoss, Managing Editor, Code42

Here’s some good news for the countless businesses getting ready for the migration to Windows 10: Microsoft recently announced that its Windows 10 Anniversary Update features security updates specifically targeted to fight ransomware. No defense is completely hack-proof, but it’s great to see the biggest names in the tech world are putting ransomware at the top of their list of concerns.

Patching holes, preventing users from “clicking the link”
Microsoft released a guide on how the latest Windows 10 Anniversary Update specifically enhances protection against ransomware. The company focused on eliminating the vulnerabilities hackers have exploited in the past, and says its updated Microsoft Edge browser has no known successful zero-day exploits or exploit kits to date.

The company says its smart email filtering tools helped identify some 58 million attempts to distribute ransomware via email—in July 2016 alone. But what if a phishing email does reach gullible and mistake-prone end users? Microsoft says it has invested in improving its SmartScreen URL filter, which builds a list of questionable or untrustworthy URLs and alerts users should they click on a link to a “blacklisted” domain.
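
SmartScreen’s internals aren’t public, but conceptually a URL reputation filter reduces to normalizing each requested URL and checking its host against a continuously updated blocklist, as in this illustrative sketch (the domains are made up).

```python
# Conceptual URL reputation check: normalize the URL, then match its
# host against a blocklist. Blocklist contents are made up.
from urllib.parse import urlparse

BLOCKLISTED_DOMAINS = {"evil-payload.example", "fake-invoice.example"}

def is_blocked(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Match the domain and any of its subdomains.
    return any(host == d or host.endswith("." + d)
               for d in BLOCKLISTED_DOMAINS)

for link in ["https://evil-payload.example/doc.zip",
             "https://mail.fake-invoice.example/open?id=1",
             "https://example.com/safe"]:
    print(link, "->", "BLOCKED" if is_blocked(link) else "allowed")
```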

Thanks to security upgrades, Microsoft says Windows 10 users are 58 percent less likely to encounter ransomware than those running Windows 7.

Better threat visibility for IT
On the response end, the Windows 10 Anniversary Update also sees the launch of the Windows Defender Advanced Threat Protection (ATP) service. The basic idea behind Windows Defender ATP is to use contextual analytics of network activity to spot signs of attacks that other security layers miss. Microsoft says the new service gives “a more holistic view of what is attacking the enterprise…so that enterprise security operations teams can investigate and respond.” Better visibility into your users’ activities—now that’s something we at Code42 can get behind.

Using the intelligence of the “hive mind” to fight ransomware
One impediment to the fight against ransomware has been organizations’ reluctance to share information on attacks, both attempted and successful. We already know that new strains of ransomware emerge daily, but without this shared knowledge, even older strains are essentially new and unknown (and thus remarkably effective) to most of the enterprise world. The sheer size and market share of Windows puts Microsoft in a unique position to solve this problem. Its threat detection products are now bringing together detailed information on the millions of attempted ransomware attacks that hit Windows systems every day. With Microsoft now focused on fighting this threat, we’re eager to see the company leverage the intelligence of this hive mind to beat back the advance of the ransomware threat.

What does Microsoft say about ransomware recovery?
It’s important to note that responding to a ransomware attack is not necessarily the same as recovering from an attack. In other words, Microsoft says Windows 10 can help you detect successful attacks sooner and limit their impact—but how does it help you deal with the damage already done? How does it help you recover the data that is encrypted? How does it help you get back to business?

The Windows 10 ransomware guide makes just one small mention of recovery, urging all to “implement a comprehensive backup strategy.” However, Microsoft offers a rather antiquated look at backup strategies, leaving endpoint devices uncovered, focusing on user-driven processes instead of automatic, continuous backup, and even suggesting enterprises use Microsoft OneDrive as a backup solution. As we’ve explained before, OneDrive alone is insufficient data protection. It’s an enterprise file sync-and-share solution (EFSS), built to enable file sharing and collaborative productivity—not continuous, secure backup and fast, seamless restores.

Making the move to Windows 10? Make sure your backup is ready
Most enterprises are at least beginning to plan for the move to Windows 10, as they should be. The new OS offers plenty of advantages, not least of which are security features that undoubtedly make Windows 10 more hack-resistant. But as security experts and real-world examples continually show, nothing can completely eliminate the risk of ransomware. That’s why your recovery strategy—based on the ability to quickly restore all data—is just as critical as your defense strategy.

Moreover, as more organizations make the move to Windows 10, they’re seeing that the ability to efficiently restore all data is the key ingredient to a successful migration. Faster, user-driven migrations reduce user downtime and IT burden, and guaranteed backup eliminates the data loss (and resulting lost productivity) that plagues the majority of data migration projects.

Long Con or Domino Effect: Beware the Secondary Attack

January 12, 2017

By Jeremy Zoss, Managing Editor, Code42

Lightning may not strike twice, but cybercrime certainly does. The latest example: A year after the major hack of the U.S. Office of Personnel Management (OPM), cyber criminals are again targeting individuals impacted by the OPM breach with ransomware attacks.

In the new attack, a phishing email impersonates an OPM official, warning victims of possible fraud and asking them to review an attached document—which, of course, launches the ransomware.

OPM attack part of bigger trends in ransomware
The new round of attacks could come from two sources—both are part of trends in ransomware.

  • The long con: The first scenario is that the same individuals that executed the original OPM hack are now launching these ransomware attacks. If this is the case, it at least alleviates some concerns that the OPM hack was state-sponsored cyberterrorism and/or a sign of a new kind of “cold war.” But the trend toward this type of “long con” is scary in its own right. Users are already more likely than ever to “click the link”—now patient cyber criminals are using hacked data to deploy extremely authentic phishing scams.
  • The “kick ‘em while they’re down” attack: It’s more likely that the OPM ransomware attack is just an example of enterprising cybercriminals seeing vulnerability in the already-victimized. This is another unsettlingly effective trend—like “ambulance chasing” for cybercriminals: Follow the headlines to find organizations that have recently been hit with a cyberattack (of any kind), then swoop in posing as official “help” in investigating or preventing further damage. Clever cybercriminals know they can prey on the anxiety, fear and uncertainty of users in this position.

How can you get ahead of evolving ransomware?
Though we’ve said it a thousand times, it’s more true than ever: Ransomware is evolving at an incredible rate and it is overwhelming traditional data security tools. Paying the ransom becomes an appealing option to unprepared businesses, and this steady cash flow only fuels the problem.

Want to see where ransomware is headed next and understand how you can snuff out this threat? Read our new report, The ransomware roadmap for CXOs: where cybercriminals will attack next.

Six Cloud Threat Protection Best Practices from the Trenches

January 6, 2017

By Ajmal Kohgadai, Product Marketing Manager, Skyhigh Networks

As enterprises continue to migrate their on-premises IT infrastructure to the cloud, they often find that their existing threat protection solutions aren’t sufficient to consistently detect threats that arise in the cloud. While security information and event management (SIEM) solutions continue to rely on a rule-based (or heuristics-based) approach to threat detection, that approach often fails when it comes to the cloud. This is in large part because SIEMs don’t evolve without significant human input as user behavior changes over time, new cloud services are adopted, and new threat vectors are introduced.

Without a threat protection solution built for the cloud, enterprises can suffer data loss when:

  • Malicious or careless insiders download data from a corporate sanctioned cloud service, then upload it to a shadow cloud file sharing service (e.g. Anthem breach of 2015)
  • An employee downloads data onto a personal device, regardless of being on or off-network, at which point control over that data is lost
  • Privileged users of a cloud service (such as administrators) change security configurations inappropriately
  • An employee shares data with a third party, such as a vendor or partner
  • Malware on a corporate computer leverages an unmanaged cloud service as a vector to exfiltrate data stolen from on-premises systems of record
  • A user endpoint device syncs malware to a file sharing cloud service and exposes other users and the corporate network to malware
  • Data in a sanctioned cloud service is lost to an insecure and unmanaged cloud service via an API connection between the two services

However, even the most advanced cloud threat protection technology can be rendered ineffective when it’s not being used to its fullest potential. Below are some of the proven best practices and must-haves when implementing a cloud threat protection solution.

  1. Focus on multi-dimensional threats, not simple anomalies – a user logs in from a new IP address, downloads a higher-than-average volume of data, or changes a security setting within an application. In isolation, these are anomalies, but not necessarily indicative of a security threat. Focus first on threats that combine multiple indicators and anomalies, providing strong evidence that an incident is in progress.
  2. Start with machine-defined models, then refine – aside from accuracy limitations, it’s difficult to get started with threat protection by configuring detailed rules with thresholds for which you have no context. Start with unsupervised machine learning – that is, software that analyzes user behavior and automatically begins detecting threats (see the sketch after this list). Augment with analyst feedback later to fine-tune threat detection and reduce false positives.
  3. Monitor all cloud usage for shadow and sanctioned apps – cloud activity within one service might appear routine because threats are often signaled by multiple activities across services. Correlate activity across other apps and a pattern will start to appear if a threat is in motion. That’s why it is important to start with visibility into both sanctioned and unsanctioned cloud services to get the full picture.
  4. Leverage your existing SIEM and SOC workflow – events generated by a cloud threat protection solution should flow into existing SOC/SIEM solutions in real time via a standard feed. This capability will allow security experts to both correlate cloud anomalies with on-premises ones while also allowing the integration of cloud threat incidence response with incident response workflows within their existing SOC/SIEM.
  5. Correlate cloud usage with other data sources – looking at a single data source to detect threats is inadequate. It is necessary to bring in additional information for context. That data can include whether the user is logging in using an anonymizing proxy or using a TOR connection, or whether her account credentials are for sale on the Darknet.
  6. Whitelist low-risk users and known events – a general rule of thumb is to allow the threat protection system to generate only as many threat events as the security team has the bandwidth to follow up on. One way to do this is to tune the system by raising thresholds. Another is to whitelist events generated by low-risk (trusted) users. This protects your security team from being inundated with false positives.
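
Here is the sketch referenced in point 2: unsupervised learning flags outliers, and a threat event is raised only when several independent indicators fire at once (point 1). The features, data and thresholds are all illustrative.

```python
# Combine unsupervised anomaly detection with a multiple-indicator
# requirement before declaring a threat. Features and thresholds
# are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Columns: logins from new IPs, GB downloaded, settings changed (per day)
normal_activity = np.column_stack([
    rng.poisson(1, 500),          # new-IP logins
    rng.gamma(2.0, 0.5, 500),     # download volume
    rng.poisson(0.1, 500),        # security-setting changes
])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_activity)

today = np.array([[9, 40.0, 4]])   # several anomalies at once
baseline = normal_activity.mean(axis=0)

is_outlier = model.predict(today)[0] == -1       # model's view
indicators = int((today > 3 * baseline).sum())   # dimensions that spiked

# Point 1: demand multiple correlated indicators, not one odd value.
if is_outlier and indicators >= 2:
    print(f"Threat event: {indicators} indicators fired together")
```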

Three Lessons From the San Francisco Muni Ransomware Attack

December 22, 2016

By Laurie Kumerow, Consultant, Code42

On Black Friday, a hacker hit San Francisco’s light rail agency with a ransomware attack. Fortunately, this story has a happy ending: the attack ended in failure. So why did it raise the hairs on the back of our collective neck? Because we fear that next time a critical infrastructure system is attacked, it could just as easily end in tragedy. But it doesn’t have to if organizations with Industrial Control Systems (ICS) heed three key lessons from San Francisco’s ordeal.

First, let’s look at what happened: On Friday, Nov. 25, a hacker infected the San Francisco Municipal Transportation Agency’s (SFMTA) network with ransomware that encrypted data on 900 office computers, spreading through the agency’s Windows systems. As a precautionary measure, the third party that operates SFMTA’s ticketing system shut down payment kiosks to prevent the malware from spreading. Rather than stop service, SFMTA opened the gates and offered free rides for much of the weekend. The attacker demanded a 100-Bitcoin ransom, around $73,000, to unlock the affected files. SFMTA refused to pay since it had a backup system. By Monday, most of the agency’s computers and systems were back up and running.

Here are three key lessons other ICS organizations should learn from the event, so they’re prepared to derail similar ransomware attacks as deftly:

  1. Recognize you are increasingly in cybercriminals’ crosshairs. Cyberattacks on ICS systems, which control public and private infrastructure such as electrical grids, oil pipelines and water systems, are on the rise. In 2015, the U.S. Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) responded to 20% more cyber incidents than in 2014. And for the first time since the agency started tracking reported incidents in 2009, the critical manufacturing sector experienced more incidents than the energy sector. Critical manufacturing organizations produce products like turbines, generators, primary metals, commercial ships and rail equipment that are essential to other critical infrastructure sectors.
  2. Keep your IT and OT separate. Thankfully, the San Francisco Muni ransomware attack never went beyond SFMTA’s front-office systems. But, increasingly, cybercriminals are penetrating control systems through enterprise networks. An ICS-CERT report noted that while the 2015 penetration of OT systems via IT systems was low, at 12 percent of reported incidents, it represented a 33 percent increase from 2014. Experts say the solution is to adopt the Purdue Model, a segmented network architecture with separate zones for enterprise, manufacturing and control systems (a toy sketch of such zone rules follows this list).
  3. Invest in off-site, real-time backup. SFMTA was able to recover the encrypted data without paying the ransom because it had a good backup system. That wasn’t the case for the Lansing (Michigan) Board of Water & Light. When its corporate network suffered a ransomware attack in April, the municipal utility paid $25,000 in ransom to unlock its accounting system, email service and phone lines.
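
To illustrate the segmentation idea in point 2, here is a toy sketch of Purdue-style zone rules, where traffic may only cross between adjacent levels. The zone names and the adjacency rule are simplified for illustration; the real Purdue Model defines more levels and brokered conduits.

```python
# A toy sketch of Purdue-style segmentation: traffic is allowed only
# between adjacent zones, so a compromise of enterprise IT cannot
# reach control systems directly. Zones and rule are simplified.
PURDUE_LEVELS = {
    "enterprise": 4,     # front-office IT (where SFMTA was hit)
    "dmz": 3,            # brokered services between IT and OT
    "operations": 2,     # supervisory control (SCADA, HMI)
    "control": 1,        # PLCs, field devices
}

def flow_allowed(src: str, dst: str) -> bool:
    """Permit traffic only between adjacent levels; deny everything else."""
    return abs(PURDUE_LEVELS[src] - PURDUE_LEVELS[dst]) <= 1

print(flow_allowed("enterprise", "dmz"))      # True
print(flow_allowed("enterprise", "control"))  # False: must traverse zones
```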

If San Francisco’s example isn’t enough to motivate ICS organizations to take cybersecurity seriously, then Booz Allen Hamilton’s 2016 Industrial CyberSecurity Threat Briefing should do the trick. It includes dozens of cyber threats to ICS organizations.