April 15, 2014

By Sekhar Sarukkai, SkyHigh Networks
A fascinating scientific discovery
There was a fascinating discovery last month: a state of matter never before seen in biology, found, of all places, in the eyes of chickens – a state of “disordered hyperuniformity”. This arrangement of particles in the chicken’s eyes appears disorganized over small distances but has a hidden order that allows the material to behave like both a crystal and a liquid. The eyes of other animals have cones arranged in a discernible pattern; insects, for example, have cones in a hexagonal scheme.
The fundamental characteristic of this state of matter is that materials in it exhibit order over long distances but disorder over short distances.
This is what disordered hyperuniformity looks like
Take a look at the diagram below, which depicts the spatial distribution of the five types of light-sensitive cells, known as cones, in the chicken retina (image courtesy of Joseph Corbo and Timothy Lau, Washington University in St. Louis). Each type of cone has an “exclusion region” that keeps out other cones of the same type. Because the cones come in different sizes, these exclusion regions make the arrangement seem random, but in reality the cones have a hyperuniform structure. For example, cones of a particular type arrange themselves in a triangular formation with each other due to their exclusion regions.
Chicken Eyes 1
Reading this inspired me to draw a parallel to the analytics work we do at Skyhigh Networks in the field of cloud security, where we analyze enterprise access events to discern patterns that indicate insider threats and security breaches by determining what normal access patterns look like and weeding those out. Finally, I think we have a single term to describe what we are looking for in the terabytes of daily cloud access data we analyze: deviations from “disordered hyperuniformity”.
Dare I say, security is in the [chicken] eyes of the beholder!
There is no dearth of headlines on enterprise data breaches, usually via advanced persistent threats that exfiltrate data over extended periods of time – including at technology companies like Adobe and, more recently, at many retailers including Target and Neiman Marcus. In all these cases, signals of anomalous activity existed but were lost in a sea of otherwise harmless access log data. Security analysts in enterprises face a problem that big data has compounded even further: a deluge of data. The signals of nefarious activity are invariably lost in the terabytes and petabytes of logs that are routinely collected each day.
Check out disordered hyperuniformity applied to cloud analytics
Security analysts tasked with separating good actors (the majority of whom fall into this pattern of disordered hyperuniformity) from bad actors look for anomalies using behavioral and statistical anomaly detection techniques. For example, the snapshot below of a few minutes of access patterns (each dot representing a user access to a high-, medium-, or low-risk service) from a large enterprise repeats itself over very large time scales. Both behavioral and statistical techniques are used to analyze this time series data for strong correlations indicating expected behavioral patterns.
Chicken Eyes 2
These techniques are meant to efficiently bubble up actors whose traffic patterns deviate from the norm. Codifying the “norm” is challenging due to variations in real-time access patterns that invariably appear disordered. Hence, a macro view that looks at, for example, logical service chaining, the category and risk of services, and user groups lends itself to more natural clustering of access patterns, making it easier to discern behavioral outliers. Using these techniques, we at Skyhigh have found data exfiltration attempts using Twitter, scans of cloud storage services for use as drop zones, command-and-control interactions from popular Infrastructure-as-a-Service providers, watering hole attacks from tracking services, and backdoor attacks from code hosting services, to name a few.
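As a rough illustration of the statistical side of this (not Skyhigh’s actual models, which are far richer), even a simple z-score over per-user access volumes will bubble up an actor whose behavior deviates from the disordered-but-uniform baseline:

```python
from statistics import mean, stdev

def flag_outliers(access_counts, threshold=3.0):
    """Flag users whose access volume deviates sharply from the population.

    access_counts: dict mapping user -> number of access events in a window.
    Returns users whose z-score exceeds the threshold.
    """
    counts = list(access_counts.values())
    mu, sigma = mean(counts), stdev(counts)
    if sigma == 0:
        return []
    return [user for user, n in access_counts.items()
            if abs(n - mu) / sigma > threshold]

# Hypothetical window: most users cluster around ~100 events,
# while one account suddenly generates far more traffic.
window = {f"user{i}": 100 + (i % 7) for i in range(50)}
window["insider"] = 2_500
print(flag_outliers(window))  # → ['insider']
```

Real detection layers behavioral context (service risk, user groups, service chaining) on top of simple statistics like this, but the principle is the same: model the norm, then surface deviations.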
Want to see the usage anomalies in your data?
At Skyhigh, we have analyzed the access patterns of almost 10 million employees over extended periods of time to create baseline disordered-hyperuniformity usage models, and we are constantly pushing the boundaries of new techniques (even if it means looking at them through chicken eyes) to help enterprises manage their threats and risks, both from outside and inside.

FTC Recognizes Value of Trust Established by SSL and Digital Certificates

April 14, 2014

Attacks on digital certificates and trusted connections drive FTC to action

Recognizing that the trust established by Secure Sockets Layer (SSL) and digital certificates plays an important role in everyday life, the US Federal Trade Commission (FTC) brought charges against Fandango and Credit Karma for failing to protect this trust. Both companies failed to validate digital certificates used for SSL/Transport Layer Security (TLS) connections in their mobile applications. The FTC acknowledged that these failures allow attackers to circumvent the trust established by digital certificates and gain access to users’ confidential personal and financial information. Once this trust is compromised, attackers can redirect traffic to an untrusted site, and the users’ applications cannot detect that traffic has been diverted. Ironically, digital certificates and SSL/TLS secure connections are designed to thwart these Man-in-the-Middle (MiTM) attacks.

The FTC illustrates how a compromised or fake digital certificate can be used for MiTM attacks against unsuspecting users.
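The failures the FTC cited come down to client code that skips certificate validation. A minimal illustration using Python’s standard ssl module (not code from either app, which was never published) shows the safe default alongside the vulnerable anti-pattern:

```python
import ssl

# What "validating digital certificates" means in practice: the client must
# verify the chain to a trusted root AND check the hostname. Python's
# default client context does both.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: chain must verify
print(ctx.check_hostname)                    # True: name must match cert

# The vulnerable anti-pattern: disabling validation accepts ANY certificate,
# which is exactly what enables the MiTM attack described above.
bad = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
bad.check_hostname = False       # never ship this in production code
bad.verify_mode = ssl.CERT_NONE  # accepts self-signed or fake certificates
```

Development builds often disable validation to work around test certificates; the Fandango and Credit Karma cases show what happens when that shortcut reaches production.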

The importance of the settlement is not that businesses must deal with another compliance requirement. Instead, the FTC is reinforcing the fact that securing the trust established by digital certificates is critical. The FTC’s action underscores what others have already found:

  • Microsoft concluded that “PKI is under attack.”
  • In its 2013 fourth quarter threat report, McAfee reported that malware that misuses digital certificates increased 52% over the third quarter.

Protecting trust is so important that no business or government can ignore it. A single compromised certificate or application that fails to validate certificates can make all the other security controls useless.

A fake certificate purporting to be for GoDaddy’s email service could allow an attacker to masquerade as GoDaddy if applications don’t check if a certificate is trusted.

Attacks on mobile applications that fail to validate digital certificates are nothing new. In an article published earlier this year, Netcraft reported that it had found dozens of fake digital certificates deployed across the Internet. Unlike many attacks using compromised digital certificates, the fake certificates that Netcraft found probably targeted users of mobile applications—40% of which, like Fandango’s and Credit Karma’s applications, failed to validate the trust established by legitimate digital certificates. While the FTC has started its action with Fandango and Credit Karma, significantly wider holes in SSL and digital certificate security have been reported. In February 2014, for example, Apple patched Mac OS X and iOS because both failed to validate digital certificates for SSL/TLS—an issue that could have been exploited by MiTM attacks.

With Gartner predicting that 50% of network attacks will use SSL by 2017, enterprises must protect the trust established by digital certificates. The FTC provides some basic recommendations that all mobile developers should follow. In addition, developers should evaluate security, including the validation of digital certificates, with the help of a third party. Beyond this, organizations must secure and protect the keys and certificates that establish trust for mobile applications, web browsers, and the thousands of applications behind the firewall. Although every organization depends on these applications, they create a huge attack surface.

In response to the rise in attacks on keys and certificates, Forrester recommends that organizations:

  • Gain visibility into threats. Only about half (52%) of organizations know how many keys and certificates are in use, how those keys and certificates are used, and who is responsible for them. You can’t control what you don’t know you have.
  • Enforce policy to establish norms and detect anomalies. Once an organization has gained visibility into its key and certificate inventory, it can begin to enforce policies and establish a norm. This makes detecting anomalies easier, whether they’re accidental policy violations by a well-intentioned developer or a malicious attack.
  • Automate key and certificate functions to gain control and reduce risk. A typical large enterprise has thousands of keys and certificates to secure and protect. Work smarter, not harder, by automating security for processes such as key generation, certificate requests, monitoring for changes and anomalies, and other related tasks. This automation not only streamlines and centralizes these tasks, but also helps to establish the necessary controls to reduce risk, shrink the threat surface of attacks, and help the organization respond to attacks faster. Automation and control are part of establishing a norm that can be monitored for possible anomalies and attacks.
  • Analyze data to gain intelligence. Analysis of data gained from securing keys and certificates will provide a wealth of information and insight that can help to identify opportunities to reduce risk. By looking at the data generated, organizations can spot patterns of potentially suspicious activity or anomalies that require further investigation. Reports may also help identify keys and certificates that may be problematic, such as those that are about to expire or are no longer needed.
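Forrester’s first recommendation, visibility, starts with an inventory that can be audited against policy. The sketch below models that idea with entirely hypothetical data and checks (approved issuers, named owners, an expiry horizon); a real inventory would be built by scanning hosts and trust stores:

```python
from datetime import datetime, timedelta

# Toy certificate inventory (hypothetical hosts, CAs, and owners).
inventory = [
    {"host": "web01", "cn": "shop.example.com", "issuer": "TrustedCA",
     "expires": datetime(2014, 5, 1), "owner": "web-team"},
    {"host": "app07", "cn": "internal.example.com", "issuer": "UnknownCA",
     "expires": datetime(2016, 1, 1), "owner": None},
]

def audit(inventory, approved_cas, now, horizon_days=30):
    """Apply simple policy checks: approved issuer, named owner, expiry."""
    findings = []
    for cert in inventory:
        if cert["issuer"] not in approved_cas:
            findings.append((cert["host"], "unapproved CA"))
        if cert["owner"] is None:
            findings.append((cert["host"], "no responsible owner"))
        if cert["expires"] - now < timedelta(days=horizon_days):
            findings.append((cert["host"], "expiring soon"))
    return findings

print(audit(inventory, {"TrustedCA"}, now=datetime(2014, 4, 14)))
```

Once the inventory and policy exist, anomaly detection and automation (Forrester’s remaining points) become routine reporting rather than forensic archaeology.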

In line with these recommendations from Forrester, Venafi TrustAuthority enables organizations to quickly gain visibility, fix vulnerabilities, and establish policies for keys and certificates. Venafi TrustForce automates key and certificate functions to further eliminate the opportunity for compromise and enable organizations to enforce policies and remediate security incidents. IT security teams must start by gaining visibility into how keys and certificates are used, fixing vulnerable certificates, and enforcing policies to protect the trust upon which their business depends—from their mobile applications back to the data center.

Mad Max Here We Come: Heartbleed shows how much we blindly trust keys and certificates (and take them for granted)

April 10, 2014


The race is on to respond and remediate by replacing the keys and certificates in use with millions of applications, because patching alone won’t help.

The world runs on the trust established by digital certificates and cryptographic keys. Every business, every government. It’s the way the architects of the Internet solved the problem of securing data, keeping communications private, and knowing that a server, device, or cloud service is authenticated. Because keys and certificates provide the foundation for almost everything we know in our highly digital world, if you attack the trust established by keys and certificates, then all our other security defenses become at best less effective. At worst, completely ineffective. It’s why Forrester Research found: “Advanced threat detection provides an important layer of protection but is not a substitute for securing keys and certificates that can provide an attacker trusted status that evades detection.”

We’re now seeing how a single vulnerability in OpenSSL named Heartbleed, present since 2011 and in use with tens of thousands of applications that make commerce and communications work online and offline, exposes keys and certificates to attack and compromise. Yes, it exposes the keys and certificates that every business and government uses to bank, purchase, and communicate. And it doesn’t require an attacker to breach firewalls or other security defenses! The Cryptopocalypse has arrived, and much sooner, and worse, than researchers at Black Hat 2013 imagined.

The scope of the problem is massive: Apache, just one application that uses OpenSSL, runs 346M public websites, or about 47% of the Internet today. And the problem is even larger, since this figure doesn’t include the tens of millions of behind-the-firewall applications, devices, appliances, and things that run Apache and use OpenSSL. And Apache is just one of many applications that rely on OpenSSL.

The consequences of this vulnerability and the resulting exposure of keys and certificates are scary. Attackers can spoof trusted websites and decrypt private communications. Accomplish this and it’s game over: cybercriminals win.

Live Webinar: Remediating Heartbleed Vulnerability — Register Now

The researchers who identified the vulnerability sum up the impact simply: “Any protection given by the encryption and the signatures in the X.509 certificates can be bypassed.” You must assume keys and certificates are compromised and immediately replace them to remediate.

While the vulnerable code has been fixed, sadly most organizations will remain vulnerable because they are unable to change out their keys and certificates — the thousands of keys and certificates in every Global 2000 enterprise and modern government. This continued exposure means attackers can still spoof legitimate websites and decrypt private communications.

But this isn’t the first attack on keys and certificates, and it won’t be the last. We’ve seen APT groups stealing keys and certificates (most recently the Mask APT group), and we’ve seen breached organizations fail to remediate by changing keys and certificates, remaining owned by their attackers. The infamous Stuxnet attacks used stolen certificates to attack Iranian nuclear facilities. And as a leaked NSA memo showed, Edward Snowden used compromised keys and certificates to execute his breach of the NSA. All of this and more is why, back in December 2012, Gartner concluded: “Certificates can no longer be blindly trusted.”

Now Heartbleed. The success of attacks using compromised keys and certificates has proven there will be only more attacks and more vulnerabilities. The value of IT security is measured in how fast security teams can respond and remediate: create new, trusted, uncompromised keys; revoke current certificates; create new trusted certificates; and get them installed and trusted before they can be misused.

It’s not, as some have suggested, about setting up a good process to optimize procedures and best practices. Nor is it just about patching software. The researchers who exposed Heartbleed identified the requirements to remediate: “revocation of the compromised keys and reissuing and redistributing new keys.”

Respected Johns Hopkins cryptographer Matthew Green explained further: “It’s a nightmare vulnerability, since it potentially leaks your long term secret key — the one that corresponds with your server certificate. Worse, there’s no way to tell if you’ve been exploited. That means the prudent thing to do now is revoke your certificate and get a new one.”


Respond & Remediate Now Before It’s Too Late

The clock is ticking. Our adversaries know about these vulnerabilities. Following the example set by NIST’s guidance on responding to a CA compromise, remediation for keys and certificates can be simplified to two steps:

  • Know where all keys and certificates are located
  • Revoke, replace, install, and verify keys and certificates with new ones
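One rough, automatable signal that the second step actually happened is whether a server’s certificate validity period began after the disclosure date. The sketch below uses the certificate dictionaries returned by Python’s ssl.getpeercert(); the dates are illustrative:

```python
import ssl

# Heartbleed was publicly disclosed on April 7, 2014.
HEARTBLEED_DISCLOSED = ssl.cert_time_to_seconds("Apr  7 00:00:00 2014 GMT")

def reissued_after_disclosure(cert):
    """Given a certificate dict in the format returned by ssl.getpeercert(),
    check whether its validity period began after Heartbleed was disclosed —
    a rough signal that the key and certificate were actually replaced."""
    return ssl.cert_time_to_seconds(cert["notBefore"]) > HEARTBLEED_DISCLOSED

old = {"notBefore": "Jan 15 00:00:00 2013 GMT"}
new = {"notBefore": "Apr 10 08:30:00 2014 GMT"}
print(reissued_after_disclosure(old))  # False: key may still be compromised
print(reissued_after_disclosure(new))  # True: cert was issued post-disclosure
```

A recent notBefore date is necessary but not sufficient: some operators reissued certificates without generating new keys, which leaves the compromised key in service.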

For organizations that do not have a system to identify all keys and certificates used with SSL — whether in the datacenter or out in the cloud — Venafi can help. Venafi TrustAuthority™ can be deployed quickly to establish a comprehensive inventory of keys and certificates, where they’re used, and who is responsible for the ones that need to be replaced. This is followed by revocation and replacement with new keys and certificates from one or many trusted Certificate Authorities (CAs) used by your enterprise. TrustAuthority handles these complexities for security teams around the world every day. Organizations that now must respond to Heartbleed can be up, running, and back to a secured state quickly.

For organizations that already have Venafi TrustAuthority™ (previously known as Enrolment for Server Certificate Manager), security administrators already have an inventory of the keys and certificates in use that need to be replaced. Your TrustAuthority policy identifies the applications that keys and certificates are used with, including Apache systems. Security administrators, working with application owners, can quickly, securely, and easily generate new keys and certificates from one or more of the trusted CAs used by the organization. TrustAuthority can then validate that the new keys and certificates are in use.

For organizations that have placed a priority on security and are using Venafi TrustForce™ (previously known as Provisioning for Server Certificate Manager), security administrators can quickly have new keys and certificates automatically generated and installed without waiting for assistance from application and operations teams. Using the data, intelligence, and policy from TrustAuthority, TrustForce securely distributes new keys and certificates, installs them, and validates the application is back up and running with the new trusted keys and certificates. This is the automated response and remediation that security teams need to deal with increasing attacks on keys and certificates.


The stage is set: attackers know that we can’t secure the trust that everything digital depends upon — we can’t secure keys and certificates, and we can’t respond and remediate when attacked. The world Gartner painted — an almost Mad Max world of “Living in a World Without Trust” — is about to become reality if we don’t take securing keys and certificates seriously and put automated capabilities in place to respond and remediate immediately. One thing is certain: this won’t be the last time we find ourselves in this position and need to respond quickly.

Contact Venafi now to get help responding to and remediating Heartbleed, and be ready for the attacks on keys and certificates still to come.


April 10, 2014

By Harold Byun, Skyhigh Networks
Over the past few weeks, security teams across the country have been grappling with the end of life of Windows XP, which is still running on 3 out of 10 computers. That issue has been completely overshadowed by news of the Heartbleed vulnerability in OpenSSL, which is used extensively to secure transactions and data on the web.
Heartbleed makes the SSL encryption layer used by millions of websites and thousands of cloud providers vulnerable. With a simple exploit, an attacker can gain access to passwords, usernames, and even the encryption keys used to protect data in transit. While the media’s initial focus was on high-profile consumer sites like Yahoo! Mail, many cloud services present an even greater risk to companies storing sensitive data on them.
Many cloud services are still vulnerable
Skyhigh’s Service Intelligence Team tracks vulnerabilities and security breaches across thousands of cloud providers, including the Heartbleed vulnerability. Even 24 hours after the vulnerability was widely publicized, 368 cloud providers were still not patched, leaving them vulnerable to attack. These services include some of the leading backup, HR, security, collaboration, CRM, ERP, and cloud storage services.
The average company uses 626 cloud services, making the likelihood that it uses at least one affected service extremely high. Across 175 companies using Skyhigh, 96% use at least one cloud provider that was still not patched 24 hours later. We’ll continue tracking these services and provide updates as they are patched.
What actions you can take
In order to close the vulnerability, cloud providers need to update OpenSSL and reissue the certificates that could be used to impersonate their services. Skyhigh has contacted each of the affected cloud providers and is working with them to ensure they patch their SSL libraries and perform remediation such as revoking and reissuing certificates. We’ve also alerted our customers who use affected services.
There are 5 steps that every company needs to take in response to Heartbleed:
1. Determine your exposure: Skyhigh automatically alerted customers to services they use that are affected by Heartbleed, but anyone can check individual services here:
2. Change your passwords: All the passwords used by employees for affected services are potentially vulnerable and should be changed immediately. If you reused passwords across services, also change these passwords.
3. Enable multi-factor authentication: Require a security token so that a remote attacker cannot log in to a service with the password alone. As noted in Skyhigh’s recent report, only 15% of cloud providers offer this feature.
4. Contact cloud providers: Reach out to affected providers so you can receive updates when they are patched and their certificates have been reissued. Skyhigh automatically tracks and presents this information in our product.
5. Use an encryption gateway: Encrypt all data before it’s uploaded to the cloud so that even if the provider is breached, your data is encrypted using enterprise-controlled encryption keys that remain on premises.
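The “security token” in step 3 is usually a time-based one-time password (TOTP). As a rough illustration of why a stolen password alone isn’t enough, here is a minimal sketch of the RFC 6238 algorithm; real deployments should use a vetted library, and the secret below is the RFC’s published test key, not a real one:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, for_time=None, digits=6, step=30):
    """Minimal RFC 6238 TOTP: the rolling codes behind most 'security token'
    second factors. Sketch only — use a vetted library in production."""
    key = base64.b32decode(secret_b32)
    counter = int((for_time if for_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: secret "12345678901234567890" at T=59 seconds.
secret = base64.b32encode(b"12345678901234567890")
print(totp(secret, for_time=59))  # → 287082
```

Because the code changes every 30 seconds and derives from a shared secret the attacker never sees, a Heartbleed-leaked password by itself no longer grants access.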


Cloud Policy? I’ll Take a Sharp Stick in the Eye, Please!

April 10, 2014

By Jamie Barnett, VP Marketing, Netskope

We were struck by the results of a survey we conducted with RSA Conference attendees in February: even though more than 60% of respondents didn’t have, or didn’t know if they had, a cloud app policy, 70% cared enough to think about their organization’s privacy policy before using a cloud app. People are aware, and they want to take care when logging into cloud apps.


But wow, do that many enterprises really not have a cloud app policy? Maybe they’re just scattered across a bunch of policies. One of our customers rattled off his list: “Well, there’s third-party vendor, access control, acceptable use, remote access or work-from-home, mobile/BYOD, user privacy, internet monitoring, data classification/DLP, data retention/e-discovery, data encryption, disaster recovery/business continuity, incident management, and more.” Holy cow! No wonder nobody wants to deal with their cloud policy! If I had to open up that can of worms, I’d beg for something sharp and jam it into my eye just to ease the pain!

But there are people who have enacted a cloud app policy…and lived to tell about it. We call these Cloud Policy Survivors (there’s even a hashtag: #CloudPolicySurvivor). We’ve picked these folks’ brains and come up with a checklist. Here’s the CliffsNotes version. If you want the full version, you can download it here.

#1 Communicate with your stakeholders. Start small and call a 30-minute meeting with 5 “friendlies.” Listen hard and use the feedback as the basis for your communications strategy.

#2 Discover the cloud apps in your organization and understand how they’re being used. At last count, we see 461 per enterprise, including 47 marketing, 41 HR, and 27 finance/accounting. How many do you think you have?

#3 Segment your cloud apps into business-critical, user-important, and non-critical. This will help you bucketize and deal with the 461 apps you’ve just discovered.

#4 Assess cloud app risk in three ways: look at inherent risk in the app, usage risk, and data risk. This, plus #3 will enable you to triage your cloud apps and figure out which ones to ignore, which to recommend, which to consolidate, which to monitor closely, and in which to enforce usage policies.
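One hedged way to make #3 and #4 concrete is a simple weighted score per app. The weights, categories, and app names below are purely illustrative, not Netskope’s actual scoring model:

```python
# Illustrative scoring only — weights and inputs are hypothetical.
def app_risk(inherent, usage, data, weights=(0.3, 0.3, 0.4)):
    """Combine the three risk dimensions from #4 into one 0-10 score."""
    return round(sum(w * r for w, r in zip(weights, (inherent, usage, data))), 1)

apps = {
    "crm-suite":  app_risk(inherent=3, usage=2, data=8),  # sensitive data
    "file-share": app_risk(inherent=7, usage=8, data=9),
    "lunch-menu": app_risk(inherent=2, usage=1, data=1),
}
# Triage: monitor or enforce policy on the high scorers, ignore the low ones.
print(sorted(apps.items(), key=lambda kv: -kv[1]))
```

Even a crude score like this turns “461 discovered apps” into a short, ordered worklist, which is the whole point of the segmentation in #3.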

#5 Inventory your “in-scope” cloud app policies. Instead of one tidy policy, these are scattered all over the place. See the laundry list above: mobile, user privacy, monitoring, etc. Just bite the bullet.

#6 Consolidate policies. Find overlapping policies and merge them. Now doesn’t that feel good?

#7 Look at your existing policies with a critical eye. What’s not working? We see that 90% of cloud app usage is in apps that have been blocked by a firewall or perimeter technology. We call this “exception sprawl!” Don’t do this. Get rid of policies that don’t work anymore!

#8 Find and fill the policy gaps created by cloud and mobile. Here are some new dynamics that existing policies don’t account for: Anybody can procure and deploy an app, even a mission-critical one. Anybody can be an administrator, and many are. There’s no such thing as super-admin or privileged user monitoring. Also, content can be uploaded, shared with an endless tapestry of cloud-connected endpoints, and downloaded to any device.

#9 Start an administrator amnesty program. Suss out those folks running important apps (like HR, finance/accounting, and ERP) and managing access and permissions willy-nilly. Gently bring them into your fold. Or at least call it a draw and get visibility and control over those apps without administering them.

#10 Coach users. This is a continuation of the communication point in #1. Convey trust and transparency with users by creating coaching messages that tell users what they did wrong, and give them an alternative action item when they’ve been blocked from doing what they want to do in a cloud app. Give them an opportunity to talk back and communicate with you.

Are you a Cloud Policy Survivor? What made the difference on your checklist?
Share your success on social media by including #CloudPolicySurvivor or better yet, send us an anonymized version of your cloud policy to [email protected] and we’ll send you a Netskope t-shirt!


April 10, 2014

By Harold Byun, Skyhigh Networks
April 8 = Y2K all over again?
I may be dating myself a little bit here by writing this, but at the turn of the century, the impending arrival of the year 2000 led to multi-year coding projects, systems upgrades, and a massive testing effort to ensure Y2K compliance.
Another technology-related milestone will be arriving shortly on April 8: the end of support for Microsoft’s Windows XP platform.  The platform has dutifully served hundreds of millions of users, on hundreds of millions of desktops and laptops, for twelve years.  While the date will come and go with little more than the blink of an eye, we should be concerned about the lack of preparation and migration to protect infrastructure.
Cloud security risk is real – nearly 1 in 3 uses XP
Consider that roughly 29% of the worldwide PC and laptop market (over 500 million units) is still running Windows XP.  The end-of-life event means that many of the devices that access and handle data may be unpatched and vulnerable.  Estimates put Windows XP at 10% of the U.S. federal government’s desktop population, and administrators have been advised to cordon off systems on segmented networks or dedicated subnets.  While many companies can likely control the impact through managed IT rollouts and updates, the problem is certainly exacerbated by remote users and teleworkers who may be operating on outdated systems.
Compounding the problem further is the consumerization of IT, which has massively increased the number of employees using cloud services from whichever device they choose.  After April 8, devices running the unsupported XP operating system present a significant security risk when they upload or download data to or from corporate cloud services. IT administrators will be challenged to enforce control over users who access data from systems that are not under IT’s control.
How to secure corporate data in the cloud
So what should security professionals do? One option is to enforce access control policies to prevent unsupported devices from accessing corporate cloud services (Salesforce, ServiceNow, etc.). Skyhigh Secure’s access control capability provides OS version controls and directional traffic blocking.  As companies continue to adopt the cloud and place their enterprise data there, they now have a mechanism for preventing access or uploads/downloads from unsupported operating systems such as XP.
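As an illustration of the idea (not Skyhigh Secure’s actual implementation, which isn’t public), an OS-version control can key off the Windows NT version reported in the browser’s User-Agent string, where “Windows NT 5.1” identifies Windows XP:

```python
import re

# Hypothetical policy: NT kernel versions whose OS is past end of support.
UNSUPPORTED_NT = {"5.0", "5.1"}  # Windows 2000 and Windows XP

def allow_access(user_agent):
    """Return False for requests from end-of-life Windows versions."""
    m = re.search(r"Windows NT (\d+\.\d+)", user_agent)
    if m and m.group(1) in UNSUPPORTED_NT:
        return False  # block uploads/downloads from unsupported systems
    return True

xp = "Mozilla/5.0 (Windows NT 5.1; rv:28.0) Gecko/20100101 Firefox/28.0"
win7 = "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36"
print(allow_access(xp), allow_access(win7))  # False True
```

User-Agent strings can be spoofed, so a real gateway would combine this signal with device posture checks rather than rely on it alone.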
The presence of Skyhigh as a “virtual cloud edge” can help businesses address “non-events” such as the passing of a major computing platform, while still enabling the business to reap the benefits of secure cloud services and applications.


Why Should You Update Your Trusted CAs and Enforce Certificate Whitelists?

April 9, 2014

By Patriz Regalado, Product Marketing Manager, Venafi


Your organization’s policies—or lack of policies—regarding trusted root CA certificates are exposing you to unnecessary risk. Because certificates serve as credentials for so many mission-critical transactions, attackers are constantly trying to obtain trusted certificates that can be used in targeted attacks. Systems, for their part, refer to their store of trusted root certificates to determine which certificates to trust. If a certificate is signed by a trusted CA, the system trusts the certificate. To compromise a system, therefore, an attacker simply needs to obtain a certificate that is signed by a trusted root CA—whether by tricking the CA into issuing a fraudulent certificate or by compromising the CA. Every CA that your systems trust represents a potential entry point for attackers.

Many organizations expose themselves to unnecessary risk by allowing far too many of these entry points. They retain the default trust stores distributed with most operating systems and application servers, which include far more certificates than are necessary. According to a study from the University of Hannover in Germany, common trust stores for various platforms and operating systems—such as Windows, Linux, Mac OS, Firefox, iOS, and Android—contained more than 400 trusted root certificates. However, only 66% of these certificates were ever used to sign HTTPS certificates, leaving the other 34% of trusted root certificates susceptible to misuse in certificate-based attacks.
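You can see the scale of default trust on your own machine: Python’s standard ssl module can load the platform’s default CA bundle and list every root it would trust. The count varies by system, but it is typically far larger than the handful of CAs most applications actually need:

```python
import ssl

# Load the platform's default trust store into a fresh client context.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.load_default_certs(ssl.Purpose.SERVER_AUTH)

roots = ctx.get_ca_certs()  # list of dicts, one per trusted root
print(f"{len(roots)} trusted root certificates loaded")
for cert in roots[:3]:
    # Each entry's issuer field is a tuple of relative distinguished names.
    print(dict(rdn[0] for rdn in cert["issuer"]).get("organizationName"))
```

Every one of these roots can vouch for any site on the Internet, which is why trimming unused CAs from trust stores shrinks the attack surface so dramatically.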

We are seeing more and more evidence of malware signed with certificates from legitimate CAs because an attacker stole a private key or obtained a fraudulent certificate. Consider the following scenario: Your organization is currently trusting AcmeCA on many of your systems simply because AcmeCA is approved by the vendor providing the software for your systems. If a malicious attacker gains access to a fraudulent certificate from AcmeCA, that attacker can use it to attack multiple systems within your organization.

Your organization has outward-facing systems, such as those focused on customer interaction or users’ desktops, that must trust a particular CA in order to perform day-to-day business. However, your organization also includes systems that don’t need to trust a particular CA but are, in fact, trusting it. For example, internal systems that communicate only with other internal systems don’t need to trust any CAs but the internal CA(s). In addition, partner-focused systems that communicate with a limited number of partners require just a handful of CAs.

Most organizations have no visibility into which root certificates they are trusting and where those root certificates are deployed; consequently, they cannot limit their exposure to certificate-based attacks. As a critical first step, organizations must gain visibility into which root certificates are being trusted within their environment. They must compile an inventory of their root certificates so they can reduce the risks caused by unnecessary trust. In the AcmeCA scenario, for example, you would see that AcmeCA is installed on multiple systems within your organization, determine that these systems don’t need to trust AcmeCA, and remove it. Thus, an attacker would be unable to use a fraudulent certificate from AcmeCA to successfully attack your organization.

By identifying CAs that should not be trusted on mission-critical systems, organizations have the intelligence and risk awareness to prevent attacks that leverage certificates from those CAs. One way to take action is through certificate whitelisting, which ensures that whitelisted certificates are included in trust stores and blacklisted certificates are excluded from trust stores. Certificate whitelisting limits the number of CAs that are trusted, allowing organizations to secure and protect the CAs they trust while flagging or disallowing untrusted SSL/TLS sessions. As a result, the attack surface is dramatically reduced.

Organizations can eliminate unnecessary risk from digital certificates signed by untrusted CAs by establishing and enforcing certificate whitelists and updating which CAs are trusted within the enterprise. They can enforce baseline requirements for which CAs should be trusted (whitelist) and not trusted (blacklist) on mission-critical systems and ensure whitelisted certificates are included in trust stores and blacklisted certificates are excluded, preventing attacks that leverage certificates from blacklisted CAs.
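Whitelist enforcement ultimately reduces to comparing certificate fingerprints against an approved baseline. The sketch below illustrates the idea in Python; the `WHITELIST` and `BLACKLIST` fingerprint values are hypothetical placeholders, and a real deployment would load the baseline from central policy and act on the trust store directly rather than just classifying.

```python
import hashlib
import ssl

# Hypothetical baseline for one system: SHA-256 fingerprints of the only
# CAs this system should trust (whitelist) and of known-bad CAs (blacklist).
# These hex values are placeholders for illustration only.
WHITELIST = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
BLACKLIST = {"d4735e3a265e16eee03f59718b9b5d03019c07d8b6c51f90da3a666eec13ab35"}

def fingerprint(pem_cert: str) -> str:
    """SHA-256 fingerprint of a PEM-encoded certificate's DER bytes."""
    der = ssl.PEM_cert_to_DER_cert(pem_cert)
    return hashlib.sha256(der).hexdigest()

def classify(pem_cert: str) -> str:
    """Decide what to do with a root certificate found in a trust store."""
    fp = fingerprint(pem_cert)
    if fp in BLACKLIST:
        return "remove"   # explicitly distrusted CA
    if fp in WHITELIST:
        return "keep"     # approved for this system
    return "flag"         # unknown CA: review before continuing to trust it
```

The three-way outcome mirrors the policy described above: blacklisted roots are removed, whitelisted roots stay, and anything else is surfaced for review so the trusted set only ever shrinks deliberately.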

Windigo: Another Multi-Year APT Targets SSH Credentials

April 4, 2014 | Leave a Comment

By Gavin Hill, Director, Product Marketing and Threat Intelligence, Venafi

Last month, ESET, a leading IT security company, published a detailed analysis of operation Windigo. This operation, active since 2011, has compromised over 25,000 Linux and Unix webservers. Cyber-criminals use these servers to steal SSH credentials, redirect visitors to malicious websites, and send millions of spam messages per day. The ESET report provides information on several components of Windigo, including Linux/Ebury, an OpenSSH backdoor used to maintain control of compromised servers and to steal SSH passwords, SSH private keys, key passphrases, and other credentials.

I found it very intriguing that the report indicated that Windigo does not exploit any cryptographic or system vulnerabilities. Instead, this operation leverages only stolen credentials—highlighting the rapidly increasing prevalence of trust-based attacks.

At the heart of operation Windigo’s success is the SSH credential-stealing Linux/Ebury backdoor. Without the SSH credentials, Windigo is not able to expand and compromise additional systems. Once malicious actors have obtained the SSH credentials and installed Linux/Ebury on systems, they can continue to collect new or modified credentials on infected systems. As they do with SSH daemon backdoors, cyber-criminals exploit the blind trust in encryption to own the compromised systems, maintaining access even if the credentials are later changed.

Stolen SSH credentials that do not provide root-level access do not go to waste; they are used as part of spam bot operations or to log into other servers. ESET monitored data sent to exfiltration servers over a period of five days. During that time, ESET captured 5,362 unique successful logins. The figure below shows the number of logins that used root credentials as compared to other forms of access.

Although the Windigo botnet is smaller than most end-user botnets, it’s important to note that Windigo-compromised systems are all webservers with a magnified ability to direct users to malicious sites hosting malware. In fact, Windigo is redirecting over 500,000 web visitors to malicious content every day. By using keys, adversaries have the privileges and trusted status required to turn legitimate systems into a malicious infrastructure that dwarfs even some cloud computing vendors.

Infected systems that are part of the operation Windigo botnet are extremely difficult to detect, in no small part because adversaries have the elevated privileges required to install any binaries they choose. They then conceal these highly sophisticated binaries with advanced cryptography. As the ESET report notes, “System administrators attempting to clean systems that are part of the Windigo operation are usually able to remove other malware components such as Linux/Cdorked, but often overlook the OpenSSH backdoor due to the stealth mechanisms used.” With the backdoor still open, the Windigo operators can return at a later date and revert the changes made by system administrators.

For this reason, the ESET paper advises administrators to “completely wipe their [infected] servers and rebuild them from scratch using a verified source.” Administrators should also assume that all administrator credentials on a compromised system have been compromised. Like the Mask malware, which was used to steal cryptographic keys and digital certificates, operation Windigo demonstrates the growing number of advanced, persistent adversaries targeting keys and credentials. Last week, the latest set of released Snowden documents, titled “I Hunt Sys Admins,” further revealed how malicious attackers and nation states target system admins’ SSH credentials for theft. This information is unsurprising, but it highlights most organizations’ lack of visibility and control over their keys and certificates.

It’s no surprise that adversaries are increasingly using keys and certificates in their nefarious campaigns. Too many organizations employ a lackluster approach to protecting their SSH keys, recklessly exposing themselves to eager cyber-criminals. In addition, most organizations have little visibility into their cryptographic assets—the very assets that criminals are exploiting—making it hard for administrators to understand the scope of the problem or to detect anomalous usage.
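One concrete starting point for that visibility is auditing `authorized_keys` entries against a list of approved keys, so an attacker-planted key stands out. The following is an illustrative sketch, not any vendor's product logic; the `unknown_keys` helper and the approved-set format are assumptions made for the example, and a real audit would walk every user's `~/.ssh/authorized_keys` across the fleet.

```python
import base64
import hashlib

def key_fingerprints(authorized_keys_text: str) -> list[str]:
    """Compute OpenSSH-style SHA-256 fingerprints for each key entry."""
    fps = []
    for line in authorized_keys_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        for field in line.split():
            try:
                # the key blob is the first whitespace-separated field
                # that parses as valid base64 (key types and options don't)
                blob = base64.b64decode(field, validate=True)
            except ValueError:
                continue
            digest = hashlib.sha256(blob).digest()
            fps.append("SHA256:" + base64.b64encode(digest).decode().rstrip("="))
            break
    return fps

def unknown_keys(authorized_keys_text: str, approved: set[str]) -> list[str]:
    """Fingerprints present in authorized_keys but not in the approved set."""
    return [fp for fp in key_fingerprints(authorized_keys_text)
            if fp not in approved]
```

Any fingerprint reported by `unknown_keys` is a candidate backdoor credential of exactly the kind Windigo plants, and it warrants the same skepticism ESET recommends for the rest of a possibly compromised host.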

Research conducted by Venafi found that 74% of organizations have inadequate SSH security policies. That statistic alone is an enticing invitation for any attacker. Why not target an organization with no security controls or ability to respond? Based on revelations in just the first three months of this year (including the release of more Snowden documents and the Mask and Windigo disclosures), I’d suggest that we are seeing only the first crest of a threatening tsunami of attacks on SSH. It’s time for organizations to understand what trust-based attacks are and how to protect against them.