Mobile and Cloud: BFFs 4Ever

By Krishna Narayanaswamy, Chief Scientist, Netskope

We released the Netskope Cloud Report for October today. In it, we analyze aggregated, anonymized data collected from tens of billions of events across millions of users in the Netskope Active Platform, and highlight key findings about cloud app usage in the enterprise. These include our count of enterprise cloud apps (579) and the percentage that are enterprise-ready (88.7 percent), as well as top apps, activities, and policy violations. But what was really interesting about this quarter’s findings is the level of cloud app activity occurring on mobile devices.

As we all know, mobile is the perfect medium for information “snacking.” Enterprise cloud apps, in turn, are perfect for bite-sized work. In a world where the workday never seems to end, every minute is a zero-sum game. So, whether it’s a quick approval of an expense report, a quickly dashed-off email, or a “while I’m thinking of it” document share from cloud storage, nearly half of all activities occur on mobile devices. Some of the most common are send (57 percent), approve (53 percent), view (48 percent), login (47 percent), and post (45 percent).

With all of those activities, mobile is also a place for an increasing number of policy violations. We define a policy violation as when a user attempts an activity on which an administrator has set a policy in the Netskope Active Platform (such as “Don’t share content from cloud storage outside of the company”). We found that 59 percent of all policy violations involving download, and more than one-third of policy violations involving a DLP profile (such as PII, PCI, PHI, Confidential, etc.), occur on mobile devices. Our researchers believe that the high rate of download policy violations on mobile devices could be due to administrators both setting “no download” policies as well as “no download to mobile” policies (the latter because that is a source of concern for data leakage, especially in the case of BYOD), both of which would be triggered on a mobile device.
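The kind of aggregation behind these figures can be sketched simply: evaluate each event against a policy predicate, then tally violations by device type. The event fields and the policy rule below are illustrative assumptions, not Netskope's actual schema.

```python
# Hypothetical sketch: tallying policy violations by device type from an
# event log. Fields and the policy rule are illustrative, not Netskope's.
from collections import Counter

events = [
    {"activity": "download", "device": "mobile", "dlp_profile": "PII"},
    {"activity": "download", "device": "desktop", "dlp_profile": None},
    {"activity": "share", "device": "mobile", "dlp_profile": "PCI"},
    {"activity": "download", "device": "mobile", "dlp_profile": None},
]

def violates(event):
    # Example policy: flag any download, plus any activity touching a DLP profile.
    return event["activity"] == "download" or event["dlp_profile"] is not None

by_device = Counter(e["device"] for e in events if violates(e))
mobile_share = by_device["mobile"] / sum(by_device.values())
print(by_device, round(mobile_share, 2))
```

With this toy log, three of the four violations land on mobile, mirroring the skew the report describes.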

See the Netskope Cloud Report infographic here and get the full report here.

Are you enforcing cloud app policies for mobile users? Tell me here or Tweet it @Krishna_Nswamy #mobilecloudbffs



In Plain Sight: How Hackers Exfiltrate Corporate Data Using Video

By Kaushik Narayan, Chief Technology Officer, Skyhigh Networks

Consumers and companies are embracing cloud services because they offer capabilities simply not available with traditional software. Cyber criminals are also beginning to use the cloud because it offers scalability and speed for delivering malware, such as in the recent case of Dyre, which used file sharing services to infect users. The latest evolution of this trend is attackers using the cloud to overcome a key technical challenge – extracting data from a company. Under the cover of popular consumer cloud services, attackers are withdrawing data from the largest companies in ways that even sophisticated intrusion prevention systems cannot detect.

Previously, researchers at Skyhigh uncovered malware using Twitter to exfiltrate data 140 characters at a time. Skyhigh recently identified a new type of attack that packages data into videos hosted on popular video sharing sites, a technique difficult to distinguish from normal user activity.

The Industrialization of Hacking
The target of these attacks ranges from customer data such as credit card numbers and social security numbers to intellectual property, which can include design diagrams and source code. In recent years, hacking has undergone a revolution. Once a hobbyist pursuit, hacking is now performed at industrial-scale with well-funded teams backed by cartels and national governments. Stealing data is big business, whether to compromise payment credentials and resell them for profit or to gain access to intellectual property that could allow a competitor to catch up on years (or decades) of research and development.

In response, companies have made significant investments in software that can detect telltale signals that attackers have gained access to their network and are attempting to extract sensitive data. With these intrusion prevention systems in place, it can be quite challenging for attackers to remove a large amount of data without being discovered. In the same way that thieves would find it difficult to sneak bags of money out the front door of a bank undetected by guards and security cameras, today’s cyber criminals need a way to mask their exit. That’s why they’ve turned to cloud services to make large data transfers.

Their latest technique involves consumer video sites. There are two attributes that make video sites an excellent way to steal data. First, they’re widely allowed by companies and used by employees. There are many legitimate uses of these sites such as employee training videos, product demos, and marketing the company’s products and services. Second, videos are large files. When attackers need to extract large volumes of data, video file formats offer a way to mask data without arousing suspicions about a transfer outside the company.

How the Attack Works
Once attackers gain access to sensitive data in the company, they split the data into compressed files of identical sizes, similar to how the RAR archive format transforms a single large archive into several smaller segments. Next, they encrypt this data and wrap each compressed file with a video file. In doing so, they make the original data unreadable and further obscure it by hiding it inside a file format that typically has large file sizes. This technique is sophisticated; the video files containing stolen data will play normally.
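The first step, splitting data into identical-size compressed segments (as RAR volumes do), can be illustrated harmlessly in a few lines. This is a purely educational sketch of the segmentation step, not attack tooling; the padding detail is also what later makes the uploads statistically conspicuous.

```python
# Illustrative sketch: compress a byte string and split it into fixed-size,
# identically padded segments, as described in the article's step 1.
import zlib

def split_into_segments(data: bytes, segment_size: int):
    compressed = zlib.compress(data)
    segments = [compressed[i:i + segment_size]
                for i in range(0, len(compressed), segment_size)]
    # Pad the final segment so every segment is the same size. Real tooling
    # would record the original length to strip padding on reassembly.
    segments[-1] = segments[-1].ljust(segment_size, b"\x00")
    return segments

segs = split_into_segments(b"sensitive records " * 1000, segment_size=256)
print(len(segs), "segments, all of size", len(segs[0]))
```

The uniform segment sizes are exactly the fingerprint that anomaly detection can key on, as discussed below in the section on protection.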

They upload the videos containing stolen data to a consumer video sharing site. While they’re large files, it’s not unusual for users to upload video files to these types of sites. If anyone checked, the videos would play normally on the site as well.

After the videos are on the site, the attacker downloads the videos and performs the reverse operation, unpacking the data from the videos and reassembling it to arrive at the original dataset containing whatever sensitive data they sought to steal.



What Companies Can Do to Protect Themselves
Traditional intrusion detection technology generally does not detect data exfiltration using this technique. One way to identify the attack is to look for an anomalous upload of several video files with identical file sizes. Spotting that pattern requires a big-data approach: analyzing the routine usage of cloud services in the enterprise so that anomalous events stand out.
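The identical-file-size heuristic can be sketched as a simple grouping pass over upload records. Field names and the threshold below are illustrative assumptions; a production system would also window by time and weight by baseline behavior.

```python
# Hedged sketch of the detection heuristic: flag any user who uploads
# several video files of identical size. Records and threshold are illustrative.
from collections import defaultdict

uploads = [
    ("alice", "q3_training.mp4", 104_857_600),
    ("mallory", "cats1.mp4", 52_428_800),
    ("mallory", "cats2.mp4", 52_428_800),
    ("mallory", "cats3.mp4", 52_428_800),
    ("bob", "demo.mov", 20_000_000),
]

def flag_identical_size_uploads(uploads, threshold=3):
    counts = defaultdict(int)
    for user, _name, size in uploads:
        counts[(user, size)] += 1
    # A user with `threshold` or more uploads of the exact same size is anomalous.
    return {user for (user, size), n in counts.items() if n >= threshold}

print(flag_identical_size_uploads(uploads))  # expect {'mallory'}
```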

Skyhigh analyzes all cloud activity to develop behavioral baselines using time series analysis and machine learning, and identified this attack in the wild at a customer site. Importantly, the detection relied on analysis of normal usage activity rather than on malware signatures, which don’t exist until an attack has been catalogued. Skyhigh’s approach requires no prior knowledge of the attack before it’s detected.

Companies can proactively take steps to protect themselves by limiting uploads to video sharing sites while allowing the viewing or download of videos. Deploying a cloud-aware anomaly detection solution can also give early warning to an attack in progress and either block it from occurring or quickly allow a company to take action to stop the attack and prevent additional data from being exfiltrated.

The volume and sophistication of attacks is increasing. In this threat environment, companies must take additional steps to protect data while allowing the use of cloud services that also drive innovation and growth in their businesses. State-sponsored attacks and sophisticated criminal organizations are now using the cloud as a delivery vehicle for malware and as an exfiltration vector, but companies can also take advantage of a new generation of cloud-based detection and protection services to safeguard their data and protect themselves. Download our cheat sheet to learn other actionable steps for reducing risk to data in the cloud.


Poodle – How Bad Is Its Bite? (Here’s the Data)

By Sekhar Sarukkai, VP of Engineering, Skyhigh Networks

A major vulnerability affecting the security of cloud services dubbed POODLE (Padding Oracle on Downgraded Legacy Encryption) was reported on October 14th by three Google security researchers—Bodo Moller, Thai Duong, and Krzysztof Kotowicz. Their paper about the vulnerability is available here.

What is POODLE?
POODLE affects SSLv3 or version 3 of the Secure Sockets Layer protocol, which is used to encrypt traffic between a browser and a web site or between a user’s email client and mail server. It’s not as serious as the recent Heartbleed and Shellshock vulnerabilities, but POODLE could allow an attacker to hijack and decrypt the session cookie that identifies you to a service like Twitter or Google, and then take over your accounts without needing your password.

While day-to-day usage of SSL 3.0 is generally limited, backward-compatibility support for the protocol remains prevalent, which exposes nearly all browsers and users to the attack.

The SSLv3 protocol has been in use since its publication in 1996. TLSv1 was introduced in 1999 to address weaknesses in SSLv3, notably introducing protections against CBC (cipher block chaining) attacks. Although SSLv3 is considered a legacy protocol, it is still commonly permitted for backward compatibility by the default configurations of many web servers, including Apache HTTP Server and Nginx. Many browsers will fall back to SSLv3 if an HTTPS connection to a server doesn’t support the TLSv1 protocol or a TLSv1 protocol negotiation fails for any reason.

What’s the risk?
The danger arising from the POODLE attack is that a malicious actor with control of an HTTPS server or some part of the intervening network can cause an HTTPS connection to downgrade to the SSLv3 protocol. An attack against SSLv3’s CBC encryption schemes can then be used to begin decrypting the contents of the session. Essentially, POODLE could allow an attacker to hijack and decrypt the session cookie that identifies a user to a service like Twitter or Google, and then take over that user’s accounts without needing the password.

How to protect your company’s data
We recommend disabling the SSLv3 protocol on all servers, relying only on TLSv1.0 or greater. Additionally, company browsers and forward proxies should disallow SSLv3 and likewise permit only TLSv1.0 or greater as the minimum protocol version. Enterprises should also disable the use of CBC-mode ciphers. To prevent downgrade attacks on retried connections, enable TLS_FALLBACK_SCSV support where your TLS stack offers it.
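In application code, the "TLSv1.0 or greater" recommendation can be expressed with Python's `ssl` module by setting an explicit protocol floor. This is a minimal sketch of the idea; modern Python builds refuse SSLv3 outright, and today TLS 1.2+ would be the sensible minimum.

```python
# A minimal sketch of the recommendation: configure a TLS context that will
# never negotiate anything below TLSv1.0.
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.minimum_version = ssl.TLSVersion.TLSv1  # refuse SSLv3 and below
# Setting an explicit floor documents intent and guards against permissive
# OpenSSL configurations, even where SSLv3 is already compiled out.
print(ctx.minimum_version)
```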

Legacy applications relying solely on SSLv3 should be considered at-risk and vulnerable. Generic encryption wrapper software like Stunnel can be used as a workaround to provide encrypted TLSv1 tunnels.

How many cloud services are vulnerable?
As of this morning, 61% of cloud services had not addressed the POODLE vulnerability with a fix. The fact that many cloud services still support SSLv3 is a sign that cloud providers are not paying attention to what protocols are offered by their SSL stack. Cloud service providers should review their SSL stack configuration and make sure they have disabled SSLv3 and all earlier protocol versions. In the process, they should also ensure the stack’s proper use of ciphers.

We are working with customers to proactively identify vulnerable services and users and provide guidance for measures required to protect their data and user accounts. To learn more about our recommendations for securing corporate data in the cloud, download our cheat sheet.


Malicious Security—Can You Trust Your Security Technology?

By Gavin Hill, Director, Product Marketing And Threat Intelligence, Venafi

Encryption and cryptography have long been thought of as the exemplars of Internet security. Unfortunately, this is not the case anymore. Encryption keys and digital certificates have become the weakest link in most organizations’ security strategies, resulting in diminished effectiveness of other security investments like NGFW, IDS/IPS, WAF, AV, etc.

In my previous post, I discussed the difference between key management and key security. The problem today is not that encryption and cryptography are broken, but rather that implementations for securing and protecting keys and certificates from theft are mediocre. Worse yet, most organizations cannot even tell the difference between rogue and legitimate usage of keys and certificates on their networks, or stop attackers from using them. Bad actors and nation states continue to abuse the trust that most have in encryption, but very few in the security industry are actually doing something about it.

Undermining Your Critical Security Controls
The threatscape has changed.

Even with all the advances in security technology over the last decade, cybercriminals are still very successful at stealing your data. The challenge is that security technologies are still designed to trust encryption. When threats use encryption, they securely bypass other security controls and hide their actions. Let’s review an example of how a bad actor can use keys and certificates to subvert any security technology or control.

Using Keys and Certificates throughout the Attack Chain
The use of keys and certificates in APT campaigns is cyclical. A typical trust-based attack can be broken up into four primary steps that include the theft of the key, use of the key, exfiltration of data, and expansion of its foothold on the network.

[Figure: Keys and certificates used throughout the attack chain]

Step 1: Steal the Private Key
When Symantec analyzed sample malware designed to steal private keys from certificate stores, the same behavior was noted for every malware variant studied. In each sample, the CertOpenSystemStoreA function is used to open stored certificates, and the PFXExportCertStoreEx function exports the following certificate stores:

  • MY: A certificate store that holds certificates with the associated private keys
  • CA: Certificate authority certificates
  • ROOT: Root certificates
  • SPC: Software Publisher Certificates

The malware samples were able to steal the digital certificate and corresponding private key by performing the following actions:

  1. Opens the MY certificate store
  2. Allocates 3C245h bytes of memory
  3. Calculates the actual data size
  4. Frees the allocated memory
  5. Allocates memory for the actual data size
  6. The PFXExportCertStoreEx function writes data to the CRYPT_DATA_BLOB area to which the pPFX points
  7. Writes data (No decryption routine is required when it writes the content of the certificate store)

Step 2: Use the Key
With access to the private key, there are a multitude of use cases for a malicious campaign. Let’s review how cybercriminals impersonate a website and sign malware with a code-signing certificate.

Website impersonation can easily be achieved using the stolen private key as part of a spear-phishing campaign. The attacker sets up a clone of the target website—Outlook Web Access (OWA) or a company portal would be a prime target. Because the site uses the stolen private key and certificate, anyone who visits it sees no errors in the browser. The fake website also hosts the malware intended for the victim.

Step 3: Exfiltrate the Data
Now that the fake website is prepped and ready to go, it’s time to execute the spear-phishing campaign. Using popular social networks like LinkedIn, it is a simple process to profile a victim and formulate a well-crafted email that will entice the victim to click on a malicious link. Imagine you get an email from the IT administrator stating that your password will be expiring shortly, and that you need to change your password by logging into OWA. The IT administrator very kindly also provided you with a link to OWA in the email for you to click on and reset your password.

When you click on the link and input your credentials into the OWA website, not only are your credentials stolen, but malware is installed onto your machine. It’s important to note that the malware is also signed using a stolen code-signing certificate to avoid detection. By signing the malware with a legitimate code-signing certificate the attackers increase their chances of avoiding detection.

In part 2 of this blog series, I will cover step 4 and discuss some examples of the actions trust-based threats perform and how bad actors use keys and certificates to maintain their foothold in the enterprise network. I will also offer some guidance on how to mitigate trust-based attacks.

Register for a customized vulnerability report to better understand your organization’s SSL vulnerabilities that cybercriminals use to undermine the security controls deployed in your enterprise network.

Trust Is a Necessity, Not a Luxury

By Tammy Moskites, Chief Information Security Officer, Venafi

Mapping Certificate and Key Security to Critical Security Controls
I travel all over the world to meet with CIOs and CISOs and discuss their top-of-mind concerns. Our discussions inevitably return to the unrelenting barrage of trust-based attacks. Vulnerabilities like Heartbleed and successfully executed trust-based attacks have demonstrated just how devastating these attacks can be: if an organization’s web servers, cloud systems, and network systems cannot be trusted, that organization cannot run its business.

Given the current threat landscape, securing an organization’s infrastructure can seem a bit daunting, but CISOs aren’t alone in their efforts to protect their critical systems. Critical controls are designed to help organizations mitigate risks to their most important systems and confidential data. For example, the SANS 20 Critical Security Controls provides a comprehensive framework of security controls for protecting systems and data against cyber threats. These controls are based on the recommendations of experts worldwide—from both private industries and government agencies.

These experts have realized what I’ve maintained for years—just how critical an organization’s keys and certificates are to its security posture. What can be more critical than the foundation of trust for all critical systems? As a result, the SANS 20 Critical Security Controls have been updated to include measures for protecting keys and certificates. Organizations need to go through their internal controls and processes—like I’ve done as a CISO—and ensure that their processes for handling keys and certificates map to recommended security controls.

For example, most organizations know that best practices include implementing Secure Sockets Layer (SSL) and Secure Shell (SSH), but they may not realize that they must go beyond simply using these security protocols to using them correctly. Otherwise, they have no protection against attacks that exploit misconfigured, mismanaged, or unprotected keys. SANS Control 12 points out two common attacks for exploiting administrative privileges. The first dupes the administrative user into opening a malicious email attachment; the second is arguably more insidious, allowing attackers to guess or crack passwords and then elevate their privileges—Edward Snowden used this type of attack to gain access to information he was not authorized to access.

SANS Control 17, which focuses on data protection, emphasizes the importance of securing keys and certificates using “proven processes” defined in standards such as the National Institute of Standards and Technology (NIST) SP 800-57. NIST 800-57 outlines best practices for managing and securing cryptographic keys and certificates from the initial certificate request to revocation or deletion of the certificate. SANS Control 17 suggests several ways to get the most benefit from these NIST best practices. I’m going to highlight just a couple:

  • Only allow approved Certificate Authorities (CAs) to issue certificates within the enterprise (CSC 17-10)
  • Perform an annual review of algorithms and key lengths in use for protection of sensitive data (CSC 17-11)

Think for a moment about how you would begin mapping your processes to these two recommendations:

  • Do you have policies that specify which CAs are approved?
  • Do you have an auditable process ensuring that administrators submit certificate requests only to approved CAs?
  • Do you have a timely process for replacing certificates signed by non-approved CAs with approved certificates?
  • Do you have an inventory of all certificates in your environment, their issuing CAs, and their private key algorithms?
  • Do you have an inventory of all SSH keys in your environment, their key algorithms, and key lengths?
  • Do you have a system for validating that all certificates and SSH keys actually in use in your environment are listed in this inventory?

I LOVE that I can say that Venafi solutions allow you to answer “yes” to all of these.

If you are interested in more details about mapping your processes for securing keys and certificates to the SANS Critical Security Controls, stay tuned: my white paper on that subject, coauthored with George Muldoon, will be coming soon.

The 7 Deadly Sins of Cloud Data Loss Prevention

By Chau Mai, Senior Product Marketing Manager, Skyhigh Networks

It’s good to learn from your mistakes. It’s even better to learn from the mistakes of others. Skyhigh has some of the security world’s most seasoned data loss prevention (DLP) experts who’ve spent the last decade building DLP solutions and helping customers implement them. So, we thought we’d pick their brains, uncover some of the most common missteps they’ve seen IT make when rolling out DLP in practice, and share them so you can avoid the mistakes of IT practitioners past.

In this piece, we specifically address mistakes when rolling out DLP to protect data in the cloud. So without further ado – the 7 deadly sins of Cloud DLP:

  • Lust – It’s natural to be tempted by the allure of cloud DLP. However, make sure that your cloud DLP deployment preserves the actual functionality of your cloud applications. You don’t want to break the native cloud applications’ behavior. For example, let’s say your DLP solution has detected sensitive content in Box and enforces it via encryption. Your end users should still be able to preview documents, perform searches, and overall have a seamless experience even with cloud DLP in place.
  • Greed – Cloud applications can contain enormous amounts of information – in some cases, glittering terabytes of data. However, as with traditional on-premise DLP, there’s no need to try and scan everything all at once. We recommend filtering on user attributes (group, geography, employee type, etc.) as well as on sharing permissions (i.e. externally vs internally) and prioritizing high-risk documents.
  • Envy – Do your employees envy others who have the ability to do their work and access cloud apps from anywhere they are? Companies are increasingly embracing the BYOD trend, and cloud DLP helps to enable that. Tame the green-eyed monster at your organization by letting cloud DLP catch all activity regardless of where the user is located, what operating system they’re using, and if they’re on-network or off-network – without the hassle of VPN.
  • Gluttony – Don’t overreach and accidentally intrude on user privacy with your DLP deployment. Security teams oftentimes have access to very sensitive information, but their access should be limited to business traffic. Make sure your cloud DLP practices do not involve sniffing personal traffic (such as employees’ use of Facebook or their activity on personal banking sites).
  • Wrath – Avoid the wrath of employees and don’t let your cloud DLP solution negatively impact the user experience. Your employees should be able to seamlessly access and use cloud applications and enjoy the rapid responsiveness they’re accustomed to. Forward-proxies, especially when used for scanning a large amount of traffic, can cause lag and performance issues that are visible (and irritating) to the end user.
  • Pride – Having strong DLP technology, processes, and people in place is something to be proud of. However, not all cloud DLP solutions are created equal. Keep your cloud DLP program running smoothly by avoiding solutions that require you to deploy agents and install certificates – an operational nightmare. And certain cloud apps, such as Dropbox and Google Drive, will detect the man-in-the-middle and refuse to work as designed.
  • Sloth – This is where it pays off to be a little lazy. Let your cloud DLP provider integrate with your existing enterprise DLP solution. There’s no reason to re-work the efforts you’ve put into the people, processes, and technology. Look for a vendor that will extend your existing on-premise DLP policies to the cloud.
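At the heart of every DLP engine, cloud or on-premise, is content inspection. A toy illustration: find candidate card numbers with a regex, then cut false positives with a Luhn checksum check (a stand-in for the richer validation, such as proximity matching, that commercial engines add).

```python
# Toy DLP content inspection: regex candidates filtered by Luhn checksum.
import re

def luhn_ok(number: str) -> bool:
    # Standard Luhn check: double every second digit from the right,
    # sum the digits of the doubled values, and require a total % 10 == 0.
    digits = [int(d) for d in number][::-1]
    total = sum(digits[0::2]) + sum(sum(divmod(2 * d, 10)) for d in digits[1::2])
    return total % 10 == 0

def find_card_numbers(text: str):
    candidates = re.findall(r"\b\d{13,16}\b", text)
    return [c for c in candidates if luhn_ok(c)]

doc = "Order 4111111111111111 shipped; ref 1234567890123456."
print(find_card_numbers(doc))  # only the Luhn-valid number survives
```

Both 16-digit strings match the regex, but only the first passes the checksum, which is how a DLP engine avoids flagging every long number it sees.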

Cloud DLP is rapidly becoming a priority for security and compliance teams. As you evaluate solutions, be sure to keep these mistakes in mind. To learn more about common DLP missteps, check out our cheat sheet.

PCI Business-as-Usual Security—Best Practice or Requirement?

By Christine Drake, Senior Product Marketing Manager, Venafi

At the 2014 PCI Community Meetings in Orlando in early September, the PCI SSC kicked off the conference with a presentation by Jake Marcinko, Standards Manager, on business-as-usual (BAU) compliance practices. The PCI DSS v3, released in November 2013, emphasizes that security controls implemented for compliance should be part of an organization’s business-as-usual security strategy, enabling organizations to maintain compliance on an ongoing basis.

Compliance is not meant to be a single point in time that is achieved annually to pass an audit. Instead, compliance is meant to be an ongoing state, ensuring sustained security within the Cardholder Data Environment (CDE). Security should be maintained as part of the normal day-to-day routines and not as a periodic compliance project.

To highlight the lack of business-as-usual security processes, Jake referenced the Verizon 2014 PCI Compliance Report: almost no organization achieved compliance without remediation following the assessment, and continued compliance is dismally low—only about 1 in 10 organizations passed all 12 PCI DSS requirements in their 2013 assessments, though that was up from 7.5% in 2012.

Four elements of ongoing, business-as-usual security processes were outlined:

  • Monitor security control operations
  • Detect and respond to security control failures
  • Understand how changes in the organization affect security controls
  • Conduct periodic security control assessments, and identify and respond to vulnerabilities

Jake mentioned that automated security controls help maintain security as a business-as-usual process by providing ongoing monitoring and alerting. If manual processes are used instead, organizations need to ensure that monitoring is conducted regularly enough to provide continuous security.

The PCI DSS emphasis on business-as-usual security processes does not apply to any particular PCI DSS requirement, but instead applies across the standard. When considering how this applies to keys and certificates, manual security processes are unsustainable. A study by Ponemon Research found that, on average, there are 17,000 keys and certificates in an enterprise network, but 51% of organizations are unaware of how many certificates and keys are actively in use. Although some of these keys and certificates will not be in scope of the PCI DSS, a considerable number are used in the CDE to protect Cardholder Data (CHD).

In a recent webinar on PCI DSS v3 compliance for keys and certificates with 230 attendees, a poll revealed that over half (53%) either applied manual processes to securing their keys and certificates (41%) or did not secure them at all (12%). When specifically asked about their business-as-usual security processes for keys and certificates, more than half (53%) said they had no business-as-usual processes, but merely applied a manual process at the time of audit.

Organizations need automated security to deliver business-as-usual security processes for keys and certificates. This should include comprehensive discovery for a complete inventory of keys and certificates in scope of the PCI DSS, daily monitoring of all keys and certificates, establishment of a baseline, alerts of any anomalous activity, and automatic remediation so that errors, oversights, and attacks do not become breaches.
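The monitoring-plus-baseline idea can be sketched as a daily audit that diffs today's certificate inventory against a recorded baseline and alerts on anything new, changed, missing, or expiring soon. The inventory records below are illustrative assumptions, not any vendor's data model.

```python
# Hedged sketch of daily key/certificate monitoring against a baseline.
# Inventory maps endpoint -> certificate fingerprint; values are illustrative.
from datetime import date, timedelta

baseline = {"web01:443": "ab:12", "mail01:993": "cd:34"}
today = {"web01:443": "ab:12", "mail01:993": "ff:99", "rogue:8443": "ee:55"}
expiry = {"web01:443": date.today() + timedelta(days=20)}

def audit(baseline, today, expiry, warn_days=30):
    alerts = []
    for host in today.keys() - baseline.keys():
        alerts.append(f"new cert endpoint: {host}")
    for host in baseline.keys() - today.keys():
        alerts.append(f"cert disappeared: {host}")
    for host, fp in today.items():
        if host in baseline and baseline[host] != fp:
            alerts.append(f"fingerprint changed: {host}")
    for host, exp in expiry.items():
        if exp - date.today() <= timedelta(days=warn_days):
            alerts.append(f"expiring soon: {host}")
    return sorted(alerts)

for alert in audit(baseline, today, expiry):
    print(alert)
```

In this toy run the audit surfaces an unknown endpoint, a changed fingerprint, and an expiring certificate, the three classes of anomaly the paragraph above calls out.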

During his presentation, Jake noted that, for now, implementing business-as-usual security controls is a best practice according to the PCI DSS v3, and not a requirement. But he said that best practices often become requirements—so don’t wait! Start incorporating business-as-usual security practices now.

Learn how Venafi can help you automate key and certificate security required in PCI DSS v3—simplifying and ensuring repeated audit success while providing ongoing security for your CDE.

The Ability to Inspect What You Didn’t See

By Scott Hogrefe, Senior Director, Netskope

Content inspection has come a long way in the past several years. Whether it is our knowledge and understanding of different file types (from video to even the most obscure) or the reduction of false positives through proximity matching, the industry has cracked a lot of the code and IT and businesses are better off as a result. One constant that has remained true, however, is the fact that you just can’t inspect content you can’t see. This probably seems like an obvious point, and for traditional solutions, we can solve for this by simply pointing the tool at repositories that might have been (for whatever reason) overlooked. But these repositories are relatively easy to discover because, frankly, it’s harder to hide content when it’s occupying storage that IT is responsible for maintaining in the first place. It’s hard to lose a NAS (though not impossible — some of us have stories we could share, no doubt). But this changes when it comes to content in the cloud. Let’s break down some of the challenges here:

  • There are 153 cloud storage providers today and the average organization, according to the Netskope Cloud Report, is using 34 of them. Considering IT is typically unaware of 90% of the cloud apps running in its environment, this means content is sitting in 30+ cloud storage apps that IT has no knowledge of (and that’s just cloud storage; the average enterprise uses 508 cloud apps!).
  • Once you know that an app is in use, inspection of content in the cloud has required movement of said content. Since many traditional tools perform inspection of content as it flies by, the scope of inspection is limited to when content is being uploaded or when it is downloaded. Therefore, content may exist in a cloud app for several years before it’s ever inspected.
  • The “sharing” activity so popular in cloud apps today is done by sending links rather than the traditional “attachment” method. Since the link doesn’t contain the file, inline inspection never sees the content.

For the first of our challenges above, vendors like Netskope can quickly discover all apps running in your enterprise and tell you whether the usage of these apps is risky or not.

For challenges two and three, Netskope just introduced Netskope Active Introspection, which enables customers to examine, take action or enforce policies over all content stored in a cloud app. This means that regardless of whether the data was placed in a cloud app yesterday or years ago, enterprise IT can take advantage of this solution’s leading real-time and activity-aware platform to protect it.  In addition, Active Introspection provides data inventory and classification, understands app and usage context, creates a content usage audit trail, and can be deployed alongside Active Cloud DLP.

What’s even more killer is that Active Introspection can be run as part of your overall policy framework and can typically run through an entire repository in less than 30 minutes. So let’s say that you want to encrypt specific data – Active Introspection discovers the content, understands whether the content meets certain criteria (such as sensitive or high value content), and completes the step of encrypting it, right then and there. There are additional actions that can be triggered automatically, such as alerting the end user, changing the ownership of the content to the appropriate person, encrypting the content, and many more.
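Netskope hasn't published the internals of Active Introspection, but the discover, classify, act sequence described above can be sketched generically. Everything below (the toy repository, `is_sensitive`, `apply_policy`, the marker strings) is a hypothetical illustration, not a real Netskope API:

```python
# Hypothetical sketch of a discover -> classify -> act policy loop over
# content already at rest. None of these names are real Netskope APIs.

SENSITIVE_MARKERS = ("SSN", "CONFIDENTIAL", "CARD-NO")

def is_sensitive(content: str) -> bool:
    """Toy classifier: flags content containing a known sensitive marker."""
    return any(marker in content.upper() for marker in SENSITIVE_MARKERS)

def apply_policy(files: dict) -> dict:
    """Walk every file in a repository and choose an automatic action."""
    actions = {}
    for name, content in files.items():
        if is_sensitive(content):
            actions[name] = "encrypt"   # could also be: alert, change-owner
        else:
            actions[name] = "allow"
    return actions

repo = {
    "q3_report.txt": "Quarterly revenue summary",
    "hr_file.txt": "Employee SSN records - Confidential",
}
print(apply_policy(repo))
```

The point of the sketch is simply that the policy runs over content at rest, not just content in transit, so files placed in the app years ago get the same treatment as files uploaded today.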

My colleague, Rajneesh Chopra, just published a Movie Line Monday that talks about how customers are using Active Introspection and inspection capabilities together. If we think of this as a spectrum, imagine that on one side you’ve got content that’s constantly being moved in and out of a cloud app – for that, we have inspection that’s happening in real-time. On the other side of the spectrum you have content that’s already in the cloud app and being shared via links – for that, we have introspection. It’s complete coverage. You should check it out here, but suffice it to say, for our customers, the availability of Active Introspection within the Netskope Active Platform means that they are now able to go more confidently into cloud apps they’ve cautiously embraced. For these customers, there’s a strong understanding that safe cloud enablement requires a comprehensive solution that can be flexible enough to cover the myriad use cases they’re confronted with.

Do you have a solid handle on the cloud apps in your organization? What about the content contained within them? We’d love to hear from you and address any questions you have or show you a demo. Reach out to us at [email protected] or @Netskope to get a conversation started.


4 Lessons Learned From High Profile Credit Card Breaches

By Eric Sampson, Manager and QSA Lead, BrightLine

The media has been filled with stories of high profile credit card breaches, including those from Target, Neiman Marcus, P.F. Chang’s and most recently Home Depot. Details on the Home Depot breach are still emerging, but the details around the Target and Neiman Marcus breaches are well known and have the public asking whether it will happen again.

However, the real question we should be asking ourselves is when will it happen again?

Experienced Qualified Security Assessors (QSAs) will acknowledge that securing the cardholder data environment by meeting PCI DSS requirements provides a certain baseline level of security; however, it would be naïve to say that this alone will protect an organization from an attack. In several areas, a merchant should recognize that the PCI DSS is an important start, but only a foundation. One example is event logging.

The detailed requirements for event logging (section 10.6) assume that a merchant or service provider will use the logs for investigative purposes. That said, having a process to review audit logs on a daily basis does not guarantee that the employees responsible for reviewing logs and alerts will identify important or suspicious events in a timely and accurate manner. Similarly, during a PCI DSS assessment, QSAs are tasked with validating that daily log review processes and/or log harvesting technologies are implemented. However, QSAs will not critique the details of the log review process or evaluate the robustness of log parsing tools.

So, how does this pertain to recent breach events?

It has been reported that many security log events pertinent to the breaches were generated, but were either ignored or not acted upon in a timely manner, perhaps lost in the flood of audit logs.

To go beyond the baseline standard, we can ask more probing questions such as:

  • How do we ensure log events trigger the correct action?
  • How quickly should they be addressed?
  • Does the team responsible for reviewing these events and alerts have sufficient training and tools necessary to identify possible attacks?

Verizon’s 2014 Data Breach Investigations Report found that just 1% of data breaches were discovered through a review of audit logs. Surely a much higher number of breaches could be detected through effective internal review of audit logs. What does that say about our ability to detect breaches as they occur?

I have four thoughts for consideration:

  1. Devote resources to training. Individuals responsible for reviewing security events and alerts need to develop the skills to identify and act upon suspicious events that may indicate unauthorized activity.
  2. Invest in good tools. Does the organization currently have sufficiently capable log monitoring and file integrity monitoring tools? These tools should allow an organization to scan large amounts of information while still extracting the specific events that could impact the organization.
  3. Be proactive. Understanding how alerts are generated, what data is contained in each alert and who reviews them is paramount. A careful plan can avoid discovering too late that a critical system is missing logs, which may result in an incomplete view of an incident and potentially unnecessary future expenditure.
  4. Prepare drills. In a variety of specialties, including the military, medicine, and the airline industry, exercises in handling emergency events have made many lives safer. Although we try to prevent a breach from happening, drills help ensure that if one does happen, it can be resolved quickly and effectively. Reviewing audit logs and alerts can be a tedious activity at times; make it interesting by staging mock attacks. Consider making this exercise a component of incident response plan tests and penetration tests.
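To make the "scan large amounts, extract specific events" idea from the second recommendation concrete, here is a minimal filtering sketch. The log format and the suspicious patterns are illustrative assumptions; a real deployment would rely on a SIEM or dedicated log-monitoring product, not a hand-rolled filter:

```python
# Minimal sketch of pulling suspicious events out of a large log stream.
# Patterns and log format are made up for illustration only.
import re

SUSPICIOUS_PATTERNS = [
    re.compile(r"authentication failure", re.IGNORECASE),
    re.compile(r"new admin user", re.IGNORECASE),
    re.compile(r"outbound transfer", re.IGNORECASE),
]

def extract_alerts(log_lines):
    """Return only the lines matching a known-suspicious pattern."""
    return [line for line in log_lines
            if any(p.search(line) for p in SUSPICIOUS_PATTERNS)]

logs = [
    "2014-10-02 09:14 sshd: authentication failure for root from 10.1.2.3",
    "2014-10-02 09:15 app: user alice viewed dashboard",
    "2014-10-02 09:16 audit: new admin user 'svc_tmp' created",
]
for alert in extract_alerts(logs):
    print(alert)
```

Even a filter this crude shows why tooling matters: two of the three lines above warrant a human's attention, and nobody finds them by reading raw logs end to end.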

Organizations face an ever expansive landscape of threats, vulnerabilities, and risks, not to mention an ever rising mountain of logs to review and manage. Bringing thoughtful consideration to security log management will enable an organization to take action where needed, understand important events, and address potential security threats when identified.

Was the Cloud ShellShocked?

By Pathik Patel, Senior Security Engineer, Skyhigh Networks

Internet security has reached its highest DEFCON level. Another day, another hack – the new bug on the scene, known as “Shellshock,” blew up headlines and Twitter feeds.

Shellshock exposes a vulnerability in the Bourne Again Shell (Bash), the widely used shell for Unix-based operating systems such as Linux and OS X. The bug allows an attacker to remotely execute commands on vulnerable systems. The vulnerability is extremely easy to exploit, requiring neither extensive application knowledge nor significant computational resources. The extensive functionality it grants an attacker, along with the relative ease of launching an attack, led industry analysts to label the bug more serious than Heartbleed. The National Institute of Standards and Technology assigned the vulnerability its highest risk score of 10.

What are the implications of ShellShock for cloud security? At Skyhigh, we reviewed enterprise use of over 7,000 cloud service providers for vulnerabilities. The results surprised us.

We initially expected to discover rampant vulnerability to Shellshock amongst cloud service providers. The data portrayed a more mixed-bag of cloud application security.

Only four percent of end-user devices in the enterprise environments we examined run the vulnerable version of Bash – reflecting the dominance of Windows in enterprise networks. We also found that only three cloud service providers employ the common gateway interface (CGI), the primary vector of attack. While cloud service providers may be vulnerable through other vectors (e.g. ForceCommand), the fact that they avoid the bug’s primary attack vector through design and architectural complexity is an indication of the maturity of today’s cloud applications.

However, when we scanned the top IaaS providers (e.g. AWS, Rackspace) for the Bash vulnerability, 90% of checks reported the vulnerable Bash version on the default images provisioned. Customers should not wait for their IaaS providers to take the initiative. To ensure immunity from Shellshock, all organizations should immediately update their systems with the latest version of Bash.
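Administrators can confirm whether a host's Bash has actually been patched using the widely circulated environment-variable test for CVE-2014-6271, wrapped here in a small Python helper (this assumes Bash is installed in a standard location; on a patched system it prints that Bash appears patched):

```python
# Runs the well-known CVE-2014-6271 check: a vulnerable Bash executes the
# "echo vulnerable" smuggled in through the environment-variable function
# definition, while a patched Bash ignores it.
import subprocess

def bash_is_vulnerable() -> bool:
    result = subprocess.run(
        ["bash", "-c", "echo shellshock-test"],
        env={"x": "() { :;}; echo vulnerable",
             "PATH": "/usr/bin:/bin"},  # minimal PATH so bash resolves
        capture_output=True,
        text=True,
    )
    return "vulnerable" in result.stdout

if __name__ == "__main__":
    if bash_is_vulnerable():
        print("Bash is VULNERABLE - upgrade immediately")
    else:
        print("Bash appears patched")
```

Run this on every image in the fleet, including the default IaaS images mentioned above, rather than trusting that a provider's base image is current.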

But remediation measures shouldn’t end there. Given the current rate of breaches, organizations can expect the next event won’t be far off. Our recommendation: a Web Application Firewall (WAF) deployed to protect against pre-defined attack vectors can come in handy at times like this. System administrators can quickly write WAF rules to defend against this and similar bugs. In our case, we quickly updated our WAF rules in addition to updating the vulnerable Bash version.

A sample ruleset for mod_security (WAF) is as below:

Request Header values:
SecRule REQUEST_HEADERS "^\(\) {" "phase:1,deny,id:1000000,t:urlDecode,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

SecRule REQUEST_LINE "\(\) {" "phase:1,deny,id:1000001,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

GET/POST names:
SecRule ARGS_NAMES "^\(\) {" "phase:2,deny,id:1000002,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

GET/POST values:
SecRule ARGS "^\(\) {" "phase:2,deny,id:1000003,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

File names for uploads:
SecRule FILES_NAMES "^\(\) {" "phase:2,deny,id:1000004,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

We recommend evaluating this ruleset based on your own application design. For additional best practices, check out our five keys for protecting data in the cloud.


2015 PCI SIG Presentations—Rallying the Vote for Securing Keys and Certificates

By Christine Drake, Senior Product Marketing Manager, Venafi

At the 2014 PCI Community Meetings in Orlando, the 2014 PCI Special Interest Groups (SIGs) provided updates on their progress, and presentations were given on the 2015 PCI SIG proposals in hopes of getting the votes to become 2015 PCI SIG projects. As I’ve mentioned in previous blogs, Venafi has co-submitted a 2015 PCI SIG proposal with SecurityMetrics on Cryptographic Keys and Digital Certificates Security Guidelines. In the 2015 SIG proposal presentations, Kevin Bocek, VP of Security Strategy and Threat Intelligence at Venafi, delivered the presentation for this SIG proposal on securing keys and certificates. Having watched the sessions at the PCI Community Meetings, I believe now is the right time for this PCI SIG topic.


The 2014 PCI Community Meeting keynote from Bob Arno, Adventures of a Thiefhunter, really called into question our trust of other people. He talked about how teams of pickpockets work together to steal from unsuspecting victims and how they use the stolen credit cards. The pickpockets are successful because we generally trust the people around us. Keys and certificates also establish trust, and in both cases criminals are leveraging this trust to avoid detection while committing their crimes.

Merchants, financial institutions, and payment processors rely on thousands of keys and certificates as the foundation of trust in the cardholder data environments (CDE), protecting cardholder data (CHD) across their websites, virtual machines, mobile devices, and cloud servers. Yet it is this very trust that cybercriminals want to use, not only to evade detection, but to achieve authentication and trusted status that bypasses other security controls and allows their actions to remain hidden. If only one of your critical keys or certificates is compromised, the digital trust you have established is eliminated. And this opens organizations up to PCI DSS audit failures and, more importantly, breaches.

The PCI SIG on Cryptographic Keys and Digital Certificates Security Guidelines has already rallied support from Global 100 merchants, PCI Qualified Security Assessors (QSAs), and security experts, and we’re looking for more support from the PCI community.

The 2015 PCI SIG proposals will be presented again at the 2014 PCI Community Meetings in Berlin (Oct 7-9). Then PCI Participating Organizations will vote on the 2015 PCI SIG proposals from October 13-23. After the vote, the PCI Security Standards Council (PCI SSC) will select 2-3 presentations to become 2015 PCI SIG projects. In early November, there will be a call for participation for the selected SIGs and the projects will kick off in January 2015.

Want more information? Want to get involved? Visit the website for the PCI SIG on Cryptographic Keys and Digital Certificates Security Guidelines at

CSA Congress Recap Roundup

Last week, the CSA Congress and IAPP Privacy Academy were held in San Jose, California. It was the Cloud Security Alliance’s first time partnering with IAPP on their respective events. It was a successful event where cloud security and privacy professionals were able to rub elbows and learn best practices across both fields.

During Congress, there was a spectrum of releases, events, awards, speakers, and survey results encompassing CSA’s endeavors. Below are some links that aggregate some of the activity that occurred during CSA Congress 2014.

Ron Knode Award Winners

Each year at Congress, the CSA recognizes a few of our members around the globe for their excellence in volunteerism. The award is named in honor of Ron Knode, a member of the CSA family who passed away in 2012, and recognizes members whose contributions have been invaluable. To learn who the winners of the 2014 Ron Knode Service Awards were, please visit –

Big Data Taxonomy Document

The Cloud Security Alliance’s Big Data Working Group released the Big Data Taxonomy Report, a new guidance report that aims to help decision makers understand and navigate the myriad choices within the big data designation, including data domains, compute and storage infrastructures, data analytics, visualization, security and privacy. For more information on the report, please visit –

CSA Survey Finds IT Professionals Underestimating How Many Cloud Apps Exist in the Business Environment

In what could be called a tale of perception versus reality, the CSA released the results of a new survey that found a significant difference between the number of cloud-based applications IT and security professionals believe to be running in their environments, and the number reported by cloud application vendors. The survey, titled Cloud Usage: Risks and Opportunities, was released at CSA Congress 2014. For more information, please visit –

Hackathon On! Cloud Security Alliance Challenges Hackers to Break its Software Defined Perimeter (SDP) at CSA Congress 2014

The CSA launched its second Hackathon at the CSA Congress, to validate the CSA Software Defined Perimeter (SDP) Specification to protect application resources distributed across multiple public clouds. In a twist from its last event (where no one was able to hack the SDP), the CSA is inviting Congress participants, along with hackers from all over the world to attempt to access a file server in a public cloud, which is protected by the SDP via a different public cloud. The first participant to successfully capture the target information on the protected file server will receive $10,000. Additionally, all participants will be entered into a random drawing to win $500. For more information, please visit –

To participate in Hackathon, visit –

The Shared Burden of Cloud Data Security & Compliance

By Gerry Grealish, Chief Marketing Officer, Perspecsys

Data security remains a top concern for enterprises deploying popular cloud applications. While most will instinctively think of cloud data security and compliance as being handled only by IT departments, many enterprises are realizing that all aspects of security – from selecting a cloud service provider (CSP) to monitoring cloud use over time – require involvement across the organization.



Cloud Data Security & Compliance Begins with Vetting Providers
There are key areas of due diligence for an enterprise depending on its industry, but all share common security requirements when selecting a CSP. Perhaps, as TechTarget recently suggested, FedRAMP standards will come to regulate security outside the government as well, but for now enterprises must have their own standards for evaluating a CSP. An excellent existing resource is the Security, Trust and Assurance Registry (STAR) Program supported by the Cloud Security Alliance (CSA). This public registry provides a comprehensive set of offerings for establishing trust in CSPs. The CSA’s Cloud Controls Matrix (CCM) includes a framework of cloud security standards, and its Consensus Assessments Initiative Questionnaire (CAIQ) offers questions an enterprise should ask any CSP under consideration. CSPs should also be able to provide details on any third-party security certifications they have obtained, e.g. the ISO/IEC 27001 standard for information security management systems (ISMS).

Questions for the CSP frequently begin with specifics on the strategies used – such as encryption for data protection and multifactor user authentication for cloud access. It is also important to know who will have access to data, how often audits are conducted, what security incidents, if any, have occurred in the past, and, if there has been an incident, how cloud customers were notified and how quickly. Having representation from across the enterprise involved in the vetting of a CSP is critical – not only IT, but also Security, Data Privacy & Governance and End Users can help ensure all relevant questions are answered and that necessary security protocols are implemented. The standard language used in the FedRAMP contract example is one place to start for any enterprise signing on with a new CSP.

Internal Security Standards
Security and compliance of sensitive corporate data going to the cloud falls primarily on the enterprise itself. Despite any guarantees in contracts with CSPs, when a security breach occurs it is the enterprise that experiences the consequences, and arguably the enterprise that holds the greatest interest in minimizing damages for itself and its customers. If there is a security incident, clients and customers will certainly look to the enterprise itself to protect their data.

Internal security standards begin with adherence to well-defined protocols and security strategies established and agreed to by – again – not just IT, but representatives from Legal, Security, Governance and End Users. Questions to be answered include what data will actually be allowed to leave the physical premises of the enterprise and in what form. Industry and regulatory penalties compel most industries to have clear security standards in place. In some cases, security incidents have brought on class-action lawsuits against the enterprise. Strict internal security standards are one way to further protect the enterprise and its customers from having to go that route.

Employee Buy-In is Key
With the proliferation of mobile computing and bring your own device (BYOD), it is essential that employees participate in, understand and agree to the security policies established for the enterprise. This includes employees throughout the organization – the time, resources and money it takes to establish this buy-in through training, policy communication and proper monitoring or support is well worth it compared to the damages organizations experience from careless BYOD policies.

Security Strategies – Encryption and tokenization
Encryption and tokenization are two data security methods that many enterprises are using to strengthen their cloud security strategy while maintaining control of their cloud data. Both methods can be used to safeguard sensitive information on public networks, the Internet, and mobile devices. These powerful and interoperable solutions are also being used by leading organizations to ensure compliance with sector-specific requirements such as HIPAA, PCI DSS, GLBA, and CJIS.
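To make the distinction concrete, here is a toy illustration of tokenization: the sensitive value never leaves the enterprise, and the cloud application only ever stores a random surrogate. A production gateway would use a hardened, persistent token vault, not the in-memory dictionary assumed here:

```python
# Toy tokenization sketch: sensitive values stay in a local "vault" and the
# cloud app only ever sees a random, meaningless surrogate token.
# A real deployment uses a hardened vault service, not a Python dict.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original sensitive value

    def tokenize(self, sensitive_value: str) -> str:
        """Replace a sensitive value with a random surrogate token."""
        token = "tok_" + secrets.token_hex(8)
        self._vault[token] = sensitive_value
        return token

    def detokenize(self, token: str) -> str:
        """Recover the original value; only possible inside the enterprise."""
        return self._vault[token]

vault = TokenVault()
token = vault.tokenize("4111-1111-1111-1111")  # card number stays on-premises
print(token)                                   # the cloud stores only this
assert vault.detokenize(token) == "4111-1111-1111-1111"
```

Unlike encryption, the token has no mathematical relationship to the original value, so a breach of the cloud provider exposes nothing usable; the trade-off is that the enterprise must operate and protect the vault.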

While hacking and data attacks continue to occur, an enterprise with proven security strategies in place minimizes the impact for itself and its customers. An enterprise with security responsibility held by not just IT, but other departments as well, including end-users, puts itself in the best possible situation to avoid major data breaches and be prepared to deal with one should it occur. See this infographic on how to respond to a cloud security breach, should one occur.

About the Author
Gerry Grealish is the Chief Marketing Officer at Perspecsys and is responsible for defining and executing the marketing and product vision. Previously, Gerry ran Product Marketing for the TNS Payments Division, helping create and execute the marketing and product strategy for its payment gateway and tokenization/encryption security solutions.