Zen and the Art of Acing Your Cloud Compliance Audit

By Mike Pav, VP of Engineering, Spanning by EMC

We all know cloud adoption is rampant, even though cloud security remains a big concern; a recent study from CloudEntr showed that 89% of IT pros are worried about cloud security. While IT admins are busy ensuring compliance for sanctioned IT, shadow IT runs rampant, causing headaches they don’t even know they have. Because of this, the word “audit” often brings to mind the onerous thudding of storm troopers marching in. A heavy weight settles into the stomach as blood pressure spikes with a sharp intake of breath.

But what if you could approach an audit with zen-like calm? Good news: it’s possible. It’s all about creating an audit-friendly culture within your company such that an auditor could walk in any time and you’d get a clean bill of health. Here’s how to do it:

  • Understand the alphabet soup of regulations and frameworks. Which ones apply to your organization? What controls apply to you? The Cloud Security Alliance offers a Cloud Controls Matrix (CCM) that is a great place to get started.
  • Embrace Shadow IT. Accept that shadow IT will exist whether you like it or not, and take the necessary steps to ensure that what you don’t know doesn’t hurt you the next time a compliance audit comes your way. First, discover what rogue apps are being used to store or transmit company data. Then, analyze each one for risk by evaluating the SaaS vendor with tools like the Cloud Controls Matrix or Skyhigh Networks’ risk assessment. Finally, either take the appropriate measures to secure these apps or find an alternative that satisfies the employees’ needs in terms of productivity and the company’s needs in terms of compliance.
  • Build compliance into your company’s DNA. If we may modify the old saying a bit, live each day like it’s your last before the auditor arrives. Educate your entire staff about how using shadow IT might harm the well-being of the company, and build in audit-proofing as you create or revise processes.
  • Move to the cloud – with your eyes wide open. Cloud providers have already done a lot of the security work for you, so they’ll have built-in protection better (and cheaper) than any you could build yourself in-house. But it’s important to understand what they have covered and what blanks are left for you to fill in. Before signing up for cloud services, put the provider through their paces in terms of security, and make sure that the security evaluation is SaaS-specific and not just reusing your on-premises checklist.

If you want to greet your next audit feeling calm and secure, we invite you to join CSA’s Jim Reavis, Harold Byun of Skyhigh Networks and me, Mike Pav of Spanning, to explore these issues in more depth at our upcoming webinar “Cloud Security: 3 Ways to Embrace and Ace Your Compliance Audits” on Thursday, December 11 at 10:00am CT. Click here to register now.

CSA Guide to Cloud Computing – Now Available

By Jim Reavis, Executive Director CSA (Twitter @jimreavis); Brian Honan, President CSA Chapter Ireland (Twitter @BrianHonan); and Raj Samani, Chief Innovation Officer CSA & EMEA CTO Intel Security (Twitter @Raj_Samani)

We are pleased to announce the availability of “CSA Guide to Cloud Computing: Implementing Cloud Privacy and Security.” The first of its kind for the CSA, this book aims to incorporate as much of the excellent research conducted by the CSA community as possible into one single publication. It incorporates not only research from within the CSA community but also the latest industry information on threats and the measures that can be used to protect those using or considering the cloud.

In 2014, we witnessed a number of attacks that led to headlines declaring that the cloud is not a safe platform to host data. The reality is not so binary; this publication therefore aims to dispel some of these myths and provide real, practical information on how to leverage a Cloud Service Provider whilst managing the risk to a level that you and your customers are comfortable with.

So what does the book entail?

The following defines how the book is structured:

  • Chapter One: We start with a view into what the cloud actually is and its various models, and also consider the benefits it offers and the role it plays within the internet economy.
  • Chapter Two: A practical guide into how to select and engage with a Cloud Service Provider, this looks at the available mechanisms to measure the security deployed by prospective providers.
  • Chapter Three: A view into the top threats to cloud computing that will include references to CSA research as well as third parties that have evaluated the threat landscape.
  • Chapter Four: Analysis into the top threats associated with mobile computing for the cloud.
  • Chapter Five: Building security into the cloud – Following two chapters considering the threats to cloud computing, we will turn our focus to the steps that end customers need to consider in order to make the move to the cloud.
  • Chapter Six: Certification standards for cloud computing – Whilst the previous chapter presents the security controls to mitigate the threat, the reality is that for many end customers the ability to influence the security measures will be limited. Indeed, even the level of transparency into the controls deployed will be limited. This is why cloud certifications will be so important: they are used more and more as the vehicle to provide assurance to potential customers regarding the security deployed by providers.
  • Chapter Seven: The Privacy imperative – The discussion about privacy associated within the cloud is one of the most contentious issues within technology. This chapter will consider the overall debate, and provide mechanisms for both providers, and end customers to address many of these concerns.
  • Chapter Eight: CSA Research topics – As mentioned earlier, our intention is to provide a singular reference for all CSA research. This chapter will provide the reader with an overview of the various working groups within the CSA, and details of their current findings.
  • Chapter Nine: Dark Clouds, managing security incidents in the cloud – With corporate resources now stored, and managed (to some extent) by third parties, the need to have a strong security incident management policy is imperative. This chapter will recommend the steps required to address the fundamental question; what happens when something does go wrong?
  • Chapter Ten: The Future Cloud – Cloud computing is evolving, and this chapter considers its role within critical national infrastructure, as well as what will be required to secure such critical assets. It is intended to provide a view into the components required to secure the cloud of tomorrow.

We hope you enjoy the book and find the information it contains useful in your journey into the cloud.

The CSA Guide to Cloud Computing is available in Paperback and Kindle versions and can be found here on Amazon.

Right to Be Forgotten: Guidelines from WP29

Update: The final document regarding the right to be forgotten has been published. A new article, which goes into more depth and analyzes the details of the Guidelines published by the Article 29 Working Party, is available here: http://itlawgroup.com/resources/articles/237-right-to-be-forgotten-guidelines-casting-a-wider-net

The following blog excerpt on “Right to Be Forgotten: Guidelines from WP29” was written by the external legal counsel of the CSA, Ms. Francoise Gilbert of the IT Law Group. We repost it here with her permission. It can be viewed in its original form at: http://www.francoisegilbert.com/2014/11/right-to-be-forgotten-guidelines-from-wp29/

The Article 29 Working Party (WP29) has adopted Right to Be Forgotten Guidelines to help Data Protection Authorities in the implementation of the May 13, 2014 judgment of the Court of Justice of the European Union (CJEU) in the case Google Spain SL and Google Inc. v Agencia Espanola de Proteccion de Datos (AEPD) and Mario Costeja Gonzalez (C-131/12) (“Google Spain”). The WP29 Guidelines provide the WP29’s view on the interpretation of the CJEU’s ruling, and identify the criteria that the data protection authorities will use when addressing complaints.

The Apple-IBM Alliance: Illuminating the Future of BYOD

By Yorgen Edholm, CEO, Accellion

The mobile revolution, while firmly embedded in the consumer world, is now beginning to hit its stride in the enterprise world. This can be seen in the recent announcement from Apple and IBM, whose strategic alliance to develop joint solutions leveraging Apple devices and IBM software is an important next step for how enterprises consider mobile technology.
Ginni Rometty, IBM’s CEO, described the partnership as combining two complementary sets of assets: IBM brings the big data, the analytics capabilities, the integration work, and the cloud, while Apple brings the devices, the development environment, and the focus on usability. The combination of these elements is what will make a truly groundbreaking enterprise experience on mobile devices.

So what can we conclude from the Apple/IBM alliance?

  • iPhones and iPads are clearly ready for enterprise-grade computing. Whatever skepticism businesses had about the iPhone back in 2007 and 2008 has largely dissipated, so much so that IBM is willing to bet major R&D and sales initiatives on iOS devices.
  • Enterprises like iOS devices, but they’re also looking for a mature software platform with proven capabilities in the areas of security, scalability, and control.
  • IBM and Apple see the opportunity to bridge the gap between consumer mobile devices and enterprise-grade solutions for data access, data management, and communication.

We agree – the enterprise is ready to take on the mobile revolution in earnest. At Accellion we have already begun bridging the enterprise mobile gap by enabling secure file sharing, synchronization and collaboration on mobile devices. The kiteworks solution gives business users with iPhones, iPads, Android devices and Windows Phones access to their enterprise content wherever it is stored, inside or outside the firewall, so they can share and collaborate on those files securely. The kiteworks platform provides rigorous security features such as 256-bit encryption, built-in AV scanning, and rule-based access controls, along with critical enterprise features, such as LDAP support, Data Loss Prevention (DLP) support, and essential enterprise content connectors for integrating mobile solutions with existing enterprise infrastructure and enterprise content systems.

I’m looking forward to seeing what kind of enterprise solutions for analytics, cloud services, and mobility Apple and IBM create through their best-of-breed partnership. There should be interesting opportunities for combining our enterprise mobile technologies to unleash the productivity gains of a mobile workforce.

Shared Responsibilities for Security in the Cloud, Part 2

By Alexander Anoufriev, CISO, ThousandEyes

Shared Responsibilities for Security in the Cloud continues…

Infrastructure Protection Services
This domain uses a traditional defense in depth approach to make sure that the data containers and communications channels are secure. For infrastructure protection services, all server, network, and application-related processes are fully owned by the service provider (see Figure 5).

Fig 5
Figure 5: Responsibility for Infrastructure Protection Services

End-point security remains an independent object on both sides of the responsibility matrix. The service provider is responsible for securing the end-points used by its workers, while the service consumers ensure the security of their own desktops, laptops, and other end-user computing devices.

Data Protection
This domain is really the most central to information security, since data is the asset we protect. Data protection needs to cover all data lifecycle stages, data types, and data states. Data stages include creation, storage, access, roaming, sharing, and retention. Data types include unstructured data such as word processing documents, structured data such as data within databases, and semi-structured data such as emails.

Fig 6
Figure 6: Responsibility for Data Protection

As is to be expected, this is one of the most involved areas of information security for both parties. See Figure 6 for detailed information on the responsibilities of these two parties. Data lifecycle management is a process driven by the asset owner. Often, the customer of the service is also the owner. At ThousandEyes, this is always the case. Other processes/services have their own implementations on both sides.

Policies and Standards
Security policies and standards are derived from risk-based business requirements. They include Information Technology security (infrastructure and applications), physical security, business security, and human resources security. Security policies are statements that capture requirements specifying what type of security and how much should be applied to protect the business. Figure 7 provides details on responsibility relating to policies and standards.

Fig 7
Figure 7: Responsibility for Security Policies and Standards

As we can see, in the cloud era the provider owns the operational security baseline (the consumer still owns their part, which is minimal within the scope of the provided services and covers the end-point and connectivity pieces). Job aid guidelines traverse both parties, and the data owner (consumer) defines data classification. All other processes/services exist within their own scope on both sides.

In a shared security model it is really important to understand who is responsible for what. This must be defined in associated security level agreements. Ask your CSP what you should do to ensure that security is implemented end-to-end and your data stays secure despite changing operational responsibilities.


Shared Responsibilities for Security in the Cloud, Part 1

By Alexander Anoufriev, CISO, ThousandEyes

Introduction: Security Responsibilities in the Cloud Era

When businesses owned their applications and all underlying infrastructure, they also owned their security. Now this is changing, as ownership and operational responsibility for many applications shift to the cloud. In the cloud era, security is not owned solely by the cloud service provider (CSP) or the consumer. Cloud security is a shared responsibility.

To illustrate this model of shared responsibility I will be using:

  • ThousandEyes SaaS Platform as an example of a cloud application which is owned and operated by ThousandEyes
  • Cloud Security Alliance (CSA) Trusted Cloud Initiative (TCI) reference architecture

We’ll need to understand the high level architecture of this specific solution. The ThousandEyes solution consists of three major components (see Figure 1):

  1. SaaS Platform, which is installed and operated in the ThousandEyes data center
  2. Enterprise Agent, which is installed in the customer’s network
  3. Cloud Agent, which is installed in hosting providers’ networks and managed by ThousandEyes

We monitor the performance of networks and applications inside of an enterprise, on the internet and in the cloud. As a part of our service, we process and store the following data elements:

  • User accounts (name, email)
  • Hashes of passwords (only if local authentication is in place; in Web SSO with SAML scenario this is not applicable)
  • Definitions of network performance tests
  • Results of the tests (measurements)
  • Alerts
  • Reports
  • Support tickets
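As a hedged aside on the password-hashing item above (the post does not describe ThousandEyes’ actual scheme, so this is purely illustrative): storing only a salted, slow one-way hash of each password, rather than the password itself, might be sketched like this:

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None):
    """Return (salt, digest); only this pair is stored, never the password."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    """Recompute the derivation and compare in constant time."""
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # → True
print(verify_password("wrong guess", salt, digest))  # → False
```

Because only the (salt, digest) pair is retained, a breach of the stored data does not directly reveal passwords; and, as the post notes, no hash at all is kept when Web SSO with SAML is in use.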

Fig 1
Figure 1: ThousandEyes solution overview

Responsibilities by TCI Domain

Governance, Risk and Compliance
Figure 2 illustrates responsibility for the governance, risk and compliance (GRC) domain of TCI architecture. This domain is responsible for the identification and implementation of the appropriate organizational structures, processes, and controls to maintain effective information security governance, risk management and compliance. Both parties, the service provider (ThousandEyes in this example) and the consumer, are independently responsible for all of the listed processes.

Fig 2
Figure 2: Responsibility for Governance, Risk and Compliance

Responsibility for specific processes will differ between provider and consumer, for example: the service provider manages compliance with its internal policies, control standards and procedures, while it designs, develops, deploys and operates the service. The customers manage compliance while they use the service.

Privilege Management Infrastructure
Privilege Management Infrastructure ensures that users have the access and privileges required to execute their duties and responsibilities with Identity and Access Management (IAM) functions. Figure 3 illustrates shared responsibilities in IAM.

Fig 3
Figure 3: Responsibility for Privilege Management Infrastructure

In our example, the identity management process extends from a service provider to a service consumer while other related processes and services exist independently in both entities. With the ThousandEyes SaaS Platform, customers are able to take advantage of their own web single sign on (SSO) technologies. In this case, they become responsible for authentication, authorization, and privilege management. Alternatively, they can use ThousandEyes-supplied identity information.

Threat and Vulnerability Management
This domain provides core IT security service and processes. Figure 4 demonstrates how responsibilities are allocated between the service provider and consumer.

Fig 4
Figure 4: Responsibility for Threat and Vulnerability Management

Here we can see that some of the security processes/services are fully shifted to the service provider. All infrastructure-related compliance testing, vulnerability management and penetration testing are operated by the service provider, while threat management exists on both sides and often covers different threats. Due to this, they are two different processes.

(Part 2 of this post will run tomorrow.)

Lessons from Apple iCloud Data Leak

By Paul Skokowski, Chief Marketing Officer, Accellion

The theft of celebrity photos from Apple iCloud is a stark reminder of the need to think twice before storing data. For many people using a Mac, the default behavior is to automatically back up and save data to iCloud. It’s wonderfully appealing and convenient, and seamlessly integrates into practically everything you do on the Mac. In fact, it is so easy that most people don’t think twice about what they are storing – and that is where the problem begins.

When I recently updated my Macbook it felt as if I was being repeatedly nudged, reminded, coaxed, and invited to store my data in iCloud. Saying “no” to each of these invitations wasn’t easy and most people cave in quite quickly, because they think “what could be the harm?” The recent Apple iCloud scandal clearly illustrates the potential risks. While in this case the target of the iCloud theft was celebrity photos, the theft could have been similarly damaging to a business if sensitive information had been stolen and shared.

One of the biggest concerns that companies have around cloud technologies is the security of their digital content. Personal pictures are one thing, but it’s important to remember that companies manage sensitive data ranging from upcoming product plans to employee personnel files every single day, and that it all needs to be secured. That’s why, instead of allowing employees to use solutions such as iCloud for work-related information, companies must take the time to map out a cloud security strategy and deploy enterprise-grade solutions to share and store their business data.

The Apple iCloud scandal offers several important lessons:

  • Use Two-Factor Authentication: Two-factor authentication that requires the user to enter not only a password but also a one-time PIN sent to a trusted cell phone should be the default setting for cloud-storage services. While it is possible to set up two-factor authentication for iCloud, it was not easy or obvious how to do so.  If the victims’ accounts had been configured to require two-factor authentication, the hackers would not have been able to log in even knowing the account passwords.
  • Store With Care: While automatic backup and sync makes life easy, it is not always the best bet when you’re working with sensitive materials.  For work, this sensitive information could include personal data from employment records, financial data, customer information or product roadmap details. Ensuring that sensitive material is only being saved into secure solutions is essential for sensitive work-related information.
  • Trust Private Clouds: For the highest degree of confidence in and control over cloud storage, enterprises should deploy private-cloud solutions, so they are not at the mercy of the security practices (and security lapses) of third-party software providers.
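The two-factor authentication point above can be illustrated with a minimal sketch of how a time-based one-time PIN is generated, in the style of RFC 6238 TOTP. This is a conceptual example, not Apple’s actual implementation:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password in the style of RFC 6238 (HMAC-SHA1)."""
    if timestamp is None:
        timestamp = time.time()
    counter = struct.pack(">Q", int(timestamp // step))  # 30-second window
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and phone share the secret; both derive the same code for the
# current time window, so a stolen password alone is not enough to log in.
shared_secret = b"12345678901234567890"
print(totp(shared_secret))
```

The sketch can be checked against the RFC 6238 reference vectors: for the secret above, `totp(shared_secret, timestamp=59, digits=8)` yields `"94287082"`.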

So what do I use to securely sync and store my work information and make sure I have a backup?

I use Time Machine with an external hard drive to make sure I can easily restore all my content if my computer gets damaged or when I want to copy over all my content to a new machine.  And to help me do my work on a daily basis I use kiteworks by Accellion for syncing and sharing information across my iPad, iPhone and Macbook since it encrypts all data in transit and at rest, supports two-factor authentication, and automatically detects and stops brute-force password attacks.

So next time that pop up window invites you to store data to iCloud – remember the celeb photos scandal and think twice. By deploying kiteworks on trusted private clouds, enterprises can greatly reduce their vulnerability to sensitive information being exposed.

From BYOD to WYOD: Get Ahead of Wearable Device Security

By Paula Skokowski, Chief Marketing Officer, Accellion

Wearable technology is the new “it” thing. From FitBit, to Google Glass, to Samsung Galaxy Gear, and now the Apple iWatch, users are literally arming themselves with the latest gadgets. This is particularly true among early adopters who are counting the days until the release of the Apple iWatch.

While early adopters used to represent only a small number of technology trendsetters, a 2014 study found that of individuals aged 18 to 44, 56% say they have been the first among their friends, colleagues or family members to try a new product or service. With soon-to-be-launched devices promoted widely online and user reviews instantly pushed out to the masses via social media, anyone can step forward to be the first in line to make a purchase – including employees at your company.

This means enterprise IT teams also need to be one step ahead of the trends. While this doesn’t mean that IT needs to camp out overnight at the Apple store, it does mean that IT needs to anticipate what devices will be coming into the workplace and how to keep enterprise security intact. According to a global forecast by CCS Insight, wearable device shipments are expected to hit 22 million this year, up from 9.7 million in 2013, and will continue to grow to 135 million in 2018. The age of Wear Your Own Device (WYOD) is here, and IT needs to include these devices in its security strategies to make sure that any corporate data accessed on them is secure. Wearable devices promise users easy access to applications and data on smartphones, which could eventually include enterprise information.

So it’s not too early to start planning how to extend your BYOD policy beyond smartphones to WYOD. Wearable tech is considered fun and hip, but from an enterprise standpoint it needs to be taken seriously. While WYOD offers opportunities for increased mobile productivity it needs to be worked into an organization’s overall mobile security strategy.


By Avani Desai, Executive Vice President, BrightLine

On October 2, 2014, the AICPA and CPA Canada announced their joint decision to discontinue the seal programs for SysTrust and SOC 3 SysTrust for Service Organizations.

In their announcement, the AICPA and CPA Canada stated that they recognize the growth in the attestation/assurance services market, especially in the area of systems reliability and service organization controls – and that, with this in mind, they will continue to ensure the effectiveness of these services even as the seal program comes to an end.

This doesn’t mean that the SOC 3 examination is gone, just the seal. According to Bryan Walker, Director of Practitioner Support, CICA:

“The SOC 3 for SysTrust for Service Organizations will remain as part of the initiatives for Service Organization Controls. The SOC 3 seal program will be terminated and the SOC 3 seal will no longer be available.”

Therefore, service organizations can still complete a SOC 3 examination, which provides a shorter report than a SOC 2 examination, including only the auditor’s opinion, management’s assertion, and the system description.

So what does that mean for service organizations that underwent a SOC 3 examination?

After December 31, 2014, the seal that was jointly managed by the AICPA and CPA Canada will no longer be provided to service organizations. Meanwhile, a seal that has already been issued under an existing license will remain active through its expiration date. Seals will still be issued through December 31, 2014, to any SysTrust and SOC 3 engagements currently in progress – including renewals of existing SOC 3 SysTrust for Service Organizations and SysTrust seals. After that date, however, anyone who continues to use SysTrust-related marks must disclose to clients that the seal program is not active and is not supported by or associated with the AICPA and CPA Canada.

Also, do not fret: service organizations can still complete the SOC 2 examination, which provides the user entity the same level of comfort – the report simply is not freely distributed.

You should also know that CPA Canada stated in the announcement that they are reviewing the WebTrust for Certification Authorities seal program. While it currently continues, the review is to determine whether the benefits of the program justify the resources necessary for its continuation.

Clearly, there is a lot of assessment and change underway – however, every effort has been made to see that these changes will not cause a disruption within the service organization control reporting world.


The Data Factory: 12 Essential Facts on Enterprise Cloud Usage & Risk

By Kamal Shah, VP of Products and Marketing

Between headlines from the latest stories on data breaches and the hottest new apps on the block, it’s easy to be captivated with what people are saying, blogging, and tweeting about the state of cloud adoption and security. But let’s face it: It’s hard to separate the hype from the truth, and stories about security can range from hyperbolic to accurately frightening.

The fifth installment of our quarterly Cloud Adoption and Risk (CAR) Report presents a data-based analysis of enterprise cloud usage. With cloud usage data from over 13 million enterprise employees and 350 organizations spanning all major verticals, the report is the industry’s most comprehensive and authoritative source of information on how employees are using cloud services. For the first time in the report’s history, we’ve partnered with the Cloud Security Alliance to gather IT managers’ perceptions on cloud adoption and risk and compare their perceptions with hard data. The results reveal a disparity between perception of enterprise cloud use and reality.

You can download the full report here. In addition to popular recurring features such as the Top 20 Enterprise Cloud Services and the Ten Fastest-Growing Applications, the latest report contains several shocking findings.

Mind the Cloud Enforcement Gap
IT often blocks cloud services that fail to meet their organization’s acceptable use policies. Due to changing cloud service URLs, inconsistent policy enforcement, and unmonitored exceptions, the cloud enforcement gap is a shocking 6x. For example, more than 50% of the enterprises intended to block Apple iCloud, but actual usage data showed iCloud was blocked in only 9% of the enterprises.

Don’t Underestimate Insider Threat
Security professionals believe insider threat incidents are rare, with only 17% of respondents aware of an incident at their organization in the past year. The reality is 85% of companies had cloud usage activity strongly indicative of insider threat.

The Cloud 1% and the 80-20 Rule
While the average organization employed 831 cloud services, the distribution of data revealed that 80% of data uploaded to the cloud goes to just 11 cloud services – less than 1% of the total number. Still, enterprises can’t ignore other cloud services: The remaining 20% of data account for 81.3% of anomalous activity indicative of malware, compromised account, and insider threat.

IT’s Worst Nightmare: The World’s Riskiest User
One anonymous user uploaded more than 15 GB of data to high-risk services such as Sourceforge and ZippyShare over 3 months. This individual used 182 high-risk cloud services, any one of which could have been a vector for confidential data to be inappropriately leaked or for malware to be introduced into the enterprise, thus proving that even a single employee is capable of significant damage to corporate security.
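The kind of volume-based flagging that would surface a user like this can be sketched simply. The service names, risk ratings, and 1 GB threshold below are illustrative assumptions, not Skyhigh’s actual rules:

```python
# Flag users whose upload volume to high-risk services exceeds a threshold
# over the reporting window. Risk ratings and threshold are illustrative.
HIGH_RISK_SERVICES = {"sourceforge.net", "zippyshare.com"}
THRESHOLD_BYTES = 1 * 1024**3  # 1 GB

def flag_risky_uploaders(events):
    """events: iterable of (user, service, bytes_uploaded) tuples."""
    totals = {}
    for user, service, nbytes in events:
        if service in HIGH_RISK_SERVICES:
            totals[user] = totals.get(user, 0) + nbytes
    return [user for user, total in totals.items() if total > THRESHOLD_BYTES]

events = [
    ("u1", "sourceforge.net", 900 * 1024**2),
    ("u1", "zippyshare.com", 400 * 1024**2),
    ("u2", "box.com", 5 * 1024**3),  # sanctioned service, not counted
]
print(flag_risky_uploaders(events))  # → ['u1']
```

Real products would of course weight service risk scores and baseline each user’s normal behavior, but even this simple aggregation illustrates why a single employee’s uploads can stand out sharply against the rest of the organization.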

Mobile and Cloud: BFFs 4Ever

By Krishna Narayanaswamy, Chief Scientist, Netskope

We released the Netskope Cloud Report for October today. In it, we analyze the aggregated, anonymized data collected from tens of billions of events across millions of users in the Netskope Active Platform, and highlight key findings about cloud app usage in the enterprise. These include our count of enterprise cloud apps (579) and the percent that are enterprise-ready (88.7 percent), as well as top apps, activities, and policy violations. But what was really interesting about this quarter’s findings is the level of cloud app activity occurring on mobile devices.

As we all know, mobile is the perfect medium for information “snacking.” When it comes to enterprise cloud apps, they also happen to be perfect for bite-sized work. In a world where the workday never seems to end, every minute is a zero-sum-game. So, whether it’s a quick approval of an expense report, a quickly dashed-off email, or a “while I’m thinking of it” document share from cloud storage, nearly half of all activities occur on mobile devices. Some of the most common are send (57 percent), approve (53 percent), view (48 percent), login (47 percent), and post (45 percent).

With all of those activities, mobile is also a place for an increasing number of policy violations. We define a policy violation as when a user attempts an activity on which an administrator has set a policy in the Netskope Active Platform (such as “Don’t share content from cloud storage outside of the company”). We found that 59 percent of all policy violations involving download, and more than one-third of policy violations involving a DLP profile (such as PII, PCI, PHI, Confidential, etc.), occur on mobile devices. Our researchers believe that the high rate of download policy violations on mobile devices could be due to administrators both setting “no download” policies as well as “no download to mobile” policies (the latter because that is a source of concern for data leakage, especially in the case of BYOD), both of which would be triggered on a mobile device.
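The policy-violation definition above can be modeled as predicates evaluated against activity events. This is a conceptual sketch under assumed field names (`category`, `activity`, `recipient`, `device`), not Netskope’s implementation:

```python
COMPANY_DOMAIN = "example.com"  # illustrative placeholder

def violates_external_share(event):
    """'Don't share content from cloud storage outside of the company.'"""
    return (
        event["category"] == "cloud storage"
        and event["activity"] == "share"
        and not event["recipient"].endswith("@" + COMPANY_DOMAIN)
    )

def violates_mobile_download(event):
    """'No download to mobile' policy for BYOD data-leakage concerns."""
    return event["activity"] == "download" and event["device"] == "mobile"

POLICIES = [violates_external_share, violates_mobile_download]

event = {"category": "cloud storage", "activity": "share",
         "recipient": "friend@gmail.com", "device": "mobile"}
print([p.__name__ for p in POLICIES if p(event)])  # → ['violates_external_share']
```

Note how a single mobile download can trip both a general “no download” rule and a mobile-specific one, consistent with the double-triggering our researchers describe above.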

See the Netskope Cloud Report infographic here and get the full report here.

Are you enforcing cloud app policies for mobile users? Tell me here or Tweet it @Krishna_Nswamy #mobilecloudbffs



In Plain Sight: How Hackers Exfiltrate Corporate Data Using Video

By Kaushik Narayan, Chief Technology Officer, Skyhigh Networks

Consumers and companies are embracing cloud services because they offer capabilities simply not available with traditional software. Cyber criminals are also beginning to use the cloud because it offers scalability and speed for delivering malware, such as in the recent case of Dyre, which used file sharing services to infect users. The latest evolution of this trend is attackers using the cloud to overcome a key technical challenge – extracting data from a company. Under the cover of popular consumer cloud services, attackers are withdrawing data from the largest companies in ways that even sophisticated intrusion prevention systems cannot detect.

Previously, researchers at Skyhigh uncovered malware using Twitter to exfiltrate data 140 characters at a time. Skyhigh recently identified a new type of attack that packages data into videos hosted on popular video sharing sites, a technique difficult to distinguish from normal user activity.

The Industrialization of Hacking
The target of these attacks ranges from customer data such as credit card numbers and social security numbers to intellectual property, which can include design diagrams and source code. In recent years, hacking has undergone a revolution. Once a hobbyist pursuit, hacking is now performed at industrial-scale with well-funded teams backed by cartels and national governments. Stealing data is big business, whether to compromise payment credentials and resell them for profit or to gain access to intellectual property that could allow a competitor to catch up on years (or decades) of research and development.

In response, companies have made significant investments in software that can detect telltale signals that attackers have gained access to their network and are attempting to extract sensitive data. With these intrusion prevention systems in place, it can be quite challenging for attackers to remove a large amount of data without being discovered. In the same way that thieves would find it difficult to sneak bags of money out the front door of a bank undetected by guards and security cameras, today’s cyber criminals need a way to mask their exit. That’s why they’ve turned to cloud services to make large data transfers.

Their latest technique involves consumer video sites. There are two attributes that make video sites an excellent way to steal data. First, they’re widely allowed by companies and used by employees. There are many legitimate uses of these sites such as employee training videos, product demos, and marketing the company’s products and services. Second, videos are large files. When attackers need to extract large volumes of data, video file formats offer a way to mask data without arousing suspicions about a transfer outside the company.

How the Attack Works
Once attackers gain access to sensitive data in the company, they split the data into compressed files of identical sizes, similar to how the RAR archive format transforms a single large archive into several smaller segments. Next, they encrypt this data and wrap each compressed file with a video file. In doing so, they make the original data unreadable and further obscure it by hiding it inside a file format that typically has large file sizes. This technique is sophisticated; the video files containing stolen data will play normally.
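As a rough illustration of the packaging described above, the whole round trip fits in a few lines of Python. The toy XOR stands in for whatever cipher an attacker would actually use, and the MARKER delimiter is a hypothetical stand-in for the container tricks real attackers use to keep the video playable.

```python
# Illustrative sketch of the exfiltration packaging: split data into
# equal-sized chunks, obfuscate each, and append it to a valid video file
# (many players ignore trailing bytes, so the video still plays normally).

MARKER = b"\x00PAYLOAD"  # hypothetical delimiter between video and payload

def split_into_chunks(data: bytes, size: int) -> list:
    """Pad data to a multiple of `size` and cut it into identical chunks."""
    data += b"\x00" * (-len(data) % size)
    return [data[i:i + size] for i in range(0, len(data), size)]

def xor(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)  # toy cipher, NOT real encryption

def wrap(video: bytes, chunk: bytes, key: int) -> bytes:
    return video + MARKER + xor(chunk, key)

def unwrap(wrapped: bytes, key: int) -> bytes:
    return xor(wrapped.split(MARKER, 1)[1], key)

stolen = b"4111-1111-1111-1111,123-45-6789"
chunks = split_into_chunks(stolen, 16)          # identical-size segments
videos = [wrap(b"<video bytes>", c, 0x5A) for c in chunks]
recovered = b"".join(unwrap(v, 0x5A) for v in videos).rstrip(b"\x00")
assert recovered == stolen
```

The reverse operation in `unwrap` is exactly the step the attacker performs after downloading the videos again.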

They upload the videos containing stolen data to a consumer video sharing site. While they’re large files, it’s not unusual for users to upload video files to these types of sites. If anyone checked, the videos would play normally on the site as well.

After the videos are on the site, the attacker downloads the videos and performs the reverse operation, unpacking the data from the videos and reassembling it to arrive at the original dataset containing whatever sensitive data they sought to steal.



What Companies Can Do to Protect Themselves
Traditional intrusion detection technology generally does not detect data exfiltration using this technique. One way to identify the attack is to look for an anomalous upload of several video files with identical file sizes. Detecting that kind of event requires a big-data approach: analyzing the routine usage of cloud services in the enterprise so that anomalous events stand out.
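The detection heuristic described above can be sketched in a few lines. The thresholds and event fields here are illustrative, not Skyhigh's actual model.

```python
# Flag a user who uploads several video files with identical sizes --
# the signature of the chunk-and-wrap exfiltration technique.

from collections import defaultdict

def flag_identical_size_uploads(events, min_count=3):
    """events: dicts with 'user', 'file_size', 'file_type' for uploads.
    Returns (user, size) pairs where many same-size videos were uploaded."""
    counts = defaultdict(int)
    for e in events:
        if e["file_type"] == "video":
            counts[(e["user"], e["file_size"])] += 1
    return [key for key, n in counts.items() if n >= min_count]

uploads = [{"user": "alice", "file_size": 104_857_600, "file_type": "video"}
           for _ in range(5)]
print(flag_identical_size_uploads(uploads))  # [('alice', 104857600)]
```

A production system would add a time window and a baseline of each user's normal upload behavior, but the core signal is this grouping by exact file size.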

Skyhigh analyzes all cloud activity to develop behavioral baselines using time series analysis and machine learning, and identified the attack in the wild at a customer site. Importantly, the detection relied on analysis of normal usage activity rather than detecting malware signatures that don’t exist before the attack has been catalogued. Skyhigh’s approach requires no knowledge of the attack before it’s detected.

Companies can proactively take steps to protect themselves by limiting uploads to video sharing sites while allowing the viewing or download of videos. Deploying a cloud-aware anomaly detection solution can also give early warning to an attack in progress and either block it from occurring or quickly allow a company to take action to stop the attack and prevent additional data from being exfiltrated.

The volume and sophistication of attacks is increasing. In this threat environment, companies must take additional steps to protect data while allowing the use of cloud services that also drive innovation and growth in their businesses. State-sponsored attacks and sophisticated criminal organizations are now using the cloud as a delivery vehicle for malware and as an exfiltration vector, but companies can also take advantage of a new generation of cloud-based detection and protection services to safeguard their data and protect themselves. Download our cheat sheet to learn other actionable steps for reducing risk to data in the cloud.


Poodle – How Bad Is Its Bite? (Here’s the Data)

By Sekhar Sarukkai, VP of Engineering, Skyhigh Networks

A major vulnerability affecting the security of cloud services, dubbed POODLE (Padding Oracle on Downgraded Legacy Encryption), was reported on October 14th by three Google security researchers—Bodo Moller, Thai Duong, and Krzysztof Kotowicz. Their paper about the vulnerability is available here.

What is POODLE?
POODLE affects SSLv3 or version 3 of the Secure Sockets Layer protocol, which is used to encrypt traffic between a browser and a web site or between a user’s email client and mail server. It’s not as serious as the recent Heartbleed and Shellshock vulnerabilities, but POODLE could allow an attacker to hijack and decrypt the session cookie that identifies you to a service like Twitter or Google, and then take over your accounts without needing your password.

While usage of SSLv3 is generally limited today, prevalent backward-compatibility support for the protocol leaves nearly all browsers and users exposed.

The SSLv3 protocol has been in use since its publication in 1996. TLSv1 was introduced in 1999 to address weaknesses in SSLv3, notably introducing protections against CBC (cipher block chaining) attacks. Although SSLv3 is considered a legacy protocol, it is still commonly permitted for backward compatibility by the default configurations of many web servers, including Apache HTTP Server and Nginx. Many browsers will fall back to SSLv3 if an HTTPS connection to a server doesn’t support the TLSv1 protocol or a TLSv1 protocol negotiation fails for any reason.

What’s the risk?
The danger arising from the POODLE attack is that a malicious actor with control of an HTTPS server, or of some part of the intervening network, can cause an HTTPS connection to downgrade to the SSLv3 protocol. An attack against SSLv3’s CBC encryption schemes can then be used to begin decrypting the contents of the session, hijacking the session cookie that identifies a user to a service like Twitter or Google and taking over that user’s accounts without needing a password.

How to protect your company’s data
We recommend disabling the SSLv3 protocol on all servers, relying only on TLSv1.0 or greater. Additionally, company browsers and forward proxies should disallow SSLv3 and likewise permit only TLSv1.0 or greater as the minimum protocol version. Enterprises should also disable the use of CBC-mode ciphers. To prevent a downgrade when failed connections are retried, apply the TLS_FALLBACK_SCSV option (e.g. http://marc.info/?l=openssl-dev&m=141333049205629&w=2).

Legacy applications relying solely on SSLv3 should be considered at-risk and vulnerable. Generic encryption wrapper software like Stunnel can be used as a workaround to provide encrypted TLSv1 tunnels.

How many cloud services are vulnerable?
As of this morning, 61% of cloud services had not addressed the POODLE vulnerability with a fix. The fact that many cloud services still support SSLv3 is a sign that cloud providers are not paying attention to which protocols are offered by their SSL stack. Cloud service providers should review their SSL stack configuration and make sure they have disabled SSLv3 and earlier protocol versions. In the process, they should also ensure the SSL stack’s proper use of ciphers.

We are working with customers to proactively identify vulnerable services and users and provide guidance for measures required to protect their data and user accounts. To learn more about our recommendations for securing corporate data in the cloud, download our cheat sheet.


Malicious Security—Can You Trust Your Security Technology?

By Gavin Hill, Director, Product Marketing And Threat Intelligence, Venafi

Encryption and cryptography have long been thought of as the exemplars of Internet security. Unfortunately, this is not the case anymore. Encryption keys and digital certificates have become the weakest link in most organizations’ security strategies, resulting in diminished effectiveness of other security investments like NGFW, IDS/IPS, WAF, AV, etc.

In my previous post, I discussed the difference between key management and key security. The problem today is not that encryption and cryptography are broken, but rather that there are mediocre implementations to secure and protect keys and certificates from theft. Worse yet, most organizations cannot even tell the difference between rogue and legitimate usage of keys and certificates on their networks or stop attackers from using them. Bad actors and nation states continue to abuse the trust that most have in encryption, but very few in the security industry are actually doing something about it.

Undermining Your Critical Security Controls
The threatscape has changed:

Even with all the advances in security technology over the last decade, cybercriminals are still very successful at stealing your data. The challenge is that security technologies are still designed to trust encryption. When threats use encryption, they bypass other security controls undetected and hide their actions. Let’s review an example of how a bad actor can use keys and certificates to subvert any security technology or control.

Using Keys and Certificates throughout the Attack Chain
The use of keys and certificates in APT campaigns is cyclical. A typical trust-based attack can be broken up into four primary steps: theft of the key, use of the key, exfiltration of data, and expansion of the attacker’s foothold on the network.

[Figure: keys and certificates used throughout the attack chain]

Step 1: Steal the Private Key
When Symantec analyzed sample malware designed to steal private keys from certificate stores, the same behavior was noted for every malware variant that was studied. In these samples, the CertOpenSystemStoreA function is used to open stored certificates, and the PFXExportCertStoreEx function exports the following certificate stores:

  • MY: A certificate store that holds certificates with the associated private keys
  • CA: Certificate authority certificates
  • ROOT: Root certificates
  • SPC: Software Publisher Certificates

The malware samples were able to steal the digital certificate and corresponding private key by performing the following actions:

  1. Opens the MY certificate store
  2. Allocates 3C245h bytes of memory
  3. Calculates the actual data size
  4. Frees the allocated memory
  5. Allocates memory for the actual data size
  6. The PFXExportCertStoreEx function writes data to the CRYPT_DATA_BLOB area to which the pPFX points
  7. Writes data (No decryption routine is required when it writes the content of the certificate store)

Step 2: Use the Key
With access to the private key, there are a multitude of use cases for a malicious campaign. Let’s review how cybercriminals impersonate a website and sign malware with a code-signing certificate.

Website impersonation can easily be achieved using the stolen private key as part of a spear-phishing campaign. The attacker sets up a clone of the target website—Outlook Web Access (OWA) or a company portal would be a prime target. Because the clone uses the stolen private key and certificate, anyone who visits it sees no errors in the browser. The fake website also hosts the malware that is intended for the victim.

Step 3: Exfiltrate the Data
Now that the fake website is prepped and ready to go, it’s time to execute the spear-phishing campaign. Using popular social networks like LinkedIn, it is a simple process to profile a victim and formulate a well-crafted email that will entice the victim to click on a malicious link. Imagine you get an email from the IT administrator stating that your password will be expiring shortly, and that you need to change your password by logging into OWA. The IT administrator very kindly also provided you with a link to OWA in the email for you to click on and reset your password.

When you click on the link and input your credentials into the OWA website, not only are your credentials stolen, but malware is installed onto your machine. It’s important to note that the malware is signed with a stolen code-signing certificate; signing malware with a legitimate certificate greatly increases the attackers’ chances of avoiding detection.

In part 2 of this blog series, I will cover step 4 and discuss some examples of the actions trust-based threats perform and how bad actors use keys and certificates to maintain their foothold in the enterprise network. I will also offer some guidance on how to mitigate trust-based attacks.

Register for a customized vulnerability report to better understand your organization’s SSL vulnerabilities that cybercriminals use to undermine the security controls deployed in your enterprise network.

Trust Is a Necessity, Not a Luxury

By Tammy Moskites, Chief Information Security Officer, Venafi

Mapping Certificate and Key Security to Critical Security Controls
I travel all over the world to meet with CIOs and CISOs and discuss their top-of-mind concerns. Our discussions inevitably return to the unrelenting barrage of trust-based attacks. Vulnerabilities like Heartbleed and successfully executed trust-based attacks have demonstrated just how devastating these attacks can be: if an organization’s web servers, cloud systems, and network systems cannot be trusted, that organization cannot run its business.

Given the current threat landscape, securing an organization’s infrastructure can seem a bit daunting, but CISOs aren’t alone in their efforts to protect their critical systems. Critical controls are designed to help organizations mitigate risks to their most important systems and confidential data. For example, the SANS 20 Critical Security Controls provides a comprehensive framework of security controls for protecting systems and data against cyber threats. These controls are based on the recommendations of experts worldwide—from both private industries and government agencies.

These experts have realized what I’ve maintained for years—just how critical an organization’s keys and certificates are to its security posture. What can be more critical than the foundation of trust for all critical systems? As a result, the SANS 20 Critical Security Controls have been updated to include measures for protecting keys and certificates. Organizations need to go through their internal controls and processes—like I’ve done as a CISO—and ensure that their processes for handling keys and certificates map to recommended security controls.

For example, most organizations know that best practices include implementing Secure Sockets Layer (SSL) and Secure Shell (SSH), but they may not realize that they must go beyond simply using these security protocols to using them correctly. Otherwise, they have no protection against attacks that exploit misconfigured, mismanaged, or unprotected keys. SANS Control 12 points out two common attacks for exploiting administrative privileges: in the first, the attacker dupes an administrative user into opening a malicious email attachment; the second is arguably more insidious, with attackers guessing or cracking passwords and then elevating their privileges. Edward Snowden used this type of attack to gain access to information he was not authorized to access.

SANS Control 17, which focuses on data protection, emphasizes the importance of securing keys and certificates using “proven processes” defined in standards such as the National Institute of Standards and Technology (NIST) SP 800-57. NIST 800-57 outlines best practices for managing and securing cryptographic keys and certificates from the initial certificate request to revocation or deletion of the certificate. SANS Control 17 suggests several ways to get the most benefit from these NIST best practices. I’m going to highlight just a couple:

  • Only allow approved Certificate Authorities (CAs) to issue certificates within the enterprise (CSC 17-10)
  • Perform an annual review of algorithms and key lengths in use for protection of sensitive data (CSC 17-11)

Think for a moment about how you would begin mapping your processes to these two recommendations:

  • Do you have policies that specify which CAs are approved?
  • Do you have an auditable process that validates that administrators must submit certificate requests to approved CAs?
  • Do you have a timely process for replacing certificates signed by non-approved CAs with approved certificates?
  • Do you have an inventory of all certificates in your environment, their issuing CAs, and their private key algorithms?
  • Do you have an inventory of all SSH keys in your environment, their key algorithms, and key lengths?
  • Do you have a system for validating that all certificates and SSH keys actually in use in your environment are listed in this inventory?
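A starting point for the inventory questions above can be automated. The sketch below validates each certificate record against an approved-CA list and minimum key sizes; the record fields and thresholds are illustrative, not taken from the SANS text.

```python
# Hypothetical inventory review mapping to CSC 17-10 (approved CAs) and
# CSC 17-11 (algorithm/key-length review). Fields are illustrative.

APPROVED_CAS = {"Corp Internal CA", "DigiCert"}
MIN_KEY_BITS = {"RSA": 2048, "EC": 256}

def review(cert):
    """Return a list of findings for one inventory record."""
    findings = []
    if cert["issuer"] not in APPROVED_CAS:
        findings.append("issued by non-approved CA (CSC 17-10)")
    if cert["key_bits"] < MIN_KEY_BITS.get(cert["algorithm"], float("inf")):
        findings.append("key length below policy (CSC 17-11)")
    return findings

inventory = [
    {"cn": "owa.example.com", "issuer": "DigiCert",
     "algorithm": "RSA", "key_bits": 2048},
    {"cn": "legacy.example.com", "issuer": "Unknown CA",
     "algorithm": "RSA", "key_bits": 1024},
]
for cert in inventory:
    for finding in review(cert):
        print(f"{cert['cn']}: {finding}")
```

Running such a review annually (or, better, continuously) turns the checklist above from a point-in-time audit exercise into a repeatable control.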

I LOVE that I can say that Venafi solutions allow you to answer “yes” to all of these.

If you are interested in more details about mapping your processes for securing keys and certificates to the SANS Critical Security Controls, stay tuned: my white paper on that subject, coauthored with George Muldoon, will be coming soon.

The 7 Deadly Sins of Cloud Data Loss Prevention

By Chau Mai, Senior Product Marketing Manager, Skyhigh Networks

flamesIt’s good to learn from your mistakes. It’s even better to learn from the mistakes of others. Skyhigh has some of the security world’s most seasoned data loss prevention (DLP) experts who’ve spent the last decade building DLP solutions and helping customers implement them. So, we thought we’d pick their brains, uncover some of the most common missteps they’ve seen IT make when rolling out DLP in practice, and share them so you can avoid the mistakes of IT practitioners past.

In this piece, we specifically address mistakes when rolling out DLP to protect data in the cloud. So without further ado – the 7 deadly sins of Cloud DLP:

  • Lust – It’s natural to be tempted by the allure of cloud DLP. However, make sure that your cloud DLP deployment preserves the actual functionality of your cloud applications. You don’t want to break the native cloud applications’ behavior. For example, let’s say your DLP solution has detected sensitive content in Box and enforces it via encryption. Your end users should still be able to preview documents, perform searches, and overall have a seamless experience even with cloud DLP in place.
  • Greed – Cloud applications can contain enormous amounts of information – in some cases, glittering terabytes of data. However, as with traditional on-premise DLP, there’s no need to try and scan everything all at once. We recommend filtering on user attributes (group, geography, employee type, etc.) as well as on sharing permissions (i.e. externally vs internally) and prioritizing high-risk documents.
  • Envy – Do your employees envy others who have the ability to do their work and access cloud apps from anywhere they are? Companies are increasingly embracing the BYOD trend, and cloud DLP helps to enable that. Tame the green-eyed monster at your organization by letting cloud DLP catch all activity regardless of where the user is located, what operating system they’re using, and if they’re on-network or off-network – without the hassle of VPN.
  • Gluttony – Don’t overreach and accidentally intrude on user privacy with your DLP deployment. Security teams oftentimes have access to very sensitive information, but their access should be limited to business traffic. Make sure your cloud DLP practices do not involve sniffing personal traffic (such as employees’ use of Facebook, their activity on personal banking sites, etc.).
  • Wrath – Avoid the wrath of employees and don’t let your cloud DLP solution negatively impact the user experience. Your employees should be able to seamlessly access and use cloud applications and enjoy the rapid responsiveness they’re accustomed to. Forward-proxies, especially when used for scanning a large amount of traffic, can cause lag and performance issues that are visible (and irritating) to the end user.
  • Pride – Having strong DLP technology, processes, and people in place is something to be proud of. However, not all cloud DLP solutions are created equal. Keep your cloud DLP program running smoothly by avoiding solutions that require you to deploy agents and install certificates – an operational nightmare. And certain cloud apps, such as Dropbox and Google Drive, will detect the man-in-the-middle and refuse to work as designed.
  • Sloth – This is where it pays off to be a little lazy. Let your cloud DLP provider integrate with your existing enterprise DLP solution. There’s no reason to re-work the efforts you’ve put into the people, processes, and technology. Look for a vendor that will extend your existing on-premise DLP policies to the cloud.
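The "Greed" advice above, filtering and prioritizing rather than scanning everything at once, can be sketched as a simple scoring pass. The scoring weights and document fields here are illustrative:

```python
# Rank documents for DLP scanning by sharing exposure and owner group
# instead of scanning the entire repository indiscriminately.

def scan_priority(doc):
    score = 0
    if doc["shared"] == "external":
        score += 10            # externally shared content is highest risk
    if doc["owner_group"] in {"finance", "hr", "legal"}:
        score += 5             # groups likely to handle regulated data
    return score

docs = [
    {"name": "q3-forecast.xlsx", "shared": "external", "owner_group": "finance"},
    {"name": "team-offsite.png", "shared": "internal", "owner_group": "marketing"},
]
queue = sorted(docs, key=scan_priority, reverse=True)
print([d["name"] for d in queue])  # ['q3-forecast.xlsx', 'team-offsite.png']
```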

Cloud DLP is rapidly becoming a priority for security and compliance teams. As you evaluate solutions, be sure to keep these mistakes in mind. To learn more about common DLP missteps, check out our cheat sheet.

PCI Business-as-Usual Security—Best Practice or Requirement?

By Christine Drake, Senior Product Marketing Manager, Venafi

At the 2014 PCI Community Meetings in Orlando in early September, the PCI SSC kicked off the conference with a presentation by Jake Marcinko, Standards Manager, on Business-as-Usual (BAU) compliance practices. The PCI DSS v3, released in November 2013, emphasizes that security controls implemented for compliance should be part of an organization’s business-as-usual security strategy, enabling organizations to maintain compliance on an ongoing basis.

Compliance is not meant to be a single point in time that is achieved annually to pass an audit. Instead, compliance is meant to be an ongoing state, ensuring sustained security within the Cardholder Data Environment (CDE). Security should be maintained as part of the normal day-to-day routines and not as a periodic compliance project.

To highlight the lack of business-as-usual security processes, Jake referenced the Verizon 2014 PCI Compliance Report: almost no organization achieved compliance without requiring remediation following the assessment, and continued compliance is dismally low—only 1 out of 10 organizations passed all 12 of the PCI DSS requirements in their 2013 assessments, though that was up from 7.5% in 2012.

Four elements of ongoing, business-as-usual security processes were outlined:

  • Monitor security control operations
  • Detect and respond to security control failures
  • Understand how changes in the organization affect security controls
  • Conduct periodic security control assessments, and identify and respond to vulnerabilities

Jake mentioned that automated security controls help with maintaining security as a business-as-usual process, providing ongoing monitoring and alerting. If manual processes are used instead, organizations must ensure that regular monitoring is conducted for continuous security.

The PCI DSS emphasis on business-as-usual security processes does not apply to any particular PCI DSS requirement, but instead applies across the standard. When considering how this applies to keys and certificates, manual security processes are unsustainable. A study by Ponemon Research found that, on average, there are 17,000 keys and certificates in an enterprise network, but 51% of organizations are unaware of how many certificates and keys are actively in use. Although some of these keys and certificates will not be in scope of the PCI DSS, a considerable number are used in the CDE to protect Cardholder Data (CHD).

In a recent webinar on PCI DSS v3 compliance for keys and certificates with 230 attendees, a poll revealed that over half (53%) either applied manual processes to securing their keys and certificates (41%) or did not secure them at all (12%). When specifically asked about their business-as-usual security processes for keys and certificates, more than half (53%) said they had no business-as-usual processes, but merely applied a manual process at the time of audit.

Organizations need automated security to deliver business-as-usual security processes for keys and certificates. This should include comprehensive discovery for a complete inventory of keys and certificates in scope of the PCI DSS, daily monitoring of all keys and certificates, establishment of a baseline, alerts of any anomalous activity, and automatic remediation so that errors, oversights, and attacks do not become breaches.
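The daily loop described above, discovery, baseline comparison, and alerting, can be sketched briefly. The record fields, baseline format, and expiry window below are illustrative, not Venafi's implementation:

```python
# Business-as-usual sketch: compare today's discovered certificates
# against a baseline inventory and alert on anything unknown or expiring.

from datetime import date, timedelta

def daily_check(discovered, baseline, today, expiry_window_days=30):
    alerts = []
    for cert in discovered:
        if cert["fingerprint"] not in baseline:
            alerts.append(("unknown-cert", cert["cn"]))  # possible rogue cert
        if cert["not_after"] - today <= timedelta(days=expiry_window_days):
            alerts.append(("expiring-soon", cert["cn"]))
    return alerts

baseline = {"ab:cd"}  # fingerprints of approved, inventoried certs
discovered = [
    {"fingerprint": "ab:cd", "cn": "pos.example.com",
     "not_after": date(2015, 6, 1)},
    {"fingerprint": "12:34", "cn": "unknown.example.com",
     "not_after": date(2014, 11, 1)},
]
print(daily_check(discovered, baseline, today=date(2014, 10, 20)))
```

Run daily, a check like this turns an audit-time scramble into routine monitoring: anomalies surface the day they appear, not at the annual assessment.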

During his presentation, Jake noted that, for now, implementing business-as-usual security controls is a best practice according to the PCI DSS v3, and not a requirement. But he said that best practices often become requirements—so don’t wait! Start incorporating business-as-usual security practices now.

Learn how Venafi can help you automate key and certificate security required in PCI DSS v3—simplifying and ensuring repeated audit success while providing ongoing security for your CDE.

The Ability to Inspect What You Didn’t See

By Scott Hogrefe, Senior Director, Netskope

Content inspection has come a long way in the past several years. Whether it is our knowledge and understanding of different file types (from video to even the most obscure) or the reduction of false positives through proximity matching, the industry has cracked a lot of the code and IT and businesses are better off as a result. One constant that has remained true, however, is the fact that you just can’t inspect content you can’t see. This probably seems like an obvious point, and for traditional solutions, we can solve for this by simply pointing the tool at repositories that might have been (for whatever reason) overlooked. But these repositories are relatively easy to discover because, frankly, it’s harder to hide content when it’s occupying storage that IT is responsible for maintaining in the first place. It’s hard to lose a NAS (though not impossible — some of us have stories we could share, no doubt). But this changes when it comes to content in the cloud. Let’s break down some of the challenges here:

  • There are 153 cloud storage providers today and the average organization, according to the Netskope Cloud Report, is using 34 of them. Considering IT is typically unaware of 90% of the cloud apps running in their environment, this means that content is in 30+ cloud storage apps that IT has no knowledge of (and that’s just cloud storage; the average enterprise uses 508 cloud apps!).
  • Once you know that an app is in use, inspection of content in the cloud has required movement of said content. Since many traditional tools perform inspection of content as it flies by, the scope of inspection is limited to when content is being uploaded or when it is downloaded. Therefore, content may exist in a cloud app for several years before it’s ever inspected.
  • The “sharing” activity so popular in cloud apps today is done by sending links rather than the traditional “attachment” method. Since the link doesn’t contain the file, the inspection is useless.

For the first of our challenges above, vendors like Netskope can quickly discover all apps running in your enterprise and tell you whether the usage of these apps is risky or not.

For challenges two and three, Netskope just introduced Netskope Active Introspection, which enables customers to examine, take action on, or enforce policies over all content stored in a cloud app. This means that regardless of whether the data was placed in a cloud app yesterday or years ago, enterprise IT can take advantage of this solution’s leading real-time and activity-aware platform to protect it. In addition, Active Introspection provides data inventory and classification, understands app and usage context, creates a content usage audit trail, and can be deployed alongside Active Cloud DLP.

What’s even more killer is that Active Introspection can be run as part of your overall policy framework and can typically run through an entire repository in less than 30 minutes. So let’s say that you want to encrypt specific data – Active Introspection discovers the content, understands whether the content meets certain criteria (such as sensitive or high value content), and completes the step of encrypting it, right then and there. There are additional actions that can be triggered automatically, such as alerting the end user, changing the ownership of the content to the appropriate person, and many more.
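The discover-classify-act flow described above can be sketched generically. This is not Netskope's API; the classifier, repository format, and action names are stand-ins for illustration:

```python
# Generic content-at-rest introspection: walk stored content, classify
# it, and record the action a policy engine would trigger per file.

import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # toy classifier: US SSN pattern

def classify(text):
    return "sensitive" if SSN.search(text) else "normal"

def introspect(repository):
    """repository: {filename: content}. Returns the action taken per file."""
    actions = {}
    for name, content in repository.items():
        if classify(content) == "sensitive":
            actions[name] = "encrypt-and-alert-owner"
        else:
            actions[name] = "none"
    return actions

repo = {"benefits.csv": "jane,123-45-6789", "notes.txt": "lunch at noon"}
print(introspect(repo))
```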

My colleague, Rajneesh Chopra, just published a Movie Line Monday that talks about how customers are using Active Introspection and inspection capabilities together. If we think of this as a spectrum, imagine that on one side you’ve got content that’s constantly being moved in and out of a cloud app – for that, we have inspection that’s happening in real-time. On the other side of the spectrum you have content that’s already in the cloud app and being shared via links – for that, we have introspection. It’s complete coverage. You should check it out here, but suffice it to say, for our customers, the availability of Active Introspection within the Netskope Active Platform means that they are now able to go more confidently into cloud apps they’ve cautiously embraced. For these customers, there’s a strong understanding that safe cloud enablement requires a comprehensive solution that can be flexible enough to cover the myriad use cases they’re confronted with.

Do you have a solid handle on the cloud apps in your organization? What about the content contained within them? We’d love to hear from you and address any questions you have or show you a demo. Reach out to us at [email protected] or @Netskope to get a conversation started.


4 Lessons Learned From High Profile Credit Card Breaches

By Eric Sampson, Manager and QSA Lead, BrightLine

The media has been filled with stories of high-profile credit card breaches, including those at Target, Neiman Marcus, P.F. Chang’s and, most recently, Home Depot. Details on the Home Depot breach are still emerging, but the details of the Target and Neiman Marcus breaches are well known and have the public asking: will it happen again?

However, the real question we should be asking ourselves is: when will it happen again?

Experienced Qualified Security Assessors (QSAs) will acknowledge that securing the cardholder data environment by meeting PCI DSS requirements provides a baseline level of security; however, it would be naïve to say that this alone will protect an organization from an attack. There are areas where a merchant should recognize that the PCI DSS is an important start, but only a foundation. One example is event logging.

The detailed requirements for event logging (Requirement 10.6) assume that a merchant or service provider will use the logs for investigative purposes. That said, having a process to review audit logs on a daily basis does not guarantee that the employees responsible for reviewing logs and alerts will identify important or suspicious events in a timely and accurate manner. Similarly, during a PCI DSS assessment, QSAs are tasked with validating that daily log review processes and/or log harvesting technologies are implemented. However, QSAs will not critique the details of the log review process or evaluate the robustness of log parsing tools.

So, how does this pertain to recent breach events?

It has been reported that many security log events pertinent to the breaches were generated, but were either ignored or not acted upon in a timely manner, perhaps lost in the mass of audit logs.

To go beyond the baseline standard, we can ask more probing questions such as:

  • How do we ensure that log events lead to the correct action?
  • How quickly should they be addressed?
  • Does the team responsible for reviewing these events and alerts have sufficient training and tools necessary to identify possible attacks?
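To make these questions concrete, here is a minimal sketch of the kind of automated triage that can surface suspicious events from a large volume of logs. The log format and keyword patterns below are hypothetical, illustrative stand-ins – a real deployment would use the formats and indicators of its own environment:

```python
import re

# Hypothetical raw audit log lines; real formats vary widely by system.
LOG_LINES = [
    "2014-09-12T03:14:07 host1 sshd: Failed password for root from 203.0.113.9",
    "2014-09-12T03:14:09 host1 sshd: Failed password for root from 203.0.113.9",
    "2014-09-12T09:22:41 pos42 app: card data export started by svc_backup",
    "2014-09-12T10:05:02 host2 cron: nightly job completed",
]

# Patterns a triage team might flag for human review (illustrative only).
SUSPICIOUS_PATTERNS = [
    re.compile(r"Failed password"),
    re.compile(r"export|exfil", re.IGNORECASE),
]

def triage(lines):
    """Return only the log lines matching a suspicious pattern."""
    return [ln for ln in lines
            if any(p.search(ln) for p in SUSPICIOUS_PATTERNS)]

for line in triage(LOG_LINES):
    print(line)
```

Even a filter this crude reframes the questions above: the team still decides what "correct action" is and how quickly to take it, but the tooling ensures the events reach a human at all.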

Verizon’s 2014 Data Breach Investigations Report found that 1% of data breaches were discovered by a review of audit logs. Surely a much higher number of breaches could have been detected through an effective internal review of audit logs. What does that say about our ability to detect breaches as they occur?

I have four thoughts for consideration:

  1. Commit to training. Individuals responsible for reviewing security events and alerts need to develop the skills to identify and act upon suspicious events that may indicate unauthorized activity.
  2. Invest in good tools. Does the organization currently have sufficiently capable log monitoring and file integrity monitoring tools? These tools should allow an organization to scan large amounts of information yet still extract the specific events that could impact the organization.
  3. Be proactive. Understanding how alerts are generated, what data an alert contains, and who reviews them is paramount. Careful planning can avoid discovering too late that a critical system is missing logs, which can leave an incomplete view of an incident and lead to unnecessary future expenditure.
  4. Prepare drills. In a variety of specialties, including the military, medicine, and the airline industry, exercises in handling emergency events have saved many lives. Although we try to prevent breaches, drills help ensure that if one does happen, it can be resolved quickly and effectively. Reviewing audit logs and alerts can be tedious at times; make it interesting by staging mock attacks, and consider making this exercise a component of incident response plan tests and penetration tests.

Organizations face an ever-expanding landscape of threats, vulnerabilities, and risks, not to mention an ever-rising mountain of logs to review and manage. Bringing thoughtful consideration to security log management will enable an organization to take action where needed, understand important events, and address potential security threats as they are identified.

Was the Cloud ShellShocked?

By Pathik Patel, Senior Security Engineer, Skyhigh Networks

Internet security has reached its highest alert level. Another day, another hack – the new bug on the scene, known as “Shellshock,” blew up headlines and Twitter feeds.

Shellshock exposes a vulnerability in the Bourne Again Shell (Bash), the widely used shell for Unix-based operating systems such as Linux and OS X. The bug allows a perpetrator to remotely execute arbitrary commands on vulnerable systems. The vulnerability is extremely easy to exploit, requiring neither deep application knowledge nor significant computational resources. That power, combined with the relative ease of launching an attack, led industry analysts to label the bug more serious than Heartbleed. The National Institute of Standards and Technology assigned the vulnerability its highest risk score of 10.
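The mechanism is simple to illustrate. Bash imports function definitions from environment variables whose values start with `() {`, and vulnerable versions kept executing whatever followed the function body. The pure-Python sketch below illustrates how the canonical payload smuggles a command past the function definition – the string handling here is illustrative only, not Bash’s actual parser:

```python
# The canonical Shellshock payload: an exported function definition
# followed by an arbitrary trailing command.
payload = "() { :; }; echo vulnerable"

def split_payload(value):
    """Illustrative split: separate the function body from the trailing
    command that a vulnerable Bash would also have executed."""
    marker = "() {"
    if not value.startswith(marker):
        return value, None  # not a function export at all
    body_end = value.index("}")
    function_def = value[: body_end + 1]
    trailing = value[body_end + 1 :].lstrip("; ").strip() or None
    return function_def, trailing

function_def, smuggled = split_payload(payload)
print(function_def)  # the part a patched Bash stops at: () { :; }
print(smuggled)      # the command a vulnerable Bash would run: echo vulnerable
```

A patched Bash stops after the function definition; a vulnerable one went on to execute the trailing command with whatever privileges the process had.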

What are the implications of ShellShock for cloud security? At Skyhigh, we reviewed enterprise use of over 7,000 cloud service providers for vulnerabilities. The results surprised us.

We initially expected to discover rampant vulnerability to Shellshock among cloud service providers. The data painted a more mixed picture of cloud application security.

Four percent of end-user devices in the enterprise environment run the vulnerable version of Bash – reflecting the dominance of Windows in enterprise networks. We also found that only three cloud service providers employ the common gateway interface (CGI), the primary vector of attack. While cloud service providers may be vulnerable through other vectors (e.g. OpenSSH’s ForceCommand), the fact that they avoid the bug’s primary attack vector through design and architectural choices is an indication of the maturity of today’s cloud applications.
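CGI is the primary vector because a CGI web server copies attacker-controlled HTTP headers into environment variables before invoking a handler, which may be (or may spawn) Bash. A minimal sketch of that mapping, using the standard CGI naming convention from RFC 3875 (header names uppercased, dashes replaced with underscores, prefixed with `HTTP_`) – the `cgi_environ` helper is our own illustration, not any server’s actual code:

```python
def cgi_environ(headers):
    """Build the environment a CGI server would hand to a child process:
    per RFC 3875, 'User-Agent' becomes 'HTTP_USER_AGENT', and so on."""
    return {
        "HTTP_" + name.upper().replace("-", "_"): value
        for name, value in headers.items()
    }

# An attacker only needs to send one crafted header; the server's own
# header-to-environment mapping delivers the payload to Bash.
attack_headers = {"User-Agent": "() { :; }; /bin/echo pwned"}
env = cgi_environ(attack_headers)
print(env["HTTP_USER_AGENT"])
```

This is why avoiding CGI sidesteps the primary vector: without that automatic header-to-environment handoff, the attacker’s string never reaches Bash as an environment variable.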

However, when we scanned the top IaaS providers (e.g. AWS, Rackspace) for the Bash vulnerability, 90% of checks reported the vulnerable Bash version on the default images provisioned. Customers should not wait for their IaaS providers to take the initiative. To protect against Shellshock, all organizations should immediately update their systems with the latest version of Bash.

But remediation measures shouldn’t end there. Given the current rate of breaches, organizations can expect the next event won’t be far off. Our recommendation: a Web Application Firewall (WAF) deployed to protect against pre-defined attack vectors comes in handy at times like this. System administrators can quickly write WAF rules to defend against this and similar bugs. In our case, we quickly updated our WAF rules in addition to updating the vulnerable Bash version.

A sample ruleset for ModSecurity (a WAF) is below:

Request Header values:
SecRule REQUEST_HEADERS "^\(\) {" "phase:1,deny,id:1000000,t:urlDecode,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

SecRule REQUEST_LINE "\(\) {" "phase:1,deny,id:1000001,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

GET/POST names:
SecRule ARGS_NAMES "^\(\) {" "phase:2,deny,id:1000002,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

GET/POST values:
SecRule ARGS "^\(\) {" "phase:2,deny,id:1000003,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

File names for uploads:
SecRule FILES_NAMES "^\(\) {" "phase:2,deny,id:1000004,t:urlDecode,t:urlDecodeUni,status:400,log,msg:'CVE-2014-6271 - Bash Attack'"

We recommend evaluating this ruleset based on your own application design. For additional best practices, check out our five keys for protecting data in the cloud.
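When evaluating the ruleset, note that every rule keys on the literal `() {` signature at the start of a request field. A quick way to sanity-check that signature against sample inputs before deploying is to exercise the equivalent regex directly – here Python’s `re` stands in for ModSecurity’s regex engine, so treat this as an approximation rather than an exact match of WAF behavior:

```python
import re

# The literal signature the rules match: "() {" anchored at the start
# of the inspected field (header value, argument name/value, filename).
SHELLSHOCK_SIGNATURE = re.compile(r"^\(\) \{")

def blocked(field_value):
    """Return True if a rule using this signature would deny the request."""
    return SHELLSHOCK_SIGNATURE.search(field_value) is not None

print(blocked("() { :; }; echo vulnerable"))  # classic exploit header -> True
print(blocked("Mozilla/5.0 (X11; Linux)"))    # benign User-Agent -> False
```

Testing rules against both exploit samples and benign traffic like this helps catch false positives before the ruleset reaches production.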