Evolution of Distributed Policy Enforcement in the Cloud

By Krishna Narayanaswamy, chief scientist at Netskope


As computing shifts to the cloud, so too must the way we enforce policy.

Until recently, enterprise applications were hosted in private data centers under the strict control of centralized IT. Between firewalls and intrusion prevention systems, IT was able to protect the soft inner core of enterprise information from external threats. Ever more sophisticated logging and data leakage prevention solutions supplemented those with a layer of intelligence, helping IT identify and prevent not only the external but also the internal threats that led to costly data breaches. Even remote workers were shoe-horned into this centralized model using VPN technology so they could be subjected to the same security enforcement mechanisms.

The cloud has brought many benefits. Users of compute services can procure the service that best fits their needs, independent of the others, while providers can focus on what they do well, whether building scalable infrastructure or solving a business problem with a software service. The distributed nature of the cloud also means that users enjoy the availability and performance benefits of multiple redundant data centers. The model also aligns well with the proliferation of smart devices and users’ need to access content anywhere, anytime.

But as computing has moved to the cloud – and we are now at a tipping point with nearly one-third of compute spend reported to be on cloud infrastructure, platform, and software services – legacy security architectures are quickly becoming ineffective.

We need a fresh way to solve the problem. But first a short primer on security policy enforcement:

Security reference architectures consist of two components: the Policy Control Point (PCP) and Policy Enforcement Point (PEP). The PCP is where security policies are defined. In general, there is one or a small number of PCPs in an enterprise. The PEP is where the security policies are enforced. Typically there are many PEPs in an enterprise network, and a group of PEPs may enforce a specific type of policy.

The way it works is that the PCP updates the many PEPs with the specific policy rules that pertain to each PEP’s capabilities. The PEPs, for their part, act in real time on policy triggers, such as discovering data passing through a network, and enforce the policy as a pre-defined triggered action happens. PEPs that experience a policy trigger then send policy event logs back to the PCP to convey the attempted policy violation and confirm enforcement for compliance reporting purposes. Event logs provide information from the PEP about how and when the policy was triggered that can be used to create new policies or tune existing ones. In practice, the PCP and PEPs are usually not single physical entities but collections of physical entities that provide the logical functions described above.
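To make the PCP/PEP division of labor concrete, here is a minimal sketch in Python. The class names, rule format, and in-memory event log are hypothetical, invented purely for illustration; a real control plane would add authenticated transport, persistence, and scale-out.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class PolicyRule:
    """One rule pushed from the PCP to the PEPs that support its capability."""
    rule_id: str
    capability: str      # e.g. "dlp" or "access-control"
    condition: object    # callable taking an event dict, True when triggered
    action: str          # e.g. "block" or "alert"


class PolicyEnforcementPoint:
    def __init__(self, name, capabilities, pcp):
        self.name = name
        self.capabilities = set(capabilities)
        self.pcp = pcp
        self.rules = []

    def install(self, rules):
        # Keep only the rules that match this PEP's capabilities.
        self.rules = [r for r in rules if r.capability in self.capabilities]

    def inspect(self, event):
        # Act in real time on policy triggers and report each one back to the PCP.
        for rule in self.rules:
            if rule.condition(event):
                self.pcp.receive_log({
                    "pep": self.name,
                    "rule": rule.rule_id,
                    "action": rule.action,
                    "time": datetime.now(timezone.utc).isoformat(),
                })
                return rule.action
        return "allow"


class PolicyControlPoint:
    def __init__(self):
        self.peps = []
        self.event_logs = []   # feeds compliance reporting and policy tuning

    def register(self, pep):
        self.peps.append(pep)

    def publish(self, rules):
        for pep in self.peps:
            pep.install(rules)

    def receive_log(self, log):
        self.event_logs.append(log)


# Usage: one PCP drives a single DLP-capable PEP.
pcp = PolicyControlPoint()
pep = PolicyEnforcementPoint("dc-edge-1", {"dlp"}, pcp)
pcp.register(pep)
pcp.publish([PolicyRule("R1", "dlp", lambda e: "ssn" in e.get("content", ""), "block")])
print(pep.inspect({"content": "customer ssn 123-45-6789"}))   # -> "block"
```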


What are the key requirements for a cloud security framework?

The fact that enterprises’ applications, platforms, and infrastructure services are moving to the cloud breaks the notion of a centralized service delivery point. Cloud service providers have optimized their own solutions for the specific types of services they’re offering or enabling, e.g., CRM, backup, storage, etc. This means that there are no common security controls across all of the services that enterprises are accessing.

Adding insult to injury, enterprises have another dimension of complexity to deal with: They need to plan for users to get both on-prem and off-prem access to enterprise apps, as well as access from corporate-owned and personal systems and a plethora of mobile devices. And in the face of all of this complexity, of course, the service and the policy enforcement need to be efficient, as transparent as possible, and “always on.”

A tall order.

What are the ways to ensure this?

One possibility is the status quo: Ensure that all access to cloud services from any device, whether corporate-owned or BYOD, is backhauled to the enterprise datacenter where the PEPs are deployed. This approach creates an hour-glass configuration in which traffic from different access locations is funneled to a choke point and then fans out to the eventual destination, which is generally all over the Internet. Great for policy enforcement. Not so much for user experience.

Another possibility is to enforce policies at the server end. This is more efficient from a traffic standpoint, but it isn’t effective because every cloud service provider has a proprietary policy framework and different levels of policy enforcement capabilities. This means the PCP has to be able to convert the configured policies to the specific construct supported by each service provider.


A third possibility: Distributed cloud enforcement (in case you haven’t guessed it yet, this is the recommended one). This involves distributing PEPs in the cloud so that traffic can be inspected for both analytics and policy triggers, irrespective of where it originates. It also means that PEPs will be deployed close to user locations, allowing for minimal traffic detours en route to the application hosted by the cloud service provider. The distributed PEPs are controlled by a central PCP entity. This all sounds very easy, and of course, the devil is in the details.

In order to do this right, the solution enforcing the policies must employ efficient steering mechanisms to get traffic to the PEPs in the cloud. The PEPs must enforce enterprises’ security policies accurately and quickly, and send those policy logs to the PCP in a secure, reliable way each and every time. This reference architecture resembles the legacy architecture in terms of the level of control it provides while obviating the need to backhaul traffic to the enterprise datacenter. The PEP only has to provide the various security functions that were deployed in the datacenter: access control, data loss prevention, anomaly detection, etc. The architecture also provides an option for introducing new services that are relevant to emerging trends. For example, with corporate data moving to a cloud that is not in the direct control of the enterprise, data protection becomes an important requirement. The cloud-resident PEPs can provide encryption functionality to address this requirement, among other non-security capabilities such as performance, SLA, and cost measurements.
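As an illustration of how a cloud-resident PEP might chain the functions listed above (access control, DLP, anomaly detection, encryption), here is a hedged sketch. The function names and checks are placeholders invented for illustration, not a real product API.

```python
import re


def access_control(request):
    # Placeholder check: only allow users from the corporate identity provider.
    return request.get("user", "").endswith("@example.com")


def dlp_scan(request):
    # Placeholder DLP trigger: flag obvious payment-card-like number patterns.
    return re.search(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b", request.get("body", "")) is not None


def looks_anomalous(request):
    # Placeholder anomaly heuristic: unusually large uploads.
    return len(request.get("body", "")) > 10_000_000


def encrypt_for_cloud(body):
    # Stand-in for field- or file-level encryption applied before data reaches
    # a cloud service that is outside the enterprise's direct control.
    return b"<ciphertext>"  # a real PEP would use an AEAD cipher with managed keys


def enforce(request, send_log):
    """Run one request through the cloud-resident PEP near the user, then forward it."""
    if not access_control(request):
        send_log({"verdict": "deny", "reason": "access-control"})
        return None
    if dlp_scan(request) or looks_anomalous(request):
        send_log({"verdict": "block", "reason": "dlp-or-anomaly"})
        return None
    request["body"] = encrypt_for_cloud(request.get("body", ""))
    send_log({"verdict": "allow"})
    return request  # continue on to the cloud service provider


# Usage: the event log callback would forward each verdict to the central PCP.
print(enforce({"user": "[email protected]", "body": "card 4111 1111 1111 1111"}, print))
```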


It’s clear that emerging trends like cloud and BYOD have obviated existing security architectures. We are not alone in addressing this issue. Organizations such as the Cloud Security Alliance, which recently kicked off its Software Defined Perimeter (SDP) initiative, are looking hard at the best ways to tackle this. I submit that addressing the above trends with a distributed cloud policy enforcement framework meets the key requirements and provides a foundation for adding new security (and non-security) services that will become relevant in the near future.

What’s New With the Security as a Service Working Group?

CSA members are invited to join the Security-as-a-Service Working Group (SecaaS WG) which aims to promote greater clarity in the Security as a Service model.

Why a Security as a Service Working Group?

Numerous security vendors are now leveraging cloud based models to deliver security solutions. This shift has occurred for a variety of reasons including greater economies of scale and streamlined delivery mechanisms. Regardless of the motivations for offering such services, consumers are now faced with evaluating security solutions which do not run on premises. Consumers need to understand the unique nature of cloud delivered security offerings so that they are in a position to evaluate the offerings and to understand if they will meet their needs.

Research from this working group aims to identify consensus definitions of what Security as a Service means, to categorize the different types of Security as a Service and to provide guidance to organizations on reasonable implementation practices.

Ongoing Research

As part of its charter, the group expects to publish three key pieces of research related to the Security as a Service model over the course of the following six months:

1. A Category Framework Proposal. This will include business and technical elements, as well as a survey on the framework proposal and how it applies to existing categories.

2. Categories of Service v2.0. This document will include sections based on the new framework.

3. Implementation Documents v2.0. These implementation documents will include templates based on the new framework, business and technical elements, and detailed guidance.

To get involved, visit the SecaaS Working Group page.


CloudTrust Protocol (CTP) Working Group Kicks Off at CSA Congress

The Cloud Trust Protocol (CTP) aims to provide a protocol to enable Cloud Users to query Cloud Providers in real time about the security level of their service. It aims to foster transparency and trust in the cloud supply chain, bringing greater visibility to cloud users and providing them with data on a continuous basis in order to inform their daily risk management decisions.

As a monitoring mechanism, CTP also aims to become a pillar of CSA’s future continuous-monitoring-based certification, complementing the STAR third-party certification and attestation in the Open Certification Framework.

Earlier this fall, the Cloud Security Alliance launched the CTP Working Group. The goal of the Working Group is to leverage the initial idea of Ron Knode and turn CTP into a close-to-market solution in the next 18 months, drawing both on recent research conducted by the CSA EMEA Research team and on the inputs of leading stakeholders in the cloud industry, including both providers and users.

The CTP Working Group’s mission is to refine, challenge and extend the existing CTP framework and API specification, establish standard monitored cloud security attributes, implement a pilot and assure the proper integration of CTP in the Open Certification Framework.

The CTP Working Group will be chaired by the following people:

  • John DiMaria – British Standards Institute
  • Tim Sandage – Amazon Web Services
  • Sandeep Singh – Dell

Dr Alain Pennetrat, Senior Researcher at the CSA EMEA, will be the WG Technical Lead.

For more information, visit https://cloudsecurityalliance.org/research/ctp/.  We’ll announce the official kick-off call within the next month.

Introducing the CSA Financial Services Working Group

At our annual CSA Congress today, the CSA is pleased to introduce the new Financial Services Working Group (FSWG), which aims to provide knowledge and guidance on how to deliver and manage secure cloud solutions in the financial industry, and to foster cloud awareness within the sector and related industries. It will complement, enrich and customize the results of other CSA working groups to provide sector-specific guidance.

Why a financial services working group?

Financial services organizations have specific, often unique requirements regarding security, privacy and compliance. The Financial Services Working Group’s main objective is to identify and share the challenges, risks and best practices for the development, deployment and management of secure cloud services in the financial and banking industry.

Research from this working group aims to accelerate the adoption of secure cloud services in the financial industry and enable the adoption of best practices by:

  • Identifying and sharing the industry’s main concerns regarding the delivery and management of cloud services in their sector.
  • Identifying industry needs and requirements (both technical and regulatory).
  • Identifying adequate strategic security approaches to ensure protection of business processes and data in the cloud.
  • Reviewing existing CSA research and identifying potential gaps from the financial services standpoint.

Initial Research

As part of its charter, the group expects to publish four key pieces of research related to the financial services industry:

  1. A survey of existing & potential cloud solutions (products and services) in the banking and financial services sector
  2. Technical and regulatory requirements in the sector
  3. Identification and assessment of risks in cloud solutions in the sector, including interaction with other approaches such as mobile computing, social computing, and big data.
  4. Recommendations and best practices of cloud solutions for the sector.

For more information about the working group, visit https://cloudsecurityalliance.org/research/financialservices


Introducing the CSA’s Anti-Bot Working Group

Among the many exciting new working groups being established and meeting at CSA Congress, today we’d like to also introduce our Anti-Bot Working Group. Chaired by Shelbi Rombout from USBank, this group’s mission is to develop and maintain a research portfolio providing capabilities to assist the cloud provider industry in taking a lifecycle approach to botnet prevention.

Why an anti-bot group?

Botnets have long been a favored attack mechanism of malicious actors.  A recent evolution in botnet innovation has been the introduction of server-based bots as an alternative to single user personal computers.  The access to vastly greater upload bandwidths and higher compute performance has attracted the same adversaries who have built and operated earlier botnets.

As cloud computing rapidly becomes the primary option for server-based computing and hosted IT infrastructure, CSA as the industry leader has an obligation to articulate solutions to prevent, respond to and mitigate botnets operating on cloud infrastructure. The CSA Anti-Bot Working Group is the primary stakeholder for coordinating these activities.

Initial Research

As part of its charter, the group expects to publish two key pieces of research related to botnets – Fundamental Anti-Bot Practices for Cloud Providers, and an Anti-Bot Toolkit Repository for Cloud Providers.

For more information about the working group, visit:  https://cloudsecurityalliance.org/research/antibot

Introducing the CSA’s New Virtualization Working Group

There’s been a lot of noise around the establishment of new working groups at this year’s Congress and today we’d like to also introduce another important addition: the Virtualization Working Group. Chaired by Kapil Raina of Zscaler, the Virtualization Working Group is chartered to lead research into the combined virtualized operating system and SDN technologies.  The group will build upon existing Domain 13 research and provide more detailed guidance as to threats, architecture, hardening and recommended best practices.

Why a Virtualization Working Group?

Virtualization is a critical part of cloud computing. Virtualization provides an important layer of abstraction from physical hardware, enabling the elasticity and resource pooling commonly associated with cloud. Virtualized operating systems are the backbone of Infrastructure as a Service (IaaS).

The CSA Security Guidance for Critical Areas of Focus in Cloud Computing focused exclusively on virtualized operating systems in Domain 13. Recent developments in software defined networking (SDN) show great potential to virtualize data networks in the same way that operating systems have been virtualized. Additionally, the future integration and potential convergence of virtualization of operating systems and networks promise to greatly impact the next generation of cloud architectures. The security issues and recommended best practices of this broader view of virtualization merit additional focused research from a reconstituted version of the CSA Virtualization Working Group.

Initial Research

As part of its charter, the CSA Virtualization Working group plans to publish a Domain 13 Virtualization Whitepaper as part of the CSA Security Guidance for Critical Areas of Focus in Cloud Computing. The paper is scheduled for release at the upcoming RSA Conference taking place in February.

For more information about the working group, visit https://cloudsecurityalliance.org/research/virtualization/

 

Announcing the Consensus Assessments Initiative Questionnaire (CAIQ) V.3 Open Review Period

At CSA Congress 2013 this week we are announcing the open review period of the Consensus Assessments Initiative Questionnaire (CAIQ) v.3 and we hope you will take a few moments and provide your input to this very important initiative.  Lack of security control transparency is a leading inhibitor to the adoption of cloud services. The Cloud Security Alliance Consensus Assessments Initiative (CAI) was launched to perform research, create tools and create industry partnerships to enable cloud computing assessments.

About CAIQ

The CSA is focused on providing industry-accepted ways to document what security controls exist in IaaS, PaaS, and SaaS offerings, providing security control transparency. CAIQ, by design, is integrated with and will support other projects from our research partners.   The CAIQ Questionnaire is available in spreadsheet format, and provides a set of questions a cloud consumer and cloud auditor may wish to ask of a cloud provider. It provides a series of “yes or no” control assertion questions which can then be tailored to suit each unique cloud customer’s evidentiary requirements.

This question set is meant to be a companion to the CSA Guidance and the CSA Cloud Controls Matrix (CCM), and these documents should be used together.  This question set is a simplified distillation of the issues, best practices and control specifications from our Guidance and Controls Matrix, intended to help organizations build the necessary assessment processes for engaging with cloud providers.  The Consensus Assessments Initiative is part of the CSA GRC Stack.

What’s New and Why we Need YOUR Input:

Now in its third version, the CAIQ is entering an open review period: the Consensus Assessments Initiative Working Group is seeking feedback on a set of questions intended to help organizations further build the necessary assessment processes for engaging with cloud providers.

We need input from the cloud community on a number of fronts. First, we would like input on the current CAIQ questions: Are they still relevant to cloud security? Are they written in a way that is easy for all stakeholders to understand? And should they remain important questions to ask during the cloud assessment process?

Second, we would like input on what questions should be added to the assessment to help strengthen the process overall for each domain. Finally, as the CAIQ is a companion to the recently updated CCM V.3, we are seeking input on what questions should be added to the two new control domains, Mobile Security, and Interoperability and Portability.

As an aside, the new CAIQ is now color-coded to match the CCM V.3 domains for easy review.

ACTION: The open review period ends on January 6, 2014

This is your opportunity to provide feedback and comments on v.3 of the CAIQ. Submitting feedback is easy with our 3-step process. Follow the link below to the CSA Interact peer review site:

https://interact.cloudsecurityalliance.org/index.php/caiq/caiq_v3

Thank you in advance for your time and contribution.  We look forward to your input.  If you have any questions, you can contact us by emailing [email protected].

Feel free to reference the CCM documents during your review.

How Snowden Breached the NSA

November 12th, 2013

How Edward Snowden did it and is your enterprise next?

There’s one secret that’s still lurking at the NSA: How did Edward Snowden breach the world’s most sophisticated IT security organization? This secret has as much to do with the NSA as it does with your organization. In this exclusive infographic, Venafi breaks open how Edward Snowden breached the NSA. Venafi is sharing this information and challenges the NSA or Edward Snowden to provide more information so that enterprises around the world can secure their systems and valuable data.

 


NSA Director General Keith Alexander summed up Snowden’s attack well: “Snowden betrayed the trust and confidence we had in him.” The attack on trust, the trust that’s established by cryptographic keys and digital certificates, is what left the NSA unable to detect or respond. From SSH keys to self-signed certificates, every enterprise is vulnerable. This exclusive infographic provides you with the analysis needed to understand the breach and how it could impact you and your organization.

 

Edward Snowden Infographic

DOWNLOAD INFOGRAPHIC (JPG)

Learn more about how Edward Snowden compromised the NSA.

Seeing Through the Clouds

By TK Keanini, CTO, Lancope

The economics of cyber-attacks have changed over the years. Fifteen years ago, it was all about network penetration, but today advanced attackers are more concerned about being detected. Similarly, good bank robbers are concerned about breaking into the bank, but great bank robbers have mastered how to get out of the bank without any detection.

Virtualization Skews Visibility

Because virtual-machine-to-virtual-machine (VM2VM) communications inside a physical server cannot be monitored by traditional network and security devices, the cloud can potentially give attackers more places to hide. Network and security professionals need to be asking themselves what cost-effective telemetry can be put in the cloud and across all of their networks such that the advanced persistent threat can’t escape detection.

The answer, I believe, lies in flow-based standards like NetFlow and IPFIX. Originally developed by Cisco, NetFlow is a family of standard protocols spoken by a wide variety of popular network equipment. IPFIX is a similar standard that was created by the Internet Engineering Task Force (IETF) and is based on NetFlow Version 9. These standards provide the most feasible, pervasive and trusted ledger of network activity for raising operational visibility across both physical and virtual environments.

Regaining Cloud Control

Regaining control of the cloud starts with basic awareness. Security teams need to know what applications, data and workloads are moving into cloud environments, where that data resides at any particular time, who is accessing that data and from where. They need this information in real time, and they need historical records, so that in the event that a breach is suspected it is possible to reconstruct what happened in the past. The recipe for success here is simple: leverage NetFlow or IPFIX from all of your routers, switches, firewalls and wireless access points to obtain a complete picture of everything happening across your network.

Flow-based standards like NetFlow and IPFIX provide details of every conversation taking place on the network. Some people think they need full packet capture of everything traveling on the network, and while that would be nice, it simply cannot scale. However, the metadata of that same traffic, as provided via NetFlow and IPFIX, does scale quite well, and if need be, you can make the decision to also ‘tap’ a flow of interest to gather further intelligence.
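To illustrate how lightweight flow metadata is compared with full packet capture, here is a minimal sketch of a NetFlow v5 collector. It relies only on the published fixed v5 record layout; the listening port, the print-only output, and the lack of IPFIX/template handling are simplifying assumptions for illustration, not a substitute for a production collector.

```python
import socket
import struct

HEADER = struct.Struct("!HHIIIIBBH")              # NetFlow v5 header, 24 bytes
RECORD = struct.Struct("!IIIHHIIIIHHBBBBHHBBH")   # NetFlow v5 flow record, 48 bytes


def ip(n):
    # Convert an unsigned 32-bit integer into dotted-quad notation.
    return socket.inet_ntoa(struct.pack("!I", n))


def parse_v5(datagram):
    """Yield (src, dst, proto, packets, octets) tuples from one NetFlow v5 datagram."""
    version, count, *_ = HEADER.unpack_from(datagram, 0)
    if version != 5:
        return
    for i in range(count):
        fields = RECORD.unpack_from(datagram, HEADER.size + i * RECORD.size)
        (srcaddr, dstaddr, _nexthop, _inp, _out, pkts, octets, _first, _last,
         srcport, dstport, _pad1, _flags, proto, *_rest) = fields
        yield f"{ip(srcaddr)}:{srcport}", f"{ip(dstaddr)}:{dstport}", proto, pkts, octets


def collect(bind_addr="0.0.0.0", port=2055):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((bind_addr, port))
    while True:
        datagram, exporter = sock.recvfrom(65535)
        for src, dst, proto, pkts, octets in parse_v5(datagram):
            # Each line is one "ledger entry" for a conversation seen by the exporter.
            print(f"{exporter[0]} saw {src} -> {dst} proto={proto} pkts={pkts} bytes={octets}")


if __name__ == "__main__":
    collect()
```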

Selecting a Monitoring Solution

By collecting and analyzing flow data, organizations can cost-effectively regain the internal visibility needed to detect and respond to advanced attacks across large, distributed networks and cloud environments. However, not all flow collection and analysis systems are created equal. It is important to determine the following when selecting a security monitoring solution for your physical and virtualized network and/or private cloud:

  1. Does the solution indeed provide visibility into virtual environments? (Some can only monitor physical infrastructure.)
  2. Are you getting an unsampled NetFlow or IPFIX feed? (Sampled flow data does not provide a complete picture of network activity.)
  3. Does the solution conduct in-depth analysis of the flow data? Is the intelligence it supplies immediately actionable?
  4. Does the solution deliver additional layers of visibility including application awareness and user identity monitoring, which can be critical for finding attackers within the network?
  5. Does the solution allow for long-term flow storage to support forensic investigations?

 

It is also important to conduct similar due diligence on the security technologies and practices used by various providers if you decide to outsource your IT services to the public cloud.

 

Thwarting Advanced Attacks

 

As the CTO of Lancope, it is my goal to ensure that the bad guys cannot persist on your networks. No matter which stage you are in with your cloud strategy – whether virtualizing your infrastructure or using a public or private cloud – the collection and analysis of existing flow data can dramatically enhance your security. When every router/switch/wireless access point/firewall is reporting unsampled flow records, and you are able to synthesize that data into actionable intelligence, there is just nowhere for the adversary to hide.

 

For more details on NetFlow for security, check out the Lancope blog or follow me on Twitter @tkeanini.

 

TK Keanini is a Certified Information Systems Security Professional (CISSP) who brings nearly 25 years of network and security experience to his role of CTO at Lancope. He is responsible for leading Lancope’s evolution toward integrating security solutions with private and public cloud-based computing platforms.

Cloud Collaboration: Maintaining Zero Knowledge across International Boundaries

The increasingly global nature of business requires companies to collaborate more and more across borders, exchanging all manner of documents: contracts, engineering documents and other intellectual property, customer lists, marketing programs and materials, and so on. Unfortunately, the combination of recent NSA revelations and new European regulations is likely to make the challenge of securing business data even more difficult than it already is. It is therefore likely that new approaches will be needed that more easily allow trust across borders for confidential document exchange.

Evolving Regulatory Environments

Data shared across national boundaries may be subject to multiple legal frameworks depending on the nature of the information. The regulatory environment in the European Union is evolving significantly, with countries working to update their laws and regulations to protect citizens’ electronic data, even when it is held outside the EU. This includes almost everything a person might post to the Internet, including photos, blogs and so on. The concern is that the EU will strengthen their regulations to a level that will be extremely difficult and expensive for companies to comply with.

There is currently an agreement with the EU (“Safe Harbor”) that US companies can voluntarily participate in if they are holding EU citizens’ data. That agreement could be replaced by much more stringent requirements, though they will not take effect before 2016. US companies are required to implement a number of protections for citizen data under the EU agreement, and there is no provision that allows them to release personal data to the government.

All of these developments were in play before the Edward Snowden revelations took place. Since then, European attitudes on data privacy have hardened even further. In the meantime, attitudes in the rest of the world towards US-based service providers have also soured. To make matters worse, the Snowden information leaks not only exposed “NSA snooping” but also raised suspicions that some vendor equipment and standardized algorithms may have been compromised with backdoors or weaknesses.

New Reality is Impacting Cloud Sharing

Meanwhile, organizations are seeking to leverage cloud computing as much as possible for business agility and cost control reasons. The natural choice will be to use a cloud-based document sharing provider for external collaboration.  A big reason for this is that business partners need to update documents, not just read them. Granting such access to data inside an organization’s data center is problematic from both a security and administrative perspective.

Given this quagmire, organizations that want to use a cloud provider for external collaboration across international boundaries have two choices, both of which are problematic:

  • US Provider: This is a good option for organizations that prefer to use a well-established provider, are not worried about the government or NSA accessing their content and are not concerned about equipment backdoors. But it may not be acceptable to your international business partners.
  • Non-US Provider: This approach may appeal to organizations that want to allay concerns expressed by their foreign partners, especially those in Europe, about US government access to their data. However, a European operator is unlikely to be as well established as a US cloud provider, US businesses will not have any realistic leverage with them and foreign governments are known to dabble in data interception themselves. Finally, depending on who the organization is doing business with, they may face resistance from a non-European partner not willing to use a European cloud provider.

Given these alternatives, some organizations may be tempted to just give up and keep data internal. This approach reintroduces the security and management headaches that most companies were trying to eliminate by adopting cloud sharing in the first place. It also poses a problem for the organization’s partners because they will need to manage a different access model for every business with which they collaborate.

Federation May be the Answer

Fortunately, new federated encryption and key management technologies have emerged to address these problems. As a starting point, consider encryption. Crypto is an obvious solution to the cloud provider dilemma for international collaboration. If the data is encrypted, then it should be protected from unauthorized access. In reality, it’s not that simple.

In most environments, the cloud provider is performing the encryption. As a result, the provider could receive a lawful request to access data under their control, or their systems could be compromised. Both would result in a data breach. Furthermore, some providers may not encrypt data end-to-end. This fact alone may cause European organizations to balk, particularly if regulated data is involved.

There are other options that move control of encryption keys into the hands of data owners. However, most require a “trusted third party” to handle encryption support services such as key management, opening another hole, and inviting problems with European regulations.

Replacing Trust with Trustworthiness

A new class of federation and mediation technologies offers the best hope for cross-border encryption. In this model, the central cloud service provider does not need to be “trusted”. Instead, it serves as a “mediator” to facilitate secure document collaboration, but does not have the necessary data access privileges or keys to actually decrypt files or access them in an unencrypted form.

This architecture consists of a mediator and two or more end-user software elements, and works as follows:

  • The central (cloud-based) mediator receives enrollment requests from the various users who want to collaborate. No distinction is made between the users based on location – they can be anywhere.
  • The mediator enrolls these users into a cryptographically protected group, and establishes a data repository for the documents that will be shared. Using advanced key management techniques, the relevant key material is fragmented, re-encrypted and distributed. As a result, the mediator does not end up with enough key material to decrypt anything, and each user must have the “approval” of the mediator to decrypt documents in the group repository. Note that because documents are initially encrypted at the end stations and the mediator cannot decrypt them, this architecture has removed the need for a “trusted third party” in the cloud. (A minimal key-splitting sketch follows this list.)
  • As users submit documents into the shared repository, these are encrypted and the activity logged.
  • When any user tries to access a document, they submit their (cryptographically authenticated) credentials to the mediator. If the mediator concurs that the request is valid, a portion of key material is released to the requesting user. This missing key fragment, combined with the user’s own key material, allows the document to be decrypted.
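Here is a minimal sketch of the key-splitting idea described above, using a simple XOR split of a symmetric document key. Real mediated systems use stronger constructions (threshold cryptography, proxy re-encryption), so treat the flow and names below as illustrative assumptions rather than the actual mechanism of any specific product.

```python
import secrets


def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


# 1. The end station generates a document key and encrypts the file locally
#    (the encryption step itself is omitted from this sketch).
document_key = secrets.token_bytes(32)

# 2. The key is split into two shares: one stays with the user, one goes to the
#    mediator. Either share on its own reveals nothing about the key.
user_share = secrets.token_bytes(32)
mediator_share = xor(document_key, user_share)

assert mediator_share != document_key   # the mediator cannot decrypt on its own
assert user_share != document_key       # neither can the user without approval

# 3. When the mediator approves an authenticated access request, it releases its
#    share, and only then can the requesting user reconstruct the document key.
recovered_key = xor(user_share, mediator_share)
assert recovered_key == document_key
```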

Advancing Security through Mediation

Besides delivering confidentiality, this federated architecture offers advanced services that basic encryption facilities do not. The key enabler is the mediation function, since it serves as the gatekeeper for data access. Using the mediator, business partners can pre-agree on special conditions for document access, in addition to the normal release that takes place when participating users authenticate themselves to the system.

As a simple example, access revocation becomes trivial. If the group agrees to revoke a person’s access to documents, the mediator can be instructed to deny access to that person, and the change takes effect immediately, since the mediator must approve every document release. Contrast this with certificate revocation, which can take a significant amount of time before actually terminating access.

For a more powerful example, let’s assume that collaborating companies agree that they want to ensure that if their participating end-user is on vacation or leaves the company, protected documents can still be accessed. Using the mediator, they would establish a cryptographically protected “release circuit,” which would authorize document access when a combination of other staff agrees.

A typical example might be the combination of a member of the executive team, an IT administrator, and a representative from HR. A member of all three teams would need to authorize the release using a cryptographically secure process. Only then would the document be decrypted and delivered to whomever the team selects. The mediator logs all this activity in an encrypted, centralized log facility. Since all participants can audit activity, there’s no risk of a rogue IT person compromising the logs.
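A hedged sketch of such a release circuit: the mediator hands over its key share only when at least one authenticated approver from each required team has signed off. The team names and data structures are invented for illustration; a real implementation would verify cryptographic signatures rather than plain role labels.

```python
REQUIRED_TEAMS = {"executive", "it", "hr"}   # illustrative policy agreed by the partners


def circuit_satisfied(approvals):
    """approvals: iterable of (approver_id, team) pairs that have been authenticated."""
    return REQUIRED_TEAMS.issubset({team for _, team in approvals})


def release_key_share(approvals, mediator_share, audit_log):
    # The mediator logs every decision so all participants can audit activity later.
    if not circuit_satisfied(approvals):
        audit_log.append(("denied", sorted(team for _, team in approvals)))
        return None
    audit_log.append(("released", sorted(approver for approver, _ in approvals)))
    return mediator_share


# Usage: two approvals are not enough; adding HR completes the circuit.
log = []
share = b"<mediator key share>"
print(release_key_share([("alice", "executive"), ("bob", "it")], share, log))                    # None
print(release_key_share([("alice", "executive"), ("bob", "it"), ("carol", "hr")], share, log))   # share
```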

A federated architecture also supports controlled access within documents for searching and eDiscovery. When an end-station encrypts a file, it can also extract metadata (including keywords, revision history, etc.), encrypt that metadata using a different set of keys, and pass it to the mediator for storage. As with the document itself, mediators on their own cannot decrypt the metadata. However, mediators can implement a release circuit for metadata as well. For example, business partners could agree that a combination of an executive and an IT person at any collaborating firm can “unlock” the metadata, so that they can begin an eDiscovery search or security investigation. Once they find the document subset of interest (if any), they can initiate the (more restrictive) document release process on just those files. In this way, the companies involved can still meet their data governance requirements without compromising overall security.

Conclusion

Privacy concerns and emerging government regulations are making secure document sharing across international boundaries significantly more difficult and expensive to implement. This threatens the ability of organizations to move to cloud-based solutions, decreasing agility and efficiency. Fortunately, new security architectures such as federated and mediated encryption are capable of meeting these challenges. Like all privacy systems, such technologies must be properly deployed and maintained to be effective. Since they eliminate the need for a trusted third party in the cloud, they offer the best hope for establishing a trustworthy framework for secure document collaboration locally or internationally.

About the Author: Jonathan Gohstand is an expert in security and virtualization technologies, and Vice President of Products & Marketing at AlephCloud, a provider of cloud content privacy solutions.

Protecting Your Company from Backdoor Attacks – What You Need to Know

November 14th, 2013

By Sekhar Sarukkai

“We often get in quicker by the back door than the front” — Napoleon Bonaparte

A rare example of a backdoor planted in a core industry security standard has recently come to light. It is now widely believed that the NSA compromised trust in NIST’s encryption standard (called the Dual EC DRBG standard) by adding the ability for the NSA to decipher any encrypted communication over the Internet. This incident brings to the fore the question of how much trust is warranted in the technologies that enable business over the Internet today.

There are only a few organizations in the world (all with three-letter acronyms) that can pull off a fundamental backdoor coup such as this. More commonly, entities undertaking backdoor attacks do not have that level of gravitas or such far-reaching ambitions; instead, the majority of these entities tend to leverage backdoors to undertake cybercrime missions ranging from advanced persistent threats on specific target companies to botnet and malware/adware networks for monetary gain. In these instances, cloud services are a favorite vector for injecting backdoors into the enterprise.

What can we really trust?

In his 1984 Turing Award acceptance speech, Ken Thompson pointed out that trust is relative, in what is perhaps the first major paper on this topic: Reflections on Trusting Trust, which describes the threat of backdoor attacks. He describes a backdoor mechanism which relies on the fact that people only review source (human-written) software, and not compiled machine code. A program called a compiler is used to create the latter from the former, and the compiler is usually trusted to do an honest job. However, as he demonstrated, this trust in the compiler to do an honest job can be, and has been, abused.

Inserting backdoors via compilers

As an example, Sophos labs discovered a virus attack on Delphi in August 2009. The W32/Induc-A virus infected the program compiler for Delphi, a Windows programming language. The virus introduced its own code to the compilation of new Delphi programs, allowing it to infect and propagate to many systems, without the knowledge of the software programmer. An attack that propagates by building its own Trojan horse can be especially hard to discover. It is believed that the Induc-A virus had been propagating for at least a year before it was discovered.


While backdoors in compilers are more frequent than backdoors in standards, they are not as prevalent as backdoors in open-source software. Enterprises freely trust closed- and open-source software as evidenced by its extensive use today. In our experience, we have not come across any corporate enterprise that does not use (and hence trust) at least some open-source software today.

The open-source conundrum

The global software contributor base and publicly reviewable source code are both hallmarks of an open-source ecosystem that provides transparency and value for free. Yet these are the same characteristics that pose the biggest risk of backdoor exploits into enterprises by malicious actors intent on capturing competitive advantage. Unlike influencing (or writing) an industry standard, which requires surpassing huge barriers, open-source attacks can target any of the millions of open-source projects (more than 300,000 hosted on SourceForge alone, at last count) across hundreds of mirror sites, opening up a broad attack surface.

One of the earliest known open-source backdoor attacks occurred in none other than the Linux kernel, exposed in November 2003. This example serves to show just how subtle such a code change can be. In this case, a two-line change appeared to be a typographical error, but actually gave the caller of the sys_wait4 function root access to the system.

Hiding in plain sight

Given the complexity of today’s software, it is possible for backdoors to hide in plain sight.

More recently, many backdoors have been exposed, including an incident last September with an official mirror of SourceForge. In this attack, users were tricked into downloading a compromised version of phpMyAdmin that contained a backdoor. The backdoor contained code that allowed remote attackers to take control of the underlying server running the modified phpMyAdmin, which is a web-based tool for managing MySQL databases. In another case that came to light as recently as August 2013, a popular piece of open-source ad software (OpenX) used by many Fortune 500 companies was determined to have a backdoor giving hackers administrative control of the web server. Worse than the number of these backdoors is the time elapsed between the planting of a backdoor and its actual discovery. These backdoors often go unnoticed for months.

How to prevent backdoor attacks

The reality in today’s enterprise is that software projects and products with little or unknown trust are leveraged every day. We have found that many of these backdoors elude malware detection tools because there are no executables. Enterprises must now look for new ways to track the open-source projects that enter the enterprise from external, untrusted sources, such as open-source code repositories, and must be able to rapidly respond to any backdoors discovered in these projects. If not, these backdoors have the potential to inflict serious and prolonged harm on the enterprise.
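One simple, defensive control in this direction is to pin and verify checksums of open-source artifacts before they enter the build pipeline, so that a tampered mirror or injected backdoor at least changes a hash someone is watching. Below is a minimal sketch; the JSON inventory format and file paths are assumptions for illustration only.

```python
import hashlib
import json
import sys


def sha256_of(path, chunk_size=1 << 20):
    # Hash the file in chunks so large archives do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify(inventory_path, artifact_paths):
    """inventory: {"phpMyAdmin-x.y.tar.gz": "<expected sha256>", ...}"""
    with open(inventory_path) as f:
        expected = json.load(f)
    failures = []
    for path in artifact_paths:
        name = path.rsplit("/", 1)[-1]
        actual = sha256_of(path)
        if expected.get(name) != actual:
            failures.append((name, actual))
    return failures


if __name__ == "__main__":
    bad = verify(sys.argv[1], sys.argv[2:])
    for name, digest in bad:
        print(f"MISMATCH or unknown artifact: {name} sha256={digest}")
    sys.exit(1 if bad else 0)
```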

 

Thoughts and key takeaways: Cloud Security Alliance CEE Summit

The Cloud Security Alliance Central Eastern Europe Summit offered a good opportunity to learn about the cloud computing market in areas of Europe that receive less coverage. The congress, held in the center of the old city of Ljubljana, provided an interesting mixture of information security professionals along with various cloud providers and end users who came to explore the news in the dynamic world of cloud computing.

And the news definitely came in a storm. The first speaker of the morning was Raj Samani, EMEA CTO for McAfee, who gave an interesting look at the ecosystem of cybercrime. In an excellent performance, Mr. Samani described how cloud models are also propagating into the cybercrime ecosystem. “Cybercriminals today do not need to be disturbed computer geniuses,” he explained. “All you need to have is a credit card.”

Cybercrime usually contains three components: research, crimeware and infrastructure. All of those components can be acquired in the same models of cloud services we know from our daily lives, the McAfee CTO revealed as he ran slides describing different services, starting from spam and botnets for hire and going all the way up to e-mail hacking services and even guns and hit-man-as-a-service websites. While we have known for a long time that those services exist, it was hard not to be impressed by the sophistication and granularity of each service’s details. The level of transparency and detailed SLAs that some of those “hackers as a service” have adopted could even provide some lessons to traditional cloud providers.

In the next presentation, François Gratiolet, EMEA CISO for Qualys, gave a brief review of the business drivers and market characteristics of security-as-a-service offerings. “SecaaS can improve the business’s security by enabling the organization to focus on its key assets and risk management while maintaining flexibility and agility,” he explained, “but the offering still needs to mature and provide more governance, liability and transparency.”

The call for more transparency from cloud providers is repeated at every cloud security conference, and some cloud providers recognize it as a business advantage. Jan Bervar, CTO for NIL, presented how NIL, a local IaaS and PaaS provider, has taken the strategy of providing secure cloud services that are trustworthy and transparent. “We set controls and strict standards on our services,” explained Mr. Bervar as he listed the top cloud computing threats and how NIL’s offering protects customers against those risks.


Governments and the EU Commission are also aware of the fact that they need to help cloud consumers and cloud providers increase trust among themselves. The EU strategy for cloud computing includes a plan to “cut through the jungle of laws and regulation” that many stakeholders currently encounter. A big part of this process depends on the new EU data protection law that is being promoted as we speak. Gloria Marcoccio, from the Italian chapter of the Cloud Security Alliance, reviewed the progress of the new EU data protection legislation and its effect on cloud computing players. That lecture, along with others such as lawyer Boris Kozlevcar’s presentation about SLA and PLA challenges in the cloud, emphasized how important governments’ role is in enabling the business and legal framework for cloud computing practices.

When discussing the future of cloud computing, we are starting to hear more about “cloud brokerage”. Dr. Jesus Luna Garcia from the Cloud Security Alliance explained the role of cloud brokerage in his presentation about Helix Nebula, a cloud environment built to provide computing resources for science and academic organizations in the EU. The Helix Nebula project acts as an intermediary between consumers and a variety of cloud services and provides added-value services such as a standard security policy, secure data transfer across providers, continuous monitoring and different service levels. This interesting model is a good indication of how future implementations of cloud brokerage will look.

Shifting from the legal and business aspects to the technology challenges, interesting presentations came from Trend Micro, which presented its solution for virtual environments and the future of security in hybrid clouds. The new software-defined networking (SDN) technology was also introduced in a presentation by researchers from the University of Ljubljana, who elaborated on the technology’s challenges and benefits. SDN will probably change the way we treat network security in the cloud and has good potential to kick-start new technologies dealing with the threats of tomorrow.

And of course, no security conference these days is complete without discussing the challenges of government access to data, inspired by the PRISM and Snowden leaks. The two concluding presentations, from Astec and the Slovenian CERT, discussed the effects of the latest news about the extent of the US and other governments’ pursuit of data access. There is much to be said on this topic, and it is hard to summarize it in one article, but the bottom line is that governments across the globe are spying on private communication and will probably continue to do so. The effect on cloud computing adoption will probably remain only in the short term, since the cloud value proposition is just too high to ignore.

 

Moshe is a security entrepreneur and investor with over 20 years’ experience in information security in various industry positions. He is currently focused on cloud computing as a board member of the Cloud Security Alliance Israeli Chapter, a public speaker on various cloud topics, and an investor in Clarisite and FortyCloud, startup companies with innovative security solutions. More information can be found at: www.onlinecloudsec.com


 

What should cloud-enabled data security protections look like in the future?

While listening to one of my favorite podcasts about two months ago, I heard a quote from William Gibson that really resonated with me. He said, “The future is here already, it’s just not evenly distributed.” As I was driving along, continuing to listen, it really got the synapses in my brain firing. I’ve been spending a lot of time lately thinking about a long-term strategic vision to enable device-agnostic, data-centric protection for the future. My goal is to enable the integrated use of company data across cloud, mobile, and enterprise assets.

 

As I continued to listen, I started to wonder: if I were to look at the unevenly distributed future that is now, what and where are the enterprise-class security, risk, and privacy controls that theoretically should exist today, and that would let me truly break free of the barriers currently preventing me from delivering a holistic, endpoint-agnostic, data-centric protection vision?

 

As I pondered the question that drove me to blog, I set out to evaluate the industry and see what pieces and parts are actually available, and how far away we are from being able to build this ecosystem of ubiquitous, platform-agnostic data controls, enabling me to use any cloud app, the big three mobile devices (iOS, Android, and Windows Mobile), and enterprise-class endpoints (Windows, Mac, and Linux).

 

Defining success in my mind meant setting a framework with a core set of principal requirements:

 

1)    Controls must run on all my platforms.

2)    Data protections must be able to be applied at rest, in use, in motion, and enable data destruction based on an automated function supporting a legal data retention schedule.

3)    The controls must be capable of enterprise class management for any of the deployed technologies.

4)    The technology must allow for the full spectrum use of the data across platforms. Essentially read, write, modify.

5)    The controls must be able to employ several key data protection principles automatically:

  • Identification and permanent metadata tagging of who created the data (data owner)
  • Automated user interaction asking, “What is the data?” (data classification)
  • Automated and end-user-managed policy application of who should have access to the data (access control)
  • Automated and end-user-manageable policy application of what the group should be able to do with the data (permissions)
  • Automated workflow review of access rights over time (attestations)
  • Automated ability to recognize data that should be encrypted, and to give the user the option to choose encryption
  • The solution must allow an organization to retain/recover/rotate/destroy/retrieve/manage the encryption keys
  • Centralized logging of the 5 W’s: who, what, when, where, why

6)    There must be minimal user interaction or behavior change in the way users are used to working with and creating the data

7)    The ability to recognize the “dual personas” of devices supporting user data creation (personal data, and corporate data existing on the same asset), only instantiating the controls for corporate data.

 

It has been an interesting two months since I set out on this quest. I’ve met with at least 70 different security technology vendors, scoured the internet looking for new ideas and new technologies, called friends all around the world to hear what they have been seeing, and have even been meeting with VC firms to see what’s on their radar. The answer I have come up with so far is that I believe Mr. Gibson is sort of correct. The future is definitely almost here: the technologies are independently scattered, and you can’t yet accomplish everything I set out to do from one (or even three) of them, but I believe that in the very near future we could accomplish this goal with some effort.

 

Let me tell you why I believe this. Today, if you think about cloud, mobile, and enterprise data platforms/assets, everyone does basically the same things with them. They generate data on or with them. If you think about my requirements, it’s odd to me that these have all been solved really everywhere but at the endpoint. If you boil down the core problem, we want data to be accessible on endpoints, but yet we have no common middle component that enables us to enact all of these reasonably sane security, risk and privacy requirements.

 

It seems pretty simple when you look at the common denominator and think about what we really need. The data is always the same on every device; we just need something that can travel with the data, a wrapper if you will, that enables all of my security, risk and privacy requirements.

 

I believe that what we really need to tie all of these requirements into one holistic solution is a multi-use agent that runs on all of our platforms, that can be employed when and where needed, with appropriate provisioning, and that would essentially provide the ability to share and interact with this secured data, all while retaining control with confidence.
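To make the wrapper idea concrete, here is a hedged sketch of what such a data-centric envelope might carry. Every field and method name is an assumption invented for illustration, and the ciphertext field stands in for encryption performed by a hypothetical endpoint agent; nothing here reflects a specific vendor product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ProtectedDocument:
    """A hypothetical envelope that travels with the data across endpoints and clouds."""
    owner: str                   # who created the data (permanent metadata tag)
    classification: str          # e.g. "public", "internal", "confidential"
    acl: dict                    # principal -> set of permissions ("read", "write")
    retention_until: datetime    # drives automated destruction on a retention schedule
    ciphertext: bytes            # the payload, encrypted at creation time
    audit_log: list = field(default_factory=list)

    def record(self, who, what, where, why):
        # Centralized logging of the 5 W's (the "when" is stamped here).
        self.audit_log.append((who, what, datetime.now(timezone.utc), where, why))

    def authorize(self, principal, action, where, why):
        allowed = action in self.acl.get(principal, set())
        self.record(principal, f"{action}:{'allowed' if allowed else 'denied'}", where, why)
        return allowed


# Usage: a confidential file wrapped at creation, then opened by a colleague.
doc = ProtectedDocument(
    owner="[email protected]",
    classification="confidential",
    acl={"[email protected]": {"read"}},
    retention_until=datetime(2030, 1, 1, tzinfo=timezone.utc),
    ciphertext=b"<encrypted by the hypothetical endpoint agent>",
)
print(doc.authorize("[email protected]", "read", where="laptop-42", why="project review"))
```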

 

If we look at where the future is taking us, data is generated on endpoints and stored in files, it is being manually moved to cloud platforms like Dropbox for example, or automatically through services like iCloud, and it is being entered into cloud applications (private and public) as raw text. If we look at how we create data, we use relatively the same sets of technologies across our end user compute nodes. We use things like office productivity suites, PDF generators, or manually input or batch load data for example. So if we make the leap that fundamentally we all compute the same, and while computing, we all really generate data the same, what is stopping us from being able to take the next logical step, automatically inserting a corporate protection layer for the data we generate at the time of creation that meets all of these requirements?

 

I think it is obvious why there are no heterogeneous options right now. Even though the basic principles of data protection I described are not a mystery, the fundamental challenge is how we get our vendors to develop a data security enablement model that supports our overarching need to share and use corporate information in a cloud/mobile/enterprise model, with the appropriate protections for information in this cross-platform world.

 

We have a myriad of point technologies that solve the specific requirements I laid out for databases, file systems, proxied access to cloud apps, and email, but the one thing we lack (which in my mind is the most critical thing) is a cross-platform, endpoint DRM-like technology that runs on every major end-user compute platform (Windows, Mac, Linux, iOS, Android, Windows Mobile), allowing us to apply all of the principles I talked about while still natively using the various apps and tools we work with today. It seems silly in retrospect: when you look at the common denominator, the most logical place to start is the data itself.

 

V.Jay LaRosa

Senior Director, Global Converged Security Architecture

Office of the CSO

As the Senior Director of Converged Security Architecture for one of the world’s largest providers of business outsourcing solutions, V.Jay leads a global team of security architects with responsibility for the end-to-end design and implementation of ADP’s converged security strategy and business protection programs. V.Jay and his team of converged security architects cover the entire spectrum of cyber security and the physical security protections employed at ADP. Additionally, V.Jay is responsible for the Red Team program at ADP, as well as the Advanced Fraud Technologies program, which is used to identify, design, and oversee the implementation of a myriad of advanced techniques and technologies used to defend ADP and its clients’ funds.

A New Business Case for “Why IT Matters” in the Cloud Era

October 23rd, 2013

Author: Kamal Shah @kdshah 

Knowledge workers know that cloud services make our work lives easier, drive business agility and increase productivity. For instance, when colleagues need to share a file that’s too large to attach to an email message, they simply toss it into a cloud-based file sharing service and get back to work. It’s great that people find their own ways to team up, but can there be “too much of a good thing”?

Too much of a good thing?

Recently we analyzed the cloud services usage of more than 3 million end users across more than a hundred companies spanning a wide range of industries. We learned that the average enterprise uses 19 different file sharing & collaboration services. That’s right, 19.

Certainly there is benefit in groups and individuals using the services that best meet their needs, but with certain services, like file sharing and collaboration, an unmanaged policy can actually impede collaboration and productivity.

How? The collaborative value of these services increases as more employees use the same services. So, there is a productivity value in standardization.

Think about this common scenario

Consider a cross-functional team working on Project Launchpad. At the kick-off meeting, they agree to use a file sharing site for the project. The marketing team recommends Dropbox, the engineering team recommends Hightail, the customer service team recommends Box, the finance team recommends Egnyte, and so on. Now add multiple projects, and keeping track of which project lives in which file sharing service quickly becomes a problem for both the individual and the organization as a whole.

No company uses 19 different CRM services or 19 ERP services or 19 email services or 19 project management applications. Similarly, it likely doesn’t make sense to use 19 different file sharing services.

Beyond productivity to economics and risk management

Aside from the productivity benefits, there is also economic value in procuring enterprise licenses over thousands of individual licenses, as well as security benefits in managing data in a smaller number of third-party cloud service providers. This latter point is most important for organizations that must maintain good oversight of where their data is—and today that is every business, large or small.

CIOs should encourage employees to identify and test new services that can boost productivity. Our customer Brian Lillie, CIO of Equinix, says that the CIO’s job is to be the “chief enabler” of the business.

The new role of “Chief Enablement Officer”

Being the chief enabler means understanding not just which cloud services employees have purchased or signed up for, but which ones they actually use and get real value out of. Then it’s the CIO’s responsibility to evaluate those services for risks and benefits, and standardize on and promote the ones that best meet the organization’s needs.

In this way, the CIO can maximize the value of the services and help drive an organized and productive movement to the cloud.

To see the services most used by today’s enterprises, check out the 2013 Cloud Adoption & Risk Report below, which summarizes data across 3 million enterprise users.

SSH – Does Your “Cloud Neighbor” Have an Open Backdoor to Your Cloud App?

October 22, 2013

By Gavin Hill, Director, Product Marketing & Threat Research Center at Venafi

Secure Shell (SSH) is the de facto protocol used by millions to authenticate to workloads running in the cloud and transfer data securely. Even more SSH sessions are established automatically between systems, allowing those systems to securely transfer data without human intervention. In either case, this technology underpins the security of vital network communications. According to the Ponemon Institute, organizations recognize SSH’s role in securing network communication and rank threats to their SSH keys as the most alarming threat arising from failure to control trust in the cloud.

SSH authentication is only as strong as the safeguards around the authentication tokens: the SSH keys. Failure to secure and protect these keys can compromise the environment, breaking down the trust that SSH should establish. Malicious actors take advantage of common mistakes in key management; the following are some of the pitfalls organizations fall prey to.

The Weakest Link

Malicious actors often target SSH keys because SSH bypasses the authentication controls that typically regulate a system’s elevated privileges. In their efforts to exploit SSH, malicious actors naturally focus on compromising the weakest link in a highly secure protocol—human error and mismanagement of the private SSH keys.

The risks are real, and so are the costs. According to the Ponemon Institute, the average U.S. organization risks losing up to $87 million per stolen SSH key.

Lack of control

Less than 50% of organizations have a clear understanding of their encryption key and certificate inventory—let alone efficient controls to provision, rotate, track, or remove SSH keys. System administrators usually deploy keys manually, with different groups managing their own independent silos, leading to a fractured, distributed system. Without centralized monitoring and automated tools, system administrators cannot secure or maintain control of keys.

A report issued by Dell SecureWorks’ Counter Threat Unit revealed that one in every five Amazon Machine Images (AMI) has unknown SSH keys, each of which represents a door into the system to which an unknown party has access. As shocking as this fact seems, it is actually not surprising when you consider the ad-hoc management practices common in many organizations. In performing their jobs, application administrators copy their host key to multiple workloads but often fail to document the locations. As employees move on to new jobs, the keys linger, and the organization loses all ability to manage and assess its systems’ exposure to unauthorized access.
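Even a simple inventory script can surface keys that nobody remembers authorizing. The sketch below is a minimal example, with illustrative paths and a hypothetical approved_fingerprints.txt inventory, that compares the SHA-256 fingerprints found in authorized_keys files against an approved list:

    #!/usr/bin/env python3
    # Minimal sketch: flag authorized_keys entries whose SHA-256 fingerprints are not
    # in an approved inventory. Paths and the inventory file name are illustrative.
    import base64
    import glob
    import hashlib

    APPROVED_FILE = "approved_fingerprints.txt"   # hypothetical list, one SHA256:... value per line
    KEY_FILES = glob.glob("/home/*/.ssh/authorized_keys") + ["/root/.ssh/authorized_keys"]

    def fingerprint(b64_blob):
        # OpenSSH-style SHA-256 fingerprint: base64 of the digest, padding stripped
        digest = hashlib.sha256(base64.b64decode(b64_blob)).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    approved = {line.strip() for line in open(APPROVED_FILE) if line.strip()}

    for path in KEY_FILES:
        try:
            entries = open(path).read().splitlines()
        except OSError:
            continue
        for entry in entries:
            parts = entry.split()
            if len(parts) < 2 or entry.lstrip().startswith("#"):
                continue
            # Format: [options] key-type base64-blob [comment]; the blob follows the key type.
            blob = parts[1] if parts[0].startswith(("ssh-", "ecdsa-")) else (parts[2] if len(parts) > 2 else None)
            if blob is None:
                continue
            try:
                fp = fingerprint(blob)
            except ValueError:
                continue
            if fp not in approved:
                print("UNKNOWN KEY in %s: %s" % (path, fp))

Run regularly, for example from configuration management, the same check can also catch keys that linger after their owners have moved on.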

Injected elevated trust

An SSH server uses public-key cryptography to validate the authenticity of the connecting host. If the server simply accepts a public key without truly validating the identity of the connecting host, however, the server could easily give an attacker elevated access.

The mass assignment vulnerability, which is still largely unpatched, offers one example of an injected elevated trust exploit. In secure networks, users require root or admin privileges to append their own SSH keys to the authorized key file. Using the mass-assignment vulnerability, however, attackers create accounts that have the appropriate permissions. They then add their own SSH keys to gain the elevated privileges required to compromise the system.

Recycled rogue workloads

Cloud computing environments often reuse workloads. Amazon Web Services (AWS), for example, offers thousands of AMIs. However, you should exercise extreme caution when reusing a workload; educate yourself about the workload’s applications and configuration. In addition to rogue SSH keys, you may also find specific compromised packages.

For example, earlier this year hackers compromised thousands of web servers’ SSH daemons with a rootkit. The rootkit rendered companies’ key and password rotation policies futile: the SSH daemon simply yielded the new credentials to the attackers. The SSH rootkit completely replaced the ssh-agent and sshd binaries; only reinstalling SSH completely eliminated the threat.
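A quick sanity check against that kind of binary replacement is to ask the package manager whether the SSH binaries still match what the vendor shipped. The sketch below assumes an RPM-based host; Debian-family systems could use debsums for the same purpose:

    #!/usr/bin/env python3
    # Minimal sketch: check whether installed OpenSSH binaries still match the vendor
    # package on an RPM-based host.
    import subprocess

    result = subprocess.run(
        ["rpm", "-V", "openssh-server", "openssh-clients"],
        capture_output=True, text=True,
    )

    # 'rpm -V' is silent when a package verifies cleanly; otherwise each line starts
    # with a flag string in which '5' means the file digest no longer matches the package.
    suspicious = []
    for line in result.stdout.splitlines():
        fields = line.split()
        if not fields:
            continue
        if "5" in fields[0] and any(name in line for name in ("/sshd", "/ssh-agent", "/usr/bin/ssh")):
            suspicious.append(line)

    if suspicious:
        print("Possible tampering with SSH binaries:")
        for line in suspicious:
            print("  " + line)
    else:
        print("OpenSSH binaries match the installed packages.")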

BEST PRACTICES

Establish a baseline

Cloud computing has driven a proliferation of SSH keys, and administrative efforts have not kept pace. Yet when you fail to understand the SSH deployment in your organization—which keys give access to which systems and who has access to those keys—you risk losing intellectual property and, worse, losing control of the workloads.

Inventory the entire organization on a regular basis to discover SSH keys on workloads running in the cloud and in the datacenter. Establish a baseline of normal usage so that you can easily detect any anomalous SSH behavior.
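As one illustration of what such a baseline might look like, the sketch below (which assumes a typical OpenSSH syslog format and an illustrative log path) counts which key fingerprints log in as which users from which sources, and flags combinations seen only once:

    #!/usr/bin/env python3
    # Minimal sketch: build a rough baseline of SSH public-key logins from the auth log.
    # The log path and line format assume a typical OpenSSH syslog setup.
    import collections
    import re

    LOG_PATH = "/var/log/auth.log"   # often /var/log/secure on RHEL-family systems
    PATTERN = re.compile(
        r"Accepted publickey for (?P<user>\S+) from (?P<src>\S+) .*?(?P<fp>SHA256:\S+)"
    )

    baseline = collections.Counter()
    with open(LOG_PATH, errors="replace") as log:
        for line in log:
            match = PATTERN.search(line)
            if match:
                baseline[(match["user"], match["src"], match["fp"])] += 1

    # Combinations seen only once are worth a second look against the established baseline.
    for (user, src, fp), count in sorted(baseline.items(), key=lambda kv: kv[1]):
        flag = "  <-- rare" if count == 1 else ""
        print("%5d  %-12s %-16s %s%s" % (count, user, src, fp, flag))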

Enforce policies

Frequent credential rotation is a best practice, and you should make no exception for SSH keys. Unfortunately, many organizations leave SSH keys on systems for years without any rotation. Although most cloud computing workloads are ephemeral, they are typically spun up from templates with existing SSH credentials, which are rarely rotated. Malicious actors can also crack vulnerable versions of SSH, or SSH keys that use exploitable hash algorithms or weak key lengths.

To secure your environment, enforce cryptographic policies that prohibit the use of weak algorithms and key lengths, implement version control, and mandate key rotation.
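As a rough illustration of such a policy check, the sketch below (the scan paths and the 2048-bit threshold are illustrative choices) uses ssh-keygen to flag DSA keys and short RSA keys:

    #!/usr/bin/env python3
    # Minimal sketch: flag SSH public keys that use short moduli or deprecated key types.
    import glob
    import subprocess

    MIN_RSA_BITS = 2048
    SCAN = glob.glob("/etc/ssh/*.pub") + glob.glob("/home/*/.ssh/*.pub")

    for path in SCAN:
        # 'ssh-keygen -lf' prints: "<bits> <fingerprint> <comment> (<type>)"
        out = subprocess.run(["ssh-keygen", "-lf", path],
                             capture_output=True, text=True).stdout.strip()
        if not out:
            continue
        fields = out.split()
        bits, key_type = int(fields[0]), fields[-1].strip("()")
        if key_type == "DSA" or (key_type == "RSA" and bits < MIN_RSA_BITS):
            print("WEAK: %s: %d-bit %s" % (path, bits, key_type))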

Scrutinize workload templates

If you choose to use prebuilt templates, implement an assessment process before the workload is used in production. Do not simply accept a pre-built workload template created by someone you do not know. First carefully inspect the template; ensure that the applications are patched, the workload configuration is secure, and that there are no rogue applications or keys that may be used as a backdoor.


Venafi Blog URL:

http://www.venafi.com/ssh-an-open-backdoor-to-your-cloud-app/

Patching the Perpetual MD5 Vulnerability

October 17, 2013

By Gavin Hill, Director, Product Marketing & Threat Research Center at Venafi

Earlier this month, Microsoft updated the security advisory that deprecates the use of MD5 hash algorithms for certificates issued by certification authorities (CA) in the Microsoft root certificate program. The patch has been released so that administrators can test its impact before a Microsoft Update on February 11, 2014 enforces the deprecation. This is an important move in the fight against the cyber-criminal activity that abuses the trust established by cryptographic assets like keys and certificates.

For over 17 years, cryptographers have been recommending against the use of MD5. MD5 is considered weak and insecure; an attacker can easily use an MD5 collision to forge valid digital certificates. The most well-known example of this type of attack is when attackers forged a Microsoft Windows code-signing certificate and used it to sign the Flame malware. Although the move to deprecate weak algorithms like MD5 is most certainly a step in the right direction, there still are some questions that need to be addressed.

Why is the Microsoft update important?

Cryptographers have been recommending the use of hash algorithms other than MD5 since 1996, yet Flame malware was still successful in 2012. This demonstrates that security professionals have failed to identify a vulnerability in their security strategy. However, cyber-criminals have most certainly not missed the opportunity to use cryptographic keys and digital certificates as a new way into enterprise networks. That Microsoft will soon enforce the deprecation of MD5 indicates that vendors and security professionals are starting to take note of keys and certificates as an attack vector.

Research performed by Venafi reveals that 39% of hash algorithms used by global 2000 organizations are still MD5. Such widespread use is worrying on a number of different levels as it clearly highlights that organizations either do not understand the ramifications of using weak algorithms like MD5 or that they simply have no idea that MD5 is being used in the first place. Research from the Ponemon Institute provides evidence that organizations simply don’t know that MD5 is being used—how could they when more than half of them don’t even know how many keys and certificates are in use within their networks?
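Getting that visibility does not have to be hard. As a rough illustration, the sketch below (which uses a hypothetical host list and assumes the third-party Python 'cryptography' package is available) reports the signature hash algorithm of the certificate each endpoint presents:

    #!/usr/bin/env python3
    # Minimal sketch: report the signature hash algorithm of certificates presented by
    # internal endpoints. The host list is illustrative.
    import ssl
    from cryptography import x509   # third-party package, assumed installed

    HOSTS = ["intranet.example.com", "mail.example.com"]   # hypothetical inventory

    for host in HOSTS:
        pem = ssl.get_server_certificate((host, 443))
        cert = x509.load_pem_x509_certificate(pem.encode())
        algo = cert.signature_hash_algorithm.name if cert.signature_hash_algorithm else "unknown"
        status = "REPLACE" if algo == "md5" else "ok"
        print("%s: signed with %s [%s]" % (host, algo.upper(), status))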

What’s the impact of the security update?

Microsoft’s update is not to be taken lightly; this is probably why Microsoft has given organizations six months to test the patch. Once they have deployed the update, administrators will be able to monitor their environments for weak cryptography and take action to protect themselves from the vulnerabilities associated with MD5 hash algorithms or inadequate key sizes. Options available to administrators include the ability to block cryptographic algorithms that override operating system settings.

However, if a business has critical certificates that use MD5, enforcing such a security policy could result in system outages that impact the business’s ability to service customer requests. For this reason, the update allows administrators to choose whether to opt in or opt out of each policy independently, as well as to log access attempts by certificates with weak algorithms without taking any action to protect the system. The update also allows policies to be set based on certificate type, such as all certificates, SSL certificates, code-signing certificates, or time-stamping certificates.

Although I understand that Microsoft is allowing customers to choose how wide a net they are able to cast on MD5, the choices system administrators have when a security event is triggered should be of concern. Instead of choosing to apply the security policy to “all certificates,” some companies, out of concern for system outages, may limit the enforcement to a subset of certificate types. After all, history has shown that organizations have neglected to do anything about the known MD5 vulnerability for many years; they might easily continue to postpone the requisite changes. As a result, some companies may leave a massive open door for cyber-criminals to exploit.

Are there other weaknesses in cryptography that should concern me?

MD5 is not the only vulnerability to cryptography that should concern IT security professionals—there are many. However, I am only going to focus on a few of the most common.

Insufficient key length: Since 2011 the National Institute of Standards and Technology (NIST) has deprecated encryption keys of 1024 bits or less. After December 31, 2013, the use of 1024-bit keys will be disallowed due to their insecurity. Despite this, as surveyed by Venafi, 66% of the encryption keys still used by global 2000 organizations are 1024-bit keys. Vendors and service providers like Google, Microsoft, and PayPal made the shift to 2048-bit keys earlier this year. If you have 1024-bit keys in use, now is the time to upgrade to 2048-bit keys.
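Generating a stronger replacement is straightforward. The sketch below (file names and subject are illustrative) uses OpenSSL to create a 2048-bit key and a certificate signing request to submit to your CA:

    #!/usr/bin/env python3
    # Minimal sketch: generate a 2048-bit replacement key and CSR for a host that is
    # still using a 1024-bit key. File names and the subject are illustrative.
    import subprocess

    subprocess.run([
        "openssl", "req", "-new",
        "-newkey", "rsa:2048",            # 2048-bit replacement key
        "-nodes",                         # no passphrase; protect the key file with OS permissions instead
        "-keyout", "www_example_com.key",
        "-out", "www_example_com.csr",
        "-subj", "/C=US/O=Example Corp/CN=www.example.com",
    ], check=True)
    print("Submit www_example_com.csr to your CA, then retire the old 1024-bit key.")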

Lack of visibility: The majority of organizations lack visibility into or understanding of their key and certificate population. Organizations simply don’t know how many keys and certificates are in use on the network, what access they provide to critical systems, who has access to them, or how they are used. Businesses without visibility into such a critical attack vector—and with limited or no ability to respond quickly—are an attacker’s dream. To mitigate these vulnerabilities, you must gain a complete understanding of your key and certificate population so that you know where your organization is vulnerable.

Inability to remediate: How can you defend something if you don’t know what you are defending? The lack of visibility has led to real vulnerabilities. Forrester Research found that 44% of organizations have already experienced an attack based on keys and certificates. Moreover, 60% of these businesses could not respond to the attacks, whether on SSH or SSL, within 24 hours. And when the response does come, it usually involves a laborious manual process that often leaves some systems unchecked.

What can I do to avoid these vulnerabilities?

To protect your business against attacks on keys and certificates, I recommend that you invest wisely in technologies that apply robust policies against the use of weak algorithms and poorly configured cryptography. At the same time, the technology should be able to detect anomalous behavior of keys and certificates and respond automatically, remediating any key- and certificate-based risks.

Safeguarding Cloud Computing One Step at a Time

by Manoj Tripathi, PROS


There’ve been a lot of conversations around the concept of “the cloud.” Cloud storage and cloud computing continue to emerge as significant technology and business enablers for organizations. In many cases, cloud computing is a preferred option – it’s fast to set up and affordable. However, with this cloud convenience can come questions surrounding the security challenges of shared computing environments. It’s important that these cloud concerns are discussed and researched to continue to build momentum and increased trust in cloud computing. The Cloud Security Alliance (CSA) was formed to do just that.

The CSA is a member-driven, nonprofit organization with the objective of promoting cloud security best practices. It promotes research, encourages open-forum discussion, and articulates findings about what vendors and customers alike need to do to safeguard their interests in the cloud and resolve cloud computing security concerns.

The current list of the CSA corporate members is impressive, with big name players in the technology biz including PROS partners Accenture, Deloitte, Microsoft, Oracle and Salesforce. PROS is pleased to announce that it has joined the ranks of the CSA.

PROS is dedicated to providing customers with secure and trusted cloud-based technology that adheres to security best practices and standards. The cloud is no longer a technology “playground.” It’s quickly becoming a mainstream solution, and with that increase in usage comes the obligation to ensure that data, systems and users are secure wherever they may reside. We’re excited to step up to the challenge of developing and improving the guidelines surrounding this progressive technology.

If you’re interested in discussing big data and security, I’ll be speaking at the Secure World Expo on Oct. 23 in Dallas and the Lonestar Application Security Conference 2013 on Oct. 25 in Austin.

Manoj Tripathi is a security architect at PROS, where he focuses on security initiatives spanning strategy, architecture, controls, and secure development and engineering for the company’s enterprise, product, and cloud functions. Prior to joining PROS, Tripathi worked as a software and security architect for CA Technologies’ Catalyst Platform. Throughout his career, he has worked in diverse roles ranging from architecture design and technical leadership to project leadership and software development. Tripathi is a Certified Information Systems Security Professional (CISSP). He earned a B.E. in Electronics Engineering from the Motilal Nehru National Institute of Technology in India.

Visit http://www.pricingleadership.com/safeguarding-cloud-computing-one-step-at-a-time for more details

Gone in 60 Months or Less

by Gavin Hill, Director, Product Marketing & Threat Research Center at Venafi

For years, cybercriminals have been taking advantage of the blind trust organizations and users place in cryptographic keys and digital certificates. Only now are vendors starting to respond to the use of keys and certificates as an attack vector.

Last month, for example, Google announced that as of Q1 2014 Google Chrome and the Chromium browser will not accept digital certificates with a validity period of more than 60 months; certificates with a longer validity period will be considered invalid.[i] Mozilla is considering implementing the same restrictions, but no decision has been announced yet. Are vendor responses like these enough in the constant battle against compromised keys and certificates as an attack vector?

The Certificate Authority Browser (CA/B) Forum, a volunteer organization that includes leading Certificate Authorities (CAs) and software vendors, has issued some baseline requirements for keys and certificates, which include reducing the certificate’s validity period. By 1 April 2015 CAs should not issue certificates that have a validity period greater than 39 months.[ii] The CA/B Forum makes some—very few—exceptions whereby CAs are allowed to issue certificates that have a 60-month validity period.
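As an illustration of how an organization might audit itself against these baselines, the sketch below (hypothetical host list, and again assuming the third-party Python 'cryptography' package) flags certificates whose validity period exceeds the chosen threshold:

    #!/usr/bin/env python3
    # Minimal sketch: flag certificates whose validity period exceeds the CA/B Forum's
    # 39-month baseline; set MAX_MONTHS = 60 to mirror the Chrome cut-off instead.
    import ssl
    from cryptography import x509   # third-party package, assumed installed

    MAX_MONTHS = 39
    HOSTS = ["www.example.com", "portal.example.com"]   # hypothetical inventory

    for host in HOSTS:
        pem = ssl.get_server_certificate((host, 443))
        cert = x509.load_pem_x509_certificate(pem.encode())
        months = (cert.not_valid_after - cert.not_valid_before).days / 30.44  # average month length
        if months > MAX_MONTHS:
            print("%s: validity period of about %.0f months exceeds %d" % (host, months, MAX_MONTHS))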

The National Institute of Standards and Technology (NIST) has disallowed the use of 1024-bit keys after 31 December 2013 because they are insecure. Rapid advances in computational power and cloud computing make it easy for cybercriminals to break 1024-bit keys. When a researcher from Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland cracked a 700-bit RSA key[iii] in 2007, he estimated that 1024-bit key lengths would be exploitable 5 to 10 years from then. Not even three years later, in 2010, researchers cracked a 1024-bit RSA key.[iv]

Last week Symantec responded to the NIST’s recommendation in a Symantec blog, stating that on 1 October 2013 Symantec will automatically revoke all certificates that have a key length of less than 2048 bits. The only exception is certificates that are set to expire before 31 December 2013. Symantec responded quickly because the company wants to help customers avoid potential disruptions to their websites and internal systems during the holiday period.[v]

Both the certificate’s validity period and the key’s length are paramount in any security strategy. The deprecation of vulnerable key lengths is the first step in mitigating against keys and certificates as an attack vector, but reducing the validity period of certificates is an important second step. Longer validity periods offer an inviting open door to cybercriminals who can take advantage of advances in computational power and cloud computing to launch more sophisticated attacks. No one knows when 2048-bit keys will be broken, but enforcing a 60-month validity period will help organizations adhere to best practices, rotating certificates on a regular basis and, when doing so, potentially replacing older certificates with ones that have stronger cipher strengths. Who knows, in 60 months companies may need to move to 4096-bit keys to achieve adequate security.

Symantec’s decision to revoke, on 1 October 2013, all 1024-bit certificates with expiration dates after 31 December 2013 is a bold move, and most certainly one in the right direction. With such a short amount of time before the certificates become invalid, however, it will be very challenging for many organizations to replace the certificates in time. Most organizations—more than 50%—don’t have a clue how many keys and certificates they have in their inventory.[vi] Moreover, they manage their certificate inventories manually, making it difficult to respond quickly to new guidelines or actual attacks.

Cyber-attacks continue to advance in complexity and speed and increasingly target the keys and certificates used to establish trust—from the data center to the cloud. With the advances in technology, is a 60-month, or even a 39-month, validity period for certificates short enough to reduce risk? Perhaps certificates should be ephemeral, with a lifespan of only a few seconds? Reducing the lifespan of certificates to only a few seconds may drastically limit the exploitation of certificates as an attack vector.

The Power of “Yes”


by Sanjay Beri, CEO of Netskope


Shadow IT is a big deal. The problem is clear: People want their apps so they can go fast. IT needs to attest that the company’s systems and data are secure and compliant.

Everybody seems to have a Shadow IT solution these days. The problem is they’re all focused on blocking stuff. But blocking is so old school. Nobody wants to be in the blocking business these days. It’s counter to the efficiency and progress that the cloud brings.

IT and security leaders are smarter than that. Many of you are at the forefront of cloud adoption and want to lead your organization through this strategic shift.

Rather than say “no”, we at Netskope recommend saying “yes.” More specifically, we recommend “yes, and.” It’s a pretty powerful concept!

Try it out:

“Yes, you can use that useful app, and I’m going to set a very precise policy that will mitigate our risk.”

“Yes, music company people, use Content Management apps to share content with your rock stars, promoters, and partners. And I’m going to make sure that only those authorized can share files outside of the company.”

“Yes, developer of oncology solutions, you can use that Collaboration tool that will help your projects run smoothly. And I’m going to alert our risk officer if anybody in the clinical division uploads images that may make us non-compliant with HIPAA.”

“Yes, major e-commerce company, you can use your CRM. And I’m going to make sure that our Call Center employees outside of the U.S. aren’t downloading customer information.”

You can say “yes” when you can deeply inspect apps and answer the key questions – who’s taking what action in what app, when, from where, sharing with whom, sending to where, downloading what content, uploading what content, editing what object or field…and whether any of it is anomalous.

And you can say “yes” when you can set policies at a very precise and granular level, and ENFORCE them in real-time before bad stuff happens.
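What might such a precise policy look like in practice? The sketch below is purely hypothetical (the event fields and rule structure are illustrative, not Netskope’s actual policy language), but it captures the shape of the e-commerce example above: allow the CRM, and block only non-U.S. call-center downloads of customer records.

    #!/usr/bin/env python3
    # Purely hypothetical sketch of a granular "yes, and" policy rule.
    from dataclasses import dataclass

    @dataclass
    class CloudEvent:
        user_group: str      # e.g. "call-center"
        user_country: str    # e.g. "IN"
        app_category: str    # e.g. "CRM"
        activity: str        # e.g. "download"
        object_type: str     # e.g. "customer-record"

    def evaluate(event):
        # Allow the app ("yes"), but block the one risky activity ("and").
        if (event.app_category == "CRM"
                and event.activity == "download"
                and event.object_type == "customer-record"
                and event.user_group == "call-center"
                and event.user_country != "US"):
            return "block"           # the precise exception, enforced in real time
        return "allow"               # everything else stays open so the business can go fast

    print(evaluate(CloudEvent("call-center", "IN", "CRM", "download", "customer-record")))  # block
    print(evaluate(CloudEvent("call-center", "US", "CRM", "download", "customer-record")))  # allow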

If you can do those things, you can take a “yes, and” approach. As a cloud-forward technology leader in your company, this is the most powerful statement you can make.

If you happen to be at Gartner Symposium and ITxpo, come see Netskope in the Emerging Technology Pavilion, booth ET19, to talk more about the power of “yes, and,” and attend this session, where you’ll hear not one but three of Netskope’s customers talk about how they’re letting users go rogue and how they’re doing it safely, so the business can go fast, with confidence.

And if you can’t see us at the event, be sure to check us out at www.netskope.com. Learn more about Netskope’s deep analytics and real-time policy enforcement platform. It lets you say “yes, and.”


Watering Hole Attacks: Protecting Yourself from the Latest Craze in Cyber Attacks

Author: Harold Byun, Skyhigh Networks

Cyber criminals are clever and know how to evolve – you’ve got to give them that. They’ve proven this once again with their latest cyber attack strategy, the Watering Hole Attack, which leverages cloud services to help gain access to even the most secure and sophisticated enterprises and government agencies.

Attacks Used to be Humorously Simple

In earlier days, attackers operated more simply, using emails entitled “ILOVEYOU” or poorly worded messages from Nigerian generals promising untold fortunes of wealth. Over the years, the attacks have evolved into complex spear phishing operations that target specific individuals who can help navigate an organization’s personnel hierarchy, or identify digital certificate compromises that lead to command and control over the enterprise infrastructure. In either scenario, the success of the attacks has always been predicated on the fact that users are humans who will occasionally click on or open something that is suspect or compromised.

Now the Bad Guys Are Getting Smart

More recently, a new, more sophisticated, type of attack is hitting the enterprise. The concept behind the watering hole attack is that in order to insert malware into a company, you must stalk an individual or group and place malware on a site that they trust (a “watering hole”), as opposed to in an email that will be quickly discarded.

Identifying the “Watering Hole”

Inserting malware into a frequently visited site sounds like a great plan, but how do attackers find the right sites? It’s pretty tough to get malware onto the major sites that most people visit like cnn.com or espn.com, so attackers need to know which smaller, less-secure sites (i.e. watering holes) are frequented by employees of the targeted company.

But how can an attacker know which watering holes a user frequents most often? How can an attacker find which watering holes an entire organization or company frequents, and how often? And how can they capture this information without anyone clicking anything? The answer…

Tracking Services

Users unknowingly provide all of this information simply by surfing the internet as they normally do. When a user surfs the internet from their company today, automated tracking methods used by marketing and ad tracking services identify traffic patterns and site accesses. These tracking services silently capture all of this information without users ever being aware that their actions online are being followed.

This would seem to be harmless information (aside from the irritatingly persistent retargeting ads you must endure), but the tracking services are essentially mapping the behavioral web patterns of your entire organization. This shows which sites employees frequent, and this information also allows attackers to deduce your company’s browsing and cloud services access policies. In other words, it tells an attacker which watering holes you let your users visit.

Planting the Trap

This gives the adversary a map of the sites to target for infiltration. They target the most vulnerable sites, smaller companies or blogs that don’t have strict security. They plant malicious code on the watering hole site. Once the trap is laid, they simply wait for users to visit the sites they have frequented in the past.

The probability of success is significantly higher for watering hole attacks since the attacker has used the tracking service’s data to confirm that traffic to the site is both allowed and frequent. When a user visits the site, the malicious code redirects the user’s browser to a malicious site so the user’s machine can be assessed for vulnerabilities. The trap is sprung.

Malware Phone Home

Once the user steps in the trap by visiting the watering hole they are assessed for vulnerabilities. Using drive-by downloading techniques, attackers don’t need users to click or download any files to their computer. A small piece of code is downloaded automatically in the background. When it runs, it scans for zero-day vulnerabilities (software exploits discovered by the most sophisticated cyber criminals that are unknown to the software companies) or recently discovered exploits that users have not yet patched in Java, Adobe Reader, Flash, and Internet Explorer (that software update from Adobe may be important, after all).

The user’s computer is assessed for the right set of vulnerabilities and, if they exist, an exploit (a larger piece of code that carries out the real attack) is delivered. Depending on the user’s access rights, the attacker can now access sensitive information in the target enterprise, such as IP, customer information, and financial data. Attackers also often use the access they’ve gained to plant more malware into software source code the user is developing, making the attack exponentially more threatening.