Why Higher Education Institutions Need Cloud-Based Identity Providers

January 9, 2014

By Dan Dagnall, Chief Technology Strategist for Fischer International Identity

 

Federation is definitely a hot topic these days, with NSTIC attempting to create an identity ecosystem, InCommon continuing to build its service provider federation, and state-level initiatives gearing up (some are already operational) to provide federated identity services to 4-year schools, community colleges, K-12, and every entity in between. But I’ve found that many institutions and schools are not prepared to commit to the rigorous list of technical requirements to enter into such federations. This is primarily because of a lack of talent, a lack of budget, and resource utilization constraints.

 

Institutions that choose the federated identity path face other potential roadblocks. The standard model requires a unique Shibboleth installation, which by default provides a localized identity provider (IdP) login screen and a custom URL for that screen. This is simply not appropriate for smaller colleges and universities that can ill afford to hire more technical talent. Localized IdPs are also not feasible for K-12, as they would require technical resources to be located at each school.

 

A Cloud-based identity provider (cIdP) model is the best option for institutions that don’t currently have any SSO capabilities, as it eliminates 90% of the technical federation hurdles. As a result, this model should resonate well with smaller colleges and universities, K-12, and other entities lacking the key components to enter the secure world of federated identity management. The only real difference between a Cloud-based IdP and a localized one is that you will typically spend half the money and 90% less time deploying the Cloud-based IdP. And I can find no reason why entities falling behind in the federated world shouldn’t consider deploying one.

 

The Cloud-based IdP model reduces costs through economies of scale by securely sharing resources among multiple institutions. Local IdPs require at least one instance of the Shibboleth software to be installed for each institution, each with its own set of metadata to configure and its own maintenance by technical specialists, all of which adds to costs. The Cloud-based IdP model is a far simpler approach: a single installation of the Shibboleth software and a single set of metadata can accommodate multiple institutions, which dramatically reduces ongoing support compared to supporting at least one IdP per entity. It is also important to note that because the Shibboleth software does not reside on campus, there is no need for each individual campus to have any technical federation knowledge. The Cloud-based model therefore unlocks the door to the federated world for institutions that lack talent, time, money, or all three. In fact, if we judge success by a massive uptake of federation, then using Cloud-based IdPs provides the best chance for success.
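To make the economics concrete, here is a minimal Python sketch contrasting the two models. The institution names and metadata paths are entirely hypothetical and this is not an actual Shibboleth or federation configuration; it only illustrates that the local model multiplies installations and metadata sets per institution, while the cloud model maintains one of each for any number of tenants.

# Hypothetical sketch contrasting the two deployment models. Institution
# names and metadata paths are invented for illustration; this is not an
# actual Shibboleth or federation configuration.

LOCAL_MODEL = {
    # One IdP installation and one metadata set per institution.
    "small-college.edu":     {"software": "shibboleth", "metadata": "metadata/small-college.xml"},
    "community-college.edu": {"software": "shibboleth", "metadata": "metadata/community-college.xml"},
    "k12-district.org":      {"software": "shibboleth", "metadata": "metadata/k12-district.xml"},
}

CLOUD_MODEL = {
    # One shared installation and metadata set; institutions are tenants.
    "software": "shibboleth",
    "metadata": "metadata/cloud-idp.xml",
    "tenants": ["small-college.edu", "community-college.edu", "k12-district.org"],
}

def installations_to_maintain(model):
    """Count the IdP installations an operator must patch, monitor, and support."""
    return 1 if "tenants" in model else len(model)

print(installations_to_maintain(LOCAL_MODEL))   # 3 -- one per campus
print(installations_to_maintain(CLOUD_MODEL))   # 1 -- shared by all tenants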

 

However, every new technology has its critics, and Cloud-based federation is no exception: some people believe that the federation identity provider (IdP) should always be local (i.e., on campus) and therefore cannot leverage the Cloud for IdP services. Perhaps this is because some in the industry are not yet comfortable with a Cloud-based approach, possibly out of a lack of understanding of the security and risk of a Cloud-based versus an on-campus IdP. I’ll address their concerns in turn.

 

Some critics assert that a cIdP is not secure, but the security issues for a Cloud-based IdP are no different than for a localized IdP deployment. In both cases, SAML is the underlying protocol, the same security mechanisms and configuration are in place, the same platforms are leveraged, and the same application set is accessible. Also, the service providers hosting cIdPs are often more secure, and frequently provide higher availability, than many institutions can achieve locally.
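Because both models speak the same SAML protocol, the checks a relying service provider performs on an incoming assertion are identical regardless of where the IdP is hosted. The following is a minimal, standard-library-only Python sketch of two such checks (audience restriction and validity window); a real deployment would also verify the XML signature and would normally use a SAML library rather than hand-rolled parsing.

# Minimal sketch of SAML assertion checks a service provider applies the same
# way whether the asserting IdP runs on campus or in the cloud. Illustration
# only: real deployments must also verify the XML signature.
from datetime import datetime, timezone
from xml.etree import ElementTree as ET

NS = {"saml": "urn:oasis:names:tc:SAML:2.0:assertion"}

def assertion_acceptable(assertion_xml: str, expected_audience: str) -> bool:
    root = ET.fromstring(assertion_xml)
    conditions = root.find("saml:Conditions", NS)
    if conditions is None:
        return False

    # Validity window check (NotBefore / NotOnOrAfter).
    now = datetime.now(timezone.utc)
    not_before = conditions.get("NotBefore")
    not_on_or_after = conditions.get("NotOnOrAfter")
    if not_before and now < datetime.fromisoformat(not_before.replace("Z", "+00:00")):
        return False
    if not_on_or_after and now >= datetime.fromisoformat(not_on_or_after.replace("Z", "+00:00")):
        return False

    # Audience restriction check: the assertion must be addressed to this SP.
    audiences = [a.text for a in conditions.findall(".//saml:Audience", NS)]
    return expected_audience in audiences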

 

Some critics argue that a Cloud-based IdP is never feasible because they believe it lacks capabilities for institutional branding. But Cloud-based IdPs have the same branding options as local IdPs, and the user experience for the Cloud-based login process is identical to that of a fully branded and customized local IdP deployment.

 

Some federations try to exclude Cloud-based IdPs from their trust models, presumably because they don’t believe Cloud-based IdPs are as trustworthy, or possibly because the federation management lacks an understanding of the implications that cIdPs have (or don’t have) for their business trust models. From a technical perspective, Cloud-based IdPs are often more trustworthy than local IdPs: typically the data center is more secure, access to the data is more secure, and so on. From a business perspective, although the technical aspects of a cIdP are outsourced, the business trust model remains unchanged and between the same parties, because business agreements are not outsourced.

 

Some critics advocate for localized IdPs while at the same time supporting internal deployments of DirSync or Google’s sync process to provide Cloud-based email services to their user populations, which, by the way, stores user data in the Cloud. If the issue is where the data is stored, then their logic is flawed, as they are advocating both sides of the same coin and cutting off their noses to spite their faces. Maybe their real issue is with reporting, since reporting actual IdP installations might look better to some people than reporting the same number of institutions on Cloud-based IdPs. Or maybe some people are simply attempting to undermine commercial solutions in order to support their own pet open-source initiatives.

 

All things considered, Cloud-based identity providers scale much better than localized IdP/federated infrastructures. Cloud-based IdPs are just as secure and trustworthy (often more so), and they are more cost-effective for institutions tasked with federating their users to access service providers. My advice: don’t discount a Cloud-based approach, as a Cloud-based IdP can be operational quickly, can be configured easily, and can federate your users in a fraction of the time it takes to deploy your own infrastructure.

Evolution of Distributed Policy Enforcement in the Cloud

December 10, 2013

By Krishna Narayanaswamy, chief scientist at Netskope


As computing shifts to the cloud, so too must the way we enforce policy.

Until recently, enterprise applications were hosted in private data centers under the strict control of centralized IT. With firewalls and intrusion prevention systems, IT was able to protect the soft inner core of enterprise information from external threats. Ever more sophisticated logging and data leakage prevention solutions supplemented those with a layer of intelligence to help IT identify and prevent not only external but also internal threats that led to costly data breaches. Even remote workers were shoe-horned into this centralized model using VPN technology so they could be subjected to the same security enforcement mechanisms.

The cloud has brought so many benefits, with users of compute services being able to procure the service that best fits their needs, independent of the others, and providers able to focus on what they do well, whether building scalable infrastructure or solving a business problem with a software service. The distributed nature of the cloud also means that users enjoy the availability and performance benefits of multiple redundant data centers. The model also aligns well with the proliferation of smart devices and users’ need to access content anywhere, anytime.

But as computing has moved to the cloud – and we are now at a tipping point with nearly one-third of compute spend reported to be on cloud infrastructure, platform, and software services – legacy security architectures are quickly becoming ineffective.

We need a fresh way to solve the problem. But first a short primer on security policy enforcement:

Security reference architectures consist of two components: the Policy Control Point (PCP) and Policy Enforcement Point (PEP). The PCP is where security policies are defined. In general, there is one or a small number of PCPs in an enterprise. The PEP is where the security policies are enforced. Typically there are many PEPs in an enterprise network, and a group of PEPs may enforce a specific type of policy.

The way it works is that the PCP updates the many PEPs with the specific policy rules that pertain to each PEP’s capabilities. The PEPs, for their part, act in real time on policy triggers, for example inspecting data passing through the network and enforcing the policy when a pre-defined trigger occurs. PEPs that experience a policy trigger then send policy event logs back to the PCP to convey the attempted policy violation and confirm enforcement for compliance reporting purposes. Event logs provide information from the PEP about how and when the policy was triggered, which can be used to create new policies or tune existing ones. In practice, the PCP and PEPs are usually not single physical entities but collections of physical entities that provide the logical functions described above.
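As a rough illustration of this division of labor, here is a minimal Python sketch of the PCP/PEP roles described above. The class names, rule format, and reporting mechanism are hypothetical stand-ins for the logical functions only; they do not represent any particular product’s API.

# Minimal sketch of the PCP/PEP split described above. Class names, the rule
# format, and the logging mechanism are hypothetical illustrations of the
# logical roles, not a specific product's API.

class PolicyEnforcementPoint:
    def __init__(self, name, control_point):
        self.name = name
        self.control_point = control_point
        self.rules = []

    def update_rules(self, rules):
        # The PCP pushes only the rules relevant to this PEP's capabilities.
        self.rules = rules

    def inspect(self, event):
        # Act in real time on traffic/events; report any policy trigger.
        for rule in self.rules:
            if rule["matches"](event):
                self.control_point.report(self.name, rule["id"], event)
                return rule["action"]          # e.g. "block", "alert"
        return "allow"


class PolicyControlPoint:
    def __init__(self):
        self.peps = []
        self.event_log = []                    # feeds compliance reporting and policy tuning

    def register(self, pep):
        self.peps.append(pep)

    def distribute(self, rules):
        for pep in self.peps:
            pep.update_rules(rules)

    def report(self, pep_name, rule_id, event):
        self.event_log.append({"pep": pep_name, "rule": rule_id, "event": event})


# Usage: one PCP, two PEPs, a single DLP-style rule.
pcp = PolicyControlPoint()
pep_a = PolicyEnforcementPoint("branch-office", pcp)
pep_b = PolicyEnforcementPoint("datacenter-edge", pcp)
pcp.register(pep_a)
pcp.register(pep_b)

pcp.distribute([{
    "id": "dlp-001",
    "matches": lambda e: e.get("contains_pii", False),
    "action": "block",
}])

print(pep_a.inspect({"contains_pii": True}))   # "block", logged at the PCP
print(pep_b.inspect({"contains_pii": False}))  # "allow"
print(len(pcp.event_log))                      # 1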


What are the key requirements for a cloud security framework?

The fact that enterprises’ application, platform, and infrastructure services are moving to the cloud breaks the notion of a centralized service delivery point. Cloud service providers have optimized their own solutions for the specific types of services they’re offering or enabling, e.g., CRM, backup, storage, etc. This means that there are no common security controls across all of the services that enterprises are accessing.

Adding insult to injury, enterprises have another dimension of complexity to deal with: They need to plan for users to get both on-prem and off-prem access to enterprise apps, as well as access from corporate-owned and personal systems and a plethora of mobile devices. And in the face of all of this complexity, of course, the service and the policy enforcement need to be efficient, as transparent as possible, and “always on.”

A tall order.

What are the ways to ensure this?

One possibility is the status quo: ensure that all access to cloud services from any device, whether corporate-owned or BYOD, is backhauled to the enterprise datacenter where the PEPs are deployed. This approach creates an hourglass configuration where traffic from different access locations is funneled to a choke point and then fans out to the eventual destination, which is generally all over the Internet. Great for policy enforcement. Not so much for user experience.

Another possibility is to enforce policies at the server end. This is more efficient from a traffic standpoint, but isn’t effective because every cloud service provider has a proprietary policy framework and different levels of policy enforcement capabilities. This means the PCP has to be able to convert the configured policies to the specific construct supported by each service provider.


A third possibility: distributed cloud enforcement (in case you haven’t guessed it yet, this is the recommended one). This involves distributing PEPs in the cloud so that traffic can be inspected for both analytics and policy triggers, irrespective of where it originates. It also means that PEPs will be deployed close to user locations, allowing for minimal traffic detours en route to the application hosted by the cloud service provider. The distributed PEPs are controlled by a central PCP entity. This all sounds very easy, and of course, the devil is in the details.

To do this right, the solution enforcing the policies must employ efficient steering mechanisms to get traffic to the PEPs in the cloud. The PEPs must enforce enterprises’ security policies accurately and quickly, and send their policy logs to the PCP in a secure, reliable way each and every time. This reference architecture resembles the legacy architecture in terms of the level of control it provides while obviating the need to backhaul traffic to the enterprise datacenter. The PEP only has to provide the various security functions that were deployed in the datacenter: access control, data loss prevention, anomaly detection, etc. The architecture also provides an option for introducing new services that are relevant to emerging trends. For example, with corporate data moving to the cloud, where it is not in the direct control of the enterprise, data protection becomes an important requirement. The cloud-resident PEPs can provide encryption functionality to address this requirement, along with non-security capabilities such as performance, SLA, and cost measurements.
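The steering idea can be sketched in a few lines of Python. Everything here is hypothetical and simplified: the PEP locations, latencies, and toy DLP rule are invented to show the shape of the approach (route each user through the nearest cloud-resident PEP, apply the centrally defined policy there, and skip the hairpin through headquarters), not how any real steering mechanism is configured.

# Hypothetical sketch of distributed cloud enforcement: steer each user's
# traffic to the nearest cloud-resident PEP instead of backhauling it to the
# enterprise datacenter. PEP locations and latencies are made up; real
# steering would use DNS, PAC files, tunnels, or endpoint agents.

CLOUD_PEPS = {
    "us-east": {"latency_ms": {"new-york": 8,   "london": 75,  "tokyo": 160}},
    "eu-west": {"latency_ms": {"new-york": 80,  "london": 6,   "tokyo": 210}},
    "ap-east": {"latency_ms": {"new-york": 170, "london": 200, "tokyo": 9}},
}

def nearest_pep(user_location: str) -> str:
    """Pick the PEP closest to the user, minimizing the traffic detour."""
    return min(CLOUD_PEPS, key=lambda pep: CLOUD_PEPS[pep]["latency_ms"][user_location])

def enforce(pep: str, request: dict) -> str:
    """Apply the centrally defined policy at the chosen PEP (toy rule set)."""
    if request.get("action") == "upload" and request.get("sensitive"):
        return f"{pep}: blocked by DLP policy"
    return f"{pep}: allowed"

# A London user uploading a sensitive file to a SaaS app is inspected at the
# eu-west PEP on the way to the provider, with no hairpin through headquarters.
pep = nearest_pep("london")
print(enforce(pep, {"action": "upload", "sensitive": True}))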


It’s clear that emerging trends like cloud and BYOD have made existing security architectures obsolete. We are not alone in addressing this issue. Organizations such as the Cloud Security Alliance, which recently kicked off its Software Defined Perimeter (SDP) initiative, are looking hard at the best ways to tackle this. I submit that addressing the above trends with a distributed cloud policy enforcement framework meets key requirements and provides a foundation for adding new security (and non-security) services that will become relevant in the near future.

What’s New With the Security as a Service Working Group?

December 9, 2013

CSA members are invited to join the Security-as-a-Service Working Group (SecaaS WG) which aims to promote greater clarity in the Security as a Service model.

Why a Security as a Service Working Group?

Numerous security vendors are now leveraging cloud based models to deliver security solutions. This shift has occurred for a variety of reasons including greater economies of scale and streamlined delivery mechanisms. Regardless of the motivations for offering such services, consumers are now faced with evaluating security solutions which do not run on premises. Consumers need to understand the unique nature of cloud delivered security offerings so that they are in a position to evaluate the offerings and to understand if they will meet their needs.

Research from this working group aims to identify consensus definitions of what Security as a Service means, to categorize the different types of Security as a Service and to provide guidance to organizations on reasonable implementation practices.

Ongoing Research

As part of its charter, the group expects to publish three key pieces of research related to the Security as a Service model over the course of the following six months:

1. A Category Framework Proposal. This will include business and technical elements, as well as a survey on the framework proposal and how it applies to existing categories.

2. Categories of Service v2.0. This document will include sections based on the new framework.

3. Implementation Documents v2.0. These documents will include templates based on the new framework, as well as business and technical elements and detailed guidance.

To get involved, visit the SecaaS Working Group page.

 

 

CloudTrust Protocol (CTP) Working Group Kicks Off at CSA Congress

December 6, 2013

The Cloud Trust Protocol (CTP) aims to enable cloud users to query cloud providers in real time about the security level of their services. It aims to foster transparency and trust in the cloud supply chain, bringing greater visibility to cloud users and providing them with data on a continuous basis to inform their daily risk management decisions.
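At the time of writing the CTP API is still being specified, so the following Python sketch is purely hypothetical: the endpoint path, attribute names, and response shape are invented to show what a continuous-monitoring query against a CTP-style interface could look like, not what the actual specification defines.

# Hypothetical sketch of querying a CTP-style monitoring API for one security
# attribute of a cloud service. The base URL, path, attribute names, and JSON
# shape are invented for illustration; they are not the actual CTP API.
import json
from urllib.request import Request, urlopen

CTP_BASE = "https://ctp.example-provider.com/api/v1"   # hypothetical endpoint

def get_attribute(service_id: str, attribute: str, token: str) -> dict:
    """Fetch the current measurement of one monitored security attribute."""
    req = Request(
        f"{CTP_BASE}/services/{service_id}/attributes/{attribute}",
        headers={"Authorization": f"Bearer {token}"},
    )
    with urlopen(req) as resp:
        return json.load(resp)

# Example: pull current availability and feed it into a daily risk decision.
# (flag_for_risk_review is a placeholder for the consumer's own logic.)
# measurement = get_attribute("crm-prod", "availability", token="...")
# if measurement["value"] < 99.9:
#     flag_for_risk_review("crm-prod", measurement)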

As a monitoring mechanism, CTP also aspires to become the pillar of CSA’s future continuous-monitoring-based certification, complementing the STAR third-party certification and attestation in the Open Certification Framework.

Earlier this fall, the Cloud Security Alliance launched the CTP Working Group. The goal of the Working Group is to leverage the initial idea of Ron Knode and turn CTP into a close-to-market solution in the next 18 months, drawing both on recent research conducted by the CSA EMEA Research team and on the input of leading stakeholders in the cloud industry, including both providers and users.

The CTP Working Group’s mission is to refine, challenge and extend the existing CTP framework and API specification, establish standard monitored cloud security attributes, implement a pilot and assure the proper integration of CTP in the Open Certification Framework.

The CTP Working Group will be chaired by the following people:

  • John DiMaria – British Standards Institute
  • Tim Sandage – Amazon Web Services
  • Sandeep Singh – Dell

Dr Alain Pennetrat, Senior Researcher at the CSA EMEA, will be the WG Technical Lead.

For more information, visit https://cloudsecurityalliance.org/research/ctp/.  We’ll announce the official kick-off call within the next month.

Introducing the CSA Financial Services Working Group

December 4, 2013

At our annual CSA Congress today, the CSA is pleased to introduce the new Financial Services Working Group (FSWG), which aims to provide knowledge and guidance on how to deliver and manage secure cloud solutions in the financial industry, and to foster cloud awareness within the sector and related industries. It will complement, enrich, and customize the results of other CSA working groups to provide sector-specific guidance.

Why a financial services working group?

Financial services organizations have specific, often unique requirements regarding security, privacy, and compliance. The Financial Services Working Group’s main objective is to identify and share the challenges, risks, and best practices for the development, deployment, and management of secure cloud services in the financial and banking industry.

Research from this working group aims to accelerate the adoption of secure cloud services in the financial industry by:

  • Identifying and sharing the industry’s main concerns regarding the delivery and management of cloud services in the sector.
  • Identifying industry needs and requirements (both technical and regulatory).
  • Identifying adequate strategic security approaches to ensure protection of business processes and data in the cloud.
  • Reviewing existing CSA research and identifying potential gaps from the financial services standpoint.

Initial Research

As part of its charter, the group expects to publish four key pieces of research related to the financial services industry:

  1. A survey of existing & potential cloud solutions (products and services) in the banking and financial services sector
  2. Technical and regulatory requirements in the sector
  3. Identification and assessment of risks in cloud solutions in the sector, including interaction with other approaches such as mobile computing, social computing, and big data.
  4. Recommendations and best practices of cloud solutions for the sector.

For more information about the working group, visit https://cloudsecurityalliance.org/research/financialservices

 

 

Introducing the CSA’s Anti-Bot Working Group

December 4, 2013

Among the many exciting new working groups being established and meeting at CSA Congress, today we’d like to also introduce our Anti-Bot Working Group. Chaired by Shelbi Rombout from USBank, this group’s mission is to develop and maintain a research portfolio providing capabilities to assist the cloud provider industry in taking a lifecycle approach to botnet prevention.

Why an anti-bot group?

Botnets have long been a favored attack mechanism of malicious actors. A recent evolution in botnet innovation has been the introduction of server-based bots as an alternative to single-user personal computers. Access to vastly greater upload bandwidth and higher compute performance has attracted the same adversaries who built and operated earlier botnets.

As cloud computing rapidly becomes the primary option for server-based computing and hosted IT infrastructure, CSA, as the industry leader, has an obligation to articulate solutions to prevent, respond to, and mitigate botnets operating on cloud infrastructure. The CSA Anti-Bot Working Group is the primary stakeholder for coordinating these activities.

Initial Research

As part of its charter, the group expects to publish two key pieces of research related to botnets – Fundamental Anti-Bot Practices for Cloud Providers, and an Anti-Bot Toolkit Repository for Cloud Providers.

For more information about the working group, visit:  https://cloudsecurityalliance.org/research/antibot

Introducing the CSA’s New Virtualization Working Group

December 3, 2013

There’s been a lot of noise around the establishment of new working groups at this year’s Congress and today we’d like to also introduce another important addition: the Virtualization Working Group. Chaired by Kapil Raina of Zscaler, the Virtualization Working Group is chartered to lead research into the combined virtualized operating system and SDN technologies.  The group will build upon existing Domain 13 research and provide more detailed guidance as to threats, architecture, hardening and recommended best practices.

Why a Virtualization Working Group?

Virtualization is a critical part of cloud computing. Virtualization provides an important layer of abstraction from physical hardware, enabling the elasticity and resource pooling commonly associated with cloud. Virtualized operating systems are the backbone of Infrastructure as a Service (IaaS).

The CSA Security Guidance for Critical Areas of Focus in Cloud Computing focused exclusively on virtualized operating systems in Domain 13. Recent developments in software defined networking (SDN) show great potential to virtualize data networks in the same way that operating systems have been virtualized. Additionally, the future integration and potential convergence of virtualization of operating systems and networks promise to greatly impact the next generation of cloud architectures. The security issues and recommended best practices of this broader view of virtualization merit additional focused research from a reconstituted version of the CSA Virtualization Working Group.

Initial Research

As part of its charter, the CSA Virtualization Working group plans to publish a Domain 13 Virtualization Whitepaper as part of the CSA Security Guidance for Critical Areas of Focus in Cloud Computing. The paper is scheduled for release at the upcoming RSA Conference taking place in February.

For more information about the working group, visit https://cloudsecurityalliance.org/research/virtualization/

 

Announcing the Consensus Assessments Initiative Questionnaire (CAIQ) V.3 Open Review Period

December 3, 2013

At CSA Congress 2013 this week we are announcing the open review period of the Consensus Assessments Initiative Questionnaire (CAIQ) v.3 and we hope you will take a few moments and provide your input to this very important initiative.  Lack of security control transparency is a leading inhibitor to the adoption of cloud services. The Cloud Security Alliance Consensus Assessments Initiative (CAI) was launched to perform research, create tools and create industry partnerships to enable cloud computing assessments.

About CAIQ

The CSA is focused on providing industry-accepted ways to document what security controls exist in IaaS, PaaS, and SaaS offerings, thereby providing security control transparency. The CAIQ, by design, is integrated with and will support other projects from our research partners. The questionnaire is available in spreadsheet format and provides a set of questions a cloud consumer and cloud auditor may wish to ask of a cloud provider. It provides a series of “yes or no” control assertion questions which can then be tailored to suit each unique cloud customer’s evidentiary requirements.
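As a rough sketch of how a consumer might work with those yes/no assertions, the Python below tabulates a provider’s answers per control domain. It assumes a hypothetical CSV export with “Question ID”, “Domain”, “Question”, and “Answer” columns; the real CAIQ spreadsheet’s exact column layout may differ.

# Rough sketch of tabulating a provider's CAIQ responses per control domain.
# Assumes a hypothetical CSV export with columns "Question ID", "Domain",
# "Question", and "Answer"; the actual spreadsheet layout may differ.
import csv
from collections import defaultdict

def summarize_caiq(path: str) -> dict:
    summary = defaultdict(lambda: {"yes": 0, "no": 0, "other": 0})
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            answer = row["Answer"].strip().lower()
            bucket = answer if answer in ("yes", "no") else "other"
            summary[row["Domain"]][bucket] += 1
    return dict(summary)

# Example usage against a hypothetical export:
# for domain, counts in summarize_caiq("provider_caiq.csv").items():
#     print(domain, counts)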

This question set is meant to be a companion to the CSA Guidance and the CSA Cloud Controls Matrix (CCM), and these documents should be used together.  This question set is a simplified distillation of the issues, best practices and control specifications from our Guidance and Controls Matrix, intended to help organizations build the necessary assessment processes for engaging with cloud providers.  The Consensus Assessments Initiative is part of the CSA GRC Stack.

What’s New and Why we Need YOUR Input:

Now in its third version, the CAIQ is entering its open review period, during which the Consensus Assessments Initiative Working Group is seeking feedback on a set of questions intended to help organizations further build the necessary assessment processes for engaging with cloud providers.

We need input from the cloud community on a number of fronts. First, we would like input on the current CAIQ questions: are they still relevant to cloud security, are they written in a way that is easy for all stakeholders to understand, and should they remain important questions to ask during the cloud assessment process?

Second, we would like to have input on what questions should be added to the assessment to help strengthen the process overall for each domain.  Finally, as CAIQ is a companion to the recently updated CCM V.3, we are seeking input on what questions should be added to two new control domains, Mobile Security and Interoperability and Portability.

As an aside, the new CAIQ is now color-coded to match the CCM V.3 domains for easy review.

ACTION: The open review period ends on January 6, 2014.

This is your opportunity to provide feedback and comments on v.3 of the CAIQ. Submitting feedback is easy with our 3-step process. Follow the link below to the CSA Interact peer review site:

https://interact.cloudsecurityalliance.org/index.php/caiq/caiq_v3

Thank you in advance for your time and contribution.  We look forward to your input.  If you have any questions, you can contact us by emailing [email protected].

Feel free to reference the following CCM documents during your review:

How Snowden Breached the NSA

November 20, 2013


How Edward Snowden did it and is your enterprise next?

There’s one secret that’s still lurking at the NSA: How did Edward Snowden breach the world’s most sophisticated IT security organization? This secret has as much to do with the NSA as it does with your organization. In this exclusive infographic, Venafi breaks open how Edward Snowden breached the NSA. Venafi is sharing this information and challenges the NSA or Edward Snowden to provide more information so that enterprises around the world can secure their systems and valuable data.

 


NSA Director General Keith Alexander summed up Snowden’s attack well: “Snowden betrayed the trust and confidence we had in him.” The attack on trust, the trust that’s established by cryptographic keys and digital certificates, is what left the NSA unable to detect or respond. From SSH keys to self-signed certificates, every enterprise is vulnerable. This exclusive infographic provides you with the analysis needed to understand the breach and how it could impact you and your organization.

 


DOWNLOAD INFOGRAPHIC (JPG)

Learn more about how Edward Snowden compromised the NSA.

Seeing Through the Clouds

November 20, 2013

By TK Keanini, CTO, Lancope

The economics of cyber-attacks have changed over the years. Fifteen years ago, it was all about network penetration, but today advanced attackers are more concerned about being detected. Similarly, good bank robbers are concerned about breaking into the bank, but great bank robbers have mastered how to get out of the bank without any detection.

Virtualization Skews Visibility

Because virtual-machine-to-virtual-machine (VM2VM) communications inside a physical server cannot be monitored by traditional network and security devices, the cloud can potentially give attackers more places to hide. Network and security professionals need to be asking themselves what cost-effective telemetry can be put in the cloud and across all of their networks such that the advanced persistent threat can’t escape detection.

The answer, I believe, lies in flow-based standards like NetFlow and IPFIX. Originally developed by Cisco, NetFlow is a family of standard protocols spoken by a wide variety of popular network equipment. IPFIX is a similar standard that was created by the Internet Engineering Task Force (IETF) and is based on NetFlow Version 9. These standards provide the most feasible, pervasive and trusted ledger of network activity for raising operational visibility across both physical and virtual environments.

Regaining Cloud Control

Regaining control of the cloud starts with basic awareness. Security teams need to know what applications, data and workloads are moving into cloud environments, where that data resides at any particular time, who is accessing that data and from where. They need this information in real time, and they need historical records, so that in the event that a breach is suspected it is possible to reconstruct what happened in the past. The recipe for success here is simple: leverage NetFlow or IPFIX from all of your routers, switches, firewalls and wireless access points to obtain a complete picture of everything happening across your network.

Flow-based standards like NetFlow and IPFIX provide details of every conversation taking place on the network. Some people think they need full packet capture of everything traveling on the network, and while that would be nice, it simply cannot scale. However, the metadata of that same traffic flow, as provided via NetFlow and IPFIX, does scale quite well and if need be, you can make the decision to also ‘tap’ a flow of interest to gather further intelligence.
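As an illustration of that scaling point, here is a small Python sketch that works only with flow metadata: each record summarizes one conversation (source, destination, port, byte count), and a simple aggregation flags the conversations worth a targeted ‘tap’ or deeper investigation. The field names resemble typical NetFlow/IPFIX exports but are not any specific collector’s schema, and the threshold is illustrative only.

# Minimal sketch of working with flow metadata rather than full packet capture.
# Each record summarizes one conversation (who, whom, when, how many bytes);
# field names resemble typical NetFlow/IPFIX exports but are not a specific
# collector's schema. The threshold below is illustrative only.

FLOWS = [
    {"src": "10.1.1.5",  "dst": "172.16.0.9",    "dst_port": 443, "bytes": 18_000},
    {"src": "10.1.1.7",  "dst": "198.51.100.23", "dst_port": 22,  "bytes": 4_200_000_000},
    {"src": "10.1.2.14", "dst": "203.0.113.80",  "dst_port": 80,  "bytes": 9_500},
]

def flows_of_interest(flows, byte_threshold=1_000_000_000):
    """Flag unusually large transfers for deeper inspection (e.g., a targeted tap)."""
    return [f for f in flows if f["bytes"] >= byte_threshold]

for flow in flows_of_interest(FLOWS):
    print(f"investigate {flow['src']} -> {flow['dst']}:{flow['dst_port']} "
          f"({flow['bytes']} bytes)")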

Selecting a Monitoring Solution

By collecting and analyzing flow data, organizations can cost-effectively regain the internal visibility needed to detect and respond to advanced attacks across large, distributed networks and cloud environments. However, not all flow collection and analysis systems are created equal. It is important to determine the following when selecting a security monitoring solution for your physical and virtualized network and/or private cloud:

  1. Does the solution indeed provide visibility into virtual environments? (Some can only monitor physical infrastructure.)
  2. Are you getting an unsampled NetFlow or IPFIX feed? (Sampled flow data does not provide a complete picture of network activity.)
  3. Does the solution conduct in-depth analysis of the flow data? Is the intelligence it supplies immediately actionable?
  4. Does the solution deliver additional layers of visibility including application awareness and user identity monitoring, which can be critical for finding attackers within the network?
  5. Does the solution allow for long-term flow storage to support forensic investigations?

 

It is also important to conduct similar due diligence on the security technologies and practices used by various providers if you decide to outsource your IT services to the public cloud.

 

Thwarting Advanced Attacks

 

As the CTO of Lancope, it is my goal to ensure that the bad guys cannot persist on your networks. No matter which stage you are in with your cloud strategy – whether virtualizing your infrastructure or using a public or private cloud – the collection and analysis of existing flow data can dramatically enhance your security. When every router, switch, wireless access point, and firewall is reporting unsampled flow records, and you are able to synthesize that data into actionable intelligence, there is just nowhere for the adversary to hide.

 

For more details on NetFlow for security, check out the Lancope blog or follow me on Twitter @tkeanini.

 

TK Keanini is a Certified Information Systems Security Professional (CISSP) who brings nearly 25 years of network and security experience to his role of CTO at Lancope. He is responsible for leading Lancope’s evolution toward integrating security solutions with private and public cloud-based computing platforms.
