Cloud Signaling – The Data Center’s Best Defense

July 27, 2011

By Rakesh Shah, Director, Product Marketing & Strategy at Arbor Networks

Recent high-profile security incidents have heightened awareness of how Distributed Denial of Service (DDoS) attacks can compromise the availability of critical Web sites, applications and services. Any downtime can result in lost business, brand damage, financial penalties and lost productivity. For many large companies and institutions, DDoS attacks have been a sobering wake-up call, and threats to availability remain one of the biggest potential hurdles to moving to, or rolling out, a cloud infrastructure.

Arbor Networks’ sixth annual Worldwide Infrastructure Security Report shows that DDoS attacks are growing rapidly and can vary widely in scale and sophistication. At the high end of the spectrum, large volumetric attacks reaching sustained peaks of 100 Gbps have been reported. These attacks exceed the aggregate inbound bandwidth capacity of most Internet Service Providers (ISPs), hosting providers, data center operators, enterprises, application service providers (ASPs) and government institutions that interconnect most of the Internet’s content.

At the other end of the spectrum, application and service-layer DDoS attacks focus not on denying bandwidth but on degrading the back-end computation, database and distributed storage resources of Web-based services. For example, service or application-level attacks may cause an application server to patiently wait for client data—thus causing a processing bottleneck.  Application-layer attacks are the fastest-growing DDoS attack vector.

Detecting and mitigating the most damaging attacks is a challenge that must be shared by network operators, hosting providers and enterprises. The world’s leading carriers generally use specialized, high-speed mitigation infrastructures—and sometimes the cooperation of other providers—to detect and block attack traffic. Beyond ensuring that their providers have these capabilities, enterprises must also deploy intelligent DDoS mitigation systems to protect critical applications and services.

Why Existing Security Solutions Can’t Stop DDoS Attacks

Why can’t enterprises protect themselves against DDoS attacks when they already have sophisticated security technology? Enterprises continue to deploy products like firewalls and Intrusion Prevention Systems (IPS), yet the attacks continue. While IPS, firewalls and other security products are essential elements of a layered-defense strategy, they do not solve the DDoS problem. They are designed to protect the network perimeter from infiltrations and exploits and to act as policy enforcement points, so they rely on stateful traffic inspection to enforce network policy and integrity. That reliance makes these devices susceptible to state resource exhaustion, which results in dropped traffic, device lock-ups and potential crashes.
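
To make the state-exhaustion risk concrete, here is a toy Python sketch (the table size and connection names are invented, and no particular vendor’s device is modeled) showing how a fixed-size connection table, once filled by idle attack connections, starts rejecting legitimate users:

```python
# Toy model of state-table exhaustion on a stateful firewall/IPS.
# The table size and traffic mix are invented for illustration only.

MAX_STATE_ENTRIES = 10_000          # hypothetical connection-table capacity
state_table = set()

def handle_new_connection(conn_id: str) -> bool:
    """Admit a connection only if a state slot is free."""
    if conn_id in state_table:
        return True
    if len(state_table) >= MAX_STATE_ENTRIES:
        return False                # table full: new connections are dropped
    state_table.add(conn_id)
    return True

# An application-layer attack opens many connections and holds them idle,
# so entries are never released and the table silently fills up.
for i in range(MAX_STATE_ENTRIES):
    handle_new_connection(f"attacker-{i}")

print(handle_new_connection("legitimate-user"))  # False: service is now denied
```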

The application-layer DDoS threat actually amplifies the risk to data center operators. That’s because IPS devices and firewalls become more vulnerable to the increased state demands of this emerging attack vector—making the devices themselves more susceptible to the attacks.  Moreover, there is a distinct gap in the ability of existing edge-based solutions to leverage the cloud’s growing DDoS mitigation capacity, the service provider’s DDoS infrastructure or the dedicated DDoS mitigation capacity deployed upstream of the victim’s infrastructure.

Current solutions do not take advantage of the distributed computing power available in the network and cannot coordinate upstream resources to deflect an attack before it saturates the last mile. No existing solution enables DDoS mitigation both at the edge and in the cloud.

Cloud Signaling: A Faster, Automated Approach to Comprehensive DDoS Mitigation

Enterprises need comprehensive, integrated protection from the data center edge to the service provider cloud. For example, when data center operators discover they are under a service-disrupting DDoS attack, they should be able to mitigate the attack quickly in the cloud by triggering a signal to the upstream infrastructure of their provider’s network.

The following scenario demonstrates the need for cloud signaling from an enterprise’s perspective. A network engineer notices that critical services such as the corporate website, email and DNS are no longer accessible. After a root-cause analysis, the engineer realizes that the company’s servers are under a significant DDoS attack. Because external services are down, the entire company, along with its customers, is suddenly watching his every move. He must then work with the customer support centers of multiple upstream ISPs to coordinate a broad DDoS mitigation response to stop the attack.

Simultaneously, he must provide constant updates internally to management teams and various application owners. To be effective, the engineer must also have the right internal tools available in front of the firewalls to stop the application-layer attack targeting the servers. All of this must be done in a high-pressure, time-sensitive environment.

Until now, no comprehensive threat resolution mechanism has existed that completely addresses application-layer DDoS attacks at the data center edge and volumetric DDoS attacks in the cloud. True, many data center operators have purchased DDoS protection services from their ISP or MSSP. But they lack a simple mechanism to connect the premises to the cloud and a single dashboard to provide visibility, capabilities that could stop targeted application attacks as well as upstream volumetric threats distributed across multiple providers.

The previous hypothetical scenario would be quite different if the data center engineer had the option of signaling to the cloud. Once he discovered that the source of the problem was a DDoS attack, the engineer could choose to mitigate it in the cloud by triggering a cloud signal to the provider network. The cloud signal would include details about the attack to increase the effectiveness of the provider’s response. This would take internal pressure off the engineer from management and application owners. It would also allow the engineer to give the upstream cloud provider more information about the attack and fine-tune the cloud defense.
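
The article does not define how such a signal would be encoded, so the sketch below is purely hypothetical: the endpoint URL, token and payload fields are invented, and it should be read as an illustration of the kind of request an edge device might send to an upstream provider’s mitigation API, not as Arbor’s actual Cloud Signaling protocol.

```python
# Hypothetical illustration only: the endpoint and payload shape are invented.
import json
import urllib.request

MITIGATION_ENDPOINT = "https://upstream-provider.example.com/api/mitigation-requests"

def send_cloud_signal(attack_details: dict, api_token: str) -> int:
    """Ask the upstream provider to begin mitigating on our behalf."""
    request = urllib.request.Request(
        MITIGATION_ENDPOINT,
        data=json.dumps(attack_details).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return response.status

# Example payload: enough context for the provider to scrub the right traffic.
attack_details = {
    "protected_prefix": "203.0.113.0/24",        # documentation range, not real
    "attack_type": "volumetric",
    "observed_bps": 40_000_000_000,
    "top_sources": ["198.51.100.7", "198.51.100.23"],
}
# send_cloud_signal(attack_details, api_token="example-token")
```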

As DDoS attacks become more prevalent, data center operators and service providers must find new ways to identify and mitigate evolving DDoS attacks. Vendors must empower data center operators to quickly address both high-bandwidth attacks and targeted application-layer attacks in an automated and simple manner. This saves companies from major operational expense, customer churn and revenue loss. It’s called Cloud Signaling and it’s the next step in protecting data centers in the cloud, including revenue-generating applications and services.

Rakesh Shah has been with Arbor Networks since 2001, helping to take products from early stage to category-leading solutions. Before managing the product marketing group, Rakesh was the Director of Product Management for Arbor’s Peakflow products, and he was also a manager in the engineering group. Previously, Rakesh held various engineering and technical roles at Lucent Technologies and CGI/AMS. He holds an M.Eng. from Cornell University and a B.S. from the University of Illinois at Urbana-Champaign, both in Electrical and Computer Engineering.

Pass the Buck: Who’s Responsible for Security in the Cloud?

July 27, 2011

Cloud computing changes the equation of responsibility and accountability for information security and poses some new challenges for enterprise IT. At Vormetric we are working with service providers and enterprises to help them secure and control sensitive data in the cloud with encryption, which has given us a good perspective on the issues surrounding who is responsible for cloud security.

While data owners are ultimately accountable for maintaining security and control over their information, the cloud introduces a shared level of responsibility between the data owner and the service provider. This division of responsibility varies depending on the cloud delivery model and specific vendor agreements with the cloud service provider (CSP). In addition, CSPs’ use of multi-tenant technology, which achieves economies of scale by serving customers on shared infrastructure and applications, introduces another layer of risk.

Where the buck stops or gets passed on poses some new operational and legal issues.  Let’s look at each cloud delivery model to understand how each creates a slightly different balance of security responsibility between the data owner and CSP.

Software-as-a-Service (SaaS) and Platform-as-a-Service (PaaS) models typically place much of the responsibility for data security and control in the hands of the SaaS or PaaS provider. There is not much leeway for enterprises to deploy data security or governance solutions in SaaS and PaaS environments since the CSP owns most of the IT and security stack.

Infrastructure-as-a-Service (IaaS) tilts the balance towards a greater degree of shared responsibility.  IaaS providers typically provide some baseline level of security such as firewalls and load balancing to mitigate Distributed Denial of Service (DDoS) attacks. Meanwhile, responsibility for securing the individual enterprise instance and control of the data inside of that instance typically falls to the enterprise.

A widely-referenced example that clearly describes IaaS security responsibilities can be found in the Amazon Web Services Terms of Service. While enterprises can negotiate liability, terms and conditions in their Enterprise Agreements with service providers, the IaaS business model is not well suited for CSPs to assume inordinate amounts of security risk. CSPs aren’t typically willing to take on too much liability because this could jeopardize their business.

Since an enterprise’s ownership of security in the cloud gradually increases as it moves from SaaS to PaaS to IaaS, it’s important to clearly understand the level of responsibility spelled out in the terms and conditions of CSP agreements.

Having established what a cloud provider is delivering in the way of security, enterprises should backfill these capabilities with additional controls necessary to adequately protect and control data.  This includes identity and access management, encryption, data masking and monitoring tools such as Security Information and Event Management (SIEM) or Data Loss Prevention (DLP).  One valuable resource for evaluating cloud service provider security is the Cloud Security Alliance Cloud Controls Matrix.
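
As one concrete example of backfilling such a control, the minimal sketch below (using the third-party Python “cryptography” package; upload and key-storage mechanics are omitted) encrypts data on the enterprise side before it ever reaches the provider, so key custody stays with the data owner:

```python
# Minimal sketch: encrypt data client-side before handing it to a CSP, so the
# enterprise, not the provider, holds the key. Uses the third-party
# "cryptography" package; storage and upload details are omitted.
from cryptography.fernet import Fernet

def encrypt_for_cloud(plaintext: bytes, key: bytes) -> bytes:
    """Return ciphertext that is safe to store with a cloud provider."""
    return Fernet(key).encrypt(plaintext)

def decrypt_from_cloud(ciphertext: bytes, key: bytes) -> bytes:
    """Recover the plaintext; only the key holder can do this."""
    return Fernet(key).decrypt(ciphertext)

key = Fernet.generate_key()   # keep this in the enterprise's key store,
                              # never alongside the data in the cloud
blob = encrypt_for_cloud(b"customer record #1234", key)
assert decrypt_from_cloud(blob, key) == b"customer record #1234"
```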

Enterprises looking to further mitigate the risk of data security incidents in the cloud can also investigate cyber insurance offerings that protect against events such as cyber extortion, loss of service or a data confidentiality breach. Finally, enterprises should develop both a data recovery plan and an exit strategy in case they need to terminate their relationship with a CSP.

Cloud security is a new and evolving frontier for enterprises as well as CSPs. Understanding the roles, responsibilities, and accountability for security in the cloud is critical for making sure that data is protected as well in the cloud as it is in an enterprise data center. The process starts with a thorough due diligence of what security measures are provided and not provided by the CSP, which enables enterprises to know where they need to shore up cloud defenses. Until further notice, the cloud security buck always stops with the enterprise.

Todd Thiemann is Senior Director of Product Marketing at Vormetric and co-chair of the Cloud Security Alliance (CSA) Solution Provider Advisory Council.

PKI Still Matters, Especially in the Cloud

July 15, 2011

By:  Merritt Maxim

Director of IAM Product Marketing

CA Technologies Inc.

Infosec veterans probably remember (with a smirk) how Public Key Infrastructure (PKI) was heralded as the next “big thing” in information security at the dawn of the 21st century. While PKI failed to reach the broad adoption the hype suggested, certain PKI capabilities, such as key management, are still important. The Diffie-Hellman key exchange protocol, which solved the serious technical challenge of establishing a shared secret key over an insecure channel, laid the cryptographic groundwork on which PKI was built.
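
As a refresher, the exchange itself is just modular exponentiation. The toy Python sketch below uses deliberately tiny public parameters; real deployments rely on vetted libraries and groups of 2048 bits or more, but the arithmetic is the same:

```python
# Textbook Diffie-Hellman with toy numbers (illustration only).
import secrets

p, g = 23, 5                       # public prime modulus and generator

a = secrets.randbelow(p - 2) + 1   # Alice's private value
b = secrets.randbelow(p - 2) + 1   # Bob's private value

A = pow(g, a, p)                   # Alice sends A over the insecure channel
B = pow(g, b, p)                   # Bob sends B over the insecure channel

shared_alice = pow(B, a, p)        # (g^b)^a mod p
shared_bob = pow(A, b, p)          # (g^a)^b mod p
assert shared_alice == shared_bob  # both sides now hold the same secret,
                                   # even though only p, g, A and B were sent
```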

I had not thought much about key management until a recent visit to my local car dealer for an oil change. While waiting, I noticed several dealer employees struggling with a large wall-mounted metal box, the dealer’s central repository for all car keys on the lot. The box is accessed via a numeric keypad, which seemed a sensible approach since the keypad logs all access attempts for auditing and tracking purposes.

However, on this particular day, the numeric codes would not open the box, leaving the keys inaccessible and the employees quite frustrated. I left before seeing how the problem was resolved, but the incident reminded me of key management and how that technology is still crucial for data management, especially with the rise of cloud computing.

Key management often goes unnoticed for extended periods of time and only surfaces when a problem appears, as was the case at the dealer. When problems do appear, key management is either the solution or the culprit, and in the latter case the cause is usually an improper implementation. Poor key management can create several significant problems:

  • Complete compromise: A poor key management system, if broken, could mean that all keys are compromised and all encrypted data is at risk (see the postscript for a striking example). Fixing a broken key management system can be complex and costly.
  • Inaccessibility: As I witnessed at the dealer, a poorly implemented key management system may block some or all access to encrypted data. That may seem good from a security standpoint, but it must be weighed against the inconvenience and lost productivity of being unable to access the data.

With the continued stream of data breaches in the daily headlines, a common refrain is that data encryption is the solution to preventing them. While data encryption is certainly a security best practice and an important first step, especially for sensitive data or PII, effective key management must accompany any data encryption effort to ensure a comprehensive implementation.

Here’s why.

Just throwing encryption at a problem, especially after a breach, is not a panacea; it must be deployed within the context of a broader key management system. NIST Special Publication 800-57, “Recommendation for Key Management, Part 1: General,” published in March 2007, states:

“The proper management of cryptographic keys is essential to the effective use of cryptography for security. Keys are analogous to the combination of a safe. If the combination becomes known to an adversary, the strongest safe provides no security against penetration. Similarly, poor key management may easily compromise strong algorithms. Ultimately, the security of information protected by cryptography directly depends on the strength of the keys, the effectiveness of mechanisms and protocols associated with keys, and the protection afforded the keys.”

Even though this NIST publication is more than four years old, this statement is still relevant.

A centralized key management solution should deal with the three ‘R’s: renewal, revocation and recovery. Key management is necessary to solve problems such as the following (a minimal registry sketch follows the list):

  • Volume of keys: In a peer-to-peer model, using freeware like PGP may work, but an organization with thousands of users needs centralized key management. Just as organizations must revoke privileges and entitlements when a user leaves, they must do the same with cryptographic keys, something that can only be achieved through central key management and would crumble in a peer-to-peer model.
  • Archiving and data recovery: Data retention policies vary by regulation and policy, but anywhere from three to 10 years is common. If archived data is encrypted (generally a good practice), key management is necessary to ensure the data can be recovered and decrypted in the future if needed as part of an investigation. The growth of cloud-based storage makes this problem particularly acute.
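
To make the three ‘R’s concrete, here is a minimal Python sketch of a centralized key registry; the class, method and key names are invented, and a production system would add access control, auditing and durable, protected storage:

```python
# Minimal sketch of a centralized key registry covering the three 'R's:
# renewal, revocation and recovery. Names and fields are invented.
import secrets
from dataclasses import dataclass
from typing import Dict, Optional

@dataclass
class ManagedKey:
    key_id: str
    material: bytes
    revoked: bool = False
    superseded_by: Optional[str] = None

class KeyRegistry:
    def __init__(self):
        self._keys: Dict[str, ManagedKey] = {}

    def create(self, key_id: str) -> ManagedKey:
        key = ManagedKey(key_id, secrets.token_bytes(32))
        self._keys[key_id] = key
        return key

    def renew(self, old_id: str, new_id: str) -> ManagedKey:
        """Rotate to a new key; the old one stays archived for recovery."""
        new_key = self.create(new_id)
        self._keys[old_id].superseded_by = new_id
        return new_key

    def revoke(self, key_id: str) -> None:
        """For example, when the key's custodian leaves the organization."""
        self._keys[key_id].revoked = True

    def recover(self, key_id: str) -> bytes:
        """Retrieve archived key material to decrypt retained data."""
        return self._keys[key_id].material

registry = KeyRegistry()
registry.create("backup-2011")
registry.renew("backup-2011", "backup-2012")    # renewal
registry.revoke("backup-2011")                  # revocation
old_material = registry.recover("backup-2011")  # recovery for old archives
```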

Organizations that encrypt data without a centralized comprehensive key management system are still at risk of a breach because the lack of a centralized system can cause inconsistencies and error-prone manual processes.  Further, today’s sophisticated hackers are more likely to attack a poorly implemented key management system rather than attack an encrypted file, much like the German Army flanked France’s Maginot Line in 1940 to avoid dealing with the line’s formidable defenses.  This is why an important aspect of key management is ensuring appropriate checks and balances on the administrators of these systems as well as ongoing auditing of the key management processes and systems to detect any potential design errors, or worse, malicious activity by authorized users.

Key management is not going away.  As cloud computing adoption grows, key management is going to become even more crucial especially around data storage in the cloud.  We have already seen some examples with online storage providers that show how key management is already an issue in the cloud.  Cloud computing and encryption are great concepts, but organizations must accompany these with a sound key management strategy.  Otherwise, the overall effectiveness of such systems will be reduced.

P.S. A great example of what happens with an ineffective key management implementation is convicted spy John Walker, who managed cryptographic keys for U.S. Navy communications but copied the keys and sold them to the USSR for cash. Walker compromised a significant volume of U.S. Navy encrypted traffic, and because there was no meaningful auditing of his duties, his spying went undetected for years. There are several books on the Walker case; I recommend Pete Earley’s “Family of Spies.”


Merritt Maxim is director of IAM product marketing and strategy at CA Technologies.  He has 15+ years of product management and product marketing experience in Identity and Access Management (IAM) and the co-author of “Wireless Security.” Merritt blogs and is an active tweeter on a range of IAM, security & privacy topics.  Merritt received his BA cum laude from Colgate University and his MBA from the MIT Sloan School of Management.


Understanding Best-in-Class Cloud Security Measures and How to Evaluate Providers

July 11, 2011

By Fahim Siddiqui

Despite broad interest in cloud computing, many organizations have been reluctant to embrace the technology due to security concerns. While today’s businesses can benefit from cloud computing’s on-demand capacity and economies of scale, the model does require that they relinquish some control over their applications and data.


Unfortunately, security controls vary significantly from one cloud provider to the next.  Therefore, companies need to make certain the providers they use have invested in state-of-the-art security measures. This will help ensure that a company’s customer security and data protection policies can be seamlessly extended to the cloud applications to which they subscribe. Best practices dictate that critical information should be protected at all times, and from all possible avenues of attack. When evaluating cloud providers, practitioners should address four primary areas of concern — application, infrastructure, process and personnel security — each of which is subject to its own security regimen.


1. Application Security

With cloud services, the need for security begins as soon as users access the supporting application. The best cloud providers protect their offerings with strong authentication and equally potent authorization systems. Authentication ensures that only those with valid user credentials (who can also prove their identity claims) obtain access, while authorization controls allow administrators to decide which services and data items users may access and update. Multi-factor authentication may also be provided for controlling access to high sensitivity privileges (e.g. administrators) or information.
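
As a simplified illustration of the authentication and authorization split described above, the Python sketch below (the roles, permission names and the multi-factor rule are all hypothetical) grants access only when an authenticated user’s role holds the permission and, for high-sensitivity privileges, only after a second factor has been verified:

```python
# Simplified sketch of role-based authorization with an MFA requirement for
# high-sensitivity privileges. Roles and permission names are hypothetical.
ROLE_PERMISSIONS = {
    "viewer": {"read_document"},
    "editor": {"read_document", "update_document"},
    "admin": {"read_document", "update_document", "manage_users"},
}
MFA_REQUIRED_PERMISSIONS = {"manage_users"}   # high-sensitivity privileges

def is_authorized(role: str, permission: str, mfa_verified: bool) -> bool:
    """Authorization decision made only after authentication has succeeded."""
    if permission not in ROLE_PERMISSIONS.get(role, set()):
        return False
    if permission in MFA_REQUIRED_PERMISSIONS and not mfa_verified:
        return False
    return True

print(is_authorized("editor", "update_document", mfa_verified=False))  # True
print(is_authorized("admin", "manage_users", mfa_verified=False))      # False
print(is_authorized("admin", "manage_users", mfa_verified=True))       # True
```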


All application-level access should be protected using strong encryption to prevent unauthorized sniffing or snooping of online activities. Application data needs to be validated on the way in and on the way out to ensure security. Robust watermarking features ensure that materials cannot be reproduced or disseminated without permission. More advanced security measures include the use of rights management technology to enforce who can print, copy or forward data, and prevent such activity unless it is specifically authorized, as well as impose revocation and digital shredding even after documents leave the enterprise.
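
To illustrate validating data “on the way in and on the way out,” here is a small Python sketch; the username rule is hypothetical, with an allow-list check applied to inbound data and output encoding applied before data is rendered:

```python
# Small sketch of inbound allow-list validation and outbound encoding.
# The username rule is hypothetical.
import html
import re

USERNAME_PATTERN = re.compile(r"[A-Za-z0-9_.-]{3,64}")

def validate_inbound_username(value: str) -> str:
    """Reject anything outside the allow-list before the application uses it."""
    if not USERNAME_PATTERN.fullmatch(value):
        raise ValueError("invalid username")
    return value

def encode_outbound(value: str) -> str:
    """Encode data before rendering so it cannot be misread as markup."""
    return html.escape(value)

validate_inbound_username("alice_01")                 # passes
print(encode_outbound("<script>alert(1)</script>"))   # &lt;script&gt;...
```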


2. Infrastructure Security

Best-in-class providers will have a highly available, redundant infrastructure to provide uninterrupted services to their customers. A cloud provider or partner should use real-time replication, multiple network connections, alternate power sources and state-of-the-art emergency response systems to provide complete and thorough data protection. Network and perimeter security are paramount for infrastructure elements. Therefore, leading-edge technologies for firewalls, load balancers and intrusion detection/prevention should be in place and continuously monitored by experienced security personnel.


3. Process Security

Cloud providers, particularly those handling business-critical information, invest large amounts of time and resources in developing security procedures and controls for every aspect of their service offerings. Truly qualified cloud providers will have completed a SAS 70 Type II audit or an international equivalent. Depending upon geography or industry requirements, they may also have enacted measures to keep their clients in compliance with appropriate regulations (e.g., the U.S. Food and Drug Administration’s 21 CFR Part 11 regulations for the pharmaceutical industry). ISO 27001 certification is another good measure of a provider’s risk management strategies. These audits and certifications ensure thorough outside reviews of security policies and procedures.


4. Personnel Security

People are an important component of any information system, but they can also present insider threats that no outside attacker can match. At the vendor level, administrative controls should be in place to limit employee access to client information. Background checks of all employees and enforceable confidentiality agreements should be mandatory.


Putting Providers to the Test

When evaluating a cloud provider’s security approach, it’s important to ask them to address how they provide the following:

  • Holistic, 360-degree security: Providers must adhere to the most stringent industry security standards and meet client expectations, regulatory requirements and prevailing best practices. This includes their coverage of application, data, infrastructure, product development, personnel and process security.
  • Complete security cycle: A competent cloud provider understands that implementing security involves more than technology — it requires a complete lifecycle approach. Providers should offer a comprehensive approach to training, implementation and auditing/testing.
  • Proactive security awareness and coverage: The best cloud providers understand that security is best maintained through constant monitoring, and they take swift, decisive steps to limit potential exposures to risks.
  • Defense-in-depth strategy: Savvy cloud vendors understand the value of defense in depth, and can explain how they use multiple layers of security protection to protect sensitive data and assets.
  • 24/7 customer support: Just as their applications are available around-the-clock, service providers should operate support and incident response teams at all times.


Tips for Obtaining Information from Service Providers

When comparing cloud providers, it is essential to check their ability to deliver on their promises. All cloud providers promise to provide excellent security, but only through discussions with existing customers, access to the public record and inspection of audit and incident reports can the best providers be distinguished from their run-of-the-mill counterparts.


Ideally, obtaining information about security from providers should require little or no effort. The providers who understand security — particularly those for whom security is a primary focus — will provide detailed security information as a matter of course, if not a matter of pride.

Fahim Siddiqui, chief product officer, IntraLinks (www.intralinks.com)

Fahim has been with IntraLinks since January 2008. Prior to joining IntraLinks, he served as CEO at Sereniti, a privately held technology company. He was also the Managing Partner of K2 Software Group, a technology consulting partnership providing product solutions to companies in the high tech, energy and transportation industries. Previously, Fahim held executive and senior management positions in engineering and information systems with ICG Telecom, Enron Energy Services, MCI, Time Warner Telecommunications and Sprint.


Watch Out for the Top 6 Cloud Gotchas!

July 11, 2011

By Margaret Dawson, VP of Product Management, Hubspan

I am a huge proponent of cloud-based solutions, but I also have a pet peeve about people who look to the cloud just for the cloud’s sake and do not take the time to do due diligence. While the cloud can bring strong technical, economic and business benefits if managed correctly, it can also cause pain, just like any solution adopted without clear evaluation criteria to make sure it meets your needs today and in the future.

In my many discussions with IT leaders and from my own experience, I have outlined the top six cloud gotchas that you need to watch out for:

  1. Standards: The cloud, while seemingly everywhere right now, is still relatively young, with minimal standards. This one is particularly important with Platform as a Service (PaaS) vendors. Many of these platforms provide an easy-to-use and fast-to-deploy application development and life cycle environment. However, most are also based on proprietary platforms that do not play nicely with other solutions. It’s important to understand potential proprietary lock-in as well as how you interface with the cloud platform or with the API infrastructure.
  2. Flexibility: This seems an odd cloud gotcha, since flexibility and agility are touted as among the cloud’s greatest benefits. In this case, I’m talking about flexibility within the cloud environment and in the way you interact with the cloud. What communication protocols are supported, such as REST, SOAP and FTPS? In the PaaS world, what languages are supported: is it flexible or, for example, a Java or .NET environment only? Does it have a flexible API infrastructure?
  3. Reliability & Scalability: Everyone knows that the cloud provides on-demand scalability, but make sure your solution scales both up and DOWN, with the latter being the stickler for most companies. Burst capacity and quick additions of capacity might be easy, but what if you want to scale back your deployment? Make sure that is just as easy and without penalties. Overall, know the bandwidth capability across the deployment, not just the first or last mile. On the reliability front, be wary of claims of four or five nines (99.99% or 99.999% uptime) and ask for an uptime report from your cloud vendor; the arithmetic behind those figures is sketched just after this list. Build uptime into your SLA (service level agreement) if the cloud deployment is mission critical for your business.
  4. Security: This one is probably the most discussed and debated. I believe, and many vendors have proved, that a cloud-based solution can be as secure as, if not more secure than, an on-premise approach. But as with technology in general, not all clouds are created equal, and security needs to be evaluated holistically. The platform should provide end-to-end data protection, which means encryption both in motion and at rest, as well as strong and auditable access control rules. Do you know where the data is located amid the vendor’s many data centers, and is the level of data protection consistent among all of those environments? Does the vendor use secure protocols, such as SSL/TLS, for moving the data? Look for key compliance adherence by the vendor, such as PCI DSS and SAS 70 Type II. There’s a reason the Cloud Security Alliance (CSA) is now developing PCI courseware: there is a clear link between the security capabilities of a cloud platform and its ability to meet the stringent security and data protection demands of the PCI mandate.
  5. Costs: I can hear everyone now saying “duh,” this one is obvious. Yes, the initial cost of deployment or your monthly subscription fee is an easy evaluation. However, look for hidden or unexpected costs, and make sure you fully understand the pricing model. Many cloud solutions are cost-effective for a standard deployment, but each additional module or add-on feature slaps you with additional costs. Does the vendor charge per support incident? Are upgrades to new versions included? There are also often pricing tiers or “buckets,” and when you hit the next tier, your costs can increase significantly. Finally, look for a way to clearly show the ROI or success metrics for the solution. Align your costs with your expected results, whether quantitative or qualitative. This is particularly important if your company is new to cloud consumption, as your ability to show success with an initial deployment will influence future implementations.
  6. Integration: Integration is truly the missing link in the cloud.  It’s so appealing to put our data in the cloud or develop new applications or extend our current infrastructure that sometimes we forget that the data in the cloud needs to be accessible, secured and managed just like on-premise data.  How are you migrating data to the cloud?  If you are putting everything on a physical disk and shipping it to the cloud vendor, doesn’t that rather run contrary to the whole cloud benefit?  How are you exchanging and sharing information between cloud-based environments and on-premise infrastructure or even between two clouds?  Think about integration before you deploy a new cloud solution and think about integration among internal systems and people as well as external partners and corporate divisions.  Gartner is doing a lot of work in this area, and has a new market category called “cloud brokers”.
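
Purely as a quick illustration of the uptime figures mentioned in gotcha #3, the short Python sketch below converts each availability level into the downtime it actually allows per year:

```python
# Worked arithmetic for the "nines": yearly downtime allowed at each level.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime allows about {downtime:.0f} minutes of downtime per year")
```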

As I’ve said many times in presentations on the cloud, you should first buy the solution, then buy the cloud. The cloud is not a panacea, and while a cloud architectural approach brings strong business and IT value, you need to thoroughly evaluate any solution to ensure it not only meets your company’s technical and business requirements, but also enables you to grow and evolve.


Margaret Dawson is vice president of product management for Hubspan. She’s responsible for the overall product vision and roadmap and works with key partners in delivering innovative solutions to the market. She has over 20 years’ experience in the IT industry, working with leading companies in the network security, semiconductor, personal computer, software, and e-commerce markets, including Microsoft and Amazon.com. Dawson has worked and traveled extensively in Asia, Europe and North America, including ten years working in the Greater China region, consulting with many of the area’s leading IT companies and serving as a BusinessWeek magazine foreign correspondent.

