What’s a CASB? Gartner Report Outlines Use Cases, Architecture, and Evaluation Criteria

September 29, 2015

By Cameron Coles, Sr. Product Marketing Manager, Skyhigh Networks

Given the explosive growth of cloud computing and numerous high-profile security and compliance incidents, it’s not surprising that surveys of IT leaders find that cloud tops the list of security priorities this year. In its latest technology overview (download a free copy here), Gartner gives a detailed overview of the emerging security category of cloud access security brokers (CASBs), which offer a control point for enforcing security policies across cloud services. By 2016, Gartner predicts, 25% of enterprises will secure their cloud usage with a CASB, up from less than 1% in 2012. Organizations across all industries are deploying CASB solutions because they make it possible to migrate to the cloud securely.

As corporate data moves to the cloud and employees access that data from mobile devices, they bypass existing security technologies, creating what Gartner calls a “SaaS security gap.” In response, many organizations have attempted to block cloud services en masse using their firewall or proxy. With thousands of cloud services available today, however, organizations end up blocking only the well-known ones, which pushes employees toward lesser-known, potentially riskier cloud services that are not being blocked. CASB solutions, according to Gartner, let IT shift from being the “no” team to the “let’s do this and here’s how” team.

Gartner’s 4 Pillars of Required CASB Functionality
Gartner organizes CASB capabilities into four pillars of required functionality: visibility, compliance, data security, and threat protection. While cloud providers are starting to offer some limited policy enforcement capabilities, one benefit of a cross-cloud CASB solution that addresses each functional area, says Gartner, is that it gives an organization a centralized place to manage and enforce policies. Since capabilities vary widely among cloud providers (and even CASB vendors), this also ensures a consistent set of controls across cloud services.

  • Visibility: Gives organizations visibility into users, services, data, and devices.
  • Compliance: Provides file content monitoring to find and report on regulated data in the cloud.
  • Data Security: Adds an additional layer of protection, including encryption.
  • Threat Protection: Analyzes traffic patterns to identify compromised accounts and malicious usage.

Using cloud access security brokers, organizations can:

  • Identify what Shadow IT cloud services are in use, by whom, and what risks they pose to the organization and its data
  • Evaluate and select cloud services that meet security and compliance requirements using a registry of cloud services and their security controls
  • Protect enterprise data in the cloud by preventing certain types of sensitive data from being uploaded, and by encrypting and tokenizing data (a minimal content-check sketch follows this list)
  • Identify threats and potential misuse of cloud services
  • Enforce differing levels of data access and cloud service functionality based on the user’s device, location, and operating system
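To make the upload-prevention capability above concrete, here is a minimal Python sketch of the kind of content check a CASB might apply before allowing a file into a cloud service. The patterns, function names, and policy action are illustrative assumptions, not any vendor’s actual implementation.

```python
import re

# Hypothetical sketch: block uploads that contain obviously sensitive data.
# The patterns and policy behavior are illustrative, not from any specific CASB.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def classify(content: str):
    """Return the names of sensitive-data patterns found in the content."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(content)]

def upload_allowed(content: str) -> bool:
    """Allow the upload only if no sensitive pattern matches."""
    matches = classify(content)
    if matches:
        print(f"Upload blocked: detected {', '.join(matches)}")
        return False
    return True

if __name__ == "__main__":
    upload_allowed("Quarterly report, nothing sensitive here.")  # allowed
    upload_allowed("Customer SSN: 123-45-6789")                  # blocked
```

A production DLP engine would combine many more detectors (keywords, document fingerprints, exact-data matching) and route violations to encryption, tokenization, or quarantine rather than simply blocking the upload.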

CASBs Have Multiple Deployment Models
While many CASBs leverage log data from firewalls and web proxies to gain visibility into cloud usage, Gartner defines two major deployment architectures that CASB solutions use to enforce policies across cloud services: proxies and APIs. In proxy mode, a CASB sits between the end user and the cloud service to monitor traffic and enforce inline policies such as encryption and access control; CASBs can leverage a forward proxy, a reverse proxy, or both. The other deployment mode is direct integration with specific cloud providers that expose events and policy controls via their APIs. Depending on the cloud provider’s API, a CASB can view end-user activity and define policies.
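As a rough illustration of the API mode, the Python sketch below polls a hypothetical cloud provider’s activity-event endpoint and flags events that violate a simple policy. The endpoint URL, token, and event fields are placeholders assumed for the example; real provider APIs differ in authentication, pagination, and event schemas.

```python
import requests

# Hypothetical API-mode monitoring: poll a cloud provider's activity-event
# endpoint and flag events that violate a simple access policy.
API_URL = "https://api.example-cloud.com/v1/activity-events"  # placeholder endpoint
TOKEN = "YOUR_API_TOKEN"                                       # placeholder credential

BLOCKED_COUNTRIES = {"XX"}                      # illustrative policy: no access from these locations
SENSITIVE_ACTIONS = {"download", "share_external"}

def fetch_events():
    """Retrieve recent activity events from the (hypothetical) provider API."""
    response = requests.get(API_URL, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=10)
    response.raise_for_status()
    return response.json().get("events", [])

def violates_policy(event):
    """Flag sensitive actions performed from a blocked location."""
    return (event.get("action") in SENSITIVE_ACTIONS
            and event.get("country") in BLOCKED_COUNTRIES)

if __name__ == "__main__":
    for event in fetch_events():
        if violates_policy(event):
            print(f"Policy violation: {event.get('user')} performed "
                  f"{event.get('action')} from {event.get('country')}")
```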

Certain security capabilities are dependent on the deployment model, and Gartner recommends that organizations look to CASB solutions offering a full range of architecture options to cover all cloud access scenarios. Gartner also notes that vendors offering only API-based controls today are not well positioned to extend their platforms to include proxy-based controls, given the significant investment needed to develop a robust proxy architecture that scales to the large data volumes exchanged between end users and cloud services. Depending on industry regulations, customers may also require an on-premises proxy, so Gartner recommends looking for a vendor that offers both on-premises and cloud-based proxy deployments.

CASB Evaluation Criteria
According to Gartner, while many providers focus on limited areas of the four CASB functionality pillars, most organizations prefer to select a single CASB provider that covers all use cases. Gartner recommends that organizations carefully evaluate CASB solutions based on multiple criteria. One consideration is how many cloud providers the CASB solution can discover and the breadth of attributes tracked in the CASB’s registry of cloud providers. Another consideration is whether the CASB supports controls for the business-critical cloud services currently in use or planned in the near future.

Finally, Gartner notes that the CASB market is crowded and expects that consolidation will occur and some vendors will exit the market in the next five years. A good predictor of whether a vendor will continue operating is whether it is one of the market leaders in customer traction. Companies with more customers naturally have a more complete view of customer needs, which enables them to develop better solutions to meet those needs, which in turn attracts more customers and supports a sustainable business. To read more about Gartner’s view of the market, I encourage you to download a free copy today.

AV Can’t Stop Zero-Day Attacks and They’re Hurting Productivity

September 22, 2015

By Susan Richardson, Manager/Content Strategy, Code42

It’s been almost 18 months since Symantec officially declared antivirus software “dead” in an interview with the Wall Street Journal. So why did a recent study by ESG find that 73 percent of enterprises have at least two AV products deployed and nearly one-third use three or more?

With antivirus, more is less
In the face of industry reports that AV software is only 50 percent effective in identifying malware, it seems that many enterprises are adopting a “more is better” mindset: More AV products mean a bigger database of known malware “signatures,” which increases the chances of catching malware before it breaches the enterprise environment—right?

Wrong. Deploying multiple AV products might expand the total number of known malware signatures in your AV armor, but this approach doesn’t combat the biggest flaw: new, zero-day malware that no AV product has ever encountered (and therefore can’t possibly recognize). Even with frequent updates to the signature database, AV software just can’t keep up. The September 2015 release of Symantec’s AV product includes a total of 37 million malware signatures, but the AV-TEST Institute registers over 390,000 new pieces of malware every single day—and sophisticated cybercriminals are doing their own QA, running new malware against common AV products to make sure it goes undetected.
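The core limitation is easy to see in code. The sketch below mimics signature-based detection as a hash lookup against a known-malware set; the “database” and sample payloads are invented for illustration, not real signatures.

```python
import hashlib

# Minimal sketch of signature-based (hash-matching) detection and why it fails
# against unseen samples. The "signature database" is an illustrative set of
# SHA-256 digests, not a real AV feed.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"known-malicious-payload").hexdigest(),
}

def is_known_malware(file_bytes: bytes) -> bool:
    """Return True only if the file's hash is already in the signature set."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_MALWARE_HASHES

if __name__ == "__main__":
    print(is_known_malware(b"known-malicious-payload"))   # True: signature exists
    # A one-byte change (or any brand-new sample) produces a different hash,
    # so a purely signature-based check misses it.
    print(is_known_malware(b"known-malicious-payload!"))  # False: zero-day slips through
```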

As AV piles up, productivity goes down
It’s a game of cat and mouse that you’re destined to lose, and it’s eating up your IT budget—and hampering productivity. IT staff have to learn and configure multiple platforms, and all your staff are impacted by the frequent required updates. And if you’ve ever run a manual AV scan, you know that your computing capacity is reduced to a crawl.

Focus on detection and response
AV software remains a valuable first line of malware defense—and often a requirement for regulatory compliance. But instead of investing time and money in layering AV products on top of each other, enterprises need to shift to a “detect and respond” mindset. This means leveraging a centralized, real-time repository of all the data in your enterprise environment—including laptops and other mobile endpoints—to enable ongoing forensic analysis that will catch aberrations and anomalies across your entire system.
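As a rough illustration of what “detect and respond” over centralized endpoint data might look like, the sketch below compares each endpoint’s current file-change count against its own historical baseline and flags large deviations. The hostnames, counts, and threshold are assumptions for the example; real forensic analytics would draw on many more signals.

```python
from statistics import mean, stdev

# Illustrative anomaly check: compare today's activity on each endpoint against
# its own historical baseline. The event counts below are made-up sample data.
baseline_file_changes = {
    "laptop-042": [110, 95, 120, 105, 98, 115, 102],
    "laptop-107": [60, 72, 65, 58, 70, 66, 63],
}
todays_file_changes = {"laptop-042": 112, "laptop-107": 950}

def is_anomalous(history, today, threshold=3.0):
    """Flag a count more than `threshold` standard deviations above the baseline mean."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (today - mu) / sigma > threshold

for host, today in todays_file_changes.items():
    if is_anomalous(baseline_file_changes[host], today):
        print(f"{host}: anomalous activity ({today} file changes) - investigate")
```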

With this progressive security approach, you have the power to quickly isolate malicious code, identify where it entered and what data was affected in the environment, and mitigate the impacts of the breach. You might not be able to stop a new piece of malware from breaching your environment, but you’re in a strong position to corner the “mouse” before it does serious damage.

How Uber Uses the Cloud to Drive a Mobile Workforce

September 16, 2015

By Susan Richardson, Manager/Content Strategy, Code42

Oceans of ink have already been spilled extolling Uber’s innovative practices and growing profits, but here’s one aspect getting less attention: how the company’s nearly 100 percent cloud-based business empowers its vast network of mobile workers (the drivers):

  • Uber drivers have instant connectivity: Uber drivers enjoy instant connectivity to Uber’s cloud-based network at any time, from any place. This allows Uber to instantly deploy drivers and respond in real-time to demand.
  • Uber drivers get the convenience of BYOD: Uber gives its drivers the freedom and convenience of a true BYOD environment. Drivers simply install the secure Uber app on the mobile device of their choosing. Because no equipment or training is required, Uber has virtually no onboarding costs for adding new drivers.
  • Drivers’ devices aren’t major security threats: Since no private Uber data is stored on drivers’ mobile devices, neither Uber nor its drivers have to sweat the security risk of a driver losing a device (or having it stolen).
  • Drivers can switch devices without downtime: Drivers can quickly bounce back from a lost or stolen device, as well as easily switch to a new mobile device — without losing days or weeks of productivity.
  • Drivers can count on Uber uptime: Uber’s cloud-based service has significantly less risk of downtime, so drivers don’t have to worry about being stuck without access to Uber’s servers.

All of this adds up to a mutually beneficial relationship between Uber and its vast network of drivers. Uber drivers are drawn to the reliable service and convenience of the cloud-based platform, and Uber reaps the benefits of greater productivity from this empowered mobile workforce.

Like Uber, Like Everyone
Over the last year, the mobile workforce grew to nearly 100 million in the U.S. and Gartner expects that by 2017, 70 percent of all workers will conduct business via their personal devices. Like Uber, enterprises in every industry are increasingly finding themselves in charge of huge mobile workforces—and scrambling to maximize the opportunity of mobile productivity. Like Uber, the modern enterprise can look to cloud-based services to solve this challenge by:

  • Leveraging the cloud to enable BYOD convenience and reliable anytime-anywhere connectivity for mobile workers
  • Deploying sophisticated cloud storage and automated endpoint backup to gain complete visibility of employee data
  • Mitigating and rapidly remediating risk by knowing who had what data when in the event of lost or stolen devices

What’s Hindering the Adoption of Cloud Computing in Europe?

September 15, 2015

As with their counterparts in North America, organizations across Europe are eagerly incorporating cloud computing into their operating environments. However, despite the overall enthusiasm around the potential of cloud computing to transform their business practices, many CIOs have real concerns about migrating their sensitive data and applications to public cloud environments. Why? In essence, it boils down to a few core areas of concern:

  1. A perceived lack of clarity in existing Cloud Service Level Agreements and security policy agreements
  2. The application, monitoring, and enforcement of security SLAs
  3. The relative immaturity of cloud services

These issues, of course, are far from new; in fact, great progress has been made over the past five years to address these and other concerns and to foster greater trust in cloud computing. The one thread that runs through all of these issues is transparency: the greater the transparency a cloud service provider offers into its approach to information security, the more confident organizations will be in adopting public cloud services and trusting providers with their data and assets.

To this end, the European Commission (EC) launched the Cloud Selected Industry Group (SIG) on Certification in April 2013 with the aim of identifying certifications and schemes deemed “appropriate” for the European Economic Area (EEA) market. Following this, ENISA (European Network and Information Security Agency) launched its Cloud Certification Schemes Metaframework (CCSM) initiative in 2014 to map the detailed security requirements used in the public sector to the security objectives described in existing cloud certification schemes. And of course, the Cloud Security Alliance has also played a role in defining security-specific certification schemes with the creation of the CSA Open Certification Framework (CSA OCF), which enables cloud providers to achieve a global, accredited, and trusted certification.

Beyond defining a common set of standards and certifications, SLAs have become an important proxy by which to gauge a cloud provider’s security and privacy capabilities. The specification of security parameters in cloud service level agreements (“secSLAs”) has been recognized as a mechanism to bring more transparency and trust to both cloud service providers and their customers. Unfortunately, the conspicuous lack of relevant cloud security SLA standards has also become a barrier to their adoption. For these reasons, standardized cloud secSLAs should become part of the more general SLAs/master service agreements signed between the CSP and its customers. Current efforts from the CSA and ISO/IEC in this field are expected to bring some initial results by 2016.
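To make the idea of machine-checkable security parameters concrete, here is a small Python sketch that compares a customer’s minimum secSLA requirements with a provider’s declared values. The parameter names and figures are purely illustrative assumptions, not drawn from any standard or actual agreement.

```python
# Illustrative comparison of customer security-SLA requirements against a
# provider's declared values. Parameters and numbers are hypothetical.
customer_requirements = {
    "availability_percent": 99.9,          # minimum acceptable
    "incident_notification_hours": 24,     # maximum acceptable
    "encryption_at_rest": True,
}
provider_declaration = {
    "availability_percent": 99.95,
    "incident_notification_hours": 48,
    "encryption_at_rest": True,
}

def evaluate(requirements, declaration):
    """Return a list of gaps where the declaration fails to meet the requirements."""
    gaps = []
    if declaration["availability_percent"] < requirements["availability_percent"]:
        gaps.append("availability below required level")
    if declaration["incident_notification_hours"] > requirements["incident_notification_hours"]:
        gaps.append("incident notification window too long")
    if requirements["encryption_at_rest"] and not declaration["encryption_at_rest"]:
        gaps.append("no encryption at rest")
    return gaps

print(evaluate(customer_requirements, provider_declaration))
# ['incident notification window too long']
```

Expressing secSLA objectives in a structured form like this is what makes monitoring and enforcement (the second concern listed above) practical at all.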

This topic will be a key theme at this year’s EMEA Congress, taking place November 17-19 in Berlin, Germany, with a plenary panel on “Cloud Trust and Security Innovation” featuring Nathaly Rey, Head of Trust, Google for Work, as well as a track on secure SLAs led by Dr. Michaela Iorga, Senior Security Technical Lead for Cloud Computing, NIST.

To register for the EMEA Congress, visit: https://csacongress.org/event/emea-2015/#registration.

 

Four criteria for legal hold of electronically stored information (ESI)

September 9, 2015

By Chris Wheaton, Privacy and Compliance Counsel, Code42

The average enterprise sees its data double every 14 months — nearly one-third of which is stored on endpoints, such as laptops and mobile devices. This rapid growth in electronically stored information (ESI) creates new challenges and drives unplanned costs in the corporate litigation process. But while many companies have implemented a solution for preserving and producing ESI for litigation, many still worry that their processes will be judged insufficient, exposing them to sanctions that result in high monetary and reputation costs. Since 2005, sanctions for spoliation of evidence have increased nearly 300 percent. In one landmark case in 2015, sanctions totaled nearly $1 million for repeated negligence in the eDiscovery process.

While the eDiscovery space is clearly in an evolutionary phase, the judgments—which can be both subjective and relative—appear to be based on four main criteria:

  1. Duty to Preserve. This is the expectation that counsel begins preserving relevant data from the moment a reasonable expectation of litigation emerges. The precise moment is hard to pinpoint, but is often months—even years—ahead of an official filing of litigation. By taking a proactive approach, enterprises can ensure continuous collection of ESI, so that legal holds can be quickly issued, custodians immediately notified and data instantly preserved and protected.
  2. Scope. This is the expectation that you preserve, collect and produce any and all information pertinent to the litigation. It refers both to the subject matter of the content and to the type of data (email, internal files, social media, etc.). The impending changes to eDiscovery regulations aim to speed litigation and reduce costs by limiting frivolous information requests, but enterprises must still strike a balance in the information produced for and presented to the court. Submitting too little information can be perceived as a red flag: it gives the impression the organization is trying to conceal evidence and can lead to costly and time-consuming remedial information requests. Conversely, submitting too much information is also a risk. Requiring courts to parse excessive irrelevant data could be viewed unfavorably by a judge, and producing non-pertinent information could expose your organization to additional litigation and put more of your private data at risk.
  3. Chain of Custody. The issue of modern connectivity also creates a twist on an existing consideration—chain of custody. In addition to producing data, you typically must also provide a continuous record of data movement and custody—who created it, who edited it, where it was stored, how it moved from location to location, etc. This extends beyond the issuance of the legal hold: tracking the movement and custodians of data during eDiscovery is also critical to mitigating the risk of sanctions and privacy breaches (a minimal custody-record sketch follows this list).
  4. Data Management Philosophy – Tying It All Together. As the merit of your eDiscovery process is judged by the subjective quality of “reasonableness,” even a statement of intent, such as an official corporate data management policy or philosophy, lends credibility to your efforts. In the event that you are unable to preserve or produce a given piece of ESI, a judge may look to your data management policy to determine whether you failed despite good intentions, or failed as a result of a negligent data management philosophy.
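For illustration only, the Python sketch below shows one way a chain-of-custody record could be kept for a preserved file: each handling event captures who acted, when, and a content hash so later tampering is detectable. The field names, actors, and workflow are assumptions, not a description of any particular eDiscovery product.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative chain-of-custody record for a preserved item: each event stores
# who handled it, when, and a content hash. Field names are hypothetical.
def content_hash(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def record_event(chain, actor, action, data: bytes):
    """Append a custody event with an integrity hash of the item at that moment."""
    chain.append({
        "actor": actor,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "sha256": content_hash(data),
    })
    return chain

evidence = b"Q3 forecast spreadsheet contents"
chain = []
record_event(chain, "jdoe", "collected from laptop-042", evidence)
record_event(chain, "legal-hold-system", "copied to preservation store", evidence)
print(json.dumps(chain, indent=2))

# Any later hash that differs from the original indicates the item changed
# in an unrecorded or altered state.
```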

Organizations have been sanctioned for antiquated data management philosophies that fail to accommodate the modern realities of litigation involving ESI. “We delete all data after 90 days,” for example, is not likely to be considered a reasonable excuse for failing to produce relevant ESI. Instead, the stated philosophy should take a proactive stance, acknowledging the need for ongoing preservation and protection of data, preparing for immediate issuance of legal holds and notification of custodians, and comprehensively tracking the movement of all ESI.

With a solid, comprehensive data management philosophy guiding your efforts, you can create a foundation for a “reasonable” eDiscovery process. Meeting your duty to preserve, producing the right scope of ESI and thoroughly documenting the chain of custody will follow naturally from this overarching philosophy. Also, an effective data management philosophy makes it more likely that a judge—even one well-versed in “reasonable” eDiscovery and the expanding view of ESI—will view any and all of your eDiscovery actions in a “reasonable” light.

Info security: an eggshell defense or a layer cake strategy

September 2, 2015

By Susan Richardson, Manager/Content Strategy, Code42

Eggshell security describes a practice in which organizations depend on a traditional model of a “hardened outer layer of defenses and a network that is essentially wide open, once the attacker has made it past perimeter defenses.”

In an article published in The Register, a leading global online publication headquartered in London, Trevor Pott describes the four pillars of modern IT security as layers of protection, in lieu of a brittle and penetrable outer shell protecting the interior.

Eggshell computing is a fantastically stupid concept, Pott says, yet our entire industry is addicted to it. We focus on the “bad guys” battering down the WAN with port scans and spam. We ignore the insider threats from people downloading malware, being malicious or even just Oopsie McFumbleFingers YOLOing the delete key.

Prevention is only the first layer of security surrounding the network. It includes firewalls, patches, security access lists, two-factor authentication and other technology designed to prevent security compromises.

Detection is the second layer of defense. It includes real-time monitoring and periodic scanning for signs of a breach: intrusion detection systems, mail gateways that scan for credit card numbers moving through email, and auditing systems that scan logs all make up this layer.
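As a small illustration of this layer, the Python sketch below scans text (say, mail-gateway log lines) for digit strings that look like payment card numbers and pass the Luhn check. The sample messages and regular expression are assumptions for the example; real gateways use far more robust detectors.

```python
import re

# Illustrative detection-layer check: flag outbound text containing strings
# that look like card numbers and pass the Luhn checksum.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digits in `number` satisfy the Luhn checksum."""
    digits = [int(d) for d in number if d.isdigit()]
    checksum = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def find_card_numbers(text: str):
    return [m.group() for m in CARD_CANDIDATE.finditer(text) if luhn_valid(m.group())]

messages = [
    "Meeting moved to 3pm, room 1204",
    "Customer card on file: 4111 1111 1111 1111",  # well-known test number, passes Luhn
]
for msg in messages:
    hits = find_card_numbers(msg)
    if hits:
        print(f"Possible card number leaving the network: {hits}")
```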

Mitigation is the third layer. This is a series of practices in which the idea of compromise is accepted as part of doing business. Thus, an organization designs a network so that a compromise in one system will not result in a compromise of the entire network.

Because an incident is inevitable, incident response rounds out the layered security methodology.

Accepting that your network will inevitably be compromised, what do you do about it? How do you prevent a malware infection, an external malicious actor, or an internal threat from escalating its beachhead into a network-wide compromise?

The ability to respond to the inevitable by reloading from clean backups, learning via forensic analysis, and getting compromised systems back to work (thereby assuring business continuity) isn’t giving up the fight; it’s understanding that the enemy will penetrate (or is already inside) and that recovery is always within reach.
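One concrete piece of that response is making sure a backup really is clean before it is reloaded. The Python sketch below checks a backup file against a previously recorded checksum before allowing a restore; the file names, sample contents, and restore step are hypothetical placeholders.

```python
import hashlib
import tempfile
from pathlib import Path

# Illustrative sketch: verify a backup against a previously recorded checksum
# before restoring it, so a tampered or infected copy is never reloaded.
def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def safe_to_restore(path: Path, known_good: dict) -> bool:
    expected = known_good.get(path.name)
    if expected is None:
        print(f"{path.name}: no recorded checksum, refusing to restore")
        return False
    if sha256_of(path) != expected:
        print(f"{path.name}: checksum mismatch, possible tampering")
        return False
    print(f"{path.name}: checksum verified, safe to restore")
    return True

if __name__ == "__main__":
    # Simulate a backup taken earlier and its recorded checksum.
    backup = Path(tempfile.gettempdir()) / "finance.db.bak"
    backup.write_bytes(b"clean backup contents")
    known_good = {backup.name: sha256_of(backup)}

    safe_to_restore(backup, known_good)       # verified
    backup.write_bytes(b"tampered contents")  # simulate compromise after backup
    safe_to_restore(backup, known_good)       # mismatch detected
```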