Little Bits of Security – Micro-Segmentation in Clouds

June 27, 2016

By Darren Pulsipher, Enterprise Solution Architect, Intel Corp.

Cloud environments have made some things much easier for development teams and IT organizations. Self-service portals have cut down the amount of “hands on” intervention to spin up new environments for new products. Provisioning of new infrastructure has moved from weeks or days to minutes. One thing that barely changed with this transformation is security. But new techniques and tools are starting to emerge that are moving security to the next level in the Cloud. One of these technologies is called micro-segmentation.

Traditional datacenter security
To understand micro-segmentation, let’s first look at current datacenter security philosophy. Most security experts focus on creating a hardened outer shell around the datacenter. Nothing gets in or out without logging it, encrypting it, and locking it down. Firewall rules slow malicious hackers trying to break into the datacenter. And with more and more devices connecting to the datacenter, security experts are looking for ways to secure, control, and authenticate all of them.

Inside the datacenter, security measures are put into place to make sure that applications do not introduce security holes. Audit logs and incident alerts are analyzed to detect intrusions—notifying security analysts to lock things down. Security policies and procedures are created to try to mitigate human error and protect vital data. All of this creates a virtual fortress with multiple layers of protection against a myriad of attacks.

Micro-segmentation adds a hardened inner shell
Wouldn’t it be nice if you could create a hardened shell around each one of the applications or services within your datacenter, opening access to each application only through its own firewalls and segmented networks? That would make your security even more robust. If your outer datacenter security walls were breached, hackers would uncover a set of additional security walls—one for each service or application in your IT infrastructure. The best way to envision this is to think about a bank with safety deposit boxes inside the safe. Even if you broke into the safe there is nothing to take—just a set of secure boxes that also need to be cracked.

One of the benefits of this approach is that when someone hacks into your datacenter, they gain access to at most one application—and they need to breach each application one by one. This extra layer of protection gives security experts a very powerful tool to slow down hackers wreaking havoc on your infrastructure. The downside is that it takes time and resources to set up segmented networks, firewalls, and security policies.

Does SDI (Software-Defined Infrastructure) increase risk or improve security?
Now I want you to imagine that you have given developers or line of business users the ability to create infrastructure through a self-service portal. Does that scare you? How are you going to enforce your security practices? How do you make sure that new applications are not exposing your whole datacenter to poorly architected solutions? Have you actually increased the attack surface of your datacenter? All of these questions keep security professionals up at night. So, shouldn’t a good security officer be fighting against SDI and self-service clouds?

Not so fast. There are some great benefits to SDI. First off, you can programmatically provision infrastructure (storage, compute, and, yes, network elements). This last one, software-defined networking, gives you flexibility around security that you might not have had in the past. You can create security policies, enforced through software and templates, that increase security around both your applications and the datacenter’s outer shell.

Software-defined infrastructure enabling micro-segmentation
Now take the benefits of both SDI and micro-segmentation. Imagine that you put together templates and/or scripts that create a segmented network, set up firewall rules and routers, and manage SSH keys for each application that is launched. Now when a user creates a new application or set of applications, a micro-segmented “hardened shell” is created around it. So even if your application developers are not following good security practices, you are only exposed for that one application.

The beginnings of micro-segmentation are available in some form from all of the major SDI platforms. The most basic and prevalent capability across these platforms is the ability to provision a network, router, and firewall in your virtual infrastructure, through both templates and programmable APIs. So there is some work that needs to be done by the security teams, and enforcing the use of these templates is always a battle. The key is to make them easy to consume.
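
As a rough illustration of what those template-driven and programmable APIs make possible, here is a minimal Python sketch that provisions a dedicated security group for a single application, admitting only the application's own port plus SSH from an admin network. It uses boto3 against AWS purely as an example platform; the VPC ID, names, ports, and CIDR ranges are placeholder assumptions, not a prescription for any particular SDI product.

# Hypothetical sketch of per-application micro-segmentation using boto3.
# Assumes AWS credentials are configured; the VPC ID, names, ports, and
# CIDR ranges below are placeholders for illustration only.
import boto3

ec2 = boto3.client("ec2")

def create_app_segment(vpc_id, app_name, app_port, admin_cidr):
    """Create a dedicated security group that admits only the app's own
    port and SSH from an admin network: a small hardened inner shell."""
    sg = ec2.create_security_group(
        GroupName=f"{app_name}-segment",
        Description=f"Micro-segment for {app_name}",
        VpcId=vpc_id,
    )
    sg_id = sg["GroupId"]
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[
            {   # application traffic, only on its published port
                "IpProtocol": "tcp", "FromPort": app_port, "ToPort": app_port,
                "IpRanges": [{"CidrIp": "10.0.0.0/8", "Description": "internal clients"}],
            },
            {   # SSH restricted to the admin network
                "IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
                "IpRanges": [{"CidrIp": admin_cidr, "Description": "admin access"}],
            },
        ],
    )
    return sg_id

# Example: every application launch calls this once, so a compromised
# application exposes only its own segment.
# create_app_segment("vpc-0123456789abcdef0", "billing-api", 8443, "192.0.2.0/24")

Wrapping a call like this into the self-service templates your users already consume is what makes the hardened inner shell automatic rather than optional.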

Don’t ignore the details
One thing SDI does bring to your infrastructure is the rapid propagation of policies and tools, good or bad. If you make something easy to use, people will use it. Pay attention to the details. Set up the right policies and procedures and then leverage SDI to implement them. Don’t be like the banker who writes the combination to the safe on a piece of paper, tapes it to the top of their desk, and then photocopies it and shares it with everyone in the office.

SDI can make micro-segmentation a viable tool in the security professional’s toolkit. Just like any tool, make sure you have established the processes and procedures before you propagate them to a large user community. Otherwise, you are just making yourself more exposed.

Verizon DBIR Says You Can’t Stop the Storm—But You Can See It Coming

June 22, 2016

By Susan Richardson, Manager/Content Strategy, Code42

The 2016 Verizon Data Breach Investigations Report (DBIR) paints a grim picture of the unavoidable enterprise data breach. But accepting the inevitability of breaches doesn’t mean accepting defeat. It’s like severe weather: you can’t prevent a tornado or hurricane. But with the right visibility tools, you can recognize patterns and mitigate your risk.

Likewise with data security, visibility is critical. “You cannot effectively protect your data if you do not know where it resides,” says Verizon.

Most enterprises plagued by poor data visibility
The report shows that most organizations lack the data visibility tools for effective breach remediation. Hackers gain access more easily than ever, with 93 percent of attacks taking just minutes to compromise the enterprise ecosystem. Yet without the ability to see what’s happening on endpoint devices, 4 in 5 victimized organizations don’t catch a breach for weeks—or longer.

Here’s a look at how data visibility solves many of the major threats highlighted in the 2016 DBIR:

Phishing: See when users take the bait
The report showed users are more likely than ever to fall for phishing. One in ten users clicks the link; only three percent report the attack. Instead of waiting for the signs of an attack to emerge, IT needs the endpoint visibility to know what users are doing—what they’re clicking, what they’re installing, and whether sensitive data is suspiciously flowing outside the enterprise network. The “human element” is impossible to fix, but visibility lets you “keep your eye on the ball,” as Verizon put it, catching phishing attacks before they penetrate the enterprise.

Malware and ransomware: Encryption + endpoint backup
With laptops the most common vector for the growing threats of malware and ransomware, Verizon stresses that “protecting the endpoint is critical.” The report urges making full-disk encryption (FDE) “part of the standard build” to gain assurance that your data is protected if a laptop falls into the wrong hands. Continuous endpoint backup is the natural complement to FDE. If a device is lost or stolen, IT immediately has visibility into what sensitive data lived on that device, and can quickly restore files and enable the user to resume productivity. Plus, in the case of ransomware, guaranteed backup ensures that you never truly lose your files—and you never pay the ransom.

Privilege abuse: “Monitor the heck” out of users
Authorized users using their credentials for illegitimate purposes “are among the most difficult to detect.” There’s no suspicious phishing email. No failed login attempts. No signs of a hack. And for most organizations, no way of knowing a breach has occurred until the nefarious user and your sensitive data are long gone. Unless, of course, you have complete visibility into the endpoint activities of your users. Verizon urges enterprises to “monitor the heck out of authorized daily activity,” so you can see when a legitimate user is breaking from their usual pattern and exfiltrating sensitive data.

Forensics: Skip the hard part for big cost savings
The most costly part of most enterprise data breaches—accounting for half of the average total cost—involves figuring out what data was compromised, tracking down copies of files for examination, and performing other forensic tasks required for breach reporting and remediation. Most often, an organization must bring in legal and forensic consultants—at a steep price. If you have complete visibility of all enterprise data to begin with, including endpoint data, you can skip much of the hard work in the forensics phase. With continuous, guaranteed backup, all your files are already securely stored and easily searchable. Modern endpoint backup solutions go a step further, offering robust forensic tools that make breach remediation, forensics and reporting cost-effective without eating up all of IT’s time or requiring an expensive ongoing consultant engagement.

See your data, understand your patterns, mitigate your risk
The whole point of the DBIR is to shed light on data to see the patterns and trends in enterprise data security incidents—to mitigate risk through greater visibility. So read the report. Understand the common threats. But make sure you apply this same methodology to your own organization. With the right data visibility tools in place, you can see your own patterns and trends, learn your own lessons, and fight back against the inevitable data breach.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

Why You Need a Multi-Layer Approach to Public Cloud Security

June 20, 2016

By Scott Montgomery, Vice President & Chief Technical Strategist, Intel Security Group

Would you hand your house keys to a total stranger and then go away on vacation for two weeks? Probably not, but that’s precisely what some businesses do when they move applications and data to the public cloud.

Security has long been the principal fear that weighs on cloud investments. While perceptions are improving, Intel Security’s recent State of Cloud Adoption study found that data breaches remain the biggest concern of companies deploying Software as a Service (SaaS), Infrastructure as a Service (IaaS), and even private cloud models. A 2015 survey by Crowd Research Partners found that nine in 10 security professionals worry about cloud security.

These concerns, however, are not stopping enterprises from investing in the cloud. While the Intel Security survey shows that confidence in cloud security is increasing, only one-third of respondents believe their senior executives understand the security risks.

Investments in cloud security should be commensurate with the level of migration to cloud services. But budgeting for security in the public cloud is distinctly different from planning for on-premises protection. One fundamental shift is that cloud providers use a “shared responsibility model” that spreads risks between vendor and customer. Another difference: customers don’t buy the same mix of products and equipment to secure the cloud that they do in the data center.

Budgeting for security in the public cloud begins by considering which applications and infrastructure components will live there. Some, like website hosting and document serving, are relatively low risk and don’t demand the most stringent safeguards. Also consider the consumption models you’ll use. SaaS providers generally assume responsibility for security at the application and system levels. However, IaaS providers tend to cede those responsibilities to the customer. What’s more, no public cloud provider is likely to assume responsibility for user access and data protection, although there are measures they can take to support your own efforts.

There are three levels of security to consider as you build out your public cloud strategy:

System-level security for IaaS
This is secured plumbing: system-level components such as operating systems, networks, virtual machines, management utilities and containers. Here, you want to invest in cloud providers that make it easy for you to keep your systems current with the latest patches and updates. The service provider should also give you thorough visibility into your cloud footprint so you can see every instance that is running. One of the challenges of public cloud is that it’s so convenient to spin up new VMs and containers that you may forget to shut them down later. These so-called “zombies” are latent security threats because they present potential attack vectors into more business- or mission-critical systems.
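
As a hedged sketch of what that visibility can look like in practice, the Python snippet below (using boto3 against AWS as one example provider) lists running instances and flags long-running ones with no owner tag as zombie candidates. The tag name and the 30-day threshold are assumptions for illustration, not a recommendation from any provider.

# Sketch: flag potential "zombie" instances (long-running VMs with no owner tag).
# The "Owner" tag and 30-day threshold are illustrative assumptions.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
MAX_AGE = timedelta(days=30)

def find_zombie_candidates():
    candidates = []
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
                age = datetime.now(timezone.utc) - inst["LaunchTime"]
                # No owner tag and running for a long time: likely forgotten.
                if "Owner" not in tags and age > MAX_AGE:
                    candidates.append((inst["InstanceId"], age.days))
    return candidates

if __name__ == "__main__":
    for instance_id, days in find_zombie_candidates():
        print(f"{instance_id}: running {days} days with no Owner tag")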

If you plan to use containers, as a growing number of enterprises are, be diligent about the level of security protection they offer. The market for containers is still immature, and security – while improving – is considered one of the technology’s weakest areas.

Remember, you are responsible for system-level security in your Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) instances. Integrating these security controls and their reporting with your on-premises systems will create efficiencies. Be sure to include the appropriate controls for the type of server employed. These may include tools such as intrusion prevention, application control, advanced antimalware solutions and threat detection. They should all be centrally managed for visibility and compliance, in addition to policy and threat intelligence sharing with your on-premises infrastructure.

Application-level security
This level is primarily about identity and access management. Your best investment here isn’t financial; it’s a policy that limits the ability of users to deploy cloud applications without IT’s knowledge.

After ensuring policies are in place that offer IT visibility, the next step is to invest in multifactor authentication and identity management. The first approach uses two or more devices or applications to permit access. For example, a verification code can be sent to a phone or email address to ensure that a stolen password isn’t a critical failure point.
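
For illustration only, here is one common variant of that second factor: a time-based one-time password (TOTP) from an authenticator app, sketched with the third-party pyotp library. Secret storage and the surrounding login flow are simplified assumptions, not a description of any specific product.

# Illustrative TOTP second-factor check using the third-party pyotp library.
# Secret handling is simplified; in practice the secret is enrolled once per
# user and stored securely server-side.
import pyotp

user_totp_secret = pyotp.random_base32()   # also shared with the user's authenticator app

def second_factor_ok(submitted_code: str) -> bool:
    """Return True only if the submitted code matches the current TOTP window."""
    return pyotp.TOTP(user_totp_secret).verify(submitted_code)

# A login flow would first verify the password, then require second_factor_ok()
# before issuing a session, so a stolen password alone is not enough.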

Identity management locks down application access by requiring users to authenticate through a secure resource such as LDAP or Active Directory. If your organization already uses a directory, consider investing in cloud brokering software that supports single sign-on so that users can authenticate to all their cloud services through their local directory. This gives IT complete visibility and shifts access control from the cloud service to your own IT organization. Consider also investing in a secure VPN tunnel so sessions are never exposed to the public Internet.

Data-level security
This level of protection involves securing the data itself. No cloud provider will take responsibility for your data, but there are solutions you can purchase to help.

Many cloud providers, for example, offer encryption as a standard option, but you may be surprised at how many do not, or encrypt data only part of the time. Anything less than 256-bit encryption is considered inadequate these days.

More important is that you have full control of the encryption keys. If a cloud provider insists on owning them, you have no guarantees that your data will be safe. Seek another provider.

In addition, make sure your data is unencrypted only when in use. Some providers require that data be transmitted to their facilities in plain-text format. That’s a security risk.
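
A minimal sketch of what keeping control of your own keys can look like, assuming a client-side model: data is encrypted with AES-256-GCM before it leaves for the cloud and decrypted only when it is actually in use. It uses Python's cryptography library; key storage and rotation are deliberately out of scope.

# Minimal client-side encryption sketch: the provider only ever sees ciphertext,
# and the 256-bit key never leaves your control. Key storage/rotation omitted.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_upload(plaintext: bytes, key: bytes) -> bytes:
    """Return nonce + ciphertext; upload this blob instead of the plaintext."""
    nonce = os.urandom(12)                      # unique per message
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def decrypt_after_download(blob: bytes, key: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

if __name__ == "__main__":
    key = AESGCM.generate_key(bit_length=256)   # stays with you, never with the provider
    blob = encrypt_for_upload(b"customer records", key)
    assert decrypt_after_download(blob, key) == b"customer records"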

As noted in the Cloud Security Primer, none of these levels should be secured in isolation. Cloud security, the primer states, is “an end-to-end challenge whereby the solutions must be built into the overall IT environment and not tacked on as an afterthought.”

Whatever cloud provider you adopt, make sure their security guarantees are spelled out in their contract and SLA. A good contract should spell out exactly what procedures will be employed, along with any penalties the provider will face for non-compliance, how they will report on it, and how you can audit to ensure your contractual terms are being met. A strong SLA ensures that you don’t simply toss the keys to your cloud provider as you’re walking out the door.

Confident Endpoint Visibility Responds to Modern Data Protection Problems

June 17, 2016

By Joe Payne, President and CEO, Code42

Consumer tech adoption has outpaced tech evolution in business for more than ten years. SaaS and cloud solutions, new apps and devices are at the disposal of empowered workers, making it very easy for employees to get what they need to work anywhere or—despite policies forbidding it—take career-making IP as they exit one company for the next. Legacy backup can neither unlock nor disarm these threats.

At the same time, data has become the new currency: cyber-crime syndicates have boomed with new variations on stealing or disabling data, particularly spear phishing and ransomware targeted at employees. As for breach, the headlines say it’s not a matter of if. It’s when. Legacy backup, long rejected by workers, simply cannot address these threats.

Finally, encrypted data moving through the network has made the intelligence it houses opaque—even to its stewards. A CISO recently shared with us that more than 75% of his network traffic is encrypted, making it nearly impossible to identify the threats facing his organization.

While it’s safe to say encryption is a must, it also means the focus of security must shift to the endpoints to mitigate risk and regain control.

Modern endpoint backup sees what you can’t
Modern endpoint backup gives IT and InfoSec the ability to see, monitor movement of and recover data housed on every employee device.

It neutralizes the threat of ransomware by making up-to-the-minute data recovery simple and fast. It decreases the cost of litigation by leveraging a complete dataset for legal holds, and it supports rapid response and remediation of breach via data attribution—with or without the device. From a productivity perspective, modern endpoint backup makes everyday challenges like data migration a lighter lift for IT and end users.

In response to modern data security problems, more than 39,000 businesses—including ten of the most recognized brands in the world, 7 of the top 10 technology brands, and 7 of the 8 Ivy League schools—have adopted Code42 to regain visibility and mitigate risk.

In 2008, Code42 launched its enterprise endpoint backup software—knowing it was time for backup to catch up. Now approaching its sixth-generation platform, Code42 provides visibility of all the data through a single console and the real-time recovery and security tools the enterprise needs to be more resilient, more accountable, and more defensible.

Modern endpoint backup imparts the right to “Be Certain” in the face of modern data protection and security problems. We invite you to find out how.

More Than One-Fourth of Malware Files “Shared”

June 15, 2016

By Krishna Narayanaswamy, Chief Scientist, Netskope

Last week, Netskope released its global Cloud Report as well as its Europe, Middle East and Africa version highlighting cloud activity from January through March of 2016. Each quarter we report on aggregated, anonymized findings such as top used apps, top activities, top policy violations, and other cloud security findings from across our customers using the Netskope Active Platform, including by industry.

This report picks up where we left off last quarter with our cloud malware research, in which we found that 4.1 percent of enterprises had at least one sanctioned cloud app laced with malware. This quarter that number has risen to 11.0 percent, nearly triple last quarter’s figure. This is before counting unsanctioned apps, which we are researching and will incorporate into future reports. When we do, we expect these numbers to increase dramatically. Beyond sharing the volume of detections, this quarter’s report breaks down that malware into the following observed categories, several of which are known to be used to distribute or propagate ransomware:

  1. JavaScript exploits and droppers
  2. MS Office macros
  3. Backdoors
  4. Mobile malware
  5. Spy- and Adware
  6. Mac malware

We also rated discovered malware in terms of its severity based on the extent to which it affects user privacy and computer security and causes damage to files, computers, or networks. 73.5 percent of detected malware this quarter ranks “high” in terms of severity, with 8.3 percent “medium,” and 18.2 percent “low.”

Perhaps the most shocking finding is that 26.2 percent of discovered malware files had been shared, either internally (with one or more people inside the organization), externally (with one or more people outside the organization), or publicly (with a publicly accessible link). Sync and share, two important capabilities that characterize the cloud, are liabilities when it comes to malware because malware can use them to propagate rapidly between users and devices. That is why we dubbed this issue the cloud malware fan-out effect.

What do we recommend to combat the fan-out? Five things:

  1. Back up versions of your critical content in the cloud. Enable your app’s “trash” feature and set the default purge to a week or more. This is one of your best bets for preserving your data should you become infected with data destructing malware such as ransomware.
  2. Use your CASB to scan for and remediate cloud malware in your sanctioned apps. Make sure to check for infected users through sync and share. Integrate your CASB with, and share detections across, your existing security infrastructure such as your sandbox and endpoint detection and response (EDR) so you can stop malware wherever it’s propagating in your environment.
  3. Detect malware incoming via sanctioned and unsanctioned apps.
  4. Detect anomalies in your sanctioned and unsanctioned cloud apps, such as unusual file upload activity or other out-of-the-norm behaviors (a rough sketch of one approach follows this list).
  5. Monitor uploads to sanctioned and unsanctioned cloud apps for sensitive data, which can indicate exfiltration in which malware is communicating with a cloud-based command and control server.
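
The sketch below illustrates the anomaly idea from item 4 in the simplest possible terms: compare each user’s upload count for the day to their own recent baseline and flag large spikes. It is not a product feature, and the history length and threshold are illustrative assumptions.

# Rough sketch of upload-anomaly detection: flag users whose upload count today
# is far above their own recent baseline. Thresholds are illustrative only.
from statistics import mean, stdev

def unusual_uploaders(history_by_user, today_by_user, min_history=7, sigma=3.0):
    """history_by_user: {user: [daily upload counts for prior days]}
    today_by_user: {user: upload count for the current day}
    Returns the users whose activity today looks anomalous."""
    flagged = []
    for user, history in history_by_user.items():
        if len(history) < min_history:
            continue                      # not enough history to judge
        baseline, spread = mean(history), stdev(history)
        threshold = baseline + sigma * max(spread, 1.0)
        if today_by_user.get(user, 0) > threshold:
            flagged.append(user)
    return flagged

# Example: a user who normally uploads around 5 files a day suddenly uploads 80.
print(unusual_uploaders({"alice": [4, 6, 5, 5, 7, 4, 6]}, {"alice": 80}))  # ['alice']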

Securing the Hybrid Cloud: What Skills Do You Need?

June 14, 2016

By Brian Dye, Corporate Vice President & General Manager/Corporate Products, Intel Security Group

With enterprises moving to hybrid cloud environments, IT architectures are increasingly spread among on-premises infrastructure and public and private cloud platforms. Hybrid models offer many well-documented benefits, but they also introduce more complexity for securing data and applications across the enterprise. And this added complexity requires an increasingly diverse skill set for security teams.

That’s a challenge, considering the growing cybersecurity skills shortage. In one recent study, 46% of organizations said they have a “problematic shortage” of cybersecurity skills – up from 28% just a year ago. One-third of those respondents said their biggest gap was with cloud security specialists.

Modern security teams require a broad and deep mix of technology skills, ranging from twists on traditional network and OS technology all the way to security on data itself, to address a rapidly evolving threat landscape. But they also need “softer” expertise, such as knowledge of compliance regulations and vendor-management skills. Driving this dual focus is the public cloud’s “shared responsibility model,” in which service providers and enterprises divvy up various levels of protection across the IT stack. These responsibilities – and the requisite skills – vary depending on the type of public cloud service.

Security Skills
Certain skills are required across all uses of public cloud. For example, you’ll need in-house expertise with encryption and data loss prevention controls for content-rich cloud applications. Your IT teams need to know (and track) where your enterprise data resides in the cloud, what data protection capabilities your cloud service providers offer, and most importantly, how to integrate data protection policies in the cloud with your own company policies. On a similar note, your team will need sophisticated identity and access management (IAM) and multifactor authentication, including tokenization, regardless of whether you’re deploying SaaS, PaaS, IaaS, or a combination of those services.

For SaaS, your security team needs to be familiar with the various applications in use and how to use logging and monitoring tools to detect security violations and alert appropriate IT staff. Post-incident analysis is a critically important skill for mitigating active threats and improving your security posture against future threats.

For PaaS deployments, you will also need to add skills to ensure that native cloud applications are being developed with security built in at the API level. Adoption of open security APIs can help to bridge the gaps among proprietary cloud environments.

For IaaS environments, the ability to provision software-defined infrastructure carries the need for highly technical security professionals who can create policies for server, storage, and network security on AWS or other platforms. These skills include the ability to monitor usage of compute, storage, networking, and database services, as well as the ability to manage security incidents identified in the cloud platform you’re using.

Audit and Compliance Skills
Many of the softer skills needed for cloud success stem from the need for organizations to gain more visibility into hybrid environments that are becoming more complex as SaaS, PaaS, and IaaS services are cobbled together with one another and with private clouds.

“The challenge has never been about security, but about transparency,” wrote Raj Samani, our Chief Technology Officer here at Intel Security’s Europe, Middle East and Africa division, in a recent blog post. To gain visibility into the security posture of a third-party provider, IT teams should at a minimum secure audit rights to examine the provider’s practices and ensure the proper certifications are in place.

Audit rights can be built into a service level agreement (SLA) as a way to make sure the provider complies with corporate security policies and industry or government regulations. This is one reason why the ability to develop comprehensive SLAs with service providers is an increasingly important skill. IT and security teams will need to work together to negotiate terms that provide maximum protection and visibility into third-party services, to ensure that data, applications, and other components of your cloud environment are secure and compliant.

In addition to formal audits, security professionals require skills (and tools) for continuously monitoring compliance and risk across SaaS, PaaS, and IaaS deployments in two key areas: threats and applications. Starting with threats, achieving (or maintaining) visibility into specific threats across these environments is critical so your organization has a full view of attacks. That visibility needs to extend across endpoint, infrastructure, and network elements in order to recognize and respond to coordinated, multi-angle attacks.

Second, in application security, experience with cloud access security brokers (CASBs) will help security professionals increase visibility into user behavior and user needs across public cloud service providers.

That said, we see convergence between the need for application visibility, threat visibility, and data security for SaaS applications, so look for skills that bridge those three areas as you build an organization for the future. The same need for a blended skill set will increasingly be true as threat and application needs converge.

Organizations in highly regulated industries also need to devote resources to tracking how third-party providers handle data and applications to ensure compliance with industry-specific regulations. The same goes for global players: Requirements around data storage can vary dramatically by country, requiring in-depth knowledge of local regulations regarding where data resides and how it is transmitted for any geography in which you do business.

Skills for Hybrid: the New Private Cloud
Security practices for a private cloud deployment—which enables enterprises to keep data and applications under their control—would seem to be more traditional than those for public deployments. But the virtualization technology inherent in the private cloud model creates a need for new security skills beyond those for traditional on-premises environments. The first is understanding the difference in the infrastructure itself, for example between a traditional virtual machine and a framework like OpenStack.

Second, as organizations explore software defined networking (SDN), they see a need for more automation skills, as security policy must co-exist with the orchestration to fully exploit an SDN environment.

Third, the security operations center will need more network insight as the east-west traffic becomes more material to threat analysis.

These skills become especially important as virtualization expands beyond servers and into networks and storage.

That said, most private clouds are truly hybrid clouds—and these will be the default moving forward. Hybrid clouds demand cross-domain threat visibility, along with the skills across the various cloud types to prioritize and respond to those threats. This requires not only a broader level of technical depth but also more cross-team facilitation and leadership to analyze and respond to critical threats. Revisiting the soft skills points made earlier, this also includes leadership not just within the organization but across the set of SaaS providers relevant to a given situation.

The Bottom Line on Cloud Skills
The takeaway for security leaders: it’s time to optimize the skills of your team for the different types of cloud. Public cloud security, spanning SaaS, PaaS, and IaaS environments, (a) is more about policy, audit, analysis, and teamwork skills than pure technical depth, and (b) will include more cross-domain skills than the more siloed on-premises structure requires. Creating the proper mix of skillsets for all of these scenarios will help build your confidence as you build out your hybrid cloud model.

Here are some tips for training – and retaining – good cloud security employees.

Leaky End Users Star in DBIR 2016

June 10, 2016

By Susan Richardson, Manager/Content Strategy, Code42

Insider threat once again tops the list of enterprise cyber security threats in the 2016 Verizon Data Breach Investigations Report (DBIR). For the second straight year, Verizon research showed that the average enterprise is less likely to have its data stolen than to have an end user give away sensitive credentials and data—whether unintentionally or maliciously.

From insecure storage, transfer or disposal of sensitive information, to lost or stolen endpoint devices, to intentional data theft and privilege abuse, to simply entering the wrong recipient name in the email address field, the vast majority of breaches can be traced back to end users. “Our findings boil down to one common theme,” said Verizon Enterprise Solutions Executive Director of Global Services Bryan Sartin, “the human element.”

Overall, 2015 trends persist in 2016
The 2016 DBIR pulls trends and insights from more than 100,000 incidents—and 3,141 confirmed data breaches—across 82 countries. Is there anything groundbreaking in this year’s DBIR? Nope. Verizon reports “no drastic shifts” and no “show-stopping talking point.” For the most part, last year’s trends and patterns continued. But to “strike a deceased equine” (as Verizon put it), these persistent trends bear reviewing.

Phishing still works—end users are more likely than ever to click the link
The 2016 DBIR found hackers increasingly targeting devices and people instead of servers and networks, with phishing attacks growing from less than 10 percent of all attacks in 2009 to more than 20 percent in 2015. Why? Because people are more likely than ever to “click the link.” Verizon says 12 percent of people tested will click on a phishing attachment—up from 11 percent in 2014. Also of note: the same study found that only three percent of users who receive a phishing email report the attempted attack. The IT department is stuck between a rock and a hard place: more people fall for the scam, and no one gives IT a heads-up.

Privilege abuse is still a top insider threat—with an emerging twist
Traditional privilege abuse involves an internal user stealing or corrupting sensitive data—whether for personal gain or in collusion with an external actor. Verizon noted an emerging twist: external parties with legitimate access credentials (a customer or vendor, for example) colluding with another external actor. Verizon also showed that insider threat detection is extremely difficult in cases of privilege abuse, with most incidents taking months for the enterprise to discover. This year, privilege abuse was the top defined category of cyber security threats, second only to the catchall category of “Miscellaneous Errors.”

Something new: the three-pronged attack
Cybercriminals aren’t just getting smarter—they’re growing more patient. Verizon highlighted what it called the “new three-pronged attack”:

  1. Phishing email lures user to malicious link or attachment.
  2. Clicking the link installs malware that targets a user’s various digital access credentials. Sophisticated malware can even compromise other users’ credentials through this one entry point.
  3. Those credentials are later used in other attacks.

The first challenge here is tracing the subsequent attack back to the initially-targeted user and the original phishing email. The second is figuring out just how deep the attack went—which credentials were compromised and which data may have been exposed or stolen. Playing the “long con” gives cybercriminals a chance to slowly, silently extend the reach of the breach, with users and IT unaware.

Biggest cost: tracking down data during breach recovery
With sophisticated attacks leveraging insider credentials to go deeper and broader, it’s no surprise that the biggest cost of an enterprise data breach comes from the daunting task of forensic analysis. Figuring out what data was compromised, and tracking down copies of the files, puts an enormous strain on IT resources, and accounts for nearly 50 percent of the average total cost of an enterprise data breach.

TL;DR—Breaches are inevitable; data visibility is key
The DBIR is great reading (really—you’re guaranteed a laugh or two), but it’s 85 pages long. Here’s the quick-and-dirty:

  • “No locale, industry or organization is bulletproof.” In other words, breaches are inevitable.
  • Know your biggest threats. Take five minutes to check out the tables on pages 24 and 25, showing incident patterns by industry.
  • “You cannot effectively protect your data if you do not know where it resides.” Breach remediation is crucial. Data visibility is key.

Next, we’ll tackle this last point—why data visibility is essential to effective breach remediation, and how an enterprise can enhance data visibility.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

Filling the Cloud Security IT Skills Gap… and Preventing Attrition

June 8, 2016

By Brian Dye, Corporate Vice President & General Manager/Corporate Products, Intel Security Group

With all the various cloud services being offered in multiple deployment options, coupled with the 500,000 new security threats discovered daily, the strain on IT staff has never been greater. Neither has the need to retain cyber-security pros versed in all the cloud specifics. Unfortunately, competition for those professionals is also at an all-time high.

More than 209,000 cybersecurity jobs are unfilled in the U.S., and the number of postings has jumped 74% over the past five years, according to Peninsula Press, a project of the Stanford University journalism program. Demand is expected to grow by another 53% through 2018. And as IT evolves, the skillsets must evolve too – meaning this shortage is only going to get worse.

“If the predictions are even partially true, we’ll be in a world of hurt in our industry if we don’t act now” to train the next generation of cyber security experts, said Christopher D. Young, Senior Vice President and General Manager, Intel Security Group, in a March 2016 RSA Conference keynote.

Cloud computing, in particular, presents a host of new security challenges to IT organizations, such as protecting data, facilitating encryption and security protocols across multiple cloud providers, and negotiating service level agreements (SLAs) that ensure security and compliance. Never before has your cloud security team been more important. Here are a few techniques CISOs can consider.

Bite the bullet on cost. If you want skilled professionals, you have to pay for them. While there is little information available on pay rates for cloud-specific security skills, lead software security engineers earn an average of more than $233,000 annually, according to Dice.com. This makes them the highest-paid line staff in the IT profession. But consider their value. It’s estimated that the average consolidated cost of a data breach is $3.8 million. And that doesn’t account for the massive reputational damage that can accompany such attacks. Funding of course is always a challenge, but investing in automation can both pay for the premium talent you need and ensure they are focused on your hardest problems.

Define career paths. Pay is overestimated as a factor in job satisfaction among knowledge professionals, and security is no exception. In fact, nearly 30% of respondents to the SANS Institute’s 2014 Cybersecurity Professional Trends report listed career advancement as their main goal in pursuing a new position, edging out compensation.

This is where the cloud presents opportunities. With cloud security standards still being defined, your security pros can take on new and critical roles in creating strategies and governance standards for your organization. Your training investments in this area will pay off for your organization as well as your people.

Cloud security will also open up new career paths, and creating well-defined career paths is a good retention strategy in any field. Cloud offerings and configurations are changing so rapidly that ambitious pros should find plenty of opportunity to grow.

Use the cloud to vary the responsibilities of your team by offering assignments in emerging specialty fields like software-defined data center security, hybrid cloud authentication, shadow IT identification, mobile device management, and threat-detection analytics. There are even new certifications, like Certificate of Cloud Security Knowledge and Certified Cloud Security Professional, that offer additional room for growth.

Optimize the skills of your team for the different types of cloud. For example, security for public cloud infrastructure requires a highly technical security professional who brings security knowledge and business context to that public cloud infrastructure (likely with incremental training). By contrast, IT security for SaaS requires policy and SLA audit and analysis skills more than technical depth.

Develop Cloud SMEs. Here’s another tactic: assign individuals to become cloud subject matter experts (SMEs). For example, identify a talented pro to become your IaaS SLA expert, then have him or her brief your leadership on your strategy and/or the steps you’re taking. If you have a chance to present the report to senior executives, who are increasingly putting security front and center, it’s a great way to recognize the contributions of a talented staffer (not to mention stress the investment needed).

Encourage collaboration. Security is the most collaborative of all IT professions, with experts freely sharing new discoveries and prevention tactics. Sponsor your best staff to represent your company on committees and local networking groups and to attend and present at conferences. Yes, there’s a risk they’ll be hired away, but your willingness to invest in their visibility is a powerful argument in your favor. In most cases, cloud security requires collaboration with 3rd-party cloud service providers, especially when drafting your SLA – who better to help contribute to the conversation?

Provide training opportunities. The risks that dominate the cyber security field change continually. Investing in skills development isn’t a “nice to have.” Your best people should be selected for the best training programs. While you might be enhancing their marketability, the more important issue is that you’re protecting your company.

Don’t forget diversity. We are seeing the value of driving a diverse workforce, and security is no exception. Given the talent shortage it can at times feel like a luxury to consider diversity, but a more balanced organization will operate more effectively and increase the overall productivity of your team.

Retaining good cloud security employees may not be easy—but the consequences of not doing so are worse. We have the hard challenge of securing our organizations, and need the best resources possible to do so.

Five Telltale Signs You Don’t Have the Latest Backup System

June 2, 2016

By Susan Richardson, Manager/Content Strategy, Code42

It’s Backup Awareness Month—time to take stock of how well your backup system is serving your organization. To help you get started, here are five telltale signs you don’t have the most modern endpoint backup system:

1. You still get Help Desk calls to retrieve lost data.
The latest backup systems feature intuitive, self-service backup so employees can restore their own data. Not surprisingly, enterprises with a modern endpoint backup system cited fewer backup/restore-related support tickets as a top benefit in a recent survey. More importantly, they were able to use the reduced support time to cost justify their more-advanced system.

2. Your backup system doesn’t support multiple platforms.
Today, 96 percent of companies support Macs. That’s because the enterprise has gone heterogeneous and your backup system should, too. A modern endpoint backup system doesn’t care whether a file is on Windows, Linux or OS X, or whether a device operates on iOS, Android or Kindle Fire. It backs up every file, every time, from anywhere—without requiring a cumbersome VPN connection.

3. You have no visibility into what’s on employee devices.
The latest backup systems give IT a comprehensive, single point of visibility and control across every employee device in the enterprise—including desktops, tablets and smartphones. You gain the insight to pinpoint leaks and prevent insider threat, because you know:

  • Which employees are uploading which files to third-party clouds
  • Which employees have transferred which files to removable media
  • Which employees have uploaded which files via web browsers, including web-based email attachments
  • Unusual file restores that may signal compromised credentials
  • The content of files and folders
  • The location of sensitive, classified and “protected” data

4. You can’t pinpoint where a breach occurred.
With legacy backup, you have to conduct lots of inquiries that take lots of time. With a modern endpoint system, you have visibility into every endpoint (see #3 above), so you can quickly identify where a breach occurred and reduce your Mean Time to Contain (MTTC). You also eliminate unnecessary reporting, because with 100 percent data attribution, you know for certain whether or not there was a breach.

5. You have to confiscate a device to enact a legal hold.
Really? Are you still putting up with that significant productivity drain? With a modern endpoint backup system, your legal team can conduct in-place legal holds and file collection without confiscating user devices—and without having to rely on IT staff.

If two or more of these statements apply to your organization, it’s time to go shopping for modern endpoint backup. See #1 above on how to cost justify it.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution that protects data without sacrificing productivity for today’s mobile workforce.

Which Security Topics Are AWS Users Most Interested In?

May 26, 2016

By David Lucky, Director of Product Management, Datapipe

We hope this blog provides an insightful dive into topics like cloud computing, managed services, products, and ways to improve your business strategy. Of course, our partners have great things to say, as well. One of those partners is AWS, and they’ve been kind enough to highlight the most popular security posts on their blog from the past year. There is some great info here; below is our take on just a few of these posts.

Privacy and Data Security
Security has always been a concern for the enterprise. Initially, it was a major barrier to entry for migrating to the cloud, but over the past few years, a greater number of businesses have realized that, like us, AWS takes security very seriously. This post talks about some of the best practices of the company.

Perhaps the biggest is protecting the privacy of its customers. AWS doesn’t disclose customer information unless required to do so to comply with a legally valid and binding order. And, if they do have to disclose information, they’ll notify customers beforehand. AWS also offers strong encryption as one of many standard security features, and gives organizations the option of managing their own encryption keys. That’s one of the driving forces behind our Datapipe Access Control Model for AWS (DACMA) offering – you get to hang onto the keys to your system, and maintain complete control of your virtual infrastructure and your data. What’s more, DACMA requires two-factor authentication, and all system access and activities are tied back to unique user names, without the hassle of managing an exhaustive list of AWS users. This added layer of security and accountability ensures your business is protected and meeting compliance requirements.

Receiving Alerts
It’s never a bad idea to have an extra layer of security within your infrastructure. As an AWS administrator, you can be notified of any security configuration change. Changes are to be expected, but you can make sure no change to your AWS Identity and Access Management (IAM) configuration is made without your knowledge, and investigate anything that seems out of the norm.

This post from AWS goes into detail on some of the steps you can take to stay in touch with all that’s going on within your AWS structure. From using CloudWatch filter patterns, to monitoring changes to IAM, to generating alarms and metrics, these are all necessary to ensure nothing gets by your watchful eye. Once everything is set up, you’ll receive an alert via email or an SNS topic.
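
As a hedged sketch of that flow (not the exact steps from the AWS post), the Python snippet below uses boto3 to create a CloudWatch Logs metric filter over a CloudTrail log group that counts IAM API calls, plus an alarm that publishes to an SNS topic. The log group name and topic ARN are placeholders, and the filter pattern is deliberately simplified.

# Sketch of CloudTrail-to-alert wiring with boto3. The log group, SNS topic ARN,
# and filter pattern are simplified placeholders for illustration.
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

LOG_GROUP = "CloudTrail/DefaultLogGroup"                                # placeholder
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:security-alerts"   # placeholder

# 1. Turn matching CloudTrail events (any IAM API call here) into a custom metric.
logs.put_metric_filter(
    logGroupName=LOG_GROUP,
    filterName="IamConfigurationChanges",
    filterPattern='{ $.eventSource = "iam.amazonaws.com" }',
    metricTransformations=[{
        "metricName": "IamChangeCount",
        "metricNamespace": "SecurityMonitoring",
        "metricValue": "1",
    }],
)

# 2. Alarm on that metric and publish to SNS, which can fan out to email.
cloudwatch.put_metric_alarm(
    AlarmName="IamConfigurationChangeAlarm",
    MetricName="IamChangeCount",
    Namespace="SecurityMonitoring",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="notBreaching",
    AlarmActions=[SNS_TOPIC_ARN],
)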


PCI Compliance in the AWS Cloud
Payment Card Industry (PCI) compliance is important for just about any business. However, one of the more complex aspects of cloud hosting is deciding which party is responsible for PCI requirements. The PCI Compliance workbook provides a guide on where AWS can cover compliance requirements, and which areas a business must cover itself.

There are twelve top-level PCI requirements in all, and they are quite complex. It can be easy to miss certain requirements or not stay up to date with audits. It’s important to note that you can’t just arbitrarily ignore a PCI requirement—all of them must be met. It may be that not all requirements apply to your business, so a PCI assessor is helpful for clarifying which do and do not apply. We were one of the first hosting providers in the world to achieve PCI DSS Level 1 service provider status—the highest, most rigorous status in the industry—and are happy to work with enterprises to ensure they set up and maintain compliance in their AWS environments.

As a business, it’s refreshing to know your provider has your best interests in mind. For more information, check out our previous posts on AWS security.