Bridging the Divide Between CISOs and IT Decision Makers

May 20, 2016 | Leave a Comment

By Rick Orloff, Chief Security Officer, Code42

In a large organization, leaders create a vision and strategy for the business and employees work to achieve that vision. At the business-unit level in information technology, CIOs, CSOs and CISOs define their strategies while other IT decision makers work to implement them. The key to success is a team working in unison with effective strategies and KPIs. But this might be a case of "theory vs. practice."

When we surveyed 400 IT decision makers (ITDMs) for our 2016 Datastrophe Study, we discovered that CISOs, CIOs and other IT decision makers often diverge on everyday data security implementation and on real-world issues such as BYOD policy administration, reputation management and insider threats. That's the scary reality of the unseen divide: when the people who are meant to protect the enterprise do not agree, the CXOs need to step up and lead.

The Datastrophe Study reveals several specific drivers contributing to the disconnect between C-level and other IT decision makers and ways in which businesses can bridge the gap.

Image issues
Data breaches are hitting organizations left, right and center, and there is little doubt that brands' reputations are at stake. CISOs, with their executive hats on, spend their time on risk mitigation: more than half of CISOs/CIOs (53%) say their ability to protect corporate and customer data is vital to their company's brand and reputation. However, only 43% of ITDMs share that focus.

While the Datastrophe Study reveals only a 10-percentage-point difference between leaders and decision makers, when it comes to sensitive data, even a little complacency can lead to security failures. This may be an issue of operational efficiencies being developed without a secure framework. Data security needs to be part of the design, starting with strategy at the CXO (horizontal) level and carried through vertically in tactical execution.

To ensure that risk and the potential for reputational damage are reasonably mitigated, C-level executives and ITDMs need to work in concert. ITDMs have the clearest view of incumbent systems and employee behaviors—and should not be afraid to speak up. Equally, C-level executives need to take this information on board, if not back to the Board, in order to help ITDMs fulfill the vision of building a secure enterprise.

The insider threat is very real
All security professionals will agree that the insider threat is a reality in any business. But it seems that CISOs, CIOs and other ITDMs have not aligned on the scope and magnitude of the threat or the threat vectors. Sixty-four percent of CISOs and CIOs believe that insider data security threats will increase in the next twelve months. Only 50% of other ITDMs agree with them.

Is the view from the top—with a focus on protecting the organization and brand—skewing reality? Or, given their day-to-day liaison with employees, could it simply be that ITDMs lack the proactive (rather than traditional detective) tools required to provide real-time situational awareness? Either way, if they haven't aligned on the threat vectors, the probability is very high that ITDMs aren't aligned on what to measure or monitor, and both parties may be underestimating the threat. A study by Forrester reported that 70% of data breaches could be traced to employee negligence.

To overcome the insider threat, the C-level and all other ITDMs have to agree on the best strategic course forward. More importantly, both parties need to engage employees and help educate them on behaviors that could lead to a data breach. For example, C-level execs could use a workshop format to explain to employees the costs and damages caused by employee negligence, while ITDMs can provide practical tips and examples of how to avoid behaviors that put data at risk.

Anomaly at the endpoint
In an increasingly mobile workplace, BYOD is a key driver for adopting policies to manage employee-owned devices connected to organizational networks. But things are never as simple as they seem. Among the normally skeptical CISOs/CIOs, 87% believe their companies have clearly defined BYOD policies in place. Meanwhile, only 65% of ITDMs say their organizations have defined BYOD policies. To add more contention to the mix, 67% of knowledge workers (employees who think for a living and engage with mobile devices daily) believe their companies have no apparent BYOD policies.

This disconnect is a major cause for concern: CISOs/CIOs believe that 47% of corporate data is held on endpoint devices, compared with the more moderate estimate of 43% from other ITDMs. It's clear that C-level executives and ITDMs need to work collaboratively to clarify, communicate and implement well-defined BYOD policies.

Ultimately
The simple solution to bridging the gap? Better communication. CISOs/CIOs need to talk to their teams, and their teams need to talk back. Better alignment and integration between the vision and the reality will go a long way toward building more secure enterprises.

Addressing Cloud Security Concerns in the Enterprise

May 18, 2016 | Leave a Comment

By David Lucky, Director of Product Management, Datapipe

Businesses want to move to the cloud, they really do. And more than ever, they're starting to make the switch: A Cloud Security Alliance (CSA) study that polled more than 200 IT professionals found that 71.2 percent of companies now have a formal process for users to request new cloud services.

That CSA study also found that nearly two-thirds of IT professionals trust the security of cloud computing as much as or more than that of their on-premises systems. About a third of respondents cited better security capabilities as a benefit of the cloud. However, almost 68 percent of respondents noted that the ability to enforce their corporate security policies remains a barrier to cloud adoption.

Companies know there’s top-notch security in the cloud, yet security remains the biggest hurdle in getting over to the cloud. Kind of a catch-22, huh? Fortunately, there are a few things you can do to help assuage these fears.

Cloud security is something everyone in a company should be concerned with, not just the IT department or decision-makers. And while the tools we use are improving and more people are starting to better understand cloud computing, people still play a big part in security. Your team of security professionals should get the correct training early in their tenure, and ongoing training will keep their skills sharp.

Outside of security professionals, all employees within a company should know their role in maintaining a secure environment. Taking a proactive approach to security risks is the first step, and 82.2 percent of companies report doing so. However, fewer than half of the companies that responded have a complete incident response plan. With real concerns like loss of reputation or trust, financial loss, and destruction of data, it's imperative to have a plan in place to combat potential security issues head-on, rather than reacting after the fact.

To help with the development of that plan, some businesses have turned to a managed service provider (MSP). Naturally, there are concerns surrounding that as well: the CSA report notes that 87.3 percent of companies cite access control as an important aspect of cloud security. Our Datapipe Access Control Model for AWS (DACMA) addresses this concern by letting a business securely delegate access to Datapipe while retaining control of its credentials. DACMA's role-based access and accountability elements also ensure the right people within an enterprise are accessing certain data. And with 24/7/365 security monitoring, you'll be ready should an issue arise.
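
DACMA itself is proprietary, but the general pattern it describes, delegating access to a provider without handing over credentials, is commonly implemented in AWS with a cross-account IAM role. Below is a minimal sketch of that generic pattern, assuming boto3; the account ID, external ID, role name and attached policy are all hypothetical, and this is not Datapipe's implementation.

```python
# Generic cross-account delegation sketch (hypothetical names and IDs).
import json
import boto3

iam = boto3.client("iam")

MSP_ACCOUNT_ID = "111111111111"        # hypothetical provider account
EXTERNAL_ID = "example-external-id"    # shared secret to guard against confused-deputy misuse

# Trust policy: only the provider's account may assume the role, and only with the external ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::" + MSP_ACCOUNT_ID + ":root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}

# The customer creates the role in its own account and can change or revoke it at any time.
role = iam.create_role(
    RoleName="msp-operations-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach only the permissions the provider actually needs (read-only here as an example).
iam.attach_role_policy(
    RoleName="msp-operations-role",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```

Because the role lives in the customer's own account, the customer can tighten its permissions or delete it outright at any time, which is what "retaining control of credentials" looks like in practice.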

Whether or not you choose to partner with an MSP to assist with security, there are plenty of reasons to develop a cloud security strategy that works within your enterprise. There’s no one right method, but there is a wrong approach: not doing anything about it. To learn more about first steps you can take, visit our Managed Security page.

Cloud Computing: A Little Less Cloudy

May 16, 2016 | Leave a Comment

By Christina McGhee, Manager/FedRAMP Technical Lead, Schellman

Today, consumers have an increasing interest in implementing cloud solutions to process and store their data. They are looking to take advantage of the benefits provided by cloud computing, including flexibility, cost savings, and availability. Fortunately, there are many cloud solutions available to consumers, touting cloud computing features such as multi-tenancy, virtualization, or increased collaboration. But is it really a cloud service?

With the rapid growth of these types of solutions, consumers and other interested organizations want to identify whether a service is actually a cloud service.

In actuality, there is such a thing as a cloud service. It has a definition, and we have seen federal agencies require cloud service providers to justify why their service is considered a cloud service.

The five essential cloud characteristics are based on the National Institute of Standards and Technology’s (NIST) definition of cloud computing in Special Publication (SP) 800-145. Here, NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

According to NIST SP 800-145, a cloud service employs all of the following five characteristics:

  1. On-demand self-service – A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  2. Broad network access – Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  3. Resource pooling – The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
  4. Rapid elasticity – Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  5. Measured service – Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.
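
In practice, the first and fourth characteristics (on-demand self-service and rapid elasticity) show up as provider APIs that let the consumer provision and release resources programmatically, with no human interaction on the provider side. Here is a minimal sketch, assuming AWS EC2 and boto3; the AMI ID is a placeholder, and any IaaS provider's API would illustrate the same point.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Provision compute capacity on demand (the AMI ID is a placeholder).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]

# Release the capacity just as easily when demand drops.
ec2.terminate_instances(InstanceIds=[instance_id])
```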

Whether you are a cloud service provider, consumer, or other interested party, it is important to identify how the cloud service offering meets each of the five essential characteristics. For example, cloud service providers in the FedRAMP authorization process usually document how their service meets each of the five essential cloud computing characteristics in their System Security Plan (SSP).

It goes without saying that, regardless of whether a service meets the definition of a cloud service, the cloud service provider and consumer must always plan and prepare for the security risks associated with providing or using the service and the types of data it will consume. The cloud service provider is responsible for selecting a security program framework to implement security controls specific to cloud environments and to the data protection requirements of its customers. Equally, the consumer must be fully aware of the data they plan to process and/or store with the cloud service and their responsibilities to protect that data.

 

Providing Trust and Assurance Through Cloud Certification and Attestation: A Complimentary CSA STAR Program Webinar by Schellman

May 12, 2016 | Leave a Comment

 

By Avani Desai, Executive Vice President, Schellman

In the last 24 months, the Cloud Security Alliance (CSA) has made great strides in enhancing its CSA Security, Trust and Assurance Registry (STAR) Program. In brief, the STAR Program is a publicly available registry designed to recognize assurance requirements and maturity levels of cloud service providers (CSPs). Prior to the CSA issuing guidance for STAR Certification and STAR Attestation, a CSP could only perform a self-assessment, which meant completing the Consensus Assessments Initiative Questionnaire (CAIQ) and making the responses publicly available on the STAR registry. The CAIQ was completed in several different ways and the content varied from short answers to full-page responses. It was relevant information but not independently validated. This created a path for the STAR Certification and STAR Attestation programs.

Join Schellman for a complimentary webinar titled "CSA STAR Program: Attestation and Certification." The webinar will be held on May 13th from 12:00pm EST to 1:00pm EST and will provide one (1) hour of CPE. Debbie Zaller, Schellman Principal, and Ryan Mackie, Practice Leader, STAR Program, will provide an in-depth discussion of the opportunities to undergo third-party assessments through the CSA STAR Programs to validate maturity levels or control activities.

"Organizations, specifically cloud service providers, are continuously working to provide confidence to their customers regarding the security and operating effectiveness of their controls supporting the cloud, and the STAR Certification and STAR Attestation options provided by the CSA allow these organizations to further establish confidence in the market," said Ryan Mackie. "This webinar is a practical introduction to the STAR Level 2 offerings, outlining their benefits, requirements, and process, and how these types of third-party validation can clearly complement a cloud provider's governance and risk management system."

This informative webinar will provide:

  • An overview and journey of the CSA STAR Programs
  • A definition of the CCM framework
  • An overview of the Certification and Attestation purpose and scope
  • The process and preparations
  • A discussion of the common challenges and benefits

For more information and to register for the webinar, click here. The event will also be recorded and available for on-demand viewing; click for more information.

ABOUT THE SPEAKERS
Debbie Zaller leads Schellman's CSA STAR Attestation and SOC 2 services practice, where she is responsible for internal training, methodology creation, and quality reporting. Debbie has performed over 150 SOC 2 assessments and also holds a Certificate of Cloud Security Knowledge (CCSK).

Ryan Mackie leads Schellman's CSA STAR Certification and ISO 27001 certification services practice, where he is an integral part of methodology creation and the planning and execution of assessments. Ryan has performed over 100 ISO 27001 assessments and is a certified ISO 27001 Lead Auditor trainer.

 

Outdated Privacy Act Close to Getting an Upgrade

May 12, 2016 | Leave a Comment

By Susan Richardson, Manager/Content Strategy, Code42

The outdated Electronic Communications Privacy Act (ECPA) may finally get a much-needed upgrade, but the reform can't come soon enough for Microsoft, other cloud providers and privacy advocates. Here's what you need to know:

The issues:
The ECPA was enacted in 1986, as electronic communication started to become more prevalent. The intent was to extend federal restrictions on government wiretaps from telephones to computer communications. But as we created other electronic communication devices and moved content to the cloud, the Act became outdated. The primary gripes are that it:

  • Allows government agencies to request emails more than 180 days old with just an administrative subpoena, which the agency itself can issue, vs. having to get a warrant from a judge.
  • Doesn’t require notifying affected customers when their data is being requested, giving them a chance to challenge the data demand. In fact, the Act includes a non-disclosure provision that can specifically prohibit providers from notifying customers.

The lobbying and lawsuits:
Plenty of wide-ranging groups have been advocating for ECPA reform, including the American Civil Liberties Union, the Center for Democracy & Technology, the Electronic Frontier Foundation, the Digital Due Process Coalition, the Direct Marketing Association and even the White House, in its 2014 Big Data Report.

On April 14, Microsoft added more weight to the argument for reform. The company filed a lawsuit against the U.S. Justice Department, suing for the right to tell its customers when a federal agency is looking at their email. The lawsuit points out that the government's non-disclosure requests have become the rule vs. the exception. In 18 months, Microsoft was required to maintain secrecy in 2,576 legal demands for customer data. Even more surprising, the company said, was that 68 percent of those requests had no fixed end date—meaning the company is effectively prohibited forever from telling its customers that the government has obtained their data.

The reform:
Two weeks after Microsoft filed its suit, the U.S. House voted 419-0 in favor of the Email Privacy Act, which would update the ECPA in these key ways:

  • Require government representatives to get a warrant to access messages older than 180 days from email and cloud providers.
  • Allow providers to notify affected customers when their data is being requested, unless the court grants a gag order.

The last step in the process is for the Senate to turn the reform bill into law. While no timeline has been given, the Senate is under a lot of pressure to act quickly.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

How to Reduce Costs and Security Threats Using Two Amazon Tools

May 10, 2016 | Leave a Comment

By David Lucky, Director of Product Management, Datapipe

Have you ever gone to see a movie that would have been amazing if not for one person? The plot was engaging, the dialogue was well written, and there were strong performances from most of the cast. But that one actor simply didn't live up to the rest of the film and made every scene he was in that much worse. Simply put, that actor was bad, and brought down the whole operation.

That idea of the "bad actor" can be applied to Internet clients as well. Fortunately, you're not hurting any feelings by sussing them out: the bad actors are usually automated processes that can harm your systems. The two most common forms are content scrapers, which dig into your content for their own profit, and bad bots, which misrepresent who they are to get around any restrictions meant to stop them.

We’d all like to believe that everyone accessing content will use it appropriately. Unfortunately, we can’t always assume the best, and being proactive in dealing with these bad actors will reduce security threats to your infrastructure and apps.

Even better, blocking bad actors will also lower your operating costs. When these bots access your content, you're serving the traffic to them whether you want to or not, which adds to your overall costs. By blocking them, you're restricting traffic from a number of undesired sources. Luckily, AWS has a pair of tools you can combine to say goodbye to these bad actors: Amazon CloudFront paired with AWS WAF, a web application firewall.

With AWS WAF, you can define a set of rules known as a web access control list (web ACL). Each rule contains a set of conditions plus an action. Every request received by CloudFront is handed over to AWS WAF for inspection: if the request matches a rule's conditions, that rule's action is applied; if it matches no rule, the web ACL's default action is taken. These conditions can remove quite a bit of unwanted traffic, as you can filter by source IP address, strings of text, and a whole lot more. As for the actions, you can count the request for later analysis, allow it, or block it.
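
As a rough illustration of those building blocks, here is a minimal sketch using the classic (global) WAF API via boto3, which is what CloudFront distributions use. All names, metric names, CIDR ranges and priorities are hypothetical, and the AWS walkthrough mentioned below remains the authoritative reference.

```python
import boto3

# CloudFront uses the global (classic) WAF endpoint.
waf = boto3.client("waf")

def token():
    # Every mutating WAF Classic call needs a fresh change token.
    return waf.get_change_token()["ChangeToken"]

# 1. Condition: an IP set listing known bad actors.
ip_set_id = waf.create_ip_set(Name="BadActorIPs", ChangeToken=token())["IPSet"]["IPSetId"]
waf.update_ip_set(
    IPSetId=ip_set_id,
    ChangeToken=token(),
    Updates=[{"Action": "INSERT",
              "IPSetDescriptor": {"Type": "IPV4", "Value": "192.0.2.0/24"}}],
)

# 2. Rule: match requests coming from that IP set.
rule_id = waf.create_rule(Name="BlockBadActors", MetricName="BlockBadActors",
                          ChangeToken=token())["Rule"]["RuleId"]
waf.update_rule(
    RuleId=rule_id,
    ChangeToken=token(),
    Updates=[{"Action": "INSERT",
              "Predicate": {"Negated": False, "Type": "IPMatch", "DataId": ip_set_id}}],
)

# 3. Web ACL: default action ALLOW, with the rule set to BLOCK matching requests.
web_acl_id = waf.create_web_acl(Name="CloudFrontWebACL", MetricName="CloudFrontWebACL",
                                DefaultAction={"Type": "ALLOW"},
                                ChangeToken=token())["WebACL"]["WebACLId"]
waf.update_web_acl(
    WebACLId=web_acl_id,
    ChangeToken=token(),
    Updates=[{"Action": "INSERT",
              "ActivatedRule": {"Priority": 1, "RuleId": rule_id,
                                "Action": {"Type": "BLOCK"}}}],
)
# The web ACL is then associated with the CloudFront distribution (WebACLId in the
# distribution config), so CloudFront hands each request to WAF for evaluation.
```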

Perhaps the best attribute of the WAF is that you can integrate it smoothly into your existing DevOps processes and automate workflows that react. Since bad actors are always switching their methods to mask their actions, your proactive detection methods must constantly change as well. Having those automations in place is immensely helpful in finding bad actors and restricting their access.
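
Building on the sketch above, the automation piece can be as simple as a scheduled job or Lambda-style function that feeds newly flagged addresses into the same IP set. The detection logic (for example, parsing CloudFront access logs for scraper signatures) is left out here, and the IP set ID is hypothetical.

```python
import boto3

waf = boto3.client("waf")
IP_SET_ID = "example-ip-set-id"  # hypothetical: the IP set created earlier

def block_new_bad_actors(flagged_cidrs):
    # Push newly flagged source addresses into the WAF IP set so the BLOCK rule applies.
    updates = [{"Action": "INSERT",
                "IPSetDescriptor": {"Type": "IPV4", "Value": cidr}}
               for cidr in flagged_cidrs]
    if updates:
        waf.update_ip_set(IPSetId=IP_SET_ID,
                          ChangeToken=waf.get_change_token()["ChangeToken"],
                          Updates=updates)

# Example: CIDRs produced by whatever scraper/bot detection you run against your logs.
block_new_bad_actors(["198.51.100.23/32", "203.0.113.0/28"])
```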

There's a great step-by-step walkthrough of how to set up this solution on the AWS Security Blog. Feel free to check it out for more information, or get in touch with us if you have any additional questions. And for AWS customers that need even more than what AWS WAF has to offer, there are services complementary to AWS WAF that provide enhanced protection for business-critical applications on AWS. You won't even need to thank the Academy when all of those bad actors are removed.

DoD Updates Government Security Requirements for Cloud, But What Does That Really Mean?

May 6, 2016 | Leave a Comment

By Brian Burns, Bid Response Manager/Government Affairs, Datapipe

IT officials from the Department of Defense (DoD) have released an update to the Cloud Computing Security Requirements Guide (CC SRG), which establishes security requirements and other criteria for commercial and non-Defense Department cloud providers to operate within DoD. These kinds of updates are not uncommon. In fact, they are encouraged through an interesting use of a DevOps-type methodology – as the DoD explains:

DoD Cloud computing policy and the CC SRG is constantly evolving based on lessons learned with respect to the authorization of Cloud Service Offerings and their use by DoD Components. As such the CC SRG is following an “Agile Policy Development” strategy and will be updated quickly when necessary.

The DoD offers a continuous public review option and accepts comments on the current version of the CC SRG at all times, updating the document quickly and regularly to address the constantly changing concerns of evolving technologies like public and private cloud infrastructure. The most recent update includes administrative changes and corrections and some expanded guidance on previously established requirements, with the main focus being to clarify standards set in version one and to eliminate confusion and potential inaccuracies.

If you are interested, you can read through the entire CC SRG revision history online.

What is particularly interesting here is the DoD’s acknowledgment that management of cloud environments is constantly evolving, security requirements and best practices need to be iterative, and updates need to be made regularly to ensure relevancy. It’s also important to note that the CC SRG is only one of many government policies put in place to help government agencies securely and effectively implement cloud infrastructures. There are also guidelines like NIST SP 800-37 Risk Management, NIST 800-53, FISMA and FedRAMP to consider. All of these provide a knowledge base for cloud computing security authorization processes and security requirements for government agencies.

What the DoD's updates to the CC SRG should reinforce for agencies is that they need a clear cloud strategy in place in order to ensure compliance and success in the cloud. Determining the best implementation of these guidelines for your needs is difficult in and of itself. Add to that the ongoing management and updates required to keep up with ever-evolving guidelines, and an IT team can find itself struggling.

By partnering with systems integrators and software vendors, or working directly with a managed service provider, like Datapipe, government agencies can more easily develop a long-term cloud strategy to architect, deploy, and manage high-security and high-performance cloud and hosted solutions, and stay on top of evolving government policies and guidelines.

For example, Microsoft Azure recently announced new accreditation for its Government Cloud, and Amazon AWS has an isolated AWS region designed to host sensitive data and regulated workloads, called AWS GovCloud. You can learn more about our new Federal Community Cloud Platform (FCCP), which meets all FISMA controls and FedRAMP requirements, and about all of our specific government cloud solutions on the Datapipe Government Solutions section of our site.

Five Endpoint Backup Features That Help Drive Adoption

May 3, 2016 | Leave a Comment

By Susan Richardson, Manager/Content Strategy, Code42

If you're among the 28 percent of enterprises that still haven't implemented a planned endpoint backup system, here are 5 key attributes to look for in a system, to help drive adoption and success. These recommendations are courtesy of Laura DuBois, program vice president at IDC, a global market intelligence provider with 1,500 highly mobile, knowledge-driven employees:

1. Supports Productivity
Look for a lightweight system that doesn’t put a drag on memory, so employees can access data and collaborate quickly. If the system slows people down, they won’t use it.

2. Increases Security
While some people think of endpoint backup primarily for disaster recovery, you should think of it as a data loss prevention tool, too. A good endpoint backup system offers a multi-layered security model that includes transmission security, account security, password security, encryption security (both in transit and at rest) and secure messaging.

3. Offers Intuitive Self-Service
Employees don't want to wait for IT to recapture lost data. An easy-to-use, self-service interface allows employees to locate and retrieve their own data. Not only does this help increase adoption, it also cuts down on calls to the IT help desk, saving administrative time and money. A survey of Code42 customers found that 36 percent had fewer restore support tickets after installing the CrashPlan endpoint backup system, and 49 percent reduced IT hours spent on data restores.

In fact, for CISOs looking to make the case for an endpoint backup system, DuBois suggests compiling Help Desk volume data and the productivity associated with it.

4. Supports Heterogeneity
DuBois’ research showed that the average corporate employee uses 2.5 devices for work, some company issued and some not. Your endpoint backup system has to accommodate today’s diversity in devices, platforms and network connectivity.

5. Handles the Added Traffic
Some endpoint backup systems can get bogged down with lots of users and not enough network bandwidth. Look for a system that backs up almost continuously, so the processing is spread out vs. taxing the system all at once and slowing it down.

To learn more, see DuBois’ webinar, “5 Expert Tips to Drive User Adoption in Endpoint Backup Deployments.”

 

10 Key Questions to Answer Before Upgrading Enterprise Software

April 27, 2016 | Leave a Comment

By Rachel Holdgrafer, Business Content Strategist, Code42

The evolution of software has made possible things we never dreamed of. With software upgrades come new competencies and capabilities, better security, speed, power, and often disruption. Whenever something new enters an existing ecosystem, it can upset the works.

The cadence of software upgrades in large organizations is typically guided by upgrade policies; the risk of disruption is also greater at that scale, which is the chief reason large companies lag up to two versions behind current software releases. They take a wait-and-see approach, observe how early adopters fare with software upgrades and adopt as a late majority.

A proper upgrade process involves research, planning and execution. Use these top 10 principles to establish when and why to upgrade:

1. What’s driving the upgrade? Software upgrades addressing known security vulnerabilities are a priority in the enterprise. Usability issues that impact productivity should also be addressed quickly.

2. Who depends on the legacy software? Identifying departments that depend on legacy software allows IT to schedule an upgrade when it has the least impact on productivity.

3. Can the upgrade be scheduled according to our policy? Scheduling upgrades within the standard upgrade cycle minimizes distraction and duplication of effort. Change control policies formalize how products are introduced into the environment and minimize disruption to the enterprise and IT.

4. Is the organization ready for another upgrade? Just because an organization needs a software upgrade doesn’t mean it can sustain that upgrade. Upgrade and patch fatigue are very real. Consider the number of upgrades you’ve deployed in recent months when deciding whether to undertake another one.

5. What is the upgrade going to cost? Licensing costs are only one part of the total cost associated with software upgrades. Services, staff time, impact to other projects, tech support for associated systems and upgrades for systems that no longer work with the new platform must also be included in the total cost.

6. What is the ROI of the upgrade? Software updates that defeat security vulnerabilities are non-negotiable—security itself is the ROI. Non-security related upgrades, however, must demonstrate their value through increased productivity or improved efficiency and reduced costs.

7. How will the customer be impacted? Consider all the ways an upgrade could impact customers and make adjustments before the upgrade begins. Doing so ensures you mitigate any potential issues before they happen.

8. What could go wrong? Since your goal is to increase performance, not diminish it, draft contingency plans for each identified scenario to readily address performance and stability issues, should they arise.

9. What level of support does the vendor provide? Once you understand what could go wrong during the upgrade, look into the level of support the vendor provides. Identify gaps in coverage and source outside resources to fill in as needed.

10. What’s your recourse? No one wants to think about it, but sometimes upgrades do more harm than good. In the event something goes wrong and you need to revert to a previous software version, can you?

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about how a modern endpoint backup solution can simplify software upgrades.

Survey of IT Pros Highlights Lack of Understanding of SaaS Data Loss Risks

April 26, 2016 | Leave a Comment

By Melanie Sommer, Director of Marketing, Spanning by EMC

Recently, Spanning – an EMC company and provider of backup and recovery for SaaS applications – announced the results of a survey* of over 1,000 IT professionals across the U.S. and the U.K. about trends in SaaS data protection. It turns out that IT pros across the pond have the same concerns as here in the U.S., as the survey found that security is the top concern when moving critical applications to the cloud. Specifically, 44 percent of U.S. and U.K. IT pros cited external hacking/data breaches as their top concerns, ahead of insider attacks and user error.

But that’s not the most interesting finding, as the survey found that perceived concerns differ from reality when it comes to actual data loss. In total, nearly 80 percent of respondents have experienced data loss in their organizations’ SaaS deployments. Accidental deletion of information was the leading cause of data loss from SaaS applications (43 percent in U.S., 41 percent in U.K.), ahead of data loss caused by malicious insiders and hackers.

Beyond accidental deletions, migration errors (33 percent in the U.S., 31 percent in the U.K.) and accidental overwrites (27 percent in the U.S., 26 percent in the U.K.) also outranked external and insider attacks as top causes of data loss in both countries.

How SaaS Backup and Recovery Helps
As a case in point, consider one serious user error: clicking a malicious link or file and triggering a ransomware attack. If an organization uses cloud-based collaboration tools like Office 365 OneDrive for Business or Google Drive, the impact of a ransomware attack is multiplied at compute speed. How? An infected laptop contains files that automatically sync to the cloud (via Google Drive or OneDrive for Business). Those newly infected files sync, then infect and encrypt other files in every connected system – including those of business partners or customers, whose files and collaboration tools will be similarly compromised.

This is where backup and recovery enters the picture. Nearly half of U.S. respondents not already using a cloud-to-cloud backup and recovery solution said that they trust their SaaS providers to manage backup, while the other half rely on manual solutions. In most cases, SaaS providers are not in a position to recover data lost or deleted through user error, and cannot blunt the impact of a ransomware attack on their customers. Further, with many organizations relying on manual backups and an assumption that none of the admins in charge are malicious, the opportunity for accidental neglect or oversight is too big to ignore. The industry would seem to agree: more than a third of U.S. organizations (37 percent) are already using or plan to use a cloud-to-cloud backup provider for backup and recovery of their SaaS applications within the next 12 months.

Since the survey included U.K. respondents, it also gauged sentiment around the rapidly changing data privacy regulations in the EU, specifically regarding the "E.U.-U.S. Privacy Shield." Most IT professionals surveyed (66 percent in the U.K., 72 percent in the U.S.) agree that storing data in a primary cloud provider's EU data center will ensure 100 percent compliance with data and privacy regulations.

These results paint a picture of an industry that is as unsure as it is underprepared: while security is a top concern when moving critical applications to the cloud, most organizations trust the inherent protection of their SaaS applications to keep their data safe, even though the leading cause of data loss is user error, which is not normally covered by native SaaS application backup. The results also show that the concerns influencing cloud adoption have little to do with the real causes of everyday data loss and more to do with a fear of data breaches or hackers.

The takeaway from these survey results: to reduce their cloud adoption concerns, IT pros need greater awareness and understanding of where, when, and how critical data can be lost, and they need to learn how to minimize the true sources of SaaS data loss risk. To learn more, download the full survey report, or view an infographic outlining the major findings of the survey.

*Survey Methodology
Spanning by EMC commissioned the online survey, which was completed by 1,037 respondents in December 2015. Of the respondents, 537 (52 percent) were based in the United Kingdom, and 500 in the United States (48 percent). A full 100 percent of the respondents “have influence or decision making authority on spending in the IT department” of their organization.
Respondents were asked to select between two specific roles: “IT Function with Oversight for SaaS Applications” (75 percent U.S., 78 percent U.K., 77 percent overall); “Line of Business/SaaS application owner” (39 percent U.S., 43 percent U.K., 41 percent overall); the remaining identified as “other.”