What’s Hindering the Adoption of Cloud Computing in Europe?

September 15, 2015

Like their counterparts in North America, organizations across Europe are eagerly integrating cloud computing into their operating environments. However, despite the overall enthusiasm about the potential of cloud computing to transform their business practices, many CIOs have real concerns about migrating their sensitive data and applications to public cloud environments. Why? In essence, it boils down to a few core areas of concern:

  1. A perceived lack of clarity in existing Cloud Service Level Agreements and security policy agreements
  2. The application, monitoring, and enforcement of security SLAs
  3. The relative immaturity of cloud services

These issues, of course, are far from new and, in fact, great progress has been made over the past five years to address these and other concerns around fostering greater trust in cloud computing. The one thread that runs through all of these issues is transparency – the more transparency a cloud service provider can offer into its approach to information security, the more confident organizations will be in adopting public cloud providers and trusting them with their data and assets.

To this end, the European Commission (EC) launched the Cloud Selected Industry Group (SIG) on Certification in April of 2013 with the aim of supporting the identification of certifications and schemes deemed “appropriate” for the European Economic Area (EEA) market. Following this, ENISA (European Network and Information Security Agency) launched its Cloud Certification Schemes Metaframework (CCSM) initiative in 2014 to map the detailed security requirements used in the public sector to the security objectives described in existing cloud certification schemes. And of course, the Cloud Security Alliance has also played a role in defining security-specific certification schemes with the creation of the CSA Open Certification Framework (CSA OCF), which works to enable cloud providers to achieve a global, accredited and trusted certification.

Beyond defining a common set of standards and certifications, SLAs have become an important proxy by which to gauge visibility into a cloud provider’s security and privacy capabilities. The specification of security parameters in Cloud Service Level Agreements (“secSLAs”) has been recognized as a mechanism to bring more transparency and trust for both cloud service providers and their customers. Unfortunately, the conspicuous lack of relevant Cloud security SLA standards has also become a barrier to their adoption. For these reasons, standardized Cloud secSLAs should become part of the more general SLAs/Master Service Agreements signed between the CSP and their customers. Current efforts from the CSA and ISO/IEC in this field are expected to bring some initial results by 2016.

This topic will be a key theme at this year’s EMEA Congress, taking place November 17-19 in Berlin, Germany, with a plenary panel on “Cloud Trust and Security Innovation” featuring Nathaly Rey, Head of Trust, Google for Work, as well as a track on Secure SLAs, which is being led by Dr. Michaela Iorga, Senior Security Technical Lead for Cloud Computing, NIST.

To register for the EMEA Congress, visit: https://csacongress.org/event/emea-2015/#registration.


Four criteria for legal hold of electronically stored information (ESI)

September 9, 2015

By Chris Wheaton, Privacy and Compliance Counsel, Code42


The average enterprise sees its data double every 14 months — nearly one-third of which is stored on endpoints, such as laptops and mobile devices. This rapid growth in electronically stored information (ESI) creates new challenges and drives unplanned costs in the corporate litigation process. But while many companies have implemented a solution for preserving and producing ESI for litigation, many still worry that their processes will be judged insufficient, exposing them to sanctions that result in high monetary and reputation costs. Since 2005, sanctions for spoliation of evidence have increased nearly 300 percent. In one landmark case in 2015, sanctions totaled nearly $1 million for repeated negligence in the eDiscovery process.

While the eDiscovery space is clearly in an evolutionary phase, the judgments—which can be both subjective and relative—appear to be based on four main criteria:

  1. Duty to Preserve. This is the expectation that counsel begins preserving relevant data from the moment a reasonable expectation of litigation emerges. The precise moment is hard to pinpoint, but is often months—even years—ahead of an official filing of litigation. By taking a proactive approach, enterprises can ensure continuous collection of ESI, so that legal holds can be quickly issued, custodians immediately notified and data instantly preserved and protected.
  2. Scope. This is the expectation that you preserve, collect and produce any and all information pertinent to the litigation. It refers both to the subject matter of the content and to the type of data (email, internal files, social media, etc.). The impending changes to eDiscovery regulations aim to speed litigation and reduce costs by limiting frivolous information requests. Enterprises must still strike a balance in the information produced for and presented to the court. Submitting too little information can be perceived as a red flag. It gives the impression the organization is trying to conceal evidence and can lead to costly and time-consuming remedial information requests. Conversely, submitting too much information is also a risk. Requiring courts to parse excessive irrelevant data could be viewed unfavorably by a judge. Equally concerning: producing non-pertinent information could expose your organization to additional litigation and put more of your private data at risk.
  3. Chain of Custody. The issue of modern connectivity also creates a twist on an existing consideration—chain of custody. In addition to producing data, you typically must also provide a continuous record of data movement and custody—who created it, who edited it, where it was stored, how it moved from location to location, etc. This extends beyond the issuance of the legal hold. Tracking the movement and custodians of data during eDiscovery is also critical to mitigating risk of sanctions and privacy breaches.
  4. Data Management Philosophy – Tying It All Together. As the merit of your eDiscovery process is judged by the subjective quality of “reasonableness,” even a statement of intent, such as an official corporate data management policy or philosophy, lends credibility to your efforts. In the event that you are unable to preserve or produce a given piece of ESI, a judge may look to your data management policy to determine whether you failed despite good intentions, or failed as a result of a negligent data management philosophy.

Organizations have been sanctioned for antiquated data management philosophies that fail to accommodate the modern realities of litigation involving ESI. “We delete all data after 90 days,” for example, is not likely to be considered a reasonable excuse for failing to produce relevant ESI. Instead, the stated philosophy should take a proactive stance, acknowledging the need for ongoing preservation and protection of data, preparing for immediate issuance of legal holds and notification of custodians, and comprehensively tracking the movement of all ESI.

With a solid, comprehensive data management philosophy guiding your efforts, you can create a foundation for a “reasonable” eDiscovery process. Meeting your duty to preserve, producing the right scope of ESI and thoroughly documenting the chain of custody will follow naturally from this overarching philosophy. Also, an effective data management philosophy makes it more likely that a judge—even one well-versed in “reasonable” eDiscovery and the expanding view of ESI—will view any and all of your eDiscovery actions in a “reasonable” light.

Info security: an eggshell defense or a layer cake strategy?

September 2, 2015

By Susan Richardson, Manager/Content Strategy, Code42

Eggshell security describes a practice in which organizations depend on a traditional model of a “hardened outer layer of defenses and a network that is essentially wide open, once the attacker has made it past perimeter defenses.”

In an article published in The Register, a leading global online publication headquartered in London, Trevor Pott describes the four pillars of modern IT security as layers of protection in lieu of a brittle and penetrable outer shell protecting the interior.

Eggshell computing is a fantastically stupid concept, Pott says, yet our entire industry is addicted to it. We focus on the “bad guys” battering down the WAN with port scans and spam. We ignore the insider threats from people downloading malware, being malicious or even just Oopsie McFumbleFingers YOLOing the delete key.

Prevention is only the first layer of security surrounding the network. It includes firewalls, patches, security access lists, two-factor authentication and other technology designed to prevent security compromises.

Detection is the second layer of defense: it includes real-time monitoring for breaches as well as periodic scanning. Intrusion detection systems, mail gateways that scan for credit card numbers moving through email, and auditing systems that scan logs all fall into this category.

Mitigation is the third layer. This is a series of practices in which the idea of compromise is accepted as part of doing business. Thus, an organization designs a network so that a compromise in one system will not result in a compromise of the entire network.

Because an incident is inevitable, incident response rounds out the layered security methodology.

Accepting that your network will inevitably be compromised, what do you do about it? How do you prevent a malware infection, external malicious actor, or internal threat from escalating their beachhead into a network-wide compromise?

The ability to respond to the inevitable by reloading from clean backups, learning via forensic analysis and returning to work from compromised systems (thereby assuring business continuity) isn’t giving up the fight, it’s understanding that the enemy will penetrate (or is already inside)—but recovery is always within reach.

M&A Concern: Is your data walking out the door with employees?

August 25, 2015

By Susan Richardson, Manager/Content Strategy, Code42

If you’re at one of the 40,000+ companies a year worldwide that announce a merger or acquisition, your biggest worry may not be combining IT systems. It may be all those employees walking out the door with your data.

Layoffs and voluntary departures are a given after a merger or acquisition. That means stolen data is a given, too: Half of departing employees keep confidential corporate data when they leave organizations, according to a recent study. And 42% believe it’s their right. The BYOD trend just adds insult to injury: departing employees leave with fully stocked devices they own and operate.

So what are employees taking? Some pilfered data is innocuous and already in the public realm. But some of it is classified. A partner at a law firm that specializes in labor and employment law says 90% of the data losses he sees involve customer lists. Not just names and addresses, but confidential information such as buying habits, contract terms and potential deals.

Other classified information could include credit card information, health information, financial records, software code, email lists, strategic plans, proprietary formulas, databases and employee records.

To avoid data breaches by departing employees—and the risk of operational, financial and reputation damage—security experts recommend three key steps:

  1. Educate employees: Make it very clear to employees that taking confidential information is wrong. Your security awareness training should include a detailed section on intellectual property theft.
  2. Enforce and clarify non-disclosure agreements: In nearly half of insider theft cases there were IP agreements in place, but the employee either misunderstood them or they weren’t enforced, according to the study. Start by including stronger, more specific language in employment agreements. Then make sure employees are aware that these policies will be enforced and that theft of company information will have negative consequences—to them and any future employer who benefits from the stolen data. Just as importantly, make sure exit interviews include focused conversations around the employee’s continued responsibility to protect confidential information and return all company property and information—including company data stored on personal devices.
  3. Monitor technology to catch breaches early: By monitoring all the data that moves with your employees—on any device and in the cloud—you can quickly identify and rectify any inappropriate access and use of confidential data.

A good endpoint backup system, one that can be rapidly deployed to the acquired company via a single platform and managed via one console, enables companies to track and audit the content passing through a device. So if there is an insider theft, you have the ability to build a legal case and recover the data. An added benefit? A good endpoint backup system lessens the temptation to steal intellectual property.

The Cloud and Cybersecurity

August 20, 2015

By Vibhav Agarwal, Senior Manager of Product Marketing, MetricStream

Tomorrow’s digital enterprise is at war today. War not only with external cybersecurity hackers and viruses, but also within the organization itself – a conclusion based on my discussions with information security managers and cloud architects around the world. While most executives understand the importance of a business-driven, risk management-focused cybersecurity model, they do not address cybersecurity as an organizational issue, but more as a compliance or IT checklist issue.

As business models transform, becoming the leading and modern digital enterprises of the future, we see a shift in other areas as well. This is the age of the customer, and in a digital world, customer service or dis-service can be decided by one successful phishing attempt on an organization’s website. As recent events have proven, a successful cyber-attack can not only bring an organization to its knees in minutes, but also make getting back up quickly nearly impossible.

Furthermore, as business leaders lean more and more on the Cloud as a default choice for newer, faster systems of engagement with customers, new complexities come into the picture. Front-end application requirements like zero downtime, instant cross-channel functionality deployment, and real-time performance management make the cloud an ideal environment for fast, customer-centric services. But cloud and cybersecurity – how do we take care of that?

There are five key things that every IT Manager and Architect should think about as they aspire to be the CISO of tomorrow’s leading digital enterprise:

  1. It’s a Business Problem: As custodians of sensitive customer information and business value delivery, the CISOs of tomorrow should understand the importance of keeping data safe and secure. CISOs should ensure that they are part of a core team looking at the organizational risk appetite, which includes aspects like loss of IP, customer information loss, business operation disruption, and more. The CISO should present the organizational cybersecurity risk in the context of business by correlating IT assets and their residual scores with their business importance (a minimal risk-scoring sketch follows this list). The trade-offs of newer cybersecurity investments versus the status quo need to be examined from a more strategic and organizational perspective, rather than from a mere annual investment or upgrade perspective.
  2. The First C of an Effective Cloud Strategy Is Controls: The focus needs to be on controls, not on cost. If the CISO of tomorrow is not able to effectively implement controls with regards to data segregation, data security and infrastructure security, then the cost of keeping the data in the cloud can be prohibitive. Incorporating the right set of controls into your organization’s cloud deployments from the start and establishing a sustainable monitoring mechanism is key to ensuring that cloud investments have a positive trade-off from a total cost of ownership perspective.
  3. Effective Governance and Reporting is Not an Afterthought: Keeping business stakeholders informed on IT policies and controls from the start, especially those critical to business operations and cybersecurity, is important. The CISO of tomorrow should put in place a granular governance and reporting mechanism that encapsulates not only the organizational IT assets and ecosystem, but also cloud deployments. This system should handle all risk- and compliance-reporting requirements and correlate them with business operations so that they make sense to business heads.
  4. Is the Business Continuity Plan in Place? Cyber attack planning and response is one of the biggest challenges for the CISO of tomorrow. With cloud-based infrastructure, the problem gets even more complicated. Having a clear incident response strategy and manual, a well-defined business impact analysis, and a mass notification and tracking mechanism are just some of the aspects that will be highly critical for ensuring that business disruptions are handled in a tightly coordinated manner. Again, having business context is important to achieve this.
  5. Should We Consider Cyber-Insurance? Indemnification against cyber attacks and the resulting loss of reputation, data and revenue is going to become a trend fairly soon. The CISOs of tomorrow should proactively monitor the need for and requirements of cyber insurance, and counsel business stakeholders appropriately. This will be an important hedging strategy to minimize possible financial losses from lawsuits, business disruptions and data losses.

Today, with ubiquitous Internet connectivity, cloud-based IT ecosystems and an ever-evolving cyber-engagement business model, cybersecurity is a growing social and business issue. The CISOs and CIOs of tomorrow need to ensure sustained support and focus from top management if they want to succeed in their cyber-fortification efforts. They also need to broaden their horizons across the business context, financial aspects and wider strategic objectives to ensure that the organization’s data security evolves in step. If the digital enterprise of tomorrow wants to grow and innovate, the question is not “are we doing enough today”, but rather, “are we thinking enough about tomorrow?”

MITRE Matrix: Going on the ATT&CK

August 19, 2015

By TK Keanini, Chief Technology Officer, Lancope

Most cybersecurity work is geared towards finding ways to prevent intrusions – passwords, two-factor authentication, firewalls, to name a few – and to identify the “chinks in the armor” that need to be sealed. The characteristics of malware are shared publicly to give everyone, from system administrators to end users, a heads-up to guard against an attack.

Little has been done, however, to identify the characteristics of an adversary after they are already inside a network, where they have ways to hide their presence.

Now the MITRE Corporation is working to remedy that. The government-funded, non-profit research and development center has released ATT&CK™ – the Adversarial Tactics, Techniques & Common Knowledge matrix. We recently sat down with the folks at MITRE to discuss the new matrix and the impact that it will have on the industry.

“There are a lot of new reports [of] new threat action groups…We want to find the commonality across incidents [and] adversary behavior,” Blake Strom told us. Strom is MITRE’s lead cybersecurity engineer heading up the project. “We want to focus on behaviors at a technical level.”

In the Cyber Attack Lifecycle, attention has been paid mostly to the opening rounds – reconnaissance, weaponization, delivery and exploitation. The ATT&CK wiki addresses the later stages – control, execution and maintenance – when the malware is already resident on the network.

According to Strom, ATT&CK further refines these three stages into a collection of nine different tactics: persistence, privilege escalation, credential access, host enumeration, defense evasion, lateral movement, execution, command and control, and exfiltration. Under these categories, the matrix identifies numerous adversarial techniques, such as logon scripts (characterized by their persistence), or network sniffing (a way to gain credential access).

“Some techniques require very specific [diagnostic tools], like BIOS implant,” Strom said. “It’s harder to detect because there aren’t a whole lot of tools.” Others might be very intricate, requiring several tools, he said.

A major purpose of developing the matrix is to give cybersecurity professionals signposts for what security tools need to be created, Strom said. System administrators that have seen some kinds of attacks and exploits may not have seen others yet, so the matrix also might provide guidance about what attackers might try in the future.

ATT&CK might also prove useful in addressing insider threats, since the matrix focuses on how attackers perform their actions once inside a system. “There’s some overlap between what an insider could do and what attack vector groups are doing,” Strom said.

As for gathering the information, MITRE invites contributions to the database of tactics and techniques. Strom said the organization is serving as curator of the wiki; contributors can’t modify the ATT&CK matrix on their own, but can submit the information to his group.

Strom said MITRE is working to bring the matrix to the attention of the broader IT community. It was presented at the NSA Information Assurance Symposium at the end of June, and will be presented again at the CyberMaryland conference in October.

How to create the perfect climate for endpoint data migration

August 18, 2015

By Andy Hardy, EMEA Managing Director, Code42

Today’s enterprise organizations face an escalating problem. As the use of laptops, tablets and smartphones continues to grow, the amount of data created and stored on these endpoint devices is increasing at pace.

In fact, typically half of an organization’s business data is now stored on these ‘endpoints,’ and away from the traditional domain of the centralized corporate data center.

As workforce mobility continues to grow, new software, operating systems and regular firmware rollouts have had to keep pace with end-user expectations.

This constant need to upgrade the user experience has become a strategic nightmare for the IT department to manage—especially as valuable enterprise data now shifts from device to device, and platform to platform. Endpoint data is more vulnerable to loss as a result of the frequency of operating system (OS) migrations and technology refreshes.

In fact, IT departments spend significant amounts of time and money each year migrating data onto new devices. On average, organizations typically replace 25% to 30% of their client devices annually.

This means that for routine replacements due to device losses, failure and planned migrations, an organization with 10,000 devices could be faced with 3,000 time-consuming and disruptive migration processes in a mere 12-month period, or roughly eight per day. It becomes easy to see why, when handled incorrectly, data migration is a drain on resources and a waste of time for employees whose devices undergo the process.

Hassle-free data migration from devices is now a top concern for CIOs
It is best practice to ensure files are continuously and automatically backed up prior to undertaking data migration. Continuous, automatic backup eliminates ad hoc backup and saves time and money that would otherwise be associated with replacing workstations and laptops.

Some endpoint data protection platforms allow individuals to easily carry out their own migrations, without IT department intervention. This self-service approach means employees migrate data when it works for them, and are up and running in mere minutes—as opposed to leaving their devices with the IT department for a few days or weeks.

Ease the pressure with worry-free data migration
In order to avoid inefficient and costly data migration processes, companies must first identify their endpoint backup needs and ensure they are met. Doing so requires evaluation of a range of factors, including the quantity of data that needs to be stored, and the location and duration of data storage. Other critical factors to consider include the time available for a backup, how to preserve the integrity of the stored data, and whether the implemented system is scalable.

In many organizations, requests for file restore assistance may sit unaddressed for several days due to overstretched IT departments or lack of resources. However, the pressure caused by these types of situations can be eased without increasing overhead. Instead, enterprise users should be empowered to quickly and easily restore their own data whenever necessary.

This serves the dual purpose of making the lives of IT teams easier, whilst freeing them up to concentrate on projects that add real business value.

A better path to workstation and laptop migration
So what does an effective, efficient migration process look like? In the past, the IT department would roll out upgrades in waves, which had the tendency to frustrate users. By ensuring files are first automatically and transparently backed up, the most recent version of employees’ work can quickly be restored to new devices. The result is a streamlined, simplified operation where desktop support focuses on the migration process itself rather than troubleshooting backup—all the while keeping the end-user experience intact, and reducing IT costs associated with migration significantly.

It really is crucial to implement a comprehensive endpoint backup solution before undertaking and implementing a data migration process. Without it, your company is wasting time and money, not to mention risking an expensive disaster recovery process.

Correct implementation should protect everyone in the enterprise, regardless of the number of users or their locations. It should run quietly in the background, allowing users to continue to work, and it should be transparent, with users unaware that backup is occurring. Also, the knowledge that restoring a file is only a few clicks away means that users are often happy to restore data themselves; a ‘golden win’ for the IT department.

Trusting the Cloud: A Deeper Look at Cloud Computing Market Maturity

August 12, 2015

By Frank Guanco, Research Project Manager, CSA

Due to its exponential growth in recent years, cloud computing is no longer considered an emerging technology. Cloud computing, however, cannot yet be considered a mature and stable technology. Cloud computing comes with both the benefits and the drawbacks of innovation. To better understand the complexity of cloud computing, the Cloud Security Alliance (CSA) and ISACA recently released the results of a study examining cloud market maturity through four lenses: cloud use and satisfaction level, expected growth, cloud-adoption drivers, and limitations to cloud adoption.

The study determined that the increased rate of cloud adoption is the result of perceived market maturity and the number of services available to implement, integrate and manage cloud services. Cloud adoption is no longer thought of as just an IT decision; it’s a business decision. Cloud has become a critical part of a company’s landscape and a cost-effective way to create more agile IT resources and support the growth of a company’s core business.

Cloud Computing Maturity Stage
Cloud computing is still in a growing phase. This growth stage is characterized by the significant adoption, rapid growth and innovation of products offered and used, clear definitions of cloud computing, the integration of cloud into core business activities, a clear ROI and examples of successful usage. With roles and responsibilities still somewhat unclear, especially in the areas of data ownership and security and compliance requirements, cloud computing has yet to reach its market growth peak.

Cloud Adoption and Growth
How does cloud computing continue to mature? Security and privacy continue to be the main inhibitors of cloud adoption because of insufficient transparency into cloud-provider security. Cloud providers do not supply cloud users with information about the security that is implemented to protect cloud-user assets. Cloud users need to trust the operations and understand any risk. Providing transparency into the system of internal controls gives users this much needed trust.

Companies are experimenting with cloud computing and trying to determine how cloud fits into their business strategy. For some, it is clear that cloud can provide new process models that can transform the business and add to their competitive advantage. By adopting cloud-based Software as a Service (SaaS) applications to support the business, organizations can channel resources into the development of their core competencies. Infrastructure as a Service (IaaS) and Platform as a Service (PaaS) adoption enables businesses to experiment with new technologies and new services that would require expensive resources if they were implemented in-house. IaaS and PaaS also allow companies to adapt to rapid changes in market demand, because they create a completely new, faster and cheaper offering.

User Satisfaction
According to the ISACA/CSA respondents, the level of satisfaction with cloud services is on the rise. Cloud services are now commonly being used to meet business as usual (BAU) and strategic goals, with the expectation that they will be more important for BAU than strategic plans in the future.

It’s not perfect yet, but the level of satisfaction with cloud services and deployment models is expected to increase as the market matures and vendors define standards to minimize the complexity around cloud adoption and management. The increase of cloud service brokers and integrators is helping businesses to integrate applications, data and shared storage in a more efficient way, making ongoing maintenance much easier.

Moving Past the Challenges
The ISACA/CSA study found that the most significant cloud concerns involve security and international data privacy requirements, data custodianship, legal and contractual issues, provider control over information, and regulatory compliance. Both cloud providers and cloud users have a role in moving past these concerns. Cloud providers need to demonstrate their capabilities to deliver services in a secure and reliable manner. Companies must understand their own accountability for security and compliance and their responsibility for implementing the necessary controls to protect their assets.

Gaining Maturity
The decision to invest in cloud products and services needs to be a strategic decision. Top management and business leaders need to be involved throughout a cloud product’s life cycle. Any cloud-specific risk should be treated as a business risk, requiring management to understand cloud benefits and challenges to be able to address cloud-specific risk. The need remains for better explanations of the benefits that cloud can bring to an organization and how cloud computing can fit into the overall core strategy of a business.


To read the entire “Cloud Computing Market Maturity” white paper as well as the study results, please click here.
To learn more about CSA, visit https://cloudsecurityalliance.org.
More information on ISACA may be found at https://www.isaca.org.

What is Quantum Key Distribution?

August 11, 2015

By Frank Guanco, Research Project Manager, CSA

Following this year’s RSA Conference, the Cloud Security Alliance formed a new working group called the Quantum-Safe Security Working Group (QSSWG). The QSSWG recently published a paper entitled “What is Quantum-Safe Computing” to help raise awareness and promote the early adoption of technologies to protect data in the cloud in preparation for commercially available quantum computers, which should theoretically be able to crack RSA and ECC encryption algorithms.

As a follow-up to this paper, the QSSWG has recently published a new paper titled “What is Quantum Key Distribution”, which addresses the issues around sharing and securing encryption keys in a quantum world. The new position paper provides an overview of key distribution in general, examines some of the current approaches and existing challenges of key distribution, and provides a brief overview of how Quantum Key Distribution (QKD) works in the real world. Finally, the paper looks at how QKD has evolved from a largely experimental field into a solid commercial proposition and what the road ahead for QKD might look like. We welcome your thoughts on this latest research, and hope you find it valuable.

Are endpoints the axis of evil or the catalyst of creation?

August 11, 2015

By Dave Payne, Vice President of Systems Engineering, Code42

If security pros had their way, they’d make laptops so secure they’d be virtually unusable. Protecting against every imaginable attack–not to mention the fallibility of the human connected to the laptop–is a battle we keep losing. Seventy percent of successful breaches happen at the endpoint. So it’s either keep layering the security stack or abolish laptops altogether—because they’re counterintuitive to a secure enterprise.

On the flip side, the workforce views endpoint devices as a marvelous, immutable extension of self: the computers they carry are magical devices that transform mere mortals into digital superhumans—giving them speed, power, boundless knowledge and connection. Take away that muscular machine, and employees will rebel.

Are endpoints awesome or evil?
I look at the conundrum between IT and the workforce as the classic good vs. evil story. The looming threats are disorienting, but if IT takes the right approach, they can give “analog” humans what they want AND protect the enterprise too.

The first step is accepting reality: the Haddon matrix, the most commonly used paradigm in the injury prevention field, says you plan for a disaster by planning for its three phases – pre-disaster, disaster and post-disaster. The presumption is that disaster is inevitable.

How does this translate in IT? Through acceptance that the world is inherently dangerous, by blocking and tackling to address known issues, and planning for pre-disaster, disaster and post-disaster to limit risk.

Survive in the wild with a simple plan
My session at the Gartner Catalyst Conference 2015—called Mitigate Risk Without Multiplying the Tech Stack—is about the first commandment of IT: thou shalt have a copy of data in an independent system in a separate location. But more than that, it’s about utilizing the backup agent already on employee laptops for additional tasks. Once the data is stored, IT can use it to rapidly remediate and limit risk following a breach, protect against insider threats, and even add backup sets from third-party clouds—where employees sync and share data—to a centralized platform where all the data can be protected and used to recover from any type of data loss.

Data security is a combination of protection, detection and response. As Bruce Schneier says:

You need prevention to defend against low-focus attacks and to make targeted attacks harder. You need detection to spot the attackers who inevitably get through. And you need response to minimize the damage, restore security and manage the fallout.

What I tell IT and infosec pros is this: focus on what you can control and leverage what you have. Instead of deploying a new agent every time you need to block a behavior or protect against a threat, wrap protection around the data with real-time, continuous data capture.

With this approach, you give employees their magical machines while staying focused on data recovery, visibility and forensics, as well as security and analytics. Now instead of good vs. evil, it’s win/win.