Are Cloud Services Taking on a Life of Their Own?

June 30, 2014

By Nina Seth, Senior Product Marketing Manager, Accellion

A new report from Skyhigh Networks – a company that tracks the use of cloud services for corporate customers – found that cloud service use is growing exponentially within enterprises. The findings in the report were based on traffic generated in the cloud by more than 8.3 million users in organizations spanning multiple industries. The research showed that services seem to be multiplying by the minute, deployed faster than IT can even say, “Help!”

Check out these numbers:

  • On average, organizations use 759 cloud services – a 33 percent increase since last quarter.
  • On average, organizations use 24 different file sharing services and 91 different collaboration services.

Managing such an overwhelming quantity of services could only be done by a superhero in disguise. And since most IT administrators aren’t donning capes, they are finding themselves outnumbered by the seemingly unstoppable growth of cloud services. Does it sound ominous? It should, particularly since Skyhigh did additional digging – looking at encryption, retention policies, and past security compromises – and discovered that of the 3,571 cloud services in use, only 7 percent were “enterprise-ready.”

So, not only are there far too many cloud services available for employees to use on a whim, but the vast majority are not secure. It’s time for enterprises to take inventory of the services in use and be selective about which ones are used to share or collaborate on sensitive business information. No company should have 91 collaboration solutions running, or 24 file sharing solutions. Having this many competing solutions running in an organization decreases productivity, as employees try to learn how to use different tools, and it dramatically increases the risk of data leakage through an insecure, public-cloud solution.

Take back control of your IT environment by deploying a standardized set of cloud services for file sharing and collaboration that are designed for enterprise use, with robust security capabilities.

The 5 Steps to Prepare for a PCI Assessment

June 19, 2014

Preparing for a Payment Card Industry (PCI) compliance assessment is a major task for an organization of any size. Companies that store, process, or transmit credit card transactions are required to comply with PCI’s Data Security Standard (DSS). PCI DSS comprises 12 requirements that specify the framework for a secure payment environment. The PCI requirements are prescriptive in nature and provide guidance for organizations to become secure.

As a Qualified Security Assessor (QSA), BrightLine has performed hundreds of audits. From our experience, there are five steps to follow when preparing for a PCI DSS assessment.

Complete a Risk Assessment
The goal of PCI DSS is to reduce the risk of credit card breaches. That, however, is a broad statement intended to apply to any business model and security control set. In order for an organization to effectively manage its own risk, it must complete a detailed risk analysis of its own environment. The goal of the risk analysis is to determine the threats and vulnerabilities to the services the organization performs and the assets it holds. As part of a risk assessment, the organization should define its critical assets – including hardware, software, and sensitive information – and then determine risk levels for those components. This in turn allows the organization to prioritize its risk-reduction efforts. It is important to note that risks should be prioritized first for systems that will be in scope for PCI DSS, and then for other company systems and networks. The PCI Security Standards Council (SSC) and the PCI DSS requirements themselves provide a great deal of guidance on scoping a PCI DSS environment, but this may be an area where the organization would want to contract with a QSA firm to validate the scope.
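To make the prioritization step concrete, here is a minimal sketch that ranks assets with a simple likelihood-times-impact model, putting PCI-scoped systems first. The assets and scores are purely illustrative, not a prescribed methodology.

```python
# A minimal sketch of risk prioritization, assuming a simple
# likelihood-times-impact model; assets and scores are illustrative.
assets = [
    # (asset, likelihood 1-5, impact 1-5, in PCI DSS scope?)
    ("cardholder database", 4, 5, True),
    ("payment web app",     4, 4, True),
    ("HR file server",      2, 3, False),
]

def risk_score(likelihood, impact):
    """Classic qualitative risk model: risk = likelihood x impact."""
    return likelihood * impact

# Rank PCI-scoped systems first, then everything else, highest risk first.
ranked = sorted(assets, key=lambda a: (not a[3], -risk_score(a[1], a[2])))
for name, likelihood, impact, in_scope in ranked:
    scope = "PCI" if in_scope else "other"
    print(f"{name:20s} scope={scope:5s} risk={risk_score(likelihood, impact)}")
```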

Document Policies and Procedures
Once the risk assessment has been completed, the organization should have a much clearer view of its security threats and risks and can begin determining its security posture. Policies and procedures form the foundation of any security program and comprise a large percentage of the PCI DSS requirements. Business leaders and department heads should be armed with the PCI DSS requirements and the results of the risk analysis to establish detailed security policies and procedures that address the requirements but are tailored to the business processes and security controls within the organization.

Identify Compliance Gaps
Building upon the foundation of security policies, the committee of business leaders and department heads should now review the PCI DSS requirements in detail, discuss any potential compliance gaps, and establish a remediation plan for closing those gaps. This is where it is important to have the full support of business leaders who can authorize the necessary funds and manpower to implement any remediation activities. Once the remediation plan is completed, it may also be reassuring at this stage to once again contract with a QSA firm to conduct a gap analysis. A gap analysis can either identify high-level areas that would not be compliant or can include a review much like a full PCI DSS assessment, with the big difference being that a missed requirement will not fail the audit. The QSA will review security policies for accuracy and completeness and help identify any additional compliance gaps that need remediation before a full-scale assessment. Once the final control set is in place, it is critical to perform internal vulnerability scans and to contract with an Approved Scanning Vendor (ASV) to perform quarterly external scans. This is also the time to schedule the required annual penetration testing. Penetration tests are typically, though not necessarily, performed by third parties and can take some time to schedule, perform, and remediate (if necessary). The results of a PCI DSS assessment will be delayed until the penetration test is completed, so now is the time to schedule it.

Conduct Training to Educate Employees
After remediation activities are completed and policies and procedures are implemented, the next step is training and educating employees. Technical employees should obtain any certifications or training necessary to operate and monitor the security control set in place. If software development is performed at the organization, OWASP offers training materials for secure coding guidelines; incident responders can review NIST SP 800-61. Non-technical employees must be trained on general security awareness practices such as password protection, spotting phishing attacks, and recognizing social engineering. All the security controls and policies in the world will provide no protection if employees do not know how to operate the tools in a secure manner. Likewise, the strongest 42-character password, with special characters, numbers, and mixed case, is utterly broken if an employee writes it on a sticky note attached to their monitor.

It’s Assessment Time
From this point the organization is ready for a full-scale PCI DSS assessment and can enter a maintenance mode in which periodic internal audits occur and regular committee meetings are held to perform risk assessments and update policies, procedures, and security controls as necessary to respond to an ever-changing threat landscape. PCI DSS must become integrated into the everyday operation of the organization, both to keep the organization secure and to ease the burden of the annual assessments. As Bob Russo, head of the PCI SSC, stated: “In the case of the PCI standards, it’s especially important that it does not become a once a year event like people think of when they think of compliance…You can be in compliance today and be totally out of compliance tomorrow.”

About the Author
Phil Dorczuk is a Senior Associate with BrightLine, where he specializes in PCI DSS assessments and gap assessments.

OpenSSL CCS Injection Vulnerability Countdown

June 16, 2014

By Krishna Narayanaswamy, Netskope Chief Scientist

On June 5, researchers discovered an OpenSSL vulnerability (CVE-2014-0224) that could enable a man-in-the-middle attack against some versions of OpenSSL. Called the OpenSSL Change Cipher Spec (CCS) Injection, this vulnerability requires that both the server and the user’s client be vulnerable; it enables an attacker to modify traffic between the server and client and subsequently decrypt the entire communication.


Netskope has been researching this vulnerability across the enterprise cloud apps in our Cloud Confidence Index database, starting with 4,837 apps across 44,572 domains, and found that, as of June 6, 2014, 1,832 cloud apps were vulnerable. On June 11, there were 1,732; on June 13, there were 1,656; and on June 16, there were 1,416.

As of today, there are 3,421 apps that are not vulnerable.

See the vulnerable apps countdown on the Netskope blog.

Learn more in today’s Movie Line Monday by researcher Ravi Balupari: http://www.netskope.com/blog/movie-line-monday-openssl-ccs-injection-vulnerability

Recommendations

All OpenSSL client versions are vulnerable. Affected server versions include OpenSSL 0.9.8, 1.0.0, and 1.0.1. It is recommended that all users of OpenSSL servers running 1.0.1 or earlier upgrade their systems as a precaution (a quick version-check sketch follows the list below):

  • OpenSSL 0.9.8 SSL/TLS users (client and/or server) should upgrade to 0.9.8za
  • OpenSSL 1.0.0 SSL/TLS users (client and/or server) should upgrade to 1.0.0m
  • OpenSSL 1.0.1 SSL/TLS users (client and/or server) should upgrade to 1.0.1h
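As a quick sanity check, the sketch below (a hypothetical helper, not an official OpenSSL tool) compares the OpenSSL build that the local Python interpreter was linked against with the patched releases listed above:

```python
import re
import ssl

# Patched releases per the advisory bullets above: base branch -> minimum
# letter suffix. For these releases, lexicographic comparison of the letter
# suffix matches OpenSSL's ordering (e.g. "g" < "h", "z" < "za").
PATCHED = {"0.9.8": "za", "1.0.0": "m", "1.0.1": "h"}

def is_patched(version_string):
    """Return True/False for covered branches, None if undetermined."""
    m = re.search(r"OpenSSL (\d+\.\d+\.\d+)([a-z]*)", version_string)
    if not m:
        return None  # unrecognized version string
    base, letter = m.groups()
    if base not in PATCHED:
        return None  # branch not covered by this advisory
    return letter >= PATCHED[base]

# ssl.OPENSSL_VERSION looks like "OpenSSL 1.0.1g 7 Apr 2014".
print(ssl.OPENSSL_VERSION, "-> patched:", is_patched(ssl.OPENSSL_VERSION))
```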

What Should You Do?

To protect yourself from this and future vulnerabilities, you should:

  • Discover all of the cloud apps running in your environment.
  • Measure the apps’ enterprise-readiness against an objective yardstick (CSA Cloud Controls Matrix is a great starting point, and there are also vendors, including Netskope, who perform this service free of charge).
  • Compare the discovered apps against the list of remaining vulnerable apps and take steps to curtail usage or introduce countermeasures (a minimal matching sketch follows this list).
  • Beyond the apps affected by the OpenSSL CCS Injection vulnerability, review all of the low-scoring apps and determine whether they’re business-critical.
  • For non-critical apps, help users migrate to more appropriate apps.
  • For critical apps, work with your app vendor to introduce enterprise capabilities and develop a plan to remediate vulnerabilities.
  • Adopt a process to continuously discover and gain visibility into the cloud apps in your environment, including the unsanctioned ones, as they change frequently.
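To make the comparison step concrete, here is a minimal sketch assuming you can export the discovered apps and the still-vulnerable apps as simple lists; all app names are invented.

```python
# Hypothetical exports: apps seen in your traffic vs. apps still vulnerable.
discovered_apps = {"exampleshare.com", "examplecrm.com", "examplenotes.com"}
vulnerable_apps = {"examplenotes.com", "exampleboard.com"}

# Apps that are both in use and still vulnerable need curtailment
# or countermeasures first.
at_risk = discovered_apps & vulnerable_apps
print("Apps to curtail or add countermeasures for:", sorted(at_risk))
```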


TweetDeck — Just another hack or a missed opportunity to tighten cloud security?

June 13, 2014

By Harold Byun, Senior Director of Product Management, Skyhigh Networks

The recent TweetDeck hack on Twitter presents a common cloud dilemma for information security teams. On the one hand, the BYOX trends that drive cloud service adoption and worker self-enablement are transforming traditional IT into a User-Centric IT model that focuses on empowering and enabling workers. On the other hand, the free-wheeling nature of the cloud and the regular news of breaches creates a gap in security teams’ ability to quickly assess risk and exposure for these types of events. Further, with the cloud-based self-service model, it becomes more difficult to identify affected users and formulate a rational response plan.

This shift not only drives home the importance of gaining in-depth visibility into cloud usage, but also emphasizes that the role of information security is transforming in terms of remediation strategies and user education. As the TweetDeck hack exemplifies, there are two alternative response scenarios security teams can take.

In one scenario, security teams can quickly assess that 35.9% of their users have accessed Twitter in the past week, and of these users, 42.2% also accessed TweetDeck.  This readily gives InfoSec teams an assessment of their attack surface for this specific cloud-based vulnerability.  In fact, Skyhigh ran this exact analysis on its own platform and determined that over the past week, the average enterprise customer had 11,991 users accessing Twitter, with 5,060 of those accessing TweetDeck.  Using these findings, a security response team can easily notify the affected TweetDeck users of the breach and provide remediation instructions as well as notify potentially affected Twitter users of the vulnerability.  For teams interested in a more proactive approach, sequential transaction analysis can also be used to identify TweetDeck sessions and subsequent site accesses or cross-domain accesses.
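As a rough illustration of this kind of attack-surface analysis, the sketch below computes the same percentages from per-user access logs; the log structure, users, and domains are all hypothetical, not Skyhigh’s actual pipeline.

```python
# Hypothetical per-user access logs: user -> set of domains visited this week.
access_log = {
    "alice": {"twitter.com", "tweetdeck.twitter.com"},
    "bob":   {"twitter.com"},
    "carol": {"salesforce.com"},
}

twitter_users = {u for u, sites in access_log.items() if "twitter.com" in sites}
tweetdeck_users = {u for u in twitter_users
                   if "tweetdeck.twitter.com" in access_log[u]}

print(f"{len(twitter_users)/len(access_log):.1%} of users accessed Twitter")
print(f"{len(tweetdeck_users)/max(len(twitter_users), 1):.1%} of those also used TweetDeck")
# tweetdeck_users doubles as the notification list for remediation emails.
```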

For additional monitoring, analysts can also look at concurrent logins and geographically disparate logins to identify compromised accounts and any other anomalous activity from specific users and/or impacted endpoints given that login tokens may very well be a logical target of this type of vulnerability.  Further, organizations can formulate a user attack landscape based on breached services accessed by users to identify clusters of higher risk internal targets.  Finally, organizations can implement user education redirect pages for users accessing the impacted cloud service to further notify them of the risks associated with using a given service.  This type of real-time education can have a profound effect on increasing user awareness to potential risks.
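One common way to operationalize the geographically-disparate-logins check is an “impossible travel” heuristic: flag consecutive logins whose implied travel speed is implausible. A minimal sketch, assuming login events carry timestamps and coordinates geolocated from the source IP (all values illustrative):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

MAX_KMH = 900  # roughly airliner speed; anything faster is suspicious

def disparate_logins(events):
    """events: time-sorted list of (hours_since_epoch, lat, lon) per user."""
    for (t1, la1, lo1), (t2, la2, lo2) in zip(events, events[1:]):
        hours = max(t2 - t1, 1e-6)  # avoid division by zero
        if haversine_km(la1, lo1, la2, lo2) / hours > MAX_KMH:
            yield (t1, t2)  # flag this pair of logins for investigation

logins = [(0.0, 40.71, -74.01), (1.0, 51.51, -0.13)]  # NYC, then London 1h later
print(list(disparate_logins(logins)))  # flagged: ~5,570 km in one hour
```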

The above response plan is one scenario: a comprehensive set of actions that teams could readily implement and that would ultimately provide better visibility and monitoring for this vulnerability and future exposures as well.

There is also an alternate scenario: security teams simply note the vulnerability and service breach and rely on existing security solutions to notify them of a potential exploit on their systems. After the noise around this particular breach dies down, they return to their day jobs and focus on other, higher-priority issues. Unfortunately, this latter scenario is likely the more common path taken.

The irony here is that just as BYOX gives workers a choice on which services to use for work, information security also has a choice on how to educate users and respond to events in a more unconstrained technology environment.  The visibility and analytics needed to take a more proactive approach to address your organization’s exposure to breaches exist; it’s up to the security practitioner to leverage the information that’s available to him or her to enact a more proactive and robust security response model.

DON’T GET SNOWDENED: 5 QUESTIONS EVERY CEO SHOULD ASK THEIR CIO / CISO

June 5, 2014

By Sekhar Sarukkai, Founder and VP of Engineering, Skyhigh Networks

Today is the one-year anniversary of the historic Snowden disclosure. In the year since the first stories about Edward Snowden appeared, one of the lasting effects of the scandal is a heightened awareness of the risk posed by rogue insiders. This increased focus on rogue insiders has spread beyond the government to the private sector, and from security circles to corporate executives.

From product designs and formulas to customer information, all companies have data that could harm their business in the hands of a competitor, making insider threats like Snowden an executive-level concern due to the potential negative impact on the company’s business operations and value. And with the ubiquity of cloud services, insiders are increasingly exploiting the cloud to exfiltrate data.

We’ve distilled the lessons learned from the Snowden scandal into 5 questions every CEO should be asking their CIO / CISO in order to avoid a catastrophic rogue-insider event in the private sector, both in terms of the cloud as a vector of exfiltration and in terms of protecting data stored in the cloud.

1. Can we identify unusual user or network activity to cloud services?
Many companies already archive log data from firewalls and proxies and use basic search capabilities to look for specific behavior. Unfortunately, basic search capabilities are ineffective at analyzing petabytes of data to proactively identify different forms of anomalous behavior. Today, there are machine learning algorithms that establish baseline behavior for every user and every cloud service and immediately identify any anomalous activity indicative of a security breach or insider threat.
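As a toy illustration of such baselining (not any vendor’s actual algorithm), the sketch below flags a user’s daily upload volume when it sits far outside that user’s own historical baseline; real systems model many more signals per user and per service.

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's activity if it sits more than `threshold`
    standard deviations above this user's own baseline."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today > mu
    return (today - mu) / sigma > threshold

uploads_mb = [12, 9, 15, 11, 13, 10, 14]  # a user's normal week, in MB
print(is_anomalous(uploads_mb, 10))       # False: within baseline
print(is_anomalous(uploads_mb, 900))      # True: a ~900 MB exfiltration spike
```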

2. Can we track who accesses what cloud-hosted data and when?
Snowden was able to steal roughly 1.7 million files, and to this day the NSA doesn’t know exactly what he took. With the rapid adoption of cloud services, companies need to make sure that their cloud services provide basic logging of all access, including access by admins and via application APIs. Furthermore, companies need to make sure that cloud services provide historical log data of all accesses in order to support forensic investigations when an event does occur.

3. How are we protecting against insider attacks at the cloud service providers?
Encrypting data using enterprise-managed keys will enable employees to access information while stopping unauthorized third parties from reading the same data. Experts recommend encrypting sensitive information stored on premises and also in the cloud. By encrypting data in this manner, companies add an additional layer of protection over and above authentication and authorization that protects against insider attacks at the cloud service provider end.
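A minimal sketch of this pattern using the Python cryptography library’s Fernet recipe; in practice the enterprise-managed key would live in an on-premises KMS or HSM rather than a local variable, and the record shown is invented.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Stand-in for a key held in an enterprise KMS/HSM, never by the provider.
enterprise_key = Fernet.generate_key()
fernet = Fernet(enterprise_key)

record = b"customer: Jane Doe, card ending 4242"
ciphertext = fernet.encrypt(record)  # only this leaves for the cloud

# The provider (and any rogue insider there) sees only ciphertext;
# authorized employees decrypt with the enterprise-held key.
assert fernet.decrypt(ciphertext) == record
```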

4. How do we know unprotected sensitive data is not leaving the corporate network?
Many companies enforce data loss prevention policies for outbound traffic. With the increasing use of cloud services (the average company uses 759 cloud services), companies should also extend their access control and DLP policy enforcement to data stored in the cloud. And as they do so, they should leverage their existing infrastructure rather than reinventing the wheel. Companies should consider augmenting on-premises DLP systems and their existing processes to extend DLP to the cloud, with reconnaissance services that look for sensitive data in the cloud services in use by the enterprise.
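As an illustration of the kind of content inspection a DLP policy performs (a simplified sketch, not any product’s engine), the snippet below looks for candidate payment card numbers and validates them with the Luhn checksum before flagging the content:

```python
import re

# Candidate PANs: 13-16 digits, optionally separated by spaces or hyphens.
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(digits):
    """Luhn checksum: doubles every second digit from the right."""
    total, parity = 0, len(digits) % 2
    for i, d in enumerate(int(c) for c in digits):
        if i % 2 == parity:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def contains_card_number(text):
    for match in CARD_RE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            return True
    return False

print(contains_card_number("order ref 4111 1111 1111 1111"))  # True: test PAN
print(contains_card_number("invoice 1234-5678"))              # False
```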

5. Can we reduce surface area of attack by limiting access based on device and geography?
The ability to access sensitive information should be dependent on context. For example, a salesperson in Indianapolis viewing customer contacts stored in Salesforce for customers in her territory using a secure device is appropriate access. Using an insecure or unapproved device from another location may not be appropriate and could expose the company to risk. Limiting access to appropriate devices and appropriate locations will help prevent exposure.
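A toy sketch of such a context-aware access rule follows; the device identifiers, allowed countries, and the policy itself are purely illustrative.

```python
# Illustrative policy data: managed devices and the salesperson's territory.
APPROVED_DEVICES = {"laptop-ssmith-001"}  # managed, encrypted endpoints
ALLOWED_COUNTRIES = {"US"}

def allow_access(device_id, country):
    """Grant access only from a managed device inside the territory."""
    managed = device_id in APPROVED_DEVICES
    in_territory = country in ALLOWED_COUNTRIES
    return managed and in_territory

print(allow_access("laptop-ssmith-001", "US"))  # True: appropriate access
print(allow_access("personal-phone", "RO"))     # False: deny and alert
```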

The Evolution of Threats against Keys and Certificates

June 5, 2014

By George Muldoon, Regional Director, Venafi

In my blog post about the Heartbleed hype, I stress that threats against keys and certificates neither started with the Heartbleed vulnerability nor will end with it. Threats specifically against keys and certificates go back to 2009 and 2010, when Stuxnet and Duqu provided a virtual blueprint to cyber criminal communities around the world by using stolen certificates to make the malware infection payload look legitimate.

Attacks on the Certificate Authorities themselves accelerated in 2011, with well-known CAs such as Comodo and DigiCert suffering breaches. In September 2011, with the breach of DigiNotar, some of its customers were left with no choice but to consider shutting down operations altogether. By the end of 2011, there were 12 significant, publicly disclosed breaches of Certificate Authorities around the globe. It’s also worth mentioning that it was New Year’s Eve 2011 when Heartbleed was “born.”

In simple terms, Heartbleed is the result of a developer’s coding flaw. It’s a mistake that resulted in a massive 2+ year exposure. And no one knew it was happening. Vulnerabilities that expose keys and certificates occur frequently, although certainly not on as massive a scale as Heartbleed. Weak cryptography, along with weak processes and mistakes working with cryptography, are a daily occurrence.

In 2012, this burgeoning war on trust continued to evolve. To counteract the growing install base of advanced threat detection solutions in Global 2000 enterprises, we began to see a run on code signing certificates and widespread adoption of signing malware with certificates. Adobe announced that its code signing infrastructure had been compromised. Security vendors themselves were targeted, such as the case in which Bit9 had its secret code signing certificates stolen.

Bad actors of 2012 also realized they could subvert trust by obtaining and misusing Secure Shell (SSH) keys on a wide scale. Various breaches and vulnerabilities that ultimately exposed SSH keys were reported, most notably at GitHub and FreeBSD. Exposures involving SSH keys are even more nebulous in some regards, in that enterprises have much less visibility into or control of them. Moreover, unlike a digital certificate, which has a validity period and will eventually expire, SSH keys have no expiration date.

If 2012 was the year that attacks against trust learned to walk, then 2013 was the year they learned to drive…and drive fast. New attack schemes against TLS/SSL, such as Lucky 13, BEAST, CRIME, BREACH, and more, emerged, allowing attackers to exfiltrate sensitive data from encrypted sessions. Edward Snowden went from being an obscure, soft-spoken NSA contractor living in Hawaii to becoming a household name after stealing thousands of classified NSA files—all made possible by subverting the trust and access security provided by SSH keys and digital certificates. The year 2013 also marked the first time we began to see a significant percentage increase in Android malware enabled by digital certificates (24% of all Android malware as of October 2013, up from 6.6% in 2012 and 2.9% in 2011).

Here in 2014, attacks on trust have graduated from college and are here to stay. Highly complex Advanced Persistent Threats exist with the main objective of stealing legitimate corporate keys and certificates of all types. Have a look at the breakdown of “El Careto” (or “The Mask”), which was discovered by Kaspersky in February after 7+ years undetected in the wild. Careto, which looks like a state-sponsored campaign due to its complexity and professionalism, gathers sensitive data from infected systems, largely including VPN configurations, SSH keys, and RDP files.

We’ve also seen substantive evidence of forged certificates being used to decrypt and monitor traffic as well as steal credentials and sensitive data. In a recent study by Facebook and Carnegie Mellon researchers, over 6,800 connections to Facebook used forged certificates.
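One way such forged certificates can be detected is certificate pinning: compare the certificate a server actually presents against a known-good fingerprint recorded out of band. A minimal sketch with Python’s standard library (the pinned value is a placeholder, and real deployments must handle legitimate key rotation):

```python
import hashlib
import socket
import ssl

# Placeholder: the SHA-256 fingerprint you recorded over a trusted channel.
PINNED_SHA256 = "expected-fingerprint-goes-here"

def presented_fingerprint(host, port=443):
    """Fetch the server's certificate and return its SHA-256 fingerprint."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
    return hashlib.sha256(der).hexdigest()

fp = presented_fingerprint("www.facebook.com")
print("MITM suspected!" if fp != PINNED_SHA256 else "certificate matches pin")
```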

Then, over the past few weeks, evidence emerged around “ZBerp,” a hybrid Trojan “love child” of Zeus and Carberp that uses SSL to secure communications with its command-and-control servers and thereby evade detection by today’s most popular network security products.

From the accidental introduction of vulnerabilities like Heartbleed to advanced, persistent, professional efforts to both circumvent and misuse keys and certificates, the risk to these cryptographic assets is evolving and advancing. These threats undermine the trust we inherently place in keys and certificates to authenticate people and machines and to encrypt data we intend to safeguard and keep private. PKI is under attack. Securing, protecting, and controlling enterprise keys and certificates is no longer simply a nice operational benefit. It’s a must-have to defend the veracity of your entire business and brand.

The Cloud Multiplier Effect on Data Breaches

June 4, 2014

by Krishna Narayanaswamy, Chief Scientist at Netskope

All of the things we love about cloud and SaaS apps can also put us at risk of a data breach. First, we love that we can get our favorite apps quickly and easily without having to answer to anyone. This leads to massive app growth, usually of inherently low-quality and unsecured apps, and often outside of the purview of IT and security teams. Second, we can get access to our favorite apps from any device, and each of us often does so from three or more devices. This increases the surface area of a potential breach. And finally, today we can share content to and from those apps with greater speed than ever before, which means it’s easy for content to get out of our control. Each of these examples can be thought of as a multiplier, a factor that increases the probability of a data breach.

To take the pulse of the market and quantify this idea, we asked the Ponemon Institute, a foremost expert in data breach research, to conduct a study on the topic. Today we released the results of that study in a first-of-its-kind report called “Data Breach: The Cloud Multiplier Effect.”

Check out the full report or the handy infographic (also shown below), which points to some of the key learnings from the study.

The study, which is based on a survey of 613 IT and security professionals, finds that increasing use of cloud services can increase the probability of a $20 million data breach by as much as 3x. It also revealed other key findings, including:

  • 36 percent of business-critical applications are housed in the cloud, yet IT isn’t aware of nearly half of them;
  • 66 percent of respondents believe that their organization’s use of the cloud diminishes their ability to protect sensitive or confidential information; and
  • 72 percent of respondents don’t believe that their cloud service provider would notify them immediately if they had a data breach involving the loss or theft of their intellectual property or business confidential information.

Does this mean we should pick up our marbles and go home when it comes to cloud? Not at all, and I would submit that there are some pretty simple things we can do to mitigate this multiplier effect. Here are four:

The first is to figure out what apps you have and prioritize them by the extent to which they house, or can be a gateway to, sensitive content.

Second, get support. The Cloud Security Alliance is a great resource, and lives and breathes issues like this. The Cloud Controls Matrix is a great starting point for how to think about apps and their inherent risk.

Of course, inherent risk is one dimension. So a third is to think about usage. Which of your top apps enable downloading? Sharing? Probably more than you think. We have noticed in the Netskope Active Platform that people share in app categories ranging from software development to business intelligence to CRM. Sharing content isn’t just something that happens in cloud storage/file sharing apps.

The fourth is to triage. Build a sequenced plan, or roadmap. Tackle the most critical things first. Like that software development app that has a zillion users and also happens to be rated “high risk.” Yeah, the one that houses your source code, roadmap, bug queue, agile sprint project plan, and internal engineering discussions. Or that CRM app in which your customer service professionals are mistakenly entering your company’s customers’ electronic personal health information.

So, yes, data breaches are serious business, and if the 613 respondents to the Ponemon Institute survey are right, cloud creates a multiplier effect that can as much as triple the expected economic impact of a breach. But there is a way forward, and it’s very do-able.

[Infographic: Data Breach: The Cloud Multiplier Effect]

Heartbleed Hype Left Enterprises Uninformed

June 3, 2014

By George Muldoon, Regional Director, Venafi

In early April, the vulnerability known simply as “Heartbleed” became the latest rage. During the first week after discovery, the mainstream media aggressively reported on Heartbleed, stirring up a tornado of fear, uncertainty, and doubt amongst all Internet users. I never thought I’d see “Fox and Friends” talking about OpenSSL, two-factor authentication, and digital certificates, but it happened daily only 7 short weeks ago.

This “Heartbleed Tornado” subsequently left enterprise security professionals with inboxes full of offers claiming to help them remediate. For many, especially those in the executive suites and board rooms, it was the first time they understood the true power and importance of private encryption keys and digital certificates, as well as the imperative need to protect them. Finally, I thought, the world is waking up and understanding the need to secure and protect its most valuable assets, the ones that provide the backbone of a trustworthy Internet—encryption keys and digital certificates.

Unfortunately, as loud as the Heartbleed Tornado roared, the lion’s share of the remediation advice related to Heartbleed was simply the following:

  1. Check and see if websites you use are vulnerable (and have been patched), and
  2. Emphasize the importance of changing your passwords.

Patching OpenSSL and changing user-credential passwords are two of the steps to remediation. But the elephant in the room, the exposure of private encryption keys and certificates (and thus the need to revoke and reissue them ALL), was consistently reported only by media outlets and bloggers in the security space itself.
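One practical check follows from this point: a certificate whose issue date predates the Heartbleed disclosure (April 7, 2014) may still be bound to a potentially exposed private key and is a candidate for revocation and reissue. A hedged sketch using Python’s standard library:

```python
import socket
import ssl
from datetime import datetime

DISCLOSURE = datetime(2014, 4, 7)  # public Heartbleed disclosure date

def cert_predates_heartbleed(host, port=443):
    """True if the site's certificate was issued before disclosure,
    i.e. it should probably be revoked and reissued, not just patched."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # notBefore looks like "Mar 10 00:00:00 2014 GMT"
    issued = datetime.strptime(cert["notBefore"], "%b %d %H:%M:%S %Y %Z")
    return issued < DISCLOSURE

print(cert_predates_heartbleed("example.com"))
```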

Any hot media story has a shelf life, and there are only so many Heartbleed stories that will continue to draw readers in. So once the clicks died down, the mainstream all but forgot it. And those mainstream stories that remain only touch on the surface of the vulnerability, such as NBC’s cosmetic piece on “How Major Websites Rank on Password Security.”

But the important thing to realize is this: The threat against a trustworthy digital universe did not begin with Heartbleed. And it certainly does not end with it either. Heartbleed was simply the latest in a growing mountain of threats that continue to evolve against encryption keys and digital certificates, and thus trust online.

For more information on Heartbleed and how to remediate effectively, check out the Venafi Heartbleed Solution page.

Too Many Employees Ignore BYOD Security

June 2, 2014

By Nina Seth, Accellion

Considering the risks that BYOD mobile activity can pose to enterprises, CIOs have a right to be dismayed by two recent surveys showing just how little some employees care about protecting data on mobile devices.

A recent survey by Centrify found that:

  • 43% have accessed sensitive corporate data on an unsecured public network.
  • 15% have had their personal account or password compromised.
  • 15% believe they have little or no responsibility to protect the data stored on their personal devices.

Imagine 150 employees of a 1,000-person company casually using public Wi-Fi hotspots and downloading unvetted public-cloud file sharing services and other risky apps. While they may not be concerned about protecting the corporate data on their devices, a single breach could potentially cost the organization millions of dollars.

A separate survey conducted by Absolute Software found that:

  • 25% of employees in industries such as banking, energy, healthcare, and retail feel that it’s not their problem if they accidentally leak confidential data.
  • About 33% of employees who had lost their phones did not change their habits afterwards.
  • 59% of employees estimated the value of the corporate data on their phones to be less than $500.

Employees are far too sanguine about the value of corporate data: even 50KB of the right data can be worth a lot more than $500. A study of data breach costs by the Ponemon Institute and Symantec found that the cost of a single breached healthcare record in 2013 was $233, not counting any additional costs from penalties imposed by the HHS and the FTC. Across all industries, the cost of a single breached record in the U.S. was estimated at $188. At those rates, a breach of just 10,000 healthcare records would cost roughly $2.33 million.

Leaking confidential data such as product plans or partner contracts can erode an organization’s competitive advantage, costing potentially millions of dollars. Clearly, employees need to be reminded about the true value and costs associated with corporate data.

Knowing that many employees are lackadaisical about data security, CIOs should invest in mobile security solutions that do not rely on end users following best practices or being security-minded. A mobile security solution that keeps corporate data separate from personal data is a crucial choice for enterprises, especially when employees are casual, if not careless, about data security and compliance.
