Effective Access Control with Active Segmentation

July 30, 2015

By Scott Block, Senior Product Marketing Manager, Lancope

As the threat landscape has evolved to include adversaries with deep pockets, immense resources and plenty of time to compromise their intended target, security professionals have been struggling to stave off data breaches. We’ve all heard it ad nauseam – it’s not a matter of if your network will be compromised, but when.

Since many companies have built up their perimeter defenses to massive levels, attackers have doubled down on social engineering. Phishing and malware-laden spam are designed to fool company employees into divulging login information or compromising their machine. According to the security consulting company Mandiant, 100 percent of data breaches the company has studied involved stolen access credentials.

Since threat actors have become so good at circumventing traditional defenses, we cannot afford to have only a single point of failure. Without proper internal security, attackers are given free rein of the network as soon as they gain access to it.

Instead, attackers should encounter significant obstacles between the point of compromise and the sensitive data they are after. One way to accomplish this is with network segmentation.

Keep your hands to yourself
In an open network without segmentation, everyone can touch everything. There is nothing separating Sales from Legal, or Marketing from Engineering. Even third-party vendors may get in on the action.

The problem with this scenario is that it leaves the data door wide open for anyone with access credentials. In a few hours, a malicious insider could survey the network, collect everything of value and make off with the goods before security personnel get wind of anything out of the ordinary.

What makes this problem even more frustrating is that there is no reason everyone on the network should be able to touch every resource. Engineers don’t need financial records to perform their job, and accountants don’t need proprietary product specifications to do theirs.

By simply cordoning off user groups and only allowing access to necessary resources, you can drastically reduce the potential damage an attacker could inflict on the organization. Instead of nabbing the crown jewels, the thief will have to settle for something from the souvenir shop. Additionally, the more time the attacker spends trying to navigate and survey your network, the more time you have to find them and throw them out, preventing even the slightest loss of data in the process.

How it works
It is best to think of a segmented network as a collection of zones. Groups of users and groups of resources are defined and categorized, and users are only able to “see” the zones appropriate to their role. In practice, this is usually accomplished by crafting access policies and using switches, virtual local area networks (VLANs) and access control lists to enforce them.

While this is all well and good, segmentation can quickly become a headache in large corporate environments. Network expansion, users numbering in the thousands and the introduction of the cloud can disrupt existing segmentation policies and make it difficult to maintain efficacy. Each point of enforcement could contain hundreds of individual policies. As the network grows in users and assets, segmentation policies can quickly become outdated and ineffective.

Retaining segmentation integrity is an important security function in today’s world of advanced threats and high-profile data breaches. To properly protect themselves, organizations need to constantly maintain segmentation, adding new policies and adjusting existing ones as network needs change.

One way to tackle the challenges of traditional access control is with software-defined segmentation, which abstracts policies away from IP addresses and instead bases them on user identity or role. This allows for much more effective and manageable segmentation that can easily adapt to changes in the network topology.
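As a rough illustration of that abstraction, here is a minimal sketch of a role-based policy check in Python (the roles, zone names, and policy structure are invented for illustration, not taken from any particular product). Because the rules key on a user's role rather than an IP address, they remain valid when hosts move or the network is renumbered:

    # Illustrative role-based segmentation policy: rules reference roles
    # and resource zones, never IP addresses. All names are hypothetical.
    POLICY = {
        "engineering": {"source-code", "build-servers"},
        "finance": {"financial-records", "erp"},
        "sales": {"crm"},
    }

    def is_allowed(role: str, zone: str) -> bool:
        """Return True if users holding this role may reach the zone."""
        return zone in POLICY.get(role, set())

    # An engineer reaches build servers but not financial records,
    # regardless of which IP address their laptop currently holds.
    assert is_allowed("engineering", "build-servers")
    assert not is_allowed("engineering", "financial-records")

An IP-based access control list encoding the same intent would need an update every time a subnet changed; here, only the role-to-zone mapping matters.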

Active segmentation for effective access control
When you couple software-defined segmentation with an intelligent planning and implementation methodology, you get active segmentation. This approach to segmentation allows network operators to effectively cordon off critical network assets and limit access appropriately with minimal disruption to normal business functions.

When implemented correctly, active segmentation is a cyclical process of:

  • Identifying and classifying all network assets based on role or function
  • Understanding user behavior and interactions on the network
  • Logically designing access policies
  • Enforcing those policies
  • Continuously evaluating policy effectiveness
  • Adjusting policies where necessary

Here is a high-level overview of the active segmentation cycle:

[Diagram: TrustSec active segmentation cycle]

Network visibility enables active segmentation
One of the cornerstones of active segmentation is comprehensive network visibility. Understanding how your network works on a daily basis and what resources users are accessing as part of their role is paramount to designing an adequate policy schema.

Leveraging NetFlow and other forms of network metadata with advanced tools like Lancope’s StealthWatch® System provides the information needed to understand what users are accessing and how they behave when operating on the network. This end-to-end visibility allows administrators to group network hosts and observe their interactions to determine the best way to craft segmentation policies without accidentally restricting access for the people who need it.
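As a simplified sketch of how that visibility feeds policy design, the snippet below summarizes which clients actually talk to which servers before any rule is written (the flow records are invented stand-ins for NetFlow data, not StealthWatch output):

    # Summarize flow records: which source hosts reach which destination
    # host/port pairs. Real NetFlow exports carry many more fields.
    from collections import defaultdict

    flows = [
        {"src": "10.1.0.12", "dst": "10.9.0.5", "dport": 1433},  # finance DB
        {"src": "10.1.0.12", "dst": "10.9.0.5", "dport": 1433},
        {"src": "10.2.0.40", "dst": "10.8.0.9", "dport": 443},   # code repo
    ]

    observed = defaultdict(set)
    for f in flows:
        observed[f["src"]].add((f["dst"], f["dport"]))

    # Hosts that never touch a server can safely be walled off from it;
    # hosts that do need an explicit allow rule before enforcement begins.
    for src, destinations in sorted(observed.items()):
        print(src, "->", sorted(destinations))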

After the segmentation policies have been implemented, the visibility allows security personnel to monitor the effectiveness of the policies by observing access patterns to critical network assets. Additionally, the network insight quickly highlights new hosts and traffic on the network, which can help assign segmentation policies to them. This drastically reduces the amount of time and effort required to ensure segmentation policies are keeping pace with the overall growth of the enterprise network.

In short, active segmentation is the process of logically designing policies based on network data and constantly keeping an eye on network traffic trends to make sure access controls are utilized effectively and intelligently to obstruct attackers without impeding normal business functions. With the right tools and management, organizations can minimize the headaches and time involved with network segmentation while significantly improving their overall cybersecurity posture.

Why 87.3% of Companies Use Office 365

July 29, 2015

The Surprising Numbers Behind Office 365 Benefits and Risks

By Cameron Coles, ‎Senior Product Marketing Manager, Skyhigh Networks

By all accounts, Office 365 is a huge success for Microsoft and its customers. In the quarter that ended June 30, 2015, Microsoft’s commercial cloud revenue grew 88% to an annual run rate of over $8 billion. Skyhigh analyzed the cloud usage of our 21 million users and found that an impressive 87.3% of organizations have at least 100 active Office 365 users. When asked to identify the benefits of Office 365, IT leaders frequently mention its cost advantages and ability to improve the productivity of an increasingly mobile workforce. Skyhigh’s own analysis of how enterprises use Office 365 has uncovered several additional benefits and a few areas of caution.

With the release of Windows 10, we thought this was the perfect time to take a look at how Office 365 is changing how companies work. Windows 10 will offer deeper integration with OneDrive, and the new Universal Office apps for Windows 10 (the new version of Office that supports desktop, tablet, and mobile devices) will require an Office 365 subscription. We expect that this requirement will lead many companies to accelerate their Office 365 migrations. Today, as you’ll see below, most Office 365 customers have taken a staged approach to migration, running a hybrid of on-premises versions of Microsoft applications for most employees while they migrate users to the cloud versions incrementally.

Platform for inter-company collaboration
SharePoint Online and OneDrive are not just platforms for employees to collaborate with each other; they also facilitate collaboration between companies. Consider the example of a manufacturing company that works collaboratively on product launch plans stored in SharePoint Online with their PR agency. While this type of collaboration has always occurred, it now happens via cloud platforms instead of faxes, emails, and phone calls. The average organization works with 72 business partners via these two applications, more than any other cloud-based collaboration platform. The top industries organizations connect to via Office 365 include high tech, manufacturing, energy, financial services, and business services.

[Chart: business partners connected via Office 365]

Home to business-critical data
Microsoft has invested heavily in security, and some have even suggested that cloud applications such as Office 365 may be even more secure than on-premise software. The reason is that companies like Microsoft have large, sophisticated security teams that spend time working to prevent intrusions to their cloud applications. That’s a good thing, considering that the average company uploads 1.37 TB of data to Yammer, SharePoint Online, and OneDrive each month. However, certain types of sensitive or regulated data should not be uploaded to the cloud or shared with third parties via cloud applications, and a surprising amount of this sensitive data has been uploaded to Microsoft’s signature productivity suite.

[Chart: sensitive data stored in Office 365]

While companies have deployed data loss prevention tools to protect their data in Exchange and SharePoint on premises, many have lagged in extending those policies to their data in the cloud. Skyhigh analyzed data stored in OneDrive and SharePoint Online and found that 17.4% of documents contain sensitive data. Broken down by data type, 4.2% of files contain sensitive personal information, 2.2% contain protected health information, 1.8% contain bank account and card numbers, and 9.2% contain confidential data. The average company also has a shocking 143 files on OneDrive that contain the word “password” in the filename (not surprisingly, security experts recommend against storing your passwords in an unencrypted Word document or a spreadsheet called passwords.xlsx).
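The filename finding above is easy to reproduce in principle; here is a minimal, hypothetical sketch (the file list is invented, and a real scan would walk an actual OneDrive or SharePoint inventory via its API):

    # Flag documents whose names suggest stored credentials.
    import re

    filenames = [
        "Q3-forecast.xlsx",
        "passwords.xlsx",           # the exact pattern the report warns about
        "wifi-password-list.docx",
    ]

    pattern = re.compile(r"password", re.IGNORECASE)
    flagged = [name for name in filenames if pattern.search(name)]
    print(flagged)  # ['passwords.xlsx', 'wifi-password-list.docx']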

[Chart: files with “password” in the filename]

Most companies migrate in stages
When we looked deeper at the user counts within companies, we found something surprising. Most organizations have started moving to Office 365, but they are migrating in stages rather than moving all users to the cloud at once. At the average organization, just 6.8% of users have migrated to Office 365. This adoption curve is consistent with what IT experts are recommending. First, companies adopt Office 365 for a department or line of business, and these users co-exist with other employees using on-premise Exchange, SharePoint, and Windows file servers. This way, the company can incrementally develop its expertise in managing cloud environments and work out any rough edges before migrating the entire company.

[Chart: staged “land and expand” Office 365 migration]

The adoption numbers for Office 365 also reveal a tremendous opportunity for organizations and for Microsoft. As Microsoft customers continue to migrate the remaining 93.2% of their users still on legacy on-premise Microsoft products, they’ll experience greater cost savings from no longer running these applications on their own hardware. They’ll also see the productivity improvements as employees have access to improved collaboration with each other and with business partners from any Internet-connected device, anywhere in the world. And as Microsoft migrates its massive installed base to its cloud platforms, it will also see increased revenue. It’s a win-win for Microsoft and its customers.

Evaluate Cloud Security Like Other Outsourced IT

July 28, 2015

By Susan Richardson, Manager/Content Strategy, Code42

Now that business cloud usage is ubiquitous, you’d think we could get past all the hype around cloud security, and just start treating the cloud like any other IT platform that needs a rigorous, well-rounded security strategy with appropriate access controls, encryption, you know the drill.

But as with any new technology (like those newfangled steam locomotives traveling at a record-breaking 15 mph that will almost certainly make it impossible for passengers to breathe), it takes a while for reality to catch up with our fear of the unknown. So let’s take a deep breath, take off our doomsday-colored glasses and look at cloud security from a realistic perspective:

Security concerns are sensationalized
You can find plenty of surveys that say security is the top reason holding companies back from adopting cloud solutions. A Cloud Security Alliance (CSA) survey found it to be the top reason for a whopping 73% of unadopters. But those cloud Luddites only represent a tiny fraction of the overall business-computing universe. Most surveys put the holdout percentage between single digits and 15%, so 73% of those companies only represent about 11% of all businesses. Not exactly a headline-grabbing statistic. And, once a company adopts the cloud, those fears diminish over time, according to a RightScale study.

Your S&R team may be good, but it’s not that good
Even if you’re the most conscientious security and risk professional, with a talented staff and company leadership willing to invest adequately in security, your team simply can’t match the resources of the top cloud service providers (CSPs). Is your data center secured with biometric scanning and advanced surveillance systems? Do your practices stand up to the stringent security requirements of certifications and accreditations such as SOC 1, SOC 2, PCI DSS, HIPAA, FERPA, FISMA and others?

At the 2014 Amazon Web Services (AWS) Summit, the company’s Senior Vice President Andy Jassy was quoted as saying that even with a substantial investment, the average company’s infrastructure is outdated by the time it’s completed.

“With on-prem, you’re going to spend a large amount of money building a relatively frozen platform and implementation that has the functionality that looks a lot like Amazon circa 2010,” Jassy said. “It will improve at a very expensive and slow rate vs. being on something like AWS that has much broader functionality, can deploy more people to keep iterating on your behalf, keep evolving and improving the technology and platform.”

Been there, done that
Forgetting some things from the 1990s is permissible. Dial-up modems and Furbies come to mind. But have we already forgotten all the data processing, servers and networks that we started outsourcing to third parties in the ‘90s? The trend has continued, with IT outsourcing budgets marking healthy increases over the past decade, according to a 2015 CIO Outsourcing Report by NashTech. The sooner we start treating the cloud as a viable form of outsourcing that requires appropriate security controls, the sooner we can all breathe a little easier.

Who knew? The Internet is not infinite

July 22, 2015

By Susan Richardson, Manager/Content Strategy, Code42

In January 2011, the world ran out of Internet addresses. Every device on the Internet—including routers, phones, laptops, game consoles, TVs, thermostats and coffeemakers—needs its own IP address to move data over the Net.

When the Internet began, it seemed like the 4.3 billion addresses allowed by the 32-bit scheme would be ample. Now, tech companies (and governments) are scrambling to move onto a new system for Internet traffic routing. By 2020 there will be an estimated 50 billion devices online—so something’s got to give.

According to the Wall Street Journal, “The limited supply of new Internet Protocol addresses is nearly gone. Asia essentially ran out in 2011, and Europe a year later. North America’s allotment is due to dry up this summer.”

IPv6 saves the Internet!

IPv4 is the fourth revision of the Internet Protocol (IP) used to identify devices on a network through an addressing system. Its successor is Internet Protocol Version 6 (IPv6), a 128-bit address scheme that will provide 340 undecillion IP addresses.

That’s 2^128, or 340,000,000,000,000,000,000,000,000,000,000,000,000 new addresses.

IPv6 has been in development since the mid-1990s when it became clear that demand for IP addresses would exceed supply. IPv6 will coexist with and eventually replace IPv4. The transition will happen gradually to avoid the division of the Internet into separate v4 and v6 networks and to ensure connection for all v4 and v6 nodes. Most companies use a strategy called dual stack to ensure that their equipment can use both v4 and v6 for the foreseeable future.
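From a client’s point of view, dual stack simply means trying both address families. Here is a minimal sketch in Python (the explicit IPv6-first ordering is illustrative; real operating systems apply their own address-selection rules):

    # Resolve a hostname to both IPv6 and IPv4 addresses, attempt IPv6
    # first, and fall back to IPv4 if the connection fails.
    import socket

    def connect_dual_stack(host, port):
        infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
        # Put AF_INET6 (IPv6) results ahead of AF_INET (IPv4) results.
        infos.sort(key=lambda info: 0 if info[0] == socket.AF_INET6 else 1)
        last_error = None
        for family, socktype, proto, _name, addr in infos:
            try:
                sock = socket.socket(family, socktype, proto)
                sock.connect(addr)
                return sock
            except OSError as err:
                last_error = err
        raise last_error or OSError("no addresses found")

    # Example: connect_dual_stack("www.example.com", 80)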

To start deploying IPv6:

    • Ensure all networking equipment (including planned purchases) is IPv6 capable
    • Individuals and businesses can request IPv6 connectivity from their ISP, and users can test whether their connections support IPv6 using an online IPv6 test
    • Content creators, developers, and enterprises can make their own websites and content available over IPv6
    • Governments can require IPv6 compliance of all contractors and business relationships, and lead by example in deploying IPv6 across all websites and services

Update: The American Registry for Internet Numbers (ARIN) activated the IPv4 Unmet Requests policy (NRPM 4.1.8) in July 2015. For the first time, ARIN is unable to fulfill an IPv4 address request. Requests are now either added to a waiting list, or requestors are referred to an exchange where they have the opportunity to acquire surplus IPs.

CISO role ranges from beat cop to boardroom

July 17, 2015

By Adam Best, Social Media Manager, Code42

Every executive role has changed in the past decade or so, but none more than the chief information security officer. Ten years ago, if you asked someone to describe his CISO, he’d probably answer, “You mean my CIO?”

Out of the server room
In a globally-connected economy, data security is arguably more important than physical security; if someone wants to profit from corporate crime or espionage, they go for the data. The fact that the most lucrative crime is electronic has elevated and expanded the role of CISO.

So who is the modern CISO?

On patrol: She is a beat cop, making her presence known internally to raise the visibility of security. Like law enforcement in general, the more time spent in a prevention role, the easier the other parts of the job are.

Before her current beat, she worked border security: “CISOs used to be 99% concerned with making the firewall impenetrable,” recalls Greg Mancusi-Ungaro, BrandProtect CMO. But that changed with the recognition that protecting the perimeter is not enough. Breaches can happen wherever computers wander, and from inside the organization too. Which leads to….

Threat assessment: Like a military intelligence officer briefing a unit commander, the CISO informs other executives about possible or probable threats, their severity and the resources and actions required to protect against and mitigate each.

She may not play a direct role in the practical implementation of the threat response plan, and she might even be dealing with technology or services not subject to her approval. But she’s ultimately responsible for the plan’s success or failure. However, as the old military adage goes, no plan survives first contact with the enemy. There will be data loss. Time to switch to….

Forensic investigator: When the inevitable breach happens, she’s the medical examiner at the scene of the crime, piecing together what happened, when and how. A chain of evidence must be created. What data was taken? From what source? Where did the data originate, and when was it changed or moved? What methods were used in all of the above? But that’s not all.

Lead detective: The CISO guides the investigation following breach: Where is that data now? Can it be recovered? Were there internal or external bad actors (or both)? Can damage to the company from this incident be prevented or at least mitigated to some extent? Can it be prevented from happening again? How?

Just like a detective assigned to a case, the CISO answers to stakeholders from above and below her pay grade. She helps the company PR team understand what to tell customers and how to respond to news media inquiries. She informs Legal if missing data is subject to government oversight or regulation and whether the company must disclose the breach. She apprises the CFO, who wants to know how much it cost and when it will be over.

The CISO is the decision and communications hub for some of the most critical incidents the organization will face. “They must be a full partner of HR, Legal and Marketing,” says Mancusi-Ungaro.

A little bit grad student; a little bit Pythia
After the incident, what do you have? A brand new threat to assess for your security posture. Learning from the breach and remembering its lessons is critical, but if he stops there, the CISO might be “fighting the last war” during the next data threat. So like the Oracle of Delphi, he must predict the next challenge, drawing from his experiences while also staying up to date on InfoSec best practices.

Isn’t it ironic?
For a role so concerned with privacy, there is little that is private about the CISO’s work. Besides InfoSec professionals, the general public and news media are now acutely aware of the importance of data security.

Bill Hargenrader, cybersecurity manager and senior lead technologist at Booz Allen Hamilton, a Fortune 500 technology and strategy consulting firm, says “As the general public hears more about hacking, privileged access violations, and data breaches, there is growing pressure to mitigate the dangers that are present. That’s not to say these types of activities weren’t happening before; as our tools for detection get better, and as the media is quick to pounce on these breaches (for good reason), there is a greater shift towards cybersecurity to address the risk profile for an organization.”

Which adds another interesting coda to the CISO’s role: public affairs officer. Hargenrader’s words of wisdom: “If InfoSec leaders can’t properly communicate the risk to non-cybersecurity versed organizational leadership, then they are at a disadvantage.”

The Art of (Cyber) War

July 15, 2015

By Chris Hines, Product Marketing Manager, Bitglass

“If you know the enemy and know yourself, you need not fear the results of a hundred battles.” – Sun Tzu

We are at war. Cyber criminals vs. enterprises and their security counterparts. Black Hatters vs. White Hatters. If you don’t believe it, do a quick Google search for “data breach” and take a look at the vast number of headlines that pop up in 0.3 seconds. You’ll probably see a news article posted within the last five hours or so, maybe even in your own industry.

But why war? Why are we fighting in the first place? What are we attempting to protect?

The answers to those questions are quite simple. We are fighting because we must do so in order to protect our customers, our employees, and our data from criminals. These cyber criminals have created sophisticated phishing attacks, hacked public wi-fi networks, stolen sensitive company information, infected enterprise networks and unleashed a litany of other tactics geared toward causing damage. The motivation? In most cases, money and fame.

But we as enterprise stakeholders need not fear. Not if we take the time to truly understand our enemies, and to recognize the weaknesses within our own IT environments. What we need is a battle plan.

The Plan

Using what we’ve seen in recent data breaches, we can understand the methods black hatters are using and predict their moves before they even make them. We know cyber criminals are phishing employees. Let’s train our employees to look out for them and use single sign-on solutions to help limit data exposure. We know malware is trying to infiltrate our environments and siphon off data. Let’s use technology that can recognize malware and cleanse our networks. We know criminals are leveraging our adoption of both cloud applications and BYOD devices to wreak havoc, so let’s work to secure them. We already have the ability to track data anywhere on the Internet. Let’s use this technology to detect anomalous activity and oust breaches before they cause irreparable damage.

But we must also recognize the holes within our own systems and ask ourselves, “How do we improve our own security posture?” Do we have visibility into user activity, control over who can access our public cloud applications from mobile endpoints, and the ability to stop sensitive data from leaking out to risky destinations? If the answer to any of these is “no,” then fix it. Find the right security solutions that can plug up YOUR security gaps. Be honest about the security tools you need, and don’t attempt to repurpose existing security solutions to protect against situations for which they were not intended.

Realize that there is no one “fix all” security solution. We must use a collection of security technologies that will help protect our employees, customers and data from the cyber criminals attempting to pillage enterprise data stores.

So ask yourself this. Do you know the enemy? Do you know your security gaps?

FedRAMP and PCI – A Comparison of Scanning and Penetration Testing Requirements

July 13, 2015

By Matt Wilgus, Director of Security Assessment Services, BrightLine

Overview
In the last 30 days, the FedRAMP Program Management Office (PMO) has published guidance for both vulnerability scanning and penetration testing. The updated guidance comes on the heels of PCI mandating the enhanced penetration testing requirements within its requirement 11.3 as part of the 3.0, now 3.1, version of the DSS. These augmented PCI requirements, introduced in the fall of 2013, took effect on June 30. For many cloud service providers this means the requirements for vulnerability scanning and penetration testing are more thorough and will require additional resources for planning, executing and remediating findings. This article will walk through the updates and discuss the differentiation between FedRAMP and the PCI Data Security Standard (DSS).

Vulnerability Scanning
PCI: Requirement 11.2 of the PCI DSS obliges organizations to “Run internal and external network vulnerability scans at least quarterly and after any significant change in the network.” Many organizations will provide their ASVs a listing of Internet-facing IP addresses and/or hostnames, and the scans will be performed. Internally, a similar process will occur, where a list of in-scope internal IP addresses or hostnames is provided and the scans are performed by the ASV or an in-house team.

FedRAMP: The FedRAMP document titled “FedRAMP JAB P-ATO Vulnerability Scan Requirements Guide” was developed for CSPs undergoing approval via the Joint Authorization Board (JAB); however, agency authorizations can also use the guide. There are several differences in FedRAMP’s guide as compared to the PCI DSS, including:

  • Scans must be performed with authentication (i.e. credentialed scans), which PCI doesn’t require.
  • Scans include the full system boundary, which is often, but not always, larger than the in-scope PCI environment, which consists of the cardholder data environment (CDE) and associated system components.
  • Scans must be conducted monthly, whereas PCI requires only quarterly scans.
  • CSPs must use operating system and network vulnerability scanners, database vulnerability scanners and web application vulnerability scanners. PCI doesn’t provide the same level of detail and alludes only to network vulnerability scanners. It is notable that web application scanning is one way to address compliance aspects of PCI DSS requirement 6.6.

Penetration Testing
PCI: For years there has been a debate about how a penetration test should be conducted in support of PCI DSS compliance. In March 2015, the PCI Security Standards Council published an information supplement providing additional guidance. This document offers useful information; however, the PCI Security Standards Council emphasizes that this is just guidance – and the DSS within Requirement 11.3 is still the letter of the law. Also, testing is required from an internal and external perspective, along with testing of networks, systems, and applications. In addition, PCI DSS 3.0 introduced additional requirements for a formal testing methodology and testing of segmentation controls. These new measures went from recommended to required on June 30, 2015.

FedRAMP: Also on June 30, 2015, FedRAMP published a document titled, “FedRAMP Penetration Test Guidance.” The goal of this document was similar to the PCI guidance and has overlapping content within methodology, reporting and qualifications. However, the most significant difference is the emphasis on attack vectors and scope. For example, the PCI guidance states social engineering testing is optional, whereas the FedRAMP guidance details tasks including “unannounced spear phishing exercises targeted at the CSP system administrators.” Additionally, the FedRAMP requirements touch on additional aspects of internal testing, such as those specific tests and attacks that should occur from the perspective of a credentialed system user. Physical (facility) penetration testing is also covered in the FedRAMP guidance. While not recommending that 3PAOs scale walls, it does ask for the 3PAO to verify that locks and other physical security mechanisms are in place. Some of these tasks can also be found in Requirement 9 of the PCI DSS.

Next steps
Unlike the PCI update, the FedRAMP penetration testing guidance did not include an implementation time frame or any caveats around being just “guidance.” As such, the requirements are effective immediately. As this guidance was not available prior to June 30, some assessments underway may not have taken the guidance into account in its entirety. CSPs and 3PAOs are encouraged to work with their JAB or Agency authorizing officials to review the attack vectors and ensure that security assessment plans sufficiently assess risk based on the goals and objectives of FedRAMP and standards such as NIST 800-115.

93 Percent of Cloud Services in Healthcare Are Medium to High Risk

July 10, 2015

By Sam Bleiberg, Corporate Communications Executive, Skyhigh Networks

We recently released our first-ever Cloud Adoption & Risk in Healthcare Report, with anonymized cloud usage data from over 1.6 million employees at healthcare providers and payers. Unlike surveys that ask people to self-report their behavior, our report is the first data-driven analysis of how healthcare organizations are embracing cloud services to reduce IT cost, increase productivity, and improve patient outcomes. However, while the cloud has transformed the way healthcare organizations operate and deliver service to their patients, companies are still responsible for ensuring the security of sensitive patient data.

IT may be vigilant in evaluating whether sanctioned cloud services meet organizational policies, but employees can bring cloud services into the workplace on their own. These services, unknown to the IT department and referred to as shadow IT, contribute to an industry average of 928 cloud services in use per company. Security teams are responsible for sensitive data uploaded to these cloud services, but IT is typically aware of only 60 cloud services in use – less than 10% of the total. In addition to maintaining compliance with internal policies, regulations like HIPAA and HITECH require that healthcare companies secure protected health information (PHI) even as it migrates to the cloud.

Healthcare organizations have come under fire as the targets of an increasing number of criminal hacks in the past year. The number of healthcare records exposed in the last 12 months now totals 94 million, led by blockbuster breaches at Anthem and CHS. This flurry of attacks is driven by the high price healthcare records fetch on the black market. At an average of $50 per record, an individual healthcare record is worth more than a US-based credit card and personal identity with a social security number combined.

Considering that the healthcare industry is highly regulated and handles some of the most sensitive and personal data about individuals, many statistics from the report are troubling. Download the full report to read all the findings.

Only 7.0% of cloud services are enterprise ready
A mere 7.0% of cloud services in use meet enterprise security and compliance requirements. The average healthcare organization uploads 6.8 TB of data to the cloud each month. That’s more than all of Wikipedia’s archives (5.64 TB)! Just 15.4% of services support multi-factor authentication, a key line of defense in preventing unauthorized access to sensitive data.

Silos of Collaboration Uncovered
The cloud is a revolutionary technology for enabling collaboration between employees, but too many cloud services in use can actually be an impediment to collaboration. The average healthcare company uses 188 collaboration services. We call the ensuing phenomenon “silos of collaboration,” in which employees have difficulty sharing data because there are so many different cloud services in use. Paid licenses for redundant services can also unnecessarily drive up costs.

Undetected Insider Threats
Enterprise-ready cloud services can offer even better security capabilities than on-premise solutions, but even secure cloud services can be used in risky ways. The majority of insider threat incidents are quiet and may not be discovered immediately, if ever. With healthcare records so valuable on the black market, especially for patients with certain status or conditions, a hospital employee may choose to sell records he or she has access to. We compared perceptions of insider threat with reality and found that 33% of healthcare companies surveyed reported an insider threat incident in the last year, but 79% of companies had usage behavior indicative of an insider threat.

Employee Passwords on the Loose
There were more software vulnerabilities discovered and more data breaches in 2014 than any other year on record. The result is that many users now have their login credentials for sale on the darknet. In fact, 14.4% of all healthcare employees have a login credential for sale online, exposing 89.2% of organizations. A single health insurance company had 9,932 credentials for sale on the darknet.

Cloud Hyperconnectors in Healthcare
Cloud services are now the main way that employees collaborate across different companies. We discovered that a selection of cloud services, called “cloud hyperconnectors,” were responsible for enabling a large number of these connections. In the customer service category, these services were Zendesk, Salesforce, and Needle. The cloud hyperconnectors in the file-sharing category were ShareFile, Box, and Egnyte. In the collaboration category, the top connecting services were Cisco WebEx, Office 365, and Basecamp.

The Most Prolific Cloud User
How much can a single employee rely on the cloud? We spotlighted one prolific healthcare employee who uses more cloud services than anyone else. The average employee uses 26 cloud services, but the most prolific cloud user actually employs an impressive 444 cloud services including 97 collaboration services and 74 social media services. A surprising 30.6% of these services were high-risk – much greater than the industry average of 5.6%.

These findings are a wake-up call for IT in healthcare organizations: employees are using cloud services now, regardless of sanctioned applications or policies prohibiting cloud use. IT’s new role is to enable secure cloud use, helping employees navigate the cloud while complying with organizational security policies.

The heavy cost of ignoring dwell time

July 9, 2015

By Susan Richardson, Manager/Content Strategy, Code42

If you’re among the 44% of organizations that don’t measure Mean Time to Identify (MTTI), more commonly known as dwell time, how will you know whether you’re reducing it? Reducing dwell time is a critical step in improving incident response.

The average dwell time for a major data breach today is months: Mandiant puts it at 205 days, and a Ponemon Institute survey tallied 98 days for Financial Services and 197 days for Retail. With all that free time to roam in your system, attackers can wreak more havoc than just sifting through your information for vulnerabilities, identifying critical information, mapping your network and stealing millions of records.
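Measuring dwell time is straightforward once compromise and detection dates are recorded per incident; a minimal sketch, with invented dates, follows:

    # Dwell time per incident is the detection date minus the estimated
    # compromise date; MTTI is the mean across incidents. Dates are made up.
    from datetime import date

    incidents = [
        {"compromised": date(2014, 11, 3), "detected": date(2015, 5, 27)},
        {"compromised": date(2015, 1, 14), "detected": date(2015, 4, 22)},
    ]

    dwell = [(i["detected"] - i["compromised"]).days for i in incidents]
    mtti = sum(dwell) / len(dwell)
    print("MTTI: %.0f days" % mtti)  # you can't reduce what you don't track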

The fallout of a data breach can also include:

Loss of business: A recent Brunswick Group report found that 34% of customers no longer shopped at a retailer due to a past data breach issue. But retail is among the least-likely industries to experience customer churn. The three industries most susceptible to losing customers following a breach are Health Care, Pharmaceuticals and Financial Services, according to a 2014 Ponemon Institute global study. It also found that France, Italy and the United Kingdom had the highest customer turnover.

Significant lawsuits: An average single data breach claim, according to a recent NetDiligence study, costs a company $733,109, at a cost per record of $956.21. Home Depot, in its 10-Q filing with the SEC following a breach, reported that it was facing at least 44 lawsuits. Target paid $10 million to settle its class action lawsuit and another $19 million to reimburse financial institutions for the charges they incurred reissuing compromised cards.

A drop in company valuation: While stock price is affected by many factors, The Brunswick Group analyzed 10 companies that recently experienced a large data breach and found that the average daily stock price dropped and hadn’t yet recovered two quarters later.

Executive casualties: Target CEO Gregg Steinhafel resigned five months after the retailer’s highly publicized breach in December 2013, with the company’s official statement noting “he held himself personally responsible.”

An LA Times story following the more recent Anthem data breach talks about what’s at stake for company CEO Joseph Swedish, who was already fighting to improve the insurer’s customer service reputation before hackers compromised 78 million records in a typosquatting scheme.

Bridging The Chasm Between Business and IT – The GRC Way

July 6, 2015

By Rajesh Raman, Vice President, Zaplet/MetricStream

Business and IT
In today’s world, company operations function at two distinct levels: the business operation level and the IT infrastructure operation level. While the two functions operate independently, IT exists to support the business. Many IT operations, like the deployment and management of IT infrastructure, applications and services, are driven by the business layer’s requirements in a top-down fashion to enable the company to carry out its business. IT infrastructure management, including addressing cybersecurity risks, is done exclusively in the IT layer. There are several tools, such as FireEye, McAfee, Qualys, ArcSight and BMC Software, which IT deploys and uses in order to identify and manage IT security risk, but something is missing.

A chasm exists between the IT layer and business layer, when looked at from a bottom-up perspective.

Let’s say someone hacked into your organization’s network, and some data was compromised. What does that IT event really mean for the business? It’s vital to understand that in business terms because that event could potentially put the company at serious risk.

Perhaps the data breach jeopardized the company’s financial data; then it will need to do some proactive reporting. Perhaps the data breach made the company non-compliant with a regulatory requirement; then it will need to re-certify. Perhaps the data breach compromised personnel records, like the June 2015 federal government hack; then the company will need to alert its employees.

The point is you need a unified business and IT perspective towards comprehensive enterprise risk assessment and management.

Don’t lose the signal in the noise
The massive 2013 Target data breach showed what can happen when you ignore the gap between IT and business risks.

Target was PCI-certified (Payment Card Industry), thanks in part to a $1.6 million malware detection system from FireEye. On November 30, 2013, Target’s security team in Bangalore, India, received alerts from FireEye and informed Target headquarters in Minneapolis. But no one foresaw the risk to the business. In the words of Molly Snyder, a Target spokeswoman: “Based on their interpretation and evaluation of that activity, the team determined that it did not warrant immediate follow up.”

Obviously something critical got lost in the noise. The first IT event detected was not high priority—from the IT perspective. But from the business perspective, red flags waved: the event occurred during the busiest shopping period and involved customers’ credit card information.

The data breach cut Target’s profit for the holiday shopping period by 46 percent, compared to the previous year. Worse yet, Target still faces dozens of potential class-action lawsuits and legal actions from creditors.

Prioritization is Key
Today’s IT departments cope with a tsunami of security events, but how do they know which one to prioritize? How do they know that an event that may, on the surface, seem trivial and unimportant can have significant impact and jeopardize the business?

There is an obvious need to fill this gap by mapping low-level IT events to enterprise risk from a bottom-up perspective.

  • A Governance, Risk and Compliance (GRC) system promises to help fill this gap between the IT and business layers.
  • Governance, Risk, and Compliance (GRC) systems help organizations connect the dots across key areas: the limits for regulatory compliance, the analytics for risk management, and the metrics for risk controls. Because a GRC system spans the enterprise, it can help guide and prioritize the appropriate response.
  • When setting up a GRC system, a company must define its critical assets, metrics, and risk assessment controls. The system can help manage and prioritize anything that impacts regulatory compliance —such as Payment Card Industry (PCI) compliance or the Health Insurance Portability and Accountability Act (HIPAA).
  • A GRC solution provides a bottom-up approach to managing and addressing IT events—by keeping the business needs in mind. An apparently low-level risk will be given higher priority if it threatens a critical asset—or if it jeopardizes a regulatory requirement. This integrated and pervasive 360-degree view is where the value of a GRC solution lies (a minimal sketch of this prioritization follows this list).
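Here is that sketch (the assets, weights, and events are invented for illustration; a real GRC platform would draw them from its asset and control registers):

    # An IT event's priority is raised by the business context of the asset
    # it touches: asset criticality and regulatory exposure. Hypothetical.
    ASSETS = {
        "pos-terminal": {"regulations": {"PCI DSS"}, "criticality": 5},
        "hr-database": {"regulations": {"HIPAA"}, "criticality": 4},
        "test-server": {"regulations": set(), "criticality": 1},
    }

    def business_priority(event):
        asset = ASSETS[event["asset"]]
        score = event["it_severity"] * asset["criticality"]
        if asset["regulations"]:  # regulatory exposure raises priority
            score += 10
        return score

    events = [
        {"asset": "test-server", "it_severity": 4},   # noisy but low impact
        {"asset": "pos-terminal", "it_severity": 2},  # quiet but critical
    ]
    for event in sorted(events, key=business_priority, reverse=True):
        print(event["asset"], business_priority(event))

The quiet event against the payment terminal outranks the noisy one against the test server, which is exactly the inversion the Target team needed.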

Visibility
It’s all about understanding the business risk context when prioritizing IT assets and responses to IT events.

Chief Risk Officers (CROs) see and understand key business risks. They have the visibility, and they can make the call. They report on the organization’s risk profile to the board of directors, leveraging a variety of tools and risk dashboards. Equally important, they collaborate across the C-suite and provide management with guidance on what needs to be addressed, how, and when.

Needles in Haystacks
Moving forward, I see challenges ahead on all three fronts: regulations, systems and threats.

Regulatory requirements are increasing, and it’s more challenging for companies to be 100 percent compliant with all the appropriate risk controls in place. In terms of systems, an organization’s IT footprint and adoption of cloud-based applications are constantly evolving and expanding.

Meanwhile, the variety and number of cyber threats are increasing, and malware is becoming ever more sophisticated. I anticipate that the volume and severity of IT events will increase significantly. Figuring out which events will have the biggest impact on a company’s business is like finding a needle in a haystack. But it need not be.

That’s precisely where a GRC system can help. By leveraging the correct GRC analytics and intelligence, organizations are able to identify and understand their risks from both the business and IT perspective. Bridging this gap can lead to better data-driven decision making and superior business performance.


Originally posted in CloudTweaks.
