Evaluate Cloud Security Like Other Outsourced IT

July 28, 2015

By Susan Richardson, Manager/Content Strategy, Code42

Now that business cloud usage is ubiquitous, you’d think we could get past all the hype around cloud security, and just start treating the cloud like any other IT platform that needs a rigorous, well-rounded security strategy with appropriate access controls, encryption, you know the drill.

But as with any new technology (like those newfangled steam locomotives traveling at a record-breaking 15 mph that will almost certainly make it impossible for passengers to breathe), it takes a while for reality to catch up with our fear of the unknown. So let’s take a deep breath, take off our doomsday-colored glasses and look at cloud security from a realistic perspective:

Security concerns are sensationalized
You can find plenty of surveys that say security is the top reason holding companies back from adopting cloud solutions. A Cloud Security Alliance (CSA) survey found it to be the top reason for a whopping 73% of unadopters. But those cloud Luddites only represent a tiny fraction of the overall business-computing universe. Most surveys put the holdout percentage between single digits and 15%, so 73% of those companies only represent about 11% of all businesses. Not exactly a headline-grabbing statistic. And, once a company adopts the cloud, those fears diminish over time, according to a RightScale study.

Your S&R team may be good, but it’s not that good
Even if you’re the most conscientious security and risk professional, with a talented staff and company leadership willing to invest adequately in security, your team simply can’t match the resources of the top cloud service providers (CSPs). Is your data center secured with biometric scanning and advanced surveillance systems? Do your practices stand up to the stringent security requirements of certifications and accreditations such as SOC 1, SOC 2, PCI DSS, HIPAA, FERPA, FISMA and others?

At the 2014 Amazon Web Services (AWS) Summit, the company’s Senior Vice President Andy Jassy was quoted as saying that even with a substantial investment, the average company’s infrastructure is outdated by the time it’s completed.

“With on-prem, you’re going to spend a large amount of money building a relatively frozen platform and implementation that has the functionality that looks a lot like Amazon circa 2010,” Jassy said. “It will improve at a very expensive and slow rate vs. being on something like AWS that has much broader functionality, can deploy more people to keep iterating on your behalf, keep evolving and improving the technology and platform.”

Been there, done that
Forgetting some things from the 1990s is permissible. Dial-up modems and Furbies come to mind. But have we already forgotten all the data processing, servers and networks that we started outsourcing to third parties in the ‘90s? The trend has continued, with IT outsourcing budgets marking healthy increases over the past decade, according to a 2015 CIO Outsourcing Report by NashTech. The sooner we start treating the cloud as a viable form of outsourcing that requires appropriate security controls, the sooner we can all breathe a little easier.

Who knew? The Internet is not infinite

July 22, 2015

By Susan Richardson, Manager/Content Strategy, Code42

In January 2011, the world ran out of Internet addresses. Every device on the Internet—including routers, phones, laptops, game consoles, TVs, thermostats and coffeemakers—needs its own IP address to move data over the Net.

When the Internet began, it seemed like 4.3 billion 32-bit Internet addresses would be ample. Now, tech companies (and governments) are scrambling to move onto a new system for Internet traffic routing. By 2020 there will be an estimated 50 billion devices online—so something’s got to give.

According to the Wall Street Journal, “The limited supply of new Internet Protocol addresses is nearly gone. Asia essentially ran out in 2011, and Europe a year later. North America’s allotment is due to dry up this summer.”

IPv6 saves the Internet!

IPv4 is the fourth revision of the Internet Protocol (IP) used to identify devices on a network through an addressing system. Its successor is Internet Protocol Version 6 (IPv6), a 128-bit address scheme that will provide 340 undecillion IP addresses.

That’s 2^128, or 340,000,000,000,000,000,000,000,000,000,000,000,000 new addresses.
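
The arithmetic is easy to sanity-check in a couple of lines of Python (the 50-billion-device figure is the estimate cited above):

```python
# IPv4 vs. IPv6 address-space arithmetic, using nothing but integers.
IPV4_ADDRESSES = 2 ** 32    # ~4.3 billion
IPV6_ADDRESSES = 2 ** 128   # ~340 undecillion

print(f"IPv4: {IPV4_ADDRESSES:,}")
print(f"IPv6: {IPV6_ADDRESSES:,}")

# Even with an estimated 50 billion devices online, IPv6 leaves an
# astronomical number of addresses per device.
per_device = IPV6_ADDRESSES // (50 * 10 ** 9)
print(f"IPv6 addresses per device at 50B devices: {per_device:.3e}")
```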

IPv6 has been in development since the mid-1990s when it became clear that demand for IP addresses would exceed supply. IPv6 will coexist with and eventually replace IPv4. The transition will happen gradually to avoid the division of the Internet into separate v4 and v6 networks and to ensure connection for all v4 and v6 nodes. Most companies use a strategy called dual stack to ensure that their equipment can use both v4 and v6 for the foreseeable future.

To start deploying IPv6:

    • Ensure all networking equipment (including planned purchases) is IPv6 capable
    • Individuals and businesses can request IPv6 connectivity from their ISP, and users can verify whether their connections support IPv6 using an online test
    • Content creators, developers, and enterprises can make their own websites and content available over IPv6
    • Governments can require IPv6 compliance of all contractors and business relationships, and lead by example in deploying IPv6 across all websites and services
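
As a small aid to the first step in the checklist above, Python’s standard ipaddress module can confirm that tooling handles both address families; the addresses below are illustrative examples, not endpoints you need to reach:

```python
import ipaddress

# A dual-stack host carries addresses in both families; the stdlib
# ipaddress module parses either transparently.
for a in ["93.184.216.34", "2606:2800:220:1:248:1893:25c8:1946"]:
    ip = ipaddress.ip_address(a)
    print(ip.version, ip.compressed)

# IPv4-mapped IPv6 addresses (::ffff:a.b.c.d) are how v6 sockets
# represent v4 peers during the transition.
mapped = ipaddress.IPv6Address("::ffff:93.184.216.34")
print(mapped.ipv4_mapped)  # → 93.184.216.34
```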

Update: The American Registry for Internet Numbers (ARIN) activated the IPv4 Unmet Requests policy (NPRM 4.1.8) in July 2015. For the first time, ARIN is unable to fulfill an IPv4 address request. Requests are now either added to a waiting list, or requestors are referred to an exchange where they have the opportunity to acquire surplus IPs.

CISO role ranges from beat cop to boardroom

July 17, 2015

By Adam Best, Social Media Manager, Code42

Every executive role has changed in the past decade or so, but none more than the chief information security officer. Ten years ago, if you asked someone to describe their CISO, they’d probably answer, “You mean my CIO?”

Out of the server room
In a globally connected economy, data security is arguably more important than physical security; if someone wants to profit from corporate crime or espionage, they go for the data. The fact that the most lucrative crime is electronic has elevated and expanded the role of CISO.

So who is the modern CISO?

On patrol: The modern CISO is a beat cop, making her presence known internally to raise the visibility of security. Like law enforcement in general, the more time spent in a prevention role, the easier other parts of the job are.

Before her current beat, she worked border security: “CISOs used to be 99% concerned with making the firewall impenetrable,” recalls Greg Mancusi-Ungaro, BrandProtect CMO. But that changed with the recognition that protecting the perimeter is not enough. Breaches can happen wherever computers wander, and from inside the organization too. Which leads to….

Threat assessment: Like a military intelligence officer briefing a unit commander, the CISO informs other executives about possible or probable threats, their severity and the resources and actions required to protect against and mitigate each.

She may not play a direct role in the practical implementation of the threat response plan, and she might even be dealing with technology or services not subject to her approval. But she’s ultimately responsible for the plan’s success or failure. However, as the old military adage goes, no plan survives first contact with the enemy. There will be data loss. Time to switch to….

Forensic investigator: When the inevitable breach happens, she’s the medical examiner at the scene of the crime, piecing together what happened, when and how. A chain of evidence must be created. What data was taken? From what source? Where did the data originate, and when was it changed or moved? What methods were used in all of the above? But that’s not all.

Lead detective: The CISO guides the investigation following breach: Where is that data now? Can it be recovered? Were there internal or external bad actors (or both)? Can damage to the company from this incident be prevented or at least mitigated to some extent? Can it be prevented from happening again? How?

Just like a detective assigned to a case, the CISO answers to stakeholders from above and below her pay grade. She helps the company PR team understand what to tell customers and respond to news media inquiries. She informs Legal if missing data is subject to government oversight or regulation and whether the company must disclose the breach. She apprises the CFO who wants to know how much it cost, and when will it be over.

The CISO is the decision and communications hub for some of the most critical incidents the organization will face. “They must be a full partner of HR, Legal and Marketing,” says Mancusi-Ungaro.

A little bit grad student; a little bit Pythia
After the incident, what do you have? A brand new threat to assess for your security posture. Learning from the breach and remembering its lessons is critical, but if he stops there, the CISO might be “fighting the last war” during the next data threat. So like the Oracle of Delphi, he must predict the next challenge, drawing from his experiences while also staying up to date on InfoSec best practices.

Isn’t it ironic?
For a role so concerned with privacy, there is little that is private about the CISO’s work. Beyond InfoSec professionals, the general public and news media are now acutely aware of the importance of data security.

Bill Hargenrader, cybersecurity manager and senior lead technologist at Booz Allen Hamilton, a Fortune 500 technology and strategy consulting firm, says “As the general public hears more about hacking, privileged access violations, and data breaches, there is growing pressure to mitigate the dangers that are present. That’s not to say these types of activities weren’t happening before; as our tools for detection get better, and as the media is quick to pounce on these breaches (for good reason), there is a greater shift towards cybersecurity to address the risk profile for an organization.”

Which adds another interesting coda to the CISO’s role: public affairs officer. Hargenrader’s words of wisdom: “If InfoSec leaders can’t properly communicate the risk to non-cybersecurity versed organizational leadership, then they are at a disadvantage.”

The Art of (Cyber) War

July 15, 2015

By Chris Hines, Product Marketing Manager, Bitglass

“If you know the enemy and know yourself, you need not fear the results of a hundred battles.” – Sun Tzu

We are at war. Cyber criminals vs. enterprises and their security counterparts. Black Hatters vs. White Hatters. If you don’t believe it, do a quick Google search for “data breach” and look at the vast number of headlines that pop up in 0.3 seconds. You’ll probably see a news article posted within the last five hours or so, maybe even in your own industry.

But why war? Why are we fighting in the first place? What are we attempting to protect?

The answers to those questions are quite simple. We are fighting because we must do so in order to protect our customers, our employees, and our data from criminals. These cyber criminals have created sophisticated phishing attacks, hacked public wi-fi networks, stolen sensitive company information, infected enterprise networks and unleashed a litany of other tactics aimed at causing damage. The motivation? In most cases, money and fame.

But we as enterprise stakeholders need not fear. Not if we take the time to truly understand our enemies, and to recognize the weaknesses within our own IT environments. What we need is a battle plan.

The Plan

Using what we’ve seen in recent data breaches, we can understand the methods black hatters are using and predict their moves before they even make them. We know cyber criminals are phishing employees. Let’s train our employees to look out for them and use single sign-on solutions to help limit data exposure. We know malware is trying to infiltrate our environments and siphon off data. Let’s use technology that can recognize malware and cleanse our networks. We know criminals are leveraging our adoption of both cloud applications and BYOD devices to wreak havoc, so let’s work to secure them. We already have the ability to track data anywhere on the Internet. Let’s use this technology to detect anomalous activity and oust breaches before they cause irreparable damage.
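
As a rough sketch of the kind of anomaly detection described above, a simple baseline-and-threshold rule over per-user activity might look like this (a toy illustration, not a production detector; the user names and traffic figures are invented):

```python
import statistics

def flag_anomalies(daily_mb_by_user, today_mb, sigma=3.0):
    """Flag users whose activity today far exceeds their historical baseline.

    daily_mb_by_user: {user: [historical daily MB transferred]}
    today_mb:         {user: MB transferred today}
    Uses a toy mean + sigma * stdev threshold per user.
    """
    flagged = []
    for user, history in daily_mb_by_user.items():
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history)
        if today_mb.get(user, 0.0) > mean + sigma * stdev:
            flagged.append(user)
    return flagged

history = {"alice": [10, 12, 9, 11], "bob": [10, 11, 10, 12]}
today = {"alice": 11, "bob": 500}  # bob suddenly moves 500 MB
print(flag_anomalies(history, today))  # → ['bob']
```

Real detection systems weigh many more signals (time of day, destination, device), but the principle is the same: know normal, so abnormal stands out.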

But we must also recognize the holes within our own systems and ask ourselves: “How do we improve our own security posture?” Do we have visibility into user activity, control over who can access our public cloud applications from mobile endpoints, the ability to stop sensitive data from leaking out to risky destinations? If the answer to any of these is “no,” then fix it. Find the right security solutions that can plug YOUR security gaps. Be honest about the security tools you need, and don’t attempt to repurpose existing security solutions to protect against situations for which they were not intended.

Realize that there is no one “fix all” security solution. We must use a collection of security technologies that will help protect our employees, customers and data from the cyber criminals attempting to pillage enterprise data stores.

So ask yourself this. Do you know the enemy? Do you know your security gaps?

FedRAMP and PCI – A Comparison of Scanning and Penetration Testing Requirements

July 13, 2015

By Matt Wilgus, Director of Security Assessment Services, BrightLine

Overview
In the last 30 days, the FedRAMP Program Management Office (PMO) has published guidance for both vulnerability scanning and penetration testing. The updated guidance comes on the heels of PCI mandating the enhanced penetration testing requirements within its requirement 11.3 as part of the 3.0, now 3.1, version of the DSS. These augmented PCI requirements, introduced in the fall of 2013, took effect on June 30. For many cloud service providers this means the requirements for vulnerability scanning and penetration testing are more thorough and will require additional resources for planning, executing and remediating findings. This article will walk through the updates and discuss the differentiation between FedRAMP and the PCI Data Security Standard (DSS).

Vulnerability Scanning
PCI: Requirement 11.2 of the PCI DSS obliges organizations to, “Run internal and external network vulnerability scans at least quarterly and after any significant change in the network.” Many organizations will provide their ASVs a listing of Internet-facing IP addresses and/or hostnames for the external scans. Internally, a similar process occurs, where a list of in-scope internal IP addresses or hostnames is provided and the scans are performed by the ASV or an in-house team.

FedRAMP: The FedRAMP document titled, “FedRAMP JAB P-ATO Vulnerability Scan Requirements Guide” was developed for CSPs undergoing approval via the Joint Authorization Board (JAB); however, agency authorizations can also use the guide. There are several differences between FedRAMP’s guide and the PCI DSS, including:

  • Scans must be performed with authentication (i.e. credentialed scans), which PCI doesn’t require.
  • Scans include the full system boundary, which is often, but not always, larger than the in-scope PCI environment, which consists of the cardholder data environment (CDE) and associated system components.
  • Scans must be conducted monthly, whereas PCI requires quarterly scans.
  • CSPs must use operating system and network vulnerability scanners, database vulnerability scanners and web application vulnerability scanners. PCI doesn’t provide the same level of detail and alludes to network vulnerability scanners. It is notable that web application scanning is one way to address compliance aspects of PCI DSS requirement 6.6.
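
As a sketch of the cadence difference, a helper that flags gaps between scans exceeding each program’s maximum interval might look like this (the scan dates and the 31/92-day thresholds are illustrative assumptions, not figures from either standard):

```python
from datetime import date

# Approximate maximum intervals between scans, per the guidance above.
MAX_INTERVAL_DAYS = {"FedRAMP": 31, "PCI": 92}  # monthly vs. quarterly

def cadence_gaps(scan_dates, program):
    """Return (earlier, later, gap_in_days) for each pair of consecutive
    scans whose gap exceeds the program's maximum interval."""
    limit = MAX_INTERVAL_DAYS[program]
    ordered = sorted(scan_dates)
    return [
        (earlier, later, (later - earlier).days)
        for earlier, later in zip(ordered, ordered[1:])
        if (later - earlier).days > limit
    ]

scans = [date(2015, 1, 15), date(2015, 2, 12), date(2015, 4, 30)]
print(cadence_gaps(scans, "FedRAMP"))  # the Feb→Apr gap (77 days) is flagged
print(cadence_gaps(scans, "PCI"))      # 77 days still fits a quarterly cadence
```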

Penetration Testing
PCI: For years there has been a debate about how a penetration test should be conducted in support of PCI DSS compliance. In March 2015, the PCI Security Standards Council published an information supplement providing additional guidance. This document offers useful information; however, the PCI Standards Council emphasizes that this is just guidance – and the DSS within Requirement 11.3 is still the letter of the law. Also, testing is required from an internal and external perspective, along with testing networks, systems, and applications. In addition, PCI DSS 3.0 introduced additional requirements for a formal testing methodology and testing of segmentation controls. These new measures went from recommended to required on June 30, 2015.

FedRAMP: Also on June 30, 2015, FedRAMP published a document titled, “FedRAMP Penetration Test Guidance.” The goal of this document was similar to the PCI guidance and has overlapping content within methodology, reporting and qualifications. However, the most significant difference is the emphasis on attack vectors and scope. For example, the PCI guidance states social engineering testing is optional, whereas the FedRAMP guidance details tasks including “unannounced spear phishing exercises targeted at the CSP system administrators.” Additionally, the FedRAMP requirements touch on additional aspects of internal testing, such as those specific tests and attacks that should occur from the perspective of a credentialed system user. Physical (facility) penetration testing is also covered in the FedRAMP guidance. While not recommending that 3PAOs scale walls, it does ask for the 3PAO to verify that locks and other physical security mechanisms are in place. Some of these tasks can also be found in Requirement 9 of the PCI DSS.

Next steps
Unlike the PCI update, the FedRAMP penetration testing guidance did not include an implementation time frame or any caveats around being just “guidance.” As such, the requirements are effective immediately. As this guidance was not available prior to June 30, some assessments underway may not have taken the guidance into account in its entirety. CSPs and 3PAOs are encouraged to work with their JAB or Agency authorizing officials to review the attack vectors and ensure that security assessment plans sufficiently assess risk based on the goals and objectives of FedRAMP and standards such as NIST 800-115.

93 Percent of Cloud Services in Healthcare Are Medium to High Risk

July 10, 2015

By Sam Bleiberg, Corporate Communications Executive, Skyhigh Networks

We recently released our first-ever Cloud Adoption & Risk in Healthcare Report, with anonymized cloud usage data from over 1.6 million employees at healthcare providers and payers. Unlike surveys that ask people to self report their behavior, our report is the first data-driven analysis of how healthcare organizations are embracing cloud services to reduce IT cost, increase productivity, and improve patient outcomes. However, while the cloud has transformed the way healthcare organizations operate and deliver service to their patients, companies are still responsible for ensuring the security of sensitive patient data.

IT may be vigilant in evaluating whether sanctioned cloud services meet organizational policies, but employees can bring cloud services into the workplace. These services, not known by the IT department and referred to as shadow IT, contribute to an industry average of 928 cloud services in use per company. Security teams are responsible for sensitive data uploaded to these cloud services, but IT is typically aware of only 60 cloud services in use – less than 10% of the total. In addition to maintaining compliance with internal policies, regulations like HIPAA and HITECH require that healthcare companies secure protected health information (PHI) even as it migrates to the cloud.

Healthcare organizations have come under fire as the targets of an increasing number of criminal hacks in the past year. The number of healthcare records exposed in the last 12 months now totals 94 million, led by blockbuster breaches at Anthem and CHS. This flurry of attacks is driven by the high price healthcare records fetch on the black market. At an average of $50 per record, an individual healthcare record is worth more than a US-based credit card and personal identity with a social security number combined.

Considering that the healthcare industry is highly regulated and handles some of the most sensitive and personal data about individuals, many statistics from the report are troubling. Download the full report to read all the findings.

Only 7.0% of cloud services are enterprise ready
A mere 7.0% of cloud services in use meet enterprise security and compliance requirements. The average healthcare organization uploads 6.8 TB of data to the cloud each month. That’s more than all of Wikipedia’s archives (5.64 TB)! Just 15.4% of services support multi-factor authentication, a key line of defense in preventing unauthorized access to sensitive data.

Silos of Collaboration Uncovered
The cloud is a revolutionary technology for enabling collaboration between employees, but too many cloud services in use can actually be an impediment to collaboration. The average healthcare company uses 188 collaboration services. We call the ensuing phenomenon “silos of collaboration,” in which employees have difficulty sharing data because there are so many different cloud services in use. Paid licenses for redundant services can also unnecessarily drive up costs.

Undetected Insider Threats
Enterprise-ready cloud services can offer even better security capabilities than on-premise solutions, but even secure cloud services can be used in risky ways. The majority of insider threat incidents are quiet and may not be discovered immediately, if ever. With healthcare records so valuable on the black market, especially for patients with certain statuses or conditions, hospital employees may choose to sell records they have access to. We compared perceptions of insider threat with reality and found that 33% of healthcare companies surveyed reported an insider threat incident in the last year, but 79% of companies had usage behavior indicative of an insider threat.

Employee Passwords on the Loose
There were more software vulnerabilities discovered and more data breaches in 2014 than any other year on record. The result is that many users now have their login credentials for sale on the darknet. In fact, 14.4% of all healthcare employees have a login credential for sale online, exposing 89.2% of organizations. A single health insurance company had 9,932 credentials for sale on the darknet.

Cloud Hyperconnectors in Healthcare
Cloud services are now the main way that employees collaborate across different companies. We discovered that a selection of cloud services, called “cloud hyperconnectors,” were responsible for enabling a large number of these connections. In the customer service category, these services were Zendesk, Salesforce, and Needle. The cloud hyperconnectors in the file-sharing category were ShareFile, Box, and Egnyte. In the collaboration category, the top connecting services were Cisco WebEx, Office 365, and Basecamp.

The Most Prolific Cloud User
How much can a single employee rely on the cloud? We spotlighted one prolific healthcare employee who uses more cloud services than anyone else. The average employee uses 26 cloud services, but the most prolific cloud user actually employs an impressive 444 cloud services including 97 collaboration services and 74 social media services. A surprising 30.6% of these services were high-risk – much greater than the industry average of 5.6%.

These findings are a wake-up call for IT in healthcare organizations: employees are using cloud services now, regardless of sanctioned applications or policies prohibiting cloud use. IT’s new role is to enable secure cloud use, helping employees navigate the cloud while complying with organizational security policies.


The heavy cost of ignoring dwell time

July 9, 2015

By Susan Richardson, Manager/Content Strategy, Code42

If you’re among the 44% of organizations that aren’t measuring Mean Time to Identify (MTTI), more commonly known as dwell time, how will you know if you’re reducing it? Reducing dwell time is a critical step toward improving incident response.

The average dwell time for a major data breach today is months: Mandiant puts it at 205 days, and a Ponemon Institute survey tallied 98 days for Financial Services and 197 days for Retail. With all that free time to roam in your system, attackers can wreak more havoc than just sifting through your information for vulnerabilities, identifying critical information, mapping your network and stealing millions of records.
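
Measuring MTTI itself is simple arithmetic once compromise and detection dates are recorded in incident-response records. A minimal sketch (the incident dates are invented, chosen to mirror the figures cited above):

```python
from datetime import date

def mean_time_to_identify(incidents):
    """MTTI in days: the average of (detection - compromise) across incidents.

    incidents: list of (compromise_date, detection_date) tuples.
    """
    dwell_days = [(found - breached).days for breached, found in incidents]
    return sum(dwell_days) / len(dwell_days)

incidents = [
    (date(2014, 11, 1), date(2015, 5, 25)),  # 205-day dwell
    (date(2015, 1, 10), date(2015, 4, 18)),  # 98-day dwell
]
print(f"MTTI: {mean_time_to_identify(incidents):.1f} days")  # → MTTI: 151.5 days
```

Tracking this number per quarter is what turns "we think we're getting faster" into evidence.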

The fallout of a data breach can also include:

Loss of business: A recent Brunswick Group report found that 34% of customers no longer shopped at a retailer due to a past data breach issue. But retail is among the least-likely industries to experience customer churn. The three industries most susceptible to losing customers following a breach are Health Care, Pharmaceuticals and Financial Services, according to a 2014 Ponemon Institute global study. It also found that France, Italy and the United Kingdom had the highest customer turnover.

Significant lawsuits: An average single data breach claim, according to a recent NetDiligence study, costs a company $733,109, at a cost-per-record of $956.21. Home Depot, in its 10-Q filing with the SEC following a breach, reported that it was facing at least 44 lawsuits. Target paid $10 million to settle its class action lawsuit and another $19 million to reimburse financial institutions for the charges they incurred reissuing compromised cards.

A drop in company valuation: While stock price is affected by many factors, The Brunswick Group analyzed 10 companies that recently experienced a large data breach and found that the average daily stock price dropped and hadn’t yet recovered two quarters later.

Executive casualties: Target CEO Gregg Steinhafel resigned five months after the retailer’s highly publicized breach in December 2013, with the company’s official statement noting “he held himself personally responsible.”

An LA Times story following the more recent Anthem data breach talks about what’s at stake for company CEO Joseph Swedish, who was already fighting to improve the insurer’s customer service reputation before hackers compromised 78 million records in a typosquatting scheme.

Bridging The Chasm Between Business and IT – The GRC Way

July 6, 2015

By Rajesh Raman, Vice President, Zaplet/MetricStream

Business and IT
In today’s world, company operations function at two distinct levels: the business operation level and the IT infrastructure operation level. While the two functions operate independently, IT exists to support the business. Many IT operations, like the deployment and management of IT infrastructure, applications and services, are driven by the business layer’s requirements in a top-down fashion to enable the company to carry out its business. IT infrastructure management, including addressing cyber security risks, is done exclusively in the IT layer. There are several tools, such as FireEye, McAfee, Qualys, ArcSight and BMC Software, which IT deploys and uses to identify and manage IT security risk, but something is missing.

A chasm exists between the IT layer and business layer, when looked at from a bottom-up perspective.

Let’s say someone hacked into your organization’s network, and some data was compromised. What does that IT event really mean for the business? It’s vital to understand that in business terms because that event could potentially put the company at serious risk.

Perhaps the data breach jeopardized the company’s financial data; then it will need to do some proactive reporting. Perhaps the data breach made the company non-compliant with a regulatory requirement; then it will need to re-certify. Perhaps the data breach compromised personnel records, like the June 2015 federal government hack; then the company will need to alert its employees.

The point is you need a unified business and IT perspective towards comprehensive enterprise risk assessment and management.

Don’t lose the signal in the noise
The massive 2013 Target data breach showed what can happen when you ignore the gap between IT and business risks.

Target was PCI-certified (Payment Card Industry), thanks in part to a $1.6 million malware detection system from FireEye. On November 30, 2013, Target’s security team in Bangalore, India, received alerts from FireEye, and informed Target headquarters in Minneapolis. But no one foresaw the risk to the business. In the words of Molly Snyder, a Target spokeswoman: “Based on their interpretation and evaluation of that activity, the team determined that it did not warrant immediate follow up.”

Obviously something critical got lost in the noise. The first IT event detected was not high priority—from the IT perspective. But from the business perspective, red flags waved: the event occurred during the busiest shopping period and involved customers’ credit card information.

The data breach cut Target’s profit for the holiday shopping period by 46 percent, compared to the previous year. Worse yet, Target still faces dozens of potential class-action lawsuits and legal actions from creditors.

Prioritization is Key
Today’s IT departments cope with a tsunami of security events, but how do they know which one to prioritize? How do they know that an event that may, on the surface, seem trivial and unimportant can have significant impact and jeopardize the business?

There is an obvious need to fill this gap, so that low-level IT events can be mapped to enterprise risk from a bottom-up perspective.

  • A Governance, Risk and Compliance (GRC) system promises to help fill this gap between the IT and business layers.
  • GRC systems help organizations connect the dots across key areas: the limits for regulatory compliance, the analytics for risk management, and the metrics for risk controls. Because a GRC system spans the enterprise, it can help guide and prioritize the appropriate response.
  • When setting up a GRC system, a company must define its critical assets, metrics, and risk assessment controls. The system can help manage and prioritize anything that impacts regulatory compliance, such as Payment Card Industry (PCI) compliance or the Health Insurance Portability and Accountability Act (HIPAA).
  • A GRC solution provides a bottom-up approach to managing and addressing IT events by keeping the business needs in mind. An apparently low-level risk will be given higher priority if it threatens a critical asset, or if it jeopardizes a regulatory requirement. This integrated and pervasive 360-degree view is where the value of a GRC solution lies.
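
A toy sketch of that bottom-up weighting, where a low raw severity score gets escalated because the affected asset is business-critical or in regulatory scope (the asset names and multipliers are invented for illustration, not taken from any GRC product):

```python
def grc_priority(event, critical_assets, regulated_assets):
    """Weight a raw IT severity score by the business context of the asset."""
    score = event["severity"]  # raw score from the IT tool, e.g. 1-10
    if event["asset"] in critical_assets:
        score *= 3  # touches a critical business asset
    if event["asset"] in regulated_assets:
        score *= 2  # in PCI / HIPAA scope
    return score

critical = {"payments-db"}
regulated = {"payments-db", "patient-records"}
events = [
    {"asset": "dev-sandbox", "severity": 8},
    {"asset": "payments-db", "severity": 3},  # "trivial" alert, critical asset
]
ranked = sorted(events, key=lambda e: grc_priority(e, critical, regulated), reverse=True)
print([e["asset"] for e in ranked])  # → ['payments-db', 'dev-sandbox']
```

Note how the seemingly trivial severity-3 alert on the payments database outranks a severity-8 alert in a sandbox, which is exactly the reversal the Target example illustrates.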

Visibility
It’s all about understanding the business risk context when prioritizing IT assets and responses to IT events.

Chief Risk Officers (CROs) see and understand key business risks. They have the visibility and they can make the call. They report on the organization’s risk profile to the board of directors, leveraging a variety of tools and risk dashboards. Equally important, they collaborate across the C-suite and provide management with guidance on what needs to be addressed, how, and when.

Needles in Haystacks
Moving forward, I see challenges ahead on all three fronts: regulations, systems and threats.

Regulatory requirements are increasing, making it harder for companies to stay 100 percent compliant with all the appropriate risk controls in place. In terms of systems, an organization’s IT footprint and adoption of cloud-based applications are constantly evolving and expanding.

Meanwhile, the variety and number of cyber threats are increasing, and malware is becoming ever more sophisticated. I anticipate that the volume and severity of IT events will increase significantly. Figuring out which events will have the biggest impact on a company’s business is like finding a needle in a haystack. But it need not be.

That’s precisely where a GRC system can help. By leveraging the correct GRC analytics and intelligence, organizations are able to identify and understand their risks from both the business and IT perspective. Bridging this gap can lead to better data-driven decision making and superior business performance.

(Image Source: Shutterstock)

Originally posted in CloudTweaks.

Six archetypes of insider exfiltration

July 2, 2015 | Leave a Comment

By Susan Richardson, Manager/Content Strategy, Code42

With all the talk about insider threats and the potentially dangerous brew of nomadic employees and data-to-go, there’s no time like the present to identify behaviors that come before a data leak.

Here are the top six:

  1. The Ship Jumper: Frequent absences, unexplained disappearances or unexpected medical appointments point to an employee who’s unhappy, distracted or looking to jump ship. Workers who have accepted a new job are the most likely to give data to a competitor. In what must be the most common insider threat scenario, a sales representative leaves the company for a competitor, taking sales opportunities with him. Concern over defectors leaving with data is prevalent in organizations of all industries and sizes, especially in competitive markets. Stealing customer data and leads is not only incredibly detrimental to the business, it is also difficult to detect because it occurs in unsanctioned applications outside corporate visibility.
  2. The Unhappy Camper: An employee who has been reviewed poorly or put on a performance improvement plan may seek revenge. When a bad performance review has been delivered, HR and IT should communicate so both can heighten monitoring. In a case where an IT employee was disgruntled, the hosting service Code Spaces was forced to go out of business when an attacker gained access to their Amazon Web Services (AWS) control panel and deleted customer data and backups.
  3. The Spendthrift: When an employee talks excessively about money, gets calls from collection agencies or takes a second job it may be a clue that he or she is experiencing financial problems. Be wary: these folks may steal data or sabotage company systems for personal gain.
  4. The Angler: “Atypical” computer behaviors can be a tell that company data is being exfiltrated: an employee taking a computer home for the first time, attempts to exfiltrate CRM data, changed computer configurations, repeated attempts to access privileged folders on the intranet or a shared drive, or the sudden appearance of external drives used to back up data.
  5. The Uploader: If employees are using personal clouds, it’s highly likely they’re uploading files to take home (or elsewhere). Also, if the free space on an employee’s computer increases, he or she may be deleting files to cover their tracks.
  6. The Ex: When office romance goes bad, some scorned lovers may seek to access personnel files or other personal information to “stalk” ex-lovers. Watch for increased failed password attempts. Other acts of revenge may be far more serious, like this one reported in the Harvard Business Review:

A manager complained to his superior about the person in question—a systems administrator who had been sending him flowers at work and inappropriate text messages and had continually driven past his home. Once clearly rejected, the attacker corrupted the company’s database of training videos and rendered the backups inaccessible. The company fired him. But knowing that it lacked proof of his culpability, he blackmailed it for several thousand euros by threatening to publicize its lack of security, which might have damaged an upcoming IPO. This costly incident—like most other insider crimes—went unreported.

It’s common sense to remove terminated employees from systems, yet a 2014 infosec survey showed that 13 percent of respondents still had access to previous employers’ systems using their own credentials. It is critical to void passwords, privileges and user accounts immediately—and to document and adhere to “stand down” procedures to protect the enterprise.
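Several of the archetypes above boil down to observable endpoint signals. The sketch below shows one way such signals could be flagged; the field names and thresholds are hypothetical examples of the behaviors described, not a production detection rule set.

```python
# Illustrative flagging of insider-exfiltration indicators drawn from the
# archetypes above. Thresholds and field names are invented for this sketch.
from dataclasses import dataclass

@dataclass
class EndpointSnapshot:
    user: str
    failed_logins: int = 0            # "The Ex": repeated failed password attempts
    new_external_drive: bool = False  # "The Angler": sudden external drive appears
    free_space_delta_gb: float = 0.0  # "The Uploader": files deleted to cover tracks
    cloud_upload_gb: float = 0.0      # "The Uploader": bulk personal-cloud uploads

def exfiltration_indicators(s: EndpointSnapshot) -> list[str]:
    """Return the human-readable warning signs present in one snapshot."""
    flags = []
    if s.failed_logins >= 5:
        flags.append("repeated failed logins")
    if s.new_external_drive:
        flags.append("new external drive attached")
    if s.free_space_delta_gb > 10:
        flags.append("large drop in stored data")
    if s.cloud_upload_gb > 1:
        flags.append("bulk upload to personal cloud")
    return flags

snapshot = EndpointSnapshot("j.doe", failed_logins=7, new_external_drive=True)
print(exfiltration_indicators(snapshot))
```

In practice no single signal is proof of wrongdoing; the value is in correlating several of them, ideally alongside the HR context (resignations, poor reviews, terminations) discussed above.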

Cloud Security Open API: The Future of Cloud Security

June 29, 2015 | Leave a Comment

Today, CipherCloud announced that the Cloud Security Alliance (CSA) is launching a Cloud Security Open API Working Group, co-led by CipherCloud. The charter of the working group is to provide guidance for enterprises and cloud service providers on the operation and interoperability of cloud security functions, with a specific goal to protect PII and sensitive data across multiple clouds. Current members of the working group include Deloitte, Intel Security, CipherCloud, SAP, Symantec, Infosys, and a few others.

Unlike many API efforts where the APIs typically allow access to a particular solution provider’s core code base, this effort aims to span multiple cloud services and bridge the gap between proprietary cloud environments.

Why Focus on Cloud Security Open APIs?
As cloud deployments become more extensive in the enterprise, the ecosystem surrounding them is becoming more and more complex. The number of places that touch personal data, company intellectual property, and other confidential information is quickly ballooning out of control. The conceptual diagram below illustrates the cloud ecosystem of an enterprise. As shown, personal data from the enterprise could go into CSP1, CSP2, and CSP3. In addition, partner app 1 and partner app 2 may process personal data, as may the ISVs that help with integration and customization efforts.

[Diagram: an enterprise’s cloud ecosystem, with personal data flowing to multiple CSPs, partner apps, and ISVs]

For an enterprise to retain complete control over its security- and compliance-sensitive data in such an environment requires a monumental effort. Not only do you need complete visibility into the entire ecosystem, including partner applications outside the clouds with which you work directly, you must also exercise gate-keeping functions at each integration point, which quickly becomes unscalable.

The Cloud Security Open APIs provide a layer of abstraction via which cloud users and third-party technology providers can access and integrate with the core functions of cloud services. This common layer of abstraction across clouds gives end-user organizations the ability to exercise standard integrations with ease, eliminating the need for costly one-off custom development efforts. Ultimately, this will accelerate the pace of cloud adoption and innovation.

An analogy to the Cloud Security Open APIs is the Automated Clearing House (ACH) network in the banking industry. ACH is a widely adopted industry standard across different financial institutions and clearing houses. A bank can switch from one clearing house to another without changing the way it does funds transfers and payment processing. This is possible because the clearing houses and the banking institutions all adhere to the ACH standards. In a way, the Cloud Security Open APIs are the ACH standard for cloud security operations.
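The abstraction layer described above can be sketched as a common interface with per-provider adapters. To be clear, the interface and method names below are invented for illustration; they are not the working group’s actual API specification.

```python
# Hypothetical sketch of a cross-cloud security abstraction layer.
# Method names (encrypt_field, audit_events) are illustrative only.
from abc import ABC, abstractmethod

class CloudSecurityAPI(ABC):
    """One interface, many cloud back ends: callers integrate once."""

    @abstractmethod
    def encrypt_field(self, record_id: str, field: str) -> None: ...

    @abstractmethod
    def audit_events(self, since_iso: str) -> list[dict]: ...

class CSP1Adapter(CloudSecurityAPI):
    """Each provider supplies an adapter behind the common interface."""

    def __init__(self):
        self.log: list[dict] = []

    def encrypt_field(self, record_id: str, field: str) -> None:
        # A real adapter would call the provider's proprietary API here.
        self.log.append({"op": "encrypt", "id": record_id, "field": field})

    def audit_events(self, since_iso: str) -> list[dict]:
        return self.log

def protect_pii(api: CloudSecurityAPI, record_id: str) -> None:
    # Cross-cloud code depends only on the abstraction, the way banks
    # depend on ACH rather than on any one clearing house.
    api.encrypt_field(record_id, "ssn")

csp = CSP1Adapter()
protect_pii(csp, "rec-42")
print(csp.audit_events("2015-01-01"))
```

Swapping CSP1 for another provider then means writing a new adapter, not rewriting every integration that calls `protect_pii`.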

Benefits of Cloud Security Open APIs
Expedite cloud deployments: A well-known and standard API layer will give enterprise developers the ability to leverage core cloud functions quickly, thus expediting the pace of cloud deployments.

Foster cross-cloud innovations: With the Cloud Security Open APIs, developers now have a way to write cross-cloud functions without having to custom-integrate with each cloud they touch. This may open up breakthrough innovations: new economic avenues and new ways of doing business for cloud users and providers alike.

Extend cloud services’ reach to new functionality: From the perspective of a cloud service provider (CSP), the Cloud Security Open APIs will allow a much larger set of developers (than those within the CSP’s own company) to leverage the CSP’s core code base/data and deliver adjacent functionality. Sometimes this model can lead to entirely new and unexpected user experiences and technology advances, which can make the service much more appealing to end users.

What Will the Working Group Produce and What Does It Mean for You?
Today the business drivers for the Cloud Security Open APIs are about eliminating business and technology frictions when organizations move to embrace cloud applications. With this in mind, the working group will execute this roadmap going forward:

  1. Define a set of concrete security use cases covered by the Open APIs
  2. Produce the Cloud Security Open API framework
  3. Generate a reference architecture that implements the API framework
  4. Produce industry guidance and white papers

If you are a cloud service provider, participating in the Open API program will allow you to go beyond just a service and become a platform for innovation. If you are a technology provider to the cloud environment, being part of the Open API will make your offering more agile and more appealing to a broad set of partners and users. If you are an end-user organization, the Cloud Security Open APIs really aim to make your life easier and should represent what you want to see in the security ecosystem. Your input therefore is extremely important.

The CSA Working Group and ways to participate can be found here. Get involved and get your voice heard!
