August 11, 2016
By Ganesh Kirti, Founder and CTO, Palerra
Cloud Access Security Broker (CASB) software has emerged to help IT get its arms around the full cloud security situation. CASBs are security policy enforcement points between cloud service users and one or more cloud service providers. They can reside on the enterprise’s premises, or a cloud provider can host them. Either way, CASBs provide information security professionals with a critical control point for the secure and compliant use of cloud services across multiple cloud providers. They enforce the many layers of an enterprise’s security policies as users, devices, and other cloud entities attempt to access cloud resources.
Exactly how the CASB integrates your security policies with cloud access makes a big difference in the comprehensiveness of your security solution and network performance. There are two main CASB deployments: API and Proxy.
An in-line proxy solution checks and filters known users and devices through a single gateway. Because all traffic flows through a single checkpoint, the proxy can take security action in real time. Unfortunately, the single checkpoint also means that it slows network performance and secures only known users. Further, proxy-based solutions only secure SaaS cloud services, leaving IaaS and PaaS clouds vulnerable.
An API-based CASB is an out-of-band solution that does not follow the same network path as the data. Because it integrates directly with cloud services, an API-based solution causes no performance degradation, and it secures both managed and unmanaged traffic across SaaS, IaaS, and PaaS cloud services.
Some industry experts recommend a multimode approach, which is a CASB architecture that supports both API and proxy approaches. In reality, both API and proxy approaches achieve multimode functionality, though they do it differently.
As enterprises move more business-critical functions to the cloud, implementing a CASB has become a mandatory control. Prior to choosing a CASB, it is important to know the facts on the alternatives so you can make the choice that is best for you.
To learn more, join Palerra CTO Ganesh Kirti and CSA Co-Founder and CEO Jim Reavis as they discuss “API vs. Proxy: Understanding How to Get the Best Protection from Your CASB” today. Register for the webinar now, and download the full white paper for more information about API vs. Proxy CASB architecture.
August 5, 2016
By Susan Richardson, Manager/Content Strategy, Code42
The growing ransomware threat isn’t just about more cybercriminals using the same cryptoware tools. The tools themselves are rapidly growing more sophisticated—and more dangerous.
Ransomware growing exponentially, with no signs of slowing
A new report from InformationWeek’s Dark Reading highlights key trends in the ransomware landscape, starting with the dramatic increase in total ransomware attacks. Ransomware attacks increased by 165 percent in 2015 (Lastline Labs), and this trend isn’t letting up. Anti-spyware company Enigma Software reported a 158 percent jump in the number of ransomware samples it detected between February and March 2016—and April 2016 was the worst month on record for ransomware in the U.S.
It’s also clear that ransomware growth is independent of the overall increase in cyberattacks over the past several years. The 2016 DBIR reported that phishing attacks are more common than ever, and Proofpoint found that in the first quarter of 2016, nearly 1 in 4 (24%) of all email attacks using malicious attachments contained just one strain of ransomware (Locky).
Not just more common—ransomware growing stronger and more effective
Most alarmingly, Dark Reading reports that cyberattackers are rapidly evolving and diversifying their ransomware arsenal. Ransomware has become big business, and with that cash flow comes development of more complex ransomware strains and more clever techniques for infecting targets. In an ironic twist, creators of popular ransomware such as Locky are now working to “protect” their cryptoware from enterprising copycats who create knockoff versions and variants. No honor among thieves, indeed.
Better phishing lures, more brute-force attacks
Dark Reading spotlighted two examples of this increasing sophistication. On the one hand, cybercriminals are developing new, more obscure ways of luring a user into installing ransomware. From personalized landing pages to hacking a device’s boot-up process, stopping these techniques is much more complicated than just saying, “Don’t click suspicious links.”
At the same time, attackers increasingly skip the phishing lure and go straight to brute-force attacks on internet-connected remote desktop servers. For the skilled hacker, this technique is more reliable than phishing, and immediately gets the attacker much deeper into an enterprise network, allowing them to compromise more devices and ransom more data.
“No backup, no protection”
With ransomware mutating into an even bigger threat, Dark Reading encouraged companies to go back to basics, citing data backup as the essential first step in enterprise ransomware defense. We couldn’t agree more. No matter how complex and advanced the ransomware, modern endpoint backup isn’t scared. Modern endpoint backup gives you guaranteed recovery in the face of ransomware. But its protection goes beyond backup: Modern endpoint backup sees your endpoint data, sees your users’ endpoint activities, and gives you the visibility to identify and neutralize an attack as soon as it hits.
Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.
August 3, 2016
By Atri Chatterjee, CMO, Zscaler
Today’s smart enterprises, regardless of size, should be looking at a Secure Web Gateway (SWG) as part of their defense-in-depth security strategy. In Gartner’s opinion, if you aren’t using an SWG, you are in all likelihood leaving a hole in your enterprise security strategy. Firewalls – previous, current or next generation – are not enough because they do not provide the level of protection needed: deep content inspection of all web traffic, including encrypted (SSL) traffic; data leak prevention (DLP); and application control.
Once you’ve decided to deploy SWG technology, or are looking to upgrade or refresh your existing SWG environment, it’s important to consider the various deployment options: appliance, cloud or hybrid. In Gartner’s words:
“The market for secure web gateway solutions is still dominated by traditional on-premises appliances. However, cloud-based services continue to grow at a faster rate than appliances, leaving many vendors struggling to adapt.”
Gartner goes on to estimate that cloud-based SWG security is growing at a significantly higher rate than traditional appliance-based security – 35% CAGR for cloud-based solutions compared to 6% for on-premises appliances. So it should be no surprise that cloud-based solutions play an important role in Gartner’s 2016 Magic Quadrant for SWG.
With this in mind, I recently sat down with the Cloud Security Alliance’s (CSA) Founder and CEO Jim Reavis to talk about the results presented in Gartner’s SWG Magic Quadrant, the role of SWG in enterprise security, and what the future holds in store for security. In the event you missed our webcast, you can listen to it here.
Zscaler is revolutionizing Internet security with the industry’s first Security as a Service platform. As the most innovative firm in the $35 billion security market, Zscaler is used by more than 5,000 leading organizations, including 50 of the Fortune 500. Zscaler ensures that more than 15 million users worldwide are protected against cyber attacks and data breaches while staying fully compliant with corporate and regulatory policies.
Zscaler is a Gartner Magic Quadrant leader for Secure Web Gateways and delivers a safe and productive Internet experience for every user, from any device and from any location — 100% in the cloud. With its multi-tenant, distributed cloud security platform, Zscaler effectively moves security into the internet backbone, operating in more than 100 data centers around the world and enabling organizations to fully leverage the promise of cloud and mobile computing with unparalleled and uncompromising protection and performance.
July 25, 2016
By Jacob Ansari, Manager, Schellman
Despite their perpetual status as old news, passwords and their security weaknesses continue to make headlines and disrupt security in ever-expanding ways. The usual advice about better protection continues to go unheeded or, more worryingly, no longer addresses the threats. As attacks continue to improve, they show that a cracked password for a given user account often has significant value beyond just the compromised environment.
We do not sow.
Attackers and security testers have been cracking passwords for decades. The usual situation involves capturing the cryptographic hash of the password: the password has undergone a one-way cryptographic transformation that has no decryption function (unlike encryption, which uses a key to encrypt or decrypt), so the only way to discover the value is to guess it, transform the guess using the same hash function, and compare the output against the captured password hash. While this may seem improbable, attackers have successfully cracked passwords this way for a long time. In part this results from bad password hash functions, but most of the success comes from easily guessed passwords.
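To make the guess-hash-compare loop concrete, here is a minimal dictionary-attack sketch in Python. It uses an unsalted SHA-256 hash and a tiny hypothetical wordlist purely for illustration; real attacks target whatever hash function the breached system actually used.

```python
import hashlib

def crack(target_hash, wordlist):
    # Hash each candidate with the same function and compare to the target.
    for candidate in wordlist:
        if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
            return candidate
    return None  # no candidate matched

# Simulate a leaked, unsalted hash of a weak password:
leaked = hashlib.sha256(b"sunshine1").hexdigest()
print(crack(leaked, ["password", "letmein", "sunshine1"]))  # -> sunshine1
```

Note that the attacker never "decrypts" anything: the hash is only ever compared against hashes of guesses, which is why weak, guessable passwords fall quickly.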
The North remembers.
Security incidents that expose passwords have a few significant effects. The most obvious, that an attacker can access that user’s account, is perhaps the least significant, barring a compromise of something significant like an online banking application, a work-related system, or a regularly used social media platform. The more likely scenario is that this compromised password is the same password used by the same individual for other accounts, and an attacker now has a pretty good guess at the password of something more valuable. The less well understood, but perhaps more important consideration, is that actual password disclosures, particularly on a large scale, improve the ability to crack passwords in the future.
That’s what I do. I drink and I know things.
Cracking passwords by guessing purely random strings of characters takes a comparatively long time in terms of computing effort. Because users typically pick easily guessed passwords, those who crack passwords have learned to take some shortcuts. In the beginning, these were lists of words, derived from dictionaries or other sources, but containing little insight about how users actually selected passwords.
Advancements such as rules for modifying words from the list (substituting an “e” for a “3” or appending a symbol like a “!” to the end of a word), or narrowing brute-force attempts to set patterns like four alphabetic characters followed by three numerals, brought incremental improvements, but these still guessed at the nature of user passwords rather than relying on much actual data. That changed with security incidents that exposed large numbers of passwords, such as the RockYou incident in 2010 and the LinkedIn incident in 2012. These events offered password crackers, both the proverbial good guys and bad guys, major insight into the ways users select passwords, and crackers can now use previously cracked passwords as the basis for new cracking efforts. Given the high probability of password reuse, the ever-increasing knowledge of the patterns that successfully match user passwords, and the easy accessibility of specialized hardware and software tools (you can have an effective password cracker running on Amazon Web Services in less than an hour), each significant breach of credentials drives a feedback loop: it improves password cracking, which yields a more effective crack of the next password breach, which in turn improves our collective knowledge and ability to crack passwords.
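The rule-based mangling described above can be sketched in a few lines. The substitutions and suffixes below are illustrative stand-ins, not a real hashcat or John the Ripper ruleset:

```python
def mangle(word):
    # Generate common human-style variants of a base word.
    variants = {word, word.capitalize()}
    # Leetspeak substitutions: e->3, a->@, o->0
    variants.add(word.translate(str.maketrans({"e": "3", "a": "@", "o": "0"})))
    # Common suffixes users append to satisfy complexity rules
    for suffix in ("1", "!", "123", "2016"):
        variants.add(word + suffix)
        variants.add(word.capitalize() + suffix)
    return variants

print(sorted(mangle("dragon")))  # includes 'Dragon1', 'dr@g0n', 'dragon!'
```

Each rule multiplies a wordlist's coverage cheaply, which is why rule engines dominated before large real-world password corpora became available.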
And make no mistake, the dead are coming.
Simply put, passwords that our minds are capable of remembering without assiduous effort are too susceptible to password cracking techniques. Also, reusing the same password across more than one account creates significant risk that an attacker who obtains the password can leverage that credential to attack the user or the user’s employer more significantly (perhaps more embarrassing than dangerous is the recent news alleging that Mark Zuckerberg’s LinkedIn password was the same bad password he used for Twitter and Pinterest, although it illustrates the point quite splendidly). While regulatory requirements may call for a certain password complexity that humans can easily remember, and security advice from a few years ago suggests memory tricks to improve password selection and recall, the reality of modern cracking efforts leads to this: select a unique, random, lengthy password (ideally 20 characters or more) for each account, and do not reuse it.
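Generating such a password is straightforward with a cryptographic random source. Here is one way in Python; the 24-character length is an arbitrary choice above the 20-character floor suggested here:

```python
import secrets
import string

def generate_password(length=24):
    # Draw each character from a CSPRNG (secrets), never from random.random().
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Of course, nobody memorizes dozens of strings like this, which is exactly the problem the next section addresses.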
The practical outcome of needing large, random, unique passwords is the urgent need for some sort of password vault.
Today, this typically takes one of two shapes: an application run locally on a computer or mobile device, such as KeePass or PasswordSafe, or a web service like LastPass. Like most security decisions, this involves a series of tradeoffs for matters of trust, usability, and protection of your credentials. Using an open-source local application like KeePass gives you perhaps more control over your accounts than a web service like LastPass. Additionally, the cost is usually $0 for the open source option. However, LastPass offers a number of useful features like accessibility on your mobile devices, a forgot password feature (which local applications usually do not have), and some ease-of-use features for browsers. Both also have security issues, as LastPass has reported some security incidents and local applications have security vulnerabilities like every other piece of software in existence.
That said, either choice constitutes a significant security improvement over reusing easily guessed passwords, and the difference between the two is very small when placed next to the problem of doing neither.
I am the horn that wakes the sleeper. I am the shield that guards the realms of men.
As a consumer of Internet services, the best security advice is to begin transitioning to a password vault of some sort as soon as possible, and to enable multi-factor authentication for as many accounts as will support it (e.g., Amazon, Google). As an organization that operates applications where users authenticate, support strong passwords (shame on you if your site has a maximum password length or disallows certain special characters) and start working on supporting multi-factor authentication. For your password storage, follow current best practices: use slow hash functions like bcrypt with good, random salts, and move away from outdated hash functions like MD5 or SHA1 (which we still frequently see during assessments). Attacks get better, not worse, and attacks against passwords get better with almost blinding speed. Incremental defenses like requiring a few more characters of minimum length won’t suffice; a good defense needs to change the game of authentication altogether.
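The article names bcrypt; since bcrypt is a third-party dependency in Python, this sketch illustrates the same storage pattern (per-user random salt, tunable work factor, constant-time comparison) using the standard library's PBKDF2 instead:

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    # A per-user random salt defeats precomputed (rainbow) tables;
    # the iteration count makes each cracking guess expensive.
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify(password, salt, iterations, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, stored)  # constant-time compare

salt, iters, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, iters, digest))  # True
print(verify("Tr0ub4dor&3", salt, iters, digest))                   # False
```

Store the salt and iteration count alongside the digest; raising the iteration count over time keeps pace with faster cracking hardware.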
July 22, 2016
By Ann Fellman, Vice President/Marketing and Enterprise Product Marketing Director, Code42
Picture this: You’re enjoying a beautiful summer Saturday, watching your kid on the soccer field, when your phone rings. It’s work. Bummer. “Hi, this is Ben from the InfoSec team. It appears that John Doe, whose last day is next Friday, just downloaded the entire contents of his work hard drive to an external drive. Given his role, there’s a high probability that it includes confidential and sensitive employee data.”
There goes your Saturday.
It happened to us—it’s probably happened to you
This happened to us at Code42 a few months ago. A longtime employee was coming up on his last day, and innocently wanted to take years of work with him. We’ve all probably done this—grabbed some templates and examples of our work to use in our next chapter—and instead of sorting through years’ worth of work, it’s just easier to copy the whole drive. Unfortunately, this is against company policy and puts the company at risk. And in this case, there were confidential and sensitive files related to company personnel.
Not all data theft is malicious, but it’s still dangerous
Fifty percent of departing employees take sensitive or confidential data, and most of them are not malicious. Some don’t know the rules; some don’t follow the rules; and most see no harm in their small actions. At Code42, we’re fortunate to have great people, and they have good intentions. But even the best intentions can have terrible consequences, especially when it comes to enterprise data security.
Too often, “innocent” data taken by employees inadvertently includes sensitive corporate data such as financial information, employee data, trade secrets or even customer information. There are risks and costs associated with leaked data, but knowing exactly what was leaked, and where it went, greatly reduces both.
Code42 CrashPlan avenges data theft—saves the weekend
Back on the sunny soccer field, instead of dreading the fallout from this particular data pilfer, I make a single phone call and spend no time worrying about the cost of tracking down or recreating lost files, or dealing with a potential breach.
With Code42 CrashPlan, I have complete certainty that all of this employee’s endpoint data is backed up, down to the minute. And I know our InfoSec team can tell me what the data is, what was copied and where it was copied to—down to the serial number of the external drive.
Modern endpoint backup: sees what data you have and knows where it goes
From there, the resolution is quick and—while it sounds dramatic—painless. A company representative contacts the departing employee, explains that we observed the contents of the hard drive being copied to an external drive, and requests that the drive be returned to Code42 on Monday morning. The employee promptly returns it.
And the best part of the story: I enjoyed the rest of the weekend without the threat of data theft clouding the summer sky.
This is the power of modern endpoint backup. No matter where insider threat comes from—malicious lone wolves, employees conspiring with external actors, or well-intentioned, accidental rule-breakers—modern endpoint backup sees it all, in real time.
Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.
July 15, 2016
By Jim Reavis, Co-founder and CEO, Cloud Security Alliance
As cloud computing and unmanaged endpoints continue to gain traction, it is a foregone conclusion that information security technical controls must become more virtual – that is to say, software-based. Rapidly disappearing are the days of physical perimeters and hardwired network architectures.
One of Cloud Security Alliance’s most promising research projects, Software Defined Perimeter (SDP), looks to accelerate the implementation of virtual controls to make organizations more secure without losing the agility cloud and mobility offer. SDP is inspired by the military’s classified, “need to know” network access model. SDP provides the blueprint for an on-demand, point-of-use security perimeter with a tremendous number of interesting security use cases.
The linked slide deck is a presentation about SDP from Kirk House, an SDP Working Group leader and Global Director, Enterprise Architecture at The Coca-Cola Company. Kirk’s presentation provides an enterprise view of how we need to rethink security with SDP. By starting with zero trust, SDP makes it possible to segment applications, eliminate a wide variety of intermediate attack vectors, and achieve greater overall security—a compelling combination.
Software Defined Perimeter is coming to you soon, and I hope you will take the time to learn more about it.
July 15, 2016
By Mark Wojtasiak, Director of Product Marketing, Code42
Gartner’s June 2016 article, “Use These Five Backup and Recovery Best Practices to Protect Against Ransomware,” outlines five steps for mitigating the threat and/or risk of being hit with ransomware. I will spare you the market stats and dollar figures intended to scare you into taking action now. If you have an affinity for ransomware horror stories, click here, here, here, or even here.
Instead, let’s look at Gartner’s best practices and determine whether you believe we are a legit provider of ransomware protection. Heads-up: one-third of our customers have recovered from a ransomware attack using our endpoint backup + restore software, so Code42 customers represent.
Gartner Step 1: Form a single crisis management team
Typically, a crisis management team consists only of the customer’s employees, but Code42 has a virtual seat at this table. Every day, Code42 system engineers, IT staff, product managers, developers, professional services and customer support staff meet to discuss and address issues raised by our customers. This response team works together to solve customer problems so customers can effectively conduct internal risk assessments and respond to incidents that threaten the health of their endpoint data.
Gartner Step 2: Implement endpoint backup
This IS our responsibility, and we are the best at it—so say our customers, including one senior IT manager who said: “CrashPlan gives me immense confidence as an IT manager. Case in point: an executive was traveling to Switzerland for a big presentation and had his laptop stolen en route. He was able to go to an Apple store, purchase a new machine, install CrashPlan, sign in and restore his files in time for the presentation. And we won the business. I was able to talk him through this on a five-minute phone call. It does not get better than that.” (Click here to read the entire review.*) Or instead of reading through all the reviews and case studies, we can cut to the chase and simply answer the question: Why are we the best? Because we deliver what matters most to enterprise customers—from end users to admins to executives.
- It just works. Code42 works continuously to back up your data no matter the device, no matter the network. In fact, 7/10 IT admins consider themselves more productive after deploying Code42, which translates to more time focused on projects that are more strategic and rewarding.
- It scales bigger and faster than any other enterprise endpoint backup solution.
- Service and support is “stellar,” according to our customers. But don’t take our word for that, take theirs.
Gartner Step 3: Identify network storage locations and servers vulnerable to ransomware encryption
Yes, you need to protect your servers, but let’s get to the point: or rather, let’s start at the endpoint, where 95 percent of ransomware attacks originate. Server backup wasn’t designed to restore data to endpoints.
Gartner Step 4: Develop appropriate RPOs and backup cadences for network storage and servers
We choose to focus on the source of attack, where we are the best at meeting recovery point objectives (RPOs) and backup cadences. Our default backup frequency is 15 minutes, configurable down to one minute, whereas our competitor’s default backup frequency is every four hours, configurable down to five minutes. The more frequent the backup cadence, the better the protection against data loss. Gartner’s “Use These Five Backup and Recovery Best Practices to Protect Against Ransomware” advises: “The primary goal is to leverage newer backup methodologies to achieve more frequent recovery points…The goal here is backing up more often.” This is not just a server and network-storage best practice; it’s an endpoint best practice too.
Gartner Step 5: Create reporting notifications for change volume anomalies
Step five centers on endpoint backup reporting capabilities, and here Code42 is resoundingly on point. The 5 series release of Code42 CrashPlan, delivered in the first half of 2016, includes a reporting web app that makes it easy to assess when users are not backing up frequently enough, putting your RPO in jeopardy. In addition, the ability to securely index and search user data archives helps security and IT teams find and identify malicious files through MD5 hash, keyword or metadata searches. Combine the indexing and search capabilities with the web reporting capabilities to identify anomalies at the individual, department or group level.
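Matching files against known-bad MD5 hashes, as described above, boils down to hashing file contents and checking membership in a blocklist. A minimal sketch, with a hypothetical blocklist entry standing in for a real threat-intelligence feed:

```python
import hashlib

def md5_of_file(path, chunk_size=65536):
    # Hash in chunks so large archives are not loaded fully into memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical blocklist; in practice this comes from a threat-intel feed.
KNOWN_BAD = {"44d88612fea8a8f36de82e1278abb02f"}

def is_known_malicious(path):
    return md5_of_file(path) in KNOWN_BAD
```

MD5 is fine for this lookup use (it is an identifier, not a security control), though real products typically index several hash types plus metadata.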
For our take on how to mitigate the risk and remediate quickly from ransomware attacks, check out our white paper “Reeling in Ransomware – Data Protection for You and Your Users.”
*Gartner Peer Insights reviews constitute the subjective opinions of individual end-users based on their own experiences, and do not represent the views of Gartner or its affiliates.
What You Need to Know: Navigating EU Data Protection Changes – EU-US Privacy Shield and EU General Data Protection Regulation
July 12, 2016
By Marshall England, Industry Marketing Director, Technology & Cloud, Coalfire
If you’re an organization with trans-Atlantic presence that transmits and stores European citizen data (e.g. employee payroll & HR data, client & prospect data) in the U.S. you will want to pay attention. What we will discuss was administered under the European Union’s Data Protection Directive and a previous EU-U.S. agreement called Safe Harbor. We will cover what happened, what’s next, new rules (and penalties) that are set to go into effect and our recommendations.
Safe Harbor, invalidated by a European Court of Justice (ECJ) ruling (PDF) in October 2015, allowed companies to transmit and store EU citizen data in the U.S. so long as the U.S. companies agreed to meet the requirements described in Decision 2000/520/EC, otherwise known as the ‘Safe Harbor Privacy Principles.’ The ECJ invalidated the Safe Harbor agreement after determining that U.S. companies could not actually meet those principles, because the principles conflicted with National Security Agency and other government agency requests for information and with other government data collection programs. When information about U.S. government surveillance programs was made public, it revealed that data on EU citizens had been collected; in other words, had U.S. companies been fully complying with the Safe Harbor Privacy Principles, that data would not have been exposed through those programs.
In early February 2016, the U.S. Department of Commerce and the European Commission announced a new framework called the Privacy Shield. Since then, a group known as the Article 29 Working Party, Europe’s data protection body, issued its own statement (PDF) about the Privacy Shield framework and expressed reservations regarding its adequacy. On July 8, 2016, the European Union Member States Representatives approved the final version of the Privacy Shield. The new framework allows for transatlantic data transmission and outlines obligations on companies handling the data, in addition to written assurances from the U.S. that, among other things, rule out indiscriminate mass surveillance of European citizens’ data.
Additionally, in early 2016 the European Union enacted a new data protection framework that had been in the works since 2012, known as the General Data Protection Regulation (GDPR). This new Regulation repeals and replaces the pre-existing EU Data Protection Directive. While not much has changed in the new Regulation, U.S. companies should update their policies and procedures for employee data transmission from the EU to the U.S., and should be aware of the new penalties. The rules (and penalties) of the Regulation “will become applicable two years thereafter.” So, in 2018, the rules and penalties of the General Data Protection Regulation go into effect.
New Rules that will go into effect (enforceable, starting in January 2018):
- Strong obligations on companies handling Europeans’ personal data and robust enforcement: U.S. companies wishing to import personal data from Europe will need to commit to robust obligations on how personal data is processed and individual rights are guaranteed. The Department of Commerce will monitor that companies publish their commitments, which makes them enforceable under U.S. law by the U.S. Federal Trade Commission. In addition, any company handling human resources data from Europe has to commit to comply with decisions by European DPAs.
- Clear safeguards and transparency obligations on U.S. government access: For the first time, the US has given the EU written assurances that the access of public authorities for law enforcement and national security will be subject to clear limitations, safeguards and oversight mechanisms. These exceptions must be used only to the extent necessary and proportionate. The U.S. has ruled out indiscriminate mass surveillance on the personal data transferred to the US under the new arrangement. To regularly monitor the functioning of the arrangement there will be an annual joint review, which will also include the issue of national security access. The European Commission and the U.S. Department of Commerce will conduct the review and invite national intelligence experts from the U.S. and European Data Protection Authorities to it.
- Effective protection of EU citizens’ rights with several redress possibilities: Any citizen who considers that their data has been misused under the new arrangement will have several redress possibilities. Companies have deadlines to reply to complaints. European DPAs can refer complaints to the Department of Commerce and the Federal Trade Commission. In addition, Alternative Dispute resolution will be free of charge. For complaints on possible access by national intelligence authorities, a new Ombudsperson will be created.
New Penalties that will go into effect (enforceable, starting in January 2018):
Under Article 79 of the Regulation, penalties and enforcement are described for organizations with fewer than 250 personnel and for enterprises. Violations of certain provisions by enterprise organizations (> 250 employees) will carry a penalty of “up to 2% of total worldwide annual [revenue] of the preceding financial year.” Violations of other provisions will carry a penalty of “up to 4% of total worldwide annual [revenue] of the preceding financial year.” The 4% penalty applies to “basic principles for processing, including conditions for consent,” as well as “data subjects’ rights” and “transfers of personal data to a recipient in a third country or an international organization.”
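The revenue-based caps translate into simple arithmetic. The sketch below applies the 2% and 4% rates quoted above to a hypothetical company; note that the Regulation also sets fixed-amount caps, which this ignores:

```python
def max_revenue_fine(worldwide_annual_revenue, severe=False):
    # 2% cap for most violations; 4% for the provisions listed above
    # (basic processing principles, data subjects' rights, third-country transfers).
    return worldwide_annual_revenue * (0.04 if severe else 0.02)

revenue = 500_000_000  # hypothetical $500M enterprise
print(max_revenue_fine(revenue))               # 10000000.0
print(max_revenue_fine(revenue, severe=True))  # 20000000.0
```

Because the cap scales with worldwide revenue rather than EU revenue, the exposure for large multinationals dwarfs typical pre-GDPR enforcement amounts.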
What should U.S. companies consider?
There are a few options we’ll highlight here: conducting privacy assessments with Privacy Shield and GDPR requirements in mind, pursuing ISO 27001/27018 certification, and developing a cyber risk program that includes vendor risk management, incident response planning and cyber risk assessments.
What to do – Privacy Shield
As it relates to the new EU-U.S. Privacy Shield, companies should review and be aware of the legal requirements outlined in the Privacy Shield (PDF). Certified Safe Harbor organizations should continue to abide by the elements of Safe Harbor, as they still have an obligation to protect EU data transfers, and should begin incorporating the Privacy Shield requirements, since they will have to obtain certification (in-house or third-party) to be listed on the Privacy Shield website maintained by the Department of Commerce.
New requirements for Privacy Shield participating companies as outlined on the Commerce.gov site include:
- Informing individuals about data processing
- Maintaining Data Integrity and purpose limitation
- Ensuring accountability for data transferred to third parties
- Cooperating with the Department of Commerce
- Transparency related to enforcement actions
- Ensuring commitments are kept as long as data is held
What to do – EU GDPR
Under the new EU General Data Protection Regulation (Chapter 4, Section 2), there is not only a requirement for an annual assessment; the Regulation also requires data breach notification, incident response planning, and security awareness training for staff involved in the data transmission process.
As it pertains to incident response planning and handling, the Regulation stipulates notification to a supervisory authority within the European Union within 72 hours and notification to data subjects without undue delay. Having an incident response plan in place will be critical to an organization’s ability to respond to a data compromise incident.
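An incident response plan should make that deadline concrete. Below is a minimal sketch of computing the notification window from a detection timestamp, based on the final GDPR text (Article 33), which gives controllers 72 hours to notify the supervisory authority; the timestamps themselves are invented for the example.

```python
from datetime import datetime, timedelta

# The 72-hour supervisory-authority notification window from the final
# GDPR text (Article 33). Detection times here are illustrative only.
NOTIFICATION_WINDOW = timedelta(hours=72)

def notification_deadline(detected_at: datetime) -> datetime:
    """Latest time to notify the supervisory authority after a breach
    is detected."""
    return detected_at + NOTIFICATION_WINDOW

# Example: a breach detected on the morning of July 1.
detected = datetime(2016, 7, 1, 9, 30)
print(notification_deadline(detected))  # 2016-07-04 09:30:00
```

Note that a Friday-morning detection puts the deadline on Monday morning, so the plan must account for weekends and on-call coverage.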
On vendor risk management, Article 26 stipulates that subcontractors cannot process or transmit data on behalf of the organization (i.e., the data controller) without its prior authorization. Since most organizations have programs for vendors to access systems or assist in data management, you’ll want to evaluate your vendors’ security and risk posture, since you could be affected by their negligence and drawn into one of those 2% or 4% of total worldwide revenue fine situations.
There are many other certifications and services that organizations should consider if they are not already in place, including ISO 27001/27018 certification and attestation, privacy assessments, and vendor risk management services to ensure data processors comply with Privacy Shield requirements and GDPR regulations.
ISO 27001 and ISO 27018 are international security standards for securing information systems. ISO 27001 establishes an Information Security Management System (ISMS), and certification provides independent verification that your organization meets the ISO 27001 security standard.
ISO 27018 is a complement to ISO 27001 and specifically focuses on protecting the transmission and storage of Personally Identifiable Information (PII) in the cloud. For data controllers and data processors, meeting ISO 27018 provides your organization with a method to establish control objectives, controls, and guidelines for implementing measures to protect PII in the cloud in accordance with the privacy principles in ISO/IEC 29100.
The finalized Privacy Shield and the updated EU General Data Protection Regulation will require U.S. companies to make EU citizen privacy a paramount priority to avoid ramifications from EU regulators. Contact Coalfire to discuss any of the above information. Where needed, we can also bring in our partner law firm to further educate and provide guidance on the updated EU privacy and data protection changes.
July 11, 2016 | Leave a Comment
By Jane Melia, VP/Strategic Business Development, QuintessenceLabs
“If your security sucks now, you’ll be pleasantly surprised by the lack of change when you move to cloud.” — Chris Hoff, Former CTO of Security, Juniper Networks
The chances are, almost everyone in your organization loves the convenience of the cloud for data storage and for collaborative workflow needs. And why wouldn’t they when documents and files are now easily accessible to all team members, whether down the hall, in another state or even on another continent? From a cost and operations perspective, cloud storage is certainly pretty compelling. However “almost everyone” might not include CIOs, CISOs and their teams, who often harbor concerns about the security of data in the cloud, and particularly where sensitive data is involved. I have similar misgivings. I’m not saying that we should not use the cloud, but I do believe that we can improve how we secure sensitive data stored on it.
Blue Skies or Dark Clouds Ahead?
In a recent report titled “Blue Skies Ahead? The State of Cloud Adoption,” Intel Security said that IT decision makers are warming to the cloud along with the rest of us, with 77 percent saying they trust the cloud more than they did a year ago. This headline number hides a darker reality: only 13 percent of respondents actually voiced full trust in the public cloud, and just 37 percent fully trust their private cloud. Surprisingly, a full 40 percent of respondents claim to process sensitive data in the cloud, indicating that there is both room and a real need for cloud security improvement.
Adding Peace of Mind to Cloud Storage
When I hand over data to a third party, I want to be sure that they are not only contractually obliged to look after it properly but are actually equipped to do it. This means protecting it from accidental loss, malicious attacks and from silent subpoenas, among other threats. Logging and multi-factor authentication are part of the tool kit that can be implemented, as is encryption. There is an existing (and growing) awareness of the importance of encryption which is why most cloud service providers offer encryption options of one kind or another. But too frequently the third-party vendor is doing the encrypting, and holding the keys, which isn’t very reassuring to say the least.
Fundamentally, the best way to ensure data is safe and well managed is to pre-encrypt it before it’s sent to the cloud. Coupled with a policy of keeping key management in house, these precautions should allow for several hours of blissful sleep each night for members of the IT security team, whether the cloud is public, private, or a hybrid of the two! Other approaches include using two or more different vendors to handle different parts of the storage solution: one vendor can manage the keys while the other manages the storage itself. Key wrapping is another way to reduce risk: the end customer manages master keys that in turn wrap the document keys, giving you some assurance of isolation between your data and that of other customers stored on the same cloud, as well as control over document access. Through these approaches, you can provide a significantly higher level of protection for data stored in the cloud.
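The key-wrapping flow above can be sketched in a few lines. This is a toy illustration only: a repeating-key XOR stands in for a real cipher such as AES Key Wrap or an authenticated mode, and every name here is hypothetical rather than a real cloud API. The point is the key hierarchy, not the cryptography.

```python
import secrets

# Toy sketch of the key-wrapping pattern: a per-document data key
# encrypts the document, a customer-held master key wraps the data key,
# and only wrapped material ever leaves the premises. XOR is a
# placeholder for a real cipher (e.g., AES-KW); do not use it as-is.

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data with a repeating key (placeholder for real encryption).
    Applying it twice with the same key recovers the original bytes."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# 1. The master key stays on-premises, under the customer's control.
master_key = secrets.token_bytes(32)

# 2. A fresh per-document data key encrypts the document itself.
data_key = secrets.token_bytes(32)
document = b"quarterly sales figures"
ciphertext = xor_bytes(document, data_key)

# 3. The data key is "wrapped" (encrypted) with the master key; only the
#    wrapped key and the ciphertext are handed to the cloud provider.
wrapped_key = xor_bytes(data_key, master_key)
cloud_blob = (wrapped_key, ciphertext)

# 4. To read the document back, unwrap the key locally, then decrypt.
unwrapped = xor_bytes(cloud_blob[0], master_key)
print(xor_bytes(cloud_blob[1], unwrapped))  # b'quarterly sales figures'
```

Because the provider never sees the master key, a compromise of its storage exposes only wrapped keys and ciphertext, which is the isolation property described above.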
Encryption is the best tool we have for protecting sensitive information so we need to use it to support and enable our expansion to the cloud. As seen above, the devil is in the details of how we do it, but keeping control of keys is fundamental. Of course, there is also the issue of how strong the keys are that you are using, but that is a topic for another day….
July 7, 2016 | Leave a Comment
By Rolf Haas, Enterprise Technology Specialist/Network Security and Content Division, Intel Security
Cloud use continues to grow rapidly in the enterprise and has unquestionably become a part of mainstream IT – so much so that many organizations now claim to have a “cloud-first” strategy.
That’s backed up by a survey* we commissioned here at Intel Security that questioned 1,200 cloud security decision makers across eight countries. One of the most startling findings: that 80% of respondents’ IT spend will go to cloud services within just 16 months.
Even if that outlook overestimates cloud spend, it still shows a dramatic shift in mindset, and it’s often the business, rather than the IT department, that is driving that shift. In today’s digital world the pull of the cloud and its benefits of flexibility, speed, innovation, cost, and scalability are now too great to be dismissed by the usual fears. To compete today, businesses need to rapidly adopt and deploy new services, to scale both up and down in response to demand, and to meet the ever-evolving needs and expectations of employees and customers.
This new-found optimism for the cloud inevitably means more critical and sensitive data is put into cloud services. And that means security is going to become a massive issue.
If we look at our survey results the picture isn’t great when it comes to how well organizations are ensuring cloud security today. Some 40% are failing to protect files located on SaaS with encryption or data loss prevention tools, 43% do not use encryption or anti-malware in their private cloud servers, and 38% use IaaS without encryption or anti-malware.
Many organizations have already been at the sharp end of cloud security incidents. Nearly a quarter of respondents (23%) report cloud provider data losses or breaches, and one in five reports unauthorized access to their organization’s data or services in the cloud. The reality check here is that the most commonly cited cloud security incidents were actually around migrating services or data, high costs, and lack of visibility into the provider’s operations.
Trust is growing in cloud providers and services, but 72% of decision makers in our survey point to cloud compliance as their greatest concern. That’s not surprising given the current lack of visibility around cloud usage and where cloud data is being stored.
The wider trend to move away from the traditional PC-centric environment to unmanaged mobile devices is another factor here. Take a common example: an employee wants to copy data to their smartphone from a CRM tool via the Salesforce app. The problem is they have the credentials to go to that cloud service and access that data, but in this case are using an untrusted and unmanaged device. Now multiply that situation across all an organization’s cloud services and user devices.
There is clearly a need for better cloud-control tools across the stack. Large organizations may have hundreds or even thousands of cloud services being used by employees – some of which they probably don’t even know about. It is impossible to implement separate controls and policies for each of them.
To securely reap the benefits of cloud while meeting compliance and governance requirements, enterprises will need to take advantage of technologies and tools such as two-factor authentication, data leakage prevention, and encryption, on top of their cloud services and applications.
Increasingly, organizations are also investing in security-as-a-service (SECaaS) and other tools that can help orchestrate security across multiple providers and environments. These help tackle the visibility issue and ensure compliance needs are met. That’s why I believe we are starting to see the rise of so-called “broker” security services. These cloud access security brokers (CASBs) will enable consolidated enterprise security policy enforcement between the cloud service user and the cloud service provider. That’s backed up by Gartner, which has picked out CASBs as a high-growth spot in the security market. Gartner predicts by 2020, 85% of large enterprises will use a CASB for their cloud services, up from fewer than 5% today.
This will all be driven by the rapid growth in enterprise cloud adoption and the need for a new model of security that enables the centralized control or orchestration of the myriad cloud services and apps employees use across the enterprise. Cloud security is now a critical element of any business, and it needs to be taken seriously from the boardroom right down to the end users.
*Blue Skies Ahead? The State of Cloud Adoption
The survey of 1,200 IT decision makers with responsibility for cloud security in their organizations was conducted by Vanson Bourne in June 2015. Respondents were drawn from Australia, Brazil, Canada, France, Germany, Spain, the UK, and the US across a range of organizations, from those with 251 to 500 employees to those with more than 5,000 employees.