Agentless Mobile Security: No More Tradeoffs

By Kevin Lee, Systems QA Engineer, Bitglass

Have you ever seen a “Pick two out of three” diagram? They present three desirable qualities and force you to sacrifice the one you value least. The tradeoffs between convenience, privacy, and security serve as a perfect example of a “Pick two” situation for many mobile security solutions. 

Industries have seen massive growth in the number of personal devices that touch sensitive information, resulting in a need to secure data as it is accessed by these endpoints. Various solutions have been adopted by many companies, but all tend to fall into the classic “Pick two” scenario. When evaluating these inadequate solutions, companies normally select security as one of their two priorities, leaving them to choose from only the two scenarios below.

Security and Convenience

Mobile device management (MDM) is a fairly popular solution for securing data on personal mobile devices. Using MDM is often seen as a good strategy because, in theory, it permits employees to use their personal devices and allows employers to monitor and control data as they see fit. However, the major downside to MDM is the need for agents to be installed on personal devices. These agents give employers visibility into employees’ personal traffic. Obviously, this raises questions about employee privacy. 

Security and Personal Privacy
For individuals who wish to keep their personal information private, using one or more work-only devices is an option. Whether these devices are mobile phones with MDM or managed computers on-premises, the strategy allows employers to monitor corporate data without touching employees’ personal data. The major disadvantage of this approach is the lack of convenience for employees. They are required either to carry multiple devices at all times or to access work-related information from a few select locations.  

The Solution
As seen above, there always seems to be a tradeoff when choosing a mobile security strategy. However, does it have to be that way? What if there were a security tool that could ensure data security, provide convenience for employees, and respect the right to privacy all at the same time? It only seems far-fetched when one assumes that agents are necessary to secure data.

To learn about cloud access security brokers and agentless mobile security, download the solution brief.

Saturday Security Spotlight: Military, Apps, and Threats

By Jacob Serpa, Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

—Fitness app exposes military bases
—Soldiers’ names revealed by app
—Google Play filled with fake apps
—Medical devices easily hacked
—The internet of things creates risk for the enterprise

Fitness app exposes military bases
Strava, the creator of a fitness tracking app, released heatmaps of its users’ movements. Unfortunately, this revealed the inner workings of military bases abroad by highlighting the movements of soldiers who use the app within their bases. Naturally, making this information publicly available raises questions of privacy and national security.

Soldiers’ names revealed by app
After learning of the above heatmaps and how they expose military bases and personnel, a Norwegian researcher decided to test other aspects of Strava’s security. In so doing, he succeeded in tricking the app into revealing the names and identities of military personnel who use Strava.

Google Play filled with fake apps
Despite efforts to clean up Google Play, Google’s app marketplace still contains many fake applications. While some are fairly innocuous, others can spread malware or steal information from users’ mobile devices. In light of BYOD (bring your own device), this should be a concern for the enterprise.

Medical devices easily hacked
Researchers in cybersecurity have determined that medical devices like MRI machines face a high risk of cyberattack. As healthcare technology evolves and connects to the internet more and more, the risk will only increase. Researchers warn that these devices must be designed in ways that ensure more security.

The internet of things creates risk for the enterprise
As enterprises adopt IoT devices for the efficiency that they provide, they are also increasing the number of attack surfaces that can be exploited by malicious parties. These devices serve as entry points for malware and can enable access to corporate networks.

The cybersecurity landscape is constantly shifting. Organizations must stay ahead of threats with advanced security solutions. To learn about cloud access security brokers, download the Definitive Guide to CASBs.

 

Why Next-Gen Firewalls Can’t Replace CASBs

By Joe Green, Vice President, WW Solutions Engineering, Bitglass

A security solution is only as good as the data it protects. Some solutions focus on data protection on the corporate network, others focus entirely on cloud data, and a select few enable security at access from any network.

Next-gen firewalls (NGFWs) are the traditional solution for many organizations looking to secure their corporate networks. They are effective at what they do, securing corporate network traffic by routing everything through on-premises appliances. As corporate data begins moving outside the corporate network, as it does with cloud and mobile, the NGFW can no longer provide protection. Major gaps include access from managed devices that don’t use VPN while outside the corporate network, access from unmanaged devices like employees’ personal mobile devices, and cloud data-at-rest.

Why are cloud and mobile such a big gap? With the flexibility and mobility provided by cloud apps, employees often work outside premises-based security infrastructure. Additionally, unmanaged devices with unmitigated access to corporate apps (whether in the cloud or on premises), can be lost, stolen, or abused by malicious insiders. IT needs to secure data in these situations, yet a perimeter-focused security tool like an NGFW has no way to secure this traffic.

Providing security beyond the firewall typically requires a data-centric approach rather than a control-oriented approach. After all, with cloud and BYOD, the organization neither controls the applications nor the underlying infrastructure on which those applications reside. As a result, organizations must move from network- and application-based allow/block controls to robust, data-centric tools like data loss prevention (DLP) and encryption. Other key requirements of a data-centric approach are remediation (such as DRM, redaction, and more), identity integration and strong authentication, and data-at-rest scanning. All of these capabilities must be delivered via an architecture that can intermediate users’ connections to an app, like Office 365, even when they use a personal device or public network – no small task, and definitely not one an NGFW can handle!
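To make the data-centric idea concrete, here is a minimal sketch of the kind of pattern-plus-checksum scan a DLP policy might run to flag credit card numbers in outbound content. This is a toy illustration, not any vendor's actual engine; real DLP adds many more detectors, proximity rules, and file-type handling.

```python
import re

def luhn_valid(number: str) -> bool:
    """Luhn checksum, used to weed out random digit strings."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def find_card_numbers(text: str) -> list[str]:
    """Return 13-16 digit candidates (spaces/dashes allowed) that pass Luhn."""
    candidates = re.findall(r"\b(?:\d[ -]?){13,16}\b", text)
    hits = []
    for c in candidates:
        digits = re.sub(r"[ -]", "", c)
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits

doc = "Invoice: card 4111 1111 1111 1111, ref 1234 5678 9012 3456."
print(find_card_numbers(doc))   # only the Luhn-valid number is flagged
```

The Luhn check is the reason a DLP rule can flag "4111 1111 1111 1111" while ignoring an order number of the same length.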

Recognizing these gaps, and the future impact on the firewall market, some NGFW vendors have acquired or built basic API-based cloud access security broker (CASB) offerings. Unfortunately, these offerings don’t provide real-time data & threat protection, and have proven unable to keep up with the rapidly evolving CASB use cases in the enterprise. As a result, the last couple of years have seen CASBs rise from an unknown acronym to the de facto standard for data & threat protection in the cloud and mobile enterprise, complete with their own Magic Quadrant from Gartner.

Apps have evolved and moved to the cloud – shouldn’t you?

Only a CASB built from the ground up to protect data in a cloud- and mobile-first environment can secure cloud apps and BYOD. Instead of opting for a tool that simply augments existing firewall capabilities, adopt a solution that provides visibility and control over all corporate data wherever it goes.

Download the Top CASB Use Cases.

EMV Chip Cards Are Working – That’s Good and Bad

By Rich Campagna, CEO, Bitglass

For many years, credit card companies and retailers ruled the news headlines as victims of breaches. Why? Hackers’ profit motives lead them to credit card numbers as the quickest path to monetization. With the appropriate data in hand, a working counterfeit card could be cranked out in seconds and used to purchase a laptop or TV at the local Walmart, then easily fenced on the black market.

Sick of being the target, the payment card industry got smart about fraud detection, created a set of regulatory compliance requirements (PCI-DSS) and perhaps even more importantly, rolled out EMV “chip-and-pin” technologies, which are meant to reduce counterfeit card fraud by presenting a unique cryptographic code for each transaction — much more difficult to duplicate than the static information embedded in the magnetic stripe of older cards. The results have been astounding — according to Visa, “for merchants who have completed the chip upgrade, counterfeit fraud dollars have dropped 66%!” That’s great news, but bad news at the same time.
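The idea behind those per-transaction codes can be sketched in a few lines. The real EMV application cryptogram (ARQC) is computed with issuer-derived session keys per the EMV specifications; the simplified HMAC version below only illustrates why a dynamic code defeats the replay attacks that worked against static magnetic-stripe data.

```python
import hmac, hashlib

def transaction_cryptogram(card_key: bytes, atc: int,
                           amount_cents: int, nonce: bytes) -> str:
    """Toy per-transaction code: a MAC over the application transaction
    counter (ATC), the amount, and a terminal nonce. Each transaction
    yields a different value, so a captured code cannot be replayed.
    Not the real EMV algorithm."""
    msg = atc.to_bytes(2, "big") + amount_cents.to_bytes(6, "big") + nonce
    return hmac.new(card_key, msg, hashlib.sha256).hexdigest()[:16]

key = b"card-unique-secret"   # provisioned into the chip at issuance
c1 = transaction_cryptogram(key, atc=1, amount_cents=4999, nonce=b"\x01\x02")
c2 = transaction_cryptogram(key, atc=2, amount_cents=4999, nonce=b"\x01\x02")
print(c1 != c2)   # True: same purchase, different cryptogram
```

A skimmer that records `c1` has nothing of value: the counter has already moved on, so the issuer will reject a replay, whereas a copied magnetic stripe works every time.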

The bad news is that hackers, still seeking profit, will continue to seek out the fastest and most lucrative path to monetization. Since credit card information has essentially become valueless, data that can be used to apply for new cards (or other monetary instruments or services) is now the target. This is why we saw a massive increase in healthcare-related breaches over the past few years. As healthcare gets its act together, hackers will move on to the next most viable target, whatever industry that may be.

Not only does this impact information security professionals in enterprises, but it also impacts consumers in a big way. For consumers, credit cards have always had limited liability, meaning that outside of a few calls to the credit card company, fraudulent card use didn’t have much impact. Unfortunately, you can’t “cancel” your Social Security number, date of birth, or mother’s maiden name; those are permanent. And once someone gets their hands on that data, they own it permanently as well.

So, kudos to credit card issuers and retailers for making tremendous progress. Hopefully their peers in other industries will continue to follow suit.

BTW, it’s entirely likely that your organization’s shift to cloud and mobile includes some of the aforementioned data to be protected. Might be time to check out a cloud access security broker (CASB).

Saturday Security Spotlight: Cyberwarfare and Cryptocurrency

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

—Cyberattacks deemed a top threat to society
—Hackers target data around the world
—Poor app designs threaten countries’ infrastructure
—Olympic Committee emails leaked by hackers
—Half of UK firms fail to secure cloud
—Public WiFi can be hacked to mine cryptocurrency

Cyberattacks deemed a top threat to society
The World Economic Forum recently released a report detailing the top threats to society. Cybersecurity concerns like cyberwarfare landed within the top three. The fact that these threats were paired with the likes of natural disasters highlights the growing dangers of our cloud-first society and serves as a reminder that organizations everywhere should adopt next-gen security solutions.

Hackers target data around the world
Dark Caracal, a cyberespionage group, has recently been linked to an extensive list of cybercrimes. In at least twenty-one countries, the group used Pallas, its custom mobile spyware, to steal data from the mobile devices of healthcare workers, lawyers, reporters, members of the armed forces, and more.

Poor app designs threaten countries’ infrastructure
Mobile applications used for critical infrastructure (water, electricity, etc.) are reported to contain numerous vulnerabilities that can be exploited by malicious parties. These SCADA (supervisory control and data acquisition) applications are often designed without adequate consideration for security, leaving nations vulnerable to attack.

Olympic Committee emails leaked by hackers
The self-proclaimed hacktivist group Fancy Bears has leaked email correspondence from within the International Olympic Committee (IOC). While the group claims to have honorable intentions, its leaking of athletes’ medical records is believed to be a response to Russia’s ban from the 2018 Winter Games.

Half of UK firms fail to secure cloud
A recent research report uncovered that only half of UK companies have security policies around data in the cloud. These statistics are particularly worrying in light of the approaching General Data Protection Regulation (GDPR).

Public WiFi can be hacked to mine cryptocurrency
A new study, CoffeeMiner, details how public WiFi networks can be used to mine cryptocurrency through connected devices. The research demonstrates the dangers of public WiFi for both individuals and their employers.

Cybersecurity threats are constantly spreading and evolving. To learn about cloud access security brokers, solutions that protect data in the cloud, download the Definitive Guide to CASBs.


Nine Myths of Account Takeover

By Dylan Press, Director of Marketing, Avanan

Account takeover attacks are a nearly invisible tactic for conducting cyber espionage. Because these breaches can take months or years to detect, we are slowly discovering that this attack vector is much more common than we thought. The more we learn about new methodologies, the more we realize just how misunderstood account takeover attacks can be. Many of the common myths about account takeover attacks are making it easier for the attackers to continue undetected, which is why we feel obligated to debunk them.

What Is an Account Takeover Attack?

Account takeover is a strategy used by attackers to silently embed themselves within an organization to slowly gain additional access or infiltrate new organizations. While ransomware and other destructive attacks immediately make the headlines, a compromised account may remain undiscovered for months or years, or may never be discovered at all. (See the Verizon 2017 Data Breach Report graph.)

On average we find at least one compromised account in half of our new installs, oftentimes finding that they have been there for months. We hope this blog can provide a better understanding of how they work and how to defend against them.

Scan your own account for an historical breach.

Myth 1: I’ve installed the latest antivirus software. I’m safe.

Reality: Account takeover attacks seldom use malware or malicious links.

You may have the latest patches. You might have the latest URL filters. You might have installed an MTA mail gateway to scan every message. None of these, however, would have detected the most common attacks of 2017. Few, if any, used an attachment or malicious link. Instead they relied upon convincing a user to authorize an app or share credentials via an otherwise legitimate site. Account takeover attacks do not want to infect a desktop or steal a bank account’s routing number. They seek only to gain access to a legitimate user’s account for as long as possible. Step one in their methodology is to avoid detection by the most common tools.

Myth 2: We’ve all had security training. Attacks are obvious.

Reality: User training is not enough to defend against targeted attacks.

Everyone would like to believe that they are smart enough to notice an attack before they are compromised, but even the most vigilant user would miss the more recent strategies. A CISO once called user training an “attack signature that gets updated once a year.” While you may be able to identify the traits of an older method, new, more sophisticated techniques are developed every day. It is no longer enough to look for misspelled words or bad grammar. Attacks are now highly personalized, well timed, and sent in moderation. It is easy to forget that attackers read the same best practice documents you read, and use them as their checklist of things to evade.

Myth 3: An account takeover always starts with an email.

Reality: Attackers are starting to use other collaboration tools.

As organizations are moving away from email to Slack, Teams, and Chatter for internal collaboration, so are the attackers. Your employees are naturally wary of messages that come by email, but they seldom transfer that suspicion to internal messaging tools. While only 12 percent of employees might be likely to click on a malicious email, more than half would click on the same message when it arrives via internal Slack chat from a ‘trusted’ user. While there are dozens of tools to monitor and protect user email, these internal tools typically have no phishing or malware protection at all.

Scan your own account for an historical phishing attack.

Myth 4: Account takeover always starts with a phishing message.

Reality: Hackers can get your credentials without a phishing attack.

Although phishing messages are the most common way for hackers to gain access to an account, they are far from the only method. Large, third-party data leaks like Yahoo and LinkedIn have created a market for hackers to exchange stolen passwords. Even Post-It Notes are not safe from online distribution. A breach might include passwords for one service that employees have re-used on corporate accounts. Even a breach that doesn’t include raw credentials might include the personal information (street address, high school, mother’s maiden name) that makes it possible for attackers to gain temporary access by requesting a password change. The Equifax breach probably contains more personal information than the average person even knows about themselves. Although anti-phishing security is important, it is only one part of the equation when it comes to defending against account takeover.

Myth 5: I would notice right away if my account was compromised.

Reality: Account takeovers are specifically designed to evade detection.

Although it may seem like you would have to be blind to not notice a second user in your email inbox, hackers have become incredibly adept at navigating and using compromised accounts without detection. Tactics like the alternate inbox method, in which the attacker uses hidden and unchecked trash folders as their inbox, can make even the most active attacker invisible to the account’s rightful owner. When your account is compromised, you will likely never notice anything out of the ordinary.

Myth 6: The hacker will log in from a suspicious location.

Reality: Hackers can appear to log in from anywhere.

If a hacker is regularly logging into your account, wouldn’t their location raise a flag? It is reasonable to assume that to detect a compromised account, you just need to keep an eye out for suspicious locations in your account history. Unfortunately, publicly available VPNs are an easy way to avoid this obvious giveaway. A competent hacker based in North Korea can appear to be logging in from an IP address in your own town, looking as benign as a login from your local coffee shop. If they’ve already compromised another victim, they could even stage their attack from a partner’s network.

Myth 7: Changing my password will get rid of them.

Reality: Hackers can continue to access your account without a password.

Many cyber-security best-practices guides will advise you to change your password if your account is compromised. The first step in most attacks, however, includes creating a secondary back door so they can avoid using the primary login. For example, they may install malicious cloud applications that provide full rights to the account. These API-based connections use their own, permanent tokens that must be individually revoked and often never get logged. Or they may create rules to forward and redirect messages through the account without the need to log in again. Even if you change your password or turn on multi-factor authentication within seconds of a breach, they may no longer have need of your password.
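The mechanics of that persistence can be modeled in a few lines. The sketch below is a conceptual illustration with hypothetical names, not any provider's actual token API: an OAuth-style app grant is an independent credential, so it survives a password change and must be revoked on its own.

```python
# Toy model of why a password change alone doesn't evict an attacker:
# app tokens bypass the password entirely and must be revoked one by one.

class Account:
    def __init__(self, password: str):
        self.password = password
        self.app_tokens = set()        # long-lived API grants

    def grant_app_token(self, token: str) -> None:
        self.app_tokens.add(token)

    def login(self, password: str) -> bool:
        return password == self.password

    def api_access(self, token: str) -> bool:
        return token in self.app_tokens   # password is never checked here

    def change_password(self, new_password: str) -> None:
        self.password = new_password      # note: tokens survive

    def revoke_token(self, token: str) -> None:
        self.app_tokens.discard(token)

acct = Account("hunter2")
acct.grant_app_token("attacker-oauth-grant")    # planted during the breach
acct.change_password("correct-horse-battery")
print(acct.api_access("attacker-oauth-grant"))  # True: back door still open
acct.revoke_token("attacker-oauth-grant")
print(acct.api_access("attacker-oauth-grant"))  # False: only now evicted
```

This is why incident response checklists include auditing authorized apps, forwarding rules, and API tokens, not just resetting the password.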

Scan your own account for an historical breach.

Myth 8: I’m not “important” enough to be valuable to an attacker.

Reality: Every employee’s account is useful to a hacker.

It can be comforting to think that cyber security is only a concern for executives or employees with high levels of access to sensitive company data. Typically, however, the initial account takeover breach is imprecise and opportunistic. The initial goal of the hacker is to simply get access to any internal account. Once they have access, they take advantage of internal trust relationships to move from employee to employee until they find the sensitive data they need. A user doesn’t need to be high up or have a high level of access to serve as a hub for a hacker’s operations. In fact, lower-level employees are often under less scrutiny and can serve as a better vessel from which to operate undetected.

Myth 9: Our company is not worth targeting.

Reality: Your company can be used to attack your customers and partners.

If your company has customers, their employees will likely trust yours. If your company has providers, it could serve as the attacker’s way in. Although the hacks of major financial institutions and Fortune 500 companies make the headlines, hundreds of small ‘invisible’ companies in niche industries are attacked every day. Because smaller companies typically do not have the security staff of the larger firms, they can be an easy path into a much more lucrative target.

Cloud App Encryption and CASB

By Kyle Watson, Partner/Information Security, Cedrus Digital

Many organizations are implementing Cloud Access Security Broker (CASB) technology to protect critical corporate data stored within cloud apps. Amongst many other preventative and detective controls, a key feature of CASBs is the ability to encrypt data stored within cloud apps. At the highest level, the concept is quite simple: data flowing out of the organization is encrypted as it is stored in the cloud. However, in practice there are nuances in the configuration options that may affect how you implement encryption in the cloud. This article outlines important architectural decisions to be made prior to the implementation of encryption solutions through CASB.


Gateway Delivered, Bring Your Own Key (BYOK), or Vendor Encryption

There are three general approaches to cloud-based encryption.

Gateway delivered encryption – In this model, the CASB may integrate with your organization’s existing key management solution through the Key Management Interoperability Protocol (KMIP) or provide a cloud-based key management solution. In either case, the keys used to encrypt your data never leave your control.

  • Data is encrypted before it leaves your environment and is stored at the vendor
  • You control the keys
  • The vendor retains no capability to access your data

BYOK encryption – In this model, the keys are generated and managed by your organization, and then are supplied to the vendor. BYOK allows you to manage the lifecycle of the keys, which are then shared with the vendor. This includes revoking and rotating keys. The keys are then provided to and utilized by the vendor to decrypt requested data for use by authorized users. CASB can be involved as a broker of the keys to simplify, centralize, and streamline the process of key management by allowing you to perform this administration directly in the CASB User Interface (UI). This also may be done using KMIP with your existing key management solution. Alternatively, without a CASB you may still enjoy the benefits of encryption with your own keys, but administration would be manual on an app-by-app basis.

  • Data is encrypted at the vendor
  • You can control the keys
  • The vendor retains the capability to access your data
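The BYOK lifecycle described above can be sketched as follows. This is an illustrative toy only: real BYOK uses a KMS or HSM and real ciphers, and the class and method names here are hypothetical. The point it shows is that the vendor can decrypt only while the customer-supplied key remains shared, and revoking the key cuts off the vendor's access to plaintext.

```python
import secrets

class CustomerKMS:
    """Customer-side key generation (stands in for a real KMS/HSM)."""
    def __init__(self):
        self.keys = {}
    def generate(self, key_id: str) -> bytes:
        self.keys[key_id] = secrets.token_bytes(32)
        return self.keys[key_id]

class Vendor:
    """Cloud app holding customer-supplied keys and encrypted records."""
    def __init__(self):
        self.shared_keys = {}
        self.store = {}
    def receive_key(self, key_id: str, key: bytes) -> None:
        self.shared_keys[key_id] = key
    def revoke_key(self, key_id: str) -> None:
        self.shared_keys.pop(key_id, None)
    def save(self, name: str, plaintext: bytes, key_id: str) -> None:
        k = self.shared_keys[key_id]
        ct = bytes(b ^ k[i % 32] for i, b in enumerate(plaintext))  # toy cipher
        self.store[name] = (key_id, ct)
    def read(self, name: str) -> bytes:
        key_id, ct = self.store[name]
        if key_id not in self.shared_keys:
            raise PermissionError("key revoked; vendor cannot decrypt")
        k = self.shared_keys[key_id]
        return bytes(b ^ k[i % 32] for i, b in enumerate(ct))

kms, vendor = CustomerKMS(), Vendor()
vendor.receive_key("k1", kms.generate("k1"))
vendor.save("doc", b"quarterly numbers", "k1")
print(vendor.read("doc"))   # vendor can serve plaintext while key is shared
vendor.revoke_key("k1")     # customer pulls the key
# vendor.read("doc") now raises PermissionError
```

A CASB brokering the keys centralizes exactly these `receive_key`/`revoke_key` steps across many apps instead of managing them app by app.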

Vendor provided encryption – In this model, the vendor provides keys and key management. The administration may be provided through user interfaces provided by the vendor. The CASB is not involved.

  • Data is encrypted at the vendor
  • The vendor controls the keys
  • The vendor retains the capability to access your data


Important Considerations

There is not a “best” way to manage encryption for cloud apps. Making the best decisions for your company begins with understanding your motivation. Is your primary concern compliance, mitigating the risk of vendor compromise, protecting data from being disclosed in blind subpoenas, or all three?

  • Compliance – Encryption for compliance can be met easily by any of the three approaches, and is simplest with vendor provided encryption.
  • Mitigating risk of vendor compromise – Using encryption to mitigate the risk of vendor compromise implies the need to manage your own key, since your data will not be accessible without the key. Gateway delivered encryption is the approach that provides the highest level of risk mitigation against vendor compromise, as your keys never leave your environment. Cyberattackers stealing your data will not be able to decrypt it without obtaining your key or breaking your encryption. Risk may also be mitigated through BYOK, but agreements must be secured from the vendor to communicate breaches in a timely fashion. Then you must take appropriate revocation actions in your key management process.
  • Protecting data from being disclosed in subpoenas / blind subpoenas – Using encryption to protect data from being disclosed in subpoenas also implies the need to manage your own key. Gateway delivered encryption is the approach that can provide the highest level of risk mitigation from blind subpoena through a completely technical means, as third parties retrieving your data will not be able to decrypt it without your key. Risk may also be mitigated through BYOK, but agreements must be secured from the vendor to communicate third-party requests for your data in a timely fashion. Then you must take appropriate revocation actions in your key management process.


Unstructured and Structured Data

To further explain these approaches we must break out two very different types of data prevalent in the cloud: Unstructured and structured data. Unstructured data refers to data generated and stored as unique files and is typically served through end user apps, for example, Microsoft Word documents. Structured data refers to data that conforms to a data model and is typically served through relational databases and User Interfaces (UI), for example, Salesforce UI.


Structured Data

  • Gateway delivered encryption – Since the CASB sits between your end user and the application, structured data can represent a challenge to usability. From a usability perspective, whenever the application vendor changes field structures, the encryption must be addressed in order to maintain usability. From a security perspective, the app must decrypt and reveal some information in order to allow search, sort, and type-ahead fields to work properly in a cloud app UI. This requires “Format Preserving”, “Order Preserving”, or “Order Revealing” encryption, which can lower the overall strength of the protection. A growing body of research is challenging these methods and exposing weaknesses that may lead to compromise. For example, if typing “JO” in a field reveals all of the persons with names beginning with “JO”, that data has to be decrypted, at least in part, to support the UI.
  • BYOK encryption – Since you supply the keys to the vendor, encryption/decryption occurs within the vendor application architecture. This reduces the risk of usability problems when using encryption, because the decryption happens under vendor control. From a security perspective, BYOK does not suffer from the same risk of compromise in “reveal”, as exists in gateway delivered encryption.
  • Vendor provided encryption – Since the vendor owns the keys, encryption/decryption occurs within the vendor application architecture. This reduces the risk of usability problems when using encryption, because the decryption happens under vendor control. From a security perspective, vendor provided encryption does not suffer from the same risk of compromise in “reveal”, as exists in gateway delivered encryption.
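The type-ahead leakage mentioned above can be demonstrated with a toy deterministic prefix index, which is roughly what is needed to keep search-as-you-type working over gateway-encrypted fields. This is an illustrative sketch, not any CASB's actual scheme: because equal prefixes always produce equal tokens, anyone who can observe the index can match records by prefix without holding the key.

```python
import hashlib

def prefix_tokens(value: str, salt: bytes = b"app-wide") -> list[str]:
    """Deterministic token per prefix; the same salt is reused for every
    record so that equal prefixes collide (that is what makes search work,
    and also what leaks)."""
    value = value.upper()
    return [hashlib.sha256(salt + value[:i].encode()).hexdigest()[:8]
            for i in range(1, len(value) + 1)]

# Server-side index built from "encrypted" fields.
index = {name: prefix_tokens(name) for name in ["John", "Joan", "Mary"]}

# An observer with no key can still answer the query "JO...":
query = prefix_tokens("Jo")[-1]
matches = [name for name, toks in index.items() if query in toks]
print(matches)   # ['John', 'Joan'] -- prefix equality is fully revealed
```

This is the core of the research critique: the properties that preserve UI functionality (equality, order, format) are exactly the properties an attacker can exploit.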


Unstructured Data

  • Gateway delivered encryption – Risk of usability problems is low on unstructured data in cloud storage. However, an important consideration is key rotation. Data encrypted under one set of keys can only be opened with those keys. Keys may need to remain available in archive, for reads, even if they have been retired.
  • BYOK encryption – Since the keys are supplied to the vendor, encryption/decryption occurs within the vendor application architecture as does key rotation and management.
  • Vendor provided encryption – Since the vendor owns the keys, encryption/decryption occurs within the vendor application architecture. This reduces the risk of usability problems when using encryption, because the decryption happens under vendor control. Key management processes will be dependent upon the vendor.
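The key-rotation concern for unstructured data can be sketched with a toy key-versioning scheme. This is illustrative only (a stand-in XOR "cipher", hypothetical names): each ciphertext records the id of the key that produced it, and retired keys stay in a decrypt-only archive so that old files remain readable after rotation.

```python
import secrets

keys = {"v1": secrets.token_bytes(32)}   # key archive, by version id
active = "v1"                            # only this key encrypts new data

def encrypt(plaintext: bytes) -> tuple[str, bytes]:
    """Tag each ciphertext with the key version that produced it."""
    k = keys[active]
    return (active, bytes(b ^ k[i % 32] for i, b in enumerate(plaintext)))

def decrypt(blob: tuple[str, bytes]) -> bytes:
    """Resolve the recorded key id; archived keys still work for reads."""
    kid, ct = blob
    k = keys[kid]
    return bytes(b ^ k[i % 32] for i, b in enumerate(ct))

old = encrypt(b"2016 contract")
keys["v2"] = secrets.token_bytes(32)     # rotation: add and activate new key
active = "v2"
new = encrypt(b"2018 contract")
assert decrypt(old) == b"2016 contract"  # v1 is retired but kept for reads
assert decrypt(new) == b"2018 contract"
```

Deleting `keys["v1"]` outright would silently orphan every file encrypted under it, which is why rotation policy must distinguish "stop encrypting with" from "destroy".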


Industry Direction

Most major cloud vendors are moving toward support of a BYOK model. These include Salesforce, ServiceNow, Box, Amazon Web Services (AWS), and Microsoft Azure, to name a few. As more and more vendors offer this type of capability, we at Cedrus believe that this is the direction of cloud encryption.


Opinion

  • Gateway delivered encryption – This is the highest level of security that can be provided when it comes to cloud app encryption, but may have an impact to the business in usability issues, especially when applied to structured data. High-risk apps and data are safest in this configuration and require the most care and feeding.
  • BYOK encryption – This implementation can provide a very high level of security without the impact that comes with gateway encryption. Through integration with a CASB as a broker of keys to centralize this management, this solution provides an excellent balance between protection and usability for high-risk apps and data.
  • Vendor provided encryption – This implementation provides a much higher level of security than not implementing encryption at all. This solution may be best suited for apps and data of lower criticality, or for meeting compliance requirements only.


Recommendations

As with all security decisions, risk and compliance must be the yardstick in any decision. Since we do not know the industry, application, or risk to your business, this is a generic recommendation.

Where possible, always leverage your own keys over vendor-provided keys. Remember, a breach into a lower-risk app may provide clues to breach other apps.

When provided as an option, the best trade-off between security and usability is BYOK. It is very important to gain agreement from vendors for proactive communication. Where BYOK is not offered, the risks must be weighed carefully between vendor provided and gateway delivered encryption, especially for structured data.

When considering a move to gateway encryption, risk analysis of the app and data is critical. The risk of compromise should represent a clear and present danger. This is because a decision to move to gateway encryption for structured data means a commitment to management and maintenance at a much higher level than BYOK or vendor provided encryption requires. This is not a recommendation against taking this course, but advice to consider this path carefully and plan the resources necessary to maintain this type of implementation. In a recent exchange, a customer articulated the challenge: “We use CASB to provide field level encryption for our Salesforce instance. There are many issues requiring a lot of support and we have plans to move away from it and leverage encryption that is part of the Salesforce platform.”

Saturday Morning Security Spotlight: Breaches and Intel

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

—Data on 123 million US households leaked
—Tech giants investing in healthcare technology
—Intel chips contain security vulnerability
—DHS suffers breach of over 247,000 records
—Forever 21 finds malware in PoS systems

Data on 123 million US households leaked
Alteryx, an analytics firm, was found to have an AWS misconfiguration that exposed the personal data of 123 million US households. This was the largest such leak to date. While it is unclear to what extent the data was actually accessed by malicious parties, it remained publicly available for a number of months.

Tech giants investing in healthcare technology
Large technology companies are beginning to focus their time and energy on healthcare. As the industry is large, growing, and profitable, organizations like Google, Apple, and Microsoft are investing in technologies that will help them to serve healthcare providers (and their customers) in innovative ways.

Intel chips found to contain security vulnerability
Intel’s chip-level technology (spanning the last two decades) was found to contain a vulnerability that exposes sensitive information to hackers. Passwords, encryption keys, and more can be taken from affected computers’ kernels. Obviously, this discovery has massive security ramifications.

DHS suffers breach of over 247,000 records
The Department of Homeland Security experienced an unauthorized data transfer that leaked over 247,000 records. The breach (which was caused internally rather than by an external hacker) exposed the personally identifiable information (PII) of many current and former employees; for example, their Social Security numbers.

Forever 21 finds malware in PoS systems
Point-of-sale devices at retailer Forever 21 were used by hackers to install malware and gain access to the company’s network. While not all PoS systems were infected, the culprits still gained access to the credit card information of many customers. The extent of the malware infection and data theft are not yet known.

Whether it’s breaches, leaks, malware, or anything else, these news stories highlight the importance of cybersecurity. Organizations must adopt complete security solutions in order to protect their data. To learn about cloud access security brokers, download the Definitive Guide to CASBs.

GDPR and the Art of Applying Common Sense

By Daniele Catteddu, Chief Technology Officer, Cloud Security Alliance

On November 21, the CSA released the Code of Conduct for GDPR Compliance. This new document is part of CSA’s continuous effort to support the community with best practices that will help cloud providers and customers alike face the tremendous challenge of General Data Protection Regulation (GDPR) compliance.

Our code has been officially submitted to the attention of the Information Commissioner’s Office, the UK Data Protection Authority, for its review, as well as to all the other European Data Protection Authorities (DPAs). We are confident that we’ll receive positive feedback that will allow CSA to proceed with the final submission to the Article 29 Working Party (WP29) and European Commission for their endorsement.

GDPR, as many have already commented, represents a substantial change in the privacy and security landscape. It will affect every business sector, and cloud computing won’t be exempt. GDPR imposes on companies doing business in Europe a set of new obligations, but perhaps most importantly it demands a change in attitude vis-a-vis the way organizations handle personal data.

The GDPR requests that companies take a new approach to privacy and security and be good stewards of the data that is entrusted to them. Further, they are being asked to demonstrate accountability and transparency. In theory, this shouldn’t be a big shock to anyone since the principles of accountability, responsibility and transparency are meant to be the basic foundations of any company’s corporate code of ethics. Unfortunately, we have realized that not all of the companies out there have been applying these principles of common sense in a consistent manner.

But perhaps the biggest change that GDPR imposes relates to the stricter approach to enforcement that regulators have taken. The fines for non-compliance definitely reflect a punitive logic: they will be substantial and are meant to deter organizations looking for shortcuts.

In such a context, we are all noticing a crazy rush to GDPR compliance, with countdowns all over the internet reminding us how quickly the May 25 deadline is approaching.

So just in case you weren’t confused enough about how to tackle GDPR compliance, you can now be even more stressed about it.

A cultural change doesn’t happen overnight though. The radically new attitude requested by GDPR and the related updates to policies and procedures can’t possibly be defined, tested and implemented in one day. Those familiar with the management of corporate governance are well aware of how lengthy and expensive the process of changing the internal rules and approaches can be. Rome wasn’t built in a day, and likewise this privacy revolution won’t magically happen one minute past midnight on May 25.

My bet is that, given the magnitude of the effort GDPR compliance requires, both in terms of cultural change and money, it is unlikely that all organizations, especially small- and medium-sized companies and public administrations, will be able to meet the May deadline.

This is because, besides the objective difficulty of the task, there are still provisions and requirements to be clarified, for instance, data breach notification (the WP29 is working on it). Moreover, there are some known and some hidden problems, for example, the tension between data backup and data deletion that will manifest itself when the new rules are put into practice.

To complicate matters further, in the period leading up to May 25, companies will still need to do business and sign contracts that in the majority of cases aren’t GDPR-ready, and it is likely that a supplemental effort will be requested for a retrofitting compliance exercise.

None of the above is an excuse for not working hard to achieve compliance, but rather to say that it will take time to achieve 100-percent compliance and, in some cases, even that won’t be entirely possible.

What to do? I’d personally look at the GDPR compliance project as a journey that has already started and won’t finish in May. I’d focus on defining the policies and procedures for GDPR compliance, and I’d start implementing them. I’d base my new approach, as much as possible, on standards and best practices, which typically provide good direction. Perhaps standards won’t be the ideal route, but that’s not critical, since finding the ideal route always requires some correction to the general trajectory.

Standards will assure me that the approach I’m using and the policy I’m defining are likely to be understood by my business partners. Policy interoperability between the cloud service provider and the customer is a fundamental requirement for a sound cloud governance approach, and it will be a key requirement for a successful GDPR compliance journey.

So, adoption of standards, policy interoperability, and what else? Well, transparency of course.

I’d aim for transparency within my organization, and I’d seek out transparency in my business partners. If I want to be a proper steward of data, if I want to make proper risk decisions, if I need to implement accountability, then I need to rely on data, evidence, and facts, which means that I need to work with partners that are willing to collaborate with me and be transparent.

And what if I won’t be 100-percent ready by May? I’d make sure I’m documenting all the actions taken to build and implement my GDPR compliance framework. This will help me provide regulators with evidence of my strategy, my good faith, my direction, and my final goal. After all, the law is not demanding perfect privacy and security; it’s asking for a risk-based approach to privacy.

I recommend that everyone reading this post seriously consider the adoption of the CSA Code of Conduct for GDPR compliance in association with our Cloud Control Matrix (or any equivalent information security best practice). Those are the free standards we offer the community members for supporting their GDPR compliance journey.

Your Top Three Cloud Security Resolutions for 2018

By Doug Lane, Vice President/Product Marketing, Vaultive

With 2017 behind us, it’s time to prepare your IT strategy and goals for the new year. There is a good chance that, if you aren’t using the cloud already, a cloud services migration is in store for your organization this year. No matter where you are on your cloud adoption timeline, here are three steps IT security teams and business leaders can take today to kick off 2018 with a strong cloud security position:

Take Inventory
It’s critical to understand what data your company is collecting from customers, prospects, and employees and how it’s being processed by your organization. Mapping out how sensitive information flows through your organization today will give you a clear idea of where to focus your security efforts and investments. It also gives you the opportunity to delete data or processes that may be redundant or no longer necessary, which will reduce your overall risk and save resources long-term.

Secure Sensitive Data and Materials
Once you’ve identified potentially sensitive information and the services processing it, there are several measures you can put in place to protect data. Some of the most common and effective controls include:

  • Encryption: The first option for protecting sensitive information is to encrypt data before it ever flows out of your environment and into the cloud. While many cloud providers offer encryption using bring-your-own-key (BYOK) features, the provider will still require access to the key in some form, leaving your data at risk from insider threats and blind subpoenas. Organizations that choose to encrypt cloud data should seek a solution, even if it means approaching a third party, that allows them sole control of and access to the encryption keys.
  • Data Loss Prevention (DLP): Another common data protection measure is implementing DLP in your environment. By inspecting cloud computing activities and detecting the transmission of certain types of information, an organization can prevent that information from ever being stored in the cloud. This approach ensures sensitive data, particularly personally identifiable information (PII), never leaves the premises or your IT security team’s control.
  • Privilege & Access Control: While many organizations have used privilege management and access control as a practical on-premises security strategy for years, few have applied them to their cloud environments. In many cloud services, administrator roles can mean unlimited access and functionality. In addition to severely increased risk if an administrator goes rogue, this can lead to downtime and critical configuration errors in a few clicks. IT teams should seek to limit user access and functionality within a cloud service to only what they need to be productive.
    Supplementary to limiting access based on user identity, IT teams should also consider blocking activity or requiring additional approval in certain contexts, such as when a user logs in from an odd or new location, an out-of-date browser is detected, or a highly sensitive transaction is executed (e.g., a bulk export of files).
  • Enforce Two-Factor & Step-Up Authentication: Even if user access and privileges are limited in scope, an unauthorized party making use of compromised credentials is still a risk. It’s important to have an additional layer of security in place to ensure the user at the endpoint is genuinely who their credentials claim them to be. Configuring your cloud services to require re-authentication or step-up authentication with your preferred identity and access management (IAM) vendor based on criteria you select is an effective strategy.
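The access-control and step-up bullets above can be sketched as a simple decision function. Everything here, the field names, roles, and the rules themselves, is an invented illustration, not any vendor’s policy engine:

```python
def access_decision(user: dict, action: str, context: dict) -> str:
    """Return 'allow', 'deny', or 'step_up_auth' for a cloud request.
    Rules and thresholds are illustrative placeholders."""
    # Risky context: require re-authentication via the IdP.
    if context["new_location"] or context["browser_outdated"]:
        return "step_up_auth"
    # Limit highly sensitive transactions to admins...
    if action == "bulk_export" and user["role"] != "admin":
        return "deny"
    # ...and still step up authentication even for admins.
    if action == "bulk_export":
        return "step_up_auth"
    return "allow"
```

A real deployment would hand the "step_up_auth" outcome to the organization’s identity and access management (IAM) provider rather than resolve it itself.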

Prepare a Breach Notification Plan
Finally, while many companies last year, such as Yahoo and Equifax, opted for an extended period of silence to investigate and strategize how they would communicate a detected breach, the public and regulators are quickly losing patience with this sort of behavior. In fact, new regulations such as the EU General Data Protection Regulation now place requirements and time limits around data breach notifications, with hefty penalties for non-compliance.

Though a strong security strategy can reduce the risk of a significant data breach, businesses should have a plan of action if the worst should happen. Identify who in your organization should be notified and sketch out a general response.

Let’s make 2018 the year we’re accountable and prepared when it comes to data privacy and security in the cloud.

Cloud Access Security Brokers: Past, Present, and Future

By Jacob Serpa, Product Marketing Manager, Bitglass

Leading cloud access security brokers (CASBs) currently provide data protection, threat protection, identity management, and visibility. However, this has not always been the case.  Since the inception of the CASB market, cloud access security brokers have offered a variety of tools and undergone a number of evolutions. For organizations to ensure that they are adopting the correct solutions and adequately protecting their data, they must understand the past, present, and future of CASBs.

Agents and APIs
CASBs were originally used primarily for discovery capabilities. Through agents installed on users’ devices, CASBs would give organizations information about the unsanctioned cloud applications that were being used to store and process corporate data. Additionally, integrations with application programming interfaces (APIs) were used to exert control over data at rest within sanctioned cloud apps. However, these strategies provided little help with securing unmanaged devices and protecting data at access in real time.

Proxies
To address the shortcomings of agents and APIs, CASBs with proxies were used to, as the name implies, proxy traffic. By standing between devices and cloud applications, proxies control the flow of data in real time and provide controls to govern data access based on factors like job function. Because proxies take a data-centric approach rather than a device-centric approach, they are even able to secure unmanaged and mobile device access – without the use of agents.

Hybrid Architectures
Today, leading CASBs utilize a hybrid or multimode architecture. This means that they offer a combination of proxies and API integrations. In this way, they are able to provide complete protection – APIs secure data at rest in cloud applications, while proxies monitor data at access even for unmanaged and mobile devices. When deployed together, these tools provide advanced capabilities such as malware protection for data at upload, data at download, and data at rest within cloud applications.

Machine Learning
The future of security belongs to artificial intelligence (AI). As such, machine learning is already a core component of advanced CASBs like Bitglass. In general, machine learning allows CASBs to make more automated, effective, and rapid security decisions than ever before. For example, with user and entity behavior analytics (UEBA), they can recognize suspicious behaviors (logging in to cloud apps from two places at once, or downloading unusual amounts of data) and remediate in real time. They can also evaluate unsanctioned apps as they are accessed by employees to determine whether they are safe, and impose controls around the uploading of data.
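As a toy illustration of the kind of signal UEBA looks for, the sketch below flags “impossible travel”: two logins whose implied travel speed exceeds what an airliner could manage. The threshold and field names are invented for illustration; real UEBA engines weigh many more signals.

```python
from math import radians, sin, cos, asin, sqrt


def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    # Great-circle distance between two points, in kilometres.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))


def impossible_travel(login_a: dict, login_b: dict, max_speed_kmh: float = 900) -> bool:
    """Flag two logins whose implied speed exceeds max_speed_kmh.
    Each login is {'lat': ..., 'lon': ..., 'ts': seconds-since-epoch}."""
    km = haversine_km(login_a["lat"], login_a["lon"],
                      login_b["lat"], login_b["lon"])
    hours = abs(login_b["ts"] - login_a["ts"]) / 3600
    return hours == 0 or km / hours > max_speed_kmh
```

A login from New York followed an hour later by one from London would be flagged; two New York logins an hour apart would not.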

To learn more, watch “The Evolution of CASBs,” or download the Definitive Guide to Cloud Access Security Brokers.

The Stakes for Protecting Personally Identifiable Information Will Be Higher in 2018

By Doug Lane, Vice President/Product Marketing, Vaultive

While it’s tough to predict what the most significant single threat of 2018 will be, it’s safe to say that 2017 was certainly a wake-up call for both businesses and consumers when it comes to data breaches. From the rampant misconfiguration of Amazon S3 data buckets to stolen email credentials, the number of breaches and amount of personal data leaked to unauthorized parties in 2017 was staggering. However, one case stands above the rest as particularly damaging to all parties involved.

In July of this year, Equifax, one of the leading U.S.-based credit bureaus, reported that the personal information of more than 143 million U.S. customers was accessed when an unauthorized party exploited an application vulnerability at the organization. The data exposed in the Equifax incident is more severe than in other breaches because of the type of information that was stolen. Once a criminal has your birth date, Social Security number, etc., and has used them for illicit purposes, it is incredibly difficult to recover your personally identifiable information (PII).

It’s also naïve to assume that the data stolen from Equifax will not be exploited in some way. Not only can that information be abused to commit identity theft under the impacted parties’ names (and we certainly expect to start seeing more of those incidents in 2018), but we also predict it will be abused to access existing user accounts with other services. Much of the ‘permanent data’ that was stolen during the July Equifax incident also happens to be just the sort of information used as secondary authentication for many of our everyday accounts. Think of how many times the ‘last four of your social’ was used to identify you with your card company or at your doctor’s office this year.

Rightfully, the breach was met with a flurry of media and consumer attention and outrage. Equifax’s stock fell by 33 percent in the days following their announcement, and they were a regular headline for several news cycles. In the aftermath, the credit reporting firm found itself the subject of numerous investigations, the resignation of many executive leaders, and more than 240 class action lawsuits.

Evolving Data Regulations
Additionally, new global laws such as the EU’s General Data Protection Regulation (GDPR), which goes into effect May 25, 2018, will further raise the stakes and fines of future breaches. The law will enforce data protection and cybersecurity with a new set of stringent regulations and unprecedented penalties. If the Equifax breach occurred under GDPR, Equifax would have faced additional legal claims and penalties.

With recent events and emerging regulations, organizations and IT security teams who don’t prioritize data security on-premises or in the cloud will find themselves writing some very expensive checks, or worse, closing their doors altogether because of steep fines and liability.

In her recent article “GDPR: True Cost of Compliance Far Less Than Non-Compliance,” Tara Seals of Infosecurity Magazine reported that the cost of non-compliance with the EU GDPR and other data privacy regulations is quickly rising: “…costs widely vary based on the amount of sensitive or confidential information a particular industry handles and is required to secure. That said, the average cost of compliance increased 43% from 2011, and totals around $5.47 million annually.”

Unfortunately, simply sticking your head in the sand and hoping for the best isn’t a good plan either. The EU GDPR requires organizations to notify regulators of a breach promptly. Many industry leaders have speculated that regulators are keen to make examples of both European and overseas businesses for any instance of non-compliance. So watch out, American companies: you aren’t exempt.

In another InfoSecurity article, Matt Fisher provides a warning and some very sound advice for those subject to the EU GDPR:

“The deadline of May 2018 is only the beginning, not the end. Policy makers are already under monumental pressure to smoke out prosecutable cases in the aftermath of the regulation’s implementation. As an organization, if you cannot complete your GDPR project in time for the deadline, taking firm steps to indicate ‘best efforts’ are vital to make your organization a far less attractive target.”

Don’t Forget About the Cloud
In a recent Forbes article summarizing Forrester’s 2018 cloud predictions, it was estimated that “the total global public cloud market will be $178B in 2018, up from $146B in 2017, and will continue to grow at a 22% compound annual growth rate.”

It’s undeniable that this growth will mean more data flowing into IT-sanctioned applications. Because of this, it’s critical for organizations to take the necessary steps to ensure unified data security and governance in their environment, both on-premises and in the cloud.

Increased government involvement and consumer awareness, combined with the potential for financial and reputation damage Equifax and others have suffered, will drive a renewed focus on data protection in the cloud computing space during 2018.


Saturday Morning Security Spotlight: Jail Breaks and Cyberattacks

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

— Man attempts prison break through cyberattacks
— Mailsploit allows for perfect phishing attacks
— 1.4 billion credentials found in dark web database
— Starbucks WiFi hijacks connected devices
— Hackers target cryptocurrency employees for bitcoins

Man attempts prison break through cyberattacks

In an attempt to secure an early release for his imprisoned friend, a man launched a carefully planned cyberattack against his local prison. Through a combination of phishing and malware, the hacker successfully stole the credentials of over 1,000 of his local county’s employees. While he was ultimately caught, he did gain access to the jail’s computer system.

Mailsploit allows for perfect phishing attacks
By exploiting bugs in numerous email clients, a researcher demonstrated how to make an email appear as though it were sent from any email address. Affected clients include Outlook 2016, Thunderbird, Apple Mail, Microsoft Mail, and many more. While some were quick to patch their offerings, others are refusing to address their vulnerabilities.

1.4 billion credentials found in dark web database
Dark web researchers have uncovered a massive database listing 1.4 billion unencrypted credentials. The database contains usernames and passwords from LinkedIn, Pastebin, RedBox, Minecraft, and more. Individuals who reuse passwords across multiple accounts (and their employers) are put at massive risk by the discovery.

Starbucks WiFi hijacks connected devices
The WiFi of a Starbucks in Argentina was recently found to hijack connected devices to mine cryptocurrency. The event highlights the dangers of connecting to public networks, even those that may appear trustworthy. Unfortunately, many individuals allow the desire for convenience to outweigh the need for security, putting their employers at risk.

Hackers target cryptocurrency employees for bitcoins
Hackers from what is believed to be the Lazarus Group are targeting high-level employees of cryptocurrency firms – presumably to steal bitcoins. Attacks begin with phishing email attachments that, when opened, launch malware in the targets’ systems.

To defend against phishing, account theft, malware, and other security threats, organizations must adopt complete security solutions. Learn how to achieve comprehensive visibility and control over data by reading the Definitive Guide to Cloud Access Security Brokers.

Adding Value to Native Cloud Application Security with CASB

By Paul Ilechko, Senior Security Architect, Cedrus

Many companies are starting to look at cloud access security broker (CASB) technology as an extra layer of protection for critical corporate data as more and more business processes move to the cloud.

CASB technologies protect critical corporate data stored within cloud apps. Among their preventative and detective controls, a key feature is the ability to encrypt that data.

At the highest level, the concept is quite simple: data flowing out of the organization is encrypted as it is stored in the cloud. However, in practice there are nuances in the configuration options that may have an impact on how you implement encryption in the cloud.

Most users will start with a discovery phase, which typically involves uploading internet egress logs from firewalls or web proxies to the CASB for examination. This produces a detailed report of all cloud application access, usually sorted by a risk assessment specific to the CASB vendor doing the evaluation (all of the major CASB vendors have strong research teams who do the cloud service risk evaluation for you, so that you don’t have to).
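Mechanically, that discovery pass boils down to tallying destinations in the egress logs and attaching a risk rating to each. A minimal sketch, with hypothetical hostnames and ratings standing in for a vendor’s research database:

```python
from collections import Counter
from urllib.parse import urlparse

# Hypothetical ratings standing in for a CASB vendor's research feed.
RISK = {"trusted-crm.example.com": "low",
        "freefileshare.example.net": "high"}


def discover(log_lines: list) -> list:
    """Tally cloud destinations from proxy log lines of the form
    '<timestamp> <user> <url>' and attach a risk rating to each."""
    hits = Counter()
    for line in log_lines:
        url = line.split()[-1]            # last field is the URL
        hits[urlparse(url).netloc] += 1
    return [(host, count, RISK.get(host, "unknown"))
            for host, count in hits.most_common()]
```

Real egress logs are messier (many formats, millions of lines), which is exactly why the upload-and-analyze model is attractive.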

This enables a company to start thinking about the policy needed to protect itself in the cloud, and also to drive conversations with the business departments using the cloud services, to understand why they are using them and whether they really need them to get their jobs done. This can drive a lot of useful considerations, such as:

  • Is this service safe, or is it putting my business/data at risk?
  • If it is creating risk, what should I do about it? Can I safely block it, or will that cause an issue with my business users?
  • If my business users need this functionality, are there better options out there that achieve the same goals without the risk?

This discovery, assessment and policy definition phase can take some time, possibly weeks or even months, before you are ready to take the next step into a more active CASB implementation. To summarize the ways in which CASB can be integrated into a more active protection scheme:

  • CASBs provide API-level integration with many of the major SaaS, PaaS, and IaaS services, allowing for out-of-band integrations that perform functions like retroactive analysis of data stored in the cloud, or near-real-time data protection capabilities that can be implemented in either a polling or a callback model.
  • CASBs typically provide an in-line proxy model of traffic inspection, where either all, or some subset, of your internet traffic is proxied in real time and decisions can be made on whether to allow the access to proceed. This can incorporate various Data Loss Prevention (DLP) policies, can check for malware, and can perform contextual access control based on a variety of factors, such as user identity, location, device, and time of day, as well as sophisticated anomaly and threat protection using data analytics, such as unexpected data volumes, non-typical location access, and so on.
  • For users who are leery of using a CASB inline for all traffic, particularly when that traffic already traverses a complex stack of products (firewall, web proxy, IPS, Advanced Threat Protection…), many CASB vendors also provide a “reverse proxy” model for integration with specific sanctioned applications, allowing for deeper control and analysis that integrates the CASB with the cloud service using SAML redirection at login time.
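At its core, the in-line DLP decision described above reduces to a few lines. The sketch below blocks SSN-like patterns bound for high-risk destinations; the pattern, risk labels, and actions are illustrative only, not a production policy:

```python
import re

# Matches US Social Security numbers in the common nnn-nn-nnnn form.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")


def inline_inspect(payload: str, destination_risk: str) -> str:
    """Sketch of an in-line proxy decision for outbound traffic:
    block SSN-like content to risky destinations, quarantine elsewhere."""
    if SSN.search(payload):
        return "block" if destination_risk == "high" else "quarantine"
    return "allow"
```

A real proxy would also weigh user identity, device, and location, as the bullet above notes, before settling on an action.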

Policy-based encryption
Many platforms, such as Salesforce with its Salesforce Shield capability, provide the ability to encrypt data. With Shield, for example, this can be at either the file or the field level. However, Shield is configured at the organization level, and most companies that use Salesforce will probably have created multiple Salesforce Orgs. It’s likely that you will want to define policy consistently across those Orgs, and even across multiple applications, such as Salesforce and Office 365.

A CASB can provide you with the capability to define policy once and apply it many times. You have the option to use the CASB’s own encryption or, in some cases, to make use of the CASB’s ability to use API integration to interact with the platform’s own native tools (e.g., some CASBs are able to call out to Salesforce Shield to perform selective encryption as required by policy). The CASB can protect your data no matter where in an application it resides: in a document, in a record, or in a communication channel such as Chatter. (The CASB can, of course, provide these capabilities for many applications; we are just using Salesforce here as an example.)
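“Define once, apply many times” might look like the following sketch, where the `encrypt` callable could be the CASB’s own cipher or a callout to a native tool such as Salesforce Shield. The field names and policy set are invented for illustration:

```python
# One policy, defined once, applied to records from any app.
SENSITIVE_FIELDS = {"ssn", "credit_card"}


def apply_policy(record: dict, encrypt) -> dict:
    """Encrypt the sensitive fields of a record, leaving the rest intact.
    `encrypt` is whatever cipher or native callout the CASB is configured
    to use for this app."""
    return {k: encrypt(v) if k.lower() in SENSITIVE_FIELDS else v
            for k, v in record.items()}
```

The same function, and the same `SENSITIVE_FIELDS` set, would be applied to a Salesforce record and an Office 365 document alike, which is the consistency benefit the paragraph above describes.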

Continuous Data Monitoring
A CASB can provide real-time or near-real-time monitoring of data. It can use APIs to retroactively examine data stored with a cloud provider, looking for exceptions to policy, threats such as malware, or anomalies such as potential ransomware encryption. It can also act as a proxy, examining data in flight and taking policy-based actions at a granular level.
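The anomaly side of that monitoring can be illustrated with a crude burst detector over file-modification events pulled via a provider’s API: a sudden flood of modifications is one rough signal of ransomware at work. The window and threshold values here are arbitrary placeholders:

```python
def ransomware_signal(events: list, window_s: int = 300, threshold: int = 50) -> bool:
    """Return True if `threshold` or more 'modify' events fall inside any
    `window_s`-second window -- a crude stand-in for the anomaly detection
    a CASB applies to data pulled retroactively over an API."""
    times = sorted(e["ts"] for e in events if e["type"] == "modify")
    for i in range(len(times)):
        j = i
        while j < len(times) and times[j] - times[i] <= window_s:
            j += 1
        if j - i >= threshold:
            return True
    return False
```

Production detectors combine many such signals (entropy of file contents, extension changes, user history) rather than a single count.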

Threat and anomaly recognition
CASBs typically provide strong capabilities around threat protection and anomaly recognition. Using advanced data science techniques against a “big data” store of knowledge, they can recognize negligent and/or malicious behavior, compromised accounts, entitlement sprawl, and the like. The exact same set of analytics and policies can be applied across a range of service providers, rather than forcing you to attempt it on a piecemeal basis.

Cross-cloud activity monitoring
Because a CASB can be used to protect multiple applications, it can provide a detailed audit trail of user and administrative actions across multiple clouds, which can be extremely useful in incident evaluation and forensic investigations. The CASB acts as a single point of activity collection, which can then be used as a channel into your SIEM.
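The single-point-of-collection idea rests on normalizing each app’s event shape into one audit schema before forwarding to the SIEM. A minimal sketch; the per-app field names below are invented for illustration, not the actual event formats of these services:

```python
def normalize(app: str, raw: dict) -> dict:
    """Map per-app event shapes into one common audit schema so the SIEM
    sees a uniform stream regardless of which cloud produced the event."""
    if app == "salesforce":
        return {"user": raw["Username"], "action": raw["EventType"], "app": app}
    if app == "office365":
        return {"user": raw["UserId"], "action": raw["Operation"], "app": app}
    raise ValueError("unknown app: " + app)
```

Once every event carries the same `user`/`action`/`app` keys, a cross-cloud query (“show everything this user did today, in any app”) becomes a single filter rather than a per-provider investigation.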

So, to summarize: while many of the major cloud service providers have added interesting and useful security features to their applications, a CASB can add significant additional benefit by streamlining, enhancing and consolidating your security posture across a wide range of applications.

It Could Happen To You

By Yael Nishry, Vice President/Business Development, Vaultive; Arthur van der Wees LLM, Arthur’s Legal; and Jiri Svorc LLM, Arthur’s Legal

For organizations around the world, implementing state-of-the-art security and personal data protection (using both technical and organizational measures) is now a must. In the wake of the recent Equifax incident, this article outlines why data security and privacy accountability is important and how organizations can responsibly manage their sensitive data.

You Got Equifax-ed!
On September 7, 2017, Equifax disclosed arguably the most severe personal data breach ever, affecting up to 143 million US consumers, between 400,000 and 44 million British consumers, and approximately 100,000 Canadian residents. The global consumer credit reporting agency announced that between March and July 2017, hackers were able to access consumers’ personal data, including names, Social Security numbers, birthdates, and driver’s license numbers. In addition, the details of up to 209,000 credit cards were reportedly compromised.

While previous breaches have exposed the details of more people overall, the Equifax incident is significant due to the highly sensitive nature of the leaked information. Although some of the data is of a temporary nature and can easily be refreshed (such as credit card numbers), other types are more difficult to change (including addresses or Social Security numbers). It’s not difficult to imagine why the leak of unchangeable “lifetime” data, including customers’ names and birthdates, is extremely alarming to consumers. As a result, the incident has been followed by significant media outcry, inspired the introduction of legislation, and sparked investigations from the FTC and FBI. Not to mention that the value of Equifax’s stock fell by a third in the days following the disclosure.

A Case for Encryption
Due to the extent of the Equifax data breach, it is not surprising that it took less than two weeks for the first privacy regulator to take legal action. The attorney general of the state of Massachusetts filed a lawsuit against Equifax pursuant to the state’s consumer protection laws.

The complaint alleges that the credit reporting agency failed to adequately secure its portal after the public disclosure of a major vulnerability in the open-source software used to build its consumer redress portal, and failed to maintain multiple layers of security around consumer data. It also argues that the credit rating agency violated the law by keeping Massachusetts residents’ information accessible in an unencrypted form on a part of its network reachable from the internet. Given that the company collects and aggregates the information of over 800 million individual consumers worldwide, it is disturbing to learn that encryption was not being used effectively by its IT security team in this case. This is even more surprising when viewed through the lens of Equifax’s main business activities: acquiring, compiling, analyzing, and selling sensitive personal data.

The Massachusetts claim alleges that Equifax’s market position and business nature oblige the company to go beyond the regulations’ minimum requirements and “implement administrative, technical, and physical safeguards […] which are at least consistent with industry best practices.” As one of the most commonly used and best-practice security measures, the encryption of sensitive consumer data should have been ensured.

From What If …
What if the Equifax incident had occurred a year later?

In the first months of 2018, several important pieces of new legislation will go into effect in the EU, including the General Data Protection Regulation (GDPR) and the directive concerning measures for a high common level of security of network and information systems across the Union (NIS Directive). Both laws bring about significant changes in the domain of data protection and cybersecurity and introduce a new set of requirements for companies to comply with. Had the Equifax breach occurred in July 2018, the agency would likely have faced legal claims pursuant to the GDPR and the NIS Directive.

The NIS Directive aims to achieve a high common level of security of network and information systems within the EU. In doing so, its provisions apply to all providers of digital services active in the EU as well as operators of essential services active in the Union. GDPR, on the other hand, places stringent data protection and security obligations on anyone handling personal data of EU citizens. Similar to the NIS Directive, the GDPR requires companies processing personal data to implement appropriate technical and organizational measures that ensure a level of security appropriate to the risk, taking into account the state of the art, the costs of implementation, and the nature, scope, and purposes of the processing. In this respect, the regulation regards encryption as one of the appropriate technical measures to be implemented. Having failed to encrypt customers’ data properly, Equifax would likely have been non-compliant with the relevant provisions.

In addition, the GDPR requires an organization to notify authorities within 72 hours of becoming aware of a breach, so Equifax’s disclosure of the data breach more than six weeks after it occurred would certainly not comply with the obligation to notify the supervisory authority without undue delay. Once again, had the incident occurred a year later, failing to act in accordance with the law could have resulted in Equifax being charged penalty fees of up to 4% of its total worldwide annual turnover, which would amount to about EUR 130 million, per breach.

Data Protection Impact Assessment
Much of this could have been prevented had Equifax diligently carried out the Data Protection Impact Assessment (DPIA) required by the EU GDPR. This is a legal requirement under the GDPR for organizations processing personal data in a way which is likely to result in high risk to the rights and freedoms of natural persons. It is not only important from the legal compliance perspective: the DPIA also provides organizations with a systematic description of personal data processing, including special categories of data, an assessment of its necessity and proportionality, and an identification of risks and of the measures in place to address them. In other words, the DPIA serves as a valuable strategy and validation tool for testing and assuring a data and security strategy. It provides organizations with many benefits, including a potential for structural savings, data minimization, and scalability of the business model. Hence, based on the extent of the incident, it is clear that a diligently carried-out DPIA would, and should, have raised plenty of red flags for Equifax to address.

It Could Happen to You
Given the thousands of UK and Canadian citizens also affected by the Equifax incident, some have claimed that the lawsuit filed by the Massachusetts attorney general may be just the tip of the iceberg. Indeed, that may well be the case. At the same time, there remain thousands of other organizations for which processing sensitive personal data constitutes an essential part of their business. Irrespective of the new legislation entering into application in 2018, organizations that have not started addressing the security and protection of their customers’ personal data may find that the Equifax saga serves only as the overture to a swiftly developing and extensive narrative featuring a growing number of unprepared characters.

Avoid a Breach: Five Tips to Secure Data Access

By Jacob Serpa, Product Marketing Manager, Bitglass

Although the cloud is a boon to productivity, flexibility, and cost savings, it can also be a confusing tool to utilize properly. When organizations misunderstand how to use it, they often expose themselves to threats. While there aren’t necessarily more threats when using the cloud, there are different varieties of threats. As such, organizations need to employ the cloud security best practices below when they make use of applications like Salesforce, Office 365, and more.

Password123
When an employee uses one insecure password across multiple accounts, it makes it easier for nefarious parties to steal corporate information wherever that password is used. In light of this, organizations should require unique passwords of sufficient length and complexity for each of a user’s SaaS accounts. Additionally, requiring employees to change their passwords regularly – perhaps every other month – can provide an additional layer of security.
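As a sketch of how such a policy might be enforced at account creation, the following minimal Python check returns the list of violations; the length and character-class rules here are illustrative choices, not a prescribed standard:

```python
import re

def check_password(pw, min_length=12):
    """Return a list of policy violations; an empty list means the password passes."""
    problems = []
    if len(pw) < min_length:
        problems.append(f"shorter than {min_length} characters")
    if not re.search(r"[a-z]", pw):
        problems.append("no lowercase letter")
    if not re.search(r"[A-Z]", pw):
        problems.append("no uppercase letter")
    if not re.search(r"\d", pw):
        problems.append("no digit")
    if not re.search(r"[^A-Za-z0-9]", pw):
        problems.append("no symbol")
    return problems
```

A real deployment would also reject passwords found in breach corpora and reuse across accounts, which a complexity check alone cannot catch.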

Authenticate or Else
Whether it occurs through employee carelessness, a breach from a hacker, or a combination of the two, credential compromise is a large threat to organizations. As detecting rogue accounts can be a challenging endeavor, multi-factor authentication should be employed as a means of verifying that accounts are being used by their true owners. Before allowing a user to access sensitive data, organizations should require a second level of verification through an email, a text message, or a hardware token (a unique physical item carried by the user).
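The one-time codes behind most authenticator apps and hardware tokens are standardized (HOTP in RFC 4226, TOTP in RFC 6238). Purely as an illustration of what a token computes, here is a minimal stdlib-only Python sketch; a real deployment should use a vetted MFA library rather than this example:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble of last byte
    code = int.from_bytes(mac[offset:offset + 4], "big") & 0x7FFFFFFF
    return code % (10 ** digits)

def totp(secret, at=None, step=30, digits=6):
    """RFC 6238 time-based one-time password: HOTP over a 30-second counter."""
    t = time.time() if at is None else at
    return hotp(secret, int(t // step), digits)
```

The server verifies that the code the user submits matches the one it computes from the shared secret for the current time step, usually allowing one step of clock drift either way.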

Data on the Go
The rise of BYOD (bring your own device) has given individuals access to corporate data from their unmanaged mobile devices and, consequently, exposed organizations to new threats. In light of this, enterprises must secure BYOD, but do so in a way that is simple to deploy and doesn’t harm device functionality or user privacy. This is typically done through data-centric, agentless security. With these tools, organizations can secure data on unmanaged mobile devices in a timely, secure, non-invasive fashion.

Put the Pro in Proactive
Oftentimes, as more and more data moves to the cloud, organizations fail to monitor and protect it accordingly. They adopt after-the-fact security that can allow months of data exfiltration before detecting any threats or enabling remediation. However, in a world with regulatory compliance penalties, well-informed consumers, and hackers who can steal massive amounts of data in an instant, a reactive posture is not adequate. Organizations should adopt proactive cloud security platforms that enable real-time detection of malicious activity. Failure to utilize tools that respond to threats the moment they occur can prove disastrous for an organization’s security, finances, reputation, and livelihood.
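As a toy illustration of real-time detection (a real cloud security platform uses far richer behavioral models), the sketch below flags a user the moment their activity jumps well above their own trailing baseline; the window size and threshold factor are arbitrary:

```python
from collections import deque

class DownloadMonitor:
    """Flag an observation that exceeds `factor` times the user's
    trailing average over the last `window` observations (naive baseline)."""

    def __init__(self, window=5, factor=3.0):
        self.history = deque(maxlen=window)
        self.factor = factor

    def observe(self, count):
        baseline = sum(self.history) / len(self.history) if self.history else None
        self.history.append(count)
        if baseline is None or baseline == 0:
            return False  # not enough history to judge yet
        return count > self.factor * baseline
```

The point is the posture, not the math: the alert fires at the moment of the anomalous event, not months later during a log review.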

More Malware More Problems
With all of the cloud applications and devices storing, uploading, and downloading data, malware has a number of attack surfaces it can use to infect organizations. If a single device uploads a contaminated file to the cloud, it can spread to connected cloud apps and other users who download said file. While protecting endpoints from malware is necessary, it is no longer sufficient. Today, organizations must deploy anti-malware capabilities that can defend from threats at upload, threats at download, and threats already resting in cloud applications. Defenses must lie in wait wherever data moves.

Now What?
Cloud access security brokers provide a breadth of capabilities that can enable the above best practices. Download the Definitive Guide to CASBs to learn more.

MSP: Is Your New Digital Service Compliant?

By Eitan Bremler, VP Marketing and Product Management, Safe-T Data

Offering managed services seems like an easy proposition. You offer IT services for companies that don’t have the infrastructure to support their own, bundle in services like cloud storage or remote desktop access, then sit back and watch the money roll in.

Of course, that’s a dramatic oversimplification of how an MSP works, especially because this description contains a rather substantial omission — security. As an MSP, you’re handling the sensitive digital data from dozens of companies. Not only are you subject to well-known compliance regimes such as PCI-DSS and HIPAA, you might also be subject to newer regulations from the NY DFS or soon, the GDPR.

Some of these regimes are known quantities and others not so much, but if you fail to follow them, one thing is certain — your customers will quickly cut ties. How can managed service providers provide secure and compliant digital services?

MSPs Are Likely to Be Covered by Multiple Overlapping Compliance Regimes
Each managed services provider is likely to be covered by at least one of the following four compliance standards, based on whom it does business with.

  • If you touch PHI from a healthcare provider, you are subject to HIPAA and must execute a Business Associate Agreement (BAA) before you’re allowed to start working with them.
  • If you process credit card numbers, or store credit card numbers for another company, you are subject to PCI-DSS. Companies who process more credit cards are subject to stricter standards, so it pays to keep track of how many cards you’re processing.
  • If you work with a company that’s under the jurisdiction of New York’s Department of Financial Services, then you will be subject to compliance regulations recently laid down by the DFS. These regulations mandate a number of security controls, backed up by regular audits.
  • If you work with a company that deals with the data of EU citizens, or do business with an EU company directly, then after May 25th, 2018, you will be subject to the GDPR.

These bullets are outlines, not guidelines. If you’re unsure as to whether your organization is affected by one or more of these compliance regimes, it’s best to talk to a lawyer. Also remember that it’s extremely common to believe that you’re unaffected by a particular compliance standard, only to receive a nasty surprise. For example, you might also be affected by the GLBA, FISMA, FERPA, or SOX, depending on your target market or business model.

Different Compliance Regimes Will Affect Different Companies in Different Ways
Here’s where it gets tricky. Many compliance regimes specify that companies secure their most valuable information in different ways, or follow different procedures in the event of a breach. HIPAA, for example, mandates that companies report data breaches within 60 days, but PCI-DSS and the GDPR both give companies just 72 hours to report breaches.

In 2010, the SANS Institute recommended that companies affected by multiple compliance regimes adopt what they referred to as a Mother of All Control Lists (MOACL). The process of creating an MOACL is perhaps easier to describe than it is to carry out.

Step One: Understand all of the various compliance regimes that one is subject to.
Step Two: Understand the best practice recommendations of those regimes.
Step Three: Attempt to adhere to the strictest recommendation from every compliance regime. E.g., if HIPAA mandates a 60-day breach reporting schedule, but PCI-DSS mandates three days, then companies should plan on having three days to submit breach reports in every case.
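Step three amounts to taking the minimum over the deadlines of the regimes that apply to you. A trivial sketch, using the reporting windows cited above purely as illustrative values (verify the actual obligations with counsel):

```python
# Breach-notification deadlines in hours, per regime.
# These numbers are illustrative, taken from the examples in the text.
DEADLINES = {
    "HIPAA": 60 * 24,  # 60 days
    "PCI-DSS": 72,
    "GDPR": 72,
}

def strictest(regimes, table=DEADLINES):
    """Return the tightest (smallest) deadline among the regimes that apply."""
    return min(table[r] for r in regimes)
```

The same pattern generalizes to any MOACL control: for each requirement that appears in more than one regime, plan against the strictest version.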

The concept of an MOACL is a great starting point for MSPs (and any business subject to multiple compliance regimes) but the drawback is that it may take a great deal of time to implement. Fortunately, the MOACL can be replicated with tools that turn compliance into a turnkey service.

Decoding NYCRR Part 500: What Finance Institutions Need to Know

By Kyle Watson, Identity and Access Management and Cloud Access Security Broker Expert, Cedrus

For those of you in organizations subject to NYDFS oversight, you are probably aware of 23 NYCRR 500, a new set of cybersecurity requirements that went into effect this past March for financial services companies operating in New York. Its purpose is to address the heightened risk of cyberattacks by nation-states, terrorist organizations and independent criminal actors.

So who does NYDFS NYCRR Part 500 apply to? If your company operates in New York, the first question you should ask is: Does my company meet the definition of a Covered Entity? According to the DFS website, the following entities are subject to compliance:

  1. Licensed lenders
  2. State-chartered banks
  3. Trust companies
  4. Service contract providers
  5. Private bankers
  6. Mortgage companies
  7. Insurance companies doing business in New York
  8. Non-U.S. banks licensed to operate in New York

As the year comes to an end, it is imperative that your organization is ready to comply and file the annual DFS Certification of Compliance, which is due on February 15, 2018.

In Financial Services, you should already have a set of policies, procedures, standards, and guidelines based on a common security framework (ISO, COBIT, etc.) that allow you to perform risk assessments and comply with regulatory mandates. Policies drive the necessary processes and procedures that govern your day-to-day operations, enabling your business to be secure and compliant. You should be reinforcing this with everyone who has access to your systems and data through awareness training, both during onboarding and on a periodic basis.

There has been an increasing focus on compliance at the data level of protection. NYDFS classifies this data as Nonpublic Information. It is necessary for organizations to have data protection strategies in place to protect employees, partners, and customers. The increase in threats and breaches has prompted legislative bodies to issue regulations to ensure that companies behave in ways that mitigate risk. Many new regulations have come into play in recent years. Prior to NYDFS 23 NYCRR 500, there was the EU General Data Protection Regulation (EU-GDPR) in 2016, and Service Org Control (SOC) in 2011 (formerly SSAE16 in 2010 and SAS70 in 1992). A robust risk-based approach to data protection means that your company should have a short distance to travel to reach compliance, but each new regulatory mandate introduces changes that must be considered in data protection, visibility, and reporting to the executive level.

The NYDFS regulation took effect in March 2017, with a transitional period that ended in August 2017 and a deadline for filing an extension in September 2017. The timeline gets more specific as the new year rolls out, with the first annual certification due on February 15, 2018. Following the early 2018 deadline is a timeline for implementing the specific controls the regulatory mandate requires.

The NYDFS 23 NYCRR 500 Timeline

There are five key things that you need to do immediately if you have not done so:

  1. Appoint a Chief Information Security Officer (CISO) with specific responsibilities
  2. Ensure that senior management files an annual certification confirming compliance with the NYCRR Part 500 regulations
  3. Conduct regular assessments, including penetration testing, vulnerability assessments, and risk assessments
  4. Deploy key technologies including encryption, multi-factor authentication, and others
  5. Ensure your processes allow you to report to NYDFS within 72 hours any cybersecurity event “that has a reasonable likelihood of materially affecting the normal operation of the entity or that affects Nonpublic Information.”

What makes this new set of regulations unique is that it requires companies to comply with more specific, enforceable rules than they currently use. It also differs from existing guidance, frameworks, and regulations in that it has a broad definition of protected information and increased oversight of third parties, and it calls for the timely destruction of NPI (Nonpublic Information) and prompt notification of a cybersecurity event (72 hours). Entities are also mandated to maintain unaltered audit trails and transaction records and to submit annual certification.

So How Is Compliance Measured?
A recent survey by the Ponemon Institute reports that 60 percent of respondents (who primarily work in their organizations’ IT, IT security, and compliance functions) believe this regulation will be more challenging to implement than GLBA, HIPAA, PCI DSS, and SOX. What is unique about NYDFS NYCRR Part 500 is that it obligates entities to comply with more specific and enforceable rules than they currently face. It differs from existing guidance, frameworks, and regulations in several meaningful ways:

  • Broad definition of protected information
  • Broad oversight of third parties
  • Timely destruction of NPI (nonpublic information)
  • Prompt notification of cybersecurity event (72 hours)
  • Maintaining unaltered audit trails and transactions records
  • Annual certification (first submission due on February 15, 2018)

As an NYDFS covered entity, an organization must certify that they have implemented the controls as outlined in the requirements of NYCRR Part 500.  In order to certify, the Board of Directors or Senior Officers must have evidence that appropriate governance, processes, and controls are in place.  This evidence is provided through the Risk Assessment.

There are nine major components of the NYDFS regulation that should drive an entity’s Risk Assessment:

  1. Program
  2. Policies
  3. Training
  4. Third-party Risk Management
  5. Vulnerability & Penetration Testing
  6. Logging and Monitoring
  7. Access Security
  8. Multi-factor Authentication
  9. Encryption

It is important to note that the Risk Assessment must be conducted periodically, updated as necessary, and conducted in accordance with written policies and procedures so that it’s a defined and auditable process.  Finally, it must be well documented. Meeting compliance will be a challenge for some, even though financial services companies have expected the new cybersecurity regulation for some time. Some of the challenges that we foresee in achieving NYDFS compliance are:

  • Keeping senior management and key stakeholders involved in the planning and reporting process
  • Running regular risk assessments, noting deficiencies from each assessment, and adjusting as necessary
  • Validating that your technology line-up has you covered. Are key technologies such as encryption and multi-factor authentication in place?
  • Reporting within 72 hours. As you review your incident process, assess whether you can respond to the reporting requirements for cybersecurity events within the required 72 hour period.

In addition to protecting customer data and fortifying the information systems of financial entities, another significant attribute of NYDFS 23 NYCRR Part 500 is that it widens the net of regulated data protection. NYDFS is driving organizations to properly secure sensitive Nonpublic Information, known as NPI. Even though NPI classification is not new (GLBA was one of the first regulations to introduce data security requirements for NPI), the NYDFS regulation takes a more prescriptive approach than others – it requires entities to implement policies, procedures, and technologies to comply.

NPI acts as an umbrella over PII (Personally Identifiable Information) and PHI (Protected Health Information). All three data types have their nuances though, so even if you secure your PII and PHI, it doesn’t mean that your NPI is 100% secure and that you’re in compliance.  Take some time to evaluate NPI in your organization – see section 500.01.g for the NYDFS definition of NPI.

Being Compliant with NYDFS Through the Proper Protection of NPI
Cloud Access Security Broker (CASB) and Identity and Access Management (IAM) are two key components that can help an organization with its overall compliance strategy for Part 500, and ultimately improve its ability to protect sensitive data and avoid a breach.

CASB is a key security technology for NYDFS compliance

CASB provides critical features necessary in the control strategy for cloud applications:

  • Discover what cloud applications are in use, as well as where specific data, such as PII, PHI, or NPI, is going within those applications
  • Invoke actions such as alerting the user or blocking a specific app or activity, like upload or download, based upon unusual behavior through user behavior analytics
  • Detect data compromises and anomalies and take action while informing other security systems like Security Information and Event Management (SIEM) for event correlation and forensics
  • Provide vendor risk analysis and ranking including important items such as recent breaches and incidents, infrastructure used to serve the application, and the vendor’s policies around data ownership and destruction
  • Control access over critical cloud apps and data using the context of device, data, location, or other behavioral risk information
  • Monitor authorized users to track their application use


IAM is also a key security technology for NYDFS

When it comes to IAM, the value lies in Access Privileges and Multi-Factor Authentication. IAM enterprise tools can tie access provisioning to job functions and job roles, which allow you to manage to the minimum necessary/least privilege. They can also provide access attestation features so you can review access to applications with regulated information on a periodic interval, and approve or revoke the access based upon a need-to-know basis. Finally, IAM technology is invoked to trigger the need for Multi-Factor Authentication in applications and services (typically in conjunction with a third-party Multi-Factor Authentication end-user solution such as Google Authenticator or DUO).
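A rough sketch of the provisioning and attestation logic described above, with hypothetical role names and entitlement strings; enterprise IAM suites implement the same idea with far more machinery:

```python
# Hypothetical role-to-entitlement model: a user gets the union of the
# entitlements implied by their job roles, and nothing more.
ROLE_ENTITLEMENTS = {
    "teller": {"core-banking:read"},
    "loan-officer": {"core-banking:read", "loan-app:write"},
    "auditor": {"core-banking:read", "audit-log:read"},
}

def provision(job_roles):
    """Compute least-privilege access from a user's job roles."""
    granted = set()
    for role in job_roles:
        granted |= ROLE_ENTITLEMENTS.get(role, set())
    return granted

def attest(current_access, job_roles):
    """Periodic attestation: anything granted beyond the role model
    is flagged for review and likely revocation."""
    return current_access - provision(job_roles)
```

Running `attest` on each review cycle surfaces exactly the access that no current job role justifies, which is what a need-to-know review is looking for.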


CASB and IAM work together to provide critical controls for cloud applications

Achieving and maintaining cybersecurity compliance is a complicated process, but it doesn’t have to be a difficult or stressful one. Find out more by downloading our Road to CASB: Key Business Requirements 2.0 whitepaper, designed to provide requirements you can use as input for your CASB initiative.

 

AWS Cloud: Proactive Security and Forensic Readiness – Part 1

By Neha Thethi, Information Security Analyst, BH Consulting

Part 1 – Identity and Access Management in AWS
This is the first in a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment. This post relates to identity and access management in AWS.

In a recent study by Dashlane regarding password strength, AWS was listed as an organization that supports weak password rules. However, AWS has numerous features that enable granular control for access to an account’s resources by means of the Identity and Access Management (IAM) service. IAM provides control over who can use AWS resources (authentication) and how they can use those resources (authorization).

The following list focuses on limiting access to, and use of, root account and user credentials; defining roles and responsibilities of system users; limiting automated access to AWS resources; and protecting access to data stored in storage buckets – including important data stored by services such as CloudTrail.

The checklist provides best practice for the following:

  1. How are you protecting the access to and the use of AWS root account credentials?
  2. How are you defining roles and responsibilities of system users to control human access to the AWS Management Console and API?
  3. How are you protecting the access to and the use of user account credentials?
  4. How are you limiting automated access to AWS resources?
  5. How are you protecting your CloudTrail logs stored in S3 and your Billing S3 bucket?

Best-practice checklist

1) How are you protecting the access to and the use of AWS root account credentials?

  • Lock away your AWS account (root) login credentials
  • Use multi-factor authentication (MFA) on root account
  • Make minimal use of the root account (or no use of it at all, if possible); use an IAM user instead to manage the account
  • Do not use AWS root account to create API keys.
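One way to audit the last two points is to scan the IAM credential report (a CSV that AWS generates on request) for root-account red flags. A minimal sketch, assuming the report's standard column names:

```python
import csv
import io

def root_account_findings(report_csv):
    """Scan an IAM credential report (CSV text) for root-account red flags.

    The root user appears in the report as '<root_account>'."""
    findings = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row["user"] != "<root_account>":
            continue
        if row["mfa_active"] != "true":
            findings.append("root account has no MFA")
        if row["access_key_1_active"] == "true" or row["access_key_2_active"] == "true":
            findings.append("root account has active API keys")
    return findings
```

In practice you would fetch the report via the IAM API (e.g., with boto3's `generate_credential_report` / `get_credential_report`) and feed its body to a check like this on a schedule.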

2) How are you defining roles and responsibilities of system users to control human access to the AWS Management Console and API?

  • Create individual IAM users
  • Configure a strong password policy for your users
  • Enable MFA for privileged users
  • Segregate defined roles and responsibilities of system users by creating user groups. Use groups to assign permissions to IAM users
  • Clearly define and grant only the minimum privileges to users, groups, and roles that are needed to accomplish business requirements.
  • Use AWS defined policies to assign permissions whenever possible
  • Define and enforce user life-cycle policies
  • Use roles to delegate access to users, applications, or services that don’t normally have access to your AWS resources
  • Use roles for applications that run on Amazon EC2 instances
  • Use access levels (list, read, write and permissions management) to review IAM permissions
  • Use policy conditions for extra security
  • Regularly monitor user activity in your AWS account(s).
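Minimum-privilege grants are ultimately expressed as IAM policy documents attached to groups or roles. A minimal illustrative policy granting a group read-only access to a single bucket; the bucket name is a placeholder:

```python
import json

# Least-privilege policy: read-only access to one reporting bucket.
# "example-reports" is an illustrative bucket name.
read_only_reports = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports",      # bucket-level (ListBucket)
                "arn:aws:s3:::example-reports/*",    # object-level (GetObject)
            ],
        }
    ],
}

print(json.dumps(read_only_reports, indent=2))
```

Note the split between bucket-level and object-level ARNs: `s3:ListBucket` applies to the bucket resource, while `s3:GetObject` applies to the objects inside it.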

3) How are you protecting the access to and the use of user account credentials?

  • Rotate credentials regularly
  • Remove/deactivate unnecessary credentials
  • Protect EC2 key pairs. Password protect the .pem and .ppk file on user machines
  • Delete keys on your instances when someone leaves your organization or no longer requires access
  • Regularly run least privilege checks using IAM user Access Advisor and IAM user Last Used Access Keys
  • Delegate access by using roles instead of by sharing credentials
  • Use IAM roles for cross-account access and identity federation
  • Use temporary security credentials instead of long-term access keys.
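Regular rotation is easy to verify mechanically once you have each key's creation date (available, for example, in the credential report). A small sketch; the 90-day window is an illustrative choice, and the key records are hypothetical:

```python
from datetime import datetime, timedelta, timezone

def stale_keys(keys, max_age_days=90, now=None):
    """Return the IDs of access keys older than the rotation window.

    `keys` is a list of dicts with 'id' and a timezone-aware 'created' datetime."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    return [k["id"] for k in keys if k["created"] < cutoff]
```

A scheduled job that feeds key metadata through a check like this, and opens a ticket for every stale key, turns the "rotate credentials regularly" bullet into something auditable.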

4) How are you limiting automated access to AWS resources?

  • Use IAM roles for EC2 and an AWS SDK or CLI
  • Securely store any static credentials used for automated access
  • Use instance profiles or Amazon STS for dynamic authentication
  • For increased security, implement alternative authentication mechanisms (e.g. LDAP or Active Directory)
  • Protect API access using Multi-factor authentication (MFA).

5) How are you protecting your CloudTrail logs stored in S3 and your Billing S3 bucket?

  • Limit access to users and roles on a “need-to-know” basis for data stored in S3
  • Use bucket access permissions and object access permissions for fine-grained control over S3 resources
  • Use bucket policies to grant access to other AWS accounts or IAM users
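For a CloudTrail log bucket specifically, the bucket policy typically allows only the CloudTrail service principal to read the bucket ACL and write log objects, with full bucket-owner control. A sketch along the lines of AWS's documented CloudTrail bucket policy; the bucket name and account ID are placeholders:

```python
import json

TRAIL_BUCKET = "example-cloudtrail-logs"  # illustrative bucket name
ACCOUNT_ID = "111122223333"               # illustrative account ID

cloudtrail_bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{TRAIL_BUCKET}",
        },
        {
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{TRAIL_BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
            # CloudTrail must write objects owned by the bucket owner.
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

print(json.dumps(cloudtrail_bucket_policy, indent=2))
```

Everything else, including human users, should reach the logs only through separate, tightly scoped read grants, keeping the trail itself tamper-resistant.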

 

For more details, refer to AWS’s Identity and Access Management documentation and best-practice guides.

Next up in the blog series is Part 2 – Infrastructure Level Protection in AWS – best practice checklist. Stay tuned.

Let us know if we have missed anything in our checklist!

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, new features may have been rolled out as these posts were being written. Also, please note that this checklist is for guidance purposes only.

 

What Will Software Defined Perimeter Mean for Compliance?

By Eitan Bremler, VP Marketing and Product Management, Safe-T Data

Your network isn’t really your network anymore. More specifically, the things you thought of as your network — the boxes with blinking lights, the antennae, the switches, the miles of Cat 5 cable — no longer represent the physical reality of your network in the way that they once did. In addition to physical boxes and cables, your network might run through one or more public clouds, to several branch offices over a VPN, and even through vendor or partner networks if you use a managed services provider. What’s more, most of the routing decisions will be made automatically. In total, these new network connections and infrastructure add up to a massive attack surface.

The software defined perimeter is a response to this new openness. It dictates that just because parts of your infrastructure are connected to one another, that doesn’t mean they should be allowed access. Essentially, the use of SDP lets administrators place a digital fence around parts of their network, no matter where it resides.

Flat Networks Leave Data Vulnerable
Where security is concerned, complicated networks can be a feature, not a bug. For companies above a certain size that must protect critical data, a degree of complexity in network design is recommended. For example, can everyone in your company access the shared drive where you store cardholder information? If so, that is bad practice; what you need to adopt is segmentation, a practice recommended by US-CERT.

Any network in which every terminal can access every part of the network is known as a “flat” network. In a properly segmented network, by contrast, every user and application can access only those resources which are absolutely critical for them to do their jobs. A flat network operates by the opposite principle of most privilege: everyone gets access to everything. In other words, if a hacker gets into an application, or an employee goes rogue, prepare for serious trouble.

Flat networks are also a characteristic of networks lacking a software defined perimeter.

Create Nested Software Defined Perimeters for Extra Security
Flat networks introduce a high level of risk, but the use of SDP can eliminate this risk. The software-defined approach can create isolated network segments around applications and databases. What’s more, this approach doesn’t rely on either physically rewiring a network or creating virtual LANs, both of which are time-consuming processes.
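Conceptually, an SDP controller reduces to a default-deny lookup: a connection between two workloads is refused unless that pair is explicitly allowed, regardless of physical adjacency. A toy sketch with hypothetical service names (real products evaluate richer context such as device posture and user identity):

```python
# Default-deny segment policy: only (client, application) pairs listed
# here may connect. The service names are hypothetical.
ALLOWED = {
    ("payments-service", "cardholder-db"),
    ("reporting-service", "analytics-db"),
}

def may_connect(client, application):
    """Zero-trust check: connectivity is denied unless explicitly allowed,
    even when the two hosts share the same physical network."""
    return (client, application) in ALLOWED
```

The important property is the default: anything not on the allow-list is denied, which is the inverse of a flat network's implicit allow-everything.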

This approach is already used in public cloud data centers, where thousands of applications that must not communicate with one another must coexist on VMs that are hosted on the same bare-metal servers. The servers themselves are all wired to one another in the manner of a flat network, but SDN keeps their networks or data from overlapping.

Do You Need SDP in Order to Be Compliant?
Software defined perimeters are strongly recommended for security, but they are not actually necessary for compliance – yet. PCI DSS 3.2 doesn’t require network segmentation, mainly because the technology is still in its relative infancy and is not yet accessible to every company. Those companies that can segment their networks, however, do receive a bit of a bonus.

If you manage to segment your network appropriately, only the segments of your network that contain cardholder data will be subject to PCI audit. Otherwise, the entirety of a flat network will be subject to scrutiny. Clearly, it’s easier to defend and secure a tiny portion of your network than the entire thing. Those who learn the art of network segmentation will have a massive advantage in terms of compliance.

Look for Software-Defined Perimeter Solutions
Solutions using the SDP method will help organizations set Zero Trust boundaries between different applications and databases. These are effectively more secure than firewalls, because they obviate the necessity of opening ports between any two segmented networks. This additional security feature lets companies reduce the scope of PCI without changing