Data Loss Threatens M&A Deals

May 11, 2017

By Jeremy Zoss, Managing Editor, Code42

One of the most popular breakout sessions at Evolution17 featured a great merger and acquisition (M&A) scenario: Midway through the deal, critical information leaks, devastating the value of the deal. How can you figure out how much info leaked—by whom and to whom?

Here’s why that storyline was so riveting: 2016 saw more than $3.5 trillion in M&A deals. And the vast majority of those deals revolved around valuations of intellectual property (IP), which today makes up about 80 percent of a typical company’s value. If you’re a buyer organization, consider these questions:

  • Are you aware of all the IP within the target company?
  • Can you be sure all this IP will come with the deal?
  • Can you be certain it won’t leak to a competitor?

Data loss is a growing M&A problem
For most buyers, the answers to the questions above are no, no and no. This lack of visibility and security for the very assets a company is buying is startling, and it’s increasingly impeding the success of M&A deals. A 2016 survey of dealmakers found that about three in four M&A deals end up getting delayed—sometimes indefinitely—by data loss. Those that eventually get back on track often end up hobbled by missing data. Experts say this is a big part of the reason that 80 percent of M&As fail to achieve their potential or expected value.

M&A amps up the insider threat
Data loss is increasingly common in M&A for the same reason it’s increasingly common throughout the business world: More than half of all enterprise data now lives on endpoints, beyond traditional visibility and security tools centered on a network drive or central server. If the target company can’t see what its employees are doing with data on their laptops and desktops, then a potential buyer has near zero visibility. Couple that with the unique circumstances of an M&A deal and you’ve got a much higher risk of insider data theft. Laid-off employees freely take their endpoint data, sometimes for personal gain, other times just to sabotage their former employer. Those who do stick around tend to feel little loyalty toward their new company, lowering their inhibitions about selling or taking data for personal gain.

There’s a better way to protect IP during M&A deals
IP is what an acquiring company is buying—the info that is critical to the value and competitive advantage gained through a deal. To make the most of an M&A opportunity, buyers need a better way to collect, protect and secure all data living on a target company’s endpoints—before, during and after a deal. Fortunately, with the right tools, a buyer can gain complete visibility of all endpoint data, take control of valuable IP and drive a deal to its most successful outcome.

Don’t let data loss sink an M&A. Read our new white paper, Best Practices for Data Protection During Mergers and Acquisitions.

What You Need to Know About Changes to the STAR Program

May 9, 2017

By Debbie Zaller, CPA, CISSP, PCI QSA, Principal, Schellman & Co., LLC

The CSA recently announced that the STAR Program will now allow a one-time, first-year only, Type 1 STAR Attestation report. What is a Type 1 versus Type 2 examination and what are the benefits for starting with a Type 1 examination?

Type 1 versus Type 2
There are two types of System and Organization Control (SOC) 2 reports, Type 1 and Type 2. Both types of reports examine a service organization’s internal controls relating to one or more of the American Institute of CPAs’ (AICPA) Trust Services Principles and Criteria, as well as the Cloud Security Alliance’s (CSA) Cloud Controls Matrix (CCM). Both reports include an examination on the service organization’s description of its system.

A Type 1 report examines the suitability of the design of the service organization’s controls at a point in time, also referred to as the Review Date. A Type 2 report examines not only the suitability of the design of controls that meet the criteria but also the operating effectiveness of controls over a specific period of time, also referred to as the Review Period.

In a Type 2 examination, the auditor must perform more detailed testing, request more documentation from the organization, and spend more time than in a Type 1 examination. The additional documentation and testing requirements can put a greater strain on an organization and require more resources to complete the audit.

A service organization that has not been audited against the criteria in the past may find it easier to complete a Type 1 examination during the first audit: it requires less documentation and less preparation, and the organization can respond more quickly to gaps noted during the examination.

The cost of a Type 1 examination is less than that of a Type 2 because the testing effort is smaller. Additionally, fewer organizational resources are consumed, resulting in further cost savings.

If the service organization, or a specific service line or business unit, was recently implemented, the organization would have to not only put controls in place to meet the criteria but also ensure those controls have been operating for a certain period before a Type 2 examination can be completed. In that situation, there simply is not enough history for a service auditor to perform a Type 2 examination. A Type 1 examination allows for a quicker report rather than waiting out the review period a Type 2 requires.

Benefits of a Type 1
There are several benefits to starting with a Type 1 report, including:

  • Quicker report turnaround and faster listing on the STAR Registry
  • Shorter testing period
  • Cost efficiencies
  • Easier to apply to new environment or new service line

An organization might be trying to win a certain contract or respond to a client’s request for a STAR Attestation in a short period of time. A Type 1 examination does not require controls to be operating for a period of time prior to the examination. Therefore, the examination and resulting report can be provided sooner to the service organization.

Starting with a Type 1 report has many benefits for a first-year STAR Attestation. The organization will find this useful when moving to a Type 2 examination in the following year.

It is important to note, though, that a Type 1 should be considered only an intermediate, preparatory step toward achieving a Type 2 STAR Attestation.

Mind the Gap

May 5, 2017

By Matt Piercy, Vice President and General Manager EMEA, Zscaler

The sheer number of IT departments that fail to acknowledge the security gaps open to cyber-attackers is astonishing. The problem is that many in the industry believe they have their security posture under control, but they have not looked at the wider picture. The number of threats grows every day, and as new technologies and opportunities emerge, companies need new security infrastructure to cope with the shifting threat landscape. Meanwhile, C-level executives struggle to approve the budgets required to bring enterprise security up to the next level of protection. Businesses that do not keep up with the latest trends are left more vulnerable to data breaches as a consequence.

Executives are well advised to check whether their security shield covers the following points:
  1. More than 50% of all internet traffic is SSL-encrypted today. That may sound like progress for security, but it cuts both ways: modern cyber-attacks are easily hidden in SSL-encrypted traffic, and many companies do not inspect that traffic. One reason may be the performance limits of their existing security infrastructure, as SSL inspection needs high bandwidth and powerful engines. Regulatory concerns are another, as many companies have not yet worked out how to scan encrypted traffic in compliance with their local regulations (a simple bypass policy is sketched after this list). As a consequence, over 50% of all internet traffic goes uninspected for modern malware, and attackers are well aware of that.
  2. Mobile devices are another issue, with users potentially accessing corrupted websites or applications on devices that sit outside the company’s security umbrella. Because the mobile user is the weakest link in the security shield, there is a real danger that an infected mobile device logs on to the corporate network and lets the malware spread further. Even a device owned by the employer, if left unsecured, can expose sensitive customer and business data. What is surprising is that although mobile traffic accounts for more than half of all internet traffic, it is still not treated as an important part of the estate to secure. Modern security technologies exist that can monitor traffic on every device at every location the user visits. Organisations need to start implementing these technologies to close more gaps in their security shield.
  3. Office 365, for all of its success as a cloud application, also needs to be considered by security executives. Companies struggle to cope with the increased MPLS network traffic and bandwidth requirements that come with O365, so they may be tempted to break that traffic out directly to the internet, where it bounces freely between users, devices and clouds. To avoid devastating effects, organisations are well advised to modernise their security infrastructure so that all locations and branch offices get fast, secure access to the cloud and a great user experience.
  4. The incoming EU General Data Protection Regulation (GDPR) will require companies to secure Personally Identifiable Information (PII) more than ever before, or risk huge fines as well as subsequent reputational damage in the event of a data breach. Note that even UK companies will have to comply with the GDPR after Brexit if they process the personal data of European citizens. Companies will need to obtain valid consent for using personal data, hire a data protection officer (DPO), notify the local data protection watchdog when they are hit with a data breach and, perhaps most crucially, could be fined up to €20m or 4% of their annual turnover if they are breached. With so much to do, businesses need to do their homework to ensure they are compliant by May 2018.
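
To make point 1 concrete: one pragmatic way to reconcile SSL inspection with privacy regulation is a category-based bypass, where traffic to privacy-sensitive destinations stays sealed while everything else is decrypted and scanned. Below is a minimal Python sketch of such a policy decision; the category names, hostnames and lookup are hypothetical, not any vendor’s API.

    # Illustrative inline decision: category names, hostnames and the lookup
    # are assumptions for this sketch, not any vendor's API. Traffic to
    # privacy-sensitive categories is bypassed; everything else is inspected.
    EXEMPT_CATEGORIES = {"health", "banking", "government"}

    def should_inspect(hostname, category_of):
        return category_of(hostname) not in EXEMPT_CATEGORIES

    demo_categories = {"examplebank.com": "banking", "cdn.example.net": "cdn"}

    def lookup(host):
        return demo_categories.get(host, "uncategorised")

    print(should_inspect("examplebank.com", lookup))  # False: tunnel left sealed
    print(should_inspect("cdn.example.net", lookup))  # True: decrypt and scan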

Companies are setting off on their path toward digital transformation. They would do well to consider the security requirements that go along with it before they set off on that path.

How to Choose a Sandbox

April 24, 2017

Grab a shovel and start digging through the details

By Mathias Wilder, Area Director and General Manager/EMEA Central, Zscaler

Businesses have become painfully aware that conventional approaches, such as virus signature scanning and URL filtering, are no longer sufficient in the fight against cyberthreats. This is in part because malware is constantly changing, generating new signatures with a frequency that far outpaces the updates of signature detection systems. In addition, malware today tends to be targeted at specific sectors, companies, or even individual members of a management team, and such targeted attacks are difficult to spot. It has become necessary to use state-of-the-art technology based on behavioral analysis, also known as the sandbox. This blog examines how a sandbox can increase security and what to consider when choosing a sandbox solution.

The sandbox as a playground against malware
Zero-day ransomware and new malware strains are spreading at a frightening pace. Due to the dynamic nature of the attacks, it is no longer possible to develop a signature for each new variant. In addition, signatures tend to be available only after malware has reached a critical mass — in other words, after an outbreak has occurred. As malware changes its face all the time, the code is likely to change before a new signature for any given type of malware can be developed, and the game starts from scratch. How can we protect ourselves against such polymorphous threats?
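
To see why signatures lose this race, consider a minimal Python sketch of hash-based detection (the "signature database" is purely illustrative): changing a single byte of the payload yields a new hash, so every polymorphic variant starts from a clean slate.

    import hashlib

    def sha256_hex(data):
        return hashlib.sha256(data).hexdigest()

    # Toy "signature database" of known-bad file hashes.
    known_bad = {sha256_hex(b"malicious payload v1")}

    original = b"malicious payload v1"
    variant = b"malicious payload v2"   # trivially mutated, same behaviour

    print(sha256_hex(original) in known_bad)  # True: the known sample is caught
    print(sha256_hex(variant) in known_bad)   # False: the variant slips through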

There is another trend that should influence your decision about the level of protection you need: malware targeted at individuals. It is designed to work covertly, making smart use of social engineering mechanisms that are difficult to identify as fake. It only takes a moment for a targeted attack to drop its harmful payload, and the window between system infection and access to information is getting shorter all the time.

What is needed is a quick remedy that does not rely on signatures alone. Detecting today’s amorphous malicious code requires complex behavioral analysis, which in turn requires new security systems. The purpose of a sandbox is to analyse suspicious files in a protected environment before they can reach the user. The sandbox provides a safe space where the code can be run without doing any harm to the user’s system.

The right choice to improve security
Today’s market appears crowded with providers offering various solutions: some use virtualization technology (where an attack is triggered in what appears to be a virtual system), some simulate hardware (where the malware is presented with what looks like a real PC), and others map the entire network in the sandbox. However, malware developers have been hard at work, too. A well-coded package can recognize whether a person is sitting in front of the PC, detect that it is running in a virtual environment and alter its behavior accordingly, or undermine sandboxing by delaying activation of the malicious code until after infection. So what should companies look for when they want to enhance their security posture through behavioral analysis?

What to look for in a sandbox

  • The solution should cover all users and their devices, regardless of their location. Buyers should check whether mobile users are also covered by a solution.
  • The solution should work inline and not in a TAP mode. This is the only way one can identify threats and block them directly without having to create new rules through third-party devices such as firewalls.
  • First-file sandboxing is crucial to prevent an initial infection for which no detection pattern yet exists (a minimal sketch of this logic follows the list).
  • It should include a patient-zero identification capability to detect an infection affecting a single user.
  • Smart malware often hides behind SSL traffic, so a sandbox solution should be able to examine SSL traffic. With this capability, it is also important to look at performance, because SSL scanning drains a system’s resources. With respect to traditional appliances, a multitude of new hardware is often required to enable SSL scanning — up to eight times more hardware, depending on the manufacturer.
  • In the case of a cloud sandbox, it should comply with relevant laws and regulations, such as the Federal Data Protection Act in Germany. It is important to ensure that the sandboxing is done within the EU, ideally in Germany. The strict German data protection regulations also benefit customers from other EU countries.
  • A sandbox is not a universal remedy, so an intelligent solution should be able to work with other security modules. For example, it is important to be able to stop outbound traffic to a command-and-control (C&C) centre in the case of an infection, and it should be possible to identify and isolate the infected computer by tracing back the C&C communication.
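
As promised above, here is a minimal sketch of the first-file, hold-and-analyze logic. The detonate_in_sandbox function is a stand-in for a real behavioral-analysis engine; the caching pattern is the point.

    import hashlib

    verdict_cache = {}  # sha256 digest -> "clean" or "malicious"

    def detonate_in_sandbox(payload):
        """Stand-in for a real behavioral-analysis engine (assumption)."""
        return "malicious" if b"evil" in payload else "clean"

    def deliver(payload):
        """First-file sandboxing: hold a never-before-seen file until analyzed."""
        digest = hashlib.sha256(payload).hexdigest()
        if digest not in verdict_cache:        # first sight of this exact file
            verdict_cache[digest] = detonate_in_sandbox(payload)
        return verdict_cache[digest] == "clean"  # only clean files reach users

    print(deliver(b"quarterly report"))  # True: analyzed once, verdict cached
    print(deliver(b"evil dropper"))      # False: blocked before any infection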

Putting it all together
All these criteria can be covered by an efficient and highly integrated security platform, rather than individual hardware components (“point” appliances). One advantage of such a model is that you get almost instantly correlated logs from across the security modules on the platform without any manual interaction. If a sandbox is part of the platform, the interplay of various protection technologies through the automated correlation of data ensures faster and significantly higher protection. This is because it is no longer necessary to feed the SIEM system manually with logs from different manufacturers.
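
The value of that built-in correlation is easy to illustrate: when every module writes to one shared event stream, joining verdicts to user activity is a dictionary lookup rather than a SIEM integration project. A toy sketch, with an invented event schema:

    from collections import defaultdict

    # Toy correlation across modules of one platform; the event schema is invented.
    events = [
        {"module": "proxy", "user": "bob", "action": "download", "file": "a.docx"},
        {"module": "sandbox", "user": "bob", "file": "a.docx", "result": "malicious"},
        {"module": "proxy", "user": "carol", "action": "download", "file": "b.pdf"},
    ]

    by_user = defaultdict(list)
    for event in events:
        by_user[event["user"]].append(event)

    # One pass over the shared stream surfaces cross-module incidents.
    for user, user_events in by_user.items():
        if any(e.get("result") == "malicious" for e in user_events):
            print(f"ALERT: {user} touched a file later judged malicious")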

Platform models do not lose any information as they allow all security tools — such as proxy, URL filters, antivirus, APT protection, and other technologies — to communicate with one another. It eliminates the time-consuming evaluation of alerts, as the platform blocks unwanted data extraction automatically. A cloud-based sandbox together with a security platform is, therefore, an effective solution. It complements an existing security solution by adding behavioral analysis components to detect previously unknown malware and strengthens the overall security posture — without increasing operating costs.

Self-Driving Information Security

April 21, 2017

By Jim Reavis, Co-founder and CEO, Cloud Security Alliance

The prospect of autonomous self-driving vehicles becoming a pervasive presence on our roadways seems more likely every day. From the big automakers to Tesla to Google to Uber, a wide range of companies are investing a tremendous amount of money to create a world without carbon-based drivers. The motivation for a big payday abounds, but the hope is that this will be a huge boon to vehicle safety, and I believe it ultimately will be. As we have learned at hacker conferences, there are a lot of security concerns about self-driving cars that we need to solve, but that is not what I want to talk about here.

What I would like to do here is steal the term from the automotive industry and apply “Self-Driving” to Information Security. What is Self-Driving Information Security? For me, it is an initiative to apply the ever-growing power of computing to solve complex and fast-changing information security problems dynamically and without human intervention. Do I believe we can eliminate humans from the information security industry? No, I don’t believe that is possible or desirable, and it certainly would make BlackHat a lot less fun. However, I think we need to rapidly take steps to push the envelope on where we can take the person out of the loop, simply because we are not going to have enough humans to insert into every potential security problem space. In a world where we will soon have thousands of Internet-connected devices for every person on Earth, it is highly unlikely we will have enough information security professionals to solve all of the resultant problems.

Automation is a very old idea that is present in every industry. In information technology, we seek to automate every repetitive task we can. But like in other industries, the explosion in compute power is causing us to explore automating ever more sophisticated tasks. It is no longer just assembly line robots, but advances in computing are taking on white collar jobs and in many cases doing a great job. Computers are diagnosing diseases more accurately than doctors. Computers are doing journalism and even taking on the legal profession.

Are you a skeptic in regards to computer encroachment on sophisticated and complex professions? One of the most seminal moments in computing history that impacted me was the chess contest between Garry Kasparov and IBM Deep Blue. Personally, I was rooting for the human until the bitter end. When Deep Blue ultimately defeated the world’s greatest chessmaster, I was in mourning for days. That was 20 years ago.

To be clear, Self-Driving Information Security will not be bereft of humans. Humans are the biggest part of information security today by any measure – clearly by the budgetary metric. I think we will continue to grow the overall number of people employed in the profession for the foreseeable future. The unpredictability of information security and its adversarial, logic-defying nature will require humans. But Self-Driving Information Security will gobble up the jobs we are doing today, and I am not quite sure what jobs we will be doing in the future. What I do know is, if we do not implement Self-Driving Information Security, we are going to drown in information and incidents.

What are some of the building blocks of Self-Driving Information Security? They are actually many of the things we are working on today; they just need to gain maturity:

DevSecOps. This idea of merging DevOps with security operations, enabled by cloud, is gaining popularity with very diverse security teams. The ability to tear down and instantiate new computing systems, use “serverless” capabilities and apply some imagination is leading to automation of security processes that can seem like magic to an old security guy like me.

Autonomics. The ability for computers to be self-managing, self-healing, self-optimizing (self-EVERYTHING) is important. A big part of how the Internet works today is through levels of hierarchy and “command and control” systems. Clearly this model is going to break. I think about the apartment of the future with thousands of computers. Then I think about the bad guy who attacks the upstream link or servers, or malware injected into one of the apartment’s devices. In both cases, the nodes must not only be resilient and independent, but may need to collaborate and attack the infected device.

Blockchain. The distributed, immutable ledger technology that underpins Bitcoin is a favorite of VCs and the finance industry. I believe we are going to find a lot of applications for blockchain in information security. An authoritative, tamper-proof log of transactions, which can be either public or private, has fascinating implications: we can record any change in a very granular manner. Think about IT audit, where having a record of all security control implementations could really change how that job is done.
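
The core property, an append-only record in which altering history is detectable, fits in a few lines of Python. This is a single-writer simplification of the idea, not a distributed ledger:

    import hashlib, json, time

    def entry_hash(body):
        return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

    class AuditChain:
        """Append-only log in which each record commits to its predecessor."""

        def __init__(self):
            self.entries = []

        def append(self, event):
            prev = self.entries[-1]["hash"] if self.entries else "0" * 64
            body = {"ts": time.time(), "event": event, "prev": prev}
            self.entries.append({**body, "hash": entry_hash(body)})

        def verify(self):
            prev = "0" * 64
            for rec in self.entries:
                body = {"ts": rec["ts"], "event": rec["event"], "prev": rec["prev"]}
                if rec["prev"] != prev or rec["hash"] != entry_hash(body):
                    return False  # history was altered somewhere before here
                prev = rec["hash"]
            return True

    log = AuditChain()
    log.append("firewall rule 42 changed")
    log.append("admin password rotated")
    print(log.verify())                  # True
    log.entries[0]["event"] = "nothing"  # tamper with the oldest record
    print(log.verify())                  # False: tampering is detectable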

Analytics. Data science. The answers are in the data. If the data sets are large enough, the quality is good enough and the algorithms are well designed and speedy, we will find the security answers we are looking for. I believe our massive and inexpensive compute infrastructure is going to excel at finding the right answer to a new security problem.

Artificial Intelligence. AI is certainly controversial, even trying to define it can cause fights. Many are terrified by AI and its potential threat to mankind. Some security solutions claim to use AI, others say that the current products are really employing machine learning. Closely related to analytics, having access to quality data is going to enable AI to make security decisions and take action before a human can blink.

In addition to all of these areas of focus, it is safe to assume that computing is going to get faster, cheaper and bigger at an ever-increasing pace. Quantum computing may be years away, but there are already serious efforts in government and industry to make a massive leap in computational speed. It’s also safe to assume that the bad guys will want to harness or exploit all of these trends for themselves.

The building blocks above will soon be assembled together into Self-Driving Information Security. It will be quite necessary for this to happen to manage our rapidly increasing compute universe. The jobs we know today will go away. I am convinced new jobs will replace them in greater numbers, but it may be messy. The paradox of automation is that humans will operate in a world with more layers of complex technical abstraction. We aren’t as intimately involved, but when we are needed, it is for more critical reasons.

At Cloud Security Alliance, we think it is important to consider these trends now, true to our mantra of “solving tomorrow’s problems today.” That’s why we have research happening in all of these areas in 2017. As always, our research is your research, and we encourage you to join us.

There May Be a Shark Circling Your Data

April 17, 2017

By Jacob Serpa, Product Marketing Manager, Bitglass

In today’s business environment, cybersecurity remains a topic of great importance. As more companies migrate to the cloud, security concerns continue to evolve. While BYOD (bring your own device) affords employees more flexibility as they work from a multitude of devices, it also exposes data to nefarious parties in new ways. In the face of increasingly sophisticated cyber attacks, companies must learn and adapt or suffer the consequences.

In its latest cybersecurity report, “Threats Below the Surface,” Bitglass discusses the results of its survey of over 3,000 IT professionals. With the help of the CyberEdge Group and the Information Security Community, Bitglass was able to uncover the threats, priorities, and capabilities seen as most relevant by these professionals. The fact that the last year has seen 87% of organizations become victims of cyber attacks (and that a third of those organizations were hacked over five times) lends credence to cybersecurity concerns.

Despite the importance of maintaining visibility into data usage, relatively few firms are doing it well. While over 60% of companies monitor their desktops, laptops, and networks for security threats, the percentage drops to 36% for mobile devices and 24% for SaaS and IaaS applications. As organizations (inevitably) adopt BYOD and public cloud apps for increased productivity, they should proactively monitor for the corresponding security risks. However, when the survey respondents were asked about their firms’ current security postures, they indicated that they were primarily concerned about vulnerability with respect to mobile devices. Other prominent concerns included malware, privacy, and data leakage.

While most companies plan to increase their security budgets next year, they should already be taking steps to ensure their cybersecurity systems account for contemporary tools like the cloud and BYOD. In particular, firms should be utilizing end-to-end solutions that secure data on devices, in transit, and at rest in the cloud, while addressing concerns about topics like privacy.

More and more, conscientious companies are turning to CASBs (Cloud Access Security Brokers) and UEBA (user and entity behavior analytics) for modern-day cybersecurity. CASBs allow for discovering shadow IT apps, ensuring regulatory compliance in the cloud, preventing unwanted data disclosures, and more. With UEBA, a core component of CASBs, enterprises can detect account hijacking, data exfiltration, and other threats. CASBs and UEBA give companies a great deal of visibility and control over their data – a huge help in keeping an eye on the threats below the surface.

The Cure for Infectious Malware

April 10, 2017

By Chantelle Patel, Marketing Manager, Bitglass

Organizations have seen rapid growth in cloud adoption over the last few years, which in turn has introduced new threats and increased the risk of data leakage. Among the most prominent threats are malware and ransomware, long a problem on endpoints. With the advent of public cloud apps, interconnected and widely used, malware and ransomware have the potential to touch more data than ever before.

Unfortunately, despite the risk to data in the cloud, few providers offer any malware protection whatsoever. Those that do typically offer limited signature-based threat protection, built on solutions from IPS/IDS vendors, which can identify only known threats. The most dangerous threats are not these known pieces of malware, but the unknown, zero-day threats that can go undetected, resulting in weeks or months of data exfiltration unbeknownst to IT.

Some solutions offer threat protection that is reactive rather than proactive, and what little proactive protection they provide is ineffective when end-users need instant access to data in the cloud or expect instant upload of a file. This gets at a critical difference between traditional signature-based malware protection and next-generation AI-based protection. Traditional tools rely on dynamic analysis, executing a file in a sandbox before taking action. Next-generation tools from companies like Cylance leverage static analysis, basing a risk decision on hundreds of characteristics associated with a file.
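
A rough flavor of what static analysis inspects, reduced to a couple of toy features (real products compute hundreds of proprietary features and feed them to a trained classifier):

    import math
    from collections import Counter

    def shannon_entropy(data):
        """High entropy often indicates packed or encrypted content."""
        counts = Counter(data)
        total = len(data)
        return -sum(n / total * math.log2(n / total) for n in counts.values())

    def extract_features(data):
        """Static features, computed without ever executing the file."""
        return {
            "size": len(data),
            "entropy": shannon_entropy(data),
            "is_windows_exe": data[:2] == b"MZ",  # PE header magic bytes
        }

    sample = bytes(range(256)) * 4  # stand-in for real file contents
    print(extract_features(sample))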

Once malware makes its way into a cloud app, there’s little an organization can do to stop its spread. These malicious files are often downloaded to endpoints, make their way to connected apps, and are shared across the organization. The only way to protect against these threats is to prevent their spread.

With Advanced Threat Protection (ATP), a core component of any complete Cloud Access Security Broker (CASB) solution, organizations can protect the cloud from malware before it hits the app, assess the risk of any one file, and stop malicious attacks in their tracks.

Why You Need a CASB for GDPR Compliance

April 4, 2017

By Rich Campagna, Senior Vice President/Products & Marketing, Bitglass

With enforcement of the EU’s General Data Protection Regulation (GDPR) just over a year away in May 2018, your planning efforts should already be well underway. Adoption of cloud applications across the EU continues at a rapid clip, and the global nature of leading cloud applications means that protecting personal data and achieving data residency can be difficult.

With mandatory breach notifications and very steep fines (4% of annual revenues), the cost of non-compliance is high. On the other hand, it’s nearly impossible to stop the move to cloud in most organizations, so that’s not an option either. Fortunately, you still have time to arm your organization with the key to combining cloud adoption and GDPR compliance: cloud access security broker (CASB). Let’s take a look at some of the areas where a CASB can help:

  • Identifying personal data – the EU GDPR is primarily concerned with the protection of any data that can be used to identify a person (name, address, email, driver’s license number, and much, much more). The first thing you need to do in order to protect that data is identify where it is. CASBs can scan both data-in-transit and data-at-rest across a wide range of cloud-delivered apps (SaaS, IaaS, and custom applications). Any CASB you choose should have a library of pre-built identifiers that can scan for names, phone numbers, addresses, national identity and driver’s license numbers, health record information, bank account numbers, and more (a simplified scanning sketch follows this list).
  • Controlling the flow of personal data – Once you’ve identified where sensitive data resides, you want to control where it can go. CASBs include a range of policy options that allow you to do things like geofence personal information, control access from unmanaged/unprotected devices, control external sharing, and encrypt data upon download. All of these options can help mitigate the risk of non-compliance.
  • Maintaining data residency and sovereignty – Major cloud applications often have global architectures, which makes it difficult, if not impossible, to keep data within a given country or region. Fortunately, the GDPR allows for the use of encryption to meet its requirements if the cloud provider transfers data outside of the EU. Seek out a CASB that offers the killer app for GDPR, full-strength cloud encryption, across both unstructured (file) and structured (field) data.
    • Word of caution: some cloud application vendors offer their own “built-in” or “platform” encryption. With these schemes, the cloud provider has access to the keys and, therefore, the data as well. This is a GDPR gray area and may leave you, the data controller, on the hook for those hefty fines and mandatory notifications.
  • Monitor Risky Activity – A CASB can give you visibility into everything that’s happening with your users and your data across protected cloud applications. User behavior analytics and alerting capabilities let you know when risky activity is happening. This might mean reporting on indicators of breach, credential compromise, personal data access from outside the EU, and more. This critical visibility will allow you to identify and stop activities that might otherwise leave you staring down a fine of 4% of revenues (and a corresponding loss of your job).
  • Identify Shadow IT – simply put, GDPR and Shadow IT are a volatile and risky mix. There is no feasible way to get the necessary controls and visibility over applications that your organization has no ability to control. A CASB can give you much-needed visibility into Shadow IT applications and their corresponding risk, but your only option when faced with GDPR is to get out of the shadows: either sanction and protect shadow IT through a CASB, or block unsanctioned apps altogether.
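
As referenced above, here is a simplified Python sketch of how such identifier libraries work. The patterns are deliberately naive and hypothetical; production identifiers add country-specific formats, checksums, and contextual validation:

    import re

    # Simplified, hypothetical identifiers; a real CASB ships far richer ones.
    PATTERNS = {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "phone": re.compile(r"\+?\d[\d ()-]{7,}\d"),
        "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    }

    def find_personal_data(text):
        """Return every match per identifier type found in the text."""
        return {name: rx.findall(text) for name, rx in PATTERNS.items()}

    record = "Contact Jane at jane.doe@example.com or +44 20 7946 0958."
    print(find_personal_data(record))
    # {'email': ['jane.doe@example.com'], 'phone': ['+44 20 7946 0958'], 'iban': []}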

These CASB controls can really jumpstart a successful GDPR program across your organization, leaving you free to consider some of the many other GDPR-related controls and policies you’ll need to put in place over the next 12 months, including appointing a Data Protection Officer, figuring out how to implement “right to be forgotten,” and reevaluating licensing terms and data ownership across your many cloud application vendors.

CASB Is Eating the IDaaS Market

March 31, 2017

By Rich Campagna, Senior Vice President/Products & Marketing, Bitglass

In the past 6-9 months, I’ve noticed a trend amongst Bitglass customers: more and more of them are opting to use the identity capabilities built into our Cloud Access Security Broker (CASB) in lieu of a dedicated Identity as a Service (IDaaS) product. As CASB identity functionality has evolved, there is less need for a separate, standalone product in this space, and we are seeing the beginnings of CASBs eating the IDaaS market.

A few years back, Bitglass’ initial identity capabilities consisted solely of our SAML proxy, which ensures that even if a user goes directly to a cloud application from an unmanaged device on a public network, they are transparently redirected through Bitglass’ proxies, without agents!

From there, customer demand led us to build Active Directory synchronization for group and user management, authentication directly against AD, and native multifactor authentication. Next came SCIM support and the ability to provide SSO not only for sanctioned/protected cloud applications, but for any application.

So what’s left? If you look at Gartner’s Magic Quadrant for Identity and Access Management as a Service, Worldwide*, Greg Kreizmann and Neil Wynne break IDaaS capabilities into three categories:

  • Identity Governance and Administration – “At a minimum, the vendor’s service is able to automate synchronization (adds, changes and deletions) of identities held by the service or obtained from customers’ identity repositories to target applications and other repositories. The vendor also must provide a way for customers’ administrators to manage identities directly through an IDaaS administrative interface, and allow users to reset their passwords. In addition, vendors may offer deeper functionality, such as supporting identity life cycle processes, automated provisioning of accounts among heterogeneous systems, access requests (including self-service), and governance over user access to critical systems via workflows for policy enforcement, as well as for access certification processes. Additional capabilities may include role management and access certification.”
  • Access – “Access includes user authentication, single sign-on (SSO) and authorization enforcement. At a minimum, the vendor provides authentication and SSO to target applications using web proxies and federation standards. Vendors also may offer ways to vault and replay passwords to get to SSO when federation standards are not supported by the applications. Most vendors offer additional authentication methods — their own or through integration with third-party authentication products.”
  • Identity log monitoring and reporting – “The vendor logs IGA and access events, makes the log data available to customers for their own analysis, and provides customers with a reporting capability to answer the questions, ‘Who has been granted access to which target systems and when?’ and ‘Who has accessed those target systems and when?’”

Check, check, and check! Not only do leading CASBs offer these capabilities as part of their cloud data protection suites; in some cases, they go quite a bit further. Take logging and reporting, for example. An IDaaS product sees login and logout events, but nothing that happens during the session. CASBs can log and report on every single transaction: login, logout, and everything in between.

Another example is multifactor authentication. Whereas an IDaaS can trigger MFA at the beginning of a session due to a suspicious context, a CASB can trigger MFA at any time, such as mid-session if a user starts to exhibit risky behaviors.
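
A toy Python sketch of that mid-session step-up; the event names, weights, and threshold are invented for illustration, not any vendor’s actual policy engine:

    # Hypothetical risk scoring: event names, weights and the 0.7 threshold
    # are illustrative assumptions only.
    RISK_WEIGHTS = {"new_device": 0.3, "bulk_download": 0.5, "impossible_travel": 0.8}

    def session_risk(events):
        return min(1.0, sum(RISK_WEIGHTS.get(e, 0.0) for e in events))

    def trigger_mfa(user):
        print(f"step-up MFA challenge sent to {user}")
        return True

    def handle_event(session, event):
        session["events"].append(event)
        if session_risk(session["events"]) >= 0.7 and not session["mfa_done"]:
            session["mfa_done"] = trigger_mfa(session["user"])

    session = {"user": "alice", "events": [], "mfa_done": False}
    handle_event(session, "new_device")     # risk 0.3: session continues
    handle_event(session, "bulk_download")  # risk 0.8: challenged mid-session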

Since these capabilities have evolved as part of CASBs, which offer comprehensive data protection capabilities for cloud applications, I expect that 2017 will be a year with a lot more enterprises considering CASB platforms for both cloud identity and cloud data protection.

*Magic Quadrant for Identity and Access Management as a Service, Worldwide, Greg Kreizmann and Neil Wynne, 06 June 2016

Brexit or Bust: What Does It Mean for Data?

March 23, 2017

By Nic Scott, Managing Director/UK, Code 42

What’s the latest on Brexit? When the UK government triggers Article 50, it will signal the start of the official two-year countdown until the UK leaves the European Union. According to UK Prime Minister Theresa May, this is still on track to happen at some point in March.

While there are still many unknowns in regards to geopolitical policies and legislation that will be created, annulled, or abolished post-Brexit, the UK government has given away one handy hint when it comes to the now-infamous General Data Protection Regulation (GDPR).

Post-Brexit, the UK will mirror the data protection regulations that exist in Europe.

This means that from May 2018, while the UK is still an EU member, the GDPR will apply to UK businesses. And even once the UK exits the EU in 2019, an identical version of the GDPR will still be enforced.

Needless to say, this isn’t good news for UK organizations that have been burying their heads in the sand, hoping that this pesky EU legislation will just go away post-Brexit. Unfortunately, these rules aren’t going anywhere. It’s time for companies to wake up to the consequences of data negligence under the GDPR. This isn’t just infosecurity providers scaremongering for sales, and it’s not a ‘potential’ occurrence like the Y2K bug; this is actually happening.

Get your ducks in a row, or get fined
Should a sensitive data breach occur under the GDPR, the European Data Protection Board (or, likely, the Information Commissioner’s Office post-Brexit) will evaluate whether the affected company has been negligent in its data protection operations and determine the compensation the company must pay affected parties, which can reach €20m or a fine of up to four percent of its global turnover, whichever is greater. Not a pretty thought for the C-suite, which by nature is tasked with mitigating risk.
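
The arithmetic behind those figures is worth spelling out, since the two caps interact: under the GDPR’s upper tier, the applicable maximum is the greater of the two numbers. A one-function illustration in Python:

    def gdpr_max_fine(annual_turnover_eur):
        """Upper-tier GDPR fine: the greater of EUR 20m or 4% of global turnover."""
        return max(20_000_000, 0.04 * annual_turnover_eur)

    print(f"{gdpr_max_fine(300_000_000):,.0f} EUR")    # 20,000,000 (4% is only 12m)
    print(f"{gdpr_max_fine(2_000_000_000):,.0f} EUR")  # 80,000,000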

Concerningly, Code42’s 2016 Datastrophe Study, which surveyed over 400 UK IT decision makers (ITDMs), found that 50 percent of them acknowledged that the security measures they currently have in place will not be enough to meet GDPR standards.

How to become compliant
The first step is for an organization to know what kind of data falls under GDPR protection, where it is stored, and how long it should be kept. Beyond that, it must determine the best way to secure that data, the extent to which it should be backed up, and how to prevent leaks from inside the company. Simple, right?

The implementation of the right endpoint security stack is vital: one that takes into consideration first-line defenses, such as intrusion detection systems and antivirus solutions, right down to last-line defenses that make it easy to remediate and recover should a breach occur. The right solution is an important advantage given the number of people and devices accessing potentially sensitive corporate information.

Also, enterprises should create internal policies that promote accessibility and flexibility with approved solutions, without locking the enterprise down to the point of stifling productivity. Employees play a big role regarding the sanctity of corporate information. That is why it is vital to train and educate your staff about possible intrusions, how they can secure data themselves, and how to avoid being tricked into leaking sensitive information.

Taking these precautions will allow an organization to gain control of its own information and ensure that the CIO’s overall focus is on increasing profit and expanding technological reach, rather than worrying about the safety of the zeroes and ones.