Guidance for Critical Areas of Focus in Cloud Computing Has Been Updated

Newest version reflects real-world security practices, future of cloud computing security

By J.R. Santos, Executive Vice President of Research, Cloud Security Alliance

Today marks a momentous day not only for CSA but for all IT and information security professionals as we release Guidance for Critical Areas of Focus in Cloud Computing 4.0, the first major update to the Guidance since 2011.

As anyone involved in cloud security knows, the landscape we face today is a far cry from what it was 10, or even five, years ago. To keep pace with those changes, almost every aspect of the Guidance was reworked. In fact, almost 80 percent of it was rewritten from the ground up, and domains were restructured to better reflect the current state of cloud computing, as well as the direction in which this critical sector is heading.

For those unfamiliar with what is widely considered to be the definitive guide for cloud security, the Guidance acts as a practical, actionable roadmap for individuals and organizations looking to safely and securely adopt the cloud paradigm. This newest version includes significant content updates to address leading-edge cloud security practices and incorporates more of the various applications used in the security environment today.

Guidance 4.0 covers such topics as:

  • DevOps, continuous delivery, and secure software development;
  • Software Defined Networks, the Software Defined Perimeter, and cloud network security;
  • Microservices and containers;
  • New regulatory guidance and evolving roles of audits and compliance inheritance;
  • Using CSA tools such as the CCM, CAIQ, and STAR Registry to inform cloud risk decisions;
  • Securing the cloud management plane;
  • More practical guidance for hybrid cloud;
  • Compute security guidance for containers and serverless, plus updates to managing virtual machine security; and
  • The use of immutable, serverless, and “new” cloud architectures.

Today is the culmination of more than a year of input and review from the CSA and information security communities. Guidance 4.0 was drafted using an open research model (a herculean effort for those unfamiliar with the process), and none of it would have been possible without the assistance of Securosis, whose research analysts oversaw the project. We owe them—and everyone involved—a tremendous thanks.

You can learn more about the Guidance and read the updated version here.

Patch Me If You Can

By Yogi Chandiramani, Technical Director/EMEA, Zscaler

In May, the worldwide WannaCry attack infected more than 200,000 workstations. A month later, just as organizations were regaining their footing, we saw another ransomware attack, which impacted businesses in more than 65 countries.

What have we learned about these attacks?

  • Compromises/infections can happen no matter what types of controls you implement – zero risk does not exist
  • The security research community collaborated to identify indicators of compromise (IOCs) and provide steps for mitigation
  • Organizations with an incident response plan were more effective at mitigating risk
  • Enterprises with a patching strategy and process were better protected

Patching effectively
Two months before the attack, Microsoft released a patch for the vulnerability that WannaCry exploited, but many systems never received it. Because WannaCry was so widely publicized, the patching debate reached companies’ board-level leadership, garnering the sponsorship needed for a companywide patch strategy.

Even so, the June 27 attack spread laterally using the SMB protocol a full month after WannaCry, by which time most systems should have been patched. Does the success of this campaign reflect a disregard for the threat? A lack of urgency when it comes to patching? Or does the problem come down to the sheer volume of patches?

Too many security patches
As we deploy more software and more devices to drive productivity and improve business outcomes, we create new vulnerabilities. Staying ahead of them is daunting, with the need to continually update security systems, and patch end-user devices running different operating systems and software versions. Along with patch and version management, there is change control, outage windows, documentation processes, post-patch support, and more. And it’s only getting worse.

The following graph illustrates the severity of disclosed vulnerabilities over time. Halfway through 2017, the number of disclosed vulnerabilities is already close to the total for all of 2016.

source: National Vulnerability Database, a part of the National Institute of Standards and Technology (NIST). (https://nvd.nist.gov/vuln-metrics/visualizations/cvss-severity-distribution-over-time)

The challenge for companies is the sheer number of patches that need to be processed to remain fully up to date (a volume that continues to increase). Technically speaking, systems will always be one step behind in terms of vulnerability patching.

Companies must become aware of security gaps
In light of the recent large-scale attacks, companies should revisit their patching strategy as a part of their fundamental security posture. Where are the gaps? The only way to know is through global visibility — for example, visibility into vulnerable clients or identifying botnet traffic — which provides key insights in terms of where to start and focus.

Your security platform’s access logs are a gold mine, providing data as well as context, with information such as who, when, where, and how traffic is flowing through the network. The following screen capture is a sample log showing a botnet callback attempt. With this information, you can see where to focus your attention and your security investments.

In the following example, you can identify potentially vulnerable browsers or plugins. It’s important to ensure that your update strategies include these potential entry points for malware, as well.

These are but two examples of potential gaps that can be easily closed with the appropriate insight into what software and versions are being used within an organization. As a next step, companies should focus first on patching the gaps that carry the highest known risk.
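To make that risk-based starting point concrete, here is a minimal sketch in Python that ranks known, unpatched findings by severity and exposure. The inventory, CVE identifiers, and weighting below are illustrative assumptions, not the output of any particular scanner:

```python
# Illustrative sketch: rank known, unpatched findings by risk so the
# highest-impact gaps are addressed first. The inventory is a placeholder
# for whatever your vulnerability scanner or asset system actually exports.
findings = [
    {"host": "branch-pc-17", "cve": "CVE-2017-0144", "cvss": 8.1, "internet_facing": False},
    {"host": "web-proxy-01", "cve": "CVE-2017-5638", "cvss": 10.0, "internet_facing": True},
    {"host": "hr-laptop-03", "cve": "CVE-2016-0189", "cvss": 7.5, "internet_facing": True},
]

def risk_score(finding):
    # Weight internet-facing systems more heavily; tune to your environment.
    exposure_multiplier = 1.5 if finding["internet_facing"] else 1.0
    return finding["cvss"] * exposure_multiplier

for f in sorted(findings, key=risk_score, reverse=True):
    print(f'{f["host"]:<15} {f["cve"]:<15} risk={risk_score(f):.1f}')
```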

But patching remains an onerous, largely manual task that is difficult to manage. A better alternative is a cloud-delivered security-as-a-service solution, which automates updates and the patching process. With threat actors becoming increasingly inventive as they design their next exploits, it pays to have a forward-thinking strategy that reduces the administrative overhead, improves visibility, and delivers protections that are always up to date.

 

Cyberattacks Are Here: Security Lessons from Jon Snow, White Walkers & Others from Game of Thrones

An analysis of Game of Thrones characters as cyber threats to your enterprise.

By Virginia Satrom, Senior Public Relations Specialist, Forcepoint

As most of you have probably seen, we recently announced our new human point brand campaign. Put simply, we are leading the way in making security not just a technology issue, but a human-centric one. In light of this, I thought it would be fun to personify threats to the enterprise with one of my favorite shows – Game of Thrones. Surprisingly, there are a lot of lessons that can be learned from GoT in the context of security.

Before we start, I’d like to provide a few disclaimers:

  • This is meant to be tongue-in-cheek, not literal, so take off your troll hat for the sake of some interesting analogies.
  • This is not comprehensive. Honestly, I could have written another 5,000 words around ALL the characters that could be related to threats.
  • This is based on the Game of Thrones television series, not the books.
  • And finally, spoilers, people. There are spoilers if you are not fully caught up through Season 6. You’ve been warned 🙂

Now, let’s dive in, lords and ladies…

What makes this Game of Thrones analysis so interesting is that these characters, depending on external forces, can change drastically from season to season. Therefore, our favorite character could represent a myriad of threats during a given season or the series overall. This concept relates to what we call ‘The Cyber Continuum of Intent’ which places insiders in your organization on a continuum which can move fluidly from accidental to malicious given their intent and motivations. There are also many instances where a character is a personification of a cyber threat or attack method.

Let’s start with one of the most devious characters – Petyr Baelish, aka Littlefinger. Littlefinger is a good example of an advanced evasion technique (AET), which maneuvers through your network delivering an exploit or malicious content into a vulnerable target while making the traffic look normal so that security devices pass it through. As Master of Coin and a wealthy business owner, he operates in the innermost circle of King’s Landing while secretly undermining those close to him to raise his standing within Westeros. He succeeds, in fact, by marrying Lady Tully to ultimately become the Protector of the Vale, with great influence over its heir – Robin Arryn of the Vale. Looking at his character from another angle, Littlefinger could also be considered a privileged user within a global government organization or enterprise. He is trusted by Ned Stark with Ned’s plans to expose the Lannisters’ lineage and other misdoings, but he ultimately uses that information and knowledge for personal gain – causing Ned’s demise. And let’s not forget that Littlefinger also betrays Sansa Stark’s confidence and trust, marrying her to Ramsay Snow.

Varys and his ‘little birds’ equate to bots, and collectively, a botnet. Botnets are connected devices in a given network that can be controlled via an owner with command and control software. Of course, Varys (aptly also known as the Spider) commands and controls his little birds through his power, influence and also money. When it comes to security, botnets are used to penetrate a given organization’s systems – often through DDoS attacks, sending spam, and so forth. This example is similar to Turkish hackers who actually gamified DDoS attacks, offering money and rewards to carry out cybercrime.

Theon Greyjoy begins the series as a loyal ward to Eddard Stark and friend to Robb and Jon, but through his own greed and hunger for power becomes a true malicious insider. He also is motivated by loyalty to his family and home that he has so long been away from. He overtook The North with his fellow Ironborns, fundamentally betraying the Starks.

Theon Greyjoy and Ramsay Bolton (formerly Snow) are no strangers to one another, and play out a horrific captor/captive scenario through Seasons 4 and 5. Ramsay is similar to ransomware, which usually coerces its victims into paying a ransom through fear. In the enterprise, this means a ransom is demanded in Bitcoin for the return of business-critical data or IP. Additionally, Ramsay holds Rickon Stark hostage in Season 6. He agrees to return Rickon to Jon Snow and Sansa Stark, but has his men kill Rickon right as the siblings reunite. The same is often true of ransomware that infiltrates the enterprise – even when the ransom is paid, data is frequently not returned.

Gregor Clegane, also known as The Mountain, uses sheer brute force to cause mayhem within Westeros, which would be similar to brute force cracking. This is a trial and error method used to decode encrypted data, through exhaustive effort. The Mountain is used for his strength and training as a combat warrior, defeating a knight in a duel in Season 1, and in Season 4 defeating Prince Oberyn Martell in trial by combat – in a most brutal way. He could also be compared to a nation state hacker, with fierce loyalty to the crown — particularly the Lannister family. He is also a reminder that physical security can be as important as virtual for enterprises.

Depending on the season or the episode this can fluctuate, but 99 percent of the time I think we can agree that Cersei Lannister is a good example of a malicious insider, and more specifically a rogue insider. She is keen to keep her family in power and will do whatever it takes to maintain control over their destiny. My favorite part about Cersei is that, though she is extremely easy to loathe, throughout the entire series it is clear she loves her children and would do anything for them. After the last of her children dies, she quickly evolves from grief to rage. As the adage says, sad people harm themselves but mad people harm others. Cersei can be likened to a disgruntled employee who is facing challenges inside or outside the workplace and intends to steal critical data with malicious intent.

If we take a look at Seasons 4 and 5, and the fall of Jon Snow, many of the Night’s Watch members are good examples of insiders. Olly, for example, starts out as a loyal brother among the Night’s Watch. If he happened to leak any intel that could harm Jon Snow’s leadership or well-being, it would have been accidental. This could be compared to an employee within an organization who is doing their best, but accidentally clicks on a malicious link. However, as Snow builds his relationships with the wildlings, Olly cannot help but foster disdain and distrust toward Snow for allying with the people that harmed his family. Conversely, Alliser Thorne was always on the malicious side of the continuum, having it out for Snow especially after losing the election to be the 998th Lord Commander of the Night’s Watch. Ultimately, Thorne’s rallying of the Night’s Watch to his side led to Snow’s demise (even if it was only temporary).

Sons of the Harpy mirror a hacktivist group fighting the rule of Daenerys Targaryen over Meereen. They wreak havoc on Daenerys’s Unsullied elite soldiers and are backed by the leaders who Daenerys overthrew – the ‘Masters’ of Meereen – in the name of restoring the ‘tradition’ of slavery in their city. They seek to overthrow Daenerys and use any means necessary to ensure there is turmoil and anarchy. Hacktivists are often politically motivated. If the hacktivist group is successful, it can take the form of a compromised user on the Continuum – through impersonation. After all, the most pervasive malware acts much like a human being.

Let’s not forget about the adversaries that live beyond The Wall – The White Walkers. The White Walkers represent a group of malicious actors seeking to cause harm in the Seven Kingdoms, or for this analogy, your network. What is interesting about these White Walkers is that they are a threat that has been viewed as a legend or folklore except for those that have actually seen them. However, we know that this season they become very real. Secondly, what makes the White Walkers so remarkable is that we do not know their intentions or motivations, they cannot be understood like most of these characters seeking power or revenge. I argue that this makes them the most dangerous and hardest threat to predict. And lastly, if we think about how the White Walkers came to be, we know that they were initially created to help defend the Children of the Forest against the First Men. But, we now know that they have grown exponentially in number and begun to take on a life (pun intended) of their own. This is equated to the use of AI in the technology space which some fear will overtake us humans.

In my mind The Wall itself could be considered a character, and therefore a firewall of sorts. Its purpose is to keep infiltration out; however, as we learned at the end of Season 6, this wall is penetrable. This leads me to the main takeaway – enterprises and agencies face a myriad of threats and should not rely on traditional perimeter defenses, but have multi-layered security solutions in place.

With all of these parallels, it becomes clear that people are the one constant, and the true source of complexity, in security. It is known that enterprises must have people-centric, intelligent solutions to combat the greatest threats, like those faced in Westeros.

CSA Industry Blog Listed Among 100 Top Information Security Blogs for Data Security

Our blog was recently ranked 35th among 100 top information security blogs for data security professionals by Feedspot. Among the other blogs named to the list were The Hacker News, Krebs on Security and Dark Reading. Needless to say, we’re honored to be in such good company.

To be listed, each blog was assessed by Feedspot’s editorial team and expert reviewers on the following criteria:

• Google reputation and Google search ranking;
• Influence and popularity on Facebook, Twitter and other social media sites; and
• Quality and consistency of posts.

We strive to offer our readers a broad range of informative content that provides not only varying points of view but also information you can use as a jumping-off point to enhance your organization’s cloud security.

We’re glad to be in such great company and hope that you’ll take the time to visit our blog. We invite you to sign up to receive it and other CSA announcements. We think you’ll like what you see.

Locking-in the Cloud: Seven Best Practices for AWS

By Sekhar Sarukkai, Co-founder and Chief Scientist, Skyhigh Networks

With the voter information of 198 million Americans exposed to the public, the Deep Root Analytics leak brought cloud security to the forefront. The voter data was stored in an AWS S3 bucket with minimal protection. In fact, the only level of security that separated the data from being outright published online was a simple six-character Amazon sub-domain. Simply put, Deep Root Analytics wasn’t following some of the most basic AWS security best practices.

More importantly, this leak demonstrated how essential cloud security has become to preventing data leaks. Even though AWS is the most popular IaaS platform, its security, especially on the customer end, is frequently neglected. This leaves sensitive data vulnerable to both internal and external threats. External threats are regularly covered in the news, from malware to DDoS attacks. Yet the Deep Root Analytics leak proves that insider threats can be dangerous, even if they are based on negligence rather than malicious intent.

Amazon has already addressed the issue of outside threats through its numerous security investments and innovations, such as AWS Shield for DDoS attacks. Despite extensive safety precautions, well-organized and persistent hackers could still break through Amazon’s defenses. However, Amazon cannot be blamed for most AWS security breaches, as it is estimated that, by 2020, 95 percent of cloud security breaches will be the customer’s fault.

This is because AWS is based on a system of cooperation between Amazon and its customers. This system, known as the shared responsibility model, operates on the assumption that Amazon is responsible for safeguarding and monitoring the AWS infrastructure and responding to fraud and abuse. On the other hand, customers are responsible for the security “in” the cloud. Specifically, they are in charge of configuring and managing the services themselves, as well as installing updates and security patches.

AWS Best Practices

The following best practices serve as a baseline for securely configuring AWS.

  1. Activate CloudTrail log file validation:

CloudTrail log validation ensures that any changes made to a log file can be identified after they have been delivered to the S3 bucket. This is an important step towards securing AWS because it provides an additional layer of security for S3, something that could have prevented the Deep Root Analytics leak.
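As a rough sketch of what enabling validation can look like with the AWS SDK for Python (boto3), the following checks every trail in the account and turns validation on where it is missing. It assumes boto3 credentials with CloudTrail permissions are already configured:

```python
# Sketch: ensure log file validation is enabled on every CloudTrail trail.
# Assumes AWS credentials are configured for boto3. Note that update_trail
# must be called in a trail's home region; this simplified loop ignores that.
import boto3

cloudtrail = boto3.client("cloudtrail")

for trail in cloudtrail.describe_trails()["trailList"]:
    if not trail.get("LogFileValidationEnabled", False):
        print(f"Enabling log file validation on {trail['Name']}")
        cloudtrail.update_trail(Name=trail["Name"], EnableLogFileValidation=True)
```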

  2. Turn on access logging for CloudTrail S3 buckets:

Log data captured by CloudTrail is stored in the CloudTrail S3 buckets, which can be useful for activity monitoring and forensic investigations. With access logging turned on, customers can identify unauthorized or unwarranted access attempts, as well as track these access requests, improving the security of AWS.
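A minimal sketch of enabling S3 server access logging on the bucket that holds CloudTrail logs might look like the following; the bucket names are hypothetical placeholders, and the target bucket must already permit log delivery:

```python
# Sketch: turn on S3 server access logging for the bucket that stores
# CloudTrail logs. Bucket names are placeholders; the target bucket must
# already grant the S3 log delivery service permission to write to it.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_logging(
    Bucket="my-cloudtrail-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "my-s3-access-log-bucket",
            "TargetPrefix": "cloudtrail-bucket-access/",
        }
    },
)
```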

  3. Use multifactor authentication:

Multifactor authentication (MFA) should be activated when logging into both root and Identity and Access Management (IAM) user accounts. For the root user, the MFA should be tied to a dedicated device and not any one user’s personal device. This would ensure that the root account is accessible even if the user’s personal device is lost or if that user leaves the company. Lastly, MFA needs to be required for deleting CloudTrail logs, as hackers are able to avoid detection for longer by deleting S3 buckets containing CloudTrail logs.
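The following read-only sketch (boto3, with suitable IAM permissions assumed) illustrates one way to verify that root MFA is enabled and to spot IAM users without an MFA device; pagination is omitted for brevity:

```python
# Sketch: verify that the root account has MFA enabled and list IAM users
# with no MFA device attached. Read-only; pagination omitted for brevity.
import boto3

iam = boto3.client("iam")

summary = iam.get_account_summary()["SummaryMap"]
print("Root account MFA enabled:", bool(summary.get("AccountMFAEnabled")))

for user in iam.list_users()["Users"]:
    devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
    if not devices:
        print("No MFA device for user:", user["UserName"])
```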

  4. Rotate IAM access keys regularly:

When sending requests between the AWS Command Line Interface (CLI) and the AWS APIs, an access key is needed. Rotating this access key after a standardized, selected number of days decreases the risk of both external and internal threats. This additional layer of security ensures that data cannot be accessed with a lost or stolen key once that key has been rotated out.
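As an illustration, a small boto3 script like the one below can flag keys that have exceeded a chosen rotation threshold. The 90-day threshold and the reporting-only behavior are assumptions; actual rotation (create a new key, update applications, deactivate and delete the old one) should follow your own change-control process:

```python
# Sketch: flag active IAM access keys older than a chosen rotation threshold.
from datetime import datetime, timedelta, timezone

import boto3

MAX_KEY_AGE = timedelta(days=90)  # illustrative threshold
iam = boto3.client("iam")

for user in iam.list_users()["Users"]:
    for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
        age = datetime.now(timezone.utc) - key["CreateDate"]
        if key["Status"] == "Active" and age > MAX_KEY_AGE:
            print(f"Key {key['AccessKeyId']} for {user['UserName']} is {age.days} days old; rotate it")
```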

  5. Minimize number of discrete security groups:

Account compromise can come from a variety of sources, one of which is misconfiguration of a security group. By minimizing the number of discrete security groups, enterprises can reduce the risk of misconfiguring an account.

  6. Terminate unused access keys:

AWS users must terminate unused access keys, as access keys can be an effective method for compromising an account. For example, if someone leaves the company and still has access to a key, that person would be able to use it until its termination. Similarly, if old access keys are deleted, external threats only have a brief window of opportunity. It is recommended that access keys left unused for 30 days be terminated.
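One way to surface such candidates is sketched below using boto3; the 30-day threshold follows the recommendation above, and deactivating (update_access_key) or deleting (delete_access_key) keys is left as a manual, confirmed step:

```python
# Sketch: report access keys that have not been used for 30 or more days.
# Keys that have never been used fall back to their creation date.
from datetime import datetime, timedelta, timezone

import boto3

UNUSED_FOR = timedelta(days=30)
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for user in iam.list_users()["Users"]:
    for key in iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]:
        last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        last_date = last_used["AccessKeyLastUsed"].get("LastUsedDate", key["CreateDate"])
        if now - last_date > UNUSED_FOR:
            print(f"Candidate for termination: {key['AccessKeyId']} ({user['UserName']})")
```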

  7. Restrict access to CloudTrail bucket:

No user or administrator account should have unrestricted access to CloudTrail logs; even accounts whose owners have no malicious intent are susceptible to phishing attacks. As a result, access to the CloudTrail logs needs to be restricted to limit the risk of unauthorized access.
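As one illustrative check (not a complete review), the following boto3 snippet looks for obviously over-permissive ACL grants on a hypothetical CloudTrail bucket; the bucket policy and any IAM policies that reference the bucket should be reviewed as well:

```python
# Sketch: flag ACL grants on the CloudTrail bucket that expose it to
# everyone or to all authenticated AWS users. Bucket name is a placeholder.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")
acl = s3.get_bucket_acl(Bucket="my-cloudtrail-bucket")

for grant in acl["Grants"]:
    grantee = grant["Grantee"]
    if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
        print("Over-permissive grant:", grant["Permission"], "to", grantee["URI"])
```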

These best practices for the AWS infrastructure could go a long way in securing your sensitive information. By applying even a few of them to your AWS configuration, sensitive information could remain secure, and another Deep Root Analytics leak could be prevented in the future.

Clouding Within the Lines: Keeping User Data Where It Belongs in the Age of GDPR

By Nathan Narayanan, Director of Product Management, Netskope

Data residency hygiene has been important for a long time, but the cloud services that keep appearing tend to focus more on user productivity and less on user data privacy. The highly productive nature of these services drives their adoption, resulting in higher risk to the privacy of data.

According to Gartner, by May 25, 2018 (the day the EU’s General Data Protection Regulation, or GDPR, takes effect), less than 50 percent of all organizations will be fully compliant with it. It’s time to take steps to keep up.

Here are some things to consider.

Identify important data. Enforcing a very broad policy on all types of content can be too restrictive and may hinder productivity. Enterprises will need to identify the critical data that must be controlled within geo-boundaries. This may be data subject to regulatory mandates, such as health records and personally identifiable information, as well as company-confidential data. All other content that does not fall under these constraints need not be controlled within the geo-boundaries.

Determine your geo-boundary and monitor movement of your data. According to the Netskope Cloud Report, 40.7 percent of cloud services replicate data in geographically dispersed data centers. With this in mind, to keep your important data where it belongs, you also need to determine the boundaries within which it should reside. In some cases, PII may be required to stay within a region such as the EU; in other cases, it may be required to stay within the narrower bounds of a country such as Germany. A CASB can perform content inspection to identify important data as well as report on the movement of such data. Controlling data that travels beyond the geo-boundaries requires the CASB solution to map IP addresses to geographical locations and proactively apply policies to keep the data where it should reside.
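As a purely illustrative sketch of that idea (real CASB policies operate on much richer signals), the following flags transfers of PII to destinations outside a defined boundary; the records, services, and country codes are hypothetical:

```python
# Purely illustrative: flag transfers of sensitive data that leave a defined
# geo-boundary. A real CASB evaluates far richer context (user, device,
# activity, content); these log records are made up for the example.
ALLOWED_COUNTRIES = {"DE", "FR", "NL", "IE"}  # example: an EU-only boundary

transfers = [
    {"user": "a.schmidt", "service": "crm-app", "data_class": "PII", "dest_country": "DE"},
    {"user": "j.doe", "service": "file-share", "data_class": "PII", "dest_country": "US"},
    {"user": "m.rossi", "service": "file-share", "data_class": "public", "dest_country": "US"},
]

for t in transfers:
    if t["data_class"] == "PII" and t["dest_country"] not in ALLOWED_COUNTRIES:
        print(f"Policy violation: {t['user']} moved PII to {t['dest_country']} via {t['service']}")
```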

Ensure cloud services enforce geo-control. Get visibility into the cloud services used by your organization and understand how ready these applications are for enterprise use. A CASB can also allow you to rate cloud services from a GDPR-readiness standpoint. This rating is usually based on research on the cloud service and considers factors such as SLAs around data residency, the level of encryption of the content processed, and the terms of the agreement between the enterprise and the cloud service. For example, applications that take ownership of the user data will be rated poorly for GDPR readiness. Since 66.9 percent of cloud services do not specify in their terms of service whether you or they own your data, finding out this information might take longer than you think.

Build policies to ensure data is within its geo-boundaries. No matter how ready the cloud services are, there may be a legitimate need to move data outside the region for business reasons. Also, sometimes employees may inadvertently move data outside its geo-boundaries. There are several steps you can take to proactively enforce geo-control in these situations. A CASB solution can help with enforcing a policy so that data is encrypted if moved outside the geo-boundaries for legitimate reasons. In all other cases, enforce policies to simply stop data from leaving the geo-boundary.

Remember employees will often travel outside the region and will need access to sensitive data so that they can continue to be productive. Ensure policies for such employees continue to respect data residency. It may be easier to simply block traffic to or from certain countries based on how your business is conducted.

Build a process for tighter geo-control. Employees play a big part in data residency hygiene. Reduce risk by educating users on a periodic basis. A CASB solution can be set up to coach the employee at the time a risky data transfer is conducted. Coaching can also be used to discourage applications that are not ready for geo-control. It is also important to continually monitor and sharpen your policies as you learn how your sensitive data travels.

Want to learn more about GDPR and the cloud? Download Managing the Challenges of the Cloud Under the New EU General Data Protection Regulation white paper.

Crank Up Your Cloud Security Knowledge with These Upcoming Webinars

By Hillary Barron, Research Analyst and CloudBytes Program Manager, Cloud Security Alliance

Whether you’re trying to make the move to cloud while managing an outdated endpoint backup, attempting to figure out how to overcome the challenges of developing and deploying security automation, or determining how and why you should build an insider threat program, CSA has a webinar that can answer your questions and help set you on the right path.

June 13: 4 Lessons IT Pros Have Learned From Managing Outdated Endpoint Backup (Presentation by Aimee Simpson of Code42, Shawn Donovan of F5 Networks, and Kurt Levitan of Harvard University)

In this session, you’ll hear from IT professionals at F5 Networks and Harvard University, as well as a Code42 expert, as they discuss:

  • Why all endpoint backup isn’t created equal.
  • How outdated or insufficient backup solutions leave you with gaps that put user data at risk.
  • What technical capabilities you should look for in your next backup solution.

 

June 15: Security Automation Strategies for Cloud Services (Presentation by Peleus Uhley of Adobe)

Security automation strategies are a necessity for any cloud-scale enterprise. There are challenges to be met at each phase of developing and deploying security automation, including identifying appropriate automation goals, creating an accurate view of the organization, selecting tools, and managing the returned data at scale. This presentation will detail various open-source materials and methods that can be used to address each of those challenges.

 

June 20: How and Why to Build an Insider Threat Program (Presentation by Jadee Hanson of Code42)

Get a behind-the-scenes look at what it’s really like to run an insider threat program — a program in which you can take steps to prevent employees from leaking, exfiltrating, and exposing company information. This webinar will provide cloud security professionals with insider threat examples (and why you should care), recommendations for how to get buy-in from key stakeholders, and lessons learned from someone who has experienced it firsthand.

Who Touched My Data?

You don’t know what you don’t know

By Yael Nishry, Vice President of Business Development, Vaultive, and Arthur van der Wees, Founder and Managing Director, Arthur’s Legal

Ransomware
IT teams generally use encryption to enable better security and data protection. However, in the hands of malicious parties, encryption can be utilized as a tool to prevent you from accessing your files and data. We have been aware of this kind of cyberattack for a long time, but the most recent attack by the WannaCry ransomware cryptoworm was extensive, global and on the front page.

Under any circumstance, a ransomware exploit is terrible for an organization. The immediate impact can cause extensive downtime and may put lives and livelihoods at risk. However, in the latest attack, several hospitals, banks, and telecom providers also found their names mentioned in the news, suffering damage to their reputations and losing the trust of patients and customers alike. For a thorough summary of the events, we refer you to the many articles, opinions, and other publications about the WannaCry ransomware attacks. This article covers the rarely discussed secondary effects of ransomware attacks.

Data exploits
What should you do if you discover your data has been encrypted by ransomware?

When there is a loss of data control, most IT teams immediately think of avoiding unauthorized data disclosure and ensuring all sensitive materials remain confidential. And indeed, these are sound measures.

However, what if you can retrieve your organization’s data because a decryption tool was made available by a third-party (experts recommend strongly against paying the ransom)? One may think that business can continue as usual and it can be assumed the data was not compromised or disclosed, right?

Who touched my hamburger?
Unfortunately, if no mechanism was in place beforehand to track if the retrieved data maintained its integrity during the ransomware timeframe, one simply does not know. Thus it will not be clear whether it was modified, manipulated, or otherwise altered. Are you willing to still eat that hamburger?

Furthermore, one does not know whether a copy has been made, either in part or as a whole. And, if a copy was made, IT teams cannot track where it is, and whether it left regulatory data zones such as the European Union or European Economic Area.
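One simple mechanism of this kind is a hash manifest captured before an incident: it cannot tell you whether data was copied, but it can tell you whether recovered data was altered. A minimal sketch follows, with the paths and the storage location of the baseline left as assumptions:

```python
# Sketch: record SHA-256 hashes of important files ahead of time, then
# compare after an incident to see whether recovered data still matches.
import hashlib
from pathlib import Path

def build_manifest(root):
    manifest = {}
    for path in Path(root).rglob("*"):
        if path.is_file():
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

def changed_files(baseline, current):
    # Files whose hash differs, or that disappeared, since the baseline.
    return [p for p, digest in baseline.items() if current.get(p) != digest]

# Before an incident: save build_manifest("/data") somewhere the attacker
# cannot reach (offline, or in separately protected storage).
# After recovery: changed_files(saved_baseline, build_manifest("/data"))
# lists anything that was modified while control of the data was lost.
```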

Secondary effect of ransomware
The loss of control described above is the secondary effect of a ransomware attack, which may be even more far-reaching than the original wave. With very little information about what happened to the data during the attack, it is up to the respective data controller or data processor to perform analysis on the long-term impact to the data, data subjects, and respective stakeholders.

Under the Dutch Security Breach Notification Act (WMD), established in 2016, data integrity breaches are a trigger to initiate the notification protocols, in the same way as confidentiality breaches and availability breaches are triggers. Under Article 33 of the General Data Protection Regulation (GDPR), loss of control is also a trigger to notify the data protection authorities.

In most cases it will be very difficult to demonstrate accurately that the breach has not resulted in a risk to the rights and freedoms of the respective natural persons (or as set forth in both the GDPR and WMD, the breach must not adversely affect the data, or adversely affect the privacy of the data subject), obligating the data controller to notify the authorities.

Besides notification, what other measures should be put in place to monitor irregular activities, and for how long? The window of liability for any identity thefts resulting from the breach will remain open for quite a while, so mitigating risk should be on the top of the priority list.

Encryption
Encrypting data and maintaining the encryption keys on site would not have spared an organization from falling victim to such an attack. However, it would have significantly reduced the exposure. It would allow an organization to convey with confidence that, by maintaining the original encryption keys on-premises, it remained in control of the data, even when that data was encrypted a second time by the attackers using another set of keys.
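As a minimal illustration of keeping your own keys while the data lives elsewhere, the sketch below uses the Python cryptography package’s Fernet interface; key management is deliberately simplified here and would normally involve an on-premises key management system or HSM:

```python
# Minimal sketch: encrypt data with a key that stays on-premises before it
# ever reaches the cloud. Requires the "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stays on-premises, never uploaded
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"customer record: Jane Doe, NL")  # what the cloud stores
plaintext = cipher.decrypt(ciphertext)                         # only possible with your key
assert plaintext == b"customer record: Jane Doe, NL"
```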

Accountability
The GDPR aims to give control over data back to the data subjects. Encryption is mentioned four times in the GDPR, which enters into force within one year, on May 25, 2018. It is explicitly mentioned as an example of a security measure that enables data controllers and data processors to meet the appropriate level of state-of-the-art security set forth in Article 32 of the GDPR. In real-life examples, such as WannaCry and similar ransomware attacks, it can also make the difference between control and loss of data, and the associated loss of trust and reputation.

The GDPR is not about being compliant but about being accountable: ensuring up-to-date levels of protection by having layers of data protection and security in place, continuously, to meet the appropriate dynamic accountability formula set forth in the GDPR.

So encryption can not only save embarrassment and loss of control after ransomware or similar attacks; it can also help organizations keep data appropriately secure and therefore remain accountable.

My Second Attempt at Explaining Blockchain to My Wife

I tried explaining blockchain to my wife and here’s what happened…

By Antony Ma, CTO/PowerData2Go, Founding Chairman/CSA Hong Kong and Macau Chapter, and Board Member/CSA Singapore Chapter

I introduced my wife to Python around nine months ago, and now she’s tinkering and has drawn a tortoise on her MacBook. After spending more time on geeky websites, she became more inquisitive, asking me one day, “Can you explain to me what blockchain is and why it is a game changer?” It sounded like a challenge!

With my 15 years of experience in banking, audit, and IT security, I should be able to nail this. I opened my mouth and mentioned some terms I’ve read on blogs and news websites—distributed ledger, low transaction cost, no central computer, smart contracts, etc. After 45 minutes and some drawings, she asked, “Why the fuss? Is it like a database with hash?”

It looked like I was able to explain what blockchain is but failed to justify why it is ground-breaking. Her question on how a distributed ledger can profoundly transform the Internet was unanswered.

That question also struck me. Despite the many articles on the importance of blockchain and how it could change our digital life, not many can explain in layman’s terms how the technology is so different from other Internet technologies and why it is a paradigm shift.

I started reviewing my readings, and here now is my second attempt at explaining blockchain in understandable terms.

The reason for blockchain
It all started in the 1970s when military research labs invented TCP/IP (Transmission Control Protocol/Internet Protocol), the foundation of the Internet with a high priority on resilience and recoverability. Researchers could add/remove nodes to/from the system (following some protocols) without affecting other network components.

Trust (or simply security) was secondary. If your enemy could cripple your network with one strike, protecting the system against espionage or infiltration was irrelevant. Flexibility and resiliency were implemented first, but they came at a cost: the lack of a security-minded design exposed the network, and the data transmitted on it, to spoofing and wiretapping.

Confidentiality and integrity features were not mandatory in the first version of the Internet. Most of the security features we are using today are patches on a design that was focused on availability and recoverability. SSL (Secure Sockets Layer), OTP (One-Time Password), and PKI (Public Key Infrastructure) were adopted after the Internet started proliferating.

Elements of trust such as authenticity, accuracy, and non-reversible records hinge on a non-security-minded design (just like when the first version of the Internet was built) and decades of patching. The Internet is virtual and intangible because the integrity of information is not guaranteed. You don’t know whether you are chatting with a dog. Trust on the Internet relies on the information security controls deployed and their effectiveness.

A software bug or control lapse may allow anyone with access to a system to make unauthorized changes. For example, a bank employee may exploit a known vulnerability and edit records in the credit score database. Since it has already been proven that no security control is 100 percent effective, trust in cyberspace is built on multiple layers of data protection mechanisms.

We do not trust cyberspace since information integrity is not guaranteed. Because of a lack of trust between different parties on the Internet, there are many intermediaries trying to use physical world verifications to secure or protect transactions. Since the virtual world is intangible and alterations are sometimes hard to detect, when security controls fail users need to go back to the physical world to fix it either by calling a call center or even visiting an office.

Consider this example now from Philipp Schmidt:

In Germany, many carpenters still do an apprenticeship tour that lasts for no less than three years and one day. They carry a small book in which they collect stamps and references from the master carpenters with whom they work along the way. The carpenter’s traditional (and now hipster) outfit, the book of stamps they carry, and (if all goes well) the certificate of acceptance into the carpenter guild are proofs that here is a man or woman you can trust to build your house.

Being in control doesn’t mean it would be easy to lie. Similar to the carpenter’s book of references, it should not be possible to just rip out a few pages without anyone noticing. But being in control means having a way to save credentials, to carry them around with us, and to share them with an employer if we choose to do so.

You may say it is old-fashioned or outdated, but carpenters trust it—even now. Their trust is built on their understanding that the paper cannot be easily tampered with without leaving a trace. Each page is linked to the next and alterations are easily detected without relying on a third party. With the law of physics, there is no need for an intermediary.

The virtual and physical worlds
Blockchain is the new form of paper in cyberspace, which breaks the wall between the virtual and the physical world. Records created using blockchain technology are immutable and do not require other systems or entities for verification. The immutable properties of blockchain are defined by mathematics, similar to how paper follows the law of physics.

An interaction that was recorded using the blockchain system cannot be altered, but you can add a new record that supersedes the previous one. Both the first and the new versions are part of the chain of records. Blockchain is a technology that defines how the chain of records is maintained. Integrity is an inherent part of a blockchain record.

How does blockchain achieve immutability?  The Register has a simple explanation:

In blockchain, a hash is a cryptographic number function which is a result of running a mathematical algorithm against the string of data in a block and results in a number which is entirely dependent on the block contents.

What this means is that if you encounter a block in a chain of blocks and want to read its contents you can’t do it unless you can read the preceding block’s contents because these create the starting data hash (prefix) of the block you are interested in.

And you can’t read that preceding block in the chain unless you can read its preceding block as its starting data item is a hash of its preceding block and so on down the chain. It’s a practical impossibility to break into a block chain and read and then do whatever you want with the data unless you are an authorized reader.

Bringing properties of the physical world into the virtual world is why blockchain is ground-breaking.
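To make the chaining described above concrete, here is a toy sketch in Python. It shows only the hash-linking idea (no consensus, signatures, or network), and the transactions in it are made up for illustration:

```python
# Toy illustration of why tampering with one block breaks the whole chain.
import hashlib
import json

def make_block(data, prev_hash):
    block = {"data": data, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False                       # block contents were altered
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False                       # link to the previous block is broken
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))

print(verify(chain))                       # True
chain[1]["data"] = "Alice pays Bob 500"    # tamper with an earlier record
print(verify(chain))                       # False: the alteration is detectable
```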

For my next post, I will write about the physical properties that blockchain creates and how they are related to trust.

Antony Ma received the CSA Ron Knode Service Award in 2013. Follow him on Twitter at https://twitter.com/Antony_PD2G.

Office 365 Deployment: Research Suggests Companies Need to “Think Different”

Survey shows what companies expected and what they found out

By Atri Chatterjee, Chief Marketing Officer, Zscaler

It’s been six years since Microsoft introduced Office 365, the cloud version of the most widely used productivity software suite. In those years, Office 365 has earned its place as the fastest-growing cloud-delivered application suite, with more than 85 million users today, according to Gartner. Even so, it’s just getting started. The use of Office 365 represents a fraction — just seven percent — of the Office software in use worldwide, and there is tremendous growth on the horizon. That means there is still plenty of room for enterprises of all sizes to capitalize on the agility benefits of Office 365, but getting the deployment right is the key to success.

Understanding the Office 365 deployment experience
We know that Office 365 brings about considerable changes in IT, so we teamed up with market research firm TechValidate to do an independent survey of enterprises that had deployed Office 365 or were in the process of doing so. The results have been illuminating.

We surveyed 205 enterprise IT decision makers from a variety of industries in North America. More than 60 percent of them were managers of IT, 25 percent were at the director or VP level, and 14 percent were C-level. In our questions, we hoped to learn about their experiences in three broad categories:

  1. What they did to prepare for their Office 365 adoption
  2. How the implementation went, given their preparation
  3. What they learned and what they plan to do going forward

Key results

Preparation for Office 365 was “old school” and fell short
A majority of companies surveyed used traditional approaches to prepare for the increased network demands of Office 365. Many increased bandwidth capacity of their existing hub-and-spoke network by over 50 percent in preparation for deployment, and an even greater majority (65 percent) upgraded their data center firewall appliances. And while most companies estimated big budget increases in network expenditures, almost 50 percent had cost overruns after deployment.

Fewer than one in three companies implemented a network architecture involving local breakouts to the Internet from branch offices.

Most implementations fell short on user experience due to bandwidth and latency
Even after bandwidth increases, latency was a big problem, with 70 percent reporting weekly problems and 33 percent reporting daily problems. Firewall upgrades did not help. Sixty-nine percent of those who upgraded firewalls still had latency problems. Ultimately, this results in issues with user experience, with almost 70 percent of C-level executives citing these issues as a top concern.

Lessons learned
Seventy percent of the respondents are now looking to change their existing network architecture and deploy a direct-to-Internet connection to improve performance and user experience.

In addition, 85 percent reported problems with bandwidth control and traffic shaping, and are now looking for solutions to better control network traffic so that business applications like Office 365 are not starved by consumer traffic to Facebook, gaming sites, and streaming media.

More data, insight, and recommendations
The full report provides a lot more data in each of these areas, and it offers key recommendations based on the real-world experiences of over 700 enterprises that have been through the transition to Office 365. You can also check out a summary of the findings on this infographic.

If you are thinking about embarking on such a journey, here are some additional resources to help you plan.

Want To Empower Remote Workers? Focus On Their Data

By Jeremy Zoss, Managing Editor, Code42

Here’s a nightmare scenario for IT professionals: Your CFO is working from the road on a high-profile, highly time-sensitive business deal. Working late on documentation for the deal, he knocks over a glass of water. His laptop is fried; all files are lost. What options does your organization have? How can you get the CFO those critical files back, ASAP, when he’s on the other side of the country?

Remote user downtime has high costs
It’s not just traveling executives that worry IT pros. Three-quarters of the global workforce now regularly works remotely, and one in three work away from the office the majority of the time. Across every sector, highly mobile, on-the-go users play increasingly important roles. When these remote users lose, destroy or otherwise corrupt a laptop, the consequences can be serious.

  • On-site consultants: Every hour of downtime is lost billable time.
  • Distributed sales teams: Downtime can threaten deals.
  • On-site training and technical support: Downtime interrupts services, which can hurt relationships and reputations.
  • Work-from-home employees: These might not be high-profile users, but downtime brings productivity to a halt—a cost magnified across the growing work-from-home workforce in most organizations.

Maximizing remote productivity starts with protecting remote user data
Businesses clearly recognize the huge potential in empowering remote workers and mobile productivity. That’s why they’re spending time and money on enabling secure, remote access to digital assets. But too many forget about the other end of the spectrum: collecting and protecting the digital assets that remote workers are creating in real-time—files and data that haven’t made it back to the office yet. As productivity moves further away from the traditional perimeter, organizations can’t let that data slip out of view and beyond backup coverage.

Get six critical tips to empower your mobile users
Read the new white paper and see how endpoint visibility provides a powerful foundation for enabling and supporting anytime-anywhere users.

New CSA Report Offers Observations, Recommendations on Connected Vehicle Security

By John Yeoh, Research Director/Americas, Cloud Security Alliance

Connected Vehicles are in the news for introducing new features and capabilities to the modern automobile. Headlines also highlight security hacks that compromise vehicle operations and usability. While sources note that the vulnerabilities identified so far have been addressed, a greater understanding is needed of how tomorrow’s Connected Vehicle will operate in an environment composed of both legacy and modernized traffic infrastructure. The Connected Vehicle will be designed to communicate with countless other devices and interfaces. Security systems, tools, and guidance are needed to aid in protecting vehicles and the supporting infrastructure.

Through research and development within the CSA Internet of Things Working Group and the United States Department of Transportation Federal Highway Administration, CSA is introducing “Observations and Recommendations on Connected Vehicle Security” to keep consumers and manufacturers up to date on the evolution of vehicle connectivity, areas of concern, and recommendations for securing the connected vehicle environment. The paper will provide a “big picture” view of the various aspects of vehicles and infrastructure components to better understand their interrelationships, dependencies and threats to the traffic ecosystem.

Learn about:
  • Connected Vehicle reference architectures and messaging protocols
  • V2V, V2I, V2X interactions
  • Potential System-of-System attacks and outcomes
  • Cross collaboration of IoT devices and systems
  • Vehicle design, platform, and infrastructure security best practices

The CSA Internet of Things Working Group continually evaluates and conducts research on new technologies involving cloud and the Internet of Things. CSA collaborates with other industry organizations to bring the latest guidance and security best practices to IT and the enterprise.

A Management System for the Cloud – Why Your Organization Should Consider ISO 27018

By Alex Hsiung, Senior Associate, Schellman & Co.

Cloud computing technologies have revolutionized the way organizations manage and store their information.  Where companies used to house and maintain their own data, a host of organizations have now made the switch to a cloud-based model due to the ease of use and cost-saving benefits promised by the cloud.

But what is a cloud without a little rain?  The benefits of cloud technologies have not come without their costs.

Within the world of cloud computing, there have been three persistent concerns:

  1. Security
  2. Security
  3. Security

A quick search for the pitfalls and concerns organizations face with cloud computing yields a recurring motif.  Every company looking to incorporate a cloud-based service has to weigh the benefits that a cloud environment affords against the risks associated with entrusting an organization with its sensitive data.  This data tends to include personally identifiable information (henceforth referred to as PII), which is generally the most scrutinized category of data and is subject to some of the strictest legal and regulatory requirements.

Customers of cloud service providers want to rest assured that the PII they have entrusted to a cloud service provider is maintained and held to at least the same security standards they would have applied had the data remained within their control.  For some organizations, the stakes are even higher, as this is mandated by legal and regulatory requirements such as the Health Insurance Portability and Accountability Act (HIPAA) for electronic personal health information and the Gramm-Leach-Bliley Act (GLBA) for sensitive financial information.

Many cloud service providers maintain that they are ignorant of the data ingested on behalf of their customers.  However, in the event of a security breach involving either personal health information or sensitive financial data, significant fines and reputational damage can be incurred by the cloud service provider if appropriate security and privacy measures are not in place.  This is where an effective information security management system, with specific control considerations tailored to cloud security and privacy surrounding PII, can prove invaluable to a cloud service provider.

You may have questions regarding what an information security management system is.  To define an information security management system, it may be easier to first understand what it is not.  An information security management system is not referring to an actual “system”, “application”, or “tool” that performs information security functions.

A broader definition is as follows: an information security management system represents the organization’s holistic approach to addressing information security concerns.  This includes top management’s buy-in to addressing these risks which can be demonstrated in its actions by performing the following:

  • Fostering a top-down approach to information security that encourages personnel throughout the organization to be aware of information security best practices
  • Performing risk assessments that are tailored to its organization’s unique threats and vulnerabilities
  • Proactively searching for issues and concerns through the use and selection of internal auditors
  • Monitoring and measuring the performance and effectiveness of the information security management system
  • Establishing a commitment to continually improving the information security management system
  • Ensuring that security controls are implemented and applicable to its organization’s goals and purpose

The standard most commonly used to demonstrate an organization’s effective implementation of an information security management system is the ISO 27001 standard.  The ISO 27001 standard serves as a baseline framework which virtually all service providers, cloud-based or otherwise, can work toward implementing.  It is worth noting that ISO 27001 provides a multitude of benefits to organizations that implement an effective information security management system, but two are perhaps the most pertinent and deserve to be mentioned:

  • An effective information security management system demonstrates to prospective and current customers that the service organization means business about protecting the data that it is entrusted with and responsible for.
  • An effective information security management system assists organizations with establishing a forward-thinking, proactive approach to addressing information security concerns as opposed to enabling a backward-looking mindset which is generally fostered by audit culture, which typically focuses on historical information.

The above-mentioned points may be enough for any service organization to consider implementing an information security management system.  The reputational benefit that an organization can enjoy by demonstrating to its customers that it takes its handling of information seriously is difficult to measure.  The cost-savings that an organization can enjoy by implementing effective response procedures in the event of a security incident are also incalculable – just ask United Airlines.  Sure, maybe that was a different kind of incident, but the age-old adage remains: failing to prepare is preparing to fail – this is the essence of ISO.

However, the buck does not stop at ISO 27001, especially for cloud service providers who, by virtue of their trade, must take information security even more seriously.  In addition to the requirements set forth by the ISO 27001 standard, organizations can implement a slew of additional controls to strengthen the security and privacy of the sensitive data, such as PII, that they handle.  This is the purpose of ISO 27018, which can be achieved in tandem with an effective information security management system in accordance with the ISO 27001 standard.

ISO 27018, otherwise referred to as ISO/IEC 27018:2014, builds upon an organization’s information security management system by establishing a set of privacy controls dedicated to protecting PII in public clouds acting as PII processors.  In other words, it provides a subset of controls dedicated to the protection of sensitive personal data in the cloud.

A high-level overview of some of the ISO 27018 requirements are included below:

  • Providing cloud customers with the ability to access, correct, and erase their own PII
  • Ensuring that data is processed according to its intended purpose and not taken out of context
  • Procedures for the deletion of temporary files
  • Implementing defined disclosure procedures
  • Providing open, transparent notice in the event that sub-contractors are utilized
  • Encouraging accountability on behalf of the cloud service provider through the implementation of breach notification procedures
  • More stringent information security requirements on the part of the cloud service provider

Hopefully, after considering the above, it is clearer that implementing an information security management system aligned with ISO 27001 is tremendously valuable for a service organization; for cloud service providers hoping to assuage their customers’ security and privacy concerns, aligning those controls with ISO 27018 may be the organization’s best option.

As the technologies around us evolve, so do their underlying threats and vulnerabilities.  An effective information security management system affords an organization a proactive, forward-thinking approach to information security.  This is all the more important given that cloud computing technologies have been plagued with security and privacy concerns since their inception; the risks will only continue to increase.

If you represent a cloud service provider, it may be time to consider how your organization can benefit from implementing an information security management system that aligns its ISO 27001 controls with the ISO 27018 objectives.

For more information on ISO 27018, you can view our webinar on-demand: Privacy in the Cloud – an introduction to ISO 27018

Ransomware 101

By Jacob Serpa, Product Marketing Manager, Bitglass

Unless you’ve been living under a rock for the last few weeks, you know that there has been a notable increase in cyberattacks around the world. Hackers have been spreading a type of ransomware called “WannaCry” via emails that trick recipients into opening attachments that expose them to the attack.

Since Friday, over 150 countries have been affected by WannaCry, with the largest impact on the NHS in England and Scotland. The attack hit more than 16 NHS organizations, crippling hospitals and general practices and forcing them to shut down and turn away patients.

What you need to know about ransomware
Once your system is infected, ransomware will encrypt your files, rendering them useless without a key. The guilty hackers will then demand some form of payment (typically via bitcoins) for the return of the hostage information.

Ransomware’s effects are not limited to the files on a device – they can also affect the device as a whole. Hackers can put locks on user profiles that make it impossible for individuals to log into their devices without paying a ransom. Similarly, they may alter a computer’s startup process so that it cannot finish unless a ransom is paid.

What you need to do to protect against ransomware
Companies must ensure adequate employee training to protect against ransomware. For example, employees must be able to identify phishing attempts and illegitimate emails. Additionally, users must keep their systems, software, and applications up to date. Finally, regular backups of data are a necessity.
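
On the last point, the sketch below shows what a minimal, scheduled endpoint backup could look like. The directories and backup destination are placeholder assumptions; a production setup would add versioning, off-device or off-site storage, and regular restore tests, and this is in no way a recommendation of a specific tool.

    import tarfile
    from datetime import datetime
    from pathlib import Path

    # Placeholder locations -- adjust to the environment being protected.
    SOURCE_DIRS = [Path.home() / "Documents", Path.home() / "Projects"]
    BACKUP_ROOT = Path("/mnt/backup")  # ideally a separate, access-controlled volume

    def create_backup() -> Path:
        """Write a timestamped, compressed archive of the source directories."""
        BACKUP_ROOT.mkdir(parents=True, exist_ok=True)
        archive = BACKUP_ROOT / f"endpoint-backup-{datetime.now():%Y%m%d-%H%M%S}.tar.gz"
        with tarfile.open(archive, "w:gz") as tar:
            for src in SOURCE_DIRS:
                if src.exists():
                    tar.add(src, arcname=src.name)
        return archive

    if __name__ == "__main__":
        print(f"Backup written to {create_backup()}")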

In addition to the above, organizations must embrace technological solutions that can protect against ransomware. While traditional, signature-based solutions can detect previously identified threats, advanced solutions that utilize capabilities like machine learning must be adopted to protect against unknown threats.
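
To make the machine-learning point slightly more concrete, the toy sketch below trains an anomaly detector on invented features of known-good files and flags a sample that deviates sharply from them. The feature set and numbers are made up for illustration and do not reflect how any particular vendor’s engine works.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Invented per-file features: [size_kb, byte_entropy, visible_import_count]
    benign_samples = np.array([
        [120, 4.1, 35], [300, 4.6, 52], [80, 3.9, 28], [450, 5.0, 61],
        [200, 4.3, 40], [150, 4.0, 33], [500, 5.2, 70], [90, 3.8, 25],
    ])

    # Learn what "normal" looks like without relying on any malware signatures.
    model = IsolationForest(contamination=0.05, random_state=0).fit(benign_samples)

    # A packed, high-entropy file with almost no visible imports looks anomalous.
    suspect = np.array([[950, 7.8, 4]])
    verdict = model.predict(suspect)  # -1 = anomaly, 1 = normal
    print("anomalous" if verdict[0] == -1 else "normal")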

As hackers become more sophisticated, companies must use a multi-pronged approach to prevent the spread of ransomware.

CTRL-Z and the Changing Data Landscape

By Mark Wojtasiak, Director of Product Marketing, Code42

The massive “WannaCry” ransomware attack that appeared in Europe last week and spread to over 150 countries is a perfect illustration of why enterprise data storage is in a period of flux. Today, organizations can choose to keep their data in the cloud, on-premise, or across both in a hybrid deployment. This variety of choice is great – it caters to pretty much every type of organization and allows IT decision makers to see where sensitive corporate information is at all times—right?

Wrong.

In 2017, 50 percent of all corporate data is actually held locally, at the endpoint, on employee devices. This is according to 800 IT decision makers (ITDMs) and 400 business decision makers (BDMs) surveyed as part of our brand new CTRL-Z Study, a pan-global report looking into the data practices of some of the world’s largest organizations and most senior stakeholders—including the C-suite—across the U.S., U.K., and Germany. The endpoint is also where 78 percent of ransomware attacks begin, and WannaCry has reportedly spread to over 100,000 organizations so far.

When ‘benefits’ outweigh the risks
The serious security implications and risks to productivity that this shift in data repositories represents are well understood at the top of the organization, with 65 percent of CIOs and 63 percent of CEOs stating that losing all the data held at the endpoint would destroy their business. But, in reality, awareness of the risk is doing little to dissuade poor security practices.

Three quarters (75 percent) of CEOs and more than half (52 percent) of business decision makers admit that they use applications/programs that are not approved by their IT department. The vast majority (80 percent) of CEOs and 65 percent of BDMs also say they use these unauthorized solutions to ensure productivity. This is despite 91 percent of CEOs and 83 percent of BDMs acknowledging that their behaviors could be considered a security risk to their organization.

So, to put it bluntly, there’s behavior at the top of numerous enterprises that favors productivity and getting the job done over data security, and CEOs and key BDMs realize this. Therefore, especially in light of coordinated global cyberattacks, the big question is: “Where does the enterprise go from here?”

Recovery is the key to data security
Productivity is undoubtedly the key to business success. At the same time, it is integral to business continuity to protect data and to be able to rapidly recover from a breach or undo a ransomware infection. Around 50 percent of respondents to the CTRL-Z study admitted that their organization had suffered a data breach in the last 18 months. As these numbers show, a ‘prevention-only’ approach to security is no longer sufficient. Tried and tested recovery must now be at the core of enterprise data protection strategy—to get employees back up and running quickly should a breach occur. After all, the biggest cost of a ransomware attack isn’t the ransom payment—it’s the lost productivity that results from not having the right backup and restore solution in place.

When it comes to security, there are three pillars to ensure success. First, organizations must be able to spot risk sooner. Gaining visibility over where data is, how it moves, who accesses it and when could act as an early warning system to alert ITDMs to both insider and external threats. Second, the enterprise as a whole always needs to be able to bounce back. When a data incident occurs, internal teams and the backup solutions in place need to be tested and ready to face the challenge. Finally, if the organization is to remain competitive, it needs to recover quickly. Time is money, and in the modern enterprise, so is data. Whatever goes wrong, whether that be a company-wide breach or an insider leaking a single file, IT professionals need to be able to identify the where, when and who of the situation immediately if they hope to mitigate the risk.
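
As a minimal illustration of the first pillar, visibility, the sketch below uses the open-source watchdog library to log file activity in a single endpoint folder. It is a toy example under obvious assumptions: the path is a placeholder, events go to stdout rather than a central pipeline, and it says nothing about how any particular vendor implements endpoint visibility.

    import time

    from watchdog.events import FileSystemEventHandler
    from watchdog.observers import Observer

    class AuditHandler(FileSystemEventHandler):
        def on_any_event(self, event):
            # A real deployment would ship this to a central log pipeline.
            print(f"{time.strftime('%Y-%m-%d %H:%M:%S')} {event.event_type}: {event.src_path}")

    if __name__ == "__main__":
        observer = Observer()
        observer.schedule(AuditHandler(), path="/home/user/Documents", recursive=True)  # placeholder path
        observer.start()
        try:
            while True:
                time.sleep(1)
        except KeyboardInterrupt:
            observer.stop()
        observer.join()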

Now is definitely the time for change, and the enterprises that want to remain competitive are starting to act. As many organizations around the world have learned in recent days, it’s not if you will be hit by a cyberattack, but when.


Malware: Painting a Picture

By Jacob Serpa, Product Marketing Manager, Bitglass

Part One
Now more than ever, companies are flocking to the cloud. Through a variety of software-as-a-service (SaaS) and infrastructure-as-a-service (IaaS) offerings, enterprises are able to raise their efficiency, increase their flexibility, and decrease costs. However, pursuing these benefits does come with some risk. In particular, malware and ransomware have transformed from endpoint problems into systemic threats to organizations’ suites of cloud apps.

While it may be tempting to run from the cloud (and the threats hiding in its billows), the fact remains that it is a staple of modern business – it’s here to stay. So, enterprises must take steps to understand malware and safely capture the benefits of the cloud. This process is similar to composing a painting in that there are many items to consider when trying to complete a picture of the ideal future. Each piece of secure cloud migration corresponds with one aspect of painting – see how in this two-part blog series.

The Saboteur: Types of Malware
Malware can be thought of as a sly saboteur waiting for an opportunity to throw paint at your canvas and ruin your design.

Malware can be divided into a number of smaller classifications. For example, horror stories often revolve around worms, spyware, trojan horses, ransomware, and many other types of malware. Despite this lengthy list, two overarching categories are of primary importance: known threats and unknown threats. A known threat is a common piece of malware that has been seen in the past, while an unknown (or zero-day) threat is malware that is relatively new and has not yet been identified. Zero-day malware is a particular risk because it is harder to detect – there can be months of damage, theft, and infection before it is noticed. The two categories present different challenges and must be addressed in unique ways – as will be discussed in Part Two.
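
A minimal sketch of the “known threat” side of that distinction follows: signature matching is essentially a lookup of a file’s fingerprint against a database of previously identified samples. The hash below is a placeholder, and the point of the sketch is that a zero-day sample, whose hash is in nobody’s database yet, passes this check untouched.

    import hashlib
    from pathlib import Path

    # Placeholder signature database: SHA-256 hashes of previously identified malware.
    KNOWN_MALWARE_HASHES = {
        "0000000000000000000000000000000000000000000000000000000000000000",  # illustrative entry
    }

    def sha256_of(path: Path) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def is_known_threat(path: Path) -> bool:
        # Catches only what is already in the database; a zero-day sample
        # with a brand-new hash is reported as clean.
        return sha256_of(path) in KNOWN_MALWARE_HASHES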

Data Loss Threatens M&A Deals

By Jeremy Zoss, Managing Editor, Code42

One of the most popular breakout sessions at Evolution17 featured a great merger and acquisition (M&A) scenario: Midway through the deal, critical information leaks, devastating the value of the deal. How can you figure out how much info leaked—by whom and to whom?

Here’s why that storyline was so riveting: 2016 saw more than $3.5 trillion in M&A deals. And the vast majority of those deals revolved around valuations of intellectual property (IP), which today makes up about 80 percent of a typical company’s value. If you’re a buyer organization, consider these questions:

  • Are you aware of all the IP within the target company?
  • Can you be sure all this IP will come with the deal?
  • Can you be certain it won’t leak to a competitor?

Data loss is a growing M&A problem
For most buyers, the answers to the questions above are no, no and no. This lack of visibility and security for the very assets a company is buying is startling, and it’s increasingly impeding the success of M&A deals. A 2016 survey of dealmakers found that about three in four M&A deals end up getting delayed—sometimes indefinitely—by data loss. Those that eventually get back on track often end up hobbled by missing data. Experts say this is a big part of the reason that 80 percent of M&As fail to achieve their potential or expected value.

M&A amps up the insider threat
Data loss is increasingly common in M&A for the same reason it’s increasingly common throughout the business world: More than half of all enterprise data now lives on endpoints, beyond traditional visibility and security tools centered on a network drive or central server. If the target company can’t see what its employees are doing with data on their laptops and desktops, then a potential buyer has near zero visibility. Couple that with the unique circumstances of an M&A deal and you’ve got a much higher risk of insider data theft. Laid-off employees freely take their endpoint data—sometimes for personal gain, other times just to sabotage their former employer. Those that do stick around tend to feel little loyalty toward their new company, lowering their inhibitions toward selling or taking data for personal gain.

There’s a better way to protect IP during M&A deals
IP is what an acquiring company is buying—the info that is critical to the value and competitive advantage gained through a deal. To make the most of an M&A opportunity, buyers need a better way to collect, protect and secure all data living on a target company’s endpoints—before, during and after a deal. Fortunately, with the right tools, a buyer can gain complete visibility of all endpoint data, take control of valuable IP and drive a deal to its most successful outcome.

Don’t let data loss sink an M&A. Read our new white paper, Best Practices for Data Protection During Mergers and Acquisitions.

What You Need to Know About Changes to the STAR Program

By Debbie Zaller, CPA, CISSP, PCI QSA, Principal, Schellman & Co., LLC

The CSA recently announced that the STAR Program will now allow a one-time, first-year only, Type 1 STAR Attestation report. What is a Type 1 versus Type 2 examination and what are the benefits for starting with a Type 1 examination?

Type 1 versus Type 2
There are two types of System and Organization Control (SOC) 2 reports, Type 1 and Type 2. Both types of reports examine a service organization’s internal controls relating to one or more of the American Institute of CPAs’ (AICPA) Trust Services Principles and Criteria, as well as the Cloud Security Alliance’s (CSA) Cloud Controls Matrix (CCM). Both reports include an examination on the service organization’s description of its system.

A Type 1 report examines the suitability of the design of the service organization’s controls at a point in time, also referred to as the Review Date. A Type 2 report examines not only the suitability of the design of controls that meet the criteria but also the operating effectiveness of controls over a specific period of time, also referred to as the Review Period.

In a Type 2 examination, the auditor is required to perform more detailed testing, request more documentation from the organization, and spend more time than in a Type 1 examination. These additional documentation and testing requirements can put a greater strain on an organization and require more resources to complete the audit.

A service organization that has not been audited against the criteria before may find it easier to complete a Type 1 examination first, as it requires less documentation and less preparation, and the organization can respond more quickly to gaps noted during the examination.

The cost of a Type 1 examination is lower than that of a Type 2 examination because the testing effort is smaller. Additionally, fewer organizational resources are needed for a Type 1, resulting in further cost savings.

If the service organization, or the specific service line or business unit in scope, was recently implemented, the organization would need not only to put controls in place that meet the criteria but also to have those controls operating for a certain period of time before a Type 2 examination could be completed. In this situation, there would not be enough history for a service auditor to perform a Type 2 examination. A Type 1 examination allows for a quicker report, rather than waiting out the review period required for a Type 2 examination.

Benefits of a Type 1
There are several benefits to starting with a Type 1 report that include:

  • Quicker report turnaround and faster listing on the STAR Registry
  • Shorter testing period
  • Cost efficiencies
  • Easier to apply to new environment or new service line

An organization might be trying to win a certain contract or respond to a client’s request for a STAR Attestation in a short period of time. A Type 1 examination does not require controls to be operating for a period of time prior to the examination. Therefore, the examination and resulting report can be provided sooner to the service organization.

Starting with a Type 1 report has many benefits for a first-year STAR Attestation. The organization will find this useful when moving to a Type 2 examination in the following year.

It is important to note, though, that a Type 1 should be considered only an intermediate, preparatory step toward achieving a Type 2 STAR Attestation.

Mind the Gap

By Matt Piercy, Vice President and General Manager EMEA, Zscaler

The sheer number of IT departments that fail to acknowledge the security gaps available for cyber-attackers to exploit is astonishing. The problem is that many within the industry believe they have their security posture under control without having looked at the wider picture. The number of threats increases every day, and as new technologies and opportunities emerge, companies need new security infrastructure to cope with the changing threat landscape. Meanwhile, C-level executives struggle to approve the budgets needed to bring enterprise security up to the next level of protection. Companies that do not keep up with the latest trends are left more vulnerable to data breaches as a consequence.

Executives are well advised to check whether the following points are covered in their security strategy.
  1. More than 50% of all internet traffic is SSL-encrypted today. That may sound secure, but unfortunately it cuts both ways: it is all too easy to hide modern cyber-attacks in SSL-encrypted traffic, because many companies do not inspect that traffic. One reason may be performance limitations of their existing security infrastructure, as SSL scanning requires high bandwidth and powerful engines. Regulatory issues may be another, as some companies have not yet worked out how to scan encrypted traffic in compliance with their local regulations. As a consequence, over 50% of all internet traffic remains uninspected for modern malware – and attackers are aware of that situation.
  2. Mobile devices are another issue, with users potentially accessing compromised websites or applications on devices that are not controlled under the company’s security umbrella. Because the mobile user is the weakest link in the security shield, there is a real danger that an infected mobile device will log on to the corporate network and allow malware to spread further. The device may even be owned by the employer, and if it is not secured, sensitive customer and business data could be easily retrievable. What is surprising is that although mobile traffic accounts for more than half of all internet traffic, it still is not treated as an important part of the environment to secure. Modern security technologies exist that can monitor traffic on every device, at every location the user visits. Organisations need to start implementing these technologies to close more gaps in their security shield.
  3. Office 365, for all of its success as a cloud application, also needs to be considered by security executives. Companies struggle to cope with the increased MPLS traffic and bandwidth requirements that come with O365, so they may be tempted to break that traffic out directly to the internet, where it bounces freely between users, devices, and clouds. To avoid devastating effects, organisations are well advised to modernise their security infrastructure so that all locations and branch offices have fast and secure access to the cloud and a great user experience.
  4. The incoming EU General Data Protection Regulation (GDPR) will require companies to secure personally identifiable information (PII) more rigorously than ever before, or risk huge fines as well as reputational damage in the event of a data breach. It is important to note that even UK companies will have to comply with GDPR after Brexit if they process the personal data of European citizens. Companies will need to obtain valid consent for using personal data, appoint a data protection officer (DPO), notify the local data protection authority when they have been hit by a data breach, and, perhaps most crucially, could be fined up to €20m or 4% of their annual turnover if they are breached. With so much to do, businesses need to do their homework to ensure they are compliant by May 2018.

Companies are setting off on the path toward digital transformation. They would do well to consider the security requirements that go along with a modern world before they set off on that path.

How to Choose a Sandbox

Grab a shovel and start digging through the details

By Mathias Wilder, Area Director and General Manager/EMEA Central, Zscaler

Businesses have become painfully aware that conventional approaches — virus signature scanning and URL filtering — are no longer sufficient in the fight against cyberthreats. This is in part because malware is constantly changing, generating new signatures with a frequency that far outpaces the updates of signature detection systems. In addition, malware today tends to be targeted to specific sectors, companies, or even individual members of a management team, and such targeted attacks are difficult to spot. It has become necessary to use state-of-the-art technology based on behavioral analysis, also known as the sandbox. This blog examines how a sandbox can increase security and it looks at what to consider when choosing a sandbox solution.

The sandbox as a playground against malware
Zero-day ransomware and new malware strains are spreading at a frightening pace. Due to the dynamic nature of the attacks, it is no longer possible to develop a signature for each new variant. In addition, signatures tend to be available only after malware has reached a critical mass — in other words, after an outbreak has occurred. As malware changes its face all the time, the code is likely to change before a new signature for any given type of malware can be developed, and the game starts from scratch. How can we protect ourselves against such polymorphous threats?

There is another trend that should influence your decision about the level of protection you need: malware targeted at individuals. It is designed to work covertly, making smart use of social engineering mechanisms that are difficult to identify as fake. It only takes a moment for a targeted attack to drop its harmful payload — and the amount of time between system infection and access to information is getting shorter all the time.

What is needed is a quick remedy that does not rely on signatures alone. To detect today’s amorphous, malicious code, complex behavioural analysis is necessary, which in turn requires new security systems. The purpose of a sandbox is to analyse suspicious files in a protected environment before they can reach the user. The sandbox provides a safe space, where the code can be run without doing any harm to the user’s system.

The right choice to improve security
Today’s market appears crowded with providers offering various solutions. Some rely on virtualization technology (where an attack is triggered in what appears to be a virtual system) or simulated hardware (where the malware is presented with what looks like a physical PC), through to solutions in which the entire network is mapped in the sandbox. However, malware developers have been hard at work, too. A well-coded package can recognize whether a person is sitting in front of the PC, detect that it is running in a virtual environment and alter its behavior accordingly, or undermine sandboxing measures by delaying activation of the malicious code after infection. So what should companies look for when they want to enhance their security posture through behavioral analysis?

What to look for in a sandbox

  • The solution should cover all users and their devices, regardless of their location. Buyers should check whether mobile users are also covered by a solution.
  • The solution should work inline and not in a TAP mode. This is the only way one can identify threats and block them directly without having to create new rules through third-party devices such as firewalls.
  • First-file sandboxing is crucial to prevent an initial infection for which no detection pattern yet exists (a sketch of this flow follows this list).
  • It should include a patient-zero identification capability to detect an infection affecting a single user.
  • Smart malware often hides behind SSL traffic, so a sandbox solution should be able to examine SSL traffic. With this capability, it is also important to look at performance, because SSL scanning drains a system’s resources. With respect to traditional appliances, a multitude of new hardware is often required to enable SSL scanning — up to eight times more hardware, depending on the manufacturer.
  • In the case of a cloud sandbox, it should comply with relevant laws and regulations, such as the Federal Data Protection Act in Germany. It is important to ensure that the sandboxing is done within the EU, ideally in Germany. The strict German data protection regulations also benefit customers from other EU countries.
  • A sandbox is not a universal remedy, so it should, as an intelligent solution, be able to work with other security modules. For example, it is important to be able to stop outbound traffic to a command-and-control (C&C) centre in the case of an infection. In turn, it should be possible to identify the infected computer by tracing back the C&C communication and take it offline.
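
To make the inline and first-file sandboxing criteria above a little more concrete, here is a sketch of the control flow an inline solution might follow: hold an unknown file, obtain a behavioural verdict before delivery, and cache the result so that only the first requester waits. The function and cache names are purely illustrative and do not correspond to any vendor’s API.

    import hashlib

    # Illustrative verdict cache: file hash -> "clean" or "malicious".
    verdict_cache: dict[str, str] = {}

    def detonate_in_sandbox(file_bytes: bytes) -> str:
        """Placeholder for submitting the file to a (cloud) sandbox and
        waiting for the behavioural verdict."""
        raise NotImplementedError("integrate with the sandbox of your choice")

    def handle_inline_download(file_bytes: bytes) -> bool:
        """Return True if the file may be delivered to the user.

        First-file sandboxing: an unknown file is analysed *before* delivery,
        rather than being passed through while analysis runs (TAP mode)."""
        digest = hashlib.sha256(file_bytes).hexdigest()
        if digest not in verdict_cache:  # the patient-zero case
            verdict_cache[digest] = detonate_in_sandbox(file_bytes)
        return verdict_cache[digest] == "clean"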

Putting it all together
All these criteria can be covered by an efficient and highly integrated security platform, rather than individual hardware components (“point” appliances). One advantage of such a model is that you get almost instantly correlated logs from across the security modules on the platform without any manual interaction. If a sandbox is part of the platform, the interplay of various protection technologies through the automated correlation of data ensures faster and significantly higher protection. This is because it is no longer necessary to feed the SIEM system manually with logs from different manufacturers.
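
The toy sketch below illustrates what that automatic correlation means in practice: because the proxy log and the sandbox verdict already share an identifier (here a file hash), joining them into one analyst view needs no manual SIEM feeding. The record format and field values are invented for the example.

    from collections import defaultdict

    # Invented records from two modules of the same platform, sharing a file hash.
    proxy_logs = [
        {"client": "10.0.0.7", "sha256": "aa11", "url": "http://example.test/invoice.doc"},
        {"client": "10.0.0.9", "sha256": "bb22", "url": "http://example.test/setup.exe"},
    ]
    sandbox_verdicts = {"bb22": "malicious"}

    def correlate(proxy_logs, sandbox_verdicts):
        """Attach the sandbox verdict to each proxy event and group by client,
        so 'who downloaded what, and was it bad' appears in one view."""
        by_client = defaultdict(list)
        for rec in proxy_logs:
            verdict = sandbox_verdicts.get(rec["sha256"], "unknown")
            by_client[rec["client"]].append({**rec, "verdict": verdict})
        return dict(by_client)

    if __name__ == "__main__":
        for client, events in correlate(proxy_logs, sandbox_verdicts).items():
            print(client, events)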

Platform models do not lose any information, as they allow all security tools — such as proxy, URL filters, antivirus, APT protection, and other technologies — to communicate with one another. This eliminates the time-consuming evaluation of alerts, as the platform blocks unwanted data extraction automatically. A cloud-based sandbox combined with a security platform is, therefore, an effective solution: it complements an existing security setup with behavioral analysis components to detect previously unknown malware and strengthens the overall security posture — without increasing operating costs.