Improving Metrics in Cyber Resiliency: A Study from CSA

August 30, 2017

By Dr. Senthil Arul, Lead Author, Improving Metrics in Cyber Resiliency

With the growth in cloud computing, businesses rely on the network to access information about operational assets that is stored away from the local server. Decoupling information assets from other operational assets could result in poor operational resiliency if the cloud is compromised. Therefore, to keep operational resiliency unaffected, it is essential to bolster information asset resiliency in the cloud.

To study the resiliency of cloud computing, the CSA formed a research team consisting of members from both private and public sectors within the Incident Management and Forensics Working Group and the Cloud Cyber Incident Sharing Center.

To measure cyber resiliency, the team leveraged a model developed to measure the resiliency of a community after an earthquake. Expanding this model to cybersecurity introduced two new variables that could be used to improve cyber resiliency.

  • Elapsed Time to Identify Failure (ETIF)
  • Elapsed Time to Identify Threat (ETIT)

Measuring these and developing processes to lower the values of ETIF and ETIT can improve the resiliency of an information system.
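Both metrics are, at heart, simple elapsed-time calculations. As a rough illustration only (the timestamps, field names, and the assumption that each interval is measured from onset to identification are mine, not taken from the study), ETIF and ETIT could be computed from incident records like this:

```python
from datetime import datetime

# Hypothetical incident timeline; all values are illustrative.
threat_onset       = datetime(2017, 3, 1, 8, 0)    # threat becomes active
failure_onset      = datetime(2017, 3, 1, 9, 30)   # resulting failure begins
failure_identified = datetime(2017, 3, 1, 16, 45)  # failure detected
threat_identified  = datetime(2017, 3, 2, 10, 15)  # underlying threat recognized

etif = failure_identified - failure_onset   # Elapsed Time to Identify Failure
etit = threat_identified - threat_onset     # Elapsed Time to Identify Threat

print(f"ETIF: {etif}")  # shorter elapsed times indicate better resiliency
print(f"ETIT: {etit}")
```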

The study also looked at recent cyberattacks and measured ETIF for each of the attacks. The results showed that the forensic analysis process is not standard across industries and, as such, the data in the public domain are not comparable. Therefore, to improve cyber resiliency, the team recommends that the calculation and publication of ETIF be transferred from the companies that experienced the cyberattacks to an independent body (such as companies in the IDS space). A technical framework and an appropriate regulatory framework need to be created to enable the measurement and reporting of ETIF and ETIT.

Download the full study.

Security Needs Vs. Business Strategy – Finding a Common Ground

August 21, 2017

By Yael Nishry, Vice President of Business Development, Vaultive

Even before cloud adoption became mainstream, it wasn't uncommon for IT security needs to conflict with both business strategy and end user preferences. Almost everyone with a background in security has found themselves in the awkward position of having to advise against a technology with significant appeal and value because it would introduce too much risk.

In my time working both as a vendor and as a risk management consultant, few IT leaders I’ve come across want to be a roadblock when it comes to achieving business goals and accommodating (reasonable) user preferences and requests. However, they also understand the costs of a potential security or non-compliance issue down the road. Unfortunately, many IT security teams have also experienced the frustration of being overridden, either officially by executives electing to accept the risk or by users adopting unregulated, unsanctioned applications and platforms, introducing risk into the organization against their recommendation.

In today's world of cloud computing, there are more vendor options than ever, and end users often come to the table with their own preferences and demands. More and more, I speak with IT and security leaders who have been directed to move to the cloud, or pressured to move data to a specific cloud application for business reasons, but who find themselves saying no because the native cloud security controls are not enough.

Fortunately, in the past few years, solutions have emerged that allow IT and security leaders to stop saying no and instead enable the adoption of business-driven requests while giving IT teams the security controls they need to reduce risk. Cloud vendors spend a lot of time and resources securing their infrastructure and applications, but they are not responsible for ensuring compliant cloud usage in their customers' organizations.

The legal liability for data breaches is yours and yours alone.  Only you can guarantee compliant usage within your organization, so it’s important to understand the types of data that will be flowing into the cloud environment and work with various stakeholders to enforce controls that will reduce risk to an acceptable level and comply with any geographic or industry regulations.

It can be tempting, as always, to lock everything down and allow users only the most basic functionality in cloud applications. However, that often results in a poor user experience and leads to unsanctioned cloud use and shadow IT.

While cloud environments are very different from on-premises environments, many of the same security principles still apply. As a foundation, I often guide organizations to look at what they are doing today for on-premises security and begin by extending those same principles into the cloud. Three useful principles to begin with are:

Privilege Management
Privilege management has been used in enterprises for years as an on-premises method to secure sensitive data and guide compliant user behavior by limiting access. In some cloud services, like Amazon Web Services (AWS), individual administrators can quickly amass enough power to cause significant downtime or security concerns, either unintentionally or through compromised credentials. Ensuring appropriate privilege management in the cloud can help reduce that risk.
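As a minimal sketch of what least-privilege management can look like in AWS (the policy scope, bucket name, and policy name below are placeholders I chose for illustration, not a prescribed configuration), an administrator could be limited to read-only access on a single S3 bucket rather than broad account-wide rights:

```python
import json
import boto3

iam = boto3.client("iam")

# Illustrative least-privilege policy: read-only access to one bucket.
# Bucket and policy names are placeholders.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports-bucket",
            "arn:aws:s3:::example-reports-bucket/*",
        ],
    }],
}

iam.create_policy(
    PolicyName="ExampleReadOnlyReports",
    PolicyDocument=json.dumps(policy_document),
)
```

The policy would then be attached only to the users or roles that genuinely need that access, keeping any single set of credentials from amassing enough power to cause significant damage.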

In addition to traditional privilege management, the cloud also introduces a unique challenge when it comes to cloud service providers. Since they can access your cloud instance, it’s important to factor into your cloud risk assessment that your cloud provider also has access to your data. If you’re concerned about insider threats or government data requests served directly to the cloud provider, evaluating options to segregate data from your cloud provider is recommended.

Data Loss Protection
Another reason it's so important to speak with stakeholders and identify the type of data flowing into the cloud is to determine what data loss protection (DLP) policies you need to enforce. Common data characteristics to look out for include personally identifiable information, credit card numbers, or even source code. If you're currently using on-premises DLP, it's a good time to review and update your organization's already-defined patterns and data classification definitions to ensure that they are valid and relevant as you look to extend them to the cloud.
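As a hedged illustration of the kind of pattern matching such a review might start from (the patterns below are deliberately naive and would need validation logic and tuning before production use):

```python
import re

# Naive example patterns; real DLP engines add validation (e.g., Luhn checks)
# and contextual analysis to reduce false positives.
PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return the names of any sensitive-data patterns found in the text."""
    return [name for name, pattern in PATTERNS.items() if pattern.search(text)]

print(classify("Card 4111 1111 1111 1111 was charged."))  # ['credit_card']
```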

It's also important to educate end users on what to expect. Good cloud security should be mostly frictionless, but if you decide to enforce policies such as blocking a transaction or requiring additional authentication for sensitive transactions, it's important to include this in your training materials and any internal documentation provided to users. It not only lets users know what to expect, leading to fewer helpdesk tickets, but can also be used to refresh users on internal policies and security basics.

Auditing
A key aspect of any data security strategy is to maintain visibility into your data to ensure compliant usage. Companies need to make sure that they do not lose this capability as they migrate their data and infrastructure into the cloud. If you use security information event management (SIEM) tools today, it’s worth taking the time to decide on what cloud applications and transactions you should integrate into your reports.
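For example, if AWS is one of the platforms in scope, a minimal sketch (the chosen event type, time window, and the idea of forwarding to an existing SIEM pipeline are my assumptions, not a specific product integration) could pull recent CloudTrail events for ingestion into your reports:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudtrail = boto3.client("cloudtrail")

# Pull the last 24 hours of console sign-in events as an example feed;
# the event name and window are illustrative.
now = datetime.now(timezone.utc)
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    StartTime=now - timedelta(days=1),
    EndTime=now,
)

for event in events["Events"]:
    # In practice these records would be normalized and forwarded to the SIEM.
    print(event["EventTime"], event["EventName"], event.get("Username", "unknown"))
```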

By extending the controls listed above into your cloud environment, you can establish a common ground of good security practices that protect business enabling technology. With the right tools and strategy in place, it’s possible to stop saying no outright and instead come to the table enabled to empower relevant business demands while maintaining appropriate security and governance controls.

 

 

Ransomware Explained

August 18, 2017

By Ryan Hunt, PR and Content Manager, SingleHop

How it Works, Plus Tips for Prevention & Recovery
Ransomware attacks — a type of malware (a.k.a. malicious software) — are proliferating around the globe at a blistering pace. In Q1 2017, a new specimen emerged every 4.2 seconds!* What makes ransomware a go-to mechanism for cyber attackers? The answer is in the name itself.

How it works
Unlike other hacks, the point of ransomware isn’t to steal or destroy valuable data; it’s to hold it hostage.

Ransomware enters computer systems via email attachments, pop-up ads, outdated business applications and even corrupted USB sticks.

Even if only one computer is initially infected, ransomware can easily spread network-wide via the LAN or by gaining access to usernames and passwords.

Once the malware activates, the hostage situation begins: Data is encrypted and the user is instructed to pay a ransom to regain control.

Ransomware Prevention

  1. Install Anti-Virus/Anti-Malware Software
  2. But Be Sure to Update & Patch Software/Operating Systems
  3. Invest In Enterprise Threat Detection Systems and Mail Server Filtering
  4. Educate Employees on Network Security

What to do if your data is held hostage? If attacked, should your company pay?
Remember: Preventative measures are never 100% effective.

Paying the ransom might get you off the hook quickly, but will make you a repeat target for attack.

There’s a better way
Beat the attackers to the punch by investing in Cloud Backups and Disaster Recovery as a Service.

Backups
Daily Offsite Backups = You’ll Always Have Clean, Recent Copies of Your Data
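A minimal sketch of the idea, assuming an S3-style object store as the offsite target (the bucket name, source path, and daily scheduling are placeholders, not a recommendation of a specific tool):

```python
import datetime
import pathlib
import boto3

s3 = boto3.client("s3")
BUCKET = "example-offsite-backups"   # placeholder bucket name

def daily_backup(source_dir):
    """Upload every file under source_dir to a date-stamped offsite prefix."""
    stamp = datetime.date.today().isoformat()
    for path in pathlib.Path(source_dir).rglob("*"):
        if path.is_file():
            key = f"{stamp}/{path.relative_to(source_dir)}"
            s3.upload_file(str(path), BUCKET, key)

daily_backup("/var/data")  # schedule via cron or a task scheduler for daily runs
```

Keeping each day's copy under its own prefix means an infected machine cannot silently overwrite the clean copies it would need to hold hostage.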

Disaster Recovery
Disaster Recovery Solutions are crucial in the event Ransomware compromises your entire system. Here, you’ll be able to operate your business as usual via a redundant network and infrastructure. Sorry, Malware Ninjas.

Is the Cloud Moving Too Fast for Security?

July 28, 2017

By Doug Lane, Vice President/Product Marketing, Vaultive

In February 2017, a vulnerability was discovered in Slack which had the potential to expose the data of the company's reported four million daily active users. Another breach in February, on CloudFlare, a content delivery network, leaked sensitive customer data stored by millions of websites powered by the company. On March 7, the WikiLeaks CIA Vault 7 release exposed 8,761 documents on alleged agency hacking operations. On June 19, Deep Root Analytics, a conservative data firm, misconfigured an Amazon S3 server that housed information on 198 million U.S. voters. On July 12, Verizon had the same issue and announced a misconfigured Amazon S3 data repository at a third-party vendor that exposed the data of more than 14 million U.S. customers.

That's at least five major cloud application and infrastructure data breach incidents for 2017, and we're only in July. Add in the number of ransomware and other attacks during the first half of this year and it's clear the cloud has a real security problem.

By now, most everyone recognizes the benefits of the cloud: bringing new applications and infrastructure online quickly and scaling them to meet ever-changing business demands. Although highly valuable for the business side, when security teams lose control over how and where new services are implemented, the network is at risk and, subsequently, so is their data. Balancing the business's need to move at the speed of the cloud with the need to maintain security controls is becoming increasingly difficult, and the spike in data exposures and breaches shows that security teams are struggling to secure cloud use.

The Slack vulnerability is a great example at the application level. Slack is simple to use and implement, which has driven the application's record-breaking growth. Departments, teams, and small groups can easily spin up Slack without IT approval or support, and instances of the application can spread quickly across an organization. Although Slack patched the vulnerability identified in February before any known exposure occurred, if it had been exploited, the attacker could have had full access to and control over four million user accounts.

In the Verizon situation, a lack of control at the infrastructure level is what caused so many of their customers to be exposed this month. When servers can be brought online so easily and configured remotely by third-party partners, the right security protocols can be missed or ignored.

As more businesses move to the cloud and as cloud services continue to grow, organizations must establish a unified set of cloud security and governance controls for business-critical SaaS applications and IaaS resources. In most cases, cloud providers will have stronger security than any individual company can maintain and manage on-premises. However, each new service comes with its own security capabilities, which can increase risk because of feature gaps or human error during configuration. Adding encryption and policy controls independently of the vendor is a proven way for organizations to fully entrust their data to a cloud provider without giving up control over who can access it, while also making sure employees are compliant when using SaaS applications. These controls allow businesses to move at the speed of the cloud without placing their data at risk.
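As a rough sketch of the "encrypt independently of the vendor" idea (this uses the open-source cryptography library and a locally held key purely for illustration; it is not a description of any particular product's approach):

```python
from cryptography.fernet import Fernet

# The key is generated and held by the organization, never by the cloud provider.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer account data destined for a SaaS application"
ciphertext = cipher.encrypt(record)   # only ciphertext leaves the organization

# The provider, or an attacker who breaches it, sees only ciphertext;
# the data remains readable solely to holders of the key.
assert cipher.decrypt(ciphertext) == record
```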

The reality is that threats are increasing in frequency and severity. The people behind attacks are far more sophisticated and their intentions far more sinister. We, as individuals and businesses, entrust a mind-boggling amount of data to the cloud, but there is no way today to entirely prevent hackers from getting through the door at the service, infrastructure, or software provider. Remaining in control of your data as it traverses all the cloud services you use is the safest thing you can do to protect your business. Because, in the end, if they can't read it or use it, is data really data?

Guidance for Critical Areas of Focus in Cloud Computing Has Been Updated

July 26, 2017

Newest version reflects real-world security practices, future of cloud computing security

By J.R. Santos, Executive Vice President of Research, Cloud Security Alliance

Today marks a momentous day not only for CSA but for all IT and information security professionals as we release Guidance for Critical Areas of Focus in Cloud Computing 4.0, the first major update to the Guidance since 2011.

As anyone involved in cloud security knows, the landscape we face today is a far cry from what was going on 10, even five, years ago. To keep pace with those changes almost every aspect of the Guidance was reworked. In fact, almost 80 percent of it was rewritten from the ground up, and domains were restructured to better reflect the current state of cloud computing, as well as the direction in which this critical sector is heading.

For those unfamiliar with what is widely considered to be the definitive guide for cloud security, the Guidance acts as a practical, actionable roadmap for individuals and organizations looking to safely and securely adopt the cloud paradigm. This newest version includes significant content updates to address leading-edge cloud security practices and incorporates more of the various applications used in the security environment today.

Guidance 4.0 covers such topics as:

  • DevOps, continuous delivery, and secure software development;
  • Software Defined Networks, the Software Defined Perimeter, and cloud network security;
  • Microservices and containers;
  • New regulatory guidance and evolving roles of audits and compliance inheritance;
  • Using CSA tools such as the CCM, CAIQ, and STAR Registry to inform cloud risk decisions;
  • Securing the cloud management plane;
  • More practical guidance for hybrid cloud;
  • Compute security guidance for containers and serverless, plus updates to managing virtual machine security; and
  • The use of immutable, serverless, and “new” cloud architectures.

Today is the culmination of more than a year of input and review from the CSA and information security communities. Guidance 4.0 was drafted using an open research model (a herculean effort for those unfamiliar with the process), and none of it would have been possible without the assistance of Securosis, whose research analysts oversaw the project. We owe them—and everyone involved—a tremendous thanks.

You can learn more about the Guidance and read the updated version here.

Patch Me If You Can

July 24, 2017

By Yogi Chandiramani, Technical Director/EMEA, Zscaler

In May, the worldwide WannaCry attack infected more than 200,000 workstations. A month later, just as organizations were regaining their footing, we saw another ransomware attack, which impacted businesses in more than 65 countries.

What have we learned about these attacks?

  • Compromises/infections can happen no matter what types of controls you implement – zero risk does not exist
  • The security research community collaborated to identify indicators of compromise (IOCs) and provide steps for mitigation
  • Organizations with an incident response plan were more effective at mitigating risk
  • Enterprises with a patching strategy and process were better protected

Patching effectively
Two months before the attack, Microsoft released a patch for the vulnerability that WannaCry exploited. But, because many systems did not receive the patch, and because WannaCry was so widely publicized, the patching debate made it to companies’ board-level leadership, garnering the sponsorship needed for a companywide patch strategy.

Even so, the attack of June 27 spread laterally using the SMB protocol a month after WannaCry, by which time most systems should have been patched. Does the success of this campaign reflect a disregard for the threat? A lack of urgency when it comes to patching? Or does the problem come down to the sheer volume of patches?

Too many security patches
As we deploy more software and more devices to drive productivity and improve business outcomes, we create new vulnerabilities. Staying ahead of them is daunting, with the need to continually update security systems, and patch end-user devices running different operating systems and software versions. Along with patch and version management, there is change control, outage windows, documentation processes, post-patch support, and more. And it’s only getting worse.

The following graph illustrates the severity of vulnerabilities over time, and you can see that halfway through 2017, the number of disclosed vulnerabilities is already close to the total volume for all of 2016.

Source: National Vulnerability Database, part of the National Institute of Standards and Technology (NIST). (https://nvd.nist.gov/vuln-metrics/visualizations/cvss-severity-distribution-over-time)

The challenge for companies is the sheer number of patches that need to be processed to remain fully up to date (a volume that continues to increase). Technically speaking, systems will always be one step behind in terms of vulnerability patching.

Companies must become aware of security gap
In light of the recent large-scale attacks, companies should revisit their patching strategy as a part of their fundamental security posture. Where are the gaps? The only way to know is through global visibility — for example, visibility into vulnerable clients or identifying botnet traffic — which provides key insights in terms of where to start and focus.

Your security platform's access logs are a gold mine, providing data as well as context, with information such as who, when, where, and how traffic is flowing through the network. The following screen capture is a sample log showing a botnet callback attempt. With this information, you can see where to focus your attention and your security investments.

In the following example, you can identify potentially vulnerable browsers or plugins. It’s important to ensure that your update strategies include these potential entry points for malware, as well.
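As a simple sketch of that idea (the log format, user-agent pattern, and version thresholds below are invented for illustration, not taken from any particular gateway), outdated browser versions can be flagged directly from proxy or access logs:

```python
import re

# Minimum acceptable major versions; values are illustrative only.
MIN_VERSION = {"Chrome": 59, "Firefox": 54}

UA_PATTERN = re.compile(r"(Chrome|Firefox)/(\d+)")

def flag_outdated(log_lines):
    """Yield log lines whose user-agent reports an outdated browser version."""
    for line in log_lines:
        match = UA_PATTERN.search(line)
        if match:
            browser, major = match.group(1), int(match.group(2))
            if major < MIN_VERSION.get(browser, 0):
                yield line

sample = ['10.0.0.12 GET /portal "Mozilla/5.0 ... Chrome/49.0.2623.112 ..."']
print(list(flag_outdated(sample)))  # flags the outdated Chrome 49 client
```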

These are but two examples of potential gaps that can be easily closed with the appropriate insight into what software and versions are being used within an organisation. As a next step, companies should focus on patching those gaps with the highest known risk as a starting point.

But patching remains an onerous, largely manual task that is difficult to manage. A better alternative is a cloud-delivered security-as-a-service solution, which automates updates and the patching process. With threat actors becoming increasingly inventive as they design their next exploits, it pays to have a forward-thinking strategy that reduces the administrative overhead, improves visibility, and delivers protections that are always up to date.

 

Cyberattacks Are Here: Security Lessons from Jon Snow, White Walkers & Others from Game of Thrones

July 19, 2017

An analysis of Game of Thrones characters as cyber threats to your enterprise.

By Virginia Satrom, Senior Public Relations Specialist, Forcepoint

As most of you have probably seen, we recently announced our new human point brand campaign. Put simply, we are leading the way in making security not just a technology issue, but a human-centric one. In light of this, I thought it would be fun to personify threats to the enterprise with one of my favorite shows – Game of Thrones. Surprisingly, there are a lot of lessons that can be learned from GoT in the context of security.

Before we start, I’d like to provide a few disclaimers:

  • This is meant to be tongue in cheek, not literal, so take off your troll hat for the sake of some interesting analogies.
  • This is not comprehensive. Honestly, I could have written another 5,000 words around ALL the characters that could be related to threats.
  • This is based on the Game of Thrones television series, not the books.
  • And finally, spoilers people. There are spoilers if you are not fully caught up through Season 6. You’ve been warned 🙂

Now, let’s dive in, lords and ladies…

What makes this Game of Thrones analysis so interesting is that these characters, depending on external forces, can change drastically from season to season. Therefore, our favorite character could represent a myriad of threats during a given season or the series overall. This concept relates to what we call ‘The Cyber Continuum of Intent’ which places insiders in your organization on a continuum which can move fluidly from accidental to malicious given their intent and motivations. There are also many instances where a character is a personification of a cyber threat or attack method.

Let's start with one of the most devious characters – Petyr Baelish, aka Littlefinger. Littlefinger is a good example of an advanced evasion technique (AET), which maneuvers throughout your network delivering an exploit or malicious content into a vulnerable target while the traffic looks normal, so security devices pass it through. As Master of Coin and a wealthy business owner, he operates in the innermost circle of King's Landing, while secretly undermining those close to him to raise his standing within Westeros. He succeeds, in fact, by marrying Lady Tully to ultimately become the Protector of the Vale, with great influence over its heir – Robin Arryn of the Vale. Looking at his character from another angle, Littlefinger could also be considered a privileged user within a global government organization or enterprise. He is trusted by Ned Stark with Ned's plans to expose the Lannisters' lineage and other misdoings, but he ultimately uses that information and knowledge for personal gain – causing Ned's demise. And let's not forget that Littlefinger also betrays Sansa Stark's confidence and trust, marrying her to Ramsay Snow.

Varys and his ‘little birds’ equate to bots, and collectively, a botnet. Botnets are connected devices in a given network that can be controlled via an owner with command and control software. Of course, Varys (aptly also known as the Spider) commands and controls his little birds through his power, influence and also money. When it comes to security, botnets are used to penetrate a given organization’s systems – often through DDoS attacks, sending spam, and so forth. This example is similar to Turkish hackers who actually gamified DDoS attacks, offering money and rewards to carry out cybercrime.

Theon Greyjoy begins the series as a loyal ward to Eddard Stark and friend to Robb and Jon, but through his own greed and hunger for power becomes a true malicious insider. He is also motivated by loyalty to the family and home he has so long been away from. He overtook the North with his fellow Ironborn, fundamentally betraying the Starks.

Theon Greyjoy and Ramsay Bolton (formerly Snow) are no strangers to one another, and play out a horrific captor/captive scenario through Seasons 4 and 5. Ramsay is similar to ransomware, which usually coerces its victims into paying a ransom through fear. In the enterprise, this means a ransom is demanded in Bitcoin for the return of business-critical data or IP. Additionally, Ramsay holds Rickon Stark hostage in Season 6. He agrees to return Rickon to Jon Snow and Sansa Stark, but has his men kill Rickon right as the siblings reunite. This is often the case with ransomware that infiltrates the enterprise – even if the ransom is paid, data is not returned.

Gregor Clegane, also known as The Mountain, uses sheer brute force to cause mayhem within Westeros, which would be similar to brute force cracking. This is a trial and error method used to decode encrypted data, through exhaustive effort. The Mountain is used for his strength and training as a combat warrior, defeating a knight in a duel in Season 1, and in Season 4 defeating Prince Oberyn Martell in trial by combat – in a most brutal way. He could also be compared to a nation state hacker, with fierce loyalty to the crown — particularly the Lannister family. He is also a reminder that physical security can be as important as virtual for enterprises.

Depending on the season or the episode this can fluctuate, but 99% of the time I think we can agree that Cersei Lannister is a good example of a malicious insider and, more specifically, a rogue insider. She is keen to keep her family in power and will do whatever it takes to maintain control over their destiny. My favorite part about Cersei is that though she is extremely easy to loathe, throughout the entire series it is clear she loves her children and would do anything for them. After the last of her children dies, she quickly evolves from grief to rage. As the adage says, sad people harm themselves but mad people harm others. Cersei can be likened to a disgruntled employee, facing challenges within or outside of the workplace, who intends to steal critical data with malicious intent.

If we take a look at Seasons 4 and 5, and the fall of Jon Snow, many of the Night’s Watch members are good examples of insiders. Olly, for example, starts out as a loyal brother among the Night’s Watch. If he happened to leak any intel that could harm Jon Snow’s leadership or well-being, it would have been accidental. This could be compared to an employee within an organization who is doing their best, but accidentally clicks on a malicious link. However, as Snow builds his relationships with the wildlings, Olly cannot help but foster disdain and distrust toward Snow for allying with the people that harmed his family. Conversely, Alliser Thorne was always on the malicious side of the continuum, having it out for Snow especially after losing the election to be the 998th Lord Commander of the Night’s Watch. Ultimately, Thorne’s rallying of the Night’s Watch to his side led to Snow’s demise (even if it was only temporary).

Sons of the Harpy mirror a hacktivist group fighting the rule of Daenerys Targaryen over Meereen. They wreak havoc on Daenerys’s Unsullied elite soldiers and are backed by the leaders who Daenerys overthrew – the ‘Masters’ of Meereen – in the name of restoring the ‘tradition’ of slavery in their city. They seek to overthrow Daenerys and use any means necessary to ensure there is turmoil and anarchy. Hacktivists are often politically motivated. If the hacktivist group is successful, it can take the form of a compromised user on the Continuum – through impersonation. After all, the most pervasive malware acts much like a human being.

Let’s not forget about the adversaries that live beyond The Wall – The White Walkers. The White Walkers represent a group of malicious actors seeking to cause harm in the Seven Kingdoms, or for this analogy, your network. What is interesting about these White Walkers is that they are a threat that has been viewed as a legend or folklore except for those that have actually seen them. However, we know that this season they become very real. Secondly, what makes the White Walkers so remarkable is that we do not know their intentions or motivations, they cannot be understood like most of these characters seeking power or revenge. I argue that this makes them the most dangerous and hardest threat to predict. And lastly, if we think about how the White Walkers came to be, we know that they were initially created to help defend the Children of the Forest against the First Men. But, we now know that they have grown exponentially in number and begun to take on a life (pun intended) of their own. This is equated to the use of AI in the technology space which some fear will overtake us humans.

In my mind The Wall itself could be considered a character, and therefore a firewall of sorts. Its purpose is to keep infiltration out; however, as we learned at the end of Season 6, this wall is penetrable. This leads me to the main takeaway – enterprises and agencies face a myriad of threats and should not rely on traditional perimeter defenses, but have multi-layered security solutions in place.

With all of these parallels, it becomes clear that people are the true constant complexity in security. It is known that enterprises must have people-centric, intelligent solutions to combat the greatest threats like those faced in Westeros.

CSA Industry Blog Listed Among 100 Top Information Security Blogs for Data Security

July 10, 2017

Our blog was recently ranked 35th among 100 top information security blogs for data security professionals by Feedspot. Among the other blogs named to the list were The Hacker News, Krebs on Security and Dark Reading. Needless to say, we’re honored to be in such good company.

To compile the list, Feedspot's editorial team and expert reviewers assessed each blog on the following criteria:

  • Google reputation and Google search ranking;
  • Influence and popularity on Facebook, Twitter and other social media sites; and
  • Quality and consistency of posts.

We strive to offer our readers a broad range of informative content that provides not only varying points of view but also information you can use as a jumping-off point to enhance your organization's cloud security.

We’re glad to be in such great company and hope that you’ll take the time to visit our blog. We invite you to sign up to receive it and other CSA announcements. We think you’ll like what you see.

Locking-in the Cloud: Seven Best Practices for AWS

July 6, 2017

By Sekhar Sarukkai, Co-founder and Chief Scientist, Skyhigh Networks

With the voter information of 198 million Americans exposed to the public, the Deep Root Analytics leak brought cloud security to the forefront. The voter data was stored in an AWS S3 bucket with minimal protection. In fact, the only level of security that separated the data from being outright published online was a simple six-character Amazon sub-domain. Simply put, Deep Root Analytics wasn’t following some of the most basic AWS security best practices.

More importantly, this leak demonstrated how essential cloud security has become to preventing data leaks. Even though AWS is the most popular IaaS system, its security, especially on the customer end, is frequently neglected. This leaves sensitive data vulnerable to both internal and external threats. External threats are regularly covered in the news, from malware to DDoS hacking. Yet the Deep Root Analytics leak proves that insider threats can be dangerous, even if they are based on negligence rather than malicious intent.

Amazon has already addressed the issue of outside threats through its numerous security investments and innovations, such as AWS Shield for DDoS attacks. Despite extensive safety precautions, well-organized and persistent hackers could still break Amazon's defenses. However, Amazon cannot be blamed for most AWS security breaches, as it is estimated that 95 percent of cloud security breaches by 2020 will be the customer's fault.

This is because AWS is based on a system of cooperation between Amazon and its customers. This system, known as the shared responsibility model, operates on the assumption that Amazon is responsible for safeguarding and monitoring the AWS infrastructure and responding to fraud and abuse. On the other hand, customers are responsible for the security “in” the cloud. Specifically, they are in charge of configuring and managing the services themselves, as well as installing updates and security patches.

AWS Best Practices

The following best practices serve as a baseline for securely configuring AWS.

  1. Activate CloudTrail log file validation:

CloudTrail log validation ensures that any changes made to a log file can be identified after they have been delivered to the S3 bucket. This is an important step towards securing AWS because it provides an additional layer of security for S3, something that could have prevented the Deep Root Analytics leak.
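A minimal sketch of enabling log file validation on an existing trail with boto3 (the trail name is a placeholder):

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Enable digest-file generation so later tampering with delivered log files
# can be detected; "example-trail" is a placeholder trail name.
cloudtrail.update_trail(
    Name="example-trail",
    EnableLogFileValidation=True,
)
```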

  2. Turn on access logging for CloudTrail S3 buckets:

Log data captured by CloudTrail is stored in the CloudTrail S3 buckets, which can be useful for activity monitoring and forensic investigations. With access logging turned on, customers can identify unauthorized or unwarranted access attempts, as well as track these access requests, improving the security of AWS.

  3. Use multifactor authentication:

Multifactor authentication (MFA) should be activated when logging into both root and Identity and Access Management (IAM) user accounts. For the root user, the MFA should be tied to a dedicated device and not any one user’s personal device. This would ensure that the root account is accessible even if the user’s personal device is lost or if that user leaves the company. Lastly, MFA needs to be required for deleting CloudTrail logs, as hackers are able to avoid detection for longer by deleting S3 buckets containing CloudTrail logs.

  4. Rotate IAM access keys regularly:

When sending requests between the AWS Command Line Interface (CLI) and the AWS APIs, an access key is needed. Rotating this access key after a standardized and selected number of days decreases the risk of both external and internal threats. This additional level of security ensures that data cannot be accessed with a lost or stolen key if it has been sufficiently rotated.
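A hedged sketch of age-based rotation (the 90-day threshold and the user name are placeholders): list a user's access keys, create a replacement for any key past the threshold, and delete the old key once dependent systems have been updated.

```python
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
MAX_AGE = timedelta(days=90)   # placeholder rotation threshold

def rotate_old_keys(user_name):
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        age = datetime.now(timezone.utc) - key["CreateDate"]
        if age > MAX_AGE:
            new_key = iam.create_access_key(UserName=user_name)
            # ...update applications to use new_key before removing the old one...
            iam.delete_access_key(UserName=user_name, AccessKeyId=key["AccessKeyId"])

rotate_old_keys("example-service-user")
```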

  5. Minimize number of discrete security groups:

Account compromise can come from a variety of sources, one of which is misconfiguration of a security group. By minimizing the number of discrete security groups, enterprises can reduce the risk of misconfiguring an account.

  6. Terminate unused access keys:

AWS users must terminate unused access keys, as access keys can be an effective method for compromising an account. For example, if someone leaves the company and still has access to a key, that person would be able to use it until its termination. Similarly, if old access keys are deleted, external threats only have a brief window of opportunity. It is recommended that access keys left unused for 30 days be terminated.
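A minimal sketch of finding keys that have gone unused for 30 days, using the last-used timestamp IAM records for each key (the user name is a placeholder, and deactivating rather than deleting is my own conservative choice for illustration):

```python
from datetime import datetime, timedelta, timezone
import boto3

iam = boto3.client("iam")
UNUSED_AFTER = timedelta(days=30)   # threshold from the guidance above

def deactivate_unused_keys(user_name):
    for key in iam.list_access_keys(UserName=user_name)["AccessKeyMetadata"]:
        last_used = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
        when = last_used["AccessKeyLastUsed"].get("LastUsedDate")
        if when is None or datetime.now(timezone.utc) - when > UNUSED_AFTER:
            # Disable rather than delete outright so access can be restored if needed.
            iam.update_access_key(
                UserName=user_name,
                AccessKeyId=key["AccessKeyId"],
                Status="Inactive",
            )

deactivate_unused_keys("example-former-employee")
```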

  7. Restrict access to CloudTrail bucket:

No user or administrator account should have unrestricted access to CloudTrail logs, as such accounts are susceptible to phishing attacks. Even if users have no malicious intent, their credentials can still be compromised. As a result, access to the CloudTrail logs needs to be restricted to limit the risk of unauthorized access.

These best practices for the AWS infrastructure could go a long way in securing your sensitive information. By applying even a few of them to your AWS configuration, sensitive information could remain secure, and another Deep Root Analytics leak could be prevented in the future.

Clouding Within the Lines: Keeping User Data Where It Belongs in the Age of GDPR

July 3, 2017

By Nathan Narayanan, Director of Product Management, Netskope

The importance of data residency hygiene has been recognized for a long time, but the cloud services that keep appearing tend to focus more on user productivity and less on user data privacy. The highly productive nature of these services increases their adoption, resulting in a higher risk to the privacy of data.

According to Gartner, by May 25, 2018 (the day the EU's General Data Protection Regulation, or GDPR, takes effect), less than 50 percent of all organizations will be fully compliant with it. It's time to take steps to keep up.

Here are some things to consider.

Identify important data. Enforcing a very broad policy on all types of content can be too restrictive and may hinder productivity. Enterprises will need to identify critical data that needs to be controlled within the geo-boundaries. This may be data relating to regulatory mandates, such as health records and personally identifiable information, or even company confidential data. Content that does not fall under these constraints need not be controlled within the geo-boundaries.

Determine your geo-boundary and monitor movement of your data. According to the Netskope Cloud Report, 40.7 percent of cloud services replicate data in geographically dispersed data centers. With this in mind, to keep your important data where it belongs, you also need to determine the boundaries within which the data should reside. In some cases, PII may be required to stay within a region such as the EU, and in other cases it may be required to stay within the narrower bounds of a country such as Germany. A CASB can perform content inspection to identify important data as well as report on the movement of such data. Controlling data that travels beyond the geo-boundaries requires the CASB solution to map IP addresses to geographical locations and proactively apply policies to keep the data where it should reside.
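As a highly simplified sketch of the policy logic involved (the region names, the PII flag, and the allow/encrypt/block decision are stand-ins for what a CASB resolves from IP geolocation, content inspection, and service metadata at far greater fidelity):

```python
# Placeholder EU-only boundary; a real deployment derives the destination
# region from IP geolocation and the cloud service's data-center metadata.
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}

def evaluate_transfer(destination_region, contains_pii):
    """Return 'allow', 'encrypt', or 'block' for a proposed data transfer."""
    if not contains_pii:
        return "allow"                      # non-regulated content is unrestricted
    if destination_region in ALLOWED_REGIONS:
        return "allow"
    return "block"                          # or "encrypt" for approved exceptions

print(evaluate_transfer("us-east-1", contains_pii=True))   # block
print(evaluate_transfer("eu-west-1", contains_pii=True))   # allow
```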

Ensure cloud services enforce geo-control. Get visibility into the cloud services used by your organization and understand how ready these applications are for enterprise use. A CASB can also allow you to rate cloud services from a GDPR-readiness standpoint. This rating is usually based on research into the cloud service and considers factors such as SLAs around data residency, the level of encryption of the content processed, and the terms of the agreement between the enterprise and the cloud service. For example, applications that take ownership of user data will be rated poorly for GDPR readiness. Since 66.9 percent of cloud services do not specify in their terms of service whether you or they own your data, finding out this information might take longer than you think.

Build policies to ensure data is within its geo-boundaries. No matter how ready the cloud services are, there may be a legitimate need to move data outside the region for business reasons. Also, sometimes employees may inadvertently move data outside its geo-boundaries. There are several steps you can take to proactively enforce geo-control in these situations. A CASB solution can help with enforcing a policy so that data is encrypted if moved outside the geo-boundaries for legitimate reasons. In all other cases, enforce policies to simply stop data from leaving the geo-boundary.

Remember employees will often travel outside the region and will need access to sensitive data so that they can continue to be productive. Ensure policies for such employees continue to respect data residency. It may be easier to simply block traffic to or from certain countries based on how your business is conducted.

Build a process for tighter geo-control. Employees play a big part in data residency hygiene. Reduce risk by educating users on a periodic basis. A CASB solution can be set up to coach the employee at the time a risky data transfer is conducted. Coaching can also be used to discourage applications that are not ready for geo-control. It is also important to continually monitor and sharpen the policies as you learn how your sensitive data travels.

Want to learn more about GDPR and the cloud? Download Managing the Challenges of the Cloud Under the New EU General Data Protection Regulation white paper.