CCSK Success Stories: From a Data Privacy Consultant


By the CSA Education Team

This is the fourth part in a blog series on cloud security training, in which we interview Satishkumar Tadapalli, a certified and seasoned information security and data privacy consultant. Tadapalli has 12+ years of multi-functional IT experience in pre-sales, consulting, risk advisory, and business analysis. He has rich experience in information protection, data privacy, risk management, and information security, including various ISO 27001 implementations and audits, and is currently working for a London-based bank as a risk advisor, looking after third-party assurance and cloud risk assessments.

Satish holds several certifications including: CISM, CIPM, CIPT, CCSK, ISO27001 LA, CISRA, CPISI, and ITIL V3.

Can you describe your role?

In this diverse, cloud-connected, dynamic world, it’s not easy for me to describe a specific role, as I’m required to wear multiple hats depending on the table at which I’m seated. Having said that, currently I’m performing a risk advisory role at one of the largest banks in the UK. This position keeps me challenged in performing contractual risk assurance, data privacy consultations, and cloud risk assessments of 3rd-, 4th-, and 5th-party vendors, and in governing supplier risk-assurance activities to ensure that consumers and providers adhere to privacy and security principles and keep customer data safe and secure.

What got you into cloud security in the first place? What made you decide to earn your CCSK?

Cloud security is an interesting and evolving topic for me. I believe cloud adoption is no longer a choice for organizations in this era. For this reason, keeping myself current on must-have cloud knowledge drew my attention to cloud security. Once I’d decided to get hands-on with cloud security, I felt the CCSK was my go-to credential for getting started with the concepts, as it covers the foundations of real-world, complex scenarios in cloud implementation, migration, adoption issues, cloud evaluation, and many other areas.

“… makes you not only think from the cloud deployment view, but also provides guidance for both cloud service provider and consumer views, which is uniquely valuable and helps in real-world solutioning of risks from vendor to consumer, especially when you wear multiple hats.”

Could you elaborate on how the materials covered in the exam specifically helped in that way?

Sure. As we all know, the CCSK isn’t an exam about a specific cloud product. Rather, I think the intention of this exam is to evaluate how well candidates understand the key elements or domains of cloud models and services. Hence, the exam expects you to be aware of key areas such as governance, legal challenges, incident response, compliance, and risk management, which are essential and challenging in cloud adoption for both consumers and providers of cloud services.

How did you prepare for the CCSK exam?

I mainly followed the CCSK exam preparation kit available on the CSA site; that, plus my experience in security and third-party risk assessment, helped me pass the CCSK exam.

If you could go back and take it again, how would you prepare differently?

As I mentioned earlier, the cloud is a constantly changing world, with new threats and challenges evolving almost every day. Hence, I would refresh my knowledge with the current study materials from CSA and explore the real challenges and solutions industries face in cloud implementation and adoption.

Were there any specific topics on the exam that you found trickier than others?

I felt that the legal and compliance management domains, along with security incident handling, were quite interesting, primarily because these areas bring different challenges to cloud services, mainly in detailing the roles, responsibilities, and limitations for both cloud consumers and cloud providers.

What is your advice to people considering earning their CCSK?

I strongly advise CCSK aspirants to look at this exam as a foundational course and use it as a stepping stone in the vast cloud security journey. The CCSK won’t just differentiate you from others by giving you a credential; it will also help you on a longer journey irrespective of your role (cloud consumer, provider, independent cloud risk advisor, etc.) because its essential concepts aren’t specific to any cloud vendor or solution.

Lastly, what material from the CCSK has been the most relevant in your work and why?

It is a bit hard for me to point out any one specific domain, as most of the domains and materials were and are relevant to my work, since I’m required to play multiple roles given the nature of business today. That said, I use the Security Guidance and the Cloud Controls Matrix the most, as I deal with vendor risk management. These help clarify key roles and responsibilities between the cloud provider and consumer. In addition, these documents act as a guide for reassuring myself of cloud concepts.

Interested in learning more about cloud security training? Discover our free prep-kit, training courses, and resources to prepare to earn your Certificate of Cloud Security Knowledge here.

Invest in your future with CCSK training

Prying Eyes Inside the Enterprise: Bitglass’ Insider Threat Report

By Jacob Serpa, Product Marketing Manager, Bitglass


When words like cyberattack are used, they typically conjure up images of malicious, external threats. While hackers, malware, and other outside entities pose a risk to enterprise security, they are not the only threats that need to be remediated. 

Insider threats, which involve either malicious or careless insiders, are another significant threat to corporate data that must be addressed. Fortunately, Bitglass has the latest information on this topic. Read on to learn more.

In Threatbusters, Bitglass’ 2019 Insider Threat Report, Bitglass set out to learn about the state of insider attacks, as well as to uncover what organizations are doing to defend against them. This was accomplished by partnering with a cybersecurity community and surveying the IT professionals therein. A breadth of survey questions yielded a wealth of information, ranging from the tools that organizations are using to defend against threats, to how long it takes them to recover from these types of attacks. Two examples can be found below.

The frequency of attack

A staggering 73 percent of survey respondents claimed that insider threats are becoming a more common occurrence. In 2017, when Bitglass released its previous Insider Threat Report, this number was only 56 percent. Additionally, 59 percent of respondents revealed that their organization had experienced at least one insider attack within the last 12 months. For organizations to stay secure in today’s high-speed, cloud-first world where data is shared, accessed, and downloaded more rapidly and widely than ever before, appropriate security controls simply have to be put in place.

The damage done 

Eighty-seven percent of respondents said that it was either moderately difficult or very difficult to determine the damage done in the wake of an insider attack. This should not come as a surprise. Because insider attacks involve the use of legitimate credentials, distinguishing legitimate user activity from threatening user activity can be challenging (especially because said behavior can go unnoticed for extended periods of time if the proper tools are not in place). Naturally, this means that it can be difficult to ascertain the extent of the damage that these authorized users have done.

The above items are only a sample of what Bitglass was able to uncover in its most recent research. To learn more about insider attacks and how organizations are addressing them, download the full report.

CSA STAR – The Answer to Less Complexity, Higher Level of Compliance, Data Governance, Reduced Risk and More Cost-Effective Management of Your Security and Privacy System

By John DiMaria, Assurance Investigatory Fellow, Cloud Security Alliance


We just launched a major refresh of the CSA STAR (Security, Trust and Assurance Risk) program, and if you were at the CSA Summit at RSA, you got a preview of what’s in store. So let me put things in a bit more context regarding the evolution of STAR.

The more complex systems become, the less secure they become, even though security technologies improve. There are many reasons for this, but it can all be traced back to the problem of complexity. Why? Because we give a lot of attention to technology, and we have built increasing silos around a plethora of regulations and standards. As a result, we become fragmented and overly complex.

The adversary works in the world of the stack, and that complexity is where they thrive.

Ron Ross, Senior Scientist and Fellow at NIST

Complex systems:

  • have more independent processes, which creates more security risks.
  • have more interfaces and interactions, which create more security risks.
  • are harder to monitor and, therefore, more likely to have untested, unaudited portions.
  • are harder to develop and implement securely.
  • are harder for employees and stakeholders to understand and be trained on.

By using a single system for the ongoing management of compliance, regulatory, legal, and information security obligations, overlapping requirements can be identified, efficiencies leveraged, and greater visibility and assurance provided to the organization.
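The idea of identifying overlapping requirements across frameworks can be sketched in a few lines. The frameworks are real names from this post, but the control labels below are invented placeholders, not drawn from the actual standards:

```python
# Hypothetical illustration: finding overlapping requirements across
# frameworks so a single control can satisfy several obligations at once.
# The control labels are invented for the example.

FRAMEWORK_CONTROLS = {
    "ISO 27001": {"access-control", "encryption-at-rest", "incident-response", "asset-inventory"},
    "SOC 2":     {"access-control", "encryption-at-rest", "change-management"},
    "GDPR CoC":  {"encryption-at-rest", "incident-response", "data-retention"},
}

def shared_controls(frameworks):
    """Return the controls required by every listed framework."""
    sets = [FRAMEWORK_CONTROLS[f] for f in frameworks]
    return sorted(set.intersection(*sets))

def coverage_report():
    """Map each control to the frameworks that require it."""
    report = {}
    for framework, controls in FRAMEWORK_CONTROLS.items():
        for control in controls:
            report.setdefault(control, []).append(framework)
    return report

if __name__ == "__main__":
    # One control satisfying all three obligations at once:
    print(shared_controls(["ISO 27001", "SOC 2", "GDPR CoC"]))
```

A mapping like this is what lets a single managed control be audited once and counted toward several overlapping requirements.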

CSA STAR: Built to Support

To respond to these growing business concerns, the Cloud Security Alliance (CSA) created the Cloud Controls Matrix (CCM). Developed in conjunction with an international industry working group, it specifies common controls that are relevant for cloud security and is the foundation on which the three pillars of CSA STAR are built.

Taking the same approach, we recently released the GDPR Code of Conduct (CoC). The GDPR CoC demonstrates adherence to GDPR privacy requirements, streamlines contracting, accelerates sales cycles, and, in conjunction with CSA STAR, provides assurance of data privacy to the cloud customer.

CSA STAR is being recognized as the international harmonized solution, leading the way in trust for cloud providers, users, and their stakeholders by providing an integrated, cost-effective solution that decreases complexity and increases trust and transparency, while enabling organizations to secure their information, protect against cyber threats, reduce risk, and strengthen their information governance. It creates trust and accountability in the cloud market with increasing levels of transparency and assurance. What’s more, it answers an increasingly complex and resource-demanding compliance landscape by providing technical standards, an integrated certification and attestation framework, and a public registry of trusted data.

The STAR Registry documents the security and privacy controls provided by popular cloud computing offerings. This publicly accessible registry allows cloud customers to assess their security providers, make better procurement decisions, and manage their supply chain. Additionally, it allows cloud service providers (CSPs) to benchmark themselves against similar CSPs in their industry.

STARWatch is a SaaS application that helps organizations manage compliance with CSA STAR Registry requirements. It delivers the content of the CCM and the Consensus Assessments Initiative Questionnaire (CAIQ) in a database format, enabling users to manage the compliance of cloud services with CSA best practices. STARWatch can then be used for benchmarking and/or third-party risk management.

While it is understood that ISO/IEC 27001, the international management systems standard for information security, and SOC 2 are both widely recognized and respected, their requirements are more generic. As such, there can be a perception that they do not focus on certain areas of security that are critical for particular sectors, such as cloud security, in enough detail.

By adopting STAR as an extension of your ISO/IEC 27001 or SOC 2 system, you’ll be sending a clear message to existing and potential customers that your security systems are robust and have addressed the specific issues critical to cloud security.

STAR Certification can boost customer and stakeholder confidence, enhance your corporate reputation, and give your business a competitive advantage.

Take the STAR Challenge

Take the first step in evaluating how your organization stacks up against the CCM. Fill out the self-assessment using the CAIQ and the CCM. You can then upload your information into the STAR Registry, taking credit for your compliance efforts.

Additionally, you can evaluate yourself against the GDPR Code of Conduct. Just fill out the self-assessment, which can then be uploaded to the STAR Registry along with your Statement of Adherence. Our team of experts will evaluate your submission and either respond with questions or approve it for posting. Again, you’ll be making a major statement about your compliance posture.

Once you have completed this step (or along the way) you can make decisions on whether there is a business case to move into Level 2 (certification and/or attestation).

Contact us to find out more about CSA STAR and the opportunities available for you to contribute and have a voice in this growing area of increasing trust and transparency in the cloud.

Healthcare Breaches and the Rise of Hacking and IT Incidents

By Jacob Serpa, Product Marketing Manager, Bitglass


In the course of their day-to-day operations, healthcare organizations handle an extensive amount of highly sensitive data. From Social Security numbers to medical record numbers and beyond, it is imperative that these personal details are properly secured. 

Each year, Bitglass conducts an analysis and uncovers how well healthcare organizations are protecting their data. In 2019’s report, we detail the state of security in healthcare as well as shed light on recent breach trends in the vertical. Read on to learn more.

Bitglass’ 2019 Healthcare Breach Report analyzes data stored in the Department of Health and Human Services’ “Wall of Shame,” a database wherein details about healthcare breaches are stored. By scrutinizing this data set, Bitglass uncovered information related to the size of healthcare breaches, the causes of healthcare breaches, the states in which these breaches occur, and much more, over the last few years. A snapshot of some of this data is provided below.

The rise of hacking and IT incidents

Over the last few years, the threat landscape has been shifting in healthcare. It used to be that lost and stolen devices were the leading contributor to exposed data. However, each year since 2014, the number of breaches caused by lost and stolen devices has decreased. At the same time, hacking and IT incidents have enabled more and more breaches each year – in 2018, they were the leading cause of breaches in healthcare. 

The decreasing numbers of healthcare breaches

Despite the above, 2018 saw the number of healthcare breaches reach its lowest point in the last few years. Obviously, this is good news. While healthcare firms need to do something to address the growing number of hacking and IT incidents that are exposing their data, the fact that the overall breach number is down still bodes well for the industry’s progress in securing sensitive data. 

To learn more about the above findings as well as other interesting facts and figures, download the full 2019 Healthcare Breach Report.


12 Ways Cloud Upended IT Security (And What You Can Do About It)

By Andrew Wright, Co-founder & Vice President of Communications, Fugue


The cloud represents the most disruptive trend in enterprise IT over the past decade, and security teams have not escaped turmoil during the transition. It’s understandable for security professionals to feel like they’ve lost some control in the cloud and feel frustrated while attempting to get a handle on the cloud “chaos” in order to secure it from modern threats.

Here, we take a look at the ways cloud has disrupted security, with insights into how security teams can take advantage of these changes and succeed in their critical mission to keep data secure.

1. The cloud relieves security of some big responsibilities

Organizations liberate themselves from the burdens of acquiring and maintaining physical IT infrastructure when they adopt the cloud, and this means security is no longer responsible for physical infrastructure. The Shared Responsibility Model of the cloud dictates that cloud service providers (CSPs) such as AWS and Azure are responsible for the security of the physical infrastructure, while CSP customers (that’s you!) are responsible for the secure use of cloud resources. There’s a lot of misunderstanding out there about the Shared Responsibility Model, however, and that brings risk.

2. In the cloud, developers make their own infrastructure decisions

Cloud resources are available on-demand via Application Programming Interfaces (APIs). Because the cloud is self-service, developers move fast, sidestepping traditional security gatekeepers. When developers spin up cloud environments for their applications, they’re configuring the security of their infrastructure. And developers can make mistakes, including critical cloud resource misconfigurations and compliance policy violations.
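As a hedged illustration of the kind of mistake an automated check can catch, here is a minimal sketch that flags ingress rules exposing admin ports to the whole internet. The rule fields are simplified assumptions modeled loosely on a security-group rule, not a real cloud API:

```python
# Minimal sketch of a misconfiguration check a security team might run
# against developer-defined infrastructure. Field names are invented.

RISKY_PORTS = {22, 3389}  # SSH and RDP

def find_open_admin_ports(rules):
    """Flag ingress rules that expose admin ports to the whole internet."""
    findings = []
    for rule in rules:
        if rule["cidr"] == "0.0.0.0/0" and rule["port"] in RISKY_PORTS:
            findings.append(f"port {rule['port']} open to the world")
    return findings

rules = [
    {"port": 443, "cidr": "0.0.0.0/0"},   # public HTTPS: fine
    {"port": 22,  "cidr": "0.0.0.0/0"},   # SSH open to everyone: risky
    {"port": 22,  "cidr": "10.0.0.0/8"},  # SSH from an internal range: fine
]
print(find_open_admin_ports(rules))  # ['port 22 open to the world']
```

Real policy engines evaluate far richer rule sets, but the shape is the same: codify what "secure" means, then test every configuration against it.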

3. And developers change those decisions constantly

Organizations can innovate faster in the cloud than they ever could in the datacenter. Continuous Integration and Continuous Deployment (CI/CD) means continuous change to cloud environments. And it’s easy for developers to change infrastructure configurations to perform tasks like getting logs from an instance or troubleshooting an issue. So, even if the security of their cloud infrastructure was correct on day one, a misconfiguration vulnerability may have been introduced on day two (or hour two).

4. The cloud is programmable and can be automated

Because cloud resources can be created, modified, and destroyed via APIs, developers have ditched web-based cloud “consoles” and taken to programming their cloud resources using infrastructure-as-code tools like AWS CloudFormation and HashiCorp Terraform. Massive cloud environments can be predefined, deployed on demand, and updated at will, programmatically and with automation. These infrastructure configuration files include the security-related configurations for critical resources.

5. There are more kinds of infrastructure in the cloud to secure

In certain respects, security in the datacenter is easier to manage. You have your network, firewalls, and servers on racks. The cloud has those too, in virtualized form. But the cloud also produced a flurry of new kinds of infrastructure resources, like serverless and containers. AWS alone has introduced hundreds of new kinds of services over the past few years. Even familiar things like networks and firewalls operate in unfamiliar ways in the cloud. All require new and different security postures.

6. There’s also more infrastructure in the cloud to secure

There are simply more cloud infrastructure resources to track and secure, and due to the elastic nature of the cloud, “more” varies by the minute. Teams operating at scale in the cloud may be managing dozens of environments across multiple regions and accounts, and each may involve tens of thousands of resources that are individually configured and accessible via APIs. These resources interact with each other and require their own identity and access management (IAM) permissions. Microservice architectures compound this problem.

7. Cloud security is all about configuration—and misconfiguration

Cloud operations is all about the configuration of cloud resources, including security-sensitive resources such as networks, security groups, and access policies for databases and object storage. Without physical infrastructure to concern yourself with, security focus shifts to the configuration of cloud resources to make sure they’re correct on day one, and that they stay that way on day two and beyond.

8. Cloud security is also all about identity

In the cloud, many services connect to each other via API calls, requiring identity management for security rather than IP-based network rules, firewalls, etc. For instance, a connection from a Lambda function to an S3 bucket is accomplished using a policy attached to a role that the Lambda assumes: its service identity. Identity and Access Management (IAM) and similar services are complex and feature-rich, and it’s easy to be overly permissive just to get things to work. And since these cloud services are created and managed with configuration, see #7.
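To illustrate how easy it is to be overly permissive, here is a minimal, assumption-laden sketch that flags wildcard grants in an IAM-style policy document. The structure mirrors the common Effect/Action/Resource JSON shape, but this is an illustrative check, not a complete policy analyzer:

```python
# Sketch: flag statements in an IAM-style policy that allow every action
# or every resource. The policy below is a made-up example.

def find_wildcard_grants(policy):
    """Return descriptions of overly broad Allow statements."""
    findings = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        if stmt.get("Effect") != "Allow":
            continue
        if "*" in stmt.get("Action", []):
            findings.append(f"statement {i}: allows all actions")
        if "*" in stmt.get("Resource", []):
            findings.append(f"statement {i}: applies to all resources")
    return findings

policy = {
    "Statement": [
        {"Effect": "Allow",
         "Action": ["s3:GetObject"],
         "Resource": ["arn:aws:s3:::app-bucket/*"]},  # tightly scoped: fine
        {"Effect": "Allow",
         "Action": ["*"],
         "Resource": ["*"]},                          # far too broad
    ]
}
print(find_wildcard_grants(policy))
```

Scoping each role to the one action and one resource it actually needs is the least-privilege habit this kind of check enforces.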

9. The nature of threats to cloud are different

Bad actors use code and automation to find vulnerabilities in your cloud environment and exploit them, and automated threats will always outrun manual or semi-manual defenses. Your cloud security must be resilient against modern threats, which means your defenses must cover all critical resources and policies and recover from any misconfiguration of those resources automatically, without human involvement. The key metric here is mean time to remediation (MTTR) for critical cloud misconfigurations. If yours is measured in hours, days, or (gasp!) weeks, you’ve got work to do.

10. Datacenter security doesn’t work in the cloud

By now, you’ve probably concluded that many of the security tools that worked in the datacenter aren’t of much use in the cloud. This doesn’t mean you need to ditch everything you’ve been using, but you should learn which tools still apply and which are obsolete. For instance, application security still matters, but network monitoring tools that rely on spans or taps to inspect traffic don’t, because CSPs don’t provide direct network access. The primary security gap you need to fill is concerned with cloud resource configuration.

11. Security can be easier and more effective in the cloud

You’re probably ready for some good news. Because the cloud is programmable and can be automated, the security of your cloud is also programmable and can be automated. This means cloud security can be easier and more effective than it ever could be in the datacenter. In the midst of all this cloud chaos lies opportunity!

Monitoring for misconfiguration and drift from your provisioned baseline can be fully automated, and you can employ self-healing infrastructure for your critical resources to protect sensitive data. And before infrastructure is provisioned or updated, you can run automated tests to validate that infrastructure-as-code complies with your enterprise security policies, just like you do to secure your application code. This lets developers know earlier on if there are problems that need to be fixed, and it ultimately helps them move faster and keep innovating.
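A pre-provisioning policy check of the kind described above can be sketched as follows. The template format and policy names here are simplified inventions; real tools evaluate CloudFormation templates or Terraform plans, but the idea is the same: fail the pipeline before bad infrastructure is deployed.

```python
# Sketch: validate a (simplified, hypothetical) infrastructure template
# against enterprise security policies before provisioning.

POLICIES = {
    # Every bucket must be encrypted at rest.
    "bucket-encryption": lambda r: r["type"] != "bucket" or r.get("encrypted", False),
    # No database may be publicly reachable.
    "no-public-db": lambda r: r["type"] != "database" or not r.get("public", False),
}

def validate_template(resources):
    """Return (passed, violations) for a list of resource definitions."""
    violations = []
    for res in resources:
        for name, check in POLICIES.items():
            if not check(res):
                violations.append(f"{res['id']}: fails {name}")
    return (len(violations) == 0, violations)

template = [
    {"id": "logs",  "type": "bucket",   "encrypted": True},
    {"id": "stage", "type": "bucket",   "encrypted": False},  # violation
    {"id": "users", "type": "database", "public": False},
]
ok, problems = validate_template(template)
print(ok, problems)  # False ['stage: fails bucket-encryption']
```

Run in CI, a check like this gives developers the early feedback the paragraph above describes: the violation surfaces at commit time, not after an incident.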

12. Compliance can also be easier and more effective in the cloud

There’s good news for compliance analysts as well. Traditional manual audits of cloud environments can be incredibly costly, error-prone, and time-consuming, and they’re usually obsolete before they’re completed. Because the cloud is programmable and can be automated, compliance scanning and reporting can be as well. It’s now possible to automate compliance audits and generate reports on a regular basis without investing a lot of time and resources. Because cloud environments change so frequently, a gap between audits that’s longer than a day is probably too long.

Where to start with cloud security

  1. Learn what your developers are doing
    What cloud environments are they using, and how are they separating concerns by account (i.e., dev, test, prod)? What provisioning and CI/CD tools are they using? Are they currently using any security tools? The answers to these questions will help you develop a cloud security roadmap and identify ideal areas to focus on.
  2. Apply a compliance framework to an existing environment. 
    Identify violations and then work with your developers to bring the environment into compliance. If you aren’t subject to a compliance regime like HIPAA, GDPR, NIST 800-53, or PCI, then adopt the CIS Benchmark. Cloud providers like AWS and Azure have adapted it to their cloud platforms to help remove guesswork about how its controls apply to what your organization is doing.
  3. Identify critical resources and establish good configuration baselines.
    Don’t let the forest cause you to lose sight of the really important trees. Work with your developers to identify cloud resources that contain critical data, and establish secure configuration baselines for them (along with related resources like networks and security groups). Start detecting configuration drift for these and consider automated remediation solutions to prevent misconfiguration from leading to an incident.
  4. Help developers be more secure in their work. 
    Embrace a “Shift Left” mentality by working with developers to bake security in earlier in the software development lifecycle (SDLC). DevSecOps approaches such as automated policy checks during development exist to help keep innovation moving fast by eliminating slow, manual security and compliance processes.
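The configuration-drift detection mentioned in step 3 reduces, at its core, to comparing a resource's current settings against an approved baseline. A minimal sketch, with hypothetical field names:

```python
# Sketch: report every setting that has drifted from the approved baseline.
# Field names are invented for illustration.

def detect_drift(baseline, current):
    """Return {field: (expected, actual)} for every drifted setting."""
    return {
        key: (expected, current.get(key))
        for key, expected in baseline.items()
        if current.get(key) != expected
    }

baseline = {"encryption": True, "public_access": False, "logging": True}
observed = {"encryption": True, "public_access": True, "logging": True}

drift = detect_drift(baseline, observed)
print(drift)  # {'public_access': (False, True)}
# An automated remediation step would now restore the baseline value
# (or open a ticket), driving mean time to remediation toward zero.
```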

The key to an effective and resilient cloud security posture is close collaboration with your development and operations teams to get everyone on the same page and talking the same language. In the cloud, security can’t operate as a stand-alone function.

Better Vulnerability Management: How to Master Container Security in Three Steps

By Nate Dyer, Product Marketing Director, Tenable


Application containers like Docker have exploded in popularity among IT and development teams across the world. Since its inception in 2013, Docker software has been downloaded 80 billion times and more than 3.5 million applications have been “dockerized” to run in containers.

With all the enthusiasm and near-mainstream adoption status, it’s important to understand the reasons why security continues to be the top challenge with container deployments. Let’s take a look.


Security is the top container management challenge

In study after study, security comes up as the top container management challenge. In many ways, container security issues are no different than those impacting traditional IT. Poor cyber hygiene, such as developers using vulnerable versions of Kubernetes or misconfigured Docker services, creates a lot of turmoil in the container ecosystem. Security teams need to find vulnerabilities and prioritize their remediation based on actual cyber risk, just as they would for any other computing asset.

Containers create unique issues for security teams

But, in other ways, container security issues are rather unique. Modern application development today is largely focused on assembling existing software components, many of which are open-source code, instead of writing code from scratch.

For example, many developers turn to container image repositories like Docker Hub to construct their own container images quickly. Unfortunately, very few of these assembled components are actually analyzed by security teams to assess business risk.

And the risks are real: 17 Docker images were recently discovered and removed from Docker Hub because they had installed cryptocurrency miners on unwitting users’ servers. The question we all need to ask is: Do you know where your container images are coming from?

Traditional vulnerability management approaches don’t work for securing containers

To make matters more difficult, traditional vulnerability management approaches don’t work for securing containers. The average lifespan of a container is often measured in hours, making it very challenging to discover running containers using large IP ranges in the scan configuration.

Then, if you do come across a running container in a scan, it’s difficult to assess due to its “just enough operating system” design principles. Many containers don’t have an IP address or SSH logins to run a credentialed scan.

Finally, if you happen to find a security issue in a container, you don’t just apply a patch to remediate the flaw. Rather, you shut down the container, fix the bug in the container image code, and redeploy, in keeping with the immutable-infrastructure mindset where IT infrastructure is treated as code.
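That workflow (scan the image, rebuild it, redeploy) can be sketched as follows. The package names, versions, and vulnerability list are all made up for illustration; a real scanner consults a vulnerability database such as a CVE feed:

```python
# Sketch of "fix the image, not the container": scan an image's package
# manifest, and remediate by producing a new image version rather than
# patching a running container. All data below is illustrative.

VULNERABLE = {("openssl", "1.0.2"), ("bash", "4.3")}  # stand-in vuln feed

def scan_image(manifest):
    """Return the vulnerable (package, version) pairs found in an image."""
    return sorted(set(manifest) & VULNERABLE)

def remediate(image_tag, manifest, upgrades):
    """Produce the tag and manifest for a rebuilt image with upgraded packages."""
    fixed = [(pkg, upgrades.get(pkg, ver)) for pkg, ver in manifest]
    return image_tag + "-rebuilt", fixed

manifest = [("openssl", "1.0.2"), ("curl", "7.61")]
print(scan_image(manifest))           # flags the vulnerable openssl
tag, fixed = remediate("app:1.4", manifest, {"openssl": "1.1.1"})
print(tag, scan_image(fixed))         # rebuilt image scans clean
```

The key point the sketch captures is that remediation produces a new, versioned artifact; the old container is simply replaced, never patched in place.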

Three steps to mastering container security

While Docker containers have turned traditional vulnerability management on its head, there is a path forward. You can master container security by following three steps:

  1. Discover and secure container infrastructure. This includes detecting Docker in your environment, patching host and orchestration infrastructure and hardening services based on industry best practices.
  2. Shift left with security controls. Focus your security testing, policy assurance and remediation workflows on the development process before software is shipped into production to prevent vulnerabilities.
  3. Incorporate containers into your holistic Cyber Exposure program. Rather than relying on a point solution to secure a new type of computing asset, make sure your vulnerability management approach supports containers alongside other assets across your attack surface.

Want to learn more about how to master these three steps? Check out Container Security Best Practices: A How-to Guide to start reaping the benefits.


Continuous Auditing – STAR Continuous – Increasing Trust and Integrity

By John DiMaria, Assurance Investigatory Fellow, Cloud Security Alliance

As a Six Sigma Black Belt, I was brought up with the philosophy of continual monitoring and improvement, moving from a reactive state to a preventive one. In fact, a couple of years ago I wrote a white paper on how Six Sigma applies to security.

The basic premise is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred. It eliminates the point-in-time “inspection” by deploying continuous monitoring and auditing. This approach essentially saved the automotive industry back in the 1980s.

This age-old and proven process is the best way I can describe what CSA has done with the launch of another step in the direction of increasing transparency and assurance … continuous auditing.

Continuous auditing focuses on testing for the occurrence of a risk and the ongoing effectiveness of a control. A framework and detailed procedures, along with technology, are key to enabling such an approach. Continuous auditing offers an enhanced way to understand risks and controls, improving on the sampling of periodic reviews with ongoing testing.

STAR Continuous is a component of the CSA STAR program that gives cloud service providers (CSPs) the opportunity to integrate their approach to cloud security compliance and certification with additional capabilities to validate their security posture on an ongoing basis. Continuous auditing empowers an organization to make precise statements on its compliance status at any time over the whole span in which the continuous audit process is executed, achieving an “always up-to-date” compliance status by increasing the frequency of the auditing process.

Continuous auditing is not intended to replace traditional auditing, but rather is to be used as a tool to enhance audit effectiveness and increase transparency to stakeholders and interested parties.

STAR Continuous contains three models for continuous monitoring. Each of the three models provides a different level of assurance by covering requirements of continuous auditing with various levels of scrutiny. The three models are defined as:

1. Continuous self-assessment
2. Extended certification with continuous self-assessment
3. Continuous certification

chart showing levels of auditing

Essentially, the proposed framework starts from a simple process of the timely submission of self-assessment compliance reports and moves up to a continuous certification of the fulfillment of control objectives.

How does it help you as a cloud service provider?

• Provides top management with greater visibility, so that they can evaluate the effectiveness of their management system in real time against internal expectations, regulatory requirements, and cloud security industry standards;

• Implements an audit that is designed to reflect how your organization’s objectives are aimed at optimizing the cloud services;

• Demonstrates progress and performance levels that go beyond the traditional “point in time” scenario; and

• Provides customers of cloud service providers with a greater understanding of the level of controls in place and their effectiveness.

CSA is committed to helping customers have a deeper understanding of their security postures. Since the STAR Registry was launched in 2011 as the first step in improving transparency and assurance in the cloud, it has evolved into a program that encompasses the key principles of transparency, rigorous auditing, and harmonization of standards. Companies that use STAR demonstrate best practices and validate the security posture of their cloud offerings.

CSA STAR is recognized as the internationally harmonized solution leading the way in building trust among cloud providers, users and their stakeholders by providing an integrated, cost-effective solution that decreases complexity and increases assurance and transparency. It simultaneously enables organizations to secure their information, protect themselves from cyber-threats, reduce risk and strengthen their information governance and privacy platform.

Want to find out more? Contact us at [email protected]

Are Cryptographic Keys Safe in the Cloud?

By Istvan Lam, CEO, Tresorit

encryption key inside the cloud

By migrating data to the cloud, businesses can enjoy scalability, ease of use, enhanced collaboration and mobility, together with significant cost savings. The cloud can be especially appealing to subject-matter experts as they no longer have to invest in building and maintaining their own infrastructure. However, the cloud also brings challenges when it comes to information security.

Given that the cloud has a much higher density of data than local storage, it presents a much bigger attack surface. The reward for getting into a cloud system is also much higher than getting into a local file server: the cloud stores millions of companies’ data, while a local server hosts the data of one company only. This makes the cloud a far more attractive target for hackers.

Maintaining data integrity and security is, therefore, a significant challenge for cloud-based services and is one of the key reasons that holds companies back from moving to the cloud. That’s where encryption comes into the picture; it can play a key role in preserving the confidentiality and integrity of data stored in the cloud and significantly reduce the risk of a data breach.

Not all encryption is created equal

Most cloud providers offer some sort of data encryption and therefore claim that your data is safe in the cloud. However, it’s important to take a closer look at what exactly the provider is offering and how it stores the encryption keys. In order to ensure the confidentiality of your data, the system needs to be designed in a way that at no point can the cloud provider have access to it. This is called end-to-end encryption.

However, the vast majority of file sync and sharing services only use encryption in transit and at rest. In-transit, or channel, encryption means that there is an encrypted channel between you and the server, but once the information leaves the channel, it is decrypted. Hence, once your data arrives at the server, it can be accessible to a hacker or a rogue employee. In this case, the encryption keys are shared between you and the server, which is good protection if, for example, you are using public Wi-Fi to upload data. However, when it comes to the security of your data on the server, there is a vulnerability, and anyone who exploits it can get access to your information.

In the case of at-rest encryption, the cloud provider encrypts the file before storing it on its disks. However, the service provider also holds the encryption keys to your files. This means that their system administrators and anyone who manages to hack their servers or simply get hold of an administrator’s password can access and read your files. This has already happened to a mainstream cloud service provider; hackers got hold of and used an employee’s password to get into the provider’s corporate network and steal user credentials.

The confidentiality of your files can only be guaranteed if the cloud provider uses end-to-end encryption. With end-to-end encryption based on zero-knowledge authentication methods, all the encryption happens on your computer—neither your files nor your password leaves your device unencrypted. This means that the admins who run the cloud cannot access the content of your files. In addition, in case of a breach into the cloud provider’s servers, your data would still be safe, as there would be no means for hackers to decrypt them.
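
To make the key-custody difference concrete, here is a minimal, hypothetical sketch of the client-side model: the key is derived from the user’s password on the device, and the server only ever receives an opaque ciphertext blob. The cipher below (a SHA-256 counter-mode keystream with an HMAC tag) is a toy for illustration only; a real client would use a vetted authenticated cipher such as AES-GCM.

```python
# Toy sketch of client-side ("end-to-end") encryption: the key is derived
# from the user's password ON the device, and only ciphertext reaches the
# server. Illustrative only -- not a production cipher.
import hashlib, hmac, secrets

def derive_key(password: bytes, salt: bytes) -> bytes:
    # Key derivation happens locally; the password never leaves the device.
    return hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # SHA-256 in counter mode -- a stand-in for a real stream cipher.
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    nonce = secrets.token_bytes(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    tag = hmac.new(key, nonce + ct, "sha256").digest()  # integrity check
    return nonce + ct + tag

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    if not hmac.compare_digest(tag, hmac.new(key, nonce + ct, "sha256").digest()):
        raise ValueError("ciphertext tampered with")
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

# The cloud provider only ever stores this opaque blob:
salt = secrets.token_bytes(16)
key = derive_key(b"correct horse battery staple", salt)
uploaded = encrypt(key, b"quarterly financials")
assert decrypt(key, uploaded) == b"quarterly financials"
```

Because the key exists only on the client, a breach of the provider’s servers yields nothing but ciphertext.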

Therefore, to ensure the confidentiality of your files in the cloud, you should look for a cloud service provider that offers its customers the ability to manage key generation on the customer side. The right way to store information in the cloud is to put the client in control of both key management and the encryption process. This is what ENISA, the European Union Agency for Network and Information Security, points out in its paper on Privacy and Security in Personal Data Clouds:

“To this end, the lack of implementation of client-side encryption is an additional security challenge, as this type of encryption is the only way to provide the user with true control over his/her data, while mitigating the risk of an unauthorised or unwanted access by third parties (such as a rogue administrator or government mass surveillance programs).”

To conclude, even if data stored by a cloud storage provider is encrypted, the type of encryption and the key management methods are what matter. Not only your documents but also your keys have to be kept safe. Public key cryptography combined with strong symmetric encryption algorithms is a standard, proven way to share documents with others without the storage provider or any third party ever having access to your files. Look for solutions that allow you to bring your own hardware keys, or ones that do not offer password-reset functionality—a good sign that the provider does not have access to your keys. Only this way can you be reassured that your and your clients’ files are protected against data breaches.
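
The hybrid pattern described above (a random symmetric file key wrapped with the recipient’s public key, so files can be shared without the provider ever seeing the key) can be sketched as follows. This uses textbook RSA with tiny primes purely for intuition; real systems use RSA-OAEP or elliptic-curve key agreement with proper key sizes.

```python
# Toy illustration of hybrid encryption key-wrapping. The hard-coded
# key pair (p=61, q=53 -> n=3233, e=17, d=2753) is a textbook example,
# never usable in practice.
n, e, d = 3233, 17, 2753

def wrap(key_int: int) -> int:
    # Done by the sender, using the recipient's PUBLIC key.
    return pow(key_int, e, n)

def unwrap(wrapped: int) -> int:
    # Done only by the recipient, using the PRIVATE key.
    return pow(wrapped, d, n)

file_key = 1234                 # stands in for a random symmetric file key < n
shared_blob = wrap(file_key)    # safe to store alongside the ciphertext
assert unwrap(shared_blob) == file_key
```

The storage provider can hold `shared_blob` indefinitely: without the private exponent `d`, it cannot recover the file key, which is exactly the property that keeps the provider out of your files.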

Webinar: The Ever Changing Paradigm of Trust in the Cloud

By CSA Staff

abstract line connection on night city background implying cloud computing

The CSA closed its 10th annual Summit at RSA on Monday, and the consensus was that the cloud has come to dominate the technology landscape and revolutionize the market, creating a tectonic shift in accepted practice.

The advent of the cloud has been a huge advancement in technology. Today’s need for flexible access has led to an increase in business demand for cloud computing, bringing with it increased security and privacy concerns. How organizations evaluate Cloud Service Providers (CSPs) has become key to providing increased levels of assurance and transparency.

On Thursday, March 14 at 2 pm ET, John DiMaria, Cloud Security Alliance’s Assurance Investigatory Fellow and one of the key innovators in the evolution of CSA STAR, will share his insight on the:

  • current global landscape of cloud computing,
  • ongoing concerns regarding the cloud, and the
  • evolution of efforts to answer to the demand for higher transparency and assurance.

Join John DiMaria as he reviews the efforts being led by CSA to answer this call. You’ll walk away with a deeper understanding of how these efforts help organizations optimize processes, reduce costs, and decrease risk while meeting the rigorous and evolving international demands on cloud services, allowing for the highest level of assurance and transparency.

Register today.

CSA Summit Recap Part 2: CSP & CISO Perspective

By Elisa Morrison, Marketing Intern, Cloud Security Alliance

When CSA was started in 2009, Uber was just a German word for ‘super’ and CSA stood only for Community Supported Agriculture. Now in 2019, spending on cloud infrastructure has finally exceeded spending on-premises, and CSA is celebrating its 10th anniversary. For those who missed the Summit, this is CSA Summit Recap Part 2, and in this post we highlight key takeaways from sessions geared toward CSPs and CISOs.

Can you trust your eyes? Context as the basis for “Zero Trust” systems – Jason Garbis

During this session, Jason Garbis identified three steps towards implementing Zero Trust: reducing attack surfaces, securing access, and neutralizing adversaries. He also addressed how to adopt modern security architecture to make intelligent actions for trust. In implementing Zero Trust, Garbis highlighted the need for:

  • Authentication. From passwords to biometric to tokens. That said, authentication alone is not sufficient for adequate security, as he warned it is too late in the process.
  • Network technology changes. Firewall technology is too restrictive (e.g. IP addresses are shared across multiple people), and the decision in these cases is simply yes-or-no access. This is not Zero Trust. Better security is based on the role of the person and the definition of the data, which allows for more alternatives and decisions based on many attributes.
  • Access control requirements. There is a need for requirements that dynamically adjust based on context. If possible, organizations need to find a unified solution via Software-Defined Perimeter.
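
As a sketch of what such attribute-based, context-aware access decisions look like in practice (all names and rules below are hypothetical, not from the talk):

```python
# Minimal sketch of a context-based access decision: instead of a static
# IP allow/deny, each request is evaluated against attributes of the
# user, device, and data. Rules here are illustrative only.
from dataclasses import dataclass

@dataclass
class Request:
    role: str
    device_managed: bool
    mfa_passed: bool
    data_sensitivity: str  # "public" | "internal" | "restricted"

def allow(req: Request) -> bool:
    # Deny by default; grant only when the combined context warrants it.
    if not req.mfa_passed:
        return False
    if req.data_sensitivity == "restricted":
        return req.role == "finance" and req.device_managed
    if req.data_sensitivity == "internal":
        return req.device_managed
    return True  # public data

assert allow(Request("finance", True, True, "restricted"))
assert not allow(Request("finance", False, True, "restricted"))  # unmanaged device
assert not allow(Request("engineer", True, False, "public"))     # no MFA
```

A Software-Defined Perimeter product evaluates policies of roughly this shape on every request, rather than once at the network edge.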

Securing Your IT Transformation to the Cloud – Jay Chaudhry, Bob Varnadoe, and Tom Filip

Every CEO wants to embrace cloud, but how can you do it securely? The old world was network-centric, and the data center was the center of the universe. We could build a moat around our network with firewalls and proxies. The new world is user-centric, and network control is fluid. Not to mention, the network is decoupled from security, and we rely on policy-based access as depicted in the picture below.

Slide: Old World vs New World

In order to address this challenge, organizations need to view security with a clean slate. Applications and the network must be decoupled. More cloud traffic is encrypted, but encryption also offers a way for malicious users to slip through unnoticed, so proxies and firewalls should be used to inspect traffic.

Ten Years in the Cloud – PANEL

The responsibility to protect consumers and enterprise has expanded dramatically. Meanwhile, the role of the CISO is changing – responsibilities now include both users and the company. CISOs are faced with challenges as legacy tools don’t always translate to the cloud. Now there is also a need to tie the value of the security program to business, and the function of security has changed especially in support. In light of these changes, the panel unearthed the following five themes in their discussion of lessons learned in the past 10 years of cloud.

  1. Identity as the new perimeter. How do we identify people are who they say they are?
  2. DevOps as critical for security. DevOps allows security to be embedded into the app, but it is also a risk since there is faster implementation and more developers.
  3. Ensuring that security is truly embedded in the code. Iterations in real-time require codified security.
  4. Threat and data privacy regulations. These are on the legislative to-do list for many states, with interest comparable to that already shown in privacy for financial services and health care information.
  5. Security industry as a whole is failing us all. It is not solving problems in real-time; as software becomes more complex it poses security problems. Tools are multiplying but they do not address the overall security environment. Because of this, there’s a need for an orchestrated set of tools.

Finally! Cloud Security for Unmanaged Devices… for All Apps – Nico Popp

Now we have entered the gateway wars: Web vs. CASB vs. SDP. Whoever wins, the problem of BYOD and unmanaged devices remains, along with the issue that we can’t secure endpoint users’ mobile devices. The technologies of mirror gateway and forward proxy address the shortcomings of the “reverse proxy” and have become indispensable blades. Forward proxy is the solution for all apps when you can manage the endpoint, while mirror gateway can be used for all users, all endpoints and all sanctioned apps.

Lessons from the Cloud – David Cass

Cloud is a means to an end … and the end requires organizations to truly transform. This is especially important as regulators expect a high level of control in a cloud environment. Below are the key takeaways presented:

  • Cloud impacts strategy and governance at every level: from strategy, to controls, to monitoring, measuring, and managing information, all the way to external communications.
  • The enterprise cloud requires a programmatic approach, with data as the center of the universe; native controls only get you so far. Cloud is a journey, not just a change in technology.
  • Developing a cloud security strategy requires taking into account service consumption, IaaS, PaaS, and SaaS. It is also important to keep in mind that cloud is not just an IT initiative.

Security Re-Defined – Jason Clark and Bob Schuetter

This session examined how Valvoline went to the cloud to transform its security program and accelerate its digital transformation. When Valvoline split off via IPO into two global multi-billion-dollar startups, neither had a data center. Data was flowing like water, complexity and control created friction, and there was a lack of visibility.

Slide: Digital transformation

They viewed cloud as security’s new north star and said ‘The Fourth Industrial Revolution’ was moving to the cloud. So how did they get there? The following are the five lessons they shared:

  1. Stop technical debt
  2. Go where your data is going
  3. Think big, move fast, and start small
  4. Organizational structure, training, and mindset
  5. Use the power of new analytics

Blockchain Demo

Slide: A simple claim example

Inspired by the cryptocurrency model, OpenCPEs is a way to revolutionize how security professionals measure their professional development experiences.

OpenCPEs provides a method of validating experiences listed on your resume without maintaining or storing an individual’s personal data. Learn more about this project by downloading the presentation slides.

The full slides to the summit presentations are available for download.

CSA Summit Recap Part 1: Enterprise Perspective

By Elisa Morrison, Marketing Intern, Cloud Security Alliance

CSA’s 10th anniversary, coupled with the bestowal of the Decade of Excellence Awards, gave this Summit a sense of accomplishment that bodes well for the future while also challenging the CSA community to continue its pursuit of excellence.

The common theme was the ‘Journey to the Cloud,’ with an emphasis on how organizations can not only go faster but also reduce costs during this journey. The Summit this year also touched on the future of privacy and disruptive technologies, and introduced CSA’s newest initiatives in Blockchain and IoT along with the launch of the STAR Continuous auditing program. Part 1 of this CSA Summit Recap highlights sessions from the Summit geared toward the enterprise perspective.

Securing Your IT Transformation to the Cloud – Jay Chaudhry, Bob Varnadoe, and Tom Filip

Slide: Network security is becoming irrelevant

Every CEO wants to embrace cloud, but how can you do it securely? To answer this question the trio looked at the journeys other companies, such as Kellogg and NCR, took to the cloud. In Kellogg’s case, they found that when it comes to your transformation, single-tenant VMs won’t cut it. They also questioned the effectiveness of offerings such as hybrid security: why pay the tax for services not used?

For NCR, major themes were how to streamline connectivity and access to cloud service. The big question was how do end users access NCR data in a secure environment? They found that applications and network must be decoupled. And, while more traffic on the cloud is encrypted, it offers another way for malicious users to get in. Their solution was to use proxy and firewalls for inspection of traffic.

The Future of Privacy: Futile or Pretty Good? – Jon Callas

ACLU technology fellow Jon Callas brought to light the false dichotomy we see when discussing privacy. It is easy to be nihilistic about privacy, but positives are out there as well.

There is movement in the right direction that we can already see; examples include GDPR, the California Privacy Law, the Illinois Biometric Privacy Law, and the Carpenter, Riley, and Oakland Magistrate decisions. A precedent has also been set for laws that give consumers more privacy. For organizations, privacy has become a focus of competition, and companies such as Apple, Google, and Microsoft all compete on privacy. Privacy-protecting protocols such as TLS and encrypted DNS are also becoming a reality. Other positive trends include default encryption and the fact that disasters are documented, reported on, and treated as a concern.

Unfortunately, there has also been movement in the wrong direction. There is a balancing act between the design for security versus design for surveillance. The surveillance economy is increasing, and too many platforms and devices are now collecting data and selling it. Lastly, government arrogance and the overreach to legislate surveillance over security is an issue.

All in all, Callas summarized that the future is neither futile nor pretty good and it’s necessary to balance both moving forward.

From GDPR to California Privacy – Kevin Kiley

Slide: Steps to better vendor risk management

This session touched on third-party breaches, regulatory liability, the paramount need for strong data-processing agreements, and how to comply with GDPR and CCPA. Kiley identified a need for a holistic approach with more detailed vendor-vetting requirements. He outlined five areas organizations should improve to strengthen their vendor risk management.

  1. Onboarding. Who’s doing the work for procurement, privacy, or security?
  2. Populating & Triaging. Leverage templated vendor evaluation assessments and populate with granular details.
  3. Documentation and demonstration
  4. Monitoring vendors
  5. Offboarding

Building an Award-Winning Cloud Security Program – Pete Chronis and Keith Anderson

This session covered key lessons learned as Turner built its award-winning cloud security program. One of the constant challenges Turner faced was the battle between speed to market and the security program. To improve their program, Turner enacted continuous compliance measurement, using open-source tooling for cloud control-plane assessment. They also ensured each user attestation was signed by both an executive and technical support. For accounts, they implemented intrusion prevention, detection, and security monitoring. They learned to define what good looks like, while also developing a lexicon and definitions for security. It was emphasized that organizations should always be iterating from new > good > better. Lastly, when building your cloud security program, they emphasized that not all things need to be secured the same way and not all data needs the same level of security.

Case Study: Behind the Scenes of MGM Resorts’ Digital Transformation – Rajiv Gupta and Scott Howitt

MGM’s global user base meant they wanted to expand functions to guest services, check-in volume management and find a way of bringing new sites online faster. To accomplish this, MGM embarked on a cloud journey. Their journey was broken into business requirements (innovation velocity and M&A agility) along with necessary security requirements (dealing with sensitive data, the need to enable employees to move faster, and the ability to deploy a security platform).

Slide: Where is your sensitive data in the cloud?

As they described MGM’s digital transformation, the question was raised: where is sensitive data stored in the cloud? An emerging issue that continues to come up is API management. Eighty-seven percent of companies permit employees to use unmanaged devices to access business apps, and BYOD policies are often left unmanaged or unenforced. In addition, MGM found that, on average, 14 misconfigured IaaS services are running at any given time in an organization, and the average organization has 1,527 DLP incidents in PaaS/IaaS in a month.

To address these challenges, organizations need to consider the relations between devices, network and the cloud. The session ended with three main points to keep in mind during your organization’s cloud journey. 1) Focus on your data. 2) Apply controls pertinent to your data. 3) Take a platform approach to your cloud security needs.

Taking Control of IoT – Hillary Baron

image of IoT connected devices overlayed on a cityscape

There is a gap in the security controls framework for IoT. With the landscape changing at a rapid pace and more than 20 billion IoT devices expected by 2020, the need is great. Added to that is the fact that IoT manufacturers typically do not build security into devices; hence the need for the security controls framework. You can learn more about the framework and its accompanying guidebook covered in this session here.

Panel – The Approaching Decade of Disruptive Technologies

While buzzwords can mean different things to different organizations, organizations should still implement processes among new and emerging technologies such as AI, Machine Learning, and Blockchain, and be conscious of what is implemented.

This session spent much of its time examining Zero Trust. The security perimeter is now in different locations, and it is challenging to find the best place to establish it. It can no longer be a fixed point, but must flex with the mobility of users; mobile phones, for example, require very flexible boundaries. Zero Trust can help address these issues, and it’s BYOD-friendly. There are still challenges, but Web Authentication helps as a standard supporting Zero Trust.

Cloud has revolutionized security in the past decade. With cloud, you inherit security, and with it the idea of a simple system has gone out the window. One of the key questions asked was “Why are we not learning the security lessons from the cloud?” The answer? Because the number of developers grows exponentially with each new technology.

The key takeaway: Don’t assume your industry is different. Realize that others have faced these threats and have come up with successful treatment methodologies when approaching disruptive technologies.

CISO Guide to Surviving an Enterprise Cloud Journey – Andy Kirkland, Starbucks

Five years ago, Andy Kirkland, Director of Information Security for Starbucks, cautioned against going to the cloud. Since then, Starbucks has migrated to the cloud and learned a lot along the way. Below is an outline of Starbucks’ survival tips for organizations embarking on a cloud journey:

  • Establish workload definitions to understand criteria
  • Utilize standardized controls across the enterprise
  • Provide security training for the technologist
  • Have a security incident triage tailored to your cloud provider
  • Establish visibility into cloud security control effectiveness
  • Define the security champion process to allow for security to scale

PANEL – CISO Counterpoint

In this keynote panel, leading CISOs discussed their cloud adoption experiences for enterprise applications. Jerry Archer, CSO for Sallie Mae, described their cloud adoption journey as “nibbling our way to success.” They started by putting small things into the cloud. By keeping up constant conversations with regulators, there were no surprises during the migration to the cloud. Now, they don’t have any physical servers remaining. Other takeaways were that in 2019 containers have evolved and we now see: ember security, arbitrage workloads, and RAIN (Refracting Artificial Intelligence Networks).

Download the full summit presentation slides here.

CCSK Success Stories: From an Information Systems Security Manager

By the CSA Education Team

This is the third part in a blog series on Cloud Security Training. Today, we will be interviewing Paul McAleer. Paul is a Marine Corps veteran and currently works as an Information Systems Security Manager (ISSM) at Novetta Solutions, an advanced data analytics company headquartered in McLean, VA.  He holds the CCSK, CISSP, CISM, and CAP certifications among others and lives in the Washington, D.C. area.

Can you describe your role?

I am an ISSM at Novetta Solutions and am primarily responsible for certification and accreditation, continuous monitoring, and the overall security posture of the information systems under my purview. Novetta is also partnered with AWS and that partnership continues to grow so it is a very exciting company to work for.  

What got you into cloud security in the first place? What made you decide to earn your Certificate of Cloud Security Knowledge (CCSK)?

My first InfoSec position was with First Information Technology Services, a Third Party Assessment Organization (3PAO) supporting Microsoft. I was part of the Continuous Monitoring Team, and part of my job was providing adequate justification of open vulnerabilities and depicting mitigation for cloud environments. Understanding cloud security was imperative in performing my job.  I was seeking more of a foundational understanding focused primarily on cloud security. I heard about CCSK through CSA and ISC(2) after doing some research on the best and most valuable Cloud certifications. After reviewing the certification outline and expectations, I decided to review the material and prep for the exam. 

“Open book means nothing when it comes to this exam. There are too many questions that require a deep understanding of the material…”

Can you elaborate on what the exam experience was like? How did you prepare for the CCSK exam?

The CCSK was not an easy exam by any means. Not only was it a requirement to get 80 percent to pass, but there were only 90 minutes to answer 60 questions. The exam required a deep understanding of the CSA Cloud Security Guidance, as well as the ENISA Cloud Computing Risk Assessment Report. At least for me, it was imperative to read through all of the course material and ensure I understood everything listed in the exam objectives to pass the exam.

If you could go back and take it again, how would you prepare differently?

If I could prepare differently, I would have devoted more time to studying and reading the CSA Guidance and ENISA Report a second time through. To me, one read-through isn’t enough for the depth of this exam and the style of questions it presents. It is a hard exam to prepare for. To gain a full understanding of what is expected, it’s important to go through the material more than once, take notes on your weak areas, and then come back to the sections you feel weakest on and focus on them.

Were there any specific topics on the exam that you found trickier than others?

Topics on the exam that I found trickier than others included questions pertaining to governance within the cloud, as well as understanding the various Security as a Service (SecaaS) requirements and the different service categories involved in SecaaS implementation.

What is your advice to people considering earning their CCSK?

I highly recommend the CCSK for anyone seeking a deeper understanding of cloud security. My advice to people considering the CCSK is to study for the exam like you would any other certification that wasn’t open book. In other words, don’t rely on the fact that it is open book. 

Lastly, what part of the material from the CCSK have been the most relevant in your work and why?

The most relevant material from the CCSK for my career has been Compliance and Audit Management, which was Domain 4 of the CSA Guide v3 when I took the exam. I believe that domain related more to my work experience than any other domain due to my cloud compliance role at the time of my certification. I definitely took the most away from the topics discussed in that domain, such as issues pertaining to Enterprise Risk Management, Compliance and Audit Assurance, and Corporate Governance. The Information Management and Data Security domain was also a very relevant domain for my work.

Interested in learning more about cloud security? Discover our free prep-kit, training courses, and resources to prepare to earn your Certificate of Cloud Security Knowledge here.

Invest in your future with CCSK training

A Decade of Vision

By Jim Reavis, Co-founder and CEO, Cloud Security Alliance

CSA 10th anniversary logo

Developing a successful and sustainable organization is dependent upon a lot of factors: quality services, a market vision, focus, execution, timing and maybe a little luck. For Cloud Security Alliance, now celebrating our 10th anniversary, I would add one more factor—believers. 

While we have had a few doubters, we have had more believers who have helped us fulfill our vision and allowed us to be one of the most important information security associations on a global basis. On this occasion, we want to recognize three such believers, who were there at the beginning and have all stayed intimately involved with CSA during our first decade. I am referring to our three founding CEOs, whom we are providing our Decade of Vision Leadership award. These CEOs provided the initial startup funding and, more than that, have provided consistent support and mentoring, as well as evangelizing the CSA mission on a global basis.

Philippe Courtot, CEO of Qualys, is inarguably an industry visionary as well as a generous human being. Philippe has been promoting the benefits of a cloud approach to security at Qualys for over 18 years, well before we called this cloud. He has had a unique determination in pursuit of his goals and eschewed more expedient paths to cement Qualys as an industry leader based upon integrity. Philippe always supports worthwhile industry initiatives, including CSA, but many others as well. We are proud to be Philippe’s partner with the CIO/CISO Interchange.

Jay Chaudhry, CEO of Zscaler, has an unbelievable record as a serial entrepreneur in information security and has led one of the most successful industry IPOs in our history.  Jay started Zscaler at about the same time as CSA was getting off the ground, and never fails to get behind important CSA initiatives. Jay was the first person who fully articulated Security-as-a-Service to me, which helped craft our mission statement of securing the cloud, as well as leveraging the cloud to secure the rest of the world.

Phil Dunkelberger, CEO of Nok Nok Labs, was a founding CEO while leading PGP Corporation. Phil’s zeal in promoting ubiquitous encryption was not merely based upon helping a company protect its information, but on how accelerating the exchange of trusted information can transform business as we know it. Phil has supported numerous industry initiatives and was a key stakeholder in launching the FIDO Alliance, tackling the very difficult problem of online identities and strong authentication.

I have found that successful CEOs in our industry share common traits. They have a sense of the magnitude of the battle we fight that supersedes any one company’s mission. They think about the world with a longer-term perspective than the immediacy of today. They have tremendous empathy for the good guys in our industry and want to make them successful. A company could do a lot worse than to have three such founding CEOs.

Education: A Cloud Security Investigation (CSI)

By Will Houcheime, Product Marketing Manager, Bitglass

cloud education painted on pavement

Cloud computing is now widely used in higher education. It has become an indispensable tool for both the institutions themselves and their students. This is mainly because cloud applications, such as G Suite and Microsoft Office 365, come with built-in sharing and collaboration functionality – they are designed for efficiency, teamwork, and flexibility. This, when combined with the fact that education institutions tend to receive massive discounts from cloud service providers, has led to a cloud adoption rate in education that surpasses that of every other industry. Naturally, this means that education institutions need to find a cloud security solution that can protect their data wherever it goes.

Cloud adoption means new security concerns

When organizations move to the cloud, there are new security concerns that must be addressed; for example, cloud applications, which are designed to enable sharing, can be used to share data with parties that are not authorized to view it. Despite the fact that some of these applications have their own native security features, many lack granularity, meaning that sensitive data such as personally identifiable information (PII), protected health information (PHI), federally funded research, and payment card industry (PCI) data can still fall into the wrong hands.
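To illustrate what "granularity" means in practice, here is a minimal sketch of pattern-based content classification, a toy stand-in for the content inspection a CASB or DLP engine performs. The categories, regexes, and sharing policy below are illustrative assumptions, not any vendor's actual rules; real engines use far more robust detection (checksums, proximity analysis, ML classifiers).

```python
import re

# Illustrative detection rules only; real DLP rules are far more robust.
PATTERNS = {
    "US_SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PCI_CARD": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def classify(text):
    """Return the set of sensitive-data categories found in text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

def sharing_decision(text, recipient_domain, allowed_domains):
    """Granular policy: block external sharing only when the content
    actually contains sensitive data; allow everything else."""
    hits = classify(text)
    if hits and recipient_domain not in allowed_domains:
        return ("block", hits)
    return ("allow", hits)
```

The point of the sketch is the policy shape: decisions key off the data's content and the sharing context, not a blanket allow/deny for the whole application.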

Complicating the situation further is the fact that education institutions are required to comply with multiple regulations; for example, FERPA, FISMA, PCI DSS, and HIPAA. Additionally, when personal devices are used to access data (a common occurrence for faculty and students alike), securing data and complying with regulatory demands becomes even more challenging.

Fortunately, cloud access security brokers (CASBs) are designed to protect data in today’s business world. Leading CASBs provide complete visibility and control over data in any app, any device, anywhere. Identity and access management capabilities, zero-day threat detection, and granular data protection policies ensure that sensitive information is safe and regulatory demands are thoroughly addressed.

Want to learn more? Download the Higher Education Solution Brief.

Introducing CAIQ-Lite

By Dave Christiansen, Marketing Director, Whistic

CAIQ-Lite: A New Framework for Cloud Vendor Assessment report cover

The Cloud Security Alliance and Whistic are pleased to release CAIQ-Lite beta, a new framework for cloud vendor assessment.

CSA and Whistic identified the need for a lighter-weight assessment questionnaire in order to accommodate the shift to cloud procurement models, and to enable cybersecurity professionals to more easily engage with cloud vendors. CAIQ-Lite was developed to meet the demands of an increasingly fast-paced cybersecurity environment, where ease of adoption is paramount when selecting a vendor security questionnaire.

With the initial objective of developing an effective questionnaire containing 100 or fewer questions, CAIQ-Lite contains 73 questions compared to the 295 found in the CAIQ, while maintaining representation of 100 percent of the original 16 control domains present in the Cloud Controls Matrix (CCM) 3.0.1. Contributing research leveraged multiple sources of CSA member and Whistic customer feedback, as well as a panel of hundreds of IT security professionals. Research behind Whistic’s proprietary scoring algorithm was utilized as a part of the final CAIQ-Lite question selection process.

We look forward to community feedback on CAIQ-Lite, which can be accessed by CSA members for free at Whistic, as well as from CSA. The current version will be improved over the next 12 months, based on additional community input. Also, any members that already have a CAIQ on the CSA STAR Program will automatically have a CAIQ-Lite generated for them on the Whistic Platform.

Click to access the full whitepaper, containing further details regarding the creation and deployment of this new cloud service questionnaire. 


Five Years of the GitHub Bug Bounty Program

By Philip Turnbull, Senior Application Security Engineer, GitHub

octocat detective
Image credit: GitHub. This article was originally published by the GitHub team.

GitHub launched our Security Bug Bounty program in 2014, allowing us to reward independent security researchers for their help in keeping GitHub users secure. Over the past five years, we have been continuously impressed by the hard work and ingenuity of our researchers. Last year was no different and we were glad to pay out $165,000 to researchers from our public bug bounty program in 2018.

We’ve previously talked about our other initiatives to engage with researchers. In 2018, our researcher grants, private bug bounty programs, and a live-hacking event allowed us to reach even more independent security talent. These different ways of working with the community helped GitHub reach a huge milestone in 2018: $250,000 paid out to researchers in a single year.

We’re happy to share some of our highlights from the past year and introduce some big changes for the coming year: full legal protection for researchers, more GitHub properties eligible for rewards, and increased reward amounts.

2018 Highlights

GraphQL and API authorization researcher grant

Since the launch of our researcher grants program in 2017 we’ve been on the lookout for bug bounty researchers who show a specialty in particular features of our products. In mid-2018 @kamilhism submitted a series of vulnerabilities to the public bounty program showing his expertise in the authorization logic of our REST and GraphQL APIs. To support his future research, we provided Kamil with a fixed grant payment to perform a systematic audit of our API authorization logic. Kamil’s audit was excellent, uncovering and allowing us to fix an additional seven authorization flaws in our API.

H1-702

In August, GitHub took part in HackerOne’s H1-702 live-hacking event in Las Vegas. This brought together over 75 of the top researchers from HackerOne to focus on GitHub’s products for one evening of live-hacking. The event didn’t disappoint—GitHub’s security improved and nearly $75,000 was paid out for 43 vulnerabilities. This included one critical-severity vulnerability in GitHub Enterprise Server. We also met with our researchers in-person and received great feedback on how we could improve our bug bounty program.

GitHub Actions private bug bounty

In October, GitHub launched a limited public beta of GitHub Actions. As part of the limited beta, we also ran a private bug bounty program to complement our extensive internal security assessments. We sent out over 150 invitations to researchers from last year’s private program, all H1-702 participants, and invited a number of the best researchers that have worked with our public program. The private bounty program allowed us to uncover a number of vulnerabilities in GitHub Actions.

We also held an office-hours event so that the GitHub security team and researchers could meet. We took the opportunity to meet face-to-face with other researchers because it’s a great way to build a community and learn from each other. Two of our researchers, @not-an-aardvark and @ngaloggc, gave an overview of their submissions and shared details of how they approached the target with everyone.

Workflow improvements

We’ve been making refinements to our internal bug bounty workflow since we last announced it back in 2017.  Our ChatOps-based tools have continued to evolve over the past year as we find more ways to streamline the process. These aren’t just technical changes—each day we’ve had individual on-call first responders who were responsible for handling incoming bounty submissions. We’ve also added a weekly status meeting to review current submissions with all members of the Application Security team. These meetings allow the team to ensure that submissions are not stalled, work is correctly prioritized by engineering teams based on severity, and researchers are getting timely updates on their submissions.

A key success metric for our program is how much time it takes to validate a submission and triage that information to the relevant engineering team so remediation work can begin. Our workflow improvements have paid off and we’ve significantly reduced the average time to triage from four days in 2017 down to 19 hours. Likewise, we’ve reduced our average time to resolution from 16 days to six days. Keep in mind: for us to consider a submission as resolved, the issue has to either be fixed or properly prioritized and tracked by the responsible engineering team.

We’ve continued to reach our target of replying to researchers in less than 24 hours on average. Most importantly for our researchers, we’ve also dropped our average time for rewarding a submission from 17 days in 2017 down to 11 days. We’re grateful for the effort that researchers invest in our program and we aim to reduce these times further over the next year.

2019 initiatives

Although our program has been running successfully for the past five years, we know that we can always improve. We’ve taken feedback from our researchers and are happy to announce three major changes to our program for 2019:

Keeping bounty program participants safe from the legal risks of security research is a high priority for GitHub. To make sure researchers are as safe as possible, we’ve added a robust set of Legal Safe Harbor terms to our site policy. Our new policies are based on CC0-licensed templates by GitHub’s Associate Corporate Counsel, @F-Jennings. These templates are a fork of EdOverflow’s Legal Bug Bounty repo, with extensive modifications based on broad discussions with security researchers and Amit Elazari’s general research in this field. The templates are also inspired by other best-practice safe harbor examples including Bugcrowd’s disclose.io project and Dropbox’s updated vulnerability disclosure policy.

Our new Legal Safe Harbor terms cover three main sources of legal risk:

  • Your research activity remains protected and authorized even if you accidentally overstep our bounty program’s scope. Our safe harbor now includes a firm commitment not to pursue civil or criminal legal action, or support any prosecution or civil action by others, for participants’ bounty program research activities. You remain protected even for good faith violations of the bounty policy.
  • We will do our best to protect you against legal risk from third parties who won’t commit to the same level of safe harbor protections. Our safe harbor terms now limit report-sharing with third parties in two ways. We will share only non-identifying information with third parties, and only after notifying you and getting that third party’s written commitment not to pursue legal action against you. Unless we get your written permission, we will not share identifying information with a third party.
  • You won’t be violating our site terms if it’s specifically for bounty research. For example, if your in-scope research includes reverse engineering, you can safely disregard the GitHub Enterprise Agreement’s restrictions on reverse engineering. Our safe harbor now provides a limited waiver for relevant parts of our site terms and policies. This protects against legal risk from DMCA anti-circumvention rules or similar contract terms that could otherwise prohibit necessary research tasks like reverse engineering or deobfuscating code.

Other organizations can look to these terms as an industry standard for safe harbor best practices—and we encourage others to freely adopt, use, and modify them to fit their own bounty programs. In creating these terms, we aim to go beyond the current standards for safe harbor programs and provide researchers with the best protection from criminal, civil, and third-party legal risks. The terms have been reviewed by expert security researchers, and are the product of many months of legal research and review of other legal safe harbor programs. Special thanks to MG, Mugwumpjones, and several other researchers for providing input on early drafts of @F-Jennings’ templates.

Expanded scope

Over the past five years, we’ve been steadily expanding the list of GitHub products and services that are eligible for reward. We’re excited to share that we are now increasing our bounty scope to reward vulnerabilities in all first-party services hosted under our github.com domain. This includes GitHub Education, GitHub Learning Lab, GitHub Jobs, and our GitHub Desktop application. While GitHub Enterprise Server has been in scope since 2016, to further increase the security of our enterprise customers we are now expanding the scope to include Enterprise Cloud.

It’s not just about our user-facing systems. The security of our users’ data also depends on the security of our employees and our internal systems. That’s why we’re also including all first-party services under our employee-facing githubapp.com and github.net domains.

Increased rewards

We regularly assess our reward amounts against our industry peers. We also recognize that finding higher-severity vulnerabilities in GitHub’s products is becoming increasingly difficult for researchers and they should be rewarded for their efforts. That’s why we’ve increased our reward amounts at all levels:

  • Critical: $20,000–$30,000+
  • High: $10,000–$20,000
  • Medium: $4,000–$10,000
  • Low: $617–$2,000

Our broad ranges have served us well, but we’ve been consistently impressed by the ingenuity of researchers. To recognize that, we no longer have a maximum reward amount for critical vulnerabilities. Although we’ve listed $30,000 as a guideline amount for critical vulnerabilities, we’re reserving the right to reward significantly more for truly cutting-edge research.

Get involved

The bounty program remains a core part of GitHub’s security process and we’re learning a lot from our researchers. With our new initiatives, now is the perfect time to get involved. Details about our safe harbor, expanded scope, and increased awards are available on the GitHub Bug Bounty site.

Working with the community has been a great experience—we’re looking forward to triaging your submissions in the future!

Bitglass Security Spotlight: DoD, Facebook & NASA

By Will Houcheime, Product Marketing Manager, Bitglass

red arrow with news icon

Here are the top cybersecurity stories of recent weeks: 

—Cybersecurity vulnerabilities found in US missile system
—Facebook shares private user data with Amazon, Netflix, and Spotify
—Personal information of NASA employees exposed
—Chinese nationals accused of hacking into major US company databases
—Private complaints of Silicon Valley employees exposed via Blind

Cybersecurity vulnerabilities found in US missile system
The United States Department of Defense conducted a security audit of the U.S. ballistic missile system and found shocking results. The system’s security was outdated and not in keeping with protocol. The audit revealed that the ballistic missile system was lacking data encryption, antivirus programs, and multifactor authentication. Additionally, the Department of Defense found 28-year-old security gaps that were leaving computers vulnerable to local and remote attacks. Obviously, the Missile Defense Agency must improve its cybersecurity posture before the use of defense weaponry is required.

Facebook shares private user data with Amazon, Netflix, and Spotify
The security of Facebook users continues to be in question due to the company’s sharing of private messages with third parties. The New York Times discovered Facebook documents from 2017 that explained how companies such as Spotify and Netflix were able to access private messages from over 70 million users per month. Reports suggest that these companies had the ability to read, write, and delete private messages on Facebook, which is disturbing news to anyone who uses the popular social network.

Personal information of NASA employees exposed
The personally identifiable information (PII) of current and former NASA employees was compromised early last year. The organization reached out to the affected individuals notifying them of the data breach. The identity of the intruder was unknown; however, it was confirmed that the breach allowed Social Security numbers to be compromised. 

Chinese nationals accused of hacking into major US company databases
A group of hackers working for the Chinese government has been indicted by the U.S. Government for stealing intellectual property from tech companies. While the companies haven’t been named, prosecutors have charged two Chinese nationals with computer hacking, conspiracy to commit wire fraud, and aggravated identity theft.

Private complaints of Silicon Valley employees exposed via Blind
A social networking application by the name of Blind failed to secure sensitive user information when it left a database server completely exposed. Blind allows users to anonymously discuss topics including tech, finance, and e-commerce, as well as happenings within their workplaces (the app is used by employees of over 70,000 different companies). Anyone who knew how to find the online server could view each user’s account information without a password. Unfortunately, this security lapse exposed users’ identities and, consequently, allowed their employers to be implicated in their work-related stories.

To learn about cloud access security brokers (CASBs) and how they can protect your enterprise from ransomware, data leakage, misconfigurations, and more, download the Definitive Guide to CASBs.

Rocks, Pebbles, Shadow IT

By Rich Campagna, Chief Marketing Officer, Bitglass

Way back in 2013/14, Cloud Access Security Brokers (CASBs) were first deployed to identify Shadow IT, or unsanctioned cloud applications. At the time, the prevailing mindset amongst security professionals was that cloud was bad, and discovering Shadow IT was viewed as the first step towards stopping the spread of cloud in their organization.

Flash forward just a few short years and the vast majority of enterprises have done a complete 180º with regards to cloud, embracing an ever-increasing number of “sanctioned” cloud apps. As a result, the majority of CASB deployments today are focused on real-time data protection for sanctioned applications – typically starting with System of Record applications that handle wide swaths of critical data (think Office 365, Salesforce, etc.). Shadow IT discovery, while still important, is almost never the main driver in the CASB decision making process.

Regardless, I still occasionally hear of CASB intentions that harken back to the days of yore – “we intend to focus on Shadow IT discovery first before moving on to protect our managed cloud applications.” Organizations that start down this path quickly fall into the trap of building time-consuming processes for triaging and dealing with what quickly grows from hundreds to thousands of applications, all the while delaying building appropriate processes for protecting data in the sanctioned applications where they KNOW sensitive data resides.

This approach is a remnant of marketing positioning by early vendors in the CASB space. For me, it brings to mind Habit #3 from Stephen Covey’s The 7 Habits of Highly Effective People: “Put First Things First.”

Putting first things first is all about focusing on your most important priorities. There’s a video of Stephen famously demonstrating this habit on stage in one of his seminars. In the video, he asks an audience member to fill a bucket with sand, followed by pebbles, and then big rocks. The result is that once the pebbles and sand fill the bucket, there is no more room for the rocks. He then repeats the demonstration by having her add the big rocks first. The result is that all three fit in the bucket, with the pebbles and sand filtering down between the big rocks.

Now, one could argue that after you take care of the big rocks, perhaps you should just forget about the sand, but regardless, this lesson is directly applicable to your CASB deployment strategy:

You have major sanctioned apps in the cloud that contain critical data. These apps require controls around data leakage, unmanaged device access, credential compromise and malicious insiders, malware prevention, and more. Those are your big rocks and the starting point of your CASB rollout strategy. Focus too much on the sand and you’ll never get to the rocks.

Read what Gartner has to say on the topic in 2018 Critical Capabilities for CASBs.

Rethinking Security for Public Cloud

Symantec’s Raj Patel highlights how organizations should be retooling security postures to support a modern cloud environment

By Beth Stackpole, Writer, Symantec

old fashioned scales with glass globe on one side and gold coins on the other

Enterprises have come a long way with cyber security, embracing robust enterprise security platforms and elevating security roles and best practices. Yet with public cloud adoption on the rise and businesses shifting to agile development processes, new threats and vulnerabilities are testing traditional security paradigms and cultures, mounting pressure on organizations to seek alternative approaches.

Raj Patel, Symantec’s vice president, cloud platform engineering, recently shared his perspective on the shortcomings of a traditional security posture along with the kinds of changes and tools organizations need to embrace to mitigate risk in an increasingly cloud-dominant landscape.

Q: What are the key security challenges enterprises need to be aware of when migrating to the AWS public cloud and what are the dangers of continuing traditional security approaches?

A: There are a few reasons why it’s really important to rethink this model. First of all, the public cloud by its very definition is a shared security model with your cloud provider. That means organizations have to play a much more active role in managing security in the public cloud than they may have had in the past.

Infrastructure is provided by the cloud provider and, as such, responsibility for security is being decentralized within an organization. The cloud provider provides a certain level of base security, but the application owner develops infrastructure directly on top of the public cloud and thus must now be security-aware.

The public cloud environment is also a very fast-moving world, which is one of the key reasons why people migrate to it. It is infinitely scalable and much more agile. Yet those very same benefits also create a significant amount of risk. Security errors are going to propagate at the same speed if you are not careful and don’t do things right. So from a security perspective, you have to apply that logic in your security posture.

Finally, the attack vectors in the cloud are the entire fabric of the cloud. Traditionally, people might worry about protecting their machines or applications. In the public cloud, the attack surface is the entire fabric of the cloud–everything from infrastructure services to platform services, and in many cases, software services. You may not know all the elements of the security posture of all those services … so your attack surface is much larger than you have in a traditional environment.

Q: Where does security fit in a software development lifecycle (SDLC) when deploying to a public cloud like AWS and how should organizations retool to address the demands of the new decentralized environment?

A: Most organizations going through a cloud transformation take a two-pronged approach. First, they are migrating their assets and infrastructure to the public cloud and second, they are evolving their software development practices to fit the cloud operating model. This is often called going cloud native and it’s not a binary thing—it’s a journey.

With that in mind, most cloud native transformations require a significant revision of the SDLC … and in most cases, firms adopt some form of a software release pipeline, often called a continuous integration, continuous deployment (CI/CD) pipeline. I believe that security needs to fit within the construct of the release management pipeline or CI/CD practice. Security becomes yet another error class to manage just like a bug. If you have much more frequent release cycles in the cloud, security testing and validation has to move at the same speed and be part of the same release pipeline. The software tools you choose to manage such pipelines should accommodate this modern approach.

Q: Can you explain the concept of DevSecOps and why it’s an important best practice for public cloud security?

A: DevOps is a cultural construct. It is not a function. It is a way of doing something—specifically, a way of building a cloud-native application. And a new term, DevSecOps, has emerged which contends that security should be part of the DevOps construct. In a sense, DevOps is a continuum from development all the way to operations, and the DevSecOps philosophy says that development, security, and operations are one continuum.

Q: DevOps and InfoSec teams are not typically aligned—what are your thoughts on how to meld the decentralized, distributed world of DevOps with the traditional command-and-control approach of security management?

A: It starts with a very strong, healthy respect for the discipline of security within the overall application development construct. Traditionally, InfoSec professionals didn’t intersect with DevOps teams because security work happened as an independent activity or as an adjunct to the core application development process. Now, as we’re talking about developing cloud-native applications, security is part of how you develop because you want to maximize agility and frankly, harness the pace of development changes going on.

One practice that works well is when security organizations embed a security professional or engineer within an application group or DevOps group. Oftentimes, the application owners complain that the security professionals are too far removed from the application development process so they don’t understand it or they have to explain a lot, which slows things down. I’m proposing breaking that log jam by embedding a security person in the application group so that the security professional becomes the delegate of the security organization, bringing all their tools, knowledge, and capabilities.

At Symantec, we also created a cloud security practitioners working group as we started our cloud journey. Engineers involved in migrating to the public cloud as well as our security professionals work as one common operating group to come up with best practices and tools. That has been very powerful because it is not a top-down approach, it’s not a bottom-up approach–it is the best outcome of the collective thinking of these two groups.

Q: How does the DevSecOps paradigm address the need for continuous compliance management as a new business imperative?

A: It’s not as much that DevSecOps invokes continuous compliance validation as much as the move to a cloud-native environment does. Changes to configurations and infrastructure are much more rapid and distributed in nature. Since changes are occurring almost on a daily basis, the best practice is to move to a continuous validation mode. The cloud allows you to change things or move things really rapidly and in a software-driven way. That means lots of good things, but it can also mean increasing risk a lot. This whole notion of DevSecOps to CI/CD to continuous validation comes from that basic argument.
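Continuous compliance validation amounts to evaluating every configuration change against declarative policy rules as it happens, rather than at audit time. A minimal sketch of that idea, with illustrative resource fields and rules (not any real cloud provider's API or any specific compliance framework):

```python
# Sketch of continuous compliance validation: on every change event,
# evaluate resource snapshots against declarative policy rules.
# Resource fields and rules here are illustrative assumptions.
RULES = [
    ("storage-no-public-access",
     lambda r: r["type"] != "storage" or not r.get("public", False)),
    ("encryption-at-rest",
     lambda r: r.get("encrypted", False)),
]

def evaluate(resources):
    """Return a list of (resource_id, rule_name) violations."""
    return [(r["id"], name)
            for r in resources
            for name, check in RULES
            if not check(r)]
```

In practice a check like this would be wired to the provider's change-notification stream so that a misconfiguration is flagged within minutes of being introduced, instead of surfacing months later in a periodic audit.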

Bitglass Security Spotlight: Financial Services Facing Cyberattacks

By Will Houcheime, Product Marketing Manager, Bitglass

young man in hoodie staring at financial screens

Here are the top cybersecurity stories of recent months:

—Customer information exposed in Bankers Life hack
—American Express India leaves customers defenseless
—Online HSBC accounts breached
—Millions of dollars taken from major Pakistani banks
—U.S. government infrastructure accessed via DJI drones

Customer information exposed in Bankers Life hack
566,000 individuals have been notified that their personal information has been exposed. Unauthorized third parties breached Bankers Life websites by obtaining employee credentials. The hackers were then able to access personal information belonging to applicants and customers; for example, the last four digits of Social Security numbers, full names, addresses, and more.

American Express India leaves customers defenseless
Through an unsecured server, 689,262 American Express India records were found in plain text. Anyone who came across the database housing this information could easily access personally identifiable information (PII) such as customer names, phone numbers, and addresses. The extent of access is not currently known.

Online HSBC accounts breached
HSBC has announced that about 1% of its U.S. customers’ bank accounts have been hacked. The bank has stated that the attackers had access to account numbers, balances, payee details, and more. Naturally, financial details are highly sensitive and must be thoroughly protected.

Millions of dollars taken from major Pakistani banks
According to the Federal Investigation Agency (FIA), almost all of the major Pakistani banks have been affected by a cybersecurity breach. This event exposed the details of over 19,000 debit cards from 22 different banks. This was the biggest cyberattack to ever hit the banking system of Pakistan, resulting in a loss of $2.6 million.

U.S. government infrastructure accessed via DJI drones
Da Jiang Innovations (DJI) was accused of leaking confidential U.S. law enforcement information to the Chinese government. DJI quickly denied the passing of any information to another organization. However, it has since been determined that DJI’s data security was inadequate, and that sensitive information could be easily accessed by unauthorized third parties.

To defend against these threats, financial services firms should adopt a comprehensive security solution like a cloud access security broker (CASB).

To learn more about the state of security in financial services, download Bitglass’ 2018 Financial Breach Report.