Happy Birthday GDPR! – Defending Against Illegitimate Complaints

By John DiMaria; CSSBB, HISP, MHISP, AMBCI, CERP, Assurance Investigatory Fellow – Cloud Security Alliance

On May 25th we will celebrate the first birthday of the GDPR. Yes, one year ago GDPR was sort of a four-letter word (or acronym, if you will). People were in a panic over how they were going to comply; worse yet, many didn’t even know whether they had to, and worse still, some just ignored it altogether.

The European Data Protection Board (EDPB) published an infographic on compliance with and enforcement of the GDPR from May 2018 to January 2019. It shows that 95,180 complaints were made to EU national data protection authorities by individuals who believe their rights under the GDPR have been violated. The most common of these complaints concerned telemarketing and promotional emails, which practically every organization uses as its main tools for communication.

Now, we could discuss some of the biggest fines levied, like those against Google and Facebook, but that’s been done to death, and quite frankly the largest percentage of companies globally don’t fall into the category of Google and Facebook, nor do their budgets even come close.

I would prefer to concentrate on a topic you don’t see covered in the news much: complaints, and the time, effort, and cost of defending yourself even if you’re not guilty.

Think about it: anybody can log a complaint. Whether or not you are in violation is one issue; proving you are not is another. While this is a troubling issue for large enterprises, small and medium-sized organizations can have a particularly tough time, as time, money, and resources are at a premium. As the EDPB report mentioned, 95,180 complaints have been made to EU national data protection authorities by individuals who “believe” their rights under the GDPR have been violated. As you can imagine, this can send a company scrambling to pull all the data and evidence together to prove not only compliance but the effectiveness of the system. Further, what if you are called out and are technically not guilty of the specific infraction logged, but in the course of the investigation major non-conformities are found in your process?

So what is the best way to protect yourself and ensure not only compliance but readiness, from both a process and a forensic perspective?

Ensure you have a solid data governance program in place that covers both the security and privacy aspects of your organization. While there are many ways to attack this, cloud service providers and users need to make sure that the proper sector-specific controls are in place, not just generic ones, and that the scope is fit for purpose. It must cover people, process, and technology to ensure holistic coverage.

CSA has been researching solutions to address these issues and since 2011 CSA STAR has evolved into a total GRC solution for cloud service providers and it continues to improve.

The Security, Trust, Assurance, and Risk (STAR) Program was developed by the Cloud Security Alliance to provide the industry with a standard by which enterprises procuring cloud services can make informed, data-driven decisions.

The STAR program encompasses the key principles of transparency, rigorous auditing, and harmonization of standards, providing a single program and a comprehensive suite that covers both security and privacy compliance.

So what level is best for you? You can read our quick reference guide, but gap assessments are always the best starting point. Measure where you are against where you want to go and act on the differences! This also allows you to give yourself credit for your strengths; many organizations have a lot of good things going on, so don’t assume you face a major hurdle.

A combination of STAR Level 1 and the GDPR Code of Conduct self-assessment (or code of practice) is the one-two punch on the road to due diligence. If you are already certified to ISO/IEC 27001 or undergo regular SOC 2 assessments, you may also want to consider STAR Level 2 certification or attestation, which increases not only your level of transparency but also your level of assurance, because it is third-party tested and certified.

The GDPR CoC is still in the self-assessment stage, but third-party certification will be available as soon as the European Data Protection Board finalizes all the annexes related to accreditation and certification (est. Q4). In the meantime, your submission is vetted thoroughly by our GDPR experts; once approved, you can file a PLA Code of Conduct (CoC) Statement of Adherence Self-Assessment, and your organization will be posted on the registry. After publication, your company will receive authorized use of a Compliance Mark, valid for one year. You are then expected to revise your assessment every time there is a change to the company policies or practices related to the service under assessment.

There is a small fee to cover administration, maintenance, and the vetting process, but it shows due diligence, and when you consider the potential millions of euros in fines you face for non-compliance[1] (or a percentage of annual global turnover, whichever is higher), the fee is a drop in the bucket for some peace of mind. If you already think you are compliant, the GDPR CoC self-assessment can serve as another set of eyes and also provide a public statement of transparency.

It makes sense, no matter where you fall in the supply chain, to take data privacy seriously. The CSA GDPR CoC can help you establish a security-conscious culture. GDPR requires organizations to identify their security strategy and adopt adequate administrative and technical measures to protect personal data. Thanks to CSA’s research, the CSA GDPR CoC provides a roadmap that will facilitate your organization’s efforts: your processes will become more consolidated, ensuring good governance and compliance and demonstrating that all-important due diligence. Additionally, your data will be easier to use, and you will realize its underlying value and ROI.

For more information and to discuss with one of our experts, contact us at [email protected]

[1] Up to €10 million, or 2% of annual global turnover – whichever is higher; or, for more serious violations, up to €20 million, or 4% of annual global turnover – whichever is higher.

12 Ways Cloud Upended IT Security (And What You Can Do About It)

This article was originally published on Fugue's blog here. 

By Andrew Wright, Co-founder & Vice President of Communications, Fugue

The cloud represents the most disruptive trend in enterprise IT over the past decade, and security teams have not escaped turmoil during the transition. It’s understandable for security professionals to feel like they’ve lost some control in the cloud and feel frustrated while attempting to get a handle on the cloud “chaos” in order to secure it from modern threats.

Here, we take a look at the ways cloud has disrupted security, with insights into how security teams can take advantage of these changes and succeed in their critical mission to keep data secure.

1. The cloud relieves security of some big responsibilities

Organizations liberate themselves from the burdens of acquiring and maintaining physical IT infrastructure when they adopt cloud, and this means security is no longer responsible for the security of physical infrastructure. The Shared Responsibility Model of cloud dictates that Cloud Service Providers (CSPs) such as AWS and Azure are responsible for the security of the physical infrastructure. CSP customers (that’s you!) are responsible for the secure use of cloud resources. There’s a lot of misunderstanding out there about the Shared Responsibility Model, however, and that brings risk.

2. In the cloud, developers make their own infrastructure decisions

Cloud resources are available on-demand via Application Programming Interfaces (APIs). Because the cloud is self-service, developers move fast, sidestepping traditional security gatekeepers. When developers spin up cloud environments for their applications, they’re configuring the security of their infrastructure. And developers can make mistakes, including critical cloud resource misconfigurations and compliance policy violations.

3. And developers change those decisions constantly

Organizations can innovate faster in the cloud than they ever could in the datacenter. Continuous Integration and Continuous Deployment (CI/CD) means continuous change to cloud environments. And it’s easy for developers to change infrastructure configurations to perform tasks like getting logs from an instance or troubleshooting an issue. So, even if the security of their cloud infrastructure was correct on day one, a misconfiguration vulnerability may have been introduced on day two (or hour two).

4. The cloud is programmable and can be automated

Because cloud resources can be created, modified, and destroyed via APIs, developers have ditched web-based cloud “consoles” and taken to programming their cloud resources using infrastructure-as-code tools like AWS CloudFormation and HashiCorp Terraform. Massive cloud environments can be predefined, deployed on-demand, and updated at will, programmatically and with automation. These infrastructure configuration files include the security-related configurations for critical resources.
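To make that last point concrete, the sketch below scans a CloudFormation-style template fragment for security groups that leave SSH open to the entire internet. The template and the check are hypothetical simplifications for illustration, not a real policy engine:

```python
import json

# Hypothetical CloudFormation-style template fragment: note that the
# security-sensitive settings live in the same file as everything else.
template = json.loads("""
{
  "Resources": {
    "WebSecurityGroup": {
      "Type": "AWS::EC2::SecurityGroup",
      "Properties": {
        "SecurityGroupIngress": [
          {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22, "CidrIp": "0.0.0.0/0"}
        ]
      }
    }
  }
}
""")

def open_ssh_ingress(template):
    """Return names of security groups that allow SSH from anywhere."""
    findings = []
    for name, res in template["Resources"].items():
        if res.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in res["Properties"].get("SecurityGroupIngress", []):
            if rule.get("FromPort") == 22 and rule.get("CidrIp") == "0.0.0.0/0":
                findings.append(name)
    return findings

print(open_ssh_ingress(template))  # ['WebSecurityGroup']
```

Because the configuration is just data, checks like this can run in CI before anything is provisioned.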

5. There are more kinds of infrastructure in the cloud to secure

In certain respects, security in the datacenter is easier to manage. You have your network, firewalls, and servers on racks. The cloud has those too, in virtualized form. But the cloud also produced a flurry of new kinds of infrastructure resources, like serverless and containers. AWS alone has introduced hundreds of new kinds of services over the past few years. Even familiar things like networks and firewalls operate in unfamiliar ways in the cloud. All require new and different security postures.

6. There’s also more infrastructure in the cloud to secure

There are simply more cloud infrastructure resources to track and secure, and due to the elastic nature of cloud, “more” varies by the minute. Teams operating at scale in the cloud may be managing dozens of environments across multiple regions and accounts, and each may involve tens of thousands of resources that are individually configured and accessible via APIs. These resources interact with each other and require their own identity and access management (IAM) permissions. Microservice architectures compound this problem.

7. Cloud security is all about configuration—and misconfiguration

Cloud operations is all about the configuration of cloud resources, including security-sensitive resources such as networks, security groups, and access policies for databases and object storage. Without physical infrastructure to concern yourself with, security focus shifts to the configuration of cloud resources to make sure they’re correct on day one, and that they stay that way on day two and beyond.

8. Cloud security is also all about identity

In the cloud, many services connect to each other via API calls, requiring identity management for security rather than IP-based network rules, firewalls, etc. For instance, a connection from a Lambda to an S3 bucket is accomplished using a policy attached to a role that the Lambda takes on—its service identity. Identity and Access Management (IAM) and similar services are complex and feature-rich, and it’s easy to be overly permissive just to get things to work. And since these cloud services are created and managed with configuration, see #7.
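Here is a minimal sketch of what catching “overly permissive just to get things to work” can look like: flag policy statements that grant wildcard actions or resources. The policy document shape follows the familiar AWS IAM format, but the checker itself is illustrative only:

```python
def overly_permissive(policy):
    """Flag Allow statements that grant wildcard actions or resources."""
    flagged = []
    for stmt in policy.get("Statement", []):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):
            actions = [actions]
        if stmt.get("Effect") == "Allow" and (
            "*" in actions
            or any(a.endswith(":*") for a in actions)
            or stmt.get("Resource") == "*"
        ):
            flagged.append(stmt)
    return flagged

# A role policy that "just works" -- and grants far more than a single
# Lambda-to-bucket connection needs:
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:*", "Resource": "*"},
    ],
}
print(len(overly_permissive(policy)))  # 1
```

Scoping the statement down to specific actions (e.g., `s3:GetObject`) on a specific bucket ARN would make the checker pass.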

9. The nature of threats to the cloud is different

Bad actors use code and automation to find vulnerabilities in your cloud environment and exploit them, and automated threats will always outrun manual or semi-manual defenses. Your cloud security must be resilient against modern threats, which means it must cover all critical resources and policies and recover from any misconfiguration of those resources automatically, without human involvement. The key metric here is Mean Time to Remediation (MTTR) for critical cloud misconfiguration. If yours is measured in hours, days, or (gasp!) weeks, you’ve got work to do.
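Computing MTTR is straightforward once detection and remediation events are timestamped. A minimal sketch, assuming each incident is recorded as a (detected, remediated) pair:

```python
from datetime import datetime, timedelta

def mean_time_to_remediation(incidents):
    """Average time between detecting and fixing a misconfiguration."""
    deltas = [fixed - found for found, fixed in incidents]
    return sum(deltas, timedelta()) / len(deltas)

# Two illustrative incidents: fixed in 10 and 30 minutes respectively.
incidents = [
    (datetime(2019, 5, 1, 9, 0), datetime(2019, 5, 1, 9, 10)),
    (datetime(2019, 5, 2, 14, 0), datetime(2019, 5, 2, 14, 30)),
]
print(mean_time_to_remediation(incidents))  # 0:20:00
```

If this number is dominated by hand-off delays between teams rather than detection lag, that points to the remediation process, not the tooling.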

10. Datacenter security doesn’t work in the cloud

By now, you’ve probably concluded that many of the security tools that worked in the datacenter aren’t of much use in the cloud. This doesn’t mean you need to ditch everything you’ve been using, but learn which tools still apply and which are obsolete. For instance, application security still matters, but network monitoring tools that rely on spans or taps to inspect traffic don’t, because CSPs don’t provide direct network access. The primary security gap you need to fill is concerned with cloud resource configuration.

11. Security can be easier and more effective in the cloud

You’re probably ready for some good news. Because the cloud is programmable and can be automated, the security of your cloud is also programmable and can be automated. This means cloud security can be easier and more effective than it ever could be in the datacenter. In the midst of all this cloud chaos lies opportunity!

Monitoring for misconfiguration and drift from your provisioned baseline can be fully automated, and you can employ self-healing infrastructure for your critical resources to protect sensitive data. And before infrastructure is provisioned or updated, you can run automated tests to validate that infrastructure-as-code complies with your enterprise security policies, just like you do to secure your application code. This lets developers know earlier on if there are problems that need to be fixed, and it ultimately helps them move faster and keep innovating.
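A pre-provisioning policy check of that kind can be as simple as running a list of rules over the planned resources. The resource schema and the two rules below are hypothetical, standing in for a real policy-as-code tool:

```python
# Each rule takes a planned resource dict and returns an error string or None.
RULES = [
    lambda r: "S3 bucket must not be public"
        if r["type"] == "s3_bucket" and r.get("acl") == "public-read" else None,
    lambda r: "Volumes must be encrypted"
        if r["type"] == "ebs_volume" and not r.get("encrypted", False) else None,
]

def validate(resources):
    """Run every policy rule against every declared resource pre-deployment."""
    errors = []
    for res in resources:
        for rule in RULES:
            msg = rule(res)
            if msg:
                errors.append((res["name"], msg))
    return errors

# Planned infrastructure, as it might be parsed from infrastructure-as-code:
planned = [
    {"name": "logs", "type": "s3_bucket", "acl": "private"},
    {"name": "data", "type": "ebs_volume", "encrypted": False},
]
print(validate(planned))  # [('data', 'Volumes must be encrypted')]
```

Failing the CI build on a non-empty error list is what lets developers learn about problems before anything reaches production.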

12. Compliance can also be easier and more effective in the cloud

There’s good news for compliance analysts as well. Traditional manual audits of cloud environments can be incredibly costly, error-prone, and time-consuming, and they’re usually obsolete before they’re completed. Because the cloud is programmable and can be automated, compliance scanning and reporting can be as well. It’s now possible to automate compliance audits and generate reports on a regular basis without investing a lot of time and resources. Because cloud environments change so frequently, a gap between audits that’s longer than a day is probably too long.

Where to start with cloud security

  1. Learn what your developers are doing
    What cloud environments are they using, and how are they separating concerns by account (e.g., dev, test, prod)? What provisioning and CI/CD tools are they using? Are they currently using any security tools? The answers to these questions will help you develop a cloud security roadmap and identify ideal areas of focus.
  2. Apply a compliance framework to an existing environment. 
    Identify violations and then work with your developers to bring the environment into compliance. If you aren’t subject to a compliance regime like HIPAA, GDPR, NIST 800-53, or PCI, then adopt the CIS Benchmark. Cloud providers like AWS and Azure have adapted it to their cloud platforms to help remove guesswork about how it applies to what your organization is doing.
  3. Identify critical resources and establish good configuration baselines.
    Don’t let the forest cause you to lose sight of the really important trees. Work with your developers to identify cloud resources that contain critical data, and establish secure configuration baselines for them (along with related resources like networks and security groups). Start detecting configuration drift for these and consider automated remediation solutions to prevent misconfiguration from leading to an incident.
  4. Help developers be more secure in their work. 
    Embrace a “Shift Left” mentality by working with developers to bake in security earlier in the software development lifecycle (SDLC). DevSecOps approaches such as automated policy checks during development exist to help keep innovation moving fast by eliminating slow, manual security and compliance processes.

The key to an effective and resilient cloud security posture is close collaboration with your development and operations teams to get everyone on the same page and talking the same language. In the cloud, security can’t operate as a stand-alone function.

Read more industry insights from the team at Fugue here!

Continuous Auditing – STAR Continuous – Increasing Trust and Integrity

By John DiMaria, Assurance Investigatory Fellow, Cloud Security Alliance

As a Six Sigma Black Belt, I was brought up over the years with the philosophy of continual monitoring and improvement, moving from a reactive state to a preventive state. In fact, I wrote a white paper a couple of years ago on how Six Sigma is applied to security.

The basic premise is that it emphasizes early detection and prevention of problems, rather than the correction of problems after they have occurred. It eliminates the point-in-time “inspection” by deploying continuous monitoring and auditing. This approach basically saved the automotive industry back in the 1980s.

This age-old and proven process is the best way I can describe what CSA has done with the launch of another step in the direction of increasing transparency and assurance … continuous auditing.

Continuous auditing focuses on testing for the occurrence of a risk and the on-going effectiveness of a control. A framework and detailed procedures, along with technology, are key to enabling such an approach. Continuous auditing offers an enhanced way to understand risks and controls and improve on sampling from periodic reviews to ongoing testing.

STAR Continuous is a component of the CSA STAR Program that gives cloud service providers (CSPs) the opportunity to integrate their approach to cloud security compliance and certification with additional capabilities to validate their security posture on an ongoing basis. Continuous auditing empowers an organization to make precise statements on its compliance status at any time over the whole span in which the continuous audit process is executed, achieving an “always up-to-date” compliance status by increasing the frequency of the auditing process.

Continuous auditing is not intended to replace traditional auditing, but rather is to be used as a tool to enhance audit effectiveness and increase transparency to stakeholders and interested parties.

STAR Continuous contains three models for continuous monitoring. Each of the three models provides a different level of assurance by covering requirements of continuous auditing with various levels of scrutiny. The three models are defined as:

1. Continuous self-assessment
2. Extended certification with continuous self-assessment
3. Continuous certification

[Chart showing the three levels of continuous auditing]

Essentially, the proposed framework starts from a simple process of the timely submission of self- assessment compliance reports and moves up to a continuous certification of the fulfillment of control objectives.

How does it help you as a cloud service provider?

• Provides top management with greater visibility, so that they can evaluate the effectiveness of their management system in real time against internal expectations, regulatory requirements, and cloud security industry standards;

• Implements an audit designed to reflect how your organization’s objectives are aimed at optimizing its cloud services;

• Demonstrates progress and performance levels that go beyond the traditional “point in time” scenario; and

• For customers of cloud service providers, STAR Continuous will provide a greater understanding of the level of controls that are in place and their effectiveness.

CSA is committed to helping customers have a deeper understanding of their security postures. Since the STAR Registry was launched in 2011 as the first step in improving transparency and assurance in the cloud, it has evolved into a program that encompasses the key principles of transparency, rigorous auditing, and harmonization of standards. Companies that use STAR demonstrate best practices and validate the security posture of their cloud offerings.

CSA STAR is being recognized as the international harmonized solution leading the way in trust for cloud providers, users, and their stakeholders by providing an integrated, cost-effective solution that decreases complexity and increases assurance and transparency. It simultaneously enables organizations to secure their information, protect themselves from cyber threats, reduce risk, and strengthen their information governance and privacy platform.

Want to find out more? Contact us at [email protected]

OneTrust and Cloud Security Alliance Partner to Launch Free Vendor Risk Tool for CSA Members

By Gabrielle Ferree, Public Relations and Marketing Manager, OneTrust

OneTrust is excited to announce that we have partnered with Cloud Security Alliance to launch a free Vendor Risk Management (VRM) tool.

The tool, available to CSA members today, automates the vendor risk lifecycle for compliance with the GDPR, CCPA and other global privacy frameworks.

Get started today with the CSA-OneTrust VRM tool.

As the world’s leading organization dedicated to defining and raising awareness of best practices to help ensure a secure cloud computing environment, CSA has 90,000 individual members, 80 chapters globally and 400 corporate members. CSA members can access the VRM tool today and automate vendor risk management at no cost.

[Related: Ovum Radar Report: OneTrust Focused on Identifying and Managing Risk in Vendor Management]

The CSA-OneTrust VRM tool is pre-populated with templates reproducing the CSA’s best practices for cloud security and privacy assurance and compliance, including the Cloud Controls Matrix (CCM), the Consensus Assessments Initiative Questionnaire (CAIQ), and the GDPR Code of Conduct. Privacy and security teams can also build upon existing templates or create custom vendor assessments based on their business-specific needs.

The CSA-OneTrust VRM tool automates the entire vendor management lifecycle, including:

  • onboarding and offboarding vendors
  • triaging vendors
  • populating vendor information and monitoring the vendor risk lifecycle
  • maintaining records for accountability and compliance purposes.

The tool is powered by Vendorpedia™ by OneTrust, a database of privacy and security details of more than 4,000 vendors that automatically populates vendor assessments based on the most up-to-date vendor information.

Our goal is to provide privacy and security professionals the power to automate and simplify what can be an overwhelming task of managing and monitoring vendor risk. We’re proud to work alongside leaders in the industry like CSA and look forward to providing vendor risk assessment and compliance automation for its more than 90,000 members.

To learn more, read our press release. For additional news and updates visit our LinkedIn, Twitter and Facebook.

Get started with the CSA-OneTrust VRM tool or request a demo today.

PCI Compliance for Cloud Environments: Tackle FIM and Other Requirements with a Host-Based Approach

By Patrick Flanders, Director of Marketing, Lacework

Compliance frameworks and security standards are necessary, but they can be a burden on IT and security teams. They provide structure, process, and management guidelines that enable businesses to serve customers and interoperate with other organizations, all according to accepted guidelines that facilitate a better experience for end users.

Yet when the IT environment is the cloud, there is the additional challenge of trying to maintain the fairly static state of compliance in an environment where change is continuous. Every configuration change, addition of a new user, or transaction between data sources, even a seemingly minor change, can have hidden implications that, when discovered, can render the organization non-compliant.

The Payment Card Industry Data Security Standard (PCI DSS) is an industry standard intended to protect credit, debit, and cash card owners against theft of their personally identifiable information (PII), and to equip companies with best-practice guidelines to secure payment processes and supporting IT systems. Established as a collaborative effort by American Express, Discover, MasterCard, Visa, and JCB, it was originally intended to promote secure credit card activity for e-commerce.

Play it safe with PCI

PCI is intended to keep all those transactions safe, but with more money exchanging digital hands, there are more endpoints that PII and financial data touch. At the same time, more financial organizations are moving critical workloads to the cloud, which means they’re managing more change in the name of agility.

Many turn to open source tools for PCI monitoring. These tools are intended to provide high-level file integrity monitoring, but they are only a surface layer. Data transacting inside the cloud environment, and activity moving outside of it, can be targeted by hackers because these tools don’t target inconsistencies in configurations, and they can’t scale to the demands of cloud workloads. Their focus is the network, and they aren’t equipped to look at anything else in the cloud stack. Yet without insight at a level where one can identify and evaluate every cloud action, there can be no true understanding of what is at risk or of the degree to which the organization is out of compliance, and no ability to pinpoint where the problem is so it can be fixed.

Many IT groups piece together open source FIM tools along with legacy security tools like SIEMs and network-based detection systems. In an earlier era when there were fewer endpoints and control governance could be extended to the firewall, this was adequate. But financial organizations are now extending payment options through mobile apps and even IoT devices; the number of endpoints and potential holes in the system can grow exponentially.

This concept of monitoring and analyzing activity at every layer of the cloud stack maps to what’s necessary for today’s workloads and IT environments. Intrusion detection monitoring is certainly still necessary at the network layer, but it’s what happens to cardholder data as it travels through different apps and repositories that can be complicated and hard to identify. Using a host-based system for monitoring network traffic throughout the organization’s infrastructure is mandatory because it functions at the depth of configuration, access, and asset-change levels.
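The core of host-based file integrity monitoring (FIM) can be sketched in a few lines: hash the monitored files, store a baseline, and diff later snapshots against it. Real FIM tools add real-time hooks, tamper protection, and scale, all of which this toy version omits:

```python
import hashlib
import os
import tempfile

def snapshot(paths):
    """Record a SHA-256 hash for each monitored file."""
    hashes = {}
    for p in paths:
        with open(p, "rb") as f:
            hashes[p] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def detect_changes(baseline, current):
    """Return monitored paths whose contents no longer match the baseline."""
    return [p for p in baseline if current.get(p) != baseline[p]]

# Demo with a temporary file standing in for a monitored config file.
with tempfile.TemporaryDirectory() as d:
    cfg = os.path.join(d, "app.conf")
    with open(cfg, "w") as f:
        f.write("port=443\n")
    baseline = snapshot([cfg])
    with open(cfg, "w") as f:   # simulate an unauthorized change
        f.write("port=80\n")
    changed = detect_changes(baseline, snapshot([cfg]))
    print(len(changed))  # 1
```

The interesting part in a cloud context is what feeds `snapshot`: on a host it is files, but the same baseline-and-diff pattern applies to cloud resource configurations retrieved via APIs.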

Out of compliance, out of business

Being PCI-compliant is a necessity for any organization that facilitates ecommerce transactions with credit or debit cards. If ever there was a growth industry, it is online shopping. In 2017, ecommerce represented just 13% of all total retail sales, but 49% of all retail growth. Consumers made $454 billion worth of online purchases that year, and online sales grew 16% over the previous year. The consequences of being out of compliance are therefore huge: at best, fines and remediation will get you back in business. But if you really don’t have control over the activity within your cloud, you are vulnerable to attacks and compliance issues that could eradicate customer trust, or altogether put you out of business.

To be effective at validating PCI compliance, it’s best to use an approach that analyzes cloud activity against normalized behavior to identify the status of all PCI controls. Awareness of every event and every endpoint, along with automatic identification of anomalies, is critical to ensuring you are prepared with an effective PCI compliance framework.

Towards a “Permanent Certified Cloud”: Monitoring Compliance in the Cloud with CTP 3.0

Cloud services can be monitored for system performance but can they also be monitored for compliance? That’s one of the main questions that the Cloud Trust Protocol aims to address in 2013.

Compliance and transparency go hand in hand.

The Cloud Trust Protocol (CTP) is designed to allow cloud customers to query cloud providers in real-time about the security level of their service. This is measured by evaluating “security attributes” such as availability, elasticity, confidentiality, location of processing or incident management performance, just to name a few examples. To achieve this, CTP will provide two complementary features:

  • First, CTP can be used to automatically retrieve information about the security offering of cloud providers, as typically represented by an SLA.
  • Second, CTP is designed as a mechanism to report the current level of security actually measured in the cloud, enabling customers to be alerted about specific security events.

These features will help cloud customers compare competing cloud offerings to discover which ones provide the level of security, transparency and monitoring capabilities that best match the control objectives supporting their compliance requirements. Additionally, once a cloud service has been selected, the cloud customer will also be able to compare what the cloud provider offered with what was later actually delivered.

For example, a cloud customer might decide to implement a control objective related to incident management through a procedure that requires some security events to be reported back to a specific team within a well-defined time frame. This customer could then use CTP to ask for the maximum delay the cloud provider commits to for reporting incidents to customers during business hours. The same cloud customer may also ask for the percentage of incidents that were actually reported back to customers within that specific time limit during the preceding two-month period. The first example is typical of an SLA, while the second one describes the real measured value of a security attribute.
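That SLA-versus-measurement comparison can be sketched in code. The record shapes and field names below are purely illustrative, not the actual CTP 3.0 data model (which was still under peer review):

```python
# Hypothetical CTP-style records for the incident-reporting example.
sla = {
    "attribute": "incident_reporting_delay",
    "committed_max_minutes": 60,        # what the provider promises
}
measured = {
    "attribute": "incident_reporting_delay",
    "period": "last-2-months",
    "reported_within_limit_pct": 92.5,  # what was actually delivered
}

def meets_commitment(measured, threshold_pct=95.0):
    """Did enough incidents get reported within the committed delay?"""
    return measured["reported_within_limit_pct"] >= threshold_pct

print(meets_commitment(measured))  # False
```

Automating exactly this kind of check, continuously rather than at audit time, is what the protocol is meant to enable.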

CTP is thus designed to promote transparency and accountability, enabling cloud customers to make informed decisions about the use of cloud services, as a complement to the other components of the GRC stack. Real time compliance monitoring should encourage more businesses to move to the cloud by putting more control in their hands.

From CTP 2.0 to CTP 3.0

CTP 2.0 was born in 2010 as an ambitious framework designed by our partner CSC to provide a tool for cloud customers to “ask for and receive information about the elements of transparency as applied to cloud service providers”. CSA research has begun the task of transforming this original framework into a practical and implementable protocol, referred to as CTP 3.0.

We are moving fast, and the first results are already ready for review. On January 15th, CSA completed a first review version of the data model and a RESTful API to support the exchange of information between cloud customers and cloud providers, in a way that is independent of any cloud deployment model (IaaS, PaaS, or SaaS). This is now going through the CSA peer review process.

Additionally, a preliminary set of reference security attributes is also undergoing peer review. These attributes are an attempt to describe and standardize the diverse approaches taken by cloud providers to expressing the security features reported by CTP. For example, we have identified more than five different ways of measuring availability. Our aim is to make explicit the exact meaning of the metrics used. For example, what does unavailability really mean for a given provider? Is their system considered unavailable if a given percentage of users reports complete loss of service? Is it considered unavailable according to the results of some automated test to determine system health?
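To see why pinning down the metric matters, consider the simplest possible definition: the fraction of automated health probes that succeeded. Two providers both reporting “99.9% availability” under different definitions (user-reported outages versus probe results) are not comparable. A toy version of the probe-based definition:

```python
def availability_pct(probes):
    """One explicit definition: percentage of health probes that succeeded."""
    return 100.0 * sum(probes) / len(probes)

# 1 = probe saw a healthy system, 0 = probe failed.
probes = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]
print(availability_pct(probes))  # 90.0
```

Standardizing the reference attributes amounts to agreeing on definitions like this one, so that the numbers different providers report mean the same thing.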

Beyond all this nice theory, we are also planning to get our hands dirty and build a working prototype implementation of CTP 3.0 in the second half of 2013.

Challenges and research initiatives

While CTP 3.0 may offer a novel approach to compliance and accountability in the cloud, it also creates interesting challenges.

To start with, providing metrics for some security attributes or control measures can be tricky. For example, evaluating the quality of vulnerability assessments performed on an information system is not trivial if we want results to be comparable across cloud providers. Other examples are data location and retention, which are both equally complex to monitor, because of the difficulty of providing supporting evidence.

As a continuous monitoring tool, CTP 3.0 is a nice complement to traditional audit and certification mechanisms, which typically only assess compliance at a specific point in time. In theory, this combination brings up the exciting possibility of a “permanently certified cloud”, where a certification could be extended in time through automated monitoring. In practice however, making this approach “bullet-proof” requires a strong level of trust in the monitoring infrastructure.

As an opportunity to investigate these points and several other related questions, CSA has recently joined two ambitious European Research projects: A4Cloud and CUMULUS. A4Cloud will produce an accountability framework for the entire cloud supply chain, by combining risk analysis, creative policy enforcement mechanisms and monitoring. CUMULUS aims to provide novel cloud certification tools by combining hybrid, incremental and multi-layer security certification mechanisms, relying on service testing, monitoring data and trusted computing proofs.

We hope to bring back plenty of new ideas for CTP!

Help us make compliance monitoring a reality!

A first draft of the “CTP 3.0 Data Model and API” is currently undergoing expert review and will then be opened to public review. If you would like to provide your expert feedback, please do get in touch!

by Alain Pannetrat 

[email protected]

Dr. Alain Pannetrat is a Senior Researcher at Cloud Security Alliance EMEA. He works on CSA’s Cloud Trust Protocol, providing monitoring mechanisms for cloud services, as well as CSA research contributions to EU-funded projects such as A4Cloud and CUMULUS. He is a security and privacy expert specialized in cryptography and cloud computing. He previously worked as an IT Specialist for the CNIL, the French data protection authority, and was an active member of the Technology Subgroup of the Article 29 Working Party, which informs European policy on data protection. He received a PhD in Computer Science after conducting research at Institut Eurecom on novel cryptographic protocols for IP multicast security.