Going up? Safety first, then send your data to the cloud

March 28, 2013

By: Joe Sturonas, CTO, PKWARE

As the proliferation of data continues to plague businesses, the pressure is on for companies to migrate away from their physical data centers. Cloud computing is being adopted at a rapid rate because it addresses not only the cost of physical space, but also rising energy costs and mandates for more scalable IT services. Enterprises are drastically reducing their storage spend by using online storage providers to hold massive amounts of data on third-party servers.

The cloud is definitely calling, but even the most seasoned IT professionals debate, grapple with, and are a bit intimidated by an otherwise simple term that has taken the world by storm.

Inevitable Risk

Every minute of every day presents the opportunity for a data mishap. A security breach, or lost, stolen, or otherwise compromised records, triggers negative exposure that quickly translates into forfeited sales, legal fees, disclosure expenses and a host of remediation costs. The fallout can mean years of struggle to recover a reputation and repair a brand in the marketplace. Cloud providers do not want to be held liable for any issues related to your data loss. Best case, they will credit back your fees, but nothing can help a damaged reputation or win back customers who leave your organization when a data breach occurs.

While the cloud environment may seem to be a holy grail for data proliferation and massive storage needs, it presents complex security issues and puts critical corporate data, intellectual property, customer information, and personally identifiable information (PII) in potential jeopardy. Enterprises forfeit security and governance control when data is handed over, and cloud providers do not assume responsibility.

The recent cyber attacks by groups like Anonymous and data breaches like that of LinkedIn illustrate the need for an advanced risk and compliance plan that covers any third-party managed cloud environment. Clearly, the cloud often opens a Pandora’s box of unanticipated consequences.

Storing huge amounts of data on third-party servers may mean instant online access and lower costs; however, that data is often commingled on shared servers and exposed to users you don’t know. If your Cloud storage provider encrypts your data but holds the key, anyone working for that provider can gain access to your data. That means your data could potentially be shared, sold, and profiled for someone else’s gain.

Data also has to actually “get to” the cloud, which usually means leaving your trusted infrastructure and overcoming compounded transfer vulnerabilities as data moves to and from the cloud. Even the most unintended data breach could cost a company its reputation.

Potential Pitfalls

Transfer vulnerabilities – The potential for data breaches multiplies as data travels to and from the cloud across various networks, especially in highly mobile and distributed workforces.

Non-compliance penalties – Extended enterprises, partner networks and virtual machines are continuously scrutinized for compliance. All sensitive data must be protected with appropriate measures.

Storage expense – Companies are charged by the amount of data they put into the cloud, so providers have little incentive to compress that data. Nor can providers reliably compress data you have already encrypted, since encrypted data is effectively incompressible; any compression has to happen before encryption.

Provider holds the keys – Cloud agreements can address how internal staff at the vendor will manage your data. Provisions can limit administrative access and specify who has hiring and oversight authority over those privileged administrators. If the data housed in the Cloud is, in fact, encrypted, then the issue becomes more about who maintains the keys.

To summarize…

  • Security breaches happen even to the most vigilant organizations that do not encrypt their data.
  • Your company’s reputation is at stake.
  • Security regulations are increasing.
  • The Cloud introduces new levels of risk.
  • Cloud providers have root access to all your unencrypted data in the cloud, and they are not your employees.

The only way to protect data in the cloud is to encrypt the data and maintain control of the private key yourself.
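As an illustration of that principle, the sketch below encrypts a file locally before it ever leaves your infrastructure. It is a minimal Python example using the cryptography package; upload_to_cloud is a hypothetical placeholder for your provider’s SDK call. The point is simply that the provider only ever receives ciphertext while the key stays with you.

```python
# Minimal sketch: encrypt locally, upload only ciphertext, keep the key.
from cryptography.fernet import Fernet

def upload_to_cloud(name: str, data: bytes) -> None:
    """Hypothetical placeholder: wire this to your storage provider's SDK."""
    print(f"uploading {len(data)} bytes as {name}")

def encrypt_and_upload(path: str, key: bytes) -> None:
    with open(path, "rb") as f:
        plaintext = f.read()
    ciphertext = Fernet(key).encrypt(plaintext)  # authenticated symmetric encryption
    upload_to_cloud(path + ".enc", ciphertext)   # provider never sees plaintext or key

key = Fernet.generate_key()  # generate once; store it in your own key management system
```

Because the key is generated and held on your side, even a provider administrator with root access sees only ciphertext.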

CLOUD SECURITY BEST PRACTICES

Impact on security policies and procedures?

Your existing security policies and procedures need to be reviewed to evaluate the use of Cloud applications and storage. Some companies choose to shut off access to certain Cloud applications, some implement application stores to limit access to specific approved applications, and some do not attempt to curtail access at all. Shutting off access is not a popular option with employees, who are most likely already familiar with consumer offerings such as Dropbox. Your end users have problems, such as transferring or sharing a file too large for email, that they know such services can solve.

Employees, both internal team members and partners, may have no idea of the risk of putting unsecured data in the Cloud. They probably don’t know that unsecured services such as Dropbox pose a security risk, and sensitive company data may already be stored there. You need to alert them to the data security risks of the Cloud and have them sign a security policy to that effect. Taking draconian measures toward preventing the use of services like Dropbox will only force employees to find even less secure ways to exchange data. Providing a secure way for employees to use such services is a far better approach.

The regulatory standards issues that you deal with today in your own data center are just as important in the Cloud. Compliance with PCI DSS, EU privacy regulations, Sarbanes-Oxley, FIPS 140-2 and the like is just as imperative. If you know that the data is encrypted before it goes into the Cloud, you may be compliant with any number of these regulations. Even if the Cloud vendor is hacked or someone uses an administrative password improperly, your data remains unreadable at that location.

EVALUATING SECURITY SOLUTIONS FOR THE CLOUD

Encrypting your data and maintaining the keys yourself is considered by industry experts to be the only way of making sure that no one can read your data, period. It doesn’t matter whether a privileged user has access to your data; they still can’t decipher it.

Regulatory compliance counts in any cloud, any environment, and any country. You must ensure your data complies with the regulatory standards for your industry.

If assistants, executives, and sales representatives use different operating systems on different computing platforms and want to share data securely inside or outside of a private or public cloud, then you need data-centric, file-level encryption that is portable across all of them.

Be sure to evaluate data location and data segregation as they relate to co-tenancy. Not only do you want to hold the key, but you also want to encrypt all of your data so that your data, especially sensitive data such as PII, is protected if commingled with other organizations’ data.

A Cloud security solution must also enable recovery and provide you with the ability to restore your data many years from now. Some regulatory compliance statutes require you to keep data for seven or even 20 years.

Cloud providers might assure users that the communications from your browser to their servers are encrypted using TLS. That protects the data only as it travels across the Internet; the data remains in the clear once it lands on their servers.

Worry-free breach

Odds are you will have to report a breach one day. If that day comes, you want to announce that no data was compromised and minimize corporate liability, both in dollars and in reputation. With data-centric encryption, where you hold the keys and the data is encrypted at the file level, no one can access that data. You may not even have to report the incident as a breach, and you need not fall back on contractual remediation provisions, because although there was a breach, no data was lost.

So before you store sensitive data in the Cloud, make sure you encrypt that data. This ensures that your data is safe and accessible to you and only you.

About the author: Joe Sturonas is Chief Technology Officer for PKWARE.

PKWARE, the industry leader in enterprise data security products, has a history rooted in innovation, starting with the creation of the .ZIP file in 1986. Since then, PKWARE has been at the forefront of creating products for reducing and protecting data – from mainframes to servers to desktops and into virtual and Cloud environments. www.pkware.com


How to Harden Your APIs

March 26, 2013

The market for APIs has experienced explosive growth in recent years, yet the major issues providers still face are the protection and hardening of the APIs they expose to users. In particular, when you are exposing APIs from a cloud-based platform, this becomes very difficult to achieve given the various cloud provider constraints. To achieve it, you need a solution that provides hardening capabilities out of the box but still permits customization of granular settings to meet the nuances of a specific environment. This article walks through the measures such a solution should cover.

Identify sensitive data and the sensitivity of your API.

The first step in protecting sensitive data is identifying it as such. This could be PII, PHI or PCI data (PII – Personally Identifiable Information, PHI – Protected/Personal Health Information, PCI – Payment Card Industry). Perform a complete analysis of the data flowing into and out of your API, including all parameters, to figure this out.

Once identified, make sure only authorized people can access the data.

This will require solid identity, authentication, and authorization systems to be in place. These can all be provided by the same system. Your API should be able to identify multiple types and classes of identities. To achieve an effective identity strategy, your system has to accept identities in the older formats, such as X.509, SAML and WS-Security, as well as the newer breed of OAuth, OpenID, etc. In addition, your identity system must mediate the identities, acting as an identity broker, so it can securely and efficiently relay these credentials to your API for consumption.

API Governance.

You should implement identity-based governance policies. These policies need to be enforced globally, not just locally. Effectively, this means you must have predictable results that are reproducible regardless of where you deploy your policies. Once the user is identified and authenticated, you can authorize the user based not only on that credential, but also on the location the invocation came from, the time of day, the day of the week, and so on. Furthermore, for highly sensitive systems the data or user can be classified as well: top-secret data can be accessed only by credentials with top-secret clearance, for example. To build effective policies and govern them at run time, you need to integrate with a mature policy decision engine. It can be either standards-based, such as XACML, or integrated with an existing legacy system provider.
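To make that concrete, here is a small illustrative sketch of an identity-based policy decision that weighs credential, source location, and time of day. The roles, network range, and rules are invented for illustration; in practice such policies would live in a dedicated decision engine (for example, a XACML policy decision point) rather than in application code.

```python
# Illustrative only: a simplified policy decision combining credential,
# source network, and time of day. Rules and ranges are made up.
from datetime import datetime
from ipaddress import ip_address, ip_network

CORPORATE_NET = ip_network("10.0.0.0/8")   # assumed internal address range

def authorize(role: str, source_ip: str, when: datetime) -> bool:
    business_hours = 8 <= when.hour < 18 and when.weekday() < 5
    on_network = ip_address(source_ip) in CORPORATE_NET
    if role == "admin":
        return on_network                  # admins only from inside the network
    if role == "analyst":
        return business_hours              # analysts only during business hours
    return on_network and business_hours   # everyone else needs both

print(authorize("analyst", "203.0.113.7", datetime(2013, 3, 25, 10, 30)))  # True
```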

Protect Data.

Protect your data as if your business depends on it, as it often does, or should. Make sure that sensitive data, whether in transit or at rest (storage), is never in its unprotected original form. While there are multiple ways data can be protected, the most common are encryption and tokenization.

With encryption, the data is encrypted so that only authorized systems can decrypt it back to its original form. This allows the data to circulate encrypted and be decrypted as necessary along the way by secured steps. While this is a good solution for many companies, you need to be careful about the encryption standard you choose and about your key management and key rotation policies.

The other approach, tokenization, is based on the fact that you can’t steal what is not there. You can tokenize virtually anything: PCI, PII or PHI information. The original data is stored in a secure vault, and a token (a pointer representing the data) is sent downstream in its place. The advantage is that if any unauthorized party gets hold of the token, they don’t know where to go to get the original data, let alone have access to it. Even if they do know where the token vault is located, they are not white-listed, so the original data is not available to them. The greatest advantage of tokenization systems is that they reduce the exposure scope throughout your enterprise: by eliminating sensitive and critical data from the stream, you centralize your focus and security on a stationary token vault rather than on active, dynamic data streams.

While you’re at it, you might want to consider a mechanism such as DLP, which is highly effective in monitoring for sensitive data leakage and can automatically tokenize or encrypt sensitive data on its way out. You might also want to consider policy-based information traffic control: certain groups may be allowed to view certain information (such as auditors examining company financials) yet not be allowed to send that information onward. You can also enforce this based on invocation location (i.e., intranet users vs. mobile users who are allowed to get certain information).
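As a sketch of the tokenization model just described, the toy vault below stores the sensitive value and hands back an opaque, random token; only white-listed callers can resolve it. Everything here is simplified for illustration; a real vault adds encryption at rest, strong access control, and audit logging.

```python
# Toy token vault: the sensitive value stays put; an opaque token travels.
import secrets

class TokenVault:
    def __init__(self):
        self._store = {}                               # token -> original value

    def tokenize(self, sensitive_value: str) -> str:
        token = "tok_" + secrets.token_urlsafe(16)     # random: no mathematical link to the data
        self._store[token] = sensitive_value
        return token

    def detokenize(self, token: str, caller_is_whitelisted: bool) -> str:
        if not caller_is_whitelisted:                  # only white-listed systems may resolve
            raise PermissionError("caller not authorized to detokenize")
        return self._store[token]

vault = TokenVault()
token = vault.tokenize("4111 1111 1111 1111")          # the PAN never leaves the vault
print(token)                                           # only this opaque token goes downstream
```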

I wrote a series of Context Aware Data Protection articles on this recently.

QoS.

While APIs exposed in the cloud can scale out to absorb expansion or bursts during peak hours, it is still a good architectural design principle to rate-limit access to your API. This is especially valuable if you are offering an open API exposed to anyone. There are two sides to this: a business side and a technical side. The technical side allows your APIs to be consumed in a controlled way, and the business side lets you negotiate better SLA contracts based on the usage model you have at hand.

You also need a flexible throttling mechanism. It should offer the following options: just notify, throttle the excessive traffic, or shape the traffic by holding messages until the next sampling period starts. In addition, there should be a mechanism to monitor and manage traffic over both the long term and the short term, which can be governed by two different policies.
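The sketch below illustrates those three behaviors against a fixed sampling window; the class and parameter names are invented for the example.

```python
# Illustrative throttle with three modes: "notify", "throttle", or "shape".
import time

class Throttle:
    def __init__(self, limit: int, window_secs: float, mode: str = "throttle"):
        self.limit, self.window, self.mode = limit, window_secs, mode
        self.count, self.window_start = 0, time.monotonic()

    def admit(self) -> bool:
        now = time.monotonic()
        if now - self.window_start >= self.window:        # new sampling period
            self.count, self.window_start = 0, now
        self.count += 1
        if self.count <= self.limit:
            return True                                   # under the limit
        if self.mode == "notify":                         # warn, but let it through
            print("warning: rate limit exceeded")
            return True
        if self.mode == "shape":                          # hold until the next period starts
            time.sleep(self.window - (now - self.window_start))
            self.count, self.window_start = 1, time.monotonic()
            return True
        return False                                      # "throttle": reject the excess
```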

Protect your API.

Attacks on, or misuse of, your publicly exposed API can be intentional or accidental. Either way, you can’t afford for anyone to bring your API down. You need application-aware firewalls that can inspect application-level messages and prevent attacks. Generally, application attacks tend to fall under injection attacks (SQL injection, XPath injection, etc.), script attacks, or attacks on the infrastructure itself.
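For illustration only, a crude message-level screen might look like the sketch below. Real application-aware firewalls rely on full message parsing and anomaly detection rather than a handful of regular expressions.

```python
# Crude illustrative screen for common injection patterns; not a real WAF.
import re

INJECTION_PATTERNS = [
    re.compile(r"('|\")\s*(or|and)\s+\d+\s*=\s*\d+", re.I),  # SQL tautology, e.g. ' OR 1=1
    re.compile(r"<\s*script\b", re.I),                       # script injection
    re.compile(r"\bunion\s+select\b", re.I),                 # SQL UNION probe
]

def looks_malicious(payload: str) -> bool:
    return any(p.search(payload) for p in INJECTION_PATTERNS)

print(looks_malicious("name=' OR 1=1 --"))  # True
print(looks_malicious("name=alice"))        # False
```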

Message Security.

You must also provide both transport-level and message-level security features. While transport security features such as SSL and TLS provide some data privacy, you also need the option to encrypt and sign message traffic, so that it reaches the end systems safely and securely and the end user who sent the message can be authenticated.

Monitor effectively.

If you don’t collect metrics on the usage of your APIs by monitoring them, you will be shooting blind. Unless you understand who is using an API, when, how they are using it, and the patterns of usage, it is going to be very hard to protect it. All of the above measures are built proactively on certain assumptions. You need to monitor your traffic not only to validate those assumptions, but also to make sure you are ready to take reactive measures based on what is actually happening. This becomes critical in mitigating risk for cloud-based API deployments.


Andy is the Chief Architect & Group CTO for the Intel unit responsible for Cloud/Application security, API, Big Data, SOA and Mobile middleware solutions, where he architects API, SOA, Cloud, Governance, Security, and Identity solutions for major corporate customers. In this role, he supports Intel/McAfee field sales, technical teams and customer executives. Prior to this role, he held technology architecture leadership and executive positions with L-1 Identity Solutions, IBM (Datapower), BMC, CSC, and Nortel. His interests and expertise include Cloud, SOA, identity management, security, governance, and SaaS. He holds a degree in Electrical and Electronics engineering and has over 25 years of IT experience.

He blogs regularly at www.thurai.net/securityblog on API, Security, SOA, Identity, Governance and Cloud topics. You can find him on LinkedIn at http://www.linkedin.com/in/andythurai or on Twitter at @AndyThurai.

Three Critical Features That Define an Enterprise-Grade Cloud Service

March 22, 2013

By David Baker, CSO at Okta


The line between enterprise and consumer is fading as employees work from all manner of devices to access the on-premises, cloud and even consumer applications needed to get work done. But it’s important not to confuse enterprise and consumer services from a security standpoint. Enterprises are increasingly trusting cloud service providers to secure private, often sensitive data. These services must be held to more rigorous standards, but what does it really take to be considered truly “enterprise grade”?


Cloud services today are ubiquitous and are quick to use terms like security, high availability and transparency. There are many features that define enterprise services, but the three that stand out for me are platform security, service availability and multi-tenant architecture.


Platform Security


Whether you call it Layer 7 security or application security, hardening a cloud service is especially critical in the enterprise. These services are entrusted with sensitive corporate and customer data, and enterprises must be able to trust that their cloud vendors have rigorous security standards in place and keep their customers’ data under lock and key.


The most basic step toward enterprise security is independent third-party certification. Yes, the c-word. I have seen many check-box attestations and certifications, but a certification alone does not mean that platform security is solid. There are many tiers of security validation, and programs such as FedRAMP, ISO 27001, and SOC stand out as good benchmarks of operational security for cloud service providers. On top of operational security validation, enterprise cloud services should be able to demonstrate additional validation through recurring third-party application penetration testing. And the penetration tests should be shared with customers, because transparency builds trust.


I have been pleasantly surprised by how many customers ask me to present my security controls according to the CSA Security, Trust & Assurance Registry (STAR) program. In fact, I’m working with my SOC auditors now to build additional narratives to our SOC 2 Type II report that map directly to STAR. A powerful way to demonstrate platform security is to not only provide the SOC 2 report, but to also provide every penetration report and STAR CCM as well.


Service Availability


Availability is a critical component of enterprise-ready services. Even a secure cloud service does little good if customers are unable to access it. Remember, enterprise cloud services are either replacing a legacy service or providing something that the enterprise needs 24x7x365. “Four 9s” availability is a good industry benchmark for enterprise cloud services, but the number of 9s is only part of the equation.


Enterprise cloud vendors should back their availability claims with SLAs. Service providers are increasingly choosing commodity IaaS providers, and customers are left to wonder whether the cloud service offers a better SLA than the IaaS provider offers to the vendor. If a cloud service is built on top of an IaaS, transparency is key.


Enterprise cloud vendors should be able to demonstrate, through at least two years of historical availability, that their cloud architecture can withstand infrastructure failures. With today’s cloud infrastructures, it should be assumed that virtual instances will disappear because of hardware and network failures, natural disasters and power loss. Enterprise cloud services must be built for disaster avoidance, not disaster recovery!


The service must be built for resiliency, and it must be maintained. Maintenance windows are a thing of the past. Show me a cloud service with a “four 9s” SLA and a monthly service window, and I will show you a service provider with a “three 9s” SLA, no maintenance windows and higher availability.
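The arithmetic behind that claim, assuming for illustration a one-hour window each month:

```python
# Four 9s allows ~52.6 minutes of downtime per year; a one-hour monthly
# maintenance window alone burns 720 minutes, which is roughly three 9s.
MINUTES_PER_YEAR = 365 * 24 * 60                         # 525,600
four_nines_budget = MINUTES_PER_YEAR * (1 - 0.9999)      # ~52.6 minutes per year
planned_downtime = 12 * 60                               # 720 minutes per year
availability = 1 - planned_downtime / MINUTES_PER_YEAR   # ~0.99863
print(round(four_nines_budget, 1), round(availability, 5))
```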


Multi-Tenancy


Security and availability are essential components of any application that’s ready for the enterprise. But perhaps the most important characteristic of an enterprise-grade service is how it deals with the conundrum of multi-tenancy. The most common question prospective customers ask is, “How do you protect and secure my data from your other customers’ data?” Dedicated subnets and dedicated servers for each customer don’t scale within a multi-tenant cloud infrastructure, whose purpose is to be low-cost and to accommodate elastic scalability as needed. The solution to segmenting customer data is encryption, not subnets or dedicated instances. Yes, that means each customer’s data is uniquely encrypted while at rest within the service.


Making this work, however, is not always straightforward. The cloud service must assign a unique key to encrypt each customer’s data. This, in turn, requires a robust key management architecture that uses in-memory secrets, never stored to disk or written down, to ensure the integrity of customers’ key stores and data. The key management system should also be resilient to losing encrypted data structures and be able to quickly expire keys. Sure, it sounds obvious, but it’s scary how often developers focus on building the safe but forget to secure the key.
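One common pattern that fits this description is envelope encryption: each tenant gets its own data key, and the data keys are themselves encrypted under a master key that lives only in memory. The sketch below is a simplified illustration, not any particular vendor’s implementation; in production the master key would come from an HSM or KMS.

```python
# Simplified per-tenant envelope encryption; the master key is held only in memory.
from cryptography.fernet import Fernet

master = Fernet(Fernet.generate_key())    # in production: loaded from an HSM/KMS, never disk
wrapped_keys = {}                         # tenant_id -> data key encrypted under the master key

def data_key_for(tenant_id: str) -> Fernet:
    if tenant_id not in wrapped_keys:
        wrapped_keys[tenant_id] = master.encrypt(Fernet.generate_key())
    return Fernet(master.decrypt(wrapped_keys[tenant_id]))

def store_record(tenant_id: str, record: bytes) -> bytes:
    # Each tenant's data is uniquely encrypted at rest, even on shared storage.
    return data_key_for(tenant_id).encrypt(record)

ciphertext = store_record("tenant-a", b"sensitive row")
```

A side benefit of this layout is that rotating the master key means re-wrapping each tenant’s data key rather than re-encrypting every stored record.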


I’ve worked in corporate security for more than 15 years, and I’ve seen numerous instances of built-in encryption gone terribly wrong: encryption protocols that are too easily cracked, or encryption keys stored in the same database as the encrypted data they are meant to protect.


Three Prongs of an Enterprise Cloud Service


Enterprise users should expect more rigorous security standards from the applications they use at work. The stakes are higher in business, with repercussions that extend beyond just the end-user and can affect the entire organization. There are many components that make a cloud service truly enterprise ready, but platform security, availability and multi-tenancy are, in my opinion, the three most important. How a cloud service measures up determines whether it’s truly enterprise-grade, or whether it’s merely pretending to be.


By David Baker, chief security officer of Okta, an enterprise-grade identity management service that addresses the challenges of a cloud, mobile and interconnected business world. Follow him on Twitter at @bazaker.


The Shrinking Security Model: Micro-perimeters

March 20, 2013

By Ed King, VP Product Marketing – Emerging Technologies, Axway (following acquisition of Vordel)


As Cloud and mobile computing make enterprise IT ever more extended, the traditional security model of keeping the bad guys out and allowing only the good guys in no longer works well.  While the reach of the enterprise has expanded, the security perimeter may actually have to shrink to around the smallest entities such as the application and the dataset.  A truly scalable security model for this world of BYOx (fill in device, application, identity) seems to be one based on massively scalable micro-perimeters. What is big is now small and what is small is now big.


Micro-perimeter #1: Applications


Application security has long been secondary to network security.  In the old days, since most business applications were accessible only on the corporate network via a browser or fat client, applications needed only rudimentary authentication and authorization capabilities.  Now, however, with the pervasiveness of Cloud-based services and mobile access, the network perimeter has effectively evaporated, and application security is a front-and-center issue.  By shrinking the security perimeter to each individual application, enterprise IT can control a user’s access to the application from anywhere and any device, without having to rely on a cumbersome VPN connection.  For applications in the Cloud, Cloud service providers already provide basic network security such as firewalling.  Application security, however, is the responsibility of the enterprise.  Any access control that was previously implemented at the network level needs to move to the application level.  Setting up a micro-perimeter around applications involves:

  • Authentication and single sign-on – This can mean strong and multi-factor authentication if a higher level of assurance is required.  If the application is being used by third-party users, a federated scheme is highly recommended.
  • Authorization – This typically means a role- or attribute-based scheme.  More advanced authorization schemes can involve fine-grained entitlement management, as well as risk-based schemes.  If federated access is required, definitely consider OAuth, which has become the de facto federated authorization scheme of today.


Building authentication and authorization capabilities into individual applications is neither economical nor scalable.  Look for access management technologies that can front new and legacy applications and support the latest federation standards such as OAuth, OpenID Connect, and SCIM (System for Cross-domain Identity Management).
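As one example of not reinventing the wheel, an application can delegate token validation to an identity provider’s OAuth token introspection endpoint instead of implementing its own checks. The endpoint URL and client credentials below are placeholders for illustration.

```python
# Sketch: validate an OAuth access token via a token introspection endpoint.
import requests

INTROSPECT_URL = "https://idp.example.com/oauth2/introspect"  # hypothetical IdP endpoint

def token_is_valid(access_token: str) -> bool:
    resp = requests.post(
        INTROSPECT_URL,
        data={"token": access_token},
        auth=("my-client-id", "my-client-secret"),            # placeholder credentials
        timeout=5,
    )
    resp.raise_for_status()
    return resp.json().get("active", False)                   # IdP reports whether the token is live
```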


Micro-perimeter #2: APIs


How we use applications has changed since Apple introduced the iPhone and the App Store.  We no longer use a small number of larger complex applications (think Excel, Word), but a large number of small purpose-built applications.  How many applications do you have on your smartphone?  This same trend is true for Cloud applications.  Instead of large ERP platforms such as SAP and Oracle, enterprises are now favoring smaller, best-of-breed applications such as Salesforce and Workday.  In addition, the modern application user experience is cross-modal: users use a number of applications on different platforms to complete tasks within the same business process.  This new breed of applications uses web APIs to enable integration and support multiple user-engagement applications on mobile and Cloud.  The API has become the common access point given the proliferation of applications and endpoints.  Setting up a micro-perimeter around APIs involves three aspects of protection:

  • Interface security to ensure transport level security and blocking of attacks such as SQL injection and cross-site scripting
  • Access control to ensure only the right user, device and applications are allowed to access the APIs, along with integration to enterprise identity and access management platforms
  • Data security to monitor all data passing through the API, including header, message body, and any attachment, for sensitive data, then perform real-time redaction

Just as with application security, do not reinvent the wheel when installing micro-perimeters around APIs.  Consider products such as API Servers and API Gateways that offer comprehensive API protection in all three areas.
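To illustrate the data-security aspect (the third bullet above), a gateway-style filter might scan payloads for card-number-like strings and redact them in flight. This regex-only sketch is deliberately crude; real products detect many data formats and validate checksums before redacting or tokenizing.

```python
# Crude illustrative redaction of card-number-like strings in an API payload.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # 13-16 digits, optional separators

def redact(payload: str) -> str:
    return CARD_PATTERN.sub("[REDACTED]", payload)

print(redact('{"card": "4111 1111 1111 1111", "name": "alice"}'))
# -> {"card": "[REDACTED]", "name": "alice"}
```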


Micro-perimeter #3: Devices


Mobile devices are more easily compromised than servers and desktop computers, and thus present a much bigger attack surface.  In addition to typical endpoint security vulnerabilities such as malware and operating system exploits, a lost or stolen device gives attackers physical access to the device, which opens up additional exploit options at the hardware, firmware, operating system and application levels.  Beyond physical security, the widespread use of application stores creates opportunities for malware to be downloaded freely and spread quickly.  Deploying a micro-perimeter around the mobile device has been a hot security field in recent years.  Various solutions are available, ranging from MDM (mobile device management), mobile virtual machines and containers, to application signing.  Look for technologies that can:

  • Validate application authenticity and integrity
  • Secure operating system and applications from malware and viruses
  • Detect and block suspicious/unauthorized cross-application activities
  • Secure keys and identities on the device
  • Secure communication and prevent man-in-the-middle exploits (one such technique, certificate pinning, is sketched below)
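As a sketch of that last item, certificate pinning has the client refuse to talk to any server whose certificate fingerprint differs from a known-good value recorded in advance, which defeats most man-in-the-middle attempts. The pinned fingerprint below is a placeholder.

```python
# Illustrative certificate pinning with the standard library.
import hashlib
import socket
import ssl

PINNED_SHA256 = "d4c9d9..."  # placeholder: fingerprint recorded at enrollment time

def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
    ctx = ssl.create_default_context()
    sock = ctx.wrap_socket(socket.create_connection((host, port)),
                           server_hostname=host)
    cert = sock.getpeercert(binary_form=True)        # DER-encoded leaf certificate
    if hashlib.sha256(cert).hexdigest() != PINNED_SHA256:
        sock.close()
        raise ssl.SSLError("certificate fingerprint mismatch - possible MITM")
    return sock
```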


Micro-perimeter #4: Data


In this ultra-connected world, data drives applications and user interactions.  Data is often passed from application to application and from device to device.  Data security measures are usually in place at the original egress point when the data leaves its source, but once the data is sent to its first client, what happens after that is anybody’s guess.  Using identity data as an example, once user data is sent to a Cloud service, that service may cache the user credential to allow single sign-on to a third-party service.  The second leg of that integration may not have proper user consent, and how the identity data is handled by the second service is an unknown risk to the enterprise.  The way to secure data in a federated environment is to put up a micro-perimeter around the data set: the data set should be encrypted so that only authorized endpoints have the means to consume it.  An example of this is the OAuth 2.0 standard, which replaces user identity and authorization scope with an opaque token, then provides interaction mechanisms to ensure user consent is obtained when a new third party needs to consume the OAuth token.  This type of technology has not yet expanded to handle arbitrary data sets beyond the traditional, cumbersome PKI infrastructure.  Future capabilities may also include wrapping data sets with policies that can be consumed directly by client applications.


While mobile and Cloud technologies have expanded the reach of enterprise security, moving to a micro-perimeter-based security model may be the key to a massively scalable security model.  What is big is now small and what is small is now big.


About the author:

Ed King, VP Product Marketing, Emerging Technologies, Axway (following the acquisition of Vordel)
Ed has responsibility for Product Marketing of emerging technologies around Cloud and Mobile at Axway, following their recent acquisition of Vordel. At Vordel, he was VP Product Marketing for the API Server product that defined the Enterprise API Delivery Platform. Before that he was VP of Product Management at Qualys, where he directed the company’s transition to its next-generation product platform. As VP of Marketing at Agiliance, Ed revamped both product strategy and marketing programs to help the company double its revenue in his first year of tenure. Ed has also held senior executive roles in Product Management and Marketing at Oracle, Jamcracker, Softchain and Thor Technologies. He holds an engineering degree from the Massachusetts Institute of Technology and an MBA from the University of California, Berkeley.

Upcoming Cloud Security Training in EMEA – sign up today!

March 14, 2013

Securosis has recently updated the CCSK training curriculum to be in alignment with the Cloud Security Alliance Guidance V3.0, and the training class is much improved. Many of the hands-on exercises have been overhauled, and if you are looking to get familiar with cloud security you will want to check out this class.
A unique CCSK training class will happen April 8-10 in Reading, UK, delivering the Basic, Plus, and Train the Trainer (TTT) courses. That’s right, there will be a third day to train the next group of CCSK curriculum instructors. The instructor will be Securosis’ Mike Rothman, one of the developers of the training curriculum and one of two people certified to train other instructors.
With the CSA making a fairly serious investment, as evidenced by their recent announcement naming HP as a Master Training Partner and some other upcoming strategic alliances, the CCSK is going to grow gangbusters in 2013. So if you do training, or would like cloud security to be a larger part of your business, getting certified as a CCSK trainer would be a good thing. If you want to become certified to teach, you need to attend one of these courses.
And even if you aren’t interested in teaching, it’s also a good opportunity to get trained by the folks who built the course.

You can get details and sign up for the training in Reading, UK, April 8-10. (http://ccskuk.eventbrite.com/)
Here is the description of each of the 3 days of training:
Day 1: There is a lot of hype and uncertainty around cloud security, but this class will slice through the hyperbole and provide students with the practical knowledge they need to understand the real cloud security issues and solutions. The Certificate of Cloud Security Knowledge (CCSK) – Basic class provides a comprehensive one-day review of cloud security fundamentals and prepares students to take the Cloud Security Alliance CCSK certification exam. Starting with a detailed description of cloud computing, the course covers all major domains in the latest Guidance document from the Cloud Security Alliance, as well as the recommendations of the European Network and Information Security Agency (ENISA). The Basic class is geared toward security professionals, but is also useful for anyone looking to expand their knowledge of cloud security. (We recommend attendees have at least a basic understanding of security fundamentals, such as firewalls, secure development, encryption, and identity management.)
Day 2: The CCSK-Plus class builds on the CCSK Basic class with expanded material and extensive hands-on activities in a second day of training. The Plus class enhances the classroom instruction with real-world cloud security labs! Students will learn to apply their knowledge as they perform a series of exercises that bring a fictional organization securely into the cloud. This second day includes additional lecture, although students will spend most of their time assessing, building, and securing a cloud infrastructure during the exercises. Activities include creating and securing private clouds and public cloud instances, as well as encryption, applications, identity management, and much more.
Day 3: The CCSK Instructor workshop adds a third day to train prospective trainers. It presents more detail about how to teach the course, a closer look at the hands-on labs, and an opportunity for all trainers to present a portion of the course. Click here for more information on the CCSK Training Partner Program (PDF) - https://cloudsecurityalliance.org/wp-content/uploads/2011/05/CCSK-Partner-Program.pdf.

