CCSK Success Stories: From an Information Systems Security Manager

By the CSA Education Team

This is the third part in a blog series on Cloud Security Training. Today, we will be interviewing Paul McAleer. Paul is a Marine Corps veteran and currently works as an Information Systems Security Manager (ISSM) at Novetta Solutions, an advanced data analytics company headquartered in McLean, VA. He holds the CCSK, CISSP, CISM, and CAP certifications, among others, and lives in the Washington, D.C. area.

Can you describe your role?

I am an ISSM at Novetta Solutions, primarily responsible for certification and accreditation, continuous monitoring, and the overall security posture of the information systems under my purview. Novetta also partners with AWS, and that partnership continues to grow, so it is a very exciting company to work for.

What got you into cloud security in the first place? What made you decide to earn your Certificate of Cloud Security Knowledge (CCSK)?

My first InfoSec position was with First Information Technology Services, a Third Party Assessment Organization (3PAO) supporting Microsoft. I was part of the Continuous Monitoring Team, and part of my job was providing adequate justification for open vulnerabilities and documenting mitigations for cloud environments. Understanding cloud security was imperative to performing my job. I was seeking more of a foundational understanding focused primarily on cloud security. I heard about the CCSK through CSA and ISC(2) after doing some research on the best and most valuable cloud certifications. After reviewing the certification outline and expectations, I decided to review the material and prep for the exam.

“Open book means nothing when it comes to this exam. There are too many questions that require a deep understanding of the material…”

Can you elaborate on what the exam experience was like? How did you prepare for the CCSK exam?

The CCSK was not an easy exam by any means. Not only was it a requirement to get 80 percent to pass, but there were only 90 minutes to answer 60 questions. The exam required a deep understanding of the CSA Cloud Security Guidance, as well as the ENISA Cloud Computing Risk Assessment Report. At least for me, it was imperative to read through all of the course material and ensure I understood everything listed in the exam objectives to pass the exam.

If you could go back and take it again, how would you prepare differently?

If I could prepare differently, I would have devoted more time to studying and would have read the CSA Guidance and ENISA Report a second time through. To me, one read-through isn’t enough for the depth of this exam and the style of questions it presents. It is a hard exam to prepare for. To gain a full understanding of what is expected, it’s important to go through the material more than once, take notes on your weak areas, and then return to the sections where you feel weakest and focus on them.

Were there any specific topics on the exam that you found trickier than others?

The topics I found trickier than others were the questions pertaining to governance within the cloud, as well as those covering the various Security as a Service (SecaaS) requirements and the different services involved in SecaaS implementation.

What is your advice to people considering earning their CCSK?

I highly recommend the CCSK for anyone seeking a deeper understanding of cloud security. My advice to people considering the CCSK is to study for the exam like you would any other certification that wasn’t open book. In other words, don’t rely on the fact that it is open book. 

Lastly, what part of the material from the CCSK has been the most relevant in your work, and why?

The most relevant material from the CCSK for my career has been Compliance and Audit Management, which was Domain 4 of the CSA Guide v3 when I took the exam. I believe that domain related more to my work experience than any other domain due to my cloud compliance role at the time of my certification. I definitely took the most away from the topics discussed in that domain, such as issues pertaining to Enterprise Risk Management, Compliance and Audit Assurance, and Corporate Governance. The Information Management and Data Security domain was also a very relevant domain for my work.

Interested in learning more about cloud security? Discover our free prep-kit, training courses, and resources to prepare to earn your Certificate of Cloud Security Knowledge here.


A Decade of Vision

By Jim Reavis, Co-founder and CEO, Cloud Security Alliance


Developing a successful and sustainable organization is dependent upon a lot of factors: quality services, a market vision, focus, execution, timing and maybe a little luck. For Cloud Security Alliance, now celebrating our 10th anniversary, I would add one more factor—believers. 

While we have had a few doubters, we have had more believers who have helped us fulfill our vision and become one of the most important information security associations in the world. On this occasion, we want to recognize three such believers, who were there at the beginning and have stayed intimately involved with CSA throughout our first decade. I am referring to our three founding CEOs, to whom we are presenting our Decade of Vision Leadership award. These CEOs provided the initial startup funding and, more than that, have offered consistent support and mentoring, as well as evangelizing the CSA mission around the globe.

Philippe Courtot, CEO of Qualys, is inarguably an industry visionary as well as a generous human being. Philippe has been promoting the benefits of a cloud approach to security at Qualys for over 18 years, well before we called it cloud. He has shown a unique determination in pursuit of his goals and eschewed more expedient paths to cement Qualys as an industry leader built upon integrity. Philippe always supports worthwhile industry initiatives, CSA among many others. We are proud to be Philippe’s partner in the CIO/CISO Interchange.

Jay Chaudhry, CEO of Zscaler, has an unbelievable record as a serial entrepreneur in information security and has led one of the most successful industry IPOs in our history.  Jay started Zscaler at about the same time as CSA was getting off the ground, and never fails to get behind important CSA initiatives. Jay was the first person who fully articulated Security-as-a-Service to me, which helped craft our mission statement of securing the cloud, as well as leveraging the cloud to secure the rest of the world.

Phil Dunkelberger, CEO of Nok Nok Labs, was a founding CEO while leading PGP Corporation. Phil’s zeal in promoting ubiquitous encryption was not merely based upon helping a company protect its information, but on how accelerating the exchange of trusted information can transform business as we know it. Phil has supported numerous industry initiatives and was a key stakeholder in launching the FIDO Alliance, tackling the very difficult problem of online identities and strong authentication.

I have found that successful CEOs in our industry share common traits. They have a sense of the magnitude of the battle we fight that supersedes any one company’s mission. They think about the world with a longer-term perspective than the immediacy of today. They have tremendous empathy for the good guys in our industry and want to make them successful. A company could do a lot worse than to have three such founding CEOs.

Education: A Cloud Security Investigation (CSI)

By Will Houcheime, Product Marketing Manager, Bitglass


Cloud computing is now widely used in higher education, where it has become an indispensable tool for both institutions and their students. This is mainly because cloud applications such as G Suite and Microsoft Office 365 come with built-in sharing and collaboration functionality – they are designed for efficiency, teamwork, and flexibility. Combined with the fact that educational institutions tend to receive massive discounts from cloud service providers, this has led to a cloud adoption rate in education that surpasses that of every other industry. Naturally, this means that educational institutions need to find a cloud security solution that can protect their data wherever it goes.

Cloud adoption means new security concerns

When organizations move to the cloud, there are new security concerns that must be addressed; for example, cloud applications, which are designed to enable sharing, can be used to share data with parties that are not authorized to view it. Despite the fact that some of these applications have their own native security features, many lack granularity, meaning that sensitive data such as personally identifiable information (PII), personal health information (PHI), federally funded research, and payment card industry data (PCI) can still fall into the wrong hands.

Complicating the situation further is the fact that education institutions are required to comply with multiple regulations; for example, FERPA, FISMA, PCI DSS, and HIPAA. Additionally, when personal devices are used to access data (a common occurrence for faculty and students alike), securing data and complying with regulatory demands becomes even more challenging.

Fortunately, cloud access security brokers (CASBs) are designed to protect data in today’s business world. Leading CASBs provide complete visibility and control over data in any app, any device, anywhere. Identity and access management capabilities, zero-day threat detection, and granular data protection policies ensure that sensitive information is safe and regulatory demands are thoroughly addressed.
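
To make the idea of “granular” data protection concrete, here is a minimal sketch of the kind of policy logic involved. It is written in Python, and the detection patterns, policy actions, and device-trust flag are hypothetical illustrations, not any particular CASB’s API:

```python
# A minimal sketch of "granular" data protection: policy decisions keyed to
# the type of data detected and the context of the request, rather than a
# single allow/deny switch for an entire application. All patterns, actions,
# and the device_managed flag below are hypothetical.
import re

PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # US Social Security number
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # rough PCI-style match
}

def classify(text: str) -> set:
    """Return the set of sensitive data types found in the text."""
    return {name for name, rx in PATTERNS.items() if rx.search(text)}

def decide(text: str, device_managed: bool) -> str:
    """Choose an action based on what was found and where it is going."""
    found = classify(text)
    if "ssn" in found:
        return "block"    # PII like SSNs is never shared externally
    if "credit_card" in found and not device_managed:
        return "redact"   # mask PCI data when the device is unmanaged
    return "allow"

print(decide("card 4111 1111 1111 1111", device_managed=False))  # -> redact
```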

Want to learn more? Download the Higher Education Solution Brief.

Introducing CAIQ-Lite

By Dave Christiansen, Marketing Director, Whistic


The Cloud Security Alliance and Whistic are pleased to release CAIQ-Lite beta, a new framework for cloud vendor assessment.

CSA and Whistic identified the need for a lighter-weight assessment questionnaire to accommodate the shift to cloud procurement models and to enable cybersecurity professionals to engage more easily with cloud vendors. CAIQ-Lite was developed to meet the demands of an increasingly fast-paced cybersecurity environment, where ease of adoption is becoming paramount when selecting a vendor security questionnaire.

With the initial objective of developing an effective questionnaire containing 100 or fewer questions, CAIQ-Lite contains 73 questions, compared to the 295 found in the full CAIQ, while still representing 100 percent of the original 16 control domains present in the Cloud Controls Matrix (CCM) 3.0.1. Contributing research leveraged multiple sources of CSA member and Whistic customer feedback, as well as a panel of hundreds of IT security professionals. Research behind Whistic’s proprietary scoring algorithm was also utilized as part of the final CAIQ-Lite question selection process.

We look forward to community feedback on CAIQ-Lite, which CSA members can access for free from Whistic as well as from CSA. The current version will be improved over the next 12 months based on additional community input. Also, any members that already have a CAIQ in the CSA STAR Program will automatically have a CAIQ-Lite generated for them on the Whistic Platform.

Click to access the full whitepaper, containing further details regarding the creation and deployment of this new cloud service questionnaire. 


Five Years of the GitHub Bug Bounty Program

By Philip Turnbull, Senior Application Security Engineer, GitHub

Image credit: GitHub. This article was originally published by the GitHub team.

GitHub launched our Security Bug Bounty program in 2014, allowing us to reward independent security researchers for their help in keeping GitHub users secure. Over the past five years, we have been continuously impressed by the hard work and ingenuity of our researchers. Last year was no different and we were glad to pay out $165,000 to researchers from our public bug bounty program in 2018.

We’ve previously talked about our other initiatives to engage with researchers. In 2018, our researcher grants, private bug bounty programs, and a live-hacking event allowed us to reach even more independent security talent. These different ways of working with the community helped GitHub reach a huge milestone in 2018: $250,000 paid out to researchers in a single year.

We’re happy to share some of our highlights from the past year and introduce some big changes for the coming year: full legal protection for researchers, more GitHub properties eligible for rewards, and increased reward amounts.

2018 Highlights

GraphQL and API authorization researcher grant

Since the launch of our researcher grants program in 2017, we’ve been on the lookout for bug bounty researchers who show a specialty in particular features of our products. In mid-2018, @kamilhism submitted a series of vulnerabilities to the public bounty program showing his expertise in the authorization logic of our REST and GraphQL APIs. To support his future research, we provided Kamil with a fixed grant payment to perform a systematic audit of our API authorization logic. Kamil’s audit was excellent, uncovering and allowing us to fix an additional seven authorization flaws in our API.

H1-702

In August, GitHub took part in HackerOne’s H1-702 live-hacking event in Las Vegas. This brought together over 75 of the top researchers from HackerOne to focus on GitHub’s products for one evening of live-hacking. The event didn’t disappoint—GitHub’s security improved and nearly $75,000 was paid out for 43 vulnerabilities. This included one critical-severity vulnerability in GitHub Enterprise Server. We also met with our researchers in-person and received great feedback on how we could improve our bug bounty program.

GitHub Actions private bug bounty

In October, GitHub launched a limited public beta of GitHub Actions. As part of the limited beta, we also ran a private bug bounty program to complement our extensive internal security assessments. We sent out over 150 invitations to researchers from last year’s private program, all H1-702 participants, and invited a number of the best researchers that have worked with our public program. The private bounty program allowed us to uncover a number of vulnerabilities in GitHub Actions.

We also held an office-hours event so that the GitHub security team and researchers could meet. We took the opportunity to meet face-to-face with our researchers because it’s a great way to build a community and learn from each other. Two of our researchers, @not-an-aardvark and @ngaloggc, gave an overview of their submissions and shared with everyone how they approached the target.

Workflow improvements

We’ve been making refinements to our internal bug bounty workflow since we last announced it back in 2017.  Our ChatOps-based tools have continued to evolve over the past year as we find more ways to streamline the process. These aren’t just technical changes—each day we’ve had individual on-call first responders who were responsible for handling incoming bounty submissions. We’ve also added a weekly status meeting to review current submissions with all members of the Application Security team. These meetings allow the team to ensure that submissions are not stalled, work is correctly prioritized by engineering teams based on severity, and researchers are getting timely updates on their submissions.

A key success metric for our program is how much time it takes to validate a submission and triage that information to the relevant engineering team so remediation work can begin. Our workflow improvements have paid off: we’ve significantly reduced the average time to triage from four days in 2017 down to 19 hours. Likewise, we’ve reduced our average time to resolution from 16 days to six days. Keep in mind: for us to consider a submission resolved, the issue has to either be fixed or properly prioritized and tracked by the responsible engineering team.

We’ve continued to reach our target of replying to researchers in less than 24 hours on average. Most importantly for our researchers, we’ve also dropped our average time for rewarding a submission from 17 days in 2017 down to 11 days. We’re grateful for the effort that researchers invest in our program and we aim to reduce these times further over the next year.

2019 initiatives

Although our program has been running successfully for the past five years, we know that we can always improve. We’ve taken feedback from our researchers and are happy to announce three major changes to our program for 2019:

Keeping bounty program participants safe from the legal risks of security research is a high priority for GitHub. To make sure researchers are as safe as possible, we’ve added a robust set of Legal Safe Harbor terms to our site policy. Our new policies are based on CC0-licensed templates by GitHub’s Associate Corporate Counsel, @F-Jennings. These templates are a fork of EdOverflow’s Legal Bug Bounty repo, with extensive modifications based on broad discussions with security researchers and Amit Elazari’s general research in this field. The templates are also inspired by other best-practice safe harbor examples including Bugcrowd’s disclose.io project and Dropbox’s updated vulnerability disclosure policy.

Our new Legal Safe Harbor terms cover three main sources of legal risk:

  • Your research activity remains protected and authorized even if you accidentally overstep our bounty program’s scope. Our safe harbor now includes a firm commitment not to pursue civil or criminal legal action, or support any prosecution or civil action by others, for participants’ bounty program research activities. You remain protected even for good faith violations of the bounty policy.
  • We will do our best to protect you against legal risk from third parties who won’t commit to the same level of safe harbor protections. Our safe harbor terms now limit report-sharing with third parties in two ways. We will share only non-identifying information with third parties, and only after notifying you and getting that third party’s written commitment not to pursue legal action against you. Unless we get your written permission, we will not share identifying information with a third party.
  • You won’t be violating our site terms if it’s specifically for bounty research. For example, if your in-scope research includes reverse engineering, you can safely disregard the GitHub Enterprise Agreement’s restrictions on reverse engineering. Our safe harbor now provides a limited waiver for relevant parts of our site terms and policies. This protects against legal risk from DMCA anti-circumvention rules or similar contract terms that could otherwise prohibit necessary research tasks like reverse engineering or deobfuscating code.

Other organizations can look to these terms as an industry standard for safe harbor best practices—and we encourage others to freely adopt, use, and modify them to fit their own bounty programs. In creating these terms, we aim to go beyond the current standards for safe harbor programs and provide researchers with the best protection from criminal, civil, and third-party legal risks. The terms have been reviewed by expert security researchers, and are the product of many months of legal research and review of other legal safe harbor programs. Special thanks to MG, Mugwumpjones, and several other researchers for providing input on early drafts of @F-Jennings’ templates.

Expanded scope

Over the past five years, we’ve been steadily expanding the list of GitHub products and services that are eligible for reward. We’re excited to share that we are now increasing our bounty scope to reward vulnerabilities in all first-party services hosted under our github.com domain. This includes GitHub Education, GitHub Learning Lab, GitHub Jobs, and our GitHub Desktop application. While GitHub Enterprise Server has been in scope since 2016, to further increase the security of our enterprise customers, we are now expanding the scope to include Enterprise Cloud.

It’s not just about our user-facing systems. The security of our users’ data also depends on the security of our employees and our internal systems. That’s why we’re also including all first-party services under our employee-facing githubapp.com and github.net domains.

Increased rewards

We regularly assess our reward amounts against our industry peers. We also recognize that finding higher-severity vulnerabilities in GitHub’s products is becoming increasingly difficult for researchers and they should be rewarded for their efforts. That’s why we’ve increased our reward amounts at all levels:

  • Critical: $20,000–$30,000+
  • High: $10,000–$20,000
  • Medium: $4,000–$10,000
  • Low: $617–$2,000

Our broad ranges have served us well, but we’ve been consistently impressed by the ingenuity of researchers. To recognize that, we no longer have a maximum reward amount for critical vulnerabilities. Although we’ve listed $30,000 as a guideline amount for critical vulnerabilities, we’re reserving the right to reward significantly more for truly cutting-edge research.

Get involved

The bounty program remains a core part of GitHub’s security process and we’re learning a lot from our researchers. With our new initiatives, now is the perfect time to get involved. Details about our safe harbor, expanded scope, and increased awards are available on the GitHub Bug Bounty site.

Working with the community has been a great experience—we’re looking forward to triaging your submissions in the future!

Bitglass Security Spotlight: DoD, Facebook & NASA

By Will Houcheime, Product Marketing Manager, Bitglass


Here are the top cybersecurity stories of recent weeks: 

—Cybersecurity vulnerabilities found in US missile system
—Facebook shares private user data with Amazon, Netflix, and Spotify
—Personal information of NASA employees exposed
—Chinese nationals accused of hacking into major US company databases
—Private complaints of Silicon Valley employees exposed via Blind

Cybersecurity vulnerabilities found in US missile system
The United States Department of Defense conducted a security audit of the U.S. ballistic missile system and found shocking results. The system’s security was outdated and not in keeping with protocol. The audit revealed that the ballistic missile system lacked data encryption, antivirus programs, and multifactor authentication. The Department of Defense also found 28-year-old security gaps that were leaving computers vulnerable to local and remote attacks. Clearly, the Missile Defense Agency must improve its cybersecurity posture before its defense weaponry is ever needed.

Facebook shares private user data with Amazon, Netflix, and Spotify
The security of Facebook users continues to be in question due to the company’s illicit use of private messages. The New York Times discovered Facebook documents from 2017 that explained how companies such as Spotify and Netflix were able to access private messages from over 70 million users per month. There are reports that suggest that companies had the ability to read, write, and delete these private messages on Facebook, which is disturbing news to anyone who uses the popular social network.

Personal information of NASA employees exposed
The personally identifiable information (PII) of current and former NASA employees was compromised early last year. The organization reached out to the affected individuals, notifying them of the data breach. The identity of the intruder was unknown; however, it was confirmed that the breach compromised Social Security numbers.

Chinese nationals accused of hacking into major US company databases
A group of hackers working for the Chinese government has been indicted by the U.S. Government for stealing intellectual property from tech companies. While the companies haven’t been named, prosecutors have charged two Chinese nationals with computer hacking, conspiracy to commit wire fraud, and aggravated identity theft.

Private complaints of Silicon Valley employees exposed via Blind
A social networking application by the name of Blind failed to secure sensitive user information when it left a database server completely exposed. Blind allows users to anonymously discuss topics including tech, finance, e-commerce, and the happenings within their workplaces (the app is used by employees of over 70,000 different companies). Anyone who knew how to find the online server could view each user’s account information without a password. Unfortunately, this security lapse exposed users’ identities and, consequently, allowed their employers to be implicated in their work-related stories.

To learn about cloud access security brokers (CASBs) and how they can protect your enterprise from ransomware, data leakage, misconfigurations, and more, download the Definitive Guide to CASBs.

Rocks, Pebbles, Shadow IT

By Rich Campagna, Chief Marketing Officer, Bitglass

Way back in 2013/14, Cloud Access Security Brokers (CASBs) were first deployed to identify Shadow IT, or unsanctioned cloud applications. At the time, the prevailing mindset amongst security professionals was that cloud was bad, and discovering Shadow IT was viewed as the first step towards stopping the spread of cloud in their organization.

Flash forward just a few short years, and the vast majority of enterprises have done a complete 180° with regard to cloud, embracing an ever-increasing number of “sanctioned” cloud apps. As a result, the majority of CASB deployments today are focused on real-time data protection for sanctioned applications – typically starting with system-of-record applications that handle wide swaths of critical data (think Office 365, Salesforce, etc.). Shadow IT discovery, while still important, is almost never the main driver in the CASB decision-making process.

Regardless, I still occasionally hear of CASB intentions that harken back to the days of yore – “we intend to focus on Shadow IT discovery first before moving on to protect our managed cloud applications.” Organizations that start down this path quickly fall into the trap of building time consuming processes for triaging and dealing with what quickly grows from hundreds to thousands of applications, all the while delaying building appropriate processes for protecting data in the sanctioned applications where they KNOW sensitive data resides.

This approach is a remnant of marketing positioning by early vendors in the CASB space. For me, it brings to mind Habit #3 from Stephen Covey’s The 7 Habits of Highly Effective People: “Put First Things First.”

Putting first things first is all about focusing on your most important priorities. There’s a video of Stephen famously demonstrating this habit on stage in one of his seminars. In the video, he asks an audience member to fill a bucket with sand, followed by pebbles, and then big rocks. The result is that once the pebbles and sand fill the bucket, there is no more room for the rocks. He then repeats the demonstration by having her add the big rocks first. The result is that all three fit in the bucket, with the pebbles and sand filtering down between the big rocks.

Now, one could argue that after you take care of the big rocks, perhaps you should just forget about the sand, but regardless, this lesson is directly applicable to your CASB deployment strategy:

You have major sanctioned apps in the cloud that contain critical data. These apps require controls around data leakage, unmanaged device access, credential compromise and malicious insiders, malware prevention, and more. Those are your big rocks and the starting point of your CASB rollout strategy. Focus too much on the sand and you’ll never get to the rocks.

Read what Gartner has to say on the topic in 2018 Critical Capabilities for CASBs.

Rethinking Security for Public Cloud

Symantec’s Raj Patel highlights how organizations should be retooling security postures to support a modern cloud environment

By Beth Stackpole, Writer, Symantec


Enterprises have come a long way with cyber security, embracing robust enterprise security platforms and elevating security roles and best practices. Yet with public cloud adoption on the rise and businesses shifting to agile development processes, new threats and vulnerabilities are testing traditional security paradigms and cultures, mounting pressure on organizations to seek alternative approaches.

Raj Patel, Symantec’s vice president of cloud platform engineering, recently shared his perspective on the shortcomings of a traditional security posture, along with the kinds of changes and tools organizations need to embrace to mitigate risk in an increasingly cloud-dominant landscape.

Q: What are the key security challenges enterprises need to be aware of when migrating to the AWS public cloud and what are the dangers of continuing traditional security approaches?

A: There are a few reasons why it’s really important to rethink this model. First of all, the public cloud by its very definition is a shared security model with your cloud provider. That means organizations have to play a much more active role in managing security in the public cloud than they may have had in the past.

Infrastructure is provided by the cloud provider, and as such, responsibility for security is being decentralized within the organization. The cloud provider delivers a certain level of base security, but the application owner builds infrastructure directly on top of the public cloud and thus now has to be security-aware.

The public cloud environment is also a very fast-moving world, which is one of the key reasons why people migrate to it. It is infinitely scalable and much more agile. Yet those very same benefits also create a significant amount of risk. Security errors are going to propagate at the same speed if you are not careful and don’t do things right. So from a security perspective, you have to apply that logic in your security posture.

Finally, the attack vectors in the cloud span the entire fabric of the cloud. Traditionally, people might worry about protecting their machines or applications. In the public cloud, the attack surface is the entire fabric of the cloud–everything from infrastructure services to platform services and, in many cases, software services. You may not know all the elements of the security posture of all those services … so your attack surface is much larger than in a traditional environment.

Q: Where does security fit in a software development lifecycle (SDLC) when deploying to a public cloud like AWS and how should organizations retool to address the demands of the new decentralized environment?

A: Most organizations going through a cloud transformation take a two-pronged approach. First, they are migrating their assets and infrastructure to the public cloud and second, they are evolving their software development practices to fit the cloud operating model. This is often called going cloud native and it’s not a binary thing—it’s a journey.

With that in mind, most cloud-native transformations require a significant revision of the SDLC … and in most cases, firms adopt some form of a software release pipeline, often called a continuous integration, continuous deployment (CI/CD) pipeline. I believe that security needs to fit within the construct of the release management pipeline or CI/CD practice. Security becomes yet another error class to manage, just like a bug. If you have much more frequent release cycles in the cloud, security testing and validation have to move at the same speed and be part of the same release pipeline. The software tools you choose to manage such pipelines should accommodate this modern approach.
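
As a rough illustration of treating security findings as “another error class,” here is a minimal sketch of a pipeline gate script. The scanner command and its JSON output shape are hypothetical stand-ins for whatever scanning tool a team actually runs:

```python
# Minimal sketch of a CI/CD "security gate" step: security findings fail the
# build exactly the way a failing unit test would. The scanner CLI name and
# its JSON output format are hypothetical; substitute your own tooling.
import json
import subprocess
import sys

def run_scanner() -> list:
    """Invoke a (hypothetical) scanner and parse its JSON findings."""
    result = subprocess.run(
        ["security-scanner", "--format", "json", "."],  # placeholder command
        capture_output=True, text=True, check=False,
    )
    return json.loads(result.stdout or "[]")

def main() -> None:
    findings = run_scanner()
    blocking = [f for f in findings if f.get("severity") in ("critical", "high")]
    for f in blocking:
        print(f"[SECURITY] {f['severity']}: {f.get('title', 'untitled finding')}")
    # A non-zero exit code fails the pipeline stage, stopping the release.
    sys.exit(1 if blocking else 0)

if __name__ == "__main__":
    main()
```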

Q: Can you explain the concept of DevSecOps and why it’s an important best practice for public cloud security?

A: DevOps is a cultural construct. It is not a function. It is a way of doing something—specifically, a way of building a cloud-native application. And a new term, DevSecOps, has emerged which contends that security should be part of the DevOps construct. In a sense, DevOps is a continuum from development all the way to operations, and the DevSecOps philosophy says that development, security, and operations are one continuum.

Q: DevOps and InfoSec teams are not typically aligned—what are your thoughts on how to meld the decentralized, distributed world of DevOps with the traditional command-and-control approach of security management?

A: It starts with a very strong, healthy respect for the discipline of security within the overall application development construct. Traditionally, InfoSec professionals didn’t intersect with DevOps teams because security work happened as an independent activity or as an adjunct to the core application development process. Now, as we’re talking about developing cloud-native applications, security is part of how you develop because you want to maximize agility and frankly, harness the pace of development changes going on.

One practice that works well is when security organizations embed a security professional or engineer within an application group or DevOps group. Oftentimes, the application owners complain that the security professionals are too far removed from the application development process so they don’t understand it or they have to explain a lot, which slows things down. I’m proposing breaking that log jam by embedding a security person in the application group so that the security professional becomes the delegate of the security organization, bringing all their tools, knowledge, and capabilities.

At Symantec, we also created a cloud security practitioners working group as we started our cloud journey. Engineers involved in migrating to the public cloud, as well as our security professionals, work as one common operating group to come up with best practices and tools. That has been very powerful because it is not a top-down approach and it’s not a bottom-up approach–it is the best outcome of the collective thinking of these two groups.

Q: How does the DevSecOps paradigm address the need for continuous compliance management as a new business imperative?

A: It’s not as much that DevSecOps invokes continuous compliance validation as much as the move to a cloud-native environment does. Changes to configurations and infrastructure are much more rapid and distributed in nature. Since changes are occurring almost on a daily basis, the best practice is to move to a continuous validation mode. The cloud allows you to change things or move things really rapidly and in a software-driven way. That means lots of good things, but it can also mean increasing risk a lot. This whole notion of DevSecOps to CI/CD to continuous validation comes from that basic argument.

Bitglass Security Spotlight: Financial Services Facing Cyberattacks

By Will Houcheime, Product Marketing Manager, Bitglass


Here are the top cybersecurity stories of recent months:

—Customer information exposed in Bankers Life hack
—American Express India leaves customers defenseless
—Online HSBC accounts breached
—Millions of dollars taken from major Pakistani banks
—U.S. government infrastructure accessed via DJI drones

Customer information exposed in Bankers Life hack
566,000 individuals have been notified that their personal information has been exposed. Unauthorized third parties breached Bankers Life websites by obtaining employee credentials. The hackers were then able to access personal information belonging to applicants and customers, including the last four digits of Social Security numbers, full names, addresses, and more.

American Express India leaves customers defenseless
On an unsecured server, 689,262 American Express India records were found in plain text. Anyone who came across the database housing this information could easily access personally identifiable information (PII) such as customer names, phone numbers, and addresses. The full extent of the exposure is not currently known.

Online HSBC accounts breached
HSBC has announced that about 1% of its U.S. customers’ bank accounts have been hacked. The bank has stated that the attackers had access to account numbers, balances, payee details, and more. Naturally, financial details are highly sensitive and must be thoroughly protected.

Millions of dollars taken from major Pakistani banks
According to the Federal Investigation Agency (FIA), almost all of the major Pakistani banks have been affected by a cybersecurity breach. This event exposed the details of over 19,000 debit cards from 22 different banks. This was the biggest cyberattack ever to hit Pakistan’s banking system, resulting in a loss of $2.6 million.

U.S. government infrastructure accessed via DJI drones
Da Jiang Innovations (DJI) was accused of leaking confidential U.S. law enforcement information to the Chinese government. DJI quickly denied the passing of any information to another organization. However, it has since been determined that DJI’s data security was inadequate, and that sensitive information could be easily accessed by unauthorized third parties.

To defend against these threats, financial services firms should adopt a comprehensive security solution like a cloud access security broker (CASB).

To learn more about the state of security in financial services, download Bitglass’ 2018 Financial Breach Report.

The 12 Most Critical Risks for Serverless Applications

By Sean Heide, CSA Research Analyst, and Ory Segal, CSA Israel Chapter Board Member


When planning the implementation of a serverless architecture for your company, there are a few key risks you must take into account to ensure that proper security controls are in place and that your program can sustain your applications over the long term. Though this list highlights the 12 risks deemed most common, other potential risks should always be treated just as seriously.

Serverless architectures (also referred to as “FaaS,” or Function as a Service) enable organizations to build and deploy software and services without maintaining or provisioning any physical or virtual servers. Applications built on serverless architectures are suitable for a wide range of services and can scale elastically as cloud workloads grow. However, this wide array of off-site application structures opens up a string of potential attack surfaces, with vulnerabilities stemming from the use of multiple APIs and HTTP interfaces.

From a software development perspective, organizations adopting serverless architectures can focus instead on core product functionality, rather than the underlying operating system, application server or software runtime environment. By developing applications using serverless architectures, users relieve themselves from the daunting task of continually applying security patches for the underlying operating system and application servers. Instead, these tasks are now the responsibility of the serverless architecture provider. In serverless architectures, the serverless provider is responsible for securing the data center, network, servers, operating systems, and their configurations. However, application logic, code, data, and application-layer configurations still need to be robust—and resilient to attacks. These are the responsibility of application owners.

While the comfort and elegance of serverless architectures are appealing, they are not without their drawbacks. In fact, serverless architectures introduce a new set of issues that must be considered when securing such applications, including an increased and more complex attack surface, inadequate security testing, and the limited applicability of traditional protections such as firewalls.

Serverless application risks by the numbers

Today, many organizations are exploring serverless architectures, or just taking their first steps in the serverless world. To help them succeed in building robust, secure, and reliable applications, the Cloud Security Alliance’s Israel Chapter has drafted “The 12 Most Critical Risks for Serverless Applications 2019.” This new paper enumerates what top industry practitioners and security researchers with vast experience in application security, cloud, and serverless architectures believe to be the current top risks specific to serverless architectures.

Organized in order of risk factor, with SAS-1 being the most critical, the list breaks down as follows:

  • SAS-1: Function Event Data Injection (see the sketch after this list)
  • SAS-2: Broken Authentication
  • SAS-3: Insecure Serverless Deployment Configuration
  • SAS-4: Over-Privileged Function Permissions & Roles
  • SAS-5: Inadequate Function Monitoring and Logging
  • SAS-6: Insecure Third-Party Dependencies
  • SAS-7: Insecure Application Secrets Storage
  • SAS-8: Denial of Service & Financial Resource Exhaustion
  • SAS-9: Serverless Business Logic Manipulation
  • SAS-10: Improper Exception Handling and Verbose Error Messages
  • SAS-11: Obsolete Functions, Cloud Resources and Event Triggers
  • SAS-12: Cross-Execution Data Persistency
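
To make the top-ranked risk concrete, here is a minimal, hypothetical sketch of SAS-1 using a Python, Lambda-style handler: event data flowing unchecked into a shell command, alongside one way to harden it:

```python
# Hypothetical AWS Lambda-style handlers illustrating SAS-1, Function Event
# Data Injection. Every field of the incoming event is attacker-influenced
# input, whether it arrives over HTTP, a queue, or a storage notification.
import subprocess

def handler_unsafe(event, context):
    # BAD: event data flows straight into a shell command (OS command injection).
    filename = event["queryStringParameters"]["file"]
    return subprocess.check_output(f"wc -l {filename}", shell=True, text=True)

def handler_safer(event, context):
    # Better: validate the input, avoid the shell, and pass the value as a
    # discrete argument so it cannot be interpreted as shell syntax.
    filename = event["queryStringParameters"]["file"]
    if not filename.replace("-", "").replace("_", "").isalnum():
        raise ValueError("unexpected characters in file name")
    return subprocess.check_output(["wc", "-l", filename], text=True)
```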

In developing this security awareness and education guide, researchers pulled information from such sources as freely available serverless projects on GitHub and other open source repositories; automated source code scanning of serverless projects using proprietary algorithms; and data provided by our partners, individual contributors and industry practitioners.

While the document provides information about what are believed to be the most prominent security risks for serverless architectures, it is by no means an exhaustive list. Interested parties should also check back often as this paper will be updated and enhanced based on community input along with research and analysis of the most common serverless architecture risks.

Thanks must also be given to the following contributors, who were involved in the development of this document: Ory Segal, Shaked Zin, Avi Shulman, Alex Casalboni, Andreas N, Ben Kehoe, Benny Bauer, Dan Cornell, David Melamed, Erik Erikson, Izak Mutlu, Jabez Abraham, Mike Davies, Nir Mashkowski, Ohad Bobrov, Orr Weinstein, Peter Sbarski, James Robinson, Marina Segal, Moshe Ferber, Mike McDonald, Jared Short, Jeremy Daly, and Yan Cui.

Deciphering DevSecOps


Security needs to be an integral part of the DevOps roadmap. Enterprise Strategy Group’s Doug Cahill shows the way

By Beth Stackpole, Writer, Symantec

Security has moved to the forefront of the IT agenda as organizations push forward with digital transformation initiatives. At the same time, DevOps, a methodology that applies agile and lean principles to software development, is also a top priority. The problem is the two enterprise strategies are often not aligned.

We recently spoke with Doug Cahill, senior analyst and group director at Enterprise Strategy Group, to get his take on the importance of the DevSecOps approach as well as how to retool organizations to adopt the emerging principle.

Q: Cyber security is often not an integral part of the DevOps roadmap. What are the dangers of such a siloed approach and what is the impact on the business?

A: Application development is now often being driven out of line of business, outside of the purview of centralized IT and cyber security teams. That’s because there’s a need to get new applications into production, or update applications already in production, as quickly as possible.

The risk of not having security integrated in a decentralized IT and application development approach is that too often no security controls are applied. That means new “code-as-infrastructure” is getting deployed into production with security never having been contemplated at all.

Another problem is the use of default settings. Some basic examples are server workloads that are provisioned in the public cloud without going through a jump host or single proxy, which means they can be port scanned. Another common mistake is the lack of appropriate authentication controls; use of multifactor authentication (MFA) is something that a security practitioner would champion, but without security involvement in the DevOps process, it may not be thought about.

The risk to the business is as more application infrastructure becomes public cloud resident, we’re finding more of that is business-critical and sensitive. That exposes the organization to a variety of cyber security threats, both internal and external.

Q: Explain the schism between DevOps and cyber security teams that leads to siloed operations and failure to embrace more integrated DevSecOps practices.

A: It is really based on competing objectives. The AppDev and DevOps teams are chartered with moving quickly, getting new applications to production and updating those applications iteratively based on feedback from the market. Security, on the other hand, is chartered with making sure those applications behave in their intended state, meaning they are not compromised. Therefore, security professionals generally take a more deliberate, methodical approach to their job.

Security practitioners sometimes see DevOps as akin to running with scissors—bad things happen when you move fast, from their perspective. DevOps, on the other hand, thinks security is just going to slow them down. In reality, there is a way to secure infrastructure at the speed of DevOps, so it’s a misunderstanding based on competing objectives. The gap can be closed, but the first step is to understand that there is a gap.

Q: There’s a lot of talk about “shifting security left,” but also “shifting security right.” Can you explain what is meant by both and how they address integrated DevSecOps best practices?

A: The shift security left, shift security right metaphors are akin to the notion of having security baked in versus bolted on. Traditionally, security has been bolted on; it hasn’t always necessarily been part of the design center. The world of continuous integration and continuous delivery (CI/CD) is really an opportunity to bake security into all environments and stages, from development to test to production. We can think of shift left as pre-deployment and shift right as runtime. The nod to shift right is a reminder that we still have to apply runtime controls to production servers and applications to protect them from intrusions. This includes things like appropriate access controls, updated host-based firewalls, anti-malware controls, and anti-exploit controls.

Q: Why should a company integrate security processes and controls with DevOps?

A: There are a number of really compelling benefits to integrating security into the CI/CD pipeline, something sometimes referred to as DevSecOps. One is the ability to secure at scale. Just as server groups autoscale based on the capacity requirements of the application, security is automatically integrated into the way you orchestrate and provision each new server.

Integrating security controls helps organizations meet and maintain compliance with regulations such as PCI DSS as environments are provisioned and managed through DevOps processes. It’s really about security and compliance at scale, but there is also a level of efficiency. If you can automate applying the right security controls based on the role of the server workload, it’s a highly efficient approach. There are so many corollaries in terms of project work—we know that if we have to go back and do things later, it’s much less efficient than doing it right the first time.
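
As a sketch of that “secure at scale” idea, here is what automated, role-based provisioning logic might look like in Python. The role names and control identifiers are hypothetical placeholders for calls into real orchestration tooling:

```python
# Minimal sketch of "security at scale": a baseline of controls applied
# automatically at provision time, keyed to the workload's role. The roles
# and control identifiers are hypothetical; in a real pipeline the apply
# step would call your orchestration or configuration tooling.
from dataclasses import dataclass

BASELINES = {
    "web": ["host_firewall:inbound_443_only", "anti_malware", "waf_enrollment"],
    "db": ["host_firewall:app_subnet_only", "encryption_at_rest", "audit_logging"],
    "batch": ["host_firewall:deny_all_inbound", "anti_malware"],
}

@dataclass
class Workload:
    name: str
    role: str

def apply_baseline(workload: Workload) -> None:
    controls = BASELINES.get(workload.role)
    if controls is None:
        # Fail closed: an unknown role is a policy gap, not a free pass.
        raise ValueError(f"no security baseline defined for role '{workload.role}'")
    for control in controls:
        print(f"{workload.name}: applying {control}")

# Every newly provisioned server gets its controls the same way, every time.
apply_baseline(Workload(name="api-01", role="web"))
```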

Q: What is your set of recommended best practices for putting DevSecOps into action?

A: The best practices for DevSecOps are composed of people, processes, and technologies. If we take a page out of the shared responsibility security model that cloud service providers talk about, CSPs are responsible for physical security, network security, all the way up to the hypervisor. The customers are responsible for everything north of the hypervisor like the workload, operating system, applications, data, identity and access management. We should have a similar approach for the relationship between the DevOps team and the security team—both teams need to work collaboratively to secure public cloud infrastructure.

The second is to look at this as a risk management approach. In larger organizations, you’ll have multiple teams developing a wide variety of applications, but not all those applications have an equal level of risk to the business. If an organization is just starting down the DevSecOps path, they should start with one or two applications where they have the most risk for their business.

The next suggestion is to leverage the agile software development processes used to do CI/CD to write cyber security-related user stories. The cyber security team should partner with the product owner who is typically responsible for defining user stories and tasks that will be implemented over the course of the next sprint. The cyber security representative should really become part of the SCRUM team. They should be attending daily stand ups and explaining the value and importance of implementing these different user stories.

Q: Are there any special ingredients that make for a DevSecOps-friendly culture?

A: I think it’s that security needs to be owned by everybody, and making security a requirement needs to come from leadership. You also need a dose of pragmatism—if an organization has a readiness gap and you’re playing catch-up, it’s taking a risk-based approach to identify where you have the most exposure and start there.

Bitglass Security Spotlight: Breaches Expose Millions of Emails, Texts, and Call Logs

By Will Houcheime, Product Marketing Manager, Bitglass


Here are the top cybersecurity stories of recent weeks: 

—773 million email accounts published on hacking forum
—Unprotected FBI data and Social Security numbers found online
—Millions of texts and call logs exposed on unlocked server
—South Korean Defense Ministry breached by hackers
—Ransomware forces City Hall of Del Rio to work offline

773 million email accounts published on hacking forum
Data breaches have been a significant topic for organizations in the past few years, but this latest data breach in particular emphasizes the importance of proper cybersecurity. This monumental breach revealed 772,904,991 unique email addresses and over 21 million unique passwords. This immense volume of credentials was posted to a hacking forum just two weeks into the new year.

Unprotected FBI data and Social Security numbers found online
A cybersecurity researcher by the name of Greg Pollock found 3 terabytes of unprotected data from the Oklahoma Securities Commission. This included sensitive FBI data, with some files dating back to 2012. Social Security numbers were also found, some of which were collected as far back as the 1980s. The FBI has not confirmed or denied the data breach, but according to UpGuard, the cybersecurity firm investigating the incident, the breach is significant and affects the entire agency statewide.

Millions of texts and call logs exposed on unlocked server
Voipo, a California communications provider, left a database full of text messages and call logs completely exposed. A cybersecurity researcher found this unprotected server with 6 million text messages and 8 million call logs. The data also included documents with encrypted passwords that would put the company at risk if accessed by a malicious user.

South Korean Defense Ministry breached by hackers
Data on weapons and munitions acquisitions was exposed when a South Korean government agency’s computer systems were breached. The data included information on military weapons, such as concepts for fighter aircraft. The attackers were able to hack into an unsecured server for a program that is present on all government computers. The South Korean National Intelligence Service investigated the data breach and, although it has disclosed the occurrence to the public, it has not announced whether or not it has discovered the identity of the hackers.

Ransomware forces City Hall of Del Rio to work offline
Del Rio City Hall servers were shut down after a ransomware attack. The Management Information Systems (MIS) department had no choice but to stop all devices from connecting to the internet to halt the spread of the malware. With no access to data online, employees of each department were then forced to use pen and paper for all of their daily operations. City Hall officials have reported the incident to the FBI but it is still unclear whether or not data has been compromised or who was behind the attack.

To learn about cloud access security brokers (CASBs) and how they can protect your enterprise from ransomware, data leakage, misconfigurations, and more, download the Definitive Guide to CASBs.

Security Risks and Continuous Development Drive Push for DevSecOps

How the need to speed application creation and subsequent iterations has catalyzed the adoption of the DevOps philosophy

By Dwight B. Davis, Writer, Symantec


The sharp rise in cyber security attacks and damaging breaches in recent years has driven a new mantra among both application developers and security professionals: “Build security in from the ground up.” Although it’s hard to argue with that commonsense objective, actually achieving it has proven to be far from straightforward.

Traditionally, of course, developers have focused on delivering reliable software that first and foremost provided the desired functionality. Security was largely an afterthought, if a thought at all – something that was to be layered on top of the application once it hit production. When, inevitably, applications were found to have code vulnerabilities, developers crafted and distributed patches to fix them.

Never ideal, the patch and update model has become untenable as security threats have escalated and software development cycles have accelerated. Much of this acceleration has been driven by public cloud computing, which has fostered the rise of a continuous integration/continuous delivery (CI/CD) development model.

At a more macro level, the need to speed application creation and subsequent iterations has also catalyzed the adoption of the DevOps philosophy. The fundamental basis for this movement is to better integrate developer and operations teams, thus ensuring that each side has a better understanding of the other’s needs and constraints.

By some estimations, DevOps and CI/CD can inherently aid in the creation of more secure software. Why? Because, thanks to the rapid application iterations, software flaws can be more quickly identified and more swiftly patched.

The problem with this supposed benefit is that it doesn’t alter developers’ build/patch mentality. Vulnerability fixes may indeed occur more quickly, but security still isn’t a core developer concern or responsibility.

Spanning the development & deployment cycle

To truly deliver on the “security from the ground up” objective, DevOps teams need to blend in a security component that spans the entire software development and deployment lifecycle. That need has resulted in the notion of a DevSecOps paradigm or culture. Here again, though, the concept is easier to grasp than to achieve.

The challenges associated with DevSecOps range from cultural to technical. Developers long uninterested in security issues need to change their mindsets and expand their skillsets. Development and security teams that have largely operated in their own isolated domains need to learn how to collaborate tightly. Developers who already have tools for code production and management now need tools for building secure apps. For their part, development team managers need tools to give them visibility into the security of the code each developer is producing.

When it comes to implementing DevSecOps, there is no one-size-fits-all guidebook. Cloud-native and more entrepreneurial firms may be able to mesh their developer, operations, and security teams more quickly and easily than mature organizations that must break down existing functional silos. The tightness of that integration will vary across organizations, but the fundamental need is the same: to move security beyond its own isolated domain.

“The goal is to decentralize and democratize security,” explains Hardeep Singh, cloud security architect at Symantec. “Having centralized security and decentralized development and operations is a recipe for disaster.”

Although developers must become more conversant and capable regarding security issues and approaches, the degree of their security expertise will vary considerably. Likewise, security pros typically aren’t going to become coding wizards. That said, each group needs to better understand the other’s world, and how the two must intersect.

Often, the security members within DevSecOps environments will set high-level priorities and best practices, while the developers will be tasked with implementing them.

“You can’t expect app developers to know the best practices to secure IP data or to identify a SQL injection,” says Raj Patel, Symantec’s VP of Cloud Platform Engineering. “But you can train them on safe development practices and techniques.”
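
To unpack Patel’s SQL injection example, the sketch below (illustrative Python using the standard library’s sqlite3 module, with an invented table and payload) shows how unsanitized input rewrites a query, and how a parameterized query treats the same payload as plain data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "x' OR '1'='1"  # classic injection payload

# Vulnerable: the input is spliced into the SQL string, so the payload
# rewrites the WHERE clause and matches every row in the table.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '%s'" % user_input).fetchall()
print(rows)  # [('alice', 'admin')] - data leaked

# Safe: a parameterized query treats the input strictly as a value,
# so the payload matches nothing.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)).fetchall()
print(rows)  # []

Training developers to reach for the parameterized form by default is exactly the kind of safe development practice Patel describes.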

There may be some resistance among developers asked to take more responsibility for building secure code, just as some security experts may balk at bringing developers more fully into the security lifecycle process. Most often, though, these two groups are happy to share the burden of producing secure applications, since doing so not only reduces cyber security risks but also makes their own work lives easier.

DevSecOps can also drive a meeting-in-the-middle truce between the historical – and polar opposite – attitudes of developers and security pros. The bias of developers has been to say “Yes” to user requests, while that of security experts – prioritizing security over functionality and ease-of-use – has been to say “No.”

“In some ways, DevSecOps can be how security teams move to ‘Yes,’ because they’ve helped developers address security needs right from the start,” says Patel.

CCSK Success Stories: From the Financial Sector

By the CSA Education Team

This is the second part in a blog series on Cloud Security Training. Today we will be interviewing an infosecurity professional working in the financial sector. John C Checco is President Emeritus for the New York Metro InfraGard Members Alliance, as well as an Information Security professional providing subject matter expertise across various industries. John is also a part-time NYS Fire Instructor, a volunteer firefighter with special teams training in vehicular extrication and dive/ice rescue, an amateur novelist, and routinely donates blood in several adult hockey leagues.

Can you describe your role?

Currently I lead the “Security Innovation Evaluation Team” at a large financial firm where we forage and test emerging technology solutions that will build upon our security posture and fortify our resilience far into the future.

What got you into cloud security in the first place? What made you decide to earn your CCSK?

Whether you are in the automotive, engineering, medical, retail, or information security field, you need to constantly stay abreast of emerging trends and hype – and “cloud” was one of those trends, part substance and part hype, that represented a logical transition from existing legacy infrastructures.

I am a lifelong learner, and seeing the early explosion of “cloud providers” that really just wrapped an orchestration layer around virtualization, rather than offering truly holistic solutions, was the jumpstart I needed to understand how important the CCM (and CCSK) was.

The CCSK reflects both the operational knowledge of the CCM, as well as the strategic goals for the CSA. The CCM itself is a superset of many existing security control standards, which makes the CCSK all the more relevant to today’s security environment.

Can you elaborate on how the CCSK reflects the operational knowledge of the CCM? Why do you think this is important knowledge for infosec professionals to know?

The CCM builds upon existing NIST/ISO standards and produces new controls where existing controls cannot adequately cover the cloud paradigm. If one knows how to properly interpret and use the CCM standard, they most likely understand the non-cloud security standards as well. The CCSK represents knowledge assurance of the CCM at an operational level; and because it shares an origin with the CCM, the CCSK can truly test proficiency in the spirit of the CCM as it was designed, not just its definitions.

How did you prepare for the CCSK exam?

I was an initial member of the NY Metro Chapter of the CSA and aware of the Cloud Controls Matrix. Although my employer was not explicitly referencing the CCM as a security standard, I was pulling from it as a security controls guidance for my employer’s projects.

If you could go back and take it again, how would you prepare differently?

As information security has become more complex and more splintered, simply studying definitions is no longer an effective way to build lasting knowledge. I would suggest two additional study techniques:

  • Understand the “WHY” of each control in the CCM: what was the originating problem statement, what is the scope of that problem statement, and was the control defined to resolve the problem or simply reduce the problem’s impact to a tolerable level? Once you have a good comprehension of the background, then there is no memorization needed … it becomes common sense to the learner.
  • Get DIRTY with some hands-on experience – whether it be an existing work project or reworking an old personal project. Taking an old project and redeploying it using newer technologies and security controls gives the learner unimaginable insight into why a control is written in a certain way. The advantage of using an existing project is that you can focus on the coding, deployment and security control aspects rather than features and requirements. I have revamped my personal “Resume Histogram” project originally written in 1990 as a dial-up BBS site → to a CGI website → to a RoR web application (hey, not every decision was a good one) → to a social media plugin → to a containerized web API.

Were there any specific topics on the exam that you found trickier than others?

I suspect that everyone will have a different topic of weakness. Legal aspects were mine, and given the plethora of recent changes in standards and regulations – PCI DSS 3, NIST revisions, NYS DFS 500, GDPR, and the myriad of local regulations – I suspect it is not going to get any easier.

What is your advice to people considering earning their CCSK?

I have four points of advice:

  1. Get real-life, quality experience before you attempt a certification … doctors, nurses, architects and engineers are required to, so why not InfoSec professionals?
  2. Make a habit of learning something every day … knowledge gets stale; intelligence doesn’t.
  3. Avoid shortcuts like boot camps; they’re a crash diet of ignorance.
  4. Be humble, keep an open mind, and listen before you speak … things change, so what you knew was right today may be turned on its head tomorrow. Nobody should want to gain a reputation for being “CIA” (certified, ignorant and arrogant).

Lastly, what part of the material from the CCSK has been the most relevant in your work, and why?

Ironically, my work over the years has made my weakest area – legal – also the most important and relevant one, especially when it comes to contracts with cloud providers for enterprise projects, as well as with vendors and managed service providers who run in the cloud.

Interested in earning your CCSK? Download our free CCSK prep-kit here.

Invest in your future with CCSK training

CCM Addenda Updates for Two Additional Standards

By the CSA CCM Working Group


Dear Colleagues,

We’re happy to announce the publication of the updated Cloud Controls Matrix (CCM) Addenda for the following standards:
— German Federal Office for Information Security (BSI) Cloud Computing Compliance Controls Catalogue (C5)
— ISO/IEC 27002, ISO/IEC 27017 and ISO/IEC 27018

These CCM addenda aim to help organizations assess and bridge compliance gaps between the CCM and other security frameworks. 

The documents contain:  

  • A controls mapping between the above-mentioned standards and the CCM (e.g., which control(s) in the CCM map to each given control in ISO/IEC 27017).
  • A gap analysis
  • Compensating controls (i.e. the actual “addendum”)

Additionally, the addendum for the German BSI C5 contains both mappings and reverse mappings.

The CSA and the CCM Working Group hope that organizations will find these documents useful for their security compliance programs.

Best Regards,
CSA CCM Working Group

Addressing the Skills Gap in Cloud Security Professionals

By Ryan Bergsma, Training Program Director, CSA

One of the math lessons that has always stuck with me from childhood is that if you took a penny and doubled it every day for a month, it would make you a millionaire. In fact, it wouldn’t even take the whole month; you would be a millionaire on the 28th day. Of course, most of us realize this would be nearly impossible to accomplish in reality (unless you invested in the right crypto at the right time in the fall and early winter of 2017). The reason this old math lesson comes to mind when I think about the skills gap in IT security, and in particular cloud security, is Moore’s Law.
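
For the skeptical, a few lines of illustrative Python confirm the day-28 figure:

# Start with $0.01 on day 1 and double it every day thereafter.
amount = 0.01
for day in range(2, 31):
    amount *= 2
    if amount >= 1_000_000:
        print(f"Day {day}: ${amount:,.2f}")  # Day 28: $1,342,177.28
        break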

The rise of cybercrime & IT security

Granted, doubling every two years is a lot different than doubling every day, but if you take 1970 as the starting point, we are already over 85 percent of the way to our computational power being one million times greater than what it was. I bring this up because it speaks to the rapid increase in the power of the tools at the disposal of criminal hackers today. Couple that with the fact that:

  1. Modern society relies so heavily on IT and…
  2. So many of our assets (from personal information, to intellectual property, to bank ledgers) can now be found online

And you have a scenario that is ripe for exploitation. With so much opportunity, albeit illegal, it is no wonder that bad actors have become prolific. And with this group of bad actors growing so rapidly, we see the boom of the IT security industry – especially since it may take only one persistent bad actor to breach a system or network, but generally an entire team to protect it.

So… the demand for cybersecurity professionals continues to balloon.

In fact, the Cybersecurity Ventures 2017 Cybersecurity Jobs Report says “Cybercrime will more than triple the number of job openings over the next 5 years” and predicts that there will be 3.5 million unfilled cybersecurity positions by 2021.

Increased threats to cloud computing

One particular realm of IT that has exploded into the mainstream consciousness in the past decade is cloud computing. The benefits of cloud computing have driven large-scale adoption by both individuals and businesses. In many cases, it may even be in use without the user’s awareness (or an understanding of the potential impacts). Whether that awareness is there or not, the need for security in cloud computing most certainly is. Though it may be possible for cloud solutions to provide heightened levels of security when compared with traditional on-premises IT infrastructures and services, cloud infrastructures, platforms and services do come with their own unique set of risks. CSA even maintains a list of Top Threats for cloud environments. These factors have left many businesses, even those with existing IT security departments, scrambling to understand and mitigate the risks associated with the myriad of cloud solutions.

Meanwhile the shift to cloud continues to accelerate. The same Cybersecurity Ventures report also mentions that “Microsoft estimated that 75 percent of infrastructure will be under third-party control (i.e., cloud providers or Internet Services Providers) by 2020.”

Why the skills gap exists

With cybercrime driving the growing demand for cybersecurity professionals, the explosion of cloud usage, and its subsequent need for cloud security professionals, why is it that so many of these jobs remain unfilled?

The harsh reality is that employers are not able to find the employees to fill these positions because the demand is so great. There are not enough individuals with the skill set and years of experience that employers are looking for to fill these critical positions. A survey of industry influencers conducted by LogicMonitor found that “58% agreed lack of cloud experience in their employees was one of the biggest challenges.” Employers are then left with the choice of leaving the positions unfilled or filling them with underqualified applicants. A 2017 Global Information Security Workforce Study says that “It is not uncommon for cybersecurity workers to arrive at their jobs via unconventional paths. The vast majority, 87% globally, did not start in cybersecurity, but rather in another career. While many moved to cybersecurity from a related field such as IT, many professionals worldwide arrived from a non-IT background.”

What can be done to address this skills-gap?

Given the growing business demand for skilled cloud security professionals, what can be done to stem the tide of this increasing skills gap?

As an organization

To begin to combat the skills gap in cybersecurity professionals, and cloud security professionals in particular, businesses need to start taking proactive steps. Get your business behind initiatives to document current best practices in security and turn that documentation into training materials for the workforce. In cloud, this is especially critical given its rapid development and expansion. This could take the form of encouraging your senior employees to spend some portion of on-the-clock time volunteering for these types of initiatives, or of directly funding projects to create the new training materials. Organizations also need to encourage and incentivize current employees who are less knowledgeable in security to take advantage of current training offerings. It could also be worth considering setting up scholarship programs to make cybersecurity training more accessible for the next generation of cybersecurity professionals.

Of course, given the gap, businesses also need to be more open to hiring these newly trained security professionals into entry-level and junior positions so that they can begin to build the experience required to fill more senior positions.

As an individual

For individuals who are interested in a cybersecurity career: get into training and pursue certificates and certifications that demonstrate your interests and abilities to the businesses that are desperately in need of qualified cybersecurity professionals. There is a wide range of options when it comes to cybersecurity, so make an effort to figure out where your interests lie. Some of the many options include computer forensics, pen testing, network security, security policy, end-user education, security audit, and secure software development. Whether you are interested in writing code or working with people, there are likely security opportunities that will be a good fit for you personally.

If you already have some level of security knowledge and are interested in cloud, our Certificate of Cloud Security Knowledge (CCSK) offering is a great place to start. Holders of the Certified Information Systems Security Professional (CISSP) from (ISC)2 benefit from the alignment between the two credentials’ bodies of knowledge. All 10 of the CISSP’s domains have an analog among the CCSK’s 14 domains; where the domains overlap, the CCSK builds on the CISSP domain and provides cloud-specific context.

For those holding ISACA’s Certified Information Systems Auditor (CISA) designation, a better understanding of how clouds work and how they can be secured makes it easier to identify the appropriate measures for testing control objectives and to make appropriate recommendations.

If you’re interested in learning more about cloud security training for you or your team, please visit our CCSK Training page or download our Free Prep-Kit.

Invest in your future with CCSK training

Correction: An earlier version of this post incorrectly attributed the Cybersecurity Jobs Report to Herjavec, when in fact this report was produced by Cybersecurity Ventures.

Ryan Bergsma is the Training Program Director at CSA, where he manages CSA’s training programs, including the Certificate of Cloud Security Knowledge (CCSK) and Cloud Controls Matrix (CCM) Training.

Development of Cloud Security Guidance, with Mapping MY PDPA Standard to CCM Control Domains, Jointly Developed by MDEC and CSA

By Ekta Mishra, Research Analyst/APAC, Cloud Security Alliance

The Cloud Security Alliance Cloud Controls Matrix (CCM) provides a controls framework that gives a detailed understanding of security concepts and principles that are aligned to the Cloud Security Alliance guidance in 13 domains. The foundations of the CSA CCM rest on its customized relationship to other industry-accepted security standards, regulations, and controls frameworks such as ISO 27001/27002, ISACA COBIT, PCI, NIST, Jericho Forum and NERC CIP, and it will augment or provide internal control direction for service attestations and control reports provided by cloud providers.

As a framework, the CSA CCM provides organizations with the needed structure, detail, and clarity relating to information security, tailored to the cloud industry. The CSA CCM strengthens existing information security control environments by emphasizing business information security control requirements; identifying and reducing consistent security threats and vulnerabilities in the cloud; providing standardized security and operational risk management; and seeking to normalize security expectations, cloud taxonomy and terminology, and security measures implemented in the cloud.

The Malaysian Personal Data Protection Commissioner issued the Personal Data Protection Standards 2015, which came into force on 23 December 2015 (the “Standards”). To those who are affected, namely any person that “processes” and “has control over or authorizes the processing of any personal data in relation to commercial transactions” (in other words, any person or company that deals with personal data in the course of its business, also known as “data users”), the Standards stand to be a new compliance hurdle and would impose additional responsibilities on these data users, over and above those set by the Malaysia Personal Data Protection Act 2010 (“PDPA”).

The inclusion of the Malaysian Personal Data Protection Standards in the CSA CCM aligns the regional standard with the more than 30 global frameworks mapped in the CSA framework. Additionally, the mapping, conducted by the Malaysia Digital Economy Corporation (MDEC), further expands the coverage of the CSA CCM into the APAC region.

How to read the document:
1. The 4 sections from MY PDPA 2015 were mapped to CCM control domains. This was accomplished by matching each control in the CCM to its counterpart control(s) in MY PDPA to determine equivalence. This approach considered which CCM control is associated with which control(s) in MY PDPA, and to what degree they are equivalent to each other. The extent of equivalence between controls of the two frameworks approximates the amount of effort necessary to incorporate MY PDPA, using the CCM as a base.
2. The CCM Control ID was used as a reference for the CCM control domain name.
3. A gap identification and analysis was conducted for the remaining controls not considered equivalent (i.e., partial and full gaps) after the initial mapping. Furthermore, the gap analysis provides indicators of how much effort it may take to bridge the gaps between the two frameworks.
4. The controls from MY PDPA that were determined to have full and partial gaps will be used as compensating controls in the main CCM document. (A simplified sketch of this mapping-and-gap structure follows below.)
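
To illustrate that structure, here is a simplified, hypothetical sketch in Python; the control IDs, section pairings, and equivalence ratings are invented for the example, and the authoritative values live in the published mapping document:

# Hypothetical mapping from MY PDPA sections to CCM controls
# (control IDs and equivalence ratings are invented for the example).
mapping = [
    {"pdpa_section": "Retention Standard", "ccm_control": "DSI-07", "equivalence": "full"},
    {"pdpa_section": "Data Integrity Standard", "ccm_control": "DSI-04", "equivalence": "partial"},
    {"pdpa_section": "Data Security (Non-Electronic)", "ccm_control": None, "equivalence": "none"},
]

# Step 3: entries that are not fully equivalent are the partial and full gaps.
gaps = [m for m in mapping if m["equivalence"] != "full"]

# Step 4: the MY PDPA controls behind those gaps become candidate
# compensating controls in the main CCM document.
for gap in gaps:
    print(f"{gap['pdpa_section']}: {gap['equivalence']} gap -> compensating control")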

 

The four sections of the document have been derived from the Malaysia (MY) Personal Data Protection Standard 2015:

  • Data Security for Personal Data Processed Electronically – A data user shall take practical steps to protect the personal data from any loss, misuse, modifications, unauthorized or accidental access or disclosure, alteration or destruction.
  • Data Security for Personal Data Processed Non-Electronically – A data user shall take practical steps to protect the personal data from any loss, misuse, modifications, unauthorized or accidental access or disclosure, alteration or destruction.
  • Retention Standard – A data user shall take practical steps to ensure that all personal data is destroyed or permanently deleted if it is no longer required for the purpose for which it was to be processed.
  • Data Integrity Standard – A data user shall take reasonable steps to ensure that the personal data is accurate, complete, not misleading and kept updated, having regard to the purpose, including any directly related purpose, for which the personal data was collected and processed further.

 

Typical Challenges in Understanding CCSK and CCSP: Technology Architecture

By Peter HJ van Eijk, Head Coach and Cloud Architect, ClubCloudComputing.com

As cloud computing becomes increasingly mainstream, more people are seeking cloud computing security certification. Because I teach prep courses for the two most popular certifications—the Certificate of Cloud Security Knowledge (CCSK), organized by the Cloud Security Alliance (CSA), and the Certified Cloud Security Professional (CCSP), organized by (ISC)2—I naturally see a wide variety of people as they work to pass these exams.

My students come from many different backgrounds, each bringing a unique set of experiences that color their understanding of the way the cloud is managed and controlled. To some, these varying backgrounds might seem a hindrance, but quite the opposite is true: secure cloud adoption is a team sport, and diverse backgrounds help reduce the risk to organizations.

Despite their varying backgrounds, they all face similar challenges. A common challenge I see in my courses, especially for less technical people, is understanding information technology architecture in general. It’s something they struggle with, and also something that can be a hurdle in passing the exam. So, what is technology architecture and why is it important?

A technology architecture primer

Cloud computing, in my opinion, does not have that much new technology. Most of the technology we have today was already in existence before the advent of cloud computing.

Today, a common characteristic of the technologies that are relevant for cloud computing is the fact that they facilitate resource pooling and interconnection of systems. Resource pooling is an essential characteristic of cloud computing, and a technology such as server virtualization helps implement that sharing. But that technology should also guarantee proper separation between otherwise independent cloud tenants.

Technologies such as APIs and federated identity management allow the cloud to be made up of a lot of collaborating independent companies. This helps create an IT supply chain. Your average company has hundreds of SaaS suppliers who in turn use hundreds of other cloud companies to help them deliver their services.

APIs also enable the essential cloud characteristic of automatic self-service provisioning. For example, through APIs we can set up auto-scaling services. Again, this is a tool in building the IT supply chain.
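
As one concrete illustration (a sketch, not a recommendation, assuming AWS’s boto3 SDK, configured credentials, and an existing Auto Scaling group named "web-asg"), a target-tracking auto-scaling policy can be created with a single API call:

import boto3

autoscaling = boto3.client("autoscaling")

# Self-service provisioning through an API: ask the provider to keep
# average CPU near 50 percent by adding or removing instances itself.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",       # assumed, pre-existing group
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)

Note that no human operator is in the loop once this call succeeds: the provider provisions capacity on its own, which is exactly the essential characteristic the paragraph above describes.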

Sharing requires caring

The new thing in cloud is sharing between independent companies: interconnecting different, independent providers and automating that interconnection. The whole technology architecture now spans the IT supply chain.

This has big governance and security implications. For example, when that collaboration or isolation fails, we cannot escalate these problems to our own CTO or CIO to resolve them. These problems are not confined to a single company anymore. They have to be resolved between companies.

The technical collaboration between companies will only work with proper contracts and management processes. These have to be set up in advance, instead of being figured out later, as is so common inside an enterprise. And the people whose job it is to review those contracts and set up the service management processes must therefore understand how the technology enables the collaboration.

That is why technology architecture is so important for less technical people. And that is also why it can be hard. The CCSK body of knowledge focuses specifically on how cloud technology architecture has an impact on cloud management, in particular on cloud risk management, and that makes it a great tool for building effective cloud adoption teams.

Peter van Eijk is one of the world’s most experienced cloud trainers. He has worked for 30+ years in research, with IT service providers and in IT consulting (University of Twente, AT&T Bell Labs, EDS, EUNet, Deloitte). In more than 100 training sessions, he has helped organizations align on security and speed up their cloud adoption. He is an authorized CSA CCSK and (ISC)2 CCSP trainer, and has written or contributed to several cloud training courses.

 

Bitglass Security Spotlight: US Government Breaches Abound

By Jacob Serpa, Product Manager, Bitglass

Here are the top cybersecurity headlines of recent weeks:

—Healthcare.gov breached
—US weapons systems contain cybersecurity gaps
—Over 35 million US voter records for sale
—National Guard faces ransomware attack

Healthcare.gov breached

75,000 people had their personal details stolen when hackers breached a government system that is frequently used to help individuals sign up for healthcare plans. The information contained in the system – Social Security numbers, for example – was highly sensitive. There are plans in motion to help those affected through services like credit protection.

US weapons systems contain cybersecurity gaps

A new report finds that American weapons systems contain cybersecurity vulnerabilities. The US Department of Defense is reported to have neglected best security practices in these systems. These security gaps are described as being “mission-critical.”

Over 35 million US voter records for sale

An online forum that is well known for selling information exposed in data breaches was recently found to boast more than 35 million US voter records. Exposed data includes names, phone numbers, physical addresses, and much more belonging to residents of 19 states. Unfortunately, the accuracy of these private details was confirmed by experts. As such, anyone can purchase this sensitive information whenever they please.

National Guard faces ransomware attack

In Indiana, the National Guard was recently the victim of a ransomware attack. A system housing the personal details of military personnel and civilians was compromised in the event. The good news is that the attack is not believed to be a part of a coordinated assault on the National Guard – the organization was supposedly not specifically targeted. Regardless, information was exposed.

To learn about cloud access security brokers (CASBs) and how they can protect your enterprise from ransomware, data leakage, misconfigurations, and more, download the Definitive Guide to CASBs.

Documentation of Distributed Ledger Technology and Blockchain Use

By Ashish Mehta, Co-chair, CSA Blockchain/Distributed Ledger Working Group

CSA’s newest white paper, Beyond Cryptocurrency: Nine Relevant Blockchain and Distributed Ledger Technology (DLT) Use Cases, aims to identify wider use cases for both technologies beyond cryptocurrency, the area with which both currently have the widest association.

In the process of outlining several use cases across discrete economic application sectors, we covered multiple industry verticals, as well as some use cases that span several verticals simultaneously.

For the purpose of this document, we considered a use case as relevant when it provides potential for any of the following:

—disruption of existing business models or processes;
—strong benefits for an organization, such as financial gains, faster transactions, or improved auditability;
—large and widespread application; and
—concepts that can be applied in real-world scenarios.

From concept to production environment, we also identified six separate stages of maturity—concept, proof of concept, prototype, pilot, pilot production, and production—to get a better assessment of how much work has been done within the scope and how much more work remains to be done.

Some of the industry verticals which we identified are traditional industries, such as shipping, airline ticketing, insurance, banking, supply chain, and real estate, all of which are ripe for disruption from a technological point of view.

For each use case, we also identified the expected benefits of DLT/blockchain adoption, the type of DLT, the use of private versus public blockchains, the infrastructure provider (CSP), and the type of service (IaaS, PaaS, SaaS). Other key features of the use case implementations, such as smart contracts and distributed databases, have also been outlined.

Future iterations of this document will provide updates on these use cases, depending on the level of progress seen over time. We hope this document will be a valuable reference to all key stakeholders in the blockchain/DLT ecosystem, as well as contribute to its maturity.

The success of Beyond Cryptocurrency: Nine Relevant Blockchain and Distributed Ledger Technology Use Cases is the result of the dedicated professionals within the Blockchain/Distributed Ledger Working Group, and it would not have been possible without the expertise, focus, and collaboration of the following working group members:

  • Nadia Diakun
  • Raul Documet
  • Vishal Dubey
  • Akshay Hundia
  • Sabri Khemissa
  • Nishanth Kumar Pathi
  • Michael Roza

Download Beyond Cryptocurrency now.