Cybersecurity Trends and Training Q&A

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Q: Why is it important for organizations and agencies to stay current in their cybersecurity training?

A: Technology changes at an accelerating pace. Moore’s Law, named for Intel co-founder Gordon Moore, observes that the power of a microchip doubles roughly every 18 months. Combined with the virtualization underpinning cloud computing, technology professionals now tackle ideas seen as science fiction 30 years ago. You carry more processing power in an Apple Watch than the computers that launched the Space Shuttle. Big Data, blockchain, the Internet of Things, AI and self-driving cars were inconceivable then. Now you see advertisements for NCAA trend analysis (Big Data), Bitcoin (blockchain), Alexa and smart homes (Internet of Things), Watson (AI) and Tesla (self-driving cars). Humans create all of this new technology; we’re flaw-ridden, and cybersecurity researchers find exploitable bugs every day.

Training for developers is important—they’re a small population, and they have a huge impact in limiting the types and quantities of flaws. Training for general users helps them avoid clicking malicious links, falling for phishing schemes and opening files of unknown pedigree. Staying current keeps users only a half step behind the latest exploitation schemes; everything turns over entirely too fast to rely on 10-year-old security knowledge. Ransomware wasn’t something we trained people on 15 years ago, even though the PC Cyborg virus demanded the first $378 ransom payment back in 1989. Now, one person clicking a link can lock up a company’s entire data store.

Q: Do you find that most organizations and agencies employ a workforce that is woefully undertrained in cybersecurity?

A: There are companies like KnowBe4 and PhishMe that specifically target under-trained employees. KnowBe4 calls it the Human Firewall—an accurate label when it works properly. In the cybersecurity world, we’ve said two things about users for years: you have to trust someone, and users are the weakest link in any computer architecture. We’ve made inroads limiting the damage by segmenting networks, limiting access privileges and strengthening authentication, but training is a moving target, and people forget or get careless.

Q: Is cybercrime on the upswing? Do you have statistics or studies to back this up?

A: The trends for cybercrime show increases in total occurrences. Part of that comes down to who—or what—is doing the work for the majority of the takeovers. In many cases, self-replicating viruses and bots do the work, and they don’t sleep. Some cybersecurity researchers find flaws and immediately publish their sample code; not contacting the product manufacturer first is irresponsible. The sample code gets weaponized, added to existing exploit development kits and loaded into malware such as ransomware. Ransomware encrypts all the files on a drive and rose from the 22nd- to the 5th-most-common malware variety between 2014 and 2016 (2017 Verizon Data Breach Investigations Report). Recently, the city of Atlanta was hit with a $51,000 demand.

Executives at a company the size and stature of Uber decided to pay a ransomware demand. They clearly didn’t have good backup and recovery processes, and we can’t expect the 718,000 other victims in 2016 to have done much better. Uber, in turn, funded the next round of ransomware development. According to Symantec, cyber criminals saw per-victim value increase 266 percent from 2015 to 2017, and they continue their efforts. There are over 50 families of ransomware alone—families, not applications—and cracking a single variant in a family doesn’t necessarily eliminate that family’s effectiveness. An effort by Europol and several cybersecurity vendors to inform users and collect decryption keys, the No More Ransom project (nomoreransom.org), started last year.

Q: Which organizations are currently most targeted for cybercrime, and why?

A: A 1950s New Yorker piece quoted Willie Sutton answering the question of why he robbed banks. His response was straightforward: “I rob banks because that’s where the money is.” That trend has held true throughout history—land during feudal times, stagecoaches and trains during the Old West, and cybercrime today.

So where is the proverbial money in today’s cloud-connected, on-demand, app-everywhere world?

The industry most people associate with cybercrime and fraud is credit cards and banking—the Payment Card Industry (PCI). It worked hard to lock everything down, starting with the Payment Card Industry Data Security Standard (PCI-DSS) in December 2004. The rationale was simple: rampant fraud in the late 1990s. The card issuers lost money every time someone called about a bad charge.

Credit card companies have steadily improved to the point where your bank tracks your location and habits and will proactively block suspicious transactions, calling or sending a text message as an additional authorization step. I’ve seen it fail miserably (a friend of mine was denied at the local Kroger after using the same card at the same store weekly for the past 18 months) and work stupendously (a $1 Burger King charge in Mexico while I was buying snacks at the Fort Lauderdale airport). Chip cards are also reducing fraud, since they prove to the card processors that you hold the original card and not a fake copy. The Payment Card Industry does such a good job now that bulk credit card numbers on the Dark Web cost pennies per thousand.

That’s not the case for the healthcare industry, however. Protected Health Information (PHI) continues to be the most profitable data, running in the $0.50 to $7 range per record—down significantly from the $150 range less than five years ago. Extensive health histories provide a treasure chest of fraud possibilities and are now sold alongside supporting data such as birth dates, Social Security numbers and driver’s license details. Knowing a patient’s previous diagnosis of high cholesterol makes fake claims for heart procedures more plausible. CIPP Guide pointed out how common abandoned medical records were 10 years ago. Doctors place a premium on their time, but HIPAA compliance requirements for Electronic Health Records (EHR), and the ease with which electronic information can be destroyed, eliminate that sort of abandonment. They do open up a new situation, though: a patient may actually want their previous health history to continue with a new practice. At that point, people must take personal responsibility and keep their own EHR.

Let’s investigate where the money isn’t … sort of. Cyberattacks were a significant part of the Russian attacks on Georgia in 2008 and on Ukraine as recently as 2017. One of the first nation-state-attributed cyberweapons, Stuxnet, set back the Iranian nuclear program in 2010 by attacking the industrial control equipment—Supervisory Control and Data Acquisition (SCADA) systems—responsible for uranium enrichment centrifuges. Russian government interference in US elections remains a congressional topic. And early in 2018, the city of Atlanta faced ransomware demands. While governments typically have big budgets, getting at that money is more difficult.

Lastly, the area I’m most concerned about is transportation—money is replaceable; lives are not. More “intelligent” features are making their way into mass production, from braking assist and lane departure warnings to auto-pilot. Two researchers demonstrated a remote automobile attack at the DEF CON hacking conference in 2015, and the conference introduced a Car Hacking Village where attendees could try the exploits themselves. Since then, self-driving vehicles, including cars and semi-trucks, have been under development by Tesla, Uber and NVIDIA. Uber recently suspended self-driving car tests after a pedestrian accident in Arizona on March 19, 2018.

The possibility of a driverless future, with less road rage and fewer traffic fatalities, sounds promising. The fact of the matter is that these systems use external connections to download updates, and history shows remote updates to be a vulnerability. Flaws in automobile immobilizers’ remote disablement features were demonstrated in 2016. The ability to stop a car suddenly is already part of police controls for theft prevention and recovery, and Hollywood TV shows dramatize sudden acceleration. The prospects of ransom or terrorism are frightening at 60 MPH.

Q: How bad is cybercrime expected to be in the future?

A: Cybercrime success in the future depends on the diligence of everyone involved. Punishment for unacceptable behavior has been documented since biblical times, and deterrence depends on risk versus reward, much like the drug trade. The main difference surrounds education—hacking requires access to computers and coding skills. In the US, the Bill of Rights and Constitution keep American hackers from being executed, with the exception of treason; life in prison or heavy fines are the punishments of choice. If you don’t have money, heavy fines don’t look daunting, while a serious prison term carries a bit more weight—but that’s not how most US laws currently read. Kevin Mitnick, one of the best-known hackers, received a five-year sentence after breaking into several corporations’ networks, including Pacific Bell’s voice mail system; the main charge that got him jail time was wire fraud.

Folks outside the US, especially organized crime in the poorer nations of Africa and Asia, already show a great deal of interest in cybercrime—mostly phishing schemes—and Eastern Europe has several well-known hacking groups. Their tools are getting better and easier to use. That’s a double-edged sword: less knowledgeable users will probably make implementation mistakes that allow projects like No More Ransom to work.

Cybersecurity protections will continue evolving. Organizations within the PCI are now asking for continuous access to your location data so they can correlate your spending with your charge card and ATM usage—the next logical evolution in their fraud detection. Until you forget your phone. At that point, we need to reconsider where the “money” is and start examining what can be done with your location information and other low-hanging fruit. If criminals know you’re not in your residence, will crime statistics show a spike in burglaries? Will social engineers and phishing scams target you through your most susceptible device—email scams on your tablet, text scams on your phone and click fraud on your laptop?

Q: Who are these cyber criminals and where do they come from?

A: In the past, we dealt largely with individual hackers—hacktivists and folks who wanted to see how far they could get in and what they could do once inside. That has since shifted to organized crime, with the bulk of cyber criminals motivated by money and by how quickly they can turn whatever they find into cash. Most of the latest attacks are external, financially focused and automated to increase return on investment.

Q: A lot is now being discussed about cyber criminals holding the data of individuals and organizations hostage. How is this possible and what can be done to prevent it?

A: The data hostage-taking refers to a type of malware called ransomware, so named because an infected system will scramble all the stored data using encryption and demand payment for release of the decryption key. Most anti-virus products will catch all but the latest zero-day attacks (those not yet discovered by cybersecurity professionals).

Keep your cybersecurity software up to date. Likewise, keep ALL your systems patched—most operating systems will install patches automatically, and unlike the old days, at least for desktop systems, everything won’t crash. Mobile device users are slightly less accepting of auto-updates, for fear of favorite apps no longer working or battery-draining changes; keep in mind, those favorite apps could be part of the reason for the patch. Lastly, invest in some sort of backup software. Plenty of choices will automatically save all of your files—Apple has iCloud, Microsoft has OneDrive, and you could use Google Drive or Amazon’s S3 cloud service. There are plenty of third-party solution providers as well, including Carbonite, CrashPlan and others. Make the choice that best fits your lifestyle—if you own all Apple devices, iCloud is probably it. And as mentioned on nomoreransom.org, paying the ransom equates to venture-funding the next round of attacks.

Q: Besides cyber blackmail, are there other new schemes in cybercrime that organizations need to be aware of?

A: An emerging scheme is cryptojacking—stealing compute cycles from people’s web browsers. It pairs cryptocurrency mining with a “free” component—a game or pornography, depending on the audience—whose advertising revenue stream is augmented or replaced by hidden code on the page that uses your computer to mine cryptocurrency for the operator. My kids were playing a tank game that crashed my system from overheating. Bitcoin thefts a couple of years ago (see Mt. Gox, for instance) were popular because there was little risk of getting caught. With cryptojacking, people assume it’s just a poorly written web page and restart their browser or computer. You never get something for nothing.

These examples highlight the negatives and shouldn’t all be seen as daunting. The technology behind Bitcoin opens up a new world of possibilities for worldwide money transactions. Ripple, an “altcoin” company using the same blockchain technology, based its whole business model on efficiently and effectively moving money between countries in Southeast Asia. IBM commercials tout blockchain’s advantages for our food supply and for eliminating “blood diamonds.” Even with all the accident reports on driverless cars, autonomous vehicles have the potential to save millions of lives by eliminating driving under the influence and distracted driving. EHRs and smart watches, for instance, give doctors continuous monitoring of vital signs, looking for day-to-day abnormalities rather than relying on just the annual patient screening. All of this was science fiction or unfathomable even 20 years ago. As a society, we need to be aware of and diligent about criminal activity, but awareness shouldn’t scare the world into a techno-free cave.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Cybersecurity Certifications That Make a Difference

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

The security industry is understaffed. By a lot. Estimates by the Ponemon Institute suggest as many as 50 percent of cybersecurity positions go unfilled, 70 percent of existing IT security organizations are understaffed and 58 percent say it’s difficult to retain qualified candidates. ESG’s annual global survey of IT and cybersecurity professionals, most recently in 2017, has identified cybersecurity as the biggest skills shortage for at least six years running. It’s a fast-moving field with hackers’ crosshairs constantly targeting companies; mess up and you’re on the front page of the Wall Street Journal. With all of that pressure and demand, security is also one of the best-paying segments of IT.

Cybersecurity has its own vernacular, with a set of acronyms and ideas far outside even its information technology brethren. The gold standard for a security professional is the Certified Information Systems Security Professional (CISSP) from ISC2 (isc2.org). The requirements have grown increasingly strict since I tested in 2001—five-year industry minimums and attestation by a certified professional give the credential even more heft. There is also the Systems Security Certified Practitioner (SSCP), which reduces the experience and sponsorship minimums and would be appropriate for someone new to the field.

Adding to the professional shortage are new IT delivery methods, a la cloud computing. Amazon Web Services is the giant in the space, offering several certifications for cloud architecture and implementation; Microsoft and Google round out the top three. These, too, are hot commodities, as cloud is a relatively nascent industry and not well understood. Layer security onto the cloud platform, and you find certifications such as the Cloud Security Alliance’s Certificate of Cloud Security Knowledge (CCSK) and, again, ISC2’s Certified Cloud Security Professional (CCSP). In 2017, Certification Magazine listed cloud security certifications among those delivering the highest salary increases available to an IT professional.

One caveat to all of the excitement over understaffing: recruiters, headhunters and hiring managers. Position requirements are sometimes outlandish or poorly vetted, such as the requisition asking for 10 years of cloud and 20 years of security experience—Amazon Web Services only started in 2006, and Microsoft Azure and Google Cloud Platform arrived later still, initially seen as cannibalizing existing revenue streams. Even five years of cloud industry experience is a lifetime, and the industry moves so fast that AWS’s Certified Solutions Architect (AWS-ASA) requires re-certification every two years vs. the standard three for the rest of IT. AWS, too, has a security exam recently out of beta, the AWS Certified Security Specialty, though it requires one of the associate certifications first.

If you have the appetite for learning, add privacy to the mix. The number of industry-vertical regulations (healthcare’s HIPAA, the Payment Card Industry’s PCI-DSS, finance’s FINRA/SOX, etc.) and region-specific requirements (the EU’s GDPR) has the International Association of Privacy Professionals (IAPP) offering eight Certified Information Privacy Professional (CIPP) certifications. For an IT professional in the US, the Certified Information Privacy Technologist (CIPT) and CIPP/US are probably the most attainable and attractive.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Microsoft Workplace Join Part 2: Defusing the Security Timebomb

By Chris Higgins, Technical Support Engineer, Bitglass

In my last post, I introduced Microsoft Workplace Join. It’s a really convenient feature that can automatically log users in to corporate accounts from any device of their choosing. However, this approach essentially eliminates all sense of security.

So, if you’re a sane and rational security professional (or even if you’re not), you clearly want to disable this feature immediately. Your options?

Option #1 (Most Secure, Most Convenient): Completely disable Intune Mobile Device Management for O365 and then disable Workplace Join

As Workplace Join can create serious security headaches, one of the most secure and most convenient options is to disable Intune MDM for Office 365 and then disable Workplace Join completely. Obviously, these should quickly be replaced with other, less invasive security tools. In particular, organizations should consider agentless security for BYOD and mobile in order to protect data and preserve user privacy.

Option #2 (Least Convenient): Use Intune policies to block all personal devices

Microsoft does not provide a method of limiting this feature that does not rely on Intune policies. Effectively, you must either not use Intune at all or pay to block unwanted access. The latter approach means blocking all BYO devices (reducing employee flexibility and efficiency) and introduces the complexity of deploying software to every device, raising additional costs.

Option #3 (Least Convenient and Least Secure): Whack-a-mole manual policing of new device registrations

As an administrator in Azure AD, you can delete or disable a registered device, which only prevents automated logins from that device—and it has to be done manually every time a user links a new endpoint. Unfortunately, deactivation and deletion in Azure do not remove the “Join Workplace or School” link from the control panel of the machine in question. Additionally, deactivation still allows the user to log in manually, as does deletion—neither action prevents the user from re-enrolling the same device. In other words, pursuing this route means playing an endless game of deactivation-and-deletion whack-a-mole.

Firmware Integrity in the Cloud Data Center

By John Yeoh, Research Director/Americas, Cloud Security Alliance

As valued members, we wanted you to be among the first to hear about the newest report from CSA—Firmware Integrity in the Cloud Data Center—in which key cloud providers and data-center development stakeholders share their thoughts on building cloud infrastructure using secure servers, enabling customers to trust the cloud providers’ infrastructure at the hardware/firmware level.

Authored by the Cloud Security Industry Summit (CSIS) Technical Working Group, the position paper is aimed at hardware and firmware manufacturers. It identifies gaps in the industry that make it difficult to meet the recently published NIST SP 800-193 requirements with “standard” general-purpose servers, and it offers ways to build servers designed to meet the NIST requirements (including calling out missing technology where applicable) and enable cloud providers to increase trust in commodity hardware. The paper also suggests additional requirements that could further strengthen the level of security of servers.

Among the gaps that CSIS singles out for immediate attention by hardware manufacturers are:

  1. First-instruction integrity – The ability to ensure integrity of the first instruction (the first code or data loaded from mutable non-volatile media) in a way that is verifiable by the cloud provider and not just by the manufacturer.
  2. Chain-of-Trust for peripherals – The ability to leverage the host root of trust and other roots of trust to create a chain of trust to peripherals (e.g. for PCIe devices or other symbiont devices).
  3. Automatable Recovery – The ability to perform automated recovery back to a known boot-time state upon detection of corrupted firmware (after initial boot).

With the increasing sophistication of attackers, including nation-states, we think it’s critical to build a new, more secure generation of servers. The hardware/firmware industry must do a better job of building firmware with high code quality and minimal potential for vulnerabilities at the firmware level. It’s vital that supply-chain security can be verified at every step along the way, from component to system to solution. In CSIS’s opinion, these requirements can be met without cloud vendors having to design and build specialized hardware—through standardized commodity hardware instead.

We hope you will find this informative and that it will lead to further discussions within your own organization about the challenges involved in the future of cloud security.

Read the position paper.

Continuous Monitoring in the Cloud

By Michael Pitcher, Vice President, Technical Cyber Services, Coalfire Federal

I recently spoke at the Cloud Security Alliance’s Federal Summit on the topic “Continuous Monitoring / Continuous Diagnostics and Mitigation (CDM) Concepts in the Cloud.” As government has moved and will continue to move to the cloud, it is becoming increasingly important to ensure continuous monitoring goals are met in this environment. Specifically, cloud assets can be highly dynamic and lack persistence, so traditional continuous monitoring methods that work for on-premises solutions don’t always translate to the cloud.

Coalfire has been involved with implementing CDM for various agencies and is the largest Third Party Assessment Organization (3PAO), having completed more FedRAMP authorizations than anyone else—uniquely positioning us to help customers think through this challenge. These concepts and challenges are not unique to the government agencies in the CDM program; they also translate to other government and DoD communities as well as commercial entities.

To review, Phase 1 of the Department of Homeland Security (DHS) CDM program focused largely on static assets and for the most part excluded the cloud. It was centered around building and knowing an inventory, which could then be enrolled in ongoing scanning, as frequently as every 72 hours. The objective is to determine if assets are authorized to be on the network, are being managed, and if they have software installed that is vulnerable and/or misconfigured. As the cloud becomes a part of the next round of CDM, it is important to understand how the approach to these objectives needs to adapt.

Cloud services enable resources to be allocated, consumed, and de-allocated on the fly to meet peak demands. Just about any system has times when more resources are required than others, and the cloud allows compute, storage, and network resources to scale with that demand. As an example, within Coalfire we have a Security Parsing Tool (Sec-P) that spins up compute resources to process vulnerability assessment files dropped into a cloud storage bucket. The compute resources exist only for a few seconds while the file is processed, and then they are torn down. Examples such as this, as well as serverless architectures, challenge traditional continuous monitoring approaches.
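A pattern like Sec-P can be sketched as a short-lived function triggered by a storage event. The sketch below is hypothetical—the event shape mirrors S3 bucket notifications, and `parse_report` and the injected `fetch` callable stand in for Coalfire's actual tooling, which isn't public:

```python
import json

def parse_report(raw_bytes):
    """Hypothetical parser: count findings in a JSON vulnerability report."""
    report = json.loads(raw_bytes)
    return len(report.get("findings", []))

def handler(event, fetch):
    # 'fetch' stands in for a cloud-storage download call (injected so the
    # sketch stays self-contained). For each dropped file in the event,
    # download it, parse it, and return the finding counts.
    counts = {}
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        counts[key] = parse_report(fetch(bucket, key))
    return counts
```

The point of the pattern is that the compute exists only for the duration of `handler`—there is no persistent host for a scanner to ever see.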

However, potential solutions are out there, including:

  • Adopting built-in services and third-party tools
  • Deploying agents
  • Leveraging Infrastructure as Code (IaC) review
  • Using sampling for validation
  • Developing a custom approach

Adopting built-in services and third-party tools

Dynamic cloud environments highlight the inadequacies of performing active and passive scanning to build inventories—assets may simply come and go before a traditional scan tool can assess them. Each of the major cloud service providers (CSPs), and many of the smaller ones, offers inventory management services alongside services that monitor resource changes: examples include AWS Systems Manager Inventory and CloudWatch, Microsoft’s Azure Resource Manager and Activity Log, and Google’s Cloud Asset Inventory and Cloud Audit Logs. There are also quality third-party applications, some of them already FedRAMP authorized. Regardless of the service or tool used, the key is interfacing it with the integration layer of an existing CDM or continuous monitoring solution. This can occur via API calls to and from the solution, which the current CDM program requirements make possible.

Deploying Agents

For resources that will have some degree of persistence, agents are a great way to perform continuous monitoring. Agents can check in with a master to maintain the inventory and perform security checks as soon as a resource is spun up, instead of waiting for a sweeping scan, and they can be installed as part of the build process or baked into a deployment image. Interfacing with the master node that controls the agents and comparing its records to the inventory is a great way to perform cloud-based “rogue” asset detection, a requirement under CDM. On-premises, this concept is about finding unauthorized assets, such as a personal laptop plugged into an open network port; in the cloud, it is about finding assets that have drifted from the approved configuration and fallen out of compliance with security requirements.
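The comparison itself is simple set logic. A minimal sketch, assuming you can pull asset identifiers both from the agent master and from the CSP inventory API (the function and variable names here are illustrative, not any vendor's API):

```python
def find_unmanaged(csp_inventory, agent_inventory):
    """Assets the cloud provider knows about but no agent reports on --
    candidates for 'rogue' or drifted resources under CDM."""
    return sorted(set(csp_inventory) - set(agent_inventory))

def find_stale_agents(csp_inventory, agent_inventory):
    """Agents still checking in for assets the provider no longer lists --
    usually torn-down resources whose records need cleanup."""
    return sorted(set(agent_inventory) - set(csp_inventory))
```

Anything in the first list has drifted outside the agent-managed fleet and needs investigation; the second list is inventory hygiene.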

For resources such as our Coalfire Sec-P tool from the earlier example, which exists as code more than 90 percent of the time, we need to think differently. An agent approach may not work, as the compute resources may not exist long enough to check in with the master, let alone perform any security checks.

Infrastructure as Code Review

IaC is used to deploy and configure cloud resources such as compute, storage, and networking—basically a set of templates that “programs” the infrastructure. It is not a new concept, but the speed at which cloud environments change is bringing IaC into the security spotlight.

Now we need to consider how to assess the code that builds and configures the resources. There are many tools and approaches for this; application security is nothing new, but it must be re-examined as part of performing continuous monitoring on infrastructure. The good news is that IaC uses structured formats and common languages such as XML, JSON, and YAML, so it is possible to use tools or even custom scripts to perform the review. The structured format also allows automated, ongoing monitoring of configurations, even when the resources exist only as code and are not “living.” It is also important to consider what software spins up with the resources, as the packages leveraged must be up-to-date versions without known vulnerabilities. Code should undergo a security review whenever it changes, and the approved code can then be continuously monitored.
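As a toy illustration of such a review, the custom script below scans a JSON-style template for security-group rules open to the entire internet. The template schema is invented for the example—real checks would target a specific format such as CloudFormation or Terraform plan output:

```python
import json

def open_ingress_rules(template_json):
    """Flag ingress rules whose CIDR is 0.0.0.0/0 (open to the world)."""
    template = json.loads(template_json)
    findings = []
    for sg in template.get("security_groups", []):
        for rule in sg.get("ingress", []):
            if rule.get("cidr") == "0.0.0.0/0":
                findings.append(
                    f'{sg["name"]}: port {rule.get("port")} open to the world'
                )
    return findings
```

Because the check runs against the template text, it can gate a deployment pipeline before any resource ever exists.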

Setting asset expiry is one way to enforce CDM principles in a heavy DevOps environment that leverages IaC. The goal of CDM is to assess assets every 72 hours, so we can set them to expire (get torn down, and therefore require a rebuild) within that timeframe, guaranteeing they are living on fresh infrastructure built from approved code.
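The expiry rule reduces to a simple age check run by whatever orchestration tears assets down (a sketch; the 72-hour window comes from CDM, while the function and timestamp handling are assumptions):

```python
from datetime import datetime, timedelta

MAX_AGE = timedelta(hours=72)  # CDM's assessment window

def needs_rebuild(created_at, now):
    """True once an asset has outlived the assessment window and must be
    torn down and rebuilt from approved code."""
    return now - created_at >= MAX_AGE
```

Run on a schedule, this guarantees nothing in the fleet is older than the window CDM expects between assessments.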

Sampling

Sampling is used in conjunction with the methods above. In a dynamic environment where the total number of assets is always changing, there should be a solid core of the fleet that can be scanned via traditional active scanning; we just need to accept that we will not be able to scan the complete inventory. There should also be far fewer profiles, or “gold images,” than total assets. The idea is that if you can scan at least 25 percent of each profile in any given cycle, there is a good chance you will find the misconfigurations and vulnerabilities that exist on all resources of the same profile, and identify assets drifting from the fleet. This is enough to catch systemic issues such as bad deployment code or resources spun up with out-of-date software. If resources within a profile show a large discrepancy from the others in that same profile, that is a sign of DevOps or configuration management issues that need to be addressed. We are not giving up on the concept of a complete inventory, just accepting that there really is no such thing.
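The 25-percent-per-profile idea can be sketched as grouping assets by their gold image and taking a quarter of each group, never less than one asset (deterministic selection here for testability; a real scan rotation would randomize):

```python
from collections import defaultdict
from math import ceil

def sample_by_profile(assets, fraction=0.25):
    """assets: iterable of (asset_id, profile) pairs.
    Returns the asset ids to scan this cycle: roughly `fraction` of each
    profile's group, with at least one asset per profile."""
    groups = defaultdict(list)
    for asset_id, profile in assets:
        groups[profile].append(asset_id)
    selected = []
    for profile, ids in groups.items():
        ids.sort()  # deterministic for the example; randomize in practice
        selected.extend(ids[: max(1, ceil(len(ids) * fraction))])
    return sorted(selected)
```

Since every profile contributes at least one asset, even a rarely used gold image gets assessed each cycle.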

Building IaC assets specifically for the purpose of security testing is also a great option. These assets can have persistence and be “enrolled” into a continuous monitoring solution to report vulnerabilities in a similar manner to on-premises devices, via a dashboard or otherwise. The total number of vulnerabilities in the fleet is then the quantity found on these sample assets, multiplied by the number of matching assets living in the fleet—a quantity we can get from the CSP services or third-party tools noted above.
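That extrapolation is a per-profile multiplication, summed across the fleet (a sketch with made-up numbers; the per-profile counts would come from the sample scans and the CSP inventory APIs):

```python
def estimate_fleet_vulns(sample_vulns, fleet_counts):
    """sample_vulns: vulnerabilities found on the sample asset, per profile.
    fleet_counts: live assets per profile, from CSP inventory services.
    Returns the estimated total vulnerabilities across the fleet."""
    return sum(count * fleet_counts.get(profile, 0)
               for profile, count in sample_vulns.items())
```

For example, 3 findings on the "web" sample with 10 live web assets, plus 1 finding on the "db" sample with 4 live databases, estimates 34 fleet-wide.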

Custom Approaches

There are many different CSPs out there for the endless cloud-based possibilities, and each CSP has various services and tools available, both from them and for them. What I have reviewed are high-level concepts; each customer will need to dial in the specifics based on their use cases and objectives.

Microsoft Workplace Join Part 1: The Security Timebomb

By Chris Higgins, Technical Support Engineer, Bitglass

It’s no secret that enterprise users wish to access work data and applications from a mix of both corporate and personal devices. In order to help facilitate this mix of devices, Microsoft has introduced a new feature called Workplace Join into Azure Active Directory, Microsoft’s cloud-based directory and identity service. While the intent of streamlining user access to work-related data is helpful, the delivery of this feature has resulted in a large security gap—one that can’t easily be disabled. This is another example of an app vendor optimizing for user experience ahead of appropriate controls and protections—demonstrating the basis for the cloud app shared responsibility model and the need for third-party security solutions like cloud access security brokers (CASBs).

According to Microsoft, “…by using Workplace Join, information workers can join their personal devices with their company’s workplace computers to access company resources and services. When you join your personal device to your workplace, it becomes a known device and provides seamless second factor authentication and Single Sign-On to workplace resources and applications.”

How does it work?

When a user links their Windows machine to “Access Work or School,” the machine is registered in Azure AD, and a master OAuth token is created for use between all Microsoft client applications as well as the Edge and Internet Explorer browsers. Subsequent login attempts to any Office resource will cause the application to gather an access token and log in the user without ever prompting for credentials. The idea behind this process is that logging in to Windows is enough to identify a user and give them unrestricted access to all Office 365 resources.

In plain language, this means that once you log in to Office 365 from any device (Grandma’s PC, a hotel kiosk, etc.), you, and anyone else using that device, are logged in to Office 365 automatically from then on.

Why is this such a big security issue?

Workplace Join undoes all of your organization’s hard work establishing strong identity processes and procedures—all so that an employee can access corporate data from Grandma’s PC (without entering credentials). Since Grandma only has three grandkids and one cat, it likely won’t take a sophisticated robot to guess her password—exposing corporate data to anyone who accesses her machine. Making matters worse, user accounts on Windows 10 don’t even require passwords, making it even easier for data to be exfiltrated from such unmanaged devices.

Workplace Join is enabled by default for all O365 tenants. Want to turn it off? You’ll have to wait for the next blog post to sort that out.

In the meantime, download the Definitive Guide to CASBs to learn how cloud access security brokers can help secure your sensitive data.

Cloud Security Trailing Cloud App Adoption in 2018

By Jacob Serpa, Product Marketing Manager, Bitglass

In recent years, the cloud has attracted countless organizations with its promises of increased productivity, improved collaboration, and decreased IT overhead. As more and more companies migrate, more and more cloud-based tools arise.

In its fourth cloud adoption report, Bitglass reveals the state of cloud in 2018. Unsurprisingly, organizations are adopting more cloud-based solutions than ever before. However, their use of key cloud security tools is lacking. Read on to learn more.

The Single Sign-On Problem

Single sign-on (SSO) is a basic, but critical security tool that authenticates users across cloud applications by requiring them to sign in to a single portal. Unfortunately, a mere 25 percent of organizations are using an SSO solution today. When compared to the 81 percent of companies that are using the cloud, it becomes readily apparent that there is a disparity between cloud usage and cloud security usage. This is a big problem.

The Threat of Data Leakage

While using the cloud is not inherently more risky than the traditional method of conducting business, it does lead to different threats that must be addressed in appropriate fashions. As adoption of cloud-based tools continues to grow, organizations must deploy cloud-first security solutions in order to defend against modern-day threats. While SSO is one such tool that is currently underutilized, other relevant security capabilities include shadow IT discovery, data loss prevention (DLP), contextual access control, cloud encryption, malware detection, and more. Failure to use these tools can prove fatal to any enterprise in the cloud.

Microsoft Office 365 vs. Google’s G Suite

Office 365 and G Suite are the leading cloud productivity suites. They each offer a variety of tools that can help organizations improve their operations. Since Bitglass’ 2016 report, Office 365 has been deployed more frequently than G Suite. Interestingly, this year, O365 has extended its lead considerably. While roughly 56 percent of organizations now use Microsoft’s offering, about 25 percent are using Google’s. The fact that Office 365 has achieved more than two times as many deployments as G Suite highlights Microsoft’s success in positioning its product as the solution of choice for the enterprise.

The Rise of AWS

Through infrastructure as a service (IaaS), organizations are able to avoid making massive investments in IT infrastructure. Instead, they can leverage IaaS providers like Microsoft, Amazon, and Google in order to achieve low-cost, scalable infrastructure. In this year’s cloud adoption report, every analyzed industry exhibited adoption of Amazon Web Services (AWS), the leading IaaS solution. While the technology vertical led the way at 21.5 percent adoption, 13.8 percent of all organizations were shown to use AWS.

To gain more information about the state of cloud in 2018, download Bitglass’ report, Cloud Adoption: 2018 War.

Five Cloud Migration Mistakes That Will Sink a Business

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Today, with the growing popularity of cloud computing, there exists a wealth of resources for companies that are considering—or are in the process of—migrating their data to the cloud. From checklists to best practices, the Internet teems with advice. But what about the things you shouldn’t be doing? The best-laid plans of mice and men often go awry, and so, too, will your cloud migration unless you manage to avoid these common cloud mistakes:

“The Cloud Service Provider (CSP) will do everything.”

Cloud computing offers significant advantages: cost, scalability, on-demand service and near-infinite bandwidth. And the processes, procedures, and day-to-day activities a CSP delivers provide every cloud customer, regardless of size, with the capabilities of a Fortune 50 IT staff. But nothing is idiot-proof. CSPs aren’t responsible for everything; under the shared responsibility model, they are only in charge of the parts they can control, and they expect customers to own the rest of the risk mitigation.

Advice: Take the time upfront to read the best practices of the cloud you’re deploying to. Follow cloud design patterns and understand your responsibilities–don’t trust that your cloud service provider will take care of everything. Remember, it is a shared responsibility model.

“Cryptography is the panacea; data-in-motion, data-at-rest and data-in-use protection works the same in the cloud.”

Cybersecurity professionals refer to the triad balance: Confidentiality, Integrity and Availability. Increasing one decreases the other two. In the cloud, availability and integrity are built into every service and even guaranteed with Service Level Agreements (SLAs). The last bullet in the confidentiality chamber involves cryptography, mathematically adjusting information to make it unreadable without the appropriate key. However, cryptography works differently in the cloud. Customers expect service offerings to work together, and so the CSP provides the “80/20” security with less effort (i.e., CSP-managed keys).

Advice: Expect that while you must use encryption for the cloud, there will be a learning curve. Take the time to read through the FAQs and understand what threats each architectural option really opens you up to.

“My cloud service provider’s default authentication is good enough.”

One of cloud’s tenets is self-service. CSPs have a duty to protect not just you, but themselves and everyone else virtualized on their environment. One of the earliest self-service aspects is authentication—the act of proving you are who you say you are. There are three ways to accomplish this proof: 1) reply with something you know (e.g., a password); 2) provide something you have (e.g., a key or token); or 3) produce something you are (e.g., a fingerprint or retina scan). These are all commonplace activities. For example, most enterprise systems require a password with a complexity factor (upper/lower/character/number), and even banks now require customers to enter additional one-time codes received as text messages. These techniques make authentication stronger and more reliable. Multi-factor authentication uses more than one of them.
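
As an illustration of a “something you have” factor, the HMAC-based one-time password construction standardized in RFC 4226 (the basis of most hardware-token and authenticator-app codes) fits in a few lines of Python. This is the generic standard, not any particular CSP’s implementation:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vector: ASCII secret "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # 755224
```

Time-based codes (TOTP) use the same construction with the counter derived from the current 30-second interval.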

Advice: Cloud Service Providers offer numerous authentication upgrades, including some sort of multi-factor authentication option—use them.

“Lift and shift is the clear path to cloud migration.”

Cloud cost advantages evaporate quickly due to poor strategic decisions or architectural choices. A lift-and-shift approach to cloud migration is one where existing virtualized images or snapshots of current in-house systems are simply transformed and uploaded onto a Cloud Service Provider’s system. If all you want is to run the exact same system you ran in-house, only rented on an IaaS platform, it will often cost less to buy the hardware as a capital asset and depreciate it over three years. The lift-and-shift approach ignores cloud’s elastic ability to scale up and down on demand, and doesn’t use rigorously tested cloud design patterns that result in resiliency and security. There may be systems within a design that are appropriate to copy exactly; however, placing an entire enterprise architecture directly onto a CSP would be costly and inefficient.

Advice: Invest the time up front to redesign your architecture for the cloud, and you will benefit greatly.

“Of course, we’re compliant.”

Enterprise risk and compliance departments have decades of frameworks, documentation and mitigation techniques. Cloud-specific control frameworks are less than five years old, but they are solid and become better understood each year.

However, adopting the cloud will need special attention, especially when it comes to non-enterprise risks such as an economic denial of service (credit card over-the-limit), third-party managed encryption keys that potentially give them access to your data (warrants/eDiscovery) or compromised root administrator account responsibilities (CSP shutting down your account and forcing physical verification for reinstatement).

Advice: These items don’t have direct analogs in the enterprise risk universe. Instead, the understandings must expand, especially in highly regulated industries. Don’t face massive fines, operational downtime or reputational losses by not paying attention to a widened risk environment.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Cybersecurity and Privacy Certification from the Ground Up

By Daniele Catteddu, CTO, Cloud Security Alliance

The European Cybersecurity Act, proposed in 2017 by the European Commission, is the most recent of several policy documents adopted and/or proposed by governments around the world, each with the intent (among other objectives) to bring clarity to cybersecurity certifications for various products and services.

The reason why cybersecurity, and most recently privacy, certifications are so important is pretty obvious: They represent a vehicle of trust and serve the purpose of providing assurance about the level of cybersecurity a solution could provide. They represent, at least in theory, a simple mechanism through which organizations and individuals can make quick, risk-based decisions without the need to fully understand the technical specifications of the service or product they are purchasing.

What’s in a certification?

Most of us struggle to keep pace with technological innovations, and so we often find ourselves buying services and products without sufficient levels of education and awareness of the potential side effects these technologies can bring. We don’t fully understand the possible implications of adopting a new service, and sometimes we don’t even ask ourselves the most basic questions about the inherent risks of certain technologies.

In this landscape, certifications, compliance audits, trust marks and seals are mechanisms that help improve market conditions by providing a high-level representation of the level of cybersecurity a solution could offer.

Certifications are typically performed by a trusted third party (an auditor or a lab) who evaluates and assesses a solution against a set of requirements and criteria that are in turn part of a set of standards, best practices, or regulations. In the case of a positive assessment, the evaluator issues a certification or statement of compliance that is typically valid for a set length of time.

One of the problems with certifications under the current market condition is that they have a tendency to proliferate, which is to say that for the same product or service more than one certification exists. The example of cloud services is pretty illustrative of this issue. More than 20 different schemes exist to certify the level of security of cloud services, ranging from international standards to national accreditation systems to sectorial attestation of compliance.

Such a proliferation of certifications can serve to produce the exact opposite result that a certification was built for. Rather than supporting and streamlining the decision-making process, they could create confusion, and rather than increasing trust, they favor uncertainty. It should be noted, however, that such a proliferation isn’t always a bad thing. Sometimes, it’s the result of the need to accommodate important nuances of various security requirements.

Crafting the ideal certification

CSA has been a leader in cloud assurance, transparency and compliance for many years now, supporting the effort to improve the certification landscape. Our goal has been—and still is—to make the cloud and IoT technology environment more secure, transparent, trustworthy, effective and efficient by developing innovative solutions for compliance and certification.

It’s in this context that we are surveying our community and the market at-large to understand what both subject matter experts and laypersons see as the essential features and characteristics of the ideal certification scheme or meta-framework.

Our call to action?

Tell us—in a paragraph, a sentence or a word—what you think a cybersecurity and privacy certification should look like. Tell us what the scope should be (security/privacy, product /processes /people, cloud/IoT, global/regional/national), what’s the level of assurance offered, which guarantees and liabilities are expected, what’s the tradeoff between cost and value, how it should be proposed/communicated to be understood and valuable for the community at large.

Tell us, but do it before July 2 because that’s when the survey closes.

How ChromeOS Dramatically Simplifies Enterprise Security

By Rich Campagna, Chief Marketing Officer, Bitglass

Google’s Chromebooks have enjoyed significant adoption in education, but have seen very little interest in the enterprise until recently. According to Gartner’s Peter Firstbrook in Securing Chromebooks in the Enterprise (6 March 2018), a survey of more than 700 respondents showed that nearly half of organizations will definitely purchase or probably will purchase Chromebooks by EOY 2017. And Google has started developing an impressive list of case studies, including Whirlpool, Netflix, Pinterest, the Better Business Bureau, and more.

And why wouldn’t this trend continue? As the enterprise adopts cloud en masse, more and more applications are available anywhere through a browser – obviating the need for a full OS running legacy applications. Additionally, Chromebooks can represent a large cost savings – not only in terms of a lower up-front cost of hardware, but lower ongoing maintenance and helpdesk costs as well.

With this shift comes a very different approach to security. Since Chrome OS is hardened and locked down, the need to secure the endpoint diminishes, potentially saving a lot of time and money. At the same time, the primary storage mechanism shifts from the device to the cloud, meaning that the need to secure data in cloud applications, like G Suite, with a Cloud Access Security Broker (CASB) becomes paramount. Fortunately, the CASB market has matured substantially in recent years, and is now widely viewed as “ready for primetime.”

Overall, the outlook for Chromebooks in the enterprise is positive, with a very real possibility of dramatically simplifying security. Now, instead of patching and protecting thousands of laptops, the focus shifts toward protecting data in a relatively small number of cloud applications. Quite the improvement!

What If the Cryptography Underlying the Internet Fell Apart?

By Roberta Faux, Director of Research, Envieta

Without the encryption used to secure passwords for logging in to services like Paypal, Gmail, or Facebook, a user is left vulnerable to attack. Online security is becoming fundamental to life in the 21st century. Once quantum computing is achieved, all the secret keys we use to secure our online life are in jeopardy.

The CSA Quantum-Safe Security Working Group has produced a new primer on the future of cryptography. This paper, “The State of Post-Quantum Cryptography,” is aimed at helping non-technical corporate executives understand what the impact of quantum computers on today’s security infrastructure will be.

Some topics covered include:
–What Is Post-Quantum Cryptography
–Breaking Public Key Cryptography
–Key Exchange & Digital Signatures
–Quantum Safe Alternative
–Transition Planning for Quantum-Resistant Future

Quantum Computers Are Coming
Google, Microsoft, IBM, and Intel, as well as numerous well-funded startups, are making significant progress toward quantum computers. Scientists around the world are investigating a variety of technologies to make quantum computers real. While no one is sure when (or even if) quantum computers will be created, some experts believe that within 10 years a quantum computer capable of breaking today’s cryptography could exist.

Effects on Global Public Key Infrastructure
Quantum computing strikes at the heart of the security of the global public key infrastructure (PKI). PKI establishes secure keys for bidirectional encrypted communications over an insecure network. PKI authenticates the identity of information senders and receivers, as well as protects data from manipulation. The two primary public key algorithms used in the global PKI are RSA and Elliptic Curve Cryptography. A quantum computer would easily break these algorithms.

The security of these algorithms is based on intractably hard mathematical problems in number theory. However, they are only intractable for a classical computer, where bits can take only one value (a 1 or a 0). In a quantum computer, where k qubits can represent 2^k values simultaneously, RSA and Elliptic Curve cryptography can be broken in polynomial time using Shor’s algorithm. If quantum computers can scale to work on even tens of thousands of qubits, today’s public key cryptography becomes immediately insecure.
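
To make that dependence concrete, here is textbook RSA with deliberately tiny primes (real keys use primes over a thousand bits long). The point: once n is factored, whether classically after enormous effort or quickly via Shor’s algorithm, the private key falls out immediately:

```python
# Toy RSA for illustration only; requires Python 3.8+ for pow(e, -1, phi).
p, q = 61, 53                # the secret prime factors
n = p * q                    # public modulus (3233)
e = 17                       # public exponent
phi = (p - 1) * (q - 1)      # computable only once p and q are known
d = pow(e, -1, phi)          # private exponent: e*d = 1 (mod phi)

msg = 65
cipher = pow(msg, e, n)      # encrypt with the public key
print(pow(cipher, d, n))     # 65 -- knowing the factors recovers the plaintext
```

Elliptic curve cryptography rests on a different hard problem (discrete logarithms), but Shor’s algorithm breaks it the same way.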

Post-Quantum Cryptography
Fortunately, there are cryptographically hard problems that are believed to be secure even from quantum attacks. These crypto-systems are known as post-quantum or quantum-resistant cryptography. In recent years, post-quantum cryptography has received an increasing amount of attention in academic communities as well as from industry. Cryptographers have been designing new algorithms to provide quantum-safe security.

Proposed algorithms are based on a number of underlying hard problems widely believed to be resistant to attacks even with quantum computers. These fall into the following classes:

  • Multivariate cryptography
  • Hash-based cryptography
  • Code-based cryptography
  • Supersingular elliptic curve isogeny cryptography

Our new white paper explains the pros and cons of the various classes of post-quantum cryptography. Most post-quantum algorithms will require significantly larger key sizes than existing public key algorithms, which may pose unanticipated issues such as compatibility with some protocols. Bandwidth will need to increase for key establishment and signatures. These larger key sizes also mean more storage inside a device.

Cryptographic Standards
Cryptography is typically implemented according to a standard. Standard organizations around the globe are advising stakeholders to plan for the future. In 2015, the U.S. National Security Agency posted a notice urging the need to plan for the replacement of current public key cryptography with quantum-resistant cryptography. While there are quantum-safe algorithms available today, standards are still being put in place.

Standard organizations such as ETSI, IETF, ISO, and X9 are all working on recommendations. The U.S. National Institute for Standards and Technology, known as NIST, is currently working on a project to produce a draft standard of a suite of quantum resistant algorithms in the 2022-2024 timeframe. This is a challenging process which has attracted worldwide debate. Various algorithms have advantages and disadvantages with respect to computation, key sizes and degree of confidence. These factors need to be evaluated against the target environment.

Cryptographic Transition Planning
One of the most important issues the paper underscores is the need to begin planning for the cryptographic transition from existing public key cryptography to post-quantum cryptography. Now is the time to vigorously investigate the wide range of post-quantum cryptographic algorithms and find the best ones for future use. This point is vital for corporate leaders to understand, and transition planning should start now.

The white paper, “The State of Post-Quantum Cryptography,” was released by the CSA Quantum-Safe Security Working Group. It introduces non-technical executives to the current and evolving landscape of cryptographic security.

Download the paper now.

Surprise Apps in Your CASB PoC

By Rich Campagna, Chief Marketing Officer, Bitglass

Barely five years old, the Cloud Access Security Broker (CASB) market is undergoing its second major shift in primary usage. The first CASBs to hit the market way back in 2013-2014 primarily provided visibility into Shadow IT. Interest in that visibility use case quickly waned in favor of data protection (and later threat protection) for sanctioned, well-known SaaS applications like Office 365 and Box — this was the first major shift in the CASB market.

The second major shift, the one that we’re currently undergoing, doesn’t replace this use case, but adds on to it. As IT and security teams have gotten comfortable with cloud applications like Office 365, the business has responded with demands for more applications. Sometimes that means other SaaS apps; sometimes it means custom apps or packaged software moving to the cloud. Regardless, what started as a relatively small, defined set of applications has exploded to a much broader demand over the past year or so, and is showing no signs of slowing down — this is the second major shift and we’re seeing it in every industry and across organizations of all sizes.

The quandary here is trying to sort out whether the CASBs that you’re evaluating will meet not only your current needs, but the needs of your business down the road as well. A really interesting approach that I have seen several times now is the concept of surprise apps in a proof of concept (PoC). When calling vendors in for the PoC, the enterprise will enumerate some of the applications to be tested, but leave others as a surprise for the vendor. The objective is to test whether the CASB will be able to meet their organization’s future cloud security needs, whatever those might be.

Most CASB vendors still rely on a fixed catalog of applications that they support, and you don’t want to be waiting months (or longer) for a new app to be added to their roadmap while the GM of your company’s biggest line of business breathes down your neck to deploy that new application they so desperately need.

Majority of Australian Data Breaches Caused by Human Error

By Rich Campagna, Chief Marketing Officer, Bitglass

It wasn’t long ago that the first breach under the Office of the Australian Information Commissioner’s (OAIC) Privacy Amendment Bill was made public. Now, OAIC is back with their first Quarterly Statistics Report of Notifiable Data Breaches. While the report doesn’t offer much in the way of detail, it does highlight a couple of interesting trends.

The statistic that jumps out most is that of the 63 reported breaches in this first (partial) quarter, the majority (51%) were the result of “human error.” According to OAIC, “human error may include inadvertent disclosures, such as by sending a document containing personal information to the incorrect recipient.” Sounds like too few Australian organizations are controlling things like external sharing, even though sharing (and many other potentially risky activities) can be controlled quite easily with a Cloud Access Security Broker (CASB).

The report also breaks down the number of breaches by industry. Health service providers had the misfortune of leading the charge in this initial quarter, representing nearly a quarter of breaches. Healthcare organizations have a particularly difficult task with data protection. On one hand, they have a very mobile workforce that requires immediate access to data, from anywhere and from any device. On the other hand, medical records are some of the most valuable sources of personal data, including not only medical history, but personal information, financial information, and more.

Fortunately, this first quarter didn’t include any large “mega-breaches”: more than half of the incidents involved the personal information of fewer than 10 individuals, and 73% involved fewer than 100 individuals.

It will be interesting to see whether schemes like this, and the upcoming GDPR, have an impact on overall data protection outcomes.

baseStriker: Office 365 Security Fails To Secure 100 Million Email Users

By Yoav Nathaniel, Customer Success Manager, Avanan

We recently uncovered what may be the largest security flaw in Office 365 since the service was created. Unlike similar attacks, which could be learned and blocked, this vulnerability lets hackers completely bypass all of Microsoft’s security, including its advanced services (ATP, Safelinks, etc.).

The name baseStriker refers to the method hackers use to take advantage of this vulnerability: splitting and disguising a malicious link using the HTML <base> tag.

So far we have only seen hackers using this vulnerability to send phishing attacks, but it is also capable of distributing ransomware, malware and other malicious content.

How a baseStriker Attack Works

The attack slips a malicious link that would ordinarily be blocked by Microsoft past its security filters by splitting the URL into two snippets of HTML: a <base> tag and a regular, relative href tag.

Traditional Phish: This HTML email would be blocked because the URL is known to be malicious.

When scanning this, Office 365 sees the malicious URL, performs a lookup against a list of known bad links, and blocks it. Office 365 Safelink, for customers that purchased ATP, also replaces the URL with a Safelink URL and prevents the end-user from going to the phishing site.

Phish using baseStriker method: This email, however, has the same malicious link presented to the end-user but is let through because the email filters are not handling the <base> HTML code correctly.

In this example, Office 365 only performs the lookup on the base domain, ignoring the relative URL in the rest of the body. Because only part of the URL is tested, it mistakenly appears not to exist in the malicious-URL database and the email is let through. Furthermore, Safelinks does not replace the malicious link, so the user receives the original malicious link and can click straight through to the phishing page.
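
A minimal illustration of the split (the domain and path here are made up for the example): a filter that inspects only the href sees a harmless-looking relative path, while the browser resolves that path against the <base> href:

```python
from urllib.parse import urljoin

# Hypothetical baseStriker-style email body: the malicious URL is split
# between a <base> tag and a relative <a href>.
base_href = "http://malicious.example.com/"   # what the filter checks (base domain only)
link_href = "phish/login.html"                # what the link itself contains

# The browser joins the two, reassembling the full malicious URL:
resolved = urljoin(base_href, link_href)
print(resolved)  # http://malicious.example.com/phish/login.html
```

A filter that looked up only `link_href` against a known-bad-URL database would find no match, which is exactly the gap the attack exploits.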

In a nutshell, this attack method is the email equivalent of a virus that blinds the immune system. So even if the attack is already known, Microsoft does not have a way to see it and lets it through.

Are you vulnerable?

We have tested the vulnerability on several configurations and found that anyone using Office 365 in any configuration is vulnerable. If you are using Gmail, you don’t have this issue. If you are protecting Office 365 with Mimecast, you are secure. Proofpoint, however, is also vulnerable: if you are using Proofpoint, you also have this problem.

Here’s a summary of our findings:

I am using:  Am I Vulnerable to baseStriker?
Office 365  Yes – you are vulnerable
Office 365 with ATP and Safelinks  Yes – you are vulnerable
Office 365 with Proofpoint MTA  Yes – you are vulnerable
Office 365 with Mimecast MTA  No – you are safe
Gmail  No – you are safe
Gmail with Proofpoint MTA  We are still in testing and will be updated soon
Gmail with Mimecast MTA  No – you are safe
Other configurations not here?  Contact us if you want us to help you test it

What can you do?

As of the time of writing, there is still no fix, and there is no configuration change you can make in Office 365 to mitigate it. We have notified Microsoft and Proofpoint and will update if we learn more.

Because this vulnerability is already known to hackers, an immediate first step would be to notify your end-users and reinforce the risk of phishing attacks.

We recommend customers enable multi-factor authentication to make their accounts harder to take over. This will not protect against malware and other types of phishing, but it will help against credential harvesting.

Finally, for users of Gmail and Office 365, even if you are not vulnerable to this attack, we always recommend adding a layer of email security for malware, phishing, and account takeover to protect from the sophisticated attacks that the default security does not block. This is not the first attack to find a way past default security measures, and it will not be the last.

Updates

  • 5/1/2018: Avanan identified attackers are leveraging a critical vulnerability in Microsoft Office 365 email service that allows them to completely bypass O365 built in security
  • 5/2/2018 11:00am: Avanan reported this issue to Microsoft
  • 5/2/2018 11:00am: Avanan tested Gmail and it does not suffer from this vulnerability
  • 5/2/2018 11:30am: Avanan tested Mimecast and Proofpoint.
    • Mimecast is fine.
    • Proofpoint has the same vulnerability. Therefore, if you use Proofpoint you are not protected. We informed Proofpoint at 11:44am EDT on May 2nd, 2018.

Are Traditional Security Tools Dead?

By Salim Hafid, Product Marketing Manager, Bitglass

When evaluating security options, CISOs and security architects are always looking for the solution that will minimize cost and administrative overhead while maximizing data protection. At the highest levels, enterprises have relied on traditional tools as a means of protecting data over the long term, but as cloud app adoption rises, these tools are increasingly seen as ineffective: incapable of meeting the needs of an evolving enterprise with a growing stable of cloud apps and unmanaged endpoints capable of accessing those apps.

We surveyed over 500 cybersecurity professionals in our Cloud Hard 2018 survey to better understand how organizations are approaching these traditional security tools: whether they view firewalls and endpoint security as obsolete, and what tools they believe to be critical in protecting the next generation of platforms that will store and extend access to corporate data.

While awareness of these challenges has risen a great deal in the last several years – 84% say traditional security solutions don’t work well in a cloud environment – many haven’t yet taken action and are suffering the consequences. Breaches at Orbitz, Panera, Sears, and more so far this year have proven the need for improved visibility and control over internal systems.

Asked about visibility, only 58% of respondents indicated that they had some means of tracking file downloads. Even fewer – 15% – were able to see anomalous activity across applications. These rates are worrisome: without visibility, organizations are prone to being completely unaware of an attack and could be hit by surprise when a hacker leaks their customer or client data, an event that could be incredibly costly to remedy and could permanently tarnish a brand.

We explore these responses and the state of cloud security in our Cloud Hard 2018 survey. Download the full report for more.

CCSK vs CCSP: An Unbiased Comparison

By Graham Thompson, CCSK, CCSP, CISSP, Authorized Trainer, Intrinsec Security

Introduction

CCSK vs CCSP–I’m commonly asked two questions whenever someone discovers I’m an instructor for both the Cloud Security Alliance CCSK and (ISC)2 CCSP courses:

1 – “What’s the difference between the two certifications?”
2 – “How hard is the CCSK exam?” … It’s very hard, but more on that later!

In this entry I’ll identify the differences between two of the industry’s most highly regarded cloud security certifications, the CCSK and the CCSP. Hopefully, after reading, you’ll know which certification better fits your professional goals. I don’t believe I have a bias here, because I’ve been teaching both courses for a while. In fact, I delivered the first public CCSK course outside of the initial Train-the-Trainer in San Jose. As for the CCSP, I actually helped develop that course. I believe what follows is an honest comparison of the two.

CCSK | Certificate of Cloud Security Knowledge (Updated for v4 Course)

The Certificate of Cloud Security Knowledge (CCSK) by the Cloud Security Alliance is considered the grand-daddy of cloud security certifications. Why? Primarily because the CCSK was quite literally the industry’s first examination of cloud security knowledge when it was released back in 2011. The course breakdown is roughly split 60/40 between tactical (technical) and strategic (business-driven) discussion of cloud security, and it is agnostic in approach. To be honest, when I’m delivering CCSK training I probably spend a little too much time equating IaaS tactical security discussions to how it’s done in AWS, but I (and students) feel this approach drives home the controls covered in the course.

Update for CCSK Version 4

The best way to describe the updates for CCSK v4 is that, from a strategic 20,000-foot view, it’s mostly more of the same. Governance, contracts, risk management, and legal aspects are covered to much the same degree, but the coverage has been expanded to be more global in nature.

However, drop down to a more tactical 1,000-foot view and the updated version is very different. For example, leveraging Lambda-style serverless computing and object storage to remove network attack paths back to the datacenter isn’t exactly a governance item, but from a tactical standpoint it shows the different architecture patterns you can leverage in the cloud that are basically impossible in traditional computing. Version 4 also pulls in discussions that didn’t exist before, such as containers, CI/CD toolchains, DevOps, and chaos engineering, and it expands the discussion of Software Defined Networking security concepts.

CCSK Course Details

For the CCSK course itself, it’s delivered in two different formats:

  • CCSK Foundation (1 or 2-day course)
  • CCSK PLUS (2 or 3-day course)

What’s the main difference between the two different formats, aside from the course length? It comes down to practical experience and course exercises.

  • The CCSK Foundation format can be delivered over one day, which means you have the time to review theory, but not enough for in-depth class discussion or practical exercises.
  • The CCSK PLUS has everything presented in the CCSK Foundation format, but with more time to really drive home the major topics and learning objectives with course exercises/activities. Quite literally, the following formula applies:

CCSK PLUS = CCSK Foundation + AWS labs

In my personal opinion, a person with limited cloud exposure will find a 1-day crash course to be a complete waste of time. I’ve seen it myself, and that is why, as a trainer, I don’t usually deliver the course in a single day. However, if you are new to cloud and can only do the 1-day session, do yourself a favor and read and understand the Guidance v4 document before you take the class. Alternatively, if you’ve been working in cloud for a while and are looking to understand what CSA has to say on cloud security, you will likely prefer the 1-day approach. If you are looking for more information, many of these details about the CCSK can be found on the Cloud Security Alliance’s website.

CCSK Exam Breakdown

I mentioned the exam was pretty hard at the start of this blog entry. The reason for this has everything to do with the split between tactical and strategic domains of knowledge.

People tend to be either tactical types or strategic governance types. The tactical types enjoy the bits and bytes of computing, and that’s totally cool. Then you have the governance types: the managers, directors, and others whose mindset is how the business as a whole may be impacted by cloud adoption. One person having a foot in both areas is pretty rare, and that is what makes the CCSK exam so hard. I’ve seen hardcore techies fail, and I’ve seen MBAs fail.

One concern I’ve heard from heads of training departments has to do with the exam being open book and unproctored; it is taken online from any location (home/office/hotel). These traits lead some to think less of the exam because it doesn’t seem as “legitimate” as a closed-book, proctored one. I still contend that properly written open-book exams are legitimate, and this exam is tough: it would be nearly impossible to answer 60 questions in 90 minutes if you had to research every question. I would have no problem hiring someone who has a CCSK but not the CCSP.

Continuing Professional Education Credits (CPE)

The CCSK course is CPE eligible. Keep in mind that the CPE guidelines for courses require you to take lunch and breaks into account, meaning a 3-day course nets you 21 CPEs (7 per day). Not bad! Side note: the CCSK does not require CPE maintenance; once you have earned it, it’s yours.

Concluding Thoughts on CCSK

With the updated v4 content, the CCSK remains highly relevant to security professionals who are seeking a course that delivers a general tactical and strategic understanding of the challenges and advantages of cloud. Ready to get started? Download our CCSK prep kit or look for upcoming training sessions near you. If, instead, you are looking for coverage of traditional information security concepts in addition to cloud specific issues, you might want to look at the CCSP.

CCSP | Certified Cloud Security Professional (Updated for 2017 Version)

(ISC)² is the organization that gets the credit for the CCSP. However, (ISC)² and the Cloud Security Alliance (the organization behind the CCSK) collaborated to create the CCSP course and certification exam. (ISC)² is also the organization that developed the popular CISSP designation, and the CCSP looks and feels like a cloud version of the CISSP.

The CCSP is, in my humble opinion, better suited to CISSP holders. The CCSP goes into many subjects that are assumed knowledge in the CCSK. For example, the OSI reference model is covered in the CCSP, whereas the CCSK assumes you have this knowledge already when discussing encapsulation of packets in an SDN network.

Course Details

The main difference between the CCSP and the CCSK can be found in three areas: expanded governance discussion, datacenter security, and privacy. A CISSP is expected to understand a wide range of security domains, and (ISC)² wants to ensure that CCSP-certified professionals are fully aware of the governance and security issues that come along with cloud, the datacenter, and the privacy of consumers using cloud services. So really, when the dust settles, the following formula pretty much sums up the new CCSP:

CCSP = CCSK + Expanded Governance Items + Traditional Security + Privacy

The CCSP course is typically delivered over a 5-day period. There’s some repetition in the material, and you can finish it in the allotted 5 days, but I wouldn’t say it can be done in 4.

Course Format

The CCSP course is pretty much 100% lecture. There are no labs at all. Zero. None. Zilch. Nada. Instead, you have a series of Q&A and work-group type of scenarios that are peppered throughout the course. This makes the CCSP a course that could be considered more strategic in nature. I would give the CCSP a 70% strategic, 30% tactical approach; almost the inverse of the CCSK.

Update for 2017 version

(ISC)² updated the CCSP Common Body of Knowledge (CBK) and the course in 2017. The CBK itself is about 150 pages bigger than its predecessor (735 vs. 584 pages). This update expands on concepts, introduces new subjects (such as the economics of cloud and business requirements) and new technologies (e.g., DevOps and containers), albeit to a lesser technical degree than the CCSK.

CCSP Exam Breakdown

As for the exam itself, I’m under an NDA, so I naturally can’t get into the types of questions presented. I think it would be a fair statement, though, to say the average CCSP exam candidate is a CISSP holder and would be tested on knowledge of both cloud and traditional datacenter security concepts.

Continuing Professional Education Credits (CPE)

The CCSP is listed as a 40-hour course, so you should take home roughly 35 CPEs. Of note for current CISSPs: future CPEs earned apply to both the CISSP and CCSP designations. Keep in mind that the CSA’s CCSK can be substituted for one year of experience in pursuit of the (ISC)² CCSP certification.

Concluding Thoughts on CCSP

While the latest version of the CCSP expands the discussion of strategic issues, it doesn’t reach the depth of tactical discussion found in the CCSK. The course is written along the same lines as the CISSP, so coverage includes everything an information security professional should know to secure an environment, ranging from the physical design of a datacenter up to cloud application security.

CCSK vs. CCSP | Final Thoughts

As I said earlier, I don’t have a bias here. I’ve laid out what I consider to be the strengths of both offerings. This table recaps some highlights:

CCSK Course Highlights | CCSP Course Highlights
100% focused on cloud security | Covers both traditional information security and cloud security
60% tactical, 40% strategic | 70% strategic, 30% tactical
Quicker delivery and a more comprehensive review of cloud-specific technologies (e.g., SDN, DevOps, serverless) | A more comprehensive review of IT security principles along the lines of the CISSP CBK
Less expensive course and exam | More expensive course and exam
Open-book exam taken online (exam included with training cost) | Closed-book proctored exam at a testing center (exam at additional charge)

Which Do I Prefer?

I appreciate the coverage of the CCSP, but if I had to do only one, I would do the CCSK: it is 100% focused on cloud security, and architectural patterns and cloud-specific technologies are covered in greater depth (even more so after the v4 update). I also prefer that it can be consumed in a shorter time frame (due to the aforementioned cloud focus).

If you have the time and resources, doing both is not a bad idea. In that case, I would do the CCSK first and then the CCSP (the CCSK counts as one year of experience toward the CCSP requirements as well). Either way, the only way you can go wrong is by doing neither.

About the author
Graham Thompson is a cloud security architect and delivers both CCSK and CCSP official courses as an authorized trainer for Intrinsec Security. You can reach Graham on LinkedIn or by old-fashioned e-mail.

GDPR Is Coming: Will the Industry Be Ready?

By Jervis Hui, Senior Product Marketing Manager, Netskope

With the May 25, 2018, deadline for GDPR compliance approaching, Netskope worked with the Cloud Security Alliance (CSA) to survey IT and security professionals for a recently released report covering GDPR preparation and challenges. According to one of our recent Netskope Cloud Reports, only about 25 percent of all cloud services across SaaS and IaaS are GDPR-ready. And with the ubiquity of cloud and web services, organizations face steep challenges with SaaS, IaaS, and web alone, not to mention the myriad other issues they need to address for the GDPR.

To help better understand the challenges, CSA and Netskope asked over 1,000 respondents questions that covered topics like their ability and confidence to achieve compliance, specific plans and tools being used to meet GDPR requirements, what they consider to be the most challenging elements of GDPR in terms of compliance, and the impact of GDPR on company plans for the adoption of new technologies, provider relationships, and budgets. Key findings of the report include:

  • Eighty-three percent of companies do not feel very prepared for GDPR, with companies in the APAC region feeling less prepared than other regions.
  • Fifty-nine percent of companies are making GDPR a high priority. Even so, more than 10 percent of companies still have no defined plan to prepare for GDPR.
  • Seventy-one percent of the respondents feel confident that their organizations will meet GDPR compliance in time.
  • Thirty-one percent of companies have well-defined plans for meeting GDPR compliance, 85 percent have something in place, and 73 percent have begun executing that plan.
  • The GDPR’s “right to erasure,” (53%) “data protection by design and by default,” (42%) and “records of processing activities” (39%) were cited as being among the biggest challenges organizations face in achieving compliance.
  • Documentation of data-collection policies (68%), codes of conduct (56%), and third-party audits and assessments (55%) are among the most common tools being used to demonstrate GDPR compliance.

The results seem to indicate that while organizations are in the midst of implementing programs, solutions, and processes to comply with the GDPR, many were still feeling under-prepared as of the survey dates of January 25–February 21, 2018. Uncertainty over the interpretation of the articles, and over how data protection authorities (DPAs) will enforce the GDPR, has probably only exacerbated organizations’ feelings of under-preparedness. The good news is that 70 percent of respondents indicated that they felt either ‘somewhat confident’ or ‘very confident’ that their organizations would be ready to meet GDPR compliance by the May deadline.

Across Netskope customers and prospects, we’ve seen many security teams work across their organizations, collaborating with legal, compliance, and technology teams to implement policies and solutions to meet GDPR guidelines. While cloud and web services present more risk vectors for data loss and threats, securing the use of these services allows for continued productivity gains and flexibility by employees. The full GDPR Preparation and Challenges Survey Report contains more information on how organizations are preparing for the GDPR.

Download the full report to get more specifics and see how others compare to your current GDPR plans.

Imagine a Day Without Safe Cryptography

By Jeffrey Ritter, Visiting Fellow, Kellogg College, University of Oxford

Every security professional, at one time or another (or at many times), confronts executive opposition to changing technology. We all know that every innovation in technology requires adaptations in the security services, introducing new costs tied to shifts in equipment, third-party services, and human resources. We also know, whatever the new risks tied to the new technology, that executives want assurances that any new spending will produce effective results. So often, the end result is for the company to sit back, defer the spending, and better evaluate the severity of the risks.

But what if a new technology emerges that could disrupt nearly 100 percent of your security services with the velocity of a zero-day exploit? That scenario is exactly what drove the CSA Quantum-Safe Security Working Group to produce a new white paper aimed at the non-technical corporate executives frequently in the approval chain for new security investments.

The potential of quantum computing is accelerating with amazing velocity. Nearly every week in 2018 brings new announcements of improved capabilities and increased computational power in quantum computing. Even before quantum machines have been built to the size required for complex commercial uses, researchers are authoring new programming languages to strengthen the potential of quantum computers to solve computational problems that existing computers cannot functionally solve.

That, of course, is the security blanket most encryption represents—solving the math to calculate the proper key(s) to decrypt content or system protections has been computationally infeasible. But, when controlled by bad actors, quantum computing makes the infeasible entirely feasible. Those bad actors could be hostile nation-states, international criminal syndicates, competitors, or others who value the content or systems safeguarded by the encryption.
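As a toy illustration of that “computationally infeasible” math, consider factoring, which underpins RSA-style encryption. Trial division cracks a tiny modulus instantly, but classical effort grows roughly exponentially with key size; that is exactly the barrier a sufficiently large quantum computer running Shor’s algorithm would remove. A sketch, not a statement about any specific product:

```python
# Toy sketch: RSA-style security rests on factoring large semiprimes
# being infeasible for classical machines. Trial division works for
# tiny numbers but is hopeless against real 2048-bit moduli.
def factor(n: int) -> tuple[int, int]:
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p  # found the smaller prime factor
        p += 1
    return n, 1  # n itself is prime

print(factor(3233))  # a toy 12-bit "modulus": prints (53, 61)
```

Scale that loop to a 617-digit modulus and the sun burns out first; a large, fault-tolerant quantum computer changes that calculus, which is the working group’s point.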

The new white paper, “A Day Without Safe Cryptography,” was released at the CSA Summit at RSA this week. Rather than getting bogged down in technical jargon, the white paper illustrates the dramatic, sobering impact on a company and its corporate leaders when (not if) quantum computing is used to overwhelm the commercial encryption services currently available. Executives are presented with over a dozen examples of how their personal and corporate lives could be disrupted by quantum computing in the hands of malicious forces.

Simply, the potential for quantum computing to overwhelm encryption protections is so great that, once deployed against company or government systems, there will not be an opportunity to sit back and wait.

Quantum-safe security is already possible, but building a full portfolio of the skills, technologies, and capabilities to get there will be challenging. This is one new security risk against which companies must begin work now, even before quantum computing becomes fully realized.

The Working Group hopes the white paper will help you start the conversation with your executives on the right foot.  

Building a Foundation for Successful Cyber Threat Intelligence Exchange: A New Guide from CSA

By Brian Kelly, Co-chair/Cloud Cyber Incident Sharing Center (CISC) Working Group, and CSO/Rackspace

No organization is immune from cyber attack. Malicious actors collaborate with skill and agility, moving from target to target at a breakneck pace. With new attacks spreading from dozens of companies to a few hundred within a matter of days, visibility into the past cyber environment won’t cut it anymore. Visibility into what’s coming next is critical to staying alive.

Sophisticated organizations, particularly cloud providers, know the difference between a minor incident and massive breach lies in their ability to quickly detect, contain, and mitigate an attack. To facilitate this, they are increasingly participating in cyber intelligence and cyber incident exchanges, programs that enable cloud providers to share cyber-event information with others who may be experiencing the same issue or who are at risk for the same type of attack.

To help organizations navigate the sometimes treacherous waters of cyber-intelligence sharing programs, CSA’s Cloud Cyber Incident Sharing Center (Cloud-CISC) Working Group has produced Building a Foundation for Successful Cyber Threat Intelligence Exchange. This free report is the first in a series that will provide a framework to help corporations seeking to participate in cyber intelligence exchange programs that enhance their event data and incident response capabilities.

The paper addresses such challenges as:

  • determining what event data to share, which is essential (and fundamental) for organizations that struggle to understand their internal event data;
  • incorporating cyber intelligence provided by others via email, a format that by its very nature limits the ability to integrate it with one’s own data;
  • scaling laterally to other sectors and vertically with one’s supply chains; and
  • understanding that the motive for sharing is not necessarily helping others, but rather supporting internal response capabilities.

Past, Present, Future

Previous programs were more focused on sharing information about cyber security incidents after the fact and acted more as a public service to others than as a tool to support rapid incident response. That’s changed, and today’s Computer Security Incident Response Teams have matured.

New tools and technologies in cyber intelligence, data analytics and security incident management have created new opportunities for faster and actionable cyber intelligence exchange. Suspicious event data can now be rapidly shared and analyzed across teams, tools and even companies as part of the immediate response process.

Even so, there are questions and concerns beyond simply understanding the basics of the exchange process itself:

  • How do I share this information without compromising my organization’s sensitive data?
  • How do I select an exchange platform that best meets my company’s needs?
  • Which capabilities and business requirements should I consider when building a value-driven cyber intelligence exchange program?

Because the cloud industry is already taking advantage of many of the advanced technologies that support cyber intelligence exchange—and has such a unique and large footprint across the IT infrastructure—we believe that we have a real opportunity to take the lead and make cyber-intelligence sharing pervasive.

The Working Group’s recommendations were based largely on the lessons learned through their own development and operation of Cloud CISC, as well as their individual experiences in managing these programs for their companies.

Our industry cannot afford to let another year pass working in silos while malicious actors collaborate against us. It is time to level the playing field, and perhaps even gain an advantage. Come join us.

 

Speeding the Secure Cloud Adoption Process

By Vinay Patel, Chair, CSA Global Enterprise Advisory Board, and Managing Director, Citigroup

Innovators and early adopters have been using cloud for years, taking advantage of the quicker deployment, greater scalability, and cost savings of these services. The growth of cloud computing continues to accelerate, offering more solutions with added features and benefits and, with proper implementation, enhanced security. In the age of information digitalization and innovation, enterprise users must keep pace with consumer demand and new technology solutions while ensuring they can meet both baseline capabilities and security requirements.

CSA’s new report, State of Cloud Security 2018, observes some of the latest cloud practices and technologies that the enterprise information security practitioner must be aware of as organizational data expands beyond the traditional perimeter. This free resource provides a roadmap to developing best practices where providers, regulators, and the enterprise can come together in the establishment of baseline security requirements needed to protect organizational data.

The report, authored by the CSA Global Enterprise Advisory Board, examines such areas as the adoption of cloud and related technologies, what both enterprises and cloud providers are doing to ensure security requirements are met, how to best work with regulators, the evolving threat landscape, and goes on to touch upon the industry skills gap.

Among the report’s key takeaways are:

  • Exploration of case studies and potential use cases for blockchain, application containers, microservices and other technologies will be important to keep pace with market adoption and the creation of secure industry best practices.
  • With the rapid introduction of new features, safe default configurations and ensuring the proper use of features by enterprises should be a goal for providers.
  • As adversaries collaborate quickly, the information security community needs to respond to attacks swiftly with collaborative threat intelligence exchanges that include both providers and enterprise end users.
  • A staged approach on migrating sensitive data and critical applications to the cloud is recommended.
  • When meeting regulatory compliance, it is important for enterprises to practice strong security fundamentals to demonstrate compliance rather than use compliance to drive security requirements.

Understanding the use of cloud and related technologies will help in brokering the procurement and management of these services while maintaining proper responsibility for data security and ownership. Education and awareness around provider services and new technologies still need to improve in the enterprise. Small-scale adoption projects need to be shared so that security challenges and patterns can be adapted to scale with the business and across industry verticals. The skills gap, particularly around cloud and newer IT technologies, needs to be closed by the industry through partnership and collaboration among all parties of the cyber ecosystem.

The state of cloud security is a work in progress with an ever-increasing variety of challenges and potential solutions. It is incumbent upon the cloud user community, therefore, to collaborate and speak with an amplified voice to ensure that their key security issues are heard and addressed.

Download the full report.