Cloud Security and Compliance Is a Shared Responsibility

By Gail Coury, Chief Information Security Officer, Oracle Cloud

Organizations around the world are ramping up to comply with the European Union’s General Data Protection Regulation (GDPR), which will be enforced beginning on May 25, 2018, and each must have the right people, processes and technology in place to comply or else potentially face litigation and heavy fines. The drive for more regulation is in large part a direct consequence of the rise in data breaches and cybersecurity incidents. In an effort to protect data privacy, governments are stepping in and demanding greater transparency in how organizations handle sensitive personal data. GDPR is just one such privacy mandate that will affect organizations globally and impact the lifeblood of their operations. Many have already spent countless hours preparing for the deadline, while others are just getting started.

Organizations are rapidly embracing cloud services to gain agility and thrive in today’s digital economy. This has created a strategic imperative to better manage cybersecurity risk and ensure compliance while keeping pace at scale as firms move critical apps to the cloud. According to the Oracle and KPMG Cloud Threat Report, 2018, 87 percent of organizations have a cloud-first orientation.

The conventional mindset—that security is an obstacle to cloud adoption—is rapidly losing relevance. Enterprises in highly regulated industries are becoming more confident putting sensitive data in the cloud. Ninety percent of organizations say that more than half of their cloud data is sensitive information, according to the same report. Although customers are confident in their cloud service provider’s (CSP) security, they should still vet their CSP’s cybersecurity program rigorously and conduct a comprehensive assessment of its security and compliance posture. Trust has always been important in business and is paramount when choosing a cloud partner.

GDPR is top of mind for a lot of organizations because it’s a people, process and technology challenge that requires a coordinated strategy incorporating different organizational entities, not a single technology solution. It is a complicated law and introduces intricate new regulations and requirements for handling personal data. In fact, 95 percent of firms affected by GDPR say that the regulation will impact their cloud strategies and CSP choices, based on findings published by Oracle and KPMG. One of the central considerations is the movement of sensitive data between CSP data centers. Organizations need to understand and clarify how their CSPs employ essential data protection controls and standards to meet GDPR requirements, because every cloud platform and vendor has unique cybersecurity standards.

As you may know by now, cloud security and compliance is a shared responsibility, where the cloud provider and the tenant each have a role to play. Although it sounds relatively simple, customers are often unclear about where their provider’s role ends and their own obligations begin, creating gaps. Knowing what security controls the vendor provides allows the business to take steps to secure its own cloud environment and ensure compliance. Almost every organization today has more than one regulation to comply with, and each cloud service they add increases the complexity. As organizations continue to lift and shift their apps to the cloud, they need to keep pace at scale and ensure security and compliance are maintained.

I am excited to explore these topics with other industry experts at the Cloud Compliance Zeitgeist panel on April 16 (12:50 p.m. – 1:35 p.m.), at the Cloud Security Alliance Summit at the RSA Conference 2018. Also, my colleague, Mary Ann Davidson, Oracle’s Chief Security Officer, will lead the panel Getting to Mission Critical with Cloud. You will hear directly from some large complex global enterprises about their journey to the cloud, cybersecurity challenges and their complex compliance mandates.

We look forward to seeing you there!

The Early Bird Gets the Virus

By Kevin Lee, Systems QA Engineer, Bitglass

Most people have heard of the proverb, “The early bird gets the worm.” The part that many haven’t heard is the follow-up, “But the second mouse gets the cheese.” The latter proverb makes a lot of sense when you apply it to the current state of virus and malware detection.

Today, most established virus and malware detection services use a signature-based method. This means that they leverage lists of known malware signatures to scan files for threats. This works well when protecting against known malware. However, as with the mice in the proverb above, someone has to spring the trap to make the cheese obtainable. When enterprises use these solutions, they must simply hope that other organizations encounter new malware first. That way, lists of dangerous signatures can be updated.

An additional problem with these tools rests with the strictness of their signature matching. This is because they search for highly specific hashes (patterns) generated from the contents of known malicious files. Unfortunately, it is extremely easy to create new variants with new signatures by changing even minor aspects of attacks. In other words, even a small edit to a file containing a threat can alter the signature enough so that it will go undetected by signature-based tools. This results in the signature-based method always being reactionary and a second too slow.
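The brittleness of exact-signature matching is easy to demonstrate: flipping even one byte of a file produces a completely different hash, so the variant no longer matches the signature database. Here is a minimal sketch in Python (the payload bytes and signature set are illustrative, not real malware signatures):

```python
import hashlib

# A pretend "malicious" payload and a database of known-bad signatures.
payload = b"MZ\x90\x00...malicious payload bytes..."
known_signatures = {hashlib.sha256(payload).hexdigest()}

def is_flagged(data: bytes) -> bool:
    """Signature-based check: flag only exact hash matches."""
    return hashlib.sha256(data).hexdigest() in known_signatures

# The original sample is caught...
print(is_flagged(payload))   # True

# ...but changing a handful of bytes yields a brand-new hash,
# and the variant sails past the signature check.
variant = payload.replace(b"payload", b"payl0ad")
print(is_flagged(variant))   # False
```

The same property that makes cryptographic hashes useful for integrity checking (any change avalanches into a different digest) is exactly what makes them fragile as a sole detection mechanism.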

More and more, organizations are turning to behavior-based anti-malware solutions. The advantage of these advanced detection methods is that they don’t require a sacrificial lamb (mouse) to figure out that a certain file is dangerous. Instead, they scrutinize large numbers of file characteristics and behaviors in order to identify threats. In addition, due to the fact that they don’t depend on signatures for detection, they cannot be fooled as easily by altered variants of existing malware. This leads to a simple conclusion. When implemented and utilized effectively, a zero-day solution should make any early bird, mouse, or human feel safe.

To learn more about cloud access security brokers and true advanced threat protection, download Bitglass’ Malware P.I. report.

Five Reasons to Reserve Your Seat at the CCSK Plus Hands-on Course at RSAC 2018

By Ryan Bergsma, Training Program Director, Cloud Security Alliance

The IT job market is tough and it’s even tougher to stand out from the pack, whether it’s to your current boss or a prospective one. There is one thing, though, that can put you head and shoulders above the rest—achieving your Certificate of Cloud Security Knowledge (CCSK). CCSK certificate holders have an advantage over their colleagues and get noticed by employers across the IT industry, and no wonder.

It’s been called the “mother of all cloud computing security certifications” by CIO Magazine, and Search Cloud Security notes that it’s “a good alternative cloud security certification for an entry-level to midrange security professional with an interest in cloud security.” So it was no surprise when Certification Magazine listed CCSK at #1 on the Average Salary Survey 2016.

For those interested in taking their careers to the next level, we are offering the CCSK Plus Hands-on Course (San Francisco, April 15-16) at the 2018 RSA Conference.

Our intensive 2-day course gives you hands-on, in-depth cloud security training, where you’ll learn to apply your knowledge as you perform a series of exercises to complete a scenario bringing a fictional organization securely into the cloud.

Divided into six theoretical modules and six lab exercises, the course begins with a detailed description of cloud computing, and goes on to cover material from the official Security Guidance for Critical Areas of Focus in Cloud Computing, Cloud Controls Matrix v3.0.1 (CCM) documents from Cloud Security Alliance, and recommendations from the European Network and Information Security Agency (ENISA).

Still on the fence? Here are five reasons you need to register today.

  1. Get trained by THE best in the business. Rich Mogull, a prominent industry analyst and sought-after speaker at events such as RSAC and BlackHat, will be there to guide you through this 2-day, intensive cloud security course. Not only is he the most experienced CCSK trainer in the industry, but he created the course content. Need we say more?
  2. Gain actionable security knowledge. In addition to learning the foundational differences of cloud, you’ll acquire practical knowledge and the skills to build and maintain a secure cloud business environment right away. It’s good for you and good for your company.
  3. Make the boss sit up and notice. Your newfound knowledge will translate to increased confidence and credibility when working within the cloud, and just maybe a better job or dare we say, a raise?
  4. Move to the head of the class. By the end of the course, you’ll be prepared to take the CCSK exam to earn your Cloud Security Alliance CCSK v4.0 certificate, a highly regarded certification throughout the industry certifying competency in key cloud security areas. ‘Nuff said.
  5. Invest in your future. The course price includes the cost of the exam, a $395 value. That’s what we call a sound investment.

Still not convinced? Watch this and you will be.


Australia’s First OAIC Breach Forecasts Grim GDPR Outcome

By Rich Campagna, Chief Marketing Officer, Bitglass

The first breach under the Office of the Australian Information Commissioner’s (OAIC) Privacy Amendment Bill was made public on March 16. While this breach means bad press for the offending party, shipping company Svitzer Australia, more frightening is the grim outcome it forecasts for organizations subject to GDPR regulations, which go into effect on May 25, 2018.

In the Svitzer case, 60,000 emails containing sensitive personal information on more than 400 employees were “auto-forwarded” to external accounts, a not uncommon way for employees to “get access” to their work emails from outside of the office. While it is unclear why these auto-forwarding rules were set up, or whether the intent was malicious or benign, in many cases the objective is to avoid IT management of the user’s device while still gaining access to sensitive data.

Another common scheme to bypass unwanted IT controls is to set up sharing of one’s cloud file sharing drive to a personal email account. Both of these challenges are easily solved with Cloud Access Security Brokers (CASBs), which can secure employee devices without taking management control (helping to avoid auto-forwarding outcomes), and control the flow of data into/out-of cloud apps (including external sharing control).

The outcome in this case is bad press for Svitzer, causing loss of goodwill and perhaps some customers. It could have been worse, however. Under the Australian scheme, when the OAIC is notified of a breach, as Svitzer has apparently done, the breach is made public but there are no direct financial penalties. Had Svitzer failed to notify, it would have been subject to fines of “up to $1.8 million”: penalties start with public apologies and compensation payments to the victims, with repeated non-notification ratcheting fines up to that maximum.

What does all of this have to do with GDPR? Simple. With the upcoming GDPR enforcement deadline, some organizations are scrambling to reach compliance, while others are taking a wait-and-see approach. Once we pass the deadline, there WILL be companies with similarly simple issues that have a breach. The difference is in the penalties with GDPR. Rather than starting with simple fixes such as apologies and victim compensation, GDPR comes with severe penalties of up to €20 million or 4% of annual revenue, whichever is greater. Depending on the size and health of the organization, penalties like this could be terminal.
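The GDPR penalty ceiling described above is simply the greater of the two figures. A quick illustration of the arithmetic (the revenue figures are hypothetical):

```python
def gdpr_max_fine(annual_revenue_eur: float) -> float:
    """Upper bound of a GDPR fine: the greater of EUR 20M or 4% of annual revenue."""
    return max(20_000_000, 0.04 * annual_revenue_eur)

# For a firm with EUR 300M in revenue, the EUR 20M floor dominates:
print(gdpr_max_fine(300_000_000))    # 20000000

# For a firm with EUR 2B in revenue, the 4% rule dominates:
print(gdpr_max_fine(2_000_000_000))  # 80000000.0
```

For a mid-sized company, a fine anywhere near that ceiling could easily exceed annual profit, which is why the "terminal" scenario below is plausible.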

My prediction? We’ll quickly see the first examples, like Svitzer, and before the end of 2018, we’ll see the first bankruptcy as a result of GDPR fines and loss of business.

CSA Summit at RSA Conference 2018 Turns Its Focus to Enterprise Grade Security: Will you be there?

By J.R. Santos,  Executive Vice President of Research, Cloud Security Alliance

Today’s enterprise cloud adoption has moved well beyond the early adopters to encompass a wide range of mission-critical business functions. As financial services, government and other industries with regulatory mandates have made significant steps into the cloud over the past year, it’s only fitting that this year’s CSA Summit at RSA Conference 2018, now in its ninth year, turn its attention to enterprise-grade security.

For both companies and governments, however, making this leap has not come without effort. It’s required a transformation in both the technology of security and the mindset of security professionals. To help facilitate this transformation, we’ll again be bringing together some of the best and brightest minds from across the industry to share the common practices that are enabling the shift to cloud as our dominant IT system.

Thought leaders from multi-national enterprises, government, cloud providers and the information security industry will be speaking on some of cloud security’s most pressing topics, including:

  • Appetite for Destruction – The Cloud Edition. Over the last two years, the multitude of data leaks and breaches in the cloud has skyrocketed. Many of these leaks are reminiscent of past security lessons, and some show new attributes unique to our evolving computing environments. In this short talk, Raj Samani, chief scientist at McAfee, takes a look at the past and peers toward the future.
  • Cloud Security Journey. Get a preview of how a major retailer solves the problem of security software chaos and fragmentation while addressing new security requirements in this session from Symantec and Albertsons Companies. You’ll get a real-world perspective on how they approached cloud security while addressing end-to-end compliance, data governance, and threat protection requirements.
  • A GDPR-Compliance & Preparation Report Card. With the impending May 2018 deadline for GDPR compliance, organizations worldwide need to account for the regulation in their security policies and programs. Join Netskope Chief Scientist Krishna Narayanaswamy and CSO Jason Clark for an interactive session that previews their recent study with the Cloud Security Alliance on how organizations are preparing for compliance.
  • The Software-Defined Perimeter in Action. Cyxtera’s Cybersecurity Officer Chris Day will chronicle how organizations have taken CSA’s Software-Defined Perimeter (SDP) from experimental to enterprise-grade. You’ll walk away with valuable insights and learn compelling best practices on how enterprises can make SDP adoption a reality.

Other discussions and panels will also explore new frontiers that are accelerating change in information security, such as artificial intelligence, blockchain and fog computing.

Register for RSAC and the Summit today using the discount code 18UCSAFD to receive $100 off the full conference pass to RSAC, or receive a complimentary expo pass with the code X8ECLOUD. The CSA Summit is a free event for all registered conference attendees regardless of pass.

For those interested in taking their careers to the next level, we also are offering the CCSK Plus Hands-on Course (April 15-16) at the RSA Conference 2018. Our intensive 2-day course gives you hands-on, in-depth cloud security training, where you’ll learn to apply your knowledge as you perform a series of exercises to complete a scenario bringing a fictional organization securely into the cloud and emerge prepared to take the Certificate of Cloud Security Knowledge exam.

The CCSK gives you a distinct edge over your cloud security colleagues. Why else would CIO Magazine have called it the “Mother of all cloud computing security certifications?” Certification Magazine even listed CCSK at #1 on the Average Salary Survey 2016.

So what are you waiting for? Register now.


The “Ronald Reagan” Attack Allows Hackers to Bypass Gmail’s Anti-phishing Security

By Yoav Nathaniel, ‎Customer Success Manager, Avanan

We started tracking a new method hackers use to bypass Gmail’s SPF check for spear-phishing. The hackers send from an external server, the user sees an internal sender (for example, your CEO), and Gmail’s SPF check, designed to indicate the validity of the sender, shows “SPF-OK.”

Why are we calling this “The Ronald Reagan Attack”? Several of these attacks originated from a website that offers a private email with the domain name of Ronald Reagan, the 40th president of the United States, encouraging its users to “be a cowboy.”

If you are a Gmail for Business customer and suffer from phishing attacks, you might find some comfort in knowing that the lives of Office 365 users are much worse. Maybe it’s because hackers target Office 365 more, maybe because Google does a better job of filtering phishing attacks, but the end result is that Gmail users see less phishing.

How The Ronald Reagan Attack Works
At the core of the attack is the fact that when Gmail’s anti-phishing layer scans the email for impersonation and performs an SPF check, it looks at one “sender” field in the email header, while the sender name presented to the human recipient in the Gmail web interface is taken from a different field in the email header.

There are two fields in the email header that play a role here:

1. X-Sender-Id: This is the field that Gmail uses in its SPF check and for spear-phishing and impersonation analysis

2. From: This is the field that is actually presented to the Gmail user

The result is that the machine that tests for phishing finds this email completely legitimate and passes it to the recipient with no warning. But for the recipient, this shows up as an email from someone else, presumably someone in the organization that they know. The recipient has no practical way to find the actual value of the X-Sender-Id field.

Here’s an example of a real attack of this kind and why it was effective (numbers explained below):


(1) X-Sender-Id – This is the real sender. It is the most important part of the attack because it is not, in fact, spoofed: it matches the server that sent the email.

(2) The “Reply To” header is presented to the end-user, but the actual reply goes to a field called “Return-Path” (field 5 below). This header is also spoofed – it uses the impersonated victim’s name in a domain that doesn’t exist (and indicates the use of a mobile phone).

(3,4) Authentication-Results, Received-SPF and Arc-Authentication-Results: These are the fields that indicate the receiver’s (Gmail’s) calculated authenticity of the email. Note that in all tests, the email address from the X-Sender-Id is selected as the identity to test (as indicated in the X-Auth-Id field) and that all tests pass successfully – the email is ‘authentic’.

(5) Return-Path: This field is what the mail server uses if the end-user chooses to reply to the sender. The hackers spoofed the “Reply To” field with an address that is presented to the end-user but does not exist; any actual reply from the recipient will be routed to the attacker.

(6) From: This is the actual attack – the email address of the impersonated sender. This field is used nowhere except when it is presented to the recipient!
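The mismatch at the heart of the attack can be checked mechanically: compare the identity that passed authentication against the identity shown to the human. Here is a hedged sketch using Python’s standard-library email parser (the addresses and domains are made up for illustration, and X-Sender-Id handling varies by provider):

```python
from email import message_from_string
from email.utils import parseaddr

# A simplified raw message mimicking the attack: the authenticated
# identity (X-Sender-Id) and the displayed identity (From) disagree.
raw = """\
X-Sender-Id: cowboy@example-private-mail.test
From: "The CEO" <ceo@yourcompany.test>
Reply-To: "The CEO" <ceo@nonexistent-domain.test>
To: victim@yourcompany.test
Subject: Urgent wire transfer

Please handle this today.
"""

msg = message_from_string(raw)
authenticated = parseaddr(msg["X-Sender-Id"])[1]  # what SPF actually checked
displayed = parseaddr(msg["From"])[1]             # what the recipient sees

# Flag messages where the SPF-checked domain is not the displayed domain.
suspicious = authenticated.split("@")[1] != displayed.split("@")[1]
print(suspicious)  # True: the authenticated domain differs from the displayed one
```

A secondary security layer sitting behind the inbox can apply exactly this kind of cross-field consistency check, which is what the default Gmail scan does not do.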

What Is Google Doing About It?
A quick search in Google’s reporting system reveals that they don’t see SPF vulnerabilities as critical, so a fix has not been prioritized. If we hear differently, we will update this blog.

What Can I Do?
In many phishing attacks we have described, we start with the usual advice: ‘educate your users’, ‘suspect every email’, ‘look at the links’, and so on. But this is one example where end-users can do nothing. Their email client shows an authentic-looking email from an internal sender, and there is no warning in the email to indicate anything is wrong with it. In this case, you do need another layer of security on top of the default security from Google.


Saturday Security Spotlight: Cryptomining, AWS, and O365

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:
—Malicious cryptomining the top cybercrime
—New details emerge on unsecured AWS buckets
—Data Keeper ransomware begins to spread
—Office 365 used in recent mass phishing attacks
—SgxSpectre attacking Intel SGX enclaves

Malicious cryptomining the top cybercrime
Since September of 2017, malicious cryptomining has been the most commonly detected cybercrime. With cryptocurrencies growing in value, hackers have increasingly altered their attacks so that victims’ devices can be hijacked to mine Bitcoin, for example. Desktops, mobile devices, and organizations as a whole have fallen prey to these attacks.

New details emerge on unsecured AWS buckets
Over the last few months, unsecured AWS instances have left many organizations vulnerable and, in some cases, have led to breaches. New research by HTTPCS details how often enterprises’ AWS buckets are misconfigured to allow public access: 20% of public AWS S3 buckets can even be edited by the public at large.

Data Keeper ransomware begins to spread
Data Keeper is a new ransomware as a service (RaaS) that is quickly growing in popularity. RaaS typically functions by providing malicious parties (customers on the dark web) with prebuilt platforms that they can use to spread infections and hold users’ data for ransom. In the case of Data Keeper, there were only two days between its creation and the first reported infections.

Office 365 used in recent mass phishing attacks
Phishing attacks are constantly being refined to improve their success rates. In recent weeks, phishing emails disguised as tax-related messages from the government have included Office 365 attachments in an effort to appear more legitimate. Unfortunately, the strategy has been fairly effective – numerous users have opened the documents and unknowingly surrendered their credentials.

SgxSpectre attacking Intel SGX enclaves
The recent Meltdown and Spectre attacks caused great concern throughout the business world, but proved unable to infiltrate Intel’s SGX (Software Guard eXtensions) enclaves. Unfortunately, the more recent SgxSpectre is capable of invading said enclaves and stealing information such as passwords, encryption keys, and more.

Few security tools are capable of handling the breadth of cyberattacks faced by cloud-first organizations. As such, the enterprise must research advanced solutions like cloud access security brokers. To learn more about these next-gen security solutions, download the Definitive Guide to CASBs.

AWS Cloud: Proactive Security and Forensic Readiness – Part 3

Part 3: Data protection in AWS

By Neha Thethi, Information Security Analyst, BH Consulting

This is the third in a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment. This post relates to protecting data within AWS.

Data protection has become all the rage for organizations that are processing personal data of individuals in the EU, because the EU General Data Protection Regulation (GDPR) deadline is fast approaching.

AWS is no exception. The company is providing customers with services and resources to help them comply with GDPR requirements that may apply to their operations. These include granular data access controls, monitoring and logging tools, encryption, key management, audit capability, and adherence to IT security standards (for more information, see the AWS General Data Protection Regulation (GDPR) Center and the Navigating GDPR Compliance on AWS whitepaper). In addition, AWS has published several privacy-related whitepapers, including country-specific ones. The whitepaper Using AWS in the Context of Common Privacy & Data Protection Considerations focuses on typical questions asked by AWS customers when considering privacy and data protection requirements relevant to their use of AWS services to store or process content containing personal data.

This blog, however, is not just about protecting personal data. The following list provides guidance on protecting any information stored in AWS that is valuable to your organisation. The checklist mainly focuses on protection of data (at rest and in transit), protection of encryption keys, removal of sensitive data from AMIs, and understanding data access requests in AWS.

The checklist provides best practice for the following:

  1. How are you protecting data at rest?
  2. How are you protecting data at rest on Amazon S3?
  3. How are you protecting data at rest on Amazon EBS?
  4. How are you protecting data at rest on Amazon RDS?
  5. How are you protecting data at rest on Amazon Glacier?
  6. How are you protecting data at rest on Amazon DynamoDB?
  7. How are you protecting data at rest on Amazon EMR?
  8. How are you protecting data in transit?
  9. How are you managing and protecting your encryption keys?
  10. How are you ensuring custom Amazon Machine Images (AMIs) are secure and free of sensitive data before publishing for internal (private) or external (public) use?
  11. Do you understand who has the right to access your data stored in AWS?

IMPORTANT NOTE: Identity and access management is an integral part of protecting data, however, you’ll notice that the following checklist does not focus on AWS IAM. I have created a separate checklist on IAM best practices here.

Best-practice checklist

1. How are you protecting data at rest?

  • Define polices for data classification, access control, retention and deletion
  • Tag information assets stored in AWS based on adopted classification scheme
  • Determine where your data will be located by selecting a suitable AWS region
  • Use geo restriction (or geoblocking), to prevent users in specific geographic locations from accessing content that you are distributing through a CloudFront web distribution
  • Control the format, structure and security of your data by masking, anonymising or encrypting it in accordance with its classification
  • Encrypt data at rest using server-side or client-side encryption
  • Manage other access controls, such as identity, access management, permissions and security credentials
  • Restrict access to data using IAM policies, resource policies and capability policies

Back to List

2. How are you protecting data at rest on Amazon S3?

  • Use bucket-level or object-level permissions alongside IAM policies
  • Don’t create any publicly accessible S3 buckets. Instead, create pre-signed URLs to grant time-limited permission to download the objects
  • Protect sensitive data by encrypting it at rest in S3. Amazon S3 supports server-side encryption and client-side encryption of user data; with client-side encryption, you create and manage your own encryption keys
  • Encrypt inbound and outbound S3 data traffic
  • Amazon S3 supports data replication and versioning instead of automatic backups. Implement S3 Versioning and S3 Lifecycle Policies
  • Automate the lifecycle of your S3 objects with rule-based actions
  • Enable MFA Delete on S3 bucket
  • Be familiar with the durability and availability options for different S3 storage types – S3, S3-IA and S3-RR.
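The pre-signed URL recommendation above boils down to signing a resource path and an expiry time with a secret key, so access can be granted temporarily without making the bucket public. Here is a minimal conceptual sketch using only the standard library; this is not AWS’s actual Signature Version 4 scheme (in practice you would call boto3’s `generate_presigned_url`), and the key and path are hypothetical:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-signing-key"  # hypothetical key, never shipped to clients

def presign(path: str, ttl_seconds: int, now=None) -> str:
    """Return a URL-style string carrying an expiry and an HMAC signature."""
    expires = (now if now is not None else int(time.time())) + ttl_seconds
    msg = f"{path}?expires={expires}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify(url: str, now=None) -> bool:
    """Reject tampered paths, forged signatures, and expired links."""
    base, _, sig = url.rpartition("&sig=")
    expected = hmac.new(SECRET, base.encode(), hashlib.sha256).hexdigest()
    expires = int(base.rpartition("expires=")[2])
    current = now if now is not None else int(time.time())
    return hmac.compare_digest(sig, expected) and current < expires

url = presign("/bucket/report.pdf", ttl_seconds=300, now=1_000_000)
print(verify(url, now=1_000_100))  # True: within the 300-second window
print(verify(url, now=1_000_400))  # False: the link has expired
```

Because the signature covers both the path and the expiry, a recipient cannot extend the window or point the link at a different object without invalidating it.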

Back to List

3. How are you protecting data at rest on Amazon EBS?

  • AWS creates two copies of your EBS volume for redundancy. However, since both copies are in the same Availability Zone, replicate data at the application level, and/or create backups using EBS snapshots
  • On Windows Server 2008 and later, use BitLocker encryption to protect sensitive data stored on system or data partitions (this needs to be configured with a password as Amazon EC2 does not support Trusted Platform Module (TPM) to store keys)
  • On Windows Server, implement Encrypted File System (EFS) to further protect sensitive data stored on system or data partitions
  • On Linux instances running kernel versions 2.6 and later, you can use dm-crypt with Linux Unified Key Setup (LUKS) for encryption and key management

Back to List

4. How are you protecting data at rest on Amazon RDS?

(Note: Amazon RDS leverages the same secure infrastructure as Amazon EC2. You can use the Amazon RDS service without additional protection, but it is suggested that you encrypt data at the application layer.)

  • Use built-in encryption function that encrypts all sensitive database fields, using an application key, before storing them in the database
  • Use platform level encryption
  • Use MySQL cryptographic functions – encryption, hashing, and compression
  • Use Microsoft Transact-SQL cryptographic functions – encryption, signing, and hashing
  • Use Oracle Transparent Data Encryption on Amazon RDS for Oracle Enterprise Edition under the Bring Your Own License (BYOL) model

Back to List

5. How are you protecting data at rest on Amazon Glacier? (Note: Data stored on Amazon Glacier is protected using server-side encryption. AWS generates a separate unique encryption key for each Amazon Glacier archive and encrypts it using AES-256.)

  • Encrypt data prior to uploading it to Amazon Glacier for added protection

Back to List

6. How are you protecting data at rest on Amazon DynamoDB? (Note: DynamoDB is a shared service from AWS and can be used without added protection, but you can implement a data encryption layer over the standard DynamoDB service.)

  • Use raw binary fields or Base64-encoded string fields, when storing encrypted fields in DynamoDB
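The Base64 recommendation above exists because ciphertext is arbitrary binary and a DynamoDB string attribute needs printable text. A minimal sketch of the round trip (the “ciphertext” here is stubbed out; a real encryption layer would produce it with an authenticated cipher such as AES-GCM from a library like cryptography):

```python
import base64

def to_dynamodb_string(ciphertext: bytes) -> str:
    """Base64-encode ciphertext so it can be stored in a DynamoDB string field."""
    return base64.b64encode(ciphertext).decode("ascii")

def from_dynamodb_string(field: str) -> bytes:
    """Recover the original ciphertext bytes from the stored string."""
    return base64.b64decode(field)

# Pretend these bytes came out of your encryption layer.
ciphertext = b"\x8f\x01\xfe...opaque encrypted bytes..."
stored = to_dynamodb_string(ciphertext)
assert from_dynamodb_string(stored) == ciphertext
print(stored[:16])  # printable ASCII, safe for a string attribute
```

Raw binary attributes avoid the ~33% Base64 size overhead, so prefer them when your client library supports the binary type directly.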

Back to List

7. How are you protecting data at rest on Amazon EMR?

  • Store data permanently on Amazon S3 only, and do not copy to HDFS at all. Apply server-side or client-side encryption to data in Amazon S3
  • Protect the integrity of individual fields or entire file (for example, by using HMAC-SHA1) at the application level while you store data in Amazon S3 or DynamoDB
  • Or, employ a combination of Amazon S3 server-side encryption and client-side encryption, as well as application-level encryption
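The HMAC-SHA1 integrity check mentioned above can be sketched with Python’s standard library: compute a keyed MAC over the field when writing it, and verify the MAC before trusting the field on read. The key and field contents below are illustrative assumptions:

```python
import hashlib
import hmac

INTEGRITY_KEY = b"application-integrity-key"  # hypothetical shared secret

def protect(field: bytes) -> tuple:
    """Return the field plus its HMAC-SHA1 tag, computed before storage."""
    tag = hmac.new(INTEGRITY_KEY, field, hashlib.sha1).hexdigest()
    return field, tag

def check(field: bytes, tag: str) -> bool:
    """Recompute the tag on read; a mismatch means the field was altered."""
    expected = hmac.new(INTEGRITY_KEY, field, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, tag)

record, tag = protect(b'{"order_id": 42, "amount": 100}')
print(check(record, tag))                               # True
print(check(b'{"order_id": 42, "amount": 9000}', tag))  # False: tampered
```

Unlike a plain hash, the keyed MAC means someone who can modify the stored field cannot simply recompute a matching tag without the secret key.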

Back to List

8. How are you protecting data in transit?

  • Encrypt data in transit using IPSec ESP and/or SSL/TLS
  • Encrypt all non-console administrative access using strong cryptographic mechanisms such as SSH, user and site-to-site IPSec VPNs, or SSL/TLS to further secure remote system management
  • Authenticate data integrity using IPSec ESP/AH, and/or SSL/TLS
  • Authenticate remote end using IPSec with IKE with pre-shared keys or X.509 certificates
  • Authenticate the remote end using SSL/TLS with server certificate authentication based on the server common name (CN) or subject alternative name (SAN)
  • Offload HTTPS processing on Elastic Load Balancing to minimise impact on web servers
  • Protect the backend connection to instances using an application protocol such as HTTPS
  • On Windows servers use X.509 certificates for authentication
  • On Linux servers, use SSH version 2 and use non-privileged user accounts for authentication
  • Use HTTP over SSL/TLS (HTTPS) for connecting to RDS, DynamoDB over the internet
  • Use SSH for access to Amazon EMR master node
  • Use SSH for clients or applications to access Amazon EMR clusters across the internet using scripts
  • Use SSL/TLS for Thrift, REST, or Avro

Back to List

9. How are you managing and protecting your encryption keys?

  • Define key rotation policy
  • Do not hard code keys in scripts and applications
  • Securely manage keys at server side (SSE-S3, SSE-KMS) or at client side (SSE-C)
  • Use tamper-proof storage, such as Hardware Security Modules (AWS CloudHSM)
  • Use a key management solution from the AWS Marketplace or from an APN Partner. (e.g., SafeNet, TrendMicro, etc.)

Back to List

10. How are you ensuring custom Amazon Machine Images (AMIs) are secure and free of sensitive data before publishing for internal (private) or external (public) use?

  • Securely delete all sensitive data including AWS credentials, third-party credentials and certificates or keys from disk and configuration files
  • Delete log files containing sensitive information
  • Delete all shell history on Linux
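
A pre-publication scrub can be partly automated. The sketch below uses a common heuristic pattern for AWS access key IDs ("AKIA" followed by 16 upper-case characters); a real scrubber would check many more secret formats:

```python
import re

# Heuristic pattern for AWS access key IDs (a widely used convention:
# "AKIA" followed by 16 upper-case letters or digits).
AKID = re.compile(r"AKIA[0-9A-Z]{16}")

def find_credentials(text: str) -> list:
    """Return suspected AWS access key IDs found in a config or log file."""
    return AKID.findall(text)

clean = "export REGION=eu-west-1"
dirty = "aws_access_key_id = AKIAIOSFODNN7EXAMPLE"

assert find_credentials(clean) == []
assert find_credentials(dirty) == ["AKIAIOSFODNN7EXAMPLE"]
```

Running such a scan across disk images, config files and shell histories before publishing an AMI catches the most obvious leaks.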

Back to List

11. Do you understand who has the right to access your data stored in AWS?

  • Understand the laws applicable to your business and operations, and consider whether laws in other jurisdictions may apply
  • Understand that relevant government bodies may have the right to issue requests for content; each relevant law contains criteria that must be satisfied for a law enforcement body to make a valid request
  • Understand that AWS notifies customers where practicable before disclosing their data so they can seek protection from disclosure, unless AWS is legally prohibited from doing so or there is clear indication of illegal conduct connected with the use of AWS services. For additional information, visit the Amazon Information Requests portal.

Back to List

For more details, refer to the following AWS resources:

Next up in the blog series is Part 4 – Detective Controls in AWS – best practice checklist. Stay tuned.

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, features may have been rolled out while these blogs were being written. Also, please note that this checklist is for guidance purposes only.

34 Cloud Security Terms You Should Know

By Dylan Press, Director of Marketing, Avanan

We hope you use this as a reference not only for yourself but for your team and in training your organization. Print this out and pin it outside your cubicle.

How can you properly research a cloud security solution if you don’t understand what you are reading? We have always believed cloud security should be simple, which is why we created Avanan. In an attempt to simplify it even more, we have created a glossary of 34 commonly misunderstood cloud security terms and what they mean.

Account Takeover

A type of cyber attack in which the hacker spends extended periods of time dormant in a compromised account, spreading silently within the organization through internal messages until they have access to information that is valuable to them. They may use the account to attack other organizations.

Related: Read our whitepaper Cloud Account Takeover

Advanced Persistent Threat (APT)

This is an attack in which the attacker gains access to an account or network and remains undetected after the initial breach. The “advanced” describes the initial breach technique (phishing or malware) that was able to evade the victim’s security. The attack is “persistent” because the attacker continues to carry out the attack through reconnaissance and internal spread long after the initial breach.

Advanced Threat Protection (Microsoft ATP)

Microsoft offers its Advanced Threat Protection for an additional $24 per user per year. It includes capabilities not available in a default Office 365 account:

  • Safe Links: replaces each URL, checking the site before redirecting the user.
  • Safe Attachments: scans attachments for malware.
  • Spoof Intelligence: analyzes external emails that match your domain.
  • Anti-phishing Filters: looks for signs of incoming phishing attacks.

Anomaly

A type of behavior or action that seems abnormal when observed in the context of an organization and a user’s historical activity. It is typically analyzed using some sort of machine-learning algorithm that builds a profile based upon historical event information including login locations and times, data-transfer behavior and email message patterns. Anomalies are often a sign that an account is compromised.
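
As a toy illustration of the baseline idea (real products build far richer profiles than this), a login hour can be flagged when it deviates sharply from a user's history:

```python
from statistics import mean, stdev

def is_anomalous(history: list, value: float, threshold: float = 3.0) -> bool:
    """Flag a value more than `threshold` standard deviations from the baseline."""
    mu, sigma = mean(history), stdev(history)
    return abs(value - mu) > threshold * sigma

# Baseline: a user who habitually logs in around 9am.
login_hours = [8.5, 9.0, 9.2, 8.8, 9.1, 9.3, 8.9]

assert not is_anomalous(login_hours, 9.5)  # ordinary morning login
assert is_anomalous(login_hours, 3.0)      # a 3am login is flagged
```

Production systems combine many such signals (location, transfer volume, message patterns) rather than relying on a single feature.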

API Attack

An API (Application Programming Interface) allows two cloud applications to talk to one other directly, allowing a third party to read or make changes directly within a cloud application. Creating an API connection requires a user’s approval, but once created, runs silently in the background, often with little or no monitoring. An API-based attack typically involves fooling the user into approving an API connection with a phishing attack. Once granted the API token, the attacker has almost complete access and control, even if the user changes the account password. To break the connection, the user must manually revoke the API token.

Behavioral Analysis

A security measure in which a file’s behavior is monitored and analyzed in an isolated environment in order to see if it contains hidden malicious functions or is communicating with an unknown third-party.

Brand Impersonation

A method of phishing attack in which the perpetrator spoofs the branding of a well-known company to fool the recipient into entering credentials, sharing confidential information, transferring money or clicking on a malicious link. An example might be a forged email that looks like it is from a social media company asking to verify a password.

Breach Response

A form of security that remedies the damage caused by a breach. For example, changing passwords, revoking API tokens, resetting permissions for shared documents, enabling multi-factor-authentication, restoring lost or edited documents, documenting and classifying leaked information, identifying potential pathways to collateral compromise.

CASB

An acronym for Cloud Access Security Broker. This is a type of security that monitors and controls the cloud applications that an organization’s employees might use. Typically, the control is enforced by routing web traffic through a forward or reverse proxy. CASBs are good for managing shadow IT and limiting employees’ use of certain SaaS applications (or the activity within them), but they do not monitor third-party activity in the cloud, such as shared documents or email.

Related: Can a CASB Protect You from Phishing or Ransomware?

Cloud Access Trojan

Also known as a CAT, a Cloud Access Trojan describes any method of accessing a cloud account without the use of a username and password; for example, a malicious user syncing a desktop app, forwarding all email to an external account, connecting a malicious script, or simply authorizing a backup service to which they have full access. In each case, the attacker needs only momentary access, often gained through a phishing attack.

Related: Cloud Access Trojan: The Invisible Back Door to Your Enterprise Cloud

Cloud Messaging Apps

Cloud-based communication services, including email, that companies use for internal communication and sometimes extend to trusted partners. Employees often place more trust in these apps, even though they are just as capable of distributing malware or phishing messages.

Cloudification

Taking software that was created for on-premise or datacenter usage, wrapping it with an API container and converting it to a cloud service. For example, taking the malware analysis blade from a perimeter appliance and adapting it so that it can be configured and scaled without the need for direct management. This also includes the automation of software licensing and version control.

Compromised Account

An account which has been accessed and is possibly controlled by an outside party for malicious reasons. This can be done either via API connection or by gaining credentials to the account from a leak or phishing email. Typically, the goal of the attacker is to remain undetected, in order to use it as a base for further attacks.

Related: Account Takeover: A Critical Layer Of Your Email Security

Data Classification

A security and compliance measure in which all of an organization’s documents are scanned, categorized based on their sensitivity, and then automatically encrypted or adjusted to the correct sharing permissions. For example, documents containing customer information or employee social security numbers would be classified as highly sensitive and encrypted, whereas an external-facing white paper would be classified as non-sensitive and likely not encrypted.

DLP (Data Leak Prevention or Data Loss Prevention)

A type of security that prevents sensitive data, usually files, from being shared outside the organization or to unauthorized individuals within the organization. This is done usually through policies that encrypt data or control sharing settings.
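
A minimal sketch of such a policy, assuming a simple regex rule for US social security numbers (real DLP engines use many more detectors and act on sharing settings as well as content):

```python
import re

# Toy DLP policy: block outbound text containing SSN-like patterns.
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def allow_outbound(message: str) -> bool:
    """Return False if the message appears to contain sensitive data."""
    return SSN.search(message) is None

assert allow_outbound("Attached is the public whitepaper.")
assert not allow_outbound("Employee SSN: 123-45-6789")
```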

DRM

Digital Rights Management: a set of access control technologies for restricting the use of confidential information, proprietary hardware and copyrighted works, typically using encryption and key management. (Also see IRM)

Gateway

A gateway is any device or service that traffic is routed through for inspection; in email security, it is often another word for an MTA (please see the definition for MTA).

IRM

Information Rights Management is a subset of Digital Rights Management that protects corporate information from being viewed or edited by unwanted parties typically using encryption and permission management. (also see DRM)

Latency

The added time it takes for an email to be delivered to its intended recipient. Security measures sometimes add latency as they perform scans on the email prior to allowing the email to reach the user’s inbox.

Malconfiguration

A deliberate configuration change within a system by a malicious actor, typically to create back-door access or exfiltrate information. While the original change in configuration might involve a compromised account or other vulnerability, a malconfiguration has the benefit of offering long term access using legitimate tools, without further need of a password or after a vulnerability is closed.

Misconfiguration

A dangerous or unapproved configuration of an account that could potentially lead to a compromise typically done by a well-intentioned user attempting to solve an immediate business problem. While there is no malicious intent, misconfiguration is actually the leading cause of data loss or compromise.

MTA

An acronym for Message Transfer Agent. An MTA is an appliance or service that acts as the authorized server-of-record for electronic messages, eventually passing them on to the final mail server.

Related: 7 Reasons Not to Use an MTA Gateway

Phishing

A type of attack in which a message (often email, but it could be any messaging system) is sent from a malicious party disguised as a trusted source with the intention of fooling the recipient into giving up credentials, money, or confidential data. It often includes a malicious link or file, but could be as simple as a single sentence that prompts some sort of insecure response. (Also see Spearphishing.)

Proxy

A proxy can include any gateway, service or appliance that causes a rerouting of traffic through an appliance or cloud service. For example, a web proxy or CASB will redirect a user’s web browsing in order to decrypt the traffic and block particular applications or data. Mail proxy gateways (see MTA) reroute incoming email in order to scan and block spam, phishing or other malicious email. A proxy is limited in its visibility as it cannot monitor or control traffic it cannot see, i.e. remote and non-employee web usage or internal email traffic.

Quarantine

The act of encrypting, moving or changing the share permissions of a file so that it is unreachable by a user until it can be deemed safe or authorized by the intended recipient.

Ransomware

A type of malware that encrypts the files on an endpoint device using a mechanism for which only the attacker has the keys. While the attacker will offer the key in exchange for payment, fewer than half of victims that do pay actually recover their files.

Sandboxing

A type of security measure that involves testing a file or link in a controlled environment to see what effect it has on the emulated operating system, typically the first line of defense against zero-day attacks for which there is no signature or pre-knowledge of the code.

Shadow IT

Any unapproved cloud-based account or solution implemented by an employee for business use. It might also include the use of an unknown account with an approved provider, but administered by the user rather than corporate IT.

Shadow SaaS

An unapproved cloud application that is connected in some way (typically by API) to that organization’s SaaS or IaaS with access to corporate data but without permission from the organization.

Spearphishing

A type of phishing attack that is designed to target a small number of users, sometimes only one user, such as a CEO. Spear-phishing attacks usually involve intensive research by the hacker to increase the chances that the intended target will fall for it.

Token

A unique authorization key used for API interactions. Each token is granted a certain level of access and control and often continues to provide access until the token is manually revoked.

URL Analysis

A security measure that reviews a link to assess if it is genuine and will direct to a safe and expected destination with no unintended side effects.

URL Impersonation

A technique used in phishing attacks in which the hacker creates a URL that, to the untrained eye, looks like a link to a trusted website. These techniques can be thwarted using URL analysis.
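
A simplified sketch of one URL analysis technique: folding common homoglyphs before comparing a domain against a trusted list (the substitution table and allow-list here are illustrative, not complete):

```python
# Toy homoglyph table mapping characters attackers substitute
# for ASCII letters (illustrative, far from complete).
HOMOGLYPHS = str.maketrans({"0": "o", "1": "l", "3": "e", "5": "s"})

TRUSTED = {"paypal.com", "google.com"}  # hypothetical allow-list

def looks_like_trusted(domain: str) -> bool:
    """True if a domain imitates a trusted one after homoglyph folding."""
    folded = domain.lower().translate(HOMOGLYPHS)
    return folded in TRUSTED and domain.lower() not in TRUSTED

assert looks_like_trusted("paypa1.com")      # digit "1" standing in for "l"
assert looks_like_trusted("g00gle.com")
assert not looks_like_trusted("paypal.com")  # the genuine domain passes
```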

User Impersonation

A technique used in phishing attacks in which the hacker makes their email look like it is coming from a trusted sender, either a corporation or another employee. This can be done by editing their nickname or using an email address that looks like it is from a trusted organization.

We will continue adding to this list; if you have any suggestions for terms to include, please reach out to [email protected].

Are Healthcare Breaches Down Because of CASBs?

By Salim Hafid, Product Marketing Manager, Bitglass

Bitglass just released its fourth annual Healthcare Breach Report, which dives into healthcare breaches over 2017 and compares the rate of breach over previous years. A big surprise this year was the precipitous drop in the volume of breaches and the scope of each attack. Our research team set out to discover why this happened.

Our annual healthcare report is based on breach data from the US Department of Health and Human Services. The government mandates that all healthcare organizations and their affiliates publicly disclose breaches that affect at least 500 individuals. The result is several years of data on the causes of healthcare breaches as well as information about which firms are targeted by attackers.

It seems that after several years of being a top target for hackers looking to steal valuable data, healthcare firms’ security teams are now getting their act together. For each organization in this vertical, security has become a priority. Many are migrating to the cloud in an effort to shift the infrastructure security burden to powerful tech giants like Amazon, Google, and Microsoft. This shift to cloud has also driven many to adopt third-party security solutions that allow them to obtain cross-app security, achieve HIPAA compliance, and mitigate the risk and impact of breaches.

In particular, cloud access security brokers are taking the healthcare sector by storm and are proving to play an important part in preventing breaches. Back in 2015, few had a CASB deployed and many were at risk of massive data loss. Today, forward-thinking organizations like John Muir Health have deployed a Next-Gen CASB to great success. IT administrators can be immediately alerted to high-risk data outflows and new applications that pose a threat, and can define granular policies that prevent mega-breaches of the sort that cost Anthem and Premera hundreds of millions of dollars.

Read the full healthcare breach report to learn about the leading causes of breaches in the sector, the average cost of a stolen health record, and more.

You Are the Weakest Link – Goodbye

By Jacob Serpa, Product Marketing Manager, Bitglass

Security in the cloud is a top concern for the modern enterprise. Fortunately, provided that organizations do their due diligence when evaluating security tools, storing data in the cloud can be even more secure than storing data on premises. However, this does require deploying a variety of solutions for securing data at rest, securing data at access, securing mobile and unmanaged devices, defending against malware, detecting unsanctioned cloud apps (shadow IT), and more. Amidst this rampant adoption of security tools, organizations often forget to bolster the weakest link in their security chain: their users.

The Weak Link in the Chain
While great steps are typically taken to secure data, relatively little thought is given to the behaviors of its users. This is likely due to an ingrained reliance upon static security tools that fail to adapt to situations in real time. Regardless, users make numerous decisions that place data at risk – some less obvious than others. In the search for total data protection, this dynamic human element cannot be ignored.

External sharing is one example of a risky user behavior. Organizations need visibility and control over where their data goes in order to keep it safe. When users send files and information outside of the company, protecting it becomes very challenging. While employees may do this either maliciously or just carelessly, the result is the same – data is exposed to unauthorized parties. Somewhat similarly, this can occur through shadow IT when users store company data in unsanctioned cloud applications over which the enterprise has no visibility or control.

Next, many employees use unsecured public WiFi networks to perform their work remotely. While this may seem like a convenient method of accessing employers’ cloud applications, it is actually incredibly dangerous for the enterprise. Malicious individuals can monitor traffic on these networks in order to steal users’ credentials. Additionally, credentials can fall prey to targeted phishing attacks that are enabled by employees who share too much information on social media. The fact that many individuals reuse passwords across multiple personal and corporate accounts only serves to exacerbate the problem.

In addition to the above, users place data at risk through a variety of other ill-advised behaviors. Unfortunately, traditional, static security solutions have a difficult time adapting to users’ actions and offering appropriate protections in real time.

Reforging the Chain
In the modern cloud, automated security solutions are a must. Reactive solutions that rely upon humans to analyze threats and initiate a response are incapable of protecting data in real time. The only way to ensure true automation is by using machine learning. When tools are powered by machine learning, they can protect data in a comprehensive fashion in the rapidly evolving, cloud-first world.

This next-gen approach can be particularly helpful when addressing threats that stem from compromised credentials and malicious or careless employees. User and entity behavior analytics (UEBA) baseline users’ behaviors and perform real-time analyses to detect suspicious activities. Whether credentials are used by thieving outsiders or employees engaging in illicit behaviors, UEBA can detect threats and respond by enforcing step-up, multi-factor authentication before allowing data access.

Machine learning is helpful for defending against other threats, as well. For example, advanced anti-malware solutions can leverage machine learning to analyze the behaviors of files. In this way, they can detect and block unknown, zero-day malware; something beyond the scope of traditional, signature-based solutions that can only check for documented, known malware.

Even less conventional tools like shadow IT discovery are beginning to be endowed with machine learning. Historically, these solutions have relied upon lists generated by massive human teams that constantly categorize and evaluate the risks of new cloud applications. However, this approach fails to keep pace with the perpetually growing number of new and updated apps. Because of this, leading cloud access security brokers (CASBs) are using machine learning to rank and categorize new applications automatically, enabling immediate detection of new cloud apps in use. In other words, organizations can uncover all of the locations that careless and conniving employees store corporate data.

While training employees in best security practices is necessary, it is not sufficient for protecting data. Education must be paired with context-aware, automated security solutions (like CASBs) in order to reinforce the weak links in the enterprise’s security chain.

AWS Cloud: Proactive Security and Forensic Readiness – Part 2

By Neha Thethi, Information Security Analyst, BH Consulting

Part 2: Infrastructure-level protection in AWS 

This is the second in a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment. This post relates to protecting your virtual infrastructure within AWS.

Protecting any computing infrastructure requires a layered or defense-in-depth approach. The layers are typically divided into physical, network (perimeter and internal), system (or host), application, and data. In an Infrastructure as a Service (IaaS) environment, AWS is responsible for security ‘of’ the cloud including the physical perimeter, hardware, compute, storage and networking, while customers are responsible for security ‘in’ the cloud, or on layers above the hypervisor. This includes the operating system, perimeter and internal network, application and data.

Infrastructure protection requires defining trust boundaries (e.g., network boundaries and packet filtering), system security configuration and maintenance (e.g., hardening and patching), operating system authentication and authorizations (e.g., users, keys, and access levels), and other appropriate policy enforcement points (e.g., web application firewalls and/or API gateways).

The key AWS service that supports service-level protection is AWS Identity and Access Management (IAM) while Virtual Private Cloud (VPC) is the fundamental service that contributes to securing infrastructure hosted on AWS. VPC is the virtual equivalent of a traditional network operating in a data center, albeit with the scalability benefits of the AWS infrastructure. In addition, there are several other services or features provided by AWS that can be leveraged for infrastructure protection.

The following list mainly focuses on network and host-level boundary protection, protecting integrity of the operating system on EC2 instances and Amazon Machine Images (AMIs) and security of containers on AWS.

The checklist provides best practice for the following:

  1. How are you enforcing network and host-level boundary protection?
  2. How are you protecting against distributed denial of service (DDoS) attacks at network and application level?
  3. How are you managing the threat of malware?
  4. How are you identifying vulnerabilities or misconfigurations in the operating system of your Amazon EC2 instances?
  5. How are you protecting the integrity of the operating system on your Amazon EC2 instances?
  6. How are you ensuring security of containers on AWS?
  7. How are you ensuring only trusted Amazon Machine Images (AMIs) are launched?
  8. How are you creating secure custom (private or public) AMIs?

IMPORTANT NOTE: Identity and access management is an integral part of securing an infrastructure; however, you’ll notice that the following checklist does not focus on the AWS IAM service. I have covered this in a separate checklist on IAM best practices here.

Best-practice checklist

1. How are you enforcing network and host-level boundary protection?

  • Establish appropriate network design for your workload to ensure only desired network paths and routing are allowed
  • For large-scale deployments, design network security in layers – external, DMZ, and internal
  • When designing NACL rules, consider that a NACL is a stateless firewall, so be sure to define both outbound and inbound rules
  • Create secure VPCs using network segmentation and security zoning
  • Carefully plan routing and server placement in public and private subnets.
  • Place instances (EC2 and RDS) within VPC subnets and restrict access using security groups and NACLs
  • Use IP address ranges that do not overlap with those of other VPCs or data centres in use
  • Control network traffic by using security groups (stateful firewall, outside OS layer), NACLs (stateless firewall, at subnet level), bastion host, host based firewalls, etc.
  • Use a Virtual Private Gateway (VGW) where Amazon VPC-based resources require remote network connectivity
  • Use IPSec or AWS Direct Connect for trusted connections to other sites
  • Use VPC Flow Logs for information about the IP traffic going to and from network interfaces in your VPC
  • Protect data in transit to ensure the confidentiality and integrity of data, as well as the identities of the communicating parties.
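
The non-overlapping-address item above is straightforward to verify programmatically; for example, Python's standard ipaddress module can check a candidate VPC CIDR against ranges already in use (the ranges below are illustrative):

```python
import ipaddress

def overlaps_existing(candidate: str, in_use: list) -> bool:
    """True if a candidate VPC CIDR overlaps any network already in use."""
    cand = ipaddress.ip_network(candidate)
    return any(cand.overlaps(ipaddress.ip_network(n)) for n in in_use)

existing = ["10.0.0.0/16", "172.16.0.0/12"]  # illustrative in-use ranges

assert overlaps_existing("10.0.5.0/24", existing)        # inside 10.0.0.0/16
assert not overlaps_existing("192.168.0.0/24", existing)  # safe to allocate
```

Running such a check before provisioning avoids the routing conflicts that overlapping VPC and data centre ranges cause later, when peering or VPN connections are added.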

2. How are you protecting against distributed denial of service (DDoS) attacks at network and application level?

  • Use firewalls including Security groups, network access control lists, and host based firewalls
  • Use rate limiting to protect scarce resources from overconsumption
  • Use Elastic Load Balancing and Auto Scaling to configure web servers to scale out when under attack (based on load), and shrink back when the attack stops
  • Use AWS Shield, a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS
  • Use Amazon CloudFront to absorb DoS/DDoS flooding attacks
  • Use AWS WAF with AWS CloudFront to help protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources
  • Use Amazon CloudWatch to detect DDoS attacks against your application
  • Use VPC Flow Logs to gain visibility into traffic targeting your application.
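
The rate-limiting item above is often implemented as a token bucket. Here is a minimal single-threaded sketch (production limiters are distributed and thread-safe, and would typically refill over time):

```python
import time

class TokenBucket:
    """Simple token-bucket limiter: refuse requests once the bucket is empty."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=0.0)  # no refill, for illustration
results = [bucket.allow() for _ in range(5)]
assert results == [True, True, True, False, False]  # burst absorbed, flood rejected
```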

3. How are you managing the threat of malware?

  • Give users the minimum privileges they need to carry out their tasks
  • Patch external-facing and internal systems to the latest security level.
  • Use a reputable and up-to-date antivirus and antispam solution on your system.
  • Install host based IDS with file integrity checking and rootkit detection
  • Use IDS/IPS systems for statistical/behavioural or signature-based algorithms to detect and contain network attacks and Trojans.
  • Launch instances from trusted AMIs only
  • Only install and run trusted software from a trusted software provider (note: MD5 or SHA-1 should not be trusted if software is downloaded from a random source on the internet)
  • Avoid SMTP open relay, which can be used to spread spam, and which might also represent a breach of the AWS Acceptable Use Policy.
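
The file-integrity checking mentioned above boils down to comparing current file hashes against a trusted baseline. A minimal sketch using SHA-256 (consistent with the note about not trusting MD5/SHA-1), with illustrative paths and contents:

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest; preferred over MD5/SHA-1 for integrity checking."""
    return hashlib.sha256(data).hexdigest()

# Baseline manifest built at deploy time (path and contents are illustrative).
manifest = {"/etc/app.conf": digest(b"port=443\n")}

def verify(path: str, current: bytes) -> bool:
    """Compare a file's current contents against the recorded baseline."""
    return manifest.get(path) == digest(current)

assert verify("/etc/app.conf", b"port=443\n")       # unchanged file passes
assert not verify("/etc/app.conf", b"port=8080\n")  # drift or tampering detected
```

Host-based IDS tools apply the same idea continuously across system binaries and configuration files.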

4. How are you identifying vulnerabilities or misconfigurations in the operating system of your Amazon EC2 instances?

  • Define approach for securing your system, consider the level of access needed and take a least-privilege approach
  • Open only the ports needed for communication, harden OS and disable permissive configurations
  • Remove or disable unnecessary user accounts.
  • Remove or disable all unnecessary functionality.
  • Change vendor-supplied defaults prior to deploying new applications.
  • Automate deployments and remove operator access to reduce attack surface area using tools such as EC2 Systems Manager Run Command
  • Ensure operating system and application configurations, such as firewall settings and anti-malware definitions, are correct and up-to-date; Use EC2 Systems Manager State Manager to define and maintain consistent operating system configurations
  • Ensure an inventory of instances and installed software is maintained; Use EC2 Systems Manager Inventory to collect and query configuration about your instances and installed software
  • Perform routine vulnerability assessments when updates or deployments are pushed; Use Amazon Inspector to identify vulnerabilities or deviations from best practices in your guest operating systems and applications
  • Leverage automated patching tools such as EC2 Systems Manager Patch Manager to help you deploy operating system and software patches automatically across large groups of instances
  • Use AWS CloudTrail, AWS Config, and AWS Config Rules as they provide audit and change tracking features for auditing AWS resource changes.
  • Use template definition and management tools, including AWS CloudFormation to create standard, preconfigured environments.

5. How are you protecting the integrity of the operating system on your Amazon EC2 instances?

  • Use file integrity controls for Amazon EC2 instances
  • Use host-based intrusion detection controls for Amazon EC2 instances
  • Use a custom Amazon Machine Image (AMI) or configuration management tools (such as Puppet or Chef) that provide secure settings by default.

6. How are you ensuring security of containers on AWS?

  • Run containers on top of virtual machines
  • Run small images, remove unnecessary binaries
  • Use many small instances to reduce attack surface
  • Segregate containers based on criteria such as role or customer and risk
  • Set containers to run as non-root user
  • Set filesystems to be read-only
  • Limit container networking; Use AWS ECS to manage containers and define communication between containers
  • Leverage Linux kernel security features using tools like SELinux, Seccomp, AppArmor
  • Perform vulnerability scans of container images
  • Allow only approved images during build
  • Use tools such as Docker Bench to automate security checks
  • Avoid embedding secrets into images or environment variables; use S3-based secrets storage instead.

7. How are you ensuring only trusted Amazon Machine Images (AMIs) are launched?

  • Treat shared AMIs as any foreign code that you might consider deploying in your own data centre and perform the appropriate due diligence
  • Look for description of shared AMI, and the AMI ID, in the Amazon EC2 forum
  • Check aliased owner in the account field to find public AMIs from Amazon.

8. How are you creating secure custom (private or public) AMIs?

  • Disable root API access keys and secret key
  • Configure Public Key authentication for remote login
  • Restrict access to instances from limited IP ranges using Security Groups
  • Use bastion hosts to enforce control and visibility
  • Protect the .pem file on user machines
  • Delete keys from the authorized_keys file on your instances when someone leaves your organization or no longer requires access
  • Rotate credentials (DB, Access Keys)
  • Regularly run least privilege checks using IAM user Access Advisor and IAM user Last Used Access Keys
  • Ensure that software installed does not use default internal accounts and passwords.
  • Change vendor-supplied defaults before creating new AMIs
  • Disable services and protocols that authenticate users in clear text over the network, or otherwise insecurely.
  • Disable non-essential network services on startup. Only administrative services (SSH/RDP) and the services required for essential applications should be started.
  • Ensure all software is up to date with relevant security patches
  • For instantiated AMIs, update security controls by running custom bootstrapping Bash or Microsoft Windows PowerShell scripts, or use bootstrapping applications such as Puppet, Chef, Capistrano, Cloud-Init and Cfn-Init
  • Follow a formalised patch management procedure for AMIs
  • Ensure that the published AMI does not violate the Amazon Web Services Acceptable Use Policy. Examples of violations include open SMTP relays or proxy servers. For more information, see the Amazon Web Services Acceptable Use Policy

Security at the infrastructure level, or any level for that matter, certainly requires more than just a checklist. For a comprehensive insight into infrastructure security within AWS, we suggest reading the following AWS whitepapers – AWS Security Pillar and AWS Security Best Practices.

For more details, refer to the following AWS resources:

Next up in the blog series is Part 3 – Data Protection in AWS – a best practice checklist. Stay tuned.

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, there may be features being rolled out as these blogs were being written. Also, please note that this checklist is for guidance purposes only.

Saturday Security Spotlight: Tesla, FedEx, & the White House

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

—Tesla hacked and used to mine cryptocurrency
—FedEx exposes customer data in AWS misconfiguration
—White House releases cybersecurity report
—SEC categorizes knowledge of unannounced breaches as insider information
—More Equifax data stolen than initially believed

Tesla hacked and used to mine cryptocurrency
By targeting an unsecured administrative console in Tesla’s deployment of Kubernetes, the open-source container orchestration system originally developed by Google, hackers were able to infiltrate the company. The malicious parties then obtained credentials for Tesla’s AWS environment, gained access to proprietary information, and began running scripts to mine cryptocurrency using Tesla’s computing power.

FedEx exposes customer data in AWS misconfiguration
FedEx is one of the latest companies to suffer from an AWS misconfiguration. Bongo, acquired by FedEx in 2014 and subsequently renamed CrossBorder, is reported to have left an Amazon S3 bucket completely unsecured, exposing the data of nearly 120,000 customers. While it is believed that no data theft occurred, the company still left sensitive information (like customer passport details) exposed for an extended period.

White House releases cybersecurity report
In light of the escalating costs of cyberattacks in the United States, the White House released a report scrutinizing the current state of cybersecurity. In particular, the report recognized the critical link between cybersecurity and the economy at large. Should other countries execute cyberattacks against organizations responsible for US infrastructure, the repercussions could be severe.

SEC categorizes knowledge of unannounced breaches as insider information
The Securities and Exchange Commission recently announced that knowledge of unannounced breaches is insider information that should not be used to inform the purchase or sale of stock. This comes largely in response to Intel and Equifax executives selling stock before their companies announced breaches.

More Equifax data stolen than initially believed
In September of 2017, Equifax announced a massive breach that leaked names, home addresses, Social Security Numbers, and more. Interestingly (and frighteningly), it now appears that even more data was leaked than the company originally reported.

FedRAMP – Three Stages of Vulnerability Scanning and their Pitfalls

By Matt Wilgus, Practice Leader, Threat & Vulnerability Assessments, Schellman & Co.

Though vulnerability scanning is only one of the control requirements in FedRAMP, it is one of the most frequent pitfalls in terms of impact on an authorization to operate (ATO), because FedRAMP requires cloud service providers (CSPs) to have a mature vulnerability management program. A CSP needs the right people, processes, and technologies in place, and must successfully demonstrate maturity in all three. CSPs that have an easier time with the vulnerability scanning requirements follow a similar approach, which is best articulated by breaking the expectations down into three stages.

1. Pre-Assessment

Approximately 60 to 90 days before an expected security assessment report (SAR), a CSP should provide the third-party assessment organization (3PAO) a recent set of scans, preferably from the most recent three months. The scan data should be provided in a format that can be parsed by the 3PAO. Several questions can be answered by providing scans well ahead of time:

  • Credentials – Are the scans being conducted from an authenticated perspective with a user having the highest level of privileges available?
  • Scan Types – Are infrastructure, database, and web application scans being performed?
  • Points of Contact – Who is responsible for configuring the scanner and running scans? Who is responsible for remediation?
  • Entire Boundary Covered – Is the full, in-scope environment being scanned?
  • Remediation – Are high severity findings being remediated in 30 days? Are moderate severity findings being remediated within 90 days?
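The remediation windows above (30 days for high severity, 90 for moderate) lend themselves to a mechanical check. Here is an illustrative sketch, assuming findings have been exported as (id, severity, age-in-days) tuples; the data shape is an assumption, not a FedRAMP-mandated format:

```python
# FedRAMP remediation windows, in days, by severity.
REMEDIATION_SLA_DAYS = {"high": 30, "moderate": 90}

def overdue_findings(findings):
    """Return the findings whose age exceeds the remediation window.

    `findings` is an iterable of (finding_id, severity, age_days) tuples;
    severities with no hard deadline here (e.g., low) are skipped.
    """
    overdue = []
    for finding_id, severity, age_days in findings:
        limit = REMEDIATION_SLA_DAYS.get(severity.lower())
        if limit is not None and age_days > limit:
            overdue.append((finding_id, severity, age_days))
    return overdue
```

Running this against each monthly export gives an early warning of exactly the question a 3PAO will ask during the assessment.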

Within the pre-assessment, having all plugins enabled is frequently an area of discussion, as many CSPs want to disable plugins or sets of checks. Should a check need to be disabled, there must be a documented reason (e.g., a given plugin degrades performance or causes a denial of service). Do not disable checks simply because it is assumed a given type of asset does not exist in the environment.

Properly configured and authenticated vulnerability scanners will typically not send families of vulnerability checks against hosts if the operating system or application does not match what is required by the family of checks (e.g., NetWare checks will not be run if NetWare is not detected during the scan of the environment). The safest bet is to always enable everything. If a given check needs to be disabled, it should be noted as an exception with formal documentation detailing why it is disabled, and what processes are in place to ensure the vulnerability being detected is covered by other mitigating factors.

The pre-assessment phase is also a good time for the CSP to document any known false positives that occur within the scan results and any operational requirements that prevent remediation from occurring.

2. Assessment

During the assessment kickoff, the CSP should be ready for the 3PAO to conduct vulnerability scans. If the CSP successfully addresses the questions in the pre-assessment phase, then any findings or issues during the assessment phase should be easy to address. There are three main areas to tackle while reviewing the scan data in the assessment phase:

  1. Current Picture – What vulnerabilities exist in the environment as of the current date?
  2. Reassurance on Remediation – Are vulnerabilities continuing to be remediated in a timely manner?
  3. Adjustments – What changes have been made since the pre-assessment?

Of the aforementioned three items, adjustments often have the biggest impact. Adjustments that frequently occur and need to be addressed include cases where the:

  • vulnerability scanning tool has changed
  • scan checks have been modified
  • personnel responsible for configuring and running the scans are no longer with the organization
  • technologies within the environment have changed
  • environment hosting the solution has changed

If any of these adjustments exist, the 3PAO will need to perform additional validation activities.

3. Final Scan

A final round of scans should be run by the CSP five to ten days prior to the issuance of the SAR. At this point, all questions related to the personnel running the scans, the processes deployed, and the technologies implemented should be answered. The last set of scans should be limited in scope and used to show evidence of remediation activities on the vulnerabilities identified in the assessment phase. There are three primary goals related to the last piece of scan evidence:

  1. Targeted scans – Has a final set of scans that shows remediation of findings from the assessment phase been provided?
  2. Operational Requirements (OR) and False Positives (FP) – Are all ORs and FPs documented, reviewed and understood?
  3. Ready for Continuous Monitoring – Are there any high severity findings remaining, and is the CSP ready to provide monthly results to an agency or the Joint Authorization Board (JAB)?

High severity findings are highlighted due to their outsized impact on a FedRAMP ATO. A CSP cannot receive a recommendation for an ATO if any high severity vulnerabilities are present. Should any findings persist as of the date the SAR is issued, they should be tracked in the CSP's Plan of Action and Milestones (POA&M).
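The gating logic above reduces to a simple rule: high severity findings block the ATO recommendation outright, and anything else still open at SAR issuance belongs on the POA&M. A hypothetical sketch (the dict shape and field names are illustrative):

```python
def ato_readiness(open_findings):
    """Split open findings into ATO blockers and POA&M items.

    `open_findings` is a list of dicts with 'id' and 'severity' keys.
    High severity items prevent an ATO recommendation; everything else
    persisting at SAR issuance is tracked on the POA&M.
    """
    blockers = [f for f in open_findings if f["severity"] == "high"]
    poam_items = [f for f in open_findings if f["severity"] != "high"]
    return {
        "ato_recommendable": not blockers,  # True only with zero high findings
        "blockers": blockers,
        "poam": poam_items,
    }
```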

For additional information on the timing and handling of vulnerability scans, please see the following documents on the FedRAMP website:


Securing the Internet of Things: Connected Cars

By Ranjeet Khanna, Director of Product Management–IoT/Embedded Security, Entrust Datacard

Establishing safety and security in automotive design goes far beyond crash test dummies.

By 2022, the global automotive Internet of Things (IoT) market is expected to skyrocket to $82.79 billion – and manufacturers are racing to capitalize on this growing opportunity. While embedded computation and networking have been around since the 1980s, the advent of connectivity opens up an array of new options for automakers. From advanced collision detection and predictive diagnostics, to entertainment systems that load a driver’s favorite tunes the second they sit down, connected cars are poised to enhance the consumer experience.

Those extra conveniences, however, aren’t without their downsides. If not properly secured, connected cars threaten to expose sensitive consumer information. With data being passed between so many different connected channels, it’s easier than ever for hackers to get their hands on personally identifiable information.

In 2015, Chrysler announced a recall of 1.4 million vehicles after two technology researchers hacked into a Jeep Cherokee’s dashboard connectivity system. But the right security solutions can make such incidents a thing of the past.

Through new IoT security solutions, automotive manufacturers are able to assign a trusted identity to each and every device – regardless of whether it’s located inside a vehicle or across the IoT ecosystem. This extra layer of security sets the stage for trusted communication between authorized users, devices and applications. Ensuring the right security level for the right device helps prevent data being made accessible to unauthorized users or devices. Using cryptographic protection as well as strong authorization requirements will restrict access to those things, systems and users with the proper privileges.
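As one illustration of the trusted-identity idea, a back-end service can confirm that a message really came from a provisioned device via a challenge-response check against a per-device credential. The sketch below uses HMAC with a shared secret purely for illustration; a deployment like the one described would typically rely on PKI certificates, and the device IDs and keys here are hypothetical:

```python
import hashlib
import hmac
import secrets

# Hypothetical registry of per-device secrets provisioned at manufacture.
DEVICE_KEYS = {"telematics-unit-001": b"per-device-provisioned-secret"}

def issue_challenge():
    """Server side: generate a fresh random challenge to prevent replay."""
    return secrets.token_bytes(16)

def device_respond(device_id, challenge):
    """Device side: sign the challenge with the device's provisioned key."""
    return hmac.new(DEVICE_KEYS[device_id], challenge, hashlib.sha256).digest()

def verify_device(device_id, challenge, response):
    """Server side: accept only a response matching the expected HMAC."""
    key = DEVICE_KEYS.get(device_id)
    if key is None:
        return False  # unknown device: no trusted identity
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

The constant-time comparison (`compare_digest`) matters here: a naive equality check can leak timing information to an attacker probing the verifier.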

In addition to creating a trusted IoT ecosystem, automotive designers stand to realize significant business value. Instead of spending precious time determining which devices to trust, solutions such as Entrust Datacard’s ioTrust make it easy not only to recognize trusted devices, but to operationalize them. That same convenience also extends to the supply chain, where manufacturers can get a better look at a product’s entire lifecycle – from creation to release.

IoT has burst onto the scene in a big way, especially in the quest to securely design the next connected car. But before making the most of automotive IoT, manufacturers must consider how to keep consumer data under wraps. By provisioning managed identities and authorization privileges, ioTrust paves the way for securely connected automotive systems.

Note: This is part of a blog series on Securing the IoT. 

CASBs and Education’s Flight to the Cloud

By Jacob Serpa, Product Marketing Manager, Bitglass

Cloud is becoming an integral part of modern organizations seeking productivity and flexibility. For higher education, cloud enables online course creation, dynamic collaboration on research documents, and more. As many cloud services like G Suite are discounted or given to educational institutions for free, adoption is made even simpler. However, across the multiple use cases in education, comprehensive security solutions must be used to protect data wherever it goes. The vertical as a whole needs real-time protection on any app, any device, anywhere.

The Problems
For academic institutions, research is often of critical importance. Faculty members create, share, edit, and reshare various documents in an effort to complete projects and remain at the cutting edges of their fields. Obviously, using cloud apps facilitates this process of collaboration and revision. However, doing so in an unsecured fashion can allow proprietary information to leak to unauthorized parties.

Another point of focus in education is how student and faculty PII (personally identifiable information) is used and stored in the cloud. As information moves to cloud apps, traditional security solutions fail to provide adequate visibility and control over data. Obviously, this creates compliance concerns with regulations, like FISMA and FERPA, that aim to protect personal information. Medical schools have the additional requirement of securing protected health information (PHI) and complying with HIPAA.

The Solutions
Fortunately, cloud access security brokers (CASBs) offer a variety of capabilities that address the above security concerns. Data leakage prevention, for example, can be used to protect data and reach regulatory compliance. DLP policies allow organizations to redact data like PII, quarantine sensitive files, and watermark and track documents. Encryption can be used to obfuscate sensitive data and prevent unauthorized users from viewing things like PHI. Contextual access controls govern data access based on factors like user group, geographical location, and more.
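A minimal sketch of the redaction capability described above, using US Social Security numbers as the PII pattern to mask; real CASB DLP engines use far richer detection (validation, context, fingerprinting), so this pattern is illustrative only:

```python
import re

# Simple SSN-shaped pattern: three digits, two digits, four digits.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssns(text, mask="XXX-XX-XXXX"):
    """Replace anything matching the SSN pattern with a fixed mask."""
    return SSN_PATTERN.sub(mask, text)
```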

To secure cloud, present-day organizations must also secure mobile data access. Fortunately, agentless mobile security solutions enable BYOD without requiring installations on unmanaged devices. This is critical for ensuring device functionality, user privacy, and employee adoption. Some agentless solutions can enforce device security configurations like PIN codes, selectively wipe corporate data on any device, and more.

Saturday Security Spotlight: Malware, AWS, and US Defense

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

—AndroRAT malware spies on Android users
—Smart TVs easily hackable
—BuckHacker tool finds unsecured data in AWS buckets
—Octoly breach exposes social media stars’ personal data
—Russian hackers target US defense contractors

AndroRAT malware spies on Android users
A new type of malware targeting Android devices gives hackers extensive control over users’ phones. The threat allows malicious parties to use devices’ microphones (to record audio), cameras (to take pictures) and files (to steal information). This is obviously a large privacy concern for Android users around the world.

Smart TVs easily hackable
As new types of devices connect to the internet, nefarious individuals have more targets to attack. In particular, Samsung and Roku televisions were recently deemed to have multiple vulnerabilities. Hackers can target certain security gaps to control volume, channel, and more. This raises additional privacy concerns around consumers being monitored within their homes.

BuckHacker tool finds unsecured data in AWS buckets
Whitehat hackers recently created a tool that uncovers publicly available information resting within AWS buckets. While the tool is designed to help organizations uncover their misconfigurations within AWS, it also highlights the growing ease with which malicious hackers can steal unsecured data in the cloud.

Octoly breach exposes social media stars’ personal data
Brand marketing company Octoly was recently the victim of a breach, leaking the personal information of over 12,000 social media celebrities through, once again, an unsecured AWS S3 bucket. Data was exposed in the cloud for about a month before the vulnerability was noticed.

Russian hackers target US defense contractors
Hackers belonging to the Russian Fancy Bears group have been targeting US defense contractors. In an attempt to steal information about secret military technology and projects, they have been using targeted phishing emails. This can obviously have extensive ramifications for the country’s national security.

In order to address leaks, hacks, and malware, organizations must utilize next-gen security solutions. To learn about cloud access security brokers, download the Definitive Guide to CASBs.

Unmanaged Device Controls, External Sharing, and Other Real CASB Use Cases

By Salim Hafid, Product Marketing Manager, Bitglass

Many in the security industry have heard about CASBs (cloud access security brokers) as the go-to solutions for data and threat protection in the cloud. But where exactly do CASBs slot in? If you already have a next-gen firewall (NGFW) or a secure-web-gateway-type solution, why invest in deploying a CASB?

Below, we will home in on three of the most common real-world use cases for a cloud access security broker.

External Sharing
Most cloud applications have some form of built-in external sharing control. Perhaps an administrator is able to revoke access to certain documents, set granular permissions across the organization, or block sharing on the whole.

For organizations with multiple cloud apps, setting these controls within each app can be cumbersome. What’s more, not all apps share the same security capabilities. While Office 365 may feature granular sharing controls, an enterprise messaging app like Slack, which also enables external sharing, does not. A lack of feature parity across applications contributes to a core CASB use case – the ability to set external sharing controls for any app. This is done by leveraging APIs provided by each app vendor.
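The cross-app control described above boils down to evaluating each app's sharing state against one central policy. A hypothetical sketch, assuming each app's API can report a file's collaborators as email addresses (the domain set and data shape are illustrative):

```python
# The organization's own domain(s); anything else counts as external.
ALLOWED_DOMAINS = {"example.com"}

def flag_external_shares(files):
    """Return (file_name, external_collaborators) for out-of-policy shares.

    `files` maps file names to lists of collaborator email addresses,
    as returned by each cloud app's sharing API.
    """
    flagged = []
    for name, collaborators in files.items():
        external = [addr for addr in collaborators
                    if addr.rsplit("@", 1)[-1] not in ALLOWED_DOMAINS]
        if external:
            flagged.append((name, external))
    return flagged
```

The point is that the policy lives in one place; the per-app API adapters feed it, which is exactly what makes the CASB model workable across apps with uneven native controls.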

Cloud Malware Protection
Perhaps all managed endpoints in your organization feature some sort of malware scanning – a traditional and reliable approach to blocking known malware once it hits the device. The cloud malware challenge, however, is a whole different ballgame.

Cloud malware comes in many forms and is a major threat because of the rate at which it spreads. Say a spreadsheet with embedded malware is uploaded to a cloud application. That malware is likely to remain at rest in the cloud and can easily be transmitted to a connected cloud application or downloaded to a user’s device. Without cloud malware protection, IT has no way of identifying these threats. Cloud apps, intended for productivity and improved security, instead become a means of malware distribution. Only a CASB, with threat prevention capabilities that stretch across applications, can detect malware in real time as it’s uploaded. By combining a best-in-class AI-based malware engine with multi-protocol proxies, Bitglass helps organizations in every sector limit the risks of cloud malware.
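One of the simplest layers in an upload-time scanning pipeline is a hash check against known-bad samples; the AI-based detection mentioned above goes well beyond this to catch unknown variants, so treat this as an illustrative baseline only (the blocklist contents here are made up):

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known malware samples.
KNOWN_MALWARE_HASHES = {
    hashlib.sha256(b"pretend-malicious-payload").hexdigest(),
}

def scan_upload(file_bytes):
    """Block an upload if its digest matches a known-bad sample."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return "blocked" if digest in KNOWN_MALWARE_HASHES else "allowed"
```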

Unmanaged Device Access Control
The most critical of CASB use cases is the ability for an organization to control access from unmanaged devices. Demand for bring your own device (BYOD) programs has reached unprecedented new heights, pushing IT departments to rethink their security stances with respect to unmanaged device access.

Given that employees are likely to work around IT if they are unable to work from their personal devices (particularly in the age of cloud, where off-network access is highly common), steps must be taken to extend secure access to unmanaged endpoints. With a CASB, enterprises can focus on protecting data as opposed to protecting devices or infrastructure. IT-defined policies can prevent downloads of sensitive data and apply protections with built-in data loss prevention (DLP). Identify, remediate, and secure sensitive corporate data in any app, any device, anywhere.
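The device-aware policies described above reduce to a decision per access request. A simplified sketch, assuming three inputs a CASB proxy can observe (device management state, requested action, data sensitivity); the tiers and return values are illustrative, not any vendor's actual policy model:

```python
def access_decision(device_managed, action, sensitive):
    """Decide how a CASB proxy handles a request for corporate data.

    Managed devices get full access. Unmanaged devices may view data,
    but sensitive downloads are blocked, and sensitive views are served
    through in-line DLP (e.g., redaction or watermarking).
    """
    if device_managed:
        return "allow"
    if action == "download" and sensitive:
        return "block"
    if sensitive:
        return "allow_with_dlp"
    return "allow"
```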

To learn more, download the Top CASB Use Cases.

A Home for CASB

By Kyle Watson, Partner, Information Security, Cedrus

Over the past 18 months, I’ve been working on CASB in some form or another including:

—Educational architectural and technical videos
—Request for Proposal (RFP) assistance
—Pre-sales presentations and demos
—Proof of Concepts (POCs)
—Operations build-out and transition

I’ve discovered some interesting things working with vendors, clients, and our own security technical staff here at Cedrus. One of them is about the ownership model. There is not a 1:1 mapping when you compare CASB solution features to the structures of the organizations deploying them. There seems to be a lack of organizational placement, a permanent home, when it comes to CASB; this extends to both technology and business process ownership.

Most CASB solutions are a natural evolution out of the network technology layer, and so are many of the key players at CASB vendors. These folks are experts in networks, firewalls, proxies, Intrusion Detection Systems (IDS)/Intrusion Prevention Systems (IPS), Security Information and Event Management (SIEM), etc.

However, many of the features being offered by CASB extend into areas that don’t typically overlap with the responsibilities of the teams that run these areas of the Security Operations Center (SOC). These include things like Identity and Access Management (IAM), Data Loss Prevention (DLP), Encryption, Application Programming Interface (API) integration, and Malware prevention. Working on technical integrations with CASB, there is a need to bridge several groups that are often separate in enterprises:

  1. Networks/Firewalls/Proxies
  2. Active Directory Admins
  3. Identity and Access Management (IAM) Team(s)
  4. Information/Data Protection
  5. Public Key Infrastructure (PKI) / Encryption, if it is separate from the other teams

That’s only the technical part. From an operational perspective, most of the work CASBs do is directly related to people, applications, and data. For instance:

  • Encrypt Protected Health Information (PHI) when it gets stored in Google
  • Scan all documents in the corporate OneDrive to find and move Personally Identifiable Information (PII)
  • Prevent people from uploading confidential documents as attachments on LinkedIn

This brings up the question: What is the best group for management of CASB?

All of this means that we need people constructing and approving policy who understand what is important to the business, what regulatory mandates instruct the organization to do, and what makes a “good” cloud app vendor versus a “risky” one. A strong grasp of the change control process must be established and followed. As with SIEM, false-positive alert tuning has to be done by this team within the CASB tool in order to get useful alerting that can drive concrete action. We also need these folks to be able to understand and/or work with IAM Federated Single Sign-On (SSO) configurations and redirects, PKI certificates, and DLP policies. Finally, this group has to be able to engage the business constructively, to help it transition from risky to sanctioned apps, and to educate personnel on risky actions.

With CASB being so new, many organizations have deployed only a small portion of the functionality, such as the application discovery features that can assist in reining in ever-expanding Shadow IT. Discovery functionality can easily be managed by an existing team as a secondary responsibility; this person or team can produce reports that are reviewed out of band, with action taken as needed.

A home for CASB
As CASB solutions get integrated with full enterprise security systems and processes, this won’t be enough. At minimum, a Center of Excellence (COE) will have to be established for CASB. Long term, I believe a dedicated business service is needed to leverage the solution effectively for maximum risk reduction with minimum business disruption. I would love to hear other views on this as well, so please comment and share your insight!

Malware P.I. – Odds Are You’re Infected

By Jacob Serpa, Product Marketing Manager, Bitglass

In Bitglass’ latest report, Malware P.I., the Next-Gen CASB company uncovered startling information about the rate of malware infection amongst organizations. Additionally, experiments with a new piece of zero-day malware yielded shocking results. Here is a glimpse at some of the outcomes.

Nearly half of organizations have malware in one of their cloud apps
While the cloud endows organizations with great flexibility, efficiency, and collaboration, cloud apps and personal devices accessing corporate data can inadvertently house and spread malware. However, this does not mean that operating in the cloud is inherently more dangerous than the traditional way of doing things. In the cloud, threats merely adopt new forms and require novel methods of defense. For organizations that fail to adopt cloud-first security solutions like cloud access security brokers (CASBs) that are complete with advanced threat protection (ATP), the consequences can be severe. A single piece of malware is enough to inflict massive damage to any enterprise.

Zero-day malware “ShurL0ckr” detected by Cylance but not Microsoft or Google
In addition to uncovering the above information, Bitglass’ Threat Research Team also discovered a new variety of ransomware. Dubbed “ShurL0ckr,” the threat encrypts users’ data and demands a ransom in exchange for decryption. Armed with this zero-day malware, tests were performed with a variety of antivirus engines. Cylance, a Bitglass technology partner that uses machine learning to detect unknown threats, was able to detect the ransomware. However, few other engines proved capable of doing so.

Somewhat alarmingly, native ATP tools within Microsoft SharePoint and Google Drive were unable to detect ShurL0ckr. This highlights the growing dangers of relying solely upon cloud applications’ native security features. When adopting cloud apps, it is imperative that organizations also adopt advanced, specialized security solutions. In this way, they can ensure that their data is completely secured.

To learn more about malware’s assault on the enterprise, download Malware P.I.