Saturday Security Spotlight: Cryptomining, AWS, and O365

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:
—Malicious cryptomining the top cybercrime
—New details emerge on unsecured AWS buckets
—Data Keeper ransomware begins to spread
—Office 365 used in recent mass phishing attacks
—SgxSpectre attacking Intel SGX enclaves

Malicious cryptomining the top cybercrime
Since September 2017, malicious cryptomining has been the most commonly detected cybercrime. With cryptocurrencies growing in value, hackers have increasingly retooled their attacks to hijack victims’ devices for mining currencies such as Bitcoin. Desktops, mobile devices, and entire organizations have fallen prey to these attacks.

New details emerge on unsecured AWS buckets
Over the last few months, unsecured AWS instances have left many organizations vulnerable and, in some cases, have led to breaches. New research by HTTPCS quantified how frequently enterprises’ AWS buckets are misconfigured to allow public access, finding that 20% of public AWS S3 buckets can even be edited by the public at large.

Data Keeper ransomware begins to spread
Data Keeper is a new ransomware as a service (RaaS) that is quickly growing in popularity. RaaS typically functions by providing malicious parties (customers on the dark web) with prebuilt platforms that they can use to spread infections and hold users’ data for ransom. In the case of Data Keeper, there were only two days between its creation and the first reported infections.

Office 365 used in recent mass phishing attacks
Phishing attacks are constantly being refined to improve their success rates. In recent weeks, phishing emails disguised as tax-related messages from the government have included Office 365 attachments in an effort to appear more legitimate. Unfortunately, the strategy has been fairly effective – numerous users have opened the documents and unknowingly surrendered their credentials.

SgxSpectre attacking Intel SGX enclaves
The recent Meltdown and Spectre attacks caused great concern throughout the business world but proved unable to infiltrate Intel’s SGX (Software Guard eXtensions) enclaves. Unfortunately, the more recent SgxSpectre is capable of penetrating these enclaves and stealing information such as passwords and encryption keys.

Few security tools are capable of handling the breadth of cyberattacks faced by cloud-first organizations. As such, the enterprise must research advanced solutions like cloud access security brokers. To learn more about these next-gen security solutions, download the Definitive Guide to CASBs.

AWS Cloud: Proactive Security and Forensic Readiness – Part 3

Part 3: Data protection in AWS

By Neha Thethi, Information Security Analyst, BH Consulting

This is the third in a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment. This post relates to protecting data within AWS.

Data protection has become a pressing priority for organizations that process the personal data of individuals in the EU, because the deadline for the EU General Data Protection Regulation (GDPR) is fast approaching.

AWS is no exception. The company is providing customers with services and resources to help them comply with GDPR requirements that may apply to their operations. These include granular data access controls, monitoring and logging tools, encryption, key management, audit capability, and adherence to IT security standards (for more information, see the AWS General Data Protection Regulation (GDPR) Center and the Navigating GDPR Compliance on AWS whitepaper). In addition, AWS has published several privacy-related whitepapers, including country-specific ones. The whitepaper Using AWS in the Context of Common Privacy & Data Protection Considerations focuses on typical questions asked by AWS customers when considering privacy and data protection requirements relevant to their use of AWS services to store or process content containing personal data.

This blog, however, is not just about protecting personal data. The following list provides guidance on protecting any information stored in AWS that is valuable to your organisation. The checklist mainly focuses on protection of data (at rest and in transit), protection of encryption keys, removal of sensitive data from AMIs, and understanding data access requests in AWS.

The checklist provides best practice for the following:

  1. How are you protecting data at rest?
  2. How are you protecting data at rest on Amazon S3?
  3. How are you protecting data at rest on Amazon EBS?
  4. How are you protecting data at rest on Amazon RDS?
  5. How are you protecting data at rest on Amazon Glacier?
  6. How are you protecting data at rest on Amazon DynamoDB?
  7. How are you protecting data at rest on Amazon EMR?
  8. How are you protecting data in transit?
  9. How are you managing and protecting your encryption keys?
  10. How are you ensuring custom Amazon Machine Images (AMIs) are secure and free of sensitive data before publishing for internal (private) or external (public) use?
  11. Do you understand who has the right to access your data stored in AWS?

IMPORTANT NOTE: Identity and access management is an integral part of protecting data; however, you’ll notice that the following checklist does not focus on AWS IAM. I have created a separate checklist on IAM best practices here.

Best-practice checklist

1. How are you protecting data at rest?

  • Define policies for data classification, access control, retention and deletion
  • Tag information assets stored in AWS based on the adopted classification scheme
  • Determine where your data will be located by selecting a suitable AWS region
  • Use geo restriction (or geoblocking) to prevent users in specific geographic locations from accessing content that you are distributing through a CloudFront web distribution
  • Control the format, structure and security of your data by masking, anonymising or encrypting it in accordance with its classification
  • Encrypt data at rest using server-side or client-side encryption
  • Manage other access controls, such as identity, access management, permissions and security credentials
  • Restrict access to data using IAM policies, resource policies and capability policies

Back to List

2. How are you protecting data at rest on Amazon S3?

  • Use bucket-level or object-level permissions alongside IAM policies
  • Don’t create any publicly accessible S3 buckets. Instead, create pre-signed URLs to grant time-limited permission to download objects
  • Protect sensitive data by encrypting it at rest in S3. Amazon S3 supports server-side encryption as well as client-side encryption, in which you create and manage your own encryption keys
  • Encrypt inbound and outbound S3 data traffic
  • Amazon S3 provides data replication and versioning rather than automatic backups; enable S3 Versioning and define S3 Lifecycle policies
  • Automate the lifecycle of your S3 objects with rule-based actions
  • Enable MFA Delete on S3 buckets
  • Be familiar with the durability and availability options for the different S3 storage classes – S3 Standard, S3 Standard-IA and S3 Reduced Redundancy Storage (RRS)
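Several of the S3 items above can be enforced in a bucket policy rather than left to user discipline. As a minimal sketch (the policy pattern follows AWS's documented `s3:x-amz-server-side-encryption` condition key; the bucket name is a placeholder), the following Python builds a policy that denies any object upload that does not request server-side encryption:

```python
import json

def deny_unencrypted_uploads_policy(bucket_name):
    """Build an S3 bucket policy that rejects PutObject requests
    lacking a server-side-encryption header. The Null condition is
    true when the header is absent, which triggers the Deny."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUnencryptedObjectUploads",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket_name}/*",
            "Condition": {
                "Null": {"s3:x-amz-server-side-encryption": "true"}
            },
        }],
    }

# Serialize for use with put-bucket-policy (via console, CLI or SDK)
policy_json = json.dumps(deny_unencrypted_uploads_policy("example-bucket"))
```

You would attach the resulting JSON to the bucket with `aws s3api put-bucket-policy`; after that, unencrypted uploads fail regardless of which user or tool performs them.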

Back to List

3. How are you protecting data at rest on Amazon EBS?

  • AWS creates two copies of your EBS volume for redundancy. However, since both copies are in the same Availability Zone, replicate data at the application level, and/or create backups using EBS snapshots
  • On Windows Server 2008 and later, use BitLocker encryption to protect sensitive data stored on system or data partitions (this needs to be configured with a password as Amazon EC2 does not support Trusted Platform Module (TPM) to store keys)
  • On Windows Server, implement Encrypted File System (EFS) to further protect sensitive data stored on system or data partitions
  • On Linux instances running kernel versions 2.6 and later, you can use dm-crypt with Linux Unified Key Setup (LUKS) for encryption and key management

Back to List

4. How are you protecting data at rest on Amazon RDS?

(Note: Amazon RDS leverages the same secure infrastructure as Amazon EC2. You can use the Amazon RDS service without additional protection, but it is recommended that you also encrypt sensitive data at the application layer)

  • Use a built-in encryption function that encrypts all sensitive database fields, using an application key, before storing them in the database
  • Use platform-level encryption
  • Use MySQL cryptographic functions – encryption, hashing, and compression
  • Use Microsoft Transact-SQL cryptographic functions – encryption, signing, and hashing
  • Use Oracle Transparent Data Encryption on Amazon RDS for Oracle Enterprise Edition under the Bring Your Own License (BYOL) model

Back to List

5. How are you protecting data at rest on Amazon Glacier?

(Note: Data stored on Amazon Glacier is protected using server-side encryption. AWS generates a separate unique encryption key for each Amazon Glacier archive and encrypts it using AES-256)

  • Encrypt data prior to uploading it to Amazon Glacier for added protection

Back to List

6. How are you protecting data at rest on Amazon DynamoDB?

(Note: DynamoDB is a shared service from AWS and can be used without added protection, but you can implement a data encryption layer over the standard DynamoDB service)

  • Use raw binary fields or Base64-encoded string fields when storing encrypted fields in DynamoDB
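The Base64 option above is straightforward to sketch. In this illustration the "ciphertext" is random bytes standing in for output from a real encryption step, which is assumed rather than shown:

```python
import base64
import os

def to_dynamodb_string(ciphertext: bytes) -> str:
    """Base64-encode encrypted bytes so they fit in a DynamoDB
    string (S) attribute; DynamoDB also accepts raw binary (B)."""
    return base64.b64encode(ciphertext).decode("ascii")

def from_dynamodb_string(attr: str) -> bytes:
    """Decode a stored attribute back to the original ciphertext."""
    return base64.b64decode(attr)

ciphertext = os.urandom(32)  # stand-in for real encrypted field bytes
attr = to_dynamodb_string(ciphertext)
assert from_dynamodb_string(attr) == ciphertext  # lossless round trip
```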

Back to List

7. How are you protecting data at rest on Amazon EMR?

  • Store data permanently on Amazon S3 only, and do not copy to HDFS at all. Apply server-side or client-side encryption to data in Amazon S3
  • Protect the integrity of individual fields or entire files (for example, by using HMAC-SHA1) at the application level when you store data in Amazon S3 or DynamoDB
  • Or, employ a combination of Amazon S3 server-side encryption and client-side encryption, as well as application-level encryption
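The HMAC-SHA1 integrity check mentioned above can be sketched in a few lines. The key is a placeholder; in practice it would come from your key-management solution:

```python
import hmac
import hashlib

KEY = b"shared-integrity-key"  # placeholder; fetch from a key store

def sign(record: bytes) -> str:
    """Compute an HMAC-SHA1 tag to store alongside the record."""
    return hmac.new(KEY, record, hashlib.sha1).hexdigest()

def verify(record: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time before
    trusting data read back from S3 or DynamoDB."""
    return hmac.compare_digest(sign(record), tag)

tag = sign(b"field-1,field-2")
assert verify(b"field-1,field-2", tag)   # untouched record passes
assert not verify(b"tampered", tag)      # modified record fails
```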

Back to List

8. How are you protecting data in transit?

  • Encrypt data in transit using IPSec ESP and/or SSL/TLS
  • Encrypt all non-console administrative access using strong cryptographic mechanisms such as SSH, user and site-to-site IPSec VPNs, or SSL/TLS to further secure remote system management
  • Authenticate data integrity using IPSec ESP/AH, and/or SSL/TLS
  • Authenticate remote end using IPSec with IKE with pre-shared keys or X.509 certificates
  • Authenticate the remote end using SSL/TLS with server certificate authentication based on the server common name (CN) or subject alternative name (SAN)
  • Offload HTTPS processing on Elastic Load Balancing to minimise impact on web servers
  • Protect the backend connection to instances using an application protocol such as HTTPS
  • On Windows servers use X.509 certificates for authentication
  • On Linux servers, use SSH version 2 and use non-privileged user accounts for authentication
  • Use HTTP over SSL/TLS (HTTPS) when connecting to RDS or DynamoDB over the internet
  • Use SSH for access to Amazon EMR master node
  • Use SSH for clients or applications to access Amazon EMR clusters across the internet using scripts
  • Use SSL/TLS for Thrift, REST, or Avro
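On the client side, several of these items come down to configuring TLS strictly. As a minimal Python sketch using only the standard library, this builds a context that validates certificates, checks hostnames, and refuses the older TLS 1.0/1.1 protocol versions:

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """Client-side TLS context for connections to AWS endpoints.

    create_default_context() already enables certificate validation
    (CERT_REQUIRED) and hostname checking; we additionally pin the
    minimum protocol version to TLS 1.2."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx

ctx = strict_tls_context()
```

A context like this can be passed to `http.client.HTTPSConnection` or any library that accepts an `ssl_context`, so every outbound connection inherits the same policy.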

Back to List

9. How are you managing and protecting your encryption keys?

  • Define key rotation policy
  • Do not hard code keys in scripts and applications
  • Use server-side encryption with S3-managed keys (SSE-S3) or AWS KMS-managed keys (SSE-KMS), or supply and manage your own keys with customer-provided keys (SSE-C) or client-side encryption
  • Use tamper-resistant storage, such as Hardware Security Modules (AWS CloudHSM)
  • Use a key management solution from the AWS Marketplace or from an APN Partner (e.g., SafeNet or Trend Micro)
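The "do not hard-code keys" item can be partially automated. As a hedged sketch of a pre-commit check (not a complete secret scanner), the following flags candidate AWS access key IDs, which follow a documented 20-character `AKIA…` format, inside a script or configuration file:

```python
import re

# AWS access key IDs are 20 uppercase alphanumerics beginning with AKIA.
ACCESS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_hardcoded_keys(text: str) -> list:
    """Return candidate AWS access key IDs embedded in source text.

    A simple guard for commit hooks or CI; secret access keys and
    other credential formats would need additional patterns."""
    return ACCESS_KEY_RE.findall(text)

hits = find_hardcoded_keys('aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"')
```

Running a check like this over every file before commit catches the most common leak (an access key pasted into a script) before it ever reaches a repository or an AMI.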

Back to List

10. How are you ensuring custom Amazon Machine Images (AMIs) are secure and free of sensitive data before publishing for internal (private) or external (public) use?

  • Securely delete all sensitive data including AWS credentials, third-party credentials and certificates or keys from disk and configuration files
  • Delete log files containing sensitive information
  • Delete all shell history on Linux
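These cleanup steps can be scripted as part of an AMI build pipeline. The following is a sketch only, with an illustrative (not exhaustive) list of paths; a real pipeline should also clear application-specific logs and configuration and consider overwriting free space:

```python
from pathlib import Path

# Typical credential and history artifacts to remove before
# snapshotting an AMI; extend this list for your own stack.
SENSITIVE_PATHS = [
    ".aws/credentials",
    ".ssh/authorized_keys",
    ".bash_history",
]

def scrub(home: Path) -> list:
    """Delete known sensitive files under a home directory and
    return the relative paths that were actually removed."""
    removed = []
    for rel in SENSITIVE_PATHS:
        target = home / rel
        if target.exists():
            target.unlink()
            removed.append(rel)
    return removed
```

Run against every user's home directory (and `/root`) as the final step before creating the image, and verify the returned list in your build logs.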

Back to List

11. Do you understand who has the right to access your data stored in AWS?

  • Understand the laws applicable to your business and operations, and consider whether laws in other jurisdictions may apply
  • Understand that relevant government bodies may have the right to issue requests for content; each relevant law contains criteria that must be satisfied before a law enforcement body can make a valid request
  • Understand that AWS notifies customers where practicable before disclosing their data so they can seek protection from disclosure, unless AWS is legally prohibited from doing so or there is a clear indication of illegal conduct connected with the use of AWS services. For additional information, visit the Amazon Information Requests portal

Back to List

For more details, refer to the relevant AWS resources.

Next up in the blog series is Part 4 – Detective Controls in AWS – best practice checklist. Stay tuned.

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, new features may have been rolled out since these blogs were written. Also, please note that this checklist is for guidance purposes only.

34 Cloud Security Terms You Should Know

By Dylan Press, Director of Marketing, Avanan

How can you properly research a cloud security solution if you don’t understand what you are reading? We have always believed cloud security should be simple, which is why we created Avanan. In an attempt to simplify it even further, we have created a glossary of 34 commonly misunderstood cloud security terms and what they mean.

We hope you use this as a reference not only for yourself but for your team and in training your organization. Print this out and pin it outside your cubicle.

Account Takeover

A type of cyber attack in which the hacker spends extended periods of time dormant in a compromised account, spreading silently within the organization through internal messages until they have access to information that is valuable to them. They may use the account to attack other organizations.

Related: Read our whitepaper Cloud Account Takeover

Advanced Persistent Threat (APT)

This is an attack in which the attacker gains access to an account or network and remains undetected after the initial breach. “Advanced” describes the initial breach technique (phishing or malware) that was able to evade the victim’s security. The attack is “persistent” because the attacker continues to carry it out through reconnaissance and internal spread long after the initial breach.

Advanced Threat Protection (Microsoft ATP)

Microsoft offers its Advanced Threat Protection for an additional $24 per user per year. It includes capabilities not available in the default Office 365/Outlook.com account:

  • Safe Links: replaces each URL and checks the site before redirecting the user.
  • Safe Attachments: scans attachments for malware.
  • Spoof Intelligence: analyzes external emails that match your domain.
  • Anti-phishing Filters: look for signs of incoming phishing attacks.

Anomaly

A type of behavior or action that seems abnormal when observed in the context of an organization and a user’s historical activity. It is typically analyzed using some sort of machine-learning algorithm that builds a profile based upon historical event information including login locations and times, data-transfer behavior and email message patterns. Anomalies are often a sign that an account is compromised.
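The baseline-and-deviation idea behind anomaly detection can be shown in a few lines. This is a deliberately simplified sketch on one feature (login hour); real systems build multi-feature profiles with proper machine-learning models:

```python
from statistics import mean, stdev

def is_anomalous(history, observation, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from a user's historical baseline (a simple z-score test)."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return observation != mu
    return abs(observation - mu) / sigma > threshold

# Hypothetical history of a user's login hours (24-hour clock)
usual_logins = [9, 10, 9, 11, 10, 9, 10, 11]
```

A 3 a.m. login for this user scores far outside the baseline and would be flagged, while another mid-morning login would not; in practice the flag would feed a response such as step-up authentication rather than an outright block.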

API Attack

An API (Application Programming Interface) allows two cloud applications to talk to one another directly, allowing a third party to read or make changes directly within a cloud application. Creating an API connection requires a user’s approval, but once created, it runs silently in the background, often with little or no monitoring. An API-based attack typically involves fooling the user into approving an API connection through a phishing attack. Once granted the API token, the attacker has almost complete access and control, even if the user changes the account password. To break the connection, the user must manually revoke the API token.

Behavioral Analysis

A security measure in which a file’s behavior is monitored and analyzed in an isolated environment in order to see if it contains hidden malicious functions or is communicating with an unknown third-party.

Brand Impersonation

A method of phishing attack in which the perpetrator spoofs the branding of a well-known company to fool the recipient into entering credentials, sharing confidential information, transferring money or clicking on a malicious link. An example might be a forged email that looks like it is from a social media company asking to verify a password.

Breach Response

A form of security that remedies the damage caused by a breach. For example, changing passwords, revoking API tokens, resetting permissions for shared documents, enabling multi-factor-authentication, restoring lost or edited documents, documenting and classifying leaked information, identifying potential pathways to collateral compromise.

CASB

An acronym for Cloud Access Security Broker. This is a type of security solution that monitors and controls the cloud applications that an organization’s employees might use. Typically, the control is enforced by routing web traffic through a forward or reverse proxy. CASBs are good for managing shadow IT and limiting employees’ use of certain SaaS applications, or their activity within those applications, but they do not monitor third-party activity in the cloud, such as shared documents or email.

Related: Can a CASB Protect You from Phishing or Ransomware?

Cloud Access Trojan

Also known as a CAT, a Cloud Access Trojan describes any method of accessing a cloud account without the use of a username and password, for example, a malicious user syncing a desktop app, forwarding all email to an external account, connecting a malicious script or simply authorizing a backup service for which they have full access. In each case, the attacker needs only momentary access, often gained through a phishing attack.

Related: Cloud Access Trojan: The Invisible Back Door to Your Enterprise Cloud

Cloud Messaging Apps

Cloud-based communication services, beyond email, that companies use for internal communication and sometimes to communicate with trusted partners. Employees often place more trust in these apps even though they are just as capable of distributing malware or phishing messages.

Cloudify

Taking software that was created for on-premises or datacenter use, wrapping it in an API container, and converting it to a cloud service. For example, taking the malware-analysis blade from a perimeter appliance and adapting it so that it can be configured and scaled without the need for direct management. This also includes the automation of software licensing and version control.

Compromised Account

An account which has been accessed and is possibly controlled by an outside party for malicious reasons. This can be done either via API connection or by gaining credentials to the account from a leak or phishing email. Typically, the goal of the attacker is to remain undetected, in order to use it as a base for further attacks.

Related: Account Takeover: A Critical Layer Of Your Email Security

Data Classification

A security and compliance measure in which all of an organization’s documents are scanned and categorized based on their sensitivity, then automatically encrypted or adjusted to the correct sharing permissions. For example, documents containing customer information or employee social security numbers would be classified as highly sensitive and encrypted, whereas an external-facing white paper would be classified as non-sensitive and likely not encrypted.

DLP (Data Leak Prevention or Data Loss Prevention)

A type of security that prevents sensitive data, usually files, from being shared outside the organization or to unauthorized individuals within the organization. This is done usually through policies that encrypt data or control sharing settings.

DRM

Digital Rights Management: a set of access control technologies for restricting the use of confidential information, proprietary hardware and copyrighted works, typically using encryption and key management. (Also see IRM)

Gateway

A gateway is any device or service that sits in the traffic path to inspect messages; in this context it is another word for an MTA. Please see the definition for MTA.

IRM

Information Rights Management is a subset of Digital Rights Management that protects corporate information from being viewed or edited by unwanted parties typically using encryption and permission management. (also see DRM)

Latency

The added time it takes for an email to be delivered to its intended recipient. Security measures sometimes add latency as they perform scans on the email prior to allowing the email to reach the user’s inbox.

Malconfiguration

A deliberate configuration change within a system by a malicious actor, typically to create back-door access or exfiltrate information. While the original change in configuration might involve a compromised account or other vulnerability, a malconfiguration has the benefit of offering long term access using legitimate tools, without further need of a password or after a vulnerability is closed.

Misconfiguration

A dangerous or unapproved configuration of an account that could potentially lead to a compromise typically done by a well-intentioned user attempting to solve an immediate business problem. While there is no malicious intent, misconfiguration is actually the leading cause of data loss or compromise.

MTA

An acronym for Message Transfer Agent. An MTA is an appliance or service that acts as the authorized server-of-record for electronic messages, eventually passing them on to the final mail server.

Related: 7 Reasons Not to Use an MTA Gateway

Phishing

A type of attack in which a message (often email, but it could be any messaging system) is sent from a malicious party disguised as a trusted source with the intention of fooling the recipient into giving up credentials, money, or confidential data. It often includes a malicious link or file, but could be as simple as a single sentence that provokes some sort of insecure response. (Also see Spearphishing.)

Proxy

A proxy can include any gateway, service or appliance that causes a rerouting of traffic through an appliance or cloud service. For example, a web proxy or CASB will redirect a user’s web browsing in order to decrypt the traffic and block particular applications or data. Mail proxy gateways (see MTA) reroute incoming email in order to scan and block spam, phishing or other malicious email. A proxy is limited in its visibility as it cannot monitor or control traffic it cannot see, i.e. remote and non-employee web usage or internal email traffic.

Quarantine

The act of encrypting, moving or changing the share permissions of a file so that it is unreachable by a user until it can be deemed safe or authorized by the intended recipient.

Ransomware

A type of malware that encrypts the files on an endpoint device using a mechanism for which only the attacker has the keys. While the attacker will offer the key in exchange for payment, fewer than half of victims that do pay actually recover their files.

Sandboxing

A type of security measure that involves testing a file or link in a controlled environment to see what effect it has on the emulated operating system, typically the first line of defense against zero-day attacks for which there is no signature or pre-knowledge of the code.

Shadow IT

Any unapproved cloud-based account or solution implemented by an employee for business use. It might also include the use of an unknown account with an approved provider, but administered by the user rather than corporate IT.

Shadow SaaS

An unapproved cloud application that is connected in some way (typically by API) to an organization’s SaaS or IaaS, with access to corporate data but without permission from the organization.

Spearphishing

A type of phishing attack that is designed to target a small number of users, sometimes only one, such as a CEO. Spearphishing attacks usually involve intensive research by the hacker to increase the chances that the intended target will fall for the attack.

Tokens

A unique authorization key used for API interactions. Each token is granted a certain level of access and control and often continues to provide access until the token is manually revoked.

URL Analysis

A security measure that reviews a link to assess if it is genuine and will direct to a safe and expected destination with no unintended side effects.

URL Impersonation

A technique used in phishing attacks in which the hacker creates a URL that looks like a link to a trusted website to the untrained eye. These techniques can be thwarted using URL analysis.
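One cheap form of the URL analysis mentioned above is a lookalike check against a list of trusted domains. The domain list and similarity cutoff here are illustrative assumptions; production URL analysis also handles homoglyphs, redirects and reputation feeds:

```python
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["paypal.com", "microsoft.com", "google.com"]  # example list

def lookalike_of(domain: str, cutoff: float = 0.85):
    """Return the trusted domain this domain most closely imitates,
    or None. A near-match that is not exact (e.g. 'paypa1.com')
    is a common sign of URL impersonation."""
    for trusted in TRUSTED_DOMAINS:
        if domain == trusted:
            return None  # exact match is the genuine site
        if SequenceMatcher(None, domain, trusted).ratio() >= cutoff:
            return trusted
    return None
```

Here `lookalike_of("paypa1.com")` flags the spoof of `paypal.com`, while the genuine domain and unrelated domains pass through cleanly.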

User Impersonation

A technique used in phishing attacks in which the hacker makes their email look like it is coming from a trusted sender, either a corporation or another employee. This can be done by editing the sender’s display name or using an email address that looks like it belongs to a trusted organization.

We will be continuing to add to this list and if you have any suggestions for terms to include please reach out to [email protected].

Are Healthcare Breaches Down Because of CASBs?

By Salim Hafid, Product Marketing Manager, Bitglass

Bitglass just released its fourth annual Healthcare Breach Report, which dives into healthcare breaches over 2017 and compares the rate of breach over previous years. A big surprise this year was the precipitous drop in the volume of breaches and the scope of each attack. Our research team set out to discover why this happened.

Our annual healthcare report is based on breach data from the US Department of Health and Human Services. The government mandates that all healthcare organizations and their affiliates publicly disclose breaches that affect at least 500 individuals. The result is several years of data on the causes of healthcare breaches as well as information about which firms are targeted by attackers.

It seems that after several years of being a top target for hackers looking to steal valuable data, healthcare firms’ security teams are now getting their act together. For each organization in this vertical, security has become a priority. Many are migrating to the cloud in an effort to shift the infrastructure security burden to powerful tech giants like Amazon, Google, and Microsoft. This shift to cloud has also driven many to adopt third-party security solutions that allow them to obtain cross-app security, achieve HIPAA compliance, and mitigate the risk and impact of breaches.

In particular, cloud access security brokers are taking the healthcare sector by storm and are proving to play an important part in preventing breaches. Back in 2015, few had a CASB deployed and many were at risk of massive data loss. Today, forward-thinking organizations like John Muir Health have deployed a Next-Gen CASB to great success. IT administrators can be immediately alerted to high-risk data outflows and new applications that pose a threat, and can define granular policies that prevent mega-breaches of the sort that cost Anthem and Premera hundreds of millions of dollars.

Read the full healthcare breach report to learn about the leading causes of breaches in the sector, the average cost of a stolen health record, and more.

You Are the Weakest Link – Goodbye

By Jacob Serpa, Product Marketing Manager, Bitglass

Security in the cloud is a top concern for the modern enterprise. Fortunately, provided that organizations do their due diligence when evaluating security tools, storing data in the cloud can be even more secure than storing data on premises. However, this does require deploying a variety of solutions for securing data at rest, securing data at access, securing mobile and unmanaged devices, defending against malware, detecting unsanctioned cloud apps (shadow IT), and more. Amidst this rampant adoption of security tools, organizations often forget to bolster the weakest link in their security chain: their users.

The Weak Link in the Chain
While great steps are typically taken to secure data, relatively little thought is given to the behavior of the users who handle it. This is likely due to an ingrained reliance upon static security tools that fail to adapt to situations in real time. Regardless, users make numerous decisions that place data at risk – some less obvious than others. In the search for total data protection, this dynamic human element cannot be ignored.

External sharing is one example of a risky user behavior. Organizations need visibility and control over where their data goes in order to keep it safe. When users send files and information outside of the company, protecting it becomes very challenging. While employees may do this either maliciously or just carelessly, the result is the same – data is exposed to unauthorized parties. Somewhat similarly, this can occur through shadow IT when users store company data in unsanctioned cloud applications over which the enterprise has no visibility or control.

Next, many employees use unsecured public WiFi networks to perform their work remotely. While this may seem like a convenient method of accessing employers’ cloud applications, it is actually incredibly dangerous for the enterprise. Malicious individuals can monitor traffic on these networks in order to steal users’ credentials. Additionally, credentials can fall prey to targeted phishing attacks that are enabled by employees who share too much information on social media. The fact that many individuals reuse passwords across multiple personal and corporate accounts only serves to exacerbate the problem.

In addition to the above, users place data at risk through a variety of other ill-advised behaviors. Unfortunately, traditional, static security solutions have a difficult time adapting to users’ actions and offering appropriate protections in real time.

Reforging the Chain
In the modern cloud, automated security solutions are a must. Reactive solutions that rely upon humans to analyze threats and initiate a response are incapable of protecting data in real time. The only way to ensure true automation is by using machine learning. When tools are powered by machine learning, they can protect data in a comprehensive fashion in the rapidly evolving, cloud-first world.

This next-gen approach can be particularly helpful when addressing threats that stem from compromised credentials and malicious or careless employees. User and entity behavior analytics (UEBA) baseline users’ behaviors and perform real-time analyses to detect suspicious activities. Whether credentials are used by thieving outsiders or employees engaging in illicit behaviors, UEBA can detect threats and respond by enforcing step-up, multi-factor authentication before allowing data access.

Machine learning is helpful for defending against other threats, as well. For example, advanced anti-malware solutions can leverage machine learning to analyze the behaviors of files. In this way, they can detect and block unknown, zero-day malware; something beyond the scope of traditional, signature-based solutions that can only check for documented, known malware.

Even less conventional tools like shadow IT discovery are beginning to be endowed with machine learning. Historically, these solutions have relied upon lists generated by massive human teams that constantly categorize and evaluate the risks of new cloud applications. However, this approach fails to keep pace with the perpetually growing number of new and updated apps. Because of this, leading cloud access security brokers (CASBs) are using machine learning to rank and categorize new applications automatically, enabling immediate detection of new cloud apps in use. In other words, organizations can uncover all of the locations that careless and conniving employees store corporate data.

While training employees in best security practices is necessary, it is not sufficient for protecting data. Education must be paired with context-aware, automated security solutions (like CASBs) in order to reinforce the weak links in the enterprise’s security chain.

AWS Cloud: Proactive Security and Forensic Readiness – Part 2

By Neha Thethi, Information Security Analyst, BH Consulting

Part 2: Infrastructure-level protection in AWS 

This is the second in a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment. This post relates to protecting your virtual infrastructure within AWS.

Protecting any computing infrastructure requires a layered or defense-in-depth approach. The layers are typically divided into physical, network (perimeter and internal), system (or host), application, and data. In an Infrastructure as a Service (IaaS) environment, AWS is responsible for security ‘of’ the cloud including the physical perimeter, hardware, compute, storage and networking, while customers are responsible for security ‘in’ the cloud, or on layers above the hypervisor. This includes the operating system, perimeter and internal network, application and data.

Infrastructure protection requires defining trust boundaries (e.g., network boundaries and packet filtering), system security configuration and maintenance (e.g., hardening and patching), operating system authentication and authorizations (e.g., users, keys, and access levels), and other appropriate policy enforcement points (e.g., web application firewalls and/or API gateways).

The key AWS service that supports service-level protection is AWS Identity and Access Management (IAM) while Virtual Private Cloud (VPC) is the fundamental service that contributes to securing infrastructure hosted on AWS. VPC is the virtual equivalent of a traditional network operating in a data center, albeit with the scalability benefits of the AWS infrastructure. In addition, there are several other services or features provided by AWS that can be leveraged for infrastructure protection.

The following list mainly focuses on network and host-level boundary protection, protecting integrity of the operating system on EC2 instances and Amazon Machine Images (AMIs) and security of containers on AWS.

The checklist provides best practices for the following:

  1. How are you enforcing network and host-level boundary protection?
  2. How are you protecting against distributed denial of service (DDoS) attacks at network and application level?
  3. How are you managing the threat of malware?
  4. How are you identifying vulnerabilities or misconfigurations in the operating system of your Amazon EC2 instances?
  5. How are you protecting the integrity of the operating system on your Amazon EC2 instances?
  6. How are you ensuring security of containers on AWS?
  7. How are you ensuring only trusted Amazon Machine Images (AMIs) are launched?
  8. How are you creating secure custom (private or public) AMIs?

IMPORTANT NOTE: Identity and access management is an integral part of securing an infrastructure; however, you’ll notice that the following checklist does not focus on the AWS IAM service. I have covered this in a separate checklist on IAM best practices here.

Best-practice checklist

1. How are you enforcing network and host-level boundary protection?

  • Establish appropriate network design for your workload to ensure only desired network paths and routing are allowed
  • For large-scale deployments, design network security in layers – external, DMZ, and internal
  • When designing NACL rules, remember that a NACL is a stateless firewall, so be sure to define both outbound and inbound rules
  • Create secure VPCs using network segmentation and security zoning
  • Carefully plan routing and server placement in public and private subnets.
  • Place instances (EC2 and RDS) within VPC subnets and restrict access using security groups and NACLs
  • Use non-overlapping IP addresses with other VPCs or data centre in use
  • Control network traffic by using security groups (stateful firewall, outside OS layer), NACLs (stateless firewall, at subnet level), bastion host, host based firewalls, etc.
  • Use Virtual Gateway (VGW) where Amazon VPC-based resources require remote network connectivity
  • Use IPSec or AWS Direct Connect for trusted connections to other sites
  • Use VPC Flow Logs for information about the IP traffic going to and from network interfaces in your VPC
  • Protect data in transit to ensure the confidentiality and integrity of data, as well as the identities of the communicating parties.
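
The “non-overlapping IP addresses” item above is easy to verify programmatically before provisioning. A minimal standard-library sketch; the network names and CIDR blocks are hypothetical:

```python
# Check that planned VPC and data-centre CIDR blocks do not overlap.
from ipaddress import ip_network
from itertools import combinations

def find_overlaps(cidrs):
    """Return pairs of network names whose address ranges overlap."""
    nets = {name: ip_network(block) for name, block in cidrs.items()}
    return [(a, b) for (a, na), (b, nb) in combinations(nets.items(), 2)
            if na.overlaps(nb)]

plan = {
    "vpc-prod":   "10.0.0.0/16",
    "vpc-dev":    "10.1.0.0/16",
    "on-prem-dc": "10.0.128.0/20",  # sits inside vpc-prod's range
}
print(find_overlaps(plan))  # [('vpc-prod', 'on-prem-dc')]
```

Catching a clash like this at design time avoids painful re-addressing once VPC peering or VPN connections are in place.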

2. How are you protecting against distributed denial of service (DDoS) attacks at network and application level?

  • Use firewalls including Security groups, network access control lists, and host based firewalls
  • Use rate limiting to protect scarce resources from overconsumption
  • Use Elastic Load Balancing and Auto Scaling to configure web servers to scale out when under attack (based on load), and shrink back when the attack stops
  • Use AWS Shield, a managed Distributed Denial of Service (DDoS) protection service that safeguards web applications running on AWS
  • Use Amazon CloudFront to absorb DoS/DDoS flooding attacks
  • Use AWS WAF with AWS CloudFront to help protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources
  • Use Amazon CloudWatch to detect DDoS attacks against your application
  • Use VPC Flow Logs to gain visibility into traffic targeting your application.
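
Of the controls above, rate limiting is the one most often implemented in application code as well as at the edge. A minimal token-bucket sketch, purely illustrative and not an AWS API (in practice, AWS WAF rate-based rules or the load balancer would enforce this for you):

```python
# Minimal token bucket: allow short bursts, cap the sustained request rate.
import time

class TokenBucket:
    def __init__(self, rate, burst, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, burst, clock
        self.tokens, self.last = float(burst), clock()

    def allow(self):
        now = self.clock()
        # Refill proportionally to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, burst=5)  # 10 req/s sustained, bursts of 5
allowed = sum(bucket.allow() for _ in range(20))  # most of 20 rapid calls dropped
```

The same shape of control protects any scarce resource, from login endpoints to API backends.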

3. How are you managing the threat of malware?

  • Give users the minimum privileges they need to carry out their tasks
  • Patch external-facing and internal systems to the latest security level.
  • Use a reputable and up-to-date antivirus and antispam solution on your system.
  • Install host based IDS with file integrity checking and rootkit detection
  • Use IDS/IPS systems for statistical/behavioural or signature-based algorithms to detect and contain network attacks and Trojans.
  • Launch instances from trusted AMIs only
  • Only install and run trusted software from a trusted software provider (note: an MD5 or SHA-1 checksum should not be trusted if the software is downloaded from a random source on the internet)
  • Avoid SMTP open relay, which can be used to spread spam, and which might also represent a breach of the AWS Acceptable Use Policy.
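
The note about not trusting MD5 or SHA-1 deserves a concrete illustration. The sketch below verifies a download against a SHA-256 digest instead; it only helps if the expected checksum comes from the publisher over a trusted channel (the file paths involved are hypothetical):

```python
# Verify a downloaded installer against a publisher-supplied SHA-256 digest.
import hashlib

def sha256_of(path, chunk_size=1 << 20):
    """Stream the file in 1 MiB chunks so large installers fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify(path, expected_hex):
    """Compare against the checksum published over a trusted channel."""
    return sha256_of(path) == expected_hex.strip().lower()
```

Where the publisher signs releases (GPG, code signing), signature verification is stronger still, since a checksum hosted next to the download can be tampered with alongside it.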

4. How are you identifying vulnerabilities or misconfigurations in the operating system of your Amazon EC2 instances?

  • Define approach for securing your system, consider the level of access needed and take a least-privilege approach
  • Open only the ports needed for communication, harden OS and disable permissive configurations
  • Remove or disable unnecessary user accounts.
  • Remove or disable all unnecessary functionality.
  • Change vendor-supplied defaults prior to deploying new applications.
  • Automate deployments and remove operator access to reduce attack surface area using tools such as EC2 Systems Manager Run Command
  • Ensure operating system and application configurations, such as firewall settings and anti-malware definitions, are correct and up-to-date; Use EC2 Systems Manager State Manager to define and maintain consistent operating system configurations
  • Ensure an inventory of instances and installed software is maintained; Use EC2 Systems Manager Inventory to collect and query configuration about your instances and installed software
  • Perform routine vulnerability assessments when updates or deployments are pushed; Use Amazon Inspector to identify vulnerabilities or deviations from best practices in your guest operating systems and applications
  • Leverage automated patching tools such as EC2 Systems Manager Patch Manager to help you deploy operating system and software patches automatically across large groups of instances
  • Use AWS CloudTrail, AWS Config, and AWS Config Rules as they provide audit and change tracking features for auditing AWS resource changes.
  • Use template definition and management tools, including AWS CloudFormation to create standard, preconfigured environments.

5. How are you protecting the integrity of the operating system on your Amazon EC2 instances?

  • Use file integrity controls for Amazon EC2 instances
  • Use host-based intrusion detection controls for Amazon EC2 instances
  • Use a custom Amazon Machine Image (AMI) or configuration management tools (such as Puppet or Chef) that provide secure settings by default.

6. How are you ensuring security of containers on AWS?

  • Run containers on top of virtual machines
  • Run small images, remove unnecessary binaries
  • Use many small instances to reduce attack surface
  • Segregate containers based on criteria such as role or customer and risk
  • Set containers to run as non-root user
  • Set filesystems to be read-only
  • Limit container networking; Use AWS ECS to manage containers and define communication between containers
  • Leverage Linux kernel security features using tools like SELinux, Seccomp, AppArmor
  • Perform vulnerability scans of container images
  • Allow only approved images during build
  • Use tools such as Docker Bench to automate security checks
  • Avoid embedding secrets into images or environment variables; use S3-based secrets storage instead.
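
Several of the items above (non-root user, read-only filesystem, no secrets in environment variables) can be checked mechanically before deployment. A toy audit sketch; the configuration dictionary and key names are hypothetical, not a Docker or ECS API:

```python
# Toy audit of container run-settings against the checklist above.
def audit_container(cfg):
    findings = []
    if cfg.get("user", "root") == "root":
        findings.append("container runs as root")
    if not cfg.get("read_only_rootfs", False):
        findings.append("root filesystem is writable")
    if cfg.get("secrets_in_env"):
        findings.append("secrets embedded in environment variables")
    return findings

risky = {"user": "root", "read_only_rootfs": False, "secrets_in_env": ["DB_PASSWORD"]}
print(audit_container(risky))  # three findings
```

Tools such as Docker Bench automate checks of exactly this kind across a much larger rule set.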

7. How are you ensuring only trusted Amazon Machine Images (AMIs) are launched?

  • Treat shared AMIs as any foreign code that you might consider deploying in your own data centre and perform the appropriate due diligence
  • Look for description of shared AMI, and the AMI ID, in the Amazon EC2 forum
  • Check aliased owner in the account field to find public AMIs from Amazon.

8. How are you creating secure custom (private or public) AMIs?

  • Disable root API access keys and secret key
  • Configure Public Key authentication for remote login
  • Restrict access to instances from limited IP ranges using Security Groups
  • Use bastion hosts to enforce control and visibility
  • Protect the .pem file on user machines
  • Delete keys from the authorized_keys file on your instances when someone leaves your organization or no longer requires access
  • Rotate credentials (DB, Access Keys)
  • Regularly run least privilege checks using IAM user Access Advisor and IAM user Last Used Access Keys
  • Ensure that software installed does not use default internal accounts and passwords.
  • Change vendor-supplied defaults before creating new AMIs
  • Disable services and protocols that authenticate users in clear text over the network, or otherwise insecurely.
  • Disable non-essential network services on startup. Only administrative services (SSH/RDP) and the services required for essential applications should be started.
  • Ensure all software is up to date with relevant security patches
  • For instantiated AMIs, update security controls by running custom bootstrapping Bash or Microsoft Windows PowerShell scripts, or use bootstrapping applications such as Puppet, Chef, Capistrano, Cloud-Init and Cfn-Init
  • Follow a formalised patch management procedure for AMIs
  • Ensure that the published AMI does not violate the Amazon Web Services Acceptable Use Policy. Examples of violations include open SMTP relays or proxy servers. For more information, see the Amazon Web Services Acceptable Use Policy http://aws.amazon.com/aup/

Security at the infrastructure level, or any level for that matter, certainly requires more than just a checklist. For a comprehensive insight into infrastructure security within AWS, we suggest reading the following AWS whitepapers – AWS Security Pillar and AWS Security Best Practices.

For more details, refer to the following AWS resources:

Next up in the blog series is Part 3 – Data Protection in AWS – best practice checklist. Stay tuned.

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, new features may have been rolled out while these blogs were being written. Also, please note that this checklist is for guidance purposes only.

Securing the Internet of Things: Devices & Networks

By Ranjeet Khanna, Director of Product Management–IoT/Embedded Security, Entrust Datacard

The Internet of Things (IoT) is changing manufacturing for the better.

With data from billions of connected devices and trillions of sensors, supply chain and device manufacturing operators are taking advantage of new benefits. Think improved efficiency and greater flexibility among potential business models. But as the IoT assumes a bigger role across industries, security needs to take top priority. Here’s a look at four key challenges that must be taken care of before realizing the rewards of increased connectivity.

Reducing risk
Mitigating risk doesn’t always have to come at the expense of uptime and reliability. With the right IoT security solutions, manufacturers can assign trusted identities to all devices or applications to ensure fraudsters remain on the outside looking in. Better yet, the integration of identity management can also pave the way for improved visibility of business operations, scalability, and access control. Instead of getting caught off guard by unforeseen occurrences, manufacturers will be prepared to address problems throughout every step of the product lifecycle.

Setting the stage for data sharing
Data drives the IoT. As more data is shared across connected ecosystems, the potential for analytics-based and even predictive advancements increases. Such improvements, however, aren’t all positive. Increased data sharing opens the door to additional cyber attacks. To help keep sensitive information under wraps, businesses should consider embedding trusted identities for devices at the time of manufacturing. From electronic control units within cars to the connected devices that make up smart cities, introducing trusted identities promises to not only secure data sharing, but also improve supply chain integrity and speed up IoT deployments along the way.

Securing networks & protocols
Through the IoT, old networks and protocols are being introduced to new devices. Enterprise-grade encryption-based technologies keep both greenfield and brownfield environments secure, regardless of protocol. While this extra step may take some time, the benefits are well worth it. Whether it’s an additional source of revenue or heightened security, implementing solutions that are effective across systems, designs and protocols can help ensure improved security for years to come.

Tying identity to security
Physical and digital security may seem like different subjects on the surface, but a closer look reveals some valuable similarities. Just as authorization is needed to enter a highly secure building, sensitive information should only be made available to users with the proper credentials. Dependent upon a variety of conditions – such as the time of day or type of device – rule-based authentication is one way to ensure untrusted devices or users can’t access a secure environment.
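
Rule-based authentication of the kind described above can be sketched as a simple policy function. The roles, hours, and decisions below are hypothetical examples, not Entrust Datacard's actual policy engine:

```python
# Toy rule-based access decision combining device trust, role, and time of day.
def access_decision(user_role, device_trusted, hour_of_day):
    if not device_trusted:
        return "deny"                 # untrusted devices never get in
    if user_role == "admin" and not (8 <= hour_of_day < 18):
        return "step-up-mfa"          # off-hours admin access needs extra proof
    return "allow"

print(access_decision("admin", True, 22))    # step-up-mfa
print(access_decision("analyst", True, 10))  # allow
print(access_decision("admin", False, 10))   # deny
```

Production systems evaluate many more attributes (geolocation, device posture, data sensitivity), but the structure – conditions in, graded decision out – is the same.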

Supply chain and device manufacturing operators have not yet taken full advantage of IoT’s impressive potential. By enabling fast-tracking of deployment timelines and allowing organizations to more quickly realize business value in areas such as process optimization and automation, ioTrust could soon change that. Leverage the power of ioTrust to stay one step ahead of the competition.

Note: This is part two in a four-part blog series on Securing the IoT.
Check out Part One: Connected Cars

Zero-Day in the Cloud – Say It Ain’t So

By Steve Armstrong, Regional Sales Director, Bitglass

Zero-day vulnerabilities are computer or software security gaps that are unknown to the public – particularly to parties who would like to close said gaps, like the vendors of vulnerable software.

To many in the infosec community, the term “zero-day” is synonymous with the patching or updating of systems. Take, for example, the world of anti-malware vendors. There are those whose solutions utilize signatures or hashes to defend against threats. Their products ingest a piece of malware, run it through various systems, perhaps have a human analyze the file, and then write a signature. This is then pushed to their subscribers’ end points in order to update systems and defend them against that particular piece of malware. The goal is to get the update to systems before there is an infection (sadly, updates are not always timely). On the other hand, there are some vendors who reject this traditional, reactive method. Instead, they use artificial intelligence to solve the problem in real time.

When assessing threats, it comes down to what you don’t know. It can be difficult to respond to unknown threats until they strike. As the saying goes, it’s what you don’t know that kills you. This is also true in the SaaS space. The analogy is simple: new applications appear daily – some good, some bad – and even the good ones can have unknown data leakage paths. Treat them as a threat.

In order to respond to unknown cloud applications, you can do one of two things.

First, the standard practice among CASBs (cloud access security brokers) is to find the new application, work to understand the originating organization, analyze the application, identify the data leakage paths, gain an understanding of the controls, and then write a signature. This is all done by massive teams of people with limited capacity – very much like the inefficient, signature-based anti-malware vendors. It can take days, weeks, or even months until an application signature is added to a support catalog. For organizations that want to protect their data, this is simply not good enough.

Option two is to utilize artificial intelligence and respond to new applications in the same manner as advanced anti-malware solutions. This route entails analyzing the application, identifying the data leakage paths, designing the control, and securing the application automatically in real time.

New, unknown applications should be responded to in the same fashion that an enterprise would respond to any other threat. Rather than waiting days, weeks, or months, they should be addressed immediately.

 

Saturday Security Spotlight: Tesla, FedEx, & the White House

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

—Tesla hacked and used to mine cryptocurrency
—FedEx exposes customer data in AWS misconfiguration
—White House releases cybersecurity report
—SEC categorizes knowledge of unannounced breaches as insider information
—More Equifax data stolen than initially believed

Tesla hacked and used to mine cryptocurrency
By targeting an unsecured Tesla instance of the Kubernetes administrative console (Kubernetes being the Google-originated open-source container orchestration platform), hackers were able to infiltrate the company. The malicious parties then obtained credentials to Tesla’s AWS environment, gained access to proprietary information, and began running scripts to mine cryptocurrency using Tesla’s computing power.

FedEx exposes customer data in AWS misconfiguration
FedEx is one of the latest companies to suffer from an AWS misconfiguration. Bongo, acquired by FedEx in 2014 and subsequently renamed CrossBorder, is reported to have left its S3 instance completely unsecured, exposing the data of nearly 120,000 customers. While it is believed that no data theft occurred, the company still left sensitive information (like customer passport details) exposed for an extended period.

White House releases cybersecurity report
In light of the escalating costs of cyberattacks in the United States, the White House released a report scrutinizing the current state of cybersecurity. In particular, the report recognized the critical link between cybersecurity and the economy at large. Should other countries execute cyberattacks against organizations responsible for US infrastructure, the repercussions could be severe.

SEC categorizes knowledge of unannounced breaches as insider information
The Securities and Exchange Commission recently announced that knowledge of unannounced breaches is insider information that should not be used to inform the purchase or sale of stock. This comes largely in response to Intel and Equifax executives selling stock before their companies announced breaches.

More Equifax data stolen than initially believed
In September of 2017, Equifax announced a massive breach that leaked names, home addresses, Social Security Numbers, and more. Interestingly (and frighteningly), it now appears that even more data was leaked than the company originally reported.

FedRAMP – Three Stages of Vulnerability Scanning and their Pitfalls

By Matt Wilgus, Practice Leader, Threat & Vulnerability Assessments, Schellman & Co.

Though vulnerability scanning is only one of the control requirements in FedRAMP, it is actually one of the most frequent pitfalls in terms of impact to an authorization to operate (ATO), as FedRAMP requirements expect cloud service providers (CSPs) to have a mature vulnerability management program. A CSP needs to have the right people, processes and technologies in place, and must successfully demonstrate maturity for all three. CSPs that have an easier time with the vulnerability scanning requirements follow a similar approach, which can be best articulated by breaking down the expectations into three stages.

1. Pre-Assessment

Approximately 60-90 days before an expected security assessment report (SAR), a CSP should provide the third-party assessment organization (3PAO) a recent set of scans, preferably from the most recent three months. The scan data should be provided in a format that can be parsed by the 3PAO. There are several questions that can be answered by providing scans well ahead of time:

  • Credentials – Are the scans being conducted from an authenticated perspective with a user having the highest level of privileges available?
  • Scan Types – Are infrastructure, database, and web application scans being performed?
  • Points of Contact – Who is responsible for configuring the scanner and running scans? Who is responsible for remediation?
  • Entire Boundary Covered – Is the full, in-scope environment being scanned?
  • Remediation – Are high severity findings being remediated in 30 days? Are moderate severity findings being remediated within 90 days?
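
The 30- and 90-day remediation windows in the last bullet are easy to track programmatically. A small sketch that flags overdue findings; the finding records and field names are hypothetical:

```python
# Flag findings that have exceeded their severity's remediation window.
from datetime import date

SLA_DAYS = {"high": 30, "moderate": 90}

def overdue(findings, today):
    return [f["id"] for f in findings
            if (today - f["first_seen"]).days > SLA_DAYS[f["severity"]]]

findings = [
    {"id": "CVE-A", "severity": "high",     "first_seen": date(2018, 1, 1)},
    {"id": "CVE-B", "severity": "moderate", "first_seen": date(2018, 1, 1)},
]
print(overdue(findings, today=date(2018, 2, 15)))  # ['CVE-A'] (45 days > 30)
```

Running a check like this against each month's scan export gives the CSP early warning well before the 3PAO does the same arithmetic.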

Within the pre-assessment, having all plugins enabled is frequently an area of discussion, as many CSPs want to disable plugins or sets of checks. Should a check need to be disabled, there must be a documented reason (e.g. degradation of performance or denial of service occurs with a given plugin). Do not disable checks simply because it is assumed a given type of asset doesn’t exist in the environment.

Properly configured and authenticated vulnerability scanners will typically not send families of vulnerability checks against hosts if the operating system or application does not match what is required by the family of checks–i.e., Netware checks will not be run if Netware is not detected during the scan of the environment. The safest bet is to always enable everything. If a given check needs to be disabled, it should be noted as an exception with formal documentation detailing why it is disabled, and what processes are in place to ensure the vulnerability being detected is covered by other mitigating factors.

The pre-assessment phase is also a good time for the CSP to document any known false positives that occur within the scan results and any operational requirements that prevent remediation from occurring.

2. Assessment

During the assessment kickoff, the CSP should be ready for the 3PAO to conduct vulnerability scans. If the CSP successfully addressed the questions in the pre-assessment phase, then any findings or issues during the assessment phase should be easy to address. There are three main areas to tackle while reviewing the scan data in the assessment phase:

  1. Current Picture – What vulnerabilities exist in the environment as of the current date?
  2. Reassurance on Remediation – Are vulnerabilities continuing to be remediated in a timely manner?
  3. Adjustments – What changes have been made since the pre-assessment?

Of the aforementioned three items, adjustments often have the biggest impact. Adjustments that frequently occur and need to be addressed include cases where the:

  • vulnerability scanning tool has changed
  • scan checks have been modified
  • personnel responsible for configuring and running the scans are no longer with the organization
  • technologies within the environment have changed
  • environment hosting the solution has changed

If any of these adjustments exist, the 3PAO will need to perform additional validation activities.

3. Final Scan

A final round of scans should be run by the CSP five to 10 days prior to the issuance of the SAR. At this point, all questions related to the personnel running the scans, the processes deployed, and the technologies implemented should be answered. The last set of scans should be limited in scope and used to show evidence of remediation activities on the vulnerabilities identified in the assessment phase. There are three primary goals related to the last piece of scan evidence:

  1. Targeted scans – Has a final set of scans that shows remediation of findings from the assessment phase been provided?
  2. Operational Requirements (OR) and False Positives (FP) – Are all ORs and FPs documented, reviewed and understood?
  3. Ready for Continuous Monitoring – Are there any high severity findings remaining, and is the CSP ready to provide monthly results to an agency or the Joint Authorization Board (JAB)?

High severity findings are highlighted due to their outsized impact on a FedRAMP ATO. A CSP cannot receive a recommendation for an ATO if any high severity vulnerabilities are present. Should any findings persist as of the date the SAR is issued, they should be tracked in the CSP’s Plan of Action and Milestones (POA&M).

For additional information on the timing and handling of vulnerability scans, please see the following documents on the FedRAMP website:

 

Securing the Internet of Things: Connected Cars

By Ranjeet Khanna, Director of Product Management–IoT/Embedded Security, Entrust Datacard

Establishing safety and security in automotive design goes far beyond crash test dummies.

By 2022, the global automotive Internet of Things (IoT) market is expected to skyrocket to $82.79 billion – and manufacturers are racing to capitalize on this growing opportunity. While embedded computation and networking have been around since the 1980s, the advent of connectivity opens up an array of new options for automakers. From advanced collision detection and predictive diagnostics, to entertainment systems that load a driver’s favorite tunes the second they sit down, connected cars are poised to enhance the consumer experience.

Those extra conveniences, however, aren’t without their downsides. If not properly secured, connected cars threaten to expose sensitive consumer information. With data being passed between so many different connected channels, it’s easier than ever for hackers to get their hands on personally identifiable information.

In 2015, Chrysler announced a recall of 1.4 million vehicles after two technology researchers hacked into a Jeep Cherokee’s dashboard connectivity system. But the right security solutions can make such incidents a thing of the past.

Through new IoT security solutions, automotive manufacturers are able to assign a trusted identity to each and every device – regardless of whether it’s located inside a vehicle or across the IoT ecosystem. This extra layer of security sets the stage for trusted communication between authorized users, devices and applications. Ensuring the right security level for the right device helps prevent data from being made accessible to unauthorized users or devices. Using cryptographic protection as well as strong authorization requirements will restrict access to those things, systems and users with the proper privileges.

In addition to creating a trusted IoT ecosystem, automotive designers also stand to realize significant business value. Instead of spending precious time determining which devices to trust, ioTrust makes it easy to not only recognize trusted devices, but operationalize them. That same convenience also extends to the supply chain, where manufacturers can get a better look at a product’s entire lifecycle – from creation to release.

IoT has burst onto the scene in a big way, especially in the quest to securely design the next connected car. But before making the most of automotive IoT, manufacturers must consider how to keep consumer data under wraps. By provisioning managed identities and authorization privileges, ioTrust paves the way for securely connected automotive systems.

Note: This is part of a blog series on Securing the IoT. 

CASBs and Education’s Flight to the Cloud

By Jacob Serpa, Product Marketing Manager, Bitglass

Cloud is becoming an integral part of modern organizations seeking productivity and flexibility. For higher education, cloud enables online course creation, dynamic collaboration on research documents, and more. As many cloud services like G Suite are discounted or given to educational institutions for free, adoption is made even simpler. However, across the multiple use cases in education, comprehensive security solutions must be used to protect data wherever it goes. The vertical as a whole needs real-time protection on any app, any device, anywhere.

The Problems
For academic institutions, research is often of critical importance. Faculty members create, share, edit, and reshare various documents in an effort to complete projects and remain at the cutting edge of their fields. Obviously, using cloud apps facilitates this process of collaboration and revision. However, doing so in an unsecured fashion can allow proprietary information to leak to unauthorized parties.

Another point of focus in education is how student and faculty PII (personally identifiable information) is used and stored in the cloud. As information moves to cloud apps, traditional security solutions fail to provide adequate visibility and control over data. Obviously, this creates compliance concerns with regulations, like FISMA and FERPA, that aim to protect personal information. Medical schools have the additional requirement of securing protected health information (PHI) and complying with HIPAA.

The Solutions
Fortunately, cloud access security brokers (CASBs) offer a variety of capabilities that address the above security concerns. Data leakage prevention, for example, can be used to protect data and reach regulatory compliance. DLP policies allow organizations to redact data like PII, quarantine sensitive files, and watermark and track documents. Encryption can be used to obfuscate sensitive data and prevent unauthorized users from viewing things like PHI. Contextual access controls govern data access based on factors like user group, geographical location, and more.
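
A DLP redaction policy of the kind described can be illustrated with a toy example. Real CASB DLP engines use far richer pattern libraries and validation than this single regex; the SSN pattern below is purely illustrative:

```python
# Redact US Social Security numbers before a document leaves the cloud app.
import re

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact_ssn(text):
    return SSN.sub("[REDACTED]", text)

print(redact_ssn("Student 123-45-6789 enrolled in fall term."))
# -> Student [REDACTED] enrolled in fall term.
```

In a CASB deployment the same policy would be enforced inline, so the redaction happens before the data reaches an unmanaged device.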

To secure cloud, present-day organizations must also secure mobile data access. Fortunately, agentless mobile security solutions enable BYOD without requiring installations on unmanaged devices. This is critical for ensuring device functionality, user privacy, and employee adoption. Some agentless solutions can enforce device security configurations like PIN codes, selectively wipe corporate data on any device, and more.