How Traffic Mirroring in the Cloud Works

By Tyson Supasatit, Sr. Product Marketing Manager, ExtraHop

Learn how Amazon traffic mirroring and the Azure vTAP fulfill the SOC visibility triad

After years of traffic mirroring not being available in the cloud, it has finally arrived with Amazon VPC traffic mirroring and the Azure vTAP! In this lightboard video, we’ll explain what traffic mirroring is and why its availability in the cloud is so significant.

Traffic mirroring is required for any type of product that passively listens to or analyzes network traffic, such as IDS, DLP, packet capture solutions, and network detection and response (NDR) products like ExtraHop Reveal(x). The advantage for security is that this method of analysis is virtually undetectable by attackers and cannot be turned off. Rob Joyce, director of the NSA’s hacking unit, called passive network monitoring his team’s “worst nightmare” for these reasons.

Previously, traffic mirroring in the cloud was challenging. To get packets to analysis tools, vendors would either have to route traffic through an in-line virtual appliance or install packet-forwarding agents on cloud instances. These workarounds added complexity and overhead. With native traffic mirroring capabilities—vTAP in Azure and VPC traffic mirroring in AWS—organizations can easily route copies of traffic from specific instances or entire VPCs to analysis tools with the click of a button. As you would expect, the cloud providers take care of all the “plumbing.” This will actually be a huge relief to many Security teams who have to go through arduous processes to get copies of traffic in on-premises environments.
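On the AWS side, that “plumbing” can also be scripted. The following is a minimal boto3 sketch of VPC traffic mirroring under stated assumptions: the ENI IDs are placeholders, and the filter shown simply accepts all inbound TCP traffic rather than anything production-ready.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Target: the ENI of the appliance/sensor that will receive mirrored packets (placeholder ID).
target = ec2.create_traffic_mirror_target(
    NetworkInterfaceId="eni-0123456789abcdef0",
    Description="NDR sensor",
)

# Filter: which traffic to copy. This example accepts all inbound TCP.
tm_filter = ec2.create_traffic_mirror_filter(Description="mirror all inbound TCP")
filter_id = tm_filter["TrafficMirrorFilter"]["TrafficMirrorFilterId"]
ec2.create_traffic_mirror_filter_rule(
    TrafficMirrorFilterId=filter_id,
    TrafficDirection="ingress",
    RuleNumber=100,
    RuleAction="accept",
    Protocol=6,  # TCP
    SourceCidrBlock="0.0.0.0/0",
    DestinationCidrBlock="0.0.0.0/0",
)

# Session: attach the mirror to the ENI of the instance being monitored (placeholder ID).
ec2.create_traffic_mirror_session(
    NetworkInterfaceId="eni-0fedcba9876543210",
    TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
    TrafficMirrorFilterId=filter_id,
    SessionNumber=1,
)
```

Once the session exists, copies of the source instance’s packets flow to the sensor ENI with no agents installed and no changes to the monitored workload.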

The introduction of native traffic mirroring for AWS and Azure means that the public cloud is growing in technical maturity with more of the capabilities that were available on-premises now available in the cloud. This shows that Azure and AWS are focusing on production enterprise workloads. The cloud is not just for developers any more!

With vTAP in Azure and VPC traffic mirroring in AWS, Security Operations teams can “tap” into the three key data sources for security visibility: logs, endpoint data, and network data. Gartner calls this the “SOC visibility triad.”

Finally, traffic mirroring enables new ways to package and deliver products in order to build cloud-first network detection and response products.

Rethinking Security for Public Cloud

Symantec’s Raj Patel highlights how organizations should be retooling security postures to support a modern cloud environment

By Beth Stackpole, Writer, Symantec


Enterprises have come a long way with cyber security, embracing robust enterprise security platforms and elevating security roles and best practices. Yet with public cloud adoption on the rise and businesses shifting to agile development processes, new threats and vulnerabilities are testing traditional security paradigms and cultures, mounting pressure on organizations to seek alternative approaches.

Raj Patel, Symantec’s vice president, cloud platform engineering, recently shared his perspective on the shortcomings of a traditional security posture along with the kinds of changes and tools organizations need to embrace to mitigate risk in an increasingly cloud-dominant landscape.

Q: What are the key security challenges enterprises need to be aware of when migrating to the AWS public cloud and what are the dangers of continuing traditional security approaches?

A: There are a few reasons why it’s really important to rethink this model. First of all, the public cloud by its very definition is a shared security model with your cloud provider. That means organizations have to play a much more active role in managing security in the public cloud than they may have had in the past.

Infrastructure is provided by the cloud provider and, as such, responsibility for security is being decentralized within an organization. The cloud provider provides a certain level of base security, but the application owner builds infrastructure directly on top of the public cloud and thus now has to be security-aware.

The public cloud environment is also a very fast-moving world, which is one of the key reasons why people migrate to it. It is infinitely scalable and much more agile. Yet those very same benefits also create a significant amount of risk. Security errors are going to propagate at the same speed if you are not careful and don’t do things right. So from a security perspective, you have to apply that logic in your security posture.

Finally, the attack vectors in the cloud are the entire fabric of the cloud. Traditionally, people might worry about protecting their machines or applications. In the public cloud, the attack surface is the entire fabric of the cloud–everything from infrastructure services to platform services, and in many cases, software services. You may not know all the elements of the security posture of all those services … so your attack surface is much larger than you have in a traditional environment.

Q: Where does security fit in a software development lifecycle (SDLC) when deploying to a public cloud like AWS and how should organizations retool to address the demands of the new decentralized environment?

A: Most organizations going through a cloud transformation take a two-pronged approach. First, they are migrating their assets and infrastructure to the public cloud and second, they are evolving their software development practices to fit the cloud operating model. This is often called going cloud native and it’s not a binary thing—it’s a journey.

With that in mind, most cloud native transformations require a significant revision of the SDLC … and in most cases, firms adopt some form of a software release pipeline, often called a continuous integration, continuous deployment (CI/CD) pipeline. I believe that security needs to fit within the construct of the release management pipeline or CI/CD practice. Security becomes yet another error class to manage just like a bug. If you have much more frequent release cycles in the cloud, security testing and validation has to move at the same speed and be part of the same release pipeline. The software tools you choose to manage such pipelines should accommodate this modern approach.

Q: Explain the concept of DevSecOps and why it’s an important best practice for public cloud security?

A: DevOps is a cultural construct. It is not a function. It is a way of doing something—specifically, a way of building a cloud-native application. And a new term, DevSecOps, has emerged which contends that security should be part of the DevOps construct. In a sense, DevOps is a continuum from development all the way to operations, and the DevSecOps philosophy says that development, security, and operations are one continuum.

Q: DevOps and InfoSec teams are not typically aligned—what are your thoughts on how to meld the decentralized, distributed world of DevOps with the traditional command-and-control approach of security management?

A: It starts with a very strong, healthy respect for the discipline of security within the overall application development construct. Traditionally, InfoSec professionals didn’t intersect with DevOps teams because security work happened as an independent activity or as an adjunct to the core application development process. Now, as we’re talking about developing cloud-native applications, security is part of how you develop because you want to maximize agility and frankly, harness the pace of development changes going on.

One practice that works well is when security organizations embed a security professional or engineer within an application group or DevOps group. Oftentimes, the application owners complain that the security professionals are too far removed from the application development process so they don’t understand it or they have to explain a lot, which slows things down. I’m proposing breaking that log jam by embedding a security person in the application group so that the security professional becomes the delegate of the security organization, bringing all their tools, knowledge, and capabilities.

At Symantec, we also created a cloud security practitioners working group as we started our cloud journey. Engineers involved in migrating to the public cloud as well as our security professionals work as one common operating group to come up with best practices and tools. That has been very powerful because it is not a top-down approach, it’s not a bottom-up approach–it is the best outcome of the collective thinking of these two groups.

Q: How does the DevSecOps paradigm address the need for continuous compliance management as a new business imperative?

A: It’s not as much that DevSecOps invokes continuous compliance validation as much as the move to a cloud-native environment does. Changes to configurations and infrastructure are much more rapid and distributed in nature. Since changes are occurring almost on a daily basis, the best practice is to move to a continuous validation mode. The cloud allows you to change things or move things really rapidly and in a software-driven way. That means lots of good things, but it can also mean increasing risk a lot. This whole notion of DevSecOps to CI/CD to continuous validation comes from that basic argument.

AWS Cloud: Proactive Security and Forensic Readiness – Part 4

Part 4: Detective Controls in AWS

By Neha Thethi, Information Security Analyst, BH Consulting

Security controls can be either technical or administrative. A layered security approach to protecting an organization’s information assets and infrastructure should include preventative controls, detective controls and corrective controls.

Preventative controls exist to prevent the threat from coming in contact with the weakness. Detective controls exist to identify that the threat has landed in our systems. Corrective controls exist to mitigate or lessen the effects of the threat being manifested.

This post relates to detective controls within AWS Cloud. It’s the fourth in a five-part series that provides a checklist for proactive security and forensic readiness in the AWS Cloud environment.

Detective controls in AWS Cloud

AWS detective controls include processing of logs and monitoring of events that allow for auditing, automated analysis, and alarming.

These controls can be implemented using AWS CloudTrail logs to record AWS API calls, service-specific logs (for Amazon S3, Amazon CloudFront, CloudWatch Logs, VPC flow logs, ELB logs, etc.) and AWS Config to maintain a detailed inventory of AWS resources and configuration. Amazon CloudWatch is a monitoring service for AWS resources and can be used to trigger CloudWatch Events to automate security responses. Another useful tool is Amazon GuardDuty, a managed threat detection service in AWS that continuously monitors for malicious or unauthorized behavior.
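As a quick illustration, here is a minimal boto3 sketch that enables GuardDuty in a region and queries recent CloudTrail management events for a sensitive API call; the event name used is just an example, not a recommendation.

```python
import boto3

# Turn on GuardDuty threat detection in the current region
# (this call fails if a detector already exists in the region).
guardduty = boto3.client("guardduty")
detector_id = guardduty.create_detector(Enable=True)["DetectorId"]

# Query the CloudTrail 90-day event history for a sensitive API call.
cloudtrail = boto3.client("cloudtrail")
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "DeleteTrail"}],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username"))
```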

Event logging

Security event logging is crucial for detecting security threats or incidents. Security teams should produce, keep and regularly review event logs that record user activities, exceptions, faults and information security events. They should collect logs centrally and analyse them automatically to detect suspicious behavior. Automated alerts can monitor key metrics and events related to security. It is critical to analyse logs in a timely manner to identify and respond to potential security incidents. In addition, logs are indispensable for forensic investigations.
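One common pattern for automated alerts is a CloudWatch Logs metric filter paired with an alarm. The sketch below assumes CloudTrail is already delivering to a CloudWatch Logs group; the log group name and SNS topic ARN are placeholders, and the failed-login pattern is only an example.

```python
import boto3

logs = boto3.client("logs")
cloudwatch = boto3.client("cloudwatch")

# Count failed console logins found in the CloudTrail log group (name is an assumption).
logs.put_metric_filter(
    logGroupName="CloudTrail/DefaultLogGroup",
    filterName="ConsoleLoginFailures",
    filterPattern='{ ($.eventName = ConsoleLogin) && ($.errorMessage = "Failed authentication") }',
    metricTransformations=[{
        "metricName": "ConsoleLoginFailureCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)

# Alarm when failures spike; the SNS topic ARN is a placeholder.
cloudwatch.put_metric_alarm(
    AlarmName="ConsoleLoginFailures",
    Namespace="Security",
    MetricName="ConsoleLoginFailureCount",
    Statistic="Sum",
    Period=300,
    EvaluationPeriods=1,
    Threshold=3,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:security-alerts"],
)
```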

The challenge of managing logs

However, managing logs can be a challenge. AWS makes log management easier to implement by providing the ability to define a data-retention lifecycle or define where data will be preserved, archived, or eventually deleted. This makes predictable and reliable data handling simpler and more cost-effective.

The following list recommends the use of AWS Trusted Advisor for detecting security threats within the AWS environment. It covers collection, aggregation, analysis, monitoring and retention of logs, as well as monitoring of security events and billing to detect unusual activity.

The checklist provides best practices for the following:

  1. Are you using Trusted Advisor?
  2. How are you capturing and storing logs?
  3. How are you analyzing logs?
  4. How are you retaining logs?
  5. How are you receiving notification and alerts?
  6. How are you monitoring billing in your AWS account(s)?

Best-practice checklist

1. Are you using Trusted Advisor?
  • Use AWS Trusted Advisor to check for security compliance.
2. How are you capturing and storing logs?
  • Activate AWS CloudTrail.
  • Collect logs from various locations/services including AWS APIs and user-related logs (e.g. AWS CloudTrail), AWS service-specific logs (e.g. Amazon S3, Amazon CloudFront, CloudWatch logs, VPC flow logs, ELB logs, etc.), operating system-generated logs, IDS/IPS logs and third-party application-specific logs
  • Use services and features such as AWS CloudFormation, AWS OpsWorks, or Amazon Elastic Compute Cloud (EC2) user data, to ensure that instances have agents installed for log collection
  • Move logs periodically from the source either directly into a log processing system (e.g. CloudWatch Logs) or into an Amazon S3 bucket for later processing, based on business needs.
3. How are you analyzing logs?
  • Parse and analyse security data using solutions such as AWS Config, AWS CloudWatch, Amazon EMR, Amazon Elasticsearch Service, etc.
  • Perform analysis and visualization with Kibana.
4. How are you retaining logs?
  • Store data centrally using Amazon S3 and, for long-term archiving if required, Amazon Glacier
  • Define a data-retention lifecycle for logs. By default, CloudWatch logs are kept indefinitely and never expire. You can adjust the retention policy for each log group, keeping indefinite retention or choosing a retention period between one day and 10 years (see the sketch after this checklist)
  • Manage log retention automatically using AWS Lambda.
5. How are you receiving notification and alerts?
  • Use Amazon CloudWatch Events for routing events of interest and information reflecting potentially unwanted changes into a proper workflow
  • Use Amazon GuardDuty to continuously monitor for malicious or unauthorized behavior
  • Send events to targets like an AWS Lambda function, Amazon SNS, or other targets for alerts and notifications.
6. How are you monitoring billing in your AWS account(s)?
  • Use detailed billing to monitor your monthly usage regularly
  • Use consolidated billing for multiple accounts.
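As referenced in item 4 above, here is a minimal boto3 sketch of a log-retention lifecycle. The log group and bucket names are placeholders, and the 90-day transition and roughly 7-year expiry are examples rather than recommendations.

```python
import boto3

# Keep a CloudWatch Logs group for one year instead of the indefinite default.
logs = boto3.client("logs")
logs.put_retention_policy(logGroupName="CloudTrail/DefaultLogGroup", retentionInDays=365)

# Transition S3 log objects to Glacier after 90 days and expire them after ~7 years.
s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-log-archive",  # placeholder bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-then-expire",
            "Status": "Enabled",
            "Filter": {"Prefix": "logs/"},
            "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 2555},
        }]
    },
)
```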


Next up in the blog series is Part 5 – Incident Response in AWS – best practice checklist. Stay tuned. Let us know in the comments below if we have missed anything in our checklist!

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, there may be features being rolled out as these blogs were being written. Also, please note that this checklist is for guidance purposes only.

Avoiding Holes in Your AWS Buckets

Enterprises are moving to the cloud at a breathtaking pace, and they’re taking valuable data with them. Hackers are right behind them, hot on the trail of as much data as they can steal. The cloud upends traditional notions of networks and hosts, and it topples security practices that use them as a proxy to protect data access. In public clouds, networks and hosts are no longer adequate control points for protecting resources and data.

Amazon Web Services (AWS) S3 buckets are the destination for much of the data moving to the cloud. Given how sensitive this data is, one would expect enterprises to pay close attention to their S3 security posture. Unfortunately, many news stories highlight how many S3 buckets have been misconfigured and left open to public access. It’s one of the most common security weaknesses in the great migration to the cloud, leaving gigabytes of data for hackers to grab.

When investigating why cloud teams were making what seemed to be an obvious configuration mistake, two primary reasons surfaced:

1. Too Much Flexibility (Too Many Options) Turns Into Easy Mistakes

S3 is the oldest AWS service and was available before EC2 or Identity and Access Management (IAM). Some access control capabilities were built specifically for S3 before IAM existed. As it stands, there are five different ways to configure and manage access to S3 buckets.

  • S3 Bucket Policies
  • IAM Policies
  • Access Control Lists
  • Query string authentication/ static Web hosting
  • API access to change the S3 policies

More ways to configure implies more flexibility, but it also means a higher chance of making a mistake. The other challenge is that there are two separate policy scopes, one for buckets and one for the objects within them, which makes things more complex.
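To make the overlap concrete, the boto3 sketch below applies one of the five mechanisms, a bucket policy, and then reads an object ACL, a separate mechanism evaluated alongside it. The bucket name, object key, and role ARN are placeholders.

```python
import json
import boto3

s3 = boto3.client("s3")

# Mechanism 1: a bucket policy attached to the bucket itself (placeholder names/ARNs).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowSpecificRoleOnly",
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/analytics"},
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-data-bucket/*",
    }],
}
s3.put_bucket_policy(Bucket="example-data-bucket", Policy=json.dumps(policy))

# Mechanism 2: object ACLs, which live per object and can contradict the policy above.
acl = s3.get_object_acl(Bucket="example-data-bucket", Key="report.csv")  # placeholder key
print(acl["Grants"])
```

Because each mechanism is evaluated independently, a restrictive bucket policy can coexist with an overly permissive object ACL, which is exactly how buckets end up unintentionally exposed.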

2. A “User” in AWS Is Different from a “User” in Your Traditional Datacenter

Amazon allows great flexibility in making sure data sharing is simple and users can easily access data across accounts or from the Internet. For traditional enterprises the concept of a “user” typically means a member of the enterprise. In AWS the definition of user is different. On an AWS account, the “Everyone” group includes all users (literally anyone on the internet) and “AWS Authenticated User” means any user with an AWS account. From a data protection perspective, that’s just as bad because anyone on the Internet can open an AWS account.

Customers moving from a traditional enterprise, if not careful, can easily misread the meaning of these access groups and open S3 buckets to “Everyone” or “AWS Authenticated User”, which means opening the buckets to the world.

S3 Security Checklist

If you are in AWS, and using S3, here is a checklist of things you should configure to ensure your critical data is secure.

Audit for Open Buckets Regularly:  At regular intervals, check for buckets which are open to the world. Malicious users can exploit these open buckets to find objects with misconfigured ACL permissions and then access those objects.
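Such an audit can be scripted. The following boto3 sketch flags buckets with ACL grants to the public groups or without a public access block configured; it is a starting point under those assumptions, not a complete audit.

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")
PUBLIC_URIS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    # Flag any ACL grant to "Everyone" or "Any AWS authenticated user".
    acl = s3.get_bucket_acl(Bucket=name)
    open_grants = [g for g in acl["Grants"] if g["Grantee"].get("URI") in PUBLIC_URIS]
    # Also flag buckets with no (or incomplete) public access block configuration.
    try:
        pab = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
    except ClientError:
        pab = {}
    if open_grants or not pab or not all(pab.values()):
        print(f"Review bucket: {name}")
```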

Encrypt the Data: Enable server-side encryption on AWS so that data is encrypted at rest, i.e. encrypted when objects are written and decrypted when data is read. Ideally you should also enable client-side encryption.
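A minimal boto3 sketch for setting default server-side encryption on a bucket; the bucket name and KMS key alias are placeholders.

```python
import boto3

s3 = boto3.client("s3")

# Default server-side encryption for new objects written to the bucket.
s3.put_bucket_encryption(
    Bucket="example-data-bucket",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [{
            "ApplyServerSideEncryptionByDefault": {
                "SSEAlgorithm": "aws:kms",
                "KMSMasterKeyID": "alias/example-key",  # hypothetical KMS key alias
            }
        }]
    },
)
```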

Encrypt the Data in Transit: SSL/TLS helps secure data in transit when it is accessed from S3 buckets. Enable Secure Transport in AWS to prevent man-in-the-middle attacks.
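One common way to enforce this is a bucket policy that denies any request not made over TLS. A sketch with a placeholder bucket name follows; note that put_bucket_policy replaces any existing policy, so in practice this statement would be merged into the bucket’s full policy.

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny any request to the bucket that does not arrive over TLS.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-data-bucket",
            "arn:aws:s3:::example-data-bucket/*",
        ],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}
s3.put_bucket_policy(Bucket="example-data-bucket", Policy=json.dumps(policy))
```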

Enable Bucket Versioning: Ensure that your AWS S3 buckets have versioning enabled. This will help preserve and recover changed and deleted S3 objects, which can help with ransomware and accidental deletions.
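Enabling versioning is a single boto3 call; the bucket name below is a placeholder.

```python
import boto3

s3 = boto3.client("s3")

# Turn on versioning so overwritten or deleted objects can be recovered.
s3.put_bucket_versioning(
    Bucket="example-data-bucket",  # placeholder bucket name
    VersioningConfiguration={"Status": "Enabled"},
)
```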

Enable MFA Delete: By default, a user can delete an S3 bucket even without logging in using MFA. It is highly recommended that only users authenticated using MFA have the ability to delete buckets. Using MFA to protect against accidental or intentional deletion of objects in S3 buckets adds an extra layer of security.

Enable Logging: If an S3 bucket has the Server Access Logging feature enabled, you will be able to track every request made to access the bucket. This allows users to monitor activity, detect anomalies and protect against unauthorized access.
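A boto3 sketch of enabling server access logging; the bucket names are placeholders, and the target bucket must already grant the S3 log delivery service permission to write to it.

```python
import boto3

s3 = boto3.client("s3")

# Deliver access logs for the data bucket into a separate log bucket.
s3.put_bucket_logging(
    Bucket="example-data-bucket",  # placeholder source bucket
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-access-logs",        # placeholder log bucket
            "TargetPrefix": "s3/example-data-bucket/",    # keeps logs grouped per bucket
        }
    },
)
```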

Monitor all S3 Policy Changes: AWS CloudTrail provides logs for all changes to S3 policies. Auditing policies and checking for public buckets helps, but instead of waiting for regular audits, any change to the policy of an existing bucket should be monitored in real time.
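One way to watch for these changes in near real time is a CloudWatch Events (EventBridge) rule matching CloudTrail-recorded S3 policy calls. The sketch assumes CloudTrail is recording management events; the SNS topic ARN is a placeholder.

```python
import json
import boto3

events = boto3.client("events")

# Match CloudTrail-recorded calls that change bucket policies or ACLs.
events.put_rule(
    Name="s3-policy-changes",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["AWS API Call via CloudTrail"],
        "detail": {
            "eventSource": ["s3.amazonaws.com"],
            "eventName": ["PutBucketPolicy", "PutBucketAcl", "DeleteBucketPolicy"],
        },
    }),
)

# Route matches to an SNS topic for real-time alerting (placeholder ARN).
events.put_targets(
    Rule="s3-policy-changes",
    Targets=[{"Id": "notify-security",
              "Arn": "arn:aws:sns:us-east-1:123456789012:security-alerts"}],
)
```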

Track Applications Accessing S3: In one attack vector, hackers create an S3 bucket in their account and send data from your account to their bucket. This reveals a limitation of network-centric security in the cloud: traffic needs to be permitted to S3, which is classified as an essential service. To prevent that scenario, you should have IDS capabilities at the application layer and track all the applications in your environment accessing S3. The system should alert if a new application or user starts accessing your S3 buckets.

Limit Access to S3 Buckets: Ensure that your AWS S3 buckets are configured to allow access only to specific IP addresses and authorized accounts in order to protect against unauthorized access.
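A bucket policy with an IP condition is one way to do this. Below is a sketch with placeholder bucket name and CIDR range; note that a blanket deny like this can also block legitimate AWS service access, so scope it carefully before applying.

```python
import json
import boto3

s3 = boto3.client("s3")

# Deny access unless the request comes from an approved corporate CIDR (placeholder range).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideCorporateNetwork",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [
            "arn:aws:s3:::example-data-bucket",
            "arn:aws:s3:::example-data-bucket/*",
        ],
        "Condition": {"NotIpAddress": {"aws:SourceIp": "203.0.113.0/24"}},
    }],
}
s3.put_bucket_policy(Bucket="example-data-bucket", Policy=json.dumps(policy))
```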

Close Buckets in Real time:  Even a few moments of public exposure of an S3 bucket can be risky, as it can result in data leakage. S3 supports tags, which allow users to label buckets; administrators can label buckets that need to be public with a tag called “Public”. CloudTrail can then alert when a policy change makes a bucket public that does not carry that tag, and Lambda functions can change the permissions in real time to correct the policies on anomalous or malicious activity.
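A minimal sketch of such a remediation function, assuming it is triggered by an EventBridge rule on CloudTrail-recorded PutBucketAcl/PutBucketPolicy events and that buckets intended to be public carry a Public=true tag (both assumptions, not a prescribed design):

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")


def handler(event, context):
    """Lambda handler invoked by an EventBridge rule on S3 policy/ACL change events."""
    bucket = event["detail"]["requestParameters"]["bucketName"]

    # Buckets explicitly tagged Public=true are allowed to stay open.
    tags = {}
    try:
        tags = {t["Key"]: t["Value"] for t in s3.get_bucket_tagging(Bucket=bucket)["TagSet"]}
    except ClientError:
        pass  # bucket has no tags
    if tags.get("Public") == "true":
        return

    # Otherwise re-apply the public access block to close the bucket in near real time.
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )
```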