Are Healthcare Breaches Down Because of CASBs?

By Salim Hafid, Product Marketing Manager, Bitglass

Bitglass just released its fourth annual Healthcare Breach Report, which dives into healthcare breaches in 2017 and compares the rate of breaches with previous years. A big surprise this year was the precipitous drop in both the volume of breaches and the scope of each attack. Our research team set out to discover why this happened.

Our annual healthcare report is based on breach data from the US Department of Health and Human Services. The government mandates that all healthcare organizations and their affiliates publicly disclose breaches that affect at least 500 individuals. The result is several years of data on the causes of healthcare breaches as well as information about which firms are targeted by attackers.

It seems that after several years of being a top target for hackers looking to steal valuable data, healthcare firms' security teams are now getting their act together. Across the vertical, security has become a priority. Many organizations are migrating to the cloud in an effort to shift the infrastructure security burden to powerful tech giants like Amazon, Google, and Microsoft. This shift to cloud has also driven many to adopt third-party security solutions that allow them to obtain cross-app security, achieve HIPAA compliance, and mitigate the risk and impact of breaches.

In particular, cloud access security brokers (CASBs) are taking the healthcare sector by storm and are playing an important part in preventing breaches. Back in 2015, few organizations had a CASB deployed and many were at risk of massive data loss. Today, forward-thinking organizations like John Muir Health have deployed a Next-Gen CASB to great success. IT administrators can be alerted immediately to high-risk data outflows and to new applications that pose a threat, and can define granular policies that prevent mega-breaches of the sort that cost Anthem and Premera hundreds of millions of dollars.

Read the full healthcare breach report to learn about the leading causes of breaches in the sector, the average cost of a stolen health record, and more.

You Are the Weakest Link – Goodbye

By Jacob Serpa, Product Marketing Manager, Bitglass

Security in the cloud is a top concern for the modern enterprise. Fortunately, provided that organizations do their due diligence when evaluating security tools, storing data in the cloud can be even more secure than storing it on premises. However, this does require deploying a variety of solutions for securing data at rest, securing data at access, securing mobile and unmanaged devices, defending against malware, detecting unsanctioned cloud apps (shadow IT), and more. Amidst this rampant adoption of security tools, organizations often forget to bolster the weakest link in their security chain: their users.

The Weak Link in the Chain
While great steps are typically taken to secure data, relatively little thought is given to the behavior of the users who handle it. This is likely due to an ingrained reliance upon static security tools that fail to adapt to situations in real time. Regardless, users make numerous decisions that place data at risk – some less obvious than others. In the search for total data protection, this dynamic human element cannot be ignored.

External sharing is one example of a risky user behavior. Organizations need visibility and control over where their data goes in order to keep it safe. When users send files and information outside of the company, protecting it becomes very challenging. While employees may do this either maliciously or just carelessly, the result is the same – data is exposed to unauthorized parties. Somewhat similarly, this can occur through shadow IT when users store company data in unsanctioned cloud applications over which the enterprise has no visibility or control.

Next, many employees use unsecured public WiFi networks to perform their work remotely. While this may seem like a convenient method of accessing employers’ cloud applications, it is actually incredibly dangerous for the enterprise. Malicious individuals can monitor traffic on these networks in order to steal users’ credentials. Additionally, credentials can fall prey to targeted phishing attacks that are enabled by employees who share too much information on social media. The fact that many individuals reuse passwords across multiple personal and corporate accounts only serves to exacerbate the problem.

In addition to the above, users place data at risk through a variety of other ill-advised behaviors. Unfortunately, traditional, static security solutions have a difficult time adapting to users’ actions and offering appropriate protections in real time.

Reforging the Chain
In the modern cloud, automated security solutions are a must. Reactive solutions that rely upon humans to analyze threats and initiate a response are incapable of protecting data in real time. The only way to ensure true automation is by using machine learning. When tools are powered by machine learning, they can protect data in a comprehensive fashion in the rapidly evolving, cloud-first world.

This next-gen approach can be particularly helpful when addressing threats that stem from compromised credentials and malicious or careless employees. User and entity behavior analytics (UEBA) baseline users’ behaviors and perform real-time analyses to detect suspicious activities. Whether credentials are used by thieving outsiders or employees engaging in illicit behaviors, UEBA can detect threats and respond by enforcing step-up, multi-factor authentication before allowing data access.
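
The core mechanism – baselining normal behavior and stepping up authentication when activity deviates from it – can be illustrated with a short, simplified sketch. The user names, features, and thresholds below are illustrative assumptions, not any vendor's actual model.

```python
from collections import defaultdict
from statistics import mean, stdev

class BehaviorBaseline:
    """Minimal UEBA-style baseline: learn a user's typical login hours and
    countries, then flag logins that deviate strongly from that baseline."""

    def __init__(self):
        self.login_hours = defaultdict(list)      # user -> observed login hours
        self.known_countries = defaultdict(set)   # user -> countries seen before

    def observe(self, user, hour, country):
        # Record normal activity to build the baseline.
        self.login_hours[user].append(hour)
        self.known_countries[user].add(country)

    def is_suspicious(self, user, hour, country, z_threshold=2.5):
        hours = self.login_hours[user]
        if len(hours) < 10:
            return True   # not enough history: treat as unknown, require step-up MFA
        if country not in self.known_countries[user]:
            return True   # login from a never-before-seen country
        z = abs(hour - mean(hours)) / (stdev(hours) or 1.0)
        return z > z_threshold


baseline = BehaviorBaseline()
for h in [9, 9, 10, 8, 9, 10, 11, 9, 8, 10]:
    baseline.observe("alice", h, "US")

# A 3 a.m. login from a new country would trigger step-up authentication.
print(baseline.is_suspicious("alice", hour=3, country="RU"))  # True
```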

Machine learning is helpful for defending against other threats as well. For example, advanced anti-malware solutions can leverage machine learning to analyze the behaviors of files. In this way, they can detect and block unknown, zero-day malware – something beyond the scope of traditional, signature-based solutions that can only check for documented, known threats.

Even less conventional tools like shadow IT discovery are beginning to be endowed with machine learning. Historically, these solutions have relied upon lists generated by massive human teams that constantly categorize and evaluate the risks of new cloud applications. However, this approach fails to keep pace with the perpetually growing number of new and updated apps. Because of this, leading cloud access security brokers (CASBs) are using machine learning to rank and categorize new applications automatically, enabling immediate detection of new cloud apps in use. In other words, organizations can uncover all of the locations where careless and conniving employees store corporate data.

While training employees in best security practices is necessary, it is not sufficient for protecting data. Education must be paired with context-aware, automated security solutions (like CASBs) in order to reinforce the weak links in the enterprise’s security chain.

AWS Cloud: Proactive Security and Forensic Readiness – Part 2

By Neha Thethi, Information Security Analyst, BH Consulting

Part 2: Infrastructure-level protection in AWS 

This is the second in a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment. This post relates to protecting your virtual infrastructure within AWS.

Protecting any computing infrastructure requires a layered or defense-in-depth approach. The layers are typically divided into physical, network (perimeter and internal), system (or host), application, and data. In an Infrastructure as a Service (IaaS) environment, AWS is responsible for security ‘of’ the cloud including the physical perimeter, hardware, compute, storage and networking, while customers are responsible for security ‘in’ the cloud, or on layers above the hypervisor. This includes the operating system, perimeter and internal network, application and data.

Infrastructure protection requires defining trust boundaries (e.g., network boundaries and packet filtering), system security configuration and maintenance (e.g., hardening and patching), operating system authentication and authorizations (e.g., users, keys, and access levels), and other appropriate policy enforcement points (e.g., web application firewalls and/or API gateways).

The key AWS service that supports service-level protection is AWS Identity and Access Management (IAM) while Virtual Private Cloud (VPC) is the fundamental service that contributes to securing infrastructure hosted on AWS. VPC is the virtual equivalent of a traditional network operating in a data center, albeit with the scalability benefits of the AWS infrastructure. In addition, there are several other services or features provided by AWS that can be leveraged for infrastructure protection.

The following list mainly focuses on network- and host-level boundary protection, protecting the integrity of the operating system on EC2 instances and Amazon Machine Images (AMIs), and the security of containers on AWS.

The checklist provides best practices for the following:

  1. How are you enforcing network and host-level boundary protection?
  2. How are you protecting against distributed denial of service (DDoS) attacks at network and application level?
  3. How are you managing the threat of malware?
  4. How are you identifying vulnerabilities or misconfigurations in the operating system of your Amazon EC2 instances?
  5. How are you protecting the integrity of the operating system on your Amazon EC2 instances?
  6. How are you ensuring security of containers on AWS?
  7. How are you ensuring only trusted Amazon Machine Images (AMIs) are launched?
  8. How are you creating secure custom (private or public) AMIs?

IMPORTANT NOTE: Identity and access management is an integral part of securing an infrastructure; however, you’ll notice that the following checklist does not focus on the AWS IAM service. I have covered this in a separate checklist on IAM best practices here.

Best-practice checklist

1. How are you enforcing network and host-level boundary protection?

  • Establish appropriate network design for your workload to ensure only desired network paths and routing are allowed
  • For large-scale deployments, design network security in layers – external, DMZ, and internal
  • When designing NACL rules, remember that NACLs are stateless firewalls, so be sure to define both inbound and outbound rules
  • Create secure VPCs using network segmentation and security zoning
  • Carefully plan routing and server placement in public and private subnets.
  • Place instances (EC2 and RDS) within VPC subnets and restrict access using security groups and NACLs
  • Use non-overlapping IP addresses with other VPCs or data centre in use
  • Control network traffic by using security groups (stateful firewalls, outside the OS layer), NACLs (stateless firewalls, at the subnet level), bastion hosts, host-based firewalls, etc. (see the sketch after this list)
  • Use a virtual private gateway (VGW) where Amazon VPC-based resources require remote network connectivity
  • Use IPSec or AWS Direct Connect for trusted connections to other sites
  • Use VPC Flow Logs for information about the IP traffic going to and from network interfaces in your VPC
  • Protect data in transit to ensure the confidentiality and integrity of data, as well as the identities of the communicating parties.
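
As a rough illustration of the security-group and NACL guidance above, here is a minimal boto3 sketch; the VPC ID, NACL ID, and corporate CIDR are placeholder assumptions rather than values from the post. It allows HTTPS from a trusted range via a stateful security group and defines the matching stateless inbound and outbound NACL entries.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Stateful control: a security group that only allows HTTPS from a trusted CIDR.
sg = ec2.create_security_group(
    GroupName="web-tier-sg",
    Description="Allow HTTPS from corporate network only",
    VpcId="vpc-0123456789abcdef0",          # placeholder VPC
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "203.0.113.0/24", "Description": "corporate egress range"}],
    }],
)

# Stateless control: NACLs need explicit inbound *and* outbound rules,
# including the ephemeral return ports for the outbound leg.
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",   # placeholder NACL
    RuleNumber=100, Protocol="6", RuleAction="allow", Egress=False,
    CidrBlock="203.0.113.0/24", PortRange={"From": 443, "To": 443},
)
ec2.create_network_acl_entry(
    NetworkAclId="acl-0123456789abcdef0",
    RuleNumber=100, Protocol="6", RuleAction="allow", Egress=True,
    CidrBlock="203.0.113.0/24", PortRange={"From": 1024, "To": 65535},
)
```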

2. How are you protecting against distributed denial of service (DDoS) attacks at network and application level?

  • Use firewalls, including security groups, network access control lists, and host-based firewalls
  • Use rate limiting to protect scarce resources from overconsumption
  • Use Elastic Load Balancing and Auto Scaling to configure web servers to scale out when under attack (based on load), and shrink back when the attack stops
  • Use AWS Shield, a managed DDoS protection service that safeguards web applications running on AWS
  • Use Amazon CloudFront to absorb DoS/DDoS flooding attacks
  • Use AWS WAF with AWS CloudFront to help protect your web applications from common web exploits that could affect application availability, compromise security, or consume excessive resources
  • Use Amazon CloudWatch to detect DDoS attacks against your application (see the sketch after this list)
  • Use VPC Flow Logs to gain visibility into traffic targeting your application.
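
To make the CloudWatch item above concrete, a minimal sketch might alarm on an Application Load Balancer's request count so that a traffic flood is surfaced quickly. The namespace and metric are real CloudWatch values, but the load balancer name, threshold, and SNS topic below are illustrative assumptions to be tuned to your own traffic profile.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="eu-west-1")

# Alarm when request volume on an Application Load Balancer spikes far above normal,
# a common early indicator of an HTTP flood.
cloudwatch.put_metric_alarm(
    AlarmName="alb-request-spike",
    Namespace="AWS/ApplicationELB",
    MetricName="RequestCount",
    Dimensions=[{"Name": "LoadBalancer", "Value": "app/my-alb/0123456789abcdef"}],  # placeholder
    Statistic="Sum",
    Period=60,                      # evaluate per minute
    EvaluationPeriods=3,            # three consecutive breaches before alarming
    Threshold=50000,                # tune to your normal traffic profile
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:eu-west-1:123456789012:security-alerts"],  # placeholder topic
)
```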

3. How are you managing the threat of malware?

  • Give users the minimum privileges they need to carry out their tasks
  • Patch external-facing and internal systems to the latest security level.
  • Use a reputable and up-to-date antivirus and antispam solution on your system.
  • Install host based IDS with file integrity checking and rootkit detection
  • Use IDS/IPS systems with statistical/behavioural or signature-based detection to identify and contain network attacks and Trojans.
  • Launch instances from trusted AMIs only
  • Only install and run trusted software from a trusted software provider (note: MD5 or SHA-1 should not be trusted if software is downloaded from a random source on the internet; see the sketch after this list)
  • Avoid SMTP open relay, which can be used to spread spam, and which might also represent a breach of the AWS Acceptable Use Policy.
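
Building on the note above about not trusting MD5 or SHA-1 for downloads from arbitrary sources, a simple integrity check compares a stronger digest against the value published by the software provider. The file path and expected digest in this sketch are placeholders.

```python
import hashlib

def sha256_digest(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file without loading it all into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED = "replace-with-vendor-published-digest"   # placeholder value
if sha256_digest("/tmp/agent-installer.run") != EXPECTED:   # placeholder path
    raise SystemExit("Digest mismatch: do not install this package")
```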

4. How are you identifying vulnerabilities or misconfigurations in the operating system of your Amazon EC2 instances?

  • Define an approach for securing your system; consider the level of access needed and take a least-privilege approach
  • Open only the ports needed for communication, harden OS and disable permissive configurations
  • Remove or disable unnecessary user accounts.
  • Remove or disable all unnecessary functionality.
  • Change vendor-supplied defaults prior to deploying new applications.
  • Automate deployments and remove operator access to reduce attack surface area using tools such as EC2 Systems Manager Run Command
  • Ensure operating system and application configurations, such as firewall settings and anti-malware definitions, are correct and up-to-date; Use EC2 Systems Manager State Manager to define and maintain consistent operating system configurations
  • Ensure an inventory of instances and installed software is maintained; Use EC2 Systems Manager Inventory to collect and query configuration about your instances and installed software
  • Perform routine vulnerability assessments when updates or deployments are pushed; Use Amazon Inspector to identify vulnerabilities or deviations from best practices in your guest operating systems and applications
  • Leverage automated patching tools such as EC2 Systems Manager Patch Manager to help you deploy operating system and software patches automatically across large groups of instances (see the sketch after this list)
  • Use AWS CloudTrail, AWS Config, and AWS Config Rules as they provide audit and change tracking features for auditing AWS resource changes.
  • Use template definition and management tools, including AWS CloudFormation to create standard, preconfigured environments.

5. How are you protecting the integrity of the operating system on your Amazon EC2 instances?

  • Use file integrity controls for Amazon EC2 instances (see the sketch after this list)
  • Use host-based intrusion detection controls for Amazon EC2 instances
  • Use a custom Amazon Machine Image (AMI) or configuration management tools (such as Puppet or Chef) that provide secure settings by default.
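
A bare-bones version of the file integrity control mentioned in the first bullet can be sketched in a few lines of Python: record digests of critical files once, then compare on each run. The monitored paths and baseline location are assumptions, and a production deployment would normally rely on a dedicated host-based IDS rather than this sketch.

```python
import hashlib, json, os

MONITORED = ["/etc/passwd", "/etc/ssh/sshd_config"]   # placeholder paths
BASELINE_FILE = "/var/lib/fim/baseline.json"          # placeholder location

def digest(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def build_baseline():
    os.makedirs(os.path.dirname(BASELINE_FILE), exist_ok=True)
    with open(BASELINE_FILE, "w") as f:
        json.dump({p: digest(p) for p in MONITORED}, f)

def check():
    with open(BASELINE_FILE) as f:
        baseline = json.load(f)
    return [p for p in MONITORED if digest(p) != baseline.get(p)]

if __name__ == "__main__":
    if not os.path.exists(BASELINE_FILE):
        build_baseline()
    changed = check()
    if changed:
        print("Integrity violation:", changed)   # in practice, alert and log centrally
```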

6. How are you ensuring security of containers on AWS?

  • Run containers on top of virtual machines
  • Run small images, remove unnecessary binaries
  • Use many small instances to reduce attack surface
  • Segregate containers based on criteria such as role or customer and risk
  • Set containers to run as non-root user
  • Set filesystems to be read-only
  • Limit container networking; Use AWS ECS to manage containers and define communication between containers
  • Leverage Linux kernel security features using tools like SELinux, Seccomp, AppArmor
  • Perform vulnerability scans of container images
  • Allow only approved images during build
  • Use tools such as Docker Bench to automate security checks
  • Avoid embedding secrets into images or environment variables; use S3-based secrets storage instead (see the sketch after this list).
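
For the final bullet, a container entrypoint can pull its secrets from a private, access-controlled S3 object at startup instead of baking them into the image or environment variables. The bucket name, object key, and secret field below are placeholders; the container's IAM role is assumed to grant read access to that object.

```python
import json
import boto3

def load_secrets(bucket="my-secrets-bucket", key="app/prod/secrets.json"):
    """Fetch application secrets from a private, encrypted S3 object at container
    startup, using the task/instance role instead of baked-in credentials."""
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    return json.loads(body)

if __name__ == "__main__":
    secrets = load_secrets()            # placeholder bucket and key
    db_password = secrets["db_password"]  # placeholder field name
    # ... start the application with the retrieved credentials ...
```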

7. How are you ensuring only trusted Amazon Machine Images (AMIs) are launched?

  • Treat shared AMIs as you would any foreign code that you might consider deploying in your own data centre and perform the appropriate due diligence
  • Look for a description of the shared AMI, and the AMI ID, in the Amazon EC2 forum
  • Check the aliased owner in the account field to find public AMIs from Amazon (see the sketch after this list).
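
As a sketch of the last point, boto3 can restrict AMI discovery to images owned by Amazon (or an explicit allow-list of account IDs) rather than arbitrary public images shared by unknown accounts; the name pattern below is an illustrative assumption.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Only consider AMIs whose owner is 'amazon', not arbitrary shared public images.
images = ec2.describe_images(
    Owners=["amazon"],
    Filters=[
        {"Name": "name", "Values": ["amzn2-ami-hvm-*-x86_64-gp2"]},  # placeholder name pattern
        {"Name": "state", "Values": ["available"]},
    ],
)
latest = max(images["Images"], key=lambda img: img["CreationDate"])
print(latest["ImageId"], latest["Name"])
```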

8. How are you creating secure custom (private or public) AMIs?

  • Disable root account API access keys and secret keys
  • Configure Public Key authentication for remote login
  • Restrict access to instances from limited IP ranges using Security Groups
  • Use bastion hosts to enforce control and visibility
  • Protect the .pem file on user machines
  • Delete keys from the authorized_keys file on your instances when someone leaves your organization or no longer requires access
  • Rotate credentials (DB, Access Keys)
  • Regularly run least privilege checks using IAM user Access Advisor and IAM user Last Used Access Keys
  • Ensure that software installed does not use default internal accounts and passwords.
  • Change vendor-supplied defaults before creating new AMIs
  • Disable services and protocols that authenticate users in clear text over the network, or otherwise insecurely.
  • Disable non-essential network services on startup. Only administrative services (SSH/RDP) and the services required for essential applications should be started.
  • Ensure all software is up to date with relevant security patches
  • For instantiated AMIs, update security controls by running custom bootstrapping Bash or Microsoft Windows PowerShell scripts, or use bootstrapping applications such as Puppet, Chef, Capistrano, Cloud-Init and Cfn-Init (see the sketch after this list)
  • Follow a formalised patch management procedure for AMIs
  • Ensure that the published AMI does not violate the Amazon Web Services Acceptable Use Policy. Examples of violations include open SMTP relays or proxy servers. For more information, see the Amazon Web Services Acceptable Use Policy
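
To illustrate the bootstrapping bullet above, the following sketch launches an instance with a user-data script that applies security updates and disables an unneeded service on first boot. The AMI ID, subnet, security group, and script contents are placeholders, not recommendations for any specific environment.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Placeholder bootstrap script: patch on first boot and stop an unneeded service.
USER_DATA = """#!/bin/bash
yum update -y --security
systemctl disable --now postfix
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",          # placeholder hardened AMI
    InstanceType="t2.micro",
    MinCount=1, MaxCount=1,
    SubnetId="subnet-0123456789abcdef0",      # placeholder private subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],
    UserData=USER_DATA,                       # boto3 base64-encodes this automatically
)
```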

Security at the infrastructure level, or any level for that matter, certainly requires more than just a checklist. For a comprehensive insight into infrastructure security within AWS, we suggest reading the following AWS whitepapers – AWS Security Pillar and AWS Security Best Practices.

For more details, refer to the relevant AWS documentation and resources.

Next up in the blog series is Part 3 – Data Protection in AWS – a best-practice checklist. Stay tuned.

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, new features may have been rolled out as these blogs were being written. Also, please note that this checklist is for guidance purposes only.

Securing the Internet of Things: Devices & Networks

By Ranjeet Khanna, Director of Product Management–IoT/Embedded Security, Entrust Datacard

The Internet of Things (IoT) is changing manufacturing for the better.

With data from billions of connected devices and trillions of sensors, supply chain and device manufacturing operators are taking advantage of new benefits. Think improved efficiency and greater flexibility among potential business models. But as the IoT assumes a bigger role across industries, security needs to take top priority. Here’s a look at four key challenges that must be addressed before realizing the rewards of increased connectivity.

Reducing risk
Mitigating risk doesn’t always have to come at the expense of uptime and reliability. With the right IoT security solutions, manufacturers can assign trusted identities to all devices or applications to ensure fraudsters remain on the outside looking in. Better yet, the integration of identity management can also pave the way for improved visibility of business operations, scalability, and access control. Instead of getting caught off guard by unforeseen occurrences, manufacturers will be prepared to address problems throughout every step of the product lifecycle.

Setting the stage for data sharing
Data drives the IoT. As more data is shared across connected ecosystems, the potential for analytics-based and even predictive advancements increases. Such improvements, however, aren’t all positive. Increased data sharing opens the door to additional cyber attacks. To help keep sensitive information under wraps, businesses should consider embedding trusted identities for devices at the time of manufacturing. From electronic control units within cars to the connected devices that make up smart cities, introducing trusted identities promises to not only secure data sharing, but also improve supply chain integrity and speed up IoT deployments along the way.

Securing networks & protocols
Through the IoT, old networks and protocols are being introduced to new devices. Enterprise-grade encryption-based technologies keep both greenfield and brownfield environments secure, regardless of protocol. While this extra step may take some time, the benefits are well worth it. Whether it’s an additional source of revenue or heightened security, implementing solutions that are effective across systems, designs and protocols can help ensure improved security for years to come.

Tying identity to security
Physical and digital security may seem like different subjects on the surface, but a closer look reveals some valuable similarities. Just as authorization is needed to enter a highly secure building, sensitive information should only be made available to users with the proper credentials. Dependent upon a variety of conditions – such as the time of day or type of device – rule-based authentication is one way to ensure untrusted devices or users can’t access a secure environment.

Supply chain and device manufacturing operators have not yet taken full advantage of IoT’s impressive potential. By enabling fast-tracking of deployment timelines and allowing organizations to more quickly realize business value in areas such as process optimization and automation, ioTrust could soon change that. Leverage the power of ioTrust to stay one step ahead of the competition.

Note: This is part two in a four-part blog series on Securing the IoT.
Check out Part One: Connected Cars

Zero-Day in the Cloud – Say It Ain’t So

By Steve Armstrong, Regional Sales Director, Bitglass

Zero-day vulnerabilities are computer or software security gaps that are unknown to the public – particularly to parties who would like to close said gaps, like the vendors of vulnerable software.

To many in the infosec community, the term “zero-day” is synonymous with the patching or updating of systems. Take, for example, the world of anti-malware vendors. There are those whose solutions utilize signatures or hashes to defend against threats. Their products ingest a piece of malware, run it through various systems, perhaps have a human analyze the file, and then write a signature. This is then pushed to their subscribers’ endpoints in order to update systems and defend them against that particular piece of malware. The goal is to get the update to systems before there is an infection (sadly, updates are not always timely). On the other hand, there are some vendors who reject this traditional, reactive method. Instead, they use artificial intelligence to solve the problem in real time.

When assessing threats, it comes down to what you don’t know. It can be difficult to respond to unknown threats until they strike. As they say, it’s not what you know that kills you – it’s what you don’t. This is also true in the SaaS space. The analogy is simple: new applications appear daily – some good, some bad – and even the good ones can have unknown data leakage paths. Treat them as a threat.

In order to respond to unknown cloud applications, you can do one of two things.

First, the standard practice among CASBs (cloud access security brokers) is to find the new application, work to understand the originating organization, analyze the application, identify the data leakage paths, gain an understanding of the controls, and then write a signature. This is all done by large teams of people with limited capacity – very much like the inefficient, signature-based anti-malware vendors. It can take days, weeks, or even months until an application signature is added to a support catalog. For organizations that want to protect their data, this is simply not good enough.

Option two is to utilize artificial intelligence and respond to new applications in the same manner as advanced anti-malware solutions. This route entails analyzing the application, identifying the data leakage paths, designing the control, and securing the application automatically in real time.

New, unknown applications should be responded to in the same fashion that an enterprise would respond to any other threat. Rather than waiting days, weeks, or months, they should be addressed immediately.


Saturday Security Spotlight: Tesla, FedEx, & the White House

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

—Tesla hacked and used to mine cryptocurrency
—FedEx exposes customer data in AWS misconfiguration
—White House releases cybersecurity report
—SEC categorizes knowledge of unannounced breaches as insider information
—More Equifax data stolen than initially believed

Tesla hacked and used to mine cryptocurrency
By targeting an unsecured administrative console for Tesla’s deployment of Kubernetes, Google’s open-source container-orchestration platform, hackers were able to infiltrate the company. The malicious parties then obtained credentials to Tesla’s AWS environment, gained access to proprietary information, and began running scripts to mine cryptocurrency using Tesla’s computing power.

FedEx exposes customer data in AWS misconfiguration
FedEx is one of the latest companies to suffer from an AWS misconfiguration. Bongo, acquired by FedEx in 2014 and subsequently renamed CrossBorder, is reported to have left an Amazon S3 bucket completely unsecured, exposing the data of nearly 120,000 customers. While it is believed that no data theft occurred, the company still left sensitive information (like customer passport details) exposed for an extended period.

White House releases cybersecurity report
In light of the escalating costs of cyberattacks in the United States, the White House released a report scrutinizing the current state of cybersecurity. In particular, the report recognized the critical link between cybersecurity and the economy at large. Should other countries execute cyberattacks against organizations responsible for US infrastructure, the repercussions could be severe.

SEC categorizes knowledge of unannounced breaches as insider information
The Securities and Exchange Commission recently announced that knowledge of unannounced breaches is insider information that should not be used to inform the purchase or sale of stock. This comes largely in response to Intel and Equifax executives selling stock before their companies announced breaches.

More Equifax data stolen than initially believed
In September of 2017, Equifax announced a massive breach that leaked names, home addresses, Social Security Numbers, and more. Interestingly (and frighteningly), it now appears that even more data was leaked than the company originally reported.

FedRAMP – Three Stages of Vulnerability Scanning and their Pitfalls

By Matt Wilgus, Practice Leader, Threat & Vulnerability Assessments, Schellman & Co.

Though vulnerability scanning is only one of the control requirements in FedRAMP, it is actually one of the most frequent pitfalls in terms of impact to an authorization to operate (ATO), as FedRAMP requirements expect cloud service providers (CSPs) to have a mature vulnerability management program. A CSP needs to have the right people, processes and technologies in place, and must successfully demonstrate maturity for all three. CSPs that have an easier time with the vulnerability scanning requirements follow a similar approach, which can be best articulated by breaking down the expectations into three stages.

1. Pre-Assessment

Approximately 60 to 90 days before an expected security assessment report (SAR), a CSP should provide the third-party assessment organization (3PAO) a recent set of scans, preferably from the most recent three months. The scan data should be provided in a format that can be parsed by the 3PAO. Several questions can be answered by providing scans well ahead of time:

  • Credentials – Are the scans being conducted from an authenticated perspective with a user having the highest level of privileges available?
  • Scan Types – Are infrastructure, database, and web application scans being performed?
  • Points of Contact – Who is responsible for configuring the scanner and running scans? Who is responsible for remediation?
  • Entire Boundary Covered – Is the full, in-scope environment being scanned?
  • Remediation – Are high severity findings being remediated within 30 days? Are moderate severity findings being remediated within 90 days? (See the sketch after this list.)
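
One practical way to stay ahead of the 30- and 90-day remediation expectations above is to check finding ages automatically. The sketch below assumes a generic CSV export with plugin_id, severity, and first_detected columns – not a FedRAMP-mandated schema – and flags findings that have exceeded their window.

```python
import csv
from datetime import date, datetime

# FedRAMP remediation expectations: high within 30 days, moderate within 90.
SLA_DAYS = {"high": 30, "moderate": 90}

def overdue_findings(csv_path, today=None):
    """Return findings from a scanner export that have exceeded their SLA.
    Assumes columns: plugin_id, severity, first_detected (YYYY-MM-DD)."""
    today = today or date.today()
    overdue = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            sla = SLA_DAYS.get(row["severity"].lower())
            if sla is None:
                continue  # low/informational findings have no hard deadline here
            first_seen = datetime.strptime(row["first_detected"], "%Y-%m-%d").date()
            age = (today - first_seen).days
            if age > sla:
                overdue.append((row["plugin_id"], row["severity"], age))
    return overdue

if __name__ == "__main__":
    for plugin, severity, age in overdue_findings("scan_findings.csv"):  # placeholder export
        print(f"{plugin}: {severity} finding open for {age} days")
```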

Within the pre-assessment, having all plugins enabled is frequently an area of discussion, as many CSPs want to disable plugins or sets of checks. Should a check need to be disabled, there must be a documented reason (e.g. degradation of performance or denial of service occurs with a given plugin). Do not disable checks simply because it is assumed a given type of asset doesn’t exist in the environment.

Properly configured and authenticated vulnerability scanners will typically not send families of vulnerability checks against hosts if the operating system or application does not match what is required by the family of checks – for example, NetWare checks will not be run if NetWare is not detected during the scan of the environment. The safest bet is to always enable everything. If a given check needs to be disabled, it should be noted as an exception with formal documentation detailing why it is disabled and what processes are in place to ensure the vulnerability being detected is covered by other mitigating factors.

The pre-assessment phase is also a good time for the CSP to document any known false positives that occur within the scan results and any operational requirements that prevent remediation from occurring.

2. Assessment

During the assessment kickoff, the CSP should be ready for the 3PAO to conduct vulnerability scans. If the CSP successfully addresses the questions in the pre-assessment phase, then any findings or issues during the assessment phase should be easy to address. There are three main areas to tackle while reviewing the scan data in the assessment phase:

  1. Current Picture – What vulnerabilities exist in the environment as of the current date?
  2. Reassurance on Remediation – Are vulnerabilities continuing to be remediated in a timely manner?
  3. Adjustments – What changes have been made since the pre-assessment?

Of the aforementioned three items, adjustments often have the biggest impact. Adjustments that frequently occur and need to be addressed include cases where the:

  • vulnerability scanning tool has changed
  • scan checks have been modified
  • personnel responsible for configuring and running the scans are no longer with the organization
  • technologies within the environment have changed
  • environment hosting the solution has changed

If any of these adjustments exist, the 3PAO will need to perform additional validation activities.

3. Final Scan

A final round of scans should be run by the CSP five to 10 days prior to the issuance of the SAR. At this point, all questions related to the personnel running the scans, the processes deployed, and the technologies implemented should be answered. The last set of scans should be limited in scope and used to show evidence of remediation activities on the vulnerabilities identified in the assessment phase. There are three primary goals related to the last piece of scan evidence:

  1. Targeted scans – Has a final set of scans that shows remediation of findings from the assessment phase been provided?
  2. Operational Requirements (OR) and False Positives (FP) – Are all ORs and FPs documented, reviewed and understood?
  3. Ready for Continuous Monitoring – Are there any high severity findings remaining, and is the CSP ready to provide monthly results to an agency or the Joint Authorization Board (JAB)?

High severity findings are highlighted due to their outsized impact on a FedRAMP ATO. A CSP cannot receive a recommendation for an ATO if any high severity vulnerabilities are present. Should any findings persist as of the date the SAR is issued, they should be tracked in the CSP’s Plan of Action and Milestones (POA&M).

For additional information on the timing and handling of vulnerability scans, please see the guidance documents on the FedRAMP website.


Securing the Internet of Things: Connected Cars

By Ranjeet Khanna, Director of Product Management–IoT/Embedded Security, Entrust Datacard

Establishing safety and security in automotive design goes far beyond crash test dummies.

By 2022, the global automotive Internet of Things (IoT) market is expected to skyrocket to $82.79 billion – and manufacturers are racing to capitalize on this growing opportunity. While embedded computation and networking have been around since the 1980s, the advent of connectivity opens up an array of new options for automakers. From advanced collision detection and predictive diagnostics to entertainment systems that load a driver’s favorite tunes the second they sit down, connected cars are poised to enhance the consumer experience.

Those extra conveniences, however, aren’t without their downsides. If not properly secured, connected cars threaten to expose sensitive consumer information. With data being passed between so many different connected channels, it’s easier than ever for hackers to get their hands on personally identifiable information.

In 2015, Chrysler announced a recall of 1.4 million vehicles after two technology researchers hacked into a Jeep Cherokee’s dashboard connectivity system. But the right security solutions can make such incidents a thing of the past.

Through new IoT security solutions, automotive manufacturers are able to assign a trusted identity to each and every device – regardless of whether it’s located inside a vehicle or elsewhere across the IoT ecosystem. This extra layer of security sets the stage for trusted communication between authorized users, devices and applications. Ensuring the right security level for the right device helps prevent data from being made accessible to unauthorized users or devices. Using cryptographic protection as well as strong authorization requirements will restrict access to those things, systems and users with the proper privileges.

In addition to creating a trusted IoT ecosystem, automotive designers also stand to realize significant business value. Instead of spending precious time determining which devices to trust, ioTrust makes it easy to not only recognize trusted devices, but operationalize them. That same convenience also extends to the supply chain, where manufacturers can get a better look at a product’s entire lifecycle – from creation to release.

IoT has burst onto the scene in a big way, especially in the quest to securely design the next connected car. But before making the most of automotive IoT, manufacturers must consider how to keep consumer data under wraps. By provisioning managed identities and authorization privileges, ioTrust paves the way for securely connected automotive systems.

Note: This is part of a blog series on Securing the IoT. 

CASBs and Education’s Flight to the Cloud

By Jacob Serpa, Product Marketing Manager, Bitglass

Cloud is becoming an integral part of modern organizations seeking productivity and flexibility. For higher education, cloud enables online course creation, dynamic collaboration on research documents, and more. As many cloud services like G Suite are discounted or given to educational institutions for free, adoption is made even simpler. However, across the multiple use cases in education, comprehensive security solutions must be used to protect data wherever it goes. The vertical as a whole needs real-time protection on any app, any device, anywhere.

The Problems
For academic institutions, research is often of critical importance. Faculty members create, share, edit, and reshare various documents in an effort to complete projects and remain at the cutting edges of their fields. Obviously, using cloud apps facilitates this process of collaboration and revision. However, doing so in an unsecured fashion can allow proprietary information to leak to unauthorized parties.

Another point of focus in education is how student and faculty PII (personally identifiable information) is used and stored in the cloud. As information moves to cloud apps, traditional security solutions fail to provide adequate visibility and control over data. Obviously, this creates compliance concerns with regulations, like FISMA and FERPA, that aim to protect personal information. Medical schools have the additional requirement of securing protected health information (PHI) and complying with HIPAA.

The Solutions
Fortunately, cloud access security brokers (CASBs) offer a variety of capabilities that address the above security concerns. Data leakage prevention (DLP), for example, can be used to protect data and reach regulatory compliance. DLP policies allow organizations to redact data like PII, quarantine sensitive files, and watermark and track documents. Encryption can be used to obfuscate sensitive data and prevent unauthorized users from viewing things like PHI. Contextual access controls govern data access based on factors like user group, geographical location, and more.

To secure cloud, present-day organizations must also secure mobile data access. Fortunately, agentless mobile security solutions enable BYOD without requiring installations on unmanaged devices. This is critical for ensuring device functionality, user privacy, and employee adoption. Some agentless solutions can enforce device security configurations like PIN codes, selectively wipe corporate data on any device, and more.