Cloud Computing: A Little Less Cloudy

May 16, 2016

By Christina McGhee, Manager/FedRAMP Technical Lead, Schellman

Today, consumers have an increasing interest in implementing cloud solutions to process and store their data. They are looking to take advantage of the benefits provided by cloud computing, including flexibility, cost savings, and availability. Fortunately, there are many cloud solutions available to consumers, touting cloud computing features such as multi-tenancy, virtualization, or increased collaboration. But is it really a cloud service?

With the rapid growth of these types of solutions, consumers and other interested organizations want to identify whether a service is actually a cloud service.

In actuality, there is such a thing as a cloud service. It has a definition, and we have seen federal agencies require cloud service providers to justify why their service is considered a cloud service.

The five essential cloud characteristics are based on the National Institute of Standards and Technology’s (NIST) definition of cloud computing in Special Publication (SP) 800-145. Here, NIST defines cloud computing as a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.

According to NIST SP 800-145, a cloud service employs all of the following five characteristics:

  1. On-demand self-service – A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically without requiring human interaction with each service provider.
  2. Broad network access – Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, tablets, laptops, and workstations).
  3. Resource pooling – The provider’s computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control or knowledge over the exact location of the provided resources but may be able to specify location at a higher level of abstraction (e.g., country, state, or datacenter). Examples of resources include storage, processing, memory, and network bandwidth.
  4. Rapid elasticity – Capabilities can be elastically provisioned and released, in some cases automatically, to scale rapidly outward and inward commensurate with demand. To the consumer, the capabilities available for provisioning often appear to be unlimited and can be appropriated in any quantity at any time.
  5. Measured service – Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service.

Whether you are a cloud service provider, consumer, or other interested party, it is important to identify how the cloud service offering meets each of the five essential characteristics. For example, cloud service providers in the FedRAMP authorization process usually document how their service meets each of the five essential cloud computing characteristics in their System Security Plan (SSP).

It goes without saying that regardless of whether or not a service meets the definition of a cloud service, the cloud service provider and consumer must always plan and prepare for the security risks associated with providing or using the cloud service and the types of data it will consume. The cloud service provider is responsible for selecting a security program framework to implement security controls specific to cloud environments and the data protection requirements of their customers. Equally, the consumer must be fully aware of the data they plan to process and/or store with the cloud service and their responsibilities to protect that data.

 

Providing Trust and Assurance Through Cloud Certification and Attestation: A Complimentary CSA STAR Program Webinar by Schellman

May 12, 2016

 

By Avani Desai, Executive Vice President, Schellman

In the last 24 months, the Cloud Security Alliance (CSA) has made great strides in enhancing its CSA Security, Trust and Assurance Registry (STAR) Program. In brief, the STAR Program is a publicly available registry designed to recognize the assurance requirements and maturity levels of cloud service providers (CSPs). Prior to the guidance for STAR Certification and STAR Attestation, a CSP could only perform a self-assessment, which meant completing the Consensus Assessments Initiative Questionnaire (CAIQ) and making the responses publicly available on the STAR registry. The CAIQ was completed in several different ways, and the content varied from short answers to full-page responses. It was relevant information but not independently validated. This created a path for the STAR Certification and STAR Attestation programs.

Join Schellman during a complimentary webinar titled “CSA STAR Program: Attestation and Certification”.  The webinar will be held on May 13th from 12:00pm EST to 1:00pm EST and will provide one (1) hour of CPE.  Debbie Zaller, Schellman Principal, and Ryan Mackie, Practice Leader, STAR Program, will provide an in-depth discussion on the opportunities to undergo third party assessments, through the CSA STAR Programs, to validate maturity level or control activities.

“Organizations, specifically cloud service providers, are continuously working to provide confidence to their customers regarding the security and operating effectiveness of their controls supporting the cloud, and the STAR Certification and STAR Attestation options provided by the CSA allow these organizations to further establish confidence in the market,” said Ryan Mackie.  “This webinar is a practical introduction to the STAR Level 2 offerings, outlining their benefits, requirements, and process, and how these types of third-party validation can clearly complement a cloud provider’s governance and risk management system.”

This informative webinar will provide:

  • An overview and journey of the CSA STAR Programs
  • A definition of the CCM framework
  • An overview of the Certification and Attestation purpose and scope
  • The process and preparations
  • A discussion of the common challenges and benefits

For more information and to register for the webinar, click here. The event will also be recorded and available for on-demand viewing. Click for more information.

ABOUT THE SPEAKERS
Debbie Zaller leads Schellman’s CSA STAR Attestation and SOC 2 services practice, where she is responsible for internal training, methodology creation, and quality reporting. Debbie has performed over 150 SOC 2 assessments and holds a Certificate of Cloud Security Knowledge (CCSK).

Ryan Mackie leads Schellman’s CSA STAR Certification and ISO 27001 certification services practice where he is an integral part of the methodology creation and the planning and execution of assessments.  Ryan has performed over 100 ISO 27001 assessments and is a certified ISO 27001 Lead Auditor trainer.

 

Outdated Privacy Act Close to Getting an Upgrade

May 12, 2016

By Susan Richardson, Manager/Content Strategy, Code42

The outdated Electronic Communications Privacy Act (ECPA) may finally get a much-needed upgrade, but the reform can’t come soon enough for Microsoft, other cloud providers and privacy advocates. Here’s what you need to know:

The issues:
The ECPA was enacted in 1986, as electronic communication started to become more prevalent. The intent was to extend federal restrictions on government wiretaps from telephones to computer communications. But as we created other electronic communication devices and moved content to the cloud, the Act became outdated. The primary gripes are that it:

  • Allows government agencies to request emails more than 180 days old with just an administrative subpoena, which the agency itself can issue, vs. having to get a warrant from a judge.
  • Doesn’t require notifying affected customers when their data is being requested, which would give them a chance to challenge the data demand. In fact, the Act includes a non-disclosure provision that can specifically prohibit providers from notifying customers.

The lobbying and lawsuits:
Plenty of wide-ranging groups have been advocating for ECPA reform, including the American Civil Liberties Union, the Center for Democracy & Technology, the Electronic Frontier Foundation, the Digital Due Process Coalition, the Direct Marketing Association and even the White House, in its 2014 Big Data Report.

On April 14, Microsoft added a little more weight to its argument. The company filed a lawsuit against the U.S. Justice Department, suing for the right to tell its customers when a federal agency is looking at their email. The lawsuit points out that the government’s non-disclosure secrecy requests have become the rule vs. the exception. In 18 months, Microsoft was required to maintain secrecy in 2,576 legal demands for customer data. Even more surprising, the company said, was that 68 percent of those requests had no fixed end date—meaning the company is effectively prohibited forever from telling its customers that the government has obtained their data.

The reform:
Two weeks after Microsoft filed its suit, the U.S. House voted 419-0 in favor of the Email Privacy Act, which would update the ECPA in these key ways:

  • Require government representatives to get a warrant to access messages older than 180 days from email and cloud providers.
  • Allow providers to notify affected customers when their data is being requested, unless the court grants a gag order.

The last step in the process is for the Senate to turn the reform bill into law. While no timeline has been given, the Senate is getting a lot of pressure to act quickly.

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about selecting a modern endpoint backup solution in a dangerous world.

How to Reduce Costs and Security Threats Using Two Amazon Tools

May 10, 2016

By David Lucky, Director of Product Management, Datapipe

Have you ever gone to see a movie that would have been amazing if not for one person? The plot was engaging, the dialogue was well written, and there were strong performances from most of the cast. But that one actor simply didn’t live up to the rest of the film, and he made every scene he was in that much worse. Simply put, that actor was bad, and he brought down the whole operation.

That idea of the “bad actor” can be applied to Internet clients, as well. Fortunately, you’re not hurting any feelings by sussing them out: the bad actors are usually automated processes that can harm your systems. The two most common forms are content scrapers, which dig into your content for their own profit, and bad bots, which misrepresent who they are to get around any restrictions meant to stop them.

We’d all like to believe that everyone accessing content will use it appropriately. Unfortunately, we can’t always assume the best, and being proactive in dealing with these bad actors will reduce security threats to your infrastructure and apps.

Even better, blocking bad actors will also lower your operating costs. When these bots access your content, you’re serving the traffic to them, whether you want to or not. That adds more to your overall costs. By blocking them, you’re restricting traffic from a number of undesired sources. Luckily, AWS has a pair of tools you can combine to say goodbye to these bad actors: Amazon CloudFront with an AWS web application firewall (WAF).

With AWS WAF, you can define a set of rules known as a web access control list (web ACL). Each rule contains a set of conditions plus an action. Every request received by CloudFront is handed to AWS WAF for inspection; if the request matches a rule’s conditions, that rule’s action is applied, and if it matches no rule, the web ACL’s default action is taken. These conditions can remove quite a bit of unwanted traffic, as you can filter by source IP address, strings of text, and a whole lot more. As for the web ACL actions, you can block the request, allow it, or count it for later analysis.
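
To make the condition/rule/action relationship concrete, here is a minimal sketch using the classic AWS WAF API for CloudFront (the boto3 "waf" client that was current when this post was written). The IP address, names, and metric names are placeholders, and error handling is omitted; this illustrates the structure, not a production setup.

```python
import boto3

waf = boto3.client("waf")  # classic (global) WAF API used with CloudFront

def change_token():
    # Classic WAF requires a fresh change token for every mutating call.
    return waf.get_change_token()["ChangeToken"]

# 1. An IP set condition holding the addresses to filter on (placeholder IP).
ip_set = waf.create_ip_set(Name="bad-actor-ips", ChangeToken=change_token())
ip_set_id = ip_set["IPSet"]["IPSetId"]
waf.update_ip_set(
    IPSetId=ip_set_id,
    ChangeToken=change_token(),
    Updates=[{
        "Action": "INSERT",
        "IPSetDescriptor": {"Type": "IPV4", "Value": "192.0.2.44/32"},
    }],
)

# 2. A rule whose only predicate is "source IP is in that set".
rule = waf.create_rule(Name="block-bad-actors", MetricName="BlockBadActors",
                       ChangeToken=change_token())
rule_id = rule["Rule"]["RuleId"]
waf.update_rule(
    RuleId=rule_id,
    ChangeToken=change_token(),
    Updates=[{
        "Action": "INSERT",
        "Predicate": {"Negated": False, "Type": "IPMatch", "DataId": ip_set_id},
    }],
)

# 3. A web ACL that blocks requests matching the rule and allows everything else.
acl = waf.create_web_acl(Name="cloudfront-acl", MetricName="CloudFrontACL",
                         DefaultAction={"Type": "ALLOW"},
                         ChangeToken=change_token())
waf.update_web_acl(
    WebACLId=acl["WebACL"]["WebACLId"],
    ChangeToken=change_token(),
    DefaultAction={"Type": "ALLOW"},
    Updates=[{
        "Action": "INSERT",
        "ActivatedRule": {"Priority": 1, "RuleId": rule_id,
                          "Action": {"Type": "BLOCK"}},
    }],
)
# The web ACL is then associated with the CloudFront distribution (WebACLId in
# the distribution config) so CloudFront consults it for every request.
```

Once the ACL is attached to the distribution, requests from the listed address are blocked at the edge, while everything else falls through to the ALLOW default action.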

Perhaps the best attribute of the WAF is that you can smoothly integrate it with your existing DevOps processes and automate workflows that react. Since bad actors are always switching their methods to mask their actions, your proactive detection methods must constantly change as well. Having those automations in place is immensely helpful in finding bad actors and restricting their access.
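
As a rough illustration of that kind of reactive automation (not the specific AWS Security Blog walkthrough mentioned below, just a hypothetical sketch): scan a CloudFront access log, count requests per client IP, and push any address that exceeds a threshold into the IP set used by the blocking rule above. The threshold, file name, and IP set ID are placeholders; in practice, logic like this would typically run as a scheduled AWS Lambda function over logs delivered to S3.

```python
import gzip
from collections import Counter

import boto3

REQUEST_THRESHOLD = 1000                 # hypothetical "too many requests" cutoff
LOG_FILE = "cloudfront-access.log.gz"    # placeholder local copy of a log file
IP_SET_ID = "example-ip-set-id"          # the IP set attached to the block rule

waf = boto3.client("waf")

counts = Counter()
with gzip.open(LOG_FILE, "rt") as log:
    for line in log:
        if line.startswith("#"):         # skip the W3C header lines
            continue
        fields = line.split("\t")
        client_ip = fields[4]            # c-ip is the fifth field in CloudFront logs
        counts[client_ip] += 1

offenders = [ip for ip, n in counts.items() if n > REQUEST_THRESHOLD]
if offenders:
    token = waf.get_change_token()["ChangeToken"]
    waf.update_ip_set(
        IPSetId=IP_SET_ID,
        ChangeToken=token,
        Updates=[{"Action": "INSERT",
                  "IPSetDescriptor": {"Type": "IPV4", "Value": f"{ip}/32"}}
                 for ip in offenders],
    )
```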

There’s a great step-by-step walkthrough of how to set up this solution on the AWS Security Blog. Feel free to check it out for more information, or get in touch with us if you have any additional questions. And for AWS customers that need even more than what AWS WAF has to offer, there are services complementary to AWS WAF that provide enhanced protection for business-critical applications on AWS. You won’t even need to thank the Academy when all of those bad actors are removed.

DoD Updates Government Security Requirements for Cloud, But What Does That Really Mean?

May 6, 2016

By Brian Burns, Bid Response Manager/Government Affairs, Datapipe

IT officials from the Department of Defense (DoD) have released an update to the Cloud Computing Security Requirements Guide (CC SRG), which establishes security requirements and other criteria for commercial and non-Defense Department cloud providers to operate within DoD. These kinds of updates are not uncommon. In fact, they are encouraged through an interesting use of a DevOps type methodology – as the DoD explains:

DoD Cloud computing policy and the CC SRG is constantly evolving based on lessons learned with respect to the authorization of Cloud Service Offerings and their use by DoD Components. As such the CC SRG is following an “Agile Policy Development” strategy and will be updated quickly when necessary.

The DoD offers a continuous public review option and accepts comments on the current version of the CC SRG at all times, moving to update the document quickly and regularly to address the constantly changing concerns of an evolving technology like public and private cloud infrastructure. The most recent update includes administrative changes, corrections, and some expanded guidance on previously instated requirements, with the main focus of the updates being to clarify standards set in version one and alleviate confusion and any potential inaccuracy.

If you are interested, you can read through the entire CC SRG revision history online.

What is particularly interesting here is the DoD’s acknowledgment that management of cloud environments is constantly evolving, security requirements and best practices need to be iterative, and updates need to be made regularly to ensure relevancy. It’s also important to note that the CC SRG is only one of many government policies put in place to help government agencies securely and effectively implement cloud infrastructures. There are also guidelines like NIST SP 800-37 Risk Management, NIST 800-53, FISMA and FedRAMP to consider. All of these provide a knowledge base for cloud computing security authorization processes and security requirements for government agencies.

What the DoD’s updates to the CC SRG should reinforce for agencies is that they need to have a clear cloud strategy in place in order to ensure compliance and success in the cloud. Determining the best implementation of these guidelines for your needs is difficult in and of itself. Add to that the ongoing management and updates required to keep up with ever-evolving guidelines and an IT team can find itself struggling.

By partnering with systems integrators and software vendors, or working directly with a managed service provider, like Datapipe, government agencies can more easily develop a long-term cloud strategy to architect, deploy, and manage high-security and high-performance cloud and hosted solutions, and stay on top of evolving government policies and guidelines.

For example, Microsoft Azure recently announced new accreditation for their Government Cloud, Amazon AWS has an isolated AWS region designed to host sensitive data and regulated workloads called AWS GovCloud, and you can learn more about our new Federal Community Cloud Platform (FCCP), which meets all FISMA controls and FedRAMP requirements, and all of our specific government cloud solutions on the Datapipe Government Solutions section of our site.

Five Endpoint Backup Features That Help Drive Adoption

May 3, 2016

By Susan Richardson, Manager/Content Strategy, Code42

If you’re among the 28 percent of enterprises that still haven’t implemented a planned endpoint backup system, here are 5 key attributes to look for in a system, to help drive adoption and success. These recommendations are courtesy of Laura DuBois, program vice president at IDC, a global market intelligence provider with 1,500 highly mobile, knowledge-driven employees:

1. Supports Productivity
Look for a lightweight system that doesn’t put a drag on memory, so employees can access data and collaborate quickly. If the system slows people down, they won’t use it.

2. Increases Security
While some people think of endpoint backup primarily for disaster recovery, you should think of it as a data loss prevention tool, too. A good endpoint backup system offers a multi-layered security model that includes transmission security, account security, password security, encryption security (both in transit and at rest) and secure messaging.

3. Offers Intuitive Self-Service
Employees don’t want to wait for IT to recapture lost data. Having an easy-to-use, self-service interface allows employees to locate and retrieve their own data. Not only does this help increase adoption, it also cuts down on calls to the IT Help Desk to save administrative time and money. A survey of Code42 customers found that 36 percent had fewer restore support tickets after installing the CrashPlan endpoint backup system, and 49 percent reduced IT hours spent on data restores.

In fact, for CISOs looking to make the case for an endpoint backup system, DuBois suggests compiling Help Desk volume data and the productivity associated with it.

4. Supports Heterogeneity
DuBois’ research showed that the average corporate employee uses 2.5 devices for work, some company issued and some not. Your endpoint backup system has to accommodate today’s diversity in devices, platforms and network connectivity.

5. Handles the Added Traffic
Some endpoint backup systems can get bogged down with lots of users and not enough network bandwidth. Look for a system that backs up almost continuously, so the processing is spread out rather than taxing the system all at once and slowing it down.

To learn more, see DuBois’ webinar, “5 Expert Tips to Drive User Adoption in Endpoint Backup Deployments.”

 

10 Key Questions to Answer Before Upgrading Enterprise Software

April 27, 2016

By Rachel Holdgrafer, Business Content Strategist, Code42

The evolution of software has made possible things we never dreamed of. With software upgrades come new competencies and capabilities, better security, speed, power and, often, disruption. Whenever something new enters an existing ecosystem, it can upset the works.

The cadence of software upgrades in large organizations is typically guided by upgrade policies; the risk of disruption is greater in large organizations, which is the chief reason large companies lag up to two versions behind current software releases. They take a wait-and-see approach, observe how early adopters fare with software upgrades, and adopt as a late majority.

A proper upgrade process involves research, planning and execution. Use these top 10 principles to establish when and why to upgrade:

1. What’s driving the upgrade? Software upgrades addressing known security vulnerabilities are a priority in the enterprise. Usability issues that impact productivity should also be addressed quickly.

2. Who depends on the legacy software? Identifying departments that depend on legacy software allows IT to schedule an upgrade when it has the least impact on productivity.

3. Can the upgrade be scheduled according to our policy? Scheduling upgrades within the standard upgrade cycle minimizes distraction and duplication of effort. Change control policies formalize how products are introduced into the environment and minimize disruption to the enterprise and IT.

4. Is the organization ready for another upgrade? Just because an organization needs a software upgrade doesn’t mean it can sustain that upgrade. Upgrade and patch fatigue are very real. Consider the number of upgrades you’ve deployed in recent months when deciding whether to undertake another one.

5. What is the upgrade going to cost? Licensing costs are only one part of the total cost associated with software upgrades. Services, staff time, impact to other projects, tech support for associated systems and upgrades for systems that no longer work with the new platform must also be included in the total cost.

6. What is the ROI of the upgrade? Software updates that defeat security vulnerabilities are non-negotiable—security itself is the ROI. Non-security related upgrades, however, must demonstrate their value through increased productivity or improved efficiency and reduced costs.

7. How will the customer be impacted? Consider all the ways an upgrade could impact customers and make adjustments before the upgrade begins. Doing so ensures you mitigate any potential issues before they happen.

8. What could go wrong? Since your goal is to increase performance, not diminish it, draft contingency plans for each identified scenario to readily address performance and stability issues, should they arise.

9. What level of support does the vendor provide? Once you understand what could go wrong during the upgrade, look into the level of support the vendor provides. Identify gaps in coverage and source outside resources to fill in as needed.

10. What’s your recourse? No one wants to think about it, but sometimes upgrades do more harm than good. In the event something goes wrong and you need to revert to a previous software version, can you?

Download The Guide to Modern Endpoint Backup and Data Visibility to learn more about how a modern endpoint backup solution can simplify software upgrades.

Survey of IT Pros Highlights Lack of Understanding of SaaS Data Loss Risks

April 26, 2016

By Melanie Sommer, Director of Marketing, Spanning by EMC

Recently, Spanning – an EMC company and provider of backup and recovery for SaaS applications – announced the results of a survey* of over 1,000 IT professionals across the U.S. and the U.K. about trends in SaaS data protection. It turns out that IT pros across the pond have the same concerns as those here in the U.S.: the survey found that security is the top concern when moving critical applications to the cloud. Specifically, 44 percent of U.S. and U.K. IT pros cited external hacking/data breaches as their top concerns, ahead of insider attacks and user error.

But that’s not the most interesting finding, as the survey found that perceived concerns differ from reality when it comes to actual data loss. In total, nearly 80 percent of respondents have experienced data loss in their organizations’ SaaS deployments. Accidental deletion of information was the leading cause of data loss from SaaS applications (43 percent in U.S., 41 percent in U.K.), ahead of data loss caused by malicious insiders and hackers.

In both the U.S. and the U.K., migration errors (33 percent in U.S., 31 percent in U.K.) and accidental overwrites (27 percent in U.S., 26 percent in U.K.) also ranked ahead of external and insider attacks as top causes of data loss.

How SaaS Backup and Recovery Helps
As a case in point, consider one serious user error – clicking a malicious link or file and triggering a ransomware attack. If an organization uses cloud-based collaboration tools like Office 365 OneDrive for Business or Google Drive, the impact from a ransomware attack is multiplied at compute speed. How? An infected laptop contains files that automatically sync to the cloud (via Google Drive or OneDrive for Business). Those newly infected files sync, then infect and encrypt other files in every connected system – including those of business partners or customers, whose files and collaboration tools will be similarly compromised.

This is where backup and recovery enters the picture. Nearly half of respondents in the U.S. not already using a cloud-to-cloud backup and recovery solution said that they trust their SaaS providers with managing backup, while the other half rely on manual solutions. In most cases, SaaS providers are not in a position to recover data lost or deleted due to user error, and cannot blunt the impact of a ransomware attack on their customers. Further, with many organizations relying both on manual backups and on an assumption that none of the admins in charge are malicious, the opportunity for accidental neglect or oversight is too big to ignore. The industry would seem to agree: more than a third of organizations in the U.S. (37 percent) are already using or plan to use a cloud-to-cloud backup provider for backup and recovery of their SaaS applications within the next 12 months.

Since the survey included U.K. respondents, it also gauged sentiment around the rapidly changing data privacy regulations in the EU, specifically in regard to the “E.U.-U.S. Privacy Shield.” A majority of the IT professionals surveyed (66 percent in the U.K., 72 percent in the U.S.) agree that storing data in a primary cloud provider’s EU data center will ensure 100 percent compliance with data and privacy regulations.

These results paint a picture of an industry that is as unsure as it is underprepared; while security is a top concern when moving critical applications to the cloud, most organizations trust the inherent protection of their SaaS applications to keep their data safe, even though the leading cause of data loss is user error, which is not normally covered under native SaaS application backup. The results also show that the concerns influencing cloud adoption have little to do with the real causes of everyday data loss and more to do with a fear of data breaches or hackers.

The takeaway from these survey results: more IT pros need an increased awareness and understanding about where, when, and how critical data can be lost to reduce their cloud adoption concerns; and, more IT pros need to learn how to minimize the true sources of SaaS data loss risk. To learn more, download the full survey report, or view an infographic outlining the major findings of the survey.

*Survey Methodology
Spanning by EMC commissioned the online survey, which was completed by 1,037 respondents in December 2015. Of the respondents, 537 (52 percent) were based in the United Kingdom and 500 (48 percent) in the United States. A full 100 percent of the respondents “have influence or decision making authority on spending in the IT department” of their organization.
Respondents were asked to select between two specific roles: “IT Function with Oversight for SaaS Applications” (75 percent U.S., 78 percent U.K., 77 percent overall); “Line of Business/SaaS application owner” (39 percent U.S., 43 percent U.K., 41 percent overall); the remaining identified as “other.”

Can a CASB Protect You From the Treacherous 12?

April 25, 2016

By Ganesh Kirti, Founder and CTO, Palerra

Many frequently asked questions related to cloud security have included concerns about compliance and insider threats. But lately, a primary question is whether cloud services are falling victim to the same level of external attack as the data center. With Software as a Service (SaaS) becoming the new normal for the corporate workforce, and Infrastructure as a Service (IaaS) on the rise, cloud services now hold mission-critical enterprise data, intellectual property, and other valuable assets. As a result, the cloud is coming under attack, and it’s happening from both inside and outside the organization.

On February 29, the CSA Top Threats Working Group clarified the nature of cloud service attacks in a report titled, “The Treacherous 12: Cloud Computing Top Threats in 2016.” In this report the CSA concludes that although cloud services deliver business-supporting technology more efficiently than ever before, they also bring significant risk.

The CSA suggests that these risks occur in part because enterprise business units often acquire cloud services independently of the IT department, and often without regard for security. In addition, regardless of whether the IT department sanctions new cloud services, the door is wide open for the Treacherous 12.

Because all cloud services (sanctioned or not) present risks, the CSA points out that businesses need to take security policies, processes, and best practices into account. That makes sense, but is it enough?

Gartner predicts that through 2020, 95 percent of cloud security failures will be the customer’s fault. This does not necessarily mean that customers lack security expertise. What it does mean, though, is that it’s no longer sufficient to know how to make decisions about risk mitigation in the cloud. To reliably address cloud security, automation will be key.

Cloud security automation is where Cloud Access Security Brokers (CASBs) come into play. A CASB can help automate visibility, compliance, data security, and threat protection for cloud services. We thought it would be interesting to take a look at how well CASBs in general would fare at helping enterprises survive the treacherous 12.

The good news is that CASBs clearly address nine of the treacherous 12 (along with many other risks not mentioned in the report). These include:

#1 Data breach
#2 Weak ID, credential, and access management
#3 Insecure APIs
#4 System and application vulnerabilities
#5 Account hijacking
#6 Malicious insiders
#7 Advanced persistent threats
#10 Abuse and nefarious use of cloud services
#12 Shared technology issues

There are countless examples of why being protected against the treacherous 12 is important. Some of the more high profile ones:

  • Data breach: In the 2015 Anthem breach, hackers used a third-party cloud service to steal over 80M customer records.
  • Insecure APIs: The mid-2015 IRS breach exposed over 300K records. While that’s a big number, the more interesting one is that it took only one vulnerable API to allow the breach to happen.
  • Malicious Insiders: Uber reported that its main database was improperly accessed. The unauthorized individual downloaded 50K names and numbers to a cloud service. Was it their former employee, the current Lyft CTO? That was Uber’s opinion. The DOJ disagreed and a lawsuit ensued.

In each of these cases a CASB could have helped. A CASB can help detect data breaches by monitoring privileged users, encryption policies, and movement of sensitive data. A CASB can also detect unusual activity within cloud services that originate from API calls, and support risk scoring of external APIs and applications based on the activity. And a CASB can spot malicious insiders by monitoring for overly-privileged user accounts as well as user profiles, roles, and privileges that drift from compliant baselines. Finally, a CASB can detect malicious user activity through user behavior analytics.
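
To make that concrete, here is a deliberately simplified sketch of the kind of behavioral baseline check such monitoring might automate: flag accounts whose daily download volume jumps far above their own history. The event format, numbers, and "three sigma" rule are hypothetical illustrations, not any vendor's actual detection logic.

```python
from collections import defaultdict
from statistics import mean, pstdev

# Hypothetical cloud-service audit events: (user, day, bytes_downloaded)
events = [
    ("alice", "2016-04-01", 120_000_000),
    ("alice", "2016-04-02", 95_000_000),
    ("alice", "2016-04-03", 2_400_000_000),   # sudden spike in downloads
    ("bob",   "2016-04-01", 40_000_000),
    ("bob",   "2016-04-02", 38_000_000),
    ("bob",   "2016-04-03", 41_000_000),
]

# Aggregate bytes downloaded per user per day.
daily = defaultdict(dict)
for user, day, nbytes in events:
    daily[user][day] = daily[user].get(day, 0) + nbytes

# Compare each user's latest day against their own historical baseline.
for user, per_day in daily.items():
    days = sorted(per_day)
    history = [per_day[d] for d in days[:-1]]
    latest = per_day[days[-1]]
    baseline, spread = mean(history), pstdev(history) or 1.0
    if latest > baseline + 3 * spread:        # crude "3 sigma" anomaly rule
        print(f"ALERT: {user} downloaded {latest} bytes on {days[-1]} "
              f"(baseline ~{int(baseline)} bytes/day)")
```

A real CASB would of course correlate many more signals (location, device, privilege changes, API activity), but the principle of comparing activity to a per-user baseline is the same.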

What about the three threats that aren’t covered by a CASB? Those include:

#8 Data loss
#9 Insufficient due diligence
#11 Denial of services

The cost of data loss (#8, above) is huge. A now-defunct company named Code Spaces had to close down when its corporate assets were destroyed because it did not follow best practices for business continuity and disaster recovery. Data loss prevention is a primary corporate responsibility, and a CASB can’t detect whether it is in place. Insufficient due diligence (#9) is the responsibility of the organization leveraging the cloud service, not the service provider. Executives need a good roadmap and checklist for due diligence; a CASB can provide advice, but it doesn’t automate the process. Finally, denial of service (DoS, #11, above) attacks are intended to take the provider down. It is the provider’s responsibility to take precautions to mitigate DoS attacks.

For a quick reference guide to the question, “Can a CASB protect you from the 2016 treacherous 12?,” download this infographic.

To learn more, join Palerra CTO Ganesh Kirti and CSA Executive VP of Research J.R. Santos as they discuss “CASBs and the Treacherous 12 Top Cloud Threats” on April 25, 2-3pm EDT. Register for the webinar now.

The Panama Papers, Mossack Fonseca and Security Fundamentals

April 21, 2016

By Matt Wilgus, Practice Director, Schellman

The release of details contained in the Panama Papers will be one of the biggest news stories of the year. The number of high-profile individuals implicated will continue to grow as teams comb through the 11.5 million documents leaked from Mossack Fonseca, a Panamanian law firm. While the news headlines will focus mainly on world leaders, athletes and the well-to-do, the overview from The International Consortium of Investigative Journalists (ICIJ) gets into additional details. This overview is worth reading to understand what services the firm provided, who uses the services, how they can be used legally and how they can be abused.

The overview seems like something out of a John Grisham novel. In fact, some of the information being released is similar to the plot of a book he wrote over 25 years ago. In 1991, John Grisham published “The Firm,” a book that revolves around several lawyers working for the fictional law firm Bendini, Lambert and Locke. The similarities between the book and today’s story include a law firm that exists primarily to assist money laundering and tax evasion, a plot that turns on the details of many transactions retrieved from thousands of documents, and a whistleblower. The fictional firm also provided services to legitimate clients, although in the book that number is about 25 percent. It is unknown what percentage of Mossack Fonseca’s clients were legitimate and how many would be described as Ponzi schemers, drug kingpins and tax evaders, as the ICIJ overview mentions. While the novel is fiction, it sets the stage as something that has been seen before.

Whether the leak started from an external breach of systems or an intentional leak from an insider, it is always intriguing to know how it occurred and what could have been done. Did it start with a phishing email, a rogue employee, a web application flaw, etc.? Forbes reported that the client portal server was running Drupal 7.23, which was susceptible to a SQL injection vulnerability announced in October 2014. There were many reports of exploitation of this vulnerability in the days after it was announced, so it is likely someone took advantage of the exploit. The team responsible for Wordfence, a popular WordPress security plugin, provided another possible exploitation scenario related to upload functionality in the Revolution Slider plugin. These are just some of the potential means that could have caused a breach at Mossack Fonseca. Other possibilities include scenarios related to weaknesses in the email server and a lack of encryption in transit. Mossack Fonseca does have a Data Security page on its site, although it primarily touts SSL and the fact that they house all of their servers in-house as their primary security measures.

In 2011, I wrote a post on how the legal profession was an easy target for breaches. Looking back, I realize that technology has changed, but in many ways the weaknesses are likely to stay the same. One of the biggest changes to note from 2011 is the number of online applications law firms have now. This isn’t just the top 100 law firms; this includes smaller regional firms as well. In addition to the main corporate web site and an area to share documents (or client portal), which are now offerings that appear much more prevalent across firms of all sizes, firms have blog sites, premium service offerings, extranets and even applications that provide a gateway into all the other online applications. More applications means a larger attack surface. Unlike Mossack Fonseca, which claims it hosted everything internally, many law firms we see do use third-party SaaS offerings to handle some of these functions. Outsourcing to a third party that specializes in providing a particular service can often provide better security than a firm can provide in house.
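
To illustrate how readily an out-of-date CMS announces itself, here is a small sketch of the kind of check a vulnerability scanner (or the firm's own staff) could have run against the client portal: fetch Drupal's CHANGELOG.txt, read the version it reports, and compare it against 7.32, the release that patched the October 2014 SQL injection flaw (SA-CORE-2014-005). The target URL is a placeholder, and the check assumes a default Drupal 7 layout; hardened sites often remove or block this file.

```python
import re
import urllib.request

TARGET = "https://portal.example.com/CHANGELOG.txt"   # hypothetical client portal
PATCHED = (7, 32)   # first 7.x release containing the SA-CORE-2014-005 fix

# Fetch the changelog, whose first lines normally read like "Drupal 7.23, 2013-08-07".
with urllib.request.urlopen(TARGET, timeout=10) as resp:
    text = resp.read().decode("utf-8", errors="replace")

match = re.search(r"Drupal (\d+)\.(\d+)", text)
if match:
    version = (int(match.group(1)), int(match.group(2)))
    if version[0] == 7 and version < PATCHED:
        print(f"Drupal {version[0]}.{version[1]} detected: "
              "vulnerable to SA-CORE-2014-005, upgrade required")
    else:
        print(f"Drupal {version[0]}.{version[1]} detected: "
              "not affected by SA-CORE-2014-005")
else:
    print("Could not determine Drupal version from CHANGELOG.txt")
```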

Given Mossack Fonseca’s focus on company formation, minimizing tax burdens, Private Interest Foundations and the like, the firm could easily have been a target in light of the recent groundswell of activism against tax avoidance and income inequality. While the lapse in security at Mossack Fonseca may not be representative of security at all law firms, the details surrounding their environment point to likely weaknesses in people, processes and technology which could exist in any organization.

  • People – Given what we know about potential vulnerabilities in their environment and the exfiltration of data, we can surmise that someone was not paying attention for an extended period of time. There are many security roles in an organization, including but not limited to policy development, administration and monitoring. In some environments one person may be responsible for many roles, and in some cases not all responsibilities can be met. This may be because no one was given the role or because the person who was given the responsibility left the organization. A recent search of LinkedIn did not turn up many IT-related profiles with Mossack Fonseca as a current or previous employer, although this doesn’t necessarily mean these individuals do not exist; contractors may also have performed the role. That said, a third party could have been hired for a given job, say deploying the client portal, but may not have been responsible for post-implementation support.
  • Process – Being notified of vulnerabilities in the software supporting the organization is paramount to understanding where risks exist. Knowing what data is leaving the environment is also critical. The likelihood that either of these was occurring is low, and even if one was, there wasn’t necessarily anyone in place to act on it in a timely fashion.
  • Technology – A breakdown in people and processes can occasionally be mitigated by technology. The WordPress and Drupal sites are now protected by a third-party security provider, but other sites likely are not. An up-to-date intrusion detection system (IDS) might have detected some of the threats the organization faced, or the activities that occurred, although with several potential avenues to exploit, one or another would likely have been open. For an organization that appears to have missed some fundamental security concerns, it may still have used technology to secure some data, as there is a site named crypt.mossfon.com, which is still up.

The Panama Papers incident may once again raise awareness around data security with legal firms. Organizations performing support services to legal firms, such as eDiscovery and Case Management providers, may also want to take note. Mossack Fonseca has a link on their page for ISO Certifications. However, the only one listed is ISO 9001:2008. An ISO 27001 assessment, or certification, may not have prevented the leak, but it would have demonstrated greater consideration of security on the part of Mossack Fonseca. A penetration test would also have been beneficial, although given the vulnerabilities that existed even a vulnerability scan would have detected some of the issues.

With most data breaches, the actual data on the people and companies is less interesting (albeit potentially more valuable) than the way in which the breach occurred or the attacker persisted in the attack. As it relates to the Panama Papers, it is the opposite. The forthcoming details related to various individuals, their transactions, and the potential future tax and privacy implications are far more interesting to the public than the means whereby the exfiltration actually occurred. That said, taking a few minutes to understand how it happened and what we can learn can be a worthwhile step in preventing future breaches.