November 21, 2016
By Vijay Ramanathan, Vice President of Product Management, Code42
It’s time to flip our thinking about enterprise information security. For a long time, the starting point of our tech stacks has been the network. We employ a whole series of solutions on servers and networks—from monitoring and alerts to policies and procedures—to prevent a network breach. We then install some antivirus and malware detection tools on laptops and devices to catch anything that might infect the network through endpoints.
But this approach isn’t working. The bad guys are still getting in. We like to think we can just keep building a bigger wall, but motivated cybercriminals and insiders keep figuring out ways to jump over it or tunnel underneath it. How? By targeting users, not the network. Today, one-third of data compromises are caused by insiders, either maliciously or unwittingly.
Just because we have antivirus software or malware detection on our users’ devices doesn’t mean we’re protected. Those tools are only effective about 60% to 70% of the time at best. And with the increasing prevalence of BYOD, we can’t control everything on an employee’s device.
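A little back-of-envelope math shows why a 60–70% detection rate is not reassuring. The sketch below (a simplified model that assumes layers fail independently, which real controls rarely do) computes the residual miss rate of one or more such tools:

```python
def residual_miss_rate(detection_rates):
    """Probability a threat slips past every layer, assuming the
    layers fail independently of one another."""
    p_miss = 1.0
    for rate in detection_rates:
        p_miss *= (1.0 - rate)
    return p_miss

# A single tool at 65% detection misses 35% of threats:
print(round(residual_miss_rate([0.65]), 2))        # 0.35
# Even two independent 65% layers still miss about 12%:
print(round(residual_miss_rate([0.65, 0.65]), 4))  # 0.1225
```

In other words, even stacking imperfect preventive tools leaves a double-digit miss rate, which is why the post argues for planning around recovery rather than prevention alone.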
Even when we do control enterprise-issued devices, our security tools can’t prevent a laptop from being stolen. Or keep an employee from downloading client data onto a USB drive. Or stop a high-level employee from emailing sensitive data to a spear phisher posing as a co-worker.
We need to change our thinking. We need to admit that breaches are inevitable and be prepared to quickly recover and remediate. That means starting at the outside, with our increasingly vulnerable endpoints.
With a good endpoint backup system in place, one that’s backing up data in real time, you gain a window into all your data. You can see exactly where an attack started and what path it took. You can see what an employee who just gave his two weeks’ notice is doing with data. You can see if a stolen laptop has any sensitive data on it, so you know if it’s reportable or not.
By starting with endpoints, you eliminate blind spots. And isn’t that the ultimate goal of enterprise infosec?
To learn more about the starting point in the modern security stack, watch the on-demand webinar.
November 18, 2016
By Jon King, Security Technologist and Principal Engineer, Intel Security
And you thought virtualization was tough on security …
Containers, the younger and smaller siblings of virtualization, are more active and growing faster than a litter of puppies. Recent stats for one vendor show containers now running on 10% of hosts, up from 2% 18 months ago. Adoption is skewed toward larger organizations running more than 100 hosts. And the number of running containers is expected to increase by a factor of 5 in nine months, with few signs of slowing. Once companies go in, they go all in. The number of containers per host is increasing, with 25% of companies running 10 or more containers simultaneously on one system. Containers also live for only one-sixth the time of virtual machines. These stats would appear to support the assertion that containers are not simply a replacement for server virtualization, but the next step in granular resource allocation.
Adequately protecting the large number of containers could require another level of security resources and capabilities. To better understand the scope of the problem, think of your containers as assets. How well are you managing your physical server assets? How quickly do you update details when a machine is repaired or replaced? Now multiply that by 5 to 10 units, and reduce the turnover rate to a couple of days. If your current asset management system is just keeping up with the state of physical machines, patches, and apps, containers are going to overwhelm it.
Asset management addresses the initial state of your containers, but these are highly mobile and flexible assets. You need to be able to see where your containers are, what they are doing, and what data they are operating on. Then you need sufficient controls to apply policies and constraints to each container as they spin up, move around, and shut down. It is increasingly important to be able to control your data movements within virtual environments, including where it can go, encrypting it in transit, and logging access for compliance audits.
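The kind of inventory this implies can be sketched in a few lines. The example below is a minimal, in-memory illustration (class and field names are invented for this sketch, not any vendor's API) of tracking short-lived containers so that policy can follow them as they spin up and shut down:

```python
import time
from dataclasses import dataclass, field

@dataclass
class ContainerRecord:
    """One tracked container: where it runs, what it runs, when it started."""
    container_id: str
    host: str
    image: str
    started: float = field(default_factory=time.time)
    labels: dict = field(default_factory=dict)

class ContainerInventory:
    """Track short-lived containers so policy can follow them around."""
    def __init__(self):
        self._records = {}

    def register(self, rec: ContainerRecord):
        self._records[rec.container_id] = rec

    def deregister(self, container_id: str):
        # Containers churn constantly, so removal must be cheap and safe.
        self._records.pop(container_id, None)

    def on_host(self, host: str):
        """All containers currently recorded on a given host."""
        return [r for r in self._records.values() if r.host == host]

inv = ContainerInventory()
inv.register(ContainerRecord("c1", "host-a", "nginx:1.11"))
inv.register(ContainerRecord("c2", "host-a", "redis:3.2"))
inv.register(ContainerRecord("c3", "host-b", "nginx:1.11"))
print(len(inv.on_host("host-a")))  # 2
```

The point of the sketch is the churn: an asset system built for physical servers assumes records change every few months, while here `register` and `deregister` may fire many times per hour per host.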
While the containers themselves have an inherent level of security and isolation, the large number of containers and their network of connections to other resources increase the attack surface. Interprocess communications have been exploited in other environments, so they should be monitored for unusual behavior, such as destinations, traffic volume, or inappropriate encryption.
One of the great things about containers, from a security perspective, is the large amount of information you can get from each one for security monitoring. This is also a significant challenge, as the volume will quickly overwhelm the security team. Security information and event management (SIEM) tools are necessary to find the patterns and correlations that may be indicators of attack, and compare them with real-time situational awareness and global threat intelligence.
Containers provide the next level of resource allocation and efficiency, and in many ways deliver greater isolation than virtual machines. However, if you are not prepared for the significant increase in numbers, connections, and events, your team will quickly be overwhelmed. Make sure that, as you take the steps to deploy containers within your data center, you also appropriately augment and equip your security team.
November 14, 2016
By Raj Samani, EMEA CTO, Intel Security
“How many visitors do you expect to access the No More Ransom Portal?”
This was the simple question asked prior to this law enforcement (Europol’s European Cybercrime Centre, Dutch Police) and private industry (Kaspersky Lab, Intel Security) portal going live, which I didn’t have a clue how to answer. What do YOU think? How many people do you expect to access a website dedicated to fighting ransomware? If you said 2.6 million visitors in the first 24 hours, then please let me know six numbers you expect to come up in the lottery this weekend (I will spend the time until the numbers are drawn selecting the interior of my new super yacht). I have been a long-time advocate of public cloud technology, and its benefit of rapid scalability came to the rescue when our visitor numbers blew expectations out of the water. To be honest, if we had attempted to host this site internally, my capacity estimates would have resulted in the portal crashing within the first hour of operation. That would have been embarrassing and entirely my fault.
Indeed, my thoughts on the use of cloud computing technology are well documented in various blogs, my work within the Cloud Security Alliance, and the book I recently co-authored. I have often used the phrase, “Cloud computing in the future will keep our lights on and water clean.” The introduction of Amazon Web Services (AWS) and AWS Marketplace into the No More Ransom Initiative to host the online portal demonstrates that the old myth, “One should only use public cloud for noncritical services,” needs to be quickly archived into the annals of history.
To ensure such an important site was ready for the large influx of traffic at launch, we had around-the-clock support out of Australia and the U.S. (thank you, Ben Potter and Nathan Case from AWS!), which meant everything was running as it should and we could handle millions of visitors on our first day. This, in my opinion, is the biggest benefit of the cloud. Beyond scalability, and the benefits of outsourcing the management and the security of the portal to a third party, an added benefit was that my team and I could focus our time on developing tools to decrypt ransomware victims’ systems, conduct technical research, and engage law enforcement to target the infrastructure to make such keys available.
AWS also identified controls to reduce the risk of the site being compromised. With the help of Barracuda, they implemented these controls and regularly test the portal to reduce the likelihood of an issue.
Thank you, AWS and Barracuda, and welcome to the team! This open initiative is intended to provide a noncommercial platform to address a rising issue targeting our digital assets for criminal gain. We’re thrilled that we are now able to take the fight to the cloud.
November 11, 2016
By Susan Richardson
Smart entrepreneurs have long employed differential pricing strategies to get more money from customers they think will pay a higher price. Cyber criminals have been doing the same thing on a small scale with ransomware: demanding a larger ransom from individuals or companies flush with cash, or organizations especially sensitive to downtime and service disruptions. But now it appears cyber criminals have figured out how to improve their ROI by attaching basic price discrimination to large-scale, phishing-driven ransomware campaigns. So choosing to pay a ransom could come with an even heftier price tag in the near future.
Personalization made easy: no code required
Typically, a ransom payment amount is provided by a command and control server or is hardcoded into the executable. But Malware Hunter Team recently discovered a new ransomware variant called Fantom that uses the filename to set the size of the ransom demand. A post on the BleepingComputer blog explains that this allows the developer to create various distribution campaigns using the same exact sample, but request different ransom amounts depending on how the distributed file is named—no code changes required. When executed, the ransomware will examine the filename and check if it contains certain substrings. Depending on the matched substrings, it will set the ransom to a particular amount.
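The mechanism is simple enough to sketch. The snippet below is a defensive illustration of the idea described above, not Fantom's actual code; the substrings and dollar amounts are invented, since the real mapping was not published in full:

```python
# Illustration of filename-based ransom pricing, as described for the
# Fantom variant. Substrings and amounts here are hypothetical.
PRICE_TABLE = [
    ("corp", 5000),   # filenames hinting at corporate targets
    ("bank", 10000),  # financial targets command a premium
]
DEFAULT_RANSOM = 500  # fallback for filenames that match nothing

def ransom_for(filename: str) -> int:
    """Pick a ransom amount based on substrings in the executable's name."""
    name = filename.lower()
    for substring, amount in PRICE_TABLE:
        if substring in name:
            return amount
    return DEFAULT_RANSOM

print(ransom_for("invoice_corp_2016.exe"))  # matches "corp" -> 5000
print(ransom_for("holiday_photos.exe"))     # no match -> 500
```

One binary, many campaigns: the attacker only renames the distributed file to change the price, which is exactly why no code changes are required.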
The news is salt in the wound for businesses, which have already been targeted by ransomware at a growing pace with higher price demands. A 2016 Symantec survey found that while consumers account for a slight majority of ransomware attacks today, the long-term trend shows a steady increase in attacks on organizations.
Those most vulnerable? Healthcare and financial organizations, according to a 2016 global ransomware survey by Malwarebytes. Both industries were targeted well above the average 39 percent ransomware penetration rate. Over a one-year period, healthcare organizations were targeted the most at 53 percent penetration, with financial organizations a close second at 51 percent.
And while one-third of ransomware victims face demands of $500 or less, large organizations are being extorted for larger sums. Nearly 60 percent of all enterprise ransomware attacks demanded more than $1,000, and more than 20 percent asked for more than $10,000, according to the Malwarebytes survey.
A highly publicized five-figure ransom was demanded of the Los Angeles-based Hollywood Presbyterian Medical Center in February. A ransomware attack disabled access to the hospital’s network, email and patient data. After 10 days of major disruption, hospital officials paid the $17,000 (40-bitcoin) ransom to get their systems back up. Four months later, the University of Calgary paid $20,000 CDN in bitcoins to get its crippled systems restored.
Now with a new price-discrimination Fantom on the loose, organizations can expect to be held hostage for even higher ransoms in the future.
November 4, 2016
By Susan Richardson, Manager/Content Strategy, Code42
What’s the most effective thing you can do for cyber security awareness? Stop talking about it, according to a new study that uncovered serious security fatigue among consumers. The National Institute of Standards and Technology study, published recently, found many users have reached their saturation point and become desensitized to cyber security. They’ve been so bombarded with security messages, advice and demands for compliance that they can’t take any more—at which point they become less likely to comply.
Security fatigue wasn’t even on the radar
Study participants weren’t even asked about security fatigue. It wasn’t until researchers analyzed their notes that they found eight pages (single-spaced!) of comments about being annoyed, frustrated, turned off and tired of being told to “watch out for this and watch out for that” or being “locked out of my own account because I forgot or I accidentally typed in my password incorrectly.” In fact, security fatigue was one of the most consistent topics that surfaced in the research, cited by 63 percent of the participants.
The biases tied to security fatigue
When people are fatigued, they’re prone to fall back on cognitive biases when making decisions. The study uncovered three cognitive biases underlying security fatigue:
- Users are personally not at risk because they have nothing of value—i.e., who would “want to steal that message about how I made blueberry muffins over the weekend.”
- Someone else, such as an employer, a bank or a store, is responsible for security, and if targeted, they will be protected—i.e., it’s not my responsibility.
- No security measures will really make a difference—i.e., if Target and the government and all these large organizations can’t protect their data from cyber attacks, how can I?
The repercussions of security fatigue
The result of security fatigue is the kind of online behavior that keeps a CISO up at night. Fatigued users:
- Avoid unnecessary decisions
- Choose the easiest available option
- Make decisions driven by immediate motivations
- Behave impulsively
- Feel a loss of control
What can you do to overcome employee security fatigue?
To help users maintain secure online habits, the study suggests organizations limit the number of security decisions users need to make because, as one participant said, “My [XXX] site, first it gives me a login, then it gives me a site key I have to recognize, and then it gives me a password. If you give me too many more blocks, I am going to be turned off.”
The study also recommends making it simple for users to choose the right security action. For example, if users can log in two ways—either via traditional username and password or via a more secure and more convenient personal identity verification card—the card should show up as the default option.
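That "secure default" recommendation can be expressed in code. The sketch below (method names and strength rankings are illustrative, not from the study) pre-selects the strongest login method a user's account supports:

```python
# Rank available login methods so the most secure one is offered as
# the pre-selected default. Rankings here are illustrative.
METHOD_STRENGTH = {
    "piv_card": 3,   # personal identity verification card
    "totp": 2,       # one-time code from an authenticator
    "password": 1,   # traditional username/password
}

def default_method(available):
    """Choose the strongest supported method as the default option,
    so the easy choice and the secure choice are the same choice."""
    return max(available, key=lambda m: METHOD_STRENGTH.get(m, 0))

print(default_method(["password", "piv_card"]))  # piv_card
print(default_method(["password"]))              # password
```

The design point matches the study's framing: a fatigued user takes the path of least resistance, so the path of least resistance should be the secure one.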
November 2, 2016
By Jacob Ansari, Manager, Schellman
On Oct. 21, Dyn, a provider of domain name services (DNS), an essential function of the Internet that translates names like www.schellmanco.com to its numerical IP address, went offline after a significant distributed denial of service (DDoS) attack affected Dyn’s ability to provide DNS services to major Internet sites like Twitter, Spotify, and GitHub. Initial analysis showed that the DDoS attack made use of Mirai, malware that takes control of Internet of Things (IoT) devices for the purposes of directing Internet traffic at the target of the DDoS attack. Commonly referred to as botnets, these networks of compromised devices allow for the distributed version of denial of service attacks; the attack traffic occurs from a broad span of Internet addresses and devices, making the attack more powerful and more difficult to contain.
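The "distributed" part is what makes these attacks hard to filter: blocking or rate-limiting any one address accomplishes little. A toy model (all addresses and counts invented) makes the contrast concrete:

```python
from collections import Counter

def top_source_share(requests):
    """Fraction of total traffic from the single busiest source IP.
    Blocking one address helps little when this number is small."""
    counts = Counter(ip for ip, _ in requests)
    busiest = counts.most_common(1)[0][1]
    return busiest / len(requests)

# Centralized flood: one host sends everything -> trivial to block.
central = [("198.51.100.1", "GET /") for _ in range(1000)]
# Botnet flood: 1,000 compromised IoT devices send one request each.
botnet = [(f"10.{i // 250}.0.{i % 250}", "GET /") for i in range(1000)]

print(top_source_share(central))  # 1.0
print(top_source_share(botnet))   # 0.001
```

With the botnet, no single source stands out, so defenders must absorb or scrub the aggregate volume rather than filter by origin, which is why Mirai-scale attacks are so much more powerful and difficult to contain.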
Mirai is not the first malware to target IoT devices for these purposes, and security researchers have found numerous security vulnerabilities in all manner of IoT devices, including cameras, kitchen appliances, thermostats, and children’s toys. The author of the Mirai code, however, published the full source code online, allowing attackers with only a modicum of technical capability to hijack IoT devices and mount potentially significant DDoS attacks. Still, the core of the issue remains the fundamental insecurity of IoT devices.
While IoT device manufacturers might face complicated security challenges from working in new environments or with the kinds of hardware or software constraints not seen on desktop systems or consumer mobile devices, the reality, at least for now, is that IoT devices have the kinds of security weaknesses that the rest of the Internet learned about 20 years ago, primarily default administrative accounts, insecure remote access, and out-of-date and vulnerable software components. Researchers have found that they can remotely control IoT devices, such as baby monitors or even automobiles, extract private data from the mobile apps used to interface with devices, or cause damage to other equipment the IoT device controls, such as harming a furnace by toggling the thermostat on and off repeatedly.
Ultimately, defending against DDoS attacks has a few components. ISPs and carriers bear some responsibility to identify these kinds of attacks and take the actions that only they can take. Security and Internet services like Dyn or companies that provide DDoS mitigation will need to scale up their capabilities to address greater orders of magnitude in the attacks they could face. But for IoT-based botnet attacks, the lion’s share of responsibility falls on IoT device manufacturers, who have a lot of catching up to do on good security practice for the devices and applications that they provide.
October 31, 2016
By Ryan Mackie, Principal and ISO Certification Services Practice Director, Schellman
ISO/IEC 27001:2013 (ISO 27001) certification is becoming more of a conversation in most major businesses in the United States. To provide some depth, there was a 20% increase in ISO 27001 certificates maintained globally (comparing the numbers from 2014 to 2015, as noted in the recent ISO survey).
As for North America, there was 78% growth in ISO 27001 certificates maintained compared to 2014, clear evidence that ISO 27001 is making its imprint on organizations in the United States. However, it’s just the beginning. Globally, there are 27,563 ISO 27001 certificates maintained, of which only 1,247 are in the United States; that is 4.5% of all ISO 27001 certificates.
As the standard makes its way into boardroom and compliance department discussions, one of the first questions is how to scope the effort. This short narrative covers something that we, as an ANAB- and UKAS-accredited ISO 27001 certification body, deal with often when current clients or prospects ask about scoping their ISO 27001 information security management system (ISMS): specifically, how to handle third-party data centers or colocation service providers.
Consider an organization that is a software-as-a-service (SaaS) provider with customers throughout the world. All operations are centrally managed out of one location in the United States, but to meet the needs of global customers, the organization has placed its infrastructure at colocation facilities in India, Ireland, and Germany. It has a contractual requirement to obtain ISO 27001 certification for its SaaS services and is now starting from the ground up. First things first: it needs to determine its scope.
It is quite clear that, given the scenario above, the scope will include the SaaS offering. With ISO 27001, the ISMS will encompass the full SaaS offering (to ensure that the right people, processes, procedures, policies, and controls are in place to meet confidentiality, integrity, and availability requirements as well as regulatory and contractual requirements). When determining the reach of the control set, organizations typically consider those that are straightforward: the technology stack, the operations and people supporting it, its availability and integrity, and the supply chain fostering it. This example organization is no different but struggles with how to handle its colocation service providers. Ultimately, there are two options: inclusion and carve-out.
The organization can include the sites in the scope of its ISMS. The key benefit is that the locations themselves would be included on the final certificate. But with an ISMS, an organization cannot include another organization’s controls within its scope, as it bears no responsibility for the design, maintenance, and improvement of those controls in relation to the risk associated with the services provided.
So, including a colocation service provider is no different from including rented office space in a multi-tenant building: the organization is responsible for and maintains the controls once an individual enters its boundaries, while all other controls are the landlord’s responsibility. The controls within the organization’s rented space at the colocation service provider would be considered relevant to the scope of the ISMS. These controls would be limited, which is understandable given their already very low risk; however, they would still need to be assessed. That means an onsite audit would be required to ensure that each location, should it be included within the scope and ultimately on the final certificate, has the proper controls in place and has been physically validated by the certification body.
As a result, including these locations would put them on the certificate but would require the time and cost needed to audit them (albeit a limited assessment, focused only on those controls the organization is responsible for within its rented space at the colocation service provider).
The organization can instead choose to carve out the colocation service provider locations. Compared to the inclusion method, this is far cheaper, since onsite assessments are not required. More reliance would be placed on the controls supporting the Supplier Relationships control domain in Annex A of ISO 27001; these controls, however, are critical under both the inclusion and carve-out methods. The downside of this option: the locations could not be included on the final ISO 27001 certificate (as they were not included within the scope of the ISMS), and it may require additional conversations with customers explaining that, though those locations were not physically assessed as part of the audit, the logical controls of the infrastructure sited within them were within the scope of the assessment and were tested.
Ultimately, it is a clear business decision. Nothing in the ISO 27001 standard requires certain locations to be included within the scope of the ISMS, and the organization is free to scope its ISMS as it sees fit. Additionally, unlike other compliance efforts (such as AICPA SOC examinations), there is no required assertion from the third party regarding its controls, as the ISMS, by design, does not include any controls outside the responsibility of the organization being assessed. However, the organization should keep the final certificate in mind and ask whether it will be fully accepted by its intended audience. Does the cost of the onsite audit warrant including these locations, or is the justification just not there?
If this scenario applies to your situation or scoping, Schellman can talk through the benefits and drawbacks of each option so that you head into the certification audit with scoping confidence.
October 27, 2016
By Evelyn de Souza, Data Privacy and Security Leader, Cisco Systems and Strategy Advisor, Cloud Security Alliance
Everything we know about defeating the insider threat seems not to be solving the problem. In fact, evidence from the Deep, Dark and Open Web points to a greatly worsening problem. Today’s employees work with a number of applications, and with a series of clicks information can be either maliciously or accidentally leaked.
The Cloud Security Alliance has been keen to uncover the extent of the insider threat problem, in keeping with its overall mission of providing security assurance and education within cloud computing.
As a follow-up to the Top Threats in Cloud Computing report, over recent months we surveyed close to 100 professionals on the extent of the following:
- Employees leaking critical information and tradecraft on illicit sites
- Data types and formats being exfiltrated along with exfiltration mechanisms
- Why so many data threats go undetected
- What happens to the data after it has been exfiltrated
- Tools to disrupt and prevent the data exfiltration cycle
- Possibilities to expunge traces of data once exfiltrated
We asked some difficult questions that surprised our audience and that many were hard pressed to answer. We wanted a clear picture of the extent of knowledge and where the gaps lay. We hear lots of talk about threats to the cloud and the challenges organizations face, and, in the wake of emerging data privacy regulation, we see much discussion about ensuring compliance. However, the results of this survey show a gap in dealing with both present and future requirements for data erasure in the cloud. And despite the fact that accidental insider threats and misuse of data are a common phenomenon, there is a distinct lack of procedure for dealing with such instances across cloud computing.
October 26, 2016
By Avani Desai, Executive Vice President, Schellman & Co.
October 25, 2016
By Susan Richardson, Manager/Content Strategy, Code42
During National Cyber Security Awareness Month, understanding the ins and outs of ransomware seems particularly important—given the scandalous growth of this malware. In this webinar on ransomware hosted by SC Magazine, guest speaker John Kindervag, vice president and principal analyst at Forrester, talks about what ransomers are good at—and offers best practices for hardening defenses. Code42 System Engineer Arek Sokol is also featured as a guest speaker, defining continuous data protection as a no-fail solution that assures recovery without paying the ransom.
The art of extortion
Kindervag says ransomers are good at leveraging known vulnerabilities when organizations are slow to patch. They are also excellent phishermen, posing skillfully as trusted brands to lure their prey; collaborative entrepreneurs who learn and share information; and enthusiastic teachers, eager to impart how to pay in bitcoin for the unschooled.
Like Pearl Harbor, Kindervag says, the day the enterprise gets hit with across-the-board ransomware will live in infamy—unless the organization has planned for the event with effective backup.
Kindervag advises the following to prevent the delivery of ransomware:
- Prioritized patch management to avoid poor security hygiene that puts computer systems at risk.
- Email and web content security that includes effective anti-spam, gray mail categorization, and protection for employees against poisoned attachments.
- Improved endpoint protection with key capabilities that include prevention, detection and remediation, USB device control to reduce the ransomware infection vector, and isolation of vulnerable software through app sandboxing and network segmentation.
- Hardening network security with a zero trust architecture in which any entity (users, devices, applications, packets, etc.) requires verification regardless of its location on or with respect to the corporate network to prevent the lateral movement of malware.
- A focus on clean, effective backups.
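The zero trust item above boils down to one rule: verify every request and grant nothing based on network location. A minimal sketch (the users, resources, and policy table are invented for illustration):

```python
# Minimal zero-trust access check: every request is evaluated against
# explicit policy; being "inside" the corporate network grants nothing.
# The policy table below is hypothetical.
AUTHORIZED = {("alice", "payroll-db"), ("backup-svc", "file-share")}

def allow(user, resource, on_corporate_network, authenticated):
    """Grant access only to authenticated, explicitly authorized pairs."""
    del on_corporate_network  # deliberately ignored: no implicit trust
    return authenticated and (user, resource) in AUTHORIZED

print(allow("mallory", "payroll-db", True, False))  # False: unauthenticated, internal or not
print(allow("alice", "payroll-db", False, True))    # True: authenticated and authorized
```

Because location is discarded outright, malware that lands on one internal machine cannot leverage "being inside" to move laterally, which is precisely the property Kindervag's recommendation is after.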
The ransomware antidote
Following Kindervag’s “hardening defenses” presentation, Sokol reports on the number of businesses hit by ransomware in 2015 (47 percent) and how many incidents come through the endpoint (78 percent). He also dispels the rumor that file sync and share are synonymous with rather than antithetical to endpoint backup.
During the webinar, Sokol demonstrates the extensibility of modern, continuous, cross-platform endpoint backup. He describes the efficacy of endpoint backup in recovering data following ransomware or a breach, its utility in speeding and simplifying data migration and its ability to visualize data movement—thereby identifying insider threats when employees leak or take confidential data. Don’t miss it.