January 6, 2017
By Ajmal Kohgadai, Product Marketing Manager, Skyhigh Networks
As enterprises continue to migrate their on-premises IT infrastructure to the cloud, they often find that their existing threat protection solutions aren't sufficient to consistently detect threats that arise in the cloud. Security information and event management (SIEM) solutions still rely on a rule-based (or heuristics-based) approach to detect threats, and that approach often fails in the cloud. This is, in large part, because SIEMs don't evolve without significant human input as user behavior changes over time, new cloud services are adopted, and new threat vectors are introduced.
Without a threat protection solution built for the cloud, enterprises can suffer data loss when:
- Malicious or careless insiders download data from a corporate-sanctioned cloud service, then upload it to a shadow cloud file sharing service (e.g., the Anthem breach of 2015)
- An employee downloads data onto a personal device, regardless of being on or off-network, at which point control over that data is lost
- Privileged users of a cloud service (such as administrators) change security configurations inappropriately
- An employee shares data with a third party, such as a vendor or partner
- Malware on a corporate computer leverages an unmanaged cloud service as a vector to exfiltrate data stolen from on-premises systems of record
- A user endpoint device syncs malware to a file sharing cloud service and exposes other users and the corporate network to malware
- Data in a sanctioned cloud service is lost to an insecure and unmanaged cloud service via an API connection between the two services
However, even the most advanced cloud threat protection technology can be rendered ineffective when it’s not being used to its fullest potential. Below are some of the proven best practices and must-haves when implementing a cloud threat protection solution.
- Focus on multi-dimensional threats, not simple anomalies – a user logs in from a new IP address, downloads a higher-than-average volume of data, or changes a security setting within an application. In isolation, these are anomalies but not necessarily indicative of a security threat. Focus first on threats that combine multiple indicators and anomalies, providing strong evidence that an incident is in progress (see the sketch after this list).
- Start with machine-defined models, then refine – aside from accuracy limitations, it's difficult to get started with threat protection by configuring detailed rules with thresholds for which you have no context. Start with unsupervised machine learning – that is, software that analyzes user behavior and automatically begins detecting threats. Augment with feedback later to fine-tune threat detection and reduce false positives.
- Monitor all cloud usage for shadow and sanctioned apps – cloud activity within one service might appear routine, but threats are often signaled by multiple activities across services. Correlate activity across apps and a pattern will start to appear if a threat is in motion. That's why it is important to start with visibility into both sanctioned and unsanctioned cloud services to get the full picture.
- Leverage your existing SIEM and SOC workflow – events generated by a cloud threat protection solution should flow into existing SOC/SIEM solutions in real time via a standard feed. This allows security experts to correlate cloud anomalies with on-premises ones and to fold cloud threat incident response into the incident response workflows of their existing SOC/SIEM.
- Correlate cloud usage with other data sources – looking at a single data source to detect threats is inadequate. It is necessary to bring in additional information for context, such as whether the user is logging in through an anonymizing proxy or a Tor connection, or whether her account credentials are for sale on the darknet.
- Whitelist low-risk users and known events – a good rule of thumb is to tune the threat protection system so it generates only as many threat events as the security team has the bandwidth to follow up on. One way to do this is to raise detection thresholds and test the results. Another is to whitelist events generated by low-risk (trusted) users. This protects your IT security team from being inundated with false positives.
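To make the first and last of these practices concrete, here is a minimal Python sketch. The indicator names, weights, and threshold are invented for illustration, not Skyhigh's implementation: several weak anomaly signals combine into a single threat score, and whitelisted low-risk accounts are suppressed before an alert is raised.

```python
# Hypothetical indicator weights: no single indicator crosses the alert
# threshold on its own, but combinations of indicators do.
WEIGHTS = {
    "new_ip_address": 0.3,
    "excessive_download": 0.4,
    "security_setting_changed": 0.4,
    "anonymizing_proxy": 0.5,
}
ALERT_THRESHOLD = 0.8
WHITELIST = {"svc-backup", "svc-scanner"}  # known low-risk service accounts

def threat_score(indicators):
    """Sum the weights of the anomaly indicators seen for one user."""
    return sum(WEIGHTS.get(name, 0.0) for name in indicators)

def triage(user, indicators):
    """Return an alert dict for the SOC, or None if no alert is warranted."""
    if user in WHITELIST:
        return None  # trusted account: keep the event, skip the alert
    score = threat_score(indicators)
    if score >= ALERT_THRESHOLD:
        return {"user": user, "score": score, "indicators": sorted(indicators)}
    return None

# A login from a new IP alone is an anomaly, not an alert...
assert triage("alice", {"new_ip_address"}) is None
# ...but combined with a bulk download over an anonymizing proxy, it is.
print(triage("alice", {"new_ip_address", "excessive_download", "anonymizing_proxy"}))
```

In a real deployment the weights would come from the machine-defined models described above, with the whitelist and thresholds applied as the later refinement step.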
December 22, 2016
By Laurie Kumerow, Consultant, Code42
On Black Friday, a hacker hit San Francisco’s light rail agency with a ransomware attack. Fortunately, this story has a happy ending: the attack ended in failure. So why did it raise the hairs on the back of our collective neck? Because we fear that next time a critical infrastructure system is attacked, it could just as easily end in tragedy. But it doesn’t have to if organizations with Industrial Control Systems (ICS) heed three key lessons from San Francisco’s ordeal.
First, let's look at what happened: On Friday, Nov. 25, a hacker infected the San Francisco Municipal Transportation Agency's (SFMTA) network with ransomware that encrypted data on 900 office computers, spreading through the agency's Windows systems. As a precautionary measure, the third party that operates SFMTA's ticketing system shut down payment kiosks to prevent the malware from spreading. Rather than stop service, SFMTA opened the gates and offered free rides for much of the weekend. The attacker demanded a 100 Bitcoin ransom, or around $73,000, to unlock the affected files. SFMTA refused to pay since it had a backup system. By Monday, most of the agency's computers and systems were back up and running.
Here are three key lessons other ICS organizations should learn from the event, so they’re prepared to derail similar ransomware attacks as deftly:
- Recognize you are increasingly in cybercriminals' crosshairs. Cyberattacks on ICS, which control public and private infrastructure such as electrical grids, oil pipelines and water systems, are on the rise. In 2015, the U.S. Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) responded to 20% more cyber incidents than in 2014. And for the first time since the agency started tracking reported incidents in 2009, the critical manufacturing sector experienced more incidents than the energy sector. Critical manufacturing organizations produce products like turbines, generators, primary metals, commercial ships and rail equipment that are essential to other critical infrastructure sectors.
- Keep your IT and OT separate. Thankfully, the San Fran Muni ransomware attack never went beyond SFMTA's front-office systems. But, increasingly, cyber criminals are penetrating control systems through enterprise networks. An ICS-CERT report noted that while the 2015 penetration of OT systems via IT systems was low at 12 percent of reported incidents, it represented a 33 percent increase from 2014. Experts say the solution is to adopt the Purdue Model, a segmented network architecture with separate zones for enterprise, manufacturing and control systems (see the sketch after this list).
- Invest in off-site, real-time backup. SFMTA was able to recover the encrypted data without paying the ransom because it had a good backup system. That wasn’t the case with the Lansing (Michigan) Board of Water & Light. When its corporate network suffered a ransomware attack in April, the municipal utility agency paid $25,000 in ransom to unlock its accounting system, email service and phone lines.
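To make the segmentation idea concrete, here is a toy Python illustration. The zone names and the adjacency rule are a simplification of the Purdue Model, not a reference implementation: traffic may only cross between adjacent zones, so a compromise of enterprise IT cannot reach control systems directly.

```python
# Simplified zone hierarchy, ordered from enterprise down to control systems.
ZONES = ["enterprise", "dmz", "manufacturing", "control"]

def flow_allowed(src: str, dst: str) -> bool:
    """Permit traffic only between the same or adjacent zones."""
    return abs(ZONES.index(src) - ZONES.index(dst)) <= 1

assert flow_allowed("enterprise", "dmz")          # office systems reach the DMZ
assert not flow_allowed("enterprise", "control")  # but never the control zone directly
```

Had SFMTA's control systems sat behind this kind of zoning, ransomware on 900 office computers would have had no direct path to train operations—which, fortunately, is roughly how the incident played out.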
If San Francisco’s example isn’t enough to motivate ICS organizations to take cybersecurity seriously, then Booz Allen Hamilton’s 2016 Industrial CyberSecurity Threat Briefing should do the trick. It includes dozens of cyber threats to ICS organizations.
December 19, 2016
By Nigel Hawthorn, Skyhigh Networks, EMEA Marketing Director
Data breaches are happening all the time; they often hit the news for a short while and are then replaced with the latest list of victims. So we thought we'd review a data breach from a year ago and look back at the total cost to the company involved. The breach took place in October 2015, when UK service provider TalkTalk was hit by a DDoS attack and a SQL injection used to extract data.
The breach resulted in the theft of personal data. Full details of the loss are available in other articles, so there's no need to go into the technical details here.
There was a huge amount of publicity in the UK; during the first few days, the situation and the amount of data lost were unclear. In the end, 156,959 sets of personal details were stolen, and 15,656 of these included bank account details. The company contacted each of its customers to reassure them and provided a free credit monitoring subscription for a year in case other data had also been lost and was misused.
In its following financial results, the company admitted to lost customers, direct costs to the business of £60,000,000 and a revenue drop of £80,000,000. A subsequent review of the total market showed that it had lost 4.4% of market share.
One year later, in October 2016, TalkTalk was fined £400,000 by the Information Commissioner's Office (ICO) for the incident. The fine is the highest ever imposed by the ICO, with TalkTalk's lack of cybersecurity cited as the reason for the amount. The Information Commissioner, Elizabeth Denham, said that TalkTalk's "failure to implement the most basic cybersecurity measures allowed hackers to penetrate systems with ease". While in the eyes of some the fine may seem high, it's only £2.50 per impacted customer.
The running tally reads like a receipt:
- ICO fine: £400,000
- Direct costs: £60,000,000
- Lost revenue: £80,000,000
This breach rewards a closer look, and there are key lessons all businesses should learn.
- The total cost of a data breach isn’t always obvious
While the £400,000 fine is substantial, it's really just the tip of the iceberg compared with how much the data breach actually cost. There were many other financial repercussions which, for some other firms, might have been fatal: an 11 percent drop in share price, as well as the loss of 101,000 existing customers and potential future ones. All in all, when remediation costs are included too, TalkTalk calculated that the breach cost it more than £80 million in revenue. That's hardly pocket change.
- Acquisitions and demergers affect cyber risk
When Carphone Warehouse purchased the UK subsidiary of Tiscali, the business was merged with TalkTalk, which it also owned at the time. Following the data breach, the ICO's investigation revealed that the hackers had gained access to the customer database through vulnerable web pages that had belonged to Tiscali. When companies merge or split, the impact on IT systems must be managed, however insignificant those systems may seem. Systems of different parentage can blunt the effectiveness of a cybersecurity solution or process, leaving potential access points unguarded.
- Patching and updating can mitigate some of the risks caused by aging systems
It's no great surprise that older systems are more vulnerable to cyber attacks than newer ones. Yet some businesses continue to rely on aging systems without patching or updating them, which simply makes things even easier for cybercriminals. The targeted Tiscali web pages had not been patched for three and a half years, and the backend database was no longer supported by the supplier. When you consider the rapid pace of cyber threat evolution, that's the equivalent of leaving the windows and doors open. Businesses must ensure they are patching on a regular basis and setting aside time for major updates.
- Warnings and red flags should be investigated
TalkTalk has faced, and will continue to face, scrutiny for its handling of the debacle, but one of the biggest criticisms is that it did not investigate numerous warnings that something was wrong. While it was the October 2015 data breach that made these particular headlines, TalkTalk customers had already fallen victim to scams resulting from a previous breach, and the regulator's investigation found there had been two SQL injection attacks in the preceding three months; TalkTalk was not monitoring those particular web pages. Whether the company ignored the warnings or was simply ignorant, businesses should investigate any signs that an issue exists. This also includes red flags generated by cybersecurity systems. Almost a third of companies suffer from alert fatigue, because of the sheer frequency of alerts and numerous false positives, and do not investigate.
- Communication plans are essential
How a company communicates a data breach is vital in mitigating the potential damage to reputation. If customer data has been compromised, customers need to be made aware of it, and the need is even more pressing if bank details are taken. To ensure all stakeholders are reassured that the situation is being handled, firms must have a communication plan, including draft email, letter and script templates, in place so communications can be issued immediately. Unfortunately, TalkTalk's initial responses fanned the flames, due in part to a lack of preparation as well as slow identification of the total data loss. While companies must be proactive with their communications, they must also have the necessary resources to deal with customers calling in. TalkTalk customers faced long holding times when ringing to find out more information, compounding anger further.
- EU GDPR will increase fines
The ICO's fine is a record amount, but TalkTalk is fortunate that the breach took place before the EU GDPR comes into force in May 2018. The new regulation will see potential fines increase to four percent of global turnover or €20 million, whichever is higher. In TalkTalk's case this could mean a fine of around £73 million (four percent of roughly £1.8 billion in annual revenue), about the same as its profit in its last financial year.
- EU GDPR enforces disclosure
The GDPR demands disclosure of all incidents involving the loss of unencrypted data. Any company that experiences data loss, regardless of whether the fault lies with it or a third party, will have 72 hours to disclose the incident to the regulators and will have to inform data subjects "without delay". Being able to investigate data transfers and monitor cloud use will therefore become essential.
- Cybersecurity is a boardroom issue
If a company were to take only one lesson away from TalkTalk's breach, it's that data is now the crown jewels of any business. Data helps drive sales and growth, but mishandling it can lead to severe fines and even closure. It needs to be treated with the utmost respect, and that means understanding that cybersecurity is now a boardroom discussion. For too long it has been considered the remit of IT but, with so many areas where a business can become vulnerable, it must now be an enterprise-wide endeavour.
December 15, 2016
By Jeremy Zoss, Managing Editor, Code42
If one of your employees gets duped into transferring money or securities in a phishing scam, don’t expect your cyber insurance policy to cover it. And even your crime policy won’t cover it unless you purchase a specific social engineering endorsement. Many companies have learned the hard way and tried to sue their insurance carriers, with little luck.
Aqua Star, a New York seafood importer, expected to be covered after a spoofed email from a supplier led an employee to change the supplier's bank account, causing Aqua Star to wire more than $700,000 to a hacker instead of the supplier. Aqua Star had a crime policy through Travelers, which included Computer Fraud coverage applying to loss caused by the fraudulent entry of electronic data into any computer system owned, leased or operated by the insured. But when Aqua Star filed the claim, Travelers pointed out an exclusion if the data was entered by an authorized user. Aqua Star then sued Travelers, but the court agreed with Travelers, ruling that the employee was clearly an authorized user.
A similar phishing scam resulted in Apache Corp., an oil and gas producer, wiring $2.4 million to cybercriminals. Its insurance company, Great American, denied the payout, so Apache went to district court and won. However, Great American appealed to a higher court, which reversed the decision, saying the bogus email didn't directly cause the loss.
What commercial cyber insurance policies do cover
Cyber insurance policies cover losses that result from unauthorized data breaches or system failures. But they vary greatly in the details and exceptions. Most will cover forensic investigation fees, monetary losses caused by network downtime, data loss recovery fees, costs to notify affected parties and manage a crisis, legal expenses, and regulatory fines.
When it comes to ransomware, you need to look closely at the policy's Cyber Extortion coverage. If it offers only third-party coverage, then ransomware losses to your own business aren't covered; it is first-party coverage that pays when you are the one being extorted.
Crime insurance policies cover losses that result from theft, fraud or deception. But as the Aqua Star and Apache examples illustrate, insurers typically deny coverage for social engineering fraud, claiming that the loss didn’t result from “direct” fraud. Insurers contend that the crime policy applies only if a cybercriminal penetrates the company’s computer system and illegally takes money out of company coffers.
Some crime policies also contain a “voluntary parting” exclusion that specifically bars social engineering claims by barring coverage for losses that arise out of anyone acting with authority who voluntarily gives up title to, or possession of, company property.
Fishing for a solution? Add an endorsement
Many insurance companies offer a social engineering fraud endorsement, like this one from Chubb. It’s offered under a crime policy for a nominal additional premium. The coverage, sometimes referred to as an impersonation fraud or fraudulent instruction endorsement, is typically up to $250,000 per occurrence, with no annual aggregate, but higher limits are available for a higher premium.
The net lesson: a phishing endorsement is an easy fix to a potentially costly oversight.
December 14, 2016
By Tolga Erbay, Senior Manager, Security Risk and Compliance, Dropbox
In early 2014 Dropbox joined the Cloud Security Alliance (CSA). Working with the CSA is an important part of Dropbox’s commitment to security and transparency.
In June of 2014, Dropbox completed its Level 1 Self-Assessment through STAR, the CSA's publicly available registry, which documents how Dropbox's security practices measure up to industry-accepted standards and the CSA's best practices. Building on that Level 1 Self-Assessment, Dropbox recently announced CSA STAR Level 2 Certification, which attests to its security controls and processes.
“Dropbox continuously proves to be at the forefront of compliance standards,” said Jim Reavis, co-founder and CEO of the Cloud Security Alliance (CSA). “With rigorous independent auditing and certification for both well-accepted and up-and-coming standards, they’re demonstrating an impressive dedication to their customers’ security. We’re excited to have Dropbox on the short list of companies that have achieved our Security, Trust & Assurance Registry (STAR) Level 2 Certification.”
Dropbox is dedicated to building trust with its customers across the globe, and helping them fit Dropbox into their compliance strategies. Dropbox is proud to work closely with the CSA to establish open and transparent cloud security best practices within the industry. Dropbox strives to stay ahead of the curve as new standards and certifications are introduced and will continue to partner with the CSA to support research and education in key cloud security areas.
Standards such as CSA STAR certification underscore Dropbox’s commitment to keeping customer data safe, operating at the highest levels of availability, and maintaining transparency in data storage and processing. And they demonstrate Dropbox’s leadership in the SaaS industry, as Dropbox is one of the first major providers to achieve CSA STAR certification. Dropbox is excited to make continued strides with these compliance milestones.
December 9, 2016
By Lance Logan, Manager/Global Marketing Program, Code42
For the second year in a row, IBM’s Fletcher Previn wowed the audience at the JAMF user conference with impressive statistics on how the company’s growing Mac-based workforce is delivering dramatic and measurable business value.
IBM expects Macs to save $26M in IT costs over four years
Big Blue says each Mac device will save it at least $265 over a four-year lifespan (and up to $535, depending on model) versus comparable PCs. With IBM's Mac workforce at 90,000 (and adding 1,300 Mac users per month), that adds up to more than $26 million in savings over the next four years—a huge margin. Simpler IT support and a high level of user self-service drive the bulk of this cost savings. IBM reports that just 3.5 percent of its Mac users currently call the help desk, compared to 25 percent of its PC users. This enables IBM to support 90,000+ Mac users (and 217,000 Apple device users) with just 50 IT employees.
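As a back-of-envelope check on those numbers (my arithmetic, not IBM's model), the existing fleet at the minimum per-device figure accounts for most of the claim, and the monthly growth closes the gap:

```python
fleet = 90_000        # current Mac fleet
savings_min = 265     # minimum four-year savings per Mac, in dollars

print(f"Base fleet alone: ${fleet * savings_min:,}")  # $23,850,000

# 1,300 new Mac users per month at the same minimum per-device figure:
added_per_year = 1_300 * 12
print(f"Each year of growth adds: ${added_per_year * savings_min:,}")  # $4,134,000
```

Any devices that hit the higher $535 figure only push the total further past $26 million.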
It’s not just IT cost savings driving Mac adoption among big names in business tech. Deloitte calls iOS “the most secure platform for business” and says “Apple’s products are essential to the modern workforce.” Cisco has also jumped on the Apple bandwagon, believing Apple devices will accelerate productivity. Basic user satisfaction also shouldn’t be ignored, as IBM reports a 91 percent satisfaction rate among Mac users and says its pro-Mac policies help the company attract and retain top talent.
The average enterprise is still hesitant about widespread Mac deployment
It’s one thing for big-name tech innovators like IBM and Cisco to proclaim the promise of Macs in the enterprise, but what’s happening across the rest of the enterprise landscape? Code42 recently conducted a survey on Mac deployment among our diverse business contacts, and the results tell a less enthusiastic story.
Macs have a major—and growing—presence in the modern enterprise
Among Code42's enterprise contacts, one-third (33.6%) have more than 500 Mac users and more than one in five (22.8%) have 1,000+ Mac users. These numbers further demonstrate that the modern enterprise is supporting OS diversity with a substantial Mac-based workforce—and we fully expect these numbers to grow in the coming years.
User preference—not business value—still drives most Mac adoption
While IBM and others put total cost of ownership, security and productivity as top reasons for Mac adoption, our results show user preference continues to be the main reason that enterprises are embracing Macs today.
Top reasons for Mac adoption
1. Happier end users (37%)
2. Fewer help desk tickets (14%)
3. Better OS security (12%)
Top IT challenges are Macs’ top strengths
Our survey showed the time-consuming burdens of tech refresh and help desk tickets are the most significant IT challenges associated with end user devices across operating systems, followed by malware/ransomware. These two challenges map directly onto Mac devices' greatest strengths. Macs traditionally enable a much higher level of self-service, and Code42 enables user-driven tech refresh for Mac users (and PC users, too). This level of self-service produces the kind of IT cost savings IBM has seen with its dramatically reduced help desk tickets. For the time being, Macs also continue to be less targeted and less vulnerable to malware and ransomware.
Many IT professionals remain wary of widespread Mac deployment
While our survey showed most enterprises may not be seeing million-dollar IT savings from Mac deployments, they did report a range of definitive benefits. So it’s revealing that one in five respondents said they’re ultimately not big fans of their companies’ Mac adoption.
Realizing advantages of Macs in the enterprise requires preparation, time
Supporting a large Mac-based workforce isn’t as simple as flicking a switch or changing a policy. It requires substantial changes to technology infrastructure and processes to make sure everything from calendars to apps to backup work seamlessly across both Mac and PC users. This often leaves IT stuck in the middle of user preferences and resource realities: Users want Macs, but IT needs the time—and the budget—to put the tools and processes in place to support a hybrid workforce.
But with IBM’s results ringing in the ears of the business world, more and more companies of every size and in every industry are sure to begin exploring the benefits of a larger Mac-based workforce. The best strategy for IT leaders is to act now to get ahead of this inevitable shift. Start examining your infrastructure to find the holes in Mac compatibility, and seek out technology partners that build solutions for this modern hybrid device environment.
Or, as IBM’s Previn put it, “Give employees the devices they want, manage those devices in a modern way, and drive self sufficiency in the environment.”
To learn more about how endpoint backup can protect the data on enterprise Macs, download the market brief Securing & Enabling the Mac-Empowered Enterprise.
December 5, 2016
By Jamie Tischart, CTO Cloud/SaaS, Intel Security
The world is awash in DevOps, but what does that really mean? Although DevOps can mean several things to different individuals and organizations, ultimately it is about the cultural and technical changes that occur to deliver cloud services in a highly competitive environment.
Cultural changes come in the form of integrating teams that historically have been disparate around a single vision. Technical changes come with automating as much of the development, deployment, and operational environment as possible to more rapidly deliver high-quality and highly secure code.
This is where I believe the DevOps debate becomes cloudy (sorry for the pun). As is normal in engineering endeavors, we often forget the purpose or the problem we are trying to solve and instead get mired in the details of the process or the tool. We tend to lose sight of the fact that DevOps exists to solve a specific problem: how to more rapidly deliver higher-quality, more secure products to our customers, so they can solve their problems and we stay ahead of our competitors.
I found it interesting that there was little debate about whether the coined term should be DevOps or OpsDev, yet adding security into the mix has produced three competing terms: DevSecOps, SecDevOps, and DevOpsSec. At first I didn't give it much thought; I figured that over time it would converge into an industry standard and we would move on our merry way of trying to achieve that difficult goal of high-quality, highly secure continuous deployment of cloud services. Then I looked closer and thought that there might be something to these three nomenclatures: they highlight the different challenges that security has in integrating into the software development lifecycle.
Let’s talk about the general purpose of including security in DevOps practices. Security was often an assumed part of the development and testing process to which few people paid attention. Or, security was an afterthought that slowed down the development process and release cycle, executed by some other team requiring fixes to obscure vulnerabilities that would never be found or leveraged for harm.
That entire mindset, while flawed, worked reasonably well in the world of single-tenant application development where a 12-month release cycle was the norm and applications were deployed behind several layers of security appliances. This all changed when we started delivering multi-tenant cloud offerings where any vulnerability could put millions of customers and the reputation of our companies at risk. Yet, we still held onto some of these archaic practices. We were slow to integrate secure coding and testing practices into our everyday engineering execution. We continued to leave security activities until the end of cycles and we left many vulnerabilities unattended because it slowed the release. This was until, of course, someone exploited the vulnerability and then everyone dropped everything and all hell broke loose.
Integrating Security into DevOps
Integrating security into DevOps practices is the goal to alleviate these problems. It is the way to continuously evolve security through automated techniques and to achieve our goal of rapidly delivered high-quality, highly secure products. This brings me back to the different terms for integrating security into the DevOps movement and how each organization needs to determine how security is integrated.
Let's first look at DevOpsSec. Consider the order, and how it implies that security still comes at the end of the process. Maybe I am just being paranoid, but this is a practice we need to curtail; instead we should embed security into every aspect of the lifecycle. If we expound on that a bit and take it literally (and maybe we shouldn't), the team will complete development, deploy and operate, and then review security. If this is done in small increments and completed rapidly, it is still a massive improvement over the end-game security testing we have seen in the past. However, it may still expose vulnerabilities within cloud production environments and require reversion or patching that could have been completed beforehand.
Next let’s review SecDevOps. This would imply that the security activities occur before any development or operations. I am not sure that this is truly practical, although it is certainly a well-intentioned principle and has merits that should be incorporated into the DevOps practice. My interpretation of this is that new requirements/user stories/features – whatever your method – include security requirements in the development. If we take this to the next step, then these security requirements would have automated tests created and added to the automation suites so they can run continuously to ensure that security is inclusive throughout the cycle. Hmm, this sounds pretty good…
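As a minimal sketch of that idea (the endpoint and the requirement are hypothetical, not from any particular team's backlog), a security requirement can ship with the user story as an automated test that runs continuously alongside the functional suite:

```python
import socket
import ssl
import unittest

class SecurityRequirements(unittest.TestCase):
    """Security acceptance criteria delivered with the user story."""

    HOST = "service.example.com"  # hypothetical service endpoint
    PORT = 443

    def test_tls_version_is_modern(self):
        # Requirement: the service must negotiate TLS 1.2 or newer.
        ctx = ssl.create_default_context()
        with socket.create_connection((self.HOST, self.PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=self.HOST) as tls:
                self.assertIn(tls.version(), ("TLSv1.2", "TLSv1.3"))

if __name__ == "__main__":
    unittest.main()
```

Because the test lives in the same suite as everything else, the security requirement is verified on every run rather than at the end of the cycle.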
The last one is DevSecOps. Literally, you can expand this to completing development, then reviewing and automating for security, and then deploying and operating. This articulation hopes to catch security concerns before they are deployed to the world, but security is not as incorporated into the overall process as with SecDevOps. Certainly DevSecOps has the benefit of focusing on security before introducing a vulnerability into the wild, but it is not security-focused in every activity.
Maybe I am taking it too literally, but maybe what we need is SecDevSecOpsSec. Here, security is a continuous activity in itself that needs to be incorporated into all stages of the product lifecycle. However, that is quite a mouthful…
The important thing is that when your organization is approaching DevOps, don’t forget the security aspect. Think about how you are going to integrate security into every aspect of your lifecycle. As for which term to utilize, I am going to standardize on SecDevOps. Integrating security at the start has the best of intentions and will lead to the most secure practices.
December 2, 2016
By Laurie Kumerow, Consultant, Code42
When it comes to cybersecurity, the U.S. federal government recognizes the carrot is more effective than the stick. Instead of using regulations to increase data security and protect personal information within private organizations, the White House is enlisting the insurance industry to offer incentives for adopting security best practices.
In March 2016, the U.S. House Homeland Security Cybersecurity Subcommittee held a hearing to explore possible market-driven cyber insurance incentives. The idea, said Rep. John Ratcliffe, chairman of the subcommittee, is to enable “all boats to rise, thereby advancing the security of the nation.”
The issue isn’t a lack of cyber insurance. Today, 80% of companies with more than 1,000 employees have a standalone cybersecurity policy, according to a Risk and Insurance Management Society survey. The real issue is getting companies to maintain more than a minimum set of security standards.
Borrowing from the fire insurance playbook
The insurance industry has been a catalyst for change in the past. Attendees of the Homeland Security Cybersecurity Subcommittee hearing pointed to the fire insurance market as a good example of using a carrot to drive positive behavior. Insurers offer lower rates to policyholders who adhere to certain fire safety standards, such as installing sprinklers and having extinguishers nearby.
Identifying best practices
So, what are the cybersecurity equivalents of sprinklers and fire alarms? Hearing attendees highlighted four components of an effective cyber risk culture:
- Executive leadership: what boards of directors should do to build corporate cultures that manage cyber risk well.
- Education and awareness: training and other mechanisms that are necessary to foster a culture of cybersecurity.
- Technology: specific technologies that can improve cybersecurity protections.
- Information sharing: ensuring the right people within the company have the information they need to enhance cybersecurity risk investments.
Spurring much-needed actuarial data
The hearing also touched on a major missing element in the current cyber insurance industry: reliable actuarial data regarding data breaches and other cyber incidents. Auto insurers know the likelihood of car accidents, so they know how to price the liability and measure the risk. But the likelihood and ramifications of various data breaches are a wildcard today, leading to problems in pricing cybersecurity policies.
Hearing attendees discussed creating an actuarial data repository with data from leading actuarial firms, forensic technology firms and individual insurer cyber claims. The proposed database would be housed at a nongovernmental location such as the Insurance Services Office Inc. (ISO), which has managed insurer actuarial databases for more than four decades. The hope is the database would encourage voluntary sharing of information about data breaches, business interruption events and cybersecurity controls to aid in risk mitigation.
While the cyber insurance carrot is a long way from becoming reality, at least the seed has been planted.
November 30, 2016
By Jon King, Security Technologist and Principal Engineer, Intel Security
Securing virtual assets that appear and disappear.
The average life span of a container is short and getting shorter. While some organizations use containers as replacements for virtual machines, many are using them increasingly for elastic compute resources, with life spans measured in hours or even minutes. Containers allow an organization to treat the individual servers providing a service as disposable units, to be shut down or spun up on a whim when traffic or behavior dictates.
Since the value of an individual container is low, and startup time is short, a company can be far more aggressive about its scaling policies, allowing the container service to scale both up and down faster. Since new containers can be spun up in seconds or sub-seconds instead of minutes, they also allow an organization to scale down further than was previously safe while still keeping sufficient overhead available to manage traffic spikes. Finally, if a service is advanced enough to have automated monitoring and self-healing, a minuscule perturbation in container behavior might be sufficient to cause the misbehaving instance to be destroyed and a new container started in its place.
At container speeds, behavior and traffic monitoring happens too quickly for humans to process and react. By the time an event is triaged, assigned, and investigated, the container will be gone. Security and retention policies need to be set correctly from the time the container is spawned. Is this workload allowed to run in this location? Are rules set up to manage the arbitration between security policies and SLAs?
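As a hedged sketch of that spawn-time enforcement (the policy fields are invented, not any specific product's API), an admission check can run before a container starts, since there may be no human in the loop for the rest of its life:

```python
# Which workload classes may run in which locations (illustrative policy).
ALLOWED_LOCATIONS = {
    "pci-workload": {"us-east-dc1"},
    "web-frontend": {"us-east-dc1", "eu-west-dc2"},
}

def admit(workload: str, location: str) -> bool:
    """Decide at spawn time whether this workload may run here."""
    return location in ALLOWED_LOCATIONS.get(workload, set())

assert admit("web-frontend", "eu-west-dc2")
assert not admit("pci-workload", "eu-west-dc2")  # blocked before startup, not after
```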
The volume of events from containers also overwhelms human capabilities. Automation and machine learning are essential to collect this data, filter it, and augment the human security professionals who are doing triage. Identifying suspicious traffic or unexpected container behavior through pattern recognition, correlation, and historical comparison is an essential job that machines are very good at.
Perhaps the biggest issue with container life spans is the potential lack of information available for investigations. If you have a container breach, the container is probably gone when you need it for forensic details. It’s like the scene of a crime being deleted before the detectives arrive.
The good news is that if you collect information from a container while it is running, you have a wealth of information available to you. Memory dumps can be captured and analyzed for traces of a malware infection or exfiltration function. And stopped containers can be saved for later analysis. Done well, this is like going back in time to a crime scene, able to examine every detail—not just the faint traces the criminal left behind. Of course, saving this type of data runs counter to many of the benefits of container ephemerality, and could quickly consume a huge amount of storage, so once again automation and machine learning are crucial to help decide which artifacts to retain.
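As one sketch of that capture step, assuming the docker-py SDK is available (error handling omitted), a monitoring hook could snapshot a suspicious container before the autoscaler destroys it:

```python
import docker  # docker-py SDK, assumed installed

def preserve_for_forensics(container_id: str, archive_path: str) -> None:
    """Freeze a suspicious container and keep its filesystem for analysis."""
    client = docker.from_env()
    container = client.containers.get(container_id)
    container.stop()  # stop it rather than letting the orchestrator remove it
    # Keep the filesystem as an image for later inspection...
    container.commit(repository="forensics", tag=container_id[:12])
    # ...and export a raw tar archive for offline tooling.
    with open(archive_path, "wb") as f:
        for chunk in container.export():
            f.write(chunk)
```

Memory capture would need an additional step before the stop (for example, a checkpoint or a host-level dump), since stopping the container discards its memory state.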
As the latest form of resource virtualization, containers enable a new and growing set of security opportunities and threats. Actively involving the security team in container architecture discussions will make sure you are using them to best advantage.
November 23, 2016
By Patty Hatter, Vice President and General Manager, Intel Security Group Professional Services
How to Bring Cloud Usage into the Light
On any given day – with a quick spot-check – you’ll probably find that up to half of your company’s IT usage is basically hidden in the shadows of various business units. Marketing, finance, sales, human resources, and engineering are using file sharing services with customers, online collaboration tools with contractors and suppliers, and multiple SaaS solutions in addition to on-demand IaaS compute resources. Business areas oftentimes make swift decisions to keep their business operations running. As departments look for the best way to do their jobs and efficiently meet their business objectives, they opt for immediate solutions that often operate outside of corporate IT security policies and guidelines.
When it comes to business units – if you haven't created an environment of trust – IT can quickly rank as the least-loved group in a company. Worse yet, you could be seen as the department of prevention. While the business units are looking for new apps or elastic compute to increase productivity, IT is looking for efficiency, security, and compliance. Departments will sidestep IT if they believe the needed services won't be available in time, or if the value proposition is weak.
In today's cyberattack-riddled environments, "shadow IT" is undeniably risky. To ensure optimum safety, you've got to bring shadow IT into the light. Multiple file sharing services have been breached, and credential theft can potentially allow an adversary into any of these services. You've got to have IT security experts involved in the selection of these cloud services or the construction of private clouds. Period.
Soon after joining McAfee, I took on the added responsibility as CIO in addition to my role as VP of operations. No easy task – but I saw what the business functions needed to move forward, and I knew that IT had to be at the center of it, as a “reliable and trustworthy business partner.” My first objective was the transformation of IT into a more collaborative and positive role. There was a lot of shadow IT at the company then and a pervasive attitude of mistrust.
Transformation is an issue of trust. If other groups within the company felt they could not work with IT, we needed to counter that perception. We started with the business functions, which tend to have simpler IT needs, such as marketing and sales, and moved up to the big challenge of winning over engineering.
Start with forgiveness
“It’s easier to ask for forgiveness than permission” is something you often hear when groups are discussing a shadow IT project. I suggest approaching with an attitude of forgiveness and understanding – to rebuild what are often strained relationships. Recent hacks and breaches will make this easier. You may have to remind your colleagues that their data is better off under the IT security tent if something bad happens, and that you will be their partner in this. Having to face the board of directors because the new marketing strategy, product designs, or customer data was stolen is a scenario that should convince most managers to at least participate in talks.
Build trust with transparency
You still need to address the agility and cost issues that are the root cause of shadow IT, or the problem will persist. We put together an effective governance model that enabled a high level of transparency on what was and wasn't working. IT doesn't always think the same way as the other groups, and clear communication and governance were important steps toward understanding the business units' needs and building trust. Developing the cost models together, our business units realized that they got a much better financial deal when working with IT. Moreover, they were operating within the boundaries of corporate security policies.
Set up a cloud architecture team
Tackling shadow IT from the engineering department brought new issues to light. With their own technical resources, “do it yourself” is often the default path for engineering. This not only results in a gap between IT and engineering, but different development stacks and services between the various product teams, which makes it costly and difficult to scale. We set up an engineering/IT cloud architecture team to build a consistent set of use cases and identify big bets that we could put our joint resources on, so we could move forward quickly. It took time to get this started, but we were playing the long game here, working to bridge these two groups, not trying for a quick takeover.
In the end, the teaming approach among IT, the business functions, and engineering enabled us to develop a total view of business needs and a joint architectural approach. We had full visibility into the on-prem and SaaS-managed infrastructure and capabilities, which allowed us to get the results we needed: rapid delivery of new capabilities and an improved cost model.