People Are Not IP Addresses…So Why Do Security Solutions Think They Are?

January 18, 2017

By Jason Garbis, Vice President of Products, Cryptzone

Attackers are erasing the contents of exposed MongoDB databases and replacing them with a note demanding a Bitcoin ransom payment for restoration. It also appears that victims who pay often aren't getting their data back, and that multiple attackers are overwriting each other's ransom demands. These databases are of course important to their owners, and these attacks are clearly a serious headache for them. Hopefully they have backups.

Let’s explore this situation a bit more, and then step back for some analysis.

Here’s What We Know

There is no indication of a vulnerability in MongoDB itself; rather, these systems are allowing administrative access from any IP address and are (mis)configured with either no authentication or default credentials. There are a large number of such systems – Internet service search engines show approximately 100,000 exposed instances, and several independent security researchers had identified over 27,000 hijacked instances as of January 8, a number that's growing daily.
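If you run MongoDB yourself, it is easy to verify whether your own instance is part of this exposed population. Here's a minimal sketch in Python using the pymongo driver (the hostname is hypothetical – substitute an instance you own); if no authentication is configured, the server will answer administrative commands from anyone who can reach port 27017:

```python
# Minimal sketch: check whether a MongoDB instance you own accepts
# unauthenticated access. Assumes Python 3 with pymongo installed;
# "db.example.com" is a hypothetical hostname.
from pymongo import MongoClient
from pymongo.errors import OperationFailure, ServerSelectionTimeoutError

HOST = "db.example.com"  # hypothetical: substitute your own instance

try:
    # Connect with no credentials at all.
    client = MongoClient(HOST, 27017, serverSelectionTimeoutMS=5000)
    # listDatabases is an administrative command; with authentication
    # enabled and no credentials supplied, it should be refused.
    names = client.list_database_names()
    print("EXPOSED: server listed databases without credentials:", names)
except OperationFailure:
    print("OK: server refused the unauthenticated command.")
except ServerSelectionTimeoutError:
    print("Unreachable: no MongoDB service answered on port 27017.")
```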

Putting aside the mistaken configuration that enabled access with no/weak authentication, let’s look at this from a user access and network perspective. At the risk of being too obvious, these systems are Internet-facing either intentionally or unintentionally. If intentional, their admins clearly require remote access, and therefore these systems must expose some network service.

“People are not IP addresses!”
— Jason Garbis, Vice President of Products at Cryptzone

The problem comes down to how access is restricted – and the realization that relying solely on authentication is not enough. Too many systems are either misconfigured (as appears to be the case with these MongoDB instances) or are subject to vulnerabilities – enterprises need to limit access at the network level. The issue is that network security tools are built around controlling access by IP address, yet the problem we need to solve is how people (identities) access these systems. And people are not IP addresses!

If these databases were unintentionally exposed to the Internet, then no remote access is required – either admins have local system access, or they're relying on another security mechanism such as being on a LAN or accessing the network through a VPN. Yet these systems are exposed directly to the Internet, and are therefore not likely on an internal corporate network. Looking at the discovered instances on Shodan, it appears that many of them have IP addresses associated with cloud or hosting providers!

This is an interesting pattern. Because cloud network access is managed by IP addresses, users may be simply setting their cloud network security groups to permit access from anyone on the internet – much to their detriment, as this attack shows.
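To illustrate, here is a minimal audit sketch in Python using the boto3 library (AWS is an assumption on my part – the post doesn't name a provider, and the same pattern applies to any cloud) that flags security groups allowing the entire Internet to reach MongoDB's default port:

```python
# Minimal sketch: audit AWS security groups for rules that expose
# MongoDB's default port (27017) to the whole Internet (0.0.0.0/0).
# Assumes boto3 and configured AWS credentials; AWS itself is an
# assumption -- the same idea applies to any cloud provider.
import boto3

ec2 = boto3.client("ec2")

for sg in ec2.describe_security_groups()["SecurityGroups"]:
    for rule in sg.get("IpPermissions", []):
        # An IpProtocol of "-1" means all protocols and all ports.
        all_ports = rule.get("IpProtocol") == "-1"
        from_port, to_port = rule.get("FromPort"), rule.get("ToPort")
        covers_mongo = all_ports or (
            from_port is not None and from_port <= 27017 <= to_port
        )
        if not covers_mongo:
            continue
        for ip_range in rule.get("IpRanges", []):
            if ip_range.get("CidrIp") == "0.0.0.0/0":
                ports = "all ports" if all_ports else f"{from_port}-{to_port}"
                print(f"OPEN TO THE WORLD: {sg['GroupId']} "
                      f"({sg.get('GroupName', '?')}) allows {ports} from anywhere")
```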

Clearly, misconfiguring a database so that it doesn't require authentication is a problem, but exploits exist even against properly configured and properly secured systems. It's time to realize that the bigger problem is allowing unauthorized users network access to these systems in the first place. Why are there 100,000 instances of MongoDB visible to a public scan? I suggest that most of these were not intended for public access.

The ability to access a service on the network is a privilege, and it must be treated as such. The principle of least privilege demands that we prevent unauthorized users from scanning, connecting to, or accessing our services. Following this principle will dramatically reduce the ability of attackers to exploit misconfigurations or vulnerabilities.

But there’s a problem. There is a disconnect between how we need to model users – as people – and our network security systems, which are centered on IP addresses. And, to repeat myself, people are not IP addresses.

Let’s Bring This Together

Organizations need to secure network access in an identity-centric way, and in a way that’s driven by automated policies so that users – who are people – get appropriate access. Network security systems must be able to do this, and allow us to easily limit user access to the minimum necessary.

The good news is that this is achievable today. The Software-Defined Perimeter (SDP) – an open specification published by the Cloud Security Alliance – defines a model where network access is controlled in an identity-centric way. Every user obtains a dynamically adjusted network perimeter that’s individualized based on their specific requirements and entitlements. The Software-Defined Perimeter is well-suited to cloud environments; network services such as MongoDB can be easily protected by SDP network gateways.

With SDP, organizations can easily define policies that control which users get access to these database instances, and prevent all unauthorized users from scanning or accessing these services – even if they’re misconfigured and don’t require authentication. And, because this access is built around users, not IP addresses, authorized users can securely access these systems from anywhere, with strong authentication enforced at the network level.
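As a concrete illustration of the contrast, here is a deliberately simplified, hypothetical sketch – not the SDP specification itself – of the kind of identity-centric entitlement check an SDP gateway performs after authenticating a user:

```python
# Hypothetical sketch of identity-centric access control, in the spirit
# of the Software-Defined Perimeter; an illustration, not the CSA spec.
# Entitlements are granted to identities, and the gateway admits a
# connection only after the user authenticates -- regardless of the IP
# address they arrive from. Group and service names are made up.
ENTITLEMENTS = {
    "dba-group": {"mongodb-prod:27017"},
    "web-group": {"app-server:443"},
}

def authorize(user_groups, requested_service):
    """Admit the flow only if an authenticated user's group grants it."""
    return any(requested_service in ENTITLEMENTS.get(group, set())
               for group in user_groups)

# An authenticated DBA reaches the database; everyone else -- including
# an anonymous scanner on the Internet -- never sees the port at all.
print(authorize({"dba-group"}, "mongodb-prod:27017"))  # True
print(authorize(set(), "mongodb-prod:27017"))          # False
```

The key point is that the decision is keyed to who the user is, not where their packets come from, so unauthorized identities never get a network path to scan in the first place.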

We’ll never be completely safe in our hyper-connected world, but we’re unnecessarily making things harder for ourselves, as this latest attack shows. We need to take a new, identity-centric approach to network security, and the Software-Defined Perimeter model provides exactly this. Putting this in place will go a long way towards making our systems more secure while keeping our users productive.

Windows 10 Steps Up Ransomware Defense

January 17, 2017

By Jeremy Zoss, Managing Editor, Code42

Here’s some good news for the countless businesses getting ready for the migration to Windows 10: Microsoft recently announced that its Windows 10 Anniversary Update features security updates specifically targeted at fighting ransomware. No defense is completely hack-proof, but it’s great to see the biggest names in the tech world putting ransomware at the top of their list of concerns.

Patching holes, preventing users from “clicking the link”
Microsoft released a guide on how the latest Windows 10 Anniversary Update specifically enhances protection against ransomware. The company focused on eliminating the vulnerabilities hackers have exploited in the past, and says its updated Microsoft Edge browser has no known successful zero-day exploits or exploit kits to date.

The company says its smart email filtering tools helped identify some 58 million attempts to distribute ransomware via email—in July 2016 alone. But what if a phishing email does reach gullible and mistake-prone end users? Microsoft says it has invested in improving its SmartScreen URL filter, which builds a list of questionable or untrustworthy URLs and alerts users should they click on a link to a “blacklisted” domain.

Thanks to security upgrades, Microsoft says Windows 10 users are 58 percent less likely to encounter ransomware than those running Windows 7.

Better threat visibility for IT
On the response end, the Windows 10 Anniversary Update also sees the launch of the Windows Defender Advanced Threat Protection (ATP) service. The basic idea behind Windows Defender ATP is to use contextual analytics of network activity to spot signs of attacks that other security layers miss. Microsoft says the new service gives “a more holistic view of what is attacking the enterprise…so that enterprise security operations teams can investigate and respond.” Better visibility into your users’ activities—now that’s something we at Code42 can get behind.

Using the intelligence of the “hive mind” to fight ransomware
One impediment to the fight against ransomware has been organizations’ reluctance to share information on attacks, both attempted and successful. We already know that new strains of ransomware emerge daily, but without this shared knowledge, even older strains are essentially new and unknown (and thus remarkably effective) to most of the enterprise world. The sheer size and market share of Windows puts Microsoft in a unique position to solve this problem. Its threat detection products are now bringing together detailed information on the millions of attempted ransomware attacks that hit Windows systems every day. With Microsoft now focused on fighting this threat, we’re eager to see the company leverage the intelligence of this hive mind to beat back the advance of the ransomware threat.

What does Microsoft say about ransomware recovery?
It’s important to note that responding to a ransomware attack is not necessarily the same as recovering from an attack. In other words, Windows 10 says it can help you detect successful attacks sooner and limit their impact—but how does it help you deal with the damage already done? How does it help you recover the data that is encrypted? How does it help you get back to business?

The Windows 10 ransomware guide makes just one small mention of recovery, urging all to “implement a comprehensive backup strategy.” However, Microsoft offers a rather antiquated look at backup strategies, leaving endpoint devices uncovered, focusing on user-driven processes instead of automatic, continuous backup, and even suggesting enterprises use Microsoft OneDrive as a backup solution. As we’ve explained before, OneDrive alone is insufficient data protection. It’s an enterprise file sync-and-share solution (EFSS), built to enable file sharing and collaborative productivity—not continuous, secure backup and fast, seamless restores.

Making the move to Windows 10? Make sure your backup is ready
Most enterprises are at least beginning to plan for the move to Windows 10, as they should be. The new OS offers plenty of advantages, not least of which are security features that undoubtedly make Windows 10 more hack-resistant. But as security experts and real-world examples continually show, nothing can completely eliminate the risk of ransomware. That’s why your recovery strategy—based on the ability to quickly restore all data—is just as critical as your defense strategy.

Moreover, as more organizations make the move to Windows 10, they’re seeing that the ability to efficiently restore all data is the key ingredient to a successful migration. Faster, user-driven migrations reduce user downtime and IT burden, and guaranteed backup eliminates the data loss (and resulting lost productivity) that plagues the majority of data migration projects.

Long Con or Domino Effect: Beware the Secondary Attack

January 12, 2017

By Jeremy Zoss, Managing Editor, Code42

Lightning may not strike twice, but cybercrime certainly does. The latest example: A year after the major hack of the U.S. Office of Personnel Management (OPM), cyber criminals are again targeting individuals impacted by the OPM breach with ransomware attacks.

In the new attack, a phishing email impersonates an OPM official, warning victims of possible fraud and asking them to review an attached document—which, of course, launches the ransomware.

OPM attack part of bigger trends in ransomware
The new round of attacks could come from two sources—both are part of trends in ransomware.

  • The long con: The first scenario is that the same individuals that executed the original OPM hack are now launching these ransomware attacks. If this is the case, it at least alleviates some concerns that the OPM hack was state-sponsored cyberterrorism and/or a sign of a new kind of “cold war.” But the trend toward this type of “long con” is scary in its own right. Users are already more likely than ever to “click the link”—now patient cyber criminals are using hacked data to deploy extremely authentic phishing scams.
  • The “kick ‘em while they’re down” attack: It’s more likely that the OPM ransomware attack is just an example of enterprising cybercriminals seeing vulnerability in the already-victimized. This is another unsettlingly effective trend—like “ambulance chasing” for cybercriminals: Follow the headlines to find organizations that have recently been hit with a cyberattack (of any kind), then swoop in posing as official “help” in investigating or preventing further damage. Clever cybercriminals know they can prey on the anxiety, fear and uncertainty of users in this position.

How can you get ahead of evolving ransomware?
Though we’ve said it a thousand times, it’s more true than ever: Ransomware is evolving at an incredible rate and it is overwhelming traditional data security tools. Paying the ransom becomes an appealing option to unprepared businesses, and this steady cash flow only fuels the problem.

Want to see where ransomware is headed next and understand how you can snuff out this threat? Read our new report, The ransomware roadmap for CXOs: where cybercriminals will attack next.

Six Cloud Threat Protection Best Practices from the Trenches

January 6, 2017

By Ajmal Kohgadai, Product Marketing Manager, Skyhigh Networks

As enterprises continue to migrate their on-premises IT infrastructure to the cloud, they often find that their existing threat protection solutions aren’t sufficient to consistently detect threats that arise in the cloud. Security information and event management (SIEM) solutions continue to rely on a rule-based (or heuristics-based) approach to threat detection, and they often fail when it comes to the cloud. This is, in large part, because SIEMs don’t evolve without significant human input as user behavior changes over time, new cloud services are adopted, and new threat vectors are introduced.

Without a threat protection solution built for the cloud, enterprises can suffer data loss when:

  • Malicious or careless insiders download data from a corporate-sanctioned cloud service, then upload it to a shadow cloud file-sharing service (e.g., the Anthem breach of 2015)
  • An employee downloads data onto a personal device, regardless of being on or off-network, at which point control over that data is lost
  • Privileged users of a cloud service (such as administrators) change security configurations inappropriately
  • An employee shares data with a third party, such as a vendor or partner
  • Malware on a corporate computer leverages an unmanaged cloud service as a vector to exfiltrate data stolen from on-premises systems of record
  • A user endpoint device syncs malware to a file sharing cloud service and exposes other users and the corporate network to malware
  • Data in a sanctioned cloud service is lost to an insecure and unmanaged cloud service via an API connection between the two services

However, even the most advanced cloud threat protection technology can be rendered ineffective when it’s not being used to its fullest potential. Below are some of the proven best practices and must-haves when implementing a cloud threat protection solution.

  1. Focus on multi-dimensional threats, not simple anomalies – a user logs in from a new IP address, downloads a higher-than-average volume of data, or changes a security setting within an application. In isolation, these are anomalies but not necessarily indicative of a security threat. Focus first on threats that combine multiple indicators and anomalies together, providing strong evidence that an incident is in progress (see the sketch after this list).
  2. Start with machine-defined models, then refine – aside from accuracy limitations, it’s difficult to get started with threat protection by configuring detailed rules with thresholds for which you have no context. Start with unsupervised machine learning – that is software that analyzes user behavior and automatically begins detecting threats. Augment with feedback later to fine tune threat detection and reduce false positives.
  3. Monitor all cloud usage for shadow and sanctioned apps – cloud activity within a single service might appear routine, because threats are often signaled only by multiple activities across services. Correlate activity across other apps and a pattern will start to appear if a threat is in motion. That’s why it is important to start with visibility into both sanctioned and unsanctioned cloud services to get the full picture.
  4. Leverage your existing SIEM and SOC workflow – events generated by a cloud threat protection solution should flow into existing SOC/SIEM solutions in real time via a standard feed. This allows security experts to correlate cloud anomalies with on-premises ones, and to integrate cloud threat incident response into the incident response workflows of their existing SOC/SIEM.
  5. Correlate cloud usage with other data sources – looking at a single data source to detect threats is inadequate; it is necessary to bring in additional information for context. That data can include whether the user is logging in through an anonymizing proxy or a Tor connection, or whether her account credentials are for sale on the darknet.
  6. Whitelist low-risk users and known events – a general rule of thumb is to allow the threat protection system to generate only as many threat events as the security team has the bandwidth to follow up on. One way to get there is to tune the system by raising thresholds; another is to whitelist events generated by low-risk (trusted) users. This protects your security team from being inundated with false positives.
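To make practice 1 concrete, here is a deliberately simple, hypothetical sketch of multi-indicator scoring – the event names, weights and threshold are illustrative only, and a real system would learn them from observed behavior (practice 2) rather than hard-code them:

```python
# Hypothetical sketch of multi-dimensional threat scoring: single
# anomalies are weak signals, but a combination of correlated anomalies
# across services crosses the alerting threshold. Event names, weights
# and the threshold are illustrative, not from any real product.
WEIGHTS = {
    "new_ip_login":            1.0,  # logged in from a never-seen IP
    "bulk_download":           2.0,  # downloaded far more data than usual
    "shadow_app_upload":       3.0,  # upload to an unsanctioned service
    "security_setting_change": 2.0,  # privileged configuration change
}
ALERT_THRESHOLD = 4.0  # tune against analyst bandwidth (practice 6)

def score(user_events):
    """Sum the weights of all anomalies observed for one user."""
    return sum(WEIGHTS.get(event, 0.0) for event in user_events)

# One anomaly in isolation: below threshold, no alert.
print(score(["new_ip_login"]))                          # 1.0
# Correlated pattern across services: strong evidence, alert fires.
print(score(["new_ip_login", "bulk_download",
             "shadow_app_upload"]))                     # 6.0
```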


Three Lessons From the San Francisco Muni Ransomware Attack

December 22, 2016

By Laurie Kumerow, Consultant, Code42

On Black Friday, a hacker hit San Francisco’s light rail agency with a ransomware attack. Fortunately, this story has a happy ending: the attack ended in failure. So why did it raise the hairs on the back of our collective neck? Because we fear that the next time a critical infrastructure system is attacked, it could just as easily end in tragedy. But it doesn’t have to – if organizations with Industrial Control Systems (ICS) heed three key lessons from San Francisco’s ordeal.

First, let’s look at what happened: On Friday, Nov. 25, a hacker infected the San Francisco Municipal Transportation Agency’s (SFMTA) network with ransomware that encrypted data on 900 office computers, spreading through the agency’s Windows systems. As a precautionary measure, the third party that operates SFMTA’s ticketing system shut down payment kiosks to prevent the malware from spreading. Rather than stop service, SFMTA opened the gates and offered free rides for much of the weekend. The attacker demanded a 100 Bitcoin ransom, or around $73,000, to unlock the affected files. SFMTA refused to pay since it had a backup system. By Monday, most of the agency’s computers and systems were back up and running.

Here are three key lessons other ICS organizations should learn from the event, so they’re prepared to derail similar ransomware attacks as deftly:

  1. Recognize you are increasingly in cybercriminals’ crosshairs. Cyberattacks on industrial control systems, which run public and private infrastructure such as electrical grids, oil pipelines and water systems, are on the rise. In 2015, the U.S. Industrial Control Systems Cyber Emergency Response Team (ICS-CERT) responded to 20% more cyber incidents than in 2014. And for the first time since the agency started tracking reported incidents in 2009, the critical manufacturing sector experienced more incidents than the energy sector. Critical manufacturing organizations produce products like turbines, generators, primary metals, commercial ships and rail equipment that are essential to other critical infrastructure sectors.
  2. Keep your IT and OT separate. Thankfully, the San Fran Muni ransomware attack never went beyond SFMTA’s front-office systems. But, increasingly, cybercriminals are penetrating control systems through enterprise networks. An ICS-CERT report noted that while the 2015 penetration of OT systems via IT systems was low at 12 percent of reported incidents, that figure represented a 33 percent increase from 2014. Experts say the solution is to adopt the Purdue Model, a segmented network architecture with separate zones for enterprise, manufacturing and control systems (see the sketch after this list).
  3. Invest in off-site, real-time backup. SFMTA was able to recover the encrypted data without paying the ransom because it had a good backup system. That wasn’t the case for the Lansing (Michigan) Board of Water & Light: when its corporate network suffered a ransomware attack in April 2016, the municipal utility paid $25,000 in ransom to unlock its accounting system, email service and phone lines.
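For lesson 2, here is a toy sketch of the segmentation idea behind the Purdue Model – the zone names and ordering are illustrative, and a real deployment enforces this with firewalls between zones rather than application code:

```python
# Toy sketch of Purdue Model-style segmentation: traffic may only flow
# between adjacent zones in the hierarchy, so a compromise of the
# enterprise (IT) network cannot reach the control (OT) layer directly.
# Zone names and the adjacency rule are illustrative simplifications.
ZONES = ["enterprise_it", "dmz", "manufacturing_ops", "control_systems"]

def flow_allowed(src, dst):
    """Permit only hops between adjacent zones in the hierarchy."""
    return abs(ZONES.index(src) - ZONES.index(dst)) <= 1

print(flow_allowed("enterprise_it", "dmz"))              # True
print(flow_allowed("enterprise_it", "control_systems"))  # False: blocked
```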

If San Francisco’s example isn’t enough to motivate ICS organizations to take cybersecurity seriously, then Booz Allen Hamilton’s 2016 Industrial CyberSecurity Threat Briefing should do the trick. It includes dozens of cyber threats to ICS organizations.

Adding Up the Full Cost of a Data Breach

December 19, 2016

By Nigel Hawthorn, Skyhigh Networks, EMEA Marketing Director

Data breaches are happening all the time; often they hit the news for a short while and then are replaced with the latest list of victims. So we thought we’d review a data breach from a year ago and look back at the total cost to the company involved. The breach took place in October 2015, when a UK service provider (TalkTalk) was the victim of a DDoS attack and a SQL injection that extracted the data.

Background
TalkTalk suffered a data breach in October 2015 resulting in the theft of personal data. Full details of the loss are available in other articles, so there’s no need to go into the technical details here.

There was a huge amount of publicity in the UK; during the first few days, the situation and the amount of data lost were not clear. In the end, 156,959 sets of personal details were stolen, and 15,656 of these included bank account details. The company contacted each of its customers to reassure them and provided a free credit-monitoring subscription for a year in case other data had also been lost and was misused.

In its following financial results, the company admitted to lost customers, direct costs to the business of £60 million and a revenue drop of £80 million. A subsequent review of the total market showed that TalkTalk had lost 4.4% of market share.

One year later, in October 2016, TalkTalk was fined £400,000 by the Information Commissioner’s Office (ICO) for the incident. The fine is the highest ever imposed by the ICO, with TalkTalk’s lack of cybersecurity cited as the reason for the amount. The Information Commissioner, Elizabeth Denham, said that TalkTalk’s “failure to implement the most basic cybersecurity measures allowed hackers to penetrate systems with ease”. While in the eyes of some the fine may seem high, it’s only £2.50 per impacted customer.


This breach can be examined further and there are key lessons all businesses should learn.

  1. The total cost of a data breach isn’t always obvious
    While the £400,000 fine is substantial, it’s really just the tip of the iceberg in terms of how much the data breach actually cost. There were many other financial repercussions that, for some firms, might have been fatal: an 11 percent drop in share price, plus the loss of 101,000 existing customers and of potential future ones. All in all, once remediation costs are included, TalkTalk calculated that the breach cost it more than £80 million. That’s hardly pocket change.
  2. Acquisitions and demergers affect cyber risk
    When Carphone Warehouse purchased the UK subsidiary of Tiscali, the business was merged with TalkTalk, which it also owned at the time. Following the data breach, the ICO’s investigation revealed that the hackers had gained access to the customer database through vulnerable web pages that had belonged to Tiscali. When companies join or split, the impact on IT systems must be managed, regardless of how insignificant those systems may seem. Systems will have different parentage, which can affect how well a cybersecurity solution or process works, leaving potential access points unguarded.
  3. Patching and updating can mitigate some of the risks caused by aging systems
    It’s no great surprise that older systems are more vulnerable to cyber attacks than newer ones. Yet some businesses continue to rely on aging systems without patching or updating them, making things even easier for cybercriminals. The targeted Tiscali web pages had not been patched for three and a half years, and the backend database was no longer supported by its supplier. When you consider the rapid pace of cyber threat evolution, that’s the equivalent of leaving the windows and doors open. Businesses must ensure they patch on a regular basis and set aside time for major updates.
  4. Warnings and red flags should be investigated
    TalkTalk has faced, and will continue to face, scrutiny for its handling of the debacle, but one of the biggest criticisms is that it did not investigate numerous warnings that something was wrong. While it was the October 2015 data breach that made these particular headlines, TalkTalk customers had already fallen victim to scams stemming from a previous breach, and the regulator’s investigation found there had been two SQL injection attacks in the preceding three months – but TalkTalk was not monitoring those particular web pages. Whether the company ignored the warnings or was simply ignorant of them, businesses should investigate any sign that an issue exists. This includes red flags generated by cybersecurity systems: almost a third of companies suffer from alert fatigue – a product of sheer alert frequency and numerous false positives – and do not investigate.
  5. Communication plans are essential
    How a company communicates a data breach is vital in mitigating the potential damage to its reputation. If customer data has been compromised, customers need to be made aware of it – and the need is even more pressing if bank details were taken. To ensure all stakeholders are reassured that the situation is being handled, firms must have a communication plan, including draft email, letter and script templates, in place so statements can be issued immediately. Unfortunately, TalkTalk’s initial responses fanned the flames, due in part to lack of preparation as well as slow identification of the total data loss. While companies must be proactive with their communications, they must also have the necessary resources to deal with customers calling in: TalkTalk customers faced long hold times when ringing to find out more information, compounding anger further.
  6. EU GDPR will increase fines
    The ICO’s fine is a record amount, but TalkTalk is fortunate that the breach took place before the EU GDPR comes into force in May 2018. The new regulation will see potential fines increase to four percent of global turnover or €20 million, whichever is higher. In TalkTalk’s case this could mean a fine of around £73 million (four percent of an annual revenue of roughly £1.8 billion) – roughly the same amount as its profit in its last financial year.
  7. EU GDPR enforces disclosure
    The GDPR demands disclosure of all incidents involving the loss of unencrypted data. Any company that experiences data loss, regardless of whether it is the company’s fault or a third party’s, will have 72 hours to disclose it to the regulators and must inform data subjects “without delay”. Being able to investigate data transfers and monitor cloud use will therefore become essential.
  8. Cybersecurity is a boardroom issue
    If a company were to take only one lesson away from TalkTalk’s breach, it’s that data is now the crown jewels of any business. Not only will it help drive sales and growth, but mishandling it can lead to severe fines and even closure. It needs to be treated with the utmost respect and that means understanding that cybersecurity is now a boardroom discussion. For too long it has been considered the remit of IT but, with so many areas where a business can become vulnerable, it must now be an enterprise-wide endeavour.


Cyber Insurance Against Phishing? There’s a Catch

December 15, 2016

By Jeremy Zoss, Managing Editor, Code42

If one of your employees gets duped into transferring money or securities in a phishing scam, don’t expect your cyber insurance policy to cover it. Even your crime policy won’t cover it unless you purchase a specific social engineering endorsement. Many companies have learned this the hard way, suing their insurance carriers with little luck.

Aqua Star, a New York seafood importer, expected to be covered after a spoofed email from a supplier led an employee to change the supplier’s bank account details, causing Aqua Star to wire more than $700,000 to a hacker instead of the supplier. Aqua Star had a crime policy through Travelers, which included Computer Fraud coverage applying to losses caused by the fraudulent entry of electronic data into any computer system owned, leased or operated by the insured. But when Aqua Star filed the claim, Travelers pointed to an exclusion for data entered by an authorized user. Aqua Star sued Travelers, but the court agreed with the insurer, ruling that the employee was clearly an authorized user.

A similar phishing scam resulted in Apache Corp., an oil and gas producer, wiring $2.4 million to cybercriminals. Its insurance company, Great American, denied the payout, so Apache went to district court and won. However, Great American appealed to a higher court, which reversed the decision, saying the bogus email didn’t directly cause the loss.

What commercial cyber insurance policies do cover
Cyber insurance policies cover losses that result from unauthorized data breaches or system failures. But they vary greatly in the details and exceptions. Most will cover forensic investigation fees, monetary losses caused by network downtime, data loss recovery fees, costs to notify affected parties and manage a crisis, legal expenses, and regulatory fines.

When it comes to ransomware, you need to look closely at the policy’s Cyber Extortion coverage. If it offers only third-party coverage, then ransomware isn’t covered.

Crime insurance policies cover losses that result from theft, fraud or deception. But as the Aqua Star and Apache examples illustrate, insurers typically deny coverage for social engineering fraud, claiming that the loss didn’t result from “direct” fraud. Insurers contend that the crime policy applies only if a cybercriminal penetrates the company’s computer system and illegally takes money out of company coffers.

Some crime policies also contain a “voluntary parting” exclusion that specifically bars social engineering claims by barring coverage for losses that arise out of anyone acting with authority who voluntarily gives up title to, or possession of, company property.

Fishing for a solution? Add an endorsement
Many insurance companies offer a social engineering fraud endorsement, like this one from Chubb. It’s offered under a crime policy for a nominal additional premium. The coverage, sometimes referred to as an impersonation fraud or fraudulent instruction endorsement, is typically up to $250,000 per occurrence, with no annual aggregate, but higher limits are available for a higher premium.

The net lesson: a phishing endorsement is an easy fix to a potentially costly oversight.

Standardizing Cloud Security with CSA STAR Certification

December 14, 2016

By Tolga Erbay, Senior Manager, Security Risk and Compliance, Dropbox

In early 2014 Dropbox joined the Cloud Security Alliance (CSA). Working with the CSA is an important part of Dropbox’s commitment to security and transparency.

In June of 2014, Dropbox completed Level 1 (Self-Assessment) through STAR, the CSA’s publicly available registry, which documents how Dropbox’s security practices measure up to industry-accepted standards and the CSA’s best practices. Building on its Level 1 Self-Assessment, Dropbox recently announced CSA STAR Level 2 Certification, which attests to its security controls and processes.

“Dropbox continuously proves to be at the forefront of compliance standards,” said Jim Reavis, co-founder and CEO of the Cloud Security Alliance (CSA). “With rigorous independent auditing and certification for both well-accepted and up-and-coming standards, they’re demonstrating an impressive dedication to their customers’ security. We’re excited to have Dropbox on the short list of companies that have achieved our Security, Trust & Assurance Registry (STAR) Level 2 Certification.”

Dropbox is dedicated to building trust with its customers across the globe, and helping them fit Dropbox into their compliance strategies. Dropbox is proud to work closely with the CSA to establish open and transparent cloud security best practices within the industry. Dropbox strives to stay ahead of the curve as new standards and certifications are introduced and will continue to partner with the CSA to support research and education in key cloud security areas.

Standards such as CSA STAR certification underscore Dropbox’s commitment to keeping customer data safe, operating at the highest levels of availability, and maintaining transparency in data storage and processing. And they demonstrate Dropbox’s leadership in the SaaS industry, as Dropbox is one of the first major providers to achieve CSA STAR certification. Dropbox is excited to make continued strides with these compliance milestones.

IBM Touts Major Mac Cost Savings; IT Professionals Still Hesitant

December 9, 2016

By Lance Logan, Manager/Global Marketing Program, Code42

For the second year in a row, IBM’s Fletcher Previn wowed the audience at the JAMF user conference with impressive statistics on how the company’s growing Mac-based workforce is delivering dramatic and measurable business value.

IBM expects Macs to save $26M in IT costs over four years
Big Blue says each Mac device will save it at least $265 over a four-year lifespan (and up to $535, depending on model) versus comparable PCs. With IBM’s Mac workforce at 90,000 (and adding 1,300 Mac users per month), that adds up to more than $26 million in savings over the next four years—a huge margin. Simpler IT support and a high level of user self-service drive the bulk of these savings. IBM reports that just 3.5 percent of its Mac users currently call the help desk, compared to 25 percent of its PC users. This enables IBM to support 90,000+ Mac users (and 217,000 Apple device users) with just 50 IT employees.

It’s not just IT cost savings driving Mac adoption among big names in business tech. Deloitte calls iOS “the most secure platform for business” and says “Apple’s products are essential to the modern workforce.” Cisco has also jumped on the Apple bandwagon, believing Apple devices will accelerate productivity. Basic user satisfaction also shouldn’t be ignored, as IBM reports a 91 percent satisfaction rate among Mac users and says its pro-Mac policies help the company attract and retain top talent.

The average enterprise is still hesitant about widespread Mac deployment
It’s one thing for big-name tech innovators like IBM and Cisco to proclaim the promise of Macs in the enterprise, but what’s happening across the rest of the enterprise landscape? Code42 recently conducted a survey on Mac deployment among our diverse business contacts, and the results tell a less enthusiastic story.

Macs have a major—and growing—presence in the modern enterprise
Among Code42’s enterprise contacts, one-third (33.6%) have more than 500 Mac users and one in five (22.8%) have 1,000+ Mac users. These numbers further demonstrate that the modern enterprise is supporting OS diversity with a substantial Mac-based workforce—and we fully expect these numbers to grow in the coming years.

User preference—not business value—still drives most Mac adoption
While IBM and others put total cost of ownership, security and productivity as top reasons for Mac adoption, our results show user preference continues to be the main reason that enterprises are embracing Macs today.

Top reasons for Mac adoption
1. Happier end users (37%)
2. Fewer help desk tickets (14%)
3. Better OS security (12%)

Top IT challenges are Macs’ top strengths
Our survey showed the time-consuming burdens of tech refresh and help desk tickets are the most significant IT challenges associated with end user devices across operating systems, followed by malware/ransomware. These challenges are actually two of Mac devices’ greatest strengths. Macs traditionally enable a much higher level of self-service, and Code42 enables user-driven tech refresh for Mac users (and PC users, too). This level of self-service produces the kind of IT cost savings IBM has seen with its dramatically reduced help desk tickets. For the time being, Macs also continue to be less targeted and less vulnerable to malware and ransomware.

Many IT professionals remain wary of widespread Mac deployment
While our survey showed most enterprises may not be seeing million-dollar IT savings from Mac deployments, they did report a range of definitive benefits. So it’s revealing that one in five respondents said they’re ultimately not big fans of their companies’ Mac adoption.

Realizing advantages of Macs in the enterprise requires preparation, time
Supporting a large Mac-based workforce isn’t as simple as flicking a switch or changing a policy. It requires substantial changes to technology infrastructure and processes to make sure everything from calendars to apps to backup work seamlessly across both Mac and PC users. This often leaves IT stuck in the middle of user preferences and resource realities: Users want Macs, but IT needs the time—and the budget—to put the tools and processes in place to support a hybrid workforce.

But with IBM’s results ringing in the ears of the business world, more and more companies of every size and in every industry are sure to begin exploring the benefits of a larger Mac-based workforce. The best strategy for IT leaders is to act now to get ahead of this inevitable shift. Start examining your infrastructure to find the holes in Mac compatibility, and seek out technology partners that build solutions for this modern hybrid device environment.

Or, as IBM’s Previn put it, “Give employees the devices they want, manage those devices in a modern way, and drive self sufficiency in the environment.”

To learn more about how endpoint backup can protect the data on enterprise Macs, download the market brief Securing & Enabling the Mac-Empowered Enterprise.

DevOpsSec, SecDevOps, DevSecOps: What’s in a Name?

December 5, 2016

By Jamie Tischart, CTO Cloud/SaaS, Intel Security

The world is awash in DevOps, but what does that really mean? Although DevOps can mean several things to different individuals and organizations, ultimately it is about the cultural and technical changes that occur to deliver cloud services in a highly competitive environment.

Cultural changes come in the form of integrating teams that historically have been disparate around a single vision. Technical changes come with automating as much of the development, deployment, and operational environment as possible to more rapidly deliver high-quality and highly secure code.

This is where I believe the DevOps debate becomes cloudy (sorry for the pun). As is normal in engineering endeavors, we often forget the purpose or the problem we are trying to solve and instead get mired in the details of the process or the tool. We tend to lose sight of the fact that bringing DevOps together has the purpose of solving how to more rapidly deliver higher-quality, more secure products to our customers, so they can solve their problems and we can stay ahead of our competitors.

I find it interesting that there was never much debate over whether “DevOps” or “OpsDev” should be the coined term, yet adding security to the mix has produced three competing terms: DevSecOps, SecDevOps, and DevOpsSec. At first I didn’t give it much thought; I figured that over time the industry would converge on a standard and we would move on our merry way toward that difficult goal of high-quality, highly secure continuous deployment of cloud services. Then I looked closer and thought that there might be something to these three nomenclatures – they highlight the different challenges security faces in integrating into the software development lifecycle.

Let’s talk about the general purpose of including security in DevOps practices. Security was often an assumed part of the development and testing process, one to which few people paid attention. Or security was an afterthought that slowed down the development process and release cycle – executed by some other team, requiring fixes to obscure vulnerabilities that would never be found or leveraged for harm.

That entire mindset, while flawed, worked reasonably well in the world of single-tenant application development where a 12-month release cycle was the norm and applications were deployed behind several layers of security appliances. This all changed when we started delivering multi-tenant cloud offerings where any vulnerability could put millions of customers and the reputation of our companies at risk. Yet, we still held onto some of these archaic practices. We were slow to integrate secure coding and testing practices into our everyday engineering execution. We continued to leave security activities until the end of cycles and we left many vulnerabilities unattended because it slowed the release. This was until, of course, someone exploited the vulnerability and then everyone dropped everything and all hell broke loose.

Integrating Security into DevOps
Integrating security into DevOps practices is the goal to alleviate these problems. It is the way to continuously evolve security through automated techniques and to achieve our goal of rapidly delivered high-quality, highly secure products. This brings me back to the different terms for integrating security into the DevOps movement and how each organization needs to determine how security is integrated.

Let’s first look at DevOpsSec. Consider the order, and how it implies that security still comes at the end of the process. Maybe I am just being paranoid, but this is a practice we need to curtail; instead we should embed security into every aspect of the lifecycle. If we expound on that a bit and take it literally (and maybe we shouldn’t), the team will complete development, deploy and operate, and only then review security. If this is done in small increments and completed rapidly, it is still a massive improvement over the end-game security testing we have seen in the past. However, it may still expose vulnerabilities in cloud production environments and require reversion or patching that could have been completed beforehand.

Next let’s review SecDevOps. This would imply that the security activities occur before any development or operations. I am not sure that this is truly practical, although it is certainly a well-intentioned principle and has merits that should be incorporated into the DevOps practice. My interpretation of this is that new requirements/user stories/features – whatever your method – include security requirements in the development. If we take this to the next step, then these security requirements would have automated tests created and added to the automation suites so they can run continuously to ensure that security is inclusive throughout the cycle. Hmm, this sounds pretty good…

The last one is DevSecOps. Literally, you can expand this to completing development, then reviewing and automating for security, and then deploying and operating. This articulation hopes to catch security concerns before they are deployed to the world, but it is not as incorporated into the overall process as SecDevOps. Certainly DevSecOps has the benefit of focusing on security before introducing a vulnerability into the wild, but it is not security-focused in every activity.

Maybe I am taking it too literally, but maybe what we need is SecDevSecOpsSec. Here, security is a continuous activity in itself that needs to be incorporated into all stages of the product lifecycle. However, that is quite a mouthful…
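To put that mouthful in concrete terms, here is a toy sketch of what a “SecDevSecOpsSec” lifecycle might look like – the stages and activities are illustrative, not a real CI/CD tool:

```python
# Toy sketch of the "SecDevSecOpsSec" idea: security is not one stage
# bolted on at the end, but a recurring activity woven through the
# pipeline. Stage names and activities are illustrative only.
PIPELINE = [
    ("sec", "threat-model the new feature; write security requirements"),
    ("dev", "implement the feature against secure-coding standards"),
    ("sec", "run static analysis and automated security tests in CI"),
    ("ops", "deploy to production through the automated pipeline"),
    ("sec", "continuously monitor, scan, and detect anomalies"),
]

# Walk the lifecycle: security bookends and interleaves every stage.
for phase, activity in PIPELINE:
    print(f"[{phase.upper():>3}] {activity}")
```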

The important thing is that when your organization is approaching DevOps, don’t forget the security aspect. Think about how you are going to integrate security into every aspect of your lifecycle. As for which term to utilize, I am going to standardize on SecDevOps. Integrating security at the start has the best of intentions and will lead to the most secure practices.