Your Morning Security Spotlight: Apple, Breaches, and Leaks

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

– Apple’s High Sierra has massive vulnerability
– Survey says all firms suffered a mobile cyberattack
– Morrisons liable for ex-employee leaking data
– S3 misconfiguration leaks NCF customer data
– Imgur reports 2014 breach of 1.7 million credentials

Apple’s High Sierra has massive vulnerability
Apple’s latest operating system, High Sierra, was found to have a massive vulnerability: by typing the username “root” and leaving the password blank, anyone could gain access to a device running the operating system, opening a path to steal data or install malicious software.

Survey says all firms suffered a mobile cyberattack
In Check Point’s survey of 850 businesses around the world, all were found to have experienced a mobile cyberattack. This demonstrates the dangers of enabling unsecured BYOD and mobile data access. Additionally, the report contains surprising statistics on mobile malware, man-in-the-middle attacks, and more.

Morrisons liable for ex-employee leaking data
The supermarket chain Morrisons was recently found liable for a breach caused by an ex-employee in 2014. In 2015, the employee was sentenced to eight years in jail for maliciously leaking the payroll data of 100,000 fellow employees. However, Morrisons will now be held responsible, as well.

S3 misconfiguration leaks NCF customer data
The National Credit Federation (NCF) is reported to have leaked sensitive data belonging to tens of thousands of its customers. The information, which included bank account numbers and scans of Social Security cards, was leaked through an Amazon S3 misconfiguration that allowed complete public access to certain data.
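
This kind of exposure can also be audited from inside the account. As a minimal illustrative sketch (not drawn from the NCF story), the following Python script uses boto3, assuming AWS credentials are already configured, to flag buckets whose ACLs grant access to everyone:

```python
import boto3  # assumes AWS credentials are already configured

# ACL grantee URIs that expose a bucket to everyone or to any AWS account.
PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    acl = s3.get_bucket_acl(Bucket=name)
    for grant in acl["Grants"]:
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in PUBLIC_GROUPS:
            print(f"PUBLIC: {name} grants {grant['Permission']} via {grantee['URI']}")
```

A scan like this covers only bucket-level ACLs; bucket policies and object-level ACLs would need similar checks.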

Imgur reports 2014 breach of 1.7 million credentials
Imgur recently discovered that it suffered a breach in 2014 that compromised 1.7 million users’ email addresses and passwords. The attack serves as an example of the fact that breaches (and ongoing data theft) can take years to detect.

Clearly, organizations that fail to protect their sensitive information will suffer the consequences. Learn how to achieve comprehensive visibility and control over data by reading the solution brief for the Next-Gen CASB.

Electrify Your Digital Transformation with the Cloud

By Tori Ballantine, Product Marketing, Hyland

Taking your organization on a digital transformation journey isn’t just a whimsical idea, something fun to daydream about, or an initiative that “other” companies probably have time to implement. It’s something that every organization needs to seriously consider. If your business isn’t digital, it needs to be in order to remain competitive.

So if you take it as a given that you need to embrace digital transformation to survive and thrive in the current landscape, the next logical step is to look at how the cloud fits into your strategy. Because sure, it’s possible to digitally transform without availing yourself of the massive benefits of the cloud. But why would you?

Why would you intentionally leave on the table what could be one of the strongest tools in your arsenal? Why would you take a pass on the opportunity to transform – and vastly improve – the processes at the crux of how your business works?

Lightning strikes
In the case of content services, including capabilities like content management, process management and case management, cloud adoption is rising by the day. Companies with existing on-premises solutions are considering the cloud as the hosting location for their critical information, and companies seeking new solutions are looking at cloud deployments to provide them with the functionality they require.

If your company was born in the digital age, it’s likely that you inherently operate digitally. If your company was founded in the time before, perhaps you’re playing catch up.

Both types of companies can find major benefits in the cloud. Today, data is created digitally, natively, but there is still paper that needs to be brought into the digital fold. Digitizing information, however, is just a small part of digital transformation. To truly take information management to the next level, the cloud offers transformative options that just aren’t available in a premises-bound solution.

People are overwhelmingly using the cloud in their personal lives, according to AIIM’s State of Information Management: Are Businesses Digitally Transforming or Stuck in Neutral? Of those polled, 75 percent use the cloud in their personal lives and 68 percent report that they use the cloud for business. That’s three-quarters of respondents!

When we look at the usage of cloud-based solutions in areas like enterprise content management (ECM) and related applications, 35 percent of respondents leverage the cloud as their primary content management solution; for collaboration and secure file sharing; or for a combination of primary content management and file sharing. These respondents are deploying these solutions either exclusively in the cloud or as part of hybrid on-prem/cloud solutions.

Another 46 percent are migrating all their content to the cloud over time; planning to leverage the cloud but haven’t yet deployed; or are still experimenting with different options. They are in the process of discerning exactly how best to leverage the power of the cloud for their organizations.

And only 11 percent have no plans for the cloud. Eleven percent! Can your business afford to be in that minority?

More and more, the cloud is becoming table stakes in information management. Organizations are growing to understand that a secure cloud solution can not only save them time and money but also provide stronger security features, better functionality and larger storage capacity.

The bright ideas
So, what are some of the ways that leveraging the cloud for your content services can digitally transform your business?

  • Disaster recovery. When your information is stored on-premises and calamity strikes — a fire, a robbery, a flood — you’re out of luck. When your information is in the cloud, it’s up and ready to keep your critical operations running.
  • Remote access. Today’s workforce wants to be mobile, and they need to access their critical information wherever they are. A cloud solution empowers your workers by granting them the ability to securely access critical information from remote locations.
  • Enhanced security. Enterprise-level cloud security has come a long way and offers sophisticated protection that is out of reach for many companies to manage internally.

Here are other highly appealing advantages of cloud-based enterprise solutions, based on a survey conducted by IDG Enterprise:

  • Increased uptime
  • 24/7 data availability
  • Operational cost savings
  • Improved incident response
  • Shared/aggregated security expertise of vendor
  • Access to industry experts on security threats

Whether you’re optimizing your current practices or rethinking them from the ground up, these elements can help you digitally transform your business by looking to the cloud.

Can you afford not to?

AWS Cloud: Proactive Security & Forensic Readiness

This post kicks off a series examining proactive security and forensic readiness in the AWS cloud environment. 

By Neha Thethi, Information Security Analyst, BH Consulting

In a time when cyber-attacks are growing in magnitude and frequency, being prepared for a security incident is paramount. This is especially crucial for organisations adopting the cloud for storing confidential or sensitive information.

This blog is an introduction to a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment.

Cyber-attack via third party services
A number of noteworthy information security incidents and data breaches have come to light recently that involve major organisations being targeted via third-party services or vendors. Such incidents are facilitated in many ways, such as a weakness or misconfiguration in the third-party service or, more commonly, a failure to implement or enable existing security features.

For example, it has been reported that several data breach incidents in 2017 occurred as a result of an Amazon S3 misconfiguration. Additionally, the recent data breach incident at Deloitte appears to have been caused by the company’s failure to enable two-factor authentication to protect a critical administrator account in its Azure-hosted email system.
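
Both failure modes are straightforward to check for programmatically. As a hedged sketch (assuming boto3 and credentials with IAM read permissions; this is illustrative, not part of the incidents above), the following checks whether the account root user has MFA enabled and lists IAM users without an MFA device:

```python
import boto3  # assumes AWS credentials with IAM read permissions

iam = boto3.client("iam")

# Root account MFA status is exposed via the account summary.
summary = iam.get_account_summary()["SummaryMap"]
if not summary.get("AccountMFAEnabled"):
    print("WARNING: root account has no MFA device enabled")

# Flag IAM users with no MFA device registered.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        devices = iam.list_mfa_devices(UserName=user["UserName"])["MFADevices"]
        if not devices:
            print(f"User without MFA: {user['UserName']}")
```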

Security responsibility
Many of our own customers at BH Consulting have embraced the cloud, particularly Amazon Web Services (AWS). Worldwide cloud IT infrastructure revenue is estimated to have almost tripled in the last four years, and AWS remains the dominant market leader, with an end-of-2016 revenue run rate of more than $14 billion. It owes its popularity to its customer focus, rich set of functionalities, pace of innovation, partner and customer ecosystem, and implementation of secure and compliant solutions.

AWS provides a wealth of material, and various specialist partners, to help customers enhance security in their AWS environment. A significant part of these resources is the shared responsibility model, which helps customers understand their security responsibilities based on the service model being used (infrastructure as a service, platform as a service or software as a service).

Figure 1: AWS Shared Responsibility Model

When adopting third-party services, such as AWS, it is important that customers understand their responsibility for protecting data and resources that they are entrusting to these third parties.

Security features
AWS provides numerous security measures; however, awareness of the relevant security features and their appropriate configuration is key to taking full advantage of them. There may be useful and powerful features that a customer is unaware of. It is the customer’s responsibility to identify the potential features and determine how best to leverage each one, if at all.

Five-part best practice checklist
The blog series will offer the following five-part best practice checklist for proactive security and forensic readiness in the AWS cloud:

  1. Identity and Access Management in AWS
  2. Infrastructure Level Protection in AWS
  3. Data Protection in AWS
  4. Detective Controls in AWS
  5. Incident Response in AWS

Stay tuned for further installments.

Four Important Best Practices for Assessing Cloud Vendors

By Nick Sorensen, President & CEO, Whistic

When it comes to evaluating new vendors, it can be challenging to know how best to communicate the requirements of your vendor assessment process and ultimately select the right partner to help your business move forward — while at the same time avoiding the risk of a third-party security incident. After all, 63 percent of data breaches are linked to third parties in some way. In fact, we all recently learned that an Equifax vendor was serving up malicious code on Equifax’s website in a newly discovered security incident.

The Whistic team has done thorough research on what a good vendor assessment process looks like and how to keep your organization safe from third party security threats. In the following article, we’ll outline a few of these best practices that your organization can follow in order to improve your chances of a successful vendor review. Of course, there will still be situations that you must address in which a vendor is either not prepared to respond to your request or isn’t willing to comply with your process. However, we’ll share some tips for how to best respond to these situations, too.

But before we get started, keep these three keys in mind:

  1. Time your assessments: The timing of the assessment is the single greatest source of leverage you have in getting a vendor to respond. Keep in mind that aligning your review with a new purchase or contract renewal is key.
  2. Alert the vendor ASAP: The sooner a vendor is aware of a review, the better. Plan ahead, engage early, and get executive buy-in from your team to hold vendors accountable to your policy. If your business units understand that you have a policy requirement to review every new vendor, they can help set expectations during the procurement process and eliminate last-minute reviews.
  3. Don’t overwhelm your vendors: Unnecessary questions or requests for irrelevant documentation can slow the process down significantly. Be sure to revisit your questionnaire periodically and identify new ways to customize questions based on vendor feedback. You may find after conducting several security reviews that there are ways to improve the experience for both parties.

Personalize the Communication
At Whistic, we’ve had a front row seat to the security review processes of companies all across the world and a wide range of use cases. We’ve seen firsthand how much of a difference personalized communication can make in creating a more seamless process for all involved, especially third party vendors who are or hope to be trusted partners to your business.

With this in mind, we strongly recommend sending a personalized email to each vendor when initiating a new questionnaire request to supplement the email communication that they will receive from any software you utilize. This can help alleviate concerns the vendor may have about the assessment process and should help to improve turnaround times on completed questionnaires. Even with the automated communication support from a third party security platform, the best motivator for your vendor to complete your request may be a friendly reminder from you or the buyer that the sales process is on hold until they complete the assessment.

Deliver Expectations Early
Assuming that your vendor already understands that you are going to need to complete a security review on them, the best time to help them understand your expectations is either right before or right after you initiate a request via your third party security platform.

When doing so, keep the following in mind as you have a phone call or draft an email to your vendor to introduce the vendor assessment request:

  • Set The Stage: Let your vendor know about the third party security platform that your organization uses and that it is the required method for completing your security review process.
  • Give Clear Direction: Specify a clear deadline and any specific instructions for completing the entire security review — not just the questionnaire.
  • Provide Resources: Provide information for the best point of contact who can answer questions they may have throughout the process. It’s also a good idea to let them know that your third party security platform may reach out if they aren’t making progress on their vendor assessment.

Utilize an Email Template
Whether you use a customized template created by your team or a predefined template (such as the one Whistic provides to its customers), it’s worth spending a few minutes upfront to standardize the communication process. This will save you time in the long run and allow you to deliver a consistent message to each of your vendors.

Respond to Vendor Concerns
It isn’t uncommon for vendors, particularly account executives, to try to deflect a security review, as they know it has the potential to delay the sales/renewal process. They may also have questions about sharing information through a third party security platform as opposed to emailing that information to you. We know from experience how frustrating this can be for all involved, so below are two tips for handling pushback:

  • Preparation: If you are getting repeated pushback from vendors, review the “Keys to Success” outlined at the beginning of this article and explore additional ways to adopt those best practices.
  • Complexity, Relevance, and Length: These items can be among the reasons vendors complain about your security review process. Consider periodically revisiting your questionnaire and adding filter logic to limit the number of questions asked of each vendor or to make the question sets more relevant to the vendor that is responding.

These are just a few things to consider as you look to assess your next cloud vendor. What else have you found helpful as you have approached this responsibility at your company?

 

Your Morning Security Spotlight

By Jacob Serpa, Product Marketing Manager, Bitglass

The top cybersecurity stories of the week revolved around malware and breaches. Infections and data theft remain very threatening realities for the enterprise.

400 Million Malware Infections in Q3 of 2017
In the last few months, malware has successfully infected hundreds of millions of devices around the world. As time passes, threats will continue to become more sophisticated, effective, and global in reach. To defend themselves, organizations must remain informed about current malware trends.

Fileless Attacks Are on the Rise
It is estimated that 35 percent of all cyberattacks in 2018 will be fileless. This kind of attack occurs when users click on unsafe URLs that run malicious scripts through Flash, for example. Rather than rely solely on security measures that only monitor for threatening files, the enterprise should adopt solutions that can defend against zero-footprint threats.

Terdot Malware Demonstrates the Future of Threats
The Terdot malware, which can surveil emails and alter social media posts in order to propagate, is serving as an example of the evolution of malware. More and more, threats will include reconnaissance capabilities and increasing sophistication. Hackers are looking to refine their methods and contaminate as many devices as possible.

Spoofed Black Friday Apps Steal Information and Spread Malware
In their rush to buy discounted products, many individuals are downloading malicious applications that masquerade as large retailers offering Black Friday specials. As information is stolen from affected devices and malware makes its way to more endpoints, businesses that support bring your own device (BYOD) must be mindful of how they secure data and defend against threats.

What to Do in the Event of a Breach
ITPro posted an article on how organizations should respond when their public cloud instances are breached. Rather than assume that cloud app vendors perfectly address all security concerns, organizations must understand the shared responsibility model of cloud security. While vendors are responsible for securing infrastructure and cloud apps themselves, it is up to the enterprise to secure data as it is accessed and moved to devices. As such, remediation strategies vary depending on how breaches occur (compromised credentials versus underlying infrastructure being attacked).

Clearly, the top stories from the week were concerned with what can go wrong when using the cloud. To combat these threats, organizations must first understand them. From there, they can adopt the appropriate security solutions. To take the first step and learn more about threats in the cloud, download this report.

IT Sales in the Age of the Cloud

By Mathias Widler, Regional Sales Director, Zscaler

The cloud is associated not only with a change in corporate structures, but also a transformation of the channel and even sales itself. Cloudification makes it necessary for sales negotiations to be held with decision-makers in different departments and time zones, with different cultural backgrounds and in different languages. The main challenge: getting a variety of departments to the negotiating table, and identifying the subject matter expert among many stakeholders.

To communicate with different decision-makers, sales reps must switch quickly from their roles as salespeople to global strategists and account managers. Today’s salespeople sell services, not boxes. They must also explain how the service can benefit the business, instead of simply touting its features.

The new sales process highlights the need for new skills and qualifications in the sales department, as we explain below.

Selling business value
A decade ago, it was important to get a company’s security person excited about new technology during a sales pitch. But the days of simply closing a deal by convincing the responsible person or admin to buy the product are long gone. What is needed today is a holistic winning strategy, which starts by explaining the business advantages of a solution to a potential customer.

Today, the work starts long before the sales person picks up the phone. The pitch must be individually tailored to the current and future business requirements of each organization. True cloud solutions facilitate an integrated implementation of digital transformation processes – providing the foundation for a better user experience, more flexibility, lower costs, and much more. The cloud is sold not as an end in itself, but as a result of the above-mentioned effects. Therefore, the service must be adapted to the requirements of the prospective customer and presented convincingly.

Reaching out to more decision-makers
Besides the CIO, many more stakeholders now need to be brought to the table, including the application-level department, network managers, security contacts, project managers, data protection officers, and potentially the works council. The decision-making processes involved in the purchase of a cloud service are therefore much more complex and protracted. According to a recent CEB report, the average number of decision-makers per project increased by 26 percent in just two and a half years, from 2013 to 2016.

Today, the average number of persons involved in a buying decision is 6.8. A group of stakeholders is no longer as homogeneous as before, and it is much more difficult to reach consensus among a diverse group of senior executives. What is more, in addition to internal decision-makers, external decision-makers can also play a decisive role. This further increases the number of stakeholders and adds to the complexity of the decision-making process.

To reach a consensus, a winning strategy must be acceptable to all decision-makers, whatever their backgrounds. The demands placed on sales have become inherently more complex in the age of the cloud. Salespeople who were used to selling an appliance have to reinvent themselves as strategists who balance conflicting interests and find common ground, in particular with respect to the introduction of the cloud.

Dealing with long sales cycles
CEB points out that the sales process up to closing has been prolonged by a factor of two, as it involves efforts to overcome differences of opinion as well as fine-tuning to reach a consensus. For the project to succeed, departments that have previously made separate decisions now have to come together at the table. To sell a cloud service today, sales professionals must be able to convince the entire buying center that their solution is the right one. It’s helpful if sales people can identify the subject matter expert in a negotiating team, whose vote will ultimately be decisive.

Globalization also means that the salesperson needs to take cultural sensitivities into account. It is no longer a rarity for an IT department of a global corporation to be based in Southern or Eastern Europe due to available expertise and the wage level of the workforce.

At the same time, salespeople should not lose sight of how they can act as catalysts to speed up a decision. Which different types of information do the stakeholders need? Where does leverage come into play to move the team to the next step? What conflicting interests need to be balanced?

Understanding new principles: capex vs opex, SLAs and trust
Before a company can benefit from the much-promised advantages of the cloud, it must rely on the expertise of sales, which makes the value-add clear across the organization. This is all the more important because a cloud service is not as “tangible” as hardware. Trust is built through service level agreements, reference-customer discussions and, where necessary, service credits for non-performance. A portal can provide insight into service-level availability, highlighting the continuous availability of the service or describing service failures.

As capital expenditures (capex) are converted into operating expenses (opex), another issue needs to be made clear with respect to license-based procurement: businesses pay only for their use of the services, which can be adjusted as and when required. Regarding the data protection provisions applicable to the cloud service, consulting with the works council and understanding its concerns is recommended. A contract on data processing establishes the legal framework for cooperation with the cloud provider.

Once the effectiveness of the cloud approach has been demonstrated by a proof of concept, the cloud has basically won. After all, a test environment can be set up within a very short time. The cost of maintaining and updating hardware solutions is thus a thing of the past, which should be a compelling argument for every department from an administrative point of view.

What makes a successful salesperson?
In a nutshell, the sales manager has to convince the customer of the business value of a cloud-based solution – at all levels of the decision-making process. In this context, the personal skills to engage in multi-faceted communication with a wide range of contacts are much more relevant than before.

Emotional intelligence, as well as technical expertise in project management, should also be thrown into the mix. It’s important to take an active role at all levels of the sales process, taking account of the fact that the counterarguments of the prospective customer have to be addressed at various points on the path to digitization.

Project management plays an increasingly important role in the age of the cloud, such as keeping in touch with all stakeholders and monitoring the progress of the negotiations. Even after the project is brought to a successful conclusion, sales has to continue to act as an intermediary, and remain available as a contact to ensure customer satisfaction. This is because services can be quickly activated – and canceled.

For this reason, it’s important in the new cloud era to continue to act as an intermediary and maintain contact with the cloud operations team in the implementation phase. The salesperson of a cloud service is in a sense the account manager, who initiates the relationship and keeps it going.

Days of Our Stolen Identity: The Equifax Soap Opera

By Kate Donofrio, Senior Associate, Schellman & Co.

The Equifax saga continues like a soap opera, Days of Our Stolen Identity.  Every time it appears the Equifax drama is ending, a new report surfaces confirming additional security issues.

On Thursday, October 12, NPR reported that Equifax had taken down part of its website, this time over fraudulent Adobe Flash update pop-ups on the site, initially discovered by an independent security analyst, Randy Abrams.[1]  Did the latest vulnerability mean Equifax continued its inadequate information technology and security practices, even after being breached?  Or is it an even worse possibility, that its machines were never completely remediated after the original breach?

As it turns out, Equifax claimed it was not directly breached again; rather, one of its third-party service providers, responsible for uploading web content to the Equifax site for analytics and monitoring, was at fault.  According to Equifax, the unnamed third-party service provider uploaded the malicious code to the site.  It appears the only thing Equifax has been consistently good at is placing blame and pointing the finger in other directions.

Equifax needs to take responsibility: after all, it hired the service provider, is responsible for validating the provider’s actions within its environment, and still holds overall responsibility for its information.  This is a huge lesson for any company that attempts to pass blame to a third party.

For those who have not been keeping track, below is a rough timeline of the recent Equifax scandal:

  • Mid-May 2017 – July 29, 2017: Reported period where Equifax’s systems were breached and data compromised.
  • July 29, 2017: Equifax identified the breach internally.
  • August 1 and August 2, 2017: Executives dumped $1.78 million worth of Equifax stock: Chief Financial Officer, John Gamble ($946,374); U.S. Information Solutions President, Joseph Loughran ($584,099); and Workforce Solutions President, Rodolfo Ploder ($250,458).[2]
  • September 7, 2017: Equifax released a public statement about the breach of over 145 million U.S. consumers’ information, 209,000 credit cards, and other breaches of non-US citizen information.[3]
  • September 12, 2017: Alex Holden, founder of Milwaukee, Wisconsin-based Hold Security LLC, contacted noted cybersecurity reporter Brian Krebs about a security flaw discovered within Equifax’s publicly available employee portal in Argentina. The Equifax portal had an active administrative user with the User ID “admin” and the password set to “admin.”  For those of you who may be unaware, the admin/admin username and password combination is regularly used as a vendor default, and is often a combination tried by users to break into systems.  The administrative access allowed maintenance of users within the portal, including the ability to show employee passwords in clear-text.[4]
  • September 14, 2017: On his blog, Krebs on Security, Brian Krebs posted an article referencing a non-public announcement Visa and MasterCard sent to banks, which stated that the “window of exposure for the [Equifax] breach was actually November 10, 2016 through July 6, 2017.”[5] (Note: Equifax still claims the breach was one big download of data in Mid-May 2017, and that the November dates were merely transaction dates.)
  • September 15, 2017: Visa and MasterCard updated the breach notification to include social security numbers and addresses.[6]  They found that the breach occurred on the Equifax site where people signed up for credit monitoring.
  • September 15, 2017: Equifax Chief Information Officer, David Webb, and Chief Security Officer, Susan Mauldin retired, effective immediately.[7][8]
  • September 19, 2017: Equifax admitted they tweeted out a bogus website address at least seven times; for instance, promoting “securityequifax2017.com” instead of the correct site, “equifaxsecurity2017.com,” and thus sent customers to the wrong site. Software engineer Nick Sweeting took the opportunity to teach Equifax a lesson and created an identical site at the incorrect “securityequifax2017.com” with a scathing indictment banner at the top of the page: “Why did Equifax use a domain that’s so easily impersonated by phishing sites?”[9]
  • September 29, 2017: CEO, Richard F. Smith stepped down, though he was expected to walk away with roughly $90 million.[10]
  • September 29, 2017: Astonishingly, the Internal Revenue Service (IRS) awarded Equifax a sole source contract (not publicly bid) for roughly $7.25 million to perform identity verifications for taxpayers.[11] Just in case you were not lucky enough to be a part of the recent Equifax breach, the IRS is giving you another “opportunity.”
  • October 3, 2017: In testimony before the House Energy and Commerce Committee, former Equifax CEO Richard F. Smith blamed one person in his IT department for not patching the Apache Struts vulnerability, and for the entire breach.[12]
  • October 10, 2017: Krebs on Security reported that the number of UK residents affected was 693,665, not the 400,000 initially disclosed.[13]
  • October 12, 2017: Malicious Adobe Flash code was found on Equifax’s website. Equifax blamed a third-party service provider for feeding the information to the site.
  • October 12, 2017: IRS temporarily suspended Equifax’s contract over additional security concerns.[14]

This is not the first time Equifax has been involved in a breach of customer information.  On September 8, 2017, Forbes released an article detailing prior breaches, including one in May 2016 that leaked personal information from 430,000 records of grocer Kroger’s employees[15] from an Equifax site that provided employees with W-2 information.  That breach was attributed to attackers determining the PINs used for site access and breaking into accounts to steal information.  The PINs consisted of the last four digits of an employee’s social security number and their four-digit birth year.

More information keeps surfacing as Equifax continues to be simultaneously scrutinized for its every move and targeted by security researchers and hackers alike.  A huge question remains: how could a company managing the information of so many people, certified compliant under several different standards, including PCI DSS, SOC 2 Type II, FISMA, and ISO/IEC 27001:2013[16] to name a few, be so negligent?

From my experience, there are a lot of large corporations out there with the mentality that they are just too big to fail or to comply one-hundred percent.  I have heard this mantra echoed repeatedly over the years, and every time it makes me want to scream “you are too big not to comply!”

However, history has proven that a lot of these big corporations are in fact too big to fail.  Sure, Equifax is going to be continuously scrutinized, fined, sued, and have its name dragged through the mud.  However, at the end of the day, it will still be managing the information of millions of people, not just Americans, and business will continue as usual.  Equifax will be the butt of jokes and the subject of discussion for a while, but then the stories will start to fall behind other major headlines and soon all will be forgotten.

The reality is the Equifax saga is nothing new to consumers, and Equifax joins the likes of Target, Home Depot, Citibank, and many other companies who had their name plastered within headlines for major data breaches.

The compromises made some consumers think twice about using these companies, or using a credit card at their locations, but time moves on and eventually convenience always beats security.  Each of the companies compromised took a financial hit at the time, but years later they are still chugging away, some with record profits.  Sure, the damage made them reorganize and rethink security going forward, but why must consumers suffer first before these large companies take steps to protect them?  While millions of consumers could be facing identity theft or financial compromise due to the Equifax breach, Equifax’s executives cashed out large amounts of stock, resigned, and will move on to the next company or retire on their riches.

What is the big picture here?  Is it true, as Equifax’s ex-CEO said on the stand, that one member of the information security team caused this huge compromise of data?  Of course not, and it was ludicrous for a CEO to place the blame on one member of his IT staff.  The truth is that companies juggle profit against security.  Let’s be honest: most of the time, information security spends revenue without a return.  The only time a return is realized is when a company mitigates a breach, and that information is not often relayed across an organization.

The damages incurred by consumers and even other businesses due to data breaches far outweigh the penalties the negligent companies face.  The Federal Trade Commission claims that recovering from an identity breach averages six months and 200 hours of work[17].  If only 10% of those involved in the Equifax breach have their identities compromised, using average U.S. hourly earnings, that would equate to roughly $77 billion in potential costs to the American people (14,500,000 people * 200 hours * $26.55 = ~$77 billion).  These are just averages and there are horror stories detailing people fighting for years to clear up their identity.
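
The arithmetic behind that rough figure is easy to reproduce, using only the numbers quoted above:

```python
# Back-of-the-envelope check of the estimate quoted above.
affected = 145_000_000 * 0.10   # 10% of breach victims -> 14.5 million people
hours_per_person = 200          # FTC average recovery effort
hourly_earnings = 26.55         # average U.S. hourly earnings

total_cost = affected * hours_per_person * hourly_earnings
print(f"${total_cost / 1e9:.1f} billion")  # -> $77.0 billion
```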

Overall, there needs to be more accountability and transparency in what these corporations are doing with consumer data.  Most of these companies are going through endless audits covering different regulations and compliances, yet it does not seem to matter, as breaches continue to rise in number.

As other countries are progressively moving forward with reforms for the protection of personal information of their residents, such as the European Union’s General Data Protection Regulation (GDPR), the US continues to blindly stumble along, refusing to take a serious look at these issues.  The amount of money these companies are profiting off the data they collect is ridiculous, and when they have a breach, the fines and other punishments are a joke.

It’s time for things to change, as no company should be able to just say, “whoops, sorry about that” after a breach and move on.

What’s New with the Treacherous 12?

By the CSA Top Threats Working Group

In 2016, the CSA Top Threats Working Group published the Treacherous 12: Top Threats to Cloud Computing, which expounds on 12 categories of security issues that are relevant to cloud environments. The 12 security issues were determined by a survey of 271 respondents.

Following the publication of that document, the group has continued to track the cloud security landscape for incidents. This activity culminated in the creation of an update titled Top Threats to Cloud Computing Plus: Industry Insights.

The update serves as a validation of the relevance of security issues discussed in the earlier document, as well as provides references and overviews of these incidents. In total, 21 anecdotes and examples are featured in the document.

The references and overview of each anecdote and example are written with the help of publicly available information.

The Top Threats Working Group hopes that shedding light on recent anecdotes and examples related to the 12 security issues will provide readers with relevant context that is current and in-line with the security landscape.

 

CSA Releases Minor Update to CCM, CAIQ

By the CSA Research Team

The Cloud Security Alliance has released a minor update to the Cloud Controls Matrix (CCM) and the Consensus Assessments Initiative Questionnaire (CAIQ) v3.0.1. This update incorporates mappings to Shared Assessments 2017 Agreed Upon Procedures (AUP), PCI DSS v3.2, CIS-AWS-Foundation v1.1, HITRUST CSF v8.1, and NZISM v2.5.

The Cloud Security Alliance would like to thank the following individuals and organizations for their contributions to this minor update of the CCM.

Shared Assessments 2017 AUP
Angela Dogan
The Shared Assessments Team

PCI DSS v3.2 
Michael Fasere
Capital One

NZISM v2.5
Phillip Cutforth
New Zealand Government CIO

HITRUST CSF v8.1
CSA CCM Working Group

CIS-AWS-Foundations
Jon-Michael Brook

Learn more about this minor update to the CCM. Please feel free to contact us if you have any queries regarding the update.

If you are interested in participating in future CCM Working Group activities, please feel free to sign up for the working group.

The GDPR and Personal Data…HELP!

By Chris Lippert, Senior Associate, Schellman & Co.

With the General Data Protection Regulation (GDPR) becoming effective May 25, 2018, organizations (or rather, organisations) seem to be stressing a bit. Most organizations we speak with are asking, “Where do we even start?” or “What is included as personal data under the GDPR?” These are exactly the questions organizations should be asking, but to know where to start, an organization first needs to understand how the GDPR applies to it under the new definition of personal data. Without first understanding what to look for, an organization cannot begin to perform data discovery and data mapping exercises, review data management practices, and prepare for compliance with the GDPR.

Personal data redefined…sort of.
To start – is personal data redefined by the GDPR? Yes. Is it more encompassing of a definition? Yes. Does it provide a good amount of guidance on interpretation of said definition? In some areas, but not in others.

The Articles of the GDPR open with a list of definitions in Article 4 that provide some guidance on how to digest the remainder of the regulation—the recitals also contain some nuggets of wisdom if you have time to review. Personal data is the very first definition listed under Article 4, hinting that it is most likely pertinent to a comprehensive understanding of the regulation. Article 4(1) states:

‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person.

In breaking down this definition, there are a few key phrases to focus on. Any information is the big one, as it confirms that personal data, under this regulation, is not limited to a particular group or type of data. Relating to specifies that personal data can encompass any group or type of data, as long as the data is tied to or related to something else. What is that something else? A natural person. A natural person is just that—an actual human being to whom the data applies.

You may have noticed I skipped the ‘an identified or identifiable’ portion of the definition—identified or identifiable means that the natural person has either already been identified, or can readily be identified utilizing other available information. Article 4(1) adds further clarity here, stating that an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. The fact that name, identification number, location data and online identifier are specifically referenced at the beginning of this definition is important, as those pieces of data serve to directly identify an individual. If that specific data is held by the organization, all related data is in scope.

However, if those unique identifiers are not held, your organization should reference the list of other data that could otherwise identify the natural person and bring everything into scope. For example, you may not have John Smith’s name in your database, but you may have a salary, company name, and city that point directly to John Smith when linked together.

In addition to the new definition of personal data, the GDPR also adds more specificity around what it deems “special categories” of personal data. Article 9(1) states:

processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade-union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person’s sex life or sexual orientation shall be prohibited.

This definition is important, as this states that certain personal data falls into a subcategory that has stricter processing requirements. Although the requirement above states that processing of special categories of personal data is prohibited, it is important to note that there are exceptions to this rule. Organizations should reference Article 9 if they believe special categories of data to be in scope.

So how does this definition differ from previous definitions of personal data?
Even though the GDPR “redefines” personal data, is it really all that different from existing definitions? As a baseline, let’s refer to two of the more commonly used definitions for personal data taken from the GDPR’s predecessor—the Data Protection Directive—and NIST 800-122.

The Data Protection Directive defines personal data in Article 2(a), which states ‘personal data’ shall mean any information relating to an identified or identifiable natural person (‘data subject’); an identifiable person is one who can be identified, directly or indirectly, in particular by reference to an identification number or to one or more factors specific to his physical, physiological, mental, economic, cultural or social identity. This definition is almost identical to that of the GDPR. The main difference is that the GDPR added additional data that can identify an individual, such as name, location data and online identifiers. By adding these into the mix, the GDPR clarifies where individuals are presumed to be identified, helping organizations understand that the data associated with those identifiers is in scope and covered under the regulation.

Special categories of personal data are also defined under the Data Protection Directive. Article 8(1) states Member States shall prohibit the processing of personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, trade-union membership, and the processing of data concerning health or sex life. The GDPR expanded on this definition as well, adding genetic data, biometric data, and sexual orientation to the special categories. Essentially, the GDPR has taken the definitions of both personal data and special categories from the Data Protection Directive and provided more clarity, while making them more inclusive at the same time.

Most people probably expect the Data Protection Directive and the GDPR to have similar definitions, as they are essentially versions 1 and 2 of modern EU data privacy legislation, respectively. However, when compared to the definition of personal data contained in U.S.-based guidance, we start to see some key differences. As the National Institute of Standards and Technology (NIST) is widely accepted, let’s look at the definition of personal data found in its 2010 Guide to Protecting the Confidentiality of Personally Identifiable Information (PII). NIST 800-122, Section 2.1, states that PII is any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual’s identity, such as name, social security number, date and place of birth, mother’s maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.

In breaking down this NIST definition, we see some similarities, in that the NIST definition starts off just as broadly with the phrasing “any information.” In the same vein, the wording “about an individual” speaks to the clarification provided in the GDPR definition as well. That being said, the definition then goes on to add more specifics regarding information that can identify or be linked to an individual, which is where we start to notice some differences. The identifying pieces of information listed in the NIST definition include name, social security number, date and place of birth, mother’s maiden name, and biometric records. The GDPR is a bit more inclusive in its definition, listing name, identification number, location data, and online identifier, which covers most of the items from the NIST definition but also adds the online portion. While the GDPR doesn’t include biometric data in the main definition, it does cover physical and genetic information in the listing of other related information.

These differences don’t stop there. The NIST definition does go on to provide guidance on other information that could be linked to the individual, but instead of listing out specific data, the definition focuses rather on sectoral categories of data that seem to be derived from the sectoral privacy laws in the United States. The GDPR definition does not follow this pattern, and instead focuses on the different data that can be linked to an individual from a more generic standpoint, listing out the pieces of information that could be tied to an individual in most industries. Also, while the GDPR definition states that one or more of those other data elements can also identify the individual, the NIST definition really brings that other information into scope by saying it can be personal data as long as the individual is identified—though it does not state that the information can also be used to identify an otherwise unidentified individual.

Final Thoughts
With the GDPR becoming effective next year, it’s clear that this new definition of personal data expands on the preexisting EU definition contained in the Data Protection Directive. Additionally, it adds more specificity to the data that can be used to identify an individual in comparison with leading US personal data definitions.

Why is this so important and relevant to organizations? This new definition of personal data is the most comprehensive definition to date, bringing into scope more information to be considered than any previous definitions in industry regulations or standards. Now, organizations will need to take another look at their previous determination of personal data and reevaluate their data management practices to ensure that the information they hold has been labeled and handled correctly. In fact, information deemed not applicable to past privacy regulations and standards may now become relevant when taking the new definition of personal data into consideration.

Look no further than IP addresses.  Most companies wouldn’t normally lump IP addresses in with personal data, but the GDPR specifically calls out online identifiers in its definition of personal data. The Court of Justice of the European Union (CJEU) issued a judgement indicating as much in Case C-582/14: Patrick Breyer v Bundesrepublik Deutschland, setting the precedent that even dynamic IP addresses can be considered personal data in certain situations. Given this new standard, it will be important for organizations to incorporate judgements from recent cases and guidance from the Article 29 Working Party (being replaced by the European Data Protection Board in May of 2018) when determining how the GDPR impacts their organization and how best to comply.

New procedures and criteria can be confusing, but hopefully the information above has provided some clarity around the new definition of personal data that the GDPR will introduce next year. Basic knowledge of these definitions can be a starting point for determining how the GDPR applies to your organization, and if approached from a comprehensive data and risk management standpoint, this information can help better prepare your organization for compliance with the GDPR and other future privacy regulations and frameworks.

If you should have any questions regarding the new definition of personal data or the GDPR in general, please feel free to reach out to your friendly neighborhood privacy team here at Schellman.

Webinar: How Threat Intelligence Sharing Can Help You Stay Ahead of Attacks

By Lianna Catino, Communications Manager, TruSTAR Technology

According to a recent Ponemon Institute survey of more than 1,000 security practitioners, 84 percent say threat intelligence is “essential to a strong security posture,” but the data is too voluminous and complex to be actionable.

Enter the CloudCISC Working Group. Powered by TruSTAR’s threat intelligence platform, more than 30 CSA enterprise members are now actively exchanging threat data on a daily basis to help them surface relevant intelligence. The platform allows security analysts to mine historical incident data correlations among CSA members to take faster action against new threats.

This month CloudCISC marks its one-year anniversary, and to celebrate we’re bringing you a recap of some of the hottest trending threats we’re seeing on the CSA platform in Q3.

Led by CSA and TruSTAR, we’ll be walking you through the CloudCISC platform and dissecting threats that are specifically relevant and trending among CSA members.

In the event you missed it, you can watch the replay.

Thinking of joining CSA’s Cloud Cyber Intelligence Exchange? Request your invitation today.

Improving Metrics in Cyber Resiliency: A Study from CSA

By Dr. Senthil Arul, Lead Author, Improving Metrics in Cyber Resiliency

With the growth in cloud computing, businesses rely on the network to access information about operational assets that is stored away from the local server. Decoupling information assets from other operational assets could result in poor operational resiliency if the cloud is compromised. Therefore, to keep operational resiliency unaffected, it is essential to bolster the resiliency of information assets in the cloud.

To study the resiliency of cloud computing, the CSA formed a research team consisting of members from both private and public sectors within the Incident Management and Forensics Working Group and the Cloud Cyber Incident Sharing Center.

To measure cyber resiliency, the team leveraged a model developed to measure the resiliency of a community after an earthquake. Expanding this model to cybersecurity introduced two new variables that could be used to improve cyber resiliency.

  • Elapsed Time to Identify Failure (ETIF)
  • Elapsed Time to Identify Threat (ETIT)

Measuring these and developing processes to lower the values of ETIF and ETIT can improve the resiliency of an information system.
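
In practice, both metrics reduce to simple time deltas once the relevant timestamps are recorded for an incident. A minimal sketch (the field names and dates are illustrative, not taken from the study):

```python
from datetime import datetime

# Hypothetical incident record; field names and dates are illustrative only.
incident = {
    "failure_occurred":   datetime(2017, 5, 13),  # when the compromise began
    "failure_identified": datetime(2017, 7, 29),  # when the failure was detected
    "threat_identified":  datetime(2017, 3, 8),   # when the underlying threat became known
}

# Elapsed Time to Identify Failure: how long the compromise went undetected.
etif = incident["failure_identified"] - incident["failure_occurred"]

# Elapsed Time to Identify Threat: one plausible reading is how long a known
# threat went unaddressed before the failure was detected.
etit = incident["failure_identified"] - incident["threat_identified"]

print(f"ETIF: {etif.days} days, ETIT: {etit.days} days")
```

Driving both numbers down, through monitoring, threat intelligence, and standardized forensic analysis, is what improves resiliency.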

The study also looked at recent cyberattacks and measured ETIF for each of the attacks. The results showed that the forensic analysis process is not standard across industries and, as such, the data in the public domain are not comparable. Therefore, to improve cyber resiliency, the team recommends that the calculation and publication of ETIF be transferred from the companies that experienced the cyberattacks to an independent body (such as companies in the intrusion detection space). A technical framework and an appropriate regulatory framework need to be created to enable the measurement and reporting of ETIF and ETIT.

Download the full study.

Security Needs Vs. Business Strategy – Finding a Common Ground

By Yael Nishry, Vice President of Business Development, Vaultive

Even before cloud adoption became mainstream, it wasn’t uncommon for IT security needs to conflict with both business strategy and end user preferences. Almost everyone with a background in security has found themselves in the awkward position of having to advise against a technology with significant appeal and value because it would introduce too much risk.

In my time working both as a vendor and as a risk management consultant, few IT leaders I’ve come across want to be a roadblock when it comes to achieving business goals and accommodating (reasonable) user preferences and requests. However, they also understand the costs of a potential security or non-compliance issue down the road. Unfortunately, many IT security teams have also experienced the frustration of being overridden, either officially by executives electing to accept the risk or by users adopting unregulated, unsanctioned applications and platforms, introducing risk into the organization against their recommendation.

In today’s world of cloud computing there are more vendor options than ever, and end users often come to the table with their own preferences and demands. More and more, I speak to IT and security leaders who have been directed to move to the cloud, or pressured to move data to a specific cloud application for business reasons, but find themselves saying no because the native cloud security controls are not enough.

Fortunately, in the past few years, solutions have emerged that allow IT and security leaders to stop saying no and instead enable the adoption of business-driven requests while giving IT teams the security controls they need to reduce risk. Cloud vendors spend a lot of time and resources securing their infrastructure and applications, but they are not responsible for ensuring compliant cloud usage within their customers’ organizations.

The legal liability for data breaches is yours and yours alone. Only you can guarantee compliant usage within your organization, so it’s important to understand the types of data that will be flowing into the cloud environment and to work with the various stakeholders to enforce controls that reduce risk to an acceptable level and comply with any geographic or industry regulations.

It can be tempting, as always, to lock everything down and allow users only the most basic functionality in cloud applications. However, that often results in a poor user experience and leads to unsanctioned cloud use and shadow IT.

While cloud environments are very different from on-premises environments, many of the same security principles still apply. As a foundation, I often guide organizations to look at what they are doing today for on-premises security and begin by extending those same principles into the cloud. Three useful principles to begin with are:

Privilege Management
Privilege management has been used in enterprises for years as an on-premises method to secure sensitive data and guide compliant user behavior by limiting access. In some cloud services, like Amazon Web Services (AWS), individual administrators can quickly amass enough power to cause significant downtime or security concerns, either unintentionally or through compromised credentials. Ensuring appropriate privilege management in the cloud can help reduce that risk.

In addition to traditional privilege management, the cloud also introduces a unique challenge when it comes to cloud service providers. Since they can access your cloud instance, it’s important to factor into your cloud risk assessment that your cloud provider also has access to your data. If you’re concerned about insider threats or government data requests served directly to the cloud provider, evaluating options to segregate data from your cloud provider is recommended.
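
As a rough sketch of what this looks like in practice on AWS, the boto3 snippet below attaches a narrowly scoped, read-only policy to a user instead of broad administrative rights. The user, policy, and bucket names are placeholders:

```python
import json

import boto3

# Grant least privilege: read-only access to a single reports bucket,
# rather than account-wide administrative rights.
iam = boto3.client("iam")

read_only_reports = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports",
            "arn:aws:s3:::example-reports/*",
        ],
    }],
}

policy = iam.create_policy(
    PolicyName="ReportsReadOnly",
    PolicyDocument=json.dumps(read_only_reports),
)
iam.attach_user_policy(
    UserName="analyst",
    PolicyArn=policy["Policy"]["Arn"],
)
```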

Data Loss Protection
Another reason it’s so important to speak with stakeholders and identify the type of data flowing into the cloud is to determine what data loss protection (DLP) policies you need to enforce. Common data characteristics to look out for include personally identifiable information, credit card numbers, and even source code. If you’re currently using on-premises DLP, it’s a good time to review and update your organization’s already-defined patterns and data classification definitions to ensure they remain valid and relevant as you extend them to the cloud.
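
To make the pattern-matching side concrete, here is a minimal sketch that scans text for two of those characteristics, Social Security numbers and credit card numbers, with a Luhn check to cut false positives. The patterns are illustrative; a real deployment would use the organization’s own classification definitions:

```python
import re

# Hypothetical pattern set; real DLP tuning belongs to the organization.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def luhn_valid(number: str) -> bool:
    """Validate a candidate card number with the Luhn algorithm."""
    digits = [int(d) for d in re.sub(r"\D", "", number)][::-1]
    total = 0
    for i, d in enumerate(digits):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def scan(text: str) -> list:
    """Return (label, match) pairs that should trigger a DLP policy."""
    hits = []
    for label, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            if label == "credit_card" and not luhn_valid(match.group()):
                continue
            hits.append((label, match.group()))
    return hits

print(scan("Card on file: 4111 1111 1111 1111, SSN 123-45-6789"))
```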

It’s also important to educate end users on what to expect. Good cloud security should be mostly frictionless, but if you decide to enforce policies such as blocking a transaction or requiring additional authentication for sensitive transactions, it’s important to include this in your training materials and any internal documentation provided to users. Doing so not only lets users know what to expect, leading to fewer helpdesk tickets, but can also be used to refresh users on internal policies and security basics.

Auditing
A key aspect of any data security strategy is maintaining visibility into your data to ensure compliant usage. Companies need to make sure that they do not lose this capability as they migrate their data and infrastructure into the cloud. If you use security information and event management (SIEM) tools today, it’s worth taking the time to decide which cloud applications and transactions you should integrate into your reports.
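
As a minimal sketch of that integration, the snippet below flattens a CloudTrail-style audit event into a CEF line and ships it to a SIEM’s syslog listener. The host, port, and field mapping are assumptions, not a prescribed format:

```python
import json
import logging
import logging.handlers

# Ship cloud audit events to the SIEM's syslog listener (UDP assumed).
handler = logging.handlers.SysLogHandler(address=("siem.example.com", 514))
logger = logging.getLogger("cloud-audit")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

def to_cef(event: dict) -> str:
    """Flatten a CloudTrail-style event dict into a single CEF line."""
    return (
        "CEF:0|ExampleCorp|CloudAudit|1.0|{name}|{name}|5|"
        "suser={user} src={src}"
    ).format(
        name=event.get("eventName", "unknown"),
        user=event.get("userIdentity", {}).get("userName", "unknown"),
        src=event.get("sourceIPAddress", "0.0.0.0"),
    )

sample = json.loads('{"eventName": "DeleteBucket", '
                    '"userIdentity": {"userName": "alice"}, '
                    '"sourceIPAddress": "203.0.113.7"}')
logger.info(to_cef(sample))
```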

By extending the controls listed above into your cloud environment, you can establish a common ground of good security practices that protects business-enabling technology. With the right tools and strategy in place, it’s possible to stop saying no outright and instead come to the table empowered to enable relevant business demands while maintaining appropriate security and governance controls.


Ransomware Explained

By Ryan Hunt, PR and Content Manager, SingleHop

How It Works, Plus Tips for Prevention and Recovery
Ransomware attacks, a type of malware (a.k.a. malicious software), are proliferating around the globe at a blistering pace. In Q1 2017, a new specimen emerged every 4.2 seconds! What makes ransomware a go-to mechanism for cyber attackers? The answer is in the name itself.

How it works
Unlike other hacks, the point of ransomware isn’t to steal or destroy valuable data; it’s to hold it hostage.

Ransomware enters computer systems via email attachments, pop-up ads, outdated business applications and even corrupted USB sticks.

Even if only one computer is initially infected, ransomware can easily spread network-wide via the LAN or by gaining access to usernames and passwords.

Once the malware activates, the hostage situation begins: Data is encrypted and the user is instructed to pay a ransom to regain control.

Ransomware Prevention

  1. Install Anti-Virus/Anti-Malware Software
  2. Update and Patch Software and Operating Systems
  3. Invest In Enterprise Threat Detection Systems and Mail Server Filtering
  4. Educate Employees on Network Security

What to do if your data is held hostage? If attacked, should your company pay?
Remember: Preventative measures are never 100% effective.

Paying the ransom might get you off the hook quickly, but will make you a repeat target for attack.

There’s a better way
Beat the attackers to the punch by investing in Cloud Backups and Disaster Recovery as a Service.

Backups
Daily Offsite Backups = You’ll Always Have Clean, Recent Copies of Your Data

Disaster Recovery
Disaster Recovery Solutions are crucial in the event Ransomware compromises your entire system. Here, you’ll be able to operate your business as usual via a redundant network and infrastructure. Sorry, Malware Ninjas.

Is the Cloud Moving Too Fast for Security?

By Doug Lane, Vice President/Product Marketing, Vaultive

In February 2017, a vulnerability was discovered in Slack that had the potential to expose the data of the company’s reported four million daily active users. Another February breach, at the content delivery network CloudFlare, leaked sensitive customer data stored by millions of websites powered by the company. On March 7, the WikiLeaks CIA Vault 7 release exposed 8,761 documents on alleged agency hacking operations. On June 19, Deep Root Analytics, a conservative data firm, misconfigured an Amazon S3 server that housed information on 198 million U.S. voters. On July 12, Verizon had the same issue, announcing that a misconfigured Amazon S3 data repository at a third-party vendor had exposed the data of more than 14 million U.S. customers.

That’s at least five major cloud application and infrastructure data breach incidents in 2017, and we’re only in July. Add in the number of ransomware and other attacks during the first half of the year, and it’s clear the cloud has a real security problem.

By now, most everyone recognizes the benefits of the cloud: bringing new applications and infrastructure online quickly and scaling them to meet ever-changing business demands. But however valuable that is for the business side, when security teams lose control over how and where new services are implemented, the network is at risk and, subsequently, so is their data. Balancing the business’s need to move at the speed of the cloud against the need to maintain security controls is becoming increasingly difficult, and the spike in data exposures and breaches shows that security teams are struggling to secure cloud use.

The Slack breach is a great example at the application level. Slack is simple to use and implement, which has driven the application’s record-breaking growth. Departments, teams, and small groups can easily spin up Slack without IT approval or support, and instances of the application can spread quickly across an organization. Although Slack patched the vulnerability identified in February before any known exposure occurred, had it been hacked, the attacker could have had full access to and control over four million user accounts.

In the Verizon situation, a lack of control at the infrastructure level caused so many of the company’s customers to be exposed this month. When servers can be brought online so easily and configured remotely by third-party partners, the right security protocols can be missed or ignored.

As more businesses move to the cloud and as cloud services continue to grow, organizations must establish a unified set of cloud security and governance controls for business-critical SaaS applications and IaaS resources. In most cases, cloud providers will have stronger security than any individual company can maintain and manage on-premises. However, each new service comes with its own security capabilities, which can increase risk because of feature gaps or human error during configuration. Adding encryption and policy controls independently of the vendor is a proven way for organizations to entrust their data to a cloud provider without giving up control over who can access it, while also making sure employees stay compliant when using SaaS applications. These controls allow businesses to move at the speed of the cloud without placing their data at risk.
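
As a minimal sketch of what vendor-independent encryption means in practice, the example below uses Python’s cryptography library to encrypt a record under a key the organization holds before the data ever reaches a provider. The record is purely illustrative, and key management itself (HSMs, KMS, escrow) is out of scope here:

```python
from cryptography.fernet import Fernet

# The key stays with the organization, never with the cloud provider.
key = Fernet.generate_key()
cipher = Fernet(key)

record = b"customer SSN: 123-45-6789"
ciphertext = cipher.encrypt(record)  # this is what gets uploaded

# The provider (or anyone who breaches it) sees only ciphertext;
# only the key holder can recover the plaintext.
assert cipher.decrypt(ciphertext) == record
```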

The reality is that threats are increasing in frequency and severity. The people behind attacks are far more sophisticated and their intentions far more sinister. We, as individuals and businesses, entrust a mind-boggling amount of data to the cloud, yet there is no way today to entirely prevent hackers from getting through the door at the service, infrastructure, or software provider. Remaining in control of your data as it traverses all the cloud services you use is the safest thing you can do to protect your business. Because, in the end, if they can’t read it or use it, is data really data?

Guidance for Critical Areas of Focus in Cloud Computing Has Been Updated

Newest version reflects real-world security practices, future of cloud computing security

By J.R. Santos, Executive Vice President of Research, Cloud Security Alliance

Today marks a momentous day not only for CSA but for all IT and information security professionals as we release Guidance for Critical Areas of Focus in Cloud Computing 4.0, the first major update to the Guidance since 2011.

As anyone involved in cloud security knows, the landscape we face today is a far cry from what was going on 10, even five, years ago. To keep pace with those changes, almost every aspect of the Guidance was reworked. In fact, almost 80 percent of it was rewritten from the ground up, and domains were restructured to better reflect the current state of cloud computing, as well as the direction in which this critical sector is heading.

For those unfamiliar with what is widely considered to be the definitive guide for cloud security, the Guidance acts as a practical, actionable roadmap for individuals and organizations looking to safely and securely adopt the cloud paradigm. This newest version includes significant content updates to address leading-edge cloud security practices and incorporates more of the various applications used in the security environment today.

Guidance 4.0 covers such topics as:

  • DevOps, continuous delivery, and secure software development;
  • Software Defined Networks, the Software Defined Perimeter, and cloud network security;
  • Microservices and containers;
  • New regulatory guidance and evolving roles of audits and compliance inheritance;
  • Using CSA tools such as the CCM, CAIQ, and STAR Registry to inform cloud risk decisions;
  • Securing the cloud management plane;
  • More practical guidance for hybrid cloud;
  • Compute security guidance for containers and serverless, plus updates to managing virtual machine security; and
  • The use of immutable, serverless, and “new” cloud architectures.

Today is the culmination of more than a year of input and review from the CSA and information security communities. Guidance 4.0 was drafted using an open research model (a herculean effort for those unfamiliar with the process), and none of it would have been possible without the assistance of Securosis, whose research analysts oversaw the project. We owe them—and everyone involved—a tremendous thanks.

You can learn more about the Guidance and read the updated version here.

Patch Me If You Can

By Yogi Chandiramani, Technical Director/EMEA, Zscaler

In May, the worldwide WannaCry attack infected more than 200,000 workstations. A month later, just as organizations were regaining their footing, we saw another ransomware attack, which impacted businesses in more than 65 countries.

What have we learned about these attacks?

  • Compromises/infections can happen no matter what types of controls you implement – zero risk does not exist
  • The security research community collaborated to identify indicators of compromise (IOCs) and provide steps for mitigation
  • Organizations with an incident response plan were more effective at mitigating risk
  • Enterprises with a patching strategy and process were better protected

Patching effectively
Two months before the attack, Microsoft released a patch for the vulnerability that WannaCry exploited. But because many systems had not received the patch, and because WannaCry was so widely publicized, the patching debate made it to companies’ board-level leadership, garnering the sponsorship needed for a companywide patch strategy.

Even so, the attack of June 27 spread laterally using the SMB protocol a month after WannaCry, by which time most systems should have been patched. Does the success of this campaign reflect a disregard for the threat? A lack of urgency when it comes to patching? Or does the problem come down to the sheer volume of patches?

Too many security patches
As we deploy more software and more devices to drive productivity and improve business outcomes, we create new vulnerabilities. Staying ahead of them is daunting, given the need to continually update security systems and patch end-user devices running different operating systems and software versions. Along with patch and version management, there are change control, outage windows, documentation processes, post-patch support, and more. And it’s only getting worse.

The following graph illustrates the severity of disclosed vulnerabilities over time. Halfway through 2017, the number of disclosed vulnerabilities is already close to the total for all of 2016.

Source: National Vulnerability Database, part of the National Institute of Standards and Technology (NIST): https://nvd.nist.gov/vuln-metrics/visualizations/cvss-severity-distribution-over-time
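
For readers who want to roughly reproduce such counts today, the sketch below queries NVD’s public REST API (v2.0, which postdates this post) and tallies CVEs by severity. The date window is illustrative, and pagination and API keys are omitted for brevity:

```python
import collections

import requests

# NVD's REST API limits each query to a 120-day publication window and
# paginates results; only the first page is counted in this sketch.
URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
params = {
    "pubStartDate": "2017-01-01T00:00:00.000",
    "pubEndDate": "2017-04-30T23:59:59.999",
}
resp = requests.get(URL, params=params, timeout=30)
resp.raise_for_status()

severities = collections.Counter()
for item in resp.json().get("vulnerabilities", []):
    metrics = item["cve"].get("metrics", {})
    # Older CVEs carry CVSS v3.0 (or only v2) scores rather than v3.1.
    for key in ("cvssMetricV31", "cvssMetricV30"):
        for metric in metrics.get(key, []):
            severities[metric["cvssData"]["baseSeverity"]] += 1

print(dict(severities))
```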

The challenge for companies is the sheer number of patches that need to be processed to remain fully up to date (a volume that continues to increase). Technically speaking, systems will always be one step behind in terms of vulnerability patching.

Companies must become aware of security gaps
In light of the recent large-scale attacks, companies should revisit their patching strategy as a part of their fundamental security posture. Where are the gaps? The only way to know is through global visibility — for example, visibility into vulnerable clients or identifying botnet traffic — which provides key insights in terms of where to start and focus.

Your security platform’s access logs are a gold mine, providing data as well as context, with information such as who, when, where, and how traffic is flowing through the network. The following screen capture is a sample log showing a botnet callback attempt. With this information, you can see where to focus your attention and your security investments.

In the following example, you can identify potentially vulnerable browsers or plugins. It’s important to ensure that your update strategies include these potential entry points for malware, as well.
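
As a toy example of mining such logs, the snippet below flags proxy-log lines whose destination appears on a callback blocklist or whose user-agent reveals an outdated browser. The log format, version floor, and blocklist entries are all assumptions for illustration:

```python
import re

# Illustrative blocklist and version floor; real values come from threat
# intelligence feeds and the organization's patching baseline.
C2_BLOCKLIST = {"evil-callback.example.net"}
MIN_CHROME_MAJOR = 60

LINE = re.compile(r'(?P<src>\S+) (?P<dest>\S+) "(?P<ua>[^"]*)"')

def review(line: str) -> list:
    """Return findings for one assumed-format proxy-log line."""
    findings = []
    m = LINE.match(line)
    if not m:
        return findings
    if m.group("dest") in C2_BLOCKLIST:
        findings.append("possible botnet callback from " + m.group("src"))
    chrome = re.search(r"Chrome/(\d+)", m.group("ua"))
    if chrome and int(chrome.group(1)) < MIN_CHROME_MAJOR:
        findings.append("outdated browser on " + m.group("src"))
    return findings

print(review('10.1.2.3 evil-callback.example.net "Mozilla/5.0 Chrome/49.0"'))
```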

These are but two examples of potential gaps that can be easily closed with the appropriate insight into what software and versions are being used within an organisation. As a next step, companies should focus on patching those gaps with the highest known risk as a starting point.

But patching remains an onerous, largely manual task that is difficult to manage. A better alternative is a cloud-delivered security-as-a-service solution, which automates updates and the patching process. With threat actors becoming increasingly inventive as they design their next exploits, it pays to have a forward-thinking strategy that reduces the administrative overhead, improves visibility, and delivers protections that are always up to date.


Cyberattacks Are Here: Security Lessons from Jon Snow, White Walkers & Others from Game of Thrones

An analysis of Game of Thrones characters as cyber threats to your enterprise.

By Virginia Satrom, Senior Public Relations Specialist, Forcepoint

As most of you have probably seen, we recently announced our new human point brand campaign. Put simply, we are leading the way in making security not just a technology issue, but a human-centric one. In light of this, I thought it would be fun to personify threats to the enterprise with one of my favorite shows – Game of Thrones. Surprisingly, there are a lot of lessons that can be learned from GoT in the context of security.

Before we start, I’d like to provide a few disclaimers:

  • This is meant to be tongue in cheek, not literal, so take off your troll hat for the sake of some interesting analogies.
  • This is not comprehensive. Honestly, I could have written another 5,000 words around ALL the characters that could be related to threats.
  • This is based on the Game of Thrones television series, not the books.
  • And finally, spoilers, people. There are spoilers if you are not fully caught up through Season 6. You’ve been warned 🙂

Now, let’s dive in, lords and ladies…

What makes this Game of Thrones analysis so interesting is that these characters, depending on external forces, can change drastically from season to season. Therefore, our favorite character could represent a myriad of threats during a given season or the series overall. This concept relates to what we call ‘The Cyber Continuum of Intent,’ which places insiders in your organization on a continuum that can move fluidly from accidental to malicious given their intent and motivations. There are also many instances where a character is the personification of a cyber threat or attack method.

Let’s start with one of the most devious characters: Petyr Baelish, aka Littlefinger. Littlefinger is a good example of an advanced evasion technique (AET), which maneuvers through your network, delivering an exploit or malicious content to a vulnerable target while the traffic looks normal enough that security devices pass it through. As Master of Coin and a wealthy business owner, he operates in the innermost circle of King’s Landing while secretly undermining those close to him to raise his standing within Westeros. He succeeds, in fact, by marrying Lady Tully to ultimately become the Protector of the Vale, with great influence over its heir, Robin Arryn. Looking at his character from another angle, Littlefinger could also be considered a privileged user within a global government organization or enterprise. Ned Stark trusts him with his plans to expose the Lannisters’ lineage and other misdoings, but Littlefinger ultimately uses that knowledge for personal gain, causing Ned’s demise. And let’s not forget that Littlefinger also betrays Sansa Stark’s confidence and trust, marrying her to Ramsay Snow.

Varys and his ‘little birds’ equate to bots and, collectively, a botnet. Botnets are connected devices in a given network that can be controlled by an owner with command-and-control software. Of course, Varys (aptly also known as the Spider) commands and controls his little birds through his power, influence, and money. When it comes to security, botnets are used to penetrate a given organization’s systems, often through DDoS attacks, sending spam, and so forth. This example is similar to the Turkish hackers who actually gamified DDoS attacks, offering money and rewards for carrying out cybercrime.

Theon Greyjoy begins the series as a loyal ward to Eddard Stark and friend to Robb and Jon, but through his own greed and hunger for power becomes a true malicious insider. He is also motivated by loyalty to the family and home he has so long been away from. He overtook the North with his fellow Ironborn, fundamentally betraying the Starks.

Theon Greyjoy and Ramsay Bolton (formerly Snow) are no strangers to one another, playing out a horrific captor/captive scenario through Seasons 4 and 5. Ramsay is similar to ransomware, which usually coerces its victims into paying through fear. In the enterprise, this means a ransom is demanded in Bitcoin for the return of business-critical data or IP. Additionally, Ramsay holds Rickon Stark hostage in Season 6. He agrees to return Rickon to Jon Snow and Sansa Stark, but has his men kill Rickon just as the siblings reunite. This is often the case with ransomware that infiltrates the enterprise: even if the ransom is paid, data is not returned.

Gregor Clegane, also known as The Mountain, uses sheer brute force to cause mayhem within Westeros, which is similar to brute-force cracking: a trial-and-error method used to decode encrypted data through exhaustive effort. The Mountain is used for his strength and training as a combat warrior, defeating a knight in a duel in Season 1 and, in Season 4, defeating Prince Oberyn Martell in trial by combat in a most brutal way. He could also be compared to a nation-state hacker, with fierce loyalty to the crown, particularly the Lannister family. He is also a reminder that physical security can be as important as virtual security for enterprises.

Depending on the season or the episode this can fluctuate, but 99% of the time I think we can agree that Cersei Lannister is a good example of a malicious insider, and more specifically a rogue insider. She is keen to keep her family in power and will do whatever it takes to maintain control over their destiny. My favorite part about Cersei is that though she is extremely easy to loathe, throughout the entire series it is clear she loves her children and would do anything for them. After the last of her children dies, she quickly evolves from grief to rage. As the adage says, sad people harm themselves but mad people harm others. Cersei can be likened to a disgruntled employee facing challenges within or outside the workplace who intends to steal critical data with malicious intent.

If we take a look at Seasons 4 and 5, and the fall of Jon Snow, many of the Night’s Watch members are good examples of insiders. Olly, for example, starts out as a loyal brother of the Night’s Watch. If he had leaked any intel that could harm Jon Snow’s leadership or well-being, it would have been accidental. This could be compared to an employee within an organization who is doing their best but accidentally clicks on a malicious link. However, as Snow builds his relationships with the wildlings, Olly cannot help but foster disdain and distrust toward Snow for allying with the people who harmed his family. Conversely, Alliser Thorne was always on the malicious side of the continuum, having it out for Snow especially after losing the election to be the 998th Lord Commander of the Night’s Watch. Ultimately, Thorne’s rallying of the Night’s Watch to his side led to Snow’s demise (even if it was only temporary).

The Sons of the Harpy mirror a hacktivist group fighting the rule of Daenerys Targaryen over Meereen. They wreak havoc on Daenerys’s Unsullied elite soldiers and are backed by the leaders Daenerys overthrew – the ‘Masters’ of Meereen – in the name of restoring the ‘tradition’ of slavery in their city. They seek to overthrow Daenerys and use any means necessary to ensure there is turmoil and anarchy. Hacktivists are often politically motivated. If the hacktivist group is successful, it can take the form of a compromised user on the Continuum, through impersonation. After all, the most pervasive malware acts much like a human being.

Let’s not forget about the adversaries that live beyond The Wall – The White Walkers. The White Walkers represent a group of malicious actors seeking to cause harm in the Seven Kingdoms, or for this analogy, your network. What is interesting about the White Walkers is that they are a threat that has been viewed as legend or folklore, except by those who have actually seen them. However, we know that this season they become very real. Secondly, what makes the White Walkers so remarkable is that we do not know their intentions or motivations; they cannot be understood like most of these characters seeking power or revenge. I argue that this makes them the most dangerous and hardest threat to predict. And lastly, if we think about how the White Walkers came to be, we know that they were initially created to help defend the Children of the Forest against the First Men. But we now know that they have grown exponentially in number and begun to take on a life (pun intended) of their own. This equates to the use of AI in the technology space, which some fear will overtake us humans.

In my mind The Wall itself could be considered a character, and therefore a firewall of sorts. Its purpose is to keep infiltration out; however, as we learned at the end of Season 6, this wall is penetrable. This leads me to the main takeaway – enterprises and agencies face a myriad of threats and should not rely on traditional perimeter defenses, but have multi-layered security solutions in place.

With all of these parallels, it becomes clear that people are the true constant, and the true complexity, in security. It is known that enterprises must have people-centric, intelligent solutions to combat the greatest threats, like those faced in Westeros.

CSA Industry Blog Listed Among 100 Top Information Security Blogs for Data Security

Our blog was recently ranked 35th among 100 top information security blogs for data security professionals by Feedspot. Among the other blogs named to the list were The Hacker News, Krebs on Security and Dark Reading. Needless to say, we’re honored to be in such good company.

To compile the list, Feedspot’s editorial team and expert reviewers assessed each blog on the following criteria:

• Google reputation and Google search ranking;
• Influence and popularity on Facebook, Twitter and other social media sites; and
• Quality and consistency of posts.

We strive to offer our readers a broad range of informative content that provides not only varying points of view but also information you can use as a jumping-off point to enhance your organization’s cloud security.

We hope that you’ll take the time to visit our blog, and we invite you to sign up to receive it and other CSA announcements. We think you’ll like what you see.

Locking-in the Cloud: Seven Best Practices for AWS

By Sekhar Sarukkai, Co-founder and Chief Scientist, Skyhigh Networks

With the voter information of 198 million Americans exposed to the public, the Deep Root Analytics leak brought cloud security to the forefront. The voter data was stored in an AWS S3 bucket with minimal protection. In fact, the only level of security that separated the data from being outright published online was a simple six-character Amazon sub-domain. Simply put, Deep Root Analytics wasn’t following some of the most basic AWS security best practices.

More importantly, this leak demonstrated how essential cloud security has become to preventing data leaks. Even though AWS is the most popular IaaS system, its security, especially on the customer end, is frequently neglected. This leaves sensitive data vulnerable to both internal and external threats. External threats are regularly covered in the news, from malware to DDoS hacking. Yet the Deep Root Analytics leak proves that insider threats can be dangerous, even if they are based on negligence rather than malicious intent.

Amazon has already addressed the issue of outside threats through its numerous security investments and innovations, such as AWS Shield for DDoS attacks. Despite extensive safety precautions, well-organized and persistent hackers can still break Amazon’s defenses. Even so, Amazon cannot be blamed for most AWS security breaches: it is estimated that, through 2020, 95 percent of cloud security failures will be the customer’s fault.

This is because AWS is based on a system of cooperation between Amazon and its customers. This system, known as the shared responsibility model, operates on the assumption that Amazon is responsible for safeguarding and monitoring the AWS infrastructure and responding to fraud and abuse. On the other hand, customers are responsible for the security “in” the cloud. Specifically, they are in charge of configuring and managing the services themselves, as well as installing updates and security patches.

AWS Best Practices

The following best practices serve as a baseline for securely configuring AWS.

  1. Activate CloudTrail log file validation:

CloudTrail log validation ensures that any changes made to a log file can be identified after they have been delivered to the S3 bucket. This is an important step towards securing AWS because it provides an additional layer of security for S3, something that could have prevented the Deep Root Analytics leak.
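
A minimal boto3 sketch of enabling validation on an existing trail (the trail name is a placeholder); once enabled, CloudTrail delivers signed digest files that the aws cloudtrail validate-logs CLI command can verify later:

```python
import boto3

# Enable log file validation so tampering with delivered log files
# becomes detectable. "corp-trail" is a placeholder name.
cloudtrail = boto3.client("cloudtrail")
cloudtrail.update_trail(
    Name="corp-trail",
    EnableLogFileValidation=True,
)
```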

  2. Turn on access logging for CloudTrail S3 buckets:

Log data captured by CloudTrail is stored in the CloudTrail S3 buckets, which can be useful for activity monitoring and forensic investigations. With access logging turned on, customers can identify unauthorized or unwarranted access attempts, as well as track these access requests, improving the security of AWS.
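
A minimal boto3 sketch with placeholder bucket names; the target bucket must already grant S3 log-delivery permissions:

```python
import boto3

# Turn on server access logging for the bucket that holds CloudTrail
# logs, writing the access records to a separate, dedicated bucket.
s3 = boto3.client("s3")
s3.put_bucket_logging(
    Bucket="corp-cloudtrail-logs",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "corp-access-logs",
            "TargetPrefix": "cloudtrail-bucket/",
        }
    },
)
```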

  3. Use multifactor authentication:

Multifactor authentication (MFA) should be activated for logins to both root and Identity and Access Management (IAM) user accounts. For the root user, the MFA should be tied to a dedicated device rather than any one user’s personal device; this ensures that the root account remains accessible even if a personal device is lost or its owner leaves the company. Lastly, MFA should be required for deleting CloudTrail logs, as hackers can avoid detection for longer by deleting the S3 buckets containing those logs.
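
One way to satisfy that last point is MFA Delete on the CloudTrail bucket’s versioning configuration; AWS accepts this call only from the root account, supplying the MFA device serial and a current token. The values below are placeholders:

```python
import boto3

# Require MFA for destructive operations on the CloudTrail bucket by
# enabling MFA Delete alongside versioning. Placeholder serial and token.
s3 = boto3.client("s3")
s3.put_bucket_versioning(
    Bucket="corp-cloudtrail-logs",
    MFA="arn:aws:iam::123456789012:mfa/root-account-mfa-device 123456",
    VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
)
```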

  4. Rotate IAM access keys regularly:

When sending requests between the AWS Command Line Interface (CLI) and the AWS APIs, an access key is needed. Rotating this access key after a standardized, selected number of days decreases the risk of both external and internal threats: once a lost or stolen key has been rotated out, it can no longer be used to access data.
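
A hedged boto3 sketch of the rotation check; the 90-day threshold and the user name are illustrative choices, not AWS requirements:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Flag active keys older than the chosen threshold so a new key can be
# issued and the old one retired.
MAX_AGE = timedelta(days=90)
iam = boto3.client("iam")

for key in iam.list_access_keys(UserName="analyst")["AccessKeyMetadata"]:
    age = datetime.now(timezone.utc) - key["CreateDate"]
    if key["Status"] == "Active" and age > MAX_AGE:
        print("rotate", key["AccessKeyId"], "(%d days old)" % age.days)
        # Typical rotation: create a new key, cut applications over to
        # it, then deactivate and finally delete the old one:
        #   iam.create_access_key(UserName="analyst")
        #   iam.update_access_key(UserName="analyst",
        #                         AccessKeyId=key["AccessKeyId"],
        #                         Status="Inactive")
        #   iam.delete_access_key(UserName="analyst",
        #                         AccessKeyId=key["AccessKeyId"])
```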

  5. Minimize number of discrete security groups:

Account compromise can come from a variety of sources, one of which is misconfiguration of a security group. By minimizing the number of discrete security groups, enterprises can reduce the risk of misconfiguring an account.

  6. Terminate unused access keys:

AWS users must terminate unused access keys, as access keys can be an effective method for compromising an account. For example, if someone leaves the company and still has access to a key, that person would be able to use it until its termination. Similarly, if old access keys are deleted, external threats only have a brief window of opportunity. It is recommended that access keys left unused for 30 days be terminated.
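
A sketch of the 30-day rule with boto3; keys are deactivated rather than deleted outright so a false positive can be reversed, and the user name is a placeholder:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Deactivate keys with no recorded use in the last 30 days. Keys that
# have never been used are measured from their creation date.
UNUSED_AFTER = timedelta(days=30)
iam = boto3.client("iam")
now = datetime.now(timezone.utc)

for key in iam.list_access_keys(UserName="analyst")["AccessKeyMetadata"]:
    last = iam.get_access_key_last_used(AccessKeyId=key["AccessKeyId"])
    last_used = last["AccessKeyLastUsed"].get("LastUsedDate",
                                              key["CreateDate"])
    if now - last_used > UNUSED_AFTER:
        iam.update_access_key(
            UserName="analyst",
            AccessKeyId=key["AccessKeyId"],
            Status="Inactive",
        )
```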

  7. Restrict access to CloudTrail bucket:

No user or administrator account should have unrestricted access to CloudTrail logs, because even accounts with no malicious intent behind them are susceptible to phishing attacks. As a result, access to the CloudTrail logs needs to be restricted to limit the risk of unauthorized access.
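
One hedged way to express such a restriction is a bucket policy that denies log deletion to everyone except a dedicated audit role; the account ID, role, and bucket names are placeholders, and any real policy should be reviewed before deployment:

```python
import json

import boto3

# Deny object deletion in the CloudTrail bucket to every principal
# except a dedicated audit role.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyDeleteExceptAuditRole",
        "Effect": "Deny",
        "NotPrincipal": {"AWS": "arn:aws:iam::123456789012:role/audit"},
        "Action": ["s3:DeleteObject", "s3:DeleteObjectVersion"],
        "Resource": "arn:aws:s3:::corp-cloudtrail-logs/*",
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="corp-cloudtrail-logs",
    Policy=json.dumps(policy),
)
```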

These best practices for the AWS infrastructure can go a long way toward securing your sensitive information. By applying even a few of them to your AWS configuration, you can keep that information secure and help prevent another leak like Deep Root Analytics’ in the future.