AWS Cloud: Proactive Security and Forensic Readiness – Part 1

December 11, 2017

By Neha Thethi, Information Security Analyst, BH Consulting

Part 1 – Identity and Access Management in AWS
This is the first in a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment. This post relates to identity and access management in AWS.

In a recent study by Dashlane regarding password strength, AWS was listed as an organization that supports weak password rules. However, AWS has numerous features that enable granular control for access to an account’s resources by means of the Identity and Access Management (IAM) service. IAM provides control over who can use AWS resources (authentication) and how they can use those resources (authorization).

The following list focuses on limiting access to, and use of, root account and user credentials; defining roles and responsibilities of system users; limiting automated access to AWS resources; and protecting access to data stored in storage buckets – including important data stored by services such as CloudTrail.

The checklist provides best practice for the following:

  1. How are you protecting the access to and the use of AWS root account credentials?
  2. How are you defining roles and responsibilities of system users to control human access to the AWS Management Console and API?
  3. How are you protecting the access to and the use of user account credentials?
  4. How are you limiting automated access to AWS resources?
  5. How are you protecting your CloudTrail logs stored in S3 and your Billing S3 bucket?

Best-practice checklist

1) How are you protecting the access to and the use of AWS root account credentials?

  • Lock away your AWS account (root) login credentials
  • Use multi-factor authentication (MFA) on root account
  • Make minimal use of the root account (or, if possible, no use at all); use an IAM user instead to manage the account
  • Do not use AWS root account to create API keys.
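The first two points can be audited mechanically. The sketch below (Python, standard library only) parses a few columns of the IAM credential report — the CSV you can download with `aws iam get-credential-report` — and flags a root account that lacks MFA or has active access keys. The column subset and the sample data are illustrative, not a full report.

```python
import csv
import io

def check_root_account(report_csv: str) -> list:
    """Flag root-account issues in an IAM credential report.

    `report_csv` is the (base64-decoded) CSV returned by
    `aws iam get-credential-report`; only a few of its columns are used here.
    """
    findings = []
    for row in csv.DictReader(io.StringIO(report_csv)):
        if row["user"] != "<root_account>":
            continue
        if row["mfa_active"] != "true":
            findings.append("root account has no MFA")
        if row["access_key_1_active"] == "true" or row["access_key_2_active"] == "true":
            findings.append("root account has active access keys")
    return findings

# Illustrative sample containing only the columns the check reads.
sample = (
    "user,mfa_active,access_key_1_active,access_key_2_active\n"
    "<root_account>,false,true,false\n"
    "alice,true,true,false\n"
)
print(check_root_account(sample))
# → ['root account has no MFA', 'root account has active access keys']
```

Running a check like this on a schedule turns the first two bullets from a one-off setup task into a continuously verified control.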

2) How are you defining roles and responsibilities of system users to control human access to the AWS Management Console and API?

  • Create individual IAM users
  • Configure a strong password policy for your users
  • Enable MFA for privileged users
  • Segregate defined roles and responsibilities of system users by creating user groups. Use groups to assign permissions to IAM users
  • Clearly define and grant only the minimum privileges to users, groups, and roles that are needed to accomplish business requirements.
  • Use AWS-managed policies to assign permissions whenever possible
  • Define and enforce user life-cycle policies
  • Use roles to delegate access to users, applications, or services that don’t normally have access to your AWS resources
  • Use roles for applications that run on Amazon EC2 instances
  • Use access levels (list, read, write and permissions management) to review IAM permissions
  • Use policy conditions for extra security
  • Regularly monitor user activity in your AWS account(s).
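To make "minimum privileges" and "policy conditions" concrete, here is a hypothetical group policy built as a Python dict: read-only access to a single, made-up S3 bucket, granted only when the caller authenticated with MFA (via the `aws:MultiFactorAuthPresent` condition key). The bucket name is a placeholder.

```python
import json

# Hypothetical least-privilege group policy: read-only access to one
# S3 bucket; the Allow applies only when the request was MFA-authenticated.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
            # Policy condition for extra security.
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attach a document like this to a group rather than to individual users, so that permissions follow role changes automatically.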

3) How are you protecting the access to and the use of user account credentials?

  • Rotate credentials regularly
  • Remove/deactivate unnecessary credentials
  • Protect EC2 key pairs. Password protect the .pem and .ppk files on user machines
  • Delete keys on your instances when someone leaves your organization or no longer requires access
  • Regularly run least-privilege checks using IAM Access Advisor and access key last used information
  • Delegate access by using roles instead of by sharing credentials
  • Use IAM roles for cross-account access and identity federation
  • Use temporary security credentials instead of long-term access keys.
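One simple way to operationalise "rotate credentials regularly" is to flag access keys older than a threshold. The sketch below assumes a list shaped like the `AccessKeyMetadata` entries that boto3's `list_access_keys` returns; the key IDs and dates are made up for illustration.

```python
from datetime import datetime, timedelta

def stale_keys(keys, max_age_days=90, now=None):
    """Return the IDs of access keys older than `max_age_days`.

    `keys` mimics the AccessKeyMetadata entries returned by boto3's
    iam.list_access_keys(); only the two fields used here are assumed.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=max_age_days)
    return [k["AccessKeyId"] for k in keys if k["CreateDate"] < cutoff]

# Made-up key metadata for illustration.
keys = [
    {"AccessKeyId": "AKIAEXAMPLEOLD", "CreateDate": datetime(2017, 1, 1)},
    {"AccessKeyId": "AKIAEXAMPLENEW", "CreateDate": datetime(2017, 11, 1)},
]
print(stale_keys(keys, now=datetime(2017, 12, 1)))
# → ['AKIAEXAMPLEOLD']
```

A check like this, run against every IAM user, gives you a concrete rotation backlog rather than a vague policy statement.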

4) How are you limiting automated access to AWS resources?

  • Use IAM roles for EC2 and an AWS SDK or CLI
  • Securely store any static credentials that are used for automated access
  • Use instance profiles or Amazon STS for dynamic authentication
  • For increased security, implement alternative authentication mechanisms (e.g. LDAP or Active Directory)
  • Protect API access using MFA.
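The "IAM roles for EC2" pattern starts with a trust policy that lets the EC2 service assume the role; code on the instance then receives temporary credentials through the SDK or CLI automatically, with no static keys baked into the image. A minimal trust policy, built here as a Python dict for illustration:

```python
import json

# Trust policy letting the EC2 service assume a role. Attached to a role
# and exposed to instances via an instance profile, it means applications
# on the instance pick up short-lived credentials automatically instead of
# shipping with long-term access keys.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```

The permissions the instance actually gets come from a separate permissions policy attached to the same role; the trust policy only controls who may assume it.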

5) How are you protecting your CloudTrail logs stored in S3 and your Billing S3 bucket?

  • Limit access to users and roles on a “need-to-know” basis for data stored in S3
  • Use bucket access permissions and object access permissions for fine-grained control over S3 resources
  • Use bucket policies to grant access to other AWS accounts or IAM users.
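For a CloudTrail logging bucket specifically, AWS documents a bucket policy that grants the CloudTrail service exactly two actions — reading the bucket ACL and writing log objects under the account prefix — and nothing more. A sketch of that policy follows; the bucket name and account ID are placeholders.

```python
import json

# Placeholder names; substitute your own bucket and 12-digit account ID.
BUCKET = "example-cloudtrail-logs"
ACCOUNT_ID = "123456789012"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # CloudTrail may check who owns the bucket...
            "Sid": "AWSCloudTrailAclCheck",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:GetBucketAcl",
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            # ...and write log objects under the account prefix, nothing else.
            "Sid": "AWSCloudTrailWrite",
            "Effect": "Allow",
            "Principal": {"Service": "cloudtrail.amazonaws.com"},
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{BUCKET}/AWSLogs/{ACCOUNT_ID}/*",
            "Condition": {
                "StringEquals": {"s3:x-amz-acl": "bucket-owner-full-control"}
            },
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Keeping the principal restricted to the CloudTrail service, and the writable prefix to your own account ID, is what prevents the kind of wide-open S3 misconfiguration discussed later in this series.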


For more details, refer to AWS’s IAM documentation and best-practice guides.

Next up in the blog series is Part 2 – Infrastructure Level Protection in AWS – best practice checklist. Stay tuned.

Let us know if we have missed anything in our checklist!

DISCLAIMER: Please be mindful that this is not an exhaustive list. Given the pace of innovation and development within AWS, new features may have been rolled out as these posts were being written. Also, please note that this checklist is for guidance purposes only.


What Will Software Defined Perimeter Mean for Compliance?

December 8, 2017

By Eitan Bremler, VP Marketing and Product Management, Safe-T Data

Your network isn’t really your network anymore. More specifically, the things you thought of as your network — the boxes with blinking lights, the antennae, the switches, the miles of Cat 5 cable — no longer represent the physical reality of your network in the way that they once did. In addition to physical boxes and cables, your network might run through one or more public clouds, to several branch offices over a VPN, and even through vendor or partner networks if you use a managed services provider. What’s more, most of the routing decisions will be made automatically. In total, these new network connections and infrastructure add up to a massive attack surface.

The software defined perimeter is a response to this new openness. It dictates that just because parts of your infrastructure are connected to one another, that doesn’t mean they should be allowed access. Essentially, the use of SDP lets administrators place a digital fence around parts of their network, no matter where it resides.

Flat Networks Leave Data Vulnerable
Where security is concerned, complicated networks can be a feature, not a bug. For companies above a certain size that must protect critical data, a degree of complexity in network design is recommended. For example, can everyone in your company access the shared drive where you store cardholder information? If so, that’s bad practice; you need to adopt network segmentation, as recommended by US-CERT.

Any network in which every terminal can access every part of the network is known as a “flat” network. Under least privilege, every user and application can access only those resources which are absolutely critical for them to do their jobs; a flat network operates by the opposite principle: everyone gets access to everything. In other words, if a hacker gets into an application, or an employee goes rogue, prepare for serious trouble.

Flat networks are also a characteristic of networks lacking a software defined perimeter.

Create Nested Software Defined Perimeters for Extra Security
Flat networks introduce a high level of risk for organizations, but the use of SDP can eliminate this risk. The software-defined approach can create isolated network segments around applications and databases. What’s more, this approach doesn’t rely on physically rewiring the network or creating virtual LANs, both of which are time-consuming processes.

This approach is already used in public cloud data centers, where thousands of applications that must not communicate with one another must coexist on VMs that are hosted on the same bare-metal servers. The servers themselves are all wired to one another in the manner of a flat network, but SDN keeps their networks or data from overlapping.

Do You Need SDP in Order to Be Compliant?
Software defined perimeters are strongly recommended for security, but they are not actually necessary for compliance, yet. PCI DSS 3.2 doesn’t require network segmentation, mainly because the technology is still in its relative infancy and is not yet accessible to every company. Those companies that can segment their networks, however, do receive a bit of a bonus.

If you manage to segment your network appropriately, only the segments of your network that contain cardholder data will be subject to PCI audit. Otherwise, the entirety of a flat network will be subject to scrutiny. Clearly, it’s easier to defend and secure a tiny portion of your network than the entire thing. Those who learn the art of network segmentation will have a massive advantage in terms of compliance.

Look for Software-Defined Perimeter Solutions
Solutions using the SDP method will help organizations set Zero Trust boundaries between different applications and databases. These are effectively more secure than firewalls, because they obviate the necessity of opening ports between any two segmented networks. This additional security feature lets companies reduce the scope of PCI audits without changing the underlying network.

Your Morning Security Spotlight: Apple, Breaches, and Leaks

December 7, 2017

By Jacob Serpa, Product Marketing Manager, Bitglass

Here are the top cybersecurity stories of recent weeks:

–Apple’s High Sierra has massive vulnerability
–Survey says all firms suffered a mobile cyberattack
–Morrisons liable for ex-employee leaking data
–S3 misconfiguration leaks NCF customer data
–Imgur reports 2014 breach of 1.7 million credentials

Apple’s High Sierra has massive vulnerability
Apple’s latest operating system, High Sierra, was found to have a massive vulnerability. By typing the username “root” and leaving the password blank, anyone could access devices running the operating system, steal data, and upload malicious software.

Survey says all firms suffered a mobile cyberattack
In Check Point’s survey of 850 businesses around the world, all were found to have experienced a mobile cyberattack. This demonstrates the dangers of enabling unsecured BYOD and mobile data access. Additionally, the report contains surprising statistics on mobile malware, man-in-the-middle attacks, and more.

Morrisons liable for ex-employee leaking data
The supermarket chain Morrisons was recently found liable for a breach caused by an ex-employee in 2014. In 2015, the employee was sentenced to eight years in jail for maliciously leaking the payroll data of 100,000 fellow employees. However, Morrisons will now be held responsible, as well.

S3 misconfiguration leaks NCF customer data
The National Credit Federation (NCF) is reported to have leaked sensitive data belonging to tens of thousands of its customers. The information, which included bank account numbers and scans of Social Security cards, was leaked through an Amazon S3 misconfiguration that allowed complete public access to certain data.

Imgur reports 2014 breach of 1.7 million credentials
Imgur recently discovered that it suffered from a breach in 2014 that led to the compromise of 1.7 million users’ email addresses and passwords. The attack serves as an example of the fact that breaches (and ongoing data theft) can take years to detect.

Clearly, organizations that fail to protect their sensitive information will suffer the consequences. Learn how to achieve comprehensive visibility and control over data by reading the solution brief for the Next-Gen CASB.

Electrify Your Digital Transformation with the Cloud

December 5, 2017

By Tori Ballantine, Product Marketing, Hyland

Taking your organization on a digital transformation journey isn’t just a whimsical idea, something fun to daydream about, or an initiative that “other” companies probably have time to implement. It’s something that every organization needs to seriously consider. If your business isn’t digital, it needs to become digital in order to remain competitive.

So if you take it as a given that you need to embrace digital transformation to survive and thrive in the current landscape, the next logical step is to look at how the cloud fits into your strategy. Because sure, it’s possible to digitally transform without availing yourself of the massive benefits of the cloud. But why would you?

Why would you intentionally leave on the table what could be one of the strongest tools in your arsenal? Why would you take a pass on the opportunity to transform – and vastly improve – the processes at the crux of how your business works?

Lightning strikes
In the case of content services, including capabilities like content management, process management and case management, cloud adoption is rising by the day. Companies with existing on-premises solutions are considering the cloud as the hosting location for their critical information, and companies seeking new solutions are looking at cloud deployments to provide them with the functionality they require.

If your company was born in the digital age, it’s likely that you inherently operate digitally. If your company was founded in the time before, perhaps you’re playing catch up.

Both of these types of companies can find major benefits in the cloud. Data is created digitally, natively — but there is still paper that needs to be brought into the digital fold. The digitizing of information is just a small part of digital transformation. To truly take information management to the next level, the cloud offers transformative options that just aren’t available in a premises-bound solution.

People are overwhelmingly using the cloud in their personal lives, according to AIIM’s State of Information Management: Are Businesses Digitally Transforming or Stuck in Neutral? Of those polled, 75 percent use the cloud in their personal life and 68 percent report that they use the cloud for business. That’s nearly three-quarters of respondents!

When we look at the usage of cloud-based solutions in areas like enterprise content management (ECM) and related applications, 35 percent of respondents leverage the cloud as their primary content management solution; for collaboration and secure file sharing; or for a combination of primary content management and file sharing. These respondents are deploying these solutions either exclusively in the cloud or as part of on-prem/cloud hybrid solutions.

Another 46 percent are migrating all their content to the cloud over time; planning to leverage the cloud but haven’t yet deployed; or are still experimenting with different options. They are in the process of discerning exactly how best to leverage the power of the cloud for their organizations.

And only 11 percent have no plans for the cloud. Eleven percent! Can your business afford to be in that minority?

More and more, the cloud is becoming table stakes in information management. Organizations are growing to understand that a secure cloud solution can not only save them time and money, but also provide them with stronger security features, better functionality and larger storage capacity.

The bright ideas
So, what are some of the ways that leveraging the cloud for your content services can digitally transform your business?

  • Disaster recovery. When your information is stored on-premises and calamity strikes — a fire, a robbery, a flood — you’re out of luck. When your information is in the cloud, it’s up and ready to keep your critical operations running.
  • Remote access. Today’s workforce wants to be mobile, and they need to access their critical information wherever they are. A cloud solution empowers your workers by granting them the ability to securely access critical information from remote locations.
  • Enhanced security. Enterprise-level cloud security has come a long way and offers sophisticated protection that is out of reach for many companies to manage internally.

Here are other highly appealing advantages of cloud-based enterprise solutions, based on a survey conducted by IDG Enterprise:

  • Increased uptime
  • 24/7 data availability
  • Operational cost savings
  • Improved incident response
  • Shared/aggregated security expertise of vendor
  • Access to industry experts on security threats

Whether you’re optimizing your current practices or rethinking them from the ground up, these elements can help you digitally transform your business by looking to the cloud.

Can you afford not to?

AWS Cloud: Proactive Security & Forensic Readiness

December 1, 2017

This post kicks off a series examining proactive security and forensic readiness in the AWS cloud environment. 

By Neha Thethi, Information Security Analyst, BH Consulting

In a time where cyber-attacks are on the rise in magnitude and frequency, being prepared during a security incident is paramount. This is especially crucial for organisations adopting the cloud for storing confidential or sensitive information.

This blog is an introduction to a five-part blog series that provides a checklist for proactive security and forensic readiness in the AWS cloud environment.

Cyber-attack via third party services
A number of noteworthy information security incidents and data breaches have come to light recently that involve major organisations being targeted via third-party services or vendors. Such incidents are facilitated in many ways, such as a weakness or misconfiguration in the third-party service, or more commonly, a failure to implement or enable existing security features.

For example, it has been reported that several data breach incidents in 2017 occurred as a result of an Amazon S3 misconfiguration. Additionally, the recent data breach incident at Deloitte appears to have been caused by the company’s failure to enable two-factor authentication to protect a critical administrator account in its Azure-hosted email system.

Security responsibility
Many of our own customers at BH Consulting have embraced the use of cloud, particularly Amazon Web Services (AWS). It is estimated that worldwide cloud IT infrastructure revenue has almost tripled in the last four years. AWS remains the dominant market leader, with an end-of-2016 revenue run rate of more than $14 billion. It owes its popularity to its customer focus, rich set of functionality, pace of innovation, partner and customer ecosystem, and implementation of secure and compliant solutions.

AWS provides a wealth of material, and various specialist partners, to help customers enhance security in their AWS environment. Central to these resources is the shared responsibility model, which helps customers understand their security responsibilities based on the service model being used (infrastructure as a service, platform as a service or software as a service).

Figure 1: AWS Shared Responsibility Model

When adopting third-party services, such as AWS, it is important that customers understand their responsibility for protecting data and resources that they are entrusting to these third parties.

Security features
AWS provides numerous security measures; however, awareness of the relevant security features, and their appropriate configuration, is key to taking full advantage of them. A customer may be unaware of certain useful and powerful features, yet it is the customer’s responsibility to identify them and determine how best to leverage each one, if at all.

Five-part best practice checklist
The blog series will offer best-practice checklists for proactive security and forensic readiness in the AWS cloud, covering the following five areas:

  1. Identity and Access Management in AWS
  2. Infrastructure Level Protection in AWS
  3. Data Protection in AWS
  4. Detective Controls in AWS
  5. Incident Response in AWS

Stay tuned for further installments.

Four Important Best Practices for Assessing Cloud Vendors

November 24, 2017

By Nick Sorensen, President & CEO, Whistic

When it comes to evaluating new vendors, it can be challenging to know how best to communicate the requirements of your vendor assessment process and ultimately select the right partner to help your business move forward — while at the same time avoiding the risk of a third-party security incident. After all, 63 percent of data breaches are linked to third parties in some way. In fact, we all recently learned how an Equifax vendor was serving up malicious code on its website in a newly discovered security incident.

The Whistic team has done thorough research on what a good vendor assessment process looks like and how to keep your organization safe from third party security threats. In the following article, we’ll outline a few of these best practices that your organization can follow in order to improve your chances of a successful vendor review. Of course, there will still be situations that you must address in which a vendor is either not prepared to respond to your request or isn’t willing to comply with your process. However, we’ll share some tips for how to best respond to these situations, too.

But before we get started, keep these three keys in mind:

  1. Time your assessments: The timing of the assessment will be the single greatest leverage you have in getting a vendor to respond. Keep in mind that aligning your review with a new purchase or contract renewal is key.
  2. Alert the vendor ASAP: The sooner a vendor is aware of a review, the better. Plan ahead, engage early, and get executive buy-in from your team to hold vendors accountable to your policy. If your business units understand that you have a policy requirement to review every new vendor, they can help set expectations during the procurement process and eliminate last-minute reviews.
  3. Don’t overwhelm your vendors: Unnecessary questions or requests for irrelevant documentation can slow the process down significantly. Be sure to revisit your questionnaire periodically and identify new ways to customize questions based on vendor feedback. You may find, after conducting several security reviews, that there are ways to improve the experience for both parties.

Personalize the Communication
At Whistic, we’ve had a front row seat to the security review processes of companies all across the world and a wide range of use cases. We’ve seen firsthand how much of a difference personalized communication can make in creating a more seamless process for all involved, especially third party vendors who are or hope to be trusted partners to your business.

With this in mind, we strongly recommend sending a personalized email to each vendor when initiating a new questionnaire request to supplement the email communication that they will receive from any software you utilize. This can help alleviate concerns the vendor may have about the assessment process and should help to improve turnaround times on completed questionnaires. Even with the automated communication support from a third party security platform, the best motivator for your vendor to complete your request may be a friendly reminder from you or the buyer that the sales process is on hold until they complete the assessment.

Deliver Expectations Early
Assuming that your vendor already understands that you are going to need to complete a security review on them, the best time to help them understand your expectations is either right before or right after you initiate a request via your third party security platform.

When doing so, keep the following in mind as you have a phone call or draft an email to your vendor to introduce the vendor assessment request:

  • Set The Stage: Let your vendor know about the third party security platform that your organization uses and that it is the required method for completing your security review process.
  • Give Clear Direction: Specify a clear deadline and any specific instructions for completing the entire security review — not just the questionnaire.
  • Provide Resources: Provide information for the best point of contact who can answer questions they may have throughout the process. It’s also a good idea to let them know that your third party security platform may reach out if they aren’t making progress on their vendor assessment.

Utilize an Email Template
Whether you use a customized template created by your team or a predefined template (such as the one Whistic provides to its customers), it’s worth spending a few minutes upfront to standardize the communication process. This will save you time in the long run and allow you to deliver a consistent message to each of your vendors.

Respond to Vendor Concerns
It isn’t uncommon for vendors, particularly account executives, to try and deflect a security review as they know it has the potential to delay the sales/renewal process. They may also have questions about sharing information through a third party security platform as opposed to emailing that information to you. We know from experience how frustrating this can be for all involved, so below are two tips for handling pushback:

  • Preparation: If you are getting repeated pushback from vendors, review the “Keys to Success” outlined at the beginning of this article and explore additional ways to adopt those best practices.
  • Complexity, Relevance, and Length: These items can be among the reasons why vendors complain about your security review process. Periodically revisit your questionnaire and consider adding filter logic to limit the number of questions asked of each vendor, or to make the question sets more relevant to the vendor that is responding.

These are just a few things to consider as you look to assess your next cloud vendor. What else have you found helpful as you have approached this responsibility at your company?


Your Morning Security Spotlight

November 21, 2017

By Jacob Serpa, Product Marketing Manager, Bitglass

The top cybersecurity stories of the week revolved around malware and breaches. Infections and data theft remain very threatening realities for the enterprise.

400 Million Malware Infections in Q3 of 2017
In the last few months, malware has successfully infected hundreds of millions of devices around the world. As time passes, threats will continue to become more sophisticated, effective, and global in reach. To defend themselves, organizations must remain informed about current malware trends.

Fileless Attacks Are on the Rise
It is estimated that 35 percent of all cyberattacks in 2018 will be fileless. This kind of attack occurs when users click on unsafe URLs that run malicious scripts through Flash, for example. Rather than rely solely on security measures that only monitor for threatening files, the enterprise should adopt solutions that can defend against zero-footprint threats.

Terdot Malware Demonstrates the Future of Threats
The Terdot malware, which can surveil emails and alter social media posts in order to propagate, is serving as an example of the evolution of malware. More and more, threats will include reconnaissance capabilities and increasing sophistication. Hackers are looking to refine their methods and contaminate as many devices as possible.

Spoofed Black Friday Apps Steal Information and Spread Malware
In their rush to buy discounted products, many individuals are downloading malicious applications that masquerade as large retailers offering Black Friday specials. As information is stolen from affected devices and malware makes its way to more endpoints, businesses that support bring your own device (BYOD) must be mindful of how they secure data and defend against threats.

What to Do in the Event of a Breach
ITPro posted an article on how organizations should respond when their public cloud instances are breached. Rather than assume that cloud app vendors perfectly address all security concerns, organizations must understand the shared responsibility model of cloud security. While vendors are responsible for securing infrastructure and cloud apps themselves, it is up to the enterprise to secure data as it is accessed and moved to devices. As such, remediation strategies vary depending on how breaches occur (compromised credentials versus underlying infrastructure being attacked).

Clearly, the top stories from the week were concerned with what can go wrong when using the cloud. To combat these threats, organizations must first understand them. From there, they can adopt the appropriate security solutions. To take the first step and learn more about threats in the cloud, download this report.

IT Sales in the Age of the Cloud

November 9, 2017

By Mathias Widler, Regional Sales Director, Zscaler

The cloud is associated not only with a change in corporate structures, but also a transformation of the channel and even sales itself. Cloudification makes it necessary for sales negotiations to be held with decision-makers in different departments and time zones, with different cultural backgrounds and in different languages. The main challenge: getting a variety of departments to the negotiating table, and identifying the subject matter expert among many stakeholders.

To communicate with different decision-makers, sales reps must switch quickly from their roles as salespeople to global strategists and account managers. Today’s salespeople sell services, not boxes. They must also explain how the service can benefit the business, instead of simply touting its features.

The new sales process highlights the need for new skills and qualifications in the sales department, as we explain below.

Selling business value
A decade ago, it was important to get a company’s security person excited about new technology during a sales pitch. But the days of simply closing a deal by convincing the responsible person or admin to buy the product are long gone. What is needed today is a holistic winning strategy, which starts by explaining the business advantages of a solution to a potential customer.

Today, the work starts long before the sales person picks up the phone. The pitch must be individually tailored to the current and future business requirements of each organization. True cloud solutions facilitate an integrated implementation of digital transformation processes – providing the foundation for a better user experience, more flexibility, lower costs, and much more. The cloud is sold not as an end in itself, but as a result of the above-mentioned effects. Therefore, the service must be adapted to the requirements of the prospective customer and presented convincingly.

Reaching out to more decision-makers
Besides the CIO, many more stakeholders now need to be brought to the table, including the application-level department, network managers, security contacts, project managers, data protection officers, and potentially the works council. The decision-making processes involved in the purchase of a cloud service are therefore much more complex and protracted. According to a recent CEB report, the average number of decision-makers per project increased by 26 percent between 2013 and 2016.

Today, the average number of people involved in a buying decision is 6.8. Stakeholder groups are no longer as homogeneous as before, and it is much more difficult to reach consensus among a diverse group of senior executives. What is more, in addition to internal decision-makers, external decision-makers can also play a decisive role. This further increases the number of stakeholders and adds to the complexity of the decision-making processes.

To reach a consensus, a winning strategy must be acceptable to all decision-makers, whatever their backgrounds. The demands placed on sales have become inherently more complex in the age of the cloud. Salespeople who were used to selling appliances have to reinvent themselves as strategists who can balance conflicting interests and find common ground, in particular with respect to the introduction of the cloud.

Dealing with long sales cycles
CEB points out that the sales process up to closing has been prolonged by a factor of two, as it involves efforts to overcome differences of opinion as well as fine-tuning to reach a consensus. For the project to succeed, departments that have previously made separate decisions now have to come together at the table. To sell a cloud service today, sales professionals must be able to convince the entire buying center that their solution is the right one. It’s helpful if sales people can identify the subject matter expert in a negotiating team, whose vote will ultimately be decisive.

Globalization also means that the salesperson needs to take cultural sensitivities into account. It is no longer a rarity for an IT department of a global corporation to be based in Southern or Eastern Europe due to available expertise and the wage level of the workforce.

At the same time, salespeople should not lose sight of how they can act as catalysts to speed up a decision. Which different types of information do the stakeholders need? Where does leverage come into play to move the team to the next step? What conflicting interests need to be balanced?

Understanding new principles: capex vs opex, SLAs and trust
Before a company can benefit from the much-promised advantages of the cloud, it must rely on the expertise of sales, which must make the value-add clear across the organization. This is all the more important because a cloud service is not as “tangible” as hardware. Trust is built through service level agreements, reference customer discussions, and, where necessary, credit points for non-performance. A portal can provide insight into service-level availability, highlighting the continuous availability of the service or describing service failures.

As capital expenditures (capex) are converted into operating expenses (opex), another difference from license agreement-based procurement needs to be made clear: businesses pay only for the services they use, and usage can be adjusted as and when required. Regarding the data protection provisions applicable to the cloud service, consulting with the works council and understanding its concerns is recommended. A data processing agreement establishes the legal framework for cooperation with the cloud provider.

Once the effectiveness of the cloud approach has been demonstrated by a proof of concept, the cloud has basically won. After all, a test environment can be set up within a very short time. The cost of maintaining and updating hardware solutions is thus a thing of the past, which should be a compelling argument for every department from an administrative point of view.

What makes a successful salesperson?
In a nutshell, the sales manager has to convince the customer of the business value of a cloud-based solution – at all levels of the decision-making process. In this context, the personal skills to engage in multi-faceted communication with a wide range of contacts are much more relevant than before.

Emotional intelligence, as well as technical expertise in project management, should also be thrown into the mix. It’s important to take an active role at all levels of the sales process, taking account of the fact that the counterarguments of the prospective customer have to be addressed at various points on the path to digitization.

Project management plays an increasingly important role in the age of the cloud, such as keeping in touch with all stakeholders and monitoring the progress of the negotiations. Even after the project is brought to a successful conclusion, sales has to continue to act as an intermediary, and remain available as a contact to ensure customer satisfaction. This is because services can be quickly activated – and canceled.

For this reason, it’s important in the new cloud era to continue to act as an intermediary and maintain contact with the cloud operations team in the implementation phase. The salesperson of a cloud service is in a sense the account manager, who initiates the relationship and keeps it going.

Days of Our Stolen Identity: The Equifax Soap Opera

October 26, 2017 | Leave a Comment

By Kate Donofrio, Senior Associate, Schellman & Co.

The Equifax saga continues like a soap opera, Days of Our Stolen Identity.  Every time it appears the Equifax drama is ending, a new report surfaces confirming additional security issues.

On Thursday, October 12, NPR reported that Equifax took down their website, this time because of fraudulent Adobe Flash update popups on the site, initially discovered by an independent security analyst, Randy Abrams.[1]  Did the latest vulnerability mean Equifax continued with their inadequate information technology and security practices, even after being breached?  Or is it an even worse possibility: that their machines were never completely remediated after the original breach?

As it turns out, Equifax claimed they were not directly breached again; rather, one of their third-party service providers, responsible for uploading web content to Equifax's site for analytics and monitoring, was at fault.  According to Equifax, the unnamed third-party service provider uploaded the malicious code to the site.  It appears the only thing Equifax has been consistently good at is placing blame and pointing a finger in other directions.

Equifax needs to take responsibility; after all, they hired the service provider, are responsible for validating the compliance of that provider's actions within their environment, and still hold overall responsibility for their information.  This is a huge lesson for any company that attempts to pass blame to a third party.

For those that have not been keeping track, below demonstrates a rough timeline of the recent Equifax scandal:

  • Mid-May 2017 – July 29, 2017: Reported period where Equifax’s systems were breached and data compromised.
  • July 29, 2017: Equifax identified the breach internally.
  • August 1 and August 2, 2017: Executives dumped $1.78 million worth of Equifax stock: Chief Financial Officer, John Gamble ($946,374); U.S. Information Solutions President, Joseph Loughran ($584,099); and Workforce Solutions President, Rodolfo Ploder ($250,458).[2]
  • September 7, 2017: Equifax released a public statement about the breach of over 145 million U.S. consumers’ information, 209,000 credit cards, and other breaches of non-US citizen information.[3]
  • September 12, 2017: Alex Holden, founder of Milwaukee, Wisconsin-based Hold Security LLC, contacted noted cybersecurity reporter, Brian Krebs, on a discovered security flaw within Equifax’s publicly available employee portal in Argentina. The Equifax portal had an active administrative user with the User ID “admin” and the password set to “admin.”  For those of you who may be unaware, the admin/admin username and password combination is regularly used as a vendor default, and often a combination tried by users to break into systems.  The administrative access allowed maintenance of users within the portal, including the ability to show employee passwords in clear-text. [4]
  • September 14, 2017: On his blog, Krebs on Security, Brian Krebs posted an article referencing a non-public announcement Visa and MasterCard sent to banks, which stated that the “window of exposure for the [Equifax] breach was actually November 10, 2016 through July 6, 2017.”[5] (Note: Equifax still claims the breach was one big download of data in Mid-May 2017, and that the November dates were merely transaction dates.)
  • September 15, 2017: Visa and MasterCard updated the breach notification to include social security numbers and addresses.[6] They found that the breach occurred on the Equifax site where people signed up for credit monitoring.
  • September 15, 2017: Equifax Chief Information Officer, David Webb, and Chief Security Officer, Susan Mauldin retired, effective immediately.[7][8]
  • September 19, 2017: Equifax admitted they tweeted out a bogus website address at least seven times; for instance, promoting “securityequifax2017.com” instead of the correct site, “equifaxsecurity2017.com,” and thus sent customers to the wrong site. Software engineer Nick Sweeting took the opportunity to teach Equifax a lesson and created an identical site at the incorrect “securityequifax2017.com” with a scathing indictment banner at the top of the page: “Why did Equifax use a domain that’s so easily impersonated by phishing sites?”[9]
  • September 29, 2017: CEO, Richard F. Smith stepped down, though he was expected to walk away with roughly $90 million.[10]
  • September 29, 2017: Astonishingly, the Internal Revenue Service (IRS) awarded Equifax a sole source contract (not publicly bid) for roughly $7.25 million to perform identity verifications for taxpayers.[11] Just in case you were not lucky enough to be a part of the recent Equifax breach, the IRS is giving you another “opportunity.”
  • October 3, 2017: In testimony before the House Energy and Commerce Committee, former Equifax CEO Richard F. Smith blamed one person in his IT department for not patching the Apache Struts vulnerability, and thus for the entire breach.[12]
  • October 10, 2017: Krebs on Security reported the number of UK Residents hacked was 693,665, not the initial 400,000 disclosed.[13]
  • October 12, 2017: Malicious Adobe Flash code was found on Equifax’s website. Equifax blamed a third-party service provider for feeding the information to the site.
  • October 12, 2017: IRS temporarily suspended Equifax’s contract over additional security concerns.[14]

This is not the first time Equifax has been involved in a breach of customer information.  On September 8, 2017, Forbes released an article detailing prior breaches, including one in May 2016 that leaked personal information from 430,000 records of grocer Kroger's employees[15] via an Equifax site that provided employees with W-2 information.  That breach was attributed to attackers guessing the PINs used for site access to break into accounts and steal information.  Each PIN consisted of the last four digits of an employee's social security number and their four-digit birth year.
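To put the weakness of that PIN scheme in perspective, here is a minimal sketch of the brute-force keyspace it exposed. The figures are illustrative assumptions, not details of Equifax's actual implementation:

```python
# Rough keyspace estimate for a PIN built from the last four SSN digits
# plus a four-digit birth year (illustrative, not Equifax's real code).

SSN_LAST4_SPACE = 10 ** 4   # 0000-9999 if the digits are unknown
BIRTH_YEAR_SPACE = 100      # assume a plausible range, e.g. 1920-2019

total_pins = SSN_LAST4_SPACE * BIRTH_YEAR_SPACE
print(f"Full keyspace: {total_pins:,} PINs")          # 1,000,000

# Birth years are frequently public or easily found, in which case
# only the four SSN digits remain to guess.
print(f"With known birth year: {SSN_LAST4_SPACE:,}")  # 10,000
```

A million candidates, let alone ten thousand, is trivial for automated guessing against a web portal with no lockout, which is why such "derived" PINs offer essentially no protection.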

More information keeps surfacing as Equifax continues to be simultaneously scrutinized for their every move and targeted by security researchers and hackers alike.  A huge question remains: how could a company managing the information of so many people, certified compliant under several different standards including PCI DSS, SOC 2 Type II, FISMA, and ISO/IEC 27001:2013,[16] to name a few, be so negligent?

From my experience, there are a lot of large corporations out there with the mentality that they are just too big to fail, or too big to comply one hundred percent.  I have heard this mantra echoed repeatedly over the years, and every time it makes me want to scream, “you are too big not to comply!”

However, history has proven that a lot of these big corporations are in fact too big to fail.  Sure, Equifax is going to be continuously scrutinized, fined, sued, and have their name dragged through the mud.  But at the end of the day, they will still be managing the information of millions of people, not just Americans, and business will continue as usual.  They will be the butt of jokes and the subject of discussion for a while, but then the stories will fall behind other major headlines and soon all will be forgotten.

The reality is the Equifax saga is nothing new to consumers, and Equifax joins the likes of Target, Home Depot, Citibank, and many other companies who had their name plastered within headlines for major data breaches.

The compromises made some consumers think twice about using these companies, or using a credit card at their locations, but time moves on and eventually convenience always beats security.  Each of the compromised companies took a financial hit at the time, but years later they are still chugging away, some with record profits.  Sure, the damage made them reorganize and rethink security going forward, but why must consumers suffer first before these large companies take steps to protect them?  While millions of consumers could be facing identity theft or financial compromise due to the Equifax breach, Equifax's executives cashed out large amounts of stock, resigned, and will move on to the next company or retire off their riches.

What is the big picture here?  Is it true, as Equifax's ex-CEO said on the stand, that one member of their information security team caused this huge compromise of data?  Of course not, and it was ludicrous for a CEO to place blame on one member of their IT staff.  The truth is that companies juggle the pursuit of profit against the company's security.  Let's be honest: most of the time, information security spends revenue without a return.  The only time a return is realized is when a company mitigates a breach, and that information is not often relayed across an organization.

The damages incurred by consumers and even other businesses due to data breaches far outweigh the penalties the negligent companies face.  The Federal Trade Commission claims that recovering from an identity breach averages six months and 200 hours of work[17].  If only 10% of those involved in the Equifax breach have their identities compromised, using average U.S. hourly earnings, that would equate to roughly $77 billion in potential costs to the American people (14,500,000 people * 200 hours * $26.55 = ~$77 billion).  These are just averages and there are horror stories detailing people fighting for years to clear up their identity.
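The arithmetic behind that estimate can be checked with a quick sketch. All of the inputs below are the article's assumptions (a 10 percent identity-theft rate and average U.S. hourly earnings of $26.55), not authoritative data:

```python
# Reproducing the article's back-of-the-envelope cost estimate.
# Every input here is an assumption cited in the text, not measured data.

breached_people = 145_000_000     # consumers affected by the breach
identity_theft_rate = 0.10        # assume 10% suffer identity theft
hours_to_recover = 200            # FTC average recovery effort
avg_hourly_earnings = 26.55       # average U.S. hourly earnings (USD)

affected = round(breached_people * identity_theft_rate)      # 14,500,000
total_cost = affected * hours_to_recover * avg_hourly_earnings
print(f"{affected:,} people -> ${total_cost / 1e9:.1f} billion")
# -> 14,500,000 people -> $77.0 billion
```

Even this conservative scenario, which ignores direct fraud losses and the multi-year horror stories, lands at roughly $77 billion borne by consumers rather than by the company.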

Overall, there needs to be more accountability and transparency in what these corporations are doing with consumer data.  Most of these companies are going through endless audits covering different regulations and compliances, yet it does not seem to matter, as breaches continue to rise in number.

As other countries are progressively moving forward with reforms for the protection of personal information of their residents, such as the European Union’s General Data Protection Regulation (GDPR), the US continues to blindly stumble along, refusing to take a serious look at these issues.  The amount of money these companies are profiting off the data they collect is ridiculous, and when they have a breach, the fines and other punishments are a joke.

It’s time for things to change, as no company should be able to just say, “whoops, sorry about that” after a breach and move on.

What’s New with the Treacherous 12?

October 20, 2017 | Leave a Comment

By the CSA Top Threats Working Group

In 2016, the CSA Top Threats Working Group published the Treacherous 12: Top Threats to Cloud Computing, which expounds on 12 categories of security issues that are relevant to cloud environments. The 12 security issues were determined by a survey of 271 respondents.

Following the publication of that document, the group has continued to track the cloud security landscape for incidents. This activity culminated in the creation of an update titled Top Threats to Cloud Computing Plus: Industry Insights.

The update validates the continued relevance of the security issues discussed in the earlier document, and provides references and overviews of related incidents. In total, 21 anecdotes and examples are featured in the document.

The reference and overview for each anecdote and example are drawn from publicly available information.

The Top Threats Working Group hopes that shedding light on recent anecdotes and examples related to the 12 security issues will provide readers with relevant context that is current and in-line with the security landscape.