The Future of Cybersecurity

June 23, 2015

In 2013, President Obama issued an Executive Order to protect critical infrastructure by establishing baseline security standards. One year later, the government announced the Cybersecurity Framework, a voluntary how-to guide for strengthening cybersecurity. Meanwhile, the Senate Intelligence Committee voted to approve the Cybersecurity Information Sharing Act (CISA), moving it one step closer to a floor debate.

Most recently, President Obama unveiled his new Cybersecurity Legislative Proposal, which aims to promote better cybersecurity information sharing between the United States government and the private sector. As further support, the White House recently hosted the Summit on Cybersecurity and Consumer Protection at Stanford University in Palo Alto on February 13, 2015, which convened key stakeholders from government, industry and academia to advance the discussion on how to protect consumers and companies from mounting network threats.

No doubt we have come a long way, but looking at the front-page headlines today reminds us that we still have a long way to go. If the future is going to be different and more secure than today, we have to do some things differently.

I recently participated on a panel titled “The Future of Cybersecurity” at the MetricStream GRC Summit 2015, where I was joined on stage by some of today’s leading thinkers and experts on cybersecurity: Dr. Peter Fonash, Chief Technology Officer, Office of Cybersecurity and Communications, Department of Homeland Security; Alma R. Cole, Vice President of Cyber Security, Robbins Gioia; Charles Tango, SVP and CISO, Sterling National Bank; Randy Sloan, Managing Director, Citigroup; and moderator John Pescatore, Director of Emerging Security Trends, SANS Institute.

The purpose of this panel was to convene a diverse group of experts who believe in a common and shared goal – to help our customers, companies, governments and societies become more secure. This panel followed on the heels of a keynote address by Anne Neuberger, Chief Risk Officer of the NSA, who spoke about a simple challenge that we can all relate to: operations. Speaking on her experience at the NSA, Neuberger articulated that a lot of security problems can be traced back to operations, and more precisely, the idea that ‘we know what to do, but we just weren’t doing it well’ or ‘we had the right data, but the data wasn’t in the right place.’

Moderator John Pescatore from SANS Institute did an exceptional job asking the questions that needed to be asked, and guiding a very enlightening discussion for the audience. For one hour on stage, we played our small part in advancing the discussion on cybersecurity, exploring the latest threats and challenges at hand, and sharing some of the strategies and solutions that can help us all become more secure.

Here are the five key takeaways that resonated most.

 

Topic 1: Threat information sharing tends to be a one-way street. There is an obvious desire from the government to get information from private industry, but a lot more needs to be done to make this a two-way street.

According to Dr. Peter Fonash, Chief Technology Officer at the Office of Cybersecurity and Communications at the Department of Homeland Security, the DHS is looking to play a more active role in threat information sharing. To that end, the DHS is actively collecting a significant amount of information, and even paying security companies for information, including the reputation information of IP addresses. However, the government faces challenges in sharing that threat information: first, getting that information as “unclassified as possible,” and second, the many lawyers involved in making sure that everything shared is shared legally. Dr. Fonash stressed that the government faces another challenge as well: private industry thinking that government is in some way an adversary or industry competitor when it comes to threat information – this is simply not the case.

Topic 2: There are lots of new tools, the rise of automation, big data mining – but the real challenge is around talent.

Simply stated, our organizations need more skilled cybersecurity professionals than the current supply offers. For cybersecurity professionals, it is a great time to be working in this field – job security for life – but it is a bad time if you are charged with hiring for this role. Automation and big data mining tools can definitely help when they are optimized for your organization, with the right context and analysts who can review the results of those tools. According to Alma R. Cole, Vice President of Cyber Security at Robbins Gioia, if you can’t find the skill sets you need, look internally. Your enterprise architecture, business analysis, or process improvement leaders can directly contribute to the outcome of cybersecurity without themselves having a PhD in cybersecurity. While cybersecurity experts are needed, we can’t just rely on the experts. Cole makes the case that as part of the solution, organizations are building security operations centers outside of larger city centers like New York and DC – where salaries aren’t as high, and there isn’t as much competition for these roles. Some organizations are also experimenting with virtual security operations centers, which provide employees with flexibility, the ability to work from anywhere, and improved quality of life, while also providing the organization with the talent they need.

Topic 3: We are living and doing business in a global economy – we sell and buy across the world and we compete and cooperate with enemies and business partners around the world. We are trying to make our supply chains more secure but we keep making more risky connections.

According to Charles Tango, SVP and CISO at Sterling National Bank, this might be a problem that gets worse before it gets better. We’ve seen a dramatic increase in outsourcing, and many organizations have come to realize that the weakest link in the chain is oftentimes their third party. At this moment, as an industry, banks are largely reactive, layering processes, people and tools to identify and manage different risks across the supply chain. The industry needs a new approach, wherein banks can start to tackle the problem together. According to Tango, we won’t be able to solve this challenge of managing our third and fourth parties on an individual bank-by-bank basis; we have to start to tackle this collaboratively as an industry.

Topic 4: No doubt, the future of applications is changing dramatically, and evolving every day – just look at the space of mobile computing.

According to Randy Sloan, Managing Director at Citigroup, from a dev-ops automation perspective, if you are introducing well-understood components and automation such as pluggable security, you are way out in front, and you are going to be able to tighten things up and increase security. More challenging from an app-dev perspective is the pace – the rapid development and agile lifecycles that you have to keep up with. The goal is always to deliver software faster and cheaper, but that does not always mean better. Sloan advocates for balance – investing the right time in IS architecture, putting the right security testing processes in place, and, where speed is concerned, slowing things down and doing things a bit more thoughtfully.

Topic 5: We’ve got dashboards, and threat data, and more sharing than ever before. But what we need now are more meaningful approaches to analytics that aren’t in the rear view mirror.

I believe that over the next few years, organizations will become more analytics-driven, leveraging artificial intelligence, automation, machine learning and heuristic-based mechanisms. The challenge now is figuring out how to sustain that. This is the value of an ERM framework, where you can bring together different technologies and tools to get information that can be distilled and reported out. This is about managing and mitigating risk in real time, and intercepting threats and preventing them from happening rather than doing analysis after the fact.

We live in an increasingly hyper-connected, socially collaborative, mobile, global, cloudy world. These are exciting times, full of new opportunities and technologies that continue to push the boundaries and limits of our wildest imaginations. Our personal and professional lives are marked by very different technology interaction paradigms than just five years ago. Organizations and everyone within them need to focus on pursuing the opportunities that such disruption and change brings about, while also addressing the risk and security issues at hand. We must remember that the discussions, strategies, and actions of today are helping to define and shape the future of cybersecurity.

By Vidya Phalke, CTO, MetricStream

11 Advantages of Cloud Computing and How Your Business Can Benefit From Them

June 22, 2015

HOW COMPANIES USING THE CLOUD GROW 19.3% FASTER THAN THEIR COMPETITORS

While their motivations vary, businesses of all sizes, industries, and geographies are turning to cloud services. According to Goldman Sachs, spending on cloud computing infrastructure and platforms will grow at a 30% compound annual growth rate (CAGR) from 2013 through 2018, compared with 5% growth for overall enterprise IT. Cloud adoption is accelerating faster than previously anticipated, leading Forrester to recently revise its 2011 forecast of the public cloud market size upward by 20 percent. Whether you’re looking at Software-as-a-Service (SaaS), Infrastructure-as-a-Service (IaaS), or Platform-as-a-Service (PaaS), the predictions are the same: fast growth of the workloads placed in the cloud and an increased percentage of the total IT budget going toward cloud computing.
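To make the growth-rate comparison concrete, here is a small Python sketch showing how a 30% CAGR compares with 5% over the five years from 2013 to 2018. The dollar figure is an arbitrary assumption for illustration, not a number from the Goldman Sachs report.

    def compound_growth(start_value: float, cagr: float, years: int) -> float:
        """Apply a compound annual growth rate (CAGR) over a number of years."""
        return start_value * (1 + cagr) ** years

    # Illustrative only: $100 of spend in 2013 grown through 2018 at the cited rates.
    print(round(compound_growth(100, 0.30, 5), 1))  # ~371.3 -> cloud spend roughly 3.7x
    print(round(compound_growth(100, 0.05, 5), 1))  # ~127.6 -> overall IT spend ~1.3x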

[Image: Forrester cloud market sizing]

According to a study by the Cloud Security Alliance, 33% of organizations have a “full steam ahead” attitude toward cloud services and 86% of companies spend at least part of their IT budget on cloud services. IT leaders at 79% of companies receive regular requests from end users each month to buy more cloud applications, with file sharing and collaboration, communication, social media, and content sharing topping the list of the most-requested cloud services.

Numerous factors are driving cloud adoption, according to a study conducted by the market research company Vanson Bourne. “The Business Impact of the Cloud” report compiles insights from interviews of 460 senior decision-makers within the finance functions of various enterprises. The report summarized 11 drivers of cloud adoption along with quantifiable improvements these companies have achieved by deploying cloud services to improve productivity, lower cost, and improve time to market.

Though they aren’t in IT positions, the majority of these financial executives are actively involved in their organizations’ discussions about cloud strategy. Their perspective on cloud computing includes benefits to the business as a whole. Companies that adopted cloud services experienced a 20.66% average improvement in time to market, an 18.80% average increase in process efficiency, and a 15.07% reduction in IT spending. Together, these benefits led to a 19.63% increase in company growth.

[Image: Cloud's measurable business impact]

The Vanson Bourne report identified eleven advantages of cloud computing that organizations are experiencing today, leading to quantifiable improvements in their businesses:

1. Fresh Software

With SaaS, the latest versions of the applications needed to run the business are made available to all customers as soon as they’re released. Immediate upgrades put new features and functionality into workers’ hands to make them more productive. What’s more, software enhancements are typically released quite frequently. This is in contrast to home grown or purchased software that might have major new releases only once a year or so and take significant time to roll out.

2. Do more with less

With cloud computing, companies can reduce the size of their own data centers — or eliminate their data center footprint altogether. Reducing the number of servers, the software costs, and the staff required can significantly reduce IT costs without impacting an organization’s IT capabilities.

3. Flexible costs

The costs of cloud computing are much more flexible than traditional methods. Companies only need to commission – and thus only pay for – server and infrastructure capacity as and when it is needed. More capacity can be provisioned for peak times and then de-provisioned when no longer needed. Traditional computing requires buying capacity sufficient for peak times and allowing it to sit idle the rest of the time.

4. Always-on availability

Most cloud providers are extremely reliable in providing their services, with many maintaining 99.99% uptime. The connection is always on and as long as workers have an Internet connection, they can get to the applications they need from practically anywhere. Some applications even work off-line.

5. Improved mobility

Data and applications are available to employees no matter where they are in the world. Workers can take their work anywhere via smart phones and tablets—roaming through a retail store to check customers out, visiting customers in their homes or offices, working in the field or at a plant, etc.

6. Improved collaboration

Cloud applications improve collaboration by allowing dispersed groups of people to meet virtually and easily share information in real time and via shared storage. This capability can reduce time-to-market and improve product development and customer service.

7. Cloud computing is more cost effective

Because companies don’t have to purchase equipment and build out and operate a data center, they don’t have to spend significant money on hardware, facilities, utilities and other aspects of operations. With traditional computing, a company can spend millions before it gets any value from its investment in the data center.

8. Expenses can be quickly reduced

During times of recession or business cut-backs (like the energy industry is currently experiencing), cloud computing offers a flexible cost structure, thereby limiting exposure.

9. Flexible capacity

Cloud is the flexible facility that can be turned up, down or off depending upon circumstances. For example, a sales promotion might be wildly popular, and capacity can be added quickly to avoid crashing servers and losing sales. When the sale is over, capacity can shrink to reduce costs.
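As a rough illustration of this elasticity, the Python sketch below shows a toy threshold-based scaling rule. The thresholds and server counts are invented for the example; real cloud platforms express the same idea through their own autoscaling policies.

    def desired_capacity(current_servers: int, avg_cpu_pct: float,
                         scale_up_at: float = 70.0, scale_down_at: float = 25.0,
                         minimum: int = 2, maximum: int = 50) -> int:
        """Toy autoscaling rule: add a server when average CPU is high, remove one
        when it is low, and always stay within a minimum/maximum range."""
        if avg_cpu_pct > scale_up_at:
            return min(current_servers + 1, maximum)   # promotion traffic spikes: scale up
        if avg_cpu_pct < scale_down_at:
            return max(current_servers - 1, minimum)   # sale is over: scale back down
        return current_servers

    # Example: 10 servers at 85% CPU during a popular promotion -> 11 servers.
    print(desired_capacity(10, 85.0))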

10. Facilitate M&A activity

Cloud computing accommodates faster changes so that two companies can become one much faster and more efficiently. Traditional computing might require years of migrating applications and decommissioning data centers before two companies are running on the same IT stack.

11. Less environmental impact

With fewer data centers worldwide and more efficient operations, we are collectively having less of an impact on the environment. Companies who use shared resources improve their ‘green’ credentials.

Despite these benefits, the Cloud Security Alliance has identified several barriers holding back cloud adoption. At 73% of companies, the security of data is the top concern holding back cloud projects. That’s followed by concern about regulatory compliance (38%), loss of control over IT services (38%), and knowledge and experience of both IT and business managers (34%). As organizations address their security and compliance concerns by extending corporate policies to data in the cloud and invest in closing the cloud skills gap, they can more fully take advantage of the benefits of cloud services.

Written by:


Cameron Coles: Sr. Product Marketing Manager at Skyhigh. Interested in data that reveals the promise and peril of the cloud economy.

6 Security Tips From the Gartner Security & Risk Management Summit

June 18, 2015

Posted by Christopher Hines

As I was sitting in the keynote session here at Gartner’s Security & Risk Management Summit, listening to the analysts speak about what’s necessary for greater enterprise security, it became clear that one word was ever-present in each of the Gartner analysts’ speeches. That word was resiliency. The analysts made the point that the ability to absorb hits and accept risk while focusing on the overall success of the company was a must for all organizations in today’s breach-prone world.

They spoke about the need to move from prevention to detection and response. They called for security professionals to stop thinking as pure defenders and start thinking like business facilitators. Most importantly, they described how IT security leaders must “seize the opportunity” given to them by the massive headline breaches we see each week, as perverse as that may sound.

One analyst used the Netherlands’ water control system as a prime example of resiliency, citing how the system opens and closes based on the level of the water, allowing ships to pass through once they can safely do so, while also maintaining safe water levels and controlling the currents as the water nears the shores of the Netherlands. In short, the technology is resilient. Enterprises must be able to do the same.

He mentioned how access control and authentication, if not used properly, can cause extra steps for employees, slowing down core business functions in the process, while posing only a minor setback for cyber criminals attempting to steal corporate data. Companies must be able to roll with the punches, and accept risk as a part of the security landscape.

The analyst then went on to speak about each of the 6 core principles of resiliency that all enterprise securers should abide by in order to gain the trust of the C-suite. Here’s the breakdown of each principle he mentioned during his presentation:

  1. Don’t just check the boxes; think in risk-based terms
  2. Move from a technology focus to an outcome-driven focus
  3. Shift from being the defender of data to the facilitator of core business functions
  4. Don’t just control information; understand its flow in order to secure it more effectively
  5. Shift from a pure technology focus to more of a people focus, and work to gain trust
  6. Move from prevention to detection and response so that you can react faster and limit damage

It was refreshing to hear the analyst speak about the need for resiliency, and break down the 6 core principles and what they mean to enterprise security teams. These principles can now act as a guide for all enterprises still questioning the need for a new approach to security, something some securers haven’t yet evolved to, stuck as they are in the more traditional security mindset.

IT securers now have unprecedented power within their organizations. The massive breaches we have all grown too familiar with continue to pile up, forcing security to become a board-room-level discussion. The C-suite is now turning to the IT security team for the answer to the question of how to protect data while also enabling the business to function and grow. You, as the securer, must be prepared for the task. You must be able to speak in terms that the C-suite is familiar with and is willing to listen to with an open mind.

Keep these 6 principles in mind, and seize the opportunity.

Chris Hines

Product Marketing Manager | Bitglass

Cloud Security Alliance and Palo Alto Networks Release Security Considerations for Private vs. Public Clouds

June 17, 2015


By Larry Hughes, Research Analyst, Cloud Security Alliance

Cloud computing has the potential to enhance collaboration, agility, scale and availability, and provides opportunities for cost reduction through optimized and efficient computing. The cloud trend presents a momentous opportunity to revisit not only how we think about computing, but also how we think about information security.

The Cloud Security Alliance (CSA) recently teamed up with Palo Alto Networks to produce a new whitepaper titled “Security Considerations for Private vs. Public Clouds.” For purposes of definition, a public cloud deployment occurs when a cloud’s entire infrastructure is owned, operated and physically housed by an independent Cloud Service Provider. A private cloud deployment consists of a cloud’s entire infrastructure (e.g., servers, storage, network) owned, operated and physically housed by the tenant business itself, generally managed by its own IT infrastructure organization.

While the title of the paper implies a primary focus on security, we took the opportunity to expand the conversation and incorporate a wider set of considerations including:

  • Business and legal topics, including contracts, service level agreements, roles and responsibilities, and compliance and auditing. We touch on the importance of establishing principal business and legal feasibility early on in the process, before investing too much in technical requirements.
  • Physical and virtual attack surface considerations including a look at vulnerabilities that are accessible to would-be attackers.
  • Operational issues, including data migration, change management, logging, monitoring and measuring, and incident management and recovery, and the roles they play in determining which cloud deployment makes the most sense for an organization.

Cloud security is one of the most critical considerations, regardless of whether the deployment is public or private. But security is not black and white, and no two companies looking to deploy a cloud infrastructure do so for exactly the same reasons. Wise organizations will take the long view and invest in security accordingly. As Thomas Edison once said, “Opportunity is missed by most people because it is dressed in overalls and looks like work.”

On Tuesday, June 23, Matt Keil, Palo Alto Networks Director of Product Marketing for Data Center, and I will be hosting a webinar to discuss the white paper in-depth and look at security considerations for public and private clouds.  For more information and to register for the webinar, click here.

For more information on Palo Alto Networks, please visit www.paloaltonetworks.com.

 

Google leads the way out of the castle to the cloud

June 11, 2015

By Mike Recker, Manager – Corporate Systems Engineers, Code42

Traditional IT infrastructure is built to centralize data and prevent intrusion. Like a bank vault or a defended castle in medieval times, valuables are kept in one repository and fortified to keep intruders out. In this scenario, the queen can behold all that she owns and keep her enemies at bay.

The centralized storage model worked well until the early 2000s. But the world has changed. A mobilized workforce no longer toils inside the castle walls—and they demand streamlined workflow from everywhere, which makes tunneling into the castle (via VPN connection) to utilize the tools of their trade not just inefficient, but irrelevant.

Accepting the brave new world of security
As people, applications, e-mail servers, databases and virtual computing move outside the corporate firewall, and companies accept the necessity of shifting their security practices, big questions arise. How should applications be delivered? Where should data be protected? Google has an idea:

Virtually every company today uses firewalls to enforce perimeter security. However, this security model is problematic because, when that perimeter is breached, an attacker has relatively easy access to a company’s privileged intranet. As companies adopt mobile and cloud technologies, the perimeter is becoming increasingly difficult to enforce. Google is taking a different approach to network security. We are removing the requirement for a privileged intranet and moving our corporate applications to the Internet.

Google gets it. “The perimeter is no longer just the physical location of the enterprise, and what lies inside the perimeter is no longer a blessed and safe place to host personal computing devices and enterprise applications.” In fact, Google declares that the internal network (those drafty stone rooms inside the castle) is as dangerous as the Internet. And Google should know.

Let down the drawbridge; beef up the secret handshake
Google’s BeyondCorp initiative depends on device and user credentials—regardless of a user’s network location—to authenticate and authorize access to enterprise resources.

As a result, all Google employees can work successfully from any network, and without the need for a traditional VPN connection into the privileged network. The user experience between local and remote access to enterprise resources is effectively identical, apart from potential differences in latency.
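To illustrate the idea, here is a minimal Python sketch of the kind of decision a BeyondCorp-style access proxy makes: access depends on an authenticated user and a trusted, managed device, never on network location. The record fields and the APP_POLICY rules are hypothetical assumptions for the example, not Google's actual implementation.

    from dataclasses import dataclass

    @dataclass
    class User:
        username: str
        authenticated: bool      # e.g. SSO plus a second factor succeeded
        groups: set

    @dataclass
    class Device:
        device_id: str
        has_valid_cert: bool     # device certificate issued by the corporate CA
        is_managed: bool         # enrolled in inventory and meeting patch policy

    # Hypothetical per-application rules; names are illustrative only.
    APP_POLICY = {
        "expense-app":   {"group": "employees",   "require_managed_device": True},
        "source-review": {"group": "engineering", "require_managed_device": True},
    }

    def authorize(user: User, device: Device, app: str) -> bool:
        """Grant access based on user identity and device trust, not network location."""
        policy = APP_POLICY.get(app)
        if policy is None or not user.authenticated:
            return False
        if policy["require_managed_device"] and not (device.is_managed and device.has_valid_cert):
            return False
        return policy["group"] in user.groups

    # Same decision whether the request comes from the office LAN or a coffee shop.
    alice = User("alice", authenticated=True, groups={"employees", "engineering"})
    laptop = Device("LT-1234", has_valid_cert=True, is_managed=True)
    print(authorize(alice, laptop, "source-review"))  # True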

Most companies will balk at the idea of enabling workers to access enterprise apps and data from anywhere without a VPN connection—much less store the data they produce outside the firewall, on the endpoint and in the cloud.

Change is hard, inevitable and here
The idea of enabling workers to store data on the endpoint with cloud backup goes against “mature” information security policies. IT will point to rules that require users to back up to the central file server where data can be monitored and protected. When the employee fails to follow policy and loses data as a result of everyday disasters such as file overwrite, malware, ransomware, device loss or theft, IT can shrug it off because the employee ignored the policy. Or can they?

The biggest mistake IT makes is assuming the data is where it should be because people were told to put it there. When “process” fails, like it did at Sony, Target, Anthem and Home Depot, what should IT do to save face?

First, dust off the resume. Sadly, people lost jobs and in some cases, careers, because they believed the perimeter approach to collecting and securing data still worked.

Second, stop looking for a stronger firewall; secure the data where it lives on servers, desktops, laptops and mobile devices.

Third, understand that the enemy is outside and inside the castle. Make sure data is collected, visible and auditable so it can be restored to a known good state from a secure copy. In cases of breach and leakage, protecting every device with a backup assures faster inventory and remediation and substantial cost and productivity savings during data recovery.

Six data security practices for the brave new world
Plainly, data centers surrounded by defensive measures have failed to keep data secure. What does work is a security approach in which the data on every device is protected and backed up—whether or not the device is on the corporate network. The only thing missing in Google’s “trust but verify” approach is clear guidance on data backup and management.

That’s where we come in: We recommend these modern, proven data security practices for endpoints:

  1. Secure every device with full disk encryption (FDE) to disable access to data should the device be lost or stolen—inside or outside the organization.
  2. Deploy automatic, continuous backup of every device, every file and every version so data is recoverable in any event (a minimal sketch of this idea follows the list).
  3. Enable workers to work the way they do. Abandon processes that require antiquated behaviors and replace them with automated agents that work lightly and quietly in the background.
  4. Keep encryption keys on premises to prevent unauthorized access from anyone and any agency.
  5. Trust but verify every user and device before enabling access to the network and data.
  6. Implement data governance tools that enable data visibility and analytics for auditing, data tracing and fast remediation.
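As a minimal sketch of practice #2 (continuous, versioned endpoint backup), the Python loop below copies any file whose contents have changed into a timestamped version under an archive folder. It is a toy illustration, not a real endpoint-backup agent; the paths and polling interval are arbitrary assumptions.

    import hashlib
    import shutil
    import time
    from pathlib import Path

    def continuous_backup(source: Path, archive: Path, interval_seconds: int = 60) -> None:
        """Toy continuous-backup loop: every polling interval, copy each new or
        modified file under `source` into `archive` with a timestamp suffix so
        that every version remains restorable."""
        archive.mkdir(parents=True, exist_ok=True)
        seen = {}                                  # last known content hash per file
        while True:
            for path in source.rglob("*"):
                if not path.is_file():
                    continue
                digest = hashlib.sha256(path.read_bytes()).hexdigest()
                if seen.get(path) != digest:       # new file or contents changed
                    stamp = time.strftime("%Y%m%dT%H%M%S")
                    shutil.copy2(path, archive / f"{path.name}.{stamp}")
                    seen[path] = digest
            time.sleep(interval_seconds)

    # Example call (hypothetical paths):
    # continuous_backup(Path("~/Documents").expanduser(), Path("/backups/laptop-01"))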

When you live by these security practices in the brave new world, you’ll sleep better at night—even when the drawbridge is down.

Three Quick Cloud Security Wins for Enterprise IT

June 10, 2015

By Krishna Narayanaswamy, Chief Scientist, Netskope

Today we released our Cloud Report for Summer 2015 – global as well as Europe, Middle East and Africa versions. Whereas in prior reports, we shared our top findings about usage, activities, and policy violations across enterprises’ cloud apps, in this report (and going forward!) we are matching those findings with a set of “quick wins,” or recommendations for how to mitigate cloud risk and protect data.

This season’s report focuses heavily on cloud data loss prevention (DLP). In our cloud, we identify policy violations for DLP profiles, including personally identifiable information (PII), payment card industry information (PCI), protected health information (PHI), source code, profanity, and “confidential” or “top secret” information, both at rest in and en route to or from cloud apps.

Two of the most dramatic findings in this report: of the content at rest in sanctioned cloud storage apps, 17.9 percent violated a DLP policy, and of those files, more than one out of five (22.2 percent) were exposed publicly or shared with at least one person outside of the corporate domain. Those are both huge numbers, and easily fixable. This leads us to quick win #1: Discover sensitive content in your sanctioned apps and eliminate public access. Don’t forget to notify internal collaborators.
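A highly simplified sketch of what that discovery step involves is shown below: scanning files in a synced cloud-storage folder for patterns that map to DLP profiles such as PII or payment card data. The regexes and folder path are illustrative assumptions; real cloud DLP engines rely on validated detectors, proximity rules, and fingerprinting rather than bare regular expressions.

    import re
    from pathlib import Path

    # Illustrative patterns only; production DLP uses validated detectors
    # (e.g. Luhn checks for card numbers), not bare regexes.
    DLP_PATTERNS = {
        "US_SSN":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){15,16}\b"),
        "EMAIL":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    }

    def scan_file(path: Path) -> dict:
        """Count matches per DLP profile in a single text file."""
        text = path.read_text(errors="ignore")
        return {name: len(rx.findall(text)) for name, rx in DLP_PATTERNS.items()}

    def scan_share(root: str) -> None:
        """Flag files in a (hypothetical) synced cloud-storage folder that match any
        profile; flagged files are candidates for removing public or external sharing."""
        for path in Path(root).rglob("*.txt"):
            hits = {name: count for name, count in scan_file(path).items() if count}
            if hits:
                print(f"{path}: {hits}")

    if __name__ == "__main__":
        scan_share("./synced-cloud-drive")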

For DLP violations in content at rest and en route, we looked at category and activity. The vast majority (90 percent) of these violations occurred in the Cloud Storage category, and primarily in the activities “upload” and “download.” The other categories that have DLP violations include Webmail, Social Media, and CRM, and top DLP-violating activities vary depending on the category, e.g., “post” in social and “download” in CRM. This brings up quick win #2: Enforce your cloud DLP policies on data-compromising activities in apps containing sensitive data. Start where most violations occur: uploads and downloads in Cloud Storage.

For the first time since we’ve been releasing this report, we noticed a decline in the average apps per enterprise. They went from 730 in our last report to 715. Anecdotally, our customers are getting more serious about consolidating apps and standardizing on their corporate-sanctioned ones. They’re doing this through policy, education, and user coaching. We believe the decline is a direct result of this effort, which leads us to quick win #3: Consolidate on popular apps that are also enterprise-ready. Use app discovery as a guide, and get there with user coaching.

We also have a global and an EMEA version of our infographic available on our website.

Are you missing the most versatile endpoint security tool?

June 8, 2015

By , Integrated Marketing Manager, Code42

Lots of companies have endpoint security strategies. We know, because we’ve asked them. We’re using Backup Awareness Month to help businesses evaluate the obvious and hidden benefits of backup within a larger security plan.

Hardware fails. It’s inevitable.
12,000 hard drives will fail this week. Anyone in IT knows hardware failure and retirement are inevitable in technology. With the right backup, there will never come a time when you have to spend thousands of dollars on data recovery or tell users there’s no hope of restoring files.

 

2015 called and “there’s an app for that”—continuous endpoint backup.

Don’t play by the ransomer’s rules.
New malware is born every 4 seconds. Yeah. It’s a cruel world online. You’ll never be 100% impenetrable, but you won’t have to play by the ransomer’s rules if you get hit. Just restore destroyed files to the last known good state—prior to infection.
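As a minimal sketch of that restore step, the Python helper below picks the newest archived copy of a file taken before a given infection time. It assumes versions are stored with a timestamp suffix such as report.docx.20150601T120000, a hypothetical naming scheme for illustration rather than any vendor's actual format.

    from datetime import datetime
    from pathlib import Path

    def latest_clean_version(archive: Path, filename: str, infected_at: datetime) -> Path:
        """Return the newest archived version of `filename` created before `infected_at`.
        Versions are assumed to be named '<filename>.<YYYYMMDDTHHMMSS>'."""
        candidates = []
        for version in archive.glob(f"{filename}.*"):
            stamp = version.name[len(filename) + 1:]       # text after the timestamp dot
            try:
                taken = datetime.strptime(stamp, "%Y%m%dT%H%M%S")
            except ValueError:
                continue                                    # not one of our timestamped copies
            if taken < infected_at:
                candidates.append((taken, version))
        if not candidates:
            raise FileNotFoundError(f"no clean copy of {filename} before {infected_at}")
        return max(candidates)[1]                           # last known good state

    # Example (hypothetical archive path):
    # latest_clean_version(Path("/backups/laptop-01"), "report.docx", datetime(2015, 6, 1, 12, 0))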

Back it up before you lock it down.
Without reliable, up-to-date backups, full disk encryption is a bit too secure. You’ve succeeded in restricting unwelcome access to corporate data, but you’ve risked locking your own users out of their files in the process. Think about it: if the computer gets damaged, data may become unreadable—even to those with permission to view it.

That’s why so many enterprises mandate that IT back up endpoints before locking them down with full disk encryption software.

To err is human. To recover is divine.
Users make mistakes—they modify read-only files and forget to save to the shared drive; they spill on, drop, lose, misplace and misuse their devices. You know the saying, “you can’t teach an old dog new tricks”? It applies to the modern workforce and the way they work. If it can happen, it will. Continuous endpoint backup makes files available to restore when user errors happen.

The most powerful tool in the box saves data, money, time and a lot more.
Endpoint backup is a lot bigger than a copy in the cloud. It gives the enterprise assurances that it can recover from known threats. With the right backup solution, you’ll have the antidote when data loss strikes. Now that’s a lot to love.

CSA Establishes Cloud Data Governance Working Group and Releases Governance Framework

June 4, 2015

By J.R. Santos, Vice President/Research and Member Services, Cloud Security Alliance

It is becoming increasingly difficult to protect customer data in the cloud, which in turn is causing more and more cloud providers and cloud-consuming organizations to embrace data governance strategies. To address this need, the Cloud Security Alliance (CSA) recently created the Cloud Data Governance 2.0 working group.

The Cloud Data Governance working group has been created to design a universal set of principles and map them to emerging technologies and techniques for ensuring the privacy, confidentiality, availability, integrity and security of data across private and public clouds. The group has recently released a data governance framework to ensure the privacy, availability, integrity and overall security of data in different cloud models. These will feed into the GRC stack and can be implemented as controls across CSA’s CAIQ, CCM and STAR.

The Cloud Data Governance working group will look to develop thought leadership materials to promote CSA’s leadership across the spheres of data privacy, data protection and data governance. One key issue is that the over-emphasis on technology controls often leads to underlying weaknesses in processes. The group will work to harmonize data privacy regulations to a set of data protection principles that can help cloud consuming organizations and cloud service providers meet new data privacy requirements in a more efficient and proactive manner.

Chaired by Evelyn de Souza of Cisco, the group comprises representatives from across the industry, with collaboration between key industry leaders from different verticals, academia, industry analyst associations and vendor subject matter experts.

The Governance Framework is tied to the CSA Cloud Controls Matrix and examines three phases of governance:

  1. Plan (Plan & Organize)
  2. Do (Acquire and Implement, Deliver and Support)
  3. Check, Act (Monitor and Evaluate)

The Cloud Data Governance working group has some exciting research coming up later in 2015, including a review and streamlining of the values of security risk management, going from ad hoc to optimal. Research on data privacy – measuring changing perceptions through a data heat index – is also scheduled for release.

If you are planning to attend Cloud Expo in New York, you are invited to attend a presentation by Evelyn that will focus on how to set up a cloud data governance program, spanning everything from setting up an executive board to ensuring the availability, integrity, security and privacy of cloud data through its lifecycle.

To learn more about the Cloud Data Governance 2.0 working group, please join the LinkedIn group: CSA Cloud Data Governance Working Group or join the mailing list.

 

 

Savvy Businesses Leverage Enterprise Cloud PaaS

June 3, 2015

By Rajesh Raman, Vice President/Zaplet, MetricStream

Imagine a workshop full of tools: hammers, wrenches and screwdrivers. These simple tools can be used on a variety of materials: wood, brick, polymer and so on. But are these basic tools the best, and sufficient, for all materials and all projects? No, some projects require more specialized tools.

In the same way, an all-purpose Platform-as-a-Service (PaaS) is fine for building general applications from the ground up, but specialized areas demand a different and more specialized set of tools. “Enterprise PaaS” platforms are purpose-built for a class of applications, and provide the fundamental functions and intelligent building blocks to meet the needs of that class of applications. Salesforce.com is an example of Enterprise PaaS in the customer relationship management (CRM) space.

There’s a reason companies have begun adopting Enterprise PaaS solutions: they enable rapid development and deployment of domain-specific applications that meet their unique needs and characteristics. In addition, it becomes possible to create a wide range of applications that share data and collaborate in a more seamless and integrated manner than ever before. These applications can be tailored to a specific company’s needs, such as compliance with company-specific policies or unique industry regulations.

Enterprise platforms have matured, bringing a vast amount of specialized and real-world expertise into their particular spaces. For example, in the Governance, Risk, and Compliance (GRC) space, many governance and operational nuances (e.g., risk and issue management, audit) cut across various domains and functions. A GRC enterprise platform leverages years of global GRC expertise and provides an established set of core functional and data objects, database schemas, forms and workflows that become the basic building blocks on which new applications can be developed.

Light-bulb moment


When it comes to Governance, Risk, and Compliance, there is no one-size-fits-all approach. For example, mid-tier banks face challenges in risk management similar to what the big banks face, but with subtle variations. Companies of all sizes and industries are increasingly leveraging applications that are built on top of a flexible GRC platform. This approach helps address the unique requirements of mid-tier banks with very targeted applications. For example, a mid-tier bank using a GRC platform-enabled Risk Management App can easily extend and integrate that application with others for audit, policy management, and third-party vendor management, especially as the company’s needs and requirements evolve.

Another example is a company that has leveraged a GRC platform approach for incident management. Their “light-bulb moment” occurred when they realized it made more sense to do this from a mobile phone. Their Mobile Incident Management app leverages the sophisticated capabilities of the enterprise GRC platform, which can be accessed seamlessly and in real time through a mobile phone interface.

The real benefit of an Enterprise GRC platform is that the new leverages the old. A platform approach provides a way to cross-leverage intelligence across applications, and also offers a more integrated and unified end-to-end view. This robust and highly flexible model has proven to offer a clear value proposition to the market.

Enterprise platforms also open up opportunities for partners who want to leverage their expertise in some area, and monetize it. For example, a company may be considered an expert in energy regulations (e.g., NERC/CIP), but simply cannot deliver its expertise to everyone on a one-on-one basis. It needs a platform to build a custom application for this market that can be scaled and delivered to a larger customer base. In this case, the company built its own application on top of the GRC platform that it can sell to its customers. This has become a great way for organizations to build and sell apps, provide the market with real value and also provide the organization with a new revenue stream.

Zaplet
As I mentioned above, Salesforce.com is a cloud-based platform-as-a-service in the CRM space. MetricStream’s Zaplet is similar for the GRC space. Zaplet allows partners and customers of MetricStream to build their own targeted GRC applications, either by extending core functions or adding specialized content, thus creating a thriving ecosystem of hundreds of thousands of business applications.

Zaplet PaaS provides rich development tools, such that a user rarely needs to write additional code. The company can say: “we want to use this data object,” or “we want to extend that attribute,” and then they can build a workflow, create a custom form, and as simple as that, they have a new application that can help them successfully solve their specific GRC problem.
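As a purely hypothetical illustration of that declarative style (not Zaplet's actual tooling or API), an application assembled from platform building blocks might boil down to a structure like the following, which the platform would interpret rather than requiring custom code.

    # Hypothetical, declarative-style definition of a GRC app built from platform
    # building blocks: a reused data object, extended attributes, a workflow and a form.
    mobile_incident_app = {
        "extends_object": "Incident",                      # reuse a core platform data object
        "added_fields": [
            {"name": "reported_via_mobile", "type": "boolean"},
            {"name": "geo_location",        "type": "string"},
        ],
        "workflow": ["Report", "Triage", "Investigate", "Remediate", "Close"],
        "form": {
            "title":  "Mobile Incident Report",
            "fields": ["summary", "severity", "geo_location", "attachments"],
        },
        "roles": {"reporter": ["Report"], "risk_team": ["Triage", "Investigate"]},
    }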

Challenges
The challenges for Enterprise PaaS are similar to those for general-purpose PaaS, namely scalability, security and availability. For this reason, enterprise platform providers need to have excellent data centers, with sophisticated access control and security architecture, expert ways of securing data, and proper segregation of multiple customers’ data.

Another challenge for enterprise platforms is how to make the development tool rich enough, with everything that business users will need—and yet make it simple, intuitive and easy to use, such that no programming training is required.

A GRC platform approach is viewed as the solution, making available all of the building blocks needed for GRC application development: compliance, risk, audit, issue management, third-party management, reporting, dashboards, workflows, data functional objects and more.

This post originally appeared on CloudTweaks.

CipherCloud Risk Lab Details Logjam TLS Vulnerability and Other Diffie-Hellman Weaknesses

June 1, 2015

CipherCloud Lab notifies customers that 1006 cloud applications are vulnerable to Logjam and other DH weaknesses; 181 cloud applications move from a low/medium risk score to the high-risk category; and 946 cloud applications’ risk scores increase.

 

By David Berman, Director of Cloud Visibility and Security Solutions, CipherCloud

CipherCloud Risk Intelligence Lab™ has performed a detailed analysis of thousands of cloud applications and today has pushed new intelligence to hundreds of customers with access to cloud risk scoring via the company’s CloudSource™ Knowledge Base.

The Logjam vulnerability made public this week affects the Transport Layer Security (TLS) protocol used to encrypt traffic between client devices and the web, VPN and email servers used by cloud providers and enterprises. The vulnerability allows an attacker to lower the strength of encryption, enabling the sending and receiving streams of communication to be more easily cracked. Academics showed that, via the vulnerability, a connection secured with a 2048-bit Diffie-Hellman exchange can be downgraded by automated exploits to a far weaker level of encryption. The attack does not rely on social engineering, such as getting users to click on a link in an email; in previous attacks, an element of social engineering was required.

The exploit can be accomplished when the attacker and the user are on the same network – a common scenario when users access cloud applications or corporate networks over public WiFi.

CipherCloud researchers have found 181 cloud applications that can be exploited using publicly known techniques available to any hacker, while nation states or other actors with sufficient computing power can theoretically attack 825 cloud applications.

In addition, CipherCloud researchers detailed that many applications are vulnerable to cross-domain attacks when the logjam vulnerability is found on the web site landing domain even when the site’s login domain is not vulnerable. Post login, users that return to the vulnerable landing domain can have their session encryption automatically downgraded by an attacker if that domain presents the export-grade Diffie-Hellman cipher suite.
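A rough way to see whether a given domain still offers export-grade DHE suites (the weak suites Logjam downgrades connections to) is to attempt a handshake restricted to those suites, as in the hedged Python sketch below. This is a quick probe under stated assumptions, not CipherCloud's scanning methodology, and it only works if the local OpenSSL build still ships export ciphers, which modern builds do not; the tooling at weakdh.org is the authoritative check.

    import socket
    import ssl

    def accepts_export_dhe(host: str, port: int = 443, timeout: float = 5.0) -> bool:
        """Return True if the server completes a handshake when the client offers
        only export-grade DHE cipher suites."""
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE            # only the negotiated cipher matters here
        try:
            ctx.set_ciphers("EXP+EDH")             # restrict the offer to export DHE suites
        except ssl.SSLError:
            raise RuntimeError("local OpenSSL no longer supports export-grade ciphers")
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                with ctx.wrap_socket(sock, server_hostname=host):
                    return True                    # server accepted an export DHE suite
        except (ssl.SSLError, OSError):
            return False                           # handshake refused for these suites

    if __name__ == "__main__":
        print(accepts_export_dhe("example.com"))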

The attacks are serious. A special concern is that if a credential is stolen, it may be used for single sign-on to multiple applications or reused in other cloud applications (studies have found that users reuse passwords between sites 30–40% of the time).

Detailed steps to remediate the vulnerability can be found at https://weakdh.org.

CipherCloud Lab will provide further updates as providers address the vulnerability.

Summary of Findings

  • 1006 cloud applications discovered with logjam vulnerability and other DH weaknesses
  • 181 cloud applications can be exploited by normal attacker (computing power available to anyone)
  • 825 cloud applications can theoretically be exploited by nation states or attackers with required computing power (capability to break encryption beyond 512-bits)

181 Cloud Applications with Logjam Vulnerability by Category

 


825 Cloud Applications with DH Weakness by Category

 
