Identity Management Plays a Key Role in Mobile Device Management (MDM)

By: Dan Dagnall, Chief Technology Strategist, Fischer International Identity

As BYOD and other mobile device initiatives take hold, identity management will, sooner rather than later, once again be considered as an enforcement mechanism, and rightly so.

Identity and access management (IAM) has grown up over the years. Its early beginnings were in metadata management and internal synchronization of data to and from target applications. Lately, it seems one cannot roll out a new technology or service without considering the effect IAM will have on the initial rollout, as well as on ongoing enforcement of security, access, and policy evaluations.

IAM is becoming the hub for all things security, and so it should be for mobile device management. MDM provides an administrative interface for managing server-related components, as well as self-service interfaces and over-the-air provisioning. All of these components are key to a successful BYOD strategy, and all of them should treat IAM as the authority in the overall decision-making process, including:

  1. When to provision the device (including the association of the device to the end user)
  2. When to lock/wipe the device
  3. How to enable users to request apps for download, and which apps they qualify for
  4. How to allow users to leverage the device for multi-factor authentication

When to provision the device (including the association of the device to the end user)

As a user's identity is created within an organization, MDM actions are needed to secure the endpoint device and associate that device with the end user for BYOD initiatives. MDM technology provides the ability to provision apps to the device. In the IAM world, the same exercise occurs when a new user is detected and evaluated against access policies to validate and define the user's identity in terms of application access and the exact permissions that identity is to be granted.

Given that IAM should be looked to and leveraged as the authority over mobile device provisioning, organizations in the above scenario should not reinvent the wheel. Rather, they should consider extending their existing IAM resource pool (i.e., those items controlled by IAM and the policies defined within IAM) to include MDM servers, administrators, and end-user device management. I am not saying that IAM should replicate or mirror the functionality provided by MDM servers and management consoles, but I am advocating that those interfaces and servers fall under the umbrella of enforcement that IAM already provides for your organization. In the IAM world, this consists of an integration component that enables external enforcement of MDM-related policies and actions originating from the IAM stack.
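
If the IAM stack already owns policy evaluation, the integration component described above can be little more than a thin event handler that pushes the results of that evaluation to the MDM server. Below is a minimal sketch, assuming a generic REST-style MDM interface; the endpoints, payloads, and the access_policy helper are hypothetical placeholders, not any particular vendor's API.

    import requests  # assumed HTTP client; endpoints and payloads below are hypothetical

    MDM_BASE_URL = "https://mdm.example.com/api/v1"  # placeholder MDM server

    def on_identity_provisioned(user, device, access_policy):
        """Hypothetical IAM hook: runs after a new identity passes policy evaluation."""
        # Associate the BYOD device with the user's identity record.
        requests.post(f"{MDM_BASE_URL}/devices/{device['id']}/owner",
                      json={"user_id": user["id"]}, timeout=10)

        # Provision only the apps the user's roles qualify for, per IAM policy.
        entitled_apps = access_policy.entitled_apps(user["roles"])
        requests.post(f"{MDM_BASE_URL}/devices/{device['id']}/apps",
                      json={"install": entitled_apps}, timeout=10)

The point of the sketch is the direction of control: the MDM server executes the actions, but the IAM policy evaluation decides what those actions are.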

When to lock/wipe the device

Blocking users from accessing their applications (for multiple reasons) is a fundamental capability of IAM. There is no need to deploy new MDM technologies, write new integration points, or, again, reinvent the wheel when it comes to disabling or locking a user out of an application on the device, or out of the device itself. In many cases, there are automated processes in place on the identity side that will immediately disable or lock a user out of the system if certain criteria are met. For instance, a termination event will initiate the disabling (or locking) actions. So instead of disabling everything else and then leveraging an MDM interface to push actions to the mobile device, those termination (or disabling) actions should be driven from within the context of IAM.
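
As a rough sketch of that idea, the same termination event that de-provisions application access could also push lock and wipe actions to the user's managed devices. The MDM endpoints below are hypothetical, used only to illustrate the flow.

    import requests  # assumed HTTP client; the MDM endpoints below are hypothetical

    MDM_BASE_URL = "https://mdm.example.com/api/v1"  # placeholder MDM server

    def on_termination_event(user, managed_devices):
        """Hypothetical IAM de-provisioning hook: one termination event drives
        both application disablement and device lock/wipe actions."""
        for device in managed_devices:
            # Lock the device immediately.
            requests.post(f"{MDM_BASE_URL}/devices/{device['id']}/lock", timeout=10)
            # Selectively wipe corporate data only, leaving personal content
            # untouched in a BYOD scenario.
            requests.post(f"{MDM_BASE_URL}/devices/{device['id']}/wipe",
                          json={"scope": "corporate"}, timeout=10)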

How to enable users to request apps for download, and which apps they qualify for

Requesting access in the form of applications or associated permissions is not new to IAM either. MDM brings the concept of controlling which apps a user is able to download and run on their device. This is a fundamental component of IAM: evaluate a user, then qualify them for a specific application set and specific permissions based on the user's role(s) within the organization. Extending this type of self-service capability to users via IAM, instead of through a separate solution strictly for MDM, can potentially cost your organization much less. IAM solves this problem very well by limiting which applications a user is able to request, enforcing access and permission policies at login. Proper identity and access management will evaluate the user at login time and determine what he or she is able to request. This concept is not new to IAM, and extending it to include enforcement of mobile apps can save you time and money.

How to allow users to leverage the device for multi-factor authentication

This newer trend places the mobile device in the spotlight more than any other mentioned in this article. The need for organizations to leverage the mobile device as a second form of authentication (and identity verification) ties the device, BYOD, and mobile device management directly to IAM. Organizations developing an MDM strategy and deploying a solution must consider the effects of identity and access policies while developing that strategy. Organizations that look to their existing IAM solution for answers regarding MDM management and enforcement will find that their IAM stack is a viable option for securing mobile devices. In many cases, extending the IAM solution to encompass the new MDM components will take work; however, integration between different platforms is something IAM vendors (and developers) do very well, and lack of integration with your new MDM platform should not be a reason to forego merging IAM and MDM.

Bottom line:

Overall, identity and access management will play an increasingly important role in enforcing MDM policy, as well as in authorizing MDM administrators to take actions against end-user mobile devices. If your organization has an extensive IAM solution in place, I strongly suggest you consider placing most (if not all) enforcement, provisioning, de-provisioning, and device identification (i.e., associating a device with a user) in the capable hands of your IAM solution. The project may look a lot different than you anticipated, but you'll find that IAM provides many more answers than questions when it comes to rolling out your new MDM/BYOD strategy.

 

How to Adopt the Public Cloud While Attaining Private Cloud Control and Security

Earlier this year, McKinsey & Company released an article titled "Protecting information in the cloud," discussing the increased use of cloud computing by enterprises across several industries and the benefits and risks associated with cloud usage. The article recognizes that many organizations are already using cloud applications and, as a result, are realizing the associated efficiency and cost benefits. In fact, most of these organizations are looking to increase their usage of the cloud this year and beyond, in both private and public environments. However, there are issues inhibiting adoption, such as risks tied to data security and concerns around privacy and compliance.

The McKinsey article rightly points out that allowing perceived risks to bar further adoption of the cloud is not a realistic option for most organizations, given the many compelling benefits offered and the need to be competitive in today’s economy. Enterprises must determine ways to embrace the cloud while also being able to satisfy important questions concerning security, compliance and regulatory protection that are hampering aggressive movement to the cloud.

The benefits of choosing either a public or private cloud option over the traditional on-premise deployment are clearly outlined in the article. McKinsey concludes that the solution for many enterprises will be a hybrid approach of public and private cloud and therefore, the primary question becomes which applications belong in which environments. This is where the article begins to fall short in its analysis of the issues surrounding cloud adoption, because it does not fully consider all solutions available, including cloud encryption gateways.

The McKinsey article recommends applications such as Customer Relationship Management (CRM) and Human Capital Management (HCM) as logical choices for public cloud deployment. However, from my experience, many companies face barriers to moving even these types of applications to the public cloud for a variety of reasons, including the need to retain full control of any personally identifiable information (customer or employee) or to protect regulated data that may be subject to sector-based compliance requirements (think ITAR, HIPAA, PCI DSS, etc.). These compliance and regulatory concerns frequently force enterprises down an on-premise path (either a traditional enterprise software implementation or a private cloud deployment).

 

In these situations, a cloud encryption gateway can be used to keep control of sensitive data in the hands of the organization adopting the public cloud service. These gateways intercept sensitive data while it is still on-premise and replace it with a random tokenized or strongly encrypted value, rendering it meaningless should anyone access the data while it is in transit, being processed, or stored in the cloud. In addition, some gateways ensure that end users retain access to all of the cloud application's features and functions, such as the ability to perform standard and complex searches on data, send email, and generate reports, even though the sensitive data is no longer in the cloud application.
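
To make the substitution idea concrete, here is a toy illustration of the gateway concept: sensitive fields are swapped for random tokens before a record ever leaves the premises. The in-memory dictionary stands in for an on-premise token vault, and the field names are made up; real gateways add format preservation, searchability, and hardened storage.

    import secrets

    _vault = {}  # stand-in for an on-premise token vault

    def tokenize(record, sensitive_fields):
        outbound = dict(record)
        for field in sensitive_fields:
            token = "tok_" + secrets.token_hex(16)  # random surrogate value
            _vault[token] = outbound[field]         # clear text stays on-premise
            outbound[field] = token                 # only the token goes to the cloud
        return outbound

    def detokenize(record):
        return {key: _vault.get(value, value) for key, value in record.items()}

    # Example: the cloud application only ever sees the token, never the SSN.
    safe_record = tokenize({"name": "Jane Doe", "ssn": "078-05-1120"}, ["ssn"])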

 

Applications McKinsey believes should be located on a private cloud include enterprise resource planning (ERP), supply chain management, and custom applications. McKinsey recommends a private deployment option for this class of application largely due to the sensitivity of the data that is processed and stored in them. But private clouds, while a nice improvement over legacy on-premise deployment models, unfortunately cannot approach the TCO and elasticity benefits that true public-cloud SaaS providers offer enterprises. So, just like with CRM and HCM, the real opportunity for this class of applications is to figure out a model that marries the data security of a private cloud deployment with the unique TCO and elasticity value propositions of public cloud.

 

Here again cloud encryption gateways can play a critical role. As described earlier, enterprises would be able to move these sensitive applications onto a public cloud resource with a cloud encryption gateway that would directly satisfy any corporate concerns regarding data security, privacy and residency requirements.

Of course, not all cloud encryption gateways are created equal, so please refer to this recent paper, which provides important questions to ask when determining which gateway is the right fit for you.

Gerry Grealish leads the marketing & product organizations at PerspecSys Inc., a leading provider of cloud data security and SaaS security solutions that remove the technical, legal and financial risks of placing sensitive company data in the cloud. The PerspecSys Cloud Data Protection Gateway accomplishes this for many large, heavily regulated companies by never allowing sensitive data to leave a customer’s network, while simultaneously maintaining the functionality of cloud applications.

 

Cloud-Based Identity Management: Best Practices for Rapid End-User Adoption

By Glenn Choquette, Director of Product Management, Fischer International Identity.

Executive Summary

Identity Management (IdM) is not new. Yet after all this time on the market, organizations still see mixed results for end-user adoption; many organizations that rolled out IdM years ago still haven't achieved their goals: end users keep calling the help desk to reset passwords, request accounts, and perform other tasks instead of using the self-service identity solution. While most organizations have diligently assessed vendor offerings, fewer have adequately planned how to achieve their utilization objectives. Many organizations assume that end users will automatically start using their IdM solution without any planning or incentives, but that has proven to be false. With user acceptance rates ranging from under 5% to nearly 100%, it's clear that successful IdM rollouts don't just happen: they involve executive sponsorship, planning, education, measurable objectives, metrics, and a variety of "incentives" for achieving the goals. Fortunately, these activities will improve user adoption when launching, or even when "re-launching," IdM.

Introduction

Best practices for your organization depend on a variety of factors such as its size, culture, geographic distribution, which applications are in the cloud or on-premise, types and diversity of users, previous rollout experiences, the chosen IdM solution, etc. A combination of planning, education, metrics and incentives has proven to maximize both the quality of the end user experience and financial benefits of IdM. Like all projects that involve significant change, executive sponsorship and active executive participation are critical to success.

Planning

The first step to planning for rapid user adoption is to understand the capabilities of the chosen solution. Plan to automate as much of the setup as possible to avoid end user inertia. If your solution supports it, plan a transition acceptable to your corporate culture that requires the use of the new solution.  If automation isn’t possible with your solution, simplify the registration process as much as possible and increase your use of incentives. Your end-user adoption plan should consider your organization’s IdM objectives as well as the potential costs and risks of each aspect of the plan.

User Awareness

In most organizations, users tend to delay change until they are absolutely convinced of the benefits for themselves; fortunately, IdM has a lot to offer end users: single password to remember, no more waiting in the help desk queue for password resets, no forms to fill out to request access to resources, etc.  So, don’t keep it a secret. Market the benefits of IdM before launch. Make users aware of how their lives will be easier.

Metrics and Incentives

Metrics and incentives are pivotal to success and provide ongoing leverage for continued attainment of objectives. They can become your best friends in achieving rapid user adoption. Just as it’s important to “sell” the expected benefits to the user base prior to launch, it can be even more important to keep the momentum going by communicating the observed benefits after launch. If non-IT leaders haven’t already been sold, you’ll want to reach out to them to help carry the torch, as it’s in their own best interest to do so.

Fortunately, compared to legacy IdM solutions, modern IdM solutions achieve faster user adoption with fewer end-user incentives as users face fewer obstacles and are able to clearly see the benefits of using the solutions. Setup activities occur naturally during friendly IdM processes such as receiving new accounts and changing passwords. As more people in the organization become aware of the success of IdM and what it means, both to themselves and to the bottom line, your user base will begin to sell the solution for you. Soon, your modern solution will become the organization’s norm and the unbelievers will be viewed as laggards, under peer pressure to join the team.

Conclusion

Identity Management solutions and implementation methods have improved over the last several years. Whether your organization is new to Identity Management or implemented a solution years ago but is experiencing inadequate utilization, proper planning and execution of solution launch (or re-launch) activities can improve utilization rates.

 

 

How secure is Mobile Device Management anyway?



Researchers have successfully breached the Good Technology container. MDM software can only be as secure as the underlying operating system.

 

As the adoption of smartphones and tablets grows exponentially, one of the biggest challenges facing corporate IT organizations is not the threat of losing the device – likely owned by the employee – but the threat of a targeted attack stealing sensitive corporate data stored on these mobile devices. As a first line of defense, an increasing number of companies rely on Mobile Device Management (MDM) software and secure container solutions to secure and manage corporate data accessed from these mobile devices. However, a recent analysis conducted by Lacoon Mobile Security – presented a few weeks ago at the Black Hat conference in Amsterdam – shows that the leading secure container solution from Good Technology can be breached and corporate email stolen from Apple iOS and Android devices.

Lacoon CEO Michael Shaulov spoke with me about the shocking results of this research and made it clear that no matter what MDM software you deploy, you are in danger. MDM and secure containers depend on the integrity of the host system. "Ask yourself: if the host system is uncompromised, what is the added value? If the host system is in fact compromised, what is the added value? We've been through this movie before," he said, referring to the underlying endpoint management philosophy inherited from the previous PC era.

In their presentation “Practical Attacks against Mobile Device Management (MDM)”, Michael Shaulov and Daniel Brodie, Security Researcher, explain the details of how they penetrated the Good Technology container to exfiltrate sensitive corporate email – Good Technology did not respond to my request for comment:

Android 4.0.4 device – Samsung Galaxy S3:

1. The attacker creates a "two-stage" application which bypasses the market's malicious app identification measures, such as Google Bouncer or other mobile application reputation systems. The app is then published on Google Play or another legitimate Android app store. By using the "two-stage" technique, the attacker can publish a seemingly innocent application; once the victim installs the app, it retrieves the malicious code, which is only then downloaded.

2. The app exploits a mobile OS vulnerability which allows for privilege escalation. For example, the vulnerability in the Exynos5 chipset, disclosed in December 2012, which affects the drivers used by camera and multimedia devices.

3. The malware creates a hidden 'suid' binary and uses it for privileged operations, such as reading the mobile logs, as discussed in the next step. The file is placed in an execute-only directory (i.e., --x--x--x), which allows it to remain hidden from most MDM root detectors.

4. The malware listens to events in the ‘adb’ logs. These logs, and their corresponding access permissions, differ between Android versions. Note that for Android version 4.0 and higher root permissions are required in order to read the logs.

5. The malware waits for a log event that signifies that the user is reading an email.

6. The malware dumps the heap using /proc/<pid>/maps and /proc/<pid>/mem. From the dump it can locate the email structure, exfiltrate it, and send it home – perhaps uploading it to an innocuous-looking Dropbox account.

Apple iOS 5.1 device – iPhone:

Malware targeting iOS-based devices needs to first jailbreak the device and then install the container-bypassing software.

1. The attacker installs a signed application on the targeted device using an Enterprise/Developer certificate. This may require physical access, but there are known instances where this has been done remotely.

2. The attacker uses a jailbreak exploit to inject code into the secure container. The Lacoon researchers used the standard DYLD_INSERT_LIBRARIES technique to insert modified libraries into the shared memory. In this manner, their (signed) dylib is loaded into memory when the secure container executes.

3. The attacker removes any trace of the Jailbreak.

4. The malware places hooks into the secure container using standard Objective-C hooking mechanisms.

5. The malware is alerted when an email is read and pulls the email from the UI elements of the app.

6. Finally, the malware sends every email displayed on the device to the remote command and control server.


The analysis performed by the Lacoon researchers exposes the security limitations of the secure container approach. Shaulov believes that MDM provides management, not absolute security. It is beneficial for separating business and personal data in a BYOD scenario; its main use cases are the selective remote wipe of enterprise content and copy-and-paste prevention.

Secure containers rely on different defense mechanisms to protect corporate data. Generally these include iOS jailbreak and Android rooting detection, prevention of the installation of applications from third-party markets in order to protect against malware and, most importantly, data encryption. However, these measures can be bypassed. On one hand, there is a very active community involved in jailbreaking/rooting efforts. On the other hand, the jailbreak/rooting detection mechanisms are quite limited – see, for example, xCon, a free iOS app designed to defeat jailbreak detection. Usually, checks are performed only against features that signify a jailbroken/rooted device: for example, the presence of Cydia, a legitimate iOS app which allows the downloading of third-party applications not approved by Apple, or the su tool used on Android to allow privileged operations. More importantly, there are no detection mechanisms for exploitation itself. So even if the secure container recognizes a jailbroken/rooted device, there are no techniques to detect the actual privilege escalation.
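
To illustrate why these checks are so easy to sidestep, the sketch below shows the kind of static artifact test the paragraph describes: it looks for traces that a jailbreak or root has left behind (Cydia, su, an SSH daemon), not for the exploitation itself. The paths are common examples, and a real container would run such checks in native code on the device; the Python here is only a conceptual illustration.

    import os

    # Typical artifacts a naive detector looks for (examples, not exhaustive).
    IOS_JAILBREAK_ARTIFACTS = ["/Applications/Cydia.app", "/bin/bash", "/usr/sbin/sshd"]
    ANDROID_ROOT_ARTIFACTS = ["/system/xbin/su", "/system/bin/su"]

    def looks_compromised(artifact_paths):
        """Return True if any known jailbreak/root artifact is present."""
        return any(os.path.exists(path) for path in artifact_paths)

An attacker who hides or renames these artifacts, as the Lacoon malware does with its execute-only directory, passes the check while still holding root.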

MDM software and Secure Containers are supposed to detect jailbroken iOS and rooted Android devices but “they are dependent on the underlying operating system sandbox, which can be bypassed”, Shaulov says.

MDM not so secure after all

Sebastien Andrivet, co-founder and director of ADVTOOLS, took a different approach to auditing the security of MDM products, performing a thorough analysis of the server components, such as the administrative console, and their communications with the mobile devices. I met Andrivet in London at the Mobile and Smart Device Security Conference 2012, where he presented the alarming results of his research. Among other findings, Andrivet uncovered persistent cross-site scripting and cross-site request forgery vulnerabilities in two leading MDM solutions – he would not publicly disclose the names of these products, but I saw the screenshots of the trace logs and spotted some of the leading brands mentioned in the Lacoon report.

Andrivet openly stated that, despite being marketed as security tools, MDM products are not "security products" and are in fact not so secure after all. However, he is also a bit skeptical about the significance of the Lacoon findings. "Frankly, it is not so easy to penetrate these products, especially on iOS," says Andrivet. For example, to break into the Good container in the way described above, you need physical access to the device and the password. With an iPhone 4, it is still possible to break a four-digit passcode, but it is not currently feasible to do the same with the iPhone 4S and iPhone 5. Andrivet also observes that while it is possible to repackage an existing iOS application and sign it with your own enterprise certificate, to install it on the device the victim has to explicitly accept the installation of the certificate and then of the application itself. With social engineering this might be possible, but it is definitely not easy. Andrivet points out that the Lacoon researchers did not break the secure container's encryption: they found the information in the clear somewhere else, i.e., in memory. What matters is that they found a way to get the data; how they did it (breaking the secure container or not) is less important. They "breached" the container, even if they didn't "break" it.

The truth is that MDM products, like any other piece of software, suffer from real security vulnerabilities. But the Lacoon research is making headlines based on old versions of these products. "The risk is to provide misleading information," warns Andrivet. In fact, even military-grade spyphone products like FinFisher cannot infiltrate the most recent mobile devices such as the iPhone 4S or 5, as it is far easier to attack an Android device than an iOS one.

MDM is no silver bullet

Mobile security is a complex topic, and there is no silver bullet. This is true of security in general, and mobile is no different, says Ojas Rege, Vice President of Strategy at MobileIron, one of the leading MDM vendors mentioned in the research above. The challenge many organizations face is that they compromise user experience in the name of security. For mobile, that's the kiss of death, because users will not accept a compromised experience.

The key is to divide the problem in two: reducing the risk of data loss from well-intentioned users and reducing the risk of malicious attack, continues Rege. The former means, for example, giving users a compelling but secure way to share files instead of using consumer-grade services such as Dropbox. The latter is what this research is really about. MDM is important as a baseline, but a full security program is going to require a great deal of education as well. "Jailbreak/rooting is a cat and mouse game," according to Rege. The reality is that these devices will always have personal use – no matter who owns them – so the chances of malicious software making its way onto the device are high. The level of sandbox security built into the core OS is a key determiner of what other protections might be needed and what the resulting risk might actually be.

The point about MDM not offering absolute security is a bit cavalier, according to David Lingenfelter, Information Security Officer at Fiberlink, another leading MDM vendor mentioned in the Lacoon research. Anybody in the security community who is touting or expecting absolute security has missed the point; cybercriminals only have to be right once. While targeted attacks are definitely a reality, containers are designed for more than just stopping a targeted attack. They help with data leak prevention, blocking users from "accidentally" distributing corporate information through their personal apps.

For better or worse, corporate IT still has to work within the confines of a world dominated by compliance. Adding controls around corporate information by using containers helps risk and compliance teams show their auditors that they are taking what is in essence a consumer-grade device and adding corporate-level processes to it, continues Lingenfelter.

Infection is inevitable

The lesson learned from trying to secure traditional endpoints may be applied here. The general consensus among the security community is that controls on endpoints are not sufficient anymore to protect from targeted attacks. We can expect the same in the mobile world.

"Infection is inevitable," continues Shaulov. As demonstrated by the Lacoon research, MDM and secure containers do not and cannot provide absolute security. They are certainly useful tools for separating business and personal data, and as such they should be part of the baseline for a multi-layered approach. Quoting an RSA report, Shaulov argues that "mitigating the effects of malware on corporate data, rather than trying to keep malware off a device entirely, may be a better strategy."

This new approach requires thinking outside the box, and the industry is now starting to wake up to this challenge by looking at the network level for threat mitigation. For example, solutions like FireEye, Damballa, Fidelis and Check Point – just to name a few – can look at different network parameters and aberrant behavior to detect a compromised device in the process of exfiltrating data. These parameters may include traffic to well-known C&C servers, heuristic behavioral analysis that signifies abnormal behavior, suspicious sequences of events, and data intrusion detection.

Lingenfelter agrees that the approach to security has been, and needs to remain, a layered one. However, he warns that while other technologies based on heuristic-style monitoring and detection of malicious activity have come a long way, they too are far from providing absolute security. Companies have to realize that most mobile technology has been designed for consumers: it has the security focus of consumer devices and applications, which is to make things as easy for the end user as possible. To say that there is going to be one single technology or approach that changes this and gives these devices the security level of corporate devices is reckless. The true objective with mobile device security and management is to add on as much security, in layers, as possible without a significant impact on the end-user experience.

Have you deployed MDM to your mobile users? Do you trust mobile secure containers with your corporate data? How confident are you that your CEO’s iPhone is not jailbroken – or that it never was? Can you detect a compromised tablet spying on your company’s next board meeting?

About the Author

Cesare Garlati is one of the most quoted and sought‐after thought leaders in the enterprise mobility space. Former Vice President of Mobile Security at Trend Micro, Cesare currently serves as Co‐Chair of the CSA Mobile Working Group – Cloud Security Alliance. Prior to Trend Micro, Mr. Garlati held director positions within leading mobility companies such as iPass, Smith Micro Software and WaveMarket. Prior to this, he was senior manager of product development at Oracle, where he led the development of Oracle’s first cloud application and many other modules of the Oracle E‐Business Suite.

Cesare has been frequently quoted in the press, including such media outlets as The Economist, Financial Times, The Register, The Guardian, ZD Net, SC Magazine, Computing and CBS News. An accomplished public speaker, Cesare also has delivered presentations and highlighted speeches at many events, including the Mobile World Congress, Gartner Security Summits, IDC CIO Forums, CTIA Applications, CSA Congress and RSA Conferences.

Cesare holds a Berkeley MBA, a BS in Computer Science and numerous professional certifications from Microsoft, Cisco and Sun.

He lives in the Bay Area with his wife and son. Cesare’s interests include consumer electronics in general and mobile technology in particular.

 

 

Cloud APIs – the Next Battleground for Denial-of-Service Attacks

by Mark O’Neill


In recent months, there have been a number of highly publicized cyberattacks on U.S. banks. These attacks took the form of Distributed Denial of Service (DDoS) attacks, in which enormous amounts of traffic were sent to Internet-facing banking services, rendering them unusable. These attacks focused mainly on the websites of banks and other financial institutions, bringing down their online banking services, inconveniencing users, costing revenue, and damaging the institutions' brand reputation.

The attack surface of banks is changing, however. Increasingly, banks are rolling out mobile apps to improve customer service and loyalty. These mobile apps consume data via APIs in the cloud. Given this scenario, the next wave of DDoS attacks may very well target these cloud APIs in order to disable the mobile apps that depend on them; a mobile app is "blind" without access to its APIs. In light of such risks, Chief Security Officers and their IT security teams need to come up to speed on both the threats posed to APIs and the very real impact an API disruption presents. This article examines strategies for protecting cloud APIs against DDoS attacks.

Let’s take a look at how mobile apps use APIs within a banking context. Similar to other mobile apps, mobile banking apps use APIs to perform actions and receive data. A DDoS attack would effectively disable access to the API. As mobile app penetration and usage grows, and bank customers use apps as their main channel to perform banking transactions, the impact an API attack can have on an economy grows exponentially. Customers are unable to pay bills, transfer money, or ensure they have funds to make purchases.

In the recent cyberattacks on banks, users could launch the mobile banking apps from their phone or tablet, but the apps could not "call home" to the banking systems, so they could not retrieve account details or even log in. Unlike a website disruption, this API disruption is not directly visible to end users; the perception of the attack is different because the app itself can still be launched. In fact, when confronted with a mobile banking app that has problems performing certain functions, a user may simply blame their mobile network, or assume they have lost coverage, rather than suspect the API has been compromised.

Protecting APIs

The recent DDoS attacks have highlighted the need to put measures in place to protect APIs. Going forward, we can envisage a scenario where rather than APIs only being taken down as a side effect of attacks on Websites, future attacks could be directed against APIs with a goal of taking out mobile applications.

Ensure Distributed Deployment to Avoid Vulnerability

Today, it's still quite common to have API protection grouped with website protection. As APIs are still relatively new, they are often considered to fall under the general rules governing an organization's web resources. As a result, there is a lot of focus on protecting the website from DDoS or other attacks while neglecting to prepare for an API disruption and its impact on mobile applications.

In the case of the recent banking DDoS attacks, the huge volume of traffic involved meant there was little the banks could do to protect against the attacks. However, separating the hosting of APIs from the hosting of "traditional" website resources may be one mitigating factor. This means that a DDoS attack against the website need not have the side effect of taking down the APIs used by mobile banking apps.

Implement Policies (e.g. Identity and Throttling)

Additionally, the IT security team should be aware that APIs are different in terms of usage patterns, and in the type of traffic they receive. Whereas a website is accessed by a browser, an API is accessed by an app. This means that it makes sense to protect APIs using different policies. For example, the policy could dictate that a specific API could be accessed by particular users with defined throttling and security policies. Similarly, identity-based policy rules can be used to govern and secure APIs. This is the basis of “API Management”.

Organizations should also consider implementing policies to control the availability of and access to their APIs. Simply opening applications to the outside world via APIs without any security policy in place exposes the enterprise to potential malicious usage of those APIs. Any organization exposing data via an API needs to ensure its clients can't easily pull down its data in bulk; otherwise it runs the risk of becoming a channel for data harvesting. This means implementing throttling policies to detect whether a particular client is abusing its right of access or its levels of usage of the APIs.
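
As a minimal sketch of such a throttling policy, the snippet below applies a per-client quota over a fixed one-minute window. A real API gateway would enforce this at the edge and persist the counters; the limits here are purely illustrative.

    import time
    from collections import defaultdict

    WINDOW_SECONDS = 60          # sampling window (illustrative)
    MAX_CALLS_PER_WINDOW = 120   # per-client quota (illustrative)

    _counters = defaultdict(lambda: [0.0, 0])  # api_key -> [window_start, call_count]

    def allow_request(api_key):
        """Return True if the client is within its quota for the current window."""
        window_start, count = _counters[api_key]
        now = time.time()
        if now - window_start >= WINDOW_SECONDS:
            _counters[api_key] = [now, 1]      # start a fresh window
            return True
        if count >= MAX_CALLS_PER_WINDOW:
            return False                       # throttle: likely harvesting or abuse
        _counters[api_key][1] = count + 1
        return True

Because the counter is keyed by client identity (here an API key), the same mechanism supports per-customer SLAs as well as abuse detection.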

APIs and Financial Services

Consider the example of how a Financial Services firm working with companies that have managed funds would benefit from effective API management. In a typical scenario where the firm is managing pensions or 401ks, they would normally have an API to give the current price and other details regarding the fund. In such a context, it is normal for an app to call the API on a regular basis. However, if an API is called by the same app thousands of times a second, or is called in an obviously automated way, the API will be monopolized to the detriment of other users. In this instance, the Financial Services firm would need to identify and block users with risky behavioural patterns – without impacting the experience of legitimate users.

What kind of API Management?

There are API management products that can provide mitigation against attacks. These products include features such as the ability to set policies, throttle traffic, and deliver the security clients need via a particular security token, such as an API key or OAuth token. Of note, API keys should be handled carefully, as there is a tendency to embed them in applications without regard for security. API management products can also detect unusual API patterns: for example, if the mobile application generally accesses certain API operations in particular patterns, the product can detect anomalous traffic activity and provide alerts.

Lessons Learned

One of the lessons from the recent attacks is the need to put measures in place to protect APIs, especially as future attacks could be directed against APIs with the goal of taking out mobile applications. This risk is increasing as a significant number of users adopt mobile applications. This trend, combined with banks' focus on providing mobile banking applications, means that organizations require a watertight approach to managing and securing APIs.

Mark O'Neill is a frequent speaker and blogger on APIs and security. He is the co-founder and CTO at Vordel, now part of Axway. In his new role as VP of Emerging Technology, he manages Axway's identity and API management strategy. Vordel's API Server enables enterprises to connect to cloud and mobile. Mark blogs at www.soatothecloud.com and tweets at @themarkoneill.

####

Going up? Safety first, then send your data to the cloud

By: Joe Sturonas, CTO, PKWARE

As the proliferation of data continues to plague businesses, the pressure is on for companies to migrate away from their physical data centers. Cloud computing is being adopted at a rapid rate because it addresses not only the costs for physical space, but also rising energy costs and mandates for more scalable IT services. Enterprises are drastically reducing their storage spend by using online storage solution providers to store massive amounts of data on third-party servers.

The cloud is definitely calling, but even the most seasoned IT professionals debate, grapple with, and get a bit intimidated by an otherwise simple term that has taken the world by storm.

Inevitable Risk

Every minute of every day presents the opportunity for a data mishap. A security breach, as well as lost, stolen or even compromised records, triggers negative exposure that quickly equates to forfeited sales, legal fees, disclosure expenses and a host of remediation costs. The fallout can result in years of struggle to recoup reputation and repair a brand in the marketplace. Cloud providers do not want to be held liable for any issues related to your data loss.  Best case, they will credit back your fees, but nothing can help a damaged reputation or customers who leave your organization when a data breach occurs.

While the cloud environment seems to be a holy grail for trends around data proliferation and massive storage needs, clouds present complex security issues and put critical corporate data, intellectual property, customer information, and PII in potential jeopardy. Enterprises forfeit security and governance control when data is handed over, and cloud providers do not assume responsibility.

The recent cyber attacks by groups like Anonymous and data breaches like that of LinkedIn illustrate the need to incorporate an advanced risk and compliance plan that includes any third-party managed cloud environment. Clearly, the cloud often opens a Pandora’s Box for unanticipated consequences.

Storing huge amounts of data on third-party servers may mean instant online access and lower costs; however, that data is often comingled on shared servers and exposed to users you don't know. If your cloud storage provider encrypts your data but holds the key, anyone working for that provider can gain access to your data. That means your data could potentially be shared, sold, marketed against, and profiled for someone else's gain.

Data also has to actually “get to” the cloud, which usually means leaving your trusted infrastructure and overcoming compounded transfer vulnerabilities as data moves to and from the cloud. Even the most unintended data breach could cost a company its reputation.

Potential Pitfalls

Transfer vulnerabilities – The potential for data breaches is multiplied as data travels to and from the cloud over various networks, especially in highly mobile and distributed workforces.

Non-compliance penalties – Extended enterprises, partner networks and virtual machines are continuously scrutinized for compliance. All sensitive data must be protected with appropriate measures.

Storage expense – Companies are charged by the amount of data they put into the cloud; therefore, providers lack motivation to compress that data. Any compression by providers is also of limited value, since encrypted data cannot be compressed effectively.

Provider holds the keys – Cloud agreements can address how internal staff at the vendor will manage your data. Provisions can limit administrative access and govern who has hiring and oversight authority over those privileged administrators. If the data housed in the Cloud is, in fact, encrypted, then the issue becomes more about who maintains the keys.

To summarize…

  • Security breaches will happen, even to the most vigilant organizations that do not encrypt their data.
  • Your company’s reputation is at stake.
  • Security regulations are increasing.
  • The Cloud introduces new levels of risk.
  • Cloud providers have root access to all your unencrypted data in the cloud, and they are not your employees.

The only way to protect data in the cloud is to encrypt the data yourself and maintain control of the encryption key.
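
A minimal sketch of that principle, assuming the third-party Python 'cryptography' package: the data is encrypted before it is handed to the provider, and the key never leaves your own key store. The file name and sample record are illustrative.

    from cryptography.fernet import Fernet  # assumes the third-party 'cryptography' package

    key = Fernet.generate_key()      # lives in your own key store, never in the cloud
    cipher = Fernet(key)

    plaintext = b"name,ssn\nJane Doe,078-05-1120\n"   # stand-in for a sensitive file
    ciphertext = cipher.encrypt(plaintext)

    # Only the opaque ciphertext is handed to the cloud storage provider; without
    # the key, a provider administrator (or an attacker) sees random bytes.
    with open("customer-records.csv.enc", "wb") as f:
        f.write(ciphertext)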

CLOUD SECURITY BEST PRACTICES

Impact on security policies and procedures?

Your existing security policies and procedures need to be reviewed to address the use of cloud applications and storage. Some companies choose to shut off access to certain cloud applications, some implement application stores to limit access to specific approved applications, and some do not attempt to curtail access at all. Shutting off access is not a popular option with employees, who are most likely already familiar with consumer options such as Dropbox. Your end users have real problems, such as transferring or sharing a file too large for email, that they know such services can solve.

Employees, internal team members, and partners may not have any idea of the risk of putting unprotected data in the cloud. They probably don't know that unsecured services such as Dropbox pose a security risk and may already have sensitive company data stored there. You need to alert them to the data security risks of the cloud and have them sign a security policy to that effect. Taking draconian measures to prevent the use of services like Dropbox will only force employees to find even less secure ways to exchange data; providing a secure way for employees to use such services is a far better approach.

The regulatory standards issues that you deal with today in your own data center are just as important in the cloud. Compliance with PCI DSS, the EU Privacy Act, Sarbanes-Oxley, FIPS 140-2, and the like is just as imperative. If you know that the data is encrypted before it goes into the cloud, you may be compliant with any number of these regulations. Even if the cloud vendor is hacked or someone uses an administrative password improperly, your data remains protected at that location.

EVALUATING SECURITY SOLUTIONS FOR THE CLOUD

Encrypting your data and maintaining the keys yourself is considered by industry experts to be the only way of making sure that no one can read your data, period. It doesn't matter if a privileged user has access to your data; they still can't decipher it.

Regulatory compliance counts in any cloud, any environment, and any country. You must ensure your data is compliant with any regulation standards for your industry.

If there are assistants, executives, and sales representatives who use different operating systems on different computing platforms and want to share data securely inside or outside of the private or public cloud, then you need data-centric, file-level encryption that is portable across all of them.

Be sure to evaluate Data Location and Data Segregation as they relate to co-tenancy. Not only do you want to hold the key, but you want to encrypt all of your data so that your data, especially sensitive data (PII), is protected if comingled with other organizations’ data.

A cloud security solution must also enable recovery and provide you with the ability to restore your data many years from now. To meet some regulatory compliance statutes, you have to keep your data for seven or even 20 years.

Cloud providers might assure users that communications from your browser to their servers are encrypted using TLS. That provides protection only while the data travels over the Internet; the data then sits in the clear once it lands on their servers.

Worry-free breach

Odds are you will have to report a breach one day. If that day comes, you want to be able to announce that no data was compromised, minimizing corporate liability in both dollars and reputation. With data-centric encryption, where you hold the keys and the data is encrypted at the file level, no one else can access that data. You may not even have to report the incident as a breach, and you don't have to fall back on contractual remediation provisions, because essentially there was a breach but no data was lost.

So before you store sensitive data in the cloud, make sure you encrypt that data. This ensures that your data is safe and accessible to you, and only you.

About the author: Joe Sturonas is Chief Technology Officer for PKWARE.

PKWARE, the industry leader in enterprise data security products, has a history rooted in innovation, starting with the creation of the .ZIP file in 1986. Since then, PKWARE has been at the forefront of creating products for reducing and protecting data – from mainframes to servers to desktops and into virtual and Cloud environments. www.pkware.com

 

How to Harden Your APIs

The market for APIs has experienced explosive growth in recent years, yet the major issues providers still face are the protection and hardening of the APIs they expose to users. In particular, when you are exposing APIs from a cloud-based platform, this becomes very difficult to achieve given the various cloud provider constraints. To achieve it, you need a solution that provides hardening capabilities out of the box but still permits customization of granular settings to meet the nuances of a specific environment. The sections below walk through the areas such a solution needs to cover.

Identify sensitive data and sensitivity of your API.

The first step in protecting sensitive data is identifying it as such. This could be PII, PHI, or PCI data (PII – personally identifiable information; PHI – protected/personal health information; PCI – payment card industry data). Perform a complete analysis of the inbound and outbound data to your API, including all parameters, to figure this out.

Once identified, make sure only authorized people can access the data.

This requires solid identity, authentication, and authorization systems to be in place; these can all be provided by the same system. Your API should be able to identify multiple types and classes of identities. To achieve an effective identity strategy, your system has to accept identities in older formats such as X.509, SAML, and WS-Security, as well as the newer breed of OAuth, OpenID, and so on. In addition, your identity system must mediate these identities, acting as an identity broker, so it can securely and efficiently relay the credentials to your API for consumption.

API Governance.

You should implement identity-based governance policies. These policies need to be enforced globally, not just locally; effectively, this means you must have predictable results that are reproducible regardless of where you deploy your policies. Once the user is identified and authenticated, you can authorize the user based not only on that credential, but also on the location the invocation came from, the time of day, the day of the week, and so on. Furthermore, for highly sensitive systems, the data and users can be classified as well: top-secret data can be accessed only with top-secret credentials, and so on. To build effective policies and govern them at runtime, you need to integrate with a mature policy decision engine. It can be either standards-based, such as XACML, or integrated with an existing legacy policy provider.
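
A toy policy decision point in the spirit of the identity-based governance described above might look like the sketch below. The attribute names and rules are illustrative only; they are not XACML and not any particular engine's API.

    from datetime import datetime, time

    def authorize(user, resource, context):
        # Classification: top-secret data requires a top-secret clearance.
        if resource["classification"] == "top-secret" and user["clearance"] != "top-secret":
            return False
        # Location: sensitive resources may only be reached from the corporate network.
        if resource["sensitive"] and context["network"] != "intranet":
            return False
        # Time of day: weekday business hours only.
        when = context["timestamp"]
        if when.weekday() >= 5 or not time(8, 0) <= when.time() <= time(18, 0):
            return False
        return True

    decision = authorize(
        {"id": "jdoe", "clearance": "secret"},
        {"classification": "confidential", "sensitive": True},
        {"network": "intranet", "timestamp": datetime(2013, 6, 3, 10, 30)},
    )  # True: cleared user, on the intranet, during business hours

The important property is that the same function produces the same decision wherever it is deployed, which is what "enforced globally, not just locally" requires.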

Protect Data.

Protect your data as if your business depends on it, as it often does, or should. Make sure that sensitive data, whether in transit or at rest (in storage), is never left in its unprotected original format. While there are multiple ways data can be protected, the most common are encryption and tokenization. In the case of encryption, the data is encrypted so that only authorized systems can decrypt it back to its original form. This allows the data to circulate encrypted and be decrypted as necessary at secured steps along the way. While this is a good solution for many companies, you need to be careful about the encryption standard you choose and about your key management and key rotation policies.

The other approach, tokenization, is based on the fact that you can't steal what is not there. You can tokenize essentially anything: PCI, PII, or PHI information. The original data is stored in a secure vault, and a token (a pointer representing the data) is sent downstream in its place. The advantage is that if any unauthorized party gets hold of the token, they don't know where to go to get the original data, let alone have access to it. Even if they do know where the token vault is located, they are not whitelisted, so the original data is not available to them. The greatest advantage of tokenization systems is that they reduce the exposure scope throughout your enterprise: by removing sensitive and critical data from the stream, you centralize your focus and security on a stationary token vault rather than on active, dynamic, and pliable data streams.

While you're at it, you might want to consider a mechanism such as DLP, which is highly effective in monitoring for sensitive data leakage and can automatically tokenize or encrypt sensitive data that is going out. You might also want to consider policy-based information traffic control: while certain groups of people may be allowed to see certain information (such as company financials, in the case of an auditor), they may not be allowed to send that information onward. You can also enforce this based on the invocation location (e.g., intranet users vs. mobile users who are allowed to get certain information).
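
As a small illustration of the outbound DLP-style check mentioned above, the sketch below masks anything in an API response that looks like card or SSN data. The patterns are deliberately simplified and the redaction format is made up; a real DLP engine would use far richer detection and could tokenize rather than mask.

    import re

    PATTERNS = {
        "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # crude card-number pattern
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),     # crude US SSN pattern
    }

    def scrub_outbound(payload: str) -> str:
        """Replace anything that looks like PCI/PII before it leaves via the API."""
        for label, pattern in PATTERNS.items():
            payload = pattern.sub(f"[REDACTED-{label}]", payload)
        return payload

    print(scrub_outbound("card=4111 1111 1111 1111 ssn=078-05-1120"))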

I wrote a series of Context Aware Data Protection articles on this recently.

QOS.

While APIs exposed in the cloud can scale to absorb expansion or bursts during peak hours, it is still a good architectural design principle to rate-limit access to your API. This is especially valuable if you are offering an open API exposed to anyone, which is an important and valuable scenario. There are two sides to this: a business side and a technical side. The technical side allows your APIs to be consumed in a controlled way, and the business side lets you negotiate better SLA contracts based on the usage model you have in hand.

You also need a flexible throttling mechanism. It should give you the following options: just notify, throttle the excessive traffic, or shape the traffic by holding the messages until the next sampling period starts. In addition, there should be a mechanism to monitor and manage traffic, both long term and short term, which can be based on two different policies.
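
The "shape" option differs from plain throttling in that nothing is rejected: each message is simply held until the next sampling slot has capacity. A minimal sketch of that idea, with an illustrative rate:

    import time

    RATE_PER_SECOND = 5            # illustrative shaping limit
    _interval = 1.0 / RATE_PER_SECOND
    _next_slot = 0.0

    def shape():
        """Block the caller until the next sampling slot, pacing traffic to the limit."""
        global _next_slot
        now = time.monotonic()
        time.sleep(max(0.0, _next_slot - now))       # hold the message, don't drop it
        _next_slot = max(now, _next_slot) + _interval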

Protect your API.

Attacks on or misuse of your publicly exposed API can be intentional or accidental. Either way, you can't afford for anyone to bring your API down. You need application-aware firewalls that can look into application-level messages and prevent attacks. Generally, application attacks tend to fall under injection attacks (SQL injection, XPath injection, etc.), script attacks, or attacks on the infrastructure itself.

Message Security.

You also must provide both transport-level and message-level security features. While transport security features such as SSL and TLS provide some data privacy, you also need an option to encrypt and/or sign message traffic, so it will reach the end systems safely and securely and so those systems can authenticate the end user who sent the message.

Monitor effectively.

If you don't collect metrics on the usage of your APIs by monitoring them, you will be shooting blind. Unless you understand who is using them, when, how they are being used, and the patterns of usage, it is going to be very hard to protect them. All of the above actions are built proactively based on certain assumptions. You need to monitor your traffic not only to validate your assumptions, but also to make sure you are ready to take reactive measures based on what is actually happening. This becomes critical in mitigating the risk for cloud-based API deployments.

 

Andy is the Chief Architect & Group CTO for the Intel unit responsible for cloud/application security, API, big data, SOA and mobile middleware solutions, where he is responsible for architecting API, SOA, cloud, governance, security, and identity solutions for their major corporate customers. In his role, he supports Intel/McAfee field sales, technical teams and customer executives. Prior to this role, he held technology architecture leadership and executive positions with L-1 Identity Solutions, IBM (DataPower), BMC, CSC, and Nortel. His interests and expertise include cloud, SOA, identity management, security, governance, and SaaS. He holds a degree in Electrical and Electronics Engineering and has over 25 years of IT experience.

He blogs regularly at www.thurai.net/securityblog on API, security, SOA, identity, governance and cloud topics. You can find him on LinkedIn at http://www.linkedin.com/in/andythurai or on Twitter at @AndyThurai.

Three Critical Features That Define an Enterprise-Grade Cloud Service

By David Baker, CSO at Okta

 

The line between enterprise and consumer is fading as employees work from all manner of devices to access the on-premises, cloud and even consumer applications needed to get work done. But it's important not to confuse enterprise and consumer services from a security standpoint. Enterprises are increasingly trusting cloud service providers to secure private, often sensitive data. These services must be held to more rigorous standards, but what does it really take to be considered truly "enterprise grade"?

 

Cloud services today are ubiquitous and are quick to use terms like security, high availability and transparency. There are many features that define enterprise services, but the three that stand out for me are platform security, service availability and multi-tenant architecture.

 

Platform Security

 

Whether you call it Layer 7 or application security, hardening a cloud service is especially critical in the enterprise. These services are entrusted to handle sensitive corporate and customer data, and enterprises must be able to trust that their cloud vendors have rigorous security standards in place and that their customers' data is behind lock and key.

 

The most basic step toward enterprise security is independent third-party certification. Yes, the c-word. I have seen many check-box attestations and certifications, but a certification alone does not mean that platform security is solid. There are many tiers of security validation, and programs such as FedRAMP, ISO 27001, and SOC stand out as good benchmarks of operational security for cloud service providers. On top of operational security validation, enterprise cloud services should be able to demonstrate additional validation through recurring third-party application penetration testing. And the penetration test results should be shared with customers, because transparency builds trust.

 

I have been pleasantly surprised by how many customers ask me to present my security controls according to the CSA Security, Trust & Assurance Registry (STAR) program. In fact, I’m working with my SOC auditors now to build additional narratives to our SOC 2 Type II report that map directly to STAR. A powerful way to demonstrate platform security is to not only provide the SOC 2 report, but to also provide every penetration report and STAR CCM as well.

 

Service Availability

 

Availability is a critical component of enterprise-ready services. Even the most capable cloud service does little good if customers are unable to access it. Remember, enterprise cloud services are either replacing a legacy service or providing something that the enterprise needs 24x7x365. "Four 9s" availability is a good industry benchmark for enterprise cloud services, but the number of 9s is only part of the equation.

 

Enterprise cloud vendors should guarantee availability with SLAs. Service providers are increasingly building on commodity IaaS providers, and customers are left to wonder whether the cloud service offers them a better SLA than the IaaS provider offers the vendor. If a cloud service is built on top of an IaaS, transparency is key.

 

Enterprise cloud vendors should be able to demonstrate, through at least two years of historical availability data, that their cloud architecture can withstand failures. With today's cloud infrastructures, it should be assumed that virtual instances will disappear because of hardware and network failures, natural disasters and power loss. Enterprise cloud services must be built for disaster avoidance, not disaster recovery!

 

The service must be built for resiliency, and it must be maintained. Maintenance windows are a thing of the past. Show me a cloud service with a “four 9s” SLA and a monthly service window, and I will show you a service provider with a “three 9s” SLA, no maintenance windows and higher availability.
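For context, the arithmetic behind those numbers is straightforward; the sketch below is a back-of-the-envelope calculation, not a statement about any particular provider.

```python
# Back-of-the-envelope downtime budgets (assuming a 365-day year).
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(availability: float) -> float:
    return (1 - availability) * MINUTES_PER_YEAR

print(round(downtime_minutes_per_year(0.9999), 1))  # "four 9s"  -> ~52.6 minutes/year
print(round(downtime_minutes_per_year(0.999), 1))   # "three 9s" -> ~525.6 minutes/year

# A one-hour maintenance window every month already consumes 720 minutes/year,
# more than a "three 9s" budget and far beyond a "four 9s" promise.
```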

 

Multi-Tenancy

 

Security and availability are essential components of any application that's ready for the enterprise. But perhaps the most important characteristic of an enterprise-grade service is how it deals with the conundrum of multi-tenancy. The most common question prospective customers ask is, "How do you protect and secure my data from your other customers' data?" Dedicated subnets and dedicated servers for each customer don't scale within a multi-tenant cloud infrastructure, whose purpose is to be low-cost and to accommodate elastic scalability as needed. The solution to segmenting customer data is encryption, not subnets or dedicated instances. Yes, that means each customer's data is uniquely encrypted while at rest within the service.

 

Making this work, however, is not always straightforward. The cloud service must assign a unique key to encrypt each customer's data. This, in turn, requires a robust key management architecture that uses in-memory secrets, never stored to disk or written down, to ensure the integrity of customers' key stores and data. The key management system should also be resilient to the loss of encrypted data structures and able to quickly expire keys. Sure, it sounds obvious, but it's scary how often developers focus on building the safe but forget to secure the key.
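As a rough sketch of the per-customer encryption idea, the example below assigns each tenant its own data-encryption key held only in memory; the class name and the use of the Python cryptography package's Fernet primitive are illustrative assumptions, and a real service would layer in hardware-backed key wrapping, rotation and expiry.

```python
# Minimal sketch of per-tenant encryption keys, assuming the Python "cryptography"
# package. TenantVault is an illustrative name, not any vendor's actual API.
from cryptography.fernet import Fernet

class TenantVault:
    """Holds one data-encryption key per tenant, in memory only."""
    def __init__(self):
        self._keys = {}  # tenant_id -> key bytes, never written to disk

    def _cipher(self, tenant_id: str) -> Fernet:
        if tenant_id not in self._keys:
            self._keys[tenant_id] = Fernet.generate_key()
        return Fernet(self._keys[tenant_id])

    def encrypt(self, tenant_id: str, plaintext: bytes) -> bytes:
        return self._cipher(tenant_id).encrypt(plaintext)

    def decrypt(self, tenant_id: str, ciphertext: bytes) -> bytes:
        # Ciphertext encrypted for one tenant cannot be decrypted with another tenant's key.
        return self._cipher(tenant_id).decrypt(ciphertext)

vault = TenantVault()
blob = vault.encrypt("customer-a", b"salary=100000")
print(vault.decrypt("customer-a", blob))  # b'salary=100000'
```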

 

I've worked in corporate security for more than 15 years, and I've seen numerous instances of built-in encryption gone terribly wrong: encryption protocols that are too easily cracked, or encryption keys stored in the same database as the encrypted data they are meant to protect.

 

Three Prongs of an Enterprise Cloud Service

 

Enterprise users should expect more rigorous security standards from the applications they use at work. The stakes are higher in business, with repercussions that extend beyond just the end-user and can affect the entire organization. There are many components that make a cloud service truly enterprise ready, but platform security, availability and multi-tenancy are, in my opinion, the three most important. How a cloud service measures up determines whether it’s truly enterprise-grade, or whether it’s merely pretending to be.

 

By David Baker, chief security officer of Okta, an enterprise-grade identity management service that addresses the challenges of a cloud, mobile and interconnected business world. Follow him on Twitter at @bazaker.

 

The Shrinking Security Model: Micro-perimeters

By Ed King, VP Product Marketing – Emerging Technologies, Axway (following acquisition of Vordel)

 

As Cloud and mobile computing make enterprise IT ever more extended, the traditional security model of keeping the bad guys out and allowing only the good guys in no longer works well.  While the reach of the enterprise has expanded, the security perimeter may actually have to shrink to around the smallest entities such as the application and the dataset.  A truly scalable security model for this world of BYOx (fill in device, application, identity) seems to be one based on massively scalable micro-perimeters. What is big is now small and what is small is now big.

 

Micro-perimeter #1: Applications

 

Application security has long been secondary to network security.  In the old days, since most business applications were only accessible on the corporate network via a browser or fat client, applications needed only rudimentary authentication and authorization capabilities.  Now, however, with the pervasiveness of Cloud-based services and mobile access, the network perimeter has effectively evaporated and application security is a front-and-center issue.  By shrinking the security perimeter to each individual application, enterprise IT can control a user's access to the application from anywhere and any device, without having to rely on a cumbersome VPN connection.  For applications in the Cloud, Cloud service providers already provide basic network security such as firewalling.  Application security, however, is the responsibility of the enterprise.  Any access control that was previously implemented at the network level needs to move to the application level.  Setting up a micro-perimeter around applications involves:

  • Authentication and single sign-on –  This can mean strong and multi-factor authentication if a higher level of assurance is required.  If the application is being used by third-party users, a federated scheme is highly recommended.
  • Authorization – This typically means a role or attribute based scheme.  More advanced authorization schemes can involve fine grained entitlement management, as well as risk based schemes.  If federated access is required, definitely consider OAuth, which has become the de-facto federated authorization scheme of today.

 

Building authentication and authorization capabilities into individual applications is neither economical nor scalable.  Look for access management technologies that can front new and legacy applications and support the latest federation standards such as OAuth, OpenID Connect, and SCIM (System for Cross-domain Identity Management).
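As a rough illustration of what enforcing this perimeter can look like at the application edge, here is a minimal sketch of checking an incoming OAuth 2.0 access token against an identity provider's token introspection endpoint (RFC 7662); the endpoint URL, client credentials and scope name are hypothetical placeholders, not any specific product's API.

```python
# Minimal sketch: enforcing the application micro-perimeter with OAuth 2.0 token
# introspection (RFC 7662). The endpoint URL, client credentials and scope name
# are hypothetical placeholders, not a specific product's API.
import requests

INTROSPECTION_URL = "https://idp.example.com/oauth2/introspect"  # hypothetical
CLIENT_AUTH = ("app-gateway", "gateway-secret")                  # hypothetical

def authorize(access_token: str, required_scope: str) -> bool:
    resp = requests.post(
        INTROSPECTION_URL,
        data={"token": access_token},
        auth=CLIENT_AUTH,
        timeout=5,
    )
    claims = resp.json()
    # Reject inactive tokens and tokens that lack the scope this application requires.
    return claims.get("active", False) and required_scope in claims.get("scope", "").split()

# Example: allow the request only if the token grants the hypothetical "hr:read" scope.
# if not authorize(token, "hr:read"): deny the request
```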

 

Micro-perimeter #2: APIs

 

How we use applications has changed since Apple introduced the iPhone and the App Store.  We no longer use a small number of large, complex applications (think Excel, Word), but a large number of small, purpose-built applications.  How many applications do you have on your smartphone?  The same trend is true for Cloud applications.  Instead of large ERP platforms such as SAP and Oracle, enterprises are now favoring smaller, best-of-breed applications such as Salesforce and Workday.  In addition, the modern application user experience is cross-modal: users rely on a number of applications on different platforms to complete tasks within the same business process.  This new breed of applications uses web APIs to enable integration and to support multiple user engagement applications on mobile and Cloud.  The API has become the common access point given the proliferation of applications and endpoints.  Setting up a micro-perimeter around APIs involves three aspects of protection:

  • Interface security to ensure transport level security and blocking of attacks such as SQL injection and cross-site scripting
  • Access control to ensure only the right user, device and applications are allowed to access the APIs, along with integration to enterprise identity and access management platforms
  • Data security to monitor all data passing through the API, including header, message body, and any attachment, for sensitive data, then perform real-time redaction

Just as with application security, do not reinvent the wheel when installing micro-perimeters around APIs.  Consider products such as API Servers and API Gateways that offer comprehensive API protection in all three areas.
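As a rough illustration of the data-security aspect, here is a minimal sketch of real-time redaction of sensitive data flowing through an API; the single regex and sample payload are illustrative only, and a real API Gateway would inspect headers, body and attachments far more thoroughly.

```python
# Minimal sketch of real-time redaction of sensitive data flowing through an API.
# The single regex is illustrative; a real gateway inspects headers, body and attachments.
import re

CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")  # naive credit-card number pattern

def redact(payload: str) -> str:
    return CARD_PATTERN.sub("[REDACTED]", payload)

print(redact('{"customer": "Jane", "pan": "4111 1111 1111 1111"}'))
# {"customer": "Jane", "pan": "[REDACTED]"}
```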

 

Micro-perimeter #3: Devices

 

Mobile devices have a much bigger attack surface than servers and desktop computers, and are thus more easily compromised.  In addition to typical endpoint security vulnerabilities such as malware and operating system exploits, a lost or stolen device gives attackers physical access to the device, which opens up additional exploit options at the hardware, firmware, operating system and application levels.  Beyond physical security, the widespread use of application stores creates opportunities for malware to be downloaded freely and spread quickly.  Deploying a micro-perimeter around the mobile device has been a hot security field in recent years.  Various solutions are available, ranging from MDM (mobile device management), mobile virtual machines and containers, to application signing.  Look for technologies that can:

  • Validate application authenticity and integrity
  • Secure operating system and applications from malware and viruses
  • Detect and block suspicious/unauthorized cross-application activities
  • Secure keys and identities on the device
  • Secure communication and prevent man-in-the-middle exploits

 

Micro-perimeter #4: Data

 

In this ultra-connected world, data drives applications and user interactions.  Data is often passed from application to application and from device to device.  Data security measures are usually in place at the original egress point when the data leaves its source, but once the data is sent to its first client, what happens after that is anybody's guess.  Using identity data as an example, once user data is sent to a Cloud service, that service may be caching the user credential to allow single sign-on to a third-party service.  The second leg of that integration may not have proper user consent.  How the identity data is handled by the second service is an unknown risk to the enterprise.  The way to secure data in a federated environment is to put up a micro-perimeter around the data set.  The data set should be encrypted so only authorized endpoints have the means to consume the data.  An example of this is the OAuth 2.0 standard, which replaces user identity and authorization scope with an opaque token and then provides interaction mechanisms to ensure user consent is obtained when a new third party needs to consume the OAuth token.  This type of technology has not yet expanded to handle arbitrary data sets, beyond the traditional cumbersome PKI infrastructure.  Future capabilities may also include wrapping data sets with policies that can be directly consumed by client applications.
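To illustrate the idea of a micro-perimeter around a data set, the sketch below hands downstream services an opaque reference instead of the identity data itself and releases the data only to endpoints the user consented to; the functions and in-memory store are hypothetical simplifications of what OAuth-style tokens and a hardened vault would provide.

```python
# Minimal sketch: a micro-perimeter around a data set, implemented by handing
# downstream services an opaque reference instead of the data itself. The in-memory
# store and function names are illustrative simplifications.
import secrets

_protected = {}  # opaque token -> (data, endpoints the user consented to)

def wrap(data: dict, consented_endpoints: set) -> str:
    token = secrets.token_urlsafe(16)
    _protected[token] = (data, consented_endpoints)
    return token  # only this opaque value leaves the perimeter

def redeem(token: str, endpoint: str) -> dict:
    data, consented = _protected[token]
    if endpoint not in consented:
        raise PermissionError("no user consent for this endpoint")
    return data

ref = wrap({"email": "alice@example.com"}, {"crm.example.com"})
print(redeem(ref, "crm.example.com"))  # released only to a consented endpoint
```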

 

While mobile and Cloud technologies have expanded the reach of the enterprise, moving to a micro-perimeter based security model may be the key to a massively scalable security model.  What is big is now small and what is small is now big.

 

About the author:

Ed King, VP Product Marketing – Emerging Technologies, Axway (following its acquisition of Vordel)
Ed has responsibility for Product Marketing of emerging technologies around Cloud and Mobile at Axway, following their recent acquisition of Vordel. At Vordel, he was VP Product Marketing for the API Server product that defined the Enterprise API Delivery Platform. Before that he was VP of Product Management at Qualys, where he directed the company’s transition to its next generation product platform. As VP of Marketing at Agiliance, Ed revamped both product strategy and marketing programs to help the company double its revenue in his first year of tenure. Ed has also held senior executive roles in Product Management and Marketing at Qualys, Agiliance, Oracle, Jamcracker, Softchain and Thor Technologies. He holds an engineering degree from the Massachusetts Institute of Technology and a MBA from the University of California, Berkeley.

Upcoming Cloud Security Training in EMEA – sign up today!

Securosis has recently updated the CCSK training curriculum to be in alignment with the Cloud Security Alliance Guidance V3.0, and the training class is much improved. Many of the hands-on exercises have been overhauled, and if you are looking to get familiar with cloud security you will want to check out this class.
A unique CCSK training class will take place April 8-10 in Reading, UK, delivering the Basic, Plus, and Train the Trainer (TTT) courses. That's right, there will be a third day to train the next group of CCSK curriculum instructors. Securosis' Mike Rothman, one of the developers of the training curriculum and one of two people certified to train other instructors, will be the instructor.
With the CSA making a fairly serious investment, as evidenced by their recent announcement naming HP as a Master Training Partner and some other upcoming strategic alliances, the CCSK is going to grow gangbusters in 2013. So if you do training, or would like cloud security to be a larger part of your business, getting certified as a CCSK trainer would be a good thing. If you want to become certified to teach, you need to attend one of these courses.
And even if you aren’t interested in teaching, it’s also a good opportunity to get trained by the folks who built the course.

You can get details and sign up for the training in Reading, UK, April 8-10. (http://ccskuk.eventbrite.com/)
Here is the description of each of the 3 days of training:
Day 1: There is a lot of hype and uncertainty around cloud security, but this class will slice through the hyperbole and provide students with the practical knowledge they need to understand the real cloud security issues and solutions. The Certificate of Cloud Security Knowledge (CCSK) – Basic class provides a comprehensive one-day review of cloud security fundamentals and prepares attendees to take the Cloud Security Alliance CCSK certification exam. Starting with a detailed description of cloud computing, the course covers all major domains in the latest Guidance document from the Cloud Security Alliance, as well as the recommendations from the European Network and Information Security Agency (ENISA). The Basic class is geared toward security professionals, but is also useful for anyone looking to expand their knowledge of cloud security. (We recommend attendees have at least a basic understanding of security fundamentals, such as firewalls, secure development, encryption, and identity management.)
Day 2: The CCSK-Plus class builds on the CCSK Basic class with expanded material and extensive hands-on activities on a second day of training. The Plus class enhances the classroom instruction with real-world cloud security labs! Students will learn to apply their knowledge as they perform a series of exercises, completing a scenario that brings a fictional organization securely into the cloud. This second day of training includes additional lecture, although students will spend most of their time assessing, building, and securing a cloud infrastructure during the exercises. Activities include creating and securing private clouds and public cloud instances, as well as encryption, applications, identity management, and much more.
Day 3: The CCSK Instructor workshop adds a third day to train prospective trainers. More detail about how to teach the course will be presented, as well as a detailed look into the hands-on labs, and an opportunity for all trainers to present a portion of the course. Click here for more information on the CCSK Training Partner Program (PDF) – https://cloudsecurityalliance.org/wp-content/uploads/2011/05/CCSK-Partner-Program.pdf.

 

The Dark Side of Big Data: CSA Opens Peer Review Period for the “Top Ten Big Data and Privacy Challenges” Report

Big Data seems to be on the lips of every organization's CXO these days. By exploiting Big Data, enterprises are able to gain valuable new insights into customer behavior via advanced analytics. However, what often gets lost amid all the excitement are the very real and numerous security and privacy issues that go hand in hand with Big Data.  Traditional security mechanisms were simply never designed to deal with the reality of Big Data, which often relies on distributed, large-scale cloud infrastructures, a diversity of data sources, and the high volume and frequency of data migration between different cloud environments.

To address these challenges, the CSA Big Data Working Group released an initial report, The Top 10 Big Data Security and Privacy Challenges, at CSA Congress 2012. It was the first such industry report to take a holistic view of the wide variety of big data challenges facing enterprises. Since then, the group has been working to further its research, assembling detailed information and use cases for each threat.  The result is the first Top 10 Big Data and Privacy Challenges report and, beginning today, the report is open for peer review, during which CSA members are invited to review and comment on the report prior to its final release. The 35-page report outlines the unique challenges presented by Big Data through narrative use cases and identifies the dimension of difficulty for each challenge.

The Top 10 Big Data and Privacy Challenges have been enumerated as follows:

  1. Secure computations in distributed programming frameworks
  2. Security best practices for non-relational data stores
  3. Secure data storage and transactions logs
  4. End-point input validation/filtering
  5. Real-time security monitoring
  6. Scalable and composable privacy-preserving data mining and analytics
  7. Cryptographically enforced data centric security
  8. Granular access control
  9. Granular audits
  10. Data provenance

The goal of outlining these challenges is to raise awareness among security practitioners and researchers so that industry-wide best practices might be adopted to address these issues as they continue to evolve. The open review period ends March 18, 2013.  To review the report and provide comments, please visit https://interact.cloudsecurityalliance.org/index.php/bigdata/top_ten_big_data_2013 .

Tweet this: The Dark Side of Big Data: CSA Releases Top 10 Big Data and Privacy Challenges Report. http://bit.ly/VHmk0d

CSA Releases CCM v 3.0

The Cloud Security Alliance (CSA) today has released a draft of the latest version of the Cloud Control Matrix, CCM v3.0. This latest revision to the industry standard for cloud computing security controls realigns the CCM control domains to achieve tighter integration with the CSA’s “Security Guidance for Critical Areas of Focus in Cloud Computing version 3” and introduces three new control domains. Beginning February 25, 2013 the draft version of CCM v3.0 will be made available for peer review through the CSA Interact website with the peer review period closing March 27, 2013, and final release of CCM v3.0 on April 1, 2013.

The three new control domains ("Mobile Security", "Supply Chain Management, Transparency and Accountability", and "Interoperability & Portability") address the rapidly expanding methods by which cloud data is accessed, the need to ensure due care is taken in the cloud provider's supply chain, and the minimization of service disruptions in the face of a change to a cloud provider relationship.

The “Mobile Security” controls are built upon the CSA’s “Security Guidance for Critical Areas of Mobile Computing, v1.0” and are the first mobile device specific controls incorporated into the Cloud Control Matrix.

The "Supply Chain Management, Transparency and Accountability" control domain seeks to address risks associated with governing data within the cloud, while the "Interoperability & Portability" domain brings to the forefront considerations for minimizing service disruptions in the face of a change in a cloud vendor relationship or an expansion of services.

The realigned control domains have also benefited from changes in language that improve the clarity and intent of each control and, in some cases, from realignment within the expanded domains to ensure cohesiveness within each control domain and minimize overlap.

The draft of the Cloud Control Matrix can be downloaded from the Cloud Security Alliance website and the CSA welcomes peer review through the CSA Interact website.

The CSA invites all interested parties to participate in the peer review and in the CSA Cloud Controls Matrix Working Group Meeting to be held during the week of the RSA Conference, at 4pm PT on February 28, 2013, in the Franciscan Room of the Sir Francis Drake Hotel, 450 Powell St, San Francisco, CA.

CSA Drafts New SOC Position Paper

Phil Agcaoili, Founding Member, Cloud Security Alliance

David Barton, Principal, UHY Advisors

 

In June 2011, the American Institute of Certified Public Accountants (AICPA) eliminated SAS 70, which had been a commonly used reporting standard within the information technology industry for providing third-party audits of controls.  At that time, the AICPA introduced three Service Organization Control (SOC) reporting options intended to replace SAS 70: SOC 1 (and the associated SSAE 16 guidance), SOC 2, and SOC 3.

The new AICPA reporting framework was created to eliminate confusion in the marketplace for service organizations (including Cloud providers) wishing to provide their customers with third-party assurance on their controls.   Part of this confusion stems from buyers' lack of knowledge regarding the purpose of each type of available report and its intended use.

The Cloud Security Alliance (CSA) has drafted the following position paper as a means to educate its members and provide guidance on selecting the most appropriate reporting standard.

After careful consideration of alternatives, the Cloud Security Alliance has determined that for most cloud providers, a SOC 2 Type 2 attestation examination conducted in accordance with AICPA standard AT Section 101 (AT 101) utilizing the CSA Cloud Controls Matrix (CCM) as additional suitable criteria is likely to meet the assurance and reporting needs of the majority of users of cloud services.

We’d like to thank the following people for their time and energy over the past year to help develop, define, harden, and deliver this guidance that we know will help our industry—Chris Halterman (E&Y), David Barton (UHY Advisors), Jon Long (CompliancePoint), Dan Schroeder (Habif, Arogeti & Wynne), Ryan Buckner (BrightLine), Beth Ross (E&Y), Jim Reavis (Cloud Security Alliance), Daniele Catteddu (Cloud Security Alliance), Audrey Katcher (Rubin Brown), Erin Mackler (AICPA), Janis Parthun (AICPA), and Phil Agcaoili (Cox Communications).

 

When Good Is Not Good Enough: NIST Raises the Bar for Cloud Data Protection Vendors

Earlier this year, the National Institute of Standards and Technology (NIST) released a publication titled Cloud Computing Synopsis & Recommendations (Special Publication 800-146) describing in detail the current cloud computing environment, explaining the economic opportunities and risks associated with cloud adoption, and openly addressing the security and data privacy challenges. NIST makes numerous recommendations for companies or agencies considering the move to the cloud (including delivering a strong case for uniform management practices in the data security and governance arenas).

 

The report highlights several reasons why cloud-based SaaS applications present heightened security risks. As a means to offset the threats, NIST’s recommendation on cloud encryption is clear-cut: organizations should require FIPS 140-2 compliant encryption to protect their sensitive data assets. This should apply to stored data as well as application data, and for Federal agencies, it’s a firm requirement, not simply a best practice or recommended guideline.

 

What does FIPS 140-2 validation mean? An encryption vendor whose cryptographic module attains this validation attests that its solution:

 

  1. Uses an approved algorithm,
  2. Handles the encryption keys appropriately, and
  3. Always handles the data to be encrypted in a certain way, in a certain block size, with a certain amount of padding, and with some amount of randomness so the ciphertext can’t be searched.

 

Compare this to another level of validation, FIPS 197. FIPS 197 is an algorithmic standard that addresses the Advanced Encryption Standard (AES). As a standard that is used worldwide, AES is approved by the U.S. government to satisfy only one condition listed above – condition (1) “Uses an approved algorithm.” However, an encryption solution that only incorporates the validated algorithms of FIPS 197 does not meet security requirements (2) and (3) above, and hence is insufficient to be certified as FIPS 140-2 (minimizing its usefulness for those looking to use strong encryption).

 

Why is validation important? Well – it is a big deal for security professionals entrusted with deploying systems for protecting sensitive data. These differing standards leave the door open for confusion amid various market claims. Some solution vendors say “We can do AES encryption so our encryption is good.” Or “We use Military Grade encryption.” The reality is that if it is not FIPS 140-2 validated, stating something is Military Grade is clearly misleading.

 

One of the hottest areas for encryption technology is the Cloud – specifically, encrypting sensitive data stored and processed within SaaS or PaaS applications such as Oracle CRM On Demand or Salesforce.com. When you strongly encrypt data, for example with a FIPS 140-2 validated cryptographic module, it can “break” the end user’s experience with an application. For example, what happens when you try to search on a field like LAST NAME if all of the values, such as “Smith,” stored in the LAST NAME field have been encrypted? Well, your search will come back empty (and you’d be a pretty frustrated user).
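A tiny example makes the point; assuming the Python cryptography package, encrypting the value "Smith" produces ciphertext that neither contains "SMI" nor even matches a second encryption of the same name, so ordinary search and equality matching stop working.

```python
# Why strong encryption "breaks" search, assuming the Python "cryptography" package.
from cryptography.fernet import Fernet

cipher = Fernet(Fernet.generate_key())
stored = cipher.encrypt(b"Smith")          # what actually lands in the cloud field

print(b"SMI" in stored)                    # False: a search on SMI* finds nothing
print(cipher.encrypt(b"Smith") == stored)  # False: even equality matching fails,
                                           # because strong encryption is randomized
```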

 

A new class of products, which Gartner calls Cloud Encryption Gateways, has emerged to tackle this challenge. These solutions encrypt sensitive data before it leaves an organization’s firewall so that only an encrypted value goes into the cloud for processing and storage. And they also promise to “preserve functionality,” so you can still pull up a last name like SMITH on a search of SMI* even though the last names put in the cloud have been encrypted. Cool, right?

 

But you have to be careful as some vendors do this “magic” by modifying the encryption algorithms to ensure that a few characters always line up the same way in order to preserve the functionality I described (common operations like searching and sorting, etc.). This approach utilizes a weakened form of encryption that is certainly not FIPS 140-2 encryption. From a certification standpoint it doesn’t have any strength behind it; it just has a certification that says “If you run these strings through a certain way, you will get a result that looks like this” (FIPS 197).

 

It is important to remember that the implementation of AES without FIPS 140-2 is treated by the U.S. Federal government as clear text. Why? When you water down an encryption algorithm (like in the earlier example), you open up the encryption engine to crypto analysis, which creates a much easier path to cracking the data. This, by definition, puts sensitive data at risk. Solutions using these weakened algorithms make enterprises wrestle with the difficult tradeoff between meeting requirements for data privacy/protection and the overall usability of their application. This is a no-win scenario.

 

The good news is that there are some innovative approaches out there that do not rely on this sort of methodology. So my advice is to do your homework, ask the hard questions of your suppliers, and make sure your information is protected by the strongest techniques possible. Enterprises can find solutions that will keep all of their interested parties satisfied:

 

  • Privacy & Security Professionals: Can use industry acknowledged strong encryption techniques, such as FIPS 140-2, or tokenization
  • Business End-Users: Can get all of the SaaS or PaaS application functionality they need – security does not “break” the application’s usability
  • IT Professionals: Can deploy a standards-based, scalable platform that meets security and business needs and scales to support multiple clouds

 

And an alternative technique, called tokenization, also deserves a mention. Tokenization, sort of a “first cousin” of encryption, is a process by which a data field, such as a primary account number (PAN) from a credit or debit card, is replaced with a surrogate value called a token. De-tokenization is the reverse process of redeeming a token for its associated original value.

 

While there are various approaches to creating tokens, they typically are simply randomly generated values that have no mathematical relation to the original data field. Herein lies the inherent security of the approach – it is nearly impossible to determine the original value of the sensitive data field by knowing only the surrogate token value. So if a criminal got access to the token in the cloud, there is no “key” that could ever decipher it. The true data value never leaves the safety of the token vault stored securely behind an organization’s firewall.
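A minimal sketch of that flow might look like the following; the dictionary stands in for a hardened token vault purely for illustration, whereas a real deployment keeps the vault securely behind the organization's firewall.

```python
# Minimal sketch of tokenization with an on-premises token vault. The dictionary
# stands in for a hardened vault kept behind the organization's firewall.
import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original PAN, never leaves the premises

    def tokenize(self, pan: str) -> str:
        token = secrets.token_hex(8)  # random surrogate, no mathematical link to the PAN
        self._vault[token] = pan
        return token                  # only the token is sent to the cloud

    def detokenize(self, token: str) -> str:
        return self._vault[token]     # possible only with access to the vault

vault = TokenVault()
t = vault.tokenize("4111111111111111")
print(t)                   # random hex string, useless to an attacker in the cloud
print(vault.detokenize(t)) # original PAN, recoverable only behind the firewall
```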

 

Tokenization as an obfuscation technique has proven especially useful for organizations in some geographic jurisdictions with legal requirements specifying that sensitive data physically reside within country borders at all times. Privacy and security professionals find that tokenization technology provides a workable solution in these instances and overcomes the strict data residency rules enforced in many countries.

 

So whether it is industry acknowledged strong encryption or tokenization, make sure you choose robust, strong and validated techniques that will allow you to meet your security objectives. Never lose sight of the primary goal of adopting a security solution and avoid the temptation to sacrifice security strength for usability benefits. In the end, it truly is not worth the risk.

 

David Stott is senior director, product management, at PerspecSys where he leads efforts to ensure products and services meet market requirements.

 

Critical Infrastructure and the Cloud

Cloud computing continues to be a hot topic. But so what if people are talking about it; who is actually adopting it? One of the questions I have been asking myself is, ‘Will cloud be adopted for critical infrastructure? And what is the security perspective on this?’

Naturally, a blog post to answer that question will never really do the topic justice. But it is a crucial issue. I wrote about critical cloud computing a year ago on my blog (http://blogs.mcafee.com/enterprise/the-audacity-of-cloud-for-critical-infrastructure), and over the past years I have worked on these issues, for example with the European Network and Information Security Agency (ENISA), which has published the white paper Critical Cloud Computing: A CIIP Perspective on cloud computing services.

The ENISA paper focuses on large cyber disruptions and large cyber attacks (as in the EU’s Critical Information Infrastructure Protection (CIIP) plan) and looks at the relevant underlying threats such as natural disasters, power network outages, software bugs, exhaustion due to overload, cyber attacks, and so on. It underlines the strengths of cloud computing when it comes to dealing with natural disasters, regional power cuts and DDoS attacks. At the same time, it highlights that the impact of cyber attacks could be very large because of the concentration of resources. Every day people discover software exploits in widely used software (this week UPnP, last month Ruby on Rails, and so on). What would be the impact if there was a software exploit for a cloud platform used widely across the globe?

As an expert on the ENISA Cloud Security and Resilience Working Group, I see this white paper as the starting point for discussions about the big cloud computing risks from a CIIP perspective. Revisiting the risk assessments we worked on in the past is important, mainly because the use of cloud computing is now so different, and because cloud computing is being adopted in critical sectors like finance, energy, transport and even governmental services.

A discussion about the CIIP perspective on cloud computing becomes all the more relevant in the light of the EU’s Cyber Security strategy, which will focus on critical sectors and preventing large-scale cyber attacks and disruptions. The strategy will be revealed by the European Commission in February and it will be interesting to see what role cloud computing will play in the strategy.

The report is available on the ENISA website at: https://resilience.enisa.europa.eu/cloud-security-and-resilience/cloud-computing-benefits-risks-and-recommendations-for-information-security/view

There is no doubt that internet connections and cloud computing are becoming the backbone of our society. Their adoption within critical infrastructure sectors means that resilience and security become even more imperative for all of us.

By Raj Samani, EMEA Strategic Advisor, CSA, and EMEA CTO, McAfee

@Raj_Samani

Towards a “Permanent Certified Cloud”: Monitoring Compliance in the Cloud with CTP 3.0

Cloud services can be monitored for system performance but can they also be monitored for compliance? That’s one of the main questions that the Cloud Trust Protocol aims to address in 2013.

Compliance and transparency go hand in hand.

The Cloud Trust Protocol (CTP) is designed to allow cloud customers to query cloud providers in real-time about the security level of their service. This is measured by evaluating “security attributes” such as availability, elasticity, confidentiality, location of processing or incident management performance, just to name a few examples. To achieve this, CTP will provide two complementary features:

  • First, CTP can be used to automatically retrieve information about the security offering of cloud providers, as typically represented by an SLA.
  • Second, CTP is designed as a mechanism to report the current level of security actually measured in the cloud, enabling customers to be alerted about specific security events.

These features will help cloud customers compare competing cloud offerings to discover which ones provide the level of security, transparency and monitoring capabilities that best match the control objectives supporting their compliance requirements. Additionally, once a cloud service has been selected, the cloud customer will also be able to compare what the cloud provider offered with what was later actually delivered.

For example, a cloud customer might decide to implement a control objective related to incident management through a procedure that requires certain security events to be reported back to a specific team within a well-defined time-frame. This customer could then use CTP to ask for the maximum delay the cloud provider commits to when reporting incidents to customers during business hours. The same cloud customer may also ask for the percentage of incidents that were actually reported back to customers within that specific time limit during the preceding two-month period. The first example is typical of an SLA, while the second one describes the real measured value of a security attribute.
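Purely as a hypothetical illustration of those two kinds of queries (the endpoint paths and attribute names below are invented for this sketch and do not reflect the draft CTP 3.0 API):

```python
# Hypothetical illustration of the two query types described above. The endpoint
# paths and attribute names are invented for this sketch and do not reflect the
# draft CTP 3.0 API.
import requests

BASE = "https://ctp.provider.example.com/api"  # hypothetical provider endpoint

# 1. What the provider commits to (the SLA-style objective).
objective = requests.get(
    f"{BASE}/attributes/incident-reporting-delay/objective", timeout=10
).json()
print(objective)  # e.g. {"max_delay_minutes": 60, "scope": "business-hours"}

# 2. What was actually measured over the preceding two months.
measured = requests.get(
    f"{BASE}/attributes/incident-reporting-delay/measurements",
    params={"period": "P2M"},
    timeout=10,
).json()
print(measured)   # e.g. {"reported_within_limit_pct": 97.5}
```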

CTP is thus designed to promote transparency and accountability, enabling cloud customers to make informed decisions about the use of cloud services, as a complement to the other components of the GRC stack. Real time compliance monitoring should encourage more businesses to move to the cloud by putting more control in their hands.

From CTP 2.0 to CTP 3.0

CTP 2.0 was born in 2010 as an ambitious framework designed by our partner CSC to provide a tool for cloud customers to “ask for and receive information about the elements of transparency as applied to cloud service providers”. CSA research has begun the task of transforming this original framework into a practical and implementable protocol, referred to as CTP 3.0.

We are moving fast, and the first results are already ready for review. On January 15th, CSA completed a first review version of the data model and a RESTful API to support the exchange of information between cloud customers and cloud providers, in a way that is independent of any cloud deployment model (IaaS, PaaS or SaaS). This is now going through the CSA peer review process.

Additionally, a preliminary set of reference security attributes is also undergoing peer review. These attributes are an attempt to describe and standardize the diverse approaches taken by cloud providers to expressing the security features reported by CTP. For example, we have identified more than five different ways of measuring availability. Our aim is to make explicit the exact meaning of the metrics used. For example, what does unavailability really mean for a given provider? Is their system considered unavailable if a given percentage of users reports complete loss of service? Is it considered unavailable according to the results of some automated test to determine system health?

As well as all this nice theory, we are also planning to get our hands dirty and build a working prototype implementation of CTP 3.0 in the second half of 2013.

Challenges and research initiatives

While CTP 3.0 may offer a novel approach to compliance and accountability in the cloud, it also creates interesting challenges.

To start with, providing metrics for some security attributes or control measures can be tricky. For example, evaluating the quality of vulnerability assessments performed on an information system is not trivial if we want results to be comparable across cloud providers. Other examples are data location and retention, which are both equally complex to monitor, because of the difficulty of providing supporting evidence.

As a continuous monitoring tool, CTP 3.0 is a nice complement to traditional audit and certification mechanisms, which typically only assess compliance at a specific point in time. In theory, this combination brings up the exciting possibility of a “permanently certified cloud”, where a certification could be extended in time through automated monitoring. In practice however, making this approach “bullet-proof” requires a strong level of trust in the monitoring infrastructure.

As an opportunity to investigate these points and several other related questions, CSA has recently joined two ambitious European Research projects: A4Cloud and CUMULUS. A4Cloud will produce an accountability framework for the entire cloud supply chain, by combining risk analysis, creative policy enforcement mechanisms and monitoring. CUMULUS aims to provide novel cloud certification tools by combining hybrid, incremental and multi-layer security certification mechanisms, relying on service testing, monitoring data and trusted computing proofs.

We hope to bring back plenty of new ideas for CTP!

Help us make compliance monitoring a reality!

A first draft of the “CTP 3.0 Data Model and API” is currently undergoing expert review and will then be opened to public review. If you would like to provide your expert feedback, please do get in touch!

by Alain Pannetrat 


Dr. Alain Pannetrat is a Senior Researcher at Cloud Security Alliance EMEA. He works on CSA’s Cloud Trust Protocol, providing monitoring mechanisms for cloud services, as well as on CSA research contributions to EU-funded projects such as A4Cloud and CUMULUS. He is a security and privacy expert, specialized in cryptography and cloud computing. He previously worked as an IT Specialist for the CNIL, the French data protection authority, and was an active member of the Technology Subgroup of the Article 29 Working Party, which informs European policy on data protection. He received a PhD in Computer Science after conducting research at Institut Eurecom on novel cryptographic protocols for IP multicast security.