April 30, 2013
By Wolfgang Kandek
It is a common belief that buying more robust and expensive security products offers the best protection from computer-based attacks, and that the expenditure ultimately pays off by preventing data theft. According to Gartner, more than $50 billion is spent annually on security infrastructure software, hardware and services, a figure it expects to keep growing and reach $86 billion by 2016. With security investments skyrocketing, the number of successful attacks should be decreasing, but it isn't. That's the reality. There is no one thing, or even combination of things, that can guarantee you won't get hacked. However, there are some basic precautions companies can take that put up enough defenses to make it not worth a hacker's time and effort to break in.
The recent Verizon Business 2013 Data Breach Investigations Report revealed that 78 percent of initial intrusions were rated as low difficulty and likely could have been avoided if IT administrators had used some intermediate and even simple controls. Using outdated software versions, non-hardened configurations and weak passwords are just a few of the many common mistakes businesses make. These basic precautions are being overlooked, or worse, ignored.
Implement a security hygiene checklist
One of the simplest and most effective ways for companies to improve their defenses is to create and closely adhere to a checklist for basic security hygiene. The Centre for the Protection of National Infrastructure in the UK and the Center for Strategic & International Studies (CSIS) in the U.S. released a list of the top 20 critical security controls for defending against the most common types of attacks. Topping the list are creating an inventory of authorized and unauthorized devices and software, securing configurations for hardware and software, and continuous vulnerability assessment and remediation.
A long list of organizations is already using this checklist and seeing results, including the U.S. Department of State, NASA, Goldman Sachs and OfficeMax. The State Department followed the guidelines for 40,000 computers in 280 sites around the world and, within the first nine months, reduced its risk by 90 percent. In Australia, the Department of Industry, Innovation, Science, Research and Tertiary Education, following the defence agency's recommended controls, reported that it had eliminated 85 percent of all incidents and blocked malware it would otherwise have missed, without purchasing additional software or increasing end-user restrictions.
My own security precaution checklist includes:
- Promptly apply security patches for applications and operating systems to keep all software up to date
- Harden software configurations
- Curtail admin privileges for users
- Use 2-factor authentication for remote access services
- Change default admin passwords
- Prohibit Web surfing with admin accounts
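The checklist above lends itself to automation. The sketch below is one illustrative way to encode a few of these checks as code; the credential list, patch window, and host-record fields are assumptions for the example, not part of any standard.

```python
# Minimal sketch of an automated hygiene checklist. The default-credential
# list, 30-day patch window, and host-record format are illustrative
# assumptions, not a complete audit.

DEFAULT_CREDENTIALS = {"admin", "password", "changeme", "root"}

def check_password(password: str) -> bool:
    """Flag default or weak admin passwords."""
    return password.lower() not in DEFAULT_CREDENTIALS and len(password) >= 12

def check_patch_lag(days_since_last_patch: int, max_days: int = 30) -> bool:
    """Flag hosts that have not been patched within the allowed window."""
    return days_since_last_patch <= max_days

def run_checklist(host: dict) -> list:
    """Return the list of failed checks for one host record."""
    failures = []
    if not check_password(host["admin_password"]):
        failures.append("default or weak admin password")
    if not check_patch_lag(host["days_since_last_patch"]):
        failures.append("security patches out of date")
    if host.get("users_have_admin", False):
        failures.append("end users hold admin privileges")
    return failures

host = {"admin_password": "changeme", "days_since_last_patch": 45,
        "users_have_admin": True}
print(run_checklist(host))
```

A real deployment would pull host facts from an inventory or vulnerability scanner rather than a hand-built dictionary, but the principle is the same: a checklist only helps if something runs it continuously.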
Making it happen
The hardest part of changing security policies is getting IT administrators on board to drive these initiatives. Since they are already managing heavy workloads, it is important to present the efforts as ways of strengthening existing security measures rather than adding responsibilities. Incentivizing implementation is another effective strategy. Or, you can always remind them that cleaning up after an attack is harder than preventing one, but in case you need more ammunition for motivating IT:
- Friendly competition – One engineer at NASA boosted participation by awarding badges, points and other merits as if it were a game, giving employees incentive to compete for the highest score.
- Company-wide report card – The Department of State assigns letter grades based on threat risk for each location including various aspects of security and compliance. For instance, a lower grade would be given for software that is missing critical patches and infrequent vulnerability scanning. The report cards are published internally for all locations to see and again boost participation by competition and cooperation.
- Show them the money – The biggest incentive of all would be offering bonuses or time off for quantifiable improvements in security and reduced risk.
While spending money on the latest security product to build bigger and stronger walls may impress the board of directors, it won’t necessarily deter attacks. Ultimately, the goal is to implement fairly basic but often forgotten measures to eliminate opportunistic attacks and discourage hackers who don’t want to waste the time and energy trying to get in. Some renewed attention to the basics can mean the difference between suffering from an attack and repelling one.
Wolfgang Kandek, CTO, Qualys
As the CTO for Qualys, Wolfgang is responsible for product direction and all operational aspects of the QualysGuard platform and its infrastructure. Wolfgang has over 20 years of experience in developing and managing information systems. His focus has been on Unix-based server architectures and application delivery through the Internet. Prior to joining Qualys, Wolfgang was Director of Network Operations at the Online Music streaming company myplay.com and at iSyndicate, an Internet media syndication company. Earlier in his career, Wolfgang held a variety of technical positions at EDS, MCI and IBM. Wolfgang earned a Masters and a Bachelors degree in Computer Science from the Technical University of Darmstadt, Germany.
Wolfgang is a frequent speaker at security events and forums including Black Hat, RSA Conference, InfoSecurity UK and The Open Group. Wolfgang is the main contributor to the Laws of Vulnerabilities blog.
Company website: www.qualys.com
April 30, 2013
By: Dan Dagnall, Chief Technology Strategist, Fischer International Identity
As BYOD and other mobile device related initiatives take hold, sooner rather than later, identity management will once again be considered as an enforcement mechanism, and rightly so.
Identity and access management (IAM) has grown up over the years. Its early beginnings were in metadata management and internal synchronization of data to/from target applications. Lately it seems like one cannot roll out a new technology or service without considering the effect IAM will have on the initial roll-out, as well as ongoing enforcement of security, access, and policy related evaluations.
IAM is becoming the hub for all things security, and so it should be for mobile device management. MDM provides an administrative interface for managing server-related components, as well as self-service interfaces and over-the-air provisioning. All of these components are key to a successful BYOD strategy, and all of them should treat IAM as the authority in the overall decision-making process:
- When to provision the device (including the association of the device to the end user)
- When to lock/wipe the device
- How to enable users to request apps for download, and which apps they qualify for
- How to allow users to leverage the device for multi-factor authentication
When to provision the device (including the association of the device to the end user)
As a user’s identity is created within an organization, MDM actions are needed to secure the end point device and associate that device with the end user for BYOD initiatives. MDM technology provides for the ability to provision apps to the device. In the IAM world, the same exercise occurs when a new user is detected and evaluated against access policies to validate and define the user’s identity in terms of application access and the exact permissions the user’s identity is to be granted.
Given that IAM should be looked to and leveraged as the authority over mobile device provisioning, organizations in the above scenario should not reinvent the wheel. Rather, they should consider extending their existing IAM resource pool (i.e., those items controlled by IAM and associated policies defined within IAM) to include MDM servers, administrators, and end user device management. I am not saying that IAM should replicate or mirror the functionality provided by MDM servers and management consoles, but I am advocating that those interfaces and servers fall under the umbrella of enforcement attributed to what IAM already does for your organization. In the IAM world, this consists of an integration component that enables external enforcement of MDM-related policies and actions originating from the IAM stack.
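To make the integration idea concrete, here is a hypothetical sketch of an IAM provisioning event driving MDM enrollment. The `MDMConnector` interface, role names, and app policy are invented for illustration; real IAM suites and MDM servers expose their own connector APIs.

```python
# Hypothetical sketch: an IAM "identity created" event driving MDM actions.
# MDMConnector is a stand-in for an integration point to an MDM server's API;
# the role-to-app policy is an assumed example.

class MDMConnector:
    """Stand-in for the MDM server integration component."""
    def enroll_device(self, user_id, device_id):
        return "enrolled {} for {}".format(device_id, user_id)

    def push_apps(self, device_id, apps):
        return "pushed {} to {}".format(sorted(apps), device_id)

APP_POLICY = {  # which app set each role qualifies for (assumed)
    "engineering": {"vpn", "email", "wiki"},
    "sales": {"email", "crm"},
}

def on_identity_created(user_id, role, device_id, mdm):
    """IAM evaluates the access policy, then directs the MDM layer."""
    apps = APP_POLICY.get(role, {"email"})  # minimal default set
    actions = [mdm.enroll_device(user_id, device_id)]
    actions.append(mdm.push_apps(device_id, apps))
    return actions

print(on_identity_created("jdoe", "sales", "dev-42", MDMConnector()))
```

The point is the direction of control: the policy evaluation lives in IAM, and the MDM server only executes what IAM has already decided.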
When to lock/wipe the device
Blocking users from accessing their applications (for multiple reasons) is a fundamental capability of IAM. There is no need to deploy new MDM technologies, write new integration points or, again, reinvent the wheel when it comes to disabling or locking a user out of an application on the device, or the device itself. In many cases, automated processes already exist on the identity side that will immediately disable or lock a user out of the system if certain criteria are met. For instance, a termination event will initiate the disabling (or locking) actions. So instead of disabling everything else and then using a separate MDM interface to push actions to the mobile device, those termination (or disabling) actions should be driven from within the context of IAM.
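A termination event handler in the IAM layer might fan out like this. All the interfaces and the user-record shape here are hypothetical, and the "selective wipe" action is the MDM capability the article describes, not something IAM performs itself.

```python
# Sketch of an IAM termination event fanning out to both application
# deprovisioning and MDM device actions. The user-record fields and the
# action strings are illustrative assumptions.

def on_termination(user):
    """Disable application access first, then lock and selectively wipe
    the user's managed devices via the MDM integration."""
    actions = []
    for app in user["applications"]:
        actions.append("disable {} in {}".format(user["id"], app))
    for device in user["devices"]:
        # Selective wipe: remove only corporate data on BYOD hardware.
        actions.append("lock+selective-wipe " + device)
    return actions

user = {"id": "jdoe", "applications": ["email", "crm"], "devices": ["dev-42"]}
for action in on_termination(user):
    print(action)
```

Driving both sides from one event is exactly the "single authority" argument: the HR-triggered termination reaches the phone through the same pipeline that already reaches every other application.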
How to enable users to request apps for download, and which apps they qualify for
Requesting access in the form of applications or associated permissions is not new to IAM either. MDM brings the concept of controlling which apps a user is able to download and run on a device. This is a fundamental IAM capability: evaluate a user, and qualify him or her for a specific application set and specific permissions based on the user's role(s) within the organization. Extending this type of self-service capability to users via IAM, instead of through a separate solution strictly for MDM, can potentially cost your organization much less. IAM solves this problem very well by enforcing access- and permission-related policies at login, limiting which applications a user is able to request. Proper identity and access management will evaluate the user at login time and determine what he or she is able to request. This concept is not new to IAM, and extending it to include enforcement of [mobile] apps can save you time and money.
How to allow users to leverage the device for multi-factor authentication
This new(er) trend places the mobile device in the spotlight more than any other mentioned in this article. The requirement for organizations to leverage the mobile device as a second form of authentication (and identity verification) ties the device, BYOD and mobile device management directly to IAM. Organizations developing an MDM strategy and deploying a solution must consider the effects of identity and access related policies while developing that strategy. Organizations that look to their existing IAM solution for answers regarding MDM management and enforcement will find that their IAM stack is a viable option for securing mobile devices. In many cases, extending the IAM solution to encompass the new MDM components will take work; however, integration between different platforms is something IAM vendors (or developers) do very well, and lack of integration with your new MDM platform should not be a reason to forego merging IAM and MDM.
Overall, identity and access management will play an increasingly important role regarding enforcement of MDM policy, as well as authorization of MDM admins to take actions against end user mobile devices. If your organization has an extensive IAM solution in place, I strongly suggest you consider placing most (if not all) enforcement, provisioning, de-provisioning, and device identification (i.e. associating a device to a user) in the capable hands of your IAM solution. The project may look a lot different than you anticipated, but you’ll find that IAM can provide many more answers than questions when it comes to how you should roll out your new MDM / BYOD strategy.
April 26, 2013
Earlier this year, McKinsey & Company released an article titled “Protecting information in the cloud,” discussing the increased use of cloud computing by enterprises across several industries and the benefits and risks associated with cloud usage. The article recognizes that many organizations are already using cloud applications and as a result realizing the associated efficiency and cost benefits. In fact, most of these organizations are looking to increase their usage of the cloud this year and beyond in both private and public environments. However, there are issues that are inhibiting adoption, such as risks tied to data security and concerns around privacy and compliance.
The McKinsey article rightly points out that allowing perceived risks to bar further adoption of the cloud is not a realistic option for most organizations, given the many compelling benefits offered and the need to be competitive in today’s economy. Enterprises must determine ways to embrace the cloud while also being able to satisfy important questions concerning security, compliance and regulatory protection that are hampering aggressive movement to the cloud.
The benefits of choosing either a public or private cloud option over the traditional on-premise deployment are clearly outlined in the article. McKinsey concludes that the solution for many enterprises will be a hybrid approach of public and private cloud and therefore, the primary question becomes which applications belong in which environments. This is where the article begins to fall short in its analysis of the issues surrounding cloud adoption, because it does not fully consider all solutions available, including cloud encryption gateways.
The McKinsey article recommends applications such as Customer Relationship Management (CRM) and Human Capital Management (HCM) as logical choices for public cloud deployment. However, from my experience, many companies face barriers to even these types of applications for a variety of reasons, including the need to retain full control of any personally identifiable information (customer or employee) or to protect regulated data that may be subject to sector-based compliance requirements (think ITAR, HIPAA, PCI DSS, etc.). These important compliance and regulatory concerns frequently force enterprises down an on-premise path (either a traditional enterprise software implementation or via a private cloud deployment).
In these situations, a cloud encryption gateway can be used to keep the control of sensitive data in the hands of the organization that is adopting the public cloud service. These gateways intercept sensitive data while it is still on-premise and replace it with a random tokenized or strongly encrypted value, rendering it meaningless should anyone hack the data while it is in transit, processed or stored in the cloud. In addition, some gateways ensure that end users have access to all of the cloud application’s features and functions such as ability to do standard and complex searches on data, send email, and generate reports – even though the sensitive data is no longer in the cloud application.
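The core mechanic of such a gateway can be sketched in a few lines. This is an illustrative toy, not a product implementation: real gateways use function-preserving tokenization or strong encryption so that search, sort and reporting still work in the cloud application, which this sketch deliberately does not attempt.

```python
# Toy sketch of a cloud encryption gateway's tokenization step: sensitive
# fields are replaced with random tokens before a record leaves the
# premises, and the token-to-value map never leaves the on-premise vault.
# The field policy ("name", "ssn") is an assumed example.

import secrets

class TokenVault:
    def __init__(self):
        self._vault = {}  # token -> original value, kept on-premise

    def tokenize(self, value):
        token = "tok_" + secrets.token_hex(8)  # random, meaningless token
        self._vault[token] = value
        return token

    def detokenize(self, token):
        return self._vault[token]

SENSITIVE_FIELDS = {"name", "ssn"}  # assumed per-field policy

def outbound(record, vault):
    """The record as the public cloud application would receive it."""
    return {k: vault.tokenize(v) if k in SENSITIVE_FIELDS else v
            for k, v in record.items()}

vault = TokenVault()
sent = outbound({"name": "Alice", "ssn": "123-45-6789", "city": "Boston"}, vault)
assert sent["city"] == "Boston"          # non-sensitive data passes through
assert sent["name"].startswith("tok_")   # sensitive data leaves as a token
assert vault.detokenize(sent["name"]) == "Alice"
```

If the cloud provider is breached, the attacker holds only random tokens; the meaningful values never left the enterprise.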
Applications McKinsey believes should be located on a private cloud include enterprise resource planning (ERP), supply chain management, and custom applications. McKinsey recommends a private deployment option for this class of application largely due to the sensitivity of the data that is processed and stored in them. But private clouds, while a nice improvement over legacy on-premise deployment models, unfortunately cannot approach the TCO and elasticity benefits that true public-cloud SaaS providers offer enterprises. So, just like with CRM and HCM, the real opportunity for this class of applications is to figure out a model that marries the data security of a private cloud deployment with the unique TCO and elasticity value propositions of public cloud.
Here again cloud encryption gateways can play a critical role. As described earlier, enterprises would be able to move these sensitive applications onto a public cloud resource with a cloud encryption gateway that would directly satisfy any corporate concerns regarding data security, privacy and residency requirements.
Of course, not all cloud encryption gateways are created equal, so please refer to this recent paper, which provides important questions to ask when determining which gateway is the right fit for you.
Gerry Grealish leads the marketing & product organizations at PerspecSys Inc., a leading provider of cloud data security and SaaS security solutions that remove the technical, legal and financial risks of placing sensitive company data in the cloud. The PerspecSys Cloud Data Protection Gateway accomplishes this for many large, heavily regulated companies by never allowing sensitive data to leave a customer’s network, while simultaneously maintaining the functionality of cloud applications.
April 26, 2013
By Glenn Choquette, Director of Product Management, Fischer International Identity.
Identity Management (IdM) is not new. Yet after all this time on the market, organizations still have mixed results for end-user adoption, as many organizations that rolled out IdM years ago still haven't achieved their goals: end users keep calling the help desk to reset passwords, to request accounts and to perform other tasks instead of using the self-service identity solution. While most organizations have diligently assessed vendor offerings, fewer have adequately planned how to achieve their utilization objectives. Many organizations assume that end users will automatically start using their IdM solution without any planning or incentives, but that has proven to be false. With user acceptance rates ranging from under 5% to nearly 100%, it's clear that successful IdM rollouts don't just happen: they involve executive sponsorship, planning, education, measurable objectives, metrics, and a variety of "incentives" for achieving the goals. Fortunately, these activities will improve user adoption when launching, or even when "re-launching," IdM.
Best practices for your organization depend on a variety of factors such as its size, culture, geographic distribution, which applications are in the cloud or on-premise, types and diversity of users, previous rollout experiences, the chosen IdM solution, etc. A combination of planning, education, metrics and incentives has proven to maximize both the quality of the end user experience and financial benefits of IdM. Like all projects that involve significant change, executive sponsorship and active executive participation are critical to success.
The first step to planning for rapid user adoption is to understand the capabilities of the chosen solution. Plan to automate as much of the setup as possible to avoid end user inertia. If your solution supports it, plan a transition acceptable to your corporate culture that requires the use of the new solution. If automation isn’t possible with your solution, simplify the registration process as much as possible and increase your use of incentives. Your end-user adoption plan should consider your organization’s IdM objectives as well as the potential costs and risks of each aspect of the plan.
In most organizations, users tend to delay change until they are absolutely convinced of the benefits for themselves; fortunately, IdM has a lot to offer end users: single password to remember, no more waiting in the help desk queue for password resets, no forms to fill out to request access to resources, etc. So, don’t keep it a secret. Market the benefits of IdM before launch. Make users aware of how their lives will be easier.
Metrics and Incentives
Metrics and incentives are pivotal to success and provide ongoing leverage for continued attainment of objectives. They can become your best friends in achieving rapid user adoption. Just as it’s important to “sell” the expected benefits to the user base prior to launch, it can be even more important to keep the momentum going by communicating the observed benefits after launch. If non-IT leaders haven’t already been sold, you’ll want to reach out to them to help carry the torch, as it’s in their own best interest to do so.
Fortunately, compared to legacy IdM solutions, modern IdM solutions achieve faster user adoption with fewer end-user incentives as users face fewer obstacles and are able to clearly see the benefits of using the solutions. Setup activities occur naturally during friendly IdM processes such as receiving new accounts and changing passwords. As more people in the organization become aware of the success of IdM and what it means, both to themselves and to the bottom line, your user base will begin to sell the solution for you. Soon, your modern solution will become the organization’s norm and the unbelievers will be viewed as laggards, under peer pressure to join the team.
Identity Management solutions and implementation methods have improved over the last several years. Whether your organization is new to Identity Management or implemented a solution years ago but is experiencing inadequate utilization, proper planning and execution of solution launch (or re-launch) activities can improve utilization rates.
April 25, 2013
Researchers have successfully breached the Good Technology container. MDM software can only be as secure as the underlying operating system.
As the adoption of smartphones and tablets grows exponentially, one of the biggest challenges facing corporate IT organizations is not the threat of losing the device – likely owned by the employee – but the threat of a targeted attack stealing sensitive corporate data stored on these mobile devices. As a first line of defense, an increasing number of companies rely on Mobile Device Management software and Secure Container solutions to secure and manage corporate data accessed from these mobile devices. However, a recent analysis conducted by Lacoon Mobile Security – presented a few weeks ago at the BlackHat conference in Amsterdam – shows that the leading secure container solution Good Technology can be breached and corporate email stolen from Apple iOS and Android devices.
Lacoon CEO Michael Shaulov spoke with me about the shocking results of this research and made it clear that no matter what MDM software you deploy, you are in danger: MDM and Secure Containers depend on the integrity of the host system. "Ask yourself: if the host system is uncompromised, what is the added value? If the host system is in fact compromised, what is the added value? We've been through this movie before," he said, referring to the endpoint management philosophy inherited from the PC era.
In their presentation “Practical Attacks against Mobile Device Management (MDM)”, Michael Shaulov and Daniel Brodie, Security Researcher, explain the details of how they penetrated the Good Technology container to exfiltrate sensitive corporate email – Good Technology did not respond to my request for comment:
Android 4.0.4 device – Samsung Galaxy S3:
1. The attacker creates a "two-stage" application that bypasses the market's malicious-app identification measures, such as Google Bouncer or other mobile application reputation systems. The app is then published on Google Play or other legitimate Android app stores. By using the "two-stage" technique, the attacker can publish a seemingly innocent application; once the victim installs it, the app retrieves and downloads the actual malicious code.
2. The app exploits a mobile OS vulnerability which allows for privilege escalation. For example, the vulnerability in the Exynos5 chipset released in December 2012 that affects the drivers used by camera and multimedia devices.
3. The malware creates a hidden ‘suid’ binary and uses it for privileged operations, such as reading the mobile logs, as discussed in the next step. The file is placed in an execute-only directory (i.e. –x–x–x), which allows it to remain hidden from most MDM root detectors.
4. The malware listens to events in the ‘adb’ logs. These logs, and their corresponding access permissions, differ between Android versions. Note that for Android version 4.0 and higher root permissions are required in order to read the logs.
5. The malware waits for a log event that signifies that the user is reading an email.
6. The malware dumps the heap using /proc/<pid>/maps and /proc/<pid>/mem. From the dump, it can locate the email structure, exfiltrate it and send it home – perhaps uploading it to an innocuous-looking Dropbox account.
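The hiding trick in step 3 suggests a defensive counterpart: an execute-only directory (mode --x--x--x) cannot be listed, but its permission bits can still be inspected. The sketch below is an illustrative detector along those lines, not code from any actual MDM product.

```python
# Defensive sketch: flag directories that are executable but not readable,
# the "execute-only" pattern used in step 3 to hide a suid helper binary
# from MDM root detectors that merely list directory contents.

import os
import stat

def suspicious_modes(paths):
    """Return the paths whose owner bits are execute-without-read."""
    flagged = []
    for p in paths:
        mode = os.stat(p).st_mode
        executable = bool(mode & stat.S_IXUSR)
        readable = bool(mode & stat.S_IRUSR)
        if executable and not readable:
            flagged.append(p)
    return flagged

# Demonstrate against a locally created execute-only directory.
os.makedirs("demo_hidden", exist_ok=True)
os.chmod("demo_hidden", 0o111)           # --x--x--x, as in the attack
print(suspicious_modes(["demo_hidden"]))
os.chmod("demo_hidden", 0o755)           # restore so it can be removed
os.rmdir("demo_hidden")
```

This only catches one specific hiding technique, which is the article's larger point: artifact checks chase symptoms, not the privilege escalation itself.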
Apple iOS 5.1 device – iPhone:
Malware targeting iOS-based devices must first jailbreak the device and then install the container-bypassing software.
1. The attacker installs a signed application on the targeted device through an Enterprise/Developer certificate. This may require physical access, but there are known instances where this has been done remotely.
2. The attacker uses a jailbreak exploit in order to inject code into the secure container. The Lacoon researchers used the standard DYLD_INSERT_LIBRARIES technique to insert modified libraries into shared memory. In this manner, their (signed) dylib is loaded into memory when the secure container executes.
3. The attacker removes any trace of the Jailbreak.
4. The malware places hooks into the secure container using standard Objective-C hooking mechanisms.
5. The malware is alerted when an email is read and pulls the email from the UI elements of the app.
6. Finally, the malware sends every email displayed on the device to the remote command and control server.
The analysis performed by the Lacoon analysts exposes the security limitations of the secure container approach. Shaulov believes that MDM provides management, not absolute security. It is useful for separating business and personal data in a BYOD scenario; its main use cases are selective remote wipe of enterprise content and copy-and-paste prevention.
Secure containers rely on several defense mechanisms to protect corporate data. Generally these include iOS jailbreak and Android root detection, prevention of app installation from third-party markets to protect against malware and, most importantly, data encryption. However, these measures can be bypassed. On the one hand, there is a very active community involved in jailbreaking and rooting efforts. On the other, the jailbreak/root detection mechanisms are quite limited – see, for example, xCon, a free iOS app built to defeat jailbreak detection. Usually, checks are performed only against artifacts that signify a jailbroken or rooted device: for example, the presence of Cydia, an iOS app that allows the downloading of third-party applications not approved by Apple, or the su tool used on Android to allow privileged operations. More importantly, there are no detection mechanisms for exploitation itself. So even if the secure container recognizes a jailbroken or rooted device, there are no techniques to detect the actual privilege escalation.
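The artifact checks described above amount to looking for well-known file paths. The sketch below shows how naive that is; the paths are the commonly cited jailbreak/root indicators, and, as the article notes, hiding or renaming any of them defeats the check entirely.

```python
# Sketch of the static artifact checks typical jailbreak/root detectors
# perform. The paths are commonly cited indicators; the check is naive by
# design, which is exactly the limitation the article describes.

import os

JAILBREAK_ARTIFACTS = [
    "/Applications/Cydia.app",     # iOS: third-party app store
    "/private/var/lib/apt",        # iOS: apt installed by jailbreaks
    "/system/xbin/su",             # Android: su binary
    "/system/app/Superuser.apk",   # Android: root-management app
]

def looks_jailbroken(extra_paths=()):
    """Return the known artifacts found on this filesystem, if any."""
    candidates = list(JAILBREAK_ARTIFACTS) + list(extra_paths)
    return [p for p in candidates if os.path.exists(p)]

print(looks_jailbroken())
```

An attacker who has already escalated privileges can simply remove or relocate these artifacts, which is why the container's detection logic cannot be trusted once the underlying OS is compromised.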
MDM software and Secure Containers are supposed to detect jailbroken iOS and rooted Android devices but “they are dependent on the underlying operating system sandbox, which can be bypassed”, Shaulov says.
MDM not so secure after all
Sebastien Andrivet, Co-founder and director of ADVTOOLS, took a different approach to auditing the security of MDM products and performed a thorough analysis of the server components, such as the administrative console, and their communications with the mobile devices. I met Andrivet in London at the Mobile and Smart Device Security Conference 2012, where he presented the alarming results of his research. Among other issues, Andrivet found persistent cross-site scripting and cross-site request forgery vulnerabilities in two leading MDM solutions – he would not publicly disclose the names of these products, but I saw the screenshots of the trace logs and spotted some of the leading brands mentioned in the Lacoon report.
Andrivet openly stated that, despite being marketed as security tools, MDM products are not "security products" and are in fact not so secure after all. However, he is also somewhat skeptical about the significance of the Lacoon findings. "Frankly, it is not so easy to penetrate these products, especially on iOS," says Andrivet. For example, to break into the Good container in the way described above, you need physical access to the device and the password. With an iPhone 4, it is still possible to break a 4-digit passcode, but it is not currently feasible to do the same with the iPhone 4S or iPhone 5. Andrivet also observes that while it is possible to repackage an existing iOS application and sign it with your own enterprise certificate, to install it on the device the victim has to explicitly accept the installation of the certificate and then of the application itself. With social engineering this might be possible, but it is definitely not easy. Andrivet points out that the Lacoon researchers did not break the secure container encryption; they found the information in the clear somewhere else – i.e., in memory. What matters is that they found a way to get the data. How they did it (breaking the container or not) is less important: they "breached" the container, even if they didn't "break" it.
The truth is that MDM products, like any other piece of software, suffer from real security vulnerabilities. But the Lacoon research is making headlines based on old versions of these products. "The risk is to provide misleading information," warns Andrivet. In fact, even military-grade spyphone products like FinFisher cannot infiltrate the most recent mobile devices such as the iPhone 4S or 5; it is far easier to attack an Android device than an iOS one.
MDM is no silver bullet
Mobile security is a complex topic, and there is no silver bullet. This is true of security in general, and mobile is no different, says Ojas Rege, Vice President of Strategy at MobileIron, one of the leading MDM vendors mentioned in the research above. The challenge many organizations face is that they compromise user experience in the name of security. For mobile, that's the kiss of death, because users will not accept a compromised experience.
The key is to divide the problem into two: reducing the risk of data loss from well-intentioned users and reducing the risk of malicious attack, continues Rege. The former means, for example, giving users a compelling but secure way to share files instead of using consumer-grade services such as Dropbox. The latter is what these research efforts are really about. MDM is important as a baseline, but a full security program is going to require a great deal of education as well. "Jailbreak/rooting is a cat and mouse game," according to Rege. The reality is that these devices will always have personal use – no matter who owns them – so the chances of malicious software making its way onto the device are high. The level of sandbox security built into the core OS is a key determiner of what other protections might be needed and what the resulting risk might actually be.
The point about MDM not offering absolute security is a bit cavalier, according to David Lingenfelter, Information Security Officer at Fiberlink, another leading MDM product mentioned in the Lacoon research. Anybody in the security community who is touting or expecting absolute security has missed the point. Cybercriminals only have to be right once. While targeted attacks are definitely a reality, containers are designed for more than just stopping a targeted attack. They help with data leak prevention, blocking users from “accidentally” distributing corporate information through their personal apps.
For better or worse, corporate IT still has to work within the confines of a world dominated by compliance. Adding controls around corporate information by using containers helps risk and compliance teams show their auditors that they are taking what is in essence a consumer-grade device and adding corporate-level processes to it, continues Lingenfelter.
Infection is inevitable
The lessons learned from trying to secure traditional endpoints apply here as well. The general consensus in the security community is that endpoint controls are no longer sufficient to protect against targeted attacks. We can expect the same in the mobile world.
“Infection is inevitable,” continues Shaulov. As demonstrated by our research, MDM and secure containers do not and cannot provide absolute security. They are certainly useful tools for separating business and personal data, and as such they should be part of the baseline for a multi-layered approach. Quoting an RSA report, Shaulov argues that “mitigating the effects of malware on corporate data, rather than trying to keep malware off a device entirely, may be a better strategy”.
This new approach requires thinking outside the box, and the industry is now starting to wake up to the challenge, looking to the network level for threat mitigation. For example, solutions such as FireEye, Damballa, Fidelis and Check Point – just to name a few – can examine different network parameters and aberrant behavior to detect a compromised device in the process of exfiltrating data. These parameters may include traffic to well-known C&C servers, heuristic behavioral analysis that flags abnormal activity, suspicious sequences of events, and data intrusion detection.
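To make the idea concrete, here is a minimal sketch of the kind of network-level check described above: flagging a device whose traffic touches known C&C hosts or whose outbound volume far exceeds its historical baseline. All names, hosts and thresholds are illustrative assumptions, not any vendor’s actual detection logic.

```python
# Hypothetical sketch of network-level compromise detection.
# KNOWN_CC_HOSTS and the 10x volume threshold are illustrative placeholders.
KNOWN_CC_HOSTS = {"badhost.example.com", "203.0.113.7"}

def flag_compromise(device_id, flows, baseline_upload_bytes):
    """flows: list of (dest_host, bytes_out) tuples observed for one device.
    Returns a list of alert strings for this device."""
    alerts = []
    total_out = sum(b for _, b in flows)
    # signal 1: any traffic to a destination on the C&C blocklist
    if any(host in KNOWN_CC_HOSTS for host, _ in flows):
        alerts.append("traffic to known C&C server")
    # signal 2: outbound volume far above the device's historical baseline
    if total_out > 10 * baseline_upload_bytes:
        alerts.append("abnormal outbound volume (possible exfiltration)")
    return alerts
```

A real product would correlate many more signals over time; the point is simply that the network view catches behavior that an on-device container cannot see.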
Lingenfelter agrees that the approach to security has been, and needs to remain, one of layers. However, he warns that while technologies based on heuristic-style monitoring and detection of malicious activity have come a long way, they too are far from providing absolute security. Companies have to realize that most mobile technology has been designed for consumers. It has the security focus of consumer devices and applications, which is to make things as easy for the end user as possible. To claim that one single technology or approach will change this and give these devices the security level of corporate devices is reckless. The true objective of mobile device security and management is to add as much security as possible, in layers, without significantly impacting the end user experience.
Have you deployed MDM to your mobile users? Do you trust mobile secure containers with your corporate data? How confident are you that your CEO’s iPhone is not jailbroken – or that it never was? Can you detect a compromised tablet spying on your company’s next board meeting?
About the Author
Cesare Garlati is one of the most quoted and sought‐after thought leaders in the enterprise mobility space. Former Vice President of Mobile Security at Trend Micro, Cesare currently serves as Co‐Chair of the CSA Mobile Working Group – Cloud Security Alliance. Prior to Trend Micro, Mr. Garlati held director positions within leading mobility companies such as iPass, Smith Micro Software and WaveMarket. Prior to this, he was senior manager of product development at Oracle, where he led the development of Oracle’s first cloud application and many other modules of the Oracle E‐Business Suite.
Cesare has been frequently quoted in the press, including such media outlets as The Economist, Financial Times, The Register, The Guardian, ZDNet, SC Magazine, Computing and CBS News. An accomplished public speaker, Cesare has also delivered presentations and featured speeches at many events, including the Mobile World Congress, Gartner Security Summits, IDC CIO Forums, CTIA Applications, CSA Congress and RSA Conferences.
Cesare holds a Berkeley MBA, a BS in Computer Science and numerous professional certifications from Microsoft, Cisco and Sun.
He lives in the Bay Area with his wife and son. Cesare’s interests include consumer electronics in general and mobile technology in particular.
April 13, 2013
By Mark O’Neill
In recent months, there have been a number of highly publicized cyberattacks on U.S. banks. These attacks took the form of Distributed Denial of Service (DDoS) attacks, in which enormous amounts of traffic were sent to Internet-facing banking services, rendering them unusable. These denial-of-service attacks focused mainly on the websites of banks and other financial institutions, bringing down their online banking services, inconveniencing users, costing revenue and damaging the institutions’ brand reputations.
The attack surface of banks is changing, however. Banks are increasingly rolling out mobile apps to improve customer service and loyalty. These mobile apps consume data via APIs in the Cloud. Given this scenario, the next wave of DDoS attacks may very well target these Cloud APIs in order to disable the mobile apps that depend on them. A mobile app is “blind” without access to its APIs. In light of such risks, Chief Security Officers and their IT security teams need to come up to speed on both the threats posed to APIs and the very real impact an API disruption presents. This article examines strategies for protecting Cloud APIs against DDoS attacks.
Let’s take a look at how mobile apps use APIs within a banking context. Similar to other mobile apps, mobile banking apps use APIs to perform actions and receive data. A DDoS attack would effectively disable access to the API. As mobile app penetration and usage grows, and bank customers use apps as their main channel to perform banking transactions, the impact an API attack can have on an economy grows exponentially. Customers are unable to pay bills, transfer money, or ensure they have funds to make purchases.
In the recent cyberattacks on banks, users could launch their mobile banking apps from a phone or tablet, but the apps could not “call home” to the banking systems, so they could not retrieve account details or even log in. Unlike a Website disruption, this API disruption is not directly visible to end users. The perception of the attack is different because the app itself can still be launched on the phone or tablet. In fact, when confronted with a mobile banking app that has problems performing certain functions, a user may simply blame the mobile network, or assume they have lost coverage, rather than suspect the API has been compromised.
The recent DDoS attacks have highlighted the need to put measures in place to protect APIs. Going forward, we can envisage a scenario where rather than APIs only being taken down as a side effect of attacks on Websites, future attacks could be directed against APIs with a goal of taking out mobile applications.
Ensure Distributed Deployment to Avoid Vulnerability
Today, it’s still quite common to have API protection grouped with Website protection. Because APIs are still relatively new, they are often considered to fall under the general rules of an organization’s Web resources. As a result, there is a lot of focus on protecting the Website from DDoS or general attacks, while neglecting to prepare for an API disruption and its impact on mobile applications.
In the case of the recent banking DDoS attacks, the huge volume of data involved meant there was little the banks could do to protect against the attacks. However, separating the hosting of APIs from the hosting of “traditional” Website resources may be one mitigating factor: a DDoS attack against the Website then need not have the side effect of taking down the APIs used by mobile banking apps.
Implement Policies (e.g. Identity and Throttling)
Additionally, the IT security team should be aware that APIs differ from Websites in their usage patterns and in the type of traffic they receive. Whereas a website is accessed by a browser, an API is accessed by an app. It therefore makes sense to protect APIs with different policies. For example, a policy could dictate that a specific API may only be accessed by particular users, with defined throttling and security rules. Similarly, identity-based policy rules can be used to govern and secure APIs. This is the basis of “API Management”.
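An identity-based policy of the kind described here can be thought of as a lookup from a client identity (say, an API key) to the operations and rates that identity is entitled to. The sketch below is a hypothetical illustration; the key names, operations and schema are assumptions, not any API management product’s actual format.

```python
# Illustrative identity-based policy table: which API keys may call which
# operations, and at what rate. All names here are hypothetical.
POLICIES = {
    "mobile-app-key-123": {"operations": {"get_balance", "transfer"}, "rate_per_min": 60},
    "partner-key-456":    {"operations": {"get_balance"},             "rate_per_min": 10},
}

def authorize(api_key, operation):
    """Return True only if this identity is allowed to call this operation."""
    policy = POLICIES.get(api_key)
    if policy is None:
        return False  # unknown identity: reject outright
    return operation in policy["operations"]
```

The per-identity `rate_per_min` value would feed the throttling layer discussed next; the authorization check and the rate limit are two halves of the same policy.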
Organizations should also consider implementing policies to control the availability of, and access to, their APIs. Simply opening applications via APIs to the outside world without any security policy in place exposes the enterprise to potential malicious usage of those APIs. Any organization exposing data via an API needs to ensure its clients can’t easily pull down that data wholesale; otherwise it runs the risk of becoming a channel for data harvesting. This means implementing throttling policies to detect whether a particular client is abusing its right of access or its levels of usage of the APIs.
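Throttling of this kind is commonly implemented as a token bucket per client: each request spends a token, tokens refill at a fixed rate, and a client that exhausts its bucket is rejected or flagged. This is a minimal sketch of the general technique, with illustrative rates rather than anything a specific product prescribes.

```python
import time

class TokenBucket:
    """Minimal per-client token bucket rate limiter (illustrative limits)."""

    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec      # tokens refilled per second
        self.capacity = burst         # maximum burst size
        self.tokens = burst
        self.last = time.monotonic()

    def allow(self):
        """Spend one token if available; False means throttle this request."""
        now = time.monotonic()
        # refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

In an API gateway, one bucket would typically be kept per API key, so a harvesting client exhausts only its own quota while legitimate users are unaffected.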
APIs and Financial Services
Consider the example of how a Financial Services firm working with companies that have managed funds would benefit from effective API management. In a typical scenario where the firm is managing pensions or 401ks, they would normally have an API to give the current price and other details regarding the fund. In such a context, it is normal for an app to call the API on a regular basis. However, if an API is called by the same app thousands of times a second, or is called in an obviously automated way, the API will be monopolized to the detriment of other users. In this instance, the Financial Services firm would need to identify and block users with risky behavioural patterns – without impacting the experience of legitimate users.
What kind of API Management?
There are API management products that can provide mitigation against such attacks. These products include features such as the ability to set policies, throttle traffic, and deliver the security clients need via particular security tokens such as API keys or OAuth tokens. Of note, API keys should be handled carefully, as there is a tendency to embed them in applications without regard for security. Additionally, API management products can detect unusual API usage patterns: for example, if a mobile application generally accesses certain API operations in particular sequences, the product can detect anomalous traffic and raise alerts.
One of the lessons from the recent attacks is the need to put measures in place to protect APIs, especially as future attacks could be directed against APIs with the goal of taking out mobile applications. This risk is increasing as a significant number of users adopt mobile applications. This trend, combined with banks’ focus on providing mobile banking applications, means that organizations require a watertight approach to managing and securing APIs.
Mark O’Neill is a frequent speaker and blogger on APIs and security. He is the co-founder and CTO at Vordel, now part of Axway. In his new role as VP Emerging Technology, he manages Axway’s Identity and API Management strategy. Vordel’s API Server enables enterprises to connect to Cloud and Mobile. Mark can be followed on his blog at www.soatothecloud.com and twitter @themarkoneill