Managing consumer technology in the enterprise – Why IT needs to change its mindset to better support the business

September 19, 2012

Talking regularly about the consumerization of IT can often make one sound like a broken record, but the economic, security and management challenges it throws up for enterprises are too important to ignore.

The problems boil down to a lack of control, which takes two key forms. IT departments, of course, are built on policies, planning and predictability, but technology from the consumer sphere, even when purchased centrally by IT teams for use in the enterprise, creates problems of its own. It is attractive and easy to use, but it is rarely built with security and manageability in mind and will usually fall short of IT’s typical expectations. Products from the likes of Google and Apple, for example, whose respective mobile platforms Android and iOS now account for the lion’s share of the market, are great at serving the needs of consumers but have been extremely slow to embrace enterprise requirements. These vendors have no enterprise sales or support culture and offer little transparency into product roadmaps, which takes corporate IT managers completely out of their comfort zone.

The second problem is that applications and devices, consumer-focused or not, are being brought into the corporate world by individual employees rather than mandated by IT – the complete opposite of the normal flow. Most IT teams simply aren’t set up to work this way, and handling consumerization properly will require a fundamental change of thinking.

Rather than adopt the classic head-in-the-sand approach of old, CIOs and IT leaders need to embrace consumerization and take a proactive, strategic approach built around flexible policies and the right security and management tools. First, BYOD policies can’t be created in a vacuum: IT leaders need to sit down with line-of-business managers in all parts of the organization to figure out what employees would like to use and how to make that possible. In this way IT takes the initiative and reaches out in an inclusive, proactive manner.

Second, policies must be more flexible and fluid. In a world where everyone in the organization, from the CEO down, needs to be managed, there can’t be a one-size-fits-all approach to policy making; IT needs to map technology and policies carefully to the various user groups. Finally, IT needs the right infrastructure technologies to enable all of this.

Companies still debating whether to allow workers to bring personal devices into the workplace should stop asking: with the right precautions in place, consumerization gives the companies that embrace it a competitive advantage, enhancing innovation and creativity in the workplace while reducing overall costs for the entire organization. The key to not being overwhelmed by this trend is to secure these devices by implementing proper BYOD policies and procedures.

Consumerization of IT is disruptive and inevitable. But many IT leaders are slow to realize it. Like dinosaurs of a previous IT era, they are headed for extinction.

 

NEXT: BYOD Best Practices – Three pitfalls you can’t afford to ignore.

 

Post based on a podcast produced by the Financial Times featuring Cesare Garlati, head of Mobile Security at Trend Micro, on some of the downsides of bringing your own device to work. Listen to the FT Connected Business podcast at http://podcast.ft.com/index.php?pid=1398

More on Consumerization, BYOD and Mobile Security at http://BringYourOwnIT.com

 

Cesare Garlati, Vice President Consumerization and Mobile Security, Trend Micro

 

As Vice President of Consumerization and Mobile Security at Trend Micro, Cesare Garlati serves as the evangelist for the enterprise mobility product line. Cesare is responsible for raising awareness of Trend Micro’s vision for security solutions in an increasingly consumerized IT world, as well as ensuring that customer insights are incorporated into Trend solutions. Prior to Trend Micro, Mr. Garlati held director positions within leading mobility companies such as iPass, Smith Micro and WaveMarket. Prior to this, he was senior manager of product development at Oracle, where he led the development of Oracle’s first cloud application and many other modules of the Oracle E-Business Suite.

 

Cesare has been frequently quoted in the press, including such media outlets as The Economist, Financial Times, The Register, The Guardian, Le Figaro, El Pais, Il Sole 24 Ore, ZDNet, SC Magazine, Computing and CBS News. An accomplished public speaker, Cesare has also delivered presentations and keynote speeches at many events, including the Mobile World Congress, Gartner Security Summits, IDC CIO Forums, CTIA Applications and the RSA Conference.

 

Cesare holds a Berkeley MBA, a BS in Computer Science and numerous professional certifications from Microsoft, Cisco and Sun. Cesare is the chair of the Consumerization Advisory Board at Trend Micro and co-chair of the CSA Mobile Working Group.

 

You can follow Cesare at http://BringYourOwnIT.com and on Twitter at http://twitter.com/CesareGarlati

 

 

 

 

7 Steps to Developing a Cloud Security Plan

September 10, 2012

By David Grimes, Chief Technology Officer, NaviSite

 

In IT, the easiest way to stop a new technology or solution from being implemented is to raise a security red flag. As soon as someone suggests that a new IT solution is not “secure,” the project can come to a screeching halt. So as cloud infrastructure and cloud computing have entered enterprise IT conversations, concerns about cloud security have quickly become the biggest barrier to adoption.

 

As with security for any other technology solution – past, present, or future – creating a security strategy and plan must be one of the first considerations for enterprise IT organizations. And while partnering with a cloud service provider that has strong security procedures and services is an important step, enterprises need to continue to take an active role in their own security and risk management. With that in mind, NaviSite has compiled seven basic steps based on our experience helping hundreds of companies secure enterprise resources. By following these steps, any business can rely on a proven methodology for leveraging cloud services cost-effectively and securely, gaining the cost and business advantages of the cloud without compromising the security of enterprise applications.

 

  1. Review Your Business Goals: It is important that any cloud security plan begin with a basic understanding of your specific business goals. Security is not a one-size-fits-all proposition and should focus on enabling technologies, processes, and people. Additionally, gaining executive input is essential not only to ensure that assets are protected with the proper safeguards, but also to ensure that all parties understand the strategic goals.
  2. Maintain a Risk Management Program: Develop and maintain a risk management program centrally, and view it holistically. An effective cloud computing risk management program is important for reducing overall risk to the organization. It is also key for prioritizing the utilization of resources and for providing the business with a long-term strategy.
  3. Create a Security Plan that Supports Your Business Goals: Develop goals with measurable results that are consistent with supporting the growth and stability of the company. Each goal should include a specified completion date, a means of verifying achievement, and a measurable expected result. Security professionals are encouraged to conduct careful analysis regularly, develop responsible programs, and build in the necessary controls and auditing capabilities to mitigate threats and maintain a reasonable security program that protects organizational assets.
  4. Establish Corporate Wide Support: Gain the approval of your cloud computing security plan from not only executive management but also the general workforce. Organizations need to establish levels of security that meet business goals and comply with regulatory requirements and risk management policies, but that can be centrally managed and conveniently implemented across the organization with minimal negative impact to productivity. Gaining this acceptance streamlines adoption throughout the organization.
  5. Create Security Policies and Procedures: With input from a variety of business units, establish a set of guidelines to ensure that all compliance measures are identified. Cloud services are a major advantage for growing organizations that have not yet embedded established policies and procedures into the company: the enterprise can rely on the best practices the service provider has developed over years of experience in similar environments.
  6. Audit and Review Often: Review the security plan on a regular basis, report on achievements of goals, and audit the compliance of the organization to the security policies and procedures. Understanding the auditing requirements for your business and the frequency of your audits is essential not only for ensuring compliance but also for maintaining best practices for securing enterprise resources.
  7. Continuously Improve: Annually review your cloud computing security plan with senior management and your cloud services provider. Many companies believe that once they have solid policies and procedures in place they do not need to revisit them—but your industry and your business will change over time, and the technology available to support your security plan will evolve. Understanding the dynamic nature of your business and constantly evaluating your security requirements are the foundation for implementing a successful continuous improvement strategy.

 

Cloud computing provides compelling cost and strategic benefits, including scalability with reduced capital expenditure, more efficient use of IT resources, and the ability for an organization to focus on its core competency. Many well-established security technologies and procedures can be applied to cloud computing to provide enterprise-class security. The steps outlined above will help organizations structure security and compliance programs to take advantage of the economic advantages of managed cloud services while meeting organizational security and compliance objectives.

 

Properly managed cloud infrastructure provides better security than most enterprise data centers, applications, and IT infrastructure, and it allows companies to deploy scarce technical personnel more efficiently. Enterprise security, including cloud security, should not be taken lightly, but it doesn’t have to be a major roadblock either. These seven steps are meant to serve as a framework to guide companies as they develop a secure cloud computing plan. For the complete checklist of the above seven steps, download the white paper titled 7 Steps to Developing a Cloud Security Plan.

 

Can You Be Sued for Using the Cloud?

August 29, 2012

We all know that adopting the Cloud comes with some risks – security, reliability and scalability have, to date, been the most common complaints. But now we can add a new one to the mix: litigation. Case in point: companies doing business in Australia, known for its strict privacy laws, have been warned that the risk of litigation should be factored into their due diligence when selecting a cloud vendor.

 

The Acting Victorian Privacy Commissioner recently spoke at the 2012 Evolve Cloud Security Conference in Australia, which focused on privacy concerns related to widespread cloud adoption. In his speech, he advised cloud users to scrutinize service provider security policies thoroughly before jumping into an arrangement based primarily on cost savings and scalability. Why? Because in Australia, as in other regulated jurisdictions, cases of information misuse will be investigated and prosecuted.

 

And more often than not, the cloud user will be the target of the litigation. As highlighted in the Cloud Computing Information Sheet, if a business can’t answer basic questions about where its data is located, who owns and controls the service provider organization, and what happens to data when contracts terminate, the business is directly at risk.

 

Preserving functionality in particular can prove a challenge when it comes to cloud data security. A cloud service provider may in fact offer the ability to encrypt data sufficiently to meet privacy laws, but it does so at the risk of complicating data access and SaaS application usability. In that case, a secure cloud application may not seem worth the hassle to a company, which may opt for an on-premises alternative instead.

 

It is important to carefully investigate statements made by cloud providers about legal compliance or other security credentials. International vendors in particular may not know the details of the regulations that an individual enterprise needs to adhere to, let alone those of a specific geographic region or the specific policies of an industry group. And should data become compromised, they are not liable in most cases.

 

Striking fear in the hearts of enterprises seeking to exploit technological innovation may prevent some data mishandling. But it doesn’t help address the long-term issue of how companies can successfully and legally implement the cloud into their IT strategies. Cloud advantages have simply become too valuable to ignore. If companies want to stay competitive, they must find ways to meet the privacy and residency restrictions enforced in countries like Australia, Switzerland, China and others while making the move to the cloud.

 

The Privacy Commissioner also warned against “haphazard” approaches to “de-identify” personally identifiable information (PII). Permanently removing the personally identifiable information is not a valid option because it often destroys the data’s intrinsic business value. Industry-approved approaches should be explored instead, such as encryption using strong algorithms (i.e., FIPS 140-2 validated) or tokenization, which replaces PII with randomly generated tokens that bear no relation to the original information.

 

Tokenization, in particular, should be looked at very carefully, as it helps solve data control, access, and location issues because the data controllers themselves maintain the system and the original data. With tokenization, all sensitive information can be kept in-house – what travels to the cloud are random tokens rather than actual data – making the information undecipherable should it be improperly accessed. Companies can therefore adopt cloud applications (public or private) with added assurance about their position on data residency, privacy and compliance. And employees accessing the protected cloud data keep full application functionality and the same user experience, such as searching and sorting, on encrypted or tokenized data in the standard SaaS application – all while staying within the legal lines.
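To make the mechanism concrete, here is a minimal sketch of vault-based tokenization in Python. The class and field names are hypothetical, and a real gateway would persist the vault in a hardened on-premises database rather than in memory; the point is simply that only random tokens ever leave the corporate network.

```python
import secrets

class TokenVault:
    """Minimal in-house token vault: the PII never leaves this mapping."""

    def __init__(self):
        self._token_to_value = {}   # stays on-premises
        self._value_to_token = {}

    def tokenize(self, value: str) -> str:
        # Reuse an existing token so equality-based operations
        # (search, sort, join) still work on the tokenized data.
        if value in self._value_to_token:
            return self._value_to_token[value]
        token = secrets.token_urlsafe(16)   # random, no relation to the value
        self._token_to_value[token] = value
        self._value_to_token[value] = token
        return token

    def detokenize(self, token: str) -> str:
        return self._token_to_value[token]

vault = TokenVault()
record_sent_to_cloud = {"name": vault.tokenize("Mary D."),
                        "ssn": vault.tokenize("078-05-1120")}
# Only the random tokens travel to the SaaS application; the vault
# and the original values remain inside the corporate network.
print(record_sent_to_cloud)
```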

 

Bottom line: Data control is becoming a key legal requirement in many countries and jurisdictions – and it is one that will clearly be enforced. Are you and your organization covered or do you need to prepare for a legal battle in the Cloud?

 

 

Gerry Grealish leads the Marketing & Product organizations at PerspecSys Inc., a leading provider of cloud data security and SaaS security solutions that remove the technical, legal and financial risks of placing sensitive company data in the cloud. The PerspecSys Cloud Data Protection Gateway accomplishes this for many large, heavily regulated companies by never allowing sensitive data to leave a customer’s network, while simultaneously maintaining the functionality of cloud applications.

 

Is crypto in the cloud enough?

August 27, 2012

Box.net, Dropbox, iCloud, SkyDrive, Amazon Cloud Drive… the list of convenient cloud storage options goes on. Some have had a security incident; the rest will. All implement some form of protection against accidental exposure, with varying degrees of rigor. Are these sufficient and, for the ones claiming cryptographic isolation, implemented well enough for more than sharing pictures of the kids with Aunt Betty? We’ll examine the technologies, architectures, risks and mitigations associated with cloud storage and the cryptographic techniques employed.

Even with the promise of the cloud, all of these providers are looking to monetize their service. Over the past couple of years, the draw of “unlimited” plans used to build up user counts has been adjusted downwards. Mozy was one of the first, discontinuing its unlimited backup service in 2011. Microsoft’s SkyDrive cut its free allotment in April 2012 from 25 GB down to 7 GB. Why did providers serve up free access, and what’s moving them in a different direction?

Online Storage Drivers

There are three components driving requirements for each of these services: privacy/security, locale and good old-fashioned cost. They all intertwine into a mishmash of designs and constraints.

Privacy/Security

Some governments/organizations require that, for security, data remain within their borders, regardless of encryption – the locale aspect.  A judge or government may compel a Cloud Service Provider to disclose requested data when they hand down a legal order or sign a search warrant.  Most of the Providers write into their use policies that they will comply with law enforcement requests.

This sort of blatant disregard for a user’s privacy scares European Union citizens. The entire purpose of the EU’s Data Protection Directive (Directive 95/46/EC), and of its antithesis, the US PATRIOT Act, concerns who can access what private data. Some of the security and privacy aspects may be answered through cryptography. A full treatment of encryption as a service may be found on the Cloud Security Alliance’s web site.

Location

Locale is the easiest to address and the hardest to guarantee. Various laws require that data stay within the government’s borders. If data migrate past those borders, the service provider is subject to fines. This varies by country, trust reciprocation and what sorts of protections are or are not considered adequate for setting aside said provisions. In some cases, segregation through cryptography suffices to comply with location-based laws.

Costs

The last storage driver is cost (although it might be first from a provider’s perspective). The business efficiencies expected for Storage as a Service, and the reason the above providers thought they could turn a profit, hinge on the type of data de-duplication seen in the enterprise. Separate copies of, for instance, a popular MP3 file or a PowerPoint presentation are not individually stored; instead, a single pointer to that file exists which all of the service’s users may access. The benefits are huge: enterprises see as much as a 50-90% reduction in required storage space. This efficiency requires that storage vendors have access to the data they are storing so they can compare it.
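As a rough illustration of why de-duplication requires visibility into the stored data, here is a toy sketch of content-addressed storage (the names and structures are invented for the example): files are indexed by a hash of their plaintext, so identical uploads from different users collapse into one stored copy, something a provider cannot do if each user encrypts with keys the provider never sees.

```python
import hashlib

store = {}          # content hash -> file bytes (kept once)
user_files = {}     # (user, filename) -> content hash (a pointer)

def put(user: str, filename: str, data: bytes) -> None:
    digest = hashlib.sha256(data).hexdigest()
    if digest not in store:                  # the first copy is stored...
        store[digest] = data
    user_files[(user, filename)] = digest    # ...later copies are just pointers

def get(user: str, filename: str) -> bytes:
    return store[user_files[(user, filename)]]

put("alice", "deck.pptx", b"<presentation bytes>")
put("bob", "copy-of-deck.pptx", b"<presentation bytes>")
assert len(store) == 1   # one physical copy serves both users
```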

Compromise

How do you balance these three?  Which aspects allow you to meet your privacy/security/regulatory policies without jeopardizing your bottom line?  Let’s dissect the solutions:

Underlying technology – Cost is a mighty significant factor in designing an on-demand storage service. Many of the most popular solutions were created on a shoestring budget. What better way to operate under tight fiscal constraints than to use the power of the cloud and scale up or down with workload. It turns out that at least a couple of the more popular services (currently) use Amazon’s S3 (Simple Storage Service). S3 includes built-in cryptography, where key material resides not on Amazon’s servers but within the application making the S3 API calls. What the services do with the key material is up to them. For simplicity, some services allow Amazon to manage the keys, as discussed later.

Cryptographic algorithms – With few exceptions, everyone uses 256-bit SSL/TLS for data-in-transit protection and 256-bit AES when encrypting data at rest. These are today’s de facto standards, and there are easier ways to breach security than brute-force attacks on 128-bit or longer keys.

Key material – In server-side cryptography, the service provider manages both the keys and your data. This limits the complexity of the environment and allows for the de-duplication aspects mentioned earlier while still providing user-to-user data isolation. If a user deletes a file, it may be recovered without much fuss. Crypto hygiene takes place without issue: keys may be rotated appropriately, split across separate locations and put into highly available clusters.

So what are the risks?

Put simply, storing key material with the information it is designated to protect is akin to leaving the vault door unlocked at a bank. As long as no one is trying to get in, you might get away with it – for a while. The service provider may be compelled, against your wishes, to produce the key material and data under warrants in the US and similar government requests in other countries. Most privacy policies actually document their compliance with these requests (see table). Trusted insiders can poke around and access keys, and thereby data. Programming and operational mistakes may also come to light, as was evidenced in the Dropbox disclosure incident.

Client Side Cryptography

There really is no one you can trust besides yourself. Rich Mogull from Securosis makes a couple of duct-tape-style suggestions for sharing within an insecure environment using various forms of encryption. Newer providers Jungle Disk and SpiderOak label their services as inaccessible to anyone without permission – you have a password which decrypts your keys, and all sharing and use operations occur from there. Jonathan Feldman makes the case that secure sharing defeats the purpose of cloud file sync and is just wrong.
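To show what the client-side model looks like in practice, here is a minimal sketch (not any vendor’s actual implementation) using the Python cryptography package, assuming it is installed: a 256-bit AES key is derived from the user’s password on the client, so only the salt, nonce and ciphertext ever reach the storage provider. The trade-off discussed above follows directly; the provider can neither de-duplicate nor recover the data, and a forgotten password means the data is gone.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_before_upload(password: str, plaintext: bytes) -> dict:
    salt = os.urandom(16)
    # Derive a 256-bit AES key from the user's password; the key
    # (and the password) never leave the client.
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=600_000)
    key = kdf.derive(password.encode())
    nonce = os.urandom(12)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    # Only salt, nonce and ciphertext are handed to the storage provider.
    return {"salt": salt, "nonce": nonce, "ciphertext": ciphertext}

blob = encrypt_before_upload("correct horse battery staple",
                             b"photo-of-the-kids.jpg bytes")
print(len(blob["ciphertext"]))
```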

 

Services | Underlying Technology | Release to Law | Key Material Access
Amazon Cloud Drive | S3 | Yes – Privacy Policy | Server Side
Box.com (formerly Box.net) | S3 | Yes – Privacy Policy | Server Side
Dropbox | S3 | Yes – Privacy Policy | Server Side
Google Drive | Google App Engine | Yes – Privacy Policy | Server Side
iCloud | iDataCenter (EMC) | Yes – Will disclose | Server Side
SkyDrive (Microsoft) | Microsoft Azure | Yes – Not Secured | In Transit Only
SpiderOak | Proprietary | No – Zero Knowledge | Client Side Password
Jungle Disk | S3 | No – No Access | Client Side Password

This is far from an exhaustive list. All of the products listed have their place and should be used according to your specific application, playing to their strengths and avoiding them where their weaknesses matter.

For a very in-depth treatment of cloud storage security, with a special emphasis on one of the most privacy-paranoid countries in the world (Germany), please see the Fraunhofer Institute for Secure Information Technology’s Cloud Storage Technical Report.

Jon-Michael C. Brook is a Sr. Principal Security Architect within Symantec’s Public Sector Organization.  He holds a BS-CEN from the University of Florida and an MBA from the University of South Florida.  He obtained a number of industry certifications, including the CISSP and CCSK, holds patents & trade secrets in intrusion detection, enterprise network controls, cross domain security and semantic data redaction, and has a special interest in privacy.  More information may be found on his LinkedIn profile.

 

Your Cloud Provider is a Partner… Not a One-Night Stand

August 21, 2012

“We programmatically interface with Cloud Providers to manage our customer data, so we can rely on them to secure our services, right?” Wrong!

 

The moment you start interfacing with a Cloud Provider, you immediately inherit the risks associated with their deployment, development, and security models – or the lack thereof, in many cases. You’re still responsible for the secure development of your business’s applications and services, but you are now sharing that responsibility with the Cloud Provider. Unfortunately, most Cloud Providers do not provide sufficient visibility into the maturity of security activities within their software development lifecycle.

 

Below is a brief walkthrough of a secure buy-cycle for a Cloud Provider: how interfacing with Cloud Providers affects you, and what you can do to ensure consistent adherence to secure programming patterns and practices.

Gaining Visibility into Security Activities

 

Gaining visibility into the security posture of a Cloud Provider requires a large amount of discussion and documentation review. There are several common security activities that I look for when evaluating a Cloud Provider. If I were to evaluate your security capabilities as a Cloud Provider, some of my very first questions would be:

 

Do you centralize application security initiatives?

 

As a user of your Cloud Provider services, I need assurance that your development team and management staff are enabled by a centralized security team to produce fully secured products. Show me that you have a centralized security team or standards committee. I want to see a team that is responsible for defining application security practices and standards as well as for defining and recommending security activities within the organization. Don’t run your application security program like the Wild Wild West!

Do you enforce an application security-training curriculum?

 

As a user of your Cloud Provider services, I need assurance that your development team and management staff are aware of the latest secure programming vulnerabilities and their mitigation strategies. Before you can begin addressing application security risks, your team needs to understand those core risks!

Do you facilitate secure development through automation?

 

As a user of your Cloud Provider services, I need assurance that your development team and management staff have the tooling necessary to streamline challenging security activities for quick remediation. This is simply a matter of scalability; humans alone are not a viable option for finding and fixing every problem in your codebase. Technologies such as Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST) help scale code review and penetration testing by focusing on a common set of application security problems, while additional human resources apply more specialized techniques to the business-contextual components of your services.

 

I do not want to hear that you “perform penetration tests on a yearly basis using a 3rd party firm and/or 3rd party tool.” This type of process is not continuous, does not enable developers, does not scale, and leaves too many open problems.

Do you have incident response for dealing with security vulnerabilities?

 

As a user of your Cloud Provider services, I need assurance that you have a process in place to respond to vulnerabilities identified in production applications. I’m looking for a standardized process that is well understood by the key stakeholders in your business and the applicable business unit.

 

Show me the turn-around time for fixing vulnerabilities. Give me an understanding of compensating controls used to reduce exposure of exploitable vulnerabilities. Most importantly, show me who did what, when, and how. I cannot make educated and well-informed decisions for my business if you do not provide me with enough information from your end.

How do you ensure confidentiality and integrity of sensitive data?

 

As a user of your Cloud Provider services, I need assurance that you have sufficient controls in place to protect my sensitive data throughout the service lifecycle. Tell me the protections you have in place when sensitive data is being entered into the application, when the sensitive data is transmitted across the wire, when the sensitive data is at rest, and when the data is presented to end users.

 

Key security controls that I am looking for in this regard include using FIPS 140-2 compliant cryptographic modules, masking of sensitive fields, use of Transport Layer Security (TLS) for network transmission, use of strong encryption and message digest algorithms for persistence, and a key management strategy that incorporates key rotation and processes to minimize disclosure. The last thing I’d want is you storing the cryptographic key in a database column adjacent to the encrypted data!

How can my team make use of your services securely?

 

As a user of your Cloud Provider services, I need assurance that my development team will have all the support they need to systematically interface with your exposed API in a secure fashion. Show me clear and concise documentation of the security features and security characteristics of your exposed functionality. My development teams need to understand your authentication and identity management workflow along with guidance on how to manage those identity tokens.

 

My development teams also need to understand any security-relevant assumptions you place on your exposed API. For example, are you expecting my development team to verify that the user is authorized to access a database record by querying the UserEntitlements endpoint prior to querying the DatabaseRecord endpoint? Or have you encapsulated the authorization logic within the DatabaseRecord endpoint so that my development team only has to make one API call? I definitely don’t want to be responsible for disclosing my users’ information because you did not provide me guidance on how to securely interact with your service.
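The difference between those two contracts is easier to see in code. The sketch below reuses the hypothetical UserEntitlements and DatabaseRecord endpoints from the example above; the URL, headers and response shapes are invented for illustration. In the first pattern, a caller who skips the entitlement query silently exposes data, which is exactly the kind of assumption the provider’s documentation needs to spell out.

```python
import requests

API = "https://provider.example.com/api"   # hypothetical Cloud Provider API
HEADERS = {"Authorization": "Bearer <identity-token>"}

def read_record_explicit_check(user_id: str, record_id: str) -> dict:
    # Pattern 1: the provider expects the caller to verify entitlement first.
    # Forgetting this call exposes records the user should never see.
    entitled = requests.get(f"{API}/UserEntitlements/{user_id}",
                            headers=HEADERS).json()
    if record_id not in entitled.get("records", []):
        raise PermissionError("user is not entitled to this record")
    return requests.get(f"{API}/DatabaseRecord/{record_id}",
                        headers=HEADERS).json()

def read_record_encapsulated(record_id: str) -> dict:
    # Pattern 2: authorization is enforced inside the endpoint itself;
    # the provider returns 403 if the token's subject is not entitled.
    resp = requests.get(f"{API}/DatabaseRecord/{record_id}", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()
```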

Verify Security Claims and Assertions

 

While simply hammering your potential Cloud Provider with application security questions like the above helps provide visibility into their security posture, it in no way verifies that they’re doing what they claim. In an ideal partnership, it is prudent for you to require your potential Cloud Provider to “get tested” by an application security team before moving the relationship forward. Whether an internal team or a 3rd party carries out the assessment, the goal of the effort would be to gain confidence that the Cloud Provider is properly adhering to and implementing their security claims and assertions.

 

The assessment should cover not only a code review and penetration test of the target services, but should also evaluate the capability of the Cloud Provider to implement their security activities throughout their Software Development Lifecycle. Use the vulnerabilities from the code review and penetration test to assist in the evaluation of their security activity effectiveness. Ask them:

 

  1. Which vulnerabilities in this report were already known to you, and which are new?
  2. How long have you been working on remediating the known ones?
  3. Why do you believe the unknown ones were not previously identified?
  4. How long will it take to fix these vulnerabilities?

 

You can roughly estimate what security activity failed based on evidence from a combined code review and penetration test. If the vulnerabilities indicate a complete lack of security control(s), then there is likely a serious problem with the Cloud Provider’s planning and requirements phases. If the appropriate security controls exist but were not used correctly or there are various implementations of the same security control, then there is likely a problem in the design and implementation phases. If the vulnerability is substantial and was unknown, then there is likely a serious problem with the Cloud Provider’s secure coding enforcement strategies. Finally, if the vulnerability is substantial and known for an extended period of time, then there is likely a serious problem with the Cloud Provider’s incident response strategies.

 

Conclusion

 

There is a very common problem facing consumers of Cloud Providers today; they simply fail to dig deep enough in the selection process and settle for what looks good on the surface – a surefire way to build a short-lived relationship. You must realize that you inherit the risk of your Cloud Provider the moment you leverage their services. The risks are further compounded when sensitive information is passed through these Cloud Provider services. When you evaluate your future Cloud Providers, ensure that you gain visibility into their application security activities and you verify security assertions and claims through penetration tests and code reviews. After all, your Cloud Provider is a Partner… not a One-Night Stand!

 

Eric Sheridan – Chief Scientist, Static Analysis

 

Eric Sheridan is responsible for the research, design, implementation, and deployment of core static analysis technologies embedded within WhiteHat Sentinel Source. Mr. Sheridan brings more than 10 years of application security expertise to WhiteHat Security with a focus on secure programming patterns and practices. This experience has allowed Mr. Sheridan to infuse WhiteHat Security with the ability to create static analysis strategies and technologies that actually target the correct problem domain thus enabling developers to produce more secure code. In addition to his static analysis expertise, Mr. Sheridan has enormous experience in defining, integrating, and executing security activities throughout the software development lifecycle.

 

Prior to joining WhiteHat Security, Mr. Sheridan co-founded Infrared Security, a company specializing in application security consultation and the development of next-generation static analysis technologies ultimately used within WhiteHat Sentinel Source. Aside from providing professional consultation services to organizations in both the Government and Private sectors for more than 6 years, Mr. Sheridan frequently contributes to the Open Web Application Security Project (OWASP). Mr. Sheridan led the creation of the CSRFGuard and CSRF Prevention Cheat Sheet projects while contributing to WebGoat, CSRFTester, and Stinger.

 

Avoiding Storms In The Cloud – The Critical Need for Independent Verification

August 16, 2012

By Chris Wysopal, Co-founder and CTO of Veracode

Last year, Forrester predicted that cloud computing would top $240 billion in 2020. Market Research Media came up with a more aggressive forecast of $270 billion in 2020. None of this is particularly surprising, as cloud technology is clearly here to stay, particularly if cloud providers are able to maintain secure environments for their customers. As companies adapt to the shifting cloud paradigm to address cost, scalability, and ease-of-delivery issues, there continues to be a growing concern about the safety of data in the cloud, and whether cloud security can ever be as robust as enterprise security.

The dangers associated with storing information in the cloud are regularly highlighted in well-publicized breaches and security flaws experienced by some of the world’s most well-known brands. Cloud businesses such as Amazon, Yahoo, LinkedIn, eHarmony and Dropbox have all been attacked in just the last few months, but the problem is not exclusive to consumer-facing businesses. B2B organizations that offer cloud-based solutions, like my company Veracode, are facing their own set of security requirements from business customers that need to ensure their data is protected.

The answer to why cloud security has become such a fast growing concern for enterprise organizations today can be found in a perfect storm of current trends.

First, the reporting of security breaches has skyrocketed, in part because hacktivists love the publicity but also because crime typically occurs where there is value, and in our digital economy that value resides in various forms of intellectual property.

Second, today’s cloud computing environments often distribute corporate intellectual property across many different infrastructures while promising authorized users ready access to that information, which means the value can be found in many places.

Third, enterprise organizations rarely use just one cloud-based service. If one were to count the number of Salesforce.com customers that have integrated the service with other cloud-based marketing automation or accounting solutions, it would be a very high number. With all of this corporate information and intellectual property now residing in so many interconnected places in the cloud, hackers actively looking for weaknesses can abuse those connections and wreak havoc for cloud customer and provider alike.

What most enterprise organizations are looking for from prospective cloud-based solution providers is transparency in the provider’s security mechanisms and IT processes. Companies want to know what security mechanisms are being used to keep their information confidential and secure, particularly while it is in transit to and from the provider’s datacenter, but also while it is in use in the datacenter, while it is at rest in a disaster recovery site, and ultimately, how the information is finally deleted. Customers are also concerned about the security mechanisms used to authenticate company users that will be accessing and updating the information. Sure, the goal of most cloud-delivered services is to provide fast, easy, ready access to corporate information – but only to the appropriate people.

In terms of process transparency, companies need (and want) to know that a provider’s IT procedures do not allow corporate information to be exposed to members of the provider’s workforce, even during routine maintenance or updates to infrastructure or service software. They also want to know whether the service infrastructure and software are continually being hardened against attack, and that the incident response procedures are well known and appropriately followed. Many breaches have been tied to vulnerabilities, such as SQL injection, in the custom software developed by the service provider. Customers are beginning to seek evidence that this software was developed and tested for security.
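As a small, generic illustration of the kind of flaw customers are asking about (not drawn from any provider named here), the snippet below contrasts query construction by string concatenation with a parameterized query, the standard defense against SQL injection; the table and the input value are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('mary@example.com', 'admin')")

email = "' OR 1=1 --"   # attacker-supplied input

# Vulnerable: string concatenation lets the input rewrite the query.
# rows = conn.execute("SELECT role FROM users WHERE email = '" + email + "'")

# Safer: a parameterized query treats the input strictly as data.
rows = conn.execute("SELECT role FROM users WHERE email = ?", (email,)).fetchall()
print(rows)   # [] -- the injection string matches nothing
```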

This brings us to the impact cloud security concerns are having on solution providers.  While customers are certainly asking more questions about their providers’ security, they are also increasingly expecting independent proof of the answers. This is a good thing.

One example we recently encountered at Veracode was an RFP process that asked us to answer the checklist questions published in Gartner’s September 2011 research note titled “Critical Security Questions to Ask a Cloud Service Provider.” The checklist is designed to arm customers with the necessary security questions to ask of their cloud-based solution providers as part of their due diligence. We provided those answers, but the customer went further, asking for our SysTrust report and proof that our hosting provider was certified as an SSAE 16 facility. SysTrust certification requires Ernst & Young audits every January and February that review process documentation, include personnel interviews, and review activity logs to see whether effective platform controls existed to protect information during the previous year. The hosting provider also goes through a similar process with their auditors, providing an added layer of third-party security validation.

Ultimately the burden of security should fall on both the cloud solution provider and the customer. As Greg Rusu, general manager of PEER 1 Hosting’s public cloud division Zunicore, stated in a recent InfoSecurity article, “the burden of security lies with both the cloud provider and the customer. No matter how secure the cloud provider makes the infrastructure…what we see in practice is that security is a partnership.”

After all, at the end of the day it’s the customer’s duty to protect its intellectual property and corporate information. Taking assurances from cloud solution vendors, even in writing, only provides a certain level of assurance, which is why calling for third-party validation is so critical. This level of third-party inspection is no different from the advice we give our own customers about securing their applications – trust is good, but independent validation is much better.

BIO

Chris Wysopal, co-founder and chief technology officer of Veracode, is responsible for the security analysis capabilities of Veracode technology. He is recognized as an expert in the information security field, and his opinions on Internet security are highly sought after. Wysopal has given keynotes at computer security events and has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. He has also delivered keynote addresses at West Point, to the Defense Information Systems Agency (DISA) and before the International Financial Futures and Options Exchange in London. Wysopal’s groundbreaking work in 2002 while at the company @stake was instrumental in developing industry guidelines for responsibly disclosing software security vulnerabilities. He is a founder of the Organization for Internet Safety, which established industry standards for the responsible disclosure of Internet security vulnerabilities.

Big Data, Big Cloud, Big Problem

August 15, 2012

By Todd Thiemann

Big data presents a big opportunity for businesses to mine large volumes of data from a variety of sources to make better, higher-velocity decisions. Since big data implementations are practically always deployed in a cloud environment, be it a private or public cloud, this poses a major security challenge. That’s because some of that “Big Data” will inevitably be sensitive, in the form of intellectual property covered by corporate security mandates, cardholder data affected by PCI DSS, or Personally Identifiable Information (PII) affected by state or national data breach laws.

For the purposes of this article, our definition of Big Data refers to non-relational storage and processing technologies, including NoSQL tools such as Hadoop, MongoDB, Cassandra and CouchDB. These offerings comprise the bulk of “Big Data” deployments and share similar security challenges. For example, the Hadoop Distributed File System (HDFS) is used to store the data that needs to be analyzed. Software frameworks such as MapReduce or Scribe process large amounts of data in parallel on large clusters of commodity compute nodes. Tasks are distributed and processed in a completely parallel manner across the cluster. The framework sorts the map output, which is then used as input to the reduce tasks. Typically both the input and the output of the job are stored across the cluster of compute nodes.
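For readers unfamiliar with the model, here is a toy, single-process sketch of the map, shuffle/sort, and reduce flow just described, using a word count as the example job; in Hadoop the same three phases run in parallel across the cluster, with the intermediate and final data living in HDFS rather than in local variables.

```python
from collections import defaultdict

documents = ["big data big cloud", "big problem"]

# Map: each node independently emits (key, value) pairs.
mapped = [(word, 1) for doc in documents for word in doc.split()]

# Shuffle/sort: the framework groups the intermediate pairs by key
# before handing them to the reduce tasks.
groups = defaultdict(list)
for word, count in sorted(mapped):
    groups[word].append(count)

# Reduce: each reducer aggregates the values for its keys.
result = {word: sum(counts) for word, counts in groups.items()}
print(result)   # {'big': 3, 'cloud': 1, 'data': 1, 'problem': 1}
```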

The ability to perform complex ad-hoc queries against massive disparate datasets can unlock tremendous value for enterprises. In order to tap this intelligence, companies are using distributed file systems such as Hadoop. This is primarily because the volume of data has increased beyond the performance capabilities of relational database systems.

While traditional relational databases use the concept of a data container, this is absent in the Big Data world. Instead of a datafile associated with a database, NoSQL implementations scatter files across hundreds or thousands of nodes.  As a result, sensitive data that requires protection is no longer in one compact tablespace on a single system, but can be scattered among a multitude of nodes in the cloud.

One of the key challenges posed by NoSQL tools is that while they are great at crunching massive volumes of data, they have virtually zero built-in security or access control capabilities. If a Big Data deployment includes or will include sensitive data, it’s imperative to put data security and access controls in place. Operating a Big Data infrastructure without some form of security is a very high risk endeavor.

The following threats and how to mitigate them are important considerations in Big Data environments:

  • Privileged User Abuse – keeping system administrators from accessing or copying sensitive data.
  • Unauthorized Applications – preventing rogue application processes from touching your Big Data.
  • Managing Administrative Access – while system administrators should not be allowed to access data, they may need access to the directory structure for maintenance operations and performing backups.
  • Monitoring Access – understanding who is accessing what data in a Big Data repository allows for necessary auditing and reporting.

When it comes to protecting and controlling access to Big Data, encryption combined with key management are central elements of a layered security approach. Here are some important considerations when securing Big Data environments:

  • Classify data & threats – This is one of the biggest challenges for any data security project – knowing what is sensitive, where is it located, what are the potential threats. If no sensitive data is in scope, data protection may not be necessary. If sensitive data is stored in the Big Data environment, it needs to be protected.  Talking to the Big Data development team about the nature of the data is a first step.
  • Encryption & Key Management – Taping the key to the front door just above the door knob is not a security best practice. In the same vein, storing encryption keys within the data environment they are protecting is also not a best practice.
  • Separation of Duties – this has many implications, but one is that encryption keys should never be under the control of IT administrators.
  • Costs – Minimizing silos of encryption and key management typically reduces costs and minimizes scalability, audit, and total cost of ownership issues.
  • Performance – Enterprises are embracing Big Data for its potential to enable faster decision making. By the same token, encryption and key management should not significantly slow down Big Data system performance.

Big Data promises to be the proverbial goose that lays golden eggs. Understanding the data security and privacy risks associated with a Big Data environment early in the development process, and taking appropriate steps to protect sensitive information, will prevent that goose from getting cooked.

Todd Thiemann is senior director of product marketing at Vormetric and co-chair of the Cloud Security Alliance (CSA) Solution Provider Advisory Council.

Best Practices to Secure the Cloud with Identity Management

August 13, 2012

Authored by: Dan Dagnall, Director of Pre-Sales Engineering at Fischer International Identity

 

What is the “cloud identity”? The “cloud identity” begins at the birth of the user’s “digital identity” and includes the attributes that define “who you are.” “Cloud identity” is not a new term to those in the industry, but one that has definitely taken hold as the way to define “you” in the cloud. Much focus has been on how to “enable” a secure authentication event (through mechanisms like ADFS or Shibboleth), which is a key component of securing the transaction between Identity Providers (“IdP”) and Service Providers (“SP”). However, too little focus has been placed on the fundamental component required to “ensure” the integrity of the transaction; and by “integrity,” I mean that the person is right, the attributes are right, and the values are right. The integrity of a “cloud identity” transaction can only be secured by sound identity management practices, with a razor-sharp focus on attribute management and policy enforcement.

 

Competent attribute management is the foundation of securing the “cloud identity.” It is the attribute and its corresponding value that ultimately determine the digital identity of an individual (or entity). When you consider the level of accuracy required (if your true goal is the validity of the transaction) in a cloud-centric world, you will concede the importance of properly representing the user in the cloud. When you consider attributes within this context, it becomes clear why identity management (IdM) is the epicenter for securing the cloud identity.

 

Attribute management is much more than “just a middleware component;” it is identity management at a fundamental level.  This fundamental level must not be overlooked as our industry begins discussing the large scale initiatives to create a common “ecosystem” through which cloud identities will travel.

 

There are two key components of the IdM stack that provide for the integrity I’m describing: automation and policy management/enforcement.

 

Best Practice #1: Automation

Sound identity management practices must include automation, which covers event detection and downstream provisioning (i.e., the system automatically detects when a user, along with the data associated with that user, is added or modified within the system of record, and then automatically provisions the user and the required attributes to downstream systems). Detecting changes to key attributes specific to the user’s identity, ideally in real time, ensures the validity of the attribute value: the value is correct, it is placed in the proper location, and that placement was and is authorized.
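A bare-bones sketch of that detect-and-provision loop is shown below. Everything in it, from the system-of-record feed to the target system names and the attribute whitelist, is a hypothetical stand-in rather than any vendor’s API; the point is simply that only policy-authorized attribute changes flow downstream, and that they flow automatically rather than by hand.

```python
AUTHORIZED_ATTRIBUTES = {"givenName", "surname", "mail", "department"}

last_seen = {}   # user_id -> last known, already-provisioned attributes

def read_system_of_record() -> dict:
    # In practice: an HR feed, an LDAP change log, or a database trigger.
    return {"jdoe": {"givenName": "John", "surname": "Doe",
                     "mail": "jdoe@example.com", "department": "Finance"}}

def push_to_target(system: str, user_id: str, changes: dict) -> None:
    # Stand-in for a connector call to a downstream target system.
    print(f"provisioning {changes} for {user_id} to {system}")

def detect_and_provision() -> None:
    for user_id, attrs in read_system_of_record().items():
        # Only attributes the policy authorizes ever leave the IdM layer.
        safe = {k: v for k, v in attrs.items() if k in AUTHORIZED_ATTRIBUTES}
        changes = {k: v for k, v in safe.items()
                   if last_seen.get(user_id, {}).get(k) != v}
        if changes:
            for system in ("crm-saas", "email-saas"):
                push_to_target(system, user_id, changes)
            last_seen.setdefault(user_id, {}).update(changes)

detect_and_provision()
```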

 

Manual modification of users on downstream target systems, including manual entry of attribute/value pairs, is not a secure approach unless identity management has authorized these actions and the user performing them. Manual approaches can undermine data integrity, leave the user (whose identity and sensitive information will be floating around the cloud) at a major disadvantage, and lead to improper representation of their identity in the cloud, not to mention the inherent risk for the user and the organization as a whole. This represents a scary reality for some, unless of course IdM has been properly deployed to ensure that malicious events are either immediately detected or thwarted beforehand.

 

Automated event detection eliminates the need for manual interactions with the user’s attribute set, which, as I’ve discussed, is the single most important aspect of securing one’s identity in the cloud. Automated event detection, when coupled with attribute management, enables the proper enforcement of the organizational policies put in place to protect the user.

 

Best Practice #2: Policy Management & Enforcement

Once automation is introduced, securing the remaining aspects of the cloud identity shifts to policy management and enforcement. Policy management is the layer of IdM that defines who is authorized and what level of access will be granted to downstream target systems. Whether bound by regulation (which is most often the case) or by the requirement to comply with a set of standards and/or practices in order to participate in global federations (i.e., attribute management processes that meet certain criteria), policy definition is the key to successfully securing the cloud identity.

 

Securing this layer cannot be accomplished by allowing unchecked “human” decisions to overrule the policy because it can have a direct effect on how that user is represented in the cloud.  As a user, I’d sleep much better knowing that automated policy enforcement is managing my cloud identity, and abiding by organizational or regulatory guidelines like CSA and others to keep my identity safe and properly represented in the cloud.

 

In conclusion, someone with direct access to my data (because there is no automation), who can manipulate my attribute values without authorization (because there is no policy definition and enforcement), could compromise the representation of my “cloud identity” and call into question the integrity of the entire transaction.

So before you consider cloud-based transactions, specifically those where identity data is required, it is in your best interest to solidify your IdM practices and introduce the components I’ve outlined.  Only then can you truly secure the cloud for your users and your organization.

Application-Aware Firewalls

August 9, 2012

You may have heard this term recently and wondered what it meant. When it comes to security, everyone thinks of firewalls, proxies, IPS, IDS, honeypots, VPN devices, email security and even Web security, but most people don’t think in terms of application-level security unless they are the developer, admin, or user of those specific services – or perhaps a hacker. Especially when your traditional network boundaries disappear, you can’t carry all of those devices with you. When you move out of your traditional boundaries toward the cloud, you trust the cloud provider to provide these features. But you can’t do the same with application-level security, because those devices work at levels below the application layer (Layer 7 in the ISO OSI architecture model). Those lower-layer standards are very well defined and established, whereas the application layer is, to an extent, still evolving – from COBOL to APIs, everything is fair game.

There is a reason why enterprises are looking for devices which can do it all. I was reading a security research report the other day which suggested that attackers are moving up the stack to the application layer because it is so easy to hack into applications nowadays, especially with applications moving to the cloud and thus introducing new attack vectors, including a whole layer of API/XML threats (if you are still bound to XML/SOAP and can’t free yourself). Most of the organizations that I see don’t have the same solid security at the application level as they do at the network level. This discrepancy developed over the last few years as more and more applications came out with new technologies, exposing themselves to newer threats. Plus, there is no unified standard among developers when they build application-level security.

The network security we have today is not “application aware.” This means that API/XML and other application-level threats go right through the regular network defenses that you’ve built up over the years. Many people think that if they use REST or JSON then they are not as prone to attacks as those using SOAP/XML/RPC, which is a funny thought.

Add to this the fact that when your applications move outside your enterprise boundary to the cloud, they are exposed 24×7 to hackers waiting to attack. This leaves you subject not only to direct attacks on your application, but also to bounces off another application hosted in the same multi-tenant environment. So your new “firewall” should be able to inspect and analyze application traffic and identify threats. But the issue doesn’t stop there; you also need to analyze for viruses, malware and the “intention” of the message (and its attachments) as they pass through. Most of the time, the issue with firewalls inspecting traffic is that they look at where information is going (a port and maybe an IP address), but not at what the message is intended to do. There is a reason why injection attacks such as SQL injection, XSS and XPath injection all became so popular.
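To make “looking at what the message is intended to do” concrete, here is a deliberately naive sketch of payload inspection. The signatures are toy regular expressions, the payloads are assumed to be already URL-decoded, and a real application-aware firewall would combine protocol parsing, schema validation and behavioural analysis rather than a handful of patterns.

```python
import re

# Naive signatures for illustration only.
SIGNATURES = {
    "sql injection":   re.compile(r"('|%27)\s*(or|and)\s+\d+=\d+", re.I),
    "xss":             re.compile(r"<\s*script", re.I),
    "xpath injection": re.compile(r"\[\s*@\w+\s*=\s*'[^']*'\s*or\s", re.I),
}

def inspect(payload: str) -> list[str]:
    """Return the names of the signatures the message body matches."""
    return [name for name, pattern in SIGNATURES.items()
            if pattern.search(payload)]

print(inspect("username=admin' OR 1=1"))                    # ['sql injection']
print(inspect('{"comment": "<script>alert(1)</script>"}'))  # ['xss']
```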

 

Now there is another issue, and it relates to the way applications are built nowadays. In the olden days you controlled the client, the server, and even, to an extent, the communication between them. Now we expose APIs and let others build interfaces, middleware, and the usage model as they see fit. Imagine a rookie or an outsourced developer writing sub-standard code and putting it out there for everyone to poke and prod for weaknesses. As we all know, a chain is only as strong as its weakest link, and the problem is that it is hard to figure out which is your weakest link. So application-aware firewalls can not only inspect, analyze and control traffic to applications, but also utilize inherent knowledge of those applications, allowing them to work at a deeper level too.

This gives you the freedom to move application-level security from your individual applications, services and APIs to a centralized location, so your developers can concentrate on what they are supposed to do, namely developing the services that matter to your organization, and not worry about other nuances, which can now be left to the experts.

Andy Thurai — Chief Architect & CTO, Application Security and Identity Products, Intel

Andy Thurai is Chief Architect and CTO of Application Security and Identity Products with Intel, where he is responsible for architecting SOA, Cloud, Governance, Security, and Identity solutions for their major corporate customers. In this role, he is responsible for helping Intel/McAfee field sales, technical teams and customer executives. Prior to this role, he held technology architecture leadership and executive positions with L-1 Identity Solutions, IBM (Datapower), BMC, CSC, and Nortel. His interests and expertise include Cloud, SOA, identity management, security, governance, and SaaS. He holds a degree in Electrical and Electronics engineering and has more than 20 years of IT experience.

 

He blogs regularly at http://cloudsecurity.intel.com/ on Security, SOA, Identity, Governance and Cloud topics. You can find him on LinkedIn at http://www.linkedin.com/in/andythurai.

Consumerization 101 – Employee Privacy vs. Corporate Liability

July 31, 2012

Mary D. joined MD&M Inc. in 2009. Being an Apple enthusiast, she was quite excited to learn that the company offered an innovative BYOD program that allows employees to use their own iPhone for work. As part of the new hire package, Mary signed the acceptable use policy and was granted access to corporate email on the go.

Mary started having performance problems in her second year, and her manager put her on notice. After six months, Mary was terminated. When her manager clicked the ‘terminate’ button within the company’s HR system, a series of automated tasks was initiated, including the remote wipe of all information on Mary’s iPhone.

As it turned out, Mary had been performing poorly because her son John was dying of cancer. Just a few weeks before Mary was terminated, her husband took a picture of her and their son using Mary’s iPhone. It was the last photo Mary had of her son, and MD&M Inc. unknowingly destroyed it. Mary sued the company for damages.

Just how much is the last photo of a mother and son worth? Attorneys and expert witnesses sought to answer that question. They arrived at $5 million.

Three pitfalls your BYOD program can’t afford to ignore.   

While Mary’s story is a fictitious case debated last year by the International Legal Technology Association (ILTA), it’s just a matter of time before stories like this become mainstream reality. A recent survey by Trend Micro clearly shows that a majority of companies are already allowing employees to use their personal devices for work-related activities – 75% of organizations in the U.S. offer BYOD programs.

Besides preserving data security and managing a myriad of personal devices, companies must also consider a new set of legal and ethical issues that may arise when employees are using their own devices for work. Here are just three pitfalls to consider:

Pitfall #1: Remote deletion of personal data:  Under what circumstances (if any) should the company have a right to remove any non work-related content from an employee-owned device?

Pitfall #2: Tracking individual location: What corporate applications might ‘track’ the location of an employee-owned device?  Is the employee aware that this is possible?

Pitfall #3: Monitoring Internet access: Should accessing questionable websites be restricted, when an employee is also using a personal device for work?

 

NEXT: BYOD Best Practices – Three pitfalls you can’t afford to ignore.

 

Cesare Garlati, Vice President Consumerization and Mobile Security, Trend Micro

 

