Moving to a “Show Me” State – Gaining Control and Visibility in Cloud Services

January 27, 2011

In survey after survey, security, and more specifically the lack of control and visibility into what is happening to your information on the cloud provider’s premises, is listed as the number one barrier to cloud adoption.

So far, there have been two approaches to solving the problem:

1 – The “Trust Me” approach: The enterprise relies on the cloud provider to apply best practices to secure its data, and the only tool available to gain visibility into what is happening on the cloud provider’s premises is Google Earth. If you use Gmail and want to know more about what is happening to your email, follow this link or this one.

2 – The “Show Me” approach: The cloud provider gets bombarded by hundreds of questions and demands for site visits that vary from one customer to another. In most cases, these questionnaires are not tailored to cloud computing; they are based on the existing control frameworks and best practices used to manage internal IT environments and external vendors.

Neither approach has been satisfactory thus far.

The “Trust Me” approach creates frustration for enterprises moving to the cloud: they cannot satisfy their compliance processes, which often demand detailed evidence of compliance and answers to very specific questions.

The “Show Me” approach creates a tremendous burden for the cloud provider and a very long process for end-customers before any cloud-based service can be deployed. It completely defeats the cloud agility promise.

Auditors’ insatiable demand for evidence of compliance is pushing the industry towards standardizing a “Show Me” approach.

The Cloud Security Alliance Governance, Risk Management and Compliance (GRC) Stack for assessing the security of cloud environments is a great step in that direction. It defines an industry-accepted approach to documenting the security controls implemented in cloud offerings:

- CloudAudit provides the technical foundation to enable transparency and trust between cloud computing providers and their customers

- Cloud Controls Matrix provides fundamental security principles to guide cloud vendors and to assist prospective cloud customers in assessing the overall security risk of a cloud provider.

- Consensus Assessments Initiative Questionnaire provides industry-accepted ways to document what security controls exist in a cloud provider’s offering.

The Cloud Security Alliance’s high profile, with members representing the leading cloud providers, technology vendors, and enterprise consumers of cloud services, provides the necessary weight and credibility such an initiative needs to be successful.

It offers cloud providers and end-customers alike a consistent and common approach to establish more transparency in cloud services. Enterprise GRC solutions such as RSA Archer have integrated the CSA GRC controls into the core platform so that customers can use the same GRC platform to assess cloud service providers as the one they already use to manage risk and compliance across the enterprise.
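To make that assessment workflow a little more concrete, here is a minimal Python sketch of how an enterprise might check a provider’s answers to a CAIQ-style questionnaire against the controls it considers mandatory. The control IDs, file name, and answer format are invented placeholders for illustration, not the actual CSA artifacts or any vendor’s data model.

```python
import csv

# Controls this enterprise treats as mandatory (hypothetical IDs for illustration).
REQUIRED_CONTROLS = {"IS-01", "IS-07", "SA-02", "DG-04"}

def assess_provider(caiq_csv_path):
    """Flag required controls the provider has not answered 'yes' to.

    Expects a CSV with 'control_id' and 'answer' columns -- a simplified
    stand-in for a real questionnaire response, which is far more detailed.
    """
    gaps = []
    with open(caiq_csv_path, newline="") as f:
        for row in csv.DictReader(f):
            control = row["control_id"].strip()
            answer = row["answer"].strip().lower()
            if control in REQUIRED_CONTROLS and answer != "yes":
                gaps.append((control, answer))
    return gaps

if __name__ == "__main__":
    for control, answer in assess_provider("provider_caiq.csv"):
        print(f"Gap: required control {control} answered '{answer}'")
```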

This is a great step forward towards solving the “Verify” part of the “Trust and Verify” equation that needs to be addressed to help drive cloud adoption.

What do readers think of this new approach by the Cloud Security Alliance? Is it a step in the right direction or should it go further?

Eric Baize is Senior Director in RSA’s Office of Strategy and Technology, with responsibility for developing RSA’s strategy and roadmaps for cloud and virtualization. Mr. Baize also leads the EMC Product Security Office, with company-wide responsibility for securing EMC and RSA products.

Previously, Mr. Baize pioneered EMC’s push towards security. He was a founding member of the leadership team that defined EMC’s vision of information-centric security, and which drove the acquisition of RSA Security and Network Intelligence in 2006.

Mr. Baize is a Certified Information Security Manager, a holder of a U.S. patent, and an author of international security standards. He represents EMC on the Board of Directors of SAFECode.

Building a Secure Future in the Cloud

January 27, 2011

By Mark Bregman

Executive Vice President and Chief Technology Officer, Symantec

Cloud computing offers clear and powerful benefits to IT organizations of all sizes, but the path to cloud computing – please excuse the pun – is often cloudy.

With cloud computing, IT resources can scale almost immediately in response to business needs and can be delivered under a predictable (and budget friendly) pay-as-you-go model. An InformationWeek survey[1] in June 2010 found 58 percent of companies have either already moved to a private cloud, or plan to soon. Many others, meanwhile, are considering whether to shift some or all of their IT infrastructure to public clouds.

One of the biggest challenges in moving to a public cloud is security – organizations must be confident their data is protected, whether at rest or in motion. This is new territory for our industry. We don’t yet have a standard method or a broadly accepted blueprint for IT leaders to follow. In my role as CTO of Symantec, I have invested a lot of time examining this challenge, both through internal research and in numerous discussions with our customers around the world. And while we are not yet at the point of writing the book on this subject, I can tell you that there are a number of common themes that arise in nearly every conversation I have. From this information, I’ve developed a checklist of five critical business considerations that decision makers should examine as they think about moving their infrastructure – and their data – to the cloud.

1. Cost-benefit analysis. The business case for cloud computing requires a clear understanding of costs as compared to an organization’s in-house solution. The key measure is that cloud must reduce capital and operational expenses without sacrificing user functionality, such as availability. The best delivery model for cloud functionality is a hardware-agnostic approach that embraces the commodity architectures in use by the world’s leading Internet and SaaS providers. This can be achieved through low-cost commodity servers and disks coupled with intelligent management software, providing true cloud-based economies of scale and efficiency.

2. Robust security. When you move to the cloud, you’re entrusting the organization’s intellectual property to a third party. Do their security standards meet the needs of your business? Even the smallest entry point can create an opening for unauthorized access and theft. Authentication and access controls are even more critical in a public cloud where cluster attacks aimed at a hypervisor can compromise multiple customers. Ideally, the cloud provider should offer a broad set of security solutions enabling an information-centric approach to securing critical interfaces – between services and end users, private and public services, as well as virtual and physical cloud infrastructures.

3. Data availability. As the cloud places new demands on storage infrastructure, data availability, integrity, and confidentiality must be guaranteed. Often, these provisions come from vendors who offer massive scalability and elasticity in their clouds. To make this approach manageable for customers, cloud vendors must offer tools that provide visibility and control across heterogeneous storage platforms. The final test for cloud storage is interoperability with virtual infrastructures. This allows service providers to standardize on a single approach to data protection, de-duplication, assured availability, and disaster recovery across physical and virtual cloud server environments, including VMware, Microsoft Hyper-V and a variety of UNIX virtualization platforms.

4. Regulatory compliance. Cloud computing brings a host of new governance considerations. Organizations must evaluate the ability of the cloud provider to address the company’s own regulations, national and worldwide rules for conducting business in different regions, and customer needs. For example, many healthcare customers will require SOX and HIPAA compliance, while financial customers must comply with Gramm-Leach-Bliley and the Red Flags Rule.

5. Check the fine print. Don’t forget to thoroughly evaluate your organization’s SLA requirements and ensure the cloud provider can deliver on these provisions and is legally responsible for doing so. The most common SLAs relate to disaster recovery services. Make sure a contingency plan is in place to cover outages. In the event of a disaster, is the facility hosting your data able to quickly offload into another data center? On a related note, an SLA best practice is to perform data classification for everything – including customer data – being considered for cloud migration. Know where your vendor’s cloud assets are physically located, because customer SLAs – such as those with federal agencies – may require highly confidential data to stay on-shore. Non-sensitive information can reside in offshore facilities.
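As a rough illustration of the data-classification step described in point 5, the sketch below assigns each asset a sensitivity class and checks it against the region where the provider would host it. The classification labels, assets, and region policy are invented for the example; a real program would pull these from an asset inventory and contract terms.

```python
# Hypothetical residency policy: which hosting regions are acceptable per class.
RESIDENCY_POLICY = {
    "public":       {"onshore", "offshore"},
    "internal":     {"onshore", "offshore"},
    "confidential": {"onshore"},   # e.g. a federal-agency SLA: must stay on-shore
}

# Invented inventory of assets being considered for cloud migration.
ASSETS = [
    {"name": "marketing-site",   "classification": "public",       "provider_region": "offshore"},
    {"name": "customer-records", "classification": "confidential", "provider_region": "offshore"},
    {"name": "hr-portal",        "classification": "internal",     "provider_region": "onshore"},
]

def residency_violations(assets, policy):
    """Return assets whose hosting region is not allowed for their classification."""
    return [a for a in assets
            if a["provider_region"] not in policy[a["classification"]]]

for asset in residency_violations(ASSETS, RESIDENCY_POLICY):
    print(f"{asset['name']}: {asset['classification']} data cannot be hosted "
          f"{asset['provider_region']}")
```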

These five critical business considerations serve as a checklist for building trust into the cloud. This trust is crucial as the consumerization of IT continues to redefine the goals and requirements of IT organizations. Consider that, by the end of 2011, one billion smartphones will connect to the Internet as compared to 1.3 billion PCs.[2] The acceptance of mobile devices into the enterprise environment creates more demand for SaaS and remote access. As a result, businesses large and small will increasingly turn to the cloud to keep pace with demand.

# # #

[1] InformationWeek, “These Private Cloud Stats Surprised Me”:  http://www.informationweek.com/blog/main/archives/2010/06/these_private_c.html

[2] Mocana Study: “2010 Mobile & Smart Device Security Survey”

Moving to the Cloud? Take Your Application Security With You

January 27, 2011

By Bill Pennington, Chief Strategy Officer, WhiteHat Security

Cloud computing is becoming a fundamental part of information technology. Nearly every enterprise is evaluating or deploying cloud solutions. Even as business managers turn to the cloud to reduce costs, streamline staff, and increase efficiencies, they remain wary about the security of their applications.  Many companies express concern about turning over responsibility for their application security to an unknown entity, and rightly so.

Who is responsible for application security in the new world of cloud computing? Increasingly, we see third-party application providers, who are not necessarily security vendors, being asked to verify the thoroughness and effectiveness of their security strategies. Nevertheless, the enterprise ultimately still bears most of the responsibility for assessing application security regardless of where the application resides. Cloud computing or not, application security is a critical component of any operational IT strategy.

Businesses run on the Internet, and as cloud computing expands, a host of new data is being exposed publicly. History and experience tell us that well over 80% of all websites have at least one serious software flaw, or vulnerability, that exposes an organization to loss of sensitive corporate or customer data, significant brand and reputation damage, and in some cases, huge financial repercussions.

Recent incidents on popular websites like YouTube, Twitter and iTunes; hosting vendors like Go Daddy; and the Apple iPad have exposed millions of records, often taking advantage of garden-variety cross-site scripting (XSS) vulnerabilities. The 2009 Heartland Payment Systems breach was accomplished via a SQL Injection vulnerability. Thus far, the financial cost to Heartland is $60 million and counting. The soft costs are more difficult to determine.

Across the board, organizations will have the opportunity to prioritize security for the most exposed part of the business, Web applications, which is also often the most seriously underfunded. The following issues must be understood in order to align business goals and security needs as the enterprise transitions to cloud computing.

1. Web Applications are the Primary Attack Target – Securing Applications Must be a Priority

Most experts agree that websites are the target of choice. Why? With more than 200 million websites in production today, it follows that attackers would make them their target. No matter the skill level, there is something for everyone on the Web, from random, opportunistic attackers to very focused criminals after data from a specific organization. In one of the most recognized cases, an attacker used SQL injection to steal credit/debit card numbers that were then used to steal more than $1 million from ATMs worldwide.

And yet, most organizations believe that application security is underfunded, with only 18 percent of IT security budgets allocated to address the threat posed by insecure Web applications, while 43 percent were allocated to network and host security.

While businesses are fighting yesterday’s wars, hackers have already moved on. Even more puzzling, application security is not a strategic corporate initiative at more than 70 percent of organizations. And these same security practitioners do not believe they have sufficient resources budgeted for Web application security to address the risk. As more applications move to the cloud, this imbalance must change. IT and the business must work together to prioritize the most critical security risks.

2. The Network Layer has Become Abstracted

Prior to cloud computing, organizations felt a certain confidence level about the security of applications that resided behind the firewall. To a certain extent, they were justified in their beliefs. Now, we see the network layer, which had been made nearly impenetrable over the past 10 years, becoming abstracted by the advent of cloud computing. Where once there was confidence, there is now confusion among security teams as to where to focus their resources. The short answer is: Keep your eye on the application because it is the only thing you can control.

With cloud computing, the customer is left vulnerable in many ways. First, the security team has lost visibility into the network security infrastructure. If the cloud provider makes a change to its infrastructure, it naturally changes the risk profile of the customer’s application. However, the customer is most likely not informed of these changes and therefore unaware of the ultimate impact. It is the customer’s responsibility to demand periodic security reports from its cloud vendor and to thoroughly understand how its valuable data is being protected.

3. Security Team Loses Visibility with Cloud Computing: No IPS/IDS

One of the main concerns of security professionals anticipating an organizational switch to cloud computing is loss of visibility into attacks in progress, particularly with software-as-a-service (SaaS) offerings. With enterprise applications hosted by the cloud service provider, the alarm bells that the security team could rely on to alert them of attack, typically Intrusion Prevention or Intrusion Detection Systems, are now in the hands of the vendor. For some, this loss of visibility can translate into loss of control. In order to retain a measure of control, it is critical to understand the security measures that are in place at your cloud vendor and also to require that vendor to provide periodic security updates.

4. Change in Infrastructure is a Great Time to make Policy Changes/New Security Controls

Any time there is a change from one infrastructure to another, businesses have an impetus to review their security policies and procedures. In fact, a move to cloud computing can be an excellent opportunity to institute new security policies and controls across the board. A credible case can be made to review budgets and allocate more funds to application security. Where previously application security was a second-tier spending priority, it now rises to the top when SaaS comes into play.

This is a great time to pull business, security and development teams together to develop a strategy.

5. Cloud Security Brings App Security more in line with Business Goals – Decision Making Based on Business Value and Appropriate Risk.

For many organizations, application security is an afterthought. The corporate focus is on revenue, and often that means frequently pushing new code. Even with rigid development and QA processes, there will be differences between QA websites and actual production applications. This was not as critical when the applications resided behind the firewall, but now managers must take into account the value of the data stored in an application residing in the cloud.

Ideally, the security team and the business managers would inventory their cloud (and existing Web) application deployments. Once an accurate asset inventory is obtained, the business managers should evaluate every application and prioritize the security measures based on business value and create a risk profile. Once these measurements have occurred, an accurate application vulnerability assessment should be performed on all applications. Only then can the team assign value and implement an appropriate solution for the risk level. For example, a brochureware website will not need the same level of security as an e-commerce application.
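A minimal sketch of that prioritization step might look like the following, where each application gets a risk score from an assumed business-value rating and the severity of its known vulnerabilities. The applications, ratings, and weighting are hypothetical; the point is only to show business value and vulnerability data being combined into one ordering.

```python
# Invented inventory: business value on a 1-5 scale, vulnerability severities 0-10 (CVSS-like).
applications = [
    {"name": "e-commerce checkout", "business_value": 5, "vuln_severities": [9.1, 6.4]},
    {"name": "partner portal",      "business_value": 3, "vuln_severities": [5.0]},
    {"name": "brochureware site",   "business_value": 1, "vuln_severities": [4.3, 4.0]},
]

def risk_score(app):
    """Simple illustrative score: business value weighted by the worst known flaw."""
    worst = max(app["vuln_severities"], default=0.0)
    return app["business_value"] * worst

# Work the highest-risk applications first.
for app in sorted(applications, key=risk_score, reverse=True):
    print(f"{app['name']:<22} risk={risk_score(app):.1f}")
```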

Once an organization has accurate and actionable vulnerability data about all its websites, it can then create a mitigation plan. Having the correct foundation in place simplifies everything. Urgent issues can be “virtually patched” with a Web application firewall; less serious issues can be sent to the development queue for later remediation. Instead of facing the undesirable choice between shutting a website down or leaving it exposed, organizations armed with the right data can be proactive about security, reduce risk and maintain business continuity.
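A “virtual patch” is normally expressed as a rule in a Web application firewall. The Python sketch below imitates that behavior as a tiny WSGI filter that rejects requests matching a known-bad pattern until developers ship a real fix; the parameter name and pattern are invented, and a production WAF rule would be far more precise than this.

```python
import re
from urllib.parse import parse_qs

# Hypothetical virtual patch: block obvious script injection in a 'search' parameter
# until the underlying XSS flaw is fixed in the application itself.
BLOCKED = re.compile(r"<\s*script", re.IGNORECASE)

def virtual_patch(app):
    """Wrap a WSGI app and reject requests that trip the temporary rule."""
    def guarded(environ, start_response):
        params = parse_qs(environ.get("QUERY_STRING", ""))
        for value in params.get("search", []):
            if BLOCKED.search(value):
                start_response("403 Forbidden", [("Content-Type", "text/plain")])
                return [b"Request blocked by virtual patch"]
        return app(environ, start_response)
    return guarded

# Usage (assuming an existing WSGI application object):
# application = virtual_patch(existing_wsgi_app)
```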

Conclusion

Ultimately, website security in the cloud is no different than website security in your own environment. Every enterprise needs to create a website risk management plan that will protect its valuable corporate and customer data from attackers. If your organization has not prioritized website security previously, then now is the time to make it a priority. Attackers understand what many organizations do not: that Web applications are the easiest and most profitable target. And cloud applications are accessed via the browser, which means that website security is the only preventive measure that will help fight attacks.

At the same time, enterprises need to hold cloud vendors responsible for a certain level of network security while remaining accountable for their own data security. Ask vendors what type of security measures they employ, how frequently they assess their security and more. As a customer, you have a right to know before you hand over your most valuable assets. And, vendors know that a lack of security can mean lost business.

There may be some hurdles to jump during the transition from internal to cloud applications. But, by following these recommendations, an organization can avoid pitfalls:

1. You can’t secure what you don’t know you own – Inventory your applications to gain visibility into what data is at risk and where attackers can exploit the money or data being transacted.

2. Assign a champion – Designate someone who can own and drive data security and is strongly empowered to direct numerous teams for support. Without accountability, security and compliance will suffer.

3. Don’t wait for developers to take charge of security – Deploy shielding technologies to mitigate the risk of vulnerable applications.

4. Shift budget from infrastructure to application security – With the proper resource allocation, corporate risk can be dramatically reduced.

Neuroprivilogy: The New Frontier of Cyber Crime

January 21, 2011

By Shlomi Dinoor, vice president, emerging technologies, Cyber-Ark Software

Is your Neuroprivilogy vulnerable? The answer is most probably yes; you simply have no clue what Neuroprivilogy is (yet)…

The first step of this discussion is defining a fancy term to help educate and describe this new phenomenon: Neuroprivilogy. As the name suggests, Neuroprivilogy is constructed from the words neural (network) and privileged (access), and can be defined as the science of networks of privileged access points. Using the neural network metaphor, an organization’s infrastructure is not flat but a network of systems (neuron = system). The connections between systems are access points, similar to synapses (for neurons). Some of these access points are extremely powerful (i.e. privileged) while others are not. Regardless, access points should be accessed only by authorized sources.
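One way to picture this network of access points is as a directed graph: systems are nodes, inter-system access points are edges, and some edges carry privileged rights. The sketch below is a hypothetical model with invented system and account names, included only to make the metaphor concrete.

```python
# Each entry: (source system, target system, account used, privileged?)
access_points = [
    ("web-app",    "customer-db", "app_proxy_acct", True),
    ("batch-job",  "customer-db", "svc_batch",      True),
    ("web-app",    "cache",       "cache_user",     False),
    ("monitoring", "web-app",     "readonly_probe", False),
]

def privileged_edges(edges):
    """Return the access points that would interest an attacker most."""
    return [(src, dst, acct) for src, dst, acct, priv in edges if priv]

for src, dst, acct in privileged_edges(access_points):
    print(f"privileged access point: {src} -> {dst} via {acct}")
```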

In nearly every IT department, discussions about virtualization and debates about moving to the cloud usually end up in the same uncomfortable place, bookended by concerns about lack of security and loss of control. To help create a realistic risk/reward profile, we must first examine how the definition of privilege, in the context of the identity and access management landscape, is evolving. We are no longer just talking about controlling database administrators with virtually limitless access to sensitive data and systems; we are talking about processes and operations that can be considered privileged based on the data accessed, the database being entered, or the actions being taken as a result of the data.

The concept of “privilege” is defined by the risk of the data being accessed or the system being manipulated.  Virtualized and cloud technologies compound that risk, making traditional perimeter defenses no longer sufficient to protect far-reaching cloud-enabled privileged operations. Whether data is hosted, based in a cloud or virtualized, “privileged accounts and access points” are everywhere.

To gain a better understanding of the vulnerabilities impacting a privileged access points’ network, consider these Seven Neuroprivilogy Vulnerability Fallacies:

1. These access points have limited permissions

Most access points are granted privileged access rights to systems – systems use proxy accounts for inter-system interactions (e.g. application to database). Usually the most permissive access rights required are used as the common (permission) denominator.

2. Given the associated high risk, I probably have controls in place

Does anything from the following list sound familiar? Hardcoded passwords, clear-text passwords in scripts, default passwords never changed, “if we touch it, everything will break”… The irony is that personal accounts for real users have very limited access rights yet stricter controls (even simple ones such as mandating frequent password changes).
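A first, very rough pass at the “clear-text passwords in scripts” problem is simply to scan script directories for assignments that look like embedded credentials. The pattern, file extensions, and path below are illustrative only; real secret-scanning tools use much richer heuristics.

```python
import re
from pathlib import Path

# Naive pattern for things like: password = "hunter2"  or  DB_PASS='...'
CREDENTIAL_RE = re.compile(r"""(password|passwd|pwd|secret)\s*=\s*['"][^'"]+['"]""",
                           re.IGNORECASE)

def scan_for_hardcoded_credentials(root):
    """Yield (file, line number, line) for suspicious assignments under 'root'."""
    for path in Path(root).rglob("*"):
        if not path.is_file() or path.suffix not in {".py", ".sh", ".ps1", ".cfg"}:
            continue
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if CREDENTIAL_RE.search(line):
                yield path, lineno, line.strip()

if __name__ == "__main__":
    for path, lineno, line in scan_for_hardcoded_credentials("./scripts"):
        print(f"{path}:{lineno}: {line}")
```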

3. But I have all those security systems so I must be covered, right?

Existing security controls fail to address this challenge – IAM, SIEM and GRC are all good solutions; however, they address the challenge of known identities with limited access to the organization’s infrastructure, and hence lower risk. Accounts associated with privileged access points usually have limitless access and are often used by non-carbon-based entities or anonymous identities. Therefore, more adequate controls are required.

4. Privileged access points vulnerability is strictly for insiders

Picture yourself as the bad guy: which of the following would you target? Personal accounts with limited capabilities protected by some controls, OR privileged access points with limitless access protected by no controls? The notion of an internal access point is long gone, especially with the borderless infrastructure trend (did I say cloud?).

5. This vulnerability is isolated to my traditional systems

Some of the more interesting attacks and breaches from the past year reveal a notable, yet not entirely unexpected, trend. The target is no longer confined to the traditional server, application or database. Bad guys have attacked source code configuration management systems (the Aurora attacks), point-of-sale devices, PLCs (Stuxnet), ATMs, videoconferencing systems (Cisco) and more.

6. Adding new systems (including security) should not impact my security posture

That’s where it gets interesting. Most systems interact with others, whether infrastructure (such as a database or user store) or services. Whenever you add a system to your environment, you immediately add administrative accounts to the service and interaction points (access points) to other systems. As already mentioned, most of these powerful access points are poorly maintained, causing a local vulnerability (of the new system) as well as a global one (the new system serves as a hopping point to other network nodes). Either way, your overall security posture goes down.

7. I have many more accounts for real users than access points for systems

Though this fallacy might sound right, the reality is actually very different. It is not about how many systems you have, but about the inter-communication between them. Based on conversations with enterprise customers, the complexity of the network and the magnitude of this challenge will surprise many.
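A quick back-of-the-envelope calculation shows why the inter-connections, not the system count, dominate: with n systems, the number of possible directed system-to-system access points grows roughly as n(n-1). The connectivity fraction below is an assumption purely for illustration.

```python
def potential_access_points(n_systems, connectivity=0.2):
    """Estimate inter-system access points, assuming only a fraction of all
    possible directed connections actually exist (the fraction is a guess)."""
    return int(n_systems * (n_systems - 1) * connectivity)

for n in (10, 50, 200):
    print(f"{n:>4} systems -> ~{potential_access_points(n)} access points to govern")
```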

When you observe these fallacies alongside the characteristics of advanced persistent threat (APT) attacks, you realize that the Neuroprivilogy vulnerability is the Holy Grail for APT attackers. Cyber criminals understand the potential of these privileged access point networks, and by leveraging these vulnerabilities they have transformed the cyber crime frontier, as seen with many of the recent APT attacks, such as Stuxnet. It fits perfectly with APT characteristics: not about quick or easy wins, but about patient, methodical and persistent attacks targeting a well-defined (big) “prize.” Working the privileged access point network will eventually grant the bad guy access to his target.

So, what options exist for organizations that must balance protecting against cyber criminals with the proven advantages of virtualization and cloud technology? Let’s get down to some more details about network access points – how to find them and how to eliminate the vulnerability, or at least lessen its impact.

Discover – there is nothing you can do if you don’t know about it… To better secure network access points, including related identities, processes and operations, organizations must be able to automate the detection process of privileged accounts, including service accounts and scheduled tasks, wherever they are used across the data center and remote networks.  This auto-detection capability significantly reduces ongoing administration overhead by proactively adding in new devices and systems as they are commissioned, and it further ensures that any privileged password changes are propagated wherever the account is used.  It also increases stability and eliminates risks of process and application failures from password synchronization mismatches.
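In practice this discovery step is done by dedicated tooling, but its shape can be sketched roughly as below: walk an inventory of hosts, collect the service accounts and scheduled tasks each one reports, and keep a register of where every privileged account is used so credential changes can be propagated everywhere. The scan data and host names here are entirely hypothetical.

```python
from collections import defaultdict

# Hypothetical data a discovery scan might return per host.
scan_results = {
    "app-server-01": {"service_accounts": ["svc_web"], "scheduled_tasks": ["svc_backup"]},
    "db-server-01":  {"service_accounts": ["svc_db", "svc_backup"], "scheduled_tasks": []},
}

def build_privileged_account_register(results):
    """Map each discovered privileged account to every host that uses it,
    so a password change can be propagated wherever the account appears."""
    register = defaultdict(set)
    for host, findings in results.items():
        for account in findings["service_accounts"] + findings["scheduled_tasks"]:
            register[account].add(host)
    return register

for account, hosts in build_privileged_account_register(scan_results).items():
    print(f"{account}: used on {', '.join(sorted(hosts))}")
```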

Control – don’t be an ostrich, take control! Another benefit of automation, particularly for those who fear loss of control, is that organizations are assured that password refreshes are made at regular intervals and in line with the organization’s IT and security policies. Having an automated system in place allows the company to have a streamlined mechanism for disabling these privileged accounts immediately, thus lessening the impact on business operations.
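The control step is essentially policy-driven rotation: compare when each account’s credential was last changed against the interval the security policy allows, and rotate anything overdue. The accounts, interval, and rotate() stub below are placeholders for whatever vaulting mechanism an organization actually uses.

```python
import secrets
from datetime import datetime, timedelta

MAX_PASSWORD_AGE = timedelta(days=90)   # assumed policy interval

# Hypothetical register of privileged accounts and their last rotation time.
accounts = {
    "svc_backup": datetime(2010, 9, 1),
    "svc_web":    datetime(2011, 1, 10),
}

def rotate(account):
    """Stand-in for pushing a new credential to every system that uses the account."""
    new_password = secrets.token_urlsafe(24)
    print(f"rotating {account} (new credential generated, length {len(new_password)})")

def enforce_rotation_policy(register, now=None):
    now = now or datetime.now()
    for account, last_changed in register.items():
        if now - last_changed > MAX_PASSWORD_AGE:
            rotate(account)
            register[account] = now

enforce_rotation_policy(accounts)
```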

And yeah, Comply – from a compliance standpoint, regulations such as Sarbanes-Oxley, PCI, and Basel II require organizations to provide accountability about who or what accessed privileged information, what was done, and whether passwords are protected and updated according to policy.  Without the necessary systems in place to automatically track and report that access, compliance becomes a daunting, time-consuming, and often expensive process, especially in terms of employees’ time and potential fines.

It is true that no single solution can prevent every breach or cyber threat that could impact a virtualized or cloud environment (multiple layers of defense are important). However, by adopting a Neuroprivilogy state of mind, organizations gain a more holistic view of infrastructure vulnerabilities. The best advice is to “prepare now” by proactively implementing proven processes and technologies to automate adherence to the security policies that are in place across the entire enterprise. In doing so, enterprises can protect sensitive access points against breaches, meet audit requirements and mitigate productivity and business losses.

So, now that you know more, I’ll ask again: is your Neuroprivilogy vulnerable? If you aren’t sure, chances are there is a cyber criminal out there who already knows.  So now the real question becomes: what are you going to do about it?

# # #

About the author:  Shlomi Dinoor has more than 12 years of security and identity management experience in senior engineering management positions.  As the head of Cyber-Ark Labs at Cyber-Ark Software (www.cyber-ark.com), Dinoor is focused on new technologies that help customers prepare for “what’s next” in terms of emerging insider threats, data breach vulnerabilities and audit requirements.  To read more, visit his personal blog, Shlomi’s Parking Spot.

Will the Cloud Cause the Reemergence of Security Silos?

January 19, 2011

By Matthew Gardiner

Generally, silos in the world relate to things that are beneficial, such as silos for grain or corn. However, in the world of IT security, silos are very bad. In many forensic investigations, application silos turn up as a key culprit that enabled data leakage of one sort or another. It is not that any one application silo is inherently a problem – one can repair and manage a single silo much as a farmer would do – it is the existence of many silos, and silos of so many types, that is the core problem. Farmers generally don’t use thousands of grain silos to handle their harvest; they have a handful of large, sophisticated, and centralized ones.

The same approach has proven highly effective in the world of application security, particularly since the emergence of the Web and its explosion of applications and users. Managing security as a centralized service and applying it across large swaths of an organization’s infrastructure and applications is clearly a best practice. However, with the emergence of the Cloud as the hot application development and deployment platform going forward, organizations are at significant risk of returning to the bad days of security silos. When speed overruns architecture, say hello to security silos and the weaknesses that they bring.

What do I mean by security silos?  I think of silos as application “architectures” which cause security (as well as IT management in general) to be conducted in “bits-and-pieces”, thus uniquely within the specific platform or system.  Applications are built this way because it feels faster in the short term. After all, the project needs to get done.  But after this approach is executed multiple times the organization is left with many inconsistent, custom, and diverse implementations and related security systems.  These systems are inevitably both complex to operate and expensive to maintain as well as easy to breach on purpose or by accident.

Perhaps this time it is different? Perhaps IT complexity will magically decline with the Cloud? Do you really think that the move to the Cloud is going to make the enterprise IT environment homogeneous and thus inherently easier to manage and secure? Not a chance. In fact, just the opposite is most likely. How many organizations will move all of their applications and data to public clouds? And, for that matter, to a single public cloud provider? Very few. Given this, it is imperative that security architects put in place security systems that are designed to operate in a highly heterogeneous, hybrid (mixed public cloud and on-premise) world. The cloud-connected world is one where applications and data will one day be inside the organization on a traditional platform, the next day hosted within the organization’s private cloud, the next day migrated to live within a public cloud service, and then back again, based on what is best for the organization at that time.

Are security silos inevitable with the move to the Cloud?  In the short term, unfortunately, probably yes.  With every new IT architecture the security approach has to do some catch-up.  It is the security professionals’ job to make this catch-up period as short as possible.

How should we shorten the catch-up period?

  • First, update your knowledge base around the Cloud and security. There are a lot of good sources out there; one in particular that I like is from the Cloud Security Alliance (CSA), Security Guidance for Critical Areas of Focus in Cloud Computing.
  • Second, rethink your existing people, processes, and technology (sorry for the classic IT management cliché) in terms of the cloud. You will find the control objectives don’t change, but how you accomplish them will.
  • Third, start making the necessary investments to prepare your organization for the transition to the cloud that is likely already underway.

While there are many areas covered in the above CSA document, let me focus on one area that in particular highlights some cloud specific security challenges, specifically around Identity and Access Management.

The CSA document says it well, “While an enterprise may be able to leverage several Cloud Computing services without a good identity and access management strategy, in the long run extending an organization’s identity services into the cloud is a necessary precursor towards strategic use of on-demand computing services.”  Issues such as user provisioning, authentication, session management, and authorization are not new issues to security professionals.  However, accomplishing them in the context of the cloud requires that the identity management systems that are on-premise in the enterprise automatically “dance” with the equivalent systems at the various cloud service providers.  This dance is best choreographed through the use of standards, such as SAML, XACML, and others.  In fact the rise of the cloud also raises the possibility of outsourcing even some of your identity management services, such as multi-factor authentication, access management, and other capabilities to specialized cloud security providers.
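To make the federation “dance” slightly more concrete, the sketch below builds the bare skeleton of a SAML 2.0 assertion, the kind of token an on-premise identity provider hands to a cloud service provider. It is unsigned and omits nearly everything a real deployment needs (signatures, audience restrictions, subject confirmation), so treat it purely as an illustration of the structure; a real integration would use a maintained SAML library, and the issuer and subject values are invented.

```python
import uuid
import xml.etree.ElementTree as ET
from datetime import datetime, timedelta, timezone

SAML_NS = "urn:oasis:names:tc:SAML:2.0:assertion"

def build_assertion_skeleton(issuer, subject, valid_minutes=5):
    """Return a minimal, *unsigned* SAML 2.0 assertion as an XML string."""
    now = datetime.now(timezone.utc)
    ET.register_namespace("saml", SAML_NS)
    assertion = ET.Element(f"{{{SAML_NS}}}Assertion", {
        "ID": "_" + uuid.uuid4().hex,
        "Version": "2.0",
        "IssueInstant": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
    })
    ET.SubElement(assertion, f"{{{SAML_NS}}}Issuer").text = issuer
    subj = ET.SubElement(assertion, f"{{{SAML_NS}}}Subject")
    ET.SubElement(subj, f"{{{SAML_NS}}}NameID").text = subject
    ET.SubElement(assertion, f"{{{SAML_NS}}}Conditions", {
        "NotBefore": now.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "NotOnOrAfter": (now + timedelta(minutes=valid_minutes)).strftime("%Y-%m-%dT%H:%M:%SZ"),
    })
    return ET.tostring(assertion, encoding="unicode")

# Hypothetical issuer and subject, for illustration only.
print(build_assertion_skeleton("https://idp.example.com", "alice@example.com"))
```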

While in the short term it would seem that the emergence of some security silos is inevitable with organizations’ aggressive move to the cloud, it doesn’t have to be this way forever. We know security silos are bad, we know how to avoid them, and we have much of the necessary technology already available to eliminate them. Our necessary action is to take action.

Matthew Gardiner is a Director working in the Security business unit at CA Technologies. He is a recognized industry leader in the security & Identity and Access Management (IAM) markets worldwide. He is published, blogs, and is interviewed regularly in leading industry media on a wide range of IAM, cloud security, and other security-related topics. He is a member of the Kantara Initiative Board of Trustees. Matthew has a BSEE from the University of Pennsylvania and an SM in Management from MIT’s Sloan School of Management.  He blogs regularly at: http://community.ca.com/members/Matthew-Gardiner.aspx and also tweets @jmatthewg1234.  More information about CA Technologies can be found at www.ca.com.

Certifiable in the Cloud

January 13, 2011

Author: Pamela Fusco, VP of Industry Solutions for Solutionary

Cloud computing remains as much a mystery to some as it is a part of others’ daily lexicon. I spend a lot of time working with people who have connections to various offices of the U.S. government, and I find that, regardless of the topic or the background of the person I’m speaking with, one thing that consistently works when discussing something like cloud services with an audience that may not be too familiar with it is to start with an analogy.

Are you ready for my big Cloud computing analogy? Here it is: PIZZA!

Now, you might be wondering what pizza has to do with cloud computing. Simply put, the passion for pizza is internationally recognized. With the exception of a few minor recipe tweaks here or there, the process for making it is well known, and all of the major “systems” we need to make it (ovens, stoves, etc.) are there for anyone to access.

But the funny thing about making pizza is that, as simple as it seems, not everyone can make good pizza, and unless you’re making four or five pizzas at a time, you end up with a lot of wasted food (half a salami, bunches of veggies, etc.). So for those of us who simply don’t have the time, or the desire, to do it ourselves, we order out. And that has fueled this multi-billion dollar pizza industry.

So, taking pizza to the cloud: organizations have already figured out that the ROI for “at home” cloud computing (i.e., pizza making) is not as impressive as the ROI you get from buying from someone who is already in the business of delivering cloud services.

No great mystery here really, right? But let’s get a bit more complicated, because cloud computing is just the beginning. Sure, an organization might initially look at cloud computing as a way to realize cost savings over maintaining data centers, but there’s so much more to it than that, and that’s where you have to be careful.

History is an excellent indication of what our future holds. In fact, many believe that in order to understand our future, we must first understand our past. And that couldn’t be truer when it comes to risk mitigation and compliance in a virtualized cloud environment. If we take a look back at how applications have been delivered in the past, we have to also recognize the issues that have presented themselves with regards to risk mitigation and compliance. And just as unauthorized mainframe access was a problem way back then, the availability of our data when it’s “in the cloud” is a concern for organizations today.

All industries are under substantial oversight and regulation—from the FDA to PCI DSS—and requirements for these industries are ever changing. When operating with constantly changing requirements, basic standards and processes are core to the success of the operation and are usually “baked in” at some level within the data services you’re purchasing. But what happens when you want to try something different; more toppings perhaps, or maybe you’re hosting a get-together and your on-demand needs have doubled? If you’re with an experienced cloud services provider, then you probably won’t hesitate to place your custom order and add more products, because you trust the service and are confident that what you receive will be “business as usual” and that you will get your products as you need them.

This is all possible through experience. Your local pizzeria knows how to produce its products, scale them to meet demand, and deliver services to support point-in-time needs and requirements. Heck, they probably even offer a “30 minute” service delivery guarantee (a.k.a. an SLA). Service providers, from infrastructure to software, must know their business and the business requirements of their clients, but they must also invest in R&D and innovation to ensure client retention, increase their client base, and maintain compliance with regulatory statutes. If they ignore any of these aspects, history will repeat itself.
