News roundup for May 28 2010

May 28, 2010

Financial Services Like The Cloud, Provided It’s Private - http://www.informationweek.com/cloud-computing/blog/archives/2010/05/financial_servi.html

Novell Identity Manager extended to cloud - http://www.computerworlduk.com/technology/applications/software-service/news/index.cfm?newsid=20357

Amazon CEO Jeff Bezos: Cloud services can be as big as retail business - http://www.zdnet.com/blog/btl/amazon-ceo-jeff-bezos-cloud-services-can-be-as-big-as-retail-business/35111

Software evaluation 2.0?

May 27, 2010

I spend a lot of time evaluating software: for product reviews, to see which versions are vulnerable to various exploits, and sometimes just to see if I should be using it. Most often this looks something like: find the software, download it, find the install and configuration documents, walk through them, find an error or two (documentation always seems to be out of date), fiddle with dependencies (database settings, etc.), finally get it mostly working (I think) and then test it out. I’m always wondering in the back of my mind if I’ve done it properly or if I’ve missed something, especially when it comes to performance issues (is the software slow, or did I mess up the database settings and not give it enough buffers?).

But it seems that some places are finally taking note of this and making their software available as a VM, fully configured and working, no fuss or mess: just download it, run it, and play with the software. Personally I love this, especially for large and complex systems that require external components such as a database server, message queues and so on. No more worrying about configuring all the add-on bits, or making sure you have a compatible version, and so on.

This dovetails nicely with an increasing reliance on cloud computing: instead of buying a software package and having to maintain it, you can buy a VM image with the software, essentially getting SaaS levels of configuration and support while keeping IaaS levels of control (you can run it in house, behind a VPN, and you can configure it specially for your needs if you have to). The expertise needed to properly configure the database (which varies hugely between databases, and depends on what the product needs: latency? memory? bulk transfers? special character encoding?) is provided by the vendor, who at least in theory knows best. I also think vendors will start to appreciate this: the tech support time needed to guide customers through installation, configuration and integration with other services and components can be replaced by “OK sir, I need you to upload the image to your VMware server (or EC2 or whatever) and turn it on… OK, now log in as admin and change the password. OK, we’re done.”
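That “turn it on” step really can be about that short. Here is a minimal sketch, assuming a hypothetical vendor-supplied appliance AMI and using the boto3 SDK (which postdates this post); the image ID, instance type and key pair name are placeholders, not real values:

```python
# Minimal sketch: boot a vendor-supplied appliance image on EC2.
# Assumes boto3 is installed and AWS credentials are configured; the AMI ID,
# instance type and key pair below are placeholders, not real values.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # hypothetical vendor appliance AMI
    InstanceType="m5.large",           # sized per the vendor's recommendation
    KeyName="my-keypair",              # existing key pair for the admin login
    MinCount=1,
    MaxCount=1,
)

instance = instances[0]
instance.wait_until_running()          # block until the appliance is up
instance.reload()
print("Appliance running at", instance.public_ip_address)
# The remaining steps are the vendor's: log in as admin, change the password.
```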

Security is also a lot easier. The vendor can test patches knowing with certainty that they will work on customers’ systems, since each customer has the exact same system as the vendor, and customers can apply patches with a higher degree of confidence, knowing that they were tested in the same environment.

Reddit ships code as fully functional VM. – http://blog.reddit.com/2010/05/admins-never-do-what-you-want-now-it-is.html

Update: CERT releases fuzzing framework as a VMware image – http://threatpost.com/en_us/blogs/cert-releases-basic-fuzzing-framework-052710
http://www.cert.org/download/bff/

Counterfeit gear in the cloud

May 26, 2010

One of the best and worst things about outsourced cloud computing (as opposed to in house efforts) is the ability to spend more time on what is important to you, and leave things like networking infrastructure, hardware support and maintenance and so on to the provider. The thing I remember most about system and network administration is all the little glitches, some of which weren’t so little and had to be fixed right away (usually at 3 in the morning). One thing I love about outsourcing this stuff is I no longer have to worry about network infrastructure.

Assuming, of course, that the cloud provider does a good job. The good news here is that network availability and performance are really easy to measure, and really hard for a cloud provider to hide. Latency is latency, and you generally can’t fake a low-latency network (although if you can, please let me know! We’ll make millions). Ditto for bandwidth: either the data transfers in 3 minutes or in 4 minutes; a provider can’t really fake that either. Reliability is a little tougher, since you have to measure it continuously to get good numbers (are there short but total outages, longer “brownouts” with reduced network capacity, or is everything actually working fine?). But none of this takes into account, or allows us to predict, the kind of catastrophic failure that results in significant downtime.
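For illustration, here is a minimal sketch of those three measurements; the endpoint URLs are hypothetical placeholders, and real monitoring would run continuously and from several vantage points:

```python
# Rough measurements of a provider's network: latency, throughput, availability.
# The URLs are placeholders; point them at your own endpoints at the provider.
import time
import urllib.request

HEALTH_URL = "https://example-provider.invalid/healthcheck"      # hypothetical
LARGE_OBJECT_URL = "https://example-provider.invalid/100MB.bin"  # hypothetical

def average_latency_ms(url, samples=10):
    """Average time to complete a small request, in milliseconds."""
    timings = []
    for _ in range(samples):
        start = time.monotonic()
        try:
            urllib.request.urlopen(url, timeout=5).read()
            timings.append((time.monotonic() - start) * 1000)
        except OSError:
            timings.append(float("inf"))   # a failed probe is an unusable sample
    return sum(timings) / len(timings)

def throughput_mbps(url):
    """Effective download throughput for a large test object, in megabits/s."""
    start = time.monotonic()
    data = urllib.request.urlopen(url, timeout=300).read()
    elapsed = time.monotonic() - start
    return (len(data) * 8 / 1_000_000) / elapsed

def availability(url, samples=60, interval_seconds=60):
    """Fraction of periodic probes that succeed; short outages and brownouts
    only show up if you keep sampling for a long time."""
    ok = 0
    for _ in range(samples):
        try:
            urllib.request.urlopen(url, timeout=5)
            ok += 1
        except OSError:
            pass
        time.sleep(interval_seconds)
    return ok / samples
```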

One way providers deal with this potential problem is simple: they buy good name-brand gear with support contracts that guarantee replacement times, how long it will take an engineer to show up, and so on. But this stuff is expensive. So what happens if a cloud provider finds, or is offered, name-brand equipment at reduced or even really cheap prices (this does happen legitimately; a company goes bust and its equipment is sometimes sold off cheap) that turns out to be counterfeit? Counterfeit gear isn’t under a support contract and isn’t built to the same specs as the real thing, meaning it is more likely to fail or suffer problems, causing you grief.

How do you, the cloud provider customer, know that your provider isn’t accidentally (or otherwise) buying counterfeit network gear?

Well, short of a physical inspection and phoning in the serial numbers to the manufacturer, you can’t. Unfortunately I can’t think of any decent solutions to this, so if you know of one or have any ideas, feel free to leave a comment or email me, [email protected].

Feds shred counterfeit Cisco trade – With a new conviction today, the federal action known as Operation Network Raider has resulted in 30 felony convictions and more than 700 seizures of counterfeit Cisco network hardware with an estimated value of more than $143 million.

-By Layer 8, Network World

Yikes.

Amazon AWS – 11 9s of reliability?

May 24, 2010

Amazon recently added a new redundancy service to their S3 data storage service. Amazon now claims that data stored in the “durable storage” class is 99.999999999% “durable” (not to be confused with availability – more on this later).

“If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so. This storage is designed in such a way that we can sustain the concurrent loss of data in two separate storage facilities.”

http://aws.typepad.com/aws/2010/05/new-amazon-s3-reduced-redundancy-storage-rrs.html – Jeff Barr, AWS blog

So how exactly does Amazon arrive at this claim? Well, reading further, they also offer a “REDUCED_REDUNDANCY” storage class (which is 33% cheaper than normal) that guarantees 99.99% durability and is “designed to sustain the loss of data in a single facility.” From this we can extrapolate that Amazon is simply storing the data in multiple physical data centers. If the chance of any one facility losing your data (burning down, cable cut, etc.) is something like 0.01%, then storing it in two data centers means roughly a 0.000001% chance that both fail at the same time (or, on the flip side, 99.999999% durability), and three data centers gives roughly a 0.0000000001% chance of loss (99.9999999999% durability), and so on. I’m not sure of the exact numbers that Amazon is using, but you get the general idea: a small chance of failure, combined with multiple locations, makes for a very, very small chance of failure at all the locations at the same time.
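As a sanity check on that arithmetic, here is a minimal sketch; the 0.01% per-facility figure is an assumed illustrative number, not one Amazon publishes, and the multiplication only holds if the facilities fail independently:

```python
# Back-of-the-envelope durability math: data is lost only if every facility
# holding a copy fails, and facility failures are assumed to be independent.
def combined_durability(per_facility_loss, facilities):
    return 1 - per_facility_loss ** facilities

p_loss = 0.0001  # assumed 0.01% chance of losing the data at any one facility
for n in (1, 2, 3):
    print(f"facilities={n}  durability={combined_durability(p_loss, n):.12f}")
# facilities=1  durability=0.999900000000
# facilities=2  durability=0.999999990000
# facilities=3  durability=0.999999999999
```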

Except there is a huge gaping hole in this logic. To expose it, let’s revisit history, specifically the Hubble Space Telescope. The Hubble Space Telescope can be pointed in specific directions using six on-board gyroscopes. By adding momentum to a single gyroscope, or applying the brakes to it, you can cause Hubble to spin clockwise or counterclockwise about a single axis. With two of these gyroscopes you can move Hubble across three axes to point anywhere. Of course, having three sets of gyroscopes makes maneuvering it easier, and having spare gyroscopes ensures that a failure or three won’t leave you completely unable to point Hubble at interesting things.

But what happens when you have a manufacturing defect in the gyroscopes, specifically the use of regular air instead of inert nitrogen during their manufacture? Well, having redundancy doesn’t do much, since the gyroscopes start failing in the same manner at around the same time (which would have left Hubble nearly useless if not for the first servicing mission).

The lesson here is that having redundant and backup systems that are identical to the primary systems may not increase the availability of the overall system significantly. And I’m willing to bet that Amazon’s S3 data storage facilities are near carbon copies of each other with respect to the hardware and software they use (to say nothing of configuration, access controls, authentication and so on). A single flaw, for example a software issue that results in loss or mangling of data, may hit multiple sites at the same time as the bad data is propagated. Alternatively, a security flaw in the administrative end of things could let an attacker gain access to, and start deleting data from, the entire S3 “cloud”.

You can’t just take the chance of failure and square it for two sites if the two sites are identical. The same goes for 3, 4 or 27 sites. Oh, and read the fine print: “durability” means the data is stored somewhere, but Amazon makes no claims about availability, or whether or not you can actually get at it.
Something to keep in mind as you move your data into the cloud.

3 Problems Cloud Security Certification Can Solve

May 17, 2010

By Jim Reavis

What if there were widely accepted standards for cloud security and, better yet, a universally recognized designation for “trusted” cloud providers?

The basic promise of cloud computing is undeniably appealing: Increase efficiency and reduce cost by taking advantage of flexibly pooled computing resources managed by somebody else.

Indeed, as Bill Brenner of CSO put it, “Given how expensive it is to maintain in-house hardware and software, the idea of putting one’s IT infrastructure in the cloud sounds downright heavenly.”

Unfortunately, this “heavenly” scenario is marred by real concerns about security – concerns which can range from network security basics like data integrity and identity management to abstruse questions of “local law and jurisdiction where data is held.”

Yes, cloud computing is changing everything from data center architecture to entire business ecosystems. However, the many complex questions this new paradigm poses, particularly when it comes to issues of security, governance, and compliance, are effectively preventing (or at least slowing) its widespread adoption.

So how would a Cloud Security Certification, such as the one being proposed by the Cloud Security Alliance (CSA), help matters?

1. One standard cloud-specific definition for “secure”

“Security controls in cloud computing are, for the most part, no different than security controls in any IT environment,” the CSA writes in their recently released security guidance document for cloud computing.

“However,” the document continues, “because of the cloud service models employed, the operational models, and the technologies used to enable cloud services, cloud computing may present different risks to an organization than traditional IT solutions.”

A common, standardized definition of what properly belongs to cloud security would ensure that cloud providers and their clients operate with a shared, comprehensive view of the cloud security landscape and clear expectations of how it should be managed.

2. Streamline the process for evaluating providers

The complexity of the cloud security equation, and the fact that every provider addresses this complexity in its own unique way, make the process of mapping the security requirements of the enterprise to the capabilities of the vendor both difficult and time-consuming.

By allowing cloud providers to display a “visible seal of trust”, insists Novell’s Jim Ebzery, a certification like this will give organizations “a simple way to assure their specific corporate security policies and regulatory concerns will be enforced in the cloud.”

3. Overcome the security fears blocking cloud adoption

Cloud technology can solve real problems faced by organizations and enterprises today and will play a major role in the evolution of IT infrastructure going forward. The continuing growth of this field will benefit everyone.

Nevertheless, the speed of this evolution, and the rate of innovation fostered by the cloud more generally, will absolutely be determined by the rate of cloud adoption.

The emergence of a generally accepted cloud security “seal of approval” should allay many of the concerns that stand in the way of this adoption and, ultimately, open the door to a future of practically unlimited opportunity.

What are your thoughts on a logo or certification program for the cloud?

Season’s Greetings from the CSA!

May 17, 2010

By Zenobia Godschalk

2009 has been a busy year for the CSA, and 2010 promises to be even more fruitful. The alliance is now 23 corporate members strong, and is affiliated with numerous leading industry groups (such as ISACA, OWASP and the Jericho Forum) to help advance the goal of cloud security. Below is a recap of recent news and events, as well as upcoming events. We have had tremendous response to the work to date, and this month we will release version two of our guidance. Thanks to all our members for their hard work in our inaugural year!

RECENT NEWS

 

CSA and DMTF

The CSA and DMTF have partnered to help coordinate best practices for Cloud security. More details here: http://www.cloudsecurityalliance.org/pr20091201.html

Cloud Security Survey

The Cloud Security survey is still open for responses! Here’s your chance to influence cloud security research. Survey takes just a few minutes, and respondents will receive a free, advance copy of the results.

http://www.surveymonkey.com/s.aspx?sm=VqH8jHHwc9GhANj3EzDl1g_3d_3d

Computerworld: Clear Metrics for Cloud Security? Yes, Seriously

http://www.computerworld.com/s/article/9141010/Clear_Metrics_for_Cloud_Security_Yes_Seriously

The Cloud Computing Show (featuring an interview with CSA’s Chris Hoff)

http://cloudcomputingshow.blogspot.com/2009/11/cloud-computing-show-20.html

 

For more cloud security news, check out the press page on the CSA site.

 

RECENT EVENTS

 

State of California

At the end of October CSA was invited to present to Information Security professionals of the State of California. During this 2-day state-sponsored conference we provided education and transparency into CSA’s research around Cloud Security and how the federal government is using cloud deployments.

CSI DC

Also at the end of October, the CSA participated in a Cloud Security workshop during the annual CSI Conference in DC.

India Business Technology Summit

In November, Nils Puhlmann, co-founder of the CSA, presented to an audience of 1,400 at the annual India Business Technology Summit. Not only did he address the audience in a keynote, but he also delivered the CSA message and learnings in a workshop at the India Institute of Science and Technology in Bangalore. Puhlmann also participated in a follow-up panel in Mumbai at the India Business Technology Executive Summit.

Conference of ISMS Forum

In December CSA was represented at the 6th International Conference of ISMS Forum in Seville. Nils Puhlmann delivered a keynote and moderated a panel looking into the future of information security and how new technologies like cloud computing might affect our industry.

CSA and the ISMS Forum also signed an MOU to cooperate more closely. The event and CSA’s participation were covered in the prominent Spanish newspaper Cinco Días, the business supplement of El País.

ISMS and CSA also started activities to launch a Spanish Chapter of CSA to better address the unique and local issues around secure cloud adoption in Spain.

UPCOMING NEWS AND EVENTS

Guidance V2

The second version of the security guidance for critical areas of focus in cloud computing is coming soon! Watch for the next version of the guidance to be released this month.

 

SecureCloud 2010

Registration is now open for SecureCloud 2010, a joint conference with ENISA and ISACA being held in Barcelona, March 16th and 17th.

http://www.cloudsecurityalliance.org/sc2010.html

Cloud Security Alliance Summit

In addition, the Cloud Security Alliance Summit will be held in conjunction with the RSA Conference in San Francisco on March 1. Further details are below; check back on the CSA website for more updates coming soon!

Cloud Security Alliance Summit

March 1, 2010, San Francisco, Moscone Center

The next generation of computing is being delivered as a utility.

Cloud Computing is a fundamental shift in information technology utilization, creating a host of security, trust and compliance issues.

The Cloud Security Alliance is the world’s leading organization focused on the cloud, and has assembled top experts and industry stakeholders to provide authoritative information about the state of cloud security at the Cloud Security Alliance Summit. This half-day event will provide broad coverage of cloud security domains and the available best practices for governance, legal, compliance and technical issues. From encryption and virtualization to vendor management and electronic discovery, the speakers will provide guidance on key business and operational issues. We will also present the latest findings from the CSA working groups on Cloud Threats, Metrics and Controls Mappings.

Your Chance to Influence Cloud Security Research!

May 17, 2010

By Zenobia Godschalk

The Cloud Security Alliance needs your help! We are conducting a survey to help us better understand users’ current cloud deployment plans and their biggest areas of security and compliance concern. The feedback generated here will assist the CSA in shaping our educational curriculum and areas of guidance over the coming months. So, if you’re concerned about cloud security, let your voice be heard!

http://www.surveymonkey.com/s.aspx?sm=VqH8jHHwc9GhANj3EzDl1g_3d_3d

(Survey takes just a few minutes, and you will receive a complimentary copy of the results)

Cloud Security and Privacy book by CSA founding members

May 17, 2010

By Jim Reavis

I wanted to let everyone know about the new book release, Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance. This book was written by three experts, two of whom are CSA founding members. I had the opportunity to read the book prior to its publication and I can personally recommend it as a great resource for those seeking to learn about and securely adopt cloud computing. The book URL is below:

http://oreilly.com/catalog/9780596802769/

Seemingly basic power problems in state-of-the-art data centers

May 17, 2010

By Wing Ko

I came across this “Stress tests rain on Amazon’s cloud” article from iTnews for Australian Business about a week ago. A team of researchers in Australia spent 7 months stress testing Amazon’s EC2, Google’s AppEngine and Microsoft’s Azure cloud computing services, and found that these cloud providers suffered from regular performance and availability issues.

The researchers released more data yesterday – http://www.itnews.com.au/News/153819,more-data-released-on-cloud-stress-tests.aspx. It turns out Google’s AppEngine problem was “by design”: no single processing task can run for more than 30 seconds, to prevent denial-of-service attacks against AppEngine. It would have been nice to warn customers ahead of time, but it is nevertheless a reasonable security feature.
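For illustration, here is a minimal, framework-agnostic sketch of coping with a hard per-request deadline like AppEngine’s: do as much work as fits, then hand the remainder back to a queue. The fetch/process/enqueue functions are hypothetical application code, not AppEngine APIs:

```python
# Work within a hard per-request deadline by processing in small batches and
# re-enqueueing whatever is left before the platform cuts the request off.
import time

DEADLINE_SECONDS = 30   # platform-imposed limit (e.g. AppEngine circa 2010)
SAFETY_MARGIN = 5       # stop early so the hand-off itself still fits

def handle_request(job_id, fetch_batch, process, enqueue_remainder):
    start = time.monotonic()
    while True:
        if time.monotonic() - start > DEADLINE_SECONDS - SAFETY_MARGIN:
            enqueue_remainder(job_id)   # resume in a later request or task
            return "partial"
        batch = fetch_batch(job_id)     # next slice of unfinished work
        if not batch:
            return "done"
        process(batch)
```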

The reason for Amazon’s problem was not so reasonable: a power and backup-generator failure. It’s kind of hard to believe that an operation as sophisticated as Amazon could be taken down by a simple power failure, causing outages and performance degradation. Or was it?

I was personally involved in three major data center outages due to “simple” power problems. Obviously, there will be no names associated with these incidents, to protect the innocent, blah, blah, blah…

Incident #1:

We had just launched a new state-of-the-art data center, and it had been in use for less than 6 months. A summer power outage knocked out half of the data center for less than an hour, but it took us about 2 days to restore services to all the customers because some high-end equipment was fried, disks crashed, etc. – you know the deal.

Initially everyone was puzzled: why was half of the data center out of power when we had separate power sources from 2 utility companies, battery banks, and diesel generators with 2 separate diesel refilling companies for our brand-new data center? We should have been able to stay up for as long as we needed even without any outside power source. Well, the post-mortem revealed that the electricians hadn’t connected one set of the PDUs to the power systems, which is why every other rack was out of power. We were in such a hurry to light up that center that we didn’t test everything. Since all systems were fed with dual power through multiple levels, we couldn’t tell that half the systems weren’t fully powered. When we tested the power, we happened to test the half that worked.

Incident #2:

Another summer storm came through around 11 PM and knocked out power to a slightly older data center. Somehow it blew the main circuit to the water pumps. The good news was that the backup power worked and all the systems were up and running. The bad news was that the A/C systems depended on the chilled water, so no chilled water, no A/C. Well, we had mainframes, mainframe-class UNIX servers, enterprise-class Windows servers, SANs, DASes, and many more power-hungry heat monsters in that data center. Only a few night-shift people were there, and they didn’t know much, but they did follow the escalation process and called the data center and operations managers. I got the call and immediately called my staff and instructed them to log in remotely and shut systems down while I drove in. On a normal day I hated to stay on that data center floor for more than 30 minutes because it was so cold. It took me about 20 minutes to get there, and boy, our 25-foot-high, 150,000-square-foot data center had reached over 90 degrees. Some equipment initiated thermal shutdowns on its own, but some simply overheated and crashed. That outage caused several million dollars in damage on equipment alone.

Incident #3:

This time no summer storm, just a bird. A bird got into the back room and somehow decided to end its life by plunging into the power relay. Again, normally all these systems are redundant, so it should have been fine. Unfortunately, as luck would have it, a relay had gone bad earlier that week, and the data center manager didn’t bother to rush a repair. Well, you probably know the rest – no power except emergency lights in the data center – $$$.

I don’t know what caused the power outage in Amazon’s case, but the moral of this long story is: pay special attention to your power systems. Test, retest, and triple-test them with different scenarios.
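As one small example of what such testing can catch, here is a hedged sketch of a check implied by the incident #1 post-mortem: confirm from your inventory that every rack really is fed from two distinct PDUs. The inventory format is hypothetical, and no script replaces actually pulling the breakers:

```python
# Flag racks whose "redundant" power feeds do not come from two distinct PDUs.
# The inventory would normally come from a DCIM system; this dict is made up.
def unprotected_racks(rack_feeds):
    return sorted(
        rack for rack, pdus in rack_feeds.items() if len(set(pdus)) < 2
    )

inventory = {
    "rack-01": ["pdu-a1", "pdu-b1"],   # properly dual-fed
    "rack-02": ["pdu-a1", "pdu-a1"],   # "dual" feeds that share one PDU
    "rack-03": ["pdu-a2"],             # never wired to the B side
}
print(unprotected_racks(inventory))    # ['rack-02', 'rack-03']
```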

CSA Federal Cloud Security Symposium Hosted by MITRE (McLean, VA)

May 17, 2010

 By Dov Yoran

On August 5th, 2009, the Cloud Security Alliance Federal Cloud Security Symposium was hosted by the MITRE Corporation. This full-day event provided government personnel with access to leading commercial cloud security experts. Throughout the day, perspectives on cloud computing, its benefits and its security implications were discussed with respect to the public sector.

The day began with Jim Reavis, CSA’s executive director, providing an overview to the 200-strong audience of CSA’s organization, mission and goals. He spoke on how the economics of cloud computing will create transformational change as organizations move significant budget from capex to opex. He foresees the economic pressures being so compelling that businesses will bypass IT and governance altogether if they don’t become part of the solution in cloud adoption.

The day continued with Peter Mell from NIST providing a carefully articulated definition of cloud computing. He spoke about the challenges of composing this definition – not being able to please everyone, but putting forth something that everyone as a whole can understand. He discussed the potential threat exposure of large-scale cloud environments, and continued with the idea of micro clouds, structures that might carry less threat exposure while still reaping economic benefits. As one can imagine, Peter’s guidance was to employ different levels of clouds for different security concerns.

Next, Jason Witty from Bank of America, Glenn Brunette of Sun and Ward Spangenberg of IO Active discussed the cloud threat model in a panel session addressing initial concerns from an attack perspective. Glenn believes that social engineering is still the weakest link in a security provider’s arsenal. He continued by saying that even if the provider exposes some of its technologies, it really shouldn’t matter, because defense-in-depth strategies should be employed. Jason re-affirmed the social engineering weakness, but also observed that a single userid/password compromise can now potentially give an attacker access to massive amounts of information and resources – a potentially much bigger threat.

When asked about trusting the cloud provider, all three universally agreed that insider threat exposure can be mitigated by compartmentalizing data and ensuring segregation of duties among cloud personnel. Jason commented on the importance of data classification, noting that the government is ahead of the private sector in this arena. The first step should be identifying data and then defining its appropriate risk exposure.

All three addressed the uniqueness of cloud computing, commenting that it can be leveraged by businesses as well as by the bad guys. Ward spoke about how the concentration of risk in applications and systems is greater due to their interdependency. But the traditional risks are still alive in the cloud and need to continue to be addressed.

The next panel, Encryption and Key Management in the Cloud, focused on the underlying challenge of the dispersion of data and operations. The panel debated the success of PCI compliance and how lessons learned can be applied to the cloud. Jon Callas from PGP thought it was successful simply for pushing security into the business world in a non-overbearing manner while still having “a little bit of teeth.” Pete Nicolleti from Terremark agreed on the effectiveness of the continuous compliance that PCI drives; however, he feels it didn’t go far enough and should carry more stringent consequences for those that fail.

The afternoon began with a panel on the legal ramifications of cloud computing. The discussion jumped right into the inherent conflicts of SLAs: on one hand the provider needs to achieve consistency, on the other the client needs flexibility. Dan Burton from Salesforce outlined that most clients are OK with the standard online click-through agreements. But he also recognizes the needs of large financial companies and government organizations with sensitive data; in reality, however, there is only so far the provider can go, and the ultimate decision is up to the customer as to what data they are comfortable entrusting to a provider.

Dan passionately reminded the room that market forces are so powerful they will take the lead on cloud computing; government and legal will have to follow, simply because the business transformation is moving so fast. Jeffrey Ritter from Waters Edge noted that governing law is behind the times: it was not written with a global framework of information sharing, manufacturing and cloud computing – with its rapid data exchanges across borders – in mind. Legislation has not even really begun to think about these implications in a legal framework.

The afternoon continued with the Incident Response and Forensics panel, led by the ever-energetic Pam Fusco. One of the key issues discussed was the investigation process. Wing Ko from Maricom Systems described how personnel used for investigations should have previous courtroom experience. David Ostertag from Verizon noted that the investigator doesn’t necessarily have to know about the underlying business itself, but they do need to know the specific regulations that apply to the client at hand. The focus needs to be on the data itself (whether in motion or at rest): understanding its location, protection, and so on. This led to a lively debate on responsibility for the chain of custody, for which Dave stated that the physical owner of the server is responsible, so it depends on the business model – fully managed, co-lo, etc. This concept is particularly interesting when investigations are conducted on a virtual machine. Dave explained that the registry exists on the virtual machine, so if it goes down, the information will be lost. There was widespread disagreement from the audience, as participants suggested taking snapshots of the image, since the log files are persistent for a period of time even if they don’t last forever. The discussion was further ignited by the idea of confiscating a physical server even if it affects several independent companies on that one server: if law enforcement needs to come and get the information, the client’s site will go down. All agreed that customers need to be made aware of the legal terms.
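For illustration, here is a minimal sketch of the snapshot approach the audience suggested, using the boto3 SDK against EC2 purely as an example; the instance ID is a placeholder, and a real investigation would also record hashes and chain-of-custody details:

```python
# Preserve point-in-time copies of a suspect instance's volumes before anyone
# shuts it down or deletes it. Assumes boto3 and configured AWS credentials.
import datetime
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"    # hypothetical suspect instance

volumes = ec2.describe_volumes(
    Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
)["Volumes"]

for vol in volumes:
    timestamp = datetime.datetime.utcnow().isoformat() + "Z"
    snap = ec2.create_snapshot(
        VolumeId=vol["VolumeId"],
        Description=f"forensic copy of {instance_id} at {timestamp}",
    )
    print("created", snap["SnapshotId"], "from", vol["VolumeId"])
```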

Next, Glenn Brunette took the room through a detailed presentation on virtualization hardening. He reminded us not to overlook the traditional issues, for example the physical connections (making sure network cables are connected, looking at redundancy, and getting better protection by not having all servers in the same rack, etc.). The usual basics of patching, hardening and clearly defined, rules-based access control leveraging least privilege were all presented. These are especially important in the cloud environment, where the user will not have access to the hypervisor, just to the virtual machine image.

Another suggested measure of protection was the concept of tokenizing data, i.e. passing it through a filter so that certain fields are not exposed, thereby protecting the data from the provider. Glenn also spoke about the basics: using vetted, certified hardware and software providers and emphasizing the use of open standards. He concluded with somber concern about the administrative challenge of keeping pace with the scale of virtual machine and cloud processing (detecting, imaging, shutting down, deleting, etc.).
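A minimal sketch of what such a tokenization filter might look like; the field names and in-memory vault are purely illustrative, and a production system would use a persistent, access-controlled token store:

```python
# Replace sensitive fields with opaque tokens before records leave your
# control, keeping the token-to-value map on your side rather than the cloud's.
import secrets

SENSITIVE_FIELDS = {"ssn", "card_number"}   # illustrative field names
_vault = {}   # token -> original value, kept away from the provider

def tokenize(record):
    out = dict(record)
    for field in SENSITIVE_FIELDS & record.keys():
        token = "tok_" + secrets.token_hex(8)
        _vault[token] = record[field]
        out[field] = token
    return out

def detokenize(token):
    return _vault[token]

safe = tokenize({"name": "A. Customer", "card_number": "4111111111111111"})
print(safe)   # card_number is now an opaque token; the real value stays local
```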

The day concluded with an Interoperability and Application panel led by George Reese of Stradis. The spirited debate was sparked by John Willis, who claimed interoperability doesn’t even matter right now because we’re at such an early stage of the cloud explosion – we don’t even know where it’s going to be in two years. To the contrary, Dan Burton argued that interoperability is extremely important: no one knows where the innovation is going to come from, and a company loses that benefit if it is not interoperable. He believes that customers are driving towards interoperability, not wanting just one provider, and if they become locked in, they will vote with their feet. He spoke about Facebook’s integration with Salesforce via an API to port public data, noting that no one would have imagined that a few short years ago. There was some rebuttal from the audience, however, in that Salesforce could not ultimately vouch for the authenticity of the data; that responsibility lies with the end user.

Ultimately, this last discussion epitomizes the juxtaposition of cloud computing benefits and challenges. The inherent economic efficiencies, speed to market, ease of adoption and growth implications are obvious. The security concerns also need to be addressed to help mitigate vulnerabilities and exploits with the rapid adoption of any new technology, especially one as universal as cloud computing.
