Cloud Security and Privacy book by CSA founding members

May 17, 2010

By Jim Reavis

I wanted to let everyone know about the new book release, Cloud Security and Privacy: An Enterprise Perspective on Risks and Compliance. This book was written by three experts, two of whom are CSA founding members. I had the opportunity to read the book prior to its publication and I can personally recommend it as a great resource for those seeking to learn about and securely adopt cloud computing. The book URL is below:

http://oreilly.com/catalog/9780596802769/

Seemingly basic power problems in state-of-the-art data centers

May 17, 2010

By Wing Ko

I came across the “Stress tests rain on Amazon’s cloud” article from iTnews for Australian Business about a week ago. A team of researchers in Australia spent seven months stress-testing Amazon’s EC2, Google’s AppEngine, and Microsoft’s Azure cloud computing services, and found that these cloud providers suffered from regular performance and availability issues.

The researchers released more data just yesterday – http://www.itnews.com.au/News/153819,more-data-released-on-cloud-stress-tests.aspx. It turns out Google’s AppEngine problem was “by design” – no single processing task can run for more than 30 seconds, a limit intended to prevent denial-of-service attacks against AppEngine. It would have been nice to warn customers ahead of time, but it is nevertheless a reasonable security feature.
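
To make the practical effect of such a cap concrete, here is a minimal, hypothetical sketch (my own illustration, not AppEngine’s actual API) of how a long-running job has to be split into short tasks that each finish under the limit, with the remainder handed off to a fresh task:

    import time

    DEADLINE_SECONDS = 30   # per-task limit described in the article
    SAFETY_MARGIN = 5       # stop well before the hard cutoff

    def process_in_chunks(items, start_index, handle_item, enqueue_continuation):
        """Process as many items as comfortably fit in one task, then hand the
        remainder to a follow-up task via enqueue_continuation(next_index)."""
        started = time.time()
        for i in range(start_index, len(items)):
            if time.time() - started > DEADLINE_SECONDS - SAFETY_MARGIN:
                enqueue_continuation(i)   # hypothetical task-queue hook
                return
            handle_item(items[i])

    # Usage sketch: trivial per-item work, "enqueue" by noting where to resume.
    process_in_chunks(list(range(100)), 0, print, lambda i: print("resume at", i))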

The reason for Amazon’s problem was not so reasonable – a power and backup generator failure. It’s kind of hard to believe that at a provider as sophisticated as Amazon, a simple power failure could cause outages and performance degradation. Or is it?

I was personally involved in three major data center outages due to “simple” power problems. Obviously, no names will be associated with these incidents, to protect the innocent, blah, blah, blah …

Incident #1:

We had just launched a new state-of-the-art data center, and it had been in use for less than six months. A summer power outage knocked out half of the data center for less than an hour, but it took us about two days to restore services to all the customers because some high-end equipment was fried, disks crashed, etc. – you know the deal.

Initially everyone was puzzled – why was half of the data center out of power when our brand-new facility had separate power feeds from two utility companies, battery banks, and diesel generators backed by two separate diesel refueling companies? We should have been able to stay up as long as we needed, even without any outside power source. Well, the post-mortem revealed that the electricians had never connected one set of PDUs to the power systems, which is why every other rack was out of power. We were in such a hurry to light up that center that we didn’t test everything. Since all systems were fed dual power through multiple levels, we couldn’t tell that half the systems weren’t fully powered. When we tested the power, we happened to test the half that worked.

Incident #2:

Another summer storm came through around 11 PM and knocked out power to a slightly older data center. Somehow it blew the main circuit to the water pumps. The good news was that the backup power worked and all the systems were up and running. The bad news was that the A/C systems depended on the chilled water – no chilled water, no A/C. Well, we had mainframes, mainframe-class UNIX servers, enterprise-class Windows servers, SANs, DAS arrays, and many more power-hungry heat monsters in that data center. Only a few night-shift people were there, and they didn’t know much, but they did follow the escalation process and called the data center and operations managers. I got the call and immediately instructed my staff to log in remotely and shut down the systems while I drove in. On a normal day, I hated to stay on that data center floor for more than 30 minutes because it was so cold. It took me about 20 minutes to get there, and boy, our 25-foot-high, 150,000-square-foot data center had reached over 90 degrees. Some equipment initiated thermal shutdowns on its own, but some simply overheated and crashed. That outage caused several million dollars in equipment damage alone.

Incident #3:

This time there were no summer storms, just a bird. A bird got into the back room and somehow decided to end its life by plunging into a power relay. Again, normally all these systems are redundant, so it should have been fine. Unfortunately, as luck would have it, a relay had gone bad earlier that week, and the data center manager hadn’t bothered to rush the repair. Well, you probably know the rest – no power except emergency lights in the data center – $$$.

I don’t know what caused the power outage in Amazon’s case, but the moral of this long story is: pay special attention to your power systems. Test, retest, and triple-test them under different scenarios.

CSA Federal Cloud Security Symposium Hosted by MITRE (McLean, VA)

May 17, 2010

By Dov Yoran

On August 5th, 2009, the Cloud Security Alliance Federal Cloud Security Symposium was hosted by the MITRE Corporation. This full-day event provided government personnel with access to leading commercial cloud security experts. Throughout the day, perspectives on cloud computing, its benefits, and its security implications were discussed with respect to the public sector.

The day began with Jim Reavis, CSA’s executive director, providing an overview to the 200-strong audience of CSA’s organization, mission, and goals. He spoke about how the economics of cloud computing will create transformational change as organizations move significant budget from capex to opex. He foresees that the economic pressures are so compelling that businesses will bypass IT and governance altogether if those groups don’t become part of the solution in cloud adoption.

The day continued with Peter Mell from NIST providing a carefully articulated definition of cloud computing. He spoke about the challenges of composing this definition – not being able to please everyone, but putting forth something that everyone as a whole can understand. He discussed the potential threat exposure of large-scale cloud environments, and continued with the idea of micro clouds – structures that might carry less threat but could still reap economic benefits. As one can imagine, Peter’s guidance was to employ different levels of clouds for different security concerns.

Next, Jason Witty from Bank of America, Glenn Brunette of Sun, and Ward Spangenberg of IOActive discussed the cloud threat model in a panel session addressing initial concerns from an attack perspective. Glenn believes that social engineering is still the weakest link in a security provider’s arsenal. He continued by saying that even if the provider exposes some of its technologies, it really shouldn’t matter, because defense-in-depth strategies should be employed. Jason reaffirmed the social engineering weakness, but also pointed out that a single userid/password compromise can now potentially give an attacker access to massive amounts of information and resources – a much bigger threat if compromised.

When asked about trusting the cloud provider, all three agreed that insider threat exposure can be mitigated by compartmentalizing data and ensuring segregation of duties among cloud personnel. Jason commented on the importance of data classification, noting that the government is ahead of the private sector in this arena. The first step should be identifying data and then defining its appropriate risk exposure.

All three addressed the uniqueness of cloud computing – commenting that it can be leveraged by businesses and bad guys alike. Ward spoke about how the concentration of risk in applications and systems is greater due to their interdependency. But the traditional risks are still alive in the cloud and need to continue to be addressed.

The next panel, Encryption and Key Management in the Cloud, focused on the underlying challenge of the dispersion of data and operations. The panel debated the success of PCI compliance and how lessons learned can be applied to the cloud. Jon Callas from PGP thought it was successful simply for pushing security into the business world in a non-overbearing manner while still having “a little bit of teeth.” Pete Nicolleti from Terremark agreed that PCI is effective at driving the idea of continuous compliance; however, he feels it didn’t go far enough and believes it should impose more stringent consequences on those that fail.

The afternoon began with a panel on the legal ramifications of cloud computing. The discussion jumped right into the inherent conflict of SLAs – on one hand, the provider needs to achieve consistency; on the other, the client needs flexibility. Dan Burton from Salesforce noted that most clients are fine with the standard online click-through agreements. He also recognizes the needs of large financial companies and government organizations with sensitive data; in reality, however, there is only so far the provider can go, and the ultimate decision on what data they are comfortable entrusting to a provider rests with the customer.

Dan passionately reminded the room that market forces are so powerful that they will take the lead on cloud computing. Government and the legal system will have to follow, simply because the business transformation is moving so fast. Jeffrey Ritter from Waters Edge noted that governing law is behind the times: it was not written with a global framework of information sharing, manufacturing, and cloud computing – with its rapid data exchanges across borders – in mind. Legislation has not even really begun to think about these implications in a legal framework.

The afternoon continued with the Incident Response and Forensics panel, led by the ever-energetic Pam Fusco. One of the key issues discussed was the investigation process. Wing Ko from Maricom Systems described how personnel used for investigations should have previous courtroom experience. David Ostertag from Verizon noted that the investigator doesn’t necessarily have to know the underlying business itself, but does need to know the specific regulations that apply to the client at hand. The focus needs to be on the data itself (whether in motion or at rest) – understanding its location, protection, and so on. This led to a lively debate on responsibility for the chain of custody, for which Dave stated that the physical owner of the server is responsible, so it depends on the business model – fully managed, co-lo, etc. This concept is particularly interesting when investigations are conducted on a virtual machine. Dave explained that the registry exists on the virtual machine, so if it goes down, the information will be lost. There was widespread disagreement from the audience, with participants suggesting taking snapshots of the image, since the log files persist for a period of time even if they don’t last forever. The discussion was further ignited by the idea of confiscating a physical server even if doing so affects several independent companies hosted on that one server. If law enforcement needs to come and seize the information, the client’s site will go down. All agreed that customers need to be made aware of the legal terms.

Next, Glenn Brunette took the room through a detailed presentation on virtualization hardening. He reminded us not to overlook the traditional issues – for example, the physical connections (making sure network cables are connected, and gaining redundancy and better protection by not placing all servers in the same rack). The usual basics of patching, hardening, and clearly defined, role-based access control leveraging least privilege were all presented. These are especially important in a cloud environment, where the user will not have access to the hypervisor, just to the virtual machine image.

Another suggested measure of protection was the concept of tokenizing data – passing it through a filter that does not expose certain fields, thereby protecting the data from the provider. Glenn also spoke about the basics: using vetted, certified hardware and software providers and emphasizing the use of open standards. He concluded with somber concern about the administrative challenges of keeping pace with the scale of virtual machine and cloud processing (detecting, imaging, shutting down, deleting, etc.).
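
To illustrate the tokenization idea (a sketch of my own, not anything Glenn presented), the filter can be as simple as swapping sensitive field values for random tokens before a record leaves your environment, keeping the token-to-value map on premises; the field names below are purely illustrative:

    import secrets

    # Illustrative set of fields we never want the provider to see.
    SENSITIVE_FIELDS = {"ssn", "account_number"}

    def tokenize(record, vault):
        """Replace sensitive fields with opaque tokens; the token-to-value
        mapping stays in a local vault and never reaches the provider."""
        out = {}
        for field, value in record.items():
            if field in SENSITIVE_FIELDS:
                token = secrets.token_hex(16)   # random, meaningless to the provider
                vault[token] = value            # mapping kept on-premises
                out[field] = token
            else:
                out[field] = value
        return out

    def detokenize(record, vault):
        """Restore original values when the record comes back from the cloud."""
        return {f: vault.get(v, v) for f, v in record.items()}

    # Usage: only the tokenized copy is ever sent to the provider.
    vault = {}
    outbound = tokenize({"name": "Alice", "ssn": "123-45-6789"}, vault)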

The day concluded with an Interoperability and Application panel led by George Reese of Stradis. The spirited debate was sparked by John Willis, who claimed interoperability doesn’t even matter right now because we’re at such an early stage of the cloud explosion – we don’t even know where it’s going to be in two years. To the contrary, Dan Burton argued that interoperability is extremely important. No one knows where the innovation is going to come from, and a company loses that benefit if it is not interoperable. He believes that customers are driving toward interoperability, not wanting to be tied to just one provider, and if they become locked in, they will vote with their feet. He spoke about Facebook’s integration with Salesforce via an API to port public data, noting that no one would have imagined that a few short years ago. However, there was some pushback from the audience in that Salesforce could not ultimately vouch for the authenticity of the data; that responsibility lies with the end user.

Ultimately, this last discussion epitomizes the juxtaposition of cloud computing’s benefits and challenges. The inherent economic efficiencies, speed to market, ease of adoption, and growth implications are obvious. The security concerns also need to be addressed to help mitigate the vulnerabilities and exploits that accompany the rapid adoption of any new technology, especially one as universal as cloud computing.

Will Silicon Valley Run Out of Data Center Space?

May 17, 2010

By Wing Ko

This Slashdot posting caught my eye last night – http://hardware.slashdot.org/story/09/08/12/2227215/Will-Silicon-Valley-Run-Out-of-Data-Center-Space. Judging from the thread, it apparently caught the eyes of quite a few other people too.

With all the exciting news and press releases during the dotcom era, most non-IT people thought that all the data centers were in California (and, for the few better-informed ones, that the rest were in Virginia). With hot companies like Google, eBay, MySpace, and Facebook, I think that now even many IT people believe all the data centers, computing power, and people power are in California. No doubt, many data centers and talented people are in California, particularly Silicon Valley and its surrounding areas, and the world’s computing needs have only just begun to grow. But to say that it may run out of data center space? That’s pretty far-fetched. Or is it?

Last summer when I was in the Silicon Valley area, there were plenty of vacant offices and buildings, courtesy of the dotbomb. I checked with folks recently, and it’s still the same way, if not worse, thanks to the (Great) Recession. So there are plenty of offices, spaces, and even land for data centers, right? Well, maybe …

Data centers are a little like farms. To grow plants and crops, you need clean/cheap/free water, ample/free sunlight, strong/diverse root systems, rich soil, cheap/free land, and the knowledge to grow and care for the plants. Good, money-making data centers need cheap/effective cooling (water or air), clean/reliable/cheap electric power, big/diverse/cheap network pipes, land free of natural disasters, cheap land, and skilled/cheap talent.

Silicon Valley is probably not the best place to make money building data centers, and many figured this out a decade ago. The truth is that many mega data centers are located away from the West and East coasts, and more build-outs continue to happen away from the coasts, so I don’t think Silicon Valley will run out of data center space any time soon. Besides, with cloud computing, you really shouldn’t be too concerned about the particular physical location of your provider’s data center, as long as you and your provider have good business continuity plans in place.

One of the neat tricks (advantages) of cloud computing is the pooling and dynamic reassignment of resources as needed. Thus, even if your provider does run out of space in its Silicon Valley data center, it should be able to transparently move your sites and services somewhere else – nice, huh?

Is your Cloud Provider making money?

May 17, 2010

By Jim Reavis

At a recent Cloud Security Alliance event, George Reese moderated a panel about Public/Private cloud interoperability and application portability. It was a great discussion, and I hope to be able to publish the proceedings soon.

One of the common points that comes up when discussing this topic is cloud provider viability, which is one of the many reasons we care about portability in the first place. Obviously, you want your application (and data) to be portable if you have concerns about where it is hosted. A question I asked, and one that has been bothering me, is: how do you know whether your cloud provider is making money? Financial stability is a good indicator of viability.

If the cloud provider in question is a publicly traded, “pure play” cloud company, its financial performance should be a matter of public record. A privately held company may be more difficult to pin down, but will often provide this information to the right customer. But what about a company that has a significant portfolio of products and services, of which only a few may be cloud-based? Is it easy to decipher the financial reality of a cloud product line? In these heady “cloud rush” days, it is to be expected that many companies will chase market share, and they may do so by offering loss-leading products that are not intended to make money. Is it possible that the least expensive IaaS option you seek is a Trojan horse to sell you additional services, and if so, do you want them?

Personally, I am very interested in understanding the true profit margin (or lack thereof) of the emerging cloud services. Profit is an indicator of corporate strategy, and the cloud provider’s corporate strategy is of the utmost importance to the cloud customer – sometimes that strategy backfires. If you were to pick your 5 favorite cloud services, how much do you know about the profitability of those services?

-Jim (from riskbloggers.com)

Welcome to the CSA Blog

May 17, 2010

By Jim Reavis

Welcome to the Cloud Security Alliance blog. We have initiated this service to allow for more rapid communication between our expert volunteers and the larger community interested in cloud security. We plan to use this venue to comment on the important issues of the day related to our mission, as well as to provide some insight into our research in progress, including version 2 of our guidance, which is scheduled for completion in October 2009.

-Jim
