Software evaluation 2.0

May 27, 2010

I spend a lot of time evaluating software: for product reviews, to see which versions are vulnerable to various exploits, and sometimes just to see if I should be using it. Most often this looks something like: find the software, download it, find the install and configuration documents, walk through them, hit an error or two (documentation always seems to be out of date), fiddle with dependencies (database settings, etc.), and finally get it mostly working (I think) before testing it out. In the back of my mind I’m always wondering if I’ve done it properly or missed something, especially when it comes to performance (is the software slow, or did I mess up the database settings and not give it enough buffers?).

But it seems that some vendors are finally taking note of this and making their software available as a VM, fully configured and working: no fuss, no mess, just download it, run it, and play with the software. Personally I love this, especially for large and complex systems that require external components such as a database server, message queues and so on. No more worrying about configuring all the add-on bits or making sure you have compatible versions.

This dovetails nicely with the increasing reliance on cloud computing. Instead of buying a software package and having to maintain it, you can buy a VM image with the software already installed, essentially getting SaaS levels of configuration and support while keeping IaaS levels of control (you can run it in house, behind a VPN, and configure it specially for your needs if you have to). The expertise needed to properly configure the database (which varies hugely between databases and depends on what the product needs: low latency? memory? bulk transfers? special character encoding?) is provided by the vendor, who at least in theory knows best. I also think vendors will start to appreciate this; the tech support and time needed to guide customers through installation, configuration and integration with other services and components can be replaced by “Ok sir, I need you to upload the image to your VMware server (or EC2 or whatever) and turn it on… Ok, now log in as admin and change the password. Ok, we’re done.”
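For what it’s worth, the “upload the image and turn it on” step really is about this small. Here’s a minimal sketch using boto3; the AMI ID, instance type, key pair and security group are all hypothetical placeholders for whatever the vendor actually ships:

```python
# Minimal sketch: launch a vendor-supplied appliance image on EC2.
# The AMI ID, key pair and security group below are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # the appliance image the vendor gave you
    InstanceType="m5.large",           # size it per the vendor's sizing guide
    KeyName="my-keypair",              # existing key pair for the admin login
    SecurityGroupIds=["sg-0123456789abcdef0"],
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Appliance launched: {instance_id} -- now log in and change the admin password.")
```

After that, the vendor’s “log in as admin and change the password” step is pretty much all that’s left.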

Security is also a lot easier. The vendor can test patches knowing with certainty that they will work on customers’ systems, since the customer is running the exact same system as the vendor, and customers can apply patches with a higher degree of confidence, knowing they were tested in the same environment.

Reddit ships its code as a fully functional VM – http://blog.reddit.com/2010/05/admins-never-do-what-you-want-now-it-is.html

Update: CERT releases fuzzing framework as a VMware image – http://threatpost.com/en_us/blogs/cert-releases-basic-fuzzing-framework-052710
http://www.cert.org/download/bff/

Counterfeit gear in the cloud

May 26, 2010

One of the best and worst things about outsourced cloud computing (as opposed to in-house efforts) is the ability to spend more time on what is important to you and leave things like network infrastructure, hardware support and maintenance to the provider. The thing I remember most about system and network administration is all the little glitches, some of which weren’t so little and had to be fixed right away (usually at 3 in the morning). One thing I love about outsourcing this stuff is that I no longer have to worry about network infrastructure.

Assuming, of course, that the cloud provider does a good job. The good news here is that network availability and performance are really easy to measure and really hard for a cloud provider to hide. Latency is latency, and you generally can’t fake a low-latency network (although if you can, please let me know! We’ll make millions). Ditto for bandwidth: either the data transfers in 3 minutes or it transfers in 4, and a provider can’t really fake that either. Reliability is a little tougher, since you have to measure it continuously to get good numbers (are there short but total outages, longer “brownouts” with reduced network capacity, or is everything actually working fine?). But none of this takes into account, or allows us to predict, the kind of catastrophic failures that result in significant downtime.
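Measuring this doesn’t take much. Here’s a rough sketch of a probe that samples latency, throughput and availability over time; the host and test URL are placeholders, and you’d point them at your own provider:

```python
# Rough sketch: continuously sample latency, throughput and availability
# for a provider endpoint. Host and test URL are hypothetical placeholders.
import socket
import time
import urllib.request

HOST, PORT = "provider.example.com", 443
TEST_URL = "https://provider.example.com/100MB.bin"   # hypothetical test object

def tcp_latency_ms():
    # Time a TCP connection setup as a crude latency measurement.
    start = time.monotonic()
    with socket.create_connection((HOST, PORT), timeout=5):
        pass
    return (time.monotonic() - start) * 1000

def throughput_mbps():
    # Download a fixed test object and compute effective bandwidth.
    start = time.monotonic()
    data = urllib.request.urlopen(TEST_URL, timeout=60).read()
    elapsed = time.monotonic() - start
    return (len(data) * 8 / 1_000_000) / elapsed

while True:
    try:
        print(f"latency={tcp_latency_ms():.1f} ms  throughput={throughput_mbps():.1f} Mbit/s")
    except OSError as exc:
        # Failures here are the outages and "brownouts" you only see by sampling continuously.
        print(f"probe failed: {exc}")
    time.sleep(60)
```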

One way providers deal with this potential problem is simple: they buy good name-brand gear with support contracts that guarantee replacement times, how long it will take an engineer to show up, and so on. But this stuff is expensive. So what happens if a cloud provider finds, or is offered, name-brand equipment at reduced or even really cheap prices? This does happen legitimately (a company goes bust and its gear is sometimes sold off cheap), but it is also how counterfeit equipment enters the market. Counterfeit gear isn’t under a support contract and isn’t built to the same specs as the real thing, meaning it is more likely to fail or suffer problems, causing you grief.

How do you, the cloud provider customer, know that your provider isn’t accidentally (or otherwise) buying counterfeit network gear?

Well, short of a physical inspection and phoning the serial numbers in to the manufacturer, you won’t. Unfortunately I can’t think of any decent solution to this, so if you know of one or have any ideas, feel free to leave a comment or email me at [email protected].

Feds shred counterfeit Cisco trade – With a new conviction today, the federal action known as Operation Network Raider has resulted in 30 felony convictions and more than 700 seizures of counterfeit Cisco network hardware with an estimated value of more than $143 million.

– Layer 8, Network World

Yikes.

Amazon AWS – 11 9’s of reliability?

May 24, 2010

Amazon recently added a new redundancy option to their S3 storage service. Amazon now claims that data stored in the standard “durable storage” class is 99.999999999% “durable” (not to be confused with availability – more on this later).

“If you store 10,000 objects with us, on average we may lose one of them every 10 million years or so. This storage is designed in such a way that we can sustain the concurrent loss of data in two separate storage facilities.”

http://aws.typepad.com/aws/2010/05/new-amazon-s3-reduced-redundancy-storage-rrs.html – Jeff;

So how exactly does Amazon arrive at this claim? Reading further, they also offer a “REDUCED_REDUNDANCY” storage class (which is 33% cheaper than normal) that guarantees 99.99% durability and is “designed to sustain the loss of data in a single facility.” From this we can extrapolate that Amazon is simply storing the data in multiple physical data centers. If the chance of any one of them losing your data (burning down, cable cut, etc.) is something like 0.01%, then storing it in two data centers means a 0.000001% chance that both fail at the same time (or, on the flip side, a 99.999999% durability guarantee), three data centers give a 0.0000000001% chance of loss (a 99.9999999999% durability guarantee), and so on. I’m not sure of the exact numbers Amazon is using, but you get the general idea: a small chance of failure, multiplied across multiple locations, makes for a very, very small chance of failure at all the locations at the same time.
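As a sanity check on that arithmetic, here is a minimal sketch of the same multiplication, assuming a hypothetical 0.01% chance of loss at any single facility and, crucially, that facilities fail independently:

```python
# Back-of-the-envelope version of the independence argument above.
# Assumes (unrealistically, as argued below) that facility losses are independent.
def durability(per_facility_loss: float, facilities: int) -> float:
    """Probability the object survives if every facility must fail to lose it."""
    return 1 - per_facility_loss ** facilities

p = 0.0001  # hypothetical 0.01% chance of losing the object at any single facility
for n in (1, 2, 3):
    print(f"{n} facilities: {durability(p, n):.12f} durability")
# 1 facility:   0.9999            (99.99%)
# 2 facilities: 0.99999999        (eight nines)
# 3 facilities: 0.999999999999    (twelve nines)
```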

Except there is a huge gaping hole in this logic. To expose it, let’s revisit history, specifically the Hubble Space Telescope. Hubble is pointed in specific directions using six on-board gyroscopes. By adding momentum to a single gyroscope, or applying the brakes to it, you can cause Hubble to spin clockwise or counter-clockwise about a single axis. With three of these gyroscopes you can move Hubble in three axes to point anywhere. Of course, having more gyroscopes makes maneuvering it easier, and having spares ensures that a failure or three won’t leave you completely unable to point Hubble at interesting things.

But what happens when there is a manufacturing defect common to all of the gyroscopes, specifically the use of regular air instead of inert nitrogen during manufacturing? Redundancy doesn’t do much good, since the gyroscopes start failing in the same manner at around the same time (which would have left Hubble nearly useless had it not been for the servicing missions).

The lesson here is that redundant and backup systems that are identical to the primary systems may not increase the availability of the overall system significantly. And I’m willing to bet that Amazon’s S3 data storage facilities are near carbon copies of each other with respect to the hardware and software they use (to say nothing of configuration, access controls, authentication and so on). A single flaw in the software, for example a bug that loses or mangles data, may hit multiple sites at the same time as the bad data is propagated. Alternatively, a security flaw in the administrative end of things could let an attacker gain access to, and start deleting data from, the entire S3 “cloud”.

You can’t just take the chance of failure and square it for two sites if the two sites are identical, and the same goes for 3, 4 or 27 sites. Oh, and read the fine print: “durability” means the data is stored somewhere, but Amazon makes no claims about availability, i.e. whether or not you can actually get at it.
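To put a rough number on why, here’s a small sketch comparing that independence assumption with what happens once you add even a tiny common-mode failure, say a shared software bug or a compromised admin interface (all probabilities here are illustrative, not Amazon’s):

```python
# Independent losses vs. a shared (common-mode) failure such as a software bug
# that corrupts data at every site. All probabilities are illustrative.
def loss_probability(per_site: float, sites: int, common_mode: float) -> float:
    independent = per_site ** sites            # every site fails on its own
    return 1 - (1 - independent) * (1 - common_mode)

p_site, p_bug = 0.0001, 0.000001   # 0.01% per site, 0.0001% shared-flaw chance
for n in (2, 3, 5):
    print(f"{n} identical sites: loss probability ~ {loss_probability(p_site, n, p_bug):.10f}")
# With a one-in-a-million common-mode flaw, adding sites stops helping:
# the shared flaw dominates no matter how many copies exist.
```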
Something to keep in mind as you move your data into the cloud.

Season’s Greetings from the CSA!

May 17, 2010

By Zenobia Godschalk

2009 has been a busy year for the CSA, and 2010 promises to be even more fruitful. The alliance is now 23 corporate members strong, and is affiliated with numerous leading industry groups (such as ISACA, OWASP and the Jericho Forum) to help advance the goal of cloud security. Below is a recap of recent news and events, as well as upcoming events. We have had a tremendous response to the work to date, and this month we will release version two of our guidance. Thanks to all our members for their hard work in our inaugural year!

RECENT NEWS

 

CSA and DMTF

The CSA and DMTF have partnered to help coordinate best practices for Cloud security. More details here: http://www.cloudsecurityalliance.org/pr20091201.html

Cloud Security Survey

The Cloud Security survey is still open for responses! Here’s your chance to influence cloud security research. The survey takes just a few minutes, and respondents will receive a free, advance copy of the results.

http://www.surveymonkey.com/s.aspx?sm=VqH8jHHwc9GhANj3EzDl1g_3d_3d

Computerworld: Clear Metrics for Cloud Security? Yes, Seriously

http://www.computerworld.com/s/article/9141010/Clear_Metrics_for_Cloud_Security_Yes_Seriously

The Cloud Computing Show (featuring an interview with CSA’s Chris Hoff)

http://cloudcomputingshow.blogspot.com/2009/11/cloud-computing-show-20.html

 

For more cloud security news, check out the press page on the CSA site.

 

RECENT EVENTS

 

State of California

At the end of October, the CSA was invited to present to information security professionals of the State of California. During this two-day, state-sponsored conference we provided education and transparency into the CSA’s research around cloud security and how the federal government is using cloud deployments.

CSI DC

Also at the end of October, the CSA participated in a cloud security workshop during the annual CSI Conference in DC.

India Business Technology Summit

In November, Nils Puhlmann, co-founder of the CSA, presented to an audience of 1,400 at the annual India Business Technology Summit. Not only did he address the audience in a keynote, but he also delivered the CSA message and lessons learned in a workshop at the India Institute of Science and Technology in Bangalore. Puhlmann also participated in a follow-up panel in Mumbai at the India Business Technology Executive Summit.

Conference of ISMS Forum

In December CSA was represented at the 6th International Conference of ISMS Forum in Seville. Nils Puhlmann delivered a keynote and moderated a panel looking into the future of information security and how new technologies like cloud computing might affect our industry.

The CSA and ISMS also signed an MOU to cooperate more closely. The event and the CSA’s participation were covered in the prominent Spanish newspaper Cinco Días, the business supplement of El País.

ISMS and CSA also started activities to launch a Spanish Chapter of CSA to better address the unique and local issues around secure cloud adoption in Spain.

UPCOMING NEWS AND EVENTS

Guidance V2

The second version of the Security Guidance for Critical Areas of Focus in Cloud Computing is coming soon! Watch for it to be released this month.

 

SecureCloud 2010

Registration is now open for SecureCloud 2010, a joint conference with ENISA and ISACA being held in Barcelona, March 16th and 17th.

http://www.cloudsecurityalliance.org/sc2010.html

Cloud Security Alliance Summit

In addition, the Cloud Security Alliance Summit will be held in conjunction with the RSA Conference in San Francisco on March 1. Further details are below; check back on the CSA website for more updates coming soon!

Cloud Security Alliance Summit

March 1, 2010, San Francisco, Moscone Center

The next generation of computing is being delivered as a utility.

Cloud Computing is a fundamental shift in information technology utilization, creating a host of security, trust and compliance issues.

The Cloud Security Alliance is the world’s leading organization focused on the cloud, and has assembled top experts and industry stakeholders to provide authoritative information about the state of cloud security at the Cloud Security Alliance Summit. This half-day event will provide broad coverage of the cloud security domains and available best practices for governance, legal, compliance and technical issues. From encryption and virtualization to vendor management and electronic discovery, the speakers will provide guidance on key business and operational issues. We will also present the latest findings from the CSA working groups for Cloud Threats, Metrics and Controls Mappings.
