The Consumerization of IT, BYOC, and the (New) Role of IT

Nicholas G Carr

9 September 2013

Author: Brandon Cook

It has been a decade since Nicholas Carr published his controversial essay “IT Doesn’t Matter” in the Harvard Business Review. Back then, he claimed that companies weren’t really getting a competitive advantage from the technology advances – the bits and bytes – of hardware and software. Carr argued that IT infrastructure was becoming commoditized, and that it was the business strategies using that technology, rather than the technology itself, that would give companies their competitive advantages.

Ten years later, Carr’s ideas really have become a reality. Now we are in the era of the consumerization of IT and Bring Your Own Cloud (BYOC), where individual workers and business departments rent hardware and software—the virtual machines, the applications, the storage capacity, the big data processing capacity, and so on. Often, they make these choices without IT’s knowledge or approval.

This shift from “own the infrastructure” to “rent the applications” leads to the next question: What is the role of the IT department now? If we no longer need this group to select, install and maintain the latest model server, does it still play a strategic role in the enterprise?

I’d like to share a real-world story that demonstrates that IT departments do have an important and significant role in the BYOC era. Not only does IT have the critical responsibility of protecting corporate data as it moves to and from the cloud, but this group also makes certain that the right cloud services are being used, in the right way (meeting company policies) and in a productive and cost-efficient manner.

Leveraging Skyhigh, one Fortune 100 company’s IT department gained visibility into the use of public cloud storage services by the company. On average, a company uses 19 different cloud storage services and this particular company was no different. Of course some services are more popular than others with workers, and the company ranked its top 5 cloud storage services by number of users:

  1. Dropbox
  2. Google Drive
  3. SkyDrive
  4. SugarSync
  5. Box

With 19 different services in use, how can employees effectively collaborate and share their work? The IT team polled employees and found they were struggling with managing multiple file sharing services and would prefer having one corporate standard.

Skyhigh analysis of their cloud storage use gave them the necessary insight to understand which services were actively being used, by how many users, how frequently, for how much data, and which ones people had signed up for but used less often. The usage ranking was:

  1. Box
  2. SugarSync
  3. Dropbox
  4. Google Drive
  5. SkyDrive

The data revealed that Box was used most often. And Skyhigh’s CloudRegistry showed that Box was also the lowest-risk service. Armed with this data, IT negotiated a corporate-wide deal with Box and set this as the company standard for public cloud storage services. An IT manager at the company told me, “By leveraging Skyhigh data, we are able to look through the landscape of file sharing services and understand employee usage. This presents a clearer and more accurate picture, giving us a better context for decision-making that supports our employees.”
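
To make the distinction between sign-ups and active usage concrete, here is a minimal sketch of the kind of aggregation an IT team could run over its own access logs. The record format, field names and figures are hypothetical, not Skyhigh’s actual data model; the point is simply that ranking by data actually moved can produce a different ordering than ranking by raw user count.

```python
from collections import defaultdict

# Hypothetical access-log records: (user, service, bytes transferred)
events = [
    ("alice", "Box", 450_000), ("bob", "Box", 120_000),
    ("alice", "Dropbox", 2_000), ("carol", "SugarSync", 75_000),
    ("dave", "Dropbox", 0),  # signed up, but never moved any data
]

usage = defaultdict(lambda: {"users": set(), "bytes": 0})
for user, service, nbytes in events:
    usage[service]["users"].add(user)
    usage[service]["bytes"] += nbytes

# Rank by data actually moved rather than by raw user count
for service, stats in sorted(usage.items(), key=lambda kv: kv[1]["bytes"], reverse=True):
    print(f"{service}: {len(stats['users'])} users, {stats['bytes']} bytes")
```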

If you’d like to take a look at the file sharing services in use at your organization to understand the risk profile for each service and the usage beyond “user count”, schedule a free file sharing cloud assessment with Skyhigh (note: assessment results are returned in 12 to 24 hours).

And if you attend BoxWorks next week, make sure to ask Nicholas Carr, who’s speaking at the event, for his take on the role of IT in the BYOC era.

 

Beyond Encryption: The 5 Pillars of Cloud Data Security

Author: Kamal Shah, Skyhigh Networks
Given the recent influx of cyber-security attacks and the hubbub about the National Security Agency’s PRISM program, there is a lot of talk about the importance of encryption to protect corporate data in the cloud. (PRISM is a clandestine data mining operation authorized by the U.S. government in which data stored or passing over the Internet can be collected without the owner’s knowledge or consent.)

While it’s true that encryption helps to keep data private, encryption is just 1 of 5 capabilities needed to completely secure corporate data in the cloud. Allow me to use an analogy in the physical world to explain what I mean.

Banks are an ideal example of the use of layers of security to protect important assets. A bank branch has a vault in which it stores cash and other valuables. Having a vault is essential, but on its own it’s not enough to fully protect the riches within.

The bank also has policies to guide who can access the vault; what identification methods are required to verify that an employee or customer has the right to access the vault; the hours when the vault can be legitimately accessed; and so on.

The bank also needs surveillance cameras so that in event of a breach, the authorities can play back the recording to understand exactly what happened, and when. Stationed near the vault, the bank has a security guard for additional protection against threats and to deter thieves. And finally, the bank employs armored vans to move cash around from the bank to stores, to off-premise ATMs, and to other banks.

Similarly, when we talk about protecting corporate data in the cloud, you need more than just a point encryption solution; you need a comprehensive approach to cloud data security.

Let’s start with encryption—a technology that has been around for decades but is now more important than ever as threats from all angles are increasing. The encryption solution you use on your data needs to be standards-based, and it must support both structured and unstructured data. For structured data, the encryption technology must not break any application functionality (such as searching or sorting). This latter requirement is quite important; if you can’t search on data in the comments field in Salesforce.com because it is obscured through encryption, you’ve defeated the value of using the application.
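
One widely used way to square encryption with application search is to pair strong, randomized encryption of the stored value with a deterministic “blind index” that supports exact-match lookups. The sketch below, which assumes the third-party Python cryptography package and invented field names, illustrates that general pattern only; it is not the scheme any particular vendor uses, and richer operations such as sorting or substring search require other techniques (order-preserving encryption, tokenization, and so on).

```python
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

enc_key = AESGCM.generate_key(bit_length=256)   # protects the stored value
idx_key = os.urandom(32)                        # derives the searchable blind index

def protect(value: str) -> dict:
    """Encrypt a field and compute a deterministic blind index for equality search."""
    nonce = os.urandom(12)
    ciphertext = AESGCM(enc_key).encrypt(nonce, value.encode(), None)
    blind_index = hmac.new(idx_key, value.lower().encode(), hashlib.sha256).hexdigest()
    return {"nonce": nonce, "ct": ciphertext, "idx": blind_index}

def search_token(query: str) -> str:
    """The application can look up records by token without ever seeing plaintext."""
    return hmac.new(idx_key, query.lower().encode(), hashlib.sha256).hexdigest()

records = [protect("Acme Corp renewal notes"), protect("Q3 pipeline review")]
hits = [r for r in records if r["idx"] == search_token("acme corp renewal notes")]
print(len(hits))  # 1 exact match; sorting or substring search needs other schemes
```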

So encryption is 1 of 5 critical security capabilities. What are the other 4?

You need contextual access control so you can ensure secure access to the data based on who the users are, what devices they are using, and what geographic locations they are in.
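
As a rough illustration of what “contextual” means here, the following sketch gates a request on the user’s role, device and location as well as the action requested. The allow-lists and rules are invented for the example; a real policy engine would pull this context from directory, device management and geolocation services.

```python
ALLOWED_COUNTRIES = {"US", "CA", "GB"}            # assumed corporate policy
MANAGED_DEVICES = {"laptop-4411", "laptop-7302"}  # devices enrolled with IT

def allow_access(role: str, device_id: str, country: str, action: str) -> bool:
    """Grant or deny a request based on user, device and location context."""
    if country not in ALLOWED_COUNTRIES:
        return False                      # block unexpected geographies outright
    if device_id not in MANAGED_DEVICES:
        return action == "read"           # unmanaged devices get read-only access
    if action == "export" and role != "finance":
        return False                      # bulk export restricted to one role
    return True

print(allow_access("sales", "laptop-4411", "US", "download"))  # True
print(allow_access("sales", "byod-phone", "US", "download"))   # False
```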

You need application auditing so you can identify who has accessed which data and alert based on anomalous use. This is critical, as most SaaS applications don’t provide an audit trail of “read” operations to help you understand exactly what happened when an incident occurred.
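
Here is a minimal sketch of the alerting side of that idea, assuming you can obtain a read audit trail in the first place (from the provider, an API, or a gateway in front of the service). The log format and baseline threshold are hypothetical.

```python
from collections import Counter

# Hypothetical audit trail of "read" operations: (user, object_id)
reads = [("jason", f"record-{i}") for i in range(5000)] + [("maria", "record-12")]

BASELINE_READS_PER_DAY = 200   # assumed per-user norm, tuned from history

counts = Counter(user for user, _ in reads)
for user, n in counts.items():
    if n > BASELINE_READS_PER_DAY:
        print(f"ALERT: {user} read {n} objects today (baseline {BASELINE_READS_PER_DAY})")
```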

You need data loss prevention to make sure that PII and PHI data is not moving to or through the cloud in the clear in violation of PCI, HIPAA and HITECH regulations.
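
In its simplest form, cloud DLP means inspecting content before it leaves for the cloud and blocking or encrypting anything that looks like regulated data. The sketch below uses two deliberately simplified detectors for illustration; production DLP engines add validated patterns, checksums such as Luhn, proximity rules and dictionaries.

```python
import re

# Deliberately simplified detectors; real DLP engines add validated patterns,
# checksums (e.g. Luhn for card numbers), proximity rules and dictionaries.
PATTERNS = {
    "US SSN":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_before_upload(text: str) -> list:
    """Return the detector names that fired; the caller can block or encrypt."""
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

doc = "Patient 123-45-6789 paid with 4111 1111 1111 1111."
print(scan_before_upload(doc))   # ['US SSN', 'Card number']
```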

And finally, you need the ability to easily but consistently enforce these policies for cloud-to-cloud use cases.

This last need is an up-and-coming requirement that companies are just beginning to realize, but it will grow more important as companies use more cloud-based applications. Let me give you an example.

Let’s say a company uses Jive for business social and Box for cloud storage of documents posted in Jive. When Jason, an employee in my sales department, posts a blog post on a competitor with a detailed attachment, Jive automatically stores the document in Box. In this cloud-to-cloud scenario, I need to make sure that my security, compliance and governance policies are consistently enforced across both Jive and Box.
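
One way to think about “consistent enforcement” is to define the policy once and evaluate it against content wherever it lands. The sketch below is purely illustrative: the metadata fields and the two-rule policy are assumptions, and real enforcement would hook into each service’s API or an intermediary proxy.

```python
def corporate_policy(item: dict) -> list:
    """One policy definition, evaluated wherever the document lands."""
    violations = []
    if item["classification"] == "confidential" and not item["encrypted"]:
        violations.append("confidential data must be encrypted at rest")
    if item["shared_externally"] and item["classification"] != "public":
        violations.append("external sharing is limited to public material")
    return violations

# Hypothetical metadata pulled from each service's API for the same attachment
jive_copy = {"service": "Jive", "classification": "confidential",
             "encrypted": True, "shared_externally": False}
box_copy = {"service": "Box", "classification": "confidential",
            "encrypted": False, "shared_externally": False}

for item in (jive_copy, box_copy):
    print(item["service"], corporate_policy(item))
```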

Encryption as a means of data security is a good start, but not sufficient. Make sure you bolster it with the other critical security capabilities for a more complete cloud data security strategy. To learn more check out our Beyond Encryption Slideshare.

 

Windows Azure Leads Way with SOC 2 + CSA CCM Attestation

by John Howie, COO, Cloud Security Alliance

This week Microsoft announced that Windows Azure had completed an assessment against the Cloud Security Alliance Cloud Controls Matrix as part of its Service Organization Control (SOC) 2 Type II audit conducted by Deloitte. This combined approach was recommended by the American Institute of CPAs (AICPA) and published in a position paper released with the Cloud Security Alliance (CSA) earlier this year, as part of our guidance on selecting the most appropriate reporting standard.

The guidance reflects the Cloud Security Alliance’s view that for most cloud providers, a SOC 2 Type II attestation examination conducted in accordance with AICPA standard AT Section 101 (AT 101) utilizing the CSA Cloud Controls Matrix (CCM) as additional suitable criteria is likely to meet the assurance and reporting needs of the majority of users of cloud services.

We would like to congratulate Microsoft for their continued leadership in being the first cloud provider to produce a SOC 2 report with CCM included as recommended by the AICPA and the CSA.  Customers of Windows Azure will benefit from the comprehensive review of the company’s cloud controls in critical areas such as confidentiality, availability, and privacy.

We strongly encourage other providers to follow Microsoft’s lead by doing the same, as it will work to strengthen and preserve the confidentiality and privacy of data in the cloud for us all.

Visit the Windows Azure Security blog to learn more.

Just What the Doctor Ordered: A Prescription for Cloud Data Security for Healthcare Service Providers

by Kamal Shah, VP, Products and Marketing at Skyhigh Networks

Cloud services are here to stay, and practically everybody is embracing them. In fact, the cloud computing industry is growing at the torrid pace of nearly 30% per year right now, according to Pike Research.

Certainly healthcare service providers are getting on the cloud services bandwagon, either by choice or by decree. As reported in Forbes, the Health Insurance Portability and Accountability Act (HIPAA) omnibus and the American Recovery and Reinvestment Act (ARRA) requirements stipulate that everyone in the healthcare industry must migrate their patient records and other data to the cloud. This is to facilitate medical professionals’ authorized access to electronic health records (EHRs) to improve patient care and reduce costs.

At the same time, healthcare organizations have an obligation to make sure that their use of cloud services is secure and that personal health information (PHI) is fully protected. The risks are huge if they don’t get this right. Any exposure of PHI is deemed a violation of HIPAA compliance, which can lead to steep fines and other costs for the healthcare service provider, not to mention the loss of trust and confidence of its patients.

Even the best of intentions can backfire on healthcare organizations. PHI doesn’t necessarily have to be lost or stolen in order to violate HIPAA’s letter of the law. The Oregon Health & Science University was recently cited for using an unsecured cloud platform to maintain a spreadsheet containing sensitive patient data. The intent was to make it easier to share accurate information about patients among the healthcare professionals involved in their care.

Unfortunately the university didn’t have a contractual agreement to use the cloud service and the privacy and security of the patient data could not be absolutely assured. Although officials don’t believe the incident will lead to identity theft or financial harm, the university is notifying affected patients as a matter of caution.

So, what’s the prescription for hospitals and other providers to reduce their risk when using cloud services? Security experts recommend a three-step process to facilitate cloud data protection:

  • First, get an understanding of all the cloud services already in use by the organization. There’s probably a lot of unofficial “shadow use” of services that company officials aren’t aware of and that may put the organization at risk.
  • Next, leverage all the innovation in big data analytics to understand this usage and to ensure that the organization’s policies are consistently enforced.
  • And finally, for the recommended cloud services, secure the data in the cloud through contextual access controls based on user, device and location, encryption, and data loss prevention.

Read how one leading hospital put this framework to use and successfully reduced the risk of cloud services.

You can Benefit from the Cloud: Choose based on Class of Service

In my last blog, I promised a deeper dive into choosing a cloud provider based on class of service.

It is a very timely topic. In “Avoiding cloud security pitfalls,” one of many recent articles on cloud security, Telstra enterprise and infrastructure services IT director Lalitha Biddulph advises: “A lot of cloud services are proprietary and once you move your data in there, you may have given away your right to shift data by choosing to use a particular service.”

Without a doubt this is an area of risk to be weighed when deciding which key vendors to use as you consider public cloud usage across SaaS, PaaS and IaaS models. It is also an area of opportunity: organizations can draw up distinct SLAs around their rights to their data and ensure that those SLAs are properly documented, communicated and agreed to by all parties prior to moving data across.

Over the last couple of years we have seen remarkable strides forward with cloud providers becoming much more diligent in not only improving levels of security for hosted email, customer relationship management and vertically-focused applications, but also with IaaS providers becoming much more flexible in conditions around SLAs and reporting.

I continue to feel greatly encouraged by the work that the Cloud Security Alliance is doing, and it is why I invest my time in their activities. I believe that, with their wealth of resources and broad industry participation, they have the power to continue to educate the industry and move us forward with ideal frameworks based on consensus.

While I think caution should be urged and organizations should be in no doubt about the risks that their data can be exposed to in cloud models, this should also be balanced with the economic advantages. Added to that, cloud models have matured for the types of services I have mentioned above and others; that too should be taken into consideration, along with a robust set of security controls.

Additionally, for more news and discussions, head over to @SecDatacenter or Secure Data Center Trends.

Evelyn de Souza Bio
Evelyn is a senior data center and cloud security strategist for the Security Technology Group at Cisco, responsible for championing holistic and next-generation security solutions. She is a strong proponent of building automated, repeatable processes that enable organizations to sustain compliance while optimizing security posture and reducing costs. To this end, in her previous role she pioneered the development of such tools as the McAfee Compliance Mapping Matrix, which cross-maps various regulations, standards, and frameworks to solutions, and the McAfee PCI Mapping Tool. She currently co-chairs the Cloud Security Alliance Cloud Controls Matrix (CCM) working group and is focused on harmonizing efforts across industry initiatives such as the Open Data Center Alliance (ODCA). Evelyn is a dedicated security professional with more than 12 years in the IT security industry. She enjoys engaging with industry analysts, customers, and partners to discuss industry trends and how security solutions can best be implemented to meet the needs of next-generation data centers. She holds a Bachelor of Arts degree with honors in music from Monash University, Melbourne, Australia. She can also be found on Twitter at @e_desouza.

IT Opportunities Surrounding Shadow IT

By Kamal Shah

Skyhigh Networks VP of Products and Marketing

 

The magnitude of Shadow IT is significant and growing. Gartner has predicted that a full 35 percent of IT spending will take place outside of IT by 2015 – just 18 months away. By the end of the decade, that figure will hit 90 percent.

 

CIOs, CISOs and members of an organization’s security and IT teams have a difficult time getting a handle on Shadow IT and on just how many cloud services are in use by the employees in their organization. In our experience they typically estimate somewhere between 25 and 30 services in use, but in reality we usually see between 300 and 400 services – 11x more than IT was aware of.

 

When the IT and Security teams come to realize the sheer volume of cloud services in use, the massive size of Shadow IT, and the magnitude of cloud data security risk due to Shadow IT, it’s always a real eye opener. The vast number of cloud services running speaks to several exploding trends – cloud computing, bring your own device (BYOD) or bring your own cloud (BYOC), and the consumerization of IT.

 

Specifically, the rapid shift from on-premise business applications to cloud-based SaaS applications has enabled any employee with a credit card and an Internet connection to become an IT manager and deploy their own Shadow IT applications without notifying IT.

 

These three trends are not going away. In fact, they are expanding broadly, fueled by the growing consensus that use of cloud services results in higher productivity. A recent survey of IT decision makers found that 72 percent suspected that Shadow IT was beneficial and made it easier for employees to do their jobs. However, Shadow IT also creates clear cloud data security and cloud compliance risks. It is unclear how safe data is within these cloud services, and there is no guarantee what security measures the providers put in place. The breach of Evernote is a good example, and was eye-opening for the industry. These service providers are focused on the instant delivery of cloud applications, not security. If a giant company such as LinkedIn is at risk, how susceptible are the small SaaS providers employees are using without their IT department’s knowledge or safeguards?

 

The good news is that most IT teams want to constructively address the Shadow IT phenomenon and believe that there is a happy medium that balances cloud services agility and cloud security. IT wants to help their business counterparts accelerate the safe adoption of cloud services while protecting corporate data. There are a number of approaches for discovering and studying Shadow IT, such as using a cloud-based solution that analyzes firewall logs in a non-intrusive and real-time manner. The most popular approaches take it a step further and identify the risks of cloud services, as not all SaaS applications employees are using are unsafe.
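
As a rough sketch of the log-analysis approach described above, the snippet below matches outbound destinations from firewall or proxy logs against a catalog of known cloud services and tallies users per service. The log format, domains and risk ratings are invented for illustration; real discovery tools work from far richer registries and streaming log feeds.

```python
from collections import defaultdict

# Hypothetical, simplified firewall/proxy log lines: "user destination bytes"
log_lines = [
    "jdoe dropbox.com 48211",
    "jdoe api.box.com 1822",
    "asmith sugarsync.com 995312",
    "asmith dropbox.com 7731",
]

# Assumed registry of known cloud services with a coarse risk rating
CLOUD_REGISTRY = {"dropbox.com": "medium", "api.box.com": "low", "sugarsync.com": "medium"}

users_per_service = defaultdict(set)
for line in log_lines:
    user, destination, _ = line.split()
    if destination in CLOUD_REGISTRY:
        users_per_service[destination].add(user)

for destination, users in users_per_service.items():
    print(f"{destination}: {len(users)} users, risk={CLOUD_REGISTRY[destination]}")
```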

 

Take the time to learn about these approaches, and find the one that works best for your organization. Like most cloud services, these solutions can be put to use in a matter of minutes, immediately helping IT organizations shine a light on Shadow IT for safer and more productive cloud services usage.

 

 

Why the Cloud Cannot be treated as a One-size-fits-all when it comes to Security

Despite the fact that cloud providers have long since differentiated themselves with very distinct offerings based on cloud platform type, I often see the cloud written about as though it were a single, uniform service. The problem is that, while there are commonalities, this is downright misleading, especially as so much is misunderstood about what’s required to secure cloud-based services and the risks that are involved. Today there are three classes of service: Software as a Service (SaaS), where the provider hosts software-based services and the consumer accesses them via a web interface; Platform as a Service (PaaS), which developers mostly use to develop software-based offerings; and Infrastructure as a Service (IaaS), where consumers can “rent” infrastructure to host their own services.

When I speak with customers I recommend they consider cloud offerings in light of the classes of service they need, the types of data they will need to expose, their regulatory compliance needs, and the reputation and flexibility of the service providers they are looking to leverage, because even within the classes of service I mentioned above there are distinct variances.

Choosing a cloud provider based on class of service

Over the last five years in particular the industry has benefited from broad-based adoption of SaaS, particularly for customer relationship management, payroll and document collaboration, to name a few. But cloud providers in this space range from established players with robust, well-documented data handling and hygiene practices to emerging ones. The same goes for PaaS and IaaS. Over the last couple of years some IaaS providers have developed tailored offerings to suit particular verticals such as government, retail and healthcare. Today, the industry still lacks standard definitions and templates for SLAs. And with each class of service there are different security requirements too, ranging from SaaS, where the consumer has no ability to push security controls down to the provider’s environment, to IaaS, where typically the consumer is responsible for securing the virtual machines that they might “rent” from a provider. This is where leveraging the freely available resources of the Cloud Security Alliance’s Security, Trust and Assurance Registry (STAR), an initiative that encourages transparency of security practices among cloud providers, is incredibly valuable.

Data Security According to Data Type

Data, too, is not created equal. Consumers of different cloud services need to consider the sensitivity of the data they entrust to a SaaS provider, as well as any exposure that may result from a potential data breach. This concern may be a little different with IaaS, where a consumer potentially has the opportunity to add more safeguards, such as encryption, file monitoring and other security controls, at the virtual machine level, which may help mitigate some of the risks. I have seen some excellent security implementations around the vertical stack models that some IaaS providers have developed for government, retail and healthcare, now expanding to more verticals. However, there are issues such as data residency, data handling and monitoring at the network and overall host level that still need to be considered and carefully thought out.

Regulatory Compliance Needs

Some years back the security industry was focused on the idea of audit and compliance fatigue – the idea that many enterprises can be dealing with in excess of fifty mandates, depending on whom they do business with and their geographic span, and with a large amount of often manual audit data collection. Since then there has been some automation of IT audit practices, but it remains a time-consuming practice for most organizations. There are over 4,000 mandates today, which the Unified Compliance Framework has done an amazing job of tracking and cross-mapping for many years, and as always there are more government and data privacy mandates in the works. The Cloud Security Alliance Cloud Controls Matrix also cross-walks several standards, but further categorizes controls according to platform, recognizing that different models require different controls. It is ideal for those looking to learn how to evolve their controls to map to different models and who want to avoid the audit fatigue syndrome through the concept of audit once, report many times.
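
The “audit once, report many times” idea can be illustrated with a tiny cross-mapping structure: evidence is gathered per internal control, and reports are generated per framework by following the mapping. The control names and clause references below are rough, illustrative approximations, not an excerpt of the CCM or the UCF.

```python
# Illustrative cross-mapping: one internal control satisfies clauses in several mandates.
CONTROL_MAP = {
    "encrypt-data-at-rest":    {"PCI DSS": "3.4", "HIPAA": "164.312(a)(2)(iv)", "ISO 27001": "A.10"},
    "quarterly-access-review": {"PCI DSS": "7.1", "ISO 27001": "A.9.2.5"},
}

def report_for(framework: str) -> dict:
    """Evidence is collected once per control, then reported per framework."""
    return {ctrl: refs[framework] for ctrl, refs in CONTROL_MAP.items() if framework in refs}

print(report_for("PCI DSS"))   # {'encrypt-data-at-rest': '3.4', 'quarterly-access-review': '7.1'}
print(report_for("HIPAA"))     # {'encrypt-data-at-rest': '164.312(a)(2)(iv)'}
```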

Over the next few weeks I will drill down into each of the above areas. In the meantime, if you have any questions or wish to discuss any of the above further, please contact me at [email protected]


CSA Releases the Expanded Top Ten Big Data Security & Privacy Challenges

Big Data remains one of the most talked about technology trends in 2013. But lost among all the excitement about the potential of Big Data are the very real security and privacy challenges that threaten to slow this momentum.

Security and privacy issues are magnified by the three V’s of big data: Velocity, Volume, and Variety. These factors include variables such as large-scale cloud infrastructures, diversity of data sources and formats, streaming nature of data acquisition and the increasingly high volume of inter-cloud migrations. Consequently, traditional security mechanisms, which are tailored to securing small-scale static (as opposed to streaming) data, often fall short.

The CSA’s Big Data Working Group followed a three-step process to arrive at top security and privacy challenges presented by Big Data:

  1. Interviewed CSA members and surveyed security-practitioner oriented trade journals to draft an initial list of high priority security and privacy problems
  2. Studied published solutions.
  3. Characterized a problem as a challenge if the proposed solution does not cover the problem scenarios.

Following this exercise, the Working Group researchers compiled their list of the Top 10 challenges, which are as follows:

  1. Secure computations in distributed programming frameworks
  2. Security best practices for non-relational data stores
  3. Secure data storage and transactions logs
  4. End-point input validation/filtering
  5. Real-Time Security Monitoring
  6. Scalable and composable privacy-preserving data mining and analytics
  7. Cryptographically enforced data centric security
  8. Granular access control
  9. Granular audits
  10. Data Provenance

The Expanded Top 10 Big Data Challenges list has evolved from the initial list of challenges presented at CSA Congress to an expanded version that addresses three new distinct issues:

  1. Modeling: formalizing a threat model that covers most of the cyber-attack or data-leakage scenarios
  2. Analysis: finding tractable solutions based on the threat model
  3. Implementation: implementing the solution in existing infrastructures

The full report explores each one of these challenges in depth, including an overview of the various use cases for each challenge.

The challenges themselves can be organized into four distinct aspects of the Big Data ecosystem as follows:

[Figure: the Top 10 challenges organized into four distinct aspects of the Big Data ecosystem]

The objective of highlighting these challenges is to bring renewed focus on fortifying big data infrastructures. The Expanded Top 10 Big Data Security Challenges report can be downloaded in its entirety here.

 

 

 

Leveraging Intel from Hackers to Mitigate Risks

Authored by Robert Hansen

“Know your enemy and know yourself and you can fight a hundred battles without disaster.” – Sun Tzu

A few weeks ago, I interviewed “Adam,” a self-described ‘blackhat’ hacker, about why he started hacking, what motivates him and others in the underground community, and why he has decided to change his ways. What was revealed in this interview (which was published in full in a three-part series on the WhiteHat Security blog) hopefully sheds light on how other blackhats like “Adam” think and how they communicate. From this, we in the security industry can devise better solutions, abandon failed technologies, and fix the most glaring issues. A great deal can be unearthed by examining Adam’s words and those of other attackers like him.

For example, Adam shared insights into some of the web vulnerabilities most used by the attacker community, among them XSS and SQL injection, and his belief that SQL injections are the vulnerabilities that should be fixed first because they are the most heavily used. Adam also shared the characteristics that he thinks make up a “good” web application vulnerability: that it is fast to exploit, persistent, gives root/full access, and allows the ability to deface or redirect sites or wipe IP logs completely. When it comes to lists like the recently announced OWASP Top 10 for 2013, Adam downplays their importance as a “best practice” because they are never up to date or comprehensive – i.e., clickjacking and DoS/DDoS are not on the OWASP list yet are extremely useful to attackers – and says they serve only as a good measure for prioritization.

While some IT security professionals shy away from listening to anything from the dark side, much can be learned from knowing your adversary and what makes them tick. From this conversation with Adam alone we are able to better ascertain how to first prioritize testing and finding vulnerabilities and then prioritize mitigating and fixing them.

To take this conversation one step further, I will be co-hosting a webinar on June 20 that delves further into some of the lessons we can learn from our adversaries in the ‘blackhat’ community and how we can better leverage this intel for tracking attacks and deploying the right protection strategies.

About Robert Hansen

Robert Hansen (CISSP) is the Director of Product Management at WhiteHat Security. He’s the former Chief Executive of SecTheory and Falling Rock Networks, which focused on building a hardened OS. Mr. Hansen began his career in banner click fraud detection at ValueClick. Mr. Hansen has worked for Cable & Wireless doing managed security services, and for eBay as a Sr. Global Product Manager of Trust and Safety. Mr. Hansen contributes to and sits on the board of several startup companies. Mr. Hansen has co-authored “XSS Exploits” from Syngress publishing and wrote the eBook “Detecting Malice.” Robert is a member of WASC, APWG, IACSP and ISSA, and contributed to several OWASP projects, including originating the XSS Cheat Sheet. He is also a mentor at TechStars. His passion is breaking web technologies to make them better.

Cloud Trust Study: Security, Privacy and Reliability in the cloud get high marks with U.S. small to mid-sized businesses

Comscore and Microsoft recently commissioned a study to get a pulse on what small to mid-sized businesses (SMB) think about the cloud in terms of security, privacy and reliability.

The results tell us that there’s a gap between the perceptions of those not using the cloud and the real experiences of those using one or more cloud services.

For detailed results from four geographies (France, Germany, the U.K. and the U.S.), check out Adrienne Hall’s post here.

A Hybrid Approach for Migrating IAM to the Cloud

Merritt Maxim

Director-Product Marketing

CA Technologies

 

We continue to hear about how cloud, mobility and the consumerization of IT have the potential to transform business.  However, the ongoing hype around these trends may lead some to believe that they require an “all or none” approach.  This can create conflicts, as organizations may have significant investments in on-premise IT and cannot simply pull the plug on these environments and immediately go to the cloud.  As a result, they are seeking ways to utilize cloud-based applications and infrastructure while maintaining certain applications on-premise. The resulting architecture is referred to as a hybrid environment because it features both on-premise and cloud-based resources.

 

Hybrid approaches can provide organizations with the flexibility to move slowly to cloud-based services while still maintaining select on-premise resources. For organizations in this situation, one of the major challenges is providing users with the flexibility to seamlessly move around the environment while still maintaining appropriate security levels—or more specifically, ensuring consistent control and security policy between on-premise applications and cloud services.

 

Within a strictly on-premise model, IT focuses on building physical infrastructures—servers, virtualization layers, operating systems, and middleware applications—and delivering security throughout the whole stack.  With a hybrid model, however, IT must change its perspective and style, treating any and all IT components (cloud-based or otherwise) as services that are available for the business to consume. In doing so, IT security needs to ensure consistent protection between and among the organizations and all the instances of applications where sensitive data exists (i.e., the broader and fragmented data center).

 

At first blush, it might seem that the role of IT security is significantly diminished by this process. The reality, however, is that securely enabling the access to and interaction of cloud services provides much more value to the business. In doing so, IT is enabling an organization to move more quickly. Furthermore, IT is facilitating the adoption of the consumer-oriented IT capabilities that employees are demanding. In other words, utilizing more cloud-based services puts the IT security function front and center in the day to day of a company’s planning activities.

 

Once organizations simultaneously leverage applications via a variety of IT models, such as on-premise applications and SaaS-based services, the traditional notion of a network perimeter simply no longer exists. And as a result, our ideas about how we manage security and identity have to change.

 

How does one ensure appropriate security levels within this hybrid environment?

 

To avoid building separate identity silos solely for cloud-based services and resources (the result of unique accounts within each of those providers and applications), enterprises should look for a centralized IAM service that can manage all users’ access and authentication before they go to any applications—on-premise or in the cloud.

 

By taking the approach that identity is the new perimeter, we can funnel all access to enterprise resources through a central identity service.  In this way we create a single front door to every SaaS, mobile and on-premise application.  This service can enforce whatever level of authentication you desire for each application.  With standards such as SAML and OAuth being quickly adopted by SaaS providers and mobile application developers, you have the ability to ensure that all enterprise users enter through your central identity service…your new identity perimeter.

 

For employees, authentication could be against a corporate directory. For partners, it could entail using identity federation via standards such as SAML that enable the users of an organization to easily and securely access the data and applications of other organizations as well as cloud services via cloud single sign-on, thus preventing the need to maintain another list of user accounts.  This approach ensures that all the identity-related functions, such as authentication—and ultimately authorization—are consistently managed by the enterprise.

 

For customers who may already have an existing digital social identity (such as Facebook or Google) and would like to be able to leverage that identity, standards such as OpenID and OAuth would allow those users to access cloud resources using those credentials and not require additional user registration steps. For special employees or high-value transactions, a higher level of authentication might be required before allowing the user access to a particular service. There might be very sensitive data that goes into a SaaS-based HR application, for example. If the necessary level of required authentication is not native to that particular SaaS environment, the enterprise could require an additional “step-up authentication”—via a centralized identity service—before granting access.
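
A small sketch of the step-up decision described above, with invented application names, risk tiers and factor strengths: the central identity service compares the strength of the session’s current authentication against what the target application’s risk tier requires and demands another factor when it falls short.

```python
# Assumed application risk tiers and authentication strength rankings
APP_RISK = {"tripit": "low", "box": "medium", "workday-hr": "high"}
REQUIRED_FACTORS = {"low": 1, "medium": 1, "high": 2}
SESSION_FACTORS = {"password": 1, "password+otp": 2}

def needs_step_up(app: str, current_auth: str) -> bool:
    """True when the central identity service should demand stronger authentication."""
    return SESSION_FACTORS[current_auth] < REQUIRED_FACTORS[APP_RISK[app]]

print(needs_step_up("box", "password"))         # False: password is sufficient
print(needs_step_up("workday-hr", "password"))  # True: step up to a second factor
```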

 

As hybrid environments become the norm, the need for solutions that can interoperate in on-premise and cloud environments will be paramount.  Adopting a hybrid based approach can enable organizations of all types and sizes to realize efficiency gains while still protecting their critical digital resources, regardless of whether those resources are on-premise or in the cloud.

 

This can result in:

  • Reduced security risk for all systems, applications, and information
  • Reduced administrative expenses and improved efficiency
  • Improved IT agility through flexible deployment options across on-premise and cloud environments
  • Ability to move to the cloud on a comfortable schedule

 

Organizations may find this hybrid approach as a practical alternative deployment model to going 100% into the cloud without sacrificing agility, usability or flexibility.

 

Merritt Maxim has 15 years of product management and product marketing experience in the information security industry, including stints at RSA Security, Netegrity and CA Technologies. In his current role at CA Technologies, Merritt handles product marketing for CA’s identity management and cloud security initiatives.  The co-author of “Wireless Security,” Merritt blogs on a variety of IT security topics and can be followed at www.twitter.com/merrittmaxim. Merritt received his BA cum laude from Colgate University and his MBA from the MIT Sloan School of Management.

Don’t let a disaster leave your data out in the cold

By Andrew Wild, CSO at Qualys

When we see images from natural disasters like Hurricane Sandy – flooded neighborhoods, downed power lines and destroyed homes – the first concern, of course, is for the safety of the people. But as a chief security officer I also think about how disasters affect companies and the vital assets of their business – the data.

Natural disasters are unpredictable. They happen out of the blue and leave no time to prepare. So now, while things are calm, would be a good time to make sure your data isn’t left to the mercy of the forces of nature. Being prepared means creating information management policies and procedures so that sensitive information remains protected regardless of what happens. This process includes four steps: identifying data that needs to be kept confidential, classifying its sensitivity, deciding how it can best be protected, and ensuring that data left on discarded computer systems is kept away from prying eyes.

1) Identification

All data management programs should start with identifying important information resources, which should be tracked throughout their lifecycle. The organization needs to identify not just all the information it has, but how sensitive it is and where it is processed and stored. Sensitive data can find its way into many different types of systems beyond servers and desktops, including printers, copiers, scanners, laptops, cash registers, payment terminals, thumb drives, external hard drives and mobile devices.

2) Classification

Before an organization can classify the sensitivity of information, it must set policies around data ownership – who is responsible for what data? Employees often believe that the IT department owns all of the organization’s data and is solely responsible for securing it. However, the business unit that creates or uses the data is usually the best candidate for taking on the classification and ownership responsibilities for the data, including naming an owner and a custodian of the information. When making these decisions it is important to consider the impact to the organization if the data were to be lost or inappropriately disclosed. Typically, data is classified into four levels: Public, Internal Use Only, Confidential and Restricted. The classifications should support business requirements and ensure the appropriate level of safeguarding for every type of sensitive information.

3) Handling

Next up is deciding how the different classifications of data should be stored and handled. Typically, the handling processes are defined by the classification level of the resource. The higher the sensitivity, the more stringent the handling procedures should be. For example, organizations will require the most sensitive information to be encrypted, and may prohibit the use of devices like USB flash drives for highly sensitive data because they can be contaminated with malware and easily lost or stolen.
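
As a toy illustration of classification-driven handling, the sketch below encodes a handling rule per classification level and checks a proposed transfer against it. The rule set is invented for the example; a real policy would cover encryption, storage locations, sharing, retention and more.

```python
# Invented handling rules keyed to the four classification levels described above
HANDLING_RULES = {
    "Public":            {"encrypt": False, "removable_media": True},
    "Internal Use Only": {"encrypt": False, "removable_media": True},
    "Confidential":      {"encrypt": True,  "removable_media": False},
    "Restricted":        {"encrypt": True,  "removable_media": False},
}

def transfer_allowed(classification: str, destination: str) -> bool:
    """Block removable-media transfers when the classification forbids them."""
    rules = HANDLING_RULES[classification]
    if destination == "usb" and not rules["removable_media"]:
        return False
    return True

print(transfer_allowed("Restricted", "usb"))   # False: prohibited by policy
print(transfer_allowed("Public", "usb"))       # True
```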

4) Destruction

People who spend a lot of energy protecting sensitive information often neglect to take precautions once they are done with the data or the systems on which it is stored. Exposing confidential data by failing to properly sanitize or destroy media like hard drives can be considered a breach subject to state data breach laws. It can put consumers at risk of identity theft and corporations at risk of espionage. As such, it is imperative that information management policies include procedures for the proper destruction and disposal of data storage systems. Paper, magnetic tape, optical discs and hard disk drives can all be shredded, making it very difficult to recover the information. Organizations that don’t want to take any chances with highly sensitive information can write over data several times or use a degaussing technique on magnetic media to make sure that the original data is not recoverable. There are third parties that offer a range of services for wiping data entirely from systems. Interestingly, computers may be destroyed in natural disasters, but that doesn’t mean the data on the disk drives can’t be recovered and thus leaked to the outside world if the systems are not handled properly.

I sincerely hope that the victims of Hurricane Sandy have recovered and are rebuilding. For the rest of us, this can serve as a reminder of the need to be prepared with information management policies in the event of a disaster.

 

Andrew has more than 20 years of experience leading teams to design, implement and operate secure networks and computer systems. As Qualys’ Chief Security Officer, Andrew oversees the security, risk management and compliance of its enterprise and SaaS environments. Prior to joining Qualys, he managed a team of information security engineers responsible for the design, implementation and operation of security solutions for EMC’s SaaS offerings, with heavy emphasis on cloud and virtualization technologies. Prior to EMC, he was the Chief Security Officer at Transaction Network Services. He has also held a variety of network engineering leadership roles with large network service providers including BT and Sprint. Andrew has a master’s degree in electrical engineering from George Washington University and a bachelor’s degree in electrical engineering from the United States Military Academy. He is a veteran of the United States Army and served in Operations Desert Shield and Desert Storm.

 

New York State launches investigation of top insurance companies’ cybersecurity practices. Who’s next?

The following blog excerpt on “New York State launches investigation of top insurance companies’ cybersecurity practices. Who’s next?” was written by the external legal counsel of the CSA, Ms. Francoise Gilbert of the IT Law Group. We repost it here with her permission. It can be viewed in its original form at: http://www.francoisegilbert.com/2013/06/new-york-state-launches-investigation-of-cybersecurity-practices-of-top-insurance-companies-whos-next/

The State of New York has launched an inquiry into the steps taken by the largest insurance companies to keep their customers and companies safe from cyber threats. This is the second inquiry of this kind. Earlier this year, a similar investigation targeted the cyber security practices of New York based financial institutions.

On May 28, 2013, the New York Department of Financial Services (DFS) issued letters pursuant to Section 308 of the New York Insurance Law (“308 Letters”) to 31 of the country’s largest insurance companies, requesting information on the policies and procedures they have in place to protect health, personal and financial records in their custody against cyber attacks.

Read the full article. >>

How the “Internet of Things” Will Feed Cloud Computing’s Next Evolution

David Canellos, PerspecSys president and CEO

 

 

While the Internet of things is not a new concept (Kevin Ashton first coined the term in 1999 to describe how the Internet is connected to the physical world), it is just now becoming a reality due to some major shifts in technology.

 

According to ABI Research, more than 5B wireless connectivity chips will ship this year – and most of those chips will find their way into tablets, sensors, cameras and even light bulbs or refrigerators that will increasingly become connected to the Internet. Currently, there are about two Internet-connected devices for every person on the planet, but by 2025, analysts are forecasting that this ratio will surpass six. This means we can expect to grow to nearly 50 billion Internet-connected devices in the next decade.

 

Driven by a revolution in technology, for the first time we have the ability to create a central nervous system on our planet. Over the next decade, most of the connected device growth will come from very small sensors that are primarily doing machine-to-machine communications and acting as the digital nerve endings for highly dynamic global sense-and-respond systems. This sensor technology will allow us to measure systems on a global scale and at the same time offer a never before seen array of intelligent services.

 

“Whether it is Smart Cities, e-Health and Assisted Living, Intelligent Manufacturing, Smart Logistics and Transport, or Smart Metering, 21st century machines are now sensing, anticipating, and responding to our needs; and we can control them remotely. We cannot have a policy or create the impression that the Internet of things would create an Orwellian world. Our goal, and our commitment, should be to create a vision that focuses on providing real value for people.” – Neelie Kroes, vice president of the European Commission responsible for the Digital Agenda

 

This promise is what generates excitement about these interconnected sensor data networks. If successful, they will help us solve some of the biggest problems facing our society, making “The Internet of Things” not just a reality, but a force for major change.

 

The Role of Cloud Computing

 

While the Internet of things is exciting on its own, it is my belief that the real innovation will come from combining it with cloud computing. As all these interactions between connected devices occur, large volumes of data will be generated. This data will be easily captured and stored, but it needs to be transformed into valuable knowledge and actionable intelligence – this is where the real power of the cloud kicks in. Systems in the cloud will be used to (a) transform data to insight and (b) drive productive, cost-effective actions from these insights. Through this process, the cloud effectively serves as the brain to improve decision-making and optimization for Internet-connected interactions.

 

Cloud computing can provide the virtual infrastructure for utility computing, integrating applications, monitoring devices, storage devices, analytics tools, visualization platforms, and client delivery. The utility-based model that cloud computing offers will enable businesses and users to access applications on demand anytime, anywhere.

 

Data Protection Challenges

 

With the intersection of the Internet of things and cloud computing, protecting personal privacy becomes an essential and necessary condition. How to ensure information security and privacy is an important issue that must be addressed and resolved in the development of the Internet of things. People will resist the ubiquitous free flow of information if there is no public confidence that it will not cause serious threats to privacy.

 

The intelligence and integrated nature of the Internet of things raises serious concerns over individual privacy in the new environment of smart devices and objects. Universal connectivity through Internet access exacerbates the problem because, unless special mechanisms are considered (encryption, authentication, etc.), personally identifiable information (PII) may become uncontrollably exposed. 

 

Data Protection Solutions

 

In order to remove barriers to the Internet of things and the cloud, the technology industry (and enterprises deploying and using these technologies) needs to embrace the basic principles of protecting personal privacy, including the management, storage and processing of all sensitive information.

 

Legislation will continue to evolve in an attempt to deal with these issues, and sector-specific industry bodies will produce regulations that provide guidelines and best practices to security and privacy officers. And security technologies will surely continue to advance to ensure that these regulations can be complied with in the most effective and efficient ways possible.

 

In the middle of it all will be IT and security professionals, and their technology partners, who will have the challenge of managing not only the threats of data leakage and identity theft, but also growing consumer and employee concerns about data privacy.

 

Perhaps Marc Vael, international vice president of ISACA said it best: “The protection of private data often referred to as personally identifiable information (PII) is the responsibility of both organizations and individuals. Organizations need to ensure that PII is managed and protected throughout its life cycle by having a governance strategy and good processes in place. Individuals must think before they provide their PII to a third party … and be aware of the value of the information they are providing and assess if they can trust whom they are giving it to. Data protection involves improving people’s awareness, using best-of-breed technology and deploying sound business processes.”

 

If the industry – and its customers and beneficiaries – can embrace these ideas, we’ll be able to realize the full potential of the cloud-enhanced, Internet of things world of which we’re on the cusp.

 

6-5-2013

 

David Canellos is president and CEO of PerspecSys. Previously, David was SVP of sales and marketing at Irdeto Worldwide, a division of Naspers. Prior to that, David was the president and COO of Cloakware, which was acquired by Irdeto. Before joining Cloakware, David was the general manager and vice president of sales for Cramer Systems (now Amdocs), a U.K.-based company, where he was responsible for the company’s revenue and operations in the Americas. Prior to his work with Cramer, David held a variety of executive, sales management and business development positions with the Oracle Corporation, Versatility and SAIC.

Rethink cloud security to get ahead of the risk curve

By Kurt Johnson, Courion Corporation

 


Ever since the cloud sprang to the top of every IT discussion, the issue of cloud security has been right alongside it. Let’s face it, enterprise security has never been easy, and the rapidly expanding use of software in the cloud has added layers of complexity – and risk – to the job. More valuable intellectual property, personally identifiable information, medical records and customer data now sit in the cloud. Risk should not prevent this, but it’s a risk that needs to be managed.

 

With more data spread across multiple environments, accessed not only by employees but by contractors, partners and customers alike, and accessed via more devices such as tablets and mobile phones, identity and access become an increasing concern. Who has access? Do they need this access? What are they doing with that access? All of these are critical for an effective security strategy. The cloud doesn’t change identity and access management needs. We still need to ensure that the right people are getting the right level of access to cloud resources, and that they are doing the right things with that access. However, many cloud applications are purchased by business units without IT’s knowledge. Identity and access administration becomes more ad hoc. Security is losing control, but not losing responsibility.

 

The IAM Gap

 

The cloud only puts a fine point on overall access risk as a growing concern. We’re confronting an expanding identity and access management gap (“IAM Gap”) that’s threatening the integrity of many organizations today.

 

Many organizations use provisioning systems to automate the setup, modification and disablement of accounts according to policy. Access certification provides a periodic, point-in-time look at who has access. Managers must attest that subordinates have the right access according to their responsibilities. But, what happens in between? New applications, new accounts, new policies and other changes are a daily event. The ad hoc nature of the cloud means new users and access could be happening without any visibility to IT. Identity and access should not be a once-a-year checkpoint.

 

The gap between provisioning and certification represents trillions of ever-changing relationships among identities, access rights and resources. It’s a danger zone that exposes the soft underbelly of your organization’s security. One wouldn’t expect to do a virus scan or intrusion detection analysis once every six months, so why should your organization stall on monitoring identities and access?

 

So, what should your organization do? Take a hard look at IAM programs and expand them to include the cloud. Update IAM guidelines and controls. Go beyond mere provisioning and certification to include intelligence and analytics. Define the policies of who should have what type of access, define appropriate use, and get the lines of business involved in the process.

 

Then, make sure cloud as well as on-premise applications are included. There should not be stove-piped strategies – one for cloud, one for on-premise. It should be an enterprise IAM strategy that incorporates both.

 

To incorporate the cloud in this strategy, start with an inventory of your cloud applications. Once the cloud applications have been identified they should be categorized by risk, much like any enterprise application. Define the appropriate identity and access controls to the appropriate risk levels. Low risk applications, like TripIt, should have acceptable use agreements and password policies. Too many end-users use the same passwords for personal applications as they do for enterprise applications. What happens when password breaches occur, such as those that happened with Evernote or LinkedIn? Medium risk applications, such as Box or ShareFile, should add automated provisioning and de-provisioning, access certification reviews, access policy reviews and exception monitoring. For high risk applications, such as Salesforce.com, higher level controls should be added which include user activity monitoring, privileged account monitoring, multi-factor authentication and identity and access intelligence so as to provide more real-time analysis and monitoring of access risk.
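
As a sketch of this risk-tiered thinking, the snippet below encodes a hypothetical control baseline per risk tier and reports what a given application is missing. The tier names, control names and baselines are assumptions for illustration, not anyone’s formal model.

```python
# Hypothetical control baselines per risk tier, echoing the tiers described above
BASELINE = {
    "low":    {"acceptable_use", "password_policy"},
    "medium": {"acceptable_use", "password_policy", "automated_provisioning",
               "access_certification"},
    "high":   {"acceptable_use", "password_policy", "automated_provisioning",
               "access_certification", "activity_monitoring", "mfa",
               "privileged_account_monitoring"},
}

def missing_controls(risk_tier: str, controls_in_place: set) -> set:
    """Return the controls the baseline expects but the application lacks."""
    return BASELINE[risk_tier] - controls_in_place

print(sorted(missing_controls("high", {"password_policy", "mfa"})))
```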

 

The strategy needs to address the gap not just on day one and through periodic point-in-time reviews, but with intelligence that provides a measure of real-time monitoring and which tracks user activity.

 

As the openness imperative and cloud movement raise the access risk management stakes, organizations need to:

 

  • Identify where risk is and understand it
  • Drive security controls to settle the risk
  • Dynamically strengthen security controls based on risk status
  • Spotlight risk in real-time

 

The solution is harnessing the Big Data in the trillions of access relationships – on the ground or in the cloud – to better understand what is really going on. Security staff are essentially looking for a needle in the haystack of data. Unfortunately, they don’t know what the needle looks like, so they have to look at all the hay and find something that looks different. What they really need to see are meaningful patterns. This is where predictive analytics come in – the same technology that an online retailer might use to better target product offers to you based on your recent buying behavior, for example.

 

Closing the IAM Gap with Real-Time, Risk-Aware Identity & Access Intelligence

 

You need to apply predictive analytics specifically to the big data around identity, rights, policies, activities and resources to reveal anomalous patterns of activity. From this, you gain access intelligence, and you can compare the patterns representing good behavior with anomalies. Consider a person with legitimate rights to a resource accessing a cloud-based CRM system and downloading the entire customer database from his home office at 2 a.m. on a Saturday night. This event might bear looking into, but you’d never even know it occurred with traditional controls because the person had legitimate access to the system. By identifying patterns or anomalies from “normal” – and displaying them in graphical heat maps – you have a view you haven’t seen before.
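
As a rough illustration of the kind of anomaly detection described above, the following sketch scores a single access event against a user’s history; the event fields, thresholds and two-signal rule are assumptions for the example, since a real access-intelligence product would learn baselines statistically.

```python
# Sketch: flag access events that deviate from a user's normal behavior.
# Field names and thresholds are illustrative; real products learn baselines.

from datetime import datetime
from statistics import mean, pstdev

def is_anomalous(event: dict, history: list) -> bool:
    """Return True when an event looks unlike the user's established pattern."""
    hour = datetime.fromisoformat(event["timestamp"]).hour
    off_hours = hour < 6 or hour > 22                      # e.g. 2 a.m. on a weekend

    volumes = [e["records_downloaded"] for e in history] or [0]
    baseline, spread = mean(volumes), pstdev(volumes) or 1.0
    bulk_download = event["records_downloaded"] > baseline + 3 * spread

    new_location = event["location"] not in {e["location"] for e in history}

    # Any two signals together is enough to surface the event for review.
    return sum([off_hours, bulk_download, new_location]) >= 2

history = [
    {"timestamp": "2013-06-03T10:15:00", "records_downloaded": 40, "location": "office"},
    {"timestamp": "2013-06-04T11:02:00", "records_downloaded": 55, "location": "office"},
]
event = {"timestamp": "2013-06-08T02:05:00", "records_downloaded": 250000, "location": "home"}
print(is_anomalous(event, history))   # True: off-hours, bulk download, new location
```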

 

This kind of analysis closes the IAM Gap and provides a risk-driven approach to IAM. You understand and manage risk in real time, not every three to 12 months. You automate information security and identify patterns not discernible to the naked eye. With anomalies and patterns revealed, you prioritize your next security steps, strengthen controls in times of highest risk and continuously update threat definitions.

 

Here’s the key point: In this new approach, you assess risk from live data, not scenarios you’ve anticipated and coded into the system. Many security tools alert you to actions you’ve already defined as “bad.” But how do you see things you didn’t know were bad before? In other words, you don’t know what you don’t know. You need analytics to uncover patterns, serve them up to you and let you weigh whether they warrant further investigation. Real-time, predictive analytics put you ahead of the risk curve, harnessing existing company data to sound alarms before a loss – when the risk around an individual or resource spikes.

 

This kind of operational intelligence identifies, quantifies and settles access risks in time to avoid audit issues and real damage to your business. It’s interactive, real-time, scalable and self-learning. You have actionable, risk-prioritized insight.

 

Whether the applications you monitor are partly or solely in the cloud does not matter; you’re securing all your enterprise systems and resources wherever they reside. You are making sure risks are reduced before they become bona fide breaches. Bottom line, we need a new “perimeter”: one that truly understands who someone is, what they should access, what they are doing with that access and what patterns of behavior might represent threats to the organization. This way, you’re taking advantage of all the benefits of the cloud while opening your business to employees, customers and partners – all while getting ahead of risk.

 

Kurt Johnson is vice president of strategy and corporate development at Courion Corporation (www.courion.com).

 

# # #

 

 

Cloud Computing Trends: Assessing IT Maturity and Adoption Practices

By John Howie, COO, Cloud Security Alliance


In keeping with our CSA mission to promote best practices for providing security assurance, I have a few resources to share that can help organizations understand cloud computing trends and assess their own current IT environment with regard to security, privacy and reliability practices, policies and compliance.

Microsoft released a new Trends in cloud computing report, which analyzes the current IT maturity and adoption practices of organizations worldwide that have used the free Cloud Security Readiness Tool (CSRT). The report draws on the anonymized responses of approximately 5,700 people to the CSRT’s 27 questions, collected over a six-month period between October 2012 and March 2013.

This report helps organizations understand current cloud computing trends and evaluate their IT security strengths and weaknesses. For example, areas of strength for those who utilized the tool are information security (through deployment of antivirus/antimalware software), security architecture, and facility security, whereas areas of weakness are human resources security, operations security, information security (through consistent incident reporting), legal protection and operations management. I encourage you to read the report and see how these trends are evolving over time.

A few months ago CSA recommended Microsoft’s Cloud Security Readiness Tool (CSRT). The CSRT helps organizations review and understand their IT maturity level and their readiness to consider adopting or growing cloud services. The tool builds on the Cloud Security Alliance’s Cloud Controls Matrix (CCM) to ensure a common set of control objectives is used to evaluate an organization’s maturity, and it is a simple way to adopt Security, Trust, and Assurance Registry (STAR) and CCM principles.

The CSRT helps organizations understand their IT readiness so they are better placed to make informed comparisons and evaluate the benefits of cloud adoption.

 

Building Trust and Security Through Transparency of Service


By David Baker, CSO at Okta

 

With the growing movement of enterprises to the cloud, it’s more important than ever that service providers demonstrate and prove good security practices to their customers, in good times and in bad. During an incident, how a cloud provider communicates to its customers says a lot about its commitment to security. Sounds obvious, right? Well, three different times during the past seven months — and once while I was on a panel at the 2012 CSA Congress in Orlando — I’ve learned that it isn’t clear after all. As CSO at Okta, I work closely with our customers and they always ask, “What will you guys do if a breach occurs?”

 

When I tell customers that we’ll proactively reach out to them with written communication within hours of any important incident, they are surprised … which surprises me. We build transparent communication into every service level agreement (SLA), alongside availability guarantees and recovery point and time objectives.

 

SLAs exist so that customers have a means to measure the basic service performance of their providers. SLAs can sometimes be very complex and involve many components, but it’s the communication aspect that I see most commonly omitted. It’s important for cloud providers to incorporate communication protocols into their SLAs to ensure trust and transparency with their customers.

 

Proactive Communication

 

The most basic question that customers have for their cloud providers is how they would find out if there’s been a breach in service. During last year’s CSA conference in Orlando, the same question came up again and again: “How would I even know if the service is breached?”

 

Typically, when a large consumer-facing provider goes down, the company posts a “We’re sorry” or a fail message on its homepage. This works for a service such as Google, which expects users will visit the site, see the service interruption and then wait for the site to come back online. Users might tweet about how annoyed they are that Google’s down, but they wouldn’t expect a phone call from a Google rep explaining the problem and detailing the company’s plans to resolve it. Large consumer services such as Google simply have too many millions of users.

 

But for enterprises that rely on cloud services to run their businesses, an impersonal “sorry” on the provider’s website is little consolation during an interruption or breach. They should expect, as part of the signed SLA, a proactive message alerting them to the problem and detailing the response. Maintaining high-touch customer interaction is essential to building and maintaining trust with customers. Cloud providers may think this seems futile or silly if they have several thousand enterprise customers and need to alert an administrator point of contact at each customer during a service-wide incident. Welcome to the big leagues of enterprise SaaS IT!
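
As a small illustration of what a proactive-notification commitment could look like in practice, the sketch below sends an initial incident notice to an administrator contact at each customer; the contact list, sender address and local mail relay are assumptions for the example, not a description of any provider’s actual tooling.

```python
# Sketch: proactive incident notification to each customer's administrator
# contact, as an SLA communication clause might require. Contact data, the
# sender address and the local mail relay are illustrative only.

import smtplib
from email.message import EmailMessage

ADMIN_CONTACTS = {                    # one point of contact per enterprise customer
    "Acme Corp": "it-admin@acme.example",
    "Globex":    "secops@globex.example",
}

def notify_customers(summary: str, impact: str, next_update_minutes: int) -> None:
    """Send the initial incident notice; follow-ups repeat until resolution."""
    for customer, address in ADMIN_CONTACTS.items():
        msg = EmailMessage()
        msg["To"] = address
        msg["From"] = "status@provider.example"
        msg["Subject"] = f"Service incident notification for {customer}"
        msg.set_content(
            f"What happened: {summary}\n"
            f"Current impact: {impact}\n"
            f"Next update in: {next_update_minutes} minutes."
        )
        with smtplib.SMTP("localhost") as smtp:   # assumes a local mail relay
            smtp.send_message(msg)

# notify_customers("Elevated error rates on the login service",
#                  "Intermittent sign-in failures", next_update_minutes=30)
```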

 

Transparent Expectations

 

As importantly, communication shouldn’t stop after the initial notification. It’s important for the vendor to update customers throughout the disruption, whether an outage, a breach or a service interruption. Transparency is essential from an enterprise standpoint: it educates customers about the details of what’s going on and builds trust that the problem is being addressed, makes clear what the target resolution steps are, and explains what workaround steps can be implemented.

 

Typically, recovery point objectives (RPOs) and recovery time objectives (RTOs) are standard SLA elements that set customer expectations for when the service will be recovered. What these elements don’t do is dictate how – and how frequently – the provider communicates to its customers during the recovery process. Okta provides identity and access management (IAM) in the cloud and is an extension of customers’ IT teams, so we maintain high-touch communication with our customers’ IT teams as frequently as possible. Companies should expect the same when they extend their mail, system log facilities or HR services into the cloud, all of which are important extensions of the enterprise.

 

By setting customer expectations from the outset with a detailed SLA, cloud vendors can assuage their customers’ anxieties — and develop trust for when or if breaches or service downtime occur.

 

Continuity

 

Earlier this year, I wrote about how enterprise cloud IT services allow companies to enhance their business continuity plans. Geographic redundancy and layering across multiple AWS availability zones signify a service’s investment in disaster avoidance and translate into customers’ disaster recovery and business continuity plans. But let’s face it: every disaster recovery and business continuity plan document assumes a worst-case scenario, so responsible service providers should work with their customers to develop continuity plans that account for specific worst-case disasters, whether a serious extended service degradation or a significant outage.

 

Though not necessarily baked into SLAs, customers should be able to leverage their providers to help assemble a continuity plan tailored to their needs. Agreeing on outage protocols between a cloud service provider and its customers in advance can save a lot of time, frustration and anxiety when a service misses a beat. It can be appropriate to have global or customer-wide SLAs spell out precisely the measures that will be taken in different scenarios to ensure a speedy recovery.

 

The businesses that thrive in the cloud are highly available, disaster resilient and prepared for anything. And they clearly communicate these guarantees to customers through SLAs. These agreements are intended to build trust by guaranteeing open communication when a problem arises and a clear explanation of how (and when) the problem will be fixed. The detail in the SLA, and how a cloud provider follows through on those details, says a lot about its commitment to security — during the good times and, most importantly, during the bad times.

 

By David Baker, chief security officer of Okta, an enterprise-grade identity management service that addresses the challenges of a cloud, mobile and interconnected business world. Follow him on Twitter at @bazaker.

Plugging “Cloud Identity Leaks” – Why Your Business Should Become an Identity Provider


By Mark O’Neill, VP Innovation – API & Identity Management, Axway (following Vordel acquisition)


Most people have used the Facebook, Twitter, or Google Apps buttons located on websites to log into third party services. This approach is useful within consumer IT as it enables the user to access various services via their own Facebook, Twitter or Google Apps passwords without the effort of setting up multiple accounts on different websites. This trend has also transferred to the enterprise, with employees now actively logging into business sites or business-to-business marketplaces via their own personal Facebook, Twitter or Google passwords. While employees may enjoy this convenience, organizations need to consider whether this practice is good for their business.

Cloud Identity Leaks

Let’s take a look at some of the issues associated with employees using their personal passwords to access third party services. If employees are identifying themselves as Gmail or Facebook users and not as employees of an organization, it is difficult for the organization to have an audit trail of employee behavior on business sites or within business-to-business marketplaces. For example, when a user is logged into Salesforce, ADP or similar services via their own social login, it is impossible for the organization to verify their identity, track their activity and govern access. Additionally, the organization loses any ability to de-provision these employees from those services once they leave the organization: they are still logged into the third party services via their consumer identities, and the organization can’t do anything about it.

Corporate IT & CSOs Must Regain Lost Ground

At the moment the majority of employees are accessing third party services via their social log-ins. This means they have effectively transferred control of their identity, with associated provisioning and account management abilities, to Google, Twitter or Facebook. As such, corporate IT is at risk of becoming irrelevant and being viewed as an inconvenience to the employee. To use an American Football analogy this is similar to the employee making an end run around corporate IT. Corporate IT can fight back by making it company policy for employees to use the corporate ID to access third party services, and by making it very easy to do so.

The lack of control over employee identities is also of concern to Chief Security Officers (CSOs), who need to know how users are managing passwords and which types of services they are accessing, and who must evaluate the risk of those identities being hijacked. Typically CSOs will have password policies to address these issues. However, if users are simply bypassing the corporate log-in and logging into third party systems via Gmail, the CSO’s policies are rendered redundant and irrelevant.

Identity Providers

It is clear that organizations need to control how employees are using their social identities to access work related services. Within an identity context, Twitter, Facebook and Google are considered to be Identity Providers (IDPs): they literally provide the user’s identity. These services are the location where a user logs in, usually with a user name and password. Facebook or a similar service will then log the user in and vouch for the user’s identity to other systems the employee is trying to use. Of note, it is technologies such as OAuth and OpenID that have enabled Facebook, Twitter and Google to become IDPs, and there is nothing preventing an organization that wants to become its own Identity Provider from leveraging these technologies to do so.

Organizations that want to regain control of their employees’ identities can make it company policy for employees to log into third party services via the company Intranet. In this way, the organization becomes its own IDP, enabling the business to vouch for the identity of its employees. Employees would log in via the company Intranet, and the organization would provide its own links as a springboard to the various third party services each employee uses. The business thus provides an on-ramp from the user’s log-in to any third party service, and becomes an Identity Provider.
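
To make the idea of an Intranet springboard more tangible, here is a minimal sketch of a route that vouches for an already authenticated employee by issuing a short-lived signed assertion and redirecting to a third party service. Real deployments would use SAML or OpenID Connect; the Flask route, PyJWT-signed token, claims and URLs are illustrative assumptions only.

```python
# Sketch: an intranet "springboard" that vouches for an already authenticated
# employee before handing off to a third party service. Real deployments would
# use SAML or OpenID Connect; the token, claims and URLs here are illustrative.
# Requires Flask and PyJWT.

import time
import jwt                      # PyJWT
from flask import Flask, redirect, session

app = Flask(__name__)
app.secret_key = "replace-me"
SIGNING_KEY = "corporate-idp-signing-key"        # in practice, an asymmetric key pair

@app.route("/launch/<service>")
def launch(service):
    employee = session.get("employee_id")
    if employee is None:
        return redirect("/login")                # corporate authentication first
    assertion = jwt.encode(
        {"sub": employee, "iss": "https://idp.corp.example",
         "aud": service, "exp": int(time.time()) + 300},
        SIGNING_KEY, algorithm="HS256",
    )
    # The third party service validates the assertion instead of a social login.
    return redirect(f"https://{service}.example/sso?assertion={assertion}")
```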

An organization can become an identity provider by engaging its developers to produce an internal portal for its employees. However, this approach involves climbing a mountain of complex identity standards. Alternatively, Identity Mediation products offer a Gateway that acts as a springboard from the corporate identity out to third party services, allowing the organization to become an Identity Provider and govern employee identities.

Ownership

To conclude, it’s important to understand that Identity Providers such as Google, Facebook and Twitter own the user’s log-in, so in effect they own the user. For example, if a user logs into several services via Facebook and then cancels their Facebook account, they will no longer be able to log into those services. Therefore, employee identities are becoming increasingly tied to platforms such as Facebook, Twitter and Google. To counter this trend, organizations need to take control of how their employees are accessing services and offer an alternative: the corporate log-in. The organization needs to make it company policy for employees to use the corporate log-in and, most importantly, make it very easy to use. Otherwise, employees will continue to use their personal log-ins to access third party sites, exposing the organization to potential risks and a complete lack of governance.

About the author
Mark O’Neill is a frequent speaker and blogger on APIs and security. He is the co-founder and CTO of Vordel, now part of Axway. In his new role as VP Innovation, he manages Axway’s Identity and API Management strategy. Vordel’s API Server enables enterprises to connect to Cloud and Mobile. Mark can be followed on his blog at www.soatothecloud.com and on Twitter at @themarkoneill.

Cloud-to-Ground, The Last Frontier?

 

 

 

 

 

Whilst Cloud-to-Cloud service integration is relatively straightforward, Cloud service to on-premise integration presents more challenges for the enterprise architect.

 

By Ed King, VP Product Marketing –  Axway (following acquisition of Vordel)

Cloud-to-Cloud security integration is now a fairly well solved problem. Cloud based services allow extremely limited access to backend infrastructure and data stores. Typically integrations are done via highly constrained REST APIs and occasionally SOAP Web Services. Thus, security integration between Cloud based services has largely been focused on access control and API security. Cloud-to-Ground security integration is the last frontier that must be conquered before Cloud adoption can hit its full stride. It is a much more difficult and complex technical problem than Cloud-to-Cloud integration. In this blog post we will discuss three key challenges in linking security from Cloud to Ground and how to leverage new and existing security technologies to make it work.

It’s 10 p.m., do you know where your users are?

The first common Cloud-to-Ground security integration problem for an enterprise to solve is to enable an on-premise user already signed-on locally to have single-sign on (SSO) access to Cloud based services.  This is a straightforward problem to solve because most Cloud based services today support SAML and increasingly OAuth 2.0 based federated SSO.  The Cloud service providers invariably offer well-documented APIs for SSO integration based on these standards.  Unfortunately, the preparedness on the on-premise side is not as good.  Traditional identity management platforms support at least SAML, but often as a bolt-on, extra-cost federation module.  OAuth 2.0 support is still more miss than hit.

Instead of just adding a bolt-on federation module, consider deploying a stand-alone Security Token Service (STS). STS solutions provide token mediation for standard and proprietary token types and broker trust between on-premise and Cloud identity platforms. The STS approach provides the flexibility you’ll need as you expand your Cloud and mobile usage, and safeguards you against future changes in federation standards. There are a number of standalone STS products from vendors such as Ping Identity, or you can find an STS as a feature of most API server and gateway products from companies such as Axway/Vordel and Oracle.
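
As an illustration of the token mediation an STS performs, the sketch below exchanges a locally issued token for a cloud-ready access token over a hypothetical REST interface shaped like the OAuth 2.0 token-exchange grant; the endpoint, credentials and token types are placeholders, and individual STS products expose different interfaces.

```python
# Sketch: exchanging an on-premise token for a cloud-ready token at a
# stand-alone Security Token Service. The endpoint and credentials are
# placeholders; the request shape follows OAuth 2.0 token-exchange
# conventions, but real STS products differ in their interfaces.

import requests

def exchange_token(local_token: str) -> str:
    response = requests.post(
        "https://sts.corp.example/token",                    # hypothetical STS
        data={
            "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
            "subject_token": local_token,
            "subject_token_type": "urn:ietf:params:oauth:token-type:saml2",
            "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
        },
        auth=("app-client-id", "app-client-secret"),
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["access_token"]
```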

The challenge with Ground-to-Cloud SSO is that today’s employees are no longer bound to the office.  Even if we aren’t all road warriors or work in a field job, most of us take some work home or work from home part-time.  The instinctive IT response is to ask remote workers to use VPN, then the normal SSO flow would work perfectly.  VPN is a cumbersome but viable solution for home working scenarios, but is technically challenging for mobile workers.  VPN while on mobile networks, hotel and public WiFi, or enterprise guest networks can be frustrating to downright impossible.  To solve this problem you need to move the initial login point into the Cloud or in the DMZ, which we will discuss next.

Knock, knock, who’s there?  The outside-in use cases.

Network perimeter security does a good job of obscuring on-premise resources to the outside and blocking direct access from external endpoints.  This creates a technical challenge for integrating Cloud based services with on-premise systems.  Let’s look at two use cases:

(1) SSO from Cloud based identity platforms, and
(2) Data and functional integration between on-premise with Cloud based services.

Above I proposed moving the initial login point from behind the firewall to the Cloud or the DMZ.  A number of Cloud based SSO solutions now exist to do just that.  These Cloud-to-Cloud SSO solutions offer users a comprehensive catalog of prebuilt integrations to popular Cloud based services such as Salesforce, Workday, and Google.  Users can log into services such as Okta, Symplified, and VMware, then SSO to a catalog of Cloud based services without having to tangle with the company VPN.  However, SSO is a misnomer because it is really Dual Sign-On: Cloud and on-premise.  On-premise assets are still protected by on-premise identity management systems such as Oracle Access Manager and CA SiteMinder and are inaccessible to the Cloud based SSO platforms.

To achieve true SSO across Cloud and on-premise, a user who is authenticated by the Cloud/DMZ SSO platform must be able to SSO to on-premise systems and be able to see Cloud and on-premise applications in a single integrated application catalog. This can be done by introducing a trusted gateway into the DMZ. The gateway is trusted by the on-premise SSO platform and can take a SAML token from the Cloud SSO platform in exchange for the proprietary token used by the on-premise system, such as Oracle’s ObSSO cookie or CA SiteMinder’s session token. The gateway can also perform dynamic URL mapping so internal applications can be accessed without VPN. Here is a video showing an example of a user logging into VMware’s Horizon Application Manager in the Cloud and then using SSO to reach an on-premise application protected by Oracle Access Manager.

The second case is the integration of a Cloud based application with an on-premise system, for example Salesforce.com pushing new customer records into an on-premise Siebel CRM. Instead of poking a hole in the firewall to allow direct access, the better way to achieve a secured integration is to make on-premise systems look like a Cloud endpoint, thus leveraging the web oriented architecture (WOA) for integration. This means putting up a REST API façade that can be exposed externally. This API based integration approach makes integration easier and more secure. API security can be easily achieved using off-the-shelf API Server, API Gateway, and API Management products. These products not only control access to the APIs, but can also monitor the flow of data from on-premise systems to the Cloud and enforce data redaction policies for security and privacy purposes.
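
A minimal sketch of such a façade is shown below: a small Flask service that exposes an on-premise customer record to a cloud application while redacting sensitive fields. The backend lookup, field names and redaction policy are hypothetical; in practice an off-the-shelf API gateway would enforce the same policy declaratively.

```python
# Sketch: a REST API facade in the DMZ that exposes an on-premise record to a
# cloud application while redacting sensitive fields. The backend lookup and
# field names are hypothetical. Requires Flask.

from flask import Flask, jsonify

app = Flask(__name__)
REDACTED_FIELDS = {"ssn", "credit_limit"}        # illustrative redaction policy

def fetch_customer_from_crm(customer_id):
    """Placeholder for the on-premise CRM lookup."""
    return {"id": customer_id, "name": "Example Customer",
            "ssn": "123-45-6789", "credit_limit": 50000}

@app.route("/api/customers/<customer_id>")
def get_customer(customer_id):
    record = fetch_customer_from_crm(customer_id)
    safe = {k: v for k, v in record.items() if k not in REDACTED_FIELDS}
    return jsonify(safe)
```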

Are you an identity pusher?  Try to be an identity provider.

Whether it is SaaS, PaaS, or applications you stand up inside IaaS, all these applications still have access control built in to ensure users can only see and do what their job roles permit. Where do these applications go for identity data to make authorization decisions? Traditional software architecture uses either an embedded user repository or points to an LDAP directory. Either way, an identity needs to be provisioned into the application or local LDAP. In the case of the Cloud, this means pushing identity records from on-premise identity platforms to Cloud based services. This is the traditional provisioning nightmare that the enterprise has not been able to solve despite throwing millions of dollars at it, and the problem has only become more complicated with the addition of external Cloud based resources. Emerging standards like SCIM and more modern provisioning solutions such as Identropy attempt to solve the Cloud provisioning problem.
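
For context, here is a sketch of what pushing a single identity record to a cloud service through SCIM could look like; the endpoint, bearer token and attribute values are placeholders, and individual providers vary in which SCIM version and attributes they accept.

```python
# Sketch: provisioning one identity into a cloud service via SCIM.
# The endpoint URL, bearer token and attribute values are placeholders;
# providers differ in the SCIM version and attributes they support.

import requests

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jsmith@corp.example",
    "name": {"givenName": "Jane", "familyName": "Smith"},
    "active": True,
}

response = requests.post(
    "https://scim.cloudservice.example/v2/Users",       # hypothetical SCIM endpoint
    json=new_user,
    headers={"Authorization": "Bearer PROVISIONING-TOKEN"},
    timeout=10,
)
response.raise_for_status()
print("Provisioned user id:", response.json()["id"])
```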

The better solution is to have applications, especially Cloud based services, support the claims based identity model, as Microsoft SharePoint 2010 does. The claims based model enables the application to rely on an external identity provider to supply user authentication and role information. By delegating to an external identity provider, the application can control access without retaining the identity locally. Here is a video showing how to set up SharePoint 2010 for claims based access control.

Two major challenges have to be overcome for this model to scale.

1)    More Cloud based applications have to support the claims based identity model.

2)    The enterprise must develop its own identity provider service. This is best achieved by exposing the identity provider service as a REST API, as in the sketch below.
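
Here is a minimal sketch of what such an identity provider REST API might look like: a claims endpoint that a claims-aware application could query for a user’s roles. The directory contents, URL path and claim names are assumptions for illustration, and a production service would sign its responses or require mutual TLS.

```python
# Sketch: the enterprise identity provider exposed as a small REST API that a
# claims-aware application could query for a user's roles. The directory
# contents, URL path and claim names are illustrative. Requires Flask.

from flask import Flask, jsonify, abort

app = Flask(__name__)

DIRECTORY = {                    # stand-in for the corporate identity store
    "jsmith": {"department": "finance", "roles": ["expense_approver"]},
    "akumar": {"department": "sales",   "roles": ["opportunity_editor"]},
}

@app.route("/identity/v1/users/<user_id>/claims")
def get_claims(user_id):
    user = DIRECTORY.get(user_id)
    if user is None:
        abort(404)
    # In production the response would be signed or served over mutual TLS.
    return jsonify({"subject": user_id, **user})
```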

If you are interested in learning more about this topic, you can view this webinar I presented with Eve Maler of Forrester Research: The IAM-As-An-API Era: You Must Become A Cloud Identity Services Provider

Summary
Cloud-to-Ground security integration is still a challenge and most solutions are not very elegant, yet. However, enough technologies already exist, such as API Servers, Cloud Gateways, Cloud SSO and Security Token Services, that can build on your existing security infrastructure to provide good solutions today. New and emerging technologies and standards such as OpenID Connect, SCIM, UMA and HTML5 WebSocket all hold promise to make these solutions increasingly better, more secure, and more scalable.

 

Ed King, VP Product Marketing, Emerging Technologies, Axway (following its acquisition of Vordel)
Ed has responsibility for product marketing of emerging technologies around Cloud and Mobile at Axway, following its recent acquisition of Vordel. At Vordel, he was VP Product Marketing for the API Server product that defined the Enterprise API Delivery Platform. Before that he was VP of Product Management at Qualys, where he directed the company’s transition to its next generation product platform. As VP of Marketing at Agiliance, Ed revamped both product strategy and marketing programs to help the company double its revenue in his first year of tenure. Ed has also held senior executive roles in product management and marketing at Oracle, Jamcracker, Softchain and Thor Technologies. He holds an engineering degree from the Massachusetts Institute of Technology and an MBA from the University of California, Berkeley.

 

Security Check List: An Ounce of Prevention is Better than a Pound of Cure

By Wolfgang Kandek

It is a common belief that buying more robust and expensive security products will offer the best protection from computer-based attacks, and that ultimately the expenditure pays off by preventing data theft. According to Gartner, more than $50 billion is spent annually on security infrastructure software, hardware and services, and the firm expects this number to continue to grow, reaching $86 billion by 2016. With security investments skyrocketing, the number of successful attacks should be decreasing, but it isn’t. That’s the reality. There is no one thing, or even combination of things, that can guarantee you won’t get hacked. However, there are some basic precautions companies can take that put up enough defenses to make it not worth a hacker’s time and effort to try to break in.

The recent Verizon Business 2013 Data Breach Investigations Report revealed that 78 percent of initial intrusions were rated as low difficulty and likely could have been avoided if IT administrators had used some intermediate or even simple controls. Outdated software versions, non-hardened configurations and weak passwords are just a few of the many common mistakes businesses make. These basic precautions are being overlooked, or worse, ignored.

Implement a security hygiene checklist

One of the simplest and most effective ways for companies to improve their defenses is to create and closely adhere to a checklist for basic security hygiene. The Centre for the Protection of National Infrastructure in the UK and the Center for Strategic & International Studies (CSIS) in the U.S. released a list of the top 20 critical security controls for defending against the most common types of attacks. Topping the list are creating an inventory of authorized and unauthorized devices and software, securing configurations for hardware and software, and continuous vulnerability assessment and remediation.

A laundry list of organizations is already using this checklist and seeing results, including the U.S. Department of State, NASA, Goldman Sachs and OfficeMax. The State Department followed the guidelines for 40,000 computers in 280 sites around the world and, within the first nine months, reduced its risk by 90 percent. In Australia, the Department of Industry, Innovation, Science, Research and Tertiary Education, following its defense agency’s recommended controls, reported that it had eliminated 85 percent of all incidents and blocked malware it would have missed otherwise, without purchasing additional software or increasing end user restrictions.

My own security precaution checklist includes the following (a small automation sketch follows the list):

  • Promptly apply security patches for applications and operating systems to keep all software up to date
  • Harden software configurations
  • Curtail admin privileges for users
  • Use 2-factor authentication for remote access services
  • Change default admin passwords
  • Prohibit Web surfing with admin accounts
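
To show how a couple of these checklist items can be automated, here is a small sketch that counts pending package updates and lists accounts with admin rights on a Debian/Ubuntu host; the commands, thresholds and the idea of feeding results into a dashboard are assumptions for the example.

```python
# Sketch: automating two items from the hygiene checklist on a Debian/Ubuntu
# host. The commands are illustrative; real programs would feed results into
# a vulnerability-management dashboard.

import subprocess

def pending_updates() -> int:
    """Count upgradable packages (checklist item: apply patches promptly)."""
    out = subprocess.run(["apt", "list", "--upgradable"],
                         capture_output=True, text=True).stdout
    return sum(1 for line in out.splitlines() if "/" in line)

def admin_group_members() -> list:
    """List members of the sudo group (checklist item: curtail admin privileges)."""
    out = subprocess.run(["getent", "group", "sudo"],
                         capture_output=True, text=True).stdout.strip()
    fields = out.split(":")
    return fields[-1].split(",") if out and fields[-1] else []

if __name__ == "__main__":
    print("Pending updates:", pending_updates())
    print("Accounts with admin rights:", admin_group_members())
```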

Making it happen

The hardest part of changing security policies is getting IT administrators on board to drive these initiatives. Since they are already managing heavy workloads, it is important to present the efforts as ways of strengthening existing security measures rather than adding responsibilities. Incentivizing implementation is another effective strategy. Or, you can always remind them that cleaning up after an attack is harder than preventing one, but in case you need more ammunition for motivating IT:

  • Friendly competition – One engineer at NASA boosted participation by awarding badges, points and other merits as if it were a game, giving employees incentive to compete for the highest score.
  • Company-wide report card – The Department of State assigns letter grades based on threat risk for each location including various aspects of security and compliance. For instance, a lower grade would be given for software that is missing critical patches and infrequent vulnerability scanning. The report cards are published internally for all locations to see and again boost participation by competition and cooperation.
  • Show them the money – The biggest incentive of all would be offering bonuses or time off for quantifiable improvements in security and reduced risk.

While spending money on the latest security product to build bigger and stronger walls may impress the board of directors, it won’t necessarily deter attacks. Ultimately, the goal is to implement fairly basic but often forgotten measures to eliminate opportunistic attacks and discourage hackers who don’t want to waste the time and energy trying to get in. Some renewed attention to the basics can mean the difference between suffering from an attack and repelling one.

Wolfgang Kandek, CTO, Qualys

As the CTO for Qualys, Wolfgang is responsible for product direction and all operational aspects of the QualysGuard platform and its infrastructure. Wolfgang has over 20 years of experience in developing and managing information systems. His focus has been on Unix-based server architectures and application delivery through the Internet. Prior to joining Qualys, Wolfgang was Director of Network Operations at the online music streaming company myplay.com and at iSyndicate, an Internet media syndication company. Earlier in his career, Wolfgang held a variety of technical positions at EDS, MCI and IBM. Wolfgang earned a master’s and a bachelor’s degree in Computer Science from the Technical University of Darmstadt, Germany.

Wolfgang is a frequent speaker at security events and forums including Black Hat, RSA Conference, InfoSecurity UK and The Open Group. Wolfgang is the main contributor to the Laws of Vulnerabilities blog.

Company website: www.qualys.com