IT Opportunities Surrounding Shadow IT

June 27, 2013

By Kamal Shah

Skyhigh Networks VP of Products and Marketing

 

The magnitude of Shadow IT is significant and growing. Gartner has predicted that a full 35 percent of IT spending will take place outside of IT by 2015 – just 18 months away. By the end of the decade, that figure will hit 90 percent.

 

CIOs, CISOs and members of an organization’s Security and IT teams have a difficult time getting a handle on Shadow IT and on just how many cloud services are in use by the employees in their organization. In our experience, they typically estimate somewhere between 25-30 services in use, but in reality we usually see between 300-400 services – about 11x more than IT was aware of.

 

When the IT and Security teams come to realize the sheer volume of cloud services in use, the massive size of Shadow IT, and the magnitude of cloud data security risk due to Shadow IT, it’s always a real eye-opener. The vast number of cloud services running speaks to several exploding trends – cloud computing, bring your own device (BYOD) or bring your own cloud (BYOC), and the consumerization of IT.

 

Specifically, the rapid shift from on-premise business applications to cloud-based SaaS applications has enabled any employee with a credit card and an Internet connection to become an IT manager and deploy their own Shadow IT applications without notifying IT.

 

These three forcing trends are not going away. In fact, they are expanding broadly, fueled by the growing consensus that use of cloud services results in higher productivity. A recent survey of IT decision makers found that 72 percent suspected that Shadow IT was beneficial and made it easier for employees to do their jobs. However, Shadow IT also creates clear cloud data security and cloud compliance risks. It is unclear how safe data is within these cloud services, and there is no guarantee of what security measures the providers put in place. The breach of Evernote is a good example, and it was eye-opening for the industry. These service providers are focused on the instant delivery of cloud applications, not security. If a giant company such as LinkedIn is at risk, how susceptible are the small SaaS providers employees are using without their IT department’s knowledge or safeguards?

 

The good news is that most IT teams want to constructively address the Shadow IT phenomenon and believe that there is a happy medium that balances cloud services agility and cloud security. IT wants to help their business counterparts accelerate the safe adoption of cloud services while protecting corporate data. There are a number of approaches for discovering and studying Shadow IT, such as using a cloud-based solution that analyzes firewall logs in a non-intrusive and real-time manner. The most popular approaches take it a step further and identify the risks of cloud services, as not all SaaS applications employees are using are unsafe.
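
As an illustration of the log-analysis approach mentioned above, here is a minimal sketch in Python. It assumes outbound firewall or proxy logs in a simple space-delimited format with the destination host in the third field, and it matches domains against a small, hypothetical catalog of cloud services – a real discovery solution would use a far larger catalog and proper log parsing.

    # Minimal sketch: tally cloud services seen in an outbound firewall/proxy log.
    # The log format and the domain-to-service catalog are hypothetical placeholders.
    from collections import Counter

    KNOWN_SERVICES = {
        "dropbox.com": "Dropbox",
        "evernote.com": "Evernote",
        "salesforce.com": "Salesforce",
    }

    def discover_services(log_path):
        hits = Counter()
        with open(log_path) as log:
            for line in log:
                fields = line.split()
                if len(fields) < 3:
                    continue
                domain = fields[2].lower()                    # assume 3rd field is destination host
                root = ".".join(domain.rsplit(".", 2)[-2:])   # crude registrable-domain guess
                if root in KNOWN_SERVICES:
                    hits[KNOWN_SERVICES[root]] += 1
        return hits

    if __name__ == "__main__":
        for service, count in discover_services("firewall.log").most_common():
            print(f"{service}: {count} requests")

Running something like this against a few days of logs is usually enough to surface services IT did not know were in use; the harder work is the risk rating that follows.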

 

Take the time to learn about these approaches, and find the one that works best for your organization. Like most cloud services, these solutions should be usable in a matter of minutes, immediately helping IT organizations shine a light on Shadow IT for safer and more productive cloud services usage.

 

 

Why the Cloud Cannot be treated as a One-size-fits-all when it comes to Security

June 24, 2013

Despite the fact that cloud providers have long since differentiated themselves with very distinct offerings based on cloud platform type, I often see the cloud written about as though it is a single, uniform service. The problem with that is that, while there are commonalities, it is downright misleading, especially as so much is misunderstood about what’s required to secure cloud-based services and the risks that are involved. Today there are three classes of service: Software as a Service (SaaS), where the provider hosts software-based services and the consumer accesses them via a web interface; Platform as a Service (PaaS), which developers mostly use to develop software-based offerings; and Infrastructure as a Service (IaaS), where consumers can “rent” infrastructure to host their own services.

When I speak with customers, I recommend they consider cloud offerings in light of the classes of service they need, the types of data they will need to expose, their regulatory compliance needs, and the reputation and flexibility of the service providers they are looking to leverage – because even within the classes of service I mentioned above there are distinct variances.

Choosing a cloud provider based on class of service

Over the last five years in particular, the industry has benefited from broad-based adoption of SaaS, particularly for customer relationship management, payroll and document collaboration, to name a few. But cloud providers in this space range from those with established, robust and well-documented data handling and hygiene practices to emerging players. The same goes for PaaS and IaaS. Over the last couple of years some IaaS providers have developed tailored offerings to suit particular verticals such as government, retail and healthcare. Today, the industry is still very much lacking standard definitions and templates for SLAs. And with each class of service there are different security requirements too, ranging from SaaS, where the consumer has no ability to push security controls down to the provider’s environment, to IaaS, where the consumer is typically responsible for securing the virtual machines that they “rent” from a provider. This is where leveraging the freely available resources from the Cloud Security Alliance Security, Trust and Assurance Registry (STAR), an initiative that encourages transparency of security practices within cloud providers, is incredibly valuable.

Data Security According to Data Type

Data, too, is not created equal. Consumers of different cloud services need to consider the data they entrust to a SaaS provider, both in terms of its sensitivity level and any exposure that may result from a potential data breach. This concern may be a little different with IaaS, where a consumer potentially has the opportunity to add more safeguards such as encryption, file monitoring and other security controls at the virtual machine level that may help mitigate some of the risks. I have seen some excellent security implementations around the vertical stack models that some IaaS providers have developed for government, retail, healthcare and, increasingly, other verticals. However, there are issues such as data residency, data handling and monitoring at the network and overall host level that still need to be considered and carefully thought out.

Regulatory Compliance Needs

Some years back, the security industry focused on the idea of audit and compliance fatigue – the idea that many enterprises can be dealing with in excess of fifty mandates, depending on whom they do business with and their geographic span, and a large amount of often manual audit data collection. Since then there has been some automation of IT audit practices, but it still remains a time-consuming practice for most organizations. There are over 4,000 mandates today, which the Unified Compliance Framework has done an amazing job of tracking and cross-mapping for many years, and as always there are more government and data privacy mandates in the works. The Cloud Security Alliance Cloud Controls Matrix also cross-walks several standards but further categorizes controls according to platform, recognizing that different models require different controls. It is ideal for those looking to learn how to evolve their controls to map to different models and who want to avoid the audit fatigue syndrome through the concept of audit once, report many times.
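
To make the “audit once, report many times” idea concrete, here is an illustrative sketch of the kind of cross-mapping described above. The control IDs and framework references below are invented for the example and are not taken from the Cloud Controls Matrix or the Unified Compliance Framework.

    # Illustrative only: one assessed control feeds line items in every mapped framework.
    # Control IDs and framework references are made-up examples.
    CONTROL_MAP = {
        "ENC-01 Data-at-rest encryption": ["Framework A, control 10", "Framework B, req 3.4"],
        "IAM-02 Unique user IDs":         ["Framework A, control 9",  "Framework B, req 8.1"],
    }

    ASSESSED = {
        "ENC-01 Data-at-rest encryption": "pass",
        "IAM-02 Unique user IDs": "fail",
    }

    # Assess each control once, then report it under every framework it maps to.
    for control, frameworks in CONTROL_MAP.items():
        for ref in frameworks:
            print(f"{ref}: {control} -> {ASSESSED[control]}")

The point is the data structure, not the specific mappings: a single evidence-collection exercise can satisfy many overlapping mandates once the cross-walk exists.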

Over the next few weeks I will drill down into each of the above areas. In the meantime, if you have any questions or wish to discuss any of the above further, please contact me at [email protected]

Evelyn de Souza Bio
Evelyn is a senior data center and cloud security strategist for the Security Technology Group at Cisco, responsible for championing holistic and next-generation security solutions. She is a strong proponent of building automated, repeatable processes that enable organizations to sustain compliance while optimizing security posture and reducing costs. To this end, in her previous role she pioneered the development of such tools as the McAfee Compliance Mapping Matrix, which cross-maps various regulations, standards, and frameworks to solutions, and the McAfee PCI Mapping Tool. She currently co-chairs the Cloud Security Alliance Cloud Controls Matrix (CCM) and is focused on harmonizing efforts across industry initiatives such as the Open Data Center Alliance (ODCA). Evelyn is a dedicated security professional with more than 12 years in the IT security industry. She enjoys engaging with industry analysts, customers, and partners to discuss industry trends and how security solutions can best be implemented to meet the needs of next-generation datacenters. She holds a Bachelor of Arts degree with honors in music from Monash University, Melbourne, Australia. She can also be found on Twitter at: e_desouza

CSA Releases the Expanded Top Ten Big Data Security & Privacy Challenges

June 17, 2013

Big Data remains one of the most talked about technology trends in 2013. But lost among all the excitement about the potential of Big Data are the very real security and privacy challenges that threaten to slow this momentum.

Security and privacy issues are magnified by the three V’s of big data: Velocity, Volume, and Variety. These factors include variables such as large-scale cloud infrastructures, diversity of data sources and formats, streaming nature of data acquisition and the increasingly high volume of inter-cloud migrations. Consequently, traditional security mechanisms, which are tailored to securing small-scale static (as opposed to streaming) data, often fall short.

The CSA’s Big Data Working Group followed a three-step process to arrive at top security and privacy challenges presented by Big Data:

  1. Interviewed CSA members and surveyed security-practitioner oriented trade journals to draft an initial list of high priority security and privacy problems
  2. Studied published solutions.
  3. Characterized a problem as a challenge if the proposed solution does not cover the problem scenarios.

Following this exercise, the Working Group researchers compiled their list of the Top 10 challenges, which are as follows:

  1. Secure computations in distributed programming frameworks
  2. Security best practices for non-relational data stores
  3. Secure data storage and transactions logs
  4. End-point input validation/filtering
  5. Real-Time Security Monitoring
  6. Scalable and composable privacy-preserving data mining and analytics
  7. Cryptographically enforced data centric security
  8. Granular access control
  9. Granular audits
  10. Data Provenance

The Expanded Top 10 Big Data challenges has evolved from the initial list of challenges presented at CSA Congress to an expanded version that addresses three new distinct issues:

  1. Modeling: formalizing a threat model that covers most of the cyber-attack or data-leakage scenarios
  2. Analysis: finding tractable solutions based on the threat model
  3. Implementation: implementing the solution in existing infrastructures

The full report explores each one of these challenges in depth, including an overview of the various use cases for each challenge.

The challenges themselves can be organized into four distinct aspects of the Big Data ecosystem as follows:

[Figure: the Top 10 challenges organized into four aspects of the Big Data ecosystem]

The objective of highlighting these challenges is to bring renewed focus on fortifying big data infrastructures. The Expanded Top 10 Big Data Security Challenges report can be downloaded in its entirety here.

 

 

 

Leveraging Intel from Hackers to Mitigate Risks

June 14, 2013

Authored by Robert Hansen

“Know your enemy and know yourself and you can fight a hundred battles without disaster.” – Sun Tzu

A few weeks ago, I interviewed “Adam,” a self-described ‘blackhat’ hacker, about why he started hacking, what motivates him and others in the underground community, and why he has decided to change his ways. What was revealed in this interview (which was published in full in a three-part series on the WhiteHat Security blog) hopefully sheds light on how other blackhats like “Adam” think and how they communicate. From this, we in the security industry can devise better solutions, abandon failed technologies, and fix the most glaring issues. A great deal can be unearthed by examining Adam’s words and those of other attackers like him.

For example, Adam shared insights into the web vulnerabilities most used by the attacker community, among them XSS and SQL injection, and his belief that SQL injections are the vulnerabilities that should be fixed first because they are most heavily used. Adam also shared the characteristics that he thinks make up a “good” web application vulnerability: it is fast to exploit, persistent, gives root/full access, and allows the ability to deface/redirect sites or wipe IP logs completely. When it comes to lists like the recently announced OWASP Top 10 for 2013, Adam downplays their importance as a “best practice” because they are never up to date or comprehensive – e.g., clickjacking and DoS/DDoS are not on the OWASP list yet are extremely useful to attackers – and serve only as a good measure for prioritization.
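
Since SQL injection tops Adam’s fix-first list, a short refresher on the standard mitigation may be useful. The sketch below (Python’s built-in sqlite3 module, with a made-up table) contrasts a query built by string concatenation with a parameterized query; it is illustrative only, not a description of any tool mentioned in the interview.

    # Minimal sketch: string-built SQL versus a parameterized query.
    # The table and the attack string are hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"

    # Vulnerable: attacker-controlled input is concatenated into the statement.
    unsafe = f"SELECT role FROM users WHERE name = '{user_input}'"
    print(conn.execute(unsafe).fetchall())              # returns rows it should not

    # Safer: the driver binds the value, so it is treated as data, not SQL.
    safe = "SELECT role FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall()) # returns nothing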

While some IT security professionals shy away from listening to anything from the dark side, much can be learned from knowing your adversary and what makes them tick. From this conversation with Adam alone we are able to better ascertain how to first prioritize testing and finding vulnerabilities and then prioritize mitigating and fixing them.

To take this conversation one step further, I will be co-hosting a webinar on June 20 that delves further into some of the lessons we can learn from our adversaries in the ‘blackhat’ community and how we can better leverage this intel for tracking attacks and deploying the right protection strategies.

About Robert Hansen

Robert Hansen (CISSP) is the Director of Product Management at WhiteHat Security. He’s the former Chief Executive of SecTheory and Falling Rock Networks, which focused on building a hardened OS. Mr. Hansen began his career in banner click fraud detection at ValueClick. Mr. Hansen has worked for Cable & Wireless doing managed security services, and eBay as a Sr. Global Product Manager of Trust and Safety. Mr. Hansen contributes to and sits on the board of several startup companies. Mr. Hansen co-authored “XSS Exploits,” published by Syngress, and wrote the eBook “Detecting Malice.” Robert is a member of WASC, APWG, IACSP and ISSA, and has contributed to several OWASP projects, including originating the XSS Cheat Sheet. He is also a mentor at TechStars. His passion is breaking web technologies to make them better.

Cloud Trust Study: Security, Privacy and Reliability in the cloud get high marks with U.S. small to mid-sized businesses

June 11, 2013

Comscore and Microsoft recently commissioned a study to get a pulse on what small to mid-sized businesses (SMB) think about the cloud in terms of security, privacy and reliability.

The results tell us that there’s a gap between the perceptions of those not using the cloud and the real experiences of those using one or more cloud services.

For detailed results from four geographies (France, Germany, the U.K. and the U.S.), check out Adrienne Hall’s post here.

A Hybrid Approach for Migrating IAM to the Cloud

June 10, 2013

Merritt Maxim

Director-Product Marketing

CA Technologies

 

We continue to hear about how cloud, mobility and the consumerization of IT have the potential to transform business. However, the ongoing hype around these trends may lead some to believe that they require an “all or none” approach. This can create conflicts, as organizations may have significant investments in on-premise IT and cannot simply pull the plug on these environments and immediately go to the cloud. As a result, they are seeking ways to utilize cloud-based applications and infrastructure while maintaining certain applications on-premise. The resulting architecture is referred to as a hybrid environment because it features both on-premise and cloud-based resources.

 

Hybrid approaches can provide organizations with flexibility to slowly move to cloud based services while still maintaining select on-premise resources.   For organizations in this situation, one of their major challenges is providing users with the flexibility to seamlessly move around the environment while still maintaining appropriate security levels—or more specifically, ensuring consistent control and security policy between on-premise applications and cloud services.

 

Within a strictly on-premise model, IT focuses on building physical infrastructures—servers, virtualization layers, operating systems, and middleware applications—and delivering security throughout the whole stack.  With a hybrid model, however, IT must change its perspective and style, treating any and all IT components (cloud-based or otherwise) as services that are available for the business to consume. In doing so, IT security needs to ensure consistent protection between and among the organizations and all the instances of applications where sensitive data exists (i.e., the broader and fragmented data center).

 

At first blush, it might seem that the role of IT security is significantly diminished by this process. The reality, however, is that securely enabling the access to and interaction of cloud services provides much more value to the business. In doing so, IT is enabling an organization to move more quickly. Furthermore, IT is facilitating the adoption of the consumer-oriented IT capabilities that employees are demanding. In other words, utilizing more cloud-based services puts the IT security function front and center in the day to day of a company’s planning activities.

 

Once organizations simultaneously leverage applications via a variety of IT models, such as on-premise applications and SaaS-based services, the traditional notion of a network perimeter simply no longer exists. And as a result, our ideas about how we manage security and identity have to change.

 

How does one ensure appropriate security levels within this hybrid environment?

 

To avoid building separate identity silos solely for cloud-based services (the result of unique accounts within each of those providers and applications), enterprises should look for a centralized IAM service that can manage all users’ access and authentication before they go to any applications – on-premise or in the cloud.

 

By taking the approach that identity is the new perimeter, we can funnel all access to enterprise resources through a central identity service. In this way we create a single front door to every SaaS, mobile and on-premise application. This service can enforce whatever level of authentication you desire for each application. With standards such as SAML and OAuth being quickly adopted by SaaS providers and mobile application developers, you have the ability to enforce that all enterprise users enter through your central identity service…your new identity perimeter.

 

For employees, authentication could be against a corporate directory. For partners, it could entail using identity federation via standards such as SAML that enable the users of an organization to easily and securely access the data and applications of other organizations as well as cloud services via cloud single sign-on, thus preventing the need to maintain another list of user accounts.  This approach ensures that all the identity-related functions, such as authentication—and ultimately authorization—are consistently managed by the enterprise.

 

For customers who may already have an existing digital social identity (such as Facebook or Google) and would like to be able to leverage that identity, standards such as OpenID and OAuth would allow those users to access cloud resources using those credentials and not require additional user registration steps. For special employees or high-value transactions, a higher level of authentication might be required before allowing the user access to a particular service. There might be very sensitive data that goes into a SaaS-based HR application, for example. If the necessary level of required authentication is not native to that particular SaaS environment, the enterprise could require an additional “step-up authentication”—via a centralized identity service—before granting access.
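
As a rough illustration of the step-up idea described above, here is a minimal policy sketch. The application names, sensitivity tiers and factor labels are all hypothetical; a real central identity service would drive this from its policy store and from the assurance levels asserted during the SAML or OAuth exchange.

    # Minimal sketch: a central identity service deciding whether step-up
    # authentication is needed for an application. All names are hypothetical.
    APP_SENSITIVITY = {
        "travel-booking": "low",
        "file-sharing":   "medium",
        "hr-saas":        "high",      # e.g., a SaaS HR app holding sensitive data
    }

    REQUIRED_FACTORS = {
        "low":    {"password"},
        "medium": {"password", "sso_session"},
        "high":   {"password", "sso_session", "otp"},   # step-up: one-time passcode
    }

    def access_decision(app, presented_factors):
        needed = REQUIRED_FACTORS[APP_SENSITIVITY[app]]
        missing = needed - set(presented_factors)
        return "allow" if not missing else f"step-up required: {sorted(missing)}"

    print(access_decision("hr-saas", ["password", "sso_session"]))
    # -> step-up required: ['otp']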

 

As hybrid environments become the norm, the need for solutions that can interoperate in on-premise and cloud environments will be paramount.  Adopting a hybrid based approach can enable organizations of all types and sizes to realize efficiency gains while still protecting their critical digital resources, regardless of whether those resources are on-premise or in the cloud.

 

This can result in:

  • Reduced security risk for all systems, applications, and information
  • Reduced administrative expenses and improved efficiency
  • Improved IT agility through flexible deployment options across on-premise and cloud environments
  • Ability to move to the cloud on a comfortable schedule

 

Organizations may find this hybrid approach a practical alternative deployment model to going 100% into the cloud, without sacrificing agility, usability or flexibility.

 

Merritt Maxim has 15 years of product management and product marketing experience in the information security industry, including stints at RSA Security, Netegrity and CA Technologies. In his current role at CA Technologies, Merritt handles product marketing for CA’s identity management and cloud security initiatives. The co-author of “Wireless Security,” Merritt blogs on a variety of IT security topics and can be followed at www.twitter.com/merrittmaxim. Merritt received his BA cum laude from Colgate University and his MBA from the MIT Sloan School of Management.

Don’t let a disaster leave your data out in the cold

June 10, 2013

By Andrew Wild, CSO at Qualys

When we see images from natural disasters like Hurricane Sandy – flooded neighborhoods, downed power lines and destroyed homes – the first concern, of course, is for the safety of the people. But as a chief security officer, I also think about how disasters affect companies and the vital assets of their business – the data.

Natural disasters are unpredictable. They happen out of the blue and leave no time to prepare. So now – while things are calm – would be a good time to make sure your data isn’t left to the mercy of the forces of nature. Being prepared means creating information management policies and procedures so that sensitive information remains protected regardless of what happens. This process includes four steps: identifying data that needs to be kept confidential, classifying its sensitivity, deciding how it can best be protected, and determining how data left on discarded computer systems can be kept away from prying eyes.

1)     Identification

All data management programs should start with identifying important information resources, which should be tracked throughout their lifecycle. The organization needs to identify not just all the information it has, but how sensitive it is and where it is processed and stored. Sensitive data can find its way into many different types of systems beyond servers and desktops, including printers, copiers, scanners, laptops, cash registers, payment terminals, thumb drives, external hard drives and mobile devices.

2)    Classification

Before an organization can classify the sensitivity of information, it must set policies around data ownership – who is responsible for what data? Employees often believe that the IT department owns all of the organization’s data and is solely responsible for securing it. However, the business unit that creates or uses the data is usually the best candidate for taking on the classification and ownership responsibilities for the data, including naming an owner and a custodian of the information. When making these decisions it is important to consider the impact to the organization if the data were to be lost or inappropriately disclosed. Typically, data is classified into four levels: Public, Internal Use Only, Confidential and Restricted. The classifications should support business requirements and ensure the appropriate level of safeguarding for every type of sensitive information.

3)    Handling

Next up is deciding how the different classifications of data should be stored and handled. Typically, the handling processes are defined by the classification level of the resource. The higher the sensitivity, the more stringent the handling procedures should be. For example, organizations will require the most sensitive information to be encrypted, and may prohibit the use of devices like USB flash drives for highly sensitive data because they can be contaminated with malware and easily lost or stolen.
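
A minimal sketch of how such handling rules might be encoded follows; the four level names come from the classification step above, but the specific controls attached to each level are illustrative assumptions rather than a prescriptive policy.

    # Illustrative sketch: handling rules keyed by classification level.
    # The controls per level are examples only.
    HANDLING_POLICY = {
        "Public":            {"encrypt_at_rest": False, "removable_media": True},
        "Internal Use Only": {"encrypt_at_rest": False, "removable_media": True},
        "Confidential":      {"encrypt_at_rest": True,  "removable_media": False},
        "Restricted":        {"encrypt_at_rest": True,  "removable_media": False,
                              "access": "named individuals only"},
    }

    def check_transfer(classification, to_usb_drive):
        rules = HANDLING_POLICY[classification]
        if to_usb_drive and not rules["removable_media"]:
            return "blocked: removable media not permitted for this classification"
        return "allowed"

    print(check_transfer("Confidential", to_usb_drive=True))   # blocked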

4)    Destruction

People who spend a lot of energy protecting sensitive information often neglect to take precautions once they are done with the data or the systems on which it is stored. Exposing confidential data by failing to properly sanitize or destroy media like hard drives can be considered a breach subject to state data breach laws. It can put consumers at risk of identity theft and corporations at risk of espionage. As such, it is imperative that information management policies include procedures for proper destruction and disposal of data storage systems. Paper, magnetic tape, optical discs and hard disk drives can all be shredded, making it very difficult to recover the information. Organizations that don’t want to take any chances with highly sensitive information can write over data several times or use a degaussing technique on magnetic media to make sure that the original data is not recoverable. There are third parties that offer a range of services for wiping data entirely from systems. Interestingly, computers may be destroyed in natural disasters, but that doesn’t mean the data on the disk drives can’t be recovered and thus leaked to the outside world if the systems are not handled properly.
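
For the “write over data several times” option, here is a minimal file-level sketch. It assumes a conventional magnetic disk and a single file; SSDs, copy-on-write filesystems and backups can retain copies regardless, which is why degaussing, shredding or a specialist wiping service is usually the safer choice for highly sensitive media.

    # Minimal sketch: overwrite a file's contents with random data several
    # times before deleting it. Illustrative only; not reliable on SSDs or
    # copy-on-write filesystems, and it does not touch backups or snapshots.
    import os

    def overwrite_and_delete(path, passes=3):
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)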

I sincerely hope that the victims of Hurricane Sandy have recovered and are rebuilding. For the rest of us, this can serve as a reminder of the need to be prepared with information management policies in the event of a disaster.

 

Andrew has more than 20 years of experience leading teams to design, implement and operate secure networks and computer systems. As Qualys’ Chief Security Officer, Andrew oversees the security, risk management and compliance of its enterprise and SaaS environments. Prior to joining Qualys, he managed a team of information security engineers responsible for the design, implementation and operation of security solutions for EMC’s SaaS offerings, with heavy emphasis on cloud and virtualization technologies. Prior to EMC, he was the Chief Security Officer at Transaction Network Services. He has also held a variety of network engineering leadership roles with large network service providers including BT and Sprint. Andrew has a master’s degree in electrical engineering from George Washington University and a bachelor’s degree in electrical engineering from the United States Military Academy. He is a veteran of the United States Army and served in Operations Desert Shield and Desert Storm.

 

New York State launches investigation of top insurance companies’ cybersecurity practices. Who’s next?

June 5, 2013

The following blog excerpt on “New York State launches investigation of top insurance companies’ cybersecurity practices. Who’s next?” was written by the external legal counsel of the CSA, Ms. Francoise Gilbert of the IT Law Group. We repost it here with her permission. It can be viewed in its original form at: http://www.francoisegilbert.com/2013/06/new-york-state-launches-investigation-of-cybersecurity-practices-of-top-insurance-companies-whos-next/

The State of New York has launched an inquiry into the steps taken by the largest insurance companies to keep their customers and companies safe from cyber threats. This is the second inquiry of this kind. Earlier this year, a similar investigation targeted the cyber security practices of New York based financial institutions.

On May 28, 2013, the New York Department of Financial Services (DFS) issued letters pursuant to Section 308 of the New York Insurance Law (“308 Letters”) to 31 of the country’s largest insurance companies, requesting information on the policies and procedures they have in place to protect health, personal and financial records in their custody against cyber attacks.

Read the full article. >>

How the “Internet of Things” Will Feed Cloud Computing’s Next Evolution

June 5, 2013

David Canellos, PerspecSys president and CEO

 

 

While the Internet of things is not a new concept (Kevin Ashton first coined the term in 1999 to describe how the Internet is connected to the physical world), it is just now becoming a reality due to some major shifts in technology.

 

According to ABI Research, more than 5B wireless connectivity chips will ship this year – and most of those chips will find their way into tablets, sensors, cameras and even light bulbs or refrigerators that will increasingly become connected to the Internet. Currently, there are about two Internet-connected devices for every person on the planet, but by 2025, analysts are forecasting that this ratio will surpass six. This means we can expect to grow to nearly 50 billion Internet-connected devices in the next decade.
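
The rough arithmetic behind that projection, as a quick sketch (the 2025 population figure is an assumption used only for the estimate):

    # Quick check: roughly 8 billion people (assumed) times a ratio of just
    # over six devices per person lands near the 50 billion figure cited.
    population_2025 = 8.0e9
    devices_per_person = 6
    print(f"{population_2025 * devices_per_person / 1e9:.0f} billion devices")  # ~48 billion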

 

Driven by a revolution in technology, for the first time we have the ability to create a central nervous system on our planet. Over the next decade, most of the connected device growth will come from very small sensors that are primarily doing machine-to-machine communications and acting as the digital nerve endings for highly dynamic global sense-and-respond systems. This sensor technology will allow us to measure systems on a global scale and at the same time offer a never before seen array of intelligent services.

 

“Whether it is Smart Cities, e-Health and Assisted Living, Intelligent Manufacturing, Smart Logistics and Transport, or Smart Metering, 21st century machines are now sensing, anticipating, and responding to our needs; and we can control them remotely. We cannot have a policy or create the impression that the Internet of things would create an Orwellian world. Our goal, and our commitment, should be to create a vision that focuses on providing real value for people.” – Neelie Kroes, vice president of the European Commission responsible for the Digital Agenda

 

This promise is what generates excitement about these interconnected sensor data networks. If successful, they will help us solve some of the biggest problems facing our society, making “The Internet of Things” not just a reality, but a force for major change.

 

The Role of Cloud Computing

 

While the Internet of things is exciting on its own, it is my belief that the real innovation will come from combining it with cloud computing. As all these interactions between connected devices occur, large volumes of data will be generated. This data will be easily captured and stored, but it needs to be transformed into valuable knowledge and actionable intelligence – this is where the real power of the cloud kicks in. Systems in the cloud will be used to (a) transform data to insight and (b) drive productive, cost-effective actions from these insights. Through this process, the cloud effectively serves as the brain to improve decision-making and optimization for Internet-connected interactions.

 

Cloud computing can provide the virtual infrastructure for utility computing, integrating applications, monitoring devices, storage devices, analytics tools, visualization platforms, and client delivery. The utility-based model that cloud computing offers will enable businesses and users to access applications on demand, anytime and anywhere.

 

Data Protection Challenges

 

With the intersection of the Internet of things and cloud computing, protecting personal privacy becomes an essential and necessary condition. How to ensure information security and privacy is an important issue that must be addressed and resolved in the development of the Internet of things. People will resist the ubiquitous free flow of information if there is no public confidence that it will not cause serious threats to privacy.

 

The intelligence and integrated nature of the Internet of things raises serious concerns over individual privacy in the new environment of smart devices and objects. Universal connectivity through Internet access exacerbates the problem because, unless special mechanisms are considered (encryption, authentication, etc.), personally identifiable information (PII) may become uncontrollably exposed. 
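
One of the special mechanisms mentioned above, encrypting PII on the device before it ever reaches a cloud service, can be sketched very simply. The example uses the third-party Python cryptography package’s Fernet recipe; the payload fields are hypothetical, and key provisioning and rotation – the hard part in practice – are deliberately left out.

    # Minimal sketch: encrypt a sensor payload containing PII before upload.
    # Requires the third-party "cryptography" package; key management is out of scope.
    import json
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # in practice, provisioned securely to the device
    cipher = Fernet(key)

    payload = json.dumps({"device_id": "sensor-42",
                          "patient_name": "Jane Doe",     # the PII we want to protect
                          "heart_rate": 72}).encode()

    token = cipher.encrypt(payload)            # this ciphertext is what the cloud stores
    print(cipher.decrypt(token).decode())      # only a key holder can recover the PII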

 

Data Protection Solutions

 

In order to remove barriers to the Internet of things and the cloud, the technology industry (and enterprises deploying and using these technologies) needs to embrace the basic principles of protecting personal privacy, including the management, storage and processing of all sensitive information.

 

Legislation will continue to evolve in an attempt to deal with these issues, and sector-specific industry bodies will produce regulations that provide guidelines and best practices to security and privacy officers. And security technologies will surely continue to advance to ensure that these regulations can be complied with in the most effective and efficient ways possible.

 

In the middle of it all will be IT and security professionals, and their technology partners, who will have the challenge of managing not only the threats of data leakage and identity theft, but also growing consumer and employee concerns about data privacy.

 

Perhaps Marc Vael, international vice president of ISACA, said it best: “The protection of private data, often referred to as personally identifiable information (PII), is the responsibility of both organizations and individuals. Organizations need to ensure that PII is managed and protected throughout its life cycle by having a governance strategy and good processes in place. Individuals must think before they provide their PII to a third party … and be aware of the value of the information they are providing and assess if they can trust whom they are giving it to. Data protection involves improving people’s awareness, using best-of-breed technology and deploying sound business processes.”

 

If the industry – and its customers and beneficiaries – can embrace these ideas, we’ll be able to realize the full potential of the cloud-enhanced, Internet of things world of which we’re on the cusp.

 


 

David Canellos is president and CEO of PerspecSys. Previously, David was SVP of sales and marketing at Irdeto Worldwide, a division of Naspers. Prior to that, David was the president and COO of Cloakware, which was acquired by Irdeto. Before joining Cloakware, David was the general manager and vice president of sales for Cramer Systems (now Amdocs), a U.K.-based company, where he was responsible for the company’s revenue and operations in the Americas. Prior to his work with Cramer, David held a variety of executive, sales management and business development positions with the Oracle Corporation, Versatility and SAIC.

Rethink cloud security to get ahead of the risk curve

June 5, 2013

By Kurt Johnson, Courion Corporation

 


Ever since the cloud rose to the top of every IT discussion, the issue of cloud security has been right alongside it. Let’s face it, enterprise security has never been easy, and the rapidly expanding use of software in the cloud has added layers of complexity – and risk – to the job. More valuable intellectual property, personally identifiable information, medical records and customer data now sit in the cloud. That risk should not prevent this shift, but it does need to be managed.

 

With more data spread across multiple environments, accessed not only by employees but also by contractors, partners and customers, and accessed via more devices such as tablets and mobile phones, identity and access become an increasing concern. Who has access? Do they need this access? What are they doing with that access? All of these questions are critical for an effective security strategy. The cloud doesn’t change Identity and Access Management needs. We still need to ensure that the right people are getting the right level of access to cloud resources, and that they are doing the right things with that access. However, many cloud applications are purchased by the business units without IT’s knowledge, and identity and access administration becomes more ad hoc. Security is losing control, but not losing responsibility.

 

The IAM Gap

 

The cloud only puts a fine point on overall access risk as a growing concern. We’re confronting an expanding identity and access management gap (“IAM Gap”) that’s threatening the integrity of many organizations today.

 

Many organizations use provisioning systems to automate the setup, modification and disablement of accounts according to policy. Access certification provides a periodic, point-in-time look at who has access. Managers must attest that subordinates have the right access according to their responsibilities. But, what happens in between? New applications, new accounts, new policies and other changes are a daily event. The ad hoc nature of the cloud means new users and access could be happening without any visibility to IT. Identity and access should not be a once-a-year checkpoint.

 

The gap between provisioning and certification represents trillions of ever-changing relationships among identities, access rights and resources. It’s a danger zone that exposes the soft underbelly of your organization’s security. One wouldn’t expect to do a virus scan or intrusion detection analysis once every six months, so why should your organization stall on monitoring identities and access?

 

So, what should your organization do? Take a hard look at IAM programs and expand that to include the cloud. Update IAM guidelines and controls. Go beyond mere provisioning and certification to include intelligence and analytics. Define the policies of who should have what type of access, define appropriate use and get the line of businesses involved in the process.

 

Then, make sure cloud as well as on-premise applications are included. There should not be stove-piped strategies – one for cloud, one for on-premise. It should be an enterprise IAM strategy that incorporates both.

 

To incorporate the cloud in this strategy, start with an inventory of your cloud applications. Once the cloud applications have been identified, they should be categorized by risk, much like any enterprise application, and the appropriate identity and access controls defined for the appropriate risk levels. Low-risk applications, like TripIt, should have acceptable use agreements and password policies. Too many end users use the same passwords for personal applications as they do for enterprise applications – what happens when password breaches occur, such as those that happened with Evernote or LinkedIn? Medium-risk applications, such as Box or ShareFile, should add automated provisioning and de-provisioning, access certification reviews, access policy reviews and exception monitoring. For high-risk applications, such as Salesforce.com, higher-level controls should be added, including user activity monitoring, privileged account monitoring, multi-factor authentication and identity and access intelligence, so as to provide more real-time analysis and monitoring of access risk.
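
As a rough illustration of the tiering just described, here is a minimal sketch that encodes each risk level’s required controls and reports the gaps for a given application. The control names follow the paragraph above, but the checklist itself is an assumption for illustration, not a complete IAM program.

    # Illustrative sketch: required IAM controls per application risk tier.
    TIER_CONTROLS = {
        "low":    ["acceptable-use agreement", "password policy"],
        "medium": ["acceptable-use agreement", "password policy",
                   "automated provisioning/de-provisioning", "access certification",
                   "access policy review", "exception monitoring"],
        "high":   ["acceptable-use agreement", "password policy",
                   "automated provisioning/de-provisioning", "access certification",
                   "access policy review", "exception monitoring",
                   "user activity monitoring", "privileged account monitoring",
                   "multi-factor authentication", "identity and access intelligence"],
    }

    def gap_report(app, tier, controls_in_place):
        missing = [c for c in TIER_CONTROLS[tier] if c not in controls_in_place]
        return f"{app} ({tier} risk): missing {missing}" if missing else f"{app}: compliant"

    print(gap_report("Salesforce.com", "high",
                     ["password policy", "access certification", "multi-factor authentication"]))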

 

The strategy needs to address the gap not just on day one and through periodic point-in-time reviews, but with intelligence that provides a measure of real-time monitoring and which tracks user activity.

 

As the openness imperative and cloud movement raise the access risk management stakes, organizations need to:

 

  • Identify where risk is and understand it
  • Drive security controls to settle the risk
  • Dynamically strengthen security controls based on risk status
  • Spotlight risk in real-time

 

The solution is harnessing the Big Data in the trillions of access relationships – on the ground or in the cloud – to better understand what is really going on. Security staff are essentially looking for a needle in the haystack of data. Unfortunately, they don’t know what the needle looks like, so they have to look at all the hay and find something that looks different. What they really need to see are meaningful patterns. This is where predictive analytics come in – the same technology that an online retailer might use to better target product offers to you based on your recent buying behavior, for example.

 

Closing the IAM Gap with Real-Time Risk Aware Identity & Access Intelligence

 

You need to apply predictive analytics specifically to the big data around identity, rights, policies, activities and resources to reveal anomalous patterns of activity. From this, you gain access intelligence, and you can compare the patterns representing good behavior with anomalies. Consider a person with legitimate rights to a resource accessing a cloud-based CRM system and downloading the entire customer database from his home office at 2 a.m. on a Saturday night. This event might bear looking into, but you’d never even know it occurred with traditional controls because the person had legitimate access to the system. By identifying patterns or anomalies from “normal” – and displaying them in graphical heat maps – you have a view you haven’t seen before.
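
To make the example concrete, here is a toy rule-based sketch that would flag exactly that kind of event. The field names, thresholds and the example record are all assumptions for illustration; the predictive analytics described in this post would instead learn each user’s baseline statistically rather than rely on fixed rules.

    # Toy sketch: flag a legitimate user performing a bulk export off-hours
    # from an unusual location. Field names and thresholds are assumptions.
    from datetime import datetime

    def risk_flags(event):
        flags = []
        hour = datetime.fromisoformat(event["timestamp"]).hour
        if hour < 6 or hour > 22:
            flags.append("off-hours access")
        if event["records_exported"] > 10_000:
            flags.append("bulk export")
        if event["location"] not in event["usual_locations"]:
            flags.append("unusual location")
        return flags

    event = {"timestamp": "2013-06-08T02:05:00", "user": "jsmith",
             "records_exported": 250_000, "location": "home office",
             "usual_locations": ["corporate office"]}
    print(risk_flags(event))   # ['off-hours access', 'bulk export', 'unusual location']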

 

This kind of analysis closes the IAM Gap and provides a risk-driven approach to IAM. You understand and manage risk in real time, not every three to 12 months. You automate information security and identify patterns not discernible to the naked eye. With anomalies and patterns revealed, you prioritize your next security steps, strengthen controls in times of highest risk and continuously update threat definitions.

 

Here’s the key point: In this new approach, you assess risk from live data, not scenarios you’ve anticipated and coded into the system. Many security tools alert you to actions you’ve already defined as “bad.” But how do you see things you didn’t know were bad before? You need analytics to uncover patterns, serve them up to you and let you weigh whether they warrant further investigation. Real-time, predictive analytics put you ahead of the risk curve, harnessing existing company data to sound alarms before a loss – when the risk around an individual or resource spikes. In other words, you don’t know what you don’t know.

 

This kind of operational intelligence identifies, quantifies and settles access risks in time to avoid audit issues and real damage to your business. It’s interactive, real-time, scalable and self-learning. You have actionable, risk-prioritized insight.

 

Whether the applications you monitor are partly or solely in the cloud does not matter; you’re securing all your enterprise systems and resources wherever they reside. You are making sure risks are reduced before they become bona fide breaches. Bottom line, we need a new “perimeter”: one that truly understands who someone is, what they should access, what they are doing with that access and what patterns of behavior might represent threats to the organization. This way, you’re taking advantage of all the benefits of the cloud while opening your business to employees, customers and partners – all while getting ahead of risk.

 

Kurt Johnson is vice president of strategy and corporate development at Courion Corporation (www.courion.com).

 

# # #

 

 
