March 31, 2017
By Rich Campagna, Senior Vice President/Products & Marketing, Bitglass
In the past 6-9 months, I’ve noticed a trend amongst Bitglass customers where more and more of them are opting to use the identity capabilities built into our Cloud Access Security Broker (CASB) in lieu of a dedicated Identity as a Service (IDaaS) product. As CASB identity functionality has evolved, there is less need for a separate, standalone product in this space and we are seeing the beginnings of CASBs eating the IDaaS market.
A few years back, Bitglass’ initial identity capabilities consisted solely of our SAML proxy, which ensures that even if a user goes direct to a cloud application from an unmanaged device and on a public network, they are transparently redirected into Bitglass’ proxies – without agents!
From there, customer demand led us to build Active Directory synchronization for group and user management, authentication directly against AD, and native multifactor authentication. Next came SCIM support and the ability to provide SSO not only for sanctioned/protected cloud applications, but for any application.
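The SCIM support mentioned above follows a published standard (SCIM 2.0, RFCs 7643/7644) for provisioning users across systems. As a rough illustration, a minimal SCIM create-user payload looks like the following; the user details are placeholders, not Bitglass's actual API:

```python
import json

# Minimal SCIM 2.0 "create user" payload (RFC 7643 core User schema).
# The user details below are illustrative placeholders, not a real account.
payload = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "jdoe@example.com",
    "name": {"givenName": "Jane", "familyName": "Doe"},
    "emails": [{"value": "jdoe@example.com", "primary": True}],
    "active": True,
}

# A provisioning client would POST this body to the target
# application's /Users endpoint with an authorization token.
body = json.dumps(payload)
print(body)
```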
So what’s left? If you look at Gartner’s Magic Quadrant for Identity and Access Management as a Service, Worldwide*, Gregg Kreizman and Neil Wynne break IDaaS capabilities into three categories:
- Identity Governance and Administration – “At a minimum, the vendor’s service is able to automate synchronization (adds, changes and deletions) of identities held by the service or obtained from customers’ identity repositories to target applications and other repositories. The vendor also must provide a way for customers’ administrators to manage identities directly through an IDaaS administrative interface, and allow users to reset their passwords. In addition, vendors may offer deeper functionality, such as supporting identity life cycle processes, automated provisioning of accounts among heterogeneous systems, access requests (including self-service), and governance over user access to critical systems via workflows for policy enforcement, as well as for access certification processes. Additional capabilities may include role management and access certification.”
- Access – “Access includes user authentication, single sign-on (SSO) and authorization enforcement. At a minimum, the vendor provides authentication and SSO to target applications using web proxies and federation standards. Vendors also may offer ways to vault and replay passwords to get to SSO when federation standards are not supported by the applications. Most vendors offer additional authentication methods — their own or through integration with third-party authentication products.”
- Identity log monitoring and reporting – “The vendor logs IGA and access events, makes the log data available to customers for their own analysis, and provides customers with a reporting capability to answer the questions, ‘Who has been granted access to which target systems and when?’ and ‘Who has accessed those target systems and when?’”
Check, check, and check! Not only do leading CASBs offer these capabilities as part of their cloud data protection suites, in some cases they go quite a bit further. Take logging and reporting, for example. An IDaaS product sees login and logout events, but nothing that happens during the session. CASBs can log and report on every single transaction – login, logout and everything in between.
Another example is multifactor authentication. Whereas an IDaaS can trigger MFA at the beginning of a session due to a suspicious context, a CASB can trigger MFA at any time – such as mid-session if a user starts to exhibit risk behaviors.
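One way to picture mid-session step-up MFA is a running risk score that accumulates as session events arrive and triggers a challenge once a threshold is crossed. The event names, weights, and threshold below are illustrative assumptions, not any vendor's actual scoring model:

```python
# Sketch of risk-based, mid-session step-up MFA. Event names, weights and the
# threshold are illustrative assumptions, not a real product's model.
RISK_WEIGHTS = {"login_new_device": 30, "odd_hours": 20, "bulk_download": 40}
MFA_THRESHOLD = 50

def update_risk(score: int, event: str) -> int:
    """Accumulate risk as session events arrive; unknown events add nothing."""
    return score + RISK_WEIGHTS.get(event, 0)

def needs_step_up(score: int) -> bool:
    return score >= MFA_THRESHOLD

score = 0
triggered_at = None
for event in ["login_new_device", "odd_hours", "bulk_download"]:
    score = update_risk(score, event)
    if triggered_at is None and needs_step_up(score):
        triggered_at = event  # challenge the user mid-session, not just at login

print(triggered_at)
```

The point is that the challenge fires on the event that crosses the threshold, not only at login time.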
Since these capabilities have evolved as part of CASBs, which offer comprehensive data protection capabilities for cloud applications, I expect that 2017 will be a year with a lot more enterprises considering CASB platforms for both cloud identity and cloud data protection.
*Magic Quadrant for Identity and Access Management as a Service, Worldwide, Gregg Kreizman and Neil Wynne, 06 June 2016
March 23, 2017
By Nic Scott, Managing Director/UK, Code 42
What’s the latest on Brexit? When the UK government triggers Article 50, it will signal the start of the official two-year countdown until the UK leaves the European Union. According to UK Prime Minister Theresa May, this is still on track to happen at some point in March.
While there are still many unknowns in regards to geopolitical policies and legislation that will be created, annulled, or abolished post-Brexit, the UK government has given away one handy hint when it comes to the now-infamous General Data Protection Regulation (GDPR).
Post-Brexit, the UK will mirror the data protection regulations that exist in Europe.
This means that from May 2018, while the UK is still an EU member, the GDPR will apply to UK businesses. And even after the UK exits the EU in 2019, an identical version of the GDPR will still be enforced.
Needless to say, this isn’t good news for UK organizations that have been burying their heads in the sand, hoping that this pesky EU legislation will just go away post-Brexit. Unfortunately, these rules aren’t going anywhere. It’s time for companies to wake up to the consequences of data negligence under the GDPR. This isn’t just infosecurity providers scaremongering for sales, and it’s not a ‘potential’ occurrence like the Y2K bug; this is actually happening.
Get your ducks in a row, or get fined
Should a sensitive data breach occur under the GDPR, the European Data Protection Board (or likely the Information Commissioner’s Office, post-Brexit) will evaluate whether the affected company has been negligent in its data protection operations and the level of compensation the company must pay affected parties—a fine that can reach €20m or up to four percent of its global annual turnover, whichever is greater. Not a pretty thought for the C-suite, which by nature is tasked with mitigating risk.
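Because the upper-tier fine is the greater of €20m or four percent of global annual turnover, the percentage figure dominates for large enterprises. A quick sketch of the calculation:

```python
# GDPR upper-tier penalty cap: the greater of a fixed €20m or
# 4% of global annual turnover.
def gdpr_max_fine(annual_turnover_eur: float) -> float:
    return max(20_000_000, 0.04 * annual_turnover_eur)

# For a firm turning over €1bn, the 4% figure dominates: a €40m ceiling.
print(gdpr_max_fine(1_000_000_000))
# For a €100m firm, the fixed €20m floor applies instead.
print(gdpr_max_fine(100_000_000))
```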
Concerningly, according to Code42’s 2016 Datastrophe Study, in which over 400 UK IT decision makers (ITDMs) were surveyed, 50 percent of them acknowledged that the security measures they have in place currently will not be enough to meet GDPR standards.
How to become compliant
The first step is for an organization to know what kind of data falls under GDPR protection, where it is stored, and how long it should be kept. Beyond that, it needs to determine the best way to secure that data, to what extent it should be backed up, and how to prevent leaks from inside the company. Simple, right?
The implementation of the right endpoint security stack is vital—one that takes into consideration first-line defense, such as intrusion detection systems and antivirus solutions, right down to last line defense, to easily remediate and recover should a breach occur. The right solution is an important advantage given the number of people and devices accessing potentially sensitive corporate information.
Also, enterprises should create internal policies that promote accessibility and flexibility with approved solutions, without locking the enterprise down to the point of stifling productivity. Employees play a big role regarding the sanctity of corporate information. That is why it is vital to train and educate your staff about possible intrusions, how they can secure data themselves, and how to avoid being tricked into leaking sensitive information.
Taking these precautions will allow an organization to gain control of its own information and ensure that the CIO’s overall focus is on increasing profit and expanding technological reach, rather than worrying about the safety of the zeroes and ones.
March 22, 2017
By Jane Melia, Vice President of Strategic Business Development, QuintessenceLabs and Co-chair, CSA Quantum-safe Security Working Group
Few organizations have tighter security than the average casino. After all, the house always wins, and it wants to keep those winnings. A recent Wired article, however, explains how a team of Russian hackers managed to beat a lot of casinos worldwide. They did so by exploiting inherent flaws in the pseudo-random number generators (PRNGs) that randomize every spin of a slot machine. Even if you don’t care about wealthy casino bosses losing money, you still need to be concerned about the drawbacks of PRNGs, because slots aren’t the only things that are vulnerable. Most of the world’s encryption is also based on pseudo-random numbers.
What’s in a Name?
Before going into detail about how the heists were carried out, let’s talk about PRNGs and why pseudo should be a no-no for slot machines and, more importantly, cybersecurity. As the prefix “pseudo” indicates, the numbers generated are not truly random. PRNGs are programs that start with a base number known as a seed. The seed gets tumbled together with other inputs, such as another algorithm and a random-ish physical component like the timing of the strokes on a user’s keyboard. Both humans and computers are really bad at randomness, so if someone is able to measure the pattern of your keystrokes and/or break one of the algorithms used, they can reverse engineer the other inputs and predict the next numbers in the “random” sequence. Find the pattern, break the code, and the jackpot (or encrypted data) is yours.
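The predictability problem is easy to demonstrate: two generators started from the same seed emit identical “random” streams, so anyone who recovers the seed (or internal state) can predict every future output:

```python
import random

# Two PRNGs started from the same seed emit identical "random" sequences.
# An attacker who recovers the seed or internal state can do exactly this.
victim = random.Random(20170322)
attacker = random.Random(20170322)  # same seed, fully predictable

victim_spins = [victim.randint(0, 9) for _ in range(10)]
predicted = [attacker.randint(0, 9) for _ in range(10)]
print(victim_spins == predicted)  # the attacker predicts every spin
```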
One- and Two-Armed Bandits
In the case of the Russian casino swindlers, they were given a head start by Vladimir Putin, who outlawed gambling in 2009. This meant a lot of slot machines were available on the cheap. Take apart a few machines, figure out how the PRNGs work and you’re nearly there. Since the inputs for slot machine PRNGs change based on the time of day, the hackers in this case had to do more work on-site at the casinos. The leg man would set himself up in front of a machine and video a dozen or more spins using his smartphone. The video would be streamed live to his compatriots in St. Petersburg, who would analyze it and use what they knew about the machine’s innards to predict its pattern. Then they would send a list of timing markers that caused the phone to vibrate a split-second before a winning combination came up, signaling the casino guy to hit the spin button. It didn’t work every time, but it was a whole lot more effective than chance – somewhere around $250K per week more effective.
To make things worse, not only did the engineered cheat allow a shadowy St. Petersburg group to snatch millions of dollars, the problem they exploited is a fundamental part of PRNGs, so casinos are still vulnerable to this kind of fraud. That brings us back to cybersecurity. As the casino example shows, it takes a lot of work to figure out the patterns produced by a PRNG. Most hackers don’t have two dozen guys with a supercomputer in St. Petersburg to help. Soon, however, they will all have something better – at least if the goal is to defeat PRNGs and break encryption.
The Future is Yesterday
Any data that needs to be kept secret and safe over time is already at risk of being breached. Quantum computers exponentially more powerful than those we use today are already being developed. Current predictions are that quantum computers will be fully realized in the next five to ten years, but it could be even less than that. No PRNG will be able to stand up to the brute force of quantum computers. All too soon, only a true random number generator (RNG) will do.
The only way to generate true random numbers is by using the natural world (i.e. something not made by humans). Quantum encryption, for instance, uses the fully entropic (or completely random) nature of the quantum world to generate true random numbers that are the basis for the strongest possible encryption keys. Quantum key generation is designed to take on the coming quantum computing storm and keep medical records, tax returns, classified government documents, corporate secrets (and anything else that needs to stay under wraps after 2020) safe. Bet on it.
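For developers today, the practical distinction is between a deterministic PRNG and key material drawn from an operating system entropy pool. The sketch below uses Python's standard library to show the difference; note that OS entropy pools are cryptographic generators typically seeded from unpredictable hardware events, not quantum RNGs, but the principle of drawing on physical entropy rather than a pure algorithm is the same:

```python
import random
import secrets

# random.Random is a deterministic algorithm (Mersenne Twister): reproducible,
# so never suitable for keys. secrets draws from the OS entropy pool, which is
# typically seeded from unpredictable hardware events.
reproducible = random.Random(1234).getrandbits(128)   # same value every run
key_material = secrets.token_bytes(32)                # 256 bits of key material

print(len(key_material))
```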
March 15, 2017
By Katie Lewin, Federal Director, Cloud Security Alliance
CSA Summit at RSA was a day-long session on Securing the Converged Cloud organized around presentations and panels from leading vendors such as Centrify, Veracode, Microsoft, and Netskope, as well as a talk on “Effective Cybersecurity” by Ret. Gen. Keith Alexander and a fireside chat with Robert Herjavec of “Shark Tank” fame. (Session recordings from the CSA Summit are now available.)
Several themes emerged over the course of the day of presentations, panels and fireside chats:
- The cloud is still the most secure environment for data, and acceptance of the cloud as a secure environment for data storage is at a tipping point among IT users. In one survey cited, half of the respondents said that the cloud was more secure than on-premises.
- Identity continues to be important – the message from many of the speakers was that there are too many passwords and too many special privileges.
- Emphasis should be placed on data protection rather than device protection. Security is moving to Modern Data Controls – from device and identity security to data protection and controls. Rights management and data classification are the key indicators in data control.
- Security must move to a process that authenticates first and then connects as opposed to the current emphasis on connect and then authenticate.
Presentation slides will be available on the CSA web site.
Many speakers asserted that today’s security is not secure. Evidence of this includes breaches at Yahoo and the US Office of Personnel Management, and attacks on the 2016 Presidential election. Network perimeters are fading with cloud use, mobile devices, IoT devices and the mobile workforce. Therefore, security in the age of access must move beyond passwords.
Too many passwords and privileged users require a paradigm shift to identity management.
There is evidence that focusing on identity reduces the number of breaches. Businesses must take steps to implement identity management, including:
- Establish Identity assurance across the IT environment;
- Consolidate identities through single sign-on and then layer on multi-factor authentication;
- Limit lateral movement by moving to automated provisioning – identify who is still on staff and what they can access;
- Move to approval workflow for access requests; and
- Audit privilege access.
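The provisioning step above (identifying who is still on staff and what they can access) amounts to reconciling application accounts against the current roster. A minimal sketch, with illustrative names:

```python
# Sketch: reconcile application accounts against the current staff roster to
# find orphaned access. Names are illustrative only.
staff_roster = {"alice", "bob"}
app_accounts = {"alice", "bob", "carol"}  # carol left last quarter

orphaned = app_accounts - staff_roster  # accounts to disable in deprovisioning
print(sorted(orphaned))
```

In practice the roster comes from an HR system or directory and the account list from each application's admin API, but the reconciliation logic is this simple set difference.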
Speakers emphasized that transition to the cloud can be revolutionary rather than evolutionary, and there were several real-life examples of a revolutionary transition. One large company wanted to eliminate its Intranet and rely solely on the Internet. The benefits of this approach included single sign-on, reduced complexity, establishment of standards, improved security and cost efficiency. In addition, the company did not have to secure and maintain network devices on its premises. To effect this transition, the company determined that it needed to concentrate on securing its data assets rather than its appliances. The approach it took was to establish a strict policy-based access structure combined with micro-segmentation, and it was successful. The Internet gives users access comparable to private network transactions and eliminates the choke point of routing every transaction through a single data center. The company was also able to optimize data center traffic using a hybrid cloud approach.
One of the highlights of the day was a speech from Ret. Gen. Keith Alexander on “Strategy of Effective Cybersecurity.” He began by outlining some of the current trends in cyber world:
- Technology is rapidly changing, and the data available is increasing exponentially; but this information becomes outdated on a 2-3 year horizon.
- Advanced technology is playing a more important role in our lives – for example IBM’s Watson is now working on formulating chemo for brain cancer patients.
- Moving to the cloud is good – resulting in better security and cost savings especially for small and mid-size businesses.
However, there are threats that must be addressed in this environment. Cyber skills are now part of a nation’s power in the world. There are many examples of this, including cyber attacks from nation states aimed at other states. These attacks are evolving from disruptive to destructive.
What is the path forward to meet these threats? Entities must share metadata on attacks and intrusion attempts to have the information needed to formulate defensive strategies. There should also be Software-as-a-Service defensive tools in the cloud available for entities to share. These tools and strategies can be developed and implemented while also protecting civil liberties and privacy.
Product Announcement from AWS – Regulatory Product Mapping Tool
This tool maps security control frameworks to reveal overlap and gaps between various security methodologies. Currently, the product includes FedRAMP controls and the AWS set of controls. Other control sets will be added. This product could be useful in determining how long and how much it could cost a system to obtain an Authority to Operate from a Federal agency. Click for more information on this tool.
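Conceptually, this kind of mapping treats each framework as a set of control identifiers and computes the overlap and gaps between them. A minimal sketch with illustrative NIST-style control IDs (the actual mapping data behind the AWS tool is not reproduced here):

```python
# Sketch: model each control framework as a set of control identifiers and
# compute overlap and gaps. The IDs below are illustrative NIST-style labels;
# the actual data behind the AWS tool is not reproduced here.
fedramp_controls = {"AC-2", "AC-6", "AU-2", "SC-13"}
other_framework = {"AC-2", "AU-2", "IR-4"}

overlap = fedramp_controls & other_framework  # satisfied by both frameworks
gaps = fedramp_controls - other_framework     # extra work needed for FedRAMP
print(sorted(overlap), sorted(gaps))
```

The size of the gap set is one rough input to estimating how long an Authority to Operate might take.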
March 13, 2017
By Frank Guanco, Quantum-Safe Security Working Group, Cloud Security Alliance
Recently there has been an increase in the perceived threat of the quantum computer to modern cryptographic standards in widespread use. During the last year, security agencies such as the United States Government National Security Agency (NSA) and the United Kingdom’s Communications Electronics Security Group (CESG) have called for a move to a set of quantum-safe cryptographic standards. The consensus is that today’s cyber security solutions need to be retooled sooner rather than later, and the transition to quantum-safe security must begin now. The arrival date for a practical quantum computer is still up for debate; however, most experts believe we will see a quantum computer capable of breaking current public key cryptosystems within five to 15 years.
Recently, the Quantum-Safe Security Working Group from the Cloud Security Alliance (CSA) released its ‘Applied Quantum-Safe Security’ paper, designed to provide individuals in the security industry and related fields with applicable knowledge regarding the quantum computer and its influence on cyber security. The white paper discusses how cryptographic tools must be adapted to fit specific types of data and serves as a call-to-arms for the available protection options for when the quantum computer arrives.
Digital and physical security
Computer security has primarily focused on digital security methods; however, the physical security of data is also critical. Algorithms provide authentication and encryption for online communications, and the security of a cryptographic scheme rests on mathematics and resilience against large computing power to ensure digital security. Consider this physical security example – security breaches impacting governments and large organizations are often linked to insiders, who are capable of physical access not afforded to the outside world. Such breaches occur despite the fact that digital avenues may have been closed and intensive security protocols employed. Cryptographic keys are not only abstract random strings, but also real physical objects that should be stored in secured physical appliances. To be more quantum-safe, new tools must include all physical and mathematical security systems, each with its own practical application domain.
Impact of Cloud Computing
The ongoing move toward the cloud for all our IT needs greatly increases the reliance on data networks. Data is stored in huge data centers, and transferred between them at ever-increasing rates. The cloud model—with its associated storage and network requirements—enables a stronger and more reliable IT infrastructure. This heavily networked model also opens some serious new post-quantum threat vectors, the most serious being a “data-vaulting” or harvesting attack, in which an attacker stores communications between the client and the cloud so that the data can be decrypted in the future, when general purpose quantum computers are available. What we need to keep top of mind is that data being captured and stored by an attacker today may already be destined for compromise by future quantum computers.
Data “at rest” in enormous cloud data centers is also at risk since quantum computers will effectively reduce the keys protecting that data to half of their original strength. Additionally, post-quantum attack vectors will compromise the key management systems that generate, distribute and protect the keys needed to secure that data. Any connections and links between these large data centers must have the highest levels of protection possible. The need for quantum-safe cybersecurity is greatly compounded in a cloud-based IT environment.
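The “half of their original strength” figure comes from Grover's algorithm, which can search an n-bit keyspace in roughly 2^(n/2) steps. A quick illustration of the effective strength reduction for symmetric keys:

```python
# Grover's algorithm searches an n-bit keyspace in about 2**(n/2) steps, so a
# symmetric key offers roughly half its classical strength to a quantum attacker.
def effective_bits_post_quantum(key_bits: int) -> int:
    return key_bits // 2

print(effective_bits_post_quantum(128))  # AES-128 behaves like a 64-bit key
print(effective_bits_post_quantum(256))  # AES-256 retains 128-bit strength
```

This is why guidance on quantum-safe symmetric cryptography generally points toward 256-bit keys.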
As we move towards a world of quantum computers, organizations need to take the knowledge outlined in the ‘Applied Quantum-Safe Security’ paper and assess their own quantum-safe needs. Not every organization will require the same security measures and it takes time to change an infrastructure. The best way to prepare is to follow what is going on with the development of the quantum computer and its security solutions.
Since the cloud relies heavily on secure communications, quantum safety is a critical issue for the CSA. Enterprises will only use cloud services if they believe that their data is safe, both in the cloud provider’s servers and in transit. Quantum-safe security is a true requirement for further expansion of the cloud. The CSA encourages industry leaders to start thinking and talking about quantum safety. Quantum-related technology is evolving quickly, both on the attack side and the defense side. Organizations should think about adopting some low-risk solutions now to improve their infrastructure.
Cyber security technology never has been, and never will be, ‘one size fits all’. There is no one universal solution that provides perfect security against all possible threats. What we have learned, however, is that we must prepare ourselves for emerging technology, especially when we know it’s coming. The key to quantum computer protection is the use of adaptable cryptographic tools, tailored to fit specific types of data and specific applications. To download a copy of the full white paper, please visit here.
March 9, 2017
By Frank Khan Sullivan, Vice President/Marketing, Strategic Blue
There is a need to communicate a project’s maturity to a non-technical audience. The Market & Technology Readiness Level Framework [PDF] aims to provide decision makers with a holistic view of a project’s maturity in a simple way – with a single score. It offers a faster way to assess, measure and support technology projects. The MTRL Assessment form is at the bottom of this article for those interested.
This framework has been developed by Frank Khan Sullivan, Michel Drescher from Oxford University e-Research Centre and Frank Bennett at Cloud Industry Forum, and was originally used to support several European Research & Innovation projects in cloud software and security to develop a go-to-market strategy at CloudForward 2016.
We will be accepting the next intake round of projects and businesses at Cloud Expo 2017 in London on Thursday, March 16 at 14:00 in the Cloud Innovation Theatre. The session is free to attend and will be hosted at the ExCeL Centre near Docklands. We will also be joining the CloudWatch2 project consortium on March 15 to speak on the European Digital Single Market and why trust is vital to the future cloud market.
By adopting the MTRL framework, R&I projects can:
- Access Direct Support Workshops* before or during project reviews
- Quickly assess the maturity of a group of projects in a cluster/portfolio
- Communicate clearly the current and desired future state of a project
- Reduce the risk of project failure by intervening before crisis points
- Understand roadblocks and dependencies between TRL and MRL
Understanding How To Communicate R&D Projects
The decision to exploit outputs of applied research projects often rests on a decision maker’s understanding of how value will be created. The project leader must articulate a project’s current state of maturity, and demonstrate how it will progress through development stages. However, what the project leader wishes to communicate and what the decision maker understands does not always match.
Creating a Common Framework for All Stakeholders
Without a common framework to understand how mature a technology is, or its level of traction with its target users or constituents, funding and operational decisions take longer. The MTRL framework provides a common language for project leaders and funding decision makers to articulate their progress between stages.
Technology Readiness Levels are a widely accepted measure of the maturity of a technology; however, they obscure an important dimension – is the technology or project output ready to be brought to market, and if not, what can be practically done to accelerate its entry and subsequent uptake within a group of constituent users?
For example, if a project has developed a small scale prototype but has yet to validate the needs of its intended users, much effort and funding may be expended in the pursuit of features for a large scale prototype that will never be used.
Combining Technology Readiness and Market Readiness
By understanding both the current state of a project’s technology readiness and its market readiness, it becomes possible to offer more targeted support, such as refinement of a value proposition or closer pairing with an industry partner. This in turn increases the likelihood of a project’s outputs persisting outside the lab, reduces dependence on increasingly scarce grants, and makes more efficient use of existing resources.
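As a purely illustrative sketch (the actual MTRL scoring method is defined in the framework document linked above), pairing a Technology Readiness Level with a Market Readiness Level makes the gap just described explicit:

```python
# Illustrative only: the real MTRL scoring method is defined in the framework
# document. Pairing TRL and MRL (each on the usual 1-9 scale) exposes the gap
# between technical maturity and market validation described in the text.
def mtrl_profile(trl: int, mrl: int) -> dict:
    assert 1 <= trl <= 9 and 1 <= mrl <= 9
    return {"trl": trl, "mrl": mrl, "gap": trl - mrl}

# A working prototype (TRL 6) whose intended users are still unvalidated (MRL 2):
profile = mtrl_profile(trl=6, mrl=2)
print(profile)  # a large positive gap flags the wasted-effort risk above
```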
First Success Story: CloudTeams.eu launches in Europe
CloudTeams.eu joined us at the CloudForward conference in Madrid back in October 2016 to conduct a Market & Technology Readiness workshop. Less than 6 months later, they have now successfully launched and made a major leap forward in executing their go-to-market strategy. CloudTeams is an innovative online platform connecting developers and users to speed up the collection of feedback from a target group of users to reduce time-to-market, reduce costly development errors and validate feature sets. We would like to take this opportunity to congratulate them on a really great project!
Conclusion: The MTRL Framework is ready for rollout!
In summary, the MTRL framework helps decision makers understand what resources may be required to progress through specific stages of development in the project lifecycle. This becomes particularly relevant in reducing the time it takes to assess groups of technology projects in clusters and making support accessible before reviews.
For project leaders: Request an MTRL Assessment and Direct Support Workshop
For funding bodies: Learn about implementing the MTRL Assessment Framework
Special thanks to Michel Drescher, Frank Bennett and Prof. David Wallom for their inputs in developing the methodology and thinking behind the framework. Feel free to connect with me directly to discuss how MTRL Assessments can be used to help your fund, project or go-to-market strategy/business models.
March 8, 2017
By Jeremy Zoss, Managing Editor, Code42
It’s 2017, which means there’s a good chance your company is preparing to migrate to Windows 10. The operating system may have launched back in 2015, but this is the year that Gartner predicts enterprise adoption of the operating system will truly take off, hitting its peak in 2020.
What caused the delay in adoption? Based on a Spiceworks survey, concerns included stability, application compatibility, and security. Perhaps the largest factor, however, was large corporations opting to combine their move to Windows 10 with a device migration. Typically, these purchases occur every two to four years, so many companies were simply waiting for the next hardware purchase cycle to switch to the new operating system.
Whether combined with new machines or upgrading existing hardware, there are many factors to consider during device migrations, and the costs may surprise you. Fortunately, Gartner has also prepared an extremely detailed report on the costs and challenges of moving to Windows 10. Read the report today to discover:
- The typical costs to migrate a PC to Windows 10.
- The key factors of migration cost.
- How to determine your budget for migration costs using Gartner’s model.
- How to improve migration before it starts.