Cloud Migration Strategies and Their Impact on Security and Governance

By Peter HJ van Eijk, Head Coach and Cloud Architect, ClubCloudComputing.com


Public cloud migrations come in different shapes and sizes, but I see three major approaches. Each of these has very different technical and governance implications.

Three approaches to cloud migration

Companies dying to get rid of their data centers often get started on a ‘lift and shift’ approach, where applications are moved from existing servers to equivalent servers in the cloud. The cloud service model consumed here is mainly IaaS (infrastructure as a service). Not much is outsourced to cloud providers here. Contrast that with SaaS.

The other side of the spectrum is adopting SaaS solutions. More often than not, these trickle in from the business side, not from IT. They can range from small meeting planners to full-blown sales support systems.

More recently, developers have started to embrace cloud native architectures. Ultimately, both the target environment and the development environment can be cloud based. The cloud service model consumed here is typically PaaS.

I am not here to advocate the benefits of one over the others; I think there can be a business case for each of them.

The categories also have some overlap. Lift and shift can require some refactoring of code to make it a better fit for cloud native deployments. And hardly any SaaS application is standalone, so some (cloud native) integration with other software is often required.

Profound differences

The big point I want to make here is that there are profound differences in the issues that each of these categories faces, and the hard decisions that have to be made. Most of these decisions are about governance and risk management.

With lift and shift, the application functionality is pretty clear, but moving it out to the cloud introduces data risks and technical risks. Data controls may be insufficient, and the application’s architecture may not be a good match for the cloud, leading to poor performance and high cost.

One group of SaaS applications stems from ‘shadow IT’. The people who adopt them typically pay little attention to existing risk management policies. These applications can also add useless complexity to the application landscape. The governance challenges for these are obvious: consolidate them and make them more compliant with company policies.

Another group of SaaS applications is the reincarnation of the ‘enterprise software package’. Think ERP, CRM or HR applications. These are typically run as a corporate project, with all its change management issues, except that you don’t have to run it yourself.

The positive side of SaaS solutions, in general, is that they are likely to be cloud native, which could greatly reduce their risk profile. Of course, this has to be validated, and a minimum risk control is to have a good exit strategy.

Finally, cloud native development is the most exciting, rewarding and risky approach. This is because it explores and creates new possibilities that can truly transform an organization.

One of the most obvious balances to strike here is between speed of innovation and independence of platform providers. The more you are willing to commit yourself to an innovative platform, the faster you may be able to move. The two big examples I see of that are big data and internet of things. The major cloud providers have very interesting offerings there, but moving a fully developed application from one provider to another is going to be a really painful proposition. And of course, the next important thing is for developers to truly understand the risks and benefits of cloud native development.

Again, there are big governance and risk management issues to address.

Peter van Eijk is one of the world’s most experienced cloud trainers. He has worked for 30+ years in research, with IT service providers and in IT consulting (University of Twente, AT&T Bell Labs, EDS, EUNet, Deloitte). In more than 100 training sessions he has helped organizations align on security and speed up their cloud adoption. He is an authorized CSA CCSK and (ISC)2 CCSP trainer, and has written or contributed to several cloud training courses. 

Continuous Monitoring in the Cloud

By Michael Pitcher, Vice President, Technical Cyber Services, Coalfire Federal


I recently spoke at the Cloud Security Alliance’s Federal Summit on the topic “Continuous Monitoring / Continuous Diagnostics and Mitigation (CDM) Concepts in the Cloud.” As government has moved and will continue to move to the cloud, it is becoming increasingly important to ensure continuous monitoring goals are met in this environment. Specifically, cloud assets can be highly dynamic and lack persistence, so traditional methods for continuous monitoring that work for on-premises solutions don’t always translate to the cloud.

Coalfire has been involved with implementing CDM for various agencies and is the largest Third Party Assessment Organization (3PAO), having done more FedRAMP authorizations than anyone, uniquely positioning us to help customers think through this challenge. However, these concepts and challenges are not unique to the government agencies that are a part of the CDM program; they also translate to other government and DoD communities as well as commercial entities.

To review, Phase 1 of the Department of Homeland Security (DHS) CDM program focused largely on static assets and for the most part excluded the cloud. It was centered around building and knowing an inventory, which could then be enrolled in ongoing scanning, as frequently as every 72 hours. The objective is to determine if assets are authorized to be on the network, are being managed, and if they have software installed that is vulnerable and/or misconfigured. As the cloud becomes a part of the next round of CDM, it is important to understand how the approach to these objectives needs to adapt.

Cloud services enable resources to be allocated, consumed, and de-allocated on the fly to meet peak demands. Just about any system is going to have times where more resources are required than others, and the cloud allows compute, storage, and network resources to scale with this demand. As an example, within Coalfire we have a Security Parsing Tool (Sec-P) that spins up compute resources to process vulnerability assessment files that are dropped into a cloud storage bucket. The compute resources only exist for a few seconds while the file gets processed, and then they are torn down. Examples such as this, as well as serverless architectures, challenge traditional continuous monitoring approaches.
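
To make that pattern concrete, here is a minimal sketch of this kind of event-driven, short-lived compute, assuming an AWS Lambda function triggered by an S3 upload. The bucket layout, function names, and parsing step are illustrative only and are not Coalfire’s actual Sec-P implementation.

```python
# Minimal sketch of an event-driven "parse and tear down" pattern, assuming
# AWS Lambda triggered by an S3 ObjectCreated event (names are illustrative).
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """The compute exists only for the duration of this invocation."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Pull the vulnerability assessment file that was dropped into the bucket.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        findings = parse_assessment(body)  # hypothetical parsing step

        # Persist the parsed results before the function disappears.
        s3.put_object(
            Bucket=bucket,
            Key=f"parsed/{key}.json",
            Body=json.dumps(findings).encode("utf-8"),
        )

def parse_assessment(raw_bytes):
    """Placeholder: convert a raw scan export into structured findings."""
    return {"size_bytes": len(raw_bytes)}
```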

However, potential solutions are out there, including:

  • Adopting built-in services and third-party tools
  • Deploying agents
  • Leveraging Infrastructure as Code (IaC) review
  • Using sampling for validation
  • Developing a custom approach

Adopting built-in services and third-party tools

Dynamic cloud environments highlight the inadequacies of performing active and passive scanning to build inventories. Assets may simply come and go before they can be assessed by a traditional scan tool. Each of the major cloud service providers (CSPs) and many of the smaller ones provide inventory management services in addition to services that can monitor resource changes – examples include AWS Systems Manager Inventory and CloudWatch, Microsoft’s Azure Resource Manager and Activity Log, and Google’s Cloud Asset Inventory and Cloud Audit Logs. There are also quality third-party applications that can be used, some of them already FedRAMP authorized. Regardless of the service or tool used, the key is interfacing it with the integration layer of an existing CDM or continuous monitoring solution. This can occur via API calls to and from the solution, which are made possible by the current CDM program requirements.
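
As a rough illustration of that integration, the sketch below pulls managed-instance inventory from AWS Systems Manager and forwards it to a continuous monitoring integration layer. The CDM endpoint URL and payload shape are assumptions; the boto3 calls are standard AWS APIs.

```python
# Sketch: pull instance inventory from AWS Systems Manager and push it to a
# continuous-monitoring integration layer over a hypothetical REST endpoint.
import boto3
import requests

CDM_ENDPOINT = "https://cdm.example.gov/api/inventory"  # hypothetical

def collect_ssm_inventory():
    ssm = boto3.client("ssm")
    instances = []
    paginator = ssm.get_paginator("get_inventory")
    for page in paginator.paginate():
        for entity in page["Entities"]:
            content = entity["Data"]["AWS:InstanceInformation"]["Content"]
            instances.extend(content)
    return instances

def push_to_cdm(instances):
    # A real integration layer would authenticate these calls.
    resp = requests.post(CDM_ENDPOINT, json={"assets": instances}, timeout=30)
    resp.raise_for_status()

if __name__ == "__main__":
    push_to_cdm(collect_ssm_inventory())
```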

Deploying agents

For resources that are going to have some degree of persistence, agents are a great way to perform continuous monitoring. Agents can check in with a master to maintain the inventory and also perform security checks once the resource is spun up, instead of having to wait for a sweeping scan. Agents can be installed as a part of the build process or even be made part of a deployment image. Interfacing with the master node that controls the agents and comparing that to the inventory is a great way to perform cloud-based “rogue” asset detection, a requirement under CDM. This concept employed on-premises is really about finding unauthorized assets, such as a personal laptop plugged into an open network port. In the cloud it is all about finding assets that have drifted from the approved configuration and are out of compliance with the security requirements.
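
A simplified sketch of that comparison is shown below; the field names and the way the two inventories are obtained (agent console API on one side, CSP inventory service on the other) are assumptions for illustration.

```python
# Sketch of cloud "rogue asset" detection: compare the CSP's inventory with the
# agent master's list of enrolled assets. Field names are illustrative.
def find_unmanaged_assets(csp_inventory, agent_enrolled):
    """Return CSP-known assets that never checked in with the agent master."""
    enrolled_ids = {a["instance_id"] for a in agent_enrolled}
    return [asset for asset in csp_inventory
            if asset["instance_id"] not in enrolled_ids]

if __name__ == "__main__":
    csp = [{"instance_id": "i-0abc", "image": "approved-ami-1"},
           {"instance_id": "i-0def", "image": "unknown-ami"}]
    agents = [{"instance_id": "i-0abc", "last_checkin": "2018-06-01T12:00:00Z"}]
    for rogue in find_unmanaged_assets(csp, agents):
        print("Unmanaged or drifted asset:", rogue["instance_id"])
```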

For resources such as our Coalfire Sec-P tool from the previous example, where it exists as code more than 90 percent of the time, we need to think differently. An agent approach may not work as the compute resources may not exist long enough to even check in with the master, let alone perform any security checks.

Infrastructure as code review

IaC is used to deploy and configure cloud resources such as compute, storage, and networking. It is basically a set of templates that “programs” the infrastructure. It is not a new concept for the cloud, but the speed at which environments change in the cloud is bringing IaC into the security spotlight.

Now, we need to consider how we can perform assessment on the code that builds and configures the resources. There are many tools and different approaches on how to do this; application security is not anything new, it just must be re-examined when we consider it part of performing continuous monitoring on infrastructure. The good news is that IaC uses structured formats and common languages such as XML, JSON, and YAML. As a result, it is possible to use tools or even write custom scripts to perform the review. This structured format also allows for automated and ongoing monitoring of the configurations, even when the resources only exist as code and are not “living.” It is also important to consider what software is spinning up with the resources, as the packages that are leveraged must include up-to-date versions that do not have vulnerabilities. Code should undergo a security review when it changes, and thus the approved code can be continuously monitored.
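
As a simple illustration of the kind of check a custom script can perform, the sketch below flags security group rules open to the world in a CloudFormation-style JSON template. A real pipeline would normally use purpose-built IaC scanning tools; the resource and property names here are just the standard CloudFormation ones used for the example.

```python
# Minimal IaC review example: flag security group ingress rules open to
# 0.0.0.0/0 in a CloudFormation-style JSON template.
import json
import sys

def open_ingress_findings(template):
    findings = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::EC2::SecurityGroup":
            continue
        for rule in resource.get("Properties", {}).get("SecurityGroupIngress", []):
            if rule.get("CidrIp") == "0.0.0.0/0":
                findings.append(
                    f"{name}: port {rule.get('FromPort')} open to 0.0.0.0/0"
                )
    return findings

if __name__ == "__main__":
    with open(sys.argv[1]) as f:
        template = json.load(f)
    for finding in open_ingress_findings(template):
        print("FINDING:", finding)
```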

Setting asset expiry is one way to enforce CDM principles in a high-velocity DevOps environment that leverages IaC. The goal of CDM is to assess assets every 72 hours, so we can set them to expire (get torn down, and therefore require a rebuild) within that timeframe, ensuring they are living on fresh infrastructure built from approved code.
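
A minimal sketch of how that expiry check might be automated, assuming AWS EC2 and only reporting (rather than automatically terminating) anything older than 72 hours:

```python
# Sketch: find running EC2 instances older than the 72-hour CDM window so they
# can be rebuilt from approved code. The boto3 calls are real AWS APIs; whether
# to terminate automatically is a policy decision left out of this example.
from datetime import datetime, timedelta, timezone

import boto3

MAX_AGE = timedelta(hours=72)

def expired_instances():
    ec2 = boto3.client("ec2")
    cutoff = datetime.now(timezone.utc) - MAX_AGE
    expired = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                if instance["LaunchTime"] < cutoff:
                    expired.append(instance["InstanceId"])
    return expired

if __name__ == "__main__":
    for instance_id in expired_instances():
        print("Rebuild candidate (older than 72h):", instance_id)
```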

Sampling

Sampling is to be used in conjunction with the methods above. In a dynamic environment where the total number of assets is always changing, there should be a solid core of the fleet that can be scanned via traditional means of active scanning. We just need to accept that we are not going to be able to scan the complete inventory. There should also be far fewer profiles, or “gold images,” than there are total assets. The idea is that if you can get at least 25% of each profile in any given scan, there is a good chance you are going to find all the misconfigurations and vulnerabilities that exist on all the resources of the same profile, and/or identify if assets are drifting from the fleet. This is enough to identify systemic issues such as bad deployment code or resources being spun up with out-of-date software. If you are finding resources in a profile that have a large discrepancy with the others in that same profile, then that is a sign of DevOps or configuration management issues that need to be addressed. We are not giving up on the concept of having a complete inventory, just accepting the fact that there really is no such thing.
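
To illustrate the idea, here is a small sketch that selects at least 25 percent of each profile for an active scan; the asset fields and profile names are invented for the example.

```python
# Illustrative per-profile sampling: pick at least 25 percent of each
# "gold image" profile for a traditional active scan.
import math
import random

def sample_per_profile(assets, fraction=0.25):
    """assets: list of dicts with 'id' and 'profile'. Returns the scan sample."""
    by_profile = {}
    for asset in assets:
        by_profile.setdefault(asset["profile"], []).append(asset)

    sample = []
    for profile, members in by_profile.items():
        count = max(1, math.ceil(len(members) * fraction))
        sample.extend(random.sample(members, count))
    return sample
```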

Building IaC assets specifically for the purposes of performing security testing is a great option to leverage as well. These assets can have persistence and be “enrolled” into a continuous monitoring solution to report on the vulnerabilities in a similar manner to on-premises devices, via a dashboard or otherwise. The total number of vulnerabilities in the fleet is the quantity found on these sample assets, multiplied by the number of those assets that are living in the fleet. As we stated above, we can get this quantity from the CSP services or third-party tools.
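
A back-of-the-envelope version of that calculation, with hypothetical profile names and counts, might look like this:

```python
# Rough fleet-wide extrapolation: vulnerabilities found on a sampled asset are
# assumed to exist on every live asset built from the same profile. Live counts
# per profile would come from the CSP inventory service or a third-party tool.
def estimate_fleet_vulnerabilities(sample_findings, live_counts):
    """
    sample_findings: {profile: vulnerabilities found on one sampled asset}
    live_counts:     {profile: number of live assets built from that profile}
    """
    return sum(
        sample_findings.get(profile, 0) * count
        for profile, count in live_counts.items()
    )

# Example: 4 findings on the web image with 30 live web servers, plus
# 1 finding on the worker image with 12 live workers -> 132 estimated.
print(estimate_fleet_vulnerabilities({"web": 4, "worker": 1},
                                     {"web": 30, "worker": 12}))
```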

Custom approaches

There are many different CSPs out there serving the endless cloud-based possibilities, and each CSP has various services and tools available from it and for it. What I have reviewed are high-level concepts; each customer will need to dial in the specifics based on their use cases and objectives.

Cloud Security Trailing Cloud App Adoption in 2018

By Jacob Serpa, Product Marketing Manager, Bitglass

In recent years, the cloud has attracted countless organizations with its promises of increased productivity, improved collaboration, and decreased IT overhead. As more and more companies migrate, more and more cloud-based tools arise.

In its fourth cloud adoption report, Bitglass reveals the state of cloud in 2018. Unsurprisingly, organizations are adopting more cloud-based solutions than ever before. However, their use of key cloud security tools is lacking. Read on to learn more.

The Single Sign-On Problem

Single sign-on (SSO) is a basic, but critical security tool that authenticates users across cloud applications by requiring them to sign in to a single portal. Unfortunately, a mere 25 percent of organizations are using an SSO solution today. When compared to the 81 percent of companies that are using the cloud, it becomes readily apparent that there is a disparity between cloud usage and cloud security usage. This is a big problem.

The Threat of Data Leakage

While using the cloud is not inherently more risky than the traditional method of conducting business, it does lead to different threats that must be addressed in appropriate fashions. As adoption of cloud-based tools continues to grow, organizations must deploy cloud-first security solutions in order to defend against modern-day threats. While SSO is one such tool that is currently underutilized, other relevant security capabilities include shadow IT discovery, data loss prevention (DLP), contextual access control, cloud encryption, malware detection, and more. Failure to use these tools can prove fatal to any enterprise in the cloud.

Microsoft Office 365 vs. Google’s G Suite

Office 365 and G Suite are the leading cloud productivity suites. They each offer a variety of tools that can help organizations improve their operations. Since Bitglass’ 2016 report, Office 365 has been deployed more frequently than G Suite. Interestingly, this year, O365 has extended its lead considerably. While roughly 56 percent of organizations now use Microsoft’s offering, about 25 percent are using Google’s. The fact that Office 365 has achieved more than two times as many deployments as G Suite highlights Microsoft’s success in positioning its product as the solution of choice for the enterprise.

The Rise of AWS

Through infrastructure as a service (IaaS), organizations are able to avoid making massive investments in IT infrastructure. Instead, they can leverage IaaS providers like Microsoft, Amazon, and Google in order to achieve low-cost, scalable infrastructure. In this year’s cloud adoption report, every analyzed industry exhibited adoption of Amazon Web Services (AWS), the leading IaaS solution. While the technology vertical led the way at 21.5 percent adoption, 13.8 percent of all organizations were shown to use AWS.

To gain more information about the state of cloud in 2018, download Bitglass’ report, Cloud Adoption: 2018 War.

Five Cloud Migration Mistakes That Will Sink a Business

By Jon-Michael C. Brook, Principal, Guide Holdings, LLC

Today, with the growing popularity of cloud computing, there exists a wealth of resources for companies that are considering—or are in the process of—migrating their data to the cloud. From checklists to best practices, the Internet teems with advice. But what about the things you shouldn’t be doing? The best-laid plans of mice and men often go awry, and so, too, will your cloud migration unless you manage to avoid these common cloud mistakes:

“The Cloud Service Provider (CSP) will do everything.”

Cloud computing offers significant advantages: cost, scalability, on-demand service and infinite bandwidth. And the processes, procedures, and day-to-day activities a CSP delivers provide every cloud customer, regardless of size, with the capabilities of a Fortune 50 IT staff. But nothing is idiot-proof. CSPs aren’t responsible for everything; under the shared responsibility model they are only in charge of the parts they control, and they expect customers to own the rest of the risk mitigation.

Advice: Take the time upfront to read the best practices of the cloud you’re deploying to. Follow cloud design patterns and understand your responsibilities–don’t trust that your cloud service provider will take care of everything. Remember, it is a shared responsibility model.

“Cryptography is the panacea; data-in-motion, data-at-rest and data-in-transit protection works the same in the cloud.”

Cybersecurity professionals refer to the triad balance: Confidentiality, Integrity and Availability. Increasing one decreases the other two. In the cloud, availability and integrity are built into every service and even guaranteed with Service Level Agreements (SLAs). The last bullet in the confidentiality chamber is cryptography, mathematically adjusting information to make it unreadable without the appropriate key. However, cryptography works differently in the cloud. Customers expect service offerings to work together, and so the CSP provides the “80/20” security with less effort (e.g., CSP-managed keys).

Advice: Expect that while you must use encryption for the cloud, there will be a learning curve. Take the time to read through the FAQs and understand what threats each architectural option really opens you up to.

“My cloud service provider’s default authentication is good enough.”

One of cloud’s tenets is self-service. CSPs have a duty to protect not just you, but themselves and everyone else that’s virtualized on their environment. One of the early self-service aspects is authentication—the act of proving you are who you say you are. There are three ways to accomplish this proof: 1) Reply with something you know (i.e., password); 2) Provide something you have (i.e., key or token); or 3) Produce something you are (i.e., a fingerprint or retina scan). These are all commonplace activities. For example, most enterprise systems require a password with a complexity factor (upper/lower/character/number), and even banks now require customers to enter additional password codes received as text messages. These techniques are imposed to make authentication stronger and more reliable, and they are seeing wider adoption. Multi-factor authentication uses more than one of them.

Advice: Cloud Service Providers offer numerous authentication upgrades, including some sort of multi-factor authentication option—use them.

“Lift and shift is the clear path to cloud migration.”

Cloud cost advantages evaporate quickly due to poor strategic decisions or architectural choices. A lift-and-shift approach to cloud migration is one where existing virtualized images or snapshots of current in-house systems are simply transformed and uploaded onto a Cloud Service Provider’s system. If all you want is to run the exact same system you had in-house, rented on an IaaS platform, it will often cost less to buy the hardware as a capital asset and depreciate it over three years. The lift-and-shift approach ignores elastic scalability, the ability to scale up and down on demand, and doesn’t use rigorously tested cloud design patterns that result in resiliency and security. There may be systems within a design that are appropriate to copy exactly; however, placing an entire enterprise architecture directly onto a CSP would be costly and inefficient.

Advice: Invest the time up front to redesign your architecture for the cloud, and you will benefit greatly.

“Of course, we’re compliant.”

Enterprise risk and compliance departments have decades of frameworks, documentation and mitigation techniques. Cloud-specific control frameworks are less than five years old, but they are solid and becoming better understood each year.

However, adopting the cloud will need special attention, especially when it comes to non-enterprise risks such as an economic denial of service (credit card over-the-limit), third-party managed encryption keys that potentially give them access to your data (warrants/eDiscovery) or compromised root administrator account responsibilities (CSP shutting down your account and forcing physical verification for reinstatement).

Advice: These items don’t have direct analogs in the enterprise risk universe. Instead, your understanding of risk must expand, especially in highly regulated industries. Don’t invite massive fines, operational downtime or reputational losses by failing to pay attention to a widened risk environment.

Jon-Michael C. Brook, Principal at Guide Holdings, LLC, has 20 years of experience in information security with such organizations as Raytheon, Northrop Grumman, Booz Allen Hamilton, Optiv Security and Symantec. He is co-chair of CSA’s Top Threats Working Group and the Cloud Broker Working Group, and contributor to several additional working groups. Brook is a Certified Certificate of Cloud Security Knowledge+ (CCSK+) trainer and Cloud Controls Matrix (CCM) reviewer and trainer.

Cybersecurity and Privacy Certification from the Ground Up

By Daniele Catteddu, CTO, Cloud Security Alliance

The European Cybersecurity Act, proposed in 2017 by the European Commission, is the most recent of several policy documents adopted and/or proposed by governments around the world, each with the intent (among other objectives) to bring clarity to cybersecurity certifications for various products and services.

The reason why cybersecurity, and most recently privacy, certifications are so important is pretty obvious: They represent a vehicle of trust and serve the purpose of providing assurance about the level of cybersecurity a solution could provide. They represent, at least in theory, a simple mechanism through which organizations and individuals can make quick, risk-based decisions without the need to fully understand the technical specifications of the service or product they are purchasing.

What’s in a certification?

Most of us struggle to keep pace with technological innovations, and so we often find ourselves buying services and products without sufficient levels of education and awareness of the potential side effects these technologies can bring. We don’t fully understand the possible implications of adopting a new service, and sometimes we don’t even ask ourselves the most basic questions about the inherent risks of certain technologies.

In this landscape, certifications, compliance audits, trust marks and seals are mechanisms that help improve market conditions by providing a high-level representation of the level of cybersecurity a solution could offer.

Certifications are typically performed by a trusted third party (an auditor or a lab) who evaluates and assesses a solution against a set of requirements and criteria that are in turn part of a set of standards, best practices, or regulations. In the case of a positive assessment, the evaluator issues a certification or statement of compliance that is typically valid for a set length of time.

One of the problems with certifications under the current market condition is that they have a tendency to proliferate, which is to say that for the same product or service more than one certification exists. The example of cloud services is pretty illustrative of this issue. More than 20 different schemes exist to certify the level of security of cloud services, ranging from international standards to national accreditation systems to sectorial attestation of compliance.

Such a proliferation of certifications can serve to produce the exact opposite result that a certification was built for. Rather than supporting and streamlining the decision-making process, they could create confusion, and rather than increasing trust, they favor uncertainty. It should be noted, however, that such a proliferation isn’t always a bad thing. Sometimes, it’s the result of the need to accommodate important nuances of various security requirements.

Crafting the ideal certification

CSA has been a leader in cloud assurance, transparency and compliance for many years now, supporting the effort to improve the certification landscape. Our goal has been—and still is—to make the cloud and IoT technology environment more secure, transparent, trustworthy, effective and efficient by developing innovative solutions for compliance and certification.

It’s in this context that we are surveying our community and the market at-large to understand what both subject matter experts and laypersons see as the essential features and characteristics of the ideal certification scheme or meta-framework.

Our call to action?

Tell us—in a paragraph, a sentence or a word—what you think a cybersecurity and privacy certification should look like. Tell us what the scope should be (security/privacy, product/processes/people, cloud/IoT, global/regional/national), what level of assurance it should offer, which guarantees and liabilities are expected, what the tradeoff between cost and value is, and how it should be proposed and communicated so that it is understood and valuable for the community at large.

Tell us, but do it before July 2 because that’s when the survey closes.

How ChromeOS Dramatically Simplifies Enterprise Security

By Rich Campagna, Chief Marketing Officer, Bitglass

Google’s Chromebooks have enjoyed significant adoption in education, but have seen very little interest in the enterprise until recently. According to Gartner’s Peter Firstbrook in Securing Chromebooks in the Enterprise (6 March 2018), a survey of more than 700 respondents showed that nearly half of organizations will definitely or probably purchase Chromebooks by EOY 2017. And Google has started developing an impressive list of case studies, including Whirlpool, Netflix, Pinterest, the Better Business Bureau, and more.

And why wouldn’t this trend continue? As the enterprise adopts cloud en masse, more and more applications are available anywhere through a browser – obviating the need for a full OS running legacy applications. Additionally, Chromebooks can represent a large cost savings – not only in terms of a lower up-front cost of hardware, but lower ongoing maintenance and helpdesk costs as well.

With this shift comes a very different approach to security. Since Chrome OS is hardened and locked down, the need to secure the endpoint diminishes, potentially saving a lot of time and money. At the same time, the primary storage mechanism shifts from the device to the cloud, meaning that the need to secure data in cloud applications, like G Suite, with a Cloud Access Security Broker (CASB) becomes paramount. Fortunately, the CASB market has matured substantially in recent years, and is now widely viewed as “ready for primetime.”

Overall, the outlook for Chromebooks in the enterprise is positive, with a very real possibility of dramatically simplifying security. Now, instead of patching and protecting thousands of laptops, the focus shifts toward protecting data in a relatively small number of cloud applications. Quite the improvement!

What If the Cryptography Underlying the Internet Fell Apart?

By Roberta Faux, Director of Research, Envieta

Without the encryption used to secure passwords for logging in to services like Paypal, Gmail, or Facebook, a user is left vulnerable to attack. Online security is becoming fundamental to life in the 21st century. Once quantum computing is achieved, all the secret keys we use to secure our online life are in jeopardy.

The CSA Quantum-Safe Security Working Group has produced a new primer on the future of cryptography. This paper, “The State of Post-Quantum Cryptography,” is aimed at helping non-technical corporate executives understand what the impact of quantum computers on today’s security infrastructure will be.

Some topics covered include:
–What Is Post-Quantum Cryptography
–Breaking Public Key Cryptography
–Key Exchange & Digital Signatures
–Quantum Safe Alternative
–Transition Planning for Quantum-Resistant Future

Quantum Computers Are Coming
Google, Microsoft, IBM, and Intel, as well as numerous well-funded startups, are making significant progress toward quantum computers. Scientists around the world are investigating a variety of technologies to make quantum computers real. While no one is sure when (or even if) quantum computers will be created, some experts believe that within 10 years a quantum computer capable of breaking today’s cryptography could exist.

Effects on Global Public Key Infrastructure
Quantum computing strikes at the heart of the security of the global public key infrastructure (PKI). PKI establishes secure keys for bidirectional encrypted communications over an insecure network. PKI authenticates the identity of information senders and receivers, as well as protects data from manipulation. The two primary public key algorithms used in the global PKI are RSA and Elliptic Curve Cryptography. A quantum computer would easily break these algorithms.

The security of these algorithms is based on intractably hard mathematical problems in number theory. However, they are only intractable for a classical computer, where bits can have only one value (a 1 or a 0). In a quantum computer, where k qubits can represent not one but 2^k values, RSA and Elliptic Curve cryptography can be solved in polynomial time using an algorithm called Shor’s algorithm. If quantum computers can scale to work on even tens of thousands of qubits, today’s public key cryptography becomes immediately insecure.
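
As a rough sense of scale (standard complexity estimates, not figures from the CSA paper): the best known classical factoring attack on an RSA modulus N is sub-exponential, while Shor’s algorithm is polynomial in the size of N.

```latex
% Best known classical attack on factoring N (general number field sieve):
\[
  L_N = \exp\!\Big(\big(\tfrac{64}{9}\big)^{1/3}(\ln N)^{1/3}(\ln\ln N)^{2/3}\,(1+o(1))\Big)
\]
% Shor's algorithm on a quantum computer (with fast multiplication), roughly:
\[
  O\!\big((\log N)^{2}\,(\log\log N)\,(\log\log\log N)\big)\ \text{quantum operations}
\]
```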

Post-Quantum Cryptography
Fortunately, there are cryptographically hard problems that are believed to be secure even from quantum attacks. These crypto-systems are known as post-quantum or quantum-resistant cryptography. In recent years, post-quantum cryptography has received an increasing amount of attention in academic communities as well as from industry. Cryptographers have been designing new algorithms to provide quantum-safe security.

Proposed algorithms are based on a number of underlying hard problems widely believed to be resistant to attacks even with quantum computers. These fall into the following classes:

  • Multivariate cryptography
  • Hash-based cryptography
  • Code-based cryptography
  • Supersingular elliptic curve isogeny cryptography

Our new white paper explains the pros and cons of the various classes of post-quantum cryptography. Most post-quantum algorithms will require significantly larger key sizes than existing public key algorithms, which may pose unanticipated issues such as compatibility with some protocols. Bandwidth will need to increase for key establishment and signatures. These larger key sizes also mean more storage inside a device.

Cryptographic Standards
Cryptography is typically implemented according to a standard. Standards organizations around the globe are advising stakeholders to plan for the future. In 2015, the U.S. National Security Agency posted a notice urging the need to plan for the replacement of current public key cryptography with quantum-resistant cryptography. While there are quantum-safe algorithms available today, standards are still being put in place.

Standards organizations such as ETSI, IETF, ISO, and X9 are all working on recommendations. The U.S. National Institute of Standards and Technology (NIST) is currently working on a project to produce a draft standard for a suite of quantum-resistant algorithms in the 2022-2024 timeframe. This is a challenging process that has attracted worldwide debate. Various algorithms have advantages and disadvantages with respect to computation, key sizes and degree of confidence. These factors need to be evaluated against the target environment.

Cryptographic Transition Planning
One of the most important issues the paper underscores is the need to begin planning for the cryptographic transition from existing public key cryptography to post-quantum cryptography. Now is the time to vigorously investigate the wide range of post-quantum cryptographic algorithms and find the best ones for future use. It is vital that corporate leaders understand this and begin transition planning now.

The white paper, “The State of Post-Quantum Cryptography,” was released by the CSA Quantum-Safe Security Working Group. It introduces non-technical executives to the current and evolving landscape of cryptographic security.

Download the paper now.

Electrify Your Digital Transformation with the Cloud

By Tori Ballantine, Product Marketing, Hyland

Taking your organization on a digital transformation journey isn’t just a whimsical idea, something fun to daydream about, or an initiative that only “other” companies have time to implement. It’s something every organization needs to seriously consider. If your business isn’t digital, it needs to be in order to remain competitive.

So if you take it as a given that you need to embrace digital transformation to survive and thrive in the current landscape, the next logical step is to look at how the cloud fits into your strategy. Because sure, it’s possible to digitally transform without availing yourself of the massive benefits of the cloud. But why would you?

Why would you intentionally leave on the table what could be one of the strongest tools in your arsenal? Why would you take a pass on the opportunity to transform – and vastly improve – the processes at the crux of how your business works?

Lightning strikes
In the case of content services, including capabilities like content management, process management and case management, cloud adoption is rising by the day. Companies with existing on-premises solutions are considering the cloud as the hosting location for their critical information, and companies seeking new solutions are looking at cloud deployments to provide them with the functionality they require.

If your company was born in the digital age, it’s likely that you inherently operate digitally. If your company was founded in the time before, perhaps you’re playing catch up.

Both of these types of companies can find major benefits in the cloud. Data is created digitally, natively — but there is still paper that needs to be brought into the digital fold. The digitizing of information is just a small part of digital transformation. To truly take information management to the next level, the cloud offers transformative options that just aren’t available in a premises-bound solution.

People are overwhelmingly using the cloud in their personal lives, according to AIIM’s State of Information Management: Are Businesses Digitally Transforming or Stuck in Neutral? Of those polled, 75 percent use the cloud in their personal life and 68 percent report that they use the cloud for business. That’s nearly three-quarters of respondents!

When we look at the usage of cloud-based solutions in areas like enterprise content management (ECM) and related applications, 35 percent of respondents leverage the cloud as their primary content management solutions; for collaboration and secure file sharing; or for a combination of primary content management and file sharing. These respondents are deploying these solutions either exclusively in the cloud or as part of on-prem/cloud hybrid solutions.

Another 46 percent are migrating all their content to the cloud over time; planning to leverage the cloud but haven’t yet deployed; or are still experimenting with different options. They are in the process of discerning exactly how best to leverage the power of the cloud for their organizations.

And only 11 percent have no plans for the cloud. Eleven percent! Can your business afford to be in that minority?

More and more, the cloud is becoming table stakes in information management. Organizations are coming to understand that a secure cloud solution can not only save them time and money but also provide stronger security features, better functionality and larger storage capacity.

The bright ideas
So, what are some of the ways that leveraging the cloud for your content services can digitally transform your business?

  • Disaster recovery. When your information is stored on-premises and calamity strikes — a fire, a robbery, a flood — you’re out of luck. When your information is in the cloud, it’s up and ready to keep your critical operations running.
  • Remote access. Today’s workforce wants to be mobile, and they need to access their critical information wherever they are. A cloud solution empowers your workers by granting them the ability to securely access critical information from remote locations.
  • Enhanced security. Enterprise-level cloud security has come a long way and offers sophisticated protection that is out of reach for many companies to manage internally.

Here are other highly appealing advantages of cloud-based enterprise solutions, based on a survey conducted by IDG Enterprise:

  • Increased uptime
  • 24/7 data availability
  • Operational cost savings
  • Improved incident response
  • Shared/aggregated security expertise of vendor
  • Access to industry experts on security threats

Whether you’re optimizing your current practices or rethinking them from the ground up, these elements can help you digitally transform your business by looking to the cloud.

Can you afford not to?

What’s Hindering the Adoption of Cloud Computing in Europe?

As with their counterparts in North America, organizations across Europe are eagerly embracing cloud computing in their operating environments. However, despite the overall enthusiasm around the potential of cloud computing to transform their business practices, many CIOs have real concerns about migrating their sensitive data and applications to public cloud environments. Why? In essence, it boils down to a few core areas of concern:

  1. A perceived lack of clarity in existing Cloud Service Level Agreements and security policy agreements
  2. The application, monitoring, and enforcement of security SLAs
  3. The relative immaturity of cloud services

These issues, of course, are far from new and, in fact, great progress has been made over the past five years to address these and other concerns around fostering greater trust in cloud computing. The one thread that runs through all these issues is transparency: the more transparency a cloud service provider can offer into its approach to information security, the more confident organizations will be in adopting and trusting public cloud providers with their data and assets.

To this end, the European Commission (EC) launched the Cloud Selected Industry Group (SIG) on Certification in April of 2013 with the aim of supporting the identification of certifications and schemes deemed “appropriate” for the European Economic Area (EEA) market. Following this, ENISA (European Network and Information Security Agency) launched its Cloud Certification Schemes Metaframework (CCSM) initiative in 2014 to map detailed security requirements used in the public sector to the security objectives described in existing cloud certification schemes. And of course, the Cloud Security Alliance has also played a role in defining security-specific certification schemes with the creation of the CSA Open Certification Framework (CSA OCF), which enables cloud providers to achieve a global, accredited and trusted certification.

Beyond defining a common set of standards and certifications, SLAs have become an important proxy by which to gauge visibility into a cloud provider’s security and privacy capabilities. The specification of security parameters in Cloud Service Level Agreements (“secSLAs”) has been recognized as a mechanism to bring more transparency and trust to both cloud service providers and their customers. Unfortunately, the conspicuous lack of relevant cloud security SLA standards has also become a barrier to their adoption. For these reasons, standardized cloud secSLAs should become part of the more general SLAs/Master Service Agreements signed between the CSP and its customers. Current efforts from the CSA and ISO/IEC in this field are expected to bring some initial results by 2016.

This topic will be a key theme at this year’s EMEA Congress, taking place November 17-19 in Berlin, Germany, with a plenary panel on “Cloud Trust and Security Innovation” featuring Nathaly Rey, Head of Trust, Google for Work, as well as a track on Secure SLAs led by Dr. Michaela Iorga, Senior Security Technical Lead for Cloud Computing, NIST.

To register for the EMEA Congress, visit: https://csacongress.org/event/emea-2015/#registration.