By Nate Dyer, Product Marketing Director, Tenable
Application container technologies like Docker have exploded in popularity among IT and development teams across the world. Since its inception in 2013, Docker software has been downloaded 80 billion times and more than 3.5 million applications have been “dockerized” to run in containers.
Amid all this enthusiasm and near-mainstream adoption, it’s important to understand why security continues to be the top challenge in container deployments. Let’s take a look.
Security is the top container management challenge
In study after study, security comes up as the top container management challenge. In many ways, container security issues are no different than those impacting traditional IT. Poor cyber hygiene, such as developers using vulnerable versions of Kubernetes or misconfigured Docker services, creates a lot of turmoil in the container ecosystem. Security teams need to find vulnerabilities and prioritize their remediation based on actual cyber risk – just as they would for any other computing asset.
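Risk-based prioritization can be sketched in a few lines. This is a minimal illustration, not any particular product’s scoring model; the findings, scores, and asset-criticality weights are hypothetical placeholders.

```python
# A minimal sketch of risk-based prioritization: rank findings by a simple
# risk proxy (severity score times asset criticality) rather than by raw
# severity alone. All values below are illustrative, not real findings.
findings = [
    {"cve": "CVE-2023-1111", "cvss": 9.8, "asset_criticality": 0.2},  # critical bug, low-value asset
    {"cve": "CVE-2023-2222", "cvss": 6.5, "asset_criticality": 1.0},  # medium bug, business-critical asset
]

def prioritize(findings):
    """Order findings so the highest business risk is remediated first."""
    return sorted(findings, key=lambda f: f["cvss"] * f["asset_criticality"], reverse=True)

for f in prioritize(findings):
    print(f["cve"])  # the medium-severity flaw on the critical asset ranks first
```

Note how the medium-severity flaw on a business-critical asset outranks the critical flaw on a low-value one; that is the difference between severity-based and risk-based remediation.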
Containers create unique issues for security teams
But, in other ways, container security issues are rather unique. Modern application development is largely focused on assembling existing software components, many of them open-source code, instead of writing code from scratch.
For example, many developers turn to container image repositories like Docker Hub to construct their own container images quickly. Unfortunately, very few of these assembled components are actually analyzed by security teams to assess business risk.
And the risks are real: 17 malicious Docker images were recently removed from Docker Hub after they were found to be installing cryptocurrency miners on unwitting users’ servers. The question we all need to ask is: Do you know where your container images are coming from?
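One simple hygiene check is to verify that base images in a Dockerfile are pinned to an immutable digest rather than a mutable tag like `latest`. The sketch below is an illustrative helper, not part of any scanner; the Dockerfile content and digest are made up.

```python
import re

# Flag Dockerfile base images that are not pinned to a sha256 digest.
# A digest-pinned reference identifies exactly one image; a tag like
# 'latest' can silently change underneath you.
FROM_RE = re.compile(r"^\s*FROM\s+(\S+)", re.IGNORECASE | re.MULTILINE)

def unpinned_base_images(dockerfile_text):
    """Return base image references that lack an immutable digest."""
    flagged = []
    for ref in FROM_RE.findall(dockerfile_text):
        if ref.lower() == "scratch":
            continue  # 'scratch' is the empty image, nothing to pin
        if "@sha256:" not in ref:
            flagged.append(ref)
    return flagged

dockerfile = """\
FROM ubuntu:latest
FROM nginx@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
"""
print(unpinned_base_images(dockerfile))  # only the unpinned ubuntu:latest is flagged
```

A check like this can run in code review or CI, so an unpinned or unknown base image is caught before it ever reaches production.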
Traditional vulnerability management approaches don’t work for securing containers
To make matters more difficult, traditional vulnerability management approaches don’t work for securing containers. The average lifespan of a container is often measured in hours, making it very challenging to discover running containers with traditional scans across large IP ranges.
Then, if you do come across a running container in a scan, it’s difficult to assess it due to its “just enough operating system” design principles. Many containers don’t have an IP address or SSH logins to run a credentialed scan.
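Because you can’t log in and scan, the practical alternative is to assess the image’s contents directly: extract its package inventory and compare it against vulnerability advisories. The sketch below assumes a package list has already been pulled from an image manifest; the packages, versions, and advisories are illustrative placeholders, and real tools use proper distro-aware version comparators.

```python
# A minimal sketch of image-based assessment: instead of scanning a running
# container over the network, compare the package inventory extracted from
# the image against a vulnerability feed. All data here is hypothetical.
image_packages = {"openssl": "1.0.2g", "bash": "5.0", "zlib": "1.2.8"}

# Hypothetical advisories: package name -> first fixed version
advisories = {"openssl": "1.0.2h", "zlib": "1.2.12"}

def version_tuple(v):
    """Crude version parse for this sketch; real scanners use proper comparators."""
    return tuple(int(p) if p.isdigit() else p for p in v.replace("-", ".").split("."))

def vulnerable_packages(packages, advisories):
    """Return packages installed at a version below the first fixed version."""
    return sorted(
        name for name, installed in packages.items()
        if name in advisories and version_tuple(installed) < version_tuple(advisories[name])
    )

print(vulnerable_packages(image_packages, advisories))
```

The key point is that the assessment happens against the image, which is static and inspectable, rather than against a short-lived running container.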
Finally, if you happen to find a security issue in a container, you don’t just apply a patch to remediate the flaw. Rather, you shut down the container, fix the flaw in the container image code and then redeploy – consistent with the immutable infrastructure mindset, in which IT infrastructure is treated as code.
Three steps to mastering container security
While Docker containers have turned traditional vulnerability management on its head, there is a path forward. You can master container security by following three steps:
- Discover and secure container infrastructure. This includes detecting Docker in your environment, patching host and orchestration infrastructure and hardening services based on industry best practices.
- Shift left with security controls. Focus your security testing, policy assurance and remediation workflows on the development process before software is shipped into production to prevent vulnerabilities.
- Incorporate containers into your holistic Cyber Exposure program. Rather than relying on a point solution to secure a new type of computing asset, make sure your vulnerability management approach supports containers alongside other assets across your attack surface.
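The shift-left step can be made concrete with a policy gate that runs in CI before an image is pushed to production. The sketch below assumes a scanner has already produced findings as a list of records; the finding format and severity policy are hypothetical, not any particular product’s output.

```python
# A sketch of a shift-left policy gate for a CI pipeline: block the build
# when scanner findings include severities the policy forbids. The findings
# below are illustrative placeholders.
findings = [
    {"id": "CVE-2024-0001", "severity": "critical"},
    {"id": "CVE-2024-0002", "severity": "low"},
]

def gate(findings, blocked_severities=("critical", "high")):
    """Return (passed, offending_ids); CI fails the build when passed is False."""
    offending = [f["id"] for f in findings if f["severity"] in blocked_severities]
    return (not offending, offending)

passed, offending = gate(findings)
print(passed, offending)  # the critical finding blocks the build
```

Wiring a check like this into the build pipeline is what “shift left” means in practice: the vulnerable image never ships, so there is no production container to remediate later.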
Want to learn more about how to master these three steps? Check out Container Security Best Practices: A How-to Guide to start reaping the benefits.