Part I: Deployment and Container Orchestration
Part II: Container Security
Part III: Container Deployment
Part IV: Introduction to Kubernetes
Part V: Kubernetes Administration
Part VI: Kubernetes Security Checklist
Introduction
Software deployment is undergoing a seismic shift, with DevOps breaking down traditional silos and accelerating the pace of code releases. This first post in a six-part series on cloud native security basics focuses on DevOps culture, container-related threats, and how to integrate security into the heart of DevOps.
The State of Software Deployment
The way we write, test, and deploy software is undergoing a seismic shift. The sheer number of programming languages and frameworks available to developers gives us enormous flexibility when building products and services.
Historically, system administrators and software development teams have operated in silos, where developers write code and hand it off to operations, who then package the code and deploy it to servers. It's a very hands-on series of tasks.
Security teams have had their own silo: running scanners, manually reviewing source code, and building time-consuming gates into the software development lifecycle. The industry has coined the term "DevOps" for this modern state of software development, and things change quickly in DevOps environments. As the pace of code releases increases, silos dissolve, and manual security gates and a reflexive "No!" to everything are no longer tolerated.
The security optimist will see opportunity in a DevOps environment. This series will explore how containerization and Kubernetes can actually help in your efforts to sandwich the "Sec" right in the middle of "DevOps." The technologies discussed in this series do not offer immunity out of the box. In fact, they are often less understood than traditional networking and software packaging techniques. Keep that in mind.
Continuous Integration and Deployment
One of the core pillars of building a DevOps culture is enabling automation and reducing friction between a developer's laptop and production. Continuous integration and continuous deployment reduce that friction through automation, building pipelines in which machines, rather than humans, bless code before it is deployed.
Continuous Integration
At its core, continuous integration (CI) is the act of collaboratively developing software using a shared repository. Continuous integration automates the building and testing of each change that is committed to the repository.
A continuous integration pipeline encourages small, incremental changes, which enables features and bug fixes to be deployed rapidly. Feedback loops are kept tight; when builds fail, developers can act quickly. Without a continuous integration pipeline in place, it is difficult to move towards using containers for packaging and Kubernetes as an orchestration tool.
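To make this concrete, here is a minimal sketch of the kind of check a CI pipeline runs on every commit. The greeting package and Greet function are hypothetical, not taken from any real project; a typical pipeline would run go build ./... and go test ./... on each push and fail the build if any test fails.

```go
// greeting_test.go: a self-contained unit test that a CI pipeline could run
// on every commit. Both the package and the function are hypothetical.
package greeting

import "testing"

// Greet returns a greeting for the given name.
func Greet(name string) string {
	return "Hello, " + name
}

// TestGreet fails the CI build if Greet's behavior regresses.
func TestGreet(t *testing.T) {
	got := Greet("world")
	want := "Hello, world"
	if got != want {
		t.Errorf("Greet(\"world\") = %q, want %q", got, want)
	}
}
```

Because every commit triggers the same automated build and test run, a failing change is caught within minutes rather than at deployment time.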
Continuous Deployment
Continuous deployment (CD) picks up where continuous integration leaves off. Once a change is made and the continuous integration tests pass, the code may be considered ready to deploy. Continuous deployment takes the build artifact or source code that has passed all of the continuous integration automation and ensures it is ready for production deployment. The key with continuous deployment is that this step is automated, just as continuous integration is automated. Many organizations instead practice continuous delivery, in which the final deployment to production remains a manual step rather than an automated one.
Incorporating Security Into DevOps
So, we have established a DevOps pipeline. Code is deployed to production automatically in a matter of minutes. What else could a fast-moving software company ask for? Security.
Many security curmudgeons may scoff at the idea that DevOps can actually enable security. However, whether or not security teams embrace it, software is moving this way. Let's jump on the bandwagon and learn how to work with these new technologies and workflows. Using containers and Kubernetes securely can deliver wins across the board, including for security teams. That said, it's important to note that the CI/CD process, and DevOps in general, is not a replacement for later checks and balances in Kubernetes security.
Modern Infrastructure and Container Orchestration
Containers have taken center stage when it comes to software packaging and delivery. At a high level, containers provide an abstraction mechanism that allows applications and their dependencies to be decoupled from the actual infrastructure they are running on. This abstraction allows development teams and operations teams to deploy packaged software without focusing on the specific configurations, software versions, or even programming language running in the container.
To make containers useful in large environments, an orchestration layer is needed. Container orchestration is what makes containerized applications and services useful and accessible: it schedules and manages ephemeral containers in real time. Today, most teams use Kubernetes as their orchestrator, so the remainder of this series will focus on Kubernetes security.
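As a rough illustration of the declarative model behind orchestration, the sketch below builds a Kubernetes Deployment object using the official Go API types. The orchestrator's job is then to keep the cluster's observed state matching this desired state; the name, labels, and image are placeholders.

```go
// A minimal, hypothetical Deployment spec expressed with the Kubernetes Go
// API types. Kubernetes continuously reconciles the cluster toward this
// desired state (here, three running replicas of the container).
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "web"} // placeholder label set

	deployment := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "web"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(3), // desired state: three copies at all times
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "nginx:1.25", // placeholder image
					}},
				},
			},
		},
	}

	fmt.Printf("desired replicas: %d\n", *deployment.Spec.Replicas)
}
```

If a node fails or a container crashes, the orchestrator notices the divergence from the declared state and schedules replacements, which is what managing ephemeral containers in real time means in practice.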
Container Security Threats
Containers will not solve your application security issues, but they definitely bring big changes to the traditional environment, introducing new potential threats. Bear in mind that containers are not a simple technology, especially from a security perspective; you need to spend the time to learn the underlying technology and be extra careful when using them.
Significant threats related to the use of containers include:
Container escape: The attacker manages to escape the application and gain access to the container, or even escape the container and gain access to the host. A kernel exploit can be used to gain root access to the host and control all the containers. The Dero cryptocurrency miner is a recent example of a Kubernetes attack that successfully achieved container escape.
Container privileges: A running container can request additional access to the underlying host's system calls and filesystem, which could open the door to serious problems. Containers can also run as the root user, which grants additional privileges to an attacker if the container is compromised (the pod sketch after this list shows settings that constrain this). Two recent vulnerabilities in the Kubernetes third-party ecosystem, in Fluid and Bare Metal Operator, provide concrete examples of container privilege escalation.
Resource starvation through denial of service: A container exhausts the host's resources, namely CPU, RAM, storage, and network (resource limits, illustrated in the sketch after this list, bound this).
Network-related threats: Following a successful attack on a container, the attacker can try to access other resources in the same network, such as other containers, hosts, or even the container orchestration tool.
Integrity of images: Attackers may tamper with applications or even entire container images. Additionally, outdated images can carry known, exploitable vulnerabilities.
Compromising secrets: Secrets such as keys, passwords, and other sensitive parameters are passed to or stored in containers. If they are compromised, an attacker may cause serious harm.
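Several of the threats above can be reduced at the pod specification level. The following sketch, using the Kubernetes Go API types, shows a hypothetical hardened container definition: it refuses to run as root or escalate privileges, drops all Linux capabilities, mounts the root filesystem read-only, and bounds CPU and memory so a compromised container cannot starve the host. The names, image, user ID, and limit values are placeholders, not recommendations for any particular workload.

```go
// A minimal sketch of a hardened pod, built with the Kubernetes Go API types.
// All names and values are hypothetical placeholders.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool    { return &b }
func int64Ptr(i int64) *int64 { return &i }

func hardenedPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "hardened-app"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "app",
				Image: "registry.example.com/app:1.2.3", // prefer pinning by digest for image integrity
				SecurityContext: &corev1.SecurityContext{
					RunAsNonRoot:             boolPtr(true),   // never run as root
					RunAsUser:                int64Ptr(10001), // arbitrary non-root UID
					Privileged:               boolPtr(false),  // no extra host privileges
					AllowPrivilegeEscalation: boolPtr(false),
					ReadOnlyRootFilesystem:   boolPtr(true),
					Capabilities: &corev1.Capabilities{
						Drop: []corev1.Capability{"ALL"}, // drop all Linux capabilities
					},
				},
				Resources: corev1.ResourceRequirements{
					// Limits bound CPU and memory so a compromised or buggy
					// container cannot starve the host (denial of service).
					Limits: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("500m"),
						corev1.ResourceMemory: resource.MustParse("256Mi"),
					},
					Requests: corev1.ResourceList{
						corev1.ResourceCPU:    resource.MustParse("100m"),
						corev1.ResourceMemory: resource.MustParse("128Mi"),
					},
				},
			}},
		},
	}
}

func main() {
	pod := hardenedPod()
	fmt.Printf("container %q runs as non-root: %v\n",
		pod.Spec.Containers[0].Name, *pod.Spec.Containers[0].SecurityContext.RunAsNonRoot)
}
```

Settings like these can also be enforced cluster-wide with admission controls such as the Pod Security Standards, rather than relying on every manifest getting them right.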
Conclusion
In this Kubernetes Security Basics series, we will detail the steps you need to follow to implement a secure containerized and Kubernetes environment, which differs considerably from a traditional server-based architecture. In the next post, we will dive into Container Security.