Part I: Deployment and Container Orchestration
Part II: Container Security
Part III: Container Deployment
Part IV: Introduction to Kubernetes
Part V: Kubernetes Administration
Part VI: Kubernetes Security Checklist
In the previous post in this Cloud Native Security Basics series, we covered container security. Before diving into securing Kubernetes and container-centric infrastructures, it is first important to understand the core components of a modern DevOps environment. Contrary to popular belief, containers and orchestration tools such as Kubernetes do not give us security out of the box. This post will set the stage for building a secure DevOps pipeline and production infrastructure using Kubernetes.
The first thing you want to be certain about when deploying an image is that you are actually deploying the right one. An easy way to verify this is to check the image's signature. Images can be signed in a way that includes important information about the image, as well as key information about its authenticity. Image signing is not universally adopted, but it is slowly becoming accepted as a way to digitally sign an image and ensure it comes from a trusted source.
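A complementary safeguard is pinning an image by its content digest, which guarantees the image you deploy is byte-for-byte the one you verified. The sketch below is a minimal Kubernetes Pod spec; the registry name and digest are placeholders, not real artifacts:

```yaml
# Hypothetical Pod spec: the image is pinned by its sha256 digest rather
# than a mutable tag, so the kubelet can only pull that exact content.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-app
spec:
  containers:
    - name: app
      # registry.example.com/app is a placeholder; a digest uniquely
      # identifies image content, unlike a tag such as :latest.
      image: registry.example.com/app@sha256:6ae2f8f2c4f4b3d8e0d4f0a1b2c3d4e5f60718293a4b5c6d7e8f9a0b1c2d3e4f
```

Unlike a tag, which the registry can silently repoint to different content, a digest reference can only ever resolve to one image.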
Container orchestrators use configuration files to define the containers that make up an application; for example, Kubernetes uses YAML. Donʼt forget to verify the integrity of these configuration files before deployment. Even a slight variation can result in the introduction of a malicious image in your infrastructure.
In the next parts of this series, we will see how you can use admission controllers to perform checks right before you deploy a resource into a cluster. Using admission controllers, before deploying an image in Kubernetes you can run various checks and set specific requirements, like the following:

- Only allow images pulled from trusted registries
- Require images to be signed, or pinned by digest
- Disallow privileged containers
- Require resource limits to be set
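As a preview, one concrete mechanism — sketched below, assuming a recent Kubernetes version that supports `ValidatingAdmissionPolicy` — is a policy that rejects Deployments whose images are not pinned by digest. The policy name and rule here are illustrative, not prescriptive:

```yaml
# Hypothetical policy: refuse Deployments whose container images are not
# referenced by a sha256 digest.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicy
metadata:
  name: require-digest-pinned-images
spec:
  failurePolicy: Fail
  matchConstraints:
    resourceRules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
  validations:
    # CEL expression evaluated against the incoming object
    - expression: "object.spec.template.spec.containers.all(c, c.image.contains('@sha256:'))"
      message: "All container images must be pinned by digest."
---
# A binding is required to put the policy into effect.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingAdmissionPolicyBinding
metadata:
  name: require-digest-pinned-images-binding
spec:
  policyName: require-digest-pinned-images
  validationActions: ["Deny"]
```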
Immutable infrastructure is a high-level concept at the heart of DevOps and container-based deployment. It is an approach to managing servers (such as virtual machines) in which components are replaced wholesale from a known-good state, as opposed to updating each server in place. These components are pulled from a common image that is consistent across the stack and tagged appropriately.
While it is possible to build an immutable infrastructure on bare metal servers in a data center, the rise of the cloud is what made the concept popular due to ease of use. As mentioned in Part I of this series, simple-to-use cloud APIs allow operations teams and developers to spin up and tear down virtual machines in an automated and scripted manner.
This makes immutability a reality in DevOps pipelines. The benefits of immutable infrastructures span security, development, and operations teams. Containers help us achieve immutable infrastructure by packaging applications into self-contained deployment artifacts. In the world of containers, immutable means that the container wonʼt be patched or updated, and no changes in configuration will be applied. Containers are versioned and replaced when rolling out new versions, or an old image is redeployed if there is a need for a roll-back.
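For example, a versioned rollout might look like the minimal Deployment sketch below (the image name and tag are placeholders). Rolling out a new version means applying the same manifest with a new tag; rolling back means re-applying the previous one:

```yaml
# Hypothetical Deployment: the running containers are never patched in
# place — a new image version replaces them through a rolling update.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # replace Pods gradually, keeping the app available
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.4.2   # versioned tag (placeholder)
```

Because every deployment is just a manifest with a specific version, the previous manifest doubles as the roll-back artifact.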
Even though containers facilitate building an immutable infrastructure, you still need to enforce it. One way to achieve this is by running the container with a read-only filesystem. If the application needs access to a writable local storage, you can mount a writable, temporary file system.
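A minimal sketch of both ideas in a Kubernetes Pod spec, assuming a placeholder image, could look like this:

```yaml
# Hypothetical Pod: the root filesystem is read-only, and /tmp is a
# writable in-memory scratch volume for the application.
apiVersion: v1
kind: Pod
metadata:
  name: readonly-app
spec:
  containers:
    - name: app
      image: registry.example.com/app:1.0.0   # placeholder image
      securityContext:
        readOnlyRootFilesystem: true   # no writes anywhere on the image layers
      volumeMounts:
        - name: tmp
          mountPath: /tmp
  volumes:
    - name: tmp
      emptyDir:
        medium: Memory   # tmpfs; contents are discarded when the Pod stops
```

With this configuration, anything an attacker drops into the container disappears with the Pod, and the image layers themselves can never be modified at runtime.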
Another way to enforce immutability is by preventing an executable from running in a container if it wasnʼt present in the image that instantiated the container. This method is called drift prevention. “Drift” is the difference between the software that came with the original image and the software that is currently running in the container. You can prevent drift if you have the ability to stop software that was not included in the original image and prevent it from running.
Drift prevention in containers can be achieved by using a runtime security solution that coordinates with the scanning solution. The scanner fingerprints the files within the image during the initial scan. During runtime, an enforcement tool checks every executable that the container tries to run against the fingerprints of the initial scan. If the file is not identical, it is not permitted to run.
Immutable infrastructure allows for rapid patching across a fleet of servers. For example, if a vulnerability is found in a library that is used across your servers, it is possible to roll out a reliable patch by replacing each machine instead of manually updating packages. This gives security teams visibility into the current state of the entire system through code, instead of running time-consuming and often unreliable tools.
When infrastructure is subject to being destroyed and recreated at any time, we must re-architect how we handle persistent data such as logs. Immutable infrastructure forces us to ship logs away from our servers and into a centralized logging and alerting mechanism – an added security benefit.
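One common pattern for shipping logs off the nodes is a DaemonSet that runs a log-forwarding agent on every host and reads container logs from the host filesystem. In the sketch below, the agent image is a placeholder and the agent's own forwarding configuration is omitted:

```yaml
# Hypothetical log-shipper DaemonSet: one agent Pod per node, reading
# host logs read-only and forwarding them to a central backend.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-shipper
  namespace: logging
spec:
  selector:
    matchLabels:
      app: log-shipper
  template:
    metadata:
      labels:
        app: log-shipper
    spec:
      containers:
        - name: shipper
          image: registry.example.com/log-shipper:2.1   # placeholder agent image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
              readOnly: true   # the agent only needs to read, never write
      volumes:
        - name: varlog
          hostPath:
            path: /var/log    # where container runtimes typically write logs
```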
An added benefit of an immutable infrastructure is the ability to roll back to a known-good state. Since all deployments are performed by rolling out new images, we can also use the historical configuration artifact to roll back if necessary.
Containers and hosts share the same kernel. If a host gets compromised, all containers in that host are potential victims, especially if the attacker manages to get root or elevated privileges. Securing and hardening the host is extremely important for the security of your containers.
Container applications should be run on dedicated host machines – either virtual machines or bare metal. Pick the right Linux distribution for the host, preferably one that is specialized for hosting containers. Try to install the minimum necessary components; avoid installing desktop applications, compiler environments, or other server applications.
Generally, humans need little access to the host, especially if you are using a container orchestrator like Kubernetes. Thus, you donʼt need to maintain many user identities on the host machines. This makes it easy to spot possible unauthorized logon attempts or attempts to create rogue users.
Hardening the hostʼs operating system is a necessity as with any server out there. Place production containers on a separate host system and be careful who you authorize to deploy on this host. No other containers should be allowed on this host; development and test containers should be deployed on different hosts. You can also consider separating containers according to their context – for example, databases shouldnʼt be on the same host with middleware or authentication services.
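One way to implement this separation in Kubernetes is with node labels, taints, and tolerations. The sketch below assumes the operator has labeled a set of nodes `workload=database` and applied a matching `NoSchedule` taint to them:

```yaml
# Hypothetical Pod: scheduled only onto dedicated database nodes, which a
# NoSchedule taint keeps free of unrelated workloads.
apiVersion: v1
kind: Pod
metadata:
  name: postgres
spec:
  nodeSelector:
    workload: database          # only schedule onto nodes labeled for databases
  tolerations:
    - key: workload
      operator: Equal
      value: database
      effect: NoSchedule        # tolerate the taint that repels other Pods
  containers:
    - name: postgres
      image: postgres:16        # example tag; pin by digest in production
```

The taint keeps other containers off the database nodes, while the node selector keeps the database off everyone else's nodes.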
Patching in a container environment needs to cover four different domains:

- The host operating system
- The container runtime
- The orchestrator components themselves (for example, the Kubernetes control plane and kubelets)
- The container images and the application dependencies packaged in them
In all cases, you should follow your organizationʼs patch management policies and best practices, bearing in mind that some patches might require restarting services or even the host. In a previous section we discussed that, when it comes to container images, patching usually involves deploying a new image. The latest Kubernetes CVEs demonstrate the importance of keeping on top of CVEs in the third-party Kubernetes ecosystem as well.
How do we take our containers and deploy them into a computing environment where they can serve traffic in a scalable, secure, and reliable manner?
This is where container orchestration frameworks come into the picture. Container orchestration gives system administrators and developers the ability to achieve an immutable infrastructure by handling all infrastructure and container management as code. At a high level, orchestration tools should be able to handle the following with regard to the deployment of containers:

- Provisioning and scheduling of containers onto hosts
- Resource allocation and scaling
- Networking and service discovery
- Health monitoring and self-healing
- Cross-compatibility between physical server infrastructure and virtual machines

Container orchestration tools vary greatly, and many commercial and open-source options exist. Kubernetes quickly became one of the most popular and widely adopted technologies for container orchestration tasks.
Log4Shell was a zero-day vulnerability that was disclosed in December 2021 and received a CVSS security rating of 10, the highest available score. It affected Log4j, an open source logging framework, and it allowed an attacker to execute code that the system assumed originated from a trusted source. Practically any application using Log4j was affected and had to be patched.
At the time, security teams around the world rushed to patch their systems. However, research shows that many applications still remain vulnerable.
This is why it is important to scan your container images for vulnerabilities, including Log4Shell.
Patching systems quickly is especially important when a critical vulnerability is made public. In theory, the shorter the time it takes to patch a system, the less likely a vulnerability will be exploited. Systems that lack consistency due to configuration drift present challenges when it comes to applying patches promptly.
There are a number of reasons an organization may suffer from poor patching, but a lack of immutable infrastructure may be a large contributor. When infrastructure is treated as code and known-good state is well-understood across every device, it is much easier to patch systems quickly and reliably.
Of course, quickly patching critical vulnerabilities is not sufficient on its own. We have already presented several security measures that you need to consider, such as hardening the host, patching, and using centralized logging and monitoring. In the next part of the series, we will introduce Kubernetes, the most popular container orchestrator, in depth.