Jimmy Mesta | Mar 7, 2024 | 7 min read

Introduction to Kubernetes: Cloud Native Security Basics Part IV

Part I: Deployment and Container Orchestration
Part II: Container Security
Part III: Container Deployment
Part IV: Introduction to Kubernetes
Part V: Kubernetes Administration
Part VI: Kubernetes Security Checklist


In the last part of this Kubernetes Security Basics series, we covered container deployment and started to touch on Kubernetes. Here we will get into Kubernetes, discussing its history and main components, which will set us up for more in-depth discussions and finally security for Kubernetes. For an overview of the layers of Kubernetes, not limited to Kubernetes itself, check out this layered guide to Kubernetes security best practices.


Kubernetes history

Kubernetes was originally designed and developed by engineers at Google, as the open-source descendant of the internal Google project called Borg, which is how Google deploys and manages containers. Containers have been used throughout Google's infrastructure for many years; it is rumored to deploy over 2 billion containers a week! Many of the lessons learned from the Borg project have been incorporated into Kubernetes for the general development community to take advantage of in their own environments.

Kubernetes is Greek for "helmsman" and can be thought of as the captain of a ship, steering containers to their correct destination and overseeing the performance of the overall cluster. Kubernetes is also commonly known as K8s for short ("K", eight letters, "s") or "Kube." It has quickly become one of the most popular open-source projects in the world, with hundreds of active contributors and many corporate sponsors.

The Kubernetes system is highly complex, abstracting many traditional application and networking concepts in favor of APIs and container-centric deployment mechanisms. It lends itself well to DevOps environments where velocity, scalability, and reliability are paramount.

Security is still a concern for Kubernetes, and the technology is moving so fast that it is difficult for the security community to keep up. This rapid change – coupled with a steep learning curve and insecure defaults – makes it important to take security seriously from the onset when making the shift to a container-based ecosystem.


Kubernetes components

Before we dive into securing Kubernetes, it is helpful to understand the core components or "primitives" that are built into the system. The following list is not exhaustive by any means, but it will give you a base understanding about how Kubernetes schedules, deploys, and scales containers within a given system. Of course, there will be additional factors to consider if you are using a managed cloud provider for Kubernetes like AKS or EKS; but regardless of how you actually deploy Kubernetes, these common components will be consistent.

For a deeper dive into these components and how they are relevant to securing Kubernetes, see these prerequisites for implementing Kubernetes security.



Clusters

In Kubernetes, a cluster is simply a collection of compute resources, generally in the form of virtual machines. Within the cluster, there exists any number of Kubernetes "masters" and "nodes". Each of these performs specific tasks in order to deploy and maintain running containers. Masters and nodes must communicate with each other: the security of this communication is critical to the overall health of the cluster.


Control Plane

The control plane is usually located in the Kubernetes master node. It is responsible for managing the cluster and coordinating all activities such as scheduling applications, maintaining applications' desired state, and scaling applications. It contains the API server (kube-apiserver), etcd, the kube-controller-manager, the cloud-controller-manager, and the kube-scheduler.


API Server

The API server is the frontend of the control plane. It is a key piece of the Kubernetes architecture that is responsible for managing all cluster components. It exposes the Kubernetes API, processes and validates REST requests submitted by cluster administrators, and relays instructions to other components throughout the cluster. The API server is the heartbeat of the Kubernetes cluster.



etcd

Maintaining state within a cluster is the job of etcd. Etcd is a simple, consistent, and highly available key-value store that holds the configuration data of the cluster. It practically serves as the cluster's database.



Nodes

A node can be any virtual machine or physical server that serves as a worker machine in Kubernetes. It includes the Kubelet, which is an agent that manages the node and communicates with the control plane. The Kubelet starts, stops, and scales containers by interacting with a container runtime such as containerd or CRI-O through the Container Runtime Interface (CRI). A node also runs kube-proxy, a network proxy that implements part of the Kubernetes Services concept. It maintains the network rules that allow network communications.



Pods

A pod is an abstraction in Kubernetes that is composed of one or more containers with shared storage and network resources. Pods are the smallest deployable unit in Kubernetes. The containers within a pod are created together, deployed on the same node, scaled together, and deleted together. Quite often a pod contains only one container. Each pod is provisioned with a unique IP address within the cluster.

Here is an example of a pod manifest that would be used to deploy a single-container pod running Redis on its default port, 6379.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: redis-rails
spec:
  containers:
  - name: key-value
    image: redis
    ports:
    - containerPort: 6379
```



Services

A service is yet another abstraction within Kubernetes that gives pods the ability to work together as a coupled group. Services give a collection of pods a stable IP address and an internal domain name. It is uncommon to see pods that are not tied to a service within a cluster. Services bring stability to pods: pods often come and go, while a service sticks around. Load balancing is also accomplished using services to route traffic intelligently to a given pod.

There are four types of services:

  • ClusterIP: This is the default service type. It exposes the service internally to the cluster.
  • NodePort: Exposes the service to the Internet from the IP address of the Node at the specified port. The port can range between 30000 and 32767.
  • LoadBalancer: Creates a load balancer assigned to a fixed IP address. It is the easiest (but not always most secure) way to expose your cluster to the Internet when using certain public cloud providers.
  • ExternalName: Maps the service to a DNS name. This service type is good for directing traffic to outside resources.

The following service uses a selector of app: rails to place all pods with that label in the service named web-frontend .

```yaml
kind: Service
apiVersion: v1
metadata:
  name: web-frontend
spec:
  ports:
  - name: http
    port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: rails
  type: LoadBalancer
```
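Only the `spec.type` field (plus any type-specific settings) changes between service types. As an illustrative sketch, the same service exposed as a NodePort might look like the following; the `nodePort` value 30080 is an arbitrary choice within the allowed range, not from the original article:

```yaml
kind: Service
apiVersion: v1
metadata:
  name: web-frontend
spec:
  type: NodePort
  ports:
  - name: http
    port: 80          # port exposed inside the cluster
    targetPort: 3000  # port the container listens on
    nodePort: 30080   # must fall within 30000-32767
    protocol: TCP
  selector:
    app: rails
```

If `nodePort` is omitted, Kubernetes picks a free port from the range automatically.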



Labels

In order to begin "coupling" pods in a service, we must first give the pod a common identifier to let the service know that the pod would like to be part of the service. This is where labels come into the picture. Labels are simply key-value pairs attached to any API object in the system (such as a pod or node); they are used to organize and select subsets of objects. With labels you can specify identifying attributes of objects that are meaningful and relevant to users.

Labels are what we use to determine the components to which a given operation will apply; they are at the heart of any Kubernetes deployment. An example is one pod labeled "tier: backend" and another labeled "tier: frontend". These pods would be used by two separate services that query for each of the pod labels that suit their function (an Nginx frontend versus a Java backend).
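To make the frontend/backend example above concrete, here is a minimal sketch of two labeled pods; all names and images are illustrative. A service with the selector `tier: frontend` would match only the first pod:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-frontend
  labels:
    tier: frontend   # selected by the frontend service
spec:
  containers:
  - name: nginx
    image: nginx
---
apiVersion: v1
kind: Pod
metadata:
  name: java-backend
  labels:
    tier: backend    # selected by the backend service
spec:
  containers:
  - name: app
    image: eclipse-temurin  # illustrative Java base image
```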



Namespaces

Isolating resources within a cluster becomes important as the number of cluster users increases. Namespaces provide a way to create sub-clusters or "virtual" clusters within the same physical Kubernetes cluster. They help group and organize objects. Kubernetes comes out of the box with several initial namespaces, including default, kube-system, and kube-public. Any number of namespaces may be created to meet an organization's needs.

It is common to use namespaces to separate development environments for teams, such as development or QA. However, while it is possible and fairly common to place production, QA, test, and development in the same cluster and separate via namespaces, this is not recommended from a security standpoint. Using separate clusters for production workloads will help avoid costly namespace misconfiguration errors that may lead to privilege escalation.
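Creating a namespace for a team environment takes only a small manifest. A minimal sketch, with the namespace name `qa` and the pod details chosen purely for illustration:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: qa
---
# A pod placed into the qa namespace instead of "default"
apiVersion: v1
kind: Pod
metadata:
  name: test-runner
  namespace: qa
spec:
  containers:
  - name: runner
    image: busybox
    command: ["sleep", "3600"]
```

Objects without an explicit `metadata.namespace` land in `default`, which is one reason namespace misconfigurations are easy to make.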



Conclusion

So far in this series we have covered container security and container deployment, and now we have the basic components of Kubernetes as well. It is time to move on to more advanced topics like Kubernetes networking, with Kubernetes security to follow. In the next part of this series, we will look at how to interact with a cluster through the API server, along with the other ways to work with Kubernetes.


Jimmy Mesta

Jimmy Mesta is the founder and Chief Technology Officer at RAD Security. He is responsible for the technological vision for the RAD Security platform. A veteran security engineering leader focused on building cloud-native security solutions, Jimmy has held various leadership positions with enterprises navigating the growth of cloud services and containerization. Previously, Jimmy was an independent consultant focused on building large-scale cloud security programs, delivering technical security training, producing research and securing some of the largest containerized environments in the world.