With over a hundred certified Kubernetes distributions available, choosing between a self-hosted and a managed provider option can be overwhelming. This blog will look at the security considerations for each of these options. While managed Kubernetes services abstract away much of the operational overhead for teams, it is critical to understand their security limitations before deciding on this route.
According to the Cloud Native Computing Foundation (CNCF) landscape, there are currently 117 Certified Kubernetes Distributions: customized or pre-configured renditions of the open source Kubernetes project that have been vetted by the CNCF. Twenty-eight of those (plus vanilla Kubernetes) are open source distributions you can install and run on any infrastructure. Another thirty-eight are proprietary but still "self-hosted." The remaining fifty-one are hosted, or "managed," offerings, meaning the provider supplies the infrastructure in addition to the Kubernetes distribution. With all these options, how should you choose to deploy Kubernetes?
The first step is deciding whether to self-host or to use a managed provider.
The claim any managed service makes is that it handles certain problems for you, freeing up your resources to deal with others. This is an appealing value proposition, and 79% of Kubernetes practitioners take this route, according to the CNCF's most recent survey. Let's start with the security concerns managed Kubernetes providers address for their customers.
The core offering of any managed Kubernetes service is taking responsibility for the Kubernetes control plane (etcd, the API server, the scheduler, the controller manager, and so on). For small teams in particular, this can be a huge weight off your shoulders. From a security perspective, it should alleviate concerns about the availability of those control plane components. Often, a managed service will come with an SLA guaranteeing a certain amount of uptime (some number of "9s"). Major cloud providers may distribute the control plane across multiple availability zones or regions for redundancy and failover. Your managed provider may also automatically scale these components with the size of your cluster, helping ensure continued availability as your cluster grows.
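Even though the provider operates those components, you can still verify from your side that the managed control plane is reachable and healthy. Here is a minimal sketch using the official Kubernetes Python client, assuming your kubeconfig's current context points at the managed cluster and your credentials are allowed to read the non-resource health endpoints:

```python
from kubernetes import client, config

# Assumes your kubeconfig's current context points at the managed cluster.
config.load_kube_config()
api = client.ApiClient()

# /readyz aggregates the API server's own health checks (etcd connectivity,
# admission plugins, etc.); the "verbose" parameter lists each check.
data, status, _ = api.call_api(
    "/readyz",
    "GET",
    query_params=[("verbose", "true")],
    auth_settings=["BearerToken"],
    response_type="str",
)
print(f"HTTP {status}")
print(data)
```

The command-line equivalent is `kubectl get --raw '/readyz?verbose'`; how much detail a given managed provider exposes through these endpoints can vary.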
Many, if not most, managed Kubernetes providers will include (or at least offer) data backups and easy recovery options. These may be limited to your etcd database, but they may also extend to other parts of your control plane or even your entire cluster. This can ease your planning for disaster recovery and failover in response to incidents ranging from natural disasters to DDoS or ransomware attacks. You will want to confirm that those backups are stored in separate regions or availability zones so you can still reach them during an outage.
Another perennial security concern for any system is how to apply security patches and updates and manage newly discovered vulnerabilities. Your managed provider's responsibility for the Kubernetes control plane should include patches and updates to those control plane components. That leaves patches for the Kubernetes components on your worker nodes, which you may be able to automate or apply easily (e.g., a one-click update button). One caveat worth remembering: the kubelet on your nodes can never be newer than the API server and must stay within the supported version skew (no more than two minor versions behind it). This means that while your provider may install patch updates, minor or major version upgrades to both your control plane and your nodes will likely remain your responsibility (even if made substantially easier by your cloud provider's one-click update button).
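To keep an eye on that caveat, you can compare each node's kubelet version against the API server's. A minimal sketch with the official Kubernetes Python client, assuming a kubeconfig for the cluster; the two-minor-version threshold mirrors the skew limit mentioned above and is easy to adjust if the policy changes:

```python
from kubernetes import client, config

config.load_kube_config()

# The managed control plane's version.
server = client.VersionApi().get_code()
server_minor = int(server.minor.rstrip("+"))  # some providers report e.g. "27+"

# Compare each node's kubelet minor version with the API server's.
for node in client.CoreV1Api().list_node().items:
    kubelet = node.status.node_info.kubelet_version   # e.g. "v1.27.9"
    kubelet_minor = int(kubelet.split(".")[1])
    skew = server_minor - kubelet_minor
    flag = "OK" if 0 <= skew <= 2 else "CHECK SKEW"
    print(f"{node.metadata.name}: kubelet {kubelet} "
          f"vs API v{server.major}.{server.minor} -> {flag}")
```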
Another advantage associated with managed Kubernetes is tight integration with other services from your cloud provider (node provisioning, storage, and so on). For the big three providers (AWS, GCP, and Azure), those integrations include their own identity and access management (IAM) services, which gives you the option of using your cloud provider's tooling to handle authentication and authorization for your cluster. As we move increasingly toward Zero Trust network architectures, authentication and authorization have become the quintessential security concern, so this integration can be a major boon for your security team.
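What that integration looks like in practice varies by provider. On EKS clusters that use the classic aws-auth mechanism, for example, IAM roles and users are mapped to Kubernetes usernames and RBAC groups through a ConfigMap in kube-system (newer EKS access entries and the GKE/AKS equivalents work differently). Here is a minimal sketch, using the official Kubernetes Python client, that dumps those mappings so you can see which cloud identities can reach your cluster:

```python
import yaml
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# On EKS clusters using the classic aws-auth mechanism, this ConfigMap maps
# IAM roles/users to Kubernetes usernames and RBAC groups.
cm = v1.read_namespaced_config_map("aws-auth", "kube-system")

for key in ("mapRoles", "mapUsers"):
    entries = yaml.safe_load(cm.data.get(key, "") or "[]") or []
    for entry in entries:
        arn = entry.get("rolearn") or entry.get("userarn")
        print(f"{arn} -> user={entry.get('username')} groups={entry.get('groups')}")
```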
AWS pioneered the concept of a "shared responsibility" approach to security in the cloud. In the case of managed Kubernetes, your cloud provider's responsibilities include (a) the security of the platform infrastructure (i.e., the provider's internal networking, the compute hardware your services run on, and so on) and (b) the security of the control plane components they manage. Your responsibilities may include things like (a) provisioning users and roles within the identity and access management system, (b) patching and maintaining your worker nodes, and (c) the security of the workloads you actually run.
All of the above sounds like an excellent case for using managed Kubernetes, and for many teams that is the right choice. Before you run off, though, it's at least worth pointing out that this decision isn't without consequences. If you attended last fall's KubeCon NA, you may have seen a talk about the security of these managed platforms. If not, you can watch the talk here. The speaker outlines a number of potential vulnerabilities in each of the big three cloud provider platforms. We won't rehash the details here (they are subject to change anyway as providers respond to reports like these), but the upshot is this:
To provide you with a managed service, your cloud provider needs its own access to your cluster, and that access has to carry very broad permissions for the provider to do its job. This means there is always a built-in backdoor to your cluster, and depending on how that backdoor is architected, it may be more or less vulnerable to exploitation by clever attackers or malicious insiders.
This may or may not be a deal-breaker for you, depending on several factors:
Assuming you’re satisfied with the answers to the above questions and willing to accept the risk, managed Kubernetes may still be your best option. If, however, you find those answers unacceptable or those risks untenable, you may be back to standing up your clusters on your own using one of the self-hosted Kubernetes distributions.
Here are a few questions to help you decide:
The good news is that the only thing KSOC depends on is Kubernetes. As long as you have access to the Kubernetes API, we can help you answer critical Kubernetes security questions like, ‘What RBAC permissions does this user have?’ or ‘Which of these manifest issues is actually relevant today, given the dynamic nature of my cluster?’
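If you want to see the raw material behind that first question, you can pull it straight from the API yourself. The sketch below uses the official Kubernetes Python client and a hypothetical user name to list every ClusterRoleBinding and RoleBinding that references that user; correlating those grants into an up-to-date answer is the part that gets tedious by hand.

```python
from kubernetes import client, config

config.load_kube_config()
rbac = client.RbacAuthorizationV1Api()

user = "jane@example.com"  # hypothetical user name

# Cluster-wide grants that name this user as a subject.
for crb in rbac.list_cluster_role_binding().items:
    for subj in crb.subjects or []:
        if subj.kind == "User" and subj.name == user:
            print(f"ClusterRoleBinding {crb.metadata.name} "
                  f"-> ClusterRole {crb.role_ref.name}")

# Namespaced grants that name this user as a subject.
for rb in rbac.list_role_binding_for_all_namespaces().items:
    for subj in rb.subjects or []:
        if subj.kind == "User" and subj.name == user:
            print(f"RoleBinding {rb.metadata.namespace}/{rb.metadata.name} "
                  f"-> {rb.role_ref.kind} {rb.role_ref.name}")
```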
KSOC, the Kubernetes Security Operations Center, was built by some of the world's leading Kubernetes experts to handle the security challenges teams face when adopting and running Kubernetes. Our cloud native platform allows users to easily identify and remediate vulnerabilities, misconfigurations, and RBAC issues. To learn more, get in touch with our team.