I spent some time playing with the new type of container in Kubernetes and came up with a post and a few challenging yet fun practical exercises that may help you master the new feature faster.
Kubernetes 1.28 introduced a new type of container - "native" sidecars. Why was this addition needed? How do the new sidecar containers compare to regular and init containers? What use cases does this new type of container enable? And most importantly, how can I get my hands dirty with the hot new feature? Let's find out!
First, a potentially surprising fact: the 1.28 release didn't add a new `Sidecar` type or even a `sidecarContainers` field to the Pod spec - the so-called "native sidecar" containers are just the same old init containers, but with some new properties that make them behave unlike any other type of container. At first glance, this may seem like a bad design decision, but once you wrap your head around it, you'll see that it's actually a clever and future-proof solution to a rather delicate architectural problem.
Before diving into the details of the new sidecar containers, let's quickly recap the original difference between the regular and init containers in Kubernetes.
Every Pod must have at least one regular container defined in its `.spec.containers` list. When a Pod has multiple regular containers, they all start and run concurrently, and if some of the Pod's containers terminate, they become subject to the Pod-wide restart policy.
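For illustration, here is a minimal sketch of a Pod with two concurrently running regular containers (the image names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: multi-container-pod
spec:
  restartPolicy: Always             # Pod-wide policy - applies to all containers below
  containers:
    - name: app                     # regular containers start and run concurrently;
      image: example.com/app:1.0    # hypothetical image
    - name: helper                  # there is no ordering guarantee between them
      image: example.com/helper:1.0 # hypothetical image
```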
Multiple concurrent and restartable containers deployed together as a single "unit" is a powerful abstraction, but there are also situations when some of the containers in such a group may need to:

- start (and potentially even finish) before the others, or
- run to completion just once, while the rest of the containers keep running.
Enforcing the startup order of regular containers is tricky but doable - a bunch of ugly-looking shell scripts usually do the trick (see the sketch below). However, running some of the containers to completion while keeping others restartable is a much harder problem to solve. Simply setting the restart policy to `OnFailure` is not good enough because when one of the regular containers terminates, the Pod stops being ready, meaning that it's no longer able to serve traffic even if its primary container is still up and running.
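A minimal sketch of such a workaround - the container names, images, and the readiness endpoint are hypothetical, but the "wait loop" pattern itself was a common way to order regular containers:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ordered-startup-pod
spec:
  containers:
    - name: proxy
      image: example.com/proxy:1.0  # hypothetical image exposing /healthz on 15021
    - name: app
      image: example.com/app:1.0    # hypothetical image
      command: ["sh", "-c"]
      args:
        - |
          # busy-wait until the proxy container starts answering
          until wget -qO- http://localhost:15021/healthz; do
            echo "waiting for proxy..."; sleep 1
          done
          exec /usr/local/bin/app   # hypothetical app binary
```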
A proper solution was needed to address the above problems, and that's how the init containers were born.
For quite a while now, the Kubernetes Pod spec has had another list - `.spec.initContainers`.
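A minimal sketch of a Pod using it - the images and the dependency check below are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-init
spec:
  initContainers:
    - name: wait-for-db             # runs to completion first...
      image: busybox:1.36
      command: ["sh", "-c", "until wget -qO- http://db.example.com:8080/healthz; do sleep 1; done"]
  containers:
    - name: app                     # ...and only then the app starts
      image: example.com/app:1.0    # hypothetical image
```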
Even though the elements of this new list have a literally identical set of attributes to the regular containers (it's the same `Container` type in the API), init containers behave very differently.
Init containers:

- start before the regular containers of the Pod,
- run sequentially, one at a time,
- and each must run to completion before the next one (or any regular container) starts.

...and because of the above design, init containers:

- are long gone by the time the regular containers start, so they cannot provide any ongoing auxiliary functionality,
- don't respect the Pod-wide restart policy `Always` - init containers then fall back to `OnFailure`.

Historically, init containers were used to perform some auxiliary "one-off" tasks before the main application container startup: waiting for a dependency to become available, pre-populating a shared volume, registering the Pod with an external system, and the like.
But there is more auxiliary functionality that is beneficial to keep outside of the main application container but still in the same Pod: log and metrics collectors, network proxies (think service mesh), configuration watchers and reloaders, and similar helpers.
And such functionality doesn't fit the init container model because the above containers:

- need to start before the main application container(s),
- must keep running alongside them for the Pod's entire lifespan,
- and should be restarted if they fail, without affecting the rest of the Pod.
Unsurprisingly, over time, another architectural pattern emerged that perfectly describes the above requirements - Sidecar Containers. However, up until the 1.28 release, sidecars in Kubernetes had to be implemented using regular containers. And since there is no startup order guarantee for regular containers, and there is just one restart policy to rule them all, engineers had to come up with various workarounds to make sidecar containers behave as expected.
So, how has Kubernetes 1.28 changed the situation?
The 1.28 release didn't add a new `Sidecar` type or even a `.spec.sidecarContainers` list. Instead, it introduced a new `restartPolicy` attribute for... containers!
In addition to the Pod-wide restart policy, now containers can have their own restart policy, but only if:

- the container is an init container, and
- the restart policy value is `Always`.
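In manifest form, a "native sidecar" therefore looks like this (a minimal sketch; the image names are hypothetical):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-native-sidecar
spec:
  initContainers:
    - name: logshipper
      image: example.com/logshipper:1.0 # hypothetical image
      restartPolicy: Always             # the new attribute - turns this init container into a sidecar
  containers:
    - name: app
      image: example.com/app:1.0        # hypothetical image
```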
So, what's the difference between traditional init containers and init containers with the `restartPolicy: Always` attribute?
The new type of init containers:

- still start in order with the other init containers, but the kubelet doesn't wait for them to complete - only to start (or to pass the startup probe, if one is defined),
- keep running alongside the regular containers for the rest of the Pod's lifespan,
- are restarted on failure regardless of the Pod-wide restart policy,
- and don't prevent the Pod from terminating once all regular containers are done.
In other words, the only "true init" thing about the new type of containers is that they still respect the startup order. The rest of the behavior is very different from (if not outright opposite to) the traditional init logic. However, the new behavior is exactly what one would expect from a Sidecar container.
Yes, it's not the most obvious way to achieve the desired behavior. But this design paves the way for a more advanced type of container - KEP-753: Sidecar containers explains the motivation for reusing the `initContainers` list and even mentions a new type of container called `infrastructureContainers`, which might be used to unify the behavior of the old init and the new sidecar containers in the future.
Here's how the new behavior can be observed using a single Pod with carefully crafted containers.
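Below is a minimal sketch of such a Pod (the names, images, and timings are illustrative): a traditional init container, a native sidecar, and a main container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  restartPolicy: Never
  initContainers:
    - name: init               # traditional init container - runs to completion first
      image: busybox:1.36
      command: ["sh", "-c", "echo 'init task done'; sleep 2"]
    - name: sidecar            # native sidecar - starts second and keeps running
      image: busybox:1.36
      restartPolicy: Always
      command: ["sh", "-c", "while true; do echo 'sidecar is alive'; sleep 5; done"]
  containers:
    - name: app                # main container - starts only after the sidecar is up
      image: busybox:1.36
      command: ["sh", "-c", "echo 'app is running'; sleep 30"]
```

Watch the container statuses with `kubectl get pod sidecar-demo -w`: the init container finishes first, the sidecar starts and stays up, the app runs, and once the app exits, the kubelet shuts the sidecar down and the Pod completes.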
And now, for the most exciting part! I've prepared a few challenging yet fun problems to help you better internalize your knowledge of sidecars. Ready to tackle them? Then continue on at iximiuz Labs.