Hey, hey!
It's Ivan Velichko, a software engineer and a technical storyteller. I brought you a monthly roundup of all things Containers, Kubernetes, and Backend Development from iximiuz.com.
The main theme of this (rather productive) month is the Kubernetes API - I've posted a few long-form write-ups and started one promising Go repository. But first, I'm glad to announce that the newsletter got its very first sponsor - Teleport. And I'm super happy about it - not just because it makes the newsletter pay for itself (ConvertKit isn't cheap!), but also because of the things I'm asked to include in my emails. Teleport folks have a surprisingly good technical blog, so I don't have to feel guilty about the sponsored content - it's something I could have shared here anyway:
SPONSORED Check out this article where Teleport explores SSH best practices to boost the security of your infrastructure: from changing SSH default options and using bastion hosts to avoiding password-based authentication by using short-lived SSH certificates. And the best part - Teleport makes it all simple to implement.
Last year I spent a substantial amount of time writing Kubernetes controllers. It was simultaneously a fun and challenging activity. The idea of a controller is simple and powerful - read some objects (desired state), check the statuses of others (observed state), and make changes to the world to bring it closer to the desired state; then repeat. However, the devil is in the details.
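To make the "observe, compare, act, repeat" idea concrete, here's a minimal, Kubernetes-free sketch of a control loop. Everything in it (the `world` struct, the replica counts) is made up for illustration - a real controller talks to the API server instead of mutating a local struct:

```go
package main

import (
	"fmt"
	"time"
)

// A toy "cluster" state: the number of running replicas.
type world struct{ running int }

// reconcile compares the desired state with the observed state
// and makes one corrective step toward the desired state.
func reconcile(desired int, w *world) {
	switch {
	case w.running < desired:
		w.running++ // "start" a replica
	case w.running > desired:
		w.running-- // "stop" a replica
	}
}

func main() {
	w := &world{running: 0}
	desired := 3

	// The control loop: observe, compare, act, repeat.
	for i := 0; i < 5; i++ {
		reconcile(desired, w)
		fmt.Printf("iteration %d: running=%d\n", i, w.running)
		time.Sleep(10 * time.Millisecond)
	}
}
```

Note how the loop keeps running even after the desired state is reached - reconciliation is level-triggered, not a one-shot action.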
The Kubernetes API is the basis of any controller, but it comes with its own quirks: Resource Types and Groups, Objects and Kinds, resource versions and optimistic locking, etc. Combined with a statically typed language like Go, it makes the learning curve quite steep. Why are there three different clients in the client-go project? When should you use the typed client, and when is it better to stick with the dynamic client and work with Kubernetes objects as Unstructured structs (no pun intended)? And wtf are RESTMapper and runtime.Scheme?
When you master the API access basics, a whole lot of more advanced questions arise. A naive control loop implementation that literally GETs resources from the API on every iteration is inefficient and prone to all sorts of concurrency issues. To address this problem, Kubernetes' most advanced API client, client-go, brings a bunch of higher-level, controller-tailored abstractions: Informers to watch for resource changes, Caches to reduce API access, a Work Queue to line up the changes in one processing flow, etc. But that's not it yet!
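The informer-plus-work-queue shape is easier to grasp stripped of the client-go machinery. Here's a toy version using plain channels (the keys and names are invented): watch events are funneled into a queue, and a single worker drains it, so the processing logic never races with itself:

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	events := make(chan string)   // stand-in for an informer's watch stream
	queue := make(chan string, 8) // stand-in for client-go's work queue

	// "Informer": forward change notifications into the queue.
	go func() {
		for e := range events {
			queue <- e
		}
		close(queue)
	}()

	// Worker: process queued object keys one at a time.
	var wg sync.WaitGroup
	wg.Add(1)
	go func() {
		defer wg.Done()
		for key := range queue {
			fmt.Println("reconciling", key)
		}
	}()

	// Simulate two watch events arriving.
	for _, key := range []string{"default/web", "default/db"} {
		events <- key
	}
	close(events)
	wg.Wait()
}
```

The real work queue also deduplicates keys and handles retries with backoff, but the core idea is the same: decouple event delivery from event processing.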
Historically, writing Kubernetes controllers involved quite some boilerplate code. So, many repetitive tasks were codified in the controller-runtime package, which extends the capabilities of the already advanced API client. Bootstrapping of controllers, including CRD and webhook creation, was automated by the kubebuilder project. But Red Hat (or was it CoreOS?) thought that wasn't enough and introduced the Operator SDK, solving more or less the same problem but adding extra capabilities on top. And neither kubebuilder nor the Operator SDK (or maybe the latter does?) actually has a runtime footprint! It's still the same controller-runtime in the end, which in turn is just a fancy wrapper around client-go. But how would you know that?
When I dove into this zoo of concepts, libraries, and projects, I almost sank 🙈 As it turned out, nothing was really complicated - but everything was so entangled! So, the time has come: I'm starting a series of articles (or two?) aiming to share my Kubernetes API and controllers learning path.
The idea is to start from the Kubernetes API itself, then move to the client, explore its basic and then advanced capabilities and how they are used for writing controllers, and finally touch upon the controller-runtime and kubebuilder projects. Hopefully, with a lot of practical examples along the way. Here's what I've got for now:
It was definitely a good start, and I've been getting a lot of positive and constructive feedback. Let's see what February brings 🙂
A lot of stuff!
🔥 Tracing the path of network traffic in Kubernetes - a massive one, but given the vastness of the Kubernetes networking topic, the post does a really great job of condensing it into a single read. It reminded me of a Twitter thread I posted some time ago sharing my way of tackling Kubernetes networking. Hopefully, one day I'll turn it into a full-fledged blog post too.
✅ Two reasons Kubernetes is so complex - sounds like it boils down not to the actual complexity of Kubernetes but to developers' wrong expectations of it. Kubernetes is not a platform to simplify your deployments - it's a full-blown cluster Operating System. And operating a group of potentially heterogeneous servers is a much broader task than launching a bunch of containers. Everything you expect an OS to do for you to utilize the hardware resources of your laptop, Kubernetes does for groups of servers. On top of that, Kubernetes' design choice to implement everything as "declare the desired state and wait until control loops reconcile it" makes it harder to reason about the behavior of the system. But I've got a feeling that an imperative implementation of a distributed OS would be an even bigger mess :)
✅ Using Admission Controllers to Detect Container Drift at Runtime - consider it a continuation of the above rant. When we combine the declarative approach with manual ad-hoc changes, the end state of the system becomes much harder to predict. GitOps says the VCS is the only source of truth - you make a change to your code or configs, push it to Git, and wait until a CI/CD pipeline applies it to production. However, when something goes wrong, folks will probably turn off their pipelines for the period of troubleshooting and start good old manual debugging. But how can we make sure the end state of the system is reflected in Git? Do we need a custom control loop making sure all the manual changes are eventually reverted back to the latest state in Git? And if it breaks production one more time, we'll eventually remember to backport the manual adjustments to our repos.
✅ The Rise of ClickOps - and while we're thinking of how to befriend GitOps and control loops, Corey Quinn already lives in the future. I spent some time with harness.io last month, and its UX is probably what's supposed to be called ClickOps - you configure stuff using a fancy UI, but then export the pipeline configs into (an unreadable mess of) YAML files and commit them somewhere. Can't say I enjoyed it.
✅ The ROAD to SRE - a set of (IMO, reasonable) principles folks came up with while introducing SRE to their organization. One of the main goals was to avoid creating yet another Ops team. Reminded me of my rant on DevOps, SRE, and Platform Engineering and what makes them different.
And now back to the roots.
✅ The HTTP QUERY Method - surprisingly, a draft of a new HTTP method definition. Think GET with a body. The lack of a body makes GET requests ill-suited for situations when the query parameters are lengthy. People often use POST in such cases, but it's not RESTful. So, adding a GET with a body to the HTTP spec makes perfect sense, actually.
✅ Some ways DNS can break - Julia Evans used her immense audience for good one more time and collected a dozen real-world examples of how DNS can break your stuff. And believe it or not, I was involved in a DNS-related incident this week too. Of course, it didn't look like DNS at all until someone noticed that the period of Kube DNS errors matched perfectly with the time when services had troubles. So, it's always DNS. Or should we say Kube DNS these days? 🙂
🔥 systemd-by-example.com - this is nuts! Get a disposable Linux box (a container, actually) with systemd and play with it right from your browser.
✅ Dev corrupts NPM libs 'colors' and 'faker', breaking thousands of apps - oops, someone did it again. The way we manage our dependencies is clearly broken, and some ecosystems are more broken than others. Here are some thoughts by Russ Cox about potential improvements that resonated with me.
✅ Introducing Ephemeral Containers - it shouldn't have been hard to add support for ephemeral containers, given the way Kubernetes implements Pods. But speaking of container drift (see above), I find this feature pretty valuable.
Ok, I should probably stop adding stuff to this issue - it's getting much bigger than I expected. Should I start sending the newsletter twice a month, maybe? Let me know in the replies, folks!
Cheers,
Ivan Velichko
P.S. If you find this newsletter helpful, please spread the word - forward this email to a friend :)