Kubernetes Part 1: When All You Have is a Hammer

June 28, 2022 by Bart Vanbrabant

There is a strong tendency in the telecom and enterprise industries to use Kubernetes (K8s) for any automation and orchestration task. We believe this is a case of “when all you have is a hammer, everything starts looking like a nail”.

Just because it’s possible does not mean it is a good idea. pic.twitter.com/1zvzxAACat

— Hannes Gredler (@HannesGredler) April 5, 2022

It’s important to select the right tool for the right job. Therefore, we will look deeper into the difference between K8s (a container orchestration platform) and an end-to-end service orchestrator like Inmanta. In this article we focus on Kubernetes, and in the next one we explain the difference as well as the synergies with Inmanta’s multi-domain service orchestrator.

TL;DR

  • K8s is a platform to build, deploy, manage and maintain containerized applications at scale. Applications on K8s should be built according to specific best practices, e.g. https://12factor.net/
  • Inmanta is a general-purpose service orchestrator to build, deploy, manage and maintain end-to-end services from existing building blocks across multiple domains (network, cloud, data center…).
  • Does K8s turn any piece of software into a managed solution? No.
  • Does K8s magically imbue everything inside it with a certain quality? No.
  • Can Inmanta run on K8s? Yes.
  • Can Inmanta delegate to K8s? Yes.
  • Can K8s delegate to Inmanta? Yes.

Kubernetes the beautiful

K8s is one of the marvels of modern software engineering. It is an exceptionally well-built piece of software with a very specific purpose: run applications at scale. It seems to be designed under the motto: hard choices but smart choices. 

At scale you don’t need ordering

Let me explain with a small example. Many refer to Kubernetes as a container orchestrator. For an orchestrator, one of the basic functionalities one would expect is the ability to manage dependencies between services: first deploy the database server, and only then the application server that depends on the database. 

When operating at Google’s scale, you no longer have a single database server or a single application server. You have a database cluster of thousands of nodes and an application cluster of thousands of nodes. At such a large scale, the database will rarely be completely up (some nodes will fail at any given time) or completely down (some nodes will always be up).

As a best practice for large-scale applications, all parts of the application have to be failure tolerant. Because there’s always something broken, it is important that a component failure can’t escalate into a complete system failure. Every component of the application must be able to gracefully handle the (partial) failure of any component it depends on. In our example, the application must be able to work without the database, or with the database degrading, failing, coming back up at a different location, and scaling up or down. 
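As a rough sketch of what this means in application code, consider the following Python fragment (all names here, such as the “db” host and the query helper, are made up for illustration): instead of assuming the database was deployed first, the application retries with backoff and returns a degraded answer rather than crashing.

import socket
import time

# Hypothetical names: in K8s, "db" could be the DNS name of a Service.
DB_HOST, DB_PORT = "db", 5432

def db_reachable(timeout=1.0):
    # Cheap TCP reachability check; a real application would use its
    # database driver and connection pool instead.
    try:
        with socket.create_connection((DB_HOST, DB_PORT), timeout=timeout):
            return True
    except OSError:
        return False

def query_orders():
    # Placeholder for the real database query.
    return {"orders": ["..."], "degraded": False}

def fetch_orders(attempts=5):
    # Retry with exponential backoff, then degrade gracefully: the
    # failure of a dependency must not escalate into our own failure.
    backoff = 0.5
    for _ in range(attempts):
        if db_reachable():
            return query_orders()
        time.sleep(backoff)
        backoff = min(backoff * 2, 8.0)
    return {"orders": [], "degraded": True}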

So at scale, ordering deployment steps is not even a well-defined concept. And as a best practice, applications built for scale should not care about ordering: if their dependencies are not there yet, they have to be able to handle that by themselves.

So K8s doesn’t have support for ordering deployment steps at all! Instead, it provides very sophisticated support for application components to keep track of where their dependencies are right now!
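Two of those mechanisms are Services, which give a moving set of pods a stable DNS name, and readiness probes, where the kubelet polls a health endpoint and only routes traffic to pods that report ready. Here is a minimal sketch of the application side of a readiness probe, reusing the hypothetical db:5432 dependency from above: the pod reports ready only while its dependency is reachable, so K8s tracks the “ordering” continuously instead of enforcing it once at deploy time.

import socket
from http.server import BaseHTTPRequestHandler, HTTPServer

DB_HOST, DB_PORT = "db", 5432  # hypothetical Service name of the dependency

class Health(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/readyz":
            # Report ready only when the dependency is reachable right now;
            # a readinessProbe pointed at this path makes K8s send traffic
            # to this pod only during those moments.
            try:
                socket.create_connection((DB_HOST, DB_PORT), timeout=1.0).close()
                self.send_response(200)
            except OSError:
                self.send_response(503)
        else:
            self.send_response(404)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("", 8080), Health).serve_forever()

The matching readinessProbe in the pod spec would point an httpGet at /readyz; the exact probe timings are deployment-specific.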

The purpose defines the limits

The K8s design is full of these choices: every limitation built into the design also happens to match a best practice for operating at scale. This allows the design of K8s itself to be more elegant and at the same time nudges users to build better applications. A whole ecosystem of best practices has established itself around this, for example https://12factor.net/.
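One of those practices, “store config in the environment” (factor III of the twelve-factor methodology), is easy to illustrate. Configuration is read from environment variables, which is exactly what K8s injects into containers from ConfigMaps and Secrets; the variable names below are made up for illustration.

import os

# Hypothetical variable names; in K8s these would typically be injected
# from a ConfigMap or Secret via the pod spec.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgresql://localhost/dev")
LOG_LEVEL = os.environ.get("LOG_LEVEL", "INFO")

print(f"connecting to {DATABASE_URL} (log level {LOG_LEVEL})")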

This laser-like focus makes K8s into the great tool it is. But it is also one of the great risks of K8s: it is built so well that it is easy to forget that it is designed for a very specific purpose, with very specific limitations. It is possible to run any application on K8s, but K8s is intended to run a specific type of application. 

Building applications according to the best practices of K8s greatly reduces operational cost. Running these applications on K8s can further reduce this cost. K8s also makes it very easy to deploy anything, even if it doesn’t follow its best practices. But it does not magically turn everything it touches into a managed solution that ‘just works’.

It is very tempting to get caught up in the excitement of working with such a great tool and forget the basic reality that if you can’t operate a system at all, you won’t be able to operate it using K8s either. 

Amongst advanced K8s users, there is a general fear that novice users might get caught up in optimism and forget the basic rules of the game. Even Kelsey Hightower, Google’s chief K8s evangelist and an extremely respected engineer known for his absolute integrity, spends quite some effort repeating this warning. 

As builders we are responsible for calling out the flaws and limitations in the systems we work on so people don't hurt themselves by mistake. https://t.co/gfnhwf9D7E

— Kelsey Hightower (@kelseyhightower) January 28, 2022

You can run just about any workload on Kubernetes, but running a stateful workload presents a real challenge, especially for those that think it's easy as:

$ kubectl apply -f kafka.yaml

— Kelsey Hightower (@kelseyhightower) March 24, 2019

Some people believe that rubbing Kubernetes on a stateful workload turns it into a fully managed database offering rivaling RDS. This is false. Maybe with enough effort, and additional components, and an SRE team, you can build RDS on top of Kubernetes.

— Kelsey Hightower (@kelseyhightower) March 24, 2019