Unlock the Power of Kubernetes: A Simplified Guide for Non-Ops Folks

Are you a developer, engineering manager, or anyone else who isn’t a dedicated Kubernetes operations expert, but needs to understand what all the buzz is about? Have you ever felt lost in a sea of Kubernetes jargon?

This series, Kubernetes Simplified, is designed specifically for you.

Why Kubernetes Matters (Even If You’re Not “Ops”):

Kubernetes has become the standard for deploying and managing applications in the cloud. Understanding its core concepts allows you to:

  • Communicate Effectively: Speak the same language as your operations team, leading to smoother collaboration and faster problem-solving.
  • Make Informed Decisions: Contribute to architectural discussions and understand the implications of different deployment strategies.
  • Unlock Scalability and Resilience: Learn how Kubernetes enables applications to scale automatically and recover from failures, leading to more reliable systems.
  • Become a More Valuable Asset: Enhance your skills and broaden your understanding of modern software development and deployment practices.

What You’ll Learn (Without Getting Overwhelmed):

This course provides a high-level overview of key Kubernetes concepts, focusing on practical understanding rather than deep dives into technical minutiae. We’ll cover:

  • The “Why” Behind Kubernetes: Understand the evolution of software deployment and the problems that Kubernetes solves.
  • Core Components: Learn the purpose and function of essential Kubernetes building blocks, like Pods, Services, Deployments, and more.
  • Key Concepts: Master essential ideas, such as Affinity, Namespaces, and cluster networking.

What This Course Is Not:

This isn’t a hands-on tutorial that will teach you how to build a Kubernetes cluster from scratch. We won’t be diving deep into YAML files or command-line arguments. Instead, this course is designed to give you a solid foundation so you can effectively participate in Kubernetes-related discussions and understand the big picture.

Ready to cut through the complexity and gain a clear understanding of Kubernetes? Let’s get started!

Kubernetes

Why did such a thing come into being? Why was Kubernetes created in the first place?

[Image: SNL "Why though?" meme]

Let’s look at some of the things that inspired the creation of Kubernetes. This post is primarily background information. But it’ll give you an idea of the evolution of software development and deployment that will help you appreciate Kubernetes better.

Traditional software deployment

Once upon a time, probably in the early noughties (2000s) or maybe slightly before that, we ran applications on physical servers. Operations (Ops) teams were responsible for purchasing the hardware needed to keep all applications running smoothly and doing what they were supposed to do. Developers were asked to forecast data growth or growth in application scale, and applications were deployed on servers where they often shared resources, with little control over how much of any given resource an application could or should consume. One or more applications would be deployed to a server. Sometimes a load balancer was put in place, multiple physical servers were set up behind it, and multiple instances of the application were deployed across those servers. Traffic to the application was then controlled by the load balancer software.

Challenges with this approach

  1. Resource hogging - an app could consume more resources than it should and negatively impact other apps on the same server. There was a lack of isolation at the application level.
  2. Poor resource utilisation - if a server was under-utilised for a period of time, all that expensive hardware was wasted. The magnitude of waste was directly proportional to the scale at which the applications were deployed!

Virtualization of hardware

This was the phase when companies still had physical servers but provisioned virtual machines (VMs) for their applications to run on. So a server with 32 cores would host VMs, each consuming a certain number of cores and a certain amount of memory. This introduced a level of isolation that was unavailable on bare-metal servers before! Applications running on VM 1 on machine A couldn’t access the resources of another application running on VM 2 on the same machine.

Virtualization allowed better utilization of resources on the servers. It also enabled better scalability, as applications could be added and updated far more easily than before, reducing hardware costs to a degree that was unimaginable until then.

One physical server could be virtualized into a cluster of virtual machines!

Challenges

Each VM was still a full machine running its own operating system, applications, and dependencies, all on top of virtual hardware. Thus, the minimum resources required to run multiple virtual machines were still quite high compared to what we do today with containers.

Containerisation of applications

Containers are the next step in enabling us to better utilise the resources of physical servers. They are a way of bundling an application and its dependencies, built for a certain processor architecture and operating system combination, thereby enabling you to deploy multiple containers on a machine running that operating system. Each container in this context could be a different app or a different instance of the same app. They are basically an isolation mechanism on the host machine’s operating system that separates one app’s view of resources from another’s.

If you are confused, I’d strongly recommend reading this answer on Stack Overflow. Or you could read an earlier post I wrote about this topic, which goes slightly deeper into the differences between containers and virtual machines.

Containerisation comes with various advantages:

  1. Faster deployments and creation of apps - compared to VM images, container images are a lot quicker to build and deploy
  2. DevOps separation - container images are published at build time, and deployment is a separate step. Thus, the developer workflow is decoupled from the runtime infrastructure.
  3. Improved observability as you are able to surface OS level metrics and other information
  4. Environment consistency - Application runs the same on the same OS on any hardware
  5. All the qualities described earlier contribute to building loosely coupled, distributed, elastic microservices.
  6. Resource utilisation and isolation improvements - run more containers on the same machine, while isolation ensures that, with good configuration, one container does not negatively impact the performance of others.

Challenges here

As you embrace a microservices architecture, there will be many containers to manage in production. You need to manage all of them and ensure there is no downtime. Imagine having to write, in a reliable way, all the software that would do that for you: a system that ensures your containers are constantly running and serving what they are supposed to, no matter what errors they encounter. That’s what Kubernetes is. It takes care of scaling your application and handling failovers, it provides deployment patterns, and much more.

What Kubernetes gives you

A whole load of goodies if you ask me.

I don’t know if there is a point in listing them all here. But I’ll definitely share some highlights, and you can read the full list in the official Kubernetes docs.

  1. Service discovery and load balancing - exposes a container using a DNS name or an IP address, and load balances network traffic so that traffic to your containers stays manageable.
  2. Storage orchestration - manages the mounting of whichever storage system you choose: local, public cloud, and so on.
  3. Automated rollouts and rollbacks - declaratively describe the desired state for your containers and Kubernetes will take care of getting to that state (there’s a small sketch of this after the list).
  4. Automated bin packing - you give Kubernetes a cluster of nodes to use for running containerized tasks and tell it how much CPU and memory each container needs. Kubernetes then decides which node each container runs on, making the best use of the available resources.
  5. Self-healing - automatically restarts containers that fail, replaces containers, kills containers that do not respond to the health checks you define, and only exposes your containers when they are ready to serve requests.
  6. Secrets and configuration management - store and manage sensitive information like tokens, SSH keys, and passwords, and deploy and update such secrets without rebuilding your container images.
  7. Horizontal scaling - scale your app up and down with a command, through a UI, or automatically based on resource usage.
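
To make a few of those bullet points concrete, here is a rough sketch of what such configuration can look like. Everything in it is hypothetical and made up purely for illustration: the web-app name, the registry.example.com/web-app:1.0 image, the /healthz endpoint, the ports, and the resource numbers. It declares a desired state of three replicas, sets the CPU and memory requests that Kubernetes uses for bin packing, defines a liveness probe for self-healing, and adds a Service for discovery and load balancing.

```yaml
# Hypothetical sketch only: names, image, ports, and numbers are made up.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3                      # desired state: keep three copies running
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: registry.example.com/web-app:1.0   # hypothetical container image
          ports:
            - containerPort: 8080
          resources:
            requests:              # used for automated bin packing onto nodes
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          livenessProbe:           # self-healing: restart the container if this check fails
            httpGet:
              path: /healthz
              port: 8080
---
# A Service gives the Pods a stable DNS name and load balances traffic across them.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 8080
```

You would hand a file like this to the cluster (for example with kubectl apply) and Kubernetes works out how to get there. And remember, actually writing YAML like this is exactly the kind of detail this series won’t dwell on.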

That sounds amazing! So I guess then everyone must be using Kubernetes already! Let’s hold that thought. Let’s explore why someone might not want to use Kubernetes.

What k8s is not

Kubernetes is a container orchestration system that enables you to run a distributed system well. However, if you only have a handful of applications that are distributed and already working well the way they are, then going through the effort of learning and understanding Kubernetes just to run them might be overkill.

Kubernetes is not a Platform as a Service (PaaS), i.e. it does not hide the operating system from your applications. It is very much a building block for a PaaS, but it is not a PaaS in itself.

Kubernetes is not a continuous integration or continuous deployment (CI/CD) solution; it doesn’t build your application. It also does not come out of the box with middleware like message buses, databases, or data processing frameworks, although, with some learning, you could configure such services to run on Kubernetes.

Furthermore, it is highly flexible. This means it comes with some (sensible) defaults, but you have to know what you are doing to run a Kubernetes cluster.

Although I used the word orchestration earlier, Kubernetes actually eliminates the need for orchestration, where orchestration means coordinating a series of steps that execute in a certain order. Instead, Kubernetes continuously drives the current state towards a desired state, which you define through configuration.

Outro

I hope this keeps you interested. I’ll try to make this content consumable in other ways so that you can breeze through it easily.