Learn & Build

Practical Programming Tutorials & Projects


How Kubernetes Works Internally (Beginner-Friendly Explanation)

After understanding what Kubernetes is and why it exists, the next logical question is:

How does Kubernetes actually work under the hood?

If you haven’t read it yet, start with Understanding Kubernetes: A Beginner’s Guide, where I explain what Kubernetes is and why it’s needed.

In this post, I’ll explain how Kubernetes works internally — at a high level — without diving into unnecessary complexity. The goal is to help you understand the main building blocks and how they interact.


The Big Picture

At its core, Kubernetes is a control system.

You tell Kubernetes:

  • What you want to run
  • How many instances you want
  • How it should behave

Kubernetes continuously works to make sure the actual state matches the desired state.
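This reconciliation idea can be sketched in a few lines of toy Python. This is a mental model, not real Kubernetes code: it just compares a desired replica count with the actual one and decides what to do.

```python
def reconcile(desired_replicas, actual_replicas):
    """Toy reconciliation step: decide how to close the gap
    between desired state and actual state."""
    if actual_replicas < desired_replicas:
        return f"start {desired_replicas - actual_replicas} more instance(s)"
    if actual_replicas > desired_replicas:
        return f"stop {actual_replicas - desired_replicas} instance(s)"
    return "nothing to do"

# Kubernetes runs checks like this in a loop, forever.
print(reconcile(3, 1))  # start 2 more instance(s)
print(reconcile(3, 3))  # nothing to do
```

The key insight: you never tell Kubernetes *how* to fix things, only *what* the end result should look like.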


Kubernetes Cluster Basics

A Kubernetes setup is called a cluster.

A cluster has two main parts:

  • Control Plane – the brain
  • Worker Nodes – where applications run

The Control Plane (The Brain of Kubernetes)

The control plane is responsible for deciding what should happen.

Its key components include:

API Server

  • Entry point to Kubernetes
  • All commands go through it (kubectl, dashboards, automation)
  • Validates and processes requests

Scheduler

  • Decides which node your application should run on
  • Evaluates nodes based on available resources and constraints
  • Chooses the best node for each workload
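A drastically simplified version of that decision might look like this toy Python sketch. The real scheduler weighs many more factors (affinity rules, taints, spreading), and the node names and numbers here are made up:

```python
def pick_node(nodes, required_cpu):
    """Toy scheduler: among nodes with enough free CPU,
    pick the one with the most room left."""
    candidates = {name: free for name, free in nodes.items()
                  if free >= required_cpu}
    if not candidates:
        return None  # nothing fits; in Kubernetes the pod would stay Pending
    return max(candidates, key=candidates.get)

# Free CPU (in millicores) per node -- hypothetical values.
nodes = {"node-a": 500, "node-b": 2000, "node-c": 100}
print(pick_node(nodes, 400))  # node-b
```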

Controller Manager

  • Watches the system continuously
  • Detects differences between desired and actual state
  • Takes action to fix problems (e.g., restarting failed workloads)

etcd

  • Distributed key-value store
  • Stores the entire cluster state and configuration
  • The single source of truth for Kubernetes

Worker Nodes (Where Apps Run)

Worker nodes do the actual work of running applications.

Each node contains:

kubelet

  • Communicates with the control plane
  • Ensures containers are running as expected

Container Runtime

  • Runs containers (containerd, CRI-O, etc.; built-in Docker support via dockershim was removed in Kubernetes 1.24)
  • Pulls images and starts containers

kube-proxy

  • Maintains network rules on each node
  • Routes traffic for services to the right pods

How Everything Works Together

Here’s a simplified flow:

  1. You submit a request (for example, deploy an app)
  2. The API server validates it and stores the desired state in etcd
  3. The scheduler selects a worker node for each new workload
  4. The kubelet on that node starts the containers
  5. Controllers continuously compare desired and actual state
  6. If something breaks, Kubernetes fixes it automatically

This loop runs constantly.
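To make the self-healing part of this loop concrete, here is another toy Python simulation (again, not real Kubernetes code): a controller compares the set of running containers with the desired set and restarts whatever is missing.

```python
def heal(desired, running):
    """Toy control-loop step: restart anything that should be
    running but isn't, and return the new running set."""
    missing = desired - running
    for name in sorted(missing):
        print(f"restarting {name}")  # in real Kubernetes, the kubelet does this
    return running | missing

desired = {"web-1", "web-2", "web-3"}
running = {"web-1", "web-3"}          # web-2 has crashed
running = heal(desired, running)      # prints: restarting web-2
assert running == desired             # actual state matches desired state again
```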


Why This Design Matters

This architecture enables:

  • Self-healing applications
  • Automatic scaling
  • Zero-downtime deployments
  • Infrastructure abstraction

You don’t manage servers — Kubernetes does.


How This Leads to Pods and Deployments

Now that you understand the internals, concepts like these make more sense:

  • Pods – what actually runs containers
  • Deployments – how Kubernetes manages replicas and updates
  • YAML files – how you describe desired state

These will be covered next.
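As a small preview of what's coming, here is what "describing desired state" looks like in practice. This is a sketch of a standard Deployment manifest (the name and image below are placeholders); every line is explained in the upcoming posts:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app              # placeholder name
spec:
  replicas: 3               # desired state: three instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx:1.25 # placeholder image
```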


What’s Next

In upcoming posts, I’ll cover:

  • Pods vs Deployments (with examples)
  • Writing your first Kubernetes YAML
  • Running Kubernetes locally

This post is just the foundation.


Conclusion

Kubernetes works by continuously reconciling the desired state of your applications with reality.

Once you understand this internal loop, Kubernetes becomes far less mysterious — and much easier to learn.


