
Kubernetes (K8s)

Plain English

If Docker is a way to package your application into a container, Kubernetes is the system that runs those containers at scale. It decides which server to run each container on, restarts containers that crash, scales up when traffic spikes, and routes network requests to the right place. Think of it as an automated operations team that manages your containers 24/7 without human intervention.

Technical Definition

Kubernetes (K8s) is an open-source container orchestration platform, originally developed at Google (drawing on its internal Borg system) and now maintained by the Cloud Native Computing Foundation (CNCF).

Architecture:

  • Control plane: API server (kube-apiserver), scheduler (kube-scheduler), controller manager (kube-controller-manager), etcd (distributed key-value store for cluster state)
  • Worker nodes: kubelet (node agent), kube-proxy (network routing), container runtime (containerd/CRI-O)

Core resources:

  • Pod: smallest deployable unit; one or more containers sharing network and storage
  • Deployment: manages desired state for Pods (replica count, update strategy, rollback)
  • Service: stable network endpoint for accessing Pods (ClusterIP, NodePort, LoadBalancer)
  • Ingress: HTTP/HTTPS routing rules mapping external URLs to internal Services
  • ConfigMap / Secret: externalized configuration and sensitive data
  • PersistentVolumeClaim: requests for durable storage
  • Namespace: logical isolation boundary within a cluster
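As a sketch of how ConfigMaps tie into Pods, the manifest below injects configuration as environment variables. The names (`app-config`, `LOG_LEVEL`, `CACHE_TTL`) are illustrative, not part of the example later in this entry:

```yaml
# configmap.yaml -- illustrative names and values
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"
  CACHE_TTL: "300"
---
# A Pod consuming the ConfigMap as environment variables via envFrom
apiVersion: v1
kind: Pod
metadata:
  name: config-demo
spec:
  containers:
    - name: demo
      image: busybox:1.36
      command: ["sh", "-c", "env && sleep 3600"]
      envFrom:
        - configMapRef:
            name: app-config
```

Secrets are consumed the same way (`secretRef` instead of `configMapRef`), with values base64-encoded at rest in etcd.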

Key capabilities:

  • Self-healing: restarts failed containers, replaces unhealthy pods, reschedules when nodes die
  • Horizontal scaling: scales pod replicas up/down based on CPU, memory, or custom metrics (HPA)
  • Rolling updates: zero-downtime deployments by gradually replacing old pods with new ones
  • Service discovery: DNS-based discovery within the cluster (service-name.namespace.svc.cluster.local)
  • Resource management: CPU/memory requests and limits per container, preventing noisy neighbors
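As one sketch of horizontal scaling, a HorizontalPodAutoscaler targeting the `web-app` Deployment from the example below might look like this (the replica bounds and 70% CPU threshold are illustrative):

```yaml
# hpa.yaml -- illustrative thresholds
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

CPU-based HPAs require the container to declare a CPU request (as the Deployment below does), since utilization is computed against the requested amount.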

Lightweight distributions: K3s (Rancher), MicroK8s (Canonical), Kind (Kubernetes in Docker for testing).
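For a throwaway local cluster, Kind needs only Docker and its CLI; a minimal session (cluster name is arbitrary) looks like:

```shell
# Create a local cluster running inside Docker containers
$ kind create cluster --name demo

# Verify kubectl is pointed at the new cluster
$ kubectl cluster-info --context kind-demo

# Tear down when finished
$ kind delete cluster --name demo
```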

Kubernetes deployment and service

# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web
          image: myapp:1.2.0
          ports:
            - containerPort: 3000
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 512Mi
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 3000
  type: ClusterIP
# Apply and verify
$ kubectl apply -f deployment.yaml
$ kubectl get pods
NAME                       READY   STATUS    RESTARTS   AGE
web-app-6d8f7b9c4d-abc12   1/1     Running   0          30s
web-app-6d8f7b9c4d-def34   1/1     Running   0          30s
web-app-6d8f7b9c4d-ghi56   1/1     Running   0          30s

$ kubectl scale deployment web-app --replicas=5

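The rolling updates described above can be driven with standard kubectl commands; a sketch continuing the example (the 1.3.0 tag is hypothetical):

```shell
# Point the Deployment at a new image; old pods are replaced gradually
$ kubectl set image deployment/web-app web=myapp:1.3.0

# Watch rollout progress until all replicas are updated
$ kubectl rollout status deployment/web-app

# Roll back to the previous revision if something goes wrong
$ kubectl rollout undo deployment/web-app
```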
In the Wild

Kubernetes runs a significant portion of production workloads at companies of all sizes. Every major cloud provider offers managed Kubernetes (EKS, AKS, GKE). For homelab environments, K3s is a popular choice: a lightweight, single-binary distribution that runs on Raspberry Pis and mini PCs. A typical production Kubernetes cluster hosts web applications, APIs, background workers, and cron jobs, with Prometheus for monitoring, Grafana for dashboards, and cert-manager for TLS certificates. The learning curve is steep (the Kubernetes API spans hundreds of resource types), but the payoff is a self-healing, auto-scaling platform. GitOps tools like ArgoCD and Flux automate deployments by watching Git repositories for changes and syncing them to the cluster.