
Docker

Plain English

Docker is the tool that made containers mainstream. Before Docker, setting up a development environment meant installing dozens of dependencies and hoping they matched production. Docker lets you define everything your application needs in a simple text file (Dockerfile), build it into a portable package (image), and run it identically on any machine. “Works on my machine” becomes “works everywhere.”

Technical Definition

Docker is a container platform consisting of a container runtime (containerd), a CLI (docker), a build system (BuildKit), and an image registry (Docker Hub). It standardized the container workflow: build images from Dockerfiles, push to registries, pull and run anywhere.

Core concepts:

  • Dockerfile: declarative build instructions. Each instruction creates a layer in the image.
  • Image: read-only template of layered filesystem snapshots. Immutable once built. Tagged with version identifiers (e.g., nginx:1.25-alpine).
  • Container: a running instance of an image with a writable layer on top.
  • Registry: storage for images (Docker Hub, GHCR, AWS ECR, self-hosted Harbor).
  • Docker Compose: YAML-based tool for defining and running multi-container applications (web + database + cache).
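The concepts above map onto a handful of CLI commands. A sketch of the build-push-run lifecycle (the image name and the ghcr.io/acme registry namespace are illustrative, and the commands need a running Docker daemon):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp:1.0 .

# Tag and push it to a registry (ghcr.io/acme is a placeholder namespace)
docker tag myapp:1.0 ghcr.io/acme/myapp:1.0
docker push ghcr.io/acme/myapp:1.0

# On any other machine with Docker: pull the image and run a container,
# mapping container port 3000 to host port 3000
docker pull ghcr.io/acme/myapp:1.0
docker run -d --name myapp -p 3000:3000 ghcr.io/acme/myapp:1.0

# The container is a running instance of the immutable image
docker ps
docker logs myapp
```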

Dockerfile best practices:

  • Use specific base image tags (not :latest)
  • Order instructions from least to most frequently changing (leverage build cache)
  • Keep the final image small with multi-stage builds (build in one stage, copy only the artifacts into a minimal runtime stage)
  • Run as non-root user
  • Use .dockerignore to exclude unnecessary files
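For the Node.js project shown below, a .dockerignore might look like this (entries are illustrative). Excluding node_modules matters most: npm ci rebuilds it inside the image, so copying the host copy only bloats the build context and busts the cache:

```
# .dockerignore — keep the build context small and the cache stable
node_modules
dist
.git
.env
*.log
```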

Networking modes:

  • bridge (default): containers get private IPs on an internal network, port-mapped to the host
  • host: container shares the host’s network namespace (no isolation, maximum performance)
  • none: no networking
  • overlay: multi-host networking for Docker Swarm (Kubernetes uses its own CNI plugins instead)
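The mode is selected per container with the --network flag. A quick sketch (container and network names are made up, and a Docker daemon is assumed):

```shell
# bridge (default): private IP on an internal network, port 8080
# published on the host
docker run -d --network bridge -p 8080:80 nginx:1.25-alpine

# host: the container binds directly to the host's port 80 (no -p needed)
docker run -d --network host nginx:1.25-alpine

# none: no network interfaces except loopback
docker run -d --network none nginx:1.25-alpine

# user-defined bridge networks additionally give containers DNS
# resolution by name (here "api" becomes resolvable on appnet)
docker network create appnet
docker run -d --name api --network appnet nginx:1.25-alpine
```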

Multi-stage Dockerfile and Compose

# Multi-stage build: small final image
FROM node:20-alpine AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Drop devDependencies so only production modules ship in the final image
RUN npm prune --omit=dev

FROM node:20-alpine
WORKDIR /app
COPY --from=builder /app/dist ./dist
COPY --from=builder /app/node_modules ./node_modules
COPY package.json .
EXPOSE 3000
USER node
CMD ["node", "dist/server.js"]

# docker-compose.yml
services:
  web:
    build: .
    ports: ["3000:3000"]
    environment:
      DATABASE_URL: postgres://db:5432/app
    depends_on: [db, redis]
  db:
    image: postgres:16-alpine
    volumes: [pgdata:/var/lib/postgresql/data]
    environment:
      POSTGRES_DB: app
      POSTGRES_PASSWORD_FILE: /run/secrets/db_pass
    secrets: [db_pass]
  redis:
    image: redis:7-alpine
volumes:
  pgdata:
secrets:
  db_pass:
    file: ./db_pass.txt   # local file holding the database password

In the Wild

Docker is ubiquitous in modern software development. Nearly every CI/CD pipeline builds Docker images, and most production deployments run in containers. Docker Compose is the standard for local development environments (spin up your app, database, and cache with one command). In homelab setups, Docker runs services like Portainer, Home Assistant, Plex, and monitoring stacks. Much of the industry, Kubernetes included, now runs containers directly on containerd (which Docker itself uses internally), but the Docker CLI and image format remain the developer standard. Podman offers a daemonless, rootless alternative with Docker CLI compatibility.