Container
A container is like a shipping container for software. Just as a shipping container holds everything needed for transport (goods, packaging, labels) and fits on any truck, ship, or train, a software container holds everything an application needs to run (code, libraries, settings) and works the same on any computer. Unlike virtual machines, containers share the host computer’s operating system kernel, making them much lighter and faster to start.
A container is an isolated user-space instance that packages an application with its dependencies (libraries, binaries, configuration files) and runs as a process on the host operating system. Containers share the host kernel and use Linux kernel features for isolation:
- Namespaces: isolate what a container can see (PID, network, mount, user, UTS, IPC namespaces). Each container has its own process tree, network stack, and filesystem view.
- cgroups (control groups): limit and account for resource usage (CPU, memory, I/O, network bandwidth). Prevent a single container from consuming all host resources.
- Union filesystems (OverlayFS): layer read-only image layers with a writable container layer. Enables efficient image sharing and storage.
Container image: a read-only template containing the application, runtime, libraries, and filesystem. Built from a Dockerfile with layered instructions. Images are stored in registries (Docker Hub, GitHub Container Registry, private registries).
Container vs. VM:
| Aspect | Container | VM |
|---|---|---|
| Startup | Seconds | Minutes |
| Size | MBs | GBs |
| Isolation | Process-level (shared kernel) | Hardware-level (separate kernel) |
| Overhead | Minimal | Significant (full OS per VM) |
| Density | 100s per host | 10s per host |
| Security | Weaker boundary (kernel shared) | Stronger boundary (separate kernel) |
OCI (Open Container Initiative) standardizes container image format and runtime specification, ensuring portability across Docker, Podman, containerd, and CRI-O.
Container lifecycle with Docker
# Pull an image
$ docker pull nginx:alpine
# Run a container
$ docker run -d --name web -p 8080:80 nginx:alpine
# List running containers
$ docker ps
CONTAINER ID   IMAGE          STATUS         PORTS                  NAMES
a1b2c3d4e5f6   nginx:alpine   Up 2 minutes   0.0.0.0:8080->80/tcp   web
# View resource usage
$ docker stats --no-stream
CONTAINER   CPU %   MEM USAGE / LIMIT   MEM %   NET I/O
web         0.01%   3.2MiB / 16GiB      0.02%   1.2kB / 648B
# Build a custom image
$ cat Dockerfile
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
$ docker build -t myapp:1.0 .

Containers transformed how software is built and deployed. Instead of “works on my machine” problems, a container image delivers consistent behavior across development, CI/CD, staging, and production. Docker popularized containers starting in 2013, and Kubernetes became the standard orchestrator for running containers at scale. Most modern web applications, APIs, and microservices run in containers. In Proxmox homelabs, LXC containers (Linux Containers, a lighter alternative to Docker) run system services like DNS, reverse proxies, and monitoring. The container ecosystem includes registries (image storage), orchestrators (scheduling and scaling), service meshes (network policies), and observability tools (logging, tracing, metrics).
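To close out the lifecycle shown above, stop and remove the container, then delete images that are no longer needed. A sketch assuming the web container from the earlier docker run is still present:

```shell
# Stop the running container, remove it, then delete its image.
docker stop web
docker rm web
docker rmi nginx:alpine

# Optionally reclaim space: remove all stopped containers and dangling images.
docker system prune -f
```

Stopped containers and unused images persist on disk until removed, so periodic pruning is standard housekeeping on busy hosts.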