
Virtual Machine (VM)

Plain English

A virtual machine is a computer inside a computer. It behaves exactly like a real, separate machine with its own operating system, storage, and network connection, but it is actually just software running on a physical server. You can run multiple VMs on one powerful server, each completely isolated from the others. If one VM crashes or gets hacked, the others are unaffected.

Technical Definition

A Virtual Machine (VM) is a software abstraction of a physical computer, managed by a hypervisor that allocates physical resources (CPU, memory, storage, network) to each VM. Each VM runs a complete guest operating system kernel and userspace, fully isolated from other VMs on the same host.
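On Linux, whether a host can run hardware-accelerated VMs under KVM can be checked from the shell (a minimal sketch; `/proc/cpuinfo` and `/dev/kvm` are the standard locations):

```shell
# Count CPU threads advertising hardware virtualization extensions
# (vmx = Intel VT-x, svm = AMD-V); 0 means none visible to this kernel
grep -E -c 'vmx|svm' /proc/cpuinfo || true

# KVM is usable once the kernel module is loaded and
# the hypervisor device node exists
[ -e /dev/kvm ] && echo "KVM available" || echo "KVM not available"
```

Note that inside an existing VM without nested virtualization, the count is typically 0 even though the physical host supports it.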

Resource allocation:

  • vCPU: virtual CPU cores mapped to physical cores/threads. Overcommitment is common (e.g., 32 vCPUs on a 16-core host).
  • Memory: RAM allocated to the VM. Can be fixed or dynamic (ballooning). Overcommitment possible with swap.
  • Storage: virtual disks (QCOW2, VMDK, VHD) backed by physical storage; these formats support thin provisioning (allocate on write) and snapshots (point-in-time copies).
  • Network: virtual NICs connected to virtual switches/bridges, with VLAN tagging, rate limiting, and firewall rules.
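The thin-provisioning behavior above can be illustrated with a plain sparse file, which a QCOW2 image resembles at the storage layer (the filename is illustrative):

```shell
# Create a 10 GiB sparse file: the virtual size is 10G,
# but almost no blocks are allocated until data is written
truncate -s 10G disk.img

ls -lh disk.img   # reported (virtual) size: 10G
du -h disk.img    # actual space consumed: ~0

rm disk.img
```

A guest sees the full 10 GiB disk, while the host only spends physical storage as the guest writes; this is what lets operators provision more virtual disk capacity than physically exists.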

VM lifecycle: create, start, pause, snapshot, migrate (live migration moves a running VM between physical hosts with minimal downtime), clone, stop, delete.

Advantages over bare metal:

  • Hardware consolidation (run 20+ VMs on one server)
  • Isolation (security boundary between workloads)
  • Portability (VMs can move between hosts)
  • Snapshots and backups (point-in-time recovery)

Tradeoff vs. containers: VMs provide stronger isolation (separate kernel) but with more overhead (each VM boots a full OS, consuming GBs of RAM and taking minutes to start). Containers share the host kernel and start in seconds.

[Diagram: two side-by-side stacks. Virtual machines: Hardware (CPU, RAM, Disk) → Hypervisor (KVM, ESXi) → per-VM Guest OS + Bins/Libs + App (VM 1, VM 2, VM 3). Containers: Hardware (CPU, RAM, Disk) → Host OS (Linux kernel) → Container Runtime (Docker) → per-container Bins/Libs + App. Caption: VMs run a full OS per instance (GBs, minutes); containers share the host kernel (MBs, seconds).]

Managing VMs with virsh (KVM/libvirt)

# List running VMs
$ virsh list
 Id   Name              State
--------------------------------
 1    web-server        running
 2    db-server         running
 3    monitoring        running

# Create a VM from XML definition
$ virsh define web-server.xml
$ virsh start web-server

# Take a snapshot
$ virsh snapshot-create-as web-server snap-before-upgrade \
  --description "Before OS upgrade"
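
# List snapshots; revert restores the VM to the saved state
$ virsh snapshot-list web-server
$ virsh snapshot-revert web-server snap-before-upgrade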

# Live migrate a VM to another host
$ virsh migrate --live web-server qemu+ssh://host2/system

# View resource usage
$ virsh dominfo web-server
Name:           web-server
State:          running
CPU(s):         4
Max memory:     8388608 KiB
Used memory:    8388608 KiB
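
dominfo reports memory in KiB; a quick shell conversion shows this VM has 8 GiB:

```shell
# 8388608 KiB / 1024 = 8192 MiB; / 1024 again = 8 GiB
echo "$((8388608 / 1024 / 1024)) GiB"   # prints: 8 GiB
```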

In the Wild

VMs are the backbone of modern infrastructure. Every major cloud provider (AWS EC2, Azure VMs, GCP Compute Engine) sells VMs as their core compute product. On-premise data centers run hundreds of VMs on clusters of physical servers using VMware vSphere, Proxmox VE, or Microsoft Hyper-V. In homelabs, Proxmox is the popular choice for running multiple isolated services (pfSense router, TrueNAS storage, Pi-hole DNS) on a single physical machine. VM snapshots before upgrades are standard practice: if the upgrade fails, roll back in seconds. The industry trend is toward containers for stateless workloads (web apps, microservices) while VMs remain preferred for stateful workloads (databases, legacy applications) and workloads requiring strong isolation.