
Hypervisor

Plain English

A hypervisor is the software layer that makes virtual machines possible. It sits between the physical hardware and the virtual machines, dividing up the server’s CPU, memory, and storage among multiple VMs. Think of it as an apartment building manager who assigns units (VMs) to tenants, making sure everyone has their own space, utilities, and front door, all within one building (physical server).

Technical Definition

A hypervisor (also called a Virtual Machine Monitor, VMM) is software that creates, runs, and manages virtual machines by virtualizing physical hardware resources. There are two types:

Type 1 (bare-metal): runs directly on the physical hardware with no underlying host OS. The hypervisor IS the operating system. Examples: Proxmox VE (KVM-based), VMware ESXi, Microsoft Hyper-V Server, Xen. Used in production data centers and homelabs.

Type 2 (hosted): runs as an application on top of a conventional operating system. The host OS manages hardware; the hypervisor runs on top. Examples: VirtualBox, VMware Workstation, Parallels Desktop. Used for development and testing.
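
A quick way to tell which side of the hypervisor you are on: inside a guest, `systemd-detect-virt` (shipped with any systemd-based distro) names the hypervisor; on bare metal it reports none. This is a sketch with a fallback on the CPU "hypervisor" flag, which firmware sets for guests:

```shell
# Detect whether this OS is running under a hypervisor, and which one.
if command -v systemd-detect-virt >/dev/null 2>&1; then
    # Prints e.g. "kvm", "vmware", "oracle" (VirtualBox), or "none";
    # exits non-zero when not inside a VM, so tolerate that here.
    systemd-detect-virt --vm || true
else
    # Fallback: the "hypervisor" CPU flag is only set for guests
    grep -q hypervisor /proc/cpuinfo 2>/dev/null && echo "guest" || echo "bare metal"
fi
```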

Hardware virtualization extensions:

  • Intel VT-x / AMD-V: CPU instructions that allow the hypervisor to run guest OS code directly on the CPU in a controlled manner, eliminating the performance penalty of software emulation.
  • Intel VT-d / AMD-Vi: I/O virtualization (IOMMU) allowing direct hardware passthrough to VMs (e.g., passing a GPU or NIC directly to a VM for near-native performance).
  • SR-IOV: Single Root I/O Virtualization; a NIC presents multiple virtual functions that can be assigned directly to different VMs.
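
Before attempting GPU or NIC passthrough, it is worth confirming the IOMMU is actually active; IOMMU groups only appear in sysfs when both the firmware setting and the kernel support line up (on some setups this also requires `intel_iommu=on` or `amd_iommu=on` on the kernel command line). A minimal check:

```shell
# Check whether the IOMMU (Intel VT-d / AMD-Vi) is active on this host.
# Populated IOMMU groups in sysfs mean passthrough is possible.
if [ -d /sys/kernel/iommu_groups ] && [ -n "$(ls -A /sys/kernel/iommu_groups 2>/dev/null)" ]; then
    echo "IOMMU active: $(ls /sys/kernel/iommu_groups | wc -l) groups"
else
    echo "IOMMU not active: check BIOS/UEFI and kernel command line"
fi
```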

Key hypervisor responsibilities:

  • CPU scheduling (time-slicing physical cores among VMs)
  • Memory management (EPT/NPT nested paging on modern CPUs; shadow page tables on older hardware)
  • I/O virtualization (virtual disks, NICs, USB devices)
  • VM isolation (preventing cross-VM access)
  • Live migration (moving running VMs between physical hosts)
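
On Proxmox VE, the last of these responsibilities is a one-line command. A sketch, assuming a cluster with a second node (the node name "pve2" here is hypothetical) and shared storage:

```shell
# Live-migrate running VM 100 to another cluster node. With --online,
# memory pages are copied iteratively while the guest keeps running,
# then a brief pause switches execution to the target host.
# Shared storage is assumed; add --with-local-disks for local disks.
if command -v qm >/dev/null 2>&1; then
    qm migrate 100 pve2 --online
else
    echo "qm not found: run this on a Proxmox VE node"
fi
```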

Proxmox VE hypervisor management

# Check if hardware virtualization is enabled
$ grep -Ec '(vmx|svm)' /proc/cpuinfo
16  # Non-zero means VT-x/AMD-V is available

# List VMs on a Proxmox node
$ qm list
VMID  NAME              STATUS   MEM(MB)  BOOTDISK(GB)  PID
100   pfsense           running  4096     32            1234
101   truenas           running  16384    64            5678
102   docker-host       running  8192     128           9012

# Create a new VM via CLI
$ qm create 103 --name k3s-node \
  --memory 4096 --cores 4 --cpu host \
  --net0 virtio,bridge=vmbr0,tag=10 \
  --scsi0 local-zfs:32 --scsihw virtio-scsi-single \
  --ostype l26 --boot order=scsi0

# Start the VM and attach to its serial console
$ qm start 103
$ qm terminal 103  # needs a serial port on the VM: qm set 103 --serial0 socket

In the Wild

Hypervisors power virtually all cloud computing and enterprise data centers. AWS, Azure, and GCP all run modified hypervisors (Nitro, Hyper-V, custom KVM) to provision the VMs they sell as cloud instances. VMware vSphere dominates enterprise on-premises virtualization, while Proxmox VE (free, open-source, KVM-based) is the go-to for homelabs and budget-conscious deployments. A common homelab setup runs Proxmox on a single mini PC, hosting a firewall VM (pfSense/OPNsense), a NAS VM (TrueNAS), a Docker host VM, and various service VMs, all on hardware that costs under $500. The trend toward containers has not replaced hypervisors; most Kubernetes clusters run on VMs managed by hypervisors, creating a layered virtualization stack.