
Bandwidth

Plain English

Bandwidth is the width of your internet pipe. A wider pipe can carry more data at once, just like a wider highway can carry more cars. If your internet plan says “500 Mbps,” that is the maximum bandwidth: the most data that can flow through per second. Bandwidth is not the same as how quickly any single piece of data arrives (that is latency). You can have a wide pipe (high bandwidth) with slow delivery (high latency), or a narrow pipe (low bandwidth) with fast delivery (low latency).

Technical Definition

Bandwidth is the maximum rate of data transfer across a network path, measured in bits per second (bps). It represents capacity, not actual usage.

Key distinctions:

  • Bandwidth (capacity): the theoretical maximum data rate of a link (e.g., 1 Gbps Ethernet port)
  • Throughput (actual): the real-world data rate achieved, always less than bandwidth due to overhead (headers, retransmissions, protocol inefficiency)
  • Latency (delay): the time for a single packet to travel from source to destination; independent of bandwidth
  • Goodput: application-level throughput excluding protocol overhead
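
To see how the first three interact, a simple transfer-time model helps: total time is roughly latency plus size divided by bandwidth. This is a sketch; the 500 Mbps link, 20 ms latency, and 10 MB payload are illustrative numbers, not from any particular network.

```python
# Approximate transfer time = one-way latency + serialization time (size / bandwidth).
# Illustrative assumptions: 500 Mbps link, 20 ms latency, 10 MB payload.

def transfer_time_s(size_bytes: float, bandwidth_bps: float, latency_s: float) -> float:
    """Time to deliver a payload: propagation delay plus time to push the bits."""
    return latency_s + (size_bytes * 8) / bandwidth_bps

t = transfer_time_s(10e6, 500e6, 0.020)
print(f"{t:.3f} s")  # 0.020 s latency + 0.160 s serialization = 0.180 s
```

For a large file the bandwidth term dominates; for a tiny request (an API call, a DNS lookup) the latency term dominates, which is why a fatter pipe often does not make "the internet" feel faster.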

Common bandwidth units:

Unit    Bits per second        Typical use
Kbps    1,000                  Legacy dial-up
Mbps    1,000,000              Home internet, Wi-Fi
Gbps    1,000,000,000          Data center links, fiber
Tbps    1,000,000,000,000      ISP backbone, CDN edge

Bandwidth vs. data transfer: ISPs often cap monthly data transfer (e.g., 1 TB/month) separately from bandwidth (e.g., 500 Mbps). Bandwidth is rate; data transfer is volume.
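
A quick sanity check makes the rate-vs-volume distinction concrete (a sketch using the 500 Mbps and 1 TB figures from the example above):

```python
# How long would sustained full-rate transfer take to exhaust a 1 TB monthly cap?
cap_bits = 1e12 * 8            # 1 TB (decimal) expressed in bits
rate_bps = 500e6               # 500 Mbps plan bandwidth
seconds = cap_bits / rate_bps
print(f"{seconds:.0f} s = {seconds / 3600:.1f} hours")  # 16000 s = 4.4 hours
```

In other words, the cap governs how much you can move per month, while bandwidth only governs how fast you can move it at any instant.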

Bandwidth-delay product (BDP): the amount of data “in flight” on a network link at any moment. BDP = bandwidth × RTT. Important for TCP window sizing: a 1 Gbps link with 50 ms RTT has a BDP of 6.25 MB, meaning TCP needs at least a 6.25 MB window to fully utilize the link.
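
The BDP arithmetic for the 1 Gbps / 50 ms example can be checked directly:

```python
# BDP = bandwidth × RTT: the data that fits "in flight" on the link.
# TCP needs a window at least this large to keep the link fully utilized.
bandwidth_bps = 1e9   # 1 Gbps link
rtt_s = 0.050         # 50 ms round-trip time
bdp_bytes = bandwidth_bps * rtt_s / 8   # divide by 8 to convert bits to bytes
print(f"{bdp_bytes / 1e6:.2f} MB")      # 6.25 MB
```

This is why long-distance, high-bandwidth paths ("long fat networks") need TCP window scaling: the default 64 KB window would cap throughput at 64 KB / 50 ms ≈ 10 Mbps regardless of link capacity.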

Measuring bandwidth and throughput

# Test bandwidth to a speed test server
$ speedtest-cli
Download: 487.32 Mbit/s
Upload: 52.18 Mbit/s
Ping: 12.453 ms

# Measure throughput between two hosts with iperf3
$ iperf3 -s                    # On the server
$ iperf3 -c 10.0.0.1 -t 10    # On the client
[ ID] Interval       Transfer     Bitrate
[  5] 0.00-10.00 sec  1.09 GBytes   937 Mbits/sec  sender
[  5] 0.00-10.00 sec  1.09 GBytes   935 Mbits/sec  receiver

# Monitor real-time bandwidth usage per interface
$ nload eth0
Incoming:   245.32 MBit/s  (peak: 891.21 MBit/s)
Outgoing:    12.54 MBit/s  (peak:  48.72 MBit/s)

In the Wild

Bandwidth planning is central to network design. An office with 200 employees streaming video calls needs significantly more bandwidth than one sending emails. In data centers, 10 Gbps and 25 Gbps links are standard between servers, with 100 Gbps spine links. Cloud providers charge for bandwidth (egress fees) and often throttle instance network performance based on instance size. CDNs (Cloudflare, AWS CloudFront) reduce origin bandwidth by caching content at the edge. When users complain “the internet is slow,” the problem is usually not bandwidth but latency, packet loss, or a congested link. Tools like iperf3 for LAN testing and speedtest-cli for WAN testing are the standard diagnostic approach.
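
The office sizing mentioned above can be roughed out with back-of-envelope arithmetic (a sketch; the 3 Mbps per HD video call and the 1.5x headroom factor are illustrative assumptions, not standards):

```python
# Rough uplink sizing for an office where everyone may be on a video call.
employees = 200
per_call_mbps = 3.0   # assumed bitrate of one HD video call
headroom = 1.5        # assumed margin for bursts and non-call traffic
required_mbps = employees * per_call_mbps * headroom
print(f"{required_mbps:.0f} Mbps")  # 900 Mbps: a 1 Gbps uplink is already tight
```

Real planning would also weigh concurrency (rarely is everyone on a call at once), QoS policy, and per-direction asymmetry, but the estimate shows why per-user bandwidth assumptions drive link choices.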