
Data Center

data-center infrastructure colocation physical-security
Plain English

A data center is a specialized building filled with rows of servers. It provides the three things servers need that a regular office cannot: massive amounts of electricity, industrial cooling systems (servers generate enormous heat), and redundant internet connections. Every website you visit, every cloud service you use, and every app on your phone ultimately runs on hardware inside a data center.

Technical Definition

A data center is a facility designed to house IT infrastructure (compute, storage, networking) with controlled environmental conditions and physical security.

Facility components:

  • Power: utility feed, automatic transfer switch (ATS), UPS (uninterruptible power supply) for bridging to generators, diesel generators for extended outages. Measured in megawatts (MW).
  • Cooling: CRAC (computer room air conditioning), hot/cold aisle containment, liquid cooling (direct-to-chip), free cooling (outside air economizers). Cooling is typically 30-40% of total power consumption.
  • Networking: multiple ISP uplinks, meet-me rooms for cross-connects, core/aggregation/access switching layers.
  • Physical security: mantraps, biometric access, CCTV, 24/7 security staff.
  • Fire suppression: clean agent systems (FM-200, Novec) that suppress fire without damaging electronics.
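Power budgets drive everything else in the list above. A back-of-envelope capacity check (illustrative numbers, not from any specific facility; real planning also accounts for cooling overhead and PUE):

```shell
# How many 8 kW racks fit in a 2 MW (2,000 kW) IT power budget?
# Figures are hypothetical examples.
$ awk 'BEGIN { printf "%d racks\n", 2000 / 8 }'
250 racks
```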

Tier classification (Uptime Institute):

Tier | Uptime  | Annual Downtime | Redundancy
-----|---------|-----------------|----------------------------------
I    | 99.671% | 28.8 hours      | No redundancy
II   | 99.741% | 22.7 hours      | Partial redundancy
III  | 99.982% | 1.6 hours       | N+1 redundancy, concurrently maintainable
IV   | 99.995% | 26.3 minutes    | 2N fully redundant, fault tolerant
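The downtime column follows directly from the uptime percentage: the unavailable fraction of an 8,760-hour year. Checking the Tier III figure:

```shell
# Annual downtime = (100 - uptime%) / 100 * 8760 hours
$ awk 'BEGIN { uptime = 99.982; printf "%.1f hours\n", (100 - uptime) / 100 * 8760 }'
1.6 hours
```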

Power metrics:

  • PUE (Power Usage Effectiveness): total facility power / IT equipment power. Ideal: 1.0 (all power goes to compute). Industry average: ~1.58. Hyperscale leaders: 1.1-1.2.
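A worked PUE example with illustrative figures: a facility drawing 1.5 MW at the utility meter while delivering 1.2 MW to IT equipment (the remaining 0.3 MW goes to cooling, power conversion losses, and lighting):

```shell
# PUE = total facility power / IT equipment power (hypothetical numbers)
$ awk 'BEGIN { printf "PUE = %.2f\n", 1.5 / 1.2 }'
PUE = 1.25
```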

Deployment models:

  • On-premise: company owns and operates the facility
  • Colocation (colo): company owns the servers; facility owner provides space, power, cooling, connectivity
  • Hyperscale cloud: AWS, Azure, GCP own and operate massive facilities (100,000+ servers per site)

Data center monitoring (IPMI and environmental)

# Check server hardware health via IPMI (out-of-band management)
$ ipmitool -I lanplus -H 10.30.30.100 -U admin sensor list
CPU Temp         | 62.000    | degrees C | ok
Inlet Temp       | 24.000    | degrees C | ok
Fan 1            | 8400.000  | RPM       | ok
PSU1 Power       | 280.000   | Watts     | ok
PSU2 Power       | 0.000     | Watts     | ok  (standby)

# Monitor rack power consumption (SNMP; APC PDU OID shown, vendor-specific)
$ snmpget -v2c -c public 10.30.30.200 \
  1.3.6.1.4.1.318.1.1.26.6.3.1.7.1
# Returns: current draw in amps

# Check UPS status
$ apcaccess status
STATUS     : ONLINE
LINEV      : 120.0 Volts
LOADPCT    : 42.0 Percent
BCHARGE    : 100.0 Percent
TIMELEFT   : 45.0 Minutes
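Sensor output like the above can feed simple threshold alerts. A minimal sketch, flagging an inlet temperature above a 27 °C ceiling (roughly the ASHRAE-recommended maximum); the sample lines stand in for a live ipmitool query:

```shell
# Alert when inlet temperature exceeds 27 C; sample IPMI output inlined
$ awk -F'|' '$1 ~ /Inlet Temp/ && $2 + 0 > 27 { printf "ALERT: inlet %.0f C\n", $2 }' <<'EOF'
CPU Temp         | 62.000    | degrees C | ok
Inlet Temp       | 29.000    | degrees C | ok
EOF
ALERT: inlet 29 C
```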

In the Wild

Data centers are the physical foundation of the digital economy. AWS operates data centers in 30+ regions globally, with some single campuses exceeding 1 million square feet. The shift toward AI workloads (GPU clusters consuming 10-40 kW per rack vs. 5-10 kW for traditional servers) is driving a construction boom and straining power grids in many regions. For smaller operations, colocation provides enterprise-grade facilities without the capital expenditure of building your own. Homelab enthusiasts run “micro data centers” in closets and garages, facing the same challenges at a smaller scale: heat management, power backup (UPS), and noise control. Environmental monitoring (temperature, humidity, leak detection) is critical; a single cooling failure can push equipment past safe operating temperatures within minutes.