
Optimized Rack Space (Tiered)

Service ownership

Owner: dc-operations (colo-pm@clouddigit.ai) — Status: GA — Last audited: 2026-05-11

Density-tiered rack pricing for HPC, GPU, or other high-power-density footprints.

Why tiered

A standard 5 kW full rack works for normal workloads. GPU-dense racks (16x H100, GPU bare metal pods) easily pull 30+ kW. Pricing per RU stops being meaningful — what matters is delivered kW and matching cooling. Tiered pricing aligns to that.
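As a rough sanity check (the ~10.2 kW per-node figure below is an assumption in line with typical 8-GPU H100 server specs, not a CD number):

```bash
# Back-of-envelope: two 8-GPU H100 nodes (16 GPUs) in one rack,
# assuming ~10.2 kW max draw per node. Switches, storage, and PDU
# overhead come on top, which is how dense racks blow past 5 kW.
NODE_W=10200
echo "$(( 2 * NODE_W / 1000 )) kW"   # → 20 kW from the two nodes alone
```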

Tiers

| Tier | Power per rack | Cooling | Best for |
|------|----------------|---------|----------|
| T1 | Up to 8 kW | Standard CRAC | Standard production (use racks.md instead) |
| T2 | 8–15 kW | In-row cooling | Mid-density compute |
| T3 | 15–25 kW | In-row + chimney containment | GPU compute, dense storage |
| T4 | 25–40 kW | Rear-door heat exchanger | GPU bare metal pods |
| T5 | 40+ kW | Liquid cooling (rear-door or direct-to-chip DLC) | High-density training clusters |

Pricing model

Tier-based per-rack-month rate; commitment plans available.

T4 / T5 require a feasibility check (cooling, distribution, ground loading) before contract — open a ticket with the planned SKUs (e.g., "8x bm-gpu-h100-8 over 4 racks") and we'll respond in ≤ 5 BWD with capacity availability per region.
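A sketch of what that ticket might contain; the `cd ticket create` subcommand and its flags are illustrative assumptions (not a documented interface), so use your normal ticketing path if it differs:

```bash
# Hypothetical: open a T4/T5 feasibility ticket from the CLI.
# Subcommand and flags are assumed for illustration only.
cd ticket create \
  --queue dc-operations \
  --subject "T4 feasibility: 8x bm-gpu-h100-8 over 4 racks" \
  --body "Target region bd-dha-1; please check cooling, distribution, ground loading."
```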

Region availability

| Region | T1–T3 | T4 | T5 |
|--------|-------|----|----|
| bd-dha-1 | | | |
| bd-ctg-1 | | | |
| bd-syl-1 | | | |

Operate this service

Pre-engineered rack templates optimized for specific workloads — lower BDT per RU than traditional colo for predictable footprints.

Tiers

| Tier | RU / Power | Use case |
|------|------------|----------|
| opt-storage | 42U / 8 kW | Storage-heavy (JBOD arrays) |
| opt-compute | 42U / 14 kW | Compute-dense (1U/2U servers) |
| opt-gpu | 24U / 30 kW | GPU racks; aisle with extra cooling |
| opt-network | 42U / 5 kW | Network gear (switches, routers) |

Each tier has a fixed physical/cooling/power baseline: you fit your workload to the tier, not the other way around.
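A minimal sketch of that fit check, assuming a planned gear list in CSV (`name,rack_units,watts`; the file and format are illustrative) validated against the opt-compute envelope from the table above:

```bash
# Sum RU and watts for a planned build; compare to opt-compute (42U / 14 kW).
awk -F, '{ ru += $2; w += $3 } END {
  printf "total: %dU, %.1f kW (opt-compute envelope: 42U / 14 kW)\n", ru, w / 1000
  if (ru > 42 || w > 14000) print "over the envelope: pick a denser tier or trim the build"
}' planned_gear.csv
```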

When this fits over traditional racks

  • Your workload matches one of the tier profiles
  • You want to scale fleet predictably (10 storage racks → 10× the tier)
  • You don't need bespoke power/cooling

For irregular workloads, traditional racks give more flexibility.

IAM

Same as traditional racks: `colo.*` roles.

Pricing

~15–25% lower BDT per RU than an equivalent traditional rack. The savings come from CD's standardized engineering.

Provisioning lead time

  • Storage tier: 7 BWD (rack ready, you ship gear)
  • Compute tier: 7 BWD
  • GPU tier: 14 BWD (specialized cooling provisioning)
  • Network tier: 7 BWD

Capacity reservation

For multi-rack deployments (3+ racks of the same tier in the same DC): pre-reserve to lock pricing and ensure capacity:

```bash
cd colo opt-rack reserve --tier opt-compute --quantity 10 --dc bd-dha-1 --term 24-months
```

Metrics

Same shape as traditional racks (`colo.rack.*`): temp, humidity, power, uplinks.

Additional metrics per tier:

| Tier | Tier-specific metrics |
|------|-----------------------|
| opt-storage | `colo.disks.health_pct`, `colo.disks.smart_warnings` (when integrated) |
| opt-compute | Standard compute envelope |
| opt-gpu | `colo.aisle.gpu_temp_c` (separate alert tier) |
| opt-network | `colo.network.switch_uplinks_state` |

Tier-fit operations

The platform monitors whether your actual usage fits the tier:

```bash
cd colo opt-rack fit-report --rack-id <id>
```

If you're running 14 kW of gear in the 8 kW storage tier → migrate to the compute tier (or accept overage charges).
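For a fleet, the same documented command loops cleanly; this sketch assumes nothing beyond a plain list of rack IDs, one per line:

```bash
# Run fit-report across every rack listed in racks.txt (one ID per line).
while read -r rack_id; do
  cd colo opt-rack fit-report --rack-id "$rack_id"
done < racks.txt
```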

Capacity planning

For fleet operators (10+ racks of the same tier):

  • Standardize on the tier; refuse non-standard gear that doesn't fit
  • Auto-deploy via golden images and rack-build SOPs
  • Track per-rack utilization; rebalance proactively (see the sketch below)
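One way to do that rebalancing pass, assuming you can export per-rack power readings to CSV (`rack_id,power_kw`; the export and the 14 kW opt-compute cap are illustrative):

```bash
# Flag racks above 90% of a 14 kW tier cap as rebalance candidates.
awk -F, '$2 > 0.9 * 14 { printf "%s at %.1f kW (>90%% of tier)\n", $1, $2 }' power.csv
```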

Hardware refresh

Old gear → new gear within the same rack: schedule a 4–8 h maintenance window. Smart-hands assists with the swap; uplinks stay live throughout.

Tier upgrades (compute → GPU) typically require a new rack (cooling/power differences).

Tier-fit violation

```
WARN: Rack opt-storage-bd-dha-1-r042 power usage 9.4 kW, tier limit 8 kW
```

You've exceeded the tier envelope:

  • Migrate gear to a higher-tier rack (compute or GPU)
  • Accept overage charges (~2× per-kW over tier)
  • Reduce workload

CD won't trip the breaker (you bought a tier envelope, not hard enforcement), but the overage is billed.
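A worked example of that overage for the WARN above, with the per-kW rate as a placeholder (your contracted rate applies):

```bash
# 9.4 kW used against the 8 kW cap, billed at ~2x the per-kW rate.
RATE=100                              # placeholder BDT per kW per month
echo "(9.4 - 8) * 2 * $RATE" | bc     # → 280.0 BDT/month at this placeholder rate
```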

Standardized image deployment broken

For fleet operators using golden images:

  • Image build pipeline failed (re-run)
  • Network boot path broken (BMC config drift)
  • New hardware revision incompatible with the image

Diagnose per-rack: `cd colo opt-rack provision-log --rack-id <id>`.
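When triaging several racks, a quick first pass is to filter that log for the three failure modes above:

```bash
# Pull the provision log and grep for image/boot/BMC symptoms.
cd colo opt-rack provision-log --rack-id <id> | grep -iE "image|boot|bmc"
```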

GPU tier cooling alert

`colo.aisle.gpu_temp_c` rising:

  • A GPU's fan failed (smart-hands swap)
  • Cold-aisle door left open
  • Aisle CRAC unit degraded (CD will repair)

The GPU tier has a tighter thermal envelope and alerts faster than standard tiers.
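If you want an ad-hoc watch on top of platform alerting, a polling loop like this is one option; `cd monitor query` and the 35 °C threshold are assumptions, stand-ins for your actual metrics tool and alert tier:

```bash
# Hypothetical poll of colo.aisle.gpu_temp_c every 60 s (command and
# threshold are assumed; substitute your real metrics query and limits).
while true; do
  temp=$(cd monitor query --metric colo.aisle.gpu_temp_c --rack-id <id>)
  awk -v t="$temp" 'BEGIN { if (t > 35) print "gpu aisle hot: " t " C" }'
  sleep 60
done
```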

Capacity reservation not honored

You reserved 10 racks; only 7 are available when you're ready:

  • Construction / delivery delays at the DC
  • A previous customer over-ran their reservation

CD reservation contracts have penalty clauses for missed capacity. Escalate to your CE.

Migration between tiers

You ordered the wrong tier:

  • Same-DC migration: 4–8 h smart-hands operation; billed as a migration service
  • Cross-DC migration: weeks; effectively a new install

Plan the tier carefully before the first install.