Quarter / Half / Full Rack

Service ownership

Owner: dc-operations (colo-pm@clouddigit.ai) — Status: GA — Last audited: 2026-05-11

Standard rack-unit colocation in Cloud Digit's three Tier-III data centres.

Sizes

| Footprint | RU | Power (committed) | Power (peak) | Best for |
| --- | --- | --- | --- | --- |
| Quarter rack | 10 RU | 1.0 kW | 1.5 kW | Pilot, 1-2U appliances, FI gateway |
| Half rack | 22 RU | 2.5 kW | 4.0 kW | DR site, mid-density workload |
| Full rack | 47 RU | 5.0 kW | 8.0 kW | Standard production footprint |

For higher densities (≥ 10 kW per rack) see Optimized Rack Space.

What's included

  • Rack with rear door, lockable
  • 2 × A+B power feeds (PDU per rail)
  • Cabling tray, casters, blanking panels
  • Default: 2 × 10 GbE network handoff into Cloud Digit fabric (or BDIX-direct)
  • Smart-PDU with per-outlet metering
  • 24/7 manned site, biometric access for designated tenant personnel
  • CCTV retention 90 days

Power model

  • Committed kW is yours to use and is billed at a flat rate
  • Usage above the committed level is metered at the Power Overage rate (worked example after this list)
  • A+B feeds are independent — feed loss on either does not interrupt service
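
A back-of-the-envelope sketch of how the flat-plus-overage model adds up for a half rack. The per-kWh overage rate below is a placeholder, not a published figure; see Pricing for actual rates.

```bash
#!/usr/bin/env bash
# Rough monthly power cost for a half rack under the flat + overage model.
# The overage rate (BDT/kWh) is a placeholder, not a published figure.
committed_kw=2.5          # half rack committed power
avg_draw_kw=3.1           # your measured average draw for the month
hours=720                 # ~30-day month
overage_rate_bdt=12       # placeholder BDT per kWh above committed

overage_kwh=$(echo "($avg_draw_kw - $committed_kw) * $hours" | bc -l)
overage_cost=$(echo "$overage_kwh * $overage_rate_bdt" | bc -l)
printf 'Overage: %.0f kWh -> %.0f BDT on top of the flat rack fee\n' "$overage_kwh" "$overage_cost"
```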

Connectivity options

  • Cross-connect to Cloud Digit cloud — see Cross-Connect
  • BDIX direct — terminate at the BDIX MMR
  • Carrier transit — via on-site carriers (open a ticket for current list)
  • Hybrid Cloud Burst — see Hybrid Cloud Burst

Pricing

Flat per-rack-month rate, plus power overage if any. 1- and 3-year commitment plans reduce the rate by 15–25%. See Pricing.

Operate this service

Traditional colocation: rent rack units in Cloud Digit's Tier-III datacenters and install your own hardware.

When colocation fits

  • Bring-your-own hardware (existing investment)
  • Vendor-specific appliances (firewalls, storage arrays, mainframes)
  • Strict physical-security or chain-of-custody requirements
  • Hybrid: colo + cloud connected via cross-connect

If the answer is "we just want servers", check Bare Metal first; colocation carries more operational overhead.

Rack tiers

| Tier | RU available | Power | Use case |
| --- | --- | --- | --- |
| Quarter rack | 10U | 2× 16A | Small footprint, branch deployment |
| Half rack | 20U | 2× 32A | Mid-size |
| Full rack | 42U | 2× 32A (or 2× 63A premium) | Standard production deployment |

All tiers: redundant power, redundant network uplinks (2× 10GbE or 25GbE), cold-aisle containment.

IAM

| Role | Can do |
| --- | --- |
| colo.viewer | View rack details, environmentals |
| colo.requester | Request smart/remote hands, cross-connects |
| colo.dc-visitor | Authorized to enter the DC (with appointment) |
| colo.admin | Manage rack contract, IAM bindings, security badges |

colo.dc-visitor requires KYC and biometric enrolment (one-time).
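
A sketch of what granting one of these roles might look like. The `cd iam bind` subcommand, flags, resource path, and user are assumptions for illustration; only the role names are the real ones from the table.

```bash
# Hypothetical binding: grant a colleague DC-entry rights for this rack.
# The `cd iam` subcommand and flags are assumptions; colo.dc-visitor is the real role name.
cd iam bind \
  --role colo.dc-visitor \
  --member user:ops.engineer@example.com \
  --resource racks/rack-1234
```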

Power budgeting

Don't fill 100% of contracted power — leave 20% headroom for spikes:

  • 32A circuit = 7.6 kW @ 240V
  • Budget ~6 kW usable
  • Modern 1U servers draw 200-400 W each; a full rack of 42 easily reaches 12-16 kW (split across PDUs)

Over-provisioning the rack (drawing more than the circuit supports) trips the breaker; under-provisioning (contracting for more power than you use) wastes BDT.
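
A quick sanity check of the arithmetic above for a 32A feed at 240 V with 20% headroom; the server wattage is the rough figure quoted in the list.

```bash
#!/usr/bin/env bash
# Power-budget sanity check: circuit capacity, 20% headroom, rough server count.
amps=32
volts=240
headroom=0.80                     # keep 20% in reserve for spikes
watts_per_server=300              # mid-range for a modern 1U server (200-400 W)

circuit_kw=$(echo "$amps * $volts / 1000" | bc -l)           # ~7.7 kW per leg
usable_kw=$(echo "$circuit_kw * $headroom" | bc -l)          # ~6 kW budget
servers=$(echo "$usable_kw * 1000 / $watts_per_server" | bc) # how many 1U boxes fit the budget

printf 'Circuit: %.1f kW, usable: %.1f kW, ~%d servers @ %dW\n' \
  "$circuit_kw" "$usable_kw" "$servers" "$watts_per_server"
```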

Asset management

Cloud Digit DCIM (Data Center Infrastructure Management) tracks:

  • Every U-position
  • Every cable
  • Every port on the network uplinks
  • Power consumption per device

Required for audit; maintain with discipline.

Environmental monitoring

Per rack:

| Metric | Healthy | Alert |
| --- | --- | --- |
| colo.rack.temp_c | 18-27 °C | > 30 °C |
| colo.rack.humidity_pct | 40-60% | > 70% (condensation risk) |
| colo.rack.power_kw | < 80% of contract | > 90% |
| colo.rack.psu_input | both legs powered | one leg lost (redundancy gone) |

Sensors are integrated into Cloud Digit monitoring; alerts are delivered through the same channels as your cloud alerts.
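
If you mirror the readings into your own tooling, a threshold check can follow the table directly. The snippet below takes example readings as shell variables (how you export them is up to your exporter); the thresholds are the documented ones.

```bash
#!/usr/bin/env bash
# Evaluate a rack's environmental readings against the alert thresholds above.
# Example readings; feed in real values from your exporter of choice.
temp_c=28.5
humidity_pct=55
power_pct_of_contract=84          # colo.rack.power_kw as % of contracted kW

awk -v t="$temp_c" -v h="$humidity_pct" -v p="$power_pct_of_contract" 'BEGIN {
  if (t > 30)       print "ALERT  colo.rack.temp_c > 30 °C"
  else if (t > 27)  print "WATCH  colo.rack.temp_c above healthy band (18-27 °C)"
  if (h > 70)       print "ALERT  colo.rack.humidity_pct > 70% (condensation risk)"
  if (p > 90)       print "ALERT  colo.rack.power_kw > 90% of contract"
  else if (p >= 80) print "WATCH  colo.rack.power_kw at/above 80% of contract"
}'
```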

Deployment workflow

  1. Schedule install window (DC requires 5+ BWD notice for new gear)
  2. Ship gear or hand-carry with DC visitor appointment (see Administration → IAM)
  3. CD smart-hands assists with racking (or you DIY if you have colo.dc-visitor)
  4. Update DCIM with every U-position and cable (sketch after this list)
  5. Document network uplinks, IP plan
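
A sketch of the DCIM update in step 4. The `cd colo dcim` subcommand and its flags are assumptions; whatever tooling you use, record the same fields (rack, U-position, asset, cabling).

```bash
# Hypothetical DCIM update after racking a new 1U server.
# Subcommand and flags are assumptions; the fields are the ones DCIM tracks.
cd colo dcim add \
  --rack rack-1234 \
  --u-position 17 \
  --asset "web-01 (1U, 2x PSU)" \
  --cable "TOR ge-0/0/17 -> web-01 eth0"
```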

Maintenance windows

  • Routine: customer-driven (you book the window, do the work)
  • DC infrastructure (cooling/power): announced 14 days in advance
  • Emergency: minimal notice; CD will help minimize impact

Network operations

Each rack has 2× uplinks to CD spine — connect your top-of-rack switches via LACP. Cloud Digit doesn't manage your switches; you do.

```
Your TOR ←(2× 25GbE LACP)→ CD spine
                               ↓
                    VPC / BDIX / Internet
```

For VPC integration: cross-connect to CD network as if your TOR is a Cloud Digit edge router.
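
Switch-vendor LACP syntax varies, so as a neutral sketch here is the equivalent 802.3ad bond on a Linux host with two uplink NICs; interface names eth0/eth1 are examples.

```bash
# Create an 802.3ad (LACP) bond over two uplink ports; interface names are examples.
ip link add bond0 type bond mode 802.3ad miimon 100 lacp_rate fast
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up

# Both legs should appear as active members of the same aggregator.
cat /proc/net/bonding/bond0
```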

Compliance

Quarterly attestation report:

  • Power and environmental compliance per contract
  • Access log (every DC entry)
  • Maintenance history

Required for PCI-DSS, BB ICT 4.0.

Power overage tripped breaker

colo.rack.power_kw exceeded contract limit; breaker tripped:

  1. CD NOC alerts you (24×7)
  2. Some equipment lost power (whatever was on the tripped leg)
  3. Reset the breaker (smart-hands required if remote)
  4. Investigate the spike (new equipment? all-out workload?)

Long-term: contract for higher power, or distribute load across more legs.
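
For step 4, the smart-PDU's per-outlet metering is the fastest way to locate the spike. The command below is an assumption about the CLI shape; the per-outlet data itself is part of the service.

```bash
# Hypothetical query of per-outlet draw on the rack's smart-PDUs;
# subcommand and flags are assumptions, not documented CLI.
cd colo power show --rack rack-1234 --per-outlet --last 24h
```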

WARN: rack-1234 uplink port 2 down — LACP degraded

Customer-side or CD-side issue:

  • Your TOR may have a port issue (check your switch logs)
  • CD spine may have a port issue (CD will repair)

LACP keeps traffic flowing on the surviving leg; throughput reduced by half during repair.
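
If the TOR (or a directly attached host) runs Linux bonding, you can confirm which leg dropped before opening a ticket; vendor switch CLIs have equivalent show commands.

```bash
# Check bond member state: the failed leg shows "MII Status: down".
grep -A2 'Slave Interface' /proc/net/bonding/bond0

# Confirm whether the physical link is up at all (optics, cable, far-end port).
ethtool eth1 | grep 'Link detected'
```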

Temperature warning

colo.rack.temp_c > 30:

  • Cold-aisle containment failure nearby (CD will repair)
  • Your gear's airflow path obstructed (cable management mess)
  • Equipment over-densified (heat output exceeds aisle cooling capacity)

If sustained: shed workload, contact CD for cooling assessment.

Access denied at DC entry

You scheduled a visit, but security says no:

  • Visit not pre-registered (must be requested 24+ h in advance)
  • KYC expired (re-verify with badge office)
  • Visitor not authorized (you need colo.dc-visitor for that DC)

For emergencies: CD NOC can authorize same-day with manager approval.

Equipment shipping lost

CD receives equipment on your behalf:

```bash
cd colo shipping track --reference <vendor-PO>
```

Lost shipments — CD has a chain-of-custody log from receiving dock to your rack. Coordinate with vendor + insurance.

DCIM out of sync

You racked something and didn't update DCIM, then forgot:

  • Quarterly audit catches it
  • Update DCIM with current state
  • Document any uplifted equipment

DCIM hygiene is a contract obligation.

Cross-connect not working

A cross-connect to another CD customer or to BDIX is not passing traffic; common causes:

  • Physical cable not patched (smart-hands)
  • VLAN mismatch
  • L3 / BGP configuration drift

cd colo cross-connect status shows physical + protocol state.
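
Before opening a smart-hands ticket, run the status check named above; any flags beyond the bare command are not documented here.

```bash
# Physical state (patched, link) and protocol state (VLAN / BGP) in one view.
cd colo cross-connect status
```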