VDC / Dedicated Tenant Pool¶
Service ownership
Owner: compute-platform (compute-pm@clouddigit.ai) — Status: GA — Last audited: 2026-05-11
A carved-out capacity pool — vCPU, RAM, and storage — reserved for one tenant across one or more hypervisors.
What it is¶
The VDC abstraction is larger than a single Dedicated Host — it's a pool of capacity that may span multiple physical hosts but is exclusively yours. You schedule VMs into the pool; the platform places them.
When to pick a VDC¶
- You want predictable capacity and predictable spend
- You want the flexibility of multiple hosts (live migration, fault tolerance) without the multi-tenant exposure
- You're migrating from VMware-style "resource pools" with quotas
- You want to commit to a baseline and burst above it
Key features¶
- Pool capacity expressed as vCPU, RAM, block storage
- Sub-pools per project / business unit, with quotas
- Mix of flavor families inside the pool
- HA across multiple hypervisor hosts inside the pool
- Optional dedicated network plane for inter-VDC traffic
Sizing¶
VDCs are sized in increments — typical starting points:
| Tier | vCPU | RAM | Block storage |
|---|---|---|---|
| Small | 64 | 256 GiB | 4 TiB |
| Medium | 256 | 1 TiB | 16 TiB |
| Large | 512 | 2 TiB | 32 TiB |
| XL | 1024 | 4 TiB | 64 TiB |
Custom sizes via the sales team — useful when you have a known target migration footprint.
Pricing¶
Monthly base for committed capacity; usage above the commitment is billed at standard hourly rates. Best fit for predictable, steady-state workloads with occasional bursts. See Pricing.
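To see how commit-plus-burst billing plays out, here is a back-of-the-envelope calculation; the rates below are made up purely for illustration (see Pricing for real figures).
```bash
# Hypothetical rates for illustration only; real figures live on the Pricing page
COMMIT_MONTHLY=4000        # monthly base for the committed capacity (made-up)
ONDEMAND_VCPU_HOUR=0.05    # hourly rate per burst vCPU (made-up)
BURST_VCPU_HOURS=2000      # vCPU-hours used above the commitment this month

# total = base commit + burst overage at the on-demand rate
awk -v base="$COMMIT_MONTHLY" -v rate="$ONDEMAND_VCPU_HOUR" -v hrs="$BURST_VCPU_HOURS" \
  'BEGIN { printf "Monthly total: %.2f\n", base + rate * hrs }'   # Monthly total: 4100.00
```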
Related¶
- Virtual Machines
- Dedicated Hosts
- Auto Scaling Groups — burst above the VDC commitment
Operate this service¶
Setting up a VDC, carving it into sub-pools, and assigning quotas across teams.
When a VDC fits¶
A VDC is the right primitive when you want dedicated capacity at scale (roughly a 64 vCPU baseline and up) but don't want the operational ceremony of running multiple Dedicated Hosts:
- Migrating from VMware-style resource pools
- Banking / fintech mandates for dedicated capacity but flexibility across many small VMs
- Steady-state predictable workloads with a known footprint
If you have one fat VM that pins a whole physical host: a Dedicated Host is more honest. If you have lots of small-to-medium VMs and want a "cloud you can resize once a quarter": a VDC.
Sizing¶
VDCs ship in named tiers (Small / Medium / Large / XL) but custom sizing is normal — the sales team will tune to your migration footprint. Three numbers matter:
| Dimension | Practical guidance |
|---|---|
| vCPU | Sum your VMs' allocated vCPU × 1.25 for headroom |
| RAM | Sum allocations × 1.20 (lower headroom since RAM doesn't burst the same way) |
| Block storage | Sum disk allocations × 1.40 (snapshots + image overhead) |
Under-sizing a VDC is not fatal — bursting above the commitment falls back to on-demand pricing — but a VDC that chronically spills over is a sign you should re-commit at a higher baseline.
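A quick worked example of those multipliers, for a hypothetical estate of 40 VMs at 4 vCPU / 16 GiB RAM / 100 GiB disk each:
```bash
# Hypothetical fleet: 40 VMs, each 4 vCPU / 16 GiB RAM / 100 GiB disk
echo "vCPU commit: $(( 40 * 4 * 125 / 100 ))"    # 200 -> round up to the Medium tier's 256
echo "RAM (GiB):   $(( 40 * 16 * 120 / 100 ))"   # 768 -> fits the Medium tier's 1 TiB
echo "Disk (GiB):  $(( 40 * 100 * 140 / 100 ))"  # 5600 (~5.5 TiB) -> well inside Medium's 16 TiB
```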
Sub-pools and quotas¶
The killer feature of VDC over Dedicated Hosts: sub-pools.
```bash
# Carve a sub-pool for the finance team
cd compute vdc subpool create \
  --vdc acme-vdc-prod \
  --name finance \
  --vcpu 64 --ram 256 --block-tb 4
```
Sub-pools have their own quota and their own IAM bindings. Teams can self-serve inside their sub-pool without seeing or affecting other teams' allocations.
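After carving a sub-pool, it's worth confirming the quota landed as intended, using the same `subpool show` command the troubleshooting section relies on:
```bash
# Inspect the finance sub-pool's quota and current headroom
cd compute vdc subpool show finance
```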
IAM model¶
| Role | Can do |
|---|---|
| `vdc.viewer` | See VDC and sub-pool sizes; per-sub-pool metrics |
| `vdc.consumer` | Launch VMs within an assigned sub-pool |
| `vdc.subpool-admin` | Manage one sub-pool's quotas and IAM |
| `vdc.admin` | Manage the whole VDC: sub-pools, baseline commit, expansion |
For large orgs: `vdc.admin` is a 1–2 person role; teams get `vdc.subpool-admin` on their slice.
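How a binding is actually granted depends on your IAM tooling; as a sketch only (the `cd iam` subcommand, its flags, and the group name here are assumptions, not documented syntax), giving a team lead their slice might look like:
```bash
# Hypothetical syntax; check the IAM docs for the real grant command
cd iam binding create \
  --role vdc.subpool-admin \
  --member group:finance-leads@example.com \
  --resource vdc/acme-vdc-prod/subpool/finance
```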
Anti-affinity within a VDC¶
Same primitive as Dedicated Hosts: pin a workload's VMs to different hypervisor hosts within the pool for HA:
```bash
cd compute vdc policy create \
  --vdc acme-vdc-prod \
  --name web-tier-ha \
  --type host-anti-affinity \
  --vm-tags 'tier=web,env=prod'
```
The scheduler picks distinct hosts inside the VDC for each VM matching the tag selector.
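For the policy to bite, VMs must carry matching tags at create time. A sketch, assuming the VM create command accepts a `--tags` flag with the same selector syntax (an assumption; check the Virtual Machines guide):
```bash
# Hypothetical flags: the tags mirror the policy's selector above
cd compute vm create --vdc acme-vdc-prod --subpool web \
  --name web-01 --tags 'tier=web,env=prod'
cd compute vm create --vdc acme-vdc-prod --subpool web \
  --name web-02 --tags 'tier=web,env=prod'
# The scheduler now places web-01 and web-02 on distinct hosts inside the VDC
```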
Network plane¶
Default: the VDC's VMs use your regular VPCs. Optional: dedicated inter-VDC network plane — useful for east-west traffic isolation in highly-regulated estates.
Enable via console VDC → Network → Dedicated plane; SRE provisions a separate switch fabric for your VDC. Expect roughly a 1.2× multiplier on the network line item.
Compliance¶
A VDC is, in regulator-speak, "dedicated capacity" — single-tenant at the hypervisor pool level. Most BB ICT 4.0 attestations accept VDC; some auditors prefer the per-host model of Dedicated Hosts. Confirm with your auditor early.
Day-2 VDC¶
Utilization tracking, bursting policy, expansion, and the quarterly rebalance.
Utilization tracking¶
Console VDC → Utilization shows three things you care about:
| Metric | Healthy range | Alert at |
|---|---|---|
| `vdc.vcpu.utilization` | 60–85% | < 50% (over-committed) or > 90% (no headroom) |
| `vdc.ram.utilization` | 65–85% | < 55% or > 90% |
| `vdc.burst.hours.mtd` | < 15% of committed hours | > 25% (consider re-committing higher) |
A VDC that hovers at 40% utilization is costing you money. Re-commit lower at renewal.
Bursting¶
When VMs in the VDC exceed the committed capacity, they spill to on-demand shared hypervisors unless you've disabled bursting:
```bash
cd compute vdc policy set acme-vdc-prod \
  --burst-to-shared false
```
With bursting disabled, VM create returns `VDCCapacityExceeded` and your scheduling pipeline must wait or re-route. This is the right setting for licence-sensitive estates where leaving the VDC boundary breaks the licence.
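What "wait" can look like in practice: a minimal retry-with-backoff sketch. Only the `VDCCapacityExceeded` behaviour comes from this guide; the `vm create` flags are illustrative assumptions.
```bash
# Retry a capacity-bound VM create with linear backoff (illustrative flags)
for attempt in 1 2 3 4 5; do
  if cd compute vm create --vdc acme-vdc-prod --subpool finance \
       --name batch-worker --flavor m1.large; then
    break                     # created successfully
  fi
  # VDCCapacityExceeded: wait for capacity to free up, then retry
  sleep $(( attempt * 60 ))
done
```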
Expansion¶
Two paths:
- Soft expansion — increase the VDC commitment by amending the contract. New capacity online in 24–72 h. Billing increases at the next monthly cycle.
- Hard expansion — add a whole new VDC and federate. Use when the VDC has hit the per-VDC ceiling (1024 vCPU as of 2026-05) or you want a second region.
```bash
# Request opens an internal ticket; SRE confirms ETA
cd compute vdc expand --vdc acme-vdc-prod --add-vcpu 256 --add-ram-gib 1024
```
Sub-pool rebalancing¶
Sub-pools' allocations are zero-sum within the VDC. To shift capacity:
```bash
# Sub-pool totals must stay within the VDC capacity
cd compute vdc subpool resize --vdc acme-vdc-prod --name finance --vcpu 80
cd compute vdc subpool resize --vdc acme-vdc-prod --name retail --vcpu 48
```
Resizes are atomic — either both succeed or neither does (the `resize-batch` form in troubleshooting guarantees this in a single call). VMs already running in a shrunk sub-pool are unaffected; the sub-pool just becomes over-committed until VMs are released.
Live migration¶
Inside a VDC, the platform live-migrates VMs across the pool's hosts during maintenance — same UX as standard VMs, but the target host is always inside your VDC. Tenant isolation is preserved through maintenance.
`vdc.live_migration_count_24h` is a useful health metric: a sudden spike usually means SRE is doing host-level firmware work; nothing for you to do, but useful context.
Quarterly rebalance ritual¶
A pattern that works well: every quarter, review utilization per sub-pool and:
- Shrink sub-pools that have averaged < 50% for the quarter.
- Grow sub-pools that have burst > 20% of the quarter.
- If the whole VDC averaged > 85% or < 55%, schedule a re-commit at renewal.
- Refresh anti-affinity policies — tags drift over time.
Cloud Digit publishes a VDC right-sizing report via the Cost Explorer; use it as a starting point.
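As a starting point for automating that review, a sketch that flags shrink and grow candidates per sub-pool. The `--format json` flag and the field names are assumptions for illustration, not documented output:
```bash
# Flag sub-pools that averaged <50% (shrink) or burst >20% of the quarter (grow).
# The JSON output mode and field names are assumed, not documented.
cd compute vdc subpool list --vdc acme-vdc-prod --format json |
  jq -r '.subpools[]
         | if .quarter_avg_util < 0.50 then "\(.name): shrink candidate"
           elif .quarter_burst_pct > 0.20 then "\(.name): grow candidate"
           else empty end'
```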
Backups¶
VM backups are per-VM (same as standard VMs). The VDC itself has no separate backup concept — it's a capacity construct, not a data construct.
Troubleshoot this service¶
VDCCapacityExceeded¶
```
ERROR: VDCCapacityExceeded: subpool 'finance' has 4 vCPU free, requested 8
```
Two things to check:
- Sub-pool quota — `cd compute vdc subpool show finance`. If the sub-pool is short on free capacity, it's the bottleneck.
- VDC total — if every sub-pool is full but the VDC has free capacity, an admin needs to rebalance sub-pools.
Quick relief:
- Bump the sub-pool quota at the expense of another (zero-sum within the VDC; see the sketch below)
- Enable `burst-to-shared` if you can tolerate spillover to shared hypervisors
- Expand the VDC (longer lead time)
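The zero-sum bump in the first option can be done in one atomic call with `resize-batch` (detailed below); the numbers here are illustrative:
```bash
# Take 8 vCPU from 'retail' and give it to 'finance' in one atomic call
cd compute vdc subpool resize-batch --vdc acme-vdc-prod \
  --set finance=72 --set retail=40
```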
VMs in Scheduling for >60 seconds¶
The scheduler is searching for a host that satisfies:
- Sub-pool quota
- Anti-affinity / affinity policies
- Flavor capacity
If the search fails, the VM enters Error with one of:
- `NoEligibleHost` — relax a policy or expand capacity
- `AntiAffinityUnsatisfiable` — all candidate hosts already run a peer
- `FlavorIncompatible` — the host SKUs in the VDC don't support that flavor
Check console VDC → Scheduler events for the specific reason.
Sub-pool resize fails¶
```
ERROR: SubpoolResizeFailed: total of all subpools (1056) exceeds VDC capacity (1024)
```
Sub-pool sizes must not total more than the VDC capacity. Shrink another sub-pool first, then grow. Atomic resize across multiple sub-pools is supported:
```bash
cd compute vdc subpool resize-batch --vdc acme-vdc-prod \
  --set finance=80 --set retail=48
```
Bursting bill spike¶
If `vdc.burst.hours.mtd` is climbing and the BDT bill is breaking budget:
- Identify which VMs are bursting: console VDC → Burst report.
- Are they prod or dev? Dev burst is usually a runaway scheduler — kill them.
- Prod burst means the commit is undersized — see expansion options.
To stop further bleeding immediately:
```bash
cd compute vdc policy set acme-vdc-prod --burst-to-shared false
```
New VMs fail-closed; existing VMs keep running. You can re-enable later.
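Re-enabling is the same policy flag flipped back:
```bash
cd compute vdc policy set acme-vdc-prod --burst-to-shared true
```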
"VMs are slow but utilization looks fine"¶
In a multi-host VDC, a few things can cause performance to look fine in aggregate but bad per-VM:
- One host degraded — check `dh.health` per host inside the VDC; SRE will live-migrate off, but the few minutes before that move sting
- Network plane saturated — `vdc.network.utilization` near 100%
- Storage backend contended — check the `disk.read_iops` percentile distribution, not just the mean
- NUMA misplacement — VMs > 16 vCPU may straddle sockets; see vm-flavors-troubleshooting
VDC expansion delayed¶
Expansion ETAs:
- Soft expansion (add vCPU/RAM): 24–72 h
- Hard expansion (new VDC instance): 5–10 BWD
Past ETA? Open the expansion ticket directly with SRE — billing/contract approvals occasionally hold things up.
Compliance attestation missing the new sub-pool¶
If you carved a new sub-pool and the quarterly attestation doesn't mention it: the report is generated at quarter close, not real-time. New sub-pools appear in the next quarter's report. If you need attestation immediately (audit-driven), open a Support ticket for a point-in-time attestation.