Dedicated Hosts¶
Service ownership
Owner: compute-platform (compute-pm@clouddigit.ai) — Status: GA — Last audited: 2026-05-11
A whole hypervisor host reserved for one tenant. Same KVM stack as standard Virtual Machines — different isolation guarantee.
What it is¶
A physical hypervisor in our fleet, allocated to your project. You launch VMs on it like any other VM, but no other tenant's workloads ever land on the same host.
When to pick this over standard VMs¶
- Per-physical-CPU licensing (Oracle, SQL Server, certain ISV licences)
- Compliance requirement for "no shared hypervisor"
- Workloads that need predictable noisy-neighbour-free performance but don't justify Bare Metal
- "Bring your own VM placement" scenarios
Key features¶
- KVM hypervisor managed by Cloud Digit; you launch and manage VMs on top
- Live migration within your dedicated hosts during maintenance
- Standard VPC, security groups, and IAM apply to VMs on the host
- Per-host capacity model — pick a SKU, fill it with your VMs up to its capacity envelope
Host SKUs¶
| SKU | vCPU capacity | RAM capacity | Best for |
|---|---|---|---|
| dh-32x256 | 32 vCPU | 256 GiB | Small ISV-licensed footprint |
| dh-48x384 | 48 vCPU | 384 GiB | Medium estate |
| dh-64x512 | 64 vCPU | 512 GiB | Large estate |
| dh-96x768 | 96 vCPU | 768 GiB | Memory-heavy ISV stack |
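The capacity envelope is a plain sum: the vCPUs and RAM of the VMs placed on a host can't exceed the SKU's totals. A minimal sketch of the fit check, assuming a dh-48x384 host and hypothetical VM sizes:

```shell
# Fit check against a dh-48x384 envelope (48 vCPU, 384 GiB).
# VM sizes below are hypothetical, written as "vcpu:ram_gib".
host_vcpu=48; host_ram=384
vms="16:128 8:64 8:64 4:32"

used_vcpu=0; used_ram=0
for vm in $vms; do
  used_vcpu=$(( used_vcpu + ${vm%%:*} ))  # part before the colon
  used_ram=$(( used_ram + ${vm##*:} ))    # part after the colon
done

echo "allocated: ${used_vcpu}/${host_vcpu} vCPU, ${used_ram}/${host_ram} GiB"
if (( used_vcpu <= host_vcpu && used_ram <= host_ram )); then
  echo "fits"
else
  echo "does not fit"
fi
```

The same arithmetic is what the scheduler applies when it rejects a launch with HostCapacityExceeded.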
Pricing¶
Monthly base rate per host; VMs running on the host are not metered separately (the host price covers them). Commitment plans apply. See Pricing.
Limits¶
- Up to 20 VMs per host (subject to flavor capacity)
- Live migration between dedicated hosts supported within the same project
Related¶
- Virtual Machines
- Bare Metal Servers
- VDC — virtualised dedicated pool, larger than a single host
Operate this service¶
How to choose, allocate, and govern dedicated hypervisor hosts — the "single-tenant" variant of regular VMs.
When admins actually need a dedicated host¶
Don't reach for one by default — they cost 1.4–1.8× the equivalent on-demand VMs. Pick a dedicated host when at least one of:
- An ISV licence requires "physical CPU" boundaries (Oracle DB, SQL Server Enterprise, certain BI tools)
- An auditor demands "no shared hypervisor" wording (BB ICT 4.0, some PCI-DSS scopes)
- A workload is sensitive enough that hyp.steal spikes are unacceptable, but Bare Metal is overkill
If you just want predictable performance and aren't fighting a licence: try a VDC first — same isolation property, better economics for mixed footprints.
Host SKU selection¶
| SKU | Best for |
|---|---|
| dh-32x256 | One sizable database + a handful of app VMs |
| dh-48x384 | Mixed estate, ~10–15 VMs |
| dh-64x512 | Larger estate, room for HA pairs |
| dh-96x768 | JVM-heavy or in-memory-DB estates |
A common pattern: two dh-48x384 in two AZs, with anti-affinity rules so HA pairs sit on different hosts.
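That pattern can be sketched with the CLI forms used elsewhere on this page (the allocate flags appear under Troubleshooting, the policy syntax under Placement policies); region, AZ, and tag names here are illustrative:

```bash
# Two dh-48x384 hosts in two AZs (illustrative region/AZ values)
cd compute dh allocate --sku dh-48x384 --region bd-dha-1 --az az1
cd compute dh allocate --sku dh-48x384 --region bd-dha-1 --az az2

# Keep HA pairs on different hosts
cd compute dh policy create \
  --project acme-prod-oracle-dh \
  --name ha-anti-affinity \
  --type host-anti-affinity \
  --vm-tags 'role=ha-pair'
```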
Project layout and IAM¶
Recommended:
- One project per dedicated-host footprint (e.g., acme-prod-oracle-dh)
- Bind dh.admin to a small group; everyone else gets vm.builder on the project and lands their VMs into the project's host pool
- Tag every VM with the licence-bearing application (e.g., oracle=ee, sql-server=ee) so the Cost Explorer can roll up "what's running on the host"
Built-in roles:
| Role | Can do |
|---|---|
| dh.viewer | List hosts, see placement |
| dh.builder | Launch VMs onto an existing host (subject to capacity) |
| dh.admin | Allocate/release hosts, set placement policies, configure HA |
Placement policies¶
Two knobs:
- Host-affinity — pin specific VMs to a specific host (licence-binding scenario).
- Host-anti-affinity — keep HA pairs on different hosts.
```bash
cd compute dh policy create \
  --project acme-prod-oracle-dh \
  --name oracle-rac-anti-affinity \
  --type host-anti-affinity \
  --vm-tags 'role=oracle-rac-node'
```
The platform's scheduler respects these at VM create and at live-migration time.
Licensing bookkeeping¶
The platform doesn't enforce ISV licences for you — it gives you the boundary, you do the math.
For Oracle DB Enterprise Edition, document for the audit:
- Host SKU and its physical CPU count (BMC-reported, in console Compute → DH → Detail)
- Date the host was allocated to your project (BDT line item with timestamp)
- VMs running on the host (the host detail page lists them)
- A signed letter from Cloud Digit attesting single-tenancy (available on request)
The quarterly BM/DH attestation report packages this for you.
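As an illustration of "you do the math" (bookkeeping, not legal advice): Oracle EE is typically licensed per physical core multiplied by a core factor, commonly 0.5 on x86 processors. The core count below is hypothetical, and the factor is an assumption you should confirm against your own licence agreement:

```shell
# Hypothetical host: 32 physical cores (BMC-reported; see Compute -> DH -> Detail)
physical_cores=32
# Assumed x86 core factor of 0.5; licences = ceil(cores * 0.5),
# done here in integer maths.
licences=$(( (physical_cores + 1) / 2 ))
echo "Oracle EE processor licences required: ${licences}"
```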
Capacity reservations¶
Like Bare Metal, dedicated hosts need a reservation. Unlike Bare Metal, the lead time is shorter (typically <24 h) since the hypervisor is already racked — Cloud Digit just drains other tenants off it for you.
Day-2 operations¶
Living with single-tenant hypervisors: how live migration changes, what to monitor, and how to coordinate maintenance.
Monitoring (host-level)¶
In addition to per-VM metrics, hosts expose host-level metrics — useful for capacity decisions:
| Metric | Notes |
|---|---|
| dh.vcpu.allocated | Sum of vCPU across VMs on the host vs. host vCPU capacity |
| dh.ram.allocated | Same for RAM |
| dh.cpu.utilization | Hypervisor-side; should track VM cpu.busy aggregate |
| dh.live_migration_in | VMs migrated onto this host (during peer maintenance) |
| dh.live_migration_out | VMs migrated off this host (during this host's maintenance) |
| dh.health | healthy / degraded / critical |
Alert on dh.health != healthy and on dh.vcpu.allocated > 0.9 * capacity (no headroom for HA reshuffles).
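The headroom alert is plain arithmetic; a sketch of the same check in shell, with a hypothetical reading from a dh-48x384 host:

```shell
# Hypothetical reading: 44 of 48 vCPU allocated on a dh-48x384
capacity=48
allocated=44
# Alert when allocated > 0.9 * capacity (integer maths: compare 10x values)
if (( allocated * 10 > capacity * 9 )); then
  echo "ALERT: less than 10% vCPU headroom (${allocated}/${capacity})"
fi
```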
Live migration¶
Cloud Digit will live-migrate VMs between your project's dedicated hosts during maintenance; you don't approve each move, the platform manages it. To check eligibility:
```bash
cd compute dh capacity --project acme-prod-oracle-dh
# Shows free vCPU/RAM per host
```
If the project has only one host: maintenance will require scheduled downtime, coordinated 14 days in advance. Always allocate at least 2 hosts for production.
Adding VMs to the host pool¶
VMs land on a host implicitly when the project is configured with default-placement=dedicated. Otherwise, target a specific host:
```bash
cd compute vm create \
  --project acme-prod-oracle-dh \
  --flavor std-8x32 \
  --image rhel-9 \
  --placement-host dh-bd-dha-1-az2-04
```
Out-of-capacity returns HostCapacityExceeded — see Troubleshooting.
Rebalancing¶
When a host fills up, manually rebalance:
```bash
# Move a VM to a peer host (live, no downtime)
cd compute vm migrate --vm app-04 --target-host dh-bd-dha-1-az2-05
```
The platform refuses migrations that would violate an anti-affinity policy — you'll get a clear error pointing to the offending policy.
Maintenance coordination¶
Cloud Digit gives DH customers a maintenance contact list — names + emails that get notified 14 days before any host-level maintenance. Update via console DH → Settings → Maintenance contacts.
For workloads that cannot tolerate the standard live-migration blackout (200–800 ms), schedule a coordinated window: SRE will pause migrations, you'll coordinate app failover, then SRE resumes. Request via Support ticket; minimum 7 days notice.
Releasing a host¶
```bash
# Drain VMs (live-migrates them to peers; fails if not enough peer capacity)
cd compute dh drain --host dh-bd-dha-1-az2-04

# Then release (billing stops at midnight UTC)
cd compute dh release --host dh-bd-dha-1-az2-04
```
You can't release a host with VMs on it. Drain first.
Backups¶
Backups are per-VM (same mechanism as standard VMs); the host itself isn't backed up, since it's a stateless hypervisor.
Troubleshooting¶
HostCapacityExceeded¶
Trying to launch a VM and the host pool is full:
ERROR: HostCapacityExceeded: no dedicated host in project 'acme-prod-oracle-dh' has 8 vCPU + 32 GiB free
Three resolutions:
- Resize an existing VM down to free capacity.
- Allocate another host (cd compute dh allocate --sku dh-48x384 --region bd-dha-1 --az az2); typically ready in under 24 h.
- Allow burst to on-demand: set the project policy dh.burst-to-on-demand=true. New VMs that don't fit will land on shared hypervisors. Risky for licence boundaries; safe for general compute.
VM landed on the wrong host¶
If a VM ends up on host A instead of host B (where you wanted it):
```bash
cd compute vm migrate --vm app-04 --target-host <correct-host>
```
If you wanted host B but a placement policy prevents it: read the policy (cd compute dh policy show <name>) and either amend it or pick a different host. Don't disable the policy in production without coordinating — it's usually preventing a licence violation or an HA blast-radius problem.
Anti-affinity rejection¶
ERROR: AntiAffinityViolation: would place vm 'oracle-rac-02' on host containing vm 'oracle-rac-01' tagged role=oracle-rac-node
Working as intended. Either:
- Pick a different host
- Allocate a new host
- Amend the policy (with caution; HA exists for a reason)
Drain stuck¶
cd compute dh drain hangs or partially completes:
| Cause | Fix |
|---|---|
| Peer hosts don't have capacity | Allocate another host first, then retry drain |
| A VM with anti-affinity has no valid target | Migrate that VM manually to a non-conflicting host |
| A VM is in Building or Error state | Resolve that VM's state first; drain skips them |
| VMs with PCI passthrough (rare on DH) | Manual stop required before migrate |
Drain emits per-VM status to console DH → Events.
High hyp.steal on a dedicated host¶
If you see steal time on a single-tenant host, that's unusual — investigate:
- A vCPU-overcommitted VM placement (multiple VMs with 32 vCPU each on a 32-vCPU host) → reduce overcommit; the scheduler should prevent this but check
- A live-migration in progress → temporary, ignore
- A platform bug → ticket it; Cloud Digit will investigate placement and fix
Host marked degraded¶
dh.health=degraded reasons (visible in console):
| Reason | Customer action |
|---|---|
| DIMM uncorrectable ECC | None; SRE will live-migrate VMs and swap |
| Fan stuck | None; SRE will dispatch |
| LACP leg down | None; traffic still flows on the other |
| Pending firmware critical | Acknowledge the maintenance window |
The platform live-migrates VMs off degraded hosts proactively; you'll see dh.live_migration_out tick up.
Released host still billing¶
Release-time is midnight UTC of the day you issued release. A host released at 14:00 UTC Tuesday stops billing at 00:00 UTC Wednesday. The MTD bill reflects this.
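The cut-off can be computed mechanically. A sketch using GNU date (not the platform CLI) with the Tuesday example above:

```shell
# Release issued Tuesday 2026-05-12 at 14:00 UTC (example timestamp)
release="2026-05-12T14:00:00Z"
# Billing stops at the next midnight UTC: take the date part, add one day
stop=$(date -u -d "${release%%T*} + 1 day" +"%Y-%m-%dT00:00:00Z")
echo "billing stops at: ${stop}"
```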
If you see a host you released last week still billing: confirm in console DH → Lifecycle that the release went through. Otherwise ticket.