Cloud Migration (Lift-and-Shift)¶
Service ownership
Owner: professional-services (ps-pm@clouddigit.ai) — Status: GA — Last audited: 2026-05-11
Wave-planned migration from on-prem or other clouds into Cloud Digit, with minimal application change.
What it is¶
A scoped engagement to move workloads onto Cloud Digit infrastructure with as little re-architecting as possible. Right call when:
- The application is stable and not getting a refactor anyway
- You want sovereignty / BDT billing benefits now
- You'll modernize later (see Cloud Modernization)
Engagement phases¶
```mermaid
gantt
    title Lift-and-shift wave (per-application)
    dateFormat YYYY-MM-DD
    axisFormat Wk %V
    section Discovery
    Inventory + dependency map :a1, 2026-06-01, 2w
    Wave plan :a2, after a1, 1w
    section Build
    Landing zone :a3, after a2, 2w
    Replication setup :a4, after a3, 1w
    Initial sync :a5, after a4, 1w
    section Cutover
    Rehearsal cutover :a6, after a5, 1w
    Production cutover :a7, after a6, 1w
    section Stabilise
    Hyper-care :a8, after a7, 2w
```

Tooling we typically pair¶
- VM-level replication — Backup-as-a-Service agent OR third-party (CloudEndure-equivalent, Veeam, etc.)
- Database — Database Migration Service
- Storage — `rclone`, `aws s3 sync` to Object Storage
- Networking — VPN for replication traffic, BDIX Direct Connect for large estates
- Cutover orchestration — runbooks driven by your team, observed by ours
Deliverables¶
- Wave plan with sequenced cutover dates
- Landing-zone IaC (Terraform) — yours at handover
- Per-application runbook, including rollback
- Cutover-day staffing plan
- Post-migration validation report
- Cost-comparison report (before vs after)
Pricing¶
Quote-based; typical wave pricing is per-application + a fixed program-management fee. Hyper-care is included for 2 weeks post-cutover. See Pricing.
Related¶
- Cloud Modernization — what to do after lift-and-shift
- DR Planning
- DMS
Operate this service¶
Cloud Digit-led migration of existing workloads to the platform — VMs, databases, storage, applications.
Engagement scope¶
| Phase | Deliverable |
|---|---|
| Discovery | Inventory of source environment + dependencies |
| Plan | Migration runbook with wave grouping |
| Migrate | Execute waves; data + workload |
| Validate | Functional + performance verification |
| Cutover | DNS / traffic redirect |
| Closeout | Source decom plan |
IAM¶
| Role | Can do |
|---|---|
| migration.viewer | View engagement records |
| migration.requester | Submit migration requests |
| migration.admin | Sign off on contracts |
The Cloud Digit migration team gets temporary elevated access (engagement-scoped, time-bounded, audited).
Wave planning¶
Group workloads by:

- Risk (low-criticality → high-criticality)
- Dependencies (DBs before app tier; auth before everything)
- Effort (similar workloads batched together)

Typical sequence:

1. Dev / staging environments (proof of concept)
2. Internal tools
3. Low-traffic prod
4. High-traffic prod
Each wave: 2-4 weeks. Total engagement: 3-12 months.
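The grouping rules above can be sketched as code. This is an illustrative model only, not tooling we ship: workloads are ordered by dependency depth (so databases and auth land before the tiers that call them), then by risk within each wave. All workload names are hypothetical.

```python
# Hypothetical wave planner: dependency depth first, then risk within a wave.
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    risk: int                               # 1 = low criticality, 3 = high
    depends_on: list = field(default_factory=list)

def depth(w, index, memo):
    """Dependency depth: workloads with no dependencies migrate first."""
    if w.name not in memo:
        memo[w.name] = 0 if not w.depends_on else 1 + max(
            depth(index[n], index, memo) for n in w.depends_on)
    return memo[w.name]

def plan_waves(workloads):
    index = {w.name: w for w in workloads}
    memo, waves = {}, {}
    for w in sorted(workloads, key=lambda w: (depth(w, index, memo), w.risk)):
        waves.setdefault(depth(w, index, memo), []).append(w.name)
    return [waves[k] for k in sorted(waves)]

estate = [
    Workload("auth-db", risk=3),
    Workload("internal-wiki", risk=1, depends_on=["auth-db"]),
    Workload("storefront", risk=3, depends_on=["auth-db"]),
]
print(plan_waves(estate))  # -> [['auth-db'], ['internal-wiki', 'storefront']]
```

Note that risk ordering only applies *within* a wave: the high-risk `auth-db` still goes first because everything depends on it.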
Risk management¶
Each wave has:

- A rollback plan to source (kept warm 30 days)
- Validation criteria (pass/fail gates)
- Customer sign-off before proceeding
Pricing¶
Fixed-price per wave (recommended) or T&M (for unknowns). Discovery phase often T&M; execution often fixed-price.
Discovery phase¶
Tools and methods:

- Agent-based discovery (deploy on source; capture inventory + dependencies)
- Network flow analysis (catch undocumented dependencies)
- Customer interviews (capture tribal knowledge)
Output: Migration Assessment Report.
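Network flow analysis boils down to diffing observed traffic against the documented dependency map. A minimal sketch, with invented hosts and flow records, of how undocumented dependencies surface for the Migration Assessment Report:

```python
# Hypothetical flow analysis: flag (src, dst) pairs seen on the wire
# that are absent from the documented dependency map.
from collections import Counter

flows = [                                   # captured (src, dst, port) records
    ("app-01", "db-01", 5432),
    ("app-01", "db-01", 5432),
    ("app-01", "legacy-ftp", 21),           # nobody remembered this one
    ("batch-07", "db-01", 5432),
]

documented = {("app-01", "db-01"), ("batch-07", "db-01")}

edges = Counter((src, dst) for src, dst, _port in flows)
undocumented = sorted(e for e in edges if e not in documented)
print(undocumented)  # -> [('app-01', 'legacy-ftp')]
```

In practice the flow capture runs for weeks, not a snapshot, so monthly batch jobs and quarterly reports show up too.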
Wave execution¶
Per wave:

1. Pre-cutover — set up target, replicate data (via DMS and similar)
2. Sync window — reach an application-consistent state
3. Cutover — DNS / LB redirect to target
4. Validation — smoke tests, performance check
5. Hypercare — heightened monitoring for 1-2 weeks
6. Decom plan — source stays warm 30 days, then sunset
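The per-wave sequence is a runbook of ordered steps with pass/fail gates: any gate failure triggers rollback and stops the wave. A sketch of that control flow, with step names invented for illustration:

```python
# Illustrative runbook driver: run gated steps in order; any failure
# rolls back the completed steps and halts the wave.
def run_wave(steps, rollback):
    """steps: list of (name, gate) pairs, gate() -> bool."""
    done = []
    for name, gate in steps:
        if gate():
            done.append(name)
        else:
            rollback(done)                  # undo in reverse, revert DNS, etc.
            return f"rolled back after {name}"
    return "wave complete"

steps = [
    ("replicate",   lambda: True),
    ("sync-window", lambda: True),
    ("cutover-dns", lambda: True),
    ("smoke-tests", lambda: True),
]
print(run_wave(steps, rollback=lambda done: None))  # -> wave complete
```

The point of the structure is that rollback is a first-class path, not an afterthought: every step is written knowing the steps before it may need to be undone.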
Metrics¶
| Metric | Notes |
|---|---|
| migration.waves.complete | Of total planned |
| migration.workloads.migrated | Cumulative |
| migration.rollbacks | Should ideally be 0; > 0 means re-plan |
| migration.discovered_dependencies | Catch rate |
Communication¶
Weekly cadence:

- Status report (waves complete, blocked, planned)
- Decisions log
- Risk register
Daily during active waves.
Validation criteria¶
Per workload:

- Functional smoke tests passing
- Performance within 10% of source (or better)
- No data loss (checksum sample)
- No regression in monitoring
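These criteria compose into a single pass/fail gate per workload. A minimal sketch, assuming latency in milliseconds as the performance proxy (any metric with the same "within 10% of source" rule works):

```python
# Illustrative validation gate combining the per-workload criteria above.
def validate(source_latency_ms, target_latency_ms, checksums_match, smoke_passed):
    """Pass only if smoke tests pass, the checksum sample matches,
    and target performance is within 10% of source (or better)."""
    within_10pct = target_latency_ms <= source_latency_ms * 1.10
    return smoke_passed and checksums_match and within_10pct

print(validate(120.0, 128.0, True, True))   # ~7% slower -> True
print(validate(120.0, 180.0, True, True))   # 50% slower -> False
```

The gate is deliberately boolean: a wave either meets the sign-off criteria or it does not, which keeps the customer sign-off conversation short.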
Hypercare¶
Post-cutover, 1-2 weeks of:

- CD on-call for migration-related incidents
- Joint daily standup
- Quick rollback if needed
Then transition to BAU.
Discovery incomplete¶
A workload was missed during discovery:

- Pause the wave; re-discover
- Add it to the next wave
- Or migrate it ad hoc if simple
Discovery completeness matters; rushed migrations leave orphans.
Performance worse on target¶
The source ran on bare metal, the target is virtualized, and performance differs:

- Re-size to a higher flavor
- Move to a bare-metal target if needed
- Refactor the app (next engagement: modernization)

A 10% performance delta is acceptable; 50% is a problem.
Data inconsistency post-migration¶
Source and target diverge. Common causes:

- DMS replication lag at cutover
- A trigger on the source modified data post-cutover (cleanup needed)
- An application bug double-writing

Run `cd dms validate`; reconcile the differences.
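Reconciliation is essentially a per-row checksum diff between source and target. A hypothetical sketch of the comparison `cd dms validate` performs (the table contents and primary keys are invented):

```python
# Hypothetical reconciliation: compare per-row checksums between source
# and target samples; rows whose checksums differ need investigation.
import hashlib

def row_checksum(row):
    """Stable digest of a row's column values."""
    return hashlib.sha256("|".join(map(str, row)).encode()).hexdigest()

source = {1: ("alice", "active"), 2: ("bob", "active")}
target = {1: ("alice", "active"), 2: ("bob", "disabled")}  # diverged post-cutover

diverged = [pk for pk in source
            if row_checksum(source[pk]) != row_checksum(target.get(pk, ()))]
print(diverged)  # -> [2]
```

In practice this runs over a sampled subset of rows per table; a full-table checksum pass is usually too slow for cutover-day validation.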
Rollback after cutover¶
Cutover failed; revert:

1. Revert DNS to the source
2. Replicate target → source (catch any writes made during the target window)
3. Investigate; re-plan
Source kept warm 30 days for this scenario.
Stakeholder fatigue¶
Long engagements (6+ months) lose momentum:

- Re-affirm value at quarter boundaries
- Adjust scope if needed
- Celebrate milestones
Migrations are marathons; pacing matters.
Scope creep¶
The customer requests modernization mid-migration (refactoring while moving):

- Out of scope for lift-and-shift
- Refer to Modernization as a follow-on engagement
- Migrate as-is; modernize after
Mixing scopes loses focus.
Source environment decom delayed¶
Cutover successful but the source is not decommissioned:

- BDT bleeds twice (source + target)
- Establish a hard decom date (30 days post-cutover)
- Track it in engagement closeout