Snapshot Storage¶
Service ownership
Owner: storage-platform (storage-pm@clouddigit.ai) — Status: GA — Last audited: 2026-05-11
A standalone snapshot repository — independent of any specific volume — for offsite snapshot lineage, cross-region copies, and long retention.
What it is¶
A backing store for snapshot artefacts, distinct from the live block storage where the source volume lives. Snapshots created on Cloud Digit block volumes (NVMe HCI or Provisioned IOPS) can flow into this repository via Backup-as-a-Service or via lifecycle policies.
Why standalone¶
| Concern | How a standalone repository helps |
|---|---|
| Snapshot lineage tied to source volume lifecycle | Outlives volume deletion |
| Single-region failure | Cross-region copies |
| Audit retention beyond live storage | Object-storage backed, decoupled |
| Restoring to a new region or new VPC | Treats the snapshot as a portable artefact |
Operations¶
- Copy snapshot → Snapshot Storage (manual, scheduled, or policy)
- Copy across regions
- Restore from Snapshot Storage to a new volume in any region
- Lifecycle rules (retention, transition to deeper Archive)
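A hedged sketch of these operations with the `cd snap-store` CLI; the `deposit` and `restore` commands match the workflows later on this page, while the cross-region `copy` subcommand and its flags are assumptions:

```bash
# Push a local snapshot into the repository (see the deposit workflow below)
cd snap-store deposit --snapshot snap-123 --repo acme-compliance-snapshots \
  --workload acme-finance-db --retention-years 7

# Assumed subcommand: copy a repository snapshot to another region
cd snap-store copy --snapshot rep-snap-abc --target-region bd-ctg-1

# Restore into a new volume in a target project (see the restore workflow below)
cd snap-store restore --snapshot rep-snap-abc --target-project acme-prod-restore
```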
Pricing¶
Backed by Object Storage (Archive) — same per-GiB-month rate. See Pricing.
Operate this service¶
Standalone snapshot repository for block/file volumes — distinct from the VM-coupled snapshots in Compute → Snapshots & Custom Images.
Why a separate repository¶
- Cross-project snapshot custody (snapshots survive project deletion)
- Long-retention compliance copies stored apart from operational snapshots
- Audit-only access (snapshots visible to compliance team, not to project members)
IAM¶
| Role | Can do |
|---|---|
| `snap-store.viewer` | List snapshots and metadata |
| `snap-store.depositor` | Push snapshots from a source project to the repository |
| `snap-store.restorer` | Restore snapshots into a target project |
| `snap-store.admin` | Manage repositories, retention, immutability |
Separation: a project's `volume.admin` can take snapshots locally; only `snap-store.depositor` can push them to the repository.
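A hedged sketch of wiring up this separation; `cd iam grant` and its flags are hypothetical, since this page doesn't show the IAM CLI:

```bash
# Hypothetical: only the backup service account may deposit to the repository;
# project members keep volume.admin for local snapshots but cannot push.
cd iam grant \
  --principal svc/backup-runner \
  --role snap-store.depositor \
  --resource repo/acme-compliance-snapshots
```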
Immutability¶
Enable Object Lock-style immutability per repository:
```bash
cd snap-store repo create \
  --name acme-compliance-snapshots \
  --immutability compliance \
  --min-retention-days 2555
```
In `compliance` mode, retention cannot be shortened, even by the root account. Use `governance` for shorter or shortable holds.
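For contrast, a sketch of a governance-mode repository reusing the flags shown above; the repo name and the 730-day value are illustrative:

```bash
# governance mode: holds can later be shortened by an authorized admin
cd snap-store repo create \
  --name acme-operational-snapshots \
  --immutability governance \
  --min-retention-days 730
```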
Retention catalogue¶
Maintain a YAML retention catalogue mapping workloads to retention:
```yaml
catalogue:
  - workload: acme-finance-db
    retention_years: 7
    immutability: compliance
  - workload: acme-marketing-data
    retention_years: 2
    immutability: governance
```
Push via `cd snap-store catalogue apply` — the repository enforces retention per snapshot based on its tagged workload.
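A hedged sketch of the apply step; only the bare `cd snap-store catalogue apply` subcommand appears on this page, so the `--repo` and `--file` flags are assumptions:

```bash
# Assumed flags: --repo (target repository), --file (catalogue YAML)
cd snap-store catalogue apply \
  --repo acme-compliance-snapshots \
  --file catalogue.yaml
```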
Cost¶
Snapshot Storage uses Archive-class pricing for snapshots older than 30 days. Plan for it in the multi-year budget.
Deposit workflow¶
```bash
# In the source project — take an app-consistent snapshot
SNAP=$(cd compute snapshot create --vm db-01 --app-consistent -o id)

# Push it to the repository
cd snap-store deposit \
  --snapshot $SNAP \
  --repo acme-compliance-snapshots \
  --workload acme-finance-db \
  --retention-years 7
```
Once deposited, the local snapshot can be deleted (the copy now lives in the repository) or kept for faster restores (at extra cost).
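A hedged sketch of verifying the deposit before deleting the local copy; `cd compute snapshot delete` is an assumed subcommand, and the grep assumes snapshot IDs appear verbatim in the `list` output:

```bash
# Delete the local snapshot only once it is visible in the repository
if cd snap-store list --repo acme-compliance-snapshots \
     --workload acme-finance-db | grep -q "$SNAP"; then
  cd compute snapshot delete --snapshot "$SNAP"   # assumed subcommand
fi
```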
Restore workflow¶
```bash
cd snap-store list --repo acme-compliance-snapshots --workload acme-finance-db
cd snap-store restore --snapshot rep-snap-abc --target-project acme-prod-restore
```
Restore time depends on snapshot age:

- < 30 days: minutes
- 30–365 days: 1–4 hours (archive thaw)
- > 365 days: 4–12 hours
Metrics¶
| Metric | Notes |
|---|---|
| `snap-store.bytes_stored` | Per-repo, per-workload |
| `snap-store.deposits_24h` | Should match your deposit schedule |
| `snap-store.restores_24h` | A sudden spike means an unscheduled restore is in progress |
| `snap-store.immutability_breach_attempts` | > 0 means a process tried to delete a locked snapshot; investigate |
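A hedged sketch of alerting on the breach-attempt metric; `cd monitor alert create` is entirely hypothetical — substitute whatever your monitoring stack uses:

```bash
# Hypothetical alert: page on any attempt to delete a locked snapshot
cd monitor alert create \
  --metric snap-store.immutability_breach_attempts \
  --condition "> 0" \
  --window 5m \
  --notify oncall-storage
```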
Compliance reporting¶
Monthly attestation report (auto-generated):

- Total snapshots, by workload, by retention tier
- Immutability breaches (always 0 if compliance mode works as designed)
- Upcoming retention expiry — list of snapshots scheduled to be deleted in the next 90 days
Required by some financial regulators.
Repository replication¶
Repos can replicate to a peer in another BD region for DR:
```bash
cd snap-store repo replicate \
  --source acme-compliance-snapshots \
  --target-region bd-ctg-1
```
Replication is asynchronous; expected lag is under 4 hours.
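A hedged sketch of checking lag against the 4-hour expectation; the `metrics` subcommand is an assumption, though the metric name appears in the troubleshooting section below:

```bash
# Assumed subcommand: read the replication lag for a repository
cd snap-store metrics \
  --repo acme-compliance-snapshots \
  --metric snap-store.replication.lag_sec
```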
Deposit fails: WorkloadNotInCatalogue¶
```
ERROR: WorkloadNotInCatalogue: 'acme-new-app' not registered
```
Repositories enforce the retention catalogue. Either add the workload:
```bash
cd snap-store catalogue add \
  --repo acme-compliance-snapshots \
  --workload acme-new-app \
  --retention-years 3
```
Or deposit to a different repo. The catalogue requirement is a guardrail — don't disable it.
Restore slow / stuck¶
Old (>30 day) snapshots are archive-tier; restore needs a thaw. `cd snap-store restore status --request-id <id>` shows the current stage:
| Stage | Typical duration |
|---|---|
| `thawing` | 1–4 h for 30–365-day snapshots; 4–12 h for older |
| `restoring` | Throughput-bound; ~30 min per TiB |
| `available` | Done |
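A hedged polling sketch built on the `restore status` command above; it assumes the stage name appears verbatim in the status output:

```bash
REQ="<request-id>"   # from the restore command's output

# Poll until the restore reaches the 'available' stage; thaws take hours,
# so a 5-minute interval is plenty
until cd snap-store restore status --request-id "$REQ" | grep -q available; do
  sleep 300
done
```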
Immutability breach attempt¶
```
WARN: Immutability breach attempt: principal user/bob tried to delete locked snapshot
```
A user (or a service acting as them) tried to delete a compliance-mode snapshot. The attempt is logged automatically. Investigate why — it's usually a buggy cleanup script.
Compliance-mode snapshots cannot be deleted until retention expires. Document the incident; the snapshot is safe.
Retention catalogue conflict¶
```
ERROR: CatalogueConflict: workload 'acme-app' has two retention rules (5 yr, 7 yr)
```
The catalogue is validated at apply time. Resolve by keeping one rule (usually the stricter), then re-apply.
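A hedged shell sketch for spotting the duplicate before re-applying; the awk field position assumes the catalogue layout shown earlier:

```bash
# Print workloads that appear more than once in catalogue.yaml
grep 'workload:' catalogue.yaml | awk '{print $3}' | sort | uniq -d
```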
Replication lag¶
If `snap-store.replication.lag_sec` exceeds 4 hours (14,400 s), the likely causes are:
- Source-side deposit rate exceeded replication throughput
- Inter-region link saturated
- Target-region repository over quota
Open a Support ticket for sustained lag.
Repository over quota¶
```
ERROR: QuotaExceeded: repo storage 100 TiB, current 99.4 TiB
```
Repositories have a soft quota; request a bump via Support ticket (response < 1 BWD). Compliance repositories are usually bumped without scrutiny.
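A hedged sketch of checking usage before filing the ticket; `repo show` is an assumed subcommand:

```bash
# Assumed subcommand: report repository usage against its soft quota
cd snap-store repo show --name acme-compliance-snapshots
```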