File Storage (NFS / SMB)¶
Service ownership
Owner: storage-platform (storage-pm@clouddigit.ai) — Status: GA — Last audited: 2026-05-11
Managed shared filesystems — NFSv4.1 and SMB 3.x — accessible from VMs, K8s pods, and bare metal.
What it is¶
A managed file share that you mount on multiple compute resources at once. The right primitive for lift-and-shift apps that expect a POSIX (or Windows) shared mount, and for workloads where multiple VMs read and write the same hierarchy.
Protocols¶
| Protocol | Use |
|---|---|
| NFSv4.1 | Linux / *nix workloads, K8s persistent volumes |
| SMB 3.x | Windows workloads, mixed environments |
| Dual | Same backing share exposed via both |
Use cases¶
- Lift-and-shift apps that need a shared `/data` mount
- WordPress / Drupal media libraries
- Build artifacts shared across CI runners
- Home directories
- Kubernetes `ReadWriteMany` PVs
Performance tiers¶
| Tier | Throughput per share | IOPS | Best for |
|---|---|---|---|
| Standard | 500 MB/s | 20,000 | General shared FS |
| Performance | 2 GB/s | 80,000 | Higher-IOPS shared workloads |
| Premium | 4 GB/s | 200,000 | NetApp-backed; latency-sensitive |
The Premium tier is backed by NetApp ONTAP; Standard and Performance are backed by Ostor (Virtuozzo).
Capacity¶
- Min: 100 GiB; max: 100 TiB per share (Standard / Performance)
- Premium scales to 500 TiB per share
- Capacity hot-grow supported; shrinking is offline-only (drain clients first)
Region availability¶
| Region | Status |
|---|---|
| bd-dha-1 | GA |
| bd-ctg-1 | Preview |
| bd-syl-1 | Preview |
Pricing¶
Per GiB-month, by tier. See Pricing.
Related¶
- Block Storage (NVMe HCI) — single-attach block, default for VMs
- Object Storage (S3) — for data your app can address by key
- Managed Kubernetes — RWX PVs use this service
Operate this service¶
Shared filesystems for workloads that need POSIX semantics across many clients.
Protocol choice¶
| Use case | Pick |
|---|---|
| Linux app cluster, shared assets | NFS |
| Windows / mixed clients | SMB |
| HPC / training datasets | NFS |
| Legacy Windows file server replacement | SMB |
A share serves a single protocol. To serve both, provision two shares (NFS + SMB) over the same underlying capacity using multi-protocol mode (Premium tier only).
IAM and access control¶
| Role | Can do |
|---|---|
| `file.viewer` | List shares |
| `file.mounter` | Mount existing shares (issues mount credentials) |
| `file.builder` | Create / resize / delete shares |
| `file.admin` | Above + access policy, AD integration, snapshot policy |
NFS: export rules by CIDR or by VPC. SMB: integrate with Active Directory for user/group ACLs:
```bash
cd file share create \
  --name acme-shared \
  --protocol smb \
  --size-tib 4 \
  --ad-domain acme.local
```
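Because NFS export rules are plain CIDR matches, whether a client will be able to mount is easy to pre-check before touching the share. A minimal sketch using Python's standard `ipaddress` module (the export list here is illustrative, not a real rule set):

```python
from ipaddress import ip_address, ip_network

def client_allowed(client_ip: str, export_cidrs: list) -> bool:
    """True if the client falls inside any export rule's CIDR."""
    return any(ip_address(client_ip) in ip_network(cidr) for cidr in export_cidrs)

# Illustrative export rules for a share restricted to two VPC subnets:
exports = ["10.20.0.0/16", "10.30.5.0/24"]
print(client_allowed("10.20.1.7", exports))    # True
print(client_allowed("192.168.1.9", exports))  # False
```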
Performance tiers¶
| Tier | Baseline throughput | Use case |
|---|---|---|
| `general` | 250 MiB/s per TiB | App-tier shared files |
| `bursting` | 250 MiB/s + burst | Bursty CI / analytics |
| `provisioned` | Up to 4 GiB/s | HPC, training datasets |
Bursting tier accrues "burst credits" during idle periods and spends them during spikes — useful when load is spiky rather than sustained.
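A rough mental model of the bursting tier, combining the 250 MiB/s-per-TiB baseline with credit accrual and spend (a simplified sketch, not the billing implementation — accrual is modeled as MiB of unused headroom per second):

```python
def baseline_mibps(size_tib: float) -> float:
    """Documented baseline: 250 MiB/s per TiB provisioned."""
    return 250.0 * size_tib

def simulate_burst(size_tib, demand_mibps, credits=0.0):
    """Serve per-second demand; idle headroom accrues credits, spikes spend them."""
    base = baseline_mibps(size_tib)
    served = []
    for d in demand_mibps:
        if d <= base:
            credits += base - d             # accrue unused headroom
            served.append(d)
        else:
            spend = min(d - base, credits)  # spend credits above baseline
            credits -= spend
            served.append(base + spend)
    return served, credits

# 2 TiB share (500 MiB/s baseline): two quiet seconds, then a 900 MiB/s spike.
served, remaining = simulate_burst(2, [100, 100, 900])
print(served, remaining)  # [100, 100, 900.0] 400.0
```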
Encryption¶
At-rest: AES-256, platform-managed. CMK supported. In-transit: NFS via Kerberos or TLS-wrapped (NFS-over-TLS); SMB 3.x with encryption enabled.
Tagging¶
Tag every share with app, env, cost-center. Required for chargeback.
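A pre-flight check for the required chargeback tags can be a one-liner over the tag dictionary (the helper name is hypothetical):

```python
REQUIRED_TAGS = {"app", "env", "cost-center"}

def missing_tags(tags: dict) -> set:
    """Return which required chargeback tags are absent from a share."""
    return REQUIRED_TAGS - tags.keys()

print(missing_tags({"app": "acme-web", "env": "prod"}))  # {'cost-center'}
```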
Mounting¶
NFS:

```bash
sudo mount -t nfs -o vers=4.1,rsize=1048576,wsize=1048576 \
  share-abc.file.bd-dha-1.cloudigit.internal:/ /mnt/shared
```

SMB:

```bash
sudo mount -t cifs //share-abc/share /mnt/shared \
  -o vers=3.1.1,credentials=/etc/credentials/share-abc
```
Use vers=4.1 (NFS) and vers=3.1.1 (SMB) — older protocol versions have weaker security and lower performance.
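The recommendation above can be linted straight from the mount-option string. A small sketch (the expected values are the ones given on this page; the function names are hypothetical):

```python
def parse_mount_opts(opts: str) -> dict:
    """Turn 'vers=4.1,rsize=1048576' into {'vers': '4.1', 'rsize': '1048576'}."""
    parsed = {}
    for part in opts.split(","):
        key, _, value = part.partition("=")
        parsed[key] = value
    return parsed

def lint_nfs_opts(opts: str) -> list:
    """Flag NFS mount options that deviate from the recommended settings."""
    o = parse_mount_opts(opts)
    problems = []
    if o.get("vers") != "4.1":
        problems.append("use vers=4.1")
    if o.get("rsize") != "1048576" or o.get("wsize") != "1048576":
        problems.append("set rsize/wsize to 1048576")
    return problems

print(lint_nfs_opts("vers=3,rsize=65536,wsize=65536"))
print(lint_nfs_opts("vers=4.1,rsize=1048576,wsize=1048576"))  # []
```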
Metrics¶
| Metric | Healthy | Alert |
|---|---|---|
| `file.throughput_mbps` | within tier | sustained > 95% of tier |
| `file.iops` | within tier | sustained > 95% of tier |
| `file.client_count` | varies | sudden drop (clients lost mount) |
| `file.burst_credits` (bursting tier) | accruing or steady | exhausted |
| `file.latency_ms` p99 | < 5 ms | > 15 ms |
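The alert column translates directly into threshold checks. A sketch of an evaluator over a single metrics sample (field names are assumed to mirror the metric names above; sustained-window logic is omitted for brevity):

```python
def evaluate_alerts(sample: dict, tier_throughput_mbps: float, tier_iops: float) -> list:
    """Return the alerts fired by one metrics sample, per the table above."""
    fired = []
    if sample["throughput_mbps"] > 0.95 * tier_throughput_mbps:
        fired.append("throughput near tier limit")
    if sample["iops"] > 0.95 * tier_iops:
        fired.append("iops near tier limit")
    if sample.get("burst_credits", None) == 0:   # only meaningful on bursting tier
        fired.append("burst credits exhausted")
    if sample["latency_p99_ms"] > 15:
        fired.append("p99 latency high")
    return fired

sample = {"throughput_mbps": 490, "iops": 79000, "burst_credits": 0, "latency_p99_ms": 22}
print(evaluate_alerts(sample, tier_throughput_mbps=500, tier_iops=80000))
```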
Resize¶
Online, no remount:
```bash
cd file share resize --share acme-shared --size-tib 8
```

The new capacity is available immediately to all mounted clients.
Shrinking is offline — drain clients first.
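The grow/shrink asymmetry can be captured in a simple guard (a hypothetical helper, not part of the CLI):

```python
def check_resize(current_gib: int, requested_gib: int, mounted_clients: int) -> str:
    """Grow is online; shrink requires draining every client first."""
    if requested_gib == current_gib:
        return "no-op"
    if requested_gib > current_gib:
        return "ok: online grow, no remount needed"
    if mounted_clients > 0:
        return "blocked: drain clients before shrinking"
    return "ok: offline shrink"

print(check_resize(4096, 8192, mounted_clients=12))  # online grow
print(check_resize(8192, 4096, mounted_clients=12))  # blocked until drained
```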
Snapshots¶
```bash
cd file snapshot create --share acme-shared --name pre-deploy-2026-05-11
```
NFS snapshots appear as a read-only .snapshot/ directory inside the share (NetApp-style). SMB snapshots integrate with Previous Versions (right-click → Restore previous versions on Windows clients).
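For NFS, restoring a single file is just a copy out of the read-only `.snapshot/` tree; building the source path is mechanical (the mount point and file paths here are illustrative):

```python
from pathlib import PurePosixPath

def snapshot_path(mount_point: str, snapshot: str, relative: str) -> str:
    """Path of a file inside a NetApp-style read-only .snapshot directory."""
    return str(PurePosixPath(mount_point) / ".snapshot" / snapshot / relative)

print(snapshot_path("/mnt/shared", "pre-deploy-2026-05-11", "app/config.yml"))
# /mnt/shared/.snapshot/pre-deploy-2026-05-11/app/config.yml
```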
Backup¶
BaaS supports file-share backups directly — daily incremental, cross-region option. No need to mount-then-backup.
Access auditing¶
For SMB shares: audit policy logs every access. Stream to SIEM:
```bash
cd file audit enable --share acme-shared --to siem://acme-siem/files
```
NFS auditing is at the protocol layer (less granular); audit at the VM-side instead.
NFS mount times out¶
| Cause | Check |
|---|---|
| Security group blocks 2049/tcp (NFSv4.1 is TCP-only) | VPC SG on the client |
| Export rules don't include client CIDR | cd file share show --share <id> |
| Client DNS can't resolve share endpoint | Cloud Digit DNS in /etc/resolv.conf? |
| `nfs-common` package not installed | `apt install nfs-common` (Debian/Ubuntu) |
SMB: "The user name or password is incorrect"¶
Most common: machine-account credentials drifted from AD. Re-join the domain or refresh creds:
```bash
sudo cd file smb cred refresh --share acme-shared
```
Other causes:
- Kerberos clock skew > 5 min
- AD trust path broken (cross-forest)
- User locked out
Throughput floor at tier baseline¶
You provisioned bursting but the share won't burst:
- Burst credits exhausted — `file.burst_credits` metric near 0. Wait for re-accrual (idle time) or upgrade to `provisioned`.
- Client-side bottleneck — single-client NFS throughput caps around 1.2 GiB/s on most VM flavors. Spread load across more clients.
- Mount options suboptimal — verify `rsize=1048576,wsize=1048576`.
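Given the ~1.2 GiB/s single-client ceiling quoted above, sizing the client fleet for an aggregate throughput target is simple arithmetic (the cap is the figure from this page, not a guarantee):

```python
import math

def clients_needed(target_gibps: float, per_client_cap_gibps: float = 1.2) -> int:
    """Minimum number of clients to reach an aggregate throughput target."""
    return math.ceil(target_gibps / per_client_cap_gibps)

print(clients_needed(4.0))  # clients needed to saturate a 4 GiB/s provisioned share
```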
"Stale file handle"¶
The share was recreated or the inode changed. Remount:
```bash
sudo umount -f /mnt/shared
sudo mount -a
```
Persistent stale handles after a snapshot restore usually mean the application cached an open file descriptor — restart the app.
Files appear truncated on SMB¶
Windows clients with offline files (Sync Center) caching enabled can return stale views. Disable offline files for the mount, or use NFS for the workload.
Cross-AZ latency¶
Latency to a share in another AZ: 1–2 ms (vs <0.5 ms intra-AZ). Mount from VMs in the same AZ as the share when possible. The share's AZ is fixed at create time.
Audit log gaps¶
SMB audit drops events under heavy load (>50k ops/min). For workloads above that rate, send audit to a dedicated tier or use SIEM's high-throughput ingest.