Object Storage (S3-compatible)¶
Service ownership
Owner: storage-platform (storage-pm@clouddigit.ai) — Status: GA — Last audited: 2026-05-11
S3-compatible object storage in three regions, with 11 nines durability and BDIX-direct egress for domestic clients.
What it is¶
Object storage with a strict subset of the AWS S3 API. Bring your existing S3 client (boto3, AWS CLI, MinIO mc, rclone, Veeam, CommVault — all work) and point it at the Cloud Digit endpoint.
Endpoints¶
| Region | Endpoint (path-style) | Endpoint (virtual-host) |
|---|---|---|
| bd-dha-1 | https://s3.bd-dha-1.clouddigit.ai | https://<bucket>.s3.bd-dha-1.clouddigit.ai |
| bd-ctg-1 | https://s3.bd-ctg-1.clouddigit.ai | https://<bucket>.s3.bd-ctg-1.clouddigit.ai |
| bd-syl-1 | https://s3.bd-syl-1.clouddigit.ai | https://<bucket>.s3.bd-syl-1.clouddigit.ai |
API surface¶
Supported (the standard 80% of real-world S3 traffic):
- Bucket: create, list, delete, lifecycle, versioning, policy, ACL, CORS
- Object: PUT, GET, HEAD, DELETE, multipart upload, copy, list (v1 & v2)
- Object Lock (GOVERNANCE & COMPLIANCE retention) — see Object Lock
- Presigned URLs (V4 sig)
- Server-side encryption (SSE-S3, SSE-C; SSE-KMS via OpenBao on roadmap)
Not yet supported: S3 Batch Operations, S3 Select, Requester Pays, Intelligent-Tiering classes (use Archive for cold).
Durability and availability¶
- Durability: 99.999999999% (11 nines), erasure-coded across 3 fault zones in-region
- Availability SLA: 99.95% monthly per region
- Read-after-write consistency for new objects; strong consistency for overwrites and deletes (S3-equivalent)
Cross-region replication¶
Standard CRR (Cross-Region Replication) — write to bucket A in bd-dha-1, replicate to bucket B in bd-ctg-1. Requires versioning on both. Useful for DR.
Performance¶
- Single-object PUT: up to 5 GiB; for larger, use multipart (5 MiB part minimum)
- Per-bucket request rate: thousands of req/s sustained, tens of thousands burst
- Per-object throughput: limited by client and network — typical at-edge throughput 1–10 Gbps
Pricing¶
- Storage: per GiB-month, BDT
- Requests: per 1,000 PUT/POST/COPY; per 10,000 GET/HEAD
- Data transfer: free over BDIX; international egress metered
See Pricing.
Quickstart¶
```bash
# AWS CLI works as-is
aws --endpoint-url https://s3.bd-dha-1.clouddigit.ai s3 mb s3://my-bucket
aws --endpoint-url https://s3.bd-dha-1.clouddigit.ai s3 cp ./file.bin s3://my-bucket/
```
```python
# boto3
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.bd-dha-1.clouddigit.ai",
    aws_access_key_id="...",
    aws_secret_access_key="...",
)
s3.put_object(Bucket="my-bucket", Key="file.bin", Body=b"hello")
```
Related¶
- Object Storage (Archive) — cold tier
- Object Lock — WORM / immutability
- Backup-as-a-Service — backed by Object Storage
Operate this service¶
Bucket governance, access policy, encryption, lifecycle, and chargeback.
Bucket naming and project layout¶
- DNS-safe lowercase, 3–63 chars
- One bucket per logical workload (`acme-prod-logs`, `acme-stage-images`)
- Project owns buckets — no cross-project sharing without explicit policy
IAM¶
| Role | Can do |
|---|---|
| `s3.viewer` | List buckets / objects |
| `s3.reader` | Get object |
| `s3.writer` | Put / delete object |
| `s3.builder` | Create / configure buckets, lifecycle policies |
| `s3.admin` | Above + delete buckets, modify bucket policies, manage keys |
For machine-to-machine: issue S3-API access keys (project-scoped, with TTL ≤ 90 days). Rotate via API tokens & service accounts.
Bucket policies¶
JSON bucket policies in IAM-compatible syntax. Common pattern — read-only from a CDN edge:
```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowCdnRead",
    "Effect": "Allow",
    "Principal": {"Service": "cdn.clouddigit.ai"},
    "Action": ["s3:GetObject"],
    "Resource": "arn:cd:s3:::acme-prod-static/*"
  }]
}
```
Encryption¶
Default: SSE-S3 (platform-managed). For regulated workloads:
- SSE-KMS — bucket-scoped CMK from Key Manager
- SSE-C — per-request customer-supplied key (no key on platform)
Enable at bucket creation; switching after the fact requires re-uploading existing objects.
Lifecycle policies¶
Tier or expire objects automatically. Typical:
```yaml
rules:
  - id: logs-90-day
    prefix: logs/
    transitions:
      - days: 30
        storage_class: archive
    expiration:
      days: 365
```
Apply via `cd s3 lifecycle put`. Lifecycle is the mechanism for keeping bills predictable on log-heavy buckets.
Tagging and chargeback¶
Bucket tags propagate to Cost Explorer. Require a `cost-center` tag on every bucket via project policy.
Compliance¶
S3 in Cloud Digit is single-region by default — objects never leave Bangladesh. Cross-region replication (intra-BD) is opt-in for DR.
Metrics¶
| Metric | Healthy | Alert |
|---|---|---|
| `s3.request_rate` | within bucket limits | 4xx error rate > 1% |
| `s3.error_rate.5xx` | < 0.01% | > 0.1% for 5 min |
| `s3.bytes_stored` | grows linearly | sudden 10× jump (audit) |
| `s3.lifecycle_transitions_24h` | matches lifecycle rules | dropped to 0 — policy may be broken |
Request-rate ceilings: 5,500 PUT/POST/DELETE + 5,500 GET per partitioned prefix. Use prefix sharding for high-throughput buckets.
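Prefix sharding spreads keys across partitioned prefixes by prepending a short, deterministic hash; a minimal sketch (the 4-character width matches the ceiling note above, the hash choice is arbitrary):

```python
import hashlib


def sharded_key(key: str, width: int = 4) -> str:
    """Prepend a deterministic hex shard, e.g. 'logs/2026-05-11.gz'
    becomes '<4 hex chars>/logs/2026-05-11.gz', so hot key ranges
    spread across partitioned prefixes."""
    shard = hashlib.md5(key.encode()).hexdigest()[:width]
    return f"{shard}/{key}"
```

The shard is derived from the key itself, so readers can recompute the full object name without a lookup table.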
Lifecycle audit¶
Lifecycle silently doing nothing is the worst bug. Verify monthly:
```bash
cd s3 lifecycle audit --bucket acme-prod-logs
# Reports: rules configured, last execution, objects affected
```
Cross-region replication¶
Intra-BD DR replication is async:
```bash
cd s3 replication create \
  --source bd-dha-1/acme-prod-data \
  --target bd-ctg-1/acme-prod-data-dr
```
Replication lag normally < 60 s. Spikes during heavy upload are normal.
Bucket versioning¶
Enable for any bucket whose contents are mutable:
```bash
cd s3 bucket version enable --bucket acme-prod-config
```
Each PUT creates a new version; deletes leave a delete-marker. Pair with a lifecycle rule to expire old versions (cost control).
Pre-signed URLs¶
For browser uploads / time-limited downloads:
```bash
cd s3 url presign \
  --bucket acme-prod-uploads \
  --key user-123/avatar.png \
  --expires 1h \
  --method PUT
```
Never expose long-lived credentials to a browser.
Performance¶
- Use multipart upload for objects > 100 MiB (parallel parts, recoverable on failure)
- Multipart minimum part size: 5 MiB; maximum parts: 10,000
- Use `s3.transfer-acceleration` for cross-region uploads (premium tier)
AccessDenied despite correct credentials¶
Most common causes, in order:
- Bucket policy denies the principal — check `cd s3 policy get --bucket <name>`
- Object ACL specifically denies (rare on Cloud Digit, common on migrated buckets)
- CMK key policy denies (SSE-KMS buckets) — principal needs `kms:Decrypt`
- Request signature invalid — clock skew > 15 min; sync NTP
- Cross-project request without explicit grant
Use `cd s3 simulate --principal user/jane --action s3:GetObject --resource arn:cd:s3:::bucket/key` to test policies without making the request.
SlowDown / 503¶
Bucket exceeded request rate. Mitigations:
- Prefix sharding — randomize the first 4 chars of the key
- Implement exponential backoff in the client
- For high-throughput buckets, request a request-rate quota bump
Multipart upload stuck¶
Abandoned multipart uploads stay billed until cleaned:
```bash
cd s3 multipart list --bucket <name> --older-than 7d
cd s3 multipart abort --bucket <name> --upload-id <id>
```
Best practice: lifecycle rule auto-aborts uploads older than 7 days.
Lifecycle not transitioning¶
| Cause | Fix |
|---|---|
| Object created after the rule's last execution | Wait for next daily scheduler tick |
| Object's storage class doesn't match the transition | Audit the rule |
| Rule disabled | Re-enable via cd s3 lifecycle put |
| Versioning enabled — rule applies to current version only by default | Add NoncurrentVersionTransition |
Cross-region replica lag¶
Replication lag > 5 min sustained:
- Check source-side request rate (replication competes for throughput)
- Check target bucket's `s3.bytes_stored` growth — should mirror source
- Open a Support ticket for sustained lag
Versioning explosion¶
Bucket size doubles overnight. Cause: a client that does PUT every minute on the same key with versioning enabled.
- Identify the culprit prefix: `cd s3 ls --bucket <name> --recursive --include-versions | head`
- Add lifecycle: `NoncurrentVersionExpiration: 30d`
- Fix the client (most cases: it's writing instead of patching)
Pre-signed URL invalid¶
| Symptom | Cause |
|---|---|
| `SignatureDoesNotMatch` | Client modified the request |
| `AccessDenied` | URL expired |
| `InvalidRequest` | Headers changed between signing and use |
Sign with the exact method, content-type, and headers the client will send.