# AI / GPU
Five services covering training, inference, notebooks, and vector search — with sovereign-resident GPU capacity.
- **GPU VMs:** NVIDIA-class GPU VMs, hourly billing, single-GPU and multi-GPU.
- **GPU Bare Metal:** single-tenant GPU servers with full PCIe topology; NVLink where available.
- **AI Notebook (Jupyter / VS Code):** managed JupyterLab and VS Code for Web with one-click GPU attach.
- **Inference Endpoints:** hosted model endpoints, autoscaled, with an OpenAI-compatible API surface.
- **Vector Database:** a standalone managed vector store (PGVector, Milvus, or Weaviate).
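Because the inference service exposes an OpenAI-compatible API surface, existing OpenAI client code should work against it unchanged. A minimal sketch of building a chat-completion payload — the base URL and model ID below are illustrative assumptions, not real deployment values:

```python
import json

# Assumptions: substitute your own endpoint URL and deployed model ID.
BASE_URL = "https://inference.example.cloud/v1"  # hypothetical endpoint base URL
MODEL = "llama-3-8b-instruct"                    # hypothetical deployed model ID

def build_chat_request(prompt: str, model: str = MODEL) -> dict:
    """Build an OpenAI-style chat-completion payload.

    The payload would be POSTed to {BASE_URL}/chat/completions.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }

payload = build_chat_request("Summarise our data-residency policy.")
print(json.dumps(payload, indent=2))
```

In practice you could point the official `openai` Python client's `base_url` at the endpoint rather than issuing raw HTTP requests; the request and response shapes stay the same.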
## Sovereign AI in one paragraph
Models, datasets, prompts, and inference traffic stay inside Bangladesh. Training data is never shipped to offshore regions. For regulated financial institutions and government workloads where data-residency rules prohibit calling OpenAI / Anthropic / Gemini directly, Cloud Digit's LLM-as-a-Service is the on-shore alternative.
## Pick a service
| You want to… | Pick |
|---|---|
| Fine-tune a 7B–70B model | GPU VMs or GPU Bare Metal |
| Serve a model behind an API | Inference Endpoints |
| Build a RAG pipeline | Vector Database + Inference Endpoints |
| Hand a notebook to a data scientist | AI Notebook (Jupyter) |
| Train at scale, with NVLink | GPU Bare Metal |
| Iterate cheaply on a small model | GPU VMs (single-GPU) |
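The "Build a RAG pipeline" row combines two services: the Vector Database holds document embeddings, and an Inference Endpoint generates answers from the retrieved context. A minimal sketch of the retrieval step using cosine similarity over an in-memory list — in production the rows would live in PGVector, Milvus, or Weaviate and the embeddings would come from an embedding model; the chunks and vectors here are toy values:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "vector store": (chunk_text, embedding) pairs. In a real pipeline these
# rows live in the managed vector database, keyed by document chunk.
store = [
    ("GPU VMs bill hourly.",               [0.9, 0.1, 0.0]),
    ("Weights stay in customer buckets.",  [0.1, 0.8, 0.3]),
    ("NVLink is available on bare metal.", [0.0, 0.2, 0.9]),
]

def retrieve(query_vec, k=2):
    """Return the k chunks most similar to the query embedding."""
    ranked = sorted(store, key=lambda row: cosine(query_vec, row[1]), reverse=True)
    return [text for text, _ in ranked[:k]]

# The retrieved chunks would then be prepended to the prompt sent to the
# inference endpoint, keeping the whole loop inside the sovereign region.
context = retrieve([0.85, 0.15, 0.05])
```

The design point of the table row is that both halves of the loop — retrieval and generation — run on-shore, so no document chunk or prompt leaves the region.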
## Compliance posture for AI workloads
- Inference logs default to redacted-prompt-only retention; full-prompt logging is opt-in per endpoint
- Model artifacts (weights) are stored in customer-controlled object buckets; we do not co-mingle weights across tenants
- Training data residency: Bangladesh-only, with optional Object Lock for evidence retention
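The optional Object Lock mentioned above follows the S3 Object Lock model, assuming the object buckets expose an S3-compatible API. A sketch of a default-retention configuration — the COMPLIANCE mode and 365-day window are illustrative choices, not mandated values:

```python
# Hypothetical retention policy -- adjust Mode and Days to your evidence
# requirements. This dict matches the S3 Object Lock configuration shape.
object_lock_config = {
    "ObjectLockEnabled": "Enabled",
    "Rule": {
        "DefaultRetention": {
            "Mode": "COMPLIANCE",  # COMPLIANCE: retention cannot be shortened by any user
            "Days": 365,           # illustrative retention window
        }
    },
}

# With boto3 against an S3-compatible endpoint, this would be applied as:
# s3.put_object_lock_configuration(Bucket="evidence",
#                                  ObjectLockConfiguration=object_lock_config)
```

COMPLIANCE mode (as opposed to GOVERNANCE mode) is the usual choice for evidence retention, since it prevents even privileged accounts from deleting objects before the window expires.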