You don't run three clusters.
Three clusters run you.
Upgrades, policies, on-call, drift — every cluster multiplies the burden. With 5 clusters, you're already burning 45% of platform capacity on duplication, not product.
Where your Kubernetes budget actually goes
Your cloud bill shows compute and networking. The real cost lives in the engineering time everyone accepts as status quo.
- Compute instances
- Storage volumes
- Network transfer
- Load balancers
- Managed services & add-ons
"Config drift between our 5 clusters is the #1 cause of outages. Every damn sprint we fix something that broke because of drift."
r/kubernetes · 200+ upvotes
One cluster. All your clouds.
One engineering cost.
Same team. Same clouds. Different math.
| Metric | Today (5 clusters) | With emma (1 cluster) |
|---|---|---|
| Upgrades, policies, RBAC, on-call, GitOps | all × 5 | all × 1 |
| FTEs on cluster ops | 4.4 | ~1 |
| FTEs for product work | 3.6 | ~7 |
Upstream 1.29+. No fork. Standard API server, scheduler, etcd. Your existing manifests, Helm charts, and operators work unchanged.
kubectl get nodes → nodes across AWS, GCP, Azure
Cilium CNI (eBPF) on top of emma's multi-cloud fabric (BGP/VXLAN, Cisco Catalyst 8000v). One network policy layer across all clouds. No VPN stitching.
kubectl get cnp → same policies, all clouds
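A minimal sketch of what a single cross-cloud policy can look like. The namespace and app labels are illustrative, not from emma's docs; the CRD itself is standard Cilium.

```yaml
# Illustrative CiliumNetworkPolicy: only the frontend may reach the API,
# enforced identically wherever the pods land — AWS, GCP, or Azure.
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: api-allow-frontend
  namespace: shop          # example namespace
spec:
  endpointSelector:
    matchLabels:
      app: api
  ingress:
    - fromEndpoints:
        - matchLabels:
            app: frontend
```

One manifest, one enforcement layer — no per-cloud firewall rules to keep in sync.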
Unified emma CSI driver across all clouds. Abstracts EBS, PD, Managed Disks behind one interface. Data stays in-region. No cross-cloud replication unless you configure it.
kubectl get sc → emma-storage (works across AWS, GCP, Azure)
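What that looks like from the workload side — a sketch of a claim against the unified class (the class name comes from the snippet above; everything else is a plain Kubernetes PVC):

```yaml
# Illustrative PersistentVolumeClaim. The claim is identical regardless of
# which cloud the pod is scheduled on; the CSI driver provisions the matching
# native volume (EBS, PD, or Managed Disk) in that node's region.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: emma-storage
  resources:
    requests:
      storage: 20Gi
```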
Hosted in Luxembourg (EU). emma manages etcd, API server, scheduler, monitoring. You get kubeconfig with full RBAC. Control plane outage ≠ workload outage.
Hosted in Luxembourg · dedicated per tenant
Official Terraform provider on registry. Cluster and node groups as HCL. CI/CD friendly. emma operates what Terraform creates.
resource "emma_kubernetes" "prod" { ... }
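A sketch of what the HCL can look like. The resource type is from the line above; the argument names here are illustrative assumptions, not the provider's real schema — check the registry docs for actual fields.

```hcl
# Sketch only — field names are hypothetical.
resource "emma_kubernetes" "prod" {
  name = "prod"

  # Hypothetical per-cloud node group block.
  node_group {
    name       = "aws-pool"
    node_count = 3
  }
}
```

Plan, review, apply in CI like any other Terraform; emma operates what the apply creates.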
Per-provider node pools in the same cluster. Isolated failure domains. Move workloads between providers with node affinity, not re-architecture.
nodeSelector: topology.emma.ms/cloud: aws
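In a full manifest, that selector sits in the pod spec like any other nodeSelector. A sketch, using the topology label from the line above (the app name and image are placeholders):

```yaml
# Pin a Deployment to the AWS node pool. Moving it to GCP is a one-line
# change to the selector value — not a re-architecture.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      nodeSelector:
        topology.emma.ms/cloud: aws   # or gcp, azure
      containers:
        - name: checkout
          image: registry.example.com/checkout:1.4.2   # placeholder image
```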
What changes for your team
The #1 concern: "What do my people need to change?" Here's the honest answer.
- kubectl, Helm, ArgoCD — same CLI, same charts, same workflows
- CI/CD pipelines — change one cluster endpoint, everything else stays
- Namespaces & RBAC — your isolation model carries over
- Monitoring — Prometheus, Grafana, PagerDuty — same endpoints
- Manifests — deploy as-is, no refactoring
- Cluster endpoint — one kubeconfig instead of N
- Infra abstraction — emma manages control plane, you manage workloads
- On-call scope — one cluster to monitor, not five
From 7 clusters to 1.
3.2 FTEs reclaimed.
Your data stays where you put it.
What your security team needs before they'll approve.
Hosted in Luxembourg, EU. emma operates the control plane — you retain full RBAC and audit log access.
Workloads and data stay in the regions you choose. No cross-border replication unless you configure it. Node pools pinned to specific cloud regions.
Your cluster runs across clouds — but it's yours alone. Dedicated control plane, no shared etcd, no other tenants. Network isolation via Cilium eBPF policies.
- Encryption in transit — TLS 1.3 for all API and inter-node traffic
- Encryption at rest — per-provider native (EBS encryption, GCP CMEK, Azure SSE)
- Access model — customer-controlled RBAC. emma engineers: control plane only, no workload access without explicit grant
- Audit logs — full Kubernetes audit log stream, exportable to your SIEM
What you're thinking right now
What happens to my platform team?
How long does migration take?
What's the risk?
How much does it cost?
Do my engineers need to learn new tools?
Data residency?
When is this NOT a fit?
One cluster. All your clouds.
Your team ships product.
15 minutes. We'll review your cluster architecture and show where consolidation saves engineering time.
Your old clusters stay live until you're ready.
Part of every new engineer's time goes to cluster overhead. The problem compounds with each cloud you add.
Your engineers work on product. One upgrade path. One on-call rotation. One compliance surface.
"We thought we needed 2 more hires. Turned out we needed fewer clusters." — Head of Platform