A cloud-native Kubernetes operator that provides global service load balancing across geographically dispersed clusters using DNS.
K8GB is a Kubernetes operator that provides global service load balancing (GSLB) for applications running across multiple geographically distributed Kubernetes clusters. It solves the problem of directing user traffic to the closest or most available cluster instance, providing high availability and regional failover through a cloud-native, DNS-based approach.
Platform engineers, SREs, and DevOps teams managing multi-cluster Kubernetes deployments across different regions or clouds who need to ensure application resilience and efficient global traffic distribution.
Developers choose K8GB because it's a fully open-source, Kubernetes-native alternative to proprietary GSLB solutions. It eliminates vendor lock-in, integrates seamlessly with existing Kubernetes tooling and health checks, and requires no dedicated management cluster or specialized network hardware.
A cloud native Kubernetes Global Balancer
Uses the time-tested DNS protocol for global traffic routing, avoiding a central balancing appliance that would become a single point of failure.
Leverages the application's existing liveness and readiness probes to make routing decisions, so health-aware routing integrates seamlessly without extra configuration.
Configuration happens entirely through a Gslb custom resource, enabling declarative management and GitOps-friendly workflows.
Implements routing strategies such as round-robin and failover to meet specific high-availability needs with flexible traffic distribution.
Works on any conformant Kubernetes cluster, on-premises or across major clouds, ensuring flexibility without vendor lock-in.
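The points above come together in a single Gslb resource: an embedded Ingress-style spec names the service whose pod readiness drives routing, and a strategy block selects the traffic policy. A minimal sketch following k8gb's `k8gb.absa.oss/v1beta1` API (the hostname, namespace, service name, and geo tag are placeholders):

```yaml
apiVersion: k8gb.absa.oss/v1beta1
kind: Gslb
metadata:
  name: app-gslb
  namespace: demo
spec:
  ingress:                     # mirrors a standard Kubernetes Ingress spec
    ingressClassName: nginx
    rules:
      - host: app.cloud.example.com
        http:
          paths:
            - path: /
              pathType: Prefix
              backend:
                service:
                  name: app    # readiness of this service's pods drives DNS answers
                  port:
                    name: http
  strategy:
    type: failover             # route to the primary cluster while it is healthy
    primaryGeoTag: eu          # hypothetical geo tag of the primary cluster
    dnsTtlSeconds: 30          # TTL of the records k8gb publishes
```

Because the entire configuration lives in this one manifest, it can be committed to Git and applied to every participating cluster like any other Kubernetes resource.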
Relies on DNS TTL expiry to propagate updates, so traffic failover is slower than with layer-4 or layer-7 solutions and less suited to real-time responsiveness.
Requires integration with supported EdgeDNS providers like Infoblox or Route53, adding setup complexity and potential additional costs.
Officially tested only with specific ingress controllers (NGINX, Istio, AWS ALB), which may not cover all community or custom implementations.
Initial deployment involves configuring multiple clusters and DNS services, which can be non-trivial and time-consuming for new users.
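The TTL limitation above can be roughly quantified: worst-case failover time is the probe detection window plus the record's TTL (plus any client-side caching, ignored here). A back-of-envelope sketch with hypothetical probe settings:

```python
def worst_case_failover_seconds(probe_period_s: float,
                                failure_threshold: int,
                                dns_ttl_s: float) -> float:
    """Rough upper bound on DNS-based GSLB failover time:
    time for consecutive probe failures to mark the endpoint
    unhealthy, plus the TTL during which resolvers may still
    serve the stale record."""
    detection_window = probe_period_s * failure_threshold
    return detection_window + dns_ttl_s

# Hypothetical settings: 10 s probe period, 3 failures to trip, 30 s TTL
print(worst_case_failover_seconds(10, 3, 30))
```

Shortening the TTL tightens this bound at the cost of more DNS query load, which is the trade-off any DNS-based GSLB has to make.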