Kubernetes: Distributing Pods Evenly Across Cluster Nodes
Managing Pod distribution across a cluster is hard. The Pod affinity and anti-affinity features of Kubernetes allow some control over Pod placement, but they only cover part of the Pod distribution use cases. A common need is to distribute Pods evenly across the cluster for high availability and efficient cluster resource utilization. The PodTopologySpread scheduling plugin was designed to fill that gap, and it has been stable since Kubernetes v1.19.
In this article, I'll show you an example of using the topology spread constraints feature of Kubernetes to distribute a Pod workload evenly across the cluster nodes.
Distribute Pods Evenly Across The Cluster
The topology spread constraints rely on node labels to identify the topology domain(s) that each worker Node is in.
To distribute Pods as evenly as possible across all cluster worker nodes, we can use the well-known node label kubernetes.io/hostname as the topology key, which ensures that each worker node forms its own topology domain.
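You can confirm that every node carries this label. As a sketch (node names will differ per cluster), kubectl's -L flag prints a label's value as an extra column:

```shell
# Show each node with the value of its kubernetes.io/hostname label
kubectl get nodes -L kubernetes.io/hostname
```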
In the manifest below, we define a Deployment with 3 replicas that assigns the label app=technotes to each Pod, along with a topologySpreadConstraints entry that acts on Pods carrying that label.
spec.topologySpreadConstraints is defined as:

- `maxSkew: 1`: the number of matching Pods may differ by at most 1 between any two topology domains, i.e. distribute the Pods as evenly as possible
- `topologyKey: kubernetes.io/hostname`: use the hostname as the topology domain
- `whenUnsatisfiable: ScheduleAnyway`: always schedule Pods, even if the even distribution cannot be satisfied
- `labelSelector`: only act on Pods that match this selector
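If a hard guarantee is preferred over best-effort spreading, `whenUnsatisfiable` also accepts the value DoNotSchedule. A sketch of that stricter variant (same fields as below, only the action changes):

```yaml
topologySpreadConstraints:
- maxSkew: 1
  topologyKey: kubernetes.io/hostname
  whenUnsatisfiable: DoNotSchedule  # leave the Pod Pending rather than exceed maxSkew
  labelSelector:
    matchLabels:
      app: technotes
```

With DoNotSchedule, a replica that cannot be placed without exceeding maxSkew stays Pending until a suitable node becomes available.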
Finally, the Pods run a container image called technotes, you guessed it, this blog.
```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: technotes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: technotes
  namespace: technotes
spec:
  replicas: 3
  selector:
    matchLabels:
      app: technotes
  template:
    metadata:
      labels:
        app: technotes
    spec:
      topologySpreadConstraints:
      - maxSkew: 1
        topologyKey: kubernetes.io/hostname
        whenUnsatisfiable: ScheduleAnyway
        labelSelector:
          matchLabels:
            app: technotes
      containers:
      - name: technotes
        image: ghcr.io/blaat/technotes
```
Now, let’s apply the manifest:
```shell
$ kubectl apply -f technotes-deployment.yaml
namespace/technotes created
deployment.apps/technotes created
```
And verify that the Pods' placement is balanced across all worker nodes:
```shell
$ kubectl -n technotes get pods -o wide --sort-by=.spec.nodeName
```
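To check the balance at a glance, you can also count Pods per node. This is a sketch using standard kubectl output formatting and shell tools:

```shell
# Print only the node name of each Pod, then count occurrences per node
kubectl -n technotes get pods -o custom-columns=NODE:.spec.nodeName --no-headers \
  | sort | uniq -c
```

With 3 replicas, 3 worker nodes, and maxSkew: 1, each node should show exactly one Pod.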
The PodTopologySpread scheduling plugin gives Kubernetes administrators the power to achieve high availability of applications as well as efficient utilization of cluster resources.
Note that scaling down a Deployment does not take topology spread constraints into account and may leave the Pod distribution imbalanced. You can use the Descheduler to rebalance the Pods.
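As a sketch, the Descheduler can be configured to evict Pods that violate their topology spread constraints (assuming the v1alpha1 policy API; field names may differ across Descheduler versions):

```yaml
apiVersion: "descheduler/v1alpha1"
kind: DeschedulerPolicy
strategies:
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
```

Evicted Pods are recreated by the Deployment and rescheduled, restoring the even spread.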
References:
- https://medium.com/geekculture/kubernetes-distributing-pods-evenly-across-cluster-c6bdc9b49699
- https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/
- https://kubernetes.io/blog/2020/05/introducing-podtopologyspread/