This lab demonstrates how Kubernetes uses taints, tolerations, node affinity, and pod anti-affinity to control pod scheduling.
Prerequisites:
- A running Kubernetes cluster (e.g., Minikube with two nodes: master and worker).
- kubectl installed and configured.

Setup:
- Start Minikube:
minikube start --driver=docker --cni=cilium --kubernetes-version=stable --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=AlwaysAllow --extra-config=kubelet.cgroup-driver=systemd --extra-config=kubelet.read-only-port=10255 --insecure-registry="registry.k8s.io"
- Add a worker node:
minikube node add
- Verify nodes:
kubectl get nodes
You should see:
NAME           STATUS   ROLES                  AGE   VERSION
minikube       Ready    control-plane,master   Xs    v1.XX.X
minikube-m02   Ready    <none>                 Xs    v1.XX.X
To prevent non-system pods from being scheduled on the master node:
- Add a taint to the master node:
kubectl taint nodes minikube node-role.kubernetes.io/master:NoSchedule
- Verify the taint:
kubectl describe node minikube | grep Taint
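To check the taints on every node at once, a JSONPath query can be used (a minimal sketch; the taints field is printed as raw JSON, and is empty for untainted nodes):

```shell
# Print each node's name followed by its taints, one node per line.
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'
```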
In the rest of this lab you will:
- Understand and configure taints and tolerations.
- Use node affinity to schedule pods on specific nodes.
- Use pod anti-affinity to distribute pods across nodes.
Taint the worker node to prevent pods from being scheduled unless they tolerate the taint:
kubectl taint nodes minikube-m02 key1=value1:NoSchedule
Verify the taint:
kubectl describe node minikube-m02 | grep Taint
Create no-toleration-pod.yaml with a pod that has no tolerations:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: no-toleration-pod
spec:
  containers:
  - name: nginx
    image: nginx
```
Apply the pod:
kubectl apply -f no-toleration-pod.yaml
The pod will be Pending because it cannot be scheduled on the tainted worker node or the master node (master is also tainted).
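To see why, inspect the pod; the Events section should contain a FailedScheduling event whose message (wording varies by Kubernetes version) reports that both nodes have untolerated taints:

```shell
kubectl get pod no-toleration-pod        # STATUS column shows Pending
kubectl describe pod no-toleration-pod   # see the Events section at the bottom
```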
Create toleration-pod.yaml with a pod whose toleration matches the worker node's taint (key1=value1:NoSchedule):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: toleration-pod
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
```
Apply the pod:
kubectl apply -f toleration-pod.yaml
The pod should now run on the tainted worker node.
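To confirm the placement, the wide output shows which node each pod landed on:

```shell
kubectl get pod toleration-pod -o wide
# The NODE column should show minikube-m02.
```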
Label the worker node:
kubectl label nodes minikube-m02 disktype=ssd
Verify the label:
kubectl get nodes --show-labels
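Since --show-labels prints every label on every node, a label selector gives a more targeted check:

```shell
# Only nodes carrying disktype=ssd are listed; this should be minikube-m02 alone.
kubectl get nodes -l disktype=ssd
```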
Create node-affinity-pod.yaml. The node affinity rule restricts the pod to nodes labeled disktype=ssd; because minikube-m02 still carries the key1=value1:NoSchedule taint, the pod also needs a matching toleration, or it would stay Pending:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-pod
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd
  containers:
  - name: nginx
    image: nginx
```
Apply the pod:
kubectl apply -f node-affinity-pod.yaml
The pod should be scheduled on the worker node.
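As before, this can be verified with the wide output:

```shell
kubectl get pod node-affinity-pod -o wide
# The NODE column should show minikube-m02, the only node labeled disktype=ssd.
```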
Create anti-affinity-deployment.yaml. The required pod anti-affinity rule forbids two pods with the app=anti-affinity label from sharing a node (topologyKey kubernetes.io/hostname). Because both nodes are still tainted at this point, the pod template also tolerates both taints so that one replica can land on each node:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: anti-affinity-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: anti-affinity
  template:
    metadata:
      labels:
        app: anti-affinity
    spec:
      tolerations:
      - key: "key1"
        operator: "Equal"
        value: "value1"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: anti-affinity
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: nginx
        image: nginx
```
Apply the deployment:
kubectl apply -f anti-affinity-deployment.yaml
Check where the pods are scheduled:
kubectl get pods -o wide
The pods should be distributed across the nodes.
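Optionally, scaling beyond the node count shows that the anti-affinity rule is hard (required, not preferred): with only two nodes, a third replica has no node left that satisfies the constraint and stays Pending:

```shell
kubectl scale deployment anti-affinity-app --replicas=3
kubectl get pods -o wide
# Two pods run (one per node); the third remains Pending.
```

The deployment is deleted during cleanup below, so no scale-down is needed.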
Remove all resources created during the lab:
kubectl delete pod no-toleration-pod toleration-pod node-affinity-pod
kubectl delete deployment anti-affinity-app
kubectl taint nodes minikube-m02 key1=value1:NoSchedule-
kubectl label nodes minikube-m02 disktype-
kubectl taint nodes minikube node-role.kubernetes.io/master:NoSchedule-
Summary:
- Taints and Tolerations: Prevent pods from being scheduled on nodes unless they tolerate the taints.
- Node Affinity: Schedule pods on specific nodes based on labels.
- Pod Anti-Affinity: Distribute pods across nodes to avoid co-locating them.
This lab demonstrates how Kubernetes uses scheduling controls to manage workload placement.