# kubectl get pods -A
NAMESPACE      NAME                                  READY   STATUS    RESTARTS       AGE
default        nginx-deployment-6fd7c9fcc6-rcxpg     1/1     Running   0              107m
kube-flannel   kube-flannel-ds-4jrsz                 1/1     Running   1              21h
kube-flannel   kube-flannel-ds-snbnc                 1/1     Running   3 (112m ago)   21h
kube-system    coredns-5dd5756b68-krk7z              1/1     Running   1              19h
kube-system    coredns-5dd5756b68-qlfvw              1/1     Running   1              19h
kube-system    etcd-k8s-master                       1/1     Running   20             21h
kube-system    kube-apiserver-k8s-master             1/1     Running   5 (112m ago)   18h
kube-system    kube-controller-manager-k8s-master    1/1     Running   70 (15m ago)   19h
kube-system    kube-proxy-gjft2                      1/1     Running   1              21h
kube-system    kube-proxy-lswx4                      1/1     Running   1              21h
kube-system    kube-scheduler-k8s-master             1/1     Running   70 (16m ago)   21h
kube-system    metrics-server-98bc7f888-jh7m8        1/1     Running   0              52m
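The two pods with ~70 restarts are the ones I'm digging into. Their last exit reason and the logs of the crashed (previous) container can be pulled like this, with the pod names taken from the listing above:

# Why did the last container instance exit (exit code, OOMKilled, Error, ...)?
kubectl -n kube-system describe pod kube-scheduler-k8s-master | grep -A 5 'Last State'
kubectl -n kube-system describe pod kube-controller-manager-k8s-master | grep -A 5 'Last State'

# Logs from the container instance that crashed, not the currently running one
kubectl -n kube-system logs kube-scheduler-k8s-master --previous
kubectl -n kube-system logs kube-controller-manager-k8s-master --previous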
Failed to update lock: Put "https://<master-ip>:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
error retrieving resource lock kube-system/kube-scheduler: Get "https://<master-ip>:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": context deadline exceeded
failed to renew lease kube-system/kube-scheduler: timed out waiting for the condition
Failed to release lock: Put "https://10.0.91.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/kube-scheduler?timeout=5s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
"Leaderelection lost"
kube-controller-manager-k8s-master and kube-scheduler-k8s-master both keep restarting, roughly once every 30 minutes. When I check the controller-manager logs, I see the errors quoted above. When I launch my own application pod, it also keeps restarting because of this. How can I debug and fix the restarting of the scheduler and controller-manager?
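All of those errors are the client timing out while renewing its leader-election lease against the API server on <master-ip>:6443, so my next step is to confirm that the API server and etcd themselves are healthy and responsive. These are the checks I can run from the master; the etcd certificate paths below are the kubeadm defaults and may differ on other setups:

# Does the API server report itself healthy?
kubectl get --raw='/readyz?verbose'

# Is the scheduler's lease object reachable, and when was it last renewed?
kubectl -n kube-system get lease kube-scheduler -o yaml

# etcd health and latency, using the kubeadm-generated certificates (assumed paths)
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  endpoint status -w table
# (if etcdctl is not installed on the host, the same command can be run inside
#  the etcd pod: kubectl -n kube-system exec etcd-k8s-master -- etcdctl ...)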
I have already disabled and stopped the firewall, and checked that the certificates have valid expiration dates and that the node's timezone/system clock is consistent with the certificate validity period. Even then the pods keep restarting.
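Lease-renewal timeouts like these can also just mean the control-plane node is starved for CPU or that etcd's disk is too slow, so the remaining checks I can run are around node load and the kubelet's view of the restarts. kubectl top relies on the metrics-server shown in the listing above; iostat assumes the sysstat package is installed on the master:

# Resource usage on the nodes and in kube-system
kubectl top nodes
kubectl top pods -n kube-system

# What the kubelet logged around the restarts on the master
journalctl -u kubelet --since "1 hour ago" | grep -iE 'lease|timeout|scheduler|controller-manager'

# CPU and disk latency on the master; etcd is very sensitive to slow disk writes
top -b -n 1 | head -n 20
iostat -x 5 3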