dial tcp 10.96.0.1:443: connect: no route to host
Quote from moshe on 24/07/2022, 6:09 pm
Problem
The user gets the following error when checking the pod's logs with kubectl logs <POD NAME>:
failed to create client: error while trying to communicate with apiserver: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
Note: the error appears in a Prometheus stack deployment. Example of the error:
kube-prometheus-stack-kube-state-metrics-64f75d684f-vhzkq 0/1 CrashLoopBackOff 808 (37s ago) 2d20h
Solution
This looks like a network configuration issue; you should check your network provider (CNI) settings.
First, confirm the API server's ClusterIP with the following command and make sure the kubernetes service is present and healthy:
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 10.101.35.25 <none> 80/TCP 3d
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d
nginx-ingress-nginx-ingress LoadBalancer 10.99.49.98 <pending> 80:31842/TCP,443:30072/TCP 7d3h
prometheus-alertmanager ClusterIP 10.101.180.238 <none> 80/TCP 5m33s
prometheus-kube-state-metrics ClusterIP 10.103.29.143 <none> 8080/TCP 5m33s
prometheus-node-exporter ClusterIP 10.103.79.114 <none> 9100/TCP 5m33s
prometheus-pushgateway ClusterIP 10.97.194.194 <none> 9091/TCP 5m33s
prometheus-server ClusterIP 10.103.205.84 <none> 80/TCP 5m33s
Next, find the node the failing pod is scheduled on, and verify that this node can reach the control-plane node where the API server is deployed:
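Before digging into the CNI, it can help to confirm whether the API server's ClusterIP is reachable at all from the node hosting the failing pod. A minimal sketch using bash's built-in /dev/tcp redirection (the IP and port below are the kubernetes service values from the output above; adjust them for your cluster):

```shell
# Probe a host:port from the affected node without needing curl or nc installed.
# /dev/tcp/HOST/PORT is a bash feature; timeout prevents hanging on "no route to host".
check_apiserver() {
  local host="$1" port="$2"
  if timeout 3 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# 10.96.0.1:443 is the API server ClusterIP from `kubectl get svc` above.
check_apiserver 10.96.0.1 443
```

If this prints "unreachable" on one node but "reachable" on another, the problem is node-local routing (often stale iptables rules, a broken kube-proxy, or a misconfigured CNI) rather than the API server itself.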
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
grafana-757556d876-l45x6 1/1 Running 0 3d 192.168.35.149 k8snode-ap2 <none> <none>
hello-local-hostpath-pod 1/1 Running 0 11d 192.168.116.67 k8snode-ap3 <none> <none>
nginx-ingress-nginx-ingress-747c9cb6db-94ksq 0/1 CrashLoopBackOff 2011 (75s ago) 7d3h 192.168.116.75 k8snode-ap3 <none> <none>
prometheus-alertmanager-95c6bf458-vz4gx 2/2 Running 0 10m 192.168.35.159 k8snode-ap2 <none> <none>
prometheus-kube-state-metrics-774f8c7564-qzmqx 1/1 Running 0 10m 192.168.133.155 k8snode-ap1 <none> <none>
prometheus-node-exporter-4j7dw 1/1 Running 0 10m 10.2.43.20 k8snode-ap1 <none> <none>
prometheus-node-exporter-f2d7k 1/1 Running 0 10m 10.2.43.22 k8snode-ap3 <none> <none>
prometheus-node-exporter-m4c86 1/1 Running 0 10m 10.2.43.21 k8snode2 <none> <none>
prometheus-pushgateway-5bf8bb44f8-m54dm 1/1 Running 0 10m 192.168.133.154 k8snode1 <none> <none>
prometheus-server-5c7c5df8db-rwjqp 2/2 Running 0 10m 192.168.35.158 k8snode2 <none> <none>
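To quickly map failing pods to the nodes they run on, you can filter the wide listing with awk. The sketch below uses a simplified sample of the output above (the RESTARTS column is reduced to a single field so the column positions are stable); on a live cluster you would pipe `kubectl get pods -o wide --no-headers` instead of the sample variable:

```shell
# Simplified sample of `kubectl get pods -o wide` output from this post.
PODS='NAME READY STATUS RESTARTS AGE IP NODE
prometheus-kube-state-metrics-774f8c7564-qzmqx 1/1 Running 0 10m 192.168.133.155 k8snode-ap1
nginx-ingress-nginx-ingress-747c9cb6db-94ksq 0/1 CrashLoopBackOff 2011 7d3h 192.168.116.75 k8snode-ap3'

# Print "pod -> node" for every pod that is not Running, so you know which
# node's connectivity to the control plane to inspect first.
echo "$PODS" | awk 'NR > 1 && $3 != "Running" { print $1, "->", $7 }'
```

Here this points at k8snode-ap3, so that is the node whose route to the API server (and CNI health) should be checked first.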