
Failed to list *v1.Pod & v1.service - getsockopt: connection refused err: failed to get node info: node "server1" not found #60884

Closed
SamSinging opened this issue Mar 7, 2018 · 29 comments
Labels
sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.

Comments

@SamSinging

Unable to proceed with kubeadm init due to the log below.
This is just for the initial setup. I've tried:

  • Removing the Environment="KUBELET_NETWORK_ARGS=--network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin" line from the 10-kubelet.conf file
  • Copying /etc/kubernetes/admin.conf to $HOME/.kube/config and restarting the kubelet with a daemon-reload (commands sketched below)
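
For reference, a minimal sketch of that second workaround, assuming the standard kubeadm file locations:

# copy the kubeadm-generated admin kubeconfig so kubectl can reach the apiserver
mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# reload systemd unit files and restart the kubelet after editing its drop-in
sudo systemctl daemon-reload
sudo systemctl restart kubelet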

kubectl apply -f doesn't work either; it fails with the error below:
The connection to the server 107.105.136.28:6443 was refused - did you specify the right host or port?

What happened:
Please see the log below from journalctl -xeu kubelet:

Mar 07 20:42:25 server1 kubelet[14619]: E0307 20:42:25.908557 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://107.105.136.28:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:26 server1 kubelet[14619]: E0307 20:42:26.718263 14619 eviction_manager.go:238] eviction manager: unexpected err: failed to get node info: node "server1" not found
Mar 07 20:42:26 server1 kubelet[14619]: E0307 20:42:26.906939 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://107.105.136.28:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:26 server1 kubelet[14619]: E0307 20:42:26.907933 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://107.105.136.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:26 server1 kubelet[14619]: E0307 20:42:26.908994 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://107.105.136.28:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:27 server1 kubelet[14619]: E0307 20:42:27.392766 14619 remote_runtime.go:92] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded w
Mar 07 20:42:27 server1 kubelet[14619]: E0307 20:42:27.392788 14619 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "kube-apiserver-server1_kube-system(7c74aa0f4b9044a62ba3fc2b222b6a49)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v2/: net/http: request c
Mar 07 20:42:27 server1 kubelet[14619]: E0307 20:42:27.392795 14619 kuberuntime_manager.go:647] createPodSandbox for pod "kube-apiserver-server1_kube-system(7c74aa0f4b9044a62ba3fc2b222b6a49)" failed: rpc error: code = Unknown desc = failed pulling image "gcr.io/google_containers/pause-amd64:3.0": Error response from daemon: Get https://gcr.io/v2/: net/http: request
Mar 07 20:42:27 server1 kubelet[14619]: E0307 20:42:27.392823 14619 pod_workers.go:186] Error syncing pod 7c74aa0f4b9044a62ba3fc2b222b6a49 ("kube-apiserver-server1_kube-system(7c74aa0f4b9044a62ba3fc2b222b6a49)"), skipping: failed to "CreatePodSandbox" for "kube-apiserver-server1_kube-system(7c74aa0f4b9044a62ba3fc2b222b6a49)" with CreatePodSandboxError: "CreatePodSandbo
Mar 07 20:42:27 server1 kubelet[14619]: E0307 20:42:27.907221 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://107.105.136.28:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:27 server1 kubelet[14619]: E0307 20:42:27.908220 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://107.105.136.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:27 server1 kubelet[14619]: E0307 20:42:27.909242 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://107.105.136.28:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:28 server1 kubelet[14619]: I0307 20:42:28.239272 14619 kubelet_node_status.go:273] Setting node annotation to enable volume controller attach/detach
Mar 07 20:42:28 server1 kubelet[14619]: I0307 20:42:28.246581 14619 kubelet_node_status.go:82] Attempting to register node server1
Mar 07 20:42:28 server1 kubelet[14619]: E0307 20:42:28.246744 14619 kubelet_node_status.go:106] Unable to register node "server1" with API server: Post https://107.105.136.28:6443/api/v1/nodes: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:28 server1 kubelet[14619]: E0307 20:42:28.907491 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://107.105.136.28:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:28 server1 kubelet[14619]: E0307 20:42:28.908477 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://107.105.136.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:28 server1 kubelet[14619]: E0307 20:42:28.909606 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://107.105.136.28:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:29 server1 kubelet[14619]: E0307 20:42:29.907804 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://107.105.136.28:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:29 server1 kubelet[14619]: E0307 20:42:29.908776 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://107.105.136.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:29 server1 kubelet[14619]: E0307 20:42:29.909785 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://107.105.136.28:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:30 server1 kubelet[14619]: E0307 20:42:30.908131 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://107.105.136.28:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:30 server1 kubelet[14619]: E0307 20:42:30.909155 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://107.105.136.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:30 server1 kubelet[14619]: E0307 20:42:30.910228 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://107.105.136.28:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:31 server1 kubelet[14619]: E0307 20:42:31.908495 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:465: Failed to list *v1.Service: Get https://107.105.136.28:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:31 server1 kubelet[14619]: E0307 20:42:31.909392 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:474: Failed to list *v1.Node: Get https://107.105.136.28:6443/api/v1/nodes?fieldSelector=metadata.name%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:31 server1 kubelet[14619]: E0307 20:42:31.910461 14619 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://107.105.136.28:6443/api/v1/pods?fieldSelector=spec.nodeName%3Dserver1&limit=500&resourceVersion=0: dial tcp 107.105.136.28:6443: getsockopt: connection refused
Mar 07 20:42:32 server1 kubelet[14619]: E0307 20:42:32.018611 14619 event.go:209] Unable to write event: 'Patch https://107.105.136.28:6443/api/v1/namespaces/default/events/server1.1519a1c427a76a94: dial tcp 107.105.136.28:6443: getsockopt: connection refused' (may retry after sleeping)

What you expected to happen:
Kubernetes should initialize successfully, since this is the initial setup using kubeadm.

How to reproduce it (as minimally and precisely as possible):
apt-get install -y apt-transport-https
sudo curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
apt-get update

apt-get install -y kubelet kubeadm kubectl kubernetes-cni

swapoff -a
kubeadm init

Anything else we need to know?:
This was the initial error after kubeadm init. It got stuck for a while at "[init] This might take a minute or longer if the control plane images have to be pulled."

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
- There is no internet connection, so the kubelet cannot pull the following control plane images:
- gcr.io/google_containers/kube-apiserver-amd64:v1.9.3
- gcr.io/google_containers/kube-controller-manager-amd64:v1.9.3
- gcr.io/google_containers/kube-scheduler-amd64:v1.9.3

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
couldn't initialize a Kubernetes cluster

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:
    Local Server (Ubuntu 16.04.3)

  • Kernel (e.g. uname -a):
    Linux server1 4.13.0-36-generic #40~16.04.1-Ubuntu SMP Fri Feb 16 23:25:58 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:
    kubeadm

  • Others:

@k8s-ci-robot k8s-ci-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Mar 7, 2018
@SamSinging
Author

/sig cluster-lifecycle

@k8s-ci-robot k8s-ci-robot added sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Mar 8, 2018
@myqq0000

+1

@nikhilno1

Having the same issue. Any solution/workaround? Thanks.

@exp0nge

exp0nge commented Apr 8, 2018

Same issue on Raspbian Stretch.

@murugan007

murugan007 commented Apr 11, 2018

I have the same issue with k8s v1.10.0

@bykvaadm

same with 1.10.1

@emptyewer

same issue with 1.10.1 on ubuntu 16.04

@dims
Member

dims commented May 3, 2018

Looks like this is a kubeadm scenario. Can you please use docker logs to look at logs from any failed apiserver container?
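
For example, a sketch assuming the usual kubeadm/Docker container naming (substitute the real container ID):

# list apiserver containers, including ones that have already exited
docker ps -a | grep kube-apiserver
# dump the logs of the container found above
docker logs <container-id>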

@emptyewer

emptyewer commented May 3, 2018

I0503 22:02:53.308277       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:endpoint-controller
 I0503 22:02:53.348304       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:generic-garbage-collector
 I0503 22:02:53.388448       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:horizontal-pod-autoscaler
 I0503 22:02:53.428276       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:job-controller
 I0503 22:02:53.468461       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:namespace-controller
 I0503 22:02:53.508937       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:node-controller
 I0503 22:02:53.548204       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:persistent-volume-binder
 I0503 22:02:53.589470       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pod-garbage-collector
 I0503 22:02:53.628636       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replicaset-controller
 I0503 22:02:53.668431       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:replication-controller
 I0503 22:02:53.708720       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:resourcequota-controller
 I0503 22:02:53.748635       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:route-controller
 I0503 22:02:53.788545       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-account-controller
 I0503 22:02:53.828922       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:service-controller
 I0503 22:02:53.868351       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:statefulset-controller
 I0503 22:02:53.908406       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:ttl-controller
 I0503 22:02:53.948229       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:certificate-controller
 I0503 22:02:53.988495       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pvc-protection-controller
 I0503 22:02:54.028219       1 storage_rbac.go:218] created clusterrolebinding.rbac.authorization.k8s.io/system:controller:pv-protection-controller
 I0503 22:02:54.066914       1 controller.go:537] quota admission added evaluator for: {rbac.authorization.k8s.io roles}
 I0503 22:02:54.068947       1 storage_rbac.go:249] created role.rbac.authorization.k8s.io/extension-apiserver-authentication-reader in kube-system
 I0503 22:02:54.108876       1 storage_rbac.go:249] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
 I0503 22:02:54.148422       1 storage_rbac.go:249] created role.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
 I0503 22:02:54.188197       1 storage_rbac.go:249] created role.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
 I0503 22:02:54.228478       1 storage_rbac.go:249] created role.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
 I0503 22:02:54.268426       1 storage_rbac.go:249] created role.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
 I0503 22:02:54.310263       1 storage_rbac.go:249] created role.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
 I0503 22:02:54.347008       1 controller.go:537] quota admission added evaluator for: {rbac.authorization.k8s.io rolebindings}
 I0503 22:02:54.349070       1 storage_rbac.go:279] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-controller-manager in kube-system
 I0503 22:02:54.388306       1 storage_rbac.go:279] created rolebinding.rbac.authorization.k8s.io/system::leader-locking-kube-scheduler in kube-system
 I0503 22:02:54.427975       1 storage_rbac.go:279] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-system
 I0503 22:02:54.468390       1 storage_rbac.go:279] created rolebinding.rbac.authorization.k8s.io/system:controller:cloud-provider in kube-system
 I0503 22:02:54.508567       1 storage_rbac.go:279] created rolebinding.rbac.authorization.k8s.io/system:controller:token-cleaner in kube-system
 I0503 22:02:54.548459       1 storage_rbac.go:279] created rolebinding.rbac.authorization.k8s.io/system:controller:bootstrap-signer in kube-public
 I0503 22:02:55.095712       1 controller.go:537] quota admission added evaluator for: { serviceaccounts}
 I0503 22:02:55.484138       1 controller.go:537] quota admission added evaluator for: {apps deployments}
 I0503 22:02:55.536026       1 controller.go:537] quota admission added evaluator for: {apps daemonsets}
 I0503 22:03:21.617753       1 trace.go:76] Trace[1666680632]: "Patch /api/v1/nodes/deepradio-ws6/status" (started: 2018-05-03 22:03:11.596955185 +0000 UTC m=+26.008574211) (total time: 10.020731394s):
 Trace[1666680632]: [10.020731394s] [10.016878506s] END
 I0503 22:03:35.002346       1 trace.go:76] Trace[107158748]: "Create /api/v1/namespaces/kube-system/events" (started: 2018-05-03 22:03:04.992697708 +0000 UTC m=+19.404316734) (total time: 30.009605188s):
 Trace[107158748]: [30.009605188s] [30.006093168s] END

Above are the last few lines from the kube-apiserver docker logs. I have this issue only with Kubernetes versions 1.9 and 1.10; version 1.8 works fine.

@dims
Member

dims commented May 3, 2018

So it looks like it is still running... The next step is to check whether the IP/port it is listening on is right: run sudo lsof -i :6443. The kubelet is trying to contact the apiserver at that IP/port, and we need to make sure the lsof output reflects it. You can also run docker inspect on the apiserver container to check what you see there.
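
For example (a sketch; 6443 is the apiserver port from this thread, and the name filter assumes the usual kubeadm container naming):

# confirm something is actually listening on the apiserver port
sudo lsof -i :6443
# inspect the running apiserver container's state and network settings
docker inspect $(docker ps -q --filter name=kube-apiserver)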

@emptyewer

Actually, when I look at the docker ps output, the apiserver is not running. The journalctl output is listed below:

May 03 15:09:39 deepradio-ws6 kubelet[23859]: E0503 15:09:39.736025   23859 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://172.21.142.12:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.21.142.12:6443: getsockopt: connection refused
May 03 15:09:39 deepradio-ws6 kubelet[23859]: E0503 15:09:39.736975   23859 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://172.21.142.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Ddeepradio-ws6&limit=500&resourceVersion=0: dial tcp 172.21.142.12:6443: getsockopt: connection refused
May 03 15:09:39 deepradio-ws6 kubelet[23859]: E0503 15:09:39.738050   23859 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.21.142.12:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddeepradio-ws6&limit=500&resourceVersion=0: dial tcp 172.21.142.12:6443: getsockopt: connection refused
May 03 15:09:40 deepradio-ws6 kubelet[23859]: E0503 15:09:40.736689   23859 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://172.21.142.12:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.21.142.12:6443: getsockopt: connection refused
May 03 15:09:40 deepradio-ws6 kubelet[23859]: E0503 15:09:40.737618   23859 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://172.21.142.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Ddeepradio-ws6&limit=500&resourceVersion=0: dial tcp 172.21.142.12:6443: getsockopt: connection refused
May 03 15:09:40 deepradio-ws6 kubelet[23859]: E0503 15:09:40.738731   23859 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.21.142.12:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddeepradio-ws6&limit=500&resourceVersion=0: dial tcp 172.21.142.12:6443: getsockopt: connection refused
May 03 15:09:41 deepradio-ws6 kubelet[23859]: E0503 15:09:41.737434   23859 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Service: Get https://172.21.142.12:6443/api/v1/services?limit=500&resourceVersion=0: dial tcp 172.21.142.12:6443: getsockopt: connection refused
May 03 15:09:41 deepradio-ws6 kubelet[23859]: E0503 15:09:41.738325   23859 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://172.21.142.12:6443/api/v1/nodes?fieldSelector=metadata.name%3Ddeepradio-ws6&limit=500&resourceVersion=0: dial tcp 172.21.142.12:6443: getsockopt: connection refused
May 03 15:09:41 deepradio-ws6 kubelet[23859]: E0503 15:09:41.739380   23859 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://172.21.142.12:6443/api/v1/pods?fieldSelector=spec.nodeName%3Ddeepradio-ws6&limit=500&resourceVersion=0: dial tcp 172.21.142.12:6443: getsockopt: connection refused

@dims
Member

dims commented May 3, 2018

Right, we need to figure out why the apiserver container failed by looking at its logs. (The journalctl output from the kubelet only says that it cannot talk to the apiserver.)
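
One way to do that, sketched under the same container-naming assumption as above (substitute the real container ID):

# include exited containers so the failed apiserver instance shows up
docker ps -a --filter name=kube-apiserver --format '{{.ID}} {{.Status}}'
# read its logs plus exit code and finish time
docker logs <container-id>
docker inspect -f '{{.State.ExitCode}} {{.State.FinishedAt}}' <container-id>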

@woile

woile commented May 18, 2018

Having the same issue on Raspbian Stretch.

@mingregister
Contributor

same issue with 1.10.4 on CentOS Linux release 7.4.1708 (Core)

@kuznero

kuznero commented Jun 22, 2018

Same with v1.10.4 on CentOS

@dkuji

dkuji commented Jul 28, 2018

same issue with v1.11.1 on raspbian(stretch)

@ghost

ghost commented Aug 20, 2018

I think I'm having the same issue. Here are my docker logs for the apiserver:

Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0820 09:34:07.877808       1 server.go:703] external host was not specified, using 192.168.1.100
I0820 09:34:07.879449       1 server.go:145] Version: v1.11.2
I0820 09:34:36.864890       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0820 09:34:36.866518       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0820 09:34:36.890629       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0820 09:34:36.890877       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0820 09:34:37.846377       1 master.go:234] Using reconciler: lease
W0820 09:35:21.135214       1 genericapiserver.go:319] Skipping API batch/v2alpha1 because it has no resources.
W0820 09:35:28.331097       1 genericapiserver.go:319] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0820 09:35:28.479529       1 genericapiserver.go:319] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0820 09:35:28.961188       1 genericapiserver.go:319] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0820 09:35:41.582878       1 genericapiserver.go:319] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2018/08/20 09:35:42 log.go:33: [restful/swagger] listing is available at https://192.168.1.100:6443/swaggerapi
[restful] 2018/08/20 09:35:42 log.go:33: [restful/swagger] https://192.168.1.100:6443/swaggerui/ is mapped to folder /swagger-ui/
[restful] 2018/08/20 09:36:09 log.go:33: [restful/swagger] listing is available at https://192.168.1.100:6443/swaggerapi
[restful] 2018/08/20 09:36:09 log.go:33: [restful/swagger] https://192.168.1.100:6443/swaggerui/ is mapped to folder /swagger-ui/
I0820 09:36:10.625267       1 plugins.go:158] Loaded 8 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0820 09:36:10.625893       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.

@Baitanik

Baitanik commented Sep 4, 2018

I am also having the same issue:
..t/kubelet.go:451: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: getsockopt: connection refused
Sep 04 21:48:16 ddeVM kubelet[4000]: E0904 21:48:16.747740 4000 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/kubelet.go:460: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: getsockopt: connection refused
Sep 04 21:48:16 ddeVM kubelet[4000]: E0904 21:48:16.749527 4000 reflector.go:205] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp [::1]:8443: getsockopt: connection refused .

.....
using kubectl version : v1.10.0
minikube : v0.28.2

@wmturner

wmturner commented Sep 18, 2018

Getting this as well. It looks (at least in my case) like it is related to iptables rules failing to be added, which causes the connectivity issues with the various endpoints:

Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C DOCKER -i docker0 -j RETURN' failed: iptables: Bad rule
Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -D FORWARD -i docker0 -o docker0 -j DROP' failed: iptables: Bad r
Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -i docker0 -o docker0 -j ACCEPT' failed: ipt
Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -i docker0 ! -o docker0 -j ACCEPT' failed: i
Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -o docker0 -m conntrack --ctstate RELATED,ES
Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C PREROUTING -m addrtype --dst-type LOCAL -j DOCKER' fail
Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t nat -C OUTPUT -m addrtype --dst-type LOCAL -j DOCKER ! --dst 1
Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED: '/usr/sbin/iptables -w2 -t filter -C FORWARD -o docker0 -j DOCKER' failed: iptables: No c
Sep 17 23:35:00 k8s1.lax.simplicify.net firewalld[957]: WARNING: COMMAND_FAILED:
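
A hedged aside, not from this thread: on firewalld hosts it can be worth confirming whether the firewall itself is what keeps the apiserver port unreachable, for example:

# is firewalld active, and what does it currently allow?
sudo firewall-cmd --state
sudo firewall-cmd --list-all
# quick local probe of the apiserver port used in this thread; a TLS/HTTP error
# still proves the port is open, while "connection refused" means nothing is listening
curl -k https://127.0.0.1:6443/healthz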

@chrXiTer

Same issue with 1.11.3 on Ubuntu 16.04.5 when running kubeadm init.

@chrXiTer

Same issue with 1.11.3 on Ubuntu 16.04.5 when running kubeadm init.

In my case it was because I had copied /var/lib/docker/image/aufs from another host to mine, since my network is very bad. The images look fine when running "docker image ls", but clearly they are not.

@mkulak

mkulak commented Sep 29, 2018

I have this issue after updating from 1.8.7 to 1.12.0.

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.0", GitCommit:"0ed33881dc4355495f623c6f22e7dd0b7632b7c0", GitTreeState:"clean", BuildDate:"2018-09-27T17:05:32Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/arm"}

@Nilesh20

Facing the same issue, any solution?

@Baitanik

Baitanik commented Oct 11, 2018

I was having the same issue. Here is what I did: first run "minikube stop", then

  1. minikube delete
  2. rm -rf ~/.minikube ~/.kube /etc/kubernetes
  3. delete the docker images as well
  4. minikube start with increased memory and CPU and the following options:
     --cpus 4 --memory 12500 --apiserver-name= --bootstrapper=localkube

and I was not getting that error anymore (the commands are consolidated below).

After this you can use kubectl to apply templates as per your requirements.
Although I read somewhere that --bootstrapper=localkube is not recommended in newer versions of minikube (it uses localkube instead of kubeadm and the kubelet), as of now doing the above steps solved the issue for me.
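
A consolidated sketch of those steps as shell commands (flags and resource values are taken from the comment above; the --apiserver-name value was left empty there):

minikube stop
minikube delete
rm -rf ~/.minikube ~/.kube /etc/kubernetes
# remove the cached docker images as well, e.g. all of them:
docker rmi $(docker images -q)
# restart with more resources; the --apiserver-name value was elided in the original comment
minikube start --cpus 4 --memory 12500 --apiserver-name= --bootstrapper=localkube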

@yuxiaoba

I have faced this problem too. Maybe you can try

$ ps aux | grep kube

and check for kube-apiserver. If kube-apiserver is not running (and your control-plane components run as systemd services rather than static pods), you can try:

$ systemctl restart kube-apiserver
$ systemctl restart kube-controller-manager
$ systemctl restart kube-scheduler

@Analyse4

same with v1.10.0

@neolit123
Member

neolit123 commented Oct 26, 2018

It feels to me that there are a variety of issues reported here.

I'm going to respond to the kubeadm case as per the first post:

  • make sure you have an internet connection; if you don't have one, pre-pull the images using kubeadm config images pull (requires k8s 1.11+; see the commands below)
  • if you are on a version before 1.11, consider upgrading, as support for 1.10 is going to be dropped this December.
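
For completeness, the pre-pull commands referenced above (available in kubeadm v1.11+):

# show which control-plane images this kubeadm version needs
kubeadm config images list
# pull them ahead of time so kubeadm init does not block on the network
kubeadm config images pull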

Try the support channels like Stack Overflow or Slack if you have questions:
https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#user-support-response-example

/close

@k8s-ci-robot
Contributor

@neolit123: Closing this issue.

In response to this:

it feels to me that there is a variety of reported issues here.

i'm going to respond to the kubeadm case as per the first post:

  • make sure you have internet connection; if you don't have one pre-pull the kubeadm images using kubeadm config images pull requires k8s 1.11+
  • if you are on a version before 1.11 consider upgrading as support for 1.10 is going to be dropped this december.

try the support channels like stack overflow or slack if you have questions:
https://github.com/kubernetes/community/blob/master/contributors/guide/issue-triage.md#user-support-response-example

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@BinZhiZhu

Same with v1.21.2 on macOS.
