kubectl top node shows `unknown` for almost all nodes #165

@cbluth

Description

I installed metrics-server on a fresh k8s v1.12.1 cluster via kubectl apply -f ./deploy/1.8+/:

$ kubectl --context=dev apply -f ./deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
$ kubectl --context=dev version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}

Now, when I do kubectl top node I see this:

$ kubectl --context=dev top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%     
master-5h8ar   169m         4%     2518Mi          34%         
master-0l8e4   <unknown>                           <unknown>               <unknown>               <unknown>               
master-tx20q   <unknown>                           <unknown>               <unknown>               <unknown>               
node-8nl1e     <unknown>                           <unknown>               <unknown>               <unknown>               
node-9zyu2     <unknown>                           <unknown>               <unknown>               <unknown>               
node-ibfn9     <unknown>                           <unknown>               <unknown>               <unknown>               
node-nidq      <unknown>                           <unknown>               <unknown>               <unknown>               
node-rb44h     <unknown>                           <unknown>               <unknown>               <unknown>               
node-k21wl     <unknown>                           <unknown>               <unknown>               <unknown>               
node-memql     <unknown>                           <unknown>               <unknown>               <unknown>               
node-v7yku     <unknown>                           <unknown>               <unknown>               <unknown>               
node-wddnt     <unknown>                           <unknown>               <unknown>               <unknown>
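
(A quick first check for anyone debugging this: is the metrics API registered and actually returning data? Just a sketch, assuming the default v1beta1.metrics.k8s.io APIService created by the manifests above.)

$ kubectl --context=dev get apiservice v1beta1.metrics.k8s.io
$ kubectl --context=dev get --raw /apis/metrics.k8s.io/v1beta1/nodes

If the APIService reports as available but the raw node metrics list comes back empty, the problem is on the scrape side (metrics-server talking to the kubelets) rather than on the API aggregation side.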

Activity

cbluth (Author) commented on Oct 24, 2018

Here are some logs:

$ kubectl --context=dev -n kube-system logs metrics-server-5cbbc84f8c-fd4sf 
I1024 13:08:43.451632       1 serving.go:273] Generated self-signed cert (apiserver.local.config/certificates/apiserver.crt, apiserver.local.config/certificates/apiserver.key)
[restful] 2018/10/24 13:08:44 log.go:33: [restful/swagger] listing is available at https://:443/swaggerapi
[restful] 2018/10/24 13:08:44 log.go:33: [restful/swagger] https://:443/swaggerui/ is mapped to folder /swagger-ui/
I1024 13:08:44.473203       1 serve.go:96] Serving securely on [::]:443
E1024 13:08:59.696005       1 reststorage.go:129] unable to fetch node metrics for node "node-rb44h": no metrics known for node
E1024 13:08:59.696022       1 reststorage.go:129] unable to fetch node metrics for node "node-8nl1e": no metrics known for node
E1024 13:08:59.696026       1 reststorage.go:129] unable to fetch node metrics for node "master-5h8ar": no metrics known for node
E1024 13:08:59.696030       1 reststorage.go:129] unable to fetch node metrics for node "master-tx20q": no metrics known for node
E1024 13:08:59.696034       1 reststorage.go:129] unable to fetch node metrics for node "node-memql": no metrics known for node
E1024 13:08:59.696038       1 reststorage.go:129] unable to fetch node metrics for node "node-wddnt": no metrics known for node
E1024 13:08:59.696041       1 reststorage.go:129] unable to fetch node metrics for node "master-0l8e4": no metrics known for node
E1024 13:08:59.696043       1 reststorage.go:129] unable to fetch node metrics for node "node-k21wl": no metrics known for node
E1024 13:08:59.696045       1 reststorage.go:129] unable to fetch node metrics for node "node-ibfn9": no metrics known for node
E1024 13:08:59.696047       1 reststorage.go:129] unable to fetch node metrics for node "node-9zyu2": no metrics known for node
E1024 13:08:59.696050       1 reststorage.go:129] unable to fetch node metrics for node "node-nidq": no metrics known for node
E1024 13:08:59.696054       1 reststorage.go:129] unable to fetch node metrics for node "node-v7yku": no metrics known for node
E1024 13:09:17.825779       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-ibfn9: no metrics known for pod
E1024 13:09:17.825791       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-apiserver-master-tx20q: no metrics known for pod
E1024 13:09:17.825794       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-apiserver-master-5h8ar: no metrics known for pod
E1024 13:09:17.825796       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-8nl1e: no metrics known for pod
E1024 13:09:17.825799       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-tgxtp: no metrics known for pod
E1024 13:09:17.825801       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-v7yku: no metrics known for pod
E1024 13:09:17.825803       1 reststorage.go:144] unable to fetch pod metrics for pod backend/memcached-2: no metrics known for pod
E1024 13:09:17.825806       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kubedns-autoscaler-d4fc847bf-8mm5d: no metrics known for pod
E1024 13:09:17.825811       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-ibfn9: no metrics known for pod
E1024 13:09:17.825816       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-memql: no metrics known for pod
E1024 13:09:17.825821       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-z8vvr: no metrics known for pod
E1024 13:09:17.825827       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-dns-5bff646c-vw55l: no metrics known for pod
E1024 13:09:17.825832       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-4tp22: no metrics known for pod
E1024 13:09:17.825842       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-k21wl: no metrics known for pod
E1024 13:09:17.825848       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-kfzkb: no metrics known for pod
E1024 13:09:17.825851       1 reststorage.go:144] unable to fetch pod metrics for pod backend/memcached-0: no metrics known for pod
E1024 13:09:17.825854       1 reststorage.go:144] unable to fetch pod metrics for pod platform/buffer-675557c7f8-vxbdl: no metrics known for pod
E1024 13:09:17.825858       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-v7yku: no metrics known for pod
E1024 13:09:17.825861       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-s7pm9: no metrics known for pod
E1024 13:09:17.825867       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-wddnt: no metrics known for pod
E1024 13:09:17.825871       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-nidq: no metrics known for pod
E1024 13:09:17.825876       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-apiserver-master-0l8e4: no metrics known for pod
E1024 13:09:17.825881       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-dns-5bff646c-rv68n: no metrics known for pod
E1024 13:09:17.825887       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/tiller-deploy-845cffcd48-mbd9f: no metrics known for pod
E1024 13:09:17.825893       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-k21wl: no metrics known for pod
E1024 13:09:17.825899       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-master-0l8e4: no metrics known for pod
E1024 13:09:17.825905       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kubernetes-dashboard-5db4d9f45f-xxwqr: no metrics known for pod
E1024 13:09:17.825908       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-svwlv: no metrics known for pod
E1024 13:09:17.825912       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-master-tx20q: no metrics known for pod
E1024 13:09:17.825918       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-cgf5q: no metrics known for pod
E1024 13:09:17.825923       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-scheduler-master-5h8ar: no metrics known for pod
E1024 13:09:17.825929       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-scheduler-master-tx20q: no metrics known for pod
E1024 13:09:17.825932       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/metrics-server-5cbbc84f8c-fd4sf: no metrics known for pod
E1024 13:09:17.825936       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-9zyu2: no metrics known for pod
E1024 13:09:17.825940       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-controller-manager-master-5h8ar: no metrics known for pod
E1024 13:09:17.825945       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-scheduler-master-0l8e4: no metrics known for pod
E1024 13:09:17.825951       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-8nl1e: no metrics known for pod
E1024 13:09:17.825956       1 reststorage.go:144] unable to fetch pod metrics for pod backend/memcached-1: no metrics known for pod
E1024 13:09:17.825962       1 reststorage.go:144] unable to fetch pod metrics for pod platform/buffer-675557c7f8-kfb5q: no metrics known for pod
E1024 13:09:17.825967       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-hx2z8: no metrics known for pod
E1024 13:09:17.825970       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-controller-manager-master-0l8e4: no metrics known for pod
E1024 13:09:17.825975       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-rb44h: no metrics known for pod
E1024 13:09:17.825980       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-rb44h: no metrics known for pod
E1024 13:09:17.825983       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-9zyu2: no metrics known for pod
E1024 13:09:17.825988       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-9dpmw: no metrics known for pod
E1024 13:09:17.825991       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-memql: no metrics known for pod
E1024 13:09:17.825997       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-controller-manager-master-tx20q: no metrics known for pod
E1024 13:09:17.826002       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-dsp9d: no metrics known for pod
E1024 13:09:17.826007       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-master-5h8ar: no metrics known for pod
E1024 13:09:17.826010       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-tjc47: no metrics known for pod
E1024 13:09:17.826014       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-nidq: no metrics known for pod
E1024 13:09:17.826018       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-2vntr: no metrics known for pod
E1024 13:09:17.826023       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-wddnt: no metrics known for pod
E1024 13:09:44.678408       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:master-tx20q: unable to fetch metrics from Kubelet master-tx20q (master-tx20q): Get https://master-tx20q:10250/stats/summary/: dial tcp: lookup master-tx20q on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:node-ibfn9: unable to fetch metrics from Kubelet node-ibfn9 (node-ibfn9): Get https://node-ibfn9:10250/stats/summary/: dial tcp: lookup node-ibfn9 on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:node-k21wl: unable to fetch metrics from Kubelet node-k21wl (node-k21wl): Get https://node-k21wl:10250/stats/summary/: dial tcp: lookup node-k21wl on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:node-v7yku: unable to fetch metrics from Kubelet node-v7yku (node-v7yku): Get https://node-v7yku:10250/stats/summary/: dial tcp: lookup node-v7yku on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:node-memql: unable to fetch metrics from Kubelet node-memql (node-memql): Get https://node-memql:10250/stats/summary/: dial tcp: lookup node-memql on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:master-0l8e4: unable to fetch metrics from Kubelet master-0l8e4 (master-0l8e4): Get https://master-0l8e4:10250/stats/summary/: dial tcp: lookup master-0l8e4 on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:node-nidq: unable to fetch metrics from Kubelet node-nidq (node-nidq): Get https://node-nidq:10250/stats/summary/: dial tcp: lookup node-nidq on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:node-wddnt: unable to fetch metrics from Kubelet node-wddnt (node-wddnt): Get https://node-wddnt:10250/stats/summary/: dial tcp: lookup node-wddnt on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:node-rb44h: unable to fetch metrics from Kubelet node-rb44h (node-rb44h): Get https://node-rb44h:10250/stats/summary/: dial tcp: lookup node-rb44h on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:node-8nl1e: unable to fetch metrics from Kubelet node-8nl1e (node-8nl1e): Get https://node-8nl1e:10250/stats/summary/: dial tcp: lookup node-8nl1e on 10.233.128.3:53: no such host, unable to fully scrape metrics from source kubelet_summary:node-9zyu2: unable to fetch metrics from Kubelet node-9zyu2 (node-9zyu2): Get https://node-9zyu2:10250/stats/summary/: x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local, localhost, master-5h8ar, master-tx20q, master-0l8e4, lb-apiserver.kubernetes.local, api.k8s.example.com, api.k8s-dev.example.com, not node-9zyu2]
E1024 13:09:50.059632       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-scheduler-master-0l8e4: no metrics known for pod
E1024 13:09:50.059644       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-8nl1e: no metrics known for pod
E1024 13:09:50.059647       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/weave-net-hx2z8: no metrics known for pod
E1024 13:09:50.059649       1 reststorage.go:144] unable to fetch pod metrics for pod backend/memcached-1: no metrics known for pod
E1024 13:09:50.059651       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/kube-proxy-node-rb44h: no metrics known for pod
E1024 13:09:50.059653       1 reststorage.go:144] unable to fetch pod metrics for pod kube-system/nginx-proxy-node-rb44h: no metrics known for pod
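
The manager.go error above is the relevant part: metrics-server tries to reach each kubelet by node name (e.g. https://master-tx20q:10250/stats/summary/) and the cluster DNS at 10.233.128.3 cannot resolve those names. A minimal way to confirm that from inside the cluster (just a sketch with a throwaway busybox pod; the pod name and image are arbitrary):

$ kubectl --context=dev run dns-test --rm -it --restart=Never --image=busybox -- nslookup master-tx20q

If that lookup fails the same way, metrics-server either needs node names that resolve in cluster DNS, or it needs to be told to prefer a different address type (see the --kubelet-preferred-address-types suggestion further down in this thread).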

cbluth (Author) commented on Oct 24, 2018

Reverting back to Heapster fixed the issue and I can see the metrics again, but I would rather not use Heapster, as it is being deprecated.

$ kubectl --context=dev delete -f ./deploy/1.8+/
clusterrole.rbac.authorization.k8s.io "system:aggregated-metrics-reader" deleted
clusterrolebinding.rbac.authorization.k8s.io "metrics-server:system:auth-delegator" deleted
rolebinding.rbac.authorization.k8s.io "metrics-server-auth-reader" deleted
apiservice.apiregistration.k8s.io "v1beta1.metrics.k8s.io" deleted
serviceaccount "metrics-server" deleted
deployment.extensions "metrics-server" deleted
service "metrics-server" deleted
clusterrole.rbac.authorization.k8s.io "system:metrics-server" deleted
clusterrolebinding.rbac.authorization.k8s.io "system:metrics-server" deleted
$ kubectl --context=dev apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml
clusterrolebinding.rbac.authorization.k8s.io/heapster created

$ kubectl --context=dev apply -f https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/standalone/heapster-controller.yaml
serviceaccount/heapster created
deployment.extensions/heapster created
service/heapster created
$ kubectl --context=dev top node
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
master-0l8e4   201m         5%     2498Mi          33%       
master-5h8ar   147m         3%     2512Mi          33%       
master-tx20q   158m         4%     2441Mi          33%       
node-8nl1e     42m          1%     1403Mi          18%       
node-9zyu2     62m          1%     1433Mi          18%       
node-ibfn9     73m          1%     1481Mi          19%       
node-k21wl     54m          1%     1408Mi          18%       
node-memql     52m          1%     1433Mi          18%       
node-nidq      54m          1%     1427Mi          18%       
node-rb44h     22m          0%     1468Mi          19%       
node-v7yku     27m          0%     1468Mi          19%       
node-wddnt     38m          0%     1411Mi          18%

ajatkj commented on Nov 2, 2018

I had a similar issue, which was fixed after I added the flag:

command:
- /metrics-server
- --kubelet-preferred-address-types=InternalIP

You can try to ping the pod IP and the pod hostname from inside metrics-server to see the difference:

kubectl exec -it metrics-server-xxxxxxxxxxx-xxxxx sh
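
(For reference, a rough sketch of where that flag lands in the Deployment shipped under deploy/1.8+, showing only the relevant container fragment:)

containers:
- name: metrics-server
  command:
  - /metrics-server
  - --kubelet-preferred-address-types=InternalIP

The flag also accepts an ordered list, e.g. --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS,ExternalIP, if you want a fallback order instead of a single address type.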

cbluth (Author) commented on Nov 5, 2018

I had a similar issue, which was fixed after I added the flag:

command:
- /metrics-server
- --kubelet-preferred-address-types=InternalIP

You can try to ping the pod IP and the pod hostname from inside metrics-server to see the difference:

kubectl exec -it metrics-server-xxxxxxxxxxx-xxxxx sh

This fixed my issue.

pdefreitas commented on Dec 3, 2018

OK, I am experiencing the same issue under EKS, and HPA is supposed to be working with eks.2.

@cbluth did you find any solution other than going back to Heapster?

JokerDevops commented on Oct 29, 2019

[root@node40 metrics-server]# kubectl top  nodes
NAME           CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%
192.168.0.47   56m          1%     586Mi           1%
192.168.0.49   <unknown>                           <unknown>               <unknown>               <unknown>
192.168.0.51   <unknown>                           <unknown>               <unknown>               <unknown>
192.168.0.40   <unknown>                           <unknown>               <unknown>               <unknown>
192.168.0.43   <unknown>                           <unknown>               <unknown>               <unknown>
192.168.0.44   <unknown>                           <unknown>               <unknown>               <unknown>
192.168.0.45   <unknown>                           <unknown>               <unknown>               <unknown>
192.168.0.46   <unknown>                           <unknown>               <unknown>               <unknown>
192.168.0.41   <unknown>                           <unknown>               <unknown>               <unknown>
[root@node40 metrics-server]# kubectl logs -f metrics-server-v0.3.1-65949d64cb-cqlcs  -n kube-system
E1028 10:51:03.997174       1 manager.go:102] unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:192.168.0.41: unable to fetch metrics from Kubelet 192.168.0.41 (192.168.0.41): Get https://192.168.0.41:10250/stats/summary/: dial tcp 192.168.0.41:10250: i/o timeout, unable to fully scrape metrics from source kubelet_summary:192.168.0.51: unable to fetch metrics from Kubelet 192.168.0.51 (192.168.0.51): Get https://192.168.0.51:10250/stats/summary/: dial tcp 192.168.0.51:10250: i/o timeout, unable to fully scrape metrics from source kubelet_summary:192.168.0.44: unable to fetch metrics from Kubelet 192.168.0.44 (192.168.0.44): Get https://192.168.0.44:10250/stats/summary/: dial tcp 192.168.0.44:10250: i/o timeout, unable to fully scrape metrics from source kubelet_summary:192.168.0.40: unable to fetch metrics from Kubelet 192.168.0.40 (192.168.0.40): Get https://192.168.0.40:10250/stats/summary/: dial tcp 192.168.0.40:10250: i/o timeout, unable to fully scrape metrics from source kubelet_summary:192.168.0.46: unable to fetch metrics from Kubelet 192.168.0.46 (192.168.0.46): Get https://192.168.0.46:10250/stats/summary/: dial tcp 192.168.0.46:10250: i/o timeout, unable to fully scrape metrics from source kubelet_summary:192.168.0.49: unable to fetch metrics from Kubelet 192.168.0.49 (192.168.0.49): Get https://192.168.0.49:10250/stats/summary/: dial tcp 192.168.0.49:10250: i/o timeout, unable to fully scrape metrics from source kubelet_summary:192.168.0.45: unable to fetch metrics from Kubelet 192.168.0.45 (192.168.0.45): Get https://192.168.0.45:10250/stats/summary/: dial tcp 192.168.0.45:10250: i/o timeout, unable to fully scrape metrics from source kubelet_summary:192.168.0.43: unable to fetch metrics from Kubelet 192.168.0.43 (192.168.0.43): Get https://192.168.0.43:10250/stats/summary/: dial tcp 192.168.0.43:10250: i/o timeout]
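
(Note that this error is different from the DNS one earlier in the thread: here the node IPs resolve fine, but the TCP connection to kubelet port 10250 times out, which usually points at a firewall or security-group rule rather than name resolution. A quick check, run from the node hosting metrics-server or from any pod on its network; just a sketch, using one of the node IPs from the log above:)

curl -k https://192.168.0.41:10250/stats/summary/

An immediate "Unauthorized" response means the port is reachable and the remaining problem is authentication or certificates; a long hang ending in a timeout means the traffic to the kubelet is being dropped.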

JokerDevops commented on Oct 29, 2019

@ajatkj Hi, I've set --kubelet-preferred-address-types=InternalIP and it still shows errors.

JokerDevops commented on Oct 29, 2019

Can you help me? Thank you!

Antiarchitect commented on Oct 30, 2019

Same as @JokerDevops here. For one of my nodes --kubelet-preferred-address-types=InternalIP doesn't help.

serathius (Contributor) commented on Oct 31, 2019

Have you tried setting hostNetwork: true? I have seen it help on AWS.
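
For completeness, that setting goes on the pod spec of the metrics-server Deployment. A minimal sketch of the relevant fragment:

spec:
  template:
    spec:
      hostNetwork: true          # run metrics-server in the node's network namespace
      containers:
      - name: metrics-server
        command:
        - /metrics-server
        - --kubelet-preferred-address-types=InternalIP

With hostNetwork: true the scrape traffic leaves from the node's own interface instead of the pod network, which is presumably why it helps in the AWS setups mentioned above.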

Antiarchitect commented on Oct 31, 2019

So it was my fault. I have a bare-metal cluster, so all my InternalIPs are actually external addresses. The failing node was the one that hosts the metrics-server itself, so the request went from an internal source to an external destination. Anyway, I fixed my firewall and now everything is OK.
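
A quick way to see which addresses (and of which type) each kubelet actually advertises, and therefore what metrics-server will dial, is to inspect the node status. Just a sketch using the standard wide output and a jsonpath query:

$ kubectl get nodes -o wide
$ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.addresses}{"\n"}{end}'

Whatever shows up as InternalIP there is what metrics-server connects to when --kubelet-preferred-address-types=InternalIP is set, so that address has to be reachable on port 10250 from wherever metrics-server runs.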
