Closed
Description
I installed metrics-server on a fresh k8s v1.12.1 cluster via kubectl apply -f ./deploy/1.8+/:
$ kubectl --context=dev apply -f ./deploy/1.8+/
clusterrole.rbac.authorization.k8s.io/system:aggregated-metrics-reader created
clusterrolebinding.rbac.authorization.k8s.io/metrics-server:system:auth-delegator created
rolebinding.rbac.authorization.k8s.io/metrics-server-auth-reader created
apiservice.apiregistration.k8s.io/v1beta1.metrics.k8s.io created
serviceaccount/metrics-server created
deployment.extensions/metrics-server created
service/metrics-server created
clusterrole.rbac.authorization.k8s.io/system:metrics-server created
clusterrolebinding.rbac.authorization.k8s.io/system:metrics-server created
$ kubectl --context=dev version
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:46:06Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Now, when I run kubectl top node, I see this:
$ kubectl --context=dev top node
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master-5h8ar 169m 4% 2518Mi 34%
master-0l8e4 <unknown> <unknown> <unknown> <unknown>
master-tx20q <unknown> <unknown> <unknown> <unknown>
node-8nl1e <unknown> <unknown> <unknown> <unknown>
node-9zyu2 <unknown> <unknown> <unknown> <unknown>
node-ibfn9 <unknown> <unknown> <unknown> <unknown>
node-nidq <unknown> <unknown> <unknown> <unknown>
node-rb44h <unknown> <unknown> <unknown> <unknown>
node-k21wl <unknown> <unknown> <unknown> <unknown>
node-memql <unknown> <unknown> <unknown> <unknown>
node-v7yku <unknown> <unknown> <unknown> <unknown>
node-wddnt <unknown> <unknown> <unknown> <unknown>
cbluth commented on Oct 24, 2018
Here are some logs:
cbluth commented on Oct 24, 2018
Reverting back to heapster fixed my issue and I can see the metrics again, but I would rather not use heapster since it is being deprecated.

ajatkj commented on Nov 2, 2018
I had a similar issue, which was fixed after I added this flag to the container command:
command:
  - /metrics-server
  - --kubelet-preferred-address-types=InternalIP
You can try to ping the pod IP and pod hostname from inside metrics-server to see the difference:
kubectl exec -it metrics-server-xxxxxxxxxxx-xxxxx sh
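For context, that flag goes into the container spec of the metrics-server Deployment created by the deploy/1.8+ manifests. A minimal sketch of the relevant excerpt (the image tag is an assumption, illustrative of the era of this thread; match names and tags to your own manifests):

```yaml
# Excerpt of the metrics-server Deployment spec (illustrative, not the full manifest)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-server
  namespace: kube-system
spec:
  template:
    spec:
      containers:
        - name: metrics-server
          image: k8s.gcr.io/metrics-server-amd64:v0.3.1  # assumed tag, adjust to your deploy/
          command:
            - /metrics-server
            # Prefer the node's InternalIP when scraping kubelets, instead of an
            # (often unresolvable) hostname:
            - --kubelet-preferred-address-types=InternalIP
```

After editing, re-apply the manifest (kubectl apply -f) and check the metrics-server pod logs to confirm the kubelets are now reachable.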
cbluth commented on Nov 5, 2018
This fixed my issue.
pdefreitas commented on Dec 3, 2018
OK, I am experiencing the same issue under EKS, and HPA is supposed to be working as of eks.2.
@cbluth did you find any solution other than going back to heapster?
JokerDevops commented on Oct 29, 2019
@ajatkj Hi, I've set --kubelet-preferred-address-types=InternalIP and it still errors.
JokerDevops commented on Oct 29, 2019
Can you help me? Thank you!
Antiarchitect commented on Oct 30, 2019
Same as @JokerDevops here. For one of my nodes, --kubelet-preferred-address-types=InternalIP doesn't help.

serathius commented on Oct 31, 2019
Have you tried setting hostNetwork: true? I have seen it help on AWS.

Antiarchitect commented on Oct 31, 2019
So it was my fault. I have a bare-metal cluster, so all of my InternalIPs are in fact external addresses. The failing node was the one hosting metrics-server itself, so it tried to request stats from an internal source toward an external destination. Anyway, I fixed my firewall and now everything is OK.
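The hostNetwork suggestion from earlier in the thread amounts to a single field in the pod template of the metrics-server Deployment. A minimal sketch of where it sits (image tag is an assumption, illustrative only):

```yaml
# Pod spec excerpt: make metrics-server use the node's network namespace,
# so kubelet addresses are reached as if from the host itself
spec:
  template:
    spec:
      hostNetwork: true
      containers:
        - name: metrics-server
          image: k8s.gcr.io/metrics-server-amd64:v0.3.1  # assumed tag, adjust to your deploy/
```

Note that with hostNetwork: true the pod binds ports directly on the node, so make sure metrics-server's listen port does not collide with anything already running there.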