kubectl get cs showing unhealthy statuses for controller-manager and scheduler on v1.18.6 clean install #93472
please see here:
/close
@neolit123: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig api-machinery
/sig cluster-lifecycle
Getting the same issue with `kubectl get componentstatus`. Is there any way to solve this, or what is the workaround?
Stop using it, as it's officially deprecated in 1.19: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md#deprecation
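Since `componentstatuses` is deprecated, control-plane health is better checked through the API server's aggregated health endpoints. A minimal sketch, assuming a working kubeconfig pointed at the cluster:

```shell
# componentstatuses is deprecated in v1.19; query the API server's
# health endpoints instead. "?verbose" lists each individual check.
kubectl get --raw='/readyz?verbose'
kubectl get --raw='/livez?verbose'
```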
How do you stop using the component?
It looks like you can remove the line "- --port=0" from /etc/kubernetes/manifests/kube-scheduler.yaml and /etc/kubernetes/manifests/kube-controller-manager.yaml, then restart the kubelet and test again. I am running v1.20.1 and this resolved the error; the services do appear to be up and return "ok" when queried.
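The manifest edit above can be scripted. A sketch, assuming the default kubeadm manifest paths (the `.bak` suffix keeps a backup of each file):

```shell
# Remove the deprecated --port=0 flag from both static-pod manifests.
# The kubelet watches /etc/kubernetes/manifests and recreates the pods
# automatically after the files change, so no manual restart of the
# pods themselves is needed.
for f in /etc/kubernetes/manifests/kube-scheduler.yaml \
         /etc/kubernetes/manifests/kube-controller-manager.yaml; do
  sudo sed -i.bak '/- --port=0/d' "$f"
done
```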
Getting the same issue.
What happened:
kubeadm init --v=6 --config=kubeadmin/kubeadm-init.yaml --upload-certs
cat kubeadm-init.yaml
kubectl get cs
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Deploy cluster with kubeadm v1.18.6 + Flannel
From any master node run:
kubectl get cs
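On an affected cluster the command reports the two components as Unhealthy even though they are running. Typical output looks roughly like this (illustrative, not captured from the reporter's cluster):

```shell
kubectl get cs
# NAME                 STATUS      MESSAGE                                                                                     ERROR
# scheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
# controller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refused
# etcd-0               Healthy     {"health":"true"}
```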
Anything else we need to know?:
Environment:
- Kubernetes version (kubectl version): v1.18.6
- OS (cat /etc/os-release): Debian GNU/Linux 10 (buster)
- Kernel (uname -a): 4.19.0-9-amd64 #1 SMP Debian 4.19.118-2+deb10u1 (2020-06-07) x86_64 GNU/Linux
- Install tools: kubeadm
- Network plugin: Latest Flannel
- Other:
Default config after kubeadm init:
cat /etc/kubernetes/manifests/kube-controller-manager.yaml
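The flag responsible is visible in those files. A quick way to confirm it is present (a sketch, assuming the default kubeadm manifest paths):

```shell
# The insecure ports (10251/10252) are disabled by an explicit
# --port=0 flag in the kubeadm-generated static-pod manifests:
grep -n 'port=0' /etc/kubernetes/manifests/kube-scheduler.yaml \
                 /etc/kubernetes/manifests/kube-controller-manager.yaml
```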
P.S. I know that the insecure ports 10251 & 10252 are deprecated, but why are they still referenced in the out-of-the-box config, and how do I properly "fix" this?
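The components themselves are typically healthy; they can be probed directly on their secure ports instead of the deprecated insecure ones. A sketch, assuming the kubeadm default secure ports, run on a control-plane node:

```shell
# Secure health endpoints (self-signed certs, hence -k):
curl -k https://127.0.0.1:10259/healthz   # kube-scheduler
curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
```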
@k8s-ci-robot /sig api-machinery
@k8s-ci-robot /wg component-standard