
how to renew the certificate when apiserver cert expired? #581

Closed
@zalmanzhao

Description


Versions

kubeadm version (use kubeadm version): 1.7.5

Environment:

  • Kubernetes version (use kubectl version): 1.7.5


Activity

errordeveloper (Member) commented on Jan 22, 2018

Duplicate of #206.

kachkaev commented on Aug 3, 2018

@zalmanzhao did you manage to solve this issue?

I created a kubeadm v1.9.3 cluster just over a year ago and it was working fine all this time. I went to update one deployment today and realised I was locked out of the API because the cert had expired. I can't even run kubeadm alpha phase certs apiserver, because I get failure loading apiserver certificate: the certificate has expired (my kubeadm version is currently 1.10.6, since I want to upgrade).

Adding insecure-skip-tls-verify: true to clusters[0].cluster in ~/.kube/config does not help either – I see You must be logged in to the server (Unauthorized) when running kubectl get pods (kubernetes/kubernetes#39767).

The cluster is working, but it lives its own life until it self-destructs or until things get fixed 😅 Unfortunately, I could not find a solution for my situation in #206 and am wondering how to get out of it. The only relevant material I could dig up was a blog post called ‘How to change expired certificates in kubernetes cluster’, which looked promising at first glance. However, it did not fit in the end, because my master machine has no /etc/kubernetes/ssl/ folder (only /etc/kubernetes/pki/) – either I have a different k8s version or I simply deleted that folder without noticing.

@errordeveloper could you please recommend something? I'd love to fix things without kubeadm reset and payload recreation.
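
(For anyone else diagnosing this state: a quick way to confirm when the on-disk certificates expire, assuming the default kubeadm layout under /etc/kubernetes/pki, is to ask openssl directly:)

$ sudo openssl x509 -noout -enddate -in /etc/kubernetes/pki/apiserver.crt   # prints a line like notAfter=<expiry date>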

davidcomeyne commented on Sep 6, 2018

@kachkaev Did you have any luck renewing the certs without resetting kubeadm?
If so, please share. I'm having the same issue here with k8s 1.7.4, and I can't seem to upgrade ($ kubeadm upgrade plan) because the error pops up again, telling me that the certificate has expired and that it cannot list the masters in my cluster:

[ERROR APIServerHealth]: the API Server is unhealthy; /healthz didn't return "ok"
[ERROR MasterNodesReady]: couldn't list masters in cluster: Get https://172.31.18.88:6443/api/v1/nodes?labelSelector=node-role.kubernetes.io%2Fmaster%3D: x509: certificate has expired or is not yet valid
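
(Even when the API rejects you, the TLS handshake itself will show the serving certificate's validity dates. A minimal sketch, using the apiserver address from the error above:)

$ echo | openssl s_client -connect 172.31.18.88:6443 2>/dev/null | openssl x509 -noout -dates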

kachkaev commented on Sep 6, 2018

Unfortunately, I gave up in the end. The solution was to create a new cluster, restore all the payload on it, switch DNS records and finally delete the original cluster 😭 At least there was no downtime because I was lucky enough to have healthy pods on the old k8s during the transition.

davidcomeyne commented on Sep 6, 2018

Thanks @kachkaev for responding. I will nonetheless give it another try.
If I find something I will make sure to post it here...

danroliver commented on Sep 14, 2018

If you are using a version of kubeadm prior to 1.8, where I understand certificate rotation (#206) was put into place as a beta feature, or your certs have already expired, then you will need to manually update your certs (or recreate your cluster, which it appears some, not just @kachkaev, end up resorting to).

You will need to SSH into your master node. If you are using kubeadm >= 1.8, skip to step 2.

  1. Update kubeadm, if needed. I was on 1.7 previously.
$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm
  2. Back up the old apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys.
$ sudo mv /etc/kubernetes/pki/apiserver.key /etc/kubernetes/pki/apiserver.key.old
$ sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.crt /etc/kubernetes/pki/apiserver-kubelet-client.crt.old
$ sudo mv /etc/kubernetes/pki/apiserver-kubelet-client.key /etc/kubernetes/pki/apiserver-kubelet-client.key.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.crt /etc/kubernetes/pki/front-proxy-client.crt.old
$ sudo mv /etc/kubernetes/pki/front-proxy-client.key /etc/kubernetes/pki/front-proxy-client.key.old
  3. Generate new apiserver, apiserver-kubelet-client, and front-proxy-client certs and keys. (A sketch for double-checking the new dates follows these steps.)
$ sudo kubeadm alpha phase certs apiserver --apiserver-advertise-address <IP address of your master server>
$ sudo kubeadm alpha phase certs apiserver-kubelet-client
$ sudo kubeadm alpha phase certs front-proxy-client
  4. Back up the old configuration files.
$ sudo mv /etc/kubernetes/admin.conf /etc/kubernetes/admin.conf.old
$ sudo mv /etc/kubernetes/kubelet.conf /etc/kubernetes/kubelet.conf.old
$ sudo mv /etc/kubernetes/controller-manager.conf /etc/kubernetes/controller-manager.conf.old
$ sudo mv /etc/kubernetes/scheduler.conf /etc/kubernetes/scheduler.conf.old
  5. Generate new configuration files.

There is an important note here. If you are on AWS, you will need to explicitly pass the --node-name parameter in this command. Otherwise you will get an error like Unable to register node "ip-10-0-8-141.ec2.internal" with API server: nodes "ip-10-0-8-141.ec2.internal" is forbidden: node ip-10-0-8-141 cannot modify node ip-10-0-8-141.ec2.internal in your kubelet logs (sudo journalctl -u kubelet --all | tail), and the master node will report that it is Not Ready when you run kubectl get nodes. (If you are unsure of the right node name, see the lookup sketch after these steps.)

Please be certain to replace the values passed in --apiserver-advertise-address and --node-name with the correct values for your environment.

$ sudo kubeadm alpha phase kubeconfig all --apiserver-advertise-address 10.0.8.141 --node-name ip-10-0-8-141.ec2.internal
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"

  6. Ensure that your kubectl is looking in the right place for your config files.
$ mv $HOME/.kube/config $HOME/.kube/config.old
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ chmod 600 $HOME/.kube/config
$ export KUBECONFIG=$HOME/.kube/config
  7. Reboot your master node.
$ sudo /sbin/shutdown -r now
  8. Reconnect to your master node, verify that it is "Ready", and grab your token. Copy the token to your clipboard; you will need it in the next step.
$ kubectl get nodes
$ kubeadm token list

If you do not have a valid token, you can create one with:

$ kubeadm token create

The token should look something like 6dihyb.d09sbgae8ph2atjw.

  9. SSH into each of the slave nodes and reconnect them to the master.
$ sudo curl -sSL https://dl.k8s.io/release/v1.8.15/bin/linux/amd64/kubeadm > ./kubeadm.1.8.15
$ chmod a+rx kubeadm.1.8.15
$ sudo mv /usr/bin/kubeadm /usr/bin/kubeadm.1.7
$ sudo mv kubeadm.1.8.15 /usr/bin/kubeadm
$ sudo kubeadm join --token=<token from step 8> <ip of master node>:<port; 6443 is the default> --node-name <should be the same one as from step 5>

  10. Repeat step 9 for each connecting node. From the master node, you can verify that all slave nodes have connected and are Ready with:
$ kubectl get nodes
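
(To double-check that the certs regenerated in steps 2 and 3 really carry fresh dates, a minimal verification sketch, assuming the default kubeadm PKI path:)

$ for c in apiserver apiserver-kubelet-client front-proxy-client; do sudo openssl x509 -noout -dates -in /etc/kubernetes/pki/$c.crt; done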

Hopefully this gets you where you need to be @davidcomeyne.
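
(On the AWS note in step 5: the node name kubeadm expects is usually the instance's private DNS name. A quick way to look it up from the node itself, assuming the standard EC2 metadata endpoint is reachable:)

$ curl -s http://169.254.169.254/latest/meta-data/local-hostname   # prints e.g. ip-10-0-8-141.ec2.internal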

davidcomeyne commented on Sep 15, 2018

Thanks a bunch @danroliver !
I will definitely try that and post my findings here.

ivan4th commented on Oct 22, 2018

@danroliver Thanks! Just tried it on an old single-node cluster, so I only needed steps 1–7. It worked.

dmellstrom commented on Oct 22, 2018

@danroliver Worked for me. Thank you.

davidcomeyne commented on Oct 23, 2018

Did not work for me, had to set up a new cluster. But glad it helped others!

fovecifer commented on Oct 30, 2018

Thank you @danroliver, it works for me. My kubeadm version is 1.8.5.

kvchitrapu commented on Nov 1, 2018

Thanks @danroliver for putting together the steps. I had to make small additions to them. My cluster is running v1.9.3 and lives in a private datacenter, off the Internet.

On the Master

  1. Prepare a kubeadm config.yml:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
api:
  advertiseAddress: <master-ip>
kubernetesVersion: 1.9.3
  2. Back up certs and conf files:
mkdir ~/conf-archive/
# run from /etc/kubernetes
for f in `ls *.conf`; do mv $f ~/conf-archive/$f.old; done

mkdir ~/pki-archive
# run from /etc/kubernetes/pki
for f in `ls apiserver* front-*client*`; do mv $f ~/pki-archive/$f.old; done
  3. The kubeadm commands on the master then took --config ./config.yml, like this:
kubeadm alpha phase certs apiserver --config ./config.yml 
kubeadm alpha phase certs apiserver-kubelet-client --config ./config.yml 
kubeadm alpha phase certs front-proxy-client --config ./config.yml
kubeadm alpha phase kubeconfig all --config ./config.yml --node-name <master-node>
reboot
  4. Create a token (see the sketch just below).
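
(Step 4 is the stock kubeadm token workflow; a minimal sketch, assuming kubeadm defaults:)

kubeadm token create   # prints the new token, e.g. in the eeefff.55550009999b3333 format
kubeadm token list     # confirm it exists and has not expired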

On the minions

On each minion, I had to move the old CA cert and kubelet.conf out of the way, stop kubelet, and rejoin:

mv /etc/kubernetes/pki/ca.crt ~/archive/
mv /etc/kubernetes/kubelet.conf ~/archive/
systemctl stop kubelet
kubeadm join --token=eeefff.55550009999b3333 --discovery-token-unsafe-skip-ca-verification <master-ip>:6443
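
(One hedged follow-up: since kubelet was stopped above, it may need to be started again for the node to finish re-registering, assuming systemd manages it:)

systemctl start kubelet    # if the join did not bring it back on its own
systemctl status kubelet   # should be active; the node should go Ready on the master shortly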

55 remaining items
