Kubeadm unknown service runtime.v1alpha2.RuntimeService #4581

Closed
niklashagman opened this issue Sep 24, 2020 · 46 comments

@niklashagman

niklashagman commented Sep 24, 2020

Problem
Following the official Kubernetes installation instructions for containerd, kubeadm init fails with unknown service runtime.v1alpha2.RuntimeService.

# Commands from https://kubernetes.io/docs/setup/production-environment/container-runtimes/#containerd
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-get update && apt-get install -y containerd.io

# Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd

kubeadm init
...
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: time="2020-09-24T11:49:16Z" level=fatal msg="getting status of runtime failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1

Solution:

rm /etc/containerd/config.toml
systemctl restart containerd
kubeadm init
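
A quick way to verify the CRI plugin is actually registered after the restart (a hedged sketch; ctr ships with containerd, and crictl is assumed to be installed separately):

ctr plugin ls | grep cri        # io.containerd.grpc.v1.cri should report STATUS "ok", not "error"
crictl --runtime-endpoint unix:///var/run/containerd/containerd.sock info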

Versions:

  • Ubuntu 20.04 (focal)
  • containerd.io 1.3.7
  • kubectl 1.19.2
  • kubeadm 1.19.2
  • kubelet 1.19.2
@fuweid
Member

fuweid commented Sep 25, 2020

@blinkiz could you please provide your containerd config.toml and the output of ctr plugin ls? Thanks.

@niklashagman
Author

niklashagman commented Sep 25, 2020

Apparently it is my fault this time. My Ansible playbook did not override the config.toml file as I expected. Sorry for taking up your time; the default installation instructions work great.

In the config.toml file installed by the containerd.io package there is the line disabled_plugins = ["cri"], which I am guessing is causing the issue. That may be a bad default setting to ship in the containerd.io package, but that is for another issue/bug.

Closing.
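
For anyone who wants to keep the packaged config instead of deleting it, a minimal sketch (assuming the stock containerd.io config.toml formatting):

grep disabled_plugins /etc/containerd/config.toml    # expect: disabled_plugins = ["cri"]
sudo sed -i 's/disabled_plugins = \["cri"\]/disabled_plugins = []/' /etc/containerd/config.toml
sudo systemctl restart containerd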

@mikebrow
Member

> In the config.toml file installed by the containerd.io package there is the line disabled_plugins = ["cri"] …

Docker (by default) uses that config for the containerd they install via the containerd.io packages. Grr. It has been causing similar issues for k8s users for years. :-)

@alexcpn

alexcpn commented Nov 25, 2020

I followed the official instructions here https://kubernetes.io/docs/setup/production-environment/container-runtimes/

and I was getting a similar error:

root@green-1:~# kubeadm init  --config=config.yaml 
W1125 12:58:32.733485   26426 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.19.4
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR CRI]: container runtime is not running: output: time="2020-11-25T12:58:32Z" level=fatal msg="getting status of runtime failed: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

I checked /etc/containerd/config.toml and saw 'disabled_plugins = []'

Note: the only thing I changed in config.toml was to set the systemd cgroup option to true, which was different from the way the docs describe it (maybe this was the problem?):

[plugins."io.containerd.grpc.v1.cri"]
    systemd_cgroup = false --> changed to true

From the docs:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
  ...
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true

Deleting this config.toml as described in the first post and restarting the containerd service solved it, and kubeadm could proceed.
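
For reference, the systemd cgroup switch the docs describe is the SystemdCgroup option under the runc runtime options, not the older top-level systemd_cgroup key; a hedged sketch of flipping it in a freshly generated default config:

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
sudo systemctl restart containerd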

@alexcpn

alexcpn commented Nov 25, 2020

This is definitely a bug. For the worker node I followed the docs exactly, and I was getting this error:

Nov 25 14:01:37 green-3 kubelet[24158]: E1125 14:01:37.781735   24158 kubelet.go:2103] Container runtime network not ready: NetworkReady=false reason:Ne
Nov 25 14:01:40 green-3 kubelet[24158]: W1125 14:01:40.121655   24158 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstab
Nov 25 14:01:40 green-3 kubelet[24158]: I1125 14:01:40.161343   24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 858e1a9
Nov 25 14:01:41 green-3 kubelet[24158]: I1125 14:01:41.162982   24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 858e1a9
Nov 25 14:01:41 green-3 kubelet[24158]: I1125 14:01:41.163235   24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a46854a
Nov 25 14:01:41 green-3 kubelet[24158]: E1125 14:01:41.163527   24158 pod_workers.go:191] Error syncing pod 4dabce76-ceb5-43fb-bef1-1992a3aa124d ("kube-
Nov 25 14:01:41 green-3 kubelet[24158]: W1125 14:01:41.624869   24158 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstab
Nov 25 14:01:42 green-3 kubelet[24158]: I1125 14:01:42.164906   24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a46854a
Nov 25 14:01:42 green-3 kubelet[24158]: E1125 14:01:42.165198   24158 pod_workers.go:191] Error syncing pod 4dabce76-ceb5-43fb-bef1-1992a3aa124d ("kube-
Nov 25 14:01:43 green-3 kubelet[24158]: W1125 14:01:43.127317   24158 manager.go:1168] Failed to process watch event {EventType:0 Name:/kubepods/burstab
Nov 25 14:01:55 green-3 kubelet[24158]: I1125 14:01:55.011131   24158 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: a46854a
Nov 25 14:02:17 green-3 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
Nov 25 14:02:17 green-3 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
Nov 25 14:02:17 green-3 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 25 14:02:17 green-3 kubelet[25982]: I1125 14:02:17.474559   25982 server.go:411] Version: v1.19.4

until I deleted the config.toml and restarted containerd and kubelet; only then did the worker join.
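
The worker-side recovery described above, spelled out as commands (the join command and its arguments are placeholders, not taken from this thread):

sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl restart kubelet
# then re-run the join command printed by kubeadm init on the control plane, e.g.:
# kubeadm join <control-plane-host>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>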

@fuweid
Member

fuweid commented Nov 26, 2020

@alexcpn please open a new issue to describe the case, thanks.

@felipecrp

felipecrp commented May 13, 2022

> In the config.toml file installed by the containerd.io package there is the line disabled_plugins = ["cri"] …

This comment saved my day 👍. The default Docker configuration disables CRI.

@LuksJobs

> (quoting the original report and the rm /etc/containerd/config.toml solution above)

Thanks! You helped me with this solution!

@BuildAndDestroy

Heads up, this just happened to me on a clean install of Kubernetes v1.24.0 on Ubuntu 20.04.4 LTS. The original fix helped me as well.

Exception:

[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: time="2022-05-16T23:41:59Z" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
To see the stack trace of this error execute with --v=5 or higher

Corrected:

user@k8s-master:~/$ sudo rm /etc/containerd/config.toml
user@k8s-master:~/$ sudo systemctl restart containerd
user@k8s-master:~/$ sudo kubeadm init
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
..
..

@RuoCJ

RuoCJ commented May 24, 2022

W0524 16:59:01.427276 22679 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/dockershim.sock". Please update your configuration!
[init] Using Kubernetes version: v1.24.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2022-05-24T16:59:03+08:00" level=fatal msg="connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

I also encountered this problem, but the steps above did not solve it for me. How should I solve it?

@hoony9x

hoony9x commented May 25, 2022

I have the same issue with kubeadm v1.24.0 on CentOS 7.9.

@RuoCJ

RuoCJ commented May 25, 2022

> I have the same issue with kubeadm v1.24.0 on CentOS 7.9.

I use ubuntu 20.04 and kubeadm v1.24.0

@mikebrow
Member

> [ERROR CRI]: container runtime is not running: … connect endpoint 'unix:///var/run/dockershim.sock' … How should I solve it?

There is no dockershim in Kubernetes v1.24; you'll need to install and configure containerd or CRI-O.
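
In practice that also means pointing the kubeadm CRI socket at containerd rather than the removed dockershim socket; a sketch, assuming containerd's default socket path:

# either in the kubeadm config file, under nodeRegistration:
#   criSocket: unix:///var/run/containerd/containerd.sock
# or on the command line:
sudo kubeadm init --cri-socket unix:///var/run/containerd/containerd.sock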

@mikebrow
Member

mikebrow commented May 25, 2022

> I have the same issue with kubeadm v1.24.0 on CentOS 7.9.

Unfortunately the config in the containerd.io package has, since forever, been a bad configuration for Kubernetes tools: it installs a version of the containerd config that is only good for Docker. This config needs to be replaced, at a minimum, with the default containerd config, and you can modify it from there if you like.

"containerd config default > /etc/containerd/config.toml" will overwrite docker's version of the config and replace it with containerd's version of the config.. which also works just fine for docker. Then restart containerd.

@RuoCJ

RuoCJ commented May 26, 2022

> There is no dockershim in Kubernetes v1.24; you'll need to install and configure containerd or CRI-O.

Thank you very much for your reply. I will try it.

@pratmeht

pratmeht commented May 27, 2022

Hi,

I had this same error: "failed to pull images... rpc error ... unknown service ...". I had raised a ticket and was redirected to this page.

Following the steps given at the top solved my problem for K8s 1.24 on RHEL 8.2.
However, the same steps did not help me on K8s 1.24 on CentOS 7.9; there I continue to receive the same error message! :(

[root@controlplane1 ~]$  kubeadm config images pull --v=5
I0527 14:59:20.357743     923 initconfiguration.go:117] detected and using CRI socket: unix:///var/run/containerd/containerd.sock
I0527 14:59:20.364564     923 interface.go:432] Looking for default routes with IPv4 addresses
I0527 14:59:20.364594     923 interface.go:437] Default route transits interface "enp5s0"
I0527 14:59:20.367026     923 interface.go:209] Interface enp5s0 is up
I0527 14:59:20.367113     923 interface.go:257] Interface "enp5s0" has 2 addresses :[10.10.32.16/21 fe80::96c6:91ff:fe3c:ad5c/64].
I0527 14:59:20.367340     923 interface.go:224] Checking addr  10.10.32.16/21.
I0527 14:59:20.367358     923 interface.go:231] IP found 10.10.32.16
I0527 14:59:20.367368     923 interface.go:263] Found valid IPv4 address 10.10.32.16 for interface "enp5s0".
I0527 14:59:20.367377     923 interface.go:443] Found active IP 10.10.32.16
I0527 14:59:20.367404     923 kubelet.go:214] the value of KubeletConfiguration.cgroupDriver is empty; setting it to "systemd"
I0527 14:59:20.370303     923 version.go:186] fetching Kubernetes version from URL: https://dl.k8s.io/release/stable-1.txt
exit status 1
output: time="2022-05-27T14:59:21+05:30" level=fatal msg="pulling image: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.ImageService"
, error
k8s.io/kubernetes/cmd/kubeadm/app/util/runtime.(*CRIRuntime).PullImage
        cmd/kubeadm/app/util/runtime/runtime.go:121
k8s.io/kubernetes/cmd/kubeadm/app/cmd.PullControlPlaneImages
        cmd/kubeadm/app/cmd/config.go:340
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdConfigImagesPull.func1
        cmd/kubeadm/app/cmd/config.go:312
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1571
failed to pull image "k8s.gcr.io/kube-apiserver:v1.24.1"
k8s.io/kubernetes/cmd/kubeadm/app/cmd.PullControlPlaneImages
        cmd/kubeadm/app/cmd/config.go:341
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdConfigImagesPull.func1
        cmd/kubeadm/app/cmd/config.go:312
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
        vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
        vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
        vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
        cmd/kubeadm/app/kubeadm.go:50
main.main
        cmd/kubeadm/kubeadm.go:25
runtime.main
        /usr/local/go/src/runtime/proc.go:250
runtime.goexit
        /usr/local/go/src/runtime/asm_amd64.s:1571

Can someone please help?

@sundayhk

sundayhk commented Nov 11, 2022

CentOS 7
Linux 5.19.1-1.el7.elrepo.x86_64

Case 1: a Kubernetes 1.2x binary installation reports this error. Solution:

  • containerd: v1.6.4
  • kubelet: 1.24.6
containerd config default > /etc/containerd/config.toml
sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml
sed -i 's/snapshotter = "overlayfs"/snapshotter = "native"/' /etc/containerd/config.toml

Case 2: kubespray installs Kubernetes 1.25.3 and reports this error. Solution:

  • containerd: v1.6.9
  • kubelet: 1.25.3
sed -i 's@# containerd_snapshotter: "native"@containerd_snapshotter: "native"@g' inventory/mycluster/group_vars/all/containerd.yml

Then rerun kubespray
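
"Rerun kubespray" here presumably means re-running the cluster playbook; a sketch (the inventory path and file name are assumptions):

ansible-playbook -i inventory/mycluster/hosts.yaml --become cluster.yml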

@MajidAhangari

Thanks, it solved my problem.

@Mossjiang

Mossjiang commented Dec 13, 2022

Thanks, it solved my problem too.

@datadoggers

In my case the Docker installation path was wrong, and that caused the same issue.
The issue was in kubelet, which exited with status 1.
OS: Ubuntu 20.04

1. Install Docker using the Docker guide rather than apt install docker.io (which does not install the latest version of Docker, and the containerd installation can have issues): https://docs.docker.com/engine/install/ubuntu/ -- it installs the latest version of Docker as well as containerd.
2. After installation there will be an /etc/containerd/config.toml by default; just delete it.
3. systemctl restart containerd
4. systemctl restart kubelet, then kubeadm init (sketched as commands below)
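
Steps 2–4 as a single command sequence (a sketch):

sudo rm /etc/containerd/config.toml
sudo systemctl restart containerd
sudo systemctl restart kubelet
sudo kubeadm init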

@maxxibon

maxxibon commented Jan 3, 2023

Got this error too

[init] Using Kubernetes version: v1.26.0
[preflight] Running pre-flight checks
error execution phase preflight: [preflight] Some fatal errors occurred:
	[ERROR CRI]: container runtime is not running: output: E0103 00:14:31.026921    5282 remote_runtime.go:948] "Status from runtime service failed" err="rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
time="2023-01-03T00:14:31Z" level=fatal msg="getting status of runtime: rpc error: code = Unimplemented desc = unknown service runtime.v1alpha2.RuntimeService"
, error: exit status 1

The following steps worked for me ->

sudo apt-get update && sudo apt-get install -y containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
# update SystemdCgroup to true in `/etc/containerd/config.toml`
sudo systemctl restart containerd

ThauMish added commits to ThauMish/Kube-Vagrant-Ansible that referenced this issue Jan 16, 2023
@nodesocket

nodesocket commented Jan 25, 2023

I just ran apt-get upgrade and now my control plane and all workers are failing to run containerd and thus also kubelet. The logs for sudo service containerd status show:

Jan 24 23:15:55 kube-master containerd[431]: time="2023-01-24T23:15:55.608380181-06:00" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 24 23:15:55 kube-master containerd[431]: time="2023-01-24T23:15:55.609315014-06:00" level=warning msg="failed to load plugin io.containerd.grpc.v1.cri" error="invalid plugin config

It seems the apt-get upgrade reverted my changes to /etc/containerd/config.toml and set SystemdCgroup back to false as well as systemd_cgroup. Why does this keep on reverting? Additionally, why are these defaulted to false?

Seems like perhaps there should be some enhanced logic in the default config generation that detects whether systemd is in use and sets those values to true?

EDIT

Update + solution: in my case, I had to set SystemdCgroup = true and systemd_cgroup = false. Leaving systemd_cgroup = true resulted in an error on containerd startup.
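
A hedged sketch of re-applying that after an upgrade rewrites the file (the sed pattern assumes the default config formatting):

sudo sed -i 's/SystemdCgroup = false/SystemdCgroup = true/' /etc/containerd/config.toml
grep -n systemd_cgroup /etc/containerd/config.toml    # should read systemd_cgroup = false
sudo systemctl restart containerd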

@hamedsol

hamedsol commented Feb 1, 2023

This does not work for me. Any update, please?

@PushAndRun

I am also running into the same issue, even after checking disabled_plugins and trying all possible combinations of SystemdCgroup = true/false and systemd_cgroup = true/false :(

@hamedsol

hamedsol commented Feb 3, 2023

Follow the latest official Kubernetes documentation regarding containerd, and install it via the binaries; that will solve it.

@PushAndRun

@hamedsol Thanks for the hint. I was finally able to init the cluster after installing the containerd binaries by following this guide: https://www.itzgeek.com/how-tos/linux/ubuntu-how-tos/install-containerd-on-ubuntu-22-04.html

@bingheGT

> (quoting the original report and the rm /etc/containerd/config.toml solution above)

Thank you, I solved the problem using your solution!

@Jparmo

Jparmo commented Feb 24, 2023

I have the same problem when I try to join the master node, and my containerd is active; nothing works.
kubelet version: 1.25.5-00
[preflight] Running pre-flight checks
[WARNING SystemVerification]: missing optional cgroups: blkio
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR CRI]: container runtime is not running: output: time="2023-02-24T20:01:08Z" level=fatal msg="validate service connection: CRI v1 runtime API is not implemented for endpoint "unix:///var/run/containerd/containerd.sock": rpc error: code = Unimplemented desc = unknown service runtime.v1.RuntimeService"
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

@hamedsol

Follow the official documents regarding containerd to enable SystemdCgroup in /etc/containerd/config.toml. If you don't have this file, create it with sudo containerd config default | sudo tee /etc/containerd/config.toml. It is better to install containerd from the binary rather than via a package manager.

@Enoxime

Enoxime commented Feb 26, 2023

Got the same issue and fixed it by installing the containerd.io package from the docker repository instead of the one from ubuntu's repository.
see: https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository

I have Ubuntu 22.04.2 on VMs and Raspberry Pis.

Also, it seems there is presently an issue retrieving the GPG key from https://packages.cloud.google.com/apt/doc/apt-key.gpg

@zsinba

zsinba commented Mar 1, 2023

Got the same issue; there is no answer for Debian 11.

@riyazhakki

I was having the same issue on Ubuntu 20.04.3. It got fixed after reinstalling docker, containerd, runc, etc., and then deleting the /etc/containerd/config.toml file.

@thallgren

I struggled with this too. The solution for me was to comment this line out in /etc/containerd/config.toml

disabled_plugins = ["cri"]
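
One way to make that edit non-interactively (a sketch, assuming the packaged config still has the line uncommented):

sudo sed -i '/disabled_plugins/s/^/#/' /etc/containerd/config.toml    # comment the line out
sudo systemctl restart containerd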

@nivaran

nivaran commented Apr 8, 2023

Refer to three pages of the Kubernetes docs:

  1. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
  2. https://kubernetes.io/docs/setup/production-environment/container-runtimes/
  3. https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

Steps:

  1. Complete the prerequisites (pages 1 & 2)
  2. Install the CRI from page 2 (for containerd & Docker: https://docs.docker.com/engine/install/ubuntu/)
  3. Make the changes in /etc/containerd/config.toml (page 2), then restart containerd
  4. Install kubectl, kubeadm, kubelet (page 1), then systemctl daemon-reload
  5. kubeadm init (page 3)
  6. Install a CNI (for Weave Net: https://www.weave.works/docs/net/latest/kubernetes/kube-addon/)
  7. systemctl daemon-reload && kubectl get nodes

Or you can use a single shell script for both master-node and worker-node configuration

script Link - https://github.com/nivaran/Install_k8s_using_shell_script

shaharby7 added a commit to shaharby7/kubernetes-under-the-hood that referenced this issue Apr 14, 2023
…Service.

When trying to install the kube master on a Debian image you get "unknown service runtime.v1alpha2.RuntimeService." because the 'containerd' installed from the default Debian repository is not up to date (for reference: containerd/containerd#4581). To fix it I added the containerd repository and installed from there; documentation on how to do this, as a reference, can be found here: https://docs.docker.com/engine/install/debian/#uninstall-old-versions.
@hub4ops

hub4ops commented May 12, 2023

Hi folks,
my bash history from the new control plane is below, it works.

   18  apt update
   19  apt upgrade -y
   20  reboot
   21  cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF

   22  sudo modprobe overlay
   23  sudo modprobe br_netfilter
   24  cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF

   25  sudo sysctl --system
   26  sudo apt-get update
   27  wget https://github.com/containerd/containerd/releases/download/v1.7.1/containerd-1.7.1-linux-amd64.tar.gz
   28  tar Cxzvf /usr/local containerd-1.7.1-linux-amd64.tar.gz
   29  systemctl daemon-reload
   30  systemctl enable --now containerd
   31  nano /usr/local/lib/systemd/system/containerd.service
   32  systemctl status sshd
   33  ls -lh /lib/systemd/system/
   34  nano /lib/systemd/system/containerd.service
   35  systemctl daemon-reload
   36  systemctl status containerd.service
   37  nano /lib/systemd/system/containerd.service
   38  ls /usr/local/bin/containerd
   39  systemctl enable --now containerd
   40  wget https://github.com/opencontainers/runc/releases/download/v1.1.7/runc.amd64
   41  install -m 755 runc.amd64 /usr/local/sbin/runc
   42  wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
   43  tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz
   44  systemctl status containerd.service
   45  systemctl restart containerd.service
   46  systemctl status containerd.service
   47  containerd -v
   48  sudo mkdir -p /etc/containerd
   49  sudo containerd config default | sudo tee /etc/containerd/config.tom
   50  sudo containerd config default | sudo tee /etc/containerd/config.toml
   51  ls /etc/containerd/
   52  rm /etc/containerd/config.tom
   53  sudo systemctl restart containerd
   54  sudo systemctl status containerd
   55  sudo swapoff -a
   56  nano /etc/fstab
   57  sudo apt-get update && sudo apt-get install -y apt-transport-https curl
   58  curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
   59  cat <<EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

   60  sudo apt-get update
   61  sudo apt-get install -y kubelet=1.26.0-00 kubeadm=1.26.0-00 kubectl=1.26.0-00
   62  sudo apt-mark hold kubelet kubeadm kubectl
   63  sudo kubeadm init --pod-network-cidr 192.168.0.0/16 --kubernetes-version 1.26.0

@arun-chib

It got fixed for me finally:

fix:

Update containerd to the latest version and fix the toml file with the change below:

disabled_plugins = ["cri"] -->> disabled_plugins = [""]

Done.
Thank me later ;)

@alanlupsha

February 2024 fix:

sudo containerd config default > /etc/containerd/config.toml

sudo sed -i 's/SystemdCgroup \= false/SystemdCgroup \= true/' /etc/containerd/config.toml
sudo sed -i 's/snapshotter = "overlayfs"/snapshotter = "native"/' /etc/containerd/config.toml
sudo sed -i 's/systemd_cgroup \= true/systemd_cgroup \= false/' /etc/containerd/config.toml   # leave the older systemd_cgroup option set to false

sudo systemctl restart containerd
sudo systemctl status containerd
q

kubectl get all
kubectl get nodes

@bjrara

bjrara commented Apr 23, 2024

> (quoting the February 2024 fix above)

This fix works for me! @alanlupsha Would you mind sharing how you came up with this fix?

@alanlupsha

> This fix works for me! @alanlupsha Would you mind sharing how you came up with this fix?

I'm glad that works! I used them googles on the interwebs and put all the findings into this.
