[nfs-client-provisioner]PVC pending state #754
Description
Hi,
Currently I'm trying to get the nfs-client-provisioner running on k8s v1.10.1, using the yaml you provide for testing with RBAC. After I deploy test-claim.yaml, the PVC stays Pending with the event below:
Normal ExternalProvisioning 2m (x143 over 37m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "fuserim.pri/ifs" or manually created by system administrator
I checked the nfs-client-provisioner pod, and it started successfully:
root@k8s-master1:/data/nfs_file# kubectl logs nfs-client-provisioner-5bff76dd6c-j2mpg
I0507 07:22:12.743160 1 controller.go:407] Starting provisioner controller 5c858a43-51c7-11e8-84df-0a580af40124!
I also referred to #174; the link " https://kubernetes.io/docs/admin/kube-controller-manager/ " is dead. I checked kube-controller-manager.yaml, and --controllers is set as below:
- --controllers=*,bootstrapsigner,tokencleaner
I don't know whether this is the issue. Can anyone help? Thanks in advance.
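For what it's worth, a common cause of a claim sitting in Pending with exactly this event is a mismatch between the StorageClass `provisioner` field and the `PROVISIONER_NAME` env var of the provisioner deployment: the external provisioner only acts on classes whose provisioner string matches its own name exactly. A minimal sketch, assuming the provisioner name from the event above (the class name `managed-nfs-storage` is illustrative; adjust to your setup):

```yaml
# StorageClass: the provisioner field must match the deployment's
# PROVISIONER_NAME exactly, or the claim will wait forever.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuserim.pri/ifs
---
# And in the nfs-client-provisioner Deployment's container spec:
# env:
#   - name: PROVISIONER_NAME
#     value: fuserim.pri/ifs
```

If the two strings differ (even by a typo), the persistentvolume-controller keeps emitting the "waiting for a volume to be created" event indefinitely.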
Activity
rtrive commented on May 9, 2018
Hi, me too, I'm stuck here: the PVC is in a Pending state. I tried both the quick start and the normal mode.
dragon9783 commented on May 22, 2018
Same issue for me. I also deployed the openebs provisioner, and it works well.
wongma7 commented on Jun 15, 2018
Is that the full log of the provisioner? If not, can you provide it?
/area nfs-client
jrfeenst commented on Jun 25, 2018
I have the same issue. The log (-v 4) shows nothing interesting:
I0625 13:00:02.214122 1 controller.go:492] Starting provisioner controller ac4f591d-7877-11e8-8e50-0a58c0a80305!
I0625 13:00:02.214962 1 reflector.go:202] Starting reflector *v1.PersistentVolumeClaim (15s) from github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496
I0625 13:00:02.215061 1 reflector.go:240] Listing and watching *v1.PersistentVolumeClaim from github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496
I0625 13:00:02.215163 1 reflector.go:202] Starting reflector *v1.PersistentVolume (15s) from github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497
I0625 13:00:02.215192 1 reflector.go:240] Listing and watching *v1.PersistentVolume from github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:497
I0625 13:00:02.216170 1 reflector.go:202] Starting reflector *v1.StorageClass (15s) from github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498
I0625 13:00:02.216273 1 reflector.go:240] Listing and watching *v1.StorageClass from github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498
I0625 13:00:17.221821 1 reflector.go:286] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:498: forcing resync
I0625 13:00:17.229086 1 reflector.go:286] github.com/kubernetes-incubator/external-storage/lib/controller/controller.go:496: forcing resync
mostlyAtNight commented on Jun 28, 2018
Hello - I have the same issue (PVC stuck in Pending state) - but I'm using the EFS provisioner.
Logs (for the pod) do not show much:
How can I get further information to help debugging this issue?
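As a general checklist (not specific to any one setup), these commands usually surface where the mismatch is; names in angle brackets are placeholders:

```shell
# Which provisioner is the claim waiting for? (see the Events section)
kubectl describe pvc <claim-name>

# Does the StorageClass's provisioner field match that name exactly?
kubectl get storageclass <class-name> -o yaml

# Is the provisioner pod actually running, and what does it log?
kubectl get pods -n <namespace>
kubectl logs <provisioner-pod-name> -n <namespace>

# Can the provisioner's service account read the objects it watches?
kubectl auth can-i list persistentvolumeclaims \
  --as=system:serviceaccount:<namespace>:<service-account-name>
```

If the provisioner name, the StorageClass, and the RBAC permissions all check out, higher log verbosity on the provisioner container itself is the next step.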
One other thing: I'm running on Amazon EKS (quite new) and had to install nfs-utils on the nodes, as it was not installed by default. Perhaps the issue is related and there is some other missing software there?
Kind regards, Pete
mostlyAtNight commented on Jun 28, 2018
Strangely, it's now working as expected. Things I changed:
The PVCs now bind properly and are no longer stuck in the Pending state. Perhaps it's something to do with the container not liking starting out under a different serviceAccount and then being subsequently patched?
...
wongma7 commented on Jun 28, 2018
If it were a serviceAccount misconfiguration, I would expect to see many permission-denied errors in the log. So I am still perplexed!
n0thing2333 commented on Jun 29, 2018
Stuck here too, @mostlyAtNight.
I'm also using EKS + EFS. Can you elaborate on how you solved this problem (the service account part)?
nagapavan commented on Jul 7, 2018
Stuck for me too. Describe output for the PVC:
Looks like the functionality from the documentation:
When you create a claim that asks for the class, a volume will be automatically created.
is no longer working.
matthieudolci commented on Jul 7, 2018
We had the same issue with EFS and EKS, using the AWS EKS AMI for the nodes. We solved it by using amazon-efs-utils instead of nfs-utils.
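For reference, on the Amazon Linux based EKS node AMI that swap would look something like the following (run on each worker node; assumes yum is the package manager):

```shell
# Install the EFS mount helper in place of the generic NFS client tools
sudo yum install -y amazon-efs-utils
```

amazon-efs-utils wraps the NFS mount with EFS-specific options (and supports TLS mounts), which is likely why it behaves better against EFS than plain nfs-utils.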
Youhana-Hana commented on Jul 10, 2018
I have the same PVC pending issue:
Name: efs
Namespace: default
StorageClass: aws-efs
Status: Pending
Volume:
Labels:
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":"aws-efs"},"name":"efs","namespa..
volume.beta.kubernetes.io/storage-class=aws-efs
volume.beta.kubernetes.io/storage-provisioner=example.com/aws-efs
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
Events:
Type Reason Age From Message
Normal ExternalProvisioning 1m (x61 over 16m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "example.com/aws-efs" or manually created by system administrator