[nfs-client] provisioner is not allowed to use nfs volumes #1145
Description
When deploying the nfs-client container, I get the following event:
Error creating: pods "nfs-client-provisioner-5cf596db9d-" is forbidden: unable to validate against any security context constraint: [spec.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used]
So the nfs-client provisioner does not start.
The problem appears both when applying rbac.yaml explicitly and when installing with Helm.
I tried to work around this by creating a PV and referring to it in the configuration - that lets the provisioner start. But when I then try to create containers that use NFS volumes, they cannot be mounted, even though nfs-client's log says the PV was provisioned and the PV's status says it is Bound. The container startup error is MountVolume.SetUp failed for volume “nfs” : mount failed: exit status 32
I can mount the NFS share from the node's console in Linux without problems.
Running Minishift v1.29.0+72fa7b2 (OKD/OpenShift 3.11, CentOS) with nfs-utils installed on the node.
Configuration files from master branch (2019-03-06).
Activity
wongma7 commented on Mar 29, 2019
The pod needs to be granted permission to use an SCC that permits it to use nfs volumes, for example hostmount-anyuid. Alternatively, you can create a PV with the same NFS information, create a PVC to bind to that PV, and use the PVC in your nfs-client-provisioner pod, because the restricted SCC will allow you to use pvc volumes.
See here for more info: https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html
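For the PV/PVC alternative, a minimal sketch (the server and path match the Helm command further down in this thread; the names and sizes are illustrative):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-client-root
spec:
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-65-225
    path: /kubernetes-test
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-client-root
spec:
  storageClassName: ""   # empty string so the claim binds to the static PV above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi

The provisioner pod then mounts the PVC instead of declaring an nfs volume inline, which the restricted SCC permits.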
scphantm commented on Mar 29, 2019
I tried that.
helm install --name nfs-client-provisioner --kubeconfig sa.kubeconfig stable/nfs-client-provisioner --set nfs.server=nfs-65-225 --set nfs.path=/kubernetes-test
oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default/nfs-client-provisioner
It still can't start the pod.
Error creating: pods "nfs-client-provisioner-75c6bd6c85-" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{65534}: 65534 is not an allowed group spec.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000000000, 1000009999]]
I've also tried adding the privileged and restricted SCCs to the service account.
I suspect my problem is related to the allowed UID range in the error above ([1000000000, 1000009999]), but I don't know where that comes from and it won't let me change it.
wongma7 commented on Mar 29, 2019
Is it possible to have Helm render the ReplicaSet YAML, change the securityContext, and create the ReplicaSet afterwards? I am not too familiar with Helm; once you get it working, I would also open an issue with the charts repo to make the securityContext optional.
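One way to do that render-and-edit step (a sketch, assuming the Helm 2 syntax current at the time; server and path are copied from the command above):

helm fetch stable/nfs-client-provisioner --untar
helm template ./nfs-client-provisioner --name nfs-client-provisioner \
  --set nfs.server=nfs-65-225 --set nfs.path=/kubernetes-test > rendered.yaml
# edit the securityContext in rendered.yaml, then:
oc apply -f rendered.yaml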
Any idea where the range [1000000000, 1000009999] comes from? It might be an annotation on the project; what does the project YAML look like? Can you paste the privileged SCC YAML here?
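For what it's worth, OpenShift projects carry an annotation that sets the UID range the restricted SCC enforces, which is likely where that range comes from. It can be inspected like this (the project name default is an assumption):

oc get namespace default -o yaml | grep sa.scc
# expect something like: openshift.io/sa.scc.uid-range: 1000000000/10000
# the format is <start>/<size>, i.e. UIDs 1000000000 through 1000009999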
scphantm commented on Mar 29, 2019
Well, I tried deleting everything and creating it by hand instead of using Helm. The error message changed...
Error creating: pods "nfs-client-provisioner-67c58f4645-" is forbidden: unable to validate against any security context constraint: [spec.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used]
The problem is that this happens after I already gave the service account privileged and hostmount-anyuid.
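If the grant had landed, the service account would show up in the SCC's users list; a quick way to check (a sketch - adjust the namespace to wherever the provisioner is actually deployed):

oc get scc hostmount-anyuid -o yaml
# under users:, look for system:serviceaccount:<namespace>/nfs-client-provisioner
# a grant made for the default namespace does not apply to pods running elsewhere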
fejta-bot commented on Jun 27, 2019
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
supriya-premkumar commented on Jul 17, 2019
I ran into this issue as well and was able to resolve it by adding the nfs volume in the PodSecurityPolicy spec.
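For reference, a minimal sketch of such a policy (the name and the extra volume types are illustrative; PodSecurityPolicy was current at the time but has since been removed from Kubernetes):

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: nfs-client-provisioner
spec:
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - nfs                     # the key addition: allow inline nfs volumes
    - persistentVolumeClaim
    - secret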
fejta-bot commented on Aug 17, 2019
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
fejta-bot commented on Sep 16, 2019
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
k8s-ci-robot commented on Sep 16, 2019
@fejta-bot: Closing this issue.
In response to this:
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
jdorel commented on Nov 10, 2019
This can be resolved with:
oc adm policy add-scc-to-user hostmount-anyuid -z nfs-client-provisioner
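Note that -z refers to a service account in the current project; if the provisioner runs in another namespace, pass it explicitly (a sketch - the namespace name here is an assumption):

oc adm policy add-scc-to-user hostmount-anyuid -z nfs-client-provisioner -n nfs-provisioner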