This repository was archived by the owner on Oct 21, 2020. It is now read-only.

[nfs-client] provisioner is not allowed to use nfs volumes #1145

Closed
@PetrGlad

Description


Upon deploying the nfs-client container, I get the following event:

Error creating: pods "nfs-client-provisioner-5cf596db9d-" is forbidden: unable to validate against any security context constraint: [spec.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used]

So the nfs-client provisioner does not start.
The problem appears both when applying rbac.yaml explicitly and when installing with Helm.

I tried to work around this by creating a PV and referring to it in the configuration; that lets the provisioner start. But when I then try to create containers that use NFS volumes, the volumes cannot be mounted, even though the nfs-client log says the PV was provisioned and the PV's status says it is Bound. The container startup error is MountVolume.SetUp failed for volume "nfs": mount failed: exit status 32
I can mount the NFS export from the node's console in Linux without problems.
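(For reference, the manual check from the node's console was along these lines; the server name below is a placeholder, not the real one. mount.nfs exit status 32 indicates a generic mount failure.)

sudo mount -t nfs nfs.example.com:/kubernetes-test /mnt
sudo umount /mnt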

Running Minishift v1.29.0+72fa7b2 (OKD/OpenShift 3.11, CentOS) with nfs-utils installed on the node.
Configuration files are from the master branch (2019-03-06).

Activity

wongma7 (Contributor) commented on Mar 29, 2019

The pod needs to be granted permission to use an SCC that allows nfs volumes, for example hostmount-anyuid. Alternatively, you can create a PV with the same NFS information, create a PVC to bind to that PV, and use the PVC in your nfs-client-provisioner pod, because the restricted SCC allows pvc volumes (a sketch of this follows below).

See here for more info: https://docs.openshift.com/container-platform/3.11/admin_guide/manage_scc.html
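A rough sketch of the second approach (the server, path, and object names here are placeholders; the empty storageClassName plus volumeName pins the claim to the pre-created PV):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-client-root
spec:
  capacity:
    storage: 10Mi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs.example.com   # placeholder
    path: /kubernetes-test    # placeholder
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-client-root
spec:
  storageClassName: ""
  volumeName: nfs-client-root
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Mi

The provisioner deployment then mounts persistentVolumeClaim: {claimName: nfs-client-root} instead of an inline nfs volume.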

scphantm commented on Mar 29, 2019

I tried that.

helm install --name nfs-client-provisioner --kubeconfig sa.kubeconfig stable/nfs-client-provisioner --set nfs.server=nfs-65-225 --set nfs.path=/kubernetes-test

oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default/nfs-client-provisioner

It still can't start the pod.

Error creating: pods "nfs-client-provisioner-75c6bd6c85-" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{65534}: 65534 is not an allowed group spec.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 65534: must be in the ranges: [1000000000, 1000009999]]
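Note that the service account in the add-scc-to-user command above is written with a slash (system:serviceaccount:default/nfs-client-provisioner); the usual form is colon-separated, system:serviceaccount:<namespace>:<name>. To check whether the grant actually took effect (on OpenShift 3.x, add-scc-to-user edits the SCC's users list in place), something like this helps:

oc get scc hostmount-anyuid -o yaml
# the users list should include system:serviceaccount:default:nfs-client-provisioner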

I've also tried adding privileged and restricted to the service account.

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  annotations:
    deployment.kubernetes.io/desired-replicas: '1'
    deployment.kubernetes.io/max-replicas: '1'
    deployment.kubernetes.io/revision: '1'
  creationTimestamp: '2019-03-29T20:12:20Z'
  generation: 1
  labels:
    app: nfs-client-provisioner
    pod-template-hash: '3172682741'
    release: nfs-client-provisioner
  name: nfs-client-provisioner-75c6bd6c85
  namespace: default
  ownerReferences:
    - apiVersion: apps/v1
      blockOwnerDeletion: true
      controller: true
      kind: Deployment
      name: nfs-client-provisioner
      uid: f5723a92-525e-11e9-b02f-005056006301
  resourceVersion: '31959'
  selfLink: >-
    /apis/apps/v1/namespaces/default/replicasets/nfs-client-provisioner-75c6bd6c85
  uid: f573a5fb-525e-11e9-b02f-005056006301
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
      pod-template-hash: '3172682741'
      release: nfs-client-provisioner
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nfs-client-provisioner
        pod-template-hash: '3172682741'
        release: nfs-client-provisioner
    spec:
      containers:
        - env:
            - name: PROVISIONER_NAME
              value: cluster.local/nfs-client-provisioner
            - name: NFS_SERVER
              value: pgh-realm-65-225
            - name: NFS_PATH
              value: /kubernetes-test
          image: 'quay.io/external_storage/nfs-client-provisioner:v3.1.0-k8s1.11'
          imagePullPolicy: IfNotPresent
          name: nfs-client-provisioner
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /persistentvolumes
              name: nfs-client-root
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext:
        fsGroup: 65534
        runAsUser: 65534
      serviceAccount: nfs-client-provisioner
      serviceAccountName: nfs-client-provisioner
      terminationGracePeriodSeconds: 30
      volumes:
        - name: nfs-client-root
          nfs:
            path: /kubernetes-test
            server: pgh-realm-65-225
status:
  conditions:
    - lastTransitionTime: '2019-03-29T20:12:21Z'
      message: >-
        pods "nfs-client-provisioner-75c6bd6c85-" is forbidden: unable to
        validate against any security context constraint: [fsGroup: Invalid
        value: []int64{65534}: 65534 is not an allowed group spec.volumes[0]:
        Invalid value: "nfs": nfs volumes are not allowed to be used
        spec.containers[0].securityContext.securityContext.runAsUser: Invalid
        value: 65534: must be in the ranges: [1000000000, 1000009999]]
      reason: FailedCreate
      status: 'True'
      type: ReplicaFailure
  observedGeneration: 1
  replicas: 0

I suspect my problem is related to

      securityContext:
        fsGroup: 65534
        runAsUser: 65534

but I don't know where that comes from, and it won't let me change it.

wongma7 (Contributor) commented on Mar 29, 2019

Is it possible to have Helm render the ReplicaSet YAML, change the securityContext, and then create the ReplicaSet? I am not too familiar with Helm. Once you get it working, I would also open an issue with the charts repo to make the securityContext optional.
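A sketch of that flow with Helm 2 syntax (which the commands in this thread use); the fetch and edit steps are illustrative:

helm fetch --untar stable/nfs-client-provisioner
helm template nfs-client-provisioner \
  --name nfs-client-provisioner \
  --set nfs.server=nfs-65-225 \
  --set nfs.path=/kubernetes-test > rendered.yaml
# edit the securityContext block in rendered.yaml, then:
oc apply -f rendered.yaml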

Any idea where the range [1000000000, 1000009999] comes from? It might be an annotation on the project; what does the project YAML look like? Can you also paste the privileged SCC YAML here?
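For what it's worth, on OpenShift that range usually comes from annotations on the namespace itself; they can be inspected like this:

oc get namespace default -o yaml | grep scc
# e.g. openshift.io/sa.scc.uid-range: 1000000000/10000
# i.e. 10000 UIDs starting at 1000000000, which is exactly [1000000000, 1000009999]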

scphantm commented on Mar 29, 2019

Well, I tried deleting everything and creating it by hand instead of using Helm. The error message changed:

Error creating: pods "nfs-client-provisioner-67c58f4645-" is forbidden: unable to validate against any security context constraint: [spec.volumes[0]: Invalid value: "nfs": nfs volumes are not allowed to be used]

The problem is that this happens even after I gave the service account privileged and hostmount-anyuid.

fejta-bot commented on Jun 27, 2019

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Added the lifecycle/stale label (denotes an issue or PR that has remained open with no activity and has become stale) on Jun 27, 2019.

supriya-premkumar commented on Jul 17, 2019

I ran into this issue as well and was able to resolve it by adding the nfs volume type to the PodSecurityPolicy spec:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: example
spec:
  volumes:
    - nfs
  # the rule fields below are required for the PSP object to validate
  seLinux: {rule: RunAsAny}
  runAsUser: {rule: RunAsAny}
  supplementalGroups: {rule: RunAsAny}
  fsGroup: {rule: RunAsAny}
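On plain Kubernetes (where PodSecurityPolicy applies, as opposed to OpenShift's SCC), the policy only takes effect if the pod's service account is authorized to use it. A minimal sketch, assuming the chart's default service account name and the default namespace:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: psp-example-user
rules:
  - apiGroups: ['policy']
    resources: ['podsecuritypolicies']
    resourceNames: ['example']
    verbs: ['use']
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: psp-example-user
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: psp-example-user
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
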
fejta-bot commented on Aug 17, 2019

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Added the lifecycle/rotten label (denotes an issue or PR that has aged beyond stale and will be auto-closed) and removed the lifecycle/stale label on Aug 17, 2019.

fejta-bot commented on Sep 16, 2019

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot (Contributor) commented on Sep 16, 2019

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

jdorel commented on Nov 10, 2019

Can be resolved with oc adm policy add-scc-to-user hostmount-anyuid -z nfs-client-provisioner
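(-z is shorthand for a service account in the current project. The fully qualified equivalent, assuming the default namespace, would be:)

oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:default:nfs-client-provisioner
# or, explicitly targeting a namespace:
oc adm policy add-scc-to-user hostmount-anyuid -z nfs-client-provisioner -n default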
