fsGroup securityContext does not apply to nfs mount #260

Closed
kmarokas opened this issue Aug 3, 2018 · 65 comments

@kmarokas commented Aug 3, 2018

The example at https://github.com/kubernetes/examples/tree/master/staging/volumes/nfs works fine if the container using the NFS mount runs as the root user. If I use securityContext to run as a non-root user, I have no write access to the mounted volume.

How to reproduce:
Here is nfs-busybox-rc.yaml with the securityContext added:

# This mounts the nfs volume claim into /mnt and continuously
# overwrites /mnt/index.html with the time and hostname of the pod.

apiVersion: v1
kind: ReplicationController
metadata:
  name: nfs-busybox
spec:
  replicas: 2
  selector:
    name: nfs-busybox
  template:
    metadata:
      labels:
        name: nfs-busybox
    spec:
      securityContext:
        runAsUser: 10000
        fsGroup: 10000
      containers:
      - image: busybox
        command:
          - sh
          - -c
          - 'while true; do date > /mnt/index.html; hostname >> /mnt/index.html; sleep $(($RANDOM % 5 + 5)); done'
        imagePullPolicy: IfNotPresent
        name: busybox
        securityContext:
          runAsUser: 10000
        volumeMounts:
          # name must match the volume name below
          - name: nfs
            mountPath: "/mnt"
      volumes:
      - name: nfs
        persistentVolumeClaim:
          claimName: nfs

Actual result:

kubectl exec nfs-busybox-2w9bp -t -- id
uid=10000 gid=0(root) groups=10000

kubectl exec nfs-busybox-2w9bp -t -- ls -l /
total 48
<..>
drwxr-xr-x    3 root     root          4096 Aug  2 12:27 mnt

Expected result:
the group ownership of the /mnt folder should be group 10000 (the fsGroup)

The mount options in the NFS PV are not allowed, except for rw (see the note following this comment):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.23.137.115
    path: "/"
  mountOptions:
#    - rw // is allowed
#    - root_squash // error during pod scheduling: mount.nfs: an incorrect mount option was specified
#    - all_squash // error during pod scheduling: mount.nfs: an incorrect mount option was specified
#    - anonuid=10000 // error during pod scheduling: mount.nfs: an incorrect mount option was specified
#    - anongid=10000 // error during pod scheduling: mount.nfs: an incorrect mount option was specified
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.3-rancher1", GitCommit:"f6320ca7027d8244abb6216fbdb73a2b3eb2f4f9", GitTreeState:"clean", BuildDate:"2018-05-29T22:28:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
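
Note: the squash-related options rejected above (root_squash, all_squash, anonuid, anongid) are NFS export options, not client mount options, which is why mount.nfs refuses them in the PV's mountOptions. If you control the NFS server, squashing to a fixed UID/GID is done server-side, e.g. in /etc/exports on a Linux NFS server. A minimal sketch, with a hypothetical export path and client range:

/export  10.0.0.0/8(rw,all_squash,anonuid=10000,anongid=10000)

Re-export with exportfs -ra after editing. With all_squash in place, every client access is mapped to 10000:10000, so the non-root pod above can write even though Kubernetes never applies fsGroup to the mount.
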
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 1, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 1, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jefflaplante

Why did this get closed with no resolution? I have this same issue. If there is a better solution than an init container, please someone fill me in.

@geerlingguy

Yeah... I'm having the same issue with NFS too. securityContext.fsGroup seems to have no effect on NFS volume mounts, so you kinda have to use the initContainer approach :(

@mlensment commented Apr 28, 2019

I'm having the same problem.

@komaldhiman112

Same issue: able to write but not able to read from the NFS-mounted volume. Kubernetes reports the mount as successful, but no luck.

@varun-da

/reopen

@k8s-ci-robot
Contributor

@varun-da: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@kmarokas
Author

/reopen

@k8s-ci-robot
Contributor

@kmarokas: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Jul 23, 2019
@varun-da

thanks @kmarokas!

@varun-da

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 23, 2019
@leopoldodonnell

Would love for this to be addressed! In the meantime, here's how we're dealing with it...

In this example there are two pods mounting an AWS EFS volume via NFS. To enable a non-root user, we make the mount point accessible via an initContainer.

---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-efs-1
  labels:
    name: alpine
spec:
  volumes:
  - name: nfs-test
    nfs:
      server: fs-xxxxxxxx.efs.us-east-1.amazonaws.com
      path: /
  securityContext:
    fsGroup: 100
    runAsGroup: 100
    runAsUser: 405
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
      - name: nfs-test
        mountPath: /nfs
      command:
      - sh
      - -c
      - (chmod 0775 /nfs; chgrp 100 /nfs)
  containers:
  - name: alpine
    image: alpine
    volumeMounts:
      - name: nfs-test
        mountPath: /nfs
    command:
      - tail
      - -f
      - /dev/null
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-efs-2
  labels:
    name: alpine
spec:
  volumes:
  - name: nfs-test
    nfs:
      server: fs-xxxxxxxx.efs.us-east-1.amazonaws.com
      path: /
  securityContext:
    supplementalGroups:
      - 100
    fsGroup: 100
    # runAsGroup: 100
    runAsUser: 405
  initContainers:
    - name: nfs-fixer
      image: alpine
      securityContext:
        runAsUser: 0
      volumeMounts:
      - name: nfs-test
        mountPath: /nfs
      command:
      - sh
      - -c
      - (chmod 0775 /nfs; chgrp 100 /nfs)
  containers:
  - name: alpine
    image: alpine
    volumeMounts:
      - name: nfs-test
        mountPath: /nfs
    command:
      - tail
      - -f
      - /dev/null
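
A quick way to check that the init container took effect on the pods above; the expected values are assumptions based on the securityContext in the manifests (uid 405 and gid 100 map to guest/users in the alpine image):

kubectl exec alpine-efs-1 -- id                      # expect uid=405(guest) gid=100(users)
kubectl exec alpine-efs-1 -- ls -ld /nfs             # expect mode drwxrwxr-x and group 100
kubectl exec alpine-efs-1 -- touch /nfs/write-test   # should succeed as the non-root user

Note that the chgrp/chmod in the init container is not recursive, so only the top level of the export is fixed up; existing subdirectories keep their original ownership unless you add -R (which can be slow on large shares).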

@spawnia commented Oct 31, 2019

The same seems to be true for cifs mounts created through a custom volume driver: juliohm1978/kubernetes-cifs-volumedriver#8

Edit: It looks like Kubernetes does very little magic when mounting the volumes; the individual volume drivers have to respect the fsGroup configuration set in the pod, and the NFS provider doesn't do that as of now.

Is https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client the place where this could be fixed?
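
Some context on the question above: kubelet only performs the recursive fsGroup ownership change for volume plugins that support ownership management, and the in-tree nfs plugin does not, which is why this issue exists. For CSI drivers the behaviour is opt-in via the fsGroupPolicy field on the driver's CSIDriver object (stable since roughly Kubernetes 1.23). Below is a minimal sketch of what an NFS CSI driver registers so that fsGroup is honored; the driver name follows the kubernetes-csi/csi-driver-nfs project, so treat that project's own manifests as authoritative rather than this sketch:

apiVersion: storage.k8s.io/v1
kind: CSIDriver
metadata:
  name: nfs.csi.k8s.io            # driver name used by kubernetes-csi/csi-driver-nfs
spec:
  attachRequired: false           # NFS volumes need no attach/detach step
  volumeLifecycleModes:
    - Persistent
  fsGroupPolicy: File             # tells kubelet to chown/chmod the volume to the pod's fsGroup

With a driver registered like this, the pod-level fsGroup from the original report is applied at mount time and no init container is needed.
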

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 29, 2020
@varun-da

/remove-lifecycle stale

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 11, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 11, 2023
@vmule commented Jul 31, 2023

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Jul 31, 2023
@yingding

I ran into this exact issue with a static PV using the default NFS mount.
There are no NFS mount options that change the permissions, and the securityContext.fsGroup setting is ignored without any output.
Unfortunately, the initContainer approach is not an option for me.
Can anything be done about this issue?

@radirobi97

@yingding have you found any workaround?

@yingding

@radirobi97 If you can use the initContainers approach in #260 (comment), it will work.
I still had this issue with a pod from an ML system which I do not have control over. Ultimately, I switched to an object store and gave up on the default NFS mount of a static PV.
But I think the dynamic NFS CSI driver should not have this static-PV issue.
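
For anyone landing here with the same static-PV setup: the PV can also point at the NFS CSI driver instead of the in-tree nfs: block, so that fsGroup is honored when the driver is installed with fsGroupPolicy: File (see the CSIDriver sketch earlier in the thread). A rough sketch, assuming csi-driver-nfs is installed and reusing the server and path from the original example; verify the volumeAttributes field names against the version of the driver you install:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-csi
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: nfs-csi-example   # any string that is unique among PVs
    volumeAttributes:
      server: 10.23.137.115
      share: /
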

Berodin pushed a commit to Berodin/palworld-helmchart that referenced this issue Jan 27, 2024
Found kubernetes/examples#260 (comment), from which it seems it has been known at least since 2018 that fsGroup doesn't affect PVCs on NFS or CIFS.
Carthaca added a commit to sapcc/helm-charts that referenced this issue Mar 1, 2024
This reverts commit 5bf9d4e.

needs init container first due to kubernetes/examples/issues/260
Carthaca added a commit to sapcc/helm-charts that referenced this issue Mar 8, 2024
`fsGroupChangePolicy: "OnRootMismatch"` does not work for NFS mounts
(also see kubernetes/examples/issues/260)
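
As the commit message above notes, fsGroupChangePolicy does not help here: it only controls when kubelet performs the fsGroup ownership change (Always vs. OnRootMismatch), so it has no effect on volume types where kubelet never applies fsGroup at all, which is the case for the in-tree NFS plugin. For reference, the field sits next to fsGroup in the pod securityContext:

securityContext:
  fsGroup: 10000
  fsGroupChangePolicy: "OnRootMismatch"   # skips the recursive chown when ownership already matches; irrelevant if fsGroup is never applied
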
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 12, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Apr 11, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) May 11, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@rmunn commented May 22, 2024

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 22, 2024
@rmunn commented May 22, 2024

/reopen

@k8s-ci-robot
Contributor

@rmunn: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
