
Usage of the fsGroup securityContext #8

Closed
spawnia opened this issue Oct 16, 2019 · 8 comments
Labels
enhancement New feature or request

Comments

spawnia (Contributor) commented Oct 16, 2019

Preface: Thanks for the plugin, it has been working great for us!

I am trying to mount a volume created through this driver into a container with a specific uid/gid. Here is an abbreviated example of the configuration I am using:

kind: PersistentVolume
metadata:
  name: test-volume
spec:
  flexVolume:
    driver: juliohm/cifs
    options:
      opts: domain=Foo
      server: fooserver123
      share: /test
  accessModes:
    - ReadWriteMany
---
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  volumeName: test-volume
  accessModes:
    - ReadWriteMany
---
kind: Deployment
...
    spec:
      containers:
        - image: php-apache
          name: web
          volumeMounts:
            - mountPath: /var/www/test
              name: test
      securityContext:
        fsGroup: 33
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: test-claim

The volume is successfully mounted, but not with the specified group 33 - it still belongs to root:root.

I have to change the volume definition like so to make it work:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-volume
spec:
  flexVolume:
    driver: juliohm/cifs
    options:
-     opts: domain=Foo
+     opts: domain=Foo,uid=33,gid=33
      server: fooserver123
      share: /test
  accessModes:
    - ReadWriteMany

Now, /var/www/test is owned by 33:33.
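
For context, assuming the driver ultimately shells out to a cifs mount (an assumption on my part, not confirmed from the driver source), the modified opts amount to something roughly like this, with an illustrative kubelet target path:

mount -t cifs //fooserver123/test /var/lib/kubelet/... -o domain=Foo,uid=33,gid=33

With cifs, uid= and gid= fix the apparent owner of every file on the mount for its whole lifetime, which is why the workaround works but cannot vary per container.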

I would prefer to use fsGroup, as that would enable me to share the volume across multiple containers and change the user/group for each. Can that be achieved?

This is more of a question; I am not sure if there is actually something you can do here. Happy about any advice.
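
To illustrate, the setup I am after would look roughly like this (a hypothetical sketch; the second container and its ids are made up):

apiVersion: apps/v1
kind: Deployment
...
    spec:
      securityContext:
        fsGroup: 33              # group ownership applied to the volume at mount time
      containers:
        - image: php-apache
          name: web
          volumeMounts:
            - mountPath: /var/www/test
              name: test
        - image: busybox         # hypothetical second consumer of the same volume
          name: worker
          securityContext:
            runAsUser: 1000
            runAsGroup: 33       # membership in gid 33 grants write access
          volumeMounts:
            - mountPath: /data/test
              name: test
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: test-claim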

juliohm1978 (Owner) commented Oct 30, 2019

Hi @spawnia. Apologies for the late response. It's been a crazy month, and I haven't had time to calm down and take a look at some personal projects.

This is a great suggestion, and hopefully I'll be able to get around to it and push a new version soon. Hang in there.

juliohm1978 (Owner) commented

Ok, so I'm taking a closer look at your idea... but I'm a little confused.

The volume driver does not create the PV itself. It only mounts the volume with the opts that were provided. In that regard, it seems to be working.

Unfortunately, I'm not able to control how the PV is provisioned. Whether it is created manually or through a provisioner, the volume driver has no way to interact at that point.

Does that sound right?

spawnia (Contributor, Author) commented Oct 31, 2019

It seems as though Kubernetes does not allow controlling the permissions within certain kinds of provisioned volumes.

I experimented a bit and found that emptyDir does apply the fsGroup, while your driver and hostPath do not.
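
For reference, a minimal pod to reproduce the comparison (a sketch, not the exact manifest I used): with emptyDir, /data comes up group-owned by gid 33; swapping in hostPath or this driver's PVC leaves it root:root.

apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-test
spec:
  securityContext:
    fsGroup: 33
  containers:
    - name: test
      image: busybox
      command: ["sh", "-c", "ls -ldn /data && sleep 3600"]
      volumeMounts:
        - mountPath: /data
          name: scratch
  volumes:
    - name: scratch
      emptyDir: {}               # swap for hostPath or the cifs PVC to compare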

Other people had similar issues with NFS: kubernetes/examples#260

spawnia (Contributor, Author) commented Oct 31, 2019

I dug a little deeper - looks like Kubernetes will pass through the fsGroup option: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-storage/flexvolume.md#default-json-options

juliohm1978 (Owner) commented

I did some testing with minikube using all values in the pod's securityContext.

securityContext:
  runAsUser: 33
  runAsGroup: 33
  fsGroup: 33

and voilà, the following comes up as an argument to the volume driver:

{
  "kubernetes.io/fsType": "",
  "kubernetes.io/mounterArgs.FsGroup": "33",
  "kubernetes.io/pod.name": "nginx-deployment-549ddfb5fc-rnqk8",
  "kubernetes.io/pod.namespace": "default",
  "kubernetes.io/pod.uid": "bb6b2e46-c80d-4c86-920c-8e08736fa211",
  "kubernetes.io/pvOrVolumeName": "test-volume",
  "kubernetes.io/readwrite": "rw",
  "kubernetes.io/serviceAccount.name": "default",
  "opts": "domain=Foo",
  "server": "fooserver123",
  "share": "/test"
}

However, kubernetes.io/mounterArgs.FsGroup is the only argument passed down from the securityContext options. Should I use it for both uid and gid?
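
In shell terms, the mapping could look something like this (a sketch of the idea, not the driver's actual code; assumes jq is available on the node):

#!/bin/sh
# Sketch of a FlexVolume mount handler: $1 = mount dir, $2 = JSON options
# from kubelet, with the key names exactly as in the dump above.
OPTS=$(echo "$2" | jq -r '.opts // empty')
FSGROUP=$(echo "$2" | jq -r '."kubernetes.io/mounterArgs.FsGroup" // empty')
if [ -n "$FSGROUP" ]; then
  # FsGroup only carries a group id; whether to reuse it as uid= is the open question.
  OPTS="${OPTS:+$OPTS,}gid=$FSGROUP"
fi
SERVER=$(echo "$2" | jq -r '.server')
SHARE=$(echo "$2" | jq -r '.share')
mount -t cifs "//$SERVER$SHARE" "$1" -o "$OPTS"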

Let me know what you think.

juliohm1978 (Owner) commented

I also prepared a 0.5-beta version you can test. Let me know if that works.

juliohm/kubernetes-cifs-volumedriver-installer:0.5-beta
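
If it helps, the beta can also be smoke-tested directly on a node: FlexVolume drivers are plain executables invoked as <driver> <operation> <args>, so something like the following should exercise the mount path (the plugin directory and binary name below are assumptions based on the juliohm/cifs driver name):

# Hypothetical manual invocation on a node; adjust paths to your kubelet setup.
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/juliohm~cifs/cifs mount /tmp/mnt \
  '{"kubernetes.io/mounterArgs.FsGroup":"33","opts":"domain=Foo","server":"fooserver123","share":"/test"}'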

spawnia (Contributor, Author) commented Nov 4, 2019

I also prepared a 0.5-beta version you can test.

That's really awesome, thanks for digging in.

I would like to review the code as well; could you put it up as a PR?

juliohm1978 (Owner) commented

There ya go

#9

juliohm1978 added the bug and enhancement labels and removed the bug label on Nov 5, 2019