fsGroup is not applied correctly to already existing content in PVCs #341

davinkevin opened this issue Jun 1, 2023 · 8 comments
davinkevin commented Jun 1, 2023

Hello 👋,

I'm using the local-path-provisioner as part of k3d to test and validate our development, and I discovered something strange about the local-path-provisioner's conformance to the fsGroup parameter.

First, all the code used in this issue is available here

With an EKS cluster

First, I deploy an app that simply writes some files to a PVC. The important settings are:

  • fsGroup: 4000
  • runAsGroup: 4000
  • runAsUser: 1000
$ kubectl apply -k eks/01-write/
namespace/kdavin-test-fsgroup created
configmap/fsgroup-test-9446dm7hth created
persistentvolumeclaim/fsgroup-test created
deployment.apps/fsgroup-test created
$ kubectl logs fsgroup-test-dd796cfdd-87fbm -f
total 20
drwxrwsr-x    3 root     4000          4096 Jun  1 14:28 .
drwxr-xr-x    1 root     root            43 Jun  1 14:28 ..
drwxrws---    2 root     4000         16384 Jun  1 14:28 lost+found
Hello from fsgroup-test
total 24
drwxrwsr-x    3 root     4000          4096 Jun  1 14:28 .
drwxr-xr-x    1 root     root            43 Jun  1 14:28 ..
-rw-r--r--    1 1000     4000             0 Jun  1 14:28 foo
drwxrws---    2 root     4000         16384 Jun  1 14:28 lost+found
-r-xr-xr-x    1 1000     4000            18 Jun  1 14:28 test.txt
-rw-r--r--    1 1000     4000             0 Jun  1 14:28 /test/a/b/c/subfile.txt

So files are owned by UID 1000, with group 4000.
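For context, the relevant part of the deployment looks roughly like this (a sketch; the exact manifests are in the repository linked above, and the container details here are illustrative):

```yaml
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 4000
    fsGroup: 4000
  containers:
    - name: fsgroup-test
      image: busybox
      volumeMounts:
        - name: data
          mountPath: /test
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: fsgroup-test
```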

Then, I redeploy the app with different securityContext settings:

  • fsGroup: 6000
  • runAsGroup: 6000
  • runAsUser: 1000
$ kubectl apply -k eks/02-read/
namespace/kdavin-test-fsgroup unchanged
configmap/fsgroup-test-9446dm7hth created
persistentvolumeclaim/fsgroup-test unchanged
deployment.apps/fsgroup-test configured
$ kubectl logs fsgroup-test-77bcb759db-t7tmd
total 28
drwxrwsr-x    4 root     6000          4096 Jun  1 14:28 .
drwxr-xr-x    1 root     root            43 Jun  1 14:30 ..
drwxrwsr-x    3 1000     6000          4096 Jun  1 14:28 a
-rw-rw-r--    1 1000     6000             0 Jun  1 14:28 foo
drwxrws---    2 root     6000         16384 Jun  1 14:28 lost+found
-rwxrwxr-x    1 1000     6000            18 Jun  1 14:28 test.txt
-rw-rw-r--    1 1000     6000             0 Jun  1 14:28 /test/a/b/c/subfile.txt

We can see that 1000 is still the owner, but 6000 is now the group owner instead of 4000 as before, following the Kubernetes fsGroup spec.
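Note the setgid bit in the EKS listings (the `s` in drwxrws---): this is part of how fsGroup works, since new files created under a setgid directory inherit the directory's group. A minimal illustration of that mechanism, outside Kubernetes:

```shell
# Files created under a setgid (g+s) directory inherit the directory's
# group -- the mechanism behind the drwxrws--- entries above.
d=$(mktemp -d)
chmod 2770 "$d"            # drwxrws---: group rwx plus the setgid bit
touch "$d/child"
ls -ld "$d" "$d/child"     # both lines show the same group owner
rm -rf "$d"
```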

With k3d (presumably backed by k3s)

I now repeat the same steps with k3d, using the same settings. First, with the initial settings:

$ kubectl apply -k k3s/01-write/
namespace/kdavin-test-fsgroup created
configmap/fsgroup-test-9446dm7hth created
persistentvolumeclaim/fsgroup-test created
deployment.apps/fsgroup-test created
$ kubectl logs fsgroup-test-79b59c9988-9jmrl
total 8
drwxrwxrwx    2 root     root          4096 Jun  1 14:33 .
drwxr-xr-x    1 root     root          4096 Jun  1 14:34 ..
Hello from fsgroup-test
total 12
drwxrwxrwx    2 root     root          4096 Jun  1 14:34 .
drwxr-xr-x    1 root     root          4096 Jun  1 14:34 ..
-rw-r--r--    1 1000     4000             0 Jun  1 14:34 foo
-r-xr-xr-x    1 1000     4000            18 Jun  1 14:34 test.txt
-rw-r--r--    1 1000     4000             0 Jun  1 14:34 /test/a/b/c/subfile.txt

Everything is OK, with the same values as on EKS. But then I apply the same change, with the following settings:

  • fsGroup: 6000
  • runAsGroup: 6000
  • runAsUser: 1000
$ kubectl apply -k k3s/02-read/
namespace/kdavin-test-fsgroup unchanged
configmap/fsgroup-test-789h6hh8dd created
persistentvolumeclaim/fsgroup-test unchanged
deployment.apps/fsgroup-test configured
$ kubectl logs fsgroup-test-85b478c545-l7znn
total 16
drwxrwxrwx    3 root     root          4096 Jun  1 14:34 .
drwxr-xr-x    1 root     root          4096 Jun  1 14:35 ..
drwxr-xr-x    3 1000     4000          4096 Jun  1 14:34 a
-rw-r--r--    1 1000     4000             0 Jun  1 14:34 foo
-r-xr-xr-x    1 1000     4000            18 Jun  1 14:34 test.txt
-rw-r--r--    1 1000     4000             0 Jun  1 14:34 /test/a/b/c/subfile.txt

Files are still group-owned by 4000, when they should now be group-owned by 6000.

Conclusion

Is this a bug or an intended limitation of the local-path-provisioner?
If it is a limitation, could we state it in the README?

At the implementation level, could we, for example, pass the fsGroup parameter to the setup script as an environment variable, so that the setup phase can apply it?
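To illustrate the idea, here is a sketch of what the setup script in the provisioner's ConfigMap could look like. VOL_DIR is the variable the provisioner already passes to the setup script; PVC_FS_GROUP is purely hypothetical and would require the provisioner to read fsGroup from the pod's securityContext and export it:

```yaml
setup: |-
  #!/bin/sh
  set -eu
  mkdir -m 0770 -p "$VOL_DIR"
  # PVC_FS_GROUP is hypothetical: the provisioner would have to export it
  # after extracting fsGroup from the pod's securityContext.
  if [ -n "${PVC_FS_GROUP:-}" ]; then
    chown ":${PVC_FS_GROUP}" "$VOL_DIR"   # group ownership for the volume root
    chmod g+s "$VOL_DIR"                  # new files inherit the group
  fi
```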

As users, can we do something to bypass this limitation?
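In the meantime, one workaround I can think of (not something the provisioner documents; the names, mount path, and UID/GID values below are illustrative) is an initContainer that fixes ownership before the main container starts:

```yaml
initContainers:
  - name: fix-volume-permissions
    image: busybox
    # Re-apply ownership recursively; needs root, unlike the main container.
    command: ["sh", "-c", "chown -R 1000:6000 /test && chmod -R g+rwX /test"]
    securityContext:
      runAsUser: 0
    volumeMounts:
      - name: data
        mountPath: /test
```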

Additional details:

  • local-path-provisioner version: v0.0.24
  • k3d version: k3d version v5.5.1
  • k3s version v1.26.4-k3s1 (default)

If you need some extra details, don't hesitate to ask.

/cc @tomdcc @mfredenhagen @pio-kol @gurbuzali @athkalia @skurtzemann @deepy @robmoore-i


github-actions bot commented Jun 7, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

davinkevin commented Jun 8, 2024

still up
Copy link

github-actions bot commented Aug 7, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.


deepy commented Aug 8, 2024

Still relevant

Copy link

github-actions bot commented Oct 8, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.

davinkevin commented Oct 9, 2024

Never been so relevant. With some help, we could implement a fix.
acalliariz commented:

Issue is still relevant. The securityContext.fsGroup field is not respected. Is this a limitation of the underlying local or hostPath PV?

github-actions bot commented Dec 18, 2024

This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.