Allow setting ownership on mounted secrets #81089
Comments
/sig auth
/cc @jingxu97
see also #57923 (specifically #57923 (comment))
edit: I misread the description as multiple containers running as different users wanting access to the same secret. Is this asking for a way to keep multiple processes in a single container, running as different users, from having access to portions of the same secret?
Correct: multiple processes in a single container, running as different users, should not have access to another secret or to portions of the same secret.
This does not sound like a very common use case. Different processes running under different users are typically separated into different containers. There are a handful of ways you could implement a custom solution within a container, but this doesn't sound like a use case we're likely to support in Kubernetes.
Let's say there is only one process running in the container, and the container's security context has runAsUser set to a non-root user. Currently, the secret file would still be created with uid 0 (owned by root). See the sketch below.
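A minimal sketch of that scenario, assuming runAsUser: 1000, a secret named app-secret, and an illustrative image (none of these names come from this thread): the mounted file is created owned by root, so with defaultMode: 0400 the non-root process cannot read it.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: single-process-pod
spec:
  securityContext:
    runAsUser: 1000          # the only process runs as a non-root user
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    volumeMounts:
    - name: creds
      mountPath: /etc/creds
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: app-secret     # illustrative secret name
      defaultMode: 0400          # owner-only read, but the owner is root, not uid 1000
```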
I think that's a scenario that fsGroup would address. That said, I agree that it shouldn't be necessary (at least to manually add it) in this case.
This has a lot of overlap (maybe a dupe?) with #2630
I have read through #2630, where the focus was on a related issue of volumes from some kind of provisioner (hostPath, emptyDir, etc.). This thread seems to be focused on secrets and, by extension, configmaps. Some software is strongly opinionated (OpenSSH comes to mind, but I have seen Java application servers do this too) about the combination of the uid of the process accessing a sensitive file (an ssh private key, for example), the uid/gid of said file, and the user/group/other permissions of the file. Currently, using Kubernetes 1.14.6, to the best of my knowledge it is not possible to satisfy the requirements of such opinionated software, because the user cannot set the uid of a secret/configmap mount point. fsGroup is not always a solution because the software checks the uid, which as of today is always 0. Providing a configuration option to set the mount uid via the yaml used to define the mount is clean, because it is the same abstraction expected on a non-containerized system. It is then left to the user to correctly align the value of the mount uid with the value of the runAsUser setting for the container itself. One could even argue that the mount should default to the runAsUser value, because most of the time there is a single container in a Pod, and it makes sense that the process running in the container needs to access the mount.
I totally approve. At this time, using a secret to store ssh keys in a strict security context (pod.securityContext.runAsNonRoot) is nearly impossible, or requires tedious workarounds like the one in https://stackoverflow.com/a/50426726/1620937, which is lame. A rough sketch of that kind of workaround is shown below.
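For illustration, a sketch of the kind of copy-and-chown workaround referred to above (this is not the linked answer verbatim; the init container, the secret name ssh-key, uid 1000, and the paths are all assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ssh-key-workaround
spec:
  securityContext:
    runAsUser: 1000
    runAsNonRoot: true
    fsGroup: 1000
  initContainers:
  - name: fix-key-ownership
    image: busybox
    # the init container runs as root so it can chown the copied key
    securityContext:
      runAsUser: 0
      runAsNonRoot: false
    command: ["sh", "-c", "cp /mnt/ssh-secret/id_rsa /home/app/.ssh/id_rsa && chown 1000:1000 /home/app/.ssh/id_rsa && chmod 0600 /home/app/.ssh/id_rsa"]
    volumeMounts:
    - name: ssh-secret
      mountPath: /mnt/ssh-secret
      readOnly: true
    - name: ssh-dir
      mountPath: /home/app/.ssh
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    volumeMounts:
    - name: ssh-dir
      mountPath: /home/app/.ssh
  volumes:
  - name: ssh-secret
    secret:
      secretName: ssh-key        # illustrative secret name
  - name: ssh-dir
    emptyDir: {}
```

The extra init container and the copy into an emptyDir are exactly the kind of boilerplate that a native ownership setting on the secret volume would remove.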
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
I do not believe this issue should go stale. I still have software that cannot be run in a Pod because the software does uid checks of sensitive files. The software (rightly) refuses to run as uid 0 because it doesn't need to, and (rightly) refuses to use a sensitive (secrets) file that it doesn't own and that it is not the only entity able to read.
/remove-lifecycle stale
+1 needed for the mounted secret files
+1 also affected here. (ssh keys)
+1 |
+1 |
+1 |
any updates on this? |
+1 |
+1 |
Another common use case is the postgres client modules when using a .pgpass credentials file.
@wildk1w1 You can use something like this:

```yaml
spec:
  volumes:
  - name: client-certs
    secret:
      secretName: postgres-client-certs
      defaultMode: 0600
  - name: ca-cert
    secret:
      secretName: postgres-cluster-cert
      defaultMode: 0600
  securityContext:
    fsGroup: 12345 # GID
```
/unassign a-hilaly
This does not change the owner of the file, and with
How can we proceed here? |
Any update on this? I mounted an ssh key into the pod at the home folder and set
Also see #129043 |
What would you like to be added:
Currently, you can set secret file permissions, but not ownership (see the "Secret files permissions" section):
https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets
It would be good to add a `defaultOwner`, and possibly `defaultGroup`, field that would allow setting the default ownership of the secret files.

Why is this needed:

It is possible that there is more than one process running in a container, each running as a different user (think `processA` running as `userA` and `processB` running as `userB`). `processA` might need to use `secretA` and `processB` to use `secretB`.

To make the secrets usable today, `secretA` and `secretB` would need to be world-readable, because there is no way to set ownership on them. This is undesirable from the security standpoint, as `processA` could read `secretB` and vice versa.
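Purely as a hypothetical illustration of the proposal: neither defaultOwner nor defaultGroup exists in the Kubernetes API today; the sketch below only mirrors the field names suggested in this issue.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: ownership-example
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    volumeMounts:
    - name: secret-a
      mountPath: /etc/secret-a
      readOnly: true
  volumes:
  - name: secret-a
    secret:
      secretName: secretA
      defaultMode: 0400
      defaultOwner: 1001   # hypothetical: uid that would own the mounted files
      defaultGroup: 1001   # hypothetical: gid that would own the mounted files
```

The values would presumably be aligned with the runAsUser/runAsGroup of the container consuming the secret, as suggested earlier in the thread.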