
Offload crypto to a plugin mechanism to enable HSM and managed cloud crypto support #779


Open
reegnz opened this issue Mar 22, 2022 · 11 comments
Labels
enhancement help wanted Feature requests approved by maintainers that are not included in the project roadmap

Comments

@reegnz

reegnz commented Mar 22, 2022

In the README I see this line:

NOTE: we are working on providing key management mechanisms that offload the encryption to HSM based modules or managed cloud crypto solutions such as KMS.

This is clearly not happening right now.

There should be a managed crypto solution eventually, either through direct support, or having a mechanism to offload crypto to an external service.

Crypto could be offloaded to a plugin mechanism so that sealed-secrets doesn't need to directly support KMS and various different HSM solutions, but 3rd party solutions could implement their own crypto offloading.
This possibility was mentioned in the vault ticket #293 (comment)

@github-actions github-actions bot added the triage Issues/PRs that need to be reviewed label Mar 22, 2022
@mkmik
Collaborator

mkmik commented Mar 22, 2022

the ticket has been closed by a bot, fwiw; I still think this is a nice feature; I don't have the bandwidth but I'll be happy to discuss design with anybody who wants to work on it

@reegnz
Author

reegnz commented Mar 23, 2022

I'm seeing that PR #416 had some architecture changes to introduce a backend, I think that work could be picked up again to make a PR (just the backend part, with AES).

I'm also thinking that plugins should be separate processes (think how terraform providers work, launching as a separate process and communicating through grpc). That way you could just drop in backend binaries to a well-known filesystem path to extend to other backends. The binaries could then be provided as sidecars (similar to how argocd can be extended with custom tools: https://argo-cd.readthedocs.io/en/stable/operator-manual/custom_tools/). But that might be overkill for the initial extendability feature.

For a first pass I suggest doing two backends: AES, preserving the current behaviour, and a gRPC/REST one so that sealed-secrets can be extended without further modification of its codebase.
People then have the option to provide their own encryption backends as a sidecar.

@mkmik
Collaborator

mkmik commented Mar 23, 2022

I like the gRPC option. I think the sidecar approach is particularly useful because it makes it trivial to decouple the crypto modules from the project itself.

would you envision creating a long-running "crypto engine" process, serving gRPC requests over e.g. a unix domain socket? or do you envision the sealed-secrets main process spawning one "crypto engine" process each time one is needed?


I was imagining starting simple with one crypto-engine at a time. Something like this:

One volume called "crypto-engine" with a unix domain socket in a well-known path.
One container called "sealed-secrets" running sealed-secrets main process.
One container called "crypto-engine" running an implementation of the "crypto-engine".
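As a rough sketch, the pod layout might look like this (container/volume names, image tags and paths are illustrative, not a settled convention):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sealed-secrets-controller
spec:
  containers:
  - name: sealed-secrets
    image: sealed-secrets-controller:latest   # illustrative tag
    volumeMounts:
    - name: crypto-engine
      mountPath: /var/run/crypto-engine       # controller dials the socket here
  - name: crypto-engine
    image: example/kms-crypto-engine:latest   # hypothetical plugin image
    volumeMounts:
    - name: crypto-engine
      mountPath: /var/run/crypto-engine       # engine listens on a socket here
  volumes:
  - name: crypto-engine
    emptyDir: {}                              # shared only between these two containers
```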

Whenever the sealed-secrets controller needs to decrypt a secret, it fires off a request to the unix domain socket in the volume shared between the two containers.

I like gRPC over unix domain sockets so we can easily control which containers can talk to each other (since there are many possible uses for sidecars, including use cases that are better run with fewer privileges, such as metrics collectors or whatnot)

The crypto-engine runs in a separate side-car container so we can mount into it whatever secrets we may need to contact the external crypto service, without necessarily sharing that with other sidecars.
The crypto-engine sidecar is a long running process so it can amortise the possibly expensive initialisation phase with the cloud-provider service.

WDYT?

@reegnz
Author

reegnz commented Mar 23, 2022

I think I'd be more comfortable with sealed-secrets spawning the process, as that model is already battle-proven in terraform, so one could use terraform as an inspiration for that plugin/provider model. Then you'd only need an initContainer that puts the provider binary into a well-known directory (drop-in directory style), and spawning the process is handled by sealed-secrets.
But I don't mind sidecar+sockets either. Why do a socket, though, if you can bind to 127.0.0.1? Kubernetes pods have a shared network namespace between their containers for use-cases like this, although I agree that you might want to limit which containers get access.

Regarding mounting secrets: yes, that's a good argument for using a sidecar instead of spawning a sub-process directly, separating the secrets (eg. SA tokens, mounted secrets, etc) would be important for some.

@mkmik
Collaborator

mkmik commented Mar 23, 2022

but why do a socket if you can bind to 127.0.0.1?

because this means other sidecars (possibly injected by a mutating webhook) would also share the network namespace, and thus the attack surface would be increased

@mkmik
Collaborator

mkmik commented Mar 23, 2022

And then having an initContainer only, that puts the provider binary at a well-known directory (drop-in directory style) and spawning the process is handled by sealed-secrets.

problems:

  • if you don't have a statically linked binary you may have some headaches
  • having a full-blown docker image for the "crypto engine" provides a much easier and more standard way to deal with all sorts of things; for example, what if your crypto engine needs to talk to some service using a private CA? You can make this the problem of whoever provides the crypto engine sidecar image and its configuration, instead of fiddling with the main container config.

@alvneiayu alvneiayu added help wanted Feature requests approved by maintainers that are not included in the project roadmap enhancement and removed triage Issues/PRs that need to be reviewed labels Mar 24, 2022
@reegnz
Author

reegnz commented Aug 31, 2023

I'm circling back to this topic, as I couldn't get to it after our previous discussions:

because this means other sidecars (possibly injected by mutating webhook) would also share the network namespace and thus the attack surface would be increased

I guess you mean the other sidecars would have network access to the crypto-engine as containers in a pod share the network namespace by design? That's a valid concern.

I guess it all boils down to identity: how does the sidecar verify that the request comes from the right client? Unix domain sockets alone don't solve that either; they control access at a lower layer, not the application layer.

IMHO this is really a defense in depth issue:

  • Access to the service (this is kind of a firewalling issue, solved by unix domain sockets, or, if it's a separate pod, by k8s NetworkPolicy)
  • Service authorization (this needs identity verification).

For authorization, the sidecar could require a service account token in each request (e.g. projected service account tokens with a unique audience) and verify the token itself.

Either way, I think the unix domain socket solution is a good first tier, but later on, to achieve defense in depth, I'd also try to utilize some form of caller identity.

With a caller identity mechanism, users could even have the option to run the crypto-engine as a separate deployment; the k8s-issued JWT on the client side, and validation of it on the crypto-engine side, ensures authorization.

FYI I've done pod-to-pod authorization with projected K8S service account tokens before with success.
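For reference, mounting such a token with a dedicated audience looks roughly like this (volume name, audience, and path are illustrative):

```yaml
volumes:
- name: crypto-engine-token
  projected:
    sources:
    - serviceAccountToken:
        audience: crypto-engine      # engine rejects tokens minted for any other audience
        expirationSeconds: 600
        path: token
```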

@frmrm

frmrm commented Feb 9, 2024

Hey folks, this issue has been stale for a little while. Has there been any significant movement on actually implementing a solution here? This has kept coming up over the past few years I've been using Sealed Secrets in production, and I'd be happy to make an effort toward getting this knocked out if nobody has work in progress already.

As I understand the design at the moment we need:

  • Specification of an API (be it REST or gRPC) that provides two endpoints:
    • GetPublicKey -> retrieve the current public key for kubeseal to do its thing
    • Decrypt -> pass in ciphertext, get plaintext back
  • An example of how to start this as a sidecar and expose a unix socket
  • A controller flag to enable the plugin behavior
  • Ideally, support for parallel encryption/decryption between the old, internal secrets and new external provider secrets
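The API part could be sketched as a proto like the following (service and message names, fields, and the package are placeholders, not a settled contract):

```proto
syntax = "proto3";

package cryptoengine.v1;

// Hypothetical plugin interface served by the crypto-engine sidecar
// over a unix domain socket.
service CryptoEngine {
  // GetPublicKey returns the current public key for kubeseal.
  rpc GetPublicKey(GetPublicKeyRequest) returns (GetPublicKeyResponse);
  // Decrypt takes ciphertext and returns plaintext.
  rpc Decrypt(DecryptRequest) returns (DecryptResponse);
}

message GetPublicKeyRequest {}

message GetPublicKeyResponse {
  bytes public_key_pem = 1; // e.g. a PEM-encoded certificate or public key
}

message DecryptRequest {
  bytes ciphertext = 1;
  bytes label = 2; // sealing scope label, if the engine needs it
}

message DecryptResponse {
  bytes plaintext = 1;
}
```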

Does that sound right?

@reegnz
Author

reegnz commented Feb 9, 2024

Yeah, I didn't get around to doing any code for this, my org hasn't adopted sealed-secrets yet...

GetPublicKey -> retrieve the current public key for kubeseal to do its thing

For me it's desirable to avoid go-crypto altogether (because of the FIPS issue around it) and entirely offload crypto to a plugin.
So it would be great to have symmetric Encrypt/Decrypt and offload all of it to the plugin.

@reegnz reegnz closed this as completed Feb 9, 2024
@reegnz reegnz reopened this Feb 9, 2024
@reegnz
Author

reegnz commented Feb 9, 2024

Accidentally closed. :)

@frmrm

frmrm commented Feb 9, 2024

For me it's desirable to avoid go-crypto altogether (because of the FIPS issue around it) and entirely offload crypto to a plugin.
So would be great to have a symmetric Encrypt/Decrypt and offload all of it to the plugin.

I don't have all the details on the FIPS issue, but asymmetric encryption is pretty important for our use case. We ideally want a minimal number of folks to have access to even the production cloud account. Permitting symmetric encryption is a larger problem, because then the controller has to manage encryption rather than kubeseal... or you teach kubeseal how to use your credentials to interact with the symmetric encryption provider.

IMO we should decouple these issues:

  1. Use of an external key provider
  2. FIPS compliance of the cryptography
  3. Symmetric encryption support

I think we should treat this issue as solely concerned with (1) to make progress on it.
