Add option to write Kubernetes resource YAMLs to disk #79
I made some progress on this use case for myself, but I haven't been motivated to upstream anything yet:

https://github.com/kingdonb/kuby_test

I settled on this script: https://github.com/kingdonb/kuby_test/blob/42e0871fb886a13637232d060fc2efa70938512f/builder.sh#L7 which calls https://github.com/kingdonb/kuby_test/blob/42e0871fb886a13637232d060fc2efa70938512f/builder.rb#L6

First of all, I wanted to strip out any secrets and throw them away. Since I'll likely be using a CI process to automatically commit the updated files back to git when they have changed, I don't want any unencrypted secrets committed to git.

(Later, on a separate project, I decided to save the secrets, but in a different file that is excluded from git. Then they can be handled by an admin as needed. Another option would be excluding the secret from the git commit until it has been encrypted, then verifying the encryption and moving it to a secure location. Because the CI process would have to decrypt the secret in order to know whether a newly generated copy has changed, I did not pursue this for now...)

I also wasn't able to figure out how to properly configure … In GitOps, tenants frequently do not have cluster-admin outside of a namespace scope, and they generally cannot write any cluster-wide resources. I haven't looked deeply into kuby-core or kube-dsl to know whether it's straightforward to tell cluster-wide resources apart from others, but we might want to have a …
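A minimal sketch of the secret-stripping step described above (this is not the actual builder.rb, just an illustration under its stated goal): split a multi-document YAML stream and separate out any `Secret` resources so they are never committed unencrypted.

```ruby
require "yaml"

# Illustrative sketch: partition a multi-document YAML stream into
# non-secret resources (safe to commit) and Secret resources
# (to be discarded, or written to a git-excluded file).
def strip_secrets(multi_doc_yaml)
  docs = YAML.load_stream(multi_doc_yaml).compact
  docs.partition { |doc| doc["kind"] != "Secret" }
end

manifests = <<~YAML
  apiVersion: v1
  kind: Secret
  metadata:
    name: db-credentials
  data:
    password: aHVudGVyMg==
  ---
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: web
YAML

kept, secrets = strip_secrets(manifests)
```

The `kept` documents can then be dumped back out for the CI commit, while `secrets` go to a separate, excluded location as described above.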
That script of yours is really neat :) It's a good example of how to separate resources; others might find it useful.
One thing that will be challenging when implementing this feature is knowing what use cases people have, e.g. removing secrets. I don't personally have a need to write resources to disk, so it will be important to find out what people actually need. On the topic of secrets, how does GitOps encrypt them (or does it)? Are there other industry standards out there? This is an area I know nothing about.
Hmm, interesting. What if we hashed the contents and exposed the digest as an annotation on the …
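A hypothetical sketch of that digest idea, purely for illustration (the annotation key `example.com/data-digest` is made up, and whether exposing a digest of secret data is actually safe is an open question in this thread):

```ruby
require "digest"
require "json"

# Hypothetical: compute a stable digest over a Secret's data and expose
# it as an annotation, so a diff can detect rotation without the
# plaintext ever being committed. Annotation key is invented here.
# Caveat: a plain hash of low-entropy data could be brute-forced.
def annotate_with_digest(secret)
  digest = Digest::SHA256.hexdigest(JSON.generate(secret["data"].sort.to_h))
  secret["metadata"] ||= {}
  secret["metadata"]["annotations"] ||= {}
  secret["metadata"]["annotations"]["example.com/data-digest"] = digest
  secret
end

secret = {
  "kind" => "Secret",
  "metadata" => { "name" => "db-credentials" },
  "data" => { "password" => "aHVudGVyMg==" }
}
annotated = annotate_with_digest(secret)
```

Sorting the data keys before hashing keeps the digest stable across serializations of the same data.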
Ah, there isn't a way to do that at the moment. The deployer code in kuby-core treats resources without a namespace as cluster resources; perhaps your script could adopt a similar approach?
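The heuristic described above can be sketched in a few lines (a simplification, not the kuby-core deployer code itself):

```ruby
# Sketch of the heuristic: treat any resource lacking metadata.namespace
# as cluster-scoped. Note this misclassifies namespaced resources that
# simply omit the namespace field and rely on a default.
def partition_by_scope(resources)
  resources.partition { |r| r.dig("metadata", "namespace").nil? }
end

resources = [
  { "kind" => "ClusterRole", "metadata" => { "name" => "reader" } },
  { "kind" => "Deployment",
    "metadata" => { "name" => "web", "namespace" => "default" } }
]
cluster_scoped, namespaced = partition_by_scope(resources)
```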
Hmm, I could see including something like that, or perhaps providing a …
The standard for Flux is SOPS, though you can use any solution (like the sealed-secrets controller). Different solutions behave differently, but the standard approach is that either (1) a private key kept on the cluster or (2) a KMS key granted for use on the cluster is used to encrypt the data fields in a secret (or the entire secret, including metadata) before it is stored in git. The encrypted file is stored in the repo and decrypted on demand.

Different solutions approach the question of "where should decryption be allowed" differently. For example, sealed-secrets must be decrypted into the namespace where they were originally encrypted, unless you disable that behavior in the controller. SOPS recognizes that anyone with the key can decrypt the data, so it doesn't offer this feature – I guess because it's an artificial limitation that only protects you as long as your keys are strictly access-controlled. If the key is compromised, it's game over; namespaces won't protect you.

But also, SOPS is currently unmaintained, while by comparison sealed-secrets has had some releases – though it has also had long spans with no releases in recent history, so I still find it hard to recommend it over SOPS (which I do personally like better).

So "what is the industry standard" is a tough question to answer definitively, because of the support situation of tools like this and other issues. I will say SOPS is the standard in spite of those issues for now... others are welcome to disagree.
That sounds like some scaffolding I would expect SOPS to provide, and maybe it's already been done and I'm just unaware.

I'm very leery of providing guidance around security tooling because I am not a security professional. In the CNCF project I work on, we've employed some auditors to help us confirm our security posture, and going through that experience has made me more aware of how much I don't know. Anyway, point being: just because I don't see how a one-way hash of the data could ever be used to compromise the integrity of the encryption doesn't mean it's so. I wouldn't be the one to suggest that.

You can also solve the problem another way: if you only rotate secrets through an intentional process, you know when they have changed because you're the one changing them, and you don't need to rely on a diff to tell you that.

I put secrets into a separate directory, a separate git branch, or a separate repo entirely so they are isolated – not only for security reasons, but also to separate signal from noise. That way keys can be rotated every hour, if you like, and it will not be seen as noise in the repo.

Then there are also solutions like the Vault CSI driver and the external-secrets operator, which keep secrets outside of the cluster. I haven't used any of those, but it's possible they are even more popular than the solutions I have used.

I am of the opinion that secrets should be rotated frequently, and as a pragmatist I understand that means it must be done automatically, so I do want to have this conversation. But that's about as deep as my strongly held opinions go for now, other than to say that I am still one of those people who consider that secrets should be handled separately and with white gloves.
I was thinking: since we may have access to the CRD, we can read the spec to find out.

In the GitOps model, though, there is no guarantee that we (the CI process) will have access to the cluster at build time, or permission to read CRDs. But in Kuby (where cert-manager is installed through a plugin) we do at least have those manifests on disk, I think, so we can read them even if they might not always match what is on the cluster in a scenario like the one I have imagined for tenants.

Maybe what's needed is general support for providing runtime "middlewares" or postprocessors that run on the output of Kuby, much as my script is doing, but as a supported part of the build pipeline; compare this to Helm's …

Thanks for ideating with me.
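Reading the scope from a CRD manifest on disk is straightforward, since every CRD declares `spec.scope` as either `Cluster` or `Namespaced`. A sketch, assuming the plugin's manifests are available locally (the cert-manager CRD below is abbreviated):

```ruby
require "yaml"

# Sketch: classify a CRD's custom resources by reading spec.scope from
# the manifest, with no cluster access needed at build time.
crd = YAML.safe_load(<<~YAML)
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: clusterissuers.cert-manager.io
  spec:
    group: cert-manager.io
    scope: Cluster
    names:
      kind: ClusterIssuer
      plural: clusterissuers
YAML

cluster_scoped_kinds =
  crd.dig("spec", "scope") == "Cluster" ? [crd.dig("spec", "names", "kind")] : []
```

As noted above, a manifest on disk might lag behind what is actually installed on the cluster, so this is best-effort.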
Whether you're using GitOps, want to version your k8s resources in source control, or just want to save them to a directory, Kuby should support emitting them via the CLI. Perhaps something like
kuby resources -o /path/to/output_dir
? There are a couple of strategies we could support as well:
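One possible strategy is a sketch like the following (hypothetical – the flag name comes from the comment above, and the one-file-per-resource naming scheme is an assumption, not an existing Kuby behavior):

```ruby
require "yaml"
require "fileutils"
require "tmpdir"

# Hypothetical output strategy for a `kuby resources -o DIR` command:
# write each resource to its own file, named <kind>-<name>.yaml.
def write_resources(resources, out_dir)
  FileUtils.mkdir_p(out_dir)
  resources.map do |res|
    path = File.join(out_dir,
                     "#{res['kind'].downcase}-#{res.dig('metadata', 'name')}.yaml")
    File.write(path, YAML.dump(res))
    path
  end
end

resources = [
  { "kind" => "Service", "metadata" => { "name" => "web" } },
  { "kind" => "Deployment", "metadata" => { "name" => "web" } }
]
paths = write_resources(resources, Dir.mktmpdir)
```

An alternative strategy would be a single multi-document YAML stream, which diffs more noisily but deploys with one `kubectl apply -f`.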