node agent: allow running out of cluster using $KUBECONFIG #414
Comments
@milesbxf We would review the changes, although we don't have any infra to test them. cc @tiagolobocastro
This seems reasonable to me, and not OS-specific, as it's just adding support for running out-of-cluster, which is sometimes useful for debugging things - so we'd be happy to take this :) I actually run NixOS myself but mostly focus on mayastor, so I never hit this previously. I also saw something once, a workaround for calling NixOS host binaries from a pod. I'll see if I can find it.
I had ChatGPT generate an example linking the binaries, which I slightly modified.
@tiagolobocastro @Abhinandan-Purkait if we take this community contribution into the project, please ensure that it's attributed as a community-driven contribution, i.e. @ncrmro creates the PR and clearly shows as the author / creator.
@milesbxf this can be taken up for the next milestone v4.2. Would you mind opening the pull-request?
Describe the problem/challenge you have
I'm running openebs/zfs-localpv on NixOS. I've been unable to get the node agent running in-cluster, since the node agent requires access to the `zfs` binary on the node. NixOS stores binaries in very non-standard locations, and I wasn't successful in modifying the zfs-chroot configmap to get it working. I realised that whilst unconventional, it'd be far easier to just run the node agent directly on the node, out of cluster.

However, whilst the kube client config code can load from an external kubeconfig file, there's no way to configure the binary to do so, and it's hardcoded to use in-cluster config or fail.
Describe the solution you'd like
I'd like the node agent to fall back to looking up kubeconfig from the standard `KUBECONFIG` environment variable if it's unable to get in-cluster config and we've not set the kubeconfig path elsewhere.

I have a branch here with this solution, which is working fine for me - happy to raise this as a PR if you're happy with the approach: https://github.com/openebs/zfs-localpv/compare/develop...milesbxf:run-out-of-cluster?expand=1
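For reference, a minimal sketch of the proposed fallback, assuming the standard client-go helpers (`rest.InClusterConfig` and `clientcmd.BuildConfigFromFlags`). This is illustrative only, not the exact code from the linked branch:

```go
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildConfig prefers in-cluster config; if that fails (e.g. when the
// agent runs directly on the node), it falls back to the kubeconfig
// file named by the KUBECONFIG environment variable.
func buildConfig() (*rest.Config, error) {
	if cfg, err := rest.InClusterConfig(); err == nil {
		return cfg, nil
	}
	kubeconfig := os.Getenv("KUBECONFIG")
	if kubeconfig == "" {
		return nil, fmt.Errorf("not running in-cluster and KUBECONFIG is unset")
	}
	return clientcmd.BuildConfigFromFlags("", kubeconfig)
}

func main() {
	cfg, err := buildConfig()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("connected to API server at", cfg.Host)
}
```

With something like this in place, the agent could be started directly on the host with e.g. `KUBECONFIG=$HOME/.kube/config ./node-agent` (binary name hypothetical), while in-cluster deployments would behave exactly as before since the in-cluster path is still tried first.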
Anything else you would like to add:
I realise from the docs that you're currently only intending to support Ubuntu and CentOS as target OSes - that's totally fair, and I'm happy to run this myself. However, the solution above would allow me to do so without maintaining a fork, and shouldn't impact any other usage 🙏
Environment:
- Kubernetes version (use `kubectl version`): v1.23.7
- OS (e.g. from `/etc/os-release`): "NixOS 22.11 (Raccoon)"