kubeadm should support adding restricted labels on Nodes #2509
Comments
Just brainstorming a possible approach:
Race condition (corner case):
one problem here is that joining nodes that do not have the admin.conf credential (e.g. workers) do not have sufficient permissions to self-label with restricted prefixes. since kubeadm is not a controller, it has no way to auto-label new joining workers with an admin credential.
on the SIG Cluster Lifecycle level, it has been discussed multiple times in the past that higher level tools can do that - cluster-api, kops, minikube, kind.
Ah, of course. 🤦
The problem I see is that these high-level tools generally have the same problem as the kubectl workaround -- a race exists between the scheduler and the tool/controller that applies the labels. This can lead to unwanted workloads on the node(s) of interest -- this is kind of a big deal, IMO.

I suppose some mechanism to prevent these unwanted workloads could be devised. For example, kubeadm does support taints via the NodeRegistrationOptions. The high-level tools could use that to taint new nodes via the kubelet's flag and then remove the taint after the labels have been applied. Just need to think through the consequences of that -- for example, the taint would temporarily prevent CNIs from being installed on the nodes, but perhaps that is acceptable given that the taint is only applied for a short duration.

For reference, here is the relevant CAPI issue: kubernetes-sigs/cluster-api#493
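A rough sketch of that taint-then-label flow with plain kubectl; the taint key (example.com/bootstrap-pending) and node name (worker-1) are placeholders here, not an existing kubeadm or CAPI convention:

```sh
# 1. The node joins carrying a temporary NoSchedule taint (e.g. set through the
#    JoinConfiguration's nodeRegistration.taints), so ordinary workloads cannot
#    land on it before its labels are in place.
# 2. A controller or admin holding cluster credentials applies the restricted
#    label -- something the kubelet is not permitted to do for itself.
kubectl label node worker-1 node-role.kubernetes.io/worker=""
# 3. The temporary taint is removed so normal scheduling resumes.
kubectl taint nodes worker-1 example.com/bootstrap-pending:NoSchedule-
```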
it can certainly be orchestrated on a high level, it's just that it's blocked on the kubeadm joining node credential level and the fact that kubeadm is not a service. it can be added as a feature of an eventual kubeadm operator.

today, we could allow Labels under NodeRegistrationOptions the same way we have Taints, but then we have to document that the kubernetes.io* labels are reserved and not allowed to be self-set by node clients. so this would make the Labels field less useful, as what users mostly want is the restricted node-role.kubernetes.io/<role> label.
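Purely as an illustration of that idea, such a configuration might look like the following; the labels key is a hypothetical field that kubeadm does not have today, mirrored on the existing taints support:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
# (discovery section omitted)
nodeRegistration:
  taints:                                # supported today
    - key: "example.com/dedicated"
      effect: "NoSchedule"
  labels:                                # hypothetical field, does not exist yet
    node-role.kubernetes.io/infra: ""    # restricted prefix: would only work if
                                         # kubeadm, not the kubelet, applied it
```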
while i don't know about all the details of the thread, it seems natural for the Cluster API's machine controller to maintain the node labels without a kubeadm binding (as the CAPI bootstrap provider).
@neolit123 feel free to close this issue, or leave it open if you think that's appropriate.
Added xref in #2315 as a potential kubeadm operator feature.
FEATURE REQUEST
Versions
kubeadm version (use kubeadm version): All
Environment:
Kubernetes version (use kubectl version): 1.16+
What happened?
Currently, specifying a restricted label via the kubelet's --node-labels flag will cause node bootstrapping to fail, with the kubelet throwing an error.
What you expected to happen?
I would like an easy and secure means by which to assign roles to my nodes. Previously, the kubelet's self-labeling mechanism could be used. However, security concerns have restricted the allowed set of labels; in particular, the kubelet's ability to self-label in the *.kubernetes.io/ and *.k8s.io/ namespaces is mostly prohibited.
At a minimum, kubeadm should support adding restricted labels with prefixes:
node-role.kubernetes.io/<role> (kubectl get node looks for this specific label when displaying the role to the user)
node-restriction.kubernetes.io/
How to reproduce it (as minimally and precisely as possible)?
For example:
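A minimal configuration along these lines (the exact values are illustrative) asks the kubelet to self-apply a restricted label via its --node-labels flag:

```yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
# (discovery section omitted)
nodeRegistration:
  kubeletExtraArgs:
    # The kubelet refuses to self-apply labels with restricted prefixes,
    # so registration fails and the node never finishes bootstrapping.
    node-labels: "node-role.kubernetes.io/worker="
```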
The following error is produced:
Anything else we need to know?
Workarounds that have been recommended include:
Users can use label prefixes outside the restricted set; however, such labels can still be self-set by the node itself, which reintroduces the security concerns the restriction was meant to address, so this should be avoided.
Users can use kubectl to manually label the nodes (a sketch follows this list). This is undesirable for multiple reasons, most notably the race between the scheduler and whoever applies the labels.
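For reference, the kubectl workaround amounts to something like this (node name illustrative), run with admin credentials after the node has already joined -- exactly the window in which the scheduler can place unwanted workloads on it:

```sh
# Admin credentials may set restricted labels; the kubelet itself may not.
kubectl label node worker-1 node-role.kubernetes.io/worker=""
```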
Possible approach for kubeadm:
Kubeadm currently adds the restricted label node-role.kubernetes.io/master using the Kubernetes API client during the MarkControlPlane phase. A similar approach could be taken to add user-specified restricted labels to worker as well as master nodes.
Related
eksctl-io/eksctl#2363
#1060