In the previous step we configured a worker node by:

- Creating a set of key pairs for the worker node ourselves
- Getting them signed by the CA ourselves
- Creating a kubeconfig file using this certificate ourselves
- Renewing the certificate ourselves, following the same process every time it expires
This is not a practical approach when you have thousands of nodes in the cluster, with nodes dynamically being added to and removed from the cluster. With TLS bootstrapping:
- The Nodes can generate certificate key pairs by themselves
- The Nodes can generate certificate signing request by themselves
- The Nodes can submit the certificate signing request to the Kubernetes CA (Using the Certificates API)
- The Nodes can retrieve the signed certificate from the Kubernetes CA
- The Nodes can generate a kube-config file using this certificate by themselves
- The Nodes can start and join the cluster by themselves
- The Nodes can request new certificates via a CSR, but the CSR must be manually approved by a cluster administrator
In Kubernetes 1.11, a patch was merged to require administrator or controller approval of node-serving CSRs for security reasons.
So let's get started!
Certificates API: The Certificates API (as discussed in the lecture) provides a set of APIs in Kubernetes that help us manage certificates (create a CSR, get it signed by the CA, retrieve the signed certificate, and so on). The worker nodes (kubelets) can use this API to get their certificates signed by the Kubernetes CA.
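To make this concrete, here is a minimal sketch of the kind of CertificateSigningRequest object the kubelet submits. The name and request payload are placeholders; the signer name shown is the one Kubernetes uses for kubelet client certificates:

```yaml
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: example-csr                 # placeholder name
spec:
  # Base64-encoded PKCS#10 CSR generated by the node (placeholder here)
  request: <base64-encoded-csr>
  signerName: kubernetes.io/kube-apiserver-client-kubelet
  usages:
  - digital signature
  - client auth
```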
kube-apiserver - Ensure bootstrap token based authentication is enabled on the kube-apiserver:

```
--enable-bootstrap-token-auth=true
```
kube-controller-manager - The certificate requests are ultimately signed by the kube-controller-manager, which requires the CA certificate and key to perform these operations:

```
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \
--cluster-signing-key-file=/var/lib/kubernetes/ca.key
```
Note: We have already configured these in Lab 8 of this course.
Run the following steps on master-1
For the workers (kubelets) to access the Certificates API, they must first authenticate to the Kubernetes API server. For this we create a Bootstrap Token to be used by the kubelet.
Bootstrap Tokens take the form of a 6-character token ID followed by a 16-character token secret, separated by a dot, e.g. abcdef.0123456789abcdef. More formally, they must match the regular expression [a-z0-9]{6}.[a-z0-9]{16}
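This lab uses the fixed token 07401b.f395accd246ae52d so that the steps are reproducible. If you wanted to generate a random token of the required form instead, a minimal sketch:

```bash
# Draw random [a-z0-9] characters for the token id (6) and secret (16)
TOKEN_ID=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 6)
TOKEN_SECRET=$(tr -dc 'a-z0-9' < /dev/urandom | head -c 16)
echo "${TOKEN_ID}.${TOKEN_SECRET}"
```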
Set an expiration date for the bootstrap token of 7 days from now (you can adjust this):

```bash
EXPIRATION=$(date -u --date "+7 days" +"%Y-%m-%dT%H:%M:%SZ")
```
```bash
cat > bootstrap-token-07401b.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  # Name MUST be of form "bootstrap-token-<token id>"
  name: bootstrap-token-07401b
  namespace: kube-system
# Type MUST be 'bootstrap.kubernetes.io/token'
type: bootstrap.kubernetes.io/token
stringData:
  # Human readable description. Optional.
  description: "The default bootstrap token generated by 'kubeadm init'."
  # Token ID and secret. Required.
  token-id: 07401b
  token-secret: f395accd246ae52d
  # Expiration. Optional.
  expiration: ${EXPIRATION}
  # Allowed usages.
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  # Extra groups to authenticate the token as. Must start with "system:bootstrappers:"
  auth-extra-groups: system:bootstrappers:worker
EOF
```
```bash
kubectl create -f bootstrap-token-07401b.yaml --kubeconfig admin.kubeconfig
```
Things to note:

- expiration: make sure it is set to a date in the future; the computed shell variable `EXPIRATION` ensures this.
- auth-extra-groups: this is the group the worker nodes are part of. It must start with `system:bootstrappers:`. This group does not exist already; it is associated with this token.
Once this is created, the token to be used for authentication is `07401b.f395accd246ae52d`.
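Optionally verify that the secret was stored (a sanity check, not part of the original steps):

```bash
kubectl get secret bootstrap-token-07401b -n kube-system --kubeconfig admin.kubeconfig
```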
Next, we associate the group we created above with the system:node-bootstrapper ClusterRole. This ClusterRole grants the group sufficient permissions to bootstrap the kubelet.
```bash
kubectl create clusterrolebinding create-csrs-for-bootstrapping \
  --clusterrole=system:node-bootstrapper \
  --group=system:bootstrappers \
  --kubeconfig admin.kubeconfig
```
--------------- OR ---------------
```bash
cat > csrs-for-bootstrapping.yaml <<EOF
# enable bootstrapping nodes to create CSR
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: create-csrs-for-bootstrapping
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:node-bootstrapper
  apiGroup: rbac.authorization.k8s.io
EOF
```

```bash
kubectl create -f csrs-for-bootstrapping.yaml --kubeconfig admin.kubeconfig
```
Next, allow CSRs submitted by members of this group to be approved automatically by binding the group to the built-in approver ClusterRole:

```bash
kubectl create clusterrolebinding auto-approve-csrs-for-group \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:nodeclient \
  --group=system:bootstrappers \
  --kubeconfig admin.kubeconfig
```
--------------- OR ---------------
```bash
cat > auto-approve-csrs-for-group.yaml <<EOF
# Approve all CSRs for the group "system:bootstrappers"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-csrs-for-group
subjects:
- kind: Group
  name: system:bootstrappers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
```

```bash
kubectl create -f auto-approve-csrs-for-group.yaml --kubeconfig admin.kubeconfig
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval
We now create the ClusterRoleBinding required for the nodes to automatically renew their certificates on expiry. Note that we are NOT using the system:bootstrappers group here any more: by renewal time the node will already be bootstrapped and part of the cluster, and all nodes are members of the system:nodes group.
```bash
kubectl create clusterrolebinding auto-approve-renewals-for-nodes \
  --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient \
  --group=system:nodes \
  --kubeconfig admin.kubeconfig
```
--------------- OR ---------------
```bash
cat > auto-approve-renewals-for-nodes.yaml <<EOF
# Approve renewal CSRs for the group "system:nodes"
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: auto-approve-renewals-for-nodes
subjects:
- kind: Group
  name: system:nodes
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
  apiGroup: rbac.authorization.k8s.io
EOF
```

```bash
kubectl create -f auto-approve-renewals-for-nodes.yaml --kubeconfig admin.kubeconfig
```
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#approval
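Before moving on, you can optionally confirm that all three bindings exist (a sanity check, not part of the original steps):

```bash
kubectl get clusterrolebindings --kubeconfig admin.kubeconfig | \
  grep -E 'create-csrs-for-bootstrapping|auto-approve'
```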
Going forward, all activities are to be done on the `worker-2` node until step 11.
Note that kubectl is required here to assist with creating the bootstrap kubeconfigs for kubelet and kube-proxy.
```bash
KUBE_VERSION=$(curl -L -s https://dl.k8s.io/release/stable.txt)

wget -q --show-progress --https-only --timestamping \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kube-proxy \
  https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/kubelet
```
Reference: https://kubernetes.io/releases/download/#binaries
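If you wish to verify the downloads, the release server also publishes a .sha256 file next to each binary (see the download reference above); a minimal sketch:

```bash
# Build "checksum  filename" lines and feed them to sha256sum for verification
for BINARY in kubelet kube-proxy; do
  echo "$(curl -sL https://dl.k8s.io/release/${KUBE_VERSION}/bin/linux/amd64/${BINARY}.sha256)  ${BINARY}" \
    | sha256sum --check
done
```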
Create the installation directories:
```bash
sudo mkdir -p \
  /var/lib/kubelet/pki \
  /var/lib/kube-proxy \
  /var/lib/kubernetes/pki \
  /var/run/kubernetes
```
Install the worker binaries:
```bash
{
  chmod +x kube-proxy kubelet
  sudo mv kube-proxy kubelet /usr/local/bin/
}
```
Move the certificates into place and secure them:

```bash
{
  sudo mv ca.crt kube-proxy.crt kube-proxy.key /var/lib/kubernetes/pki
  sudo chown root:root /var/lib/kubernetes/pki/*
  sudo chmod 600 /var/lib/kubernetes/pki/*
}
```
It is now time to configure the second worker to TLS bootstrap using the token we generated.

For worker-1, we started by creating a kubeconfig file with the TLS certificates that we manually generated. Here we don't have the certificates yet, so we cannot create a kubeconfig file. Instead, we create a bootstrap-kubeconfig file containing the token we created.

This is to be done on the `worker-2` node. Note that now we have set up the load balancer to provide high availability across the API servers, we point kubelet to the load balancer.
Set up some shell variables for nodes and services we will require in the following configurations:

```bash
LOADBALANCER=$(dig +short loadbalancer)
POD_CIDR=10.244.0.0/16
SERVICE_CIDR=10.96.0.0/16
CLUSTER_DNS=$(echo $SERVICE_CIDR | awk 'BEGIN {FS="."} ; { printf("%s.%s.%s.10", $1, $2, $3) }')
```
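Optionally echo the variables to confirm they resolved; CLUSTER_DNS should be the .10 address inside the service CIDR (10.96.0.10 here):

```bash
echo "LOADBALANCER=${LOADBALANCER} POD_CIDR=${POD_CIDR} CLUSTER_DNS=${CLUSTER_DNS}"
```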
Set up the bootstrap kubeconfig.
```bash
{
  sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    set-cluster bootstrap --server="https://${LOADBALANCER}:6443" --certificate-authority=/var/lib/kubernetes/pki/ca.crt

  sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    set-credentials kubelet-bootstrap --token=07401b.f395accd246ae52d

  sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    set-context bootstrap --user=kubelet-bootstrap --cluster=bootstrap

  sudo kubectl config --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig \
    use-context bootstrap
}
```
--------------- OR ---------------
```bash
cat <<EOF | sudo tee /var/lib/kubelet/bootstrap-kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/lib/kubernetes/pki/ca.crt
    server: https://${LOADBALANCER}:6443
  name: bootstrap
contexts:
- context:
    cluster: bootstrap
    user: kubelet-bootstrap
  name: bootstrap
current-context: bootstrap
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 07401b.f395accd246ae52d
EOF
```
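Whichever method you used, you can optionally confirm the result:

```bash
sudo kubectl config view --kubeconfig=/var/lib/kubelet/bootstrap-kubeconfig
```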
Create the `kubelet-config.yaml` configuration file:
Reference: https://kubernetes.io/docs/reference/config-api/kubelet-config.v1beta1/
```bash
cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: /var/lib/kubernetes/pki/ca.crt
authorization:
  mode: Webhook
containerRuntimeEndpoint: unix:///var/run/containerd/containerd.sock
cgroupDriver: systemd
clusterDomain: "cluster.local"
clusterDNS:
  - ${CLUSTER_DNS}
registerNode: true
resolvConf: /run/systemd/resolve/resolv.conf
rotateCertificates: true
runtimeRequestTimeout: "15m"
serverTLSBootstrap: true
EOF
```
Note: We are not specifying the certificate details, tlsCertFile and tlsPrivateKeyFile, in this file. The kubelet obtains its certificates via bootstrapping and stores them in the directory given by --cert-dir.
Create the `kubelet.service` systemd unit file:
```bash
cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
  --bootstrap-kubeconfig="/var/lib/kubelet/bootstrap-kubeconfig" \\
  --config=/var/lib/kubelet/kubelet-config.yaml \\
  --kubeconfig=/var/lib/kubelet/kubeconfig \\
  --cert-dir=/var/lib/kubelet/pki/ \\
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
Things to note here:
- bootstrap-kubeconfig: Location of the bootstrap-kubeconfig file.
- cert-dir: The directory where the generated certificates are stored.
- kubeconfig: We specify a location for this but we have not yet created it. Kubelet will create one itself upon successful bootstrap.
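Once bootstrapping and the approval steps later in this section have completed, you would expect the cert-dir to contain the kubelet's client and serving certificates. The timestamped file names below are illustrative; the -current files are symlinks maintained by the kubelet's certificate rotation:

```bash
ls /var/lib/kubelet/pki
# kubelet-client-2024-01-01-00-00-00.pem  kubelet-client-current.pem
# kubelet-server-2024-01-01-00-00-00.pem  kubelet-server-current.pem
```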
In one of the previous steps we created the `kube-proxy.kubeconfig` file. Refer back to the step where the kubeconfigs were generated if you missed it.
```bash
{
  sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/
  sudo chown root:root /var/lib/kube-proxy/kube-proxy.kubeconfig
  sudo chmod 600 /var/lib/kube-proxy/kube-proxy.kubeconfig
}
```
Create the `kube-proxy-config.yaml` configuration file:
Reference: https://kubernetes.io/docs/reference/config-api/kube-proxy-config.v1alpha1/
```bash
cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: /var/lib/kube-proxy/kube-proxy.kubeconfig
mode: ipvs
clusterCIDR: ${POD_CIDR}
EOF
```
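Since mode is set to ipvs, the kernel's IPVS modules must be available; kube-proxy falls back to iptables mode if they are not. An optional check:

```bash
# List loaded IPVS and connection-tracking modules
lsmod | grep -e ip_vs -e nf_conntrack
```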
Create the `kube-proxy.service` systemd unit file:
```bash
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
  --config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
```
On `worker-2`:

```bash
{
  sudo systemctl daemon-reload
  sudo systemctl enable kubelet kube-proxy
  sudo systemctl start kubelet kube-proxy
}
```
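Optionally confirm that both services started cleanly before proceeding (not part of the original steps):

```bash
sudo systemctl status kubelet kube-proxy --no-pager
# To follow the kubelet log while it bootstraps:
#   sudo journalctl -u kubelet -f
```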
Remember to run the above commands on the worker node `worker-2`.
On the `worker-2` node, run the following, selecting option 5:

```bash
./cert_verify.sh
```
Now, go back to `master-1` and approve the pending kubelet-serving certificate.
```bash
kubectl get csr --kubeconfig admin.kubeconfig
```
Output - note that the name will be different, but it will begin with `csr-`:

```
NAME         AGE   SIGNERNAME                                    REQUESTOR                 REQUESTEDDURATION   CONDITION
csr-7k8nh    85s   kubernetes.io/kubelet-serving                 system:node:worker-2      <none>              Pending
csr-n7z8p    98s   kubernetes.io/kube-apiserver-client-kubelet   system:bootstrap:07401b   <none>              Approved,Issued
```
Approve the pending certificate. Note that the certificate name `csr-7k8nh` will be different for you, and will differ on each run through the lab.

```bash
kubectl certificate approve --kubeconfig admin.kubeconfig csr-7k8nh
```
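If jq is installed, you can select the pending CSR name automatically instead of copying it by hand:

```bash
kubectl certificate approve --kubeconfig admin.kubeconfig \
  $(kubectl get csr --kubeconfig admin.kubeconfig -o json | \
    jq -r '.items | .[] | select(.spec.username == "system:node:worker-2") | .metadata.name')
```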
Note: In the event your cluster persists for longer than 365 days, you will need to manually approve the replacement CSR.
Reference: https://kubernetes.io/docs/reference/access-authn-authz/kubelet-tls-bootstrapping/#kubectl-approval
List the registered Kubernetes nodes from the master node:
```bash
kubectl get nodes --kubeconfig admin.kubeconfig
```
Output - the nodes will show NotReady until pod networking is installed in a later step:

```
NAME       STATUS     ROLES    AGE   VERSION
worker-1   NotReady   <none>   93s   v1.28.4
worker-2   NotReady   <none>   93s   v1.28.4
```
Prev: Bootstrapping the Kubernetes Worker Nodes
Next: Configuring Kubectl