This method lets you provision a bare-metal machine as a Kubernetes node, using the provisioning logic of OSM as provided by a specific OSP.
The approach is currently a bit hacky: we create a MachineDeployment with provider GCE, but with a bogus config, so nothing is actually provisioned on GCE. This MachineDeployment nevertheless forces Machine-Controller to create a suitable OSM configuration, which we then download and use to provision the bare-metal machine.
- A local Linux machine
- yq installed on it
- This repository checked out
- Access to the remote machine
  - SSH access to the remote machine
  - or password access and an SSH key pair available
- A KKP user-cluster
  - When you create a new user-cluster, choose the KubeAdm provider
The nodes need to be prepared with a fresh installation of your OS. Ubuntu Server 22.04.2 was used for this example.
Have a look at the provided OSP for Ubuntu under ./manifests/01_osp-ubuntu-edge.yaml to see the specific changes necessary in this context. Most notably, updating the hostname was turned off, as this broke Ubuntu's network config. So make sure your machine gets a suitable hostname during installation.
If you need a different OSP, grab one of the default ones, store it under ./manifests/ and adapt it to your needs.
Then make sure 02_machinedeployment.yaml references the correct OSP.
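A quick way to double-check that reference is a plain text search. The OSP name "osp-ubuntu-edge" below is an assumption derived from the manifest filename; verify it against the metadata in ./manifests/01_osp-ubuntu-edge.yaml.

```shell
# Check that the MachineDeployment references the intended OSP.
# "osp-ubuntu-edge" is an assumed resource name -- confirm it in
# ./manifests/01_osp-ubuntu-edge.yaml before relying on this check.
grep -n "osp-ubuntu-edge" ./manifests/02_machinedeployment.yaml
```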
The following provisioning script is executed on your local machine and needs SSH access to the remote machine. If not already done, please add your public SSH key to the remote machine.
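One way to install the key (the key path, user, and IP are placeholders for your environment):

```shell
# Copy your public key to the remote machine; this prompts for the remote
# user's password once. Key path, user, and IP below are placeholders.
ssh-copy-id -i ~/.ssh/id_ed25519.pub ubuntu@192.0.2.10
```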
- Create a user-cluster using the KubeAdm provider.
- Download the kubeconfig file from KKP. If using the UI, there is a green "Get Kubeconfig" button in the top-right corner. If using the API, go to https://<kkp-domain>/api/v2/projects/<project-id>/clusters/<cluster-id>/kubeconfig.
- Set the following env var:

  $ export KUBECONFIG=$(pwd)/kubeconfig-admin-xyz
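If you prefer the API route, the download can also be scripted. A minimal sketch, in which all identifiers are placeholders and KKP_TOKEN is assumed to hold a valid bearer token for your KKP installation:

```shell
# All values below are placeholders for your environment; KKP_TOKEN is
# assumed to be a valid KKP API bearer token.
KKP_DOMAIN="kkp.example.com"
PROJECT_ID="your-project-id"
CLUSTER_ID="your-cluster-id"

# Download the kubeconfig for the user-cluster and point kubectl at it.
curl -sf -H "Authorization: Bearer ${KKP_TOKEN}" \
  "https://${KKP_DOMAIN}/api/v2/projects/${PROJECT_ID}/clusters/${CLUSTER_ID}/kubeconfig" \
  -o "kubeconfig-admin-${CLUSTER_ID}"
export KUBECONFIG="$(pwd)/kubeconfig-admin-${CLUSTER_ID}"
```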
Now we are ready to execute the provisioning script.
$ make bootstrap
It will ask you for the IP and username of the remote machine.
First, it downloads the provisioning steps (as a cloud-init file) from KKP.
Then, it uploads this cloud-init file, alongside a remote-provisioning script to the remote machine.
After that's done, the remote-provisioning script is executed remotely via SSH. It does little more than run cloud-init --file <cloud-init file> init.
Finally, kubectl is used to approve pending certificate signing requests (the certificates used for secure communication between the kubelet and the Kubernetes API).
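Should you ever need to do this step by hand, pending CSRs can be inspected and approved with plain kubectl. Note that the one-liner approves every listed CSR, which is fine in this setup (only the new node's kubelet certs are pending) but should be used with care elsewhere:

```shell
# Show all certificate signing requests; freshly joined nodes show up
# with condition "Pending".
kubectl get csr

# Approve all listed CSRs in one go.
kubectl get csr -o name | xargs -r kubectl certificate approve
```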
After a short while, the new node should appear:
$ kubectl get nodes