Add Cluster API deployment method
Signed-off-by: Cristiano Colangelo <[email protected]>
criscola committed Nov 18, 2022
1 parent 077a6e6 commit 63ea8d3
Showing 4 changed files with 274 additions and 0 deletions.
132 changes: 132 additions & 0 deletions telemetry-aware-scheduling/deploy/cluster-api/README.md
@@ -0,0 +1,132 @@
# Cluster API deployment

## Introduction

Cluster API is a Kubernetes sub-project focused on providing declarative APIs and tooling to simplify provisioning, upgrading, and operating multiple Kubernetes clusters. [Learn more](https://cluster-api.sigs.k8s.io/introduction.html).

This folder contains an automated and declarative way of deploying the Telemetry Aware Scheduler using Cluster API. We make use of the [ClusterResourceSet feature](https://cluster-api.sigs.k8s.io/tasks/experimental-features/cluster-resource-set.html) to automatically apply a set of resources. Note that you must enable its feature gate before running `clusterctl init` (with `export EXP_CLUSTER_RESOURCE_SET=true`), as shown below.
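
For example, on a fresh management cluster (the infrastructure provider below is illustrative; substitute your own):

```bash
# Enable the ClusterResourceSet feature gate, then initialize Cluster API
export EXP_CLUSTER_RESOURCE_SET=true
clusterctl init --infrastructure docker  # e.g. the CAPD provider used for local testing
```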

## Requirements

- A management cluster provisioned in your infrastructure of choice. See [Cluster API Quickstart](https://cluster-api.sigs.k8s.io/user/quick-start.html).
- Kubernetes v1.22 or greater (tested on Kubernetes v1.25).

## Provision clusters with TAS installed using Cluster API

We will provision a cluster with the TAS installed using Cluster API.

1. In your management cluster, with all the environment variables needed to generate cluster definitions set, run for example:

```bash
clusterctl generate cluster scheduling-dev-wkld \
--kubernetes-version v1.25.0 \
--control-plane-machine-count=1 \
--worker-machine-count=3 \
> your-manifests.yaml
```

Be aware that you will need to install a CNI such as Calico before the cluster is usable. You can automate this
step with ClusterResourceSets, in the same way the TAS resources are applied below; a sketch follows.
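
As a sketch, a CNI manifest can be packaged into a ConfigMap for a ClusterResourceSet just like the TAS resources below (the Calico version and URL are assumptions for illustration):

```bash
# Download a CNI manifest and wrap it in a ConfigMap for a ClusterResourceSet
curl -LO https://raw.githubusercontent.com/projectcalico/calico/v3.24.5/manifests/calico.yaml
kubectl create configmap calico-configmap --from-file=./calico.yaml -o yaml --dry-run=client > calico-configmap.yaml
# Reference calico-configmap from a ClusterResourceSet that targets the same cluster label
```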

2. Merge the contents of the resources provided in `cluster-patch.yaml` and `kubeadmcontrolplane-patch.yaml` into
`your-manifests.yaml`.

If you move the `KubeadmControlPlane` into its own file, you can merge in the patch with the convenient `yq` utility:

> Note that if you are already using kubeadm patches, your patch directory must coincide with
> `directory: /tmp/kubeadm/patches`, otherwise the property will be overwritten.
```bash
yq eval-all '. as $item ireduce ({}; . *+ $item)' your-own-kubeadmcontrolplane.yaml kubeadmcontrolplane-patch.yaml > final-kubeadmcontrolplane.yaml
```

The new config will:
- Configure TLS certificates for the extender
- Change the `dnsPolicy` of the scheduler to `ClusterFirstWithHostNet`
- Place the `KubeSchedulerConfiguration` on the control plane nodes and pass the corresponding `--config` flag to the scheduler.

You will also need to add a label to the `Cluster` resource of your new cluster so that the ClusterResourceSets can
target it (see `cluster-patch.yaml`): simply add the label `scheduler: tas` to the `Cluster` resource in `your-manifests.yaml`.
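
If you prefer not to edit the file by hand, a `yq` (v4) one-liner can add the label to just the `Cluster` document; a sketch, assuming `your-manifests.yaml` from step 1:

```bash
# Add the scheduler: tas label only to the Cluster document in the multi-document file
yq e -i '(select(.kind == "Cluster") | .metadata.labels.scheduler) = "tas"' your-manifests.yaml
```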

3. Prepare the Helm charts of the various components and join the TAS manifests together for convenience:

First, under `telemetry-aware-scheduling/deploy/charts`, tweak the charts if you need to (e.g.
additional metric scraping configurations), then render them:

```bash
helm template ../charts/prometheus_node_exporter_helm_chart/ > prometheus-node-exporter.yaml
helm template ../charts/prometheus_helm_chart/ > prometheus.yaml
helm template ../charts/prometheus_custom_metrics_helm_chart > prometheus-custom-metrics.yaml
```
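
If your tweaks are limited to chart values, Helm's standard value overrides also work when rendering; `my-values.yaml` is an illustrative name:

```bash
helm template ../charts/prometheus_helm_chart/ -f my-values.yaml > prometheus.yaml
```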

You need to add Namespace resources, otherwise applying the rendered manifests will fail. Prepend the following to `prometheus.yaml`:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: monitoring
  labels:
    name: monitoring
```
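
If you prefer the shell to an editor, one way to prepend is shown below, assuming the snippet above was saved as `monitoring-namespace.yaml`:

```bash
# Concatenate the Namespace, a document separator, and the rendered chart
cat monitoring-namespace.yaml <(echo '---') prometheus.yaml > tmp.yaml && mv tmp.yaml prometheus.yaml
```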

Prepend the following to `prometheus-custom-metrics.yaml`:

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: custom-metrics
  labels:
    name: custom-metrics
```

The custom metrics adapter and the TAS deployment require TLS to be configured with a certificate and key.
Information on how to generate correctly signed certs in Kubernetes can be found [here](https://github.com/kubernetes-sigs/apiserver-builder-alpha/blob/master/docs/concepts/auth.md).
The files `serving-ca.crt` and `serving-ca.key` should be in the current working directory.
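
For a quick test-only setup, a self-signed pair can be generated as below; this is a sketch and not a substitute for the properly signed certs described in the guide above:

```bash
# Self-signed certificate and key for testing only
openssl req -x509 -newkey rsa:4096 -sha256 -nodes -days 365 \
  -keyout serving-ca.key -out serving-ca.crt \
  -subj "/CN=tas-service.default.svc.cluster.local"
```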

Run the following:

```bash
kubectl -n custom-metrics create secret tls cm-adapter-serving-certs --cert=serving-ca.crt --key=serving-ca.key -o yaml --dry-run=client > custom-metrics-tls-secret.yaml
kubectl -n default create secret tls extender-secret --cert=serving-ca.crt --key=serving-ca.key -o yaml --dry-run=client > tas-tls-secret.yaml
```

**Attention: do not commit the TLS certificate and private key to any Git repository, as this is bad security practice! Make sure to wipe them from your workstation after applying the corresponding Secrets to your cluster.**

You also need the TAS manifests (Deployment, Policy CRD and RBAC accounts) and the extender's "configmapgetter"
ClusterRole. Join the TAS manifests together so that they fit into a single ConfigMap for convenience:

```bash
yq '.' ../tas-*.yaml > tas.yaml
```
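
To sanity-check the merged file, you can list the document kinds (`yq` v4):

```bash
yq e '.kind' tas.yaml  # expect e.g. Deployment, CustomResourceDefinition, ClusterRole, ...
```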

4. Create and apply the ConfigMaps

```bash
kubectl create configmap custom-metrics-tls-secret-configmap --from-file=./custom-metrics-tls-secret.yaml -o yaml --dry-run=client > custom-metrics-tls-secret-configmap.yaml
kubectl create configmap custom-metrics-configmap --from-file=./prometheus-custom-metrics.yaml -o yaml --dry-run=client > custom-metrics-configmap.yaml
kubectl create configmap prometheus-configmap --from-file=./prometheus.yaml -o yaml --dry-run=client > prometheus-configmap.yaml
kubectl create configmap prometheus-node-exporter-configmap --from-file=./prometheus-node-exporter.yaml -o yaml --dry-run=client > prometheus-node-exporter-configmap.yaml
kubectl create configmap tas-configmap --from-file=./tas.yaml -o yaml --dry-run=client > tas-configmap.yaml
kubectl create configmap tas-tls-secret-configmap --from-file=./tas-tls-secret.yaml -o yaml --dry-run=client > tas-tls-secret-configmap.yaml
kubectl create configmap extender-configmap --from-file=../extender-configuration/configmap-getter.yaml -o yaml --dry-run=client > extender-configmap.yaml
```

Apply to the management cluster:

```bash
# kubectl apply -f does not expand globs itself, so apply each rendered ConfigMap in turn
for f in *-configmap.yaml; do kubectl apply -f "$f"; done
```
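
You can confirm they landed in the management cluster before wiring them up:

```bash
kubectl get configmaps | grep -E 'tas|prometheus|custom-metrics|extender'
```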

5. Apply the ClusterResourceSets

The ClusterResourceSet resources are provided for you in `clusterresourcesets.yaml`.
Apply them to the management cluster with `kubectl apply -f clusterresourcesets.yaml`.
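
Once applied, you can verify them with:

```bash
kubectl get clusterresourcesets
```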

6. Apply the cluster manifests

Finally, apply your manifests with `kubectl apply -f your-manifests.yaml`.
The Telemetry Aware Scheduler will be running on your new cluster.
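
As a quick check, fetch the workload cluster kubeconfig and look for the deployed components (cluster name from step 1):

```bash
clusterctl get kubeconfig scheduling-dev-wkld > scheduling-dev-wkld.kubeconfig
kubectl --kubeconfig scheduling-dev-wkld.kubeconfig get pods -A | grep -E 'tas|prometheus|custom-metrics'
```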

You can test whether the scheduler actually works by following this guide:
[Health Metric Example](https://github.com/intel/platform-aware-scheduling/blob/25a646ece15aaf4c549d8152c4ffbbfc61f8a009/telemetry-aware-scheduling/docs/health-metric-example.md)
5 changes: 5 additions & 0 deletions telemetry-aware-scheduling/deploy/cluster-api/cluster-patch.yaml
@@ -0,0 +1,5 @@
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  labels:
    scheduler: tas
83 changes: 83 additions & 0 deletions telemetry-aware-scheduling/deploy/cluster-api/clusterresourcesets.yaml
@@ -0,0 +1,83 @@
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: prometheus
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: prometheus-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: prometheus-node-exporter
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: prometheus-node-exporter-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: custom-metrics
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: custom-metrics-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: custom-metrics-tls-secret
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: custom-metrics-tls-secret-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: tas
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: tas-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: tas-tls-secret
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: tas-tls-secret-configmap
---
apiVersion: addons.cluster.x-k8s.io/v1alpha3
kind: ClusterResourceSet
metadata:
  name: extender
spec:
  clusterSelector:
    matchLabels:
      scheduler: tas
  resources:
    - kind: ConfigMap
      name: extender-configmap
54 changes: 54 additions & 0 deletions telemetry-aware-scheduling/deploy/cluster-api/kubeadmcontrolplane-patch.yaml
@@ -0,0 +1,54 @@
apiVersion: controlplane.cluster.x-k8s.io/v1beta1
kind: KubeadmControlPlane
spec:
  kubeadmConfigSpec:
    files:
      - path: /etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml
        content: |
          apiVersion: kubescheduler.config.k8s.io/v1
          kind: KubeSchedulerConfiguration
          clientConnection:
            kubeconfig: /etc/kubernetes/scheduler.conf
          extenders:
            - urlPrefix: "https://tas-service.default.svc.cluster.local:9001"
              prioritizeVerb: "scheduler/prioritize"
              filterVerb: "scheduler/filter"
              weight: 1
              enableHTTPS: true
              managedResources:
                - name: "telemetry/scheduling"
                  ignoredByScheduler: true
              ignorable: true
              tlsConfig:
                insecure: false
                certFile: "/host/certs/client.crt"
                keyFile: "/host/certs/client.key"
      - path: /tmp/kubeadm/patches/kube-scheduler+json.json
        content: |-
          [
            {
              "op": "add",
              "path": "/spec/dnsPolicy",
              "value": "ClusterFirstWithHostNet"
            }
          ]
    clusterConfiguration:
      scheduler:
        extraArgs:
          config: "/etc/kubernetes/schedulerconfig/scheduler-componentconfig.yaml"
        extraVolumes:
          - hostPath: "/etc/kubernetes/schedulerconfig"
            mountPath: "/etc/kubernetes/schedulerconfig"
            name: schedulerconfig
          - hostPath: "/etc/kubernetes/pki/ca.key"
            mountPath: "/host/certs/client.key"
            name: cacert
          - hostPath: "/etc/kubernetes/pki/ca.crt"
            mountPath: "/host/certs/client.crt"
            name: clientcert
    initConfiguration:
      patches:
        directory: /tmp/kubeadm/patches
    joinConfiguration:
      patches:
        directory: /tmp/kubeadm/patches
