# [YUNIKORN-1779] Deduplicate the deployment files #350

43 changes (24 additions & 19 deletions) in `docs/developer_guide/deployment.md`
The easiest way to deploy YuniKorn is to leverage our [helm charts](https://hub.
you can find the guide [here](get_started/get_started.md). This document describes the manual process to deploy YuniKorn
scheduler and admission controller. It is primarily intended for developers.

**Note** The primary source of deployment information is the Helm chart, which can be found at [yunikorn-release](https://github.com/apache/yunikorn-release/). Manual deployment may lead to out-of-sync configurations; see [deployments/scheduler](https://github.com/apache/yunikorn-k8shim/tree/master/deployments/scheduler).
**Note** Users are required to create these files manually. A script is available in the `yunikorn-release` repo which uses the `helm template` command and performs some filtering operations to generate these files.
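As a rough sketch of what that generation involves (the chart path and output file below are assumptions, and the real script adds filtering on top):
```
# Hypothetical sketch only: render the chart to plain YAML with helm template.
# The actual create-deployment-yaml.sh in yunikorn-release performs extra filtering.
helm template yunikorn ./helm-charts/yunikorn --namespace yunikorn > rendered.yaml
```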

## Build docker image

The build will produce an image. The image will be tagged with a default version.

**Note** the latest YuniKorn images on Docker Hub are no longer updated due to ASF policy. Hence, you should build both the scheduler image and the web image locally before deploying them.
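A sketch of the local build step, assuming a `make image` target in each repository (the `REGISTRY` and `VERSION` variables are assumptions; verify against each project's Makefile):
```
# Hypothetical sketch: build the scheduler and web images locally.
# Variable names are assumptions; check each Makefile before running.
make -C yunikorn-k8shim image REGISTRY=local VERSION=latest
make -C yunikorn-web image REGISTRY=local VERSION=latest
```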

**Note** the image tag includes your build architecture. For Intel, it would be `amd64` and for Mac M1, it would be `arm64`. However, the YAML files generated by the script do not automatically detect the system architecture for image tags, so you need to adjust the image tags or configuration files manually as needed.
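For example, a one-line edit like the following could retag the generated deployment for `arm64` (the tag format is an assumption; inspect the image lines in the generated YAML first):
```
# Hypothetical sketch: swap the amd64 image tag for arm64 in the generated file.
# The tag format is an assumption; check the generated YAML before editing.
sed -i.bak 's/-amd64-/-arm64-/g' helm-charts/yunikorn/deployment/deployment.yaml
```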

## Create deployment file

Under the project root of `yunikorn-release`, run the following command to create the YAML files for deployment:
```
./helm-charts/yunikorn/create-deployment-yaml.sh
```

Once it finishes, the deployment files are located under `helm-charts/yunikorn/deployment`.
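Based on the files referenced in the steps below, the generated directory should contain at least the following (the listing is illustrative, inferred from the commands in this guide):
```
ls helm-charts/yunikorn/deployment
# Illustrative contents, inferred from the commands below:
#   admission-controller-deployment.yaml  admission-controller-rbac.yaml
#   admission-controller-secrets.yaml     deployment.yaml
#   plugin.yaml  rbac.yaml  yunikorn-defaults.yaml
```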

## Setup RBAC for Scheduler
In the example, RBAC is configured for the `yunikorn` namespace.

Under the project root of `yunikorn-release`, the first step is to create the RBAC role for the scheduler:
```
kubectl create -f helm-charts/yunikorn/deployment/rbac.yaml
```
The role is a requirement on current versions of Kubernetes.
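A quick way to confirm the RBAC objects were created (the object names are assumptions based on the chart's conventions):
```
# Hypothetical check: look for YuniKorn-related RBAC objects.
kubectl get serviceaccounts -n yunikorn
kubectl get clusterrolebindings | grep -i yunikorn
```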

## Create/Update the ConfigMap
This must be done before deploying the scheduler. It requires a correctly set up Kubernetes environment. This Kubernetes environment can be either local or remote.
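For example, to confirm which cluster `kubectl` is currently pointing at:
```
# Show the Kubernetes context kubectl is currently configured against.
kubectl config current-context
```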

- modify the content of the `yunikorn-defaults.yaml` file as needed, and create the ConfigMap in Kubernetes:
```
kubectl create configmap yunikorn-configs --from-file=helm-charts/yunikorn/deployment/yunikorn-defaults.yaml
```
- Or update the ConfigMap in Kubernetes:
```
kubectl create configmap yunikorn-configs --from-file=helm-charts/yunikorn/deployment/yunikorn-defaults.yaml -o yaml --dry-run=client | kubectl apply -f -
```
- check if the ConfigMap was created/updated correctly:
```
kubectl describe configmaps yunikorn-configs
```

## Deploy the Scheduler

The scheduler can be deployed with the following command:
```
kubectl create -f helm-charts/yunikorn/deployment/deployment.yaml
```

The deployment will run 2 containers from your pre-built docker images in 1 pod: the scheduler and the web interface.
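To verify the deployment, a quick check (assuming the `yunikorn` namespace used in the RBAC step):
```
# Both containers should report READY 2/2 once the pod is up.
kubectl get pods -n yunikorn
```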

Alternatively, the scheduler can be deployed as a K8S scheduler plugin:
```
kubectl create -f helm-charts/yunikorn/deployment/plugin.yaml
```

The pod is deployed as a customized scheduler; it takes responsibility for scheduling pods that explicitly specify `schedulerName: yunikorn` in the pod spec. In addition to the `schedulerName`, you will also have to add a label `applicationId` to the pod.
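A minimal sketch of such a pod spec (the pod name and `applicationId` value are hypothetical):
```
# Hypothetical example: a pod that hands scheduling to YuniKorn.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: sleep-example            # hypothetical pod name
  labels:
    applicationId: my-app-0001   # hypothetical application ID
spec:
  schedulerName: yunikorn
  containers:
    - name: sleep
      image: busybox:stable
      command: ["sleep", "3600"]
EOF
```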
Note: Admission controller abstracts the addition of `schedulerName` and `applicationId` from the user.

## Setup RBAC for Admission Controller

Before the admission controller is deployed, we must create its RBAC role.

```
kubectl create -f helm-charts/yunikorn/deployment/admission-controller-rbac.yaml
```

## Create the Secret

Since the admission controller intercepts calls to the API server to validate/mutate incoming requests, we must deploy an empty secret
used by the webhook server to store TLS certificates and keys.

```
kubectl create -f helm-charts/yunikorn/deployment/admission-controller-secrets.yaml
```

## Deploy the Admission Controller

Now we can deploy the admission controller as a service. This will automatically validate/modify incoming requests and objects in accordance with the [example in Deploy the Scheduler](#Deploy-the-Scheduler).

```
kubectl create -f helm-charts/yunikorn/deployment/admission-controller-deployment.yaml
```
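As a hedged sanity check, a freshly created pod should come back with `schedulerName: yunikorn` injected by the webhook (the test pod below is throwaway and hypothetical):
```
# Hypothetical check: create a throwaway pod, then confirm the webhook
# injected schedulerName: yunikorn into its spec.
kubectl run webhook-test --image=busybox:stable --restart=Never -- sleep 60
kubectl get pod webhook-test -o jsonpath='{.spec.schedulerName}'
kubectl delete pod webhook-test
```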

## Access to the web UI