chore(docs): document how to set imagePullSecrets in baseJobTemplate #446

Merged Feb 11, 2025 · 6 commits
54 changes: 48 additions & 6 deletions charts/prefect-worker/README.md
@@ -222,9 +222,13 @@ serviceAccount:

### Configuring a Base Job Template on the Worker

If you want to define the [base job template](https://docs.prefect.io/concepts/work-pools/#base-job-template) of the worker and pass it as a value in this chart, you will need to do the following. **Note** if the `workPool` already exists, the base job template passed **will** be ignored.
The worker uses the [base job template](https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#base-job-template)
to create the Kubernetes job that executes your workflow. The base job template configuration can be modified by setting
`worker.config.baseJobTemplate.configuration`.

1. Define the base job template in a local file. To get a formatted template, run the following command & store locally in `base-job-template.json`
**Note**: if the target work pool (`config.workPool`) already exists, the base job template passed **will be ignored**.

1. Define the base job template in a local file. To get a formatted template, run the following command & store locally in `base-job-template.json`:

```bash
# you may need to install `prefect-kubernetes` first
pip install prefect-kubernetes
prefect work-pool get-default-base-job-template --type kubernetes > base-job-template.json
```

2. Modify the base job template as needed
2. Modify the base job template as needed. Keep in mind that modifications are not merged with the default template. The configuration
you provide will replace the default configuration entirely. See [modifying the base job template](#modifying-the-base-job-template)
for more information.

3. Install the chart as you usually would, making sure to use the `--set-file` flag to pass in the `base-job-template.json` file as a parameter:

```bash
helm install prefect-worker prefect/prefect-worker -f values.yaml --set-file worker.config.baseJobTemplate.configuration=base-job-template.json
```

#### Modifying the Base Job Template

Modifying the base job template replaces the default configuration entirely.
Put differently, any provided configuration is not merged with the default configuration.

For example, if you want to add an image pull secret to the base job template,
you would modify the `base-job-template.json` file to look like this:

```diff
{
  "job_configuration": {
    "job_manifest": {
      "spec": {
        "template": {
          "spec": {
+           "imagePullSecrets": [
+             {
+               "name": "my-pull-secret"
+             }
+           ]
          }
        }
      }
    }
  }
}
```

Here, you add `imagePullSecrets` into your existing configuration. Note that
the snippet is truncated for brevity. The full configuration should still be
provided.
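If you prefer to patch the template programmatically rather than edit the JSON by hand, a small script can insert the secret. This is a sketch under the assumption that the pod spec sits at the path shown in the snippet above; the helper name and the stand-in template are ours, not part of the chart:

```python
import json

def add_image_pull_secret(template: dict, secret_name: str) -> dict:
    """Insert an imagePullSecret into a base job template's pod spec."""
    # The pod spec lives under job_configuration.job_manifest.spec.template.spec.
    pod_spec = template["job_configuration"]["job_manifest"]["spec"]["template"]["spec"]
    pod_spec.setdefault("imagePullSecrets", []).append({"name": secret_name})
    return template

# Minimal stand-in for the real template; the actual file has many more keys.
template = {"job_configuration": {"job_manifest": {"spec": {"template": {"spec": {}}}}}}
patched = add_image_pull_secret(template, "my-pull-secret")
print(json.dumps(patched, indent=2))
```

In practice you would `json.load` the `base-job-template.json` file produced in step 1, patch it, and write it back before installing the chart.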

Once applied, you can see the entire base job template in the UI by navigating
to `Account settings` > `Work Pools` > your work pool > three-dot menu in the
top right corner > `Edit` > `Base Job Template` section > `Advanced` tab.

#### Updating the Base Job Template

If a base job template is set through Helm (via either `.Values.worker.config.baseJobTemplate.configuration` or `.Values.worker.config.baseJobTemplate.existingConfigMapName`), we'll run an optional `initContainer` that will sync the template configuration to the work pool named in `.Values.worker.config.workPool`.
If a base job template is set through Helm (via either `worker.config.baseJobTemplate.configuration` or `worker.config.baseJobTemplate.existingConfigMapName`), we'll run an optional `initContainer` that will sync the template configuration to the work pool named in `worker.config.workPool`.

Any time the base job template is updated, the `initContainer` will run `prefect work-pool update <work-pool-name> --base-job-template <template-json>` to sync the template to the API.

Please note that configuring the template via `baseJobTemplate.existingConfigMapName` requires a manual restart of the `prefect-worker` Deployment to kick off the `initContainer`; alternatively, a tool like [reloader](https://github.com/stakater/Reloader) can restart the associated Deployment automatically. By contrast, configuring the template via the `baseJobTemplate.configuration` value automatically rolls the Deployment on any update.
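For the ConfigMap route, the values look roughly like this. This is a sketch with placeholder names (`my-work-pool`, `my-base-job-template`); check the chart's values reference for the exact key the chart expects the template JSON to live under:

```yaml
worker:
  config:
    workPool: "my-work-pool"
    baseJobTemplate:
      # Name of a ConfigMap in the release namespace holding the template JSON
      existingConfigMapName: "my-base-job-template"
```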

## Troubleshooting

### Setting `.Values.worker.clusterUid`
### Setting `worker.clusterUid`

This chart attempts to generate a unique identifier for the cluster it is installing the worker on to use as metadata for your runs. Since Kubernetes [does not provide a "cluster ID" API](https://github.com/kubernetes/kubernetes/issues/44954), this chart will do so by [reading the `kube-system` namespace and parsing the immutable UID](https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-worker/templates/_helpers.tpl#L94-L105). [This mimics the functionality in the `prefect-kubernetes` library](https://github.com/PrefectHQ/prefect/blob/5f5427c410cd04505d7b2c701e2003f856044178/src/integrations/prefect-kubernetes/prefect_kubernetes/worker.py#L835-L859).

Expand All @@ -264,7 +306,7 @@ This chart does not offer a built-in way to assign these roles, as it does not m

> HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces \"kube-system\" is forbidden: User \"system:serviceaccount:prefect:prefect-worker\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"name":"kube-system","kind":"namespaces"},"code":403}

In many cases, these role additions may be entirely infeasible due to overall access limitations. As an alternative, this chart offers a hard-coded override via the `.Values.worker.clusterUid` value.
In many cases, these role additions may be entirely infeasible due to overall access limitations. As an alternative, this chart offers a hard-coded override via the `worker.clusterUid` value.

Set this value to a user-provided unique ID; this bypasses the `kube-system` namespace lookup and uses your provided value as the cluster ID instead. Be sure to set this value consistently across Prefect deployments that interact with the same cluster.
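For example (the identifier below is an arbitrary placeholder - any stable string you choose works):

```yaml
worker:
  # User-chosen stable identifier; bypasses the kube-system namespace lookup
  clusterUid: "my-cluster-prod-east"
```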

54 changes: 48 additions & 6 deletions charts/prefect-worker/README.md.gotmpl
@@ -222,9 +222,13 @@ serviceAccount:

### Configuring a Base Job Template on the Worker

If you want to define the [base job template](https://docs.prefect.io/concepts/work-pools/#base-job-template) of the worker and pass it as a value in this chart, you will need to do the following. **Note** if the `workPool` already exists, the base job template passed **will** be ignored.
The worker uses the [base job template](https://docs.prefect.io/v3/deploy/infrastructure-concepts/work-pools#base-job-template)
to create the Kubernetes job that executes your workflow. The base job template configuration can be modified by setting
`worker.config.baseJobTemplate.configuration`.

1. Define the base job template in a local file. To get a formatted template, run the following command & store locally in `base-job-template.json`
**Note**: if the target work pool (`config.workPool`) already exists, the base job template passed **will be ignored**.
> **Contributor Author:** Is this definitely true? In my testing, I was able to modify the base job template and each time it would update the configuration seen in the UI - even though I created the work pool before attaching the worker.
>
> **Contributor:** Hm, good question. It is true, I think - but maybe with the addition of the init container it is no longer true for our Helm charts.
>
> **Contributor Author:** I bet that's it, since we're directly using `prefect work-pool update <json-file>` now. I'll remove that warning.


1. Define the base job template in a local file. To get a formatted template, run the following command & store locally in `base-job-template.json`:

```bash
# you may need to install `prefect-kubernetes` first
pip install prefect-kubernetes
prefect work-pool get-default-base-job-template --type kubernetes > base-job-template.json
```

2. Modify the base job template as needed
2. Modify the base job template as needed. Keep in mind that modifications are not merged with the default template. The configuration
you provide will replace the default configuration entirely. See [modifying the base job template](#modifying-the-base-job-template)
for more information.

3. Install the chart as you usually would, making sure to use the `--set-file` flag to pass in the `base-job-template.json` file as a parameter:

```bash
helm install prefect-worker prefect/prefect-worker -f values.yaml --set-file worker.config.baseJobTemplate.configuration=base-job-template.json
```

#### Modifying the Base Job Template

Modifying the base job template replaces the default configuration entirely.
Put differently, any provided configuration is not merged with the default configuration.

For example, if you want to add an image pull secret to the base job template,
you would modify the `base-job-template.json` file to look like this:

```diff
{
  "job_configuration": {
    "job_manifest": {
      "spec": {
        "template": {
          "spec": {
+           "imagePullSecrets": [
+             {
+               "name": "my-pull-secret"
+             }
+           ]
          }
        }
      }
    }
  }
}
```

Here, you add `imagePullSecrets` into your existing configuration. Note that
the snippet is truncated for brevity. The full configuration should still be
provided.

Once applied, you can see the entire base job template in the UI by navigating
to `Account settings` > `Work Pools` > your work pool > three-dot menu in the
top right corner > `Edit` > `Base Job Template` section > `Advanced` tab.

#### Updating the Base Job Template

If a base job template is set through Helm (via either `.Values.worker.config.baseJobTemplate.configuration` or `.Values.worker.config.baseJobTemplate.existingConfigMapName`), we'll run an optional `initContainer` that will sync the template configuration to the work pool named in `.Values.worker.config.workPool`.
If a base job template is set through Helm (via either `worker.config.baseJobTemplate.configuration` or `worker.config.baseJobTemplate.existingConfigMapName`), we'll run an optional `initContainer` that will sync the template configuration to the work pool named in `worker.config.workPool`.

Any time the base job template is updated, the `initContainer` will run `prefect work-pool update <work-pool-name> --base-job-template <template-json>` to sync the template to the API.

Please note that configuring the template via `baseJobTemplate.existingConfigMapName` requires a manual restart of the `prefect-worker` Deployment to kick off the `initContainer`; alternatively, a tool like [reloader](https://github.com/stakater/Reloader) can restart the associated Deployment automatically. By contrast, configuring the template via the `baseJobTemplate.configuration` value automatically rolls the Deployment on any update.

## Troubleshooting

### Setting `.Values.worker.clusterUid`
### Setting `worker.clusterUid`

This chart attempts to generate a unique identifier for the cluster it is installing the worker on to use as metadata for your runs. Since Kubernetes [does not provide a "cluster ID" API](https://github.com/kubernetes/kubernetes/issues/44954), this chart will do so by [reading the `kube-system` namespace and parsing the immutable UID](https://github.com/PrefectHQ/prefect-helm/blob/main/charts/prefect-worker/templates/_helpers.tpl#L94-L105). [This mimics the functionality in the `prefect-kubernetes` library](https://github.com/PrefectHQ/prefect/blob/5f5427c410cd04505d7b2c701e2003f856044178/src/integrations/prefect-kubernetes/prefect_kubernetes/worker.py#L835-L859).

Expand All @@ -264,7 +306,7 @@ This chart does not offer a built-in way to assign these roles, as it does not m

> HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"namespaces \"kube-system\" is forbidden: User \"system:serviceaccount:prefect:prefect-worker\" cannot get resource \"namespaces\" in API group \"\" in the namespace \"kube-system\"","reason":"Forbidden","details":{"name":"kube-system","kind":"namespaces"},"code":403}

In many cases, these role additions may be entirely infeasible due to overall access limitations. As an alternative, this chart offers a hard-coded override via the `.Values.worker.clusterUid` value.
In many cases, these role additions may be entirely infeasible due to overall access limitations. As an alternative, this chart offers a hard-coded override via the `worker.clusterUid` value.

Set this value to a user-provided unique ID; this bypasses the `kube-system` namespace lookup and uses your provided value as the cluster ID instead. Be sure to set this value consistently across Prefect deployments that interact with the same cluster.
