A Helm chart for OpenFunction on Kubernetes
| Name | Email | Url |
| ---- | ----- | --- |
| wangyifei | [email protected] |  |
Kubernetes: >=v1.23.0-0
| Repository | Name | Version | AppVersion |
|------------|------|---------|------------|
| file://charts/knative-serving | knative-serving | 1.3.2 | 1.3.2 |
| file://charts/shipwright-build | shipwright-build | 0.10.0 | 0.10.0 |
| file://charts/tekton-pipelines | tekton-pipelines | 0.37.2 | 0.37.2 |
| https://charts.bitnami.com/bitnami | contour | 10.2.2 | 1.23.3 |
| https://dapr.github.io/helm-charts/ | dapr | 1.11.3 | 1.11.3 |
| https://kedacore.github.io/charts | keda | 2.11.2 | 2.11.2 |
Ensure Helm is initialized in your Kubernetes cluster. For more details on initializing Helm, read the Helm docs.
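If you want a quick sanity check that your Helm client and cluster are reachable before proceeding (nothing chart-specific, just standard Helm and kubectl commands), something like this works:

```shell
# Show the Helm client version (the notes below assume Helm v3 behaviour)
helm version

# Confirm kubectl talks to the cluster you intend to install into
kubectl get nodes
```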
Run the following commands to add the OpenFunction chart repository and refresh your local index:

    helm repo add openfunction https://openfunction.github.io/charts/
    helm repo update
You then have several options for setting up OpenFunction. You can choose to:
Install all components:

    kubectl create namespace openfunction
    helm install openfunction openfunction/openfunction -n openfunction
Install Serving only (without build):

    kubectl create namespace openfunction
    helm install openfunction --set global.ShipwrightBuild.enabled=false --set global.TektonPipelines.enabled=false openfunction/openfunction -n openfunction
Install Knative sync runtime only:

    kubectl create namespace openfunction
    helm install openfunction --set global.Keda.enabled=false openfunction/openfunction -n openfunction
Install KedaHttp sync runtime only:

    kubectl create namespace openfunction
    helm install openfunction --set global.KnativeServing.enabled=false openfunction/openfunction -n openfunction
    helm repo add kedacore https://kedacore.github.io/charts
    helm repo update
    helm install http-add-on kedacore/keda-add-ons-http --create-namespace -n keda
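After these commands you can check that the KEDA HTTP add-on came up; it was installed into the keda namespace by the --create-namespace flag above:

```shell
# List the add-on pods; pod names vary by add-on version
kubectl get pods -n keda
```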
Install OpenFunction async runtime only:

    kubectl create namespace openfunction
    helm install openfunction --set global.Contour.enabled=false --set global.KnativeServing.enabled=false openfunction/openfunction -n openfunction
Verify the installation and make sure the OpenFunction pods are up and running:

    kubectl get po -n openfunction
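If you prefer to block until the controller is ready, something like the following can work; the control-plane=controller-manager label is an assumption based on common kubebuilder conventions, so adjust the selector to match the labels in your release:

```shell
# Wait up to five minutes for the OpenFunction controller pod to report Ready
kubectl wait pod -n openfunction -l control-plane=controller-manager \
  --for=condition=Ready --timeout=300s
```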
To uninstall/delete the openfunction release:

    helm uninstall openfunction -n openfunction
To upgrade an existing release to a newer chart version, run:

    helm upgrade [RELEASE_NAME] openfunction/openfunction -n openfunction --no-hooks

With Helm v3, CRDs created by this chart are not updated by default and should be updated manually. Consult also the Helm documentation on CRDs.

See helm upgrade for command documentation.
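As a concrete illustration of the manual CRD update, you can apply the CRD manifest for the chart version you are moving to before upgrading the release; the v1.2.0 manifest URL below is the one referenced in the upgrade notes that follow, so substitute the version you actually need:

```shell
# Apply the OpenFunction CRDs for the target version first...
kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v1.2.0/openfunction.yaml

# ...then upgrade the release itself (hooks skipped, as recommended above)
helm upgrade openfunction openfunction/openfunction -n openfunction --no-hooks
```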
To move to a release that ships the v1.2.0 CRDs, first uninstall the old openfunction release:

    helm uninstall openfunction -n openfunction

Then apply the new OpenFunction CRDs:

    kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v1.2.0/openfunction.yaml

Finally, install the new version of the chart:

    helm repo update
    helm install openfunction openfunction/openfunction -n openfunction
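Either before or after reinstalling, you can confirm that the OpenFunction CRDs are registered; this is just a generic check, not part of the chart:

```shell
# List the CRDs served under the openfunction.io API groups
kubectl get crd | grep openfunction.io
```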
The steps are the same for a release that ships the v1.1.0 CRDs. First, uninstall the old openfunction release:

    helm uninstall openfunction -n openfunction

Then apply the new OpenFunction CRDs:

    kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v1.1.0/openfunction.yaml

Finally, install the new version of the chart:

    helm repo update
    helm install openfunction openfunction/openfunction -n openfunction
Likewise for a release that ships the v1.0.0 CRDs. First, uninstall the old openfunction release:

    helm uninstall openfunction -n openfunction

Then apply the new OpenFunction CRDs:

    kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v1.0.0/openfunction.yaml

Finally, install the new version of the chart:

    helm repo update
    helm install openfunction openfunction/openfunction -n openfunction
You can also upgrade the installed release in place:

    helm upgrade openfunction openfunction/openfunction -n openfunction --no-hooks
There is a breaking change when upgrading from v0.6.0 to v0.7.x that requires additional manual operations.

First, you'll need to uninstall the old openfunction release:

    helm uninstall openfunction -n openfunction
Confirm that the component namespaces have been deleted; this may take a while:

    kubectl get ns -o=jsonpath='{range .items[?(@.metadata.annotations.meta\.helm\.sh/release-name=="openfunction")]}{.metadata.name}: {.status.phase}{"\n"}{end}'
If the knative-serving namespace stays in the Terminating state for a long time, try running the following command and removing the finalizers:

    kubectl edit ingresses.networking.internal.knative.dev -n knative-serving
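If you prefer to script the finalizer removal rather than editing each resource by hand, a sketch like the following (which simply clears all finalizers on those ingress resources; use with care) should do the same thing:

```shell
# Clear finalizers on every Knative internal ingress stuck in knative-serving
kubectl get ingresses.networking.internal.knative.dev -n knative-serving -o name \
  | xargs -I{} kubectl patch {} -n knative-serving --type merge -p '{"metadata":{"finalizers":[]}}'
```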
Then you'll need to upgrade to the new OpenFunction CRDs:

    kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v0.7.0/openfunction.yaml
You also need to upgrade the dependent components' CRDs; you only need to deal with the components included in the existing release.

- knative-serving CRDs:

      kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v0.7.0/knative-serving.yaml

- shipwright-build CRDs:

      kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v0.7.0/shipwright-build.yaml

- tekton-pipelines CRDs:

      kubectl apply -f https://openfunction.sh1a.qingstor.com/crds/v0.7.0/tekton-pipelines.yaml
Finally, install the new version of the chart:

    helm repo update
    helm install openfunction openfunction/openfunction -n openfunction
| Key | Type | Default | Description |
|-----|------|---------|-------------|
| config.daprProxyImage | string | `"openfunction/dapr-proxy:v0.1.1"` |  |
| config.eventsourceHandlerImage | string | `"openfunction/eventsource-handler:v4"` |  |
| config.knativeServingConfigFeaturesName | string | `"config-features"` |  |
| config.knativeServingNamespace | string | `"knative-serving"` |  |
| config.tracing | string | `"enabled: false\nprovider:\n name: \"skywalking\"\n oapServer: \"localhost:xxx\"\ntags:\n func: function-with-tracing\n layer: faas\n tag1: value1\n tag2: value2\nbaggage:\n key: sw8-correlation\n value: \"base64(string key):base64(string value),base64(string key2):base64(string value2)\"\n"` |  |
| config.triggerHandlerImage | string | `"openfunction/trigger-handler:v4"` |  |
| contour.configInline.gateway.controllerName | string | `"projectcontour.io/projectcontour/contour"` |  |
| contour.contour.ingressClass.name | string | `"contour"` |  |
| contour.fullnameOverride | string | `"contour"` |  |
| contour.namespaceOverride | string | `"projectcontour"` |  |
| controllerManager.kubeRbacProxy.image.repository | string | `"openfunction/kube-rbac-proxy"` |  |
| controllerManager.kubeRbacProxy.image.tag | string | `"v0.8.0"` |  |
| controllerManager.openfunction.image.repository | string | `"openfunction/openfunction"` |  |
| controllerManager.openfunction.image.tag | string | `"v1.2.0"` |  |
| controllerManager.openfunction.resources.limits.cpu | string | `"500m"` |  |
| controllerManager.openfunction.resources.limits.memory | string | `"500Mi"` |  |
| controllerManager.openfunction.resources.requests.cpu | string | `"100m"` |  |
| controllerManager.openfunction.resources.requests.memory | string | `"20Mi"` |  |
| controllerManager.replicas | int | `1` |  |
| global.Contour.enabled | bool | `true` |  |
| global.Dapr.enabled | bool | `true` |  |
| global.Keda.enabled | bool | `true` |  |
| global.KnativeServing.enabled | bool | `true` |  |
| global.ShipwrightBuild.enabled | bool | `true` |  |
| global.TektonPipelines.enabled | bool | `true` |  |
| keda.image.keda.repository | string | `"openfunction/keda"` |  |
| keda.image.keda.tag | string | `"2.11.2"` |  |
| keda.image.metricsApiServer.repository | string | `"openfunction/keda-metrics-apiserver"` |  |
| keda.image.metricsApiServer.tag | string | `"2.11.2"` |  |
| keda.image.webhooks.repository | string | `"openfunction/keda-admission-webhooks"` |  |
| keda.image.webhooks.tag | string | `"2.11.2"` |  |
| keda.resources.metricServer | object | `{"limits":{"cpu":1,"memory":"1000Mi"},"requests":{"cpu":"100m","memory":"100Mi"}}` | Manage [resource request & limits] of KEDA metrics apiserver pod |
| keda.resources.operator | object | `{"limits":{"cpu":1,"memory":"1000Mi"},"requests":{"cpu":"100m","memory":"100Mi"}}` | Manage [resource request & limits] of KEDA operator pod |
| keda.resources.webhooks | object | `{"limits":{"cpu":"50m","memory":"100Mi"},"requests":{"cpu":"10m","memory":"10Mi"}}` | Manage [resource request & limits] of KEDA admission webhooks pod |
| knative-serving.activator.activator.image.repository | string | `"openfunction/knative.dev-serving-cmd-activator"` |  |
| knative-serving.activator.activator.resources.limits.cpu | string | `"1"` |  |
| knative-serving.activator.activator.resources.limits.memory | string | `"600Mi"` |  |
| knative-serving.activator.activator.resources.requests.cpu | string | `"300m"` |  |
| knative-serving.activator.activator.resources.requests.memory | string | `"60Mi"` |  |
| knative-serving.autoscaler.autoscaler.image.repository | string | `"openfunction/knative.dev-serving-cmd-autoscaler"` |  |
| knative-serving.autoscaler.autoscaler.resources.limits.cpu | string | `"1"` |  |
| knative-serving.autoscaler.autoscaler.resources.limits.memory | string | `"1000Mi"` |  |
| knative-serving.autoscaler.autoscaler.resources.requests.cpu | string | `"100m"` |  |
| knative-serving.autoscaler.autoscaler.resources.requests.memory | string | `"100Mi"` |  |
| knative-serving.configDeployment.queueSidecarImage.repository | string | `"openfunction/knative.dev-serving-cmd-queue"` |  |
| knative-serving.controller.controller.image.repository | string | `"openfunction/knative.dev-serving-cmd-controller"` |  |
| knative-serving.controller.controller.resources.limits.cpu | string | `"1"` |  |
| knative-serving.controller.controller.resources.limits.memory | string | `"1000Mi"` |  |
| knative-serving.controller.controller.resources.requests.cpu | string | `"100m"` |  |
| knative-serving.controller.controller.resources.requests.memory | string | `"100Mi"` |  |
| knative-serving.defaultDomain.job.image.repository | string | `"openfunction/knative.dev-serving-cmd-default-domain"` |  |
| knative-serving.domainMapping.domainMapping.image.repository | string | `"openfunction/knative.dev-serving-cmd-domain-mapping"` |  |
| knative-serving.domainMapping.domainMapping.resources.limits.cpu | string | `"300m"` |  |
| knative-serving.domainMapping.domainMapping.resources.limits.memory | string | `"400Mi"` |  |
| knative-serving.domainMapping.domainMapping.resources.requests.cpu | string | `"30m"` |  |
| knative-serving.domainMapping.domainMapping.resources.requests.memory | string | `"40Mi"` |  |
| knative-serving.domainmappingWebhook.domainmappingWebhook.image.repository | string | `"openfunction/knative.dev-serving-cmd-domain-mapping-webhook"` |  |
| knative-serving.domainmappingWebhook.domainmappingWebhook.resources.limits.cpu | string | `"500m"` |  |
| knative-serving.domainmappingWebhook.domainmappingWebhook.resources.limits.memory | string | `"500Mi"` |  |
| knative-serving.domainmappingWebhook.domainmappingWebhook.resources.requests.cpu | string | `"100m"` |  |
| knative-serving.domainmappingWebhook.domainmappingWebhook.resources.requests.memory | string | `"100Mi"` |  |
| knative-serving.netContourController.controller.image.repository | string | `"openfunction/knative.dev-net-contour-cmd-controller"` |  |
| knative-serving.netContourController.controller.resources.limits.cpu | string | `"400m"` |  |
| knative-serving.netContourController.controller.resources.limits.memory | string | `"400Mi"` |  |
| knative-serving.netContourController.controller.resources.requests.cpu | string | `"40m"` |  |
| knative-serving.netContourController.controller.resources.requests.memory | string | `"40Mi"` |  |
| knative-serving.webhook.webhook.image.repository | string | `"openfunction/knative.dev-serving-cmd-webhook"` |  |
| knative-serving.webhook.webhook.resources.limits.cpu | string | `"500m"` |  |
| knative-serving.webhook.webhook.resources.limits.memory | string | `"500Mi"` |  |
| knative-serving.webhook.webhook.resources.requests.cpu | string | `"100m"` |  |
| knative-serving.webhook.webhook.resources.requests.memory | string | `"100Mi"` |  |
| kubernetesClusterDomain | string | `"cluster.local"` |  |
| managerConfig.controllerManagerConfigYaml.health.healthProbeBindAddress | string | `":8081"` |  |
| managerConfig.controllerManagerConfigYaml.leaderElection.leaderElect | bool | `true` |  |
| managerConfig.controllerManagerConfigYaml.leaderElection.resourceName | string | `"79f0111e.openfunction.io"` |  |
| managerConfig.controllerManagerConfigYaml.metrics.bindAddress | string | `"127.0.0.1:8080"` |  |
| managerConfig.controllerManagerConfigYaml.webhook.port | int | `9443` |  |
| metricsService.ports[0].name | string | `"https"` |  |
| metricsService.ports[0].port | int | `8443` |  |
| metricsService.ports[0].targetPort | string | `"https"` |  |
| metricsService.type | string | `"ClusterIP"` |  |
| revisionController.enable | bool | `false` |  |
| revisionController.image.pullPolicy | string | `"IfNotPresent"` |  |
| revisionController.image.repository | string | `"openfunction/revision-controller"` |  |
| revisionController.image.tag | string | `"v1.0.0"` |  |
| shipwright-build.shipwrightBuildController.shipwrightBuild.BUNDLE_CONTAINER_IMAGE.repository | string | `"openfunction/shipwright-bundle"` |  |
| shipwright-build.shipwrightBuildController.shipwrightBuild.GIT_CONTAINER_IMAGE.repository | string | `"openfunction/shipwright-io-build-git"` |  |
| shipwright-build.shipwrightBuildController.shipwrightBuild.MUTATE_IMAGE_CONTAINER_IMAGE.repository | string | `"openfunction/shipwright-mutate-image"` |  |
| shipwright-build.shipwrightBuildController.shipwrightBuild.WAITER_CONTAINER_IMAGE.repository | string | `"openfunction/shipwright-waiter"` |  |
| shipwright-build.shipwrightBuildController.shipwrightBuild.image.repository | string | `"openfunction/shipwright-shipwright-build-controller"` |  |
| tekton-pipelines.controller.tektonPipelinesController.entrypointImage.repository | string | `"openfunction/tektoncd-pipeline-cmd-entrypoint"` |  |
| tekton-pipelines.controller.tektonPipelinesController.gitImage.repository | string | `"openfunction/tektoncd-pipeline-cmd-git-init"` |  |
| tekton-pipelines.controller.tektonPipelinesController.gsutilImage.digest | string | `"sha256:27b2c22bf259d9bc1a291e99c63791ba0c27a04d2db0a43241ba0f1f20f4067f"` |  |
| tekton-pipelines.controller.tektonPipelinesController.gsutilImage.repository | string | `"openfunction/cloudsdktool-cloud-sdk"` |  |
| tekton-pipelines.controller.tektonPipelinesController.image.repository | string | `"openfunction/tektoncd-pipeline-cmd-controller"` |  |
| tekton-pipelines.controller.tektonPipelinesController.imagedigestExporterImage.repository | string | `"openfunction/tektoncd-pipeline-cmd-imagedigestexporter"` |  |
| tekton-pipelines.controller.tektonPipelinesController.kubeconfigWriterImage.repository | string | `"openfunction/tektoncd-pipeline-cmd-kubeconfigwriter"` |  |
| tekton-pipelines.controller.tektonPipelinesController.nopImage.repository | string | `"openfunction/tektoncd-pipeline-cmd-nop"` |  |
| tekton-pipelines.controller.tektonPipelinesController.prImage.repository | string | `"openfunction/tektoncd-pipeline-cmd-pullrequest-init"` |  |
| tekton-pipelines.controller.tektonPipelinesController.shellImage.digest | string | `"sha256:b16b57be9160a122ef048333c68ba205ae4fe1a7b7cc6a5b289956292ebf45cc"` |  |
| tekton-pipelines.controller.tektonPipelinesController.shellImage.repository | string | `"openfunction/distroless-base"` |  |
| tekton-pipelines.controller.tektonPipelinesController.shellImageWin.digest | string | `"sha256:b6d5ff841b78bdf2dfed7550000fd4f3437385b8fa686ec0f010be24777654d6"` |  |
| tekton-pipelines.controller.tektonPipelinesController.shellImageWin.repository | string | `"mcr.microsoft.com/powershell:nanoserver"` |  |
| tekton-pipelines.controller.tektonPipelinesController.workingdirinitImage.repository | string | `"openfunction/tektoncd-pipeline-cmd-workingdirinit"` |  |
| tekton-pipelines.controller.type | string | `"ClusterIP"` |  |
| tekton-pipelines.webhook.webhook.image.repository | string | `"openfunction/tektoncd-pipeline-cmd-webhook"` |  |
| webhookService.ports[0].port | int | `443` |  |
| webhookService.ports[0].targetPort | int | `9443` |  |
| webhookService.type | string | `"ClusterIP"` |  |
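Any of these values can be overridden at install or upgrade time with --set or a values file. As a small, purely illustrative example (the file name custom-values.yaml and the particular overrides are arbitrary), the snippet below disables the build components and runs two controller replicas:

```shell
# Write an example override file; the keys come from the values table above
cat > custom-values.yaml <<'EOF'
global:
  ShipwrightBuild:
    enabled: false
  TektonPipelines:
    enabled: false
controllerManager:
  replicas: 2
EOF

# Install the release with the overrides applied
helm install openfunction openfunction/openfunction -n openfunction -f custom-values.yaml
```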
Autogenerated from chart metadata using helm-docs v1.11.2