
Installing Tekton Pipelines

Use this page to add the component to an existing Kubernetes cluster.

Prerequisites

  1. A Kubernetes cluster. If you don't have an existing cluster, you can create one, for example on GKE:

    # Example cluster creation command on GKE
    gcloud container clusters create $CLUSTER_NAME \
      --zone=$CLUSTER_ZONE
  2. Grant cluster-admin permissions to the current user:

    kubectl create clusterrolebinding cluster-admin-binding \
    --clusterrole=cluster-admin \
    --user=$(gcloud config get-value core/account)

    See Role-based access control for more information.

Adding the Tekton Pipelines component

To add the Tekton Pipelines component to an existing cluster:

  1. Run the kubectl apply command to install Tekton Pipelines and its dependencies:

    kubectl apply --filename https://storage.googleapis.com/tekton-releases/latest/release.yaml

    (Previous versions are available at previous/$VERSION_NUMBER, for example https://storage.googleapis.com/tekton-releases/previous/0.2.0/release.yaml.)
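
    For example, to install the 0.2.0 release referenced above:

    kubectl apply --filename https://storage.googleapis.com/tekton-releases/previous/0.2.0/release.yaml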

  2. Run the kubectl get command to monitor the Tekton Pipelines components until all of the components show a STATUS of Running:

    kubectl get pods --namespace tekton-pipelines

    Tip: Instead of running the kubectl get command multiple times, you can append the --watch flag to view the components' status updates in real time. Press CTRL + C to exit watch mode.
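
    For example, the same command with the --watch flag appended:

    kubectl get pods --namespace tekton-pipelines --watch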

You are now ready to create and run Tekton Pipelines.

Installing Tekton Pipelines on OpenShift/MiniShift

The tekton-pipelines-controller service account needs the anyuid security context constraint in order to run the webhook pod.

See Security Context Constraints for more information.

  1. First, log in as a user with cluster-admin privileges. The following example uses the default system:admin user (admin:admin for MiniShift):

    # For MiniShift: oc login -u admin:admin
    oc login -u system:admin
  2. Run the following commands to set up the project/namespace, and to install Tekton Pipelines:

    oc new-project tekton-pipelines
    oc adm policy add-scc-to-user anyuid -z tekton-pipelines-controller
    oc apply --filename https://storage.googleapis.com/tekton-releases/latest/release.yaml

    See the OpenShift CLI documentation for an overview of the oc command-line tool.

  3. Run the oc get command to monitor the Tekton Pipelines components until all of the components show a STATUS of Running:

    oc get pods --namespace tekton-pipelines --watch

Configuring Tekton Pipelines

How are resources shared between tasks

Pipelines need a way to share resources between tasks. The two options are a persistent volume or a GCS storage bucket.

The PVC option can be configured using a ConfigMap named config-artifact-pvc with the following attribute (see the example below):

  • size: the size of the volume (5Gi by default)
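
For example, a minimal sketch of overriding the default size with kubectl (the 10Gi value is illustrative, and the ConfigMap is assumed to live in the tekton-pipelines namespace):

    # Create the config-artifact-pvc ConfigMap with a custom volume size.
    # If the ConfigMap already exists, edit it instead of creating it.
    kubectl create configmap config-artifact-pvc \
      --namespace tekton-pipelines \
      --from-literal=size=10Gi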

The GCS storage bucket can be configured using a ConfigMap named config-artifact-bucket with the following attributes (see the example after this list):

  • location: the address of the bucket (for example gs://mybucket)
  • bucket.service.account.secret.name: the name of the secret that will contain the credentials for the service account with access to the bucket
  • bucket.service.account.secret.key: the key in the secret that holds the required service account JSON.
  • It is recommended to configure the bucket with a retention policy so that shared files are deleted after a set period.
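
As a sketch, assuming a bucket named mybucket and an existing secret named gcs-creds holding a service_account.json key (all illustrative names), the bucket option could be configured with kubectl as follows:

    # Create the config-artifact-bucket ConfigMap pointing at the GCS bucket.
    # The secret name and key below must match an existing secret containing the
    # service account credentials; edit the ConfigMap instead if it already exists.
    kubectl create configmap config-artifact-bucket \
      --namespace tekton-pipelines \
      --from-literal=location=gs://mybucket \
      --from-literal=bucket.service.account.secret.name=gcs-creds \
      --from-literal=bucket.service.account.secret.key=service_account.json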

Both options provide the same functionality to the pipeline. The choice depends on your infrastructure: on some Kubernetes platforms, creating a persistent volume can be slower than uploading or downloading files to a bucket, and if the cluster runs in multiple zones, access to the persistent volume can fail.

Custom Releases

The release Task can be used to create a custom release of Tekton Pipelines. This can be useful for advanced users who need to configure the container images built and used by the Pipelines components.


Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License.