## Installation

### Prerequisites

- An OpenShift Container Platform cluster, version 4.12 or later.
  The applications were tested on both managed and self-managed deployments.
- Adequate worker node capacity in the cluster for the Cloud Paks to be installed.
  Refer to the Cloud Pak documentation to determine the required capacity for the cluster.
- Cluster storage configured with storage classes supporting both RWO and RWX storage.
  The applications were tested with OpenShift Data Foundation, Rook Ceph, AWS EFS, and the built-in file storage in ROKS classic clusters.
### Install the OpenShift GitOps operator

#### Using the OCP console

- From the Administrator perspective, navigate to the OperatorHub page.
- Search for "Red Hat OpenShift GitOps." Click the tile, then click "Install."
- Keep the defaults in the wizard and click "Install."
- Wait for the operator to appear in the "Installed Operators" list. If it doesn't install correctly, you can check its status on the "Installed Operators" page in the `openshift-operators` namespace (a terminal check is sketched below).
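From a terminal, the operator's ClusterServiceVersion (CSV) reports the same install status; a quick check, assuming the default namespace above:

```sh
# The operator is ready when its CSV reaches the "Succeeded" phase
oc get csv -n openshift-operators | grep gitops
```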
#### Using a terminal

- Open a terminal and ensure you have the OpenShift CLI installed:

  ```sh
  oc version --client
  # Client Version: 4.12.47
  ```

  Ideally, the client's minor version should be at most one release behind the server version. Most commands here are basic and will work across larger version gaps, but keep that in mind if you see errors about unrecognized commands or parameters.

  If you do not have the CLI installed, follow these instructions.
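  If you need to install the client, a minimal sketch for Linux follows; the download URL points at the public OpenShift client mirror, and the install location is just an example:

  ```sh
  # Download and unpack the latest stable oc client (Linux x86_64)
  curl -LO https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz
  tar -xzf openshift-client-linux.tar.gz oc
  sudo mv oc /usr/local/bin/
  ```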
- Create the `Subscription` resource for the operator:

  ```sh
  cat << EOF | oc apply -f -
  ---
  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: openshift-gitops-operator
    namespace: openshift-operators
  spec:
    channel: latest
    installPlanApproval: Automatic
    name: openshift-gitops-operator
    source: redhat-operators
    sourceNamespace: openshift-marketplace
  EOF
  ```
- Wait until the ArgoCD instance appears as ready in the `openshift-gitops` namespace:

  ```sh
  oc wait ArgoCD openshift-gitops \
    -n openshift-gitops \
    --for=jsonpath='{.status.phase}'=Available \
    --timeout=600s
  ```
### Obtain an entitlement key

If you don't already have an entitlement key to the IBM Entitled Registry, obtain your key using the following instructions:
- Go to the Container software library.
- Click "Copy key."
- Copy the entitlement key to a safe place so you can later update the cluster's global pull secret.
- (Optional) Verify the validity of the key by logging in to the IBM Entitled Registry using a container tool:

  ```sh
  export IBM_ENTITLEMENT_KEY=the key from the previous steps
  podman login cp.icr.io --username cp --password "${IBM_ENTITLEMENT_KEY:?}"
  ```
### Update the OCP global pull secret

Update the OCP global pull secret with the entitlement key. The console steps are listed below; a terminal sketch of the same update follows them.

Remember that the secret's registry username is `cp`. A common mistake is to assume the registry username is the name or email of the user who owns the entitlement key.
- Navigate to the "Workloads > Secrets" page in the "Administrator" perspective.
- Select the "pull-secret" object.
- Click "Actions > Edit secret."
- Scroll to the bottom of that page and click "Add credentials," using the following values for each field:
  - "Registry Server Address": cp.icr.io
  - "Username": cp
  - "Password": paste the entitlement key you copied in the "Obtain an entitlement key" section
  - "Email": any email, valid or not, will work. This field is mostly a hint to other people who may see the entry in the configuration.
- Click "Save."
Updating the OCP global pull secret triggers a staggered restart of each node in the cluster. However, the Red Hat OpenShift on IBM Cloud platform requires an additional step: reload all worker nodes (in the case of ROKS classic clusters) or replace all worker nodes (in the case of ROKS VPC Gen2 clusters).

You can perform the reloading or replacement of workers directly from the cluster page in the IBM Cloud console or use a terminal, following the instructions listed here.
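From a terminal, the IBM Cloud CLI can perform the same operation; a sketch, with an illustrative cluster name:

```sh
# List the workers of the cluster
ibmcloud ks workers --cluster my-roks-cluster

# ROKS classic clusters: reload each worker
ibmcloud ks worker reload --cluster my-roks-cluster --worker <worker-id>

# ROKS VPC Gen2 clusters: replace each worker instead
ibmcloud ks worker replace --cluster my-roks-cluster --worker <worker-id>
```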
Global pull secrets require granting too much privilege to the OpenShift GitOps service account, so we have started transitioning to the definition of pull secrets at a namespace level.

The `Application` resources are transitioning to use `PreSync` hooks to copy the entitlement key from a `Secret` named `ibm-entitlement-key` in the `openshift-gitops` namespace, so issue the following command to create that secret:
```sh
# Note that if you just created the OpenShift GitOps operator,
# the namespace may not be ready yet, so you may need to wait
# a minute or two
oc create secret docker-registry ibm-entitlement-key \
  --docker-server=cp.icr.io \
  --docker-username=cp \
  --docker-password="${IBM_ENTITLEMENT_KEY:?}" \
  --docker-email="[email protected]" \
  --namespace=openshift-gitops
```
Important: The instructions for installing and configuring the OpenShift GitOps operator are meant exclusively for demonstration purposes. If you already manage your own OpenShift GitOps installation, read the contents of the `config/argocd/templates` folder carefully and assess whether the settings are compatible with your installation, especially the `.spec.resourceCustomizations` field of the `ArgoCD` custom resource.
### Adding Cloud Pak GitOps Application objects to your GitOps server

The instructions in this section assume you have administrative privileges to the Argo CD instance.

After completing the activities listed in the previous sections, you can add the Argo CD `Application` objects for a Cloud Pak using either the OpenShift Container Platform console or commands in a terminal.

#### Using the OCP console
- Launch the Argo CD console: click the grid-like icon in the upper-left section of the screen, then click "Cluster Argo CD."
- The Argo CD login screen will prompt you for an admin user and password. The default user is `admin`. The admin password is located in the secret `openshift-gitops-cluster` in the `openshift-gitops` namespace.
  - Switch to the `openshift-gitops` project, locate the secret under the "Workloads > Secrets" selections in the left-navigation tree of the Administrator view, scroll to the bottom, and click "Reveal Values" to retrieve the value of the `admin.password` field.
  - Type in the user and password listed in the previous steps, and click the "Sign In" button.
- (Add the Argo app) Once logged in to the Argo CD console, click the "New App+" button in the upper left of the Argo CD console and fill out the form with the following values:
  | Field | Value |
  | ----- | ----- |
  | Application Name | argo-app |
  | Path | config/argocd |
  | Namespace | openshift-gitops |
  | Project | default |
  | Sync policy | Automatic |
  | Self Heal | true |
  | Repository URL | https://github.com/IBM/cloudpak-gitops |
  | Revision | HEAD |
  | Cluster URL | https://kubernetes.default.svc |
- (Add the Cloud Pak Shared app) Click the "New App+" button again and fill out the form with the following values:

  | Field | Value |
  | ----- | ----- |
  | Application Name | cp-shared-app |
  | Path | config/argocd-cloudpaks/cp-shared |
  | Namespace | ibm-cloudpaks |
  | Project | default |
  | Sync policy | Automatic |
  | Self Heal | true |
  | Repository URL | https://github.com/IBM/cloudpak-gitops |
  | Revision | HEAD |
  | Cluster URL | https://kubernetes.default.svc |

  Optional: If you want to deploy Cloud Pak for Integration or Cloud Pak for Security to a non-default namespace, you must override the default values for the Cloud Paks, using the parameters below:

  | Parameter | (Default) Value |
  | --------- | --------------- |
  | dedicated_cs.enabled | true |
  | dedicated_cs.namespace_mapping.cp4i | cp4i |
  | dedicated_cs.namespace_mapping.cp4s | cp4s |

  Note that Cloud Pak for Data and Cloud Pak for Business Automation do not have this setting because they enable a dedicated foundational services namespace by default. Cloud Pak for AIOps does not have this setting either, because it does not support dedicated foundational services namespaces.
- After filling out the form details, click the "Create" button.
- (Add the actual Cloud Pak app) Click the "New App+" button again and fill out the form with values matching the Cloud Pak of your choice, according to the table below:

  Note that if you want to deploy a Cloud Pak to a non-default namespace, you need to make sure you pass the same namespace values used in the optional parameter values for the `cp-shared` application.

  | Cloud Pak | Application Name | Path | Namespace |
  | --------- | ---------------- | ---- | --------- |
  | Business Automation | cp4a-app | config/argocd-cloudpaks/cp4a | cp4a |
  | Data | cp4d-app | config/argocd-cloudpaks/cp4d | cp4d |
  | Integration | cp4i-app | config/argocd-cloudpaks/cp4i | cp4i |
  | Security | cp4s-app | config/argocd-cloudpaks/cp4s | cp4s |
  | AIOps | cp4aiops-app | config/argocd-cloudpaks/cp4aiops | cp4aiops |

  For all other fields, use the following values:

  | Field | Value |
  | ----- | ----- |
  | Project | default |
  | Sync policy | Automatic |
  | Self Heal | true |
  | Repository URL | https://github.com/IBM/cloudpak-gitops |
  | Revision | HEAD |
  | Cluster URL | https://kubernetes.default.svc |
Under "Parameters," set the values for the fields
storageclass.rwo
andstorageclass.rwx
with the appropriate storage classes. For OpenShift Container Storage, the values will beocs-storagecluster-ceph-rbd
andocs-storagecluster-cephfs
, respectively. -
- After filling out the form details, click the "Create" button.
- Wait for the synchronization to complete.
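If you are unsure which storage classes exist in your cluster, you can list them from a terminal before filling in the parameters:

```sh
# Shows the provisioner behind each class; pick RWO- and RWX-capable ones
oc get storageclass
```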
#### Using a terminal

- Open a terminal and ensure you have the OpenShift CLI installed:

  ```sh
  oc version --client
  # Client Version: 4.10.60
  ```

  Ideally, the client's minor version should be at most one release behind the server version. Most commands here are basic and will work across larger version gaps, but keep that in mind if you see errors about unrecognized commands or parameters.

  If you do not have the CLI installed, follow these instructions.
- Log in to the Argo CD server:

  ```sh
  gitops_url=https://github.com/IBM/cloudpak-gitops
  gitops_branch=main
  argo_pwd=$(oc get secret openshift-gitops-cluster \
      -n openshift-gitops \
      -o go-template='{{index .data "admin.password"|base64decode}}') \
  && argo_url=$(oc get route openshift-gitops-server \
      -n openshift-gitops \
      -o jsonpath='{.spec.host}') \
  && argocd login "${argo_url}" \
      --username admin \
      --password "${argo_pwd}" \
      --insecure
  ```
- Add the `argo` application (this step assumes you still have the shell variables assigned from the previous steps):

  ```sh
  argocd proj create argocd-control-plane \
      --dest "https://kubernetes.default.svc,openshift-gitops" \
      --src "${gitops_url:?}" \
      --upsert \
  && argocd app create argo-app \
      --project argocd-control-plane \
      --dest-namespace openshift-gitops \
      --dest-server https://kubernetes.default.svc \
      --repo "${gitops_url:?}" \
      --path config/argocd \
      --helm-set-string targetRevision="${gitops_branch}" \
      --revision "${gitops_branch:?}" \
      --sync-policy automated \
      --upsert \
  && argocd app wait argo-app
  ```
- Add the `cp-shared` application (this step assumes you still have the shell variables assigned from the previous steps):

  ```sh
  cp_namespace=ibm-cloudpaks

  # Switch to true if you want to use Red Hat Cert Manager instead of
  # IBM Cert Manager.
  #
  # ** This is only supported for CP4BA and CP4D **
  #
  red_hat_cert_manager=false

  # If you want to override the default target namespace for
  # Cloud Pak for Security, you need to adjust the values below
  # to indicate the desired target namespace.
  #
  dedicated_cs_enabled=false
  cp4s_namespace=cp4s

  argocd app create cp-shared-app \
      --project default \
      --dest-namespace openshift-gitops \
      --dest-server https://kubernetes.default.svc \
      --repo "${gitops_url:?}" \
      --path config/argocd-cloudpaks/cp-shared \
      --helm-set-string argocd_app_namespace="${cp_namespace}" \
      --helm-set-string metadata.argocd_app_namespace="${cp_namespace}" \
      --helm-set-string red_hat_cert_manager="${red_hat_cert_manager:-false}" \
      --helm-set-string dedicated_cs.enabled="${dedicated_cs_enabled:-false}" \
      --helm-set-string dedicated_cs.namespace_mapping.cp4s="${cp4s_namespace:-cp4s}" \
      --helm-set-string targetRevision="${gitops_branch:?}" \
      --revision "${gitops_branch:?}" \
      --sync-policy automated \
      --upsert
  ```
- Add the respective Cloud Pak application (this step assumes you still have the shell variables assigned from the previous steps):

  ```sh
  # Choose one of the Cloud Pak short names from the table of
  # Cloud Paks above, such as cp4a, cp4i, or cp4d
  cp=cp4i

  # Note that if you want to use a target namespace that is not the
  # default, you must make the corresponding parameter update to the
  # cp-shared-app application.
  cp_namespace=$cp

  app_name=${cp}-app
  # The path matches the respective value from the "Path" column in
  # the table of Cloud Paks above, such as config/argocd-cloudpaks/cp4a,
  # config/argocd-cloudpaks/cp4i, etc.
  app_path=config/argocd-cloudpaks/${cp}

  argocd app create "${app_name}" \
      --project default \
      --dest-namespace openshift-gitops \
      --dest-server https://kubernetes.default.svc \
      --helm-set-string metadata.argocd_app_namespace="${cp_namespace:?}" \
      --helm-set-string repoURL="${gitops_url:?}" \
      --helm-set-string targetRevision="${gitops_branch}" \
      --path "${app_path}" \
      --repo "${gitops_url:?}" \
      --revision "${gitops_branch}" \
      --sync-policy automated \
      --upsert
  ```
- List all the applications to see their overall status (this step assumes you still have the shell variables assigned from the previous steps):

  ```sh
  argocd app list -l app.kubernetes.io/instance=${app_name}
  ```
- You can also use the Argo CD command-line interface to wait for the application to be synchronized and healthy:

  ```sh
  argocd app wait "${app_name}" \
      --sync \
      --health \
      --operation \
      --timeout 3600
  ```
### Post-configuration steps

In a GitOps practice, the "post-configuration" phase would entail committing changes to the repository and waiting for the GitOps operator to synchronize those settings toward the target environments.

This repository allows some light customizations to enable its reuse for demonstration purposes without requiring you to clone or fork the repository.
#### Cloud Pak for Data

The main Argo `Application` for the Cloud Pak (`config/argocd-cloudpaks/cp4d`) has a parameter named `components`, which contains a comma-separated list of component names matching the values in the product documentation.

Alter the values in this list with the component names found in the product documentation (e.g., `wml` for Watson Machine Learning) to define the list of components installed in the target cluster; a terminal sketch follows.
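For example, assuming the application was created with the `cp4d-app` name used elsewhere in this document and that `components` is exposed as a Helm parameter, you could adjust it from a terminal; the component list below is only an illustration:

```sh
# Replace the component list and resynchronize the application
argocd app set cp4d-app --helm-set-string components="wml,wos" \
&& argocd app sync cp4d-app
```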
#### Cloud Pak for Integration

The main Argo `Application` for the Cloud Pak (`config/argocd-cloudpaks/cp4i`) has a parameter array named `modules`, where you will find boolean values for various modules, such as `apic`, `mq`, and `platform`.

Set those values to `true` or `false` to define which Cloud Pak modules you want to install in the target cluster; a terminal sketch follows.
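Similarly, assuming the `cp4i-app` application name from the earlier steps and that the `modules` flags are exposed as Helm parameters, toggling modules from a terminal might look like this:

```sh
# Enable MQ and disable API Connect, then resynchronize
argocd app set cp4i-app \
    --helm-set-string modules.mq=true \
    --helm-set-string modules.apic=false \
&& argocd app sync cp4i-app
```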
Given the demonstration purposes of this repository, it is unsuited for anchoring a true GitOps deployment for many reasons. The primary limitation is that the repository is not designed to represent any concrete deployment environment (e.g., there is no environment-specific folder where you could list the Cloud Pak components for a specific cluster).

In that sense, you can duplicate the repository in a different Git organization and use that copy as the starting point to deploy Cloud Paks in your environments. This is a non-comprehensive list of aspects you should address in that new repository:
- If you already have the OpenShift GitOps operator installed in your target clusters:
  - Merge the `.spec.resourceCustomizations` resources found in `argocd/templates/argocd.yaml` into the `ArgoCD.argoproj.io` instance for your cluster (a sketch follows this list)
  - Delete the entire `/argocd` folder
- Delete the folders corresponding to Cloud Paks you don't plan on using. These Cloud Pak folders are located under the `/config/argocd-cloudpaks` and `/config` folders.
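For the merge mentioned above, one simple approach is to review the shipped customizations and paste them into your instance by hand; the instance name below assumes the default `openshift-gitops` installation:

```sh
# Review the resource customizations shipped in this repository
cat argocd/templates/argocd.yaml

# Open your ArgoCD instance for editing and merge the
# .spec.resourceCustomizations entries into it
oc edit argocd openshift-gitops -n openshift-gitops
```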