
Commit

Merge pull request #40 from mvazquezc/lab-4.17
Updated to 4.17
mvazquezc authored Nov 6, 2024
2 parents 4f565e2 + 46a0e10 commit 0ae9e89
Showing 22 changed files with 209 additions and 72 deletions.
1 change: 1 addition & 0 deletions .github/workflows/docs.yml
@@ -6,6 +6,7 @@ on:
- 'lab-4.13'
- 'lab-4.14'
- 'lab-4.15'
- 'lab-4.17'
env:
SITE_DIR: "gh-pages"
jobs:
2 changes: 1 addition & 1 deletion documentation/antora.yml
@@ -1,4 +1,4 @@
name: 4.15
name: 4.17
title: LAB - Hosted Control Planes on Baremetal
version: ~
nav:
Binary file modified documentation/modules/ROOT/assets/images/hc-upgrade-cp1.png
Binary file modified documentation/modules/ROOT/assets/images/hc-upgrade-cp2.png
Binary file modified documentation/modules/ROOT/assets/images/hc-upgrade-cp3.png
Binary file modified documentation/modules/ROOT/assets/images/hc-upgrade-dp1.png
Binary file modified documentation/modules/ROOT/assets/images/hc-upgrade-dp2.png
Binary file modified documentation/modules/ROOT/assets/images/hc-wizard1.png
Binary file modified documentation/modules/ROOT/assets/images/hc-wizard2.png
Binary file modified documentation/modules/ROOT/assets/images/hc-wizard3.png
Binary file modified documentation/modules/ROOT/assets/images/hc-wizard4.png
70 changes: 36 additions & 34 deletions documentation/modules/ROOT/pages/_attributes.adoc
@@ -1,39 +1,41 @@
:experimental:
:source-highlighter: highlightjs
:branch: lab-4.15
:branch: lab-4.17
:github-repo: https://github.com/RHsyseng/hypershift-baremetal-lab/blob/{branch}
:profile: hypershift-baremetal-lab
:rhel-version: v8.9
:openshift-release: v4.15
:tooling-version: 4.15
:mce-version: 2.5
:hosted-control-planes-version: 4.15
:management-cluster-version: 4.15.10
:management-cluster-kubeversion: v1.28.8+8974577
:hosted-cluster-version-1: 4.15.6
:hosted-cluster-kubeversion-1: v1.28.7+f1b5f6c
:hosted-cluster-rhcos-machineos-1: 415.92.202403270524-0
:hosted-cluster-kernel-1: 5.14.0-284.59.1.el9_2.x86_64
:hosted-cluster-container-runtime-1: cri-o://1.28.4-8.rhaos4.15.git24f50b9.el9
:hosted-cluster-version-2: 4.15.8
:hosted-cluster-kubeversion-2: v1.28.7+f1b5f6c
:hosted-cluster-rhcos-machineos-2: 415.92.202403270524-0
:hosted-cluster-kernel-2: 5.14.0-284.59.1.el9_2.x86_64
:hosted-cluster-container-runtime-2: cri-o://1.28.4-8.rhaos4.15.git24f50b9.el9
:hosted-cluster-version-3: 4.15.9
:hosted-cluster-kubeversion-3: v1.28.7+f1b5f6c
:hosted-cluster-rhcos-machineos-3: 415.92.202403270524-0
:hosted-cluster-kernel-3: 5.14.0-284.59.1.el9_2.x86_64
:hosted-cluster-container-runtime-3: cri-o://1.28.4-8.rhaos4.15.git24f50b9.el9
:mce-overview-docs-link: https://docs.openshift.com/container-platform/4.15/architecture/mce-overview-ocp.html
:assisted-service-docs-link: https://docs.openshift.com/container-platform/4.15/installing/installing_on_prem_assisted/installing-on-prem-assisted.html
:baremetal-operator-docs-link: https://docs.openshift.com/container-platform/4.15/operators/operator-reference.html#cluster-bare-metal-operator_cluster-operators-ref
:metallb-operator-docs-link: https://docs.openshift.com/container-platform/4.15/networking/metallb/about-metallb.html
:rhel-version: v9.4
:byow-rhel: v9.X
:openshift-release: v4.17
:tooling-version: 4.17
:mce-version: 2.7
:hosted-control-planes-version: 4.17
:management-cluster-version: 4.17.3
:management-cluster-kubeversion: v1.30.5
:hosted-cluster-version-1: 4.17.1
:hosted-cluster-kubeversion-1: v1.30.4
:hosted-cluster-rhcos-machineos-1: 417.94.202410090854-0
:hosted-cluster-kernel-1: 5.14.0-427.40.1.el9_4.x86_64
:hosted-cluster-container-runtime-1: cri-o://1.30.6-3.rhaos4.17.git49b5172.el9
:hosted-cluster-version-2: 4.17.2
:hosted-cluster-kubeversion-2: v1.30.5
:hosted-cluster-rhcos-machineos-2: 417.94.202410160352-0
:hosted-cluster-kernel-2: 5.14.0-427.40.1.el9_4.x86_64
:hosted-cluster-container-runtime-2: cri-o://1.30.6-5.rhaos4.17.git690d4d6.el9
:hosted-cluster-version-3: 4.17.3
:hosted-cluster-kubeversion-3: v1.30.5
:hosted-cluster-rhcos-machineos-3: 417.94.202410211619-0
:hosted-cluster-kernel-3: 5.14.0-427.42.1.el9_4.x86_64
:hosted-cluster-container-runtime-3: cri-o://1.30.6-6.rhaos4.17.git6ac6e96.el9
:epel-release: https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm
:mce-overview-docs-link: https://docs.openshift.com/container-platform/4.17/architecture/mce-overview-ocp.html
:assisted-service-docs-link: https://docs.openshift.com/container-platform/4.17/installing/installing_on_prem_assisted/installing-on-prem-assisted.html
:baremetal-operator-docs-link: https://docs.openshift.com/container-platform/4.17/operators/operator-reference.html#cluster-bare-metal-operator_cluster-operators-ref
:metallb-operator-docs-link: https://docs.openshift.com/container-platform/4.17/networking/metallb/about-metallb.html
:hypershift-upstream-docs-link: https://hypershift-docs.netlify.app
:hosted-control-planes-docs-link: https://docs.openshift.com/container-platform/4.15/architecture/control-plane.html#hosted-control-planes-overview_control-plane
:mce-channel: stable-2.5
:assisted-service-config-ocp-version: 4.15
:assisted-service-config-rhcos-live-iso-url: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-4.15.0-x86_64-live.x86_64.iso
:assisted-service-config-rhcos-rootfs-url: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-4.15.0-x86_64-live-rootfs.x86_64.img
:assisted-service-config-rhcos-machineos: 415.92.202402201450-0
:last-update-time: 2024-05-06
:hosted-control-planes-docs-link: https://docs.openshift.com/container-platform/4.17/architecture/control-plane.html#hosted-control-planes-overview_control-plane
:mce-channel: stable-2.7
:assisted-service-config-ocp-version: 4.17
:assisted-service-config-rhcos-live-iso-url: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.17/4.17.0/rhcos-4.17.0-x86_64-live.x86_64.iso
:assisted-service-config-rhcos-rootfs-url: https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.17/4.17.0/rhcos-4.17.0-x86_64-live-rootfs.x86_64.img
:assisted-service-config-rhcos-machineos: 417.94.202409121747-0
:last-update-time: 2024-11-06
2 changes: 1 addition & 1 deletion documentation/modules/ROOT/pages/hcp-deployment.adoc
@@ -107,7 +107,7 @@ multicluster-engine-operator-5c899596bd-x92kd 1/1 Running 0 3m5
+
4. Once the operator is up and running, we can go ahead and create the `MultiClusterEngine` operand to deploy a multicluster engine.
+
IMPORTANT: Starting in OCP 4.14, Hosted Control Plane components will be deployed as part of MCE by default.
IMPORTANT: Hosted Control Plane components will be deployed as part of MCE by default.
+
[.console-input]
[source,bash,subs="attributes+,+macros"]
@@ -38,6 +38,7 @@ image::hc-wizard1.png[Hosted Cluster Wizard Screen 1]
+
.. `Controller availability policy`: Single Replica
.. `Infrastructure availability policy`: Single Replica
.. `OLM catalog placement`: Management
.. `Namespace`: hardware-inventory
.. `Use autoscaling`: Unchecked
.. `Number of hosts`: 2
@@ -240,7 +241,7 @@ oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get \
[source,console,subs="attributes+,+macros"]
-----
NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE
nodepool-hosted-1 hosted 2 2 False False {hosted-cluster-version-1}
nodepool-hosted-1 hosted 2 2 False False {hosted-cluster-version-1} False False
-----
At this point the Hosted Cluster deployment is not finished yet: we still need to fix Ingress before the cluster is fully deployed. We will do that in the next section, where we will learn how to access the Hosted Cluster.
15 changes: 7 additions & 8 deletions documentation/modules/ROOT/pages/lab-setup.adoc
@@ -9,13 +9,13 @@ CAUTION: If you are a Red Hatter, you can order a lab environment on the https:/
[#lab-requirements]
== Lab Requirements

RHEL 8.X box with access to the Internet. This lab relies on KVM, so you need to have the proper virtualization packages already installed. It is highly recommended to use a bare-metal host. Our lab environment has the following specs:
RHEL {byow-rhel} box with access to the Internet. This lab relies on KVM, so you need to have the proper virtualization packages already installed. It is highly recommended to use a bare-metal host. Our lab environment has the following specs:

* 64 CPUs (with or without hyperthreading)
* 200 GiB of memory
* 1 TiB of storage
IMPORTANT: These instructions have been tested in a RHEL {rhel-version}, we cannot guarantee that other operating systems (even RHEL-based) will work. We won't be providing support out of RHEL 8.
IMPORTANT: These instructions have been tested on RHEL {rhel-version}; we cannot guarantee that other operating systems (even RHEL-based ones) will work. We won't be providing support outside of RHEL {byow-rhel}.

These are the steps to install the required packages on a RHEL {rhel-version} server:

@@ -25,7 +25,7 @@ These are the steps to install the required packages on a RHEL {rhel-version} se
dnf -y install libvirt libvirt-daemon-driver-qemu qemu-kvm
usermod -aG qemu,libvirt $(id -un)
newgrp libvirt
systemctl enable --now libvirtd
systemctl enable libvirtd --now
-----
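
Before moving on, it can be worth confirming that the host actually exposes virtualization support and that libvirt came up cleanly. The following is a minimal sketch (not part of the lab steps); `virt-host-validate` is assumed to be available from the libvirt client tools pulled in above:

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
# Count CPU virtualization flags (Intel VT-x or AMD-V); a non-zero result is expected
grep -c -E 'vmx|svm' /proc/cpuinfo
# Ask libvirt to validate the host for QEMU/KVM (assumes virt-host-validate is installed)
virt-host-validate qemu
# Confirm the libvirt daemon is running
systemctl is-active libvirtd
-----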

[#lab-deployment]
@@ -44,7 +44,7 @@ IMPORTANT: Below commands must be executed from the hypervisor host as root if n
[source,bash,subs="attributes+,+macros"]
-----
dnf -y copr enable karmab/kcli
dnf -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
dnf -y install {epel-release}
dnf -y install kcli bash-completion vim jq tar git python3-cherrypy
-----

@@ -55,8 +55,7 @@ dnf -y install kcli bash-completion vim jq tar git python3-cherrypy
[source,bash,subs="attributes+,+macros"]
-----
kcli download oc -P version=stable -P tag='{tooling-version}'
kcli download kubectl -P version=stable -P tag='{tooling-version}'
mv kubectl oc /usr/bin/
mv oc /usr/bin/
-----
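
As a quick sanity check (a sketch, not part of the lab steps), you can confirm the client landed on the PATH and reports the expected release:

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
# Print the client version only; it should report the {tooling-version} release downloaded above
oc version --client
-----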

[#configure-lab-network]
@@ -87,7 +86,7 @@ semanage fcontext -a -t dnsmasq_lease_t /opt/dnsmasq/hosts.leases
restorecon /opt/dnsmasq/hosts.leases
sed -i "s/UPSTREAM_DNS/1.1.1.1/" /opt/dnsmasq/upstream-resolv.conf
systemctl daemon-reload
systemctl enable --now dnsmasq-virt
systemctl enable dnsmasq-virt --now
systemctl mask dnsmasq
-----
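
If you want to double-check the DNS setup before continuing, something like the following can help (a minimal sketch; the unit name comes from the commands above):

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
# Confirm the lab dnsmasq instance is running and the stock dnsmasq unit stays masked
systemctl is-active dnsmasq-virt
systemctl is-enabled dnsmasq
# Show the last few log lines from the lab dnsmasq instance
journalctl -u dnsmasq-virt --no-pager -n 20
-----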

@@ -209,7 +208,7 @@ sed -i "s/CHANGE_DEV_PWD/developer/" management.yml
kcli create cluster openshift --pf management.yml --force
-----

This will take around 30-45m to complete, you can follow progress by running `kcli console -s`.
This will take around 30-45m to complete; the command will output the deployment steps.
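
If you want to keep an eye on the environment from a second terminal while the install runs, kcli can list the plan and the VMs it has created so far (a sketch; commands assumed from the kcli tooling installed earlier):

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
# List the plans kcli is tracking (the management cluster is deployed as a plan)
kcli list plan
# List the VMs created so far for the deployment
kcli list vm
-----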

If the installation fails for whatever reason, you will need to delete all the VMs that were created and execute the same procedure again. To do that, first remove the plans, which will remove all the VMs:

11 changes: 4 additions & 7 deletions documentation/modules/ROOT/pages/machineconfigs-and-tuned.adoc
@@ -78,8 +78,8 @@ oc --insecure-skip-tls-verify=true --kubeconfig ~/hypershift-lab/hosted-kubeconf
[source,console,subs="attributes+,+macros"]
-----
NAME STATUS ROLES AGE VERSION
hosted-worker1 Ready,SchedulingDisabled worker 4h7m {hosted-cluster-kubeversion-3}
hosted-worker2 Ready worker 4h6m {hosted-cluster-kubeversion-3}
hosted-worker1 Ready,SchedulingDisabled worker 5h50m {hosted-cluster-kubeversion-3}
hosted-worker2 Ready worker 5h50m {hosted-cluster-kubeversion-3}
-----
+
4. You can also check the NodePool, which reports whether the config is being updated; a sketch of that check is shown below.
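
A minimal sketch, reusing the management kubeconfig path and NodePool name used elsewhere in this lab:

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
# The UPDATINGCONFIG column flips to True while the new config rolls out to the NodePool
oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get nodepool nodepool-hosted-1
-----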
@@ -204,8 +204,8 @@ oc --insecure-skip-tls-verify=true --kubeconfig ~/hypershift-lab/hosted-kubeconf
[source,console,subs="attributes+,+macros"]
-----
NAME STATUS ROLES AGE VERSION
hosted-worker1 Ready,SchedulingDisabled worker 5h30m {hosted-cluster-kubeversion-3}
hosted-worker2 Ready worker 5h29m {hosted-cluster-kubeversion-3}
hosted-worker1 Ready,SchedulingDisabled worker 5h59m {hosted-cluster-kubeversion-3}
hosted-worker2 Ready worker 5h59m {hosted-cluster-kubeversion-3}
-----
+
4. You can also check the NodePool, which reports whether the config is being updated.
@@ -259,6 +259,3 @@ HugePages_Rsvd: 0
HugePages_Surp: 0
<OMITTED>
-----
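
For reference, output like the above can be collected straight from a worker with `oc debug` (a sketch, assuming the node names and kubeconfig path used elsewhere in this lab):

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
# Chroot into the host filesystem of a hosted cluster worker and dump its hugepages counters
oc --insecure-skip-tls-verify=true --kubeconfig ~/hypershift-lab/hosted-kubeconfig \
  debug node/hosted-worker1 -- chroot /host grep -i huge /proc/meminfo
-----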
28 changes: 14 additions & 14 deletions documentation/modules/ROOT/pages/upgrading-hosted-cluster.adoc
@@ -55,9 +55,9 @@ oc --insecure-skip-tls-verify=true --kubeconfig ~/hypershift-lab/hosted-kubeconf
[console-input]
[source,console,subs="attributes+,+macros"]
-----
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hosted-worker0 Ready,SchedulingDisabled worker 148m {hosted-cluster-kubeversion-1} 192.168.125.30 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-1} (Plow) {hosted-cluster-kernel-1} {hosted-cluster-container-runtime-1}
hosted-worker1 Ready worker 147m {hosted-cluster-kubeversion-1} 192.168.125.31 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-1} (Plow) {hosted-cluster-kernel-1} {hosted-cluster-container-runtime-1}
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hosted-worker0 Ready,SchedulingDisabled worker 4h44m {hosted-cluster-kubeversion-1} 192.168.125.30 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-1} {hosted-cluster-kernel-1} {hosted-cluster-container-runtime-1}
hosted-worker2 Ready worker 4h44m {hosted-cluster-kubeversion-1} 192.168.125.32 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-1} {hosted-cluster-kernel-1} {hosted-cluster-container-runtime-1}
-----
4. Once completed, the nodes will be running the newer version (RHCOS and CRI-O versions changed).
+
@@ -71,9 +71,9 @@ oc --insecure-skip-tls-verify=true --kubeconfig ~/hypershift-lab/hosted-kubeconf
[console-input]
[source,console,subs="attributes+,+macros"]
-----
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hosted-worker0 Ready worker 148m {hosted-cluster-kubeversion-2} 192.168.125.30 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-2} (Plow) {hosted-cluster-kernel-2} {hosted-cluster-container-runtime-2}
hosted-worker1 Ready worker 147m {hosted-cluster-kubeversion-2} 192.168.125.31 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-2} (Plow) {hosted-cluster-kernel-2} {hosted-cluster-container-runtime-2}
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hosted-worker0 Ready worker 4h53m {hosted-cluster-kubeversion-2} 192.168.125.30 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-2} {hosted-cluster-kernel-2} {hosted-cluster-container-runtime-2}
hosted-worker2 Ready worker 4h53m {hosted-cluster-kubeversion-2} 192.168.125.32 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-2} {hosted-cluster-kernel-2} {hosted-cluster-container-runtime-2}
-----
5. The NodePool should report the correct version as well.
+
@@ -87,7 +87,7 @@ oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get nodepool nodepool
[source,console,subs="attributes+,+macros"]
-----
NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE
nodepool-hosted-1 hosted 2 2 False False {hosted-cluster-version-2}
nodepool-hosted-1 hosted 2 2 False False {hosted-cluster-version-2} False False
-----
[#upgrading-hostedcluster-cli]
@@ -183,9 +183,9 @@ oc --insecure-skip-tls-verify=true --kubeconfig ~/hypershift-lab/hosted-kubeconf
[console-input]
[source,console,subs="attributes+,+macros"]
-----
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hosted-worker0 Ready,SchedulingDisabled worker 3h {hosted-cluster-kubeversion-2} 192.168.125.30 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-2} (Plow) {hosted-cluster-kernel-2} {hosted-cluster-container-runtime-2}
hosted-worker1 Ready worker 179m {hosted-cluster-kubeversion-2} 192.168.125.31 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-2} (Plow) {hosted-cluster-kernel-2} {hosted-cluster-container-runtime-2}
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hosted-worker0 Ready,SchedulingDisabled worker 5h14m {hosted-cluster-kubeversion-2} 192.168.125.30 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-2} {hosted-cluster-kernel-2} {hosted-cluster-container-runtime-2}
hosted-worker2 Ready worker 5h14m {hosted-cluster-kubeversion-2} 192.168.125.32 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-2} {hosted-cluster-kernel-2} {hosted-cluster-container-runtime-2}
-----
+
3. Once completed, we can see both nodes are running a newer version (check the Node, RHCOS, Kernel and CRI-O versions).
@@ -200,9 +200,9 @@ oc --insecure-skip-tls-verify=true --kubeconfig ~/hypershift-lab/hosted-kubeconf
[console-input]
[source,console,subs="attributes+,+macros"]
-----
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hosted-worker0 Ready worker 3h9m {hosted-cluster-kubeversion-3} 192.168.125.30 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-3} (Plow) {hosted-cluster-kernel-3} {hosted-cluster-container-runtime-3}
hosted-worker1 Ready worker 3h8m {hosted-cluster-kubeversion-3} 192.168.125.31 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-3} (Plow) {hosted-cluster-kernel-3} {hosted-cluster-container-runtime-3}
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
hosted-worker0 Ready worker 5h24m {hosted-cluster-kubeversion-3} 192.168.125.30 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-3} {hosted-cluster-kernel-3} {hosted-cluster-container-runtime-3}
hosted-worker2 Ready worker 4h24m {hosted-cluster-kubeversion-3} 192.168.125.32 <none> Red Hat Enterprise Linux CoreOS {hosted-cluster-rhcos-machineos-3} {hosted-cluster-kernel-3} {hosted-cluster-container-runtime-3}
-----
4. The NodePool should report the correct version as well.
+
@@ -216,5 +216,5 @@ oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted get nodepool nodepool
[source,console,subs="attributes+,+macros"]
-----
NAME CLUSTER DESIRED NODES CURRENT NODES AUTOSCALING AUTOREPAIR VERSION UPDATINGVERSION UPDATINGCONFIG MESSAGE
nodepool-hosted-1 hosted 2 2 False False {hosted-cluster-version-3}
nodepool-hosted-1 hosted 2 2 False False {hosted-cluster-version-3} False False
-----
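
For reference, the CLI-driven data plane upgrade boils down to pointing the NodePool at a newer release image. A sketch of that patch, using the `spec.release.image` field shown in `lab-materials/hosted-cluster/deployment.yaml` (the exact command used in the lab may differ):

[.console-input]
[source,bash,subs="attributes+,+macros"]
-----
# Point the NodePool at the 4.17.3 release; the workers then roll one at a time
oc --kubeconfig ~/hypershift-lab/mgmt-kubeconfig -n hosted patch nodepool nodepool-hosted-1 \
  --type=merge \
  -p '{"spec":{"release":{"image":"quay.io/openshift-release-dev/ocp-release:4.17.3-multi"}}}'
-----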
4 changes: 2 additions & 2 deletions lab-materials/hosted-cluster/deployment.yaml
@@ -7,7 +7,7 @@ metadata:
labels:
spec:
release:
image: quay.io/openshift-release-dev/ocp-release:4.15.6-multi
image: quay.io/openshift-release-dev/ocp-release:4.17.1-multi
pullSecret:
name: pullsecret-cluster-hosted
sshKey:
@@ -78,7 +78,7 @@ spec:
agentLabelSelector:
matchLabels: {}
release:
image: quay.io/openshift-release-dev/ocp-release:4.15.6-multi
image: quay.io/openshift-release-dev/ocp-release:4.17.1-multi
---
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
@@ -1,7 +1,7 @@
plan: management-cluster
force: false
version: stable
tag: "4.15.10"
tag: "4.17.3"
cluster: "management"
domain: hypershift.lab
api_ip: 192.168.125.10
2 changes: 1 addition & 1 deletion site.sh
@@ -3,4 +3,4 @@
_CURR_DIR="$( cd "$(dirname "$0")" ; pwd -P )"
rm -rf $_CURR_DIR/gh-pages $_CURR_DIR/.cache

antora --pull --stacktrace site.yml
antora --stacktrace site.yml
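
To preview this change locally, the playbook can be built and the result served from the output directory; a sketch, assuming the `antora` CLI is installed and the playbook writes the site to `gh-pages` (the directory the script cleans up and the `SITE_DIR` used by the GitHub workflow):

[source,bash,subs="attributes+,+macros"]
-----
# Build the docs site with the Antora playbook, then serve the generated pages locally
bash site.sh
python3 -m http.server --directory gh-pages 8080
-----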