[RHIDP-4934] Fix issue preventing the CI install script from being used in restricted systems where Podman would not work (#404)

* Do not rely on Podman to rebuild the bundles/IIB for clusters with hosted control planes

In Prow CI containers, it appears that Podman cannot be used due to the restricted environment.
The error below was returned:
```
 [DEBUG] registry.redhat.io/rhdh/rhdh-operator-bundle@sha256:2abaeacfa8fd744579e44e4b320086a8678094dd92eb24825c05f43617384529 => quay.io/rhdh/rhdh-operator-bundle@sha256:2abaeacfa8fd744579e44e4b320086a8678094dd92eb24825c05f43617384529
time="2024-11-14T08:04:54Z" level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
cannot clone: Operation not permitted
Error: cannot re-exec process
```

Instead, this relies on both umoci [1] and Skopeo to manipulate the images,
then rebuilds and repushes them to the internal cluster registry. Unlike
Podman, these tools work on the image contents directly, with no need to
clone() into new namespaces or re-exec, which the restricted environment
appears to forbid.

[1] https://github.com/opencontainers/umoci
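
For reference, the new flow boils down to something like the sketch below.
The image reference, digest, and registry host are illustrative placeholders,
not the exact values the script computes:

```
# Pull the bundle as an OCI layout on disk - no container runtime involved
skopeo copy "docker://quay.io/rhdh/rhdh-operator-bundle@sha256:<digest>" "oci:./bundles/<digest>/src:latest"

# Unpack the layout into a rootfs; --rootless avoids the clone()/re-exec that failed under Podman
umoci unpack --image "./bundles/<digest>/src:latest" "./bundles/<digest>/unpacked" --rootless

# Rewrite refs to the internal Red Hat registries with their quay.io mirrors
sed -i 's#registry.redhat.io/rhdh#quay.io/rhdh#g' "./bundles/<digest>/unpacked/rootfs/manifests"/*

# Repack the modified rootfs as a new layer, then push to the internal cluster registry
umoci repack --image "./bundles/<digest>/src:latest" "./bundles/<digest>/unpacked"
skopeo copy --dest-tls-verify=false "oci:./bundles/<digest>/src:latest" "docker://<my_registry>/rhdh/rhdh-operator-bundle:<digest>"
```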

* Catch and return errors early in the script

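The main mechanism behind this is Bash strict mode, as the diff below shows:

```
set -euo pipefail  # -e: abort on any command failure; -u: unset variables are errors;
                   # -o pipefail: a pipeline fails if any stage fails, not just the last
```
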
* Update docs

* Fix potential permission issues with 'skopeo login' or 'oc registry login'

By default, they try to write to /run, which might be forbidden.

Write to the current working directory instead
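
A sketch of the workaround; the variable values mirror what the script sets up
further below:

```
# skopeo honors REGISTRY_AUTH_FILE when storing and reading credentials
export REGISTRY_AUTH_FILE="${TMPDIR}/.auth.json"
skopeo login -u kubeadmin -p "$(oc whoami -t)" --tls-verify=false "$my_registry"
# 'oc registry login' takes the destination file explicitly instead:
#   oc registry login --registry="$my_registry" --auth-basic="kubeadmin:$(oc whoami -t)" --to="${REGISTRY_AUTH_FILE}"
```
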
rm3l authored Nov 22, 2024
1 parent 4bae532 commit 308bf7c
Showing 2 changed files with 75 additions and 42 deletions.
6 changes: 3 additions & 3 deletions .rhdh/docs/installing-ci-builds.adoc
@@ -1,13 +1,13 @@
== Installing CI builds of Red Hat Developer Hub
== Installing CI builds of Red Hat Developer Hub on OpenShift

*Prerequisites*

* `oc`. See link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/openshift-cli-oc#cli-installing-cli_cli-developer-commands[Installing the OpenShift CLI].
* You are logged in as an administrator using `oc login`. See link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/openshift-cli-oc#cli-logging-in_cli-developer-commands[Logging in to the OpenShift CLI] or link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/openshift-cli-oc#cli-logging-in-web_cli-developer-commands[Logging in to the OpenShift CLI using a web browser].
* `skopeo`. See link:https://github.com/containers/skopeo/blob/main/install.md[Installing Skopeo].
* `opm`. See link:https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/cli_tools/opm-cli[opm CLI].
* `podman`. See link:https://podman.io/docs/installation[Podman Installation Instructions].
* `sed`. See link:https://www.gnu.org/software/sed/[GNU sed].
* `skopeo`. See link:https://github.com/containers/skopeo/blob/main/install.md[Installing Skopeo].
* `umoci` (used if the script detects that the cluster has a hosted control plane). See link:https://github.com/opencontainers/umoci#install[Install].

*Procedure*

111 changes: 72 additions & 39 deletions .rhdh/scripts/install-rhdh-catalog-source.sh
@@ -4,7 +4,7 @@
#
# Requires: oc, jq

set -e
set -euo pipefail

RED='\033[0;31m'
NC='\033[0m'
@@ -67,7 +67,7 @@ OCP_VER="v$(oc version -o json | jq -r '.openshiftVersion' | sed -r -e "s#([0-9]
OCP_ARCH="$(oc version -o json | jq -r '.serverVersion.platform' | sed -r -e "s#linux/##")"
if [[ $OCP_ARCH == "amd64" ]]; then OCP_ARCH="x86_64"; fi
# if logged in, this should return something like latest-v4.12-x86_64
UPSTREAM_IIB="quay.io/rhdh/iib:latest-${OCP_VER}-${OCP_ARCH}";
IIB_TAG="latest-${OCP_VER}-${OCP_ARCH}"

while [[ "$#" -gt 0 ]]; do
case $1 in
@@ -80,9 +80,9 @@ while [[ "$#" -gt 0 ]]; do
TO_INSTALL="$2"; shift 1;;
'--next'|'--latest')
# if logged in, this should return something like latest-v4.12-x86_64 or next-v4.12-x86_64
UPSTREAM_IIB="quay.io/rhdh/iib:${1/--/}-${OCP_VER}-$OCP_ARCH";;
IIB_TAG="${1/--/}-${OCP_VER}-$OCP_ARCH";;
'-v')
UPSTREAM_IIB="quay.io/rhdh/iib:${2}-${OCP_VER}-$OCP_ARCH";
IIB_TAG="${2}-${OCP_VER}-$OCP_ARCH";
OLM_CHANNEL="fast-${2}"
shift 1;;
'-h'|'--help') usage; exit 0;;
@@ -97,6 +97,8 @@ if [[ ! $(command -v skopeo) ]]; then
exit 1
fi

UPSTREAM_IIB="quay.io/rhdh/iib:${IIB_TAG}";

# shellcheck disable=SC2086
UPSTREAM_IIB_MANIFEST="$(skopeo inspect docker://${UPSTREAM_IIB} --raw || exit 2)"
# echo "Got: $UPSTREAM_IIB_MANIFEST"
@@ -221,10 +223,27 @@ spec:
function install_hosted_control_plane_cluster() {
# Clusters with a hosted control plane do not propagate ImageContentSourcePolicy/ImageDigestMirrorSet resources
# to the underlying nodes, causing an issue mirroring internal images effectively.
# This function works around this by locally modifying the bundles (replacing all refs to the internal registries
# with their mirrors on quay.io), rebuilding and pushing the images to the internal cluster registry.
if [[ ! $(command -v umoci) ]]; then
errorf "Please install umoci 0.4+. See https://github.com/opencontainers/umoci"
exit 1
fi

mkdir -p "${TMPDIR}/rhdh/rhdh" >&2
echo "[DEBUG] Rendering IIB $UPSTREAM_IIB as a local file..." >&2
opm render "$UPSTREAM_IIB" --output=yaml > "${TMPDIR}/rhdh/rhdh/render.yaml"
if [ ! -s "${TMPDIR}/rhdh/rhdh/render.yaml" ]; then
errorf "[ERROR] 'opm render $UPSTREAM_IIB' returned an empty output, which likely means that this IIB Image does not contain any operators in it. Please reach out to the RHDH Productization team." >&2
exit 1
fi

# 1. Expose the internal cluster registry if not done already
echo "[DEBUG] Exposing cluster registry..." >&2
internal_registry_url="image-registry.openshift-image-registry.svc:5000"
oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{"spec":{"defaultRoute":true}}' --type=merge >&2
my_registry=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
podman login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $my_registry >&2
skopeo login -u kubeadmin -p $(oc whoami -t) --tls-verify=false $my_registry >&2
if oc -n openshift-marketplace get secret internal-reg-auth-for-rhdh &> /dev/null; then
oc -n openshift-marketplace delete secret internal-reg-auth-for-rhdh >&2
fi
@@ -241,7 +260,7 @@ function install_hosted_control_plane_cluster() {
--docker-username=kubeadmin \
--docker-password=$(oc whoami -t) \
--docker-email="[email protected]" >&2
oc registry login --registry="$my_registry" --auth-basic="kubeadmin:$(oc whoami -t)" >&2
# oc registry login --registry="$my_registry" --auth-basic="kubeadmin:$(oc whoami -t)" --to="${REGISTRY_AUTH_FILE}" >&2
for ns in rhdh-operator rhdh; do
# To be able to push images under this scope in the internal image registry
if ! oc get namespace "$ns" > /dev/null; then
@@ -252,60 +271,74 @@
oc policy add-role-to-user system:image-puller system:serviceaccount:openshift-marketplace:default -n openshift-marketplace >&2 || true
oc policy add-role-to-user system:image-puller system:serviceaccount:rhdh-operator:default -n rhdh-operator >&2 || true

echo ">>> WORKING DIR: $TMPDIR <<<" >&2
mkdir -p "${TMPDIR}/rhdh/rhdh" >&2
opm render "$UPSTREAM_IIB" --output=yaml > "${TMPDIR}/rhdh/rhdh/render.yaml"
pushd "${TMPDIR}" >&2
# 2. Render the IIB locally, modify any references to the internal registries with their mirrors on Quay
# and push the updates to the internal cluster registry
for bundleImg in $(cat "${TMPDIR}/rhdh/rhdh/render.yaml" | grep -E '^image: .*operator-bundle' | awk '{print $2}' | uniq); do
originalBundleImg="$bundleImg"
digest="${originalBundleImg##*@sha256:}"
bundleImg="${bundleImg/registry.stage.redhat.io/quay.io}"
bundleImg="${bundleImg/registry.redhat.io/quay.io}"
bundleImg="${bundleImg/registry-proxy.engineering.redhat.com\/rh-osbs\/rhdh-/quay.io\/rhdh\/}"
echo "[DEBUG] $originalBundleImg => $bundleImg" >&2
if podman pull "$bundleImg" >&2; then
if skopeo inspect "docker://$bundleImg" &> /dev/null; then
newBundleImage="${my_registry}/rhdh/rhdh-operator-bundle:${digest}"
newBundleImageAsInt="${internal_registry_url}/rhdh/rhdh-operator-bundle:${digest}"
mkdir -p "bundles/$digest" >&2
# --entrypoint is needed on some older versions of Podman, but work with
containerId=$(podman create --entrypoint='/bin/sh' "$bundleImg" || exit 1)
podman cp $containerId:/metadata "./bundles/${digest}/metadata" >&2
podman cp $containerId:/manifests "./bundles/${digest}/manifests" >&2
podman rm -f $containerId >&2

echo "[DEBUG] Copying and unpacking image $bundleImg locally..." >&2
skopeo copy "docker://$bundleImg" "oci:./bundles/${digest}/src:latest" >&2
umoci unpack --image "./bundles/${digest}/src:latest" "./bundles/${digest}/unpacked" --rootless >&2

# Replace the occurrences in the .csv.yaml or .clusterserviceversion.yaml files
for file in "./bundles/${digest}/manifests"/*; do
if [ -f "$file" ]; then
sed -i 's#registry.redhat.io/rhdh#quay.io/rhdh#g' "$file" >&2
sed -i 's#registry.stage.redhat.io/rhdh#quay.io/rhdh#g' "$file" >&2
sed -i 's#registry-proxy.engineering.redhat.com/rh-osbs/rhdh-#quay.io/rhdh/#g' "$file" >&2
fi
echo "[DEBUG] Replacing refs to internal registry in bundle image $bundleImg..." >&2
for folder in manifests metadata; do
for file in "./bundles/${digest}/unpacked/rootfs/${folder}"/*; do
if [ -f "$file" ]; then
echo "[DEBUG] replacing refs to internal registries in file '${file}'" >&2
sed -i 's#registry.redhat.io/rhdh#quay.io/rhdh#g' "$file" >&2
sed -i 's#registry.stage.redhat.io/rhdh#quay.io/rhdh#g' "$file" >&2
sed -i 's#registry-proxy.engineering.redhat.com/rh-osbs/rhdh-#quay.io/rhdh/#g' "$file" >&2
fi
done
done

cat <<EOF > "./bundles/${digest}/bundle.Dockerfile"
FROM scratch
COPY ./manifests /manifests/
COPY ./metadata /metadata/
EOF
pushd "./bundles/${digest}" >&2
newBundleImage="${my_registry}/rhdh/rhdh-operator-bundle:${digest}"
newBundleImageAsInt="${internal_registry_url}/rhdh/rhdh-operator-bundle:${digest}"
podman image build -f bundle.Dockerfile -t "${newBundleImage}" . >&2
podman image push "${newBundleImage}" --tls-verify=false >&2
popd >&2
# repack the image with the changes
echo "[DEBUG] Repacking image ./bundles/${digest}/src => ./bundles/${digest}/unpacked..." >&2
umoci repack --image "./bundles/${digest}/src:latest" "./bundles/${digest}/unpacked" >&2

# Push the bundle to the internal cluster registry
echo "[DEBUG] Pushing updated image: ./bundles/${digest}/src => ${newBundleImage}..." >&2
skopeo copy --dest-tls-verify=false "oci:./bundles/${digest}/src:latest" "docker://${newBundleImage}" >&2

sed -i "s#${originalBundleImg}#${newBundleImageAsInt}#g" "${TMPDIR}/rhdh/rhdh/render.yaml" >&2
fi
done

local newIndex="${UPSTREAM_IIB/quay.io/"${my_registry}"}"
local newIndexAsInt="${UPSTREAM_IIB/quay.io/"${internal_registry_url}"}"

# 3. Regenerate the IIB image with the local changes to the render.yaml file and build and push it from within the cluster
echo "[DEBUG] Regenerating IIB Dockerfile with updated refs..." >&2
opm generate dockerfile rhdh/rhdh >&2
podman image build -t "${newIndex}" -f "./rhdh/rhdh.Dockerfile" --no-cache rhdh >&2
podman image push "${newIndex}" --tls-verify=false >&2

printf "%s" "${newIndexAsInt}"
echo "[DEBUG] Submitting in-cluster build request for the updated IIB..." >&2
if ! oc -n rhdh get buildconfig.build.openshift.io/iib >& /dev/null; then
oc -n rhdh new-build --strategy docker --binary --name iib >&2
fi
oc -n rhdh patch buildconfig.build.openshift.io/iib -p '{"spec": {"strategy": {"dockerStrategy": {"dockerfilePath": "rhdh.Dockerfile"}}}}' >&2
oc -n rhdh start-build iib --wait --follow --from-dir=rhdh >&2
local imageStreamWithTag="rhdh/iib:${IIB_TAG}"
oc tag rhdh/iib:latest "${imageStreamWithTag}" >&2

local result="${internal_registry_url}/${imageStreamWithTag}"
echo "[DEBUG] IIB built and pushed to internal cluster registry: $result..." >&2
printf "%s" "${result}"
}

pushd "${TMPDIR}"
echo ">>> WORKING DIR: $TMPDIR <<<"

# Using the current working dir, otherwise tools like 'skopeo login' will attempt to write to /run, which
# might be restricted in CI environments.
export REGISTRY_AUTH_FILE="${TMPDIR}/.auth.json"

# Defaulting to the hosted control plane behavior which has more chances to work
CONTROL_PLANE_TECH=$(oc get infrastructure cluster -o jsonpath='{.status.controlPlaneTopology}' || \
(echo '[WARN] Could not determine the cluster type => defaulting to the hosted control plane behavior' >&2 && echo 'External'))
