👻 Add hack/setup-operator.sh scripts and more dev doc (#1818)
Add a helpful script to set up the operator and create a Tackle instance.
Update the README.md files to match, and add some useful commands to
`hack/README.md`.

The script is based on the operator's scripts of similar names, but
modified to install the dev version without building anything locally
first.

For reference, here is the OLM install script:
https://github.com/operator-framework/operator-lifecycle-manager/blob/master/scripts/install.sh

---------

Signed-off-by: Scott J Dickerson <[email protected]>
sjd78 authored Jun 20, 2024
1 parent ffe06ee commit 4a4102f
Showing 4 changed files with 265 additions and 6 deletions.
13 changes: 7 additions & 6 deletions README.md
@@ -71,12 +71,12 @@ $ minikube start --addons=dashboard --addons=ingress

Note: We need to enable the dashboard and ingress addons. The dashboard addon installs the dashboard service that exposes the Kubernetes objects in a user interface. The ingress addon allows us to create Ingress CRs to expose the Tackle UI and Tackle Hub API.

Since the olm addon is disabled until OLM issue [2534](https://github.com/operator-framework/operator-lifecycle-manager/issues/2534) is resolved, we need to install [OLM manually](https://github.com/operator-framework/operator-lifecycle-manager/releases), e.g. for version `v0.28.0` we can use:

```sh
curl -L https://github.com/operator-framework/operator-lifecycle-manager/releases/download/v0.28.0/install.sh -o install.sh
chmod +x install.sh
./install.sh v0.28.0
```

See also official Konveyor instructions for [Provisioning Minikube](https://konveyor.github.io/konveyor/installation/#provisioning-minikube).
@@ -109,7 +109,9 @@ You will need `kubectl` on your PATH and configured to control minikube in order
Follow the official instructions for [Installing Konveyor Operator](https://konveyor.github.io/konveyor/installation/#installing-konveyor-operator).

Alternative 1: use the script [`hack/setup-operator.sh`](./hack/setup-operator.sh). It is a local variation of the script from the operator repository that still allows overriding portions of the Tackle CR with environment variables.

Alternative 2: the [konveyor/operator git repository](https://github.com/konveyor/operator) provides a script to install Tackle locally using `kubectl`. You can [inspect its source here](https://github.com/konveyor/operator/blob/main/hack/install-tackle.sh). This script creates the `konveyor-tackle` namespace, CatalogSource, OperatorGroup, Subscription and Tackle CR, then waits for the deployments to be ready.
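As an example of the environment-variable overrides mentioned in Alternative 1, the variable names below are the ones `hack/setup-operator.sh` reads; the specific values are only illustrative, and the (commented) invocation must be run from the repo root against a live cluster:

```sh
# Override pieces of the generated Tackle CR before running the setup script.
# The values here are hypothetical examples, not recommendations.
export FEATURE_AUTH_REQUIRED=true
export HUB_IMAGE=quay.io/konveyor/tackle2-hub:latest

# ./hack/setup-operator.sh

echo "auth=${FEATURE_AUTH_REQUIRED} hub=${HUB_IMAGE}"
```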
#### Customizing the install script (optional)
@@ -152,8 +154,7 @@ $ npm run start:dev
## Understanding the local development environment
Tackle2 runs in a Kubernetes-compatible environment (i.e. OpenShift, Kubernetes, or minikube) and is usually deployed with the Tackle2 Operator (OLM). Although the UI pod has access to the tackle2 APIs from within the cluster, the UI can also be executed outside the cluster and access Tackle API endpoints by proxy.
The React and Patternfly based UI is composed of web pages served by an http server with proxy capabilities.
68 changes: 68 additions & 0 deletions hack/README.md
@@ -15,3 +15,71 @@ added to the instance by:
> cd tackle2-hub/hack
> HOST=localhost:9002 ./add/all.sh
```

## Useful commands

#### List all of the "tackle-" pods, what container image they're using, and the image's manifest digest:

```sh
> minikube kubectl -- \
get pods -n konveyor-tackle -o=json \
| jq '
.items[] | select(.metadata.name | test("tackle-")) |
{
name:.metadata.name,
image:.status.containerStatuses[0].image,
imageID:.status.containerStatuses[0].imageID
}
'
{
"name": "tackle-hub-57b4f5b87c-5cdds",
"image": "quay.io/konveyor/tackle2-hub:latest",
"imageID": "docker-pullable://quay.io/konveyor/tackle2-hub@sha256:f19ab51cc9f23ee30225dd1c15ca545c2b767be7d7e1ed5cd83df47a40e5d324"
}
{
"name": "tackle-operator-597f9755fb-84jg7",
"image": "quay.io/konveyor/tackle2-operator:latest",
"imageID": "docker-pullable://quay.io/konveyor/tackle2-operator@sha256:4110d23743087ee9ed97827aa22c1e31b066a0e5c25db90196c5dfb4dbf9c65b"
}
{
"name": "tackle-ui-5ccd495897-vsj5x",
"image": "quay.io/konveyor/tackle2-ui:latest",
"imageID": "docker-pullable://quay.io/konveyor/tackle2-ui@sha256:541484a8919d9129bed5b95a2776a84ef35989ca271753147185ddb395cc8781"
}
```
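The digest is the part of each `imageID` after the `@`. A small helper (hypothetical name, plain parameter expansion, no cluster access needed) can strip the rest:

```sh
# digest_of: print just the sha256 digest embedded in a pod's imageID string.
digest_of() {
  local image_id=$1
  # drop everything up to and including the last "@"
  echo "${image_id##*@}"
}

digest_of "docker-pullable://quay.io/konveyor/tackle2-hub@sha256:f19ab51cc9f23ee30225dd1c15ca545c2b767be7d7e1ed5cd83df47a40e5d324"
# prints "sha256:f19ab51cc9f23ee30225dd1c15ca545c2b767be7d7e1ed5cd83df47a40e5d324"
```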

#### List the current ":latest" tag's manifest digest from quay for a single image (tackle2-hub in this example):

```sh
> curl https://quay.io/api/v1/repository/konveyor/tackle2-hub/tag/\?onlyActiveTags\=true\&specificTag\=latest | jq '.'
{
"tags": [
{
"name": "latest",
"reversion": false,
"start_ts": 1718406240,
"manifest_digest": "sha256:f19ab51cc9f23ee30225dd1c15ca545c2b767be7d7e1ed5cd83df47a40e5d324",
"is_manifest_list": true,
"size": null,
"last_modified": "Fri, 14 Jun 2024 23:04:00 -0000"
}
],
"page": 1,
"has_additional": false
}
```
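The same lookup can be wrapped in a function so the repository name becomes a parameter. This is a sketch assuming `curl` and `jq` are available; the commented call needs network access to quay.io:

```sh
# latest_digest: query quay.io for the manifest digest of a repo's ":latest" tag.
latest_digest() {
  local repo=$1
  curl -s "https://quay.io/api/v1/repository/konveyor/${repo}/tag/?onlyActiveTags=true&specificTag=latest" \
    | jq -r '.tags[0].manifest_digest'
}

# latest_digest tackle2-hub
```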

#### Bounce a deployment to update to the current image with a tag

The ":latest" image tags usually move frequently. Using the previous two commands, the `sha256` digest for the `tackle2-hub` image matches between the kubectl output and the quay.io output. This comparison is an easy way to make sure the container image in your environment is actually the current version.

If the digests do not match, the easy way to update is to "bounce" the deployment (`tackle-hub` in this example):

```sh
> minikube kubectl -- scale -n konveyor-tackle deployment tackle-hub --replicas=0
deployment.apps/tackle-hub scaled
> minikube kubectl -- scale -n konveyor-tackle deployment tackle-hub --replicas=1
deployment.apps/tackle-hub scaled
```

Assuming the default `image_pull_policy=Always`, after the bounce the deployment and pod will be using the current image.
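An alternative to the two scale commands is `kubectl rollout restart`, which re-creates the pods in one step (and therefore re-pulls `:latest` under `image_pull_policy=Always`). Sketched here as a helper function; run it against a live minikube cluster:

```sh
# bounce: restart a tackle deployment so its pods pull the current image.
bounce() {
  local deploy=$1
  minikube kubectl -- rollout restart -n konveyor-tackle "deployment/${deploy}"
}

# bounce tackle-hub
```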
Empty file added hack/setup-minikube.sh
190 changes: 190 additions & 0 deletions hack/setup-operator.sh
@@ -0,0 +1,190 @@
#!/bin/bash
#
# Based on:
# - https://github.com/konveyor/operator/blob/main/hack/install-konveyor.sh **latest
# - https://github.com/konveyor/operator/blob/main/hack/install-tackle.sh
# - https://konveyor.github.io/konveyor/installation/#installing-konveyor-operator
# - https://github.com/konveyor/operator/blob/main/tackle-k8s.yaml
#
# By default, no authentication, and only use pre-built images
#
set -eo pipefail
# set -euxo pipefail

# use kubectl if available, else fall back to `minikube kubectl --`
KUBECTL=kubectl
if ! command -v $KUBECTL >/dev/null 2>&1; then
  KUBECTL="minikube kubectl --"
  # kubectl_bin="${__bin_dir}/kubectl"
  # mkdir -p "${__bin_dir}"
  # curl -Lo "${kubectl_bin}" "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/${__os}/${__arch}/kubectl"
  # chmod +x "${kubectl_bin}"
fi
echo "kubectl command: ${KUBECTL}"

debug() {
  echo "Install Konveyor FAILED!!!"
  echo "What follows is some info that may be useful in debugging the failure"

  $KUBECTL get namespace "${NAMESPACE}" -o yaml || true
  $KUBECTL get --namespace "${NAMESPACE}" all || true
  $KUBECTL get --namespace "${NAMESPACE}" -o yaml \
    subscriptions.operators.coreos.com,catalogsources.operators.coreos.com,installplans.operators.coreos.com,clusterserviceversions.operators.coreos.com \
    || true
  $KUBECTL get --namespace "${NAMESPACE}" -o yaml tackles.tackle.konveyor.io/tackle || true

  for pod in $($KUBECTL get pods -n "${NAMESPACE}" -o jsonpath='{.items[*].metadata.name}'); do
    $KUBECTL --namespace "${NAMESPACE}" describe pod "${pod}" || true
  done
  exit 1
}
trap 'debug' ERR

function retry_command() {
  local retries=$1
  local sleeptime=$2
  local cmd=${@:3}

  until [[ $retries -eq 0 ]] || ${cmd} &>/dev/null; do
    echo "command failed, try again in ${sleeptime}s [retries: $retries]"
    sleep $sleeptime
    ((retries--))
  done
  [[ $retries == 0 ]] && return 1 || return 0
}

# Inputs for setting up the operator
NAMESPACE="${NAMESPACE:-konveyor-tackle}"
OPERATOR_INDEX_IMAGE="${OPERATOR_INDEX_IMAGE:-quay.io/konveyor/tackle2-operator-index:latest}"

# Either pass in the full Tackle CR, or specify individual bits
TACKLE_CR="${TACKLE_CR:-}"

FEATURE_AUTH_REQUIRED="${FEATURE_AUTH_REQUIRED:-false}"
IMAGE_PULL_POLICY="${IMAGE_PULL_POLICY:-Always}"
UI_INGRESS_CLASS_NAME="${UI_INGRESS_CLASS_NAME:-nginx}"
UI_IMAGE="${UI_IMAGE:-quay.io/konveyor/tackle2-ui:latest}"
HUB_IMAGE="${HUB_IMAGE:-quay.io/konveyor/tackle2-hub:latest}"
ADDON_ANALYZER_IMAGE="${ADDON_ANALYZER_IMAGE:-quay.io/konveyor/tackle2-addon-analyzer:latest}"
ANALYZER_CONTAINER_REQUESTS_MEMORY="${ANALYZER_CONTAINER_REQUESTS_MEMORY:-0}"
ANALYZER_CONTAINER_REQUESTS_CPU="${ANALYZER_CONTAINER_REQUESTS_CPU:-0}"

install_operator() {
  echo "Installing the Konveyor Operator..."

  # Install the Konveyor Namespace, CatalogSource, OperatorGroup, and Subscription
  cat <<EOF | $KUBECTL apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: ${NAMESPACE}
---
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: konveyor
  namespace: ${NAMESPACE}
spec:
  displayName: Konveyor Operator
  publisher: Konveyor
  sourceType: grpc
  image: ${OPERATOR_INDEX_IMAGE}
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: konveyor
  namespace: ${NAMESPACE}
spec:
  targetNamespaces:
    - ${NAMESPACE}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: konveyor-operator
  namespace: ${NAMESPACE}
spec:
  channel: development
  installPlanApproval: Automatic
  name: konveyor-operator
  source: konveyor
  sourceNamespace: ${NAMESPACE}
EOF

  echo "Waiting for the Tackle CRD to exist"
  # guard the retry in an `if` so a failure is reported here instead of
  # tripping the ERR trap under `set -e`
  if ! retry_command 10 10 \
      $KUBECTL get customresourcedefinitions.apiextensions.k8s.io tackles.tackle.konveyor.io; then
    echo "Tackle CRD doesn't exist yet, cannot continue"
    exit 1
  fi

  echo "Waiting for the Tackle CRD to become established"
  $KUBECTL wait \
    --namespace ${NAMESPACE} \
    --timeout=120s \
    --for=condition=Established \
    customresourcedefinitions.apiextensions.k8s.io tackles.tackle.konveyor.io

  echo "Waiting for the Tackle Operator to become available"
  $KUBECTL rollout status \
    --namespace "${NAMESPACE}" \
    --timeout=600s \
    -w deployment/tackle-operator
}

install_konveyor() {
  echo "Installing the Konveyor CR..."

  echo "Make sure the Tackle Operator is available"
  $KUBECTL rollout status \
    --namespace "${NAMESPACE}" \
    --timeout=600s \
    -w deployment/tackle-operator

  echo "Create a Tackle CR"
  if [ -n "${TACKLE_CR}" ]; then
    echo "${TACKLE_CR}" | $KUBECTL apply --namespace "${NAMESPACE}" -f -
  else
    cat <<EOF | $KUBECTL apply --namespace "${NAMESPACE}" -f -
kind: Tackle
apiVersion: tackle.konveyor.io/v1alpha1
metadata:
  name: tackle
spec:
  image_pull_policy: ${IMAGE_PULL_POLICY}
  feature_auth_required: ${FEATURE_AUTH_REQUIRED}
  ui_ingress_class_name: ${UI_INGRESS_CLASS_NAME}
  ui_image_fqin: ${UI_IMAGE}
  hub_image_fqin: ${HUB_IMAGE}
  analyzer_fqin: ${ADDON_ANALYZER_IMAGE}
  analyzer_container_requests_memory: ${ANALYZER_CONTAINER_REQUESTS_MEMORY}
  analyzer_container_requests_cpu: ${ANALYZER_CONTAINER_REQUESTS_CPU}
EOF
  fi

  # Log the CR we just created (useful in github action logs)
  echo "Created CR:"
  $KUBECTL get --namespace "${NAMESPACE}" -o yaml tackles.tackle.konveyor.io/tackle

  # Wait for the reconcile to finish
  $KUBECTL wait \
    --namespace "${NAMESPACE}" \
    --for=condition=Successful \
    --timeout=600s \
    tackles.tackle.konveyor.io/tackle

  # Now wait for all the tackle deployments
  $KUBECTL wait \
    --namespace "${NAMESPACE}" \
    --selector="app.kubernetes.io/part-of=tackle" \
    --for=condition=Available \
    --timeout=600s \
    deployments.apps
}

$KUBECTL get customresourcedefinitions.apiextensions.k8s.io tackles.tackle.konveyor.io &>/dev/null || install_operator
install_konveyor
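To exercise the `TACKLE_CR` path of the script above, a complete CR can be passed instead of the per-field variables. A minimal sketch (the spec values are illustrative; the commented invocation needs a live cluster):

```sh
# Supply a full Tackle CR; the script applies it verbatim instead of
# rendering one from the individual environment variables.
TACKLE_CR=$(cat <<'CR'
kind: Tackle
apiVersion: tackle.konveyor.io/v1alpha1
metadata:
  name: tackle
spec:
  feature_auth_required: true
CR
)
export TACKLE_CR

# ./hack/setup-operator.sh
```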
