ospdo scaledown docs #730
base: main
Conversation
Related patches: Jira: https://issues.redhat.com/browse/OSPRH-6618
Force-pushed from 053c882 to a6329a3.
= OSPdO scale down pre database adoption

This section scales down and removes OSPdO resources in favor of the RHOSO ones.

Suggested change:

= Scaling down director Operator resources

Before you migrate your databases to the control plane, you must scale down and remove director Operator (OSPdO) resources in order to use the {rhos_long} resources.
@pinikomarov Please confirm that this rewrite is accurate.
yes it is
. Scale-down RHOSO OpenStack Operator controller-manager to 0 replicas and delete its OpenStackControlPlane's OpenStackClient pod temporarily, so that we can use the OSPdO controller-manager to clean up some of its resources. This is needed because OSPdO ultimately can't act due to a pod name collision between its OpenStackClient and RHOSO's OpenStackClient.
+
----
oc patch csv -n openstack-operators openstack-operator.v0.0.1 --type json -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]'
oc delete openstackclients.client.openstack.org --all
----

Suggested change:

. Scale down the {rhos_acro} OpenStack Operator `controller-manager` to 0 replicas and temporarily delete the `OpenStackControlPlane` `OpenStackClient` pod, so that you can use the OSPdO `controller-manager` to clean up some of its resources. The cleanup is needed to avoid a pod name collision between the OSPdO OpenStackClient and the {rhos_acro} OpenStackClient.
+
----
$ oc patch csv -n openstack-operators openstack-operator.v0.0.1 --type json -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]'
$ oc delete openstackclients.client.openstack.org --all
----
ok
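Side note for readers: the JSON patch payload must reach `oc` as a single argument of valid JSON, which is why the single-quoted form above works. It can be sanity-checked locally before touching the cluster. This is an illustrative sketch, not part of the procedure; it only assumes `python3` is available:

```shell
# Keep the JSON patch in single quotes so the inner double quotes
# reach oc intact as one argument.
payload='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]'

# Validate the payload locally before using it with: oc patch ... -p="$payload"
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload is valid JSON"
# → payload is valid JSON
```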
. Delete the OSPdO OpenStackControlPlane
+
----
oc delete openstackcontrolplanes.osp-director.openstack.org -n openstack --all
----

Suggested change:

. Delete the OSPdO `OpenStackControlPlane` custom resource (CR):
+
----
$ oc delete openstackcontrolplanes.osp-director.openstack.org -n openstack --all
----
ok
. Delete the OSPdO OpenStackNetConfig (this will remove OSPdO's associated NNCPs)
+
----
oc delete osnetconfig -n openstack --all
----

Suggested change:

. Delete the OSPdO `OpenStackNetConfig` CR to remove the associated node network configuration policies:
+
----
$ oc delete osnetconfig -n openstack --all
----
ok
. Label the third OCP node, the one which is holding the OSPdO VM (the one that was not labeled earlier)
+
----
oc label nodes <ostest-master-node> type=openstack
----

Suggested change:

. Label the {OpenShiftShort} node that contains the OSPdO virtual machine (VM):
+
----
$ oc label nodes <ostest_master_node> type=openstack
----
+
* Replace `<ostest_master_node>` with the remaining master node that contains the OSPdO VM.
@pinikomarov Please ensure that this rewrite is accurate.
And just for my knowledge, "ostest" = operating system test?
ok, that's accurate.
ostest was an arbitrary name chosen for ospdo deployments, no special meaning. I'll change it to something general: ospdo_vm_master_node
. Create an NNCP for the third node (sample configuration):
+
----
cat << EOF > /tmp/node3_nncp.yaml

Suggested change:

. Create a node network configuration policy for the third {OpenShiftShort} node. For example:
+
----
$ cat << EOF > /tmp/node3_nncp.yaml
I think we need to be more specific than "third node". Is it okay to say RHOCP node? We describe the node a little more in the previous step.
ok
oc apply -f /tmp/node3_nncp.yaml
----

Suggested change:

$ oc apply -f /tmp/node3_nncp.yaml
----
ok
. Get remaining OSPdO operands and delete everything, except DO NOT DELETE OpenStackBaremetalSets nor OpenStackProvisionServers yet
+
----
for i in $(oc get crd | grep osp-director | grep -v baremetalset | grep -v provisionserver | awk '{print $1}'); do echo Deleting $i...; oc delete $i -n openstack --all; done
----

Suggested change:

. Get the remaining OSPdO operands and delete everything. Do not delete the `OpenStackBaremetalSets` and `OpenStackProvisionServer` resources:
+
----
$ for i in $(oc get crd | grep osp-director | grep -v baremetalset | grep -v provisionserver | awk '{print $1}'); do echo Deleting $i...; oc delete $i -n openstack --all; done
----
@pinikomarov Does "everything" refer to "operands"? Or "services"? I'd like to use a more specific word.
ok, and I'll change it to "resources", that's more specific
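For readers unfamiliar with the filter pipeline in that loop, it can be exercised against canned `oc get crd`-style output. This is an illustrative sketch only; the sample CRD names are hypothetical stand-ins:

```shell
# Same filter as the deletion loop above: keep osp-director CRDs,
# but exclude the baremetalset and provisionserver kinds.
crd_filter() {
  grep osp-director | grep -v baremetalset | grep -v provisionserver | awk '{print $1}'
}

# Sample lines mimicking `oc get crd` output (NAME  CREATED-AT).
printf '%s\n' \
  'openstacknets.osp-director.openstack.org             2024-01-01' \
  'openstackbaremetalsets.osp-director.openstack.org    2024-01-01' \
  'openstackprovisionservers.osp-director.openstack.org 2024-01-01' \
  'configmaps.v1                                        2024-01-01' \
  | crd_filter
# → openstacknets.osp-director.openstack.org
```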
. Scale down OSPdO to 0 replicas
+
----
ospdo_csv_ver=$(oc get csv -n openstack -l operators.coreos.com/osp-director-operator.openstack -o json | jq -r '.items[0].metadata.name')
oc patch csv -n openstack $ospdo_csv_ver --type json -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]'
----

Suggested change:

. Scale down OSPdO to 0 replicas:
+
----
$ ospdo_csv_ver=$(oc get csv -n openstack -l operators.coreos.com/osp-director-operator.openstack -o json | jq -r '.items[0].metadata.name')
$ oc patch csv -n openstack $ospdo_csv_ver --type json -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "0"}]'
----
ok
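The `jq` expression that captures the CSV name can be checked against a canned API response before relying on it. Illustrative sketch only; the CSV version below is a hypothetical example, and `jq` is assumed to be installed:

```shell
# Minimal stand-in for `oc get csv ... -o json` output.
sample='{"items":[{"metadata":{"name":"osp-director-operator.v1.3.0"}}]}'

# Same extraction as the step above: the first item's metadata.name.
ospdo_csv_ver=$(echo "$sample" | jq -r '.items[0].metadata.name')
echo "$ospdo_csv_ver"
# → osp-director-operator.v1.3.0
```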
. Remove the webhooks from OSPdO
+
----
oc patch csv $ospdo_csv_ver -n openstack --type json -p='[{"op": "remove", "path": "/spec/webhookdefinitions"}]'
----

Suggested change:

. Remove the webhooks from OSPdO:
+
----
$ oc patch csv $ospdo_csv_ver -n openstack --type json -p='[{"op": "remove", "path": "/spec/webhookdefinitions"}]'
----
ok
. Remove the finalizer from the OSPdO OpenStackBaremetalSet
+
----
oc patch openstackbaremetalsets.osp-director.openstack.org -n openstack compute --type json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
----

Suggested change:

. Remove the finalizer from the OSPdO `OpenStackBaremetalSet` resource:
+
----
$ oc patch openstackbaremetalsets.osp-director.openstack.org -n openstack compute --type json -p='[{"op": "remove", "path": "/metadata/finalizers"}]'
----
ok
. Now delete the OpenStackBaremetalSet and OpenStackProvisionServer
+
----
oc delete openstackbaremetalsets.osp-director.openstack.org -n openstack --all
oc delete openstackprovisionservers.osp-director.openstack.org -n openstack --all
----

Suggested change:

. Delete the `OpenStackBaremetalSet` and `OpenStackProvisionServer` resources:
+
----
$ oc delete openstackbaremetalsets.osp-director.openstack.org -n openstack --all
$ oc delete openstackprovisionservers.osp-director.openstack.org -n openstack --all
----
ok
. Annotate all OSP compute BMHs so that Metal3 no longer handles them
+
* Do this for each OSP compute BMH
+
----
compute_bmh_list=$(oc get bmh -n openshift-machine-api | grep compute | awk '{printf $1 " "}')
for bmh_compute in $compute_bmh_list; do oc annotate bmh -n openshift-machine-api $bmh_compute baremetalhost.metal3.io/detached=""; oc delete bmh -n openshift-machine-api $bmh_compute; done
----

Suggested change:

. Annotate each {OpenStackShort} Compute `BareMetalHost` resource so that Metal3 does not start the node:
+
----
$ compute_bmh_list=$(oc get bmh -n openshift-machine-api | grep compute | awk '{printf $1 " "}')
$ for bmh_compute in $compute_bmh_list; do oc annotate bmh -n openshift-machine-api $bmh_compute baremetalhost.metal3.io/detached=""; oc delete bmh -n openshift-machine-api $bmh_compute; done
----
@pinikomarov Is my rewrite accurate?
Should the space between lines 196 and 198 be closed?
ok, and yes
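The list-building line in that step can also be exercised against canned `oc get bmh` output. Illustrative sketch only; the host names are hypothetical:

```shell
# Stand-in for `oc get bmh -n openshift-machine-api` output (NAME  STATE).
sample='compute-0      provisioned
controller-0   provisioned
compute-1      provisioned'

# Same extraction as above: a space-separated list of compute BMH names.
compute_bmh_list=$(echo "$sample" | grep compute | awk '{printf $1 " "}')

# Dry-run the loop instead of annotating and deleting anything.
for bmh_compute in $compute_bmh_list; do echo "would detach and delete $bmh_compute"; done
# → would detach and delete compute-0
# → would detach and delete compute-1
```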
. Delete OSPdO OLM resources (this will remove the OpenStack Director Operator)
+
----
oc delete subscription osp-director-operator -n openstack
oc delete operatorgroup osp-director-operator -n openstack
oc delete catalogsource osp-director-operator-index -n openstack
oc delete csv $ospdo_csv_ver -n openstack
----

Suggested change:

. Delete the OSPdO Operator Lifecycle Manager resources to remove OpenStack director Operator:
+
----
$ oc delete subscription osp-director-operator -n openstack
$ oc delete operatorgroup osp-director-operator -n openstack
$ oc delete catalogsource osp-director-operator-index -n openstack
$ oc delete csv $ospdo_csv_ver -n openstack
----
ok
. Scale-up the RHOSO OpenStack Operator controller-manager to 1 replica so that the associated OpenStackControlPlane is reconciled and its OpenStackClient pod is recreated
+
----
oc patch csv -n openstack-operators openstack-operator.v0.0.1 --type json -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "1"}]'
----

Suggested change:

. Scale up the {rhos_acro} OpenStack Operator `controller-manager` to 1 replica so that the associated `OpenStackControlPlane` CR is reconciled and its `OpenStackClient` pod is recreated:
+
----
$ oc patch csv -n openstack-operators openstack-operator.v0.0.1 --type json -p='[{"op": "replace", "path": "/spec/install/spec/deployments/0/spec/replicas", "value": "1"}]'
----
@pinikomarov What does "reconciled" mean in this context?
ok,
"reconciled", here and for OCP operators in general, means: act according to the resources I defined.
So here, once the controller-manager operator is up and running (as we patch it back to 1 instance instead of 0), it reads its instructions in the OpenStackControlPlane resource and executes whatever is needed so that everything defined in that OpenStackControlPlane CR is alive and in existence; specifically, we pay attention here to the OpenStackClient resources as the focused part of that CR.
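That reconcile idea can be caricatured in a few lines of shell. This is purely illustrative; a real operator watches the API server and converges arbitrary resources rather than looping like this:

```shell
# Toy reconcile loop: the controller keeps acting until the observed
# state matches the desired state declared in the CR.
desired_replicas=1
actual_replicas=0

while [ "$actual_replicas" -ne "$desired_replicas" ]; do
  actual_replicas=$((actual_replicas + 1))   # "create" the missing pod
  echo "reconciled: actual=$actual_replicas desired=$desired_replicas"
done
# → reconciled: actual=1 desired=1
```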
add ospdo scaledown pre dataplane adoption docs
Related patches:
#715
#708
Jira: https://issues.redhat.com/browse/OSPRH-6618