OSD-28131: Deploying COO in place of OBO on SC clusters #667
base: main
Conversation
@Nikokolas3270: This pull request references OSD-28131, which is a valid Jira issue. Warning: the referenced Jira issue has an invalid target version for the branch this PR targets: the story was expected to target version "4.19.0", but no target version was set.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: Nikokolas3270. Needs approval from an approver in each of these files.
Hi @Nikokolas3270. Thanks for your PR. I'm waiting for a rhobs member to verify that this patch is reasonable to test.
@@ -50,7 +50,6 @@ objects:
              operator: In
              values:
              - management-cluster
-             - service-cluster
I believe this will have to be a step-by-step approach.
First remove it here, then clean up the CSVs, then install COO.
Nope, COO can be installed in parallel with, or even before, OBO being uninstalled.
Of course, while OBO is installed the COO install will fail... but the install will succeed as soon as OBO is uninstalled, that is to say once the OBO Subscription + CSV have been removed.
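For reference, "uninstalling OBO" in OLM terms means deleting two objects: the Subscription (so OLM stops reconciling the operator) and the ClusterServiceVersion (which removes the running operator Deployment). A sketch of the objects involved; the names and namespace here are assumptions for illustration, not taken from this repository:

```yaml
# Hypothetical names/namespace, for illustration only.
# Deleting this Subscription stops OLM from managing/reinstalling OBO...
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: observability-operator
  namespace: openshift-observability-operator
---
# ...and deleting the installed CSV removes the operator workload itself,
# after which a pending COO install in the same namespace can proceed.
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: observability-operator.v0.x.y   # placeholder version
  namespace: openshift-observability-operator
```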
Yeah, ok, if that’s not noise for SRE, then it’s fine.
Another topic… does it still make sense to control the installation from here? I believe OSDFM is the right choice for installing components in SCs/MCs, no?
Yeah, it won't be noise for us. We don't check whether COO or OBO is running correctly; maybe we should, but we don't for now. Note that the Prometheus pods created from the MonitoringStack object will be replaced when the new operator is finally installed, but the outage should be quite minimal (a few seconds).

> Another topic… does it still make sense to control the installation from here? I believe OSDFM is the right choice for installing components in SCs/MCs, no?

That's a good point. Indeed, while the uninstallation of OBO must be done through this repository, it is probably better to have the COO deployment controlled through OSDFM.
Ok, I will add the subscription in this file:
https://gitlab.cee.redhat.com/service/osd-fleet-manager/-/blob/main/config/resources/service-cluster.yaml
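For the record, the object to add to that OSDFM file would be an OLM Subscription along the lines of the sketch below. The package name, channel, namespace, and catalog source are all assumptions to be verified against the actual COO packaging before merging:

```yaml
# Sketch of a COO Subscription for service-cluster.yaml.
# All names here are assumptions, not confirmed values.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-observability-operator
  namespace: openshift-cluster-observability-operator
spec:
  channel: stable                       # assumed channel
  name: cluster-observability-operator  # assumed package name
  source: redhat-operators              # assumed catalog source
  sourceNamespace: openshift-marketplace
```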
Speaking of that: shouldn't we get rid of this template file once ALL clusters use COO in place of OBO? I mean, I have always found it strange to have a file in the operator codebase dictating how the operator should be deployed on the infrastructure. To me, these should be decoupled; note that this operator is not the only one suffering from this lack of boundaries.
WDYT?
Agreed. The point of attention is the Namespace in the first SSS. If we delete it, even for a few seconds, the MonitoringStack CR in the SCs/MCs will be removed, as well as the Prom and AM instances, and monitoring will be down for ROSA HCP :)
We will have to change the SSS to “Upsert” first, then remove it, making sure it is also deleted from Hive (delete: true in the saas-file target).
Then we can control the Namespace from somewhere else, like the OSDFM.
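The two steps above map onto two different YAML changes; a schematic sketch (the saas-file path is a placeholder, and only the relevant fields are shown):

```yaml
# Step 1: switch the SelectorSyncSet to Upsert so Hive applies resources
# but never deletes them when they drop out of the SSS.
spec:
  resourceApplyMode: Upsert
---
# Step 2 (app-interface saas-file target, schematic): mark the target
# deleted so Hive removes the SSS itself without cascading resource
# deletion, once the Namespace is owned elsewhere (e.g. OSDFM).
targets:
  - namespace:
      $ref: /services/.../namespace.yml   # placeholder path
    delete: true
```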
Do not merge yet
COO is deployed through the "Red Hat" catalog.
However, it is not 100% certain that COO will be deployed through this catalog.
The article below seems to imply that the operator will be delivered through one of the default catalogs... and "Red Hat" is one of them:
https://access.redhat.com/articles/7103797
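For context, the "Red Hat" catalog shown in the OperatorHub UI corresponds to the redhat-operators CatalogSource in openshift-marketplace, one of OpenShift's default catalog sources. If COO ships there, the Subscription would reference it as follows (package and channel names are assumptions):

```yaml
# Relevant Subscription fields only; verify package/channel before use.
spec:
  channel: stable               # assumed channel
  source: redhat-operators      # default "Red Hat" catalog source
  sourceNamespace: openshift-marketplace
```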