
OSD-28131: Deploying COO in place of OBO on SC clusters #667

Draft · wants to merge 1 commit into base: main

Conversation

Nikokolas3270

Do not merge yet

COO is deployed through the "Red Hat" catalog.
However, it is not 100% certain that COO will be deployed through this catalog.
The article below seems to imply that the operator will be deployed through one of the default catalogs, and "Red Hat" is one of them:
https://access.redhat.com/articles/7103797
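
For illustration, "deployed through the Red Hat catalog" would mean a Subscription roughly along these lines; the channel, package name, and namespace here are assumptions on my side, not something the article confirms:

```yaml
# Sketch only: channel, package name, and namespace are assumed, not confirmed.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-observability-operator
  namespace: openshift-cluster-observability-operator
spec:
  channel: stable                       # assumed channel
  name: cluster-observability-operator  # assumed package name in the catalog
  source: redhat-operators              # the "Red Hat" default catalog source
  sourceNamespace: openshift-marketplace
```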

@Nikokolas3270 Nikokolas3270 requested a review from a team as a code owner February 4, 2025 14:43
@Nikokolas3270 Nikokolas3270 requested review from JoaoBraveCoding and PeterYurkovich and removed request for a team February 4, 2025 14:43
@openshift-ci-robot (Collaborator) commented Feb 4, 2025

@Nikokolas3270: This pull request references OSD-28131 which is a valid jira issue.

Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.19.0" version, but no target version was set.

In response to this:

> Do not merge yet
>
> COO is deployed through the "Red Hat" catalog.
> However, it is not 100% certain that COO will be deployed through this catalog.
> The article below seems to imply that the operator will be deployed through one of the default catalogs, and "Red Hat" is one of them:
> https://access.redhat.com/articles/7103797

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.


openshift-ci bot commented Feb 4, 2025

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Nikokolas3270
Once this PR has been reviewed and has the lgtm label, please assign periklis for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


openshift-ci bot commented Feb 4, 2025

Hi @Nikokolas3270. Thanks for your PR.

I'm waiting for a rhobs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

```diff
@@ -50,7 +50,6 @@ objects:
         operator: In
         values:
         - management-cluster
-        - service-cluster
```
Collaborator

I believe this will have to be a step-by-step approach.

First remove it here, then clean up the CSVs, then install COO.

Author

Nope, COO can be installed in parallel, or even before OBO is uninstalled.
Of course, while OBO is installed the COO install will fail... but it will succeed as soon as OBO is uninstalled, that is to say once the OBO Subscription and CSV have been removed.
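
Concretely, "uninstalling OBO" means deleting two objects of roughly this shape; deleting the Subscription alone leaves the CSV behind, which is why both are mentioned. The names, namespace, and version below are assumptions for illustration:

```yaml
# Sketch only: names, namespace, and CSV version are assumptions.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: observability-operator           # assumed OBO Subscription name
  namespace: openshift-observability-operator
---
apiVersion: operators.coreos.com/v1alpha1
kind: ClusterServiceVersion
metadata:
  name: observability-operator.v0.4.2    # illustrative CSV name, not a real pin
  namespace: openshift-observability-operator
```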

@apahim (Collaborator) Feb 4, 2025

Yeah, ok, if that’s not noise for SRE, then it’s fine.

Another topic… does it still make sense to control the installation from here? I believe OSDFM is the right choice for installing components in SCs/MCs, no?

Author

Yeah, it won't be noise for us. We don't check whether COO or OBO is running correctly; maybe we should, but we don't for now. Note that the Prometheus pods created from the MonitoringStack object will be replaced when the new operator is finally installed, but the outage should be quite minimal (a few seconds).

> Another topic… does it still make sense to control the installation from here? I believe OSDFM is the right choice for installing components in SCs/MCs, no?

That's a good point. While the uninstallation of OBO must be done through this repository, it probably makes sense to have the COO deployment controlled through OSDFM.
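
For context, a minimal sketch of the kind of MonitoringStack object both operators reconcile; the name, namespace, and spec values are illustrative, not taken from our clusters. The CR itself stays in place during the handover, only the operator managing it changes:

```yaml
# Illustrative only: name, namespace, and spec values are assumptions.
apiVersion: monitoring.rhobs/v1alpha1
kind: MonitoringStack
metadata:
  name: sre-monitoring
  namespace: openshift-observability-operator
spec:
  retention: 11d            # illustrative retention
  prometheusConfig:
    replicas: 2             # these are the pods that briefly restart during the operator swap
```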

Author

Ok, I will add the subscription to this file:
https://gitlab.cee.redhat.com/service/osd-fleet-manager/-/blob/main/config/resources/service-cluster.yaml
Speaking of that: shouldn't we get rid of this template file once ALL clusters use COO in place of OBO? I have always found it strange to have a file in the operator codebase describing how the operator should be deployed on the infrastructure. To me, these concerns should be decoupled; note that this operator is not the only one suffering from this lack of boundaries.
WDYT?

Collaborator

Agreed. The attention point is the Namespace in the first SSS. If we delete it, even for a few seconds, the MonitoringStack CR in the SCs/MCs will be removed, as well as the Prom and AM instances, and monitoring will be down for ROSA HCP :)

We will have to change the SSS to "Upsert" first, then remove it, making sure it's also deleted from Hive (delete: true in the saas-file target).

Then we can control the Namespace from somewhere else, like OSDFM.
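
To make the sequencing concrete, the first step would look something like this in the SSS; the name, selector key, and resource list below are placeholders rather than the actual template:

```yaml
# Sketch only: name, selector key, and resource list are placeholders.
apiVersion: hive.openshift.io/v1
kind: SelectorSyncSet
metadata:
  name: observability-operator        # assumed SSS name
spec:
  resourceApplyMode: Upsert           # with Upsert, Hive no longer deletes resources dropped from (or with) the SSS
  clusterDeploymentSelector:
    matchExpressions:
    - key: ext-hypershift.openshift.io/cluster-type   # assumed label key
      operator: In
      values:
      - management-cluster
  resources:
  - apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-observability-operator          # the Namespace that must survive the transition
```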
