Add role to deploy 17.1 env for adoption #2297

Open

cescgina wants to merge 1 commit into main from adoption_osp_role
Conversation

@cescgina (Contributor) commented Sep 3, 2024

Add a new role that will deploy a tripleo environment that will serve as
the source for adoption. This role is expected to consume the infra created
by [1], and a 17.1 scenario definition from the data-plane-adoption
repo, introduced by [2].

It also introduces a small fix to deploy-ocp.yml so the resulting OCP
cluster is ready (the nodes needed to be uncordoned).

[1] #2285
[2] openstack-k8s-operators/data-plane-adoption#597
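
For reference, a minimal sketch of the kind of uncordon step described above; the task shape and the kubeconfig variable are assumptions, not the actual deploy-ocp.yml change:

    - name: Uncordon all OCP nodes so workloads can be scheduled
      # Sketch only; the kubeconfig variable name is assumed.
      environment:
        KUBECONFIG: "{{ cifmw_openshift_kubeconfig | default(ansible_user_dir ~ '/.kube/config') }}"
      ansible.builtin.shell: |
        set -o pipefail
        oc get nodes -o name | xargs -r -n1 oc adm uncordon
      changed_when: true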

openshift-ci bot (Contributor) commented Sep 3, 2024

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from cescgina. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

github-actions bot commented Sep 3, 2024

Thanks for the PR! ❤️
I'm marking it as a draft. Once you're happy for it to merge and the PR is passing CI, click the "Ready for review" button below.

@cjeanner (Collaborator) left a comment

Would need some more flexibility, but it could match the need with some tweaks/iterations.

roles/adoption_osp_deploy/files/undercloud.conf (outdated, resolved)
roles/adoption_osp_deploy/tasks/deploy_ceph.yml (outdated, resolved)
roles/adoption_osp_deploy/tasks/deploy_ceph.yml (outdated, resolved)
roles/adoption_osp_deploy/tasks/overcloud_deploy.yml (outdated, resolved)
roles/adoption_osp_deploy/tasks/main.yml (outdated, resolved)
roles/adoption_osp_deploy/tasks/main.yml (resolved)

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/3e0801ce95b34d3184964958b146d59f

✔️ openstack-k8s-operators-content-provider SUCCESS in 10h 11m 04s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 14m 43s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 25m 40s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 23s
❌ cifmw-pod-pre-commit FAILURE in 5m 39s
✔️ build-push-container-cifmw-client SUCCESS in 37m 19s


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/f96d929a98b34e69a20a3813f6e9f506

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 14m 09s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 12m 43s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 28m 44s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 44s
❌ cifmw-pod-pre-commit FAILURE in 5m 37s
✔️ build-push-container-cifmw-client SUCCESS in 37m 04s

@cescgina force-pushed the adoption_osp_role branch 2 times, most recently from ae14a9b to cd97aff on September 5, 2024 15:57

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/179a53575c8e437a909be2c41711f18c

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 33m 15s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 14m 26s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 23m 39s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 39s
❌ cifmw-pod-pre-commit FAILURE in 5m 40s
✔️ build-push-container-cifmw-client SUCCESS in 37m 40s


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/100b262aeb704c85b86c6589151688bf

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 46m 28s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 18m 33s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 26m 06s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 44s
❌ cifmw-pod-pre-commit FAILURE in 5m 25s
✔️ build-push-container-cifmw-client SUCCESS in 37m 00s


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/11d5100f2b374d3d875c5cd958cdfef7

✔️ openstack-k8s-operators-content-provider SUCCESS in 34m 34s
❌ podified-multinode-edpm-deployment-crc FAILURE in 17m 05s
❌ cifmw-crc-podified-edpm-baremetal FAILURE in 21m 52s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 33s
❌ cifmw-pod-pre-commit FAILURE in 6m 20s
✔️ build-push-container-cifmw-client SUCCESS in 22m 29s


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/c8f71eecadfe40639c4e22942a5fb6c5

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 43m 57s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 14m 39s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 31m 01s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 50s
❌ cifmw-pod-pre-commit FAILURE in 5m 37s
✔️ build-push-container-cifmw-client SUCCESS in 20m 46s


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/c22a1ed7df85440d83b1bb6fc81d2aa0

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 40m 24s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 16m 26s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 25m 59s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 8m 17s
❌ cifmw-pod-pre-commit FAILURE in 7m 40s
✔️ build-push-container-cifmw-client SUCCESS in 30m 23s

deploy-osp-adoption.yml (outdated, resolved)
deploy-osp-adoption.yml (outdated, resolved)

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/2c06710e48894922925227767bc3a40b

✔️ openstack-k8s-operators-content-provider SUCCESS in 2h 02m 30s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 17m 43s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 19m 04s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 8m 37s
✔️ cifmw-pod-pre-commit SUCCESS in 7m 27s
❌ build-push-container-cifmw-client TIMED_OUT in 1h 30m 37s


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/782d8ffaf82c413cab42bc91c2a5128d

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 50m 00s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 16m 23s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 29m 49s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 39s
❌ cifmw-pod-pre-commit FAILURE in 7m 11s
✔️ build-push-container-cifmw-client SUCCESS in 31m 20s

@cescgina force-pushed the adoption_osp_role branch 3 times, most recently from 809ddd1 to 9494ec2 on September 28, 2024 14:04

Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/adabf7ff0819468ea1b1c8a9aa1a76eb

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 26m 40s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 14m 25s
❌ cifmw-crc-podified-edpm-baremetal RETRY_LIMIT in 20m 33s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 7m 10s
✔️ cifmw-pod-pre-commit SUCCESS in 6m 59s
✔️ build-push-container-cifmw-client SUCCESS in 21m 56s

@cescgina changed the title from "[POC] Add role to deploy 17.1 env for adoption" to "Add role to deploy 17.1 env for adoption" on Sep 30, 2024
cescgina added a commit to cescgina/data-plane-adoption that referenced this pull request Sep 30, 2024
Introduce a scenarios folder that will contain the input needed to
deploy a 17.1 environment using the cifmw role added in [1]. The
scenario is defined by a variable file with undercloud-specific
parameters, overcloud-specific parameters, hooks that can be called
before or after both the undercloud and overcloud deployment, and two
maps that relate the inventory groups produced by the playbook that
created the infra to Roles and role hostnames, to make it easier to
work with different roles in different scenarios.

[1] openstack-k8s-operators/ci-framework#2297
@cescgina cescgina marked this pull request as ready for review September 30, 2024 10:26
deploy-ocp.yml (outdated, resolved)
deployment. Defaults to `pool.ntp.org`
* `cifmw_adoption_osp_deploy_repos`: (List) List of 17.1 repos to enable. Defaults to
`[rhel-9-for-x86_64-baseos-eus-rpms, rhel-9-for-x86_64-appstream-eus-rpms, rhel-9-for-x86_64-highavailability-eus-rpms, openstack-17.1-for-rhel-9-x86_64-rpms, fast-datapath-for-rhel-9-x86_64-rpms, rhceph-6-tools-for-rhel-9-x86_64-rpms]`
* `cifmw_adoption_osp_deploy_skip_stages`: (String or List) Stages to skip
cjeanner (Collaborator) commented:

hmm I'd use actual tags: ['undercloud', 'overcloud'] and leverage --skip-tags. That would be better, since it would really skip the tasks without having to guard each of them with a when condition.

UNLESS this would be needed in a CI context? Not sure whether the Zuul job definition allows passing down tags to skip...?

cescgina (Contributor, Author) replied:

I can't think of any use case for this in a CI job; I added it mostly to avoid having to re-run the undercloud deployment whenever something went wrong with the overcloud deploy. I remembered this was implemented in kustomize_deploy and copied the idea from there. But you're right that this could be much simpler with tags; I'll give it a try locally and update the PR.

cescgina (Contributor, Author) replied:

I've updated the PR to use Ansible tags; it's indeed simpler, thanks @cjeanner
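
A minimal sketch of the tag-based layout discussed above; the task file names are assumptions about how the role could be split, not the exact files in the PR:

    # roles/adoption_osp_deploy/tasks/main.yml (hypothetical excerpt)
    - name: Deploy the undercloud
      tags:
        - undercloud
      ansible.builtin.include_tasks:
        file: undercloud_deploy.yml
        apply:
          tags:
            - undercloud

    - name: Deploy the overcloud
      tags:
        - overcloud
      ansible.builtin.include_tasks:
        file: overcloud_deploy.yml
        apply:
          tags:
            - overcloud

    # Re-run only the overcloud stage after a failed overcloud deploy:
    #   ansible-playbook deploy-osp-adoption.yml --skip-tags undercloud

With --skip-tags the tagged include is skipped outright, so the undercloud task file is never even loaded, which matches the point about not needing per-task when conditions.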

roles/adoption_osp_deploy/tasks/main.yml (resolved)
roles/adoption_osp_deploy/tasks/main.yml (resolved)
@marios (Contributor) left a comment

looks great, the biggest issue I saw was the secrets, which need to be worked out

group-templates:
  osp-controllers:
  computes:
marios (Contributor) commented:

Which computes is this, then, in the adoption jobs? I mean, osp_computes are the source computes, which then become the destination data plane nodes.

cescgina (Contributor, Author) replied:

These are the computes from the greenfield jobs; we don't use them for adoption (in https://github.com/openstack-k8s-operators/ci-framework/pull/2297/files#diff-60530f309d776a3851cf8d6bfb03bf6c4bb531cc5de3f57e66f5a14021d545f2R27 we set the number of VMs for this group to 0). But since we reuse the base network definition https://github.com/openstack-k8s-operators/ci-framework/blob/main/scenarios/reproducers/networking-definition.yml, we can't undefine anything that is already there, so I need to make sure the IPs for the unused computes group do not overlap with the ones from the osp-computes group. I'll add a comment clarifying this.
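
For illustration only, a hedged sketch of zeroing out the unused greenfield group in a scenario layout; the top-level variable and key names are assumptions modelled on the reproducer layouts, not the contents of the linked diff:

    # Hypothetical layout override; key names are illustrative only.
    cifmw_libvirt_manager_configuration:
      vms:
        compute:
          amount: 0        # greenfield computes, unused for adoption
        osp-compute:
          amount: 2        # source 17.1 computes for the adoption scenario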

no_log: true
ansible.builtin.command: >-
  subscription-manager register --force
  --org "{{ cifmw_adoption_osp_deploy_rhsm_org }}"
marios (Contributor) commented:

How will this work, though? I mean, the secret cannot live in this repo, as it is not a trusted/config repo.

cescgina (Contributor, Author) replied:

This is not intended to be used in CI jobs; it is here in case someone wants to run the deployment manually, as I've been doing to test it. Once I start building the Zuul jobs, I'll need to do something similar from a config repo.
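
A hedged sketch of keeping the credentials outside the repository for manual runs; the vars file path and its variable names are assumptions, and the register task mirrors the excerpt above:

    # Hypothetical: load RHSM and registry credentials from an
    # ansible-vault protected file that lives outside this repository.
    - name: Load adoption secrets from an external vars file
      ansible.builtin.include_vars:
        file: "{{ lookup('env', 'HOME') }}/adoption_secrets.yml"
      no_log: true

    - name: Register the node with subscription-manager
      # remaining registration options omitted, as in the excerpt above
      no_log: true
      ansible.builtin.command: >-
        subscription-manager register --force
        --org "{{ cifmw_adoption_osp_deploy_rhsm_org }}"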

ansible.builtin.command: >
  podman login
  --username "{{ cifmw_adoption_osp_deploy_container_user }}"
  --password "{{ cifmw_adoption_osp_deploy_container_password }}"
marios (Contributor) commented:

Same here: this will have to be moved to wherever these secrets live.

deploy-ocp.yml (outdated, resolved)
cescgina added a commit to cescgina/data-plane-adoption that referenced this pull request Oct 1, 2024 (same commit message as above)

cescgina added a commit to cescgina/data-plane-adoption that referenced this pull request Oct 1, 2024 (same commit message as above)

cescgina added a commit to cescgina/data-plane-adoption that referenced this pull request Oct 2, 2024 (same commit message as above)

cescgina added a commit to cescgina/data-plane-adoption that referenced this pull request Oct 2, 2024 (same commit message as above)
Add a new role that will deploy a tripleo environment that will serve as
the source for adoption. This role is expected to consume the infra created
by [1], and a 17.1 scenario definition from the data-plane-adoption
repo, introduced by [2].

It also introduces a small fix to deploy-ocp.yml so the resulting OCP
cluster is ready (the nodes needed to be uncordoned).

[1] #2285
[2] openstack-k8s-operators/data-plane-adoption#597
@fmount (Contributor) left a comment

Thank you @cescgina for starting this. I think we can have a testproject to prove parity with what we have in rdo and then start merging your changes. They can eventually be refined with follow-up patches if required.

{% endfor %}
computes:
  children:
    {{ _adoption_source_scenario.roles_groups_map['osp-computes'] }}: {}
fmount (Contributor) commented:

I'm wondering if this inventory can be used in place of [1] when we trigger the ceph migration (after the adoption) and the playbook that is meant to prepare the TripleO environment [2].
This would save us from building a "ceph compatibility" layer when executing those playbooks. Do you think we can extend this to have a Ceph section? Maybe worth discussing in a follow-up.

[1] https://review.rdoproject.org/r/c/rdo-jobs/+/53695/59/playbooks/data_plane_adoption/templates/ceph_inventory.j2
[2] openstack-k8s-operators/data-plane-adoption#637

cescgina (Contributor, Author) replied:

@fmount I can try adding a new section matching what you have in ceph_inventory once I finish the current testing run, and check that it doesn't interfere with what I currently have. If that works, we can get it into this first version; if we need further changes, we can leave them for a follow-up.

fmount (Contributor) replied:

ack and thank you!
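
To make the idea concrete, a hedged sketch of an extra Ceph section in the generated inventory template; the group name and the choice of source group are assumptions modelled on the computes entry above, not on the linked ceph_inventory.j2:

    {# Hypothetical addition to the generated inventory template #}
    ceph_nodes:
      children:
        {{ _adoption_source_scenario.roles_groups_map['osp-controllers'] }}: {}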

- {{ gateway_ip }}
domain: []
addresses:
- ip_netmask: {{ ctlplane_ip }}/{{ ctlplane_cidr }}
fmount (Contributor) commented:

I recently had intermittent ssh issues with this config, while having the ctlplane ip (which I assume we use to ssh to the node) on a dedicated nic didn't present such a problem.
I'm not asking to change this layout at this point, but just wanted to point out an issue that I faced "a lot" during [1] testing, and which is not present in the infrared-based jobs, where the ssh interface is on a dedicated nic.

[1] https://review.rdoproject.org/r/c/testproject/+/53696

cescgina (Contributor, Author) replied:

@fmount tbh at first I thought this would pose the problems you mentioned, but it seems to work fine on the machine I'm currently testing on. What does the config look like when you have the ctlplane IP on a dedicated nic? What IP do you assign to the bridge?

fmount (Contributor) replied:

I have to check what the downstream env looks like; I just faced that issue during the ceph migration playbook, when you need to run os-net-config to update the nodes' net config.
The interesting part is that I'm not facing it anymore in my last attempts, so I'm confused about this one.
I think downstream we simply do not have an ovs_bridge for the ssh net nic.
Example:

1: lo    inet 127.0.0.1/8 scope host lo\       valid_lft forever preferred_lft forever
2: enp1s0    inet 192.168.24.8/24 brd 192.168.24.255 scope global enp1s0\       valid_lft forever preferred_lft forever
2: enp1s0    inet 192.168.24.54/32 brd 192.168.24.255 scope global enp1s0\       valid_lft forever preferred_lft forever
7: br-ex    inet 10.0.0.119/24 brd 10.0.0.255 scope global br-ex\       valid_lft forever preferred_lft forever
8: vlan30    inet 172.17.3.109/24 brd 172.17.3.255 scope global vlan30\       valid_lft forever preferred_lft forever
8: vlan30    inet 172.17.3.135/32 brd 172.17.3.255 scope global vlan30\       valid_lft forever preferred_lft forever
9: vlan70    inet 172.17.5.142/24 brd 172.17.5.255 scope global vlan70\       valid_lft forever preferred_lft forever
9: vlan70    inet 172.17.5.63/32 brd 172.17.5.255 scope global vlan70\       valid_lft forever preferred_lft forever
10: vlan50    inet 172.17.2.69/24 brd 172.17.2.255 scope global vlan50\       valid_lft forever preferred_lft forever
11: vlan20    inet 172.17.1.119/24 brd 172.17.1.255 scope global vlan20\       valid_lft forever preferred_lft forever
12: vlan40    inet 172.17.4.131/24 brd 172.17.4.255 scope global vlan40\       valid_lft forever preferred_lft forever

and

---
network_config:
- type: interface
  name: nic1
  use_dhcp: false
  dns_servers: ['172.16.0.1', '10.0.0.1']
  domain: []
  addresses:
  - ip_netmask: 192.168.24.8/24
  routes: [{'default': True, 'nexthop': '192.168.24.1'}]
- type: vlan
  vlan_id: 20
  device: nic1
  addresses:
  - ip_netmask: 172.17.1.119/24
  routes: []
- type: vlan
  vlan_id: 40
  device: nic1
  addresses:
  - ip_netmask: 172.17.4.131/24
  routes: []
- type: ovs_bridge
  name: br-tenant
  use_dhcp: false
  members:
  - type: interface
    name: nic2
    primary: true
  - type: vlan
    vlan_id: 50
    addresses:
    - ip_netmask: 172.17.2.69/24
    routes: []
  - type: vlan
    vlan_id: 30
    addresses:
    - ip_netmask: 172.17.3.109/24
    routes: []
  - type: vlan
    vlan_id: 70
    addresses:
    - ip_netmask: 172.17.5.142/24
    routes: []
- type: ovs_bridge
  name: br-ex
  dns_servers: ['172.16.0.1', '10.0.0.1']
  domain: []
  use_dhcp: false
  addresses:
  - ip_netmask: 10.0.0.119/24
  routes: []
  members:
  - type: interface
    name: nic3
    primary: true

Not sure it helps, and you don't have to change what you have because of this, but I hope this gives you the idea.

cescgina (Contributor, Author) replied:

Thanks, that's helpful. It's quite different from what I have, and tbh I'm not sure how much I would need to change on the tripleo side to use something like this; for now I'm leaning toward keeping the current one and having this discussion here for reference in case we start seeing problems with the interface. Now I wonder, though: is there some topology where the current setup will be problematic and will need to be changed? Should we have this template as part of the scenario definition?
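
If the nic template did become part of the scenario definition, a hedged sketch of how it could be wired in; the variable name, template path, and destination are hypothetical, purely to illustrate the open question:

    # Hypothetical scenario variable file fragment
    cifmw_adoption_osp_deploy_net_config_template: "net_config/ctlplane_on_bridge.j2"

    # Hypothetical consuming task in the role
    - name: Render the per-scenario nic config
      ansible.builtin.template:
        src: "{{ cifmw_adoption_osp_deploy_net_config_template }}"
        dest: "{{ cifmw_basedir }}/artifacts/net_config.yaml"
        mode: "0644"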


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://softwarefactory-project.io/zuul/t/rdoproject.org/buildset/46a67ffcd92c485bad4a385db672d87a

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 52m 36s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 16m 29s
❌ cifmw-crc-podified-edpm-baremetal RETRY_LIMIT in 19m 54s
✔️ noop SUCCESS in 0s
✔️ cifmw-pod-ansible-test SUCCESS in 8m 02s
✔️ cifmw-pod-pre-commit SUCCESS in 7m 22s
✔️ build-push-container-cifmw-client SUCCESS in 31m 35s
