Introduce scenarios for 17.1 env for adoption
Introduce a scenarios folder that will contain the needed input to
deploy a 17.1 environment using the cifmw role added in [1]. A
scenario is defined by a variable file, with undercloud-specific
parameters, overcloud-specific parameters, hooks that can be called
before or after both the undercloud and overcloud deployment, and two
maps that relate the groups in the inventory produced by the playbook
that created the infra to TripleO roles and role hostnames, to make it
easier to work with different roles in different scenarios.

[1] openstack-k8s-operators/ci-framework#2297
cescgina committed Oct 1, 2024
1 parent d39856d commit f88a0d5
Showing 9 changed files with 605 additions and 0 deletions.
90 changes: 90 additions & 0 deletions scenarios/README.md
@@ -0,0 +1,90 @@
# OSP 17.1 scenarios

The files stored in this folder define different OSP 17.1 deployments to be
tested with adoption. For each scenario there is a `<scenario_name>.yaml` file
and a folder with the same name. The YAML file contains variables that will be
used to customize the scenario, while the folder contains files that will be
used in the deployment (network_data, role files, etc.).
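
As an illustration, the `hci` scenario added in this commit is laid out like
this (the files inside the folder are the ones referenced from `hci.yaml`):

```
scenarios/
├── hci.yaml
└── hci/
    ├── config-download.yaml
    ├── hieradata_overrides_undercloud.yaml
    ├── network_data.yaml
    ├── osd_spec.yaml
    ├── roles.yaml
    ├── undercloud_parameter_defaults.yaml
    └── vips_data.yaml
```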

This scenario definition assumes that all parameters relevant to the
deployment are known, with the exception of infra-dependent values like IPs or
hostnames.

## Scenario definition file

The scenario definition file (the `<scenario_name>.yaml`) has the following
top-level sections:

- undercloud
- overcloud
- cloud_domain
- hostname_groups_map
- roles_groups_map
- hooks

### Undercloud section

The undercloud section contains the following parameters (all optional):

- `config`: a list of options to set in the `undercloud.conf` file; each entry
  is a dictionary with the fields `section`, `option` and `value`.
- `undercloud_parameters_override`: path to a file with override parameters
  for the undercloud setup; it is passed through the `hieradata_override`
  option in `undercloud.conf`.
- `undercloud_parameters_defaults`: path to a file that contains
  `parameter_defaults` for the undercloud; it is passed through the
  `custom_env_files` option in `undercloud.conf`.
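
For example, the `hci.yaml` scenario in this commit sets a few
`undercloud.conf` options and points both parameter files at the scenario
folder (trimmed here to two `config` entries):

```
undercloud:
  config:
    - section: DEFAULT
      option: undercloud_hostname
      value: undercloud.localdomain
    - section: DEFAULT
      option: container_cli
      value: podman
  undercloud_parameters_override: "hci/hieradata_overrides_undercloud.yaml"
  undercloud_parameters_defaults: "hci/undercloud_parameter_defaults.yaml"
```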

### Overcloud section

The overcloud section contains the following parameters:

- `stackname`: name of the overcloud deployment, defaults to `overcloud`.
- `args`: list of CLI arguments to use when deploying the overcloud.
- `vars`: list of environment files to use when deploying the overcloud.
- `network_data_file`: path to the network_data file that defines the networks
  to use in the overcloud, required.
- `vips_data_file`: path to the file defining the virtual IPs to use in the
  overcloud, required.
- `roles_file`: path to the file defining the roles of the different nodes
  used in the overcloud, required.
- `config_download_file`: path to the config-download file used to pass extra
  environment configuration to the overcloud deployment, required.
- `ceph_osd_spec_file`: path to the OSD spec file used to deploy Ceph when
  applicable, optional.
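
Putting it together, the overcloud section of the `hci.yaml` scenario looks
like this (the `args` and `vars` lists are abridged):

```
overcloud:
  stackname: "overcloud"
  args:
    - "--templates /usr/share/openstack-tripleo-heat-templates"
    - "--timeout 90"
  vars:
    - "/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml"
    - "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml"
  network_data_file: "hci/network_data.yaml"
  vips_data_file: "hci/vips_data.yaml"
  roles_file: "hci/roles.yaml"
  config_download_file: "hci/config-download.yaml"
  ceph_osd_spec_file: "hci/osd_spec.yaml"
```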

### Cloud domain

Name of the DNS domain used for the overcloud, particularly relevant for TLS
everywhere (TLS-E) environments.
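
In the `hci.yaml` scenario this is simply:

```
cloud_domain: "localdomain"
```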

### Hostname groups map

Map that relates ansible groups in the inventory produced by the infra
creation to the role hostname format of the 17.1 deployment. This makes it
possible to tell which nodes belong to the overcloud without relying on
specific naming, and it is used to build the hostnamemap. For example, let's
assume we have an inventory with a group called `osp-computes` that contains
the computes and a group called `osp-controllers` that contains the
controllers; a possible map would then look like:

```
hostname_groups_map:
  osp-computes: "overcloud-novacompute"
  osp-controllers: "overcloud-controller"
```

### Roles groups map

Map that relates ansible groups in the inventory produced by the infra
creation to OSP roles. This makes it possible to build a
tripleo-ansible-inventory, which is used, for example, to deploy Ceph.
Continuing from the example in the previous section, a possible value for this
map would be:

```
roles_groups_map:
  osp-computes: "Compute"
  osp-controllers: "Controller"
```

### Hooks
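
As a reference, the `hci.yaml` scenario defines a pre-overcloud hook
(`pre_oc_run`) that runs a playbook to deploy Ceph before the overcloud
deployment:

```
pre_oc_run:
  - name: Deploy Ceph
    type: playbook
    source: "adoption_deploy_ceph.yml"
```
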
66 changes: 66 additions & 0 deletions scenarios/hci.yaml
@@ -0,0 +1,66 @@
---
undercloud:
  config:
    - section: DEFAULT
      option: undercloud_hostname
      value: undercloud.localdomain
    - section: DEFAULT
      option: undercloud_timezone
      value: UTC
    - section: DEFAULT
      option: undercloud_debug
      value: true
    - section: DEFAULT
      option: container_cli
      value: podman
    - section: DEFAULT
      option: undercloud_enable_selinux
      value: false
    - section: DEFAULT
      option: generate_service_certificate
      value: false
  undercloud_parameters_override: "hci/hieradata_overrides_undercloud.yaml"
  undercloud_parameters_defaults: "hci/undercloud_parameter_defaults.yaml"
ctlplane_vip: 192.168.122.99
pre_oc_run:
  - name: Deploy Ceph
    type: playbook
    source: "adoption_deploy_ceph.yml"
cloud_domain: "localdomain"
hostname_groups_map:
  # map ansible groups in the inventory to role hostname format for
  # 17.1 deployment
  osp-computes: "overcloud-computehci"
  osp-controllers: "overcloud-controller"
roles_groups_map:
  # map ansible groups to tripleo Role names
  osp-computes: "ComputeHCI"
  osp-controllers: "Controller"
overcloud:
  stackname: "overcloud"
  args:
    - "--override-ansible-cfg /home/zuul/ansible_config.cfg"
    - "--templates /usr/share/openstack-tripleo-heat-templates"
    - "--libvirt-type qemu"
    - "--timeout 90"
    - "--overcloud-ssh-user zuul"
    - "--deployed-server"
    - "--validation-warnings-fatal"
    - "--disable-validations"
    - "--heat-type pod"
    - "--disable-protected-resource-types"
  vars:
    - "/home/zuul/deployed_ceph.yaml"
    - "/usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml"
    - "/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml"
    - "/usr/share/openstack-tripleo-heat-templates/environments/low-memory-usage.yaml"
    - "/usr/share/openstack-tripleo-heat-templates/environments/debug.yaml"
    - "/usr/share/openstack-tripleo-heat-templates/environments/manila-cephfsnative-config.yaml"
    - "/usr/share/openstack-tripleo-heat-templates/environments/enable-legacy-telemetry.yaml"
    - "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/cephadm.yaml"
    - "/usr/share/openstack-tripleo-heat-templates/environments/cephadm/ceph-mds.yaml"
  network_data_file: "hci/network_data.yaml"
  vips_data_file: "hci/vips_data.yaml"
  roles_file: "hci/roles.yaml"
  ceph_osd_spec_file: "hci/osd_spec.yaml"
  config_download_file: "hci/config-download.yaml"
80 changes: 80 additions & 0 deletions scenarios/hci/config-download.yaml
@@ -0,0 +1,80 @@
---
resource_registry:
  # yamllint disable rule:line-length
  OS::TripleO::DeployedServer::ControlPlanePort: /usr/share/openstack-tripleo-heat-templates/deployed-server/deployed-neutron-port.yaml
  OS::TripleO::OVNMacAddressNetwork: OS::Heat::None
  OS::TripleO::OVNMacAddressPort: OS::Heat::None
  OS::TripleO::ComputeHCI::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml
  OS::TripleO::ComputeHCI::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml
  OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml
  OS::TripleO::ComputeHCI::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml
  OS::TripleO::Controller::Ports::ExternalPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_external.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_internal_api.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage_mgmt.yaml
  OS::TripleO::Controller::Ports::StoragePort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_storage.yaml
  OS::TripleO::Controller::Ports::TenantPort: /usr/share/openstack-tripleo-heat-templates/network/ports/deployed_tenant.yaml
  OS::TripleO::Services::CeilometerAgentCentral: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-central-container-puppet.yaml
  OS::TripleO::Services::CeilometerAgentNotification: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-notification-container-puppet.yaml
  OS::TripleO::Services::CeilometerAgentIpmi: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-ipmi-container-puppet.yaml
  OS::TripleO::Services::ComputeCeilometerAgent: /usr/share/openstack-tripleo-heat-templates/deployment/ceilometer/ceilometer-agent-compute-container-puppet.yaml
  OS::TripleO::Services::Collectd: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/collectd-container-puppet.yaml
  OS::TripleO::Services::MetricsQdr: /usr/share/openstack-tripleo-heat-templates/deployment/metrics/qdr-container-puppet.yaml
  OS::TripleO::Services::OsloMessagingRpc: /usr/share/openstack-tripleo-heat-templates/deployment/rabbitmq/rabbitmq-messaging-rpc-pacemaker-puppet.yaml
  OS::TripleO::Services::OsloMessagingNotify: /usr/share/openstack-tripleo-heat-templates/deployment/rabbitmq/rabbitmq-messaging-notify-shared-puppet.yaml
  OS::TripleO::Services::HAproxy: /usr/share/openstack-tripleo-heat-templates/deployment/haproxy/haproxy-pacemaker-puppet.yaml
  OS::TripleO::Services::Pacemaker: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/pacemaker-baremetal-puppet.yaml
  OS::TripleO::Services::PacemakerRemote: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/pacemaker-remote-baremetal-puppet.yaml
  OS::TripleO::Services::Clustercheck: /usr/share/openstack-tripleo-heat-templates/deployment/pacemaker/clustercheck-container-puppet.yaml
  OS::TripleO::Services::Redis: /usr/share/openstack-tripleo-heat-templates/deployment/database/redis-pacemaker-puppet.yaml
  OS::TripleO::Services::Rsyslog: /usr/share/openstack-tripleo-heat-templates/deployment/logging/rsyslog-container-puppet.yaml
  OS::TripleO::Services::MySQL: /usr/share/openstack-tripleo-heat-templates/deployment/database/mysql-pacemaker-puppet.yaml
  OS::TripleO::Services::CinderBackup: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-backup-pacemaker-puppet.yaml
  OS::TripleO::Services::CinderVolume: /usr/share/openstack-tripleo-heat-templates/deployment/cinder/cinder-volume-pacemaker-puppet.yaml
  OS::TripleO::Services::HeatApi: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-container-puppet.yaml
  OS::TripleO::Services::HeatApiCfn: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cfn-container-puppet.yaml
  OS::TripleO::Services::HeatApiCloudwatch: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-api-cloudwatch-disabled-puppet.yaml
  OS::TripleO::Services::HeatEngine: /usr/share/openstack-tripleo-heat-templates/deployment/heat/heat-engine-container-puppet.yaml
parameter_defaults:
  RedisVirtualFixedIPs:
    - ip_address: 192.168.122.110
      use_neutron: false
  OVNDBsVirtualFixedIPs:
    - ip_address: 192.168.122.111
      use_neutron: false
  ControllerExtraConfig:
    nova::compute::libvirt::services::libvirt_virt_type: qemu
    nova::compute::libvirt::virt_type: qemu
  ComputeExtraConfig:
    nova::compute::libvirt::services::libvirt_virt_type: qemu
    nova::compute::libvirt::virt_type: qemu
  Debug: true
  DockerPuppetDebug: true
  ContainerCli: podman
  ControllerCount: 3
  ComputeHCICount: 3
  NeutronGlobalPhysnetMtu: 1350
  CinderLVMLoopDeviceSize: 20480
  CloudName: overcloud.localdomain
  CloudNameInternal: overcloud.internalapi.localdomain
  CloudNameStorage: overcloud.storage.localdomain
  CloudNameStorageManagement: overcloud.storagemgmt.localdomain
  CloudNameCtlplane: overcloud.ctlplane.localdomain
  CloudDomain: localdomain
  NetworkConfigWithAnsible: false
  ControllerHostnameFormat: '%stackname%-controller-%index%'
  ComputeHCIHostnameFormat: '%stackname%-computehci-%index%'
  CtlplaneNetworkAttributes:
    network:
      dns_domain: localdomain
      mtu: 1500
      name: ctlplane
      tags:
        - 192.168.122.0/24
    subnets:
      ctlplane-subnet:
        cidr: 192.168.122.0/24
        dns_nameservers: 192.168.122.10
        gateway_ip: 192.168.122.10
        host_routes: []
        name: ctlplane-subnet
        ip_version: 4
11 changes: 11 additions & 0 deletions scenarios/hci/hieradata_overrides_undercloud.yaml
@@ -0,0 +1,11 @@
---
parameter_defaults:
  UndercloudExtraConfig:
    ironic::disk_utils::image_convert_memory_limit: 2048
    ironic::conductor::heartbeat_interval: 20
    ironic::conductor::heartbeat_timeout: 120

    # Ironic defaults to using `qemu:///system`. When running libvirtd
    # unprivileged we need to use `qemu:///session`. This allows us to pass
    # the value of libvirt_uri into /etc/ironic/ironic.conf.
    ironic::drivers::ssh::libvirt_uri: 'qemu:///session'
61 changes: 61 additions & 0 deletions scenarios/hci/network_data.yaml
@@ -0,0 +1,61 @@
---
- name: Storage
  mtu: 1500
  vip: true
  name_lower: storage
  dns_domain: storage.mydomain.tld.
  service_net_map_replace: storage
  subnets:
    storage_subnet:
      vlan: 21
      ip_subnet: '172.18.0.0/24'
      allocation_pools: [{'start': '172.18.0.120', 'end': '172.18.0.250'}]

- name: StorageMgmt
  mtu: 1500
  vip: true
  name_lower: storage_mgmt
  dns_domain: storagemgmt.mydomain.tld.
  service_net_map_replace: storage_mgmt
  subnets:
    storage_mgmt_subnet:
      vlan: 23
      ip_subnet: '172.20.0.0/24'
      allocation_pools: [{'start': '172.20.0.120', 'end': '172.20.0.250'}]

- name: InternalApi
  mtu: 1500
  vip: true
  name_lower: internal_api
  dns_domain: internal-api.mydomain.tld.
  service_net_map_replace: internal_api
  subnets:
    internal_api_subnet:
      vlan: 20
      ip_subnet: '172.17.0.0/24'
      allocation_pools: [{'start': '172.17.0.120', 'end': '172.17.0.250'}]

- name: Tenant
  mtu: 1500
  vip: false  # Tenant network does not use VIPs
  name_lower: tenant
  dns_domain: tenant.mydomain.tld.
  service_net_map_replace: tenant
  subnets:
    tenant_subnet:
      vlan: 22
      ip_subnet: '172.19.0.0/24'
      allocation_pools: [{'start': '172.19.0.120', 'end': '172.19.0.250'}]

- name: External
  mtu: 1500
  vip: true
  name_lower: external
  dns_domain: external.mydomain.tld.
  service_net_map_replace: external
  subnets:
    external_subnet:
      gateway_ip: '172.21.0.1'
      vlan: 44
      ip_subnet: '172.21.0.0/24'
      allocation_pools: [{'start': '172.21.0.120', 'end': '172.21.0.250'}]
3 changes: 3 additions & 0 deletions scenarios/hci/osd_spec.yaml
@@ -0,0 +1,3 @@
---
data_devices:
  all: true