What has changed in ci-artifacts?
(Organized release by release)
- Add nfd test_master_branch protocol #179
  - new toolbox command: toolbox/nfd-operator/deploy_from_commit.sh <git repository> <git reference>
    to deploy the NFD Operator from a custom commit.
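  - example invocation (the repository URL and branch name below are placeholders, not values from this repository):
    toolbox/nfd-operator/deploy_from_commit.sh https://github.com/example/cluster-nfd-operator.git my-test-branch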
- Support for running NTO e2e tests #185
  - new toolbox command: toolbox/nto/run_e2e_test.sh <git repository> <git reference>
    to run the NTO e2e test suite from a given commit.
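  - example invocation (the repository URL and branch name are placeholders):
    toolbox/nto/run_e2e_test.sh https://github.com/example/cluster-node-tuning-operator.git my-test-branch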
- Add --base_machineset flag to run_toolbox.py cluster set_scale #243
  - new flag to set the base machineset from which the new machineset will be derived.
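  - example invocation (instance type, node count and machineset name are placeholders; the positional arguments are assumed to be the instance type followed by the desired node count):
    ./run_toolbox.py cluster set_scale g4dn.xlarge 2 --base_machineset=my-cluster-worker-us-east-1a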
- Use subscriptions.operators.coreos.com instead of subscriptions to avoid conflicts with Knative subscriptions #207 #208
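  - example: querying OLM subscriptions with the fully-qualified resource name, so that oc does not resolve the ambiguous short name to another API group's Subscription kind:
    oc get subscriptions.operators.coreos.com -A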
- toolbox/nfd/deploy_from_operatorhub.sh was moved to toolbox/nfd-operator/deploy_from_operatorhub.sh
- toolbox/local-ci/deploy.sh <ci command> <git repository> <git reference> was fixed #179
- Introduce a GitHub Action for checking Ansible variable consistency #196
  - toolbox/repo/validate_role_files.py is a new script to ensure that all the Ansible variables defining a filepath (roles/) do point to an existing file
  - toolbox/repo/validate_role_vars_used.py is a new script to ensure that all the Ansible variables defined are actually used in their role (with an exception for symlinks)
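  - example: running both checks locally (assuming they are run from the repository root and take no arguments):
    python3 toolbox/repo/validate_role_files.py
    python3 toolbox/repo/validate_role_vars_used.py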
- gpu_operator_deploy_from_operatorhub: allow overriding subscription.spec.installPlanApproval `#219 <https://github.com/openshift-psap/ci-artifacts/pull/219>`_
  - ./toolbox/gpu-operator/deploy_from_operatorhub.sh can receive a new flag, -install-plan=Manual|Automatic
    (Manual is the default), to override the Subscription install-plan approval setting when deploying from OperatorHub.
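  - example, using the flag as documented above to let the install plan be approved automatically:
    ./toolbox/gpu-operator/deploy_from_operatorhub.sh -install-plan=Automatic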
- Change scaleup to set_scale - supported scale other than just 1 node #139
  - toolbox/cluster/scaleup.sh has been removed, toolbox/cluster/set_scale.sh has been introduced as a replacement.
- Add easy ways to test the entitlement #120
  - toolbox/entitlement/deploy.sh --machine-configs /path/to/machineconfigs has been removed,
    toolbox/entitlement/deploy.sh --pem /path/to/key.pem should be used instead.
    See there for a function to extract the PEM key from a machine-configs resource file.
- toolbox: rename entitlement/test.sh -> entitlement/test_cluster.sh #166
  - toolbox/entitlement/test.sh was renamed to toolbox/entitlement/test_cluster.sh
- Change scaleup to set_scale - supported scale other than just 1 node #139
  - toolbox/cluster/set_scale.sh has been introduced to control the scale (node count of a given AWS instance type) of a cluster
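  - example invocation (instance type and node count are placeholders; it is assumed the script takes the AWS instance type followed by the desired node count):
    ./toolbox/cluster/set_scale.sh g4dn.xlarge 2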
- Add easy ways to test the entitlement #120
  - new commands to test a PEM key before deploying it:
    toolbox/entitlement/test_in_podman.sh /path/to/key.pem
    toolbox/entitlement/test_in_cluster.sh /path/to/key.pem
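  - example workflow, combining the commands above with the deployment command (the key path is a placeholder):
    # validate the key with podman, then inside the cluster, before deploying it
    toolbox/entitlement/test_in_podman.sh /path/to/key.pem
    toolbox/entitlement/test_in_cluster.sh /path/to/key.pem
    toolbox/entitlement/deploy.sh --pem /path/to/key.pem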
- gpu_operator_set_repo-config: new role to set spec.driver.repoConfig #124
  - new option to deploy a custom PEM CA file, to access private repo mirrors:
    toolbox/entitlement/deploy.sh --pem </path/to/key.pem> [--ca </path/to/key.ca.pem>]
  - new command to configure the GPU Operator with a given repo-list file:
    toolbox/gpu-operator/set_repo-config.sh <path/to/repo.list> [<dest-dir>]
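  - example, chaining the two commands above to set up access to a private mirror (all paths are placeholders):
    toolbox/entitlement/deploy.sh --pem /path/to/key.pem --ca /path/to/key.ca.pem
    toolbox/gpu-operator/set_repo-config.sh /path/to/repo.list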
- gpu_operator_deploy_from_operatorhub: add support for setting the channel #173
  - toolbox/gpu-operator/deploy_from_operatorhub.sh [<version> [<channel>]]
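  - example invocation (the version and channel values are illustrative only):
    ./toolbox/gpu-operator/deploy_from_operatorhub.sh 1.7.1 v1.7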