When adding the `shortNames` for our CRDs, we encountered a problem when trying to remove the old `v1alpha2` CRD version. Even if the resources are migrated to `v1` during the upgrade path, the CRD status keeps the `v1alpha2` version in the `storedVersions` field, which blocks the upgrade that removes the old version. Therefore, we need to find the best way to fix this issue so we can safely remove the `v1alpha2` version.
To simulate the issue, follow these steps (a command sketch follows the list):

1. Start a clean cluster.
2. Install kubewarden-crds version `0.1.4` and kubewarden-controller version `0.4.6`. These are the versions before the `v1` CRDs and the Kubewarden `v1.0.0` release.
3. Install a `v1alpha2` policy. See below for a policy to be used.
4. Upgrade to `v1.0.0` of the crds and controller Helm charts.
5. Upgrade to the latest crds and controller Helm charts.
6. Change a local kubewarden-crds Helm chart, removing the `v1alpha2` version.
7. Try to upgrade the CRDs using the local Helm chart.
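Roughly, steps 4–7 map to commands like these (a sketch; `./kubewarden-crds` stands for a local checkout of the CRDs chart with `v1alpha2` removed):

```sh
# upgrade to the v1.0.0 charts, then to the latest released charts
helm upgrade --namespace kubewarden kubewarden-crds kubewarden/kubewarden-crds --version 1.0.0
helm upgrade --namespace kubewarden kubewarden-controller kubewarden/kubewarden-controller --version 1.0.0
helm upgrade --namespace kubewarden kubewarden-crds kubewarden/kubewarden-crds
helm upgrade --namespace kubewarden kubewarden-controller kubewarden/kubewarden-controller

# upgrade the CRDs from the modified local chart
helm upgrade --namespace kubewarden kubewarden-crds ./kubewarden-crds
```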
The following error happens:
```console
Error: UPGRADE FAILED: cannot patch "clusteradmissionpolicies.policies.kubewarden.io" with kind CustomResourceDefinition: CustomResourceDefinition.apiextensions.k8s.io "clusteradmissionpolicies.policies.kubewarden.io" is invalid: status.storedVersions[0]: Invalid value: "v1alpha2": must appear in spec.versions
```
Therefore, even if the policies have been migrated to `v1` during the upgrade path, the `storedVersions` field still says that we have `v1alpha2` installed. This is the field description:
> storedVersions lists all versions of CustomResources that were ever persisted. Tracking these versions allows a migration path for stored versions in etcd. The field is mutable so a migration controller can finish a migration to another version (ensuring no old objects are left in storage), and then remove the rest of the versions from this list. Versions may not be removed from spec.versions while they exist in this list.
Considering this documentation, I guess our controller needs to update this field to allow the removal of the old CRD version.
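For reference, the manual equivalent of that fix is a patch on the CRD status (a sketch, assuming a `kubectl` recent enough to support `--subresource`):

```sh
# confirm that v1alpha2 is still recorded as a stored version
kubectl get crd clusteradmissionpolicies.policies.kubewarden.io \
  -o jsonpath='{.status.storedVersions}'

# once every object has been rewritten as v1, drop v1alpha2 from the list
kubectl patch crd clusteradmissionpolicies.policies.kubewarden.io \
  --subresource=status --type=merge \
  -p '{"status":{"storedVersions":["v1"]}}'
```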
In case you want to set up a similar testing environment, these are the commands used to create a cluster with the old Kubewarden stack version:
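(A minimal sketch, assuming k3d and the Kubewarden Helm chart repository; any cluster provisioner and namespace layout should work.)

```sh
# throw-away cluster
k3d cluster create kubewarden-upgrade-test

# pre-v1 Kubewarden stack
helm repo add kubewarden https://charts.kubewarden.io
helm repo update
helm install --namespace kubewarden --create-namespace \
  kubewarden-crds kubewarden/kubewarden-crds --version 0.1.4
helm install --namespace kubewarden \
  kubewarden-controller kubewarden/kubewarden-controller --version 0.4.6
```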
The policy definition:
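(Any minimal `v1alpha2` `ClusterAdmissionPolicy` works for this test; the sketch below uses the upstream `pod-privileged` module purely as an example.)

```sh
kubectl apply -f - <<EOF
apiVersion: policies.kubewarden.io/v1alpha2
kind: ClusterAdmissionPolicy
metadata:
  name: privileged-pods
spec:
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.1.9
  rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      resources: ["pods"]
      operations: ["CREATE", "UPDATE"]
  mutating: false
EOF
```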
Acceptance criteria

- Update the `storedVersions` field, removing the old version that is no longer in use.
- Mark the `v1alpha2` API package with `//+kubebuilder:skip`. This will remove the version from the CRD generation.

Originally posted by @jvanz in #896 (comment)