v0.16 -> v0.17 upgrade, cannot establish control of object #583
Comments
Hey @patjones 👋 This is due to dropping support for those alpha-level CRDs. You can fix the issue by just deleting those CRDs (i.e. Feel free to weigh in on that issue or follow up here :)
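If you're unsure which CRDs to delete, a hedged sketch of how to find and remove them (the group filter is an assumption, and the exact alpha CRD names were elided above, so the delete target is a placeholder):

```shell
# List the provider-aws CRDs, then delete the obsolete alpha-level ones.
kubectl get crds -o name | grep 'aws.crossplane.io'
kubectl delete crd <alpha-crd-name>   # placeholder; use names from the list above
```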
Thanks for the response @hasheddan! Well, that cleared one of those away; however, I'm still having issues with the NATGateway CRD even after deleting it a few times.
@patjones do you have any
I do not, actually
The issue here was that the old revision was re-creating the CRD before the new revision could take control of it. We solved it by deleting the old revision, but the buggy behavior is being tracked in crossplane/crossplane#2197. @patjones thanks for bringing this up! I am going to close this out in favor of the linked issue, but please feel free to re-open if you have additional questions.
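For anyone hitting the same symptom, a hedged sketch of the workaround described above (the revision name is a placeholder, not taken from this thread):

```shell
# Workaround sketch: find the old (superseded) provider revision and
# delete it so it stops re-creating the CRDs out from under the new one.
kubectl get providerrevisions.pkg.crossplane.io
# Delete the revision corresponding to the old v0.16 package:
kubectl delete providerrevision <old-revision-name>   # placeholder name
```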
Perhaps unrelated, but in case someone finds this issue by searching for a similar error (as I did):
Changing the owner reference can help, with a script like this:

```bash
#!/bin/bash
# Variables
old_owner_name="crossplane-provider-aws-245ce7fb587d"
new_owner_name="crossplanecontrib-provider-aws-6707d06fe75f"

# Get the new UID from providerrevisions.pkg.crossplane.io
new_uid=$(kubectl get providerrevisions.pkg.crossplane.io "$new_owner_name" -o jsonpath='{.metadata.uid}')
echo "Replacing with: $(kubectl get providerrevisions.pkg.crossplane.io "$new_owner_name" -o jsonpath='{.spec.image}')"

# Get all CRDs owned by the old provider revision
crds=$(kubectl get crds -o json | jq -r --arg old_owner_name "$old_owner_name" '.items[] | select(.metadata.ownerReferences[]?.name == $old_owner_name) | .metadata.name')

# Loop through each CRD and patch it
for crd in $crds; do
  echo "Patching CRD: $crd"
  # Get the index of the old owner reference
  index=$(kubectl get crd "$crd" -o json | jq -r --arg old_owner_name "$old_owner_name" '
    .metadata.ownerReferences | to_entries[] | select(.value.name == $old_owner_name) | .key')
  # Check if the index was found
  if [ -z "$index" ]; then
    echo "Old owner reference not found for CRD: $crd"
    continue
  fi
  # Patch the CRD: drop the old owner reference and add the new one
  kubectl patch crd "$crd" --type='json' -p='[
    {"op": "remove", "path": "/metadata/ownerReferences/'$index'"},
    {"op": "add", "path": "/metadata/ownerReferences/-", "value": {"apiVersion": "pkg.crossplane.io/v1", "kind": "ProviderRevision", "name": "'"$new_owner_name"'", "uid": "'"$new_uid"'", "controller": true}}
  ]'
done
```
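To see what the jq index lookup in that script is doing without touching a cluster, here is a self-contained demo on a minimal sample of CRD metadata (requires jq; the sample is illustrative, not real cluster output):

```shell
#!/bin/bash
# Self-contained demo of the ownerReferences index lookup used above.
# "to_entries" turns the array into {key: index, value: element} pairs,
# so ".key" is the array index needed by the JSON-patch "remove" op.
old_owner_name="crossplane-provider-aws-245ce7fb587d"
sample='{"metadata":{"ownerReferences":[
  {"kind":"ProviderRevision","name":"crossplane-provider-aws-245ce7fb587d"},
  {"kind":"ProviderRevision","name":"crossplanecontrib-provider-aws-6707d06fe75f"}]}}'
index=$(echo "$sample" | jq -r --arg old_owner_name "$old_owner_name" '
  .metadata.ownerReferences | to_entries[]
  | select(.value.name == $old_owner_name) | .key')
echo "$index"   # prints 0: the old owner is the first array entry
```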
What happened?
I was trying to upgrade my AWS provider from v0.16 -> v0.17. I re-applied my provider file after bumping the version.
Unfortunately, the providerrevision for v0.17 is showing up as unhealthy.
If I describe the troublesome revision, I get a few events that seem problematic.
How can we reproduce it?
Install the AWS provider v0.16 with Provider and ProviderConfig yaml files, then bump the provider version to v0.17.
What environment did it happen in?
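A minimal sketch of the upgrade step described above (the package path, tag, and metadata.name are assumptions for illustration, not taken from this issue):

```shell
#!/bin/bash
# Hedged sketch: bump the Provider package version and re-apply.
# The image path "crossplane/provider-aws" and the name are assumptions.
cat <<'EOF' | kubectl apply -f -
apiVersion: pkg.crossplane.io/v1
kind: Provider
metadata:
  name: provider-aws
spec:
  package: crossplane/provider-aws:v0.17.0   # was v0.16.0
EOF
# Then watch the revisions; the new one should become healthy:
kubectl get providerrevisions.pkg.crossplane.io
```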
Crossplane version: 1.10.0
OpenShift 4.6.19 (k8s 1.19)
Thanks for any insight!