KUBERNETES_PATCH_PATH fails when applying back original resource with modification. #346
Comments
@GrahamDumpleton hello and thank you for submitting this issue. Most of our teammates have days off till the end of this week because of the New Year's holiday season in Russia. We will reach you next week and see together what we can do.
Alas, I can't use
Thinking whether there should be a distinct
Hello!
Oops, this one is not correct.
The closest to this scenario is a
Sounds good. @zuzzas what do you think about adding an Update operation?
Another example where behaviour of
In other words, it errors even though the fields outside of the resource requests have the same values as the originals. Not much choice at this point but to not use
Expected behavior (what you expected to happen):
Hook script is configured to use snapshots and so gets complete copies of resources from the cluster.
A resource, in this case a `Secret`, is taken from the snapshot and modified. The complete resource, with all original fields and the one modified field, is passed back to the shell-operator via the `KUBERNETES_PATCH_PATH` file, with something like:
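A payload of this kind might look like the following sketch, assuming shell-operator's object-patch format (a YAML document with an `operation` field, here `CreateOrUpdate`); the resource name, namespace, and data are hypothetical:

```yaml
operation: CreateOrUpdate
object:
  apiVersion: v1
  kind: Secret
  metadata:
    name: my-secret          # hypothetical name
    namespace: target-ns     # hypothetical namespace
    resourceVersion: "12345" # carried over unchanged from the snapshot
  type: Opaque
  data:
    password: c2VjcmV0       # the one modified field
```

The key point is that the object still carries the `resourceVersion` captured in the snapshot.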
Expected behaviour is that this should be applied back to the cluster and the original version updated.
In other words, I expected this to work just as if I had run `kubectl apply`. However, it doesn't.

Actual behavior (what actually happened):
The update of the existing resource fails with the error:
In other words, it fails when supplying the original resource with just the modification applied, due to the presence of the original `resourceVersion` property. Even though this should be an update, since the resource does exist, the message indicates that creation of an object is being attempted. The presence of the `resourceVersion` property doesn't cause an issue if you take the same modified object and use `kubectl apply`.
If one explicitly removes `resourceVersion` from what is written back to `KUBERNETES_PATCH_PATH` then it works, but that isn't really desirable. For an update you actually want it to verify that the resource version is unchanged when making the update, so that it can catch the case where the resource in the cluster has changed in the interim, or has even been deleted.

What doesn't make sense is that in the code it does:
so at that point it seems to treat them all as create operations. I can't follow the code through after that to work out what actually happens, but the docs say I should expect:
So I would expect it to realise that the resource already exists, compute a patch for the differences, and apply that, but that doesn't look like what is happening.
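The removal workaround described above can be sketched with `jq` (this assumes `jq` is available; the sample object and file path are hypothetical):

```shell
# Sketch of the workaround: strip metadata.resourceVersion from the
# modified object before it is written back. The sample Secret and
# file path are hypothetical.
cat > /tmp/modified-secret.json <<'EOF'
{"apiVersion": "v1", "kind": "Secret",
 "metadata": {"name": "my-secret", "resourceVersion": "12345"},
 "data": {"password": "c2VjcmV0"}}
EOF
# Emit the same object minus the resourceVersion field.
jq 'del(.metadata.resourceVersion)' /tmp/modified-secret.json
```

As the text notes, this makes the write succeed at the cost of losing the optimistic-concurrency check that `resourceVersion` provides.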
For now I have switched to using:
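A small targeted patch of this kind might be sketched as follows, assuming shell-operator's `MergePatch` operation; the resource name and namespace are hypothetical:

```yaml
operation: MergePatch
kind: Secret
namespace: target-ns   # hypothetical namespace
name: my-secret        # hypothetical name
mergePatch:
  data:
    password: c2VjcmV0 # only the changed field is sent
```

Because only the changed fields are sent, this sidesteps the `resourceVersion` problem, but it requires working out the minimal diff up front.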
instead, but that isn't going to work for other cases I will have, where it is too hard to determine a small patch. In those cases I will just have to drop the `resourceVersion` and hope that will be okay, although the case where the resource has been deleted in the meantime will be an issue, since the patch could potentially recreate it when I don't want it to.

Environment:
Additional information for debugging (if necessary):
Hook script
```shell
crontab: "* * * * *"
group: secret-copier
kubernetes:
- apiVersion: example.com/v1alpha1
  kind: SecretExport
  executeHookOnEvent:
  group: secret-copier
- apiVersion: example.com/v1alpha1
  kind: SecretImport
  executeHookOnEvent:
  group: secret-copier
- kind: Secret
  executeHookOnEvent:
  group: secret-copier
EOF
exit 0
fi
ytt --data-value-file contexts=$BINDING_CONTEXT_PATH -f ../handlers >$KUBERNETES_PATCH_PATH
```