✨ hack/verify-go-modules.sh: compare dependency versions #3312
base: main
Conversation
…/kubernetes
On-behalf-of: SAP [email protected]
Signed-off-by: Robert Vasek <[email protected]>
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
I would honestly be open to failing the script if we have a drift; otherwise it's super easy to miss this, and you need to know the script is doing it. WDYT?
We can, but this brings a need to keep a list of our go.mod files in a specific order, and to keep it up to date (though this would actually be nice to have already). I'll explain. At the moment, the order is semi-random, depending on a couple of factors; for example, on my system it was like so (listing from the example output above):
Assuming the script fails and exits after the first go.mod, the user would start with cleaning … If however …

Things to do

So, if what's above makes sense, we'll need: …
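For illustration, a minimal sketch of the fixed ordering and fail-fast behaviour discussed in this comment could look like the following; the sub-module paths and the check_go_mod helper are assumptions, not part of this PR:

```bash
#!/usr/bin/env bash
# Illustrative sketch only: keep an explicit, ordered list of the repo's
# go.mod files so the script walks them deterministically and can stop at
# the first drift it finds. Paths and the helper below are placeholders.
set -euo pipefail

check_go_mod() {
  # Stand-in for the real comparison logic; always succeeds in this sketch.
  echo "checking $1"
}

GO_MOD_FILES=(
  "go.mod"              # root module first
  "tests/e2e/go.mod"    # hypothetical sub-module paths
  "hack/tools/go.mod"
)

for f in "${GO_MOD_FILES[@]}"; do
  if ! check_go_mod "$f"; then
    echo "dependency drift found in ${f}" >&2
    exit 1   # fail fast, as suggested above
  fi
done
```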
I see, that sounds like a bigger challenge. I'd be okay with merging it as is.
/retest
I actually did it yesterday evening, but we can iterate on this in another PR.
Summary
This PR adds checks for go.mod dependencies, verifying that the versions used in our modules are the same as the ones declared by the k8s.io/kubernetes module. When rebasing or otherwise bringing in dependency updates, they can break our code -- see #3283 (comment) for an example. Just by following the recommendations from this script, the issue linked above was solved in a couple of minutes instead of regrettably spending a day or two debugging different things... :D
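The script itself isn't reproduced here; as a rough sketch of the idea, one could pull the require blocks out of two go.mod files with `go mod edit -json` and `jq` and compare them pairwise (the file paths below are placeholders, not necessarily the layout this PR uses):

```bash
#!/usr/bin/env bash
# Rough sketch, not the actual hack/verify-go-modules.sh: collect every
# "module version" pair required by k8s.io/kubernetes' go.mod and warn when
# one of our go.mod files declares a different version of the same dependency.
set -euo pipefail

KUBE_GO_MOD="${1:?path to the k8s.io/kubernetes go.mod}"
OUR_GO_MOD="${2:?path to one of this repo's go.mod files}"

deps_of() {
  # Print "<module path> <version>" for every require entry in a go.mod file.
  go mod edit -json "$1" | jq -r '.Require[]? | "\(.Path) \(.Version)"'
}

declare -A kube_version
while read -r mod ver; do
  kube_version["$mod"]="$ver"
done < <(deps_of "$KUBE_GO_MOD")

while read -r mod ver; do
  kube="${kube_version[$mod]:-}"
  if [[ -n "$kube" && "$kube" != "$ver" ]]; then
    echo "WARNING: ${mod}: ${ver} here vs ${kube} in k8s.io/kubernetes"
  fi
done < <(deps_of "$OUR_GO_MOD")
```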
The checks only report warnings when our deps are more than a patch version apart from k8s.io/kubernetes, and they never return a non-zero exit code, so CI does not fail in cases where we deliberately want to use different versions.
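The "more than a patch version apart" tolerance could be approximated along these lines; the function names are illustrative and the real script may implement it differently:

```bash
# Illustrative only: warn when the major.minor part of two versions differs,
# stay silent when only the patch level differs, and always exit 0 so CI
# never fails on deliberate version skew.

major_minor() {
  # v1.28.3 -> 1.28
  local v="${1#v}"
  echo "${v%.*}"
}

warn_if_drifted() {
  local mod="$1" ours="$2" theirs="$3"
  if [[ "$(major_minor "$ours")" != "$(major_minor "$theirs")" ]]; then
    echo "WARNING: ${mod}: ${ours} here vs ${theirs} in k8s.io/kubernetes"
  fi
}

warn_if_drifted example.com/some/dep  v1.2.0 v1.4.5   # prints a warning
warn_if_drifted example.com/other/dep v1.2.0 v1.2.9   # silent: only the patch differs
exit 0
```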
Example output:
Related issue(s)
Fixes #3306
Release Notes