forked from kubernetes/autoscaler
vpa: Update to version 1.0.0 #108
Merged
Conversation
Sort nodegroups in order of their ID
Revert "Add subresource status for vpa"
This reverts commit 1384c8b.
Previous "CropNodes" function of ScaleDownBudgetProcessor had an assumption that atomically-scaled node groups should be classified as "empty" or "drain" as a whole, however Cluster Autoscaler may classify some of the nodes from a single group as "empty" and other as "drain".
update agnhost image to pull from registry.k8s.io
Generated by running:
```
go mod tidy
go mod vendor
```
Update VPA vendor
…r-cleanup Replace `BuildTestContainer` with use of builder
Include short unregistered nodes in calculation of incorrect node group
Add atomic scale down option for node groups
Add BigDarkClown to Cluster Autoscaler approvers
…h-weird-temp-folder-name Quote temp folder name parameter to avoid errors
* Merged multiple tests into a single table-driven test.
* Fixed some typos.
…strator
* Started handling scale-up options for ZeroToMaxNodeScaling with the existing estimator.
* Skip setting similar node groups for node groups that use ZeroToMaxNodeScaling.
* Renamed the autoscaling option from "AtomicScaleUp" to "AtomicScaling".
* Merged multiple tests into a single table-driven test.
* Fixed some typos.
* Renamed the "AtomicScaling" autoscaling option to "ZeroOrMaxNodeScaling" to be more clear about the behavior.
Add support for scaling up with ZeroToMaxNodesScaling option
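A hedged sketch of what such an all-or-nothing scaling option can look like; `AutoscalingOptions`, `scaleUpTarget`, and `balanceSimilarGroups` are illustrative names, not the real Cluster Autoscaler implementation:

```go
package sketch

// AutoscalingOptions is a hypothetical per-node-group options struct.
type AutoscalingOptions struct {
	// ZeroOrMaxNodeScaling means the group scales from zero directly
	// to its maximum size (and back), never to intermediate sizes.
	ZeroOrMaxNodeScaling bool
}

// scaleUpTarget picks the new size for a group that needs more nodes.
func scaleUpTarget(opts AutoscalingOptions, current, needed, max int) int {
	if opts.ZeroOrMaxNodeScaling {
		return max // all-or-nothing: jump straight to the maximum
	}
	if current+needed > max {
		return max
	}
	return current + needed
}

// balanceSimilarGroups reports whether similar-node-group balancing
// applies; it is skipped for all-or-nothing groups.
func balanceSimilarGroups(opts AutoscalingOptions) bool {
	return !opts.ZeroOrMaxNodeScaling
}
```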
Change the tracking of APIVersion from a boolean (indicating whether the VPA is v1beta1) to the full version string, and make sure it is exported in metrics. Add tests for the recommender metrics.
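A minimal sketch of exporting the version string as a metric label rather than a v1beta1 boolean; the metric name `vpa_objects_count` and label name are assumptions for illustration:

```go
package sketch

import "github.com/prometheus/client_golang/prometheus"

// vpaObjectCount counts VPA objects, labelled by their API version.
var vpaObjectCount = prometheus.NewGaugeVec(
	prometheus.GaugeOpts{
		Name: "vpa_objects_count", // assumed name, for illustration only
		Help: "Number of VPA objects, labelled by API version.",
	},
	[]string{"api_version"},
)

func init() {
	prometheus.MustRegister(vpaObjectCount)
}

// recordVPA records one VPA object with its full API version string,
// e.g. "v1" or "v1beta1", instead of a bare "is v1beta1" boolean.
func recordVPA(apiVersion string) {
	vpaObjectCount.WithLabelValues(apiVersion).Inc()
}
```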
Add a status field as a subresource in the CRD YAML and add a new ClusterRole, system:vpa-actor, to patch the /status subresource. `metadata.generation` now only increases on VPA spec updates. Fix e2e tests for patching and creating VPAs.
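A hedged sketch of such a ClusterRole built with client-go's RBAC types; the exact rule contents are assumptions, not copied from the PR:

```go
package sketch

import (
	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// vpaActorStatusRole grants patch access on the VPA /status subresource.
var vpaActorStatusRole = rbacv1.ClusterRole{
	ObjectMeta: metav1.ObjectMeta{Name: "system:vpa-actor"},
	Rules: []rbacv1.PolicyRule{{
		APIGroups: []string{"autoscaling.k8s.io"},
		// "/status" is addressed as a subresource of the
		// verticalpodautoscalers resource (rule contents assumed).
		Resources: []string{"verticalpodautoscalers/status"},
		Verbs:     []string{"get", "patch"},
	}},
}
```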
feat(metrics): add metrics to observe where time is consumed in scale up
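A small sketch of per-stage timing with a labelled Prometheus histogram; the metric name `scale_up_stage_duration_seconds` and the stage label are assumptions, not the metrics this commit actually adds:

```go
package sketch

import "github.com/prometheus/client_golang/prometheus"

// scaleUpDuration records how long each stage of a scale-up takes.
var scaleUpDuration = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Name:    "scale_up_stage_duration_seconds", // assumed name
		Help:    "Time consumed by each stage of a scale-up.",
		Buckets: prometheus.DefBuckets,
	},
	[]string{"stage"},
)

func init() {
	prometheus.MustRegister(scaleUpDuration)
}

// timedStage runs fn and observes its duration under the given stage label.
func timedStage(stage string, fn func()) {
	timer := prometheus.NewTimer(scaleUpDuration.WithLabelValues(stage))
	defer timer.ObserveDuration()
	fn()
}
```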
Packet autoscaler cloudprovider updates
Refactor `getStartDeletionTestCases` function so that each test case operates on its own node group.
…implemented fix(cloudprovider/externalgrpc): properly handle unimplemented methods
Fix the api group to x-k8s.io
Signed-off-by: Eric Lin <[email protected]>
Use informer factory to reuse listers
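For reference, a minimal sketch of the shared-informer-factory pattern this commit points at, where multiple listers are backed by one shared cache; the ten-minute resync period is an arbitrary assumption:

```go
package sketch

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// buildListers creates listers that all share one informer factory's cache
// instead of each component wiring up its own watch.
func buildListers(client kubernetes.Interface, stopCh <-chan struct{}) {
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)

	// Both listers read from the same shared cache.
	podLister := factory.Core().V1().Pods().Lister()
	nodeLister := factory.Core().V1().Nodes().Lister()
	_, _ = podLister, nodeLister

	factory.Start(stopCh)            // run the shared informers
	factory.WaitForCacheSync(stopCh) // block until caches are warm
}
```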
fix: add elect-leader flag to the pflag
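A hedged sketch of the usual shape of this fix, assuming the flag was registered on the standard library flag set and therefore had to be merged into pflag before parsing:

```go
package main

import (
	goflag "flag"

	"github.com/spf13/pflag"
)

func main() {
	// Leader-election flags are often registered on the stdlib flag set;
	// without merging, pflag.Parse() silently ignores them.
	goflag.Bool("leader-elect", false, "Enable leader election.")
	pflag.CommandLine.AddGoFlagSet(goflag.CommandLine)
	pflag.Parse()
}
```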
Bump VPA version to 1.0.0 in preparation for 1.28 release
Signed-off-by: Eric Lin <[email protected]>
Allow setting content-type in command
Bump default VPA version to 1.0.0 in vpa-release-1.0 branch
…1-to-VPA-release-1.0-branch Cherry pick kubernetes#6151 to VPA release 1.0 branch
fix duplicate -addext when generating certificates
Vertical Pod Autoscaler release 0.13.0
Signed-off-by: Mikkel Oscar Lyderik Larsen <[email protected]>
Vertical Pod Autoscaler release 0.14.0
Signed-off-by: Mikkel Oscar Lyderik Larsen <[email protected]>
Vertical Pod Autoscaler release 1.0.0
Signed-off-by: Mikkel Oscar Lyderik Larsen <[email protected]>
👍
1 similar comment
👍
mikkeloscar merged commit c8b9704 into zalando-vertical-pod-autoscaler on Mar 26, 2024. 8 of 9 checks passed.
Update to version 1.0.0 of the VPA from upstream: https://github.com/kubernetes/autoscaler/releases/tag/vertical-pod-autoscaler-1.0.0
Since this is a huge PR, the diff shown here is between our current VPA version and the upstream 1.0.0 version.
Compatibility: https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler#compatibility