ZAU currently supports the maxUnavailable and ExponentialFactor parameters to control how fast we want to update our pods.
We can set maxUnavailable as an integer or use the exponential mode, but that may not be enough in scenarios where the number of pods scales.
Example: 30 pods running.
If we set maxUnavailable to 3, that number becomes obsolete and too small as the number of pods grows.
If we set maxUnavailable to 10%, when the number of pods increases a lot (e.g. 2000), updating 200 pods at a time may be too fast.
It would be nice to have an upper bound limit so the percentage cannot exceed a certain amount.
One idea is to have something similar to the Kubernetes HPA, where you can configure different policies and the behavior that controls how they are applied: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
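Roughly, this is what I mean by an upper bound, as a minimal Go sketch. The percent and upperBound knobs here are hypothetical and do not correspond to existing ZAU settings:

```go
package main

import (
	"fmt"
	"math"
)

// effectiveMaxUnavailable caps a percentage-derived maxUnavailable at an
// absolute upper bound. Both percent and upperBound are hypothetical
// parameters used only to illustrate the idea.
func effectiveMaxUnavailable(replicas int, percent float64, upperBound int) int {
	fromPercent := int(math.Ceil(float64(replicas) * percent / 100.0))
	if upperBound > 0 && fromPercent > upperBound {
		return upperBound
	}
	return fromPercent
}

func main() {
	// 10% of 2000 replicas is 200, but a cap of 25 keeps the rollout gradual.
	fmt.Println(effectiveMaxUnavailable(2000, 10, 25)) // 25
	// With 30 replicas the percentage still drives the value.
	fmt.Println(effectiveMaxUnavailable(30, 10, 25)) // 3
}
```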
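For reference, the HPA combines several policies and picks one via selectPolicy; selecting the minimum of a percent policy and an absolute policy effectively gives the upper bound described above. A rough sketch of what similar config types could look like here (the UpdatePolicy/UpdateBehavior names and fields are assumptions, not an existing ZAU API):

```go
package main

import "fmt"

// Hypothetical config types loosely modeled on the HPA scaling policies;
// none of these names exist in ZAU today.
type UpdatePolicy struct {
	Type  string // "Pods" (absolute count) or "Percent"
	Value int
}

type UpdateBehavior struct {
	Policies     []UpdatePolicy
	SelectPolicy string // "Min" caps rollout speed, "Max" keeps the largest value
}

// resolveMaxUnavailable evaluates every policy against the current replica
// count and combines them with SelectPolicy, similar to how the HPA does it.
func resolveMaxUnavailable(b UpdateBehavior, replicas int) int {
	result := -1
	for _, p := range b.Policies {
		v := p.Value
		if p.Type == "Percent" {
			v = (replicas*p.Value + 99) / 100 // integer ceil of replicas * percent / 100
		}
		switch {
		case result == -1:
			result = v
		case b.SelectPolicy == "Min" && v < result:
			result = v
		case b.SelectPolicy == "Max" && v > result:
			result = v
		}
	}
	return result
}

func main() {
	behavior := UpdateBehavior{
		Policies: []UpdatePolicy{
			{Type: "Percent", Value: 10},
			{Type: "Pods", Value: 25},
		},
		SelectPolicy: "Min",
	}
	// 10% of 2000 is 200; selecting the minimum keeps the rollout at 25 pods.
	fmt.Println(resolveMaxUnavailable(behavior, 2000)) // 25
}
```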