We have a tendency to over-subscribe our nodes, allowing pods to have higher `limits.memory` settings than their `requests.memory`. We expect (and hope) that processes inside containers will be OOMKilled before critical system processes like `kubelet` are, but our experience shows that `kubelet` often gets OOMKill events first.
I know that we can hard-code a new memory reservation for the kubelet process, but what I really want is to be able to alter the parameters that go into the dynamic calculation in `bottlerocket/sources/api/schnauzer/src/helpers.rs` (lines 1079 to 1081 in a63007c).
I would like parameters that allow us to change the `11` and `255` numbers while retaining the general dynamic calculation. That would let us tweak these numbers to fit our environment without losing the dynamic nature of the configuration.
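To make the request concrete, here is a minimal sketch of what a parameterized version of that calculation could look like. This assumes the current formula is `max_pods * 11 + 255` MiB, per the helpers.rs lines referenced above; the struct name, field names, and defaults are hypothetical, not Bottlerocket API.

```rust
// Hypothetical sketch of a parameterized kube-reserved memory calculation.
// The current helper hard-codes 11 (MiB per pod) and 255 (MiB base); the
// request is to make both configurable while keeping the dynamic formula.

struct KubeReservedParams {
    mib_per_pod: u64, // currently hard-coded as 11
    base_mib: u64,    // currently hard-coded as 255
}

impl Default for KubeReservedParams {
    fn default() -> Self {
        Self {
            mib_per_pod: 11,
            base_mib: 255,
        }
    }
}

/// Returns the kube-reserved memory string for a given max pods setting.
fn kube_reserved_memory(max_pods: u64, params: &KubeReservedParams) -> String {
    format!("{}Mi", max_pods * params.mib_per_pod + params.base_mib)
}

fn main() {
    // Default parameters reproduce today's behavior: 110 * 11 + 255 = 1465Mi.
    let defaults = KubeReservedParams::default();
    println!("{}", kube_reserved_memory(110, &defaults));

    // Tuned parameters for an over-subscribed environment: 110 * 16 + 512 = 2272Mi.
    let tuned = KubeReservedParams {
        mib_per_pod: 16,
        base_mib: 512,
    };
    println!("{}", kube_reserved_memory(110, &tuned));
}
```

The point is that only the two constants become settings; the shape of the calculation stays exactly as it is today.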
Any alternatives you've considered:
I have considered trying to do this in a bootstrap container... I just hate to do all that work when I think a few parameters could be added in here and this could then be easy.
It's not really the same, because that PR would allow us to change `max_num_pods`, but not the fundamental calculation of how much memory we want to reserve per pod. I want to see the `11` and `255` settings become customizable.
@diranged #1721 covers more than the title suggests, but I can see how this change could be related or independent. I think the main thing here is that API changes for kube-reserved should consider all of the requested functionality before being implemented.
Also, I'm interested whether you've attempted to set max pods to 110 (the upstream Kubernetes maximum) and then defined a fixed kube-reserved value to resolve your issue.
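For reference, that fixed-value workaround might look roughly like this in Bottlerocket user data (a sketch; setting names should be verified against the settings reference for your Bottlerocket version, and the values are illustrative only):

```toml
# Hypothetical user-data sketch: pin max pods to the upstream Kubernetes
# maximum and hard-code a kube-reserved memory value instead of relying
# on the dynamic calculation.
[settings.kubernetes]
max-pods = 110

[settings.kubernetes.kube-reserved]
# The dynamic formula would give 110 * 11 MiB + 255 MiB = 1465 MiB;
# a fixed value can be padded upward for an over-subscribed node.
memory = "2Gi"
```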