
Allow customizing the parameters in the kube_reserve_memory() function #4187

Open
diranged opened this issue Sep 11, 2024 · 3 comments
Labels
status/needs-triage (Pending triage or re-evaluation), type/enhancement (New feature or request)

Comments

@diranged

What I'd like:

We have a tendency to over-subscribe our nodes, allowing pods to have higher limits.memory settings than their requests.memory. We expect (or hope) that processes inside containers will be OOMKilled before critical system processes like the kubelet, but in our experience the kubelet often receives OOMKill events first.

I know that we can hard-code a new memory reservation for the kubelet process, but what I really want is the ability to alter the parameters that go into this calculation in kube_reserve_memory():

```rust
Value::Null => {
    format!("{}Mi", (max_num_pods * 11 + 255))
}
```

I would like parameters that allow us to change the 11 and 255 numbers, while retaining the general dynamic calculation. That would allow us to tweak these numbers to fit our environment, without losing the dynamic nature of the configuration.
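
As a minimal sketch of what I have in mind (the parameter names here are purely illustrative, not a proposal for actual setting paths), the two hard-coded numbers would become optional inputs that default to today's values:

```rust
// Hypothetical sketch: per_pod_mi and base_mi would come from new settings,
// falling back to the current hard-coded 11 and 255 when unset.
fn kube_reserve_memory(max_num_pods: u32, per_pod_mi: Option<u32>, base_mi: Option<u32>) -> String {
    let per_pod = per_pod_mi.unwrap_or(11); // MiB reserved per pod (today's fixed value)
    let base = base_mi.unwrap_or(255); // MiB base reservation (today's fixed value)
    format!("{}Mi", max_num_pods * per_pod + base)
}
```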

Any alternatives you've considered:

I have considered trying to do this in a bootstrap container, but I'd hate to do all that work when a few parameters added here would make this easy.

@diranged added the status/needs-triage and type/enhancement labels on Sep 11, 2024
@stevehipwell

@diranged this looks to be a duplicate of #1721.

@diranged (Author) commented Oct 4, 2024

It's not really the same, because that proposal would perhaps let us change max_num_pods, but not the fundamental calculation of how much memory we want to reserve per pod. I want the 11 and 255 values to be customizable.

@stevehipwell

@diranged #1721 covers more than its title suggests, but I can see how this change could be related or could stand on its own. The main point here is that API changes for kube-reserved should consider all of the requested functionality before being implemented.

Also, I'm interested in whether you've tried setting max-pods to 110, the upstream Kubernetes default maximum per node, and then defining a fixed kube-reserved value to resolve your issue?
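
For reference, a minimal user-data sketch of that workaround, assuming the existing settings.kubernetes.max-pods and settings.kubernetes.kube-reserved settings; the memory value is only an example and should be sized for your nodes:

```toml
[settings.kubernetes]
max-pods = 110

[settings.kubernetes.kube-reserved]
# Example only: 110 * 11 + 255 = 1465, i.e. what the current dynamic formula yields at max-pods = 110.
memory = "1465Mi"
```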
