Memory thrashing / nodes go Unready #1098

Closed
maximethebault opened this issue Nov 11, 2022 · 1 comment

Comments

@maximethebault

What happened:
We use Karpenter to provision EKS AL2 nodes backed by this AMI.
When the pods' memory usage gets close to the node's allocatable capacity, the node goes into memory thrashing, with the same symptoms and consequences as described in aws/karpenter-provider-aws#2129.
We saw this happen on an r6id.xlarge instance type with standard provisioner settings and standard usage; about 30 pods were scheduled on the node at the time.

What you expected to happen:
kube-reserved, system-reserved, and the hard eviction threshold configured so that the kubelet evicts pods before the node starts thrashing and goes Unready.
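
To make the margin concrete, here is a rough back-of-the-envelope sketch of the memory budget on an r6id.xlarge. The 255 MiB + 11 MiB-per-max-pod kube-reserved formula, the 58 max-pods figure, and the 100Mi hard-eviction threshold are assumptions about typical AMI defaults, not values read from the affected node; check bootstrap.sh and the generated kubelet config on an actual instance.

```python
# Rough memory-budget check for an r6id.xlarge (32 GiB).
# All reserved/eviction numbers below are assumed defaults, not measured values.
MIB = 1024 * 1024

instance_memory = 32 * 1024 * MIB            # r6id.xlarge advertised memory
max_pods = 58                                # assumed ENI-based max pods for an xlarge
kube_reserved = (255 + 11 * max_pods) * MIB  # assumed bootstrap.sh formula -> 893 MiB
eviction_hard = 100 * MIB                    # assumed memory.available<100Mi default

allocatable = instance_memory - kube_reserved - eviction_hard
print(f"kube-reserved : {kube_reserved // MIB} MiB")
print(f"allocatable   : {allocatable // MIB} MiB")
# If the kernel, kubelet, and other system daemons actually need more than
# kube-reserved, or if a pod's working set grows faster than the kubelet's
# eviction loop reacts, the node can start thrashing before eviction fires.
```

Under those assumptions there is only about 100 MiB between the eviction threshold and genuine memory exhaustion, so a fast-growing pod can push the node into thrashing before the kubelet has a chance to evict it.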

How to reproduce it (as minimally and precisely as possible):
Let Karpenter provision an r6id.xlarge node for a pod whose memory request is close to the allocatable capacity.
Make the memory used by this pod grow (see the sketch below for one way to do this).
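
For the second step, one hypothetical way to grow a pod's memory usage is a small loop that allocates and holds memory in fixed-size chunks; the 100 MiB step and ~28 GiB target below are arbitrary illustration values and would need to be set close to the node's actual allocatable.

```python
# Hypothetical memory-growth loop to run inside the pod.
import time

MIB = 1024 * 1024
step_mib = 100
target_mib = 28 * 1024   # pick something close to the node's allocatable

chunks = []
while len(chunks) * step_mib < target_mib:
    # b"x" * n writes every byte, so the pages are actually touched
    # and the container's resident set really grows.
    chunks.append(b"x" * (step_mib * MIB))
    time.sleep(1)

print(f"holding ~{len(chunks) * step_mib} MiB")
time.sleep(3600)  # keep the memory resident so the node stays under pressure
```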

Anything else we need to know?:

Environment:

  • AWS Region: eu-west-1
  • Instance Type(s): r6id.xlarge
  • EKS Platform version: eks.11
  • Kubernetes version: 1.21
  • AMI Version: all recent AMI releases (observed with both v20220824 and v20221101)
@maximethebault
Author

Closed in favor of #1145
