Variable for setting CPU for HMS rw #206
base: master
Conversation
@@ -236,9 +236,11 @@ resource "kubernetes_deployment" "apiary_hms_readwrite" {
        resources {
          limits {
            cpu = "${var.hms_rw_cpu}m"
@barnharts4 @rpoluri can I reuse this variable? It is already being used for ECS: https://github.com/ExpediaGroup/apiary-data-lake/blob/master/variables.tf#L302
Happy to create a new one if 512M is not compatible with EKS.
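For reference, a minimal sketch of how a shared CPU variable could be declared so that its description covers both ECS and EKS. The name mirrors the existing hms_rw_cpu variable linked above, but the type, default, and wording are illustrative assumptions rather than the actual definition in apiary-data-lake:

```hcl
# Illustrative only: the real declaration lives in variables.tf of apiary-data-lake.
variable "hms_rw_cpu" {
  description = "CPU for the read-write HMS. Interpreted as ECS CPU units on ECS and as millicores (value suffixed with \"m\") on EKS."
  type        = number
  default     = 512   # assumed default, shown for illustration only
}
```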
A few comments:
- Whatever you do for RW, we should do for RO too (not the same limit necessarily, but at least assign the variable to the pod limits for k8s).
- By adding a limit here, we are most likely decreasing the CPU available to this pod, since we have never set a limit before (https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#if-you-do-not-specify-a-cpu-limit), and no limit means it can use as much CPU as it wants from the node.
  - I suppose if the metastore namespace has a LimitRange assigned, this may not be true, but I don't think it does (a sketch of what that could look like follows this list).
- I don't see any problem with re-using the same two vars between k8s and ECS, as long as we document it. However, I notice the description in VARIABLES.md only covers ECS, so we probably need to clarify what the value can be when using k8s.
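As mentioned in the second bullet, a LimitRange on the metastore namespace would change the no-limit behaviour. For illustration, a hedged sketch of what such a LimitRange could look like if managed with the Terraform Kubernetes provider; the resource name and values are assumptions, and nothing like this exists in the namespace today:

```hcl
# Hypothetical example: no LimitRange is currently assigned to the metastore namespace.
resource "kubernetes_limit_range" "metastore_defaults" {
  metadata {
    name      = "metastore-defaults"   # illustrative name
    namespace = "metastore"
  }

  spec {
    limit {
      type = "Container"

      # Containers that do not declare their own resources would inherit these defaults.
      default = {
        cpu = "1000m"
      }
      default_request = {
        cpu = "250m"
      }
    }
  }
}
```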
(I also realize it is late on Friday night for you, so we can always leave the doc cleanup and other cleanup for a subsequent PR next week, since you probably just want to stabilize prod before the weekend.)
Thanks for sharing that link. I can't figure out what CPU is currently being set on the HMS pods. In theory, it is possible that the pod gets scheduled with less CPU on the node, right?
We thought this could help based on what Cloudera says here: https://docs.cloudera.com/cloudera-manager/7.4.2/health-tests/topics/cm_ht_hive_metastore_server.html?#ariaid-title8
> I can't figure out what CPU is currently being set on the HMS pods.

I don't think any limit is being set right now. I ran kubectl get limitrange --namespace metastore for egdp-prod and there were none, so HMS pods should be able to use the complete CPU of an EKS worker node (in contention with all the other pods on that node, though).