Variable for setting CPU for HMS rw #206

Open · wants to merge 2 commits into master
Conversation

abhimanyugupta07 (Member)

📝 Description

🔗 Related Issues

@abhimanyugupta07 requested a review from a team as a code owner · October 29, 2021 15:31
@@ -236,9 +236,11 @@ resource "kubernetes_deployment" "apiary_hms_readwrite" {

resources {
  limits {
    cpu = "${var.hms_rw_cpu}m"
abhimanyugupta07 (Member, Author)

@barnharts4 @rpoluri can I reuse this variable? It is being used for ECS: https://github.com/ExpediaGroup/apiary-data-lake/blob/master/variables.tf#L302

Happy to create a new one if 512M is not compatible with EKS.
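
For reference, a minimal sketch of what the full resources block could look like with this variable wired in. The requests block and the comment about the 512 value are illustrative assumptions, not part of this diff:

```hcl
resources {
  limits {
    # The "m" suffix reads the value as millicores, so e.g. 512 becomes
    # "512m" (roughly half a vCPU), which lines up with 512 ECS CPU units
    # (1024 units = 1 vCPU).
    cpu = "${var.hms_rw_cpu}m"
  }

  # Illustrative assumption, not in this PR: a matching request would make
  # the scheduler reserve the CPU rather than only capping it.
  requests {
    cpu = "${var.hms_rw_cpu}m"
  }
}
```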

Contributor

A few comments:

  1. Whatever we do for RW, we should do for RO too (not necessarily the same limit, but at least assign the variable to the pod limits for k8s).
  2. By adding a limit here we are most likely decreasing the CPU available to this pod, since we have never set a limit before (https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/#if-you-do-not-specify-a-cpu-limit); with no limit, a container can use as much CPU as it wants from the node.
    1. I suppose this may not hold if the metastore namespace has a LimitRange assigned, but I don't think it does.
  3. I don't see any problem with re-using the same two vars between k8s and ECS, as long as we document it. However, the description in VARIABLES.md only covers ECS, so we should clarify what the value means when using k8s; a sketch of what that could look like follows this list.
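
A sketch of how the declarations could be documented for both platforms. The defaults and the hms_ro_cpu counterpart are assumptions here, not copied from the repo:

```hcl
# Sketch only: defaults and the read-only variable are illustrative.
variable "hms_rw_cpu" {
  description = "CPU for the read-write HMS container. On ECS this is in ECS CPU units (1024 = 1 vCPU); on k8s/EKS the same number is applied as a millicore limit, e.g. 512 -> 512m."
  default     = 512
}

variable "hms_ro_cpu" {
  description = "Same semantics as hms_rw_cpu, but for the read-only HMS container."
  default     = 512
}
```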

Contributor

(I also realize it is late on Friday night for you, so we can always leave the doc cleanup and other cleanup for a subsequent PR next week, since you probably just want to stabilize prod before the weekend.)

abhimanyugupta07 (Member, Author)

Thanks for sharing that link. I can't figure out what CPU is currently set on the HMS pods. In theory, it is possible that the pod gets scheduled with less CPU on the node, right?

We thought this could help based on what Cloudera says here: https://docs.cloudera.com/cloudera-manager/7.4.2/health-tests/topics/cm_ht_hive_metastore_server.html?#ariaid-title8

Contributor

> I can't figure out what CPU is currently set on the HMS pods.

I don't think any limit is being set right now. I ran `kubectl get limitrange --namespace metastore` for egdp-prod and there were none, so HMS pods should be able to use the complete CPU of an EKS worker node (in contention with all the other pods on that node, though).
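
If a namespace-wide default were ever wanted instead of per-deployment limits, a LimitRange could also be managed from Terraform. This is only an illustration with made-up names and values, not something this PR proposes:

```hcl
resource "kubernetes_limit_range" "metastore_defaults" {
  metadata {
    name      = "metastore-defaults" # hypothetical name
    namespace = "metastore"
  }

  spec {
    limit {
      type = "Container"

      # Applied only to containers that do not declare their own values.
      default = {
        cpu = "1000m"
      }
      default_request = {
        cpu = "500m"
      }
    }
  }
}
```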
