feat(chart): expose health port via svc #6037
Conversation
✅ Deploy Preview for karpenter-docs-prod canceled.
LGTM 🚀
/karpenter snapshot
Pull Request Test Coverage Report for Build 8720520076

Warning: this coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch, so the report includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

💛 - Coveralls
Snapshot successfully published to
What use case does this meet? It's a bit odd to me that we would load-balance a health port.
I'd like to monitor the health/availability of Karpenter using the Blackbox Exporter / Prometheus Probes.
Is it not possible to scrape the pods directly?
Sure, it's possible, but pods are ephemeral, and scraping them directly would mean using Endpoints to dynamically update our probe config at regular intervals. Probing the Service instead gives a stable endpoint and abstracts away the individual pods. I'm setting up SLOs, as opposed to alerting (which we already have at the pod level). We have the same setup for 20+ other services on our clusters (hoping for some consistency), but Karpenter is the exception because it does not currently expose its health port via the Service. It's worth noting that Prometheus Probes only accept static targets or Ingress objects, so Endpoints would not work in this scenario.
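For illustration, a Probe resource pointed at the Service would look roughly like this (a sketch assuming the Prometheus Operator's monitoring.coreos.com/v1 Probe CRD and a blackbox-exporter at the URL shown; the names and namespaces are illustrative, not taken from this PR):

```yaml
# Sketch: probe Karpenter's health endpoint through its Service rather than
# through individual pods. Prober URL, module, and namespaces are assumptions.
apiVersion: monitoring.coreos.com/v1
kind: Probe
metadata:
  name: karpenter-health
  namespace: monitoring
spec:
  prober:
    # Address of a blackbox-exporter instance (illustrative)
    url: blackbox-exporter.monitoring.svc.cluster.local:9115
  module: http_2xx
  targets:
    staticConfig:
      static:
        # Stable Service endpoint this PR would expose
        - http://karpenter.dev.svc.cluster.local:8081/healthz
```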
Just so I have a little more context: can you point to a few upstream projects that expose their health ports through their Service?
Can you do this through kube-state? Doesn't the health of the pod get exposed back up through the pod's status conditions? Or is there something you gain from the health probe port that isn't exposed through kube-state?
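To make that alternative concrete, readiness from pod status conditions could be aggregated with a rule like the one below (a sketch; the metric and label names assume a standard kube-state-metrics install, and the namespace/pod selectors are illustrative):

```yaml
# Sketch: derive Karpenter availability from pod status conditions exported
# by kube-state-metrics, instead of probing the health port via a Service.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: karpenter-availability
  namespace: monitoring
spec:
  groups:
    - name: karpenter.availability
      rules:
        - record: karpenter:pods_ready:count
          expr: |
            sum(kube_pod_status_ready{namespace="karpenter", condition="true", pod=~"karpenter-.*"})
```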
This PR has been inactive for 14 days. StaleBot will close this stale PR after 14 more days of inactivity.
Description
Expose health port via Service
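Roughly, the rendered Service gains an additional port entry (a sketch of the intended result; the exact port names, numbers, and template keys in the chart may differ from what this PR actually changes):

```yaml
# Sketch of the karpenter Service with the health port exposed alongside
# the metrics port. Port names and numbers are assumptions, not the PR diff.
apiVersion: v1
kind: Service
metadata:
  name: karpenter
  namespace: dev
spec:
  selector:
    app.kubernetes.io/name: karpenter
  ports:
    - name: http-metrics
      port: 8000
      targetPort: http-metrics
      protocol: TCP
    - name: http
      port: 8081
      targetPort: http
      protocol: TCP
```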
How was this change tested?
Tested on a k8s cluster; allows reaching karpenter.dev.svc.cluster.local:8081/healthz.
Does this change impact docs?
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.