Feature Request: Sleep when no requests are received for a certain period of time #4
I'd like to open this feature request to track and discuss the previously raised potential feature of sending the deployment(s) back to sleep.
Why?
Currently deployments will stay awake until the next "bedtime" after being awoken once. This behaviour is fine if the user is just very early, but not desired if the user only works late hours.
Considerations
I guess snorlax needs a way to be in between the ingress and the deployment? A pod that just proxies all the traffic comes to mind for that.
Other
This feature might also be helpful with not going to sleep if the deployment is still in use (receiving some amount of traffic), even if it is bedtime.
Comments
For our environment and use case (which is why this is a comment and not in the issue description above 😄), an additional pod is not very desirable, as it would be running all the time and effectively cost us money. I think this could be mitigated by making the proxy pod extensible in some way. We're already running an nginx deployment per app instance, so putting that and the snorlax-proxy together, however that might look, would be handy for us and maybe for others too. But perhaps there's a better solution to monitor ingress traffic that doesn't involve a proxy pod at all.
I see 2 parts here: a kind of undesired behaviour (1) and a feature request (2).
I can see how the implementation could cover both at once, but in case a choice has to be made about the order, I think the above would make sense.
Yeah, this is something I've been thinking about. A sidecar proxy sounds like it could work, and being able to sense inactivity to allow for more configuration would be useful. Another possibility would be to parse default ingress controller logs to try to support the general case at first, though one concern would be an ingress controller that's spitting out a lot of logs. An example ingress-nginx log line is below; it contains the request line, status, and upstream details, among other fields.
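The original log line didn't survive extraction; below is a reconstructed example assuming ingress-nginx's default upstreaminfo log format (the addresses, path, and upstream name are made up for illustration):

```
192.168.10.42 - - [12/May/2024:09:15:32 +0000] "GET /api/items HTTP/1.1" 200 1024 "-" "Mozilla/5.0" 412 0.004 [default-myapp-80] [] 10.244.1.17:8080 1024 0.004 200 7f9c2a1d4e
```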
While this could indeed provide a plug&play solution, I think it would probably scale horribly. Something like that could be implemented if the logs are queried from something like Loki or CloudWatch, but then we're back at requiring tooling/software that might not be available in the cluster -> not really generic.
I tend to agree re: scale if an ingress controller handles a lot of traffic, but I'm not quite sure where that threshold is. Datadog agents are able to collect and send all logs that a busy ingress controller spits out, so maybe it's not too bad. Re: the sidecar proxy, two things to think about are:
🤔
Another way I'm considering is integrating with some service mesh (probably linkerd to start with). That way Snorlax would be able to query Prometheus with something like:
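The query itself didn't make it into the thread. As an illustration only (not the author's actual query), assuming Linkerd's request_total proxy metric as scraped by the linkerd-viz Prometheus, with placeholder namespace and deployment names, it might look something like:

```promql
sum(increase(request_total{direction="inbound", namespace="default", deployment="myapp"}[30m]))
```

If this returns 0 over the lookback window, the workload has received no inbound requests and could be put back to sleep.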
So the changes would be:
One part that I like about this design is that users only need to add the linkerd annotation on their workloads, and we let linkerd handle the sidecar / network proxying.
Update: one complication with ^ is that there are health check requests (e.g. the ELB health check) which would be counted.
This sounds like an excellent idea. How about, for a first implementation, making the Prometheus query configurable? If applications already export request-count metrics, these could simply be used; no need for any proxy in that case.
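A sketch of what that might look like as configuration. Everything here is hypothetical; the field names, metric, and resource shape are made up for illustration and are not Snorlax's actual API:

```yaml
# Hypothetical: these spec fields do not exist in Snorlax today.
kind: SleepSchedule
metadata:
  name: myapp
spec:
  requestActivity:
    prometheusURL: http://prometheus.monitoring:9090
    # User-supplied PromQL returning the recent request count for the app;
    # useful when the app already exports its own request metrics.
    query: sum(increase(http_requests_total{app="myapp"}[5m]))
    # Put the deployment back to sleep once the query has returned 0
    # for this long.
    inactivityPeriod: 30m
```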
Yeah, it'd be a nice integration, but we would not be able to filter out health check requests from the request counts. In the environment I have snorlax deployed to, the deployments get HTTP checks from the AWS ALB load balancer. I've actually started working on the sidecar proxy; I feel more comfortable with that idea if a library is doing the heavy lifting, since there are not-often-used HTTP edge cases I would rather not have to consider. I'm aiming to get it done sometime within the next week.
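For what it's worth, a minimal sketch of that kind of sidecar using Go's standard-library httputil.ReverseProxy. The ports, the /healthz health-check path, and the /last-request-time endpoint are assumptions for illustration, not what Snorlax actually ships:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"strconv"
	"sync/atomic"
	"time"
)

func main() {
	// Upstream is the application container in the same pod (assumed port).
	upstream, err := url.Parse("http://127.0.0.1:8080")
	if err != nil {
		log.Fatal(err)
	}
	proxy := httputil.NewSingleHostReverseProxy(upstream)

	var lastRequest atomic.Int64
	lastRequest.Store(time.Now().Unix())

	mux := http.NewServeMux()

	// Endpoint the operator could poll to decide whether the workload
	// has been idle long enough to put back to sleep.
	mux.HandleFunc("/last-request-time", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte(strconv.FormatInt(lastRequest.Load(), 10)))
	})

	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Don't count load-balancer health checks as user activity
		// (the path is an assumption, e.g. an ALB target group check).
		if r.URL.Path != "/healthz" {
			lastRequest.Store(time.Now().Unix())
		}
		proxy.ServeHTTP(w, r)
	})

	// The sidecar listens on its own port; the Service would point here.
	log.Fatal(http.ListenAndServe(":9090", mux))
}
```

The operator could then poll /last-request-time and scale the deployment down once the timestamp is older than the configured inactivity window.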
Anything new on this feature? I've been evaluating options like what you have here in snorlax (great start, btw), and this particular feature would be key for our purposes.
I started testing different ways to build this, but stalled on finding an implementation I was happy with. So this feature is on hold for now.