This repository has been archived by the owner on Jan 27, 2021. It is now read-only.
support cpu/gpu consumption metrics as well in addition to request count #14
Labels
feature request
Requests or implements improvements that are new features
Thanks for releasing this useful tool :)
What would you like to be added?
Currently it seems that the only metric supported is request count. Is there any plan to also monitor CPU and GPU resource consumption, in addition to request count, when deciding whether a given pod is idle?
Why is this needed?
A user might simply submit a job that then runs for hours before they check on its status again, which is common in ML model training. Taking CPU/GPU consumption into account would avoid killing the pod while such an analysis is still running, even though no requests are arriving.
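For illustration, here is a minimal sketch (not this project's actual code) of how idleness could be judged from CPU and GPU consumption rather than request count alone. It assumes metrics-server is installed (the `metrics.k8s.io` API), that GPU utilisation is exported by NVIDIA's DCGM exporter and scraped by Prometheus with `namespace`/`pod` labels, and that `PROM_URL` and the two thresholds are hypothetical values:

```python
# Sketch only: decide pod idleness from CPU and GPU usage instead of request count.
import requests
from kubernetes import client, config

PROM_URL = "http://prometheus.monitoring:9090"   # hypothetical Prometheus address
CPU_IDLE_MILLICORES = 50                          # hypothetical idle threshold
GPU_IDLE_UTIL_PERCENT = 5                         # hypothetical idle threshold


def pod_cpu_millicores(namespace: str, pod: str) -> float:
    """Sum container CPU usage for a pod from the metrics.k8s.io API."""
    api = client.CustomObjectsApi()
    m = api.get_namespaced_custom_object(
        group="metrics.k8s.io", version="v1beta1",
        namespace=namespace, plural="pods", name=pod,
    )
    total = 0.0
    for c in m["containers"]:
        cpu = c["usage"]["cpu"]          # e.g. "123456n", "12m" or "1"
        if cpu.endswith("n"):
            total += float(cpu[:-1]) / 1e6   # nanocores -> millicores
        elif cpu.endswith("u"):
            total += float(cpu[:-1]) / 1e3   # microcores -> millicores
        elif cpu.endswith("m"):
            total += float(cpu[:-1])
        else:
            total += float(cpu) * 1000       # whole cores -> millicores
    return total


def pod_gpu_util(namespace: str, pod: str) -> float:
    """Max GPU utilisation (%) for a pod, queried from Prometheus/DCGM exporter."""
    q = f'max(DCGM_FI_DEV_GPU_UTIL{{namespace="{namespace}",pod="{pod}"}})'
    r = requests.get(f"{PROM_URL}/api/v1/query", params={"query": q}, timeout=10)
    result = r.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


def pod_is_idle(namespace: str, pod: str) -> bool:
    """Treat a pod as idle only if both CPU and GPU usage are below thresholds."""
    return (pod_cpu_millicores(namespace, pod) < CPU_IDLE_MILLICORES
            and pod_gpu_util(namespace, pod) < GPU_IDLE_UTIL_PERCENT)


if __name__ == "__main__":
    config.load_kube_config()    # or load_incluster_config() when running in-cluster
    print(pod_is_idle("default", "training-job-0"))
```

With something along these lines, a long-running training pod that receives no requests but keeps a GPU busy would not be considered idle.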