Queue full #69
Comments
When graphite-remote-adapter is unable to send metrics to Graphite for any reason, its internal queue fills up and it stops accepting new metrics. Prometheus will then start dropping samples so that its own remote-write queue does not fill up.
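As a hedged sketch only: if the adapter or Graphite is temporarily slow, the Prometheus remote-write queue can be given more headroom via `queue_config`. The URL, port, and values below are illustrative placeholders (the `graphite.default-prefix` parameter is taken from the log message later in this issue), not recommendations for this setup.

```yaml
# prometheus.yml - illustrative remote_write tuning, values are placeholders
remote_write:
  - url: "http://graphite-remote-adapter:9201/write?graphite.default-prefix=kube_poly_"
    queue_config:
      capacity: 10000            # samples buffered per shard before dropping
      max_shards: 30             # upper bound on parallel senders
      max_samples_per_send: 500  # batch size per request
      batch_send_deadline: 5s    # flush partial batches after this long
```

Raising these values only buys time; if Graphite ingests more slowly than Prometheus produces samples, the queue will still eventually fill.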
For example, I currently use one remote address for all of my adapters (5 of them). This is my config:
This appears periodically in the Prometheus log:
Are you sure Graphite itself is not having any issues? It could be slow at ingesting the samples.
The adapter instances are not resource-limited at the moment. There are 5 of them, and each consumes about 0.1 CPU cores and 700 MB of RAM. Graphite was deployed through docker-compose.
What does the resource usage of Graphite look like? Any errors on the Graphite side?
Graphite is about 40% loaded; today we will deploy it as a cluster and try writing to that.
Hello, I have a problem with the queue in Prometheus + graphite-remote-adapter:
```
level=warn ts=2019-12-10T08:31:54.018127762Z caller=queue_manager.go:230 component=remote queue="0:http://***/write?graphite.default-prefix=kube_poly_ " msg="Remote storage queue full, discarding sample. Multiple subsequent messages of this kind may be suppressed."
```
The Prometheus and adapter configs are the defaults.
Only about 10% of the metrics from 70 machines get through.
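As a hedged sketch of how to make this visible, one could alert on Prometheus's own remote-write metrics. Metric names differ between Prometheus versions (the name below assumes a 2019-era 2.x release), so treat this rule as illustrative only.

```yaml
# rules.yml - illustrative alert on dropped remote-write samples
groups:
  - name: remote-write
    rules:
      - alert: RemoteWriteDroppingSamples
        # fires when Prometheus is discarding samples because the queue is full
        expr: rate(prometheus_remote_storage_dropped_samples_total[5m]) > 0
        for: 10m
        labels:
          severity: warning
        annotations:
          summary: "Remote-write queue is full and dropping samples"
```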