Add doc for diagnosing backpressure from Elasticsearch (#4097)
* initial apm-es-backpressure doc draft
* address review comments
* fix internal doc references
* address review comments
* es backpressure troubleshoot doc fmt fix
* address comments
* fix doc typo
* add not recommended banner
docs/en/observability/apm/apm-performance-diagnostic.asciidoc
[[apm-performance-diagnostic]]
=== APM Server performance diagnostic

[[apm-es-backpressure]]
[float]
==== Diagnosing backpressure from {es}
When {es} is under excessive load or indexing pressure, APM Server may experience downstream backpressure when indexing new documents into {es}.
Most commonly, backpressure from {es} manifests as higher indexing latency and/or rejected requests, which in turn can lead APM Server to deny incoming requests.
As a result, APM agents connected to the affected APM Server may experience throttling and/or request timeouts when shipping APM events.

To quickly identify possible issues, look for error log lines in the APM Server logs similar to the following:
[source,json]
----
...
{"log.level":"error","@timestamp":"2024-07-27T23:46:28.529Z","log.origin":{"function":"github.com/elastic/go-docappender/v2.(*Appender).flush","file.name":"[email protected]/appender.go","file.line":370},"message":"bulk indexing request failed","service.name":"apm-server","error":{"message":"flush failed (429): [429 Too Many Requests]"},"ecs.version":"1.6.0"}
{"log.level":"error","@timestamp":"2024-07-27T23:55:38.612Z","log.origin":{"function":"github.com/elastic/go-docappender/v2.(*Appender).flush","file.name":"[email protected]/appender.go","file.line":370},"message":"bulk indexing request failed","service.name":"apm-server","error":{"message":"flush failed (503): [503 Service Unavailable]"},"ecs.version":"1.6.0"}
...
----
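Because these logs are ECS-formatted JSON (one object per line), failures are easy to tally programmatically. The following is a minimal sketch, not part of the APM Server tooling; the field layout is taken from the sample lines above, and the log file path in the usage comment is a hypothetical placeholder:

```python
import json
import re
from collections import Counter

def tally_bulk_failures(lines):
    """Count "bulk indexing request failed" log entries by HTTP status code."""
    counts = Counter()
    for line in lines:
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON lines (e.g. the "..." truncation markers)
        if entry.get("message") != "bulk indexing request failed":
            continue
        # Error messages look like: "flush failed (429): [429 Too Many Requests]"
        match = re.search(r"flush failed \((\d{3})\)",
                          entry.get("error", {}).get("message", ""))
        counts[match.group(1) if match else "unknown"] += 1
    return counts

# Hypothetical usage:
# with open("apm-server.log") as f:
#     print(tally_bulk_failures(f))
```

A sustained stream of `429` counts points at {es} pushing back, while `503` counts suggest {es} is unavailable or overloaded.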

To gain better insight into APM Server health and performance, consider enabling the monitoring feature by following the steps in <<apm-monitor-apm,Monitor APM Server>>.
When enabled, APM Server additionally reports a set of vital metrics to help you identify any performance degradation.

Pay careful attention to the following metric fields:

* `beats_stats.metrics.libbeat.output.events.active`: the number of buffered pending documents waiting to be ingested
(_if this value is increasing rapidly, it may indicate {es} backpressure_)
* `beats_stats.metrics.libbeat.output.events.acked`: the total number of documents that have been ingested successfully
* `beats_stats.metrics.libbeat.output.events.failed`: the total number of documents that failed to ingest
(_if this value is increasing rapidly, it may indicate {es} backpressure_)
* `beats_stats.metrics.libbeat.output.events.toomany`: the number of documents that failed to ingest because {es} responded with 429 Too Many Requests
(_if this value is increasing rapidly, it may indicate {es} backpressure_)
* `beats_stats.output.elasticsearch.bulk_requests.available`: the number of bulk indexers available for making bulk index requests
(_if this value is equal to 0, it may indicate {es} backpressure_)
* `beats_stats.output.elasticsearch.bulk_requests.completed`: the number of already completed bulk requests
* `beats_stats.metrics.output.elasticsearch.indexers.active`: the number of active bulk indexers that are concurrently processing batches
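As a rough illustration of how these signals combine, the sketch below compares two monitoring snapshots taken some interval apart and reports which backpressure indicators fired. The flattened field names and the threshold logic are simplifications for illustration, not an official health check:

```python
def backpressure_indicators(prev, curr):
    """Compare two monitoring snapshots (dicts of metric name -> value)
    and return the list of backpressure indicators that fired."""
    reasons = []
    # Buffered pending documents growing between samples
    if curr["libbeat.output.events.active"] > prev["libbeat.output.events.active"]:
        reasons.append("pending documents are accumulating")
    # Failures or 429 responses increasing between samples
    if curr["libbeat.output.events.failed"] > prev["libbeat.output.events.failed"]:
        reasons.append("failed documents are increasing")
    if curr["libbeat.output.events.toomany"] > prev["libbeat.output.events.toomany"]:
        reasons.append("429 Too Many Requests responses are increasing")
    # No bulk indexers left to make bulk index requests
    if curr["output.elasticsearch.bulk_requests.available"] == 0:
        reasons.append("no bulk indexers available")
    return reasons
```

Any single indicator can be transient; several firing together across consecutive samples is a stronger sign of sustained {es} backpressure.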

See the {metricbeat-ref}/exported-fields-beat.html[{metricbeat} documentation] for the full list of exported metric fields.

One likely cause of excessive indexing pressure or rejected requests is an undersized {es} deployment. To mitigate this, follow the guidance in {ref}/rejected-requests.html[Rejected requests].

(Not recommended) If scaling {es} resources up is not an option, you can adjust the `flush_bytes`, `flush_interval`, `max_retries`, and `timeout` settings described in <<apm-elasticsearch-output,Configure the Elasticsearch output>> to reduce APM Server indexing pressure. However, keep in mind that increasing the number of buffered documents and/or reducing retries may lead to a higher rate of dropped APM events. The custom configuration below roughly doubles the default number of buffered documents while decreasing the number of {es} indexing retries. It is a generic example and might not be applicable to your situation; adjust the settings further to see what works for you.

[source,yaml]
----
output.elasticsearch:
  flush_bytes: "2MB" # double the default value
  flush_interval: "2s" # double the default value
  max_retries: 1 # decrease the default value
  timeout: 60 # decrease the default value
----
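To confirm the picture from the {es} side, the `_nodes/stats/indexing_pressure` API exposes per-node indexing pressure counters. The sketch below is illustrative only (the endpoint is part of {es}, but the helper function, the local unsecured URL, and the choice of fields are assumptions for this example); it summarizes current indexing-pressure memory and primary rejections per node:

```python
import json
import urllib.request

def summarize_indexing_pressure(stats):
    """Summarize a _nodes/stats/indexing_pressure response: per node,
    current indexing-pressure memory and total primary rejections."""
    summary = {}
    for node_id, node in stats.get("nodes", {}).items():
        memory = node.get("indexing_pressure", {}).get("memory", {})
        summary[node.get("name", node_id)] = {
            "current_all_bytes": memory.get("current", {}).get("all_in_bytes", 0),
            "primary_rejections": memory.get("total", {}).get("primary_rejections", 0),
        }
    return summary

# Illustrative usage against a local, unsecured cluster:
# with urllib.request.urlopen("http://localhost:9200/_nodes/stats/indexing_pressure") as resp:
#     print(summarize_indexing_pressure(json.load(resp)))
```

Growing `primary_rejections` on {es} nodes corresponds to the 429 responses seen by APM Server and supports scaling {es} rather than tuning the output settings above.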