Increasing the receive size doesn't reduce the number of inserts because the kafka receiver forwards each message individually to the next consumer in the pipeline. The ideal implementation would combine the spans/logs/metrics from individual messages into one large ResourceSpans/ResourceLogs/ResourceMetrics payload by appending them before sending it down the pipeline.
The batch processor queues the item and returns immediately, and the Kafka receiver then marks the message as consumed even though it has not yet been written to storage. This risks data loss if the collector crashes or the storage backend is unavailable for an extended period. A message should only be marked as consumed after ClickHouse confirms the data has been written successfully.
Referenced code: signoz-otel-collector/receiver/signozkafkareceiver/kafka_receiver.go, lines 470 to 502 at commit c6f81e7.