Commit: Add auto-config explaination to Collector
nslaughter committed Apr 22, 2024 · 1 parent d8d5062 · commit 5038809
Showing 1 changed file with 12 additions and 54 deletions: collector/README.md

Loading configuration from S3 will require that the IAM role attached to your function includes read access to the relevant bucket.
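As an illustration only, a policy granting that read access might be attached to the function's role in a CloudFormation template roughly like the sketch below; the bucket name, key, and policy name here are hypothetical placeholders, not values from this repository:

```yaml
# Sketch: grant the Lambda function's role read access to the collector
# config object in S3. Bucket/key/policy names are placeholders.
Policies:
  - PolicyName: ReadCollectorConfig
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action: s3:GetObject
          Resource: arn:aws:s3:::example-config-bucket/collector.yaml
```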
## Auto-Configuration
Using the batch processor without the decouple processor can lead to performance issues, so the OpenTelemetry Lambda Layer automatically adds the decouple processor to the end of the processor chain whenever the batch processor is used and the decouple processor is not.
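As a sketch of what this auto-configuration amounts to (the pipeline below is a hypothetical minimal example, not a required configuration), a pipeline that lists only the batch processor behaves as if the decouple processor had been appended:

```yaml
# What you write: batch processor only, decouple omitted.
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]

# Effective pipeline after the layer's auto-configuration:
# decouple is appended as the last processor.
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [otlp]
```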
## Improving Lambda response times
At the end of a lambda function's execution, the OpenTelemetry client libraries will flush any pending spans/metrics/logs
to the collector before returning control to the Lambda environment. The collector's pipelines are synchronous, which
means that the response of the lambda function is delayed until the data has been exported.
This delay can potentially be hundreds of milliseconds.
To overcome this problem the [decouple](./processor/decoupleprocessor/README.md) processor can be used to separate the
two ends of the collector's pipeline and allow the lambda function to complete while ensuring that any data is exported
before the Lambda environment is frozen.
Below is a sample configuration that uses the decouple processor:
```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  logging:
    loglevel: debug
  otlp:
    endpoint: { backend endpoint }

processors:
  decouple:

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [decouple]
      exporters: [logging, otlp]
```
As described in the Auto-Configuration section above, you don't need to add the decouple processor to your configuration manually.
## Reducing Lambda runtime
If your lambda function is invoked frequently it is also possible to pair the decouple processor with the batch
processor to reduce total lambda execution time at the expense of delaying the export of OpenTelemetry data.
When used with the batch processor the decouple processor must be the last processor in the pipeline to ensure that data
is successfully exported before the lambda environment is frozen.
An example use of the batch and decouple processors:
```yaml
receivers:
  otlp:
    protocols:
      grpc:

exporters:
  logging:
    loglevel: debug
  otlp:
    endpoint: { backend endpoint }

processors:
  decouple:
  batch:
    timeout: 5m

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [logging, otlp]
```
As stated in the Auto-Configuration section, the OpenTelemetry Lambda Layer will automatically add the decouple processor to the end of the processor chain if the batch processor is used and the decouple processor is not. The result is the same whether you configure it manually or not.
