
ADOT lambda layer with trace exporter adds ~130 ms to billed duration for each lambda invocation #493

Closed
zaharsantarovich opened this issue Feb 14, 2023 · 3 comments


zaharsantarovich commented Feb 14, 2023

I have an empty hello-world .NET 6 Lambda with the v0.68 ADOT Lambda layer.
collector.yaml:

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317
exporters:
  otlp:
    endpoint: ${NEW_RELIC_OPENTELEMETRY_ENDPOINT}
    headers:
      api-key: ${NEW_RELIC_LICENSE_KEY}
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [otlp] 

CloudWatch logs for the lambda with the ADOT Lambda layer when it doesn't send traces to the collector receiver (I added a custom sampler that drops all traces):
REPORT RequestId: fc897c5e-9caf-4409-89ea-9d1d5d2e370d Duration: 27.09 ms Billed Duration: 28 ms Memory Size: 2048 MB Max Memory Used: 123 MB
XRAY TraceId: 1-63ebbac7-6927d234289ecdfb2df6d4a8 SegmentId: 311415df27555941 Sampled: true

CloudWatch logs for the same lambda when it sends one trace per invocation to the collector receiver:
REPORT RequestId: 2b72d6e8-4589-4b89-9698-715ae8a9e9f7 Duration: 160.41 ms Billed Duration: 161 ms Memory Size: 2048 MB Max Memory Used: 126 MB
XRAY TraceId: 1-63ebbb3a-3e2cfb8851cb3f95162d8a9d SegmentId: 5485d6a96f24cdbe Sampled: true

The billed duration difference is around 130 ms (161 ms vs. 28 ms), and the Lambda memory size doesn't affect the overhead. Traces are sent to the New Relic endpoint; according to New Relic, their OTLP endpoint responds in well under 100 ms. It looks like queued retry (and with it any batching) is disabled for the collector exporters, so traces are exported to New Relic synchronously on each Lambda invocation:
https://github.com/Aneurysm9/opentelemetry-lambda/blob/fd2c4c91fba2c1ad22a653e1dd8dd94ddcec023b/collector/internal/confmap/converter/disablequeuedretryconverter/converter.go#L75
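For comparison, in a regular (non-Lambda) collector deployment, batching would normally be configured with the standard batch processor, roughly like this (a minimal sketch; the 5s timeout is an arbitrary illustrative value):

processors:
  batch:
    timeout: 5s
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]

With nothing like that in effect, every invocation pays the full round trip to the endpoint before the function can return.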

@RichiCoder1

This is possibly related to #263


github-actions bot commented May 5, 2024

This issue was marked stale. It will be closed in 30 days without additional activity.

github-actions bot added the Stale label on May 5, 2024
@tylerbenson (Member) commented

Please try the latest collector layer release with the decouple processor automatically added. This sounds like exactly the problem that processor was designed to resolve.
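For reference, a minimal sketch of the collector.yaml above with the decouple processor wired in explicitly (recent layer releases insert it automatically, so this is illustrative only; `decouple` is the component ID used by the opentelemetry-lambda collector):

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: localhost:4317
processors:
  decouple:
exporters:
  otlp:
    endpoint: ${NEW_RELIC_OPENTELEMETRY_ENDPOINT}
    headers:
      api-key: ${NEW_RELIC_LICENSE_KEY}
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [decouple]
      exporters: [otlp]

The decouple processor lets the function complete before the telemetry is exported; the data is flushed later (e.g., during the next invocation or before environment shutdown), which takes the synchronous export off the billed critical path.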
