Describe the bug
I am experiencing "context deadline exceeded" errors on the DataDog exporter, as evidenced by the logs. The issue results in failed export attempts and subsequent retries.
Steps to reproduce
1. Configure the custom Docker image with the custom collector based on opentelemetry-lambda that includes the DataDog exporter (a configuration sketch follows these steps).
2. Initiate data export (traces, logs, metrics).
3. Observe the logs for errors related to context deadlines being exceeded.
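For context, here is a minimal sketch of the kind of collector configuration involved, assuming a standard datadog exporter wired into OTLP pipelines; the API key source, site, and pipeline layout are illustrative placeholders, not the reporter's actual file:

```yaml
# Illustrative sketch only: a custom opentelemetry-lambda collector build
# exporting traces, metrics, and logs through the datadog exporter.
receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}   # placeholder; the real key is supplied out of band
      site: datadoghq.com

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [datadog]
```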
What did you expect to see?
I expected the data to be exported successfully to DataDog without any timeout errors.
What did you see instead?
The export requests failed with “context deadline exceeded” errors, resulting in retries and eventual dropping of the payloads. Here are some excerpts from the logs:
1719687286935 {"level":"warn","ts":1719687286.9350078,"caller":"[email protected]/batch_processor.go:263","msg":"Sender failed","kind":"processor","name":"batch","pipeline":"logs","error":"no more retries left: Post \"https://http-intake.logs.datadoghq.com/api/v2/logs?ddtags=service%3Akognitos.book.yaml%2Cenv%3Amain%2Cregion%3Aus-west-2%2Ccloud_provider%3Aaws%2Cos.type%3Alinux%2Cotel_source%3Adatadog_exporter\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"}
1719687286936 {"level":"error","ts":1719687286.9363096,"caller":"[email protected]/traces_exporter.go:181","msg":"Error posting hostname/tags series","kind":"exporter","data_type":"traces","name":"datadog","error":"max elapsed time expired Post \"https://api.datadoghq.com/api/v2/series\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)","stacktrace":"github.com/open-telemetry/opentelemetry-collector-contrib/exporter/datadogexporter.(*traceExporter).exportUsageMetrics\n\t/root/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/exporter/[email protected]/traces_exporter.go:181\ngithub.com/open-telemetry/opentelemetry-collector-contrib/exporter/datadogexporter.(*traceExporter).consumeTraces\n\t/root/go/pkg/mod/github.com/open-telemetry/opentelemetry-collector-contrib/exporter/[email protected]/traces_exporter.go:139\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*tracesRequest).Export\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/traces.go:59\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*timeoutSender).send\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/timeout_sender.go:43\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*baseRequestSender).send\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/common.go:37\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*tracesExporterWithObservability).send\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/traces.go:159\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*baseRequestSender).send\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/common.go:37\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*baseRequestSender).send\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/common.go:37\ngo.opentelemetry.io/collector/exporter/exporterhelper.(*baseExporter).send\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/common.go:294\ngo.opentelemetry.io/collector/exporter/exporterhelper.NewTracesRequestExporter.func1\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/exporterhelper/traces.go:134\ngo.opentelemetry.io/collector/consumer.ConsumeTracesFunc.ConsumeTraces\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/[email protected]/traces.go:25\ngo.opentelemetry.io/collector/processor/batchprocessor.(*batchTraces).export\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/processor/[email protected]/batch_processor.go:414\ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).sendItems\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/processor/[email protected]/batch_processor.go:261\ngo.opentelemetry.io/collector/processor/batchprocessor.(*shard).startLoop\n\t/root/go/pkg/mod/go.opentelemetry.io/collector/processor/[email protected]/batch_processor.go:223"}
What version of collector/language SDK version did you use?
Version: Custom layer-collector/0.8.0 + datadogexporter from v0.103.0
What language layer did you use?
Config: None. It is a custom runtime that includes the binary in extensions.
Additional context
Here is my configuration file:
Enabling/disabling sending_queue does not seem to do anything to prevent the errors. I did notice that if I hit the service continuously, some traces do get sent, but only a few.
What I discarded as potential solutions:
Connectivity issues. DataDog's API key validation call succeeds, and if the service is hit constantly, some traces do get through.
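For reference, the sending_queue toggle mentioned above is one of the standard exporterhelper settings on the datadog exporter, alongside timeout and retry_on_failure; a sketch with illustrative values (not the configuration that was actually tested):

```yaml
# Sketch of the exporterhelper settings involved; values are illustrative.
exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}   # placeholder
    timeout: 15s               # per-request deadline; exceeding it surfaces as "context deadline exceeded"
    retry_on_failure:
      enabled: true
      max_elapsed_time: 120s   # total retry budget before the payload is dropped
    sending_queue:
      enabled: false           # toggling this reportedly made no difference
```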
@3miliano I think it is because the container freezes right after the invocation completes, and with the configs you have shared, the collector is not aware of the Lambda lifecycle. So, as @tylerbenson suggested, using the batch processor (which will activate the decouple processor by default) right before the Datadog exporter should resolve your problem.
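A sketch of the pipeline change this comment suggests, assuming the opentelemetry-lambda collector build where the decouple processor is activated automatically when batch is present (it is listed explicitly here for clarity):

```yaml
# Sketch of the suggested fix: batch (and the decouple processor it activates)
# placed in the pipeline before the datadog exporter.
processors:
  batch:
  decouple:                    # per the comment above, enabled by default when batch is used

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch, decouple]
      exporters: [datadog]
```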