Mismatch between Beyla and Loki pod logs on `service.name`, `service.namespace`, and `service.instance.id` #942
Comments
Trying to determine how to resolve this.
Thanks @petewall. I started brainstorming with @zeitlinger and plan to involve the Beyla team next week.
Since Beyla 1.9 (released Nov 25th), the instance id is:
@cyrille-leclerc I just noticed that the operator uses the Docker image to determine the service version. I think we should also add that to our spec: https://github.com/open-telemetry/opentelemetry-operator/blob/2389f9441912835fbd2af00d26dd76d6c1dae545/pkg/instrumentation/sdk.go#L461
Here is the code that can be reused from the operator: https://github.com/open-telemetry/opentelemetry-operator/blob/2389f9441912835fbd2af00d26dd76d6c1dae545/pkg/instrumentation/sdk.go#L408-L485
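For reference, a minimal Go sketch of the precedence chain the linked operator code implements. The types and helper names here are hypothetical, condensed from the table in the issue description; they are not the operator's actual signatures:

```go
package main

import "fmt"

// firstNonEmpty returns the first non-empty string, mirroring the
// first_non_null(...) precedence used throughout the table below.
func firstNonEmpty(candidates ...string) string {
	for _, c := range candidates {
		if c != "" {
			return c
		}
	}
	return ""
}

// podInfo is a hypothetical, condensed view of the Kubernetes metadata
// available when resolving resource attributes (owner names shortened
// to two for brevity).
type podInfo struct {
	annotations map[string]string
	labels      map[string]string
	deployment  string
	replicaSet  string
	container   string
}

// chooseServiceName sketches the precedence: explicit annotation first,
// then (optionally) the well-known label, then workload owner names.
func chooseServiceName(p podInfo, useLabelsForResourceAttributes bool) string {
	label := ""
	if useLabelsForResourceAttributes {
		label = p.labels["app.kubernetes.io/name"]
	}
	return firstNonEmpty(
		p.annotations["resource.opentelemetry.io/service.name"],
		label,
		p.deployment,
		p.replicaSet,
		p.container,
	)
}

func main() {
	p := podInfo{
		labels:     map[string]string{"app.kubernetes.io/name": "checkout"},
		deployment: "checkout-deployment",
	}
	fmt.Println(chooseServiceName(p, true)) // "checkout"
}
```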
Thanks, can you please update the doc?
Done.
I guess I can assume that
Correct.
Here is the PR to disable the label for service.instance.id: open-telemetry/opentelemetry-operator#3497
@mariomac I'm happy to review a PR if you add the logic to Beyla.
Problem description
There is an inconsistency in critical resource attributes between the telemetry of applications instrumented with Beyla for metrics and traces and the logs emitted through Kubernetes stdout and collected with the Grafana K8s Helm Chart.
This inconsistency, with different values for `service.name`, `service.namespace`, and `service.instance.id`, breaks correlation capabilities in Grafana Cloud, particularly in Grafana Application Observability.
Example of inconsistencies
Root Cause Analysis - Inconsistent naming strategies
| Attribute | OTel Operator | Beyla | Grafana K8s Helm Chart (Loki logs) |
| --- | --- | --- | --- |
| `service.name` | first_non_null( pod.annotation[resource.opentelemetry.io/service.name], if (useLabelsForResourceAttributes) { pod.label[app.kubernetes.io/name] }, k8s.deployment.name, k8s.replicaset.name, k8s.statefulset.name, k8s.daemonset.name, k8s.cronjob.name, k8s.job.name ) | first_non_null( k8s.deployment.name, ?TODO? ) | first_non_null( service, app, application, name, pod.label[app.kubernetes.io/name], container, container_name, component, workload, job ) |
| `service.namespace` | first_non_null( pod.annotation[resource.opentelemetry.io/service.namespace], if (useLabelsForResourceAttributes) { pod.label[app.kubernetes.io/part-of] }, k8s.namespace.name ) | k8s.namespace.name | ∅ |
| `service.instance.id` | first_non_null( pod.annotation[resource.opentelemetry.io/service.instance.id], if (useLabelsForResourceAttributes) { pod.label[app.kubernetes.io/instance] }, join(k8s.namespace.name, k8s.pod.name, k8s.container.name, ".") ) (don't use annotation or label, we want to remove it: https://github.com/open-telemetry/opentelemetry-operator/issues/3495) | TODO GENERATED? | ∅ |
| `service.version` | first_non_null( pod.annotation[resource.opentelemetry.io/service.version], if (useLabelsForResourceAttributes) { pod.label[app.kubernetes.io/version] }, docker tag, except when it contains a `/` ) | ? | ? |
| `deployment.environment.name` | first_non_null( pod.annotation[resource.opentelemetry.io/deployment.environment.name] ) | ∅ | ∅ |
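To make the divergence concrete, here is a minimal, hypothetical sketch that applies the table's fallback rules to a single pod. All workload names are invented and this is not code from any of the projects involved:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical metadata for a single pod.
	namespace := "shop"                // k8s.namespace.name
	deployment := "checkout"          // k8s.deployment.name
	pod := "checkout-5d9f7c6b8d-x2vqp" // k8s.pod.name
	container := "server"             // k8s.container.name

	// Operator rule: join(k8s.namespace.name, k8s.pod.name, k8s.container.name, ".")
	instanceID := strings.Join([]string{namespace, pod, container}, ".")

	// Beyla (per the table): service.name falls back to k8s.deployment.name,
	// service.namespace to k8s.namespace.name.
	fmt.Printf("Beyla:     service.name=%s service.namespace=%s\n", deployment, namespace)

	// Helm chart logs (per the table): service.name falls back through log
	// labels such as the container name; service.namespace stays unset.
	fmt.Printf("Loki logs: service.name=%s service.namespace=<unset>\n", container)

	// Operator: the proposed canonical service.instance.id.
	fmt.Printf("Operator:  service.instance.id=%s\n", instanceID)
}
```

Running this prints `service.name=checkout` for Beyla but `service.name=server` for the logs, which is exactly the kind of mismatch that breaks correlation.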