Wrong app.kubernetes.io/instance is set #929

Open · gouthamve opened this issue Nov 20, 2024 · 1 comment

@gouthamve (Member)
The app.kubernetes.io/instance label needs to be unique for each pod. See: https://kubernetes.io/docs/reference/labels-annotations-taints/#app-kubernetes-io-instance
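For context, the recommended-labels example in the Kubernetes docs keeps instance unique by suffixing the application name per deployed instance; the values below are the docs' own illustration, not anything from this chart:

  # Recommended labels from the Kubernetes docs; "mysql-abcxyz" is their
  # illustrative unique instance name, distinct from the shared app name.
  metadata:
    labels:
      app.kubernetes.io/name: mysql
      app.kubernetes.io/instance: mysql-abcxyz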

For example, a pod from the alloy-logs DaemonSet and the chart's Alloy StatefulSet:

➜  tanka git:(main) ✗ k -n monitoring get po k8s-monitoring-alloy-logs-5qkrc -oyaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    k8s.grafana.com/logs.job: integrations/alloy
    kubectl.kubernetes.io/default-container: alloy
  creationTimestamp: "2024-11-15T15:44:38Z"
  generateName: k8s-monitoring-alloy-logs-
  labels:
    app.kubernetes.io/instance: k8s-monitoring
    app.kubernetes.io/name: alloy-logs
    controller-revision-hash: 68c597b7c4
    pod-template-generation: "3"
  name: k8s-monitoring-alloy-logs-5qkrc
  namespace: monitoring
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: DaemonSet
    name: k8s-monitoring-alloy-logs
    uid: a1366d4d-c0e4-4946-a06a-2bade9644a97
  resourceVersion: "26198515"
  uid: 2d9900ae-5d7b-48d5-86e0-ecce76712e4a
➜  tanka git:(main) ✗ k -n monitoring get sts k8s-monitoring-alloy -oyaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"apps/v1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"k8s-monitoring","app.kubernetes.io/managed-by":"Helmraiser","app.kubernetes.io/name":"alloy","app.kubernetes.io/part-of":"alloy","app.kubernetes.io/version":"v1.5.0","helm.sh/chart":"alloy-0.10.0","tanka.dev/environment":"ac01f6e2b7516fbe884c27716a1be2e96137a278ac0bbbdc"},"name":"k8s-monitoring-alloy","namespace":"monitoring"},"spec":{"minReadySeconds":10,"podManagementPolicy":"Parallel","replicas":1,"selector":{"matchLabels":{"app.kubernetes.io/instance":"k8s-monitoring","app.kubernetes.io/name":"alloy"}},"serviceName":"k8s-monitoring-alloy","template":{"metadata":{"annotations":{"k8s.grafana.com/logs.job":"integrations/alloy","kubectl.kubernetes.io/default-container":"alloy"},"labels":{"app.kubernetes.io/instance":"k8s-monitoring","app.kubernetes.io/name":"alloy"}},"spec":{"containers":[{"args":["run","/etc/alloy/config.alloy","--storage.path=/tmp/alloy","--server.http.listen-addr=0.0.0.0:12345","--server.http.ui-path-prefix=/","--cluster.enabled=true","--cluster.join-addresses=k8s-monitoring-alloy-cluster","--cluster.name=\"alloy\"","--stability.level=generally-available"],"env":[{"name":"ALLOY_DEPLOY_MODE","value":"helm"},{"name":"HOSTNAME","valueFrom":{"fieldRef":{"fieldPath":"spec.nodeName"}}}],"image":"docker.io/grafana/alloy:v1.5.0","imagePullPolicy":"IfNotPresent","name":"alloy","ports":[{"containerPort":12345,"name":"http-metrics"},{"containerPort":4317,"name":"otlp-grpc","protocol":"TCP"},{"containerPort":4318,"name":"otlp-http","protocol":"TCP"},{"containerPort":9999,"name":"prometheus","protocol":"TCP"},{"containerPort":14250,"name":"jaeger-grpc","protocol":"TCP"},{"containerPort":6832,"name":"jaeger-binary","protocol":"TCP"},{"containerPort":6831,"name":"jaeger-compact","protocol":"TCP"},{"containerPort":14268,"name":"jaeger-http","protocol":"TCP"},{"containerPort":9411,"name":"zipkin","protocol":"TCP"}],"readinessProbe":{"httpGet":{"path":"/-/ready","port":12345,"scheme":"HTTP"},"initialDelaySeconds":10,"timeoutSeconds":1},"volumeMounts":[{"mountPath":"/etc/alloy","name":"config"}]},{"args":["--volume-dir=/etc/alloy","--webhook-url=http://localhost:12345/-/reload"],"image":"ghcr.io/jimmidyson/configmap-reload:v0.12.0","name":"config-reloader","resources":{"requests":{"cpu":"1m","memory":"5Mi"}},"volumeMounts":[{"mountPath":"/etc/alloy","name":"config"}]}],"dnsPolicy":"ClusterFirst","nodeSelector":{"kubernetes.io/os":"linux"},"serviceAccountName":"k8s-monitoring-alloy","tolerations":[{"effect":"NoSchedule","key":"kubernetes.io/arch","operator":"Equal","value":"arm64"}],"volumes":[{"configMap":{"name":"k8s-monitoring-alloy"},"name":"config"}]}}}}
  creationTimestamp: "2024-09-02T11:51:55Z"
  generation: 3
  labels:
    app.kubernetes.io/instance: k8s-monitoring
    app.kubernetes.io/managed-by: Helmraiser
    app.kubernetes.io/name: alloy
    app.kubernetes.io/part-of: alloy
    app.kubernetes.io/version: v1.5.0
    helm.sh/chart: alloy-0.10.0
    tanka.dev/environment: ac01f6e2b7516fbe884c27716a1be2e96137a278ac0bbbdc

We're setting it to a static value.

Other systems expect this value to be unique; see open-telemetry/opentelemetry-operator#3204 for an example.
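One way the uniqueness expectation could be met, sketched under the assumption that each Alloy collector is installed as its own release of the grafana/alloy chart rather than as a subchart of one umbrella release (release names below are made up for illustration):

  # Each release name becomes that workload's app.kubernetes.io/instance,
  # so the logs and metrics pods no longer share a single value.
  helm install alloy-logs grafana/alloy -n monitoring
  helm install alloy-metrics grafana/alloy -n monitoring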

@petewall (Collaborator)

Thanks for raising this. Actually, the instance is set to k8s-monitoring because it's all part of the same "Application", namely the K8s monitoring Helm chart as a whole. It's set natively by Helm itself: https://helm.sh/docs/chart_best_practices/labels/
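For reference, this is the selector-labels helper that helm create scaffolds; a sketch of the convention, not this chart's exact template ("mychart" is a placeholder). It shows why every resource rendered under one release name ends up with the same instance value:

  {{/* _helpers.tpl: labels shared by every resource in a release */}}
  {{- define "mychart.selectorLabels" -}}
  app.kubernetes.io/name: {{ include "mychart.name" . }}
  app.kubernetes.io/instance: {{ .Release.Name }}
  {{- end }}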

What is the issue caused by this?
