v2: empty panel for cluster memory #896
Comments
Would you be willing to check this with the latest (2.0.0-rc.3)? I just deployed it, am sending data to a Grafana Cloud stack, and I'm able to see data in the memory-percent-per-cluster panel. The cluster label is present.
What is the expected job name in your case? Maybe it's not …
Can you share your values.yaml file?
Sure:
OK, I can confirm that that job is not part of this helm chart, but I don't get the metric at all at this point (i.e. that scrape job was coming from another Alloy). Still no metrics with …
Silly doubt: is the regexp "" correct in a relabelling config? Its default should be …
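For reference, a minimal sketch of an Alloy relabel rule (component and label names here are illustrative, not taken from this chart). When `regex` is omitted it defaults to `(.*)`, which matches anything, whereas an explicit `""` only matches an empty label value:

```alloy
discovery.relabel "example" {
  targets = []

  rule {
    source_labels = ["job"]
    // Omitting `regex` uses the default "(.*)", which matches any value.
    // An explicit regex of "" would only match an empty label value.
    regex  = "(.*)"
    action = "keep"
  }
}
```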
OK, right now I can only confirm that I don't see any node_exporter metrics coming out of that Alloy instance. I'll continue the investigation, but it feels like we're getting closer to the solution.
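As a point of comparison, a minimal hand-written Alloy sketch that exports and scrapes host metrics (component names and the remote_write URL are placeholders; the chart's generated config will differ):

```alloy
// Built-in node_exporter-style host metrics collector.
prometheus.exporter.unix "node" { }

// Scrape the collector and forward the samples to remote_write.
prometheus.scrape "node" {
  targets    = prometheus.exporter.unix.node.targets
  forward_to = [prometheus.remote_write.metrics.receiver]
}

prometheus.remote_write "metrics" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}
```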
FYI: my node exporters have label …
Small update: …
Another update: is it the … which will rewrite the value of the …? Also, the …
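For illustration, a hedged sketch of an Alloy relabel rule that rewrites a label's value (the label names are placeholders, not taken from the chart):

```alloy
prometheus.relabel "rewrite" {
  forward_to = [prometheus.remote_write.metrics.receiver]

  rule {
    // Copy the value of `instance` into `node`,
    // overwriting whatever value `node` had before.
    source_labels = ["instance"]
    target_label  = "node"
    action        = "replace"
  }
}
```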
Related? From what I can see, unless relabelled, the …
This looks at the …
We do use the …
I am using the latest version and a recent kernel, but I don't get why the pod is not actually exporting …
So it seems to be a local problem and not a problem with the chart. You can close this issue if you want; otherwise I'll report the reason here as soon as I discover it, but right now it seems really related to the local test setup (Rancher Desktop + k3d).
In the Kubernetes overview on Grafana Cloud the "Memory Usage by Cluster" panel uses this query:
Of the two metrics needed, only `kube_node_status_capacity` has the `cluster` label, while it's missing for `node_memory_MemAvailable_bytes`. The result is an empty panel for cluster memory.

I remember it working, so maybe something got lost?

A possible reason could be that the `cluster` label is added as an external label when remote_writing from the `alloy-metrics` instance, but not for the `alloy-module-system` one, which is in charge of scraping the node exporters.
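As a hedged sketch of the suspected fix (assuming Alloy's `prometheus.remote_write` component accepts an `external_labels` argument; the URL and cluster name are placeholders), attaching the `cluster` label at remote_write time would look something like:

```alloy
prometheus.remote_write "metrics" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }

  // Attach the cluster label to every series written through this component.
  external_labels = {
    cluster = "my-cluster",
  }
}
```

If only one of the two Alloy instances carries this setting, metrics scraped by the other instance would arrive without the `cluster` label, which would match the symptom described above.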