No data in #883
Comments
I suspect what is happening here is that your Prometheus is not configured to scrape the exporter. Does the exporter's /metrics endpoint return data, and does the exporter show up as a target in Prometheus?
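If you want to confirm that, you could port-forward to the Prometheus service and look at the active targets (the namespace and service name below are guesses based on a release called "kube-prometheus-stack"; adjust to your install):

```shell
# Forward the Prometheus UI to localhost
kubectl -n kube-prometheus-stack port-forward svc/kube-prometheus-stack-prometheus 9090:9090
# Then open http://localhost:9090/targets - the elasticsearch exporter should appear as a target there.
```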
Thanks so much for replying, I've been trying to get this to work for a few days now. Can you please explain how I could check the /metrics endpoint on the exporter? I logged into the "kube-prometheus-stack-grafana" pod and did a curl against the "pgexporter-prometheus-postgres-exporter" service IP address, which, according to the env variable KUBE_PROMETHEUS_STACK_KUBE_STATE_METRICS_PORT_8080_TCP_PORT, is on port 8080, but it's not able to connect at all. I can't see the exporter as a target in the dashboard either. This is what I have for the prometheuses.monitoring.coreos.com CRD:
...do I need to create a "ScrapeConfig" as it suggests here? I don't see any ScrapeConfig objects in the "kube-prometheus-stack" namespace.
For your first question about checking the exporter pod: I think you're conflating the connection to Prometheus with the connection to the exporter. The environment variable you mention is for kube-prometheus-stack, not the elasticsearch-exporter. In the command args you originally mentioned, as well as in the logs from the exporter, the exporter is listening on port 9108, so that is the port you want to check.
For the scrape configs, I think I linked you to the wrong section. Try here: https://prometheus-operator.dev/docs/user-guides/getting-started/. That talks about using a ServiceMonitor. Here's an example of both - first the endpoint check, then a ServiceMonitor:
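Checking the exporter's /metrics directly (the Service name here is inferred from your Deployment name and the chart's default port of 9108, so adjust if yours differ):

```shell
# Port-forward straight to the exporter Service...
kubectl port-forward svc/elastic-prometheus-elasticsearch-exporter 9108:9108
# ...then, from another terminal, confirm /metrics returns data
curl -s http://localhost:9108/metrics | head
```

And a ServiceMonitor for the exporter. The selector labels, port name, and namespaces below are assumptions - compare them against `kubectl get svc --show-labels` for your exporter Service and against your kube-prometheus-stack release name:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: elasticsearch-exporter
  namespace: kube-prometheus-stack        # where your Prometheus looks for ServiceMonitors
  labels:
    release: kube-prometheus-stack        # must match the Prometheus serviceMonitorSelector
spec:
  namespaceSelector:
    matchNames:
      - default                           # namespace the exporter Service runs in
  selector:
    matchLabels:
      app: prometheus-elasticsearch-exporter   # check with: kubectl get svc --show-labels
  endpoints:
    - port: http                          # named port on the exporter Service (9108)
      interval: 30s
```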
Thanks Joe, really appreciate you helping me out here; I must be missing a step :-( I'm getting a little confused between exporters and service monitors. I initially tried to set up a service monitor against the elastic service but couldn't see any metrics in Prometheus, so I assumed the alternative was to use an exporter and configure the connection in the deployment. Here's my first attempt using a service monitor against the elastic service:
So you have Prometheus - this is your database. It stores the metrics and you can query it. It also scrapes metrics from exporters. Exporters are things that expose metrics; they are often translators of data. In this case elasticsearch_exporter takes data from Elasticsearch and exposes it as Prometheus metrics. By itself the exporter only exposes metrics over HTTP(S). The kube-prometheus-stack glues a bunch of stuff together to make many pieces work together. The ServiceMonitor is a way to tell Prometheus about Kubernetes services to monitor.
What you have in your last comment looks okay to me, but I'm not an expert. If you still don't have the target in Prometheus, it's probably something with the config for kube-prometheus-stack. I think this is the repo for that: https://github.com/prometheus-operator/prometheus-operator. You could also try the #prometheus channel in the CNCF Slack; that might be more fruitful for kube-prometheus-stack issues.
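If it helps, a couple of commands to sanity-check that wiring (assuming the stack is in the kube-prometheus-stack namespace, as in your earlier comment):

```shell
# See which ServiceMonitors your Prometheus instance is configured to select
kubectl -n kube-prometheus-stack get prometheus -o yaml | grep -iA4 serviceMonitorSelector

# By default kube-prometheus-stack only selects ServiceMonitors labelled release=<helm release name>,
# so compare that selector against the labels on your ServiceMonitor:
kubectl get servicemonitors --all-namespaces --show-labels
```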
Thanks again, I'll give that a try :)
Hey again, I exposed the elastic exporter service as a NodePort service and confirmed it's returning the metrics, but I still can't get them into Prometheus :-( From reading further, it seems a ServiceMonitor is required to avoid having to manually add a new scrape job to the Prometheus config.
I have installed the kube-prometheus-stack Helm chart from here:
https://github.com/prometheus-community/helm-charts/blob/main/charts/kube-prometheus-stack/README.md
....then I added:
https://github.com/prometheus-community/elasticsearch_exporter
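For reference, the installs were roughly along these lines (namespace and release names shown here are illustrative, not exact):

```shell
# Add the prometheus-community repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# kube-prometheus-stack (Prometheus operator, Prometheus, Alertmanager, Grafana)
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  --namespace kube-prometheus-stack --create-namespace

# elasticsearch exporter; a release named "elastic" produces resources called
# elastic-prometheus-elasticsearch-exporter
helm install elastic prometheus-community/prometheus-elasticsearch-exporter
```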
...and updated the elastic-prometheus-elasticsearch-exporter deployment with the following options:
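The options were along these lines (the Elasticsearch URL here is a placeholder):

```yaml
# container args on the elastic-prometheus-elasticsearch-exporter Deployment
args:
  - --es.uri=http://elasticsearch-master.default.svc:9200   # placeholder; points at the Elasticsearch service
  - --es.all
  - --es.indices
  - --web.listen-address=:9108    # matches the "Listening on [::]:9108" line in the logs below
```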
...and when I check the pod logs it seems to be collecting data:
level=info ts=2024-04-04T09:37:36.260266299Z caller=clusterinfo.go:214 msg="triggering initial cluster info call"
level=info ts=2024-04-04T09:37:36.260317077Z caller=clusterinfo.go:183 msg="providing consumers with updated cluster info label"
level=info ts=2024-04-04T09:37:36.271143372Z caller=main.go:244 msg="started cluster info retriever" interval=5m0s
level=info ts=2024-04-04T09:37:36.271525105Z caller=tls_config.go:274 msg="Listening on" address=[::]:9108
level=info ts=2024-04-04T09:37:36.271545007Z caller=tls_config.go:277 msg="TLS is disabled." http2=false address=[::]:9108
level=info ts=2024-04-04T09:42:36.260458556Z caller=clusterinfo.go:183 msg="providing consumers with updated cluster info label"
....but when I log into Prometheus, I can't see anything related to elastic. Am I missing some additional configuration?
Thanks for any tips in advance.
Regards,
John