
Make it possible to export multiple sets of metrics in the same process #35

Open
manueljacob opened this issue Apr 22, 2024 · 1 comment

Comments

@manueljacob

Use case

There are multiple instances of an application (running on different servers). Some metrics are instance-specific (e.g. about the requests handled by a specific instance). Some metrics are not instance-specific (they are based on data from a single database). In our case, the application is a Rails application, but that shouldn’t matter much for this feature request.

For the instance-specific metrics, each instance is scraped (in our case, by Prometheus). The metrics that are not instance-specific should be exported by each instance, but only a single instance should be scraped at a given time (in our case, there is load balancing by Kubernetes, but the details don’t matter here).

Current solutions

If we export both sets of metrics on each instance, the same data (metrics that are not instance-specific) will be collected multiple times by Prometheus, wasting resources and making handling of the data more complicated.

We could launch separate processes to export the metrics that are not instance-specific, but that complicates deployment and uses extra resources.

Desired solution

It should be possible to export both sets of metrics in the same process, but on different ports and/or paths.

Proposed feature

It should be possible to create multiple instances of the Yabeda class that each can be configured separately.
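To make the request concrete, here is a hypothetical sketch of what such an API could look like. None of this exists in Yabeda today: `Yabeda.new`, the per-instance `configure`, and the `registry:` option of the exporter are all made up for illustration, and `Job` is a stand-in for whatever backs the shared metric.

```ruby
# Hypothetical API, for illustration only: two independent Yabeda instances
# in the same process, each with its own metrics and its own exporter endpoint.
instance_metrics = Yabeda.new   # assumed constructor, does not exist today
cluster_metrics  = Yabeda.new

instance_metrics.configure do
  counter :requests_total, comment: "Requests handled by this instance"
end

cluster_metrics.configure do
  gauge :pending_jobs, comment: "Jobs pending in the shared database"
  collect { pending_jobs.set({}, Job.pending.count) } # Job is a hypothetical model
end

# config.ru: assumed `registry:` option to choose which instance an exporter serves
map "/metrics" do
  run Yabeda::Prometheus::Exporter.new(registry: instance_metrics)
end

map "/cluster_metrics" do
  run Yabeda::Prometheus::Exporter.new(registry: cluster_metrics)
end
```

With something like this, Prometheus could scrape /metrics on every pod directly and /cluster_metrics through a single Kubernetes Service.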

@Envek
Member

Envek commented Apr 24, 2024

Hey, thanks for writing the issue!

So, you want to be able to expose two metrics endpoints from every process/pod on different paths or ports:

  1. one serving only per-process or per-pod metrics (all counters incremented and histograms measured in-process, plus maybe some subset of collect blocks), scraped directly;

  2. one serving only per-application metrics (basically only executing collect blocks, but maybe not all of them), scraped through a k8s service or load balancer.

Am I right?


For now, you can take a look at how the same problem (instance-specific vs. common database-sourced metrics) is solved in yabeda-sidekiq via the collect_cluster_metrics config flag. It is meant to be used specifically with the workaround you mentioned, running a separate metrics exporter process:

> We could launch separate processes to export the metrics that are not instance-specific, but that complicates deployment and uses extra resources.

For now, this is the only way to handle this.
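For reference, a rough sketch of that workaround, assuming the collect_cluster_metrics flag from yabeda-sidekiq and the Rack exporter middleware from yabeda-prometheus (check both gems' READMEs for the exact configuration keys and the Redis/Sidekiq connection setup this process needs):

```ruby
# exporter.ru: a dedicated exporter process, run separately from the app
# instances (e.g. `rackup exporter.ru -p 9394`) and scraped through a single
# Kubernetes Service.
require "yabeda/sidekiq"
require "yabeda/prometheus"

# Only this process collects the cluster-wide Sidekiq metrics; the application
# instances keep this flag off so they expose only instance-specific metrics.
# (Config accessor per the yabeda-sidekiq README; double-check the exact name.)
Yabeda::Sidekiq.config.collect_cluster_metrics = true

Yabeda.configure! # apply metric definitions (needed when not using the Rails railtie)

# Rack middleware from yabeda-prometheus serving the metrics under /metrics.
use Yabeda::Prometheus::Exporter
run ->(env) { [404, { "content-type" => "text/plain" }, ["Not found"]] }
```

The application instances leave collect_cluster_metrics disabled, so the shared metrics are scraped only from this one process.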
