This is the documentation for Promregator V1.0 (and higher). If you are interested in reading the documentation of Promregator V0.*, please refer to this page.
A quickstart guide using Docker images is available at our quickstart page.
A detailed analysis of Promregator's architecture can be found on the architecture page; please refer to it for further details.
Since version 1.0.0, Promregator supports only the following mode of integrating with Prometheus:
Service discovery (determining which CF app instances are subject to scraping) is separated into its own endpoint (`/discovery`). It provides a JSON-formatted, downloadable file which can be used with the `file_sd_configs` service discovery method of Prometheus. The file includes a separate target for each CF app instance to be scraped, with Promregator serving as a proxy. Prometheus therefore sends multiple scraping requests to Promregator (at endpoints whose path starts with `/singleTargetMetrics`), which forwards them to the corresponding CF app instances. This approach scales to hundreds of apps to be scraped, as the timing of each scrape is properly controlled by Prometheus. Moreover, it supports flexible relabeling using Prometheus.
To learn more about the rationale behind how Promregator scrapes its targets, see also this page.
An overview of the endpoints provided by Promregator can be found on the endpoints page.
Configuration of Promregator is performed using any variant of the Spring Property Configuration approach.
The suggested approach is to create a configuration YAML file, such as `myconfig.yaml`, and start Promregator with the following command line option:

```bash
java -Dspring.config.location=file:/path/to/your/myconfig.yaml -jar promregator-x.y.z-SNAPSHOT.jar
```
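Alternatively, if you prefer not to pass a JVM system property, Spring Boot also reads the standard `SPRING_CONFIG_LOCATION` environment variable, which maps to the same `spring.config.location` property. A minimal sketch (the jar name and path are placeholders):

```bash
export SPRING_CONFIG_LOCATION=file:/path/to/your/myconfig.yaml
java -jar promregator-x.y.z-SNAPSHOT.jar
```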
Here is a dummy example of such a configuration YAML file when using basic authentication:

```yaml
cf:
  api_host: api.cf.example.org
  username: myCFUserName
  proxy:
    host: 192.168.111.1
    port: 8080

promregator:
  authenticator:
    type: oauth2xsuaaBasic
    oauth2xsuaaBasic:
      tokenServiceURL: https://jwt.token.server.example.org/oauth/token
      client_id: myOAuth2ClientId
      # client_secret: <should be provided via environment variable PROMREGATOR_AUTHENTICATOR_OAUTH2XSUAABASIC_CLIENT_SECRET>

  targets:
    - orgName: myCfOrgName
      spaceName: mySpaceName
      applicationName: myApplication1
    - orgName: myOtherCfOrgName
      spaceName: myOtherSpaceName
      applicationName: myOtherApplication
```
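The commented-out `client_secret` above hints at the recommended way of passing the secret: via the environment variable named in the comment, so that it never appears in the configuration file. A minimal launch sketch (path and secret value are placeholders):

```bash
export PROMREGATOR_AUTHENTICATOR_OAUTH2XSUAABASIC_CLIENT_SECRET='mySecretValue'
java -Dspring.config.location=file:/path/to/your/myconfig.yaml -jar promregator-x.y.z-SNAPSHOT.jar
```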
Here is a dummy example of such a configuration YAML file when using certificate-based authentication:

```yaml
cf:
  api_host: api.cf.example.org
  username: myCFUserName
  proxy:
    host: 192.168.111.1
    port: 8080

promregator:
  authenticator:
    type: oauth2xsuaaCertificate
    oauth2xsuaaCertificate:
      tokenServiceCertURL: https://jwt.cert.token.server.example.org/oauth/token
      client_id: myOAuth2ClientId
      client_certificates: |
        -----BEGIN CERTIFICATE-----
        MyIFu...IxZ
        -----END CERTIFICATE-----
      # client_key: <should be provided via environment variable PROMREGATOR_AUTHENTICATOR_OAUTH2XSUAACERTIFICATE_CLIENT_KEY>

  targets:
    - orgName: myCfOrgName
      spaceName: mySpaceName
      applicationName: myApplication1
    - orgName: myOtherCfOrgName
      spaceName: myOtherSpaceName
      applicationName: myOtherApplication
```
The documentation of the configuration options can be found here.
Promregator is written in Java and therefore requires a Java Virtual Machine (e.g. a Java Runtime Environment) to run. Finding a proper memory configuration can be tricky with JVMs, especially when running in a Docker container.
The current knowledge about memory configuration for Promregator can be found at the Java Memory Configuration page.
From the perspective of Prometheus, Promregator behaves like both a service discovery tool and a service which contains several scraping targets.
Promregator provides a JSON-formatted file which can be fed directly to the `file_sd_configs` service discovery mechanism of Prometheus. The endpoint where this file is available is called `/discovery`; it is enabled automatically. You may retrieve the document using `wget` or `curl` with the following command line:

```bash
$ curl http://hostname-of-promregator:8080/discovery > promregator.json
```

(NB: This assumes that no authentication is enabled; cf. `promregator.discovery.auth` on our configuration options page.)

The document is then downloaded to a file called `promregator.json`. This file contains references to the corresponding paths of the endpoints which support the Single Target Scraping mode.
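The file follows Prometheus' `file_sd_configs` format, i.e. a JSON array of target groups with `targets` and `labels`. A hypothetical excerpt might look like this; the exact path format and label set shown here are illustrative, not authoritative (the meta labels correspond to those documented below):

```json
[
  {
    "targets": [ "hostname-of-promregator:8080" ],
    "labels": {
      "__metrics_path__": "/singleTargetMetrics/5d49f9b0-8ac7-46b3-8945-1f500be8b96a/0",
      "__meta_promregator_target_orgName": "myCfOrgName",
      "__meta_promregator_target_spaceName": "mySpaceName",
      "__meta_promregator_target_applicationName": "myApplication1",
      "__meta_promregator_target_instanceNumber": "0"
    }
  }
]
```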
A sample service discovery configuration in Prometheus may then look like this:

```yaml
[...]
file_sd_configs:
  - files:
    - promregator.json
```
Note that the file has to explicitly mention the hostname and the port of your Promregator instance as seen from Prometheus. Promregator tries to auto-detect these based on the incoming request. However, if Promregator is running in a Docker container, for example, this mechanism may fail. You then have to explicitly set the configuration parameters `promregator.discovery.hostname` and `promregator.discovery.port` accordingly. For further details on these two options, also refer to the [configuration options page](config.md).
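For instance, pinning the externally visible address could look like this (hostname and port values are placeholders):

```yaml
promregator:
  discovery:
    hostname: promregator.example.org
    port: 8080
```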
Moreover, it is worth mentioning that querying the `/discovery` endpoint significantly more frequently than the refresh intervals of the application cache (see also configuration option `cf.cache.timeout.application`) and the resolver cache (see also `cf.cache.timeout.resolver`) is of little use: the results provided by the endpoint are mainly generated from the values in these two caches. Out-of-schedule querying may still make sense, though, if you have explicitly invalidated the caches manually.
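If you want to align your Prometheus refresh interval with these caches, the two options mentioned above can be set explicitly. A sketch with placeholder values (the unit is assumed to be seconds; verify against the configuration options page):

```yaml
cf:
  cache:
    timeout:
      application: 300
      resolver: 300
```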
Promregator V1 expects that label enrichment will be done by Prometheus. This is in line with Prometheus' recommended approach of handling labels which is using rewriting rules.
In your Prometheus configuration you may then specify `relabel_configs`, which you can adjust to your own needs. For this purpose, Promregator's discovery service provides the following meta labels:
Label name | Meaning | Example(s) |
---|---|---|
`__meta_promregator_target_orgName` | the name of the Cloud Foundry organization in which the CF app instance is located | `yourOrgName` |
`__meta_promregator_target_spaceName` | the name of the Cloud Foundry space in which the CF app instance is located | `yourSpaceName` |
`__meta_promregator_target_applicationName` | the name of the Cloud Foundry application of the CF app instance which is being scraped | `appName` |
`__meta_promregator_target_applicationId` | the GUID of the Cloud Foundry application of the CF app instance which is being scraped | `5d49f9b0-8ac7-46b3-8945-1f500be8b96a` |
`__meta_promregator_target_instanceNumber` | the instance number of the CF app instance which is being scraped | `0` or `2` |
`__meta_promregator_target_instanceId` | the instance identifier of the CF app instance which is being scraped | `5d49f9b0-8ac7-46b3-8945-1f500be8b96a:0` |
If you want the labels provided in the canonical way (e.g. adding `org_name`, `app_name` and so forth), you may use the following configuration snippet:
```yaml
relabel_configs:
  - source_labels: [__meta_promregator_target_instanceId]
    target_label: instance
  - source_labels: [__meta_promregator_target_instanceId]
    target_label: cf_instance_id
  - source_labels: [__meta_promregator_target_orgName]
    target_label: org_name
  - source_labels: [__meta_promregator_target_spaceName]
    target_label: space_name
  - source_labels: [__meta_promregator_target_applicationName]
    target_label: app_name
  - source_labels: [__meta_promregator_target_instanceNumber]
    target_label: cf_instance_number
```
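The effect of such copy-only relabeling rules can be sketched outside Prometheus. The following Python snippet is purely illustrative (it is not Prometheus code) and mimics what a plain `source_labels` → `target_label` copy does to the meta labels of a discovered target:

```python
def apply_copy_relabels(labels, rules):
    """Apply simple relabel rules of the form
    {source_labels: [...], target_label: ...} with the default
    separator and no regex, i.e. a plain copy of the source value."""
    result = dict(labels)
    for rule in rules:
        value = ";".join(labels[name] for name in rule["source_labels"])
        result[rule["target_label"]] = value
    return result

# The same rules as in the relabel_configs snippet above
rules = [
    {"source_labels": ["__meta_promregator_target_instanceId"], "target_label": "instance"},
    {"source_labels": ["__meta_promregator_target_instanceId"], "target_label": "cf_instance_id"},
    {"source_labels": ["__meta_promregator_target_orgName"], "target_label": "org_name"},
    {"source_labels": ["__meta_promregator_target_spaceName"], "target_label": "space_name"},
    {"source_labels": ["__meta_promregator_target_applicationName"], "target_label": "app_name"},
    {"source_labels": ["__meta_promregator_target_instanceNumber"], "target_label": "cf_instance_number"},
]

# Meta labels as provided by Promregator's discovery service (example values)
discovered = {
    "__meta_promregator_target_orgName": "myCfOrgName",
    "__meta_promregator_target_spaceName": "mySpaceName",
    "__meta_promregator_target_applicationName": "myApplication1",
    "__meta_promregator_target_instanceNumber": "0",
    "__meta_promregator_target_instanceId": "5d49f9b0-8ac7-46b3-8945-1f500be8b96a:0",
}

final_labels = apply_copy_relabels(discovered, rules)
print(final_labels["org_name"])   # myCfOrgName
print(final_labels["instance"])   # 5d49f9b0-8ac7-46b3-8945-1f500be8b96a:0
```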
Summarizing the suggestions above, it is recommended to configure Prometheus like this:
```yaml
[...]
file_sd_configs:
  - files:
    - /path/to/your/promregator.json
relabel_configs:
  - source_labels: [__meta_promregator_target_instanceId]
    target_label: instance
  - source_labels: [__meta_promregator_target_instanceId]
    target_label: cf_instance_id
  - source_labels: [__meta_promregator_target_orgName]
    target_label: org_name
  - source_labels: [__meta_promregator_target_spaceName]
    target_label: space_name
  - source_labels: [__meta_promregator_target_applicationName]
    target_label: app_name
  - source_labels: [__meta_promregator_target_instanceNumber]
    target_label: cf_instance_number
```
Basic Authentication is available starting with version 0.2.0 of Promregator.
In general, you need to specify a username and a password, which may be used for authentication at various places. The set of credentials to be used is defined like this:

```yaml
promregator:
  authentication:
    basic:
      username: someuser
      password: somepassword
```
There are three places where inbound authentication checks can be enabled:

- At the metrics' scraping endpoints `/metrics` and `/singleTargetMetrics`, by setting the configuration option `promregator.endpoint.auth` to `BASIC`.
- At the discovery endpoint `/discovery`, by setting the configuration option `promregator.discovery.auth` to `BASIC`.
- At Promregator's internal metrics endpoint `/promregatorMetrics`, by setting the configuration option `promregator.metrics.auth` to `BASIC`.

Any combination of these is possible.
For further details on these configuration options, also refer to the configuration options page.
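To verify that an enabled authentication check works as expected, you can exercise an endpoint with the configured credentials, for example (hostname and credentials are placeholders):

```bash
curl -u someuser:somepassword http://hostname-of-promregator:8080/discovery
```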
The corresponding option in Prometheus is the `scrape_configs[].basic_auth` option. Let us assume that you have configured Promregator like this:

```yaml
promregator:
  endpoint:
    auth: BASIC
  authentication:
    basic:
      username: someuser
      password: somepassword
```
Then the corresponding configuration in Prometheus may look like this:
```yaml
scrape_configs:
  - job_name: 'promregator'
    basic_auth:
      username: someuser
      password: somepassword
```
By default, logging is set to only emit messages of major severity, so that Promregator can run out of the box. Therefore, only messages of level "Warning" or higher are written. Promregator uses logback as its logging framework. It is aware of the following levels:
Level | Meaning | Written by default settings | May contain secret data |
---|---|---|---|
ERROR | Something fatal has happened, which does not permit Promregator to go on as expected | Yes | No |
WARN | A situation occurred which most likely is not expected and thus may hint at some other mistake (e.g. a wrong configuration setting) | Yes | No |
INFO | Documents typical and usual results of operations; the main flow of logic can be seen in the logs | No | No (1) |
DEBUG | Additionally provides internal state information to allow finding bugs | No | No (1) |
TRACE | Very detailed logging providing detailed internal state information (currently not used) | No | Yes |
As the "Trace" level may expose internal secrets (such as passwords, credentials, or similar), it is not recommended to post such logs in GitHub issue reports without scanning them manually beforehand.
(1) Please note that higher levels (especially "Info" and "Debug") may also contain references to URLs and hostnames which might be internal to your network. If this is relevant in your case, you should also check the content of these log records before posting the log to GitHub.
You may change the log level by setting the Spring configuration variable `logging.level.org.cloudfoundry.promregator` to the corresponding log level mentioned in the table above. You may do so, for example, by specifying the variable in your `promregator.yml` file like this:
```yaml
[...]
logging:
  level:
    org:
      cloudfoundry:
        promregator: INFO
```
Alternatively, you may also provide a Java system variable via the command line like this:
```bash
java -Dlogging.level.org.cloudfoundry.promregator=INFO -jar promregator-x.y.z-SNAPSHOT.jar
```
or, if you are running the docker container, you may do it like this:
```bash
docker run -d \
  --env JAVA_OPTS=-Dlogging.level.org.cloudfoundry.promregator=INFO \
  -v /path/to/your/own/promregator.yaml:/etc/promregator/promregator.yml \
  promregator/promregator:<version>
```
If you want your logs formatted as JSON, remember that Promregator uses logback (and not log4j or log4j2). Starting with Promregator version 0.6.2, the necessary `logback-contrib` packages are shipped with Promregator, so that a (sample) `logback.xml` configuration file in the classpath with the following content does the trick:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="json" class="ch.qos.logback.core.ConsoleAppender">
    <layout class="ch.qos.logback.contrib.json.classic.JsonLayout">
      <jsonFormatter class="ch.qos.logback.contrib.jackson.JacksonJsonFormatter">
        <prettyPrint>true</prettyPrint>
      </jsonFormatter>
      <timestampFormat>yyyy-MM-dd' 'HH:mm:ss.SSS</timestampFormat>
    </layout>
  </appender>

  <include resource="org/springframework/boot/logging/logback/base.xml" />

  <logger name="jsonLogger" level="INFO">
    <appender-ref ref="json" />
  </logger>
</configuration>
```
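With the `JsonLayout` above, each log record is emitted as a pretty-printed JSON object on the console. A hypothetical record might look roughly like this; the exact field set depends on the `logback-contrib` version and is not guaranteed:

```json
{
  "timestamp" : "2023-01-01 12:00:00.000",
  "level" : "INFO",
  "thread" : "main",
  "logger" : "org.cloudfoundry.promregator.PromregatorApplication",
  "message" : "Started PromregatorApplication"
}
```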