feat: Confluent Cloud OTel metrics (#457)
* feat:  confluent cloud example

* fix: added more details to readme

* fix: misspelling

* chore: various updates

* Update other-examples/collector/confluentcloud/README.md

Co-authored-by: jack-berg <[email protected]>

* chore: updates

---------

Co-authored-by: jack-berg <[email protected]>
jcountsNR and jack-berg authored Oct 18, 2023
1 parent 83410ce commit 25b828e
Showing 6 changed files with 115 additions and 0 deletions.
1 change: 1 addition & 0 deletions other-examples/collector/confluentcloud/.gitignore
@@ -0,0 +1 @@
*.env
39 changes: 39 additions & 0 deletions other-examples/collector/confluentcloud/README.md
@@ -0,0 +1,39 @@
# Confluent Cloud OpenTelemetry metrics example setup

This example shows how to run an OpenTelemetry Collector in Docker that scrapes metrics from Confluent Cloud and posts them to the New Relic OTLP endpoint. For more information, please see our [Kafka with Confluent documentation](https://docs.newrelic.com/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/collector/collector-configuration-examples/opentelemetry-collector-kafka-confluentcloud/).

## Prerequisites
1. You must have a Docker daemon running.
2. You must have Docker Compose installed (info: https://docs.docker.com/compose/).
3. You must have a Confluent account and cluster created (free account: https://www.confluent.io/get-started/).
4. For best authentication practice, procure a TLS authentication key from Confluent and place the CA, certificate, and key files (`ca.pem`, `cert.pem`, `key.pem`) in this directory; a quick verification sketch follows this list. (Confluent TLS encryption docs: https://docs.confluent.io/platform/current/kafka/encryption.html)
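If you already have the PEM files from Confluent, a quick sanity check before starting the collector can catch a bad certificate or key early. This is a minimal sketch, assuming the files are named `ca.pem`, `cert.pem`, and `key.pem` as mounted by the `docker-compose.yaml` in this example:

```shell
# Confirm the client certificate parses and is not expired
openssl x509 -in cert.pem -noout -subject -dates

# Confirm the private key parses
openssl pkey -in key.pem -noout

# Confirm the certificate chains to the provided CA (assumes ca.pem issued cert.pem)
openssl verify -CAfile ca.pem cert.pem
```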



To run the example, add the key files, set the environment variables, and run `docker compose up`:

```shell
export NEW_RELIC_API_KEY=<your_api_key>
export NEW_RELIC_OTLP_ENDPOINT=https://otlp.nr-data.net
export CLUSTER_ID=<your_cluster_id>
export CLUSTER_API_KEY=<your_cluster_api_key>
export CLUSTER_API_SECRET=<your_cluster_api_secret>
export CLUSTER_BOOTSTRAP_SERVER=<your_cluster_bootstrap_server>

docker compose up
```
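Once the containers are up, you can confirm that the collector started cleanly and is exporting metrics. A minimal sketch, using the `otel-collector` service name from the `docker-compose.yaml` below:

```shell
# Verify the collector container is running
docker compose ps

# Tail the collector logs; watch for the kafkametrics and prometheus receivers
# starting up and for any TLS or authentication errors
docker compose logs -f otel-collector
```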

## Environment variable information

| Variable | Description | Docs |
| -------- | ----------- | ---- |
| **NEW_RELIC_API_KEY** | New Relic ingest API key | [API key docs](https://docs.newrelic.com/docs/apis/intro-apis/new-relic-api-keys/) |
| **NEW_RELIC_OTLP_ENDPOINT** | New Relic OTLP endpoint, e.g. https://otlp.nr-data.net | [OTLP endpoint config docs](https://docs.newrelic.com/docs/more-integrations/open-source-telemetry-integrations/opentelemetry/get-started/opentelemetry-set-up-your-app/#review-settings) |
| **CLUSTER_ID** | ID of your Confluent Cloud cluster | Available in your cluster settings |
| **CLUSTER_API_KEY** | Resource API key for your Confluent cluster | [Resource API key docs](https://docs.confluent.io/cloud/current/access-management/authenticate/api-keys/api-keys.html#create-a-resource-api-key) |
| **CLUSTER_API_SECRET** | Resource API secret for your Confluent cluster | [Resource API key docs](https://docs.confluent.io/cloud/current/access-management/authenticate/api-keys/api-keys.html#create-a-resource-api-key) |
| **CLUSTER_BOOTSTRAP_SERVER** | Bootstrap server for your cluster | Available in your cluster settings |
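Because this directory's `.gitignore` ignores `*.env`, one option is to keep these values in a local env file instead of exporting them by hand each time. A minimal sketch, assuming a hypothetical file named `confluent.env` that contains the same `KEY=value` pairs listed above:

```shell
# Export every variable defined in confluent.env into the current shell,
# then start the collector. The file itself stays untracked (*.env is gitignored).
set -a
source confluent.env
set +a

docker compose up
```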

55 changes: 55 additions & 0 deletions other-examples/collector/confluentcloud/collector.yaml
@@ -0,0 +1,55 @@
receivers:
  kafkametrics:
    brokers:
      - $CLUSTER_BOOTSTRAP_SERVER
    protocol_version: 2.0.0
    scrapers:
      - brokers
      - topics
      - consumers
    auth:
      tls:
        ca_file: ./etc/otelcol/pem/ca.pem
        cert_file: ./etc/otelcol/pem/cert.pem
        key_file: ./etc/otelcol/pem/key.pem
    collection_interval: 30s

  prometheus:
    config:
      scrape_configs:
        - job_name: "confluent"
          scrape_interval: 60s # Do not go any lower than this or you'll hit rate limits
          static_configs:
            - targets: ["api.telemetry.confluent.cloud"]
          scheme: https
          basic_auth:
            username: $CLUSTER_API_KEY
            password: $CLUSTER_API_SECRET
          metrics_path: /v2/metrics/cloud/export
          params:
            "resource.kafka.id":
              - $CLUSTER_ID

exporters:
  otlp:
    endpoint: $NEW_RELIC_OTLP_ENDPOINT
    headers:
      api-key: $NEW_RELIC_API_KEY

processors:
  batch:
  memory_limiter:
    limit_mib: 400
    spike_limit_mib: 100
    check_interval: 5s

service:
  telemetry:
    logs:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [otlp]
    metrics/kafka:
      receivers: [kafkametrics]
      processors: [batch]
      exporters: [otlp]
20 changes: 20 additions & 0 deletions other-examples/collector/confluentcloud/docker-compose.yaml
@@ -0,0 +1,20 @@
version: "3.6"

services:

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: --config=/etc/otelcol/config.yaml
    volumes:
      - ./collector.yaml:/etc/otelcol/config.yaml
      - ./key.pem:/etc/otelcol/pem/key.pem
      - ./cert.pem:/etc/otelcol/pem/cert.pem
      - ./ca.pem:/etc/otelcol/pem/ca.pem
    environment:
      - NEW_RELIC_OTLP_ENDPOINT
      - NEW_RELIC_API_KEY
      - CLUSTER_BOOTSTRAP_SERVER
      - CLUSTER_API_KEY
      - CLUSTER_API_SECRET
      - CLUSTER_ID

File renamed without changes.
