Additional receiver or scrape in V2 version. #955
So, you're pretty close. Adding the Faro receiver works in the alloy-receiver's `extraConfig` section. As for where to point the output of that component, the easiest option is to enable the `applicationObservability` feature and use the components it creates:

```yaml
applicationObservability:
  enabled: true
  receivers:
    otlp:
      grpc:
        enabled: true
...
alloy-receiver:
  enabled: true
  controller:
    podAnnotations: {kubernetes.azure.com/set-kube-service-host-fqdn: "true"}
  extraConfig: |-
    faro.receiver "integrations_app_agent_receiver" {
      server {
        listen_address           = "0.0.0.0"
        listen_port              = 8027
        cors_allowed_origins     = ["*"]
        max_allowed_payload_size = "10MiB"

        rate_limiting {
          rate = 100
        }
      }

      output {
        logs   = [otelcol.processor.k8sattributes.default.input]
        traces = [otelcol.processor.k8sattributes.default.input]
      }
    }
  alloy:
    extraPorts:
      - name: otlp-grpc
        port: 4317
        targetPort: 4317
        protocol: TCP
      - name: faro
        port: 8027
        targetPort: 8027
        protocol: TCP
```

As for your Azure metrics, putting it in the singleton makes sense.
Here's a full values file. You'll need to merge in any other changes and destinations. Replace "my-destination" with the name of your destination that can handle metrics:

```yaml
cluster:
  name: cxk314-cluster

destinations:
  - name: my-destination
    type: otlp
    host: otlp-gateway.example.com
    metrics: {enabled: true}
    logs: {enabled: true}
    traces: {enabled: true}

applicationObservability:
  enabled: true
  receivers:
    otlp:
      grpc:
        enabled: true

alloy-receiver:
  enabled: true
  controller:
    podAnnotations: {kubernetes.azure.com/set-kube-service-host-fqdn: "true"}
  extraConfig: |-
    faro.receiver "integrations_app_agent_receiver" {
      server {
        listen_address           = "0.0.0.0"
        listen_port              = 8027
        cors_allowed_origins     = ["*"]
        max_allowed_payload_size = "10MiB"

        rate_limiting {
          rate = 100
        }
      }

      output {
        logs   = [otelcol.processor.k8sattributes.default.input]
        traces = [otelcol.processor.k8sattributes.default.input]
      }
    }
  alloy:
    extraPorts:
      - name: otlp-grpc
        port: 4317
        targetPort: 4317
        protocol: TCP
      - name: faro
        port: 8027
        targetPort: 8027
        protocol: TCP

alloy-singleton:
  enabled: true
  controller:
    podAnnotations: {kubernetes.azure.com/set-kube-service-host-fqdn: "true"}
  extraConfig: |-
    prometheus.exporter.azure "eventhub_azure_exporter" {
      subscriptions = ["my_subscription_id"]
      resource_type = "microsoft.eventhub/namespaces"
      metrics       = ["ServerErrors", "UserErrors", "QuotaExceededErrors", "ThrottledRequests", "IncomingMessages"]
    }

    prometheus.scrape "eventhub_azure_exporter" {
      targets    = prometheus.exporter.azure.eventhub_azure_exporter.targets
      job_name   = "integrations/azure_exporter"
      forward_to = [otelcol.receiver.prometheus.my-destination.receiver]
    }
```
Thanks @petewall. I'm not sure I'm following what you said with this statement: "Also, regarding …"
So, what I meant by my comment is that the config generator tries to only add destination components when there are features enabled that will use those destinations. There's no sense in putting components for a Pyroscope destination on an Alloy instance that isn't handling profiles, for example. Unfortunately, this intelligent placement doesn't work if the only thing going onto an Alloy instance is the …
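If you hit that situation (a custom scrape on an instance where no metrics-using feature is enabled, so no destination components are generated there), one workaround is to define the delivery component yourself inside the same `extraConfig`. A minimal sketch, assuming a Prometheus-compatible remote-write endpoint at a hypothetical URL and reusing the Azure exporter from above:

```alloy
// Sketch: hand-rolled metrics delivery for when the chart has not
// generated destination components on this Alloy instance.
// The remote-write URL is a placeholder assumption.
prometheus.remote_write "custom_metrics" {
  endpoint {
    url = "https://prometheus.example.com/api/v1/write"
  }
}

prometheus.scrape "eventhub_azure_exporter" {
  targets    = prometheus.exporter.azure.eventhub_azure_exporter.targets
  job_name   = "integrations/azure_exporter"
  forward_to = [prometheus.remote_write.custom_metrics.receiver]
}
```

The trade-off is that this bypasses the chart's generated destinations, so credentials and TLS settings for the endpoint have to be configured by hand in the `prometheus.remote_write` block.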
I'm curious about the certificate error you're seeing. The self-reporting feature merely creates a small set of static metrics and tries to deliver them to the same location as any metric destination. It does not go directly to Grafana.
Oh, are you referring to:

```yaml
alloy-logs:
  alloy:
    enableReporting: true|false
```

Yeah, you can disable that safely. I'm referring to:

```yaml
selfReporting:
  enabled: true|false
```
We are getting the below for 2.0.0-rc.5 and 2.0.0-rc.6.

When:

then all Alloy instances have that certificate error. We have to set it to

We are also getting this error. Is the alloy-logs/otel destination not converting logs to gRPC? We have one destination defined for all telemetry. Our endpoint expects gRPC for everything.

Metrics are working fine, but logs have this error:
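For the gRPC question, one thing worth checking is the destination's protocol setting. A hedged sketch, assuming the chart's `otlp` destination type accepts a `protocol` field (verify the field name against the destination documentation for your chart version):

```yaml
destinations:
  - name: my-destination
    type: otlp
    protocol: grpc      # assumption: pins delivery to gRPC for all signals
    host: otlp-gateway.example.com
    metrics: {enabled: true}
    logs: {enabled: true}
    traces: {enabled: true}
```

If the protocol is left to default per signal, an endpoint that only speaks gRPC could reject log deliveries while metrics still succeed.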
Hmm... is there anything interesting about your cluster setup that would prevent the certificates? Perhaps an AKS thing? In the meantime, just add this to all of your Alloy instances:

```yaml
alloy-logs:
  alloy:
    enableReporting: false
```

That should silence the certificate errors. As for:

```yaml
selfReporting:
  enabled: true
```

That should only be injecting a small handful of metrics. It does not deliver those metrics anywhere other than the metric-capable destinations you define.
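Taken together, a sketch of disabling both reporting mechanisms chart-wide. The `alloy-*` instance names here are examples; repeat the `enableReporting` override under each Alloy instance actually enabled in your values:

```yaml
# Sketch: turn off the chart's self-reported metrics and each Alloy
# instance's usage reporting (which is what produced the certificate errors).
selfReporting:
  enabled: false
alloy-logs:
  alloy:
    enableReporting: false
alloy-receiver:
  alloy:
    enableReporting: false
alloy-singleton:
  alloy:
    enableReporting: false
```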
Hi,

I am trying to add a Faro receiver using the k8s-monitoring-helm V2 chart. What is the recommended way to do this? Where should it go? I tried the following to add it in the alloy-receiver block of values.yaml, but I'm not sure it is the right way:

And for an additional scrape, where should I add that? I need to get metrics from Azure Monitor for Azure Event Hub. Should I add it in alloy-singleton like this?

More examples of how to customize the V2 Helm chart would be really helpful, like adding more receivers, scrapes, or discovery rules.