Merge branch 'master' into otelsdks
nico-shishkin authored Aug 22, 2024
2 parents bbb263e + fde15e3 commit 5c45c1a
Showing 18 changed files with 438 additions and 234 deletions.
114 changes: 56 additions & 58 deletions docs/shipping/App360/App360.md
Original file line number Diff line number Diff line change
@@ -19,6 +19,8 @@ drop_filter: []
[App360](https://docs.logz.io/docs/user-guide/distributed-tracing/spm/) is a high-level monitoring dashboard within Logz.io that enables you to monitor your operations. This integration allows you to configure the OpenTelemetry collector to send data from your OpenTelemetry installation to Logz.io using App360.




## Architecture overview

This integration is based on OpenTelemetry and involves configuring the OpenTelemetry collector to receive data generated by your application instrumentation and send it to Logz.io using App360.
@@ -32,7 +34,9 @@ This integration uses OpenTelemetry Collector Contrib, not the OpenTelemetry Col



## Set up your locally hosted OpenTelemetry installation to send App360 data to Logz.io


## Set up your locally hosted OpenTelemetry

**Before you begin, you'll need**:

@@ -49,14 +53,14 @@ You can either download the OpenTelemetry collector to your local host or run th

##### Download locally

Create a dedicated directory on the host of your application and download the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector/releases/tag/cmd%2Fbuilder%2Fv0.73.0) that is relevant to the operating system of your host.
Create a dedicated directory on the host of your application and download the [OpenTelemetry collector](https://github.com/open-telemetry/opentelemetry-collector/releases) that is relevant to the operating system of your host.

##### Run as a Docker container

In the same Docker network as your application:

```shell
docker pull otel/opentelemetry-collector-contrib:0.73.0
docker pull otel/opentelemetry-collector-contrib:0.105.0
```

:::note
@@ -80,33 +84,33 @@ connectors:
spanmetrics:
aggregation_temporality: AGGREGATION_TEMPORALITY_CUMULATIVE
dimensions:
- name: rpc.grpc.status_code
- name: http.method
- name: http.status_code
- name: cloud.provider
- name: cloud.region
- name: db.system
- name: messaging.system
- default: DEV
name: env_id
- name: rpc.grpc.status_code
- name: http.method
- name: http.status_code
- name: cloud.provider
- name: cloud.region
- name: db.system
- name: messaging.system
- default: DEV
name: env_id
dimensions_cache_size: 100000
histogram:
explicit:
buckets:
- 2ms
- 8ms
- 50ms
- 100ms
- 200ms
- 500ms
- 1s
- 5s
- 10s
- 2ms
- 8ms
- 50ms
- 100ms
- 200ms
- 500ms
- 1s
- 5s
- 10s
metrics_expiration: 5m
resource_metrics_key_attributes:
- service.name
- telemetry.sdk.language
- telemetry.sdk.name
- service.name
- telemetry.sdk.language
- telemetry.sdk.name

exporters:
logzio/traces:
@@ -122,23 +126,15 @@ processors:
batch:
tail_sampling:
policies:
[
{
name: policy-errors,
type: status_code,
status_code: {status_codes: [ERROR]}
},
{
name: policy-slow,
type: latency,
latency: {threshold_ms: 1000}
},
{
name: policy-random-ok,
type: probabilistic,
probabilistic: {sampling_percentage: 10}
}
]
- name: policy-errors
type: status_code
status_code: {status_codes: [ERROR]}
- name: policy-slow
type: latency
latency: {threshold_ms: 1000}
- name: policy-random-ok
type: probabilistic
probabilistic: {sampling_percentage: 10}
metricstransform/metrics-rename:
transforms:
- include: ^duration.*$$
@@ -150,20 +146,20 @@ processors:
new_name: calls_total
metricstransform/labels-rename:
transforms:
- include: ^latency
action: update
match_type: regexp
operations:
- action: update_label
label: span.name
new_label: operation
- include: ^calls
action: update
match_type: regexp
operations:
- action: update_label
label: span.name
new_label: operation
- include: ^latency
action: update
match_type: regexp
operations:
- action: update_label
label: span.name
new_label: operation
- include: ^calls
action: update
match_type: regexp
operations:
- action: update_label
label: span.name
new_label: operation

extensions:
pprof:
@@ -186,6 +182,7 @@ service:
telemetry:
logs:
level: "debug"

```

{@include: ../../_include/tracing-shipping/replace-tracing-token.html}
@@ -194,7 +191,8 @@

{@include: ../../_include/log-shipping/listener-var.html}

By default, this configuration collects all traces that have a span that was completed with an error, all traces that are slower than 1000 ms, and 10% of the rest of the traces.

By default, this configuration collects all traces that have a span completed with an error, all traces that are slower than 1000 ms, and 10% of the remaining traces.

You can add more policy configurations to the processor. For more on this, refer to [OpenTelemetry Documentation](https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/processor/tailsamplingprocessor/README.md).
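For instance, an additional policy could keep every trace whose spans carry a particular attribute value. The sketch below is an illustration only — the policy name, the `env_id` key, and the `PROD` value are assumptions, while `string_attribute` is one of the policy types documented in the tail sampling processor's README:

```yaml
processors:
  tail_sampling:
    policies:
      # Hypothetical extra policy: sample all traces whose spans carry
      # the attribute env_id=PROD, in addition to the policies above.
      - name: policy-env-prod
        type: string_attribute
        string_attribute: {key: env_id, values: [PROD]}
```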

@@ -236,7 +234,7 @@ Mount the `config.yaml` as volume to the `docker run` command and run it as foll
docker run \
--network host \
-v <PATH-TO>/config.yaml:/etc/otelcol-contrib/config.yaml \
otel/opentelemetry-collector-contrib:0.73.0
otel/opentelemetry-collector-contrib:0.105.0
```

@@ -257,7 +255,7 @@ docker run \
-p 14268:14268 \
-p 4317:4317 \
-p 55681:55681 \
otel/opentelemetry-collector-contrib:0.73.0
otel/opentelemetry-collector-contrib:0.105.0
```


7 changes: 6 additions & 1 deletion docs/shipping/Code/dotnet.md
@@ -13,6 +13,8 @@ logs2metrics: []
metrics_dashboards: ['3lGo7AE5839jDfkAYU8r21']
metrics_alerts: ['1ALFpmGPygXKWi18TDoO5C']
drop_filter: []
toc_min_heading_level: 2
toc_max_heading_level: 3
---


@@ -27,7 +29,8 @@ import TabItem from '@theme/TabItem';
[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/)
:::

<Tabs>

<Tabs queryString="current-lib">
<TabItem value="log4net" label="log4net" default>

**Before you begin, you'll need**:
@@ -1182,6 +1185,8 @@ Replace `<<TYPE>>` with the log type to identify these logs in Logz.io.
</TabItem>
<TabItem value="OpenTelemetry" label="OpenTelemetry">



### Prerequisites

Ensure that you have the following installed locally:
4 changes: 2 additions & 2 deletions docs/shipping/Code/json.md → docs/shipping/Code/http.md
@@ -1,6 +1,6 @@
---
id: JSON
title: JSON
id: HTTP
title: HTTP
overview: Ship logs from your code directly to the Logz.io listener as a minified JavaScript Object Notation (JSON) file, a standard text-based format for representing structured data based on JavaScript object syntax.
product: ['logs']
os: ['windows', 'linux']
2 changes: 2 additions & 0 deletions docs/shipping/Code/java.md
@@ -13,6 +13,8 @@ logs2metrics: []
metrics_dashboards: []
metrics_alerts: []
drop_filter: []
toc_min_heading_level: 2
toc_max_heading_level: 3
---

:::tip
2 changes: 1 addition & 1 deletion docs/shipping/Code/node-js.md
@@ -598,7 +598,7 @@ const meter = new MeterProvider.MeterProvider({

### Add required metrics to the code

Yu can use the following metrics:
You can use the following metrics:

| Name | Behavior |
| ---- | ---------- |
96 changes: 94 additions & 2 deletions docs/shipping/Network/cloudflare.md
@@ -17,6 +17,13 @@ drop_filter: []

The Cloudflare web application firewall (WAF) protects your internet property against malicious attacks that aim to exploit vulnerabilities such as SQL injection, cross-site scripting, and cross-site request forgery.

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs>
<TabItem value="use-s3" label="Send logs using S3" default>


For an overview of Cloudflare logs, and the related S3 and Logpush configuration procedures, click [here](https://developers.cloudflare.com/logs/).


@@ -35,8 +42,6 @@ Before you begin, ensure that you have:
+ [Enabled the Cloudflare Logpush service](https://developers.cloudflare.com/logs/get-started/logpush-dashboard) for the assets you want to monitor in Cloudflare, via **Analytics > Logs > Connect a service**.




##### Configure Logpush to send logs to the S3 bucket

To configure Logpush to stream logs of Cloudflare's datasets to your cloud service in batches, follow the [Cloudflare procedure](https://developers.cloudflare.com/logs/get-started/enable-destinations/aws-s3/) to enable the Logpush service to access Amazon S3. <!-- deprecated link (https://developers.cloudflare.com/logs/logpush/aws-s3 -->
@@ -48,6 +53,93 @@ For an overview of the Logpush service, [click here](https://developers.cloudfla
Use [our procedure](https://docs.logz.io/docs/shipping/aws/aws-s3-bucket/#configure-logzio-to-fetch-logs-from-an-s3-bucket) to configure Logz.io to fetch logs from your S3 bucket.


</TabItem>
<TabItem value="use-cf-api" label="Send logs using Cloudflare API">

You can send available logs from the Cloudflare API with the Logz.io API fetcher.

## Pull Docker Image
Download the logzio-api-fetcher image:

```shell
docker pull logzio/logzio-api-fetcher
```

## Configuration
Create a local config file `config.yaml`.

```yaml
apis:
- name: cloudflare example
type: cloudflare
cloudflare_account_id: <<CLOUDFLARE_ACCOUNT_ID>>
cloudflare_bearer_token: <<CLOUDFLARE_BEARER_TOKEN>>
url: https://api.cloudflare.com/client/v4/accounts/{account_id}/alerting/v3/history
next_url: https://api.cloudflare.com/client/v4/accounts/{account_id}/alerting/v3/history?since={res.result.[0].sent}
days_back_fetch: 7
additional_fields:
type: cloudflare

logzio:
url: https://<<LISTENER-HOST>>:8071
token: <<LOGZIO_SHIPPING_TOKEN>>
```
:::note
You can customize the endpoints to collect data from by adding extra API configurations under `apis`.
:::
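As an illustration, a second entry could be appended under `apis` to poll the same Cloudflare alert-history endpoint on a different schedule — the entry name, `days_back_fetch`, and `scrape_interval` values below are hypothetical, not taken from this guide:

```yaml
apis:
  - name: cloudflare example
    type: cloudflare
    cloudflare_account_id: <<CLOUDFLARE_ACCOUNT_ID>>
    cloudflare_bearer_token: <<CLOUDFLARE_BEARER_TOKEN>>
    url: https://api.cloudflare.com/client/v4/accounts/{account_id}/alerting/v3/history
    next_url: https://api.cloudflare.com/client/v4/accounts/{account_id}/alerting/v3/history?since={res.result.[0].sent}
    days_back_fetch: 7
    additional_fields:
      type: cloudflare
  # Hypothetical second entry: same endpoint, shallower backfill,
  # polled every 5 minutes instead of the default 1.
  - name: cloudflare alert history (frequent)
    type: cloudflare
    cloudflare_account_id: <<CLOUDFLARE_ACCOUNT_ID>>
    cloudflare_bearer_token: <<CLOUDFLARE_BEARER_TOKEN>>
    url: https://api.cloudflare.com/client/v4/accounts/{account_id}/alerting/v3/history
    days_back_fetch: 1
    scrape_interval: 5
    additional_fields:
      type: cloudflare
```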

### Cloudflare configuration options
| Parameter Name | Description | Required/Optional | Default |
|-------------------------|---------------------------------------------------------------------------------------------------------------------------------------------|-------------------|-------------------|
| name | Name of the API (custom name) | Optional | the defined `url` |
| cloudflare_account_id   | The Cloudflare account ID                                                                                                                     | Required          | -                 |
| cloudflare_bearer_token | The Cloudflare Bearer token | Required | - |
| url | The request URL | Required | - |
| next_url                | Updates the URL in subsequent requests based on the last response. Supports variables.                                                        | Optional          | -                 |
| additional_fields       | Additional custom fields to add to the logs before sending them to Logz.io                                                                    | Optional          | -                 |
| days_back_fetch         | The number of days back to fetch in the first request. Applies a filter on the `since` parameter.                                             | Optional          | -                 |
| scrape_interval         | Time interval to wait between runs (unit: `minutes`)                                                                                          | Optional          | 1 (minute)        |
| pagination_off          | Set to `True` to disable built-in pagination                                                                                                  | Optional          | `False`           |

### Logz.io output configuration options
| Parameter Name | Description | Required/Optional | Default |
|----------------|-----------------------------|-------------------|---------------------------------|
| url            | The Logz.io listener address (you can find the relevant `<<LISTENER-HOST>>` [here](https://app.logz.io/#/dashboard/settings/manage-tokens/data-shipping?product=logs)) | Optional | `https://listener.logz.io:8071` |
| token          | The Logz.io shipping token  | Required          | -                               |


## Run the Docker Container
In the path where you saved your `config.yaml`, run:
```shell
docker run --name logzio-api-fetcher \
-v "$(pwd)":/app/src/shared \
logzio/logzio-api-fetcher
```

:::note
To run in debug mode, add the `--level` flag to the command:
```shell
docker run --name logzio-api-fetcher \
-v "$(pwd)":/app/src/shared \
logzio/logzio-api-fetcher \
--level DEBUG
```
Available options: `INFO`, `WARN`, `ERROR`, `DEBUG`
:::

### Stopping the container
To make sure the container finishes its current iteration before stopping, give it a grace period of 30 seconds when you run the `docker stop` command:

```shell
docker stop -t 30 logzio-api-fetcher
```


</TabItem>
</Tabs>


##### Check Logz.io for your logs

Give your Cloudflare data some time to get from your system to ours, and then open [OpenSearch Dashboards](https://app.logz.io/#/dashboard/osd).
