From de8f8bfc35a0870c3ed3dfc3bee73e89b64576df Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E2=80=9Cnico-shishkin=E2=80=9D?= <“nicoshishkinatlogz@outlook.com”>
Date: Sun, 12 Nov 2023 19:21:09 +0100
Subject: [PATCH] DOC-558

---
 docs/shipping/Access-Management/okta.md       |  4 ++++
 docs/shipping/Azure/azure-activity-logs.md    |  3 +++
 docs/shipping/Azure/azure-blob-trigger.md     |  4 ++++
 docs/shipping/Azure/azure-diagnostic-logs.md  |  4 ++++
 docs/shipping/Azure/azure-graph.md            |  4 ++++
 docs/shipping/Code/dotnet.md                  | 23 ++++++++++++++++---
 docs/shipping/Code/go.md                      |  9 ++++++++
 docs/shipping/Code/java.md                    | 15 +++++++++++-
 docs/shipping/Code/node-js.md                 | 13 +++++++++++
 docs/shipping/Code/python.md                  |  4 ++++
 docs/shipping/Containers/docker.md            |  4 ++++
 docs/shipping/Containers/openshift.md         |  4 ++++
 docs/shipping/Database/mysql.md               |  4 ++++
 docs/shipping/GCP/gcp-stackdriver.md          |  3 +++
 docs/shipping/Other/fluent-bit.md             |  6 +++--
 docs/shipping/Other/fluentd.md                |  5 ++++
 docs/shipping/Other/heroku.md                 |  4 +++-
 docs/shipping/Other/microsoft-graph.md        |  4 +++-
 .../Other/salesforce-commerce-cloud.md        |  4 +++-
 docs/shipping/Other/salesforce.md             |  4 +++-
 docs/shipping/Security/carbon-black.md        |  4 +++-
 docs/shipping/Security/cisco-securex.md       |  4 +++-
 docs/shipping/Security/x509.md                |  4 ++++
 .../api-status-metrics.md                     |  4 ++++
 .../Synthetic-Monitoring/ping-statistics.md   |  4 ++++
 .../synthetic-link-detector.md                |  4 ++++
 .../notification-endpoints/terraform.md       |  4 ++++
 27 files changed, 141 insertions(+), 12 deletions(-)

diff --git a/docs/shipping/Access-Management/okta.md b/docs/shipping/Access-Management/okta.md
index 3eac801a..84d350d5 100644
--- a/docs/shipping/Access-Management/okta.md
+++ b/docs/shipping/Access-Management/okta.md
@@ -26,6 +26,10 @@ You can send logs from multiple Okta tenants and any Okta domain.
 If you want to ship from multiple Okta tenants over the same Docker container, you'll need to use the latest configuration, which requires a tenants-credentials.yml file. Otherwise, you can continue using the previous configuration without a tenants-credentials.yml.
 :::
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-okta/)
+:::
+
 **Before you begin, you'll need**:
 
 * Okta administrator privileges
diff --git a/docs/shipping/Azure/azure-activity-logs.md b/docs/shipping/Azure/azure-activity-logs.md
index 07c8c450..f54aa294 100644
--- a/docs/shipping/Azure/azure-activity-logs.md
+++ b/docs/shipping/Azure/azure-activity-logs.md
@@ -20,6 +20,9 @@ At the end of this process, you'll have configured an event hub namespace, an ev
 The resources set up by the automated deployment can collect data for a single Azure region.
+:::note +[Project's GitHub repo](https://github.com/logzio/logzio-azure-serverless/) +::: ### Overview of the services you'll be setting up in your Azure account diff --git a/docs/shipping/Azure/azure-blob-trigger.md b/docs/shipping/Azure/azure-blob-trigger.md index 8c1331a5..571b9891 100644 --- a/docs/shipping/Azure/azure-blob-trigger.md +++ b/docs/shipping/Azure/azure-blob-trigger.md @@ -28,6 +28,10 @@ The following resources are needed for this integration: ![Integration-architecture](https://dytvr9ot2sszz.cloudfront.net/logz-docs/azure_blob/blob-trigger-resources.png) +:::note +[Project's GitHub repo](https://github.com/logzio/logzio-azure-blob-trigger/) +::: + ### Supported data types This Logz.io function supports the following data types: diff --git a/docs/shipping/Azure/azure-diagnostic-logs.md b/docs/shipping/Azure/azure-diagnostic-logs.md index 3ea051d8..d6346188 100644 --- a/docs/shipping/Azure/azure-diagnostic-logs.md +++ b/docs/shipping/Azure/azure-diagnostic-logs.md @@ -21,6 +21,10 @@ At the end of this process, your Azure function will forward logs from an Azure ![Overview of Azure Diagnostic Logz.io integration](https://dytvr9ot2sszz.cloudfront.net/logz-docs/log-shipping/azure-diagnostic-logs-overview.png) +:::note +[Project's GitHub repo](https://github.com/logzio/logzio-azure-serverless/) +::: + ### Overview of the services you'll be setting up in your Azure account The automated deployment sets up a new Event Hub namespace and all the components you'll need to collect logs in one Azure region. diff --git a/docs/shipping/Azure/azure-graph.md b/docs/shipping/Azure/azure-graph.md index 21077324..86a5761a 100644 --- a/docs/shipping/Azure/azure-graph.md +++ b/docs/shipping/Azure/azure-graph.md @@ -24,6 +24,10 @@ Logzio-api-fetcher supports many API endpoints, including but not limited to: * Azure Active Directory sign-in logs There are many other APIs available through Microsoft Graph. + +:::note +[Project's GitHub repo](https://github.com/logzio/logzio-api-fetcher/) +::: ## Register a new app in Azure Active Directory diff --git a/docs/shipping/Code/dotnet.md b/docs/shipping/Code/dotnet.md index 7182dd40..38fd5fe4 100644 --- a/docs/shipping/Code/dotnet.md +++ b/docs/shipping/Code/dotnet.md @@ -29,6 +29,10 @@ import TabItem from '@theme/TabItem'; * .NET Core SDK version 2.0 or higher * .NET Framework version 4.6.1 or higher +:::note +[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/) +::: + #### Add the dependency to your project @@ -385,6 +389,9 @@ namespace LogzioLog4NetSampleApplication * .NET Core SDK version 2.0 or higher * .NET Framework version 4.6.1 or higher +:::note +[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/) +::: #### Add the dependency to your project @@ -685,7 +692,9 @@ namespace LogzioNLogSampleApplication * .NET Core SDK version 2.0 or higher * .NET Framework version 4.6.1 or higher - +:::note +[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/) +::: #### Add the dependency to your project @@ -1027,6 +1036,9 @@ This integration is based on [Serilog.Sinks.Logz.Io repository](https://github.c * .NET Core SDK version 2.0 or higher * .NET Framework version 4.6.1 or higher +:::note +[Project's GitHub repo](https://github.com/logzio/logzio-dotnet/) +::: #### Install the Logz.io Serilog sink @@ -1214,6 +1226,10 @@ Replace `<` with the type that you want to assign to your logs. You will u Helm is a tool for managing packages of pre-configured Kubernetes resources using Charts. 
 This integration allows you to collect and ship diagnostic metrics of your .NET application in Kubernetes to Logz.io, using dotnet-monitor and OpenTelemetry. logzio-dotnet-monitor runs as a sidecar in the same pod as the .NET application.
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-helm/)
+:::
+
 ###### Sending metrics from nodes with taints
 
 If you want to ship metrics from any of the nodes that have a taint, make sure that the taint key values are listed in your daemonset/deployment configuration as follows:
@@ -1356,8 +1372,9 @@ These instructions show you how to:
 
 * Add advanced settings to the basic custom metrics export configuration
 
-
-
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-app-metrics/)
+:::
 
 #### Send custom metrics to Logz.io with a hardcoded Logz.io exporter
diff --git a/docs/shipping/Code/go.md b/docs/shipping/Code/go.md
index 8b1ea460..47b6c5e2 100644
--- a/docs/shipping/Code/go.md
+++ b/docs/shipping/Code/go.md
@@ -20,6 +20,10 @@ If your code is running inside Kubernetes the best practice will be to use our [
 
 ## Logs
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-go/)
+:::
+
 This shipper uses **goleveldb** and **goqueue** as a persistent storage implementation of a persistent queue, so the shipper backs up your logs to the local file system before sending them. Logs are queued in the buffer and shipping is 100% non-blocking. A background Go routine ships the logs every 5 seconds.
@@ -102,6 +106,11 @@ l.Stop() // Drains the log buffer
 ```
 
 ## Metrics
+
+:::note
+[Project's GitHub repo](https://github.com/logzio/go-metrics-sdk/)
+:::
+
 ### Install the SDK
 
 Run the following command:
diff --git a/docs/shipping/Code/java.md b/docs/shipping/Code/java.md
index 391d9b49..302087fd 100644
--- a/docs/shipping/Code/java.md
+++ b/docs/shipping/Code/java.md
@@ -18,7 +18,8 @@ drop_filter: []
 If your code runs within Kubernetes, it's best practice to use our Kubernetes integration to collect various telemetry types.
 :::
 
-## Logs
+## Logs
+
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
 
 
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-log4j2-appender/)
+:::
+
 The Logz.io Log4j 2 appender sends logs using non-blocking threading, bulks, and HTTPS encryption to port 8071. This appender uses LogzioSender.
@@ -259,6 +264,10 @@ public class LogzioLog4j2Example {
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-logback-appender/)
+:::
+
 Logback sends logs to your Logz.io account using non-blocking threading, bulks, and HTTPS encryption to port 8071. This appender uses BigQueue implementation of persistent queue, so all logs are backed up to a local file system before being sent.
@@ -509,6 +518,10 @@ If the log appender does not ship logs, add `true
 
 ## Metrics
+
+:::note
+[Project's GitHub repo](https://github.com/logzio/micrometer-registry-logzio/)
+:::
 
 ### Usage
diff --git a/docs/shipping/Code/node-js.md b/docs/shipping/Code/node-js.md
index 31b98533..4c7b2ff6 100644
--- a/docs/shipping/Code/node-js.md
+++ b/docs/shipping/Code/node-js.md
@@ -22,6 +22,10 @@ import TabItem from '@theme/TabItem';
 
 
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-nodejs/)
+:::
+
 logzio-nodejs collects log messages in an array, which is sent asynchronously when it reaches its size limit or time limit (100 messages or 10 seconds), whichever comes first.
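 A minimal usage sketch, assuming the standard `createLogger` options documented in the project's README; the token placeholder and listener host below are illustrative, so use the values for your own account and region:

```javascript
// Minimal sketch; token placeholder and listener host are illustrative.
var logzioLogger = require('logzio-nodejs').createLogger({
    token: '<<LOG-SHIPPING-TOKEN>>', // your Logz.io log shipping token
    protocol: 'https',
    host: 'listener.logz.io',        // match your account region
    port: '8071',
    type: 'nodejs'
});

// Strings and objects are both accepted; messages accumulate in the
// internal array and ship when the size or time limit is reached.
logzioLogger.log('This is a test message');
logzioLogger.log({ message: 'objects work too', module: 'demo' });

// Flush any queued messages before the process exits.
logzioLogger.sendAndClose();
```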
 The shipper also includes a simple retry mechanism: on connection reset or client timeout, it retries sending the waiting bulk (every 2 seconds by default).
@@ -122,6 +126,10 @@ logger.log(obj);
 
 
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/winston-logzio/)
+:::
+
 This winston plugin is a wrapper for the logzio-nodejs appender. With winston-logzio, you can take advantage of the winston logger framework in your Node.js app.
@@ -380,6 +388,11 @@ Deploy this integration to send custom metrics from your Node.js application to
 The provided example uses the [OpenTelemetry JS SDK](https://github.com/open-telemetry/opentelemetry-js) and is based on [OpenTelemetry exporter collector proto](https://github.com/open-telemetry/opentelemetry-js/tree/main/packages/opentelemetry-exporter-collector-proto).
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/js-metrics/)
+:::
+
+
 **Before you begin, you'll need**:
 
 Node 8 or higher
diff --git a/docs/shipping/Code/python.md b/docs/shipping/Code/python.md
index 2bfdd26b..956f2919 100644
--- a/docs/shipping/Code/python.md
+++ b/docs/shipping/Code/python.md
@@ -16,6 +16,10 @@ drop_filter: []
 
 ## Logs
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-python-handler/)
+:::
+
 Logz.io Python Handler sends logs in bulk over HTTPS to Logz.io. Logs are grouped into bulks based on their size.
diff --git a/docs/shipping/Containers/docker.md b/docs/shipping/Containers/docker.md
index 972ea247..d1b6b31b 100644
--- a/docs/shipping/Containers/docker.md
+++ b/docs/shipping/Containers/docker.md
@@ -24,6 +24,10 @@ from other Docker containers and forward them to your Logz.io account.
 To use docker-collector-logs, you'll set environment variables when you run the container. The Docker logs directory and docker.sock are mounted to the container, allowing Filebeat to collect the logs and metadata.
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/docker-collector-logs/)
+:::
+
 ##### Upgrading to a newer version
diff --git a/docs/shipping/Containers/openshift.md b/docs/shipping/Containers/openshift.md
index baccb478..4964ccb6 100644
--- a/docs/shipping/Containers/openshift.md
+++ b/docs/shipping/Containers/openshift.md
@@ -19,6 +19,10 @@ drop_filter: []
 OpenShift is a family of containerization software products developed by Red Hat. Deploy this integration to ship logs from your OpenShift cluster to Logz.io. This integration will deploy the default daemonset, which sends only container logs while ignoring all containers in the "openshift" namespace.
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-openshift/)
+:::
+
 **Before you begin, you'll need**:
 
 * A working OpenShift cluster
diff --git a/docs/shipping/Database/mysql.md b/docs/shipping/Database/mysql.md
index eece57d9..ec0aaeaa 100644
--- a/docs/shipping/Database/mysql.md
+++ b/docs/shipping/Database/mysql.md
@@ -17,6 +17,10 @@ drop_filter: []
 
 ## Logs
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-mysql-logs/)
+:::
+
 ### Default configuration
 
 **Before you begin, you'll need**:
diff --git a/docs/shipping/GCP/gcp-stackdriver.md b/docs/shipping/GCP/gcp-stackdriver.md
index 51b3600f..bc368c37 100644
--- a/docs/shipping/GCP/gcp-stackdriver.md
+++ b/docs/shipping/GCP/gcp-stackdriver.md
@@ -18,6 +18,9 @@ drop_filter: []
 Google Cloud Platform (GCP) Stackdriver collects logs from your cloud services.
 You can use Google Cloud Pub/Sub to forward your logs from GCP sinks to Logz.io.
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-pubsub/)
+:::
 
 **Before you begin, you'll need**:
diff --git a/docs/shipping/Other/fluent-bit.md b/docs/shipping/Other/fluent-bit.md
index efae8a77..356c7354 100644
--- a/docs/shipping/Other/fluent-bit.md
+++ b/docs/shipping/Other/fluent-bit.md
@@ -14,13 +14,15 @@ metrics_alerts: []
 drop_filter: []
 ---
 
-
+
 ## Run Fluent Bit as a standalone app
 
 Fluent Bit is an open source log processor and forwarder that allows you to collect data such as metrics and logs from different sources. This integration allows you to send logs from Fluent Bit running as a standalone app and forward them to your Logz.io account.
 
-
+:::note
+[Project's GitHub repo](https://github.com/logzio/fluent-bit-logzio-output/)
+:::
 
 ### Install Fluent Bit
diff --git a/docs/shipping/Other/fluentd.md b/docs/shipping/Other/fluentd.md
index d7c0d07a..ad053c9c 100644
--- a/docs/shipping/Other/fluentd.md
+++ b/docs/shipping/Other/fluentd.md
@@ -21,6 +21,7 @@ Fluentd is a data collector, which unifies the data collection and consumption.
 Fluentd will fetch all existing logs, as it is not able to ignore older logs.
 :::
 
+
 ## Configure Fluentd with Ruby Gems
 
 **Before you begin, you'll need**:
@@ -290,6 +291,10 @@ This integration includes:
 
 Upon deployment, each container on your host system, including the Fluentd container, writes logs to a dedicated log file. Fluentd fetches the log data from this file and ships the data over HTTP or HTTPS to your Logz.io account, either via an optional proxy server or directly.
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/fluentd-docker-logs/)
+:::
+
 **Before you begin, you'll need**:
 
 Docker installed on your host system
diff --git a/docs/shipping/Other/heroku.md b/docs/shipping/Other/heroku.md
index bb4c1558..0cdfbcbd 100644
--- a/docs/shipping/Other/heroku.md
+++ b/docs/shipping/Other/heroku.md
@@ -71,7 +71,9 @@ If you still don't see your logs, see [log shipping troubleshooting]({{site.base
 
 * [Heroku CLI](https://devcenter.heroku.com/articles/heroku-cli#download-and-install)
 
-
+:::note
+[Project's GitHub repo](https://github.com/logzio/heroku-buildpack-telegraf/)
+:::
 
 :::note
 All commands in these instructions should be run from your Heroku app directory.
diff --git a/docs/shipping/Other/microsoft-graph.md b/docs/shipping/Other/microsoft-graph.md
index bd019240..d9bdb9cc 100644
--- a/docs/shipping/Other/microsoft-graph.md
+++ b/docs/shipping/Other/microsoft-graph.md
@@ -18,7 +18,9 @@ drop_filter: []
 Microsoft Graph is a RESTful web API that enables you to access Microsoft Cloud service resources. This integration allows you to collect data from the Microsoft Graph API and send it to your Logz.io account.
 
-
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-api-fetcher/)
+:::
 
diff --git a/docs/shipping/Other/salesforce-commerce-cloud.md b/docs/shipping/Other/salesforce-commerce-cloud.md
index 1041b2b1..0d71b3aa 100644
--- a/docs/shipping/Other/salesforce-commerce-cloud.md
+++ b/docs/shipping/Other/salesforce-commerce-cloud.md
@@ -21,7 +21,9 @@ Salesforce Commerce Cloud is a scalable, cloud-based software-as-a-service (SaaS
 The default configuration uses a Docker container with environment variables.
 
-
+:::note
+[Project's GitHub repo](https://github.com/logzio/sfcc-webdav-fetcher/)
+:::
 
 ##### Pull the Docker image of the Logz.io Salesforce Commerce Cloud data fetcher
diff --git a/docs/shipping/Other/salesforce.md b/docs/shipping/Other/salesforce.md
index 74f83967..bb57a342 100644
--- a/docs/shipping/Other/salesforce.md
+++ b/docs/shipping/Other/salesforce.md
@@ -20,7 +20,9 @@ drop_filter: []
 Salesforce is a customer relationship management solution. The Account sObject is an abstraction of the account record and holds the account field information in memory as an object. This integration allows you to collect sObject data from Salesforce and send it to your Logz.io account.
 
-
+:::note
+[Project's GitHub repo](https://github.com/logzio/salesforce-logs-receiver/)
+:::
 
 ##### Pull the Docker image of the Logz.io API fetcher
diff --git a/docs/shipping/Security/carbon-black.md b/docs/shipping/Security/carbon-black.md
index 2a1dec89..f39dc79c 100644
--- a/docs/shipping/Security/carbon-black.md
+++ b/docs/shipping/Security/carbon-black.md
@@ -19,7 +19,9 @@ drop_filter: []
 With this integration, you can collect logs from Carbon Black and forward them to Logz.io.
 
-
+:::note
+[Project's GitHub repo](https://github.com/logzio/s3-hook/)
+:::
 
 ### Set Carbon Black Event Forwarder
diff --git a/docs/shipping/Security/cisco-securex.md b/docs/shipping/Security/cisco-securex.md
index 69f44bc0..8f014dd7 100644
--- a/docs/shipping/Security/cisco-securex.md
+++ b/docs/shipping/Security/cisco-securex.md
@@ -19,7 +19,9 @@ drop_filter: []
 Cisco SecureX connects the breadth of Cisco's integrated security portfolio and your infrastructure. This integration allows you to collect data from the Cisco SecureX API and send it to your Logz.io account.
 
-
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-api-fetcher/)
+:::
 
diff --git a/docs/shipping/Security/x509.md b/docs/shipping/Security/x509.md
index 4860777b..20b5588e 100644
--- a/docs/shipping/Security/x509.md
+++ b/docs/shipping/Security/x509.md
@@ -21,6 +21,10 @@ Deploy this integration to collect X509 certificate metrics from URLs and send t
 * x509_start_date (in seconds passed since 1.1.1970)
 * x509_end_date (in seconds passed since 1.1.1970)
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/x509-certificate-metrics-lambda/)
+:::
+
 ## Collect certificate metrics using AWS Lambda
diff --git a/docs/shipping/Synthetic-Monitoring/api-status-metrics.md b/docs/shipping/Synthetic-Monitoring/api-status-metrics.md
index 4f82785b..849a1998 100644
--- a/docs/shipping/Synthetic-Monitoring/api-status-metrics.md
+++ b/docs/shipping/Synthetic-Monitoring/api-status-metrics.md
@@ -17,6 +17,10 @@ drop_filter: []
 
 Deploy this integration to collect API status metrics of your API and send them to Logz.io.
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-api-status/)
+:::
+
 The integration is based on a Lambda function that will be auto-deployed together with the layer [LogzioLambdaExtensionLogs](https://github.com/logzio/logzio-lambda-extensions/tree/main/logzio-lambda-extensions-logs).
diff --git a/docs/shipping/Synthetic-Monitoring/ping-statistics.md b/docs/shipping/Synthetic-Monitoring/ping-statistics.md
index 38a4cc0e..d4532e67 100644
--- a/docs/shipping/Synthetic-Monitoring/ping-statistics.md
+++ b/docs/shipping/Synthetic-Monitoring/ping-statistics.md
@@ -19,6 +19,10 @@ drop_filter: []
 
 Deploy this integration to collect ping statistics metrics from your preferred web addresses and send them to Logz.io.
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio-ping-statistics/)
+:::
+
 The integration is based on a Lambda function that will be auto-deployed together with the layer [LogzioLambdaExtensionLogs](https://github.com/logzio/logzio-lambda-extensions/tree/main/logzio-lambda-extensions-logs).
diff --git a/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md b/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md
index d1cdea3a..9c8ed6fb 100644
--- a/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md
+++ b/docs/shipping/Synthetic-Monitoring/synthetic-link-detector.md
@@ -16,6 +16,10 @@ drop_filter: []
 
 Deploy this integration to collect data on broken links in a web page and to get additional data about those links.
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/synthetic-link-detector/)
+:::
+
 
diff --git a/docs/user-guide/integrations/notification-endpoints/terraform.md b/docs/user-guide/integrations/notification-endpoints/terraform.md
index 6f285c12..25e8b1f7 100644
--- a/docs/user-guide/integrations/notification-endpoints/terraform.md
+++ b/docs/user-guide/integrations/notification-endpoints/terraform.md
@@ -4,6 +4,10 @@ sidebar_position: 7
 
 # Terraform Logz.io Provider
 
+:::note
+[Project's GitHub repo](https://github.com/logzio/logzio_terraform_provider/)
+:::
+
 The Terraform Logz.io provider offers a convenient way to build integrations using Logz.io APIs. Terraform is an infrastructure orchestrator written in HashiCorp Configuration Language (HCL). It is a popular Infrastructure-as-Code (IaC) tool that eliminates manual configuration processes. You can take advantage of the Terraform Logz.io provider to streamline the process of integrating observability into your dev workflows.
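 As a quick orientation, a minimal provider configuration might look like the sketch below. The `logzio/logzio` source address and the `api_token`/`region` arguments reflect the provider's public documentation; treat them as assumptions and consult the project's GitHub repo above for the current schema:

```hcl
terraform {
  required_providers {
    logzio = {
      source = "logzio/logzio" # assumed Terraform Registry address
    }
  }
}

# Keep the API token out of source control by passing it as a variable.
variable "logzio_api_token" {
  type      = string
  sensitive = true
}

# Configure the provider with your Logz.io API token and account region.
provider "logzio" {
  api_token = var.logzio_api_token
  region    = "us" # match your Logz.io account region, e.g. "us" or "eu"
}
```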