From e032f4789b4dd43c9249c77872d49580f847c120 Mon Sep 17 00:00:00 2001 From: Juha Mynttinen Date: Wed, 20 Dec 2023 08:17:52 +0200 Subject: [PATCH 01/10] Kafka datadog: clarify Kafka metrics vs Kafka consumer metrics Split the chapter on customising metrics into two - one for the main Kafka integration and the other for the Consumer Integration. Add an example of how to use exclude_topics. The issue was that the documentation failed to mention that the four keys include_topics, exclude_topics, include_consumer_groups and exclude_consumer_groups only affect the consumer integration, and that kafka_custom_metrics applies only to the Kafka integration. --- .../howto/datadog-customised-metrics.rst | 30 +++++++++++++++---- 1 file changed, 24 insertions(+), 6 deletions(-) diff --git a/docs/products/kafka/howto/datadog-customised-metrics.rst b/docs/products/kafka/howto/datadog-customised-metrics.rst index fdde87aad3..ff397c015e 100644 --- a/docs/products/kafka/howto/datadog-customised-metrics.rst +++ b/docs/products/kafka/howto/datadog-customised-metrics.rst @@ -42,9 +42,30 @@ Before customising the metrics, make sure that you have a Datadog endpoint confi To customise the metrics sent to Datadog, you can use the ``service integration-update`` passing the following customised parameters: * ``kafka_custom_metrics``: defining the comma separated list of custom metrics to include (within ``kafka.log.log_size``, ``kafka.log.log_start_offset`` and ``kafka.log.log_end_offset``) + +As example to sent the ``kafka.log.log_size`` and ``kafka.log.log_end_offset`` metrics execute the following code: + +.. code:: + + avn service integration-update \ + -c kafka_custom_metrics=['kafka.log.log_size','kafka.log.log_end_offset'] \ + INTEGRATION_ID + +Once the update is successful and metrics have been collected and pushed, you should see them in your Datadog explorer. + +.. seealso:: Learn more about :doc:`/docs/integrations/datadog`. 
+ + +Customise Apache Kafka® Consumer Integration metrics sent to Datadog +==================================================================== + +`Kafka Consumer Integration `_ collects metrics for message offsets. + +To customise the metrics sent from this Datadog integration to Datadog, you can use the ``service integration-update`` passing the following customised parameters: + * ``include_topics``: defining the comma separated list of topics to include -.. Tip:: +.. Tip:: By default, all topics are included. @@ -52,16 +73,13 @@ To customise the metrics sent to Datadog, you can use the ``service integration- * ``include_consumer_groups``: defining the comma separated list of consumer groups to include * ``exclude_consumer_groups``: defining the comma separated list of consumer groups to include - -As example to sent the ``kafka.log.log_size`` and ``kafka.log.log_end_offset`` metrics for ``topic1`` and ``topic2`` execute the following code: +As example to include topics ``topic1`` and ``topic2`` and exclude topic ``topic3`` execute the following code: .. code:: avn service integration-update \ - -c kafka_custom_metrics=['kafka.log.log_size','kafka.log.log_end_offset'] \ -c include_topics=['topic1','topic2'] \ + -c exclude_topics=['topic3'] \ INTEGRATION_ID Once the update is successful and metrics have been collected and pushed, you should see them in your Datadog explorer. - -.. seealso:: Learn more about :doc:`/docs/integrations/datadog`. 
\ No newline at end of file From 3133a24fc257a4d5ba77560216f05a6f6d892b73 Mon Sep 17 00:00:00 2001 From: Harshini Rangaswamy Date: Wed, 20 Dec 2023 14:52:49 +0100 Subject: [PATCH 02/10] Updated content clarity and consistency --- .../howto/datadog-customised-metrics.rst | 57 +++++++++---------- 1 file changed, 28 insertions(+), 29 deletions(-) diff --git a/docs/products/kafka/howto/datadog-customised-metrics.rst b/docs/products/kafka/howto/datadog-customised-metrics.rst index ff397c015e..17c1d3bae9 100644 --- a/docs/products/kafka/howto/datadog-customised-metrics.rst +++ b/docs/products/kafka/howto/datadog-customised-metrics.rst @@ -1,17 +1,17 @@ Configure Apache Kafka® metrics sent to Datadog =============================================== -When creating a :doc:`Datadog service integration `, you can customise which metrics are sent to the Datadog endpoint using the :doc:`Aiven CLI `. +When creating a `Datadog service integration `_, customize which metrics are sent to the Datadog endpoint using the `Aiven CLI `_. -For each Apache Kafka® topic and partition, the following metrics are currently supported: +The following metrics are currently supported for each topic and partition in Apache Kafka®: * ``kafka.log.log_size`` * ``kafka.log.log_start_offset`` * ``kafka.log.log_end_offset`` -.. Tip:: +.. note:: - All the above metrics are tagged with ``topic`` and ``partition`` allowing you to monitor each topic and partition independently. + All metrics are tagged with ``topic`` and ``partition``, enabling independent monitoring of each ``topic`` and ``partition``. 
Variables --------- @@ -23,27 +23,27 @@ Variable Description ================== ============================================================================ ``SERVICE_NAME`` Aiven for Apache Kafka® service name ------------------ ---------------------------------------------------------------------------- -``INTEGRATION_ID`` ID of the integration between the Aiven for Apache Kafka service and Datadog +``INTEGRATION_ID`` ID of the integration between Aiven for Apache Kafka service and Datadog ================== ============================================================================ -.. Tip:: - The ``INTEGRATION_ID`` parameter can be found by issuing: - - .. code:: +You can find the ``INTEGRATION_ID`` parameter by executing this command: + +.. code:: - avn service integration-list SERVICE_NAME + avn service integration-list SERVICE_NAME + +Customize Apache Kafka® metrics for Datadog +---------------------------------------------------- -Customise Apache Kafka® metrics sent to Datadog ------------------------------------------------ +Before customizing metrics, ensure a Datadog endpoint is configured and enabled in your Aiven for Apache Kafka service. For setup instructions, see `Send metrics to Datadog `_. Format any listed parameters as a comma-separated list: ``['value0', 'value1', 'value2', ...]``. -Before customising the metrics, make sure that you have a Datadog endpoint configured and enabled in your Aiven for Apache Kafka service. For details on how to set up the Datadog integration, check the :doc:`dedicated article `. Please note that in all the below parameters a 'comma separated list' has the following format: ``['value0','value1','value2','...']``. 
-To customise the metrics sent to Datadog, you can use the ``service integration-update`` passing the following customised parameters: +To customize the metrics sent to Datadog, you can use the ``service integration-update`` passing the following customized parameters: -* ``kafka_custom_metrics``: defining the comma separated list of custom metrics to include (within ``kafka.log.log_size``, ``kafka.log.log_start_offset`` and ``kafka.log.log_end_offset``) +* ``kafka_custom_metrics``: defining the comma-separated list of custom metrics to include (within ``kafka.log.log_size``, ``kafka.log.log_start_offset`` and ``kafka.log.log_end_offset``) -As example to sent the ``kafka.log.log_size`` and ``kafka.log.log_end_offset`` metrics execute the following code: +For example, to send the ``kafka.log.log_size`` and ``kafka.log.log_end_offset`` metrics, execute the following code: .. code:: @@ -51,29 +51,28 @@ As example to sent the ``kafka.log.log_size`` and ``kafka.log.log_end_offset`` m -c kafka_custom_metrics=['kafka.log.log_size','kafka.log.log_end_offset'] \ INTEGRATION_ID -Once the update is successful and metrics have been collected and pushed, you should see them in your Datadog explorer. -.. seealso:: Learn more about :doc:`/docs/integrations/datadog`. +After you successfully update and the metrics are collected and sent to Datadog, you can view them in your Datadog explorer. +.. seealso:: Learn more about :doc:`Datadog and Aiven `. -Customise Apache Kafka® Consumer Integration metrics sent to Datadog -==================================================================== -`Kafka Consumer Integration `_ collects metrics for message offsets. 
+Customize Apache Kafka® consumer metrics for Datadog +----------------------------------------------------- -To customise the metrics sent from this Datadog integration to Datadog, you can use the ``service integration-update`` passing the following customised parameters: +`Kafka Consumer Integration `_ collects metrics for message offsets. To customize the metrics sent from this Datadog integration to Datadog, you can use the ``service integration-update`` passing the following customized parameters: -* ``include_topics``: defining the comma separated list of topics to include +* ``include_topics``: Specify a comma-separated list of topics to include. -.. Tip:: + .. Note:: By default, all topics are included. -* ``exclude_topics``: defining the comma separated list of topics to exclude -* ``include_consumer_groups``: defining the comma separated list of consumer groups to include -* ``exclude_consumer_groups``: defining the comma separated list of consumer groups to include +* ``exclude_topics``: Specify a comma-separated list of topics to exclude. +* ``include_consumer_groups``: Specify a comma-separated list of consumer groups to include. +* ``exclude_consumer_groups``: Specify a comma-separated list of consumer groups to exclude. -As example to include topics ``topic1`` and ``topic2`` and exclude topic ``topic3`` execute the following code: +For example, to include topics ``topic1`` and ``topic2``, and exclude ``topic3``, execute the following code: .. code:: @@ -82,4 +81,4 @@ As example to include topics ``topic1`` and ``topic2`` and exclude topic ``topic -c exclude_topics=['topic3'] \ INTEGRATION_ID -Once the update is successful and metrics have been collected and pushed, you should see them in your Datadog explorer. +After you successfully update and the metrics are collected and sent to Datadog, you can view them in your Datadog explorer. 
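The bracketed list values in the ``avn`` examples above (``-c include_topics=['topic1','topic2']``) contain quotes and brackets that most shells treat specially. A minimal sketch of building the command programmatically so those characters reach ``avn`` intact; ``integration_update_cmd`` is a hypothetical helper for illustration, not part of the Aiven CLI:

```python
import shlex

def integration_update_cmd(integration_id, **params):
    """Build an ``avn service integration-update`` argv list.

    Each keyword argument becomes a -c key=['v1','v2',...] option in the
    comma-separated-list format the documentation describes.
    """
    argv = ["avn", "service", "integration-update"]
    for key, values in params.items():
        # Join values as 'v1','v2' and wrap them in brackets.
        joined = ",".join(f"'{v}'" for v in values)
        argv += ["-c", f"{key}=[{joined}]"]
    argv.append(integration_id)
    return argv

cmd = integration_update_cmd(
    "INTEGRATION_ID",  # placeholder, as in the docs
    include_topics=["topic1", "topic2"],
    exclude_topics=["topic3"],
)
# Passing this list to subprocess.run(cmd) sidesteps shell quoting
# entirely; shlex.join() prints the equivalent shell-safe command line.
print(shlex.join(cmd))
```

Running the command for real requires an Aiven project and a valid integration ID, so the sketch only prints the command line.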
From eaba0cd7f470a12b3b295de35f2a4a95b06736d7 Mon Sep 17 00:00:00 2001 From: Harshini Rangaswamy Date: Wed, 20 Dec 2023 14:58:17 +0100 Subject: [PATCH 03/10] Fixed link issues --- docs/products/kafka/howto/datadog-customised-metrics.rst | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/products/kafka/howto/datadog-customised-metrics.rst b/docs/products/kafka/howto/datadog-customised-metrics.rst index 17c1d3bae9..d89c0b936e 100644 --- a/docs/products/kafka/howto/datadog-customised-metrics.rst +++ b/docs/products/kafka/howto/datadog-customised-metrics.rst @@ -1,7 +1,7 @@ Configure Apache Kafka® metrics sent to Datadog =============================================== -When creating a `Datadog service integration `_, customize which metrics are sent to the Datadog endpoint using the `Aiven CLI `_. +When creating a `Datadog service integration `_, customize which metrics are sent to the Datadog endpoint using the :doc:`Aiven CLI `. The following metrics are currently supported for each topic and partition in Apache Kafka®: @@ -36,7 +36,7 @@ You can find the ``INTEGRATION_ID`` parameter by executing this command: Customize Apache Kafka® metrics for Datadog ---------------------------------------------------- -Before customizing metrics, ensure a Datadog endpoint is configured and enabled in your Aiven for Apache Kafka service. For setup instructions, see `Send metrics to Datadog `_. Format any listed parameters as a comma-separated list: ``['value0', 'value1', 'value2', ...]``. +Before customizing metrics, ensure a Datadog endpoint is configured and enabled in your Aiven for Apache Kafka service. For setup instructions, see :doc:`Send metrics to Datadog `. Format any listed parameters as a comma-separated list: ``['value0', 'value1', 'value2', ...]``. 
To customize the metrics sent to Datadog, you can use the ``service integration-update`` passing the following customized parameters: From cda9adeae5295baeeca9d6581213d5b65b6da60a Mon Sep 17 00:00:00 2001 From: Dorota Wojcik Date: Tue, 9 Jan 2024 13:50:49 +0100 Subject: [PATCH 04/10] add oracle regions --- includes/clouds-list.rst | 40 +++++++++++++++++++++++++++++++++++++++- 1 file changed, 39 insertions(+), 1 deletion(-) diff --git a/includes/clouds-list.rst b/includes/clouds-list.rst index 3f5a526022..de79f89df1 100644 --- a/includes/clouds-list.rst +++ b/includes/clouds-list.rst @@ -418,4 +418,42 @@ UpCloud - United States, New York: New York * - North America - ``upcloud-us-sjo`` - - United States, California: San Jose \ No newline at end of file + - United States, California: San Jose + +Oracle Cloud Infrastructure +----------------------------------------------------- + +.. important:: + + Oracle Cloud Infrastructure (OCI) is supported on the Aiven platfrom as a :doc:`limited availability feature `. If you're interested in trying it out, contact the sales team at sales@Aiven.io. + +.. 
list-table:: + :header-rows: 1 + + * - Region + - Cloud + - Description + * - Europe + - ``eu-frankfurt-1`` + - Germany, Germany Central: Frankfurt + * - Asia-Pacific + - ``ap-mumbai-1`` + - India, India West: Mumbai + * - Middle East + - ``me-dubai-1`` + - UAE, UAE East: Dubai + * - South America + - ``sa-saopaulo-1`` + - Brazil, Brazil East: São Paulo + * - Europe + - ``uk-london-1`` + - United Kingdom, UK South: London + * - North America + - ``us-ashburn-1`` + - US East, Virginia: Ashburn + * - Asia-Pacific + - ``ap-sydney-1`` + - Australia, Australia East: Sydney + * - North America + - ``us-phoenix-1`` + - US West, Arizona: Phoenix \ No newline at end of file From fda79873f2a95884527020099d229f5ed0d90780 Mon Sep 17 00:00:00 2001 From: Dorota Wojcik Date: Tue, 9 Jan 2024 13:57:58 +0100 Subject: [PATCH 05/10] reorder --- includes/clouds-list.rst | 26 +++++++++++++------------- 1 file changed, 13 insertions(+), 13 deletions(-) diff --git a/includes/clouds-list.rst b/includes/clouds-list.rst index de79f89df1..c511f76efe 100644 --- a/includes/clouds-list.rst +++ b/includes/clouds-list.rst @@ -433,27 +433,27 @@ Oracle Cloud Infrastructure * - Region - Cloud - Description - * - Europe - - ``eu-frankfurt-1`` - - Germany, Germany Central: Frankfurt * - Asia-Pacific - ``ap-mumbai-1`` - India, India West: Mumbai - * - Middle East - - ``me-dubai-1`` - - UAE, UAE East: Dubai - * - South America - - ``sa-saopaulo-1`` - - Brazil, Brazil East: São Paulo + * - Asia-Pacific + - ``ap-sydney-1`` + - Australia, Australia East: Sydney + * - Europe + - ``eu-frankfurt-1`` + - Germany, Germany Central: Frankfurt * - Europe - ``uk-london-1`` - United Kingdom, UK South: London + * - Middle East + - ``me-dubai-1`` + - UAE, UAE East: Dubai * - North America - ``us-ashburn-1`` - US East, Virginia: Ashburn - * - Asia-Pacific - - ``ap-sydney-1`` - - Australia, Australia East: Sydney * - North America - ``us-phoenix-1`` - - US West, Arizona: Phoenix \ No newline at end of file + - US West, 
Arizona: Phoenix + * - South America + - ``sa-saopaulo-1`` + - Brazil, Brazil East: São Paulo \ No newline at end of file From 34a54827c5d1956d21f3cf92d3b44de2acf29ea8 Mon Sep 17 00:00:00 2001 From: Dorota Wojcik Date: Tue, 9 Jan 2024 14:15:47 +0100 Subject: [PATCH 06/10] fix --- includes/clouds-list.rst | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/includes/clouds-list.rst b/includes/clouds-list.rst index c511f76efe..3e300b5c33 100644 --- a/includes/clouds-list.rst +++ b/includes/clouds-list.rst @@ -425,7 +425,7 @@ Oracle Cloud Infrastructure .. important:: - Oracle Cloud Infrastructure (OCI) is supported on the Aiven platfrom as a :doc:`limited availability feature `. If you're interested in trying it out, contact the sales team at sales@Aiven.io. + Oracle Cloud Infrastructure (OCI) is supported on the Aiven platform as a :doc:`limited availability feature `. If you're interested in trying it out, contact the sales team at sales@aiven.io. .. list-table:: :header-rows: 1 From 4cf0dde7f298dbd17c50f9751fcd9ef3d4ed69de Mon Sep 17 00:00:00 2001 From: Arthur Date: Wed, 10 Jan 2024 18:19:33 +0100 Subject: [PATCH 07/10] harmonize file name for get started pages (#2417) --- _redirects | 10 ++++++++++ _toc.yml | 20 +++++++++---------- docs/products/clickhouse.rst | 2 +- .../{getting-started.rst => get-started.rst} | 0 .../clickhouse/howto/load-dataset.rst | 2 +- docs/products/flink.rst | 2 +- .../{getting-started.rst => get-started.rst} | 0 .../flink/howto/flink-confluent-avro.rst | 2 +- docs/products/kafka.rst | 2 +- .../schema-registry-authorization.rst | 2 +- .../{getting-started.rst => get-started.rst} | 0 docs/products/kafka/howto/enable-oidc.rst | 2 +- .../howto/enabled-consumer-lag-predictor.rst | 2 +- .../products/kafka/howto/fake-sample-data.rst | 2 +- docs/products/kafka/howto/kafka-conduktor.rst | 2 +- docs/products/kafka/howto/kafka-klaw.rst | 2 +- docs/products/kafka/kafka-connect.rst | 2 +- .../{getting-started.rst => get-started.rst} | 0 
docs/products/kafka/kafka-mirrormaker.rst | 2 +- .../{getting-started.rst => get-started.rst} | 0 .../howto/setup-replication-flow.rst | 2 +- docs/products/kafka/karapace.rst | 2 +- .../karapace/concepts/acl-definition.rst | 2 +- .../{getting-started.rst => get-started.rst} | 0 .../enable-oauth-oidc-kafka-rest-proxy.rst | 2 +- docs/products/m3db.rst | 2 +- .../{getting-started.rst => get-started.rst} | 0 docs/products/opensearch.rst | 2 +- .../concepts/opensearch-vs-elasticsearch.rst | 2 +- docs/products/opensearch/dashboards.rst | 2 +- .../{getting-started.rst => get-started.rst} | 2 +- .../{getting-started.rst => get-started.rst} | 0 .../opensearch-aggregations-and-nodejs.rst | 2 +- .../howto/opensearch-and-nodejs.rst | 2 +- docs/products/postgresql.rst | 2 +- .../{getting-started.rst => get-started.rst} | 0 .../tools/terraform/howto/vpc-peering-aws.rst | 2 +- index.rst | 2 +- 38 files changed, 47 insertions(+), 37 deletions(-) rename docs/products/clickhouse/{getting-started.rst => get-started.rst} (100%) rename docs/products/flink/{getting-started.rst => get-started.rst} (100%) rename docs/products/kafka/{getting-started.rst => get-started.rst} (100%) rename docs/products/kafka/kafka-connect/{getting-started.rst => get-started.rst} (100%) rename docs/products/kafka/kafka-mirrormaker/{getting-started.rst => get-started.rst} (100%) rename docs/products/kafka/karapace/{getting-started.rst => get-started.rst} (100%) rename docs/products/m3db/{getting-started.rst => get-started.rst} (100%) rename docs/products/opensearch/dashboards/{getting-started.rst => get-started.rst} (90%) rename docs/products/opensearch/{getting-started.rst => get-started.rst} (100%) rename docs/products/postgresql/{getting-started.rst => get-started.rst} (100%) diff --git a/_redirects b/_redirects index f4a334f006..0854f9a152 100644 --- a/_redirects +++ b/_redirects @@ -92,6 +92,16 @@ /docs/tools/cli/account/account-authentication-method /docs/tools/cli/account /docs/tools/cli/card 
/docs/tools/cli/account /docs/tools/api/examples /docs/tools/api +/docs/products/postgresql/getting-started /docs/products/postgresql/get-started +/docs/products/m3db/getting-started /docs/products/m3db/get-started +/docs/products/flink/getting-started /docs/products/flink/get-started +/docs/products/kafka/getting-started /docs/products/kafka/get-started +/docs/products/clickhouse/getting-started /docs/products/clickhouse/get-started +/docs/products/opensearch/getting-started /docs/products/opensearch/get-started +/docs/products/kafka/karapace/getting-started /docs/products/kafka/karapace/get-started +/docs/products/kafka/kafka-connect/getting-started /docs/products/kafka/kafka-connect/get-started +/docs/products/opensearch/dashboards/getting-started /docs/products/opensearch/dashboards/get-started +/docs/products/kafka/kafka-mirrormaker/getting-started /docs/products/kafka/kafka-mirrormaker/get-started # Redirect from .index.html to specific page names for landing diff --git a/_toc.yml b/_toc.yml index b2fba7fea1..2c59e12d8f 100644 --- a/_toc.yml +++ b/_toc.yml @@ -313,7 +313,7 @@ entries: - file: docs/products/kafka title: Apache Kafka entries: - - file: docs/products/kafka/getting-started + - file: docs/products/kafka/get-started title: Get started - file: docs/products/kafka/howto/fake-sample-data title: Sample data generator @@ -448,7 +448,7 @@ entries: - file: docs/products/kafka/kafka-connect title: Apache Kafka Connect entries: - - file: docs/products/kafka/kafka-connect/getting-started + - file: docs/products/kafka/kafka-connect/get-started - file: docs/products/kafka/kafka-connect/concepts entries: - file: docs/products/kafka/kafka-connect/concepts/list-of-connector-plugins @@ -559,7 +559,7 @@ entries: - file: docs/products/kafka/kafka-mirrormaker title: Apache Kafka MirrorMaker2 entries: - - file: docs/products/kafka/kafka-mirrormaker/getting-started + - file: docs/products/kafka/kafka-mirrormaker/get-started - file: 
docs/products/kafka/kafka-mirrormaker/concepts entries: - file: docs/products/kafka/kafka-mirrormaker/concepts/disaster-recovery-migration @@ -586,7 +586,7 @@ entries: - file: docs/products/kafka/karapace title: Karapace entries: - - file: docs/products/kafka/karapace/getting-started + - file: docs/products/kafka/karapace/get-started - file: docs/products/kafka/karapace/concepts title: Concepts entries: @@ -620,7 +620,7 @@ entries: title: Plans and pricing - file: docs/products/flink/reference/flink-limitations title: Limitations - - file: docs/products/flink/getting-started + - file: docs/products/flink/get-started title: Quickstart - file: docs/products/flink/concepts title: Concepts @@ -766,7 +766,7 @@ entries: title: Plans and pricing - file: docs/products/clickhouse/reference/limitations title: Limits and limitations - - file: docs/products/clickhouse/getting-started + - file: docs/products/clickhouse/get-started title: Quickstart - file: docs/products/clickhouse/concepts title: Concepts @@ -984,7 +984,7 @@ entries: - file: docs/products/m3db title: M3DB entries: - - file: docs/products/m3db/getting-started + - file: docs/products/m3db/get-started title: Get started - file: docs/products/m3db/concepts title: Concepts @@ -1082,7 +1082,7 @@ entries: - file: docs/products/opensearch title: OpenSearch entries: - - file: docs/products/opensearch/getting-started + - file: docs/products/opensearch/get-started title: Quickstart entries: - file: docs/products/opensearch/howto/sample-dataset @@ -1181,7 +1181,7 @@ entries: - file: docs/products/opensearch/dashboards title: OpenSearch Dashboards entries: - - file: docs/products/opensearch/dashboards/getting-started + - file: docs/products/opensearch/dashboards/get-started - file: docs/products/opensearch/dashboards/howto title: HowTo entries: @@ -1208,7 +1208,7 @@ entries: entries: - file: docs/products/postgresql/overview title: Overview - - file: docs/products/postgresql/getting-started + - file: 
docs/products/postgresql/get-started title: Quickstart - file: docs/products/postgresql/concepts title: Concepts diff --git a/docs/products/clickhouse.rst b/docs/products/clickhouse.rst index 5667e0b532..cf520b0017 100644 --- a/docs/products/clickhouse.rst +++ b/docs/products/clickhouse.rst @@ -7,7 +7,7 @@ Aiven for ClickHouse® is a fully managed distributed columnar database based on .. grid:: 1 2 2 2 - .. grid-item-card:: :doc:`Quickstart ` + .. grid-item-card:: :doc:`Quickstart ` :shadow: md :margin: 2 2 0 0 diff --git a/docs/products/clickhouse/getting-started.rst b/docs/products/clickhouse/get-started.rst similarity index 100% rename from docs/products/clickhouse/getting-started.rst rename to docs/products/clickhouse/get-started.rst diff --git a/docs/products/clickhouse/howto/load-dataset.rst b/docs/products/clickhouse/howto/load-dataset.rst index 3b34effe14..39b90004a5 100644 --- a/docs/products/clickhouse/howto/load-dataset.rst +++ b/docs/products/clickhouse/howto/load-dataset.rst @@ -29,7 +29,7 @@ Once done, you should have two files available: ``hits_v1.tsv`` and ``visits_v1. Set up the service and database ------------------------------- -If you don't yet have an Aiven for ClickHouse service, follow the steps in our :doc:`getting started guide ` to create one. +If you don't yet have an Aiven for ClickHouse service, follow the steps in our :doc:`getting started guide ` to create one. When you create a service, a default database was already added. However, you can create separate databases specific to your use case. We will create a database with the name ``datasets``, keeping it the same as in the ClickHouse documentation. diff --git a/docs/products/flink.rst b/docs/products/flink.rst index 31d57263e5..abf3fe8366 100644 --- a/docs/products/flink.rst +++ b/docs/products/flink.rst @@ -5,7 +5,7 @@ Aiven for Apache Flink® is a fully managed service that leverages the power of .. grid:: 1 2 2 2 - .. grid-item-card:: :doc:`Quickstart ` + .. 
grid-item-card:: :doc:`Quickstart ` :shadow: md :margin: 2 2 0 0 diff --git a/docs/products/flink/getting-started.rst b/docs/products/flink/get-started.rst similarity index 100% rename from docs/products/flink/getting-started.rst rename to docs/products/flink/get-started.rst diff --git a/docs/products/flink/howto/flink-confluent-avro.rst b/docs/products/flink/howto/flink-confluent-avro.rst index 1e53fd0927..b190132267 100644 --- a/docs/products/flink/howto/flink-confluent-avro.rst +++ b/docs/products/flink/howto/flink-confluent-avro.rst @@ -12,7 +12,7 @@ Prerequisites -------------- * :doc:`Aiven for Apache Flink service ` with Aiven for Apache Kafka® integration. See :doc:`/docs/products/flink/howto/create-integration` for more information. -* Aiven for Apache Kafka® service with Karapace Schema registry enabled. See :doc:`/docs/products/kafka/karapace/getting-started` for more information. +* Aiven for Apache Kafka® service with Karapace Schema registry enabled. See :doc:`/docs/products/kafka/karapace/get-started` for more information. * By default, Flink cannot create Apache Kafka topics while pushing the first record automatically. To change this behavior, enable in the Aiven for Apache Kafka target service the ``kafka.auto_create_topics_enable`` option in **Advanced configuration** section. 
Create an Apache Flink® table with Confluent Avro diff --git a/docs/products/kafka.rst b/docs/products/kafka.rst index b5c581a86c..d3202c99ff 100644 --- a/docs/products/kafka.rst +++ b/docs/products/kafka.rst @@ -27,7 +27,7 @@ Apache Kafka moves data between systems, and Apache Kafka Connect is how to inte Get started with Aiven for Apache Kafka --------------------------------------- -Take your first steps with Aiven for Apache Kafka by following our :doc:`/docs/products/kafka/getting-started` article, or browse through our full list of articles: +Take your first steps with Aiven for Apache Kafka by following our :doc:`/docs/products/kafka/get-started` article, or browse through our full list of articles: .. grid:: 1 2 2 2 diff --git a/docs/products/kafka/concepts/schema-registry-authorization.rst b/docs/products/kafka/concepts/schema-registry-authorization.rst index 0181e73918..3e4f140860 100644 --- a/docs/products/kafka/concepts/schema-registry-authorization.rst +++ b/docs/products/kafka/concepts/schema-registry-authorization.rst @@ -1,6 +1,6 @@ Schema registry authorization ============================= -The schema registry authorization feature when enabled in :doc:`Karapace schema registry ` allows you to authenticate the user, and control read or write access to the individual resources available in the Schema Registry. +The schema registry authorization feature when enabled in :doc:`Karapace schema registry ` allows you to authenticate the user, and control read or write access to the individual resources available in the Schema Registry. For information on schema registry authorization for Aiven for Apache Kafka® services, see :doc:`Karapace schema registry authorization `. 
diff --git a/docs/products/kafka/getting-started.rst b/docs/products/kafka/get-started.rst similarity index 100% rename from docs/products/kafka/getting-started.rst rename to docs/products/kafka/get-started.rst diff --git a/docs/products/kafka/howto/enable-oidc.rst b/docs/products/kafka/howto/enable-oidc.rst index 96350440d6..5294f1a5e2 100644 --- a/docs/products/kafka/howto/enable-oidc.rst +++ b/docs/products/kafka/howto/enable-oidc.rst @@ -10,7 +10,7 @@ Aiven for Apache Kafka integrates with a wide range of OpenID Connect identity p Before proceeding with the setup, ensure you have: -* :doc:`Aiven for Apache Kafka® ` service running. +* :doc:`Aiven for Apache Kafka® ` service running. * **Access to an OIDC provider**: Options include Auth0, Okta, Google Identity Platform, Azure, or any other OIDC compliant provider. * Required configuration details from your OIDC provider: diff --git a/docs/products/kafka/howto/enabled-consumer-lag-predictor.rst b/docs/products/kafka/howto/enabled-consumer-lag-predictor.rst index 0d7bfacfc1..066d450b77 100644 --- a/docs/products/kafka/howto/enabled-consumer-lag-predictor.rst +++ b/docs/products/kafka/howto/enabled-consumer-lag-predictor.rst @@ -13,7 +13,7 @@ Prerequisites Before you start, ensure you have the following: - Aiven account. -- :doc:`Aiven for Apache Kafka® ` service running. +- :doc:`Aiven for Apache Kafka® ` service running. - :doc:`Prometheus integration ` set up for your Aiven for Apache Kafka for extracting metrics. - Necessary permissions to modify service configurations. 
diff --git a/docs/products/kafka/howto/fake-sample-data.rst b/docs/products/kafka/howto/fake-sample-data.rst index ad49b0e1dd..d9ee79684f 100644 --- a/docs/products/kafka/howto/fake-sample-data.rst +++ b/docs/products/kafka/howto/fake-sample-data.rst @@ -7,7 +7,7 @@ Learning to work with streaming data is much more fun with data, so to get you s The following example is based on `Docker `_ images, which require `Docker `_ or `Podman `_ to be executed. -The following example assumes you have an Aiven for Apache Kafka® service running. You can create one following the :doc:`dedicated instructions `. +The following example assumes you have an Aiven for Apache Kafka® service running. You can create one following the :doc:`dedicated instructions `. Fake data generator on Docker diff --git a/docs/products/kafka/howto/kafka-conduktor.rst b/docs/products/kafka/howto/kafka-conduktor.rst index 902c3af577..646985051b 100644 --- a/docs/products/kafka/howto/kafka-conduktor.rst +++ b/docs/products/kafka/howto/kafka-conduktor.rst @@ -3,7 +3,7 @@ Connect to Apache Kafka® with Conduktor `Conduktor `_ is a friendly user interface for Apache Kafka, and it works well with Aiven. In fact, there is built-in support for setting up the connection. You will need to add the CA certificate for each of your Aiven projects to Conduktor before you can connect, this is outlined in the steps below. -1. Visit the **Service overview** page for your Aiven for Apache Kafka® service (the :doc:`/docs/products/kafka/getting-started` page is a good place for more information about creating a new service if you don't have one already). +1. Visit the **Service overview** page for your Aiven for Apache Kafka® service (the :doc:`/docs/products/kafka/get-started` page is a good place for more information about creating a new service if you don't have one already). 2. Download the **Access Key**, **Access Certificate** and **CA Certificate** (if you didn't have that already) into a directory on your computer. 
diff --git a/docs/products/kafka/howto/kafka-klaw.rst b/docs/products/kafka/howto/kafka-klaw.rst index f9a3a90c8c..28fb29879f 100644 --- a/docs/products/kafka/howto/kafka-klaw.rst +++ b/docs/products/kafka/howto/kafka-klaw.rst @@ -9,7 +9,7 @@ Prerequisites ------------- To connect Aiven for Apache Kafka® and Klaw, you need to have the following setup: -* A running Aiven for Apache Kafka® service. See :doc:`Getting started with Aiven for Apache Kafka ` for more information. +* A running Aiven for Apache Kafka® service. See :doc:`Getting started with Aiven for Apache Kafka ` for more information. * A running Klaw cluster. See `Run Klaw from the source `_ for more information. * Configured :doc:`Java keystore and truststore containing the service SSL certificates `. diff --git a/docs/products/kafka/kafka-connect.rst b/docs/products/kafka/kafka-connect.rst index 2b46b20b70..8ff05357d5 100644 --- a/docs/products/kafka/kafka-connect.rst +++ b/docs/products/kafka/kafka-connect.rst @@ -127,7 +127,7 @@ Sink connectors Get started with Aiven for Apache Kafka® Connect ------------------------------------------------ -Take your first steps with Aiven for Apache Kafka Connect by following our :doc:`/docs/products/kafka/kafka-connect/getting-started` article, or browse through our full list of articles: +Take your first steps with Aiven for Apache Kafka Connect by following our :doc:`/docs/products/kafka/kafka-connect/get-started` article, or browse through our full list of articles: .. 
 grid:: 1 2 2 2
diff --git a/docs/products/kafka/kafka-connect/getting-started.rst b/docs/products/kafka/kafka-connect/get-started.rst
similarity index 100%
rename from docs/products/kafka/kafka-connect/getting-started.rst
rename to docs/products/kafka/kafka-connect/get-started.rst
diff --git a/docs/products/kafka/kafka-mirrormaker.rst b/docs/products/kafka/kafka-mirrormaker.rst
index ece98f0ff4..35bbe2954d 100644
--- a/docs/products/kafka/kafka-mirrormaker.rst
+++ b/docs/products/kafka/kafka-mirrormaker.rst
@@ -18,7 +18,7 @@ Apache Kafka® represents the best in class data streaming solution. Apache Kafk
 Get started with Aiven for Apache Kafka® MirrorMaker 2
 ------------------------------------------------------
-Take your first steps with Aiven for Apache Kafka® MirrorMaker 2 by following our :doc:`/docs/products/kafka/kafka-mirrormaker/getting-started` article, or browse through our full list of articles:
+Take your first steps with Aiven for Apache Kafka® MirrorMaker 2 by following our :doc:`/docs/products/kafka/kafka-mirrormaker/get-started` article, or browse through our full list of articles:
 .. grid:: 1 2 2 2
diff --git a/docs/products/kafka/kafka-mirrormaker/getting-started.rst b/docs/products/kafka/kafka-mirrormaker/get-started.rst
similarity index 100%
rename from docs/products/kafka/kafka-mirrormaker/getting-started.rst
rename to docs/products/kafka/kafka-mirrormaker/get-started.rst
diff --git a/docs/products/kafka/kafka-mirrormaker/howto/setup-replication-flow.rst b/docs/products/kafka/kafka-mirrormaker/howto/setup-replication-flow.rst
index bbd940a653..2c0bf96b0e 100644
--- a/docs/products/kafka/kafka-mirrormaker/howto/setup-replication-flow.rst
+++ b/docs/products/kafka/kafka-mirrormaker/howto/setup-replication-flow.rst
@@ -13,7 +13,7 @@ To define a replication flow between a source Apache Kafka cluster and a target
 ..
 Note::
-    If no Aiven for Apache Kafka MirrorMaker 2 are already defined, :doc:`you can create one in the Aiven console <../getting-started>`.
+    If no Aiven for Apache Kafka MirrorMaker 2 are already defined, :doc:`you can create one in the Aiven console <../get-started>`.
 2. In the service **Overview** screen, scroll to the **Service integrations** section and select **Manage integrations**.
diff --git a/docs/products/kafka/karapace.rst b/docs/products/kafka/karapace.rst
index 50c7509695..bc33deea19 100644
--- a/docs/products/kafka/karapace.rst
+++ b/docs/products/kafka/karapace.rst
@@ -14,7 +14,7 @@ Karapace REST provides a RESTful interface to your Apache Kafka cluster, allowin
 Get started with Karapace
 -------------------------
-Take your first steps Karapace by following our :doc:`/docs/products/kafka/karapace/getting-started` article, or browse through other articles:
+Take your first steps with Karapace by following our :doc:`/docs/products/kafka/karapace/get-started` article, or browse through other articles:
 .. grid:: 1 2 2 2
diff --git a/docs/products/kafka/karapace/concepts/acl-definition.rst b/docs/products/kafka/karapace/concepts/acl-definition.rst
index b8424716d5..74c597a5cc 100644
--- a/docs/products/kafka/karapace/concepts/acl-definition.rst
+++ b/docs/products/kafka/karapace/concepts/acl-definition.rst
@@ -73,5 +73,5 @@ The following table provides you with examples:
 The user that manages the ACLs is a superuser with write access to everything in the schema registry. In the Aiven Console, the superuser can view and modify all schemas in the Schema tab of a Kafka service. The superuser and its ACL entries are not visible in the Console but are added automatically by the Aiven platform.
-The schema registry authorization feature enabled in :doc:`Karapace schema registry ` allows you to both authenticate the user, and additionally grant or deny access to individual `Karapace schema registry REST API endpoints `_ and filter the content the endpoints return.
+The schema registry authorization feature enabled in :doc:`Karapace schema registry ` allows you to both authenticate the user, and additionally grant or deny access to individual `Karapace schema registry REST API endpoints `_ and filter the content the endpoints return.
diff --git a/docs/products/kafka/karapace/getting-started.rst b/docs/products/kafka/karapace/get-started.rst
similarity index 100%
rename from docs/products/kafka/karapace/getting-started.rst
rename to docs/products/kafka/karapace/get-started.rst
diff --git a/docs/products/kafka/karapace/howto/enable-oauth-oidc-kafka-rest-proxy.rst b/docs/products/kafka/karapace/howto/enable-oauth-oidc-kafka-rest-proxy.rst
index 1e2fcbef47..376d2b3685 100644
--- a/docs/products/kafka/karapace/howto/enable-oauth-oidc-kafka-rest-proxy.rst
+++ b/docs/products/kafka/karapace/howto/enable-oauth-oidc-kafka-rest-proxy.rst
@@ -36,7 +36,7 @@ To establish OAuth2/OIDC authentication for the Karapace REST proxy, complete th
 Prerequisites
 ```````````````
-* :doc:`Aiven for Apache Kafka® ` service running with :doc:`OAuth2/OIDC enabled `.
+* :doc:`Aiven for Apache Kafka® ` service running with :doc:`OAuth2/OIDC enabled `.
 * :doc:`Karapace schema registry and REST APIs enabled `.
 * Ensure access to an OIDC-compliant provider, such as Auth0, Okta, Google Identity Platform, or Azure.
diff --git a/docs/products/m3db.rst b/docs/products/m3db.rst
index 99f33d4314..31c80ec010 100644
--- a/docs/products/m3db.rst
+++ b/docs/products/m3db.rst
@@ -23,7 +23,7 @@ Read more about `the M3 components `_
 Get started with Aiven for M3
 -----------------------------
-Take your first steps with Aiven for M3 by following our :doc:`/docs/products/m3db/getting-started` article, or browse through our full list of articles:
+Take your first steps with Aiven for M3 by following our :doc:`/docs/products/m3db/get-started` article, or browse through our full list of articles:
 ..
 grid:: 1 2 2 2
diff --git a/docs/products/m3db/getting-started.rst b/docs/products/m3db/get-started.rst
similarity index 100%
rename from docs/products/m3db/getting-started.rst
rename to docs/products/m3db/get-started.rst
diff --git a/docs/products/opensearch.rst b/docs/products/opensearch.rst
index d26522ba59..e9d0d9b2e7 100644
--- a/docs/products/opensearch.rst
+++ b/docs/products/opensearch.rst
@@ -8,7 +8,7 @@ Aiven for OpenSearch® is a fully managed distributed search and analytics suite
 .. grid:: 1 2 2 2
-    .. grid-item-card:: :doc:`Quickstart `
+    .. grid-item-card:: :doc:`Quickstart `
        :shadow: md
        :margin: 2 2 0 0
diff --git a/docs/products/opensearch/concepts/opensearch-vs-elasticsearch.rst b/docs/products/opensearch/concepts/opensearch-vs-elasticsearch.rst
index 22a6f00bd6..cbcbf5874f 100644
--- a/docs/products/opensearch/concepts/opensearch-vs-elasticsearch.rst
+++ b/docs/products/opensearch/concepts/opensearch-vs-elasticsearch.rst
@@ -5,7 +5,7 @@ OpenSearch® is the open source continuation of the original Elasticsearch proje
 Version 1.0 release of OpenSearch should be very similar to the Elasticsearch release that it is based on, and Aiven encourages all customers to upgrade at their earliest convenience. This is to ensure that your platforms can continue to receive upgrades in the future.
-To start exploring Aiven for OpenSearch®, check out the :doc:`Get Started with Aiven for OpenSearch® `.
+To start exploring Aiven for OpenSearch®, check out the :doc:`Get Started with Aiven for OpenSearch® `.
 -----
diff --git a/docs/products/opensearch/dashboards.rst b/docs/products/opensearch/dashboards.rst
index 6d2d855725..7c2ad9eff6 100644
--- a/docs/products/opensearch/dashboards.rst
+++ b/docs/products/opensearch/dashboards.rst
@@ -9,7 +9,7 @@ OpenSearch® Dashboards is both a visualisation tool for data in the cluster and
 Get started with Aiven for OpenSearch Dashboards
 ------------------------------------------------
-Take your first steps with Aiven for OpenSearch Dashboards by following our :doc:`/docs/products/opensearch/dashboards/getting-started` article.
+Take your first steps with Aiven for OpenSearch Dashboards by following our :doc:`/docs/products/opensearch/dashboards/get-started` article.
 .. note:: Starting with Aiven for OpenSearch® versions 1.3.13 and 2.10, OpenSearch Dashboards will remain available during a maintenance update that also consists of version updates to your Aiven for OpenSearch service.
diff --git a/docs/products/opensearch/dashboards/getting-started.rst b/docs/products/opensearch/dashboards/get-started.rst
similarity index 90%
rename from docs/products/opensearch/dashboards/getting-started.rst
rename to docs/products/opensearch/dashboards/get-started.rst
index a9b542656d..77fc0c6612 100644
--- a/docs/products/opensearch/dashboards/getting-started.rst
+++ b/docs/products/opensearch/dashboards/get-started.rst
@@ -1,7 +1,7 @@ Getting started
 ===============
-To start using **Aiven for OpenSearch® Dashboards**, :doc:`create Aiven for OpenSearch® service first` and OpenSearch Dashboards service will be added alongside it. Once the Aiven for OpenSearch service is running you can find connection information to your OpenSearch Dashboards in the service overview page and use your favourite browser to access OpenSearch Dashboards service.
+To start using **Aiven for OpenSearch® Dashboards**, :doc:`create Aiven for OpenSearch® service first` and OpenSearch Dashboards service will be added alongside it.
 Once the Aiven for OpenSearch service is running you can find connection information to your OpenSearch Dashboards in the service overview page and use your favourite browser to access OpenSearch Dashboards service.
 .. note::
diff --git a/docs/products/opensearch/getting-started.rst b/docs/products/opensearch/get-started.rst
similarity index 100%
rename from docs/products/opensearch/getting-started.rst
rename to docs/products/opensearch/get-started.rst
diff --git a/docs/products/opensearch/howto/opensearch-aggregations-and-nodejs.rst b/docs/products/opensearch/howto/opensearch-aggregations-and-nodejs.rst
index c16127a2ec..b534ca7567 100644
--- a/docs/products/opensearch/howto/opensearch-aggregations-and-nodejs.rst
+++ b/docs/products/opensearch/howto/opensearch-aggregations-and-nodejs.rst
@@ -9,7 +9,7 @@ Learn how to aggregate data using OpenSearch and its NodeJS client. In this tuto
 Prepare the playground
 **********************
-You can create an OpenSearch cluster either with the visual interface or with the command line. Depending on your preference follow the instructions for :doc:`getting started with the console for Aiven for Opensearch ` or see :doc:`how to create a service with the help of Aiven command line interface `.
+You can create an OpenSearch cluster either with the visual interface or with the command line. Depending on your preference follow the instructions for :doc:`getting started with the console for Aiven for Opensearch ` or see :doc:`how to create a service with the help of Aiven command line interface `.
 ..
 note::
diff --git a/docs/products/opensearch/howto/opensearch-and-nodejs.rst b/docs/products/opensearch/howto/opensearch-and-nodejs.rst
index 83a046a2f6..537c26cad7 100644
--- a/docs/products/opensearch/howto/opensearch-and-nodejs.rst
+++ b/docs/products/opensearch/howto/opensearch-and-nodejs.rst
@@ -6,7 +6,7 @@ Learn how the OpenSearch® JavaScript client gives a clear and useful interface
 Prepare the playground
 **********************
-You can create an OpenSearch cluster either with the visual interface or with the command line. Depending on your preference follow the instructions for :doc:`getting started with the console for Aiven for Opensearch ` or see :doc:`how to create a service with the help of Aiven command line interface `.
+You can create an OpenSearch cluster either with the visual interface or with the command line. Depending on your preference follow the instructions for :doc:`getting started with the console for Aiven for Opensearch ` or see :doc:`how to create a service with the help of Aiven command line interface `.
 .. note::
diff --git a/docs/products/postgresql.rst b/docs/products/postgresql.rst
index 31e38f5c23..d606b8a273 100644
--- a/docs/products/postgresql.rst
+++ b/docs/products/postgresql.rst
@@ -7,7 +7,7 @@ Aiven for PostgreSQL® is is a fully-managed and hosted relational database serv
 .. grid:: 1 2 2 2
-    .. grid-item-card:: :doc:`Quickstart `
+    ..
 grid-item-card:: :doc:`Quickstart `
        :shadow: md
        :margin: 2 2 0 0
diff --git a/docs/products/postgresql/getting-started.rst b/docs/products/postgresql/get-started.rst
similarity index 100%
rename from docs/products/postgresql/getting-started.rst
rename to docs/products/postgresql/get-started.rst
diff --git a/docs/tools/terraform/howto/vpc-peering-aws.rst b/docs/tools/terraform/howto/vpc-peering-aws.rst
index 675bf43b47..217128a44b 100644
--- a/docs/tools/terraform/howto/vpc-peering-aws.rst
+++ b/docs/tools/terraform/howto/vpc-peering-aws.rst
@@ -12,7 +12,7 @@ Prerequisites:
 * Create an :doc:`Aiven authentication token `.
-* `Install the AWS CLI `_.
+* `Install the AWS CLI `_.
 * `Configure the AWS CLI `_.
diff --git a/index.rst b/index.rst
index 0445c034c6..1dad969650 100644
--- a/index.rst
+++ b/index.rst
@@ -253,7 +253,7 @@ Automation
 A public API you can use for programmatic integrations.
-    .. button-link:: docs/tools/api
+    .. button-link:: https://docs.aiven.io/docs/tools/api
        :color: primary
        :outline:

From 68255fd5fb855e7babebe613933e89e3f93116dd Mon Sep 17 00:00:00 2001
From: Dorota Wojcik
Date: Thu, 11 Jan 2024 13:10:56 +0100
Subject: [PATCH 08/10] comments

---
 includes/clouds-list.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/includes/clouds-list.rst b/includes/clouds-list.rst
index 3e300b5c33..d30d77a493 100644
--- a/includes/clouds-list.rst
+++ b/includes/clouds-list.rst
@@ -425,7 +425,7 @@ Oracle Cloud Infrastructure
 .. important::
-    Oracle Cloud Infrastructure (OCI) is supported on the Aiven platfrom as a :doc:`limited availability feature `. If you're interested in trying it out, contact the sales team at sales@Aiven.io.
+    Oracle Cloud Infrastructure (OCI) is supported on the Aiven platform as a :doc:`limited availability feature `. For more information or access, contact the sales team at sales@Aiven.io.
 ..
 list-table::
    :header-rows: 1

From cd4743d9069a7139e124d202d61801db7a2a7593 Mon Sep 17 00:00:00 2001
From: Arthur
Date: Thu, 11 Jan 2024 14:03:05 +0100
Subject: [PATCH 09/10] harmonize related pages section (#2419)

---
 docs/platform/concepts/availability-zones.rst | 2 +-
 docs/platform/howto/disk-autoscaler.rst | 2 +-
 docs/platform/howto/vpc-peering-upcloud.rst | 2 +-
 .../cassandra/howto/enable-cross-cluster-replication.rst | 2 +-
 docs/products/cassandra/howto/zdm-proxy.rst | 2 +-
 .../cassandra/reference/cassandra-metrics-datadog.rst | 2 +-
 .../clickhouse/concepts/clickhouse-tiered-storage.rst | 2 +-
 docs/products/clickhouse/concepts/disaster-recovery.rst | 2 +-
 docs/products/clickhouse/concepts/federated-queries.rst | 2 +-
 docs/products/clickhouse/howto/check-data-tiered-storage.rst | 2 +-
 docs/products/clickhouse/howto/configure-tiered-storage.rst | 2 +-
 docs/products/clickhouse/howto/data-service-integration.rst | 2 +-
 docs/products/clickhouse/howto/enable-tiered-storage.rst | 2 +-
 docs/products/clickhouse/howto/integration-databases.rst | 2 +-
 docs/products/clickhouse/howto/run-federated-queries.rst | 2 +-
 .../clickhouse/howto/transfer-data-tiered-storage.rst | 2 +-
 .../clickhouse/reference/clickhouse-metrics-datadog.rst | 2 +-
 .../clickhouse/reference/supported-interfaces-drivers.rst | 2 +-
 docs/products/dragonfly/concepts/overview.rst | 2 +-
 .../dragonfly/howto/migrate-aiven-redis-df-console.rst | 2 +-
 .../products/dragonfly/howto/migrate-ext-redis-df-console.rst | 2 +-
 docs/products/kafka/concepts/kafka-tiered-storage.rst | 2 +-
 docs/products/kafka/concepts/tiered-storage-guarantees.rst | 2 +-
 docs/products/kafka/concepts/tiered-storage-how-it-works.rst | 2 +-
 docs/products/kafka/concepts/tiered-storage-limitations.rst | 2 +-
 docs/products/kafka/howto/enable-oidc.rst | 4 ++--
 docs/products/mysql/howto/migrate-db-to-aiven-via-console.rst | 2 +-
 docs/products/opensearch/howto/audit-logs.rst | 2 +-
 .../opensearch/howto/opensearch-dashboard-multi_tenancy.rst | 2
+-
 .../opensearch/howto/opensearch-search-and-python.rst | 4 ++--
 docs/products/postgresql/concepts/pg-shared-buffers.rst | 4 ++--
 docs/products/postgresql/concepts/pgvector.rst | 2 +-
 .../postgresql/howto/migrate-db-to-aiven-via-console.rst | 2 +-
 docs/products/postgresql/howto/use-pgvector.rst | 2 +-
 docs/products/redis/howto/migrate-redis-aiven-via-console.rst | 2 +-
 35 files changed, 38 insertions(+), 38 deletions(-)

diff --git a/docs/platform/concepts/availability-zones.rst b/docs/platform/concepts/availability-zones.rst
index cedbe8ef99..fce544ea13 100644
--- a/docs/platform/concepts/availability-zones.rst
+++ b/docs/platform/concepts/availability-zones.rst
@@ -62,7 +62,7 @@ Aiven supports a subset of existing Azure cloud regions with availability zones.
 - ``azure-westeurope``
 - ``azure-westus2``
-Related reading
+Related pages
 ---------------
 - :doc:`PostgreSQL® backups `
diff --git a/docs/platform/howto/disk-autoscaler.rst b/docs/platform/howto/disk-autoscaler.rst
index 9a9957c5bc..aace7e1835 100644
--- a/docs/platform/howto/disk-autoscaler.rst
+++ b/docs/platform/howto/disk-autoscaler.rst
@@ -290,7 +290,7 @@ You can disable disk autoscaler on your service with the :doc:`Aiven CLI client
    avn service integration-endpoint-delete ENDPOINT_ID
-Related reading
+Related pages
 ---------------
 :doc:`Dynamic disk sizing (DDS) `
diff --git a/docs/platform/howto/vpc-peering-upcloud.rst b/docs/platform/howto/vpc-peering-upcloud.rst
index 05fc4e88ce..f2fb38501e 100644
--- a/docs/platform/howto/vpc-peering-upcloud.rst
+++ b/docs/platform/howto/vpc-peering-upcloud.rst
@@ -160,7 +160,7 @@ To refresh the DHCP lease for a network interface, run the following commands:
    dhclient NETWORK_INTERFACE_NAME
-Related reading
+Related pages
 ---------------
 * :doc:`Manage Virtual Private Cloud (VPC) peering `
diff --git a/docs/products/cassandra/howto/enable-cross-cluster-replication.rst b/docs/products/cassandra/howto/enable-cross-cluster-replication.rst
index f1aa19539f..699ff78476
100644
--- a/docs/products/cassandra/howto/enable-cross-cluster-replication.rst
+++ b/docs/products/cassandra/howto/enable-cross-cluster-replication.rst
@@ -272,7 +272,7 @@ What's next
 * :doc:`Manage CCR on Aiven for Apache Cassandra `
 * :doc:`Disable CCR on Aiven for Apache Cassandra `
-Related reading
+Related pages
 ---------------
 * :doc:`About cross-cluster replication on Aiven for Apache Cassandra `
diff --git a/docs/products/cassandra/howto/zdm-proxy.rst b/docs/products/cassandra/howto/zdm-proxy.rst
index 34512e7e24..cf35338de0 100644
--- a/docs/products/cassandra/howto/zdm-proxy.rst
+++ b/docs/products/cassandra/howto/zdm-proxy.rst
@@ -246,7 +246,7 @@ You can expect to receive output similar to the following:
 ``50`` and ``48`` are there in the target table since ZDM Proxy has forwarded the write request to the target service. ``42``, ``44``, and ``46`` are not there since ZDM Proxy has not sent the read request to the target service.
-Related reading
+Related pages
 ---------------
 * `zdm-proxy GitHub `_
diff --git a/docs/products/cassandra/reference/cassandra-metrics-datadog.rst b/docs/products/cassandra/reference/cassandra-metrics-datadog.rst
index ab9efac5cf..fcb998cbe2 100644
--- a/docs/products/cassandra/reference/cassandra-metrics-datadog.rst
+++ b/docs/products/cassandra/reference/cassandra-metrics-datadog.rst
@@ -8,7 +8,7 @@ Get a metrics list for your service
 The list of Aiven for Apache Cassandra metrics available in Datadog corresponds to the list of metrics available for the open-source Apache Cassandra and can be checked in `Metrics `_.
-Related reading
+Related pages
 ---------------
 * Check how to use Datadog with Aiven services in :doc:`Datadog and Aiven `.
diff --git a/docs/products/clickhouse/concepts/clickhouse-tiered-storage.rst b/docs/products/clickhouse/concepts/clickhouse-tiered-storage.rst
index 2d5becf515..fe786cfed5 100644
--- a/docs/products/clickhouse/concepts/clickhouse-tiered-storage.rst
+++ b/docs/products/clickhouse/concepts/clickhouse-tiered-storage.rst
@@ -76,7 +76,7 @@ What's next
 * :doc:`Enable tiered storage in Aiven for ClickHouse `
 * :doc:`Configure data retention thresholds for tiered storage `
-Related reading
+Related pages
 ---------------
 * :doc:`Check data volume distribution between different disks `
diff --git a/docs/products/clickhouse/concepts/disaster-recovery.rst b/docs/products/clickhouse/concepts/disaster-recovery.rst
index 6b6a8c31db..519c629be7 100644
--- a/docs/products/clickhouse/concepts/disaster-recovery.rst
+++ b/docs/products/clickhouse/concepts/disaster-recovery.rst
@@ -73,7 +73,7 @@ Aiven for ClickHouse has a few restrictions on the disaster recovery capability.
 For all the restrictions and limits for Aiven for ClickHouse, see :doc:`Aiven for ClickHouse limits and limitations `.
-Related reading
+Related pages
 ---------------
 * :doc:`Disaster Recovery testing scenarios `
diff --git a/docs/products/clickhouse/concepts/federated-queries.rst b/docs/products/clickhouse/concepts/federated-queries.rst
index def31557bc..57c42d1ba3 100644
--- a/docs/products/clickhouse/concepts/federated-queries.rst
+++ b/docs/products/clickhouse/concepts/federated-queries.rst
@@ -46,7 +46,7 @@ Limitations
 * Federated queries in Aiven for ClickHouse only support S3-compatible object storage providers for the time being. More external data sources coming soon!
 * Virtual tables are only supported for URL sources, using the URL table engine. Stay tuned for us supporting the S3 table engine in the future!
-Related reading
+Related pages
 ---------------
 * :doc:`Read and pull data from S3 object storages and web resources over HTTP `
diff --git a/docs/products/clickhouse/howto/check-data-tiered-storage.rst b/docs/products/clickhouse/howto/check-data-tiered-storage.rst
index 15c713c362..7fa7b1ad79 100644
--- a/docs/products/clickhouse/howto/check-data-tiered-storage.rst
+++ b/docs/products/clickhouse/howto/check-data-tiered-storage.rst
@@ -72,7 +72,7 @@ What's next
 * :doc:`Transfer data between SSD and object storage `
 * :doc:`Configure data retention thresholds for tiered storage `
-Related reading
+Related pages
 ---------------
 * :doc:`About tiered storage in Aiven for ClickHouse `
diff --git a/docs/products/clickhouse/howto/configure-tiered-storage.rst b/docs/products/clickhouse/howto/configure-tiered-storage.rst
index a983a41aaf..6071e9e423 100644
--- a/docs/products/clickhouse/howto/configure-tiered-storage.rst
+++ b/docs/products/clickhouse/howto/configure-tiered-storage.rst
@@ -83,7 +83,7 @@ What's next
 * :doc:`Check data volume distribution between different disks `
-Related reading
+Related pages
 ---------------
 * :doc:`About tiered storage in Aiven for ClickHouse `
diff --git a/docs/products/clickhouse/howto/data-service-integration.rst b/docs/products/clickhouse/howto/data-service-integration.rst
index 52f7d6909a..9af191cc0a 100644
--- a/docs/products/clickhouse/howto/data-service-integration.rst
+++ b/docs/products/clickhouse/howto/data-service-integration.rst
@@ -98,7 +98,7 @@ Stop data service integrations
 Your integration has been removed along with all the corresponding databases and configuration information.
-Related reading
+Related pages
 ---------------
 * :doc:`Manage Aiven for ClickHouse® integration databases `
diff --git a/docs/products/clickhouse/howto/enable-tiered-storage.rst b/docs/products/clickhouse/howto/enable-tiered-storage.rst
index c4a2488741..7c74259e43 100644
--- a/docs/products/clickhouse/howto/enable-tiered-storage.rst
+++ b/docs/products/clickhouse/howto/enable-tiered-storage.rst
@@ -70,7 +70,7 @@ What's next
 * :doc:`Configure data retention thresholds for tiered storage `
 * :doc:`Check data volume distribution between different disks `
-Related reading
+Related pages
 ---------------
 * :doc:`About tiered storage in Aiven for ClickHouse `
diff --git a/docs/products/clickhouse/howto/integration-databases.rst b/docs/products/clickhouse/howto/integration-databases.rst
index c2a426cc35..de726c68f1 100644
--- a/docs/products/clickhouse/howto/integration-databases.rst
+++ b/docs/products/clickhouse/howto/integration-databases.rst
@@ -112,7 +112,7 @@ Delete integration databases
 Your integration database has been removed from the **Databases and tables** list.
-Related reading
+Related pages
 ---------------
 * :doc:`Manage Aiven for ClickHouse® data service integrations `
diff --git a/docs/products/clickhouse/howto/run-federated-queries.rst b/docs/products/clickhouse/howto/run-federated-queries.rst
index 3c53b060ea..5a7984aceb 100644
--- a/docs/products/clickhouse/howto/run-federated-queries.rst
+++ b/docs/products/clickhouse/howto/run-federated-queries.rst
@@ -194,7 +194,7 @@ Once the table is defined, SELECT and INSERT statements execute GET and POST req
    INSERT INTO trips_export_endpoint_table VALUES (8765, 10, now() - INTERVAL 15 MINUTE, now(), 50, 20)
-Related reading
+Related pages
 ---------------
 * :doc:`About querying external data in Aiven for ClickHouse® `
diff --git a/docs/products/clickhouse/howto/transfer-data-tiered-storage.rst b/docs/products/clickhouse/howto/transfer-data-tiered-storage.rst
index 113cbb6adf..3590d72819 100644
--- a/docs/products/clickhouse/howto/transfer-data-tiered-storage.rst
+++ b/docs/products/clickhouse/howto/transfer-data-tiered-storage.rst
@@ -65,7 +65,7 @@ What's next
 * :doc:`Check data distribution between SSD and object storage `
 * :doc:`Configure data retention thresholds for tiered storage `
-Related reading
+Related pages
 ---------------
 * :doc:`About tiered storage in Aiven for ClickHouse `
diff --git a/docs/products/clickhouse/reference/clickhouse-metrics-datadog.rst b/docs/products/clickhouse/reference/clickhouse-metrics-datadog.rst
index 57f32ce6bb..98dba8fd77 100644
--- a/docs/products/clickhouse/reference/clickhouse-metrics-datadog.rst
+++ b/docs/products/clickhouse/reference/clickhouse-metrics-datadog.rst
@@ -8,7 +8,7 @@ Get a metrics list for your service
 The list of Aiven for ClickHouse metrics available in Datadog corresponds to the list of metrics available for the open-source ClickHouse and can be checked in `Metrics `_.
-Related reading
+Related pages
 ---------------
 * Check how to use Datadog with Aiven services in :doc:`Datadog and Aiven `.
diff --git a/docs/products/clickhouse/reference/supported-interfaces-drivers.rst b/docs/products/clickhouse/reference/supported-interfaces-drivers.rst
index ab517ad29e..75d5a9b38a 100644
--- a/docs/products/clickhouse/reference/supported-interfaces-drivers.rst
+++ b/docs/products/clickhouse/reference/supported-interfaces-drivers.rst
@@ -43,7 +43,7 @@ There are a number of drivers (libraries) that use one of :ref:`the fundamental
 You can connect to Aiven for ClickHouse with any driver that uses TLS and one of the supported protocols.
-Related reading
+Related pages
 ---------------
 * :doc:`How to connect to Aiven for ClickHouse using different libraries `
diff --git a/docs/products/dragonfly/concepts/overview.rst b/docs/products/dragonfly/concepts/overview.rst
index 59c15823b0..270a875a62 100644
--- a/docs/products/dragonfly/concepts/overview.rst
+++ b/docs/products/dragonfly/concepts/overview.rst
@@ -39,6 +39,6 @@ Use cases
 * **Scaling and performance needs:** Perfectly suited for situations where the need for greater scalability and higher throughput goes beyond what Redis OSS can handle.
-Related reading
+Related pages
 ----------------
 * For detailed information about Dragonfly, refer to `Dragonfly documentation `_.
diff --git a/docs/products/dragonfly/howto/migrate-aiven-redis-df-console.rst b/docs/products/dragonfly/howto/migrate-aiven-redis-df-console.rst
index b6c2f1484f..fe1f8ab089 100644
--- a/docs/products/dragonfly/howto/migrate-aiven-redis-df-console.rst
+++ b/docs/products/dragonfly/howto/migrate-aiven-redis-df-console.rst
@@ -101,7 +101,7 @@ Upon successful migration:
 Your data is now synchronized to Aiven for Dragonfly, with new writes to the source database being continuously synced.
-Related reading
+Related pages
 ---------------
 * :doc:`Aiven for Redis®* documentation `
diff --git a/docs/products/dragonfly/howto/migrate-ext-redis-df-console.rst b/docs/products/dragonfly/howto/migrate-ext-redis-df-console.rst
index d49d647c47..9229169b09 100644
--- a/docs/products/dragonfly/howto/migrate-ext-redis-df-console.rst
+++ b/docs/products/dragonfly/howto/migrate-ext-redis-df-console.rst
@@ -108,7 +108,7 @@ Once the migration is complete:
-Related reading
+Related pages
 ---------------
 * Migrating to Aiven for Dragonfly
 * Aiven for Dragonfly documentation `
diff --git a/docs/products/kafka/concepts/kafka-tiered-storage.rst b/docs/products/kafka/concepts/kafka-tiered-storage.rst
index d751a8f975..9c2b5474e6 100644
--- a/docs/products/kafka/concepts/kafka-tiered-storage.rst
+++ b/docs/products/kafka/concepts/kafka-tiered-storage.rst
@@ -43,7 +43,7 @@ Pricing
 Tiered storage costs are determined by the amount of remote storage used, measured in GB/hour. The highest usage level within each hour is the basis for calculating charges.
-Related reading
+Related pages
 ----------------
 * :doc:`How tiered storage works in Aiven for Apache Kafka® `
diff --git a/docs/products/kafka/concepts/tiered-storage-guarantees.rst b/docs/products/kafka/concepts/tiered-storage-guarantees.rst
index e8f21714b4..585695e459 100644
--- a/docs/products/kafka/concepts/tiered-storage-guarantees.rst
+++ b/docs/products/kafka/concepts/tiered-storage-guarantees.rst
@@ -17,7 +17,7 @@ Let's say you have a topic with a **total retention threshold** of **1000 GB** a
 * If the total size of the data exceeds 1000 GB, Apache Kafka will begin deleting the oldest data from remote storage.
-Related reading
+Related pages
 ----------------
 * :doc:`Tiered storage in Aiven for Apache Kafka® overview `
diff --git a/docs/products/kafka/concepts/tiered-storage-how-it-works.rst b/docs/products/kafka/concepts/tiered-storage-how-it-works.rst
index 0532a4c95e..ebb02ec995 100644
--- a/docs/products/kafka/concepts/tiered-storage-how-it-works.rst
+++ b/docs/products/kafka/concepts/tiered-storage-how-it-works.rst
@@ -50,7 +50,7 @@ Segments are encrypted with 256-bit AES encryption before being uploaded to the
-Related reading
+Related pages
 ----------------
 * :doc:`Tiered storage in Aiven for Apache Kafka® overview `
diff --git a/docs/products/kafka/concepts/tiered-storage-limitations.rst b/docs/products/kafka/concepts/tiered-storage-limitations.rst
index 198d6766c7..54fcab8a4a 100644
--- a/docs/products/kafka/concepts/tiered-storage-limitations.rst
+++ b/docs/products/kafka/concepts/tiered-storage-limitations.rst
@@ -12,7 +12,7 @@ Limitations
 * If you enable tiered storage on a service, you can't migrate the service to a different region or cloud, except for moving to a virtual cloud in the same region. For migration to a different region or cloud, contact `Aiven support `_.
-Related reading
+Related pages
 ----------------
 * :doc:`Tiered storage in Aiven for Apache Kafka® overview `
diff --git a/docs/products/kafka/howto/enable-oidc.rst b/docs/products/kafka/howto/enable-oidc.rst
index 5294f1a5e2..7cf5480cbb 100644
--- a/docs/products/kafka/howto/enable-oidc.rst
+++ b/docs/products/kafka/howto/enable-oidc.rst
@@ -92,6 +92,6 @@ For detailed explanations on the OIDC parameters, refer to the :ref:`console-aut
-See also
---------
+Related pages
+-------------
 - Enable OAuth2/OIDC support for Apache Kafka® REST proxy
\ No newline at end of file
diff --git a/docs/products/mysql/howto/migrate-db-to-aiven-via-console.rst b/docs/products/mysql/howto/migrate-db-to-aiven-via-console.rst
index b9361e69ac..9b1feb886a 100644
--- a/docs/products/mysql/howto/migrate-db-to-aiven-via-console.rst
+++ b/docs/products/mysql/howto/migrate-db-to-aiven-via-console.rst
@@ -214,7 +214,7 @@ If you :ref:`stop a migration process `, you cannot restar
 If you start a new migration using the same connection details when your *target* database is not empty, the migration tool truncates your *target* database and an existing data set gets overwritten with the new data set.
-Related reading
+Related pages
 ---------------
 - :doc:`Migrate to Aiven for MySQL from an external MySQL `
diff --git a/docs/products/opensearch/howto/audit-logs.rst b/docs/products/opensearch/howto/audit-logs.rst
index 4fc1fc3464..a036d2e474 100644
--- a/docs/products/opensearch/howto/audit-logs.rst
+++ b/docs/products/opensearch/howto/audit-logs.rst
@@ -88,6 +88,6 @@ To access and visualize audit logs in OpenSearch, follow the steps below:
 3. **Save and modify visualization**: Once you've created a visualization, save it for future reference.
 You can always return to modify and update it as your requirements evolve
 
-Related reading
+Related pages
 ----------------
 
 * `OpenSearch audit logs documentation `_
\ No newline at end of file
diff --git a/docs/products/opensearch/howto/opensearch-dashboard-multi_tenancy.rst b/docs/products/opensearch/howto/opensearch-dashboard-multi_tenancy.rst
index 1604ecb918..87bcee273b 100644
--- a/docs/products/opensearch/howto/opensearch-dashboard-multi_tenancy.rst
+++ b/docs/products/opensearch/howto/opensearch-dashboard-multi_tenancy.rst
@@ -80,6 +80,6 @@ To manage tenants in the OpenSearch dashboard, you can follow these steps:
 
 * **Edit, delete, or duplicate tenants**: To manage existing tenants, select them from the list and use the **Actions** dropdown to edit, delete, or duplicate them according to your needs.
 
-Related articles
+Related pages
 ------------------
 
 * `OpenSearch Dashboards multi-tenancy `_
\ No newline at end of file
diff --git a/docs/products/opensearch/howto/opensearch-search-and-python.rst b/docs/products/opensearch/howto/opensearch-search-and-python.rst
index e8736cd30a..f127bf8b1d 100644
--- a/docs/products/opensearch/howto/opensearch-search-and-python.rst
+++ b/docs/products/opensearch/howto/opensearch-search-and-python.rst
@@ -522,8 +522,8 @@ As you can see, this search returns results 🍍:
 
 It is your turn, try out more combinations to better understand the fuzzy query.
 
-Read more
-'''''''''
+Related pages
+'''''''''''''
 
 Want to try out OpenSearch with other clients? You can learn how to write search queries with NodeJS client, see :doc:`our tutorial `.
 
 We created an OpenSearch cluster, connected to it, and tried out different types of search queries. Now, you can explore more resources to help you to learn other features of OpenSearch and its Python client.
diff --git a/docs/products/postgresql/concepts/pg-shared-buffers.rst b/docs/products/postgresql/concepts/pg-shared-buffers.rst
index bafbf57875..171525b8c9 100644
--- a/docs/products/postgresql/concepts/pg-shared-buffers.rst
+++ b/docs/products/postgresql/concepts/pg-shared-buffers.rst
@@ -170,7 +170,7 @@ You may want to prewarm the ``shared_buffers`` in anticipation of a specific wor
 
 If the ``shared buffers`` size is less than pre-loaded data, only the tailing end of the data is cached as the earlier data encounters a forced ejection.
 
-Read more
-----------
+Related pages
+-------------
 
 For more information on shared buffers, see `Resource Consumption `_ in the PostgreSQL documentation.
diff --git a/docs/products/postgresql/concepts/pgvector.rst b/docs/products/postgresql/concepts/pgvector.rst
index e8a6a119de..2eb919dbcf 100644
--- a/docs/products/postgresql/concepts/pgvector.rst
+++ b/docs/products/postgresql/concepts/pgvector.rst
@@ -58,7 +58,7 @@ What's next
 
 :doc:`Enable and use pgvector on Aiven for PostgreSQL® `
 
-Related reading
+Related pages
 ---------------
 
 `pgvector README on GitHub `_
diff --git a/docs/products/postgresql/howto/migrate-db-to-aiven-via-console.rst b/docs/products/postgresql/howto/migrate-db-to-aiven-via-console.rst
index 5a53490ea8..f28da50846 100644
--- a/docs/products/postgresql/howto/migrate-db-to-aiven-via-console.rst
+++ b/docs/products/postgresql/howto/migrate-db-to-aiven-via-console.rst
@@ -285,7 +285,7 @@ As soon as the wizard communicates the completion of the migration, check if the
 
 You have successfully migrated your PostgreSQL database into you Aiven for PostgreSQL service.
-Related reading
+Related pages
 ---------------
 
 - :doc:`About aiven-db-migrate `
diff --git a/docs/products/postgresql/howto/use-pgvector.rst b/docs/products/postgresql/howto/use-pgvector.rst
index 0cdea7446c..cf0cfc9297 100644
--- a/docs/products/postgresql/howto/use-pgvector.rst
+++ b/docs/products/postgresql/howto/use-pgvector.rst
@@ -100,7 +100,7 @@ To stop the pgvector extension and remove it from a database, run the following
 
     DROP EXTENSION vector;
 
-Related reading
+Related pages
 ---------------
 
 * :doc:`pgvector for AI-powered search in Aiven for PostgreSQL® `
diff --git a/docs/products/redis/howto/migrate-redis-aiven-via-console.rst b/docs/products/redis/howto/migrate-redis-aiven-via-console.rst
index 574a38e313..bb46ded480 100644
--- a/docs/products/redis/howto/migrate-redis-aiven-via-console.rst
+++ b/docs/products/redis/howto/migrate-redis-aiven-via-console.rst
@@ -101,7 +101,7 @@ When the wizard informs you about the completion of the migration, you can choos
 
 Your data has been successfully migrated to the designated Aiven for Redis database, and any subsequent additions to the connected databases are being continuously synchronized.
-Related articles
+Related pages
 ----------------
 
 * :doc:`/docs/products/redis/howto/migrate-aiven-redis`

From a4e0f0956d839a35c48f48067ccf6cbe28dce8ea Mon Sep 17 00:00:00 2001
From: Ryan Skraba
Date: Thu, 11 Jan 2024 15:06:09 +0100
Subject: [PATCH 10/10] Fix typo

---
 docs/products/kafka/concepts/tiered-storage-how-it-works.rst | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/products/kafka/concepts/tiered-storage-how-it-works.rst b/docs/products/kafka/concepts/tiered-storage-how-it-works.rst
index ebb02ec995..7add4fc278 100644
--- a/docs/products/kafka/concepts/tiered-storage-how-it-works.rst
+++ b/docs/products/kafka/concepts/tiered-storage-how-it-works.rst
@@ -55,7 +55,7 @@ Related pages
 
 * :doc:`Tiered storage in Aiven for Apache Kafka® overview `
 * :doc:`Guarantees `
-* :doc:`Limitiations `
+* :doc:`Limitations `
 * :doc:`Enabled tiered storage for Aiven for Apache Kafka® service `