From 51a5e2171ca603b7f08d025c96bfc69ff41bf307 Mon Sep 17 00:00:00 2001 From: Arthur <28596581+ArthurFlageul@users.noreply.github.com> Date: Thu, 23 Nov 2023 16:17:07 +0100 Subject: [PATCH] Fix: content indentation and more (#2283) --- conf.py | 4 - docs/integrations/rsyslog.rst | 29 +++---- .../concepts/choosing-timeseries-database.rst | 2 +- .../howto/create-service-integration.rst | 6 +- .../reference/project-member-privileges.rst | 32 ++++---- docs/products/flink/howto/connect-kafka.rst | 5 +- .../flink/howto/connect-opensearch.rst | 3 +- .../flink/howto/create-integration.rst | 18 ++--- .../flink/howto/flink-confluent-avro.rst | 78 +++++++++---------- .../flink/howto/manage-flink-tables.rst | 18 +++-- .../products/flink/howto/pg-cdc-connector.rst | 33 ++++---- .../kafka/howto/change-retention-period.rst | 8 +- .../howto/configure-topic-tiered-storage.rst | 6 +- .../products/kafka/howto/fake-sample-data.rst | 45 +++++------ ...ntegrate-service-logs-into-kafka-topic.rst | 4 +- docs/products/kafka/howto/kafka-conduktor.rst | 18 ++--- docs/products/kafka/howto/kafka-sasl-auth.rst | 6 +- .../kafka/howto/kafka-tools-config-file.rst | 24 +++--- .../kafka/howto/keystore-truststore.rst | 16 ++-- .../kafka/kafka-connect/getting-started.rst | 8 +- .../howto/cassandra-streamreactor-sink.rst | 19 +++-- .../howto/cassandra-streamreactor-source.rst | 23 +++--- .../debezium-source-connector-mongodb.rst | 12 +-- .../howto/elasticsearch-sink.rst | 6 +- .../kafka/kafka-connect/howto/gcs-sink.rst | 4 +- .../kafka/kafka-connect/howto/http-sink.rst | 4 +- .../kafka/kafka-connect/howto/influx-sink.rst | 4 +- .../kafka/kafka-connect/howto/jdbc-sink.rst | 28 +++---- .../howto/mongodb-poll-source-connector.rst | 4 +- .../howto/mongodb-sink-lenses.rst | 4 +- .../howto/mqtt-source-connector.rst | 2 +- .../kafka-connect/howto/opensearch-sink.rst | 4 +- .../howto/redis-streamreactor-sink.rst | 6 +- .../kafka-connect/howto/snowflake-sink.rst | 20 ++--- 
.../kafka/kafka-connect/howto/splunk-sink.rst | 8 +- .../concepts/pg-connection-pooling.rst | 2 +- docs/tools/kubernetes.rst | 5 +- .../howto/promote-to-master-pg-rr.rst | 4 +- docs/tutorials/anomaly-detection.rst | 4 +- includes/services-memory-capped.rst | 4 +- 40 files changed, 266 insertions(+), 264 deletions(-) diff --git a/conf.py b/conf.py index dc4be3bf20..7a707298e6 100644 --- a/conf.py +++ b/conf.py @@ -312,10 +312,6 @@ :width: 24px :class: no-scaled-link -.. |tick| image:: /images/icon-tick.png - :width: 24px - :class: no-scaled-link - .. |beta| replace:: :bdg-secondary:`beta` .. |preview| replace:: :bdg-secondary:`preview` diff --git a/docs/integrations/rsyslog.rst b/docs/integrations/rsyslog.rst index d9a018138d..db96b2b7f8 100644 --- a/docs/integrations/rsyslog.rst +++ b/docs/integrations/rsyslog.rst @@ -28,16 +28,16 @@ Client `__ . .. code:: avn service integration-endpoint-create --project your-project \ -     -d example-syslog -t rsyslog \ -     -c server=logs.example.com -c port=514 \ -     -c format=rfc5424 -c tls=true + -d example-syslog -t rsyslog \ + -c server=logs.example.com -c port=514 \ + -c format=rfc5424 -c tls=true When defining the remote syslog server the following parameters can be applied using the ``-c`` switch. Required: -- ``server`` -  DNS name or IPv4 address of the server +- ``server`` - DNS name or IPv4 address of the server - ``port`` - port to connect to @@ -100,17 +100,17 @@ endpoint previously created .. code:: avn service integration-endpoint-list --project your-project - ENDPOINT_ID                           ENDPOINT_NAME   ENDPOINT_TYPE - ====================================  ==============  ============= - 618fb764-5832-4636-ba26-0d9857222cfd  example-syslog  rsyslog + ENDPOINT_ID ENDPOINT_NAME ENDPOINT_TYPE + ==================================== ============== ============= + 618fb764-5832-4636-ba26-0d9857222cfd example-syslog rsyslog Then you can link the service to the endpoint .. 
code:: avn service integration-create --project your-project \ -     -t rsyslog -s your-service \ -     -D 618fb764-5832-4636-ba26-0d9857222cfd + -t rsyslog -s your-service \ + -D 618fb764-5832-4636-ba26-0d9857222cfd Example configurations ---------------------- @@ -119,15 +119,6 @@ Rsyslog is a standard integration so you can use it with any external system. We .. note:: All integrations can be configured using the Aiven Console or the Aiven CLI though the examples are easier to copy and paste in the CLI form. -* :ref:`Coralogix` -* :doc:`Datadog ` -* :ref:`Loggly` -* :doc:`Logtail ` -* :ref:`Mezmo` -* :ref:`New Relic` -* :ref:`Papertrail` -* :ref:`Sumo Logic` - .. _rsyslog_coralogix: Coralogix @@ -216,7 +207,7 @@ Papertrail ~~~~~~~~~~ As `Papertrail `_ identifies the client based on -the server and port  you only need to copy the appropriate values from the +the server and port you only need to copy the appropriate values from the "Log Destinations" page and use those as the values for ``server`` and ``port`` respectively. You **do not need** the ca-bundle as the Papertrail servers use certificates signed by a known CA. You also need to set the format to diff --git a/docs/platform/concepts/choosing-timeseries-database.rst b/docs/platform/concepts/choosing-timeseries-database.rst index 7612206818..b8d9d1061a 100644 --- a/docs/platform/concepts/choosing-timeseries-database.rst +++ b/docs/platform/concepts/choosing-timeseries-database.rst @@ -17,4 +17,4 @@ time series databases in its product portfolio. **high availability.** Find our more about our time series offerings on `our website -`__ . +`__. 
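Editorial aside on the rsyslog hunks above: the ``integration-endpoint-list`` output is plain columnar text, so the endpoint ID can be picked out with standard shell tools instead of copying it by hand. A minimal sketch — the UUID and endpoint name below are the example values from this page, not live resources:

```shell
# Sample output of `avn service integration-endpoint-list`,
# copied from the example above (not a live endpoint).
listing='ENDPOINT_ID                           ENDPOINT_NAME   ENDPOINT_TYPE
====================================  ==============  =============
618fb764-5832-4636-ba26-0d9857222cfd  example-syslog  rsyslog'

# Extract the ID of the endpoint named "example-syslog" with awk,
# ready to pass to `avn service integration-create -D ...`.
endpoint_id=$(printf '%s\n' "$listing" | awk '$2 == "example-syslog" {print $1}')
echo "$endpoint_id"
```

This is only a scripting convenience; copying the ID from the listing by hand works just as well.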
diff --git a/docs/platform/howto/create-service-integration.rst b/docs/platform/howto/create-service-integration.rst index 9b7de9a38f..42bec3e76e 100644 --- a/docs/platform/howto/create-service-integration.rst +++ b/docs/platform/howto/create-service-integration.rst @@ -37,10 +37,10 @@ Create an integration This dashboard is a predefined view that is automatically maintained by Aiven. - .. note:: +.. note:: - It may take a minute to start getting data into to the dashboard view if you just enabled the integrations. The view can be refreshed by reloading in the top-right corner. You can add custom dashboards by either defining them from scratch in Grafana or by saving a copy of the predefined dashboard under a different name that does not start with *Aiven*. + It may take a minute to start getting data into the dashboard view if you just enabled the integrations. The view can be refreshed by reloading in the top-right corner. You can add custom dashboards by either defining them from scratch in Grafana or by saving a copy of the predefined dashboard under a different name that does not start with *Aiven*. .. warning:: - Any changes that you make to the predefined dashboard are eventually automatically overwritten by the system. + Any changes that you make to the predefined dashboard are eventually automatically overwritten by the system. 
diff --git a/docs/platform/reference/project-member-privileges.rst b/docs/platform/reference/project-member-privileges.rst index 27b85e068c..dd95521521 100644 --- a/docs/platform/reference/project-member-privileges.rst +++ b/docs/platform/reference/project-member-privileges.rst @@ -44,28 +44,28 @@ You can grant different levels of access to project members using roles: - Power services on/off - Edit members and roles * - Administrator - - |tick| - - |tick| - - |tick| - - |tick| - - |tick| - - |tick| + - ✅ + - ✅ + - ✅ + - ✅ + - ✅ + - ✅ * - Operator - - |tick| - - |tick| - - |tick| - - |tick| - - |tick| + - ✅ + - ✅ + - ✅ + - ✅ + - ✅ - * - Developer - - |tick| - - |tick| - - |tick| - - |tick| + - ✅ + - ✅ + - ✅ + - ✅ - - * - Read Only - - |tick| + - ✅ - - - diff --git a/docs/products/flink/howto/connect-kafka.rst b/docs/products/flink/howto/connect-kafka.rst index bf3853fe56..d4ceac2304 100644 --- a/docs/products/flink/howto/connect-kafka.rst +++ b/docs/products/flink/howto/connect-kafka.rst @@ -17,8 +17,9 @@ To create a Apache Flink® table based on an Aiven for Apache Kafka® topic via 1. In the Aiven for Apache Flink service page, select **Application** from the left sidebar. 2. Create a new application or select an existing one with Aiven for Apache Kafka integration. -.. note:: - If editing an existing application, create a new version to make changes to the source or sink tables. + .. note:: + + If editing an existing application, create a new version to make changes to the source or sink tables. 3. In the **Create new version** screen, select **Add source tables**. 4. Select **Add new table** or select **Edit** if you want to edit an existing source table. 
diff --git a/docs/products/flink/howto/connect-opensearch.rst b/docs/products/flink/howto/connect-opensearch.rst index 44a963d09e..59116f26a9 100644 --- a/docs/products/flink/howto/connect-opensearch.rst +++ b/docs/products/flink/howto/connect-opensearch.rst @@ -21,7 +21,8 @@ To create an Apache Flink table based on an Aiven for OpenSearch® index via Aiv 2. Create a new application or select an existing one with Aiven for OpenSearch® integration. .. note:: - If editing an existing application, create a new version to make changes to the source or sink tables. + + If editing an existing application, create a new version to make changes to the source or sink tables. 3. In the **Create new version** screen, select **Add sink tables**. diff --git a/docs/products/flink/howto/create-integration.rst b/docs/products/flink/howto/create-integration.rst index 2ddb5b91c5..7cd231e140 100644 --- a/docs/products/flink/howto/create-integration.rst +++ b/docs/products/flink/howto/create-integration.rst @@ -14,21 +14,21 @@ You can easily create Aiven for Apache Flink® data service integrations via the 1. Navigate to the Aiven for Apache Flink® service page. 2. If you are setting up the first integration for the selected Aiven for Apache Flink service, select **Get Started** in the service **Overview** screen. -.. image:: /images/products/flink/integrations-get-started.png - :scale: 80 % - :alt: Image of the Aiven for Apache Flink Overview page with focus on the Get Started Icon + .. image:: /images/products/flink/integrations-get-started.png + :scale: 80 % + :alt: Image of the Aiven for Apache Flink Overview page with focus on the Get Started Icon 3. To configure the data flow with Apache Flink®, select the Aiven for Apache Kafka®, Aiven for PostgreSQL®, or Aiven for OpenSearch® service that you wish to integrate. Click the **Integrate** button to complete the integration process. -.. 
image:: /images/products/flink/integrations-select-services.png - :scale: 50 % - :alt: Image of the Aiven for Apache Flink Integration page showing an Aiven for Apache Kafka® and an Aiven for PostgreSQL® services + .. image:: /images/products/flink/integrations-select-services.png + :scale: 50 % + :alt: Image of the Aiven for Apache Flink Integration page showing an Aiven for Apache Kafka® and an Aiven for PostgreSQL® services 4. You can include additional integrations using the plus(**+**) button in the **Data Flow** section -.. image:: /images/products/flink/integrations-add.png - :scale: 60 % - :alt: Image of the Aiven for Apache Flink Integration page showing an existing Aiven for Apache Kafka integration and the + icon to add additional integrations + .. image:: /images/products/flink/integrations-add.png + :scale: 60 % + :alt: Image of the Aiven for Apache Flink Integration page showing an existing Aiven for Apache Kafka integration and the + icon to add additional integrations diff --git a/docs/products/flink/howto/flink-confluent-avro.rst b/docs/products/flink/howto/flink-confluent-avro.rst index dc6592b089..1e53fd0927 100644 --- a/docs/products/flink/howto/flink-confluent-avro.rst +++ b/docs/products/flink/howto/flink-confluent-avro.rst @@ -29,46 +29,46 @@ Create an Apache Flink® table with Confluent Avro 5. In the **Add new source table** or **Edit source table** screen, select the Aiven for Apache Kafka® service as the integrated service. 6. In the **Table SQL** section, enter the SQL statement below to create an Apache Kafka®-based Apache Flink® table with Confluent Avro: -.. 
code:: sql - - CREATE TABLE kafka ( - -- specify the table columns - ) WITH ( - 'connector' = 'kafka', - 'properties.bootstrap.servers' = '', - 'scan.startup.mode' = 'earliest-offset', - 'topic' = 'my_test.public.students', - 'value.format' = 'avro-confluent', -- the value data format is Confluent Avro - 'value.avro-confluent.url' = 'http://localhost:8082', -- the URL of the schema registry - 'value.avro-confluent.basic-auth.credentials-source' = 'USER_INFO', -- the source of the user credentials for accessing the schema registry - 'value.avro-confluent.basic-auth.user-info' = 'user_info' -- the user credentials for accessing the schema registry - ) - -The following are the parameters: - -* ``connector``: the **Kafka connector type**, between the **Apache Kafka SQL Connector** (value ``kafka``) for standard topic reads/writes and the **Upsert Kafka SQL Connector** (value ``upsert-kafka``) for changelog type of integration based on message key. - - .. note:: - For more information on the connector types and the requirements for each, see the articles on :doc:`Kafka connector types ` and :doc:`the requirements for each connector type `. - -* ``properties.bootstrap.servers``: this parameter can be left empty since the connection details will be retrieved from the Aiven for Apache Kafka integration definition - -* ``topic``: the topic to be used as a source for the data pipeline. If you want to use a new topic that does not yet exist, write the topic name. -* ``value.format``: indicates that the value data format is in the Confluent Avro format. - - .. note:: - The ``key.format`` parameter can also be set to the ``avro-confluent`` format. - -* ``avro-confluent.url``: this is the URL for the Karapace schema registry. -* ``value.avro-confluent.basic-auth.credentials-source``: this specifies the source of the user credentials for accessing the Karapace schema registry. At present, only the ``USER_INFO`` value is supported for this parameter. 
-* ``value.avro-confluent.basic-auth.user-info``: this should be set to the ``user_info`` string you created earlier. + .. code:: sql - .. important:: - To access the Karapace schema registry, the user needs to provide the username and password using the ``user_info`` parameter. The ``user_info`` parameter is a string formatted as ``user_info = f"{username}:{password}"``. - - Additionally, on the source table, the user only needs read permission to the subject containing the schema. However, on the sink table, if the schema does not exist, the user must have write permission for the schema registry. - - It is important to provide this information to authenticate and access the Karapace schema registry. + CREATE TABLE kafka ( + -- specify the table columns + ) WITH ( + 'connector' = 'kafka', + 'properties.bootstrap.servers' = '', + 'scan.startup.mode' = 'earliest-offset', + 'topic' = 'my_test.public.students', + 'value.format' = 'avro-confluent', -- the value data format is Confluent Avro + 'value.avro-confluent.url' = 'http://localhost:8082', -- the URL of the schema registry + 'value.avro-confluent.basic-auth.credentials-source' = 'USER_INFO', -- the source of the user credentials for accessing the schema registry + 'value.avro-confluent.basic-auth.user-info' = 'user_info' -- the user credentials for accessing the schema registry + ) + + The following are the parameters: + + * ``connector``: the **Kafka connector type**, between the **Apache Kafka SQL Connector** (value ``kafka``) for standard topic reads/writes and the **Upsert Kafka SQL Connector** (value ``upsert-kafka``) for changelog type of integration based on message key. + + .. note:: + For more information on the connector types and the requirements for each, see the articles on :doc:`Kafka connector types ` and :doc:`the requirements for each connector type `. 
+ + * ``properties.bootstrap.servers``: this parameter can be left empty since the connection details will be retrieved from the Aiven for Apache Kafka integration definition + + * ``topic``: the topic to be used as a source for the data pipeline. If you want to use a new topic that does not yet exist, write the topic name. + * ``value.format``: indicates that the value data format is in the Confluent Avro format. + + .. note:: + The ``key.format`` parameter can also be set to the ``avro-confluent`` format. + + * ``avro-confluent.url``: this is the URL for the Karapace schema registry. + * ``value.avro-confluent.basic-auth.credentials-source``: this specifies the source of the user credentials for accessing the Karapace schema registry. At present, only the ``USER_INFO`` value is supported for this parameter. + * ``value.avro-confluent.basic-auth.user-info``: this should be set to the ``user_info`` string you created earlier. + + .. important:: + To access the Karapace schema registry, the user needs to provide the username and password using the ``user_info`` parameter. The ``user_info`` parameter is a string formatted as ``user_info = f"{username}:{password}"``. + + Additionally, on the source table, the user only needs read permission to the subject containing the schema. However, on the sink table, if the schema does not exist, the user must have write permission for the schema registry. + + It is important to provide this information to authenticate and access the Karapace schema registry. 7. To create a sink table, select **Add sink tables** and repeat steps 4-6 for sink tables. 8. In the **Create statement** section, create a statement that defines the fields retrieved from each message in a topic. 
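As the ``.. important::`` block in this hunk notes, ``user_info`` is simply the username and password joined by a colon. A quick shell sketch of assembling it — the credentials below are illustrative placeholders, not real ones:

```shell
# user_info is the string "{username}:{password}", as described above.
# Placeholder credentials for illustration only.
username="avnadmin"
password="example-password"
user_info="${username}:${password}"
echo "$user_info"   # avnadmin:example-password
```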
diff --git a/docs/products/flink/howto/manage-flink-tables.rst b/docs/products/flink/howto/manage-flink-tables.rst index 708f6c2898..b389d8bcf9 100644 --- a/docs/products/flink/howto/manage-flink-tables.rst +++ b/docs/products/flink/howto/manage-flink-tables.rst @@ -22,8 +22,9 @@ Follow these steps add a new table to an application using the `Aiven Console `_ images, which require `Docker `_ or `Podman `_ to be executed. + The following example is based on `Docker `_ images, which require `Docker `_ or `Podman `_ to be executed. -The following example assumes you have an Aiven for Apache Kafka® service running. You can create one following the :doc:`dedicated instructions <../getting-started>`. +The following example assumes you have an Aiven for Apache Kafka® service running. You can create one following the :doc:`dedicated instructions `. Fake data generator on Docker @@ -25,39 +25,40 @@ To learn data streaming, you need a continuous flow of data and for that you can 3. Create a new access token via the `Aiven Console `_ or the following command in the :doc:`Aiven CLI `, changing the ``max-age-seconds`` appropriately for the duration of your test: -.. code:: - - avn user access-token create \ - --description "Token used by Fake data generator" \ - --max-age-seconds 3600 \ - --json | jq -r '.[].full_token' + .. code:: + + avn user access-token create \ + --description "Token used by Fake data generator" \ + --max-age-seconds 3600 \ + --json | jq -r '.[].full_token' -.. Tip:: + .. Tip:: - The above command uses ``jq`` (https://stedolan.github.io/jq/) to parse the result of the Aiven CLI command. If you don't have ``jq`` installed, you can remove the ``| jq -r '.[].full_token'`` section from the above command and parse the JSON result manually to extract the access token. + The above command uses ``jq`` (https://stedolan.github.io/jq/) to parse the result of the Aiven CLI command. 
+ If you don't have ``jq`` installed, you can remove the ``| jq -r '.[].full_token'`` section from the above command and parse the JSON result manually to extract the access token. 4. Edit the ``conf/env.conf`` file filling the following placeholders: -* ``my_project_name``: the name of your Aiven project -* ``my_kafka_service_name``: the name of your Aiven for Apache Kafka instance -* ``my_topic_name``: the name of the target topic, can be any name -* ``my_aiven_email``: the email address used as username to log in to Aiven services -* ``my_aiven_token``: the access token generated during the previous step + * ``my_project_name``: the name of your Aiven project + * ``my_kafka_service_name``: the name of your Aiven for Apache Kafka instance + * ``my_topic_name``: the name of the target topic, can be any name + * ``my_aiven_email``: the email address used as username to log in to Aiven services + * ``my_aiven_token``: the access token generated during the previous step 5. Build the Docker image with: -.. code:: + .. code:: - docker build -t fake-data-producer-for-apache-kafka-docker . + docker build -t fake-data-producer-for-apache-kafka-docker . -.. Tip:: + .. Tip:: - Every time you change any parameters in the ``conf/env.conf`` file, you need to rebuild the Docker image to start using them. + Every time you change any parameters in the ``conf/env.conf`` file, you need to rebuild the Docker image to start using them. 6. Start the streaming data flow with: -.. code:: - - docker run fake-data-producer-for-apache-kafka-docker + .. code:: + + docker run fake-data-producer-for-apache-kafka-docker 7. Once the Docker image is running, check in the target Aiven for Apache Kafka® service that the topic is populated. This can be done with the `Aiven Console `_, if the Kafka REST option is enabled, in the *Topics* tab. Alternatively you can use tools like :doc:`kcat ` to achieve the same. 
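Step 4's placeholder editing can also be scripted. A sketch with ``sed`` — note the variable names in the heredoc are stand-ins for whatever ``conf/env.conf`` actually contains in the fake-data-producer repository, and the substituted values are made up:

```shell
# Stand-in for conf/env.conf; the real file's keys may differ.
cat > env.conf <<'EOF'
PROJECT_NAME=my_project_name
KAFKA_SERVICE_NAME=my_kafka_service_name
TOPIC=my_topic_name
USERNAME=my_aiven_email
TOKEN=my_aiven_token
EOF

# Replace the documented placeholders with (made-up) concrete values.
sed -e 's/my_project_name/demo-project/' \
    -e 's/my_kafka_service_name/demo-kafka/' \
    -e 's/my_topic_name/pizza-orders/' \
    env.conf > env.local.conf

grep '^PROJECT_NAME' env.local.conf
```

Remember that, as the tip above says, the Docker image must be rebuilt after any change to ``conf/env.conf``.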
diff --git a/docs/products/kafka/howto/integrate-service-logs-into-kafka-topic.rst b/docs/products/kafka/howto/integrate-service-logs-into-kafka-topic.rst index f7fc15c046..176fd17572 100644 --- a/docs/products/kafka/howto/integrate-service-logs-into-kafka-topic.rst +++ b/docs/products/kafka/howto/integrate-service-logs-into-kafka-topic.rst @@ -37,7 +37,9 @@ Test the integration (with Aiven for Apache Kafka) 3. From the **Topic info** screen, select **Messages**. .. note:: - Alternatively, you can access the messages for a topic by selecting the ellipsis in the row of the topic and choosing **Topic messages**. + + Alternatively, you can access the messages for a topic by selecting the ellipsis in the row of the topic and choosing **Topic messages**. + 4. In the **Messages** screen, select **Fetch Messages** to view the log entries that were sent from your source service. 5. To see the messages in JSON format, use the **FORMAT** drop-down menu and select *json*. diff --git a/docs/products/kafka/howto/kafka-conduktor.rst b/docs/products/kafka/howto/kafka-conduktor.rst index f2217bc6d7..902c3af577 100644 --- a/docs/products/kafka/howto/kafka-conduktor.rst +++ b/docs/products/kafka/howto/kafka-conduktor.rst @@ -13,21 +13,21 @@ Connect to Apache Kafka® with Conduktor * The three files downloaded: access key, access certificate and CA certificate -.. image:: /images/products/kafka/conduktor-config.png - :alt: Screenshot of the cluster configuration screen + .. image:: /images/products/kafka/conduktor-config.png + :alt: Screenshot of the cluster configuration screen -Conduktor will create the keystore and truststore files in the folder that you specified, or you can choose an alternative location. Click the **Create** button and the helper will create the configuration for Conduktor to connect to your Aiven for Apache Kafka service. + Conduktor will create the keystore and truststore files in the folder that you specified, or you can choose an alternative location. 
Click the **Create** button and the helper will create the configuration for Conduktor to connect to your Aiven for Apache Kafka service. 4. Click the **Test Kafka Connectivity** button to check that everything is working as expected. -.. Tip:: + .. Tip:: - If you experience a Java SSL error when testing the connectivity, add the service CA certificate to the list of Conduktor's trusted certificates. + If you experience a Java SSL error when testing the connectivity, add the service CA certificate to the list of Conduktor's trusted certificates. - * Download the **CA Certificate** file to your computer. + * Download the **CA Certificate** file to your computer. - * In the Conduktor application, click the settings dropdown in the bottom right hand side and choose **Network**. - - * On the **Trusted Certificates** tab, select **Import** and then supply the CA certificate file you downloaded. Save the settings. + * In the Conduktor application, click the settings dropdown in the bottom right hand side and choose **Network**. + + * On the **Trusted Certificates** tab, select **Import** and then supply the CA certificate file you downloaded. Save the settings. Once connected, you can visit the `Conduktor documentation `_ to learn more about using this tool. diff --git a/docs/products/kafka/howto/kafka-sasl-auth.rst b/docs/products/kafka/howto/kafka-sasl-auth.rst index 33922d60f7..e17348f3bf 100644 --- a/docs/products/kafka/howto/kafka-sasl-auth.rst +++ b/docs/products/kafka/howto/kafka-sasl-auth.rst @@ -9,9 +9,9 @@ Aiven offers a selection of :doc:`authentication methods for Apache Kafka® <../ 4. Select **Change**. 5. Enable the ``kafka_authentication_methods.sasl`` setting, and then select **Save advanced configuration**. -.. image:: /images/products/kafka/enable-sasl.png - :alt: Enable SASL authentication for Apache Kafka - :width: 100% + .. 
image:: /images/products/kafka/enable-sasl.png + :alt: Enable SASL authentication for Apache Kafka + :width: 100% The **Connection information** at the top of the **Overview** page will now offer the ability to connect via SASL or via Client Certificate. diff --git a/docs/products/kafka/howto/kafka-tools-config-file.rst b/docs/products/kafka/howto/kafka-tools-config-file.rst index e05cff471e..97d9059ab5 100644 --- a/docs/products/kafka/howto/kafka-tools-config-file.rst +++ b/docs/products/kafka/howto/kafka-tools-config-file.rst @@ -28,14 +28,16 @@ Define the configuration file The ``avn service user-kafka-java-creds`` :ref:`Aiven CLI command ` accepts a ``--password`` parameter setting the same password for the truststore, keystore and key -An example of the ``configuration.properties`` content is the following:: - - security.protocol=SSL - ssl.protocol=TLS - ssl.keystore.type=PKCS12 - ssl.keystore.location=client.keystore.p12 - ssl.keystore.password=changeit - ssl.key.password=changeit - ssl.truststore.location=client.truststore.jks - ssl.truststore.password=changeit - ssl.truststore.type=JKS +An example of the ``configuration.properties`` content is the following: + +.. code:: + + security.protocol=SSL + ssl.protocol=TLS + ssl.keystore.type=PKCS12 + ssl.keystore.location=client.keystore.p12 + ssl.keystore.password=changeit + ssl.key.password=changeit + ssl.truststore.location=client.truststore.jks + ssl.truststore.password=changeit + ssl.truststore.type=JKS diff --git a/docs/products/kafka/howto/keystore-truststore.rst b/docs/products/kafka/howto/keystore-truststore.rst index b0269f15ff..ca509988d2 100644 --- a/docs/products/kafka/howto/keystore-truststore.rst +++ b/docs/products/kafka/howto/keystore-truststore.rst @@ -11,30 +11,30 @@ To create these files: 2. Download the **Access Key**, **Access Certificate** and **CA Certificate**. The resulting ``service.key``, ``service.cert`` and ``ca.pem`` are going to be used in the following steps. -.. 
image:: /images/products/kafka/ssl-certificates-download.png - :alt: Download the Access Key, Access Certificate and CA Certificate from the Aiven console + .. image:: /images/products/kafka/ssl-certificates-download.png + :alt: Download the Access Key, Access Certificate and CA Certificate from the Aiven console 3. Use the ``openssl`` utility to create the keystore with the ``service.key`` and ``service.cert`` files downloaded previously: -.. code:: + .. code:: - openssl pkcs12 -export \ + openssl pkcs12 -export \ -inkey service.key \ -in service.cert \ -out client.keystore.p12 \ -name service_key -.. Note:: - The format has to be ``PKCS12`` , which is the default since Java 9. + .. Note:: + The format has to be ``PKCS12``, which is the default since Java 9. 5. Enter a password to protect the keystore and the key, when prompted 6. In the folder where the certificates are stored, use the ``keytool`` utility to create the truststore with the ``ca.pem`` file as input: -.. code:: + .. code:: - keytool -import \ + keytool -import \ -file ca.pem \ -alias CA \ -keystore client.truststore.jks diff --git a/docs/products/kafka/kafka-connect/getting-started.rst b/docs/products/kafka/kafka-connect/getting-started.rst index 20996edb0c..87c43334a9 100644 --- a/docs/products/kafka/kafka-connect/getting-started.rst +++ b/docs/products/kafka/kafka-connect/getting-started.rst @@ -19,10 +19,10 @@ To create a new Aiven for Apache Kafka Connect dedicated service: 5. Select the cloud provider and region on which you want to run your service. - .. note:: The pricing for the same service may vary between - different providers and regions. The service summary on the - right side of the console shows you the pricing for your - selected options. + .. note:: The pricing for the same service may vary between + different providers and regions. The service summary on the + right side of the console shows you the pricing for your + selected options. 6. Select a service plan. 
This defines how many servers and what kind of memory, CPU, and disk resources are allocated to your service. diff --git a/docs/products/kafka/kafka-connect/howto/cassandra-streamreactor-sink.rst b/docs/products/kafka/kafka-connect/howto/cassandra-streamreactor-sink.rst index 6ce4c3a6ca..1c7ff0c029 100644 --- a/docs/products/kafka/kafka-connect/howto/cassandra-streamreactor-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/cassandra-streamreactor-sink.rst @@ -41,15 +41,15 @@ Furthermore you need to collect the following information about the target Cassa * ``TOPIC_LIST``: The list of topics to sink divided by comma * ``KCQL_TRANSFORMATION``: The KCQL syntax to parse the topic data, should be in the format: - :: + .. code:: - INSERT INTO CASSANDRA_TABLE - SELECT LIST_OF_FIELDS - FROM APACHE_KAFKA_TOPIC + INSERT INTO CASSANDRA_TABLE + SELECT LIST_OF_FIELDS + FROM APACHE_KAFKA_TOPIC .. Warning:: - The Cassandra destination table ``CASSANDRA_TABLE`` needs to be created before starting the connector, otherwise the connector task will fail. + The Cassandra destination table ``CASSANDRA_TABLE`` needs to be created before starting the connector, otherwise the connector task will fail. * ``APACHE_KAFKA_HOST``: The hostname of the Apache Kafka service, only needed when using Avro as data format * ``SCHEMA_REGISTRY_PORT``: The Apache Kafka's schema registry port, only needed when using Avro as data format @@ -130,9 +130,9 @@ To create a Apache Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``cassandra_sink.json`` file) in the form. 7. Select **Apply**. -.. Note:: + .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. 
You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. @@ -154,7 +154,10 @@ If you have a topic named ``students`` containing the following data that you wa {"id":3, "name":"carlo", "age": 33} {"id":2, "name":"lucy", "age": 21} -You can sink the ``students`` topic to Cassandra with the following connector configuration, after replacing the placeholders for ``CASSANDRA_HOST``, ``CASSANDRA_PORT``, ``CASSANDRA_USERNAME``, ``CASSANDRA_PASSWORD``, ``CASSANDRA_KEYSTORE``, ``CASSANDRA_KEYSTORE_PASSWORD``, ``CASSANDRA_TRUSTSTORE``, ``CASSANDRA_TRUSTSTORE_PASSWORD`, ``CASSANDRA_KEYSPACE``: +You can sink the ``students`` topic to Cassandra with the following connector configuration, +after replacing the placeholders for ``CASSANDRA_HOST``, ``CASSANDRA_PORT``, ``CASSANDRA_USERNAME``, +``CASSANDRA_PASSWORD``, ``CASSANDRA_KEYSTORE``, ``CASSANDRA_KEYSTORE_PASSWORD``, ``CASSANDRA_TRUSTSTORE``, +``CASSANDRA_TRUSTSTORE_PASSWORD``, ``CASSANDRA_KEYSPACE``. .. code-block:: json diff --git a/docs/products/kafka/kafka-connect/howto/cassandra-streamreactor-source.rst b/docs/products/kafka/kafka-connect/howto/cassandra-streamreactor-source.rst index d563230232..de78bf350b 100644 --- a/docs/products/kafka/kafka-connect/howto/cassandra-streamreactor-source.rst +++ b/docs/products/kafka/kafka-connect/howto/cassandra-streamreactor-source.rst @@ -35,17 +35,17 @@ Furthermore you need to collect the following information about the target Cassa * ``CASSANDRA_KEYSPACE``: The Cassandra keyspace to use to source the data from * ``KCQL_TRANSFORMATION``: The KCQL syntax to parse the topic data, should be in the format: - :: + .. 
code:: - INSERT INTO APACHE_KAFKA_TOPIC - SELECT LIST_OF_FIELDS - FROM CASSANDRA_TABLE - [PK CASSANDRA_TABLE_COLUMN] - [INCREMENTAL_MODE=MODE] + INSERT INTO APACHE_KAFKA_TOPIC + SELECT LIST_OF_FIELDS + FROM CASSANDRA_TABLE + [PK CASSANDRA_TABLE_COLUMN] + [INCREMENTAL_MODE=MODE] -.. Warning:: + .. Warning:: - By default the connector acts in **bulk mode**, extracting all the rows from the Cassandra table on a polling interval and pushing them to the Apache Kafka topic. You can however define **incremental options** by defining the `incremental mode and primary key `_. + By default the connector acts in **bulk mode**, extracting all the rows from the Cassandra table on a polling interval and pushing them to the Apache Kafka topic. You can however define **incremental options** by defining the `incremental mode and primary key `_. * ``APACHE_KAFKA_HOST``: The hostname of the Apache Kafka service, only needed when using Avro as data format * ``SCHEMA_REGISTRY_PORT``: The Apache Kafka's schema registry port, only needed when using Avro as data format @@ -126,17 +126,16 @@ To create a Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``cassandra_source.json`` file) in the form. 7. Select **Apply**. -.. Note:: + .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create new connector**. 9. Verify the connector status under the **Connectors** screen. 10. 
Verify the presence of the data in the target Cassandra service.
-.. Note::
-    You can also create connectors using the :ref:`Aiven CLI command `.
+You can also create connectors using the :ref:`Aiven CLI command `.
 Example: Create a Cassandra source connector
 -------------------------------------------------------
diff --git a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-mongodb.rst b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-mongodb.rst
index b1b5b9cb5d..cb287b625e 100644
--- a/docs/products/kafka/kafka-connect/howto/debezium-source-connector-mongodb.rst
+++ b/docs/products/kafka/kafka-connect/howto/debezium-source-connector-mongodb.rst
@@ -69,11 +69,11 @@ The configuration file contains the following entries:
 * ``tasks.max``: maximum number of tasks to execute in parallel. By default this is 1; the connector can use at most 1 task for each collection defined. Replace ``NR_TASKS`` with the number of parallel tasks based on the number of input collections.
 * ``key.converter`` and ``value.converter``: defines the messages data format in the Apache Kafka topic. The ``io.confluent.connect.avro.AvroConverter`` converter pushes messages in Avro format. To store the messages schema we use Aiven's `Karapace schema registry `_ as specified by the ``schema.registry.url`` parameter and related credentials.
-  .. Note::
+   .. Note::
-    The ``key.converter`` and ``value.converter`` sections are only needed when pushing data in Avro format. If omitted the messages will be defined in JSON format.
+      The ``key.converter`` and ``value.converter`` sections are only needed when pushing data in Avro format. If omitted, the messages will be defined in JSON format.
-    The ``USER_INFO`` is **not** a placeholder, no substitution is needed for that parameter.
+      The ``USER_INFO`` is **not** a placeholder; no substitution is needed for that parameter.
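As a sketch of the converter section described above, the Avro settings might look like the following JSON fragment. This is illustrative only: the uppercase values are placeholders in the same style the document uses, while ``USER_INFO`` stays literal, and the property names follow the standard Confluent ``AvroConverter`` configuration keys.

```json
{
  "key.converter": "io.confluent.connect.avro.AvroConverter",
  "key.converter.schema.registry.url": "https://APACHE_KAFKA_HOST:SCHEMA_REGISTRY_PORT",
  "key.converter.basic.auth.credentials.source": "USER_INFO",
  "key.converter.basic.auth.user.info": "SCHEMA_REGISTRY_USER:SCHEMA_REGISTRY_PASSWORD",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter.schema.registry.url": "https://APACHE_KAFKA_HOST:SCHEMA_REGISTRY_PORT",
  "value.converter.basic.auth.credentials.source": "USER_INFO",
  "value.converter.basic.auth.user.info": "SCHEMA_REGISTRY_USER:SCHEMA_REGISTRY_PASSWORD"
}
```

Note that, as the text above stresses, only the host, port, and credential placeholders need substitution; ``USER_INFO`` is passed through unchanged.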
Create a Kafka Connect connector with the Aiven Console @@ -89,7 +89,7 @@ To create a Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``debezium_source_mongodb.json`` file) in the form. 7. Select **Apply**. - .. note:: + .. note:: The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. @@ -102,6 +102,6 @@ To create a Kafka Connect connector, follow these steps: 8. Verify the connector status under the **Connectors** screen. 9. Verify the presence of the data in the target Apache Kafka topic coming from the MongoDB dataset. The topic name is equal to the concatenation of the database and collection name. If you need to change the target table name, you can do so using the Kafka Connect ``RegexRouter`` transformation. -.. note:: - You can also create connectors using the :ref:`Aiven CLI command `. + +You can also create connectors using the :ref:`Aiven CLI command `. diff --git a/docs/products/kafka/kafka-connect/howto/elasticsearch-sink.rst b/docs/products/kafka/kafka-connect/howto/elasticsearch-sink.rst index 050b8ea8ef..ee46ca3e41 100644 --- a/docs/products/kafka/kafka-connect/howto/elasticsearch-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/elasticsearch-sink.rst @@ -101,9 +101,9 @@ To create a Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``elasticsearch_sink.json`` file) in the form. 7. Select **Apply**. -.. Note:: - - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + .. 
Note:: + + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. diff --git a/docs/products/kafka/kafka-connect/howto/gcs-sink.rst b/docs/products/kafka/kafka-connect/howto/gcs-sink.rst index 249958d247..b7a53d5593 100644 --- a/docs/products/kafka/kafka-connect/howto/gcs-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/gcs-sink.rst @@ -86,9 +86,9 @@ To create a Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``gcs_sink.json`` file) in the form. 7. Select **Apply**. -.. Note:: + .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 7. After all the settings are correctly configured, select **Create connector**. 8. Verify the connector status under the **Connectors** screen. diff --git a/docs/products/kafka/kafka-connect/howto/http-sink.rst b/docs/products/kafka/kafka-connect/howto/http-sink.rst index fa15f924e2..b2621a35dc 100644 --- a/docs/products/kafka/kafka-connect/howto/http-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/http-sink.rst @@ -82,9 +82,9 @@ To create a Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``http_sink.json`` file) in the form. 7. 
Select **Apply**. -.. Note:: + .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. diff --git a/docs/products/kafka/kafka-connect/howto/influx-sink.rst b/docs/products/kafka/kafka-connect/howto/influx-sink.rst index 4d6f83063a..c08e7a694a 100644 --- a/docs/products/kafka/kafka-connect/howto/influx-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/influx-sink.rst @@ -108,9 +108,9 @@ To create a Apache Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``influxdb_sink.json`` file) in the form. 7. Select **Apply**. -.. Note:: + .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. 
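Alongside the console flow described above, Kafka Connect services also expose a REST API whose create-connector call expects the connector name split out from the rest of the configuration. The small sketch below builds that payload from a flat configuration like the one pasted into the console; the connector class and topic name are hypothetical values, while the ``{"name": ..., "config": {...}}`` shape is the standard Kafka Connect REST format.

```python
import json

# Flat connector configuration, as you would paste it into the console
# (connector class and topic are illustrative values only).
flat_config = {
    "name": "influxdb-sink",
    "connector.class": "com.datamountaineer.streamreactor.connect.influx.InfluxSinkConnector",
    "topics": "metrics",
    "tasks.max": "1",
}

# The Kafka Connect REST API (POST /connectors) expects the name split
# out from the configuration: {"name": ..., "config": {...}}.
payload = {
    "name": flat_config["name"],
    "config": {k: v for k, v in flat_config.items() if k != "name"},
}

print(json.dumps(payload, indent=2))
```

The same payload is what the console builds for you from the **Connector configuration** text box.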
diff --git a/docs/products/kafka/kafka-connect/howto/jdbc-sink.rst b/docs/products/kafka/kafka-connect/howto/jdbc-sink.rst
index 05cbdded31..65b6135d43 100644
--- a/docs/products/kafka/kafka-connect/howto/jdbc-sink.rst
+++ b/docs/products/kafka/kafka-connect/howto/jdbc-sink.rst
@@ -21,8 +21,9 @@ To set up a JDBC sink connector, you need an Aiven for Apache Kafka service :doc:
 Furthermore, you need to collect the following information about the target database service upfront:
 * ``DB_CONNECTION_URL``: The database JDBC connection URL, the following are a few examples based on different technologies:
-  * PostgreSQL: ``jdbc:postgresql://HOST:PORT/DB_NAME?sslmode=SSL_MODE``
-  * MySQL: ``jdbc:mysql://HOST:PORT/DB_NAME?ssl-mode=SSL_MODE``
+
+  * PostgreSQL: ``jdbc:postgresql://HOST:PORT/DB_NAME?sslmode=SSL_MODE``
+  * MySQL: ``jdbc:mysql://HOST:PORT/DB_NAME?ssl-mode=SSL_MODE``
 * ``DB_USERNAME``: The database username to connect
 * ``DB_PASSWORD``: The password for the username selected
@@ -35,11 +36,11 @@ Furthermore you need to collect the following information about the target datab
 .. Note::
-  If you're using Aiven for PostgreSQL® and Aiven for MySQL® the above details are available in the `Aiven console `_ service *Overview tab* or via the dedicated ``avn service get`` command with the :ref:`Aiven CLI `.
+   If you're using Aiven for PostgreSQL® and Aiven for MySQL®, the above details are available in the `Aiven console `_ service *Overview* tab or via the dedicated ``avn service get`` command with the :ref:`Aiven CLI `.
-  The ``SCHEMA_REGISTRY`` related parameters are available in the Aiven for Apache Kafka® service page, *Overview* tab, and *Schema Registry* subtab
+   The ``SCHEMA_REGISTRY`` related parameters are available in the Aiven for Apache Kafka® service page, *Overview* tab, and *Schema Registry* subtab
-  As of version 3.0, Aiven for Apache Kafka no longer supports Confluent Schema Registry.
For more information, read `the article describing the replacement, Karapace `_
+   As of version 3.0, Aiven for Apache Kafka no longer supports Confluent Schema Registry. For more information, read `the article describing the replacement, Karapace `_
 Set up a JDBC sink connector with Aiven Console
 ----------------------------------------------------
@@ -85,20 +86,21 @@ The configuration file contains the following entries:
 * ``auto.create``: boolean flag enabling the target table creation if it doesn't exist.
 * ``auto.evolve``: boolean flag enabling the target table modification in cases of schema modification of the messages in the topic.
 * ``insert.mode``: defines the insert mode; it can be:
-  * ``insert``: uses standard ``INSERT`` statements.
-  * ``upsert``: uses the upsert semantics supported by the target database, more information in the `dedicated GitHub repository `__
-  * ``update``: uses the update semantics supported by the target database. E.g. ``UPDATE``, more information in the `dedicated GitHub repository `__
+
+  * ``insert``: uses standard ``INSERT`` statements.
+  * ``upsert``: uses the upsert semantics supported by the target database; more information in the `dedicated GitHub repository `__
+  * ``update``: uses the update semantics supported by the target database, e.g. ``UPDATE``; more information in the `dedicated GitHub repository `__
 * ``delete.enabled``: boolean flag enabling the deletion of rows in the target table on tombstone messages.
-.. Note::
+   .. Note::
-    A tombstone message has:
+      A tombstone message has:
-    * a not null **key**
-    * a null **value**
+      * a not null **key**
+      * a null **value**
-    In case of tombstone messages and ``delete.enabled`` set to ``true``, the JDBC sink connector will delete the row referenced by the message key. If set to ``true``, it requires the ``pk.mode`` to be ``record_key`` to be able to identify the rows to delete.
+ In case of tombstone messages and ``delete.enabled`` set to ``true``, the JDBC sink connector will delete the row referenced by the message key. If set to ``true``, it requires the ``pk.mode`` to be ``record_key`` to be able to identify the rows to delete. * ``pk.mode``: defines the fields to use as primary key. Allowed options are: diff --git a/docs/products/kafka/kafka-connect/howto/mongodb-poll-source-connector.rst b/docs/products/kafka/kafka-connect/howto/mongodb-poll-source-connector.rst index 745e3b38a8..9809454d59 100644 --- a/docs/products/kafka/kafka-connect/howto/mongodb-poll-source-connector.rst +++ b/docs/products/kafka/kafka-connect/howto/mongodb-poll-source-connector.rst @@ -88,9 +88,9 @@ To create a Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``mongodb_source.json`` file) in the form. 7. Select **Apply**. -.. Note:: + .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. diff --git a/docs/products/kafka/kafka-connect/howto/mongodb-sink-lenses.rst b/docs/products/kafka/kafka-connect/howto/mongodb-sink-lenses.rst index 62a1aa3ffc..6c0f1038d5 100644 --- a/docs/products/kafka/kafka-connect/howto/mongodb-sink-lenses.rst +++ b/docs/products/kafka/kafka-connect/howto/mongodb-sink-lenses.rst @@ -112,9 +112,9 @@ To create a Apache Kafka Connect connector, follow these steps: 6. 
Paste the connector configuration (stored in the ``mongodb_sink.json`` file) in the form. 7. Select **Apply**. -.. Note:: + .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. diff --git a/docs/products/kafka/kafka-connect/howto/mqtt-source-connector.rst b/docs/products/kafka/kafka-connect/howto/mqtt-source-connector.rst index 5fb13dfe5d..a5a49b5b45 100644 --- a/docs/products/kafka/kafka-connect/howto/mqtt-source-connector.rst +++ b/docs/products/kafka/kafka-connect/howto/mqtt-source-connector.rst @@ -81,7 +81,7 @@ To create a Apache Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``mqtt_source.json`` file) in the form. 7. Select **Apply**. -To create the connector, access the `Aiven Console `_ and select the Aiven for Apache Kafka® or Aiven for Apache Kafka® Connect service where the connector needs to be defined, then: + To create the connector, access the `Aiven Console `_ and select the Aiven for Apache Kafka® or Aiven for Apache Kafka® Connect service where the connector needs to be defined, then: .. 
Note:: diff --git a/docs/products/kafka/kafka-connect/howto/opensearch-sink.rst b/docs/products/kafka/kafka-connect/howto/opensearch-sink.rst index 888985cc6f..9bd8f5f241 100644 --- a/docs/products/kafka/kafka-connect/howto/opensearch-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/opensearch-sink.rst @@ -104,9 +104,9 @@ To create a Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``opensearch_sink.json`` file) in the form. 7. Select **Apply**. -.. Note:: + .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. diff --git a/docs/products/kafka/kafka-connect/howto/redis-streamreactor-sink.rst b/docs/products/kafka/kafka-connect/howto/redis-streamreactor-sink.rst index fbb275e15b..b54178de2d 100644 --- a/docs/products/kafka/kafka-connect/howto/redis-streamreactor-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/redis-streamreactor-sink.rst @@ -107,10 +107,10 @@ To create a Apache Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``redis_sink.json`` file) in the form. 7. Select **Apply**. -.. Note:: - - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + .. 
Note:: + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. 10. Verify the presence of the data in the target Redis service. diff --git a/docs/products/kafka/kafka-connect/howto/snowflake-sink.rst b/docs/products/kafka/kafka-connect/howto/snowflake-sink.rst index 07af860ac6..7932bc52b6 100644 --- a/docs/products/kafka/kafka-connect/howto/snowflake-sink.rst +++ b/docs/products/kafka/kafka-connect/howto/snowflake-sink.rst @@ -106,17 +106,17 @@ To create a Kafka Connect connector, follow these steps: 6. Paste the connector configuration (stored in the ``snowflake_sink.json`` file) in the form. 7. Select **Apply**. -.. Note:: + .. Note:: - The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. + The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tab and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box. 8. After all the settings are correctly configured, select **Create connector**. 9. Verify the connector status under the **Connectors** screen. 10. Verify the presence of the data in the target Snowflake database. -.. Note:: + .. Note:: - You can also create connectors using the :ref:`Aiven CLI command `. + You can also create connectors using the :ref:`Aiven CLI command `. 
Example: Create a Snowflake sink connector on a topic in Avro format
@@ -130,12 +130,14 @@ The example creates a Snowflake sink connector with the following properties:
 * Snowflake schema: ``testschema``
 * Snowflake URL: ``XX0000.eu-central-1.snowflakecomputing.com``
 * Snowflake user: ``testuser``
-* User private key::
+* User private key:
+
+  .. code::
-    XXXXXXXYYY
-    ZZZZZZZZZZ
-    KKKKKKKKKK
-    YY
+    XXXXXXXYYY
+    ZZZZZZZZZZ
+    KKKKKKKKKK
+    YY
 * User private key passphrase: ``password123``
diff --git a/docs/products/kafka/kafka-connect/howto/splunk-sink.rst b/docs/products/kafka/kafka-connect/howto/splunk-sink.rst
index bb146b7d59..e7335839d9 100644
--- a/docs/products/kafka/kafka-connect/howto/splunk-sink.rst
+++ b/docs/products/kafka/kafka-connect/howto/splunk-sink.rst
@@ -82,17 +82,17 @@ To create an Apache Kafka Connect connector, follow these steps:
 6. Paste the connector configuration (stored in the ``splunk_sink.json`` file) in the form.
 7. Select **Apply**.
-.. Note::
+   .. Note::
-    The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box.
+      The Aiven Console parses the configuration file and fills the relevant UI fields. You can review the UI fields across the various tabs and change them if necessary. The changes will be reflected in JSON format in the **Connector configuration** text box.
 7. After all the settings are correctly configured, select **Create connector**.
 8. Verify the connector status under the **Connectors** screen.
 9. Verify the data in the target Splunk instance.
-.. Note::
+   .. Note::
-    You can also create connectors using the :ref:`Aiven CLI command `.
+      You can also create connectors using the :ref:`Aiven CLI command `.
Example: Create a simple Splunk sink connector ---------------------------------------------- diff --git a/docs/products/postgresql/concepts/pg-connection-pooling.rst b/docs/products/postgresql/concepts/pg-connection-pooling.rst index 4cd513da4f..4d859c52d9 100644 --- a/docs/products/postgresql/concepts/pg-connection-pooling.rst +++ b/docs/products/postgresql/concepts/pg-connection-pooling.rst @@ -80,7 +80,7 @@ Adding a PgBouncer pooler that utilizes fewer backend connections frees up serve end -Instead of having dedicated connections per client, now PgBouncer manages the connections assignment optimising them based on client request and settings like the :ref:`pooling-modes`. +Instead of having dedicated connections per client, now PgBouncer manages the connections assignment optimising them based on client request and settings like the pooling modes. .. Tip:: Many frameworks and libraries (ORMs, Django, Rails, etc.) support client-side pooling, which solves much the same problem. However, when there are many distributed applications or devices accessing the same database, a server-side solution is a better approach. diff --git a/docs/tools/kubernetes.rst b/docs/tools/kubernetes.rst index 8014be69a3..3a28047ff2 100644 --- a/docs/tools/kubernetes.rst +++ b/docs/tools/kubernetes.rst @@ -6,9 +6,6 @@ Aiven Operator for Kubernetes® allows users to manage Aiven services through th .. note:: Only Aiven for PostgreSQL®, Aiven for Apache Kafka®, Aiven for ClickHouse®, Aiven for Redis®*, and Aiven for OpenSearch® are supported at this time. - -| - Kubernetes, also known as K8s, is an open-source system for automating deployment, scaling, and management of containerized applications. Kubernetes Operators are software extensions to Kubernetes that make use of custom resources to manage applications and their components. `Kubernetes website `_. 
Getting started
@@ -166,7 +163,7 @@ Create a file named ``pod-psql.yaml`` with the content below:
     - secretRef:
         name: pg-connection
-The connection information – in this case, the PostgreSQL service URI – is automatically created by the operator within a Kubernetes secret named after the value from the ``connInfoSecretTarget.name`` field.
+The connection information, in this case the PostgreSQL service URI, is automatically created by the operator within a Kubernetes secret named after the value from the ``connInfoSecretTarget.name`` field.
 Go ahead and run ``apply`` to create the pod and test the connection:
diff --git a/docs/tools/terraform/howto/promote-to-master-pg-rr.rst b/docs/tools/terraform/howto/promote-to-master-pg-rr.rst
index fdf17e8043..73cb615a38 100644
--- a/docs/tools/terraform/howto/promote-to-master-pg-rr.rst
+++ b/docs/tools/terraform/howto/promote-to-master-pg-rr.rst
@@ -1,5 +1,5 @@
-Terraform to apply "promote to master" on PostgreSQL® read-only replica
-#######################################################################
+Promote PostgreSQL® read-only replica to primary
+#################################################
 On the Aiven console, if you use "service integrations" to create a read-only replica from an existing PostgreSQL or MySQL service, there is an option to promote the read-only replica service to master using the **Promote to master** button under the *Overview* tab. While the Terraform documentation does not explicitly mention how to promote the read-only replica to master, you can remove the service integration between services to accomplish the task.
diff --git a/docs/tutorials/anomaly-detection.rst b/docs/tutorials/anomaly-detection.rst
index e5cf1868a6..dc95f360cc 100644
--- a/docs/tutorials/anomaly-detection.rst
+++ b/docs/tutorials/anomaly-detection.rst
@@ -184,7 +184,7 @@ After creating the service, you'll be redirected to the service details page.
You can define the service integrations, in the Aiven for Apache Flink® **Overview** tab, with the following steps:
-1. Click **Get started** on the banner at the top of the *Overview* page.
+1. Click **Get started** on the banner at the top of the **Overview** page.
 .. image:: /images/tutorials/anomaly-detection/flink-console-integration.png
    :alt: Aiven for Apache Flink Overview tab, showing the **Get started** button
 2. Select **Aiven for Apache Kafka®** and then select the ``demo-kafka`` service.
 3. Click **Integrate**.
 4. Click the **+** icon under *Data Flow*.
-5. Check the **Aiven for PostgreSQL** checkbox in the `Aiven Data Services` section.
+5. Check the **Aiven for PostgreSQL** checkbox in the ``Aiven Data Services`` section.
 6. Select **Aiven for PostgreSQL®** and then select the ``demo-postgresql`` service.
 7. Click **Integrate**.
diff --git a/includes/services-memory-capped.rst b/includes/services-memory-capped.rst
index 82fbac0b6a..964cfd2dec 100644
--- a/includes/services-memory-capped.rst
+++ b/includes/services-memory-capped.rst
@@ -12,7 +12,7 @@ This **service memory** can be calculated as: |service_memory|
 .. important::
-   | Reserved memory for non-service use is capped to a maximum of 4GB.
-   | For MySQL, a 600MB minimum is always guaranteed.
+   Reserved memory for non-service use is capped to a maximum of 4GB.
+   For MySQL, a 600MB minimum is always guaranteed.
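The two memory rules quoted in that include can be illustrated with a small sketch. This is a hypothetical helper, not Aiven's actual formula: the real reserved-memory calculation is the ``|service_memory|`` substitution, which is not shown in this excerpt, so the function takes the nominal overhead as an input, applies the 4 GB cap, and reads the MySQL 600 MB guarantee as a floor on service memory.

```python
def service_memory_gb(total_ram_gb: float, overhead_gb: float, mysql: bool = False) -> float:
    """Illustrative only: apply the two rules from the include above."""
    # Reserved memory for non-service use is capped to a maximum of 4 GB.
    reserved = min(overhead_gb, 4.0)
    service = total_ram_gb - reserved
    # For MySQL, a 600 MB minimum is always guaranteed (read here as a
    # floor on the memory left for the service itself).
    if mysql:
        service = max(service, 0.6)
    return service

# A 16 GB plan with 6 GB of nominal overhead still keeps 12 GB for the
# service, because the overhead is capped at 4 GB.
print(service_memory_gb(16, 6))
```

On very small plans the MySQL floor dominates: ``service_memory_gb(1, 0.5, mysql=True)`` comes out at 0.6 GB rather than 0.5 GB.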