From 319b67726312c2710ca191800c7c26d4350d4b0d Mon Sep 17 00:00:00 2001 From: Arthur <28596581+ArthurFlageul@users.noreply.github.com> Date: Fri, 24 Nov 2023 13:42:09 +0100 Subject: [PATCH] Fix: pipes and unwanted chars (#2288) --- .../aiven-node-firewall-configuration.rst | 2 +- .../concepts/enhanced-compliance-env.rst | 6 +- .../access-jmx-metrics-jolokia.rst | 12 +- .../howto/integrations/prometheus-metrics.rst | 26 ++-- docs/platform/howto/use-aws-privatelinks.rst | 138 +++++++++--------- docs/platform/howto/vnet-peering-azure.rst | 2 +- docs/products/clickhouse.rst | 2 +- .../concepts/horizontal-vertical-scaling.rst | 14 +- docs/products/kafka/howto/kcat.rst | 10 +- .../kafka/howto/prevent-full-disks.rst | 5 +- .../kafka/howto/viewing-resetting-offset.rst | 14 +- .../manage-kafka-rest-proxy-authorization.rst | 2 +- .../mysql/concepts/mysql-memory-usage.rst | 4 +- .../products/mysql/howto/connect-from-cli.rst | 2 +- docs/products/opensearch/concepts/indices.rst | 6 +- .../postgresql/concepts/timescaledb.rst | 2 +- .../howto/migrate-using-bucardo.rst | 60 ++++---- docs/products/postgresql/howto/pagila.rst | 2 +- .../howto/create-manage-teams.rst | 14 +- docs/tutorials/anomaly-detection.rst | 9 +- 20 files changed, 163 insertions(+), 169 deletions(-) diff --git a/docs/platform/concepts/aiven-node-firewall-configuration.rst b/docs/platform/concepts/aiven-node-firewall-configuration.rst index 2cae02b386..d9be72c682 100644 --- a/docs/platform/concepts/aiven-node-firewall-configuration.rst +++ b/docs/platform/concepts/aiven-node-firewall-configuration.rst @@ -7,7 +7,7 @@ Intra-node connections are limited to point-to-point connections to specific IP Service ports that you can connect to depend on the service type and deployment type. The configuration can also affect the ports that are available: * Is the service in a public network, :doc:`dedicated VPC `, virtual cloud account, or a :doc:`Bring Your Own Cloud (BYOC) ` setup ? -* Have you configured IP ranges in  user_config.ip_filter? +* Have you configured IP ranges in user_config.ip_filter? * Have you :doc:`enabled public internet access for services in a VPC `? Commonly opened ports diff --git a/docs/platform/concepts/enhanced-compliance-env.rst b/docs/platform/concepts/enhanced-compliance-env.rst index 24f4eaa15f..f7095d17bf 100644 --- a/docs/platform/concepts/enhanced-compliance-env.rst +++ b/docs/platform/concepts/enhanced-compliance-env.rst @@ -2,7 +2,7 @@ Enhanced compliance environments (ECE) =========================================== As a business that collects, manages, and operates on sensitive data that is protected by privacy and -compliance rules and regulations – any vendor that assists with this collection, management and +compliance rules and regulations - any vendor that assists with this collection, management and operation is subject to these same rules and regulations. Aiven meets the needs of these businesses by providing specialized enhanced compliance environments (ECE) that comply with many of the most common compliance requirements. @@ -12,7 +12,7 @@ compliance requirement that no ECE VPC is shared and the managed environment is from the standard Aiven deployment environment. This decreases the blast radius of the environment to prevent inadvertent data sharing. Furthermore, users of an ECE **must** encrypt all data prior to reaching an Aiven service. As part of the increased compliance of the environment, enhanced logging -is enabled for – ``stderr``, ``stout``, and ``stdin``. 
+is enabled for - ``stderr``, ``stdout``, and ``stdin``.
 
 Who is eligible?
 ----------------
@@ -73,7 +73,7 @@ Migrating
 ----------------
 Migrations to Aiven are fairly straightforward in general, but migrating to an ECE can add
 a tiny bit of complexity. If the migration is for a new service there are a few standard
-migration methods that will work – please contact `sales `_ and a Solution Architect
+migration methods that will work - please contact `sales `_ and a Solution Architect
 will be able to help.
 
 If you need to migrate an existing Aiven service to an ECE the standard automated migration
diff --git a/docs/platform/howto/integrations/access-jmx-metrics-jolokia.rst b/docs/platform/howto/integrations/access-jmx-metrics-jolokia.rst
index 2b5d2a11bb..c981f07e0f 100644
--- a/docs/platform/howto/integrations/access-jmx-metrics-jolokia.rst
+++ b/docs/platform/howto/integrations/access-jmx-metrics-jolokia.rst
@@ -61,9 +61,9 @@ Ensure that you use port 6733, the default port for Jolokia. Replace ``joljkr2l:
 .. code-block:: shell
 
     curl --cacert ca.pem \
-     -X POST \
-     https://joljkr2l:PWD@HOST_IP:6733/jolokia/  \
-     -d \
+    -X POST \
+    https://joljkr2l:PWD@HOST_IP:6733/jolokia/ \
+    -d \
     '{"type":"read","mbean":"kafka.server:type=ReplicaManager,name=PartitionCount"}'
 
 Jolokia supports searching beans using the ``search`` command:
 
 .. code-block:: shell
 
     curl --cacert ca.pem \
-     -X POST \
-     https://joljkr2l:PWD@HOST_IP:6733/jolokia/  \
-     -d \
+    -X POST \
+    https://joljkr2l:PWD@HOST_IP:6733/jolokia/ \
+    -d \
     '{"type":"search","mbean":"kafka.server:*"}'
 
 
diff --git a/docs/platform/howto/integrations/prometheus-metrics.rst b/docs/platform/howto/integrations/prometheus-metrics.rst
index 8e78b38a92..23e940eec9 100644
--- a/docs/platform/howto/integrations/prometheus-metrics.rst
+++ b/docs/platform/howto/integrations/prometheus-metrics.rst
@@ -22,9 +22,9 @@ Check Prometheus support for your service
 
 Usually one Prometheus integration endpoint can be used for all services in the same project. To check if Prometheus is supported on your service, you need to verify if the project for this service has a Prometheus integration endpoint created. For this purpose, take the following steps:
 
-#. | Log in to `Aiven Console `_, go to **Projects** in the top navigation bar, and select your project from the dropdown list.
-#. | On the **Services** page, select **Integration endpoints** from the left sidebar.
-#. | On the **Integration endpoints** page, select **Prometheus** from the list available integration endpoints, and check if there is any endpoint available under **Endpoint Name**.
+#. Log in to `Aiven Console `_, go to **Projects** in the top navigation bar, and select your project from the dropdown list.
+#. On the **Services** page, select **Integration endpoints** from the left sidebar.
+#. On the **Integration endpoints** page, select **Prometheus** from the list of available integration endpoints, and check if there is any endpoint available under **Endpoint Name**.
 
 If there is a Prometheus endpoint available, your service supports Prometheus. If there's no Prometheus endpoint available, proceed to :ref:`Enable Prometheus on your Aiven project ` to set up Prometheus for your service (project).
 
@@ -35,21 +35,21 @@ Enable Prometheus
 
 Aiven offers Prometheus endpoints for your services. To enable this
 feature, take the following steps:
 
-#. 
| Log in to `Aiven Console `_, go to **Projects** in the top navigation bar, and select your project from the dropdown list.
-#. | On the **Services** page, select **Integration endpoints** from the left sidebar.
-#. | On the **Integration endpoints** page, select **Prometheus** from the list available integration endpoints, and select **Add new endpoint**.
-#. | In the **Create new Prometheus endpoint** window, enter the details for the endpoint, and select **Create**.
-#. | Select **Services** from the sidebar, and navigate to the service that you would like to monitor.
-#. | On the **Overview** page of your service, go to the **Service integrations** section, and select **Manage integrations**.
-#. | On the **Integrations** page, select **Prometheus**.
-#. | In the **Prometheus integration** window, select the endpoint name you created from the dropdown list, and select **Enable**.
+#. Log in to `Aiven Console `_, go to **Projects** in the top navigation bar, and select your project from the dropdown list.
+#. On the **Services** page, select **Integration endpoints** from the left sidebar.
+#. On the **Integration endpoints** page, select **Prometheus** from the list of available integration endpoints, and select **Add new endpoint**.
+#. In the **Create new Prometheus endpoint** window, enter the details for the endpoint, and select **Create**.
+#. Select **Services** from the sidebar, and navigate to the service that you would like to monitor.
+#. On the **Overview** page of your service, go to the **Service integrations** section, and select **Manage integrations**.
+#. On the **Integrations** page, select **Prometheus**.
+#. In the **Prometheus integration** window, select the endpoint name you created from the dropdown list, and select **Enable**.
 
 .. note::
 
    At the top of the **Integrations** page, you will see the Prometheus integration listed with the status ``active``.
 
-#. | From the **Integrations** page, go to the **Overview** page > the **Connection information** section > the **Prometheus** tab.
-#. | Copy **Service URI**, and use it in your browser to access the Prometheus dashboard.
+#. From the **Integrations** page, go to the **Overview** page > the **Connection information** section > the **Prometheus** tab.
+#. Copy **Service URI**, and use it in your browser to access the Prometheus dashboard.
 
 .. topic:: Result
 
diff --git a/docs/platform/howto/use-aws-privatelinks.rst b/docs/platform/howto/use-aws-privatelinks.rst
index 1675f4d87f..fa9fa595af 100644
--- a/docs/platform/howto/use-aws-privatelinks.rst
+++ b/docs/platform/howto/use-aws-privatelinks.rst
@@ -25,27 +25,27 @@ AWS PrivateLink. You also need the AWS console or CLI to create a VPC endpoint.
 
 **Note:** Aiven for Apache Cassandra® and Aiven for M3 services do not
 currently support AWS PrivateLink.
 
-#. | Create an AWS PrivateLink resource on the Aiven service:
+#. Create an AWS PrivateLink resource on the Aiven service:
 
-   | The Amazon Resource Name (ARN) for the principals that are allowed
-     to connect to the VPC endpoint service and the AWS network load
-     balancer requires your Amazon account ID. In addition, you can set
-     the access scope for an entire AWS account, a given user account,
-     or a given role. Only give permissions to roles that you trust, as
-     an allowed role can connect from any VPC.
+   The Amazon Resource Name (ARN) for the principals that are allowed
+   to connect to the VPC endpoint service and the AWS network load
+   balancer requires your Amazon account ID. 
In addition, you can set
+   the access scope for an entire AWS account, a given user account,
+   or a given role. Only give permissions to roles that you trust, as
+   an allowed role can connect from any VPC.
 
    - Using the Aiven CLI, run the following command including your AWS account ID, the access scope, and the name of your Aiven service:
 
      .. code::
 
-        $ avn service privatelink aws create --principal arn:aws:iam::$AWS_account_ID:$access_scope $Aiven_service_name
+        avn service privatelink aws create --principal arn:aws:iam::$AWS_account_ID:$access_scope $Aiven_service_name
 
      For example:
 
      .. code::
 
-        $ avn service privatelink aws create --principal arn:aws:iam::012345678901:user/mwf my-kafka
+        avn service privatelink aws create --principal arn:aws:iam::012345678901:user/mwf my-kafka
 
    - Using `Aiven Console `__:
 
@@ -58,54 +58,56 @@ currently support AWS PrivateLink.
 
      #. In the **Create Privatelink** window, enter the Amazon Resource Names (ARN) for the principals that you want to use, and select **Create**.
 
-   | This creates an AWS network load balancer dedicated to your Aiven
-     service and attaches it to an AWS VPC endpoint service that you can
-     later use to connect to your account's VPC endpoint.
-
-   | The PrivateLink resource stays in the initial ``creating`` state
-     for up to a few minutes while the load balancer is being launched.
-     After the load balancer and VPC endpoint service have been created,
-     the state changes to ``active`` and the ``aws_service_id`` and
-     ``aws_service_name`` values are set.
+   This creates an AWS network load balancer dedicated to your Aiven
+   service and attaches it to an AWS VPC endpoint service that you can
+   later use to connect to your account's VPC endpoint.
+
+   The PrivateLink resource stays in the initial ``creating`` state
+   for up to a few minutes while the load balancer is being launched.
+   After the load balancer and VPC endpoint service have been created,
+   the state changes to ``active`` and the ``aws_service_id`` and
+   ``aws_service_name`` values are set.
 
 #. In the AWS CLI, run the following command to create a VPC endpoint:
 
    .. code::
 
-      $ aws ec2 --region eu-west-1 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id $your_vpc_id --subnet-ids $space_separated_list_of_subnet_ids --security-group-ids $security_group_ids --service-name com.amazonaws.vpce.eu-west-1.vpce-svc-0b16e88f3b706aaf1
+      aws ec2 --region eu-west-1 create-vpc-endpoint --vpc-endpoint-type Interface --vpc-id $your_vpc_id --subnet-ids $space_separated_list_of_subnet_ids --security-group-ids $security_group_ids --service-name com.amazonaws.vpce.eu-west-1.vpce-svc-0b16e88f3b706aaf1
 
-   | 
-   | Replace the ``--service-name`` value with the value shown next to
-     **Network** > **AWS service name** in `Aiven Console `__ or by
-     running the following command in the Aiven CLI:
+
+   Replace the ``--service-name`` value with the value shown next to
+   **Network** > **AWS service name** in `Aiven Console `__ or by
+   running the following command in the Aiven CLI:
 
-   .. code::
+      .. code::
 
-      $ avn service privatelink aws get aws_service_name
+         avn service privatelink aws get aws_service_name
 
-   | 
-   | Note that for fault tolerance, you should specify a subnet ID for
-     each availability zone in the region. The security groups determine
-     the instances that are allowed to connect to the endpoint network
-     interfaces created by AWS into the specified subnets.
+
+   Note that for fault tolerance, you should specify a subnet ID for
+   each availability zone in the region. 
The security groups determine
+   the instances that are allowed to connect to the endpoint network
+   interfaces created by AWS into the specified subnets.
 
-   | Alternatively, you can create the VPC endpoint in `AWS Console `__ under **VPC** > **Endpoints** > **Create endpoint** . See the `AWS documentation `__ for details.
+   Alternatively, you can create the VPC endpoint in `AWS Console `__ under **VPC** > **Endpoints** > **Create endpoint**. See the `AWS documentation `__ for details.
 
-   | **Note:** For Aiven for Apache Kafka® services, the security group
-     for the VPC endpoint must allow ingress in the port range
-     ``10000-31000`` to accommodate the pool of Kafka broker ports used
-     in our PrivateLink implementation.
+   .. note::
+
+      For Aiven for Apache Kafka® services, the security group
+      for the VPC endpoint must allow ingress in the port range
+      ``10000-31000`` to accommodate the pool of Kafka broker ports used
+      in our PrivateLink implementation.
 
-   | It takes a while before the endpoint is ready to use as AWS
-     provisions network interfaces to each of the subnets and connects
-     them to the Aiven VPC endpoint service. Once the AWS endpoint state
-     changes to ``available`` , the connection is visible in Aiven.
+   It takes a while before the endpoint is ready to use as AWS
+   provisions network interfaces to each of the subnets and connects
+   them to the Aiven VPC endpoint service. Once the AWS endpoint state
+   changes to ``available``, the connection is visible in Aiven.
 
-#. | Enable PrivateLink access for Aiven service components:
+#. Enable PrivateLink access for Aiven service components:
 
-   | You can control each service component separately - for example,
-     you can enable PrivateLink access for Kafka while allowing Kafka
-     Connect to connect via VPC peering connections only.
+   You can control each service component separately - for example,
+   you can enable PrivateLink access for Kafka while allowing Kafka
+   Connect to connect via VPC peering connections only.
 
    - In the Aiven CLI, set ``user_config.privatelink_access.`` to
      ``true``
 
@@ -113,10 +115,10 @@
 
      .. code::
 
-        $ avn service update -c privatelink_access.kafka=true $Aiven_service_name
-        $ avn service update -c privatelink_access.kafka_connect=true $Aiven_service_name
-        $ avn service update -c privatelink_access.kafka_rest=true $Aiven_service_name
-        $ avn service update -c privatelink_access.schema_registry=true $Aiven_service_name
+        avn service update -c privatelink_access.kafka=true $Aiven_service_name
+        avn service update -c privatelink_access.kafka_connect=true $Aiven_service_name
+        avn service update -c privatelink_access.kafka_rest=true $Aiven_service_name
+        avn service update -c privatelink_access.schema_registry=true $Aiven_service_name
 
    - In `Aiven Console `__:
 
@@ -163,32 +165,32 @@ To acquire connection information for your service component using AWS PrivateLi
 
 * For SSL connection information for your service component using AWS PrivateLink, run the following command:
 
-.. code-block:: bash
-
-   avn service connection-info UTILITY_NAME SERVICE_NAME --privatelink-connection-id PRIVATELINK_CONNECTION_ID
-
+  .. code-block:: bash
+
+     avn service connection-info UTILITY_NAME SERVICE_NAME --privatelink-connection-id PRIVATELINK_CONNECTION_ID
+
 .. topic:: Where
-
-  * UTILITY_NAME for Aiven for Apache Kafka®, for example, can be ``kcat``.
-  * SERVICE_NAME for Aiven for Apache Kafka®, for example, can be ``kafka-12a3b4c5``.
-  * PRIVATELINK_CONNECTION_ID can be ``plc39413abcdef``. 
+ + * UTILITY_NAME for Aiven for Apache Kafka®, for example, can be ``kcat``. + * SERVICE_NAME for Aiven for Apache Kafka®, for example, can be ``kafka-12a3b4c5``. + * PRIVATELINK_CONNECTION_ID can be ``plc39413abcdef``. * For SASL connection information for Aiven for Apache Kafka® service components using AWS PrivateLink, run the following command: -.. code-block:: bash - - avn service connection-info UTILITY_NAME SERVICE_NAME --privatelink-connection-id PRIVATELINK_CONNECTION_ID -a sasl - + .. code-block:: bash + + avn service connection-info UTILITY_NAME SERVICE_NAME --privatelink-connection-id PRIVATELINK_CONNECTION_ID -a sasl + .. topic:: Where - - * UTILITY_NAME for Aiven for Apache Kafka®, for example, can be ``kcat``. - * SERVICE_NAME for Aiven for Apache Kafka®, for example, can be ``kafka-12a3b4c5``. - * PRIVATELINK_CONNECTION_ID can be ``plc39413abcdef``. - + + * UTILITY_NAME for Aiven for Apache Kafka®, for example, can be ``kcat``. + * SERVICE_NAME for Aiven for Apache Kafka®, for example, can be ``kafka-12a3b4c5``. + * PRIVATELINK_CONNECTION_ID can be ``plc39413abcdef``. + .. note:: - + SSL certificates and SASL credentials are the same for all the connections. You can use the same credentials with any access route. - + .. _h_2a1689a687: Update the allowed principals list @@ -203,7 +205,7 @@ allowed to connect a VPC endpoint: # avn service privatelink aws update --principal arn:aws:iam::$AWS_account_ID:$access_scope $Aiven_service_name - | **Note:** When you add an entry, also include the ``--principal`` arguments for existing entries. + **Note:** When you add an entry, also include the ``--principal`` arguments for existing entries. - In `Aiven Console `__: @@ -226,7 +228,7 @@ Deleting a privatelink connection .. code:: - $ avn service privatelink aws delete $Aiven_service_name + avn service privatelink aws delete $Aiven_service_name .. code:: diff --git a/docs/platform/howto/vnet-peering-azure.rst b/docs/platform/howto/vnet-peering-azure.rst index 6486fbcc4e..6a675188c1 100644 --- a/docs/platform/howto/vnet-peering-azure.rst +++ b/docs/platform/howto/vnet-peering-azure.rst @@ -58,7 +58,7 @@ is not needed if there's only one subscription: .. code:: - az account set --subscription   + az account set --subscription 2. create application object diff --git a/docs/products/clickhouse.rst b/docs/products/clickhouse.rst index e168e45ccf..5667e0b532 100644 --- a/docs/products/clickhouse.rst +++ b/docs/products/clickhouse.rst @@ -1,7 +1,7 @@ Aiven for ClickHouse® ===================== -Aiven for ClickHouse® is a fully managed distributed columnar database based on open source ClickHouse – a fast, resource effective solution tailored for data warehouse and generation of real-time analytical data reports using advanced SQL queries. +Aiven for ClickHouse® is a fully managed distributed columnar database based on open source ClickHouse - a fast, resource effective solution tailored for data warehouse and generation of real-time analytical data reports using advanced SQL queries. 
-------------------
 
diff --git a/docs/products/kafka/concepts/horizontal-vertical-scaling.rst b/docs/products/kafka/concepts/horizontal-vertical-scaling.rst
index 099b20e43d..d643e0a14e 100644
--- a/docs/products/kafka/concepts/horizontal-vertical-scaling.rst
+++ b/docs/products/kafka/concepts/horizontal-vertical-scaling.rst
@@ -2,7 +2,7 @@ Scaling options in Apache Kafka®
 ================================
 
 Aiven for Apache Kafka® has a number of predefined plans that specify the
-number of brokers and the capacity of individual brokers. The predefined plans consist of 3, 6, 9, 15, or 30 brokers, but we can also create 
+number of brokers and the capacity of individual brokers. The predefined plans consist of 3, 6, 9, 15, or 30 brokers, but we can also create
 larger custom plans based on customer requirements.
 
 To increase the capacity of an existing Kafka cluster, two options are
@@ -16,9 +16,9 @@ Both scaling options are available for all Aiven for Apache Kafka customers and
 
 .. Note::
 
-    When you change the service plan, Aiven automatically starts adding new brokers with the new specifications to your existing cluster. Once the new brokers are online and the data is replicated to them from the older nodes, the old brokers are retired one by one. 
+    When you change the service plan, Aiven automatically starts adding new brokers with the new specifications to your existing cluster. Once the new brokers are online and the data is replicated to them from the older nodes, the old brokers are retired one by one.
 
-Aiven recommends to use both the vertical and horizontal scaling capabilities of Aiven for Apache Kafka to achieve the best possible performance and fault tolerance.  
+Aiven recommends using both the vertical and horizontal scaling capabilities of Aiven for Apache Kafka to achieve the best possible performance and fault tolerance.
 
 .. Tip::
 
@@ -28,7 +28,7 @@
 Vertical scaling
 ----------------
 
-Scaling vertically a Kafka cluster means keeping the same number of brokers but replacing existing nodes with higher capacity nodes.  
+Scaling a Kafka cluster vertically means keeping the same number of brokers but replacing existing nodes with higher-capacity nodes.
 If you cannot increase the partition or topic count of your Kafka cluster due to application constraints, this is usually the only available option.
 
 An example of vertical scaling is when changing the service plan from **Aiven for Apache Kafka Business-4** to **Aiven for Apache Kafka Business-8**. In this case, Aiven immediately launches three new brokers with the increased capacity defined by the **Business-8** plan, transfers the data to the new nodes, and once the data is replicated, retires the old brokers.
 
 Horizontal scaling
 --------------------
 
-Scaling horizontally means adding more brokers to an existing Kafka cluster. This allows sharing the load in the cluster between a larger number of individual nodes, allowing the cluster to serve more requests as a whole. 
+Scaling horizontally means adding more brokers to an existing Kafka cluster. This allows sharing the load in the cluster between a larger number of individual nodes, allowing the cluster to serve more requests as a whole.
 
 .. 
Note::
 
     Scaling horizontally also makes the cluster more resilient to a failure of a single node: if one broker in a 3-node cluster fails, the remaining two nodes get a 50% load increase, which may cause availability issues in the cluster. If one broker in a 9-node cluster fails, the remaining 8 nodes will only see a load increase of roughly 13%.
 
-An example of horizontal scaling is changing the service from the 3-node **Aiven for Apache Kafka Business-8** plan to the 6-node *Aiven for Apache Kafka Premium-6x-8* plan. For such case, Aiven immediately launches six new brokers adding them to the existing cluster. The existing cluster nodes stay online, and once the new brokers are online and included in the cluster configuration, Kafka starts placing partition replicas on them.
-Once all the data is copied to the new brokers the old nodes are removed. 
+An example of horizontal scaling is changing the service from the 3-node **Aiven for Apache Kafka Business-8** plan to the 6-node *Aiven for Apache Kafka Premium-6x-8* plan. In this case, Aiven immediately launches six new brokers, adding them to the existing cluster. The existing cluster nodes stay online, and once the new brokers are online and included in the cluster configuration, Kafka starts placing partition replicas on them.
+Once all the data is copied to the new brokers, the old nodes are removed.
 
 .. Warning::
 
diff --git a/docs/products/kafka/howto/kcat.rst b/docs/products/kafka/howto/kcat.rst
index 39dc33a212..ae971be68a 100644
--- a/docs/products/kafka/howto/kcat.rst
+++ b/docs/products/kafka/howto/kcat.rst
@@ -50,11 +50,11 @@ Alternatively, the same settings can be specified directly on the command line w
 
 .. code::
 
     kcat \
-     -b demo-kafka.my-demo-project.aivencloud.com:17072 \
-     -X security.protocol=ssl \
-     -X ssl.key.location=service.key \
-     -X ssl.certificate.location=service.cert \
-     -X ssl.ca.location=ca.pem
+    -b demo-kafka.my-demo-project.aivencloud.com:17072 \
+    -X security.protocol=ssl \
+    -X ssl.key.location=service.key \
+    -X ssl.certificate.location=service.cert \
+    -X ssl.ca.location=ca.pem
 
 If :doc:`SASL authentication ` is enabled, then the ``kcat`` configuration file requires the following entries:
 
diff --git a/docs/products/kafka/howto/prevent-full-disks.rst b/docs/products/kafka/howto/prevent-full-disks.rst
index f5a03f2d13..976a01bae2 100644
--- a/docs/products/kafka/howto/prevent-full-disks.rst
+++ b/docs/products/kafka/howto/prevent-full-disks.rst
@@ -5,9 +5,8 @@ To ensure the smooth functioning of your **Aiven for Apache Kafka®** services,
 
 If any node in the service surpasses the critical threshold of disk usage (more than 97%), the access control list (ACL) used to authorize API requests by Apache Kafka clients will be updated on all nodes. This update will prevent operations that could further increase disk usage, including:
 
-- ``Write`` and ``IdempotentWrite`` operations that clients use to produce new messages
-
-- ``CreateTopics`` operation that creates one or more topics, each carrying some overhead on disk
+- The ``Write`` and ``IdempotentWrite`` operations that clients use to produce new messages.
+- The ``CreateTopics`` operation that creates one or more topics, each carrying some overhead on disk.
 
 When the disk space is insufficient and the ACL blocks write operations, you will encounter an error. 
For example, if you are using the Python client for Apache Kafka, you may receive the following error message: diff --git a/docs/products/kafka/howto/viewing-resetting-offset.rst b/docs/products/kafka/howto/viewing-resetting-offset.rst index f0b375d15e..c1e4c66f56 100644 --- a/docs/products/kafka/howto/viewing-resetting-offset.rst +++ b/docs/products/kafka/howto/viewing-resetting-offset.rst @@ -85,13 +85,13 @@ To reset the offset use the following command replacing: .. code:: kafka-consumer-groups.sh \ -     --bootstrap-server demo-kafka.my-project.aivencloud.com:17072 \ -     --command-config consumer.properties \ -     --group test-group \ -     --topic test-topic \ -     --reset-offsets \ -     --to-earliest \ -     --execute + --bootstrap-server demo-kafka.my-project.aivencloud.com:17072 \ + --command-config consumer.properties \ + --group test-group \ + --topic test-topic \ + --reset-offsets \ + --to-earliest \ + --execute The ``--reset-offsets`` command has the following additional options: diff --git a/docs/products/kafka/karapace/howto/manage-kafka-rest-proxy-authorization.rst b/docs/products/kafka/karapace/howto/manage-kafka-rest-proxy-authorization.rst index 38e4001945..0d40ff69d9 100644 --- a/docs/products/kafka/karapace/howto/manage-kafka-rest-proxy-authorization.rst +++ b/docs/products/kafka/karapace/howto/manage-kafka-rest-proxy-authorization.rst @@ -3,7 +3,7 @@ Manage Apache Kafka® REST proxy authorization Apache Kafka® REST proxy authorization allows you to use the RESTful interface to connect to Kafka clusters, produce and consume messages easily, and execute administrative activities using Aiven CLI. This feature is disabled by default, and you need to :doc:`enable Apache Kafka REST proxy authorization `. -When you enable Apache Kafka REST proxy authorization, Karapace sends the HTTP basic authentication credentials to Apache Kafka®. The authentication and authorization are then performed by Apache Kafka, depending on the ACL defined in Apache Kafka. To configure the ACLs for authorization, see :doc:`Kafka Access Control Lists (ACLs) `. +When you enable Apache Kafka REST proxy authorization, Karapace sends the HTTP basic authentication credentials to Apache Kafka®. The authentication and authorization are then performed by Apache Kafka, depending on the ACL defined in Apache Kafka. To configure the ACLs for authorization, see :doc:`Kafka Access Control Lists (ACLs) `. When Apache Kafka REST proxy authorization is disabled, the REST Proxy bypasses the Apache Kafka ACLs, so any operation via REST API call is performed without any restrictions. diff --git a/docs/products/mysql/concepts/mysql-memory-usage.rst b/docs/products/mysql/concepts/mysql-memory-usage.rst index 0553ddd081..b6031eb53e 100644 --- a/docs/products/mysql/concepts/mysql-memory-usage.rst +++ b/docs/products/mysql/concepts/mysql-memory-usage.rst @@ -10,9 +10,9 @@ This can lead to a false impression that the service is misbehaving, when it is The InnoDB buffer pool ---------------------- -Arguably, the most important MySQL component is the InnoDB Buffer Pool. +Arguably, the most important MySQL component is the InnoDB Buffer Pool. -Every time an operation happens to a table (read or write), the page where the records (and indexes) are located is loaded into the Buffer Pool. +Every time an operation happens to a table (read or write), the page where the records (and indexes) are located is loaded into the Buffer Pool. 
This means that if the data you read and write the most has its pages in the Buffer Pool, the performance will be better than if you have to read pages from disk.
 
 When there are no more free pages in the pool, older pages must be evicted and, if they were modified, synchronized back to disk (checkpointing).
 
diff --git a/docs/products/mysql/howto/connect-from-cli.rst b/docs/products/mysql/howto/connect-from-cli.rst
index 4fbebbf32c..482c451468 100644
--- a/docs/products/mysql/howto/connect-from-cli.rst
+++ b/docs/products/mysql/howto/connect-from-cli.rst
@@ -48,7 +48,7 @@ You can execute this query to test:
     +-------+
     | three |
     +-------+
-    |     3 |
+    |     3 |
     +-------+
     1 row in set (0.0539 sec)
 
diff --git a/docs/products/opensearch/concepts/indices.rst b/docs/products/opensearch/concepts/indices.rst
index 70e4989aaf..6d06b5affc 100644
--- a/docs/products/opensearch/concepts/indices.rst
+++ b/docs/products/opensearch/concepts/indices.rst
@@ -117,12 +117,12 @@ The replication factor is adjusted automatically:
 
 * If ``number_of_replicas`` is too large for the current cluster size, it is automatically lowered to the maximum possible value (the number of nodes on the cluster - 1).
 * If ``number_of_replicas`` is 0 on a multi-node cluster, it is automatically increased to 1.
-* If ``number_of_replicas`` is between 1 and the maximum value, it is not adjusted. 
+* If ``number_of_replicas`` is between 1 and the maximum value, it is not adjusted.
 
-When the replication factor (``number_of_replicas`` value) is greater than the size of the cluster, ``number_of_replicas`` is automatically lowered, as it is not possible to replicate index shards to more nodes than there are on the cluster. 
+When the replication factor (``number_of_replicas`` value) is greater than the size of the cluster, ``number_of_replicas`` is automatically lowered, as it is not possible to replicate index shards to more nodes than there are on the cluster.
 
 .. note::
-    The replication factor is ``number_of_replicas`` + 1. For example, for a three-node cluster, the maximum ``number_of_replicas`` value is 2, which means that all shards on the index are replicated to all three nodes. 
+    The replication factor is ``number_of_replicas`` + 1. For example, for a three-node cluster, the maximum ``number_of_replicas`` value is 2, which means that all shards on the index are replicated to all three nodes.
 
 ------
 
diff --git a/docs/products/postgresql/concepts/timescaledb.rst b/docs/products/postgresql/concepts/timescaledb.rst
index a6c36fedef..d77fb39efa 100644
--- a/docs/products/postgresql/concepts/timescaledb.rst
+++ b/docs/products/postgresql/concepts/timescaledb.rst
@@ -8,7 +8,7 @@ A time series indexes a series of data points in chronological order, usually as
 
 * the temperature of a home during a day
 * the position of a satellite during a day
 
-The data in these examples consists of a measured value (temperature or position) corresponding to the time at which the reading of the value took place.  
+The data in these examples consists of a measured value (temperature or position) corresponding to the time at which the reading of the value took place.
Enable TimescaleDB on Aiven for PostgreSQL ------------------------------------------ diff --git a/docs/products/postgresql/howto/migrate-using-bucardo.rst b/docs/products/postgresql/howto/migrate-using-bucardo.rst index c054df8748..fd86cc35f4 100644 --- a/docs/products/postgresql/howto/migrate-using-bucardo.rst +++ b/docs/products/postgresql/howto/migrate-using-bucardo.rst @@ -32,31 +32,25 @@ Replicating changes To migrate your data using Bucardo: -#. | Install Bucardo using `the installation - instructions `__ on the - Bucardo site. - -#. | Install the ``aiven_extras`` `extension `_ to your current database. - | Bucardo requires the superuser role to set the - ``session_replication_role`` parameter. Aiven uses the open source - ``aiven_extras`` extension to allow you to run ``superuser`` - commands as a different user, as direct ``superuser`` access is not - provided for security reasons. - -#. | Open and edit the ``Bucardo.pm`` file with administrator - privileges. - | The location of the file can vary according to your operating - system, but you might find it in - ``/usr/local/share/perl5/5.32/Bucardo.pm`` , for example. - - a. Scroll down until you see a ``disable_triggers`` function, in line 5324 in `Bucardo.pm `_. - - b. In line 5359 in `Bucardo.pm `_, change ``SET session_replication_role = default`` to +#. Install Bucardo using `the installation instructions `__ on the Bucardo site. + +#. Install the ``aiven_extras`` `extension `_ to your current database. + Bucardo requires the superuser role to set the + ``session_replication_role`` parameter. Aiven uses the open source + ``aiven_extras`` extension to allow you to run ``superuser`` + commands as a different user, as direct ``superuser`` access is not + provided for security reasons. + +#. Open and edit the ``Bucardo.pm`` file with administrator privileges. The location of the file can vary according to your operating system, but you might find it in ``/usr/local/share/perl5/5.32/Bucardo.pm``, for example. + + #. Scroll down until you see a ``disable_triggers`` function, in line 5324 in `Bucardo.pm `_. + + #. In line 5359 in `Bucardo.pm `_, change ``SET session_replication_role = default`` to the following: - .. code:: + .. code:: - $dbh->do(q{select aiven_extras.session_replication_role('replica');}); + $dbh->do(q{select aiven_extras.session_replication_role('replica');}); c. Scroll down to the ``enable_triggers`` function in line 5395 in `Bucardo.pm `_. @@ -67,16 +61,16 @@ To migrate your data using Bucardo: $dbh->do(q{select aiven_extras.session_replication_role('origin');}); - e. | Save your changes and close the file. + e. Save your changes and close the file. -#. | Add your source and destination databases. - | For example: +#. Add your source and destination databases. + For example: - .. code:: + .. code:: - bucardo add db srcdb dbhost=0.0.0.0 dbport=5432 dbname=all_your_base dbuser=$DBUSER dbpass=$DBPASS + bucardo add db srcdb dbhost=0.0.0.0 dbport=5432 dbname=all_your_base dbuser=$DBUSER dbpass=$DBPASS - bucardo add db destdb dbhost=cg-pg-dev-sandbox.aivencloud.com dbport=21691 dbname=all_your_base dbuser=$DBUSER dbpass=$DBPASS + bucardo add db destdb dbhost=cg-pg-dev-sandbox.aivencloud.com dbport=21691 dbname=all_your_base dbuser=$DBUSER dbpass=$DBPASS #. 
Add the tables that you want to replicate:
 
@@ -93,8 +87,8 @@ To migrate your data using Bucardo:
 
      pg_dump --schema-only --no-owner all_your_base > base.sql
      psql "$AIVEN_DB_URL" < base.sql
 
-   | You can restore the source data or provide only the schema,
-     depending on the size of your current database.
+   You can restore the source data or provide only the schema,
+   depending on the size of your current database.
 
 #. Create the ``dbgroup`` for Bucardo:
 
@@ -105,10 +99,10 @@
 
      (sudo) bucardo start
      bucardo status sync_src_to_dest
 
-#. | Start Bucardo and run the ``status`` command. When ``Current state`` is ``Good`` , the data is flowing to your
-   Aiven database.
+#. Start Bucardo and run the ``status`` command. When ``Current state`` is ``Good``, the data is flowing to your
+   Aiven database.
 
-#. | Log in to the `Aiven web console `_, select your Aiven for PostgreSQL service from the **Services** list, and select **Current Queries** from the sidebar in your service's page. This shows you that the ``bucardo`` process is inserting data.
+#. Log in to the `Aiven web console `_, select your Aiven for PostgreSQL service from the **Services** list, and select **Current Queries** from the sidebar in your service's page. This shows you that the ``bucardo`` process is inserting data.
 
 #. Once all your data is synchronized, switch the database connection for your applications to Aiven for PostgreSQL.
 
diff --git a/docs/products/postgresql/howto/pagila.rst b/docs/products/postgresql/howto/pagila.rst
index b579c27d0b..943d2c0abd 100644
--- a/docs/products/postgresql/howto/pagila.rst
+++ b/docs/products/postgresql/howto/pagila.rst
@@ -191,7 +191,7 @@ Let's explore the dataset with a few queries. All the queries results were limit
 
 |MARGARET |MOORE |
 |DOROTHY |TAYLOR |
 
-.. dropdown:: See who rented most DVDs – and how many times
+.. dropdown:: See who rented the most DVDs - and how many times
 
 .. code:: sql
 
diff --git a/docs/tools/aiven-console/howto/create-manage-teams.rst b/docs/tools/aiven-console/howto/create-manage-teams.rst
index c7eea7d0bc..2e72057bab 100644
--- a/docs/tools/aiven-console/howto/create-manage-teams.rst
+++ b/docs/tools/aiven-console/howto/create-manage-teams.rst
@@ -2,18 +2,16 @@
 Create and manage teams
 =======================
 
+**Teams** let you create user groups and assign different access levels to specific projects. Users must be part of an organization before being added to a team. To create and manage teams, click **Admin** and then select **Teams**.
+
 .. important::
 
-    **Teams are becoming groups**
-    
-    :doc:`Groups ` are an easier way to control access to your organization's projects and services for a group of users.
+    **Teams are becoming groups**
 
-    :ref:`migrate_teams_to_groups`
-
-
-**Teams** let you create user groups and assign different access levels to specific projects. Users must be part of an organization before being added to a team. To create and manage teams, click **Admin** and then select **Teams**.
+    :doc:`Groups ` are an easier way to control access to your organization's projects and services for a group of users.
+    See :ref:`migrate_teams_to_groups`.
 
 Create a new team
---------------------------
+-----------------
 
 #. Click **Create new team**.
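A quick way to confirm that no stray characters survive a cleanup like the ones above is to search the docs tree for the exact byte sequences this patch removes. A minimal spot-check - a sketch that assumes GNU grep with PCRE support (``-P``) and UTF-8 encoded sources:

.. code-block:: bash

   # Hunt for leftover non-breaking spaces (U+00A0), en dashes (U+2013),
   # and trailing whitespace - the "unwanted chars" this patch cleans up.
   grep -rnP '\xc2\xa0|\xe2\x80\x93| +$' --include='*.rst' docs/

Any remaining hits can be handled in a follow-up commit.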
diff --git a/docs/tutorials/anomaly-detection.rst b/docs/tutorials/anomaly-detection.rst
index dc95f360cc..64782eb3b7 100644
--- a/docs/tutorials/anomaly-detection.rst
+++ b/docs/tutorials/anomaly-detection.rst
@@ -18,7 +18,7 @@ What are you going to build
 
 Anomaly detection is a way to find unusual or unexpected things in data. It is immensely helpful in a variety of fields, such as fraud detection, network security, quality control and others. By following this tutorial you will build your own streaming anomaly detection system.
 
-For example: a payment processor might set up anomaly detection against an e-commerce store if it notices that the store – which sells its items in Indian Rupees and is only configured to sell to the Indian market – is suddenly receiving a high volume of orders from Spain. This behavior could indicate fraud. Another example is that of a domain hosting service implementing a CAPTCHA against an IP address it deems is interacting with one of its domains in rapid succession.
+For example: a payment processor might set up anomaly detection against an e-commerce store if it notices that the store - which sells its items in Indian Rupees and is only configured to sell to the Indian market - is suddenly receiving a high volume of orders from Spain. This behavior could indicate fraud. Another example is that of a domain hosting service implementing a CAPTCHA against an IP address it deems is interacting with one of its domains in rapid succession.
 
 While it's often easier to validate anomalies in data once they are due to be stored in the database, it's more useful to check in-stream and address unwanted activity before it affects our dataset.
 
@@ -460,9 +460,10 @@ You can go ahead and try yourself to define the windowing pipeline. If, on the ot
 
 2. Click on **Create new application** and name it ``cpu_agg``.
 3. Click on **Create first version**.
 4. To import the source ``CPU_IN`` table from the previously created ``filtering`` application:
-   * Click on **Import existing source table**
-   * Select ``filtering`` as application, ``Version 1`` as version, ``CPU_IN`` as table and click **Next**
-   * Click on **Add table**
+
+   1. Click on **Import existing source table**
+   2. Select ``filtering`` as application, ``Version 1`` as version, ``CPU_IN`` as table and click **Next**
+   3. Click on **Add table**
 5. Navigate to the **Add sink tables** tab.
 6. Create the sink table (named ``CPU_OUT_AGG``) pointing to a new Apache Kafka® topic named ``cpu_agg_stats``, where the 30-second aggregated data will land, by:
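For the truncated final step, the ``cpu_agg_stats`` topic that the sink table points to can also be created ahead of time with the Aiven CLI instead of relying on topic auto-creation. A sketch, assuming a Kafka service named ``demo-kafka`` - the service name, partition count, and replication factor are illustrative placeholders, not values from the tutorial:

.. code-block:: bash

   # Pre-create the topic that will receive the 30-second aggregates.
   avn service topic-create demo-kafka cpu_agg_stats \
       --partitions 3 --replication 2

The topic name matches the one referenced in step 6 above, so the ``CPU_OUT_AGG`` sink table can attach to it directly.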