This repository has been archived by the owner on Jan 29, 2024. It is now read-only.

markup fixes (#2310)
ArthurFlag authored Nov 29, 2023
1 parent 983f8be commit f7cf0f0
Showing 71 changed files with 699 additions and 446 deletions.
4 changes: 3 additions & 1 deletion docs/community/documentation/tips-tricks/renaming-files.rst
@@ -3,7 +3,9 @@
 Rename files and adding redirects
 ===================================
 
-The project supports a redirects file, named ``_redirects``; the format is `source` and `destination` as paths relative to the root of the project. Here's an example::
+The project supports a redirects file, named ``_redirects``; the format is `source` and `destination` as paths relative to the root of the project. Here's an example:
+
+.. code::
 
     /docs/products/flink/howto/real-time-alerting-solution-cli.html /docs/products/flink/howto/real-time-alerting-solution.html
@@ -1,5 +1,3 @@
-Jolokia
-
 Access JMX metrics via Jolokia
 ===============================
 
18 changes: 12 additions & 6 deletions docs/platform/howto/search-services.rst
@@ -20,13 +20,17 @@ You can also add filters to the search field yourself. The supported filters are
 * ``provider``
 * ``region``
 
-You can add multiple values to filters separated by a comma. For example, this is how you would view all running PostgreSQL® services that are hosted on AWS or Google Cloud::
+You can add multiple values to filters separated by a comma. For example, this is how you would view all running PostgreSQL® services that are hosted on AWS or Google Cloud:
+
+.. code::
 
-    service:pg status:running provider:aws,google
+    service:pg status:running provider:aws,google
 
-You can use these filters alongside keyword searches. For example, to see all powered off Kafka® services with *production* in the name, you could use the following:::
+You can use these filters alongside keyword searches. For example, to see all powered off Kafka® services with *production* in the name, you could use the following:
+
+.. code::
 
-    production service:kafka status:poweroff
+    production service:kafka status:poweroff
 
 Filter by service type
 ~~~~~~~~~~~~~~~~~~~~~~~
@@ -107,6 +111,8 @@ To filter the services by the cloud provider they are hosted on, use the filter
 Filter by cloud region
 ~~~~~~~~~~~~~~~~~~~~~~~
 
-Find the supported values for the ``region`` filter in the *Cloud* column of the tables in :doc:`List of available cloud regions </docs/platform/reference/list_of_clouds>`. For example, to see all services in the AWS ``eu-central-1`` region, you use this filter::
+Find the supported values for the ``region`` filter in the *Cloud* column of the tables in :doc:`List of available cloud regions </docs/platform/reference/list_of_clouds>`. For example, to see all services in the AWS ``eu-central-1`` region, you use this filter:
+
+.. code::
 
-    region:aws-eu-central-1
+    region:aws-eu-central-1
8 changes: 6 additions & 2 deletions docs/platform/howto/static-ip-addresses.rst
@@ -59,7 +59,9 @@ The command returns some information about the newly created static IP address.
 When the IP address has been provisioned, the state turns to ``created``. The
 list of static IP addresses in the current project is available using the
-``static-ip list`` command::
+``static-ip list`` command:
+
+.. code::
 
     avn static-ip list
@@ -81,7 +83,9 @@ Associate static IP addresses with a service
 --------------------------------------------
 
 Using the name of the service, and the ID of the static IP address, you can
-assign which service a static IP should be used by::
+assign which service a static IP should be used by:
+
+.. code::
 
     avn static-ip associate --service my-static-pg ip359373e5e56
     avn static-ip associate --service my-static-pg ip358375b2765
62 changes: 38 additions & 24 deletions docs/platform/howto/tag-resources.rst
@@ -65,46 +65,60 @@ Add and modify resource tags with the Aiven client
 Add and modify service tags
 """"""""""""""""""""""""""""
 
-* Add new tags to a service::
+* Add new tags to a service:
 
-    avn service tags update your-service --add-tag business_unit=sales --add-tag env=smoke_test
+  .. code::
+
+    avn service tags update your-service --add-tag business_unit=sales --add-tag env=smoke_test
 
-* Modify or remove tags::
+* Modify or remove tags:
 
-    avn service tags update your-service --add-tag env=production --remove-tag business_unit
+  .. code::
+
+    avn service tags update your-service --add-tag env=production --remove-tag business_unit
 
-* List service tags::
+* List service tags:
 
-    avn service tags list your-service
-    KEY  VALUE
-    ===  ==========
-    env  production
+  .. code::
+
+    avn service tags list your-service
+    KEY  VALUE
+    ===  ==========
+    env  production
 
-* Replace tags with a set of new ones, removing the old ones::
+* Replace tags with a set of new ones, removing the old ones:
 
-    avn service tags replace your-service --tag cost_center=U1345
+  .. code::
+
+    avn service tags replace your-service --tag cost_center=U1345
 
-    avn service tags list your-service
-    KEY          VALUE
-    ===========  =====
-    cost_center  U1345
+    avn service tags list your-service
+    KEY          VALUE
+    ===========  =====
+    cost_center  U1345
 
 Add and modify project tags
 """"""""""""""""""""""""""""
 
 The commands ``update``, ``list`` and ``replace`` exist for tagging projects too, and work the same way:
 
-* Add tags to a project::
+* Add tags to a project:
 
-    avn project tags update --project your-project --add-tag business_unit=sales
+  .. code::
+
+    avn project tags update --project your-project --add-tag business_unit=sales
 
-* Replace project tags::
+* Replace project tags:
 
-    avn project tags replace --project your-project --tag env=smoke_test
+  .. code::
+
+    avn project tags replace --project your-project --tag env=smoke_test
 
-* List project tags::
+* List project tags:
 
-    avn project tags list
-    KEY  VALUE
-    ===  ==========
-    env  smoke_test
+  .. code::
+
+    avn project tags list
+    KEY  VALUE
+    ===  ==========
+    env  smoke_test
6 changes: 4 additions & 2 deletions docs/products/cassandra/howto/connect-python.rst
@@ -6,9 +6,11 @@ This example connects to an Aiven for Apache Cassandra® service from Python as
 Pre-requisites
 ''''''''''''''
 
-Install the ``cassandra-driver`` library::
+Install the ``cassandra-driver`` library:
+
+.. code::
 
-    pip install cassandra-driver
+    pip install cassandra-driver
 
 Variables
 '''''''''
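
For orientation, a minimal connection sketch with ``cassandra-driver`` is shown below; the host, port, password, and CA file name are placeholders rather than values taken from this page.

.. code-block:: python

    from ssl import PROTOCOL_TLS_CLIENT, SSLContext

    from cassandra.auth import PlainTextAuthProvider
    from cassandra.cluster import Cluster

    # Placeholders: replace with your own service's connection details.
    HOST = "my-cassandra.example.aivencloud.com"
    PORT = 12345
    CA_PATH = "ca.pem"

    # TLS is required; the service CA certificate is used to verify the server.
    ssl_context = SSLContext(PROTOCOL_TLS_CLIENT)
    ssl_context.load_verify_locations(CA_PATH)

    auth = PlainTextAuthProvider(username="avnadmin", password="YOUR_PASSWORD")
    cluster = Cluster([HOST], port=PORT, ssl_context=ssl_context, auth_provider=auth)

    session = cluster.connect()
    print(session.execute("SELECT release_version FROM system.local").one())
    cluster.shutdown()
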
@@ -164,9 +164,9 @@ In a client library
 
 To configure the consistency level in a client library, add an extra parameter or object to define the consistency level on your software component before running a particular query.
 
-.. topic:: Example::
-    In Python, you can specify ``consistency_level`` as a parameter for the ``SimpleStatement`` object.
+.. topic:: Example:
+
+    In Python, you can specify ``consistency_level`` as a parameter for the ``SimpleStatement`` object.
 
 .. code-block:: bash
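
As a rough sketch of what that looks like in practice (the contact point, keyspace, and table below are assumed for illustration, not taken from this page):

.. code-block:: python

    from cassandra import ConsistencyLevel
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    # Assumes a reachable cluster and an existing keyspace/table; adjust for your service.
    cluster = Cluster(["localhost"])
    session = cluster.connect("my_keyspace")

    # The consistency level travels with the statement itself.
    statement = SimpleStatement(
        "SELECT * FROM my_table WHERE id = %s",
        consistency_level=ConsistencyLevel.QUORUM,
    )
    for row in session.execute(statement, (42,)):
        print(row)

    cluster.shutdown()
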
@@ -66,7 +66,9 @@ The following parameters are used:
 * ``workload``: option calls a specific workload file that is compiled inside the ``nb`` executable and instructs ``nb`` to generate key/value pairs for a table called ``baselines.keyvalue``. You can read more on how to define custom workloads in the :ref:`dedicated documentation <nosqlbench_cassandra>`
 * ``phase``: refers to a specific point in the ``workload`` definition file and specifies the particular ``activity`` to run. In the example, the phase is ``schema`` which means that the nosqlbench will create the schema of the Cassandra keyspace.
 
-To create client connections and produce data in the keyspace and tables created, you need to run the following command line, after substituting the placeholders for ``HOST``, ``PORT``, ``PASSWORD`` and ``SSL_CERTFILE``::
+To create client connections and produce data in the keyspace and tables created, you need to run the following command line, after substituting the placeholders for ``HOST``, ``PORT``, ``PASSWORD`` and ``SSL_CERTFILE``:
+
+.. code::
 
     ./nb run \
       host=HOST \
4 changes: 2 additions & 2 deletions docs/products/clickhouse/getting-started.rst
@@ -22,9 +22,9 @@ Create a database
 2. In the **Databases and tables** page, select **Create database** > **ClickHouse database**.
 3. In the **Create ClickHouse database** window, enter a name for your database and select **Create database**.
 
-.. note::
+.. note::
 
-    All databases must be created through the web console.
+    All databases must be created through the web console.
 
 Connect to ClickHouse
 ---------------------
43 changes: 21 additions & 22 deletions docs/products/clickhouse/howto/check-data-tiered-storage.rst
@@ -34,26 +34,26 @@ Run a data distribution check with the ClickHouse client (CLI)
 
 .. code-block:: bash
 
-    SELECT
-        database,
-        table,
-        disk_name,
-        formatReadableSize(sum(data_compressed_bytes)) AS total_size,
-        count(*) AS parts_count,
-        formatReadableSize(min(data_compressed_bytes)) AS min_part_size,
-        formatReadableSize(median(data_compressed_bytes)) AS median_part_size,
-        formatReadableSize(max(data_compressed_bytes)) AS max_part_size
-    FROM system.parts
-    GROUP BY
-        database,
-        table,
-        disk_name
-    ORDER BY
-        database ASC,
-        table ASC,
-        disk_name ASC
-
-You can expect to receive the following output:
+    SELECT
+        database,
+        table,
+        disk_name,
+        formatReadableSize(sum(data_compressed_bytes)) AS total_size,
+        count(*) AS parts_count,
+        formatReadableSize(min(data_compressed_bytes)) AS min_part_size,
+        formatReadableSize(median(data_compressed_bytes)) AS median_part_size,
+        formatReadableSize(max(data_compressed_bytes)) AS max_part_size
+    FROM system.parts
+    GROUP BY
+        database,
+        table,
+        disk_name
+    ORDER BY
+        database ASC,
+        table ASC,
+        disk_name ASC
+
+You can expect to receive the following output:
 
 .. code-block:: bash
@@ -63,9 +63,8 @@ Run a data distribution check with the ClickHouse client (CLI)
    │ system │ query_log │ default │ 75.85 MiB │ 102 │ 7.51 KiB │ 12.36 KiB │ 1.55 MiB │
    └──────────┴───────────┴───────────┴────────────┴─────────────┴───────────────┴──────────────────┴───────────────┘
 
-.. topic:: Result
-    The query returns a table with data distribution details for all databases and tables that belong to your service: the storage device they use, their total sizes as well as parts counts and sizing.
+The query returns a table with data distribution details for all databases and tables that belong to your service: the storage device they use, their total sizes as well as parts counts and sizing.
 
 What's next
 -----------
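
The same data-distribution check can also be scripted. A minimal sketch using the ``clickhouse-connect`` Python client follows; the connection details are placeholders.

.. code-block:: python

    import clickhouse_connect

    # Placeholders: replace with your service's HTTPS host, port, and credentials.
    client = clickhouse_connect.get_client(
        host="my-clickhouse.example.aivencloud.com",
        port=12345,
        username="avnadmin",
        password="YOUR_PASSWORD",
        secure=True,
    )

    # A trimmed-down version of the query above: compressed size and part count per disk.
    query = """
        SELECT
            database,
            table,
            disk_name,
            formatReadableSize(sum(data_compressed_bytes)) AS total_size,
            count(*) AS parts_count
        FROM system.parts
        GROUP BY database, table, disk_name
        ORDER BY database, table, disk_name
    """

    for row in client.query(query).result_rows:
        print(row)
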
42 changes: 21 additions & 21 deletions docs/products/clickhouse/howto/connect-with-java.rst
@@ -45,33 +45,33 @@ Connect to the service
 2. Replace ``CLICKHOUSE_HTTPS_HOST`` and ``CLICKHOUSE_HTTPS_PORT`` in the command with your connection values and run the code.
 
-   .. code-block:: shell
+   .. code-block:: shell
 
-      jdbc:ch://CLICKHOUSE_HTTPS_HOST:CLICKHOUSE_HTTPS_PORT?ssl=true&sslmode=STRICT
+      jdbc:ch://CLICKHOUSE_HTTPS_HOST:CLICKHOUSE_HTTPS_PORT?ssl=true&sslmode=STRICT
 
 3. Replace ``CLICKHOUSE_USER`` and ``CLICKHOUSE_PASSWORD`` in the code with meaningful data and run the code.
 
-   .. code-block:: java
-
-      import com.clickhouse.jdbc.ClickHouseConnection;
-      import com.clickhouse.jdbc.ClickHouseDataSource;
-      import java.sql.ResultSet;
-      import java.sql.SQLException;
-      import java.sql.Statement;
-
-      public class Main {
-          public static void main(String[] args) throws SQLException {
-              String connString = "jdbc:ch://CLICKHOUSE_HTTPS_HOST:CLICKHOUSE_HTTPS_PORT?ssl=true&sslmode=STRICT";
-              ClickHouseDataSource database = new ClickHouseDataSource(connString);
-              ClickHouseConnection connection = database.getConnection("CLICKHOUSE_USER", "CLICKHOUSE_PASSWORD");
-              Statement statement = connection.createStatement();
-              ResultSet result_set = statement.executeQuery("SELECT 1 AS one");
-              while (result_set.next()) {
-                  System.out.println(result_set.getInt("one"));
+   .. code-block:: java
+
+      import com.clickhouse.jdbc.ClickHouseConnection;
+      import com.clickhouse.jdbc.ClickHouseDataSource;
+      import java.sql.ResultSet;
+      import java.sql.SQLException;
+      import java.sql.Statement;
+
+      public class Main {
+          public static void main(String[] args) throws SQLException {
+              String connString = "jdbc:ch://CLICKHOUSE_HTTPS_HOST:CLICKHOUSE_HTTPS_PORT?ssl=true&sslmode=STRICT";
+              ClickHouseDataSource database = new ClickHouseDataSource(connString);
+              ClickHouseConnection connection = database.getConnection("CLICKHOUSE_USER", "CLICKHOUSE_PASSWORD");
+              Statement statement = connection.createStatement();
+              ResultSet result_set = statement.executeQuery("SELECT 1 AS one");
+              while (result_set.next()) {
+                  System.out.println(result_set.getInt("one"));
               }
           }
       }
 
 .. topic:: Expected result

40 changes: 23 additions & 17 deletions docs/products/clickhouse/howto/load-dataset.rst
@@ -13,9 +13,11 @@ The steps below show you how to download the dataset, set up a connection with t
 Download the dataset
 --------------------
 
-Download the original dataset directly from `the dataset documentation page <https://clickhouse.com/docs/en/getting-started/example-datasets/metrica/>`_. You can do this using cURL, where the generic command looks like this::
+Download the original dataset directly from `the dataset documentation page <https://clickhouse.com/docs/en/getting-started/example-datasets/metrica/>`_. You can do this using cURL, where the generic command looks like this:
+
+.. code::
 
-    curl address_to_file_in_format_tsv_xz | unxz --threads=`nproc` > file-name.tsv
+    curl address_to_file_in_format_tsv_xz | unxz --threads=`nproc` > file-name.tsv
 
 .. note::
     The ``nproc`` Linux command, which prints the number of processing units, is not available on macOS. To use the above command, add an alias for ``nproc`` into your ``~/.zshrc`` file: ``alias nproc="sysctl -n hw.logicalcpu"``.
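
If cURL or ``unxz`` is not available, the same download-and-decompress step can be approximated with the Python standard library; the URL and output file name below are placeholders.

.. code-block:: python

    import lzma
    import shutil
    import urllib.request

    # Placeholder: use the .tsv.xz URL from the dataset documentation page.
    URL = "https://example.com/path/to/hits_v1.tsv.xz"
    OUT = "hits_v1.tsv"

    # Stream the .xz archive and decompress it on the fly into a .tsv file.
    with urllib.request.urlopen(URL) as response, lzma.open(response) as archive:
        with open(OUT, "wb") as out_file:
            shutil.copyfileobj(archive, out_file)

    print(f"Wrote {OUT}")
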
@@ -79,24 +81,28 @@ Load data

 Now that you have a dataset with two empty tables, we'll load data into each of the tables. However, because we need to access files outside the docker container, we'll run the command specifying ``--query`` parameter. To do this:
 
-1. Go to the folder where you stored the downloaded files for ``hits_v1.tsv`` and ``visits_v1.tsv``.
-
-#. Run the following command::
-
-    cat hits_v1.tsv | docker run \
-      --interactive \
-      --rm clickhouse/clickhouse-server clickhouse-client \
-      --user USERNAME \
-      --password PASSWORD \
-      --host HOST \
-      --port PORT \
-      --secure \
-      --max_insert_block_size=100000 \
-      --query="INSERT INTO datasets.hits_v1 FORMAT TSV"
+#. Go to the folder where you stored the downloaded files for ``hits_v1.tsv`` and ``visits_v1.tsv``.
+
+#. Run the following command:
+
+   .. code::
+
+    cat hits_v1.tsv | docker run \
+      --interactive \
+      --rm clickhouse/clickhouse-server clickhouse-client \
+      --user USERNAME \
+      --password PASSWORD \
+      --host HOST \
+      --port PORT \
+      --secure \
+      --max_insert_block_size=100000 \
+      --query="INSERT INTO datasets.hits_v1 FORMAT TSV"
 
    ``hits_v1.tsv`` contains approximately 7Gb of data. Depending on your internet connection, it can take some time to load all the items.
 
-#. Run the corresponding command for ``visits_v1.tsv``::
+#. Run the corresponding command for ``visits_v1.tsv``:
+
+   .. code::
 
    cat visits_v1.tsv | docker run \
     --interactive \
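
Once both loads finish, a quick row-count sanity check can also be done from Python; a sketch with the ``clickhouse-connect`` client, again with placeholder connection details:

.. code-block:: python

    import clickhouse_connect

    # Placeholders: replace with your service's HTTPS host, port, and credentials.
    client = clickhouse_connect.get_client(
        host="my-clickhouse.example.aivencloud.com",
        port=12345,
        username="avnadmin",
        password="YOUR_PASSWORD",
        secure=True,
    )

    # Print the number of rows loaded into each table.
    for table in ("datasets.hits_v1", "datasets.visits_v1"):
        print(table, client.command(f"SELECT count() FROM {table}"))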