This repository has been archived by the owner on Jan 29, 2024. It is now read-only.

Commit: Fix apostrophes (#2281)
ArthurFlag authored Nov 22, 2023
1 parent 16871d2 commit c46b9d3
Showing 34 changed files with 39 additions and 39 deletions.
4 changes: 2 additions & 2 deletions docs/platform/concepts/enhanced-compliance-env.rst
@@ -33,9 +33,9 @@ any applicable marketplaces and spend down their commitment accordingly.

Similarities to a standard environment
------------------------------------------------
-In many ways, an ECE is the same as a standard Aiven deployment. All of Aiven’s tooling
+In many ways, an ECE is the same as a standard Aiven deployment. All of Aiven's tooling
(:doc:`CLI </docs/tools/cli>`, :doc:`Terraform </docs/tools/terraform>`, etc.) interact with ECEs seamlessly, you will still be able to take advantage
-of all of Aiven’s service integrations, and access to the environment can be achieved through
+of all of Aiven's service integrations, and access to the environment can be achieved through
VPC peering or Privatelink (on AWS or Azure). However, there are some key differences from
standard environments as well:

2 changes: 1 addition & 1 deletion docs/platform/howto/saml/setup-saml-fusionauth.rst
@@ -49,7 +49,7 @@ First you need to create an API Key in your FusionAuth instance:

#. On the **API Keys** page, find your new key and click on the value in the **Key** column.

-#. Copy the whole key. You’ll use this for the script.
+#. Copy the whole key. You'll use this for the script.

.. image:: /images/platform/howto/saml/fusionauth/grab-api-key.png
:alt: Grabbing API Key.
2 changes: 1 addition & 1 deletion docs/platform/reference/eol-for-major-versions.rst
@@ -18,7 +18,7 @@ software.
**Version numbering**
~~~~~~~~~~~~~~~~~~~~~

-Aiven services inherit the upstream project’s software versioning
+Aiven services inherit the upstream project's software versioning
scheme. Depending on the service, a major version can be either a single
digit (e.g. PostgreSQL® 14) or ``major.minor`` (e.g. Kafka® 3.2). The
exact version of the service is visible in `Aiven Console <https://console.aiven.io/>`_ once the
2 changes: 1 addition & 1 deletion docs/products/kafka/concepts/consumer-lag-predictor.rst
@@ -4,7 +4,7 @@ Consumer lag predictor for Aiven for Apache Kafka®
The **consumer lag predictor** for Aiven for Apache Kafka estimates the delay between the time a message is produced and when it's eventually consumed by a consumer group. This information can be used to improve the performance, scalability, and cost-effectiveness of your Kafka cluster.

.. important::
-    Consumer Lag Predictor for Aiven for Apache Kafka® is a limited availability feature. If you’re interested in trying out this feature, contact the sales team at [email protected].
+    Consumer Lag Predictor for Aiven for Apache Kafka® is a limited availability feature. If you're interested in trying out this feature, contact the sales team at [email protected].

To use the **consumer lag predictor** effectively, setting up :doc:`Prometheus integration </docs/platform/howto/integrations/prometheus-metrics>` with your Aiven for Apache Kafka® service is essential. Prometheus integration enables the extraction of key metrics necessary for lag prediction and monitoring.

2 changes: 1 addition & 1 deletion docs/products/kafka/concepts/kafka-tiered-storage.rst
@@ -5,7 +5,7 @@ Tiered storage in Aiven for Apache Kafka® enables more effective data managemen

.. important::

-    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you’re interested in trying out this feature, contact the sales team at [email protected].
+    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you're interested in trying out this feature, contact the sales team at [email protected].


.. note::
@@ -3,7 +3,7 @@ How tiered storage works in Aiven for Apache Kafka®

.. important::

-    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you’re interested in trying out this feature, contact the sales team at [email protected].
+    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you're interested in trying out this feature, contact the sales team at [email protected].

Aiven for Apache Kafka® tiered storage is a feature that optimizes data management across two distinct storage tiers:

@@ -5,7 +5,7 @@ Aiven for Apache Kafka® offers flexibility in configuring tiered storage and se

.. important::

-    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you’re interested in trying out this feature, contact the sales team at [email protected].
+    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you're interested in trying out this feature, contact the sales team at [email protected].

Prerequisite
------------
2 changes: 1 addition & 1 deletion docs/products/kafka/howto/enable-kafka-tiered-storage.rst
@@ -4,7 +4,7 @@ Learn how to enable tiered storage capability of Aiven for Apache Kafka®. This

.. important::

-    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you’re interested in trying out this feature, contact the sales team at [email protected].
+    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you're interested in trying out this feature, contact the sales team at [email protected].

Prerequisites
--------------
2 changes: 1 addition & 1 deletion docs/products/kafka/howto/enable-oidc.rst
@@ -6,7 +6,7 @@ OpenID Connect (OIDC) is an authentication protocol built on OAuth 2.0. Aiven fo

Prerequisites
-------------
-Aiven for Apache Kafka integrates with a wide range of OpenID Connect identity providers (IdPs). However, the exact configuration steps can differ based on your chosen IdP. Refer to your Identity Provider’s official documentation for specific configuration guidelines.
+Aiven for Apache Kafka integrates with a wide range of OpenID Connect identity providers (IdPs). However, the exact configuration steps can differ based on your chosen IdP. Refer to your Identity Provider's official documentation for specific configuration guidelines.

Before proceeding with the setup, ensure you have:

@@ -4,7 +4,7 @@ Enable the consumer lag predictor for Aiven for Apache Kafka®
The :doc:`consumer lag predictor </docs/products/kafka/concepts/consumer-lag-predictor>` in Aiven for Apache Kafka® provides visibility into the time between message production and consumption, allowing for improved cluster performance and scalability.

.. important::
-    Consumer Lag Predictor for Aiven for Apache Kafka® is a limited availability feature. If you’re interested in trying out this feature, contact the sales team at [email protected].
+    Consumer Lag Predictor for Aiven for Apache Kafka® is a limited availability feature. If you're interested in trying out this feature, contact the sales team at [email protected].


Prerequisites
@@ -8,7 +8,7 @@ For an in-depth understanding of tiered storage, how it works, and its benefits,

.. important::

-    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you’re interested in trying out this feature, contact the sales team at [email protected].
+    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you're interested in trying out this feature, contact the sales team at [email protected].

Enable tiered storage for service
----------------------------------
2 changes: 1 addition & 1 deletion docs/products/kafka/howto/tiered-storage-overview-page.rst
@@ -5,7 +5,7 @@ Aiven for Apache Kafka® offers a comprehensive overview of tiered storage, allo

.. important::

-    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you’re interested in trying out this feature, contact the sales team at [email protected].
+    Aiven for Apache Kafka® tiered storage is a :doc:`limited availability feature </docs/platform/concepts/beta_services>`. If you're interested in trying out this feature, contact the sales team at [email protected].


Access tiered storage overview
@@ -115,7 +115,7 @@ Pre-configure the source

.. code-block:: bash
-    GRANT ALL ON <database-name>.* TO ‘username’@‘%’;
+    GRANT ALL ON <database-name>.* TO ‘username'@‘%';
Reload the grant tables to apply the changes to the permissions.
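A minimal sketch of that reload step from the shell, assuming ``mysql`` client access to the source server with an admin account (host, port, and user below are placeholders):

.. code-block:: bash

   # Reload the in-memory grant tables so the GRANT above takes effect
   # (adjust the placeholder host, port, and admin user to your source server).
   mysql -h <source-host> -P <source-port> -u <admin-user> -p \
     -e "FLUSH PRIVILEGES;"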

@@ -25,7 +25,7 @@ To organise our development space we'll use these files:
- ``helpers.js`` to contain utilities for logging responses,
- ``search.js`` and ``aggregation.js`` for methods specific to search and aggregation requests.

-We’ll be adding code into these files and running the methods from the command line.
+We'll be adding code into these files and running the methods from the command line.

Connect to the cluster and load data
------------------------------------
4 changes: 2 additions & 2 deletions docs/products/opensearch/howto/opensearch-and-nodejs.rst
@@ -1,7 +1,7 @@
Write search queries with OpenSearch® and NodeJS
================================================

-Learn how the OpenSearch® JavaScript client gives a clear and useful interface to communicate with an OpenSearch cluster and run search queries. To make it more delicious we’ll be using a recipe dataset from Kaggle 🍕.
+Learn how the OpenSearch® JavaScript client gives a clear and useful interface to communicate with an OpenSearch cluster and run search queries. To make it more delicious we'll be using a recipe dataset from Kaggle 🍕.

Prepare the playground
**********************
@@ -22,7 +22,7 @@ To organise our development space we'll use these files:
- ``helpers.js`` to contain utilities for logging responses,
- ``search.js`` for methods specific to search requests.

-We’ll be adding code into these files and running the methods from the command line.
+We'll be adding code into these files and running the methods from the command line.

Connect to the cluster and load data
------------------------------------
6 changes: 3 additions & 3 deletions docs/products/opensearch/howto/sample-dataset.rst
@@ -166,7 +166,7 @@ To load data with NodeJS we'll use `OpenSearch JavaScript client <https://githu

Download `full_format_recipes.json <https://www.kaggle.com/hugodarwood/epirecipes?select=full_format_recipes.json>`_, unzip and put it into the project folder.

-It is possible to index values either one by one or by using a bulk operation. Because we have a file containing a long list of recipes we’ll use a bulk operation. A bulk endpoint expects a request in a format of a list where an action and an optional document are followed one after another:
+It is possible to index values either one by one or by using a bulk operation. Because we have a file containing a long list of recipes we'll use a bulk operation. A bulk endpoint expects a request in a format of a list where an action and an optional document are followed one after another:

* Action and metadata
* Optional document
@@ -193,7 +193,7 @@ To achieve this expected format, use a flat map to create a flat list of such pa
client.bulk({ refresh: true, body }, console.log(result.body));
};
-Run this method to load the data and wait till it's done. We’re injecting over 20k recipes, so it can take 10-15 seconds.
+Run this method to load the data and wait till it's done. We're injecting over 20k recipes, so it can take 10-15 seconds.
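The same action/document pairing can also be sent straight to the HTTP bulk endpoint; a minimal sketch with ``curl``, assuming ``OS_URL`` holds the service URI (including credentials) and ``payload.ndjson`` contains the alternating action and document lines, terminated by a newline:

.. code-block:: bash

   # POST newline-delimited JSON to the _bulk endpoint; refresh=true makes the
   # indexed documents searchable immediately (OS_URL and payload.ndjson are assumptions).
   curl -sS -XPOST "$OS_URL/_bulk?refresh=true" \
     -H 'Content-Type: application/x-ndjson' \
     --data-binary @payload.ndjson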

.. _get-mapping-with-nodejs:

@@ -237,7 +237,7 @@ You should be able to see the following structure:
title: { type: 'text', fields: { keyword: [Object] } }
}
-These are the fields you can play with. You can find information on dynamic mapping types `in the documentation <https://opensearch.org/docs/latest/opensearch/mappings/#dynamic-mapping>`_.
+These are the fields you can play with. You can find information on dynamic mapping types `in the documentation <https://opensearch.org/docs/latest/field-types/index/#dynamic-mapping>`_.

Sample queries with HTTP client
-------------------------------
@@ -61,7 +61,7 @@ This example assumes a source database called ``origin_database`` on a self-mana

When creating a publication entry, the ``publish`` parameter defines the operations to transfer. In the above example, all the ``INSERT``, ``UPDATE`` or ``DELETE`` operations will be transferred.

-3. PostgreSQL’s logical replication doesn’t copy table definitions, that can be extracted from the ``origin_database`` with ``pg_dump`` and included in a ``origin-database-schema.sql`` file with::
+3. PostgreSQL's logical replication doesn't copy table definitions, that can be extracted from the ``origin_database`` with ``pg_dump`` and included in a ``origin-database-schema.sql`` file with::

pg_dump --schema-only --no-publications \
SRC_CONN_URI \
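A minimal sketch of the step that typically follows the dump above — loading the extracted definitions into the destination database — assuming the dump was saved as ``origin-database-schema.sql`` and ``DEST_CONN_URI`` points at the target Aiven for PostgreSQL® database:

.. code-block:: bash

   # Apply the schema-only dump to the destination before starting logical replication
   # (DEST_CONN_URI is a placeholder for the target connection URI).
   psql "$DEST_CONN_URI" -f origin-database-schema.sql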
2 changes: 1 addition & 1 deletion docs/tools/cli/account/account-authentication-method.rst
@@ -1,7 +1,7 @@
``avn account authentication-method``
========================================================

-Here you’ll find the full list of commands for ``avn account authentication-method``.
+Here you'll find the full list of commands for ``avn account authentication-method``.


Manage account authentication methods
2 changes: 1 addition & 1 deletion docs/tools/cli/cloud.rst
@@ -1,7 +1,7 @@
``avn cloud``
==================================

-Here you’ll find the full list of commands for ``avn cloud``.
+Here you'll find the full list of commands for ``avn cloud``.


List cloud region details
2 changes: 1 addition & 1 deletion docs/tools/cli/credits.rst
@@ -1,7 +1,7 @@
``avn credits``
==================================

-Here you’ll find the full list of commands for ``avn credits``.
+Here you'll find the full list of commands for ``avn credits``.


Aiven credits
2 changes: 1 addition & 1 deletion docs/tools/cli/service/acl.rst
@@ -1,7 +1,7 @@
``avn service acl``
============================================

-Here you’ll find the full list of commands for ``avn service acl``.
+Here you'll find the full list of commands for ``avn service acl``.


Manage Apache Kafka® access control lists
2 changes: 1 addition & 1 deletion docs/tools/cli/service/connection-info.rst
@@ -1,7 +1,7 @@
``avn service connection-info``
==================================================

-Here you’ll find the full list of commands for ``avn service connection-info``.
+Here you'll find the full list of commands for ``avn service connection-info``.

.. _avn_cli_service_connection_info_kcat:

2 changes: 1 addition & 1 deletion docs/tools/cli/service/connection-pool.rst
@@ -1,7 +1,7 @@
``avn service connection-pool``
==================================================

-Here you’ll find the full list of commands for ``avn service connection-pool``.
+Here you'll find the full list of commands for ``avn service connection-pool``.


Manage PgBouncer connection pools
2 changes: 1 addition & 1 deletion docs/tools/cli/service/connector.rst
@@ -1,7 +1,7 @@
``avn service connector``
============================================

-Here you’ll find the full list of commands for ``avn service connector``.
+Here you'll find the full list of commands for ``avn service connector``.


Manage Apache Kafka® Connect connectors details
2 changes: 1 addition & 1 deletion docs/tools/cli/service/database.rst
@@ -1,7 +1,7 @@
``avn service database``
============================================

-Here you’ll find the full list of commands for ``avn service database``.
+Here you'll find the full list of commands for ``avn service database``.


Manage databases
2 changes: 1 addition & 1 deletion docs/tools/cli/service/es-acl.rst
@@ -1,7 +1,7 @@
``avn service es-acl``
============================================

-Here you’ll find the full list of commands for ``avn service es-acl``.
+Here you'll find the full list of commands for ``avn service es-acl``.


Manage OpenSearch® access control lists
2 changes: 1 addition & 1 deletion docs/tools/cli/service/m3.rst
@@ -1,7 +1,7 @@
``avn service m3``
============================================

-Here you’ll find the full list of commands for ``avn service m3``.
+Here you'll find the full list of commands for ``avn service m3``.


Manage Aiven for M3 namespaces
2 changes: 1 addition & 1 deletion docs/tools/cli/service/privatelink.rst
@@ -1,7 +1,7 @@
``avn service privatelink``
==============================================

-Here you’ll find the full list of commands for ``avn service privatelink``.
+Here you'll find the full list of commands for ``avn service privatelink``.


Manage Aiven privatelink service for AWS and Azure
2 changes: 1 addition & 1 deletion docs/tools/cli/service/service.rst
@@ -1,7 +1,7 @@
``avn service index``
============================================

-Here you’ll find the full list of commands for ``avn service index``.
+Here you'll find the full list of commands for ``avn service index``.


Manage OpenSearch® indexes
2 changes: 1 addition & 1 deletion docs/tools/cli/service/tags.rst
@@ -1,7 +1,7 @@
``avn service tags``
============================================

-Here you’ll find the full list of commands for ``avn service tags``.
+Here you'll find the full list of commands for ``avn service tags``.


Manage service tags
2 changes: 1 addition & 1 deletion docs/tools/cli/service/user.rst
@@ -1,7 +1,7 @@
``avn service user``
==================================================

-Here you’ll find the full list of commands for ``avn service user``.
+Here you'll find the full list of commands for ``avn service user``.


Manage Aiven users and credentials
2 changes: 1 addition & 1 deletion includes/config-kafka.rst
@@ -550,7 +550,7 @@
~~~~~~~~~~~~~~~~~~~~~~
*integer*

-**The timeout used to detect failures when using Kafka’s group management facilities** The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
+**The timeout used to detect failures when using Kafka's group management facilities** The timeout in milliseconds used to detect failures when using Kafka's group management facilities (defaults to 10000).
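For reference, service-level settings like this one are normally adjusted through ``avn service update``; a minimal sketch, with the configuration key left as a placeholder since the parameter name is outside this excerpt:

.. code-block:: bash

   # Raise the group-management failure-detection timeout from its 10000 ms default
   # (<parameter-path> is a placeholder for the configuration key documented above).
   avn service update <service-name> -c <parameter-path>=30000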



2 changes: 1 addition & 1 deletion includes/config-kafka_connect.rst
@@ -117,7 +117,7 @@
~~~~~~~~~~~~~~~~~~~~~~
*integer*

-**The timeout used to detect failures when using Kafka’s group management facilities** The timeout in milliseconds used to detect failures when using Kafka’s group management facilities (defaults to 10000).
+**The timeout used to detect failures when using Kafka's group management facilities** The timeout in milliseconds used to detect failures when using Kafka's group management facilities (defaults to 10000).



4 changes: 2 additions & 2 deletions includes/config-opensearch.rst
@@ -114,13 +114,13 @@
~~~~~~~~~~~~~
*['string', 'null']*

-**The key in the JSON payload that stores the user’s roles** The key in the JSON payload that stores the user’s roles. The value of this key must be a comma-separated list of roles. Required only if you want to use roles in the JWT
+**The key in the JSON payload that stores the user's roles** The key in the JSON payload that stores the user's roles. The value of this key must be a comma-separated list of roles. Required only if you want to use roles in the JWT

``subject_key``
~~~~~~~~~~~~~~~
*['string', 'null']*

-**The key in the JSON payload that stores the user’s name** The key in the JSON payload that stores the user’s name. If not defined, the subject registered claim is used. Most IdP providers use the preferred_username claim. Optional.
+**The key in the JSON payload that stores the user's name** The key in the JSON payload that stores the user's name. If not defined, the subject registered claim is used. Most IdP providers use the preferred_username claim. Optional.

``jwt_header``
~~~~~~~~~~~~~~
