From c8178aa00f70aa44d43c51116608b96352fcacc9 Mon Sep 17 00:00:00 2001
From: discourse-gatekeeper-docs-bot
Date: Sat, 30 Nov 2024 00:37:30 +0000
Subject: [PATCH 01/14] 'modified: docs/tutorial/t-deploy.md,docs/how-to/h-manage-units.md,docs/how-to/h-enable-monitoring.md,docs/reference/r-contacts.md,docs/how-to/h-backup-restore-configuration.md,docs/explanation/e-hardening.md,docs/how-to/h-upgrade.md,docs/reference/r-listeners.md,docs/reference/r-file-system-paths.md,docs/how-to/h-create-mtls-client-credentials.md,docs/how-to/h-manage-app.md,docs/how-to/h-deploy.md,docs/explanation/e-cluster-configuration.md,docs/index.md,docs/reference/r-releases/r-rev156_136.md,docs/how-to/h-enable-encryption.md,docs/reference/r-releases/r-rev156_126.md,docs/explanation/e-security.md,docs/tutorial/t-setup-environment.md,docs/how-to/h-enable-oauth.md,docs/reference/r-snap-entrypoints.md,docs/reference/r-statuses.md,docs/how-to/h-cluster-migration.md,docs/how-to/h-integrate-alerts-dashboards.md,docs/tutorial/t-cleanup-environment.md,docs/reference/r-performance-tuning.md,docs/reference/r-requirements.md,docs/tutorial/t-enable-encryption.md,docs/tutorial/t-manage-passwords.md,docs/tutorial/t-overview.md,docs/tutorial/t-relate-kafka.md'
---
 docs/explanation/e-cluster-configuration.md   | 19 +++--
 docs/explanation/e-hardening.md               | 39 ++++-----
 docs/explanation/e-security.md                | 83 ++++++++++---------
 docs/how-to/h-backup-restore-configuration.md | 14 ++--
 docs/how-to/h-cluster-migration.md            | 53 ++++++------
 .../h-create-mtls-client-credentials.md       |  9 +-
 docs/how-to/h-deploy.md                       | 40 +++++----
 docs/how-to/h-enable-encryption.md            | 23 +++--
 docs/how-to/h-enable-monitoring.md            | 16 ++--
 docs/how-to/h-enable-oauth.md                 |  8 +-
 docs/how-to/h-integrate-alerts-dashboards.md  | 13 ++-
 docs/how-to/h-manage-app.md                   |  2 +-
 docs/how-to/h-manage-units.md                 | 62 +++++++-------
 docs/how-to/h-upgrade.md                      | 55 +++++++-----
 docs/index.md                                 | 29 +++----
 docs/reference/r-contacts.md                  |  4 +-
 docs/reference/r-file-system-paths.md         | 26 +++---
 docs/reference/r-listeners.md                 | 18 ++--
 docs/reference/r-performance-tuning.md        | 10 +--
 docs/reference/r-releases/r-rev156_126.md     | 37 ++++-----
 docs/reference/r-releases/r-rev156_136.md     | 29 ++++---
 docs/reference/r-requirements.md              |  2 +-
 docs/reference/r-snap-entrypoints.md          |  8 +-
 docs/reference/r-statuses.md                  | 44 +++++-----
 docs/tutorial/t-cleanup-environment.md        | 17 ++--
 docs/tutorial/t-deploy.md                     | 30 +++----
 docs/tutorial/t-enable-encryption.md          | 28 ++++---
 docs/tutorial/t-manage-passwords.md           | 27 +++---
 docs/tutorial/t-overview.md                   | 22 ++---
 docs/tutorial/t-relate-kafka.md               | 34 ++++----
 docs/tutorial/t-setup-environment.md          | 19 +++--
 31 files changed, 440 insertions(+), 380 deletions(-)

diff --git a/docs/explanation/e-cluster-configuration.md b/docs/explanation/e-cluster-configuration.md
index c9fc342f..5e9f6709 100644
--- a/docs/explanation/e-cluster-configuration.md
+++ b/docs/explanation/e-cluster-configuration.md
@@ -1,19 +1,20 @@
 # Overview of a cluster configuration content

 [Apache Kafka](https://kafka.apache.org) is an open-source distributed event streaming platform that requires an external solution to coordinate and sync metadata between all active brokers.
-One of such solutions is [ZooKeeper](https://zookeeper.apache.org).
+One such solution is [Apache ZooKeeper](https://zookeeper.apache.org).
-Here are some of the responsibilities of ZooKeeper in a Kafka cluster:
+Here are some of the responsibilities of Apache ZooKeeper in an Apache Kafka cluster:

 - **Cluster membership**: through regular heartbeats, it keeps track of the brokers entering and leaving the cluster, providing an up-to-date list of brokers.
-- **Controller election**: one of the Kafka brokers is responsible for managing the leader/follower status for all the partitions. ZooKeeper is used to elect a controller and to make sure there is only one of it.
-- **Topic configuration**: each topic can be replicated on multiple partitions. ZooKeeper keeps track of the locations of the partitions and replicas, so that high-availability is still attained when a broker shuts down. Topic-specific configuration overrides (e.g. message retention and size) are also stored in ZooKeeper.
-- **Access control and authentication**: ZooKeeper stores access control lists (ACL) for Kafka resources, to ensure only the proper, authorized, users or groups can read or write on each topic.
+- **Controller election**: one of the Apache Kafka brokers is responsible for managing the leader/follower status for all the partitions. Apache ZooKeeper is used to elect a controller and to make sure there is only one.
+- **Topic configuration**: each topic can be replicated on multiple partitions. Apache ZooKeeper keeps track of the locations of the partitions and replicas so that high availability is still attained when a broker shuts down. Topic-specific configuration overrides (e.g. message retention and size) are also stored in Apache ZooKeeper.
+- **Access control and authentication**: Apache ZooKeeper stores access control lists (ACL) for Apache Kafka resources, to ensure only the proper, authorized users or groups can read or write on each topic.

-The values for the configuration parameters mentioned above are stored in znodes, the hierarchical unit data structure in ZooKeeper.
+The values for the configuration parameters mentioned above are stored in znodes, the hierarchical unit data structure in Apache ZooKeeper.
 A znode is represented by its path and can both have data associated with it and children nodes.
-ZooKeeper clients interact with its data structure similarly to a remote file system that would be sync-ed between the ZooKeeper units for high availability.
-For a Charmed Kafka related to a Charmed ZooKeeper:
+Apache ZooKeeper clients interact with its data structure similarly to a remote file system that would be synced between the Apache ZooKeeper units for high availability.
+For a Charmed Apache Kafka related to a Charmed Apache ZooKeeper:
+
 - the list of the broker ids of the cluster can be found in `/kafka/brokers/ids`
 - the endpoint used to access the broker with id `0` can be found in `/kafka/brokers/ids/0`
-- the credentials for the Charmed Kafka users can be found in `/kafka/config/users`
\ No newline at end of file
+- the credentials for the Charmed Apache Kafka users can be found in `/kafka/config/users`
\ No newline at end of file
diff --git a/docs/explanation/e-hardening.md b/docs/explanation/e-hardening.md
index 32e5f31e..88dab405 100644
--- a/docs/explanation/e-hardening.md
+++ b/docs/explanation/e-hardening.md
@@ -1,11 +1,11 @@
 # Security Hardening Guide

 This document provides guidance and instructions to achieve
-a secure deployment of [Charmed Kafka](https://github.com/canonical/kafka-bundle), including setting up and managing a secure environment.
+a secure deployment of [Charmed Apache Kafka](https://github.com/canonical/kafka-bundle), including setting up and managing a secure environment.
 The document is divided into the following sections:

 1. Environment, outlining the recommendations for deploying a secure environment
-2. Applications, outlining the product features that enable a secure deployment of a Kafka cluster
+2. Applications, outlining the product features that enable a secure deployment of an Apache Kafka cluster
 3. Additional resources, providing any further information about security and compliance

 ## Environment

 The environment where applications operate can be divided into two components:

 1. Cloud
 2. Juju

 ### Cloud

-Charmed Kafka can be deployed on top of several clouds and virtualization layers:
+Charmed Apache Kafka can be deployed on top of several clouds and virtualization layers:

 | Cloud | Security guide |
 |-------|----------------|

 ### Juju

@@ -36,7 +36,7 @@ all applications. Therefore, it is imperative that it is set up securely. Please

 #### Cloud credentials

 When configuring the cloud credentials to be used with Juju, ensure that the users have correct permissions to operate at the required level.
-Juju superusers responsible for bootstrapping and managing controllers require elevated permissions to manage several kind of resources, such as
+Juju superusers responsible for bootstrapping and managing controllers require elevated permissions to manage several kinds of resources, such as
 virtual machines, networks, storage, etc. Please refer to the references below for more information on the policies required to be used depending on the cloud.

 | Cloud | Cloud user policies |
 |-------|---------------------|

@@ -47,7 +47,7 @@

 #### Juju users

-It is very important that the different juju users are set up with minimal permission depending on the scope for their operations.
+It is very important that the different Juju users are set up with minimal permissions depending on the scope of their operations.
 Please refer to the [User access levels](https://juju.is/docs/juju/user-permissions) documentation for more information on the access level and corresponding abilities that the different users can be granted.

@@ -55,39 +55,39 @@ Juju user credentials must be stored securely and rotated regularly to limit

 ## Applications

-In the following we provide guidance on how to harden your deployment using:
+In the following, we provide guidance on how to harden your deployment using:

 1. Operating System
-2. Kafka and ZooKeeper Security Upgrades
+2. Apache Kafka and Apache ZooKeeper Security Upgrades
 3. Encryption
 4. Authentication
 5. Monitoring and Auditing

 ### Operating System

-Charmed Kafka and Charmed ZooKeeper currently run on top of Ubuntu 22.04. Deploy a [Landscape Client Charm](https://charmhub.io/landscape-client?) in order to
+Charmed Apache Kafka and Charmed Apache ZooKeeper currently run on top of Ubuntu 22.04. Deploy a [Landscape Client Charm](https://charmhub.io/landscape-client?) in order to
 connect the underlying VM to a Landscape User Account to manage security upgrades and integrate Ubuntu Pro subscriptions.
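+
+As a minimal sketch (the account details below are placeholders and depend on your own Landscape setup; see the charm's configuration reference for the full set of options), the client can be attached to both applications as a subordinate:
+
+```shell
+# deploy the Landscape client and relate it to both applications
+juju deploy landscape-client --config account-name=<your-landscape-account>
+juju relate landscape-client kafka
+juju relate landscape-client zookeeper
+```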
-### Kafka and ZooKeeper Security Upgrades
+### Apache Kafka and Apache ZooKeeper Security Upgrades

-Charmed Kafka and Charmed ZooKeeper operators install a pinned revision of the [Charmed Kafka snap](https://snapcraft.io/charmed-kafka)
+Charmed Apache Kafka and Charmed Apache ZooKeeper operators install a pinned revision of the [Charmed Apache Kafka snap](https://snapcraft.io/charmed-kafka)
 and [Charmed Apache ZooKeeper snap](https://snapcraft.io/charmed-zookeeper), respectively, in order to provide reproducible and secure environments.
-New versions of Charmed Kafka and Charmed ZooKeeper may be released to provide patching of vulnerabilities (CVEs).
+New versions of Charmed Apache Kafka and Charmed Apache ZooKeeper may be released to provide patching of vulnerabilities (CVEs).
 It is important to refresh the charm regularly to make sure the workload is as secure as possible.
 For more information on how to refresh the charm, see the [how-to upgrade](https://charmhub.io/kafka/docs/h-upgrade) guide.

 ### Encryption

-Charmed Kafka must be deployed with encryption enabled.
-To do that, you need to relate Kafka and ZooKeeper charms to one of the TLS certificate operator charms.
+Charmed Apache Kafka must be deployed with encryption enabled.
+To do that, you need to relate the Apache Kafka and Apache ZooKeeper charms to one of the TLS certificate operator charms.
 Please refer to the [Charming Security page](https://charmhub.io/topics/security-with-x-509-certificates) for more information on how to select the right certificate
-provider for your use-case.
+provider for your use case.

 For more information on encryption setup, see the [How to enable encryption](https://charmhub.io/kafka/docs/h-enable-encryption) guide.

 ### Authentication

-Charmed Kafka supports the following authentication layers:
+Charmed Apache Kafka supports the following authentication layers:

 1. [SCRAM-based SASL Authentication](/t/charmed-kafka-how-to-manage-app/10285)
 2. [certificate-based Authentication (mTLS)](/t/create-mtls-client-credentials/11079)

@@ -98,16 +98,17 @@ Please refer to the [listener reference documentation](/t/charmed-kafka-document

 ### Monitoring and Auditing

-Charmed Kafka provides native integration with the [Canonical Observability Stack (COS)](https://charmhub.io/topics/canonical-observability-stack).
+Charmed Apache Kafka provides native integration with the [Canonical Observability Stack (COS)](https://charmhub.io/topics/canonical-observability-stack).
 To reduce the blast radius of infrastructure disruptions, the general recommendation is to deploy COS and the observed application into separate environments, isolated from one another. Refer to the [COS production deployments best practices](https://charmhub.io/topics/canonical-observability-stack/reference/best-practices) for more information.

 Refer to the How-To user guides for more information on:
+
-* [how to integrate the Charmed Kafka deployment with COS](/t/charmed-kafka-how-to-enable-monitoring/10283)
+* [how to integrate the Charmed Apache Kafka deployment with COS](/t/charmed-kafka-how-to-enable-monitoring/10283)
 * [how to customise the alerting rules and dashboards](/t/charmed-kafka-documentation-how-to-integrate-custom-alerting-rules-and-dashboards/13431)

-External user access to Kafka is logged to the `kafka-authorizer.log` that is pushes to [Loki endpoint](https://charmhub.io/loki-k8s) and exposed via [Grafana](https://charmhub.io/grafana), both components being part of the COS stack.
+External user access to Apache Kafka is logged to the `kafka-authorizer.log` that is pushed to a [Loki endpoint](https://charmhub.io/loki-k8s) and exposed via [Grafana](https://charmhub.io/grafana), both components being part of the COS stack.
 Access denials are logged at INFO level, whereas allowed accesses are logged at DEBUG level. Depending on the auditing needs,
 customize the logging level either for all logs via the [`log_level`](https://charmhub.io/kafka/configurations?channel=3/stable#log_level) config option or only tune the logging level of the `authorizerAppender` in the `log4j.properties` file. Refer to the Reference documentation for more information about
 the [file system paths](/t/charmed-kafka-documentation-reference-file-system-pat

 ## Additional Resources

-For further information and details on the security and cryptographic specifications used by Charmed Kafka, please refer to the [Security Explanation page](/t/charmed-kafka-documentation-explanation-security/15714).
\ No newline at end of file
+For further information and details on the security and cryptographic specifications used by Charmed Apache Kafka, please refer to the [Security Explanation page](/t/charmed-kafka-documentation-explanation-security/15714).
\ No newline at end of file
diff --git a/docs/explanation/e-security.md b/docs/explanation/e-security.md
index 21b58ec2..9b1e195a 100644
--- a/docs/explanation/e-security.md
+++ b/docs/explanation/e-security.md
@@ -1,29 +1,33 @@
 # Cryptography

-This document describes cryptography used by Charmed Kafka.
+This document describes cryptography used by Charmed Apache Kafka.

 ## Resource checksums
+
+Every version of the Charmed Apache Kafka and Charmed Apache ZooKeeper operators installs a pinned revision of the Charmed Apache Kafka snap
+and Charmed Apache ZooKeeper snap, respectively, in order to
+provide reproducible and secure environments. The [Charmed Apache Kafka snap](https://snapstore.io/charmed-kafka) and [Charmed Apache ZooKeeper snap](https://snapstore.io/charmed-zookeeper) package the
+Apache Kafka and Apache ZooKeeper workloads together with
+a set of dependencies and utilities required by the lifecycle of the operators (see [Charmed Apache Kafka snap contents](https://github.com/canonical/charmed-kafka-snap/blob/3/edge/snap/snapcraft.yaml) and [Charmed Apache ZooKeeper snap contents](https://github.com/canonical/charmed-zookeeper-snap/blob/3/edge/snap/snapcraft.yaml)).
+Every artifact bundled into the Charmed Apache Kafka snap and Charmed Apache ZooKeeper snap is verified against its SHA256 or SHA512 checksum after download.
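+
+For illustration, the same kind of verification can be reproduced manually with standard tooling (the artifact name and expected digest below are placeholders):
+
+```shell
+# compare a downloaded artifact against its published checksum
+echo "<expected-sha256>  <artifact>.tar.gz" | sha256sum --check
+```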
 ## Sources verification
+
+Charmed Apache Kafka sources are stored in:

 * GitHub repositories for snaps, rocks and charms
-* LaunchPad repositories for the Kafka and ZooKeeper upstream fork used for building their respective distributions
+* LaunchPad repositories for the Apache Kafka and Apache ZooKeeper upstream forks used for building their respective distributions

 ### LaunchPad
+
 Distributions are built using private repositories only, hosted as part of the [SOSS namespace](https://launchpad.net/soss) to eventually
 integrate with Canonical's standard process for fixing CVEs.
 Branches associated with releases are mirrored to a public repository, hosted in the [Data Platform namespace](https://launchpad.net/~data-platform)
 to also provide the community with the patched source code.

 ### GitHub
+
-All Charmed Kafka and Charmed ZooKeeper artifacts are published and released
+All Charmed Apache Kafka and Charmed Apache ZooKeeper artifacts are published and released
 programmatically using release pipelines implemented via GitHub Actions.
 Distributions are published as both GitHub and LaunchPad releases via the [central-uploader repository](https://github.com/canonical/central-uploader), while
 charms, snaps and rocks are published using the workflows of their respective repositories.

 All repositories in GitHub are set up with branch protection rules, requiring:

 * developers to sign the [Canonical Contributor License Agreement (CLA)](https://ubuntu.com/legal/contributors)

 ## Encryption
+
-The Charmed Kafka operator can be used to deploy a secure Kafka cluster that provides encryption-in-transit capabilities out of the box
+The Charmed Apache Kafka operator can be used to deploy a secure Apache Kafka cluster that provides encryption-in-transit capabilities out of the box
 for:

 * Interbroker communications
-* ZooKeeper connection
+* Apache ZooKeeper connection
 * External client connection

-In order to set up secure connection Kafka and ZooKeeper applications need to be integrated with TLS Certificate Provider charms, e.g.
+To set up a secure connection, the Apache Kafka and Apache ZooKeeper applications need to be integrated with TLS Certificate Provider charms, e.g. the
 `self-signed-certificates` operator. CSRs are generated for every unit using the `tls_certificates_interface` library, which uses the `cryptography` Python library to create X.509 compatible certificates. The CSR is signed by the TLS Certificate Provider and returned to the units, and
-stored in a password-protected Keystore file. The password of the Keystore is stored in Juju secrets starting from revision 168 on Kafka
-and revision 130 on ZooKeeper. The relation provides also the certificate for the CA to be loaded in a password-protected Truststore file.
+stored in a password-protected Keystore file. The password of the Keystore is stored in Juju secrets starting from revision 168 on Apache Kafka
+and revision 130 on Apache ZooKeeper. The relation also provides the certificate for the CA to be loaded in a password-protected Truststore file.

 When encryption is enabled, hostname verification is turned on for client connections, including inter-broker communication.
 Cipher suites can
-be customized by providing a list of allowed cipher suite to be used for external clients and zookeeper connections, using the charm config options
+be customized by providing a list of allowed cipher suites to be used for external clients and Apache ZooKeeper connections, via the
 `ssl_cipher_suites` and `zookeeper_ssl_cipher_suites` config options respectively. Please refer to the [reference documentation](https://charmhub.io/kafka/configurations)
 for more information.

 Encryption at rest is currently not supported, although it can be provided by the substrate (cloud or on-premises).

 ## Authentication

-In the Charmed Kafka solution, authentication layers can be enabled for
-1. ZooKeeper connections
-2. Kafka inter broker communication
-3. Kafka Clients
+In the Charmed Apache Kafka solution, authentication layers can be enabled for:
+
+1. Apache ZooKeeper connections
+2. Apache Kafka inter-broker communication
+3. Apache Kafka Clients

-### Kafka authentication to ZooKeeper
-Authentication to ZooKeeper is based on Simple Authentication and Security Layer (SASL) using digested MD5 hashes of
-username and password, and implemented both for client-server (with Kafka) and server-server communication.
-Username and passwords are exchanged using peer relations among ZooKeeper units and using normal relations between Kafka and ZooKeeper.
-Juju secrets are used for exchanging credentials starting from revision 168 on Kafka and revision 130 on ZooKeeper.
+### Apache Kafka authentication to Apache ZooKeeper
+
+Authentication to Apache ZooKeeper is based on Simple Authentication and Security Layer (SASL) using digested MD5 hashes of
+username and password, and is implemented both for client-server (with Apache Kafka) and server-server communication.
+Usernames and passwords are exchanged using peer relations among Apache ZooKeeper units and using normal relations between Apache Kafka and Apache ZooKeeper.
+Juju secrets are used for exchanging credentials starting from revision 168 on Apache Kafka and revision 130 on Apache ZooKeeper.

-Username and password for the different users are stored in ZooKeeper servers in a JAAS configuration file in plain format.
+Usernames and passwords for the different users are stored in Apache ZooKeeper servers in a JAAS configuration file in plain text format.
 Permission on the file is restricted to the root user.

-### Kafka Inter-broker authentication
+### Apache Kafka Inter-broker authentication
+
-Authentication among brokers is based on SCRAM-SHA-512 protocol. Username and passwords are exchanged
-via peer relations, using Juju secrets from revision 168 on Charmed Kafka.
+Authentication among brokers is based on the SCRAM-SHA-512 protocol. Usernames and passwords are exchanged
+via peer relations, using Juju secrets from revision 168 on Charmed Apache Kafka.

-Kafka username and password used by brokers to authenticate one another are stored
-both in a ZooKeeper zNode and in a JAAS configuration file in the Kafka server in plain format.
+The Apache Kafka username and password used by brokers to authenticate one another are stored
+both in an Apache ZooKeeper znode and in a JAAS configuration file in the Apache Kafka server in plain text format.
 The file needs to be readable and
-writable by root (as it is created by the charm), and be readable by the `snap_daemon` user running the Kafka server snap commands.
+writable by root (as it is created by the charm), and be readable by the `snap_daemon` user running the Apache Kafka server snap commands.
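+
+As a quick sanity check of those permissions (a sketch, assuming the default snap layout used elsewhere in this documentation):
+
+```shell
+# inspect ownership and permissions of the configuration directory on a unit
+juju ssh kafka/0 'ls -l /var/snap/charmed-kafka/current/etc/kafka/'
+```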
+
+### Client authentication to Apache Kafka

-### Client authentication to Kafka
-Clients can authenticate to Kafka using:
+Clients can authenticate to Apache Kafka using:

 1. username and password exchanged using SCRAM-SHA-512 protocols
 2. client certificates or CAs (mTLS)

-When using SCRAM, username and passwords are stored in ZooKeeper to be used by the Kafka processes,
-in peer-relation data to be used by the Kafka charm and in external relation to be shared with client applications.
-Starting from revision 168 on Kafka, Juju secrets are used for storing the credentials in place of plain unencrypted text.
+When using SCRAM, usernames and passwords are stored in Apache ZooKeeper to be used by the Apache Kafka processes,
+in peer-relation data to be used by the Apache Kafka charm and in external relation data to be shared with client applications.
+Starting from revision 168 on Charmed Apache Kafka, Juju secrets are used for storing the credentials in place of plain unencrypted text.

-When using mTLS, client certificates are loaded into a `tls-certificates` operator and provided to the Kafka charm via the plain-text unencrypted
+When using mTLS, client certificates are loaded into a `tls-certificates` operator and provided to the Apache Kafka charm via the plain-text unencrypted
 relation. Certificates are stored in the password-protected Truststore file.
\ No newline at end of file
diff --git a/docs/how-to/h-backup-restore-configuration.md b/docs/how-to/h-backup-restore-configuration.md
index 723d184c..4c697d57 100644
--- a/docs/how-to/h-backup-restore-configuration.md
+++ b/docs/how-to/h-backup-restore-configuration.md
@@ -1,10 +1,10 @@
 # Configuration backup and restore

-Charmed Kafka's configuration is distributed using [Charmed ZooKeeper](https://charmhub.io/zookeeper?channel=3/stable).
-A Charmed ZooKeeper backup can be stored on any S3-compatible storage.
+Charmed Apache Kafka's configuration is distributed using [Charmed Apache ZooKeeper](https://charmhub.io/zookeeper?channel=3/stable).
+A Charmed Apache ZooKeeper backup can be stored on any S3-compatible storage.
 S3 access and configurations are managed with the [`s3-integrator` charm](https://charmhub.io/s3-integrator).

-This guide contains step-by-step instructions on how to deploy and configure the `s3-integrator` charm for [AWS S3](https://aws.amazon.com/s3/), send the configurations to the Charmed ZooKeeper application, and finally manage your Charmed ZooKeeper backups.
+This guide contains step-by-step instructions on how to deploy and configure the `s3-integrator` charm for [AWS S3](https://aws.amazon.com/s3/), send the configurations to the Charmed Apache ZooKeeper application, and finally manage your Charmed Apache ZooKeeper backups.

 ## Configure `s3-integrator`

@@ -29,9 +29,9 @@ juju config s3-integrator \
 The only mandatory configuration parameter in the command above is the `bucket`.
 [/note]

-### Integrate with Charmed ZooKeeper
+### Integrate with Charmed Apache ZooKeeper

-To pass these configurations to Charmed ZooKeeper, integrate the two applications:
+To pass these configurations to Charmed Apache ZooKeeper, integrate the two applications:

 ```shell
 juju integrate s3-integrator zookeeper
 ```

 juju run zookeeper/leader restore backup-id=

 ## Create a backup

-Check that Charmed ZooKeeper deployment with configurations set for S3 storage is `active` and `idle` with the `juju status` command. Once it's active, create a backup with the `create-backup` command:
+Check that the Charmed Apache ZooKeeper deployment with configurations set for S3 storage is `active` and `idle` with the `juju status` command. Once it's active, create a backup with the `create-backup` command:

 ```shell
 juju run zookeeper/leader create-backup
 ```

-Charmed ZooKeeper backups created with the command above will always be **full** backups: a copy of *all* the Charmed Kafka configuration will be stored in S3.
+Charmed Apache ZooKeeper backups created with the command above will always be **full** backups: a copy of *all* the Charmed Apache Kafka configuration will be stored in S3.

 The command will output the ID of the newly created backup:
diff --git a/docs/how-to/h-cluster-migration.md b/docs/how-to/h-cluster-migration.md
index 6b9ffa0a..1da37f60 100644
--- a/docs/how-to/h-cluster-migration.md
+++ b/docs/how-to/h-cluster-migration.md
@@ -1,27 +1,30 @@
 # Cluster migration using MirrorMaker2.0

-This How-To guide covers executing a cluster migration to a Charmed Kafka deployment using MirrorMaker2.0, running as a process on each of the Juju units in an active/passive setup, where MirrorMaker will act as a consumer from an existing cluster, and a producer to the Charmed Kafka cluster. In parallel (one process on each unit), data and consumer offsets for all existing topics will be synced one-way until both clusters are in-sync, with all data replicated across both in real-time.
+This How-To guide covers executing a cluster migration to a Charmed Apache Kafka deployment using MirrorMaker2.0.
+
+MirrorMaker runs on the new (destination) cluster as a process on each Juju unit in an active/passive setup. It acts as a consumer from an existing cluster and a producer to the Charmed Apache Kafka cluster. Data and consumer offsets for all existing topics will be synced **one-way** in parallel (one process on each unit) until both clusters are in sync, with all data replicated across both in real time.

 ## MirrorMaker2 overview

 Under the hood, MirrorMaker uses Kafka Connect source connectors to replicate data, namely the following:
+
 - **MirrorSourceConnector** - replicates topics from an original cluster to a new cluster. It also replicates ACLs and is necessary for the MirrorCheckpointConnector to run
 - **MirrorCheckpointConnector** - periodically tracks offsets. If enabled, it also synchronizes consumer group offsets between the original and new clusters
 - **MirrorHeartbeatConnector** - periodically checks connectivity between the original and new clusters

-Together, they are used for cluster->cluster replication of topics, consumer groups, topic configuration and ACLs, preserving partitioning and consumer offsets. For more detail on MirrorMaker internals, consult the [MirrorMaker README.md](https://github.com/apache/kafka/blob/trunk/connect/mirror/README.md) and the [MirrorMaker 2.0 KIP](https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0). In practice, it lets sync data one-way between two live Kafka clusters with minimal impact on the ongoing production service.
+Together, they are used for cluster->cluster replication of topics, consumer groups, topic configuration and ACLs, preserving partitioning and consumer offsets. For more detail on MirrorMaker internals, consult the [MirrorMaker README.md](https://github.com/apache/kafka/blob/trunk/connect/mirror/README.md) and the [MirrorMaker 2.0 KIP](https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0). In practice, it lets you sync data one-way between two live Apache Kafka clusters with minimal impact on the ongoing production service.

 In short, MirrorMaker runs as a distributed service on the new cluster, and consumes all topics, groups and offsets from the still-active original cluster in production, before producing them one-way to the new cluster that may not yet be serving traffic to external clients. The original, in-production cluster is referred to as an ‘active’ cluster, and the new cluster still waiting to serve external clients is ‘passive’. The MirrorMaker service can be configured using much the same configuration as available for Kafka Connect.

 ## Pre-requisites

-- An existing Kafka cluster to migrate from
-- A bootstrapped Juju VM machine cloud running Charmed Kafka to migrate to
-  - A tutorial on how to set-up a Charmed Kafka deployment can be found as part of the [Charmed Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571)
-  - The CLI tool `yq` - https://github.com/mikefarah/yq
+- An existing Apache Kafka cluster to migrate from
+- A bootstrapped Juju VM machine cloud running Charmed Apache Kafka to migrate to
+  - A tutorial on how to set up a Charmed Apache Kafka deployment can be found as part of the [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571)
+- The CLI tool `yq` - https://github.com/mikefarah/yq
   - `snap install yq --channel=v3/stable`

-## Getting Charmed Kafka cluster details and admin credentials
+## Getting Charmed Apache Kafka cluster details and admin credentials

 By design, the `kafka` charm will not expose any available connections until related to by a client. In this case, we deploy `data-integrator` charms and relate them to each `kafka` application, requesting `admin` level privileges:

 juju relate kafka data-integrator
 ```

 When the `data-integrator` charm relates to a `kafka` application on the `kafka-client` relation interface, passing `extra-user-roles=admin`, a new user with `super.user` permissions will be created on that cluster, with the charm passing back the credentials and broker addresses in the relation data to the `data-integrator`.
-As we will need full access to both clusters, we must grab these newly-generated authorisation credentials from the `data-integrator`:
+As we will need full access to both clusters, we must grab these newly generated authorisation credentials from the `data-integrator`:

 ```bash
-# SASL credentials to connect to the Charmed Kafka cluster
+# SASL credentials to connect to the Charmed Apache Kafka cluster
 export NEW_USERNAME=$(juju show-unit data-integrator/0 | yq -r '.. | .username? // empty')
 export NEW_PASSWORD=$(juju show-unit data-integrator/0 | yq -r '.. | .password? // empty')

@@ -50,18 +53,20 @@ export NEW_SASL_JAAS_CONFIG="org.apache.kafka.common.security.scram.ScramLoginMo

 To authenticate MirrorMaker to both clusters, it will need full `super.user` permissions on **BOTH** clusters. MirrorMaker supports every possible `security.protocol` supported by Apache Kafka.
 In this guide, we will make the assumption that the original cluster is using `SASL_PLAINTEXT` authentication. As such, the required information is as follows:

 ```bash
-# comma-separated list of kafka server IPs and ports to connect to
+# comma-separated list of Apache Kafka server IPs and ports to connect to
 OLD_SERVERS

 # string of sasl.jaas.config property
 OLD_SASL_JAAS_CONFIG
 ```

-> **NOTE** - If using `SSL` or `SASL_SSL` authentication, review the configuration options supported by Kafka Connect in the [Apache Kafka documentation](https://kafka.apache.org/documentation/#connectconfigs)
+[note]
+If using `SSL` or `SASL_SSL` authentication, review the configuration options supported by Kafka Connect in the [Apache Kafka documentation](https://kafka.apache.org/documentation/#connectconfigs)
+[/note]

-## Generating `mm2.properties` file on the Charmed Kafka cluster
+## Generating `mm2.properties` file on the Charmed Apache Kafka cluster

-MirrorMaker takes a `.properties` file for its configuration to fine-tune behaviour. See below an example `mm2.properties` file that can be placed on each of the Charmed Kafka units using the above credentials:
+MirrorMaker takes a `.properties` file for its configuration to fine-tune behaviour. See below an example `mm2.properties` file that can be placed on each of the Charmed Apache Kafka units using the above credentials:

 ```bash
 # Aliases for each cluster, can be set to any unique alias
 clusters = old,new

 old->new.enabled = true
 new->old.enabled = false

-# comma-separated list of kafka server IPs and ports to connect from both clusters
+# comma-separated list of Apache Kafka server IPs and ports to connect to from both clusters
 old.bootstrap.servers=$OLD_SERVERS
 new.bootstrap.servers=$NEW_SERVERS

-# sasl authentication config for each cluster, in this case using the 'admin' users created by the integrator charm for Charmed Kafka
+# sasl authentication config for each cluster, in this case using the 'admin' users created by the integrator charm for Charmed Apache Kafka
 old.sasl.jaas.config=$OLD_SASL_JAAS_CONFIG
 new.sasl.jaas.config=$NEW_SASL_JAAS_CONFIG

-# if not deployed with TLS, Charmed Kafka uses SCRAM-SHA-512 for SASL auth, with a SASL_PLAINTEXT listener
+# if not deployed with TLS, Charmed Apache Kafka uses SCRAM-SHA-512 for SASL auth, with a SASL_PLAINTEXT listener
 sasl.mechanism=SCRAM-SHA-512
 security.protocol=SASL_PLAINTEXT

@@ -110,7 +115,7 @@
 old.consumer.isolation.level=read_committed
 new.consumer.isolation.level=read_committed

 # Specific Connector configuration for ensuring Exactly-Once-Delivery (EOD)
-# NOTE - EOD support guarantees released with Kafka 3.5.0 so some of these options may not work as expected
+# NOTE - EOD support guarantees were released with Apache Kafka 3.5.0, so some of these options may not work as expected
 old.producer.enable.idempotence=true
 new.producer.enable.idempotence=true
 old.producer.acks=all
 new.producer.acks=all
 # new.exactly.once.support = enabled
 ```

-Once these properties have been generated (in this example, saved to `/tmp/mm2.properties`), it is needed to place them on every Charmed Kafka unit:
+Once these properties have been generated (in this example, saved to `/tmp/mm2.properties`), they need to be placed on every Charmed Apache Kafka unit:

 ```bash
 cat /tmp/mm2.properties | juju ssh kafka/ sudo -i 'sudo tee -a /var/snap/charmed-kafka/current/etc/kafka/mm2.properties'
 ```

 ## Starting a dedicated MirrorMaker cluster

-It is strongly advised to run MirrorMaker services on the downstream cluster to avoid service impact due to resource use. Now that the properties are set on each unit of the new cluster, the MirrorMaker services can be started using with JMX metrics exporters using the following:
+It is strongly advised to run MirrorMaker services on the downstream cluster to avoid service impact due to resource use. Now that the properties are set on each unit of the new cluster, the MirrorMaker services can be started with JMX metrics exporters using the following:

 ```bash
 # building KAFKA_OPTS env-var for running with an exporter

 juju ssh kafka/ sudo -i 'cd /snap/charmed-kafka/current/opt/kafka/bin && KAF
 ```

 ## Monitoring and validating data replication

-The migration process can be monitored using built-in Kafka bin-commands on the original cluster. In the Charmed Kafka cluster, these bin-commands are also mapped to snap commands on the units (e.g `charmed-kafka.get-offsets` or `charmed-kafka.topics`).
+The migration process can be monitored using the original cluster's built-in Apache Kafka bin commands. In the Charmed Apache Kafka cluster, these bin commands are also mapped to snap commands on the units (e.g. `charmed-kafka.get-offsets` or `charmed-kafka.topics`).

 To monitor the current consumer offsets, run the following on the original cluster being migrated from:

 There is also a [range of different metrics](https://github.com/apache/kafka/blo

 ```
 curl 10.248.204.198:9099/metrics | grep records_count
 ```

-## Switching client traffic from original cluster to Charmed Kafka cluster
+## Switching client traffic

-Once happy that all the necessary data has been successfully migrated, stop all active consumer applications on the original cluster, and redirect them to the Charmed Kafka cluster, making sure to use the Charmed Kafka cluster server addresses and authentication. After doing so, they will re-join their original consumer groups at the last committed offset it had originally, and continue consuming as normal.
-Finally, the producer client applications can be stopped, updated with the Charmed Kafka cluster server addresses and authentication, and restarted, with any newly produced messages being received by the migrated consumer client applications, completing the migration of both the data, and the client applications.
+Once happy with data migration, stop all active consumer applications on the original cluster and redirect them to the new Charmed Apache Kafka cluster, making sure to use the Charmed Apache Kafka cluster server addresses and authentication. After doing so, they will re-join their original consumer groups at the last committed offset they had originally, and continue consuming as normal.
+Finally, the producer client applications can be stopped, updated with the Charmed Apache Kafka cluster server addresses and authentication, and restarted, with any newly produced messages being received by the migrated consumer client applications, completing the migration of both the data and the client applications.

 ## Stopping MirrorMaker replication

-Once confident in the successful completion of the data client migration, the running processes on each of the charm units can be killed, stopping the MirrorMaker processes active on the Charmed Kafka cluster.
\ No newline at end of file
+Once confident in the successful completion of the data client migration, the running processes on each of the charm units can be killed, stopping the MirrorMaker processes active on the Charmed Apache Kafka cluster.
\ No newline at end of file
diff --git a/docs/how-to/h-create-mtls-client-credentials.md b/docs/how-to/h-create-mtls-client-credentials.md
index 495e7d76..8aae2939 100644
--- a/docs/how-to/h-create-mtls-client-credentials.md
+++ b/docs/how-to/h-create-mtls-client-credentials.md
@@ -1,14 +1,15 @@
 # Create mTLS client credentials

 Requirements:
+
-- Charmed Kafka cluster up and running
+- Charmed Apache Kafka cluster up and running
 - [Admin credentials](./h-manage-app.md)
 - [Encryption enabled](./h-enable-encryption.md)
 - [Java JRE](https://ubuntu.com/tutorials/install-jre#1-overview) installed
 - [`charmed-kafka` snap](https://snapcraft.io/charmed-kafka) installed
 - [jq](https://snapcraft.io/jq) installed

-Goal: Create mTLS credentials for a client application to be able to connect to the Kafka cluster.
+This guide includes step-by-step instructions on how to create mTLS credentials for a client application to be able to connect to a Charmed Apache Kafka cluster.

 ## Authentication

 # ---------- Environment
 SNAP_KAFKA_PATH=/var/snap/charmed-kafka/current/etc/kafka

-# Kafka ports
+# Apache Kafka ports
 KAFKA_SASL_PORT=9093
 KAFKA_MTLS_PORT=9094

-# Kafka servers
+# Apache Kafka servers
 KAFKA_SERVERS_SASL=$KAFKA_SASL_PORT
 KAFKA_SERVERS_MTLS=$KAFKA_MTLS_PORT

diff --git a/docs/how-to/h-deploy.md b/docs/how-to/h-deploy.md
index 9c9d5910..28c7f14d 100644
--- a/docs/how-to/h-deploy.md
+++ b/docs/how-to/h-deploy.md
@@ -1,19 +1,19 @@
-# How to deploy Charmed Kafka
+# How to deploy Charmed Apache Kafka

-To deploy a Charmed Kafka cluster on a bare environment, it is necessary to:
+To deploy a Charmed Apache Kafka cluster on a bare environment, it is necessary to:

 1. Set up a Juju Controller
 2. Set up a Juju Model
-3. Deploy Charmed Kafka and Charmed ZooKeeper
+3. Deploy Charmed Apache Kafka and Charmed Apache ZooKeeper
 4. (Optionally) Create an external admin user

 In the next subsections, we will cover these steps separately by referring to
-relevant Juju documentation and providing details on the Charmed Kafka specifics.
+relevant Juju documentation and providing details on the Charmed Apache Kafka specifics.
 If you already have a Juju controller and/or a Juju model, you can skip the associated steps.

 ## Juju controller setup

-Before deploying Kafka, make sure you have a Juju controller accessible from
+Before deploying Apache Kafka, make sure you have a Juju controller accessible from
 your local environment using the [Juju client snap](https://snapcraft.io/juju).

 The properties of your current controller can be listed using `juju show-controller`.

 The cloud information can be retrieved with the following command:

 ```
 juju show-controller | yq '.[].details.cloud'
 ```

-> **IMPORTANT** If the cloud is `k8s`, please refer to the [Charmed Kafka K8s documentation](/t/charmed-kafka-k8s-documentation/10296) instead.
+[note type="caution"]
+If the cloud is `k8s`, please refer to the [Charmed Kafka K8s documentation](/t/charmed-kafka-k8s-documentation/10296) instead.
+[/note]

 You can find more information on how to bootstrap and configure a controller for different
-clouds [here](https://juju.is/docs/juju/manage-controllers#heading--bootstrap-a-controller).
+clouds in the [Juju documentation](https://juju.is/docs/juju/manage-controllers#heading--bootstrap-a-controller).
 Make sure you bootstrap a `machine` Juju controller.

 ## Juju model setup

@@ -51,20 +53,24 @@ can be obtained by

 juju show-model | yq '.[].type'
 ```

-> **IMPORTANT** If the model is `k8s`, please refer to the [Charmed Kafka K8s documentation](https://discourse.charmhub.io/t/charmed-kafka-k8s-documentation/10296) instead.
+[note type="caution"]
+If the model is `k8s`, please refer to the [Charmed Kafka K8s documentation](https://discourse.charmhub.io/t/charmed-kafka-k8s-documentation/10296) instead.
+[/note]

-## Deploy Charmed Kafka and Charmed ZooKeeper
+## Deploy Charmed Apache Kafka and Charmed Apache ZooKeeper

-The Kafka and ZooKeeper charms can both be deployed as follows:
+The Apache Kafka and Apache ZooKeeper charms can both be deployed as follows:

 ```shell
 $ juju deploy kafka --channel 3/stable -n --trust
 $ juju deploy zookeeper --channel 3/stable -n
 ```

-where `` and `` – the number of units to deploy for Kafka and ZooKeeper. We recommend values of at least `3` and `5` respectively.
+where `` and `` are the number of units to deploy for Apache Kafka and Apache ZooKeeper. We recommend values of at least `3` and `5`, respectively.

-> **NOTE** The `--trust` option is needed for the Kafka application if NodePort is used. For more information about the trust options usage, see the [Juju documentation](/t/5476#heading--trust-an-application-with-a-credential).
+[note]
+The `--trust` option is needed for the Apache Kafka application if NodePort is used. For more information about the trust options usage, see the [Juju documentation](/t/5476#heading--trust-an-application-with-a-credential).
+[/note]

 After this, it is necessary to connect them:

@@ -77,14 +83,14 @@ should be ready to be used.

 ## (Optional) Create an external admin user

-Charmed Kafka aims to follow the _secure by default_ paradigm. As a consequence, after being deployed the Kafka cluster
+Charmed Apache Kafka aims to follow the _secure by default_ paradigm. As a consequence, after being deployed the Apache Kafka cluster
 won't expose any external listener.
 In fact, ports are only opened when client applications are related, also depending on the protocols to be used. Please refer to [this table](/t/charmed-kafka-documentation-reference-listeners/13264)
 for more information about the available listeners and protocols.

-It is however generally useful for most of the use-cases to create a first admin user
-to be used to manage the Kafka cluster (either internally or externally).
+However, for most use cases it is generally useful to create a first admin user
+to be used to manage the Apache Kafka cluster (either internally or externally).

 To create an admin user, deploy the [Data Integrator Charm](https://charmhub.io/data-integrator) with
 `extra-user-roles` set to `admin`:

 ```shell
 juju deploy data-integrator --channel stable --config topic-name=test-topic --config extra-user-roles=admin
 ```

-and relate to the Kafka charm
+and relate it to the Apache Kafka charm:

 ```shell
 juju relate data-integrator kafka
 ```

-To retrieve authentication information such as the username, password, etc. use
+To retrieve authentication information such as the username, password, etc., use:

 ```shell
 juju run data-integrator/leader get-credentials
 ```
diff --git a/docs/how-to/h-enable-encryption.md b/docs/how-to/h-enable-encryption.md
index 0fbe6fe6..f404c3a0 100644
--- a/docs/how-to/h-enable-encryption.md
+++ b/docs/how-to/h-enable-encryption.md
@@ -1,11 +1,12 @@
 # How to enable encryption

-## Deploy a TLS Provider charm
+The Apache Kafka and Apache ZooKeeper charms implement the Requirer side of the [`tls-certificates/v1`](https://github.com/canonical/charm-relation-interfaces/blob/main/interfaces/tls_certificates/v1/README.md) charm relation. Therefore, any charm implementing the Provider side could be used.
+To enable encryption, you should first deploy a TLS certificates Provider charm.

-To enable encryption, you should first deploy a TLS certificates Provider charm. The Kafka and ZooKeeper charms implements the Requirer side of the [`tls-certificates/v1`](https://github.com/canonical/charm-relation-interfaces/blob/main/interfaces/tls_certificates/v1/README.md) charm relation.
-Therefore, any charm implementing the Provider side could be used.
+## Deploy a TLS Provider charm

-One possible option, suitable for testing, could be to use the `self-signed-certificates`, although this setup is however not recommended for production clusters.
+One possible option, suitable for testing, could be to use the `self-signed-certificates` charm.
+However, this setup is not recommended for production clusters.

 To deploy a `self-signed-certificates` charm:

 ```shell
 # deploy the TLS charm
 juju deploy self-signed-certificates --channel=edge

 # add the necessary configurations for TLS
 juju config self-signed-certificates ca-common-name="Test CA"
 ```

-Please refer to [this post](https://charmhub.io/topics/security-with-x-509-certificates) for an overview of the TLS certificates Providers charms and some guidance on how to choose the right charm for your use-case.
+Please refer to [this post](https://charmhub.io/topics/security-with-x-509-certificates) for an overview of the TLS certificates Provider charms and some guidance on how to choose the right charm for your use case.

-## Enable TLS on Kafka and ZooKeeper
+## Relate the charms

 ```
 juju relate zookeeper
 juju relate kafka:certificates
 ```

 where `` is the name of the TLS certificate provider charm deployed.

-> **Note** If Kafka and ZooKeeper are already related, they will start renegotiating the relation to provide each other certificates and enable/open to correct ports/connections. Otherwise relate them after the both relations with the `` .
+[note]
+If Apache Kafka and Apache ZooKeeper are already related, they will start renegotiating the relation to provide each other certificates and enable/open the correct ports/connections. Otherwise, relate them after both relations with the `` are established.
+[/note]

 ## Manage keys

 Updates to private keys for certificate signing requests (CSR) can be made via the

@@ -37,7 +40,8 @@
 juju run kafka/ set-tls-private-key
 ```

-Passing keys to external/internal keys should *only be done with* `base64 -w0` *not* `cat`, as follows
+External/internal keys should be passed *only* with `base64 -w0`, *not* `cat`, as follows:
+
 ```shell
 # generate shared internal key
 openssl genrsa -out internal-key.pem 3072

 # apply keys on each unit
 juju run kafka/ set-tls-private-key "internal-key=$(base64 -w0 internal-key.pem)"
 ```

-To disable TLS remove the relation
+To disable TLS, remove the relation:
+
 ```shell
 juju remove-relation kafka
 juju remove-relation zookeeper
 ```
diff --git a/docs/how-to/h-enable-monitoring.md b/docs/how-to/h-enable-monitoring.md
index 5bdde382..97a085a7 100644
--- a/docs/how-to/h-enable-monitoring.md
+++ b/docs/how-to/h-enable-monitoring.md
@@ -1,17 +1,17 @@
 # Enable monitoring

-Both Charmed Kafka and Charmed ZooKeeper come with the [JMX exporter](https://github.com/prometheus/jmx_exporter/).
+Both Charmed Apache Kafka and Charmed Apache ZooKeeper come with the [JMX exporter](https://github.com/prometheus/jmx_exporter/).
 The metrics can be queried by accessing the `http://:9101/metrics` and `http://:9998/metrics` endpoints, respectively.

 Additionally, the charm provides integration with the [Canonical Observability Stack](https://charmhub.io/topics/canonical-observability-stack).

 Deploy the `cos-lite` bundle in a Kubernetes environment. This can be done by following the [deployment tutorial](https://charmhub.io/topics/canonical-observability-stack/tutorials/install-microk8s).
-Since the Charmed Kafka Operator is deployed directly on a cloud infrastructure environment, it is
+Since the Charmed Apache Kafka Operator is deployed directly on a cloud infrastructure environment, it is
 necessary to offer the endpoints of the COS relations.
 The [offers-overlay](https://github.com/canonical/cos-lite-bundle/blob/main/overlays/offers-overlay.yaml) can be used, and this step is shown in the COS tutorial.

-Switch to COS K8s environment and offer COS interfaces to be cross-model related with Charmed Kafka VM model:
+Switch to the COS K8s environment and offer the COS interfaces to be cross-model related with the Charmed Apache Kafka VM model:

 ```shell
 # Switch to Kubernetes controller, for the cos model.
 juju switch :

 juju offer loki:logging loki-logging
 juju offer prometheus:receive-remote-write prometheus-receive-remote-write
 ```

-Switch to Charmed Kafka VM model, find offers and relate with them:
+Switch to the Charmed Apache Kafka VM model, find the offers, and relate with them:

 ```shell
 # We are on the Kubernetes controller, for the cos model. Switch to kafka model
 juju switch :

 juju consume :admin/.loki-logging
 juju consume :admin/.grafana-dashboards
 ```

-Now, deploy `grafana-agent` (subordinate charm) and relate it with Charmed Kafka and Charmed ZooKeeper:
+Now, deploy `grafana-agent` (subordinate charm) and relate it with Charmed Apache Kafka and Charmed Apache ZooKeeper:

 ```shell
 juju deploy grafana-agent
 ```

 juju run grafana/leader get-admin-password --model :

@@ -90,7 +90,7 @@ juju config kafka log_level=

 Possible values are `ERROR`, `WARNING`, `INFO`, `DEBUG`.
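+
+For example, to temporarily capture the DEBUG-level authorizer entries mentioned in the hardening guide while auditing access (one possible setting, not a steady-state production recommendation):
+
+```shell
+juju config kafka log_level=DEBUG
+```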
-### ZooKeeper
+### Apache ZooKeeper

 ```
 juju config zookeeper log-level=
 ```
diff --git a/docs/how-to/h-enable-oauth.md b/docs/how-to/h-enable-oauth.md
index dd3260b9..13c62988 100644
--- a/docs/how-to/h-enable-oauth.md
+++ b/docs/how-to/h-enable-oauth.md
@@ -2,7 +2,7 @@ Versions used for this integration example:
 - LXD (v5.21.1)
 - MicroK8s (v1.28.10)
-- Kafka charm: built from [this feature PR](https://github.com/canonical/kafka-operator/pull/168), which adds Hydra integration
+- Apache Kafka charm: built from [this feature PR](https://github.com/canonical/kafka-operator/pull/168), which adds Hydra integration

 ## Initial deployment

@@ -35,7 +35,7 @@
 $ juju offer admin/iam.hydra:oauth
 $ juju offer admin/iam.self-signed-certificates:certificates
 ```

-Kafka setup:
+Apache Kafka setup:

 ```bash
 # On the lxd controller

@@ -52,7 +52,7 @@
 $ juju integrate kafka:certificates self-signed-certificates
 $ juju integrate zookeeper self-signed-certificates
 ```

-Once everything is settled, integrate Kafka and Hydra:
+Once everything is settled, integrate Apache Kafka and Hydra:

 ```bash
 # On the lxd model

@@ -83,4 +83,4 @@
 $ curl https://10.64.140.44/iam-hydra/oauth2/token -k -u eeec2a88-52bf-46e6-85bf
 {"access_token":"ory_at_b2pcwnwTpCVHPbxoU7L45isbRJhNdBbn91y4Ex0YNrA.easwGEfsTJ7VnNfER2svIMHwen5ZzNXaVZm8i7QdLLg","expires_in":3599,"scope":"profile","token_type":"bearer"}
 ```

-With this token, a client can now authenticate on Kafka using oAuth listeners.
\ No newline at end of file
+With this token, a client can now authenticate on Apache Kafka using OAuth listeners.
\ No newline at end of file
diff --git a/docs/how-to/h-integrate-alerts-dashboards.md b/docs/how-to/h-integrate-alerts-dashboards.md
index 75fd8342..9e8d4d06 100644
--- a/docs/how-to/h-integrate-alerts-dashboards.md
+++ b/docs/how-to/h-integrate-alerts-dashboards.md
@@ -1,16 +1,16 @@
 # Integrate custom alerting rules and dashboards

-This guide shows you how to integrate an existing set of rules and/or dashboards to your Charmed Kafka and Charmed ZooKeeper deployment to be consumed with the [Canonical Observability Stack (COS)](https://charmhub.io/topics/canonical-observability-stack).
+This guide shows you how to integrate an existing set of rules and/or dashboards to your Charmed Apache Kafka and Charmed Apache ZooKeeper deployment to be consumed with the [Canonical Observability Stack (COS)](https://charmhub.io/topics/canonical-observability-stack).
 To do so, we will sync resources stored in a git repository to COS Lite.

 ## Prerequisites

-Deploy the `cos-lite` bundle in a Kubernetes environment and integrate Charmed Kafka and Charmed ZooKeeper to the COS offers, as shown in the [How to Enable Monitoring](/t/charmed-kafka-documentation-how-to-enable-monitoring/10283) guide.
+Deploy the `cos-lite` bundle in a Kubernetes environment and integrate Charmed Apache Kafka and Charmed Apache ZooKeeper to the COS offers, as shown in the [How to Enable Monitoring](/t/charmed-kafka-documentation-how-to-enable-monitoring/10283) guide.

 This guide will refer to the models that charms are deployed into as:

 * `` for the model containing observability charms (and deployed on K8s)
-* `` for the model containing Charmed Kafka and Charmed ZooKeeper
+* `` for the model containing Charmed Apache Kafka and Charmed Apache ZooKeeper
 * `` for other optional charms (e.g. TLS-certificates operators, `grafana-agent`, `data-integrator`, etc.).
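+
+As a quick sanity check before proceeding, confirm that the models referenced above are reachable from your Juju client (the model names are placeholders):
+
+```shell
+juju models
+juju status -m <cos-model>
+```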
@@ -37,10 +37,9 @@ The COS configuration charm keeps the monitoring stack in sync with our reposito
 Refer to the [documentation](https://charmhub.io/cos-configuration-k8s/configure) for all configuration options, including how to access a private repository.

 Adding, updating or deleting an alert rule or a dashboard in the repository will be reflected in the monitoring stack.
-[Note]
+[note]
 You need to manually refresh `cos-config`'s local repository with the *sync-now* action if you do not want to wait for the next [update-status event](/t/event-update-status/6484) to pull the latest changes.
-[/Note]
-
+[/note]

 ## Forward the rules and dashboards

@@ -66,4 +65,4 @@ As for the dashboards, they should be available in the Grafana interface.

 ## Conclusion

-In this guide, we enabled monitoring on a Kafka deployment and integrated alert rules and dashboards by syncing a git repository to the COS stack.
\ No newline at end of file
+In this guide, we enabled monitoring on a Charmed Apache Kafka deployment and integrated alert rules and dashboards by syncing a git repository to the COS stack.
\ No newline at end of file
diff --git a/docs/how-to/h-manage-app.md b/docs/how-to/h-manage-app.md
index e67b335c..2391b834 100644
--- a/docs/how-to/h-manage-app.md
+++ b/docs/how-to/h-manage-app.md
@@ -102,7 +102,7 @@ juju remove-application data-integrator

 ## Internal password rotation

-The operator user is used internally by the Charmed Kafka Operator, the `set-password` action can be used to rotate its password.
+The operator user is used internally by the Charmed Apache Kafka Operator; the `set-password` action can be used to rotate its password.

 ```shell
 # to set a specific password for the operator user
diff --git a/docs/how-to/h-manage-units.md b/docs/how-to/h-manage-units.md
index d2dd7f66..9f3a7104 100644
--- a/docs/how-to/h-manage-units.md
+++ b/docs/how-to/h-manage-units.md
@@ -4,8 +4,8 @@
 Unit management guide for scaling and running admin utility scripts.

 ## Replication and Scaling

-Increasing the number of Kafka brokers can be achieved by adding more units
-to the Charmed Kafka application, for example:
+Increasing the number of Apache Kafka brokers can be achieved by adding more units
+to the Charmed Apache Kafka application, for example:

 ```shell
 juju add-unit kafka -n 
 ```
@@ -13,76 +13,80 @@ juju add-unit kafka -n 

 For more information on how to manage units, please refer to the [Juju documentation](https://juju.is/docs/juju/manage-units)

-It is important to note that when adding more units, the Kafka cluster will not
+It is important to note that when adding more units, the Apache Kafka cluster will not
 *automatically* rebalance existing topics and partitions. New storage and new brokers
 will be used only when new topics and new partitions are created.

 Partition reassignment can still be done manually by the admin user by using the
-`charmed-kafka.reassign-partitions` Kafka bin utility script. Please refer to
+`charmed-kafka.reassign-partitions` Apache Kafka bin utility script. Please refer to
 its documentation for more information.

-> **IMPORTANT** Scaling down is currently not supported in the charm automation.
-> If partition reassignment is not manually performed before scaling down in order
-> to make sure the decommissioned units do not hold any data, **your cluster may
-> suffer to data loss**.
+[note type="caution"]
+Scaling down is currently not supported in the charm automation.
+If partition reassignment is not manually performed before scaling down, in order
+to make sure the decommissioned units do not hold any data, **your cluster may
+suffer data loss**.
+[/note]

-## Running Kafka admin utility scripts
+## Running Apache Kafka admin utility scripts

 Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks such as:

 * `bin/kafka-config.sh` to update cluster configuration
 * `bin/kafka-topics.sh` for topic management
-* `bin/kafka-acls.sh` for management of ACLs of Kafka users
+* `bin/kafka-acls.sh` for management of ACLs of Apache Kafka users

-Please refer to the upstream [Kafka project](https://github.com/apache/kafka/tree/trunk/bin),
-for a full list of the bash commands available in Kafka distributions. Also, you can
+Please refer to the upstream [Apache Kafka project](https://github.com/apache/kafka/tree/trunk/bin),
+for a full list of the bash commands available in Apache Kafka distributions. Also, you can
 use `--help` argument for printing a short summary of the argument for a given bash command.

-The most important commands are also exposed via the [Charmed Kafka snap](https://snapcraft.io/charmed-kafka),
+The most important commands are also exposed via the [Charmed Apache Kafka snap](https://snapcraft.io/charmed-kafka),
 accessible via `charmed-kafka.`.

 Please refer to [this table](/t/charmed-kafka-documentation-reference-snap-entrypoints/13263) for
-more information about the mapping between the Kafka bin commands and the snap entrypoints.
+more information about the mapping between the Apache Kafka bin commands and the snap entrypoints.

-> **IMPORTANT** Before running bash scripts, make sure that some listeners have been correctly
-> opened by creating appropriate integrations. Please refer to [this table](/t/charmed-kafka-documentation-reference-listeners/13264) for more
-> information about how listeners are opened based on relations. To simply open a
-> SASL/SCRAM listener, just integrate a client application using the data-integrator,
-> as described [here](/t/charmed-kafka-how-to-manage-app/10285).
+[note type="caution"]
+Before running bash scripts, make sure that some listeners have been correctly
+opened by creating appropriate integrations. Please refer to [this table](/t/charmed-kafka-documentation-reference-listeners/13264) for more
+information about how listeners are opened based on relations. To simply open a
+SASL/SCRAM listener, just integrate a client application using the `data-integrator`,
+as described in the [How to manage app guide](/t/charmed-kafka-how-to-manage-app/10285).
+[/note]

 To run most of the scripts, you need to provide:

-1. the Kafka service endpoints, generally referred to as *bootstrap servers*
+1. the Apache Kafka service endpoints, generally referred to as *bootstrap servers*
 2. authentication information

-### Juju admins of the Kafka deployment
+### Juju admins of the Apache Kafka deployment

-For Juju admins of the Kafka deployment, the bootstrap servers information can
-be obtained using
+For Juju admins of the Apache Kafka deployment, the bootstrap servers information can
+be obtained using:

 ```
 BOOTSTRAP_SERVERS=$(juju run kafka/leader get-admin-credentials | grep "bootstrap.servers" | cut -d "=" -f 2)
 ```

 Admin client authentication information is stored in the
-`/var/snap/charmed-kafka/common/etc/kafka/client.properties` file present on every Kafka
-broker. The content of the file can be accessed using
+`/var/snap/charmed-kafka/common/etc/kafka/client.properties` file, present on every Apache Kafka
+broker. The content of the file can be accessed using:

 ```
 juju ssh kafka/leader `cat /etc/kafka/client.properties`
 ```

-This file can be provided to the Kafka bin commands via the `--command-config`
+This file can be provided to the Apache Kafka bin commands via the `--command-config`
 argument. Note that `client.properties` may also refer to other files (
 e.g. truststore and keystore for TLS-enabled connections). Those files also need to be
 accessible and correctly specified.

-Commands can also be run within a Kafka broker, since both the authentication
-file (along with the truststore if needed) and the Charmed Kafka snap are
+Commands can also be run within an Apache Kafka broker, since both the authentication
+file (along with the truststore if needed) and the Charmed Apache Kafka snap are
 already present.

 #### Example (listing topics)

-For instance, in order to list the current topics on the Kafka cluster, you can run:
+For instance, to list the current topics on the Apache Kafka cluster, you can run:

 ```
 juju ssh kafka/leader 'charmed-kafka.topics --bootstrap-server $BOOTSTRAP_SERVERS --list --command-config /var/snap/charmed-kafka/common/etc/kafka/client.properties'
diff --git a/docs/how-to/h-upgrade.md b/docs/how-to/h-upgrade.md
index 3f73e48c..9266898f 100644
--- a/docs/how-to/h-upgrade.md
+++ b/docs/how-to/h-upgrade.md
@@ -1,19 +1,23 @@
 # How to upgrade between minor versions

-> **Note** This feature is available on Charmed Kafka and Charmed ZooKeeper from revisions 134 and 103, respectively. Upgrade from previous versions is **not supported**, although possible (see e.g. [here](https://github.com/deusebio/kafka-pre-upgrade-patch) for a custom example).
+[note]
+This feature is available on Charmed Apache Kafka and Charmed Apache ZooKeeper from revisions 134 and 103, respectively. Upgrade from previous versions is **not supported**, although possible (see [example](https://github.com/deusebio/kafka-pre-upgrade-patch)).
+[/note]

-Charm upgrades can include both upgrades of operator code (e.g. the revision used by the charm) and/or the workload version. Note that since the charm code pins a particular version of the workload, a charm upgrade may or may not involve also a workload version upgrade. In general, the following guide only applies for in-place upgrades that involve (at most) minor version upgrade of Kafka workload, e.g. between Kafka 3.4.x to 3.5.x. Major workload upgrades are generally **NOT SUPPORTED**, and they should be carried out using full cluster-to-cluster migrations. Please refer to the how-to guide about cluster migration [how-to guide about cluster migration](/t/charmed-kafka-how-to-cluster-migration/10951) for more information on how this can be achieved.
+Charm upgrades can include both upgrades of operator code (e.g. the revision used by the charm) and/or the workload version. Note that since the charm code pins a particular version of the workload, a charm upgrade may or may not involve also a workload version upgrade.
+
+In general, the following guide only applies to in-place upgrades that involve (at most) a minor version upgrade of the Apache Kafka workload, e.g. from Apache Kafka 3.4.x to 3.5.x. Major workload upgrades are generally **NOT SUPPORTED**, and they should be carried out using [full cluster-to-cluster migrations](/t/charmed-kafka-how-to-cluster-migration/10951).
+
+While upgrading an Apache Kafka cluster, do not perform any other major operations, including, but not limited to, the following:

 1. Adding or removing units
 2. Creating or destroying new relations
 3. Changes in workload configuration
-4. Upgrading other connected applications (e.g. ZooKeeper)
+4. Upgrading other connected applications (e.g. Apache ZooKeeper)

 The concurrency with other operations is not supported, and it can lead the cluster into inconsistent states.

-## Minor upgrade process overview
+## Minor upgrade process

 When performing an in-place upgrade process, the full process is composed by the following high-level steps:

@@ -22,9 +26,9 @@ When performing an in-place upgrade process, the full process is composed by the
 3. **Upgrade** the charm and/or the workload. Once started, all units in a cluster will refresh the charm code and undergo a workload restart/update. The upgrade will be aborted if the unit upgrade has failed, requiring the admin user to rollback.
 4. **Post-upgrade checks** to make sure all units are in the proper state and the cluster is healthy.

-## Step 1: Collect
+### Step 1: Collect

-The first step is to record the revisions of the running application, as a safety measure for a rollback action if needed. To accomplish this, simply run the `juju status` command and look for the revisions of the deployed Kafka and ZooKeeper applications. You can also retrieve this with the following command (that requires [`yq`](https://snapcraft.io/install/yq/ubuntu) to be installed):
+The first step is to record the revisions of the running application, as a safety measure for a rollback action if needed. To accomplish this, simply run the `juju status` command and look for the revisions of the deployed Apache Kafka and Apache ZooKeeper applications. You can also retrieve this with the following command (that requires [yq](https://snapcraft.io/install/yq/ubuntu) to be installed):

 ```shell
 KAFKA_CHARM_REVISION=$(juju status --format json | yq .applications..charm-rev)
@@ -33,9 +37,10 @@ ZOOKEEPER_CHARM_REVISION=$(juju status --format json | yq .applications.` and `}` placeholder appropriately, e.g. `kafka` and `zookeeper`.

-## Step 2: Prepare
+### Step 2: Prepare

 Before upgrading, the charm needs to perform some preparatory tasks to define the upgrade plan.
+
 To do so, run the `pre-upgrade-check` action against the leader unit:

 ```shell
@@ -44,34 +49,38 @@ juju run kafka/leader pre-upgrade-check

 Make sure that the output of the action is successful.

-> **Note**: This action must be run before Charmed Kafka upgrades.
+[note]
+This action must be run before Charmed Apache Kafka upgrades.
+[/note]

 The action will also configure the charm to minimize high-availability reduction and ensure a safe upgrade process. After successful execution, the charm is ready to be upgraded.

-## Step 3: Upgrade
+### Step 3: Upgrade

 Use the [`juju refresh`](https://juju.is/docs/juju/juju-refresh) command to trigger the charm upgrade process. 
Note that the upgrade can be performed against: -* selected channel/track, therefore upgrading to the latest revision published on that track +* selected channel/track, therefore upgrading to the latest revision published on that track: ```shell juju refresh kafka --channel 3/edge ``` -* selected revision +* selected revision: ```shell juju refresh kafka --revision= ``` -* a local charm file +* a local charm file: ```shell juju refresh kafka --path ./kafka_ubuntu-22.04-amd64.charm ``` -When issuing the commands, all units will refresh (i.e. receive new charm content), and the upgrade charm event will be fired. The charm will take care of executing an update (if required) and a restart of the workload one unit at a time to not lose high-availability. +When issuing the commands, all units will refresh (i.e. receive new charm content), and the upgrade charm event will be fired. The charm will take care of executing an update (if required) and a restart of the workload one unit at a time to not lose high availability. -> **Note** On Juju<3.4.4, the refresh operation may transitively fail because of [this issue](https://bugs.launchpad.net/juju/+bug/2053242) on Juju. The failure will resolve itself and the upgrade process will resume normally in few minutes (as soon as the new charm has been downloaded and the upgrade events are appropriately emitted. +[note] +On Juju<3.4.4, the refresh operation may transitively fail because of [this issue](https://bugs.launchpad.net/juju/+bug/2053242) on Juju. The failure will resolve itself and the upgrade process will resume normally in a few minutes (as soon as the new charm has been downloaded and the upgrade events are appropriately emitted). +[/note] The upgrade process can be monitored using `juju status` command, where the message of the units will provide information about which units have been upgraded already, which unit is currently upgrading and which units are waiting for the upgrade to be triggered, as shown below: @@ -90,17 +99,17 @@ kafka/2 active idle 5 10.193.41.221 Upgrade completed ``` -### Failing upgrade +#### Failing upgrade Before upgrading the unit, the charm will check whether the upgrade can be performed, e.g. this may mean: -1. Checking that the upgrade from the previous charm revision and Kafka version is possible. -2. Checking that other external applications that Kafka depends on (e.g. ZooKeeper) are running the correct version. +1. Checking that the upgrade from the previous charm revision and Apache Kafka version is possible. +2. Checking that other external applications that Apache Kafka depends on (e.g. Apache ZooKeeper) are running the correct version. Note that these checks are only possible after a refresh of the charm code, and therefore cannot be done upfront (e.g. during the `pre-upgrade-checks` action). If some of these checks fail, the upgrade will be aborted. When this happens, the workload may still be operating (as only the operator may have failed) but we recommend to rollback the upgrade as soon as possible. -To roll back the upgrade, re-run steps 2 and 3, using the revision taken in step 1, i.e. +To roll back the upgrade, re-run steps 2 and 3, using the revision taken in step 1: ```shell juju run kafka/leader pre-upgrade-check @@ -110,9 +119,9 @@ juju refresh kafka --revision=${KAFKA_CHARM_REVISION} We strongly recommend to also retrieve the full set of logs with `juju debug-log`, to extract insights on why the upgrade failed. 
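+
+After the rollback has completed, it is also worth double-checking that the cluster is back on the expected revision, for example by comparing it with the value recorded during the collect step (a minimal check, assuming the application is named `kafka` as in the examples above):
+
+```shell
+# should print the revision recorded in KAFKA_CHARM_REVISION
+juju status --format json | yq .applications.kafka.charm-rev
+```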
-## ZooKeeper upgrade
+## Apache ZooKeeper upgrade

-Although the previous steps focused on upgrading Kafka, the same process can also be applied to ZooKeeper. However, for revisions prior to XXX, a patch needs to be applied before running the aforementioned process. The ZooKeeper process, as part of its operations, overwrites the `zoo.cfg` pinning the snap revision for the `dynamicConfigFile`. This may create problems in the upgrade if `snapd` removes the previous revision once the snap is refreshed. To prevent this, it is sufficient to replace the `` with `current`.
+Although the previous steps focused on upgrading Apache Kafka, the same process can also be applied to Apache ZooKeeper. However, for revisions prior to XXX, a patch needs to be applied before running the aforementioned process. The Apache ZooKeeper process, as part of its operations, overwrites the `zoo.cfg` pinning the snap revision for the `dynamicConfigFile`. This may create problems in the upgrade if `snapd` removes the previous revision once the snap is refreshed. To prevent this, it is sufficient to replace the `` with `current`.

 To do so, on each unit, first apply the patch:

@@ -130,6 +139,6 @@ Check that the server has started correctly, and then apply the patch to the nex

 Once all the units have been patched, proceed with the upgrade process, as outlined above.

-## Kafka and ZooKeeper combined upgrades
+## Apache Kafka and Apache ZooKeeper combined upgrades

-If Kafka and ZooKeeper charms need both to be upgraded, we recommend you to start the upgrade from the ZooKeeper cluster. As outlined above, the two upgrades should **NEVER** be done concurrently.
\ No newline at end of file
+If both the Apache Kafka and Apache ZooKeeper charms need to be upgraded, we recommend starting the upgrade from the Apache ZooKeeper cluster. As outlined above, the two upgrades should **NEVER** be done concurrently.
\ No newline at end of file
diff --git a/docs/index.md b/docs/index.md
index 67f56825..f26728e8 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,20 +1,20 @@
-# Charmed Kafka documentation
+# Charmed Apache Kafka documentation

-Charmed Kafka is an open-source operator that makes it easier to manage Apache Kafka, with built-in support for enterprise features.
+Charmed Apache Kafka is an open-source operator that makes it easier to manage Apache Kafka, with built-in support for enterprise features.

-Apache Kafka is a free, open source software project by the Apache Software Foundation. Users can find out more at the [Kafka project page](https://kafka.apache.org).
+Apache Kafka is a free, open-source software project by the Apache Software Foundation. Users can find out more at the [Apache Kafka project page](https://kafka.apache.org).

-Charmed Kafka is built on top of [Juju](https://juju.is/) and reliably simplifies the deployment, scaling, design, and management of [Apache Kafka](https://kafka.apache.org/) in production. Additionally, you can use the charm to manage your Kafka clusters with automation capabilities. It also offers replication, TLS, password rotation, easy-to-use application integration, and monitoring.
-Charmed Kafka operates Apache Kafka on physical systems, Virtual Machines (VM), and a wide range of cloud and cloud-like environments, including AWS, Azure, OpenStack, and VMware.
+Charmed Apache Kafka is built on top of [Juju](https://juju.is/) and reliably simplifies the deployment, scaling, design, and management of [Apache Kafka](https://kafka.apache.org/) in production. 
Additionally, you can use the charm to manage your Apache Kafka clusters with automation capabilities. It also offers replication, TLS, password rotation, easy-to-use application integration, and monitoring. +Charmed Apache Kafka operates Apache Kafka on physical systems, Virtual Machines (VM), and a wide range of cloud and cloud-like environments, including AWS, Azure, OpenStack, and VMware. -Charmed Kafka is a solution designed and developed to help ops teams and +Charmed Apache Kafka is a solution designed and developed to help ops teams and administrators automate Apache Kafka operations from [Day 0 to Day 2](https://codilime.com/blog/day-0-day-1-day-2-the-software-lifecycle-in-the-cloud-age/), across multiple cloud environments and platforms. [note] -Canonical has also developed the [Charmed Kafka K8s operator](/t/charmed-kafka-k8s-documentation/10296) to support Kafka in Kubernetes environments. +Canonical has also developed the [Charmed Apache Kafka K8s operator](/t/charmed-kafka-k8s-documentation/10296) to support Apache Kafka in Kubernetes environments. [/note] -Charmed Kafka is developed and supported by [Canonical](https://canonical.com/), as part of its commitment to +Charmed Apache Kafka is developed and supported by [Canonical](https://canonical.com/), as part of its commitment to provide open-source, self-driving solutions, seamlessly integrated using the Operator Framework Juju. Please refer to [Charmhub](https://charmhub.io/), for more charmed operators that can be integrated by [Juju](https://juju.is/). @@ -22,30 +22,31 @@ refer to [Charmhub](https://charmhub.io/), for more charmed operators that can b | | | |--|--| -| [Tutorials](/t/charmed-kafka-tutorial-overview/10571)
Get started - a hands-on introduction to using Charmed Kafka operator for new users
| [How-to guides](/t/charmed-kafka-how-to-manage-units/10287)
Step-by-step guides covering key operations and common tasks | +| [Tutorials](/t/charmed-kafka-tutorial-overview/10571)
Get started - a hands-on introduction to using Charmed Apache Kafka operator for new users
| [How-to guides](/t/charmed-kafka-how-to-manage-units/10287)
Step-by-step guides covering key operations and common tasks | | [Reference](https://charmhub.io/kafka/actions?channel=3/stable)
Technical information - specifications, APIs, architecture | [Explanation]()
Concepts - discussion and clarification of key topics |

 ## Project and community

-Charmed Kafka is a distribution of Apache Kafka. It’s an open-source project that welcomes community contributions, suggestions, fixes and constructive feedback.
+Charmed Apache Kafka is a distribution of Apache Kafka. It’s an open-source project that welcomes community contributions, suggestions, fixes and constructive feedback.
+
 - [Read our Code of Conduct](https://ubuntu.com/community/code-of-conduct)
 - [Join the Discourse forum](/tag/kafka)
 - [Contribute](https://github.com/canonical/kafka-operator/blob/main/CONTRIBUTING.md) and report [issues](https://github.com/canonical/kafka-operator/issues/new)
 - Explore [Canonical Data Fabric solutions](https://canonical.com/data)
 - [Contact us](/t/13107) for all further questions

-Apache®, Apache Kafka, Kafka®, and the Kafka logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
+Apache®, Apache Kafka, Kafka®, and the Apache Kafka logo are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.

 ## License

-The Charmed Kafka Operator is free software, distributed under the Apache Software License, version 2.0. See [LICENSE](https://github.com/canonical/kafka-operator/blob/main/LICENSE) for more information.
+The Charmed Apache Kafka Operator is free software, distributed under the Apache Software License, version 2.0. See [LICENSE](https://github.com/canonical/kafka-operator/blob/main/LICENSE) for more information.

 # Contents

 1. [Tutorial](tutorial)
   1. [1. Introduction](tutorial/t-overview.md)
   1. [2. Set up the environment](tutorial/t-setup-environment.md)
-  1. [3. Deploy Kafka](tutorial/t-deploy.md)
+  1. [3. Deploy Apache Kafka](tutorial/t-deploy.md)
   1. [4. Integrate with client applications](tutorial/t-relate-kafka.md)
   1. [5. Manage passwords](tutorial/t-manage-passwords.md)
   1. [6. Enable Encryption](tutorial/t-enable-encryption.md)
@@ -71,7 +72,7 @@ The Charmed Kafka Operator is free software, distributed under the Apache Softwa
   1. [Revision 156/136](reference/r-releases/r-rev156_136.md)
   1. [File System Paths](reference/r-file-system-paths.md)
   1. [Snap Entrypoints](reference/r-snap-entrypoints.md)
-  1. [Kafka Listeners](reference/r-listeners.md)
+  1. [Apache Kafka Listeners](reference/r-listeners.md)
   1. [Statuses](reference/r-statuses.md)
   1. [Requirements](reference/r-requirements.md)
   1. 
[Performance Tuning](reference/r-performance-tuning.md)
diff --git a/docs/reference/r-contacts.md b/docs/reference/r-contacts.md
index bae9de0e..02b26ee1 100644
--- a/docs/reference/r-contacts.md
+++ b/docs/reference/r-contacts.md
@@ -10,8 +10,8 @@ Security issues should be reported through [Launchpad](https://wiki.ubuntu.com/D
 # Useful links

 * [Canonical Data Fabric](https://ubuntu.com/data/)
-* [Charmed Kafka](https://charmhub.io/kafka)
-* [Git sources for Charmed Kafka](https://github.com/canonical/kafka-operator)
+* [Charmed Apache Kafka](https://charmhub.io/kafka)
+* [Git sources for Charmed Apache Kafka](https://github.com/canonical/kafka-operator)
 * [Canonical Data on Launchpad](https://launchpad.net/~data-platform)
 * [Canonical Data on Matrix](https://matrix.to/#/#charmhub-data-platform:ubuntu.com)
 * [Mailing list on Launchpad](https://lists.launchpad.net/data-platform/)
\ No newline at end of file
diff --git a/docs/reference/r-file-system-paths.md b/docs/reference/r-file-system-paths.md
index e0bd1827..d7d997d6 100644
--- a/docs/reference/r-file-system-paths.md
+++ b/docs/reference/r-file-system-paths.md
@@ -1,30 +1,30 @@
 # File system path

-In the following table, we summarize some of the most relevant file paths used in the Kafka and ZooKeeper charms.
+In the following table, we summarize some of the most relevant file paths used in the Apache Kafka and Apache ZooKeeper charms.

-## Kafka
+## Apache Kafka

 | Path | Description | Permission |
 |------------------------------------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------|
-| `/snap/charmed-kafka/current/opt/kafka` | Binary files for the Charmed Kafka distribution. Note that this is a read-only Squashfs file system. | (read-only) |
-| `/snap/charmed-kafka/current/opt/kafka/bin/*.sh` | General bash scripts to provide helpers and utilities for managing and interacting with Kafka. | (read-only) |
-| `/var/snap/charmed-kafka/current/etc/kafka/` | Configuration files used by Kafka daemon process. These files are generally written and managed by the charm. | (owned by `snap_daemon`, managed by `charm`) |
-| `/var/snap/charmed-kafka/common/var/log/kafka/` | Application Logging files generated by the Kafka daemon process. These files are written by the workload, but they may be read by other components to provide monitoring (for example, Grafana or other charms). | (owned and managed by `snap_daemon`) |
-| `/var/snap/charmed-kafka/common/var/lib/kafka/` | Raw data stored persistently by Kafka during its operations. The files are written and managed by Kafka only. | (owned and managed by `snap_daemon`) |
+| `/snap/charmed-kafka/current/opt/kafka` | Binary files for the Charmed Apache Kafka distribution. Note that this is a read-only Squashfs file system. | (read-only) |
+| `/snap/charmed-kafka/current/opt/kafka/bin/*.sh` | General bash scripts to provide helpers and utilities for managing and interacting with Apache Kafka. | (read-only) |
+| `/var/snap/charmed-kafka/current/etc/kafka/` | Configuration files used by the Apache Kafka daemon process. These files are generally written and managed by the charm. | (owned by `snap_daemon`, managed by `charm`) |
+| `/var/snap/charmed-kafka/common/var/log/kafka/` | Application logging files generated by the Apache Kafka daemon process. These files are written by the workload, but they may be read by other components to provide monitoring (for example, Grafana or other charms). | (owned and managed by `snap_daemon`) |
+| `/var/snap/charmed-kafka/common/var/lib/kafka/` | Raw data stored persistently by Apache Kafka during its operations. The files are written and managed by Apache Kafka only. | (owned and managed by `snap_daemon`) |

 External storage is used for storing persistent raw data that is mounted at `/var/snap/charmed-kafka/common/var/lib/kafka/`, with `` being a progressive number. Multiple storage volumes can be used to provide both horizontal scalability and IO parallelisation to enhance throughput.

-## ZooKeeper
+## Apache ZooKeeper

 | Path | Description | Permission |
 |--------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------|
-| `/snap/charmed-zookeeper/current/opt/zookeeper` | Binary files for the Charmed ZooKeeper distribution. Note that this is a readonly squashfs file system. | (read-only) |
-| `/snap/charmed-zookeeper/current/opt/zookeeper/bin/*.sh` | General bash scripts to provide helpers and utilities for managing and interacting with ZooKeeper. | (read-only) |
-| `/var/snap/charmed-zookeeper/current/etc/zookeeper/ ` | Configuration files used by ZooKeeper daemon process. These files are generally written and managed by the charm. | (owned by `snap_daemon`, managed by `charm`) |
-| `/var/snap/charmed-zookeeper/common/var/log/zookeeper/ ` | Application Logging files generated by the ZooKeeper daemon process. These files are written by the workload, but they may be read by other components to provide monitoring (for example, Grafana or other charms). | (owned and managed by `snap_daemon`) |
-| `/var/snap/charmed-zookeeper/common/var/lib/zookeeper/` | Raw data stored persistently by ZooKeeper during its operations. The files are written and managed by ZooKeeper only. | (owned and managed by `snap_daemon`) |
+| `/snap/charmed-zookeeper/current/opt/zookeeper` | Binary files for the Charmed Apache ZooKeeper distribution. Note that this is a read-only Squashfs file system. | (read-only) |
+| `/snap/charmed-zookeeper/current/opt/zookeeper/bin/*.sh` | General bash scripts to provide helpers and utilities for managing and interacting with Apache ZooKeeper. | (read-only) |
+| `/var/snap/charmed-zookeeper/current/etc/zookeeper/` | Configuration files used by the Apache ZooKeeper daemon process. These files are generally written and managed by the charm. | (owned by `snap_daemon`, managed by `charm`) |
+| `/var/snap/charmed-zookeeper/common/var/log/zookeeper/` | Application logging files generated by the Apache ZooKeeper daemon process. These files are written by the workload, but they may be read by other components to provide monitoring (for example, Grafana or other charms). | (owned and managed by `snap_daemon`) |
+| `/var/snap/charmed-zookeeper/common/var/lib/zookeeper/` | Raw data stored persistently by Apache ZooKeeper during its operations. The files are written and managed by Apache ZooKeeper only. 
| (owned and managed by `snap_daemon`) | External storage is used for storing persistent raw data, and it is diff --git a/docs/reference/r-listeners.md b/docs/reference/r-listeners.md index 28cdb274..5abfcd67 100644 --- a/docs/reference/r-listeners.md +++ b/docs/reference/r-listeners.md @@ -1,16 +1,16 @@ -# Kafka listeners +# Apache Kafka listeners -Charmed Kafka comes with a set of listeners that can be enabled for +Charmed Apache Kafka comes with a set of listeners that can be enabled for inter- and intra-cluster communication. *Internal listeners* are used for internal traffic and exchange of information -between Kafka brokers, whereas *external listeners* are used for external clients -to be optionally enabled based the relations created on particular +between Apache Kafka brokers, whereas *external listeners* are used for external clients +to be optionally enabled based on the relations created on particular charm endpoints. Each listener is characterized by a specific port, scope and protocol. -In the following table we summarize the protocols, the port and -the relation that each listener is bound to. Nota that based on whether a `certificates` -relation is present, one of two mutually exclusive type of listener can be +In the following table, we summarize the protocols, the port and +the relation that each listener is bound to. Note that based on whether a `certificates` +relation is present, one of two mutually exclusive types of listeners can be opened. | Listener name | Driving endpoints | Protocol | Port | Scope | @@ -22,4 +22,6 @@ opened. | SSL_EXTERNAL | `trusted-certificate` + `certificates` | SSL | `9094` | external | | SSL_EXTERNAL | `trusted-ca` + `certificates` | SSL | `9094` | external | -> **Note** Since `cluster` is a peer-relation, the `SASL_INTERNAL` listener is always enabled. \ No newline at end of file +[note] +Since `cluster` is a peer-relation, the `SASL_INTERNAL` listener is always enabled. +[/note] \ No newline at end of file diff --git a/docs/reference/r-performance-tuning.md b/docs/reference/r-performance-tuning.md index 92526ce1..dfb40cc5 100644 --- a/docs/reference/r-performance-tuning.md +++ b/docs/reference/r-performance-tuning.md @@ -1,14 +1,14 @@ # Performance tuning -This section contains some suggested values to get a better performance from Charmed Kafka. +This section contains some suggested values to get a better performance from Charmed Apache Kafka. ## Virtual memory handling (recommended) -Kafka brokers make heavy use of the OS page cache to maintain performance. They never normally explicitly issue a command to ensure messages have been persisted to disk (`sync`), relying instead on the underlying OS to ensure that larger chunks (pages) of data are persisted from the page cache to the disk when the OS deems it efficient and/or necessary to do so. As such, there is a range of runtime kernel parameter tuning that is recommended to set on machines running Kafka to improve performance. +Apache Kafka brokers make heavy use of the OS page cache to maintain performance. They never normally explicitly issue a command to ensure messages have been persisted to disk (`sync`), relying instead on the underlying OS to ensure that larger chunks (pages) of data are persisted from the page cache to the disk when the OS deems it efficient and/or necessary to do so. As such, there is a range of runtime kernel parameter tuning that is recommended to be set on machines running Apache Kafka to improve performance. 
To configure these settings, one can append them to `/etc/sysctl.conf`, e.g. with `echo $SETTING | sudo tee -a /etc/sysctl.conf` (a plain `sudo echo $SETTING >> /etc/sysctl.conf` would fail, since the output redirection is not performed with elevated privileges). Note that the settings shown below are simply sensible defaults that may not apply to every workload:

 ```bash
-# ensures low likelihood of memory being assigned to swap-space rather than drop pages from the page cache
+# ensures a low likelihood of memory being assigned to swap space rather than dropping pages from the page cache
 vm.swappiness=1

 # higher ratio results in less frequent disk flushes and better disk I/O performance
@@ -18,7 +18,7 @@ vm.dirty_background_ratio=5

 ## Memory maps (recommended)

-Each Kafka log segment requires an `index` file and a `timeindex` file, both requiring one map area. The default OS maximum number of memory map areas a process can have is set by `vm.max_map_count=65536`. For production deployments with a large number of partitions and log-segments, it is likely to exceed the maximum OS limit.
+Each Apache Kafka log segment requires an `index` file and a `timeindex` file, both requiring one map area. The default OS maximum number of memory map areas a process can have is set by `vm.max_map_count=65536`. For production deployments with a large number of partitions and log-segments, it is likely to exceed the maximum OS limit.

 It is recommended to set the mmap number sufficiently higher than the number of memory mapped files. This can also be written to `/etc/sysctl.conf`:

@@ -28,7 +28,7 @@ vm.max_map_count=

 ## File descriptors (recommended)

-Kafka uses file descriptors for log segments and open connections. If a broker hosts many partitions, keep in mind that the broker requires **at least** `(number_of_partitions)*(partition_size/segment_size)` file descriptors to track all the log segments and number of connections.
+Apache Kafka uses file descriptors for log segments and open connections. If a broker hosts many partitions, keep in mind that the broker requires **at least** `(number_of_partitions)*(partition_size/segment_size)` file descriptors to track all the log segments and number of connections.

 To configure those limits, update the values and add the following to `/etc/security/limits.d/root.conf`:

diff --git a/docs/reference/r-releases/r-rev156_126.md b/docs/reference/r-releases/r-rev156_126.md
index a3a51849..99754861 100644
--- a/docs/reference/r-releases/r-rev156_126.md
+++ b/docs/reference/r-releases/r-rev156_126.md
@@ -3,9 +3,9 @@

 Dear community,

-We are extremely thrilled and excited to share that Charmed Kafka and Charmed ZooKeeper have now been released as GA. You can find them in [charmhub.io](https://charmhub.io/) under the `3/stable` track.
+We are extremely thrilled and excited to share that Charmed Apache Kafka and Charmed Apache ZooKeeper have now been released as GA. You can find them in [charmhub.io](https://charmhub.io/) under the `3/stable` track.

-More information are available in the [Canonical website](https://canonical.com/data/kafka), alongside its [documentation](https://canonical.com/data/docs/kafka/iaas).
+More information is available on the [Canonical website](https://canonical.com/data/kafka), alongside its [documentation](https://canonical.com/data/docs/kafka/iaas).

 Also find the full announcement of the release [here](https://canonical.com/blog/charmed-kafka-general-availability) and [here](https://discourse.charmhub.io/t/announcing-general-availability-of-charmed-kafka/13277). 
And more importantly, make sure you don't miss out on the [webinar](https://www.linkedin.com/events/7161727829259366401/about/) that Raúl Zamora and Rob Gibbon will be holding later today.

@@ -14,7 +14,7 @@ Please reach out should you have any question, comment, feedback or information.

 ## Features

 * Deploying on VM (tested with LXD, MAAS)
-* ZooKeeper using SASL authentication
+* Apache ZooKeeper using SASL authentication
 * Scaling up/down in one simple Juju command
 * Multi-broker support and Highly-Available setups
 * Inter-broker authenticated communication
@@ -32,29 +32,28 @@ and [GitHub](https://github.com/canonical/kafka-operator/issues) platforms.

 ## Inside the charms

-* Charmed ZooKeeper charm ships the ZooKeeper [3.8.2-ubuntu0](https://launchpad.net/zookeeper-releases/3.x/3.8.2-ubuntu0), built and supported by Canonical
-* Charmed Kafka charm ships the Kafka [3.6.0-ubuntu0](https://launchpad.net/kafka-releases/3.x/3.6.0-ubuntu0), built and supported by Canonical
-* Charmed ZooKeeper charm is based on [charmed-zookeeper snap](https://snapcraft.io/charmed-zookeeper) on the `3/stable` (Ubuntu LTS “22.04” - core22-based)
-* Charmed Kafka charm is based on [charmed-kafka snap](https://snapcraft.io/charmed-kafka) on the `3/stable` channel (Ubuntu LTS “22.04” - core22-based)
-* Principal charms supports the latest LTS series “22.04” only.
+* Charmed Apache ZooKeeper charm ships the Apache ZooKeeper [3.8.2-ubuntu0](https://launchpad.net/zookeeper-releases/3.x/3.8.2-ubuntu0), built and supported by Canonical
+* Charmed Apache Kafka charm ships the Apache Kafka [3.6.0-ubuntu0](https://launchpad.net/kafka-releases/3.x/3.6.0-ubuntu0), built and supported by Canonical
+* Charmed Apache ZooKeeper charm is based on [charmed-zookeeper snap](https://snapcraft.io/charmed-zookeeper) on the `3/stable` (Ubuntu LTS “22.04” - core22-based)
+* Charmed Apache Kafka charm is based on [charmed-kafka snap](https://snapcraft.io/charmed-kafka) on the `3/stable` channel (Ubuntu LTS “22.04” - core22-based)
+* Principal charms support the latest LTS series “22.04” only.

-More information about the artifacts are provided by the following table:
+More information about the artifacts is provided by the following table:

 | Artifact | Track/Series | Version/Revision | Code |
 |------------------------|--------------|------------------|---------------------------------------------------------------------------------------------------------------------|
-| ZooKeeper distribution | 3.x | 3.8.2-ubuntu0 | [5bb82d](https://git.launchpad.net/zookeeper-releases/tree/?h=lp-3.8.2&id=5bb82df4ffba910a5b30dd42499921466405f087) |
-| Kafka distribution | 3.x | 3.6.0-ubuntu0 | [424389](https://git.launchpad.net/kafka-releases/tree/?h=lp-3.6.0&id=424389bb8f230beaef4ccb94aca464b5d22ac310) |
-| Charmed ZooKeeper snap | 3/stable | 28 | [9757f4](https://github.com/canonical/charmed-zookeeper-snap/tree/9757f4a2a889981275f8f2a1a87e1c78ae1adb77) |
-| ZooKeeper operator | 3/stable | 126 | [9ebd9a](https://github.com/canonical/zookeeper-operator/commit/9ebd9a2050e0bd626feb0019222d45f211ca7774) |
-| Charmed Kafka snap | 3/stable | 30 | [c0ce27](https://github.com/canonical/charmed-kafka-snap/tree/c0ce275f70f688e66f10f295456d2b5ff33d4f64) |
-| Kafka operator | 3/stable | 156 | [01d65c](https://github.com/canonical/kafka-operator/tree/01d65c3444b593d5f18d197a6514421afd3f2bc6) |
-
+| Apache ZooKeeper distribution | 3.x | 3.8.2-ubuntu0 | [5bb82d](https://git.launchpad.net/zookeeper-releases/tree/?h=lp-3.8.2&id=5bb82df4ffba910a5b30dd42499921466405f087) |
+| Apache Kafka distribution | 3.x | 3.6.0-ubuntu0 | [424389](https://git.launchpad.net/kafka-releases/tree/?h=lp-3.6.0&id=424389bb8f230beaef4ccb94aca464b5d22ac310) |
+| Charmed Apache ZooKeeper snap | 3/stable | 28 | [9757f4](https://github.com/canonical/charmed-zookeeper-snap/tree/9757f4a2a889981275f8f2a1a87e1c78ae1adb77) |
+| Charmed Apache ZooKeeper operator | 3/stable | 126 | [9ebd9a](https://github.com/canonical/zookeeper-operator/commit/9ebd9a2050e0bd626feb0019222d45f211ca7774) |
+| Charmed Apache Kafka snap | 3/stable | 30 | [c0ce27](https://github.com/canonical/charmed-kafka-snap/tree/c0ce275f70f688e66f10f295456d2b5ff33d4f64) |
+| Charmed Apache Kafka operator | 3/stable | 156 | [01d65c](https://github.com/canonical/kafka-operator/tree/01d65c3444b593d5f18d197a6514421afd3f2bc6) |

 ## Technical notes

-* A Charmed Kafka cluster is secure by default, meaning that when deployed if there are no client charms related to it, external listeners will not be enabled.
-* We recommend to deploy one `data-integrator` with `extra-user-roles=admin` alongside the Kafka deployment, in order to enable listeners and also create one user with elevated permission
+* A Charmed Apache Kafka cluster is secure by default, meaning that, when deployed, external listeners will not be enabled unless client charms are related to it.
+* We recommend deploying one `data-integrator` with `extra-user-roles=admin` alongside the Apache Kafka deployment, in order to enable listeners and also create one user with elevated permission
 to perform administrative tasks. For more information, see the [How-to manage application](/t/charmed-kafka-documentation-how-to-manage-app/10285) guide.
 * The current release has been tested with Juju 2.9.45+ and Juju 3.1+
-* Inplace upgrade for charms tracking `latest` is not supported, both for ZooKeeper and Kafka charms. Perform data migration to upgrade to a Charmed Kafka cluster managed via a `3/stable` charm.
+* In-place upgrade for charms tracking `latest` is not supported, both for Apache ZooKeeper and Apache Kafka charms. 
Perform data migration to upgrade to a Charmed Apache Kafka cluster managed via a `3/stable` charm.
 For more information on how to perform the migration, see the [How-to migrate a cluster](/t/charmed-kafka-documentation-how-to-migrate-a-cluster/10951) guide.
\ No newline at end of file
diff --git a/docs/reference/r-releases/r-rev156_136.md b/docs/reference/r-releases/r-rev156_136.md
index 078515e4..fb3cd661 100644
--- a/docs/reference/r-releases/r-rev156_136.md
+++ b/docs/reference/r-releases/r-rev156_136.md
@@ -3,7 +3,7 @@

 Dear community,

-We are glad to report that we have just released a new updated version for Charmed ZooKeeper on the `3/stable` channel, upgrading its revision from 126 to 136.
+We are glad to report that we have just released a new updated version for Charmed Apache ZooKeeper on the `3/stable` channel, upgrading its revision from 126 to 136.

 The release of a new version was prompted by the need to backport some features that should provide increased robustness and resilience during operation as well as smaller workload upgrades and fixes. See the technical notes for further information.

@@ -25,27 +25,26 @@ and [GitHub](https://github.com/canonical/kafka-operator/issues) platforms.

 ## Inside the charms

-* Charmed ZooKeeper charm ships the ZooKeeper [3.8.4-ubuntu0](https://launchpad.net/zookeeper-releases/3.x/3.8.4-ubuntu0), built and supported by Canonical
-* Charmed Kafka charm ships the Kafka [3.6.0-ubuntu0](https://launchpad.net/kafka-releases/3.x/3.6.0-ubuntu0), built and supported by Canonical
-* Charmed ZooKeeper charm is based on [charmed-zookeeper snap](https://snapcraft.io/charmed-zookeeper) on the `3/stable` (Ubuntu LTS “22.04” - core22-based)
-* Charmed Kafka charm is based on [charmed-kafka snap](https://snapcraft.io/charmed-kafka) on the `3/stable` channel (Ubuntu LTS “22.04” - core22-based)
-* Principal charms supports the latest LTS series “22.04” only.
+* Charmed Apache ZooKeeper charm ships the Apache ZooKeeper [3.8.4-ubuntu0](https://launchpad.net/zookeeper-releases/3.x/3.8.4-ubuntu0), built and supported by Canonical
+* Charmed Apache Kafka charm ships the Apache Kafka [3.6.0-ubuntu0](https://launchpad.net/kafka-releases/3.x/3.6.0-ubuntu0), built and supported by Canonical
+* Charmed Apache ZooKeeper charm is based on [charmed-zookeeper snap](https://snapcraft.io/charmed-zookeeper) on the `3/stable` (Ubuntu LTS “22.04” - core22-based)
+* Charmed Apache Kafka charm is based on [charmed-kafka snap](https://snapcraft.io/charmed-kafka) on the `3/stable` channel (Ubuntu LTS “22.04” - core22-based)
+* Principal charms support the latest LTS series “22.04” only.

-More information about the artifacts are provided by the following table:
+More information about the artifacts is provided by the following table:

 | Artifact | Track/Series | Version/Revision | Code |
 |------------------------|--------------|------------------|---------------------------------------------------------------------------------------------------------------------|
-| ZooKeeper distribution | 3.x | 3.8.4-ubuntu0 | [78499c](https://git.launchpad.net/zookeeper-releases/tree/?h=lp-3.8.4&id=78499c9f4d4610f9fb963afdad1ffd1aab2a96b8) |
-| Kafka distribution | 3.x | 3.6.0-ubuntu0 | [424389](https://git.launchpad.net/kafka-releases/tree/?h=lp-3.6.0&id=424389bb8f230beaef4ccb94aca464b5d22ac310) |
-| Charmed ZooKeeper snap | 3/stable | 30 | [d85fed](https://github.com/canonical/charmed-zookeeper-snap/tree/d85fed4c2f83d99dbc028ff10c2e38915b6cdf04) |
-| ZooKeeper operator | 3/stable | 136 | [0b7d66](https://github.com/canonical/zookeeper-operator/tree/0b7d66170d80e23804032034119a419f174bb965) |
-| Charmed Kafka snap | 3/stable | 30 | [c0ce27](https://github.com/canonical/charmed-kafka-snap/tree/c0ce275f70f688e66f10f295456d2b5ff33d4f64) |
-| Kafka operator | 3/stable | 156 | [01d65c](https://github.com/canonical/kafka-operator/tree/01d65c3444b593d5f18d197a6514421afd3f2bc6) |
-
+| Apache ZooKeeper distribution | 3.x | 3.8.4-ubuntu0 | [78499c](https://git.launchpad.net/zookeeper-releases/tree/?h=lp-3.8.4&id=78499c9f4d4610f9fb963afdad1ffd1aab2a96b8) |
+| Apache Kafka distribution | 3.x | 3.6.0-ubuntu0 | [424389](https://git.launchpad.net/kafka-releases/tree/?h=lp-3.6.0&id=424389bb8f230beaef4ccb94aca464b5d22ac310) |
+| Charmed Apache ZooKeeper snap | 3/stable | 30 | [d85fed](https://github.com/canonical/charmed-zookeeper-snap/tree/d85fed4c2f83d99dbc028ff10c2e38915b6cdf04) |
+| Charmed Apache ZooKeeper operator | 3/stable | 136 | [0b7d66](https://github.com/canonical/zookeeper-operator/tree/0b7d66170d80e23804032034119a419f174bb965) |
+| Charmed Apache Kafka snap | 3/stable | 30 | [c0ce27](https://github.com/canonical/charmed-kafka-snap/tree/c0ce275f70f688e66f10f295456d2b5ff33d4f64) |
+| Charmed Apache Kafka operator | 3/stable | 156 | [01d65c](https://github.com/canonical/kafka-operator/tree/01d65c3444b593d5f18d197a6514421afd3f2bc6) |

 ## Technical notes

-* Rev126 on Charmed ZooKeeper was observed to sporadically trigger ZooKeeper reconfiguration of the clusters by removing all server but the Juju leader from the ZooKeeper quorum. This leads to a
+* Rev126 on Charmed Apache ZooKeeper was observed to sporadically trigger Apache ZooKeeper reconfiguration of the clusters by removing all servers but the Juju leader from the Apache ZooKeeper quorum. This leads to a
 cluster that is not highly available, although it is still up and running. The reconfiguration generally resulted from some glitch and connection drop with the Juju controller that resulted in a transient,
 inconsistent databag of Juju events. This was once observed during a controller upgrade (see reported [bug](https://bugs.launchpad.net/juju/+bug/2053055) on Juju), but its occurrence is not limited to it.
diff --git a/docs/reference/r-requirements.md b/docs/reference/r-requirements.md
index c8bb2c63..39fc69d3 100644
--- a/docs/reference/r-requirements.md
+++ b/docs/reference/r-requirements.md
@@ -11,7 +11,7 @@ The minimum supported Juju versions are:

 ## Minimum requirements

-For production environments, it is recommended to deploy at least five nodes for ZooKeeper and three for Kafka. 
While the following requirements are meant to be for production, the charm can be deployed in much smaller environments. +For production environments, it is recommended to deploy at least five nodes for Apache ZooKeeper and three for Apache Kafka. While the following requirements are meant to be for production, the charm can be deployed in much smaller environments. - 64GB of RAM - 24 cores diff --git a/docs/reference/r-snap-entrypoints.md b/docs/reference/r-snap-entrypoints.md index 98504cfd..e34f032e 100644 --- a/docs/reference/r-snap-entrypoints.md +++ b/docs/reference/r-snap-entrypoints.md @@ -1,11 +1,11 @@ -# Charmed Kafka snap entrypoints +# Charmed Apache Kafka snap entrypoints -Snap entrypoints wrap the Kafka Distribution Bash scripts and make sure +Snap entrypoints wrap the Apache Kafka Distribution Bash scripts and make sure that they run with the correct environment settings (configuration files, logging files, etc). Below is a reference table for the mapping between entrypoints and wrapped bash script: -| Snap Entrypoint | Kafka Distribution Bash Script | +| Snap Entrypoint | Apache Kafka Distribution Bash Script | |-------------------------------------------------|----------------------------------------------------------| | `charmed-kafka.daemon` | `$SNAP/opt/kafka/bin/kafka-server-start.sh` | | `charmed-kafka.log-dirs` | `$SNAP/opt/kafka/bin/kafka-log-dirs.sh` | @@ -33,7 +33,7 @@ Below is a reference table for the mapping between entrypoints and wrapped bash | `charmed-kafka.trogdor` | `$SNAP/opt/kafka/bin/trogdor.sh` | | `charmed-kafka.keytool` | `$SNAP/usr/lib/jvm/java-17-openjdk-amd64/bin/keytool` | -Available Kafka bin commands can also be found with: +Available Apache Kafka bin commands can also be found with: ``` snap info charmed-kafka --channel 3/stable diff --git a/docs/reference/r-statuses.md b/docs/reference/r-statuses.md index 30521e23..06582754 100644 --- a/docs/reference/r-statuses.md +++ b/docs/reference/r-statuses.md @@ -2,46 +2,46 @@ The charm follows [standard Juju applications statuses](https://juju.is/docs/olm/status-values#heading--application-status). Here you can find the expected end-users reactions on different statuses: -## Kafka +## Apache Kafka | Juju Status | Message | Expectations | Actions | |-----------------|----------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | **Active** | | Normal charm operations | No actions required | | **Active** | manual partition reassignment may be needed to utilize new storage volumes | Existing data is not automatically rebalanced when new storage is attached. New storage will be used for newly created topics and/or partitions | Inspect the storage utilization and based on the need, use the bash utility script `/snap/charmed-kafka/current/opt/kafka/bin/kafka-reassign-partitions.sh` for manual data rebalancing. 
| |
| **Active** | potential data loss due to storage removal without replication | Some partition/topics are not replicated on multiple storages, therefore potentially leading to data loss | Add new storage, increase replication of topics/partitions and/or rebalance data across multiple storages/brokers |
-| **Active** | machine system settings are not optimal - see logs for info | The broker is running on a machine that has sub-optimal OS settings. Although this may not preclude Kafka to work, it may result in sub-optimal performances | Check the `juju debug-log` for insights on which settings are sub-optimal and may be changed |
-| **Active** | sysctl params cannot be set. Is the machine running on a container? | Some of the sysctl settings required by Kafka could not be set, therefore affective Kafka performance and correct settings. This can also be due to the charm being deployed on the wrong substrate | Remove the deployment and make sure that the selected charm is correct given the Juju cloud substrate |
+| **Active** | machine system settings are not optimal - see logs for info | The broker is running on a machine that has sub-optimal OS settings. Although this may not prevent Apache Kafka from working, it may result in sub-optimal performance | Check the `juju debug-log` for insights on which settings are sub-optimal and may be changed |
+| **Active** | sysctl params cannot be set. Is the machine running on a container? | Some of the sysctl settings required by Apache Kafka could not be set, therefore affecting Apache Kafka performance and correct settings. This can also be due to the charm being deployed on the wrong substrate | Remove the deployment and make sure that the selected charm is correct given the Juju cloud substrate |
| **Blocked** | unable to install charmed-kafka snap | There are issues with the network connection and/or the Snap Store | Check your internet connection and https://status.snapcraft.io/. Remove the application and when everything is ok, deploy the charm again |
-| **Blocked** | snap service not running | The charm failed to start the snap daemon processes | Check the Kafka logs for insights on the issue |
-| **Blocked** | missing required zookeeper relation | Kafka charm has not been connected to any ZooKeeper cluster | Relate to a ZooKeeper charm |
-| **Blocked** | unit not connected to zookeeper | Although the relation is present, the unit has failed to connect to ZooKeeper | Make sure that Kafka and ZooKeeper can connect and exchange data. When using encryption, make sure that certificates/ca are correctly setup. |
-| **Blocked** | tls must be enabled on both kafka and zookeeper | Encryption (and relation with TLS-certificates operators) must be either enabled or disabled on both Kafka and ZooKeeper | Make sure that both Kafka and ZooKeeper either both use or neither of them use encryption. |
-| **Waiting** | zookeeper credentials not created yet | Credentials are being created on ZooKeeper, and Kafka is waiting to receive them to connect to ZooKeeper | |
-| **Waiting** | internal broker credentials not yet added | Intra-broker credentials being created to enable communication and syncing among brokers belonging to the Kafka clusters. | |
-| **Waiting** | unit waiting for signed certificates | Unit has requested a CSR request via the `certificates` relation and it is waiting to received the signed certificate | |
+| **Blocked** | snap service not running | The charm failed to start the snap daemon processes | Check the Apache Kafka logs for insights on the issue |
+| **Blocked** | missing required zookeeper relation | The Apache Kafka charm has not been connected to any Apache ZooKeeper cluster | Relate to an Apache ZooKeeper charm |
+| **Blocked** | unit not connected to zookeeper | Although the relation is present, the unit has failed to connect to Apache ZooKeeper | Make sure that Apache Kafka and Apache ZooKeeper can connect and exchange data. When using encryption, make sure that certificates/CA are correctly set up. |
+| **Blocked** | tls must be enabled on both kafka and zookeeper | Encryption (and relation with TLS-certificates operators) must be either enabled or disabled on both Apache Kafka and Apache ZooKeeper | Make sure that Apache Kafka and Apache ZooKeeper either both use encryption or neither does. |
+| **Waiting** | zookeeper credentials not created yet | Credentials are being created on Apache ZooKeeper, and Apache Kafka is waiting to receive them to connect to Apache ZooKeeper | |
+| **Waiting** | internal broker credentials not yet added | Intra-broker credentials are being created to enable communication and syncing among brokers belonging to the Apache Kafka cluster. | |
+| **Waiting** | unit waiting for signed certificates | Unit has requested a CSR via the `certificates` relation and it is waiting to receive the signed certificate | |
 | **Maintenance** | | Charm is performing the internal maintenance (e.g. cluster re-configuration, upgrade, ...) | No actions required |
 | **Error** | any | An unhanded internal error happened | Read the message hint. Execute `juju resolve ` after addressing the root of the error state |
 | **Terminated** | any | The unit is gone and will be cleaned by Juju soon | No actions possible |
 | **Unknown** | any | Juju doesn't know the charm app/unit status. Possible reason: K8s charm termination in progress. | Manual investigation required if status is permanent |
 
-## ZooKeeper
+## Apache ZooKeeper
 
 | Juju Status | Message | Expectations | Actions |
 |-----------------|----------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
 | **Active** | | Normal charm operations | No actions required |
 | **Blocked** | unable to install zookeeper service | There are issues with the network connection and/or the Snap Store | Check your internet connection and https://status.snapcraft.io/. Remove the application and when everything is ok, deploy the charm again |
-| **Blocked** | zookeeper service not running | The charm failed to start the snap daemon processes | Check the ZooKeeper logs for insights on the issue |
-| **Blocked** | zookeeper service is unreachable or not serving requests | The ZooKeeper service is either down or not exposed through the correct port | Check the ZooKeeper logs for the impacted units and insights on underlying issue | |
-| **Waiting** | waiting for leader to create internal user credentials | The ZooKeeper cluster is being initialized and the leader is setting up credentials | |
-| **Waiting** | other units starting first | ZooKeeper units are being started and added to the quorum in order | |
-| **Waiting** | unit waiting for signed certificates | Unit has requested a CSR request via the `certificates` relation and it is waiting to received the signed certificate | |
+| **Blocked** | zookeeper service not running | The charm failed to start the snap daemon processes | Check the Apache ZooKeeper logs for insights on the issue |
+| **Blocked** | zookeeper service is unreachable or not serving requests | The Apache ZooKeeper service is either down or not exposed through the correct port | Check the Apache ZooKeeper logs for the impacted units and insights on the underlying issue |
+| **Waiting** | waiting for leader to create internal user credentials | The Apache ZooKeeper cluster is being initialized and the leader is setting up credentials | |
+| **Waiting** | other units starting first | Apache ZooKeeper units are being started and added to the quorum in order | |
+| **Waiting** | unit waiting for signed certificates | Unit has requested a CSR via the `certificates` relation and it is waiting to receive the signed certificate | |
 | **Maintenance** | not all units registered IP | The units are being registered to the quorum | | | |
-| **Maintenance** | cluster not stable - not all units related | Some ZooKeeper units are not connected, reducing cluster availability and obstructing elections | Make sure the units can reach each other and communicate |
-| **Maintenance** | cluster not stable - quorum is stale | The cluster does not have an active quorum, preventing the cluster from running elections | Do not perform any extra-ordinary operation. Wait for the units to connect and form a quorum. If the problem persists, please check the ZooKeeper logs on all units for further insights. |
-| **Maintenance** | cluster not stable - not all units added to quorum | Some ZooKeeper units are not part of the quorum, reducing cluster availability and obstructing elections | Do not perform any extra-ordinary operation. Wait for the units to connect and form a quorum. If some units keep being not connected, please check the ZooKeeper logs of such units for further insights. |
-| **Maintenance** | provider not ready - not all units using same encryption | Units use different settings for encryption, therefore preventing correct cluster operations. | This situation can transiently occur when new protocols / certificates are being setup. If the message persist, please check the ZooKeeper logs for further insights. |
-| **Maintenance** | provider not ready - switching quorum encryption | Encryption is being enabled / disabled. | This situation can transiently occur when encryption are being setup. If the message persist, please check the ZooKeeper logs for further insights. |
+| **Maintenance** | cluster not stable - not all units related | Some Apache ZooKeeper units are not connected, reducing cluster availability and obstructing elections | Make sure the units can reach each other and communicate |
+| **Maintenance** | cluster not stable - quorum is stale | The cluster does not have an active quorum, preventing the cluster from running elections | Do not perform any extraordinary operation. Wait for the units to connect and form a quorum. If the problem persists, please check the Apache ZooKeeper logs on all units for further insights. |
+| **Maintenance** | cluster not stable - not all units added to quorum | Some Apache ZooKeeper units are not part of the quorum, reducing cluster availability and obstructing elections | Do not perform any extraordinary operation. Wait for the units to connect and form a quorum. If some units remain disconnected, please check the Apache ZooKeeper logs of such units for further insights. |
+| **Maintenance** | provider not ready - not all units using same encryption | Units use different settings for encryption, therefore preventing correct cluster operations. | This situation can transiently occur when new protocols / certificates are being set up. If the message persists, please check the Apache ZooKeeper logs for further insights. |
+| **Maintenance** | provider not ready - switching quorum encryption | Encryption is being enabled / disabled. | This situation can transiently occur when encryption is being set up. If the message persists, please check the Apache ZooKeeper logs for further insights. |
 | **Maintenance** | provider not ready - portUnification not yet disabled | Specifies that the client port should accept SSL connections (using the same configuration as the secure client port). | |
 | **Error** | any | An unhanded internal error happened | Read the message hint. Run `juju resolve ` after addressing the root of the error state |
 | **Terminated** | any | The unit is gone and will be cleaned by Juju soon | No actions possible |
-| **Unknown** | any | Juju doesn't know the charm app/unit status. Possible reason: K8s charm termination in progress. | Manual investigation required if status is permanent |
\ No newline at end of file
+| **Unknown** | any | Juju doesn't know the charm app/unit status. Possible reason: K8s charm termination in progress. | Manual investigation required if the status is permanent |
\ No newline at end of file
diff --git a/docs/tutorial/t-cleanup-environment.md b/docs/tutorial/t-cleanup-environment.md
index a1c72df6..35b52187 100644
--- a/docs/tutorial/t-cleanup-environment.md
+++ b/docs/tutorial/t-cleanup-environment.md
@@ -1,12 +1,14 @@
-This is part of the [Charmed Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and the overview of the content.
+This is part of the [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and an overview of the content.
 
 ## Clean up your environment
 
-If you're done using Charmed Kafka and Juju and would like to free up resources on your machine, you can remove Charmed Kafka, Charmed Zookeeper and Juju.
+If you're done using Charmed Apache Kafka and Juju and would like to free up resources on your machine, you can remove Charmed Apache Kafka, Charmed Apache ZooKeeper and Juju.
 
-> **Warning**: when you remove Charmed Kafka as shown below you will lose all the data in Kafka.
Further, when you remove Juju as shown below you will lose access to any other applications you have hosted on Juju.
+[note type="caution"]
+Removing Charmed Apache Kafka as shown below will delete all the data in Apache Kafka. Further, when you remove Juju as shown below you lose access to any other applications you have hosted on Juju.
+[/note]
 
-To remove Charmed Kafka and the model it is hosted on run the command:
+To remove Charmed Apache Kafka and the model it is hosted on, run the command:
 
 ```shell
 juju destroy-model tutorial --destroy-storage --force
@@ -26,10 +28,11 @@ sudo snap remove juju --purge
 
 ## What's next?
 
-In this tutorial, we've successfully deployed Kafka, added/removed replicas, added/removed users to/from the cluster, and even enabled and disabled TLS.
-You may now keep your Charmed Kafka deployment running or remove it entirely using the steps in [Remove Charmed Kafka and Juju](#remove-charmed-kafka-and-juju).
+In this tutorial, we've successfully deployed Apache Kafka, added/removed replicas, added/removed users to/from the cluster, and even enabled and disabled TLS.
+You may now keep your Charmed Apache Kafka deployment running or remove it entirely using the steps in [Remove Charmed Apache Kafka and Juju](#remove-charmed-kafka-and-juju).
 If you're looking for what to do next you can:
-- Run [Charmed Kafka on Kubernetes](https://github.com/canonical/kafka-k8s-operator).
+
+- Run [Charmed Apache Kafka on Kubernetes](https://github.com/canonical/kafka-k8s-operator).
 - Check out our Charmed offerings of [PostgreSQL](https://charmhub.io/postgresql?channel=edge) and [MongoDB](https://charmhub.io/mongodb?channel=5/edge).
 - Read about [High Availability Best Practices](https://canonical.com/blog/database-high-availability)
 - [Report](https://github.com/canonical/kafka-operator/issues) any problems you encountered.
diff --git a/docs/tutorial/t-deploy.md b/docs/tutorial/t-deploy.md
index 93548cd6..efc119d2 100644
--- a/docs/tutorial/t-deploy.md
+++ b/docs/tutorial/t-deploy.md
@@ -1,8 +1,8 @@
-This is part of the [Charmed Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and the overview of the content.
+This is part of the [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and an overview of the content.
 
-## Deploy Charmed Kafka (and Charmed ZooKeeper)
+## Deploy Charmed Apache Kafka (and Charmed Apache ZooKeeper)
 
-To deploy Charmed Kafka, all you need to do is run the following commands, which will automatically fetch [Kafka](https://charmhub.io/kafka?channel=3/stable) and [ZooKeeper](https://charmhub.io/zookeeper?channel=3/stable) charms from [Charmhub](https://charmhub.io/) and deploy them to your model. For example, to deploy a five ZooKeeper unit and three Kafka unit cluster, you can simply run:
+To deploy Charmed Apache Kafka, all you need to do is run the following commands, which will automatically fetch [Apache Kafka](https://charmhub.io/kafka?channel=3/stable) and [Apache ZooKeeper](https://charmhub.io/zookeeper?channel=3/stable) charms from [Charmhub](https://charmhub.io/) and deploy them to your model. For example, to deploy a cluster of five Apache ZooKeeper units and three Apache Kafka units, you can simply run:
 
 ```shell
 $ juju deploy zookeeper -n 5
@@ -15,14 +15,14 @@ After this, it is necessary to connect them:
 $ juju relate kafka zookeeper
 ```
 
-Juju will now fetch Charmed Kafka and Zookeeper and begin deploying it to the LXD cloud. This process can take several minutes depending on how provisioned (RAM, CPU, etc) your machine is. You can track the progress by running:
+Juju will now fetch Charmed Apache Kafka and Apache ZooKeeper and begin deploying them to the LXD cloud. This process can take several minutes depending on how provisioned (RAM, CPU, etc.) your machine is. You can track the progress by running:
 
 ```shell
 juju status --watch 1s
 ```
 
-This command is useful for checking the status of Charmed ZooKeeper and Charmed Kafka and gathering information about the machines hosting the two applications. Some of the helpful information it displays includes IP addresses, ports, state, etc.
-The command updates the status of the cluster every second and as the application starts you can watch the status and messages of Charmed Kafka and ZooKeeper change.
+This command is useful for checking the status of Charmed Apache ZooKeeper and Charmed Apache Kafka and gathering information about the machines hosting the two applications. Some of the helpful information it displays includes IP addresses, ports, state, etc.
+The command updates the status of the cluster every second and as the application starts you can watch the status and messages of Charmed Apache Kafka and Apache ZooKeeper change.
 
 Wait until the application is ready - when it is ready, `juju status --watch 1s` will show:
 
@@ -57,7 +57,7 @@ Machine  State    Address       Inst id        Series  AZ  Message
 
 To exit the screen with `juju status --watch 1s`, enter `Ctrl+c`.
 
-## Access Kafka cluster
+## Access Apache Kafka cluster
 
 To watch the process, `juju status` can be used. Once all the units show as `active|idle` the credentials to access a broker can be queried with:
 
@@ -77,11 +77,13 @@ password: e2sMfYLQg7sbbBMFTx1qlaZQKTUxr09x
 username: admin
 ```
 
-Providing you the `username` and `password` of the Kafka cluster admin user.
+This provides the `username` and `password` of the Apache Kafka cluster admin user.
 
-> **IMPORTANT** Note that when no other application is related to Kafka, the cluster is secured-by-default and external listeners (bound to port `9092`) are disabled, thus preventing any external incoming connection.
+[note type="caution"]
+When no other application is related to Apache Kafka, the cluster is secured-by-default and external listeners (bound to port `9092`) are disabled, thus preventing any external incoming connection.
+[/note]
 
-Nevertheless, it is still possible to run a command from within the Kafka cluster using the internal listeners in place of the external ones.
+Nevertheless, it is still possible to run a command from within the Apache Kafka cluster using the internal listeners in place of the external ones.
 The internal endpoints can be constructed by replacing the `19092` port in the `bootstrap.servers` returned in the output above, for example:
 
 ```shell
@@ -94,8 +96,8 @@ Once you have fetched the `INTERNAL_LISTENERS`, log in to one of the Kafka containers
 juju ssh kafka/leader sudo -i
 ```
 
-When the unit is started, the Charmed Kafka Operator installs the [`charmed-kafka`](https://snapcraft.io/charmed-kafka) Snap in the unit that provides a number of entrypoints (that corresponds to the bin commands in the Kafka distribution) for performing various administrative tasks, e.g `charmed-kafka.config` to update cluster configuration, `charmed-kafka.topics` for topic management, and many more!
-Within the machine, the Charmed Kafka Operator also creates a `client.properties` file that already provides the relevant settings to connect to the cluster using the CLI
+When the unit is started, the Charmed Apache Kafka Operator installs the [`charmed-kafka`](https://snapcraft.io/charmed-kafka) Snap in the unit, which provides a number of entrypoints (corresponding to the bin commands in the Apache Kafka distribution) for performing various administrative tasks, e.g. `charmed-kafka.config` to update cluster configuration, `charmed-kafka.topics` for topic management, and many more!
+Within the machine, the Charmed Apache Kafka Operator also creates a `client.properties` file that already provides the relevant settings to connect to the cluster using the CLI:
 
 ```shell
 CLIENT_PROPERTIES=/var/snap/charmed-kafka/current/etc/kafka/client.properties
@@ -130,7 +132,7 @@ charmed-kafka.topics \
     --command-config $CLIENT_PROPERTIES
 ```
 
-Other available Kafka bin commands can also be found with:
+Other available Apache Kafka bin commands can also be found with:
 
 ```shell
 snap info charmed-kafka
@@ -140,4 +142,4 @@
 
 However, although the commands above can run within the cluster, it is generally recommended during operations to enable external listeners and use these for running the admin commands from outside the cluster.
 
-To do so, as we will see in the next section, we will deploy a [data-integrator](https://charmhub.io/data-integrator) charm and relate it to Kafka.
\ No newline at end of file
+To do so, as we will see in the next section, we will deploy a [data-integrator](https://charmhub.io/data-integrator) charm and relate it to Apache Kafka.
\ No newline at end of file
diff --git a/docs/tutorial/t-enable-encryption.md b/docs/tutorial/t-enable-encryption.md
index cd41e5b4..40ad804a 100644
--- a/docs/tutorial/t-enable-encryption.md
+++ b/docs/tutorial/t-enable-encryption.md
@@ -1,17 +1,19 @@
-This is part of the [Charmed Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and the overview of the content.
+This is part of the [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and an overview of the content.
 
 ## Transport Layer Security (TLS)
 
-[TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) is used to encrypt data exchanged between two applications; it secures data transmitted over the network. Typically, enabling TLS within a highly available database, and between a highly available database and client/server applications, requires domain-specific knowledge and a high level of expertise. Fortunately, the domain-specific knowledge has been encoded into Charmed Kafka. This means (re-)configuring TLS on Charmed Kafka is readily available and requires minimal effort on your end.
+[TLS](https://en.wikipedia.org/wiki/Transport_Layer_Security) is used to encrypt data exchanged between two applications; it secures data transmitted over the network. Typically, enabling TLS within a highly available database, and between a highly available database and client/server applications, requires domain-specific knowledge and a high level of expertise. Fortunately, the domain-specific knowledge has been encoded into Charmed Apache Kafka. This means (re-)configuring TLS on Charmed Apache Kafka is readily available and requires minimal effort on your end.
 
-Again, relations come in handy here as TLS is enabled via relations; i.e. by relating Charmed Kafka to the [Self-signed Certificates Charm](https://charmhub.io/self-signed-certificates) via the [`tls-certificates`](https://github.com/canonical/charm-relation-interfaces/blob/main/interfaces/tls_certificates/v1/README.md) charm relations. The `tls-certificates` relation centralises TLS certificate management in a consistent manner and handles providing, requesting, and renewing TLS certificates, making it possible to use different providers, like the self-signed certificates but also other services, e.g. Let's Encrypt.
+Again, relations come in handy here as TLS is enabled via relations; i.e. by relating Charmed Apache Kafka to the [Self-signed Certificates Charm](https://charmhub.io/self-signed-certificates) via the [`tls-certificates`](https://github.com/canonical/charm-relation-interfaces/blob/main/interfaces/tls_certificates/v1/README.md) charm relations. The `tls-certificates` relation centralises TLS certificate management in a consistent manner and handles providing, requesting, and renewing TLS certificates, making it possible to use different providers, like the self-signed certificates but also other services, e.g. Let's Encrypt.
 
-> *Note: In this tutorial, we will distribute [self-signed certificates](https://en.wikipedia.org/wiki/Self-signed_certificate) to all charms (Kafka, ZooKeeper and client applications) that are signed using a root self-signed CA
-that is also trusted by all applications. This setup is only for show-casing purposes and self-signed certificates should **never** be used in a production cluster. For more information about which charm may better suit your use-case, please refer to [this post](https://charmhub.io/topics/security-with-x-509-certificates).*
+[note]
+In this tutorial, we will distribute [self-signed certificates](https://en.wikipedia.org/wiki/Self-signed_certificate) to all charms (Apache Kafka, Apache ZooKeeper and client applications) that are signed using a root self-signed CA
+that is also trusted by all applications. This setup is only for showcasing purposes and self-signed certificates should **never** be used in a production cluster. For more information about which charm may better suit your use case, please refer to [this post](https://charmhub.io/topics/security-with-x-509-certificates).
+[/note]
 
 ### Configure TLS
 
-Before enabling TLS on Charmed Kafka we must first deploy the `self-signed-certificates` charm:
+Before enabling TLS on Charmed Apache Kafka, we must first deploy the `self-signed-certificates` charm:
 
 ```shell
 juju deploy self-signed-certificates --config ca-common-name="Tutorial CA"
@@ -34,7 +36,7 @@ self-signed-certificates/0*  active    idle   10.1.36.91
 ...
 ```
 
-To enable TLS on Charmed Kafka, relate the both the `kafka` and `zookeeper` charms with the
+To enable TLS on Charmed Apache Kafka, relate both the `kafka` and `zookeeper` charms with the
 `self-signed-certificates` charm:
 
 ```shell
 juju relate zookeeper self-signed-certificates
 juju relate kafka:certificates self-signed-certificates
 ```
 
-After the charms settle into `active/idle` states, the Kafka listeners should now have been swapped to the
+After the charms settle into `active/idle` states, the Apache Kafka listeners should now have been swapped to the
 default encrypted port 9093.
This can be tested by checking whether the ports are open/closed with `telnet`:
 
 ```shell
 telnet  9093
 ```
 
 ### Enable TLS encrypted connection
 
-Once the Kafka cluster is enabled to use encrypted connection, client applications should be configured as well to connect to
+Once the Apache Kafka cluster is enabled to use encrypted connections, client applications should also be configured to connect to
 the correct port as well as trust the self-signed CA provided by the `self-signed-certificates` charm.
 
-Make sure that the `kafka-test-app` is not connected to the Kafka charm, by removing the relation if it exists
+Make sure that the `kafka-test-app` is not connected to the Apache Kafka charm by removing the relation, if it exists:
 
 ```shell
 juju remove-relation kafka-test-app kafka
 ```
 
@@ -80,7 +82,7 @@ and then relate with the `kafka` cluster
 juju relate kafka kafka-test-app
 ```
 
-As before, you can check that the messages are pushed into the Kafka cluster by inspecting the logs
+As before, you can check that the messages are pushed into the Apache Kafka cluster by inspecting the logs
 
 ```shell
 juju exec --application kafka-test-app "tail /tmp/*.log"
 ```
 
 with the encrypted port `9093`.
 
 ### Remove external TLS certificate
 
-To remove the external TLS and return to the locally generate one, un-relate applications:
+To remove the external TLS certificate and return to the locally generated one, un-relate the applications:
 
 ```shell
 juju remove-relation kafka self-signed-certificates
 juju remove-relation zookeeper self-signed-certificates
 ```
 
-The Charmed Kafka application is not using TLS anymore.
\ No newline at end of file
+The Charmed Apache Kafka application is no longer using TLS.
\ No newline at end of file
diff --git a/docs/tutorial/t-manage-passwords.md b/docs/tutorial/t-manage-passwords.md
index fb62f9be..16fdc376 100644
--- a/docs/tutorial/t-manage-passwords.md
+++ b/docs/tutorial/t-manage-passwords.md
@@ -1,16 +1,16 @@
-This is part of the [Charmed Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and the overview of the content.
+This is part of the [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and an overview of the content.
 
 ## Manage passwords
 
-Passwords help to secure our cluster and are essential for security. Over time it is a good practice to change the password frequently. Here we will go through setting and changing the password both for the admin user and external Kafka users managed by the data-integrator.
+Passwords help to secure our cluster and are essential for security. It is good practice to change passwords frequently. Here we will go through setting and changing the password both for the admin user and external Apache Kafka users managed by the data-integrator.
 
 ### Admin user
 
-The admin user password management is handled directing by the charm, by using Juju actions.
+The admin user password management is handled directly by the charm using Juju actions.
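+
+As a quick orientation, the full set of actions exposed by the charm, including `get-admin-credentials`, can be listed directly from the Juju CLI. A minimal sketch (the exact set of actions shown depends on the charm revision):
+
+```shell
+# List the actions (and their descriptions) exposed by the kafka application
+juju actions kafka
+```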
 
 #### Retrieve the admin password
 
-As previously mentioned, the admin password can be retrieved by running the `get-admin-credentials` action on the Charmed Kafka application:
+As previously mentioned, the admin password can be retrieved by running the `get-admin-credentials` action on the Charmed Apache Kafka application:
 
 ```shell
 juju run kafka/leader get-admin-credentials
@@ -64,7 +64,9 @@ unit-kafka-1:
 The admin password is under the result: `admin-password`. It should be different from your previous password.
 
-> **Note** When changing the admin password you will also need to update the admin password the in Kafka connection parameters; as the old password will no longer be valid.*
+[note]
+When changing the admin password you will also need to update the admin password in the Apache Kafka connection parameters, as the old password will no longer be valid.
+[/note]
 
 #### Set the admin password
 
@@ -91,15 +93,17 @@ unit-kafka-1:
 The admin password under the result: `admin-password` should match whatever you passed in when you entered the command.
 
-> **Note** When changing the admin password you will also need to update the admin password in the Kafka connection parameters, as the old password will no longer be valid.*
+[note]
+When changing the admin password you will also need to update the admin password in the Apache Kafka connection parameters, as the old password will no longer be valid.
+[/note]
 
-### External Kafka users
+### External Apache Kafka users
 
-Unlike Admin management, the password management for external Kafka users is instead managed using relations. Let's see this into play with the Data Integrator charm, that we have deployed in the previous part of the tutorial.
+Unlike admin management, the passwords for external Apache Kafka users are managed using relations. Let's see this in action with the Data Integrator charm, which we deployed in the previous part of the tutorial.
 
 #### Retrieve the password
 
-Similarly to the Kafka application, also the `data-integrator` exposes an action to retrieve the credentials, e.g.
+Similarly to the Apache Kafka application, the `data-integrator` charm also exposes an action to retrieve the credentials, e.g.:
 
 ```shell
 juju run data-integrator/leader get-credentials
@@ -154,6 +158,7 @@ To rotate external passwords with no or limited downtime, please refer to the ho
 #### Remove the user
 
 To remove the user, remove the relation. Removing the relation automatically removes the user that was created when the relation was created. Enter the following to remove the relation:
+
 ```shell
 juju remove-relation kafka data-integrator
 ```
 
@@ -192,7 +197,9 @@ Machine  State    Address       Inst id        Series  AZ  Message
 8        started  10.244.26.4   juju-f1a2cd-8  jammy       Running
 
-> **Note** The operations above would also apply to charmed applications that implement the `kafka_client` relation, for which password rotation and user deletion can be achieved in the same consistent way.
+[note]
+The operations above also apply to charmed applications that implement the `kafka_client` relation, for which password rotation and user deletion can be achieved in the same consistent way.
+[/note]
 
 ## What's next?
diff --git a/docs/tutorial/t-overview.md b/docs/tutorial/t-overview.md
index 0899f9ff..7b010554 100644
--- a/docs/tutorial/t-overview.md
+++ b/docs/tutorial/t-overview.md
@@ -1,22 +1,22 @@
-# Charmed Kafka tutorial
+# Charmed Apache Kafka tutorial
 
-The Charmed Kafka Operator delivers automated operations management from [Day 0 to Day 2](https://codilime.com/blog/day-0-day-1-day-2-the-software-lifecycle-in-the-cloud-age/) on the [Apache Kafka](https://kafka.apache.org/) event streaming platform.
-It is an open source, end-to-end, production-ready data platform [on top of Juju](https://juju.is/). As a first step this tutorial shows you how to get Charmed Kafka up and running, but the tutorial does not stop there.
-As currently Kafka requires a paired [ZooKeeper](https://zookeeper.apache.org/) deployment in production, this operator makes use of the [ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions.
-Through this tutorial you will learn a variety of operations, everything from adding replicas to advanced operations such as enabling Transport Layer Security (TLS).
+The Charmed Apache Kafka Operator delivers automated operations management from [Day 0 to Day 2](https://codilime.com/blog/day-0-day-1-day-2-the-software-lifecycle-in-the-cloud-age/) on the [Apache Kafka](https://kafka.apache.org/) event streaming platform.
+It is an open source, end-to-end, production-ready data platform [on top of Juju](https://juju.is/). As a first step, this tutorial shows you how to get Charmed Apache Kafka up and running, but the tutorial does not stop there.
+As Apache Kafka currently requires a paired [Apache ZooKeeper](https://zookeeper.apache.org/) deployment in production, this operator makes use of the [Charmed Apache ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions.
+Through this tutorial, you will learn a variety of operations, everything from adding replicas to advanced operations such as enabling Transport Layer Security (TLS).
 
-In this tutorial we will walk through how to:
+In this tutorial, we will walk through how to:
 - Set up your environment using LXD and Juju.
-- Deploy Kafka using a couple of commands.
+- Deploy Apache Kafka using a couple of commands.
 - Get the admin credentials directly.
 - Add high availability with replication.
 - Change the admin password.
-- Automatically create Kafka users via Juju relations.
+- Automatically create Apache Kafka users via Juju relations.
 
-While this tutorial intends to guide and teach you as you deploy Charmed Kafka, it will be most beneficial if you already have a familiarity with:
+While this tutorial intends to guide and teach you as you deploy Charmed Apache Kafka, it will be most beneficial if you are already familiar with:
 - Basic terminal commands.
-- Kafka concepts such as replication and users.
+- Apache Kafka concepts such as replication and users.
## Minimum requirements
 
 Before we start, make sure your machine meets the following requirements:
@@ -32,7 +32,7 @@ Here’s an overview of the steps required with links to our separate tutorials that deal with each individual step:
 
 * [Set up the environment](/t/charmed-kafka-tutorial-setup-environment/10575)
-* [Deploy Kafka](/t/charmed-kafka-tutorial-deploy-kafka/10567)
+* [Deploy Apache Kafka](/t/charmed-kafka-tutorial-deploy-kafka/10567)
 * [Integrate with client applications](/t/charmed-kafka-tutorial-relate-kafka/10573)
 * [Manage passwords](/t/charmed-kafka-tutorial-manage-passwords/10569)
 * [Enable encryption](/t/charmed-kafka-documentation-tutorial-enable-security/12043)
diff --git a/docs/tutorial/t-relate-kafka.md b/docs/tutorial/t-relate-kafka.md
index 890fe1ea..635f8601 100644
--- a/docs/tutorial/t-relate-kafka.md
+++ b/docs/tutorial/t-relate-kafka.md
@@ -1,14 +1,16 @@
-This is part of the [Charmed Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and the overview of the content.
+This is part of the [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and an overview of the content.
 
 ## Integrate with client applications
 
 As mentioned in the previous section of the Tutorial, the recommended way to create and manage users is by means of another charm: the [Data Integrator Charm](https://charmhub.io/data-integrator). This lets us to encode users directly in the Juju model, and - as shown in the following - rotate user credentials with and without application downtime using Relations.
 
-> Relations, or what Juju documentation describes also as [Integrations](https://juju.is/docs/sdk/integration), let two charms to exchange information and interact with one another. Creating a relation between Kafka and the Data Integrator will automatically generate a username, password, and assign read/write permissions on a given topic. This is the simplest method to create and manage users in Charmed Kafka.
+[note]
+Relations, or what Juju documentation also describes as [Integrations](https://juju.is/docs/sdk/integration), let two charms exchange information and interact with one another. Creating a relation between Apache Kafka and the Data Integrator will automatically generate a username and password, and assign read/write permissions on a given topic. This is the simplest method to create and manage users in Charmed Apache Kafka.
+[/note]
 
 ### Data Integrator charm
 
-The [Data Integrator charm](https://charmhub.io/data-integrator) is a bare-bones charm for central management of database users, providing support for different kinds of data platforms (e.g. MongoDB, MySQL, PostgreSQL, Kafka, OpenSearch, etc.) with a consistent, opinionated and robust user experience. To deploy the Data Integrator charm we can use the command `juju deploy` we have learned above:
+The [Data Integrator charm](https://charmhub.io/data-integrator) is a bare-bones charm for central management of database users, providing support for different kinds of data platforms (e.g. MongoDB, MySQL, PostgreSQL, Apache Kafka, OpenSearch, etc.) with a consistent, opinionated and robust user experience.
To deploy the Data Integrator charm we can use the command `juju deploy` we have learned above:
 
 ```shell
 juju deploy data-integrator --channel stable --config topic-name=test-topic --config extra-user-roles=producer,consumer
@@ -21,9 +23,9 @@ Located charm "data-integrator" in charm-hub, revision 11
 Deploying "data-integrator" from charm-hub charm "data-integrator", revision 11 in channel stable on jammy
 ```
 
-### Relate to Kafka
+### Relate to Apache Kafka
 
-Now that the Database Integrator Charm has been set up, we can relate it to Kafka. This will automatically create a username, password, and database for the Database Integrator Charm. Relate the two applications with:
+Now that the Database Integrator Charm has been set up, we can relate it to Apache Kafka. This will automatically create a username, password, and database for the Database Integrator Charm. Relate the two applications with:
 
 ```shell
 juju relate data-integrator kafka
@@ -87,7 +89,7 @@ Save the value listed under `bootstrap-server`, `username` and `password`. *(Not
 
 ### Produce/consume messages
 
-We will now use the username and password to produce some messages to Kafka. To do so, we will first deploy the Kafka Test App (available [here](https://charmhub.io/kafka-test-app)): a test charm that also bundles some python scripts to push data to Kafka, e.g.
+We will now use the username and password to produce some messages to Apache Kafka. To do so, we will first deploy the [Apache Kafka Test App](https://charmhub.io/kafka-test-app): a test charm that also bundles some Python scripts to push data to Apache Kafka, e.g.:
 
 ```shell
 juju deploy kafka-test-app -n1 --channel edge
 ```
 
 and make sure that the Python virtual environment libraries are visible:
 
 ```shell
 export PYTHONPATH="/var/lib/juju/agents/unit-kafka-test-app-0/charm/venv:/var/lib/juju/agents/unit-kafka-test-app-0/charm/lib"
 ```
 
-Once this is setup, you should be able to use the `client.py` script that exposes some functionality to produce and consume messages.
+Once this is set up, you should be able to use the `client.py` script that exposes some functionality to produce and consume messages.
 You can explore the usage of the script
 
 ```shell
 python3 -m charms.kafka.v0.client --help
 
 usage: client.py [-h] [-t TOPIC] [-u USERNAME] [-p PASSWORD] [-c CONSUMER_GROUP_PREFIX] [-s SERVERS] [-x SECURITY_PROTOCOL] [-n NUM_MESSAGES] [-r REPLICATION_FACTOR] [--num-partitions NUM_PARTITIONS]
                  [--producer] [--consumer] [--cafile-path CAFILE_PATH] [--certfile-path CERTFILE_PATH] [--keyfile-path KEYFILE_PATH] [--mongo-uri MONGO_URI] [--origin ORIGIN]
 
-Handler for running a Kafka client
+Handler for running an Apache Kafka client
 
 options:
   -h, --help            show this help message and exit
@@ -169,25 +171,27 @@ python3 -m charms.kafka.v0.client \
 
 ### Charm client applications
 
-Actually, the Data Integrator is only a very special client charm, that implements the `kafka_client` relation for exchanging data with the Kafka charm and user management via relations.
+In fact, the Data Integrator is just a special client charm that implements the `kafka_client` relation for exchanging data with the Apache Kafka charm and managing users via relations.
 
-For example, the steps above for producing and consuming messages to Kafka have also been implemented in the `kafka-test-app` charm (that also implement the `kafka_client` relation) providing a fully integrated charmed user-experience, where producing/consuming messages can simply be achieved using relations.
+For example, the steps above for producing and consuming messages to Apache Kafka have also been implemented in the `kafka-test-app` charm (which also implements the `kafka_client` relation), providing a fully integrated charmed user experience, where producing/consuming messages can simply be achieved using relations.
 
 #### Producing messages
 
-To produce messages to Kafka, we need to configure the `kafka-test-app` to act as a producer, publishing messages to a specific topic:
+To produce messages to Apache Kafka, we need to configure the `kafka-test-app` to act as a producer, publishing messages to a specific topic:
 
 ```shell
 juju config kafka-test-app topic_name=test_kafka_app_topic role=producer num_messages=20
 ```
 
-To start producing messages to Kafka, we **JUST** simply relate the Kafka Test App with Kafka
+To start producing messages to Apache Kafka, we simply relate the Apache Kafka Test App with Apache Kafka:
 
 ```shell
 juju relate kafka-test-app kafka
 ```
 
-> **Note**: This will both take care of creating a dedicated user (as much as done for the data-integrator) as well as start a producer process publishing messages to the `test_kafka_app_topic` topic, basically automating what was done before by hands.
+[note]
+This will both create a dedicated user (just as for the data-integrator) and start a producer process publishing messages to the `test_kafka_app_topic` topic, basically automating what was done before by hand.
+[/note]
 
 After some time, the `juju status` output should show
 
@@ -226,8 +230,8 @@ Note that the `kafka-test-app` charm can also similarly be used to consume messa
 juju config kafka-test-app topic_name=test_kafka_app_topic role=consumer consumer_group_prefix=cg
 ```
 
-After configuring the Kafka Test App, just relate it again with the Kafka charm. This will again create a new user and start the consumer process.
+After configuring the Apache Kafka Test App, just relate it again with the Apache Kafka charm. This will again create a new user and start the consumer process.
 
 ## What's next?
 
-In the next section, we will learn how to rotate and manage the passwords for the Kafka users, both the admin one and the ones managed by the Data Integrator.
\ No newline at end of file
+In the next section, we will learn how to rotate and manage the passwords for the Apache Kafka users, both the admin one and the ones managed by the Data Integrator.
\ No newline at end of file
diff --git a/docs/tutorial/t-setup-environment.md b/docs/tutorial/t-setup-environment.md
index 06c3999d..0bc4261a 100644
--- a/docs/tutorial/t-setup-environment.md
+++ b/docs/tutorial/t-setup-environment.md
@@ -1,27 +1,28 @@
-This is part of the [Charmed Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and the overview of the content.
+This is part of the [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571). Please refer to this page for more information and an overview of the content.
## Setup the environment
 
-For this tutorial, we will need to setup the environment with two main components:
+For this tutorial, we will need to set up the environment with two main components:
+
 * LXD that is a simple and lightweight virtual machine provisioner
-* Juju that will help us to deploy and manage Kafka and related applications
+* Juju that will help us to deploy and manage Apache Kafka and related applications
 
 ### Prepare LXD
 
-The fastest, simplest way to get started with Charmed Kafka is to set up a local LXD cloud. LXD is a system container and virtual machine manager; Charmed Kafka will be run in one of these containers and managed by Juju. While this tutorial covers the basics of LXD, you can [explore more LXD here](https://linuxcontainers.org/lxd/getting-started-cli/). LXD comes pre-installed on Ubuntu 20.04 LTS. Verify that LXD is installed by entering the command `which lxd` into the command line, this will output:
+The fastest, simplest way to get started with Charmed Apache Kafka is to set up a local LXD cloud. LXD is a system container and virtual machine manager; Charmed Apache Kafka will be run in one of these containers and managed by Juju. While this tutorial covers the basics of LXD, you can [explore more LXD here](https://linuxcontainers.org/lxd/getting-started-cli/). LXD comes pre-installed on Ubuntu 20.04 LTS. Verify that LXD is installed by entering the command `which lxd` into the command line; this will output:
 
 ```
 /snap/bin/lxd
 ```
 
-Although LXD is already installed, we need to run `lxd init` to perform post-installation tasks. For this tutorial the default parameters are preferred and the network bridge should be set to have no IPv6 addresses, since Juju does not support IPv6 addresses with LXD:
+Although LXD is already installed, we need to run `lxd init` to perform post-installation tasks. For this tutorial, the default parameters are preferred and the network bridge should be set to have no IPv6 addresses since Juju does not support IPv6 addresses with LXD:
 
 ```shell
 lxd init --auto
 lxc network set lxdbr0 ipv6.address none
 ```
 
-You can list all LXD containers by entering the command `lxc list` in to the command line. Although at this point in the tutorial none should exist and you'll only see this as output:
+You can list all LXD containers by entering the command `lxc list` into the command line. However, at this point in the tutorial, none should exist and you'll only see this as output:
 
 ```
 +------+-------+------+------+------+-----------+
@@ -31,13 +32,13 @@ You can list all LXD containers by entering the command `lxc list` in to the com
 
 ### Install and prepare Juju
 
-[Juju](https://juju.is/) is an Operator Lifecycle Manager (OLM) for clouds, bare metal, LXD or Kubernetes. We will be using it to deploy and manage Charmed Kafka. As with LXD, Juju is installed from a snap package:
+[Juju](https://juju.is/) is an Operator Lifecycle Manager (OLM) for clouds, bare metal, LXD or Kubernetes. We will be using it to deploy and manage Charmed Apache Kafka. As with LXD, Juju is installed from a snap package:
 
 ```shell
 sudo snap install juju --channel 3.1/stable
 ```
 
-Juju already has a built-in knowledge of LXD and how it works, so there is no additional setup or configuration needed. A controller will be used to deploy and control Charmed Kafka. All we need to do is run the following command to bootstrap a Juju controller named ‘overlord’ to LXD. This bootstrapping processes can take several minutes depending on how provisioned (RAM, CPU, etc.)
your machine is: +Juju already has built-in knowledge of LXD and how it works, so there is no additional setup or configuration needed. A controller will be used to deploy and control Charmed Apache Kafka. All we need to do is run the following command to bootstrap a Juju controller named ‘overlord’ to LXD. This bootstrapping process can take several minutes depending on how provisioned (RAM, CPU, etc.) your machine is: ```shell juju bootstrap localhost overlord --agent-version 3.1.6 @@ -55,7 +56,7 @@ The Juju controller should exist within an LXD container. You can verify this by where `` is a unique combination of numbers and letters such as `9d7e4e-0` -The controller can work with different models; models host applications such as Charmed Kafka. Set up a specific model for Charmed Kafka named ‘tutorial’: +The controller can work with different models; models host applications such as Charmed Apache Kafka. Set up a specific model for Charmed Apache Kafka named ‘tutorial’: ```shell juju add-model tutorial From 2db877b97428a1ab0f648e9c5a250380cd91fd95 Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Sat, 30 Nov 2024 20:57:38 +0000 Subject: [PATCH 02/14] 'modified: docs/explanation/e-cluster-configuration.md,docs/explanation/e-hardening.md' --- docs/explanation/e-cluster-configuration.md | 4 ++-- docs/explanation/e-hardening.md | 12 ++++++------ 2 files changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/explanation/e-cluster-configuration.md b/docs/explanation/e-cluster-configuration.md index 5e9f6709..1e5c170d 100644 --- a/docs/explanation/e-cluster-configuration.md +++ b/docs/explanation/e-cluster-configuration.md @@ -1,11 +1,11 @@ -# Overview of a cluster configuration content +# Cluster configuration [Apache Kafka](https://kafka.apache.org) is an open-source distributed event streaming platform that requires an external solution to coordinate and sync metadata between all active brokers. One of such solutions is [Apache ZooKeeper](https://zookeeper.apache.org). Here are some of the responsibilities of Apache ZooKeeper in an Apache Kafka cluster: -- **Cluster membership**: through regular heartbeats, it keeps tracks of the brokers entering and leaving the cluster, providing an up-to-date list of brokers. +- **Cluster membership**: through regular heartbeats, it keeps track of the brokers entering and leaving the cluster, providing an up-to-date list of brokers. - **Controller election**: one of the Apache Kafka brokers is responsible for managing the leader/follower status for all the partitions. Apache ZooKeeper is used to elect a controller and to make sure there is only one of it. - **Topic configuration**: each topic can be replicated on multiple partitions. Apache ZooKeeper keeps track of the locations of the partitions and replicas so that high availability is still attained when a broker shuts down. Topic-specific configuration overrides (e.g. message retention and size) are also stored in Apache ZooKeeper. - **Access control and authentication**: Apache ZooKeeper stores access control lists (ACL) for Apache Kafka resources, to ensure only the proper, authorized, users or groups can read or write on each topic. 
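+
+For a concrete feel of this metadata layout, the znodes can be inspected with the stock `zkCli.sh` shell shipped with the upstream Apache ZooKeeper distribution. The sketch below is illustrative only: the server address is hypothetical, the paths shown follow the upstream default znode layout, and deployments that store Apache Kafka metadata under a chroot need the corresponding path prefix:
+
+```shell
+# Open an interactive session against an Apache ZooKeeper server (address is an example)
+zkCli.sh -server 10.0.0.11:2181
+
+# Inside the session: list the ids of the brokers currently registered
+ls /brokers/ids
+
+# Show which broker currently holds the controller role
+get /controller
+
+# Show the per-topic configuration overrides for a hypothetical topic "my-topic"
+get /config/topics/my-topic
+```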
diff --git a/docs/explanation/e-hardening.md b/docs/explanation/e-hardening.md index 88dab405..955173a2 100644 --- a/docs/explanation/e-hardening.md +++ b/docs/explanation/e-hardening.md @@ -57,21 +57,21 @@ Juju user credentials must be stored securely and rotated regularly to limit the In the following, we provide guidance on how to harden your deployment using: -1. Operating System -2. Apache Kafka and Apache ZooKeeper Security Upgrades +1. Base images +2. Charmed operator security upgrades 3. Encryption 4. Authentication -5. Monitoring and Auditing +5. Monitoring and auditing -### Operating System +### Base images Charmed Apache Kafka and Charmed Apache ZooKeeper currently run on top of Ubuntu 22.04. Deploy a [Landscape Client Charm](https://charmhub.io/landscape-client?) in order to connect the underlying VM to a Landscape User Account to manage security upgrades and integrate Ubuntu Pro subscriptions. -### Apache Kafka and Apache ZooKeeper Security Upgrades +### Charmed operator security upgrades Charmed Apache Kafka and Charmed Apache ZooKeeper operators install a pinned revision of the [Charmed Apache Kafka snap](https://snapcraft.io/charmed-kafka) -and [Charmed ZooKeeper snap](https://snapcraft.io/charmed-zookeeper), respectively, in order to provide reproducible and secure environments. +and [Charmed ZooKeeper snap](https://snapcraft.io/charmed-zookeeper), respectively, to provide reproducible and secure environments. New versions of Charmed Apache Kafka and Charmed Apache ZooKeeper may be released to provide patching of vulnerabilities (CVEs). It is important to refresh the charm regularly to make sure the workload is as secure as possible. For more information on how to refresh the charm, see the [how-to upgrade](https://charmhub.io/kafka/docs/h-upgrade) guide. From 62955a4cb968cdf4e417244cde7fe215e40148f8 Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Sun, 1 Dec 2024 00:54:28 +0000 Subject: [PATCH 03/14] 'modified: docs/how-to/h-cluster-migration.md,docs/how-to/h-enable-monitoring.md,docs/how-to/h-enable-encryption.md,docs/how-to/h-deploy.md,docs/explanation/e-hardening.md,docs/index.md,docs/explanation/e-security.md' --- docs/explanation/e-hardening.md | 4 +- docs/explanation/e-security.md | 4 +- docs/how-to/h-cluster-migration.md | 15 ++++-- docs/how-to/h-deploy.md | 79 ++++++++++++++++++------------ docs/how-to/h-enable-encryption.md | 2 +- docs/how-to/h-enable-monitoring.md | 6 ++- docs/index.md | 2 +- 7 files changed, 68 insertions(+), 44 deletions(-) diff --git a/docs/explanation/e-hardening.md b/docs/explanation/e-hardening.md index 955173a2..73a680a4 100644 --- a/docs/explanation/e-hardening.md +++ b/docs/explanation/e-hardening.md @@ -108,8 +108,8 @@ Refer to How-To user guide for more information on: * [how to integrate the Charmed Apache Kafka deployment with COS](/t/charmed-kafka-how-to-enable-monitoring/10283) * [how to customise the alerting rules and dashboards](/t/charmed-kafka-documentation-how-to-integrate-custom-alerting-rules-and-dashboards/13431) -External user access to Apache Kafka is logged to the `kafka-authorizer.log` that is pushes to [Loki endpoint](https://charmhub.io/loki-k8s) and exposed via [Grafana](https://charmhub.io/grafana), both components being part of the COS stack. -Access denials are logged at INFO level, whereas allowed accesses are logged at DEBUG level. 
Depending on the auditing needs,
 customize the logging level either for all logs via the [`log_level`](https://charmhub.io/kafka/configurations?channel=3/stable#log_level) config option or only tune the logging level of the `authorizerAppender` in the `log4j.properties` file.
 Refer to the Reference documentation, for more information about the [file system paths](/t/charmed-kafka-documentation-reference-file-system-paths/13262).
diff --git a/docs/explanation/e-security.md b/docs/explanation/e-security.md
index 9b1e195a..627c423e 100644
--- a/docs/explanation/e-security.md
+++ b/docs/explanation/e-security.md
@@ -34,7 +34,7 @@ charms, snaps and rocks are published using the workflows of their respective re
 
 All repositories in GitHub are set up with branch protection rules, requiring:
 
-* new commits to be merged to main branches via Pull-Request with at least 2 approvals from repository maintainers
+* new commits to be merged to main branches via pull request with at least 2 approvals from repository maintainers
 * new commits to be signed (e.g. using GPG keys)
 * developers to sign the [Canonical Contributor License Agreement (CLA)](https://ubuntu.com/legal/contributors)
 
@@ -62,7 +62,7 @@ Encryption at rest is currently not supported, although it can be provided by th
 
 ## Authentication
 
-In the Charmed Apache Kafka solution, authentication layers can be enabled for
+In the Charmed Apache Kafka solution, authentication layers can be enabled for:
 
 1. Apache ZooKeeper connections
 2. Apache Kafka inter-broker communication
diff --git a/docs/how-to/h-cluster-migration.md b/docs/how-to/h-cluster-migration.md
index 1da37f60..9be051f2 100644
--- a/docs/how-to/h-cluster-migration.md
+++ b/docs/how-to/h-cluster-migration.md
@@ -18,15 +18,19 @@ In short, MirrorMaker runs as a distributed service on the new cluster, and cons
 
 ## Pre-requisites
 
-- An existing Apache Kafka cluster to migrate from
-- A bootstrapped Juju VM machine cloud running Charmed Apache Kafka to migrate to
-  - A tutorial on how to set up a Charmed Apache Kafka deployment can be found as part of the [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571)
+To migrate a cluster we need:
+
+- An "old" existing Apache Kafka cluster to migrate from.
+  - The cluster needs to be reachable from/to the new Apache Kafka cluster.
+- A bootstrapped Juju VM cloud running Charmed Apache Kafka to migrate to. For guidance on how to deploy a new Charmed Apache Kafka cluster, see:
+  - The [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571)
+  - The [How to deploy guide](/t/charmed-apache-kafka-documentation-how-to-deploy/13261) for Charmed Apache Kafka K8s
 - The CLI tool `yq` - https://github.com/mikefarah/yq
   - `snap install yq --channel=v3/stable`
 
-## Getting Charmed Apache Kafka cluster details and admin credentials
+## Getting cluster details and admin credentials
 
-By design, the `kafka` charm will not expose any available connections until related to by a client. In this case, we deploy `data-integrator` charms and relating them to each `kafka` application, requesting `admin` level privileges:
+By design, the `kafka` charm will not expose any available connections until a client application is related to it. In this case, we deploy `data-integrator` charms and relate them to each `kafka` application, requesting `admin` level privileges:
In this case, we deploy `data-integrator` charms and relating them to each `kafka` application, requesting `admin` level privileges: +By design, the `kafka` charm will not expose any available connections until related by a client. In this case, we deploy `data-integrator` charms and relate them to each `kafka` application, requesting `admin` level privileges: ```bash juju deploy data-integrator --channel=edge -n 1 --config extra-user-roles="admin" --config topic-name="default" @@ -172,6 +176,7 @@ curl 10.248.204.198:9099/metrics | grep records_count ## Switching client traffic Once happy with data migration, stop all active consumer applications on the original cluster and redirect them to the new Charmed Apache Kafka cluster, making sure to use the Charmed Apache Kafka cluster server addresses and authentication. After doing so, they will re-join their original consumer groups at the last committed offset it had originally, and continue consuming as normal. + Finally, the producer client applications can be stopped, updated with the Charmed Apache Kafka cluster server addresses and authentication, and restarted, with any newly produced messages being received by the migrated consumer client applications, completing the migration of both the data, and the client applications. ## Stopping MirrorMaker replication diff --git a/docs/how-to/h-deploy.md b/docs/how-to/h-deploy.md index 28c7f14d..54bdbbdf 100644 --- a/docs/how-to/h-deploy.md +++ b/docs/how-to/h-deploy.md @@ -1,5 +1,9 @@ # How to deploy Charmed Apache Kafka +[note type="caution"] +For K8s Charmed Apache Kafka, see the [Charmed Apache Kafka K8s documentation](/t/charmed-kafka-k8s-documentation-how-to-deploy/13266) instead. +[/note] + To deploy a Charmed Apache Kafka cluster on a bare environment, it is necessary to: 1. Set up a Juju Controller @@ -13,55 +17,58 @@ If you already have a Juju controller and/or a Juju model, you can skip the asso ## Juju controller setup -Before deploying Apache Kafka, make sure you have a Juju controller accessible from +Make sure you have a Juju controller accessible from your local environment using the [Juju client snap](https://snapcraft.io/juju). -The properties of your current controller can be listed using `juju show-controller`. +List available controllers: Make sure that the controller's back-end cloud is **not** K8s. The cloud information can be retrieved with the following command ```commandline -juju show-controller | yq '.[].details.cloud' +juju list-controllers ``` -[note type="caution"] -If the cloud is `k8s`, please refer to the [Charmed Kafka K8s documentation](/t/charmed-kafka-k8s-documentation/10296) instead. -[/note] +Switch to another controller if needed: -You can find more information on how to bootstrap and configure a controller for different -clouds in the [Juju documentation](https://juju.is/docs/juju/manage-controllers#heading--bootstrap-a-controller). -Make sure you bootstrap a `machine` Juju controller. +```commandline +juju switch +``` + +If there are no suitable controllers, create a new one: + +```commandline +juju bootstrap +``` + +where `` -- the cloud to deploy controller to, e.g., `localhost`. For more information on how to set up a new cloud, see the [How to manage clouds](https:///t/1100) guide in Juju documentation. + +For more Juju controller setup guidance, see the [How to manage controllers](/t/1111) guide in Juju documentation. 
 ## Juju model setup
 
 You can create a new Juju model using
 
-```
+```commandline
 juju add-model <model-name>
 ```
 
-Alternatively, you can use a pre-existing Juju model and switch to it by running the following command:
+Alternatively, you can switch to any existing Juju model:
 
-```
+```commandline
 juju switch <model-name>
 ```
 
-Make sure that the model is **not** a `k8s` type. The type of the model
-can be obtained by
+Make sure that the model is of the correct type (not `k8s`):
 
-```
+```commandline
 juju show-model | yq '.[].type'
 ```
 
-[note type="caution"]
-If the model is `k8s`, please refer to the [Charmed Kafka K8s documentation](https://discourse.charmhub.io/t/charmed-kafka-k8s-documentation/10296) instead.
-[/note]
-
 ## Deploy Charmed Apache Kafka and Charmed Apache ZooKeeper
 
 The Apache Kafka and Apache ZooKeeper charms can both be deployed as follows:
 
-```shell
+```commandline
 $ juju deploy kafka --channel 3/stable -n  --trust
 $ juju deploy zookeeper --channel 3/stable -n 
```
@@ -69,44 +76,52 @@ $ juju deploy zookeeper --channel 3/stable -n 
 
 where `` and `` – the number of units to deploy for Apache Kafka and Apache ZooKeeper. We recommend values of at least `3` and `5` respectively.
 
 [note]
-The `--trust` option is needed for the Kafka application if NodePort is used. For more information about the trust options usage, see the [Juju documentation](/t/5476#heading--trust-an-application-with-a-credential).
+The `--trust` option is needed for the Apache Kafka application if NodePort is used. For more information about the trust options usage, see the [Juju documentation](/t/5476#heading--trust-an-application-with-a-credential).
 [/note]
 
-After this, it is necessary to connect them:
+Connect Apache ZooKeeper and Apache Kafka by relating/integrating the charms:
 
 ```shell
 $ juju relate kafka zookeeper
 ```
 
-Once all the units show as `active|idle` in the `juju status` output, the deployment
-should be ready to be used.
+Check the status of the deployment:
+
+```commandline
+juju status
+```
+
+The deployment should be complete once all the units show `active|idle` status.
 
 ## (Optional) Create an external admin user
 
 Charmed Apache Kafka aims to follow the _secure by default_ paradigm. As a consequence,
 after being deployed the Apache Kafka cluster won't expose any external listener.
 In fact, ports are only opened when client applications are related, also
-depending on the protocols to be used. Please refer to [this table](/t/charmed-kafka-documentation-reference-listeners/13264) for
-more information about the available listeners and protocols.
+depending on the protocols to be used.
+
+[note]
+For more information about the available listeners and protocols, please refer to [this table](/t/13264).
+[/note]
 
 It is, however, generally useful for most use cases to create a first admin user
 to manage the Apache Kafka cluster (either internally or externally).
 
 To create an admin user, deploy the [Data Integrator Charm](https://charmhub.io/data-integrator) with
-`extra-user-roles` set to `admin`
+`extra-user-roles` set to `admin`:
 
-```shell
+```commandline
 juju deploy data-integrator --channel stable --config topic-name=test-topic --config extra-user-roles=admin
 ```
 
-and relate to the Apache Kafka charm
+... and relate it to the Apache Kafka charm:
 
-```shell
+```commandline
 juju relate data-integrator kafka
 ```
 
-To retrieve authentication information such as the username, password, etc. 
use:
+To retrieve authentication information, such as the username and password, use:
 
-```shell
+```commandline
 juju run data-integrator/leader get-credentials
 ```
\ No newline at end of file
diff --git a/docs/how-to/h-enable-encryption.md b/docs/how-to/h-enable-encryption.md
index f404c3a0..b4e3fe3d 100644
--- a/docs/how-to/h-enable-encryption.md
+++ b/docs/how-to/h-enable-encryption.md
@@ -29,7 +29,7 @@ juju relate kafka:certificates 
 
 where `` is the name of the TLS certificate provider charm deployed.
 
 [note]
-If Apache Kafka and Apache ZooKeeper are already related, they will start renegotiating the relation to provide each other certificates and enable/open to correct ports/connections. Otherwise, relate them after the both relations with the ``.
+If Apache Kafka and Apache ZooKeeper are already related, they will start renegotiating the relation to provide each other certificates and enable/open the correct ports/connections. Otherwise, relate them only after both of them have been related to the ``.
 [/note]
 
 ## Manage keys
diff --git a/docs/how-to/h-enable-monitoring.md b/docs/how-to/h-enable-monitoring.md
index 97a085a7..f464da92 100644
--- a/docs/how-to/h-enable-monitoring.md
+++ b/docs/how-to/h-enable-monitoring.md
@@ -11,6 +11,8 @@ Since the Charmed Apache Kafka Operator is deployed directly on a cloud infrastr
 needed to offer the endpoints of the COS relations. The [offers-overlay](https://github.com/canonical/cos-lite-bundle/blob/main/overlays/offers-overlay.yaml)
 can be used, and this step is shown in the COS tutorial.
 
+## Offer interfaces via the COS controller
+
 Switch to COS K8s environment and offer COS interfaces to be cross-model related with Charmed Apache Kafka VM model:
 
 ```shell
@@ -22,6 +24,8 @@ juju offer loki:logging loki-logging
 juju offer prometheus:receive-remote-write prometheus-receive-remote-write
 ```
 
+## Consume offers via the Apache Kafka model
+
 Switch to Charmed Apache Kafka VM model, find offers and relate with them:
 
 ```shell
@@ -70,7 +74,7 @@ models, e.g. `` and ``.
 
 After this is complete, the monitoring COS stack should be up and running and ready to be used.
 
-### Connect Grafana web interface
+## Connect Grafana web interface
 
 To connect to the Grafana web interface, follow the [Browse dashboards](https://charmhub.io/topics/canonical-observability-stack/tutorials/install-microk8s#heading--browse-dashboards) section of the MicroK8s "Getting started" guide.
diff --git a/docs/index.md b/docs/index.md
index f26728e8..478694db 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -80,4 +80,4 @@ The Charmed Apache Kafka Operator is free software, distributed under the Apache
 1. [Explanation](explanation)
 1. [Security](explanation/e-security.md)
 1. [Hardening Guide](explanation/e-hardening.md)
-    1. [Overview of Cluster Configuration](explanation/e-cluster-configuration.md)
\ No newline at end of file
+    1. 
[Cluster configuration](explanation/e-cluster-configuration.md) \ No newline at end of file From 6298e311a222bdc1d58b2ddff87d39cffc3c2cf0 Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Sun, 1 Dec 2024 12:41:58 +0000 Subject: [PATCH 04/14] 'modified: docs/how-to/h-upgrade.md,docs/reference/r-listeners.md,docs/tutorial/t-setup-environment.md' --- docs/how-to/h-upgrade.md | 6 +++--- docs/reference/r-listeners.md | 2 +- docs/tutorial/t-setup-environment.md | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/how-to/h-upgrade.md b/docs/how-to/h-upgrade.md index 9266898f..69d364fe 100644 --- a/docs/how-to/h-upgrade.md +++ b/docs/how-to/h-upgrade.md @@ -8,7 +8,7 @@ Charm upgrades can include both upgrades of operator code (e.g. the revision use In general, the following guide only applies for in-place upgrades that involve (at most) minor version upgrade of Apache Kafka workload, e.g. between Apache Kafka 3.4.x to 3.5.x. Major workload upgrades are generally **NOT SUPPORTED**, and they should be carried out using [full cluster-to-cluster migrations](/t/charmed-kafka-how-to-cluster-migration/10951). -While upgrading an Apache Kafka cluster, do not perform any other major operations, including, but no limited to, the following: +While upgrading an Apache Kafka cluster, do not perform any other major operations, including, but not limited to, the following: 1. Adding or removing units 2. Creating or destroying new relations @@ -19,11 +19,11 @@ The concurrency with other operations is not supported, and it can lead the clus ## Minor upgrade process -When performing an in-place upgrade process, the full process is composed by the following high-level steps: +When performing an in-place upgrade process, the full process is composed of the following high-level steps: 1. **Collect** all necessary pre-upgrade information, necessary for a rollback (if ever needed) 2. **Prepare** the charm for the in-place upgrade, by running some preparatory tasks -3. **Upgrade** the charm and/or the workload. Once started, all units in a cluster will refresh the charm code and undergo a workload restart/update. The upgrade will be aborted if the unit upgrade has failed, requiring the admin user to rollback. +3. **Upgrade** the charm and/or the workload. Once started, all units in a cluster will refresh the charm code and undergo a workload restart/update. The upgrade will be aborted if the unit upgrade has failed, requiring the admin user to roll back. 4. **Post-upgrade checks** to make sure all units are in the proper state and the cluster is healthy. ### Step 1: Collect diff --git a/docs/reference/r-listeners.md b/docs/reference/r-listeners.md index 5abfcd67..9a6a4985 100644 --- a/docs/reference/r-listeners.md +++ b/docs/reference/r-listeners.md @@ -23,5 +23,5 @@ opened. | SSL_EXTERNAL | `trusted-ca` + `certificates` | SSL | `9094` | external | [note] -Since `cluster` is a peer-relation, the `SASL_INTERNAL` listener is always enabled. +Since `cluster` is a peer relation, the `SASL_INTERNAL` listener is always enabled. [/note] \ No newline at end of file diff --git a/docs/tutorial/t-setup-environment.md b/docs/tutorial/t-setup-environment.md index 0bc4261a..22399161 100644 --- a/docs/tutorial/t-setup-environment.md +++ b/docs/tutorial/t-setup-environment.md @@ -56,7 +56,7 @@ The Juju controller should exist within an LXD container. 
You can verify this by where `` is a unique combination of numbers and letters such as `9d7e4e-0` -The controller can work with different models; models host applications such as Charmed Apache Kafka. Set up a specific model for Charmed Apache Kafka named ‘tutorial’: +The controller can work with different models; models host applications such as Charmed Apache Kafka. Set up a specific model for Charmed Apache Kafka named `tutorial`: ```shell juju add-model tutorial From 2d84748ec1b18a1fcdf48ccb74797f1ffd47f72f Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Fri, 6 Dec 2024 20:24:44 +0000 Subject: [PATCH 05/14] 'modified: docs/how-to/h-upgrade.md,docs/explanation/e-security.md,docs/explanation/e-cluster-configuration.md,docs/explanation/e-hardening.md,docs/tutorial/t-setup-environment.md' --- docs/explanation/e-cluster-configuration.md | 2 +- docs/explanation/e-hardening.md | 10 +++++----- docs/explanation/e-security.md | 20 ++++++++++---------- docs/how-to/h-upgrade.md | 10 +++++----- docs/tutorial/t-setup-environment.md | 4 ++-- 5 files changed, 23 insertions(+), 23 deletions(-) diff --git a/docs/explanation/e-cluster-configuration.md b/docs/explanation/e-cluster-configuration.md index 1e5c170d..0be161d5 100644 --- a/docs/explanation/e-cluster-configuration.md +++ b/docs/explanation/e-cluster-configuration.md @@ -17,4 +17,4 @@ For a Charmed Apache Kafka related to a Charmed Apache ZooKeeper: - the list of the broker ids of the cluster can be found in `/kafka/brokers/ids` - the endpoint used to access the broker with id `0` can be found in `/kafka/brokers/ids/0` -- the credentials for the Charmed Apache Kafka users can be found in `/kafka/config/users` \ No newline at end of file +- the credentials for the Apache Kafka users can be found in `/kafka/config/users` \ No newline at end of file diff --git a/docs/explanation/e-hardening.md b/docs/explanation/e-hardening.md index 73a680a4..7cd1dcde 100644 --- a/docs/explanation/e-hardening.md +++ b/docs/explanation/e-hardening.md @@ -57,18 +57,18 @@ Juju user credentials must be stored securely and rotated regularly to limit the In the following, we provide guidance on how to harden your deployment using: -1. Base images -2. Charmed operator security upgrades +1. Operating systems +2. Security upgrades 3. Encryption 4. Authentication 5. Monitoring and auditing -### Base images +### Operating systems Charmed Apache Kafka and Charmed Apache ZooKeeper currently run on top of Ubuntu 22.04. Deploy a [Landscape Client Charm](https://charmhub.io/landscape-client?) in order to connect the underlying VM to a Landscape User Account to manage security upgrades and integrate Ubuntu Pro subscriptions. -### Charmed operator security upgrades +### Security upgrades Charmed Apache Kafka and Charmed Apache ZooKeeper operators install a pinned revision of the [Charmed Apache Kafka snap](https://snapcraft.io/charmed-kafka) and [Charmed ZooKeeper snap](https://snapcraft.io/charmed-zookeeper), respectively, to provide reproducible and secure environments. @@ -79,7 +79,7 @@ For more information on how to refresh the charm, see the [how-to upgrade](https ### Encryption Charmed Apache Kafka must be deployed with encryption enabled. -To do that, you need to relate Apache Kafka and Apache ZooKeeper charms to one of the TLS certificate operator charms. +To do that, you need to relate Charmed Apache Kafka and Charmed Apache ZooKeeper to one of the TLS certificate operator charms. 
Please refer to the [Charming Security page](https://charmhub.io/topics/security-with-x-509-certificates) for more information on how to select the right certificate provider for your use case. diff --git a/docs/explanation/e-security.md b/docs/explanation/e-security.md index 627c423e..9723786a 100644 --- a/docs/explanation/e-security.md +++ b/docs/explanation/e-security.md @@ -27,7 +27,7 @@ to also provide the community with the patched source code. ### GitHub -All Charmed Apache Kafka and Charmed Apache ZooKeeper artifacts are published and released +All Apache Kafka and Apache ZooKeeper artifacts built by Canonical are published and released programmatically using release pipelines implemented via GitHub Actions. Distributions are published as both GitHub and LaunchPad releases via the [central-uploader repository](https://github.com/canonical/central-uploader), while charms, snaps and rocks are published using the workflows of their respective repositories. @@ -40,18 +40,18 @@ All repositories in GitHub are set up with branch protection rules, requiring: ## Encryption -The Charmed Apache Kafka operator can be used to deploy a secure Apache Kafka cluster that provides encryption-in-transit capabilities out of the box +Charmed Apache Kafka can be used to deploy a secure Apache Kafka cluster that provides encryption-in-transit capabilities out of the box for: * Interbroker communications * Apache ZooKeeper connection * External client connection -To set up a secure connection Apache Kafka and Apache ZooKeeper applications need to be integrated with TLS Certificate Provider charms, e.g. +To set up a secure connection Charmed Apache Kafka and Charmed Apache ZooKeeper applications need to be integrated with TLS Certificate Provider charms, e.g. `self-signed-certificates` operator. CSRs are generated for every unit using `tls_certificates_interface` library that uses `cryptography` python library to create X.509 compatible certificates. The CSR is signed by the TLS Certificate Provider and returned to the units, and -stored in a password-protected Keystore file. The password of the Keystore is stored in Juju secrets starting from revision 168 on Apache Kafka -and revision 130 on Apache ZooKeeper. The relation provides also the certificate for the CA to be loaded in a password-protected Truststore file. +stored in a password-protected Keystore file. The password of the Keystore is stored in Juju secrets starting from revision 168 of Charmed Apache Kafka +and revision 130 of Charmed Apache ZooKeeper. The relation provides also the certificate for the CA to be loaded in a password-protected Truststore file. When encryption is enabled, hostname verification is turned on for client connections, including inter-broker communication. Cipher suite can be customized by providing a list of allowed cipher suite to be used for external clients and Apache ZooKeeper connections, using the charm config options @@ -62,7 +62,7 @@ Encryption at rest is currently not supported, although it can be provided by th ## Authentication -In the Charmed Apache Kafka solution, authentication layers can be enabled for: +In Charmed Apache Kafka, authentication layers can be enabled for: 1. Apache ZooKeeper connections 2. 
Apache Kafka inter-broker communication @@ -73,7 +73,7 @@ In the Charmed Apache Kafka solution, authentication layers can be enabled for: Authentication to Apache ZooKeeper is based on Simple Authentication and Security Layer (SASL) using digested MD5 hashes of username and password and implemented both for client-server (with Apache Kafka) and server-server communication. Username and passwords are exchanged using peer relations among Apache ZooKeeper units and using normal relations between Apache Kafka and Apache ZooKeeper. -Juju secrets are used for exchanging credentials starting from revision 168 on Apache Kafka and revision 130 on Apache ZooKeeper. +Juju secrets are used for exchanging credentials starting from revision 168 of Charmed Apache Kafka and revision 130 of Charmed Apache ZooKeeper. Username and password for the different users are stored in Apache ZooKeeper servers in a JAAS configuration file in plain format. Permission on the file is restricted to the root user. @@ -81,7 +81,7 @@ Permission on the file is restricted to the root user. ### Apache Kafka Inter-broker authentication Authentication among brokers is based on SCRAM-SHA-512 protocol. Username and passwords are exchanged -via peer relations, using Juju secrets from revision 168 on Charmed Apache Kafka. +via peer relations, using Juju secrets from revision 168 of Charmed Apache Kafka. Apache Kafka username and password used by brokers to authenticate one another are stored both in a Apache ZooKeeper zNode and in a JAAS configuration file in the Apache Kafka server in plain format. @@ -97,7 +97,7 @@ Clients can authenticate to Apache Kafka using: When using SCRAM, username and passwords are stored in Apache ZooKeeper to be used by the Apache Kafka processes, in peer-relation data to be used by the Apache Kafka charm and in external relation to be shared with client applications. -Starting from revision 168 on Charmed Apache Kafka, Juju secrets are used for storing the credentials in place of plain unencrypted text. +Starting from revision 168 of Charmed Apache Kafka, Juju secrets are used for storing the credentials in place of plain unencrypted text. -When using mTLS, client certificates are loaded into a `tls-certificates` operator and provided to the Apache Kafka charm via the plain-text unencrypted +When using mTLS, client certificates are loaded into a `tls-certificates` operator and provided to the Charmed Apache Kafka via the plain-text unencrypted relation. Certificates are stored in the password-protected Truststore file. \ No newline at end of file diff --git a/docs/how-to/h-upgrade.md b/docs/how-to/h-upgrade.md index 69d364fe..b82be3d9 100644 --- a/docs/how-to/h-upgrade.md +++ b/docs/how-to/h-upgrade.md @@ -13,7 +13,7 @@ While upgrading an Apache Kafka cluster, do not perform any other major operatio 1. Adding or removing units 2. Creating or destroying new relations 3. Changes in workload configuration -4. Upgrading other connected applications (e.g. Apache ZooKeeper) +4. Upgrading other connected applications (e.g. Charmed Apache ZooKeeper) The concurrency with other operations is not supported, and it can lead the cluster into inconsistent states. @@ -28,7 +28,7 @@ When performing an in-place upgrade process, the full process is composed of the ### Step 1: Collect -The first step is to record the revisions of the running application, as a safety measure for a rollback action if needed. 
To accomplish this, simply run the `juju status` command and look for the revisions of the deployed Apache Kafka and Apache ZooKeeper applications. You can also retrieve this with the following command (that requires [yq](https://snapcraft.io/install/yq/ubuntu) to be installed): +The first step is to record the revisions of the running application, as a safety measure for a rollback action if needed. To accomplish this, simply run the `juju status` command and look for the revisions of the deployed Charmed Apache Kafka and Charmed Apache ZooKeeper applications. You can also retrieve this with the following command (that requires [yq](https://snapcraft.io/install/yq/ubuntu) to be installed): ```shell KAFKA_CHARM_REVISION=$(juju status --format json | yq .applications..charm-rev) @@ -121,7 +121,7 @@ We strongly recommend to also retrieve the full set of logs with `juju debug-log ## Apache ZooKeeper upgrade -Although the previous steps focused on upgrading Apache Kafka, the same process can also be applied to Apache ZooKeeper. However, for revisions prior to XXX, a patch needs to be applied before running the aforementioned process. The Apache ZooKeeper process, as part of its operations, overwrites the `zoo.cfg` pinning the snap revision for the `dynamicConfigFile`. This may create problems in the upgrade if `snapd` removes the previous revision once the snap is refreshed. To prevent this, it is sufficient to replace the `` with `current`. +Although the previous steps focused on upgrading Charmed Apache Kafka, the same process can also be applied to Apache ZooKeeper. However, for revisions prior to XXX, a patch needs to be applied before running the aforementioned process. The Apache ZooKeeper process, as part of its operations, overwrites the `zoo.cfg` pinning the snap revision for the `dynamicConfigFile`. This may create problems in the upgrade if `snapd` removes the previous revision once the snap is refreshed. To prevent this, it is sufficient to replace the `` with `current`. To do so, on each unit, first apply the patch: @@ -139,6 +139,6 @@ Check that the server has started correctly, and then apply the patch to the nex Once all the units have been patched, proceed with the upgrade process, as outlined above. -## Apache Kafka and Apache ZooKeeper combined upgrades +## Combined upgrades -If Apache Kafka and Apache ZooKeeper charms need both to be upgraded, we recommend starting the upgrade from the Apache ZooKeeper cluster. As outlined above, the two upgrades should **NEVER** be done concurrently. \ No newline at end of file +If Charmed Apache Kafka and Charmed Apache ZooKeeper both need to be upgraded, we recommend starting the upgrade from the Charmed Apache ZooKeeper. As outlined above, the two upgrades should **NEVER** be done concurrently. \ No newline at end of file diff --git a/docs/tutorial/t-setup-environment.md b/docs/tutorial/t-setup-environment.md index 22399161..ff1c38f3 100644 --- a/docs/tutorial/t-setup-environment.md +++ b/docs/tutorial/t-setup-environment.md @@ -9,7 +9,7 @@ For this tutorial, we will need to set up the environment with two main componen ### Prepare LXD -The fastest, simplest way to get started with Charmed Apache Kafka is to set up a local LXD cloud. LXD is a system container and virtual machine manager; Charmed Apache Kafka will be run in one of these containers and managed by Juju. While this tutorial covers the basics of LXD, you can [explore more LXD here](https://linuxcontainers.org/lxd/getting-started-cli/). 
LXD comes pre-installed on Ubuntu 20.04 LTS. Verify that LXD is installed by entering the command `which lxd` into the command line, this will output: +The fastest, simplest way to get started with Charmed Apache Kafka is to set up a local LXD cloud. LXD is a system container and virtual machine manager; Apache Kafka will be run in one of these containers and managed by Juju. While this tutorial covers the basics of LXD, you can [explore more LXD here](https://linuxcontainers.org/lxd/getting-started-cli/). LXD comes pre-installed on Ubuntu 20.04 LTS. Verify that LXD is installed by entering the command `which lxd` into the command line, this will output: ``` /snap/bin/lxd @@ -32,7 +32,7 @@ You can list all LXD containers by entering the command `lxc list` into the comm ### Install and prepare Juju -[Juju](https://juju.is/) is an Operator Lifecycle Manager (OLM) for clouds, bare metal, LXD or Kubernetes. We will be using it to deploy and manage Charmed Apache Kafka. As with LXD, Juju is installed from a snap package: +[Juju](https://juju.is/) is an Operator Lifecycle Manager (OLM) for clouds, bare metal, LXD or Kubernetes. We will be using it to deploy and manage Apache Kafka. As with LXD, Juju is installed from a snap package: ```shell sudo snap install juju --channel 3.1/stable From 61917c89f34987e4bd2fb0d9f6eefca669d8d1c5 Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Fri, 6 Dec 2024 20:28:10 +0000 Subject: [PATCH 06/14] 'modified: docs/explanation/e-hardening.md' --- docs/explanation/e-hardening.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/explanation/e-hardening.md b/docs/explanation/e-hardening.md index 7cd1dcde..1704f532 100644 --- a/docs/explanation/e-hardening.md +++ b/docs/explanation/e-hardening.md @@ -57,13 +57,13 @@ Juju user credentials must be stored securely and rotated regularly to limit the In the following, we provide guidance on how to harden your deployment using: -1. Operating systems +1. Operating system 2. Security upgrades 3. Encryption 4. Authentication 5. Monitoring and auditing -### Operating systems +### Operating system Charmed Apache Kafka and Charmed Apache ZooKeeper currently run on top of Ubuntu 22.04. Deploy a [Landscape Client Charm](https://charmhub.io/landscape-client?) in order to connect the underlying VM to a Landscape User Account to manage security upgrades and integrate Ubuntu Pro subscriptions. 
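+
+A minimal sketch of this setup, assuming the default endpoints of the [landscape-client](https://charmhub.io/landscape-client) subordinate charm and that its Landscape account settings are configured separately:
+
+```commandline
+juju deploy landscape-client
+juju relate landscape-client kafka
+juju relate landscape-client zookeeper
+```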
From a40c1218d9fcb47bc42c6a80629927dbbc216c08 Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Sat, 7 Dec 2024 00:53:28 +0000 Subject: [PATCH 07/14] 'modified: docs/how-to/h-deploy-aws.md,docs/reference/r-performance-tuning.md,docs/how-to/h-deploy-azure.md,docs/explanation/e-cluster-configuration.md,docs/how-to/h-kraft-mode.md,docs/how-to/h-manage-units.md,docs/how-to/h-cluster-migration.md,docs/how-to/h-upgrade.md,docs/reference/r-listeners.md' --- docs/explanation/e-cluster-configuration.md | 2 +- docs/how-to/h-cluster-migration.md | 2 +- docs/how-to/h-deploy-aws.md | 2 +- docs/how-to/h-deploy-azure.md | 2 +- docs/how-to/h-kraft-mode.md | 6 +++--- docs/how-to/h-manage-units.md | 4 ++-- docs/how-to/h-upgrade.md | 2 +- docs/reference/r-listeners.md | 2 +- docs/reference/r-performance-tuning.md | 2 +- 9 files changed, 12 insertions(+), 12 deletions(-) diff --git a/docs/explanation/e-cluster-configuration.md b/docs/explanation/e-cluster-configuration.md index 0be161d5..81ee05ce 100644 --- a/docs/explanation/e-cluster-configuration.md +++ b/docs/explanation/e-cluster-configuration.md @@ -6,7 +6,7 @@ One of such solutions is [Apache ZooKeeper](https://zookeeper.apache.org). Here are some of the responsibilities of Apache ZooKeeper in an Apache Kafka cluster: - **Cluster membership**: through regular heartbeats, it keeps track of the brokers entering and leaving the cluster, providing an up-to-date list of brokers. -- **Controller election**: one of the Apache Kafka brokers is responsible for managing the leader/follower status for all the partitions. Apache ZooKeeper is used to elect a controller and to make sure there is only one of it. +- **Controller election**: one of the Kafka Brokers is responsible for managing the leader/follower status for all the partitions. Apache ZooKeeper is used to elect a controller and to make sure there is only one of it. - **Topic configuration**: each topic can be replicated on multiple partitions. Apache ZooKeeper keeps track of the locations of the partitions and replicas so that high availability is still attained when a broker shuts down. Topic-specific configuration overrides (e.g. message retention and size) are also stored in Apache ZooKeeper. - **Access control and authentication**: Apache ZooKeeper stores access control lists (ACL) for Apache Kafka resources, to ensure only the proper, authorized, users or groups can read or write on each topic. diff --git a/docs/how-to/h-cluster-migration.md b/docs/how-to/h-cluster-migration.md index 9be051f2..48efbf8e 100644 --- a/docs/how-to/h-cluster-migration.md +++ b/docs/how-to/h-cluster-migration.md @@ -22,7 +22,7 @@ To migrate a cluster we need: - An "old" existing Apache Kafka cluster to migrate from. - The cluster needs to be reachable from/to the new Apache Kafka cluster. -- A bootstrapped Juju VM cloud running Charmed Apache Kafka to migrate to. For guidance on how to deploy a new Charmed Kafka, see: +- A bootstrapped Juju VM cloud running Charmed Apache Kafka to migrate to. 
For guidance on how to deploy a new Charmed Apache Kafka, see: - The [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571) - The [How to deploy guide](/t/charmed-apache-kafka-documentation-how-to-deploy/13261) for Charmed Apache Kafka K8s - The CLI tool `yq` - https://github.com/mikefarah/yq diff --git a/docs/how-to/h-deploy-aws.md b/docs/how-to/h-deploy-aws.md index 8c18f96a..650743d9 100644 --- a/docs/how-to/h-deploy-aws.md +++ b/docs/how-to/h-deploy-aws.md @@ -116,7 +116,7 @@ juju integrate kafka zookeeper ``` [note type="caution"] -The smallest AWS instance types may not provide sufficient resources to host a Kafka broker. We recommend choosing an instance type with a minimum of `8` GB of RAM and `4` CPU cores, such as `m7i.xlarge`. +The smallest AWS instance types may not provide sufficient resources to host a Kafka Broker. We recommend choosing an instance type with a minimum of `8` GB of RAM and `4` CPU cores, such as `m7i.xlarge`. For more guidance on sizing production environments, see the [Requirements page](/t/charmed-kafka-reference-requirements/10563). Additional information about AWS instance types is available in the [AWS documentation](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#Instances:instanceState=running). [/note] diff --git a/docs/how-to/h-deploy-azure.md b/docs/how-to/h-deploy-azure.md index bf3ab9a4..d9f48139 100644 --- a/docs/how-to/h-deploy-azure.md +++ b/docs/how-to/h-deploy-azure.md @@ -162,7 +162,7 @@ juju integrate kafka zookeeper [note type="caution"] Note that the smallest instance types on Azure may not have enough resources for hosting -a Kafka broker. We recommend selecting an instance type that provides at the very least `8` GB of RAM and `4` cores, e.g. `Standard_A4_v2`. +a Kafka Broker. We recommend selecting an instance type that provides at the very least `8` GB of RAM and `4` cores, e.g. `Standard_A4_v2`. For more guidance on production environment sizing, see the [Requirements page](/t/charmed-kafka-reference-requirements/10563). You can find more information about the available instance types in the [Azure documentation](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/overview). [/note] diff --git a/docs/how-to/h-kraft-mode.md b/docs/how-to/h-kraft-mode.md index 4564df19..748b6bde 100644 --- a/docs/how-to/h-kraft-mode.md +++ b/docs/how-to/h-kraft-mode.md @@ -9,7 +9,7 @@ This guide provides step-by-step instructions to configure Kafka in [KRaft mode] ## Prerequisites -Follow the first steps of the [How to deploy Charmed Kafka](https://discourse.charmhub.io/t/charmed-kafka-documentation-how-to-deploy/13261) guide to set up the environment. Stop before deploying Charmed Kafka and continue with the instructions below. +Follow the first steps of the [How to deploy Charmed Apache Kafka](https://discourse.charmhub.io/t/charmed-kafka-documentation-how-to-deploy/13261) guide to set up the environment. Stop before deploying Charmed Apache Kafka and continue with the instructions below. ## Roles setup @@ -17,7 +17,7 @@ A new **role** has been introduced to the charm, named `controller`. 
The applica ### Single application deployment -To deploy Charmed Kafka in KRaft mode as a single application, assign both `controller` and `broker` roles to the application when using the `juju deploy` command: +To deploy Charmed Apache Kafka in KRaft mode as a single application, assign both `controller` and `broker` roles to the application when using the `juju deploy` command: ```shell juju deploy kafka --channel 3/edge --config roles="broker,controller" @@ -27,7 +27,7 @@ Once the unit is shown as `active|idle` in the `juju status` command output, the ### Multiple applications deployment -To deploy Charmed Kafka in KRaft mode as multiple applications, you need to split roles between applications. +To deploy Charmed Apache Kafka in KRaft mode as multiple applications, you need to split roles between applications. First, deploy the applications: a dedicated cluster controller and a broker cluster with relevant roles: ```shell diff --git a/docs/how-to/h-manage-units.md b/docs/how-to/h-manage-units.md index 9f3a7104..6755d00f 100644 --- a/docs/how-to/h-manage-units.md +++ b/docs/how-to/h-manage-units.md @@ -4,7 +4,7 @@ Unit management guide for scaling and running admin utility scripts. ## Replication and Scaling -Increasing the number of Apache Kafka brokers can be achieved by adding more units +Increasing the number of Apache Kafka Brokers can be achieved by adding more units to the Charmed Apache Kafka application, for example: ```shell @@ -80,7 +80,7 @@ argument. Note that `client.properties` may also refer to other files ( e.g. truststore and keystore for TLS-enabled connections). Those files also need to be accessible and correctly specified. -Commands can also be run within a Apache Kafka broker, since both the authentication +Commands can also be run within a Apache Kafka Broker, since both the authentication file (along with the truststore if needed) and the Charmed Apache Kafka snap are already present. diff --git a/docs/how-to/h-upgrade.md b/docs/how-to/h-upgrade.md index b82be3d9..258d3d62 100644 --- a/docs/how-to/h-upgrade.md +++ b/docs/how-to/h-upgrade.md @@ -50,7 +50,7 @@ juju run kafka/leader pre-upgrade-check Make sure that the output of the action is successful. [note] -This action must be run before Charmed Kafka upgrades. +This action must be run before Charmed Apache Kafka upgrades. [/note] The action will also configure the charm to minimize high-availability reduction and ensure a safe upgrade process. After successful execution, the charm is ready to be upgraded. diff --git a/docs/reference/r-listeners.md b/docs/reference/r-listeners.md index 9a6a4985..96123408 100644 --- a/docs/reference/r-listeners.md +++ b/docs/reference/r-listeners.md @@ -4,7 +4,7 @@ Charmed Apache Kafka comes with a set of listeners that can be enabled for inter- and intra-cluster communication. *Internal listeners* are used for internal traffic and exchange of information -between Apache Kafka brokers, whereas *external listeners* are used for external clients +between Apache Kafka Brokers, whereas *external listeners* are used for external clients to be optionally enabled based on the relations created on particular charm endpoints. Each listener is characterized by a specific port, scope and protocol. 
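+
+As a quick check on a running cluster, the charm's `get-listeners` action (defined in the charm's `actions.yaml`) reports which listeners are currently opened, for example:
+
+```commandline
+juju run kafka/leader get-listeners
+```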
diff --git a/docs/reference/r-performance-tuning.md b/docs/reference/r-performance-tuning.md index dfb40cc5..63889821 100644 --- a/docs/reference/r-performance-tuning.md +++ b/docs/reference/r-performance-tuning.md @@ -4,7 +4,7 @@ This section contains some suggested values to get a better performance from Cha ## Virtual memory handling (recommended) -Apache Kafka brokers make heavy use of the OS page cache to maintain performance. They never normally explicitly issue a command to ensure messages have been persisted to disk (`sync`), relying instead on the underlying OS to ensure that larger chunks (pages) of data are persisted from the page cache to the disk when the OS deems it efficient and/or necessary to do so. As such, there is a range of runtime kernel parameter tuning that is recommended to be set on machines running Apache Kafka to improve performance. +Apache Kafka Brokers make heavy use of the OS page cache to maintain performance. They never normally explicitly issue a command to ensure messages have been persisted to disk (`sync`), relying instead on the underlying OS to ensure that larger chunks (pages) of data are persisted from the page cache to the disk when the OS deems it efficient and/or necessary to do so. As such, there is a range of runtime kernel parameter tuning that is recommended to be set on machines running Apache Kafka to improve performance. To configure these settings, one can write them to `/etc/sysctl.conf` using `sudo echo $SETTING >> /etc/sysctl.conf`. Note that the settings shown below are simply sensible defaults that may not apply to every workload: ```bash From 1ef29530264a93679226868ebde5b70e69421693 Mon Sep 17 00:00:00 2001 From: Vladimir <48120135+izmalk@users.noreply.github.com> Date: Mon, 9 Dec 2024 18:05:21 +0000 Subject: [PATCH 08/14] Apache renaming --- CONTRIBUTING.md | 2 +- README.md | 16 ++++++++-------- actions.yaml | 2 +- config.yaml | 4 ++-- metadata.yaml | 6 +++--- 5 files changed, 15 insertions(+), 15 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 649f3804..85ae2ec1 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -71,4 +71,4 @@ tox # runs 'lint' and 'unit' environments ## Canonical Contributor Agreement -Canonical welcomes contributions to the Charmed Kafka Operator. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution. +Canonical welcomes contributions to the Charmed Apache Kafka Operator. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution. diff --git a/README.md b/README.md index 8c520aa4..1843cceb 100644 --- a/README.md +++ b/README.md @@ -14,7 +14,7 @@ The Charmed Operator can be found on [Charmhub](https://charmhub.io/kafka) and i - SASL/SCRAM auth for Broker-Broker and Client-Broker authentication enabled by default. - Access control management supported with user-provided ACL lists. -As currently Kafka requires a paired ZooKeeper deployment in production, this operator makes use of the [ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions. +As currently Apache Kafka requires a paired Apache ZooKeeper deployment in production, this operator makes use of the [ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions. 
### Features checklist @@ -33,7 +33,7 @@ The following are some of the most important planned features and their implemen ## Requirements -For production environments, it is recommended to deploy at least 5 nodes for Zookeeper and 3 for Kafka. +For production environments, it is recommended to deploy at least 5 nodes for Apache Zookeeper and 3 for Apache Kafka. The following requirements are meant to be for production environment: @@ -51,7 +51,7 @@ For more information on how to perform typical tasks, see the How to guides sect ### Deployment -The Kafka and ZooKeeper operators can both be deployed as follows: +The Apache Kafka and ZooKeeper operators can both be deployed as follows: ```shell $ juju deploy zookeeper -n 5 @@ -70,18 +70,18 @@ To watch the process, the `juju status` command can be used. Once all the units juju run-action kafka/leader get-admin-credentials --wait ``` -Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, e.g `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! The Kafka Charmed Operator provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client. +Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, e.g `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! The Apache Kafka Charmed Operator provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client. -For example, to list the current topics on the Kafka cluster, run the following command: +For example, to list the current topics on the Apache Kafka cluster, run the following command: ```shell BOOTSTRAP_SERVERS=$(juju run-action kafka/leader get-admin-credentials --wait | grep "bootstrap.servers" | cut -d "=" -f 2) juju ssh kafka/leader 'charmed-kafka.topics --bootstrap-server $BOOTSTRAP_SERVERS --list --command-config /var/snap/charmed-kafka/common/client.properties' ``` -Note that Charmed Apache Kafka cluster is secure-by-default: when no other application is related to Kafka, listeners are disabled, thus preventing any incoming connection. However, even for running the commands above, listeners must be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Kafka to enable listeners. +Note that Charmed Apache Kafka cluster is secure-by-default: when no other application is related to Apache Kafka, listeners are disabled, thus preventing any incoming connection. However, even for running the commands above, listeners must be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Charmed Apache Kafka to enable listeners. -Available Kafka bin commands can be found with: +Available Apache Kafka bin commands can be found with: ``` snap info charmed-kafka @@ -119,7 +119,7 @@ Use the same action without a password parameter to randomly generate a password Currently, the Charmed Apache Kafka Operator supports 1 or more storage volumes. A 10G storage volume will be installed by default for `log.dirs`. This is used for logs storage, mounted on `/var/snap/kafka/common` -When storage is added or removed, the Kafka service will restart to ensure it uses the new volumes. 
Additionally, log + charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Kafka will not assign partitions to new directories/units until existing topic partitions are assigned to it, or a new topic is created. +When storage is added or removed, the Apache Kafka service will restart to ensure it uses the new volumes. Additionally, log + charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Apache Kafka will not assign partitions to new directories/units until existing topic partitions are assigned to it, or a new topic is created. ## Relations diff --git a/actions.yaml b/actions.yaml index 6cc39eaa..05c2a239 100644 --- a/actions.yaml +++ b/actions.yaml @@ -28,7 +28,7 @@ set-tls-private-key: get-admin-credentials: description: Get administrator authentication credentials for client commands - The returned client_properties can be used for Kafka bin commands using `--bootstrap-server` and `--command-config` for admin level administration + The returned client_properties can be used for Apache Kafka bin commands using `--bootstrap-server` and `--command-config` for admin level administration This action must be called on the leader unit. get-listeners: diff --git a/config.yaml b/config.yaml index 81633fe5..7ef62f20 100644 --- a/config.yaml +++ b/config.yaml @@ -37,7 +37,7 @@ options: type: int default: 1073741824 message_max_bytes: - description: The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config. + description: The largest record batch size allowed by Apache Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config. type: int default: 1048588 offsets_topic_num_partitions: @@ -113,6 +113,6 @@ options: type: float default: 0.8 expose_external: - description: "String to determine how to expose the Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'" + description: "String to determine how to expose the Apache Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'" type: string default: "nodeport" diff --git a/metadata.yaml b/metadata.yaml index 15fdb422..c6a15364 100644 --- a/metadata.yaml +++ b/metadata.yaml @@ -3,12 +3,12 @@ name: kafka display-name: Apache Kafka description: | - Kafka is an event streaming platform. This charm deploys and operates Kafka on + Apache Kafka is an event streaming platform. This charm deploys and operates Apache Kafka on a VM machines environment. 
Apache Kafka is a free, open source software project by the Apache Software Foundation. - Users can find out more at the [Kafka project page](https://kafka.apache.org/). -summary: Charmed Kafka Operator + Users can find out more at the [Apache Kafka project page](https://kafka.apache.org/). +summary: Charmed Apache Kafka Operator docs: https://discourse.charmhub.io/t/charmed-kafka-documentation/10288 source: https://github.com/canonical/kafka-operator issues: https://github.com/canonical/kafka-operator/issues From 5ef725cff1fa669b47d3fa93f3b6843560507c90 Mon Sep 17 00:00:00 2001 From: Vladimir <48120135+izmalk@users.noreply.github.com> Date: Mon, 9 Dec 2024 18:18:06 +0000 Subject: [PATCH 09/14] Revert "Apache renaming" This reverts commit 1ef29530264a93679226868ebde5b70e69421693. --- CONTRIBUTING.md | 2 +- README.md | 16 ++++++++-------- actions.yaml | 2 +- config.yaml | 4 ++-- metadata.yaml | 6 +++--- 5 files changed, 15 insertions(+), 15 deletions(-) diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md index 85ae2ec1..649f3804 100644 --- a/CONTRIBUTING.md +++ b/CONTRIBUTING.md @@ -71,4 +71,4 @@ tox # runs 'lint' and 'unit' environments ## Canonical Contributor Agreement -Canonical welcomes contributions to the Charmed Apache Kafka Operator. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution. +Canonical welcomes contributions to the Charmed Kafka Operator. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution. diff --git a/README.md b/README.md index 1843cceb..8c520aa4 100644 --- a/README.md +++ b/README.md @@ -14,7 +14,7 @@ The Charmed Operator can be found on [Charmhub](https://charmhub.io/kafka) and i - SASL/SCRAM auth for Broker-Broker and Client-Broker authentication enabled by default. - Access control management supported with user-provided ACL lists. -As currently Apache Kafka requires a paired Apache ZooKeeper deployment in production, this operator makes use of the [ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions. +As currently Kafka requires a paired ZooKeeper deployment in production, this operator makes use of the [ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions. ### Features checklist @@ -33,7 +33,7 @@ The following are some of the most important planned features and their implemen ## Requirements -For production environments, it is recommended to deploy at least 5 nodes for Apache Zookeeper and 3 for Apache Kafka. +For production environments, it is recommended to deploy at least 5 nodes for Zookeeper and 3 for Kafka. The following requirements are meant to be for production environment: @@ -51,7 +51,7 @@ For more information on how to perform typical tasks, see the How to guides sect ### Deployment -The Apache Kafka and ZooKeeper operators can both be deployed as follows: +The Kafka and ZooKeeper operators can both be deployed as follows: ```shell $ juju deploy zookeeper -n 5 @@ -70,18 +70,18 @@ To watch the process, the `juju status` command can be used. Once all the units juju run-action kafka/leader get-admin-credentials --wait ``` -Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, e.g `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! 
The Apache Kafka Charmed Operator provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client. +Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, e.g `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! The Kafka Charmed Operator provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client. -For example, to list the current topics on the Apache Kafka cluster, run the following command: +For example, to list the current topics on the Kafka cluster, run the following command: ```shell BOOTSTRAP_SERVERS=$(juju run-action kafka/leader get-admin-credentials --wait | grep "bootstrap.servers" | cut -d "=" -f 2) juju ssh kafka/leader 'charmed-kafka.topics --bootstrap-server $BOOTSTRAP_SERVERS --list --command-config /var/snap/charmed-kafka/common/client.properties' ``` -Note that Charmed Apache Kafka cluster is secure-by-default: when no other application is related to Apache Kafka, listeners are disabled, thus preventing any incoming connection. However, even for running the commands above, listeners must be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Charmed Apache Kafka to enable listeners. +Note that Charmed Apache Kafka cluster is secure-by-default: when no other application is related to Kafka, listeners are disabled, thus preventing any incoming connection. However, even for running the commands above, listeners must be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Kafka to enable listeners. -Available Apache Kafka bin commands can be found with: +Available Kafka bin commands can be found with: ``` snap info charmed-kafka @@ -119,7 +119,7 @@ Use the same action without a password parameter to randomly generate a password Currently, the Charmed Apache Kafka Operator supports 1 or more storage volumes. A 10G storage volume will be installed by default for `log.dirs`. This is used for logs storage, mounted on `/var/snap/kafka/common` -When storage is added or removed, the Apache Kafka service will restart to ensure it uses the new volumes. Additionally, log + charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Apache Kafka will not assign partitions to new directories/units until existing topic partitions are assigned to it, or a new topic is created. +When storage is added or removed, the Kafka service will restart to ensure it uses the new volumes. Additionally, log + charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Kafka will not assign partitions to new directories/units until existing topic partitions are assigned to it, or a new topic is created. 
## Relations diff --git a/actions.yaml b/actions.yaml index 05c2a239..6cc39eaa 100644 --- a/actions.yaml +++ b/actions.yaml @@ -28,7 +28,7 @@ set-tls-private-key: get-admin-credentials: description: Get administrator authentication credentials for client commands - The returned client_properties can be used for Apache Kafka bin commands using `--bootstrap-server` and `--command-config` for admin level administration + The returned client_properties can be used for Kafka bin commands using `--bootstrap-server` and `--command-config` for admin level administration This action must be called on the leader unit. get-listeners: diff --git a/config.yaml b/config.yaml index 7ef62f20..81633fe5 100644 --- a/config.yaml +++ b/config.yaml @@ -37,7 +37,7 @@ options: type: int default: 1073741824 message_max_bytes: - description: The largest record batch size allowed by Apache Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config. + description: The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config. type: int default: 1048588 offsets_topic_num_partitions: @@ -113,6 +113,6 @@ options: type: float default: 0.8 expose_external: - description: "String to determine how to expose the Apache Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'" + description: "String to determine how to expose the Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'" type: string default: "nodeport" diff --git a/metadata.yaml b/metadata.yaml index c6a15364..15fdb422 100644 --- a/metadata.yaml +++ b/metadata.yaml @@ -3,12 +3,12 @@ name: kafka display-name: Apache Kafka description: | - Apache Kafka is an event streaming platform. This charm deploys and operates Apache Kafka on + Kafka is an event streaming platform. This charm deploys and operates Kafka on a VM machines environment. Apache Kafka is a free, open source software project by the Apache Software Foundation. - Users can find out more at the [Apache Kafka project page](https://kafka.apache.org/). -summary: Charmed Apache Kafka Operator + Users can find out more at the [Kafka project page](https://kafka.apache.org/). 
+summary: Charmed Kafka Operator docs: https://discourse.charmhub.io/t/charmed-kafka-documentation/10288 source: https://github.com/canonical/kafka-operator issues: https://github.com/canonical/kafka-operator/issues From 0dbb0230b6c3977a109bb734aa47e515a4027e9f Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Tue, 10 Dec 2024 18:06:49 +0000 Subject: [PATCH 10/14] 'modified: docs/how-to/h-enable-oauth.md,docs/tutorial/t-deploy.md,docs/how-to/h-enable-monitoring.md,docs/how-to/h-cluster-migration.md,docs/tutorial/t-manage-passwords.md,docs/reference/r-releases/r-rev156_126.md,docs/how-to/h-upgrade.md,docs/how-to/h-manage-units.md,docs/tutorial/t-enable-encryption.md,docs/how-to/h-deploy.md,docs/reference/r-statuses.md,docs/tutorial/t-overview.md,docs/tutorial/t-relate-kafka.md,docs/how-to/h-backup-restore-configuration.md' --- docs/how-to/h-backup-restore-configuration.md | 10 +++++----- docs/how-to/h-cluster-migration.md | 8 ++++---- docs/how-to/h-deploy.md | 6 +++--- docs/how-to/h-enable-monitoring.md | 2 +- docs/how-to/h-enable-oauth.md | 6 +++--- docs/how-to/h-manage-units.md | 2 +- docs/how-to/h-upgrade.md | 2 +- docs/reference/r-releases/r-rev156_126.md | 4 ++-- docs/reference/r-statuses.md | 4 ++-- docs/tutorial/t-deploy.md | 6 +++--- docs/tutorial/t-enable-encryption.md | 2 +- docs/tutorial/t-manage-passwords.md | 2 +- docs/tutorial/t-overview.md | 4 ++-- docs/tutorial/t-relate-kafka.md | 8 ++++---- 14 files changed, 33 insertions(+), 33 deletions(-) diff --git a/docs/how-to/h-backup-restore-configuration.md b/docs/how-to/h-backup-restore-configuration.md index 4c697d57..36508220 100644 --- a/docs/how-to/h-backup-restore-configuration.md +++ b/docs/how-to/h-backup-restore-configuration.md @@ -1,10 +1,10 @@ # Configuration backup and restore -Charmed Apache Kafka's configuration is distributed using [Charmed Apache ZooKeeper](https://charmhub.io/zookeeper?channel=3/stable). -A Charmed Apache ZooKeeper backup can be stored on any S3-compatible storage. -S3 access and configurations are managed with the [`s3-integrator` charm](https://charmhub.io/s3-integrator). +Apache Kafka configuration is distributed using Apache ZooKeeper. +An Apache ZooKeeper backup can be stored on any S3-compatible storage. +S3 access and configurations are managed with the [`s3-integrator` charm](https://charmhub.io/s3-integrator), that can be integrated with Apache ZooKeeper. -This guide contains step-by-step instructions on how to deploy and configure the `s3-integrator` charm for [AWS S3](https://aws.amazon.com/s3/), send the configurations to the Charmed Apache ZooKeeper application, and finally manage your Charmed Apache ZooKeeper backups. +This guide contains step-by-step instructions on how to deploy and configure the `s3-integrator` charm for [AWS S3](https://aws.amazon.com/s3/), send the configurations to Charmed Apache ZooKeeper, and finally manage your Apache ZooKeeper backups. ## Configure `s3-integrator` @@ -53,7 +53,7 @@ Check that Charmed Apache ZooKeeper deployment with configurations set for S3 st juju run zookeeper/leader create-backup ``` -Charmed Apache ZooKeeper backups created with the command above will always be **full** backups: a copy of *all* the Charmed Apache Kafka configuration will be stored in S3. +Apache ZooKeeper backups created with the command above will always be **full** backups: a copy of *all* the Apache Kafka configuration will be stored in S3. 
The command will output the ID of the newly created backup: diff --git a/docs/how-to/h-cluster-migration.md b/docs/how-to/h-cluster-migration.md index 48efbf8e..fca114fc 100644 --- a/docs/how-to/h-cluster-migration.md +++ b/docs/how-to/h-cluster-migration.md @@ -24,7 +24,7 @@ To migrate a cluster we need: - The cluster needs to be reachable from/to the new Apache Kafka cluster. - A bootstrapped Juju VM cloud running Charmed Apache Kafka to migrate to. For guidance on how to deploy a new Charmed Apache Kafka, see: - The [Charmed Apache Kafka Tutorial](/t/charmed-kafka-tutorial-overview/10571) - - The [How to deploy guide](/t/charmed-apache-kafka-documentation-how-to-deploy/13261) for Charmed Apache Kafka K8s + - The [How to deploy guide](/t/charmed-apache-kafka-documentation-how-to-deploy/13261) for Charmed Apache Kafka - The CLI tool `yq` - https://github.com/mikefarah/yq - `snap install yq --channel=v3/stable` @@ -68,9 +68,9 @@ OLD_SASL_JAAS_CONFIG If using `SSL` or `SASL_SSL` authentication, review the configuration options supported by Kafka Connect in the [Apache Kafka documentation](https://kafka.apache.org/documentation/#connectconfigs) [/note] -## Generating `mm2.properties` file on the Charmed Apache Kafka cluster +## Generating `mm2.properties` file on the Apache Kafka cluster -MirrorMaker takes a `.properties` file for its configuration to fine-tune behaviour. See below an example `mm2.properties` file that can be placed on each of the Charmed Apache Kafka units using the above credentials: +MirrorMaker takes a `.properties` file for its configuration to fine-tune behaviour. See below an example `mm2.properties` file that can be placed on each of the Apache Kafka units using the above credentials: ```bash # Aliases for each cluster, can be set to any unique alias @@ -128,7 +128,7 @@ new.producer.acks=all # new.exactly.once.support = enabled ``` -Once these properties have been generated (in this example, saved to `/tmp/mm2.properties`), it is needed to place them on every Charmed Apache Kafka unit: +Once these properties have been generated (in this example, saved to `/tmp/mm2.properties`), it is needed to place them on every Apache Kafka unit: ```bash cat /tmp/mm2.properties | juju ssh kafka/ sudo -i 'sudo tee -a /var/snap/charmed-kafka/current/etc/kafka/mm2.properties' diff --git a/docs/how-to/h-deploy.md b/docs/how-to/h-deploy.md index 54bdbbdf..adae0809 100644 --- a/docs/how-to/h-deploy.md +++ b/docs/how-to/h-deploy.md @@ -66,20 +66,20 @@ juju show-model | yq '.[].type' ## Deploy Charmed Apache Kafka and Charmed Apache ZooKeeper -The Apache Kafka and Apache ZooKeeper charms can both be deployed as follows: +Charmed Apache Kafka and Charmed Apache ZooKeeper can both be deployed as follows: ```commandline $ juju deploy kafka --channel 3/stable -n --trust $ juju deploy zookeeper --channel 3/stable -n ``` -where `` and `` – the number of units to deploy for Apache Kafka and Apache ZooKeeper. We recommend values of at least `3` and `5` respectively. +where `` and `` – the number of units to deploy for Charmed Apache Kafka and Charmed Apache ZooKeeper. We recommend values of at least `3` and `5` respectively. [note] The `--trust` option is needed for the Apache Kafka application if NodePort is used. For more information about the trust options usage, see the [Juju documentation](/t/5476#heading--trust-an-application-with-a-credential). 
[/note] -Connect Apache ZooKeeper and Apache Kafka by relating/integrating the charms: +Connect Charmed Apache ZooKeeper and Charmed Apache Kafka by relating/integrating them: ```shell $ juju relate kafka zookeeper diff --git a/docs/how-to/h-enable-monitoring.md b/docs/how-to/h-enable-monitoring.md index f464da92..7a12a797 100644 --- a/docs/how-to/h-enable-monitoring.md +++ b/docs/how-to/h-enable-monitoring.md @@ -7,7 +7,7 @@ Additionally, the charm provides integration with the [Canonical Observability S Deploy the `cos-lite` bundle in a Kubernetes environment. This can be done by following the [deployment tutorial](https://charmhub.io/topics/canonical-observability-stack/tutorials/install-microk8s). -Since the Charmed Apache Kafka Operator is deployed directly on a cloud infrastructure environment, it is +Since the Charmed Apache Kafka is deployed directly on a cloud infrastructure environment, it is needed to offer the endpoints of the COS relations. The [offers-overlay](https://github.com/canonical/cos-lite-bundle/blob/main/overlays/offers-overlay.yaml) can be used, and this step is shown in the COS tutorial. diff --git a/docs/how-to/h-enable-oauth.md b/docs/how-to/h-enable-oauth.md index 13c62988..d765255b 100644 --- a/docs/how-to/h-enable-oauth.md +++ b/docs/how-to/h-enable-oauth.md @@ -2,7 +2,7 @@ Versions used for this integration example: - LXD (v5.21.1) - MicroK8s (v1.28.10) -- Apache Kafka charm: built from [this feature PR](https://github.com/canonical/kafka-operator/pull/168), which adds Hydra integration +- Charmed Apache Kafka: built from [this feature PR](https://github.com/canonical/kafka-operator/pull/168), which adds Hydra integration ## Initial deployment @@ -35,7 +35,7 @@ $ juju offer admin/iam.hydra:oauth $ juju offer admin/iam.self-signed-certificates:certificates ``` -Apache Kafka setup: +Charmed Apache Kafka setup: ```bash # On the lxd controller @@ -52,7 +52,7 @@ $ juju integrate kafka:certificates self-signed-certificates $ juju integrate zookeeper self-signed-certificates ``` -Once everything is settled, integrate Apache Kafka and Hydra: +Once everything is settled, integrate Charmed Apache Kafka and Hydra: ```bash # On the lxd model diff --git a/docs/how-to/h-manage-units.md b/docs/how-to/h-manage-units.md index 6755d00f..b9851390 100644 --- a/docs/how-to/h-manage-units.md +++ b/docs/how-to/h-manage-units.md @@ -18,7 +18,7 @@ It is important to note that when adding more units, the Apache Kafka cluster wi will be used only when new topics and new partitions are created. Partition reassignment can still be done manually by the admin user by using the -`charmed-kafka.reassign-partitions` Apache Kafka bin utility script. Please refer to +`charmed-kafka.reassign-partitions` Charmed Apache Kafka bin utility script. Please refer to its documentation for more information. [note type="caution"] diff --git a/docs/how-to/h-upgrade.md b/docs/how-to/h-upgrade.md index 258d3d62..ba417406 100644 --- a/docs/how-to/h-upgrade.md +++ b/docs/how-to/h-upgrade.md @@ -121,7 +121,7 @@ We strongly recommend to also retrieve the full set of logs with `juju debug-log ## Apache ZooKeeper upgrade -Although the previous steps focused on upgrading Charmed Apache Kafka, the same process can also be applied to Apache ZooKeeper. However, for revisions prior to XXX, a patch needs to be applied before running the aforementioned process. The Apache ZooKeeper process, as part of its operations, overwrites the `zoo.cfg` pinning the snap revision for the `dynamicConfigFile`. 
This may create problems in the upgrade if `snapd` removes the previous revision once the snap is refreshed. To prevent this, it is sufficient to replace the `` with `current`. +Although the previous steps focused on upgrading Charmed Apache Kafka, the same process can also be applied to Apache ZooKeeper. However, for revisions prior to 130, a patch needs to be applied before running the aforementioned process. The Apache ZooKeeper process, as part of its operations, overwrites the `zoo.cfg` pinning the snap revision for the `dynamicConfigFile`. This may create problems in the upgrade if `snapd` removes the previous revision once the snap is refreshed. To prevent this, it is sufficient to replace the `` with `current`. To do so, on each unit, first apply the patch: diff --git a/docs/reference/r-releases/r-rev156_126.md b/docs/reference/r-releases/r-rev156_126.md index 99754861..3b89c53d 100644 --- a/docs/reference/r-releases/r-rev156_126.md +++ b/docs/reference/r-releases/r-rev156_126.md @@ -52,8 +52,8 @@ More information about the artifacts is provided by the following table: ## Technical notes * A Charmed Apache Kafka cluster is secure by default, meaning that when deployed if there are no client charms related to it, external listeners will not be enabled. -* We recommend deploying one `data-integrator` with `extra-user-roles=admin` alongside the Apache Kafka deployment, in order to enable listeners and also create one user with elevated permission +* We recommend deploying one `data-integrator` with `extra-user-roles=admin` alongside the Charmed Apache Kafka deployment, in order to enable listeners and also create one user with elevated permission to perform administrative tasks. For more information, see the [How-to manage application](/t/charmed-kafka-documentation-how-to-manage-app/10285) guide. * The current release has been tested with Juju 2.9.45+ and Juju 3.1+ -* Inplace upgrade for charms tracking `latest` is not supported, both for Apache ZooKeeper and Apache Kafka charms. Perform data migration to upgrade to a Charmed Apache Kafka cluster managed via a `3/stable` charm. +* Inplace upgrade for charms tracking `latest` is not supported, both for Charmed Apache ZooKeeper and Charmed Apache Kafka charms. Perform data migration to upgrade to a Charmed Apache Kafka cluster managed via a `3/stable` charm. For more information on how to perform the migration, see [How-to migrate a cluster](/t/charmed-kafka-documentation-how-to-migrate-a-cluster/10951) guide. \ No newline at end of file diff --git a/docs/reference/r-statuses.md b/docs/reference/r-statuses.md index 06582754..b8e12263 100644 --- a/docs/reference/r-statuses.md +++ b/docs/reference/r-statuses.md @@ -15,8 +15,8 @@ The charm follows [standard Juju applications statuses](https://juju.is/docs/olm | **Blocked** | snap service not running | The charm failed to start the snap daemon processes | Check the Apache Kafka logs for insights on the issue | | **Blocked** | missing required zookeeper relation | Apache Kafka charm has not been connected to any ZooKeeper cluster | Relate to an Apache ZooKeeper charm | | **Blocked** | unit not connected to zookeeper | Although the relation is present, the unit has failed to connect to Apache ZooKeeper | Make sure that Apache Kafka and Apache ZooKeeper can connect and exchange data. When using encryption, make sure that certificates/ca are correctly setup. 
| -| **Blocked** | tls must be enabled on both kafka and zookeeper | Encryption (and relation with TLS-certificates operators) must be either enabled or disabled on both Apache Kafka and Apache ZooKeeper | Make sure that both Apache Kafka and Apache ZooKeeper either both use or neither of them use encryption. | -| **Waiting** | zookeeper credentials not created yet | Credentials are being created on Apache ZooKeeper, and Apache Kafka is waiting to receive them to connect to Apache ZooKeeper | | +| **Blocked** | tls must be enabled on both kafka and zookeeper | Encryption (and relation with TLS-certificates operators) must be either enabled or disabled on both Charmed Apache Kafka and Charmed Apache ZooKeeper | Make sure that both Charmed Apache Kafka and Charmed Apache ZooKeeper either both use or neither of them use encryption. | +| **Waiting** | zookeeper credentials not created yet | Credentials are being created on Charmed Apache ZooKeeper, and Charmed Apache Kafka is waiting to receive them to connect to Apache ZooKeeper | | | **Waiting** | internal broker credentials not yet added | Intra-broker credentials being created to enable communication and syncing among brokers belonging to the Apache Kafka clusters. | | | **Waiting** | unit waiting for signed certificates | Unit has requested a CSR request via the `certificates` relation and it is waiting to receive the signed certificate | | | **Maintenance** | | Charm is performing the internal maintenance (e.g. cluster re-configuration, upgrade, ...) | No actions required | diff --git a/docs/tutorial/t-deploy.md b/docs/tutorial/t-deploy.md index efc119d2..b473c24f 100644 --- a/docs/tutorial/t-deploy.md +++ b/docs/tutorial/t-deploy.md @@ -15,14 +15,14 @@ After this, it is necessary to connect them: $ juju relate kafka zookeeper ``` -Juju will now fetch Charmed Apache Kafka and Apache Zookeeper and begin deploying them to the LXD cloud. This process can take several minutes depending on how provisioned (RAM, CPU, etc) your machine is. You can track the progress by running: +Juju will now fetch Charmed Apache Kafka and Charmed Apache Zookeeper and begin deploying them to the LXD cloud. This process can take several minutes depending on how provisioned (RAM, CPU, etc) your machine is. You can track the progress by running: ```shell juju status --watch 1s ``` This command is useful for checking the status of Charmed Apache ZooKeeper and Charmed Apache Kafka and gathering information about the machines hosting the two applications. Some of the helpful information it displays includes IP addresses, ports, state, etc. -The command updates the status of the cluster every second and as the application starts you can watch the status and messages of Charmed Apache Kafka and Apache ZooKeeper change. +The command updates the status of the cluster every second and as the application starts you can watch the status and messages of Charmed Apache Kafka and Charmed Apache ZooKeeper change. Wait until the application is ready - when it is ready, `juju status --watch 1s` will show: @@ -142,4 +142,4 @@ snap info charmed-kafka However, although the commands above can run within the cluster, it is generally recommended during operations to enable external listeners and use these for running the admin commands from outside the cluster. -To do so, as we will see in the next section, we will deploy a [data-integrator](https://charmhub.io/data-integrator) charm and relate it to Apache Kafka. 
\ No newline at end of file +To do so, as we will see in the next section, we will deploy a [data-integrator](https://charmhub.io/data-integrator) charm and relate it to Charmed Apache Kafka. \ No newline at end of file diff --git a/docs/tutorial/t-enable-encryption.md b/docs/tutorial/t-enable-encryption.md index 40ad804a..3fd9ef26 100644 --- a/docs/tutorial/t-enable-encryption.md +++ b/docs/tutorial/t-enable-encryption.md @@ -57,7 +57,7 @@ telnet 9093 Once the Apache Kafka cluster is enabled to use encrypted connection, client applications should be configured as well to connect to the correct port as well as trust the self-signed CA provided by the `self-signed-certificates` charm. -Make sure that the `kafka-test-app` is not connected to the Apache Kafka charm, by removing the relation if it exists +Make sure that the `kafka-test-app` is not connected to the Charmed Apache Kafka, by removing the relation if it exists ```shell juju remove-relation kafka-test-app kafka diff --git a/docs/tutorial/t-manage-passwords.md b/docs/tutorial/t-manage-passwords.md index 16fdc376..675febdb 100644 --- a/docs/tutorial/t-manage-passwords.md +++ b/docs/tutorial/t-manage-passwords.md @@ -103,7 +103,7 @@ Unlike Admin management, the password management for external Apache Kafka users #### Retrieve the password -Similarly to the Apache Kafka application, also the `data-integrator` exposes an action to retrieve the credentials, e.g. +Similarly to the Charmed Apache Kafka, the `data-integrator` also exposes an action to retrieve the credentials, e.g. ```shell juju run data-integrator/leader get-credentials diff --git a/docs/tutorial/t-overview.md b/docs/tutorial/t-overview.md index 7b010554..9a085ae0 100644 --- a/docs/tutorial/t-overview.md +++ b/docs/tutorial/t-overview.md @@ -8,7 +8,7 @@ Through this tutorial, you will learn a variety of operations, everything from a In this tutorial, we will walk through how to: - Set up your environment using LXD and Juju. -- Deploy Apache Kafka using a couple of commands. +- Deploy Charmed Apache Kafka using a couple of commands. - Get the admin credentials directly. - Add high availability with replication. - Change the admin password. @@ -32,7 +32,7 @@ Before we start, make sure your machine meets the following requirements: Here’s an overview of the steps required with links to our separate tutorials that deal with each individual step: * [Set up the environment](/t/charmed-kafka-tutorial-setup-environment/10575) -* [Deploy Apache Kafka](/t/charmed-kafka-tutorial-deploy-kafka/10567) +* [Deploy Charmed Apache Kafka](/t/charmed-kafka-tutorial-deploy-kafka/10567) * [Integrate with client applications](/t/charmed-kafka-tutorial-relate-kafka/10573) * [Manage passwords](/t/charmed-kafka-tutorial-manage-passwords/10569) * [Enable encryption](/t/charmed-kafka-documentation-tutorial-enable-security/12043) diff --git a/docs/tutorial/t-relate-kafka.md b/docs/tutorial/t-relate-kafka.md index 635f8601..b78b8a51 100644 --- a/docs/tutorial/t-relate-kafka.md +++ b/docs/tutorial/t-relate-kafka.md @@ -23,9 +23,9 @@ Located charm "data-integrator" in charm-hub, revision 11 Deploying "data-integrator" from charm-hub charm "data-integrator", revision 11 in channel stable on jammy ``` -### Relate to Apache Kafka +### Relate to Charmed Apache Kafka -Now that the Database Integrator Charm has been set up, we can relate it to Apache Kafka. This will automatically create a username, password, and database for the Database Integrator Charm. 
Relate the two applications with: +Now that the Database Integrator charm has been set up, we can relate it to Charmed Apache Kafka. This will automatically create a username, password, and database for the Database Integrator charm. Relate the two applications with: ```shell juju relate data-integrator kafka @@ -171,7 +171,7 @@ python3 -m charms.kafka.v0.client \ ### Charm client applications -Actually, the Data Integrator is only a very special client charm, that implements the `kafka_client` relation for exchanging data with the Apache Kafka charm and user management via relations. +Actually, the Data Integrator is only a very special client charm, that implements the `kafka_client` relation for exchanging data with Charmed Apache Kafka and user management via relations. For example, the steps above for producing and consuming messages to Apache Kafka have also been implemented in the `kafka-test-app` charm (that also implements the `kafka_client` relation) providing a fully integrated charmed user experience, where producing/consuming messages can simply be achieved using relations. @@ -230,7 +230,7 @@ Note that the `kafka-test-app` charm can also similarly be used to consume messa juju config kafka-test-app topic_name=test_kafka_app_topic role=consumer consumer_group_prefix=cg ``` -After configuring the Apache Kafka Test App, just relate it again with the Apache Kafka charm. This will again create a new user and start the consumer process. +After configuring the Apache Kafka Test App, just relate it again with the Charmed Apache Kafka. This will again create a new user and start the consumer process. ## What's next? From 12850fee2dbe97817e2f8d426cadddf1901b1554 Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Tue, 17 Dec 2024 22:48:48 +0000 Subject: [PATCH 11/14] 'modified: docs/explanation/e-hardening.md,docs/explanation/e-security.md' --- docs/explanation/e-hardening.md | 2 +- docs/explanation/e-security.md | 4 ++-- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/explanation/e-hardening.md b/docs/explanation/e-hardening.md index 1704f532..2c30b902 100644 --- a/docs/explanation/e-hardening.md +++ b/docs/explanation/e-hardening.md @@ -47,7 +47,7 @@ virtual machines, networks, storages, etc. Please refer to the references below #### Juju users -It is very important that the different juju users are set up with minimal permission depending on the scope of their operations. +It is very important that the different Juju users are set up with minimal permissions depending on the scope of their operations. Please refer to the [User access levels](https://juju.is/docs/juju/user-permissions) documentation for more information on the access level and corresponding abilities that the different users can be granted. diff --git a/docs/explanation/e-security.md b/docs/explanation/e-security.md index 9723786a..f3c9d3b4 100644 --- a/docs/explanation/e-security.md +++ b/docs/explanation/e-security.md @@ -5,7 +5,7 @@ This document describes cryptography used by Charmed Apache Kafka. ## Resource checksums Every version of the Charmed Apache Kafka and Charmed Apache ZooKeeper operators install a pinned revision of the Charmed Apache Kafka snap -and Charmed Apache ZooKeeper, respectively, in order to +and Charmed Apache ZooKeeper, respectively, to provide reproducible and secure environments. 
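The pinned revision actually installed on a deployed unit can be verified directly with snap tooling. This is a minimal sketch, assuming the application names `kafka` and `zookeeper` used elsewhere in this documentation:

```shell
# Show the installed charmed-kafka snap, including its pinned revision, on a broker unit
juju ssh kafka/0 -- snap list charmed-kafka

# Likewise for the charmed-zookeeper snap on an Apache ZooKeeper unit
juju ssh zookeeper/0 -- snap list charmed-zookeeper
```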
The [Charmed Apache Kafka snap](https://snapstore.io/charmed-kafka) and [Charmed Apache ZooKeeper snap](https://snapstore.io/charmed-zookeeper) package the Apache Kafka and Apache ZooKeeper workload together with a set of dependencies and utilities required by the lifecycle of the operators (see [Charmed Apache Kafka snap contents](https://github.com/canonical/charmed-kafka-snap/blob/3/edge/snap/snapcraft.yaml) and [Charmed Apache ZooKeeper snap contents](https://github.com/canonical/charmed-zookeeper-snap/blob/3/edge/snap/snapcraft.yaml)). @@ -66,7 +66,7 @@ In Charmed Apache Kafka, authentication layers can be enabled for: 1. Apache ZooKeeper connections 2. Apache Kafka inter-broker communication -3. Apache Kafka Clients +3. Apache Kafka clients ### Apache Kafka authentication to Apache ZooKeeper From 89778e16a2b05fcc581ab2a30a5f3c094c0e19b8 Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Wed, 18 Dec 2024 13:29:39 +0000 Subject: [PATCH 12/14] 'modified: docs/index.md,docs/explanation/e-cluster-configuration.md // new: docs/reference/r-releases/r-rev195_149.md' --- docs/explanation/e-cluster-configuration.md | 2 +- docs/index.md | 1 + docs/reference/r-releases/r-rev195_149.md | 109 ++++++++++++++++++++ 3 files changed, 111 insertions(+), 1 deletion(-) create mode 100644 docs/reference/r-releases/r-rev195_149.md diff --git a/docs/explanation/e-cluster-configuration.md b/docs/explanation/e-cluster-configuration.md index 81ee05ce..0be161d5 100644 --- a/docs/explanation/e-cluster-configuration.md +++ b/docs/explanation/e-cluster-configuration.md @@ -6,7 +6,7 @@ One of such solutions is [Apache ZooKeeper](https://zookeeper.apache.org). Here are some of the responsibilities of Apache ZooKeeper in an Apache Kafka cluster: - **Cluster membership**: through regular heartbeats, it keeps track of the brokers entering and leaving the cluster, providing an up-to-date list of brokers. -- **Controller election**: one of the Kafka Brokers is responsible for managing the leader/follower status for all the partitions. Apache ZooKeeper is used to elect a controller and to make sure there is only one of it. +- **Controller election**: one of the Apache Kafka brokers is responsible for managing the leader/follower status for all the partitions. Apache ZooKeeper is used to elect a controller and to make sure there is only one of it. - **Topic configuration**: each topic can be replicated on multiple partitions. Apache ZooKeeper keeps track of the locations of the partitions and replicas so that high availability is still attained when a broker shuts down. Topic-specific configuration overrides (e.g. message retention and size) are also stored in Apache ZooKeeper. - **Access control and authentication**: Apache ZooKeeper stores access control lists (ACL) for Apache Kafka resources, to ensure only the proper, authorized, users or groups can read or write on each topic. diff --git a/docs/index.md b/docs/index.md index 478694db..3b7adade 100644 --- a/docs/index.md +++ b/docs/index.md @@ -70,6 +70,7 @@ The Charmed Apache Kafka Operator is free software, distributed under the Apache 1. [Release Notes](reference/r-releases) 1. [Revision 156/126](reference/r-releases/r-rev156_126.md) 1. [Revision 156/136](reference/r-releases/r-rev156_136.md) + 1. [Revision 195/149](reference/r-releases/r-rev195_149.md) 1. [File System Paths](reference/r-file-system-paths.md) 1. [Snap Entrypoints](reference/r-snap-entrypoints.md) 1. 
[Apache Kafka Listeners](reference/r-listeners.md) diff --git a/docs/reference/r-releases/r-rev195_149.md b/docs/reference/r-releases/r-rev195_149.md new file mode 100644 index 00000000..a31fd22b --- /dev/null +++ b/docs/reference/r-releases/r-rev195_149.md @@ -0,0 +1,109 @@ +# Revision 195/149 +Wed, Dec 18th, 2024 + +Dear community, + +We are pleased to report that we have just released a new updated version for the Charmed Apache Kafka bundles on the `3/stable` channel, +upgrading Charmed Apache ZooKeeper revision from 136 to 149, and Charmed Apache Kafka from 156 to 195. + +The new release comes with a number of new features from the charms, from Juju secrets support, OAuth/OIDC authentication support, various improvements in the UI/UX and dependencies upgrades. + +Please reach out should you have any question, comment, feedback or information. You can find us here in [Matrix](https://matrix.to/#/#charmhub-data-platform:ubuntu.com) or also on [Discourse](https://discourse.charmhub.io/). + +## Charmed Apache Kafka bundle + +New features and bug fixes in the Charmed Apache Kafka bundle: + +### Features + +* [DPE-2285] Refer to Charmhub space from GitHub (#200) +* [DPE-3333] Add integration test for broken tls (#188) +* [DPE-3721] chore: use tools-log4j.properties for run_bin_command (#201) +* [DPE-3735] Integration of custom alerting rules and dashboards (#180) +* [DPE-3780] Set workload version in install hook (#182) +* [DPE-3857] Test consistency between workload and metadata versions (#186) +* [DPE-3926] Enforce zookeeper client interface (#196) +* [DPE-3928] feat: secrets integration (#189) +* [DPE-5702] chore: Active Controllers alert set to == 0 (#252) +* [CSS-6503] Add OAuth support for non-charmed external clients (#168) +* [DPE-5757] Add `extra-listeners` config option (#269) + +### Bug fixes + +* [DPE-3880] Remove instance field from grafana dashboard (#191) +* [DPE-3880] Remove all instances of $job variable in dashboard (#181) +* [DPE-3900] Remove APT references (#183) +* [DPE-3932] Fix illegal character on matrix channel (#187) +* [DPE-4133] Do not change permissions on existing folders when reusing storage (#195) +* [DPE-4362] fix: alive, restart and alive handling (#202) +* [DPE-5757] fix: ensure certs are refreshed on SANs DNS changes (#276) + +### Other changes + +* [MISC] Test on juju 3.4 (#190) +* [MISC] Update package dependencies +* [DPE-3588] Release documentation update (#175) +* [MISC] CI improvements (#209) +* [DPE-3214] Release 3.6.1 (#179) +* [DPE-5565] Upgrade dataplatform libs to v38 +* [discourse-gatekeeper] Migrate charm docs (#210, #203, #198, #194, #192) +* [DPE-3932] Update information in metadata.yaml + +## Charmed Apache ZooKeeper bundle + +New features and bug fixes in the Charmed Apache ZooKeeper bundle: + +### Features + +* [DPE-2285] Refer to Charmhub space from GitHub (#143) +* [DPE-2597] Re use existing storage (#138) +* [DPE-3737] Implement ZK client interface (#142) +* [DPE-3782] Set workload version in install and config hooks (#130) +* [DPE-3857] Test consistency between workload and metadata versions (#136) +* [DPE-3869] Secrets in ZK (#129) +* [DPE-5626] chore: update ZooKeeper up alerting (#166) + +### Bug fixes + +* [DPE-3880] Remove job variable from dashboard (#134) +* [DPE-3900] Remove APT references (#131) +* [DPE-3932] Fix illegal character on matrix channel (#133, #135) +* [DPE-4183] fix: only handle quorum removal on relation-departed (#146) +* [DPE-4362] fix: alive, restart and alive handling (#145) + +### Other changes + +* 
[DPE-5565] Stable release upgrade +* chore: bump dp_libs ver (#147) +* [MISC] General update dependencies (#144) +* [MISC] Update CI to Juju 3.4 (#137) +* [DPE-3932] Update information in metadata.yaml +* [MISC] Update cryptography to 42.0.5 + +Canonical Data issues are now public on both [Jira](https://warthogs.atlassian.net/jira/software/c/projects/DPE/issues/) +and [GitHub](https://github.com/canonical/kafka-operator/issues) platforms. + +## Inside the charms + +Contents of the Charmed Apache Kafka and Charmed Apache ZooKeeper include: + +* Charmed Apache ZooKeeper is based on the [charmed-zookeeper snap](https://snapcraft.io/charmed-zookeeper) of the `3/stable` channel (Ubuntu LTS “22.04” - core22-based) that ships the Apache ZooKeeper [3.8.4-ubuntu0](https://launchpad.net/zookeeper-releases/3.x/3.8.4-ubuntu0), built and supported by Canonical +* Charmed Apache Kafka is based on the [charmed-kafka snap](https://snapcraft.io/charmed-kafka) of the `3/stable` channel (Ubuntu LTS “22.04” - core22-based) that ships the Apache Kafka [3.6.1-ubuntu0](https://launchpad.net/kafka-releases/3.x/3.6.1-ubuntu0), built and supported by Canonical +* Principal charms support the latest LTS series “22.04” only. + +More information about the artifacts are provided by the following table: + +| Artifact | Track/Series | Version/Revision | Code | +|-----------------------------------|--------------|------------------|---------------------------------------------------------------------------------------------------------------------| +| Apache ZooKeeper distribution | 3.x | 3.8.4-ubuntu0 | [78499c](https://git.launchpad.net/zookeeper-releases/tree/?h=lp-3.8.4&id=78499c9f4d4610f9fb963afdad1ffd1aab2a96b8) | +| Apache Kafka distribution | 3.x | 3.6.1-ubuntu0 | [db44db](https://git.launchpad.net/kafka-releases/tree/?h=lp-3.6.1&id=db44db1ebf870854dddfc3be0187a976b997d4dc) | +| Charmed Apache ZooKeeper snap | 3/stable | 34 | [13f3c6](https://github.com/canonical/charmed-zookeeper-snap/tree/13f3c620658fdc55b7d6745b81c7b5a00e042e10) | +| Charmed Apache ZooKeeper operator | 3/stable | 149 | [40576c](https://github.com/canonical/zookeeper-operator/commit/40576c1c87badd1e2352afc013ed0754808ef44c) | +| Charmed Apache Kafka snap | 3/stable | 37 | [c266f9](https://github.com/canonical/charmed-kafka-snap/tree/c266f9cd283408d2106d4682b67661205a12ea7f) | +| Charmed Apache Kafka operator | 3/stable | 195 | [7948df](https://github.com/canonical/kafka-operator/pull/241/commits/7948dfbbfaaa53fccc88beaa90f80de1e70beaa9) | + + +## Technical notes + +* [GitHub Releases](https://github.com/canonical/kafka-operator/releases) provide a detailed list of bugfixes, PRs, and commits for each revision. 
+* Upgrades from previous stable versions can be done with the standard upgrading process, as outlined in the [documentation](/t/charmed-kafka-documentation-how-to-upgrade/11814) \ No newline at end of file From a256b82cf189ce7cef8e6b935257f1c0bdf3f642 Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Wed, 18 Dec 2024 13:41:27 +0000 Subject: [PATCH 13/14] 'modified: docs/how-to/h-manage-units.md,docs/how-to/h-deploy-azure.md,docs/how-to/h-deploy-aws.md,docs/reference/r-performance-tuning.md,docs/reference/r-listeners.md' --- docs/how-to/h-deploy-aws.md | 2 +- docs/how-to/h-deploy-azure.md | 2 +- docs/how-to/h-manage-units.md | 4 ++-- docs/reference/r-listeners.md | 2 +- docs/reference/r-performance-tuning.md | 2 +- 5 files changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/how-to/h-deploy-aws.md b/docs/how-to/h-deploy-aws.md index 650743d9..b1289ea3 100644 --- a/docs/how-to/h-deploy-aws.md +++ b/docs/how-to/h-deploy-aws.md @@ -116,7 +116,7 @@ juju integrate kafka zookeeper ``` [note type="caution"] -The smallest AWS instance types may not provide sufficient resources to host a Kafka Broker. We recommend choosing an instance type with a minimum of `8` GB of RAM and `4` CPU cores, such as `m7i.xlarge`. +The smallest AWS instance types may not provide sufficient resources to host an Apache Kafka broker. We recommend choosing an instance type with a minimum of `8` GB of RAM and `4` CPU cores, such as `m7i.xlarge`. For more guidance on sizing production environments, see the [Requirements page](/t/charmed-kafka-reference-requirements/10563). Additional information about AWS instance types is available in the [AWS documentation](https://us-east-1.console.aws.amazon.com/ec2/home?region=us-east-1#Instances:instanceState=running). [/note] diff --git a/docs/how-to/h-deploy-azure.md b/docs/how-to/h-deploy-azure.md index d9f48139..05c3eb2b 100644 --- a/docs/how-to/h-deploy-azure.md +++ b/docs/how-to/h-deploy-azure.md @@ -162,7 +162,7 @@ juju integrate kafka zookeeper [note type="caution"] Note that the smallest instance types on Azure may not have enough resources for hosting -a Kafka Broker. We recommend selecting an instance type that provides at the very least `8` GB of RAM and `4` cores, e.g. `Standard_A4_v2`. +an Apache Kafka broker. We recommend selecting an instance type that provides at the very least `8` GB of RAM and `4` cores, e.g. `Standard_A4_v2`. For more guidance on production environment sizing, see the [Requirements page](/t/charmed-kafka-reference-requirements/10563). You can find more information about the available instance types in the [Azure documentation](https://learn.microsoft.com/en-us/azure/virtual-machines/sizes/overview). [/note] diff --git a/docs/how-to/h-manage-units.md b/docs/how-to/h-manage-units.md index b9851390..f38b3ea7 100644 --- a/docs/how-to/h-manage-units.md +++ b/docs/how-to/h-manage-units.md @@ -4,7 +4,7 @@ Unit management guide for scaling and running admin utility scripts. ## Replication and Scaling -Increasing the number of Apache Kafka Brokers can be achieved by adding more units +Increasing the number of Apache Kafka brokers can be achieved by adding more units to the Charmed Apache Kafka application, for example: ```shell @@ -80,7 +80,7 @@ argument. Note that `client.properties` may also refer to other files ( e.g. truststore and keystore for TLS-enabled connections). Those files also need to be accessible and correctly specified. 
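For illustration, an admin command using such a file could look like the following sketch. The `charmed-kafka.topics` entrypoint, the broker address, and the file path are assumptions here; substitute the entrypoints listed in the Snap Entrypoints reference and the values returned by the `get-admin-credentials` action:

```shell
# List existing topics as the admin user (illustrative address and path)
charmed-kafka.topics \
  --bootstrap-server 10.0.0.10:9092 \
  --command-config /path/to/client.properties \
  --list
```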
-Commands can also be run within a Apache Kafka Broker, since both the authentication +Commands can also be run within an Apache Kafka broker, since both the authentication file (along with the truststore if needed) and the Charmed Apache Kafka snap are already present. diff --git a/docs/reference/r-listeners.md b/docs/reference/r-listeners.md index 96123408..9a6a4985 100644 --- a/docs/reference/r-listeners.md +++ b/docs/reference/r-listeners.md @@ -4,7 +4,7 @@ Charmed Apache Kafka comes with a set of listeners that can be enabled for inter- and intra-cluster communication. *Internal listeners* are used for internal traffic and exchange of information -between Apache Kafka Brokers, whereas *external listeners* are used for external clients +between Apache Kafka brokers, whereas *external listeners* are used for external clients to be optionally enabled based on the relations created on particular charm endpoints. Each listener is characterized by a specific port, scope and protocol. diff --git a/docs/reference/r-performance-tuning.md b/docs/reference/r-performance-tuning.md index 63889821..dfb40cc5 100644 --- a/docs/reference/r-performance-tuning.md +++ b/docs/reference/r-performance-tuning.md @@ -4,7 +4,7 @@ This section contains some suggested values to get a better performance from Cha ## Virtual memory handling (recommended) -Apache Kafka Brokers make heavy use of the OS page cache to maintain performance. They never normally explicitly issue a command to ensure messages have been persisted to disk (`sync`), relying instead on the underlying OS to ensure that larger chunks (pages) of data are persisted from the page cache to the disk when the OS deems it efficient and/or necessary to do so. As such, there is a range of runtime kernel parameter tuning that is recommended to be set on machines running Apache Kafka to improve performance. +Apache Kafka brokers make heavy use of the OS page cache to maintain performance. They never normally explicitly issue a command to ensure messages have been persisted to disk (`sync`), relying instead on the underlying OS to ensure that larger chunks (pages) of data are persisted from the page cache to the disk when the OS deems it efficient and/or necessary to do so. As such, there is a range of runtime kernel parameter tuning that is recommended to be set on machines running Apache Kafka to improve performance. To configure these settings, one can write them to `/etc/sysctl.conf` using `sudo echo $SETTING >> /etc/sysctl.conf`. Note that the settings shown below are simply sensible defaults that may not apply to every workload: ```bash From 9a4fc873f2703719644f4bb69e1de6d957e157cc Mon Sep 17 00:00:00 2001 From: discourse-gatekeeper-docs-bot Date: Wed, 18 Dec 2024 14:03:04 +0000 Subject: [PATCH 14/14] 'modified: docs/tutorial/t-enable-encryption.md' --- docs/tutorial/t-enable-encryption.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/tutorial/t-enable-encryption.md b/docs/tutorial/t-enable-encryption.md index 3fd9ef26..a768e704 100644 --- a/docs/tutorial/t-enable-encryption.md +++ b/docs/tutorial/t-enable-encryption.md @@ -54,8 +54,8 @@ telnet 9093 ### Enable TLS encrypted connection -Once the Apache Kafka cluster is enabled to use encrypted connection, client applications should be configured as well to connect to -the correct port as well as trust the self-signed CA provided by the `self-signed-certificates` charm. 
+Once TLS is configured on the cluster side, client applications should also be configured to connect to +the correct port and to trust the self-signed CA provided by the `self-signed-certificates` charm. Make sure that the `kafka-test-app` is not connected to Charmed Apache Kafka by removing the relation if it exists
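```shell
juju remove-relation kafka-test-app kafka
```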