[DPE-6161] Document terminology decisions (#286)
Add the Terminology section to the CONTRIBUTING.md file.
Fix terminology in Readme files, CONTRIBUTING.md, and yaml specs.
Co-authored-by: deusebio <[email protected]>
izmalk authored Dec 19, 2024
1 parent 1c6b435 commit 723effa
Showing 5 changed files with 49 additions and 19 deletions.
36 changes: 33 additions & 3 deletions CONTRIBUTING.md
@@ -40,13 +40,13 @@ juju model-config logging-config="<root>=INFO;unit=DEBUG"
# Build the charm locally
charmcraft pack

# Deploy the latest ZooKeeper release
# Deploy the latest Apache ZooKeeper release
juju deploy zookeeper --channel edge -n 3

# Deploy the charm
juju deploy ./*.charm -n 3

# After ZooKeeper has initialised, relate the applications
# After Apache ZooKeeper has initialised, relate the applications
juju relate kafka zookeeper
```
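
A quick way to confirm readiness before and after relating (a minimal sketch using standard Juju commands; exact unit states and timing will vary):

```shell
# Apache ZooKeeper has initialised once all zookeeper units report active/idle
juju status zookeeper

# After relating, verify that the kafka <-> zookeeper relation is listed
juju status --relations
```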

@@ -69,6 +69,36 @@ tox run -e integration # integration tests
tox # runs 'lint' and 'unit' environments
```

## Documentation

Product documentation is stored in [Discourse](https://discourse.charmhub.io/t/charmed-apache-kafka-documentation/10288) and published on Charmhub and the Canonical website via the Discourse API.
The documentation in this repository, under the `docs` folder, is a mirror kept in sync by [Discourse Gatekeeper](https://github.com/canonical/discourse-gatekeeper), which automatically raises and updates a PR whenever the content on Discourse changes.
Although Discourse content can be edited directly, unless the modifications are trivial and obvious (typos, spelling, formatting), we generally recommend following a review process:

1. Create a branch (either in the main repo or in a fork) from the current `main` and modify documentation files as necessary.
2. Raise a PR against `main` to start the review process, and conduct the review within the PR.
3. Once the PR is approved and all comments are addressed, do NOT merge it directly. Instead, apply all the modifications to Discourse manually. If needed, create new Discourse topics and reference them in the navigation table of the main index file on Discourse.
4. Discourse Gatekeeper will raise a new PR, or add new commits to an open Gatekeeper PR, tracking the `discourse-gatekeeper/migrate` branch. The [sync_docs.yaml](https://github.com/canonical/kafka-operator/actions/workflows/sync_docs.yaml) GitHub Actions workflow runs the Gatekeeper integration (a) on a nightly schedule, (b) as part of pull request CI, and (c) on manual trigger (see the sketch after this list). If new topics are referenced in the main index file on Discourse, they are added to `docs/index.md` and pulled from Discourse.
5. Once Gatekeeper has raised a new PR or updated an existing one, close the initial PR created in step 2 with a comment referring to the Gatekeeper PR. If the initial PR referenced a ticket, add the ticket to the title or description of the Gatekeeper PR.
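
The manual trigger in step 4 can be kicked off with the GitHub CLI. This is a minimal sketch; it assumes `gh` is installed and authenticated against the repository, and the workflow file name comes from the link above:

```shell
# Manually trigger the Discourse Gatekeeper sync workflow on the main branch
gh workflow run sync_docs.yaml --ref main

# Follow the run that was just started
gh run watch
```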

### Terminology

Apache®, [Apache Kafka, Kafka®](https://kafka.apache.org/), [Apache ZooKeeper, ZooKeeper™](https://zookeeper.apache.org/) and their respective logos are either registered trademarks or trademarks of the [Apache Software Foundation](https://www.apache.org/) in the United States and/or other countries.

For documentation in this repository, the following conventions apply (see the table below).

| Full form | Alternatives | Incorrect examples |
| -------- | ------- | ------- |
| Apache Kafka | | Kafka |
| Charmed Apache Kafka | | Charmed Kafka |
| Kafka Connect | | Kafka connect |
| Apache Kafka brokers | | Kafka brokers, Apache Kafka Brokers |
| Apache Kafka cluster | | Charmed Apache Kafka cluster |

The full form must be used at least once per page and on its first occurrence in each of the page's headings, body text, callouts, and graphics.
For subsequent usage, the full form can be replaced by one of the alternatives.
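
As a rough aid for reviewers, incorrect short forms can be spotted with a search. This is an illustrative sketch only: it assumes GNU grep with PCRE support (`-P`) and will produce false positives, for example on Kafka Connect, which is an accepted form:

```shell
# Flag "Kafka" or "ZooKeeper" not immediately preceded by "Apache " in the docs
grep -rnP --include='*.md' '(?<!Apache )\b(Kafka|ZooKeeper)\b' docs/
```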

## Canonical Contributor Agreement

Canonical welcomes contributions to the Charmed Kafka Operator. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution.
Canonical welcomes contributions to Charmed Apache Kafka. Please check out our [contributor agreement](https://ubuntu.com/legal/contributors) if you're interested in contributing to the solution.
18 changes: 9 additions & 9 deletions README.md
@@ -14,7 +14,7 @@ The Charmed Operator can be found on [Charmhub](https://charmhub.io/kafka) and i
- SASL/SCRAM authentication for Broker-Broker and Client-Broker connections, enabled by default.
- Access control management supported with user-provided ACL lists.

As currently Kafka requires a paired ZooKeeper deployment in production, this operator makes use of the [ZooKeeper Operator](https://github.com/canonical/zookeeper-operator) for various essential functions.
As Apache Kafka currently requires a paired Apache ZooKeeper deployment in production, this operator makes use of [Charmed Apache ZooKeeper](https://github.com/canonical/zookeeper-operator) for various essential functions.

### Features checklist

@@ -33,7 +33,7 @@ The following are some of the most important planned features and their implemen

## Requirements

For production environments, it is recommended to deploy at least 5 nodes for Zookeeper and 3 for Kafka.
For production environments, it is recommended to deploy at least 5 nodes for Apache ZooKeeper and 3 for Apache Kafka.

The following requirements are meant for a production environment:

@@ -51,7 +51,7 @@ For more information on how to perform typical tasks, see the How to guides sect

### Deployment

The Kafka and ZooKeeper operators can both be deployed as follows:
Charmed Apache Kafka and Charmed Apache ZooKeeper can both be deployed as follows:

```shell
$ juju deploy zookeeper -n 5
@@ -70,18 +70,18 @@ To watch the process, the `juju status` command can be used. Once all the units
juju run-action kafka/leader get-admin-credentials --wait
```
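
The collapsed hunk above hides the middle of this block. Purely as an illustrative sketch (commands assembled from elsewhere on this page, not the file's exact contents), a typical end-to-end sequence is:

```shell
juju deploy zookeeper -n 5
juju deploy kafka -n 3
juju relate kafka zookeeper

# Once all units are active/idle, fetch the admin credentials
juju run-action kafka/leader get-admin-credentials --wait
```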

Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, e.g `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! The Kafka Charmed Operator provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client.
Apache Kafka ships with `bin/*.sh` commands to do various administrative tasks, e.g. `bin/kafka-config.sh` to update cluster configuration, `bin/kafka-topics.sh` for topic management, and many more! Charmed Apache Kafka provides these commands for administrators to run their desired cluster configurations securely with SASL authentication, either from within the cluster or as an external client.

For example, to list the current topics on the Kafka cluster, run the following command:
For example, to list the current topics on the Apache Kafka cluster, run the following command:

```shell
BOOTSTRAP_SERVERS=$(juju run-action kafka/leader get-admin-credentials --wait | grep "bootstrap.servers" | cut -d "=" -f 2)
juju ssh kafka/leader 'charmed-kafka.topics --bootstrap-server $BOOTSTRAP_SERVERS --list --command-config /var/snap/charmed-kafka/common/client.properties'
```

Note that Charmed Apache Kafka cluster is secure-by-default: when no other application is related to Kafka, listeners are disabled, thus preventing any incoming connection. However, even for running the commands above, listeners must be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Kafka to enable listeners.
Note that the Charmed Apache Kafka cluster is secure by default: when no other application is related to Charmed Apache Kafka, listeners are disabled, thus preventing any incoming connection. However, even for running the commands above, listeners must be enabled. If there are no other applications, you can deploy a `data-integrator` charm and relate it to Charmed Apache Kafka to enable listeners.
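
A minimal sketch of enabling listeners with `data-integrator` (the charm name comes from the text above; the config option names `topic-name` and `extra-user-roles` are assumptions and should be checked against the charm's documentation):

```shell
# Option names assumed; verify with `juju config data-integrator` after deploying
juju deploy data-integrator --config topic-name=test-topic --config extra-user-roles=admin
juju relate data-integrator kafka
```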

Available Kafka bin commands can be found with:
Available Charmed Apache Kafka bin commands can be found with:

```
snap info charmed-kafka
@@ -119,7 +119,7 @@ Use the same action without a password parameter to randomly generate a password
Currently, the Charmed Apache Kafka Operator supports one or more storage volumes. A 10G storage volume will be installed by default for `log.dirs`.
This is used for log storage, mounted at `/var/snap/kafka/common`.

When storage is added or removed, the Kafka service will restart to ensure it uses the new volumes. Additionally, log + charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Kafka will not assign partitions to new directories/units until existing topic partitions are assigned to it, or a new topic is created.
When storage is added or removed, the Apache Kafka service will restart to ensure it uses the new volumes. Additionally, log + charm status messages will prompt users to manually reassign partitions so that the new storage volumes are populated. By default, Apache Kafka will not assign partitions to new directories/units until existing topic partitions are assigned to it, or a new topic is created.
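
Adding a volume to a unit follows the standard Juju storage workflow. A hedged sketch (the storage name `data` is an assumption; check the charm's `metadata.yaml` for the name it actually declares):

```shell
# Add a 10G volume to a unit; replace "data" with the storage name from metadata.yaml
juju add-storage kafka/0 data=10G
```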

## Relations

@@ -270,4 +270,4 @@ Also, if you truly enjoy working on open-source projects like this one, check ou

## License

The Charmed Apache Kafka Operator is free software, distributed under the Apache Software License, version 2.0. See [LICENSE](https://github.com/canonical/kafka-operator/blob/main/LICENSE) for more information.
Charmed Apache Kafka is free software, distributed under the Apache Software License, version 2.0. See [LICENSE](https://github.com/canonical/kafka-operator/blob/main/LICENSE) for more information.
2 changes: 1 addition & 1 deletion actions.yaml
@@ -28,7 +28,7 @@ set-tls-private-key:

get-admin-credentials:
description: Get administrator authentication credentials for client commands
The returned client_properties can be used for Kafka bin commands using `--bootstrap-server` and `--command-config` for admin level administration
The returned client_properties can be used for Apache Kafka bin commands using `--bootstrap-server` and `--command-config` for admin level administration
This action must be called on the leader unit.

get-listeners:
6 changes: 3 additions & 3 deletions config.yaml
@@ -37,7 +37,7 @@ options:
type: int
default: 1073741824
message_max_bytes:
description: The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config.
description: The largest record batch size allowed by Apache Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config.
type: int
default: 1048588
offsets_topic_num_partitions:
@@ -81,7 +81,7 @@
type: int
default: 11
zookeeper_ssl_cipher_suites:
description: Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.
description: Specifies the enabled cipher suites to be used in Apache ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used.
type: string
default: ""
profile:
@@ -113,6 +113,6 @@ options:
type: float
default: 0.8
expose_external:
description: "String to determine how to expose the Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'"
description: "String to determine how to expose the Apache Kafka cluster externally from the Kubernetes cluster. Possible values: 'nodeport', 'none'"
type: string
default: "nodeport"
6 changes: 3 additions & 3 deletions metadata.yaml
@@ -3,12 +3,12 @@
name: kafka
display-name: Apache Kafka
description: |
Kafka is an event streaming platform. This charm deploys and operates Kafka on
Apache Kafka is an event streaming platform. This charm deploys and operates Apache Kafka on
a VM machine environment.
Apache Kafka is a free, open source software project by the Apache Software Foundation.
Users can find out more at the [Kafka project page](https://kafka.apache.org/).
summary: Charmed Kafka Operator
Users can find out more at the [Apache Kafka project page](https://kafka.apache.org/).
summary: Charmed Apache Kafka Operator
docs: https://discourse.charmhub.io/t/charmed-kafka-documentation/10288
source: https://github.com/canonical/kafka-operator
issues: https://github.com/canonical/kafka-operator/issues
