diff --git a/README.md b/README.md
index 6e7446e4..b604ef96 100644
--- a/README.md
+++ b/README.md
@@ -1,7 +1,7 @@
-
+
# Lightstreamer Kafka Connector
-_Extend Kafka topics to the web effortlessly. Stream real-time data to mobile and web apps, anywhere. Scale Kafka to millions of clients._
+_Last-mile data streaming. Stream real-time Kafka data to mobile and web apps, anywhere. Scale Kafka to millions of clients._
- [Introduction](#introduction)
- [Last-Mile Integration](#last-mile-integration)
@@ -107,22 +117,22 @@ In this mode, the Lightstreamer Kafka Connector integrates with the Kafka Connec
# QUICK START: Set up in 5 minutes
-To efficiently showcase the functionalities of the Lightstreamer Kafka Connector, we have prepared an accessible quickstart application located in the [`examples/quickstart`](examples/quickstart/) directory. This streamlined application facilitates real-time streaming of data from a Kafka topic directly to a web interface. It leverages a modified version of the [Stock List Demo](https://github.com/Lightstreamer/Lightstreamer-example-StockList-client-javascript?tab=readme-ov-file#basic-stock-list-demo---html-client), specifically adapted to demonstrate Kafka integration. This setup is designed for rapid comprehension, enabling you to swiftly grasp and observe the connector's performance in a real-world scenario.
+To showcase the functionality of the Lightstreamer Kafka Connector, we have prepared a quickstart application in the [`examples/quickstart`](/examples/quickstart/) directory. This streamlined application streams data from a Kafka topic directly to a web interface in real time. It leverages a modified version of the [Stock List Demo](https://github.com/Lightstreamer/Lightstreamer-example-StockList-client-javascript?tab=readme-ov-file#basic-stock-list-demo---html-client), specifically adapted to demonstrate Kafka integration, and is designed so you can quickly grasp and observe the connector at work in a real-world scenario.
![Quickstart Diagram](/pictures/quickstart-diagram.png)
The diagram above illustrates how, in this setup, a stream of simulated market events is channeled from Kafka to the web client via the Lightstreamer Kafka Connector.
-To provide a complete stack, the app is based on _Docker Compose_. The [Docker Compose file](examples/quickstart/docker-compose.yml) comprises the following services:
+To provide a complete stack, the app is based on _Docker Compose_. The [Docker Compose file](/examples/quickstart/docker-compose.yml) comprises the following services:
-1. _broker_: a Kafka broker, based on the [Docker Image for Apache Kafka](https://kafka.apache.org/documentation/#docker). Please notice that other versions of this quickstart are availbale in the in the [`examples`](examples/) directory, specifically targeted to other brokers, including [`Confluent Cloud`](examples/quickstart-confluent-cloud/), [`Redpanda Serverless`](examples/quickstart-redpanda-serverless), [`Redpanda Self-hosted`](examples/quickstart-redpanda-selfhosted), [`Aiven`](examples/quickstart-aiven), and more.
-2. _kafka-connector_: Lightstreamer Server with the Kafka Connector, based on the [Lightstreamer Kafka Connector Docker image example](examples/docker/), which also includes a web client mounted on `/lightstreamer/pages/QuickStart`
-3. _producer_: a native Kafka Producer, based on the provided [`Dockerfile`](examples/quickstart-producer/Dockerfile) file from the [`quickstart-producer`](examples/quickstart-producer/) sample client
+1. _broker_: a Kafka broker, based on the [Docker Image for Apache Kafka](https://kafka.apache.org/documentation/#docker). Note that other versions of this quickstart, specifically targeting other brokers, are available in the [`examples`](/examples/) directory, including [`Confluent Cloud`](/examples/vendors/confluent/README.md), [`Redpanda Serverless`](/examples/vendors/redpanda/quickstart-redpanda-serverless), [`Redpanda Self-hosted`](/examples/vendors/redpanda/quickstart-redpanda-selfhosted), [`Aiven`](/examples/vendors/aiven/quickstart-aiven), and more.
+2. _kafka-connector_: Lightstreamer Server with the Kafka Connector, based on the [Lightstreamer Kafka Connector Docker image example](/examples/docker/), which also includes a web client mounted on `/lightstreamer/pages/QuickStart`
+3. _producer_: a native Kafka Producer, based on the provided [`Dockerfile`](/examples/quickstart-producer/Dockerfile) file from the [`quickstart-producer`](/examples/quickstart-producer/) sample client
## Run
-1. Make sure you have Docker, Docker Compose, and JDK version 17 installed on your local machine.
-2. From the [`examples/quickstart`](examples/quickstart/) folder, run the following:
+1. Make sure you have Docker, Docker Compose, and a JDK (Java Development Kit) v17 or newer installed on your local machine.
+2. From the [`examples/quickstart`](/examples/quickstart/) folder, run the following:
```sh
$ ./start.sh
@@ -155,12 +165,12 @@ This section will guide you through the installation of the Kafka Connector to g
## Requirements
- JDK (Java Development Kit) v17 or newer
-- [Lightstreamer Broker](https://lightstreamer.com/download/) v7.4.2 or newer (check the `LS_HOME/GETTING_STARTED.TXT` file for the instructions)
+- [Lightstreamer Broker](https://lightstreamer.com/download/) (also referred to as _Lightstreamer Server_) v7.4.2 or newer. Follow the installation instructions in the `LS_HOME/GETTING_STARTED.TXT` file included in the downloaded package.
- A running Kafka broker or Kafka cluster
## Deploy
-Download the deployment archive `lightstreamer-kafka-connector-.zip` from the [Releases](https://github.com/Lightstreamer/Lightstreamer-kafka-connector/releases/) page. Alternatively, check out this repository and execute the following command from the [`kafka-connector-project`](kafka-connector-project/) folder:
+Download the deployment archive `lightstreamer-kafka-connector-<version>.zip` from the [Releases](https://github.com/Lightstreamer/Lightstreamer-kafka-connector/releases/) page. Alternatively, check out this repository and execute the following command from the [`kafka-connector-project`](/kafka-connector-project/) folder:
```sh
$ ./gradlew adapterDistZip
@@ -196,7 +206,7 @@ LS_HOME/
## Configure
-Before starting the Kafka Connector, you need to properly configure the `LS_HOME/adapters/lightstreamer-kafka-connector-/adapters.xml` file. For convenience, the package comes with a predefined configuration (the same used in the [_Quick Start_](#quick-start) app), which can be customized in all its aspects as per your requirements. Of course, you may add as many different connection configurations as desired to fit your needs.
+Before starting the Kafka Connector, you need to properly configure the `LS_HOME/adapters/lightstreamer-kafka-connector-<version>/adapters.xml` file. For convenience, the package comes with a predefined configuration (the same used in the [_Quick Start_](#quick-start-set-up-in-5-minutes) app), which can be customized in all its aspects as per your requirements. Of course, you may add as many different connection configurations as desired to fit your needs.
To quickly complete the installation and verify the successful integration with Kafka, edit the _data_provider_ block `QuickStart` in the file as follows:
@@ -206,13 +216,12 @@ To quickly complete the installation and verify the successful integration with
<param name="bootstrap.servers">kafka.connection.string</param>
```
-- Optionally customize the `LS_HOME/adapters/lightstreamer-kafka-connector-/log4j.properties` file (the current settings produce the additional `quickstart.log` file).
- Configure topic and record mapping.
- Since a generic Ligthstreamer client needs to subscribe to one or more items to receive real-time updates, the Kafka Connector has to offer proper mechanisms to realize the mapping between Kafka topics and Lightstreamer items.
+ To enable a generic Lightstreamer client to receive real-time updates, it needs to subscribe to one or more items. Therefore, the Kafka Connector provides suitable mechanisms to map Kafka topics to Lightstreamer items effectively.
- The `QuickStart` [factory configuration](kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml#L39) comes with a simple mapping through the following settings:
+ The `QuickStart` [factory configuration](/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml#L39) comes with a straightforward mapping defined through the following settings:
- An item template:
```xml
@@ -254,6 +263,8 @@ To quickly complete the installation and verify the successful integration with
This way, the routed event is transformed into a flat structure, which can be forwarded to the clients.
+- Optionally, customize the `LS_HOME/adapters/lightstreamer-kafka-connector-<version>/log4j.properties` file (the current settings produce the `quickstart.log` file).
+
You can get more details about all possible settings in the [Configuration](#configuration) section.
### Connection with Confluent Cloud
@@ -295,7 +306,7 @@ where you have to replace `username` and `password` with the credentials generat
## Start
-1. Launch Lightstreamer Server.
+### 1. Launch Lightstreamer Server
From the `LS_HOME/bin/unix-like` directory, run the following:
@@ -303,11 +314,11 @@ where you have to replace `username` and `password` with the credentials generat
$ ./background_start.sh
```
-2. Attach a Lightstreamer consumer.
+### 2. Attach a Lightstreamer consumer
- The [`kafka-connector-utils`](kafka-connector-project/kafka-connector-utils) submodule hosts a simple Lightstreamer Java client that can be used to test the consumption of Kafka events from any Kafka topics.
+ The [`kafka-connector-utils`](/kafka-connector-project/kafka-connector-utils) submodule hosts a simple Lightstreamer Java client that can be used to test the consumption of Kafka events from any Kafka topics.
- Before launching the consumer, you first need to build it from the [`kafka-connector-project`](kafka-connector-project/) folder with the command:
+ Before launching the consumer, you first need to build it from the [`kafka-connector-project`](/kafka-connector-project/) folder with the command:
```sh
$ ./gradlew kafka-connector-utils:build
@@ -318,7 +329,7 @@ where you have to replace `username` and `password` with the credentials generat
Then, launch it with:
```sh
- $ java -jar kafka-connector-utils/build/libs/lightstreamer-kafka-connector-utils-consumer-all-.jar --address http://localhost:8080 --adapter-set KafkaConnector --data-adapter QuickStart --items stock-[index=1],stock-[index=2],stock-[index=3] --fields ask,bid,min,max
+ $ java -jar kafka-connector-utils/build/libs/lightstreamer-kafka-connector-utils-consumer-all-<version>.jar --address http://localhost:8080 --adapter-set KafkaConnector --data-adapter QuickStartConfluentCloud --items stock-[index=1],stock-[index=2],stock-[index=3] --fields stock_name,ask,bid,min,max
```
As you can see, you have to specify a few parameters:
@@ -332,9 +343,9 @@ where you have to replace `username` and `password` with the credentials generat
> [!NOTE]
> While we've provided examples in JavaScript (suitable for web browsers) and Java (geared towards desktop applications), you are encouraged to utilize any of the [Lightstreamer client SDKs](https://lightstreamer.com/download/#client-sdks) for developing clients in other environments, including iOS, Android, Python, and more.
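+
+As a reference, here is a minimal sketch of an equivalent consumer built directly on the Java client SDK. It assumes the `com.lightstreamer:ls-javase-client` library is on the classpath; the adapter set, data adapter, items, and fields mirror the command above, while listener wiring and error handling are omitted:
+
+```java
+import com.lightstreamer.client.LightstreamerClient;
+import com.lightstreamer.client.Subscription;
+
+public class QuickStartConsumer {
+    public static void main(String[] args) throws InterruptedException {
+        // Connect to the local Lightstreamer Broker hosting the "KafkaConnector" adapter set
+        LightstreamerClient client = new LightstreamerClient("http://localhost:8080", "KafkaConnector");
+
+        // Subscribe in MERGE mode to the same items and fields passed to the consumer above
+        Subscription sub = new Subscription("MERGE",
+                new String[] { "stock-[index=1]", "stock-[index=2]", "stock-[index=3]" },
+                new String[] { "stock_name", "ask", "bid", "min", "max" });
+        sub.setDataAdapter("QuickStartConfluentCloud");
+        sub.setRequestedSnapshot("yes");
+        // Attach a SubscriptionListener here (e.g., overriding onItemUpdate) to process events
+
+        client.subscribe(sub);
+        client.connect();
+
+        Thread.sleep(60_000); // keep the JVM alive while updates flow
+        client.disconnect();
+    }
+}
+```
+
+Every other SDK follows the same subscribe-and-listen pattern.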
-3. Publish the events.
+### 3. Publish the events
- The [`examples/quickstart-producer`](examples/quickstart-producer/) folder hosts a simple Kafka producer to publish simulated market events for the _QuickStart_ app.
+ The [`examples/quickstart-producer`](/examples/quickstart-producer/) folder hosts a simple Kafka producer to publish simulated market events for the _QuickStart_ app.
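+
+ As a rough sketch of what this producer does, the snippet below publishes a single simulated event with the plain Kafka Java client (`org.apache.kafka:kafka-clients`). The `stocks` topic and the integer key match this quickstart, while the JSON payload and broker address are illustrative; the real producer emits a continuous feed:
+
+ ```java
+ import java.util.Properties;
+
+ import org.apache.kafka.clients.producer.KafkaProducer;
+ import org.apache.kafka.clients.producer.ProducerRecord;
+
+ public class OneShotStockProducer {
+     public static void main(String[] args) {
+         Properties props = new Properties();
+         props.put("bootstrap.servers", "localhost:9092"); // adjust to your broker
+         props.put("key.serializer", "org.apache.kafka.common.serialization.IntegerSerializer");
+         props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
+
+         try (KafkaProducer<Integer, String> producer = new KafkaProducer<>(props)) {
+             // A single simulated market event, keyed by the stock index
+             String event = "{\"name\":\"Sample stock\",\"ask\":12.5,\"bid\":12.4,\"min\":11.9,\"max\":13.1}";
+             producer.send(new ProducerRecord<>("stocks", 1, event));
+         } // close() flushes any pending records
+     }
+ }
+ ```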
Before launching the producer, you first need to build it. Open a new shell from the folder and execute the command:
@@ -359,12 +370,12 @@ where you have to replace `username` and `password` with the credentials generat
```java
security.protocol=SASL_SSL
- sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="" password="";
+ sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="<API.key>" password="<secret>";
sasl.mechanism=PLAIN
...
```
- where you have to replace `` and `` with the _API Key_ and _secret_ generated on the _Confluent CLI_ or from the _Confluent Cloud Console_.
+ where you have to replace `<API.key>` and `<secret>` with the API key and API secret generated on the _Confluent CLI_ or from the _Confluent Cloud Console_.
```sh
$ java -jar build/libs/quickstart-producer-all.jar --bootstrap-servers <kafka.connection.string> --topic stocks --config-file <producer.properties>
@@ -388,7 +399,7 @@ where you have to replace `username` and `password` with the credentials generat
$ java -jar build/libs/quickstart-producer-all.jar --bootstrap-servers <kafka.connection.string> --topic stocks --config-file <producer.properties>
```
-4. Check the consumed events.
+### 4. Check the consumed events
After starting the publisher, you should immediately see the real-time updates flowing from the consumer shell:
@@ -473,11 +484,12 @@ _Optional_. The `name` attribute of the `data_provider` tag defines _Kafka Conne
Furthermore, the name is also used to group all logging messages belonging to the same connection.
> [!TIP]
-> For every Data Adaper connection, add a new logger and its relative file appender to `log4j.properties`, so that you can log to dedicated files all the interactions pertinent to the connection with the Kafka cluster and the message retrieval operations, along with their routing to the subscribed items.
-> For example, the factory [logging configuration](kafka-connector-project/dist/log4j.properties#L23) provides the logger `QuickStart` to print every log messages relative to the `QuickStart` connection:
+> For every Data Adapter connection, add a new logger and its related file appender to `log4j.properties`, so that you can log to dedicated files all the interactions pertinent to the connection with the Kafka cluster and the message retrieval operations, along with their routing to the subscribed items.
+> For example, the factory [logging configuration](/kafka-connector-project/kafka-connector/src/adapter/dist/log4j.properties#L23) provides the logger `QuickStart` to print every log message related to the `QuickStart` connection:
> ```java
> ...
> # QuickStart logger
+> log4j.logger.QuickStart=INFO, QuickStartFile
> log4j.appender.QuickStartFile=org.apache.log4j.RollingFileAppender
> log4j.appender.QuickStartFile.layout=org.apache.log4j.PatternLayout
> log4j.appender.QuickStartFile.layout.ConversionPattern=[%d] [%-10c{1}] %-5p %m%n
@@ -686,7 +698,7 @@ Example:
#### Quick Start SSL Example
-Check out the [adapters.xml](examples/quickstart-ssl/adapters.xml#L17) file of the [_Quick Start SSL_](examples/quickstart-ssl/) app, where you can find an example of encryption configuration.
+Check out the [adapters.xml](/examples/quickstart-ssl/adapters.xml#L17) file of the [_Quick Start SSL_](/examples/quickstart-ssl/) app, where you can find an example of encryption configuration.
### Broker Authentication Parameters
@@ -818,11 +830,11 @@ Example of configuration with the use of a ticket cache:
#### Quick Start Confluent Cloud Example
-Check out the [adapters.xml](examples/quickstart-confluent-cloud/adapters.xml#L22) file of the [_Quick Start Confluent Cloud_](examples/quickstart-confluent-cloud/) app, where you can find an example of an authentication configuration that uses SASL/PLAIN.
+Check out the [adapters.xml](/examples/vendors/confluent/quickstart-confluent-cloud/adapters.xml#L22) file of the [_Quick Start Confluent Cloud_](/examples/vendors/confluent/quickstart-confluent-cloud/) app, where you can find an example of an authentication configuration that uses SASL/PLAIN.
#### Quick Start Redpanda Serverless Example
-Check out the [adapters.xml](examples/quickstart-redpanda-serverless/adapters.xml#L22) file of the [_Quick Start Redpanda Serverless_](examples/quickstart-redpanda-serverless/) app, where you can find an example of an authentication configuration that uses SASL/SCRAM.
+Check out the [adapters.xml](/examples/vendors/redpanda/quickstart-redpanda-serverless/adapters.xml#L22) file of the [_Quick Start Redpanda Serverless_](/examples/vendors/redpanda/quickstart-redpanda-serverless/) app, where you can find an example of an authentication configuration that uses SASL/SCRAM.
### Record Evaluation
@@ -1062,7 +1074,7 @@ To configure the mapping, you define the set of all subscribable fields through
The configuration specifies that the field `fieldNameX` will contain the value extracted from the deserialized Kafka record through the `extractionExpressionX`, written using the [_Data Extraction Language_](#data-extraction-language). This approach makes it possible to transform a Kafka record of any complexity to the flat structure required by Lightstreamer.
-The `QuickStart` [factory configuration](kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml#L352) shows a basic example, where a simple _direct_ mapping has been defined between every attribute of the JSON record value and a Lightstreamer field with the corresponding name. Of course, thanks to the _Data Extraction Language_, more complex mapping can be employed.
+The `QuickStart` [factory configuration](/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml#L352) shows a basic example, where a simple _direct_ mapping has been defined between every attribute of the JSON record value and a Lightstreamer field with the corresponding name. Of course, thanks to the _Data Extraction Language_, more complex mapping can be employed.
```xml
...
@@ -1303,7 +1315,7 @@ Example:
#### Quick Start Schema Registry Example
-Check out the [adapters.xml](examples/quickstart-schema-registry/adapters.xml#L58) file of the [_Quick Start Schema Registry_](examples/quickstart-schema-registry/) app, where you can find an example of Schema Registry settings.
+Check out the [adapters.xml](/examples/quickstart-schema-registry/adapters.xml#L58) file of the [_Quick Start Schema Registry_](/examples/quickstart-schema-registry/) app, where you can find an example of Schema Registry settings.
# Customize the Kafka Connector Metadata Adapter Class
@@ -1357,7 +1369,7 @@ For a Gradle project, edit your _build.gradle_ file as follows:
}
```
-In the [examples/custom-kafka-connector-adapter](examples/custom-kafka-connector-adapter/) folder, you can find a sample Gradle project you may use as a starting point to build and deploy your custom extension.
+In the [examples/custom-kafka-connector-adapter](/examples/custom-kafka-connector-adapter/) folder, you can find a sample Gradle project you may use as a starting point to build and deploy your custom extension.
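+
+As a minimal sketch of such an extension, the class below hooks a custom credential check into the standard Lightstreamer `notifyUser` callback (the check itself is a placeholder; compile the class against the connector dependency referenced in your build file):
+
+```java
+import com.lightstreamer.interfaces.metadata.AccessException;
+import com.lightstreamer.interfaces.metadata.CreditsException;
+import com.lightstreamer.kafka.adapters.pub.KafkaConnectorMetadataAdapter;
+
+public class CustomKafkaConnectorAdapter extends KafkaConnectorMetadataAdapter {
+
+    @Override
+    public void notifyUser(String user, String password)
+            throws AccessException, CreditsException {
+        // Placeholder authentication logic: reject clients without credentials
+        if (user == null || password == null) {
+            throw new AccessException("Unauthorized: user credentials are required");
+        }
+    }
+}
+```
+
+Package the compiled class in a jar and deploy it as described in this section.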
# Kafka Connect Lightstreamer Sink Connector
@@ -1367,7 +1379,7 @@ In this scenario, an instance of the connector plugin acts as a [_Remote Adapter
![KafkaConnectArchitecture](/pictures/kafka-connect.png)
-The connector has been developed for Kafka Connect framework version 3.7 and requires JDK version 17 or later.
+The connector has been developed for Kafka Connect framework version 3.7 and requires JDK (Java Development Kit) v17 or newer.
## Usage
@@ -1378,7 +1390,7 @@ Before running the connector, you first need to deploy a Proxy Adapter into the
#### Requirements
- JDK (Java Development Kit) v17 or newer
-- [Lightstreamer Broker](https://lightstreamer.com/download/) v7.4.2 or newer (check the `LS_HOME/GETTING_STARTED.TXT` file for the instructions)
+- [Lightstreamer Broker](https://lightstreamer.com/download/) (also referred to as _Lightstreamer Server_) v7.4.2 or newer. Follow the installation instructions in the `LS_HOME/GETTING_STARTED.TXT` file included in the downloaded package.
#### Steps
@@ -1434,7 +1446,7 @@ LS_HOME/
To manually install the Kafka Connect Lightstreamer Sink Connector to a local Confluent Platform (version 7.6 or later) and run it in [_standalone mode_](https://docs.confluent.io/platform/current/connect/userguide.html#standalone-mode):
-1. Download the connector zip file `lightstreamer-kafka-connect-lightstreamer-.zip` from the [Releases](https://github.com/Lightstreamer/Lightstreamer-kafka-connector/releases) page. Alternatively, check out this repository and execute the following command from the [`kafka-connector-project`](kafka-connector-project/) folder:
+1. Download the connector zip file `lightstreamer-kafka-connect-lightstreamer-<version>.zip` from the [Releases](https://github.com/Lightstreamer/Lightstreamer-kafka-connector/releases) page. Alternatively, check out this repository and execute the following command from the [`kafka-connector-project`](/kafka-connector-project/) folder:
```sh
$ ./gradlew connectDistZip
@@ -1497,9 +1509,9 @@ To verify that an events stream actually flows from Kafka to a Lightstreamer con
### Running in Docker
-If you want to build a local Docker image based on Kafka Connect with the connector plugin, check out the [exmaples/docker-kafka-connect](./examples/docker-kafka-connect/) folder.
+If you want to build a local Docker image based on Kafka Connect with the connector plugin, check out the [examples/docker-kafka-connect](/examples/docker-kafka-connect/) folder.
-In addition, the [examples/quickstart-kafka-connect](./examples/quickstart-kafka-connect/) folder shows how to use that image in Docker Compose through a Kafka Connect version of the _Quick Start_ app.
+In addition, the [examples/quickstart-kafka-connect](/examples/quickstart-kafka-connect/) folder shows how to use that image in Docker Compose through a Kafka Connect version of the _Quick Start_ app.
## Supported Converters
@@ -1749,12 +1761,12 @@ The configuration above specifies how to route records published from the topic
# Docs
-The [docs](docs/) folder contains the complete [Kafka Connector API Reference](https://lightstreamer.github.io/Lightstreamer-kafka-connector/javadoc), which is useful for implementing custom authentication and authorization logic, as described in the [Customize the Kafka Connector Metadata Adapter Class](#customize-the-kafka-connector-metadata-adapter-class) section.
+The [docs](/docs/) folder contains the complete [Kafka Connector API Reference](https://lightstreamer.github.io/Lightstreamer-kafka-connector/javadoc), which is useful for implementing custom authentication and authorization logic, as described in the [Customize the Kafka Connector Metadata Adapter Class](#customize-the-kafka-connector-metadata-adapter-class) section.
To learn more about the [Lightstreamer Broker](https://lightstreamer.com/products/lightstreamer/) and the [Lightstreamer Kafka Connector](https://lightstreamer.com/products/kafka-connector/), visit their respective product pages.
# Examples
-The [examples](examples/) folder contains all the examples referenced throughout this guide, along with additional resources tailored for specific Kafka broker vendors. Additionally, you can explore the [_Airport Demo_](examples/airport-demo/) for deeper insights into various usage and configuration options of the Lightstreamer Kafka Connector.
+The [examples](/examples/) folder contains all the examples referenced throughout this guide, along with additional resources tailored for specific Kafka broker vendors. Additionally, you can explore the [_Airport Demo_](/examples/airport-demo/) for deeper insights into various usage and configuration options of the Lightstreamer Kafka Connector.
For more examples and live demos, visit our [online showcase](https://demos.lightstreamer.com/?p=kafkaconnector&lclient=noone&f=all&lall=all&sallall=all).
diff --git a/examples/README.md b/examples/README.md
index 3509a4a8..73133447 100644
--- a/examples/README.md
+++ b/examples/README.md
@@ -4,14 +4,14 @@ This folder contains several examples showing you how to use Lightstreamer Kafka
- [docker](./docker/): a minimal Docker image
- [quickstart](quickstart/): the _Quick Start_ app
-- [quickstart-producer](quickstart-producer//): the producer for the _Quick Start_ app
+- [quickstart-producer](quickstart-producer/): the producer for the _Quick Start_ app
- [quickstart-ssl](quickstart-ssl/): the _Quick Start_ app with encryption settings
-- [quickstart-confluent-cloud](quickstart-confluent-cloud/): the _Quick Start_ app with _Confluent Cloud_ as the target Kafka Cluster
-- [quickstart-redpanda-selfhosted](quickstart-redpanda-selfhosted/): the _Quick Start_ app with _Redpanda_ as the target broker
-- [quickstart-redpanda-serverless](quickstart-redpanda-serverless/): the _Quick Start_ app with _Redpanda Serverless_ as the target cluster
-- [quickstart-aiven](quickstart-aiven/): the _Quick Start_ app with _Aiven for Apache Kafka_ as the target Kafka cluster
-- [quickstart-axual](quickstart-axual/): the _Quick Start_ app with a _shared test cluster_ from _Axual Platform_ as the target Kafka cluster
+- [quickstart-confluent](vendors/confluent/quickstart-confluent/): the _Quick Start_ app with the _Confluent Platform_ as the target broker
- [quickstart-schema-registry](quickstart-schema-registry/): the _Quick Start_ app with the _Confluent Schema Registry_
-- [quickstart-kafka-connect](quickstart-kafka-connect//): the _Quick Start_ app running with the Kafka Connect Connector packaging of the connector.
+- [quickstart-redpanda-selfhosted](vendors/redpanda/quickstart-redpanda-selfhosted/): the _Quick Start_ app with _Redpanda_ as the target broker
+- [quickstart-redpanda-serverless](vendors/redpanda/quickstart-redpanda-serverless/): the _Quick Start_ app with _Redpanda Serverless_ as the target cluster
+- [quickstart-aiven](vendors/aiven/quickstart-aiven/): the _Quick Start_ app with _Aiven for Apache Kafka_ as the target Kafka cluster
+- [quickstart-axual](vendors/axual/quickstart-axual/): the _Quick Start_ app with a _shared test cluster_ from _Axual Platform_ as the target Kafka cluster
+- [quickstart-kafka-connect](quickstart-kafka-connect/): the _Quick Start_ app running with the Kafka Connect Connector packaging of the connector.
- [airport-demo](airport-demo/): a demo showing a basic departure board
- [custom-kafka-connector-adapter](custom-kafka-connector-adapter/): a sample Gradle project for building and deploying a custom _Kafka Connector Metadata Adapter_
diff --git a/examples/airport-demo/docker-compose-kafka.yml b/examples/airport-demo/docker-compose-kafka.yml
index d49457cf..2ecd5f01 100644
--- a/examples/airport-demo/docker-compose-kafka.yml
+++ b/examples/airport-demo/docker-compose-kafka.yml
@@ -50,7 +50,7 @@ services:
KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
- KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
+ KAFKA_REST_LISTENERS: 'http://0.0.0.0:8082'
KAFKA_AUTO_CREATE_TOPICS_ENABLE: false
init-broker:
diff --git a/examples/compose-templates/docker b/examples/compose-templates/docker
new file mode 120000
index 00000000..bd4c07fe
--- /dev/null
+++ b/examples/compose-templates/docker
@@ -0,0 +1 @@
+../docker
\ No newline at end of file
diff --git a/examples/compose-templates/docker-kafka-connect b/examples/compose-templates/docker-kafka-connect
new file mode 120000
index 00000000..e7216ab3
--- /dev/null
+++ b/examples/compose-templates/docker-kafka-connect
@@ -0,0 +1 @@
+../docker-kafka-connect/
\ No newline at end of file
diff --git a/examples/compose-templates/helpers.sh b/examples/compose-templates/helpers.sh
new file mode 120000
index 00000000..16c60939
--- /dev/null
+++ b/examples/compose-templates/helpers.sh
@@ -0,0 +1 @@
+../utils/helpers.sh
\ No newline at end of file
diff --git a/examples/compose-templates/start-connect.sh b/examples/compose-templates/start-connect.sh
index 6bea4d17..fafb036a 100755
--- a/examples/compose-templates/start-connect.sh
+++ b/examples/compose-templates/start-connect.sh
@@ -1,8 +1,8 @@
#!/bin/bash
-source ../utils/helpers.sh
+source ./helpers.sh
# Build the Lightstreamer Kafka Connector Docker image
-../docker-kafka-connect/build.sh
+./docker-kafka-connect/build.sh
if [ $? == 0 ]; then
export version
diff --git a/examples/compose-templates/start.sh b/examples/compose-templates/start.sh
index a5ac367d..ea8851b3 100755
--- a/examples/compose-templates/start.sh
+++ b/examples/compose-templates/start.sh
@@ -1,8 +1,8 @@
#!/bin/bash
-source ../utils/helpers.sh
+source ./helpers.sh
# Build the Lightstreamer Kafka Connector Docker image
-../docker/build.sh
+./docker/build.sh
if [ $? == 0 ]; then
export version
diff --git a/examples/compose-templates/stop.sh b/examples/compose-templates/stop.sh
index a4b46d0a..76b3e496 100755
--- a/examples/compose-templates/stop.sh
+++ b/examples/compose-templates/stop.sh
@@ -1,5 +1,5 @@
#!/bin/bash
-source ../utils/helpers.sh
+source ./helpers.sh
# Export the version env variable to be used by Compose
export version
diff --git a/examples/docker-kafka-connect/README.md b/examples/docker-kafka-connect/README.md
index 715e1716..5c3aa7a8 100644
--- a/examples/docker-kafka-connect/README.md
+++ b/examples/docker-kafka-connect/README.md
@@ -8,7 +8,7 @@ The image is based on the official [Official Confluent Docker Base Image for Kaf
### Requirements:
-- Java 17
+- JDK version 17 or newer
- Docker
### Instructions
diff --git a/examples/docker/README.md b/examples/docker/README.md
index a6823c64..f6e25645 100644
--- a/examples/docker/README.md
+++ b/examples/docker/README.md
@@ -6,7 +6,7 @@ The image is built by deriving the official [Lightstreamer Docker image](https:/
## Requirements:
-- JDK version 17
+- JDK version 17 or newer
- Docker
## Instructions
@@ -42,8 +42,6 @@ The image is built by deriving the official [Lightstreamer Docker image](https:/
$ docker run --name kafka-connector -d -p 8080:8080 lightstreamer-kafka-connector-<version>
```
-5. Check the logs:
-
- ```sh
- $ docker logs -f kafka-connector
- ```
+5. Verify the container
+
+   Point your browser to [http://localhost:8080](http://localhost:8080); you should see the Lightstreamer welcome page.
diff --git a/examples/docker/build.sh b/examples/docker/build.sh
index 687cc641..ddbb9da8 100755
--- a/examples/docker/build.sh
+++ b/examples/docker/build.sh
@@ -1,6 +1,6 @@
#!/bin/bash
set -eu
-source ../utils/helpers.sh
+source ./helpers.sh
SCRIPT_DIR=$(dirname ${BASH_SOURCE[0]})
TMP_DIR=${SCRIPT_DIR}/tmp
diff --git a/examples/docker/helpers.sh b/examples/docker/helpers.sh
new file mode 120000
index 00000000..16c60939
--- /dev/null
+++ b/examples/docker/helpers.sh
@@ -0,0 +1 @@
+../utils/helpers.sh
\ No newline at end of file
diff --git a/examples/docker/resources/README.md b/examples/docker/resources/README.md
index ae7b6152..418c6853 100644
--- a/examples/docker/resources/README.md
+++ b/examples/docker/resources/README.md
@@ -1,6 +1,6 @@
# resources folder
Copy into this folder any customizable Kafka Connector resource, such as:
- - `adapters.xml`.
- - `log4j.properties` (or any other referenced log configuration file).
- - Local schema, key store, and trust store files referenced in `adapters.xml`.
+ - `adapters.xml`
+ - `log4j.properties` (or any other referenced log configuration file)
+ - Local schema, key store, and trust store files referenced in `adapters.xml`
diff --git a/examples/quickstart-aiven/log4j.properties b/examples/quickstart-aiven/log4j.properties
deleted file mode 120000
index 439ae5ab..00000000
--- a/examples/quickstart-aiven/log4j.properties
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/quickstart-aiven/start.sh b/examples/quickstart-aiven/start.sh
deleted file mode 120000
index dd868ce4..00000000
--- a/examples/quickstart-aiven/start.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/quickstart-aiven/stop.sh b/examples/quickstart-aiven/stop.sh
deleted file mode 120000
index 515a8a07..00000000
--- a/examples/quickstart-aiven/stop.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/quickstart-aiven/web b/examples/quickstart-aiven/web
deleted file mode 120000
index 38321676..00000000
--- a/examples/quickstart-aiven/web
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/web/
\ No newline at end of file
diff --git a/examples/quickstart-axual/log4j.properties b/examples/quickstart-axual/log4j.properties
deleted file mode 120000
index 439ae5ab..00000000
--- a/examples/quickstart-axual/log4j.properties
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/quickstart-axual/start.sh b/examples/quickstart-axual/start.sh
deleted file mode 120000
index dd868ce4..00000000
--- a/examples/quickstart-axual/start.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/quickstart-axual/stop.sh b/examples/quickstart-axual/stop.sh
deleted file mode 120000
index 515a8a07..00000000
--- a/examples/quickstart-axual/stop.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/quickstart-axual/web b/examples/quickstart-axual/web
deleted file mode 120000
index 38321676..00000000
--- a/examples/quickstart-axual/web
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/web/
\ No newline at end of file
diff --git a/examples/quickstart-confluent-cloud/README.md b/examples/quickstart-confluent-cloud/README.md
deleted file mode 100644
index 7df7a804..00000000
--- a/examples/quickstart-confluent-cloud/README.md
+++ /dev/null
@@ -1,72 +0,0 @@
-# Quick Start with Confluent Cloud
-
-This folder contains a variant of the [_Quick Start SSL_](../quickstart-ssl/README.md#quick-start-ssl) app configured to use _Confluent Cloud_ as the target Kafka cluster.
-
-The [docker-compose.yml](docker-compose.yml) file has been revised to realize the integration with _Confluent Cloud_ as follows:
-
-- removal of the `broker` service, because replaced by the remote Kafka cluster
-- _kafka-connector_:
- - definition of new environment variables to configure remote endpoint, credentials, and topic name in the `adapters.xml` through the _variable-expansion_ feature of Lightstreamer:
- ```yaml
- ...
- environment:
- - bootstrap_server=${bootstrap_server}
- - api_key=${api_key}
- - secret=${secret}
- # adapters.xml uses env variable "topic_mapping", built from env variable "topic"
- - topic_mapping=map.${topic}.to
- ...
- ```
- - adaption of [`adapters.xml`](./adapters.xml) to include:
- - new Kafka cluster address retrieved from the environment variable `bootstrap_server`:
- ```xml
-    <param name="bootstrap.servers">$env.bootstrap_server</param>
- ```
-
- - encryption settings:
- ```xml
-    <param name="encryption.enable">true</param>
-    <param name="encryption.protocol">TLSv1.2</param>
-    <param name="encryption.hostname.verification.enable">true</param>
- ```
-
- - authentication settings, with the credentials retrieved from environment variables `api_key` and `secret`:
- ```xml
-    <param name="authentication.enable">true</param>
-    <param name="authentication.mechanism">PLAIN</param>
-    <param name="authentication.username">$env.api_key</param>
-    <param name="authentication.password">$env.secret</param>
- ```
- - parameter `map.<topic>.to`, whose name is built from the env variable `topic_mapping`, composed in turn from the env variable `topic`:
- ```xml
-    <param name="$env.topic_mapping">item-template.stock</param>
- ```
-
-- _producer_:
- - parameter `--boostrap-servers` retrieved from the environment variable `bootstrap_server`
- - parameter `--topic` retrieved from the environment variable `topic`
- - provisioning of the `producer.properties` configuration file to enable `SASL/PLAN` over TLS, with username and password retrieved from the environment variables `api_key` and `secret`:
-
- ```yaml
- # Configure SASL/PLAIN mechanism
- sasl.mechanism=PLAIN
- # Enable SSL encryption
- security.protocol=SASL_SSL
- # JAAS configuration
- sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="${api_key}" password="${secret}";
- ```
-
-## Run
-
-From this directory, run follow the command:
-
-```sh
-$ bootstrap_server=<bootstrap_server> api_key=<API.key> secret=<secret> topic=<topic> ./start.sh
-```
-
-where:
-- `bootstrap_server` is the Kafla cluster address
-- `API.key` and `secret` are the credentials generated on the _Confluent CLI_ or from the _Confluent Cloud Console_
-- `topic` is the name of the topic
-
-Then, point your browser to [http://localhost:8080/QuickStart](http://localhost:8080/QuickStart).
diff --git a/examples/quickstart-confluent-cloud/adapters.xml b/examples/quickstart-confluent-cloud/adapters.xml
deleted file mode 100644
index d23b9bf5..00000000
--- a/examples/quickstart-confluent-cloud/adapters.xml
+++ /dev/null
@@ -1,59 +0,0 @@
-<?xml version="1.0"?>
-
-<adapters_conf id="KafkaConnector">
-
-    <metadata_provider>
-        <adapter_class>com.lightstreamer.kafka.adapters.pub.KafkaConnectorMetadataAdapter</adapter_class>
-        <param name="logging.configuration.path">log4j.properties</param>
-    </metadata_provider>
-
-    <data_provider name="QuickStart">
-
-        <adapter_class>com.lightstreamer.kafka.adapters.KafkaConnectorDataAdapter</adapter_class>
-
-        <param name="bootstrap.servers">$env.bootstrap_server</param>
-        <param name="group.id">quick-start-group</param>
-
-        <!-- Encryption settings -->
-        <param name="encryption.enable">true</param>
-        <param name="encryption.protocol">TLSv1.2</param>
-        <param name="encryption.hostname.verification.enable">true</param>
-
-        <!-- Authentication settings -->
-        <param name="authentication.enable">true</param>
-        <param name="authentication.mechanism">PLAIN</param>
-        <param name="authentication.username">$env.api_key</param>
-        <param name="authentication.password">$env.secret</param>
-
-        <!-- Record evaluation settings -->
-        <param name="record.consume.from">EARLIEST</param>
-        <param name="record.key.evaluator.type">INTEGER</param>
-        <param name="record.value.evaluator.type">JSON</param>
-
-        <!-- Topic mapping -->
-        <param name="item-template.stock">stock-#{index=KEY}</param>
-        <param name="$env.topic_mapping">item-template.stock</param>
-
-        <!-- Field mapping -->
-        <param name="field.timestamp">#{VALUE.timestamp}</param>
-        <param name="field.time">#{VALUE.time}</param>
-        <param name="field.stock_name">#{VALUE.name}</param>
-        <param name="field.last_price">#{VALUE.last_price}</param>
-        <param name="field.ask">#{VALUE.ask}</param>
-        <param name="field.ask_quantity">#{VALUE.ask_quantity}</param>
-        <param name="field.bid">#{VALUE.bid}</param>
-        <param name="field.bid_quantity">#{VALUE.bid_quantity}</param>
-        <param name="field.pct_change">#{VALUE.pct_change}</param>
-        <param name="field.min">#{VALUE.min}</param>
-        <param name="field.max">#{VALUE.max}</param>
-        <param name="field.ref_price">#{VALUE.ref_price}</param>
-        <param name="field.open_price">#{VALUE.open_price}</param>
-        <param name="field.item_status">#{VALUE.item_status}</param>
-        <param name="field.ts">#{TIMESTAMP}</param>
-        <param name="field.topic">#{TOPIC}</param>
-        <param name="field.offset">#{OFFSET}</param>
-        <param name="field.partition">#{PARTITION}</param>
-
-    </data_provider>
-
-</adapters_conf>
diff --git a/examples/quickstart-confluent-cloud/docker-compose.yml b/examples/quickstart-confluent-cloud/docker-compose.yml
deleted file mode 100644
index 99d4ddae..00000000
--- a/examples/quickstart-confluent-cloud/docker-compose.yml
+++ /dev/null
@@ -1,40 +0,0 @@
----
-name: quickstart-kafka-connector-confluent-cloud
-services:
- kafka-connector:
- container_name: kafka-connector
- image: lightstreamer-kafka-connector-${version}
- depends_on:
- - producer
- ports:
- - 8080:8080
- environment:
- - bootstrap_server=${bootstrap_server}
- - api_key=${api_key}
- - secret=${secret}
- - topic_mapping=map.${topic}.to
- volumes:
- - ./web:/lightstreamer/pages/QuickStart
- - ./adapters.xml:/lightstreamer/adapters/lightstreamer-kafka-connector-${version}/adapters.xml
- - ./log4j.properties:/lightstreamer/adapters/lightstreamer-kafka-connector-${version}/log4j.properties
-
- producer:
- container_name: producer
- build:
- context: ../quickstart-producer
- args:
- VERSION: ${version}
- configs:
- - source: producer.properties
- target: /usr/app/producer.properties
- command: ["--bootstrap-servers", "${bootstrap_server}", "--topic", "${topic}", "--config-file", "/usr/app/producer.properties"]
-
-configs:
- producer.properties:
- content: |
- # Configure SASL/PLAIN mechanism
- sasl.mechanism=PLAIN
- # Enable SSL encryption
- security.protocol=SASL_SSL
- # JAAS configuration
- sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="${api_key}" password="${secret}";
diff --git a/examples/quickstart-confluent-cloud/log4j.properties b/examples/quickstart-confluent-cloud/log4j.properties
deleted file mode 120000
index 439ae5ab..00000000
--- a/examples/quickstart-confluent-cloud/log4j.properties
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/quickstart-confluent-cloud/start.sh b/examples/quickstart-confluent-cloud/start.sh
deleted file mode 120000
index dd868ce4..00000000
--- a/examples/quickstart-confluent-cloud/start.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/quickstart-confluent-cloud/stop.sh b/examples/quickstart-confluent-cloud/stop.sh
deleted file mode 120000
index 515a8a07..00000000
--- a/examples/quickstart-confluent-cloud/stop.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/quickstart-confluent-cloud/web b/examples/quickstart-confluent-cloud/web
deleted file mode 120000
index 38321676..00000000
--- a/examples/quickstart-confluent-cloud/web
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/web/
\ No newline at end of file
diff --git a/examples/quickstart-kafka-connect/docker-compose.yml b/examples/quickstart-kafka-connect/docker-compose.yml
index 226ff325..43827c38 100644
--- a/examples/quickstart-kafka-connect/docker-compose.yml
+++ b/examples/quickstart-kafka-connect/docker-compose.yml
@@ -69,7 +69,7 @@ services:
KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
- KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
+ KAFKA_REST_LISTENERS: 'http://0.0.0.0:8082'
schema-registry:
image: confluentinc/cp-schema-registry:7.5.0
diff --git a/examples/quickstart-kafka-connect/docker-kafka-connect b/examples/quickstart-kafka-connect/docker-kafka-connect
new file mode 120000
index 00000000..2aad6689
--- /dev/null
+++ b/examples/quickstart-kafka-connect/docker-kafka-connect
@@ -0,0 +1 @@
+../compose-templates/docker-kafka-connect
\ No newline at end of file
diff --git a/examples/quickstart-kafka-connect/helpers.sh b/examples/quickstart-kafka-connect/helpers.sh
new file mode 120000
index 00000000..d97f9467
--- /dev/null
+++ b/examples/quickstart-kafka-connect/helpers.sh
@@ -0,0 +1 @@
+../compose-templates/helpers.sh
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-selfhosted/log4j.properties b/examples/quickstart-redpanda-selfhosted/log4j.properties
deleted file mode 120000
index 439ae5ab..00000000
--- a/examples/quickstart-redpanda-selfhosted/log4j.properties
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-selfhosted/secrets b/examples/quickstart-redpanda-selfhosted/secrets
deleted file mode 120000
index 3608c18f..00000000
--- a/examples/quickstart-redpanda-selfhosted/secrets
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/secrets/
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-selfhosted/start.sh b/examples/quickstart-redpanda-selfhosted/start.sh
deleted file mode 120000
index dd868ce4..00000000
--- a/examples/quickstart-redpanda-selfhosted/start.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-selfhosted/stop.sh b/examples/quickstart-redpanda-selfhosted/stop.sh
deleted file mode 120000
index 515a8a07..00000000
--- a/examples/quickstart-redpanda-selfhosted/stop.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-selfhosted/web b/examples/quickstart-redpanda-selfhosted/web
deleted file mode 120000
index 38321676..00000000
--- a/examples/quickstart-redpanda-selfhosted/web
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/web/
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-serverless/log4j.properties b/examples/quickstart-redpanda-serverless/log4j.properties
deleted file mode 120000
index 439ae5ab..00000000
--- a/examples/quickstart-redpanda-serverless/log4j.properties
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-serverless/start.sh b/examples/quickstart-redpanda-serverless/start.sh
deleted file mode 120000
index dd868ce4..00000000
--- a/examples/quickstart-redpanda-serverless/start.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-serverless/stop.sh b/examples/quickstart-redpanda-serverless/stop.sh
deleted file mode 120000
index 515a8a07..00000000
--- a/examples/quickstart-redpanda-serverless/stop.sh
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-serverless/web b/examples/quickstart-redpanda-serverless/web
deleted file mode 120000
index 38321676..00000000
--- a/examples/quickstart-redpanda-serverless/web
+++ /dev/null
@@ -1 +0,0 @@
-../compose-templates/web/
\ No newline at end of file
diff --git a/examples/quickstart-schema-registry/README.md b/examples/quickstart-schema-registry/README.md
index 04899da1..2a2e8191 100644
--- a/examples/quickstart-schema-registry/README.md
+++ b/examples/quickstart-schema-registry/README.md
@@ -1,27 +1,31 @@
# Quick Start with Schema Registry
-This folder contains a variant of the [_Quick Start SSL_](../quickstart-ssl/README.md#quick-start-ssl) app configured to use the _Confluent Schema Registry_.
+This folder contains a variant of the [_Quick Start SSL_](../../../quickstart-ssl/README.md#quick-start-ssl) app configured to use the _Confluent Schema Registry_.
The [docker-compose.yml](docker-compose.yml) file has been revised to configure the integration with [_Confluent Docker Image for Schema Registry_](https://hub.docker.com/r/confluentinc/cp-schema-registry) as follows:
-- new `schema-registry` service, pulled from the mentioned Docker image and configured with security settings
+- New `schema-registry` service, pulled from the mentioned Docker image and configured with security settings.
- _kafka-connector_:
- adaption of [`adapters.xml`](./adapters.xml) to include:
- - enabling of the Schema Registry:
+ Adaptation of [`adapters.xml`](./adapters.xml) to include the following changes:
+
+ - Enabling of the Schema Registry:
```xml
<param name="record.value.evaluator.schema.registry.enable">true</param>
```
- - configuration of the target Schema Registry URL:
+
+ - Configuration of the target Schema Registry URL:
```xml
<param name="schema.registry.url">https://schema-registry:8084</param>
```
- - configuration of the trust store to authenticate the Schema Registry.
+
+ - Configuration of the trust store to authenticate the Schema Registry:
```xml
<param name="schema.registry.encryption.truststore.path">secrets/kafka-connector.truststore.jks</param>
<param name="schema.registry.encryption.truststore.password">kafka-connector-truststore-password</param>
```
- - configuration of the key store for client authentication with the Schema Registry.
+
+ - Configuration of the key store for client authentication with the Schema Registry:
```xml
<param name="schema.registry.encryption.keystore.enable">true</param>
<param name="schema.registry.encryption.keystore.path">secrets/kafka-connector.keystore.jks</param>
@@ -30,7 +34,7 @@ The [docker-compose.yml](docker-compose.yml) file has been revised to configure
```
- _producer_:
- extension of the `producer.properties` configuration file with the settings required to communicate with the Schema Registry:
+ Extension of the `producer.properties` configuration file with the settings required to communicate with the Schema Registry:
```yaml
...
@@ -47,10 +51,10 @@ The [docker-compose.yml](docker-compose.yml) file has been revised to configure
schema.registry.ssl.key.password=producer-private-key-password
```
-In addition, the `schema-registry` service references the local [`secrets/schema-registry`](../compose-templates/secrets/schema-registry/) folder to retrieve its secrets:
+In addition, the `schema-registry` service references the local [`secrets/schema-registry`](./secrets/schema-registry/) folder to retrieve the following secrets:
-- the trust store file [`schema-registry.truststore.jks`](../compose-templates/secrets/schema-registry/schema-registry.truststore.jks);
-- the key store file [`schema-registry.keystore.jks`](../compose-templates/secrets/schema-registry/schema-registry.keystore.jks);
+- The trust store file [`schema-registry.truststore.jks`](../../../compose-templates/secrets/schema-registry/schema-registry.truststore.jks)
+- The key store file [`schema-registry.keystore.jks`](../../../compose-templates/secrets/schema-registry/schema-registry.keystore.jks)
You can regenerate all of them with:
@@ -60,4 +64,4 @@ $ ./generate-secrets.sh
## Run
-From this directory, follow the same instructions you can find in the [Quick Start](../../README.md#run) section of the main README file.
+From this directory, follow the same instructions you can find in the [Quick Start](../../../../README.md#run) section of the main README file.
diff --git a/examples/quickstart-schema-registry/docker b/examples/quickstart-schema-registry/docker
new file mode 120000
index 00000000..233372ad
--- /dev/null
+++ b/examples/quickstart-schema-registry/docker
@@ -0,0 +1 @@
+../../../compose-templates/docker
\ No newline at end of file
diff --git a/examples/quickstart-schema-registry/docker-compose.yml b/examples/quickstart-schema-registry/docker-compose.yml
index d5088192..6c1411ed 100644
--- a/examples/quickstart-schema-registry/docker-compose.yml
+++ b/examples/quickstart-schema-registry/docker-compose.yml
@@ -22,7 +22,7 @@ services:
- broker
- schema-registry
build:
- context: ../quickstart-producer
+ context: quickstart-producer
args:
VERSION: ${version}
configs:
@@ -63,7 +63,7 @@ services:
KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
- KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
+ KAFKA_REST_LISTENERS: 'http://0.0.0.0:8082'
volumes:
- ./secrets/broker:/etc/kafka/secrets
diff --git a/examples/quickstart-schema-registry/helpers.sh b/examples/quickstart-schema-registry/helpers.sh
new file mode 120000
index 00000000..403793f5
--- /dev/null
+++ b/examples/quickstart-schema-registry/helpers.sh
@@ -0,0 +1 @@
+../../../compose-templates/helpers.sh
\ No newline at end of file
diff --git a/examples/quickstart-schema-registry/log4j.properties b/examples/quickstart-schema-registry/log4j.properties
index 439ae5ab..0fa46f55 120000
--- a/examples/quickstart-schema-registry/log4j.properties
+++ b/examples/quickstart-schema-registry/log4j.properties
@@ -1 +1 @@
-../compose-templates/log4j.properties
\ No newline at end of file
+../../../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/quickstart-schema-registry/quickstart-producer b/examples/quickstart-schema-registry/quickstart-producer
new file mode 120000
index 00000000..9e537bcd
--- /dev/null
+++ b/examples/quickstart-schema-registry/quickstart-producer
@@ -0,0 +1 @@
+../../../quickstart-producer/
\ No newline at end of file
diff --git a/examples/quickstart-schema-registry/secrets b/examples/quickstart-schema-registry/secrets
index 3608c18f..1260813e 120000
--- a/examples/quickstart-schema-registry/secrets
+++ b/examples/quickstart-schema-registry/secrets
@@ -1 +1 @@
-../compose-templates/secrets/
\ No newline at end of file
+../../../compose-templates/secrets
\ No newline at end of file
diff --git a/examples/quickstart-schema-registry/start.sh b/examples/quickstart-schema-registry/start.sh
index dd868ce4..e67fc61f 120000
--- a/examples/quickstart-schema-registry/start.sh
+++ b/examples/quickstart-schema-registry/start.sh
@@ -1 +1 @@
-../compose-templates/start.sh
\ No newline at end of file
+../../../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/quickstart-schema-registry/stop.sh b/examples/quickstart-schema-registry/stop.sh
index 515a8a07..c08155cc 120000
--- a/examples/quickstart-schema-registry/stop.sh
+++ b/examples/quickstart-schema-registry/stop.sh
@@ -1 +1 @@
-../compose-templates/stop.sh
\ No newline at end of file
+../../../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/quickstart-schema-registry/web b/examples/quickstart-schema-registry/web
index 38321676..7fe2f7a7 120000
--- a/examples/quickstart-schema-registry/web
+++ b/examples/quickstart-schema-registry/web
@@ -1 +1 @@
-../compose-templates/web/
\ No newline at end of file
+../../../compose-templates/web/
\ No newline at end of file
diff --git a/examples/quickstart-ssl/README.md b/examples/quickstart-ssl/README.md
index c7eccbf6..8ff80647 100644
--- a/examples/quickstart-ssl/README.md
+++ b/examples/quickstart-ssl/README.md
@@ -5,8 +5,8 @@ This folder contains a variant of the [_Quick Start_](../../README.md#quick-star
The [docker-compose.yml](docker-compose.yml) file has been revised to enable support for SSL, as follows:
- _broker_:
- - enabling of SSL enabled on port `29094`
- - definition of new environment variables to configure key store, trust store, client authentication, and secrets:
+ - Enabling of SSL on port `29094`
+ - Definition of new environment variables to configure key store, trust store, client authentication, and secrets:
- `KAFKA_SSL_TRUSTSTORE_FILENAME`
- `KAFKA_SSL_TRUSTSTORE_CREDENTIALS`
- `KAFKA_SSL_KEYSTORE_FILENAME`
@@ -15,32 +15,37 @@ The [docker-compose.yml](docker-compose.yml) file has been revised to enable sup
- `KAFKA_SSL_CLIENT_AUTH`
- _kafka-connector_:
- adaption of [`adapters.xml`](./adapters.xml) to include:
- - new SSL endpoint (`broker:29094`):
+ Adaptation of [`adapters.xml`](./adapters.xml) to include the following changes:
+
+ - Update of the parameter `bootstrap.servers` to the new SSL endpoint (`broker:29094`):
```xml
<param name="bootstrap.servers">broker:29094</param>
```
- - encryption settings:
+
+ - Configuration of the encryption settings:
```xml
<param name="encryption.enable">true</param>
<param name="encryption.protocol">TLSv1.2</param>
<param name="encryption.hostname.verification.enable">false</param>
```
- - configuration of the trust store to authenticate the broker:
+
+ - Configuration of the trust store to authenticate the broker:
```xml
<param name="encryption.truststore.path">secrets/kafka.connector.truststore.jks</param>
<param name="encryption.truststore.password">kafka-connector-truststore-password</param>
```
- - configuration of the key store for client authentication with the broker:
+
+ - Configuration of the key store for client authentication with the broker:
```xml
<param name="encryption.keystore.enable">true</param>
<param name="encryption.keystore.path">secrets/kafka-connector.keystore.jks</param>
<param name="encryption.keystore.password">kafka-connector-password</param>
<param name="encryption.keystore.key.password">kafka-connector-private-keypassword</param>
```
+
- _producer_:
- - parameter `--bootstrap-servers` set to the new SSL endpoint (`broker:29094`)
- - provisioning of the `producer.properties` configuration file to enable SSL support:
+ - Update of the parameter `--bootstrap-servers` to the new SSL endpoint (`broker:29094`)
+ - Provisioning of the `producer.properties` configuration file to enable SSL support:
```yaml
# Enable SSL
security.protocol=SSL
@@ -57,18 +62,18 @@ The [docker-compose.yml](docker-compose.yml) file has been revised to enable sup
In addition, all services reference the local [`secrets`](../compose-templates/secrets/) folder to retrieve their secrets. In particular:
-- _broker_ mounts [`secrets/broker`](../compose-templates/secrets/broker/) to `/etc/kafka/secrets` for:
- - the trust store file [`broker.truststore.jks`](../compose-templates/secrets/broker/broker.truststore.jks)
- - the key store file [`broker.keystore.jks`](../compose-templates/secrets/broker/broker.keystore.jks)
- - the credentials files [`broker_keystore_credentials`](../compose-templates/secrets/broker/broker_keystore_credentials) and [`broker_key_credentials`](../compose-templates/secrets/broker/broker_key_credentials)
+- _broker_ mounts [`secrets/broker`](../compose-templates/secrets/broker/) to `/etc/kafka/secrets` for the following resources:
+ - The trust store file [`broker.truststore.jks`](../compose-templates/secrets/broker/broker.truststore.jks)
+ - The key store file [`broker.keystore.jks`](../compose-templates/secrets/broker/broker.keystore.jks)
+ - The credentials files [`broker_keystore_credentials`](../compose-templates/secrets/broker/broker_keystore_credentials) and [`broker_key_credentials`](../compose-templates/secrets/broker/broker_key_credentials)
-- _kafka-connector_ mounts [`secrets/kafka-connector`](../compose-templates/secrets/kafka-connector/) to `LS_KAFKA_CONNECTOR_HOME/secrets` for:
- - the trust store file [`kafka-connector.truststore.jks`](../compose-templates/secrets/kafka-connector/kafka-connector.truststore.jks)
- - the key store file [`kafka-connector.keystore.jks`](../compose-templates/secrets/kafka-connector/kafka-connector.keystore.jks)
+- _kafka-connector_ mounts [`secrets/kafka-connector`](../compose-templates/secrets/kafka-connector/) to `LS_KAFKA_CONNECTOR_HOME/secrets` for the following resources:
+ - The trust store file [`kafka-connector.truststore.jks`](../compose-templates/secrets/kafka-connector/kafka-connector.truststore.jks)
+ - The key store file [`kafka-connector.keystore.jks`](../compose-templates/secrets/kafka-connector/kafka-connector.keystore.jks)
-- _producer_ mounts [`secrets/producer`](../compose-templates/secrets/producer/) to `/usr/app/secrets` for:
- - the trust store file [`producer.truststore.jks`](../compose-templates/secrets/producer/producer.truststore.jks)
- - the key store file [`producer.keystore.jks`](../compose-templates/secrets/producer/producer.keystore.jks)
+- _producer_ mounts [`secrets/producer`](../compose-templates/secrets/producer/) to `/usr/app/secrets` for the following resources:
+ - The trust store file [`producer.truststore.jks`](../compose-templates/secrets/producer/producer.truststore.jks)
+ - The key store file [`producer.keystore.jks`](../compose-templates/secrets/producer/producer.keystore.jks)
You can regenerate all of them with:
diff --git a/examples/quickstart-ssl/docker b/examples/quickstart-ssl/docker
new file mode 120000
index 00000000..880b1623
--- /dev/null
+++ b/examples/quickstart-ssl/docker
@@ -0,0 +1 @@
+../compose-templates/docker
\ No newline at end of file
diff --git a/examples/quickstart-ssl/docker-compose.yml b/examples/quickstart-ssl/docker-compose.yml
index c2a7cf6d..7cc57282 100644
--- a/examples/quickstart-ssl/docker-compose.yml
+++ b/examples/quickstart-ssl/docker-compose.yml
@@ -61,7 +61,7 @@ services:
KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
- KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
+ KAFKA_REST_LISTENERS: 'http://0.0.0.0:8082'
volumes:
- ./secrets/broker:/etc/kafka/secrets
diff --git a/examples/quickstart-ssl/helpers.sh b/examples/quickstart-ssl/helpers.sh
new file mode 120000
index 00000000..d97f9467
--- /dev/null
+++ b/examples/quickstart-ssl/helpers.sh
@@ -0,0 +1 @@
+../compose-templates/helpers.sh
\ No newline at end of file
diff --git a/examples/quickstart/docker b/examples/quickstart/docker
new file mode 120000
index 00000000..880b1623
--- /dev/null
+++ b/examples/quickstart/docker
@@ -0,0 +1 @@
+../compose-templates/docker
\ No newline at end of file
diff --git a/examples/quickstart/docker-compose.yml b/examples/quickstart/docker-compose.yml
index 27e637f7..6984873a 100644
--- a/examples/quickstart/docker-compose.yml
+++ b/examples/quickstart/docker-compose.yml
@@ -46,4 +46,4 @@ services:
KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
KAFKA_REST_HOST_NAME: rest-proxy
KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
- KAFKA_REST_LISTENERS: "http://0.0.0.0:8082"
+ KAFKA_REST_LISTENERS: 'http://0.0.0.0:8082'
diff --git a/examples/quickstart/helpers.sh b/examples/quickstart/helpers.sh
new file mode 120000
index 00000000..d97f9467
--- /dev/null
+++ b/examples/quickstart/helpers.sh
@@ -0,0 +1 @@
+../compose-templates/helpers.sh
\ No newline at end of file
diff --git a/examples/utils/helpers.sh b/examples/utils/helpers.sh
index 5a046649..fa8efbff 100755
--- a/examples/utils/helpers.sh
+++ b/examples/utils/helpers.sh
@@ -1,6 +1,5 @@
#!/bin/bash
-HELPER_DIR=$(dirname ${BASH_SOURCE[0]}) ## realpath
-projectDir="$HELPER_DIR/../../kafka-connector-project"
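+# Resolve the project directory by stripping the trailing "/examples/..." segment from the current working directory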
+projectDir="${PWD%/examples/*}/kafka-connector-project"
 # Alias to the local gradlew command
_gradle="${projectDir}/gradlew --project-dir ${projectDir}"
diff --git a/examples/quickstart-aiven/README.md b/examples/vendors/aiven/quickstart-aiven/README.md
similarity index 72%
rename from examples/quickstart-aiven/README.md
rename to examples/vendors/aiven/quickstart-aiven/README.md
index 3eb33a88..3259fefa 100644
--- a/examples/quickstart-aiven/README.md
+++ b/examples/vendors/aiven/quickstart-aiven/README.md
@@ -1,6 +1,6 @@
# Quick Start with Aiven for Apache Kafka
-This folder contains a variant of the [_Quick Start SSL_](../quickstart-ssl/README.md#quick-start-ssl) app configured to use [_Aiven for Apache Kafka_](https://aiven.io/docs/products/kafka) as the target cluster. You may follow the [_Getting started_](https://aiven.io/docs/products/kafka/get-started) on the Aiven site to perform the following operations:
+This folder contains a variant of the [_Quick Start SSL_](../../../quickstart-ssl/README.md#quick-start-ssl) app configured to use [_Aiven for Apache Kafka_](https://aiven.io/docs/products/kafka) as the target Kafka cluster. You may follow the [_Getting started_](https://aiven.io/docs/products/kafka/get-started) guide on the Aiven site to perform the following operations:
- Create a new _Apache Kafka_ service.
- Enable the SASL authentication mechanism.
@@ -12,7 +12,7 @@ This folder contains a variant of the [_Quick Start SSL_](../quickstart-ssl/READ
The [docker-compose.yml](docker-compose.yml) file has been revised to realize the integration with _Aiven for Apache Kafka_ as follows:
-- Removal of the `broker` service, because replaced by the remote cluster
+- Removal of the `broker` service, as it is replaced by the remote cluster.
- _kafka-connector_:
   - Definition of new environment variables to configure the remote endpoint and credentials in `adapters.xml` through the _variable-expansion_ feature of Lightstreamer:
```yaml
@@ -30,23 +30,22 @@ The [docker-compose.yml](docker-compose.yml) file has been revised to realize th
...
- ./secrets:/lightstreamer/adapters/lightstreamer-kafka-connector-${version}/secrets
```
- - Adaption of [`adapters.xml`](./adapters.xml) to include:
- - new Kafka cluster address retrieved from the environment variable `bootstrap_server`:
+  - Adaptation of [`adapters.xml`](./adapters.xml) to include the following:
+ - Update of the parameter `bootstrap.servers` to the environment variable `bootstrap_server`:
```xml
      <param name="bootstrap.servers">$env.bootstrap_server</param>
```
- - encryption settings, with the trust store password retrieved from the environment variable `truststore_password`
+    - Configuration of the encryption settings, with the trust store password retrieved from the environment variable `truststore_password`:
```xml
      <param name="encryption.enable">true</param>
      <param name="encryption.protocol">TLSv1.2</param>
      <param name="encryption.hostname.verification.enable">false</param>
      <param name="encryption.truststore.path">secrets/client.truststore.jks</param>
      <param name="encryption.truststore.password">$env.truststore_password</param>
-
```
- - authentication settings, with the credentials retrieved from environment variables `username` and `password`:
+ - Configuration of the authentication settings, with the credentials retrieved from environment variables `username` and `password`:
```xml
      <param name="authentication.enable">true</param>
      <param name="authentication.mechanism">SCRAM-SHA-256</param>
@@ -55,14 +54,14 @@ The [docker-compose.yml](docker-compose.yml) file has been revised to realize th
```
- _producer_:
- - mounting of the local `secrets` folder to `/usr/app/secrets` in the container:
+ - Mounting of the local `secrets` folder to `/usr/app/secrets` in the container:
```yaml
volumes:
- ./secrets:/usr/app/secrets
```
- - parameter `--boostrap-servers` retrieved from the environment variable `bootstrap_server`
- - provisioning of the `producer.properties` configuration file to enable `SASL/SCRAM` over TLS, with username, password, and trust store password retrieved from the environment variables `username`, `password`, and `truststore_password`:
+    - Update of the parameter `--bootstrap-servers` to the environment variable `bootstrap_server`
+ - Provisioning of the `producer.properties` configuration file to enable `SASL/SCRAM` over TLS, with username, password, and trust store password retrieved from the environment variables `username`, `password`, and `truststore_password`:
```yaml
# Configure SASL/SCRAM mechanism
@@ -86,8 +85,8 @@ $ bootstrap_server= username= password= tr
```
where:
-- `bootstrap_server` is the bootstrap server address of the Apache Kafka service
-- `username` and `password` are the credentials of the user automatically created from the _Aiven Console_
-- `truststore_password` is the password of the trust store file
+- `bootstrap_server` is the bootstrap server address of the Apache Kafka service.
+- `username` and `password` are the credentials of the user automatically created from the _Aiven Console_.
+- `truststore_password` is the password of the trust store file.
Then, point your browser to [http://localhost:8080/QuickStart](http://localhost:8080/QuickStart).
diff --git a/examples/quickstart-aiven/adapters.xml b/examples/vendors/aiven/quickstart-aiven/adapters.xml
similarity index 100%
rename from examples/quickstart-aiven/adapters.xml
rename to examples/vendors/aiven/quickstart-aiven/adapters.xml
diff --git a/examples/vendors/aiven/quickstart-aiven/docker b/examples/vendors/aiven/quickstart-aiven/docker
new file mode 120000
index 00000000..233372ad
--- /dev/null
+++ b/examples/vendors/aiven/quickstart-aiven/docker
@@ -0,0 +1 @@
+../../../compose-templates/docker
\ No newline at end of file
diff --git a/examples/quickstart-aiven/docker-compose.yml b/examples/vendors/aiven/quickstart-aiven/docker-compose.yml
similarity index 97%
rename from examples/quickstart-aiven/docker-compose.yml
rename to examples/vendors/aiven/quickstart-aiven/docker-compose.yml
index d2ab55f7..6b5a2a86 100644
--- a/examples/quickstart-aiven/docker-compose.yml
+++ b/examples/vendors/aiven/quickstart-aiven/docker-compose.yml
@@ -22,7 +22,7 @@ services:
producer:
container_name: producer
build:
- context: ../quickstart-producer
+ context: quickstart-producer
args:
VERSION: ${version}
configs:
diff --git a/examples/vendors/aiven/quickstart-aiven/helpers.sh b/examples/vendors/aiven/quickstart-aiven/helpers.sh
new file mode 120000
index 00000000..403793f5
--- /dev/null
+++ b/examples/vendors/aiven/quickstart-aiven/helpers.sh
@@ -0,0 +1 @@
+../../../compose-templates/helpers.sh
\ No newline at end of file
diff --git a/examples/vendors/aiven/quickstart-aiven/log4j.properties b/examples/vendors/aiven/quickstart-aiven/log4j.properties
new file mode 120000
index 00000000..0fa46f55
--- /dev/null
+++ b/examples/vendors/aiven/quickstart-aiven/log4j.properties
@@ -0,0 +1 @@
+../../../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/vendors/aiven/quickstart-aiven/quickstart-producer b/examples/vendors/aiven/quickstart-aiven/quickstart-producer
new file mode 120000
index 00000000..9e537bcd
--- /dev/null
+++ b/examples/vendors/aiven/quickstart-aiven/quickstart-producer
@@ -0,0 +1 @@
+../../../quickstart-producer/
\ No newline at end of file
diff --git a/examples/vendors/aiven/quickstart-aiven/start.sh b/examples/vendors/aiven/quickstart-aiven/start.sh
new file mode 120000
index 00000000..e67fc61f
--- /dev/null
+++ b/examples/vendors/aiven/quickstart-aiven/start.sh
@@ -0,0 +1 @@
+../../../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/vendors/aiven/quickstart-aiven/stop.sh b/examples/vendors/aiven/quickstart-aiven/stop.sh
new file mode 120000
index 00000000..c08155cc
--- /dev/null
+++ b/examples/vendors/aiven/quickstart-aiven/stop.sh
@@ -0,0 +1 @@
+../../../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/vendors/aiven/quickstart-aiven/web b/examples/vendors/aiven/quickstart-aiven/web
new file mode 120000
index 00000000..7fe2f7a7
--- /dev/null
+++ b/examples/vendors/aiven/quickstart-aiven/web
@@ -0,0 +1 @@
+../../../compose-templates/web/
\ No newline at end of file
diff --git a/examples/quickstart-axual/README.md b/examples/vendors/axual/quickstart-axual/README.md
similarity index 64%
rename from examples/quickstart-axual/README.md
rename to examples/vendors/axual/quickstart-axual/README.md
index 203c840f..5a408826 100644
--- a/examples/quickstart-axual/README.md
+++ b/examples/vendors/axual/quickstart-axual/README.md
@@ -1,9 +1,9 @@
# Quick Start with Axual
-This folder contains a variant of the [_Quick Start SSL_](../quickstart-ssl/README.md#quick-start-ssl) app configured to use a _shared test cluster_ from [_Axual Platform_](https://axual.com/) as the target cluster. You may follow the [_Getting started_](https://docs.axual.io/axual/2024.1/getting_started/index.html) on the Axual site to perform the following operations:
+This folder contains a variant of the [_Quick Start SSL_](../../../quickstart-ssl/README.md#quick-start-ssl) app configured to use a _shared test cluster_ from [_Axual Platform_](https://axual.com/) as the target Kafka cluster. You may follow the [_Getting started_](https://docs.axual.io/axual/2024.1/getting_started/index.html) guide on the Axual site to perform the following operations:
- Add a new topic `stocks` with:
- - string key type
+ - String key type
- JSON value type
- _delete_ retention policy
- 1 partition
@@ -19,9 +19,9 @@ This folder contains a variant of the [_Quick Start SSL_](../quickstart-ssl/READ
The [docker-compose.yml](docker-compose.yml) file has been revised to realize the integration with _Axual Platform_ as follows:
-- removal of the `broker` service, because replaced by the remote cluster
+- Removal of the `broker` service, as it is replaced by the remote cluster
- _kafka-connector_:
- - definition of new environment variables to configure remote endpoint, credentials, topic name and consumer group in the `adapters.xml` through the _variable-expansion_ feature of Lightstreamer:
+  - Definition of new environment variables to configure the remote endpoint, credentials, topic name, and consumer group in `adapters.xml` through the _variable-expansion_ feature of Lightstreamer:
```yaml
...
environment:
@@ -32,25 +32,25 @@ The [docker-compose.yml](docker-compose.yml) file has been revised to realize th
- topic_mapping=map.${topic}.to
...
```
- - adaption of [`adapters.xml`](./adapters.xml) to include:
- - new Kafka cluster address retrieved from the environment variable `bootstrap_server`:
+  - Adaptation of [`adapters.xml`](./adapters.xml) to include the following:
+ - Update of the parameter `bootstrap.servers` to the environment variable `bootstrap_server`:
```xml
      <param name="bootstrap.servers">$env.bootstrap_server</param>
```
- - the consumer group retrieved from the environment variable `group_id`
+    - Update of the parameter `group.id` to the environment variable `group_id`:
```xml
      <param name="group.id">$env.group_id</param>
```
- - encryption settings:
+ - Configuration of the encryption settings:
```xml
      <param name="encryption.enable">true</param>
      <param name="encryption.protocol">TLSv1.3</param>
      <param name="encryption.hostname.verification.enable">false</param>
```
- - authentication settings, with the credentials retrieved from environment variables `username` and `password`:
+ - Configuration of the authentication settings, with the credentials retrieved from environment variables `username` and `password`:
```xml
      <param name="authentication.enable">true</param>
      <param name="authentication.mechanism">SCRAM-SHA-256</param>
@@ -58,15 +58,15 @@ The [docker-compose.yml](docker-compose.yml) file has been revised to realize th
      <param name="authentication.password">$env.password</param>
```
-    - parameter `map.<topic>.to` built from env variable `topic_mapping`, composed from env variable `topic`
+    - Update of the parameter `map.<topic>.to` to the environment variable `topic_mapping` (which in turn is composed from the env variable `topic`):
```xml
      <param name="$env.topic_mapping">item-template.stock</param>
```
- _producer_:
- - parameter `--boostrap-servers` retrieved from the environment variable `bootstrap_server`
- - parameter `--topic` retrieved from the environment variable `topic`
- - provisioning of the `producer.properties` configuration file to enable `SASL/SCRAM` over TLS, with username and password retrieved from the environment variables `username` and `password`:
+    - Update of the parameter `--bootstrap-servers` to the environment variable `bootstrap_server`
+ - Update of the parameter `--topic` to the environment variable `topic`
+ - Provisioning of the `producer.properties` configuration file to enable `SASL/SCRAM` over TLS, with username and password retrieved from the environment variables `username` and `password`:
```yaml
# Configure SASL/SCRAM mechanism
@@ -86,10 +86,10 @@ $ bootstrap_server= group_id= username= pa
```
where:
-- `bootstrap_server` is the bootstrap server address of the Axual cluster
-- `group_id` is the consumer group ID
-- `username` and `password` are the authentication credentials
-- `topic` is the name of the topic
+- `bootstrap_server` is the bootstrap server address of the Axual cluster.
+- `group_id` is the consumer group ID.
+- `username` and `password` are the authentication credentials.
+- `topic` is the name of the topic.
> [!TIP]
> You can get the correct values for bootstrap_server, group_id, and topic by looking at the _Cluster connectivity Information_ of the `stocks-application` from the Axual _Self-service_ portal.
diff --git a/examples/quickstart-axual/adapters.xml b/examples/vendors/axual/quickstart-axual/adapters.xml
similarity index 100%
rename from examples/quickstart-axual/adapters.xml
rename to examples/vendors/axual/quickstart-axual/adapters.xml
diff --git a/examples/vendors/axual/quickstart-axual/credentials.txt b/examples/vendors/axual/quickstart-axual/credentials.txt
new file mode 100644
index 00000000..e69de29b
diff --git a/examples/vendors/axual/quickstart-axual/docker b/examples/vendors/axual/quickstart-axual/docker
new file mode 120000
index 00000000..233372ad
--- /dev/null
+++ b/examples/vendors/axual/quickstart-axual/docker
@@ -0,0 +1 @@
+../../../compose-templates/docker
\ No newline at end of file
diff --git a/examples/quickstart-axual/docker-compose.yml b/examples/vendors/axual/quickstart-axual/docker-compose.yml
similarity index 97%
rename from examples/quickstart-axual/docker-compose.yml
rename to examples/vendors/axual/quickstart-axual/docker-compose.yml
index 811d64ec..9da52c2a 100644
--- a/examples/quickstart-axual/docker-compose.yml
+++ b/examples/vendors/axual/quickstart-axual/docker-compose.yml
@@ -22,7 +22,7 @@ services:
producer:
container_name: producer
build:
- context: ../quickstart-producer
+ context: quickstart-producer
args:
VERSION: ${version}
configs:
diff --git a/examples/vendors/axual/quickstart-axual/helpers.sh b/examples/vendors/axual/quickstart-axual/helpers.sh
new file mode 120000
index 00000000..403793f5
--- /dev/null
+++ b/examples/vendors/axual/quickstart-axual/helpers.sh
@@ -0,0 +1 @@
+../../../compose-templates/helpers.sh
\ No newline at end of file
diff --git a/examples/quickstart-axual/launch.sh b/examples/vendors/axual/quickstart-axual/launch.sh
similarity index 100%
rename from examples/quickstart-axual/launch.sh
rename to examples/vendors/axual/quickstart-axual/launch.sh
diff --git a/examples/vendors/axual/quickstart-axual/log4j.properties b/examples/vendors/axual/quickstart-axual/log4j.properties
new file mode 120000
index 00000000..0fa46f55
--- /dev/null
+++ b/examples/vendors/axual/quickstart-axual/log4j.properties
@@ -0,0 +1 @@
+../../../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/vendors/axual/quickstart-axual/quickstart-producer b/examples/vendors/axual/quickstart-axual/quickstart-producer
new file mode 120000
index 00000000..9e537bcd
--- /dev/null
+++ b/examples/vendors/axual/quickstart-axual/quickstart-producer
@@ -0,0 +1 @@
+../../../quickstart-producer/
\ No newline at end of file
diff --git a/examples/vendors/axual/quickstart-axual/start.sh b/examples/vendors/axual/quickstart-axual/start.sh
new file mode 120000
index 00000000..e67fc61f
--- /dev/null
+++ b/examples/vendors/axual/quickstart-axual/start.sh
@@ -0,0 +1 @@
+../../../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/vendors/axual/quickstart-axual/stop.sh b/examples/vendors/axual/quickstart-axual/stop.sh
new file mode 120000
index 00000000..c08155cc
--- /dev/null
+++ b/examples/vendors/axual/quickstart-axual/stop.sh
@@ -0,0 +1 @@
+../../../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/vendors/axual/quickstart-axual/web b/examples/vendors/axual/quickstart-axual/web
new file mode 120000
index 00000000..7fe2f7a7
--- /dev/null
+++ b/examples/vendors/axual/quickstart-axual/web
@@ -0,0 +1 @@
+../../../compose-templates/web/
\ No newline at end of file
diff --git a/examples/vendors/confluent/README.md b/examples/vendors/confluent/README.md
new file mode 100644
index 00000000..419b5b3e
--- /dev/null
+++ b/examples/vendors/confluent/README.md
@@ -0,0 +1,1770 @@
+
+
+# Lightstreamer Kafka Connector
+_Last-mile data streaming. Stream real-time Kafka data to mobile and web apps, anywhere. Scale Kafka to millions of clients._
+
+- [Introduction](#introduction)
+ - [Last-Mile Integration](#last-mile-integration)
+ - [Intelligent Streaming](#intelligent-streaming)
+ - [Comprehensive Client SDKs](#comprehensive-client-sdks)
+ - [Massive Scalability](#massive-scalability)
+ - [Other Features](#other-features)
+- [Architecture](#architecture)
+ - [Kafka Client vs. Kafka Connect](#kafka-client-vs-kafka-connect)
+ - [Lightstreamer Kafka Connector as a Kafka Client](#lightstreamer-kafka-connector-as-a-kafka-client)
+ - [Lightstreamer Kafka Connector as a Kafka Connect Sink Connector](#lightstreamer-kafka-connector-as-a-kafka-connect-sink-connector)
+- [QUICK START: Set up in 5 minutes](#quick-start-set-up-in-5-minutes)
+ - [Run](#run)
+- [Deployment](#deployment)
+ - [Manual Deployment](#manual-deployment)
+ - [Requirements](#requirements)
+ - [Install](#install)
+ - [Configure](#configure)
+ - [Start](#start)
+ - [Docker-based Deployment](#docker-based-deployment)
+ - [Requirements](#requirements-1)
+ - [Build the Image](#build-the-image)
+ - [Start](#start-1)
+ - [End-to-End Streaming](#end-to-end-streaming)
+- [Configuration](#configuration)
+ - [Global Settings](#global-settings)
+ - [Connection Settings](#connection-settings)
+ - [General Parameters](#general-parameters)
+ - [Encryption Parameters](#encryption-parameters)
+ - [Broker Authentication Parameters](#broker-authentication-parameters)
+ - [Record Evaluation](#record-evaluation)
+ - [Topic Mapping](#topic-mapping)
+ - [Data Extraction Language](#data-extraction-language)
+    - [Record Routing (`map.<topic>.to`)](#record-routing-maptopicto)
+    - [Record Mapping (`field.<fieldName>`)](#record-mapping-fieldfieldname)
+    - [Filtered Record Routing (`item-template.<template-name>`)](#filtered-record-routing-item-templatetemplate-name)
+ - [Schema Registry](#schema-registry)
+ - [`schema.registry.url`](#schemaregistryurl)
+    - [Basic HTTP Authentication Parameters](#basic-http-authentication-parameters)
+ - [Encryption Parameters](#encryption-parameters-1)
+ - [Quick Start Schema Registry Example](#quick-start-schema-registry-example)
+- [Customize the Kafka Connector Metadata Adapter Class](#customize-the-kafka-connector-metadata-adapter-class)
+ - [Develop the Extension](#develop-the-extension)
+- [Kafka Connect Lightstreamer Sink Connector](#kafka-connect-lightstreamer-sink-connector)
+ - [Usage](#usage)
+ - [Lightstreamer Setup](#lightstreamer-setup)
+ - [Running](#running)
+ - [Running in Docker](#running-in-docker)
+ - [Supported Converters](#supported-converters)
+ - [Configuration Reference](#configuration-reference)
+- [Docs](#docs)
+- [Examples](#examples)
+
+# Introduction
+
+Is your product struggling to deliver Kafka events to remote users? The [Lightstreamer Kafka Connector](https://lightstreamer.com/confluent/) is an intelligent proxy that bridges the gap, providing seamless, real-time data streaming to web and mobile applications with unmatched ease and reliability. It streams data in real time to your apps over WebSockets, eliminating the need for polling a REST proxy and surpassing the limitations of MQTT.
+
+## Last-Mile Integration
+
+Kafka, while powerful, isn’t designed for direct internet access—particularly when it comes to the **last mile**, the critical network segment that extends beyond enterprise boundaries and edges (LAN or WAN) to reach end users. Last-mile integration is essential for delivering real-time Kafka data to mobile, web, and desktop applications, addressing challenges that go beyond Kafka’s typical scope, such as:
+- Disruptions from corporate firewalls and client-side proxies blocking Kafka connections.
+- Performance issues due to unpredictable internet bandwidth, including packet loss and disconnections.
+- User interfaces struggling with large data volumes.
+- The need for scalable solutions capable of supporting millions of concurrent users.
+
+![High-Level Architecture](/pictures/architecture.png)
+
+## Intelligent Streaming
+
+With **Intelligent Streaming**, Lightstreamer dynamically adjusts the data flow to match each user’s network conditions, ensuring all users stay in sync regardless of connection quality. By resampling and conflating data on the fly, it delivers real-time updates with adaptive throttling, effectively handling packet loss without buffering delays. It also manages disconnections and reconnections seamlessly, keeping your users connected and up-to-date.
+
+## Comprehensive Client SDKs
+
+The rich set of supplied client libraries makes it easy to consume real-time Kafka data across a variety of platforms and languages.
+
+![Client APIs](/pictures/client-platforms.png)
+
+## Massive Scalability
+
+Connect millions of clients without compromising performance. Fan out real-time messages published on Kafka topics efficiently, preventing overload on the Kafka brokers. Check out the [load tests performed on the Lightstreamer Kafka Connector vs. plain Kafka](https://github.com/Lightstreamer/lightstreamer-kafka-connector-loadtest).
+
+## Other Features
+
+The Lightstreamer Kafka Connector provides a wide range of powerful features, including firewall and proxy traversal, server-side filtering, advanced topic mapping, record evaluation, Schema Registry support, push notifications, and maximum security. [Explore more details](https://lightstreamer.com/confluent/).
+
+# Architecture
+
+![Architecture](/pictures/architecture-full-confluent.png)
+
+The Lightstreamer Kafka Connector seamlessly integrates the [Lightstreamer Broker](https://lightstreamer.com/products/lightstreamer/) with [Confluent Cloud](https://www.confluent.io/confluent-cloud/?utm_campaign=tm.pmm_cd.cwc_partner_Lightstreamer_generic&utm_source=Lightstreamer&utm_medium=partnerref) and Confluent Platform. While existing producers and consumers continue connecting directly to the Kafka broker, internet-based applications connect through the Lightstreamer Broker, which efficiently handles last-mile data delivery. Authentication and authorization for internet-based clients are managed via a custom Metadata Adapter, created using the [Metadata Adapter API Extension](#customize-the-kafka-connector-metadata-adapter-class) and integrated into the Lightstreamer Broker.
+
+Both the Kafka Connector and the Metadata Adapter run in-process with the Lightstreamer Broker, which can be deployed in the cloud or on-premises.
+
+## Kafka Client vs. Kafka Connect
+
+The Lightstreamer Kafka Connector can operate in two distinct modes: as a direct Kafka client or as a Kafka Connect connector.
+
+### Lightstreamer Kafka Connector as a Kafka Client
+
+In this mode, the Lightstreamer Kafka Connector uses the Kafka client API to communicate directly with the Kafka broker. This approach is typically lighter, faster, and more scalable, as it avoids the additional layer of Kafka Connect. All sections of this documentation refer to this mode, except for the section specifically dedicated to the Sink Connector.
+
+### Lightstreamer Kafka Connector as a Kafka Connect Sink Connector
+
+In this mode, the Lightstreamer Kafka Connector integrates with the Kafka Connect framework, acting as a sink connector. While this introduces an additional messaging layer, there are scenarios where the standardized deployment provided by Kafka Connect is required. For more details on using the Lightstreamer Kafka Connector as a Kafka Connect sink connector, please refer to this section: [Kafka Connect Lightstreamer Sink Connector](#kafka-connect-lightstreamer-sink-connector).
+
+# QUICK START: Set up in 5 minutes
+
+To efficiently showcase the functionalities of the Lightstreamer Kafka Connector, we have prepared an accessible quickstart application located in the [`examples/vendors/confluent/quickstart-confluent/`](/examples/vendors/confluent/quickstart-confluent/) directory. This streamlined application facilitates real-time streaming of data from a Kafka topic directly to a web interface. It leverages a modified version of the [Stock List Demo](https://github.com/Lightstreamer/Lightstreamer-example-StockList-client-javascript?tab=readme-ov-file#basic-stock-list-demo---html-client), specifically adapted to demonstrate Kafka integration. This setup is designed for rapid comprehension, enabling you to swiftly grasp and observe the connector's performance in a real-world scenario.
+
+![Quickstart Diagram](/pictures/quickstart-diagram.png)
+
+The diagram above illustrates how, in this setup, a stream of simulated market events is channeled from Kafka to the web client via the Lightstreamer Kafka Connector.
+
+To provide a complete stack, the app is based on _Docker Compose_. The [Docker Compose file](/examples/vendors/confluent/quickstart-confluent/docker-compose.yml) comprises the following services:
+
+1. _broker_: the Kafka broker, based on the [Official Confluent Docker Image for Kafka (Community Version)](https://hub.docker.com/r/confluentinc/cp-kafka)
+2. _kafka-connector_: Lightstreamer Server with the Kafka Connector, based on the [Lightstreamer Kafka Connector Docker image example](/examples/docker/), which also includes a web client mounted on `/lightstreamer/pages/QuickStart`
+3. _producer_: a native Kafka Producer, based on the provided [`Dockerfile`](/examples/quickstart-producer/Dockerfile) file from the [`quickstart-producer`](/examples/quickstart-producer/) sample client
+
+## Run
+
+1. Make sure you have Docker, Docker Compose, and a JDK (Java Development Kit) v17 or newer installed on your local machine.
+2. From the [`examples/vendors/confluent/quickstart-confluent/`](/examples/vendors/confluent/quickstart-confluent/) folder, run the following:
+
+ ```sh
+ $ ./start.sh
+ ...
+ ⠏ Network quickstart_default Created
+ ✔ Container broker Started
+ ✔ Container producer Started
+ ✔ Container kafka-connector Started
+ ...
+ Services started. Now you can point your browser to http://localhost:8080/QuickStart to see real-time data.
+ ...
+ ```
+
+3. Once all containers are ready, point your browser to [http://localhost:8080/QuickStart](http://localhost:8080/QuickStart).
+
+4. After a few moments, the user interface starts displaying the real-time stock data.
+
+ ![Demo](/pictures/quickstart.gif)
+
+5. To shut down Docker Compose and clean up all temporary resources:
+
+ ```sh
+ $ ./stop.sh
+ ```
+
+# Deployment
+
+This section will guide you through deploying the Kafka Connector quickly and easily with [Confluent Cloud](https://www.confluent.io/confluent-cloud/?utm_campaign=tm.pmm_cd.cwc_partner_Lightstreamer_generic&utm_source=Lightstreamer&utm_medium=partnerref).
+
+Deployment options:
+
+- **Manual Deployment:**
+ Download and configure the Lightstreamer Broker and Kafka Connector from their respective archives.
+
+- **Docker-based Deployment:**
+ Build and configure a Docker image that seamlessly integrates the Lightstreamer Broker and the Kafka Connector.
+
+In both cases, you'll need a [Confluent Cloud](https://www.confluent.io/confluent-cloud/tryfree/?utm_campaign=tm.pmm_cd.cwc_partner_Lightstreamer_tryfree&utm_source=Lightstreamer&utm_medium=partnerref) account.
+
+> [!TIP]
+> Don't have a Confluent Cloud account yet? Start [your free trial of Confluent Cloud](https://www.confluent.io/confluent-cloud/tryfree/?utm_campaign=tm.pmm_cd.cwc_partner_Lightstreamer_tryfree&utm_source=Lightstreamer&utm_medium=partnerref) today. New signups receive $400 to spend during their first 30 days.
+
+## Manual Deployment
+
+### Requirements
+
+- JDK (Java Development Kit) v17 or newer
+- [Lightstreamer Broker](https://lightstreamer.com/download/) (also referred to as _Lightstreamer Server_) v7.4.2 or newer. Follow the installation instructions in the `LS_HOME/GETTING_STARTED.TXT` file included in the downloaded package.
+
+### Install
+
+Download the deployment archive `lightstreamer-kafka-connector-.zip` from the [Releases](https://github.com/Lightstreamer/Lightstreamer-kafka-connector/releases/) page. Alternatively, check out this repository and execute the following command from the [`kafka-connector-project`](/kafka-connector-project/) folder:
+
+```sh
+$ ./gradlew adapterDistZip
+```
+
+which generates the archive file under the `kafka-connector-project/kafka-connector/build/distributions` folder.
+
+Then, unpack it into the `adapters` folder of the Lightstreamer Server installation:
+
+```sh
+$ unzip lightstreamer-kafka-connector-.zip -d LS_HOME/adapters
+```
+
+Finally, check that the Lightstreamer layout looks like the following:
+
+```sh
+LS_HOME/
+...
+├── adapters
+│ ├── lightstreamer-kafka-connector-
+│ │ ├── LICENSE
+│ │ ├── README.md
+│ │ ├── adapters.xml
+│ │ ├── javadoc
+│ │ ├── lib
+│ │ ├── log4j.properties
+│ └── welcome_res
+...
+├── audit
+├── bin
+...
+```
+
+### Configure
+
+Before starting the Kafka Connector, you need to properly configure the `LS_HOME/adapters/lightstreamer-kafka-connector-/adapters.xml` file. For convenience, the package comes with a predefined configuration (the same used in the [_Quick Start_](#quick-start-set-up-in-5-minutes) app), which can be customized in all its aspects as per your requirements. Of course, you may add as many different connection configurations as desired to fit your needs.
+
+To quickly complete the installation and verify the successful integration with Kafka, edit the _data_provider_ block `QuickStartConfluentCloud` in the file as follows:
+
+- Update the [`bootstrap.servers`](#bootstrapservers) parameter with the connection string of Kafka:
+
+ ```xml
+   <param name="bootstrap.servers">kafka.connection.string</param>
+ ```
+
+- Update the [`authentication.username` and `authentication.password`](#authenticationmechanism) parameters with the API key and API secret linked to your Confluent Cloud account:
+
+ ```xml
+   <param name="authentication.username">API.key</param>
+   <param name="authentication.password">API.secret</param>
+ ```
+
+- Configure topic and record mapping.
+
+ To enable a generic Lightstreamer client to receive real-time updates, it needs to subscribe to one or more items. Therefore, the Kafka Connector provides suitable mechanisms to map Kafka topics to Lightstreamer items effectively.
+
+ The `QuickStartConfluentCloud` [factory configuration](/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml#L419) comes with a straightforward mapping defined through the following settings:
+
+ - An item template:
+ ```xml
+     <param name="item-template.stock">stock-#{index=KEY}</param>
+ ```
+
+    which defines the general format name of the items a client must subscribe to in order to receive updates from the Kafka Connector. The [_extraction expression_](#filtered-record-routing-item-templatetemplate-name) syntax used here, denoted within `#{...}`, permits clients to specify filtering values to be compared against the actual contents of a Kafka record, evaluated through [_Extraction Keys_](#record-mapping-fieldfieldname) used to extract each part of a record. In this case, the `KEY` predefined constant extracts the key part of Kafka records.
+
+ - A topic mapping:
+ ```xml
+     <param name="map.stocks.to">item-template.stock</param>
+ ```
+ which maps the topic `stocks` to the provided item template.
+
+    This configuration instructs the Kafka Connector to analyze every single event published to the topic `stocks` and check if it matches any item subscribed to by a client, such as:
+
+ - `stock-[index=1]`: an item with the `index` parameter bound to a record key equal to `1`
+ - `stock-[index=2]`: an item with the `index` parameter bound to a record key equal to `2`
+ - ...
+
+ The Kafka Connector will then route the event to all matched items.
+
+ In addition, the following section defines how to map the record to the tabular form of Lightstreamer fields, by using the aforementioned _Extraction Keys_. In this case, the `VALUE` predefined constant extracts the value part of Kafka records.
+
+ ```xml
+  <param name="field.stock_name">#{VALUE.name}</param>
+  <param name="field.last_price">#{VALUE.last_price}</param>
+  <param name="field.ask">#{VALUE.ask}</param>
+  <param name="field.ask_quantity">#{VALUE.ask_quantity}</param>
+  <param name="field.bid">#{VALUE.bid}</param>
+  <param name="field.bid_quantity">#{VALUE.bid_quantity}</param>
+  <param name="field.pct_change">#{VALUE.pct_change}</param>
+  <param name="field.min">#{VALUE.min}</param>
+  <param name="field.max">#{VALUE.max}</param>
+  <param name="field.ref_price">#{VALUE.ref_price}</param>
+  <param name="field.open_price">#{VALUE.open_price}</param>
+  <param name="field.item_status">#{VALUE.item_status}</param>
+ ```
+
+ This way, the routed event is transformed into a flat structure, which can be forwarded to the clients.
+
+- Optionally, customize the `LS_HOME/adapters/lightstreamer-kafka-connector-/log4j.properties` file (the current settings produce the `quickstart-confluent.log` file).
+
+You can get more details about all possible settings in the [Configuration](#configuration) section.
+
+### Start
+
+To start the Kafka Connector, run the following from the `LS_HOME/bin/unix-like` directory:
+
+ ```sh
+ $ ./background_start.sh
+ ```
+
+Then, point your browser to [http://localhost:8080](http://localhost:8080) and see a welcome page with some demos running out of the box.
+
+## Docker-based Deployment
+
+### Requirements
+
+- JDK (Java Development Kit) v17 or newer
+- Docker
+
+### Build the Image
+
+To build the Docker Image of the Lightstreamer Kafka Connector, follow the steps:
+
+1. Copy the factory [adapters.xml](/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml) file into the [examples/docker/resources](/examples/docker/resources) folder.
+
+2. Customize the file by editing the _data_provider_ block `QuickStartConfluentCloud` as explained in the previous [Configure](#configure) section.
+
+3. Optionally, provide a minimal version of the `log4j.properties` file similar to the following:
+
+ ```java
+ log4j.logger.com.lightstreamer.kafka.adapters.pub.KafkaConnectorMetadataAdapter
+ log4j.logger.org.apache.kafka=WARN, stdout
+ log4j.appender.stdout=org.apache.log4j.ConsoleAppender
+ log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
+ log4j.appender.stdout.layout.ConversionPattern=[%d] [%-10c{1}] %-5p %m%n
+ log4j.appender.stdout.Target=System.out
+
+ # QuickStartConfluentCloud logger
+ log4j.logger.QuickStartConfluentCloud=INFO, stdout
+ ```
+
+ and put it in the [examples/docker/resources](/examples/docker/resources) folder.
+
+4. Run the following from the [examples/docker](/examples/docker/) directory:
+
+ ```sh
+ $ ./build.sh
+ ```
+
+For more insights on creating a Docker Image for the Lightstreamer Kafka Connector, check out [examples/docker](/examples/docker/).
+
+### Start
+
+To launch the container, run the following from the [examples/docker](/examples/docker/) directory:
+
+```sh
+$ docker run --name kafka-connector -d -p 8080:8080 lightstreamer-kafka-connector-
+```
+
+Then, point your browser to [http://localhost:8080](http://localhost:8080) and see a welcome page with some demos running out of the box.
+
+## End-to-End Streaming
+
+After successfully launching the Lightstreamer Kafka Connector, whether manually or using Docker, it's time to connect a Lightstreamer consumer and a Kafka producer to observe a basic _end-to-end_ streaming flow in action.
+
+### 1. Attach a Lightstreamer consumer
+
+ The [`kafka-connector-utils`](/kafka-connector-project/kafka-connector-utils) submodule hosts a simple Lightstreamer Java client that can be used to test the consumption of Kafka events from any Kafka topics.
+
+ Before launching the consumer, you first need to build it from the [`kafka-connector-project`](/kafka-connector-project/) folder with the command:
+
+ ```sh
+ $ ./gradlew kafka-connector-utils:build
+ ```
+
+ which generates the `lightstreamer-kafka-connector-utils-consumer-all-.jar` file under the `kafka-connector-project/kafka-connector-utils/build/libs` folder.
+
+ Then, launch it with:
+
+ ```sh
+ $ java -jar kafka-connector-utils/build/libs/lightstreamer-kafka-connector-utils-consumer-all-.jar --address http://localhost:8080 --adapter-set KafkaConnector --data-adapter QuickStartConfluentCloud --items stock-[index=1],stock-[index=2],stock-[index=3] --fields stock_name,ask,bid,min,max
+ ```
+
+ As you can see, you have to specify a few parameters:
+
+ - `--address`: the Lightstreamer Server address
+  - `--adapter-set`: the name of the requested Adapter Set, which triggers Lightstreamer to activate the Kafka Connector deployed into the `adapters` folder
+ - `--data-adapter`: the name of the requested Data Adapter, which identifies the selected Kafka connection configuration
+ - `--items`: the list of items to subscribe to
+ - `--fields`: the list of requested fields for the items
+
+ > [!NOTE]
+ > While we've provided examples in JavaScript (suitable for web browsers) and Java (geared towards desktop applications), you are encouraged to utilize any of the [Lightstreamer client SDKs](https://lightstreamer.com/download/#client-sdks) for developing clients in other environments, including iOS, Android, Python, and more.
+
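+   If you prefer a browser-based consumer, the following minimal sketch subscribes to the same items with the Lightstreamer Web Client SDK (assuming the `lightstreamer-client-web` npm package; the listener body is purely illustrative):
+
+   ```js
+   import { LightstreamerClient, Subscription } from "lightstreamer-client-web";
+
+   // Connect to the Lightstreamer Server and request the "KafkaConnector" Adapter Set
+   const client = new LightstreamerClient("http://localhost:8080", "KafkaConnector");
+
+   // Same items and fields passed to the Java consumer above
+   const sub = new Subscription(
+     "MERGE",
+     ["stock-[index=1]", "stock-[index=2]", "stock-[index=3]"],
+     ["stock_name", "ask", "bid", "min", "max"]
+   );
+   sub.setDataAdapter("QuickStartConfluentCloud");
+   sub.addListener({
+     onItemUpdate: (update) =>
+       console.log(update.getItemName(), update.getValue("bid"), update.getValue("ask"))
+   });
+   client.subscribe(sub);
+   client.connect();
+   ```
+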
+### 2. Publish the events
+
+ The [`examples/quickstart-producer`](/examples/quickstart-producer/) folder hosts a simple Kafka producer to publish simulated market events for the _QuickStart_ app.
+
+ Before launching the producer, you first need to build it. Open a new shell from the folder and execute the command:
+
+ ```sh
+ $ cd examples/quickstart-producer
+ $ ./gradlew build
+ ```
+
+ which generates the `quickstart-producer-all.jar` file under the `build/libs` folder.
+
+ Then, create a properties file that includes encryption and authentication settings, as follows:
+
+ ```java
+ security.protocol=SASL_SSL
+ sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="" password="";
+ sasl.mechanism=PLAIN
+ ...
+ ```
+
+  where you have to set `username` and `password` to the API key and API secret linked to your Confluent Cloud account, as generated on the _Confluent CLI_ or from the _Confluent Cloud Console_.
+
+ Now, launch the publisher:
+
+ ```sh
+ $ java -jar build/libs/quickstart-producer-all.jar --bootstrap-servers --topic stocks --config-file
+ ```
+
+ ![producer_video](/pictures/producer-confluent.gif)
+
+### 3. Check the consumed events
+
+ After starting the publisher, you should immediately see the real-time updates flowing from the consumer shell:
+
+ ![consumer_video](/pictures/consumer-confluent.gif)
+
+# Configuration
+
+As mentioned earlier, the Kafka Connector is a Lightstreamer Adapter Set, which means it is made up of a Metadata Adapter and one or more Data Adapters, whose settings are defined in the `LS_HOME/adapters/lightstreamer-kafka-connector-/adapters.xml` file.
+
+The following sections will guide you through the configuration details.
+
+## Global Settings
+
+### `adapters_conf['id']` - _Kafka Connector Identifier_
+
+ _Mandatory_. The `id` attribute of the `adapters_conf` root tag defines the _Kafka Connector Identifier_, which will be used by the Clients to request this Adapter Set while setting up the connection to a Lightstreamer Server through a _LightstreamerClient_ object.
+
+ The factory value is set to `KafkaConnector` for convenience, but you are free to change it as per your requirements.
+
+ Example:
+
+ ```xml
+  <adapters_conf id="KafkaConnector">
+ ```
+
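+  For instance, with the Lightstreamer Web Client SDK, a client would request this Adapter Set as follows (a minimal sketch; the server address is illustrative):
+
+  ```js
+  // The second argument must match the configured Kafka Connector Identifier
+  const client = new LightstreamerClient("http://myserver:8080", "KafkaConnector");
+  ```
+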
+### `adapter_class`
+
+_Mandatory_. The `adapter_class` tag, specified inside the _metadata_provider_ block, defines the Java class name of the Metadata Adapter.
+
+The factory value is set to `com.lightstreamer.kafka.adapters.pub.KafkaConnectorMetadataAdapter`, which implements the internal business of the Kafka Connector.
+
+It is possible to provide a custom implementation by extending this class: just package your new class in a jar file and deploy it along with all required dependencies into the `LS_HOME/adapters/lightstreamer-kafka-connector-/lib` folder.
+
+See the [Customize the Kafka Connector Metadata Adapter Class](#customize-the-kafka-connector-metadata-adapter-class) section for more details.
+
+Example:
+
+```xml
+...
+<metadata_provider>
+    ...
+    <adapter_class>your.custom.class</adapter_class>
+    ...
+</metadata_provider>
+...
+```
+
+### `logging.configuration.path`
+
+_Mandatory_. The path of the [reload4j](https://reload4j.qos.ch/) configuration file, relative to the deployment folder (`LS_HOME/adapters/lightstreamer-kafka-connector-`).
+
+The parameter is specified inside the _metadata_provider_ block.
+
+The factory value points to the predefined file `LS_HOME/adapters/lightstreamer-kafka-connector-/log4j.properties`.
+
+Example:
+
+```xml
+...
+<metadata_provider>
+    ...
+    <param name="logging.configuration.path">log4j.properties</param>
+    ...
+</metadata_provider>
+...
+```
+
+## Connection Settings
+
+The Kafka Connector allows the configuration of separate independent connections to different Kafka brokers/clusters.
+
+Every single connection is configured via the definition of its own Data Adapter through the _data_provider_ block. At least one connection must be provided.
+
+Since the Kafka Connector manages the physical connection to Kafka by wrapping an internal Kafka Consumer, several configuration settings in the Data Adapter are identical to those required by the usual Kafka Consumer configuration.
+
+### General Parameters
+
+#### `data_provider['name']` - _Kafka Connection Name_
+
+_Optional_. The `name` attribute of the `data_provider` tag defines the _Kafka Connection Name_, which will be used by the Clients to request real-time data from this specific Kafka connection through a _Subscription_ object.
+
+Furthermore, the name is also used to group all logging messages belonging to the same connection.
+
+> [!TIP]
+> For every Data Adapter connection, add a new logger and its relative file appender to `log4j.properties`, so that all the interactions pertinent to the connection with the Kafka cluster and the message retrieval operations, along with their routing to the subscribed items, are logged to dedicated files.
+> For example, the factory [logging configuration](/kafka-connector-project/kafka-connector/src/adapter/dist/log4j.properties#L30) provides the logger `QuickStartConfluentCloud` to print every log message relative to the `QuickStartConfluentCloud` connection:
+> ```java
+> ...
+> # QuickStartConfluentCloud logger
+> log4j.logger.QuickStartConfluentCloud=INFO, QuickStartConfluentCloudFile
+> log4j.appender.QuickStartConfluentCloudFile=org.apache.log4j.RollingFileAppender
+> log4j.appender.QuickStartConfluentCloudFile.layout=org.apache.log4j.PatternLayout
+> log4j.appender.QuickStartConfluentCloudFile.layout.ConversionPattern=[%d] [%-10c{1}] %-5p %m%n
+> log4j.appender.QuickStartConfluentCloudFile.File=../../logs/quickstart-confluent.log
+> ```
+
+Example:
+
+```xml
+<data_provider name="BrokerConnection">
+```
+
+Default value: `DEFAULT`, but only one `DEFAULT` configuration is permitted.
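+
+On the client side, the connection name is what a _Subscription_ must reference as its Data Adapter. A minimal sketch with the Lightstreamer Web Client SDK (item and field names are illustrative):
+
+```js
+const sub = new Subscription("MERGE", ["stock-[index=1]"], ["stock_name", "last_price"]);
+// Must match the name attribute of the target data_provider block
+sub.setDataAdapter("QuickStartConfluentCloud");
+client.subscribe(sub); // client: the LightstreamerClient instance shown earlier
+```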
+
+#### `adapter_class`
+
+_Mandatory_. The `adapter_class` tag defines the Java class name of the Data Adapter. DO NOT EDIT IT!
+
+Factory value: `com.lightstreamer.kafka.adapters.KafkaConnectorDataAdapter`.
+
+#### `enable`
+
+_Optional_. Enable this connection configuration. Can be one of the following:
+- `true`
+- `false`
+
+If disabled, Lightstreamer Server will automatically deny every subscription made to this connection.
+
+Default value: `true`.
+
+Example:
+
+```xml
+<param name="enable">false</param>
+```
+
+#### `bootstrap.servers`
+
+_Mandatory_. The Kafka Cluster bootstrap server endpoint expressed as the list of host/port pairs used to establish the initial connection.
+
+The parameter sets the value of the [`bootstrap.servers`](https://kafka.apache.org/documentation/#consumerconfigs_bootstrap.servers) key to configure the internal Kafka Consumer.
+
+Example:
+
+```xml
+<param name="bootstrap.servers">broker:29092,broker:29093</param>
+```
+
+#### `group.id`
+
+_Optional_. The name of the consumer group this connection belongs to.
+
+The parameter sets the value for the [`group.id`](https://kafka.apache.org/documentation/#consumerconfigs_group.id) key to configure the internal Kafka Consumer.
+
+Default value: _Kafka Connector Identifier_ + _Connection Name_ + _Randomly generated suffix_.
+
+Example:
+
+```xml
+<param name="group.id">kafka-connector-group</param>
+```
+
+### Encryption Parameters
+
+A secure TCP connection to Kafka is configured through parameters with the prefix `encryption`.
+
+#### `encryption.enable`
+
+_Optional_. Enable encryption of this connection. Can be one of the following:
+- `true`
+- `false`
+
+Default value: `false`.
+
+Example:
+
+```xml
+<param name="encryption.enable">true</param>
+```
+
+#### `encryption.protocol`
+
+_Optional_. The SSL protocol to be used. Can be one of the following:
+- `TLSv1.2`
+- `TLSv1.3`
+
+Default value: `TLSv1.3` when running on Java 11 or newer, `TLSv1.2` otherwise.
+
+Example:
+
+```xml
+<param name="encryption.protocol">TLSv1.2</param>
+```
+
+#### `encryption.enabled.protocols`
+
+_Optional_. The list of enabled secure communication protocols.
+
+Default value: `TLSv1.2,TLSv1.3` when running on Java 11 or newer, `TLSv1.2` otherwise.
+
+Example:
+
+```xml
+<param name="encryption.enabled.protocols">TLSv1.3</param>
+```
+
+#### `encryption.cipher.suites`
+
+_Optional_. The list of enabled secure cipher suites.
+
+Default value: all the available cipher suites in the running JVM.
+
+Example:
+
+```xml
+<param name="encryption.cipher.suites">TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA</param>
+```
+
+#### `encryption.hostname.verification.enable`
+
+_Optional_. Enable hostname verification. Can be one of the following:
+- `true`
+- `false`
+
+Default value: `false`.
+
+Example:
+
+```xml
+<param name="encryption.hostname.verification.enable">true</param>
+```
+
+#### `encryption.truststore.path`
+
+_Optional_. The path of the trust store file, relative to the deployment folder (`LS_HOME/adapters/lightstreamer-kafka-connector-`).
+
+Example:
+
+```xml
+<param name="encryption.truststore.path">secrets/kafka-connector.truststore.jks</param>
+```
+
+#### `encryption.truststore.password`
+
+_Optional_. The password of the trust store.
+
+If not set, it will not be possible to check the integrity of the configured trust store file.
+
+Example:
+
+```xml
+<param name="encryption.truststore.password">kafka-connector-truststore-password</param>
+```
+
+#### `encryption.keystore.enable`
+
+_Optional_. Enable a key store. Can be one of the following:
+- `true`
+- `false`
+
+A key store is required if mutual TLS is enabled on Kafka.
+
+If enabled, the following parameters configure the key store settings:
+
+- `encryption.keystore.path`
+- `encryption.keystore.password`
+- `encryption.keystore.key.password`
+
+Default value: `false`.
+
+Example:
+
+```xml
+<param name="encryption.keystore.enable">true</param>
+```
+
+#### `encryption.keystore.path`
+
+_Mandatory if [key store](#encryptionkeystoreenable) is enabled_. The path of the key store file, relative to the deployment folder (`LS_HOME/adapters/lightstreamer-kafka-connector-`).
+
+Example:
+
+```xml
+<param name="encryption.keystore.path">secrets/kafka-connector.keystore.jks</param>
+```
+
+#### `encryption.keystore.password`
+
+_Optional_. The password of the key store.
+
+If not set, it will not be possible to check the integrity of the configured key store file.
+
+Example:
+
+```xml
+<param name="encryption.keystore.password">keystore-password</param>
+```
+
+#### `encryption.keystore.key.password`
+
+_Optional_. The password of the private key in the key store file.
+
+Example:
+
+```xml
+<param name="encryption.keystore.key.password">kafka-connector-private-key-password</param>
+```
+
+#### Quick Start SSL Example
+
+Check out the [adapters.xml](/examples/quickstart-ssl/adapters.xml#L17) file of the [_Quick Start SSL_](/examples/quickstart-ssl/) app, where you can find an example of encryption configuration.
+
+### Broker Authentication Parameters
+
+Broker authentication is configured through parameters with the prefix `authentication`.
+
+#### `authentication.enable`
+
+_Optional_. Enable the authentication of this connection against the Kafka Cluster. Can be one of the following:
+- `true`
+- `false`
+
+Default value: `false`.
+
+Example:
+
+```xml
+<param name="authentication.enable">true</param>
+```
+
+#### `authentication.mechanism`
+
+_Mandatory if [authentication](#authenticationenable) is enabled_. The SASL mechanism type. The Kafka Connector accepts the following authentication mechanisms:
+
+- `PLAIN` (the default value)
+- `SCRAM-SHA-256`
+- `SCRAM-SHA-512`
+- `GSSAPI`
+
+In the case of `PLAIN`, `SCRAM-SHA-256`, and `SCRAM-SHA-512` mechanisms, the credentials must be configured through the following mandatory parameters (which are not allowed for `GSSAPI`):
+
+- `authentication.username`: the username
+- `authentication.password`: the password
+
+##### `PLAIN`
+
+Example:
+
+```xml
+<param name="authentication.enable">true</param>
+<param name="authentication.mechanism">PLAIN</param>
+<param name="authentication.username">authorized-kafka-user</param>
+<param name="authentication.password">authorized-kafka-user-password</param>
+```
+
+##### `SCRAM-SHA-256`
+
+Example:
+
+```xml
+<param name="authentication.enable">true</param>
+<param name="authentication.mechanism">SCRAM-SHA-256</param>
+<param name="authentication.username">authorized-kafka-user</param>
+<param name="authentication.password">authorized-kafka-user-password</param>
+```
+
+##### `SCRAM-SHA-512`
+
+Example:
+
+```xml
+<param name="authentication.enable">true</param>
+<param name="authentication.mechanism">SCRAM-SHA-512</param>
+<param name="authentication.username">authorized-kafka-username</param>
+<param name="authentication.password">authorized-kafka-username-password</param>
+```
+
+##### `GSSAPI`
+
+In the case of `GSSAPI` authentication mechanism, the following parameters will be part of the authentication configuration:
+
+- `authentication.gssapi.key.tab.enable`
+
+ _Optional_. Enable the use of a keytab. Can be one of the following:
+ - `true`
+ - `false`
+
+ Default value: `false`.
+
+- `authentication.gssapi.key.tab.path`
+
+  _Mandatory if keytab is enabled_. The path to the keytab file, relative to the deployment folder (`LS_HOME/adapters/lightstreamer-kafka-connector-`).
+
+- `authentication.gssapi.store.key.enable`
+
+ _Optional_. Enable storage of the principal key. Can be one of the following:
+ - `true`
+ - `false`
+
+ Default value: `false`.
+
+- `authentication.gssapi.kerberos.service.name`
+
+ _Mandatory_. The name of the Kerberos service.
+
+- `authentication.gssapi.principal`
+
+ _Mandatory if ticket cache is disabled_. The name of the principal to be used.
+
+- `authentication.gssapi.ticket.cache.enable`
+
+ _Optional_. Enable the use of a ticket cache. Can be one of the following:
+ - `true`
+ - `false`
+
+ Default value: `false`.
+
+Example:
+
+```xml
+...
+<param name="authentication.enable">true</param>
+<param name="authentication.mechanism">GSSAPI</param>
+<param name="authentication.gssapi.key.tab.enable">true</param>
+<param name="authentication.gssapi.key.tab.path">gssapi/kafka-connector.keytab</param>
+<param name="authentication.gssapi.store.key.enable">true</param>
+<param name="authentication.gssapi.kerberos.service.name">kafka</param>
+<param name="authentication.gssapi.principal">kafka-connector-1@LIGHTSTREAMER.COM</param>
+...
+```
+
+Example of configuration with the use of a ticket cache:
+
+```xml
+<param name="authentication.enable">true</param>
+<param name="authentication.mechanism">GSSAPI</param>
+<param name="authentication.gssapi.kerberos.service.name">kafka</param>
+<param name="authentication.gssapi.ticket.cache.enable">true</param>
+```
+
+Check out the `QuickStartConfluentCloud` [factory configuration](/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml#L432) file, where you can find an example of an authentication configuration that uses SASL/PLAIN.
+
+### Record Evaluation
+
+The Kafka Connector can deserialize Kafka records from the following formats:
+
+- _Apache Avro_
+- _JSON_
+- _String_
+- _Integer_
+- _Float_
+
+and other scalar types (see [the complete list](#recordkeyevaluatortype-and-recordvalueevaluatortype)).
+
+In particular, the Kafka Connector supports message validation for _Avro_ and _JSON_, which can be specified through:
+
+- Local schema files
+- The _Confluent Schema Registry_
+
+The Kafka Connector enables the independent deserialization of keys and values, allowing them to have different formats. Additionally:
+
+- Message validation against the Confluent Schema Registry can be enabled separately for the key and value (through [`record.key.evaluator.schema.registry.enable` and `record.value.evaluator.schema.registry.enable`](#recordkeyevaluatorschemaregistryenable-and-recordvalueevaluatorschemaregistryenable))
+- Message validation against local schema files must be specified separately for the key and the value (through [`record.key.evaluator.schema.path` and `record.value.evaluator.schema.path`](#recordkeyevaluatorschemapath-and-recordvalueevaluatorschemapath))
+
+> [!IMPORTANT]
+> For Avro, schema validation is mandatory; therefore, either a local schema file must be provided or the Confluent Schema Registry must be enabled.
+
+#### `record.consume.from`
+
+_Optional_. Specifies where to start consuming events from:
+
+- `LATEST`: start consuming events from the end of the topic partition
+- `EARLIEST`: start consuming events from the beginning of the topic partition
+
+The parameter sets the value of the [`auto.offset.reset`](https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset) key to configure the internal Kafka Consumer.
+
+Default value: `LATEST`.
+
+Example:
+
+```xml
+<param name="record.consume.from">EARLIEST</param>
+```
+
+#### `record.key.evaluator.type` and `record.value.evaluator.type`
+
+_Optional_. The format to be used to deserialize respectively the key and value of a Kafka record. Can be one of the following:
+
+- `AVRO`
+- `JSON`
+- `STRING`
+- `INTEGER`
+- `BOOLEAN`
+- `BYTE_ARRAY`
+- `BYTE_BUFFER`
+- `BYTES`
+- `DOUBLE`
+- `FLOAT`
+- `LONG`
+- `SHORT`
+- `UUID`
+
+Default value: `STRING`.
+
+Examples:
+
+```xml
+<param name="record.key.evaluator.type">INTEGER</param>
+<param name="record.value.evaluator.type">JSON</param>
+```
+
+#### `record.key.evaluator.schema.path` and `record.value.evaluator.schema.path`
+
+_Mandatory if [evaluator type](#recordkeyevaluatortype-and-recordvalueevaluatortype) is `AVRO` and the [Confluent Schema Registry](#recordkeyevaluatorschemaregistryenable-and-recordvalueevaluatorschemaregistryenable) is disabled_. The path of the local schema file relative to the deployment folder (`LS_HOME/adapters/lightstreamer-kafka-connector-`) for message validation respectively of the key and the value.
+
+Examples:
+
+```xml
+<param name="record.key.evaluator.schema.path">schema/record_key.avsc</param>
+<param name="record.value.evaluator.schema.path">schemas/record_value.avsc</param>
+```
+
+#### `record.key.evaluator.schema.registry.enable` and `record.value.evaluator.schema.registry.enable`
+
+_Mandatory if [evaluator type](#recordkeyevaluatortype-and-recordvalueevaluatortype) is `AVRO` and no [local schema paths](#recordkeyevaluatorschemapath-and-recordvalueevaluatorschemapath) are specified_. Enable the use of the [Confluent Schema Registry](#schema-registry) for validation respectively of the key and the value. Can be one of the following:
+- `true`
+- `false`
+
+Default value: `false`.
+
+Examples:
+
+```xml
+<param name="record.key.evaluator.schema.registry.enable">true</param>
+<param name="record.value.evaluator.schema.registry.enable">true</param>
+```
+
+#### `record.extraction.error.strategy`
+
+_Optional_. The error handling strategy to be used if an error occurs while [extracting data](#record-mapping-fieldfieldname) from incoming deserialized records. Can be one of the following:
+
+- `IGNORE_AND_CONTINUE`: ignore the error and continue to process the next record
+- `FORCE_UNSUBSCRIPTION`: stop processing records and force unsubscription of the items requested by all the clients subscribed to this connection
+
+Default value: `IGNORE_AND_CONTINUE`.
+
+Example:
+
+```xml
+<param name="record.extraction.error.strategy">FORCE_UNSUBSCRIPTION</param>
+```
+
+### Topic Mapping
+
+The Kafka Connector allows the configuration of several routing and mapping strategies, enabling Kafka event streams to be conveyed to a potentially huge number of devices connected to Lightstreamer with great flexibility.
+
+The _Data Extraction Language_ is the _ad hoc_ tool provided for in-depth analysis of Kafka records to extract data that can be used for the following purposes:
+- Mapping records to Lightstreamer fields
+- Filtering routing to the designated Lightstreamer items
+
+#### Data Extraction Language
+
+To write an extraction expression, the _Data Extraction Language_ provides a pretty minimal syntax with the following basic rules:
+
+- Expressions must be enclosed within `#{...}`
+- Expressions use _Extraction Keys_, a set of predefined constants that reference specific parts of the record structure:
+
+ - `#{KEY}`: the key
+ - `#{VALUE}`: the value
+ - `#{TOPIC}`: the topic
+ - `#{TIMESTAMP}`: the timestamp
+ - `#{PARTITION}`: the partition
+ - `#{OFFSET}`: the offset
+
+- Expressions use the _dot notation_ to access attributes or fields of record keys and record values serialized in JSON or Avro formats:
+
+ ```js
+ KEY.attribute1Name.attribute2Name...
+ VALUE.attribute1Name.attribute2Name...
+ ```
+
+ > [!IMPORTANT]
+ > Currently, it is required that the top-level element of either a record key or record value is:
+ > - An [Object](https://www.json.org/json-en.html), in the case of JSON format
+ > - A [Record](https://avro.apache.org/docs/1.11.1/specification/#schema-record), in the case of Avro format
+ >
+ > Such a constraint may be removed in a future version of the Kafka Connector.
+
+- Expressions use the _square notation_ to access:
+
+ - Indexed attributes:
+
+ ```js
+ KEY.attribute1Name[i].attribute2Name...
+ VALUE.attribute1Name[i].attribute2Name...
+ ```
+ where `i` is a 0-indexed value.
+
+ - Key-based attributes:
+
+ ```js
+ KEY.attribute1Name['keyName'].attribute2Name...
+ VALUE.attribute1Name['keyName'].attribute2Name...
+ ```
+ where `keyName` is a string value.
+
+ > [!TIP]
+ > For JSON format, accessing a child attribute using either dot notation or square bracket notation is equivalent:
+ >
+ > ```js
+ > VALUE.myProperty.myChild.childProperty
+ > VALUE.myProperty['myChild'].childProperty
+ > ```
+
+- Expressions must evaluate to a _scalar_ value
+
+  In the case of a non-scalar value, an error will be thrown during the extraction process and handled as per the [configured strategy](#recordextractionerrorstrategy), as illustrated below.
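+
+For instance, given a hypothetical JSON record value such as `{"name": "Anduct", "quote": {"last_price": 13.72, "currencies": ["EUR", "USD"]}}`, the following expressions illustrate the rules above:
+
+```js
+#{VALUE.name}                  // "Anduct", a scalar: OK
+#{VALUE.quote.last_price}      // 13.72, a scalar: OK
+#{VALUE.quote.currencies[0]}   // "EUR", a scalar: OK
+#{VALUE.quote}                 // a non-scalar object: extraction error
+```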
+
+#### Record Routing (`map.<topic>.to`)
+
+To configure a simple routing of Kafka event streams to Lightstreamer items, use at least one `map.<topic>.to` parameter. The general format is:
+
+```xml
+<param name="map.<topic>.to">item1,item2,itemN,...</param>
+```
+
+which defines the mapping between the source Kafka topic (`<topic>`) and the target items (`item1`, `item2`, `itemN`, etc.).
+
+This configuration enables the implementation of various routing scenarios, as shown by the following examples:
+
+- _One-to-One_
+
+ ```xml
+  <param name="map.sample-topic.to">sample-item</param>
+ ```
+
+ ![one-to-one](/pictures/one-to-one.png)
+
+  This is the most straightforward scenario one may think of: every record published to the Kafka topic `sample-topic` will simply be routed to the Lightstreamer item `sample-item`. Therefore, messages will be immediately broadcast as real-time updates to all clients subscribed to such an item.
+
+- _Many-to-One_
+
+ ```xml
+  <param name="map.sample-topic1.to">sample-item</param>
+  <param name="map.sample-topic2.to">sample-item</param>
+  <param name="map.sample-topic3.to">sample-item</param>
+ ```
+
+ ![many-to-one](/pictures/many-to-one.png)
+
+  This scenario makes it possible to broadcast every message published to different topics (`sample-topic1`, `sample-topic2`, `sample-topic3`) to all clients subscribed to a single item (`sample-item`).
+
+- _One-to-Many_
+
+ The one-to-many scenario is also supported, though it's often unnecessary. Lightstreamer already provides full control over individual items, such as differentiating access authorization for various users or subscribing with different maximum update frequencies, without requiring data replication across multiple items.
+
+ ```xml
+  <param name="map.sample-topic.to">sample-item1,sample-item2,sample-item3</param>
+ ```
+
+ Every record published to the Kafka topic `sample-topic` will be routed to the Lightstreamer items `sample-item1`, `sample-item2`, and `sample-item3`.
+
+#### Record Mapping (`field.<fieldName>`)
+
+To forward real-time updates to the Lightstreamer clients, a Kafka record must be mapped to Lightstreamer fields, which define the _schema_ of any Lightstreamer item.
+
+![record-mapping](/pictures/record-fields-mapping.png)
+
+To configure the mapping, you define the set of all subscribable fields through parameters with the prefix `field.`:
+
+```xml
+<param name="field.fieldName1">extractionExpression1</param>
+<param name="field.fieldName2">extractionExpression2</param>
+...
+<param name="field.fieldNameN">extractionExpressionN</param>
+```
+
+The configuration specifies that the field `fieldNameX` will contain the value extracted from the deserialized Kafka record through the `extractionExpressionX`, written using the [_Data Extraction Language_](#data-extraction-language). This approach makes it possible to transform a Kafka record of any complexity to the flat structure required by Lightstreamer.
+
+The `QuickStartConfluentCloud` [factory configuration](/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml#L448) shows a basic example, where a simple _direct_ mapping has been defined between every attribute of the JSON record value and a Lightstreamer field with the corresponding name. Of course, thanks to the _Data Extraction Language_, more complex mappings can be employed.
+
+```xml
+...
+<param name="field.timestamp">#{VALUE.timestamp}</param>
+<param name="field.time">#{VALUE.time}</param>
+<param name="field.stock_name">#{VALUE.name}</param>
+<param name="field.last_price">#{VALUE.last_price}</param>
+<param name="field.ask">#{VALUE.ask}</param>
+<param name="field.ask_quantity">#{VALUE.ask_quantity}</param>
+<param name="field.bid">#{VALUE.bid}</param>
+<param name="field.bid_quantity">#{VALUE.bid_quantity}</param>
+<param name="field.pct_change">#{VALUE.pct_change}</param>
+<param name="field.min">#{VALUE.min}</param>
+<param name="field.max">#{VALUE.max}</param>
+<param name="field.ref_price">#{VALUE.ref_price}</param>
+<param name="field.open_price">#{VALUE.open_price}</param>
+<param name="field.item_status">#{VALUE.item_status}</param>
+...
+```
+
+#### Filtered Record Routing (`item-template.<template-name>`)
+
+Besides mapping topics to statically predefined items, the Kafka Connector allows you to configure _item templates_,
+which specify the rules needed to decide whether a message can be forwarded to the items specified by the clients, thus enabling a _filtered routing_.
+The item template leverages the [_Data Extraction Language_](#data-extraction-language) to extract data from Kafka records and match it against the _parameterized_ subscribed items.
+
+![filtered-routing](/pictures/filtered-routing.png)
+
+To configure an item template, use the `item-template.<template-name>` parameter:
+
+```xml
+<param name="item-template.<template-name>"><item-prefix>-<expressions></param>
+```
+
+Then, map one (or more) topics to the template by referencing it in the `map.<topic>.to` parameter:
+
+```xml
+<param name="map.<topic>.to">item-template.<template-name></param>
+```
+
+> [!TIP]
+> It is allowed to mix references to simple item names and item templates in the same topic mapping configuration:
+>
+> ```xml
+> <param name="map.sample-topic.to">item-template.template1,item1,item2</param>
+> ```
+
+The item template is made of:
+- `<item-prefix>`: the prefix of the item name
+- `<expressions>`: a sequence of _extraction expressions_, which define filtering rules specified as:
+
+ ```js
+  #{paramName1=<extractionExpression1>,paramName2=<extractionExpression2>,...}
+ ```
+
+  where `paramNameX` is a _bind parameter_ to be specified by the clients and whose actual value will be extracted from the deserialized Kafka record by evaluating the `<extractionExpressionX>` expression (written using the _Data Extraction Language_).
+
+To activate the filtered routing, the Lightstreamer clients must subscribe to a parameterized item that specifies a filtering value for every bind parameter defined in the template:
+
+```js
+<item-prefix>-[paramName1=filterValue_1,paramName2=filterValue_2,...]
+```
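+
+As a sketch, with the Lightstreamer JavaScript client such a subscription might look like the following (the `lsClient` instance and the item and field names are hypothetical):
+
+```js
+// Subscribe to a parameterized item, providing a filtering value for every bind parameter
+const subscription = new Subscription(
+  "MERGE",
+  "user-[firstName=James,lastName=Kirk]",
+  ["firstName", "lastName"]
+);
+lsClient.subscribe(subscription);
+```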
+
+Upon consuming a message, the Kafka Connector _expands_ every item template addressed by the record topic by evaluating each extraction expression and binding the extracted value to the associated parameter. The expanded template takes the form:
+
+```js
+<item-prefix>-[paramName1=extractedValue_1,paramName2=extractedValue_2,...]
+```
+
+Finally, the message will be mapped and routed only if the subscribed item completely matches the expanded template or, more formally, the following condition holds:
+
+`filterValue_X == extractedValue_X for every paramName_X`
+
+##### Example
+
+Consider the following configuration:
+
+```xml
+<param name="item-template.by-name">user-#{firstName=VALUE.name,lastName=VALUE.surname}</param>
+<param name="item-template.by-age">user-#{age=VALUE.age}</param>
+<param name="map.user.to">item-template.by-name,item-template.by-age</param>
+```
+
+which specifies how to route records published to the topic `user` to the item templates defined to extract some personal data.
+
+Let's suppose we have three different Lightstreamer clients:
+
+1. _Client A_ subscribes to the following parameterized items:
+ - _SA1_ `user-[firstName=James,lastName=Kirk]` for receiving real-time updates relative to the user `James Kirk`
+ - _SA2_ `user-[age=45]` for receiving real-time updates relative to any 45 year-old user
+2. _Client B_ subscribes to the parameterized item _SB1_ `user-[firstName=Montgomery,lastName=Scotty]` for receiving real-time updates relative to the user `Montgomery Scotty`.
+3. _Client C_ subscribes to the parameterized item _SC1_ `user-[age=37]` for receiving real-time updates relative to any 37 year-old user.
+
+Now, let's see how filtered routing works for the following incoming Kafka records published to the topic `user`:
+
+- Record 1:
+ ```js
+ {
+ ...
+ "name": "James",
+ "surname": "Kirk",
+ "age": 37,
+ ...
+ }
+ ```
+
+ | Template | Expansion | Matched Subscribed Item | Routed to Client |
+ | ----------| -------------------------------------- | ----------------------- | -----------------|
+ | `by-name` | `user-[firstName=James,lastName=Kirk]` | _SA1_ | _Client A_ |
+ | `by-age` | `user-[age=37]` | _SC1_ | _Client C_ |
+
+
+- Record 2:
+ ```js
+ {
+ ...
+ "name": "Montgomery",
+ "surname": "Scotty",
+ "age": 45
+ ...
+ }
+ ```
+
+ | Template | Expansion | Matched Subscribed Item | Routed to Client |
+ | --------- | --------------------------------------------- | ----------------------- | -----------------|
+ | `by-name` | `user-[firstName=Montgomery,lastName=Scotty]` | _SB1_ | _Client B_ |
+ | `by-age` | `user-[age=45]` | _SA2_ | _Client A_ |
+
+- Record 3:
+ ```js
+ {
+ ...
+ "name": "Nyota",
+ "surname": "Uhura",
+ "age": 37,
+ ...
+ }
+ ```
+
+ | Template | Expansion | Matched Subscribed Item | Routed to Client |
+ | ----------| --------------------------------------- | ----------------------- | -----------------|
+ | `by-name` | `user-[firstName=Nyota,lastName=Uhura]` | _None_ | _None_ |
+ | `by-age` | `user-[age=37]` | _SC1_ | _Client C_ |
+
+### Schema Registry
+
+A _Schema Registry_ is a centralized repository that manages and validates schemas, which define the structure of valid messages.
+
+The Kafka Connector supports integration with the [_Confluent Schema Registry_](https://docs.confluent.io/platform/current/schema-registry/index.html) through the configuration of parameters with the prefix `schema.registry`.
+
+#### `schema.registry.url`
+
+_Mandatory if the [Confluent Schema Registry](#recordkeyevaluatorschemaregistryenable-and-recordvalueevaluatorschemaregistryenable) is enabled_. The URL of the Confluent Schema Registry.
+
+Example:
+
+```xml
+<param name="schema.registry.url">http://localhost:8081</param>
+```
+
+An encrypted connection is enabled by specifying the `https` protocol (see the [next section](#encryption-parameters-1)).
+
+Example:
+
+```xml
+<param name="schema.registry.url">https://localhost:8084</param>
+```
+
+#### Basic HTTP Authentication Parameters
+
+The [Basic HTTP authentication](https://docs.confluent.io/platform/current/schema-registry/security/index.html#configuring-the-rest-api-for-basic-http-authentication) mechanism is supported through the configuration of parameters with the prefix `schema.registry.basic.authentication`.
+
+##### `schema.registry.basic.authentication.enable`
+
+_Optional_. Enable Basic HTTP authentication of this connection against the Schema Registry. Can be one of the following:
+- `true`
+- `false`
+
+Default value: `false`.
+
+Example:
+
+```xml
+<param name="schema.registry.basic.authentication.enable">true</param>
+```
+
+##### `schema.registry.basic.authentication.username` and `schema.registry.basic.authentication.password`
+
+_Mandatory if [Basic HTTP Authentication](#schemaregistrybasicauthenticationenable) is enabled_. The credentials.
+
+- `schema.registry.basic.authentication.username`: the username
+- `schema.registry.basic.authentication.password`: the password
+
+Example:
+
+```xml
+<param name="schema.registry.basic.authentication.username">authorized-schema-registry-user</param>
+<param name="schema.registry.basic.authentication.password">authorized-schema-registry-user-password</param>
+```
+
+#### Encryption Parameters
+
+A secure connection to the Confluent Schema Registry can be configured through parameters with the prefix `schema.registry.encryption`, each one having the same meaning as the corresponding parameter defined in the [Encryption Parameters](#encryption-parameters) section:
+
+- `schema.registry.encryption.protocol` (see [encryption.protocol](#encryptionprotocol))
+- `schema.registry.encryption.enabled.protocols` (see [encryption.enabled.protocols](#encryptionenabledprotocols))
+- `schema.registry.encryption.cipher.suites` (see [encryption.cipher.suites](#encryptionciphersuites))
+- `schema.registry.encryption.truststore.path` (see [encryption.truststore.path](#encryptiontruststorepath))
+- `schema.registry.encryption.truststore.password` (see [encryption.truststore.password](#encryptiontruststorepassword))
+- `schema.registry.encryption.hostname.verification.enable` (see [encryption.hostname.verification.enable](#encryptionhostnameverificationenable))
+- `schema.registry.encryption.keystore.enable` (see [encryption.keystore.enable](#encryptionkeystoreenable))
+- `schema.registry.encryption.keystore.path` (see [encryption.keystore.path](#encryptionkeystorepath))
+- `schema.registry.encryption.keystore.password` (see [encryption.keystore.password](#encryptionkeystorepassword))
+- `schema.registry.encryption.keystore.key.password` (see [encryption.keystore.key.password](#encryptionkeystorekeypassword))
+
+Example:
+
+```xml
+<!-- Set the Confluent Schema Registry URL. The https protocol enables the encryption settings -->
+<param name="schema.registry.url">https://localhost:8084</param>
+
+<!-- Set general encryption settings -->
+<param name="schema.registry.encryption.enabled.protocols">TLSv1.3</param>
+<param name="schema.registry.encryption.cipher.suites">TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA</param>
+<param name="schema.registry.encryption.hostname.verification.enable">true</param>
+
+<!-- If required, configure the trust store to trust the Confluent Schema Registry certificates -->
+<param name="schema.registry.encryption.truststore.path">secrets/kafka.connector.schema.registry.truststore.jks</param>
+
+<!-- If mutual TLS is enabled on the Confluent Schema Registry, enable and configure the key store -->
+<param name="schema.registry.encryption.keystore.enable">true</param>
+<param name="schema.registry.encryption.keystore.path">secrets/kafka-connector.keystore.jks</param>
+<param name="schema.registry.encryption.keystore.password">kafka-connector-password</param>
+<param name="schema.registry.encryption.keystore.key.password">schemaregistry-private-key-password</param>
+```
+
+#### Quick Start Schema Registry Example
+
+Check out the [adapters.xml](/examples/quickstart-schema-registry/adapters.xml#L58) file of the [_Quick Start Schema Registry_](/examples/quickstart-schema-registry/) app, where you can find an example of Schema Registry settings.
+
+# Customize the Kafka Connector Metadata Adapter Class
+
+If you have any specific need to customize the _Kafka Connector Metadata Adapter_ class (e.g., to implement custom authentication and authorization logic), you can provide your implementation by extending the factory class [`com.lightstreamer.kafka.adapters.pub.KafkaConnectorMetadataAdapter`](https://lightstreamer.github.io/Lightstreamer-kafka-connector/javadoc/com/lightstreamer/kafka/adapters/pub/KafkaConnectorMetadataAdapter.html). The class provides the following hook methods, which you can override to add your custom logic:
+
+- [_postInit_](https://lightstreamer.github.io/Lightstreamer-kafka-connector/javadoc/com/lightstreamer/kafka/adapters/pub/KafkaConnectorMetadataAdapter.html#postInit(java.util.Map,java.io.File)): invoked after the initialization phase of the Kafka Connector Metadata Adapter has been completed
+
+- [_onSubscription_](https://lightstreamer.github.io/Lightstreamer-kafka-connector/javadoc/com/lightstreamer/kafka/adapters/pub/KafkaConnectorMetadataAdapter.html#onSubscription(java.lang.String,java.lang.String,com.lightstreamer.interfaces.metadata.TableInfo%5B%5D)): invoked to notify that a user has submitted a subscription
+
+- [_onUnsubscription_](https://lightstreamer.github.io/Lightstreamer-kafka-connector/javadoc/com/lightstreamer/kafka/adapters/pub/KafkaConnectorMetadataAdapter.html#onUnsubscription(java.lang.String,com.lightstreamer.interfaces.metadata.TableInfo%5B%5D)): invoked to notify that a subscription has been removed
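+
+As a minimal sketch, assuming the hook signatures shown in the linked Javadoc, a custom implementation rejecting subscriptions from a hypothetical `guest` user could look like this:
+
+```java
+import com.lightstreamer.interfaces.metadata.CreditsException;
+import com.lightstreamer.interfaces.metadata.NotificationException;
+import com.lightstreamer.interfaces.metadata.TableInfo;
+import com.lightstreamer.kafka.adapters.pub.KafkaConnectorMetadataAdapter;
+
+public class MyCustomMetadataAdapter extends KafkaConnectorMetadataAdapter {
+
+    @Override
+    public void onSubscription(String user, String sessionID, TableInfo[] tables)
+            throws CreditsException, NotificationException {
+        // Purely illustrative authorization rule
+        if ("guest".equals(user)) {
+            throw new CreditsException(-1, "Subscriptions not allowed for guest users");
+        }
+    }
+}
+```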
+
+## Develop the Extension
+
+To develop your extension, you need the Kafka Connector jar library, which is hosted on _GitHub Packages_.
+
+For a Maven project, add the dependency to your _pom.xml_ file:
+
+```xml
+<dependency>
+    <groupId>com.lightstreamer.kafka</groupId>
+    <artifactId>kafka-connector</artifactId>
+    <version>VERSION</version>
+</dependency>
+```
+
+and follow these [instructions](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-apache-maven-registry#authenticating-to-github-packages) to configure the repository and authentication.
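+
+As a sketch, the GitHub Packages authentication typically boils down to a `<server>` entry in your Maven `settings.xml`, whose `id` matches the repository `id` declared in the _pom.xml_ (username and token below are placeholders):
+
+```xml
+<servers>
+  <server>
+    <id>github</id>
+    <username>YOUR_GITHUB_USERNAME</username>
+    <password>YOUR_PERSONAL_ACCESS_TOKEN</password>
+  </server>
+</servers>
+```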
+
+For a Gradle project, edit your _build.gradle_ file as follows:
+
+1. Add the dependency:
+
+ ```groovy
+ dependencies {
+       implementation group: 'com.lightstreamer.kafka', name: 'kafka-connector', version: 'VERSION'
+ }
+ ```
+
+2. Add the repository and specify your personal access token:
+
+   ```groovy
+ repositories {
+ mavenCentral()
+ maven {
+ name = "GitHubPackages"
+ url = uri("https://maven.pkg.github.com/lightstreamer/lightstreamer-kafka-connector")
+ credentials {
+ username = project.findProperty("gpr.user") ?: System.getenv("USERNAME")
+ password = project.findProperty("gpr.key") ?: System.getenv("TOKEN")
+ }
+ }
+ }
+ ```
+
+In the [examples/custom-kafka-connector-adapter](/examples/custom-kafka-connector-adapter/) folder, you can find a sample Gradle project you may use as a starting point to build and deploy your custom extension.
+
+# Kafka Connect Lightstreamer Sink Connector
+
+The Lightstreamer Kafka Connector is also available as a _Sink Connector plugin_ to be installed into [_Kafka Connect_](https://docs.confluent.io/platform/current/connect/index.html).
+
+In this scenario, an instance of the connector plugin acts as a [_Remote Adapter_](https://github.com/Lightstreamer/Lightstreamer-lib-adapter-java-remote) for the Lightstreamer server as depicted in the following picture:
+
+![KafkaConnectArchitecture](/pictures/kafka-connect.png)
+
+The connector has been developed for Kafka Connect framework version 3.7 and requires JDK (Java Development Kit) v17 or newer.
+
+## Usage
+
+### Lightstreamer Setup
+
+Before running the connector, you first need to deploy a Proxy Adapter into the Lightstreamer server instance.
+
+#### Requirements
+
+- JDK (Java Development Kit) v17 or newer
+- [Lightstreamer Broker](https://lightstreamer.com/download/) (also referred to as _Lightstreamer Server_) v7.4.2 or newer. Follow the installation instructions in the `LS_HOME/GETTING_STARTED.TXT` file included in the downloaded package.
+
+#### Steps
+
+1. Create a directory within `LS_HOME/adapters` (choose whatever name you prefer, for example `kafka-connect-proxy`).
+
+2. Copy the sample [`adapters.xml`](./kafka-connector-project/config/kafka-connect-proxy/adapters.xml) file to the `kafka-connect-proxy` directory.
+
+3. Edit the file as follows:
+
+   - Update the `id` attribute of the `adapters_conf` root tag. This setting has the same role as the already documented [Kafka Connector Identifier](#adapter_confid---kafka-connector-identifier).
+
+   - Update the `name` attribute of the `data_provider` tag. This setting has the same role as the already documented [Kafka Connection Name](#data_providername---kafka-connection-name).
+
+ - Update the `request_reply_port` parameter with the listening TCP port:
+
+ ```xml
+      <param name="request_reply_port">6661</param>
+ ```
+
+ - If authentication is required:
+
+ - Set the `auth` parameter to `Y`:
+
+ ```xml
+        <param name="auth">Y</param>
+ ```
+
+ - Add the following parameters with the selected credential settings:
+
+ ```xml
+        <param name="auth.credentials.1.user">USERNAME</param>
+        <param name="auth.credentials.1.password">PASSWORD</param>
+ ```
+
+> [!NOTE]
+> As the `id` attribute must be unique across all the Adapter Sets deployed in the same Lightstreamer instance, make sure there is no conflict with any previously installed adapters (for example, the factory [adapters.xml](./kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml) file included in the _Kafka Connector_ distribution package).
+
+Finally, check that the Lightstreamer layout looks like the following:
+
+```sh
+LS_HOME/
+...
+├── adapters
+│ ├── kafka-connect-proxy
+│ │ ├── adapters.xml
+│ └── welcome_res
+...
+├── audit
+├── bin
+...
+```
+
+### Running
+
+To manually install the Kafka Connect Lightstreamer Sink Connector to a local Confluent Platform (version 7.6 or later) and run it in [_standalone mode_](https://docs.confluent.io/platform/current/connect/userguide.html#standalone-mode):
+
+1. Download the connector zip file `lightstreamer-kafka-connect-lightstreamer-.zip` from the [Releases](https://github.com/Lightstreamer/Lightstreamer-kafka-connector/releases) page. Alternatively, check out this repository and execute the following command from the [`kafka-connector-project`](/kafka-connector-project/) folder:
+
+ ```sh
+ $ ./gradlew connectDistZip
+ ```
+
+ which generates the zip file under the `kafka-connector-project/kafka-connector/build/distributions` folder.
+
+2. Extract the zip file into the desired location.
+
+ For example, you can copy the connector contents into a new directory named `CONFLUENT_HOME/share/kafka/plugins`.
+
+3. Edit the worker configuration properties file, ensuring you include the previous path in the `plugin.path` property, for example:
+
+ ```
+   plugin.path=/usr/local/share/kafka/plugins
+ ```
+
+ You may want to use the provided [connect-standalone-local.properties](./kafka-connector-project/config/kafka-connect-config/connect-standalone-local.properties) file as a starting point.
+
+4. Edit the connector configuration properties file as detailed in the [Configuration Reference](#configuration-reference) section.
+
+   You may want to use the provided [`quickstart-lightstreamer-local.properties`](./kafka-connector-project/config/kafka-connect-config/quickstart-lightstreamer-local.properties) or [`quickstart-lightstreamer-local.json`](./kafka-connector-project/config/kafka-connect-config/quickstart-lightstreamer-local.json) files as a starting point. These files provide the set of pre-configured settings to feed Lightstreamer with stock market events, as already shown in the [installation instructions](#installation) for the Lightstreamer Kafka Connector.
+
+5. Launch the Lightstreamer Server instance already configured in the [Lightstreamer Setup](#lightstreamer-setup) section.
+
+6. Start the Connect worker with:
+
+ ```sh
+ $ bin/connect-standalone.sh connect-standalone-local.properties quickstart-lightstreamer-local.properties
+ ```
+
+To verify that an event stream actually flows from Kafka to a Lightstreamer consumer, leverage the same example already shown in the [Start](#start) section:
+
+1. Attach a Lightstreamer consumer as specified in step 2 of the [Start](#start) section.
+
+2. Make sure that a Schema Registry service is reachable from your local machine.
+
+3. Edit a `producer.properties` file as follows:
+
+ ```
+ # JSON serializer with support for the Schema Registry
+ value.serializer=io.confluent.kafka.serializers.json.KafkaJsonSchemaSerializer
+ # Schema Registry URL
+   schema.registry.url=http://<host>:<port>
+ ```
+
+ This configuration enables the producer to leverage the Schema Registry, which is required by Kafka Connect when a connector wants to deserialize JSON messages (unless an embedded schema is provided).
+
+4. Publish events as specified in step 3 of the [Start](#start) section.
+
+ This time, run the publisher passing as further argument the `producer.properties` file:
+
+ ```sh
+   $ java -jar examples/quickstart-producer/build/libs/quickstart-producer-all.jar --bootstrap-servers <bootstrap-servers> --topic stocks --config-file producer.properties
+ ```
+
+5. Check the consumed events.
+
+   You should see real-time updates as shown in step 4 of the [Start](#start) section.
+
+### Running in Docker
+
+If you want to build a local Docker image based on Kafka Connect with the connector plugin, check out the [examples/docker-kafka-connect](/examples/docker-kafka-connect/) folder.
+
+In addition, the [examples/quickstart-kafka-connect](/examples/quickstart-kafka-connect/) folder shows how to use that image in Docker Compose through a Kafka Connect version of the _Quick Start_ app.
+
+## Supported Converters
+
+The Kafka Connect Lightstreamer Sink Connector supports all the [converters](https://docs.confluent.io/platform/current/connect/index.html#converters) that come packaged with the Confluent Platform. These include:
+
+- _AvroConverter_ `io.confluent.connect.avro.AvroConverter`
+- _ProtobufConverter_ `io.confluent.connect.protobuf.ProtobufConverter`
+- _JsonSchemaConverter_ `io.confluent.connect.json.JsonSchemaConverter`
+- _JsonConverter_ `org.apache.kafka.connect.json.JsonConverter`
+- _StringConverter_ `org.apache.kafka.connect.storage.StringConverter`
+- _ByteArrayConverter_ `org.apache.kafka.connect.converters.ByteArrayConverter`
+
+It also supports the built-in primitive converters:
+
+- `org.apache.kafka.connect.converters.DoubleConverter`
+- `org.apache.kafka.connect.converters.FloatConverter`
+- `org.apache.kafka.connect.converters.IntegerConverter`
+- `org.apache.kafka.connect.converters.LongConverter`
+- `org.apache.kafka.connect.converters.ShortConverter`
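+
+As a sketch, a connector configuration selects the converter for record values through the standard Kafka Connect properties, as in the following hypothetical example (the Schema Registry URL is a placeholder):
+
+```
+value.converter=io.confluent.connect.avro.AvroConverter
+value.converter.schema.registry.url=http://localhost:8081
+```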
+
+## Configuration Reference
+
+The Kafka Connect Lightstreamer Sink Connector configuration properties are described below.
+
+### `connector.class`
+
+To use the connector, specify the following setting:
+`connector.class=com.lightstreamer.kafka.connect.LightstreamerSinkConnector`
+
+### `tasks.max`
+
+Due to the one-to-one relationship between a Proxy Adapter instance (deployed into the Lightstreamer server) and a Remote Adapter instance (a task), configuring more than one task in the `tasks.max` configuration parameter is pointless.
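+
+Example:
+
+```
+tasks.max=1
+```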
+
+### `lightstreamer.server.proxy_adapter.address`
+
+The Lightstreamer server's Proxy Adapter address to connect to in the format **`host:port`**.
+
+- **Type:** string
+- **Default:** none
+- **Importance:** high
+
+Example:
+
+```
+lightstreamer.server.proxy_adapter.address=lightstreamer.com:6661
+```
+
+### `lightstreamer.server.proxy_adapter.socket.connection.setup.timeout.ms`
+
+The (optional) amount of time in milliseconds the connector will wait for the socket connection to be established to the Lightstreamer server's Proxy Adapter before terminating the task. Specify `0` for infinite timeout.
+
+- **Type:** int
+- **Default:** 5000 (5 seconds)
+- **Valid Values:** [0,...]
+- **Importance:** low
+
+Example:
+
+```
+lightstreamer.server.proxy_adapter.socket.connection.setup.timeout.ms=15000
+```
+
+### `lightstreamer.server.proxy_adapter.socket.connection.setup.max.retries`
+
+The (optional) max number of retries to establish a connection to the Lightstreamer server's Proxy Adapter.
+
+- **Type:** int
+- **Default:** 1
+- **Valid Values:** [0,...]
+- **Importance:** medium
+
+Example:
+
+```
+lightstreamer.server.proxy_adapter.socket.connection.setup.max.retries=5
+```
+
+### `lightstreamer.server.proxy_adapter.socket.connection.setup.retry.delay.ms`
+
+The (optional) amount of time in milliseconds to wait before retrying to establish a new connection to the Lightstreamer server's Proxy Adapter in case of failure. Only applicable if
+`lightstreamer.server.proxy_adapter.socket.connection.setup.max.retries` > 0.
+
+- **Type:** long
+- **Default:** 5000 (5 seconds)
+- **Valid Values:** [0,...]
+- **Importance:** low
+
+Example:
+
+```
+lightstreamer.server.proxy_adapter.socket.connection.setup.retry.delay.ms=15000
+```
+
+### `lightstreamer.server.proxy_adapter.username`
+
+The username to use for authenticating to the Lightstreamer server's Proxy Adapter. This setting requires authentication to be enabled in the [configuration](#lightstreamer-setup) of the Proxy Adapter.
+
+- **Type:** string
+- **Importance:** medium
+- **Default:** none
+
+Example:
+
+```
+lightstreamer.server.proxy_adapter.username=lightstreamer_user
+```
+
+### `lightstreamer.server.proxy_adapter.password`
+
+The password to use for authenticating to the Lightstreamer server's Proxy Adapter. This setting requires authentication to be enabled in the [configuration](#lightstreamer-setup) of the Proxy Adapter.
+
+- **Type:** string
+- **Default:** none
+- **Importance:** medium
+
+Example:
+```
+lightstreamer.server.proxy_adapter.password=lightstreamer_password
+```
+
+### `record.extraction.error.strategy`
+
+The (optional) error handling strategy to be used if an error occurs while extracting data from incoming deserialized records. Can be one of the following:
+
+- `TERMINATE_TASK`: terminate the task immediately
+- `IGNORE_AND_CONTINUE`: ignore the error and continue to process the next record
+- `FORWARD_TO_DLQ`: forward the record to the dead letter queue
+
+In particular, the `FORWARD_TO_DLQ` value requires a [_dead letter queue_](https://www.confluent.io/blog/kafka-connect-deep-dive-error-handling-dead-letter-queues/) to be configured; otherwise it will fall back to `TERMINATE_TASK`.
+
+- **Type:** string
+- **Default:** `TERMINATE_TASK`
+- **Valid Values:** [`IGNORE_AND_CONTINUE`, `FORWARD_TO_DLQ`, `TERMINATE_TASK`]
+- **Importance:** medium
+
+Example:
+
+```
+record.extraction.error.strategy=FORWARD_TO_DLQ
+```
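+
+For `FORWARD_TO_DLQ`, the dead letter queue itself is configured through the standard Kafka Connect error-handling properties, as in the following sketch (the topic name is illustrative):
+
+```
+errors.tolerance=all
+errors.deadletterqueue.topic.name=lightstreamer-dlq
+errors.deadletterqueue.topic.replication.factor=1
+```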
+
+### `topic.mappings`
+
+> [!IMPORTANT]
+> This configuration implements the same concepts already presented in the [Record Routing](#record-routing-maptopicto) section.
+
+Semicolon-separated list of mappings between source topics and Lightstreamer items. The list should describe a set of mappings in the form:
+
+`[topicName1]:[mappingList1];[topicName2]:[mappingList2];...;[topicNameN]:[mappingListN]`
+
+where every specified topic (`[topicNameX]`) is mapped to the item names or item templates specified as comma-separated list (`[mappingListX]`).
+
+- **Type:** string
+- **Default:** none
+- **Valid Values:**
+ [topicName1]:[mappingList1];
+ [topicName2]:[mappingList2];...
+- **Importance:** high
+
+Example:
+
+```
+topic.mappings=sample-topic:item-template.template1,item1,item2;order-topic:order-item
+```
+
+The configuration above specifies:
+
+- A _One-to-Many_ mapping between the topic `sample-topic` and the Lightstreamer items `item1` and `item2`
+- [_Filtered routing_](#filtered-record-routing-item-templatetemplate-name) through the reference to the item template `template1` (not shown in the snippet)
+- A _One-to-One_ mapping between the topic `order-topic` and the Lightstreamer item `order-item`
+
+### `record.mappings`
+
+> [!IMPORTANT]
+> This configuration implements the same concepts already presented in the [Record Mapping](#record-mapping-fieldfieldname) section.
+
+The list of mappings between Kafka records and Lightstreamer fields. The list should describe a set of subscribable fields in the following form:
+
+ `[fieldName1]:[extractionExpression1],[fieldName2]:[extractionExpression2],...,[fieldNameN]:[extractionExpressionN]`
+
+where the Lightstreamer field `[fieldNameX]` will hold the data extracted from a deserialized Kafka record using the
+_Data Extraction Language_ expression `[extractionExpressionX]`.
+
+- **Type:** list
+- **Default:** none
+- **Valid Values:**
+ [fieldName1]:[extractionExpression1],
+ [fieldName2]:[extractionExpression2],...
+- **Importance:** high
+
+Example:
+
+```
+record.mappings=index:#{KEY}, \
+ stock_name:#{VALUE.name}, \
+ last_price:#{VALUE.last_price}
+```
+
+The configuration above specifies the following mappings:
+
+1. The record key to the Lightstreamer field `index`
+2. The `name` attribute of the record value to the Lightstreamer field `stock_name`
+3. The `last_price` attribute of the record value to the Lightstreamer field `last_price`
+
+
+### `item.templates`
+
+> [!IMPORTANT]
+> This configuration implements the same concepts already presented in the [Filtered Routing](#filtered-record-routing-item-templatetemplate-name) section.
+
+Semicolon-separated list of _item templates_, which specify the rules to enable the _filtered routing_. The list should describe a set of templates in the following form:
+
+`[templateName1]:[template1];[templateName2]:[template2];...;[templateNameN]:[templateN]`
+
+where `[templateX]` configures the item template `[templateNameX]`, defining the general format of the items the Lightstreamer clients must subscribe to in order to receive updates.
+
+A template is specified in the form:
+
+```
+item-prefix-#{paramName1=extractionExpression1,paramName2=extractionExpression2,...}
+```
+
+To map a topic to an item template, reference it using the `item-template` prefix in the `topic.mappings` configuration:
+
+```
+topic.mappings=some-topic:item-template.templateName1,item-template.templateName2,...
+```
+
+- **Type:** string
+- **Default:** none
+- **Valid Values:**
+ [templateName1]:[template1];
+ [templateName2]:[template2];...
+- **Importance:** high
+
+Example:
+
+```
+item.templates=by-name:user-#{firstName=VALUE.name,lastName=VALUE.surname}; \
+ by-age:user-#{age=VALUE.age}
+
+topic.mappings=user:item-template.by-name,item-template.by-age
+```
+
+The configuration above specifies how to route records published to the topic `user` to the item templates `by-name` and `by-age`, which define the rules to extract some personal data by leveraging _Data Extraction Language_ expressions.
+
+# Docs
+
+The [docs](/docs/) folder contains the complete [Kafka Connector API Reference](https://lightstreamer.github.io/Lightstreamer-kafka-connector/javadoc), which is useful for implementing custom authentication and authorization logic, as described in the [Customize the Kafka Connector Metadata Adapter Class](#customize-the-kafka-connector-metadata-adapter-class) section.
+
+To learn more about the [Lightstreamer Broker](https://lightstreamer.com/products/lightstreamer/) and the [Lightstreamer Kafka Connector](https://lightstreamer.com/confluent/), visit their respective product pages.
+
+# Examples
+
+The [examples](/examples/) folder contains all the examples referenced throughout this guide, along with additional resources tailored for specific Kafka broker vendors. Additionally, you can explore the [_Airport Demo_](/examples/airport-demo/) for deeper insights into various usage and configuration options of the Lightstreamer Kafka Connector.
+
+For more examples and live demos, visit our [online showcase](https://demos.lightstreamer.com/?p=kafkaconnector&lclient=noone&f=all&lall=all&sallall=all).
diff --git a/examples/vendors/confluent/quickstart-confluent/README.md b/examples/vendors/confluent/quickstart-confluent/README.md
new file mode 100644
index 00000000..c3998810
--- /dev/null
+++ b/examples/vendors/confluent/quickstart-confluent/README.md
@@ -0,0 +1,3 @@
+# Quick Start
+
+This folder contains all the resources needed to launch the _Quick Start_ app. See the [Quick Start](/examples/vendors/confluent/README.md#quick-start-set-up-in-5-minutes) section for more details.
diff --git a/examples/vendors/confluent/quickstart-confluent/docker b/examples/vendors/confluent/quickstart-confluent/docker
new file mode 120000
index 00000000..233372ad
--- /dev/null
+++ b/examples/vendors/confluent/quickstart-confluent/docker
@@ -0,0 +1 @@
+../../../compose-templates/docker
\ No newline at end of file
diff --git a/examples/vendors/confluent/quickstart-confluent/docker-compose.yml b/examples/vendors/confluent/quickstart-confluent/docker-compose.yml
new file mode 100644
index 00000000..234ba115
--- /dev/null
+++ b/examples/vendors/confluent/quickstart-confluent/docker-compose.yml
@@ -0,0 +1,50 @@
+---
+name: quickstart-kafka-connector-confluent
+services:
+ kafka-connector:
+ container_name: kafka-connector
+ image: lightstreamer-kafka-connector-${version}
+ depends_on:
+ - broker
+ - producer
+ ports:
+ - 8080:8080
+ volumes:
+ - ./web:/lightstreamer/pages/QuickStart
+ - ./log4j.properties:/lightstreamer/adapters/lightstreamer-kafka-connector-${version}/log4j.properties
+
+ producer:
+ container_name: producer
+ depends_on:
+ - broker
+ build:
+ context: quickstart-producer
+ args:
+ VERSION: ${version}
+ command: ["--bootstrap-servers", "broker:29092", "--topic", "stocks"]
+
+ broker:
+ image: confluentinc/cp-kafka
+ hostname: broker
+ container_name: broker
+ ports:
+ - "9092:9092"
+ - "8082:8082"
+ environment:
+ KAFKA_NODE_ID: 1
+ KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
+ KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092'
+ KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
+ KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
+ KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
+ KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
+ KAFKA_PROCESS_ROLES: 'broker,controller'
+ KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
+ KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:9092'
+ KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
+ KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
+ KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
+ KAFKA_REST_HOST_NAME: rest-proxy
+ KAFKA_REST_BOOTSTRAP_SERVERS: 'broker:29092'
+ KAFKA_REST_LISTENERS: 'http://0.0.0.0:8082'
+ CLUSTER_ID: 'MkU3OEVBNTcwNTJENDM2Qk'
diff --git a/examples/vendors/confluent/quickstart-confluent/helpers.sh b/examples/vendors/confluent/quickstart-confluent/helpers.sh
new file mode 120000
index 00000000..403793f5
--- /dev/null
+++ b/examples/vendors/confluent/quickstart-confluent/helpers.sh
@@ -0,0 +1 @@
+../../../compose-templates/helpers.sh
\ No newline at end of file
diff --git a/examples/vendors/confluent/quickstart-confluent/log4j.properties b/examples/vendors/confluent/quickstart-confluent/log4j.properties
new file mode 120000
index 00000000..0fa46f55
--- /dev/null
+++ b/examples/vendors/confluent/quickstart-confluent/log4j.properties
@@ -0,0 +1 @@
+../../../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/vendors/confluent/quickstart-confluent/quickstart-producer b/examples/vendors/confluent/quickstart-confluent/quickstart-producer
new file mode 120000
index 00000000..9e537bcd
--- /dev/null
+++ b/examples/vendors/confluent/quickstart-confluent/quickstart-producer
@@ -0,0 +1 @@
+../../../quickstart-producer/
\ No newline at end of file
diff --git a/examples/vendors/confluent/quickstart-confluent/start.sh b/examples/vendors/confluent/quickstart-confluent/start.sh
new file mode 120000
index 00000000..e67fc61f
--- /dev/null
+++ b/examples/vendors/confluent/quickstart-confluent/start.sh
@@ -0,0 +1 @@
+../../../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/vendors/confluent/quickstart-confluent/stop.sh b/examples/vendors/confluent/quickstart-confluent/stop.sh
new file mode 120000
index 00000000..c08155cc
--- /dev/null
+++ b/examples/vendors/confluent/quickstart-confluent/stop.sh
@@ -0,0 +1 @@
+../../../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/vendors/confluent/quickstart-confluent/web b/examples/vendors/confluent/quickstart-confluent/web
new file mode 120000
index 00000000..7fe2f7a7
--- /dev/null
+++ b/examples/vendors/confluent/quickstart-confluent/web
@@ -0,0 +1 @@
+../../../compose-templates/web/
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-selfhosted/README.md b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/README.md
similarity index 91%
rename from examples/quickstart-redpanda-selfhosted/README.md
rename to examples/vendors/redpanda/quickstart-redpanda-selfhosted/README.md
index f44152b2..16fe1e3b 100644
--- a/examples/quickstart-redpanda-selfhosted/README.md
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/README.md
@@ -1,6 +1,6 @@
-# Quick Start Redpanda Self-hosted
+# Quick Start with Redpanda Self-hosted
-This folder contains a variant of the [_Quick Start_](../../README.md#quick-start-set-up-in-5-minutes) app configured to use _Redpanda_ as the target broker.
+This folder contains a variant of the [_Quick Start_](../../../../README.md#quick-start-set-up-in-5-minutes) app configured to use _Redpanda_ as the target broker.
The [docker-compose.yml](docker-compose.yml) file has been revised to realize the integration with [_Redpanda Self-hosted_](https://docs.redpanda.com/current/get-started/quick-start/).
In particular, the previous `broker` service has been replaced with the following definition:
@@ -42,7 +42,7 @@ which is a slightly modified version of the `redpanda-0` service included in the
## Run
-From this directory, follow the same instructions you can find in the [Quick Start](../../README.md#run) section of the main README file.
+From this directory, follow the same instructions you can find in the [Quick Start](../../../../README.md#run) section of the main README file.
## Explore the Topic
diff --git a/examples/quickstart-redpanda-selfhosted/console.png b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/console.png
similarity index 100%
rename from examples/quickstart-redpanda-selfhosted/console.png
rename to examples/vendors/redpanda/quickstart-redpanda-selfhosted/console.png
diff --git a/examples/vendors/redpanda/quickstart-redpanda-selfhosted/docker b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/docker
new file mode 120000
index 00000000..233372ad
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/docker
@@ -0,0 +1 @@
+../../../compose-templates/docker
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-selfhosted/docker-compose.yml b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/docker-compose.yml
similarity index 98%
rename from examples/quickstart-redpanda-selfhosted/docker-compose.yml
rename to examples/vendors/redpanda/quickstart-redpanda-selfhosted/docker-compose.yml
index 772e6c29..15a03506 100644
--- a/examples/quickstart-redpanda-selfhosted/docker-compose.yml
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/docker-compose.yml
@@ -18,7 +18,7 @@ services:
depends_on:
- broker
build:
- context: ../quickstart-producer
+ context: quickstart-producer
args:
VERSION: ${version}
volumes:
diff --git a/examples/vendors/redpanda/quickstart-redpanda-selfhosted/helpers.sh b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/helpers.sh
new file mode 120000
index 00000000..403793f5
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/helpers.sh
@@ -0,0 +1 @@
+../../../compose-templates/helpers.sh
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-selfhosted/log4j.properties b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/log4j.properties
new file mode 120000
index 00000000..0fa46f55
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/log4j.properties
@@ -0,0 +1 @@
+../../../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-selfhosted/quickstart-producer b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/quickstart-producer
new file mode 120000
index 00000000..9e537bcd
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/quickstart-producer
@@ -0,0 +1 @@
+../../../quickstart-producer/
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-selfhosted/secrets b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/secrets
new file mode 120000
index 00000000..27849a2c
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/secrets
@@ -0,0 +1 @@
+../../../compose-templates/secrets/
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-selfhosted/start.sh b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/start.sh
new file mode 120000
index 00000000..e67fc61f
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/start.sh
@@ -0,0 +1 @@
+../../../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-selfhosted/stop.sh b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/stop.sh
new file mode 120000
index 00000000..c08155cc
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/stop.sh
@@ -0,0 +1 @@
+../../../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-selfhosted/web b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/web
new file mode 120000
index 00000000..7fe2f7a7
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-selfhosted/web
@@ -0,0 +1 @@
+../../../compose-templates/web/
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-serverless/README.md b/examples/vendors/redpanda/quickstart-redpanda-serverless/README.md
similarity index 57%
rename from examples/quickstart-redpanda-serverless/README.md
rename to examples/vendors/redpanda/quickstart-redpanda-serverless/README.md
index b8245afe..067d97f2 100644
--- a/examples/quickstart-redpanda-serverless/README.md
+++ b/examples/vendors/redpanda/quickstart-redpanda-serverless/README.md
@@ -1,18 +1,18 @@
# Quick Start with Redpanda Serverless
-This folder contains a variant of the [_Quick Start SSL_](../quickstart-ssl/README.md#quick-start-ssl) app configured to use [_Redpanda Serverless_](https://redpanda.com/redpanda-cloud/serverless) as the target cluster. You may follow the [instructions](https://docs.redpanda.com/current/deploy/deployment-option/cloud/serverless/) on [Redpanda Docs](https://docs.redpanda.com/current/home/) to perform the following operations:
+This folder contains a variant of the [_Quick Start SSL_](../../../quickstart-ssl/README.md#quick-start-ssl) app configured to use [_Redpanda Serverless_](https://redpanda.com/redpanda-cloud/serverless) as the target cluster. You may follow the [instructions](https://docs.redpanda.com/current/deploy/deployment-option/cloud/serverless/) on [Redpanda Docs](https://docs.redpanda.com/current/home/) to perform the following operations:
-- deploy a _Serverless Cluster_
-- create a user that uses `SCRAM-SHA-256` mechanism
-- create a topic
-- allow `All` permissions to the user on the topic
-- allow `All` permissions to the user on the consumer group `quick-start-group`
+- Deploy a _Serverless Cluster_.
+- Create a user that uses `SCRAM-SHA-256` mechanism.
+- Create a topic.
+- Allow `All` permissions to the user on the topic.
+- Allow `All` permissions to the user on the consumer group `quick-start-group`.
The [docker-compose.yml](docker-compose.yml) file has been revised to realize the integration with _Redpanda Serverless_ as follows:
-- removal of the `broker` service, because replaced by the remote cluster
+- Removal of the `broker` service, as it is replaced by the remote cluster
- _kafka-connector_:
- - definition of new environment variables to configure remote endpoint, credentials, and topic name in the `adapters.xml` through the _variable-expansion_ feature of Lightstreamer:
+ - Definition of new environment variables to configure remote endpoint, credentials, and topic name in the `adapters.xml` through the _variable-expansion_ feature of Lightstreamer:
```yaml
...
environment:
@@ -23,35 +23,36 @@ The [docker-compose.yml](docker-compose.yml) file has been revised to realize th
- topic_mapping=map.${topic}.to
...
```
- - adaption of [`adapters.xml`](./adapters.xml) to include:
- - new Kafka cluster address retrieved from the environment variable `bootstrap_server`:
+  - Adaptation of [`adapters.xml`](./adapters.xml) to include the following changes:
+ - Update of the parameter `bootstrap.servers` to the environment variable `bootstrap_server`:
```xml
$env.bootstrap_server
```
- - encryption settings:
+ - Configuration of the encryption settings:
```xml
true
TLSv1.2
true
```
- - authentication settings, with the credentials retrieved from environment variables `username` and `password`:
+ - Configuration of the authentication settings, with the credentials retrieved from environment variables `username` and `password`:
```xml
true
SCRAM-SHA-256
$env.username
$env.password
```
- - parameter `map..to` built from env variable `topic_mapping`, composed from env variable `topic`
+
+  - Update of the parameter `map.<topic>.to` to the environment variable `topic_mapping` (which in turn is composed from env variable `topic`)
```xml
item-template.stock
```
- _producer_:
- - parameter `--boostrap-servers` retrieved from the environment variable `bootstrap_server`
- - parameter `--topic` retrieved from the environment variable `topic`
- - provisioning of the `producer.properties` configuration file to enable `SASL/SCRAM` over TLS, with username and password retrieved from the environment variables `username` and `password`:
+  - Update of the parameter `--bootstrap-servers` from the environment variable `bootstrap_server`
+ - Update of the parameter `--topic` from the environment variable `topic`
+ - Provisioning of the `producer.properties` configuration file to enable `SASL/SCRAM` over TLS, with username and password retrieved from the environment variables `username` and `password`:
```yaml
# Configure SASL/PLAIN mechanism
@@ -71,8 +72,8 @@ $ bootstrap_server= username= password= to
```
where:
-- `bootstrap_server` is the bootstrap server address of the Redpanda cluster
-- `username` and `password` are the credentials of the user created from the _Redpanda Console_
-- `topic` is the name of the topic created on the _rpk_ tool or from the _Redpanda Console_
+- `bootstrap_server` is the bootstrap server address of the Redpanda cluster.
+- `username` and `password` are the credentials of the user created from the _Redpanda Console_.
+- `topic` is the name of the topic created on the _rpk_ tool or from the _Redpanda Console_.
Then, point your browser to [http://localhost:8080/QuickStart](http://localhost:8080/QuickStart).
diff --git a/examples/quickstart-redpanda-serverless/adapters.xml b/examples/vendors/redpanda/quickstart-redpanda-serverless/adapters.xml
similarity index 100%
rename from examples/quickstart-redpanda-serverless/adapters.xml
rename to examples/vendors/redpanda/quickstart-redpanda-serverless/adapters.xml
diff --git a/examples/vendors/redpanda/quickstart-redpanda-serverless/docker b/examples/vendors/redpanda/quickstart-redpanda-serverless/docker
new file mode 120000
index 00000000..233372ad
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-serverless/docker
@@ -0,0 +1 @@
+../../../compose-templates/docker
\ No newline at end of file
diff --git a/examples/quickstart-redpanda-serverless/docker-compose.yml b/examples/vendors/redpanda/quickstart-redpanda-serverless/docker-compose.yml
similarity index 97%
rename from examples/quickstart-redpanda-serverless/docker-compose.yml
rename to examples/vendors/redpanda/quickstart-redpanda-serverless/docker-compose.yml
index 0e41f645..a0fb7bc4 100644
--- a/examples/quickstart-redpanda-serverless/docker-compose.yml
+++ b/examples/vendors/redpanda/quickstart-redpanda-serverless/docker-compose.yml
@@ -21,7 +21,7 @@ services:
producer:
container_name: producer
build:
- context: ../quickstart-producer
+ context: quickstart-producer
args:
VERSION: ${version}
configs:
diff --git a/examples/vendors/redpanda/quickstart-redpanda-serverless/helpers.sh b/examples/vendors/redpanda/quickstart-redpanda-serverless/helpers.sh
new file mode 120000
index 00000000..403793f5
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-serverless/helpers.sh
@@ -0,0 +1 @@
+../../../compose-templates/helpers.sh
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-serverless/log4j.properties b/examples/vendors/redpanda/quickstart-redpanda-serverless/log4j.properties
new file mode 120000
index 00000000..0fa46f55
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-serverless/log4j.properties
@@ -0,0 +1 @@
+../../../compose-templates/log4j.properties
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-serverless/quickstart-producer b/examples/vendors/redpanda/quickstart-redpanda-serverless/quickstart-producer
new file mode 120000
index 00000000..9e537bcd
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-serverless/quickstart-producer
@@ -0,0 +1 @@
+../../../quickstart-producer/
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-serverless/start.sh b/examples/vendors/redpanda/quickstart-redpanda-serverless/start.sh
new file mode 120000
index 00000000..e67fc61f
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-serverless/start.sh
@@ -0,0 +1 @@
+../../../compose-templates/start.sh
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-serverless/stop.sh b/examples/vendors/redpanda/quickstart-redpanda-serverless/stop.sh
new file mode 120000
index 00000000..c08155cc
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-serverless/stop.sh
@@ -0,0 +1 @@
+../../../compose-templates/stop.sh
\ No newline at end of file
diff --git a/examples/vendors/redpanda/quickstart-redpanda-serverless/web b/examples/vendors/redpanda/quickstart-redpanda-serverless/web
new file mode 120000
index 00000000..7fe2f7a7
--- /dev/null
+++ b/examples/vendors/redpanda/quickstart-redpanda-serverless/web
@@ -0,0 +1 @@
+../../../compose-templates/web/
\ No newline at end of file
diff --git a/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml b/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml
index 32a9d600..f204591d 100644
--- a/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml
+++ b/kafka-connector-project/kafka-connector/src/adapter/dist/adapters.xml
@@ -416,4 +416,54 @@
-->
+    <!-- Connection to Confluent Cloud used by the QuickStartConfluentCloud factory configuration -->
+    <data_provider name="QuickStartConfluentCloud">
+        <adapter_class>com.lightstreamer.kafka.adapters.KafkaConnectorDataAdapter</adapter_class>
+
+        <param name="bootstrap.servers">broker:29092</param>
+        <param name="group.id">quick-start-group-confluent</param>
+
+        <!-- Encryption settings -->
+        <param name="encryption.enable">true</param>
+        <param name="encryption.protocol">TLSv1.2</param>
+        <param name="encryption.hostname.verification.enable">true</param>
+
+        <!-- Authentication settings -->
+        <param name="authentication.enable">true</param>
+        <param name="authentication.mechanism">PLAIN</param>
+        <param name="authentication.username">API.key</param>
+        <param name="authentication.password">API.secret</param>
+
+        <!-- Record evaluation settings -->
+        <param name="record.consume.from">EARLIEST</param>
+        <param name="record.key.evaluator.type">INTEGER</param>
+        <param name="record.value.evaluator.type">JSON</param>
+
+        <!-- Topic mapping -->
+        <param name="item-template.stock">stock-#{index=KEY}</param>
+        <param name="map.stocks.to">item-template.stock</param>
+
+        <!-- Record mapping -->
+        <param name="field.timestamp">#{VALUE.timestamp}</param>
+        <param name="field.time">#{VALUE.time}</param>
+        <param name="field.stock_name">#{VALUE.name}</param>
+        <param name="field.last_price">#{VALUE.last_price}</param>
+        <param name="field.ask">#{VALUE.ask}</param>
+        <param name="field.ask_quantity">#{VALUE.ask_quantity}</param>
+        <param name="field.bid">#{VALUE.bid}</param>
+        <param name="field.bid_quantity">#{VALUE.bid_quantity}</param>
+        <param name="field.pct_change">#{VALUE.pct_change}</param>
+        <param name="field.min">#{VALUE.min}</param>
+        <param name="field.max">#{VALUE.max}</param>
+        <param name="field.ref_price">#{VALUE.ref_price}</param>
+        <param name="field.open_price">#{VALUE.open_price}</param>
+        <param name="field.item_status">#{VALUE.item_status}</param>
+        <param name="field.ts">#{TIMESTAMP}</param>
+        <param name="field.topic">#{TOPIC}</param>
+        <param name="field.offset">#{OFFSET}</param>
+        <param name="field.partition">#{PARTITION}</param>
+    </data_provider>
diff --git a/kafka-connector-project/kafka-connector/src/adapter/dist/log4j.properties b/kafka-connector-project/kafka-connector/src/adapter/dist/log4j.properties
index 39cce811..28eb7081 100644
--- a/kafka-connector-project/kafka-connector/src/adapter/dist/log4j.properties
+++ b/kafka-connector-project/kafka-connector/src/adapter/dist/log4j.properties
@@ -26,3 +26,10 @@ log4j.appender.QuickStartFile=org.apache.log4j.RollingFileAppender
log4j.appender.QuickStartFile.layout=org.apache.log4j.PatternLayout
log4j.appender.QuickStartFile.layout.ConversionPattern=[%d] [%-10c{1}] %-5p %m%n
log4j.appender.QuickStartFile.File=../../logs/quickstart.log
+
+# QuickStartConfluentCloud logger
+log4j.logger.QuickStartConfluentCloud=INFO, QuickStartConfluentCloudFile
+log4j.appender.QuickStartConfluentCloudFile=org.apache.log4j.RollingFileAppender
+log4j.appender.QuickStartConfluentCloudFile.layout=org.apache.log4j.PatternLayout
+log4j.appender.QuickStartConfluentCloudFile.layout.ConversionPattern=[%d] [%-10c{1}] %-5p %m%n
+log4j.appender.QuickStartConfluentCloudFile.File=../../logs/quickstart-confluent.log
diff --git a/pictures/confluent-lightstreamer.png b/pictures/confluent-lightstreamer.png
new file mode 100644
index 00000000..f95cceb4
Binary files /dev/null and b/pictures/confluent-lightstreamer.png differ
diff --git a/pictures/consumer-confluent.gif b/pictures/consumer-confluent.gif
new file mode 100644
index 00000000..f36ee97e
Binary files /dev/null and b/pictures/consumer-confluent.gif differ
diff --git a/pictures/producer-confluent.gif b/pictures/producer-confluent.gif
new file mode 100644
index 00000000..d5cf53b9
Binary files /dev/null and b/pictures/producer-confluent.gif differ