Commit
add docs for hstream kafka api (#48)
* kafka: add compatibility doc

* add java client examples

* add kafka java client docs

* reorganize contents


---------

Co-authored-by: Yue Yang <[email protected]>
daleiz and g1eny0ung authored Jan 8, 2024
1 parent 9f0cc05 commit b3e2416
Showing 37 changed files with 564 additions and 19 deletions.
6 changes: 2 additions & 4 deletions docs/_index.md
@@ -4,10 +4,8 @@ order:
'overview',
'start',
'platform',
-'write',
-'receive',
-'process',
-'ingest-and-distribute',
+'develop',
+'develop-with-kafka-api',
'deploy',
'security',
'reference',
6 changes: 6 additions & 0 deletions docs/develop-with-kafka-api/_index.md
@@ -0,0 +1,6 @@
---
order: ['compatibility.md', 'java.md']
collapsed: false
---

Develop (with Kafka API)
43 changes: 43 additions & 0 deletions docs/develop-with-kafka-api/compatibility.md
@@ -0,0 +1,43 @@
# Kafka Compatibility

## Overview

HStream has also supported the Kafka API since version 0.19.0, so users can connect to HStream using Kafka clients. HStream implements the [Kafka protocol](https://kafka.apache.org/protocol.html) under the hood, so you do not need to change any code in your existing Kafka applications: simply update the Kafka URLs in your configurations to point to an HStream cluster, and you can start streaming from your Kafka applications to HStream.
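
For example, a producer that previously pointed at a Kafka broker only needs its bootstrap address changed. A minimal sketch, where `hstream.example.com:9092` is a placeholder for your actual HStream advertised address:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

class PointAtHStream {
  public static void main(String[] args) {
    var props = new Properties();
    // The only required change: bootstrap.servers now points at the HStream
    // cluster (placeholder address) instead of a Kafka broker.
    props.put("bootstrap.servers", "hstream.example.com:9092");
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (var producer = new KafkaProducer<String, String>(props)) {
      producer.send(new ProducerRecord<>("my_topic", "Hello HStream!"));
      producer.flush();
    }
  }
}
```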

::: tip

Refer to [get started with Kafka API](../start/get-started-with-kafka-api.md) to learn how to enable HStream's support for the Kafka API.

:::

## Compatibility with Apache Kafka

HStream supports Apache Kafka version 0.11 and later, and most Kafka clients should be able to auto-negotiate protocol versions.
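
To confirm that a client can reach the cluster and negotiate a protocol version, one quick check is to ask an `AdminClient` for the cluster description. A minimal sketch, with a placeholder endpoint:

```java
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;

class CheckConnection {
  public static void main(String[] args) throws Exception {
    var props = new Properties();
    props.put("bootstrap.servers", "localhost:9092"); // placeholder endpoint

    try (var admin = AdminClient.create(props)) {
      // The Java client negotiates API versions when it connects, so a
      // successful describeCluster() call implies negotiation succeeded.
      System.out.println("Cluster id: " + admin.describeCluster().clusterId().get());
    }
  }
}
```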

Currently, the following clients have been tested with HStream.

| Language | Kafka Client |
| -------- | ----------------------------------------------------------- |
| Java | [Apache Kafka Java Client](https://github.com/apache/kafka) |
| Python | [kafka-python](https://github.com/dpkp/kafka-python) |
| Go | [franz-go](https://github.com/twmb/franz-go) |
| C/C++ | [librdkafka](https://github.com/confluentinc/librdkafka) |

::: tip

We recommend using the latest version of each Kafka client.

:::

## Kafka features not supported in HStream

HStream does not currently support the following Kafka features (we plan to support them in a future version):

- Kafka transactions
- Quotas in Kafka

::: tip

The configuration of Kafka brokers is not applicable to HStream, as HStream is a completely different implementation.

:::
37 changes: 37 additions & 0 deletions docs/develop-with-kafka-api/java.md
@@ -0,0 +1,37 @@
# Develop with the Java Kafka Client

This page shows how to use [Apache Kafka Java Client](https://github.com/apache/kafka) to interact with HStream.


::: tip

Replace the variables in the following code according to your setup.

:::

## Create a Topic


::: code-group

<<< @/../kafka-examples/java/app/src/main/java/CreateTopic.java [Java]

:::

## Produce a Record


::: code-group

<<< @/../kafka-examples/java/app/src/main/java/Produce.java [Java]

:::

## Consume Records


::: code-group

<<< @/../kafka-examples/java/app/src/main/java/Consume.java [Java]

:::
6 changes: 6 additions & 0 deletions docs/develop/_index.md
@@ -0,0 +1,6 @@
---
order: ['write', 'receive', 'process', 'ingest-and-distribute']
collapsed: false
---

Develop
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
4 changes: 2 additions & 2 deletions docs/process/sql.md → docs/develop/process/sql.md
@@ -25,7 +25,7 @@ streams to get some useful information.
## Requirements

Ensure you have deployed HStreamDB successfully. The easiest way is to follow
-[quickstart](../start/quickstart-with-docker.md) to start a local cluster. Of
+[quickstart](../../start/quickstart-with-docker.md) to start a local cluster. Of
course, you can also try other methods mentioned in the Deployment part.

## Step 1: Create related streams
@@ -199,4 +199,4 @@ The result is updated right away.
## Related Pages

For a detailed introduction to the SQL, see
-[HStream SQL](../reference/sql/sql-overview.md).
+[HStream SQL](../../reference/sql/sql-overview.md).
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/write/stream.md → docs/develop/write/stream.md
@@ -11,7 +11,7 @@ requirements:
- Contain only the following characters: Letters `[A-Za-z]`, numbers `[0-9]`, underscores `_`

\*For the cases where the resource name is used as a part of a SQL statement,
-such as in [HStream SQL Shell](../reference/cli.md#hstream-sql-shell), there
+such as in [HStream SQL Shell](../../reference/cli.md#hstream-sql-shell), there
will be situations where the resource name cannot be parsed properly (such as
conflicts with Keywords etc.), enclose the resource name with double quotes `"`.

File renamed without changes.
2 changes: 1 addition & 1 deletion docs/platform/create-connectors-in-platform.md
@@ -24,7 +24,7 @@ After filling in the configuration, click the **Create** button to create the so

::: tip

-For more details about the configuration of each source connector, please refer to [Connectors](../ingest-and-distribute/connectors.md).
+For more details about the configuration of each source connector, please refer to [Connectors](../develop/ingest-and-distribute/connectors.md).

:::

6 changes: 3 additions & 3 deletions docs/platform/stream-in-platform.md
@@ -14,11 +14,11 @@ This tutorial guides you on how to create and manage streams in HStream Platform

After clicking the **New stream** button, you will be directed to the **New Stream** page. You need to set some necessary properties for your stream and create it:

-1. Specify the **stream name**. You can refer to [Guidelines to name a resource](../write/stream.md#guidelines-to-name-a-resource) to name a stream.
+1. Specify the **stream name**. You can refer to [Guidelines to name a resource](../develop/write/stream.md#guidelines-to-name-a-resource) to name a stream.

2. Fill in the number of **shards** you want this stream to have. The default value is **1**.

-> Shard is the primary storage unit for the stream. For more details, please refer to [Sharding in HStreamDB](../write/shards.md#sharding-in-hstreamdb).
+> Shard is the primary storage unit for the stream. For more details, please refer to [Sharding in HStreamDB](../develop/write/shards.md#sharding-in-hstreamdb).
3. Fill in the number of **replicas** for each stream. The default value is **3**.

@@ -27,7 +27,7 @@ After clicking the **New stream** button, you will be directed to the **New Stre
5. Click the **Confirm** button to create a stream.

::: tip
-For more details about **replicas** and **retention**, please refer to [Attributes of a Stream](../write/stream.md#attributes-of-a-stream).
+For more details about **replicas** and **retention**, please refer to [Attributes of a Stream](../develop/write/stream.md#attributes-of-a-stream).
:::

::: warning
4 changes: 2 additions & 2 deletions docs/platform/subscription-in-platform.md
@@ -14,7 +14,7 @@ This tutorial guides you on how to create and manage subscriptions in HStream Pl

After clicking the **New subscription** button, you will be directed to the **New subscription** page. You need to set some necessary properties for your subscription and create it:

-1. Specify the **Subscription ID**. You can refer to [Guidelines to name a resource](../write/stream.md#guidelines-to-name-a-resource) to name a subscription.
+1. Specify the **Subscription ID**. You can refer to [Guidelines to name a resource](../develop/write/stream.md#guidelines-to-name-a-resource) to name a subscription.

2. Select a stream as the source from the dropdown list.

@@ -25,7 +25,7 @@ After clicking the **New subscription** button, you will be directed to the **Ne
5. Click the **Confirm** button to create a subscription.

::: tip
-For more details about **ACK timeout** and **max unacked records**, please refer to [Attributes of a Subscription](../receive/subscription.md#attributes-of-a-subscription).
+For more details about **ACK timeout** and **max unacked records**, please refer to [Attributes of a Subscription](../develop/receive/subscription.md#attributes-of-a-subscription).
:::

::: warning
2 changes: 1 addition & 1 deletion docs/platform/write-in-platform.md
@@ -20,7 +20,7 @@ A record is like a piece of JSON data. You can add arbitrary fields to a record,
A record also ships with a partition key, which is used to determine which shard the record will be allocated to and improve the read/write performance.
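
For readers using the Java client directly rather than the Platform UI, here is a minimal sketch of attaching a partition key to a record, assuming the hstreamdb-java client with a placeholder server address and stream name:

```java
import io.hstream.HStreamClient;
import io.hstream.Producer;
import io.hstream.Record;

class WriteWithKey {
  public static void main(String[] args) throws Exception {
    // Placeholder address; replace with your HStream server URL.
    try (HStreamClient client = HStreamClient.builder().serviceUrl("127.0.0.1:6570").build()) {
      Producer producer = client.newProducer().stream("demo_stream").build();
      // Records sharing a partition key are allocated to the same shard.
      Record record =
          Record.newBuilder().partitionKey("user-1").rawRecord("Hello".getBytes()).build();
      producer.write(record).join();
    }
  }
}
```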

::: tip
-For more details about the partition key, please refer to [Partition Key](../write/write.md#write-records-with-partition-keys).
+For more details about the partition key, please refer to [Partition Key](../develop/write/write.md#write-records-with-partition-keys).
:::

Take the following steps to write records to a stream:
4 changes: 2 additions & 2 deletions docs/release-notes.md
@@ -440,7 +440,7 @@ In particular, this release provides connectors listed below:
- [sink-mysql](https://github.com/hstreamdb/hstream-connectors/blob/main/docs/specs/sink_mysql_spec.md)
- [sink-postgresql](https://github.com/hstreamdb/hstream-connectors/blob/main/docs/specs/sink_postgresql_spec.md)

-You can refer to [the documentation](./ingest-and-distribute/overview.md) to learn more about
+You can refer to [the documentation](./develop/ingest-and-distribute/overview.md) to learn more about
HStream IO.

#### New Stream Processing Engine
@@ -451,7 +451,7 @@ magnificently. The new engine also supports **multi-way join**, **sub-queries**,
and **more** general materialized views.

The feature is still experimental. For try-outs, please refer to
-[the SQL guides](./process/sql.md).
+[the SQL guides](./develop/process/sql.md).

#### Gossip-based HServer Clusters

1 change: 1 addition & 0 deletions docs/start/_index.md
@@ -2,6 +2,7 @@
order:
- try-out-hstream-platform.md
- quickstart-with-docker.md
+- get-started-with-kafka-api.md
- hstream-console.md
collapsed: false
---
Expand Down
1 change: 1 addition & 0 deletions docs/start/get-started-with-kafka-api.md
@@ -0,0 +1 @@
# Get Started with Kafka API
6 changes: 3 additions & 3 deletions docs/start/hstream-console.md
@@ -23,11 +23,11 @@ you can intuitively visualize the resource status, and easily find out the bottl
### Data synchronization

With connectors in HStream Console, you can synchronize data between HStreamDB and other data sources, such as MySQL, PostgreSQL, and Elasticsearch.
-Check out [HStream IO Overview](../ingest-and-distribute/overview.md) to learn more about connectors.
+Check out [HStream IO Overview](../develop/ingest-and-distribute/overview.md) to learn more about connectors.

## Next steps

To learn more about HStreamDB's resources, follow the links below:

-- [Streams](../write/stream.md)
-- [Subscriptions](../receive/subscription.md)
+- [Streams](../develop/write/stream.md)
+- [Subscriptions](../develop/receive/subscription.md)
36 changes: 36 additions & 0 deletions kafka-examples/java/app/build.gradle
@@ -0,0 +1,36 @@

plugins {
    id 'application'
    id "com.diffplug.spotless" version "6.2.0"
}

repositories {
    mavenCentral()
}

dependencies {
    implementation 'org.apache.kafka:kafka-clients:3.6.1'
}

java {
    toolchain {
        languageVersion.set(JavaLanguageVersion.of(11))
    }
}

application {
    // Define the main class for the application.
    mainClass = 'Main'
}

spotless {
    java {
        googleJavaFormat()
    }

    groovyGradle {
        target '*.gradle'
        greclipse()
        indentWithSpaces()
    }
}
27 changes: 27 additions & 0 deletions kafka-examples/java/app/src/main/java/Consume.java
@@ -0,0 +1,27 @@
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

class Consume {
  public static void main(String[] args) throws Exception {
    String endpoint = "localhost:9092";
    String topicName = "my_topic";
    String groupName = "my_group";

    var props = new Properties();
    props.put("bootstrap.servers", endpoint);
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("auto.offset.reset", "earliest");
    props.put("group.id", groupName);

    try (var consumer = new KafkaConsumer<String, String>(props)) {
      consumer.subscribe(Collections.singleton(topicName));
      // Poll once for up to 10 seconds; a long-running consumer would poll in a loop.
      var records = consumer.poll(Duration.ofSeconds(10));
      for (var record : records) {
        System.out.println(record);
      }
    }
  }
}
24 changes: 24 additions & 0 deletions kafka-examples/java/app/src/main/java/CreateTopic.java
@@ -0,0 +1,24 @@
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

class CreateTopic {
  public static void main(String[] args) throws Exception {
    String endpoint = "localhost:9092";
    String topicName = "my_topic";
    int partitions = 1;
    short replicationFactor = 1;

    var props = new Properties();
    props.put("bootstrap.servers", endpoint);

    try (var admin = AdminClient.create(props)) {
      // createTopics() is asynchronous; all().get() blocks until the topic exists.
      admin
          .createTopics(
              Collections.singleton(new NewTopic(topicName, partitions, replicationFactor)))
          .all()
          .get();
    }
  }
}
7 changes: 7 additions & 0 deletions kafka-examples/java/app/src/main/java/Main.java
@@ -0,0 +1,7 @@
class Main {
  public static void main(String[] args) throws Exception {
    // Run the examples in order: create a topic, produce to it, then consume from it.
    CreateTopic.main(args);
    Produce.main(args);
    Consume.main(args);
  }
}
20 changes: 20 additions & 0 deletions kafka-examples/java/app/src/main/java/Produce.java
@@ -0,0 +1,20 @@
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

class Produce {
  public static void main(String[] args) throws Exception {
    String endpoint = "localhost:9092";
    String topicName = "my_topic";

    var props = new Properties();
    props.put("bootstrap.servers", endpoint);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    try (var producer = new KafkaProducer<String, String>(props)) {
      // send() is asynchronous; flush() blocks until the record is delivered.
      producer.send(new ProducerRecord<>(topicName, "Hello HStream!"));
      producer.flush();
    }
  }
}
Binary file not shown.
5 changes: 5 additions & 0 deletions kafka-examples/java/gradle/wrapper/gradle-wrapper.properties
@@ -0,0 +1,5 @@
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
distributionUrl=https\://services.gradle.org/distributions/gradle-7.4.2-bin.zip
zipStoreBase=GRADLE_USER_HOME
zipStorePath=wrapper/dists