Update quickstart code and copy to match
stereosky committed Jun 20, 2024
1 parent 7327cb1 commit bd67074
Showing 1 changed file with 10 additions and 12 deletions.
22 changes: 10 additions & 12 deletions docs/quickstart.md
@@ -22,11 +22,10 @@ python -m pip install quixstreams

### Step 2. Producing data to Kafka

-In order to process events with Quix Streams, they first need to be in Kafka.
-Let's create the file `producer.py` to generate some test data into the Kafka topic:
+In order to process events with Quix Streams, they first need to be in Kafka.
+Let's create the file `producer.py` to write some test data into a Kafka topic.

```python
-
from quixstreams import Application

# Create an Application - the main configuration entry point
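# NOTE: the remainder of producer.py is collapsed in this diff view. Below is a
# minimal sketch of what a complete producer along these lines might look like.
# The broker address and most message payloads are illustrative assumptions
# (only the id2 text and the start of the id3 text are visible in the command
# output further down), and Topic.serialize() is assumed to be available in the
# installed Quix Streams version.

app = Application(broker_address="localhost:9092")

# Define a topic with JSON value serialization
messages_topic = app.topic(name="messages", value_serializer="json")

# Sample chat messages to write into the topic
messages = [
    {"chat_id": "id1", "text": "Lorem ipsum dolor sit amet"},
    {"chat_id": "id2", "text": "Consectetur adipiscing elit sed"},
    {"chat_id": "id3", "text": "Mollis nunc sed id semper"},
]


def main():
    with app.get_producer() as producer:
        for message in messages:
            key = message["chat_id"]
            # Serialize the message value for Kafka
            kafka_msg = messages_topic.serialize(key=key, value=message)
            print(f'Produce event with key="{key}" value="{kafka_msg.value}"')
            producer.produce(
                topic=messages_topic.name,
                key=kafka_msg.key,
                value=kafka_msg.value,
            )


if __name__ == "__main__":
    main()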
@@ -65,8 +64,8 @@ if __name__ == "__main__":

### Step 3. Consuming data from Kafka

-Let's create the file `consumer.py` with streaming processing code.
-It will start consuming messages from Kafka and applying transformations to them.
+Let's create the file `consumer.py` to process the data in the topic.
+It will start consuming messages from Kafka and apply transformations to them.

```python
from quixstreams import Application
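# NOTE: the Application and topic setup in consumer.py is collapsed in this
# diff view. A plausible sketch follows; the consumer group name and the
# auto_offset_reset value are assumptions, not taken from the file.

# Create an Application - the main configuration entry point
app = Application(
    broker_address="localhost:9092",
    consumer_group="text-splitter-v1",
    auto_offset_reset="earliest",
)

# Define the input topic with JSON value deserialization
# (this exact line is visible as context in a later hunk header)
messages_topic = app.topic(name="messages", value_deserializer="json")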
@@ -86,7 +85,7 @@ messages_topic = app.topic(name="messages", value_deserializer="json")
sdf = app.dataframe(topic=messages_topic)

# Print the input data
-sdf = sdf.update(lambda message: print("Input: ", message))
+sdf = sdf.update(lambda message: print(f"Input: {message}"))

# Define a transformation to split incoming sentences
# into words using a lambda function
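# NOTE: the body of the word-splitting transformation is collapsed in this diff
# view (the next hunk header shows it begins with `sdf = sdf.apply(`). Below is
# a plausible sketch that emits one row per word via expand=True; the exact
# lambda is an assumption:
sdf = sdf.apply(
    lambda row: [{"text": word} for word in row["text"].split()],
    expand=True,
)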
@@ -99,17 +98,16 @@ sdf = sdf.apply(
sdf["length"] = sdf["text"].apply(lambda word: len(word))

# Print the output result
-sdf = sdf.update(lambda word: print(word))
+sdf = sdf.update(lambda word: print(f"Output: {word}"))

# Run the streaming application
if __name__ == "__main__":
    app.run(sdf)
```

-
### Step 4. Running the Producer

-Let's run the `producer.py` to fill the topic with data.
+Let's run the `producer.py` in a terminal to fill the topic with data.
If the topic does not exist yet, Quix Streams will create it with the default number of partitions.

```commandline
@@ -127,10 +125,11 @@ Produce event with key="id3" value="b'{"chat_id":"id3","text":"Mollis nunc sed i
### Step 5. Running the Consumer

Now that you have a topic with data, you may start consuming events and process them.
Let's run the `consumer.py` to see the results:
Let's run the `consumer.py` to see the results.

```commandline
+python consumer.py
[2024-02-21 19:57:38,669] [INFO] : Initializing processing of StreamingDataFrame
[2024-02-21 19:57:38,669] [INFO] : Topics required for this application: "messages", "words"
[2024-02-21 19:57:38,699] [INFO] : Validating Kafka topics exist and are configured correctly...
@@ -147,7 +146,6 @@ Input: {'chat_id': 'id2', 'text': 'Consectetur adipiscing elit sed'}
...
```

-
## Next steps

Now that you have a simple Quix Streams application working, you can dive into more advanced features:
@@ -164,4 +162,4 @@ Or check out the tutorials for more in-depth examples:

## Getting help

-If you run into any problems, please create an [issue](https://github.com/quixio/quix-streams/issues) or ask in `#quix-help` in our **[Quix Community on Slack](https://quix.io/slack-invite)**.
+If you run into any problems, please create an [issue](https://github.com/quixio/quix-streams/issues) or ask in `#quix-help` in **[Quix Community on Slack](https://quix.io/slack-invite)**.
