✔️ Start Kafka and ZooKeeper using docker compose
✔️ Create a producer service
✔️ Create two consumer services
✔️ Publish messages into Kafka
What am I going to do during this workshop?
You will create a microservice architecture enhanced by Kafka.
Take a look at the schema below:
Let's describe the situation:
- The waiter asks the kitchen to prepare a pizza of type margherita for table 17
- Since this restaurant has a lot of orders, you'll help them scale up using Kafka
- Once the order has been published into Kafka, two consumers will consume this order
- The cook will prepare the order
- The manager keeps track of the orders and records them in a file (acting like a database)
But what is Kafka? Why do I need it anyway?
Kafka is a message broker: someone is responsible for sending messages to Kafka - the producer - and someone else receives these messages - the consumer. Other message brokers exist, like RabbitMQ, but Kafka is more resilient because it is designed to handle stream interruptions.
I'm not going to lie, Kafka is a bit overkill for this workshop. Still, it is a good way to learn how it behaves and how to interact with it using your own producers and consumers.
At the end of the workshop, you'll be able to scale this application up even further with more producers and consumers.
Please follow the instructions available here.
You'll need to start Kafka in a Docker container, but Kafka relies on ZooKeeper, so you'll start it as well.
Both services will run in Docker containers.
- Create a `docker-compose.yaml` file
- In this file, create a `zookeeper` service:
  - It uses the following Docker image: `confluentinc/cp-zookeeper:latest`
  - It binds the container port `22181` to your local `22181` port
  - Initialize the required environment variables
- Then, create a second service, named `kafka`:
  - It uses the following Docker image: `confluentinc/cp-kafka:latest`
  - It binds the following container ports to your local ports:
    - `19092` to `19092`, the internal listener
    - `19091` to `19091`, the external listener
  - Initialize the required environment variables
  - It depends on the `zookeeper` service
You'll also have to set up some non-required environment variables: look for the listener-related ones! (A possible compose file is sketched after the links below.)
- Control startup and shutdown order in docker compose
- Kafka listeners explained
- Kafka configuration properties
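If you get stuck, here is a minimal `docker-compose.yaml` sketch. The listener names `LISTENER_IN` / `LISTENER_OUT`, the `zookeeper:22181` connection string and the advertised hostnames are assumptions chosen to match the ports above and the expected log output below; adapt them to your own setup.

```yaml
version: "3"

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    ports:
      - "22181:22181"
    environment:
      ZOOKEEPER_CLIENT_PORT: 22181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "19092:19092"
      - "19091:19091"
    environment:
      KAFKA_ZOOKEEPER_CONNECT: "zookeeper:22181"
      # LISTENER_IN is used inside the compose network, LISTENER_OUT by clients on your machine.
      KAFKA_LISTENERS: "LISTENER_IN://0.0.0.0:19092,LISTENER_OUT://0.0.0.0:19091"
      KAFKA_ADVERTISED_LISTENERS: "LISTENER_IN://kafka:19092,LISTENER_OUT://localhost:19091"
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: "LISTENER_IN:PLAINTEXT,LISTENER_OUT:PLAINTEXT"
      KAFKA_INTER_BROKER_LISTENER_NAME: "LISTENER_IN"
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
```

Exposing two listeners lets containers on the compose network reach the broker through `kafka:19092` while your Go services running on the host use `localhost:19091`.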
Run `docker compose up --build -d`, wait 30 seconds, and in a new terminal run `docker compose logs kafka | grep -i started`.
It should print:
```
$ docker compose logs kafka | grep -i started
kafka_1 | [2021-08-08 14:06:44,854] DEBUG [ReplicaStateMachine controllerId=1002] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka_1 | [2021-08-08 14:06:44,856] DEBUG [PartitionStateMachine controllerId=1002] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka_1 | [2021-08-08 14:06:44,866] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1002] Started data-plane acceptor and processor(s) for endpoint : ListenerName(LISTENER_IN) (kafka.network.SocketServer)
kafka_1 | [2021-08-08 14:06:44,867] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1002] Started data-plane acceptor and processor(s) for endpoint : ListenerName(LISTENER_OUT) (kafka.network.SocketServer)
kafka_1 | [2021-08-08 14:06:44,867] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1002] Started socket server acceptors and processors (kafka.network.SocketServer)
kafka_1 | [2021-08-08 14:06:44,872] INFO [KafkaServer id=1002] started (kafka.server.KafkaServer)
kafka_1 | [2021-08-08 14:10:13,068] DEBUG [ReplicaStateMachine controllerId=1002] Started replica state machine with initial state -> HashMap() (kafka.controller.ZkReplicaStateMachine)
kafka_1 | [2021-08-08 14:10:13,070] DEBUG [PartitionStateMachine controllerId=1002] Started partition state machine with initial state -> HashMap() (kafka.controller.ZkPartitionStateMachine)
kafka_1 | [2021-08-08 14:10:13,099] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1002] Started data-plane acceptor and processor(s) for endpoint : ListenerName(LISTENER_IN) (kafka.network.SocketServer)
kafka_1 | [2021-08-08 14:10:13,100] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1002] Started data-plane acceptor and processor(s) for endpoint : ListenerName(LISTENER_OUT) (kafka.network.SocketServer)
kafka_1 | [2021-08-08 14:10:13,100] INFO [SocketServer listenerType=ZK_BROKER, nodeId=1002] Started socket server acceptors and processors (kafka.network.SocketServer)
kafka_1 | [2021-08-08 14:10:13,104] INFO [KafkaServer id=1002] started (kafka.server.KafkaServer)
```
In this step, you will create a service responsible for publishing messages into Kafka.
You'll use the Go programming language and the `sarama` package to create the producer.
- Create a `client` folder and jump into it
- Initialize the go module with the following command: `go mod init waiter`
- Install sarama with `go get github.com/Shopify/sarama`
- Create a `main.go` file
- Create 3 functions (a minimal sketch follows this list):
  - The `main` one
  - `createProducer`, which will create the producer and connect to the Kafka broker
  - `publishOrder`, which will publish a pizza order
- The message that the producer will publish must be of the type `Order`, a structure containing a `string` - the pizza type - and an `int` - the table that ordered the pizza
- It must be published on the `pizza-order` topic
- You'll then call the two last functions in the `main` one
- Once the message has been published, print the partition and the offset it has been published to
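Here is a minimal sketch of what the producer could look like. The broker address `localhost:19091` (the external listener), the `Order` field names and the JSON encoding are assumptions; only the function names, the `pizza-order` topic and the printed message come from the instructions above.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Shopify/sarama"
)

// Order is the message published to Kafka: the pizza type and the table that ordered it.
type Order struct {
	PizzaType string `json:"pizza_type"`
	Table     int    `json:"table"`
}

// createProducer creates a synchronous sarama producer connected to the given brokers.
func createProducer(brokers []string) (sarama.SyncProducer, error) {
	config := sarama.NewConfig()
	config.Producer.Return.Successes = true // required by the SyncProducer
	return sarama.NewSyncProducer(brokers, config)
}

// publishOrder encodes the order as JSON and publishes it on the pizza-order topic,
// returning the partition and offset it was written to.
func publishOrder(producer sarama.SyncProducer, order Order) (int32, int64, error) {
	payload, err := json.Marshal(order)
	if err != nil {
		return 0, 0, err
	}
	msg := &sarama.ProducerMessage{
		Topic: "pizza-order",
		Value: sarama.ByteEncoder(payload),
	}
	return producer.SendMessage(msg)
}

func main() {
	// localhost:19091 is the external listener exposed by the compose file sketched above (assumption).
	producer, err := createProducer([]string{"localhost:19091"})
	if err != nil {
		log.Fatalf("failed to create producer: %v", err)
	}
	defer producer.Close()

	partition, offset, err := publishOrder(producer, Order{PizzaType: "margherita", Table: 17})
	if err != nil {
		log.Fatalf("failed to publish order: %v", err)
	}
	fmt.Printf("message published on partition %d with offset %d\n", partition, offset)
}
```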
Run the following command while your Kafka cluster is running:
```
go run main.go
```
It should print the following:
```
go run main.go
message published on partition 0 with offset 0
```
And if you run it a second time:
```
go run main.go
message published on partition 0 with offset 1
```
It's now time for you to create the first consumer: the cooks in your kitchen!
First, you will create the actual consumer and then consume messages.
- Create a `kitchen` folder and jump into it
- Once again, initialize the go module using `go mod init kitchen`
- Install sarama using `go get github.com/Shopify/sarama`
- Create a `main.go` file and open it
- Create 3 functions (a minimal sketch follows this list):
  - The `main` one
  - `createConsumer`, which creates the consumer using sarama
  - `consumeMessages`, which retrieves the partitions, then reads the incoming messages as they arrive and prints their content after decoding them
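Below is a minimal consumer sketch, under the same assumptions as the producer sketch (broker on `localhost:19091`, JSON-encoded `Order`). Reading from `sarama.OffsetNewest` means only orders published after the consumer starts will show up.

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/Shopify/sarama"
)

// Order mirrors the structure published by the waiter service.
type Order struct {
	PizzaType string `json:"pizza_type"`
	Table     int    `json:"table"`
}

// createConsumer creates a sarama consumer connected to the given brokers.
func createConsumer(brokers []string) (sarama.Consumer, error) {
	return sarama.NewConsumer(brokers, sarama.NewConfig())
}

// consumeMessages reads every partition of the pizza-order topic and prints
// the decoded orders as they arrive.
func consumeMessages(consumer sarama.Consumer) error {
	partitions, err := consumer.Partitions("pizza-order")
	if err != nil {
		return err
	}
	for _, partition := range partitions {
		pc, err := consumer.ConsumePartition("pizza-order", partition, sarama.OffsetNewest)
		if err != nil {
			return err
		}
		go func(pc sarama.PartitionConsumer) {
			defer pc.Close()
			for msg := range pc.Messages() {
				var order Order
				if err := json.Unmarshal(msg.Value, &order); err != nil {
					log.Printf("could not decode order: %v", err)
					continue
				}
				fmt.Printf("Received an order for a pizza %s at table %d !\n", order.PizzaType, order.Table)
			}
		}(pc)
	}
	select {} // block forever so the partition goroutines keep consuming
}

func main() {
	consumer, err := createConsumer([]string{"localhost:19091"})
	if err != nil {
		log.Fatalf("failed to create consumer: %v", err)
	}
	defer consumer.Close()

	if err := consumeMessages(consumer); err != nil {
		log.Fatalf("failed to consume messages: %v", err)
	}
}
```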
Start your Kafka cluster using `docker-compose up --build`.
Then, start the consumer in a new terminal using `go run main.go` in the `kitchen` folder.
Finally, start the producer in a new terminal using `go run main.go` in the `client` folder.
You should see the following in the consumer's terminal:
```
go run main.go
Received an order for a pizza margherita at table 17 !
```
This service acts as storage / monitoring for the incoming orders: it saves them in a file.
We split the kitchen and the manager into separate services so that each one keeps a single responsibility.
- You already wrote one consumer, right? Now just save the result in a file named `log.txt` :) (a sketch of the file-writing part follows this list)
- Create it in the `manager` folder
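For the file-writing part, one possible approach is sketched below. The helper name `appendToLog` is hypothetical; in the real manager you would call it from the consumer's message loop rather than from the hard-coded `main` shown here.

```go
package main

import (
	"log"
	"os"
)

// appendToLog appends one line to log.txt, creating the file if it does not exist yet.
func appendToLog(line string) error {
	f, err := os.OpenFile("log.txt", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
	if err != nil {
		return err
	}
	defer f.Close()
	_, err = f.WriteString(line + "\n")
	return err
}

func main() {
	// Hypothetical standalone usage; in practice the line comes from a decoded Order.
	if err := appendToLog("Received an order for a pizza margherita at table 17 !"); err != nil {
		log.Fatalf("could not write to log.txt: %v", err)
	}
}
```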
Use the same commands as in the previous step, and start the new consumer in the `manager` folder using `go run main.go`.
It should have created a file named `log.txt` with the following sentence inside: `Received an order for a pizza margherita at table 17 !`.
- Improve the services: dockerize your producer and consumers!
- Improve the producer:
  - Split the functions into separate folders
  - Take the pizza type and the table as command-line arguments (see the sketch below)
  - Read the pizza orders from a configuration file
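For the command-line arguments idea, a tiny sketch using the standard `flag` package could look like this (the flag names are hypothetical):

```go
package main

import (
	"flag"
	"fmt"
)

func main() {
	// Hypothetical flag names; run with: go run main.go -pizza margherita -table 17
	pizza := flag.String("pizza", "margherita", "type of the pizza to order")
	table := flag.Int("table", 17, "table that ordered the pizza")
	flag.Parse()

	fmt.Printf("ordering a pizza %s for table %d\n", *pizza, *table)
	// From here, build an Order{PizzaType: *pizza, Table: *table} and publish it as before.
}
```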
| Luca Georges Francois |
| --- |
🚀 Don't hesitate to follow us on our different networks, and put a star 🌟 on PoC's repositories.