1. Infrastructure
- The "scaladays-workshop-2023" project depends on the "server" and "tttClient" projects.
- The "server" project depends on the "commonJVM" project.
- The "tttClient" project depends on the "commonJS" project.
This means that changes in "commonJVM" will affect "server", and changes in "commonJS" will affect "tttClient". Both "server" and "tttClient" are part of the "scaladays-workshop-2023" project, so changes in these projects will affect the main project.
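The dependency graph above can be sketched as a minimal `build.sbt` skeleton. The names match the workshop build, but the shape shown here is illustrative, not the full configuration:

```scala
// Illustrative skeleton of the project graph (not the complete build.sbt)
lazy val common = crossProject(JVMPlatform, JSPlatform).in(file("common"))

lazy val server = project
  .dependsOn(common.jvm) // changes in commonJVM affect server

lazy val tttClient = project
  .dependsOn(common.js) // changes in commonJS affect tttClient

lazy val `scaladays-workshop-2023` = (project in file("."))
  .aggregate(server, tttClient)
  .dependsOn(server, tttClient)
```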
Deep-dive: Explanation of the customisations in the build.sbt
- `crossProject`: a project that is built for multiple platforms, in our case the JVM and JavaScript platforms. In this `build.sbt`, `common` is a `crossProject`.
- `ScalaJSPlugin`: enables the compilation of Scala code to JavaScript. It's used in the `tttClient` project.
- `DockerPlugin`: used to create Docker images for the application. It's used in the `server` project.
- `JavaAppPackaging`: part of sbt-native-packager, used to package JVM applications. It's used in the `server` project.
- `UniversalPlugin`: part of sbt-native-packager, used to create archives containing all project files (including source files). It's used in the `tttClient` project.
- `aggregate`: creates a list of tasks that are run on the aggregate project and all the aggregated projects. In this `build.sbt`, the `scaladays-workshop-2023` project aggregates the `server` and `tttClient` projects.
- `dependsOn`: declares that a project depends on other projects. In this `build.sbt`, the `scaladays-workshop-2023` project depends on the `server` and `tttClient` projects.
- `Docker / dockerCommands`: allows you to customize the Dockerfile that is generated by the DockerPlugin.
- `scalaJSLinkerConfig`: allows you to configure the Scala.js linker, which is responsible for linking your Scala.js code into a single JavaScript file.
`dockerComposeCommand`: This task is used to determine the path to the `docker-compose` command on the system. It tries to find the command using the `which` command on Unix-like systems or the `where` command on Windows systems. If the command is not found, an exception is thrown.

```scala
dockerComposeCommand := {
  Paths
    .get(Try("which docker-compose".!!).recoverWith { case _ =>
      Try("where docker-compose".!!)
    }.get)
    .toFile
}
```

`dockerRegistryHostPort`: This task is used to get the Docker registry host port from an environment variable named `SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT`. If the environment variable is not set, it defaults to `5000`.

```scala
dockerRegistryHostPort := {
  Properties
    .envOrElse("SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT", "5000")
    .toInt
}
```

`dockerComposeFile`: This task is used to specify the path to the `docker-compose.yml` file, which is located in the `src` directory.

```scala
dockerComposeFile := (Compile / baseDirectory).value / "src" / "docker-compose.yml"
```

`dockerComposeUp`: This task is used to start the Docker Compose environment. It runs several tasks sequentially:

- It sets up Docker BuildX in the `server` project.
- It publishes the `server` Docker image locally.
- It stages the `tttClient` project.
- It logs the Docker Compose file after environment variable interpolation.
- It starts the Docker Compose environment in detached mode.

```scala
dockerComposeUp := Def
  .sequential(
    server / Docker / setupBuildX,
    server / Docker / publishLocal,
    tttClient / Universal / stage,
    Def.task(
      streams.value.log.info(
        "Docker compose file after environment variable interpolation:"
      )
    ),
    Def.task(
      streams.value.log.info(
        Process(
          Seq(
            "docker-compose",
            "-f",
            s"${dockerComposeFile.value.getAbsolutePath()}",
            "config"
          ),
          None,
          "SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT" -> s"${dockerRegistryHostPort.value}",
          "SCALADAYS_CLIENT_DIST" -> s"${(tttClient / Universal / stagingDirectory).value / "dist"}"
        ).!!
      )
    ),
    Def.task(
      streams.value.log.info(
        Process(
          Seq(
            "docker-compose",
            "-f",
            s"${dockerComposeFile.value.getAbsolutePath()}",
            "up",
            "-d"
          ),
          None,
          "SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT" -> s"${dockerRegistryHostPort.value}",
          "SCALADAYS_CLIENT_DIST" -> s"${(tttClient / Universal / stagingDirectory).value / "dist"}"
        ).!!
      )
    )
  )
  .value
```

`dockerComposeDown`: This task is used to stop the Docker Compose environment and remove all images. It runs the `docker-compose down --rmi all` command.

```scala
dockerComposeDown := {
  val log = streams.value.log
  log.info(
    Process(
      Seq(
        "docker-compose",
        "-f",
        s"${dockerComposeFile.value.getAbsolutePath()}",
        "down",
        "--rmi",
        "all"
      ),
      None,
      "SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT" -> s"${dockerRegistryHostPort.value}",
      "SCALADAYS_CLIENT_DIST" -> s"${(tttClient / Universal / stagingDirectory).value / "dist"}"
    ).!!
  )
}
```

`generateConfigToml`: This task in the `server` project generates a `config.toml` file in the managed resources directory from a given Docker registry host port.

```scala
generateConfigToml := {
  generateConfigTomlInManagedResourcesFrom(
    (
      (Docker / dockerRegistryHostPort).value,
      (Compile / resourceManaged).value
    )
  )
}
```

`Docker / setupBuildX`: This task in the `server` project sets up Docker BuildX. It runs several commands to install `binfmt`, stop and remove the Docker registry, and set up Docker BuildX.

```scala
Docker / setupBuildX := {
  (Compile / resourceGenerators).value
  (Docker / setupBuildX).previous.filter(_ == "Success").getOrElse {
    val log = streams.value.log
    val dockerCommand = s"${(Docker / dockerExecCommand).value.mkString("")}"
    Try {
      val binFmtInstall =
        s"$dockerCommand run --privileged --rm tonistiigi/binfmt --install all"
      log.info(
        s"Setting up docker buildx appropriately: ${binFmtInstall.!!}"
      )
    }.flatMap { _ =>
      val stopRegistry = s"$dockerCommand container stop registry"
      Try(log.info(s"Stopping docker registry: ${stopRegistry.!!}"))
        .recoverWith { case _ => Try("Exception") }
    }.flatMap { _ =>
      val removeRegistry = s"$dockerCommand container rm registry"
      Try(log.info(s"removing docker registry: ${removeRegistry.!!}"))
        .recoverWith { case _ => Try("Exception") }
    }.flatMap { _ =>
      val buildxSetup =
        s"$dockerCommand buildx create --config ${(Compile / resourceManaged).value / "docker" / "registry" / "config.toml"} --driver-opt network=host --use"
      Try(
        log.info(
          s"Setting up docker buildx appropriately: ${buildxSetup.!!}"
        )
      )
    }.recover { case e: Exception =>
      log.error(s"${e.getMessage}")
      throw e
    }.map(_ => "Success")
      .toOption
      .getOrElse("Failure")
  }
}
```

`Universal / stage`: This task in the `tttClient` project stages the project and runs the `npm i --dev` and `npm run build` commands in the staging directory.

```scala
Universal / stage := {
  val staging = (Universal / stage).value
  Process("npm i --dev", (Universal / stagingDirectory).value).!!
  Process("npm run build", (Universal / stagingDirectory).value).!!
  staging
}
```

The `Universal / mappings` setting in the `tttClient` project is used to specify the files that should be included in the package when the project is packaged. It's a sequence of tuples, where each tuple consists of a file and its path in the package. There are three `Universal / mappings` settings in the `tttClient` project:

Mapped Resources: This setting maps non-SCSS resources in the `resources` directory to the root of the package.

```scala
Universal / mappings ++= {
  val mappedResources = (Compile / resources).value
  mappedResources.filterNot(_.getName() == "custom.scss").map { r =>
    r -> s"${r.getName()}"
  }
}
```

Mapped SCSS Resources: This setting maps SCSS resources in the `resources` directory to the `scss` directory of the package.

```scala
Universal / mappings ++= {
  val mappedResources = (Compile / resources).value
  mappedResources.filter(_.getName() == "custom.scss").map { r =>
    r -> s"scss/${r.getName()}"
  }
}
```

Mapped Scala.js Linked Files: This setting maps all linked files from the Scala.js `fastLinkJS` task to the `lib` directory of the package.

```scala
Universal / mappings ++= {
  val log = streams.value.log
  val report = (Compile / fastLinkJS).value
  val outputDirectory =
    (Compile / fastLinkJS / scalaJSLinkerOutputDirectory).value
  report.data.publicModules.map { m =>
    log.info(s"moduleId: ${m.moduleID}")
    (outputDirectory / m.jsFileName) -> s"lib/${m.jsFileName}"
  }.toSeq
}
```

These settings ensure that all the necessary files are included in the package when the project is packaged.
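The compose tasks above all shell out through `scala.sys.process.Process`, using the overload that takes a command, an optional working directory, and extra environment variables. A minimal standalone sketch of that pattern (the `GREETING` variable is purely for illustration):

```scala
import scala.sys.process._

// Process(command, workingDir, extraEnv*) runs a command with additional
// environment variables, the same shape the dockerCompose* tasks use to pass
// SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT and SCALADAYS_CLIENT_DIST.
val output: String =
  Process(Seq("sh", "-c", "echo $GREETING"), None, "GREETING" -> "hello from sbt").!!
```

`.!!` captures the process's standard output (and throws if the exit code is non-zero), which is why the tasks can log the interpolated compose config directly.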
As will be discussed in a later section on docker-compose and the runtime architecture of the workshop project, we are running Kafka as our datastore and the Confluent schema-registry as our serialization format store. This means we need to wait for Kafka and the schema-registry to be available before we can start our server project. docker-compose allows us to express and enforce this condition with a single user command.
We are also using library dependencies to communicate with Kafka that rely on a native binary JNI dependency that is not available for Apple silicon CPUs like the M1. The JNI dependency is a transitive dependency of one of our library dependencies, and that library has not updated to the latest version of the JNI library, so we need a different solution. Docker can provide an isolated process, but a Docker container is not a full virtual machine: anything running in a container uses the host system's native runtime platform by default.
Therefore, we need to run the container in a virtual machine capable of emulating a linux/amd64 platform so that the native dependency will execute correctly.
Finally, we must build the image in the emulated linux/amd64 environment so that it will run under linux/amd64 emulation, enabling our JNI Kafka dependency to execute on otherwise incompatible machines.
Deep-dive: Customizing the native packager Docker image
The `dockerCommands` setting in the `server` project is used to specify the Docker commands that should be run when the Docker image for the `server` project is built. It's a sequence of `Cmd` and `ExecCmd` objects, each representing a Docker command. Here's a breakdown of the `dockerCommands` setting:

Base Image Command for Stage0 and Mainstage: These commands set the base image for the Docker image. `updatedBy` finds the command that contains "FROM", "openjdk:8", and "stage0" (or "mainstage") in the original default `dockerCommands` and swaps it for one that uses the build's `Docker / dockerBaseImage`. This is necessary because, although the Docker plugin allows you to specify a `dockerBaseImage`, it doesn't actually use that image in the Dockerfile generated for the project. For the definitions of `updatedBy` and `baseImageCommand`, see `project/sbtExtensions.scala`.

```scala
dockerCommands := dockerCommands.value
  .updatedBy(
    baseImageCommand((Docker / dockerBaseImage).value).forStage("stage0"),
    c =>
      c match {
        case Cmd("FROM", args @ _*) =>
          args.contains("openjdk:8") && args.contains("stage0")
        case _ => false
      }
  )
  .updatedBy(
    baseImageCommand((Docker / dockerBaseImage).value).forStage("mainstage"),
    c =>
      c match {
        case Cmd("FROM", args @ _*) =>
          args.contains("openjdk:8") && args.contains("mainstage")
        case _ => false
      }
  )
```

Add Command: This command adds the `wait-for-it.sh` script from a URL to the Docker image. The script is used to wait for a service to be available. `insertAt` inserts this command at index 6 of the updated default command `Seq`. For the definition of `insertAt`, see `project/sbtExtensions.scala`.

```scala
dockerCommands.value.insertAt(
  6,
  Cmd(
    "ADD",
    "--chmod=u=rX,g=rX",
    "https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh",
    "/4/opt/docker/bin/wait-for-it.sh"
  )
)
```

Run Command: This command runs the `stat` command to display information about the Docker image's `/4/opt/docker` directory.

```scala
dockerCommands.value.insertAt(10, Cmd("RUN", "stat", "/4/opt/docker"))
```

Run Command to Change Permissions: This command changes the permissions of the `wait-for-it.sh` script to make it executable after copying it from the original build stage.

```scala
dockerCommands.value.insertAt(
  20,
  ExecCmd("RUN", "chmod", "+x", "/opt/docker/bin/wait-for-it.sh")
)
```

Entrypoint Command: This command sets the entrypoint for the Docker image. The entrypoint is a script that waits for the `schema-registry` service to be available before running the `scaladays-workshop-2023-server` application.

```scala
dockerCommands.value.updatedBy(
  ExecCmd(
    "ENTRYPOINT",
    "/opt/docker/bin/wait-for-it.sh",
    "schema-registry:8081",
    "--timeout=30",
    "--strict",
    "--",
    "/opt/docker/bin/scaladays-workshop-2023-server",
    "-verbose"
  ),
  c =>
    c match {
      case ExecCmd("ENTRYPOINT", _) => true
      case _ => false
    }
)
```

These commands ensure that the Docker image is built correctly and that the application starts correctly when the Docker image is run.
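The build refers to `updatedBy` and `insertAt` as extension methods defined in `project/sbtExtensions.scala`, which isn't reproduced here. A plausible minimal sketch of such `Seq` extensions (an assumption about their shape, not the workshop's actual code) could look like:

```scala
// Hypothetical reimplementation of the Seq helpers used on dockerCommands.
object SbtExtensions {
  implicit class SeqOps[A](private val underlying: Seq[A]) {
    // Replace every element matching the predicate with `replacement`.
    def updatedBy(replacement: A, matches: A => Boolean): Seq[A] =
      underlying.map(c => if (matches(c)) replacement else c)

    // Insert `elem` so that it ends up at position `index`.
    def insertAt(index: Int, elem: A): Seq[A] =
      (underlying.take(index) :+ elem) ++ underlying.drop(index)
  }
}
```

Under this reading, `updatedBy` leaves the command list unchanged when nothing matches, which is why the predicate can safely target the generated `FROM ... stage0` and `FROM ... mainstage` lines.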
In the context of defining Docker commands in the `dockerCommands` setting, there are two types of commands used: `Cmd` and `ExecCmd`.
Cmd:
- The `Cmd` type represents a Dockerfile instruction whose arguments are rendered in plain string form, for example `Cmd("FROM", "openjdk:11")`.
- It is used for instructions such as `FROM`, `ADD`, and `ENV`, and for shell-form `RUN` commands.
- More information about `Cmd` can be found in the sbt-native-packager documentation.
ExecCmd:
- The `ExecCmd` type represents a Dockerfile instruction whose arguments are rendered in exec (JSON array) form, for example `ExecCmd("ENTRYPOINT", "/opt/docker/bin/app")`.
- It is typically used for `RUN`, `CMD`, and `ENTRYPOINT` instructions, where exec form runs the command directly without wrapping it in a shell.
- More information about `ExecCmd` can be found in the sbt-native-packager documentation.
When defining Docker commands in the `dockerCommands` setting, you can use both `Cmd` and `ExecCmd` to emit the different instruction forms in the generated Dockerfile.
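The distinction is easiest to see in how the two forms would render into Dockerfile lines. The following is a simplified stand-in for the sbt-native-packager types, with a hypothetical `render` method (the real classes live in the plugin and render during Dockerfile generation):

```scala
// Simplified stand-ins for sbt-native-packager's Cmd and ExecCmd,
// contrasting plain (string) form with exec (JSON array) form.
sealed trait CmdLike { def render: String }

final case class Cmd(cmd: String, args: String*) extends CmdLike {
  def render: String = s"$cmd ${args.mkString(" ")}"
}

final case class ExecCmd(cmd: String, args: String*) extends CmdLike {
  def render: String =
    s"""$cmd [${args.map(a => "\"" + a + "\"").mkString(", ")}]"""
}
```

So `Cmd("FROM", "openjdk:11")` would render as `FROM openjdk:11`, while `ExecCmd("ENTRYPOINT", "/opt/docker/bin/app")` would render as `ENTRYPOINT ["/opt/docker/bin/app"]`.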
Exercise: 1-1-project-set-up
The system architecture is described in `src/docker-compose.yml`.
In the freshly cloned workspace, follow the instructions below.
To set up a ZooKeeper cluster using Docker Compose, follow these steps:
- Open `src/docker-compose.yml`
- You should see the following code:

```yaml
version: "3.5"
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888
  zookeeper-2:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888
  zookeeper-3:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: ???
      ZOOKEEPER_CLIENT_PORT: ???
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: ???
```
This Docker Compose configuration sets up a ZooKeeper cluster with three nodes: `zookeeper-1`, `zookeeper-2`, and `zookeeper-3`.
Deep-dive: Explanation of the zookeeper docker-compose services
Let's explain the key elements:
- `version: "3.5"`: Specifies the Docker Compose file version.
- `services`: Defines the list of services to be created.
- `zookeeper-1`, `zookeeper-2`, `zookeeper-3`: Each service represents an individual ZooKeeper node. The numbers at the end of the service names (`-1`, `-2`, `-3`) distinguish between the different nodes.
- `image: confluentinc/cp-zookeeper:latest`: Specifies the Docker image to be used for the ZooKeeper service. In this case, it uses the latest version of the `confluentinc/cp-zookeeper` image provided by Confluent.
- `environment`: Sets environment variables for the ZooKeeper service.
  - `ZOOKEEPER_SERVER_ID`: Specifies the unique ID for the ZooKeeper node. Each node in the cluster must have a unique ID.
  - `ZOOKEEPER_CLIENT_PORT`: Defines the port number on which ZooKeeper listens for client connections.
  - `ZOOKEEPER_TICK_TIME`: Sets the length of a single tick, which is the basic time unit used by ZooKeeper.
  - `ZOOKEEPER_INIT_LIMIT`: Defines the time (in ticks) that ZooKeeper servers can take to connect and synchronize with each other.
  - `ZOOKEEPER_SYNC_LIMIT`: Specifies the maximum time (in ticks) that ZooKeeper servers can be out of sync with each other.
  - `ZOOKEEPER_SERVERS`: Sets the list of ZooKeeper servers in the format `host1:peerPort1:electionPort1;host2:peerPort2:electionPort2;...`. This configuration helps ZooKeeper nodes discover and connect to each other in the cluster.
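To make the `ZOOKEEPER_SERVERS` format concrete, here is a small hypothetical parser for the `host:peerPort:electionPort` entries (an illustration only, not part of the workshop code):

```scala
// Parse "host:peerPort:electionPort;host:..." into structured entries.
final case class ZkServer(host: String, peerPort: Int, electionPort: Int)

def parseZookeeperServers(spec: String): Seq[ZkServer] =
  spec.split(";").toSeq.map { entry =>
    val Array(host, peer, election) = entry.split(":")
    ZkServer(host, peer.toInt, election.toInt)
  }
```

For example, the value used by `zookeeper-1` and `zookeeper-2` above parses into three entries, one per node, each with its peer-communication and leader-election ports.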
You need to fill in the ZOOKEEPER_SERVER_ID, ZOOKEEPER_CLIENT_PORT, and ZOOKEEPER_SERVERS environment variables for the zookeeper-3 service to complete the configuration. Use the other zookeeper services as examples to help you fill it in. After filling it in, you can check that the zookeeper cluster is set up appropriately by saving the file and running:
```shell
docker-compose up
```

in a terminal window in your clone's `src/` directory that contains the docker-compose.yml.
These commands launch the ZooKeeper containers according to the configuration specified in the `docker-compose.yml` file, and display the logs of the zookeeper node you filled out in the exercise. In the logs you should see no errors, and eventually an info log like the following (it may take a few moments):

```
INFO Successfully connected to leader, using address: zookeeper-3/172.19.0.4:42888
```
Congratulations, you've set up your zookeeper cluster. Stop following the logs with (<Cmd|Ctrl> C). You can now tear it down with:
```shell
docker-compose down
```

in the terminal and move on to setting up kafka.
Exercise: 1-3-docker-kafka
To set up a Kafka cluster using Docker Compose, continue from the previous step and follow these additional steps:
- Take a look at the following code below the ZooKeeper services in the `src/docker-compose.yml` file in the workspace:

```yaml
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181,zookeeper-2:2181,zookeeper-2:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092
      KAFKA_JMX_PORT: 9998
  kafka-2:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181,zookeeper-2:2181,zookeeper-2:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-2:9092
      KAFKA_JMX_PORT: 9998
  kafka-3:
    image: ???
    depends_on:
      ???
    environment:
      KAFKA_BROKER_ID: ???
      KAFKA_ZOOKEEPER_CONNECT: ???
      KAFKA_ADVERTISED_LISTENERS: ???
      KAFKA_JMX_PORT: 9998
```
This snippet defines three Kafka services, `kafka-1`, `kafka-2`, and `kafka-3`, within the Docker Compose configuration.
Deep-dive: Explanation of the kafka docker-compose services
Let's break down the key elements:

`kafka-1` and `kafka-2`:

- `image: confluentinc/cp-kafka:latest`: Specifies the Docker image to be used for the Kafka service. In this case, it uses the latest version of the `confluentinc/cp-kafka` image provided by Confluent.
- `depends_on`: Specifies the services that this Kafka service depends on. In this case, it depends on `zookeeper-1`, `zookeeper-2`, and `zookeeper-3`, ensuring that the ZooKeeper cluster is started before the Kafka service.
- `environment`: Sets environment variables for the Kafka service.
  - `KAFKA_BROKER_ID`: Specifies the unique ID for the Kafka broker. Each broker in the cluster must have a unique ID.
  - `KAFKA_ZOOKEEPER_CONNECT`: Defines the ZooKeeper connection string for the Kafka broker. It specifies the addresses of the ZooKeeper nodes that Kafka will connect to for coordination.
  - `KAFKA_ADVERTISED_LISTENERS`: Specifies the listener configuration for the Kafka broker. In this case, it sets the listener to the `PLAINTEXT` protocol and defines the advertised listener address as `kafka-1:9092` or `kafka-2:9092`. Clients will use these addresses to connect to the respective Kafka brokers.
  - `KAFKA_JMX_PORT`: Defines the JMX (Java Management Extensions) port for monitoring and managing the Kafka broker.

`kafka-3`: Similar to `kafka-1` and `kafka-2`, this section defines the configuration for the `kafka-3` service. In the `kafka-3` service, the `image`, `depends_on`, and `environment` variable values are left blank (`???`) and need to be filled in.

To configure the `kafka-3` service correctly, you need to provide the appropriate values for `image`, `depends_on`, `KAFKA_BROKER_ID`, `KAFKA_ZOOKEEPER_CONNECT`, and `KAFKA_ADVERTISED_LISTENERS` based on the values for `kafka-1` and `kafka-2`.
After filling it in, you can check that the kafka cluster is set up appropriately by saving the file and running:
```shell
docker-compose up -d
```

in a terminal window in your clone's `src/` directory that contains the docker-compose.yml.
These commands launch the ZooKeeper and Kafka containers in the background (`-d` flag) according to the configuration specified in the `docker-compose.yml` file. To display the logs of the Kafka node you filled out in the exercise, we first need to find the running container with:

```shell
docker ps
```

```
CONTAINER ID   IMAGE                              COMMAND                  CREATED         STATUS         PORTS                          NAMES
aee0555ce384   confluentinc/cp-kafka:latest       "/etc/confluent/dock…"   8 seconds ago   Up 3 seconds   9092/tcp                       src-kafka-2-1
c24d1c9a8352   confluentinc/cp-kafka:latest       "/etc/confluent/dock…"   8 seconds ago   Up 3 seconds   9092/tcp                       src-kafka-1-1
1a992f27c648   confluentinc/cp-kafka:latest       "/etc/confluent/dock…"   8 seconds ago   Up 3 seconds   9092/tcp                       src-kafka-3-1
94afd6d500ba   confluentinc/cp-zookeeper:latest   "/etc/confluent/dock…"   8 seconds ago   Up 5 seconds   2181/tcp, 2888/tcp, 3888/tcp   src-zookeeper-1-1
a801ae8b6c31   confluentinc/cp-zookeeper:latest   "/etc/confluent/dock…"   8 seconds ago   Up 5 seconds   2181/tcp, 2888/tcp, 3888/tcp   src-zookeeper-3-1
1dd0656cad5b   confluentinc/cp-zookeeper:latest   "/etc/confluent/dock…"   8 seconds ago   Up 5 seconds   2181/tcp, 2888/tcp, 3888/tcp   src-zookeeper-2-1
```
And then read the logs with the `-f` flag to keep following the stream:

```shell
docker logs -f 1a992f27c648
```
In the logs, you should see no errors and eventually see an info log like the following (it may take a few moments):
```
INFO [KafkaServer id=<broker_id>] started (kafka.server.KafkaServer)
```
Congratulations, you've set up your kafka cluster. Stop following the logs with (<Cmd|Ctrl> C). You can now tear it down with:
```shell
docker-compose down
```

in the terminal and move on to adding the rest of the docker-compose configuration.
Exercise: 1-5-other-services-exercise
To set up the kafka-magic monitor, the schema-registry, and the game client and server in the docker-compose:
- Take a look at the following code below the kafka services in the `src/docker-compose.yml` file in the workspace:

```yaml
  schema-registry:
    image: "confluentinc/cp-schema-registry:6.2.0"
    hostname: schema-registry
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - kafka-1
      - kafka-2
      - kafka-3
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'PLAINTEXT://kafka-1:9092,PLAINTEXT://kafka-2:9092,???'
  server:
    image: "127.0.0.1:${SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT?Cannot find host port}/scaladays-workshop-2023-server:latest"
    platform: linux/amd64
    hostname: scaladays-workshop-2023-server
    restart: always
    environment:
      ROOT_LOG_LEVEL: ERROR
    ports:
      - "28082:8082"
      - "28083:8085"
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - kafka-1
      - kafka-2
      - kafka-3
      - schema-registry
  magic:
    image: "digitsy/kafka-magic"
    ports:
      - "29080:80"
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - kafka-1
      - kafka-2
      - kafka-3
      - schema-registry
    volumes:
      - myConfig:/config
    environment:
      KMAGIC_ALLOW_TOPIC_DELETE: "true"
      KMAGIC_ALLOW_SCHEMA_DELETE: "true"
      KMAGIC_CONFIG_STORE_TYPE: "file"
      KMAGIC_CONFIG_STORE_CONNECTION: "Data Source=/config/KafkaMagicConfig.db;"
      KMAGIC_CONFIG_ENCRYPTION_KEY: "ENTER_YOUR_KEY_HERE"
  client:
    image: "lipanski/docker-static-website:latest"
    ports:
      - 23000:3000
    depends_on:
      - server
    volumes:
      - ${SCALADAYS_CLIENT_DIST?Cannot find scaladays client distribution}:/home/static
      - ${SCALADAYS_CLIENT_DIST?Cannot find scaladays client distribution}/httpd.conf:/home/static/dist/httpd.conf
volumes:
  myConfig:
```
Deep-dive: Explanation of the remaining docker-compose services
Let's examine the above snippet.

`schema-registry`:

- `image: "confluentinc/cp-schema-registry:6.2.0"`: Specifies the Docker image to be used for the Schema Registry service. In this case, it uses version 6.2.0 of the `confluentinc/cp-schema-registry` image provided by Confluent.
- `hostname: schema-registry`: Sets the hostname for the Schema Registry container.
- `depends_on`: Specifies the services that the Schema Registry service depends on. It requires the ZooKeeper cluster (`zookeeper-1`, `zookeeper-2`, `zookeeper-3`) and the Kafka brokers (`kafka-1`, `kafka-2`, `kafka-3`) to be running before starting the Schema Registry service.
- `ports`: Maps the container's port 8081 to the host's port 8081, allowing access to the Schema Registry service from the host machine.
- `environment`: Sets environment variables for the Schema Registry service.
  - `SCHEMA_REGISTRY_HOST_NAME`: Specifies the hostname for the Schema Registry service.
  - `SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS`: Defines the bootstrap servers for the Schema Registry to connect to Kafka. In this case, it provides the addresses of `kafka-1:9092` and `kafka-2:9092`. Replace `???` with the appropriate address for the third Kafka broker.

`server`:

- `image: "127.0.0.1:${SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT?Cannot find host port}/scaladays-workshop-2023-server:latest"`: Specifies the Docker image to be used for the server service. The image location is determined using the `${SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT}` environment variable to fetch the host port.
- `platform: linux/amd64`: Specifies the platform (architecture) for the server container.
- `hostname: scaladays-workshop-2023-server`: Sets the hostname for the server container.
- `restart: always`: Configures the container to automatically restart if it stops for any reason.
- `environment`: Sets environment variables for the server service.
  - `ROOT_LOG_LEVEL`: Specifies the log level for the server application. In this case, it is set to `ERROR`.
- `ports`: Maps the container's ports 8082 and 8085 to the host's ports 28082 and 28083, respectively, allowing access to the server service from the host machine.
- `depends_on`: Specifies the services that the server service depends on. It requires the ZooKeeper cluster, Kafka brokers, and Schema Registry to be running before starting the server service.

`magic`:

- `image: "digitsy/kafka-magic"`: Specifies the Docker image to be used for the Magic service. In this case, it uses the `digitsy/kafka-magic` image.
- `ports`: Maps the container's port 80 to the host's port 29080, allowing access to the Magic service from the host machine.
- `depends_on`: Specifies the services that the Magic service depends on. It requires the ZooKeeper cluster, Kafka brokers, and Schema Registry to be running before starting the Magic service.
- `volumes`: Mounts the volume `myConfig` to the `/config` directory within the container.
- `environment`: Sets environment variables for the Magic service, including configuration options related to topics and schemas.

`client`:

- `image: "lipanski/docker-static-website:latest"`: Specifies the Docker image to be used for the client service. In this case, it uses the `lipanski/docker-static-website` image.
- `ports`: Maps the container's port 3000 to the host's port 23000, allowing access to the client service from the host machine.
- `depends_on`: Specifies that the client service depends on the server service to be running before starting.

`volumes`:

- `myConfig`: Defines a named volume called `myConfig` that can be used for persistent data storage.
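The `server` and `client` services rely on compose's `${VAR?message}` interpolation, which fails fast with `message` when `VAR` is unset. A hypothetical Scala rendering of that rule, for illustration only:

```scala
// Mimics docker-compose's "${VAR?message}" required-variable interpolation:
// yields the value when the variable is set, otherwise an error with the message.
def requiredVar(name: String, env: Map[String, String], message: String): Either[String, String] =
  env.get(name).toRight(s"$name: $message")
```

This is why `docker-compose` refuses to start the stack unless `SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT` and `SCALADAYS_CLIENT_DIST` are provided, which the sbt `dockerComposeUp` task does for you.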
Fill in the blank in the `schema-registry` service's bootstrap servers based on the kafka services above. You can check that everything is set up appropriately by saving the file and running:
```shell
docker-compose up -d
```

in a terminal window in your clone's `src/` directory that contains the docker-compose.yml, then opening http://localhost:29080 in your browser and following these steps:
- Open Kafka Magic.
- Click Register New.
- Enter `Scaladays Workshop` in the `Cluster Name` input.
- Enter `kafka-1:9092,kafka-2:9092,kafka-3:9092` in the `Bootstrap Servers` input.
- Click `Schema Registry`.
- Enter `http://schema-registry:8081` in the `Schema Registry URL` input.
- Toggle `Auto-register schemas` to true.
- Click `Verify`. An alert will show success. Close it.
- Click Register Connection. Your cluster is registered.
Congratulations, you've set up your infrastructure. You can now tear it down with:
```shell
docker-compose down
```

and move on to the next step.