
1. Infrastructure


Architecture

Communication

[Communication diagram]

SBT Project Structure

[SBT project structure diagram]

  • The "scaladays-workshop-2023" project depends on the "server" and "tttClient" projects.
  • The "server" project depends on the "commonJVM" project.
  • The "tttClient" project depends on the "commonJS" project.

This means that changes in "commonJVM" will affect "server", and changes in "commonJS" will affect "tttClient". Both "server" and "tttClient" are aggregated by the "scaladays-workshop-2023" root project, so changes in these projects will affect the main build.
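Expressed as a minimal build.sbt sketch (project names match the list above; all other settings are omitted, and the crossProject assumes the sbt-crossproject and Scala.js plugins are on the plugin classpath):

    // Minimal sketch of the project graph; the real definitions carry many more settings.
    lazy val common = crossProject(JVMPlatform, JSPlatform)
      .in(file("common")) // yields the commonJVM and commonJS subprojects

    lazy val server = project
      .dependsOn(common.jvm) // commonJVM

    lazy val tttClient = project
      .enablePlugins(ScalaJSPlugin)
      .dependsOn(common.js) // commonJS

    lazy val `scaladays-workshop-2023` = (project in file("."))
      .aggregate(server, tttClient)
      .dependsOn(server, tttClient)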

Deep-dive: Explanation of the customisations in the build.sbt

Intermediate/Advanced SBT concepts and documentation

  1. CrossProject: A crossProject is a project that is built for multiple platforms. In our case, the JVM and JavaScript platforms. In this build.sbt, common is a crossProject. Documentation

  2. ScalaJSPlugin: This plugin enables the compilation of Scala code to JavaScript. It's used in the tttClient project. Documentation

  3. DockerPlugin: This plugin is used to create Docker images for the application. It's used in the server project. Documentation

  4. JavaAppPackaging: This plugin is part of the sbt-native-packager and is used to package JVM applications. It's used in the server project. Documentation

  5. UniversalPlugin: This plugin is part of the sbt-native-packager and is used to create archives containing all project files (including source files). It's used in the tttClient project. Documentation

  6. aggregate: This method makes tasks that run on the aggregating project also run on all the aggregated projects. In this build.sbt, the scaladays-workshop-2023 project aggregates the server and tttClient projects. Documentation

  7. dependsOn: This method declares that a project depends on other projects. In this build.sbt, the scaladays-workshop-2023 project depends on the server and tttClient projects. Documentation

  8. Docker / dockerCommands: This setting allows you to customize the Dockerfile that is generated by the DockerPlugin. Documentation

  9. ScalaJSLinkerConfig: This setting allows you to configure the Scala.js linker, which is responsible for linking your Scala.js code into a single JavaScript file. Documentation
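As a concrete illustration of item 9, a linker option is typically set like this (a minimal sketch; ModuleKind.ESModule is illustrative and not necessarily the kind this build uses):

    // Illustrative only: have the Scala.js linker emit an ES module.
    // The ModuleKind actually used by tttClient is defined in build.sbt.
    tttClient / scalaJSLinkerConfig ~= {
      _.withModuleKind(ModuleKind.ESModule)
    }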

Custom Sbt tasks

  1. dockerComposeCommand: This task is used to determine the path to the docker-compose command on the system. It tries to find the command using the which command on Unix-like systems or the where command on Windows systems. If the command is not found, an exception is thrown.

    dockerComposeCommand := {
      Paths
        .get(Try("which docker-compose".!!).recoverWith { case _ =>
          Try("where docker-compose".!!)
        }.get)
        .toFile
    }
  2. dockerRegistryHostPort: This task is used to get the Docker registry host port from an environment variable named SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT. If the environment variable is not set, it defaults to 5000.

    dockerRegistryHostPort := {
      Properties
        .envOrElse("SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT", "5000")
        .toInt
    }
  3. dockerComposeFile: This task is used to specify the path to the docker-compose.yml file, which is located in the src directory.

    dockerComposeFile := (Compile / baseDirectory).value / "src" / "docker-compose.yml"
  4. dockerComposeUp: This task is used to start the Docker Compose environment. It runs several tasks sequentially:

    • It sets up Docker BuildX in the server project.
    • It publishes the server Docker image locally.
    • It stages the tttClient project.
    • It logs the Docker Compose file after environment variable interpolation.
    • It starts the Docker Compose environment in detached mode.
    dockerComposeUp := Def
      .sequential(
        server / Docker / setupBuildX,
        server / Docker / publishLocal,
        tttClient / Universal / stage,
        Def.task(
          streams.value.log.info(
            "Docker compose file after environment variable interpolation:"
          )
        ),
        Def.task(
          streams.value.log.info(
            Process(
              Seq(
                "docker-compose",
                "-f",
                s"${dockerComposeFile.value.getAbsolutePath()}",
                "config"
              ),
              None,
              "SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT" -> s"${dockerRegistryHostPort.value}",
              "SCALADAYS_CLIENT_DIST" -> s"${(tttClient / Universal / stagingDirectory).value / "dist"}"
            ).!!
          )
        ),
        Def.task(
          streams.value.log.info(
            Process(
              Seq(
                "docker-compose",
                "-f",
                s"${dockerComposeFile.value.getAbsolutePath()}",
                "up",
                "-d"
              ),
              None,
              "SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT" -> s"${dockerRegistryHostPort.value}",
              "SCALADAYS_CLIENT_DIST" -> s"${(tttClient / Universal / stagingDirectory).value / "dist"}"
            ).!!
          )
        )
      )
      .value
  5. dockerComposeDown: This task is used to stop the Docker Compose environment and remove all images. It runs the docker-compose down --rmi all command.

    dockerComposeDown := {
      val log = streams.value.log
      log.info(
        Process(
          Seq(
            "docker-compose",
            "-f",
            s"${dockerComposeFile.value.getAbsolutePath()}",
            "down",
            "--rmi",
            "all"
          ),
          None,
          "SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT" -> s"${dockerRegistryHostPort.value}",
          "SCALADAYS_CLIENT_DIST" -> s"${(tttClient / Universal / stagingDirectory).value / "dist"}"
        ).!!
      )
    }
  6. generateConfigToml: This task in the server project generates a config.toml file in the managed resources directory from a given Docker registry host port. (A hypothetical sketch of the helper it calls appears after this list.)

    generateConfigToml := {
        generateConfigTomlInManagedResourcesFrom(
          (
            (Docker / dockerRegistryHostPort).value,
            (Compile / resourceManaged).value
          )
        )
    }
  7. Docker / setupBuildX: This task in the server project sets up Docker BuildX. It runs several commands to install binfmt, stop and remove the Docker registry, and set up Docker BuildX.

    Docker / setupBuildX := {
        (Compile / resourceGenerators).value
        (Docker / setupBuildX).previous.filter(_ == "Success").getOrElse {
          val log = streams.value.log
          val dockerCommand =
            s"${(Docker / dockerExecCommand).value.mkString("")}"
          Try {
            val binFmtInstall =
              s"$dockerCommand run --privileged --rm tonistiigi/binfmt --install all"
            log.info(
              s"Setting up docker buildx appropriately: ${binFmtInstall.!!}"
            )
          }.flatMap { _ =>
            val stopRegistry = s"$dockerCommand container stop registry"
            Try(log.info(s"Stopping docker registry: ${stopRegistry.!!}"))
              .recoverWith { case _ => Try("Exception") }
          }.flatMap { _ =>
            val removeRegistry = s"$dockerCommand container rm registry"
            Try(log.info(s"removing docker registry: ${removeRegistry.!!}"))
              .recoverWith { case _ =>
                Try("Exception")
              }
          }.flatMap { _ =>
            val buildxSetup =
              s"$dockerCommand buildx create --config ${(Compile / resourceManaged).value / "docker" / "registry" / "config.toml"} --driver-opt network=host --use"
            Try(
              log.info(
                s"Setting up docker buildx appropriately: ${buildxSetup.!!}"
              )
            )
          }.recover { case e: Exception =>
            log.error(s"${e.getMessage}")
            throw e
          }.map(_ => "Success")
            .toOption
            .getOrElse("Failure")
        }
    }
  8. Universal / stage: This task in the tttClient project stages the project and then runs npm i --dev and npm run build in the staging directory. Referencing (Universal / stage).value inside the := redefinition refers to the previously defined stage task, so the npm steps run after the default staging completes.

    Universal / stage := {
        val staging = (Universal / stage).value
        Process("npm i --dev", (Universal / stagingDirectory).value).!!
        Process("npm run build", (Universal / stagingDirectory).value).!!
        staging
      }
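Returning to item 6: generateConfigTomlInManagedResourcesFrom is a helper defined under the project/ directory. A purely hypothetical sketch of its shape, inferred from the buildx create --config path used in item 7, might look like this (the real contents may differ):

    import sbt._

    // Hypothetical sketch only; the real helper lives in the project/ directory.
    // It plausibly writes a BuildKit config marking the local registry as plain HTTP.
    def generateConfigTomlInManagedResourcesFrom(args: (Int, File)): File = {
      val (registryPort, resourceManagedDir) = args
      val configToml = resourceManagedDir / "docker" / "registry" / "config.toml"
      IO.write(
        configToml,
        s"""[registry."127.0.0.1:$registryPort"]
           |  http = true
           |""".stripMargin
      )
      configToml
    }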

A word about Scala.js packaging for client resources without sbt-web

The Universal / mappings setting in the tttClient project is used to specify the files that should be included in the package when the project is packaged. It's a sequence of tuples, where each tuple consists of a file and its path in the package.

There are three Universal / mappings settings in the tttClient project:

  1. Mapped Resources: This setting maps non-SCSS resources in the resources directory to the root of the package.

    Universal / mappings ++= {
      val mappedResources = (Compile / resources).value
      mappedResources.filterNot(_.getName() == "custom.scss").map { r =>
        r -> s"${r.getName()}"
      }
    }
  2. Mapped SCSS Resources: This setting maps SCSS resources in the resources directory to the scss directory of the package.

    Universal / mappings ++= {
      val mappedResources = (Compile / resources).value
      mappedResources.filter(_.getName() == "custom.scss").map { r =>
        r -> s"scss/${r.getName()}"
      }
    }
  3. Mapped Scala.js Linked Files: This setting maps all linked files from the Scala.js fastLinkJS task to the lib directory of the package.

    Universal / mappings ++= {
        val log = streams.value.log
        val report = (Compile / fastLinkJS).value
        val outputDirectory =
          (Compile / fastLinkJS / scalaJSLinkerOutputDirectory).value
        report.data.publicModules.map { m =>
          log.info(s"moduleId: ${m.moduleID}")
          (outputDirectory / m.jsFileName) -> s"lib/${m.jsFileName}"
        }.toSeq
      }

These settings ensure that all the necessary files are included in the package when the project is packaged.
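Under these mappings, the staged package ends up shaped roughly like this (index.html and main.js are illustrative file names; custom.scss comes from the mappings above):

    target/universal/stage/
    ├── index.html        # non-SCSS resources mapped to the package root
    ├── scss/
    │   └── custom.scss   # SCSS resources mapped under scss/
    └── lib/
        └── main.js       # Scala.js linker output mapped under lib/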

Sbt and Docker

Why all the Docker complexity in the build?

As will be discussed in a later section on docker-compose and the runtime architecture of the workshop project, we are running Kafka as our datastore, and the Confluent schema-registry as our serialization format store. This means that we need to wait for Kafka and the schema-registry to be available before we can start our server project. docker-compose allows us to express and enforce this condition with a single user command.
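In docker-compose terms, this ordering is just a depends_on clause. A simplified sketch (the full file appears later on this page):

    services:
      server:
        depends_on:
          - kafka-1
          - schema-registry

Note that depends_on only controls start order; it does not wait for a service to be ready, which is why the server image also gets a wait-for-it.sh entrypoint in the deep-dive below.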

We are also using library dependencies to communicate with Kafka that pull in a native-binary JNI dependency that is not available for Apple silicon CPUs like the M1. The JNI dependency is a transitive dependency of one of our library dependencies. That library has not updated to the latest version of the JNI library, so we need a different solution. Docker provides an isolated process, but a Docker container is not a full virtual machine: anything running in a container uses the host system's native runtime platform by default.

Therefore, we need to run the container in a virtual machine capable of emulating a linux/amd64 platform so that the native dependency will execute correctly.

Finally, we must build the image in the emulated linux/amd64 environment so that it runs under linux/amd64 emulation, enabling our JNI Kafka dependency to execute on otherwise incompatible machines.

Deep-dive: Customizing the native packager Docker image
server / dockerCommands:

The dockerCommands setting in the server project defines the sequence of Dockerfile instructions generated when the Docker image for the server project is built. It's a sequence of Cmd and ExecCmd objects, each representing one Dockerfile instruction.

Here's a breakdown of the dockerCommands setting:

  1. Base Image Command for Stage0 and Mainstage: These commands set the base image for the Docker image. updatedBy finds the command that contains "FROM", "openjdk:8" and "stage0" in the original default dockerCommands and swaps it for one that uses the build's Docker / dockerBaseImage. This is necessary because, though the Docker plugin allows you to specify a dockerBaseImage, it doesn't actually use that image in the Dockerfile generated for the project. For the definition of updatedBy and baseImageCommand, see project/sbtExtensions.scala.

    dockerCommands := dockerCommands.value
      .updatedBy(
        baseImageCommand((Docker / dockerBaseImage).value).forStage("stage0"),
        c =>
          c match {
            case Cmd("FROM", args @ _*) =>
              args.contains("openjdk:8") && args.contains("stage0")
            case _ => false
          }
      )
      .updatedBy(
        baseImageCommand((Docker / dockerBaseImage).value).forStage("mainstage"),
        c =>
          c match {
            case Cmd("FROM", args @ _*) =>
              args.contains("openjdk:8") && args.contains("mainstage")
            case _ => false
          }
      )
  2. Add Command: This command adds the wait-for-it.sh script from a URL to the Docker image. The script is used to wait for a service to be available. insertAt inserts this command at index 6 of the updated default commands Seq. For the definition of insertAt see project/sbtExtensions.scala.

    dockerCommands.value.insertAt(
      6,
      Cmd(
        "ADD",
        "--chmod=u=rX,g=rX",
        "https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh",
        "/4/opt/docker/bin/wait-for-it.sh"
      )
    )
  3. Run Command: This command runs the stat command to display information about the Docker image's /4/opt/docker directory.

    dockerCommands.value.insertAt(10, Cmd("RUN", "stat", "/4/opt/docker"))
  4. Run Command to Change Permissions: This command changes the permissions of the wait-for-it.sh script to make it executable after copying from the original build stage.

    dockerCommands.value.insertAt(20, ExecCmd("RUN", "chmod", "+x", "/opt/docker/bin/wait-for-it.sh"))
  5. Entrypoint Command: This command sets the entrypoint for the Docker image. The entrypoint is a script that waits for the schema-registry service to be available before running the scaladays-workshop-2023-server application.

    dockerCommands.value.updatedBy(
      ExecCmd(
        "ENTRYPOINT",
        "/opt/docker/bin/wait-for-it.sh",
        "schema-registry:8081",
        "--timeout=30",
        "--strict",
        "--",
        "/opt/docker/bin/scaladays-workshop-2023-server",
        "-verbose"
      ),
      c =>
        c match {
          case ExecCmd("ENTRYPOINT", _) => true
          case _                        => false
        }
    )

These commands ensure that the Docker image is built correctly and that the application starts correctly when the Docker image is run.

In the context of defining Docker commands in the dockerCommands setting, there are two types of commands used: ExecCmd and Cmd.

ExecCmd:

  • The ExecCmd type represents a Dockerfile instruction whose arguments are rendered in exec (JSON-array) form, for example ENTRYPOINT ["cmd", "arg"].
  • It is used for instructions that execute a program, such as ENTRYPOINT, CMD, and RUN.
  • Because the arguments are rendered as an array, Docker runs the program directly, without wrapping it in a shell.
  • More information about ExecCmd can be found in the sbt-native-packager documentation.

Cmd:

  • The Cmd type represents a plain Dockerfile instruction rendered as a single line of text, for example FROM openjdk:8 as stage0 or ADD src dest.
  • It is used for instructions that take free-form arguments, such as FROM, ADD, ENV, and USER.
  • The arguments are joined with spaces exactly as given.
  • More information about Cmd can be found in the sbt-native-packager documentation.
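A minimal sketch of how the two render into Dockerfile lines (the FROM image is illustrative):

    import com.typesafe.sbt.packager.docker.{Cmd, ExecCmd}

    // Cmd joins its arguments into a plain instruction line:
    Cmd("FROM", "eclipse-temurin:17", "as", "stage0")
    // renders as: FROM eclipse-temurin:17 as stage0

    // ExecCmd renders its arguments in exec (JSON-array) form:
    ExecCmd("ENTRYPOINT", "/opt/docker/bin/wait-for-it.sh", "schema-registry:8081")
    // renders as: ENTRYPOINT ["/opt/docker/bin/wait-for-it.sh", "schema-registry:8081"]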

When defining Docker commands in the dockerCommands setting, you can mix Cmd and ExecCmd to produce exactly the Dockerfile instructions you need at each stage of the image build.

System Architecture

⚠️ Check out this tag: 1-1-project-set-up

The system architecture is described in src/docker-compose.yml.

In the freshly cloned workspace, follow the instructions below:

Setting up a ZooKeeper Cluster

To set up a ZooKeeper cluster using Docker Compose, follow these steps:

  1. Open src/docker-compose.yml
  2. You should see the following code:
version: "3.5"
services:
  zookeeper-1:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 1
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888
  zookeeper-2:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: 2
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: zookeeper-1:22888:23888;zookeeper-2:32888:33888;zookeeper-3:42888:43888
  zookeeper-3:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_SERVER_ID: ???
      ZOOKEEPER_CLIENT_PORT: ???
      ZOOKEEPER_TICK_TIME: 2000
      ZOOKEEPER_INIT_LIMIT: 5
      ZOOKEEPER_SYNC_LIMIT: 2
      ZOOKEEPER_SERVERS: ???

Exercise 1

This Docker Compose configuration sets up a ZooKeeper cluster with three nodes: zookeeper-1, zookeeper-2, and zookeeper-3.

Deep-dive: Explanation of the zookeeper docker-compose services

Let's explain the key elements:

version: "3.5": Specifies the Docker Compose file version.

services: Defines the list of services to be created.

zookeeper-1, zookeeper-2, zookeeper-3: Each service represents an individual ZooKeeper node. The numbers at the end of the service names (-1, -2, -3) distinguish between the different nodes.

image: confluentinc/cp-zookeeper:latest: Specifies the Docker image to be used for the ZooKeeper service. In this case, it uses the latest version of the confluentinc/cp-zookeeper image provided by Confluent.

environment: Sets environment variables for the ZooKeeper service.

ZOOKEEPER_SERVER_ID: Specifies the unique ID for the ZooKeeper node. Each node in the cluster must have a unique ID.

ZOOKEEPER_CLIENT_PORT: Defines the port number on which ZooKeeper listens for client connections.

ZOOKEEPER_TICK_TIME: Sets the length of a single tick, which is the basic time unit used by ZooKeeper.

ZOOKEEPER_INIT_LIMIT: Defines the time (in ticks) that ZooKeeper servers can take to connect and synchronize with each other.

ZOOKEEPER_SYNC_LIMIT: Specifies the maximum time (in ticks) that ZooKeeper servers can be out of sync with each other.

ZOOKEEPER_SERVERS: Sets the list of ZooKeeper servers in the format host1:peerPort1:electionPort1;host2:peerPort2:electionPort2;.... This configuration helps ZooKeeper nodes discover and connect to each other in the cluster.
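For example, the entry for the first node in the services above decomposes as follows:

    # ZOOKEEPER_SERVERS entry: zookeeper-1:22888:23888
    #   zookeeper-1 -> hostname of the node (the compose service name)
    #   22888       -> peer (quorum) communication port
    #   23888       -> leader-election port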

You need to fill in the ZOOKEEPER_SERVER_ID, ZOOKEEPER_CLIENT_PORT, and ZOOKEEPER_SERVERS environment variables for the zookeeper-3 service to complete the configuration. Use the other zookeeper services as examples to help you fill it in. After filling it in, you can check that the ZooKeeper cluster is set up correctly by saving the file and running:

docker-compose up

in a terminal window in your clone's src/ directory that contains the docker-compose.yml.

This command launches the ZooKeeper containers according to the configuration specified in the docker-compose.yml file and displays their logs. In the logs you should see no errors and, eventually, an info log like the following (it may take a few moments):

INFO Successfully connected to leader, using address: zookeeper-3/172.19.0.4:42888

Congratulations, you've set up your ZooKeeper cluster. Stop following the logs with (<Cmd|Ctrl> C). You can now tear it down with:

docker-compose down

in the terminal and move on to setting up kafka.

Solution

1-2-zookeeper-exercise-solution (diff)

Setting up a Kafka Cluster

⚠️ Check out this tag: 1-3-docker-kafka

To set up a Kafka cluster using Docker Compose, continue from the previous step and follow these additional steps:

  1. Take a look at the following code below the ZooKeeper services in the 'src/docker-compose.yml' file in the workspace:
  kafka-1:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-1:9092
      KAFKA_JMX_PORT: 9998
  kafka-2:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
    environment:
      KAFKA_BROKER_ID: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeper-1:2181,zookeeper-2:2181,zookeeper-3:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka-2:9092
      KAFKA_JMX_PORT: 9998
  kafka-3:
    image: ???
    depends_on:
      ???
    environment:
      KAFKA_BROKER_ID: ???
      KAFKA_ZOOKEEPER_CONNECT: ???
      KAFKA_ADVERTISED_LISTENERS: ???
      KAFKA_JMX_PORT: 9998

Exercise 2

This snippet defines three Kafka services, kafka-1, kafka-2, and kafka-3, within the Docker Compose configuration.

Deep-dive: Explanation of the kafka docker-compose services

Let's break down the key elements:

kafka-1 and kafka-2:

image: confluentinc/cp-kafka:latest: Specifies the Docker image to be used for the Kafka service. In this case, it uses the latest version of the confluentinc/cp-kafka image provided by Confluent.

depends_on: Specifies the services that this Kafka service depends on. In this case, it depends on zookeeper-1, zookeeper-2, and zookeeper-3, ensuring that the ZooKeeper cluster is started before the Kafka service.

environment: Sets environment variables for the Kafka service.

KAFKA_BROKER_ID: Specifies the unique ID for the Kafka broker. Each broker in the cluster must have a unique ID.

KAFKA_ZOOKEEPER_CONNECT: Defines the ZooKeeper connection string for the Kafka broker. It specifies the addresses of the ZooKeeper nodes that Kafka will connect to for coordination.

KAFKA_ADVERTISED_LISTENERS: Specifies the listener configuration for the Kafka broker. In this case, it sets the listener to PLAINTEXT protocol and defines the advertised listener address as kafka-1:9092 or kafka-2:9092. Clients will use these addresses to connect to the respective Kafka brokers.

KAFKA_JMX_PORT: Defines the JMX (Java Management Extensions) port for monitoring and managing the Kafka broker.
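For example, the advertised listener for kafka-1 above breaks down as follows:

    # KAFKA_ADVERTISED_LISTENERS entry: PLAINTEXT://kafka-1:9092
    #   PLAINTEXT -> security protocol of the listener (no TLS or authentication)
    #   kafka-1   -> hostname clients should use to reach this broker
    #   9092      -> port the broker listens on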

kafka-3:

Similar to kafka-1 and kafka-2, this section defines the configuration for the kafka-3 service.

In the kafka-3 service, the image, depends_on, and environment variable values are left blank (???) and need to be filled in. To configure the kafka-3 service correctly, you need to provide the appropriate values for image, depends_on, KAFKA_BROKER_ID, KAFKA_ZOOKEEPER_CONNECT, and KAFKA_ADVERTISED_LISTENERS based on the values for kafka-1 and kafka-2.

After filling it in, you can check that the Kafka cluster is set up correctly by saving the file and running:

docker-compose up -d

in a terminal window in your clone's src/ directory that contains the docker-compose.yml.

This command launches the ZooKeeper and Kafka containers in the background (-d flag) according to the configuration specified in the docker-compose.yml file. To display the logs of the Kafka node you filled out in the exercise, we first need to find the running container with:

docker ps
CONTAINER ID   IMAGE                              COMMAND                  CREATED         STATUS         PORTS                          NAMES
aee0555ce384   confluentinc/cp-kafka:latest       "/etc/confluent/dock…"   8 seconds ago   Up 3 seconds   9092/tcp                       src-kafka-2-1
c24d1c9a8352   confluentinc/cp-kafka:latest       "/etc/confluent/dock…"   8 seconds ago   Up 3 seconds   9092/tcp                       src-kafka-1-1
1a992f27c648   confluentinc/cp-kafka:latest       "/etc/confluent/dock…"   8 seconds ago   Up 3 seconds   9092/tcp                       src-kafka-3-1
94afd6d500ba   confluentinc/cp-zookeeper:latest   "/etc/confluent/dock…"   8 seconds ago   Up 5 seconds   2181/tcp, 2888/tcp, 3888/tcp   src-zookeeper-1-1
a801ae8b6c31   confluentinc/cp-zookeeper:latest   "/etc/confluent/dock…"   8 seconds ago   Up 5 seconds   2181/tcp, 2888/tcp, 3888/tcp   src-zookeeper-3-1
1dd0656cad5b   confluentinc/cp-zookeeper:latest   "/etc/confluent/dock…"   8 seconds ago   Up 5 seconds   2181/tcp, 2888/tcp, 3888/tcp   src-zookeeper-2-1

And then read the logs with the -f flag to follow the stream:

docker logs -f 1a992f27c648

In the logs, you should see no errors and eventually see an info log like the following (it may take a few moments):

INFO [KafkaServer id=<broker_id>] started (kafka.server.KafkaServer)

Congratulations, you've set up your Kafka cluster. Stop following the logs with (<Cmd|Ctrl> C). You can now tear it down with:

docker-compose down

in the terminal and move on to adding the rest of the docker-compose configuration.

Solution

1-4-kafka-exercise-solution (diff)

Adding the schema-registry, kafka monitor, the game client and server

⚠️ Check out this tag: 1-5-other-services-exercise

To set up the kafka-magic monitor, schema-registry, and the game client and server in the docker-compose file:

  1. Take a look at the following code below the kafka services in the 'src/docker-compose.yml' file in the workspace:
  schema-registry:
    image: "confluentinc/cp-schema-registry:6.2.0"
    hostname: schema-registry
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - kafka-1
      - kafka-2
      - kafka-3
    ports:
      - "8081:8081"
    environment:
      SCHEMA_REGISTRY_HOST_NAME: schema-registry
      SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: 'PLAINTEXT://kafka-1:9092,PLAINTEXT://kafka-2:9092,???'
  server:
    image: "127.0.0.1:${SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT?Cannot find host port}/scaladays-workshop-2023-server:latest"
    platform: linux/amd64
    hostname: scaladays-workshop-2023-server
    restart: always
    environment:
      ROOT_LOG_LEVEL : ERROR
    ports:
      - "28082:8082"
      - "28083:8085"
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - kafka-1
      - kafka-2
      - kafka-3
      - schema-registry
  magic:
    image: "digitsy/kafka-magic"
    ports:
      - "29080:80"
    depends_on:
      - zookeeper-1
      - zookeeper-2
      - zookeeper-3
      - kafka-1
      - kafka-2
      - kafka-3
      - schema-registry
    volumes:
      - myConfig:/config
    environment:
      KMAGIC_ALLOW_TOPIC_DELETE: "true"
      KMAGIC_ALLOW_SCHEMA_DELETE: "true"
      KMAGIC_CONFIG_STORE_TYPE: "file"
      KMAGIC_CONFIG_STORE_CONNECTION: "Data Source=/config/KafkaMagicConfig.db;"
      KMAGIC_CONFIG_ENCRYPTION_KEY: "ENTER_YOUR_KEY_HERE"
  client:
    image: "lipanski/docker-static-website:latest"
    ports:
      - 23000:3000
    depends_on:
      - server
    volumes:
      - ${SCALADAYS_CLIENT_DIST?Cannot find scaladays client distribution}:/home/static
      - ${SCALADAYS_CLIENT_DIST?Cannot find scaladays client distribution}/httpd.conf:/home/static/dist/httpd.conf
volumes:
  myConfig:

Exercise 3

Deep-dive: Explanation of the remaining docker-compose services

Let's examine the above snippet.

schema-registry:

image: "confluentinc/cp-schema-registry:6.2.0": Specifies the Docker image to be used for the Schema Registry service. In this case, it uses version 6.2.0 of the confluentinc/cp-schema-registry image provided by Confluent.

hostname: schema-registry: Sets the hostname for the Schema Registry container.

depends_on: Specifies the services that the Schema Registry service depends on. It requires the ZooKeeper cluster (zookeeper-1, zookeeper-2, zookeeper-3) and the Kafka brokers (kafka-1, kafka-2, kafka-3) to be running before starting the Schema Registry service.

ports: Maps the container's port 8081 to the host's port 8081, allowing access to the Schema Registry service from the host machine.

environment: Sets environment variables for the Schema Registry service.

SCHEMA_REGISTRY_HOST_NAME: Specifies the hostname for the Schema Registry service.

SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS: Defines the bootstrap servers for the Schema Registry to connect to Kafka. In this case, it provides the addresses of kafka-1:9092 and kafka-2:9092. Replace ??? with the appropriate address for the third Kafka broker.

server:

image: "127.0.0.1:${SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT?Cannot find host port}/scaladays-workshop-2023-server:latest": Specifies the Docker image to be used for the server service. The image location is determined using an environment variable ${SCALADAYS_WORKSHOP_DOCKER_REGISTRY_HOST_PORT} to fetch the host port.

platform: linux/amd64: Specifies the platform (architecture) for the server container.

hostname: scaladays-workshop-2023-server: Sets the hostname for the server container.

restart: always: Configures the container to automatically restart if it stops for any reason.

environment: Sets environment variables for the server service.

ROOT_LOG_LEVEL: Specifies the log level for the server application. In this case, it is set to ERROR.

ports: Maps the container's ports 8082 and 8085 to the host's ports 28082 and 28083, respectively, allowing access to the server service from the host machine.

depends_on: Specifies the services that the server service depends on. It requires the ZooKeeper cluster, Kafka brokers, and Schema Registry to be running before starting the server service.

magic:

image: "digitsy/kafka-magic": Specifies the Docker image to be used for the Magic service. In this case, it uses the digitsy/kafka-magic image.

ports: Maps the container's port 80 to the host's port 29080, allowing access to the Magic service from the host machine.

depends_on: Specifies the services that the Magic service depends on. It requires the ZooKeeper cluster, Kafka brokers, and Schema Registry to be running before starting the Magic service.

volumes: Mounts the volume myConfig to the /config directory within the container.

environment: Sets environment variables for the Magic service, including configuration options related to topics and schemas.

client:

image: "lipanski/docker-static-website:latest": Specifies the Docker image to be used for the client service. In this case, it uses the lipanski/docker-static-website image.

ports: Maps the container's port 3000 to the host's port 23000, allowing access to the client service from the host machine.

depends_on: Specifies that the client service depends on the server service to be running before starting.

volumes:

myConfig: Defines a named volume called myConfig that can be used for persistent data storage.

Fill in the blank for the third Kafka bootstrap server based on the other Kafka servers. You can check that everything is set up correctly by saving the file, running:

docker-compose up -d

in a terminal window in your clone's src/ directory that contains the docker-compose.yml, opening http://localhost:29080 in your browser, and following these steps:

  1. Open Kafka Magic.
  2. Click Register New.
  3. Enter Scaladays Workshop in the Cluster Name input.
  4. Enter kafka-1:9092,kafka-2:9092,kafka-3:9092 in the Bootstrap Servers input.
  5. Click Schema Registry.
  6. Enter http://schema-registry:8081 in the Schema Registry URL input.
  7. Toggle Auto-register schemas to true.
  8. Click Verify. An alert will show success. Close it.
  9. Click Register Connection. Your cluster is registered.

Congratulations, you've set up your infrastructure. You can now tear it down with:

docker-compose down

And move on to the next step.

Solution

1-6-other-services-solution (diff)