diff --git a/VisualQnA/docker_compose/amd/gpu/rocm/README.md b/VisualQnA/docker_compose/amd/gpu/rocm/README.md
new file mode 100644
index 000000000..5cda5db6f
--- /dev/null
+++ b/VisualQnA/docker_compose/amd/gpu/rocm/README.md
@@ -0,0 +1,156 @@

# Build Mega Service of VisualQnA on AMD ROCm

This document outlines the deployment process for a VisualQnA application utilizing the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline on an AMD ROCm server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `lvm`. We will publish the Docker images to Docker Hub soon; this will simplify the deployment process for this service.

## 🚀 Build Docker Images

First of all, you need to build the Docker images locally.

### 1. Build LVM and NGINX Docker Images

```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build --no-cache -t opea/lvm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/lvms/tgi-llava/Dockerfile .
docker build --no-cache -t opea/nginx:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/nginx/Dockerfile .
```

### 2. Build MegaService Docker Image

To construct the Mega Service, we utilize the [GenAIComps](https://github.com/opea-project/GenAIComps.git) microservice pipeline within the `visualqna.py` Python script. Build the MegaService Docker image via the command below:

```bash
git clone https://github.com/opea-project/GenAIExamples.git
cd GenAIExamples/VisualQnA
docker build --no-cache -t opea/visualqna:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
```

### 3. Build UI Docker Image

Build the frontend Docker image via the command below:

```bash
cd GenAIExamples/VisualQnA/ui
docker build --no-cache -t opea/visualqna-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f docker/Dockerfile .
```

### 4. Pull TGI AMD ROCm Image

```bash
docker pull ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
```

Then run the command `docker images`; you should see the following five Docker images:

1. `ghcr.io/huggingface/text-generation-inference:2.4.1-rocm`
2. `opea/lvm-tgi:latest`
3. `opea/visualqna:latest`
4. `opea/visualqna-ui:latest`
5. `opea/nginx:latest`
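As a quick optional sanity check (a minimal sketch, not part of the official flow), you can filter the local image list for the five names above:

```bash
# Optional: confirm all five images are present locally before deploying.
docker images --format '{{.Repository}}:{{.Tag}}' | \
  grep -E 'text-generation-inference|lvm-tgi|visualqna|nginx'
```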
## 🚀 Start Microservices

### Setup Environment Variables

Since `compose.yaml` consumes some environment variables, you need to set them up in advance as shown below.

**Export the value of the public IP address of your ROCm server to the `host_ip` environment variable**

> Replace `External_Public_IP` below with the actual IPv4 value

```bash
export host_ip="External_Public_IP"
```

**Append the value of the public IP address to the no_proxy list**

```bash
export no_proxy="${no_proxy},${host_ip}"
```

```bash
export HOST_IP=${host_ip}
export VISUALQNA_TGI_SERVICE_PORT="8399"
export VISUALQNA_HUGGINGFACEHUB_API_TOKEN=${your_huggingface_api_token}
export VISUALQNA_CARD_ID="card1"
export VISUALQNA_RENDER_ID="renderD136"
export LVM_MODEL_ID="Xkev/Llama-3.2V-11B-cot"
export MODEL="llava-hf/llava-v1.6-mistral-7b-hf"
export LVM_ENDPOINT="http://${HOST_IP}:8399"
export LVM_SERVICE_PORT=9399
export MEGA_SERVICE_HOST_IP=${HOST_IP}
export LVM_SERVICE_HOST_IP=${HOST_IP}
export BACKEND_SERVICE_ENDPOINT="http://${HOST_IP}:18003/v1/visualqna"
export FRONTEND_SERVICE_IP=${HOST_IP}
export FRONTEND_SERVICE_PORT=18001
export BACKEND_SERVICE_NAME=visualqna
export BACKEND_SERVICE_IP=${HOST_IP}
export BACKEND_SERVICE_PORT=18002
export NGINX_PORT=18003
```

Note: Please replace `host_ip` with your external IP address; do not use localhost.

Note: You can use the `set_env.sh` file with the bash command (`. set_env.sh`) to set up the needed variables.

### Start all the services Docker Containers

> Before running the docker compose command, you need to be in the folder that has the docker compose yaml file

```bash
cd GenAIExamples/VisualQnA/docker_compose/amd/gpu/rocm
```

```bash
docker compose -f compose.yaml up -d
```

### Validate Microservices

Follow the instructions below to validate the microservices.

> Note: If you see an "Internal Server Error" from the `curl` command, wait a few minutes for the microservice to be ready and then try again.

1. LVM Microservice

   ```bash
   http_proxy="" curl http://${host_ip}:9399/v1/lvm -XPOST -d '{"image": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "prompt":"What is this?"}' -H 'Content-Type: application/json'
   ```

2. MegaService

   ```bash
   curl http://${host_ip}:${BACKEND_SERVICE_PORT}/v1/visualqna -H "Content-Type: application/json" -d '{
        "messages": [
          {
            "role": "user",
            "content": [
              {
                "type": "text",
                "text": "What'\''s in this image?"
              },
              {
                "type": "image_url",
                "image_url": {
                  "url": "https://www.ilankelman.org/stopsigns/australia.jpg"
                }
              }
            ]
          }
        ],
        "max_tokens": 300
      }'
   ```

## 🚀 Launch the UI

To access the frontend, open the following URL in your browser: http://{host_ip}:${FRONTEND_SERVICE_PORT} (with the environment settings above, host port 18001 is mapped to the UI container's internal port 5173). If you prefer to use a different host port to access the frontend, you can modify the port mapping in the `compose.yaml` file as shown below:

```yaml
  visualqna-rocm-ui-server:
    image: opea/visualqna-ui:latest
    ...
    ports:
      - "80:5173"
```
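One practical note: on the first start, the TGI service has to download the LVM model, which can take a while. A minimal readiness/teardown sketch (assuming the container name `visualqna-tgi-service` from the `compose.yaml` below; the `Connected` log line is the same readiness signal the accompanying test script greps for):

```bash
# Follow the TGI container log until it reports a successful connection.
docker logs -f visualqna-tgi-service 2>&1 | grep -m1 "Connected"

# When you are done, tear the whole deployment down from the compose folder.
docker compose -f compose.yaml down
```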
diff --git a/VisualQnA/docker_compose/amd/gpu/rocm/compose.yaml b/VisualQnA/docker_compose/amd/gpu/rocm/compose.yaml
new file mode 100644
index 000000000..621344fb0
--- /dev/null
+++ b/VisualQnA/docker_compose/amd/gpu/rocm/compose.yaml
@@ -0,0 +1,99 @@

# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0

services:
  visualqna-llava-tgi-service:
    image: ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
    container_name: visualqna-tgi-service
    ports:
      - "${VISUALQNA_TGI_SERVICE_PORT:-8399}:80"
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
      TGI_LLM_ENDPOINT: "http://${HOST_IP}:${VISUALQNA_TGI_SERVICE_PORT}"
      HUGGINGFACEHUB_API_TOKEN: ${VISUALQNA_HUGGINGFACEHUB_API_TOKEN}
      HUGGING_FACE_HUB_TOKEN: ${VISUALQNA_HUGGINGFACEHUB_API_TOKEN}
    volumes:
      - "/var/opea/visualqna-service/data:/data"
    shm_size: 64g
    devices:
      - /dev/kfd:/dev/kfd
      - /dev/dri/:/dev/dri/
    cap_add:
      - SYS_PTRACE
    group_add:
      - video
    security_opt:
      - seccomp:unconfined
    ipc: host
    command: --model-id ${LVM_MODEL_ID} --max-input-length 4096 --max-total-tokens 8192
  lvm-tgi:
    image: ${REGISTRY:-opea}/lvm-tgi:${TAG:-latest}
    container_name: lvm-tgi-server
    depends_on:
      - visualqna-llava-tgi-service
    ports:
      - "9399:9399"
    ipc: host
    environment:
      no_proxy: ${no_proxy}
      http_proxy: ${http_proxy}
      https_proxy: ${https_proxy}
      LVM_ENDPOINT: ${LVM_ENDPOINT}
      HF_HUB_DISABLE_PROGRESS_BARS: 1
      HF_HUB_ENABLE_HF_TRANSFER: 0
    restart: unless-stopped
  visualqna-rocm-backend-server:
    image: ${REGISTRY:-opea}/visualqna:${TAG:-latest}
    container_name: visualqna-rocm-backend-server
    depends_on:
      - visualqna-llava-tgi-service
      - lvm-tgi
    ports:
      - "${BACKEND_SERVICE_PORT:-8888}:8888"
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
      - MEGA_SERVICE_HOST_IP=${MEGA_SERVICE_HOST_IP}
      - LVM_SERVICE_HOST_IP=${LVM_SERVICE_HOST_IP}
    ipc: host
    restart: always
  visualqna-rocm-ui-server:
    image: ${REGISTRY:-opea}/visualqna-ui:${TAG:-latest}
    container_name: visualqna-rocm-ui-server
    depends_on:
      - visualqna-rocm-backend-server
    ports:
      - "${FRONTEND_SERVICE_PORT:-5173}:5173"
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
      - BACKEND_BASE_URL=${BACKEND_SERVICE_ENDPOINT}
    ipc: host
    restart: always
  visualqna-nginx-server:
    image: ${REGISTRY:-opea}/nginx:${TAG:-latest}
    container_name: visualqna-rocm-nginx-server
    depends_on:
      - visualqna-rocm-backend-server
      - visualqna-rocm-ui-server
    ports:
      - "${NGINX_PORT:-80}:80"
    environment:
      - no_proxy=${no_proxy}
      - https_proxy=${https_proxy}
      - http_proxy=${http_proxy}
      - FRONTEND_SERVICE_IP=${HOST_IP}
      - FRONTEND_SERVICE_PORT=${FRONTEND_SERVICE_PORT}
      - BACKEND_SERVICE_NAME=${BACKEND_SERVICE_NAME}
      - BACKEND_SERVICE_IP=${HOST_IP}
      - BACKEND_SERVICE_PORT=${BACKEND_SERVICE_PORT}
    ipc: host
    restart: always

networks:
  default:
    driver: bridge
diff --git a/VisualQnA/docker_compose/amd/gpu/rocm/set_env.sh b/VisualQnA/docker_compose/amd/gpu/rocm/set_env.sh
new file mode 100644
index 000000000..bf73465ce
--- /dev/null
+++ b/VisualQnA/docker_compose/amd/gpu/rocm/set_env.sh
@@ -0,0 +1,22 @@

#!/usr/bin/env bash

# Copyright (C) 2024 Advanced Micro Devices, Inc
# SPDX-License-Identifier: Apache-2.0

export HOST_IP=${Your_host_ip_address}
export VISUALQNA_TGI_SERVICE_PORT="8399"
export VISUALQNA_HUGGINGFACEHUB_API_TOKEN=${Your_HUGGINGFACEHUB_API_TOKEN}
export VISUALQNA_CARD_ID="card1"
export VISUALQNA_RENDER_ID="renderD136"
export LVM_MODEL_ID="Xkev/Llama-3.2V-11B-cot"
export LVM_ENDPOINT="http://${HOST_IP}:8399"
export LVM_SERVICE_PORT=9399
export MEGA_SERVICE_HOST_IP=${HOST_IP}
export LVM_SERVICE_HOST_IP=${HOST_IP}
export FRONTEND_SERVICE_IP=${HOST_IP}
export FRONTEND_SERVICE_PORT=18001
export BACKEND_SERVICE_NAME=visualqna
export BACKEND_SERVICE_IP=${HOST_IP}
export BACKEND_SERVICE_PORT=18002
# BACKEND_SERVICE_PORT must be defined before it is interpolated below.
export BACKEND_SERVICE_ENDPOINT="http://${HOST_IP}:${BACKEND_SERVICE_PORT}/v1/visualqna"
export NGINX_PORT=18003
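A minimal usage sketch for `set_env.sh`, assuming it is sourced from the compose folder; the IP address and token values below are placeholders you must replace with your own:

```bash
# Provide the placeholders the script expects, then source it into the current shell.
export Your_host_ip_address="192.168.1.101"    # example value; use your server's IPv4
export Your_HUGGINGFACEHUB_API_TOKEN="hf_xxx"  # placeholder token
. ./set_env.sh
docker compose -f compose.yaml up -d
```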
diff --git a/VisualQnA/tests/test_compose_on_rocm.sh b/VisualQnA/tests/test_compose_on_rocm.sh
new file mode 100644
index 000000000..9fd298bf7
--- /dev/null
+++ b/VisualQnA/tests/test_compose_on_rocm.sh
@@ -0,0 +1,212 @@

#!/bin/bash
# Copyright (C) 2024 Advanced Micro Devices, Inc.
# SPDX-License-Identifier: Apache-2.0

set -x
IMAGE_REPO=${IMAGE_REPO:-"opea"}
IMAGE_TAG=${IMAGE_TAG:-"latest"}
echo "REGISTRY=IMAGE_REPO=${IMAGE_REPO}"
echo "TAG=IMAGE_TAG=${IMAGE_TAG}"

WORKPATH=$(dirname "$PWD")
LOG_PATH="$WORKPATH/tests"
ip_address=$(hostname -I | awk '{print $1}')

export REGISTRY=${IMAGE_REPO}
export TAG=${IMAGE_TAG}
export HOST_IP=${ip_address}
export VISUALQNA_TGI_SERVICE_PORT="8399"
export VISUALQNA_HUGGINGFACEHUB_API_TOKEN=${HUGGINGFACEHUB_API_TOKEN}
export VISUALQNA_CARD_ID="card1"
export VISUALQNA_RENDER_ID="renderD136"
export LVM_MODEL_ID="Xkev/Llama-3.2V-11B-cot"
export MODEL="llava-hf/llava-v1.6-mistral-7b-hf"
export LVM_ENDPOINT="http://${HOST_IP}:8399"
export LVM_SERVICE_PORT=9399
export MEGA_SERVICE_HOST_IP=${HOST_IP}
export LVM_SERVICE_HOST_IP=${HOST_IP}
export FRONTEND_SERVICE_IP=${HOST_IP}
export FRONTEND_SERVICE_PORT=5173
export BACKEND_SERVICE_NAME=visualqna
export BACKEND_SERVICE_IP=${HOST_IP}
export BACKEND_SERVICE_PORT=8888
# BACKEND_SERVICE_PORT must be defined before it is interpolated below.
export BACKEND_SERVICE_ENDPOINT="http://${HOST_IP}:${BACKEND_SERVICE_PORT}/v1/visualqna"
export NGINX_PORT=18003
export PATH="$HOME/miniconda3/bin:$PATH"

function build_docker_images() {
    cd $WORKPATH/docker_image_build
    git clone https://github.com/opea-project/GenAIComps.git && cd GenAIComps && git checkout "${opea_branch:-"main"}" && cd ../

    echo "Build all the images with --no-cache, check docker_image_build.log for details..."
    docker compose -f build.yaml build --no-cache > ${LOG_PATH}/docker_image_build.log

    docker pull ghcr.io/huggingface/text-generation-inference:2.4.1-rocm
    docker images && sleep 1s
}

function start_services() {
    cd $WORKPATH/docker_compose/amd/gpu/rocm

    sed -i "s/backend_address/$ip_address/g" $WORKPATH/ui/svelte/.env

    # Start Docker Containers
    docker compose up -d > ${LOG_PATH}/start_services_with_compose.log

    # Wait (up to ~8 minutes) for the TGI service to finish loading the model.
    n=0
    until [[ "$n" -ge 100 ]]; do
        docker logs visualqna-tgi-service > ${LOG_PATH}/lvm_tgi_service_start.log
        if grep -q Connected ${LOG_PATH}/lvm_tgi_service_start.log; then
            break
        fi
        sleep 5s
        n=$((n+1))
    done
}

function validate_services() {
    # Generic validator: POST INPUT_DATA to URL, require HTTP 200, then check
    # the response body for EXPECTED_RESULT; dump DOCKER_NAME's logs on failure.
    local URL="$1"
    local EXPECTED_RESULT="$2"
    local SERVICE_NAME="$3"
    local DOCKER_NAME="$4"
    local INPUT_DATA="$5"

    local HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL")
    if [ "$HTTP_STATUS" -eq 200 ]; then
        echo "[ $SERVICE_NAME ] HTTP status is 200. Checking content..."

        local CONTENT=$(curl -s -X POST -d "$INPUT_DATA" -H 'Content-Type: application/json' "$URL" | tee ${LOG_PATH}/${SERVICE_NAME}.log)

        if echo "$CONTENT" | grep -q "$EXPECTED_RESULT"; then
            echo "[ $SERVICE_NAME ] Content is as expected."
        else
            echo "[ $SERVICE_NAME ] Content does not match the expected result: $CONTENT"
            docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
            exit 1
        fi
    else
        echo "[ $SERVICE_NAME ] HTTP status is not 200. Received status was $HTTP_STATUS"
        docker logs ${DOCKER_NAME} >> ${LOG_PATH}/${SERVICE_NAME}.log
        exit 1
    fi
    sleep 1s
}
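# Illustrative, commented-out standalone call of the helper above; the
# arguments mirror the lvm check in validate_microservices below
# (URL, expected substring, log name, container to dump logs from, JSON payload):
#   validate_services \
#     "${ip_address}:9399/v1/lvm" \
#     "The image" \
#     "lvm-tgi" \
#     "visualqna-tgi-service" \
#     '{"image": "<base64-encoded PNG>", "prompt":"What is this?"}'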
function validate_microservices() {
    # Check if the microservices are running correctly.

    # lvm microservice
    validate_services \
        "${ip_address}:9399/v1/lvm" \
        "The image" \
        "lvm-tgi" \
        "visualqna-tgi-service" \
        '{"image": "iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAFUlEQVR42mP8/5+hnoEIwDiqkL4KAcT9GO0U4BxoAAAAAElFTkSuQmCC", "prompt":"What is this?"}'
}

function validate_megaservice() {
    # Curl the Mega Service
    validate_services \
        "${ip_address}:8888/v1/visualqna" \
        "The image" \
        "visualqna-rocm-backend-server" \
        "visualqna-rocm-backend-server" \
        '{
          "messages": [
            {
              "role": "user",
              "content": [
                {
                  "type": "text",
                  "text": "What'\''s in this image?"
                },
                {
                  "type": "image_url",
                  "image_url": {
                    "url": "https://www.ilankelman.org/stopsigns/australia.jpg"
                  }
                }
              ]
            }
          ],
          "max_tokens": 300
        }'

    # Test the megaservice via nginx
    validate_services \
        "${ip_address}:${NGINX_PORT}/v1/visualqna" \
        "The image" \
        "visualqna-rocm-nginx-server" \
        "visualqna-rocm-nginx-server" \
        '{
          "messages": [
            {
              "role": "user",
              "content": [
                {
                  "type": "text",
                  "text": "What'\''s in this image?"
                },
                {
                  "type": "image_url",
                  "image_url": {
                    "url": "https://www.ilankelman.org/stopsigns/australia.jpg"
                  }
                }
              ]
            }
          ],
          "max_tokens": 300
        }'
}

function validate_frontend() {
    cd $WORKPATH/ui/svelte
    local conda_env_name="OPEA_e2e"
    export PATH=${HOME}/miniforge3/bin/:$PATH
    if conda info --envs | grep -q "$conda_env_name"; then
        echo "$conda_env_name exists!"
    else
        conda create -n ${conda_env_name} python=3.12 -y
    fi
    source activate ${conda_env_name}

    sed -i "s/localhost/$ip_address/g" playwright.config.ts

    conda install -c conda-forge nodejs -y
    npm install && npm ci && npx playwright install --with-deps
    node -v && npm -v && pip list

    exit_status=0
    npx playwright test || exit_status=$?

    if [ $exit_status -ne 0 ]; then
        echo "[TEST INFO]: ---------frontend test failed---------"
        exit $exit_status
    else
        echo "[TEST INFO]: ---------frontend test passed---------"
    fi
}

function stop_docker() {
    cd $WORKPATH/docker_compose/amd/gpu/rocm/
    docker compose stop && docker compose rm -f
}

function main() {

    stop_docker

    if [[ "$IMAGE_REPO" == "opea" ]]; then build_docker_images; fi
    start_services

    validate_microservices
    validate_megaservice
    #validate_frontend

    stop_docker
    echo y | docker system prune

}

main
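For reference, a minimal sketch of invoking this test locally; the token value is a placeholder, and the script derives the IP address and remaining variables itself:

```bash
# The script reads HUGGINGFACEHUB_API_TOKEN from the environment.
export HUGGINGFACEHUB_API_TOKEN="hf_xxx"   # placeholder token
cd GenAIExamples/VisualQnA/tests
bash test_compose_on_rocm.sh
```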