This document outlines the deployment process for a Translation application utilizing the GenAIComps microservice pipeline on an Intel Xeon server. The steps include Docker image creation, container deployment via Docker Compose, and service execution to integrate microservices such as `llm`. We will publish the Docker images to Docker Hub soon, which will simplify the deployment process for this service.
To set up a Xeon server on AWS, start by creating an AWS account if you don't already have one. Then, head to the EC2 Console to begin the process. Within the EC2 service, select the Amazon EC2 M7i or M7i-flex instance type to leverage 4th Generation Intel Xeon Scalable processors. These instances are optimized for high-performance computing and demanding workloads.
For detailed information about these instance types, you can refer to this link. Once you've chosen the appropriate instance type, proceed with configuring your instance settings, including network configurations, security groups, and storage options.
After launching your instance, you can connect to it using SSH (for Linux instances) or Remote Desktop Protocol (RDP) (for Windows instances). From there, you'll have full access to your Xeon server, allowing you to install, configure, and manage your applications as needed.
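For example, a typical SSH connection to a freshly launched Linux instance looks like this (the key path, user name, and public IP are placeholders for your own values):

```bash
# Connect using the key pair selected at launch; `ubuntu` is the
# default user for Ubuntu AMIs, while Amazon Linux uses `ec2-user`.
ssh -i ~/.ssh/my-key.pem ubuntu@<instance-public-ip>
```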
First of all, you need to build the Docker images locally.
```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/llm-tgi:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/llms/text-generation/tgi/Dockerfile .
```
To construct the Mega Service, we utilize the GenAIComps microservice pipeline within the `translation.py` Python script. Build the MegaService Docker image via the command below:
```bash
git clone https://github.com/opea-project/GenAIExamples
cd GenAIExamples/Translation/
docker build -t opea/translation:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f Dockerfile .
```
Build the frontend Docker image via the command below:
```bash
cd GenAIExamples/Translation/ui
docker build -t opea/translation-ui:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f docker/Dockerfile .
```
Then run the command `docker images`; you should have the following Docker images:
- `opea/llm-tgi:latest`
- `opea/translation:latest`
- `opea/translation-ui:latest`
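The exact IDs, timestamps, and sizes will vary; what matters is that all three repositories are listed, for example:

```
REPOSITORY            TAG      IMAGE ID     CREATED      SIZE
opea/translation-ui   latest   <image-id>   <created>    <size>
opea/translation      latest   <image-id>   <created>    <size>
opea/llm-tgi          latest   <image-id>   <created>    <size>
```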
Since the `compose.yaml` will consume some environment variables, you need to set them up in advance as below.
```bash
export http_proxy=${your_http_proxy}
export https_proxy=${your_https_proxy}
export LLM_MODEL_ID="haoranxu/ALMA-13B"
export TGI_LLM_ENDPOINT="http://${host_ip}:8008"
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
export MEGA_SERVICE_HOST_IP=${host_ip}
export LLM_SERVICE_HOST_IP=${host_ip}
export BACKEND_SERVICE_ENDPOINT="http://${host_ip}:8888/v1/translation"
```
Note: Please replace `host_ip` with your external IP address; do not use localhost.
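The commands above and below reference `${host_ip}` without defining it. A minimal sketch for setting it on a typical Linux host (this heuristic picks the first address reported by the kernel; substitute your actual external IP if it selects the wrong interface):

```bash
# Use the host's first reported IP address as host_ip.
# Verify the value is your external address, not 127.0.0.1.
export host_ip=$(hostname -I | awk '{print $1}')
echo "host_ip=${host_ip}"
```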
Then start all the services:

```bash
docker compose up -d
```
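Before validating the individual services, it is worth confirming that every container started cleanly. A quick check using standard Docker Compose commands:

```bash
# Show the status of all containers started by the compose file.
docker compose ps

# List any exited containers on the host, which usually indicates
# a configuration or startup error.
docker ps --filter "status=exited"
```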
- TGI Service

  ```bash
  curl http://${host_ip}:8008/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17, "do_sample": true}}' \
    -H 'Content-Type: application/json'
  ```
- LLM Microservice

  ```bash
  curl http://${host_ip}:9000/v1/chat/completions \
    -X POST \
    -d '{"query":"Translate this from Chinese to English:\nChinese: 我爱机器翻译。\nEnglish:"}' \
    -H 'Content-Type: application/json'
  ```
- MegaService

  ```bash
  curl http://${host_ip}:8888/v1/translation \
    -H "Content-Type: application/json" \
    -d '{"language_from": "Chinese","language_to": "English","source_language": "我爱机器翻译。"}'
  ```
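If any of these checks fails, the container logs are usually the fastest way to diagnose the problem; note that TGI may need several minutes on first start to download the model before it can serve requests. A sketch using standard Docker Compose commands (`<service-name>` is a placeholder; use the names reported by `docker compose ps`):

```bash
# Tail the most recent log lines of a single service.
docker compose logs --tail=100 <service-name>
```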
Once all of the aforementioned services have been validated, the application is ready to use.
Open the URL http://{host_ip}:5173 in your browser to access the frontend.
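If port 5173 is not opened in your instance's security group, an SSH tunnel is a simple alternative (a sketch, reusing the placeholder key and address from the SSH example above):

```bash
# Forward local port 5173 to the UI service on the remote host,
# then browse to http://localhost:5173 on your workstation.
ssh -i ~/.ssh/my-key.pem -L 5173:localhost:5173 ubuntu@<instance-public-ip>
```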