Rename image name XXX-hpu to XXX-gaudi (#1154)
Signed-off-by: ZePan110 <[email protected]>
ZePan110 authored Nov 19, 2024
1 parent 17d4b0c commit 8808b51
Showing 6 changed files with 8 additions and 8 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/_example-workflow.yml
@@ -77,7 +77,7 @@ jobs:
git clone https://github.com/vllm-project/vllm.git
cd vllm && git rev-parse HEAD && cd ../
fi
-if [[ $(grep -c "vllm-hpu:" ${docker_compose_path}) != 0 ]]; then
+if [[ $(grep -c "vllm-gaudi:" ${docker_compose_path}) != 0 ]]; then
git clone https://github.com/HabanaAI/vllm-fork.git
cd vllm-fork && git checkout 3c39626 && cd ../
fi
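A quick local check that mirrors the renamed condition above (a sketch; the path is the Gaudi compose file touched later in this commit):

grep -c "vllm-gaudi:" ChatQnA/docker_compose/intel/hpu/gaudi/compose_vllm.yaml   # a non-zero count is what triggers the vllm-fork clone in the workflow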
2 changes: 1 addition & 1 deletion ChatQnA/docker_compose/intel/hpu/gaudi/compose_vllm.yaml
@@ -78,7 +78,7 @@ services:
MAX_WARMUP_SEQUENCE_LENGTH: 512
command: --model-id ${RERANK_MODEL_ID} --auto-truncate
vllm-service:
-image: ${REGISTRY:-opea}/vllm-hpu:${TAG:-latest}
+image: ${REGISTRY:-opea}/vllm-gaudi:${TAG:-latest}
container_name: vllm-gaudi-server
ports:
- "8007:80"
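With the image key renamed, the service comes up the same way; ${REGISTRY:-opea} and ${TAG:-latest} default as shown above. A minimal usage sketch (it assumes the other variables the stack needs, such as model IDs and the Hugging Face token, are already exported):

cd ChatQnA/docker_compose/intel/hpu/gaudi
REGISTRY=opea TAG=latest docker compose -f compose_vllm.yaml up -d vllm-service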
4 changes: 2 additions & 2 deletions ChatQnA/docker_image_build/build.yaml
@@ -119,12 +119,12 @@ services:
dockerfile: Dockerfile.cpu
extends: chatqna
image: ${REGISTRY:-opea}/vllm:${TAG:-latest}
-vllm-hpu:
+vllm-gaudi:
build:
context: vllm-fork
dockerfile: Dockerfile.hpu
extends: chatqna
-image: ${REGISTRY:-opea}/vllm-hpu:${TAG:-latest}
+image: ${REGISTRY:-opea}/vllm-gaudi:${TAG:-latest}
nginx:
build:
context: GenAIComps
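To rebuild only the renamed target from build.yaml, something like the following should suffice (a sketch; it assumes the vllm-fork checkout sits next to build.yaml, since that is the build context above, and pins the same commit used elsewhere in this change):

cd ChatQnA/docker_image_build
git clone https://github.com/HabanaAI/vllm-fork.git && cd vllm-fork && git checkout 3c39626 && cd ../
docker compose -f build.yaml build vllm-gaudi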
@@ -1284,7 +1284,7 @@ spec:
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
image: "opea/vllm-hpu:latest"
image: "opea/vllm-gaudi:latest"
args:
- "--enforce-eager"
- "--model"
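After a rename like this, a repo-wide sweep for stale references is a cheap sanity check (a sketch, assuming GNU grep and the repository root as the working directory):

grep -rn "vllm-hpu" --include="*.yaml" --include="*.yml" --include="*.sh" --include="*.md" .   # any remaining hits are references this commit did not rename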
2 changes: 1 addition & 1 deletion ChatQnA/tests/test_compose_vllm_on_gaudi.sh
@@ -20,7 +20,7 @@ function build_docker_images() {
git clone https://github.com/HabanaAI/vllm-fork.git && cd vllm-fork && git checkout 3c39626 && cd ../

echo "Build all the images with --no-cache, check docker_image_build.log for details..."
-service_list="chatqna chatqna-ui dataprep-redis retriever-redis vllm-hpu nginx"
+service_list="chatqna chatqna-ui dataprep-redis retriever-redis vllm-gaudi nginx"
docker compose -f build.yaml build ${service_list} --no-cache > ${LOG_PATH}/docker_image_build.log

docker pull ghcr.io/huggingface/text-embeddings-inference:cpu-1.5
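Once build_docker_images has run, the renamed image should be visible locally, e.g.:

docker images | grep vllm-gaudi   # expect opea/vllm-gaudi:latest with the default REGISTRY and TAG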
4 changes: 2 additions & 2 deletions docker_images_list.md
@@ -78,7 +78,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/llm-tgi](https://hub.docker.com/r/opea/llm-tgi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/tgi/Dockerfile) | The docker image exposed the OPEA LLM microservice upon TGI docker image for GenAI application use |
| [opea/llm-vllm](https://hub.docker.com/r/opea/llm-vllm) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/vllm/langchain/Dockerfile) | The docker image exposed the OPEA LLM microservice upon vLLM docker image for GenAI application use |
| [opea/llm-vllm-llamaindex](https://hub.docker.com/r/opea/llm-vllm-llamaindex) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/llms/text-generation/vllm/llama_index/Dockerfile) | This docker image exposes OPEA LLM microservices to the llamaindex framework's vLLM Docker image for use by GenAI applications |
-| [opea/llava-hpu](https://hub.docker.com/r/opea/llava-hpu) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/dependency/Dockerfile.intel_hpu) | The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use on the Gaudi |
+| [opea/llava-gaudi](https://hub.docker.com/r/opea/llava-hpu) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/dependency/Dockerfile.intel_hpu) | The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use on the Gaudi |
| [opea/lvm-tgi](https://hub.docker.com/r/opea/lvm-tgi) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/tgi-llava/Dockerfile) | This docker image is designed to build a large visual model (LVM) microservice using the HuggingFace Text Generation Inference(TGI) framework. The microservice accepts document input and generates a answer to question. |
| [opea/lvm-llava](https://hub.docker.com/r/opea/lvm-llava) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/dependency/Dockerfile) | The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) server for GenAI application use |
| [opea/lvm-llava-svc](https://hub.docker.com/r/opea/lvm-llava-svc) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/llava/Dockerfile) | The docker image exposed the OPEA microservice running LLaVA as a large visual model (LVM) service for GenAI application use |
@@ -106,7 +106,7 @@ Take ChatQnA for example. ChatQnA is a chatbot application service based on the
| [opea/video-llama-lvm-server](https://hub.docker.com/r/opea/video-llama-lvm-server) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/lvms/video-llama/dependency/Dockerfile) | The docker image exposed the OPEA microservice running Video-Llama as a large visual model (LVM) server for GenAI application use |
| [opea/tts](https://hub.docker.com/r/opea/tts) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/tts/speecht5/Dockerfile) | The docker image exposed the OPEA Text-To-Speech microservice for GenAI application use |
| [opea/vllm](https://hub.docker.com/r/opea/vllm) | [Link](https://github.com/vllm-project/vllm/blob/main/Dockerfile.cpu) | The docker image powered by vllm-project for deploying and serving vllm Models |
-| [opea/vllm-hpu]() | [Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/Dockerfile.hpu) | The docker image powered by vllm-fork for deploying and serving vllm-hpu Models |
+| [opea/vllm-gaudi]() | [Link](https://github.com/HabanaAI/vllm-fork/blob/habana_main/Dockerfile.hpu) | The docker image powered by vllm-fork for deploying and serving vllm-gaudi Models |
| [opea/vllm-openvino](https://hub.docker.com/r/opea/vllm-openvino) | [Link](https://github.com/vllm-project/vllm/blob/main/Dockerfile.openvino) | The docker image powered by vllm-project for deploying and serving vllm Models of the Openvino Framework |
| [opea/web-retriever-chroma](https://hub.docker.com/r/opea/web-retriever-chroma) | [Link](https://github.com/opea-project/GenAIComps/tree/main/comps/web_retrievers/chroma/langchain/Dockerfile) | The docker image exposed the OPEA retrieval microservice based on chroma vectordb for GenAI application use |
| [opea/whisper](https://hub.docker.com/r/opea/whisper) | [Link](https://github.com/opea-project/GenAIComps/blob/main/comps/asr/whisper/dependency/Dockerfile) | The docker image exposed the OPEA Whisper service for GenAI application use |
