Commit

Merge branch 'main' into ecrag_v1

Yongbozzz authored Nov 8, 2024
2 parents e850bc2 + 4c27a3d commit 0811e34
Showing 8 changed files with 46 additions and 377 deletions.
32 changes: 32 additions & 0 deletions .github/workflows/check-online-doc-build.yml
@@ -0,0 +1,32 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

name: Check Online Document Building
permissions: {}

on:
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:

      - name: Checkout
        uses: actions/checkout@v4
        with:
          path: GenAIExamples

      - name: Checkout docs
        uses: actions/checkout@v4
        with:
          repository: opea-project/docs
          path: docs

      - name: Build Online Document
        shell: bash
        run: |
          echo "build online doc"
          cd docs
          bash scripts/build.sh
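The new workflow simply checks out both repositories and runs the docs build script on every pull request against main. For a quick pre-PR check, the same steps can be reproduced locally; the sketch below assumes both repositories live under the opea-project GitHub organization and that `scripts/build.sh` needs no additional arguments.

```bash
# Local approximation of the CI job (assumed layout: GenAIExamples and docs
# cloned side by side, mirroring the checkout paths used in the workflow).
git clone https://github.com/opea-project/GenAIExamples.git GenAIExamples
git clone https://github.com/opea-project/docs.git docs

# Build the online documentation, as the workflow's final step does.
cd docs
bash scripts/build.sh
```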
17 changes: 2 additions & 15 deletions ChatQnA/docker_compose/intel/hpu/gaudi/README.md
@@ -26,7 +26,7 @@ To set up environment variables for deploying ChatQnA services, follow these steps:
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
-export no_proxy="Your_No_Proxy",chatqna-gaudi-ui-server,chatqna-gaudi-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm_service,vllm-ray-service,guardrails
+export no_proxy="Your_No_Proxy",chatqna-gaudi-ui-server,chatqna-gaudi-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm_service,guardrails
```

3. Set up other environment variables:
@@ -227,7 +227,7 @@ For users in China who are unable to download models directly from Huggingface,
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
-export no_proxy="Your_No_Proxy",chatqna-gaudi-ui-server,chatqna-gaudi-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm_service,vllm-ray-service,guardrails
+export no_proxy="Your_No_Proxy",chatqna-gaudi-ui-server,chatqna-gaudi-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm_service,guardrails
```

3. Set up other environment variables:
@@ -257,12 +257,6 @@ If using vLLM for the LLM backend:
docker compose -f compose_vllm.yaml up -d
```

If using vLLM-on-Ray for the LLM backend:

```bash
docker compose -f compose_vllm_ray.yaml up -d
```

If you want to enable the guardrails microservice in the pipeline, use the command below instead:

```bash
@@ -351,13 +345,6 @@ For validation details, please refer to [how-to-validate_service](./how_to_valid
}'
```
```bash
#vLLM-on-Ray Service
curl http://${host_ip}:8006/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "${LLM_MODEL_ID}", "messages": [{"role": "user", "content": "What is Deep Learning?"}]}'
```
5. MegaService
```bash
164 changes: 0 additions & 164 deletions ChatQnA/docker_compose/intel/hpu/gaudi/compose_vllm_ray.yaml

This file was deleted.

6 changes: 0 additions & 6 deletions ChatQnA/docker_image_build/build.yaml
@@ -77,12 +77,6 @@ services:
dockerfile: comps/llms/text-generation/vllm/langchain/Dockerfile
extends: chatqna
image: ${REGISTRY:-opea}/llm-vllm:${TAG:-latest}
llm-vllm-ray-hpu:
build:
context: GenAIComps
dockerfile: comps/llms/text-generation/vllm/ray/dependency/Dockerfile
extends: chatqna
image: ${REGISTRY:-opea}/llm-vllm-ray-hpu:${TAG:-latest}
dataprep-redis:
build:
context: GenAIComps
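With the llm-vllm-ray-hpu service dropped from build.yaml, the remaining ChatQnA images build exactly as before. As a rough sketch (assuming Docker Compose v2 and that GenAIComps is cloned next to build.yaml, since the services use it as their build context):

```bash
# Hypothetical local build of the images still defined in build.yaml.
# REGISTRY and TAG fall back to "opea" and "latest" inside the compose file.
cd ChatQnA/docker_image_build
git clone https://github.com/opea-project/GenAIComps.git
docker compose -f build.yaml build
```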