Commit

Merge branch 'main' into docsum
MSCetin37 committed Nov 8, 2024
2 parents c0d42eb + 82801d0 commit e75b897
Showing 83 changed files with 4,769 additions and 630 deletions.
2 changes: 2 additions & 0 deletions .github/code_spell_ignore.txt
@@ -0,0 +1,2 @@
+ModelIn
+modelin
4 changes: 4 additions & 0 deletions .github/workflows/_example-workflow.yml
@@ -77,6 +77,10 @@ jobs:
git clone https://github.com/vllm-project/vllm.git
cd vllm && git rev-parse HEAD && cd ../
fi
+if [[ $(grep -c "vllm-hpu:" ${docker_compose_path}) != 0 ]]; then
+  git clone https://github.com/HabanaAI/vllm-fork.git
+  cd vllm-fork && git rev-parse HEAD && cd ../
+fi
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps && git checkout ${{ inputs.opea_branch }} && git rev-parse HEAD && cd ../
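A note on the gate added above: `grep -c` prints the number of matches, so the `vllm-fork` clone runs only when the example's compose file actually defines a `vllm-hpu:` service. A minimal sketch of the same check, with a hypothetical compose path:

```bash
# Hypothetical path; in the workflow, docker_compose_path points at the example's compose file.
docker_compose_path=ChatQnA/docker_compose/intel/hpu/gaudi/compose_vllm.yaml
if [[ $(grep -c "vllm-hpu:" "${docker_compose_path}") != 0 ]]; then
  echo "vllm-hpu service found; HabanaAI/vllm-fork would be cloned here"
fi
```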
35 changes: 35 additions & 0 deletions .github/workflows/check-online-doc-build.yml
@@ -0,0 +1,35 @@
+# Copyright (C) 2024 Intel Corporation
+# SPDX-License-Identifier: Apache-2.0
+
+name: Check Online Document Building
+permissions: {}
+
+on:
+  pull_request:
+    branches: [main]
+    paths:
+      - "**.md"
+      - "**.rst"
+
+jobs:
+  build:
+    runs-on: ubuntu-latest
+    steps:
+
+      - name: Checkout
+        uses: actions/checkout@v4
+        with:
+          path: GenAIExamples
+
+      - name: Checkout docs
+        uses: actions/checkout@v4
+        with:
+          repository: opea-project/docs
+          path: docs
+
+      - name: Build Online Document
+        shell: bash
+        run: |
+          echo "build online doc"
+          cd docs
+          bash scripts/build.sh
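To reproduce this check outside CI, the workflow reduces to cloning the docs repository and running its build script (a sketch; it assumes `scripts/build.sh` resolves its own dependencies):

```bash
git clone https://github.com/opea-project/docs.git
cd docs
bash scripts/build.sh  # same command the "Build Online Document" step runs
```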
1 change: 0 additions & 1 deletion .github/workflows/nightly-docker-build-publish.yml
@@ -42,7 +42,6 @@ jobs:
    with:
      node: gaudi
      example: ${{ matrix.example }}
-      inject_commit: true
    secrets: inherit

get-image-list:
2 changes: 1 addition & 1 deletion .github/workflows/pr-path-detection.yml
@@ -68,7 +68,7 @@ jobs:
# echo $url_line
url=$(echo "$url_line"|cut -d '(' -f2 | cut -d ')' -f1|sed 's/\.git$//')
path=$(echo "$url_line"|cut -d':' -f1 | cut -d'/' -f2-)
-response=$(curl -L -s -o /dev/null -w "%{http_code}" "$url")
+response=$(curl -L -s -o /dev/null -w "%{http_code}" "$url") || true
if [ "$response" -ne 200 ]; then
echo "**********Validation failed, try again**********"
response_retry=$(curl -s -o /dev/null -w "%{http_code}" "$url")
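The appended `|| true` keeps the step alive under bash's errexit mode: without it, a `curl` failure inside the command substitution would abort the job before the retry above ever runs. A minimal standalone sketch of the pattern (hypothetical URL):

```bash
set -e
# curl exits non-zero on connection errors; `|| true` swallows that status so
# the script can inspect $response (curl -w prints "000" on failure) and retry.
response=$(curl -L -s -o /dev/null -w "%{http_code}" "https://example.invalid/") || true
if [ "$response" -ne 200 ]; then
  echo "Validation failed, would retry here"
fi
```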
17 changes: 2 additions & 15 deletions ChatQnA/docker_compose/intel/hpu/gaudi/README.md
@@ -26,7 +26,7 @@ To set up environment variables for deploying ChatQnA services, follow these ste
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
-export no_proxy="Your_No_Proxy",chatqna-gaudi-ui-server,chatqna-gaudi-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm_service,vllm-ray-service,guardrails
+export no_proxy="Your_No_Proxy",chatqna-gaudi-ui-server,chatqna-gaudi-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm_service,guardrails
```

3. Set up other environment variables:
@@ -227,7 +227,7 @@ For users in China who are unable to download models directly from Huggingface,
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
-export no_proxy="Your_No_Proxy",chatqna-gaudi-ui-server,chatqna-gaudi-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm_service,vllm-ray-service,guardrails
+export no_proxy="Your_No_Proxy",chatqna-gaudi-ui-server,chatqna-gaudi-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service,vllm_service,guardrails
```

3. Set up other environment variables:
@@ -257,12 +257,6 @@ If use vllm for llm backend.
docker compose -f compose_vllm.yaml up -d
```

-If use vllm-on-ray for llm backend.
-
-```bash
-docker compose -f compose_vllm_ray.yaml up -d
-```
-
If you want to enable guardrails microservice in the pipeline, please follow the below command instead:

```bash
@@ -351,13 +345,6 @@ For validation details, please refer to [how-to-validate_service](./how_to_valid
}'
```
-```bash
-#vLLM-on-Ray Service
-curl http://${host_ip}:8006/v1/chat/completions \
-  -H "Content-Type: application/json" \
-  -d '{"model": "${LLM_MODEL_ID}", "messages": [{"role": "user", "content": "What is Deep Learning?"}]}'
-```
5. MegaService
```bash
164 changes: 0 additions & 164 deletions ChatQnA/docker_compose/intel/hpu/gaudi/compose_vllm_ray.yaml

This file was deleted.

8 changes: 4 additions & 4 deletions ChatQnA/docker_compose/nvidia/gpu/README.md
@@ -17,8 +17,6 @@ To set up environment variables for deploying ChatQnA services, follow these ste
```bash
# Example: host_ip="192.168.1.1"
export host_ip="External_Public_IP"
-# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
-export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
```

@@ -27,6 +25,8 @@ To set up environment variables for deploying ChatQnA services, follow these ste
```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
+# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
+export no_proxy="Your_No_Proxy",chatqna-ui-server,chatqna-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service
```

3. Set up other environment variables:
@@ -156,8 +156,6 @@ Change the `xxx_MODEL_ID` below for your needs.
```bash
# Example: host_ip="192.168.1.1"
export host_ip="External_Public_IP"
-# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
-export no_proxy="Your_No_Proxy"
export HUGGINGFACEHUB_API_TOKEN="Your_Huggingface_API_Token"
# Example: NGINX_PORT=80
export NGINX_PORT=${your_nginx_port}
@@ -168,6 +166,8 @@ Change the `xxx_MODEL_ID` below for your needs.
```bash
export http_proxy="Your_HTTP_Proxy"
export https_proxy="Your_HTTPs_Proxy"
+# Example: no_proxy="localhost, 127.0.0.1, 192.168.1.1"
+export no_proxy="Your_No_Proxy",chatqna-ui-server,chatqna-backend-server,dataprep-redis-service,tei-embedding-service,retriever,tei-reranking-service,tgi-service
```

3. Set up other environment variables:
6 changes: 0 additions & 6 deletions ChatQnA/docker_image_build/build.yaml
@@ -77,12 +77,6 @@ services:
      dockerfile: comps/llms/text-generation/vllm/langchain/Dockerfile
    extends: chatqna
    image: ${REGISTRY:-opea}/llm-vllm:${TAG:-latest}
-  llm-vllm-ray-hpu:
-    build:
-      context: GenAIComps
-      dockerfile: comps/llms/text-generation/vllm/ray/dependency/Dockerfile
-    extends: chatqna
-    image: ${REGISTRY:-opea}/llm-vllm-ray-hpu:${TAG:-latest}
  dataprep-redis:
    build:
      context: GenAIComps
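With `llm-vllm-ray-hpu` removed, the remaining services in `build.yaml` build as before; a single image can be built with something like the following (a sketch, assuming the default `REGISTRY`/`TAG` values shown above):

```bash
cd ChatQnA/docker_image_build
# Build one service from the compose build file; any service key in build.yaml works.
docker compose -f build.yaml build dataprep-redis
```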