Merge branch 'main' into multiple-model-with-remote-service
sgurunat authored Nov 11, 2024
2 parents 0d3ec68 + e3187be commit 7f6ee31
Showing 135 changed files with 5,898 additions and 850 deletions.
2 changes: 2 additions & 0 deletions .github/code_spell_ignore.txt
@@ -0,0 +1,2 @@
ModelIn
modelin
4 changes: 4 additions & 0 deletions .github/workflows/_example-workflow.yml
@@ -77,6 +77,10 @@ jobs:
          git clone https://github.com/vllm-project/vllm.git
          cd vllm && git rev-parse HEAD && cd ../
        fi
+       if [[ $(grep -c "vllm-hpu:" ${docker_compose_path}) != 0 ]]; then
+         git clone https://github.com/HabanaAI/vllm-fork.git
+         cd vllm-fork && git rev-parse HEAD && cd ../
+       fi
        git clone https://github.com/opea-project/GenAIComps.git
        cd GenAIComps && git checkout ${{ inputs.opea_branch }} && git rev-parse HEAD && cd ../
35 changes: 35 additions & 0 deletions .github/workflows/check-online-doc-build.yml
@@ -0,0 +1,35 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

name: Check Online Document Building
permissions: {}

on:
  pull_request:
    branches: [main]
    paths:
      - "**.md"
      - "**.rst"

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          path: GenAIExamples

      - name: Checkout docs
        uses: actions/checkout@v4
        with:
          repository: opea-project/docs
          path: docs

      - name: Build Online Document
        shell: bash
        run: |
          echo "build online doc"
          cd docs
          bash scripts/build.sh
1 change: 0 additions & 1 deletion .github/workflows/nightly-docker-build-publish.yml
@@ -42,7 +42,6 @@ jobs:
      with:
        node: gaudi
        example: ${{ matrix.example }}
-       inject_commit: true
      secrets: inherit

get-image-list:
6 changes: 3 additions & 3 deletions .github/workflows/pr-path-detection.yml
@@ -61,14 +61,14 @@ jobs:
        changed_files="$(git diff --name-status --diff-filter=ARM ${{ github.event.pull_request.base.sha }} ${merged_commit} | awk '/\.md$/ {print $NF}')"
        if [ -n "$changed_files" ]; then
          for changed_file in $changed_files; do
-           echo $changed_file
+           # echo $changed_file
            url_lines=$(grep -H -Eo '\]\(http[s]?://[^)]+\)' "$changed_file" | grep -Ev 'GenAIExamples/blob/main') || true
            if [ -n "$url_lines" ]; then
              for url_line in $url_lines; do
-               echo $url_line
+               # echo $url_line
                url=$(echo "$url_line"|cut -d '(' -f2 | cut -d ')' -f1|sed 's/\.git$//')
                path=$(echo "$url_line"|cut -d':' -f1 | cut -d'/' -f2-)
-               response=$(curl -L -s -o /dev/null -w "%{http_code}" "$url")
+               response=$(curl -L -s -o /dev/null -w "%{http_code}" "$url")|| true
                if [ "$response" -ne 200 ]; then
                  echo "**********Validation failed, try again**********"
                  response_retry=$(curl -s -o /dev/null -w "%{http_code}" "$url")
File renamed without changes.
File renamed without changes.
204 changes: 204 additions & 0 deletions ChatQnA/benchmark/performance/kubernetes/intel/gaudi/README.md
@@ -0,0 +1,204 @@
# ChatQnA Benchmarking

This folder contains a collection of Kubernetes manifest files for deploying the ChatQnA service across scalable nodes. It includes a comprehensive [benchmarking tool](https://github.com/opea-project/GenAIEval/blob/main/evals/benchmark/README.md) that enables throughput analysis to assess inference performance.

By following this guide, you can run benchmarks on your deployment and share the results with the OPEA community.

## Purpose

We aim to run these benchmarks and share them with the OPEA community for three primary reasons:

- To offer insights on inference throughput in real-world scenarios, helping you choose the best service or deployment for your needs.
- To establish a baseline for validating optimization solutions across different implementations, providing clear guidance on which methods are most effective for your use case.
- To inspire the community to build upon our benchmarks, allowing us to better quantify new solutions in conjunction with current leading LLMs, serving frameworks, and other components.

## Metrics

The benchmark reports the following metrics:

- Number of Concurrent Requests
- End-to-End Latency: P50, P90, P99 (in milliseconds)
- End-to-End First Token Latency: P50, P90, P99 (in milliseconds)
- Average Next Token Latency (in milliseconds)
- Average Token Latency (in milliseconds)
- Requests Per Second (RPS)
- Output Tokens Per Second
- Input Tokens Per Second

Results are displayed in the terminal and saved as a CSV file named `1_stats.csv` for easy export to spreadsheets.

## Table of Contents

- [Deployment](#deployment)
  - [Prerequisites](#prerequisites)
  - [Deployment Scenarios](#deployment-scenarios)
    - [Case 1: Baseline Deployment with Rerank](#case-1-baseline-deployment-with-rerank)
    - [Case 2: Baseline Deployment without Rerank](#case-2-baseline-deployment-without-rerank)
    - [Case 3: Tuned Deployment with Rerank](#case-3-tuned-deployment-with-rerank)
- [Benchmark](#benchmark)
  - [Test Configurations](#test-configurations)
  - [Test Steps](#test-steps)
    - [Upload Retrieval File](#upload-retrieval-file)
    - [Run Benchmark Test](#run-benchmark-test)
    - [Data Collection](#data-collection)
- [Teardown](#teardown)

## Deployment

### Prerequisites

- Kubernetes installation: Use [kubespray](https://github.com/opea-project/docs/blob/main/guide/installation/k8s_install/k8s_install_kubespray.md) or other official Kubernetes installation guides.
- Helm installation: Follow the [Helm documentation](https://helm.sh/docs/intro/install/#helm) to install Helm.
- Set Up Hugging Face Token

To access models and APIs from Hugging Face, set your token as an environment variable.
```bash
export HF_TOKEN="insert-your-huggingface-token-here"
```
- Prepare Shared Models (Optional but Strongly Recommended)

Downloading models simultaneously to multiple nodes in your cluster can overload resources such as network bandwidth, memory, and storage. To prevent resource exhaustion, it is recommended to preload the models in advance.
```bash
pip install -U "huggingface_hub[cli]"
sudo mkdir -p /mnt/models
sudo chmod 777 /mnt/models
huggingface-cli download --cache-dir /mnt/models Intel/neural-chat-7b-v3-3
export MODEL_DIR=/mnt/models
```
Once the models are downloaded, you can consider the following methods for sharing them across nodes:
- Persistent Volume Claim (PVC): This is the recommended approach for production setups. For more details on using PVC, refer to [PVC](https://github.com/opea-project/GenAIInfra/blob/main/helm-charts/README.md#using-persistent-volume).
- Local Host Path: For simpler testing, ensure that each node involved in the deployment follows the steps above to locally prepare the models. After preparing the models, use `--set global.modelUseHostPath=${MODEL_DIR}` in the deployment command.

- Add OPEA Helm Repository:
```bash
python deploy.py --add-repo
```
- Label Nodes
```bash
python deploy.py --add-label --num-nodes 2
```

### Deployment Scenarios

The examples below are based on a two-node setup. You can adjust the number of nodes with the `--num-nodes` option.

By default, these commands use the `default` namespace. To specify a different namespace, use the `--namespace` flag with the deploy, uninstall, and Kubernetes commands, and update the `namespace` field in `benchmark.yaml` before running the benchmark test.

For additional configuration options, run `python deploy.py --help`.
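For instance, a deployment into a dedicated namespace might look like the following sketch (the namespace name `benchmark` is illustrative; the remaining flags are those used in Case 1 below):

```bash
python deploy.py --namespace benchmark --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --with-rerank
```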

#### Case 1: Baseline Deployment with Rerank

Deploy Command (with the node number, Hugging Face token, and model directory specified):
```bash
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --with-rerank
```
Uninstall Command:
```bash
python deploy.py --uninstall
```

#### Case 2: Baseline Deployment without Rerank

```bash
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2
```
#### Case 3: Tuned Deployment with Rerank

```bash
python deploy.py --hf-token $HF_TOKEN --model-dir $MODEL_DIR --num-nodes 2 --with-rerank --tuned
```

## Benchmark

### Test Configurations

| Key | Value |
| -------- | ------- |
| Workload | ChatQnA |
| Tag | V1.1 |

Model configuration:
| Key | Value |
| ---------- | ------------------ |
| Embedding | BAAI/bge-base-en-v1.5 |
| Reranking | BAAI/bge-reranker-base |
| Inference | Intel/neural-chat-7b-v3-3 |

Benchmark parameters:
| Key | Value |
| ---------- | ------------------ |
| LLM input tokens | 1024 |
| LLM output tokens | 128 |

Number of test requests for different scheduled node counts:
| Node count | Concurrency | Query number |
| ----- | -------- | -------- |
| 1 | 128 | 640 |
| 2 | 256 | 1280 |
| 4 | 512 | 2560 |

More detailed configuration can be found in the configuration file [benchmark.yaml](./benchmark.yaml).

### Test Steps

Use `kubectl get pods` to confirm that all pods are `READY` before starting the test.
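One way to gate on readiness is sketched below (an assumption: all ChatQnA pods run in the current namespace; adjust with `-n <namespace>` if you deployed elsewhere):

```bash
# List pods, then block until every pod reports Ready (up to 10 minutes)
kubectl get pods
kubectl wait --for=condition=Ready pods --all --timeout=600s
```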

#### Upload Retrieval File

Before testing, upload the specified file to ensure the LLM input has a token length of 1K.

Download the files (using the raw-content URLs, so that `wget` fetches the text files themselves rather than GitHub HTML pages):

```bash
wget https://raw.githubusercontent.com/opea-project/GenAIEval/main/evals/benchmark/data/upload_file_no_rerank.txt
wget https://raw.githubusercontent.com/opea-project/GenAIEval/main/evals/benchmark/data/upload_file.txt
```

Retrieve the `ClusterIP` of the `chatqna-data-prep` service.

```bash
kubectl get svc
```
Expected output:
```log
chatqna-data-prep ClusterIP xx.xx.xx.xx <none> 6007/TCP 51m
```
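If you prefer to capture the address programmatically, one option is the sketch below (it assumes the service name `chatqna-data-prep` shown above):

```bash
# Store the ClusterIP in the variable used by the upload commands below
cluster_ip=$(kubectl get svc chatqna-data-prep -o jsonpath='{.spec.clusterIP}')
echo ${cluster_ip}
```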

Use the following `curl` commands to upload the files, with `${cluster_ip}` set to the `ClusterIP` retrieved above:

```bash
cd GenAIEval/evals/benchmark/data
# RAG with Rerank
curl -X POST "http://${cluster_ip}:6007/v1/dataprep" \
-H "Content-Type: multipart/form-data" \
-F "files=@./upload_file.txt"
# RAG without Rerank
curl -X POST "http://${cluster_ip}:6007/v1/dataprep" \
-H "Content-Type: multipart/form-data" \
-F "files=@./upload_file_no_rerank.txt"
```

#### Run Benchmark Test

Run the benchmark test using:
```bash
bash benchmark.sh -n 2
```
The `-n` argument specifies the number of test nodes. Required dependencies will be automatically installed when running the benchmark for the first time.
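Since `benchmark.sh` (shown later in this commit) reads `USER_QUERIES` and `TEST_OUTPUT_DIR` from the environment, you can also pin these yourself before running; the values below are illustrative:

```bash
# Optional overrides honored by benchmark.sh
export USER_QUERIES="[1280, 1280, 1280, 1280]"
export TEST_OUTPUT_DIR="$(pwd)/GenAIEval/evals/benchmark/benchmark_output/node_2"
bash benchmark.sh -n 2
```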

#### Data Collection

All test results are written to the folder `GenAIEval/evals/benchmark/benchmark_output`.
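For example, after a two-node k8s run the results, including the `1_stats.csv` summary mentioned above, land in a per-node-count subfolder:

```bash
ls GenAIEval/evals/benchmark/benchmark_output/node_2
```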

## Teardown

After completing the benchmark, use the following commands to clean up the environment:

Remove Node Labels:
```bash
python deploy.py --delete-label
```
Delete the OPEA Helm Repository:
```bash
python deploy.py --delete-repo
```
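Putting these together with the uninstall command from Case 1, a complete cleanup might look like:

```bash
python deploy.py --uninstall      # remove the ChatQnA deployment
python deploy.py --delete-label   # remove the node labels
python deploy.py --delete-repo    # remove the OPEA Helm repository
```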
99 changes: 99 additions & 0 deletions ChatQnA/benchmark/performance/kubernetes/intel/gaudi/benchmark.sh
@@ -0,0 +1,99 @@
#!/bin/bash

# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

deployment_type="k8s"
node_number=1
service_port=""   # default applied later, and only for docker deployments
query_per_node=640

benchmark_tool_path="$(pwd)/GenAIEval"

usage() {
  echo "Usage: $0 [-d deployment_type] [-n node_number] [-i service_ip] [-p service_port]"
  echo "  -d deployment_type    ChatQnA deployment type, select between k8s and docker (default: k8s)"
  echo "  -n node_number        Test node number, required only for k8s deployment_type (default: 1)"
  echo "  -i service_ip         chatqna service ip, required only for docker deployment_type"
  echo "  -p service_port       chatqna service port, required only for docker deployment_type (default: 8888)"
  exit 1
}

while getopts ":d:n:i:p:" opt; do
  case ${opt} in
    d )
      deployment_type=$OPTARG
      ;;
    n )
      node_number=$OPTARG
      ;;
    i )
      service_ip=$OPTARG
      ;;
    p )
      service_port=$OPTARG
      ;;
    \? )
      echo "Invalid option: -$OPTARG" 1>&2
      usage
      ;;
    : )
      echo "Invalid option: -$OPTARG requires an argument" 1>&2
      usage
      ;;
  esac
done

if [[ "$deployment_type" == "docker" && -z "$service_ip" ]]; then
echo "Error: service_ip is required for docker deployment_type" 1>&2
usage
fi

if [[ "$deployment_type" == "k8s" && ( -n "$service_ip" || -n "$service_port" ) ]]; then
echo "Warning: service_ip and service_port are ignored for k8s deployment_type" 1>&2
fi

function main() {
  if [[ ! -d ${benchmark_tool_path} ]]; then
    echo "Benchmark tool not found, setting up..."
    setup_env
  fi
  run_benchmark
}

function setup_env() {
  git clone https://github.com/opea-project/GenAIEval.git
  pushd ${benchmark_tool_path}
  python3 -m venv stress_venv
  source stress_venv/bin/activate
  pip install -r requirements.txt
  popd
}

function run_benchmark() {
  source ${benchmark_tool_path}/stress_venv/bin/activate
  export DEPLOYMENT_TYPE=${deployment_type}
  export SERVICE_IP=${service_ip:-"None"}
  export SERVICE_PORT=${service_port:-"None"}
  if [[ -z $USER_QUERIES ]]; then
    user_query=$((query_per_node*node_number))
    export USER_QUERIES="[${user_query}, ${user_query}, ${user_query}, ${user_query}]"
    echo "USER_QUERIES not configured, setting to: ${USER_QUERIES}."
  fi
  # Warm-up count defaults to the first entry of USER_QUERIES:
  # strip the brackets, then drop everything after the first comma.
  export WARMUP=$(echo $USER_QUERIES | sed -e 's/[][]//g' -e 's/,.*//')
  if [[ -z $WARMUP ]]; then export WARMUP=0; fi
  if [[ -z $TEST_OUTPUT_DIR ]]; then
    if [[ $DEPLOYMENT_TYPE == "k8s" ]]; then
      export TEST_OUTPUT_DIR="${benchmark_tool_path}/evals/benchmark/benchmark_output/node_${node_number}"
    else
      export TEST_OUTPUT_DIR="${benchmark_tool_path}/evals/benchmark/benchmark_output/docker"
    fi
    echo "TEST_OUTPUT_DIR not configured, setting to: ${TEST_OUTPUT_DIR}."
  fi

  # Render benchmark.yaml with the exported variables and run the benchmark tool.
  envsubst < ./benchmark.yaml > ${benchmark_tool_path}/evals/benchmark/benchmark.yaml
  cd ${benchmark_tool_path}/evals/benchmark
  python benchmark.py
}

main