Doc: Fix broken links (#439)
Signed-off-by: Lianhao Lu <[email protected]>
lianhao authored Sep 19, 2024
1 parent b224b65 commit 032ddbc
Showing 6 changed files with 5 additions and 8 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -49,7 +49,7 @@ The following steps are optional. They're only required if you want to run the w

Follow [GMC README](https://github.com/opea-project/GenAIInfra/blob/main/microservices-connector/README.md)
to install GMC into your kubernetes cluster. [GenAIExamples](https://github.com/opea-project/GenAIExamples) contains several sample GenAI example use case pipelines such as ChatQnA, DocSum, etc.
-Once you have deployed GMC in your Kubernetes cluster, you can deploy any of the example pipelines by following its Readme file (e.g. [Docsum](https://github.com/opea-project/GenAIExamples/blob/main/DocSum/kubernetes/README.md)).
+Once you have deployed GMC in your Kubernetes cluster, you can deploy any of the example pipelines by following its Readme file (e.g. [Docsum](https://github.com/opea-project/GenAIExamples/blob/main/DocSum/kubernetes/intel/README_gmc.md)).

### Use helm charts to deploy

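For orientation, a minimal sketch of the helm-based path this hunk points to (chart path, release name, and value override are illustrative, assuming the GenAIInfra helm-charts layout seen in this diff):

```bash
# Fetch the charts and install one service chart with a cached-model override (illustrative)
git clone https://github.com/opea-project/GenAIInfra.git
cd GenAIInfra/helm-charts/common
helm install tei ./tei --set global.modelUseHostPath=/mnt/opea-models
```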
2 changes: 1 addition & 1 deletion helm-charts/common/tei/README.md
@@ -41,4 +41,4 @@ curl http://localhost:2081/embed -X POST -d '{"inputs":"What is Deep Learning?"}
| global.modelUseHostPath | string | `"/mnt/opea-models"` | Cached models directory, tei will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to null/empty will force it to download model. |
| image.repository | string | `"ghcr.io/huggingface/text-embeddings-inference"` | |
| image.tag | string | `"cpu-1.5"` | |
-| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See HPA section in ../../README.md before enabling! |
+| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See [HPA section](../../HPA.md) before enabling! |
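As a worked example of the values in this table, overriding the model cache path and enabling HPA at install time might look like this (release name is illustrative; read the linked HPA doc before enabling):

```bash
# Install tei with the table's values overridden on the command line (illustrative release name)
helm install tei ./tei \
  --set global.modelUseHostPath=/mnt/opea-models \
  --set horizontalPodAutoscaler.enabled=true
```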
2 changes: 1 addition & 1 deletion helm-charts/common/teirerank/README.md
@@ -44,4 +44,4 @@ curl http://localhost:2082/rerank \
| global.modelUseHostPath | string | `"/mnt/opea-models"` | Cached models directory, teirerank will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to null/empty will force it to download model. |
| image.repository | string | `"ghcr.io/huggingface/text-embeddings-inference"` | |
| image.tag | string | `"cpu-1.5"` | |
-| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See HPA section in ../../README.md before enabling! |
+| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See [HPA section](../../HPA.md) before enabling! |
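The same overrides can equally live in a values file; a sketch, with file and release names illustrative and keys mirroring the table above:

```bash
# Write overrides to a file and pass it to helm (illustrative file and release names)
cat > teirerank-values.yaml <<'EOF'
global:
  modelUseHostPath: "/mnt/opea-models"
horizontalPodAutoscaler:
  enabled: true
EOF
helm install teirerank ./teirerank -f teirerank-values.yaml
```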
2 changes: 1 addition & 1 deletion helm-charts/common/tgi/README.md
@@ -48,4 +48,4 @@ curl http://localhost:2080/generate \
| global.modelUseHostPath | string | `"/mnt/opea-models"` | Cached models directory, tgi will not download if the model is cached here. The host path "modelUseHostPath" will be mounted to container as /data directory. Set this to null/empty will force it to download model. |
| image.repository | string | `"ghcr.io/huggingface/text-generation-inference"` | |
| image.tag | string | `"1.4"` | |
-| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See HPA section in ../../README.md before enabling! |
+| horizontalPodAutoscaler.enabled | bool | false | Enable HPA autoscaling for the service deployment based on metrics it provides. See [HPA section](../../HPA.md) before enabling! |
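The hunk header above truncates the chart's verification command; a typical text-generation-inference generate request, assuming the service is reachable on port 2080 as in that header (payload values illustrative), is:

```bash
# Issue a generate request against the tgi service (payload values illustrative)
curl http://localhost:2080/generate -X POST \
  -H 'Content-Type: application/json' \
  -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":17}}'
```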
2 changes: 1 addition & 1 deletion helm-charts/common/vllm/README.md
@@ -2,7 +2,7 @@

Helm chart for deploying vLLM Inference service.

-Refer to [Deploy with Helm Charts](../README.md) for global guides.
+Refer to [Deploy with Helm Charts](../../README.md) for global guides.

## Installing the Chart

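Once the vLLM chart is installed, the service exposes an OpenAI-compatible API; a minimal smoke test might look like this (port, service name, and model id are illustrative):

```bash
# Port-forward the vllm service, then query the OpenAI-compatible completions endpoint
kubectl port-forward svc/vllm 2080:80 &
curl http://localhost:2080/v1/completions -X POST \
  -H 'Content-Type: application/json' \
  -d '{"model":"Intel/neural-chat-7b-v3-3","prompt":"What is Deep Learning?","max_tokens":32}'
```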
3 changes: 0 additions & 3 deletions microservices-connector/config/samples/ChatQnA/use_cases.md
@@ -28,9 +28,6 @@ For Gaudi:
- tei-embedding-service: opea/tei-gaudi:latest
- tgi-service: ghcr.io/huggingface/tgi-gaudi:1.2.1

-> [NOTE]
-> Refer to [Xeon README](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker/xeon/README.md) or [Gaudi README](https://github.com/opea-project/GenAIExamples/blob/main/ChatQnA/docker/gaudi/README.md) to build the OPEA images. These too will be available on Docker Hub soon to simplify use.
## Deploy ChatQnA pipeline

There are 3 use cases for ChatQnA example:
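Deploying one of these use cases with GMC generally comes down to applying the corresponding sample custom resource from the samples directory this diff touches (namespace and file name are illustrative):

```bash
# Create a namespace and apply a ChatQnA sample CR (file name illustrative)
kubectl create ns chatqa
kubectl apply -f microservices-connector/config/samples/ChatQnA/chatQnA_xeon.yaml
```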
