add link to getting started across chatqna, different hardware modes (#191)

* add link to getting started

Signed-off-by: devpramod <[email protected]>

* update getting started message

Signed-off-by: devpramod <[email protected]>

---------

Signed-off-by: devpramod <[email protected]>
devpramod authored Oct 18, 2024
1 parent 9cf819c commit 8b10a81
Showing 4 changed files with 7 additions and 15 deletions.
5 changes: 1 addition & 4 deletions examples/ChatQnA/deploy/aipc.md
@@ -5,10 +5,7 @@ example with OPEA comps to deploy using Ollama. There are several
 slice-n-dice ways to enable RAG with vectordb and LLM models, but here we will
 be covering one option of doing it for convenience : we will be showcasing how
 to build an e2e chatQnA with Redis VectorDB and the llama-3 model,
-deployed on the client CPU. For more information on how to setup IDC instance to proceed,
-Please follow the instructions here (*** getting started section***). If you do
-not have an IDC instance you can skip the step and make sure that all the
-(***system level validation***) metrics are addressed such as docker versions.
+deployed on the client CPU.
 
 ## Overview
 
 There are several ways to setup a ChatQnA use case. Here in this tutorial, we
7 changes: 3 additions & 4 deletions examples/ChatQnA/deploy/gaudi.md
@@ -5,10 +5,9 @@ example with OPEA comps to deploy using vLLM or TGI service. There are several
 slice-n-dice ways to enable RAG with vectordb and LLM models, but here we will
 be covering one option of doing it for convenience : we will be showcasing how
 to build an e2e chatQnA with Redis VectorDB and neural-chat-7b-v3-3 model,
-deployed on Intel® Tiber™ Developer Cloud (ITDC). For more information on how to setup ITDC instance to proceed,
-Please follow the instructions here (*** getting started section***). If you do
-not have an ITDC instance or the hardware is not supported in the ITDC yet, you can still run this on-prem. To run this on-prem, make sure that all the
-(***system level requriements***) are addressed such as docker versions, driver version etc.
+deployed on Intel® Tiber™ Developer Cloud (ITDC). To quickly learn about OPEA in just 5 minutes and set up the required hardware and software, please follow the instructions in the
+[Getting Started](https://opea-project.github.io/latest/getting-started/README.html) section. If you do
+not have an ITDC instance or the hardware is not supported in the ITDC yet, you can still run this on-prem.
 
 ## Overview
 
4 changes: 1 addition & 3 deletions examples/ChatQnA/deploy/nvidia.md
@@ -5,9 +5,7 @@ example with OPEA comps to deploy using TGI service. There are several
 slice-n-dice ways to enable RAG with vectordb and LLM models, but here we will
 be covering one option of doing it for convenience : we will be showcasing how
 to build an e2e chatQnA with Redis VectorDB and neural-chat-7b-v3-3 model,
-deployed on on-prem. If you do not have an IDC instance you can skip
-the step and make sure that all the (***system level validation***) metrics are addressed such as docker versions.
-
+deployed on on-prem.
 ## Overview
 
 There are several ways to setup a ChatQnA use case. Here in this tutorial, we
6 changes: 2 additions & 4 deletions examples/ChatQnA/deploy/xeon.md
@@ -5,10 +5,8 @@ example with OPEA comps to deploy using vLLM or TGI service. There are several
 slice-n-dice ways to enable RAG with vectordb and LLM models, but here we will
 be covering one option of doing it for convenience : we will be showcasing how
 to build an e2e chatQnA with Redis VectorDB and neural-chat-7b-v3-3 model,
-deployed on IDC. For more information on how to setup IDC instance to proceed,
-Please follow the instructions here (*** getting started section***). If you do
-not have an IDC instance you can skip the step and make sure that all the
-(***system level validation***) metrics are addressed such as docker versions.
+deployed on IDC. To quickly learn about OPEA in just 5 minutes and set up the required hardware and software, please follow the instructions in the
+[Getting Started](https://opea-project.github.io/latest/getting-started/README.html) section.
 
 ## Overview
 
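The passages removed in this commit told readers to confirm "system level requirements" such as Docker and driver versions before an on-prem run. As a hedged illustration only (the helper name and the 24.0.0 minimum are assumptions for this sketch, not values from the commit or the OPEA docs), such a check might look like:

```shell
# Sketch of the prerequisite check the removed text alluded to. The minimum
# version below is an assumed placeholder; consult the Getting Started guide
# for the actual requirements.
version_ge() {
  # Succeeds when version $1 >= version $2 (relies on GNU coreutils `sort -V`).
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

required="24.0.0"  # assumed minimum, not taken from the docs
have="$(docker version --format '{{.Server.Version}}' 2>/dev/null || echo 0)"
if version_ge "$have" "$required"; then
  echo "Docker $have meets the assumed minimum $required"
else
  echo "Docker $have is below the assumed minimum $required" >&2
fi
```

If Docker is absent, `have` falls back to `0` and the check reports a failure rather than aborting the script.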