reformat agentQnA sample guide #219

Merged · 2 commits · Nov 14, 2024
47 changes: 47 additions & 0 deletions examples/AgentQnA/AgentQnA_Guide.rst
@@ -0,0 +1,47 @@
.. _AgentQnA_Guide:

AgentQnA Sample Guide
#####################

.. note:: This guide is in its early development and is a work-in-progress with
placeholder content.

Overview
********

This example showcases a hierarchical multi-agent system for question-answering applications.

Purpose
*******
* Improve the relevancy of retrieved context. The agent can rephrase user queries, decompose them into sub-questions, and iterate until it gathers the most relevant context for answering the user's question. Compared to conventional RAG, a RAG agent can significantly improve the correctness and relevancy of the answer.
* Use tools to gather additional knowledge. For example, knowledge graphs and SQL databases can be exposed as APIs for agents to retrieve knowledge that may be missing from the retrieval vector database.
* A hierarchical design can further improve performance. Expert worker agents, such as a retrieval agent, a knowledge graph agent, or a SQL agent, provide high-quality output for different aspects of a complex query, and the supervisor agent aggregates their information into a comprehensive answer.
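The rephrase-and-iterate behavior described in the first bullet can be sketched as a simple loop. The `retrieve`, `rephrase`, and `is_sufficient` callables below are hypothetical stand-ins for the retriever and LLM calls, not interfaces from this repository:

```python
def agentic_retrieve(query, retrieve, rephrase, is_sufficient, max_iters=3):
    """Iteratively rephrase a query until the retrieved context is sufficient.

    `retrieve`, `rephrase`, and `is_sufficient` are placeholder callables
    standing in for the retriever and LLM; they are not OPEA APIs.
    """
    current = query
    gathered = []
    for _ in range(max_iters):
        gathered.extend(retrieve(current))          # fetch context for the current phrasing
        if is_sufficient(query, gathered):          # LLM judges whether context answers the query
            break
        current = rephrase(query, gathered)         # ask the LLM for a better query
    return gathered
```

Conventional RAG performs the body of this loop exactly once with the user's original phrasing; the agent's advantage comes from the rephrase-and-retry iterations.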

How It Works
************

The supervisor agent interfaces with the user and dispatches tasks to the worker agent and other tools to gather information and formulate answers.
The worker agent uses the retrieval tool to generate answers to the queries posted by the supervisor agent.


.. mermaid::

graph LR;
U[User]-->SA[Supervisor Agent];
SA-->WA[Worker Agent];
WA-->RT[Retrieval Tool];
SA-->T1[Tool 1];
SA-->T2[Tool 2];
SA-->TN[Tool N];
SA-->U;
WA-->SA;
RT-->WA;
T1-->SA;
T2-->SA;
TN-->SA;

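The flow in the diagram can be sketched as a minimal dispatch loop. The class and tool names here are illustrative placeholders, not the actual OPEA microservice interfaces:

```python
class WorkerAgent:
    """Placeholder worker agent that answers via a retrieval tool (RT --> WA)."""

    def __init__(self, retrieval_tool):
        self.retrieval_tool = retrieval_tool

    def answer(self, query):
        context = self.retrieval_tool(query)
        return f"answer({query}) using {context}"


class SupervisorAgent:
    """Placeholder supervisor that dispatches to the worker and Tools 1..N."""

    def __init__(self, worker, tools):
        self.worker = worker
        self.tools = tools

    def handle(self, user_query):
        partial = self.worker.answer(user_query)            # SA --> WA --> SA
        extras = [tool(user_query) for tool in self.tools]  # SA --> T1..TN --> SA
        return partial, extras                              # SA --> U
```

In the real deployment each arrow is a service call between containers; this sketch only shows the control flow.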

Deployment
**********

See the :ref:`agentqna-example-deployment`.
14 changes: 14 additions & 0 deletions examples/AgentQnA/deploy/index.rst
@@ -0,0 +1,14 @@
.. _agentqna-example-deployment:

AgentQnA Example Deployment Options
###################################

Here are some deployment options, depending on your hardware and environment:

Single Node
***********

.. toctree::
:maxdepth: 1

Xeon Scalable Processor <xeon>
37 changes: 37 additions & 0 deletions examples/AgentQnA/deploy/xeon.md
@@ -0,0 +1,37 @@
# Single-node on-prem deployment with Docker Compose on Intel Xeon Scalable processors

1. [Optional] Build the `Agent` docker image

```bash
git clone https://github.com/opea-project/GenAIComps.git
cd GenAIComps
docker build -t opea/agent-langchain:latest -f comps/agent/langchain/Dockerfile .
```

2. Launch the tool service

In this example, we use some of the mock APIs provided in the Meta CRAG KDD Challenge to demonstrate the benefit of gathering additional context from mock knowledge graphs.

```bash
docker run -d -p 8080:8000 docker.io/aicrowd/kdd-cup-24-crag-mock-api:v0
```
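Once the container is up, you can confirm the mock API is reachable on the published port. This helper is a generic HTTP connectivity check, not part of the CRAG mock API's documented interface:

```python
import urllib.request
import urllib.error

def is_reachable(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server responds at `url` within `timeout`."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True   # the server responded, even if with an error status
    except Exception:
        return False  # connection refused, timeout, DNS failure, ...

# e.g. is_reachable("http://localhost:8080") after the docker run above
```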

3. Clone the repo

```bash
export WORKDIR=$(pwd)
git clone https://github.com/opea-project/GenAIExamples.git
export TOOLSET_PATH=$WORKDIR/GenAIExamples/AgentQnA/tools/
# optional: set OPENAI_API_KEY
export OPENAI_API_KEY=<your-openai-key>
```

4. Launch the `Agent` service

The configurations of the supervisor agent and the worker agent are defined in the docker-compose YAML file. This example currently uses OpenAI GPT-4o-mini as the LLM; support for llama3.1-70B-instruct (served by TGI-Gaudi) is planned for a subsequent release. To use the OpenAI LLM, run the commands below.

```bash
cd $WORKDIR/GenAIExamples/AgentQnA/docker_compose/intel/cpu/xeon
bash launch_agent_service_openai.sh
```
2 changes: 2 additions & 0 deletions examples/index.rst
@@ -10,6 +10,8 @@ GenAIExamples are designed to give developers an easy entry into generative AI,

ChatQnA/ChatQnA_Guide
ChatQnA/deploy/index
AgentQnA/AgentQnA_Guide
AgentQnA/deploy/index

----
