From 7c2b0ac678d9cc5c6bfaf03eb011d4dbaf732553 Mon Sep 17 00:00:00 2001
From: CharleneHu-42
Date: Tue, 22 Oct 2024 16:31:19 +0800
Subject: [PATCH] update new examples to main README

Signed-off-by: CharleneHu-42
---
 README.md             | 30 ++++++++++++++++++++++--------
 supported_examples.md | 36 +++++++++++++++++++++++++++---------
 2 files changed, 49 insertions(+), 17 deletions(-)

diff --git a/README.md b/README.md
index 97922076c..fc0f08afa 100644
--- a/README.md
+++ b/README.md
@@ -18,14 +18,20 @@ GenAIExamples are designed to give developers an easy entry into generative AI,
 [GenAIEval](https://github.com/opea-project/GenAIEval) measures service performance metrics such as throughput, latency, and accuracy for GenAIExamples. This feature helps users compare performance across various hardware configurations easily.
 
 ## Use Cases
-| Scenario | Use Case | |
-| ------------------ | ---------------------------------------------------------------------- |--|
-| Question Answering | [ChatQnA](https://github.com/opea-project/GenAIExamples/tree/main/ChatQnA) ⭐⭐⭐⭐⭐ <br> Chatbot with Retrieval Augmented Generation (RAG). <br> <br> [VisualQnA](https://github.com/opea-project/GenAIExamples/tree/main/VisualQnA) ⭐⭐⭐⭐⭐ <br> Visual Question-answering. | [More >](https://github.com/opea-project/GenAIExamples?tab=readme-ov-file#deploy-examples) |
-| Content Summarization | [DocSum](https://github.com/opea-project/GenAIExamples/tree/main/DocSum) <br> Document Summarization Application | [More >](https://github.com/opea-project/GenAIExamples?tab=readme-ov-file#deploy-examples) |
-| FAQ Generation | [FaqGen](https://github.com/opea-project/GenAIExamples/tree/main/FaqGen) <br> Frequently asked questions (FAQs) generation from your documents, legal texts, customer queries etc. | [More >](https://github.com/opea-project/GenAIExamples?tab=readme-ov-file#deploy-examples) |
-| Code Generation | [CodeGen](https://github.com/opea-project/GenAIExamples/tree/main/CodeGen) <br> Gen-AI Powered Code Generator | [More >](https://github.com/opea-project/GenAIExamples?tab=readme-ov-file#deploy-examples) |
-| Information Retrieval | [DocIndexRetriever](https://github.com/opea-project/GenAIExamples/tree/main/DocIndexRetriever) <br> Document Retrieval with Retrieval Augmented Generation (RAG) | [More >](https://github.com/opea-project/GenAIExamples?tab=readme-ov-file#deploy-examples) |
-| Fine-tuning | [InstructionTuning](https://github.com/opea-project/GenAIExamples/tree/main/InstructionTuning) <br> Application of Instruction Tuning. | [More >](https://github.com/opea-project/GenAIExamples?tab=readme-ov-file#deploy-examples) |
+
+Below are some highlighted GenAI use cases across various application scenarios:
+
+| Scenario | Use Case |
+| ------------------ | ---------------------------------------------------------------------- |
+| Question Answering | [ChatQnA](ChatQnA) ✨: Chatbot with Retrieval Augmented Generation (RAG). <br> [VisualQnA](VisualQnA) ✨: Visual Question-answering. |
+| Image Generation | [Text2Image](Text2Image) ✨: Text-to-image generation. |
+| Content Summarization | [DocSum](DocSum): Document Summarization Application. |
+| FAQ Generation | [FaqGen](FaqGen): Frequently asked questions (FAQs) generation from your documents, legal texts, customer queries, etc. |
+| Code Generation | [CodeGen](CodeGen): Gen-AI Powered Code Generator. |
+| Information Retrieval | [DocIndexRetriever](DocIndexRetriever): Document Retrieval with Retrieval Augmented Generation (RAG). |
+| Fine-tuning | [InstructionTuning](InstructionTuning): Application of Instruction Tuning. |
+
+For the full list of available use cases and their supported deployment types, please refer to the table [here](#deploy-examples).
 
 ## Getting Started
 
@@ -59,6 +65,14 @@ Deployment are based on released docker images by default, check [docker image l
 | AudioQnA | [Xeon Instructions](AudioQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](AudioQnA/docker_compose/intel/hpu/gaudi/README.md) | [AudioQnA with Manifests](AudioQnA/kubernetes/intel/README.md) | Not Supported | [AudioQnA with GMC](AudioQnA/kubernetes/intel/README_gmc.md) |
 | VisualQnA | [Xeon Instructions](VisualQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](VisualQnA/docker_compose/intel/hpu/gaudi/README.md) | [VisualQnA with Manifests](VisualQnA/kubernetes/intel/README.md) | Not Supported | [VisualQnA with GMC](VisualQnA/kubernetes/intel/README_gmc.md) |
 | ProductivitySuite | [Xeon Instructions](ProductivitySuite/docker_compose/intel/cpu/xeon/README.md) | Not Supported | [ProductivitySuite with Manifests](ProductivitySuite/kubernetes/intel/README.md) | Not Supported | Not Supported |
+| AgentQnA | [Xeon Instructions](AgentQnA/docker_compose/intel/cpu/xeon) | [Gaudi Instructions](AgentQnA/docker_compose/intel/hpu/gaudi) | Not Supported | Not Supported | Not Supported |
+| DocIndexRetriever | [Xeon Instructions](DocIndexRetriever/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](DocIndexRetriever/docker_compose/intel/hpu/gaudi/README.md) | Not Supported | Not Supported | Not Supported |
+| InstructionTuning | [Xeon Instructions](InstructionTuning/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](InstructionTuning/docker_compose/intel/hpu/gaudi/README.md) | Not Supported | Not Supported | Not Supported |
+| RerankFinetuning | [Xeon Instructions](RerankFinetuning/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](RerankFinetuning/docker_compose/intel/hpu/gaudi/README.md) | Not Supported | Not Supported | Not Supported |
+| VideoQnA | [Xeon Instructions](VideoQnA/docker_compose/intel/cpu/xeon/README.md) | Not Supported | Not Supported | Not Supported | Not Supported |
+| MultimodalQnA | [Xeon Instructions](MultimodalQnA/docker_compose/intel/cpu/xeon/README.md) | [Gaudi Instructions](MultimodalQnA/docker_compose/intel/hpu/gaudi/README.md) | Not Supported | Not Supported | Not Supported |
+| DBQnA | [Xeon Instructions](DBQnA/docker_compose/intel/cpu/xeon/README.md) | Not Supported | Not Supported | Not Supported | Not Supported |
+
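+Once an example's services are running, its gateway can be exercised over plain HTTP. The snippet below is only an illustrative sketch based on the ChatQnA example; the host, port (8888), route (`/v1/chatqna`), and payload are assumptions taken from ChatQnA's own README and will differ for other examples:
+
+```python
+import requests
+
+# Assumed ChatQnA megaservice endpoint from its Docker Compose deployment;
+# adjust host, port, and route to match your own deployment and example.
+host_ip = "localhost"
+url = f"http://{host_ip}:8888/v1/chatqna"
+
+payload = {"messages": "What is OPEA?"}
+response = requests.post(url, json=payload, timeout=300)
+response.raise_for_status()
+print(response.text)  # generated answer returned by the ChatQnA pipeline
+```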
 
 ## Supported Examples
 
diff --git a/supported_examples.md b/supported_examples.md
index e913fd22e..bc8aa2401 100644
--- a/supported_examples.md
+++ b/supported_examples.md
@@ -107,7 +107,7 @@ This document introduces the supported examples of GenAIExamples. The supported
 | LVM | HW | Description |
 | --------------------------------------------------------------------------------------------- | ------ | ----------- |
-| [llava-hf/llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) | Gaudi2 | Chatbot |
+| [llava-hf/llava-v1.6-mistral-7b-hf](https://huggingface.co/llava-hf/llava-v1.6-mistral-7b-hf) | Xeon/Gaudi2 | Chatbot |
 
 ### VideoQnA
 
@@ -122,27 +122,27 @@ By default, the embedding and LVM models are set to a default value as listed be
 
 ### RerankFinetuning
 
-Rerank model finetuning example is for training rerank model on a dataset for improving its capability on specific field.
+The [Rerank model finetuning](./RerankFinetuning/README.md) example trains a rerank model on a dataset to improve its capability in a specific field.
 
 By default, the base model is set to a default value as listed below:
 
 | Service | Base Model | HW | Description |
 | ----------------- | ------------------------------------------------------------------------- | ---- | ------------------------------- |
-| Rerank Finetuning | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Xeon | Rerank model finetuning service |
+| Rerank Finetuning | [BAAI/bge-reranker-large](https://huggingface.co/BAAI/bge-reranker-large) | Xeon/Gaudi2 | Rerank model finetuning service |
 
 ### InstructionTuning
 
-The Instruction Tuning example is designed to further train large language models (LLMs) on a dataset consisting of (instruction, output) pairs using supervised learning. This process bridges the gap between the LLM's original objective of next-word prediction and the user’s objective of having the model follow human instructions accurately. By leveraging Instruction Tuning, this example enhances the LLM's ability to better understand and execute specific tasks, improving the model's alignment with user instructions and its overall performance.
+The [Instruction Tuning](./InstructionTuning/README.md) example is designed to further train large language models (LLMs) on a dataset consisting of (instruction, output) pairs using supervised learning. This process bridges the gap between the LLM's original objective of next-word prediction and the user’s objective of having the model follow human instructions accurately. By leveraging Instruction Tuning, this example enhances the LLM's ability to better understand and execute specific tasks, improving the model's alignment with user instructions and its overall performance.
 
 By default, the base model is set to a default value as listed below:
 
 | Service | Base Model | HW | Description |
 | ----------------- | ------------------------------------------------------------------------------------- | ---------- | ------------------------------------ |
-| InstructionTuning | [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) | Xeon/Gaudi | LLM model Instruction Tuning service |
+| InstructionTuning | [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) | Xeon/Gaudi2 | LLM model Instruction Tuning service |
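+To make the (instruction, output) format concrete, here is a small illustrative sketch of how such a pair can be rendered into a single supervised training text. The template is a generic, Alpaca-style illustration and is not necessarily the exact prompt format used by this example:
+
+```python
+# Illustrative only: turn one (instruction, output) pair into a supervised
+# training sample; the example's actual template may differ.
+def build_training_text(instruction: str, output: str) -> str:
+    return (
+        "Below is an instruction that describes a task. "
+        "Write a response that appropriately completes the request.\n\n"
+        f"### Instruction:\n{instruction}\n\n"
+        f"### Response:\n{output}"
+    )
+
+sample = {
+    "instruction": "Summarize the main benefit of instruction tuning.",
+    "output": "It aligns a next-word-prediction LLM with human instructions.",
+}
+print(build_training_text(sample["instruction"], sample["output"]))
+```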
 
 ### DocIndexRetriever
 
-The DocRetriever example demonstrates how to match user queries with free-text records using various retrieval methods. It plays a key role in Retrieval-Augmented Generation (RAG) systems by dynamically fetching relevant information from external sources, ensuring responses are factual and up-to-date. Powered by vector databases, DocRetriever enables efficient, semantic retrieval by storing data as vectors and quickly identifying the most relevant documents based on similarity.
+The [DocRetriever](./DocIndexRetriever/README.md) example demonstrates how to match user queries with free-text records using various retrieval methods. It plays a key role in Retrieval-Augmented Generation (RAG) systems by dynamically fetching relevant information from external sources, ensuring responses are factual and up-to-date. Powered by vector databases, DocRetriever enables efficient, semantic retrieval by storing data as vectors and quickly identifying the most relevant documents based on similarity.
 
 | Framework | Embedding | Vector Database | Serving | HW | Description |
 | ------------------------------------------------------------------------------ | --------------------------------------------------- | -------------------------- | --------------------------------------------------------------- | ----------- | -------------------------- |
@@ -150,7 +150,7 @@ The DocRetriever example demonstrates how to match user queries with free-text r
 
 ### AgentQnA
 
-The AgentQnA example demonstrates a hierarchical, multi-agent system designed for question-answering tasks. A supervisor agent interacts directly with the user, delegating tasks to a worker agent and utilizing various tools to gather information and generate answers. The worker agent primarily uses a retrieval tool to respond to the supervisor's queries. Additionally, the supervisor can access other tools, such as APIs to query knowledge graphs, SQL databases, or external knowledge bases, to enhance the accuracy and relevance of its responses.
+The [AgentQnA](./AgentQnA/README.md) example demonstrates a hierarchical, multi-agent system designed for question-answering tasks. A supervisor agent interacts directly with the user, delegating tasks to a worker agent and utilizing various tools to gather information and generate answers. The worker agent primarily uses a retrieval tool to respond to the supervisor's queries. Additionally, the supervisor can access other tools, such as APIs to query knowledge graphs, SQL databases, or external knowledge bases, to enhance the accuracy and relevance of its responses.
 
 Worker agent uses open-source websearch tool (duckduckgo), agents use OpenAI GPT-4o-mini as llm backend.
 
@@ -158,7 +158,7 @@ Worker agent uses open-source websearch tool (duckduckgo), agents use OpenAI GPT
 
 ### AudioQnA
 
-The AudioQnA example demonstrates the integration of Generative AI (GenAI) models for performing question-answering (QnA) on audio files, with the added functionality of Text-to-Speech (TTS) for generating spoken responses. The example showcases how to convert audio input to text using Automatic Speech Recognition (ASR), generate answers to user queries using a language model, and then convert those answers back to speech using Text-to-Speech (TTS).
+The [AudioQnA](./AudioQnA/README.md) example demonstrates the integration of Generative AI (GenAI) models for performing question-answering (QnA) on audio files, with the added functionality of Text-to-Speech (TTS) for generating spoken responses. The example showcases how to convert audio input to text using Automatic Speech Recognition (ASR), generate answers to user queries using a language model, and then convert those answers back to speech using Text-to-Speech (TTS).
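+For illustration only, the sketch below wires the same ASR -> LLM -> TTS flow with generic Hugging Face pipelines. The model choices and the local `question.wav` file are placeholders, not the microservices or models that AudioQnA actually deploys:
+
+```python
+from transformers import pipeline
+
+# Placeholder models; AudioQnA's deployed services use their own ASR/LLM/TTS stack.
+asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny.en")
+llm = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
+tts = pipeline("text-to-speech", model="suno/bark-small")
+
+question = asr("question.wav")["text"]                                  # speech -> text
+answer = llm(f"Answer briefly: {question}", max_new_tokens=64)[0]["generated_text"]  # text -> answer
+speech = tts(answer)                                                    # answer -> waveform + sampling rate
+print(question, "->", answer, "->", speech["sampling_rate"], "Hz audio")
+```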
@@ -179,7 +179,7 @@ The AudioQnA example demonstrates the integration of Generative AI (GenAI) model
 ### FaqGen
 
-FAQ Generation Application leverages the power of large language models (LLMs) to revolutionize the way you interact with and comprehend complex textual data. By harnessing cutting-edge natural language processing techniques, our application can automatically generate comprehensive and natural-sounding frequently asked questions (FAQs) from your documents, legal texts, customer queries, and other sources. In this example use case, we utilize LangChain to implement FAQ Generation and facilitate LLM inference using Text Generation Inference on Intel Xeon and Gaudi2 processors.
+The [FAQ Generation](./FaqGen/README.md) application leverages the power of large language models (LLMs) to revolutionize the way you interact with and comprehend complex textual data. By harnessing cutting-edge natural language processing techniques, our application can automatically generate comprehensive and natural-sounding frequently asked questions (FAQs) from your documents, legal texts, customer queries, and other sources. In this example use case, we utilize LangChain to implement FAQ Generation and facilitate LLM inference using Text Generation Inference on Intel Xeon and Gaudi2 processors.
 
 | Framework | LLM | Serving | HW | Description |
 | ------------------------------------------------------------------------------ | ----------------------------------------------------------------- | --------------------------------------------------------------- | ----------- | ----------- |
 | [LangChain](https://www.langchain.com)/[LlamaIndex](https://www.llamaindex.ai) | [Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) | [TGI](https://github.com/huggingface/text-generation-inference) | Xeon/Gaudi2 | Chatbot |
 
@@ -187,7 +187,25 @@ FAQ Generation Application leverages the power of large language models (LLMs) t
 ### MultimodalQnA
 
 [MultimodalQnA](./MultimodalQnA/README.md) addresses your questions by dynamically fetching the most pertinent multimodal information (frames, transcripts, and/or captions) from your collection of videos.
+| Framework | Embedding | LVM | Vector Database | HW | Description |
+|----------------------------------------|-------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------|----------------------------|--------|--------------------|
+| [LangChain](https://www.langchain.com) | [BridgeTower/bridgetower-large](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-gaudi) | [llava-hf/llava-1.5-7b-hf](https://huggingface.co/llava-hf/llava-1.5-7b-hf) | [Redis](https://redis.io/) | Xeon | Multimodal chatbot |
+| [LangChain](https://www.langchain.com) | [BridgeTower/bridgetower-large](https://huggingface.co/BridgeTower/bridgetower-large-itm-mlm-gaudi) | [llava-hf/llava-v1.6-vicuna-13b-hf](https://huggingface.co/llava-hf/llava-v1.6-vicuna-13b-hf) | [Redis](https://redis.io/) | Gaudi2 | Multimodal chatbot |
 
 ### ProductivitySuite
 
 [Productivity Suite](./ProductivitySuite/README.md) streamlines your workflow to boost productivity. It leverages the power of OPEA microservices to deliver a comprehensive suite of features tailored to meet the diverse needs of modern enterprises.
+
+### DBQnA
+
+[DBQnA](./DBQnA/README.md) converts your natural language query into an SQL query, automatically executes the generated query on the database, and delivers real-time query results.
+
+| Framework | LLM | Database | HW | Description |
+|----------------------------------------|-------------------------------------------------------------------------------------------------|-------------------------------------------|------|----------------------------|
+| [LangChain](https://www.langchain.com) | [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) | [PostgresDB](https://www.postgresql.org/) | Xeon | Natural language SQL agent |
+
+### Text2Image
+
+[Text2Image](./Text2Image/README.md) generates an image based on your provided text.
+
+| Framework | LDM | HW | Description |
+|----------------------------------------|--------------------------------------------------------------------------------------------------------|-------------|-------------|
+| [LangChain](https://www.langchain.com) | [stabilityai/stable-diffusion](https://huggingface.co/stabilityai/stable-diffusion-3-medium-diffusers) | Xeon/Gaudi2 | Text2Image |
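+For orientation only, the listed base model can also be exercised directly with Hugging Face `diffusers`; the sketch below is an assumption-laden stand-in (gated model download, default scheduler and precision), not the Text2Image microservice API itself:
+
+```python
+from diffusers import StableDiffusion3Pipeline
+
+# The model id matches the table above; it is gated, so accept its license on
+# Hugging Face and log in first. Move the pipeline to your accelerator as needed.
+pipe = StableDiffusion3Pipeline.from_pretrained(
+    "stabilityai/stable-diffusion-3-medium-diffusers"
+)
+image = pipe(
+    "A photo of an astronaut riding a horse on Mars",
+    num_inference_steps=28,
+    guidance_scale=7.0,
+).images[0]
+image.save("astronaut.png")
+```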