Merge branch 'main' into abdulari/agentQnA_docs

yinghu5 authored Nov 14, 2024
2 parents 079d4ed + e579124 commit 6e2b81b
Showing 19 changed files with 336 additions and 126 deletions.
21 changes: 21 additions & 0 deletions .github/workflows/check-online-doc-build.yml
@@ -0,0 +1,21 @@
# Copyright (C) 2024 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

name: Check Online Document Building
permissions: {}

on:
  pull_request:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build Online Document
        shell: bash
        run: |
          git config --local --get remote.origin.url
          echo "build online doc"
          bash scripts/build.sh
2 changes: 1 addition & 1 deletion .github/workflows/pr-path-detection.yml
@@ -104,7 +104,7 @@ jobs:
link_head="https://github.com/opea-project/docs/blob/main"
merged_commit=$(git log -1 --format='%H')
changed_files="$(git diff --name-status --diff-filter=ARM ${{ github.event.pull_request.base.sha }} ${merged_commit} | awk '/\.md$/ {print $NF}')"
png_lines=$(grep -Eo '\]\([^)]+\)' --include='*.md' -r .|grep -Ev 'http'|grep -Ev 'mailto')
png_lines=$(grep -Eo '\]\([^)]+\)' --include='*.md' -r .|grep -Ev 'http'|grep -Ev 'mailto'|grep -Ev 'portal.azure.com')
if [ -n "$png_lines" ]; then
for png_line in $png_lines; do
refer_path=$(echo "$png_line"|cut -d':' -f1 | cut -d'/' -f2-)
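For context, the added `grep -Ev 'portal.azure.com'` stage simply drops links that reference portal.azure.com before the relative-link check runs. The sketch below is a rough Python equivalent of that filtering step, not the workflow's actual implementation; the function name `relative_links` and the file-scanning approach are illustrative only.

```python
import re
from pathlib import Path

# Sketch: collect "](...)" link targets from Markdown files, skipping
# anything containing http, mailto, or portal.azure.com, mirroring the
# grep pipeline in the workflow above.
LINK_RE = re.compile(r'\]\(([^)]+)\)')
EXCLUDE = ("http", "mailto", "portal.azure.com")

def relative_links(root="."):
    for md_file in Path(root).rglob("*.md"):
        text = md_file.read_text(encoding="utf-8", errors="ignore")
        for target in LINK_RE.findall(text):
            if not any(token in target for token in EXCLUDE):
                yield md_file, target

if __name__ == "__main__":
    for md_file, target in relative_links():
        print(f"{md_file}: {target}")
```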
14 changes: 14 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,14 @@
# Contributing to OPEA

Welcome to the OPEA open-source community! We are thrilled to have you here and excited about the potential contributions you can bring to the OPEA platform. Whether you are fixing bugs, adding new GenAI components, improving documentation, or sharing your unique use cases, your contributions are invaluable.

Together, we can make OPEA the go-to platform for enterprise AI solutions. Let's work together to push the boundaries of what's possible and create a future where AI is accessible, efficient, and impactful for everyone.

Please check the [Contributing guidelines](https://github.com/opea-project/docs/tree/main/community/CONTRIBUTING.md) for a detailed guide on how to contribute a GenAI component and all the ways you can contribute!

Thank you for being a part of this journey. We can't wait to see what we can achieve together!

# Additional Content

- [Code of Conduct](https://github.com/opea-project/docs/tree/main/community/CODE_OF_CONDUCT.md)
- [Security Policy](https://github.com/opea-project/docs/tree/main/community/SECURITY.md)
36 changes: 22 additions & 14 deletions community/rfcs/24-08-02-OPEA-AIAvatarChatbot.md
@@ -1,17 +1,24 @@
# 24-08-02-OPEA-AIAvatarChatbot

A RAG-Powered Human-Like AI Avatar Audio Chatbot integrated with OPEA AudioQnA
<!-- The short description of the feature you want to contribute -->
A Human-Like AI Avatar Audio Chatbot integrated with OPEA AudioQnA

Code contributions:
"animation" component: https://github.com/opea-project/GenAIComps/tree/main/comps/animation/wav2lip
"AvatarChatbot" examples: https://github.com/opea-project/GenAIExamples/tree/main/AvatarChatbot

Intel Developer Zone Article "Create an AI Avatar Talking Bot with PyTorch* and Open Platform for Enterprise AI (OPEA)": https://www.intel.com/content/www/us/en/developer/articles/technical/ai-avatar-talking-bot-with-pytorch-and-opea.html

YouTube tech-talk video: https://youtu.be/OjaElyUB8Z0?si=6-IdxwTg0YFMraFl

## Author
<!-- List all contributors of this RFC. -->
[ctao456](https://github.com/ctao456), [alexsin368](https://github.com/alexsin368), [YuningQiu](https://github.com/YuningQiu), [louie-tsai](https://github.com/louie-tsai)

## Status
<!-- Change the PR status to Under Review | Rejected | Accepted. -->
v0.1 - ASMO Team sharing on Fri 6/28/2024
[GenAIComps pr #400](https://github.com/opea-project/GenAIComps/pull/400) (Under Review)
[GenAIExamples pr #523](https://github.com/opea-project/GenAIExamples/pull/523) (Under Review)
v0.1 - ASMO Team sharing on Thursday 10/24/2024
* [GenAIComps pr #775](https://github.com/opea-project/GenAIComps/pull/775) | <span style="color: green;">Merged</span>
* [GenAIExamples pr #923](https://github.com/opea-project/GenAIExamples/pull/923) | <span style="color: green;">Merged</span>

## Objective
<!-- List what problem will this solve? What are the goals and non-goals of this RFC? -->
@@ -39,10 +46,10 @@ The chatbot will:
* Use multimodal retrieval-augmented generation (RAG) to generate more accurate, in-domain responses, in v0.2

New microservices include:
* animation
* [animation](https://github.com/opea-project/GenAIComps/tree/main/comps/animation/wav2lip)

New megaservices include:
* AvatarChatbot
* [AvatarChatbot](https://github.com/opea-project/GenAIExamples/tree/main/AvatarChatbot)

## Motivation
<!-- List why this problem is valuable to solve? Whether some related work exists? -->
@@ -60,9 +67,9 @@ Related works include [Nvidia Audio2Face](https://docs.nvidia.com/ace/latest/mod
### Avatar Chatbot design
<!-- Removed PPT slides -->

![avatar chatbot design](assets/design.png)
![avatar chatbot design](assets/avatar_design.png)

Currently, the RAG feature using the `embedding` and `dataprep` microservices is missing in the above design, including uploading relevant documents/weblinks, storing them in the database, and retrieving them for the LLM model. These features will be added in v0.2.
Currently, the RAG feature using the `embedding`, `retrieval`, `reranking` and `dataprep` microservices and VectorDB is missing in the above design, including uploading relevant documents/weblinks, storing them in the database, and retrieving them for the LLM model. These features will be added in v0.2.
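As a rough illustration of the missing v0.2 flow, the sketch below walks through the usual RAG stages (dataprep/embedding, retrieval, reranking, generation). It is a generic, self-contained toy: hash-based embeddings, an in-memory store, and placeholder function names; it does not reflect OPEA's actual microservice APIs.

```python
import hashlib
import math

def embed(text, dim=64):
    # Toy embedding: hash character trigrams into a fixed-size vector.
    vec = [0.0] * dim
    for i in range(len(text) - 2):
        vec[int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def dataprep(documents):
    # "dataprep" + "embedding": store (vector, text) pairs in an in-memory DB.
    return [(embed(doc), doc) for doc in documents]

def retrieve(query, store, top_k=2):
    # "retrieval": cosine similarity of the query against stored vectors.
    q = embed(query)
    scored = [(sum(a * b for a, b in zip(q, v)), doc) for v, doc in store]
    return [doc for _, doc in sorted(scored, reverse=True)[:top_k]]

def rerank(query, candidates):
    # "reranking": placeholder that keeps retrieval order.
    return candidates

def generate(query, context):
    # Placeholder for the LLM call; a real system would prompt an LLM here.
    return "Answer to '{}' grounded in: {}".format(query, "; ".join(context))

store = dataprep(["Avatars lip-sync with Wav2Lip.", "AudioQnA handles ASR and TTS."])
question = "How is lip sync done?"
print(generate(question, rerank(question, retrieve(question, store))))
```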

Flowchart: AvatarChatbot Megaservice
<!-- Insert Mermaid flowchart here -->
@@ -217,13 +224,14 @@ End-to-End Inference Time for AvatarChatbot Megaservice (asr -> llm -> tts -> an

On SPR:
~30 seconds for AudioQnA on SPR,
~40-200 seconds for AvatarAnimation on SPR
~30-200 seconds for AvatarAnimation on SPR

On Gaudi 2:
~5 seconds for AudioQnA on Gaudi,
~10-50 seconds for AvatarAnimation on Gaudi, depending on:
~10-40 seconds for AvatarAnimation on Gaudi, depending on:
1) Whether the input is an image or a multi-frame, fixed-fps video
1) LipSync Animation DL model used: Wav2Lip_only or Wav2Lip+GFPGAN or SadTalker
2) Resolution and FPS rate of the resulting mp4 video
2) The `max_tokens` parameter used in LLM text generation
3) LipSync Animation DL model used: Wav2Lip_only or Wav2Lip+GFPGAN or SadTalker
4) Resolution and FPS rate of the resulting mp4 video

All latency reportings are as of 8/2/2024.
All latency reportings are as of 10/24/2024.
Binary file added community/rfcs/assets/avatar_design.png
Binary file removed community/rfcs/assets/design.png
27 changes: 26 additions & 1 deletion conf.py
@@ -4,6 +4,9 @@
import os
import sys
from datetime import datetime
import glob
import shutil


sys.path.insert(0, os.path.abspath('.'))

@@ -84,7 +87,7 @@
# Toc options
'collapse_navigation': False,
'sticky_navigation': True,
'navigation_depth': 3,
'navigation_depth': 4,
}


@@ -114,6 +117,7 @@
)
}

show_warning_types = True

# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
@@ -130,10 +134,31 @@
# paths that contain custom static files (such as style sheets)
html_static_path = ['sphinx/_static']

# Copy every png/svg image from the Sphinx source tree into the output tree.
def copy_images(src, dst):
    image_types = ["png", "svg"]
    for image_type in image_types:
        pattern = "{}/**/*.{}".format(src, image_type)
        files = glob.glob(pattern, recursive=True)
        for file in files:
            sub_name = file.replace(src, '')
            dst_filename = "{}{}".format(dst, sub_name)
            folder = os.path.dirname(dst_filename)
            if not os.path.exists(folder):
                os.makedirs(folder)
            shutil.copy(file, dst_filename)

# Hook run at the end of the build; only acts on HTML builds.
def copy_image_to_html(app, docname):
    if app.builder.name == 'html':
        if os.path.exists(app.srcdir) and os.path.exists(app.outdir):
            copy_images(str(app.srcdir), str(app.outdir))
        else:
            print("Source dir {} or output dir {} does not exist".format(app.srcdir, app.outdir))

def setup(app):

    app.add_css_file("opea-custom.css")
    app.add_js_file("opea-custom.js")
    app.connect('build-finished', copy_image_to_html)

# Disable "Created using Sphinx" in the HTML footer. Default is True.
html_show_sphinx = False
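To illustrate what the new hook does, the sketch below exercises `copy_images` (as added in conf.py above) on a throwaway source/output pair: every `*.png`/`*.svg` under the source tree is mirrored into the output tree so image references in the Markdown sources still resolve in the built HTML. The paths and file names are made up for the example.

```python
import os
import tempfile

# Sketch: run copy_images() (defined in conf.py above) on temp directories.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
os.makedirs(os.path.join(src, "community", "rfcs", "assets"))
with open(os.path.join(src, "community", "rfcs", "assets", "example.png"), "wb") as f:
    f.write(b"\x89PNG\r\n")  # placeholder bytes, not a real image

copy_images(src, dst)
print(os.path.exists(os.path.join(dst, "community", "rfcs", "assets", "example.png")))  # True
```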
15 changes: 15 additions & 0 deletions examples/examples.rst
@@ -0,0 +1,15 @@
Examples
##########

.. rst-class:: rst-columns

.. contents::
   :local:
   :depth: 1

----

.. comment This include file is generated in the Makefile during doc build
   time from all the directories found in the GenAIExamples top level directory

.. include:: examples.txt
15 changes: 1 addition & 14 deletions examples/index.rst
@@ -25,19 +25,6 @@ We're building this documentation from content in the
   :glob:

   /GenAIExamples/README
   examples.rst
   /GenAIExamples/*

**Example Applications Table of Contents**

.. rst-class:: rst-columns

.. contents::
   :local:
   :depth: 1

----

.. comment This include file is generated in the Makefile during doc build
   time from all the directories found in the GenAIExamples top level directory
.. include:: examples.txt
