
✨ Helm Chart for VideoQnA Application #497

Open · wants to merge 33 commits into base: main
Conversation

krish918

Description

This PR introduces a Helm chart for the VideoQnA application, enabling easy setup of the application suite along with all its dependencies.

Issues

N/A

Type of change

List the type of change like below. Please delete options that are not relevant.

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds new functionality)
  • Breaking change (fix or feature that would break existing design and interface)

Dependencies

No Dependencies.

Tests

All Helm charts are accompanied by appropriate tests.

ZePan110 and others added 10 commits October 17, 2024 01:23
* Optimize path and link validity check.

Signed-off-by: ZePan110 <[email protected]>

* debug

Signed-off-by: ZePan110 <[email protected]>

* debug2

Signed-off-by: ZePan110 <[email protected]>

* Fix issue

Signed-off-by: ZePan110 <[email protected]>

* Remove debug output

Signed-off-by: ZePan110 <[email protected]>

* test

Signed-off-by: ZePan110 <[email protected]>

* Fix issue.

Signed-off-by: ZePan110 <[email protected]>

* Fix issue

Signed-off-by: ZePan110 <[email protected]>

* Restore test file

Signed-off-by: ZePan110 <[email protected]>

---------

Signed-off-by: ZePan110 <[email protected]>
Signed-off-by: Krishna Murti <[email protected]>
Collaborator

@lianhao lianhao left a comment

Thanks very much for contributing this.

Besides the embedded comment, we should also:

  1. add the README.md
  2. enable the CI; please refer to the ci-xxx-values.yaml files in the component helm charts
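For reference, a CI values file is just a small overlay the CI pipeline passes to `helm install`; a hypothetical sketch, where every key and value below is an assumption for illustration:

```yaml
# ci-vdms-values.yaml (hypothetical sketch; key names and values assumed)
# Minimal overrides so CI can exercise the chart quickly.
replicaCount: 1
image:
  repository: opea/dataprep-vdms   # assumed image name
  tag: "latest"
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```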

helm-charts/common/data-prep/values.yaml (outdated; resolved)
@@ -0,0 +1,18 @@
# Copyright (C) 2024 Intel Corporation
Collaborator

We need to change the file name vdms-values.yaml to variant_vdms-values.yaml so that update_manifest.sh can automatically pick it up to generate manifest files for GMC. Please see llm_uservice as an example.
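The rename itself is mechanical; a sketch, with the repository layout assumed from this thread:

```shell
# Sketch: rename the VDMS values file to the variant_<name>-values.yaml
# pattern so update_manifest.sh auto-discovers it when generating GMC
# manifests. The touch below only stands in for the existing file.
mkdir -p helm-charts/common/data-prep
touch helm-charts/common/data-prep/vdms-values.yaml
mv helm-charts/common/data-prep/vdms-values.yaml \
   helm-charts/common/data-prep/variant_vdms-values.yaml
ls helm-charts/common/data-prep
```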

Author

Done.

@@ -0,0 +1,15 @@
# Copyright (C) 2024 Intel Corporation
Collaborator

again, please use variant_vdms-values.yaml as the file name so update_manifest.sh can pick it up automatically.

Author

Done.

@@ -0,0 +1,12 @@
# Copyright (C) 2024 Intel Corporation
Collaborator

again, a filename change is required here to variant_vdms-values.yaml

Author

Done.

@@ -22,7 +25,7 @@ TEI_EMBEDDING_ENDPOINT: ""
LOCAL_EMBEDDING_MODEL: ""

REDIS_URL: ""
INDEX_NAME: "rag-redis"
indexName: "rag-redis"
Collaborator

there is no need to change it to lower case, I think

Author

This is related to the first conversation. Based on that, please let me know whether this is fine or needs to be reverted.

Author

Reverted back to INDEX_NAME as per this conversation.

@@ -0,0 +1,19 @@
# Copyright (C) 2024 Intel Corporation
Collaborator

again, a filename change is required here to variant_vdms-values.yaml

Author

Done

@krish918 krish918 marked this pull request as ready for review October 29, 2024 02:19
@krish918 krish918 requested a review from yongfengdu as a code owner October 29, 2024 02:19
nameOverride: "ui"

# Following template value will be resolved in videoqna-ui ConfigMap
BACKEND_SERVICE_ENDPOINT: http://{{ .Release.Name | trunc 57 | trimSuffix "-" }}-nginx/v1/videoqna
Collaborator

Should this be just "/v1/videoqna", the same as what we put for all other UIs, e.g. https://github.com/opea-project/GenAIInfra/blob/main/helm-charts/codegen/values.yaml#L50, and let the nginx reverse proxy prepend the scheme and hostname part? I'll let @Ruoyu-y chime in here because she knows more about all the nginx settings.
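For comparison, the convention in the linked codegen values file keeps only the path; a sketch:

```yaml
# Convention used by the other UIs: configure only the path and let the
# nginx reverse proxy supply scheme and hostname at request time.
BACKEND_SERVICE_ENDPOINT: "/v1/videoqna"
```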

Author

Because the UI for videoqna is based on Streamlit, which runs code in the backend before sending results to the browser. The requests library used in the Python backend code requires a complete URL including scheme and host, and fails otherwise (unlike the frontend JS frameworks used in the other UIs, which can take relative URLs without scheme and host). I found I could resolve this by using the nginx service name as the host, since all pods can reach each other by service name. Please let me know if there is a better way to do this.

Collaborator

@lianhao lianhao Nov 1, 2024

Ok, it seems the videoqna-ui is designed to run the code that connects to the videoqna mega gateway service in the UI container itself. In this case, we don't need the nginx part at all. We can set BACKEND_SERVICE_ENDPOINT to the corresponding k8s svc URL of the videoqna mega gateway service and remove all the nginx related parts, i.e.

BACKEND_SERVICE_ENDPOINT: http://{{ include "videoqna.fullname" . }}:{{ .Values.service.port }}/v1/videoqna

We can change the UI service type to NodePort and ask the user to connect directly to that NodePort in a web browser.
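A sketch of the suggested simplification (field names assumed), exposing the UI directly instead of going through nginx:

```yaml
# ui values sketch: expose the Streamlit UI via NodePort so users can
# browse to <node-ip>:<nodePort> directly, with no nginx in front.
service:
  type: NodePort
  # nodePort: 30080  # optionally pin a port; otherwise k8s assigns one
```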

Author

Maybe nginx would be helpful for future UI updates, in case we get UIs in React or Svelte. Please let me know if that's not the case and I should proceed with removing nginx.

Also, the UI, being a sub-chart of the videoqna mega-application, doesn't seem to have access to values and templates defined in the parent (videoqna) chart. So I am finding it difficult to access videoqna.fullname and other videoqna chart values in the UI sub-chart. However, I can access the videoqna svc name the same way I am accessing the nginx svc name right now. Please help me if I am missing something here.
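One standard Helm escape hatch, sketched under the assumption that the charts follow the usual parent/sub-chart layout: named templates share a single release-wide namespace, so a sub-chart rendered as part of the parent release can still `include "videoqna.fullname"`; it is the parent's plain `.Values` that are invisible to the sub-chart, except for keys under `global`:

```yaml
# videoqna/values.yaml (sketch; the key name below is an assumption)
# Everything under `global` is visible to every sub-chart as
# .Values.global.*, so the ui sub-chart can read it directly.
global:
  videoqnaServicePort: 8888
```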

helm-charts/common/ui/templates/configmap.yaml (outdated; resolved)
@lianhao lianhao requested a review from Ruoyu-y November 1, 2024 01:29
Collaborator

@mkbhanda mkbhanda left a comment

Persistent volume support. And a typo.


```bash
# 1) Download a sample video into the current dir:
curl -svLO "https://github.com/opea-project/GenAIExamples/raw/refs/heads/main/VideoQnA/docker_compose/intel/cpu/xeon/data/op_1_0320241830.mp4"
```
Collaborator

Would it not be better for this sample video to live in some common VideoQnA directory rather than in docker_compose/intel/cpu/xeon/data?

Author

I suppose the requested change needs a PR in GenAIExamples rather than here.

```bash
export HFTOKEN="insert-your-huggingface-token-here"

# Set a dir to cache the downloaded Video-Llama model
export MODELDIR=/mnt/opea-models
```
Collaborator

Using persistent volumes is better than a node-level directory. We have some charts with PV support, I believe submitted by @poussa.
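A hypothetical values sketch for PV-backed model caching (key names here are assumptions; some charts use a similar toggle pattern):

```yaml
# Back the model cache with an existing PersistentVolumeClaim instead of
# a node-level hostPath directory. Key names below are assumptions.
global:
  cacheUseHostPath: ""            # leave empty to disable hostPath
  modelUsePVC: "opea-model-cache" # name of a pre-created PVC
```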

Author

Is there any possibility we can take this up later, in future improvements to the videoqna chart?


In this setup, lvm-uservice depends on TGI; you should set LVM_ENDPOINT to the tgi endpoint.

### (Option 1): Installing the chart separately

First, you need to install the tgi chart; please refer to the [tgi](../tgi) chart for more information.

After you've deployted the tgi chart successfully, please run `kubectl get svc` to get the tgi service endpoint, i.e. `http://tgi`.
Collaborator

Suggested change
After you've deployted the tgi chart successfully, please run `kubectl get svc` to get the tgi service endpoint, i.e. `http://tgi`.
After you've deployed the tgi chart successfully, please run `kubectl get svc` to get the tgi service endpoint, i.e. `http://tgi`.

Author

This is fixed now. It also came in from the VisualQnA PR merge.

helm-charts/common/lvm-uservice/values.yaml (resolved)
.github/workflows/_helm-e2e.yaml (outdated; resolved)
@@ -63,6 +80,12 @@ spec:
resources:
{{- toYaml .Values.resources | nindent 12 }}
volumes:
{{- if .Values.global.cacheUseHostPath }}
Collaborator

See comment above.
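A sketch of what the volumes block could look like with PVC support added alongside the existing hostPath toggle (the `modelUsePVC` key is an assumption, and `cacheUseHostPath` is assumed to carry the host path itself):

```yaml
volumes:
  {{- if .Values.global.modelUsePVC }}
  - name: model-volume
    persistentVolumeClaim:
      claimName: {{ .Values.global.modelUsePVC }}
  {{- else if .Values.global.cacheUseHostPath }}
  - name: model-volume
    hostPath:
      path: {{ .Values.global.cacheUseHostPath }}
  {{- else }}
  - name: model-volume
    emptyDir: {}
  {{- end }}
```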

Author

Done.

helm-charts/common/lvm-serving/Chart.yaml (outdated; resolved)
subPath: clip
- mountPath: /home/user/.cache/huggingface/hub
name: model-volume
subPath: huggingface/hub
Collaborator

When using model-volume, the subPath is not necessary.

Author

Agreed, it's not strictly necessary. But since the same volume is re-used to mount different paths from different pods/containers, having a subPath keeps a proper directory hierarchy inside model-volume and makes it clear what comes from which container path, instead of dumping everything into the root of model-volume.

Collaborator

The model-volume was used to save the models downloaded from huggingface hub, which is the HUGGINGFACE_HUB_CACHE here https://huggingface.co/docs/text-generation-inference/en/reference/launcher#huggingfacehubcache
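If the subPath is kept, every consumer of the volume has to agree on the layout; a sketch (mount path taken from the diff above, the env wiring is an assumption):

```yaml
# The container's HF cache env must point at the directory the subPath
# lands on, or the serving container will re-download models.
env:
  - name: HUGGINGFACE_HUB_CACHE
    value: /home/user/.cache/huggingface/hub
volumeMounts:
  - mountPath: /home/user/.cache/huggingface/hub
    name: model-volume
    subPath: huggingface/hub
```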

Collaborator

@lianhao lianhao commented Nov 8, 2024

@krish918 please fix the chart lint error and resolve the conflict, thanks!

@lianhao lianhao added this to the v1.1 milestone Nov 8, 2024

Collaborator

@chensuyue

Can we close the CI issue by today?

@chensuyue chensuyue removed this from the v1.1 milestone Nov 18, 2024