This dataprep microservice accepts the following from the user and ingests them into a Redis vector store:
- Videos (mp4 files) and their transcripts (optional)
- Images (gif, jpg, jpeg, and png files) and their captions (optional)
- Audio (wav files)
# Install ffmpeg static build
wget https://johnvansickle.com/ffmpeg/builds/ffmpeg-git-amd64-static.tar.xz
mkdir ffmpeg-git-amd64-static
tar -xvf ffmpeg-git-amd64-static.tar.xz -C ffmpeg-git-amd64-static --strip-components 1
export PATH=$(pwd)/ffmpeg-git-amd64-static:$PATH
cp $(pwd)/ffmpeg-git-amd64-static/ffmpeg /usr/local/bin/
pip install -r requirements.txt
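Optionally, verify that the ffmpeg binary is visible on your PATH before continuing. A minimal check from Python (any `which ffmpeg` equivalent works just as well):

```python
# Confirm the ffmpeg static build is reachable on PATH.
import shutil

ffmpeg_path = shutil.which("ffmpeg")
print(f"ffmpeg found at: {ffmpeg_path}" if ffmpeg_path else "ffmpeg not found on PATH")
```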
Please refer to this readme.
export your_ip=$(hostname -I | awk '{print $1}')
export REDIS_URL="redis://${your_ip}:6379"
export INDEX_NAME=${your_redis_index_name}
export PYTHONPATH=${path_to_comps}
This is required only if you are going to consume the generate_captions API of this microservice as in Section 4.3.
Please refer to this readme to start the LVM microservice. After LVM is up, set up environment variables.
export your_ip=$(hostname -I | awk '{print $1}')
export LVM_ENDPOINT="http://${your_ip}:9399/v1/lvm"
Start the document preparation microservice for Redis with the command below.
python prepare_videodoc_redis.py
Please refer to this readme.
This is required only if you are going to consume the generate_captions API of this microservice as described here.
Please refer to this readme to start the LVM microservice. After LVM is up, set up environment variables.
export your_ip=$(hostname -I | awk '{print $1}')
export LVM_ENDPOINT="http://${your_ip}:9399/v1/lvm"
export your_ip=$(hostname -I | awk '{print $1}')
export EMBEDDING_MODEL_ID="BridgeTower/bridgetower-large-itm-mlm-itc"
export REDIS_URL="redis://${your_ip}:6379"
export WHISPER_MODEL="base"
export INDEX_NAME=${your_redis_index_name}
export HUGGINGFACEHUB_API_TOKEN=${your_hf_api_token}
cd ../../../../
docker build -t opea/dataprep-multimodal-redis:latest --build-arg https_proxy=$https_proxy --build-arg http_proxy=$http_proxy -f comps/dataprep/multimodal/redis/langchain/Dockerfile .
docker run -d --name="dataprep-multimodal-redis" -p 6007:6007 --runtime=runc --ipc=host -e http_proxy=$http_proxy -e https_proxy=$https_proxy -e REDIS_URL=$REDIS_URL -e INDEX_NAME=$INDEX_NAME -e LVM_ENDPOINT=$LVM_ENDPOINT -e HUGGINGFACEHUB_API_TOKEN=$HUGGINGFACEHUB_API_TOKEN opea/dataprep-multimodal-redis:latest
cd comps/dataprep/multimodal/redis/langchain
docker compose -f docker-compose-dataprep-redis.yaml up -d
docker container logs -f dataprep-multimodal-redis
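Once the logs show the server is up, a quick reachability check is to call the `get_files` endpoint documented below. A minimal Python sketch, assuming the `requests` package and the default port 6007:

```python
import requests

# Mirrors the documented curl call; an empty list means nothing has been ingested yet.
resp = requests.post(
    "http://localhost:6007/v1/dataprep/get_files",
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
print(resp.json())
```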
Once this dataprep microservice is started, users can use the commands below to invoke the microservice, which converts images and videos and their optional transcripts into embeddings and saves them to the Redis vector store.
This microservice provides three different ways for users to ingest files into the Redis vector store, corresponding to the three use cases below.
Use case: This API is used when videos are accompanied by transcript files (`.vtt` format) or images are accompanied by text caption files (`.txt` format).

Important notes:

- Make sure the file paths after `files=@` are correct.
- Every transcript or caption file's name must be identical to its corresponding video or image file's name (except the extension: `.vtt` goes with `.mp4`, and `.txt` goes with `.jpg`, `.jpeg`, `.png`, or `.gif`). For example, `video1.mp4` and `video1.vtt`. Otherwise, if `video1.vtt` is not included correctly in the API call, the microservice will return the error `No captions file video1.vtt found for video1.mp4`.
curl -X POST \
-H "Content-Type: multipart/form-data" \
-F "files=@./video1.mp4" \
-F "files=@./video1.vtt" \
http://localhost:6007/v1/ingest_with_text
curl -X POST \
-H "Content-Type: multipart/form-data" \
-F "files=@./image.jpg" \
-F "files=@./image.txt" \
http://localhost:6007/v1/ingest_with_text
curl -X POST \
-H "Content-Type: multipart/form-data" \
-F "files=@./video1.mp4" \
-F "files=@./video1.vtt" \
-F "files=@./video2.mp4" \
-F "files=@./video2.vtt" \
-F "files=@./image1.png" \
-F "files=@./image1.txt" \
-F "files=@./image2.jpg" \
-F "files=@./image2.txt" \
http://localhost:6007/v1/ingest_with_text
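The same ingestion can be scripted instead of using curl. Below is a minimal Python sketch, assuming the `requests` package; the file names are illustrative. It enforces the naming rule above (matching base names, `.vtt` for videos and `.txt` for images) before posting to `/v1/ingest_with_text`:

```python
from pathlib import Path
import requests

DATAPREP_URL = "http://localhost:6007/v1/ingest_with_text"
# Expected transcript/caption extension for each supported media extension.
CAPTION_EXT = {".mp4": ".vtt", ".jpg": ".txt", ".jpeg": ".txt", ".png": ".txt", ".gif": ".txt"}

media = [Path("video1.mp4"), Path("image1.png")]  # illustrative file names

files = []
for m in media:
    caption = m.with_suffix(CAPTION_EXT[m.suffix.lower()])
    if not caption.exists():
        raise FileNotFoundError(f"No transcript/caption file {caption.name} found for {m.name}")
    # Each media file and its matching transcript/caption go in as separate 'files' parts.
    files.append(("files", (m.name, open(m, "rb"))))
    files.append(("files", (caption.name, open(caption, "rb"))))

resp = requests.post(DATAPREP_URL, files=files)
resp.raise_for_status()
print(resp.json())
```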
Use case: This API should be used when a video has meaningful audio or recognizable speech but its transcript file is not available, or for audio files with speech.
In this use case, this microservice will use the `whisper` model to generate the `.vtt` transcript for the video or audio files.
curl -X POST \
-H "Content-Type: multipart/form-data" \
-F "files=@./video1.mp4" \
http://localhost:6007/v1/generate_transcripts
curl -X POST \
-H "Content-Type: multipart/form-data" \
-F "files=@./video1.mp4" \
-F "files=@./video2.mp4" \
-F "files=@./audio1.wav" \
http://localhost:6007/v1/generate_transcripts
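For a folder of media, the upload can be scripted as well. A minimal Python sketch, assuming the `requests` package; the directory name is illustrative:

```python
from pathlib import Path
import requests

DATAPREP_URL = "http://localhost:6007/v1/generate_transcripts"
media_dir = Path("./media")  # illustrative directory

# Collect every file type this endpoint accepts: mp4 videos and wav audio.
paths = sorted(p for p in media_dir.iterdir() if p.suffix.lower() in {".mp4", ".wav"})
files = [("files", (p.name, open(p, "rb"))) for p in paths]

resp = requests.post(DATAPREP_URL, files=files)
resp.raise_for_status()
print(resp.json())
```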
Use case: This API should be used when uploading an image, or when uploading a video that has no audio or whose audio is not meaningful.
In this use case, there is no meaningful language transcription, so it is preferable to leverage an LVM microservice to summarize the frames.
- Single video upload
curl -X POST \
-H "Content-Type: multipart/form-data" \
-F "files=@./video1.mp4" \
http://localhost:6007/v1/generate_captions
- Multiple video upload
curl -X POST \
-H "Content-Type: multipart/form-data" \
-F "files=@./video1.mp4" \
-F "files=@./video2.mp4" \
http://localhost:6007/v1/generate_captions
- Single image upload
curl -X POST \
-H "Content-Type: multipart/form-data" \
-F "files=@./image.jpg" \
http://localhost:6007/v1/generate_captions
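The multipart pattern is the same as for the other endpoints; for completeness, a minimal Python sketch for a single image, assuming the `requests` package:

```python
import requests

# Upload one image to be captioned by the LVM microservice.
with open("image.jpg", "rb") as f:
    resp = requests.post(
        "http://localhost:6007/v1/generate_captions",
        files=[("files", ("image.jpg", f))],
    )
resp.raise_for_status()
print(resp.json())
```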
To get the names of uploaded files, use the following command.
curl -X POST \
-H "Content-Type: application/json" \
http://localhost:6007/v1/dataprep/get_files
To delete uploaded files and clear the database, use the following command.
curl -X POST \
-H "Content-Type: application/json" \
http://localhost:6007/v1/dataprep/delete_files
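A minimal Python sketch of the cleanup flow, assuming the `requests` package: delete everything, then call `get_files` again to confirm the store is empty.

```python
import requests

BASE_URL = "http://localhost:6007/v1/dataprep"
HEADERS = {"Content-Type": "application/json"}

# Delete the uploaded files and clear the database.
deletion = requests.post(f"{BASE_URL}/delete_files", headers=HEADERS)
deletion.raise_for_status()
print("Delete response:", deletion.json())

# Re-list files; after a successful delete this is expected to come back empty.
listing = requests.post(f"{BASE_URL}/get_files", headers=HEADERS)
listing.raise_for_status()
print("Remaining files:", listing.json())
```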