EchoChronos (Time Echo Chronos) is a multi-modal, style-driven conversational AI assistant built on a large language model, designed to offer users a new kind of conversational experience. It integrates RAG, TTS, and other technologies to interact with users in real time, letting them immerse themselves in the charm of classic dialogues and conversations with history. 🥸
```
EchoChronos
├── ChatStyle             # Conversation style module
├── managers              # Provides interfaces for the various modules
│   ├── __init__.py
│   ├── connect.py        # Connection manager; currently only WebSocket is supported
│   ├── constants.py      # Constants
│   ├── model.py          # Style dialogue model manager
│   ├── rag.py            # RAG model manager
│   ├── runner.py         # Runner manager, contains the inference logic
│   └── tts.py            # TTS model manager
├── RAG                   # RAG module
├── TTS                   # TTS module
├── utils                 # Toolkit
├── inference_torch.py    # Inference code using PyTorch
├── inference.py          # Inference code using MindSpore
├── launch.py             # Project entry point
├── README.en.md
└── README.md
```
```bash
# Requires Python >= 3.10
conda install ffmpeg
pip install mindnlp==0.4.0
git clone --recursive https://gitee.com/xujunda2024/echochronos.git
cd echochronos
pip install -r requirements.txt
```
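After installing, a quick way to confirm the MindSpore stack is usable is the minimal sketch below; `mindspore.run_check()` is MindSpore's built-in self-check, and the `mindnlp` import simply confirms the package resolves:

```python
# Post-install sanity check for the MindSpore / MindNLP stack.
import mindspore
import mindnlp  # noqa: F401 -- importing is enough to confirm installation

# run_check() runs a small computation on the configured backend and
# prints the installed MindSpore version with a pass/fail message.
mindspore.run_check()
```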
- Set up a Python environment ⚙️ (see the installation commands above).
- After successfully installing the dependencies, prepare a configuration file following the format of `examples/infer_qwen2_lora_fp32.yaml` (be sure to modify the parameters in the configuration file according to your needs).
- Start the GPT-SoVITS service:
  - Prepare the model: for details, please refer to the README.md of the GPT-SoVITS project.
  - Start the service (a reachability sketch follows this list):

    ```bash
    cd TTS/GPT-SoVITS-main/GPT_SOVITS
    python Server.py
    ```
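If later steps cannot reach the TTS backend, a plain TCP probe helps narrow the problem down. The host and port below are placeholders (assumptions, not values documented by this project); use whatever address your Server.py actually binds to:

```python
# Hypothetical reachability probe for the GPT-SoVITS service.
import socket

HOST, PORT = "127.0.0.1", 9880  # placeholders -- match your Server.py settings

try:
    # create_connection opens (and here immediately closes) a TCP socket.
    with socket.create_connection((HOST, PORT), timeout=3):
        print(f"TTS service is reachable at {HOST}:{PORT}")
except OSError as err:
    print(f"Could not reach {HOST}:{PORT}: {err}")
```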
Currently, this project provides three modes of operation, which can be selected by modifying the `isTerminal`, `isWebsocket`, and `isWebUI` parameters in the YAML file (replace `<your_yaml_path>` with the path to your YAML configuration file); a sketch of reading these flags follows the list.

- Terminal:

  ```bash
  python launch.py <your_yaml_path>
  ```

- WebSocket:

  ```bash
  python launch.py <your_yaml_path>
  ```

- WebUI (recommended 🤩):

  ```bash
  streamlit run launch.py <your_yaml_path>
  ```
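As a rough sketch of how one might inspect these flags, the snippet below loads a config with PyYAML and reports which mode is enabled. The three key names come from the description above, but treating them as top-level booleans is an assumption; see `examples/infer_qwen2_lora_fp32.yaml` for the actual layout:

```python
# Sketch: report which run mode a config file enables.
# Assumes isTerminal / isWebsocket / isWebUI are top-level boolean keys.
import sys

import yaml  # PyYAML

with open(sys.argv[1], "r", encoding="utf-8") as f:
    cfg = yaml.safe_load(f)

for flag in ("isTerminal", "isWebsocket", "isWebUI"):
    print(f"{flag}: {cfg.get(flag, False)}")
```

Run it as `python check_mode.py <your_yaml_path>` (where `check_mode.py` is whatever you name the snippet).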
- It is recommended to use CUDA 11.6 and cuDNN.
- If you encounter `[ERROR] libcuda.so (needed by mindspore-gpu) is not found.`, please run `export CUDA_HOME=/path/to/your/cuda`.
- Due to accuracy issues with the Qwen2 models in MindNLP, inference can only be performed in float32, with memory consumption around 46 GB.
MindSpore has two inference script entry points, which are started as follows:

- `python inference_ms.py --isTerminal` or `python inference_ms.py --isWebsocket`
- `streamlit run webui.py` (recommended 🤩)
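For the `--isWebsocket` mode, a client along these lines can exercise the service. The URL and the plain-text message format are assumptions, not this project's documented protocol; check `managers/connect.py` for the real endpoint and framing:

```python
# Hypothetical WebSocket client for the --isWebsocket mode.
import asyncio

import websockets  # pip install websockets

async def chat_once() -> None:
    # Placeholder URL -- replace with the address the server actually serves.
    async with websockets.connect("ws://127.0.0.1:8000") as ws:
        await ws.send("Hello")   # one user utterance
        reply = await ws.recv()  # styled reply from the model
        print(reply)

asyncio.run(chat_once())
```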