This repository mainly provides the following two features:
- Conversation with AI character
- AITuber streaming
I've written a detailed usage guide in the article below:
This project is developed in the following environment:
- Node.js: ^20.0.0
- npm: 10.8.1
- Clone the repository to your local machine.

  ```bash
  git clone https://github.com/tegnike/aituber-kit.git
  ```

- Open the folder.

  ```bash
  cd aituber-kit
  ```

- Install packages.

  ```bash
  npm install
  ```

- Start the application in development mode.

  ```bash
  npm run dev
  ```

- Open http://localhost:3000 in your browser.
- This feature lets you converse with an AI character.
- It extends pixiv/ChatVRM, the project this repository is based on.
- It is relatively easy to try as long as you have an API key for one of the supported LLMs.
- Recent conversation messages are retained as short-term memory.
- It is multimodal: it can recognize images from the camera or uploaded files when generating responses.
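The "memory" behavior above can be sketched as a sliding window over recent messages. This is only an illustration: `Message`, `MAX_MEMORY`, and `buildPrompt` are assumed names, not the repository's actual implementation.

```typescript
type Message = { role: 'system' | 'user' | 'assistant'; content: string }

// Assumed window size; the real app's retention limit may differ.
const MAX_MEMORY = 10

// Keep only the most recent messages, always re-attaching the system
// prompt (the character's setting prompt) at the front.
function buildPrompt(systemPrompt: string, history: Message[]): Message[] {
  const recent = history.slice(-MAX_MEMORY)
  return [{ role: 'system', content: systemPrompt }, ...recent]
}
```

Older messages simply fall out of the window, which keeps each LLM request bounded in size.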
- Enter your API key for various LLMs in the settings screen.
- OpenAI
- Anthropic
- Google Gemini
- Groq
- Local LLM (No API key is required, but a local API server needs to be running.)
- Dify Chatbot (No API key is required, but a local API server needs to be running.)
- Edit the character's setting prompt if necessary.
- Load a VRM file and background file if available.
- Select a speech synthesis engine and configure voice settings if necessary.
- For VOICEVOX, you can select a speaker from multiple options. The VOICEVOX app needs to be running beforehand.
- For Koeiromap, you can finely adjust the voice. An API key is required.
- For Google TTS, languages other than Japanese can also be selected. Credential information is required.
- For Style-Bert-VITS2, a local API server needs to be running.
- For GSVI TTS, a local API server needs to be running.
- ElevenLabs supports a wide range of languages. An API key is required.
- Start conversing with the character from the input form. Microphone input is also possible.
- It is possible to retrieve YouTube streaming comments and have the character speak.
- A YouTube API key is required.
- Comments starting with '#' are not read.
- Turn on YouTube mode in the settings screen.
- Enter your YouTube API key and YouTube Live ID.
- Configure other settings the same way as "Conversation with AI Character".
- Start streaming on YouTube and confirm that the character reacts to comments.
- Turn on conversation continuity mode to have the character speak even when there are no comments.
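The comment-reading rule above (comments starting with '#' are not read) can be sketched as a simple filter. `selectReadableComments` is an assumed name for illustration, not the app's actual function:

```typescript
// Keep only the comments the character should read aloud:
// comments starting with '#' are treated as silent/meta comments.
function selectReadableComments(comments: string[]): string[] {
  return comments.filter((comment) => !comment.startsWith('#'))
}
```

For example, `selectReadableComments(['hello', '#mod note'])` returns `['hello']`, so moderators can leave notes in chat without the character speaking them.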
- You can send messages to the server app via WebSocket and get a response.
- Unlike the two features above, this mode does not run entirely within the front-end app, so it is somewhat harder to set up.
- ⚠ This mode is currently not fully maintained, so it may not work.
- Start the server app and open the `ws://127.0.0.1:8000/ws` endpoint.
- Turn on WebSocket mode in the settings screen.
- Configure other settings the same way as "Conversation with AI Character".
- Wait for messages from the server app and confirm that the character reacts.
- You can try it with the server app repository I created: [tegnike/aituber-server](https://github.com/tegnike/aituber-server).
- For detailed settings, see "[Let's develop with a beautiful girl!! [Open Interpreter]](https://note.com/nike_cha_n/n/nabcfeb7aaf3f)".
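A browser-side sketch of the WebSocket flow above. The JSON payload shape and function names are assumptions for illustration; the server app (tegnike/aituber-server) defines the real protocol.

```typescript
// Assumed payload shape for illustration only.
function buildWsMessage(content: string): string {
  return JSON.stringify({ content })
}

// Connect to the endpoint and exchange messages. `WebSocket` is the
// browser global (also available in recent Node versions).
function connectToServer(url = 'ws://127.0.0.1:8000/ws') {
  const WS: any = (globalThis as any).WebSocket
  const socket = new WS(url)
  socket.addEventListener('open', () => socket.send(buildWsMessage('Hello!')))
  socket.addEventListener('message', (event: { data: string }) =>
    console.log('Character reply:', event.data)
  )
  return socket
}
```

The front end only sends and receives messages; everything else (LLM calls, tool use, etc.) happens in the server app.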
- This is a mode where the AI character automatically presents slides.
- You need to prepare slides and script files in advance.
- Proceed to the point where you can interact with the AI character.
- Place the slide folder and script file in the designated folder.
- Turn on Slide Mode in the settings screen.
- Press the Start Slide button to begin the presentation.
- To change the VRM model, replace `public/AvatarSample_B.vrm`. Do not change the file name.
- To change the background image, replace `public/bg-c.jpg`. Do not change the file name.
- Some configuration values can be read from the `.env` file.
- If a value is entered in the settings screen, it takes precedence over the `.env` value.
- Conversation history can be reset in the settings screen.
- Various settings are stored in the browser.
- Elements enclosed in code blocks are not read by TTS.
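The last rule above (code blocks are not read by TTS) can be sketched like this. `stripCodeBlocks` is an assumed helper name, not the app's actual function:

```typescript
// Matches fenced code blocks (three backticks ... three backticks).
const CODE_BLOCK = new RegExp('`{3}[\\s\\S]*?`{3}', 'g')

// Remove fenced code blocks and collapse leftover whitespace so the
// remaining text reads naturally when sent to the TTS engine.
function stripCodeBlocks(text: string): string {
  return text.replace(CODE_BLOCK, '').replace(/\s+/g, ' ').trim()
}
```

The code block still appears on screen; only the spoken text is filtered.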
We are seeking sponsors to continue our development efforts.
Your support will greatly contribute to the development and improvement of the AITuber Kit.
Plus multiple private sponsors
- Following pixiv/ChatVRM, this project is licensed under the MIT License.
- Logo Usage Agreement
- VRM Model Usage Agreement
To add a new language to the project, follow these steps:

1. **Add Language File**:
   - Create a new language directory in the `locales` directory and create a `translation.json` file inside it.
   - Example: `locales/fr/translation.json` (for French)

2. **Add Translations**:
   - Add translations to the `translation.json` file, referring to existing language files.

3. **Update Language Settings**:
   - Open the `src/lib/i18n.js` file and add the new language to the `resources` object.

   ```javascript
   resources: {
     ...,
     fr: { // New language code
       translation: require("../../locales/fr/translation.json"),
     },
   },
   ```

4. **Add Language Selection Option**:
   - Add a new language option to the appropriate part of the UI (e.g., the language selection dropdown in the settings screen) so users can select the language.

   ```html
   <select>
     ...,
     <option value="FR">French - Français</option>
   </select>
   ```

5. **Test**:
   - Verify that the application displays correctly in the new language.

This will add support for the new language to the project.
- You also need to add support for the voice language code. Add the new language code to the `getVoiceLanguageCode` function in the `Introduction` component:

  ```typescript
  const getVoiceLanguageCode = (selectLanguage: string) => {
    switch (selectLanguage) {
      case 'JP':
        return 'ja-JP';
      case 'EN':
        return 'en-US';
      case 'ZH':
      case 'zh-TW':
        return 'zh-TW';
      case 'KO':
        return 'ko-KR';
      case 'FR':
        return 'fr-FR';
      default:
        return 'ja-JP';
    }
  }
  ```
- Add a new language README (`README_fr.md`), logo usage terms (`logo_licence_fr.md`), and VRM model usage terms (`vrm_licence_fr.md`) to the `docs` directory.