# talkinghead

### What is it?
An implementation of Talking Head Anime 3 Demo for AITuber. It possesses the following features:

- Generates random Live 2D-like motion actions from a single static image.
- Lip-syncs to the audio from any TTS engine.

This extension contains the original demo programs for the Talking Head(?) Anime from a Single Image 3: Now the Body Too project. As the name implies, the project allows you to animate anime characters, and you only need a single image of that character to do so. There are two demo programs:

- The `manual_poser` lets you manipulate a character's facial expression, head rotation, body rotation, and chest expansion due to breathing through a graphical user interface, so you can save them as default expressions (e.g., happy, sad, joy).
- The `ifacialmocap_puppeteer` lets you transfer your facial motion to an anime character.

### Hardware Requirements

You can use either CPU or GPU mode (CPU is the default). However, in CPU mode expect about 1 FPS; in GPU mode on an RTX 3060 I am getting about 9-10 FPS.
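
If you are unsure whether GPU mode will work on your machine, a quick check (assuming PyTorch is installed, as the extras server requires) is:

```python
import torch

# GPU mode needs a CUDA-capable card visible to PyTorch.
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
```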

The ifacialmocap_puppeteer requires an iOS device that is capable of computing blend shape parameters from a video feed. This means that the device must be able to run iOS 11.0 or higher and must have a TrueDepth front-facing camera. (See this page for more info.) In other words, if you have the iPhone X or something better, you should be all set.

### How to use

You must launch extras with the following modules for talkinghead to work: `classify` and `talkinghead`!
`classify` is required for handling the `talkinghead.png` file. Additionally, you may use `--talkinghead-gpu` to load the blend models into GPU memory, which makes the animations about 10x faster; GPU acceleration is highly recommended! By default, once the program starts it will load a default image, `SillyTavern-extras\talkinghead\tha3\images\lambda_00.png`. You can verify it is working by going to http://localhost:5100/api/talkinghead/result_feed (or `YOUR EXT URL:PORT/api/talkinghead/result_feed`); see the check at the end of this section.
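
For example, assuming a typical SillyTavern-extras install, the server can be started with both required modules and GPU acceleration like this:

> python server.py --enable-modules=classify,talkinghead --talkinghead-gpu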

- Once the server has started, go to the Extension API tab and connect. Then simply select a character card to load. (See the launch command above.)

- Now select Character Expressions. If you check the image type "talkinghead" box, the script will replace your current character expression with the result of `YOUR EXT URL:PORT/api/talkinghead/result_feed`. Unchecking the box SHOULD return the image to the original expression; however, sometimes you have to send a new message to the chat to "reload" the image.

- If you do not have a `talkinghead.png` file in the character directory, it will simply show either the default image or the last character card that had a `talkinghead.png` file. The animation source image changes when the character card is changed.

- Now open Character Expressions, scroll down to the talkinghead image, and upload an image file that meets the requirements in the "Constraints on Input Images" section below.

- Then check and uncheck the talkinghead box to reload the character. If the image looks odd, it is probably because it is not transparent / has no alpha layer. Otherwise, follow the instructions and template below.
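
To verify that the feed mentioned above is actually streaming, here is a minimal sketch, assuming a local extras server on the default port and the `requests` library (not part of the extension):

```python
import requests

# Connect to the talkinghead output feed without downloading it all at once.
resp = requests.get(
    "http://localhost:5100/api/talkinghead/result_feed",
    stream=True,
    timeout=10,
)
resp.raise_for_status()
print("Status:", resp.status_code)
print("Content-Type:", resp.headers.get("Content-Type"))

# Pull one chunk to confirm frames are flowing, then close the stream.
chunk = next(resp.iter_content(chunk_size=4096))
print(f"Received {len(chunk)} bytes of stream data")
resp.close()
```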

### Constraints on Input Images
In order for the system to work well, the input image must obey the following constraints:

- It should be of resolution 512 x 512. (If the program receives an input image of any other size, it will resize the image to this resolution and also output at this resolution.)
- It must have an alpha channel.
- It must contain only one humanoid character.
- The character should be standing upright and facing forward.
- The alpha channels of all pixels that do not belong to the character (i.e., background pixels) must be 0.

<img alt="image" src="https://github.com/miketako3/talking-head-anime-3-demo-for-aituber/blob/main/docs/input_spec.png?raw=true">
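
A quick way to sanity-check an image against these constraints before uploading it (an illustrative sketch using Pillow, which is not part of the extension):

```python
from PIL import Image  # pip install pillow

img = Image.open("talkinghead.png")

# The model expects a 512 x 512 image with an alpha channel.
assert img.size == (512, 512), f"expected 512x512, got {img.size}"
assert img.mode == "RGBA", f"expected RGBA (alpha channel), got {img.mode}"

# Background pixels must be fully transparent; spot-check one corner.
r, g, b, a = img.getpixel((0, 0))
print("corner alpha:", a, "(0 = fully transparent, as required)")
```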

### Advanced Section

### Python Environment

In addition to the base feature (`app.py`), both `manual_poser` and `ifacialmocap_puppeteer` are available as desktop applications. To run them, you need to set up an environment for running programs written in the Python language. The environment needs to have the following software packages:

* Python >= 3.8
* PyTorch >= 1.11.0 with CUDA support
* SciPy >= 1.7.3
* wxPython >= 4.1.1
* Matplotlib >= 3.5.1

One way to do so is to install Anaconda and run the following commands in your shell:

> conda create -n talking-head-anime-3-demo python=3.8
> conda activate talking-head-anime-3-demo
> conda install pytorch torchvision torchaudio cudatoolkit=11.3 -c pytorch
> conda install scipy
> pip install wxpython
> conda install matplotlib
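
Once installed, a quick way to confirm the environment meets these version requirements (an illustrative check, not part of the project):

```python
import sys
import matplotlib
import scipy
import torch
import wx

# Compare against the requirements listed above.
print("Python    ", sys.version.split()[0], "(needs >= 3.8)")
print("PyTorch   ", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("SciPy     ", scipy.__version__, "(needs >= 1.7.3)")
print("wxPython  ", wx.__version__, "(needs >= 4.1.1)")
print("Matplotlib", matplotlib.__version__, "(needs >= 3.5.1)")
```
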
### Additional Blend Models

Only one model (the lightest) is included. If you want the additional blend models, you need to download the model files from https://www.dropbox.com/s/y7b8jl4n2euv8xe/talking-head-anime-3-models.zip?dl=0 and unzip them to the `SillyTavern-extras\talkinghead\tha3\models` folder (a download sketch follows at the end of this section). In the end, the data folder should look like:

+ tha3

Note that before running the command above, you might have to activate the Python environment with

> conda activate extras
if you have not already activated the environment.
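
As referenced in the Additional Blend Models section above, here is a minimal sketch of fetching and unpacking the models. It assumes the Dropbox link above still works and that `?dl=1` forces a direct download; adjust the destination path to your install:

```python
import io
import urllib.request
import zipfile

# Dropbox link from above; ?dl=1 requests the file itself instead of a preview page.
URL = "https://www.dropbox.com/s/y7b8jl4n2euv8xe/talking-head-anime-3-models.zip?dl=1"
DEST = r"SillyTavern-extras\talkinghead\tha3\models"  # adjust to your install location

with urllib.request.urlopen(URL) as resp:
    data = resp.read()

# Unpack the archive into the models folder; check the resulting layout
# against the folder structure shown above.
zipfile.ZipFile(io.BytesIO(data)).extractall(DEST)
print("extracted to", DEST)
```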