3DV 2021: Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry Cho-Ying Wu, Qiangeng Xu, Ulrich Neumann, CGIT Lab at University of Southern California
Forked from original SynergyNet repo [paper] [video] [project page]
This version is adapted to work for morphable model parameters using the Basel Face Model (BFM) and FLAME model. There is one branch for BFM and a separate one for FLAME.
An additional adaptation is that training relies on data coming from the output of the Datatool (a Datatool is a tool created by Bonseyes for extracting information from a dataset; see more info here).
Cloning and Data files:
To clone the repository, run: git clone https://github.com/david1309/SynergyNet_bonseyes
Training relies on getting data via the Datatool API. For this you need to place the following files and repositories under a folder called data/ (this folder can be located inside the repo or anywhere in your system):
- Dataloader files: copy all the files contained under data_scripts/ into your data/ folder. To do this run: cp -r data_scripts/* <path_to_data_folder>
- Datatool API repo: clone the Datatool API repo into the data/ folder, rename it, and clone its submodules. To do this run: git clone --recurse-submodules https://gitlab.com/bonseyes/artifacts/data_tools/apis/datatool-api datatool_api (you need to request access from Bonseyes in order to be able to clone).

Once this is done, open data_config.yaml and modify the data path to point to your data/ folder, e.g. data_path: <path_to_data_folder>
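For reference, a minimal sketch of the edited data_config.yaml might look like the following. Only the data_path key is documented in this README; the path value is a placeholder you should replace with your own:

```yaml
# Minimal sketch of data_config.yaml -- only data_path is documented here.
# Replace the value with the absolute path to your data/ folder.
data_path: /home/<user>/data
```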
Installing requirements:
Run the following commands to install the requirements (the installation assumes your GPU runs CUDA 11.3; if your CUDA version is different, go to the PyTorch website and select another version):
pip3 install torch torchvision --extra-index-url https://download.pytorch.org/whl/cu113
pip install -e .
After this, place the BFM model .mat file inside the bfm_utils/morphable_models/ folder. Download the file from here.
To train the model, you can run the provided bash script: bash train_script.sh
For a quick debug run to validate changes in the code, run: bash train_script_debug.sh
Important arguments / hyperparameters of the training script are:
- --datatool-root-dir: the path to the output folder of the Datatool.
- --train-tags / --val-tags: names of the Datatool tags (subfolders or sub-datasets) you want to use for training and validation.
- --exp-name: name of the experiment, used for saving all related checkpoints (saved models, logs, result images, TensorBoard files, etc.). Checkpoints are saved under ckpts/<exp-name>.
- --debug: if True, quickly runs the training using only a few samples of the datatool (few == batch-size).
- --epochs, --batch-size, --base-lr: variables controlling training details.
You can also train the model using the command line and passing the desired arguments:
python main_train.py --datatool-root-dir="/root/output_debug_all_wv" --train-tags="IBUG" --val-tags="IBUG_Flip" --debug=True --exp-name="debug_cmd" --epochs=10 --batch-size=32
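The flags above follow the standard argparse convention, where hyphens in flag names become underscores in attribute names (e.g. --batch-size becomes args.batch_size). The sketch below shows how such a parser could be defined; it is a hypothetical illustration, not the actual main_train.py code, and the default values are assumptions:

```python
# Hypothetical sketch of the training-script argument parser.
# The real main_train.py may define these flags differently.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="SynergyNet training (sketch)")
    parser.add_argument("--datatool-root-dir", type=str, required=True,
                        help="path to the output folder of the Datatool")
    parser.add_argument("--train-tags", type=str,
                        help="Datatool tags (sub-datasets) used for training")
    parser.add_argument("--val-tags", type=str,
                        help="Datatool tags used for validation")
    parser.add_argument("--exp-name", type=str, default="default",
                        help="experiment name; checkpoints go under ckpts/<exp-name>")
    parser.add_argument("--debug", type=lambda s: s.lower() == "true", default=False,
                        help="if True, train on only ~batch-size samples")
    parser.add_argument("--epochs", type=int, default=50)      # default is assumed
    parser.add_argument("--batch-size", type=int, default=32)  # default is assumed
    parser.add_argument("--base-lr", type=float, default=1e-4) # default is assumed
    return parser

# Parse the same arguments as the example command above.
args = build_parser().parse_args([
    "--datatool-root-dir=/root/output_debug_all_wv",
    "--train-tags=IBUG", "--val-tags=IBUG_Flip",
    "--debug=True", "--exp-name=debug_cmd",
    "--epochs=10", "--batch-size=32",
])
```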
Bibtex
If you find our work useful, please consider citing it:
@INPROCEEDINGS{wu2021synergy,
author={Wu, Cho-Ying and Xu, Qiangeng and Neumann, Ulrich},
booktitle={2021 International Conference on 3D Vision (3DV)},
title={Synergy between 3DMM and 3D Landmarks for Accurate 3D Facial Geometry},
year={2021}
}
Acknowledgement
The project is developed on top of [3DDFA] and [FSA-Net]; we thank them for their wonderful work. We also thank [3DDFA-V2] for the face detector and rendering code.