# Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation

Xiaojuan Wang, Boyang Zhou, Brian Curless, Ira Kemelmacher-Shlizerman, Aleksander Holynski, Steve Seitz
*(Demo: input frame 1 | input frame 2 | generated video)*
To set up the environment, run:

```bash
git clone https://github.com/jeanne-wang/svd_keyframe_interpolation.git
cd svd_keyframe_interpolation
conda env create -f environment.yml
```
Download the finetuned checkpoint and place it under `checkpoints/`:

```bash
mkdir -p checkpoints/svd_reverse_motion_with_attnflip
cd checkpoints/svd_reverse_motion_with_attnflip
pip install gdown
gdown 1H7vgiNVbxSeeleyJOqhoyRbJ97kGWGOK --folder
```
The example input keyframe pairs are in the `examples/` folder, and the corresponding interpolated videos (1024×576, 25 frames) are in the `results/` folder.
To interpolate, run:

```bash
bash keyframe_interpolation.sh
```
We run the code on an NVIDIA A100 GPU. To run on smaller GPUs such as an A40, set a smaller `decode_chunk_size`.
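As a rough illustration of why a smaller `decode_chunk_size` lowers peak memory: the VAE decodes the generated latent frames in chunks of that size, so fewer frames sit in the decoder at once, at the cost of more decode passes. A minimal sketch of the chunking arithmetic (the helper name below is hypothetical, not part of this repo):

```python
def decode_chunk_sizes(num_frames: int, decode_chunk_size: int) -> list[int]:
    """Split num_frames latent frames into decode chunks of at most
    decode_chunk_size frames each (the last chunk may be smaller)."""
    return [min(decode_chunk_size, num_frames - start)
            for start in range(0, num_frames, decode_chunk_size)]

# 25 frames decoded 8 at a time -> four decoder passes instead of one
print(decode_chunk_sizes(25, 8))  # [8, 8, 8, 1]
```

Smaller chunks trade extra decoder passes for lower peak VRAM; `decode_chunk_size=1` decodes one frame at a time.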
The synthetic training video dataset will be released soon.