
Keyframe Interpolation with Stable Video Diffusion

Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation
Xiaojuan Wang, Boyang Zhou, Brian Curless, Ira Kemelmacher, Aleksander Holynski, Steve Seitz
arXiv | Project Page

Example: input frame 1 + input frame 2 → generated video

Quick Start

1. Setup repository and environment

git clone https://github.com/jeanne-wang/svd_keyframe_interpolation.git
cd svd_keyframe_interpolation

conda env create -f environment.yml

2. Download checkpoint

Download the fine-tuned checkpoint and place it under checkpoints/:

mkdir -p checkpoints/svd_reverse_motion_with_attnflip
cd checkpoints/svd_reverse_motion_with_attnflip
pip install gdown
gdown 1H7vgiNVbxSeeleyJOqhoyRbJ97kGWGOK --folder

3. Launch the inference script!

Example input keyframe pairs are in the examples/ folder, and the corresponding interpolated videos (1024x576, 25 frames) are written to the results/ folder.
To interpolate, run:

bash keyframe_interpolation.sh

We ran the code on an A100; to run on smaller GPUs such as an A40, set a smaller "decode_chunk_size".
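
For reference, the sketch below shows what decode_chunk_size controls: how many frames the VAE decodes at once, trading peak GPU memory for speed. It is a minimal illustration using the stock diffusers StableVideoDiffusionPipeline, not this repository's fine-tuned checkpoint or inference script; the model id, input path, and parameter values are assumptions.

# Minimal sketch (illustrative only, not the repo's inference script).
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import load_image, export_to_video

pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",  # assumed base model
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

image = load_image("examples/example_001/frame1.png")  # hypothetical path
image = image.resize((1024, 576))

# decode_chunk_size = number of frames decoded by the VAE per chunk:
# smaller values lower peak memory (useful on an A40), larger values are faster.
frames = pipe(image, num_frames=25, decode_chunk_size=4).frames[0]
export_to_video(frames, "results/example_001.mp4", fps=7)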

Lightweight fine-tuning

The synthetic training video dataset will be released soon.
