MLAnimator

Repo for storing the files I use to make animations with VQGAN + CLIP (Z-Quantize method) and lucidrains' BigSleep and DeepDaze projects.

MLAnimator.py sorts the output folder of big-sleep, deep-daze, or VQGAN + CLIP, and it can use FFMPEG to animate the results easily. The notebooks have been simplified and include MLAnimator cells; the original notebooks can be found in their source repos.

Google Colab notebooks to try for yourself:

VQGAN + CLIP w/ MLAnimator: VQGAN + CLIP + MLAnimator Colab Link

deep-daze w/ MLAnimator: DeepDaze + MLAnimator Colab Link

big-sleep w/ MLAnimator: BigSleep + MLAnimator Colab Link

MLAnimator: MLAnimator Colab Link

Source Repos and Downloads:

BigSleep: https://github.com/lucidrains/big-sleep

DeepDaze: https://github.com/lucidrains/deep-daze

VQGAN: https://github.com/CompVis/taming-transformers

Original VQGAN + CLIP Notebook: VQGAN + CLIP Colab Link

FFMPEG: https://www.ffmpeg.org/

Personal Repo of VQGAN+CLIP (Z-Quantize Method): https://github.com/Hoversquid/VQGAN_CLIP_Z_Quantize

How to use:

Open a command window in the directory where you wish to store the output folder. Run MLAnimator.py with Python, setting the -dir argument to the directory that contains the unsorted image output (or to the directory that contains the sorted folders after MLAnimator has already been run on it).
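For example, to point the program at a run's output (the folder name here is just illustrative):

```
python MLAnimator.py -dir ./vqgan_output
```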

The program will ask you for a starting frame and the number of frames to render. If you skip setting a starting frame, the frames will be chosen from the highest-numbered files. If you do set a starting frame, frames are selected starting with that image and working up through higher-numbered frames. You can also supply the starting frame and the number of frames with the arguments -sf <number> and -f <number>. Use -h to see all available arguments.
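For example, to start at frame 50 and use 300 frames (both numbers are illustrative):

```
python MLAnimator.py -dir ./vqgan_output -sf 50 -f 300
```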

You can use the argument -m to mirror the animation, making a seamless loop but doubling the file size.

Using the argument -ft <filetype> lets you set the animation format to a gif or an mp4.
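Combining the two options above, a mirrored gif might be produced like this (values are illustrative):

```
python MLAnimator.py -dir ./vqgan_output -sf 50 -f 300 -m -ft gif
```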

Using the -a option, MLAnimator will select all of the images in the chosen directory tree and make animations from them; it requires no further input after starting.
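For example (again with an illustrative folder name):

```
python MLAnimator.py -dir ./renders -a
```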

MLAnimator.py is also in the Colab notebook, but you may want to output your renders to Google Drive so your images are saved in case your runtime limit is hit.
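In Colab, the usual way to do that is to mount your Drive and point the output there; a minimal sketch (the output folder name is an assumption, use whatever path you like):

```python
from google.colab import drive

# Mount Google Drive so renders survive the Colab runtime being recycled
drive.mount('/content/drive')

# Write renders to a folder on Drive (path is illustrative)
output_dir = '/content/drive/MyDrive/MLAnimator_output'
```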

Using the VQGAN_CLIP Notebook:

The prompts that you supply for the image generation are saved as a text file in the output directory, under a directory called /Saved_Prompts. This text can be copied and pasted into the main running cell to reuse the prompts from an earlier run.
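If you would rather print the saved prompts than browse the folder, something like this works (the output path is an assumption, and the prompt files are assumed to be plain .txt):

```python
from pathlib import Path

# Print each saved prompt file so it can be copy-pasted back into the notebook
prompts_dir = Path("output/Saved_Prompts")  # adjust to your output directory
for prompt_file in sorted(prompts_dir.glob("*.txt")):
    print(f"--- {prompt_file.name} ---")
    print(prompt_file.read_text())
```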

Known Issue:

If your quoted path for -dir ends with a backslash (like this: \"), the program may not run, because the backslash escapes the closing quote. Just remove the trailing backslash.
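For example, on Windows (the path is illustrative):

```
# Problematic: the trailing backslash escapes the closing quote
python MLAnimator.py -dir "C:\renders\output\"

# Works: no trailing backslash
python MLAnimator.py -dir "C:\renders\output"
```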

Stay tuned:

I'm planning on updating this repo and making more in-depth tutorials for machine learning projects soon!
