This is a framework that generates a video from input waveforms. The waveforms can be music, or serve as automation clips, as in digital music production.
After `git clone`, run `sh framework.sh` with the following arguments:
- format: For example `1920x1080p60` for 1080p video at a framerate of 60.
- mapper: A program that, given the format and the framefile as input, generates one frame of video.
- cache: The directory where all data that can be regenerated is kept. Can be deleted without loss of information.
- audio...: `.wav` files that will be chopped up to generate the framefiles. Only the first one is used as the audio track for the video, and the video's length is calculated from it.
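For instance, assuming a mapper script named `mapper.sh` and input tracks `music.wav` and `bass.wav` (all names here are placeholders), an invocation might look like:

```
sh framework.sh 1920x1080p60 ./mapper.sh ./cache music.wav bass.wav
```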
This will set up a makefile that can be called, from the same directory that `framework.sh` was called from, with (probably) `make -f ${cache}/${format}.mk`. The given mapper will be called as `${mapper} ${format} <${framefile} >${image}` for each frame.
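As an illustration of the mapper contract above, here is a minimal sketch (not part of the framework; the `mapper` function name, the value range [-1, 1], and the choice of PPM output are all assumptions) that renders each frame as a solid grayscale image whose brightness follows the first audio value in the framefile:

```shell
#!/bin/sh
# Example mapper, called as: ${mapper} ${format} <${framefile} >${image}
# Emits a solid grayscale binary PPM; brightness follows the first value
# in the framefile. Assumes values lie in [-1, 1].
mapper() {
    format=$1
    width=${format%%x*}              # e.g. 1920 from 1920x1080p60
    rest=${format#*x}
    height=${rest%%p*}               # e.g. 1080
    read -r line                     # first "${audiofile}: ${value}" line
    value=${line#*: }
    awk -v w="$width" -v h="$height" -v v="$value" 'BEGIN {
        g = int((v + 1) / 2 * 255)       # map [-1, 1] to [0, 255]
        if (g < 0) g = 0
        if (g > 255) g = 255
        printf "P6\n%d %d\n255\n", w, h  # binary PPM header
        for (i = 0; i < w * h; i++)
            printf "%c%c%c", g, g, g     # one gray RGB pixel
    }'
}

# Demo: render one tiny 4x3 frame from a one-line framefile.
printf 'track.wav: 0\n' | mapper 4x3p60 > frame.ppm
```

Note that awk's `%c` is locale-sensitive for codes above 127, so a real mapper would more likely delegate pixel generation to a proper image tool; this sketch only shows the calling convention.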
- format: the format of the video. Resolution and framerate, encoded as `${width}x${height}p${fps}`.
- frame: the 'timecode'. The second, with as many leading zeroes as needed, and the frame within that second, both starting from zero. Encoded as `${second}.${frame}`.
- framefile: a file or input stream with a line `${audiofile}: ${value}\n` for every input audio file.
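For example, a framefile for two input tracks (the filenames and the value range are assumptions) might contain:

```
music.wav: 0.25
bass.wav: -0.7
```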