
Making advanced live portrait feature-reactive (audio, MIDI, motion, proximity, brightness, color, depth, and more)


ryanontheinside/ComfyUI-FlexLivePortrait


ComfyUI-AdvancedLivePortrait

Update

11/07/2024

Expressions are feature-reactive (features: audio, MIDI, motion, proximity, and more).

8/21/2024

You can now create a video without a driving video.

Face tracking of the source video is supported.

The workflow has been updated.

Introduction

AdvancedLivePortrait is faster and has a real-time preview.

Demo video: default.mp4

Edit facial expressions in photos.

Insert facial expressions into videos.

Create animations using multiple facial expressions.

Extract facial expressions from sample photos.

Installation

This project is registered with ComfyUI-Manager, so you can install it automatically through the manager.

Usage

The workflows and sample data are located in '\custom_nodes\ComfyUI-AdvancedLivePortrait\sample'.


You can add expressions to the video. See 'workflow2_advanced.json'.

The 'command' field in 'workflow2_advanced.json' uses the following format:


[Motion index] = [Changing frame length] : [Length of frames waiting for next motion]

Motion index 0 is the original source image.

Subsequent motions are numbered in the order they are connected through 'motion_link'.
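To make the format above concrete, here is a minimal sketch of how such a command string could be parsed. It assumes one motion per line and the separators described above; the function name and the assumption that motions are newline-separated are illustrative, not taken from this project's code.

```python
# Hypothetical parser for the command format described above:
#   [motion index] = [changing frame length] : [waiting frame length]
# Assumes one motion per line (an assumption, not confirmed by the README).

def parse_command(command: str):
    """Return a list of (motion_index, change_frames, wait_frames) tuples."""
    motions = []
    for line in command.strip().splitlines():
        if not line.strip():
            continue  # skip blank lines
        index_part, frames_part = line.split("=")
        change_part, wait_part = frames_part.split(":")
        motions.append((int(index_part), int(change_part), int(wait_part)))
    return motions

# Example: motion 1 transitions over 20 frames, then waits 10;
# motion 2 transitions over 15 frames, then waits 5.
print(parse_command("1=20:10\n2=15:5"))
# [(1, 20, 10), (2, 15, 5)]
```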

Linking the driving video to 'src_images' will add facial expressions to the driving video.


You can save and load expressions with the 'Save Exp Data' and 'Load Exp Data' nodes.

Saved expressions are stored in '\ComfyUI\output\exp_data\'.
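As a small sketch of where that folder sits relative to the ComfyUI root (the relative layout is taken from the path above; creating the folder up front is just an illustration):

```python
import os

# Build the expression-data path relative to the ComfyUI root,
# matching the '\ComfyUI\output\exp_data\' layout described above.
exp_dir = os.path.join("ComfyUI", "output", "exp_data")

# 'Save Exp Data' writes expression files here; 'Load Exp Data' reads them back.
os.makedirs(exp_dir, exist_ok=True)
print(exp_dir)
```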


Thanks

Original project: https://liveportrait.github.io/

This project uses a model converted by kijai: https://github.com/kijai/ComfyUI-LivePortraitKJ
