Credits for face morphing script: https://github.com/valillon/FaceMorph
Place two facial images in the input folder, then run the command:
./run_morphing_with_images <image1> <image2> <fps> <duration_in_ms>
Example: ./run_morphing_with_images input/frida-paint.jpg input/frida-crop.png 30 3000
Outputs an mp4 file that morphs between the faces of the two images.
Build the project with:
./build.sh
Original README of FaceMorph:
This code generates a morphing effect between two faces.
1. Facial landmarks recognition in both faces (Dlib).
2. Triangular Delaunay segmentation.
3. Affine transformation between the Delaunay triangles of both faces.
4. Alpha blending on the paired triangles with a given transparency.
Steps 3 and 4 are iterated for different values of alpha to generate a series of morphing frames. After that, the frames are converted into a video file.
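As a rough sketch of steps 3 and 4 for one pair of matched triangles (illustrative names, not this repository's exact API): t1 and t2 are the corresponding triangles in the two faces, t is the triangle interpolated for the current frame, alpha is the blending factor, and img_out is a pre-allocated float frame of the same size as the inputs.

import cv2
import numpy as np

def morph_triangle(img1, img2, img_out, t1, t2, t, alpha):
    # Bounding rectangles of the source, destination and intermediate triangles
    r1 = cv2.boundingRect(np.float32([t1]))
    r2 = cv2.boundingRect(np.float32([t2]))
    r  = cv2.boundingRect(np.float32([t]))

    # Triangle coordinates relative to their own bounding rectangles
    t1_rect = [(p[0] - r1[0], p[1] - r1[1]) for p in t1]
    t2_rect = [(p[0] - r2[0], p[1] - r2[1]) for p in t2]
    t_rect  = [(p[0] - r[0],  p[1] - r[1])  for p in t]

    # Mask that selects the intermediate triangle inside its rectangle
    mask = np.zeros((r[3], r[2], 3), dtype=np.float32)
    cv2.fillConvexPoly(mask, np.int32(t_rect), (1.0, 1.0, 1.0), 16, 0)

    # Affine-warp each source patch onto the intermediate triangle (step 3)
    size = (r[2], r[3])
    patch1 = img1[r1[1]:r1[1] + r1[3], r1[0]:r1[0] + r1[2]]
    patch2 = img2[r2[1]:r2[1] + r2[3], r2[0]:r2[0] + r2[2]]
    warp1 = cv2.getAffineTransform(np.float32(t1_rect), np.float32(t_rect))
    warp2 = cv2.getAffineTransform(np.float32(t2_rect), np.float32(t_rect))
    warped1 = cv2.warpAffine(np.float32(patch1), warp1, size, borderMode=cv2.BORDER_REFLECT_101)
    warped2 = cv2.warpAffine(np.float32(patch2), warp2, size, borderMode=cv2.BORDER_REFLECT_101)

    # Alpha-blend the two warped patches (step 4) and paste them into the frame
    blended = (1.0 - alpha) * warped1 + alpha * warped2
    roi = img_out[r[1]:r[1] + r[3], r[0]:r[0] + r[2]]
    img_out[r[1]:r[1] + r[3], r[0]:r[0] + r[2]] = roi * (1.0 - mask) + blended * mask

In the full pipeline this function would be called once per Delaunay triangle and per frame, with t interpolated as (1 - alpha) * t1 + alpha * t2.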
This code is a modification of the code originally posted in the blog post Face Morph Using OpenCV — C++ / Python; see that post for more details.
Note that, unlike the original code, only the image corners and half-way points along the borders are added to the facial keypoints. The neck and ear points that were manually added in the original code have been omitted to make the process completely automatic.
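As a rough illustration of how this can look (assumed variable names and the standard shape_predictor_68_face_landmarks.dat model, not necessarily the exact code in this repository):

import dlib
import cv2

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

img = cv2.imread("input/frida-crop.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
face = detector(gray, 1)[0]                     # first detected face
shape = predictor(gray, face)
points = [(shape.part(i).x, shape.part(i).y) for i in range(68)]

# Append the image corners and the half-way points along each border so the
# Delaunay triangulation covers the whole frame, with no manual points needed.
h, w = img.shape[:2]
points += [(0, 0), (w - 1, 0), (w - 1, h - 1), (0, h - 1),
           (w // 2, 0), (w - 1, h // 2), (w // 2, h - 1), (0, h // 2)]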
Run the install script for your OS (macos or ubuntu), or install each of the libraries included therein. Python 2 is required.
$ ./install/install_morphing_dependencies_<myOS>.sh
Further installation details for OpenCV and Dlib can be found here, here and here.
The following script runs the entire pipeline.
$ ./run_morphing_with_images.sh <image1> <image2> <framerate> <duration_milis>
- image1: initial image.
- image2: final image.
- framerate: frame rate in fps.
- duration_milis: morphing duration in milliseconds.
$ ./run_morphing_with_images.sh ./example/van_gogh.png ./example/paul_goughin.png 30 2000
$ ./run_morphing_with_videos.sh <video1> <video2> <framerate>
- video1: initial video clip.
- video2: final video clip.
- framerate: frame rate in fps.
The shorter of the two video files determines the duration of the final morphing clip.
Keeping the landmarks from the first face for every morphing iteration transfers the facial expression from one face to the other (the cartoon effect). There are cleaner ways to do it, but a simple one is to comment out the line below, where the landmarks are refreshed.
for (f, a) in enumerate(np.linspace(0, 100, n_frames)):
    # some code here
    points = []  # comment this line out to get the cartoon effect
If you only want the cartoon frame itself, request just 2 frames:
$ ./run_morphing_with_images.sh <image1> <image2> 2 1000
Just double the alpha range and fold it back down once it exceeds 1, so the morph runs forward during the first half of the frames and backward during the second half.
for (f, a) in enumerate(np.linspace(0, 100, n_frames)):
    alpha = float(a) / 100  # 0 to 1
    alpha = 2 * alpha
    if alpha > 1: alpha = 2 - alpha
Run the script with the desired image as input. Delaunay and Voronoi segmentations are saved next to the input image. Unlike the morphing effect, no corners are used here, only facial landmarks.
$ python delaunay.py -i <image>
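For orientation, a minimal sketch of how the Delaunay triangulation and Voronoi facets can be obtained with OpenCV's Subdiv2D from a list of landmark points (illustrative names, not necessarily delaunay.py's internals):

import cv2

def delaunay_and_voronoi(img_shape, points):
    # img_shape = (height, width); points = list of (x, y) landmark tuples
    h, w = img_shape[:2]
    subdiv = cv2.Subdiv2D((0, 0, w, h))
    for (x, y) in points:
        subdiv.insert((float(x), float(y)))
    triangles = subdiv.getTriangleList()              # rows of (x1, y1, x2, y2, x3, y3)
    facets, centers = subdiv.getVoronoiFacetList([])  # one polygon per input point
    return triangles, facets, centers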
Although Dlib is quite robust in most cases, it has its own limitations for detecting facial landmarks. If the nose and eyes are not sufficiently visible, face detection may fail. It will also fail to detect non-real faces, for instance cartoons, even if they have eyes, a nose and a mouth.
Rafael Redondo, Dec 2018.