Thanks a lot for your work! I am having trouble generating a picture with the same lighting as the reference image.
I have tried the same method as in #6:
1. Utilizing R_Net to extract the 257-dim coefficients of both the source and reference images, and discarding the last three elements (leaving 254).
2. Combining the identity coefficient of the source image with the other three factor coefficients (expression, lighting, and pose) of the reference image to form a new coefficient for face generation.
3. Adding random noise to the above coefficient and using truncate_generation to obtain the manipulated result.
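For clarity, here is a rough sketch of steps 1–3. The 254-dim layout below follows the standard Deep3DFaceReconstruction ordering (identity / expression / texture / angles / gamma), which is my assumption; the call into truncate_generation is only indicated in a comment since I adapted it from the repo's scripts:

```python
import tensorflow as tf

# Assumed 254-dim layout after dropping the 3 translation values
# (standard Deep3DFaceReconstruction ordering -- please correct me
# if the repo splits the coefficients differently):
#   [0:80]    identity (shape)
#   [80:144]  expression
#   [144:224] texture
#   [224:227] pose / rotation angles
#   [227:254] gamma (lighting)

def mix_coefficients(src_coeff, ref_coeff):
    """Keep identity (and texture) from the source; take expression,
    pose, and lighting from the reference. Both inputs: [batch, 254]."""
    identity   = src_coeff[:, 0:80]
    expression = ref_coeff[:, 80:144]
    texture    = src_coeff[:, 144:224]   # texture kept with the source identity
    angles     = ref_coeff[:, 224:227]
    gamma      = ref_coeff[:, 227:254]
    return tf.concat([identity, expression, texture, angles, gamma], axis=1)

# mixed = mix_coefficients(src_coeff, ref_coeff)
# `mixed` plus a random noise vector is then fed to truncate_generation,
# the same way the repo's generation script does it.
```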
However, the generated images do not preserve the identity information (or the other attributes) of the source image.
[Attached: source image, reference image, generated image]
I have also tried the method that optimizes from an average latent vector in W+ space: I embed both the source and reference images into W+ space, updating each latent with an Adam optimizer for 3000 iterations, and then combine them using final_vector = tf.concat([src_vector[:, :8, :], ref_vector[:, 8:, :]], axis=1), but I get an output like this:
[Attached: generated image 2]
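For reference, my W+ embedding step looks roughly like the sketch below (TF 1.x style; `synthesize`, `image_loss`, `w_avg`, and `NUM_LAYERS` are placeholders for the synthesis network, the reconstruction loss, the average latent, and the latent depth I use, not the repo's actual API):

```python
import tensorflow as tf  # TF 1.x, to match the StyleGAN-style codebase

NUM_LAYERS = 14  # assumed W+ depth; depends on the generator resolution

def embed_in_w_plus(sess, target_image, synthesize, image_loss, w_avg,
                    lr=0.01, steps=3000):
    """Optimise a W+ latent so that synthesize(w_plus) matches target_image.

    `synthesize` and `image_loss` stand in for the synthesis network and
    for whatever reconstruction loss is used (L2 / VGG / LPIPS); they are
    assumptions here, not the actual API of the repo.
    """
    existing = set(tf.global_variables())
    # start from the average latent, tiled across all W+ layers
    w_plus = tf.Variable(tf.tile(w_avg[tf.newaxis, tf.newaxis, :],
                                 [1, NUM_LAYERS, 1]), name='w_plus_opt')
    loss = image_loss(synthesize(w_plus), target_image)
    step = tf.train.AdamOptimizer(lr).minimize(loss, var_list=[w_plus])
    # initialise only the newly created variables (latent + Adam slots),
    # leaving the already-loaded generator weights untouched
    new_vars = [v for v in tf.global_variables() if v not in existing]
    sess.run(tf.variables_initializer(new_vars))
    for _ in range(steps):
        sess.run(step)
    return sess.run(w_plus)

# src_vector = embed_in_w_plus(sess, src_image, ...)
# ref_vector = embed_in_w_plus(sess, ref_image, ...)
# coarse layers (0-7) from the source, the remaining layers from the reference:
# final_vector = tf.concat([src_vector[:, :8, :], ref_vector[:, 8:, :]], axis=1)
```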
Looking forward to your reply! I would appreciate it if you could provide some demo code.