Hello and congrats on your great work! Just to clarify something: is it possible to generate new images for the same ID while providing a text input to guide the modification, e.g. changing hair color, pose, etc.?
Thanks in advance.
Hi! This model focuses exclusively on ID information and adapts the CLIP encoder accordingly, so it is not designed to follow text prompts. Text guidance could perhaps be achieved by combining our ID encoder with the original text encoder in some way, but in the current state we only support ID input (plus compatible conditions/adapters such as ControlNet).
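To give an idea of what such a combination might look like, here is a rough, untested sketch (not a supported feature of this repo): the ID prompt embeddings could be concatenated with embeddings from the original, non-ID-adapted SD-1.5 text encoder and passed to the pipeline via `prompt_embeds`. The checkpoint name, the `encode_text` helper, and the assumption that `project_face_embs(pipeline, face_embs)` returns `(1, 77, 768)` prompt embeddings are all placeholders taken from this thread, not guaranteed behavior.

```python
# Rough, untested sketch: append ordinary text conditioning to the ID
# conditioning by concatenating two sets of prompt embeddings.
# Assumptions (not part of this repo's documented API):
#   - `pipeline` is the diffusers StableDiffusionPipeline set up as in the demo,
#   - `project_face_embs(pipeline, face_embs)` returns (1, 77, 768) ID prompt embeddings,
#   - any SD-1.5 checkpoint can supply the original (non-ID) text encoder.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

device = "cuda"
sd15 = "stable-diffusion-v1-5/stable-diffusion-v1-5"  # any SD-1.5 mirror

tokenizer = CLIPTokenizer.from_pretrained(sd15, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(sd15, subfolder="text_encoder").to(device)

@torch.no_grad()
def encode_text(prompt):
    """Encode a prompt with the original CLIP text encoder -> (1, 77, 768)."""
    ids = tokenizer(prompt, padding="max_length", truncation=True,
                    max_length=tokenizer.model_max_length,
                    return_tensors="pt").input_ids.to(device)
    return text_encoder(ids)[0]

# ID prompt embeddings produced as usual by the ID encoder.
id_embs = project_face_embs(pipeline, face_embs)              # (1, 77, 768)

# Concatenate ID tokens and text tokens along the sequence dimension.
cond = torch.cat([id_embs, encode_text("floating on top of water")], dim=1)
# Unconditional embeddings must have the same sequence length for CFG.
uncond = torch.cat([encode_text(""), encode_text("")], dim=1)

images = pipeline(prompt_embeds=cond,
                  negative_prompt_embeds=uncond,
                  num_inference_steps=25,
                  guidance_scale=3.0).images
```

Since the model was trained only with ID conditioning, there is no guarantee the UNet actually responds to the appended text tokens; making this work reliably would most likely require additional training or an adapter.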
Hi,
Thanks for sharing your work. I've been experimenting with it and encountered some issues related to identity preservation when altering inputs in the project_face_embs method:
1. Replacing the text: I replaced the "photo of an id person" prompt with different texts, but none of the generated images retained the identity of the reference image.
2. Adding extra text: I also tried appending additional descriptions, such as "floating on top of water". Again, the identity of the original was not preserved.
It seems that these changes significantly disrupt the conditioning and break identity preservation. Do you have any suggestions for a workaround or fix?
Thanks for your help!