[Suggestion] Better documentation about landmarks #246
To complement: the replies to issues #69 and #31, and the guide on this question, are very helpful. The major problem I still have is this: after annotating the images, how can I know which vertices of a model like the SFM or the 4DFM those facial landmarks should correspond to? I can only think of using 3D software like Blender to do the job manually, but that seems very error-prone. One of the answers mentions that the 4DFM has landmark detection, but the whole workflow is still very unclear to me. At the end of the day, I think the "Sample code" section would be much more useful if it did not assume that the user already has a landmark-to-vertex file and facial annotations, and instead roughly explained the workflow for 3D reconstruction of a raw selfie in the wild.
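For illustration, a landmark-to-vertex mapping is conceptually just a table from landmark identifiers (e.g. ibug point numbers) to vertex indices of the model mesh. Here is a minimal sketch of such a table and a parser for it; the file contents, the vertex indices, and the `parse_mapping` helper are illustrative assumptions, not eos's actual schema or real SFM indices:

```python
# Illustrative mapping: ibug landmark number -> model vertex index.
# The indices below are made up; a real file ships with the model.
EXAMPLE_MAPPING = """\
# ibug_landmark = model_vertex_index
31 = 114
37 = 225
46 = 622
"""

def parse_mapping(text):
    """Parse 'landmark = vertex' lines into a dict, skipping comments/blanks."""
    mapping = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop inline comments
        if not line:
            continue
        landmark, vertex = (part.strip() for part in line.split("="))
        mapping[landmark] = int(vertex)
    return mapping

mapping = parse_mapping(EXAMPLE_MAPPING)
print(mapping["31"])  # → 114
```

Once such a table exists, fitting only needs to match each detected 2D landmark with the 3D position of its mapped vertex; no manual work in Blender is required.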
Hi David, Thanks a lot for these suggestions. It's always good to hear about problems that users are encountering. I am currently quite busy but will certainly take the feedback on board and use it when I can make some time to work on it. Thank you very much; also, I think many users will already find your write-up here very useful!
Update: It turns out that dlib's facial landmark detector already uses the 68-point ibug landmark format. So there is no need to create a new "ibug_to_sfm" landmark-to-vertex file: one can just create the landmark file automatically using dlib and then call eos's routines. The following post explains how: https://www.pyimagesearch.com/2017/04/03/facial-landmarks-dlib-opencv-python/ Anyway, it would be interesting to have an "ibug_to_4dface" toml file. Any chance I am missing it?
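Since dlib outputs the 68 ibug points, the step above can be sketched as dumping them into an ibug-style .pts file that eos reads. The `write_pts` helper below is a hypothetical name, and the dlib detection part is shown in comments because it needs dlib plus the pre-trained predictor file:

```python
def write_pts(points):
    """Format (x, y) pairs as an ibug-style .pts landmark file."""
    lines = ["version: 1", "n_points: {}".format(len(points)), "{"]
    lines += ["{:.3f} {:.3f}".format(x, y) for x, y in points]
    lines.append("}")
    return "\n".join(lines) + "\n"

# With dlib installed, the 68 points can be obtained roughly like this
# (requires the shape_predictor_68_face_landmarks.dat model file):
#
#   import dlib
#   detector = dlib.get_frontal_face_detector()
#   predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
#   img = dlib.load_rgb_image("selfie.jpg")
#   shape = predictor(img, detector(img)[0])
#   points = [(p.x, p.y) for p in shape.parts()]
#   open("selfie.pts", "w").write(write_pts(points))

# Self-contained demo with two dummy points:
print(write_pts([(10.0, 20.0), (30.5, 40.25)]))
```

The resulting .pts file is in the same format as the ibug annotations, so eos's existing ibug mapping can be used unchanged.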
In my opinion that's pretty self-explanatory. It should also be documented on dlib's pages or in their documentation. If it isn't, perhaps file an issue there? But it detects 68 points, so it doesn't take much to find out that it's the 68 points from the ibug scheme. And then it should be clear that one can use the existing ibug mapping file.
What do you mean exactly by "to 4dface"? If you mean the 4DFM, then may I ask if and how you obtained it? As far as I can see, you are not a licensed user, but perhaps you're working for a company that we licensed it to? If you don't want to say publicly, please drop me an email at <4dface>. The 4DFM contains landmark definitions; they're directly included in the model file: eos/include/eos/morphablemodel/MorphableModel.hpp, lines 387 to 400 (at f8b235b).
And then there's a file defining the mappings for ibug.
Hi Patrick, When one learns about dlib's facial landmark detection, the information that the format they use is ibug's 68 points is right there. However, it took me a while to find this out, because I didn't know where to look before (I only found out about this dlib routine yesterday). Coming from Python, I didn't even know what dlib was when I first stumbled upon eos a few months ago. So, what I meant is that including a hint in the Readme such as "You can generate the pts landmarks file by using dlib's facial landmark detection, for example" might save some trouble for computer-vision newbies who might not be familiar with dlib. About the 4DFM model, never mind. I saw your "4dface" repo at a glance and thought the model was publicly available, but now I see they are completely different things. Anyway, I was finally able to run eos yesterday and was very impressed with the results, and now I'm studying the code more deeply. Congrats on the impressive work!
I see! I think that's indeed a very good suggestion, thank you very much. Something exactly like that would indeed be really good to add to the readme.
Ah, I see :-) Yes, I agree this can potentially be a bit confusing: the patrikhuber/4dface GitHub repo was previously a research demo combining eos with our regression-based landmark detection, and now 4dface is our company name! I added a note to the top of the patrikhuber/4dface repo's readme a couple of months ago:
Maybe we should make the note bigger... :-) I'm glad you got it to work! And thank you very much for the congratulations!
Hi. I asked a question a while ago about how to generate the landmarks file. Although it was properly answered, my ignorance of computer-vision specifics probably prevented me from making progress. This issue is just a suggestion: in the interest of making the system more accessible to the layman, the "Sample code" section could better describe what a landmark-to-vertex file is, how to generate one, and what format eos expects. This comes from the assumption that the user has just a single photo of a face they want to reconstruct. eos is somewhat singular compared to other morphable-model fitting algorithms, which usually do the reconstruction without needing facial landmarks (maybe they detect them automatically). I know how to do landmark detection with OpenCV (according to SOME standard), but I have really no clue how to translate the landmarks to vertices on a morphable model such as the 4DFM. If the subject is considered common knowledge, some references in the README could still be very clarifying for newbies.