Style Model Training Independent of Optical Flow? #13
What is the fastest way to train a new style? Is it necessary to train an optical flow model on the Hollywood dataset for every new style?

Comments
You just need to create the Hollywood dataset once, and you can reuse it for every new style you want to train. You can use a pretrained optical flow model, as described in the FlowNet2 repository. Since you are asking for the "fastest way": in theory, you could use only the COCO dataset with just "shift", "zoom_out" and "single_image" as the datasource parameter. Results may be inferior, since the model then only learns from camera motion, but that may be sufficient for your use case.
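Since data sources like "shift" and "zoom_out" essentially synthesize a second frame (plus exact ground-truth flow) from a single still image, no real video footage is needed for them. Below is a minimal NumPy sketch of the idea behind a "shift" source; the function name, offsets, and layout are illustrative assumptions, not the repository's actual implementation:

```python
# Illustrative sketch: synthesize a training pair from one still image
# (e.g. from COCO) by shifting it, which yields a second frame together
# with exact ground-truth optical flow -- no real video required.
import numpy as np

def shift_pair(img, dx=4, dy=2):
    """Translate img by (dx, dy); return (frame2, flow).

    The ground-truth flow is constant: every pixel moves by (dx, dy).
    """
    h, w = img.shape[:2]
    frame2 = np.zeros_like(img)
    frame2[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)] = \
        img[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
    flow = np.zeros((h, w, 2), dtype=np.float32)
    flow[..., 0], flow[..., 1] = dx, dy
    return frame2, flow

img = np.random.rand(64, 64, 3).astype(np.float32)  # stand-in for a COCO image
frame2, gt_flow = shift_pair(img)
```

A "zoom_out" source would work analogously, with a scaling instead of a translation, so the ground-truth flow field points radially rather than being constant.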
By the way, how long did it take you to train the Hollywood dataset?
@jargonfilter I computed the optical flow for the full Hollywood dataset and it took me about 1.5-2 weeks. It produced really high-quality results.
I computed the optical flow on our university cluster with multiple jobs in parallel (there is no "training" involved, by the way), but I don't remember the exact number of GPU days. Thanks for the reference numbers, noufali. This is quite a long time. As pointed out in the description, the amount of data can be reduced to one fifth with a simple parameter switch. Concurrent work on video style transfer (Gupta et al., Chen et al.) uses smaller datasets, too, so I don't expect the quality to drop significantly.
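For anyone reproducing this, here is a minimal sketch of how the flow computation can be sharded across parallel cluster jobs; the directory layout, file pattern, and `compute_flow_for_clip` helper are hypothetical placeholders, not the repository's actual tooling:

```python
# Hypothetical sketch: shard the Hollywood clips across N independent
# jobs (e.g. a cluster array job) so the flow computation runs in parallel.
import subprocess
import sys
from pathlib import Path

def compute_flow_for_clip(clip: Path) -> None:
    # Placeholder: invoke whatever flow tool you use (DeepFlow, FlowNet2, ...)
    subprocess.run(["echo", "computing flow for", str(clip)], check=True)

def main(job_id: int, num_jobs: int) -> None:
    clips = sorted(Path("hollywood_clips").glob("*.avi"))
    # Job i handles every num_jobs-th clip, so jobs never overlap.
    for clip in clips[job_id::num_jobs]:
        compute_flow_for_clip(clip)

if __name__ == "__main__":
    # e.g. launched 50 times as: python shard_flow.py <job_id> 50
    main(int(sys.argv[1]), int(sys.argv[2]))
```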
@manuelruder do you mean the … parameter?
@bafonso yes, exactly. It can be as low as 1. The script first separates all the video clips by scene, then it ranks every possible tuple in each scene by the amount of motion. Then it will take the top … tuples from each scene.
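A rough sketch of that selection step, assuming flow fields for the candidate tuples are already available (`mean_flow_magnitude` and the data layout are assumptions, not the actual script):

```python
# Sketch of the selection described above: within each scene, rank
# candidate frame tuples by amount of motion and keep only the top n.
import numpy as np

def mean_flow_magnitude(flow):
    """Average per-pixel motion of a flow field of shape (H, W, 2)."""
    return float(np.linalg.norm(flow, axis=-1).mean())

def top_tuples_per_scene(scenes, n=1):
    """scenes: list of scenes; each scene is a list of (frames, flow) candidates.

    Returns the n highest-motion tuples from each scene (n can be as low as 1).
    """
    selected = []
    for candidates in scenes:
        ranked = sorted(candidates,
                        key=lambda c: mean_flow_magnitude(c[1]),
                        reverse=True)
        selected.extend(ranked[:n])
    return selected
```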
@bafonso Hey there! Yeah, I can share mine with you. It's definitely a tedious process. Shoot me your email?
I'm wondering if you could share that with me also, @noufali. My email is … That would be a lifesaver.
@noufali Excuse me! I've computed flow files using DeepFlow and reliability files using consistencyChecker on part of the Hollywood dataset, but I'm not sure the results are right. I compared the computed reliability results with the occlusion maps of the MPI-Sintel dataset (in my understanding, they should be the same), but the results are different. Could you show me some of your computed flow and reliability files?
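For reference, a forward-backward consistency check of the kind such a tool implements looks roughly like the sketch below (thresholds are illustrative assumptions, not the tool's actual constants). Note that a consistency mask flags occlusions *and* flow estimation errors and motion boundaries, so it is not expected to match MPI-Sintel's ground-truth occlusion maps exactly:

```python
# Rough sketch of a forward-backward consistency check: a pixel is
# unreliable if following the forward flow and then the backward flow
# does not return (approximately) to the starting point.
import numpy as np

def consistency_mask(fwd, bwd, tol=0.01, eps=0.5):
    """fwd, bwd: flow fields of shape (H, W, 2).

    Returns a bool mask, True where the flows agree (pixel is reliable).
    """
    h, w = fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Position each pixel lands at in frame 2 under the forward flow.
    x2 = np.clip(np.round(xs + fwd[..., 0]), 0, w - 1).astype(int)
    y2 = np.clip(np.round(ys + fwd[..., 1]), 0, h - 1).astype(int)
    # Backward flow sampled at that forward-warped position.
    bwd_at_fwd = bwd[y2, x2]
    diff = fwd + bwd_at_fwd  # ~0 where forward and backward flow agree
    sq_diff = (diff ** 2).sum(axis=-1)
    sq_mag = (fwd ** 2).sum(axis=-1) + (bwd_at_fwd ** 2).sum(axis=-1)
    return sq_diff <= tol * sq_mag + eps
```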
@noufali Sorry to bother you and revive this old thread. I just started working on and exploring video style transfer on consumer hardware. It would take ages for me to compute the optical flow for the Hollywood dataset. Would it be possible for you to share yours, in case you still have it?
@noufali Could you please share it with me too? I'm a newbie on this topic and don't have the proper hardware or skills for the computations, but I'm very curious about this. My email is [email protected] Thank you!
@noufali I would also like the optical flow data. Would you consider putting it in a repo, so that people don't have to ask you directly?
@noufali Hi, sorry to bother you but I'm currently working on a style transfer project and this step is taking ages to complete on my hardware. If you could share your optical flow results with me, I would deeply appreciate it. My email is [email protected] |