Why does multiscale generation output very different results for two almost identical images? #62

I'm trying to apply style transfer to a video frame by frame, but when I use multiscale generation the results vary heavily, even for images that are almost identical. I tried without multiscale generation and didn't have this issue, but the resulting image quality was worse. Is there a way to use multiscale generation and avoid this?

Comments
Are you running multiscale generation for each frame before adding them together, or adding the frames together for each step?
I'm running multiscale generation for every frame and then joining the resulting frames together, running the same command on each frame.
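For the joining step, here is a minimal sketch of how the styled frames could be reassembled into a video, assuming ffmpeg is on the PATH and the frames are numbered sequentially; the paths, frame pattern, and frame rate are placeholders, not anything from this repo:

```python
import subprocess

# Assumes sequentially numbered frames, e.g.
# styled/frame_0001.png, styled/frame_0002.png, ...
subprocess.run(
    [
        "ffmpeg",
        "-framerate", "24",             # match the source video's fps
        "-i", "styled/frame_%04d.png",
        "-c:v", "libx264",
        "-pix_fmt", "yuv420p",          # broad player compatibility
        "output.mp4",
    ],
    check=True,
)
```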
Even with the same seed, you will not have the same starting point at each execution. Even worse, the patching does not apply to the same level of detail, given the different sizes of the images. Combine both and you get your results. The differences are smaller if you keep the same size.
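To illustrate the first point: if the generation script is PyTorch-based and you can edit it, something like the sketch below would pin down the random state. This is an assumption about the script's internals, and `seed_everything` is a hypothetical helper, not part of this repo:

```python
import random

import numpy as np
import torch

def seed_everything(seed: int = 0) -> None:
    # Pin every RNG the generation script might draw from.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)  # also seeds all CUDA devices
    # Prefer deterministic kernels; ops without a deterministic
    # implementation will raise instead of silently diverging.
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.benchmark = False
```

Even with all of this, results only stay comparable if every frame is run at the same resolution, which is the second point above.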
@ouhenio Out of curiosity, have you been able to create a loop so that you can automate this process and don't need to run the shell for every frame? I'm trying to do something similar but not having much luck.
Hi @jamahun! It's easy to run the script for every image automatically; take a look at Python's subprocess module. I don't have the code with me anymore (this issue is 2 years old 😅), but as you can see in the docs of subprocess, calling a shell command from Python is straightforward.
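Along those lines, a minimal sketch of such a loop; `generate.py` and its flags are placeholders standing in for whatever per-frame command you were running by hand, and the directory names are assumptions:

```python
import subprocess
from pathlib import Path

frames_dir = Path("frames")   # extracted input frames
out_dir = Path("styled")      # where styled frames go
out_dir.mkdir(exist_ok=True)

for frame in sorted(frames_dir.glob("*.png")):
    # Replace "generate.py" and its flags with the actual
    # per-frame generation command.
    subprocess.run(
        [
            "python", "generate.py",
            "--input", str(frame),
            "--output", str(out_dir / frame.name),
        ],
        check=True,  # abort the loop if a frame fails
    )
```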