Depth loss goes negative during long training run #13
Comments
Hello, I'm not the author of this paper, but may I ask whether performance improved when training for more epochs?
Performance kept improving, aside from the oddity that the depth loss went negative (which I still need to debug). For example, my result for car easy 3D @ 0.7 was 23.8, while the reported performance for the pretrained model is closer to 26. There is a general concern with this repo about being able to reproduce the pretrained model reliably. I'm not sure whether it simply takes many retries to reach the best performance.
Thank you for your reply.
Setting cfg.SEED = 1903919922 can reproduce the official result.
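For reference, a minimal sketch of pinning that seed, assuming the repo exposes a config object with a SEED field and uses PyTorch (the attribute path and the helper name `seed_everything` are assumptions for illustration, not the repo's actual API):

```python
import random

import numpy as np
import torch


def seed_everything(seed: int) -> None:
    """Seed every RNG that typically affects a PyTorch training run."""
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    # Deterministic cuDNN kernels trade speed for run-to-run reproducibility.
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False


# Hypothetical usage mirroring the comment above; cfg is whatever config
# object the training script builds.
# cfg.SEED = 1903919922
# seed_everything(cfg.SEED)
```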
To try to reproduce the paper results (like other bug reporters, I get worse results from a stock training run than the authors report), I left my training running far past the typical 200 epochs. At around ~325 epochs, the loss_depth value went negative. I'm assuming this is an error. Have you observed this in practice?
I will spend some time probing into why this is happening and update this ticket as needed.
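One possible (unconfirmed) explanation: many monocular 3D detection codebases compute the depth loss with a Laplacian aleatoric-uncertainty term, and that form is not bounded below by zero. If this repo does the same, which is an assumption about its implementation rather than something established in this thread, a negative loss_depth late in training may simply mean the uncertainty head has become confident, not that training is broken. A minimal sketch:

```python
# Sketch of a Laplacian aleatoric-uncertainty depth loss, a common choice in
# monocular 3D detection repos. Whether this repo uses exactly this form is an
# assumption; the point is only that such a loss has no lower bound of zero.
import math

import torch


def laplacian_depth_loss(depth_pred: torch.Tensor,
                         depth_gt: torch.Tensor,
                         log_sigma: torch.Tensor) -> torch.Tensor:
    """sqrt(2) * |d_pred - d_gt| / sigma + log(sigma), averaged over samples."""
    sigma = torch.exp(log_sigma)
    loss = math.sqrt(2.0) * torch.abs(depth_pred - depth_gt) / sigma + log_sigma
    return loss.mean()


# When predictions are accurate and the network is confident (sigma < 1, so
# log_sigma < 0), the log term dominates and the total goes negative:
pred = torch.tensor([20.05])
gt = torch.tensor([20.00])
log_sig = torch.tensor([-2.0])                   # sigma ~= 0.135
print(laplacian_depth_loss(pred, gt, log_sig))   # ~ -1.48
```

If the repo's depth loss turns out to be a plain L1/smooth-L1 term instead, this explanation would not apply and a negative value would indeed point to a bug.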