Worse accuracy while continuing training due to a possible mistake in initializing setup
#9
Comments
Yes @Kumoi0728, you are right. I think this is a bug in the code. I will look into it.
Recently, I have experimented with the code in this repo. I found that even when training from scratch, the results are unsatisfactory, as shown in the following figure. I have read the code and understood its working process. Do you have any insight or advice on the poor performance? Thanks.
I trained an MVTN model for 100 epochs with the following command, and stopped training after 57 epochs.
The output of the 57th epoch looks like this:
When I loaded the trained model to continue training, although it started from the 58th epoch correctly, the accuracies got lower.
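To make the setup concrete, here is a minimal sketch of what I mean by stopping and resuming (the model, checkpoint keys, and file name are hypothetical stand-ins, not the repo's actual training script):

```python
import torch
import torch.nn as nn

# Illustrative stand-ins only; the real script builds the full MVTN pipeline.
model = nn.Linear(8, 4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)

# Save a checkpoint after epoch 57 (what the stopped run leaves behind).
torch.save({"epoch": 57,
            "state_dict": model.state_dict(),
            "optimizer": optimizer.state_dict()},
           "checkpoint_epoch57.pth")

# Resume: restore the weights and optimizer state, then continue from epoch 58.
ckpt = torch.load("checkpoint_epoch57.pth", map_location="cpu")
model.load_state_dict(ckpt["state_dict"])
optimizer.load_state_dict(ckpt["optimizer"])
start_epoch = ckpt["epoch"] + 1
print("resuming at epoch", start_epoch)  # -> resuming at epoch 58
```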
I found that in `ops.py`, lines 260-264, the trained MVTN model is loaded only when `is_learning_views = True`, and `is_learning_views` in `setup` is initialized in lines 55-56. Should the `learned_offset` in line 55 be replaced by `learned_circular`? Because the choices of learned `views_config` must be `learned_circular`, `learned_spherical`, `learned_direct`, `learned_random`, or `learned_transfer` (see the sketch after this paragraph). I am sorry if the cause is not actually here. I would appreciate it if you could tell me the correct way. :) @ajhamdi