
Incorrect lr_mult value when using find_generator #18

Open

JonnoFTW opened this issue Jul 15, 2019 · 2 comments · May be fixed by #19

Comments

@JonnoFTW
Contributor

JonnoFTW commented Jul 15, 2019

When using find_generator:

self.lr_mult = (float(end_lr) / float(start_lr)) ** (float(1) / float(steps_per_epoch))

lr_mult causes the learning rate to converge to end_lr after only 1 epoch. If you train for 2 or more epochs, the learning rate exceeds end_lr and you end up training with a learning rate far higher than intended.
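To make the effect concrete, here is a quick sketch (plain Python, using the example values quoted in the fix below) of where the current multiplier lands:

# Sketch of the overshoot, using the example values from the fix below
start_lr, end_lr = 0.001, 1.0
steps_per_epoch, epochs = 1000, 4
lr_mult = (float(end_lr) / float(start_lr)) ** (float(1) / float(steps_per_epoch))
print(start_lr * lr_mult ** steps_per_epoch)             # 1.0 -> end_lr is already reached after 1 epoch
print(start_lr * lr_mult ** (steps_per_epoch * epochs))  # ~1e9 -> far above end_lr after 4 epochs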

The fix here is (when end_lr=1, start_lr=0.001, steps_per_epoch=1000, epochs=4):

lr_mult = ((float(end_lr) / float(start_lr)) ** (1. / float(steps_per_epoch*epochs)))
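As a quick check (same assumed example values), the corrected exponent only reaches end_lr at the very last batch of the run:

# Same example values; the corrected multiplier reaches end_lr only at the final step
start_lr, end_lr = 0.001, 1.0
steps_per_epoch, epochs = 1000, 4
lr_mult = (float(end_lr) / float(start_lr)) ** (1. / float(steps_per_epoch * epochs))
print(start_lr * lr_mult ** (steps_per_epoch * epochs))  # 1.0 -> end_lr reached after all 4 epochs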

Here's a sample script to show how the two formulas diverge while cur_lr <= end_lr:

[Figure_1: plot of the learning rate curves produced by the two formulas]
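A minimal sketch of such a comparison (assuming matplotlib is available, with the same example values) could look like this:

import matplotlib.pyplot as plt

start_lr, end_lr = 0.001, 1.0
steps_per_epoch, epochs = 1000, 4

# Current multiplier vs. the proposed fix
mult_current = (end_lr / start_lr) ** (1. / steps_per_epoch)
mult_fixed = (end_lr / start_lr) ** (1. / (steps_per_epoch * epochs))

for label, mult in [("current", mult_current), ("fixed", mult_fixed)]:
    lrs, cur_lr = [], start_lr
    while cur_lr <= end_lr:
        lrs.append(cur_lr)
        cur_lr *= mult
    plt.plot(lrs, label=label)

plt.yscale("log")
plt.xlabel("batch")
plt.ylabel("learning rate")
plt.legend()
plt.show()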

@surmenok
Owner

Nice catch, thank you! Would you mind creating a PR to fix this?

@JonnoFTW
Contributor Author

Will do. I'll update the documentation as well with some better examples.

JonnoFTW linked a pull request Jul 15, 2019 that will close this issue