
questions about batch size in GHMC_loss #12

Open
longchuan1985 opened this issue Feb 28, 2019 · 1 comment

Comments

@longchuan1985
Hi, thanks for your nice work. In your paper, you mention that the best bin size is 30, a balanced value. What batch size did you use in your experiments with bin size 30?

@libuyu
Owner

libuyu commented Mar 3, 2019

@longchuan1985 We used a batch size of 16 (8 GPUs with two images per GPU), as shown in the example script. I also want to clarify that the relationship between bin size and batch size is not that strong: the effect of the bin size mainly depends on the distribution of the gradient norms of the examples (though I admit a larger batch size does make that distribution more stable).
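To illustrate why the bin size acts on the gradient-norm distribution rather than on the batch size directly, here is a minimal numpy sketch of GHM-C-style density reweighting. This is a hypothetical helper, not the repository's implementation; the function name `ghmc_weights` and the normalization are assumptions. For sigmoid cross-entropy, the gradient norm is g = |p - y|; the unit interval is split into `bins` equal-width bins, and each example is weighted inversely to the number of examples sharing its bin:

```python
import numpy as np

def ghmc_weights(pred_prob, target, bins=30):
    """Sketch of GHM-C example weighting (illustrative only).

    pred_prob: predicted probabilities in [0, 1]
    target:    binary labels (0 or 1)
    bins:      number of equal-width gradient-norm bins (M in the paper)
    """
    g = np.abs(pred_prob - target)          # gradient norm, lies in [0, 1]
    edges = np.linspace(0.0, 1.0, bins + 1)
    edges[-1] += 1e-6                       # make the last bin include g == 1
    n = len(g)
    weights = np.zeros(n)
    valid_bins = 0
    for i in range(bins):
        in_bin = (g >= edges[i]) & (g < edges[i + 1])
        count = in_bin.sum()
        if count > 0:
            # Examples in densely populated bins get small weights,
            # rare (hard or very easy) examples get large weights.
            weights[in_bin] = n / count
            valid_bins += 1
    if valid_bins > 0:
        weights /= valid_bins               # keep the total weight equal to n
    return weights

# Example: two easy negatives and two easy positives
p = np.array([0.1, 0.2, 0.9, 0.95])
t = np.array([0.0, 0.0, 1.0, 1.0])
w = ghmc_weights(p, t, bins=10)
```

With a small batch, many bins stay empty and the per-bin counts are noisy, which is exactly the "less steady distribution" effect mentioned above; increasing the batch size fills the histogram more reliably without changing the role of `bins` itself.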
