
On the issue of model selection #6

Open
PIPIKAI opened this issue Nov 6, 2022 · 5 comments
Comments

PIPIKAI commented Nov 6, 2022

CIRL/train.py

Line 192 in 1b50cd6

if self.results['test'][self.current_epoch] >= self.best_acc:

Your training code selects the best model based on test-set accuracy rather than the best accuracy on a validation set.
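A minimal sketch of how the selection criterion could be switched to a held-out validation split. The class and attribute names here (`SelectorByValidation`, `best_acc`) are illustrative assumptions mirroring the snippet above, not the repository's actual API:

```python
# Hypothetical model-selection hook, mirroring the pattern in CIRL/train.py
# but keyed on validation accuracy instead of test accuracy.
class SelectorByValidation:
    def __init__(self):
        self.best_acc = 0.0
        self.best_epoch = -1

    def update(self, epoch, val_acc):
        # Select on validation accuracy; the test set is never consulted.
        if val_acc >= self.best_acc:
            self.best_acc = val_acc
            self.best_epoch = epoch
            return True  # caller should checkpoint the model here
        return False
```

In the training loop, a checkpoint would then be written only when `update` returns `True`, and the test set would be evaluated once, on the selected checkpoint.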


SakurajimaMaiii commented Dec 1, 2022

I have the same question.
Previous work, such as FACT, uses training-domain validation.
It seems that this paper chooses the best model on the (out-of-distribution) test set.

@JeffLee1874

I have the same question.

@eadstry

eadstry commented Mar 22, 2023

I have the same question. In Search of Lost Domain Generalization points out that the model that is optimal on the test set should not be selected, since this leads to overfitting the test domain. The appropriate choice is to select the model from the last epoch.
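A sketch of the last-epoch convention suggested above. The function name and checkpoint format are illustrative assumptions, not taken from the CIRL codebase:

```python
def select_checkpoint(checkpoints):
    """Return the checkpoint saved at the final training epoch.

    `checkpoints` is an assumed list of (epoch, state) pairs; selecting
    the last epoch avoids peeking at test-set accuracy entirely.
    """
    return max(checkpoints, key=lambda c: c[0])
```

This convention removes the selection step as a source of test-set leakage, at the cost of possibly reporting a slightly worse checkpoint than validation-based selection would.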

@Linzsd

Linzsd commented Apr 9, 2023

Can you reproduce the experiments?

@undercutspiky

Same question.
