While reading /examples/NLU/examples/text-classification/run_glue.py, I noticed that for the GLUE tasks the script only reports results on the validation set and never measures accuracy on the test (evaluation) set. Would it be better to take the checkpoint that performs best on the validation set and then report its accuracy on the test set in run_glue.py?
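For context, the GLUE test splits ship with hidden labels (all set to -1), so accuracy on the test set cannot be computed locally; the usual workflow is to keep the checkpoint that scores best on the validation set and submit its test-set predictions to the GLUE leaderboard. Below is a minimal sketch of that workflow using the standard Hugging Face Trainer API. The task (SST-2), model name, and hyperparameters are illustrative assumptions, not values taken from run_glue.py.

```python
import numpy as np
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

# Assumption: SST-2 as an example GLUE task and roberta-base as the backbone.
raw = load_dataset("glue", "sst2")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

def preprocess(batch):
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

encoded = raw.map(preprocess, batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"accuracy": (preds == labels).mean()}

args = TrainingArguments(
    output_dir="out",
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,        # keep the checkpoint best on validation
    metric_for_best_model="accuracy",
)

model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,
    compute_metrics=compute_metrics,
)
trainer.train()

# GLUE test labels are hidden (-1), so we can only generate predictions
# for submission to the GLUE server, not compute test accuracy locally.
test_preds = trainer.predict(encoded["test"]).predictions.argmax(-1)
```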