
Refactor evaluation flow #35

Open
gravesee opened this issue May 16, 2023 · 2 comments
@gravesee (Collaborator)

To support different evaluation strategies, could a trait be implemented that abstracts and captures the evaluation-flow logic, so other paradigms can be supported?

@jinlow (Owner) commented May 17, 2023

Yes, definitely!
Are the main ones we would want to support...

  • One metric, one or multiple datasets.
  • Multiple metrics, one or multiple datasets?

@gravesee (Collaborator, Author)

  • out-of-bag error estimate

The function signature could take the dev data, the tuple of sample indices, the evaluation data, and all metrics.
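One way the proposed trait could look, as a minimal sketch. All names here (`Metric`, `EvaluationFlow`, `OutOfBag`, `Rmse`) are hypothetical, not the project's actual API; the signature follows the comment above: dev data, a tuple of (in-bag, out-of-bag) sample indices, extra evaluation datasets, and all metrics.

```rust
// Hypothetical sketch; names and signatures are illustrative assumptions,
// not the crate's real API.

/// A metric maps predictions and targets to a single score.
pub trait Metric {
    fn name(&self) -> &str;
    fn evaluate(&self, y_pred: &[f64], y_true: &[f64]) -> f64;
}

/// Root-mean-squared error, as one concrete metric.
pub struct Rmse;

impl Metric for Rmse {
    fn name(&self) -> &str {
        "rmse"
    }
    fn evaluate(&self, y_pred: &[f64], y_true: &[f64]) -> f64 {
        let mse: f64 = y_pred
            .iter()
            .zip(y_true)
            .map(|(p, t)| (p - t).powi(2))
            .sum::<f64>()
            / y_pred.len() as f64;
        mse.sqrt()
    }
}

/// Abstracts the evaluation flow: given the dev data, the
/// (in-bag, out-of-bag) sample indices, any held-out evaluation sets,
/// and all metrics, produce a named score per (dataset, metric) pair.
pub trait EvaluationFlow {
    fn evaluate(
        &self,
        dev: (&[f64], &[f64]),                // (dev predictions, dev targets)
        sample_indices: (&[usize], &[usize]), // (in-bag, out-of-bag) indices
        eval_sets: &[(&[f64], &[f64])],       // extra evaluation datasets
        metrics: &[Box<dyn Metric>],
    ) -> Vec<(String, f64)>;
}

/// Out-of-bag strategy: score every metric on the out-of-bag slice of dev.
pub struct OutOfBag;

impl EvaluationFlow for OutOfBag {
    fn evaluate(
        &self,
        dev: (&[f64], &[f64]),
        sample_indices: (&[usize], &[usize]),
        _eval_sets: &[(&[f64], &[f64])],
        metrics: &[Box<dyn Metric>],
    ) -> Vec<(String, f64)> {
        let (preds, targets) = dev;
        let oob = sample_indices.1;
        // Gather the out-of-bag subset by index.
        let p: Vec<f64> = oob.iter().map(|&i| preds[i]).collect();
        let t: Vec<f64> = oob.iter().map(|&i| targets[i]).collect();
        metrics
            .iter()
            .map(|m| (format!("oob-{}", m.name()), m.evaluate(&p, &t)))
            .collect()
    }
}
```

A "one metric, multiple datasets" or "multiple metrics, multiple datasets" strategy would then be just another `EvaluationFlow` implementation iterating over `eval_sets`, without touching the training loop.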
