
Patch eval metrics to markdown #1706

Merged: 4 commits into lightly-ai:master from the patch-eval_metrics_to_markdown branch on Oct 24, 2024

Conversation

@EricLiclair (Contributor) commented on Oct 22, 2024

  • Adds changes to return metrics from the evals
  • Prints the converted metrics as markdown

TODOs:

  • Collect metrics over the whole benchmark:
      • online,
      • knn,
      • linear, and
      • finetune eval
  • Print metrics at the end of the benchmark script (currently only the converted markdown is printed); see the collection sketch after this list
  • Optional: also print the metrics as a markdown table
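
A minimal sketch of that collection step, assuming each eval entry point is changed to return its validation metrics as a dict; the names and signatures below (run_benchmark, knn_eval, linear_eval, finetune_eval) are illustrative and not the actual code in benchmarks/imagenet/vitb16/main.py:

```python
# Hypothetical aggregation pattern (not the actual benchmark code):
# each eval returns {metric_name: value}, and the benchmark script collects
# everything under the eval's name so it can be printed once at the end.
from typing import Callable, Dict


def run_benchmark(
    evals: Dict[str, Callable[[], Dict[str, float]]],
) -> Dict[str, Dict[str, float]]:
    """Run every eval and collect its metrics under the eval name."""
    all_metrics: Dict[str, Dict[str, float]] = {}
    for eval_name, eval_fn in evals.items():
        all_metrics[eval_name] = eval_fn()
    return all_metrics


# At the end of the benchmark script the collected metrics could then be
# printed in one place, e.g.:
# metrics = run_benchmark(
#     {"knn": knn_eval, "linear": linear_eval, "finetune": finetune_eval}
# )
# print(metrics)
```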

The function prints results like the following (values were randomly generated and passed to the function):

| Eval Name | Metric Name | Value |
|:---------:|:-----------:|:-----:|
| knn       | val_top1    | 0.44  |
| knn       | val_top5    | 0.27  |
| linear    | val_top1    | 0.26  |
| linear    | val_top5    | 0.27  |
| finetune  | val_top1    | 0.68  |
| finetune  | val_top5    | 0.45  |

which renders as:

| Eval Name | Metric Name | Value |
|:---------:|:-----------:|:-----:|
| knn | val_top1 | 0.44 |
| knn | val_top5 | 0.27 |
| linear | val_top1 | 0.26 |
| linear | val_top5 | 0.27 |
| finetune | val_top1 | 0.68 |
| finetune | val_top5 | 0.45 |
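
A helper that produces a table like the one above could look roughly like the following sketch; it assumes the metrics arrive as a nested dict keyed by eval name and is not the exact implementation in benchmarks/metrics.py:

```python
from typing import Dict


def metrics_to_markdown(metrics: Dict[str, Dict[str, float]]) -> str:
    """Convert {eval_name: {metric_name: value}} into a markdown table string."""
    lines = [
        "| Eval Name | Metric Name | Value |",
        "|:---------:|:-----------:|:-----:|",
    ]
    for eval_name, eval_metrics in metrics.items():
        for metric_name, value in eval_metrics.items():
            lines.append(f"| {eval_name} | {metric_name} | {value:.2f} |")
    return "\n".join(lines)


# Example with the (randomly generated) values from the table above:
print(metrics_to_markdown({"knn": {"val_top1": 0.44, "val_top5": 0.27}}))
```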

Potentially fixes #1333.

@guarin (Contributor) left a comment

Thanks a lot for your PR! I left some comments on how I believe we could simplify it a bit, but the code overall looks very good.

Outdated review comments (resolved): benchmarks/imagenet/vitb16/main.py (2), benchmarks/metrics.py (2)
codecov bot commented on Oct 23, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 84.90%. Comparing base (9578268) to head (599942d).
Report is 1 commit behind head on master.

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1706   +/-   ##
=======================================
  Coverage   84.90%   84.90%           
=======================================
  Files         156      156           
  Lines        6526     6526           
=======================================
  Hits         5541     5541           
  Misses        985      985           


@EricLiclair force-pushed the patch-eval_metrics_to_markdown branch from e6b3b5a to 3a406ba on October 23, 2024 12:40
@EricLiclair requested a review from guarin on October 23, 2024 12:47
@EricLiclair marked this pull request as ready for review on October 23, 2024 12:48
@guarin (Contributor) left a comment

Thanks so much, this looks great!

@guarin enabled auto-merge (squash) on October 24, 2024 07:34
@guarin merged commit b6955fd into lightly-ai:master on Oct 24, 2024
14 checks passed
Successfully merging this pull request may close the following issue: Report all metrics at end of benchmark.
2 participants