Unify metric API #294
Conversation
Codecov Report

Attention: ❗ Your organization needs to install the Codecov GitHub app to enable full functionality.

    @@           Coverage Diff            @@
    ##             main      #294   +/-  ##
    ========================================
      Coverage        ?    91.03%
    ========================================
      Files           ?        62
      Lines           ?      3402
      Branches        ?         0
    ========================================
      Hits            ?      3097
      Misses          ?       305
      Partials        ?         0
    ========================================

☔ View full report in Codecov by Sentry.
Review part 1/N.
Great, this takes a good direction!
I'll try to block some time to review it in detail next week.
Really great work, Artem. I like the simple and straightforward solution of looping over evaluate_instance inside evaluate_batch; it solves the problem neatly. Do, however, open an issue for the vectorisation you've mentioned in several TODOs.
Almost ready for merge :muscle:!
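The pattern praised above, a base class whose evaluate_batch simply loops over an abstract evaluate_instance, can be sketched as follows. This is an illustrative reconstruction only: the class and method names mirror the discussion, not Quantus's actual signatures, and SumAttributions is a hypothetical toy metric.

```python
from abc import ABC, abstractmethod
from typing import List, Sequence


class Metric(ABC):
    """Sketch of a unified metric base class (illustrative, not Quantus's real API)."""

    @abstractmethod
    def evaluate_instance(self, x, a) -> float:
        """Score a single (input, attribution) pair."""

    def evaluate_batch(self, x_batch: Sequence, a_batch: Sequence) -> List[float]:
        # Default implementation: loop over instances one by one.
        # Metrics that can be vectorised would override this method.
        return [self.evaluate_instance(x, a) for x, a in zip(x_batch, a_batch)]


class SumAttributions(Metric):
    """Toy metric: an instance's score is the sum of its attribution values."""

    def evaluate_instance(self, x, a) -> float:
        return float(sum(a))


print(SumAttributions().evaluate_batch([[0], [0]], [[1, 2], [3, 4]]))  # [3.0, 7.0]
```

Subclasses only have to implement the per-instance logic, while batching, pre- and post-processing live in one place.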
# Conflicts:
#	tests/metrics/test_localisation_metrics.py
    ----------
    args:
        Unused.
    a_batch:
@aaarrti missing docs here; please double-check now that the docs are consistent everywhere.
    # Quantus project URL: <https://github.com/understandable-machine-intelligence-lab/Quantus>.

    if sys.version_info >= (3, 8):
What is this? Is this new?
@final shows users that this class must not be inherited from. E.g.,

    from typing import final

    @final
    class A:
        pass

    class B(A):
        pass

Running mypy example.py:

    mypy.ini: [mypy]: Unrecognized option: show_none_errors = False
    example.py:9: error: Cannot inherit from final class "A"  [misc]
    Found 1 error in 1 file (checked 1 source file)
The `if sys.version_info >= (3, 8)` guard can be removed after we drop Python 3.7 support (which I hope will be soon, because it has not been maintained for half a year: https://devguide.python.org/versions/).
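The guard discussed above can be sketched as a small compatibility shim. This is a hypothetical illustration of the pattern (typing.final was added in Python 3.8), not the exact code in the PR; the no-op fallback is one option, using typing_extensions.final on 3.7 would be another.

```python
import sys

# typing.final exists only on Python >= 3.8, so guard the import
# and fall back to a no-op decorator on older interpreters.
if sys.version_info >= (3, 8):
    from typing import final
else:
    def final(obj):
        # No-op stand-in: static checkers won't enforce finality here,
        # but the code still runs on Python 3.7.
        return obj


@final
class Metric:
    """Marked final: mypy flags any attempt to subclass this."""


# `final` returns its argument unchanged, so decorated classes work normally.
print(final(Metric) is Metric)  # True
```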
I just saw that there are mutable default parameters in a lot of metric constructors.
Using mutable defaults for parameters is dangerous and can lead to bugs that are very hard to find:
https://florimond.dev/en/posts/2018/08/python-mutable-defaults-are-the-source-of-all-evil/
@dkrako @annahedstroem Hi, I've realised every metric has had mutable defaults all this time, e.g., https://github.com/understandable-machine-intelligence-lab/Quantus/blob/main/quantus/metrics/axiomatic/completeness.py#L71. I will replace those with
done #299
Description
Merge BatchedMetric and Metric; this way we will have centralized, uniform pre- and post-processing as well as a clearly defined API. Some metrics can easily be batched, but this will be done in a separate PR.
Implemented changes
- Metric: remove the evaluate_instance abstract method.
- BatchedMetric is now just an alias to Metric.
Changelist
- base.py (mostly copied over from base_batched.py): run evaluate_instance in a for-loop.
Minimum acceptance criteria
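The merge described above can be sketched as follows. This is a hypothetical reconstruction under stated assumptions, not the contents of base.py: the unified Metric keeps the for-loop behaviour copied over from base_batched.py, while BatchedMetric survives only as a backwards-compatible subclass that warns on use.

```python
import warnings


class Metric:
    """Sketch of the unified base class (illustrative only)."""

    def evaluate_instance(self, x, a) -> float:
        raise NotImplementedError

    def evaluate_batch(self, x_batch, a_batch):
        # Behaviour copied over from the batched base: run
        # evaluate_instance in a for-loop over the batch.
        return [self.evaluate_instance(x, a) for x, a in zip(x_batch, a_batch)]


class BatchedMetric(Metric):
    """Kept only for backwards compatibility with the old API."""

    def __init__(self, *args, **kwargs):
        warnings.warn(
            "BatchedMetric is merged into Metric; use Metric instead.",
            DeprecationWarning,
        )
        super().__init__(*args, **kwargs)


print(issubclass(BatchedMetric, Metric))  # True
```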