
Improved QA4SM metrics calculators #219

Merged: 46 commits into TUW-GEO:master on Apr 23, 2021

Conversation

@s-scherrer (Collaborator) commented Mar 12, 2021:

  • a new matcher that ensures all dataframe combinations contain only their common timestamps
  • a new implementation of the QA4SM metrics calculators that takes only 2 or 3 dataframes
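The new matcher's contract, as described in the first bullet, can be sketched with plain pandas. This is a hypothetical helper, not pytesmo's actual API; the function name and signature are assumptions for illustration:

```python
# Sketch (not pytesmo's actual API): restrict every dataframe in a
# combination to the timestamps common to all of them, which is the
# behaviour the new matcher is described as guaranteeing.
from functools import reduce

import pandas as pd


def match_common_timestamps(*frames: pd.DataFrame) -> list[pd.DataFrame]:
    """Return the input frames restricted to their shared time index."""
    common = reduce(lambda a, b: a.intersection(b),
                    (f.index for f in frames))
    return [f.loc[common] for f in frames]


idx = pd.date_range("2021-03-08", periods=5, freq="D")
ref = pd.DataFrame({"sm": range(5)}, index=idx)
other = pd.DataFrame({"sm": range(3)}, index=idx[1:4])
matched = match_common_timestamps(ref, other)
# both matched frames now cover 2021-03-09 .. 2021-03-11
```

With only common timestamps retained, pairwise and triple-wise metrics are computed over identical samples, which is what the 2- and 3-dataframe calculators in this PR rely on.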

s-scherrer and others added 6 commits March 8, 2021 11:56
This adds a test to check whether the new matching in
BasicTemporalMatching works as the old one in case there are no
duplicates in the reference frame.

It replaces the matcher in the other tests with a dummy that still uses
the old matching implementation.
The following commit will remove the dummy and set the new expected
values.
@s-scherrer (Collaborator, Author) commented:
The previous commit compared the old and the new temporal matching implementations when dropping duplicates before matching (see also #218). The CI failed only due to coverage; the tests themselves pass. The next commit will remove the comparison test and update the target values in the tests affected by removing duplicates in the reference.
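The "dropping duplicates before matching" step discussed above can be illustrated with a small pandas sketch. This is an assumption-laden illustration, not the PR's code; in particular, keeping the first occurrence is an assumed policy that the discussion does not specify:

```python
# Sketch of dropping duplicated timestamps in the reference frame
# before temporal matching (cf. #218). Keeping the first occurrence
# is an assumption for illustration only.
import pandas as pd


def drop_duplicate_timestamps(ref: pd.DataFrame) -> pd.DataFrame:
    """Remove rows whose timestamp already appeared earlier in the index."""
    return ref[~ref.index.duplicated(keep="first")]


idx = pd.to_datetime(["2021-03-08", "2021-03-08", "2021-03-09"])
ref = pd.DataFrame({"sm": [0.1, 0.2, 0.3]}, index=idx)
deduped = drop_duplicate_timestamps(ref)
# deduped keeps one row per timestamp: 0.1 for 2021-03-08, 0.3 for 2021-03-09
```

With a duplicate-free reference index, the old and new matching implementations can be compared on equal footing, which is what the comparison test mentioned above did.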

@s-scherrer s-scherrer marked this pull request as ready for review April 7, 2021 12:45
@wpreimes (Member) left a comment:


@s-scherrer Looks good to me, is this done from your side? I'm still a bit worried about #220, but it seems that the current solution is OK for now.

@s-scherrer (Collaborator, Author) commented:
I changed the tests a bit to also include a case where the reference name and the other name have a different alphabetical order. The results don't change much; it's just a bit hacky to figure out which one was the reference.

For QA4SM this shouldn't be a problem, because the dataset names are numbered, e.g. 0-ERA5, 1-ISMN depending on the order they are given, so they are already alphabetically ordered.

In other cases it's only a usability issue - it's not as easy to tell which one was the reference, but the results are still okay. I would propose to solve #220 in a separate PR.
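The naming scheme described above can be shown in a one-liner: because QA4SM prefixes each dataset name with its position, lexicographic order always matches the order in which the datasets were given, so the reference (position 0) sorts first:

```python
# Illustration of the QA4SM naming scheme described above: position
# prefixes make lexicographic order agree with input order.
names = ["ERA5", "ISMN"]  # reference first, as passed to QA4SM
numbered = [f"{i}-{name}" for i, name in enumerate(names)]
assert numbered == sorted(numbered)  # "0-ERA5" < "1-ISMN"
```

This is why the alphabetical-ordering hack is harmless for QA4SM itself and only a usability concern elsewhere, as noted above.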

@wpreimes wpreimes merged commit 38ee2ef into TUW-GEO:master Apr 23, 2021