Rating: add rating to organisations and tools #1115
Comments
First phase:
I think the assumption there was that we'll only need scores in contests. Are you planning to change this?
We decided with @akuleshov7 to move it to Execution anyway.
Ok, to clarify the reasons: the score will be used not only for contests, even if its calculation may depend on the exact type of execution.
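A minimal sketch of what such a move could look like, assuming a JPA-style entity in Kotlin; the class shape, the `TestingType` enum, and the nullable `score` field are illustrative assumptions, not the project's actual schema:

```kotlin
import javax.persistence.Entity
import javax.persistence.EnumType
import javax.persistence.Enumerated
import javax.persistence.GeneratedValue
import javax.persistence.GenerationType
import javax.persistence.Id

// Hypothetical enum; the real project may define more testing types.
enum class TestingType { PRIVATE_TESTS, PUBLIC_TESTS, CONTEST_MODE }

// Heavily trimmed stand-in for the real Execution entity.
@Entity
class Execution(
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    var id: Long? = null,

    @Enumerated(EnumType.STRING)
    var testingType: TestingType = TestingType.PRIVATE_TESTS,

    // The score lives on the execution itself, so it can be used outside
    // contests; how it is computed may still depend on the testing type.
    var score: Double? = null,
)
```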
…oject` (#1142)
* Change `ExecutionType` to `TestingType`, pass `testingType` from the frontend
* Add columns for best execution and best score in `lnk_contest_project`
Part of #1115
Co-authored-by: Kirill Gevorkyan <[email protected]>
* Move `score` from `lnk_contest_execution` to `execution`
* Forbid rerun of `TestingType.CONTEST_MODE` executions on backend
* Get best_score directly from lnk_contest_project (this column is not being set yet, will be added in subsequent PRs)
Second part of #1115
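A hedged sketch of the rerun guard mentioned in this commit, reusing the trimmed `Execution`/`TestingType` stand-ins from the previous block; the function name and the exact HTTP status are assumptions:

```kotlin
import org.springframework.http.HttpStatus
import org.springframework.web.server.ResponseStatusException

// Contest executions are scored and ranked, so re-running them is rejected.
fun checkNotContestRerun(execution: Execution) {
    if (execution.testingType == TestingType.CONTEST_MODE) {
        throw ResponseStatusException(HttpStatus.CONFLICT, "Rerun of contest executions is forbidden")
    }
}
```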
Why do we need to sum all executions? For me this metric is not descriptive, because it mostly represents not the quality of the tool but rather how actively its authors use our platform. I think a historical diagram of the best score, or the variance of best scores over time, may be a better option.
No-no, it should be a sum of ALL best executions over all contests × projects, @petertrr. Imagine an organization that participated in 100000 contests and made 1000000000000000 executions. It will still have only 100000 best results (one per contest). That is much more valuable than an organization that participated in 1 contest and won. We will ban those guys who cheat by creating the same project again and again.
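A small sketch of this aggregation rule, assuming best results are already materialized one per contest × project pair; `BestResult` and `organizationRating` are hypothetical names:

```kotlin
// Hypothetical row type: one entry per contest x project pair with its best score.
data class BestResult(
    val contestName: String,
    val projectName: String,
    val bestScore: Double,
)

// Organization rating as the sum of best results over all contest x project
// pairs: each pair contributes once, no matter how many executions it ran.
fun organizationRating(bestResults: List<BestResult>): Double =
    bestResults.sumOf { it.bestScore }

fun main() {
    val rating = organizationRating(
        listOf(
            BestResult("contest-1", "tool-a", 87.5),
            BestResult("contest-2", "tool-a", 92.0),
        ),
    )
    println(rating) // 179.5
}
```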
The only thing that concerns me is that the score in different contests may be calculated differently. Now we support only F-measure (i.e. combined precision and recall for warn-plugin-based tests), but we may have different types in the future. Then we'll need to change the rules of aggregation and recalculate already stored values. Right now we need to calculate the score per execution and to calculate the organization rating; I think at least the latter can be calculated per request, because it doesn't involve a lot of data.
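For reference, the textbook F-measure over warn-plugin results could be computed as below; the weighting actually used by the platform may differ:

```kotlin
// Plain F1 measure (harmonic mean of precision and recall), guarded
// against division by zero when there are no positives at all.
fun fMeasure(truePositives: Int, falsePositives: Int, falseNegatives: Int): Double {
    val precision = if (truePositives + falsePositives == 0) 0.0
        else truePositives.toDouble() / (truePositives + falsePositives)
    val recall = if (truePositives + falseNegatives == 0) 0.0
        else truePositives.toDouble() / (truePositives + falseNegatives)
    return if (precision + recall == 0.0) 0.0
        else 2 * precision * recall / (precision + recall)
}
```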
…1170)
* Store aggregated metrics in Execution excluding `NOT_APPLICABLE` metrics from individual test executions (until we can calculate scores for other types of plugins too)
* Change logic of display on frontend: don't calculate metrics only if all test executions under an execution are `NOT_APPLICABLE`; otherwise use filtered data from Execution
Related to #1115
Co-authored-by: Nariman Abdullin <[email protected]>
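A minimal sketch of the display rule described in this commit; `TestResultStatus` and `shouldDisplayMetrics` are hypothetical names for illustration:

```kotlin
// Hypothetical status enum; the real one may have more states.
enum class TestResultStatus { PASSED, FAILED, NOT_APPLICABLE }

// Hide the metrics only when every test execution under this
// execution is NOT_APPLICABLE; otherwise show the filtered data.
fun shouldDisplayMetrics(statuses: List<TestResultStatus>): Boolean =
    statuses.any { it != TestResultStatus.NOT_APPLICABLE }
```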
For other plugins, calculation of the rating requires saveourtool/save-cli#449
* Change returned type in `OrganizationController` from `Organization` to `OrganizationDto`
* Add rating in `OrganizationDto`
* Display rating on frontend
Part of #1115
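A cut-down sketch of what exposing the rating through a DTO could look like; the real `OrganizationDto` carries more fields, and the `kotlinx.serialization` setup here is an assumption:

```kotlin
import kotlinx.serialization.Serializable

// Trimmed stand-in for the real entity.
class Organization(val name: String)

// Hypothetical cut-down DTO with the new rating field.
@Serializable
data class OrganizationDto(
    val name: String,
    val rating: Double = 0.0,
)

// Mapping the entity to the DTO, attaching the computed rating.
fun Organization.toDto(rating: Double): OrganizationDto = OrganizationDto(name, rating)
```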
If we have …
@petertrr what's left?
Only a follow-up from #1162 (comment); everything else is implemented
@nulls can you please finalise it? :) |
We need to have the following additions on the backend:
`LnkContestProject.kt`: `bestExecution: Execution` and `bestScore: Int`
`totalScore`
(or something like this; please check that these fields are not yet added). It should be taken as the sum of the list of BEST EXECUTIONS, as in the sketch below.
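A hedged sketch of the requested fields, with heavily trimmed stand-ins for the real entities; the column types and the per-request `totalScore` helper are assumptions to be checked against the actual schema:

```kotlin
import javax.persistence.Entity
import javax.persistence.GeneratedValue
import javax.persistence.GenerationType
import javax.persistence.Id
import javax.persistence.ManyToOne

// Trimmed stand-in, just enough to show the link target.
@Entity
class Execution(
    @Id @GeneratedValue(strategy = GenerationType.IDENTITY) var id: Long? = null,
    var score: Double? = null,
)

@Entity
class LnkContestProject(
    @Id @GeneratedValue(strategy = GenerationType.IDENTITY) var id: Long? = null,
    // The requested fields: the best execution of this project
    // in this contest, and its score.
    @ManyToOne var bestExecution: Execution? = null,
    var bestScore: Int? = null,
)

// totalScore computed per request as the sum over best executions,
// one per contest x project link, rather than stored as a column.
fun totalScore(links: List<LnkContestProject>): Int = links.sumOf { it.bestScore ?: 0 }
```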