Feature: Enable defining custom splits in a splits.json file #1046
Conversation
@michael-hoss can you include more details in your PR description? E.g.:
There are still some occurrences of `create_splits_scenes` left that don't support custom splits.
So far, only the GT was filtered by split, and the results file was expected to contain exactly the samples of one specific split.
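For context, the kind of filtering described above can be sketched as follows (the function name and data shape are illustrative assumptions, not the devkit's actual API):

```python
def filter_results_to_split(results: dict, split_sample_tokens: set) -> dict:
    """Keep only the predictions whose sample token belongs to the chosen split.

    `results` maps sample tokens to lists of predicted boxes; both the
    function name and this data shape are assumptions for illustration.
    """
    return {token: boxes for token, boxes in results.items()
            if token in split_sample_tokens}
```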
Sure @whyekit-motional!

Where will splits.json be located? Under e.g.

A code snippet to show how your feature will be used

In unit tests

I just added passing unit tests for tracking eval and detection eval for demonstration; please look at the files. They require

```json
{
    "mini_custom_train": ["scene-0061", "scene-0553"],
    "mini_custom_val": ["scene-0103", "scene-0916"]
}
```

In an actual evaluation

Also outside of unit tests, I can evaluate the tracking-megvii baseline submission on a custom split (this time locating the same

```shell
python nuscenes-devkit/python-sdk/nuscenes/eval/tracking/evaluate.py \
    /data/sets/tracking-megvii/results_val_megvii.json \
    --output_dir /data/sets/tracking-megvii/eval_outputs_on_custom_mini_split \
    --eval_set mini_custom_val \
    --dataroot /data/sets/nuscenes \
    --version v1.0-trainval \
    --verbose 1
```

ℹ️ Note:
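The command above relies on the custom eval set being resolvable by name from splits.json. A minimal sketch of such a lookup (the function name and the assumption that splits.json sits directly in the version directory are mine, not necessarily what the PR implements):

```python
import json
import os

def get_scenes_of_custom_split(eval_set: str, dataroot: str, version: str) -> list:
    """Return the scene names of a custom split defined in a splits.json file
    located in the nuscenes version directory (an assumed location)."""
    splits_path = os.path.join(dataroot, version, "splits.json")
    with open(splits_path) as f:
        custom_splits = json.load(f)
    if eval_set not in custom_splits:
        raise ValueError(f"Unknown eval set '{eval_set}' in {splits_path}")
    scenes = custom_splits[eval_set]
    if len(scenes) != len(set(scenes)):
        raise ValueError(f"Duplicate scene names in split '{eval_set}'")
    return scenes
```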
Will this only be used for tracking eval, or can it be used across all the tasks?

I am only really familiar with the tracking eval. I just did my best at enabling custom splits also in the other tasks (see commit 0106acf). Now, there are only these two occurrences of the
Thanks for the details @michael-hoss! I will begin reviewing your PR (but it may take a few days or so, as I'm swamped with other work-related matters at the moment 😅)
Please have another look @whyekit-motional. I did:
There was only one potentially unrelated issue: I could only run the tracking unit test in a Python 3.10 environment. In my Python 3.7 environment, I got a pandas error related to MultiIndex creation somewhere deeper down in the mot metrics evaluation. I guess this is related to some version mismatch, similar to the issue I just opened, #1055. Should we add some more documentation? What else would be needed for a merge?
@michael-hoss thanks for the changes 👍 I think we can move the discussion regarding the package version to #1055 (apologies I haven't gotten around to trying to reproduce your issue yet) Code-wise, I think it looks pretty good - I just left a couple of minor comments and then we should be good to merge!
Thanks for your comments @whyekit-motional! I just implemented them and once more ran a successful test case of my own.
Thanks for this nice feature @michael-hoss! 💯
Thank you for this feature @michael-hoss. I was about to implement something similar. <3
With the following code, I can specify `eval_set="custom-split-name"` in the tracking evaluation. If I have a `splits.json` file with the following contents in the nuscenes-version-dir, it will work.
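As an illustration of the setup described above, creating such a `splits.json` could look like the sketch below (the version-directory path is a stand-in here, and the split name and scene list are assumptions following the format shown earlier in the thread):

```python
import json
import os
import tempfile

# A temp dir stands in for the real nuscenes version directory,
# e.g. <dataroot>/v1.0-trainval; adapt the path to your setup.
version_dir = os.path.join(tempfile.mkdtemp(), "v1.0-trainval")
os.makedirs(version_dir)

# Map each custom split name to its list of scene names.
custom_splits = {"custom-split-name": ["scene-0061", "scene-0553"]}
with open(os.path.join(version_dir, "splits.json"), "w") as f:
    json.dump(custom_splits, f, indent=4)

# The evaluation would then be launched with eval_set="custom-split-name".
with open(os.path.join(version_dir, "splits.json")) as f:
    loaded = json.load(f)
```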