Training models with different amount of scenes #1057
Hello :)
I want to train my models with different amounts of scenes (100, 300, 500, 700). Is deleting scenes from train_detect and train_track in splits.py all I need to do?
Thank you sincerely!! : )

Comments
@BoynextDoor114514 in addition to deleting the scenes you don't want from the splits, you might also need to remove the corresponding samples from your training data.
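For reference, here is a minimal sketch (my own, not an official recipe) of building a reduced training split with the devkit's create_splits_scenes(); keeping simply the first N scenes, the value of N_SCENES and the variable names are assumptions:

```python
# Minimal sketch (assumption: keeping the first N scenes is an acceptable way
# to subsample; any other selection of scene names works the same way).
from nuscenes.utils.splits import create_splits_scenes

N_SCENES = 100  # e.g. 100, 300, 500 or 700

splits = create_splits_scenes()            # dict: split name -> list of scene names
reduced_train = splits['train'][:N_SCENES]

print(f'Keeping {len(reduced_train)} of {len(splits["train"])} training scenes')
# Whatever builds your training samples (e.g. dataset info files) must then be
# regenerated so it only contains samples from `reduced_train`.
```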
Thank you so much, it works well for me! And I have another question: at the evaluation phase, after loading the NuScenes tables, memory usage becomes so large that the OOM killer kills the process :(
Perhaps you could first check whether any other (potentially heavy) processes are occupying your RAM. Otherwise, yes, you would probably have to use a machine with more RAM 😄
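If it helps, a quick way to see what is eating RAM could look like the snippet below (a sketch assuming the psutil package is installed; the top-10 cut-off is arbitrary):

```python
# Minimal sketch: overall RAM usage plus the processes with the largest
# resident memory. Assumes `psutil` is installed.
import psutil

print(f'RAM used: {psutil.virtual_memory().percent:.1f}%')

procs = []
for p in psutil.process_iter(['pid', 'name', 'memory_info']):
    mem = p.info['memory_info']
    if mem is None:
        continue  # process info not accessible
    procs.append((mem.rss, p.info['pid'], p.info['name']))

for rss, pid, name in sorted(procs, reverse=True)[:10]:
    print(f'{rss / 1e9:6.2f} GB  pid={pid}  {name}')
```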
Hi @BoynextDoor114514, if relevant for you: once my PR #1046 merges, you will be able to evaluate on your own custom splits.
Unfortunately, I may have failed. I use OpenPCDet to train models with different amounts of scenes: 5% (35 scenes) and 10% (70 scenes). But I've noticed something unexpected when comparing these two conditions. Sorry for disturbing you, and thank you sincerely!
@BoynextDoor114514 you could try isolating the problem by initializing the data-loader and checking its length.
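To make that concrete, a sketch of the check could look like this (assumptions: a v1.0-trainval installation at DATAROOT and the 70-scene case; the devkit calls themselves are standard):

```python
# Minimal sketch: count how many samples the reduced split actually contains;
# your training data-loader length should correspond to this number.
from nuscenes.nuscenes import NuScenes
from nuscenes.utils.splits import create_splits_scenes

DATAROOT = '/data/sets/nuscenes'                           # assumption: adjust to your setup
reduced_train = set(create_splits_scenes()['train'][:70])  # e.g. the 10% (70-scene) case

nusc = NuScenes(version='v1.0-trainval', dataroot=DATAROOT, verbose=False)
n_samples = sum(s['nbr_samples'] for s in nusc.scene if s['name'] in reduced_train)
print(f'{len(reduced_train)} scenes -> {n_samples} samples expected in the data-loader')
```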
Hello, I've completed the training process with 140 scenes, but at the evaluation phase an error occurred at "assert set(self.pred_boxes.sample_tokens) == set(self.gt_boxes.sample_tokens)": "AssertionError: Samples in split doesn't match samples in predictions." I tried to fix this problem, and as far as I can tell the gt_boxes come from here: https://github.com/nutonomy/nuscenes-devkit/blob/4df2701feb3436ae49edaf70128488865a3f6ff9/python-sdk/nuscenes/eval/detection/evaluate.py#L96C13-L96C93 I'm confused; could you please give me some idea how to fix it? Thank you sincerely!
@BoynextDoor114514 if you did not change the scenes to be used in the val split, the ground-truth boxes should cover exactly the samples of that split, so the mismatch most likely comes from your predictions. One way to check this could be to load up your JSON file containing all your predictions and verify that each sample token in it actually belongs to the split you are evaluating on.
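A sketch of that check (file paths are assumptions; the JSON is assumed to be in the standard nuScenes submission format with a top-level "results" dict keyed by sample token):

```python
# Minimal sketch: compare the sample tokens in a results JSON with the sample
# tokens that belong to the official "val" split.
import json

from nuscenes.nuscenes import NuScenes
from nuscenes.utils.splits import create_splits_scenes

DATAROOT = '/data/sets/nuscenes'      # assumption
RESULTS_JSON = 'results_nusc.json'    # assumption: your detector's output file

nusc = NuScenes(version='v1.0-trainval', dataroot=DATAROOT, verbose=False)
val_scenes = set(create_splits_scenes()['val'])
val_tokens = {s['token'] for s in nusc.sample
              if nusc.get('scene', s['scene_token'])['name'] in val_scenes}

with open(RESULTS_JSON) as f:
    pred_tokens = set(json.load(f)['results'].keys())

print('predicted but not in val split:', len(pred_tokens - val_tokens))
print('in val split but not predicted:', len(val_tokens - pred_tokens))
```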
I've checked my evaluation process and noticed that there are far more predicted samples than samples in the val set.
@BoynextDoor114514 the samples which get inferred (a.k.a. predicted) on and written into a JSON are entirely determined by the user's code; the devkit's evaluation simply compares those sample tokens against the ones belonging to the split.
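One way to act on that (a sketch on the user's side, not part of the devkit; file names and the filtering approach are assumptions) is to drop prediction entries whose sample token is not part of the split before writing the submission JSON:

```python
# Minimal sketch: keep only the prediction entries whose sample token belongs
# to the "val" split, then write a filtered submission JSON.
import json

from nuscenes.nuscenes import NuScenes
from nuscenes.utils.splits import create_splits_scenes

DATAROOT = '/data/sets/nuscenes'      # assumption
nusc = NuScenes(version='v1.0-trainval', dataroot=DATAROOT, verbose=False)
val_scenes = set(create_splits_scenes()['val'])
val_tokens = {s['token'] for s in nusc.sample
              if nusc.get('scene', s['scene_token'])['name'] in val_scenes}

with open('results_nusc.json') as f:           # assumption: original submission
    submission = json.load(f)

submission['results'] = {tok: boxes for tok, boxes in submission['results'].items()
                         if tok in val_tokens}

with open('results_nusc_filtered.json', 'w') as f:
    json.dump(submission, f)
```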