```
No Transformation module specified
model input parameters 32 100 20 1 512 256 88 25 None VGG BiLSTM CTC
loading pretrained model from saved_models_results/None-VGG-BiLSTM-CTC-Seed1111/best_accuracy.pth
dataset_root: result/evaluation_lmdb/IIIT5k_3000 dataset: /
Traceback (most recent call last):
  File "test.py", line 282, in <module>
    test(opt)
  File "test.py", line 226, in test
    benchmark_all_eval(model, criterion, converter, opt)
  File "test.py", line 46, in benchmark_all_eval
    eval_data, eval_data_log = hierarchical_dataset(root=eval_data_path, opt=opt)
  File "/workspace/deep-text-recognition-benchmark/dataset.py", line 124, in hierarchical_dataset
    concatenated_dataset = ConcatDataset(dataset_list)
  File "/usr/local/lib/python3.8/dist-packages/torch/utils/data/dataset.py", line 225, in __init__
    assert len(self.datasets) > 0, 'datasets should not be an empty iterable'  # type: ignore[arg-type]
AssertionError: datasets should not be an empty iterable
```
I keep getting this error during evaluation. I hit the same error during training, but I fixed it once I got the data format correct, and I've checked that the evaluation dataset is in the same format. I even swapped the validation set for the evaluation set in train.py just to confirm the evaluation dataset loads, and it does. But I still get this error when running the evaluation dataset through test.py.
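The assertion fires because `hierarchical_dataset()` found no LMDB databases under the root it was given, so `dataset_list` is empty when it reaches `ConcatDataset`. A quick way to check this (independent of the repo's code) is to scan the evaluation root for `data.mdb` files, which is what an LMDB directory must contain. This is a diagnostic sketch, not code from the repository; the path is the `dataset_root` printed in the log above:

```python
import os

def find_lmdb_dirs(root):
    """Return every subdirectory under `root` (including `root` itself)
    that contains an LMDB database file (data.mdb).

    hierarchical_dataset() builds one dataset per LMDB directory; if no
    such directory is found, ConcatDataset raises the AssertionError above.
    """
    found = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if "data.mdb" in filenames:
            found.append(dirpath)
    return found

if __name__ == "__main__":
    # Path taken from the "dataset_root:" line in the log above.
    dirs = find_lmdb_dirs("result/evaluation_lmdb/IIIT5k_3000")
    print(dirs or "No LMDB directories found; this would trigger the AssertionError")
```

If this prints nothing found, the LMDB files are not where `--eval_data` points (e.g. `data.mdb` sits one level deeper or shallower than `test.py` expects, since `benchmark_all_eval` appends each benchmark's name to the root before calling `hierarchical_dataset`).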