
Closes #425 | Add Dataloader MaXM #554

Merged
merged 1 commit into SEACrowd:master on May 14, 2024

Conversation

akhdanfadh (Collaborator)

Closes #425

There is no subset specified on the homepage, but there are two files per language: (1) regular QA and (2) yes-no QA. I assumed each should be a subset (open to discussion). Thus, the configs will look like this: maxm_regular_source, maxm_yesno_seacrowd_imqa, etc. When testing, pass maxm_<subset> to the --subset_id parameter.
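As a sketch of the naming scheme described above, the four config names can be derived by crossing the two subsets with the two schemas. Plain dicts stand in for the real SEACrowdConfig class here; the subset names and the "seacrowd_imqa" suffix come from this PR discussion, everything else is illustrative.

```python
# Hypothetical sketch of how the four config names could be generated.
# Field names are assumptions, not the actual SEACrowdConfig signature.
SUBSETS = ["regular", "yesno"]
SCHEMAS = ["source", "seacrowd_imqa"]

def build_config_names(dataset_name: str = "maxm") -> list:
    configs = []
    for subset in SUBSETS:
        for schema in SCHEMAS:
            configs.append({
                "name": f"{dataset_name}_{subset}_{schema}",
                "schema": schema,
                "subset_id": f"{dataset_name}_{subset}",  # value passed to --subset_id
            })
    return configs

names = [c["name"] for c in build_config_names()]
# -> ['maxm_regular_source', 'maxm_regular_seacrowd_imqa',
#     'maxm_yesno_source', 'maxm_yesno_seacrowd_imqa']
```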

Checklist

  • Confirm that this PR is linked to the dataset issue.
  • Create the dataloader script seacrowd/sea_datasets/{my_dataset}/{my_dataset}.py (please use only lowercase and underscore for dataset folder naming, as mentioned in dataset issue) and its __init__.py within {my_dataset} folder.
  • Provide values for the _CITATION, _DATASETNAME, _DESCRIPTION, _HOMEPAGE, _LICENSE, _LOCAL, _URLs, _SUPPORTED_TASKS, _SOURCE_VERSION, and _SEACROWD_VERSION variables.
  • Implement _info(), _split_generators() and _generate_examples() in dataloader script.
  • Make sure that the BUILDER_CONFIGS class attribute is a list with at least one SEACrowdConfig for the source schema and one for a seacrowd schema.
  • Confirm dataloader script works with datasets.load_dataset function.
  • Confirm that your dataloader script passes the test suite run with python -m tests.test_seacrowd seacrowd/sea_datasets/<my_dataset>/<my_dataset>.py or python -m tests.test_seacrowd seacrowd/sea_datasets/<my_dataset>/<my_dataset>.py --subset_id {subset_name_without_source_or_seacrowd_suffix}.
  • If my dataset is local, I have provided an output of the unit-tests in the PR (please copy paste). This is OPTIONAL for public datasets, as we can test these without access to the data files.

@faridlazuarda left a comment

I think there is a better way to indicate whether to use the "source" schema or the "imqa" schema than string concatenation. I'd advise using a simple conditional instead.
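A minimal, illustrative sketch of the suggested conditional (the function name and field mapping are assumptions, not the actual dataloader code; the field names mirror the schema samples shown later in this thread):

```python
# Branch on the config's schema instead of concatenating schema strings.
def map_example(row: dict, schema: str) -> dict:
    if schema == "source":
        # pass source fields through unchanged
        return {"question": row["question"], "answers": row["answers"]}
    elif schema == "seacrowd_imqa":
        # reshape into the seacrowd imqa layout (list-valued questions)
        return {"questions": [row["question"]], "answer": row["answers"]}
    raise ValueError(f"Unknown schema: {schema}")
```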

This works well when I run the regular schema:

INFO:__main__:args: Namespace(path='seacrowd/sea_datasets/maxm/maxm.py', schema=None, subset_id='maxm_regular', data_dir=None, use_auth_token=None)
INFO:__main__:self.PATH: seacrowd/sea_datasets/maxm/maxm.py
INFO:__main__:self.SUBSET_ID: maxm_regular
INFO:__main__:self.SCHEMA: None
INFO:__main__:self.DATA_DIR: None
INFO:__main__:Checking for _SUPPORTED_TASKS ...
module seacrowd.sea_datasets.maxm.maxm
INFO:__main__:Found _SUPPORTED_TASKS=[<Tasks.VISUAL_QUESTION_ANSWERING: 'VQA'>]
INFO:__main__:_SUPPORTED_TASKS implies _MAPPED_SCHEMAS={'IMQA'}
INFO:__main__:schemas_to_check: {'IMQA'}
INFO:__main__:Checking load_dataset with config name maxm_regular_source
/Users/faridadilazuarda/miniconda3/envs/env-seacrowd/lib/python3.10/site-packages/datasets/load.py:2516: FutureWarning: 'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.
You can remove this warning by passing 'token=<use_auth_token>' instead.
  warnings.warn(
/Users/faridadilazuarda/miniconda3/envs/env-seacrowd/lib/python3.10/site-packages/datasets/load.py:926: FutureWarning: The repository for maxm contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at seacrowd/sea_datasets/maxm/maxm.py
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
  warnings.warn(
INFO:__main__:Checking load_dataset with config name maxm_regular_seacrowd_imqa
/Users/faridadilazuarda/miniconda3/envs/env-seacrowd/lib/python3.10/site-packages/datasets/load.py:2516: FutureWarning: 'use_auth_token' was deprecated in favor of 'token' in version 2.14.0 and will be removed in 3.0.0.
You can remove this warning by passing 'token=<use_auth_token>' instead.
  warnings.warn(
/Users/faridadilazuarda/miniconda3/envs/env-seacrowd/lib/python3.10/site-packages/datasets/load.py:926: FutureWarning: The repository for maxm contains custom code which must be executed to correctly load the dataset. You can inspect the repository content at seacrowd/sea_datasets/maxm/maxm.py
You can avoid this message in future by passing the argument `trust_remote_code=True`.
Passing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.
  warnings.warn(
INFO:__main__:Dataset sample [source]
{'image_id': '4bf7b2e9cbdc9721', 'image_url': 'https://open-images-dataset.s3.amazonaws.com/crossmodal-3600/4bf7b2e9cbdc9721.jpg', 'question_id': 'question_th_4bf7b2e9cbdc9721_01', 'question': 'สาวชุดชมพูอยู่บนรางแบบไหน?', 'answers': ['รางรถไฟ', 'รถไฟ'], 'processed_answers': ['รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ'], 'is_collection': False, 'method': 'dt-vq2a'}
INFO:__main__:Dataset sample [seacrowd_imqa]
{'id': '0', 'question_id': 'question_th_4bf7b2e9cbdc9721_01', 'document_id': '4bf7b2e9cbdc9721', 'questions': ['สาวชุดชมพูอยู่บนรางแบบไหน?'], 'type': None, 'choices': None, 'context': None, 'answer': ['รางรถไฟ', 'รถไฟ'], 'image_paths': ['https://open-images-dataset.s3.amazonaws.com/crossmodal-3600/4bf7b2e9cbdc9721.jpg'], 'meta': {'processed_answers': ['รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ', 'ราง รถไฟ'], 'is_collection': False, 'method': 'dt-vq2a'}}
INFO:__main__:Checking global ID uniqueness
INFO:__main__:Found 268 unique IDs
INFO:__main__:Gathering schema statistics
INFO:__main__:Gathering schema statistics
test
==========
id: 268
question_id: 268
document_id: 268
questions: 268
answer: 1109
image_paths: 268
meta: 804

.
----------------------------------------------------------------------
Ran 1 test in 0.288s

OK

However, this happened when I ran the test for the imqa schema:

======================================================================
ERROR: runTest (__main__.TestDataLoader)
Run all tests that check:
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/faridadilazuarda/Documents/GitHub/seacrowd-datahub/tests/test_seacrowd.py", line 134, in setUp
    self.dataset_source = datasets.load_dataset(
  File "/Users/faridadilazuarda/miniconda3/envs/env-seacrowd/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
    builder_instance = load_dataset_builder(
  File "/Users/faridadilazuarda/miniconda3/envs/env-seacrowd/lib/python3.10/site-packages/datasets/load.py", line 2265, in load_dataset_builder
    builder_instance: DatasetBuilder = builder_cls(
  File "/Users/faridadilazuarda/miniconda3/envs/env-seacrowd/lib/python3.10/site-packages/datasets/builder.py", line 371, in __init__
    self.config, self.config_id = self._create_builder_config(
  File "/Users/faridadilazuarda/miniconda3/envs/env-seacrowd/lib/python3.10/site-packages/datasets/builder.py", line 592, in _create_builder_config
    raise ValueError(
ValueError: BuilderConfig 'maxm_regular_imqa_source' not found. Available: ['maxm_regular_source', 'maxm_regular_seacrowd_imqa', 'maxm_yesno_source', 'maxm_yesno_seacrowd_imqa']

Could you address this issue first? Or is there a specific script needed to run your code?

Thank you for contributing!

@akhdanfadh (Collaborator, Author) commented Apr 25, 2024

> I think there is a better way to indicate whether to use the "source" schema or the "imqa" schema instead of doing string concatenation. I advise to do a simple conditioning instead.

@faridlazuarda I don't understand the problem here. Are you sure you passed either maxm_regular or maxm_yesno to the --subset_id parameter for testing?

Edit:

> ValueError: BuilderConfig 'maxm_regular_imqa_source' not found.

The problem is here: you passed maxm_regular_imqa instead of what I mentioned above.
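To illustrate the mismatch (this assumes the test suite derives the config name by appending a schema suffix to the --subset_id value, which is consistent with the error message above but not verified against the test code):

```python
# Illustrative only: assumed config-name derivation in the test harness.
def derived_config(subset_id: str, suffix: str = "source") -> str:
    return f"{subset_id}_{suffix}"

derived_config("maxm_regular")       # -> 'maxm_regular_source' (registered)
derived_config("maxm_regular_imqa")  # -> 'maxm_regular_imqa_source' (not registered, hence the ValueError)
```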

@faridlazuarda left a comment

I see, it works now; the problem was that I shouldn't specify the schema during the test. Looks good to me! Waiting for @TysonYu to review.

@holylovenia holylovenia assigned luckysusanto and unassigned TysonYu May 7, 2024
@holylovenia holylovenia requested review from luckysusanto and patrickamadeus and removed request for TysonYu and luckysusanto May 7, 2024 08:07
@holylovenia holylovenia requested review from muhammadravi251001 and removed request for patrickamadeus May 13, 2024 07:32
@muhammadravi251001 left a comment

LGTM. All the tests passed successfully. Thanks for the implementation, Sir @akhdanfadh. Merging it in a second.

@muhammadravi251001 muhammadravi251001 merged commit bb4579e into SEACrowd:master May 14, 2024
1 check passed
@akhdanfadh akhdanfadh deleted the maxm branch May 14, 2024 10:08
Closes issue: Create dataset loader for MaXM. 8 participants.