Generally, the objective of datasheet review is to ensure that:
FAQs:
- Can the dataset be free-upon-request?
- Yes. For example, we can approve datasets that are hosted on hubs such as HuggingFace but are gated behind a required acknowledgement of the terms and conditions. The dataset must indicate that it is free-upon-request.
FAQs:
- What if a contributor submitted a new datasheet for X, but a datasheet for X is already approved?
  - Is the new datasheet more complete and better than the existing datasheet?
    - Yes → Proceed with the normal review process and change the existing datasheet’s status to “Deprecated”.
    - No → Reject.
- What if more than one contributor submitted new datasheets of the same dataset, and all of them have not yet been approved? (After 23 Nov)
  - Pick the one that is relatively better than the others, fix the incorrect/inconsistent parts, then “Approve”.
- What if more than one contributor submitted new datasheets of the same dataset, and all of them have not yet been approved? (Before 23 Nov)
  - Pick the one that is more complete than the others, fix any incorrect/inconsistent parts, then “Approve”.
  - For the others, use the “Sharing Points” status.
  - Split the obtained points of the datasheet between the contributors. It doesn’t have to be an equal split; the contributor who gives more complete information can receive higher points.
    - For example, for a datasheet worth 6 points, the assignment could be: contributor 1 gets 3 points, contributor 2 gets 2 points, contributor 3 gets 1 point.
  - We may simplify this by accepting only a subset of identical submissions that have the most complete information and setting a ratio split of points (e.g., 7:3).
3. The information provided in the datasheet is correct, the information aligns with the dataset, and the dataset is relevant to SEA.
FAQs:
- What should I do if the datasheet has incorrect or missing information?
  - There are multiple ways to correct this:
    - Ask the contributor to fix it (with some guidance) using the edit link (column AU in the approval sheet).
    - The reviewer uses the edit link (column AU) to fix it themselves.
    - [NOT RECOMMENDED] The reviewer uses a hidden sheet (_raw) to edit the cells directly. Only resort to this when a large amount of data per subset of the total dataset must be edited.
- What should I check and how should I proceed?
  - See the checklist below.
Check the following before approving:
- Data availability (is it free and open-source or is it private?)
- Dataset splits (whether train, validation, or test splits are available)
- Dataset size (in lines, disk size, or any provided metric)
- Dataset license
- Task type (whether the data can be represented as the mentioned task)
- Paper (whether it points to the correct publication; the archival version has higher priority)
- Languages (list of all languages it supports)
- Change the status to Rejected, Approved, or Sharing Points.
- Add notes and obtained points (in column BB in the approval sheet)
- Check the scoring guide and see which languages get additional points (if any).
- Add the dataloader name (use Python snake_case; see the naming sketch after this checklist)
- Wait for a GitHub issue to be generated for the approved datasheet.
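For the dataloader name, below is a minimal sketch of deriving a snake_case name from a dataset title. The helper and the example title are hypothetical and only illustrate the convention; the reviewer still decides the final name.

```python
import re

def to_snake_case(title: str) -> str:
    """Illustrative only: 'IndoNLU SmSA (v1.0)' -> 'indonlu_smsa_v1_0'."""
    # Replace every run of non-alphanumeric characters with a single underscore,
    # trim leading/trailing underscores, then lowercase.
    return re.sub(r"[^0-9a-zA-Z]+", "_", title).strip("_").lower()

print(to_snake_case("IndoNLU SmSA (v1.0)"))  # -> indonlu_smsa_v1_0
```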
The objective of dataloader review is to ensure that all dataloaders in SEACrowd conform to the HF Dataloader Structure and the SEACrowd-defined schema and config, and follow a similar code format and/or style.
- Metadata correctness (ensure Tasks, Languages, HOME_URL, and DATA_URL are used). Make sure the dataloader also has an `__init__.py`.
- All subsets are implemented correctly according to the respective dataloader issue and the SEACrowd Schema definition (has both the `source` and `seacrowd` schemas -- if a given task has its SEACrowd Schema; otherwise, raise it to the reviewers/mods).
- Passes the test scripts defined in the `tests` folder.
- Passes the manual check (see the `load_dataset` sketch at the end of this section):
  a. Perform a sampling of configs based on Lang and/or Task combinations.
  b. Execute a `datasets.load_dataset` check based on the config list from (a).
  c. Check the dataset schema and the first few examples for plausibility.
- Follows some general rules/conventions:
  - `PascalCase` for the dataloader class name (and “Dataset” is contained in the suffix of the class name).
  - Lowercase word characters (regex identifier: `\w`) for schema column names, including the `source` schema if the original dataset doesn’t follow it.
- The code aligns with the `black` formatter: use `make check_file=seacrowd/sea_datasets/{dataloader}/{dataloader}.py`.
- Follows the Dataloader Config Rule (see the config-naming sketch at the end of this section). The dataset can be divided into different cases based on the compulsory Dataloader Configs:
  a. Single Language, Single Task (Type 1)
  b. Multiple Language, Single Task (Type 2)
  c. Single Language, Multiple Task (Type 3)
  d. Multiple Language, Multiple Task (Type 4)

  For a multilingual dataset of Lang Identification (or Linguistic Features/Unit Identification), it is not considered a Multilingual Dataset, since the Lang itself is used as the label or it doesn’t make sense to split the data based on the languages.

  Based on the aforementioned types, the checklist for Config Correctness is:
  - For Types 1 & 3, both the `f"{_DATASETNAME}_source"` and `f"{_DATASETNAME}_seacrowd_{TASK_TO_SCHEMA}"` configs must be implemented.
  - For Types 2 & 4, the dataloader configs in (1) shouldn't be implemented; consequently, the configs must cover all possible languages as a replacement. The formatting for Multilingual configs is `f"{_DATASETNAME}_{ISO_639_3_LANG}_source"` and `f"{_DATASETNAME}_{ISO_639_3_LANG}_seacrowd_{TASK_TO_SCHEMA}"`.
  - For point (2), since it won't pass the test cases using the default args, a custom arg must be provided by the Dataloader PR creator to ensure the reproducibility of testing among reviewers.
- Every dataloader requires 2 reviewers per issue (the assignee must not review their own dataloader).
- Once the second reviewer has approved, the PR should be merged into the `master` branch using the `squash and merge` strategy for a cleaner commit history.
- Reviewers will be assigned from the reviewer pool once or twice a week (by @holylovenia), or any reviewer can take any unassigned review as long as it can be done in a timely manner.
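A minimal sketch of the manual check (steps b and c above), assuming a hypothetical dataloader named `my_mt_corpus` with an `ind` source config; the script path and config name are placeholders to be replaced with the ones from the PR under review.

```python
from itertools import islice

from datasets import load_dataset

# Placeholder: pick a config from the sampled Lang/Task combinations (step a).
config_name = "my_mt_corpus_ind_source"

dset = load_dataset(
    "seacrowd/sea_datasets/my_mt_corpus/my_mt_corpus.py",  # local dataloader script (hypothetical path)
    name=config_name,
    # trust_remote_code=True,  # may be needed on newer `datasets` versions
)

# Check the dataset schema and the first few examples for plausibility.
for split_name, split in dset.items():
    print(split_name, split.features)
    for example in islice(split, 3):
        print(example)
```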
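Similarly, a minimal sketch of the config names a reviewer could expect under the Dataloader Config Rule; the dataset name, languages, and schema suffix are made-up values, and the helper itself is not part of the SEACrowd codebase.

```python
from typing import List, Optional

def expected_config_names(dataset_name: str, task_to_schema: str, languages: Optional[List[str]] = None) -> List[str]:
    """Enumerate the config names a dataloader should expose."""
    if not languages or len(languages) == 1:
        # Types 1 & 3: a single pair of source + seacrowd configs.
        return [
            f"{dataset_name}_source",
            f"{dataset_name}_seacrowd_{task_to_schema}",
        ]
    # Types 2 & 4: one source + seacrowd config per ISO 639-3 language code.
    configs: List[str] = []
    for lang in languages:
        configs.append(f"{dataset_name}_{lang}_source")
        configs.append(f"{dataset_name}_{lang}_seacrowd_{task_to_schema}")
    return configs

# Hypothetical multilingual text-to-text dataset covering Indonesian, Thai, and Vietnamese.
print(expected_config_names("my_mt_corpus", "t2t", ["ind", "tha", "vie"]))
# -> ['my_mt_corpus_ind_source', 'my_mt_corpus_ind_seacrowd_t2t', ..., 'my_mt_corpus_vie_seacrowd_t2t']
```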