---
dataset_info:
  features:
  - name: context
    dtype: string
  - name: question
    dtype: string
  - name: answer
    struct:
    - name: answer_end
      dtype: int64
    - name: answer_start
      dtype: int64
    - name: text
      dtype: string
  splits:
  - name: train
    num_bytes: 3270207
    num_examples: 4847
  - name: validation
    num_bytes: 379681
    num_examples: 565
  - name: test
    num_bytes: 575308
    num_examples: 855
  download_size: 2471461
  dataset_size: 4225196
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
---

I do not hold the copyright to this dataset; I merely restructured it to match the layout of the other datasets in our research, to simplify future coding and analysis. The raw dataset is available at this [link](https://huggingface.co/datasets/SEACrowd/tydiqa_id).
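The `answer` struct stores character offsets into `context` alongside the answer string, presumably following the SQuAD-style convention that `context[answer_start:answer_end]` equals the answer text. Assuming that convention holds (the record below is invented for illustration, not taken from the dataset), a quick consistency check can be sketched as:

```python
# Hypothetical record illustrating the schema: context, question, and an
# answer struct with SQuAD-style character offsets into the context.
record = {
    "context": "Jakarta adalah ibu kota Indonesia.",
    "question": "Apa ibu kota Indonesia?",
    "answer": {
        "answer_start": 0,
        "answer_end": 7,
        "text": "Jakarta",
    },
}

def span_is_consistent(rec):
    """Check that the character offsets actually point at the answer text."""
    ans = rec["answer"]
    return rec["context"][ans["answer_start"]:ans["answer_end"]] == ans["text"]

print(span_is_consistent(record))  # → True
```

Running such a check over every record of each split is a cheap way to confirm that the restructuring preserved the original offsets.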