---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
- found
languages:
- ja
licenses:
- cc-by-sa-3.0
multilinguality:
- monolingual
paperswithcode_id: null
pretty_name: "JaQuAD: Japanese Question Answering Dataset"
size_categories:
- 10K<n<100K
---

# Dataset Card for JaQuAD

## Dataset Structure

### Data Instances

An example from the dataset looks as follows.

```
{
  "id": "...",
  "title": "...",
  "context": "...魚の一種。\n別名はビワタナゴ(琵琶鱮、琵琶鰱)。",
  "question": "ビワタナゴの正式名称は何?",
  "question_type": "Multiple sentence reasoning",
  "answers": {
    "text": "イタセンパラ",
    "answer_start": 0,
    "answer_type": "Object"
  }
}
```

### Data Fields

- `id`: a `string` feature.
- `title`: a `string` feature.
- `context`: a `string` feature.
- `question`: a `string` feature.
- `question_type`: a `string` feature.
- `answers`: a dictionary feature containing:
  - `text`: a `string` feature.
  - `answer_start`: an `int32` feature.
  - `answer_type`: a `string` feature.

### Data Splitting

JaQuAD consists of three sets: `train`, `validation`, and `test`. They were created from disjoint sets of Wikipedia articles. The `test` set is not publicly released yet. The following table shows statistics for each set. A usage example showing how to load these splits and inspect the fields above is given under Additional Information below.

Set        | Number of Articles | Number of Contexts | Number of Questions
-----------|--------------------|--------------------|---------------------
Train      | 691                | 9713               | 31748
Validation | 101                | 1431               | 3939
Test       | 109                | 1479               | 4009

## Dataset Creation

### Curation Rationale

The JaQuAD dataset was created by [Skelter Labs](https://skelterlabs.com/) to provide a SQuAD-like QA dataset in Japanese. Questions are original and based on Japanese Wikipedia articles.

### Source Data

The articles used for the contexts are from [Japanese Wikipedia](https://ja.wikipedia.org/). 88.7% of the articles come from Wikipedia's curated lists of high-quality Japanese articles, e.g., [good articles](https://ja.wikipedia.org/wiki/Wikipedia:%E8%89%AF%E8%B3%AA%E3%81%AA%E8%A8%98%E4%BA%8B) and [featured articles](https://ja.wikipedia.org/wiki/Wikipedia:%E7%A7%80%E9%80%B8%E3%81%AA%E8%A8%98%E4%BA%8B).

### Annotations

Wikipedia articles were scraped and divided into one or more paragraphs, which serve as contexts. Annotations (questions and answer spans) were written by fluent Japanese speakers, both native and non-native. Annotators were given a context and asked to generate non-trivial questions about information in the context.

### Personal and Sensitive Information

No personal or sensitive information is included in this dataset. The dataset annotators verified this manually.

## Considerations for Using the Data

Users should note that the contexts are sampled from Wikipedia and are not representative of all Wikipedia articles.

### Social Impact of Dataset

The social impact of this dataset has not yet been investigated.

### Discussion of Biases

The social biases of this dataset have not yet been investigated. Articles and questions were selected for quality and diversity.

### Other Known Limitations

The JaQuAD dataset has the following limitations:

- Most answers are short spans.
- Every question is assumed to be answerable using its corresponding context.

This dataset is still incomplete. If you find any errors in JaQuAD, please contact us.

## Additional Information

### Dataset Curators

Skelter Labs: [https://skelterlabs.com/](https://skelterlabs.com/)

### Licensing Information

The JaQuAD dataset is licensed under the [CC BY-SA 3.0](https://creativecommons.org/licenses/by-sa/3.0/) license.
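### Usage Example

The splits and fields described above can be explored programmatically. The following is a minimal sketch using the Hugging Face `datasets` library; the Hub repository id `SkelterLabsInc/JaQuAD` is an assumption of this sketch, not something stated in this card, and may need to be replaced with the actual repository id or with local data files.

```python
# Minimal sketch: load JaQuAD and inspect its splits and fields.
from datasets import load_dataset

# NOTE: the repository id below is an assumption; adjust it if needed.
jaquad = load_dataset("SkelterLabsInc/JaQuAD")

# The `test` split is not publicly released, so only `train` and
# `validation` are expected to be present.
for split_name, split in jaquad.items():
    print(f"{split_name}: {len(split)} questions")

# Inspect one example and check that `answer_start` indexes the answer
# span inside `context`, as documented under "Data Fields".
example = jaquad["train"][0]
answers = example["answers"]

# Depending on how the dataset is packaged, `text`/`answer_start` may be
# scalars (as documented above) or lists (SQuAD-style); handle both.
text = answers["text"][0] if isinstance(answers["text"], list) else answers["text"]
start = answers["answer_start"][0] if isinstance(answers["answer_start"], list) else answers["answer_start"]

assert example["context"][start:start + len(text)] == text
print(example["question"], "->", text)
```

The final assertion mirrors the relationship documented under Data Fields: `answer_start` is the character offset of the answer `text` within `context`.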
### Citation Information

TBA

```bibtex
@article{SkelterLabsInc:2022JaQuAD,
  author        = {So, Byunghoon and Byun, Kyuhong and Kang, Kyungwon and Cho, Seongjin},
  title         = {{JaQuAD}: Japanese Question Answering Dataset for Machine Reading Comprehension},
  year          = 2022,
  eid           = {arXiv:###},
  pages         = {arXiv:###},
  archivePrefix = {arXiv},
  eprint        = {###},
}
```

### Acknowledgements

This work was supported by the TPU Research Cloud (TRC) program. For training models, we used Cloud TPUs provided by TRC. We also thank the annotators who generated and labeled JaQuAD.