---
dataset_info:
  features:
  - name: q_id
    dtype: int64
  - name: question
    dtype: string
  - name: choice0
    dtype: string
  - name: choice1
    dtype: string
  - name: choice2
    dtype: string
  - name: choice3
    dtype: string
  - name: choice4
    dtype: string
  - name: label
    dtype: int64
  splits:
  - name: train
    num_bytes: 1183800
    num_examples: 8939
  - name: validation
    num_bytes: 148287
    num_examples: 1119
  download_size: 2637820
  dataset_size: 1332087
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
license: cc-by-sa-4.0
task_categories:
- question-answering
language:
- ja
---
A clone of JCommonsenseQA published to ensure the reproducibility of evaluation scores and to distribute the SB Intuitions corrected version.

Source: [yahoojapan/JGLUE on GitHub](https://github.com/yahoojapan/JGLUE/tree/main)
- [datasets/jcommonsenseqa-v1.1](https://github.com/yahoojapan/JGLUE/tree/main/datasets/jcommonsenseqa-v1.1)
# JCommonsenseQA
> JCommonsenseQA is a Japanese version of CommonsenseQA (Talmor+, 2019), which is a multiple-choice question answering dataset that requires commonsense reasoning ability.
> It is built using crowdsourcing with seeds extracted from the knowledge base ConceptNet.
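
Each record pairs one question with five answer candidates (`choice0` through `choice4`) and an integer `label` giving the index of the correct candidate (see the field list under Subsets below). The sketch below renders one record as a 5-way multiple-choice prompt; the prompt wording itself is an illustrative assumption, not an official JGLUE template.

```python
def format_example(example: dict) -> str:
    """Render one record as a numbered 5-way multiple-choice prompt.

    Field names (question, choice0..choice4) follow the schema listed
    under Subsets; the surrounding prompt text ("質問:", "答え:") is an
    illustrative choice, not prescribed by JGLUE.
    """
    lines = [f"質問: {example['question']}"]
    lines += [f"{i}. {example[f'choice{i}']}" for i in range(5)]
    lines.append("答え:")
    return "\n".join(lines)
```

A model's prediction is the candidate index (0-4) it scores highest; comparing it with `label` gives the usual accuracy metric for this task.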
## Licensing Information
[Creative Commons Attribution Share Alike 4.0 International](https://github.com/yahoojapan/JGLUE/blob/main/LICENSE)
## Citation Information
```
@article{栗原健太郎2023,
  title={JGLUE: 日本語言語理解ベンチマーク},
  author={栗原 健太郎 and 河原 大輔 and 柴田 知秀},
  journal={自然言語処理},
  volume={30},
  number={1},
  pages={63-87},
  year={2023},
  url={https://www.jstage.jst.go.jp/article/jnlp/30/1/30_63/_article/-char/ja},
  doi={10.5715/jnlp.30.63}
}
@inproceedings{kurihara-etal-2022-jglue,
title = "{JGLUE}: {J}apanese General Language Understanding Evaluation",
author = "Kurihara, Kentaro and
Kawahara, Daisuke and
Shibata, Tomohide",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
month = jun,
year = "2022",
address = "Marseille, France",
publisher = "European Language Resources Association",
url = "https://aclanthology.org/2022.lrec-1.317",
pages = "2957--2966",
abstract = "To develop high-performance natural language understanding (NLU) models, it is necessary to have a benchmark to evaluate and analyze NLU ability from various perspectives. While the English NLU benchmark, GLUE, has been the forerunner, benchmarks are now being released for languages other than English, such as CLUE for Chinese and FLUE for French; but there is no such benchmark for Japanese. We build a Japanese NLU benchmark, JGLUE, from scratch without translation to measure the general NLU ability in Japanese. We hope that JGLUE will facilitate NLU research in Japanese.",
}
@InProceedings{Kurihara_nlp2022,
author = "栗原健太郎 and 河原大輔 and 柴田知秀",
title = "JGLUE: 日本語言語理解ベンチマーク",
booktitle = "言語処理学会第28回年次大会",
year = "2022",
url = "https://www.anlp.jp/proceedings/annual_meeting/2022/pdf_dir/E8-4.pdf"
note= "in Japanese"
}
```
# Subsets
## default
- `q_id` (`int`): ID that uniquely identifies the question
- `question` (`str`): Question text (not NFKC-normalized)
- `choice{0..4}` (`str`): The five answer choices, `choice0` through `choice4` (not NFKC-normalized)
- `label` (`int`): Index (0-4) of the correct answer among `choice{0..4}`
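
The splits and fields above can be loaded with the Hugging Face `datasets` library. A minimal sketch; the repository id passed to `load_dataset` is a placeholder, not this dataset's actual Hub id.

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual Hub id.
ds = load_dataset("<org>/JCommonsenseQA")

print(ds)  # DatasetDict with "train" (8,939 examples) and "validation" (1,119 examples)
example = ds["validation"][0]
print(example["q_id"])          # integer id of the question
print(example["question"])      # question text
print([example[f"choice{i}"] for i in range(5)])  # the five candidate answers
print(example["label"])         # index (0-4) of the correct choice
```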