parquet-converter committed on
Commit 8275216
Parent: 01fa168

Update parquet files

README.md DELETED
---
language:
- ja
language_creators:
- other
multilinguality:
- monolingual
pretty_name: JaNLI
task_categories:
- text-classification
task_ids:
- natural-language-inference
license: cc-by-sa-4.0
---

# Dataset Card for JaNLI

## Table of Contents
- [Dataset Card for JaNLI](#dataset-card-for-janli)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
      - [base](#base)
      - [original](#original)
    - [Data Fields](#data-fields)
      - [base](#base-1)
      - [original](#original-1)
    - [Data Splits](#data-splits)
    - [Annotations](#annotations)
  - [Additional Information](#additional-information)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://github.com/verypluming/JaNLI
- **Repository:** https://github.com/verypluming/JaNLI
- **Paper:** https://aclanthology.org/2021.blackboxnlp-1.26/

### Dataset Summary

The JaNLI (Japanese Adversarial NLI) dataset, inspired by the English HANS dataset, is designed to necessitate an understanding of Japanese linguistic phenomena and to illuminate the vulnerabilities of models.

### Languages

The language data in JaNLI is in Japanese (BCP-47 [ja-JP](https://www.rfc-editor.org/info/bcp47)).

## Dataset Structure

### Data Instances

To load a specific configuration, pass its name to `load_dataset` via the `name` argument:

```python
import datasets as ds

dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 13680
#     })
#     test: Dataset({
#         features: ['id', 'premise', 'hypothesis', 'label', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 720
#     })
# })

dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli", name="original")
print(dataset)
# DatasetDict({
#     train: Dataset({
#         features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 13680
#     })
#     test: Dataset({
#         features: ['id', 'sentence_A_Ja', 'sentence_B_Ja', 'entailment_label_Ja', 'heuristics', 'number_of_NPs', 'semtag'],
#         num_rows: 720
#     })
# })
```

#### base

An example from the `base` configuration looks as follows:

```json
{
  "id": 12,
  "premise": "若者がフットボール選手を見ている",
  "hypothesis": "フットボール選手を若者が見ている",
  "label": 0,
  "heuristics": "overlap-full",
  "number_of_NPs": 2,
  "semtag": "scrambling"
}
```

#### original

An example from the `original` configuration looks as follows:

```json
{
  "id": 12,
  "sentence_A_Ja": "若者がフットボール選手を見ている",
  "sentence_B_Ja": "フットボール選手を若者が見ている",
  "entailment_label_Ja": 0,
  "heuristics": "overlap-full",
  "number_of_NPs": 2,
  "semtag": "scrambling"
}
```

### Data Fields

#### base

A version that adopts the column names of a typical NLI dataset.

- `id`: The ID of the sentence pair.
- `premise`: The premise (`sentence_A_Ja`).
- `hypothesis`: The hypothesis (`sentence_B_Ja`).
- `label`: The correct label for this sentence pair, either `entailment` or `non-entailment` (`entailment_label_Ja`); in the setting described in the paper, non-entailment = neutral + contradiction. The label is stored as an integer; a decoding sketch follows this list.
- `heuristics`: The heuristic (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset.
- `number_of_NPs`: The number of noun phrases in a sentence.
- `semtag`: The linguistic phenomenon tag.
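
The `label` column is an integer-backed `ClassLabel`. A minimal sketch of decoding it back to its string name with the `datasets` library, using the `base` configuration shown above:

```python
import datasets as ds

# Load the default ("base") configuration.
dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli")

# "label" is a ClassLabel feature; int2str maps the stored integer
# back to its human-readable name.
label_feature = dataset["test"].features["label"]
print(label_feature.names)       # ['entailment', 'non-entailment']
print(label_feature.int2str(0))  # 'entailment'

# Decode the label of a single example.
example = dataset["test"][0]
print(example["hypothesis"], label_feature.int2str(example["label"]))
```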

#### original

The original version, which retains the unaltered column names.

- `id`: The ID of the sentence pair.
- `sentence_A_Ja`: The premise.
- `sentence_B_Ja`: The hypothesis.
- `entailment_label_Ja`: The correct label for this sentence pair, either `entailment` or `non-entailment`; in the setting described in the paper, non-entailment = neutral + contradiction.
- `heuristics`: The heuristic (structural pattern) tag. The tags are: subsequence, constituent, full-overlap, order-subset, and mixed-subset.
- `number_of_NPs`: The number of noun phrases in a sentence.
- `semtag`: The linguistic phenomenon tag.

### Data Splits

| name     |  train | validation | test |
| -------- | -----: | ---------: | ---: |
| base     | 13,680 |            |  720 |
| original | 13,680 |            |  720 |

Neither configuration ships a validation split; the sketch below shows one way to carve one out of the training set.
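
A minimal sketch of creating such a validation split with the `datasets` library; the 90/10 ratio and the seed are illustrative choices, not part of the dataset:

```python
import datasets as ds

dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli")

# Hold out 10% of train as validation (13,680 * 0.1 = 1,368 rows).
split = dataset["train"].train_test_split(test_size=0.1, seed=42)
dataset = ds.DatasetDict(
    {
        "train": split["train"],
        "validation": split["test"],
        "test": dataset["test"],
    }
)
print({name: d.num_rows for name, d in dataset.items()})
# {'train': 12312, 'validation': 1368, 'test': 720}
```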

### Annotations

The annotation process for this Japanese NLI dataset involves tagging each premise–hypothesis pair (P, H) with a label for its structural pattern and its linguistic phenomenon.
The structural relationship between premise and hypothesis sentences is classified into five patterns, each associated with a type of heuristic that can lead to incorrect predictions of the entailment relation.
Additionally, 11 categories of Japanese linguistic phenomena and constructions are used to generate the five patterns of adversarial inferences.

For each linguistic phenomenon, a template for the premise sentence P is fixed, and multiple templates for hypothesis sentences H are created.
In total, 144 templates for (P, H) pairs are produced.
Each pair of premise and hypothesis sentences is tagged with an entailment label (entailment or non-entailment), a structural pattern, and a linguistic phenomenon label.

The JaNLI dataset is generated by instantiating each template 100 times, resulting in a total of 14,400 examples.
The same number of entailment and non-entailment examples are generated for each phenomenon.
The structural patterns are annotated with the templates for each linguistic phenomenon, and the ratio of entailment to non-entailment examples is not necessarily 1:1 for each pattern.
The dataset uses a total of 158 words (nouns and verbs), each of which occurs more than 20 times in the JSICK and JSNLI datasets.
Because every example carries these tags, the dataset can be sliced per phenomenon or per structural pattern, as the sketch below shows.
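
A minimal sketch of per-tag analysis, assuming the `base` configuration shown above:

```python
from collections import Counter

import datasets as ds

dataset: ds.DatasetDict = ds.load_dataset("hpprc/janli")
test = dataset["test"]

# Restrict evaluation to a single phenomenon, e.g. the scrambling templates.
scrambling = test.filter(lambda ex: ex["semtag"] == "scrambling")
print(scrambling.num_rows)

# Tally the label balance per structural-pattern tag.
counts = Counter(zip(test["heuristics"], test["label"]))
print(counts)
```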

## Additional Information

- [verypluming/JaNLI](https://github.com/verypluming/JaNLI)
- [Hitomi Yanaka and Koji Mineshima. 2021. Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference. In Proceedings of the 2021 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP 2021).](https://aclanthology.org/2021.blackboxnlp-1.26/)

### Licensing Information

CC BY-SA 4.0

### Citation Information

```bibtex
@InProceedings{yanaka-EtAl:2021:blackbox,
  author    = {Yanaka, Hitomi and Mineshima, Koji},
  title     = {Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference},
  booktitle = {Proceedings of the 2021 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP2021)},
  url       = {https://aclanthology.org/2021.blackboxnlp-1.26/},
  year      = {2021},
}
```

### Contributions

Thanks to [Hitomi Yanaka](https://hitomiyanaka.mystrikingly.com/) and Koji Mineshima for creating this dataset.
base/janli-test.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:b89aea3cc598f35928d1264e86bd881f3506a56a356db945d6abcf93aa80179b
size 30852
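
These Git LFS pointers resolve to Parquet shards, so the converted data can also be read without the loading script. A minimal sketch with pandas, assuming the repository has been cloned with Git LFS so the in-repo paths resolve to real files:

```python
import pandas as pd

# Read one converted shard directly (requires pyarrow or fastparquet).
df = pd.read_parquet("base/janli-test.parquet")
print(df.shape)             # (720, 7)
print(df.columns.tolist())  # ['id', 'premise', 'hypothesis', 'label', ...]

# Note: "label" is stored here as the ClassLabel's integer code.
```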
base/janli-train.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c2e9941d78f11f725ced9f44be1392e2a804ae1f89f2a7be8d1ed5f464b272d9
size 474888
janli.py DELETED
import datasets as ds
import pandas as pd

_CITATION = """\
@InProceedings{yanaka-EtAl:2021:blackbox,
  author    = {Yanaka, Hitomi and Mineshima, Koji},
  title     = {Assessing the Generalization Capacity of Pre-trained Language Models through Japanese Adversarial Natural Language Inference},
  booktitle = {Proceedings of the 2021 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP (BlackboxNLP2021)},
  year      = {2021},
}
"""

_DESCRIPTION = "The JaNLI (Japanese Adversarial NLI) dataset, inspired by the English HANS dataset, is designed to necessitate an understanding of Japanese linguistic phenomena and to illuminate the vulnerabilities of models."

_HOMEPAGE = "https://github.com/verypluming/JaNLI"

_LICENSE = "CC BY-SA 4.0"

_DOWNLOAD_URL = "https://raw.githubusercontent.com/verypluming/JaNLI/main/janli.tsv"


class JaNLIDataset(ds.GeneratorBasedBuilder):
    VERSION = ds.Version("1.0.0")
    DEFAULT_CONFIG_NAME = "base"

    BUILDER_CONFIGS = [
        ds.BuilderConfig(
            name="base",
            version=VERSION,
            description="A version adopting the column names of a typical NLI dataset.",
        ),
        ds.BuilderConfig(
            name="original",
            version=VERSION,
            description="The original version retaining the unaltered column names.",
        ),
    ]

    def _info(self) -> ds.DatasetInfo:
        # Both configurations expose the same data; they differ only in
        # their column names.
        if self.config.name == "base":
            features = ds.Features(
                {
                    "id": ds.Value("int64"),
                    "premise": ds.Value("string"),
                    "hypothesis": ds.Value("string"),
                    "label": ds.ClassLabel(names=["entailment", "non-entailment"]),
                    "heuristics": ds.Value("string"),
                    "number_of_NPs": ds.Value("int32"),
                    "semtag": ds.Value("string"),
                }
            )
        elif self.config.name == "original":
            features = ds.Features(
                {
                    "id": ds.Value("int64"),
                    "sentence_A_Ja": ds.Value("string"),
                    "sentence_B_Ja": ds.Value("string"),
                    "entailment_label_Ja": ds.ClassLabel(names=["entailment", "non-entailment"]),
                    "heuristics": ds.Value("string"),
                    "number_of_NPs": ds.Value("int32"),
                    "semtag": ds.Value("string"),
                }
            )

        return ds.DatasetInfo(
            description=_DESCRIPTION,
            citation=_CITATION,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            features=features,
        )

    def _split_generators(self, dl_manager: ds.DownloadManager):
        # The upstream repository ships a single TSV whose "split" column
        # marks train/test membership.
        data_path = dl_manager.download_and_extract(_DOWNLOAD_URL)
        df: pd.DataFrame = pd.read_table(data_path, header=0, sep="\t", index_col=0)
        df["id"] = df.index

        if self.config.name == "base":
            df = df.rename(
                columns={
                    "sentence_A_Ja": "premise",
                    "sentence_B_Ja": "hypothesis",
                    "entailment_label_Ja": "label",
                }
            )

        return [
            ds.SplitGenerator(
                name=ds.Split.TRAIN,
                gen_kwargs={"df": df[df["split"] == "train"]},
            ),
            ds.SplitGenerator(
                name=ds.Split.TEST,
                gen_kwargs={"df": df[df["split"] == "test"]},
            ),
        ]

    def _generate_examples(self, df: pd.DataFrame):
        # Drop the bookkeeping column and yield one record per row; string
        # labels are encoded to ClassLabel integers automatically.
        df = df.drop("split", axis=1)
        for i, row in enumerate(df.to_dict("records")):
            yield i, row
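
For local debugging before the Parquet conversion, the builder can be exercised directly; a minimal sketch, assuming `janli.py` sits in the current directory and a `datasets` version that still supports loading scripts (e.g. the `^2.11.0` pinned in the pyproject below):

```python
import datasets as ds

# Point load_dataset at the local script instead of the Hub repository.
dataset = ds.load_dataset("./janli.py", name="original")
print(dataset["train"].num_rows, dataset["test"].num_rows)  # 13680 720
```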
original/janli-test.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:27791e69bf03e8b1de805ee46a09ed2ba4f6e71a98db266c009d0aa7caa98c65
size 31008
original/janli-train.parquet ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:c89aad4ee1eed7fca58fdbb23e0b30749617aea53b60a442e3dea6ec5ae59ed4
size 475642
poetry.lock DELETED
The diff for this file is too large to render.
 
pyproject.toml DELETED
[tool.poetry]
name = "datasets-janli"
version = "0.1.0"
description = ""
authors = ["hppRC <hpp.ricecake@gmail.com>"]
readme = "README.md"
packages = []

[tool.poetry.dependencies]
python = "^3.8.1"
datasets = "^2.11.0"

[tool.poetry.group.dev.dependencies]
black = "^22.12.0"
isort = "^5.11.4"
flake8 = "^6.0.0"
mypy = "^0.991"
pytest = "^7.2.0"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"