Daniel Khashabi commited on
Commit
005e34f
1 Parent(s): c8f6744
Files changed (3)
  1. README.md +169 -0
  2. dataset_infos.json +1 -0
  3. parsinlu_entailment.py +137 -0
README.md ADDED
@@ -0,0 +1,169 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - fa
+ licenses:
+ - cc-by-nc-sa-4.0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - extended|translated|mnli
+ task_categories:
+ - textual-entailment
+ - natural-language-inference
+ task_ids:
+ - textual-entailment
+ - natural-language-inference
+ ---
+
+ # Dataset Card for PersiNLU (Textual Entailment)
+
+ ## Table of Contents
+ - [Dataset Card for PersiNLU (Textual Entailment)](#dataset-card-for-persinlu-textual-entailment)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/persiannlp/parsinlu/)
+ - **Repository:** [GitHub](https://github.com/persiannlp/parsinlu/)
+ - **Paper:** [arXiv](https://arxiv.org/abs/2012.06154)
+ - **Leaderboard:**
+ - **Point of Contact:** d.khashabi@gmail.com
+
+ ### Dataset Summary
+
+ A Persian textual entailment task: deciding whether `sent1` entails `sent2`.
+ The sentence pairs are partially translated from the MNLI dataset and partially written by expert annotators.
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ The dataset is in Persian (`fa`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Here is an example from the dataset:
+ ```json
+ {
+     "sent1": "سالها است که کنگره در تلاش است تا اثربخشی مدیریت اطلاعات و فناوری را در دولت فدرال افزایش دهد.",
+     "sent2": "کنگره بودجه ویژه ای برای مدیریت اطلاعات و فناوری در دولت فدرال دارد.",
+     "label": "n",
+     "category": "translation-train"
+ }
+ ```
+ (Roughly: `sent1` says that Congress has been trying for years to improve information and technology management in the federal government; `sent2` says that Congress has a dedicated budget for it. The label `n` marks the pair as neutral.)
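+
+ To load the data with the Hugging Face `datasets` library, you can pass this repository's loading script (or its hub ID) to `load_dataset`; a minimal sketch, assuming a hub ID of `persiannlp/parsinlu_entailment` (the `parsinlu-repo` config name comes from `parsinlu_entailment.py`):
+
+ ```python
+ from datasets import load_dataset
+
+ # The hub ID below is an assumption -- substitute the published dataset ID,
+ # or a local path to parsinlu_entailment.py.
+ ds = load_dataset("persiannlp/parsinlu_entailment", "parsinlu-repo")
+
+ print(ds)              # DatasetDict with train/validation/test splits
+ print(ds["train"][0])  # {'sent1': ..., 'sent2': ..., 'category': ..., 'label': ...}
+ ```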
+
+ ### Data Fields
+
+ - `sent1`: the first sentence (the premise).
+ - `sent2`: the second sentence (the hypothesis).
+ - `category`: whether the pair was translated from MNLI (values prefixed with `translation-`) or written by native speakers (values prefixed with `natural-`).
+ - `label`: `e` if `sent2` is entailed by `sent1`; `c` if `sent2` contradicts `sent1`; `n` if the two sentences are neutral. A mapping to conventional NLI label names is sketched below.
+
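+ A minimal sketch for mapping these single-letter labels to conventional NLI names (the helper name is illustrative):
+
+ ```python
+ # Map ParsiNLU's single-letter labels to conventional NLI label names.
+ LABEL_NAMES = {"e": "entailment", "c": "contradiction", "n": "neutral"}
+
+ def label_name(example):
+     # `label` is a plain string feature in this dataset: "e", "c", or "n".
+     return LABEL_NAMES.get(example["label"], "unknown")
+ ```
+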
+ ### Data Splits
+
+ The train/validation/test splits contain 755/270/1675 examples, respectively (counts as recorded in `dataset_infos.json`).
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ For details, see [the corresponding paper draft](https://arxiv.org/abs/2012.06154).
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/)
+
+ ### Citation Information
+ ```bibtex
+ @article{khashabi2020parsinlu,
+     title   = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
+     author  = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
+     year    = {2020},
+     journal = {arXiv e-prints},
+     eprint  = {2012.06154},
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@danyaljj](https://github.com/danyaljj) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"parsinlu-repo": {"description": "A Persian textual entailment task (deciding `sent1` entails `sent2`). \n", "citation": "@article{huggingface:dataset,\n title = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},\n authors = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},\n year={2020}\n journal = {arXiv e-prints},\n eprint = {2012.06154}, \n}\n", "homepage": "https://github.com/persiannlp/parsinlu/", "license": "CC BY-NC-SA 4.0", "features": {"sent1": {"dtype": "string", "id": null, "_type": "Value"}, "sent2": {"dtype": "string", "id": null, "_type": "Value"}, "category": {"dtype": "string", "id": null, "_type": "Value"}, "label": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "parsinlu_reading_comprehension", "config_name": "parsinlu-repo", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 259978, "num_examples": 755, "dataset_name": "parsinlu_reading_comprehension"}, "test": {"name": "test", "num_bytes": 589180, "num_examples": 1675, "dataset_name": "parsinlu_reading_comprehension"}, "validation": {"name": "validation", "num_bytes": 96731, "num_examples": 270, "dataset_name": "parsinlu_reading_comprehension"}}, "download_checksums": {"https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/entailment/train.csv": {"num_bytes": 254228, "checksum": "5e3847a4fc3011dbe52fdc6e2c8ff3d8c1e448ec236c477e860fe0266d3f6d79"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/entailment/dev.csv": {"num_bytes": 94708, "checksum": "c474270a24d63a1b3e055fdd3f8ecd46f58e76da224c6223add84b88b87d6dcf"}, "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/entailment/test.csv": {"num_bytes": 576505, "checksum": "cb25c16b51dd5a61ed832be9fee6a4d9eb6b645e5f2caa8ebb665ed190ffdebd"}}, "download_size": 925441, "post_processing_size": null, "dataset_size": 945889, "size_in_bytes": 1871330}}
parsinlu_entailment.py ADDED
@@ -0,0 +1,137 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ParsiNLU Persian textual entailment task."""
+
+ from __future__ import absolute_import, division, print_function
+
+ import csv
+
+ import datasets
+
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """\
+ @article{khashabi2020parsinlu,
+     title   = {ParsiNLU: A Suite of Language Understanding Challenges for Persian},
+     author  = {Khashabi, Daniel and Cohan, Arman and Shakeri, Siamak and Hosseini, Pedram and Pezeshkpour, Pouya and Alikhani, Malihe and Aminnaseri, Moin and Bitaab, Marzieh and Brahman, Faeze and Ghazarian, Sarik and others},
+     year    = {2020},
+     journal = {arXiv e-prints},
+     eprint  = {2012.06154},
+ }
+ """
+
+ _DESCRIPTION = """\
+ A Persian textual entailment task (deciding whether `sent1` entails `sent2`).
+ """
+
+ _HOMEPAGE = "https://github.com/persiannlp/parsinlu/"
+
+ _LICENSE = "CC BY-NC-SA 4.0"
+
+ _URL = "https://raw.githubusercontent.com/persiannlp/parsinlu/master/data/entailment/"
+ _URLs = {
+     "train": _URL + "train.csv",
+     "dev": _URL + "dev.csv",
+     "test": _URL + "test.csv",
+ }
+
+
+ # The builder name derived from this class ("parsinlu_reading_comprehension")
+ # must match the name recorded in dataset_infos.json.
+ class ParsinluReadingComprehension(datasets.GeneratorBasedBuilder):
+     """ParsiNLU Persian textual entailment task."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="parsinlu-repo", version=VERSION, description="ParsiNLU repository: textual-entailment"
+         ),
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "sent1": datasets.Value("string"),
+                 "sent2": datasets.Value("string"),
+                 "category": datasets.Value("string"),
+                 "label": datasets.Value("string"),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types.
+             features=features,
+             # There is no canonical (input, target) pair for this task, so
+             # supervised_keys is left unset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation.
+             homepage=_HOMEPAGE,
+             # License for the dataset.
+             license=_LICENSE,
+             # Citation for the dataset.
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         data_dir = dl_manager.download_and_extract(_URLs)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # These kwargs will be passed to _generate_examples.
+                 gen_kwargs={"filepath": data_dir["train"], "split": "train"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={"filepath": data_dir["test"], "split": "test"},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": data_dir["dev"], "split": "dev"},
+             ),
+         ]
+
+     def _generate_examples(self, filepath, split):
+         logger.info("generating examples from = %s", filepath)
+
+         with open(filepath, encoding="utf-8") as f:
+             reader = csv.reader(f)
+
+             for id_, row in enumerate(reader):
+                 # Skip the CSV header row.
+                 if id_ == 0:
+                     continue
+
+                 # row[0] is a row index in the source CSVs and is not used.
+                 sent1 = row[1].replace("\t", "").replace("\n", "")
+                 sent2 = row[2].replace("\t", "").replace("\n", "")
+                 label = row[3].replace("\t", "").replace("\n", "")
+                 cat = row[4].replace("\t", "").replace("\n", "")
+                 yield id_, {
+                     "sent1": sent1,
+                     "sent2": sent2,
+                     "label": label,
+                     "category": cat,
+                 }
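+
+
+ if __name__ == "__main__":
+     # Usage sketch (an illustrative addition, not part of the loader API):
+     # load the dataset through this local script and print the split sizes.
+     # Requires network access to fetch the CSVs listed in _URLs.
+     from datasets import load_dataset
+
+     ds = load_dataset(__file__, "parsinlu-repo")
+     print({split: ds[split].num_rows for split in ds})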