parquet-converter committed
Commit bb83f3c
Parent: a632c66

Update parquet files
.gitattributes DELETED
@@ -1,28 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- *.tsv.gz filter=lfs diff=lfs merge=lfs -text
README.md DELETED
@@ -1,156 +0,0 @@
- ---
- annotations_creators:
- - expert-generated
- language_creators:
- - expert-generated
- - machine-generated
- language:
- - en
- - nl
- license:
- - apache-2.0
- multilinguality:
- - translation
- size_categories:
- - unknown
- source_datasets:
- - extended|esnli
- task_categories:
- - text-classification
- task_ids:
- - natural-language-inference
- pretty_name: iknlp22-transqe
- tags:
- - quality-estimation
- ---
- # Dataset Card for IK-NLP-22 Project 3: Translation Quality-driven Data Selection for Natural Language Inference
-
- ## Table of Contents
- - [Dataset Card for IK-NLP-22 Project 3: Translation Quality-driven Data Selection for Natural Language Inference](#dataset-card-for-ik-nlp-22-project-3-translation-quality-driven-data-selection-for-natural-language-inference)
-   - [Table of Contents](#table-of-contents)
-   - [Dataset Description](#dataset-description)
-     - [Dataset Summary](#dataset-summary)
-     - [Languages](#languages)
-   - [Dataset Structure](#dataset-structure)
-     - [Data Instances](#data-instances)
-     - [Data Splits](#data-splits)
-     - [Data Example](#data-example)
-     - [Dataset Creation](#dataset-creation)
-   - [Additional Information](#additional-information)
-     - [Dataset Curators](#dataset-curators)
-     - [Licensing Information](#licensing-information)
-     - [Citation Information](#citation-information)
-
- ## Dataset Description
-
- - **Source:** [GitHub](https://github.com/OanaMariaCamburu/e-SNLI)
- - **Point of Contact:** [Gabriele Sarti](mailto:ik-nlp-course@rug.nl)
-
- ### Dataset Summary
-
- This dataset contains the full [e-SNLI](https://huggingface.co/datasets/esnli) dataset, automatically translated to Dutch using the [Helsinki-NLP/opus-mt-en-nl](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl) neural machine translation model. The translation of each field has been annotated with two quality estimation scores, produced by two referenceless versions of the [COMET](https://github.com/Unbabel/COMET/) metric by Unbabel.
-
- The intended usage of this corpus is restricted to the scope of the final project for the 2022 edition of the Natural Language Processing course of the Information Science Master's Degree (IK) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) and [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti), with the assistance of [Anjali Nair](https://nl.linkedin.com/in/anjalinair012).
-
- *The e-SNLI corpus was made freely available by its authors on GitHub. The present dataset was created for educational purposes and is based on the original e-SNLI dataset by Camburu et al. All rights to the present contents are attributed to the original authors.*
-
- ### Languages
-
- The language data of this corpus is in English (BCP-47 `en`) and Dutch (BCP-47 `nl`).
-
- ## Dataset Structure
-
- ### Data Instances
-
- The dataset contains a single default configuration, named `plain_text`, with the three original splits `train`, `validation`, and `test`. Every split contains the following fields:
-
- | **Field** | **Description** |
- |------------|-----------------------------|
- |`premise_en`| The original English premise.|
- |`premise_nl`| The premise automatically translated to Dutch.|
- |`hypothesis_en`| The original English hypothesis.|
- |`hypothesis_nl`| The hypothesis automatically translated to Dutch.|
- |`label`| The label of the data instance (0 for entailment, 1 for neutral, 2 for contradiction).|
- |`explanation_1_en`| The first explanation for the assigned label in English.|
- |`explanation_1_nl`| The first explanation automatically translated to Dutch.|
- |`explanation_2_en`| The second explanation for the assigned label in English.|
- |`explanation_2_nl`| The second explanation automatically translated to Dutch.|
- |`explanation_3_en`| The third explanation for the assigned label in English.|
- |`explanation_3_nl`| The third explanation automatically translated to Dutch.|
- |`da_premise`| The quality estimation produced by the `wmt20-comet-qe-da` model for the premise translation.|
- |`da_hypothesis`| The quality estimation produced by the `wmt20-comet-qe-da` model for the hypothesis translation.|
- |`da_explanation_1`| The quality estimation produced by the `wmt20-comet-qe-da` model for the first explanation translation.|
- |`da_explanation_2`| The quality estimation produced by the `wmt20-comet-qe-da` model for the second explanation translation.|
- |`da_explanation_3`| The quality estimation produced by the `wmt20-comet-qe-da` model for the third explanation translation.|
- |`mqm_premise`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the premise translation.|
- |`mqm_hypothesis`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the hypothesis translation.|
- |`mqm_explanation_1`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the first explanation translation.|
- |`mqm_explanation_2`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the second explanation translation.|
- |`mqm_explanation_3`| The quality estimation produced by the `wmt21-comet-qe-mqm` model for the third explanation translation.|
-
- Explanations 2 and 3 and their related quality estimation scores are only present in the `validation` and `test` splits.
-
- ### Data Splits
-
- | config | train | validation | test |
- |------------:|---------|------------|-------|
- |`plain_text` | 549,367 | 9,842 | 9,824 |
-
- For your analyses, use the amount of data that is most reasonable for your computational setup. The more, the better.
-
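- A minimal loading sketch with 🤗 `datasets` (the repository id `GroNLP/ik-nlp-22_transqe` comes from the loading script; the slice size is an arbitrary example):
-
- ```python
- from datasets import load_dataset
-
- # Load a manageable slice of the training split; adjust the slice to your compute budget.
- train_subset = load_dataset("GroNLP/ik-nlp-22_transqe", split="train[:10000]")
- validation = load_dataset("GroNLP/ik-nlp-22_transqe", split="validation")
- print(train_subset)
- ```
-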
- ### Data Example
-
- The following is an example of entry 2000 taken from the `test` split:
-
- ```json
- {
-     "premise_en": "A young woman wearing a yellow sweater and black pants is ice skating outdoors.",
-     "premise_nl": "Een jonge vrouw met een gele trui en zwarte broek schaatst buiten.",
-     "hypothesis_en": "a woman is practicing for the olympics",
-     "hypothesis_nl": "een vrouw oefent voor de Olympische Spelen",
-     "label": 1,
-     "explanation_1_en": "You can not infer it's for the Olympics.",
-     "explanation_1_nl": "Het is niet voor de Olympische Spelen.",
-     "explanation_2_en": "Just because a girl is skating outdoors does not mean she is practicing for the Olympics.",
-     "explanation_2_nl": "Alleen omdat een meisje buiten schaatst betekent niet dat ze oefent voor de Olympische Spelen.",
-     "explanation_3_en": "Ice skating doesn't imply practicing for the olympics.",
-     "explanation_3_nl": "Schaatsen betekent niet oefenen voor de Olympische Spelen.",
-     "da_premise": "0.6099",
-     "mqm_premise": "0.1298",
-     "da_hypothesis": "0.8504",
-     "mqm_hypothesis": "0.1521",
-     "da_explanation_1": "0.0001",
-     "mqm_explanation_1": "0.1237",
-     "da_explanation_2": "0.4017",
-     "mqm_explanation_2": "0.1467",
-     "da_explanation_3": "0.6069",
-     "mqm_explanation_3": "0.1389"
- }
- ```
-
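- Since the project is about quality-driven data selection, a natural first step is filtering on the QE scores. A minimal sketch (the QE fields are stored as strings, hence the `float(...)` casts; the `0.5` cutoff is an arbitrary illustration, not a recommended threshold):
-
- ```python
- from datasets import load_dataset
-
- dataset = load_dataset("GroNLP/ik-nlp-22_transqe", split="validation")
-
- # Keep examples whose premise and hypothesis translations both clear a DA cutoff.
- # Scores are stored as strings, so cast before comparing.
- high_quality = dataset.filter(
-     lambda ex: float(ex["da_premise"]) > 0.5 and float(ex["da_hypothesis"]) > 0.5
- )
- print(f"kept {len(high_quality)} of {len(dataset)} examples")
- ```
-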
- ### Dataset Creation
-
- The dataset was created through the following steps:
-
- - Translating every field of the original e-SNLI corpus to Dutch using the [Helsinki-NLP/opus-mt-en-nl](https://huggingface.co/Helsinki-NLP/opus-mt-en-nl) neural machine translation model.
- - Annotating every translation with quality estimation scores from two referenceless versions of the [COMET](https://github.com/Unbabel/COMET/) metric by Unbabel, as sketched below.
-
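- A rough outline of this referenceless scoring with the `unbabel-comet` package (model identifiers and the exact return type of `predict` vary across COMET releases, so treat this as a sketch rather than the exact pipeline used here):
-
- ```python
- from comet import download_model, load_from_checkpoint
-
- # Download and load a referenceless (QE) COMET model; recent COMET releases
- # may require an "Unbabel/" prefix in the identifier.
- model_path = download_model("wmt20-comet-qe-da")
- model = load_from_checkpoint(model_path)
-
- # Referenceless scoring needs only the source segment and its machine translation.
- data = [{
-     "src": "A young woman wearing a yellow sweater and black pants is ice skating outdoors.",
-     "mt": "Een jonge vrouw met een gele trui en zwarte broek schaatst buiten.",
- }]
- prediction = model.predict(data, batch_size=8, gpus=0)
- print(prediction)  # segment-level quality estimation score(s)
- ```
-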
- ## Additional Information
-
- ### Dataset Curators
-
- For problems with this 🤗 Datasets version, please contact us at [ik-nlp-course@rug.nl](mailto:ik-nlp-course@rug.nl).
-
- ### Licensing Information
-
- The dataset is licensed under the [Apache 2.0 License](https://www.apache.org/licenses/LICENSE-2.0.html).
-
- ### Citation Information
-
- Please cite the authors if you use this corpus in your work:
-
- ```bibtex
- @incollection{NIPS2018_8163,
-   title = {e-SNLI: Natural Language Inference with Natural Language Explanations},
-   author = {Camburu, Oana-Maria and Rockt\"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},
-   booktitle = {Advances in Neural Information Processing Systems 31},
-   editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
-   pages = {9539--9549},
-   year = {2018},
-   publisher = {Curran Associates, Inc.},
-   url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}
- }
- ```
data/.gitattributes DELETED
@@ -1 +0,0 @@
- *.tsv.gz filter=lfs diff=lfs merge=lfs -text
 
 
ik-nlp-22_transqe.py DELETED
@@ -1,137 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The TensorFlow Datasets Authors and the HuggingFace Datasets Authors.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
-
- # Lint as: python3
- """Dutch translation of the e-SNLI corpus with added quality estimation scores."""
-
- import csv
-
- import datasets
-
- csv.register_dialect("tsv", delimiter="\t")
-
-
- _CITATION = """
- @incollection{NIPS2018_8163,
- title = {e-SNLI: Natural Language Inference with Natural Language Explanations},
- author = {Camburu, Oana-Maria and Rockt\"{a}schel, Tim and Lukasiewicz, Thomas and Blunsom, Phil},
- booktitle = {Advances in Neural Information Processing Systems 31},
- editor = {S. Bengio and H. Wallach and H. Larochelle and K. Grauman and N. Cesa-Bianchi and R. Garnett},
- pages = {9539--9549},
- year = {2018},
- publisher = {Curran Associates, Inc.},
- url = {http://papers.nips.cc/paper/8163-e-snli-natural-language-inference-with-natural-language-explanations.pdf}
- }
- """
-
- _DESCRIPTION = """
- The e-SNLI dataset extends the Stanford Natural Language Inference Dataset to
- include human-annotated natural language explanations of the entailment
- relations. This version includes an automatic translation to Dutch and two
- quality estimation annotations for each translated field.
- """
-
- _HOMEPAGE = "https://www.rug.nl/masters/information-science/?lang=en"
-
- _URLS = {
-     "train": "https://huggingface.co/datasets/GroNLP/ik-nlp-22_transqe/resolve/main/data/train.tsv.gz",
-     "validation": "https://huggingface.co/datasets/GroNLP/ik-nlp-22_transqe/resolve/main/data/validation.tsv.gz",
-     "test": "https://huggingface.co/datasets/GroNLP/ik-nlp-22_transqe/resolve/main/data/test.tsv.gz",
- }
-
-
- class IkNlp22ExpNLIConfig(datasets.GeneratorBasedBuilder):
-     """e-SNLI corpus with added translation and quality estimation scores"""
-
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(
-             name="plain_text",
-             version=datasets.Version("0.0.2"),
-             description="Plain text import of e-SNLI",
-         )
-     ]
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "premise_en": datasets.Value("string"),
-                     "premise_nl": datasets.Value("string"),
-                     "hypothesis_en": datasets.Value("string"),
-                     "hypothesis_nl": datasets.Value("string"),
-                     "label": datasets.Value("int32"),
-                     "explanation_1_en": datasets.Value("string"),
-                     "explanation_1_nl": datasets.Value("string"),
-                     "explanation_2_en": datasets.Value("string"),
-                     "explanation_2_nl": datasets.Value("string"),
-                     "explanation_3_en": datasets.Value("string"),
-                     "explanation_3_nl": datasets.Value("string"),
-                     "da_premise": datasets.Value("string"),
-                     "mqm_premise": datasets.Value("string"),
-                     "da_hypothesis": datasets.Value("string"),
-                     "mqm_hypothesis": datasets.Value("string"),
-                     "da_explanation_1": datasets.Value("string"),
-                     "mqm_explanation_1": datasets.Value("string"),
-                     "da_explanation_2": datasets.Value("string"),
-                     "mqm_explanation_2": datasets.Value("string"),
-                     "da_explanation_3": datasets.Value("string"),
-                     "mqm_explanation_3": datasets.Value("string"),
-                 }
-             ),
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         files = dl_manager.download_and_extract(_URLS)
-         return [
-             datasets.SplitGenerator(
-                 name=name,
-                 gen_kwargs={"filepath": filepath},
-             )
-             for name, filepath in files.items()
-         ]
-
-     def _generate_examples(self, filepath):
-         """Yields examples."""
-         with open(filepath, encoding="utf-8") as f:
-             reader = csv.DictReader(f, dialect="tsv")
-             for i, row in enumerate(reader):
-                 yield i, {
-                     "premise_en": row["premise_en"],
-                     "premise_nl": row["premise_nl"],
-                     "hypothesis_en": row["hypothesis_en"],
-                     "hypothesis_nl": row["hypothesis_nl"],
-                     "label": row["label"],
-                     "explanation_1_en": row["explanation_1_en"],
-                     "explanation_1_nl": row["explanation_1_nl"],
-                     "explanation_2_en": row.get("explanation_2_en", ""),
-                     "explanation_2_nl": row.get("explanation_2_nl", ""),
-                     "explanation_3_en": row.get("explanation_3_en", ""),
-                     "explanation_3_nl": row.get("explanation_3_nl", ""),
-                     "da_premise": row["da_premise"],
-                     "mqm_premise": row["mqm_premise"],
-                     "da_hypothesis": row["da_hypothesis"],
-                     "mqm_hypothesis": row["mqm_hypothesis"],
-                     "da_explanation_1": row["da_explanation_1"],
-                     "mqm_explanation_1": row["mqm_explanation_1"],
-                     "da_explanation_2": row.get("da_explanation_2", ""),
-                     "mqm_explanation_2": row.get("mqm_explanation_2", ""),
-                     "da_explanation_3": row.get("da_explanation_3", ""),
-                     "mqm_explanation_3": row.get("mqm_explanation_3", ""),
-                 }
data/validation.tsv.gz → plain_text/ik-nlp-22_transqe-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1c83cd0c3cf247d13bd3e6c6de9776b4b33a2e811fa4d6c317a55e3ee1e52f3d
- size 1834801
+ oid sha256:2743d92e2b14b2c74027651a154eacc0b64c1b16a3fbb8158478cbbb4885151c
+ size 3671249
data/train.tsv.gz → plain_text/ik-nlp-22_transqe-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:07b39a7d1cff581ec3e532f34addd028529eb84fa2dcd04876d071f9158d5663
- size 49754294
+ oid sha256:029f41d44771e231457add27504c6334a017815371215b7f2cbfccc43e9e8037
+ size 93226250
data/test.tsv.gz → plain_text/ik-nlp-22_transqe-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:4e1baf1c45acfe6e62ad60da63180e6ab7fc88eb09d460850fb78aab81dafc98
- size 1822936
+ oid sha256:8ae49fcc4497ed15e2afcc11eb93d20566e3390d9b9e5e9ef18a0a160671624f
+ size 3704137
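
After this conversion, each split is a plain Parquet file under `plain_text/`, so it can be read without the (now deleted) loading script. A minimal sketch, assuming `pandas` with `huggingface_hub` installed for the `hf://` protocol (the path mirrors the renamed files above):

```python
import pandas as pd

# Read the converted validation split directly from the Hub's parquet files.
# Requires huggingface_hub for the hf:// filesystem protocol.
df = pd.read_parquet(
    "hf://datasets/GroNLP/ik-nlp-22_transqe/plain_text/ik-nlp-22_transqe-validation.parquet"
)
print(df.shape)
print(df.columns.tolist())
```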