gsarti committed
Commit a013352
1 Parent(s): b6d769b

Initial commit

Files changed (2):
  1. README.md +194 -0
  2. ik_slp_22_htstyle.py +137 -0
README.md ADDED
---
annotations_creators:
- machine-generated
- expert-generated
language_creators:
- found
languages:
- en
- it
licenses:
- private
multilinguality:
- translation
pretty_name: htstyle-iknlp2022
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- translation
---

# Dataset Card for IK-NLP-22 Translator Stylometry

## Table of Contents

- [Dataset Card for IK-NLP-22 Translator Stylometry](#dataset-card-for-ik-nlp-22-translator-stylometry)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Projects](#projects)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Train Split](#train-split)
- [Test Split](#test-split)
- [Dataset Creation](#dataset-creation)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)

## Dataset Description

- **Source:** [FLORES-101](https://huggingface.co/datasets/gsarti/flores_101)
- **Point of Contact:** [Gabriele Sarti](mailto:ik-nlp-course@rug.nl)

### Dataset Summary

This dataset contains a sample of sentences taken from the [FLORES-101](https://huggingface.co/datasets/gsarti/flores_101) dataset that were either translated from scratch or post-edited from an existing automatic translation by three human translators. Translations were performed for the English-Italian language pair, and the translators' behavioral data (keystrokes, pauses, editing times) were collected using the [PET](https://github.com/wilkeraziz/PET) platform.

This dataset is made available for final projects of the 2022 edition of the Natural Language Processing course at the [Information Science Master's Degree](https://www.rug.nl/masters/information-science/?lang=en) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) with the assistance of [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti).

**Disclaimer**: *This repository is provided without direct data access due to currently unpublished results. **For this reason, it is for now strictly forbidden to share or publish the data associated with this repository.** Students will be provided with a compressed folder containing the data upon choosing a project based on this dataset. To load the dataset using 🤗 Datasets, download and unzip the provided folder and pass its path to the `load_dataset` method via the `data_dir` argument, as shown below.*
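
A minimal loading sketch (the `data_dir` value is a placeholder for the location of your unzipped folder):

```python
from datasets import load_dataset

# data_dir must point at the root of the unzipped folder provided for the course.
dataset = load_dataset("GroNLP/ik-nlp-22_htstyle", data_dir="path/to/unzipped/folder")

print(dataset)  # DatasetDict with 'train' and 'test' splits
```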

### Projects

To be provided.

### Languages

The language data is in English (BCP-47 `en`) and Italian (BCP-47 `it`).

## Dataset Structure

### Data Instances

The dataset contains a single configuration, `main`, with two data splits: `train` and `test`.

### Data Fields

The following fields are contained in the dataset:

- `item`: The sentence identifier. The first digits of the number represent the document containing the sentence, while the last digit represents the sentence position inside the document. Documents can contain from 3 to 5 semantically-related sentences each.

- `subject`: The identifier of the translator performing the translation from scratch or the post-editing task. Values: `t1`, `t2` or `t3`.

- `tasktype`: The setting of the translation task. Values: `ht` (translation from scratch), `pe1` (post-editing Google Translate), `pe2` (post-editing [mBART](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt)).

- `sl_text`: The original source text extracted from Wikinews, Wikibooks, or Wikivoyage.

- `mt_text`: Missing if `tasktype` is `ht`. Otherwise, contains the automatically-translated sentence before post-editing.

- `tl_text`: The final sentence produced by the translator (either by translating `sl_text` from scratch or by post-editing `mt_text`).

- `len_sl_chr`: Length of the original source text in characters.

- `len_tl_chr`: Length of the final translated text in characters.

- `len_sl_wrd`: Length of the original source text in words.

- `len_tl_wrd`: Length of the final translated text in words.

- `edit_time`: Total editing time for the translation, in seconds.

- `k_total`: Total number of keystrokes for the translation.

- `k_letter`: Total number of letter keystrokes for the translation.

- `k_digit`: Total number of digit keystrokes for the translation.

- `k_white`: Total number of whitespace keystrokes for the translation.

- `k_symbol`: Total number of symbol (punctuation, etc.) keystrokes for the translation.

- `k_nav`: Total number of navigation keystrokes (left-right arrows, mouse clicks) for the translation.

- `k_erase`: Total number of erase keystrokes (backspace, cancel) for the translation.

- `k_copy`: Total number of copy (Ctrl + C) actions during the translation.

- `k_cut`: Total number of cut (Ctrl + X) actions during the translation.

- `k_paste`: Total number of paste (Ctrl + V) actions during the translation.

- `np_300`: Number of pauses of 300ms or more during the translation.

- `lp_300`: Total duration of pauses of 300ms or more, in milliseconds.

- `np_1000`: Number of pauses of 1s or more during the translation.

- `lp_1000`: Total duration of pauses of 1000ms or more, in milliseconds.
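
Since most of these fields are raw counts and durations, derived ratios are often more informative for modeling. Below is a sketch of some illustrative derived features; the column names (`keys_per_word`, `pause_ratio`, `erase_share`) are our own, not part of the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("GroNLP/ik-nlp-22_htstyle", data_dir="path/to/unzipped/folder")
df = dataset["train"].to_pandas()  # pandas view for quick feature engineering

# Illustrative derived behavioral features:
df["keys_per_word"] = df["k_total"] / df["len_tl_wrd"]       # typing effort per target word
df["pause_ratio"] = df["lp_300"] / (df["edit_time"] * 1000)  # share of editing time spent pausing (ms / ms)
df["erase_share"] = df["k_erase"] / df["k_total"]            # proportion of erasing keystrokes

print(df.groupby(["subject", "tasktype"])["keys_per_word"].mean())
```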

### Data Splits

| config | train | test |
|-------:|------:|-----:|
| `main` |  1159 |  107 |

#### Train Split

The `train` split contains a total of 1159 triplets (or pairs, when translation from scratch is performed) annotated with the behavioral data produced during the translation. The following is an example of subject `t3` post-editing a machine translation produced by system 2 (tasktype `pe2`), taken from the `train` split:

```json
{
    "item": 1072,
    "subject": "t3",
    "tasktype": "pe2",
    "sl_text": "At the beginning dress was heavily influenced by the Byzantine culture in the east.",
    "mt_text": "All'inizio il vestito era fortemente influenzato dalla cultura bizantina dell'est.",
    "tl_text": "Inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale.",
    "len_sl_chr": 83,
    "len_tl_chr": 91,
    "len_sl_wrd": 14,
    "len_tl_wrd": 9,
    "edit_time": 45.687,
    "k_total": 51,
    "k_letter": 31,
    "k_digit": 0,
    "k_white": 2,
    "k_symbol": 3,
    "k_nav": 7,
    "k_erase": 3,
    "k_copy": 0,
    "k_cut": 0,
    "k_paste": 0,
    "np_300": 9,
    "lp_300": 40032,
    "np_1000": 5,
    "lp_1000": 38392
}
```

The text is provided as-is, without further preprocessing or tokenization.
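
The length fields in the instance above are consistent with whitespace tokenization (14 source and 9 target words) and raw character counts. A small sanity check under that assumption, reusing the `dataset` loaded earlier (the assumption may not hold for every entry):

```python
# Assumption: len_*_wrd counts whitespace-separated tokens, len_*_chr raw characters.
example = dataset["train"][0]
print(example["len_sl_wrd"], len(example["sl_text"].split()))  # expected to match
print(example["len_sl_chr"], len(example["sl_text"]))          # expected to match
```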

#### Test Split

The `test` split contains 107 entries following the same structure as `train`, with a few omissions:

- the `subject` field was omitted for the translator stylometry task (a baseline sketch for this task follows the list);

- the `tasktype` and `mt_text` fields were omitted for the translation setting prediction task;

- the `edit_time`, `lp_300` and `lp_1000` fields were omitted for the translation time prediction task.
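
As an illustration of the stylometry task, a minimal baseline could predict `subject` from behavioral counts present in both splits. This is only a sketch: scikit-learn is an assumed dependency, and the feature selection is our own:

```python
from datasets import load_dataset
from sklearn.ensemble import RandomForestClassifier

dataset = load_dataset("GroNLP/ik-nlp-22_htstyle", data_dir="path/to/unzipped/folder")
train_df = dataset["train"].to_pandas()
test_df = dataset["test"].to_pandas()

# Behavioral features available in both splits (test omits subject, tasktype,
# mt_text, edit_time, lp_300 and lp_1000).
feature_cols = ["k_total", "k_letter", "k_nav", "k_erase", "np_300", "np_1000", "len_tl_wrd"]

clf = RandomForestClassifier(random_state=0)
clf.fit(train_df[feature_cols], train_df["subject"])

# Test predictions cannot be scored locally, since the gold labels are withheld.
predictions = clf.predict(test_df[feature_cols])
```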

### Dataset Creation

The dataset was parsed from PET XML files into CSV format using the scripts by [Antonio Toral](https://research.rug.nl/en/persons/antonio-toral), available at [https://github.com/antot/postediting_novel_frontiers](https://github.com/antot/postediting_novel_frontiers).

## Additional Information

### Dataset Curators

For problems with this 🤗 Datasets version, please contact us at [ik-nlp-course@rug.nl](mailto:ik-nlp-course@rug.nl).

### Licensing Information

It is forbidden to share or publish the data associated with this 🤗 Dataset version.

### Citation Information

No citation information is provided for this dataset.

ik_slp_22_htstyle.py ADDED

import os

import datasets
import pandas as pd

_CITATION = """No citation information available."""

_DESCRIPTION = """\
This dataset contains a sample of sentences taken from the FLORES-101 dataset that were either translated
from scratch or post-edited from an existing automatic translation by three human translators.
Translations were performed for the English-Italian language pair, and the translators' behavioral data
(keystrokes, pauses, editing times) were collected using the PET platform.
"""

_HOMEPAGE = "https://www.rug.nl/masters/information-science/?lang=en"

_LICENSE = "Sharing and publishing of the data is not allowed at the moment."

# Relative paths of the two CSV splits inside the manually downloaded folder.
_SPLITS = {
    "train": os.path.join("IK_NLP_22_HTSTYLE", "train.csv"),
    "test": os.path.join("IK_NLP_22_HTSTYLE", "test.csv"),
}


class IkNlp22HtStyleConfig(datasets.BuilderConfig):
    """BuilderConfig for the IK NLP '22 HT-Style Dataset."""

    def __init__(
        self,
        features,
        **kwargs,
    ):
        """
        Args:
            features: `list[string]`, list of the features that will appear in the
                feature dict. Should not include "label".
            **kwargs: keyword arguments forwarded to super.
        """
        super().__init__(version=datasets.Version("1.0.0"), **kwargs)
        self.features = features


class IkNlp22HtStyle(datasets.GeneratorBasedBuilder):
    VERSION = datasets.Version("1.0.0")

    BUILDER_CONFIGS = [
        IkNlp22HtStyleConfig(
            name="main",
            features=[
                "item",
                "subject",
                "tasktype",
                "sl_text",
                "mt_text",
                "tl_text",
                "len_sl_chr",
                "len_tl_chr",
                "len_sl_wrd",
                "len_tl_wrd",
                "edit_time",
                "k_total",
                "k_letter",
                "k_digit",
                "k_white",
                "k_symbol",
                "k_nav",
                "k_erase",
                "k_copy",
                "k_cut",
                "k_paste",
                "np_300",
                "lp_300",
                "np_1000",
                "lp_1000",
            ],
        ),
    ]

    DEFAULT_CONFIG_NAME = "main"

    @property
    def manual_download_instructions(self):
        return (
            "The access to the data is restricted to students of the IK MSc NLP 2022 course working on a related project. "
            "To load the data using this dataset script, download and extract the IK_NLP_22_HTSTYLE folder you were provided upon selecting the final project. "
            "After extracting it, the folder (referred to as root) must contain an IK_NLP_22_HTSTYLE subfolder, containing the train.csv and test.csv files. "
            "Then, load the dataset with: `datasets.load_dataset('GroNLP/ik-nlp-22_htstyle', data_dir='path/to/root/folder')`"
        )

    def _info(self):
        # All features default to integer counts; the textual fields and the
        # editing time are overridden with the appropriate types below.
        features = {feature: datasets.Value("int32") for feature in self.config.features}
        features["subject"] = datasets.Value("string")
        features["tasktype"] = datasets.Value("string")
        features["sl_text"] = datasets.Value("string")
        features["mt_text"] = datasets.Value("string")
        features["tl_text"] = datasets.Value("string")
        features["edit_time"] = datasets.Value("float32")
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        """Returns SplitGenerators."""
        data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
        if not os.path.exists(data_dir):
            raise FileNotFoundError(
                "{} does not exist. Make sure you insert the unzipped IK_NLP_22_HTSTYLE dir via "
                "`datasets.load_dataset('GroNLP/ik-nlp-22_htstyle', data_dir=...)`. "
                "Manual download instructions: {}".format(
                    data_dir, self.manual_download_instructions
                )
            )
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": os.path.join(data_dir, _SPLITS["train"]),
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "filepath": os.path.join(data_dir, _SPLITS["test"]),
                },
            ),
        ]

    def _generate_examples(self, filepath: str):
        """Yields examples as (key, example) tuples."""
        data = pd.read_csv(filepath)
        # Each CSV row is emitted as one example, keyed by its row index.
        for id_, row in data.iterrows():
            yield id_, row.to_dict()
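
For reference, this is how the manual-download flow above surfaces to users; a hedged sketch, with a placeholder path:

```python
from datasets import load_dataset

# A valid data_dir loads both splits; a wrong path makes _split_generators
# raise the FileNotFoundError above, surfacing the manual download instructions.
dataset = load_dataset("GroNLP/ik-nlp-22_htstyle", data_dir="path/to/root/folder")
print(dataset["train"].features["edit_time"])  # float32, as declared in _info
```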