gsarti committed
Commit 109b7ff
1 Parent(s): ad2b226

Updated new version
README.md CHANGED
@@ -11,7 +11,7 @@ licenses:
  - private
  multilinguality:
  - translation
- pretty_name: htstyle-iknlp2022
+ pretty_name: iknlp22-pestyle
  size_categories:
  - 1K<n<10K
  source_datasets:
@@ -20,15 +20,14 @@ task_categories:
  - translation
  ---

- # Dataset Card for IK-NLP-22 Translator Stylometry
+ # Dataset Card for IK-NLP-22 Project 1: A Study in Post-Editing Stylometry

  ## Table of Contents

- - [Dataset Card for IK-NLP-22 Translator Stylometry](#dataset-card-for-ik-nlp-22-translator-stylometry)
+ - [Dataset Card for IK-NLP-22 Project 1: A Study in Post-Editing Stylometry](#dataset-card-for-ik-nlp-22-project-1-a-study-in-post-editing-stylometry)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
- - [Projects](#projects)
  - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
@@ -51,13 +50,9 @@ task_categories:

  This dataset contains a sample of sentences taken from the [FLORES-101](https://huggingface.co/datasets/gsarti/flores_101) dataset that were either translated from scratch or post-edited from an existing automatic translation by three human translators. Translations were performed for the English-Italian language pair, and translators' behavioral data (keystrokes, pauses, editing times) were collected using the [PET](https://github.com/wilkeraziz/PET) platform.

- This dataset is made available for final projects of the 2022 edition of the Natural Language Processing course at the [Information Science Master's Degree](https://www.rug.nl/masters/information-science/?lang=en) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) with the assistance of [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti).
+ This dataset is made available for final projects of the 2022 edition of the Natural Language Processing course at the [Information Science Master's Degree](https://www.rug.nl/masters/information-science/?lang=en) at the University of Groningen, taught by [Arianna Bisazza](https://research.rug.nl/en/persons/arianna-bisazza) with the assistance of [Gabriele Sarti](https://research.rug.nl/en/persons/gabriele-sarti) and [Anjali Nair](https://nl.linkedin.com/in/anjalinair012).

- **Disclaimer**: *This repository is provided without direct data access due to currently unpublished results.* _**For this reason, it is strictly forbidden to share or publish all the data associated to this repository**_ *Students will be provided with a compressed folder containing the data upon choosing a project based on this dataset. To load the dataset using 🤗 Datasets, download and unzip the provided folder and pass it to the* `load_dataset` *method as:* `datasets.load_dataset('GroNLP/ik-nlp-22_htstyle', 'main', data_dir='path/to/unzipped/folder')`
-
- ### Projects
-
- To be provided.
+ **Disclaimer**: *This repository is provided without direct data access due to currently unpublished results.* _**For this reason, it is strictly forbidden to share or publish any of the data associated with this repository.**_ *Students will be provided with a compressed folder containing the data upon choosing a project based on this dataset. To load the dataset using 🤗 Datasets, download and unzip the provided folder and pass it to the* `load_dataset` *method as:* `datasets.load_dataset('GroNLP/ik-nlp-22_pestyle', 'main', data_dir='path/to/unzipped/folder')`

  ### Languages

@@ -67,24 +62,20 @@ The language data of is in English (BCP-47 `en`) and Italian (BCP-47 `it`)

  ### Data Instances

- The dataset contains a single configuration, `main`, with two data splits: `train` and `test`.
+ The dataset contains a single configuration, `main`, with four data splits: the main `train` split, in which all fields are available, and three test splits: `test_mask_subject`, `test_mask_modality`, and `test_mask_time`. See the [Data Splits](#data-splits) section for more details.

  ### Data Fields

- The following fields are contained in the dataset:
+ The following fields are contained in the training set:

  |Field|Description|
  |-----|-----------|
- |`item` | The sentence identifier. The first digits of the number represent the document containing the sentence, while the last digit of the number represents the sentence position inside the document. Documents can contain from 3 to 5 semantically-related sentences each. |
- |`subject` | The identifier for the translator performing the translation from scratch or post-editing task. Values: `t1`, `t2` or `t3`. |
- |`tasktype` | The setting of the translation task. Values: `ht` (translation from scratch), `pe1` (post-editing Google Translate translations), `pe2` (post-editing [mBART](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) translations). |
- |`sl_text` | The original source sentence extracted from Wikinews, wikibooks or wikivoyage. |
+ |`item_id` | The sentence identifier. The first digits of the number represent the document containing the sentence, while the last digit of the number represents the sentence position inside the document. Documents can contain from 3 to 5 semantically-related sentences each. |
+ |`subject_id` | The identifier for the translator performing the translation from scratch or post-editing task. Values: `t1`, `t2` or `t3`. |
+ |`modality` | The modality of the translation task. Values: `ht` (translation from scratch), `pe1` (post-editing Google Translate translations), `pe2` (post-editing [mBART](https://huggingface.co/facebook/mbart-large-50-one-to-many-mmt) translations). |
+ |`src_text` | The original source sentence extracted from Wikinews, Wikibooks or Wikivoyage. |
  |`mt_text` | Missing if tasktype is `ht`. Otherwise, contains the automatically-translated sentence before post-editing. |
- |`tl_text` | Final sentence produced by the translator (either via translation from scratch of `sl_text` or post-editing `mt_text`) |
- |`len_sl_chr` | Length of the original source text in characters. |
- |`len_tl_chr` | Length of the final translated text in characters. |
- |`len_sl_wrd` | Length of the original source text in words. |
- |`len_tl_wrd` | Length of the final translated text in words. |
+ |`tgt_text` | Final sentence produced by the translator (either via translation from scratch of `src_text` or post-editing of `mt_text`). |
  |`edit_time` | Total editing time for the translation in seconds. |
  |`k_total` | Total number of keystrokes for the translation. |
  |`k_letter` | Total number of letter keystrokes for the translation. |
@@ -96,41 +87,39 @@ The following fields are contained in the dataset:
  |`k_copy` | Total number of copy (Ctrl + C) actions during the translation. |
  |`k_cut` | Total number of cut (Ctrl + X) actions during the translation. |
  |`k_paste` | Total number of paste (Ctrl + V) actions during the translation. |
- |`np_300` | Number of pauses of 300ms or more during the translation. |
- |`lp_300` | Total duration of pauses of 300ms or more, in milliseconds. |
- |`np_1000` | Number of pauses of 1s or more during the translation. |
- |`lp_1000` | Total duration of pauses of 1000ms or more, in milliseconds. |
- |`mt_tl_bleu` | Sentence-level BLEU score between MT and post-edited fields (empty for tasktype `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
- |`mt_tl_chrf` | Sentence-level chrF score between MT and post-edited fields (empty for tasktype `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
- |`mt_tl_Ins` | Number of post-editing insertions (empty for tasktype `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
- |`mt_tl_Del` | Number of post-editing deletions (empty for tasktype `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
- |`mt_tl_Sub` | Number of post-editing substitutions (empty for tasktype `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
- |`mt_tl_Shft` | Number of post-editing shifts (empty for tasktype `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
- |`mt_tl_ter` | Sentence-level TER score between MT and post-edited fields (empty for tasktype `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
- |`mt_tl_edits` | Aligned visual representation of REF (`mt_text`), HYP (`tl_text`) and edit operations (I = Insertion, D = Deletion, S = Shift or Substitution) performed on the field. Replace `:::` with `\n` to show aligned.|
+ |`n_pause_geq_300` | Number of pauses of 300ms or more during the translation. |
+ |`len_pause_geq_300` | Total duration of pauses of 300ms or more, in milliseconds. |
+ |`n_pause_geq_1000` | Number of pauses of 1s or more during the translation. |
+ |`len_pause_geq_1000` | Total duration of pauses of 1000ms or more, in milliseconds. |
+ |`num_annotations` | Number of times the translator focused the textbox for performing the translation of the sentence during the translation session. E.g. 1 means the translation was performed once and never revised. |
+ |`n_insert` | Number of post-editing insertions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
+ |`n_delete` | Number of post-editing deletions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
+ |`n_substitute` | Number of post-editing substitutions (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
+ |`n_shift` | Number of post-editing shifts (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
+ |`bleu` | Sentence-level BLEU score between MT and post-edited fields (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
+ |`chrf` | Sentence-level chrF score between MT and post-edited fields (empty for modality `ht`) computed using the [SacreBLEU](https://github.com/mjpost/sacrebleu) library with default parameters. |
+ |`ter` | Sentence-level TER score between MT and post-edited fields (empty for modality `ht`) computed using the [tercom](https://github.com/jhclark/tercom) library. |
+ |`aligned_edit` | Aligned visual representation of REF (`mt_text`), HYP (`tgt_text`) and edit operations (I = Insertion, D = Deletion, S = Substitution) performed on the field. Replace `\\n` with `\n` to show the three aligned rows.|

  ### Data Splits

  | config| train| test|
  |------:|-----:|----:|
- |`main` | 1159 | 107 |
+ |`main` | 1170 | 120 |

  #### Train Split

- The `train` split contains a total of 1159 triplets (or pairs, when translation from scratch is performed) annotated with behavioral data produced during the translation. The following is an example of the subject `t3` post-editing a machine translation produced by system 2 (tasktype `pe2`) taken from the `train` split. The field `mt_tl_edits` is showed over three lines to provide a visual understanding of its contents.
+ The `train` split contains a total of 1170 triplets (or pairs, when translation from scratch is performed) annotated with behavioral data produced during the translation. The following is an example of subject `t3` post-editing a machine translation produced by system 2 (modality `pe2`), taken from the `train` split. The field `aligned_edit` is shown over three lines to provide a visual understanding of its contents.

  ```json
  {
-     "item": 1072,
-     "subject": "t3",
+     "item_id": 1072,
+     "subject_id": "t3",
      "tasktype": "pe2",
-     "sl_text": "At the beginning dress was heavily influenced by the Byzantine culture in the east.",
+     "src_text": "At the beginning dress was heavily influenced by the Byzantine culture in the east.",
      "mt_text": "All'inizio il vestito era fortemente influenzato dalla cultura bizantina dell'est.",
-     "tl_text": "Inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale.",
-     "len_sl_chr": 83,
-     "len_tl_chr": 91,
-     "len_sl_wrd": 14,
-     "len_tl_wrd": 9,
+     "tgt_text": "Inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale.",
      "edit_time": 45.687,
      "k_total": 51,
      "k_letter": 31,
@@ -142,19 +131,20 @@ The `train` split contains a total of 1159 triplets (or pairs, when translation
      "k_copy": 0,
      "k_cut": 0,
      "k_paste": 0,
-     "np_300": 9,
-     "lp_300": 40032,
-     "np_1000": 5,
-     "lp_1000": 38392,
-     "mt_tl_bleu": 47.99,
-     "mt_tl_chrf": 62.05,
-     "mt_tl_Ins": 0.0,
-     "mt_tl_Del": 1.0,
-     "mt_tl_Sub": 3.0,
-     "mt_tl_Shft": 0.0,
-     "mt_tl_ter": 40.0,
-     "mt_tl_edits: "REF: all'inizio il vestito era fortemente influenzato dalla cultura bizantina dell'est.:::
-           HYP: ********** inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale.:::
+     "n_pause_geq_300": 9,
+     "len_pause_geq_300": 40032,
+     "n_pause_geq_1000": 5,
+     "len_pause_geq_1000": 38392,
+     "num_annotations": 1,
+     "n_insert": 0.0,
+     "n_delete": 1.0,
+     "n_substitute": 3.0,
+     "n_shift": 0.0,
+     "bleu": 47.99,
+     "chrf": 62.05,
+     "ter": 40.0,
+     "aligned_edit": "REF: all'inizio il vestito era fortemente influenzato dalla cultura bizantina dell'est.\\n
+           HYP: ********** inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale.\\n
            EVAL: D S S S"
  }
  ```
@@ -163,17 +153,17 @@ The text is provided as-is, without further preprocessing or tokenization.

  #### Test split

- The `test` split contains 107 entries following the same structure as `train`, with few omissions:
+ The three test splits contain the same 120 entries each, following the same structure as `train`. Each test split omits some of the fields to prevent information leakage:

- - the `subject` field was set to `nan` for the translator stylometry task.
+ - In `test_mask_subject` the `subject_id` field is absent, for the main task of post-editor stylometry.

- - the `tasktype`, `mt_text` and `mt_tl` evaluation metrics fields were set to `nan` for the translation setting prediction task.
+ - In `test_mask_modality` the following fields are absent for the modality prediction extra task: `modality`, `mt_text`, `n_insert`, `n_delete`, `n_substitute`, `n_shift`, `ter`, `bleu`, `chrf`, `aligned_edit`.

- - the `edit_time`, `lp_300` and `lp_1000` fields were set to -1 for the translation time prediction task.
+ - In `test_mask_time` the following fields are absent for the time and pause prediction extra task: `edit_time`, `n_pause_geq_300`, `len_pause_geq_300`, `n_pause_geq_1000`, and `len_pause_geq_1000`.

  ### Dataset Creation

- The dataset was parsed from PET XML files into CSV format using the scripts by [Antonio Toral](https://research.rug.nl/en/persons/antonio-toral) found at the following link: [https://github.com/antot/postediting_novel_frontiers](https://github.com/antot/postediting_novel_frontiers)
+ The dataset was parsed from PET XML files into CSV format using a script adapted from the one by [Antonio Toral](https://research.rug.nl/en/persons/antonio-toral), available at [https://github.com/antot/postediting_novel_frontiers](https://github.com/antot/postediting_novel_frontiers).

  ## Additional Information

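The disclaimer in the updated card shows only the bare `load_dataset` call. A minimal loading sketch is given below, assuming the provided data folder has already been unzipped locally; the path is a placeholder, and the split names are the ones introduced in the new Data Splits section.

```python
from datasets import load_dataset

# Placeholder path: the unzipped folder handed out upon choosing the project,
# containing the IK_NLP_22_PESTYLE subfolder with train.tsv and test.tsv.
data_dir = "path/to/unzipped/folder"

dataset = load_dataset("GroNLP/ik-nlp-22_pestyle", "main", data_dir=data_dir)

# One training split plus the three masked test splits described in the card.
print(dataset)
print(dataset["train"][0])
```

The masked test splits are accessed the same way, e.g. `dataset["test_mask_subject"]`.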
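The `item_id` field packs the document identifier and the sentence position into a single number (documents hold 3 to 5 sentences, with the position stored in the last digit). A small sketch of how it could be unpacked under that reading; the helper name is made up for illustration.

```python
def split_item_id(item_id: int) -> tuple:
    """Split an item identifier into (document_id, sentence_position).

    Per the field description, the leading digits identify the document and
    the last digit is the position of the sentence inside that document.
    """
    return divmod(item_id, 10)

print(split_item_id(1072))  # (107, 2): document 107, sentence 2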
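The `aligned_edit` field stores the REF/HYP/EVAL rows on a single line, separated by literal `\\n` sequences. A sketch of a helper that prints it in the intended three-row layout (the function name is hypothetical):

```python
def print_aligned_edit(aligned_edit: str) -> None:
    """Print the REF, HYP and EVAL rows of an `aligned_edit` value, one per line."""
    if not isinstance(aligned_edit, str) or not aligned_edit:
        # Empty for modality `ht`, where there is no machine translation to align.
        print("(no aligned edits)")
        return
    for row in aligned_edit.split("\\n"):
        print(row.rstrip())
```

Applied to the training example in the card, this reproduces the three aligned rows shown in the JSON snippet above.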
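The `bleu` and `chrf` columns are described as sentence-level SacreBLEU scores with default parameters between `mt_text` and the post-edited `tgt_text`. A sketch of how comparable numbers could be recomputed, assuming a recent `sacrebleu` release; the scoring direction is an assumption, and the tercom-based TER and edit counts in the card are not reproduced here.

```python
from sacrebleu.metrics import BLEU, CHRF

# MT output and post-edited sentence from the example in the card.
mt_text = "All'inizio il vestito era fortemente influenzato dalla cultura bizantina dell'est."
tgt_text = "Inizialmente, l'abbigliamento era fortemente influenzato dalla cultura bizantina orientale."

# Score the MT output against the post-edited sentence used as reference.
# effective_order=True is the usual setting for sentence-level BLEU.
bleu = BLEU(effective_order=True).sentence_score(mt_text, [tgt_text])
chrf = CHRF().sentence_score(mt_text, [tgt_text])
print(f"BLEU={bleu.score:.2f} chrF={chrf.score:.2f}")
```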
ik-nlp-22_htstyle.py → ik-nlp-22_pestyle.py RENAMED
@@ -17,12 +17,39 @@ _HOMEPAGE = "https://www.rug.nl/masters/information-science/?lang=en"
  _LICENSE = "Sharing and publishing of the data is not allowed at the moment."

  _SPLITS = {
-     "train": os.path.join("IK_NLP_22_HTSTYLE", "train.csv"),
-     "test": os.path.join("IK_NLP_22_HTSTYLE", "test.csv")
+     "train": os.path.join("IK_NLP_22_PESTYLE", "train.tsv"),
+     "test_mask_subject": os.path.join("IK_NLP_22_PESTYLE", "test.tsv"),
+     "test_mask_modality": os.path.join("IK_NLP_22_PESTYLE", "test.tsv"),
+     "test_mask_time": os.path.join("IK_NLP_22_PESTYLE", "test.tsv")
  }

+ _ALL_FIELDS = [
+     "item_id", "subject_id", "modality",
+     "src_text", "mt_text", "tgt_text",
+     "edit_time", "k_total", "k_letter", "k_digit", "k_white", "k_symbol", "k_nav", "k_erase",
+     "k_copy", "k_cut", "k_paste", "n_pause_geq_300", "len_pause_geq_300",
+     "n_pause_geq_1000", "len_pause_geq_1000", "num_annotations",
+     "n_insert", "n_delete", "n_substitute", "n_shift", "bleu", "chrf", "ter", "aligned_edit"
+ ]
+
+ _FIELDS_MASK_SUBJECT = [f for f in _ALL_FIELDS if f not in ["subject_id"]]
+ _FIELDS_MASK_MODALITY = [f for f in _ALL_FIELDS if f not in [
+     "modality", "mt_text", "n_insert", "n_delete", "n_substitute",
+     "n_shift", "ter", "bleu", "chrf", "aligned_edit"
+ ]]
+ _FIELDS_MASK_TIME = [f for f in _ALL_FIELDS if f not in [
+     "edit_time", "n_pause_geq_300", "len_pause_geq_300",
+     "n_pause_geq_1000", "len_pause_geq_1000"
+ ]]
+
+ _DICT_FIELDS = {
+     "train": _ALL_FIELDS,
+     "test_mask_subject": _FIELDS_MASK_SUBJECT,
+     "test_mask_modality": _FIELDS_MASK_MODALITY,
+     "test_mask_time": _FIELDS_MASK_TIME
+ }

- class IkNlp22HtStyleConfig(datasets.BuilderConfig):
+ class IkNlp22PEStyleConfig(datasets.BuilderConfig):
      """BuilderConfig for the IK NLP '22 HT-Style Dataset."""

      def __init__(
@@ -40,42 +67,13 @@ class IkNlp22HtStyleConfig(datasets.BuilderConfig):
          self.features = features


- class IkNlp22HtStyle(datasets.GeneratorBasedBuilder):
+ class IkNlp22PEStyle(datasets.GeneratorBasedBuilder):
      VERSION = datasets.Version("1.0.0")

      BUILDER_CONFIGS = [
-         IkNlp22HtStyleConfig(
+         IkNlp22PEStyleConfig(
              name="main",
-             features=[
-                 "item",
-                 "subject",
-                 "tasktype",
-                 "sl_text",
-                 "mt_text",
-                 "tl_text",
-                 "len_sl_chr",
-                 "len_tl_chr",
-                 "len_sl_wrd",
-                 "len_tl_wrd",
-                 "edit_time",
-                 "k_total",
-                 "k_letter",
-                 "k_digit",
-                 "k_white",
-                 "k_symbol",
-                 "k_nav",
-                 "k_erase",
-                 "k_copy",
-                 "k_cut",
-                 "k_paste",
-                 "np_300",
-                 "lp_300",
-                 "np_1000",
-                 "lp_1000",
-                 "mt_tl_bleu",
-                 "mt_tl_chrf",
-                 "mt_tl_ter",
-             ],
+             features=_ALL_FIELDS,
          ),
      ]

@@ -86,22 +84,23 @@ class IkNlp22HtStyle(datasets.GeneratorBasedBuilder):
      def manual_download_instructions(self):
          return (
              "The access to the data is restricted to students of the IK MSc NLP 2022 course working on a related project."
-             "To load the data using this dataset, download and extract the IK_NLP_22_HTSTYLE folder you were provided upon selecting the final project."
-             "After extracting it, the folder (referred to as root) must contain a IK_NLP_22_HTSTYLE subfolder, containing train.csv and test.csv files."
-             "Then, load the dataset with: `datasets.load_dataset('GroNLP/ik-nlp-22_htstyle', 'main', data_dir='path/to/root/folder')`"
+             "To load the data using this dataset, download and extract the IK_NLP_22_PESTYLE folder you were provided upon selecting the final project."
+             "After extracting it, the folder (referred to as root) must contain an IK_NLP_22_PESTYLE subfolder, containing train.tsv and test.tsv files."
+             "Then, load the dataset with: `datasets.load_dataset('GroNLP/ik-nlp-22_pestyle', 'main', data_dir='path/to/root/folder')`"
          )

      def _info(self):
          features = {feature: datasets.Value("int32") for feature in self.config.features}
-         features["subject"] = datasets.Value("string")
-         features["tasktype"] = datasets.Value("string")
-         features["sl_text"] = datasets.Value("string")
+         features["subject_id"] = datasets.Value("string")
+         features["modality"] = datasets.Value("string")
+         features["src_text"] = datasets.Value("string")
          features["mt_text"] = datasets.Value("string")
-         features["tl_text"] = datasets.Value("string")
+         features["tgt_text"] = datasets.Value("string")
+         features["aligned_edit"] = datasets.Value("string")
          features["edit_time"] = datasets.Value("float32")
-         features["mt_tl_bleu"] = datasets.Value("float32")
-         features["mt_tl_chrf"] = datasets.Value("float32")
-         features["mt_tl_ter"] = datasets.Value("float32")
+         features["bleu"] = datasets.Value("float32")
+         features["chrf"] = datasets.Value("float32")
+         features["ter"] = datasets.Value("float32")
          return datasets.DatasetInfo(
              description=_DESCRIPTION,
              features=datasets.Features(features),
@@ -115,30 +114,27 @@ class IkNlp22HtStyle(datasets.GeneratorBasedBuilder):
          data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))
          if not os.path.exists(data_dir):
              raise FileNotFoundError(
-                 "{} does not exist. Make sure you insert the unzipped IK_NLP_22_HTSTYLE dir via "
-                 "`datasets.load_dataset('GroNLP/ik-nlp-22_htstyle', data_dir=...)`"
+                 "{} does not exist. Make sure you insert the unzipped IK_NLP_22_PESTYLE dir via "
+                 "`datasets.load_dataset('GroNLP/ik-nlp-22_pestyle', data_dir=...)`"
                  "Manual download instructions: {}".format(
                      data_dir, self.manual_download_instructions
                  )
              )
          return [
              datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
+                 name=name,
                  gen_kwargs={
-                     "filepath": os.path.join(data_dir, _SPLITS["train"]),
+                     "filepath": os.path.join(data_dir, path),
+                     "fields": _DICT_FIELDS[name],
                  },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "filepath": os.path.join(data_dir, _SPLITS["test"]),
-                 },
-             ),
+             )
+             for name, path in _SPLITS.items()
          ]

-     def _generate_examples(self, filepath: str):
+     def _generate_examples(self, filepath: str, fields):
          """Yields examples as (key, example) tuples."""
          data = pd.read_csv(filepath)
+         data = data[fields]
          print(data.shape)
          for id_, row in data.iterrows():
              yield id_, row.to_dict()
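In the updated loader, all four splits read the same two TSV files and differ only in the column subset kept by `data = data[fields]`, driven by `_DICT_FIELDS`. The toy sketch below illustrates that masking step; the miniature DataFrame and its values are invented for illustration and only use a few of the real column names.

```python
import pandas as pd

# Invented stand-in for a row of test.tsv, restricted to a few real columns.
data = pd.DataFrame(
    {
        "item_id": [1072],
        "subject_id": ["t3"],
        "modality": ["pe2"],
        "edit_time": [45.687],
    }
)

# Same idea as _FIELDS_MASK_SUBJECT: keep every field except subject_id.
fields = [column for column in data.columns if column != "subject_id"]
print(list(data[fields].columns))  # ['item_id', 'modality', 'edit_time']
```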