---
annotations_creators:
- expert-created
language_creators:
- unknown
languages:
- unknown
licenses:
- cc-by-sa-4.0
multilinguality:
- unknown
pretty_name: turku_paraphrase_corpus
size_categories:
- unknown
source_datasets:
- original
task_categories:
- paraphrasing
task_ids:
- unknown
---

# Dataset Card for GEM/turku_paraphrase_corpus

## Dataset Description

- **Homepage:** https://turkunlp.org/paraphrase.html
- **Repository:** https://github.com/TurkuNLP/Turku-paraphrase-corpus
- **Paper:** https://aclanthology.org/2021.nodalida-main.29/
- **Leaderboard:** N/A
- **Point of Contact:** Jenna Kanerva, Filip Ginter

### Link to Main Data Card

You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/turku_paraphrase_corpus).

### Dataset Summary

This is a Finnish paraphrase corpus which consists of pairs of text passages, where a typical passage is about a sentence long. It can be used to either identify or generate paraphrases.

You can load the dataset via:
```python
import datasets
data = datasets.load_dataset('GEM/turku_paraphrase_corpus')
```
The data loader can be found [here](https://huggingface.co/datasets/GEM/turku_paraphrase_corpus).
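To work with one of the three `modes` described under Data Fields below (plain, classification, generation), a configuration name can be passed to the loader. This is a minimal sketch, assuming the mode names double as loader configuration names:

```python
import datasets

# Load the classification-oriented view of the corpus;
# 'classification' is assumed here to be a valid configuration name.
data = datasets.load_dataset('GEM/turku_paraphrase_corpus', 'classification')

print(data)              # splits: train / validation / test
print(data['train'][0])  # one example: gem_id, text1, text2, label, ...
```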

#### website
[Website](https://turkunlp.org/paraphrase.html)

#### paper
[ACL Anthology](https://aclanthology.org/2021.nodalida-main.29/)

#### authors
Jenna Kanerva, Filip Ginter, Li-Hsin Chang, Iiro Rastas, Valtteri Skantsi, Jemina Kilpeläinen, Hanna-Mari Kupari, Aurora Piirto, Jenna Saarni, Maija Sevón, Otto Tarkka (TurkuNLP / University of Turku)

## Dataset Overview

### Where to find the Data and its Documentation

#### Webpage

<!-- info: What is the webpage for the dataset (if it exists)? -->
<!-- scope: telescope -->
[Website](https://turkunlp.org/paraphrase.html)

#### Download

<!-- info: What is the link to where the original dataset is hosted? -->
<!-- scope: telescope -->
[Github](https://github.com/TurkuNLP/Turku-paraphrase-corpus)

#### Paper

<!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
<!-- scope: telescope -->
[ACL Anthology](https://aclanthology.org/2021.nodalida-main.29/)
#### BibTex

<!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
<!-- scope: microscope -->
```
@inproceedings{kanerva-etal-2021-finnish,
  title = {Finnish Paraphrase Corpus},
  author = {Kanerva, Jenna and Ginter, Filip and Chang, Li-Hsin and Rastas, Iiro and Skantsi, Valtteri and Kilpel{\"a}inen, Jemina and Kupari, Hanna-Mari and Saarni, Jenna and Sev{\'o}n, Maija and Tarkka, Otto},
  booktitle = {Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa'21)},
  year = {2021},
  publisher = {Link{\"o}ping University Electronic Press, Sweden},
  url = {https://aclanthology.org/2021.nodalida-main.29},
  pages = {288--298}
}
```

#### Contact Name

<!-- quick -->
<!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
Jenna Kanerva, Filip Ginter

#### Contact Email

<!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
<!-- scope: periscope -->
jmnybl@utu.fi, figint@utu.fi

#### Has a Leaderboard?

<!-- info: Does the dataset have an active leaderboard? -->
<!-- scope: telescope -->
no


### Languages and Intended Use

#### Multilingual?

<!-- quick -->
<!-- info: Is the dataset multilingual? -->
<!-- scope: telescope -->
no

#### Covered Dialects

<!-- info: What dialects are covered? Are there multiple dialects per language? -->
<!-- scope: periscope -->
written standard language, spoken language

#### Covered Languages

<!-- quick -->
<!-- info: What languages/dialects are covered in the dataset? -->
<!-- scope: telescope -->
`Finnish`

#### License

<!-- quick -->
<!-- info: What is the license of the dataset? -->
<!-- scope: telescope -->
cc-by-sa-4.0: Creative Commons Attribution Share Alike 4.0 International

#### Intended Use

<!-- info: What is the intended use of the dataset? -->
<!-- scope: microscope -->
Paraphrase classification, paraphrase generation

#### Primary Task

<!-- info: What primary task does the dataset support? -->
<!-- scope: telescope -->
Paraphrasing

#### Communicative Goal

<!-- quick -->
<!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
<!-- scope: periscope -->
The corpus provides naturally occurring Finnish paraphrases striving for low lexical overlap, thus supporting many different downstream applications requiring language understanding.


### Credit

#### Curation Organization Type(s)

<!-- info: In what kind of organization did the dataset curation happen? -->
<!-- scope: telescope -->
`academic`

#### Curation Organization(s)

<!-- info: Name the organization(s). -->
<!-- scope: periscope -->
University of Turku

#### Dataset Creators

<!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
<!-- scope: microscope -->
Jenna Kanerva, Filip Ginter, Li-Hsin Chang, Iiro Rastas, Valtteri Skantsi, Jemina Kilpeläinen, Hanna-Mari Kupari, Aurora Piirto, Jenna Saarni, Maija Sevón, Otto Tarkka (TurkuNLP / University of Turku)

#### Funding

<!-- info: Who funded the data creation? -->
<!-- scope: microscope -->
The Turku paraphrase corpus project was funded by the Academy of Finland, as well as the European Language Grid project through its open call for pilot projects. The European Language Grid project has received funding from the European Union’s Horizon 2020 Research and Innovation programme under Grant Agreement no. 825627 (ELG).

#### Who added the Dataset to GEM?

<!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
<!-- scope: microscope -->
Jenna Kanerva, Filip Ginter (TurkuNLP / University of Turku)


### Dataset Structure

#### Data Fields

<!-- info: List and describe the fields present in the dataset. -->
<!-- scope: telescope -->
The dataset consists of pairs of text passages. A typical passage is about a sentence long; however, a passage may also be longer or shorter than a sentence. Thus, each example includes two text passages (string), a manually annotated label to indicate the paraphrase type (string), and additional metadata.

The dataset includes three different `modes`: plain, classification, and generation. The `plain` mode loads the original data without any additional preprocessing or transformations, while the `classification` mode directly builds the data in a form suitable for training a paraphrase classifier, where each example is doubled in the data with different directions (text1, text2, label) --> (text2, text1, label), taking care of the label flipping as well if needed (paraphrases with directionality flag < or >). In the `generation` mode, the examples are preprocessed to be directly suitable for the paraphrase generation task. Here, paraphrases not suitable for generation are discarded (negative and highly context-dependent paraphrases), and directional paraphrases are provided so that the generation goes from the more detailed passage to the more general one, in order to prevent model hallucination (i.e. the model learning to introduce new information). The rest of the paraphrases are provided in both directions (text1, text2, label) --> (text2, text1, label).
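A minimal sketch of the doubling performed by the `classification` mode, assuming labels are strings such as `'4'`, `'4<'`, or `'4>'` where the trailing `<` / `>` marks directionality (the helper below is illustrative, not the actual loader code):

```python
def flip_direction(label: str) -> str:
    """Flip the directionality flag when text1 and text2 are swapped."""
    if label.endswith('<'):
        return label[:-1] + '>'
    if label.endswith('>'):
        return label[:-1] + '<'
    return label  # symmetric labels are unchanged

def double_for_classification(example):
    """Yield both directions of a paraphrase pair, as in `classification` mode."""
    yield (example['text1'], example['text2'], example['label'])
    yield (example['text2'], example['text1'], flip_direction(example['label']))

# Example: a directional paraphrase gets its flag flipped in the swapped copy.
pair = {'text1': 'a more detailed statement', 'text2': 'a more general statement', 'label': '4>'}
for text1, text2, label in double_for_classification(pair):
    print(text1, '|', text2, '|', label)
```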
 
Each pair in `generation` mode will include the same fields except `text1` and `text2`:

`output`: The output paraphrase passage for generation (string)
`label`: Manually annotated labels (string)
`binary_label`: Label turned into binary with values `positive` (paraphrase) and `negative` (not-paraphrase) (string)
`is_rewrite`: Indicator of whether the example is a human-produced rewrite or a naturally occurring paraphrase (bool)

#### Example Instance

<!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
<!-- scope: periscope -->
```
{
  'gem_id': 'gem-turku_paraphrase_corpus-train-15',
  'goeswith': 'episode-02243',
  'fold': 0,
  'text1': 'Mitä merkitystä sillä on?',
  'text2': 'Mitä väliä sillä edes on?',
  'label': '4',
  'binary_label': 'positive',
  'is_rewrite': False
}
```

#### Data Splits

<!-- info: Describe and name the splits in the dataset if there are more than one. -->
<!-- scope: periscope -->
The corpus includes 3 splits: train, validation, and test.

#### Splitting Criteria

<!-- info: Describe any criteria for splitting the data, if used. If there are differences between the splits (e.g., if the training annotations are machine-generated and the dev and test ones are created by humans, or if different numbers of annotators contributed to each example), describe them here. -->
<!-- scope: microscope -->
The data is split randomly into the three sections, with the restriction that all paraphrases from the same document (movie, TV episode, news article, student translation, or exam question) are in the same section. All splits are manually annotated.
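A sketch of how such a document-level split can be reproduced, assuming the `goeswith` field identifies the source document (as in the example instance above); the split proportions here are illustrative, not the original splitting script:

```python
import random

def grouped_split(examples, seed=0, train=0.8, validation=0.1):
    """Randomly assign whole documents (grouped by 'goeswith') to splits,
    so that all paraphrases from one document land in the same section."""
    docs = sorted({ex['goeswith'] for ex in examples})
    random.Random(seed).shuffle(docs)
    n = len(docs)
    cut1, cut2 = int(n * train), int(n * (train + validation))
    section = {d: 'train' for d in docs[:cut1]}
    section.update({d: 'validation' for d in docs[cut1:cut2]})
    section.update({d: 'test' for d in docs[cut2:]})
    splits = {'train': [], 'validation': [], 'test': []}
    for ex in examples:
        splits[section[ex['goeswith']]].append(ex)
    return splits
```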


## Dataset in GEM

### Rationale for Inclusion in GEM

#### Why is the Dataset in GEM?

<!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
<!-- scope: microscope -->
This dataset provides a large amount of high-quality (manually collected and verified) paraphrases for Finnish.

#### Similar Datasets

<!-- info: Do other datasets for the high level task exist? -->
<!-- scope: telescope -->
yes

#### Unique Language Coverage

<!-- info: Does this dataset cover other languages than other datasets for the same task? -->
<!-- scope: periscope -->
no

#### Ability that the Dataset measures

<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: periscope -->
natural language understanding, language variation

### GEM-Specific Curation

#### Modified for GEM?

<!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
<!-- scope: telescope -->
yes

#### GEM Modifications

<!-- info: What changes have been made to the original dataset? -->
<!-- scope: periscope -->
`data points modified`

#### Modification Details

<!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
<!-- scope: microscope -->
The data structure is slightly simplified, and the release provides ready-made transformations into two tasks (paraphrase classification and generation), where some data instances are doubled with different directions, and some are discarded as not suitable for generation (e.g. negatives).

#### Additional Splits?

<!-- info: Does GEM provide additional splits to the dataset? -->
<!-- scope: telescope -->
no


### Getting Started with the Task


## Previous Results

### Previous Results

#### Measured Model Abilities

<!-- info: What aspect of model ability can be measured with this dataset? -->
<!-- scope: telescope -->
natural language understanding, language variation

#### Previous results available?

<!-- info: Are previous results available? -->
<!-- scope: telescope -->
yes

#### Other Evaluation Approaches

<!-- info: What evaluation approaches have others used? -->
<!-- scope: periscope -->
F-score in paraphrase classification
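A minimal sketch of this kind of evaluation, assuming a classifier produces `positive`/`negative` predictions that are compared against the corpus `binary_label` field (the values below are made up for illustration):

```python
from sklearn.metrics import f1_score

# Gold binary labels from the corpus and hypothetical model predictions.
gold = ['positive', 'positive', 'negative', 'positive']
pred = ['positive', 'negative', 'negative', 'positive']

# F-score for the positive (paraphrase) class.
print(f1_score(gold, pred, pos_label='positive'))
```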


## Dataset Curation

### Original Curation

#### Original Curation Rationale

<!-- info: Original curation rationale -->
<!-- scope: telescope -->
The dataset is fully manually annotated. The dataset strives for interesting paraphrases with low lexical overlap, thus the annotation is twofold. First, the paraphrases are manually extracted from two related documents, where the annotators are instructed to extract only interesting paraphrases. In the second phase, all extracted paraphrases are manually labeled according to the annotation scheme.

The annotation scheme assigns each pair a base label, where label 4 marks a universal paraphrase (the full scheme is described in the paper). The following flags are annotated to label 4 paraphrases:

i : minor traceable difference (differing in terms of grammatical number or case, 'this' vs 'that', etc.)
s : style or strength difference (e.g. equivalent meaning, but one of the statements substantially more colloquial than the other)

For paraphrases where the annotated label was something other than label 4 without any flags, the annotators had the option to rewrite the text passages so that the rewritten paraphrase pair formed a label 4 (universal) paraphrase. This was used for cases where a simple edit would turn e.g. a contextual or directional paraphrase into a universal one. For the rewritten examples, both the original and the rewritten pairs are available with the corresponding labels annotated.

#### Communicative Goal

<!-- info: What was the communicative goal? -->
<!-- scope: periscope -->
Representing text passages with identical meaning but different surface realization.

#### Sourced from Different Sources

<!-- info: Is the dataset aggregated from different data sources? -->
<!-- scope: telescope -->
yes

#### Source Details

<!-- info: List the sources (one per line) -->
<!-- scope: periscope -->
movie and TV series subtitles (82%)
news articles (9%)
discussion forum messages (8%)
university translation exercises (1%)
university course essays and exams (<1%)


### Language Data

#### How was Language Data Obtained?

<!-- info: How was the language data obtained? -->
<!-- scope: telescope -->
`Found`, `Other`

#### Where was it found?

<!-- info: If found, where from? -->
<!-- scope: telescope -->
`Multiple websites`, `Offline media collection`, `Other`

#### Language Producers

<!-- info: What further information do we have on the language producers? -->
<!-- scope: microscope -->
The movie and TV series subtitles are extracted from the OPUS OpenSubtitles2018 collection, which is based on data from http://www.opensubtitles.org/.
The news articles are collected from two Finnish news sites, YLE and HS, during the years 2017-2020.
Discussion forum messages are obtained from the Finnish Suomi24 discussion forum released for academic use (http://urn.fi/urn:nbn:fi:lb-2020021801).
University translation exercises, essays, and exams were collected during the project.

#### Data Validation

<!-- info: Was the text validated by a different worker or a data curator? -->
<!-- scope: telescope -->
validated by data curator

#### Was Data Filtered?

<!-- info: Were text instances selected or filtered? -->
<!-- scope: telescope -->
not filtered


### Structured Annotations

#### Additional Annotations?

<!-- quick -->
<!-- info: Does the dataset have additional annotations for each instance? -->
<!-- scope: telescope -->
expert created

#### Number of Raters

<!-- info: What is the number of raters -->
<!-- scope: telescope -->
2<n<10

#### Rater Qualifications

<!-- info: Describe the qualifications required of an annotator. -->
<!-- scope: periscope -->
Members of the TurkuNLP research group, all native speakers of Finnish; each annotator has a strong background in language studies, with an academic degree or ongoing studies in a field related to languages or linguistics.

#### Raters per Training Example

<!-- info: How many annotators saw each training example? -->
<!-- scope: periscope -->
1

#### Raters per Test Example

<!-- info: How many annotators saw each test example? -->
<!-- scope: periscope -->
1

#### Annotation Service?

<!-- info: Was an annotation service used? -->
<!-- scope: telescope -->
no

#### Annotation Values

<!-- info: Purpose and values for each annotation -->
<!-- scope: microscope -->
1. Manual extraction of interesting paraphrases from two related documents.
2. Manual labeling of each extracted paraphrase based on the given annotation scheme, e.g. distinguishing contextual and universal paraphrases, marking style or strength differences, etc.

#### Any Quality Control?

<!-- info: Quality control measures? -->
<!-- scope: telescope -->
validated by another rater

#### Quality Control Details

<!-- info: Describe the quality control measures that were taken. -->
<!-- scope: microscope -->
Partial double annotation: double annotation batches are assigned regularly in order to monitor annotation consistency. In double annotation, one annotator first extracts the candidate paraphrases, and these candidates are assigned to two different annotators, who do the label annotation independently of each other. Afterwards, the label annotations are merged, and conflicting labels are resolved together with the whole annotation team.
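One way to monitor consistency on such double-annotated batches is an agreement statistic over the two annotators' independent labels; a minimal sketch using Cohen's kappa (the label values below are illustrative, and the corpus project's actual consistency metric is not specified here):

```python
from sklearn.metrics import cohen_kappa_score

# Labels assigned independently by two annotators to the same candidates
# (values are made up for illustration).
annotator_a = ['4', '4>', '3', '2', '4']
annotator_b = ['4', '4>', '4', '2', '4']

print(cohen_kappa_score(annotator_a, annotator_b))
```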


### Consent

#### Any Consent Policy?

<!-- info: Was there a consent policy involved when gathering the data? -->
<!-- scope: telescope -->
yes

#### Consent Policy Details

<!-- info: What was the consent policy? -->
<!-- scope: microscope -->
The corpus is mostly based on public/open data. For other data sources (student material), the licensing was agreed with the data providers during the collection.


### Private Identifying Information (PII)

#### Contains PII?

<!-- quick -->
<!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
<!-- scope: telescope -->
likely

#### Categories of PII

<!-- info: What categories of PII are present or suspected in the data? -->
<!-- scope: periscope -->
`generic PII`

#### Any PII Identification?

<!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
<!-- scope: periscope -->
no identification


### Maintenance

#### Any Maintenance Plan?

<!-- info: Does the original dataset have a maintenance plan? -->
<!-- scope: telescope -->
no

## Broader Social Context

### Previous Work on the Social Impact of the Dataset

#### Usage of Models based on the Data

<!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
<!-- scope: telescope -->
no


### Impact on Under-Served Communities

#### Addresses needs of underserved Communities?

<!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
<!-- scope: telescope -->
no


### Discussion of Biases

#### Any Documented Social Biases?

<!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
<!-- scope: telescope -->
no

## Considerations for Using the Data

### PII Risks and Liability

#### Potential PII Risk

<!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
<!-- scope: microscope -->
None


### Licenses

#### Copyright Restrictions on the Dataset

<!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describes the copyright and licensing status of the dataset? -->
<!-- scope: periscope -->
`open license - commercial use allowed`

#### Copyright Restrictions on the Language Data

<!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describes the copyright and licensing status of the underlying language data? -->
<!-- scope: periscope -->
`open license - commercial use allowed`


### Known Technical Limitations