parquet-converter committed on
Commit d1b10a6
1 Parent(s): 25220fb

Update parquet files

.gitattributes DELETED
@@ -1,30 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- train.json filter=lfs diff=lfs merge=lfs -text
- validation.json filter=lfs diff=lfs merge=lfs -text
- test.json filter=lfs diff=lfs merge=lfs -text

README.md DELETED
@@ -1,641 +0,0 @@
- ---
- annotations_creators:
- - expert-created
- language_creators:
- - unknown
- language:
- - fi
- license:
- - cc-by-nc-sa-4.0
- multilinguality:
- - unknown
- size_categories:
- - unknown
- source_datasets:
- - original
- task_categories:
- - table-to-text
- task_ids: []
- pretty_name: turku_hockey_data2text
- tags:
- - data-to-text
- ---
-
- # Dataset Card for GEM/turku_hockey_data2text
-
- ## Dataset Description
-
- - **Homepage:** https://turkunlp.org/hockey_data2text.html
- - **Repository:** https://github.com/TurkuNLP/Turku-hockey-data2text
- - **Paper:** https://aclanthology.org/W19-6125/
- - **Leaderboard:** N/A
- - **Point of Contact:** Jenna Kanerva, Filip Ginter
-
- ### Link to Main Data Card
-
- You can find the main data card on the [GEM Website](https://gem-benchmark.com/data_cards/turku_hockey_data2text).
-
- ### Dataset Summary
-
- This is a Finnish data-to-text dataset in which the input is structured information about a hockey game and the output is a description of the game.
-
- You can load the dataset via:
- ```
- import datasets
- data = datasets.load_dataset('GEM/turku_hockey_data2text')
- ```
- The data loader can be found [here](https://huggingface.co/datasets/GEM/turku_hockey_data2text).
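
As an editor's illustration (not part of the original card): each game follows the field layout described under "Data Fields", with `events` stored as parallel lists. The sketch below uses a small hand-written game that mirrors that schema, rather than a game fetched with the loader above, so it is self-contained; only events with a non-empty `text` were aligned to the news article.

```python
# Illustrative sketch only: a hand-written game mirroring the schema described
# in this card (real games come from the `datasets` loader shown above).
game = {
    "gem_id": "gem-turku_hockey_data2text-train-0",
    "events": {
        "event_id": ["E1", "E2", "E3"],
        "event_type": ["game result", "penalty", "goal"],
        "text": ["HPK kukisti TPS:n vieraissa 1-0.", "", "HPK hyodynsi ylivoimaa."],
    },
}

# Events whose `text` is non-empty were aligned to a passage in the article.
ev = game["events"]
annotated = [
    {"event_id": i, "event_type": t, "text": s}
    for i, t, s in zip(ev["event_id"], ev["event_type"], ev["text"])
    if s != ""
]
print([e["event_id"] for e in annotated])  # -> ['E1', 'E3']
```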
-
- #### website
- [Website](https://turkunlp.org/hockey_data2text.html)
-
- #### paper
- [ACL anthology](https://aclanthology.org/W19-6125/)
-
- #### authors
- Jenna Kanerva, Samuel Rönnqvist, Riina Kekki, Tapio Salakoski, Filip Ginter (TurkuNLP / University of Turku)
-
- ## Dataset Overview
-
- ### Where to find the Data and its Documentation
-
- #### Webpage
-
- <!-- info: What is the webpage for the dataset (if it exists)? -->
- <!-- scope: telescope -->
- [Website](https://turkunlp.org/hockey_data2text.html)
-
- #### Download
-
- <!-- info: What is the link to where the original dataset is hosted? -->
- <!-- scope: telescope -->
- [Github](https://github.com/TurkuNLP/Turku-hockey-data2text)
-
- #### Paper
-
- <!-- info: What is the link to the paper describing the dataset (open access preferred)? -->
- <!-- scope: telescope -->
- [ACL anthology](https://aclanthology.org/W19-6125/)
-
- #### BibTex
-
- <!-- info: Provide the BibTex-formatted reference for the dataset. Please use the correct published version (ACL anthology, etc.) instead of google scholar created Bibtex. -->
- <!-- scope: microscope -->
- ```
- @inproceedings{kanerva2019newsgen,
- Title = {Template-free Data-to-Text Generation of Finnish Sports News},
- Author = {Jenna Kanerva and Samuel R{\"o}nnqvist and Riina Kekki and Tapio Salakoski and Filip Ginter},
- booktitle = {Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa’19)},
- year={2019}
- }
- ```
-
- #### Contact Name
-
- <!-- quick -->
- <!-- info: If known, provide the name of at least one person the reader can contact for questions about the dataset. -->
- <!-- scope: periscope -->
- Jenna Kanerva, Filip Ginter
-
- #### Contact Email
-
- <!-- info: If known, provide the email of at least one person the reader can contact for questions about the dataset. -->
- <!-- scope: periscope -->
- jmnybl@utu.fi, figint@utu.fi
-
- #### Has a Leaderboard?
-
- <!-- info: Does the dataset have an active leaderboard? -->
- <!-- scope: telescope -->
- no
-
-
- ### Languages and Intended Use
-
- #### Multilingual?
-
- <!-- quick -->
- <!-- info: Is the dataset multilingual? -->
- <!-- scope: telescope -->
- no
-
- #### Covered Dialects
-
- <!-- info: What dialects are covered? Are there multiple dialects per language? -->
- <!-- scope: periscope -->
- written standard language
-
- #### Covered Languages
-
- <!-- quick -->
- <!-- info: What languages/dialects are covered in the dataset? -->
- <!-- scope: telescope -->
- `Finnish`
-
- #### Whose Language?
-
- <!-- info: Whose language is in the dataset? -->
- <!-- scope: periscope -->
- The original news articles are written by professional journalists. The text passages extracted in the annotation may have been slightly edited from the original language during corpus annotation.
-
- #### License
-
- <!-- quick -->
- <!-- info: What is the license of the dataset? -->
- <!-- scope: telescope -->
- cc-by-nc-sa-4.0: Creative Commons Attribution Non Commercial Share Alike 4.0 International
-
- #### Intended Use
-
- <!-- info: What is the intended use of the dataset? -->
- <!-- scope: microscope -->
- This dataset was developed as a benchmark for evaluating template-free, machine learning methods for Finnish news generation in the area of ice hockey reporting.
-
- #### Primary Task
-
- <!-- info: What primary task does the dataset support? -->
- <!-- scope: telescope -->
- Data-to-Text
-
- #### Communicative Goal
-
- <!-- quick -->
- <!-- info: Provide a short description of the communicative goal of a model trained for this task on this dataset. -->
- <!-- scope: periscope -->
- Describe an event from an ice hockey game based on the given structural data.
-
-
- ### Credit
-
- #### Curation Organization Type(s)
-
- <!-- info: In what kind of organization did the dataset curation happen? -->
- <!-- scope: telescope -->
- `academic`
-
- #### Curation Organization(s)
-
- <!-- info: Name the organization(s). -->
- <!-- scope: periscope -->
- University of Turku
-
- #### Dataset Creators
-
- <!-- info: Who created the original dataset? List the people involved in collecting the dataset and their affiliation(s). -->
- <!-- scope: microscope -->
- Jenna Kanerva, Samuel Rönnqvist, Riina Kekki, Tapio Salakoski, Filip Ginter (TurkuNLP / University of Turku)
-
- #### Funding
-
- <!-- info: Who funded the data creation? -->
- <!-- scope: microscope -->
- The project was supported by the Google Digital News Innovation Fund.
-
- #### Who added the Dataset to GEM?
-
- <!-- info: Who contributed to the data card and adding the dataset to GEM? List the people+affiliations involved in creating this data card and who helped integrate this dataset into GEM. -->
- <!-- scope: microscope -->
- Jenna Kanerva, Filip Ginter (TurkuNLP / University of Turku)
-
-
- ### Dataset Structure
-
- #### Data Fields
-
- <!-- info: List and describe the fields present in the dataset. -->
- <!-- scope: telescope -->
- The dataset is constructed of games, where each game is a list of events. If the event was annotated (a corresponding sentence was found in the news article), it includes a `text` field with a value other than the empty string ("").
-
- For each game (dict), there are keys `gem_id` (string), `id` (string), `news_article` (string), and `events` (list).
-
- For each event (dict), different relevant keys are available with non-empty values depending on the event type (e.g. goal or penalty). The mandatory keys for each event are `event_id` (string), `event_type` (string), `text` (string, empty string if not annotated), and `multi_reference` (bool). The keys not relevant for the specific event type are left empty.
-
- The following keys are relevant for every event type:
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event, possible values are `game result`, `goal`, `penalty`, or `saves` (string)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
-
-
- The rest of the fields are specific to the event type. The relevant fields for each event type are:
-
- game result:
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event (string)
- `home_team`: Name of the home team (string)
- `guest_team`: Name of the guest team (string)
- `score`: Final score of the game, in the form of home–guest (string)
- `periods`: Scores for individual periods, each in the form of home–guest score in that period (list of strings)
- `features`: Additional features, such as overtime win or shoot out (list of strings)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
-
- goal:
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event (string)
- `player`: Name of the player scoring (string)
- `assist`: Names of the players assisting, at most two players (list of strings)
- `team`: Team scoring with possible values of `home` or `guest` (string)
- `team_name`: Name of the team scoring (string)
- `score`: Score after the goal, in the form of home–guest (string)
- `time`: Time of the goal, minutes and seconds from the beginning (string)
- `features`: Additional features, such as power play or short-handed goal (list of strings)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
-
- penalty:
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event (string)
- `player`: Name of the player getting the penalty (string)
- `team`: Team getting the penalty with possible values of `home` or `guest` (string)
- `team_name`: Name of the team getting the penalty (string)
- `penalty_minutes`: Penalty minutes (string)
- `time`: Time of the penalty, minutes and seconds from the beginning (string)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
-
- saves:
- `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)
- `event_type`: Type of the event (string)
- `player`: Name of the goalkeeper (string)
- `team`: Team of the goalkeeper with possible values of `home` or `guest` (string)
- `team_name`: Name of the team (string)
- `saves`: Number of saves in the game (string)
- `text`: Natural language description of the event, or empty string if not available (string)
- `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)
-
-
- Text passages describing multiple events (multi_reference):
-
- Some text passages refer to multiple events in such a way that separating them into individual statements is not adequate (e.g. "The home team received two penalties towards the end of the first period."). In these cases, multiple events are aligned to the same text passage: the first event (in chronological order) includes the annotated text passage, while the rest of the events referring to the same passage carry the identifier of the first event in their annotated text field (e.g. `text`: "E4").
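
The pointer scheme above can be resolved in a few lines. This is an editor's sketch (field names as described in this card; `resolve_multi_reference` is a hypothetical helper, not part of the dataset loader):

```python
def resolve_multi_reference(events):
    """Replace pointer texts like 'E4' with the passage stored on the first
    event of the group. `events` is the columnar event dict from this card.
    Sketch only, assuming pointers always name an event id in the same game."""
    passage_by_id = dict(zip(events["event_id"], events["text"]))
    resolved = []
    for text, multi in zip(events["text"], events["multi_reference"]):
        if multi and text in passage_by_id:   # pointer to an earlier event's text
            resolved.append(passage_by_id[text])
        else:                                 # real passage (or empty string)
            resolved.append(text)
    return resolved

events = {
    "event_id": ["E4", "E5"],
    "event_type": ["penalty", "penalty"],
    "text": ["The home team received two penalties towards the end of the first period.", "E4"],
    "multi_reference": [True, True],
}
print(resolve_multi_reference(events))
```

After resolution, both penalty events carry the same shared passage, which is usually what an evaluation or templating step wants.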
273
-
274
- #### Example Instance
275
-
276
- <!-- info: Provide a JSON formatted example of a typical instance in the dataset. -->
277
- <!-- scope: periscope -->
278
- ```
279
- {
280
- 'gem_id': 'gem-turku_hockey_data2text-train-0',
281
- 'id': '20061031-TPS-HPK',
282
- 'news_article': 'HPK:n hyvä syysvire jatkuu jääkiekon SM-liigassa. Tiistaina HPK kukisti mainiolla liikkeellä ja tehokkaalla ylivoimapelillä TPS:n vieraissa 1–0 (1–0, 0–0, 0–0).\nHPK hyödynsi ylivoimaa mennen jo ensimmäisessä erässä Mikko Mäenpään maalilla 1–0 -johtoon.\nToisessa ja kolmannessa erässä HPK tarjosi edelleen TPS:lle runsaasti tilanteita, mutta maalia eivät turkulaiset millään ilveellä saaneet. Pahin este oli loistavan pelin Hämeenlinnan maalilla pelannut Mika Oksa.\nTPS:n maalissa Jani Hurme ei osumille mitään mahtanut. Joukkueen suuri yksinäinen kenttäpelaaja oli Kai Nurminen, mutta hänelläkään ei ollut onnea maalitilanteissa.',
283
- 'events':
284
- {
285
- 'event_id': ['E1', 'E2', 'E3'],
286
- 'event_type': ['game result', 'penalty', 'goal'],
287
- 'text': ['HPK kukisti TPS:n vieraissa 1–0 (1–0, 0–0, 0–0).', '', 'HPK hyödynsi ylivoimaa mennen jo ensimmäisessä erässä Mikko Mäenpään maalilla 1–0 -johtoon.'],
288
- 'home_team': ['TPS', '', ''],
289
- 'guest_team': ['HPK', '', ''],
290
- 'score': ['0–1', '', '0–1'],
291
- 'periods': [['0–1', '0–0', '0–0'], [], []],
292
- 'features': [[], [], ['power play']],
293
- 'player': ['', 'Fredrik Svensson', 'Mikko Mäenpää'],
294
- 'assist': [[], [], ['Jani Keinänen', 'Toni Mäkiaho']],
295
- 'team': ['', 'guest', 'guest'],
296
- 'team_name': ['', 'HPK', 'HPK'],
297
- 'time': ['', '9.28', '14.57'],
298
- 'penalty_minutes': ['', '2', ''],
299
- 'saves': ['', '', ''],
300
- 'multi_reference': [false, false, false]
301
- }
302
- }
303
- ```
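
Note that `events` is stored column-wise (one list per key), as in the instance above. An editor's sketch (`events_to_rows` is a hypothetical helper, not part of the dataset) for transposing it back into one dict per event, which is often handier for templating:

```python
def events_to_rows(events):
    # Transpose the columnar `events` dict (equal-length lists, as in the
    # example instance) into a list of per-event dicts. Sketch only.
    keys = list(events)
    return [dict(zip(keys, row)) for row in zip(*(events[k] for k in keys))]

events = {
    "event_id": ["E1", "E2", "E3"],
    "event_type": ["game result", "penalty", "goal"],
    "team_name": ["", "HPK", "HPK"],
}
rows = events_to_rows(events)
print(rows[2])  # -> {'event_id': 'E3', 'event_type': 'goal', 'team_name': 'HPK'}
```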
-
- #### Data Splits
-
- <!-- info: Describe and name the splits in the dataset if there are more than one. -->
- <!-- scope: periscope -->
- The corpus includes three splits: train, validation, and test.
-
-
-
- ## Dataset in GEM
-
- ### Rationale for Inclusion in GEM
-
- #### Why is the Dataset in GEM?
-
- <!-- info: What does this dataset contribute toward better generation evaluation and why is it part of GEM? -->
- <!-- scope: microscope -->
- The dataset was created to develop machine-learned text generation models for Finnish ice hockey news, where the generation would reflect the natural language variation found in the game reports written by professional journalists. While the original game reports often include additional information not derivable from the game statistics, the corpus was fully manually curated to remove all such information from the natural language descriptions. The rationale of this curation was to prevent the model from 'hallucinating' additional facts.
-
- #### Similar Datasets
-
- <!-- info: Do other datasets for the high level task exist? -->
- <!-- scope: telescope -->
- yes
-
- #### Unique Language Coverage
-
- <!-- info: Does this dataset cover other languages than other datasets for the same task? -->
- <!-- scope: periscope -->
- yes
-
- #### Difference from other GEM datasets
-
- <!-- info: What else sets this dataset apart from other similar datasets in GEM? -->
- <!-- scope: microscope -->
- This is the only data2text corpus for Finnish in GEM.
-
- #### Ability that the Dataset measures
-
- <!-- info: What aspect of model ability can be measured with this dataset? -->
- <!-- scope: periscope -->
- morphological inflection, language variation
-
-
- ### GEM-Specific Curation
-
- #### Modified for GEM?
-
- <!-- info: Has the GEM version of the dataset been modified in any way (data, processing, splits) from the original curated data? -->
- <!-- scope: telescope -->
- yes
-
- #### GEM Modifications
-
- <!-- info: What changes have been made to the original dataset? -->
- <!-- scope: periscope -->
- `data points modified`
-
- #### Modification Details
-
- <!-- info: For each of these changes, describe them in more detail and provide the intended purpose of the modification -->
- <!-- scope: microscope -->
- Structural data was translated into English.
-
- #### Additional Splits?
-
- <!-- info: Does GEM provide additional splits to the dataset? -->
- <!-- scope: telescope -->
- no
-
-
- ### Getting Started with the Task
-
-
-
-
- ## Previous Results
-
- ### Previous Results
-
- #### Metrics
-
- <!-- info: What metrics are typically used for this task? -->
- <!-- scope: periscope -->
- `BLEU`, `METEOR`, `ROUGE`, `WER`
-
- #### Proposed Evaluation
-
- <!-- info: List and describe the purpose of the metrics and evaluation methodology (including human evaluation) that the dataset creators used when introducing this task. -->
- <!-- scope: microscope -->
- Automatic evaluation: BLEU, NIST, METEOR, ROUGE-L, CIDEr
- Manual evaluation: factual mistakes, grammatical errors, minimum edit distance to an acceptable game report (using WER)
-
- #### Previous results available?
-
- <!-- info: Are previous results available? -->
- <!-- scope: telescope -->
- yes
-
-
-
- ## Dataset Curation
-
- ### Original Curation
-
- #### Original Curation Rationale
-
- <!-- info: Original curation rationale -->
- <!-- scope: telescope -->
- The dataset is designed for text generation (data2text), where the original source of natural language descriptions is news articles written by journalists. While the link between the structural data (ice hockey game statistics) and the news articles describing the game was quite weak (news articles include a lot of information not derivable from the statistics, while leaving many events unmentioned), the corpus includes full manual annotation aligning the events extracted from game statistics with the corresponding natural language passages extracted from the news articles.
-
- Each event is manually aligned with a sentence-like passage, and in case a suitable passage was not found, the annotation is left empty (with value `None`). The extracted passages were manually modified not to include additional information that is neither derivable from the game statistics nor considered world knowledge. The manual curation of passages is designed to prevent model hallucination, i.e. the model learning to generate facts not derivable from the input data.
-
- #### Communicative Goal
-
- <!-- info: What was the communicative goal? -->
- <!-- scope: periscope -->
- Describing the given events (structural data) in natural language, and therefore generating ice hockey game reports.
-
- #### Sourced from Different Sources
-
- <!-- info: Is the dataset aggregated from different data sources? -->
- <!-- scope: telescope -->
- no
-
-
- ### Language Data
-
- #### How was Language Data Obtained?
-
- <!-- info: How was the language data obtained? -->
- <!-- scope: telescope -->
- `Other`
-
- #### Language Producers
-
- <!-- info: What further information do we have on the language producers? -->
- <!-- scope: microscope -->
- The initial data, both game statistics and news articles, were obtained from the Finnish News Agency STT news archives released for academic use (http://urn.fi/urn:nbn:fi:lb-2019041501). The original news articles are written by professional journalists.
-
- We (TurkuNLP) gratefully acknowledge the collaboration of Maija Paikkala, Salla Salmela and Pihla Lehmusjoki from the Finnish News Agency STT while creating the corpus.
-
- #### Topics Covered
-
- <!-- info: Does the language in the dataset focus on specific topics? How would you describe them? -->
- <!-- scope: periscope -->
- Ice hockey, news
-
- #### Data Validation
-
- <!-- info: Was the text validated by a different worker or a data curator? -->
- <!-- scope: telescope -->
- not validated
-
- #### Was Data Filtered?
-
- <!-- info: Were text instances selected or filtered? -->
- <!-- scope: telescope -->
- algorithmically
-
- #### Filter Criteria
-
- <!-- info: What were the selection criteria? -->
- <!-- scope: microscope -->
- Include only games where both game statistics and a news article describing the game were available (matched based on timestamps and team names).
-
-
- ### Structured Annotations
-
- #### Additional Annotations?
-
- <!-- quick -->
- <!-- info: Does the dataset have additional annotations for each instance? -->
- <!-- scope: telescope -->
- expert created
-
- #### Number of Raters
-
- <!-- info: What is the number of raters -->
- <!-- scope: telescope -->
- 1
-
- #### Rater Qualifications
-
- <!-- info: Describe the qualifications required of an annotator. -->
- <!-- scope: periscope -->
- Members of the TurkuNLP research group, native speakers of Finnish.
-
- #### Raters per Training Example
-
- <!-- info: How many annotators saw each training example? -->
- <!-- scope: periscope -->
- 1
-
- #### Raters per Test Example
-
- <!-- info: How many annotators saw each test example? -->
- <!-- scope: periscope -->
- 1
-
- #### Annotation Service?
-
- <!-- info: Was an annotation service used? -->
- <!-- scope: telescope -->
- no
-
- #### Annotation Values
-
- <!-- info: Purpose and values for each annotation -->
- <!-- scope: microscope -->
- Manual alignment of events and their natural language descriptions. Information not derivable from the input data or world knowledge was removed in order to prevent model 'hallucination'.
-
- #### Any Quality Control?
-
- <!-- info: Quality control measures? -->
- <!-- scope: telescope -->
- validated by data curators
-
- #### Quality Control Details
-
- <!-- info: Describe the quality control measures that were taken. -->
- <!-- scope: microscope -->
- Manual inspection of examples during the initial annotation training phase.
-
-
- ### Consent
-
- #### Any Consent Policy?
-
- <!-- info: Was there a consent policy involved when gathering the data? -->
- <!-- scope: telescope -->
- yes
-
- #### Consent Policy Details
-
- <!-- info: What was the consent policy? -->
- <!-- scope: microscope -->
- The corpus license was agreed with the providers of the source material.
-
-
- ### Private Identifying Information (PII)
-
- #### Contains PII?
-
- <!-- quick -->
- <!-- info: Does the source language data likely contain Personal Identifying Information about the data creators or subjects? -->
- <!-- scope: telescope -->
- yes/very likely
-
- #### Categories of PII
-
- <!-- info: What categories of PII are present or suspected in the data? -->
- <!-- scope: periscope -->
- `generic PII`
-
- #### Any PII Identification?
-
- <!-- info: Did the curators use any automatic/manual method to identify PII in the dataset? -->
- <!-- scope: periscope -->
- no identification
-
-
- ### Maintenance
-
- #### Any Maintenance Plan?
-
- <!-- info: Does the original dataset have a maintenance plan? -->
- <!-- scope: telescope -->
- no
-
-
-
- ## Broader Social Context
-
- ### Previous Work on the Social Impact of the Dataset
-
- #### Usage of Models based on the Data
-
- <!-- info: Are you aware of cases where models trained on the task featured in this dataset or related tasks have been used in automated systems? -->
- <!-- scope: telescope -->
- no
-
-
- ### Impact on Under-Served Communities
-
- #### Addresses needs of underserved Communities?
-
- <!-- info: Does this dataset address the needs of communities that are traditionally underserved in language technology, and particularly language generation technology? Communities may be underserved for example because their language, language variety, or social or geographical context is underrepresented in NLP and NLG resources (datasets and models). -->
- <!-- scope: telescope -->
- no
-
-
- ### Discussion of Biases
-
- #### Any Documented Social Biases?
-
- <!-- info: Are there documented social biases in the dataset? Biases in this context are variations in the ways members of different social categories are represented that can have harmful downstream consequences for members of the more disadvantaged group. -->
- <!-- scope: telescope -->
- no
-
- #### Are the Language Producers Representative of the Language?
-
- <!-- info: Does the distribution of language producers in the dataset accurately represent the full distribution of speakers of the language world-wide? If not, how does it differ? -->
- <!-- scope: periscope -->
- The dataset represents only written standard language.
-
-
-
- ## Considerations for Using the Data
-
- ### PII Risks and Liability
-
- #### Potential PII Risk
-
- <!-- info: Considering your answers to the PII part of the Data Curation Section, describe any potential privacy risks to the data subjects and creators when using the dataset. -->
- <!-- scope: microscope -->
- None
-
-
- ### Licenses
-
- #### Copyright Restrictions on the Dataset
-
- <!-- info: Based on your answers in the Intended Use part of the Data Overview Section, which of the following best describe the copyright and licensing status of the dataset? -->
- <!-- scope: periscope -->
- `non-commercial use only`
-
- #### Copyright Restrictions on the Language Data
-
- <!-- info: Based on your answers in the Language part of the Data Curation Section, which of the following best describe the copyright and licensing status of the underlying language data? -->
- <!-- scope: periscope -->
- `non-commercial use only`
-
-
- ### Known Technical Limitations
-
-

validation.json → event-generation/turku_hockey_data2text-test.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:634538a60e9efad215d0b832e01a7e5ffa683da0a9e54cdf2e7957a4a97fd409
- size 2127107
+ oid sha256:6e25fe0056116fb886825391d8be5f3f72ccc3844234c9701b639827d5f92eed
+ size 94505
test.json → event-generation/turku_hockey_data2text-train.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:2ec802b5570d85b0f07c622d0d3af65205c569c7876d869b33aa8862be59f926
- size 2164446
+ oid sha256:2d1d5195ccbc928d1d5c3aa56cf881c5d3f2cbeaac2327600893567b46ae04cd
+ size 808504
train.json → event-generation/turku_hockey_data2text-validation.parquet RENAMED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:c8e507ddeb806a7fdd14368b8b6218bda54d85cc0eb4f38da3f57f10b1b3f0c4
- size 17329568
+ oid sha256:b2f9b142095f57327f17c66c723aaaea9994b0b39d6b6180d81c465201167076
+ size 101349
game-generation/turku_hockey_data2text-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a41bd3196d5a59f54201d1360146ef453c1800350869d262d2dd48ae36ba784
+ size 431487
game-generation/turku_hockey_data2text-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:540c7a39c0e1a4ca35465ec2845f3fa189b8c2b60e3aa1d6af63b9397c15857e
+ size 3202276
game-generation/turku_hockey_data2text-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:9fb5afd716bd301c7b71a18c35bd87ae5f0e023362276b91733dcf52b1557321
+ size 430709
turku_hockey_data2text.json DELETED
@@ -1,174 +0,0 @@
- {
-   "overview": {
-     "where": {
-       "has-leaderboard": "no",
-       "leaderboard-url": "N/A",
-       "leaderboard-description": "N/A",
-       "website": "[Website](https://turkunlp.org/hockey_data2text.html)",
-       "data-url": "[Github](https://github.com/TurkuNLP/Turku-hockey-data2text)",
-       "paper-url": "[ACL anthology](https://aclanthology.org/W19-6125/)",
-       "paper-bibtext": "```\n@inproceedings{kanerva2019newsgen,\n  Title = {Template-free Data-to-Text Generation of Finnish Sports News},\n  Author = {Jenna Kanerva and Samuel R{\\\"o}nnqvist and Riina Kekki and Tapio Salakoski and Filip Ginter},\n  booktitle = {Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa\u201919)},\n  year={2019}\n}\n```",
-       "contact-name": "Jenna Kanerva, Filip Ginter",
-       "contact-email": "jmnybl@utu.fi, figint@utu.fi"
-     },
-     "languages": {
-       "is-multilingual": "no",
-       "license": "cc-by-nc-sa-4.0: Creative Commons Attribution Non Commercial Share Alike 4.0 International",
-       "task-other": "N/A",
-       "language-names": [
-         "Finnish"
-       ],
-       "language-dialects": "written standard language",
-       "intended-use": "This dataset was developed as a benchmark for evaluating template-free, machine learning methods on Finnish news generation in the area of ice hockey reporting.",
-       "license-other": "N/A",
-       "task": "Data-to-Text",
-       "communicative": "Describe an event from an ice hockey game based on the given structural data.",
-       "language-speakers": "The original news articles are written by professional journalists. The text passages extracted in the annotation may be slightly edited compared to the original language during the corpus annotation."
-     },
-     "credit": {
-       "organization-type": [
-         "academic"
-       ],
-       "organization-names": "University of Turku",
-       "creators": "Jenna Kanerva, Samuel R\u00f6nnqvist, Riina Kekki, Tapio Salakoski, Filip Ginter (TurkuNLP / University of Turku)",
-       "funding": "The project was supported by the Google Digital News Innovation Fund.",
-       "gem-added-by": "Jenna Kanerva, Filip Ginter (TurkuNLP / University of Turku)"
-     },
-     "structure": {
-       "data-fields": "The dataset is constructed of games, where each game is a list of events. If the event was annotated (a corresponding sentence was found in the news article), it includes a `text` field with a value other than the empty string (\"\").\n\nFor each game (dict), there are keys `gem_id` (string), `id` (string), `news_article` (string), and `events` (list).\n\nFor each event (dict), there are different relevant keys available with non-empty values depending on the event type (e.g. goal or penalty). The mandatory keys for each event are `event_id` (string), `event_type` (string), `text` (string, empty string if not annotated), and `multi_reference` (bool). The keys not relevant for the specific event type are left empty.\n\nFor each event type, the following keys are relevant:\n  `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)\n  `event_type`: Type of the event, possible values are `game result`, `goal`, `penalty`, or `saves` (string)\n  `text`: Natural language description of the event, or empty string if not available (string)\n  `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)\n\nThe rest of the fields are specific to the event type. The relevant fields for each event type are:\n\ngame result:\n  `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)\n  `event_type`: Type of the event (string)\n  `home_team`: Name of the home team (string)\n  `guest_team`: Name of the guest team (string)\n  `score`: Final score of the game, in the form of home\u2013guest (string)\n  `periods`: Scores for individual periods, each in the form of home\u2013guest score in that period (list of strings)\n  `features`: Additional features, such as overtime win or shoot out (list of strings)\n  `text`: Natural language description of the event, or empty string if not available (string)\n  `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)\n\ngoal:\n  `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)\n  `event_type`: Type of the event (string)\n  `player`: Name of the player scoring (string)\n  `assist`: Names of the players assisting, at most two players (list of strings)\n  `team`: Team scoring with possible values of `home` or `guest` (string)\n  `team_name`: Name of the team scoring (string)\n  `score`: Score after the goal, in the form of home\u2013guest (string)\n  `time`: Time of the goal, minutes and seconds from the beginning (string)\n  `features`: Additional features, such as power play or short-handed goal (list of strings)\n  `text`: Natural language description of the event, or empty string if not available (string)\n  `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)\n\npenalty:\n  `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)\n  `event_type`: Type of the event (string)\n  `player`: Name of the player getting the penalty (string)\n  `team`: Team getting the penalty with possible values of `home` or `guest` (string)\n  `team_name`: Name of the team getting the penalty (string)\n  `penalty_minutes`: Penalty minutes (string)\n  `time`: Time of the penalty, minutes and seconds from the beginning (string)\n  `text`: Natural language description of the event, or empty string if not available (string)\n  `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)\n\nsaves:\n  `event_id`: Identifier of the event, unique to the game but not globally, in chronological order (string)\n  `event_type`: Type of the event (string)\n  `player`: Name of the goalkeeper (string)\n  `team`: Team of the goalkeeper with possible values of `home` or `guest` (string)\n  `team_name`: Name of the team (string)\n  `saves`: Number of saves in the game (string)\n  `text`: Natural language description of the event, or empty string if not available (string)\n  `multi_reference`: Does this event refer to a text passage describing multiple events? (bool)\n\nText passages describing multiple events (multi_reference):\n\nSome text passages refer to multiple events in such a way that separating them into individual statements is not adequate (e.g. \"The home team received two penalties towards the end of the first period.\"). In these cases, multiple events are aligned to the same text passage so that the first event (in chronological order) includes the annotated text passage, while the rest of the events referring to the same text passage include the identifier of the first event in the annotated text field (e.g. `text`: \"E4\").",
-       "structure-example": "```\n{\n  'gem_id': 'gem-turku_hockey_data2text-train-0',\n  'id': '20061031-TPS-HPK',\n  'news_article': 'HPK:n hyv\u00e4 syysvire jatkuu j\u00e4\u00e4kiekon SM-liigassa. Tiistaina HPK kukisti mainiolla liikkeell\u00e4 ja tehokkaalla ylivoimapelill\u00e4 TPS:n vieraissa 1\u20130 (1\u20130, 0\u20130, 0\u20130).\\nHPK hy\u00f6dynsi ylivoimaa mennen jo ensimm\u00e4isess\u00e4 er\u00e4ss\u00e4 Mikko M\u00e4enp\u00e4\u00e4n maalilla 1\u20130 -johtoon.\\nToisessa ja kolmannessa er\u00e4ss\u00e4 HPK tarjosi edelleen TPS:lle runsaasti tilanteita, mutta maalia eiv\u00e4t turkulaiset mill\u00e4\u00e4n ilveell\u00e4 saaneet. Pahin este oli loistavan pelin H\u00e4meenlinnan maalilla pelannut Mika Oksa.\\nTPS:n maalissa Jani Hurme ei osumille mit\u00e4\u00e4n mahtanut. Joukkueen suuri yksin\u00e4inen kentt\u00e4pelaaja oli Kai Nurminen, mutta h\u00e4nell\u00e4k\u00e4\u00e4n ei ollut onnea maalitilanteissa.',\n  'events':\n    {\n      'event_id': ['E1', 'E2', 'E3'],\n      'event_type': ['game result', 'penalty', 'goal'],\n      'text': ['HPK kukisti TPS:n vieraissa 1\u20130 (1\u20130, 0\u20130, 0\u20130).', '', 'HPK hy\u00f6dynsi ylivoimaa mennen jo ensimm\u00e4isess\u00e4 er\u00e4ss\u00e4 Mikko M\u00e4enp\u00e4\u00e4n maalilla 1\u20130 -johtoon.'],\n      'home_team': ['TPS', '', ''],\n      'guest_team': ['HPK', '', ''],\n      'score': ['0\u20131', '', '0\u20131'],\n      'periods': [['0\u20131', '0\u20130', '0\u20130'], [], []],\n      'features': [[], [], ['power play']],\n      'player': ['', 'Fredrik Svensson', 'Mikko M\u00e4enp\u00e4\u00e4'],\n      'assist': [[], [], ['Jani Kein\u00e4nen', 'Toni M\u00e4kiaho']],\n      'team': ['', 'guest', 'guest'],\n      'team_name': ['', 'HPK', 'HPK'],\n      'time': ['', '9.28', '14.57'],\n      'penalty_minutes': ['', '2', ''],\n      'saves': ['', '', ''],\n      'multi_reference': [false, false, false]\n    }\n}\n```",
-       "structure-splits": "The corpus includes 3 splits: train, validation, and test.",
-       "structure-description": ""
-     },
-     "what": {
-       "dataset": "This is a Finnish data-to-text dataset in which the input is structured information about a hockey game and the output a description of the game."
-     }
-   },
-   "curation": {
-     "original": {
-       "is-aggregated": "no",
-       "aggregated-sources": "N/A",
-       "rationale": "The dataset is designed for text generation (data2text), where the original source of natural language descriptions is news articles written by journalists. While the link between structural data (ice hockey game statistics) and the news articles describing the game was quite weak (news articles include a lot of information not derivable from the statistics, while leaving many events unmentioned), the corpus includes full manual annotation aligning the events extracted from game statistics and the corresponding natural language passages extracted from the news articles.\n\nEach event is manually aligned to a sentence-like passage, and in case a suitable passage was not found, the annotation is left empty (with value `None`). The extracted passages were manually modified not to include additional information not derivable from the game statistics, or not considered world knowledge. The manual curation of passages is designed to prevent model hallucination, i.e. the model learning to generate facts not derivable from the input data.",
-       "communicative": "Describing the given events (structural data) in natural language, and therefore generating ice hockey game reports."
-     },
-     "language": {
-       "found": [],
-       "crowdsourced": [],
-       "created": "N/A",
-       "machine-generated": "N/A",
-       "validated": "not validated",
-       "is-filtered": "algorithmically",
-       "filtered-criteria": "Include only games where both game statistics and a news article describing the game were available (based on timestamps and team names).",
-       "obtained": [
-         "Other"
-       ],
-       "producers-description": "The initial data, both game statistics and news articles, were obtained from the Finnish News Agency STT news archives released for academic use (http://urn.fi/urn:nbn:fi:lb-2019041501). The original news articles are written by professional journalists.\n\nWe (TurkuNLP) gratefully acknowledge the collaboration of Maija Paikkala, Salla Salmela and Pihla Lehmusjoki from the Finnish News Agency STT while creating the corpus.",
-       "topics": "Ice hockey, news",
-       "pre-processed": "N/A"
-     },
-     "annotations": {
-       "origin": "expert created",
-       "rater-number": "1",
-       "rater-qualifications": "Members of the TurkuNLP research group, native speakers of Finnish.",
-       "rater-training-num": "1",
-       "rater-test-num": "1",
-       "rater-annotation-service-bool": "no",
-       "rater-annotation-service": [],
-       "values": "Manual alignment of events and their natural language descriptions. Removing information not derivable from the input data or world knowledge in order to prevent model 'hallucination'.",
-       "quality-control": "validated by data curators",
-       "quality-control-details": "Manual inspection of examples during the initial annotation training phase."
-     },
-     "consent": {
-       "has-consent": "yes",
-       "consent-policy": "The corpus license was agreed with the providers of the source material.",
-       "consent-other": "",
-       "no-consent-justification": "N/A"
-     },
-     "pii": {
-       "has-pii": "yes/very likely",
-       "no-pii-justification": "N/A",
-       "is-pii-identified": "no identification",
-       "pii-identified-method": "N/A",
-       "is-pii-replaced": "N/A",
-       "pii-replaced-method": "N/A",
-       "pii-categories": [
-         "generic PII"
-       ]
-     },
-     "maintenance": {
-       "has-maintenance": "no",
-       "description": "N/A",
-       "contact": "N/A",
-       "contestation-mechanism": "N/A",
-       "contestation-link": "N/A",
-       "contestation-description": "N/A"
-     }
-   },
-   "gem": {
-     "rationale": {
-       "sole-task-dataset": "yes",
-       "distinction-description": "This is the only data2text corpus for Finnish in GEM.",
-       "sole-language-task-dataset": "yes",
-       "contribution": "The dataset was created to develop machine learned text generation models for Finnish ice hockey news, where the generation would reflect the natural language variation found in the game reports written by professional journalists. While the original game reports often include additional information not derivable from the game statistics, the corpus was fully manually curated to remove all such information from the natural language descriptions. The rationale of such curation was to prevent the model 'hallucinating' additional facts.",
-       "model-ability": "morphological inflection, language variation"
-     },
-     "curation": {
-       "has-additional-curation": "yes",
-       "modification-types": [
-         "data points modified"
-       ],
-       "modification-description": "Structural data was translated into English.",
-       "has-additional-splits": "no",
-       "additional-splits-description": "N/A",
-       "additional-splits-capacicites": "N/A"
-     },
-     "starting": {}
-   },
-   "results": {
-     "results": {
-       "other-metrics-definitions": "N/A",
-       "has-previous-results": "yes",
-       "current-evaluation": "N/A",
-       "previous-results": "N/A",
-       "original-evaluation": "Automatic evaluation: BLEU, NIST, METEOR, ROUGE-L, CIDEr\nManual evaluation: factual mistakes, grammatical errors, minimum edit distance to an acceptable game report (using WER)",
-       "metrics": [
-         "BLEU",
-         "METEOR",
-         "ROUGE",
-         "WER"
-       ]
-     }
-   },
-   "considerations": {
-     "pii": {
-       "risks-description": "None"
-     },
-     "licenses": {
-       "dataset-restrictions-other": "N/A",
-       "data-copyright-other": "N/A",
-       "dataset-restrictions": [
-         "non-commercial use only"
-       ],
-       "data-copyright": [
-         "non-commercial use only"
-       ]
-     },
-     "limitations": {}
-   },
-   "context": {
-     "previous": {
-       "is-deployed": "no",
-       "described-risks": "N/A",
-       "changes-from-observation": "N/A"
-     },
-     "underserved": {
-       "helps-underserved": "no",
-       "underserved-description": "N/A"
-     },
-     "biases": {
-       "has-biases": "no",
-       "bias-analyses": "N/A",
-       "speaker-distibution": "The dataset represents only written standard language."
-     }
-   }
- }
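The deleted data card above specifies a fixed event schema: four mandatory keys on every event, with type-specific fields filled in and all other fields left empty. A small sketch of a `goal` event following that layout (values taken from the card's own example game):

```python
# Every event carries these four keys regardless of type.
MANDATORY_KEYS = ("event_id", "event_type", "text", "multi_reference")

goal_event = {
    "event_id": "E3",
    "event_type": "goal",
    "text": "HPK hyödynsi ylivoimaa mennen jo ensimmäisessä erässä Mikko Mäenpään maalilla 1–0 -johtoon.",
    "multi_reference": False,
    # fields relevant for a goal:
    "player": "Mikko Mäenpää",
    "assist": ["Jani Keinänen", "Toni Mäkiaho"],
    "team": "guest",
    "team_name": "HPK",
    "score": "0–1",
    "time": "14.57",
    "features": ["power play"],
    # fields not relevant for a goal are left empty:
    "home_team": "",
    "guest_team": "",
    "periods": [],
    "penalty_minutes": "",
    "saves": "",
}

assert all(key in goal_event for key in MANDATORY_KEYS)
```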
turku_hockey_data2text.py DELETED
@@ -1,263 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """Data loader for the Turku Hockey Data2Text corpus"""
-
-
- import csv
- import json
- import os
- import re
-
- import datasets
-
-
- # Find for instance the citation on arxiv or on the dataset repo/website
- _CITATION = """\
- @inproceedings{kanerva2019newsgen,
-   Title = {Template-free Data-to-Text Generation of Finnish Sports News},
-   Author = {Jenna Kanerva and Samuel R{\"o}nnqvist and Riina Kekki and Tapio Salakoski and Filip Ginter},
-   booktitle = {Proceedings of the 22nd Nordic Conference on Computational Linguistics (NoDaLiDa’19)},
-   year={2019}
- }
- """
-
-
- # You can copy an official description
- _DESCRIPTION = """\
- The Turku Hockey Data2Text corpus was developed as a benchmark for evaluating template-free, machine learning methods on Finnish news generation in the area of ice hockey reporting. This dataset is a collection of 3,454 ice hockey games, each including game statistics and a news article describing the game. Each game includes manual alignment of events (such as goals or penalties) and sentences describing the specific event in natural language extracted from the news article. The corpus includes 12,827 annotated events. The natural language passages are manually curated not to include any information not derivable from the input data or world knowledge.
- """
-
- _HOMEPAGE = "https://github.com/TurkuNLP/Turku-hockey-data2text"
-
- _LICENSE = "Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0)"
-
-
- # The HuggingFace datasets library doesn't host the datasets but only points to the original files
- # This can be an arbitrary nested dict/list of URLs (see below in `_split_generators` method)
- _URLs = {
-     'train': 'train.json',
-     'validation': 'validation.json',
-     'test': 'test.json'
- }
-
-
- # relevant keys in input representation for different event types (text and event_id skipped as not being relevant for the input)
- relevant_keys = {"game result": ["event_type", "home_team", "guest_team", "score", "periods", "features"],
-                  "goal": ["event_type", "score", "features", "player", "assist", "team", "team_name", "time"],
-                  "penalty": ["event_type", "player", "team", "team_name", "time", "penalty_minutes"],
-                  "saves": ["event_type", "player", "team", "team_name", "saves"]}
-
-
- class TurkuHockeyData2Text(datasets.GeneratorBasedBuilder):
-     """The Turku Hockey Data2Text is a manually curated corpus for Finnish news generation in the area of ice hockey reporting."""
-
-     VERSION = datasets.Version("1.1.0")
-
-     # This is an example of a dataset with multiple configurations.
-     # If you don't want/need to define several sub-sets in your dataset,
-     # just remove the BUILDER_CONFIG_CLASS and the BUILDER_CONFIGS attributes.
-
-     # If you need to make complex sub-parts in the datasets with configurable options
-     # You can create your own builder configuration class to store attributes, inheriting from datasets.BuilderConfig
-     # BUILDER_CONFIG_CLASS = MyBuilderConfig
-
-     # You will be able to load one or the other configuration in the following list with
-     # data = datasets.load_dataset('my_dataset', 'first_domain')
-     # data = datasets.load_dataset('my_dataset', 'second_domain')
-     BUILDER_CONFIGS = [
-         datasets.BuilderConfig(name="event-generation", version=VERSION, description="This loads the dataset in its event generation mode, where an output description is generated for each game event separately. The full game information is not available."),
-         datasets.BuilderConfig(name="game-generation", version=VERSION, description="This loads the dataset in its game generation mode, where the full game information is provided, but the actual input–output pairs must be created by the user."),
-     ]
-
-     DEFAULT_CONFIG_NAME = "event-generation"  # It's not mandatory to have a default configuration. Just use one if it makes sense.
-
-     def _info(self):
-         # This method specifies the datasets.DatasetInfo object which contains information and typings for the dataset
-         if self.config.name == "event-generation":  # This is the name of the configuration selected in BUILDER_CONFIGS above
-             features = datasets.Features(
-                 {
-                     "gem_id": datasets.Value("string"),
-                     "game_id": datasets.Value("string"),
-                     "event_id": datasets.Value("string"),
-                     "input": datasets.Value("string"),
-                     "target": datasets.Value("string"),
-                     "references": [datasets.Value("string")]
-                 }
-             )
-
-         else:  # game-generation
-             features = datasets.Features(
-                 {
-                     "gem_id": datasets.Value("string"),
-                     "id": datasets.Value("string"),
-                     "news_article": datasets.Value("string"),
-                     "events": datasets.features.Sequence(
-                         {
-                             "event_id": datasets.Value("string"),
-                             "event_type": datasets.Value("string"),
-                             "text": datasets.Value("string"),
-                             "home_team": datasets.Value("string"),
-                             "guest_team": datasets.Value("string"),
-                             "score": datasets.Value("string"),
-                             "periods": datasets.features.Sequence(datasets.Value("string")),
-                             "features": datasets.features.Sequence(datasets.Value("string")),
-                             "player": datasets.Value("string"),
-                             "assist": datasets.features.Sequence(datasets.Value("string")),
-                             "team": datasets.Value("string"),
-                             "team_name": datasets.Value("string"),
-                             "time": datasets.Value("string"),
-                             "penalty_minutes": datasets.Value("string"),
-                             "saves": datasets.Value("string"),
-                             "multi_reference": datasets.Value("bool")
-                         }
-                     ),
-                 }
-             )
-
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=features,
-             # If there's a common (input, target) tuple from the features,
-             # specify them here. They'll be used if as_supervised=True in
-             # builder.as_dataset.
-             supervised_keys=None,
-             homepage=_HOMEPAGE,
-             license=_LICENSE,
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager):
-         """Returns SplitGenerators."""
-         # If several configurations are possible (listed in BUILDER_CONFIGS), the configuration selected by the user is in self.config.name
-
-         # dl_manager is a datasets.download.DownloadManager that can be used to download and extract URLs
-         # It can accept any type or nested list/dict and will give back the same structure with the url replaced with path to local files.
-         # By default the archives will be extracted and a path to a cached folder where they are extracted is returned instead of the archive
-
-         my_urls = _URLs
-         data_dir = dl_manager.download_and_extract(my_urls)
-         return [
-             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": data_dir["train"], "split": "train"}),
-             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": data_dir["validation"], "split": "validation"}),
-             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": data_dir["test"], "split": "test"})
-         ]
-
-     def _generate_examples(self, filepath, split):
-         """Yields examples as (key, example) tuples."""
-         # This method handles input defined in _split_generators to yield (key, example) tuples from the dataset.
-         # The `key` is here for legacy reasons (tfds) and is not important in itself.
-
-         with open(filepath, "rt", encoding="utf-8") as f:
-             data = json.load(f)
-         idx = 0
-         for i, example in enumerate(data):  # one game
-             if self.config.name == "game-generation":
-                 example["gem_id"] = f"gem-turku_hockey_data2text-{split}-{idx}"  # fill in gem_id
-                 idx += 1
-                 example = self._generate_features(example)
-                 yield idx, example
-             else:
-                 game_data = self._generate_features(example)
-                 events = self.create_event_data(game_data["events"])
-                 for e in events:
-                     e["gem_id"] = f"gem-turku_hockey_data2text-{split}-{idx}"  # fill in gem_id
-                     idx += 1
-                     e["game_id"] = game_data["id"]
-                     yield idx, e
-
-     def _generate_features(self, example):
-         _events = []
-         for i, e in enumerate(example["events"]):
-             d = {"event_id": e["event_id"],
-                  "event_type": e["event_type"],
-                  "text": e["text"] if e["text"] != None else "",
-                  "home_team": e.get("home_team", ""),
-                  "guest_team": e.get("guest_team", ""),
-                  "score": e.get("score", ""),
-                  "periods": e.get("periods", []),
-                  "features": e.get("features", []),
-                  "player": e.get("player", ""),
-                  "assist": e.get("assist", []),
-                  "team": e.get("team", ""),
-                  "team_name": e.get("team_name", ""),
-                  "time": e.get("time", ""),
-                  "penalty_minutes": e.get("penalty_minutes", ""),
-                  "saves": e.get("saves", ""),
-                  "multi_reference": self._is_multireference(i, example["events"])
-                  }
-             _events.append(d)
-         example["events"] = _events
-         return example
-
-     def _is_multireference(self, i, events):
-         """Return True if this event is one of the multireference events (multiple events refer to the same text passage).
-         Otherwise, return False.
-         """
-         if events[i]["text"] == None:
-             return False
-         multireference = re.compile("E[0-9]+")
-         if multireference.match(events[i]["text"]):
-             return True
-         # can be first of the multireference events (the one including the actual text passage)
-         multi_events = []
-         for event in events:
-             if event["text"] == None:
-                 continue
-             if multireference.match(event["text"]):
-                 multi_events.append(event["text"])
-         if events[i]["event_id"] in multi_events:
-             return True
-         return False
-
-     def create_event_data(self, events):
-         simplified_events = []
-         for i, event in enumerate(events):
-             e_idx, event_input, event_output = self.event2string(i, events)
-             if not event_input:  # skip empty annotations and multi-reference events
-                 continue
-             simplified_events.append({"gem_id": "", "game_id": "", "event_id": e_idx, "input": event_input, "target": event_output, "references": [event_output]})
-         return simplified_events
-
-     def event2string(self, i, events):
-         """Featurize i:th event into string input.
-         Example:
-           input: "event_type = saves [SEP] player = Jani Hurme [SEP] team = home [SEP] team_name = TPS [SEP] saves = 25"
-           output: "TPS:n maalissa Jani Hurme ehti 25 kiekon tielle."
-         """
-         if events[i]["text"] == "":  # skip, event is not annotated
-             return None, None, None
-         if events[i]["multi_reference"] == True:  # skip multireference events in simple representation
-             return None, None, None
-         event_type = events[i]["event_type"]  # use only relevant features for this event type
-
-         e_data = []
-         for key in relevant_keys[event_type]:
-             if isinstance(events[i][key], str) or isinstance(events[i][key], float):
-                 s = f"{key} = {events[i][key]}"
-                 e_data.append(s)
-             if isinstance(events[i][key], list) and len(events[i][key]) > 0:
-                 s = f'{key} = {" , ".join(events[i][key])}'
-                 e_data.append(s)
-         event_input = " [SEP] ".join(e_data)
-
-         e_idx = events[i]["event_id"]
-         return e_idx, event_input, events[i]["text"]
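The core of the deleted script's event-generation mode is the `event2string` linearization: the event-type-specific fields are rendered as `key = value` pairs and joined with `[SEP]`. A standalone sketch of that step (extracted from the script above; `event_to_string` is a renamed, simplified stand-in, shown here for the `saves` event type only):

```python
# Relevant input fields per event type, as in the deleted loading script.
RELEVANT_KEYS = {
    "saves": ["event_type", "player", "team", "team_name", "saves"],
}


def event_to_string(event: dict) -> str:
    """Linearize an event dict into a flat 'key = value [SEP] ...' string."""
    parts = []
    for key in RELEVANT_KEYS[event["event_type"]]:
        value = event[key]
        if isinstance(value, str):
            parts.append(f"{key} = {value}")
        elif isinstance(value, list) and value:
            parts.append(f'{key} = {" , ".join(value)}')
    return " [SEP] ".join(parts)


event = {"event_type": "saves", "player": "Jani Hurme",
         "team": "home", "team_name": "TPS", "saves": "25"}
print(event_to_string(event))
# → event_type = saves [SEP] player = Jani Hurme [SEP] team = home [SEP] team_name = TPS [SEP] saves = 25
```

The corresponding target text would be the aligned news passage, e.g. "TPS:n maalissa Jani Hurme ehti 25 kiekon tielle."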