system (HF staff) committed
Commit dd131c1
0 Parent(s):

Update files from the datasets library (from 1.2.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.2.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,161 @@
+ ---
+ annotations_creators:
+ - found
+ language_creators:
+ - expert-generated
+ - machine-generated
+ languages:
+ - cs
+ licenses:
+ - cc-by-sa-4-0
+ multilinguality:
+ - monolingual
+ size_categories:
+ - 1K<n<10K
+ source_datasets:
+ - extended|other-san-francisco-restaurants
+ task_categories:
+ - conditional-text-generation
+ - sequence-modeling
+ task_ids:
+ - dialogue-modeling
+ - language-modeling
+ - other-structured-to-text
+ ---
+
+ # Dataset Card for the Czech Restaurant Information Dataset (cs_restaurants)
+
+ ## Table of Contents
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Repository:** [Czech restaurant dataset repository](https://github.com/UFAL-DSG/cs_restaurant_dataset)
+ - **Paper:** [Neural Generation for Czech: Data and Baselines (arXiv)](https://arxiv.org/abs/1910.05298)
+
+ ### Dataset Summary
+
+ This is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the [English San Francisco Restaurants dataset](https://www.repository.cam.ac.uk/handle/1810/251304) by Wen et al. (2015). The domain is restaurant information in Prague, with random/fictional values. It includes input dialogue acts and the corresponding outputs in Czech.
+
+ ### Supported Tasks and Leaderboards
+
+ - `other-structured-to-text`: The dataset can be used to train a model for data-to-text generation: given an input dialogue act, the model must produce a textual output that conveys the same intent.
+
+ ### Languages
+
+ The entire dataset is in Czech, translated from the English San Francisco Restaurants dataset by professional translators.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ Example of a data instance:
+
+ ```
+ {
+   "da": "?request(area)",
+   "delex_da": "?request(area)",
+   "text": "Jakou lokalitu hledáte ?",
+   "delex_text": "Jakou lokalitu hledáte ?"
+ }
+ ```
+
+ ### Data Fields
+
+ - `da`: input dialogue act
+ - `delex_da`: input dialogue act, delexicalized
+ - `text`: output text
+ - `delex_text`: output text, delexicalized
+
+ ### Data Splits
+
+ The instances are in random order. The data is split roughly 3:1:1 between training, development, and test sets, so that the sections do not share any identical DAs (generators therefore need to generalize to unseen DAs) while sharing as many different generic DA types as possible (e.g., `confirm`, `inform_only_match`). DA types that only have a single corresponding DA (e.g., `bye()`) are included in the training set.
+
+ The training, development, and test sets contain 3,569, 781, and 842 instances, respectively.
+
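+ As a convenience, here is a minimal sketch of loading the data with the Hugging Face `datasets` library and turning it into (input, output) pairs for a data-to-text model. The dataset identifier `cs_restaurants` is assumed to be the name under which this loading script is registered on the Hub:
+
+ ```python
+ from datasets import load_dataset
+
+ # Download and prepare the three splits defined by the loading script.
+ dataset = load_dataset("cs_restaurants")
+ print(dataset)  # train / validation / test with 3569 / 781 / 842 examples
+
+ # Build (dialogue act, text) pairs, e.g. as source/target for a seq2seq model.
+ pairs = [(ex["da"], ex["text"]) for ex in dataset["train"]]
+ print(pairs[0])
+ ```
+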
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ While most current neural NLG systems do not explicitly contain language-specific components and are thus capable of multilingual generation in principle, there has been little work testing these capabilities experimentally. This goes hand in hand with the scarcity of non-English training datasets for NLG: the only non-English data-to-text NLG set known to us is a small Korean sportscasting dataset (Chen et al., 2010), which contains only a limited number of named entities, reducing the need for their inflection. Since most generators are only tested on English, they do not need to handle grammatical complexities that are absent from English. A prime example is the delexicalization technique used by most current generators. We create a novel dataset for Czech delexicalized generation; this extends the typical data-to-text NLG task by requiring inflection of attribute values. We chose Czech as an example of a morphologically complex language with a large set of NLP tools readily available.
+
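+ To illustrate what delexicalization means here, the sketch below replaces concrete slot values in a dialogue act with placeholders. The DA string and the Czech sentence are hypothetical examples written in the style of the dataset, not taken from it, and the placeholder convention is simplified:
+
+ ```python
+ import re
+
+ # Hypothetical lexicalized dialogue act and a Czech realization of it.
+ da = "inform(name='Café Savoy',food='French')"
+ text = "Café Savoy podává francouzskou kuchyni ."
+
+ # Delexicalize the DA: drop the concrete slot values, keep the slot structure.
+ delex_da = re.sub(r"'[^']*'", "X", da)
+ print(delex_da)  # inform(name=X,food=X)
+
+ # In the released data, the delex_text field applies the same idea to the
+ # sentence (slot values replaced by placeholders); a generator must re-insert
+ # and correctly inflect the values when producing the final Czech output.
+ ```
+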
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ The original data was collected from the [English San Francisco Restaurants dataset](https://www.repository.cam.ac.uk/handle/1810/251304) by Wen et al. (2015).
+
+ #### Who are the source language producers?
+
+ The original data was produced in interactions with Amazon Mechanical Turk workers, themed around San Francisco restaurants. This data was then translated into Czech and localized to Prague restaurants by professional translators.
+
+ ### Annotations
+
+ No annotations.
+
+ ### Personal and Sensitive Information
+
+ This data does not contain personal information.
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ Ondřej Dušek, Filip Jurčíček, Josef Dvořák, Petra Grycová, Matěj Hejda, Jana Olivová, Michal Starý, and Eva Štichová, Charles University. This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under grant agreement LK11221 and core research funding, SVV project 260 333, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).
+
+ ### Licensing Information
+
+ [Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/)
+
+ ### Citation Information
+
+ ```
+ @article{DBLP:journals/corr/abs-1910-05298,
+   author    = {Ondrej Dusek and
+                Filip Jurcicek},
+   title     = {Neural Generation for Czech: Data and Baselines},
+   journal   = {CoRR},
+   volume    = {abs/1910.05298},
+   year      = {2019},
+   url       = {http://arxiv.org/abs/1910.05298},
+   archivePrefix = {arXiv},
+   eprint    = {1910.05298},
+   timestamp = {Wed, 16 Oct 2019 16:25:53 +0200},
+   biburl    = {https://dblp.org/rec/journals/corr/abs-1910-05298.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ ```
cs_restaurants.py ADDED
@@ -0,0 +1,103 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Czech restaurant information dataset for NLG"""
+
+ from __future__ import absolute_import, division, print_function
+
+ import json
+
+ import datasets
+
+
+ # Find for instance the citation on arxiv or on the dataset repo/website
+ _CITATION = """\
+ @article{DBLP:journals/corr/abs-1910-05298,
+   author    = {Ondrej Dusek and
+                Filip Jurcicek},
+   title     = {Neural Generation for Czech: Data and Baselines},
+   journal   = {CoRR},
+   volume    = {abs/1910.05298},
+   year      = {2019},
+   url       = {http://arxiv.org/abs/1910.05298},
+   archivePrefix = {arXiv},
+   eprint    = {1910.05298},
+   timestamp = {Wed, 16 Oct 2019 16:25:53 +0200},
+   biburl    = {https://dblp.org/rec/journals/corr/abs-1910-05298.bib},
+   bibsource = {dblp computer science bibliography, https://dblp.org}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as
+ a translation of the English San Francisco Restaurants dataset by Wen et al. (2015).
+ """
+
+ _LICENSE = "Creative Commons 4.0 BY-SA"
+
+ _URLs = {
+     "CSRestaurants": "https://raw.githubusercontent.com/UFAL-DSG/cs_restaurant_dataset/master/",
+ }
+
+
+ class CSRestaurants(datasets.GeneratorBasedBuilder):
+     """Czech restaurant information dataset for NLG"""
+
+     VERSION = datasets.Version("1.0.0")
+     BUILDER_CONFIGS = [datasets.BuilderConfig(name="CSRestaurants", description="NLG data for Czech")]
+     DEFAULT_CONFIG_NAME = "CSRestaurants"
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "da": datasets.Value("string"),
+                 "delex_da": datasets.Value("string"),
+                 "text": datasets.Value("string"),
+                 "delex_text": datasets.Value("string"),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage="https://github.com/UFAL-DSG/cs_restaurant_dataset",
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         master_url = _URLs[self.config.name]
+         train_path = dl_manager.download_and_extract(master_url + "train.json")
+         valid_path = dl_manager.download_and_extract(master_url + "devel.json")
+         test_path = dl_manager.download_and_extract(master_url + "test.json")
+
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": train_path}),
+             datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": valid_path}),
+             datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": test_path}),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+
+         with open(filepath, encoding="utf8") as f:
+             data = json.load(f)
+             for id_, instance in enumerate(data):
+                 yield id_, {
+                     "da": instance["da"],
+                     "delex_da": instance["delex_da"],
+                     "text": instance["text"],
+                     "delex_text": instance["delex_text"],
+                 }
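For reference, a minimal sketch of the JSON layout that `_generate_examples` above expects from `train.json`, `devel.json`, and `test.json`: a JSON array of objects with the four string fields read by the loader. The record reuses the instance shown in the dataset card; writing a local file here is only for illustration:

```python
import json

# One record in the shape the loader reads (fields: da, delex_da, text, delex_text).
sample = [
    {
        "da": "?request(area)",
        "delex_da": "?request(area)",
        "text": "Jakou lokalitu hledáte ?",
        "delex_text": "Jakou lokalitu hledáte ?",
    }
]

with open("train.json", "w", encoding="utf8") as f:
    json.dump(sample, f, ensure_ascii=False, indent=2)

# Reproduce the loader's parsing step on the file we just wrote.
with open("train.json", encoding="utf8") as f:
    for id_, instance in enumerate(json.load(f)):
        print(id_, instance["da"], "->", instance["text"])
```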
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"CSRestaurants": {"description": "This is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as \na translation of the English San Francisco Restaurants dataset by Wen et al. (2015).\n", "citation": "@article{DBLP:journals/corr/abs-1910-05298,\n author = {Ondrej Dusek and\n Filip Jurc{'{\\i}}cek},\n title = {Neural Generation for Czech: Data and Baselines},\n journal = {CoRR},\n volume = {abs/1910.05298},\n year = {2019},\n url = {http://arxiv.org/abs/1910.05298},\n archivePrefix = {arXiv},\n eprint = {1910.05298},\n timestamp = {Wed, 16 Oct 2019 16:25:53 +0200},\n biburl = {https://dblp.org/rec/journals/corr/abs-1910-05298.bib},\n bibsource = {dblp computer science bibliography, https://dblp.org}\n}\n", "homepage": "https://github.com/UFAL-DSG/cs_restaurant_dataset", "license": "Creative Commons 4.0 BY-SA", "features": {"dialogue_act": {"dtype": "string", "id": null, "_type": "Value"}, "delexicalized_dialogue_act": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}, "delexicalized_text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "builder_name": "cs_restaurants", "config_name": "CSRestaurants", "version": "0.0.0", "splits": {"train": {"name": "train", "num_bytes": 654071, "num_examples": 3569, "dataset_name": "cs_restaurants"}, "validation": {"name": "validation", "num_bytes": 181528, "num_examples": 781, "dataset_name": "cs_restaurants"}, "test": {"name": "test", "num_bytes": 191334, "num_examples": 842, "dataset_name": "cs_restaurants"}}, "download_checksums": {"https://raw.githubusercontent.com/UFAL-DSG/cs_restaurant_dataset/master/train.json": {"num_bytes": 953853, "checksum": "4dc46649dd44d4fb0c32ac56211ba1c5409b366129102a62b28a3a67cec4a2e7"}, "https://raw.githubusercontent.com/UFAL-DSG/cs_restaurant_dataset/master/devel.json": {"num_bytes": 247118, "checksum": "433cbcf069fbf1254b2be33d0ec799c55b46d06cc1d84ae19db758301fbe3adf"}, "https://raw.githubusercontent.com/UFAL-DSG/cs_restaurant_dataset/master/test.json": {"num_bytes": 262048, "checksum": "0af728246699009f9d3702386c7a2b4db0318697ffb5333f088b393eb33d03a2"}}, "download_size": 1463019, "post_processing_size": null, "dataset_size": 1026933, "size_in_bytes": 2489952}}
dummy/CSRestaurants/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:58d520a12fb57f40c452a0ece70b62e1a785f90d2dc2d55562d5ed5fdaecc35b
+ size 1027