---
annotations_creators:
- found
language_creators:
- expert-generated
- machine-generated
language:
- cs
license:
- cc-by-4.0
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
source_datasets:
- extended|other-san-francisco-restaurants
task_categories:
- text2text-generation
- text-generation
- fill-mask
task_ids:
- dialogue-modeling
- language-modeling
- masked-language-modeling
paperswithcode_id: czech-restaurant-information
pretty_name: Czech Restaurant
tags:
- intent-to-text
dataset_info:
  features:
  - name: dialogue_act
    dtype: string
  - name: delexicalized_dialogue_act
    dtype: string
  - name: text
    dtype: string
  - name: delexicalized_text
    dtype: string
  config_name: CSRestaurants
  splits:
  - name: train
    num_bytes: 654071
    num_examples: 3569
  - name: validation
    num_bytes: 181528
    num_examples: 781
  - name: test
    num_bytes: 191334
    num_examples: 842
  download_size: 1463019
  dataset_size: 1026933
---

# Dataset Card for Czech Restaurant

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Repository:** [Czech restaurants homepage](https://github.com/UFAL-DSG/cs_restaurant_dataset)
- **Paper:** [Czech restaurants on Arxiv](https://arxiv.org/abs/1910.05298)

### Dataset Summary

This is a dataset for NLG in task-oriented spoken dialogue systems with Czech as the target language. It originated as a translation of the [English San Francisco Restaurants dataset](https://www.repository.cam.ac.uk/handle/1810/251304) by Wen et al. (2015). The domain is restaurant information in Prague, with random/fictional values. It includes input dialogue acts and the corresponding outputs in Czech.
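
For quick inspection, here is a minimal loading sketch using the `datasets` library. The Hub identifier `cs_restaurants` is an assumption inferred from this card's metadata (config name `CSRestaurants`) and should be verified on the Hub before use.

```python
# Minimal loading sketch; the Hub identifier "cs_restaurants" is assumed.
from datasets import load_dataset

dataset = load_dataset("cs_restaurants")

print(dataset)              # DatasetDict with train / validation / test splits
print(dataset["train"][0])  # one example with the fields listed below
```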

### Supported Tasks and Leaderboards

- `other-intent-to-text`: The dataset can be used to train a model for data-to-text generation: given a desired dialogue act, the model must produce textual output that conveys this intent.

### Languages

The entire dataset is in Czech, translated from the English San Francisco dataset by professional translators.

## Dataset Structure

### Data Instances

Example of a data instance:

```
{
  "dialogue_act": "?request(area)",
  "delexicalized_dialogue_act": "?request(area)",
  "text": "Jakou lokalitu hledáte ?",
  "delexicalized_text": "Jakou lokalitu hledáte ?"
}
```

### Data Fields

- `dialogue_act`: input dialogue act
- `delexicalized_dialogue_act`: input dialogue act, delexicalized
- `text`: output text
- `delexicalized_text`: output text, delexicalized
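
To connect this schema to the intent-to-text task described above, the sketch below builds (dialogue act, text) pairs of the kind one might feed to a sequence-to-sequence model; it reuses the loading assumption from the summary section.

```python
from datasets import load_dataset

dataset = load_dataset("cs_restaurants")  # Hub identifier assumed, see above


def to_pair(example):
    # Source: the lexicalized dialogue act; target: the Czech output text.
    # Using the delexicalized fields instead gives the delexicalized variant
    # of the task.
    return {"source": example["dialogue_act"], "target": example["text"]}


pairs = dataset["train"].map(to_pair, remove_columns=dataset["train"].column_names)
print(pairs[0])
```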

### Data Splits

The order of the instances is random, and the split between training, development, and test sets is roughly 3:1:1. The sections do not share the same DAs, so the generators need to generalize to unseen DAs, but they share as many DA types as possible (e.g., confirm, inform_only_match). DA types that have only a single corresponding DA (e.g., bye()) are included in the training set.

The training, development, and test sets contain 3569, 781, and 842 instances, respectively.
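
A small sanity-check sketch for the property described above (test-set dialogue acts unseen during training), again under the same loading assumption:

```python
from datasets import load_dataset

dataset = load_dataset("cs_restaurants")  # Hub identifier assumed

train_das = set(dataset["train"]["dialogue_act"])
test_das = set(dataset["test"]["dialogue_act"])

print(len(dataset["train"]), len(dataset["validation"]), len(dataset["test"]))
print("DAs shared between train and test:", len(train_das & test_das))
```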

## Dataset Creation

### Curation Rationale

While most current neural NLG systems do not explicitly contain language-specific components and are thus capable of multilingual generation in principle, there has been little work to test these capabilities experimentally. This goes hand in hand with the scarcity of non-English training datasets for NLG – the only data-to-text NLG dataset known to us is a small sportscasting Korean dataset (Chen et al., 2010), which only contains a limited number of named entities, reducing the need for their inflection. Since most generators are only tested on English, they do not need to handle grammar complexities not present in English. A prime example is the delexicalization technique used by most current generators. We create a novel dataset for Czech delexicalized generation; this extends the typical task of data-to-text NLG by requiring attribute value inflection. We choose Czech as an example of a morphologically complex language with a large set of NLP tools readily available.
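
To make the delexicalization idea concrete, here is a toy, purely illustrative sketch (not the authors' actual preprocessing, and the quoted slot-value DA format is assumed): slot values from the dialogue act are replaced by placeholders in both the act and the text. For Czech, naive string replacement fails whenever the surface form is inflected, which is exactly the difficulty this dataset is meant to expose.

```python
import re


def delexicalize(da, text):
    # Toy delexicalization: replace each slot value found in the DA with a
    # placeholder in both the DA and the text. Real Czech data also requires
    # matching inflected forms of the values, which plain replacement misses.
    delex_da, delex_text = da, text
    for slot, value in re.findall(r"(\w+)='([^']*)'", da):
        placeholder = f"X-{slot}"
        delex_da = delex_da.replace(f"'{value}'", f"'{placeholder}'")
        delex_text = delex_text.replace(value, placeholder)
    return delex_da, delex_text


da = "inform(name='U Vejvodů',area='centrum')"
text = "Restaurace U Vejvodů se nachází v centru ."
print(delexicalize(da, text))
# The name is replaced, but 'centrum' appears inflected as 'centru' in the
# text, so the naive replacement leaves it untouched, illustrating the
# inflection problem discussed above.
```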

### Source Data

#### Initial Data Collection and Normalization

The original data was collected from the [English San Francisco Restaurants dataset](https://www.repository.cam.ac.uk/handle/1810/251304) by Wen et al. (2015).

#### Who are the source language producers?

The original data was produced in interactions between Amazon Mechanical Turk workers and themed around San Francisco restaurants. This data was then translated into Czech and localized to Prague restaurants by professional translators.

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

This data does not contain personal information.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

Ondřej Dušek, Filip Jurčíček, Josef Dvořák, Petra Grycová, Matěj Hejda, Jana Olivová, Michal Starý, Eva Štichová, Charles University. This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 333, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2015071).

### Licensing Information

[Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/)

### Citation Information

```
@article{DBLP:journals/corr/abs-1910-05298,
  author    = {Ondrej Dusek and
               Filip Jurcicek},
  title     = {Neural Generation for Czech: Data and Baselines},
  journal   = {CoRR},
  volume    = {abs/1910.05298},
  year      = {2019},
  url       = {http://arxiv.org/abs/1910.05298},
  archivePrefix = {arXiv},
  eprint    = {1910.05298},
  timestamp = {Wed, 16 Oct 2019 16:25:53 +0200},
  biburl    = {https://dblp.org/rec/journals/corr/abs-1910-05298.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}
```

### Contributions

Thanks to [@TevenLeScao](https://github.com/TevenLeScao) for adding this dataset.