parquet-converter committed
Commit 5ecd4ed
1 Parent(s): 668eb95

Update parquet files
.gitattributes DELETED
@@ -1,27 +0,0 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bin.* filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zstandard filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
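For context, the deleted `.gitattributes` told Git LFS which file patterns to store as pointer files rather than raw blobs. A minimal sketch of that matching, using Python's `fnmatch` as a rough stand-in for gitattributes glob rules (real gitattributes matching has its own semantics for path separators and `**`; the pattern list here is abbreviated from the diff above):

```python
# Sketch: approximate the deleted .gitattributes LFS patterns with fnmatch.
# Illustration only: abbreviated pattern list, simplified glob semantics.
from fnmatch import fnmatch

LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.parquet", "*.zip", "*tfevents*"]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if the filename matches any LFS-tracked pattern."""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(tracked_by_lfs("daily_dialog-train.parquet"))  # True
print(tracked_by_lfs("README.md"))                   # False
```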
README.md DELETED
@@ -1,220 +0,0 @@
- ---
- paperswithcode_id: dailydialog
- annotations_creators:
- - expert-generated
- language_creators:
- - found
- language:
- - en
- license:
- - cc-by-nc-sa-4.0
- multilinguality:
- - monolingual
- size_categories:
- - 10K<n<100K
- source_datasets:
- - original
- task_categories:
- - text-classification
- task_ids:
- - multi-label-classification
- pretty_name: DailyDialog
- tags:
- - emotion-classification
- - dialog-act-classification
- dataset_info:
-   features:
-   - name: dialog
-     sequence: string
-   - name: act
-     sequence:
-       class_label:
-         names:
-           0: __dummy__
-           1: inform
-           2: question
-           3: directive
-           4: commissive
-   - name: emotion
-     sequence:
-       class_label:
-         names:
-           0: no emotion
-           1: anger
-           2: disgust
-           3: fear
-           4: happiness
-           5: sadness
-           6: surprise
-   splits:
-   - name: train
-     num_bytes: 7296715
-     num_examples: 11118
-   - name: test
-     num_bytes: 655844
-     num_examples: 1000
-   - name: validation
-     num_bytes: 673943
-     num_examples: 1000
-   download_size: 4475921
-   dataset_size: 8626502
- ---
-
- # Dataset Card for "daily_dialog"
-
- ## Table of Contents
- - [Dataset Description](#dataset-description)
-   - [Dataset Summary](#dataset-summary)
-   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
-   - [Languages](#languages)
- - [Dataset Structure](#dataset-structure)
-   - [Data Instances](#data-instances)
-   - [Data Fields](#data-fields)
-   - [Data Splits](#data-splits)
- - [Dataset Creation](#dataset-creation)
-   - [Curation Rationale](#curation-rationale)
-   - [Source Data](#source-data)
-   - [Annotations](#annotations)
-   - [Personal and Sensitive Information](#personal-and-sensitive-information)
- - [Considerations for Using the Data](#considerations-for-using-the-data)
-   - [Social Impact of Dataset](#social-impact-of-dataset)
-   - [Discussion of Biases](#discussion-of-biases)
-   - [Other Known Limitations](#other-known-limitations)
- - [Additional Information](#additional-information)
-   - [Dataset Curators](#dataset-curators)
-   - [Licensing Information](#licensing-information)
-   - [Citation Information](#citation-information)
-   - [Contributions](#contributions)
-
- ## Dataset Description
-
- - **Homepage:** [http://yanran.li/dailydialog](http://yanran.li/dailydialog)
- - **Repository:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Paper:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
- - **Size of downloaded dataset files:** 4.27 MB
- - **Size of the generated dataset:** 8.23 MB
- - **Total amount of disk used:** 12.50 MB
-
- ### Dataset Summary
-
- We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects.
- The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way
- and cover various topics about our daily life. We also manually label the developed dataset with communication
- intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it
- benefit the research field of dialog systems.
-
- ### Supported Tasks and Leaderboards
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Languages
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Dataset Structure
-
- ### Data Instances
-
- #### default
-
- - **Size of downloaded dataset files:** 4.27 MB
- - **Size of the generated dataset:** 8.23 MB
- - **Total amount of disk used:** 12.50 MB
-
- An example of 'validation' looks as follows.
- ```
- This example was too long and was cropped:
-
- {
-     "act": [2, 1, 1, 1, 1, 2, 3, 2, 3, 4],
-     "dialog": "[\"Good afternoon . This is Michelle Li speaking , calling on behalf of IBA . Is Mr Meng available at all ? \", \" This is Mr Meng ...",
-     "emotion": [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
- }
- ```
-
- ### Data Fields
-
- The data fields are the same among all splits.
-
- #### default
- - `dialog`: a `list` of `string` features.
- - `act`: a `list` of classification labels, with possible values including `__dummy__` (0), `inform` (1), `question` (2), `directive` (3), `commissive` (4).
- - `emotion`: a `list` of classification labels, with possible values including `no emotion` (0), `anger` (1), `disgust` (2), `fear` (3), `happiness` (4).
-
- ### Data Splits
-
- | name |train|validation|test|
- |-------|----:|---------:|---:|
- |default|11118| 1000|1000|
-
- ## Dataset Creation
-
- ### Curation Rationale
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Source Data
-
- #### Initial Data Collection and Normalization
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the source language producers?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Annotations
-
- #### Annotation process
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- #### Who are the annotators?
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Personal and Sensitive Information
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ## Considerations for Using the Data
-
- ### Social Impact of Dataset
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Discussion of Biases
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Other Known Limitations
-
- Dataset provided for research purposes only. Please check dataset license for additional information.
-
- ## Additional Information
-
- ### Dataset Curators
-
- [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
-
- ### Licensing Information
-
- DailyDialog dataset is licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/).
-
- ### Citation Information
-
- ```
- @InProceedings{li2017dailydialog,
-     author = {Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi},
-     title = {DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset},
-     booktitle = {Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)},
-     year = {2017}
- }
-
- ```
-
-
- ### Contributions
-
- Thanks to [@thomwolf](https://github.com/thomwolf), [@julien-c](https://github.com/julien-c) for adding this dataset.
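To make the deleted card's field description concrete, here is a small sketch (not part of the original card) that decodes the integer class ids from the cropped validation example back to the label names listed under Data Fields:

```python
# Sketch: decode act/emotion class ids using the label lists from the card.
ACT_NAMES = ["__dummy__", "inform", "question", "directive", "commissive"]
EMOTION_NAMES = ["no emotion", "anger", "disgust", "fear",
                 "happiness", "sadness", "surprise"]

def decode(ids, names):
    """Map a sequence of integer class ids to their string names."""
    return [names[i] for i in ids]

# The cropped validation example shown in the card:
act = [2, 1, 1, 1, 1, 2, 3, 2, 3, 4]
emotion = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

print(decode(act, ACT_NAMES)[:2])           # ['question', 'inform']
print(set(decode(emotion, EMOTION_NAMES)))  # {'no emotion'}
```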
daily_dialog.py DELETED
@@ -1,122 +0,0 @@
- # coding=utf-8
- # Copyright 2020 The HuggingFace Datasets Authors
- #
- # Licensed under the Apache License, Version 2.0 (the "License");
- # you may not use this file except in compliance with the License.
- # You may obtain a copy of the License at
- #
- #     http://www.apache.org/licenses/LICENSE-2.0
- #
- # Unless required by applicable law or agreed to in writing, software
- # distributed under the License is distributed on an "AS IS" BASIS,
- # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- # See the License for the specific language governing permissions and
- # limitations under the License.
- """DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset"""
-
-
- import os
- from zipfile import ZipFile
-
- import datasets
-
-
- _CITATION = """\
- @InProceedings{li2017dailydialog,
-     author = {Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi},
-     title = {DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset},
-     booktitle = {Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)},
-     year = {2017}
- }
- """
-
- _DESCRIPTION = """\
- We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects.
- The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way
- and cover various topics about our daily life. We also manually label the developed dataset with communication
- intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it
- benefit the research field of dialog systems.
- """
-
- _URL = "http://yanran.li/files/ijcnlp_dailydialog.zip"
-
- act_label = {
-     "0": "__dummy__",  # Added to be compatible out-of-the-box with datasets.ClassLabel
-     "1": "inform",
-     "2": "question",
-     "3": "directive",
-     "4": "commissive",
- }
-
- emotion_label = {
-     "0": "no emotion",
-     "1": "anger",
-     "2": "disgust",
-     "3": "fear",
-     "4": "happiness",
-     "5": "sadness",
-     "6": "surprise",
- }
-
-
- class DailyDialog(datasets.GeneratorBasedBuilder):
-     """DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset"""
-
-     VERSION = datasets.Version("1.0.0")
-
-     __EOU__ = "__eou__"
-
-     def _info(self):
-         return datasets.DatasetInfo(
-             description=_DESCRIPTION,
-             features=datasets.Features(
-                 {
-                     "dialog": datasets.features.Sequence(datasets.Value("string")),
-                     "act": datasets.features.Sequence(datasets.ClassLabel(names=list(act_label.values()))),
-                     "emotion": datasets.features.Sequence(datasets.ClassLabel(names=list(emotion_label.values()))),
-                 }
-             ),
-             supervised_keys=None,
-             homepage="http://yanran.li/dailydialog",
-             citation=_CITATION,
-         )
-
-     def _split_generators(self, dl_manager: datasets.DownloadManager):
-         dl_dir = dl_manager.download_and_extract(_URL)
-         data_dir = os.path.join(dl_dir, "ijcnlp_dailydialog")
-         splits = [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]
-         return [
-             datasets.SplitGenerator(
-                 name=split,
-                 gen_kwargs={
-                     "data_zip": os.path.join(data_dir, f"{split}.zip"),
-                     "dialog_path": f"{split}/dialogues_{split}.txt",
-                     "act_path": f"{split}/dialogues_act_{split}.txt",
-                     "emotion_path": f"{split}/dialogues_emotion_{split}.txt",
-                 },
-             )
-             for split in splits
-         ]
-
-     def _generate_examples(self, data_zip, dialog_path, act_path, emotion_path):
-         with open(data_zip, "rb") as data_file:
-             with ZipFile(data_file) as zip_file:
-                 with zip_file.open(dialog_path) as dialog_file, zip_file.open(act_path) as act_file, zip_file.open(
-                     emotion_path
-                 ) as emotion_file:
-                     for idx, (dialog_line, act_line, emotion_line) in enumerate(
-                         zip(dialog_file, act_file, emotion_file)
-                     ):
-                         if not dialog_line.strip():
-                             break
-                         dialog = dialog_line.decode().split(self.__EOU__)[:-1]
-                         act = act_line.decode().split(" ")[:-1]
-                         emotion = emotion_line.decode().split(" ")[:-1]
-                         assert (
-                             len(dialog) == len(act) == len(emotion)
-                         ), "Different turns btw dialogue & emotion & action"
-                         yield idx, {
-                             "dialog": dialog,
-                             "act": [act_label[x] for x in act],
-                             "emotion": [emotion_label[x] for x in emotion],
-                         }
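The parsing step in the deleted `_generate_examples` hinges on trailing separators: every utterance in a raw dialog line ends with `__eou__`, and the act/emotion lines are space-separated ids with a trailing separator, which is why each split is followed by `[:-1]`. A standalone sketch of that logic, using a made-up dialog line in the raw file format:

```python
# Sketch: the turn-splitting logic from _generate_examples, shown standalone.
# Trailing "__eou__" / trailing space leave an empty final element after
# split(), so [:-1] drops it; the lengths must then agree turn-for-turn.
EOU = "__eou__"

dialog_line = "Good afternoon . __eou__ This is Mr Meng . __eou__\n"
act_line = "2 1 \n"

dialog = dialog_line.split(EOU)[:-1]
act = act_line.split(" ")[:-1]

assert len(dialog) == len(act), "Different turns btw dialogue & action"
print(len(dialog))  # 2
```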
dataset_infos.json DELETED
@@ -1 +0,0 @@
- {"default": {"description": "We develop a high-quality multi-turn dialog dataset, DailyDialog, which is intriguing in several aspects. \nThe language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way \nand cover various topics about our daily life. We also manually label the developed dataset with communication \nintention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it \nbenefit the research field of dialog systems.\n", "citation": "@InProceedings{li2017dailydialog,\n    author = {Li, Yanran and Su, Hui and Shen, Xiaoyu and Li, Wenjie and Cao, Ziqiang and Niu, Shuzi},\n    title = {DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset},\n    booktitle = {Proceedings of The 8th International Joint Conference on Natural Language Processing (IJCNLP 2017)},\n    year = {2017}\n}\n", "homepage": "http://yanran.li/dailydialog", "license": "cc-by-nc-sa-4.0", "features": {"dialog": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "act": {"feature": {"num_classes": 5, "names": ["__dummy__", "inform", "question", "directive", "commissive"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "emotion": {"feature": {"num_classes": 7, "names": ["no emotion", "anger", "disgust", "fear", "happiness", "sadness", "surprise"], "names_file": null, "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}}, "post_processed": {"features": null, "resources_checksums": {"train": {}, "test": {}, "validation": {}}}, "supervised_keys": null, "builder_name": "daily_dialog", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "datasets_version_to_prepare": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 7296715, "num_examples": 11118, "dataset_name": "daily_dialog"}, "test": {"name": "test", "num_bytes": 655844, "num_examples": 1000, "dataset_name": "daily_dialog"}, "validation": {"name": "validation", "num_bytes": 673943, "num_examples": 1000, "dataset_name": "daily_dialog"}}, "download_checksums": {"http://yanran.li/files/ijcnlp_dailydialog.zip": {"num_bytes": 4475921, "checksum": "c641e88cbf21fd7c1b57289387f9107d33fe8685a2b37fe8066b82776535ea89"}}, "download_size": 4475921, "post_processing_size": 0, "dataset_size": 8626502, "size_in_bytes": 13102423}}
default/daily_dialog-test.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:de9eb8ef1cac0dd839e1fd62c513a4e4dc1289604da4bd2a09f8883d57e3853a
+ size 331437
default/daily_dialog-train.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:47a57a2bcb7cd90321897cd0e1fe704ebdaa3775844e7bfbd9f3a15c718356e9
+ size 3607813
default/daily_dialog-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:ebba3ea4a58565577058a55e751c0ea5742f057508ef0bdc47d2acc83181f8dd
+ size 334208
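Note that the ADDED files above are Git LFS pointer files, not the parquet bytes themselves: each is three `key value` lines identifying the real object by hash and size. A sketch of parsing one such pointer, using the test-split pointer text from this diff:

```python
# Sketch: parse a Git LFS pointer file into its key/value fields.
# The pointer text is copied from the daily_dialog-test.parquet entry above.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:de9eb8ef1cac0dd839e1fd62c513a4e4dc1289604da4bd2a09f8883d57e3853a
size 331437
"""

# Each line is "key value"; split on the first space only.
fields = dict(line.split(" ", 1) for line in POINTER.splitlines())

print(fields["oid"][:7])    # 'sha256:'
print(int(fields["size"]))  # 331437 bytes in the real parquet object
```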