silver committed
Commit f9b841a
1 Parent(s): 102dafe

update dataset script

README.md ADDED
@@ -0,0 +1,234 @@
---
annotations_creators:
- other
language_creators:
- other
languages:
- zh
licenses:
- mit
multilinguality:
- monolingual
paperswithcode_id: personaldialog
pretty_name: "PersonalDialog"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
---

# Dataset Card for PersonalDialog

## Table of Contents
- [Dataset Card for PersonalDialog](#dataset-card-for-personaldialog)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://www.zhengyinhe.com/datasets/
- **Repository:** https://github.com/silverriver/PersonalDilaog
- **Paper:** https://arxiv.org/abs/1901.09672

### Dataset Summary

The PersonalDialog dataset is a large-scale multi-turn Chinese dialogue dataset containing various traits from a large number of speakers.
We are releasing about 5M sessions of carefully filtered dialogues.
Each utterance in PersonalDialog is associated with a speaker marked with traits like Gender, Location, Interest Tags.

### Supported Tasks and Leaderboards

- dialogue-generation: The dataset can be used to train a model for generating dialogue responses.
- response-retrieval: The dataset can be used to train a reranker for a retrieval-based dialogue model.

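For the dialogue-generation task, a session can be unrolled into (context, response) training pairs. A minimal sketch; the helper name `context_response_pairs` is ours, not part of the dataset or its tooling:

```python
# Unroll one PersonalDialog session into (context, response) pairs:
# every utterance after the first becomes a target once, with all
# preceding utterances as its context.
def context_response_pairs(dialog):
    return [(dialog[:i], dialog[i]) for i in range(1, len(dialog))]

# The session below is the `train` example shown in this card.
dialog = ["那么 晚", "加班 了 刚 到 家 呀 !", "吃饭 了 么", "吃 过 了 !"]
pairs = context_response_pairs(dialog)
print(len(pairs))  # 3
```
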
### Languages

The dialogues in PersonalDialog are in Chinese.

## Dataset Structure

### Data Instances

`train` split:

```json
{
  "dialog": ["那么 晚", "加班 了 刚 到 家 呀 !", "吃饭 了 么", "吃 过 了 !"],
  "profile": [
    {
      "tag": ["间歇性神经病", "爱笑的疯子", "他们说我犀利", "爱做梦", "自由", "旅游", "学生", "双子座", "好性格"],
      "loc": "福建 厦门", "gender": "male"
    }, {
      "tag": ["设计师", "健康养生", "热爱生活", "善良", "宅", "音樂", "时尚"],
      "loc": "山东 济南", "gender": "male"
    }
  ],
  "uid": [0, 1, 0, 1]
}
```

`dev` and `test` split:

```json
{
  "dialog": ["没 人性 啊 !", "可以 来 组织 啊", "来 上海 陪姐 打 ?"],
  "profile": [
    {"tag": [""], "loc": "上海 浦东新区", "gender": "female"},
    {"tag": ["嘉庚", "keele", "leicester", "UK", "泉州五中"], "loc": "福建 泉州", "gender": "male"}
  ],
  "uid": [0, 1, 0],
  "responder_profile": {"tag": ["嘉庚", "keele", "leicester", "UK", "泉州五中"], "loc": "福建 泉州", "gender": "male"},
  "golden_response": "吴经理 派车来 小 泉州 接 么 ?",
  "is_biased": true
}
```

### Data Fields

- `dialog` (list of strings): List of utterances making up the dialogue.
- `profile` (list of dicts): List of profiles, one per speaker.
  - `tag` (list of strings): List of tags associated with the speaker.
  - `loc` (string): Location of the speaker.
  - `gender` (string): Gender of the speaker.
- `uid` (list of int): Speaker id for each utterance in the dialogue, indexing into `profile`.
- `responder_profile` (dict): Profile of the responder. (Only available in the `dev` and `test` splits)
- `golden_response` (string): Reference response of the responder. (Only available in the `dev` and `test` splits)
- `is_biased` (bool): Whether the dialogue is guaranteed to be persona related. (Only available in the `dev` and `test` splits)

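Because `uid` indexes into `profile`, each utterance can be joined with its speaker's traits. A small sketch using an abridged version of the `train` instance above (field names as defined in this card):

```python
# Pair every utterance with its speaker's location via the `uid` index.
example = {
    "dialog": ["那么 晚", "加班 了 刚 到 家 呀 !", "吃饭 了 么", "吃 过 了 !"],
    "profile": [
        {"tag": ["学生", "双子座"], "loc": "福建 厦门", "gender": "male"},
        {"tag": ["设计师", "宅"], "loc": "山东 济南", "gender": "male"},
    ],
    "uid": [0, 1, 0, 1],
}

turns = [
    (utt, example["profile"][uid]["loc"])
    for utt, uid in zip(example["dialog"], example["uid"])
]
print(turns[1])  # ('加班 了 刚 到 家 呀 !', '山东 济南')
```
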
### Data Splits

|     train |  valid |   test |
|----------:|-------:|-------:|
| 5,438,165 | 10,521 | 10,523 |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

MIT License

Copyright (c) 2019 silver

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

### Citation Information

```bibtex
@article{zheng2019personalized,
  title = {Personalized dialogue generation with diversified traits},
  author = {Zheng, Yinhe and Chen, Guanyi and Huang, Minlie and Liu, Song and Zhu, Xuan},
  journal = {arXiv preprint arXiv:1901.09672},
  year = {2019}
}

@inproceedings{zheng2020pre,
  title = {A pre-training based personalized dialogue generation model with persona-sparse data},
  author = {Zheng, Yinhe and Zhang, Rongsheng and Huang, Minlie and Mao, Xiaoxi},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  volume = {34},
  number = {05},
  pages = {9693--9700},
  year = {2020}
}
```

### Contributions

Thanks to [Yinhe Zheng](https://github.com/silverriver) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"default": {"description": "The PersonalDialog dataset is a large-scale multi-turn Chinese dialogue dataset containing various traits from a large number of speakers. \nWe are releasing about 5M sessions of carefully filtered dialogues.\nEach utterance in PersonalDialog is associated with a speaker marked with traits like Gender, Location, Interest Tags. \n", "citation": "@article{zheng2019personalized,\n  title = {Personalized dialogue generation with diversified traits},\n  author = {Zheng, Yinhe and Chen, Guanyi and Huang, Minlie and Liu, Song and Zhu, Xuan},\n  journal = {arXiv preprint arXiv:1901.09672},\n  year = {2019}\n}\n\n@inproceedings{zheng2020pre,\n  title = {A pre-training based personalized dialogue generation model with persona-sparse data},\n  author = {Zheng, Yinhe and Zhang, Rongsheng and Huang, Minlie and Mao, Xiaoxi},\n  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},\n  volume = {34},\n  number = {05},\n  pages = {9693--9700},\n  year = {2020}\n}\n", "homepage": "https://github.com/silverriver/PersonalDilaog", "license": "MIT", "features": {"dialog": [{"dtype": "string", "id": null, "_type": "Value"}], "profile": [{"tag": [{"dtype": "string", "id": null, "_type": "Value"}], "loc": {"dtype": "string", "id": null, "_type": "Value"}, "gender": {"dtype": "string", "id": null, "_type": "Value"}}], "uid": [{"dtype": "int32", "id": null, "_type": "Value"}], "responder_profile": {"tag": [{"dtype": "string", "id": null, "_type": "Value"}], "loc": {"dtype": "string", "id": null, "_type": "Value"}, "gender": {"dtype": "string", "id": null, "_type": "Value"}}, "golden_response": {"dtype": "string", "id": null, "_type": "Value"}, "is_biased": {"dtype": "bool", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "personal_dialog", "config_name": "default", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1659592284, "num_examples": 5438165, "dataset_name": "personal_dialog"}, "validation": {"name": "validation", "num_bytes": 5395032, "num_examples": 10521, "dataset_name": "personal_dialog"}, "test": {"name": "test", "num_bytes": 5412543, "num_examples": 10523, "dataset_name": "personal_dialog"}}, "download_checksums": {"https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dialogues_train.jsonl.gz": {"num_bytes": 558585860, "checksum": "9af400265fda0e7adc9c11a04d343a1b6214a95f07b2911c61cb41f37740195e"}, "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_biased.jsonl.gz": {"num_bytes": 53463, "checksum": "f911aa17eaf8fabc4a093b77949779498d0008480c18320e559eb4df9b97e43f"}, "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz": {"num_bytes": 1634800, "checksum": "0bcbb157125a522c68a0a73d7cfe0e518c5a12c2870bee470543a726cdf48d7f"}, "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/test_biased.jsonl.gz": {"num_bytes": 52719, "checksum": "8da28b2aad57b226390410c48485b9c58faddde80aaa0f68a845f7d4797eac84"}, "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/test_random.jsonl.gz": {"num_bytes": 1639580, "checksum": "b57746e599c888a025c81a870e49971f34d983de424861b6c07586f0c0eec330"}}, "download_size": 561966422, "post_processing_size": null, "dataset_size": 1670399859, "size_in_bytes": 2232366281}}
dummy/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39b603fbd7ef383f422739b8c1260bb3e945c3a1083ea4c2afcfafceb5dbfe60
size 5974
dummy/1.0.0/dummy_data.zip.lock ADDED
File without changes
personal_dialog.py ADDED
@@ -0,0 +1,175 @@
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
The PersonalDialog dataset is a large-scale multi-turn Chinese dialogue dataset containing various traits from a large number of speakers.
We are releasing about 5M sessions of carefully filtered dialogues.
Each utterance in PersonalDialog is associated with a speaker marked with traits like Gender, Location, Interest Tags.
"""

import json

import datasets


_CITATION = """\
@article{zheng2019personalized,
  title = {Personalized dialogue generation with diversified traits},
  author = {Zheng, Yinhe and Chen, Guanyi and Huang, Minlie and Liu, Song and Zhu, Xuan},
  journal = {arXiv preprint arXiv:1901.09672},
  year = {2019}
}

@inproceedings{zheng2020pre,
  title = {A pre-training based personalized dialogue generation model with persona-sparse data},
  author = {Zheng, Yinhe and Zhang, Rongsheng and Huang, Minlie and Mao, Xiaoxi},
  booktitle = {Proceedings of the AAAI Conference on Artificial Intelligence},
  volume = {34},
  number = {05},
  pages = {9693--9700},
  year = {2020}
}
"""

_DESCRIPTION = """\
The PersonalDialog dataset is a large-scale multi-turn Chinese dialogue dataset containing various traits from a large number of speakers.
We are releasing about 5M sessions of carefully filtered dialogues.
Each utterance in PersonalDialog is associated with a speaker marked with traits like Gender, Location, Interest Tags.
"""

_HOMEPAGE = "https://github.com/silverriver/PersonalDilaog"

_LICENSE = "MIT"

_URLS = {
    "train": "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dialogues_train.jsonl.gz",
    "valid": [
        "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_biased.jsonl.gz",
        "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/dev_random.jsonl.gz",
    ],
    "test": [
        "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/test_biased.jsonl.gz",
        "https://huggingface.co/datasets/silver/personal_dialog/resolve/main/test_random.jsonl.gz",
    ],
}


class PersonalDialog(datasets.GeneratorBasedBuilder):
    """Chinese Dialogues with Personal Traits."""

    VERSION = datasets.Version("1.0.0")

    def _info(self):
        features = datasets.Features(
            {
                "dialog": [datasets.Value("string")],
                "profile": [
                    {
                        "tag": [datasets.Value("string")],
                        "loc": datasets.Value("string"),
                        "gender": datasets.Value("string"),
                    }
                ],
                "uid": [datasets.Value("int32")],
                "responder_profile": {
                    "tag": [datasets.Value("string")],
                    "loc": datasets.Value("string"),
                    "gender": datasets.Value("string"),
                },
                "golden_response": datasets.Value("string"),
                "is_biased": datasets.Value("bool"),
            }
        )
        return datasets.DatasetInfo(
            # This is the description that will appear on the datasets page.
            description=_DESCRIPTION,
            # This defines the different columns of the dataset and their types.
            features=features,
            # If there's a common (input, target) tuple from the features, uncomment the
            # supervised_keys line below and specify them. They'll be used if
            # as_supervised=True in builder.as_dataset.
            # supervised_keys=("sentence", "label"),
            # Homepage of the dataset for documentation.
            homepage=_HOMEPAGE,
            # License for the dataset if available.
            license=_LICENSE,
            # Citation for the dataset.
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URLS)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"data_files": [data_dir["train"]], "split": "train"},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                # Biased file first, then random: `_generate_examples` relies on this order.
                gen_kwargs={"data_files": data_dir["valid"], "split": "valid"},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={"data_files": data_dir["test"], "split": "test"},
            ),
        ]

    # Method parameters are unpacked from `gen_kwargs` as given in `_split_generators`.
    def _generate_examples(self, data_files, split):
        idx = 0  # running example key; `idx` avoids shadowing the builtin `id`
        for file_i, data_file in enumerate(data_files):
            with open(data_file, encoding="utf-8") as f:
                for line in f:
                    line = line.strip()
                    if not line:
                        continue
                    line = json.loads(line)

                    # Tags are stored as a single ";"-joined string; split them into a list.
                    profile = [
                        {"tag": p["tag"][0].split(";"), "loc": p["loc"], "gender": p["gender"]}
                        for p in line["profile"]
                    ]
                    # Each utterance is wrapped in a single-element list; unwrap it.
                    dialog = [u[0] for u in line["dialog"]]

                    if split == "train":
                        yield idx, {
                            "dialog": dialog,
                            "profile": profile,
                            "uid": line["uid"],
                            "responder_profile": None,
                            "golden_response": None,
                            "is_biased": None,
                        }
                    else:
                        yield idx, {
                            "dialog": dialog,
                            "profile": profile,
                            "uid": line["uid"],
                            "responder_profile": {
                                "tag": line["responder_profile"]["tag"][0].split(";"),
                                "loc": line["responder_profile"]["loc"],
                                "gender": line["responder_profile"]["gender"],
                            },
                            "golden_response": line["golden_response"][0],
                            # The first file in each split list is the biased subset.
                            "is_biased": file_i == 0,
                        }
                    idx += 1
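To illustrate the reshaping done in `_generate_examples`: in the raw JSONL files each utterance (and golden response) arrives wrapped in a single-element list, and `tag` holds one `;`-joined string. A sketch on a synthetic record whose layout is assumed from the script above:

```python
import json

# Synthetic raw line in the assumed on-disk layout (not real dataset content).
raw_line = json.dumps({
    "dialog": [["你 好"], ["你 好 呀"]],
    "profile": [{"tag": ["旅游;学生"], "loc": "福建 厦门", "gender": "male"}],
    "uid": [0, 0],
})

line = json.loads(raw_line)
# Split the ";"-joined tag string into a list, as the script does.
profile = [
    {"tag": p["tag"][0].split(";"), "loc": p["loc"], "gender": p["gender"]}
    for p in line["profile"]
]
# Unwrap each single-element utterance list.
dialog = [u[0] for u in line["dialog"]]
print(profile[0]["tag"])  # ['旅游', '学生']
print(dialog)             # ['你 好', '你 好 呀']
```
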