Quentin Lhoest committed
Commit 8b1f8cf
1 Parent(s): 4bb5df9

Release: 2.3.0

Commit from https://github.com/huggingface/datasets/commit/c82d4c4d8d1124e7fe8ec3549a7c6b1ed1343010

README.md ADDED
@@ -0,0 +1,189 @@
---
annotations_creators:
- other
language_creators:
- other
languages:
- zh
licenses:
- mit
multilinguality:
- monolingual
paperswithcode_id: lccc
pretty_name: "LCCC: Large-scale Cleaned Chinese Conversation corpus"
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- conversational
task_ids:
- dialogue-generation
---

# Dataset Card for LCCC

## Table of Contents
- [Dataset Card for LCCC](#dataset-card-for-lccc)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Source Data](#source-data)
      - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
      - [Who are the source language producers?](#who-are-the-source-language-producers)
    - [Annotations](#annotations)
      - [Annotation process](#annotation-process)
      - [Who are the annotators?](#who-are-the-annotators)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
  - [Considerations for Using the Data](#considerations-for-using-the-data)
    - [Social Impact of Dataset](#social-impact-of-dataset)
    - [Discussion of Biases](#discussion-of-biases)
    - [Other Known Limitations](#other-known-limitations)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

## Dataset Description

- **Repository:** https://github.com/thu-coai/CDial-GPT
- **Paper:** https://arxiv.org/abs/2008.03946

### Dataset Summary

LCCC (Large-scale Cleaned Chinese Conversation corpus) is a large Chinese dialogue corpus collected from Chinese social media. A rigorous data-cleaning pipeline was designed to ensure the quality of the corpus. The pipeline combines a set of hand-crafted rules with several classifier-based filters, removing noise such as offensive or sensitive words, special symbols, emoticons, grammatically incorrect sentences, and incoherent conversations.

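As a rough illustration of the rule-based stage of such a cleaning pipeline, consider the sketch below. The patterns and word list are placeholders, not the authors' actual rules, and the classifier-based filters are omitted:

```python
import re

# Illustrative stand-ins only; LCCC's real rule set is far more extensive.
OFFENSIVE_WORDS = {"脏话"}  # placeholder blacklist
SPECIAL_SYMBOLS = re.compile(r"[□■◆●★☆]+")
EMOTICON = re.compile(r"[\U0001F300-\U0001FAFF]")  # approximate emoji range

def keep_utterance(text: str) -> bool:
    """Return True if an utterance passes the rule-based filters."""
    if any(word in text for word in OFFENSIVE_WORDS):
        return False
    if SPECIAL_SYMBOLS.search(text) or EMOTICON.search(text):
        return False
    return True

def keep_dialog(dialog: list[str]) -> bool:
    """A dialog is kept only if every one of its utterances passes."""
    return all(keep_utterance(u) for u in dialog)
```
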
### Supported Tasks and Leaderboards

- dialogue-generation: The dataset can be used to train a model for generating dialogue responses (see the sketch below).
- response-retrieval: The dataset can be used to train a reranker that implements a retrieval-based dialogue model.

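For dialogue generation, a common setup (not prescribed by the dataset itself) treats all but the last turn of a dialog as context and the last turn as the target response; a minimal sketch using the `dialog` field:

```python
def to_generation_example(dialog: list[str]) -> dict:
    """Split a multi-turn dialog into (context, response) for response generation."""
    return {"context": dialog[:-1], "response": dialog[-1]}

# Using the instance shown under "Data Instances" below:
example = to_generation_example(
    ["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅",
     "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !",
     "不会 的 就是 好 油腻"]
)
print(example["response"])  # "不会 的 就是 好 油腻"
```
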
### Languages

The dialogues in LCCC are in Chinese.

## Dataset Structure

### Data Instances

```json
{
  "dialog": ["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !", "不会 的 就是 好 油腻"]
}
```

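For reference, the corpus can be loaded with the `datasets` library; the repository id `silver/lccc` and the config names `base` and `large` below are taken from this commit's download URLs and builder configs:

```python
from datasets import load_dataset

# "base" has train/validation/test splits; "large" only has a train split.
lccc_base = load_dataset("silver/lccc", "base")

print(lccc_base)                        # split names and sizes
print(lccc_base["train"][0]["dialog"])  # a list of utterance strings
```
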
### Data Fields

- `dialog` (list of strings): the utterances making up a dialogue, in order. As in the instance above, utterances are pre-tokenized, with tokens separated by single spaces.

### Data Splits

We do not provide an official split for LCCC-large, but we provide one for LCCC-base:

|     train | valid  |  test  |
|----------:|-------:|-------:|
| 6,820,506 | 20,000 | 10,000 |

## Dataset Creation

### Curation Rationale

[Needs More Information]

### Source Data

#### Initial Data Collection and Normalization

[Needs More Information]

#### Who are the source language producers?

[Needs More Information]

### Annotations

#### Annotation process

[Needs More Information]

#### Who are the annotators?

[Needs More Information]

### Personal and Sensitive Information

[Needs More Information]

## Considerations for Using the Data

### Social Impact of Dataset

[Needs More Information]

### Discussion of Biases

[Needs More Information]

### Other Known Limitations

[Needs More Information]

## Additional Information

### Dataset Curators

[Needs More Information]

### Licensing Information

MIT License

Copyright (c) 2020 lemon234071

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

### Citation Information

```bibtex
@inproceedings{wang2020chinese,
  title={A Large-Scale Chinese Short-Text Conversation Dataset},
  author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
  booktitle={NLPCC},
  year={2020},
  url={https://arxiv.org/abs/2008.03946}
}
```

### Contributions

Thanks to [Yinhe Zheng](https://github.com/silverriver) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
{"large": {"description": "LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.\nA rigorous data cleaning pipeline is designed to ensure the quality of the corpus.\nThis pipeline involves a set of rules and several classifier-based filters.\nNoises such as offensive or sensitive words, special symbols, emojis,\ngrammatically incorrect sentences, and incoherent conversations are filtered.\n", "citation": "@inproceedings{wang2020chinese,\ntitle={A Large-Scale Chinese Short-Text Conversation Dataset},\nauthor={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},\nbooktitle={NLPCC},\nyear={2020},\nurl={https://arxiv.org/abs/2008.03946}\n}\n", "homepage": "https://github.com/thu-coai/CDial-GPT", "license": "MIT", "features": {"dialog": [{"dtype": "string", "id": null, "_type": "Value"}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "lccc", "config_name": "large", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 1530827965, "num_examples": 12007759, "dataset_name": "lccc"}}, "download_checksums": {"https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_large.jsonl.gz": {"num_bytes": 607605643, "checksum": "0eaf3b39e1f54c414c3c75a8319f89c8a98b4bc6f91913b051a0b849e7d3326f"}}, "download_size": 607605643, "post_processing_size": null, "dataset_size": 1530827965, "size_in_bytes": 2138433608}, "base": {"description": "LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.\nA rigorous data cleaning pipeline is designed to ensure the quality of the corpus.\nThis pipeline involves a set of rules and several classifier-based filters.\nNoises such as offensive or sensitive words, special symbols, emojis,\ngrammatically incorrect sentences, and incoherent conversations are filtered.\n", "citation": "@inproceedings{wang2020chinese,\ntitle={A Large-Scale Chinese Short-Text Conversation Dataset},\nauthor={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},\nbooktitle={NLPCC},\nyear={2020},\nurl={https://arxiv.org/abs/2008.03946}\n}\n", "homepage": "https://github.com/thu-coai/CDial-GPT", "license": "MIT", "features": {"dialog": [{"dtype": "string", "id": null, "_type": "Value"}]}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "lccc", "config_name": "base", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 932634902, "num_examples": 6820506, "dataset_name": "lccc"}, "test": {"name": "test", "num_bytes": 1498216, "num_examples": 10000, "dataset_name": "lccc"}, "validation": {"name": "validation", "num_bytes": 2922731, "num_examples": 20000, "dataset_name": "lccc"}}, "download_checksums": {"https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_train.jsonl.gz": {"num_bytes": 369854377, "checksum": "2162e0ed923fba62329cabf7e1493fbe59248afc94a62508e4abdea61e624627"}, "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_valid.jsonl.gz": {"num_bytes": 1071594, "checksum": "5cc27e7ac3447c5a31386178f82ff01cab56e27827445ef8d429809301491759"}, "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_test.jsonl.gz": {"num_bytes": 549124, "checksum": "cf8757587bdb8f360cc94fc38baadf9e185bad65a26155527a8430c048676016"}}, "download_size": 371475095, "post_processing_size": null, "dataset_size": 937055849, "size_in_bytes": 1308530944}}
dummy/base/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4b55cad4bfdd78c371ec57503cec463e24ce1a37f60040cb8f5082c6e0d84fde
size 2100
dummy/large/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:87ec878e0941fa2af39af9ae57f74bea745ab0bb87ab3d7d1b943d22c6a1b833
size 723
lccc.py ADDED
@@ -0,0 +1,132 @@
# Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.
A rigorous data cleaning pipeline is designed to ensure the quality of the corpus.
This pipeline involves a set of rules and several classifier-based filters.
Noises such as offensive or sensitive words, special symbols, emojis,
grammatically incorrect sentences, and incoherent conversations are filtered.
"""

import json

import datasets


# BibTeX citation
_CITATION = """\
@inproceedings{wang2020chinese,
title={A Large-Scale Chinese Short-Text Conversation Dataset},
author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
booktitle={NLPCC},
year={2020},
url={https://arxiv.org/abs/2008.03946}
}
"""

# Description of the dataset
_DESCRIPTION = """\
LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.
A rigorous data cleaning pipeline is designed to ensure the quality of the corpus.
This pipeline involves a set of rules and several classifier-based filters.
Noises such as offensive or sensitive words, special symbols, emojis,
grammatically incorrect sentences, and incoherent conversations are filtered.
"""
47
+
48
+ _HOMEPAGE = "https://github.com/thu-coai/CDial-GPT"
49
+ _LICENSE = "MIT"
50
+ _URLS = {
51
+ "large": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_large.jsonl.gz",
52
+ "base": {
53
+ "train": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_train.jsonl.gz",
54
+ "valid": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_valid.jsonl.gz",
55
+ "test": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_test.jsonl.gz",
56
+ },
57
+ }
58
+
59
+
60
+ class LCCC(datasets.GeneratorBasedBuilder):
61
+ """Large-scale Cleaned Chinese Conversation corpus."""
62
+
63
+ VERSION = datasets.Version("1.0.0")
64
+
65
+ BUILDER_CONFIGS = [
66
+ datasets.BuilderConfig(name="large", version=VERSION, description="The large version of LCCC"),
67
+ datasets.BuilderConfig(name="base", version=VERSION, description="The base version of LCCC"),
68
+ ]
69
+
70
+ def _info(self):
71
+ features = datasets.Features(
72
+ {
73
+ "dialog": [datasets.Value("string")],
74
+ }
75
+ )
76
+ return datasets.DatasetInfo(
77
+ # This is the description that will appear on the datasets page.
78
+ description=_DESCRIPTION,
79
+ # This defines the different columns of the dataset and their types
80
+ features=features, # Here we define them above because they are different between the two configurations
81
+ # If there's a common (input, target) tuple from the features, uncomment supervised_keys line below and
82
+ # specify them. They'll be used if as_supervised=True in builder.as_dataset.
83
+ # supervised_keys=("sentence", "label"),
84
+ # Homepage of the dataset for documentation
85
+ homepage=_HOMEPAGE,
86
+ # License for the dataset if available
87
+ license=_LICENSE,
88
+ # Citation for the dataset
89
+ citation=_CITATION,
90
+ )
91
+
92
+ def _split_generators(self, dl_manager):
93
+ urls = _URLS[self.config.name]
94
+ downloaded_data = dl_manager.download_and_extract(urls)
95
+ if self.config.name == "large":
96
+ return [
97
+ datasets.SplitGenerator(
98
+ name=datasets.Split.TRAIN,
99
+ gen_kwargs={
100
+ "filepath": os.path.join(downloaded_data),
101
+ },
102
+ )
103
+ ]
104
+ elif self.config.name == "base":
105
+ return [
106
+ datasets.SplitGenerator(
107
+ name=datasets.Split.TRAIN,
108
+ gen_kwargs={
109
+ "filepath": os.path.join(downloaded_data["train"]),
110
+ },
111
+ ),
112
+ datasets.SplitGenerator(
113
+ name=datasets.Split.TEST,
114
+ gen_kwargs={"filepath": os.path.join(downloaded_data["test"])},
115
+ ),
116
+ datasets.SplitGenerator(
117
+ name=datasets.Split.VALIDATION,
118
+ gen_kwargs={
119
+ "filepath": os.path.join(downloaded_data["valid"]),
120
+ },
121
+ ),
122
+ ]
123
+
    # method parameters are unpacked from `gen_kwargs` as given in `_split_generators`
    def _generate_examples(self, filepath):
        # Each line of the .jsonl file is a JSON-encoded list of utterance strings.
        with open(filepath, encoding="utf-8") as f:
            for key, row in enumerate(f):
                row = row.strip()
                if row:
                    yield key, {
                        "dialog": json.loads(row),
                    }
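
Complementing the hub-based loading shown in the README above, a script like this can also be exercised directly from a local path before pushing; a minimal sketch (the relative path is illustrative):

```python
from datasets import load_dataset

# Point load_dataset at the local script; the config name selects "base" or "large".
ds = load_dataset("./lccc.py", "base")
print(ds["validation"][0]["dialog"])
```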