Commit 1567450 (1 parent: 90161bb), committed by silver

update more info

Files changed (2):
1. README.md +154 -1
2. lccc.py +168 -0
README.md CHANGED
@@ -1,3 +1,156 @@
  ---
- license: mit
+ annotations_creators:
+ - other
+ language_creators:
+ - other
+ languages:
+ - zh
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ pretty_name:
+ - lccc
+ size_categories:
+ - 10M<n<100M
+ source_datasets:
+ - original
+ task_categories:
+ - conversational
+ task_ids:
+ - dialogue-generation
+ - dialogue-response-retrieval
  ---
+
+ # Dataset Card for lccc_large
+
+ ## Table of Contents
+ - [Dataset Card for lccc_large](#dataset-card-for-lccc_large)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+
+ ## Dataset Description
+
+ - **Homepage:** https://github.com/thu-coai/CDial-GPT
+ - **Repository:** https://github.com/thu-coai/CDial-GPT
+ - **Paper:** https://arxiv.org/abs/2008.03946
+
+ ### Dataset Summary
+
+ LCCC (Large-scale Cleaned Chinese Conversation corpus) is a large Chinese dialogue corpus originating from Chinese social media. A rigorous data cleaning pipeline is designed to ensure the quality of the corpus. This pipeline involves a set of rules and several classifier-based filters. Noise such as offensive or sensitive words, special symbols, emojis, grammatically incorrect sentences, and incoherent conversations is filtered out.
+
+ lccc是一套来自于中文社交媒体的对话数据,我们设计了一套严格的数据过滤流程来确保该数据集中对话数据的质量。 这一数据过滤流程中包括一系列手工规则以及若干基于机器学习算法所构建的分类器。 我们所过滤掉的噪声包括:脏字脏词、特殊字符、颜表情、语法不通的语句、上下文不相关的对话等。
+
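+ The cleaning tools themselves are not released with this card. Purely as an illustration of what one rule-based pass over a dialogue might look like, here is a small sketch; the word list, length bounds, and emoji pattern are hypothetical placeholders, not the authors' actual filters:
+
+ ```python
+ import re
+
+ # Hypothetical placeholders; the real pipeline combines curated word lists with trained classifiers.
+ BLOCKED_WORDS = {"some_offensive_word"}
+ EMOJI_PATTERN = re.compile(r"[\U0001F300-\U0001FAFF]")  # rough emoji codepoint range
+
+ def keep_dialog(utterances, min_chars=2, max_chars=200):
+     """Return True if every utterance passes a few simple rule-based checks."""
+     for utt in utterances:
+         if not (min_chars <= len(utt) <= max_chars):
+             return False
+         if any(word in utt for word in BLOCKED_WORDS):
+             return False
+         if EMOJI_PATTERN.search(utt):
+             return False
+     return True
+ ```
+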
+ ### Supported Tasks and Leaderboards
+
+ - dialogue-generation: The dataset can be used to train a model for generating dialogue responses.
+ - response-retrieval: The dataset can be used to train a reranker for retrieval-based dialogue systems (see the loading example below).
+
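+ For either task, the corpus can be loaded through the `datasets` library; the configuration names `"large"` and `"base"` below follow the loading script `lccc.py` added in this commit:
+
+ ```python
+ from datasets import load_dataset
+
+ # "base" provides train/validation/test splits; "large" exposes a single train split.
+ lccc_base = load_dataset("silver/lccc", "base")
+ print(lccc_base)
+ print(lccc_base["train"][0]["dialog"])  # a list of utterance strings
+ ```
+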
+ ### Languages
+
+ LCCC is in Chinese.
+
+ LCCC中的对话是中文的
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ["火锅 我 在 重庆 成都 吃 了 七八 顿 火锅", "哈哈哈哈 ! 那 我 的 嘴巴 可能 要 烂掉 !", "不会 的 就是 好 油腻"]
+
+ ### Data Fields
+
+ Each line is a list of utterances that together constitute a dialogue.
+ Note that the LCCC dataset provided on our original GitHub page is in JSON format;
+ here, however, LCCC is provided in JSONL format.
+
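+ If you work with the raw `.jsonl.gz` files in this repository directly, each line can be parsed on its own. A minimal sketch, assuming one of the files has already been downloaded locally:
+
+ ```python
+ import gzip
+ import json
+
+ # Assumes lccc_base_valid.jsonl.gz has been downloaded to the working directory.
+ with gzip.open("lccc_base_valid.jsonl.gz", "rt", encoding="utf-8") as f:
+     for line in f:
+         line = line.strip()
+         if not line:
+             continue
+         dialog = json.loads(line)  # a list of utterance strings
+         print(dialog)
+         break
+ ```
+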
+ ### Data Splits
+
+ We do not provide an official split for LCCC-large,
+ but we do provide a split for LCCC-base:
+
+ |train|valid|test|
+ |:---:|:---:|:---:|
+ |6,820,506|20,000|10,000|
+
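+ Since LCCC-large is exposed as a single `train` split, users who need held-out data can create their own. The split ratio and seed below are arbitrary illustrative choices, not an official recommendation:
+
+ ```python
+ from datasets import load_dataset
+
+ lccc_large = load_dataset("silver/lccc", "large")
+ # Hold out 0.1% of the dialogues for validation (arbitrary ratio, fixed seed for reproducibility).
+ split = lccc_large["train"].train_test_split(test_size=0.001, seed=42)
+ train_set, valid_set = split["train"], split["test"]
+ ```
+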
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ [Needs More Information]
+
+ ### Citation Information
+
+ [Needs More Information]
lccc.py ADDED
@@ -0,0 +1,168 @@
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """
+ LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.
+ A rigorous data cleaning pipeline is designed to ensure the quality of the corpus.
+ This pipeline involves a set of rules and several classifier-based filters.
+ Noise such as offensive or sensitive words, special symbols, emojis,
+ grammatically incorrect sentences, and incoherent conversations is filtered out.
+ """
+
+ import json
+
+ import datasets
+
+
+ # BibTeX citation
+ _CITATION = """\
+ @inproceedings{wang2020chinese,
+   title={A Large-Scale Chinese Short-Text Conversation Dataset},
+   author={Wang, Yida and Ke, Pei and Zheng, Yinhe and Huang, Kaili and Jiang, Yong and Zhu, Xiaoyan and Huang, Minlie},
+   booktitle={NLPCC},
+   year={2020},
+   url={https://arxiv.org/abs/2008.03946}
+ }
+ """
+
+ # Description of the dataset
+ _DESCRIPTION = """\
+ LCCC: Large-scale Cleaned Chinese Conversation corpus (LCCC) is a large corpus of Chinese conversations.
+ A rigorous data cleaning pipeline is designed to ensure the quality of the corpus.
+ This pipeline involves a set of rules and several classifier-based filters.
+ Noise such as offensive or sensitive words, special symbols, emojis,
+ grammatically incorrect sentences, and incoherent conversations is filtered out.
+ """
+
+ _HOMEPAGE = "https://github.com/thu-coai/CDial-GPT"
+ _LICENSE = "MIT"
+
+ # Data files hosted in this dataset repository.
+ # "large" ships as a single file; "base" comes with an official train/valid/test split.
+ _URLS = {
+     "large": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_large.jsonl.gz",
+     "base": {
+         "train": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_train.jsonl.gz",
+         "valid": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_valid.jsonl.gz",
+         "test": "https://huggingface.co/datasets/silver/lccc/resolve/main/lccc_base_test.jsonl.gz",
+     },
+ }
+
+
+ class LCCC(datasets.GeneratorBasedBuilder):
+     """Large-scale Cleaned Chinese Conversation corpus."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     # Two configurations are available:
+     #   data = datasets.load_dataset("silver/lccc", "large")
+     #   data = datasets.load_dataset("silver/lccc", "base")
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="large", version=VERSION, description="The large version of LCCC"),
+         datasets.BuilderConfig(name="base", version=VERSION, description="The base version of LCCC"),
+     ]
+
+     def _info(self):
+         # Each example is one dialogue: a list of utterance strings.
+         features = datasets.Features(
+             {
+                 "dialog": datasets.Sequence(datasets.Value("string")),
+             }
+         )
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         # Download and decompress the files for the selected configuration.
+         urls = _URLS[self.config.name]
+         downloaded_data = dl_manager.download_and_extract(urls)
+         if self.config.name == "large":
+             # LCCC-large has no official split, so everything is exposed as "train".
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "filepath": downloaded_data,
+                         "split": "train",
+                     },
+                 )
+             ]
+         if self.config.name == "base":
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     # These kwargs are passed to _generate_examples
+                     gen_kwargs={
+                         "filepath": downloaded_data["train"],
+                         "split": "train",
+                     },
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={
+                         "filepath": downloaded_data["test"],
+                         "split": "test",
+                     },
+                 ),
+                 datasets.SplitGenerator(
+                     name=datasets.Split.VALIDATION,
+                     gen_kwargs={
+                         "filepath": downloaded_data["valid"],
+                         "split": "dev",
+                     },
+                 ),
+             ]
+
+     def _generate_examples(self, filepath, split):
+         # Each non-empty line of the JSONL file is a JSON-encoded list of utterances.
+         with open(filepath, encoding="utf-8") as f:
+             for key, row in enumerate(f):
+                 row = row.strip()
+                 if not row:
+                     continue
+                 yield key, {
+                     "dialog": json.loads(row),
+                 }