CASIA-LM committed
Commit c7e2592
1 Parent(s): 3597950

Upload 2 files
README.md ADDED
@@ -0,0 +1,232 @@
# ChineseWebText: Large-Scale High-quality Chinese Web Text Extracted with Effective Evaluation Model

This repository contains the ChineseWebText dataset and the EvalWeb tool-chain for processing CommonCrawl data. Our ChineseWebText dataset is publicly available on Hugging Face.

# ChineseWebText

- ### Dataset Overview

We release the latest and largest Chinese dataset **ChineseWebText**, which consists of **1.42 TB** of data (see Table 1). Each text is assigned a quality score, allowing LLM researchers to select data according to a desired quality threshold. We also release a much cleaner subset of **600 GB** of Chinese texts with quality exceeding **90%**.

<img src="./pictures/Overview_of_output_datasets.png" style="zoom:50%;" />

- ### Data Example

```json
{
  "title": "潍坊银行2021年上半年净利润同比增长29.57% 不良率降至1.10%_财经_中国网",
  "score": 0.95,
  "text": "潍坊银行2021年上半年净利润同比增长29.57% 不良率降至1.10%\n中国网财经8月24日讯 潍坊银行昨日披露2021年二季度信息报告显示,截至2021 年6月末,潍坊银行资产总额1920.44亿元,较上年末增长9.34%;负债总额1789.16亿元,较上年末增长10.54%。2021年上半年,潍坊银行实现净利润 6.09亿元,同比增长29.57%。\n资产质量方面,截至2021年6月末,潍坊银行不良贷款率1.10%,较上年末下降0.13个百分点。\n资本金方面,截至 2021年6月末,潍坊银行资本充足率、核心一级资本充足率、一级资本充足率分别为11.66%、7.89%、10.13%,分别较上年末下降1.89、0.89、1.15 个百分点。",
  "url": "http://finance.china.com.cn/news/special/2021bnb/20210824/5638343.shtml",
  "source_domain": "finance.china.com.cn"
}
```

- "title": 【string】The title of the text.
- "score": 【float】Quality score generated by the quality evaluation model.
- "text": 【string】Text content of the data sample.
- "url": 【string】External URL pointing to the original web page of the text.
- "source_domain": 【string】The domain name of the source website.

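For readers who want to keep only the highest-quality records, the following is a minimal sketch (not part of the official tool-chain) that streams one of the released `.jsonl` files and keeps entries whose `score` exceeds a chosen threshold; the file names used here are placeholders.

```python
import json

def filter_by_score(input_path: str, output_path: str, threshold: float = 0.9) -> int:
    """Keep only records whose quality score exceeds `threshold`."""
    kept = 0
    with open(input_path, "r", encoding="utf-8") as fin, \
         open(output_path, "w", encoding="utf-8") as fout:
        for line in fin:
            record = json.loads(line)
            if record.get("score", 0.0) > threshold:
                fout.write(json.dumps(record, ensure_ascii=False) + "\n")
                kept += 1
    return kept

# Example (placeholder file names):
# n = filter_by_score("chinesewebtext_part_0000.jsonl", "high_quality.jsonl", threshold=0.9)
```
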
# EvalWeb

### Introduction

We introduce a complete new tool-chain, **EvalWeb** (see Figure 1), which can extract high-quality Chinese texts from raw web data. For data crawled from the web, we first use a preparation module to process it and extract the monolingual Chinese data. After that, a preprocessing module further filters the data with manually crafted rules covering data length, sensitive words, the proportion of Chinese characters, and so on. Finally, a BERT-based evaluation model is employed to assess the quality of the filtered data. In this way, we can generate a quality score for each text and then use an appropriate threshold to extract the high-quality data we require. Furthermore, considering computational cost and efficiency, we also propose to leverage knowledge distillation to train a FastText classifier, which achieves similar performance with higher efficiency and lower computational cost.

![](./BERTEval.png)

<i>Figure 1: The architecture of our EvalWeb approach</i>

### Environment Dependencies

```shell
scikit-learn==1.3.0
transformers==4.31.0
scipy==1.11.1
numpy==1.24.3
pytorch==2.0.1
jieba==0.42.1
zhconv==1.4.3
fasttext==0.9.2
```

### Stage 1: Data Preparation

#### 1. Deduplication and Language Identification (LID) using CCNet Tools

* Following the work of CCNet, this module employs a hash-based inter-string deduplication method to remove duplicate text across different CommonCrawl snapshots. Additionally, a well-trained language identification model, which supports 157 languages, is applied to select Chinese data. In this way, we obtain all the monolingual Chinese text data we require.

* [CCNet Tools](https://github.com/facebookresearch/cc_net)

* Run the script:

```shell
python -m cc_net --config config/my_config_2023-23.json
```

* Outputs:

```shell
/data/mined_split/2023-23/{0-4999}/zh_[head|middle|tail].json.gz
```

* config/my_config_2023-23.json:

```json
{
  "hash_in_mem": 10,
  "dump": "2023-23",
  "task_parallelism": 20,
  "num_shards": 5000,
  "mine_num_processes": 20,
  "num_segments_per_shard": -1,
  "lang_whitelist": ["zh", "en"],
  "lang_blacklist": [],
  "lang_threshold": 0.5,
  "keep_bucket": [],
  "pipeline": ["dedup", "lid", "keep_lang", "sp", "lm", "pp_bucket", "drop", "split_by_lang"],
  "metadata": "None",
  "execution": "local",
  "output_dir": "data",
  "mined_dir": "mined",
  "target_size": "4G",
  "min_len": 300,
  "cache_dir": "/mnt/data/ccnet_data/commoncrawl"
}
```

#### 2. Filtering using a blacklist and regular expression matching

* Run `clear_ccnet.py`:

```sh
python clear_ccnet.py --source /mnt/data/ccnet_clean/cc_net/data/mined_split/2023-23 --target /mnt/data/cc_cleaned
# --source: directory of the cleaned data from the first step
# --target: directory for the data filtered by the blacklist and regular expression matching
```

* Outputs:

```sh
cleared*.jsonl
cleared_dirty*.jsonl
```

* Compress the files:

```sh
tar -czvf ccnet-2023-23.tar.gz 2023-23
```

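The exact rules live in `clear_ccnet.py`; purely as an illustration, here is a minimal sketch of the kind of blacklist and regular-expression check this step performs. The blacklist entries, patterns, and field names below are placeholders, not the ones actually used.

```python
import re

# Placeholder blacklist of source domains and a couple of illustrative patterns.
DOMAIN_BLACKLIST = {"spam.example.com", "ads.example.net"}
NOISE_PATTERNS = [
    re.compile(r"https?://\S+"),      # bare URLs left in the body text
    re.compile(r"[\ue000-\uf8ff]"),   # private-use-area characters (mojibake)
]

def is_dirty(record: dict) -> bool:
    """Return True if a record should go to cleared_dirty*.jsonl."""
    if record.get("source_domain") in DOMAIN_BLACKLIST:
        return True
    text = record.get("raw_content", record.get("text", ""))
    return any(p.search(text) for p in NOISE_PATTERNS)
```
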
### Stage 2: Preprocessing

This section focuses on extracting high-quality texts from Chinese monolingual web data by using manually crafted rules to filter out violent, pornographic, and advertising content as well as erroneous characters. The details of the filtering rules are presented below, followed by a simplified code sketch:

- #### Text Extraction

  Extract the text content from the `jsonl` files produced by the data preparation stage.

- #### Data Length

  To improve language model training, documents are filtered out if they have an average line length of fewer than **10** characters or a total text length of less than **200** characters, as such short texts often lack meaningful context and semantic relevance.

- #### Proportion of Characters

  We aim to create a high-quality simplified Chinese dataset from web data by eliminating traditional Chinese characters and removing texts in which the proportion of Chinese characters is below **30%**, ensuring the dataset is suitable for training large language models.

- #### Sensitive Words

  To prevent large language models from generating toxic content, texts are analyzed for occurrences of harmful words from a predefined list, and any text with more than **0.5** occurrences of such words per line is classified as toxic and removed from the training dataset.

- #### Internal Duplication

  To enhance training efficiency and model performance, an analysis at 13-gram granularity is conducted to identify and filter out data samples in which over **50%** of the character sequences are repeated.

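The following is a minimal, self-contained sketch of the four filtering rules described above, written only for illustration; the actual implementation lives in `preprocess.py`, and the helper name and the tiny sensitive-word list here are placeholders.

```python
import re
from collections import Counter

SENSITIVE_WORDS = ["placeholder_word_1", "placeholder_word_2"]  # stand-in for the real list
CHINESE_CHAR = re.compile(r"[\u4e00-\u9fff]")

def keep_document(text: str) -> bool:
    lines = [l for l in text.split("\n") if l.strip()]
    if not lines:
        return False
    # Rule 1: data length (total length >= 200, average line length >= 10)
    if len(text) < 200 or sum(len(l) for l in lines) / len(lines) < 10:
        return False
    # Rule 2: proportion of Chinese characters (>= 30%)
    if len(CHINESE_CHAR.findall(text)) / len(text) < 0.30:
        return False
    # Rule 3: sensitive words (more than 0.5 occurrences per line on average)
    hits = sum(text.count(w) for w in SENSITIVE_WORDS)
    if hits / len(lines) > 0.5:
        return False
    # Rule 4: internal duplication at 13-gram granularity (over 50% repeated)
    grams = [text[i:i + 13] for i in range(max(len(text) - 12, 0))]
    if grams:
        counts = Counter(grams)
        repeated = sum(c for c in counts.values() if c > 1)
        if repeated / len(grams) > 0.5:
            return False
    return True
```
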
Here is an example command to run the preprocessing stage:

```shell
python preprocess.py --dates 2023-06 2023-14
```

> The **"dates"** parameter corresponds to the folder names of the snapshots generated during the preparation stage.
>
> After running it, you will find six subfolders under the corresponding date's folder, named **"text_extraction"**, **"length"**, **"Character"**, **"sensitive"**, **"duplication"**, and **"remain"**. The **"text_extraction"** folder contains the results of extracting text from each piece of data, while **"length"**, **"Character"**, **"sensitive"**, and **"duplication"** correspond to the four filtering operations and store the filtered-out noisy data. The **"remain"** folder stores the data remaining after the preprocessing stage, which will subsequently be scored by our evaluation model.

### Stage 3: Quality Evaluation

In the preprocessing procedure, we used handcrafted rules to remove explicitly noisy texts from our dataset. However, the remaining data still contain a considerable amount of low-quality text that cannot be filtered out with handcrafted rules. In order to extract higher-quality data from them, in this section we further design evaluation models.

#### Stage 3.1: BERTEval

#### 1. BERTEval Training Data Composition

<img src="./BERTEval_data_composition.png" alt="BERTEval training data composition" style="zoom:50%;" />

#### 2. BERTEval Training and Inference

- Step 1: Two-stage training

```shell
python train.py      # stage 1: modify configs/base_config.json to set hyper-parameters
python train_ust.py  # stage 2: modify configs/ust_config.json to set hyper-parameters
```

- Step 2: Split the previously processed CommonCrawl data into multiple shards, where each shard is a JSON file; all shards of a single snapshot are stored under the same path. Refer to the example in `util/text_separate.py`.

- Step 3: Run the inference script `pred.py`. It splits each text at delimiters such as newlines (`\n`) or periods into complete paragraphs of at most 512 tokens and predicts a quality score for each paragraph (a sketch of this splitting is given after the step list). The configuration can be modified in `config/pred_config.json`; the key parameters are as follows:

```shell
"data_path": path of the CCNet data
"output_path": path to store the scored data
"num_workers": number of CPU processes for data preprocessing
"batch_size": BERT batch size
"checkpoint": model checkpoint path
"tokenizer_path": path where the BERT tokenizer is stored
"pretrained_model_path": path of the pre-trained BERT weights
```

The other parameters do not require modification. The processed text is stored in multiple JSONL files. Then run:

```shell
python pred.py
```

- Step 4: Set a threshold value $T$ and retain text data whose quality score is greater than $T$. Since the maximum input length of bert-base is 512 tokens, longer texts are split into multiple segments; consecutive segments of the same document whose scores are greater than $T$ are automatically concatenated. This functionality is implemented in the function `text_select_with_pred(file, score_threshold)` in `utils/util.py`.

Usage:

```python
from utils.util import text_select_with_pred

file = "test/data/cleared0_0000.jsonl"   # a shard scored by pred.py
score_threshold = 0.99
selected_data = text_select_with_pred(file, score_threshold)
```

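As a reference for the splitting behaviour described in Step 3, here is a minimal sketch (not the code in `pred.py`) that cuts a document at newlines and at Chinese or Latin full stops, then greedily packs the pieces into segments of at most 512 characters; the real script works at the BERT token level, so treat this as an approximation.

```python
import re

def split_into_segments(text: str, max_len: int = 512) -> list[str]:
    """Greedily pack sentence-like pieces into segments of at most `max_len` characters."""
    # Cut at newlines and after Chinese or Latin full stops, keeping the delimiter.
    pieces = [p for p in re.split(r"(?<=[。.])|\n", text) if p and p.strip()]
    segments, current = [], ""
    for piece in pieces:
        if len(current) + len(piece) <= max_len:
            current += piece
        else:
            if current:
                segments.append(current)
            current = piece[:max_len]  # an over-long single piece is truncated in this sketch
    if current:
        segments.append(current)
    return segments

# Each segment would then be scored by the BERT evaluation model.
```
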

#### Stage 3.2: FastText

#### 1. FastText Training Data Composition

<img src="./FastText_data_composition.png" alt="FastText training data composition" style="zoom:50%;" />

#### 2. FastText Training and Inference

We provide our FastText training data examples and training script in the **"fasttext"** folder.

```shell
cd fasttext
python main.py --mode train --train_file ./data/train.txt --test_file ./data/test.txt
```

> To understand how the **"train.txt"** and **"test.txt"** files are constructed, please refer to **"./data/build_data.py"**.
>
> The trained model **"model.bin"** will be stored in the **"output"** folder.

After obtaining the remaining data from the preprocessing stage (stored in a path such as **"./2023-06/remain"**), you can use our FastText model to score all the data:

```shell
python main.py --mode test --dates 2023-06 2023-14
```

> This step assigns a FastText score to each data entry, with the results stored in a directory such as **"./2023-06/remain/fasttext"**. Subsequently, you can use these scores to filter and extract high-quality data with a threshold (default: 0.5).

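For reference, here is a minimal sketch of what such threshold-based filtering with a trained FastText model can look like. It uses the standard `fasttext` Python API, but the label name `__label__1` for high-quality text and the file paths are assumptions and may not match what `main.py` actually produces.

```python
import json
import fasttext

model = fasttext.load_model("./output/model.bin")  # assumed path of the trained classifier

def fasttext_score(text: str) -> float:
    """Return the predicted probability of the (assumed) high-quality label."""
    labels, probs = model.predict(text.replace("\n", " "))
    return float(probs[0]) if labels[0] == "__label__1" else 1.0 - float(probs[0])

def filter_file(input_path: str, output_path: str, threshold: float = 0.5) -> None:
    """Keep only records whose FastText score reaches the threshold."""
    with open(input_path, encoding="utf-8") as fin, open(output_path, "w", encoding="utf-8") as fout:
        for line in fin:
            record = json.loads(line)
            if fasttext_score(record["text"]) >= threshold:
                fout.write(line)
```
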
pictures/Overview_of_output_datasets.png ADDED

Git LFS Details

  • SHA256: 186fa8f39c1ed1c7ef8edcfd3af4ee4246565e8d7245c17def93702cff793bc5
  • Pointer size: 131 Bytes
  • Size of remote file: 209 kB