---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- data-juicer
- pretraining
---

# The Pile -- PhilPaper (refined by Data-Juicer)

A refined version of the PhilPaper subset of The Pile, produced by [Data-Juicer](https://github.com/alibaba/data-juicer). Some "bad" samples have been removed from the original dataset to make it higher quality.

This dataset is typically used to pretrain a large language model.

**Notice**: This repository contains a small preview subset. The whole dataset is available [here](https://dail-wlcb.oss-cn-wulanchabu.aliyuncs.com/LLM_data/our_refined_datasets/pretraining/the-pile-philpaper-refine-result.jsonl) (about 1.7 GB).
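Once downloaded, the full JSONL file can be loaded with the Hugging Face `datasets` library. A minimal sketch, assuming the file was saved under its original name in the working directory:

```python
# Minimal sketch: load the downloaded JSONL with Hugging Face `datasets`.
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="the-pile-philpaper-refine-result.jsonl",  # the downloaded file
    split="train",
)
print(ds.num_rows)  # number of refined samples
print(ds[0])        # inspect the first sample
```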
## Dataset Information

- Number of samples: 29,117 (keeps ~88.82% of the original dataset)

## Refining Recipe
```yaml
# global parameters
project_name: 'our-recipes-Philpaper'
dataset_path: '/path/to/the/original/dataset/'  # path to your dataset directory or file
export_path: 'Philpaper-refine-result.jsonl'    # path to the result dataset file

np: 50                # number of subprocesses to process your dataset
ds_cache_dir: /cache  # path to your dataset cache directory
open_tracer: true

# process schedule
# a list of several process operators with their arguments
process:
  - clean_email_mapper:
  - clean_links_mapper:
  - fix_unicode_mapper:
  - punctuation_normalization_mapper:
  - whitespace_normalization_mapper:

  - alphanumeric_filter:
      tokenization: false
      min_ratio: 0.7  # <3sigma (0.72)
  - average_line_length_filter:
      max_len: 5e5  # >3sigma (406006)
  - character_repetition_filter:
      rep_len: 10
      max_ratio: 0.2  # >3sigma (0.145)
  - flagged_words_filter:
      lang: en
      tokenization: true
      max_ratio: 0.0007  # 3sigma
  - language_id_score_filter:
      min_score: 0.6
  - maximum_line_length_filter:
      max_len: 1e6  # 3sigma
  - perplexity_filter:
      lang: en
      max_ppl: 5000
  - special_characters_filter:
      max_ratio: 0.4  # >3sigma (0.302)
  - words_num_filter:
      lang: en
      tokenization: true
      min_num: 1000
      max_num: 2e5  # 3sigma
  - word_repetition_filter:
      lang: en
      tokenization: true
      rep_len: 10
      max_ratio: 0.3  # >3sigma (0.249)

  - document_simhash_deduplicator:
      tokenization: space
      window_size: 6
      lowercase: true
      ignore_pattern: '\p{P}'
      num_blocks: 6
      hamming_distance: 4
```
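
To reproduce the refinement, the recipe above can be fed to Data-Juicer's processing entry point. A minimal sketch, mirroring the repository's `tools/process_data.py` and assuming Data-Juicer is installed and the recipe is saved as `philpaper-refine.yaml` (a hypothetical local filename):

```python
# Minimal sketch of applying the recipe with Data-Juicer.
# Run as: python process_philpaper.py --config philpaper-refine.yaml
from data_juicer.config import init_configs
from data_juicer.core import Executor

cfg = init_configs()      # parses --config and any command-line overrides
executor = Executor(cfg)  # builds the mapper/filter/deduplicator pipeline
executor.run()            # processes the dataset and writes export_path
```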