---
license: apache-2.0
---

# The Pile -- PubMed Abstracts (refined by Data-Juicer)

A refined version of the PubMed Abstracts dataset from The Pile, processed by [Data-Juicer](https://github.com/alibaba/data-juicer). Low-quality samples have been removed from the original dataset to improve its overall quality.

This dataset is typically used to pretrain large language models.

**Notice**: This repository hosts only a small subset for previewing. The whole dataset is available [here](https://dail-wlcb.oss-cn-wulanchabu.aliyuncs.com/LLM_data/our_refined_datasets/pretraining/the-pile-pubmed-abstract-refine-result.jsonl) (about 24 GB).
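Since the full export is a single large JSONL file, it is usually read line by line rather than loaded whole. A minimal sketch of streaming it with the standard library (the `"text"` field name is an assumption about the export schema; adjust it to match the actual file):

```python
import json

def iter_samples(path):
    """Stream samples from a JSONL file one line at a time, so the
    full ~24 GB export never has to fit in memory."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Usage (the "text" field name is an assumption, not confirmed by this card):
#   for sample in iter_samples("the-pile-pubmed-abstract-refine-result.jsonl"):
#       print(sample["text"][:80])
```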

## Dataset Information

- Number of samples: 371,331 (~99.55% of the original dataset is kept)

## Refining Recipe
```yaml
# global parameters
project_name: 'Data-Juicer-recipes-pubmed-abstract'
dataset_path: '/path/to/your/dataset'  # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl'

np: 50  # number of subprocesses to process your dataset
open_tracer: true

# process schedule
# a list of several process operators with their arguments
process:
  - clean_email_mapper:
  - clean_links_mapper:
  - fix_unicode_mapper:
  - punctuation_normalization_mapper:
  - whitespace_normalization_mapper:

  - alphanumeric_filter:  # 4068
      tokenization: false
      min_ratio: 0.7  # < 3sigma (0.773)
      max_ratio: 0.881  # 3sigma
  - average_line_length_filter:  # for code
      max_len: 2100  # > 3sigma (1471) -- 7410
  - character_repetition_filter:
      rep_len: 10
      max_ratio: 0.2  # > 3sigma (0.1458) -- 6060
  - flagged_words_filter:
      lang: en
      tokenization: true
      max_ratio: 0.00232  # 3sigma
  - language_id_score_filter:  # remove language filter
      min_score: 0.5
  - maximum_line_length_filter:  # for code
      max_len: 4000  # remove 8202 samples
  - perplexity_filter:
      lang: en
      max_ppl: 4000  # remove 10284 samples
  - special_characters_filter:
      max_ratio: 0.38  # remove 5532 samples
  - text_length_filter:
      max_len: 4000  # > 3sigma -- 10873
  - words_num_filter:
      lang: en
      tokenization: true
      min_num: 20  # remove 10790 samples
      max_num: 700  # remove 22709 samples
  - word_repetition_filter:
      lang: en
      tokenization: true
      rep_len: 10
      max_ratio: 0.0887  # 3sigma

  - document_simhash_deduplicator:
      tokenization: space
      window_size: 3  # small window size for short texts
      lowercase: true
      ignore_pattern: '\p{P}'
      num_blocks: 10
      hamming_distance: 8  # larger hamming distance threshold for short texts
```
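Several thresholds in the recipe are annotated "3sigma": the cutoff is placed three standard deviations from the mean of the corresponding per-sample statistic measured on the raw corpus, so only distributional outliers are dropped. A minimal sketch of that rule (the statistic values below are illustrative, not the actual PubMed numbers):

```python
import statistics

def three_sigma_bounds(values):
    """Return (mean - 3*std, mean + 3*std) for a list of per-sample stats.
    Samples whose statistic falls outside these bounds are treated as
    outliers by the recipe's '3sigma' thresholds."""
    mean = statistics.fmean(values)
    std = statistics.pstdev(values)
    return mean - 3 * std, mean + 3 * std

# Usage: derive a filter's max_ratio from stats measured on the raw corpus
# (illustrative values only):
word_rep_ratios = [0.01, 0.02, 0.015, 0.03, 0.012]
_, max_ratio = three_sigma_bounds(word_rep_ratios)
```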