---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- data-juicer
- pretraining
size_categories:
- 10M<n<100M
---
# RedPajama -- Wikipedia (refined by Data-Juicer)
A refined version of the Wikipedia subset of the RedPajama dataset, produced by Data-Juicer. Some "bad" samples were removed from the original dataset to yield a higher-quality corpus, which is typically used to pretrain a Large Language Model.

**Notice**: This is a small subset for previewing. The whole dataset is available here (about 68GB).
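For a quick look, the preview subset can be loaded as plain JSON-Lines with the `datasets` library. A minimal sketch: the file name is a placeholder for whatever `.jsonl` file this repository actually ships, and reading the `"text"` field assumes Data-Juicer's default `text_key`.

```python
# Minimal sketch: load the preview subset as JSON-Lines with Hugging Face
# `datasets`. The file name below is a placeholder; check the repository's
# file list for the actual name.
from datasets import load_dataset

ds = load_dataset(
    "json",
    data_files="wiki-samples.jsonl",  # placeholder file name
    split="train",
)
print(len(ds))
print(ds[0]["text"][:200])  # Data-Juicer stores content under "text" by default
```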
## Dataset Information

- Number of samples: 26,990,659 (keeping ~90.47% of the samples in the original dataset; see the quick check below)
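As a quick check, the size of the original split can be back-computed from the two numbers above:

```python
# Back-compute the original split size from the stats above.
kept = 26_990_659    # samples after refining
keep_ratio = 0.9047  # fraction of the original dataset kept
print(f"estimated original size: {kept / keep_ratio:,.0f}")  # ~29,833,823
```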
## Refining Recipe

```yaml
# global parameters
project_name: 'Data-Juicer-recipes-wiki'
dataset_path: '/path/to/your/dataset'  # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl'

np: 50  # number of subprocesses to process your dataset
open_tracer: true

# process schedule
# a list of several process operators with their arguments
process:
  - clean_email_mapper:
  - clean_links_mapper:
  - fix_unicode_mapper:
  - punctuation_normalization_mapper:
  - whitespace_normalization_mapper:
  - alphanumeric_filter:
      tokenization: false
      min_ratio: 0.6  # <3sigma (0.735)
      max_ratio: 0.884  # 3sigma
  - average_line_length_filter:  # for code
      max_len: 192  # 3sigma
  - character_repetition_filter:
      rep_len: 10
      max_ratio: 0.4  # >3sigma (0.197)
  - flagged_words_filter:
      lang: en
      tokenization: true
      max_ratio: 0.0019  # 3sigma
  - language_id_score_filter:
      min_score: 0.689  # 3sigma
  - maximum_line_length_filter:  # for code
      max_len: 1630  # 3sigma tbd
  - perplexity_filter:
      lang: en
      max_ppl: 6887  # 3sigma
  - special_characters_filter:
      max_ratio: 0.5  # >3sigma (0.34)
  - text_length_filter:
      max_len: 18221  # 3sigma
  - words_num_filter:
      lang: en
      tokenization: true
      min_num: 20
      max_num: 6086  # 3sigma
  - word_repetition_filter:
      lang: en
      tokenization: true
      rep_len: 10
      max_ratio: 0.3  # 3sigma (0.194)
  - document_simhash_deduplicator:
      tokenization: space
      window_size: 6
      lowercase: true
      ignore_pattern: '\p{P}'
      num_blocks: 6
      hamming_distance: 4
```
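To reproduce the refining run, save the recipe above as a YAML file and hand it to Data-Juicer. A minimal sketch, assuming a Data-Juicer version that exposes `init_configs` and `Executor` as below (the same path taken by its `tools/process_data.py` entry script); `'wiki-refine.yaml'` is a placeholder name.

```python
# Minimal sketch: run the recipe above with Data-Juicer's Python API.
# Assumes `init_configs`/`Executor` are importable as below, as in
# Data-Juicer's tools/process_data.py; 'wiki-refine.yaml' is a placeholder.
from data_juicer.config import init_configs
from data_juicer.core import Executor

cfg = init_configs(args=['--config', 'wiki-refine.yaml'])  # the recipe saved as YAML
executor = Executor(cfg)
executor.run()  # refined data is written to the configured export_path
```

Equivalently, running `python tools/process_data.py --config wiki-refine.yaml` from a Data-Juicer checkout should produce the refined dataset at the configured `export_path`.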