---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
tags:
  - data-juicer
  - pretraining
size_categories:
  - 1M<n<10M
---

# The Pile -- PubMed Central (refined by Data-Juicer)

A refined version of the PubMed Central subset of The Pile, produced by Data-Juicer. Some "bad" samples have been removed from the original dataset to make it higher quality.

This dataset is typically used to pretrain large language models.

**Notice**: this repository contains only a small subset for preview. The whole dataset is available here (about 83 GB).
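
For a quick look at the data, the preview subset can be loaded with the Hugging Face `datasets` library. A minimal sketch; the repository id and the `text` field name below are assumptions, so use the id shown at the top of this page:

```python
from datasets import load_dataset

# NOTE: the repository id below is a placeholder -- replace it with
# this repo's actual id.
ds = load_dataset(
    "datajuicer/the-pile-pubmed-central-refined-by-data-juicer",
    split="train",
)

# Data-Juicer keeps sample content under a "text" key by default.
print(ds[0]["text"][:300])
```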

## Dataset Information

- Number of samples: 2,694,860 (keeps ~86.96% of the original dataset)
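
The keep ratio also implies the size of the unrefined source split; a quick back-of-envelope check (derived here, not quoted from the original card):

```python
refined = 2_694_860      # samples kept after refinement
keep_ratio = 0.8696      # ~86.96% of the original dataset
print(f"{refined / keep_ratio:,.0f}")  # ~3,098,965 original samples
```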

## Refining Recipe
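
Several thresholds in the recipe below are tagged `3sigma`: they sit near three standard deviations from the mean of the corresponding per-sample statistic on the original data. A minimal sketch of that rule, assuming a hypothetical JSONL export of per-sample stats with a `perplexity` column (file name and column are illustrative):

```python
import pandas as pd

# Hypothetical per-sample statistics exported by an analysis pass
# over the unrefined dataset.
stats = pd.read_json("pubmed_central_stats.jsonl", lines=True)

ppl = stats["perplexity"]
upper = ppl.mean() + 3 * ppl.std()  # 3-sigma upper bound
print(f"suggested max_ppl: {upper:.0f}")
```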

```yaml
# global parameters
project_name: 'Data-Juicer-recipes-pubmed-central'
dataset_path: '/path/to/your/dataset'  # path to your dataset directory or file
export_path: '/path/to/your/dataset.jsonl'

np: 50  # number of subprocesses used to process the dataset
open_tracer: true

# process schedule
# a list of process operators with their arguments;
# trailing numbers in comments record how many samples an op removes,
# and '3sigma' marks thresholds set near mean + 3 * std of the stat
process:
  - clean_email_mapper:
  - clean_links_mapper:
  - fix_unicode_mapper:
  - punctuation_normalization_mapper:
  - whitespace_normalization_mapper:

  - alphanumeric_filter:  # 89217
      tokenization: false
      min_ratio: 0.2787  # 3sigma
  - average_line_length_filter:  # for code
      max_len: 1200  # < 3sigma (1478) -- 7410
  - character_repetition_filter:
      rep_len: 10
      max_ratio: 0.3741  # 3sigma -- 65849
  - flagged_words_filter:
      lang: en
      tokenization: true
      max_ratio: 0.00195  # 3sigma -- 8305
  - language_id_score_filter:  # filter samples with low language-identification confidence
      min_score: 0.5  # 272359
  - maximum_line_length_filter:  # for code
      max_len: 7328  # remove 23808 samples
  - perplexity_filter:
      lang: en
      max_ppl: 8000  # remove 173883 samples
  - special_characters_filter:
      max_ratio: 0.842  # remove 87661 samples
  - text_length_filter:
      max_len: 136028  # 3sigma -- 15118
  - words_num_filter:
      lang: en
      tokenization: true
      min_num: 20  # remove 176537 samples
      max_num: 23305  # remove 15016 samples
  - word_repetition_filter:
      lang: en
      tokenization: true
      rep_len: 10
      max_ratio: 0.5981  # 3sigma -- 93843

  - document_simhash_deduplicator:
      tokenization: space
      window_size: 6
      lowercase: true
      ignore_pattern: '\p{P}'
      num_blocks: 6
      hamming_distance: 4
```
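
To reproduce the refinement, save the recipe above to a file (e.g. `pubmed-central-refine.yaml`, a name chosen here for illustration) and feed it to Data-Juicer's processing tool. The exact entry point can vary between Data-Juicer versions, so treat this as a sketch:

```shell
# from a Data-Juicer checkout (or after `pip install py-data-juicer`,
# which also provides the dj-process command)
python tools/process_data.py --config pubmed-central-refine.yaml
```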