and111 committed
Commit 46fe70d
Parent(s): 5af52a7

Update README.md

Files changed (1):
  README.md (+1 -1)
README.md CHANGED
@@ -1,6 +1,6 @@
 ### Dataset Summary
 
-Input data for the first phase of BERT pretraining (sequence length 128). All text is tokenized with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) tokenizer.
+Input data for the **second** phase of BERT pretraining (sequence length 512). All text is tokenized with the [bert-base-uncased](https://huggingface.co/bert-base-uncased) tokenizer.
 Data is obtained by concatenating and shuffling the [wikipedia](https://huggingface.co/datasets/wikipedia) (split: `20220301.en`) and [bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) datasets and running the [reference BERT data preprocessor](https://github.com/google-research/bert/blob/master/create_pretraining_data.py) without masking or input duplication (`dupe_factor = 1`). Documents are split into sentences with the [NLTK](https://www.nltk.org/) sentence tokenizer (`nltk.tokenize.sent_tokenize`).
 
 See the dataset for the **first** phase of pretraining: [bert_pretrain_phase1](https://huggingface.co/datasets/and111/bert_pretrain_phase1).
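
For reference, here is a minimal sketch of how the sentence-split input corpus described in this README could be assembled with the `datasets` library and NLTK. The output file name, shuffle seed, and column handling are illustrative assumptions, not part of the dataset card; the actual pipeline runs the reference `create_pretraining_data.py` script on the resulting file.

```python
from datasets import load_dataset, concatenate_datasets
import nltk

nltk.download("punkt")  # sentence tokenizer models used by sent_tokenize

# Load the two source corpora; keep only the raw text column so the
# schemas match for concatenation.
wiki = load_dataset("wikipedia", "20220301.en", split="train")
wiki = wiki.remove_columns([c for c in wiki.column_names if c != "text"])
books = load_dataset("bookcorpusopen", split="train")
books = books.remove_columns([c for c in books.column_names if c != "text"])

# Concatenate and shuffle (the seed here is an arbitrary illustration).
corpus = concatenate_datasets([wiki, books]).shuffle(seed=42)

# Write one sentence per line, with a blank line between documents --
# the input format expected by create_pretraining_data.py.
with open("corpus.txt", "w", encoding="utf-8") as f:
    for doc in corpus:
        for sentence in nltk.tokenize.sent_tokenize(doc["text"]):
            f.write(sentence + "\n")
        f.write("\n")
```

The resulting `corpus.txt` would then be passed to the reference preprocessor with `--max_seq_length=512` and `--dupe_factor=1` for this phase. Note that the stock script always applies masking, so "without masking" presumably reflects a local modification to it.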