JackBAI committed
Commit db2066d
1 Parent(s): 39298c3

Create README.md

Files changed (1): README.md (+31 -0)

README.md ADDED
---
license: mit
datasets:
- wikipedia
- bookcorpus
language:
- en
metrics:
- glue
library_name: transformers
---

This is our reproduction of a medium-sized model using the official HuggingFace `roberta` architecture. Architecturally, RoBERTa is identical to BERT except for its larger vocabulary.

According to Google's [BERT releases](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8) and the [BERT-Medium config](https://huggingface.co/google/bert_uncased_L-8_H-512_A-8/blob/main/config.json), a medium-sized model has Layer=8, Hidden=512, #AttnHeads=8, and IntermediateSize=2048. We follow this configuration to pre-train a medium-sized RoBERTa model for our reproduction.
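
As a concrete illustration, the snippet below builds this shape with the standard `transformers` classes. It is a minimal sketch: the `vocab_size` shown is the default RoBERTa BPE vocabulary and is an assumption here, so adjust it to match the tokenizer actually used for this checkpoint.

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Medium-size configuration following the BERT-Medium shape:
# Layer=8, Hidden=512, #AttnHeads=8, IntermediateSize=2048.
# vocab_size=50265 is the standard RoBERTa BPE vocabulary (an assumption
# for this checkpoint; change it to match the tokenizer you use).
config = RobertaConfig(
    vocab_size=50265,
    num_hidden_layers=8,
    hidden_size=512,
    num_attention_heads=8,
    intermediate_size=2048,
)

model = RobertaForMaskedLM(config)
print(f"{sum(p.numel() for p in model.parameters()) / 1e6:.1f}M parameters")
```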

We use the same datasets as BERT (English Wikipedia and BookCorpus) and pre-train for 30k steps with a batch size of 8,192. We have also released our reproduction of this dataset [on HuggingFace](https://huggingface.co/datasets/JackBAI/bert_pretrain_datasets).
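
For reference, the released corpus can be streamed with the `datasets` library as sketched below. The `split="train"` name and the record fields are assumptions; consult the dataset card for the exact layout.

```python
from datasets import load_dataset

# Stream the released Wikipedia + BookCorpus pre-training corpus.
# NOTE: the split name "train" is an assumption; check the dataset card.
ds = load_dataset("JackBAI/bert_pretrain_datasets", split="train", streaming=True)

# Peek at the first record to see which fields are available.
for example in ds.take(1):
    print(example)
```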

We use DeepSpeed ZeRO-2, which partitions optimizer states and gradients across data-parallel workers, to reduce memory usage and improve training throughput.
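
For readers unfamiliar with the setup, a minimal ZeRO stage-2 configuration passed to the HuggingFace `Trainer` could look like the dict below. The specific options are illustrative assumptions, not the exact settings used for this run.

```python
# Minimal DeepSpeed ZeRO-2 configuration (illustrative; the actual values
# used for this run may differ). "auto" lets the HF Trainer fill in values
# from its own TrainingArguments.
ds_config = {
    "zero_optimization": {
        "stage": 2,
        "overlap_comm": True,
        "contiguous_gradients": True,
    },
    "train_micro_batch_size_per_gpu": "auto",
    "gradient_accumulation_steps": "auto",
}
# Pass via TrainingArguments(deepspeed=ds_config, ...) when using the HF Trainer.
```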

Other training hyperparameters (mapped to a `TrainingArguments` sketch below the table):

| Parameter         | Value  |
|-------------------|--------|
| WARMUP_STEPS      | 1800   |
| LR_DECAY          | linear |
| ADAM_EPS          | 1e-6   |
| ADAM_BETA1        | 0.9    |
| ADAM_BETA2        | 0.98   |
| ADAM_WEIGHT_DECAY | 0.01   |
| PEAK_LR           | 1e-3   |
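
Taken together, these hyperparameters map onto HuggingFace `TrainingArguments` roughly as follows. The output directory and the per-device batch size / gradient-accumulation split of the 8,192 global batch are assumptions for illustration and depend on the number of GPUs available.

```python
from transformers import TrainingArguments

# Sketch of the hyperparameters above expressed as TrainingArguments.
# The batch-size split is an assumption: 64 x 16 accumulation x 8 GPUs = 8,192.
training_args = TrainingArguments(
    output_dir="roberta-medium-repro",   # hypothetical path
    max_steps=30_000,
    per_device_train_batch_size=64,
    gradient_accumulation_steps=16,
    learning_rate=1e-3,                  # PEAK_LR
    lr_scheduler_type="linear",          # LR_DECAY
    warmup_steps=1800,                   # WARMUP_STEPS
    adam_epsilon=1e-6,                   # ADAM_EPS
    adam_beta1=0.9,                      # ADAM_BETA1
    adam_beta2=0.98,                     # ADAM_BETA2
    weight_decay=0.01,                   # ADAM_WEIGHT_DECAY
)
```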