# BERT L-2 H-512 fine-tuned on MLM (CORD-19 2020/06/16)

BERT model with [2 Transformer layers and a hidden size of 512](https://huggingface.co/google/bert_uncased_L-2_H-512_A-8), referenced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), fine-tuned for masked language modeling (MLM) on the CORD-19 dataset (as released on 2020/06/16).
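Since the checkpoint was fine-tuned only with the MLM objective, the natural way to try it out is masked-token prediction. The snippet below is a minimal sketch using the `transformers` `fill-mask` pipeline with this repository's model ID (`aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616`); the example sentence is illustrative only.

```python
from transformers import pipeline

# Load this repository's checkpoint for masked-token prediction.
fill_mask = pipeline(
    "fill-mask",
    model="aodiniz/bert_uncased_L-2_H-512_A-8_cord19-200616",
)

# Illustrative CORD-19-style sentence; [MASK] is BERT's mask token.
for prediction in fill_mask("Coronavirus is transmitted through respiratory [MASK]."):
    print(prediction["token_str"], prediction["score"])
```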
## Training the model

```bash
python run_language_modeling.py \
    --model_type bert \
    --model_name_or_path google/bert_uncased_L-2_H-512_A-8 \
    --do_train \
    --train_data_file {cord19-200616-dataset} \
    --mlm \
    --mlm_probability 0.2 \
    --line_by_line \
    --block_size 512 \
    --per_device_train_batch_size 20 \
    --learning_rate 3e-5 \
    --num_train_epochs 2 \
    --output_dir bert_uncased_L-2_H-512_A-8_cord19-200616
```
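`{cord19-200616-dataset}` is a placeholder for the training corpus: with `--line_by_line`, `run_language_modeling.py` expects a plain-text file containing one passage per line. As a rough, assumption-laden sketch of how such a file could be assembled from the CORD-19 2020/06/16 document parses (the directory layout, output filename, and `abstract`/`body_text` JSON fields are assumptions about that release, not part of this model card):

```python
import json
from pathlib import Path

# Hypothetical paths: adjust to wherever the CORD-19 2020/06/16 release was extracted.
parses_dir = Path("cord19-200616/document_parses/pdf_json")
output_file = Path("cord19-200616-dataset.txt")

with output_file.open("w", encoding="utf-8") as out:
    for json_path in sorted(parses_dir.glob("*.json")):
        paper = json.loads(json_path.read_text(encoding="utf-8"))
        # Write one paragraph per line, as --line_by_line expects.
        for section in ("abstract", "body_text"):
            for block in paper.get(section, []):
                text = block.get("text", "").strip()
                if text:
                    out.write(text + "\n")
```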