Commit f2021c2 by julien-c (HF staff), parent: 0ea5b50

Migrate model card from transformers-repo


Read announcement at https://discuss.huggingface.co/t/announcement-all-model-cards-will-be-migrated-to-hf-co-model-repos/2755
Original file history: https://github.com/huggingface/transformers/commits/master/model_cards/aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2/README.md

---
datasets:
- squad_v2
---

# BERT L-10 H-512 CORD-19 (2020/06/16) fine-tuned on SQuAD v2.0

BERT model with [10 Transformer layers and a hidden size of 512](https://huggingface.co/google/bert_uncased_L-10_H-512_A-8), introduced in [Well-Read Students Learn Better: On the Importance of Pre-training Compact Models](https://arxiv.org/abs/1908.08962), [fine-tuned for MLM](https://huggingface.co/aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616) on the CORD-19 dataset (as released on 2020/06/16) and then fine-tuned for QA on SQuAD v2.0.

## Training the model

```bash
python run_squad.py \
  --model_type bert \
  --model_name_or_path aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616 \
  --train_file 'train-v2.0.json' \
  --predict_file 'dev-v2.0.json' \
  --do_train \
  --do_eval \
  --do_lower_case \
  --version_2_with_negative \
  --max_seq_length 384 \
  --per_gpu_train_batch_size 10 \
  --learning_rate 3e-5 \
  --num_train_epochs 2 \
  --output_dir bert_uncased_L-10_H-512_A-8_cord19-200616_squad2
```
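
## Using the model

A minimal usage sketch (not part of the original card) with the `transformers` `question-answering` pipeline; the question/context strings are illustrative examples only:

```python
from transformers import pipeline

# Load the fine-tuned QA checkpoint from the Hub
qa = pipeline(
    "question-answering",
    model="aodiniz/bert_uncased_L-10_H-512_A-8_cord19-200616_squad2",
)

# Example query (made-up context for illustration)
result = qa(
    question="What does COVID-19 stand for?",
    context="COVID-19 stands for coronavirus disease 2019.",
)
print(result["answer"])
```

The pipeline returns a dict with `answer`, `score`, `start`, and `end` keys; because the model was trained with `--version_2_with_negative`, it can also predict that no answer is present in the context.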