This model is fine-tuned on the SQuAD2.0 dataset.
The Hugging Face `transformers` library doesn't implement the layer-wise learning-rate decay feature, which affects performance on the SQuAD task. The reported result of BioM-ALBERT-xxlarge-SQuAD in our paper is 87.00 (F1), since we use the ALBERT open-source code with the TF checkpoint, which applies layer-wise decay.

To reproduce the results in Google Colab:
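Layer-wise decay assigns progressively smaller learning rates to layers further from the task head. A minimal sketch of the scheme in plain Python (the function name, base learning rate, and decay factor here are illustrative, not values from the paper):

```python
def layerwise_lrs(base_lr, num_layers, decay):
    """Per-layer learning rates: the top layer (closest to the task
    head) gets base_lr; each layer below it is scaled by `decay`."""
    # Layer index num_layers - 1 is the top layer.
    return [base_lr * decay ** (num_layers - 1 - i) for i in range(num_layers)]

# Example: 12 layers, base LR 3e-5, decay factor 0.9.
lrs = layerwise_lrs(3e-5, 12, 0.9)
```

In practice each entry would become the `lr` of a separate optimizer parameter group for that layer's weights.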
- Make sure you have a GPU enabled.
- Clone and install the required libraries:

```shell
!git clone https://github.com/huggingface/transformers
!pip3 install -e transformers
!pip3 install sentencepiece
!pip3 install -r /content/transformers/examples/pytorch/question-answering/requirements.txt
```

- Run the evaluation script:
```shell
python /content/transformers/examples/pytorch/question-answering/run_qa.py \
  --model_name_or_path BioM-ALBERT-xxlarge-SQuAD2 \
  --do_eval \
  --version_2_with_negative \
  --per_device_eval_batch_size 8 \
  --dataset_name squad_v2 \
  --overwrite_output_dir \
  --fp16 \
  --output_dir out
```
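The 87.00 figure above is the SQuAD token-overlap F1. A minimal sketch of how that metric scores one prediction against one gold answer (simplified: the official evaluation script also normalizes articles and punctuation and takes the max over multiple gold answers):

```python
from collections import Counter

def squad_f1(prediction, gold):
    """Token-overlap F1 between a predicted and a gold answer span."""
    pred_tokens = prediction.lower().split()
    gold_tokens = gold.lower().split()
    # Multiset intersection of tokens shared by prediction and gold.
    num_same = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

For example, predicting only part of a multi-token gold answer yields full precision but reduced recall, so the F1 falls between 0 and 1.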