Update README.md
README.md CHANGED
@@ -21,7 +21,13 @@ models.
# Model Description

-This model is fine-tuned on the SQuAD2.0 dataset. Fine-tuning the biomedical language model on the SQuAD dataset helps improve the score on the BioASQ challenge. If you plan to work with BioASQ or other biomedical QA tasks, it is better to use this model than BioM-ALBERT-xxlarge. This model (TensorFlow version) took the lead in the BioASQ9b-Factoid challenge under the name UDEL-LAB1.
+This model is fine-tuned on the SQuAD2.0 dataset. Fine-tuning the biomedical language model on the SQuAD dataset helps improve the score on the BioASQ challenge. If you plan to work with BioASQ or other biomedical QA tasks, it is better to use this model than BioM-ALBERT-xxlarge. This model (TensorFlow version) took the lead in the BioASQ9b-Factoid challenge under the name UDEL-LAB1.
+
+If you want to try our TensorFlow example showing how to fine-tune ALBERT on SQuAD and BioASQ, follow this link:
+
+https://github.com/salrowili/BioM-Transformers/blob/main/examples/Example_of_SQuAD2_0_and_BioASQ7B_tasks_with_BioM_ALBERT_xxlarge_on_TPU.ipynb
+
+To see the full details of the BioASQ9B results, please check this link: http://participants-area.bioasq.org/results/9b/phaseB/ (you need to register).

The Hugging Face library doesn't implement the layer-wise learning-rate decay feature, which affects performance on the SQuAD task. The reported result of BioM-ALBERT-xxlarge-SQuAD in our paper is 87.00 (F1) because we use the ALBERT open-source code with the TF checkpoint, which applies layer-wise decay.
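Since the model description in the diff above recommends this checkpoint for biomedical QA, here is a minimal usage sketch with the Hugging Face `transformers` question-answering pipeline. The model ID `sultan/BioM-ALBERT-xxlarge-SQuAD2` and the example question/context are assumptions for illustration; substitute the actual ID of this repository if it differs.

```python
# Minimal sketch: extractive QA with this fine-tuned checkpoint via the
# transformers pipeline. The model ID below is an assumption; replace it
# with this repository's actual ID if it differs.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="sultan/BioM-ALBERT-xxlarge-SQuAD2",      # assumed model ID
    tokenizer="sultan/BioM-ALBERT-xxlarge-SQuAD2",  # assumed tokenizer ID
)

result = qa(
    question="Which enzyme does aspirin inhibit?",
    context=(
        "Aspirin irreversibly inhibits cyclooxygenase (COX), reducing the "
        "production of prostaglandins and thromboxanes."
    ),
)
print(result["answer"], result["score"])
```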
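For readers unfamiliar with layer-wise decay, the sketch below illustrates the general idea in PyTorch: each transformer layer gets its own learning rate, decayed toward the input, via optimizer parameter groups. This is only an illustration of the concept under assumed settings (a BERT-style checkpoint, a 0.9 decay factor, simple name matching), not the ALBERT TensorFlow implementation that produced the 87.00 F1 score.

```python
# Illustrative sketch of layer-wise learning-rate decay using PyTorch
# optimizer parameter groups: layers closer to the output keep the base
# learning rate, earlier layers get progressively smaller ones. The decay
# factor and the BERT-style parameter names are assumptions, not the exact
# scheme in the ALBERT TensorFlow code.
import torch
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")
base_lr = 3e-5
decay = 0.9
num_layers = model.config.num_hidden_layers  # 12 for bert-base

def depth(name: str) -> int:
    """Map a parameter name to a depth index: embeddings = 0,
    encoder layer i = i + 1, pooler / task head = deepest."""
    if "embeddings" in name:
        return 0
    for i in range(num_layers):
        if f"encoder.layer.{i}." in name:
            return i + 1
    return num_layers + 1

param_groups = [
    {"params": [p], "lr": base_lr * decay ** (num_layers + 1 - depth(n))}
    for n, p in model.named_parameters()
    if p.requires_grad
]
optimizer = torch.optim.AdamW(param_groups)
```

Note that ALBERT shares its encoder parameters across layers, so a per-layer grouping like this maps most directly onto BERT-style models.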