---
language: es
datasets:
- squad_es
- hackathon-pln-es/biomed_squad_es_v2
metrics:
- "f1"
---

# biomedtra-small for QA

This model is a fine-tuned version of [mrm8488/biomedtra-small-es](https://huggingface.co/mrm8488/biomedtra-small-es) on the [squad_es (v2)](https://huggingface.co/datasets/squad_es) training dataset.
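
As a quick orientation, here is a minimal usage sketch with the `transformers` question-answering pipeline; the model id and the question/context strings are illustrative placeholders, not values taken from this card.

```
# Minimal sketch (not from the original card): querying the model with the
# question-answering pipeline. The model id below is a placeholder; replace
# it with this repository's actual id.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="hackathon-pln-es/biomedtra-small-es-squad2-es",  # placeholder id
)

result = qa(
    question="¿Qué receptor utiliza el SARS-CoV-2 para entrar en la célula?",
    context="El SARS-CoV-2 se une al receptor ACE2 de la célula huésped para iniciar la infección.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```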

## Hyperparameters

The hyperparameters were chosen based on those used in [sultan/BioM-ELECTRA-Large-SQuAD2](https://huggingface.co/sultan/BioM-ELECTRA-Large-SQuAD2), an English-language model trained for a similar purpose.

```
--num_train_epochs 5
--learning_rate 5e-5
--max_seq_length 512
--doc_stride 128
```
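
For reference, a hedged Python sketch of how these flags map onto a `transformers` training and tokenization setup; the output directory, the example strings, and anything else not listed above are assumptions, not details of the original training run.

```
# Sketch only: maps the flags above onto TrainingArguments and the windowed
# tokenization used for extractive QA. Anything not listed in the card
# (output_dir, the example strings) is an assumption.
from transformers import AutoTokenizer, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("mrm8488/biomedtra-small-es")

training_args = TrainingArguments(
    output_dir="./biomedtra-small-es-squad2-es",  # illustrative path
    num_train_epochs=5,    # --num_train_epochs 5
    learning_rate=5e-5,    # --learning_rate 5e-5
)

# --max_seq_length / --doc_stride: long contexts are split into overlapping
# 512-token windows with a 128-token stride at tokenization time.
encoded = tokenizer(
    "¿Qué es la ACE2?",                       # question (illustrative)
    "La ACE2 es una enzima de membrana ...",  # context (illustrative)
    max_length=512,
    stride=128,
    truncation="only_second",
    return_overflowing_tokens=True,
)
```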

## Performance

Evaluated on the [hackathon-pln-es/biomed_squad_es_v2](https://huggingface.co/datasets/hackathon-pln-es/biomed_squad_es_v2) dev set.

The model was trained for 5 epochs, and the epoch with the best f1 score was selected.
```
eval_exact = 29.7274
eval_f1 = 36.4098

eval_HasAns_exact = 32.0285
eval_HasAns_f1 = 45.5479
eval_HasAns_total = 562
eval_NoAns_exact = 27.4783
eval_NoAns_f1 = 27.4783
eval_NoAns_total = 575
```
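
The numbers above follow the SQuAD v2 metric layout; as a hedged sketch, an equivalent report can be produced with the `evaluate` library (the prediction/reference pair below is a toy placeholder):

```
# Sketch only: computing SQuAD v2-style exact/f1/HasAns/NoAns metrics with
# the evaluate library. The prediction/reference pair is a toy placeholder.
import evaluate

squad_v2 = evaluate.load("squad_v2")

predictions = [
    {"id": "0", "prediction_text": "el receptor ACE2", "no_answer_probability": 0.0},
]
references = [
    {"id": "0", "answers": {"text": ["el receptor ACE2"], "answer_start": [21]}},
]

print(squad_v2.compute(predictions=predictions, references=references))
# keys include: exact, f1, HasAns_exact, HasAns_f1, NoAns_exact, NoAns_f1, total
```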

## Team

Santiago Maximo: [smaximo](https://huggingface.co/smaximo)