---
language: es
tags:
- biomedical
- clinical
- spanish
- bert-base-spanish-wwm-cased
license: cc-by-4.0
datasets:
- "ehealth_kd"
metrics:
- f1
model-index:
- name: IIC/bert-base-spanish-wwm-cased-ehealth_kd
results:
- task:
type: token-classification
dataset:
name: eHealth-KD
type: ehealth_kd
split: test
metrics:
- name: f1
type: f1
value: 0.843
pipeline_tag: token-classification
---
# bert-base-spanish-wwm-cased-ehealth_kd
This model is a fine-tuned version of bert-base-spanish-wwm-cased on the eHealth-KD dataset, used as part of the benchmark in the paper TODO. It achieves an F1 of 0.843 on the test split.
Please refer to the original publication for more information: TODO LINK
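## How to use

A minimal usage sketch with the 🤗 Transformers token-classification pipeline (the example sentence is purely illustrative):

```python
from transformers import pipeline

# Load the fine-tuned model from the Hugging Face Hub
ner = pipeline(
    "token-classification",
    model="IIC/bert-base-spanish-wwm-cased-ehealth_kd",
    aggregation_strategy="simple",  # merge sub-word tokens into entity spans
)

# Illustrative clinical sentence in Spanish
print(ner("El paciente presenta fiebre y dolor abdominal."))
```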
## Parameters used
| Parameter | Value |
|-------------------------|:-----:|
| batch size | 64 |
| learning rate | 4e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
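
A hedged sketch of how the values above could be expressed with the standard 🤗 Transformers training utilities. This is not the original training script: the base checkpoint name, dataset preparation, and `num_labels` are assumptions, and only the hyperparameters listed in the table are taken from this card.

```python
from transformers import (
    AutoConfig,
    AutoModelForTokenClassification,
    TrainingArguments,
    EarlyStoppingCallback,
)

# Assumed base checkpoint; the card only names "bert-base-spanish-wwm-cased"
model_name = "dccuchile/bert-base-spanish-wwm-cased"

# Classifier dropout from the table; num_labels must come from the eHealth-KD label set
config = AutoConfig.from_pretrained(model_name, classifier_dropout=0.2)
model = AutoModelForTokenClassification.from_pretrained(model_name, config=config)

# Hyperparameters from the table; AdamW is the Trainer's default optimizer
training_args = TrainingArguments(
    output_dir="bert-base-spanish-wwm-cased-ehealth_kd",
    per_device_train_batch_size=64,
    learning_rate=4e-5,
    warmup_ratio=0.0,
    warmup_steps=0,
    weight_decay=0.0,
    num_train_epochs=10,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,   # required for early stopping
    metric_for_best_model="f1",
)

# Early stopping with patience 3, passed to Trainer(..., callbacks=[early_stopping])
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
```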
## BibTeX entry and citation info
```bibtex
TODO
```