---
language: es
tags:
- biomedical
- clinical
- spanish
- xlm-roberta-large
license: mit
datasets:
- "lcampillos/ctebmsp"
metrics:
- f1
model-index:
- name: IIC/xlm-roberta-large-ctebmsp
  results:
  - task:
      type: token-classification
    dataset:
      name: CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish)
      type: lcampillos/ctebmsp
      split: test
    metrics:
    - name: f1
      type: f1
      value: 0.906
pipeline_tag: token-classification
---

# xlm-roberta-large-ctebmsp

This model is a fine-tuned version of xlm-roberta-large for the CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) dataset, used in a benchmark in the paper TODO. The model achieves an F1 score of 0.906 on the test split.

Please refer to the original publication for more information: TODO LINK

## Parameters used

| parameter               | Value |
|-------------------------|:-----:|
| batch size              |  64   |
| learning rate           | 2e-05 |
| classifier dropout      |  0.1  |
| warmup ratio            |   0   |
| warmup steps            |   0   |
| weight decay            |   0   |
| optimizer               | AdamW |
| epochs                  |  10   |
| early stopping patience |   3   |

## BibTeX entry and citation info

```bibtex
TODO
```
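## How to use

A minimal usage sketch with the 🤗 `transformers` library, assuming the model id `IIC/xlm-roberta-large-ctebmsp` from the metadata above is available on the Hub; the example sentence is illustrative, not from the dataset.

```python
from transformers import pipeline

# Token-classification pipeline for the fine-tuned model.
# aggregation_strategy="simple" merges subword tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="IIC/xlm-roberta-large-ctebmsp",
    aggregation_strategy="simple",
)

# Illustrative Spanish clinical-trial sentence.
text = "Ensayo clínico de metformina en pacientes con diabetes tipo 2."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```

Each result dict contains the aggregated entity label, the matched text span, and the model's confidence score.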