---
language: es
tags:
  - biomedical
  - clinical
  - spanish
  - roberta-large-bne
license: apache-2.0
datasets:
  - lcampillos/ctebmsp
metrics:
  - f1
model-index:
  - name: IIC/roberta-large-bne-ctebmsp
    results:
      - task:
          type: token-classification
        dataset:
          name: CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish)
          type: lcampillos/ctebmsp
          split: test
        metrics:
          - name: f1
            type: f1
            value: 0.877
pipeline_tag: token-classification
---

# roberta-large-bne-ctebmsp

This model is a fine-tuned version of roberta-large-bne for the CT-EBM-SP (Clinical Trials for Evidence-based Medicine in Spanish) dataset, used in a benchmark in the paper TODO. The model achieves an F1 score of 0.877 on the test split.

Please refer to the original publication for more information: TODO LINK
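As a usage sketch (not part of the original card), the model can be loaded for Spanish clinical named-entity recognition through the Hugging Face `pipeline` API. This assumes the `transformers` library and a backend such as PyTorch are installed; the model id is taken from the metadata above.

```python
# Hedged usage sketch: loading this model for token classification (NER).
MODEL_ID = "IIC/roberta-large-bne-ctebmsp"

def build_ner_pipeline(model_id: str = MODEL_ID):
    # Imported lazily so this module can be read without transformers installed;
    # the first call downloads the model weights from the Hub.
    from transformers import pipeline
    # aggregation_strategy="simple" merges sub-word pieces into whole entity spans
    return pipeline("token-classification", model=model_id,
                    aggregation_strategy="simple")

# Example (requires network access on first run):
#   ner = build_ner_pipeline()
#   ner("Ensayo clínico aleatorizado en pacientes adultos con diabetes tipo 2.")
```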

## Parameters used

| Parameter | Value |
|---|---|
| batch size | 64 |
| learning rate | 2e-05 |
| classifier dropout | 0.1 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
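For reproduction, the hyperparameters in the table can be collected into a plain dict. The key names below follow common Hugging Face `TrainingArguments` conventions and are an assumption; the original training script is not published with this card.

```python
# Hedged sketch: the fine-tuning hyperparameters from the table above.
# Key names mimic Hugging Face TrainingArguments conventions (an assumption;
# "classifier_dropout" and "early_stopping_patience" are configured elsewhere
# in practice, via the model config and an EarlyStoppingCallback respectively).
HYPERPARAMS = {
    "per_device_train_batch_size": 64,
    "learning_rate": 2e-05,
    "classifier_dropout": 0.1,
    "warmup_ratio": 0.0,
    "warmup_steps": 0,
    "weight_decay": 0.0,
    "optimizer": "AdamW",
    "num_train_epochs": 10,
    "early_stopping_patience": 3,
}
```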

## BibTeX entry and citation info

TODO