---
language: es
tags:
  - biomedical
  - clinical
  - spanish
  - mdeberta-v3-base
license: mit
datasets:
  - chizhikchi/CARES
metrics:
  - f1
model-index:
  - name: IIC/mdeberta-v3-base-caresC
    results:
      - task:
          type: multi-label-classification
        dataset:
          name: Cares Chapters
          type: chizhikchi/CARES
          split: test
        metrics:
          - name: f1
            type: f1
            value: 0.756
pipeline_tag: text-classification
---

# mdeberta-v3-base-caresC

This model is a fine-tuned version of mdeberta-v3-base for the Cares Chapters dataset, used in the benchmark described in the paper TODO. The model achieves an F1 score of 0.756 on the test split.

For more information, please refer to the original publication: TODO LINK
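Since the card declares the `text-classification` pipeline tag, the model can presumably be loaded with the `transformers` pipeline API. A minimal sketch, assuming the model is published on the Hub under the `IIC/mdeberta-v3-base-caresC` identifier from the metadata above (the example input is hypothetical, not taken from the CARES dataset):

```python
MODEL_ID = "IIC/mdeberta-v3-base-caresC"  # identifier from the model-index above


def classify(texts):
    """Score a list of Spanish clinical texts against all chapter labels."""
    from transformers import pipeline  # downloads the model on first use

    # top_k=None returns the score for every label, which is what you
    # want when thresholding a multi-label classifier yourself
    clf = pipeline("text-classification", model=MODEL_ID, top_k=None)
    return clf(texts)
```

Calling `classify(["Paciente con dolor torácico y disnea de inicio súbito."])` would return, per input text, a list of `{"label": ..., "score": ...}` dictionaries to threshold as needed.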

## Parameters used

| Parameter | Value |
|---|---|
| batch size | 16 |
| learning rate | 3e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |

## BibTeX entry and citation info

TODO