
xlm-roberta-large-pharmaconer

This model is a fine-tuned version of xlm-roberta-large on the PharmaCoNER dataset, used as a benchmark in the paper TODO. The model achieves an F1 score of 0.924.

Please refer to the original publication for more information: TODO LINK

Parameters used

| Parameter                | Value  |
|--------------------------|--------|
| batch size               | 64     |
| learning rate            | 3e-05  |
| classifier dropout       | 0      |
| warmup ratio             | 0      |
| warmup steps             | 0      |
| weight decay             | 0      |
| optimizer                | AdamW  |
| epochs                   | 10     |
| early stopping patience  | 3      |
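As a rough illustration, the hyperparameters above could be expressed with the Hugging Face `TrainingArguments` API roughly as follows. This is a hedged sketch, not the authors' actual training script: the `output_dir` is hypothetical, the optimizer string assumes the PyTorch AdamW variant, and classifier dropout is a model-config setting rather than a training argument.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameter table above.
# Note: classifier dropout (0) is set on the model config
# (e.g. config.classifier_dropout), not here. Early stopping
# patience (3) would be handled by EarlyStoppingCallback.
training_args = TrainingArguments(
    output_dir="xlm-roberta-large-pharmaconer",  # hypothetical path
    per_device_train_batch_size=64,
    learning_rate=3e-05,
    warmup_ratio=0.0,
    warmup_steps=0,
    weight_decay=0.0,
    optim="adamw_torch",  # assumption: PyTorch AdamW
    num_train_epochs=10,
)
```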

BibTeX entry and citation info

TODO