
roberta-large-bne-pharmaconer

This model is a fine-tuned version of roberta-large-bne on the PharmaCoNER dataset, used as a benchmark in the paper TODO. The model achieves an F1 score of 0.914.

For more information, please refer to the original publication: TODO LINK
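
As a quick illustration of how a token-classification checkpoint like this one can be used, here is a minimal sketch with the transformers pipeline. The model id matches this repository; the example sentence and the printed fields are illustrative, not taken from the original card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for token classification (NER).
# aggregation_strategy="simple" merges sub-word tokens into entity spans.
ner = pipeline(
    "token-classification",
    model="IIC/roberta-large-bne-pharmaconer",
    aggregation_strategy="simple",
)

# Illustrative Spanish clinical sentence (PharmaCoNER targets
# pharmacological substances, compounds, and proteins).
text = "Se administró paracetamol 500 mg y se midió la proteína C reactiva."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```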

Parameters used

| Parameter                | Value  |
|--------------------------|--------|
| batch size               | 32     |
| learning rate            | 1e-05  |
| classifier dropout       | 0      |
| warmup ratio             | 0      |
| warmup steps             | 0      |
| weight decay             | 0      |
| optimizer                | AdamW  |
| epochs                   | 10     |
| early stopping patience  | 3      |
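
The sketch below shows one way these settings might map onto transformers TrainingArguments. It is a hypothetical reconstruction, not the authors' training script: output_dir, the evaluation/saving strategy, and the metric for model selection are assumptions not listed in the table above.

```python
from transformers import TrainingArguments, EarlyStoppingCallback

# Hypothetical mapping of the reported hyperparameters; names not in the
# table (output_dir, eval/save strategy, selection metric) are placeholders.
training_args = TrainingArguments(
    output_dir="roberta-large-bne-pharmaconer-finetuned",  # placeholder
    per_device_train_batch_size=32,
    learning_rate=1e-5,
    warmup_ratio=0.0,
    warmup_steps=0,
    weight_decay=0.0,
    num_train_epochs=10,
    optim="adamw_torch",           # AdamW optimizer
    eval_strategy="epoch",         # assumption: evaluate once per epoch
    save_strategy="epoch",
    load_best_model_at_end=True,   # required for early stopping
)

# Early stopping with patience 3, passed to the Trainer via callbacks.
early_stopping = EarlyStoppingCallback(early_stopping_patience=3)
```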

BibTeX entry and citation info

TODO