

This model (IIC/roberta-large-bne-caresC) is a fine-tuned version of roberta-large-bne on the CARES chapters dataset, used as a benchmark in the paper TODO. The model achieves an F1 score of 0.84.

Please refer to the original publication for more information: TODO LINK
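As a minimal usage sketch (not part of the original card), the checkpoint can be loaded as a text-classification pipeline with the `transformers` library; the `load_classifier` helper and the example input are hypothetical:

```python
MODEL_ID = "IIC/roberta-large-bne-caresC"  # checkpoint named on this card

def load_classifier():
    # Imported lazily; downloads the fine-tuned checkpoint from the
    # Hugging Face Hub on first use.
    from transformers import pipeline
    return pipeline("text-classification", model=MODEL_ID)

if __name__ == "__main__":
    clf = load_classifier()
    # Hypothetical Spanish clinical text; the model was trained on
    # Spanish radiology reports (CARES).
    print(clf("Texto clínico de ejemplo."))
```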

Parameters used

| Parameter | Value |
|---|---|
| batch size | 16 |
| learning rate | 2e-05 |
| classifier dropout | 0.2 |
| warmup ratio | 0 |
| warmup steps | 0 |
| weight decay | 0 |
| optimizer | AdamW |
| epochs | 10 |
| early stopping patience | 3 |
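The hyperparameters above can be expressed as `transformers.TrainingArguments` keyword arguments; this is a hedged sketch of one plausible mapping, not the authors' exact training script. Note that classifier dropout and early stopping are set elsewhere (on the model config and via a trainer callback, respectively):

```python
# Hypothetical mapping of the table above onto TrainingArguments kwargs.
training_kwargs = dict(
    per_device_train_batch_size=16,
    learning_rate=2e-05,
    warmup_ratio=0.0,
    warmup_steps=0,
    weight_decay=0.0,
    num_train_epochs=10,
)

# classifier dropout (0.2) would go on the model config, e.g.:
#   AutoConfig.from_pretrained(..., classifier_dropout=0.2)
# early stopping patience (3) would use a trainer callback, e.g.:
#   EarlyStoppingCallback(early_stopping_patience=3)
# AdamW is the default optimizer in transformers' Trainer.
```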

BibTeX entry and citation info
