---
license: mit
tags:
- generated_from_trainer
datasets:
- elsevier-oa-cc-by
model-index:
- name: roberta-base-finetuned-academic
  results: []
---

# roberta-base-finetuned-academic

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the elsevier-oa-cc-by dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1158

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.1903        | 0.25  | 1025  | 2.0998          |
| 2.1752        | 0.5   | 2050  | 2.1186          |
| 2.1864        | 0.75  | 3075  | 2.1073          |
| 2.1874        | 1.0   | 4100  | 2.1177          |
| 2.1669        | 1.25  | 5125  | 2.1091          |
| 2.1859        | 1.5   | 6150  | 2.1212          |
| 2.1783        | 1.75  | 7175  | 2.1096          |
| 2.1734        | 2.0   | 8200  | 2.0998          |
| 2.1712        | 2.25  | 9225  | 2.0972          |
| 2.1812        | 2.5   | 10250 | 2.1051          |
| 2.1811        | 2.75  | 11275 | 2.1150          |
| 2.1826        | 3.0   | 12300 | 2.1097          |
| 2.172         | 3.25  | 13325 | 2.1115          |
| 2.1745        | 3.5   | 14350 | 2.1098          |
| 2.1758        | 3.75  | 15375 | 2.1101          |
| 2.1834        | 4.0   | 16400 | 2.1232          |
| 2.1836        | 4.25  | 17425 | 2.1052          |
| 2.1791        | 4.5   | 18450 | 2.1186          |
| 2.172         | 4.75  | 19475 | 2.1039          |
| 2.1797        | 5.0   | 20500 | 2.1015          |

### Framework versions

- Transformers 4.19.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
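
### How to use

A minimal usage sketch, assuming the checkpoint was trained with RoBERTa's standard masked-language-modeling objective (consistent with the reported validation loss) and is hosted under the repository name above; the repository id and example sentence are illustrative, so adjust them to the actual namespace and your own text:

```python
# Hedged example: load the fine-tuned checkpoint with the fill-mask pipeline.
# Assumes a masked-language-modeling head and that the model id below resolves
# to this repository on the Hugging Face Hub (prefix with the owning namespace
# if needed, e.g. "<user>/roberta-base-finetuned-academic").
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="roberta-base-finetuned-academic")

# RoBERTa uses "<mask>" as its mask token.
predictions = fill_mask("The results of the experiment were statistically <mask>.")
for p in predictions:
    print(f"{p['token_str']!r}: {p['score']:.3f}")
```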