---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
- recall
- accuracy
base_model: roberta-base
model-index:
- name: RoBERTa-THESIS
  results: []
---

# RoBERTa-THESIS

This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1698
- F1: 0.7701
- Recall: 0.7701
- Accuracy: 0.7701

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step  | Validation Loss | F1     | Recall | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:--------:|
| 0.9918        | 1.0   | 1446  | 0.8174          | 0.7433 | 0.7433 | 0.7433   |
| 0.7223        | 2.0   | 2892  | 0.7799          | 0.7618 | 0.7618 | 0.7618   |
| 0.5389        | 3.0   | 4338  | 0.7730          | 0.7716 | 0.7716 | 0.7716   |
| 0.4073        | 4.0   | 5784  | 0.8121          | 0.7737 | 0.7737 | 0.7737   |
| 0.2985        | 5.0   | 7230  | 0.8841          | 0.7697 | 0.7697 | 0.7697   |
| 0.2233        | 6.0   | 8676  | 0.9573          | 0.7717 | 0.7717 | 0.7717   |
| 0.1679        | 7.0   | 10122 | 1.0132          | 0.7721 | 0.7721 | 0.7721   |
| 0.1233        | 8.0   | 11568 | 1.0948          | 0.7691 | 0.7691 | 0.7691   |
| 0.096         | 9.0   | 13014 | 1.1502          | 0.7689 | 0.7689 | 0.7689   |
| 0.0799        | 10.0  | 14460 | 1.1698          | 0.7701 | 0.7701 | 0.7701   |

### Framework versions

- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3
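
## How to use

The card does not state the downstream task or label set, but the reported F1/recall/accuracy metrics suggest a classification head. The following is a minimal usage sketch, assuming a sequence-classification model; `"RoBERTa-THESIS"` stands in for the actual model repo id or local checkpoint path.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Hypothetical model path; replace with the actual repo id or checkpoint directory.
model_name = "RoBERTa-THESIS"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

inputs = tokenizer("Example input text.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its label name.
predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```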
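
## Reproducing the training setup

As a convenience, the hyperparameters listed above can be expressed as a `TrainingArguments` sketch. This is not the author's original training script: the output directory is a placeholder, and the per-epoch evaluation strategy is an assumption inferred from the per-epoch rows in the results table.

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="RoBERTa-THESIS",      # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    seed=42,
    adam_beta1=0.9,                   # Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # and epsilon=1e-08
    evaluation_strategy="epoch",      # assumption: evaluate once per epoch
)
```

These arguments would then be passed to a `Trainer` together with the model and the (unknown) train/eval datasets.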