---
base_model: DeepPavlov/xlm-roberta-large-en-ru-mnli
tags:
- generated_from_trainer
model-index:
- name: output
  results: []
---

# output

This model is a fine-tuned version of [DeepPavlov/xlm-roberta-large-en-ru-mnli](https://huggingface.co/DeepPavlov/xlm-roberta-large-en-ru-mnli) on an unspecified dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2408
- eval_accuracy: 0.9691
- eval_f1-score: 0.9693
- eval_mcc: 0.9566
- eval_runtime: 78.6359
- eval_samples_per_second: 38.278
- eval_steps_per_second: 9.576
- epoch: 6.0
- step: 10536

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
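
The hyperparameters listed above map onto `transformers.TrainingArguments` roughly as in the sketch below. This is a reconstruction for reproducibility, not the original training script: the output directory is a placeholder, and the training/evaluation datasets are not documented in this card.

```python
# A minimal sketch, assuming a standard Trainer setup; paths and
# datasets are placeholders, since neither is documented here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output",            # placeholder output directory
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    adam_beta1=0.9,                 # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```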
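
Because the base checkpoint is an English–Russian MNLI model, the fine-tuned checkpoint is presumably a sequence-classification model. The sketch below shows how it could be loaded and queried; the repo id, the premise/hypothesis pair, and the label mapping are illustrative assumptions, not confirmed by this card.

```python
# A minimal usage sketch, assuming an NLI-style sequence-classification
# head; the model id and example inputs below are hypothetical.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "output"  # replace with the actual repo id once published
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# NLI-style input: a premise/hypothesis pair (hypothetical example).
inputs = tokenizer(
    "The weather is nice today.",
    "It is sunny outside.",
    return_tensors="pt",
)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```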