---
language:
- mn
base_model: bayartsogt/mongolian-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-base-ner-test-2
  results: []
---

# roberta-base-ner-test-2

This model is a fine-tuned version of [bayartsogt/mongolian-roberta-base](https://huggingface.co/bayartsogt/mongolian-roberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1207
- Precision: 0.9273
- Recall: 0.9357
- F1: 0.9315
- Accuracy: 0.9802

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10

### Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0259        | 1.0   | 60   | 0.0856          | 0.9222    | 0.9308 | 0.9265 | 0.9792   |
| 0.0145        | 2.0   | 120  | 0.0951          | 0.9200    | 0.9296 | 0.9248 | 0.9788   |
| 0.0104        | 3.0   | 180  | 0.1018          | 0.9143    | 0.9303 | 0.9222 | 0.9784   |
| 0.0073        | 4.0   | 240  | 0.1062          | 0.9224    | 0.9319 | 0.9272 | 0.9791   |
| 0.0068        | 5.0   | 300  | 0.1133          | 0.9246    | 0.9340 | 0.9293 | 0.9794   |
| 0.0108        | 6.0   | 360  | 0.1055          | 0.9207    | 0.9306 | 0.9256 | 0.9788   |
| 0.0078        | 7.0   | 420  | 0.1170          | 0.9207    | 0.9334 | 0.9270 | 0.9786   |
| 0.0061        | 8.0   | 480  | 0.1114          | 0.9226    | 0.9348 | 0.9286 | 0.9803   |
| 0.0050        | 9.0   | 540  | 0.1165          | 0.9255    | 0.9341 | 0.9298 | 0.9798   |
| 0.0038        | 10.0  | 600  | 0.1207          | 0.9273    | 0.9357 | 0.9315 | 0.9802   |

### Framework versions

- Transformers 4.39.3
- PyTorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
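
The hyperparameters listed under "Training procedure" map directly onto `transformers.TrainingArguments`. Below is a minimal sketch of an equivalent configuration; the output directory name and the per-epoch evaluation schedule are assumptions, and dataset loading, tokenization, and the `Trainer` setup are omitted.

```python
from transformers import TrainingArguments

# Sketch of a TrainingArguments configuration matching the hyperparameters above.
# The output_dir is a placeholder, and evaluation_strategy="epoch" is an assumption
# based on the per-epoch rows in the training-results table.
training_args = TrainingArguments(
    output_dir="roberta-base-ner-test-2",  # placeholder output directory
    learning_rate=2e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=10,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",
)
```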
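
For inference, one straightforward option is the `token-classification` pipeline from `transformers`, sketched below. The model identifier is a placeholder (this card does not state where the checkpoint is published), and the example sentence and the `aggregation_strategy` choice are illustrative only.

```python
from transformers import pipeline

# Placeholder repository id; replace with the actual location of this checkpoint.
ner = pipeline(
    "token-classification",
    model="your-username/roberta-base-ner-test-2",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)

# Illustrative Mongolian sentence; the entity labels depend on the training data.
print(ner("Батболд Улаанбаатар хотод ажилладаг."))
```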