---
library_name: transformers
license: mit
base_model: nielsr/lilt-xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: test
  results: []
---

# test

This model is a fine-tuned version of [nielsr/lilt-xlm-roberta-base](https://huggingface.co/nielsr/lilt-xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.2434
- Precision: 0.9144
- Recall: 0.9105
- F1: 0.9124
- Accuracy: 0.9725
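The card does not include a usage example. Below is a minimal inference sketch for token classification with LiLT, assuming the checkpoint is published as `bashyaldhiraj2067/test`; the repo id, example words, and bounding boxes are illustrative assumptions, not details from this card:

```python
import torch
from transformers import AutoTokenizer, LiltForTokenClassification

# Assumed repo id (account + model name from this card); adjust as needed.
model_id = "bashyaldhiraj2067/test"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = LiltForTokenClassification.from_pretrained(model_id)

# Illustrative OCR output: words plus boxes normalized to a 0-1000 grid.
words = ["Invoice", "No.", "12345"]
boxes = [[48, 70, 150, 90], [160, 70, 210, 90], [220, 70, 290, 90]]

encoding = tokenizer(words, is_split_into_words=True, return_tensors="pt")
# LiLT expects one box per token, so repeat each word's box for its subword
# tokens and use a zero box for special tokens.
bbox = [boxes[i] if i is not None else [0, 0, 0, 0] for i in encoding.word_ids(0)]
encoding["bbox"] = torch.tensor([bbox])

with torch.no_grad():
    logits = model(**encoding).logits
predicted = [model.config.id2label[p] for p in logits.argmax(-1).squeeze(0).tolist()]
print(list(zip(tokenizer.convert_ids_to_tokens(encoding["input_ids"][0]), predicted)))
```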

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 30
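These settings can be reproduced with `transformers.TrainingArguments`; a minimal sketch, where `output_dir` and the evaluation cadence are assumptions (the results table below reports metrics every 100 steps) and `train_batch_size` is mapped to the per-device argument:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="test",            # assumed; matches the model name
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW, torch implementation
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    eval_strategy="steps",        # assumed from the 100-step eval cadence
    eval_steps=100,
)
```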

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 0.7937  | 100  | 0.1878          | 0.8406    | 0.8761 | 0.8580 | 0.9542   |
| No log        | 1.5873  | 200  | 0.1337          | 0.8943    | 0.8864 | 0.8903 | 0.9650   |
| No log        | 2.3810  | 300  | 0.1259          | 0.9020    | 0.9214 | 0.9116 | 0.9716   |
| No log        | 3.1746  | 400  | 0.1317          | 0.9100    | 0.9181 | 0.9140 | 0.9730   |
| 0.2107        | 3.9683  | 500  | 0.1159          | 0.9144    | 0.9065 | 0.9104 | 0.9710   |
| 0.2107        | 4.7619  | 600  | 0.1169          | 0.9147    | 0.9072 | 0.9109 | 0.9715   |
| 0.2107        | 5.5556  | 700  | 0.1240          | 0.9025    | 0.9144 | 0.9084 | 0.9712   |
| 0.2107        | 6.3492  | 800  | 0.1351          | 0.9160    | 0.9118 | 0.9139 | 0.9727   |
| 0.2107        | 7.1429  | 900  | 0.1469          | 0.9207    | 0.9055 | 0.9131 | 0.9722   |
| 0.0518        | 7.9365  | 1000 | 0.1333          | 0.9053    | 0.9158 | 0.9105 | 0.9717   |
| 0.0518        | 8.7302  | 1100 | 0.1367          | 0.9119    | 0.9167 | 0.9143 | 0.9724   |
| 0.0518        | 9.5238  | 1200 | 0.1412          | 0.9057    | 0.9134 | 0.9095 | 0.9712   |
| 0.0518        | 10.3175 | 1300 | 0.1666          | 0.9203    | 0.9158 | 0.9180 | 0.9740   |
| 0.0518        | 11.1111 | 1400 | 0.1610          | 0.9050    | 0.9062 | 0.9056 | 0.9707   |
| 0.0316        | 11.9048 | 1500 | 0.1677          | 0.9175    | 0.9111 | 0.9143 | 0.9720   |
| 0.0316        | 12.6984 | 1600 | 0.1838          | 0.9097    | 0.9052 | 0.9074 | 0.9715   |
| 0.0316        | 13.4921 | 1700 | 0.1622          | 0.9182    | 0.9082 | 0.9131 | 0.9725   |
| 0.0316        | 14.2857 | 1800 | 0.1855          | 0.9161    | 0.9092 | 0.9126 | 0.9725   |
| 0.0316        | 15.0794 | 1900 | 0.1739          | 0.9078    | 0.9171 | 0.9124 | 0.9725   |
| 0.0174        | 15.8730 | 2000 | 0.1902          | 0.9167    | 0.9167 | 0.9167 | 0.9734   |
| 0.0174        | 16.6667 | 2100 | 0.1729          | 0.9207    | 0.9171 | 0.9189 | 0.9739   |
| 0.0174        | 17.4603 | 2200 | 0.2083          | 0.9147    | 0.9171 | 0.9159 | 0.9734   |
| 0.0174        | 18.2540 | 2300 | 0.2233          | 0.9108    | 0.9177 | 0.9143 | 0.9724   |
| 0.0174        | 19.0476 | 2400 | 0.2165          | 0.9201    | 0.9134 | 0.9168 | 0.9730   |
| 0.0085        | 19.8413 | 2500 | 0.2138          | 0.9117    | 0.9111 | 0.9114 | 0.9721   |
| 0.0085        | 20.6349 | 2600 | 0.2109          | 0.9150    | 0.9108 | 0.9129 | 0.9725   |
| 0.0085        | 21.4286 | 2700 | 0.2118          | 0.9216    | 0.9167 | 0.9192 | 0.9742   |
| 0.0085        | 22.2222 | 2800 | 0.2287          | 0.9184    | 0.9184 | 0.9184 | 0.9742   |
| 0.0085        | 23.0159 | 2900 | 0.2350          | 0.9118    | 0.9085 | 0.9101 | 0.9719   |
| 0.0043        | 23.8095 | 3000 | 0.2406          | 0.9109    | 0.9158 | 0.9133 | 0.9727   |
| 0.0043        | 24.6032 | 3100 | 0.2480          | 0.9105    | 0.9072 | 0.9088 | 0.9715   |
| 0.0043        | 25.3968 | 3200 | 0.2430          | 0.9112    | 0.9055 | 0.9084 | 0.9714   |
| 0.0043        | 26.1905 | 3300 | 0.2396          | 0.9092    | 0.9068 | 0.9080 | 0.9712   |
| 0.0043        | 26.9841 | 3400 | 0.2386          | 0.9152    | 0.9164 | 0.9158 | 0.9732   |
| 0.0026        | 27.7778 | 3500 | 0.2417          | 0.9123    | 0.9111 | 0.9117 | 0.9720   |
| 0.0026        | 28.5714 | 3600 | 0.2433          | 0.9136    | 0.9085 | 0.9110 | 0.9721   |
| 0.0026        | 29.3651 | 3700 | 0.2434          | 0.9144    | 0.9105 | 0.9124 | 0.9725   |
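The card does not state how precision, recall, F1, and accuracy were computed. A common setup for token classification is seqeval via the `evaluate` library; a sketch under that assumption, where `label_list` is a hypothetical label set standing in for the model's actual labels:

```python
import numpy as np
import evaluate

seqeval = evaluate.load("seqeval")
label_list = ["O", "B-HEADER", "I-HEADER"]  # hypothetical; use model.config.id2label

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=2)
    # Drop positions labelled -100 (special tokens / padding) before scoring.
    true_predictions = [
        [label_list[p] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for p, l in zip(pred, lab) if l != -100]
        for pred, lab in zip(predictions, labels)
    ]
    results = seqeval.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }
```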

### Framework versions

- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1