---
license: mit
base_model: xlm-roberta-base
tags:
  - generated_from_trainer
datasets:
  - tweet_sentiment_multilingual
metrics:
  - accuracy
  - f1
model-index:
  - name: >-
      scenario-NON-KD-PR-COPY-CDF-ALL-D2_data-cardiffnlp_tweet_sentiment_multilingual_
    results: []
---

# scenario-NON-KD-PR-COPY-CDF-ALL-D2_data-cardiffnlp_tweet_sentiment_multilingual_

This model is a fine-tuned version of xlm-roberta-base on the tweet_sentiment_multilingual dataset. It achieves the following results on the evaluation set:

- Loss: 3.2943
- Accuracy: 0.5494
- F1: 0.5481
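
For reference, a minimal inference sketch with 🤗 Transformers. The hub repo id below is an assumption pieced together from this card's (possibly truncated) name, and the negative/neutral/positive label order is assumed from the dataset's convention:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumed hub id (owner/name taken from this card); adjust if the repo differs.
model_id = "haryoaw/scenario-NON-KD-PR-COPY-CDF-ALL-D2_data-cardiffnlp_tweet_sentiment_multilingual_"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Score a single tweet; id2label is whatever the checkpoint's config defines.
inputs = tokenizer("I love this!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
pred = logits.argmax(dim=-1).item()
print(model.config.id2label.get(pred, pred))
```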

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
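
The exact split used for training is not documented. As a starting point, a minimal sketch of loading the dataset this card names, assuming the public cardiffnlp/tweet_sentiment_multilingual dataset and its "all" configuration (suggested by the "ALL" tag in the card's name):

```python
from datasets import load_dataset

# Assumed source dataset and config; the card itself does not specify them.
dataset = load_dataset("cardiffnlp/tweet_sentiment_multilingual", "all")
print(dataset)               # expected splits: train / validation / test
print(dataset["train"][0])   # expected fields: text, label (0/1/2)
```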

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 222
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
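
These values map directly onto 🤗 Transformers `TrainingArguments`; a minimal sketch is below. The `output_dir` is a placeholder, and the batch sizes are mapped to the per-device arguments, which may differ from the effective batch size if multiple devices were used:

```python
from transformers import TrainingArguments

# Sketch of the listed hyperparameters; anything not in the list above is assumed.
training_args = TrainingArguments(
    output_dir="./output",            # placeholder
    learning_rate=5e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=222,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
    lr_scheduler_type="linear",
    num_train_epochs=50,
)
```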

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 1.0706        | 1.09  | 500   | 1.0260          | 0.4946   | 0.4746 |
| 0.97          | 2.17  | 1000  | 0.9626          | 0.5270   | 0.5111 |
| 0.9007        | 3.26  | 1500  | 0.9517          | 0.5490   | 0.5506 |
| 0.8292        | 4.35  | 2000  | 0.9865          | 0.5475   | 0.5339 |
| 0.7773        | 5.43  | 2500  | 1.0123          | 0.5475   | 0.5388 |
| 0.7155        | 6.52  | 3000  | 1.0156          | 0.5536   | 0.5509 |
| 0.6707        | 7.61  | 3500  | 1.0641          | 0.5637   | 0.5629 |
| 0.6081        | 8.7   | 4000  | 1.1244          | 0.5617   | 0.5614 |
| 0.5489        | 9.78  | 4500  | 1.1851          | 0.5475   | 0.5404 |
| 0.503         | 10.87 | 5000  | 1.3213          | 0.5586   | 0.5591 |
| 0.4602        | 11.96 | 5500  | 1.4164          | 0.5563   | 0.5520 |
| 0.4134        | 13.04 | 6000  | 1.4394          | 0.5482   | 0.5414 |
| 0.3697        | 14.13 | 6500  | 1.5564          | 0.5529   | 0.5501 |
| 0.3256        | 15.22 | 7000  | 1.5412          | 0.5478   | 0.5465 |
| 0.3028        | 16.3  | 7500  | 1.6169          | 0.5374   | 0.5353 |
| 0.2673        | 17.39 | 8000  | 1.7963          | 0.5436   | 0.5432 |
| 0.2501        | 18.48 | 8500  | 1.7266          | 0.5517   | 0.5499 |
| 0.2225        | 19.57 | 9000  | 1.9252          | 0.5532   | 0.5472 |
| 0.2021        | 20.65 | 9500  | 1.9622          | 0.5563   | 0.5542 |
| 0.1875        | 21.74 | 10000 | 1.9973          | 0.5471   | 0.5463 |
| 0.1681        | 22.83 | 10500 | 2.1299          | 0.5386   | 0.5307 |
| 0.1534        | 23.91 | 11000 | 2.0761          | 0.5463   | 0.5416 |
| 0.1427        | 25.0  | 11500 | 2.2814          | 0.5475   | 0.5471 |
| 0.1304        | 26.09 | 12000 | 2.4128          | 0.5544   | 0.5451 |
| 0.1126        | 27.17 | 12500 | 2.4318          | 0.5370   | 0.5327 |
| 0.1169        | 28.26 | 13000 | 2.5110          | 0.5451   | 0.5432 |
| 0.1044        | 29.35 | 13500 | 2.5768          | 0.5467   | 0.5432 |
| 0.1011        | 30.43 | 14000 | 2.6120          | 0.5486   | 0.5428 |
| 0.0982        | 31.52 | 14500 | 2.5795          | 0.5544   | 0.5541 |
| 0.0854        | 32.61 | 15000 | 2.7525          | 0.5482   | 0.5497 |
| 0.0853        | 33.7  | 15500 | 2.7322          | 0.5575   | 0.5557 |
| 0.0851        | 34.78 | 16000 | 2.7708          | 0.5417   | 0.5375 |
| 0.0726        | 35.87 | 16500 | 2.8363          | 0.5451   | 0.5417 |
| 0.0706        | 36.96 | 17000 | 2.8634          | 0.5505   | 0.5494 |
| 0.0653        | 38.04 | 17500 | 2.9653          | 0.5444   | 0.5434 |
| 0.0669        | 39.13 | 18000 | 3.0624          | 0.5432   | 0.5417 |
| 0.0585        | 40.22 | 18500 | 3.1669          | 0.5432   | 0.5392 |
| 0.059         | 41.3  | 19000 | 3.0692          | 0.5548   | 0.5544 |
| 0.048         | 42.39 | 19500 | 3.2014          | 0.5494   | 0.5482 |
| 0.0479        | 43.48 | 20000 | 3.2452          | 0.5428   | 0.5409 |
| 0.052         | 44.57 | 20500 | 3.2338          | 0.5478   | 0.5476 |
| 0.0477        | 45.65 | 21000 | 3.2556          | 0.5444   | 0.5424 |
| 0.0395        | 46.74 | 21500 | 3.2952          | 0.5444   | 0.5420 |
| 0.0477        | 47.83 | 22000 | 3.2726          | 0.5509   | 0.5500 |
| 0.0408        | 48.91 | 22500 | 3.2894          | 0.5471   | 0.5457 |
| 0.0407        | 50.0  | 23000 | 3.2943          | 0.5494   | 0.5481 |

### Framework versions

- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
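
To check how closely your environment matches the one above, a quick sanity-check sketch (the exact CUDA build of PyTorch will likely differ on your machine):

```python
import datasets
import tokenizers
import torch
import transformers

# Versions this model was trained with, per the list above.
expected = {
    transformers: "4.33.3",
    torch: "2.1.1+cu121",
    datasets: "2.14.5",
    tokenizers: "0.13.3",
}
for module, version in expected.items():
    print(f"{module.__name__}: installed {module.__version__}, card says {version}")
```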