haryoaw committed
Commit
29d0497
1 Parent(s): fa67ae2

Initial Commit

Files changed (3)
  1. README.md +50 -50
  2. pytorch_model.bin +1 -1
  3. training_args.bin +1 -1
README.md CHANGED
@@ -20,9 +20,9 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the tweet_sentiment_multilingual dataset.
  It achieves the following results on the evaluation set:
- - Loss: 3.3796
- - Accuracy: 0.5544
- - F1: 0.5505
+ - Loss: 3.2943
+ - Accuracy: 0.5494
+ - F1: 0.5481
 
  ## Model description
 
@@ -44,7 +44,7 @@ The following hyperparameters were used during training:
  - learning_rate: 5e-05
  - train_batch_size: 32
  - eval_batch_size: 32
- - seed: 111
+ - seed: 222
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - num_epochs: 50
@@ -53,52 +53,52 @@ The following hyperparameters were used during training:
 
  | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
  |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
- | 1.0774 | 1.09 | 500 | 1.0097 | 0.4900 | 0.4881 |
- | 0.9778 | 2.17 | 1000 | 1.0179 | 0.5015 | 0.4990 |
- | 0.9089 | 3.26 | 1500 | 1.0370 | 0.5247 | 0.5056 |
- | 0.8405 | 4.35 | 2000 | 1.0453 | 0.5455 | 0.5395 |
- | 0.7799 | 5.43 | 2500 | 1.0308 | 0.5513 | 0.5353 |
- | 0.7295 | 6.52 | 3000 | 1.0548 | 0.5343 | 0.5355 |
- | 0.6563 | 7.61 | 3500 | 1.1631 | 0.5571 | 0.5553 |
- | 0.6117 | 8.7 | 4000 | 1.1443 | 0.5475 | 0.5471 |
- | 0.546 | 9.78 | 4500 | 1.2980 | 0.5586 | 0.5423 |
- | 0.4982 | 10.87 | 5000 | 1.3312 | 0.5467 | 0.5387 |
- | 0.4507 | 11.96 | 5500 | 1.3316 | 0.5494 | 0.5473 |
- | 0.4089 | 13.04 | 6000 | 1.5340 | 0.5451 | 0.5372 |
- | 0.3631 | 14.13 | 6500 | 1.4828 | 0.5486 | 0.5461 |
- | 0.3193 | 15.22 | 7000 | 1.7461 | 0.5432 | 0.5402 |
- | 0.2842 | 16.3 | 7500 | 1.7921 | 0.5513 | 0.5489 |
- | 0.2581 | 17.39 | 8000 | 1.8562 | 0.5448 | 0.5453 |
- | 0.2328 | 18.48 | 8500 | 1.7978 | 0.5444 | 0.5405 |
- | 0.2129 | 19.57 | 9000 | 1.9086 | 0.5421 | 0.5383 |
- | 0.1957 | 20.65 | 9500 | 1.9664 | 0.5405 | 0.5392 |
- | 0.1751 | 21.74 | 10000 | 2.1506 | 0.5451 | 0.5442 |
- | 0.1583 | 22.83 | 10500 | 2.0700 | 0.5513 | 0.5480 |
- | 0.1456 | 23.91 | 11000 | 2.1757 | 0.5463 | 0.5423 |
- | 0.1359 | 25.0 | 11500 | 2.3672 | 0.5328 | 0.5263 |
- | 0.1201 | 26.09 | 12000 | 2.4088 | 0.5478 | 0.5414 |
- | 0.1088 | 27.17 | 12500 | 2.4984 | 0.5455 | 0.5436 |
- | 0.112 | 28.26 | 13000 | 2.5436 | 0.5544 | 0.5479 |
- | 0.1013 | 29.35 | 13500 | 2.5847 | 0.5559 | 0.5527 |
- | 0.1018 | 30.43 | 14000 | 2.6745 | 0.5436 | 0.5392 |
- | 0.0917 | 31.52 | 14500 | 2.6510 | 0.5594 | 0.5570 |
- | 0.0846 | 32.61 | 15000 | 2.7171 | 0.5421 | 0.5389 |
- | 0.0793 | 33.7 | 15500 | 2.9348 | 0.5482 | 0.5429 |
- | 0.0749 | 34.78 | 16000 | 2.9163 | 0.5494 | 0.5480 |
- | 0.0734 | 35.87 | 16500 | 2.9683 | 0.5552 | 0.5530 |
- | 0.0685 | 36.96 | 17000 | 3.0259 | 0.5490 | 0.5473 |
- | 0.069 | 38.04 | 17500 | 3.0156 | 0.5451 | 0.5393 |
- | 0.0586 | 39.13 | 18000 | 3.0566 | 0.5529 | 0.5477 |
- | 0.0533 | 40.22 | 18500 | 3.1920 | 0.5583 | 0.5561 |
- | 0.0607 | 41.3 | 19000 | 3.1986 | 0.5459 | 0.5424 |
- | 0.0564 | 42.39 | 19500 | 3.2304 | 0.5579 | 0.5541 |
- | 0.0543 | 43.48 | 20000 | 3.2112 | 0.5548 | 0.5537 |
- | 0.0519 | 44.57 | 20500 | 3.2778 | 0.5521 | 0.5490 |
- | 0.0485 | 45.65 | 21000 | 3.3030 | 0.5548 | 0.5526 |
- | 0.0421 | 46.74 | 21500 | 3.3174 | 0.5509 | 0.5479 |
- | 0.0394 | 47.83 | 22000 | 3.3787 | 0.5529 | 0.5470 |
- | 0.0428 | 48.91 | 22500 | 3.3964 | 0.5548 | 0.5504 |
- | 0.0442 | 50.0 | 23000 | 3.3796 | 0.5544 | 0.5505 |
+ | 1.0706 | 1.09 | 500 | 1.0260 | 0.4946 | 0.4746 |
+ | 0.97 | 2.17 | 1000 | 0.9626 | 0.5270 | 0.5111 |
+ | 0.9007 | 3.26 | 1500 | 0.9517 | 0.5490 | 0.5506 |
+ | 0.8292 | 4.35 | 2000 | 0.9865 | 0.5475 | 0.5339 |
+ | 0.7773 | 5.43 | 2500 | 1.0123 | 0.5475 | 0.5388 |
+ | 0.7155 | 6.52 | 3000 | 1.0156 | 0.5536 | 0.5509 |
+ | 0.6707 | 7.61 | 3500 | 1.0641 | 0.5637 | 0.5629 |
+ | 0.6081 | 8.7 | 4000 | 1.1244 | 0.5617 | 0.5614 |
+ | 0.5489 | 9.78 | 4500 | 1.1851 | 0.5475 | 0.5404 |
+ | 0.503 | 10.87 | 5000 | 1.3213 | 0.5586 | 0.5591 |
+ | 0.4602 | 11.96 | 5500 | 1.4164 | 0.5563 | 0.5520 |
+ | 0.4134 | 13.04 | 6000 | 1.4394 | 0.5482 | 0.5414 |
+ | 0.3697 | 14.13 | 6500 | 1.5564 | 0.5529 | 0.5501 |
+ | 0.3256 | 15.22 | 7000 | 1.5412 | 0.5478 | 0.5465 |
+ | 0.3028 | 16.3 | 7500 | 1.6169 | 0.5374 | 0.5353 |
+ | 0.2673 | 17.39 | 8000 | 1.7963 | 0.5436 | 0.5432 |
+ | 0.2501 | 18.48 | 8500 | 1.7266 | 0.5517 | 0.5499 |
+ | 0.2225 | 19.57 | 9000 | 1.9252 | 0.5532 | 0.5472 |
+ | 0.2021 | 20.65 | 9500 | 1.9622 | 0.5563 | 0.5542 |
+ | 0.1875 | 21.74 | 10000 | 1.9973 | 0.5471 | 0.5463 |
+ | 0.1681 | 22.83 | 10500 | 2.1299 | 0.5386 | 0.5307 |
+ | 0.1534 | 23.91 | 11000 | 2.0761 | 0.5463 | 0.5416 |
+ | 0.1427 | 25.0 | 11500 | 2.2814 | 0.5475 | 0.5471 |
+ | 0.1304 | 26.09 | 12000 | 2.4128 | 0.5544 | 0.5451 |
+ | 0.1126 | 27.17 | 12500 | 2.4318 | 0.5370 | 0.5327 |
+ | 0.1169 | 28.26 | 13000 | 2.5110 | 0.5451 | 0.5432 |
+ | 0.1044 | 29.35 | 13500 | 2.5768 | 0.5467 | 0.5432 |
+ | 0.1011 | 30.43 | 14000 | 2.6120 | 0.5486 | 0.5428 |
+ | 0.0982 | 31.52 | 14500 | 2.5795 | 0.5544 | 0.5541 |
+ | 0.0854 | 32.61 | 15000 | 2.7525 | 0.5482 | 0.5497 |
+ | 0.0853 | 33.7 | 15500 | 2.7322 | 0.5575 | 0.5557 |
+ | 0.0851 | 34.78 | 16000 | 2.7708 | 0.5417 | 0.5375 |
+ | 0.0726 | 35.87 | 16500 | 2.8363 | 0.5451 | 0.5417 |
+ | 0.0706 | 36.96 | 17000 | 2.8634 | 0.5505 | 0.5494 |
+ | 0.0653 | 38.04 | 17500 | 2.9653 | 0.5444 | 0.5434 |
+ | 0.0669 | 39.13 | 18000 | 3.0624 | 0.5432 | 0.5417 |
+ | 0.0585 | 40.22 | 18500 | 3.1669 | 0.5432 | 0.5392 |
+ | 0.059 | 41.3 | 19000 | 3.0692 | 0.5548 | 0.5544 |
+ | 0.048 | 42.39 | 19500 | 3.2014 | 0.5494 | 0.5482 |
+ | 0.0479 | 43.48 | 20000 | 3.2452 | 0.5428 | 0.5409 |
+ | 0.052 | 44.57 | 20500 | 3.2338 | 0.5478 | 0.5476 |
+ | 0.0477 | 45.65 | 21000 | 3.2556 | 0.5444 | 0.5424 |
+ | 0.0395 | 46.74 | 21500 | 3.2952 | 0.5444 | 0.5420 |
+ | 0.0477 | 47.83 | 22000 | 3.2726 | 0.5509 | 0.5500 |
+ | 0.0408 | 48.91 | 22500 | 3.2894 | 0.5471 | 0.5457 |
+ | 0.0407 | 50.0 | 23000 | 3.2943 | 0.5494 | 0.5481 |
 
 
  ### Framework versions
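The only hyperparameter that changes between the two README versions above is the random seed (111 → 222), so the diff doubles as a small seed-sensitivity check. A minimal sketch comparing the two runs' final evaluation metrics, with the values copied verbatim from the tables above:

```python
# Final evaluation metrics of the two fine-tuning runs in this commit,
# copied from the README diff. The runs differ only in the random seed.
runs = {
    111: {"loss": 3.3796, "accuracy": 0.5544, "f1": 0.5505},
    222: {"loss": 3.2943, "accuracy": 0.5494, "f1": 0.5481},
}

# Metric gap attributable to the seed change alone.
acc_gap = abs(runs[111]["accuracy"] - runs[222]["accuracy"])
f1_gap = abs(runs[111]["f1"] - runs[222]["f1"])
```

The gaps are about half an accuracy point and a quarter of an F1 point, a reminder that single-seed comparisons at this scale are noisy.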
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:1aa35bae3b8ec35c0e5441c298abf54e3896f23b8826a8ad173f88d49ff0dc6e
+ oid sha256:99dd1feff9f3278e92e79d51ce2bec9e17c1a0c55edb8f33955ed08156def8fe
  size 429199798
training_args.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5b700cfc7a31e499c5be56c8d837b8078e37b8d1feda47105a614ae168579428
+ oid sha256:ce575864678672dbca90a622a7b47abc7b2f300a3605bc9775bf91a4d1f96a4a
  size 4664
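The `pytorch_model.bin` and `training_args.bin` entries above are Git LFS pointer files rather than the binaries themselves: between the two commits only the `oid` (the SHA-256 of the tracked file) changes, while `size` stays identical. A minimal parser for the three-line pointer format, using the new `pytorch_model.bin` pointer from this commit as input:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its space-separated key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer for pytorch_model.bin, verbatim from the diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:99dd1feff9f3278e92e79d51ce2bec9e17c1a0c55edb8f33955ed08156def8fe
size 429199798"""

info = parse_lfs_pointer(pointer)
```

Because the pointer is tiny and content-addressed, the diff shown here is what Git actually stores; the ~429 MB weights live in LFS storage keyed by the `oid`.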